Part B: Audit Execution, Evidence, and Reporting
Audit Project Management and Agile Methodologies
The execution of an IS audit is governed by Performance Standards (the 1200 series) and begins with project management. Practitioners must translate the audit objectives into a detailed audit program: a step-by-step set of procedural instructions required to complete the fieldwork. Project management includes planning tasks within timelines, setting communication protocols, managing budgets, and supervising staff to meet professional expectations.[1]
To enhance responsiveness, many audit departments are adopting Agile auditing frameworks derived from software development paradigms. Agile auditing prioritises individuals and interactions over rigid processes, and working deliverables over exhaustive preliminary documentation. By organising work into short, iterative sprint cycles and continually reprioritising the audit backlog, Agile auditing enables the function to respond quickly to emerging enterprise risks. This methodology facilitates direct customer collaboration and near-real-time assurance, significantly shortening the traditional end-to-end planning cycle while maintaining auditor objectivity.
Testing Methodologies and Audit Sampling
The fieldwork phase is dominated by two distinct but intrinsically linked testing methodologies. Compliance testing is designed to gather evidence regarding the operating effectiveness of the enterprise's control procedures. It answers the operational question of whether a control functions as intended. Substantive testing, conversely, bypasses the control environment to substantiate the ultimate integrity, completeness, and accuracy of the actual data, transactions, or account balances. The relationship is inverse: if compliance testing reveals robust controls, the auditor may drastically reduce the volume of substantive testing; if controls fail, substantive testing must be maximised.
Because testing the entire data universe is rarely feasible, auditors rely heavily on sampling techniques to infer a population's characteristics from a subset. These techniques are divided into statistical sampling, which relies on mathematical probabilities, and nonstatistical (judgment) sampling, which relies purely on the auditor's professional intuition.
Attribute Sampling
- Functional Description: A fixed-sample-size methodology used to estimate the rate of occurrence of a specific, binary quality (attribute) within a population.
- Primary Application in IS Auditing: Utilised predominantly during compliance testing to determine how frequently a specific control fails (e.g., estimating the percentage of change requests lacking an approval signature).

Stop-or-Go Sampling
- Functional Description: A technique that permits an audit test to be halted at the earliest point at which the evidence supports a conclusion, typically when few or no errors are found.
- Primary Application in IS Auditing: Deployed to prevent excessive sampling when the auditor strongly anticipates a very low error rate within a robust control environment.

Discovery Sampling
- Functional Description: A specialised approach designed mathematically to uncover at least one instance of an anomaly or deviation, given an assumed rate of occurrence.
- Primary Application in IS Auditing: Executed when the overarching audit objective is the detection of severe irregularities, circumvention of laws, or malicious internal fraud.

Variable Sampling
- Functional Description: A quantitative statistical model used to estimate the total physical weight, monetary value, or numerical magnitude of a population.
- Primary Application in IS Auditing: Applied primarily during substantive testing to detect material misstatements across large datasets, employing techniques such as stratified or unstratified mean-per-unit estimation.
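Two of these calculations are simple enough to sketch. Assuming deviations occur independently at a constant rate, the minimum discovery sample size follows from requiring 1 − (1 − p)^n ≥ confidence, and an unstratified mean-per-unit estimate projects the sample mean across the whole population. The function names and figures below are illustrative, not taken from any standard:

```python
import math

def discovery_sample_size(critical_rate: float, confidence: float) -> int:
    """Smallest n such that P(at least one deviation in the sample) >= confidence,
    assuming deviations occur independently at critical_rate."""
    # 1 - (1 - p)^n >= confidence  =>  n >= log(1 - confidence) / log(1 - p)
    return math.ceil(math.log(1 - confidence) / math.log(1 - critical_rate))

def mean_per_unit_estimate(sample_values: list[float], population_size: int) -> float:
    """Unstratified mean-per-unit projection of a population total."""
    return population_size * (sum(sample_values) / len(sample_values))

# If fraud taints at least 1% of records, how many records must be sampled
# to find at least one instance with 95% confidence?
print(discovery_sample_size(0.01, 0.95))  # 299

# Project the total value of 10,000 invoices from a five-invoice sample.
print(mean_per_unit_estimate([120.0, 80.0, 100.0, 110.0, 90.0], 10_000))  # 1000000.0
```

The discovery-sampling result illustrates why that technique is reserved for rare, high-impact deviations: even a 1% assumed rate demands a sample of roughly 300 items.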
All sampling introduces sampling risk: the inherent danger that the auditor's conclusion drawn from the sample will diverge from the conclusion that would be reached if the entire population were tested. This manifests as the risk of incorrect acceptance (assessing a weak control as effective) and the risk of incorrect rejection (assessing a strong control as ineffective).
Evidence Collection and Analytics Integration
According to Performance Standard 1205, auditors must obtain sufficient and appropriate evidence to draw reasonable conclusions. The reliability of evidence depends heavily on its source. Evidence gathered directly by the auditor through independent observation or re-performance is considerably more reliable than internal documentation or oral representations provided by the auditee.[1] When auditors lack the technical proficiency to gather specific evidence, Standard 1206 permits using the work of other experts, provided the auditor rigorously assesses the external expert's independence, qualifications, and quality-control processes before reliance.
To process massive volumes of evidence, auditors leverage Computer-Assisted Audit Techniques (CAATs), such as generalised audit software, debugging tools, and application tracing. These tools facilitate continuous auditing, allowing the auditor to monitor system reliability in real-time or near-real-time environments. Techniques include the Systems Control Audit Review File (SCARF), embedded audit modules, audit hooks, and the Integrated Test Facility (ITF).
The integration of Artificial Intelligence (AI) and Machine Learning (ML) represents a paradigm shift in audit execution. AI algorithms are deployed for complex document classification, text summarisation, sentiment analysis, and pattern recognition across unstructured datasets. However, the use of AI introduces profound new risks. Algorithms are often proprietary "black boxes" lacking transparent documentation. Furthermore, AI models depend entirely on the correctness and neutrality of their training data; poor or biased training data will produce flawed audit conclusions. Consequently, ITAF requires that AI-generated results always be substantiated by human-led testing and rigorous professional scepticism, to ensure the algorithm is actually answering the specific question the auditor is asking.
Reporting Standards and Remediation Follow-up
The 1400 series of Reporting Standards governs the culmination of the audit process. Following the completion of fieldwork, the auditor conducts an exit interview to ensure the facts presented are materially accurate and to negotiate realistic, cost-effective remediation recommendations with auditee management. The final audit report must formally present the results, provide statements of assurance, identify required corrective actions, and serve as a documented reference for future follow-up engagements.
Auditing requires persistent monitoring of management's remediation progress. Standard 1402 mandates that auditors establish procedures to verify that agreed-upon actions are implemented in a timely and effective manner. In instances where management determines that the cost or complexity of remediation is too high and formally accepts the risk of not correcting a reported deficiency, the auditor is required to evaluate this decision. If the accepted risk exceeds the enterprise's overarching risk appetite, the practitioner is professionally obligated to escalate the matter promptly to the board of directors or the audit committee.