Activity: Evaluation of the IT Audit Tests

Copyright: © 2020 | Pages: 29
DOI: 10.4018/978-1-7998-4198-2.ch006

Abstract

IT audit testing addresses auditable unit risks of the IT audit area. Selecting appropriate techniques, methods, and tools for conducting audit testing can be challenging. Sufficient evidential matter collection requires alignment with audit fieldwork standards that affect the type and means of acquisition by IT audit team members. When test audit evidence is unobtainable, the assigned IT auditor should attempt to acquire appropriate and sufficient evidence by activating alternative procedures directly related to the engagement test plan. Upon completion of an IT audit test, the assigned IT auditor determines whether errors in an auditable unit population exceed the tolerable error rate. Chapter 6 presents how to conduct, measure, and document IT audit area tests.
Chapter Preview

Introduction

IT audit testing addresses auditable unit risks of the IT audit area. Selecting appropriate techniques, methods, and tools for conducting audit testing can be challenging (Davis, 2011). Various classification schemes are adaptable to organize the diverse testing techniques, methods, and tools for provisioning the most suitable IT audit evidence (Davis, 2011). Through taxonomies, an assigned IT auditor can gain an understanding of which technique, method, and tool may be most appropriate for a given auditable unit (Davis, 2011). The type of IT under examination is critical to testing technique, method, and tool selection (Davis, 2011).

The timing of data processing can be an IT differentiator (Davis, 2011). Two timing classifications usable for audit testing are batch and “real-time” configurations (Davis, 2011). A batch architecture collects records into groups before processing (Davis, 2011). A “real-time” architecture processes a record immediately upon submission (Davis, 2011). Furthermore, an architectural differentiation may occur by IT physical location (Davis, 2011). The significant categories under the physical location criterion are in-house and third-party IT classifications (Davis, 2011). An in-house architecture has computer hardware, software, and personnel physically located on the enterprise’s premises (Davis, 2011). A third-party architecture provides services where the primary IT hardware or software belongs to an independent enterprise (Davis, 2011).
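
To make the timing distinction concrete, the following Python sketch contrasts the two configurations. It is a minimal illustration only; the record structure, the function names, and the placeholder processing logic are assumptions for demonstration and do not come from the chapter.

    # Illustrative contrast between batch and "real-time" processing timing.
    # The record structure and process_record logic are hypothetical.

    def process_record(record):
        # Placeholder business logic applied to a single record.
        return {"id": record["id"], "status": "processed"}

    def batch_process(records, batch_size=100):
        # Batch architecture: records are collected into groups before processing.
        results = []
        for start in range(0, len(records), batch_size):
            group = records[start:start + batch_size]  # collect a group first
            results.extend(process_record(r) for r in group)
        return results

    def real_time_process(record):
        # "Real-time" architecture: the record is processed immediately upon submission.
        return process_record(record)

    # Usage: ten queued records processed in groups of five versus one record at a time.
    queued = [{"id": i} for i in range(10)]
    batch_results = batch_process(queued, batch_size=5)
    immediate_result = real_time_process({"id": 99})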

Organizational management can establish an outsourcing platform for IT processing through block time, remote batch, timesharing, or service bureau agreements (Davis, 2011). Block time usage is the rental of another enterprise’s IT processing time for use by the renting enterprise’s personnel (Davis, 2011). Remote batch usage is cluster-mode processing in which the enterprise maintains only minimal input and output hardware (Davis, 2011). Timesharing appears as if a particular firm is the sole IT processing user, though multiple customers are using the same resources (Davis, 2011). Lastly, a service bureau exists when the enterprise leases a wide range of IT processing capabilities (Davis, 2011).

IT audit procedural performance when assessing sampling unit characteristics is not dependent on the sampling selection approach (Davis, 2011). Once an assigned IT auditor decides to use a statistical audit sampling technique, confidence levels, deviation rates, and sampling risk evaluations are necessary (Davis, 2005, 2011). If the assigned IT auditor accurately forecasts IT auditable unit control effectiveness, then actual test results approximate expected test results (Davis, 2005, 2011). Justification for replacing a planned IT audit testing procedure requires full inscription and cross-referencing in the IT audit working papers (Davis, 2011). To conclude on statistical testing, the in-charge IT auditor must, and the other IT audit team members may, formulate and evaluate auditable unit hypotheses. Hypothesis testing methods determine whether attribute and variable sampling assumptions are correct (Davis, 2005, 2011). Critical to standard statistical IT audit testing are hypothesis assessments ensuring errors do not contaminate audit area conclusions (Davis, 2005, 2011). Nonetheless, statistical and nonstatistical IT audit sampling involves judgments in designing and performing sampling procedures, as well as in the subsequent evaluation of sampling results (Davis, 2005, 2011; Soltani, 2007).
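
As a rough sketch of how a tolerable deviation rate comparison might be computed, the Python example below evaluates an attribute sample at a stated confidence level. The function name, the 95% confidence default, and the use of a Clopper-Pearson (beta distribution) upper bound are assumptions chosen for illustration, not procedures prescribed by the chapter.

    # Minimal attribute-sampling evaluation sketch (illustrative assumptions only).
    # The upper deviation limit uses a Clopper-Pearson (beta distribution) bound.
    from scipy.stats import beta

    def evaluate_attribute_sample(sample_size, deviations_found,
                                  tolerable_rate, confidence=0.95):
        # Returns the achieved upper deviation limit and whether the control
        # can be relied upon at the stated confidence level.
        if deviations_found >= sample_size:
            upper_limit = 1.0
        else:
            # One-sided upper confidence bound on the population deviation rate.
            upper_limit = beta.ppf(confidence, deviations_found + 1,
                                   sample_size - deviations_found)
        rely_on_control = upper_limit <= tolerable_rate
        return upper_limit, rely_on_control

    # Example: 60 items sampled, 1 deviation found, 5% tolerable deviation rate.
    limit, rely = evaluate_attribute_sample(60, 1, 0.05)
    print(f"Upper deviation limit: {limit:.2%}  Rely on control: {rely}")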

Key Terms in this Chapter

Oracle Program: Generates expected outcomes that permit comparison with the actual outcomes of the software under testing.

Audit Trail: Is an event inscription that permits forward tracing (from the source to digital encoding) as well as backward tracing (from digital encoding to the source).

Event: Is the occurrence of a business activity.

Segregation-of-Functions: Reflects a control used to reduce opportunities for perpetration and concealment of errors, mistakes, omissions, irregularities, and illegal acts by separating functional responsibilities.

Decision Logic Table: Conveys a tabular rendering of all contingencies provided in a procedure description that receive refinement into corresponding tasks.
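
A decision logic table can be rendered in code as a mapping from condition combinations to tasks. The conditions and tasks in the Python sketch below are hypothetical examples, not drawn from the chapter.

    # Hypothetical decision logic table: each rule maps a combination of
    # condition outcomes to the task implied by the procedure description.
    DECISION_TABLE = {
        # (access_approved, segregation_conflict) -> task
        (True, False): "grant system access",
        (True, True): "escalate for compensating-control review",
        (False, False): "reject request",
        (False, True): "reject request and notify security",
    }

    def resolve(access_approved, segregation_conflict):
        # Look up the task for a given combination of conditions.
        return DECISION_TABLE[(access_approved, segregation_conflict)]

    print(resolve(True, True))  # escalate for compensating-control review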

Static Analysis: Is the evaluation process performed without executing the program code under testing.

Dynamic Analysis: Is the evaluation process performed by executing the program code under testing.
