Using Receiver Operating Characteristic (ROC) Analysis to Evaluate Information-Based Decision-Making

DOI: 10.4018/978-1-5225-7362-3.ch057

Abstract

Business operators and stakeholders often need to make decisions such as choosing between A and B, or between yes and no, and these decisions are often made with a classification tool or a set of decision rules. Such decision tools usually include scoring systems, predictive models, and quantitative test modalities. In this chapter, the authors introduce receiver operating characteristic (ROC) curves and demonstrate, through the example of a bank deciding whether to grant loans to customers, how ROC curves can be used to evaluate such tools in information-based decision making. An extension to time-dependent ROC analysis is also introduced. The authors conclude the chapter by illustrating the application of ROC analysis in information-based decision making and outlining future trends for this topic.

Chapter Preview

Main Focus

We first define the accuracy parameters of binary classification tools and then extend the evaluation method to test modalities with continuous or discrete ordinal values. By applying these accuracy parameters and ROC analysis, business analysts can readily examine the expected downstream harms and benefits of positive and negative test results from these modalities, and directly link classification accuracy to important decisions (Cornell, Mulrow & Localio, 2008).
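
To make this concrete, the sketch below shows how an ROC curve, the AUC, and threshold-specific accuracy parameters might be computed for a loan-granting decision such as the bank example in this chapter. The data, score distributions, and the 0.55 decision threshold are all hypothetical, and scikit-learn is used only as one convenient implementation, not as the chapter's own method.

```python
# Minimal sketch of ROC analysis for a hypothetical loan-granting decision.
# All data below are synthetic; `score` stands in for any continuous credit
# score or predicted default probability produced by a bank's model.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(seed=0)

# 1 = customer eventually defaulted (case), 0 = customer repaid (control).
n_cases, n_controls = 200, 800
y_true = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

# Synthetic scores: defaulters tend to score higher than non-defaulters.
score = np.concatenate([
    rng.normal(loc=0.65, scale=0.15, size=n_cases),
    rng.normal(loc=0.45, scale=0.15, size=n_controls),
])

# ROC curve: sensitivity (TPR) versus false positive rate over all thresholds.
fpr, tpr, thresholds = roc_curve(y_true, score)
auc = roc_auc_score(y_true, score)
print(f"AUC (c-statistic): {auc:.3f}")

# Accuracy parameters at one operating threshold,
# e.g. flag an application as high risk when score >= 0.55.
threshold = 0.55
y_pred = (score >= threshold).astype(int)
tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
tn = np.sum((y_pred == 0) & (y_true == 0))
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # 1 - false positive rate
print(f"Sensitivity: {sensitivity:.3f}, Specificity: {specificity:.3f}")
```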

Key Terms in this Chapter

False Negative Rate: The probability that a diagnostic test incorrectly classifies a case as a control.

Specificity: The probability that a diagnostic test can correctly identify a non-case (control).

ROC Curve: A curve that plots a diagnostic test’s sensitivity versus its false positive rate across all possible threshold values for defining positivity.

Gold Standard: A standard that determines the true status being evaluated without error (also known as the reference standard).

AUC (C-Statistic): The area under an ROC curve, which summarizes the overall probability of correct classification.

False Positive Rate: The probability that a diagnostic test incorrectly classifies a control as a case.

Diagnostic Test: A quantitative test modality that is used to discriminate cases of interest from non-cases (controls).

Sensitivity: The probability that a diagnostic test can correctly identify a true case.
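
As a quick numerical illustration of the accuracy parameters defined above, the sketch below computes each rate from a single hypothetical 2x2 table; the counts are invented for demonstration and do not come from the chapter.

```python
# Illustrative calculation of the accuracy parameters defined above, using a
# hypothetical 2x2 table (counts are invented for demonstration only).
# True status comes from the gold standard; columns are the test result.
tp, fn = 90, 10    # cases:    test positive / test negative
fp, tn = 30, 170   # controls: test positive / test negative

sensitivity = tp / (tp + fn)          # P(test + | case)    = 0.90
false_negative_rate = fn / (tp + fn)  # P(test - | case)    = 0.10
specificity = tn / (tn + fp)          # P(test - | control) = 0.85
false_positive_rate = fp / (tn + fp)  # P(test + | control) = 0.15

print(sensitivity, false_negative_rate, specificity, false_positive_rate)
```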
