How to Check Measures for Adequacy

Patricia Cerrito (University of Louisville, USA)
DOI: 10.4018/978-1-60566-752-2.ch011

Abstract

Perhaps the biggest problem when checking measures for adequacy, beyond overlooking invalid model assumptions, is the tendency to examine a model only for reliability and then to generalize a reliable result into an assumption of validity. Without some test of validity, the results could be bogus because the model does not measure what it is supposed to measure. The question is, just how should a model be validated?

Introduction

Perhaps the biggest problem when checking measures for adequacy, beyond overlooking invalid model assumptions, is the tendency to examine a model only for reliability and then to generalize a reliable result into an assumption of validity. Without some test of validity, the results could be bogus because the model does not measure what it is supposed to measure. The question is, just how should a model be validated?

Reliability is much easier to show. Simply put, similar data should yield similar results. Therefore, we can compare results across hospitals and patients from one year to the next to see whether the results by hospital change, or whether they remain relatively the same. If the results do not really change from year to year, the measure is assumed to be reliable. Validity, however, is a much more difficult concept to demonstrate; so much so that validity is often ignored in favor of just demonstrating reliability. As Donabedian (1980) wrote:

The concept of validity is itself made up of many parts; and there is no precise way of saying what belongs to it, or what belongs more appropriately under another heading…. I would say that the question of validity covers two large domains. The first has to do with the accuracy of the data and the precision of the measures that are constructed with these data. The second has to do with the justifiability of the inferences that are drawn from the data and the measurements.

A search of Medline using the keywords “risk adjustment” and “validation” returned a total of 3 articles. It is not yet a concept that is given much consideration, which is why we have so many different measures of patient severity that can give very different results. However, as providers are more likely to be rewarded or penalized based upon the results of these measures, the measures themselves will become more heavily scrutinized.

Top

Background

We first look at the three papers that were found using the keywords “validation” and “risk adjustment”. A recent paper compared the results of two models with a third, internally developed model to define validation (Kunadian et al., 2008). Since the models are validated by comparing their values to each other, this demonstrates reliability rather than validity. An earlier paper defined validation as obtaining similar results on new datasets; again, this is reliability rather than validation (Moscucci et al., 1999). A third paper relied on the fact that a logistic model predicted accurately, but as discussed in detail in Chapter 3, accuracy does not imply that the model is adequate or valid (Mandeep et al., 2003).

True validation takes place when the measurement actually measures what it is supposed to measure. In other words, a patient severity index must actually measure the level of severity of the patient’s condition. A more formal definition is provided at http://www.socialresearchmethods.net/tutorial/Colosi/lcolosi2.htm: validity is the best available approximation to the truth or falsity of a given inference, proposition, or conclusion. Consideration of validity attempts to answer the question: is the severity index true? There are four major types of validity to consider:

1. Convergent validity examines whether there is a relationship between the program and the observed outcome. In our example, is there a connection between the patient severity index and the patient’s level of sickness?

2. Internal validity asks whether there is a relationship between the measure and the outcome, and whether that relationship is causal. For example, did the patient’s severity level cause the outcomes in mortality, length of stay, and costs?

3. Construct validity asks whether there is a relationship between how the concepts are operationalized in the study and the actual causal relationship. In our example, did the measure of severity reflect the construct of severity, and did the measured outcome reflect the construct of the outcome? Overall, we are trying to generalize our conceptualized measure and outcomes to broader constructs of the same concepts.

4. External validity refers to our ability to generalize the results of our study to other settings. In our example, could we generalize results defined using certain providers to all providers?
