An Illustration of the Actual Steps in Development and Validation of a Multi-Item Scale for Quantitative Research: From Theory to Practice

Dail Fields (Regent University, USA)
DOI: 10.4018/978-1-7998-7665-6.ch004

Abstract

This chapter describes in detail the process used to develop and validate a scale that measures servant leadership. The steps covered include construct identification from previous studies, review of previously proposed and developed measures, item selection, survey development, data collection, scale identification, and evaluation of convergent, discriminant, and predictive validity. The chapter provides a hands-on example of the steps required for scale measure development and assessment and includes a description of the mechanics involved in completing each step of this process.

Background

Meyer and Allen (1997) presented requirements for the psychometric evaluation of measures used in quantitative research in the social and behavioral sciences. The process generally consists of two stages. The first is identification of a pool of possible descriptors of the construct of interest and analysis of the applicability of these descriptors based on the views of a sample of subjects familiar with the construct. The second stage consists of statistically evaluating the internal consistency of the new measure as well as its construct validity. The construct validity of a measure, which provides evidence that the derived scale in fact measures what it purports to measure, can be assessed by examining its correlations with other constructs and comparing these correlations with what is expected theoretically (Kerlinger & Lee, 2000). Specifically, construct validity assesses the extent to which a focal measure is significantly related to another validated measure of a very similar construct (convergent validity), is not related to measures of distinct constructs (discriminant validity), and is positively related to an outcome with which the construct is known to be associated (predictive validity). For example, a newly derived measure of job satisfaction should be significantly positively associated with other previously validated measures of job satisfaction, should be negatively associated with a measure of resentment toward an employer, and should be predictive of employee organizational commitment (Scarpello & Vandenberg, 1992).
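The three validity checks described above can be sketched as correlation comparisons. The following is a minimal illustration using simulated data (the variable names and effect sizes are hypothetical, chosen only to mimic the job-satisfaction example): a new scale should correlate strongly with an established measure of the same construct, weakly with an unrelated construct, and moderately with a theoretically linked outcome.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Simulated latent construct (e.g., job satisfaction).
satisfaction = rng.normal(size=n)

# New scale: driven largely by the latent construct plus measurement noise.
new_scale = satisfaction + rng.normal(scale=0.5, size=n)
# Previously validated measure of the same construct (convergent check).
established = satisfaction + rng.normal(scale=0.5, size=n)
# Theoretically unrelated construct (discriminant check).
unrelated = rng.normal(size=n)
# Outcome the construct is known to predict (predictive check).
commitment = 0.6 * satisfaction + rng.normal(scale=0.8, size=n)

def r(x, y):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

print(f"convergent   r = {r(new_scale, established):.2f}")  # should be high
print(f"discriminant r = {r(new_scale, unrelated):.2f}")    # should be near zero
print(f"predictive   r = {r(new_scale, commitment):.2f}")   # should be positive
```

In a real validation study, the score vectors would come from survey responses rather than simulation, and the observed correlations would be compared against the pattern predicted by theory.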

In addition, confirmatory factor analysis is appropriate for investigating the construct validity of multi-item scales because it allows direct examination of the degree to which specific items jointly are associated with hypothesized factors (i.e., convergent validity) and display minimal cross-loadings on other factors (i.e., discriminant validity). For example, in a four-dimensional measure, if the dimensions do not have discriminant validity, the fit of a single-factor model will be no worse than the fit of a four-factor model (Kraimer, Seibert, & Liden, 1999).
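The model-comparison logic can be illustrated with a small simulation. This sketch uses exploratory factor analysis (scikit-learn's `FactorAnalysis`) as a stand-in for a full confirmatory analysis, which in practice would be run in dedicated SEM software; the item counts and loadings are hypothetical. When four distinct dimensions truly underlie the items, a four-factor model fits the data markedly better than a single-factor model.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300

# Four distinct latent dimensions, three items loading on each (12 items).
factors = rng.normal(size=(n, 4))
loadings = np.zeros((4, 12))
for d in range(4):
    loadings[d, d * 3:(d + 1) * 3] = 0.8  # each item loads on one dimension only
X = factors @ loadings + rng.normal(scale=0.5, size=(n, 12))

# Compare fit: if the dimensions are distinct, the four-factor model should
# reproduce the item covariances much better than a single-factor model.
ll1 = FactorAnalysis(n_components=1, random_state=0).fit(X).score(X)
ll4 = FactorAnalysis(n_components=4, random_state=0).fit(X).score(X)
print(f"avg log-likelihood, 1 factor:  {ll1:.2f}")
print(f"avg log-likelihood, 4 factors: {ll4:.2f}")
```

If the four dimensions lacked discriminant validity (i.e., the items all reflected one underlying factor), the two log-likelihoods would be close, mirroring the test described by Kraimer, Seibert, and Liden (1999).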
