Implementing a Measurement Framework to Assess and Evaluate Student Readiness for Online Learning and Growth

Shannon Sampson (University of Kentucky, USA), Kelly D. Bradley (University of Kentucky, USA), Heather Arrowsmith (University of Kentucky, USA) and Richard Mensah (Kentucky Center for Education and Workforce Statistics, USA)
DOI: 10.4018/978-1-5225-2953-8.ch017


This chapter was developed out of a study on the effectiveness of an online technology mini-course to prepare students for success in online classes. The work focuses on the methodology used to measure student readiness to engage with technology, and to measure growth in student technical knowledge as a result of the mini-course. The researchers applied Rasch analysis for both of these purposes, creating measurement scales from brief surveys. This chapter describes the results of the study, providing a step-by-step description of how to develop a similar scale for use in the classroom, and how to interpret results of Rasch analysis to gain valuable insight into student understanding of technology.
Chapter Preview


Over the past two decades, the evolution of educational technology and its impact on K-12 education has been monumental. In the age of accountability, being able to measure such impacts is not only expected, it is necessary. The call for empirical, data-driven evidence has created classrooms conditioned to measurement. Standardized testing results continue to be a driving force in our K-12 classrooms, enforced by federal legislation and pressures from all levels of administration and the community at large. Today’s teachers are in the habit of collecting data and using this information to make decisions. Pre-assessments, formative feedback, measures of learning growth, classroom assessments, and standardized tests are all part of the normal classroom routine. Parents and even the larger community have become accustomed to reports and assessment results indicating how students are performing. Classroom teachers use data to inform their instruction, make placement decisions, and critique their own delivery. This culture has been documented for years (Erpenbach & Forte, 2007; Ballard & Bates, 2008).

Key Terms in this Chapter

Classical Test Theory (CTT): Psychometric (test) theory used to predict outcomes such as the ability of test-takers. Under CTT, examinee characteristics and test characteristics can each be interpreted only in the context of the other. It focuses on a person’s score on a whole test, and items are not weighted by difficulty when estimating knowledge or ability.

Likert-Type Scale: A psychometric scale used to score items using a range of options. A typical format of response options is Strongly Disagree, Disagree, Agree, Strongly Agree.

Dichotomous: Divided into two parts. In the context of this chapter, it refers to items that are right or wrong, yes or no, true or false, etc.

Unidimensional: Relating to a single dimension. The Rasch model analyzes data under the assumption that all items relate to a single (unidimensional) theoretical attribute or trait.

Rasch Measurement Theory: Psychometric (test) models used to assess the quality of tests and questionnaires. It focuses on the pattern of item responses: success on a difficult item implies a high probability of success on easier items. Person ability and item difficulty are estimated independently of each other.

Polytomous: Divided into more than two parts. In the context of this chapter, it refers to partial credit items and items with response options that use a rating scale.

Traditional Test Reliability: The quality of a test to produce consistent, repeatable scores. Increased measurement error negatively affects test reliability.

Logit: Log-odds unit used to express Rasch linear measures.

Measures: Rasch estimates of item difficulty and person ability.

Construct: The theoretical entity or attribute being measured. It is assumed that people have different levels of the attribute, and items vary in the level of the attribute required to be answered successfully. Thus, items are ordered in a hierarchy of difficulty along the theoretical continuum.
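The relationship among the terms defined above (logits, measures, and the dichotomous Rasch model) can be illustrated with a short sketch. In the dichotomous Rasch model, the log-odds of a correct response equal person ability minus item difficulty, both expressed in logits. The function name below is a hypothetical illustration, not code from the chapter's study:

```python
import math

def rasch_probability(person_ability, item_difficulty):
    """Probability of a correct response under the dichotomous Rasch model.

    Both arguments are Rasch measures in logits; their difference is the
    log-odds of success: log-odds = ability - difficulty.
    """
    log_odds = person_ability - item_difficulty
    return 1.0 / (1.0 + math.exp(-log_odds))

# When ability equals item difficulty, the odds of success are even.
print(rasch_probability(1.0, 1.0))               # 0.5
# A person 1 logit above an item's difficulty succeeds about 73% of the time.
print(round(rasch_probability(2.0, 1.0), 2))     # 0.73
```

This is why logits are useful as a unit: a fixed difference in logits corresponds to the same change in the odds of success anywhere along the scale, which is what gives Rasch measures their linear, interval-level interpretation.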
