How Do We Measure TPACK? Let Me Count the Ways

Matthew J. Koehler, Tae Seob Shin, Punya Mishra
DOI: 10.4018/978-1-60960-750-0.ch002

Abstract

In this chapter we reviewed a wide range of approaches to measuring Technological Pedagogical Content Knowledge (TPACK). We identified recent empirical studies that utilized TPACK assessments and determined whether they should be included in our analysis using a set of criteria. We then conducted a study-level analysis focusing on empirical studies that met our initial search criteria. In addition, we conducted a measurement-level analysis focusing on individual measures. Based on our measurement-level analysis, we categorized a total of 141 instruments into five types (i.e., self-report measures, open-ended questionnaires, performance assessments, interviews, and observations) and investigated how each measure addressed the issues of validity and reliability. We concluded our review by discussing the limitations and implications of our study.

Introduction

I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind. (William Thomson, Lord Kelvin, Popular Lectures, Vol. I, “Electrical Units of Measurement,” p. 73, 1883)

In this chapter we review a wide range of approaches to measuring Technological Pedagogical Content Knowledge (TPACK). In the first section we provide a brief overview of the TPACK framework and discuss the need for the current review. In the second section we identify recent empirical studies that utilized TPACK assessments, categorize their measurement approaches into five types, and investigate how the researchers addressed issues of validity and reliability. We end the chapter with a set of summary conclusions and a discussion of the limitations and implications of our review for future research on TPACK assessment.

Research on the role and impact of technology in education has often been criticized for being atheoretical in nature, driven more by the possibilities of the technology than by broader or deeper theoretical constructs and frameworks. Accordingly, the preponderance of work in educational technology has consisted of case studies and examples of best practices and implementations of new tools. Though such case studies can be informative, the lack of broader theoretical or explanatory conceptual frameworks prevents us from identifying and developing themes and constructs that would apply across cases and examples of practice. Over the past few years there has been considerable interest in the Technological Pedagogical Content Knowledge framework (originally TPCK, now known as TPACK, or Technology, Pedagogy, and Content Knowledge) for effective technology integration (American Association of Colleges for Teacher Education (AACTE), 2008; Koehler & Mishra, 2009; Mishra & Koehler, 2006; Niess, 2007). The TPACK framework connects technology to curriculum content and specific pedagogical approaches and describes how teachers’ understandings of these three knowledge bases can interact with one another to produce effective discipline-based teaching with educational technologies. The TPACK framework has had a significant impact on both research and practice in the area of educational technology.

Theoretical frameworks, such as TPACK, play an important role in guiding observation. Quoting Chalmers, a philosopher of science, Mishra and Koehler (2006) write:

… “Precise, clearly formulated theories are a prerequisite for precise observation statements.” (p. 27) In other words, observation statements cannot be made without using the language of some theory and in turn, these theories determine what is investigated. Thus, frameworks play an important role by guiding the kinds of questions we can ask, the nature of evidence that is to be collected, the methodologies that are appropriate for collecting this evidence, the strategies available for analyzing the data and finally interpretations we make from this analysis. (p. 1039)

The TPACK framework functions as a “conceptual lens” through which one views educational technology by drawing attention to specific aspects of the phenomena, highlighting relevant issues, and ignoring irrelevant ones. In this view, the framework functions as a classification scheme providing insight into the nature and relationships of the objects (and ideas and actions) under scrutiny.

Providing a framework, however, is not enough. Frameworks have to be examined in the real world, where it becomes critical to develop sensitive instruments and measures that are consistent with the theory and that measure what they set out to measure. Since the TPACK framework was first published in Teachers College Record (Mishra & Koehler, 2006), researchers have been developing a wide range of TPACK instruments to measure whether their TPACK-based interventions and professional development efforts have developed teachers’ TPACK (Graham et al., 2009; Guzey & Roehrig, 2009). The move toward measuring TPACK is notable as a shift from the conceptual to the empirical. As researchers began to focus on empirically testing the effects of their TPACK-based treatments, the question of how to accurately capture their subjects’ levels of TPACK understanding became more important.
