Training Effectiveness Readiness Levels (TERLs)

Roberto K. Champney, Kay M. Stanney, Jonathan Martin
DOI: 10.4018/978-1-4666-5888-2.ch643

Training expenditures are cyclical and follow general patterns in the economy, yet the use and adoption of training technologies has been reaching new heights, accounting for over $1.5 billion globally in 2012 (Ambient Insight, 2013). At the same time, domains that traditionally had not benefited from simulation training (e.g., medicine, ground military forces) have increased their use of simulation-based training, in large part as a reaction to emerging challenges, such as economic pressures and reductions in resources, that demand greater flexibility and efficiency from training technology (e.g., Bell, Kanar, & Kozlowski, 2008). Simulation technologies have presented themselves as a capable means of addressing the flexibility and experiential learning needs posed by such challenges (Bell & Kozlowski, 2007).

While the adoption of different types of training technologies continues to increase, a major challenge faced by any organization aiming to invest in a training program is the limited ability to quantify the benefits of such training (e.g., Government Accountability Office, 2013). Assessment of a training system is paramount given that the value added by such a system lies in its ability to produce learning that an individual can then apply in an operational environment. Without such assessment, the value, and the risks, of training remain unknown: just as a system may produce positive training results, it may unknowingly produce negative training, which could be catastrophic once a trainee returns to the operational environment. Unfortunately, assessing and quantifying the impact of any training is not trivial due to a variety of challenges, ranging from the technical (e.g., a variety of theories, limited skill sets in evaluation methodology) to the logistical (e.g., lack of stakeholder support, the cost and complexity of evaluations) (Phillips, 2010).

Often for these reasons, training assessment is relegated to an afterthought or conducted with the least resource-consuming methods (Champney et al., 2008; Carnevale & Shultz, 1990; Eseryel, 2002; Bassi & van Buren, 1999; Thompson, Koon, Woodwell, & Beauvais, 2002). In addition, given the nature of the training construct under evaluation (i.e., something that is learned, retained, and later applied in an operational environment; Pennington, Nicolich, & Rahm, 1995; Thorndike & Woodworth, 1901), it is possible to assess different elements of training effectiveness, such as students' reactions, learning, transferred behaviors, or the resulting impact on the organization (Kirkpatrick & Kirkpatrick, 2007), all of which may be labeled training effectiveness evaluation (TEE). In some instances, a system's technical or functional capabilities are used as proof of its training adequacy or effectiveness. As a result, systems are evaluated with a wide range of methods and levels of scrutiny, such that results are neither comparable across systems nor meaningful unless one understands the method and criteria used in the evaluation.

To address this challenge, a framework is needed that objectively defines the parameters governing the level of scrutiny and validity of different approaches to assessing training effectiveness. The Training Effectiveness Readiness Levels (TERL) scale seeks to do so by defining a progressive scale of training assessment scrutiny. A key characteristic of the TERL scale is its independence from technology development: it enables the determination of how well a system can meet training needs regardless of the system's technological maturity. A higher TERL rating implies that a system has been evaluated and demonstrated to satisfy a training need with a greater degree of scrutiny than one with a lower rating.
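As an illustration of how such a rating might be used in practice, the following sketch ranks candidate training systems by their assessed readiness level. The system names, the numeric range, and the example ratings are hypothetical assumptions for illustration only; they are not part of the published TERL definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingSystem:
    name: str
    terl: int  # hypothetical readiness rating; higher = more rigorous evidence

def most_ready(systems):
    """Return the system whose training effectiveness has been
    demonstrated with the highest level of scrutiny."""
    return max(systems, key=lambda s: s.terl)

candidates = [
    TrainingSystem("Simulator A", terl=3),  # e.g., evaluated via learner reactions only
    TrainingSystem("Simulator B", terl=6),  # e.g., demonstrated transfer of behavior
]
print(most_ready(candidates).name)  # Simulator B
```

The point of the sketch is that, because the scale is independent of technological maturity, a less technologically sophisticated system could still outrank a newer one if its training effectiveness has been demonstrated with greater scrutiny.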

Key Terms in this Chapter

Experiential Task Analysis: A task decomposition analysis during which an operational task is characterized in terms of the sensory, functional (i.e., environmental behavior), and psychological (i.e., contextual conditions) cues that someone executing the task would experience.

Human Factors Readiness Level (HFRL): A scale used to assess the maturity of technology with regards to its capability to support its intended user population.

Training Effectiveness Readiness Level (TERL): A scale used to assess the maturity of an evolving technology with regards to its effect on training outcomes.

Technology Readiness Level (TRL): A scale used to assess the maturity of an evolving technology throughout various stages of development.

Knowledge, Skills, and Attitudes (KSA): The specific elements of a task that an individual must know, be able to do, and appreciate in order to perform the task successfully.

Training Effectiveness Evaluation (TEE): The evaluation of learning regimens or tools to assess their ability to meet desired training objectives.

Human Factors (HF): A scientific discipline focused on understanding how humans interact with a system with the goal of optimizing overall system performance and human safety and comfort.
