Introduction
General Mental Ability (GMA) has been utilized for decades within psychology and represents one of the strongest predictors of future performance in both academic and employment contexts (Park, Lubinski, & Benbow, 2007; Schmidt & Hunter, 1998). GMA, or g, best represents the capacity of a person to process, learn, and apply information and is typically composed of at least two supporting factors: Fluid Intelligence (Gf) and Crystallized Intelligence (Gc) (Horn, 1968; Johnson & Bouchard Jr., 2005). While Gf represents the mental abilities pertaining to the perception of relationships, reasoning, and abstraction, Gc comprises the accumulated knowledge and experiences of a person's life (Horn, 1968; Johnson & Bouchard Jr., 2005; McGrew, 2009).
While highly predictive of later performance in multiple contexts, GMA has demonstrated different score patterns by gender and ethnicity (McKay & Doverspike, 2001; Neisser et al., 1996; Roth, Bevier, Bobko, Switzer, & Tyler, 2001). Approaches to explaining this difference, such as focusing on only one factor of GMA or utilizing additional predictors, have been largely unsuccessful in accounting for subgroup differences in GMA scores (Sager, Peterson, Oppler, Rosse, & Walker, 1997; Schmidt & Hunter, 1998; Waters, 2007). This problem of subgroup differences has persisted long enough to lead some researchers to doubt equality across subgroups (Rushton & Jensen, 2005), but new assessment possibilities with computer-mediated simulations (CMS) may allow for new approaches to this longstanding problem.
Although CMS are still a relatively new tool, they have seen successful use in multiple areas. CMS have been utilized for training and assessment in both academic and industry settings (Jong, 1991; Ortiz, 1994) and have served as a measurement tool for concepts related to Gf (Kroner, Plass, & Leutner, 2005; Richardson, Powers, & Bousquet, 2011). Two notable differences from traditional paper-based assessments stand out: CMS require a period of acclimation as participants learn to interact with the simulated environment (Mennecke, Hassall, & Triplet, 2008), and controls should exist when using CMS as an assessment tool, as a participant's prior experience and exposure to similar simulations have been found to affect performance (Hughes et al., 2013). Despite this need to control for prior experience, CMS offer a new perspective on assessment because of their unique ability to serve not only as an assessment of participant ability, but also as a demonstration of participant skills and problem-solving processes, even during the initial acclimation period (Kroner et al., 2005; Mennecke et al., 2008; Motowidlo, Dunnette, & Carter, 1990; Richardson et al., 2011).