Assessing the Dimension of Magnitude in Computer Self-efficacy: An Empirical Comparison of Task-Based and Levels of Assistance-Based Methodologies

James P. Downey, R. Kelly Rainer
Copyright: © 2011 |Pages: 21
DOI: 10.4018/978-1-60960-577-3.ch016

Abstract

Computer Self-efficacy (CSE) has been used in many studies as a predictor of individual competence or performance, usage behavior, and a variety of attitudes. Although CSE has been effective in explaining a variety of human-computing interactions, there have been a number of studies in which the relationship was weak or nonexistent. One reason for such findings concerns how CSE is operationalized in extant instruments. Many (if not most) leading cognitive theorists (Bandura, 1997; Gist & Mitchell, 1992; Marakas et al., 1998) rather emphatically state that actual tasks must be used to most accurately determine an individual’s perception of ability (i.e., self-efficacy) for some task or domain. They suggest that using tasks of incrementally increasing difficulty within the intended domain most accurately captures an individual’s self-efficacy and leads to stronger relationships with outcomes such as competence or performance. Yet one of the most widely used measures of self-efficacy employs levels of assistance (the GCSE of Compeau & Higgins, 1995a), not specific tasks. This study examines which methodology provides a stronger relationship with competence and performance. Using a sample of 610, self-efficacy (measured with both methodologies) and competence or performance were assessed for six different computer application domains. Results indicate that for domains in which individuals had lower ability, actual tasks were superior; for domains of higher ability, however, levels of assistance yielded stronger relationships. This study clarifies the relationship between self-efficacy and performance as an individual moves from low to high ability in a computing domain as a result of training or experience. Implications and suggestions for further study are included.

Introduction

Computer Self-efficacy (CSE) has long been a construct of interest to the IT community of practitioners and researchers. CSE remains a strong predictor of computer performance and of a variety of computer-related activities, attitudes, and beliefs. Computer Self-efficacy is defined as an individual judgment of one’s capability to use a computer (Compeau & Higgins, 1995a). Self-efficacy serves as a motivator of action and is influenced by experience (Igbaria & Iivari, 1995) and training (Yi & Davis, 2003). In prior studies, Computer Self-efficacy has shown significant relationships with enhanced attitudes toward computing (Harrison & Rainer, 1992), higher performance (Compeau & Higgins, 1995b), increased computer usage (Compeau, Higgins, & Huff, 1999), and improved computer skill development (Martocchio & Webster, 1992).

Despite these successes, using CSE to explain behavior and predict performance has not always yielded consistent results, blurring its value in both IT research and organizational practice. Alongside the successes noted by researchers, other studies have reported results that were weak or puzzling (Chau, 2001; Hasan, 2006; Henry & Martinko, 1997; Martocchio & Webster, 1992; Yi & Im, 2004). These inconsistencies, covered in more detail in the next section, suggest that the relationship between self-efficacy and related constructs is not always as predictable or as strong as expected. While weak or insignificant results may be due to problematic theory or poor research design, instrumentation issues can also affect results (Bandura, 1997; Gist, 1987; Marakas, Yi, & Johnson, 1998). An instrument that measures Computer Self-efficacy should accurately capture a respondent’s perception of his or her ability in the given computing domain. To the extent that a self-efficacy (SE) instrument does not represent an individual’s true perception of ability in the domain, subsequent relationships with key outcomes or antecedents may be insignificant or distorted.

This study examines one important instrumentation issue that can affect results. Extant CSE instruments use two common methodologies to capture an individual’s perception of ability in a computing domain: one based on actual domain tasks and the other based on levels of assistance in completing tasks. Using a sample of 610, this study compares these two methodologies to determine whether one is statistically superior in its relationship with common outcomes of self-efficacy, including computing competence (in four different domains) and computer performance (in two domains). We suggest that the methodology used to measure an individual’s perception of ability affects how accurately the domain is captured, and therefore influences its relationship with key outcomes.
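The chapter does not specify its statistical procedure here, but one standard way to ask whether a task-based or a levels-of-assistance CSE score correlates more strongly with the same outcome, measured on the same respondents, is Steiger's (1980) z-test for two dependent correlations sharing one variable. The sketch below is a hypothetical illustration under that assumption; the function name and the example correlation values are made up, and only the sample size of 610 comes from the study.

```python
# Hypothetical sketch: Steiger's z-test for two dependent correlations
# that share one variable (e.g., competence vs. two CSE measures).
# This is NOT necessarily the test the authors used.
import math

def steiger_z(r12: float, r13: float, r23: float, n: int) -> float:
    """Compare corr(X1, X2) against corr(X1, X3) on the same n subjects.

    Here X1 might be competence, X2 a task-based CSE score, and X3 a
    levels-of-assistance CSE score; r23 is the correlation between the
    two CSE measures themselves. Returns a z statistic.
    """
    z12, z13 = math.atanh(r12), math.atanh(r13)   # Fisher r-to-z transforms
    rm2 = ((r12 + r13) / 2) ** 2                  # squared mean correlation
    f = min(1.0, (1 - r23) / (2 * (1 - rm2)))     # Steiger's pooling factor
    h = (1 - f * rm2) / (1 - rm2)
    return (z12 - z13) * math.sqrt((n - 3) / (2 * (1 - r23) * h))

# Illustrative (made-up) correlations with the study's sample size of 610:
z = steiger_z(r12=0.50, r13=0.30, r23=0.40, n=610)
# Two-tailed p-value from the standard normal CDF:
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A significant z would indicate that one measurement methodology relates more strongly to the outcome than the other, which is the form of comparison the study reports separately for lower-ability and higher-ability domains.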

This is an important conceptual and empirical consideration, and we know of no study that examines this issue. We believe that some of the problems noted in the relationship between self-efficacy and its related constructs may be due to the inability of some instruments to sufficiently capture the intended computing domain. Accurately determining an individual’s “true” perception of ability for a domain will enhance our conceptual knowledge of CSE and provide a more precise description of self-efficacy relationships with related constructs. It appears that most researchers and practitioners use extant CSE instruments without an appreciation of the consequences of their instrumentation methodology. The goal of this study is to empirically compare the two CSE methodologies, task-based and levels of assistance-based, to identify significant differences between them and to determine the circumstances under which each is more appropriate.
