Accurately Determining Self-Efficacy for Computer Application Domains: An Empirical Comparison of Two Methodologies

James P. Downey, R. Kelly Rainer Jr.
Copyright: © 2009 | Pages: 20
DOI: 10.4018/joeuc.2009062602

Abstract

Computer self-efficacy (CSE) has been used in many studies as a predictor of individual competence or performance, usage behavior, and a variety of attitudes. Although CSE has been effective in explaining a variety of human computing interactions, there have been a number of studies in which the relationship was weak or nonexistent. One reason for such findings concerns how CSE is operationalized. Many (if not most) leading cognitive theorists (Bandura, 1997; Gist & Mitchell, 1992; Marakas et al., 1998) emphatically state that actual tasks must be used to most accurately determine an individual’s perception of ability (i.e., self-efficacy) for some task or domain. They suggest that using tasks of incremental difficulty within the intended domain most accurately represents an individual’s self-efficacy and leads to stronger relationships with outcomes such as competence or performance. Yet one of the most widely used measures of self-efficacy relies on levels of assistance (the GCSE of Compeau & Higgins, 1995a), not tasks. This study examines which methodology provides the stronger relationship with competence and performance. Using a sample of 610 respondents, self-efficacy (measured with both methodologies) and competence or performance were assessed for six different application domains. Results indicate that for domains in which individuals had lower ability, actual tasks were superior. For domains of higher ability, however, levels of assistance yielded stronger relationships. This study clarifies the relationship between self-efficacy and performance as an individual moves from low to high ability as a result of training or experience. Implications and suggestions for further study are included.

Introduction

Computer self-efficacy (CSE) has long been a construct of interest to the IT community of practitioners and researchers. CSE remains a strong predictor of computer performance and of a variety of computer-related activities, attitudes, and beliefs. Computer self-efficacy is defined as an individual’s judgment of his or her capability to use a computer (Compeau & Higgins, 1995a). Self-efficacy serves as a motivator of action and is influenced by experience (Igbaria & Iivari, 1995) and training (Yi & Davis, 2003). In studies, computer self-efficacy has shown significant relationships with enhanced attitudes toward computing (Harrison & Rainer, 1992), higher performance (Compeau & Higgins, 1995b), increased computer usage (Compeau, Higgins, & Huff, 1999), and improved computer skills development (Martocchio & Webster, 1992).

Despite these results, using CSE to help explain behavior and predict performance has not always yielded consistent findings, blurring its value in IT research and practice. Alongside the successes noted by researchers, there have been other studies in which the results were either weak or baffling (Chau, 2001; Hasan, 2006; Henry & Martinko, 1997; Martocchio & Webster, 1992; Yi & Im, 2004). These inconsistencies, covered in more detail in the next section, suggest that the relationship between self-efficacy and related constructs is not always as predictable or as strong as expected. While weak or insignificant results may be due to problematic theory or poor research design, instrumentation issues can also affect results (Bandura, 1997; Gist, 1987; Marakas, Yi, & Johnson, 1998). A self-efficacy instrument should accurately capture a respondent’s perception of his or her ability in the given computing domain. To the extent that a self-efficacy (SE) instrument does not represent an individual’s true perception of ability in the domain, subsequent relationships with key outcomes or antecedents may be insignificant or distorted.

This study examines one important instrumentation issue that can affect results. There are two common methodologies that CSE instruments use to capture an individual’s perception of ability for a computing domain: one based on actual domain tasks and the other based on levels of assistance in completing tasks. Using a sample of 610 respondents, this study compares these two methodologies to determine whether one is statistically superior to the other in its relationship with common outcomes of self-efficacy, including computing competence (in four different domains) and computer performance (two domains). We suggest that the methodology used to measure an individual’s perception of ability affects how accurately the domain is captured, and therefore influences its relationship with key outcomes.
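
The article’s own statistical procedure is not described in this preview. Purely as an illustration of how such a comparison could be carried out, the sketch below tests whether two dependent correlations that share an outcome variable (task-based SE with an outcome versus assistance-based SE with the same outcome) differ significantly, using Williams’ t for correlations measured on the same sample. The function name, variable names, and correlation values are hypothetical and do not come from the article.

```python
# Illustrative only: compare the strength of two dependent correlations that
# share an outcome variable (e.g., task-based SE vs. competence and
# assistance-based SE vs. competence). Not the article's analysis; all
# correlation values below are hypothetical.
import numpy as np
from scipy import stats

def williams_t(r_xy, r_zy, r_xz, n):
    """Williams' t for H0: corr(X, Y) == corr(Z, Y), where X, Z, and Y are
    measured on the same n cases (df = n - 3). Returns (t, two-sided p)."""
    det = 1 - r_xy**2 - r_zy**2 - r_xz**2 + 2 * r_xy * r_zy * r_xz
    r_bar = (r_xy + r_zy) / 2
    t = (r_xy - r_zy) * np.sqrt(
        (n - 1) * (1 + r_xz)
        / (2 * det * (n - 1) / (n - 3) + r_bar**2 * (1 - r_xz) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    return t, p

# Hypothetical values for one application domain, with n = 610 as in the study:
# r_xy = corr(task-based SE, competence), r_zy = corr(assistance-based SE,
# competence), r_xz = corr(task-based SE, assistance-based SE).
t, p = williams_t(r_xy=0.45, r_zy=0.38, r_xz=0.60, n=610)
print(f"Williams' t = {t:.2f}, p = {p:.4f}")
```

With roughly 610 cases, even modest differences in correlation strength can reach statistical significance, which is one reason a large sample is useful for this kind of methodological comparison.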

This is an important conceptual and empirical consideration, and no known study has examined this issue. We suggest that some of the problems noted in the relationship between self-efficacy and its related constructs may be due to the inability of some instruments to sufficiently capture the intended computing domain. Accurately determining an individual’s “true” perception of ability for a domain will enhance our conceptual knowledge of SE and provide a more precise description of self-efficacy’s relationships with related constructs. It appears that most researchers and practitioners use extant CSE instruments without an appreciation of the consequences of their instrumentation methodology. The goal of this study is to empirically compare the two CSE methodologies, task-based and levels-of-assistance-based, to determine whether significant differences exist between them and under what circumstances each is more appropriate.
