Unsuccessful Performance and Future Computer Self-Efficacy Estimations: Attributions and Generalization to Other Software Applications

Richard D. Johnson (Department of Management, University at Albany, Albany, NY, USA), Yuzhu Li (University of Massachusetts Dartmouth, North Dartmouth, MA, USA) and James H. Dulebohn (Michigan State University, East Lansing, MI, USA)
Copyright: © 2016 | Pages: 14
DOI: 10.4018/JOEUC.2016010101

Abstract

Using data from 100 individuals, this study examined the role of performance attributions (stability and locus of causality) and computer self-efficacy (CSE) for spreadsheets and databases in the training context. The results show that both self-efficacy and attributions (locus of causality and stability) for unsuccessful performance on one software package affected future efficacy estimations for both the same software package (spreadsheet) and a related software package (database). These findings extend previous research by illustrating that, through the generality of CSE estimations, users' performance on one software package is related to self-efficacy estimations for a different, distally similar, software application. This suggests that trainers and managers cannot overlook the importance of self-efficacy generality in the design of technology training initiatives. Early, unsuccessful experiences for those with limited technology experience can make it more challenging to adapt to, and learn to use, new technologies.
Article Preview

Introduction

Technology has had a dramatic impact on how employees work. Employees are expected to bring a variety of software knowledge and skills to their jobs and to use a variety of office productivity tools (e.g., word processors, spreadsheets, enterprise resource planning (ERP) software, and employee self-service systems) in the course of their daily responsibilities (Johnson, Marakas, & Palmer, 2006; Martin, 2001). In addition, the growth of mobile computing means that employees are expected to access these systems across multiple technological platforms (e.g., smartphones, kiosks, laptops), and to do so quickly and seamlessly. To ease the transition between these devices and applications, designers have long advocated for design consistency (e.g., similar menu and navigation structures) across applications and platforms (Martin & Eastman, 1996; Microsoft, 2009; Shneiderman & Plaisant, 2004). For example, in the Windows environment most applications are designed so that opening and saving a file or copying, moving, and formatting text are done in the same manner. These shared characteristics can signal to the user that they are utilizing software applications that require similar skills, hopefully increasing computer self-efficacy (CSE) for the software application they are using as well as for other software applications that utilize similar interface design components (Marakas et al., 1998).

The challenge is that not everyone will have successful software experiences, and unsuccessful experiences can reduce self-efficacy (Silver, Mitchell, & Gist, 1995; Stajkovic & Sommer, 2000). Thus, the same characteristics designed to ease transitions between software applications can actually increase the likelihood that poor experiences with one software application will lead to decreased CSE for multiple applications. This can increase resistance to new technologies or practices, reduce employee performance and customer service, and ultimately increase training costs for organizations. Thus, a key question that needs to be addressed is how this process occurs.

Research suggests that attributions, CSE, and self-efficacy generality may play a role in this process. Attributions focus on an individual’s interpretation and ascription of causality to events such as performance (Kelley, 1973). CSE is a reflection of an individual’s belief in his or her ability to perform computer tasks (Marakas, Yi, & Johnson, 1998). Self-efficacy generality is a reflection of the extent to which self-efficacy estimations for one activity transfer to other, similar activities (Bandura, 1997). The more two activities share common characteristics or skills, the greater the self-efficacy generality. Thus, within the computing domain, where software interfaces are designed to share similar characteristics, generality should be higher than when software tasks are compared to tasks in other domains, such as accounting or human resources. The similarities in the navigation or functionality of software tasks mean that performance and CSE estimations for one software package may provide efficacy information for another, related software package. Unsuccessful experiences with one software application may carry over to new software applications, reducing confidence and performance.

Although there have been a multitude of studies on the role of self-efficacy in software training (Compeau & Higgins, 1995b; Gist et al., 1989; Johnson & Marakas, 2000), we are aware of no studies that have directly investigated the role of attributions and the generalization of efficacy estimations to different software packages. Therefore, the goal of this study was to extend previous research by investigating the role of CSE generality and attributions about performance with one software application and their relationship to subsequent self-efficacy estimations for a related software application. In particular, we focus on unsuccessful performance and its impact on attributions, CSE generality, and future CSE estimations for the same and a related software application.
