The Emergence of Organizational Process Liability as a Future Direction for Research on Technology Acceptance


Jason Nichols, David Biros, Ramesh Sharda, Upton Shimp
DOI: 10.4018/ijsodit.2012100101

Abstract

During the assessment of a system recently deployed in a military training environment, a survey administered to capture subjects’ intention to continue using the software directly conflicted with the outcome of interviews conducted with the same subjects at the same time. The richness of the interview data revealed the conflict to be the result of a seemingly underexplored notion in the literature: organizational process liability. This emerging construct is positioned within the existing technology acceptance literature, and initial directions for future research are proposed.
Introduction

The Ammunition Multimedia Encyclopedia (AME) is a mobile platform developed to support the United States Army with the identification and inspection of munitions in the field (Lucca et al., 2011). After development, the resulting software was deployed for user and system testing in the classrooms of current and future Army munitions experts. At the conclusion of the semester, a survey instrument was administered containing measures for traditional technology acceptance constructs (e.g., Venkatesh, Morris, Davis, & Davis, 2003), as well as constructs from related models (e.g., Goodhue & Thompson, 1995; Pavlou, 2003). As is the intention of these theoretically grounded models, the purpose of deploying them here in practice was to 1) assess the respondents’ intention to continue using the software once in the field, and 2) capture the performance of the software through measures of constructs that have been shown to lead to continued system use. Simply put, and as is common in any software development and deployment cycle, the questions to be answered were, “Are you going to use this?” and, “If not, why not?”

Immediately after completing the survey, the participants were interviewed for a related study. After the interview notes were reviewed and assimilated, and the survey results were analyzed, an interesting and unexpected conflict was observed between the outcomes of the two methodologies. Although the survey population was small because of our focus on real users, the survey results all pointed towards positive perceptions of the software across the antecedent constructs, as well as positive intentions towards continued use of the software in the field. The interviews, however, uncovered a uniform unwillingness among the participants to employ the software in their regular work. This conflict is interesting in two ways. First, there was clearly an issue with the operationalization of the intention-to-use construct, and this issue may or may not be indicative of a broader concern for the construct within a certain class of subjects. Second, none of the antecedent constructs used in the survey raised questions as to either a system or system-context characteristic that may hinder intention to use the software in the future. These would appear to be the two key areas where models such as the technology acceptance model are needed in practice as software is developed and deployed: as suggested earlier, software developers want to know if their software will be used, and, if not, why not.

Fortunately, the interview phase of data collection for AME was part of the theoretical sampling phase for a related and ongoing study of contextual dimensions that support or hinder participation in knowledge-sharing resources (Nichols, Biros, & Weiser, 2012). As such, the uniformly negative responses towards continued use of the software were of particular interest, and the root causes of the sentiment were explored thoroughly. In what follows, the results of both the survey and the interviews are presented in order to further highlight the conflict between the outcomes of the two methods. Owing to the deeper qualitative investigation that took place during the interview phase, a cause of the conflicting results is identified, and potential remediation for the issue is mapped back to the extant literature on technology acceptance and its antecedents. A call for further research along two specific lines is issued, as guided by the structure of existing theory in the domain.

Ultimately, the goal of models such as the technology acceptance model (TAM) is to support technology development and deployment. It is the hope of this manuscript, then, to 1) provide an account of an unexpected issue that arose from utilizing TAM in just such a fashion, 2) identify and explain the source of the issue as it was uncovered and examined during the fortunate coincidence of concurrent qualitative interviews, and 3) provide initial direction for continued research to resolve the issue for the quantitative instruments used to support technology development and deployment in practice. First, however, we begin with a brief description of AME in order to provide context.
