To Evaluate or Not to Evaluate?: A Two-Process Model of Innovation Adoption Decision Making

Nan (Tina) Wang
Copyright: © 2018 | Pages: 20
DOI: 10.4018/JDM.2018040103

Abstract

Using information processing theory (IPT) as the theoretical lens and drawing on related literatures within that lens (e.g., the dual-threshold model in signal detection), this article develops a two-process model of innovation adoption decision making that accounts for the possibility that potential adopters (at different levels) make adoption decisions (adopt, do not adopt) with or without an intensive evaluation of the innovation. Specifically, the article proposes that an attention process precedes the extensively investigated intensive evaluation process; potential adopters may make adoption decisions (adopt, do not adopt) at the end of the attention process or defer the decision until an intensive evaluation has been conducted. The article also discusses how innovation attributes affect various influence targets (i.e., relative advantage belief strength, adoption threshold, and rejection threshold) during the less examined attention process. This work may contribute to the innovation adoption literature and offers practical implications for innovation proponents and detractors regarding how to craft sensegiving messages that influence potential adopters' decision making.
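To make the two-process structure concrete, the following minimal Python sketch illustrates the dual-threshold logic described in the abstract: if relative advantage belief strength formed during the attention process clears an adoption threshold, the innovation is adopted without an intensive evaluation; if it falls below a rejection threshold, it is rejected without one; otherwise the decision is deferred to an intensive evaluation. The function names, the 0–1 belief-strength scale, and the numeric thresholds are illustrative assumptions, not the author's formal model.

```python
from enum import Enum


class Decision(Enum):
    ADOPT = "adopt"
    REJECT = "do not adopt"
    EVALUATE = "defer to intensive evaluation"


def attention_process(belief_strength: float,
                      adoption_threshold: float = 0.7,
                      rejection_threshold: float = 0.3) -> Decision:
    """Dual-threshold decision rule (illustrative sketch only).

    belief_strength: perceived relative advantage of the innovation,
        formed from readily available cues (e.g., sensegiving messages),
        assumed here to lie on a 0-1 scale.
    adoption_threshold / rejection_threshold: hypothetical cut-offs
        separating "adopt without intensive evaluation", "reject without
        intensive evaluation", and "defer to intensive evaluation".
    """
    if belief_strength >= adoption_threshold:
        return Decision.ADOPT          # obviously promising: adopt at attention stage
    if belief_strength <= rejection_threshold:
        return Decision.REJECT         # obviously unpromising: reject at attention stage
    return Decision.EVALUATE           # ambiguous: defer to intensive evaluation


# Example: a clearly promising innovation is adopted at the attention stage,
# an ambiguous one is passed on to intensive evaluation, a weak one is rejected.
print(attention_process(0.85))  # Decision.ADOPT
print(attention_process(0.50))  # Decision.EVALUATE
print(attention_process(0.10))  # Decision.REJECT
```

In this sketch, widening the gap between the two thresholds sends more innovations to intensive evaluation, while narrowing it lets more decisions be made at the attention stage; this is only one way the dual-threshold idea could be operationalized.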
Article Preview

Introduction

Innovation adoption at different levels has received extensive research attention (e.g., Rogers, 1962; Venkatesh, Morris, Davis & Davis, 2003). In this paper, “potential adopters” refers to adoption decision makers at the individual (Brancheau & Wetherbe, 1990), organizational unit (Cool, Dierickx & Szulanski, 1997), and organization (Cooper & Zmud, 1990) levels. Several influential models (e.g., TAM, UTAUT) have been proposed to explain potential adopters’ adoption decision making process. Despite this extensive attention, some questions remain. One phenomenon that needs deeper understanding is that potential adopters sometimes make adoption decisions (adopt or do not adopt) without conducting an intensive evaluation of the innovation, while at other times they defer decision making until an intensive evaluation has been conducted.

This phenomenon occurs at different levels. For example, at the individual level, research on herd behavior suggests that individuals may imitate others’ adoption behaviors without conducting an intensive evaluation of the innovation (e.g., Sun, 2013); at the organizational level, the literature on innovation bandwagons suggests that organizations may adopt innovations, especially fashionable ones, following a “me too” rationale without an intensive evaluation (e.g., Swanson & Ramiller, 2004). Apart from adopting an innovation without an intensive evaluation, potential adopters (at different levels) may also reject an innovation without one. Take the British Navy’s fight against scurvy as an example. Despite convincing evidence regarding the effectiveness of oranges in preventing scurvy, authorities at the British Navy neglected this innovation (for more than a century) without conducting an intensive evaluation, partly because the person who claimed that oranges cured scurvy was not a naval medicine expert. Similarly, the literature on innovation bandwagons suggests that organizations may also reject an innovation following a “me too” rationale (e.g., Abrahamson, 1991).

Traditional models (e.g., TAM, UTAUT) proposed to explain potential adopters’ adoption decision making largely assume that adoption decisions are made after an intensive evaluation of the innovation. This assumption is problematic, especially nowadays, for several reasons. First, undertaking an intensive evaluation for each candidate innovation is not feasible. The number of innovations that emerge and could be considered candidates for adoption is growing quickly. Relative to this large number of candidate innovations, the cognitive resources potential adopters need for an intensive evaluation are scarce (e.g., Davenport & Beck, 2001; Ocasio, 2011). As Herbert Simon argued decades ago, “...in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes…Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it” (Simon 1971, pp. 40–41). As a result, it is impossible for potential adopters to undertake an intensive evaluation of every candidate innovation. Second, potential adopters are sometimes “forced” to make adoption decisions without an intensive evaluation. This can happen when innovations are highly complicated and hence beyond potential adopters’ evaluation capabilities; in this case, potential adopters often make decisions following a “me too” rationale because they “prefer the chance of being wrong with everybody else to the risk of providing a deviant forecast that turns out to be the only incorrect guess” (Anderson & Holt, 1997, p. 848). It can also happen because of the power of social influence, which has consistently been found to affect potential adopters’ decisions (e.g., Swanson & Ramiller, 2004; Rogers, 1962). Third, in some cases it is actually wise to skip an intensive evaluation: when an innovation is obviously promising, skipping the evaluation allows potential adopters to act quickly and obtain a first-mover advantage (e.g., Sambamurthy & Zmud, 2014); when an innovation is obviously unpromising, conducting an intensive evaluation is, according to the attention economy argument (e.g., Davenport & Beck, 2001), simply a waste of scarce resources.
