Introduction
Information Systems (IS) service providers deliver both software products and associated services to organizations. Functional departments periodically evaluate the performance of IS departments through the quality of the services provided to them (Watson, Pitt, & Kavan, 1998). IS service quality can be integrated into an IS balanced scorecard, a strategic IS management tool for assessing overall IS performance. In a 2007 survey of CIOs conducted by the Society for Information Management (SIM), "Improve IT quality" emerged as one of the top five concerns facing IT executives (Luftman & Kempaiah, 2008). Monitoring IS service quality is even more critical in the context of IS outsourcing or offshoring, an alternative or complementary delivery mechanism to insourcing (Gorla & Lau, 2010). Previous research has established the importance of IS service quality to the success of IT departments and organizations (Gopal & Koka, 2009; Gorla, Somers, & Wong, 2010).
Service quality can be measured based on customers' expectations (what they want) and perceptions of actual performance (what they think they are getting) across a range of service dimensions (Parasuraman, Zeithaml, & Berry, 1988). Service quality is computed as the gap between expectations and perceptions, averaged across these dimensions. By measuring the difference between service performance and customer expectations, managers can assess their shortcomings as well as how much service must be enhanced in each dimension to meet customer needs. This gap between customers' expectations and perceived service performance can be measured with the SERVQUAL instrument, which was originally developed in the field of marketing by Parasuraman, Zeithaml, and Berry (1988). SERVQUAL has been applied to IS services by modifying the instrument to suit the IS context (Kettinger & Lee, 1994; Kim, Eom, & Ahn, 2005; Pitt et al., 1995; Wang & Tang, 2003). The instrument is a 22-item questionnaire that measures service performance and service expectation along five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. Owing to the ambiguity associated with a single expectation measure, two comparative norms were conceptualized: adequate service and desired service expectations (Parasuraman, Zeithaml, & Berry, 1994; Zeithaml, Berry, & Parasuraman, 1993). The range between adequate service and desired service represents the window of customer expectations, termed the zone of tolerance (ZOT).
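The gap computation and ZOT classification described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' instrument: all numeric ratings below are hypothetical, and in practice each dimension score would aggregate several of the 22 questionnaire items.

```python
# Illustrative SERVQUAL gap computation and zone-of-tolerance (ZOT)
# classification. Dimension names follow the SERVQUAL instrument;
# all rating values are hypothetical (e.g., 7-point Likert scores).

SERVQUAL_DIMENSIONS = ["tangibles", "reliability", "responsiveness",
                       "assurance", "empathy"]

def gap_scores(perceived, expected):
    """Per-dimension gap: perception minus expectation (negative = shortfall)."""
    return {d: perceived[d] - expected[d] for d in SERVQUAL_DIMENSIONS}

def servqual(perceived, expected):
    """Overall service quality: the gap averaged across the five dimensions."""
    gaps = gap_scores(perceived, expected)
    return sum(gaps.values()) / len(gaps)

def zot_segments(perceived, adequate, desired):
    """Classify each dimension against the zone of tolerance:
    below the adequate level, within the ZOT, or above the desired level."""
    segments = {}
    for d in SERVQUAL_DIMENSIONS:
        if perceived[d] < adequate[d]:
            segments[d] = "below adequate"
        elif perceived[d] <= desired[d]:
            segments[d] = "within ZOT"
        else:
            segments[d] = "above desired"
    return segments

# Hypothetical ratings for one respondent:
perceived = {"tangibles": 5.0, "reliability": 4.0, "responsiveness": 6.5,
             "assurance": 5.5, "empathy": 4.5}
adequate  = {"tangibles": 4.0, "reliability": 5.0, "responsiveness": 4.0,
             "assurance": 5.0, "empathy": 4.0}
desired   = {"tangibles": 6.0, "reliability": 6.5, "responsiveness": 6.0,
             "assurance": 6.5, "empathy": 6.0}

print(servqual(perceived, desired))              # gap against desired service
print(zot_segments(perceived, adequate, desired))
```

With these hypothetical numbers, reliability falls below the adequate level while responsiveness exceeds the desired level, placing them in different ZOT segments even though the overall averaged gap is a single negative number.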
Service quality is positively associated with favorable behavioral intentions and negatively associated with unfavorable behavioral intentions (Zeithaml, Berry, & Parasuraman, 1993, 1996). The relationship between service quality and behavioral intentions varies across the three segments of the zone of tolerance: below the adequate service level, within the zone of tolerance, and above the desired service level (Teas & DeCarlo, 2004). Teas and DeCarlo (2004) provide further motivation by stating that "… the slope of one attribute may be most positive in the acceptable zone, whereas the slope for another attribute may be most positive in the superior zone. This issue of individual dimension slope is an interesting question for future research" (p. 283). Few studies in the IS context examine the impact of service quality on user satisfaction in relation to the ZOT, or the impact of user expectations on IS service quality. Investigating such issues helps to identify the IS service quality dimensions that are associated with high user satisfaction or high service performance. The present study highlights those important service dimensions for IS managers to examine in the context of resource allocation decisions. The following research questions are addressed in this paper: