Enabling Resource Access Visibility for Automated Enterprise Services

Kaushik Dutta, Debra VanderMeer
Copyright: © 2014 | Pages: 28
DOI: 10.4018/jdm.2014040101

Abstract

Organizations deliver on their mandates by executing a variety of services. Over the past few decades, service automation software systems, such as SAP and PeopleSoft, have enabled the automation of these services. While much attention in the literature and in industry has been devoted to the implementation and functional correctness of automated services, little attention has been paid to ensuring their responsiveness. As service automation platforms host ever larger numbers of services, and services execute with ever greater levels of concurrency, fault resolution becomes an important issue in ensuring expected responsiveness levels. In particular, two factors impact fault resolution in service automation platforms. First, each executing service requires access to specific data and system resources to complete its processing. As greater numbers of services execute concurrently, contention for these data and system resources increases, leading to more faults and SLA violations in service execution. Second, the black-box nature of service automation platforms provides little visibility into the resource contention that caused a fault or SLA violation. This lack of visibility makes fault resolution difficult, and in many cases impossible, because the root cause of the problem is hard to trace. In this paper, the authors address the problem of system-level resource visibility for services through the design and development of a system that maps abstract service workflows to their data and system impacts. The authors' system has been implemented and shown to be effective in a case study setting.
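To illustrate the kind of workflow-to-resource mapping the abstract describes, the following sketch (in Python) associates each abstract workflow step with the data and system resources it touches, so that a fault in a given step can be traced to the resources under contention. All names, resource labels, and the dictionary structure here are illustrative assumptions, not the paper's actual design.

    # Hypothetical illustration of a resource-visibility mapping: each abstract
    # workflow step is associated with the data and system resources it accesses.
    # Step names and resource labels are assumptions made for this sketch.
    from typing import Dict, List

    ResourceMap = Dict[str, List[str]]  # workflow step -> resources it accesses

    payroll_workflow: ResourceMap = {
        "load_employee_records": ["db.table:EMPLOYEE", "db.table:PAY_GRADE"],
        "compute_gross_pay":     ["cpu", "db.table:TIMESHEET"],
        "post_to_ledger":        ["db.table:GL_ENTRY", "lock:GL_PERIOD"],
    }

    def resources_for(step: str, mapping: ResourceMap) -> List[str]:
        """Return the resources a workflow step is known to access (empty if unmapped)."""
        return mapping.get(step, [])

    # If a fault or SLA violation occurs in "post_to_ledger", support staff can see
    # immediately which resources (and hence which contention points) are involved.
    print(resources_for("post_to_ledger", payroll_workflow))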
Article Preview

Introduction

Firms are increasingly virtualizing manual processes (Overby, 2008) as automated services. In this context, service quality, defined by Xu et al. (2013) as “a customer’s global, subjective assessment of the quality of an interaction with a vendor, including the degree to which specific service needs have been met,” is an important concern. To date, much of the literature in service quality considers the challenges inherent in ensuring functional correctness when converting manual processes to automated services (Linton, 2003), improving service quality for physical processes (Mukherjee et al., 1998; Soteriou and Chase, 2000), or leveraging information technology to improve provisioning of a firm’s customer service in general (Ray et al., 2005; Karimi et al., 2001).

As enterprise service delivery platforms, such as SAP and PeopleSoft, grow in scale and take on increasingly large sets of responsibility for service processing, service interruptions have a proportionally larger impact on a firm's ability to function. When a relatively small set of servers is tasked with handling critical internal and external service tasks, any issue that impacts the service delivery platform has the potential to bring entire departments, perhaps even an entire firm, to a standstill (Pang and Whitt, 2009).

Creating effective service quality maintenance processes is an active area of work (Trienekens et al., 2004) and a billion-dollar industry (Oracle, 2008). Quality of Service (QoS) models do not require that problems never occur; modern enterprise technology is complex enough that it is generally accepted that issues will arise. QoS models, typically codified in Service Level Agreements (SLAs), take this as a given and focus on guarantees of recovery, i.e., how quickly issues are resolved and how much downtime a service will suffer. Of primary importance is how quickly issues are resolved, usually quantified by the time to resolution (TTR) metric (Hiles, 2002). Organizations prefer TTRs on the order of minutes, not hours. However, for application support staff, TTRs can stretch to multiple hours or longer when particularly tricky issues arise. In this work, we collaborated with an application development and support team for a major US media company. In the experience of the media company's enterprise applications manager, TTR for some production issues on their PeopleSoft platform could run 5-8 hours. The resulting service downtimes have significant impacts on the organization's bottom line in terms of lost productivity and lost sales opportunities.
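To make the TTR metric concrete, the following sketch computes TTR from ticket timestamps and flags SLA violations; the one-hour threshold and the function and field names are hypothetical, chosen only for illustration.

    from datetime import datetime, timedelta

    # Hypothetical SLA threshold for illustration; real SLAs vary by service and tier.
    TTR_SLA = timedelta(hours=1)

    def time_to_resolution(opened_at: datetime, resolved_at: datetime) -> timedelta:
        """Time to resolution (TTR): elapsed time from ticket creation to resolution."""
        return resolved_at - opened_at

    def violates_sla(opened_at: datetime, resolved_at: datetime) -> bool:
        """True if a ticket's TTR exceeds the assumed SLA threshold."""
        return time_to_resolution(opened_at, resolved_at) > TTR_SLA

    # Example: a production issue logged at 09:00 and resolved at 14:30 (TTR of 5.5 hours),
    # comparable to the 5-8 hour TTRs reported for the PeopleSoft platform above.
    opened = datetime(2014, 3, 3, 9, 0)
    resolved = datetime(2014, 3, 3, 14, 30)
    print(time_to_resolution(opened, resolved))  # 5:30:00
    print(violates_sla(opened, resolved))        # True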

Problem resolution in general follows a four-step workflow (Johnson, 2002), as depicted in Figure 1. Here, the workflow begins when the issue is logged as a new trouble ticket, and alerts are sent to the appropriate support staff. In the second step, the root cause analysis (RCA) step, the application support staff member assigned to the ticket gathers information aimed at determining why the problem occurred. In the third step, the support staff cleans up any partially completed processes and restarts them to move the impacted business service(s) forward. For example, a partially-completed payroll calculation will need to be restarted with its original inputs so that the next scheduled steps can take place as designed. In the fourth step, the application support analyst develops recommendations for preventing the issue’s recurrence in the future, and documents the problem characteristics and resolution process for future use.

Figure 1. Problem resolution workflow
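The four-step workflow can be sketched as a simple state progression; the class, step names, and example issue below are illustrative only and are not taken from the paper.

    from enum import Enum, auto

    class ResolutionStep(Enum):
        """The four steps of the problem-resolution workflow described above."""
        LOG_AND_ALERT = auto()        # 1. log the trouble ticket, alert support staff
        ROOT_CAUSE_ANALYSIS = auto()  # 2. gather information to determine why the fault occurred
        CLEANUP_AND_RESTART = auto()  # 3. clean up partial work, restart the impacted service(s)
        PREVENT_AND_DOCUMENT = auto() # 4. recommend preventive measures, document the resolution

    class TroubleTicket:
        """Minimal illustrative ticket that advances through the workflow steps in order."""
        def __init__(self, issue: str):
            self.issue = issue
            self.step = ResolutionStep.LOG_AND_ALERT

        def advance(self) -> None:
            steps = list(ResolutionStep)
            idx = steps.index(self.step)
            if idx < len(steps) - 1:
                self.step = steps[idx + 1]

    # Example: a partially completed payroll calculation fault moving through the workflow.
    ticket = TroubleTicket("payroll calculation failed mid-run")
    while ticket.step is not ResolutionStep.PREVENT_AND_DOCUMENT:
        ticket.advance()
    print(ticket.step)  # ResolutionStep.PREVENT_AND_DOCUMENT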