Real Time Risk Management in Cloud Computation

Kashif Kifayat (Liverpool John Moores University, UK), Thar Baker Shamsa (Manchester Metropolitan University, UK), Michael Mackay (Liverpool John Moores University, UK), Madjid Merabti (Liverpool John Moores University, UK) and Qi Shi (Liverpool John Moores University, UK)
Copyright: © 2013 | Pages: 23
DOI: 10.4018/978-1-4666-2125-1.ch007


The rise of Cloud Computing represents one of the most significant shifts in information technology in the last five years and promises to revolutionise how we view the availability and consumption of computing storage and processing resources. However, it is well known that along with its benefits, Cloud Computing also presents a number of security issues that have restricted its deployment to date. This chapter reviews the potential vulnerabilities of Cloud-based architectures and uses this as the foundation to define a set of requirements for reassessing risk management in Cloud Computing. To fulfil these requirements, the authors propose a new scheme for the real-time assessment and auditing of risk in cloud-based applications and explore this with the use case of a triage application.
Chapter Preview


Cloud computing represents one of the most significant shifts in information technology. In cloud computing, physical computer resources (storage, processing power and services, including software platforms and applications) are abstracted in a way that enables users to work with and access their information and services through multiple devices and networks. This means users can avoid upfront costs for hardware, software, and services by paying a service provider only for what they use. Computing services offered as a utility are a promising innovation with great future potential, and they have been raising significant levels of interest among individual users, enterprises, and governments in recent years (Kifayat, Merabti, & Shi, 2010).

The most commonly cited benefit of cloud computing is its high degree of redundancy (both at the hardware and software levels and also geographically), which cannot be matched by most localised enterprise infrastructures. This is a key attribute of the cloud that enables a higher degree of service availability. Since resources are shared between multiple services and customers, the cloud infrastructure can handle peak demand from individual customers much more effectively and at lower overall cost. Geographic redundancy also provides another key benefit from the data-protection perspective: it allows data to survive localised failures, including power outages, natural disasters, or damage to local physical facilities. In many cases, the data can be accessed remotely from anywhere in the world, even if the user-company's headquarters and data centres are temporarily unavailable (Broda et al., 2010).

According to top industry analysts such as IDC (International Data Corporation), the worldwide market for cloud services was in the order of $17.4bn in 2009 and is forecast to reach $44.2bn by 2013, with the European market growing from $971m in 2008 to $6,005m in 2013. Moreover, Steve Ballmer (Microsoft CEO) predicted in a talk at Washington University that cloud computing would become a $3.3tn business around the globe. This year alone the move to the cloud by many businesses has been exceptional, so much so that some cloud businesses have grown by over 200%. Large vendors see this as the growth model for software and services in the future, and are focusing on it accordingly.

Users are excited by the opportunity to reduce capital costs, a chance to free themselves from infrastructure management, and the ability to focus on core competencies. However, there are concerns about the risks of cloud computing if it is not properly secured, and about the loss of direct control over systems for which users are, nonetheless, accountable. Recent reports by the Cloud Security Alliance (CSA), the European Network and Information Security Agency (ENISA) and the Community Research and Development Information Service (CORDIS) identified a variety of security challenges and potential research areas in cloud computing security, such as lack of control over data, trust establishment, ensuring integrity, confidentiality, virtualisation, access control, privacy and identity provision (Broda et al., 2010; Brunette & Mogull, 2009; Catteddu & Hogben, 2009; Jeffery & Neidecker-Kutz, 2010). Additionally, the CSA identified some core security issues that we will discuss later in this chapter.

Many of these cloud security risks are not new and can be found in traditional enterprise environments, but, perhaps due to its popularity, a wave of new cyber-attacks and zero-day vulnerabilities related directly to Cloud Computing has emerged in recent years. In addition, the number of attacks is now so large, and their sophistication so great, that many organisations have trouble determining which new threats and vulnerabilities pose the greatest risk and how resources should be allocated to ensure that the most probable and damaging attacks are dealt with first (TippingPoint, 2009). It is therefore important to have a well-designed risk management methodology to identify security risks, highlight their impacts, and determine how to mitigate them in order to avoid loss. It is also important to handle the risks with the greatest potential loss first, then lower risks in descending order. In practice this process can be very difficult, in particular balancing a risk with a high probability of occurrence but lower loss against a risk with lower probability but high loss. In cloud computing, risk management can apply across different security areas and cyber-attacks; in this chapter, however, we focus on web application auditing, which can help us to identify these risks.
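The prioritisation step described above can be sketched programmatically. The sketch below ranks identified risks by expected loss (probability multiplied by impact), so that high-loss risks are handled first; all risk names, probabilities, and monetary figures are illustrative assumptions, not values from the chapter, and a real methodology would draw them from an auditing process.

```python
def prioritize_risks(risks):
    """Return risks sorted by expected loss (probability * impact), descending."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)

# Hypothetical risks with assumed annual probabilities and impact costs (in $).
identified_risks = [
    {"name": "data breach via web app flaw", "probability": 0.10, "impact": 500_000},
    {"name": "transient VM outage",          "probability": 0.60, "impact": 20_000},
    {"name": "insider data leak",            "probability": 0.02, "impact": 900_000},
]

for risk in prioritize_risks(identified_risks):
    expected_loss = risk["probability"] * risk["impact"]
    print(f"{risk['name']}: expected loss = ${expected_loss:,.0f}")
```

Note how the ranking resolves the trade-off mentioned above: the high-probability, low-loss VM outage (expected loss $12,000) ranks below the low-probability, high-loss insider leak (expected loss $18,000).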
