CloudRank: A Cloud Service Ranking Method Based on Both User Feedback and Service Testing


Jianxin Li, Linlin Meng, Zekun Zhu, Xudong Li, Jinpeng Huai, Lu Liu
DOI: 10.4018/978-1-4666-2854-0.ch010

Abstract

In this chapter, the authors propose a Cloud service ranking system, named CloudRank, based on both user feedback and service testing. In CloudRank, the authors design a new ranking-oriented collaborative filtering (CF) approach named WSRank, in which user preferences are modeled as personal rankings derived from user QoS ratings on services, in order to address the service quality prediction problem. Different from existing approaches, WSRank first presents a QoS model that allows users to express their preferences flexibly while combining multiple QoS properties into an overall rating for a service. Second, it measures the similarity among users based on the correlation of their service rankings rather than on the rating values themselves. Nevertheless, ranking Cloud services merely on the basis of user feedback is neither accurate nor sufficient, owing to problems such as the cold-start problem, the absence of user feedback, and service faults occurring within a service workflow. To obtain an accurate ranking, an active service QoS testing and fault location approach is therefore required alongside WSRank. Accordingly, in CloudRank the authors also design an automated testing prototype named WSTester to collect real QoS information of services. WSTester integrates distributed computers to construct a virtual testing environment for Web service testing and deploys test tasks onto the distributed computers efficiently.
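To make the ranking-oriented similarity idea above concrete, the following minimal Python sketch derives each user's personal ranking from a weighted combination of QoS attributes and then compares two users by the agreement of their rankings (a Kendall-tau-style correlation over commonly rated services). The attribute names, weights, and sample data are hypothetical assumptions made for illustration; this is not the chapter's WSRank implementation.

```python
# Illustrative sketch only: ranking-oriented similarity in the spirit of WSRank.
# The weights, attribute names, and sample data below are assumptions, not the
# authors' actual model or implementation.

from itertools import combinations

def overall_rating(qos, weights):
    """Combine multiple (already normalized) QoS attribute values into one rating."""
    return sum(weights[attr] * value for attr, value in qos.items())

def rank_correlation(ratings_a, ratings_b):
    """Kendall-tau-style correlation between two users over commonly rated services."""
    common = [s for s in ratings_a if s in ratings_b]
    if len(common) < 2:
        return 0.0
    concordant = discordant = 0
    for s1, s2 in combinations(common, 2):
        a = ratings_a[s1] - ratings_a[s2]
        b = ratings_b[s1] - ratings_b[s2]
        if a * b > 0:
            concordant += 1      # both users order s1 and s2 the same way
        elif a * b < 0:
            discordant += 1      # the two users disagree on the order
    pairs = len(common) * (len(common) - 1) / 2
    return (concordant - discordant) / pairs

# Hypothetical per-user QoS records: service -> {attribute: normalized value}.
weights = {"response_time": 0.5, "availability": 0.3, "throughput": 0.2}
user_a = {"s1": {"response_time": 0.9, "availability": 0.8, "throughput": 0.7},
          "s2": {"response_time": 0.4, "availability": 0.6, "throughput": 0.5}}
user_b = {"s1": {"response_time": 0.8, "availability": 0.9, "throughput": 0.6},
          "s2": {"response_time": 0.5, "availability": 0.4, "throughput": 0.6}}

# Each user's personal ranking is derived from the overall ratings, and the
# similarity is based on the order of services rather than the raw rating values.
rank_a = {s: overall_rating(q, weights) for s, q in user_a.items()}
rank_b = {s: overall_rating(q, weights) for s, q in user_b.items()}
print(rank_correlation(rank_a, rank_b))
```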
Chapter Preview

1 Introduction

In recent years, the Cloud and service computing paradigms have been converging into a powerful platform, and we have witnessed the strong growth of software delivered as a service over the Internet, where services are offered by leveraging the power of Cloud datacenters. However, how to select qualified services is becoming a key research issue in a large-scale Cloud environment. Due to the dynamic nature of the execution environment, Cloud services may encounter various faults; WSTester can locate the originally failing service even as faults continue to accumulate and spread. Experimental results confirm that CloudRank significantly outperforms the competing approaches, and that it benefits from the testing environment and locates possible faults effectively and efficiently.

In this chapter, our general perspective is to rank the quality of Cloud services by both user feedback and active service testing based on Service-oriented architecture (SOA) technologies (Yang, Nasser, Surrige & Middleton 2012). In recent years, we have witnessed the maturing of SOA technologies and the rise of Cloud services. SOA has been broadly accepted in the enterprise computing paradigm because many IT professionals have realized its potential; in particular, Web services based on SOA can dramatically speed up application development and deployment. According to an investigation by Gartner, SOA has become a prevailing software engineering practice, ending the 40-year domination of monolithic software architecture. At the same time, Cloud computing has become an increasingly popular means of delivering valuable, IT-enabled business services. Customers and end users access Cloud services through self-service portals, using and paying for only those services they need, when and where they need them. In a Cloud, the hardware and software infrastructures are provided in a centralized manner, so software as a service (SaaS) can be offered as a new software delivery mode. All of these technologies reflect a trend toward the centralization of Web services. Based on SOA technologies, more and more Web services are being provided in the Cloud.

Due to the dynamic and distributed nature of the Internet, services may be fake or even malicious, which can pose security threats to end users. An effective method is therefore required to rank the quality of service (QoS) by non-functional properties when building critical service-oriented applications. At present, the study of ranking Web services mainly focuses on two aspects: ranking based on QoS values predicted by collaborative filtering algorithms, and ranking based on monitored QoS. In this chapter, we first review existing methods for Web service ranking and provide a detailed analysis of collaborative filtering (CF) based Web service ranking. Existing CF approaches have three limitations. First, higher accuracy in rating prediction does not necessarily lead to better ranking effectiveness. Moreover, ranking accuracy is low because these approaches are not sensitive to the order of services in the ranking list. Finally, CF-based ranking systems require a large number of users to provide their QoS records, while on the public Internet the number of service users is extremely small, making it hard to gather enough records to accurately describe user preferences.
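As a point of contrast with the ranking-oriented approach, the following minimal Python sketch shows the conventional rating-prediction style of CF discussed above: user similarity is computed with Pearson correlation over commonly rated services, and a missing QoS value is predicted as a similarity-weighted average. The user names, ratings, and helper functions are hypothetical and serve only as an illustration; this is not the chapter's method.

```python
# Illustrative sketch of rating-prediction-based CF (user-based, Pearson similarity).
# All names and data are hypothetical.

from math import sqrt

def pearson(u, v):
    """Pearson correlation over services rated by both users."""
    common = [s for s in u if s in v]
    n = len(common)
    if n < 2:
        return 0.0
    mu_u = sum(u[s] for s in common) / n
    mu_v = sum(v[s] for s in common) / n
    num = sum((u[s] - mu_u) * (v[s] - mu_v) for s in common)
    den = sqrt(sum((u[s] - mu_u) ** 2 for s in common)) * \
          sqrt(sum((v[s] - mu_v) ** 2 for s in common))
    return num / den if den else 0.0

def predict(target, others, service):
    """Predict the target user's QoS rating for a service as a similarity-weighted average."""
    num = den = 0.0
    for other in others:
        if service not in other:
            continue
        sim = pearson(target, other)
        num += sim * other[service]
        den += abs(sim)
    return num / den if den else None

# Hypothetical QoS ratings (e.g., normalized response time) per user.
alice = {"s1": 0.9, "s2": 0.4, "s3": 0.7}
bob   = {"s1": 0.8, "s2": 0.5, "s4": 0.6}
carol = {"s1": 0.3, "s2": 0.9, "s4": 0.2}

# Services can then be ranked by their predicted values for the target user.
print(predict(alice, [bob, carol], "s4"))
```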
