The MACE Approach for Caching Mashups

Osama Al-Haj Hassan, Lakshmish Ramaswamy, John Miller
Copyright: © 2010 | Pages: 25
DOI: 10.4018/jwsr.2010100104

Abstract

In recent years, Web 2.0 applications have experienced tremendous growth in popularity. Mashups are a key category of Web 2.0 applications, which empower end-users with a highly personalized mechanism to aggregate and manipulate data from multiple sources distributed across the Web. Surprisingly …

Introduction

Web 2.0 is drastically changing the landscape of the World Wide Web by empowering end-users with new tools for enhanced interaction and participation. Among Web 2.0 applications, mashups (Programmable Web, 2009) are becoming increasingly popular as they provide end-users with high degrees of personalization. Conceptually, mashups are Web services that are created by end-users who also consume their results. They offer a high level of personalization because they are developed by end-users themselves, as opposed to regular Web services, which are designed by professional developers. (Throughout this paper, the term Web services refers to the traditional Web service model in which a service provider creates and deploys Web services.)

A mashup essentially collects data from several data sources distributed across the Web; this data is then aggregated, processed, and filtered to generate output that is delivered to the end-user (see the sketch below). Several mashup platforms exist on the Web, including Yahoo Pipes (Yahoo Inc., 2007) and Intel MashMaker (Intel Corp., 2007). The unique features of mashups, namely high personalization and end-user participation, pose new scalability challenges. First, giving end-users the privilege of designing their own mashups causes a mashup platform to host a large volume of mashups, which implies that the scalability requirement for mashup platforms is much higher than for Web services portals. Second, large volumes of mashups also imply that the opportunities for data reuse are minimal unless specialized mechanisms to boost data reuse are adopted. Third, mashups fetch data from several data sources across the Web; these data sources differ in their characteristics and their geographical distribution. Finally, mashups may be designed by non-tech-savvy end-users, and hence are not necessarily optimized for performance. Unfortunately, the scalability and performance challenges of mashups have received little attention from the research community. Although there have been some studies on the performance of traditional orchestrated Web service processes (Chandrasekaran, 2003), to the best of our knowledge, no studies have investigated the efficiency and scalability aspects of mashups or proposed techniques to tackle them.
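
For concreteness, the following is a minimal Python sketch of the fetch-aggregate-filter dataflow described above. The source URLs, the JSON data format, and the keyword filter are all illustrative assumptions, not part of any particular mashup platform.

    # Minimal sketch of a mashup's dataflow: fetch from multiple sources,
    # aggregate the items, then filter. URLs, data shape, and the filter
    # predicate are illustrative assumptions.
    import json
    from urllib.request import urlopen

    def fetch(url):
        """Fetch one data source and parse it as JSON (assumed format)."""
        with urlopen(url, timeout=10) as resp:
            return json.load(resp)

    def run_mashup(source_urls, keyword):
        # Aggregate: collect items from every source into one list.
        items = []
        for url in source_urls:
            items.extend(fetch(url))
        # Process/filter: keep only items whose title mentions the keyword.
        return [it for it in items if keyword in it.get("title", "")]

    # Hypothetical usage; these feed URLs are placeholders and do not exist.
    # results = run_mashup(["https://example.com/feed1.json",
    #                       "https://example.com/feed2.json"], "Web 2.0")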

This paper explores caching as a mechanism to alleviate the scalability challenges of mashups. Caching is a proven strategy for boosting the performance and scalability of Web applications; Web content delivery and Web services, for example, have long adopted caching (Wang, 1999). Several caching techniques have been developed specifically for Web services (Tatemura, 2005; Terry, 2003). However, most of these techniques cannot be directly applied to mashups because of significant differences between Web services and mashups. We need a caching framework that not only takes into account the structural characteristics of mashups but is also adaptive to the various dynamics of the mashup platform.
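
To make the general idea concrete, the following is a minimal sketch of result caching for mashups, where results are stored per (mashup id, input parameters) with a time-to-live. This is only an illustration of caching in general; it is not the MACE design, which the paper develops in later sections, and the class and parameter names are assumptions.

    # Minimal illustration of mashup result caching, keyed by mashup id
    # and input parameters, with a simple TTL expiry. Not the MACE design;
    # names and the TTL policy are illustrative assumptions.
    import time

    class MashupResultCache:
        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds
            self.store = {}  # key -> (result, expiry timestamp)

        def _key(self, mashup_id, params):
            # Sort the parameters so the key is canonical and hashable.
            return (mashup_id, tuple(sorted(params.items())))

        def get(self, mashup_id, params):
            entry = self.store.get(self._key(mashup_id, params))
            if entry and entry[1] > time.time():
                return entry[0]  # fresh hit
            return None          # miss or expired

        def put(self, mashup_id, params, result):
            expiry = time.time() + self.ttl
            self.store[self._key(mashup_id, params)] = (result, expiry)

    # Hypothetical usage: recompute only on a cache miss.
    # cache = MashupResultCache(ttl_seconds=60)
    # res = cache.get("news-mashup", {"topic": "Web 2.0"})
    # if res is None:
    #     res = run_mashup(feed_urls, "Web 2.0")
    #     cache.put("news-mashup", {"topic": "Web 2.0"}, res)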
