MapReduce Based Information Retrieval Algorithms for Efficient Ranking of Webpages

Srinivasa K.G., Anil Kumar Muppalla, Bharghava Varun A., Amulya M.
Copyright © 2011 | Pages: 15
DOI: 10.4018/ijirr.2011100102

Abstract

In this paper, the authors discuss MapReduce implementations of the crawler, indexer, and ranking algorithms used in search engines to retrieve results from the World Wide Web. A crawler and an indexer running in a MapReduce environment are used to improve the speed of crawling and indexing. The proposed ranking algorithm is an iterative method that exploits the link structure of the Web and is developed on the MapReduce framework to accelerate the convergence of web page ranking. Categorization is used to retrieve and order the results according to the user's choice, personalizing the search. The paper also introduces a new score, associated with each web page, that is computed from the user's query and the number of occurrences of the query terms in the document corpus. Experiments are conducted on Web graph datasets, and the results are compared with the serial versions of the crawler, indexer, and ranking algorithms.

1. Introduction

The World Wide Web has grown rapidly from a few thousand pages to more than two billion pages today. The use of the Web has raised significant social concerns in the areas of privacy, censorship, and access to information (Pinkerton, 2000). Web data consists of an enormous number of web pages and the links connecting them. Because of its size and dynamic nature, it is very difficult to locate information efficiently on the Web. Search engines therefore play a vital role in retrieving relevant and accurate data from the Web (Buttcher & Clarke, 2010), and they rely on a huge collection of web pages gathered by a web crawler, a computer program that browses the World Wide Web in an ordered and automated fashion.
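
To make the crawling step concrete, the following is a minimal sequential crawler sketch in Python. It is not the authors' implementation: the seed URLs, page limit, and breadth-first strategy are illustrative assumptions. The crawler keeps a frontier of unvisited URLs, fetches each page, extracts its out-links, and records the link structure for later indexing and ranking.

```python
# Minimal breadth-first crawler sketch (illustrative, not the paper's code).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from <a href="..."> tags of one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(seed_urls, max_pages=100):
    """Returns {url: [out-links]}, the Web graph used by the ranker."""
    frontier, seen, web_graph = deque(seed_urls), set(seed_urls), {}
    while frontier and len(web_graph) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable or non-HTML resources
        parser = LinkExtractor(url)
        parser.feed(html)
        web_graph[url] = parser.links
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return web_graph
```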

Mining for specific data using sequential information retrieval algorithms is time consuming and inefficient, so more parallel algorithms are needed to search and retrieve user-specific data from the ever-expanding World Wide Web. The aim of this paper is to use the MapReduce framework to build a crawler, an indexer, and a ranking system that retrieve information from the Web and personalize the results using machine learning concepts. A problem with the retrieved results is the order in which they are displayed: since millions of results are obtained, it becomes difficult for the user to locate the relevant ones. Machine learning is therefore used to identify the most popular items on the Web and rank them, so that it becomes easier for the user to retrieve popular and important information (Croft, Metzler, & Strohman, 2010).
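
As a concrete illustration of how indexing maps onto MapReduce, the sketch below simulates the map, shuffle, and reduce phases of inverted-index construction in plain Python. The tokenization rule and the tiny in-memory corpus are assumptions for illustration, not the paper's implementation.

```python
# Toy simulation of MapReduce inverted-index construction (assumed design).
from collections import defaultdict
import re

def map_phase(doc_id, text):
    """Map: emit a (term, doc_id) pair for every token in the document."""
    for term in re.findall(r"[a-z0-9]+", text.lower()):
        yield term, doc_id

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(term, doc_ids):
    """Reduce: collapse the grouped pairs into a sorted, de-duplicated posting list."""
    return term, sorted(set(doc_ids))

corpus = {"d1": "web search ranking", "d2": "web crawler and indexer"}
pairs = [p for doc_id, text in corpus.items() for p in map_phase(doc_id, text)]
index = dict(reduce_phase(t, ids) for t, ids in shuffle(pairs).items())
# index == {'web': ['d1', 'd2'], 'search': ['d1'], 'ranking': ['d1'], ...}
```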

PageRank (Brin & Page, 1998) is a link analysis algorithm that assigns a numerical weight (Qin, Liu, Xu, & Li, 2007) to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. PageRank hypothesizes that a link from page A to page B implies that the author of A recommends having a look at page B. A page has a high PageRank value if many pages point to it (Kim & Lee, 2002; Shenoy et al., 2004). The PageRank algorithm and its implementation details are explained in a later section. The algorithm represents the structure of the Web as a matrix and PageRank as a vector; the PageRank vector is derived by computing matrix-vector multiplications repeatedly, and the sum of all PageRank values should remain one throughout the computation. In this paper the MapReduce framework is used to compute the ranks of the web pages correctly, and the experimental evaluation is carried out and its results briefly discussed.
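
To illustrate the iterative computation, the sketch below expresses one PageRank iteration as map and reduce functions over an in-memory graph. The damping factor of 0.85 and the fixed iteration count are standard assumptions; the paper's exact parameters are not given in this preview.

```python
# PageRank iterations expressed as map/reduce steps (illustrative sketch).
from collections import defaultdict

DAMPING = 0.85  # assumed standard damping factor

def map_phase(page, out_links, rank):
    """Map: a page sends rank/|out_links| to each neighbour and re-emits its links."""
    yield page, ("links", out_links)
    for target in out_links:
        yield target, ("rank", rank / len(out_links))

def reduce_phase(page, values, num_pages):
    """Reduce: new rank = (1 - d)/N + d * (sum of incoming contributions)."""
    incoming = sum(v for tag, v in values if tag == "rank")
    links = next((v for tag, v in values if tag == "links"), [])
    return page, links, (1 - DAMPING) / num_pages + DAMPING * incoming

def pagerank(graph, iterations=20):
    n = len(graph)
    ranks = {page: 1.0 / n for page in graph}
    for _ in range(iterations):
        grouped = defaultdict(list)  # stands in for the shuffle phase
        for page, links in graph.items():
            for key, value in map_phase(page, links, ranks[page]):
                grouped[key].append(value)
        for page in graph:
            _, _, ranks[page] = reduce_phase(page, grouped[page], n)
    return ranks

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))
```

Re-emitting the adjacency list from the mapper mirrors the usual Hadoop pattern, in which the reducer's output serves as the input to the next iteration.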

The objective of this paper is to develop a web portal application with search capability that runs as a website. The paper also tests the performance of the proposed algorithms on various datasets, measuring parameters such as retrieval time and the computation time of popularity-based ranking of web pages. The remainder of the paper is organised as follows. Section 2 discusses related work and Section 3 states the contributions of the paper. Sections 4 and 5 present the architectures of the proposed crawler, indexer, and ranking system, followed by the pseudocode of all the techniques used. An illustrative example is given in Section 6. Experimental results, including timings of the iterative methods on graphs, are discussed in Section 7, followed by the conclusions.
