A Hybrid Approach for Web Change Detection

Sakher Khalil Alqaaidi
DOI: 10.4018/jitwe.2013040104

Abstract

Search engines save copies of crawled web pages so that they can serve search results instantly. These saved copies grow stale as the original pages change, gaining new information and new links, and because most websites do not submit such changes, search engines cannot rely on site-initiated notification. Keeping pages fresh in a search engine's index is important both for computing accurate page ranks and for providing up-to-date information, and several techniques have been devised to improve how search engines detect page updates. In this paper the author combines two well-known techniques and evaluates the resulting hybrid experimentally, showing improved results across different experimental cases.

1. Introduction

Search engines are the easiest and fastest tools for finding information on the web. They consist of many programs that index pages, crawl the web, and serve requests. One of these programs is the web crawler (Ezz, 2007; Castillo, 2004), which takes copies of web pages so that the search engine can return instant results. The web now contains billions of pages (Antonio & Signorini, 2005), much of it dynamic content generated by web server software and programming languages that store content in databases, such as PHP with MySQL (Gilmore, 2006). Adding pages to such websites does not require generating HTML files (Nielsen, 2005) and can be done through editors in the website's own interface, so pages may be generated or changed at any time or at defined intervals, as on a daily newspaper site. The web crawler should therefore follow these updates effectively and add the new content, improving the search engine's ability to provide real-time information and to reach the required result faster through result ordering, which depends on the page rank factor (Andrew, 2004). Page rank is computed by algorithms that rely mainly on the URL links found in crawled pages, or on the links that point to a given page, and these links also change as information changes over time.

Many techniques have been devised to organize how the web crawler recopies pages, by determining the best time to visit a page so that new updates are captured soon after they occur. Predicting the change time matters in several situations. Visiting a page many times at short intervals, as illustrated in Figure 1, in an attempt to detect each update as it occurs degrades crawler performance, since the crawler must do the same for billions of pages and would consume considerable resources; a visit that finds no new updates is likewise wasted effort. Frequent visits also burden the server hosting the page when it suffers heavy traffic from search engines. Conversely, visiting at long intervals causes updates in pages to be missed, as Figure 1 also shows. This study describes two of the best-known policies for detecting page freshness, proposes a new policy that combines them, and simulates it in different cases, such as operating with scarce resources or crawling a huge number of pages.

Figure 1. Change rate detection problem
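To make the tradeoff in Figure 1 concrete, the following is a minimal sketch, not the author's simulator: it assumes page changes arrive as a Poisson process (a common modeling choice, not stated in the paper) and counts, for a fixed revisit interval, how many changes a crawler detects, how many versions it misses, and how many visits find nothing new. All parameter values are illustrative.

```python
import random

def simulate_revisit(change_rate, visit_interval, horizon, seed=0):
    """Simulate fixed-interval revisits to one page whose changes
    arrive as a Poisson process (an assumed model).

    Returns (detected, missed, wasted) counts over the horizon."""
    rng = random.Random(seed)

    # Draw change times: exponential inter-arrival gaps.
    changes, t = [], 0.0
    while True:
        t += rng.expovariate(change_rate)
        if t > horizon:
            break
        changes.append(t)

    detected = wasted = 0
    last_visit = 0.0
    visit = visit_interval
    while visit <= horizon:
        # The crawler only sees that the page differs since its last
        # visit, so several changes in one interval count as a single
        # detection; the intermediate versions are lost.
        n = sum(1 for c in changes if last_visit < c <= visit)
        if n == 0:
            wasted += 1      # visit found nothing new
        else:
            detected += 1    # at least one change captured
        last_visit = visit
        visit += visit_interval
    missed = len(changes) - detected
    return detected, missed, wasted

if __name__ == "__main__":
    for interval in (0.5, 1.0, 4.0):
        d, m, w = simulate_revisit(change_rate=1.0,
                                   visit_interval=interval,
                                   horizon=100.0)
        print(f"interval={interval}: detected={d} missed={m} wasted={w}")
```

Shorter intervals drive the wasted-visit count up while longer intervals drive the missed-version count up, which is exactly the two-sided cost the figure depicts.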

The rest of the paper is organized as follows. Section 2 presents related work. Section 3 describes sampling policies. Section 4 covers the proposed policy. Section 5 presents the experiments. Section 6 presents conclusions and future work.

2. Related Work

Ezz (2007) observed that it is impossible to keep revisiting all pages on the web within a short period in search of fresh updates, owing to the sheer number of pages: assuming an average page size of 12 kilobytes, visiting every page would take about 8.5 days. A simple equation was proposed to estimate the speed required of a web crawler that indexes the whole web.
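The exact equation from Ezz (2007) is not reproduced in this preview. The following is a hedged reconstruction of the kind of back-of-envelope arithmetic described: total bytes divided by download rate. The page-count estimate (11.5 billion, from the 2005 indexable-web study cited above) and the bandwidth figure are assumptions, with the bandwidth chosen so the example reproduces the 8.5-day figure.

```python
def crawl_time_days(num_pages, avg_page_kb, bandwidth_mb_s):
    """Back-of-envelope crawl time: total data volume / download rate.

    This is a reconstruction of the kind of estimate Ezz (2007)
    describes, not the published equation itself."""
    total_mb = num_pages * avg_page_kb / 1024.0   # KB -> MB
    seconds = total_mb / bandwidth_mb_s
    return seconds / 86400.0                      # seconds -> days

# Assumed values: 11.5 billion pages at 12 KB each; a sustained rate
# of ~184 MB/s yields roughly the 8.5 days mentioned in the text.
print(crawl_time_days(11.5e9, 12, 184))  # ~8.5
```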
