Optimal Query Generation for Hidden Web Extraction through Response Analysis

Sonali Gupta, Komal Kumar Bhatia
Copyright: © 2014 | Pages: 18
DOI: 10.4018/ijirr.2014040101

Abstract

A huge number of Hidden Web databases exist over the WWW, forming a massive source of high-quality information. Retrieving this information to enrich the repository of a search engine is the prime target of a Hidden Web crawler; besides this, the crawler should perform the task at an affordable cost and with reasonable resource utilization. This paper proposes a Random ranking mechanism by which the queries to be issued by the Hidden Web crawler are ranked. By ranking the queries according to the proposed mechanism, the Hidden Web crawler is able to make an optimal choice among the candidate queries and efficiently retrieve the contents of Hidden Web databases. The proposed crawler also possesses an extensible and scalable framework that improves the efficiency of crawling. The proposed approach has been compared with other methods of Hidden Web crawling existing in the literature.

Introduction

With the swift development of the Web, more and more web databases appear on the WWW. However, access to these databases is guarded by search forms, making their content inaccessible to conventional Web crawlers. This portion of the Web is commonly referred to as the “hidden web” or “deep web”. Bergman (2001) notes that hidden web content is particularly important: not only is its size estimated to be hundreds of times larger than that of the so-called Surface Web, but its information is also considered to be of very high quality. Obtaining the content of the Hidden Web is challenging, and a common solution lies in designing a Hidden Web crawler (Raghavan & Garcia-Molina, 2001; Ntoulas, Zerfos, & Cho, 2005). The control flow of a basic crawler for the Hidden Web is shown in Figure 1.

Figure 1. Basic crawler for the hidden web

A Hidden Web crawler basically starts by parsing the given search form to extract its various controls and fills the search form with the help of a task-specific database. The task-specific database usually contains a list of possible values that can be submitted to the controls on the search form. The crawler then submits the filled form to the WWW to retrieve pages from the associated database and stores the response pages in a repository called the Hidden Web page repository.
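To make this control flow concrete, the following minimal sketch mirrors the parse, fill, submit and store steps. The form URL, the control names and the task-specific values used here are purely illustrative assumptions, not part of any particular crawler.

```python
# A minimal sketch of the parse/fill/submit/store loop of a basic Hidden Web
# crawler. The URL, field names, and task-specific values are illustrative
# assumptions only.
from html.parser import HTMLParser
from urllib.parse import urlencode
from urllib.request import urlopen

class FormControlParser(HTMLParser):
    """Collects the names of <input>, <select> and <textarea> controls in a form."""
    def __init__(self):
        super().__init__()
        self.controls = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select", "textarea"):
            name = dict(attrs).get("name")
            if name:
                self.controls.append(name)

# Task-specific database: possible values for each control (assumed example).
TASK_DB = {"title": ["database", "crawler", "retrieval"]}

SEARCH_FORM_URL = "http://example.org/search"   # hypothetical search form

def crawl(form_url, task_db):
    repository = []                               # Hidden Web page repository
    parser = FormControlParser()
    parser.feed(urlopen(form_url).read().decode("utf-8", "ignore"))
    for control in parser.controls:               # fill each control in turn
        for value in task_db.get(control, []):
            query = urlencode({control: value})
            with urlopen(f"{form_url}?{query}") as resp:   # submit the filled form
                repository.append(resp.read())             # store the response page
    return repository
```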


Literature Review

A Hidden Web crawler aims to harvest as many data records as possible, and to do so efficiently (Barbosa & Freire, 2004; Ntoulas, Zerfos, & Cho, 2005). Barbosa and Freire (2004) first introduced the idea and presented a query selection method that generates the next query from the most frequent keywords in the previously retrieved records. However, queries built from the most frequent keywords in hand do not ensure that more new records are returned from the Deep Web database. Ntoulas, Zerfos, and Cho (2005) proposed a greedy query selection method in which candidate query keywords are generated from the records obtained so far, each candidate is scored with a rate measure, and the one with the maximum expected harvest rate is selected as the next query.
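The following minimal sketch illustrates such a greedy selection loop. The keyword extraction and the way the expected harvest of a candidate is estimated are simplified assumptions standing in for the rate measure of Ntoulas, Zerfos, and Cho (2005); they do not reproduce the published formula.

```python
# Simplified sketch of greedy query selection: keywords are drawn from the
# records obtained so far, and the keyword expected to return the most *new*
# records is issued next. The scoring below is an illustrative stand-in for
# the harvest-rate measure, not the exact formula from the cited work.
from collections import Counter

def candidate_keywords(records):
    """Count keywords across the documents retrieved so far."""
    counts = Counter()
    for rec in records:
        counts.update(rec.lower().split())
    return counts

def greedy_crawl(issue_query, seed_keyword, max_queries=50):
    """issue_query(keyword) -> list of record texts returned by the database."""
    seen, issued = set(), set()
    keyword = seed_keyword
    for _ in range(max_queries):
        issued.add(keyword)
        results = issue_query(keyword)
        seen.update(r for r in results if r not in seen)
        # Expected harvest of a candidate: here simply its frequency among the
        # records seen so far (an assumption, not the published rate measure).
        counts = candidate_keywords(seen)
        remaining = {k: c for k, c in counts.items() if k not in issued}
        if not remaining:
            break
        keyword = max(remaining, key=remaining.get)   # greedy choice of next query
    return seen
```

A caller would supply `issue_query` as a wrapper around the actual search-form submission, so the same loop can be reused over different Hidden Web databases.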

Kashyap, Hristidis, Petropoulos, and Tavoular (2011) proposed an intuitive way to categorize the results of a query using a static concept hierarchy. Their solution is meant to deal with the information overload problem and to let users navigate the results effectively, since search queries on web databases often return a large number of results. The work focuses on the number of records retrieved rather than the size of the retrieved contents of the database.
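As an illustration of this idea, the fragment below groups result records under the nodes of a small static concept hierarchy; the hierarchy and the record-to-concept mapping are invented examples rather than the hierarchy used by the cited work.

```python
# Illustrative grouping of query results under a static concept hierarchy.
# The hierarchy and the mapping function are invented examples.
from collections import defaultdict

CONCEPT_HIERARCHY = {                      # child concept -> parent concept
    "databases": "computer science",
    "information retrieval": "computer science",
    "computer science": None,
}

def categorize(results, concept_of):
    """Group result records by concept, rolling each record up to its ancestors."""
    groups = defaultdict(list)
    for record in results:
        concept = concept_of(record)
        while concept is not None:         # attach the record along the path to the root
            groups[concept].append(record)
            concept = CONCEPT_HIERARCHY.get(concept)
    return groups

# Usage (hypothetical records): categorize(records, lambda r: r["topic"]) returns,
# e.g., all records under "databases" and, one level up, under "computer science".
```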

Cheng, Termehchy, and Hristidis (2012) consider the structure and the content of both the database and the query results to identify queries that are likely to have low ranking quality. Their system has been evaluated on keyword queries over web databases and suggests alternative queries that can substitute for such low-ranking or hard queries so as to improve user satisfaction.

A critical review of the referenced literature reveals the following shortcomings:

1. The query generation process for form filling is optimized on the basis of queries executed only during the current run of the crawler and does not take into consideration the outcome of queries executed during previous runs of the crawler.

2. Some methods consider the downloaded Hidden Web pages, or response pages, as feedback to the crawler so as to download more and more data from the Hidden Web databases, but none of the approaches has considered the size of the response pages as a criterion in the feedback.

3. Most of these methods assign weights to query terms by taking into account the frequency of their occurrence; computing these frequencies requires downloading and analysing all the retrieved documents, which makes the process highly inefficient (a sketch of such frequency-based weighting appears after this list).
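The third point refers to weighting schemes of the kind sketched below, where term weights are derived from occurrence frequencies and every retrieved response page must therefore be downloaded and scanned before the next query can be chosen. The tokenization and the weight formula are simplified assumptions used only to illustrate this cost.

```python
# Frequency-based term weighting of the kind criticized above: every retrieved
# response page has to be fetched and scanned to update the counts before the
# next query term can be selected. Tokenization is deliberately naive.
from collections import Counter
from urllib.request import urlopen

def frequency_weights(page_urls):
    counts = Counter()
    for url in page_urls:                               # download every response page ...
        text = urlopen(url).read().decode("utf-8", "ignore")
        counts.update(text.lower().split())             # ... and analyse all of its terms
    total = sum(counts.values()) or 1
    return {term: c / total for term, c in counts.items()}   # normalized term weights
```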
