Using Logic Programming and XML Technologies for Data Extraction from Web Pages

Amelia Badica (University of Craiova, Romania), Costin Badica (University of Craiova, Romania) and Elvira Popescu (University of Craiova, Romania)
Copyright © 2009 | Pages: 31
DOI: 10.4018/978-1-59904-576-4.ch002

The Web is designed as a major information provider for the human consumer. However, information published on the Web is difficult to understand and reuse by a machine. In this chapter, we show how well established intelligent techniques based on logic programming and inductive learning combined with more recent XML technologies might help to improve the efficiency of the task of data extraction from Web pages. Our work can be seen as a necessary step of the more general problem of Web data management and integration.
Chapter Preview


The Web is extensively used for information dissemination to humans and businesses. For this purpose, Web technologies are used to convert data from internal formats, usually specific to database management systems, into presentations suitable for attracting human users. However, interest has rapidly shifted toward making that information available for machine consumption, driven by the realization that Web data can be reused for various problem-solving purposes, including common tasks like searching and filtering, as well as more complex tasks like analysis, decision making, reasoning and integration.

For example, in the e-tourism domain one can note an increasing number of travel agencies offering online services through online transaction brokers (Laudon & Traver, 2004). They provide useful information to human users about hotels, flights, trains or restaurants, in order to help them plan their business or holiday trips. Travel information, like most information published on the Web, is heterogeneous and distributed, and there is a need to gather, search, integrate and filter it efficiently (Staab et al., 2002) and ultimately to enable its reuse for multiple purposes. For example, personal assistant agents can integrate travel and weather information to assist and advise humans in planning their weekends and holidays. Another interesting use of data harvested from the Web, recently proposed by Gottlob (2005), is to feed business intelligence tasks in areas like competitive analysis and intelligence.

Two emergent technologies that have been put forward to enable automated processing of information published on the Web are semantic markup (W3C Semantic Web Activity, 2007) and Web services (Web Services Activity, 2007). However, most current Web publishing practice is still based on combining traditional HTML, the lingua franca of Web publishing (W3C HTML, 2007), with server-side dynamic content generation from databases. Moreover, many Web pages use HTML elements originally intended for structuring content (e.g., table-related elements) for layout and presentation effects, even though this practice is discouraged in theory. Therefore, techniques developed in areas like information extraction, machine learning and wrapper induction are still expected to play a significant role in tackling the problem of Web data extraction.
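To illustrate how XML technologies can be applied to such pages, the following minimal sketch (our illustration, not code from the chapter; the hotel data is invented) treats a page fragment as XML and pulls records out of a table using the Python standard library's ElementTree. It assumes the page is already well-formed XHTML; in practice, raw HTML is typically cleaned into XHTML first, e.g., with a tool like HTML Tidy.

```python
# Extracting table data from a well-formed XHTML fragment (assumption:
# the page has been tidied into valid XML; the content is invented).
import xml.etree.ElementTree as ET

page = """
<html><body>
  <table>
    <tr><td>Hotel Parc</td><td>Craiova</td><td>45 EUR</td></tr>
    <tr><td>Hotel Rexton</td><td>Craiova</td><td>60 EUR</td></tr>
  </table>
</body></html>
"""

root = ET.fromstring(page)
# One list per <tr>, one string per <td> -- the "semi-structured" records.
rows = [[td.text for td in tr.findall("td")] for tr in root.iter("tr")]
print(rows)
```

The same navigation could be expressed declaratively with XPath expressions or, as in logic-programming approaches, as relations over the document tree.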

Data extraction is related to the more general problem of information extraction, traditionally associated with artificial intelligence and natural language processing. Information extraction was originally concerned with locating specific pieces of information in text documents written in natural language (Lehnert & Sundheim, 1991) and then using them to populate a database or structured document. The field then expanded to cover extraction tasks from Web documents represented in HTML and attracted other communities, including databases, electronic documents, digital libraries and Web technologies. The content of these data sources can usually be characterized as neither natural language text nor fully structured data, so the term semi-structured data is commonly used. For these cases, we consider the term data extraction more appropriate than information extraction and consequently shall use it in the rest of this chapter.

A wrapper is a program that performs the data extraction task. On the one hand, manual creation of Web wrappers is a tedious, error-prone and difficult task because of Web heterogeneity in both structure and content. On the other hand, the construction of Web wrappers is a necessary step toward more complex tasks like decision making and integration. Therefore, many techniques for (semi-)automatic wrapper construction have been proposed. One application area that can be described as a success story for machine learning technologies is wrapper induction for Web data extraction. For a recent overview of state-of-the-art approaches in the field see Chang, Kayed, Girgis, and Shaalan (2006).
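To make the notion of a wrapper concrete, here is a hypothetical hand-written wrapper sketched in Python (illustrative only; not the chapter's logic-programming approach, and the field names are invented). It hard-codes the page layout, one record per table row with fields in a fixed column order, which is precisely why manual wrappers are brittle when a site changes and why (semi-)automatic wrapper induction is attractive.

```python
# A hand-written wrapper sketch: hard-codes the assumption that each <tr>
# is one record and columns appear in the order (name, city, price).
from html.parser import HTMLParser

class HotelWrapper(HTMLParser):
    FIELDS = ("name", "city", "price")  # assumed, site-specific column order

    def __init__(self):
        super().__init__()
        self.records = []   # extracted records as dictionaries
        self._row = []      # cells of the row currently being parsed
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.records.append(dict(zip(self.FIELDS, self._row)))

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

wrapper = HotelWrapper()
wrapper.feed("<table><tr><td>Hotel Parc</td><td>Craiova</td>"
             "<td>45 EUR</td></tr></table>")
print(wrapper.records)
```

Any change in the source page, such as an extra column or a reordered layout, silently breaks the fixed-field assumption above; an induced wrapper would instead be (re)learned from labeled example pages.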
