Towards Comparative Mining of Web Document Objects with NFA: WebOMiner System

C. I. Ezeife (School of Computer Science, University of Windsor, Windsor, ON, Canada) and Titas Mutsuddy (School of Computer Science, University of Windsor, Windsor, ON, Canada)
Copyright: © 2012 |Pages: 21
DOI: 10.4018/jdwm.2012100101

Abstract

The process of extracting comparative, heterogeneous web content data (both derived and historical) from related web pages is still in its infancy. Discovering potentially useful and previously unknown knowledge from web contents, such as "list all articles on 'Sequential Pattern Mining' written between 2007 and 2011, including title, authors, volume, abstract, paper, citation, and year of publication," would require finding the schema of web documents from different web pages, performing web content data integration, and building a virtual or physical data warehouse before web contents can be extracted and mined from the database. This paper proposes a technique for automatic web content data extraction, the WebOMiner system, which models web sites of a specific domain, such as Business-to-Customer (B2C) web sites, as object-oriented database schemas. Non-deterministic finite-state automata (NFA) based wrappers for recognizing content types from this domain are then built and used to extract related contents from data blocks into an integrated database for future second-level mining and deep knowledge discovery.
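To make the NFA-based wrapper idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): a small non-deterministic finite automaton that accepts a "product" data block expressed as a sequence of typed content tokens extracted from a B2C page. The state names, token vocabulary, and transitions here are assumptions chosen for illustration only.

```python
# Hypothetical NFA for recognizing a product data block as a token
# sequence such as title -> image -> price -> optional descriptions.
# state -> {input token -> set of next states}
PRODUCT_NFA = {
    "start":     {"title": {"got_title"}},
    "got_title": {"image": {"got_image"}, "price": {"accept"}},
    "got_image": {"price": {"accept"}},
    "accept":    {"desc": {"accept"}},  # trailing descriptions allowed
}

ACCEPTING = {"accept"}

def recognizes(nfa, tokens):
    """Return True if the NFA accepts the given token sequence."""
    states = {"start"}
    for tok in tokens:
        # Follow every possible transition from every current state.
        states = set().union(*(nfa.get(s, {}).get(tok, set()) for s in states))
        if not states:
            return False  # no live states: the sequence is rejected early
    return bool(states & ACCEPTING)

print(recognizes(PRODUCT_NFA, ["title", "image", "price", "desc"]))  # True
print(recognizes(PRODUCT_NFA, ["image", "price"]))                   # False
```

In the WebOMiner setting, such an automaton would run over content tokens produced from a page's data blocks, classifying each block by which content-type NFA accepts it.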
Introduction

The World Wide Web (WWW) has grown exponentially over the years, and web documents constitute some of the largest repositories of information (Kosala & Blockeel, 2000). Web content usually refers to the information a user sees on a web document; it also includes some hidden information that helps users interact with web contents. Web contents are heterogeneous in nature and may take different forms, such as text, image, hyperlink, metadata, audio, and video, as well as combinations of these content types. No complete classification of all these different types of web contents exists. Web content data are updated frequently, are volatile, and are not historical (Bhowmick et al., 1999; Dung, Rahayu, & Taniar, 2007). The creation and maintenance of a data warehouse based on web content data is therefore needed for effective derived and historical querying of such data. Some researchers have adopted a virtual approach to web data extraction, without creating a physical database or warehouse (Bornhövd & Buchmann, 1999), but this approach may have difficulty with contents like images. Web pages also contain other information blocks, such as advertisements, attached pages, and copyright notices; these are also web contents but are usually not considered part of the primary page information. Such unwanted information in a web page is called noise, and it usually needs to be cleaned before mining the web contents (Gupta et al., 2005; Ezeife & Ohanekwu, 2005; Li & Ezeife, 2006). Borges and Levene (1999) categorized web mining into three areas: web structure mining, web usage mining, and web content mining.
Web usage mining processes usage information, or the history of users' visits to different web pages, which is generally stored in chronological order in web log files, server logs, error logs, and cookie logs (Buchner & Mulvenna, 1998; Ezeife & Lui, 2009; Priya & Vadivel, 2012). When a mechanism is used to extract relevant and important information from web documents, or to discover knowledge or patterns from them, it is called web content mining. Traditional mechanisms include: providing a language to extract certain patterns from web pages, discovering frequent patterns, clustering for document classification, machine learning for wrapper (i.e., data extraction program) induction, and automatic wrapper generation (Liu & Chen-Chung-Chang, 2004; Muslea, Minton, & Knoblock, 1999; Zhao et al., 2005; Crescenzi, Mecca, & Merialdo, 2001; Liu, 2007). All these traditional mechanisms are unable to capture heterogeneous web contents together, as they rely strictly on web document presentation structure. Existing extractors are also limited with regard to finding comparative historical and derived information from web documents. Creating more robust automatic wrappers for multiple data sources requires incorporating efficient techniques for automatic schema (attribute) matching, some of which are presented in Lewis and Janeja (2011). Methods for testing the quality of extracted and integrated information can also be incorporated in the future (Golfarelli & Rizzi, 2011). Some sample queries that may not be accurately answered by existing systems are:

  1. Provide a comparative analysis of products, including sales and comments, across four retail store web sites over the past year.

  2. List all 17” LCD Samsung monitors selling around Toronto for less than $200.
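Once heterogeneous product contents have been extracted and integrated into a warehouse, queries like the second one above reduce to simple relational filters. The sketch below illustrates this with an in-memory SQLite table; the table schema, column names, and sample rows are all assumptions made for illustration, not the paper's actual warehouse design.

```python
import sqlite3

# Hypothetical integrated product warehouse built from several
# retail store web sites (schema and data are illustrative only).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE product (
    title TEXT, brand TEXT, size_inch REAL, price REAL,
    store TEXT, city TEXT)""")
con.executemany(
    "INSERT INTO product VALUES (?, ?, ?, ?, ?, ?)",
    [('17" LCD monitor', "Samsung", 17, 179.99, "StoreA", "Toronto"),
     ('17" LCD monitor', "Samsung", 17, 249.99, "StoreB", "Toronto"),
     ('19" LCD monitor', "LG",      19, 150.00, "StoreC", "Toronto")])

# Query 2: 17" Samsung LCD monitors around Toronto priced under $200.
rows = con.execute(
    """SELECT title, store, price FROM product
       WHERE brand = 'Samsung' AND size_inch = 17
         AND city = 'Toronto' AND price < 200""").fetchall()
print(rows)  # only the StoreA listing qualifies
```

The point of the integration step is precisely that such a cross-store comparative query becomes answerable at all: against the raw, presentation-structured web pages, no single extractor can evaluate it.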
