Introduction
The World Wide Web (WWW) has grown exponentially over the years, and web documents constitute some of the largest repositories of information (Kosala & Blockeel, 2000). Web content usually refers to the information that a user sees on a web document, but it also includes hidden information that helps users interact with web contents. Web contents are heterogeneous in nature and may take different forms, such as text, image, hyperlink, metadata, audio and video, as well as combinations of these content types. A complete classification of all these different types of web contents does not exist. Web content data are updated frequently, volatile and not historical (Bhowmick et al., 1999; Dung, Rahayu, & Taniar, 2007). The creation and maintenance of a data warehouse based on web content data is therefore needed for effective derived and historical querying of web content data. Some researchers adopted a virtual approach to web data extraction without creating a physical database and warehouse (Bornhövd & Buchmann, 1999), but such systems may have difficulty with contents like images. Web pages also contain other information or blocks, such as advertisements, attached pages and copyright notices. These are also web contents, but they are usually not considered part of the primary page information. Such unwanted information in a web page is called noise information, and it usually needs to be cleaned before mining the web contents (Gupta et al., 2005; Ezeife & Ohanekwu, 2005; Li & Ezeife, 2006). Borges and Leven (1999) categorized web mining into three areas: web structure mining, web usage mining and web content mining.
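To make the noise-cleaning step concrete, the following is a minimal Python sketch that strips noise blocks from raw page HTML before mining. The class names used to mark noise (`ad`, `advert`, `copyright`, `footer`) are illustrative assumptions; practical cleaners typically rely on page structure and the frequency of blocks across pages rather than keyword patterns alone.

```python
import re

# Hypothetical markers of noise blocks; real systems use tag structure and
# cross-page block frequency rather than a fixed keyword list like this one.
NOISE_PATTERNS = [
    r'<div[^>]*class="(?:ad|advert|copyright|footer)[^"]*"[^>]*>.*?</div>',
]

def clean_page(html: str) -> str:
    """Strip noise blocks (ads, copyright notices) from raw page HTML."""
    for pattern in NOISE_PATTERNS:
        html = re.sub(pattern, "", html, flags=re.DOTALL | re.IGNORECASE)
    return html

page = ('<div class="content">17-inch LCD monitor, $189</div>'
        '<div class="ad">Buy now! Limited offer.</div>')
print(clean_page(page))  # only the primary content block remains
```

The cleaned page retains only the primary content block, which can then be passed to the content-mining stage.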
Web usage mining processes usage information, the history of users' visits to different web pages, which is generally stored in chronological order in web log files, server logs, error logs and cookie logs (Buchner & Mulvenna, 1998; Ezeife & Lui, 2009; Priya & Vadivel, 2012). When a mechanism is used to extract relevant and important information from web documents, or to discover knowledge or patterns from them, it is called web content mining. Traditional mechanisms include: providing a language to extract certain patterns from web pages, discovering frequent patterns, clustering for document classification, machine learning for wrapper (i.e., data extraction program) induction, and automatic wrapper generation (Liu & Chen-Chung-Chang, 2004; Muslea, Minton, & Knoblock, 1999; Zhao et al., 2005; Crescenzi, Mecca, & Merialdo, 2001; Liu, 2007). All of these traditional mechanisms are unable to capture heterogeneous web contents together because they strictly rely on the web document presentation structure. Existing extractors are also limited in finding comparative, historical and derived information from web documents. Creating more robust automatic wrappers for multiple data sources requires incorporating efficient techniques for automatic schema (attribute) matching, some of which are presented in Lewis and Janeja (2011). Methods for testing the quality of extracted and integrated information can also be incorporated in the future (Golfarelli & Rizzi, 2011). Some sample queries that may not be accurately answered by existing systems are:
1. Provide a comparative analysis of products, including sales and comments, on four retail store web sites over the past year.
2. List all 17" LCD Samsung monitors selling around Toronto for less than $200.
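As a sketch of how the second query might be answered once heterogeneous web contents are extracted and warehoused, the following Python fragment filters a hypothetical product table. The schema (`brand`, `type`, `size_in`, `city`, `price`) and the records are illustrative assumptions, not part of the system described here.

```python
# Hypothetical warehoused product records gathered from retail web sites;
# the schema and values below are assumptions for illustration only.
products = [
    {"brand": "Samsung", "type": "LCD", "size_in": 17, "city": "Toronto", "price": 189.99},
    {"brand": "Samsung", "type": "LCD", "size_in": 17, "city": "Ottawa",  "price": 175.00},
    {"brand": "Samsung", "type": "LCD", "size_in": 19, "city": "Toronto", "price": 249.99},
]

# Query 2: 17" LCD Samsung monitors around Toronto priced under $200.
matches = [p for p in products
           if p["brand"] == "Samsung" and p["type"] == "LCD"
           and p["size_in"] == 17 and p["city"] == "Toronto"
           and p["price"] < 200]
print(matches)
```

Answering the first query would additionally require historical snapshots of each store's pages, which is precisely what a warehouse of web content data provides over a virtual extraction approach.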