Knowledge Extraction from Information System Using Rough Computing

DOI: 10.4018/978-1-4666-8513-0.ch009

Abstract

The data collected from various sources in real-life situations is rarely in complete form; it is seldom precise and rarely yields definite knowledge. It almost always contains uncertainty and vagueness. As a result, most of our traditional tools for formal modelling, reasoning, and computing cannot handle it efficiently. It is therefore very challenging to organize such data in a formal system that provides information in a more relevant, useful, and structured manner. Many techniques are available for knowledge extraction from such high-dimensional data. This chapter discusses various rough computing based knowledge extraction techniques for obtaining meaningful knowledge from large amounts of data. A real-life example is provided to show the viability of the proposed research.

Introduction

In the present age of the Internet, a huge repository of data is available across various domains, owing to the widespread adoption of distributed computing, which disperses data geographically. In addition, these data are neither crisp nor deterministic because of the presence of uncertainty and vagueness. Analyzing them to obtain meaningful information is a great challenge for human beings, and extracting expert knowledge from such a universal dataset is therefore very difficult. Much information also remains hidden in the accumulated voluminous data. Most of our traditional tools for knowledge extraction are crisp, deterministic, and precise in character, so a new generation of computational theories and tools is needed to assist humans in extracting knowledge from rapidly growing digital data. Knowledge discovery in databases (KDD) has evolved into an important and active area of research because of the theoretical challenges associated with discovering intelligent solutions for huge volumes of data. Knowledge discovery and data mining is a rapidly growing interdisciplinary field that merges database management, statistics, computational intelligence, and related areas. The basic aim of all of these is knowledge extraction from voluminous data.

The processes of knowledge discovery in databases and information retrieval appear deceptively simple when viewed from the perspective of terminological definition (Fayyad, 1996). Knowledge discovery in databases is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data. It consists of several stages: data selection, data cleaning, data enrichment, coding, data mining, and reporting, as shown in Figure 1. The closely related process of information retrieval is defined by Rocha (2001) as "the methods and processes for searching relevant information out of information systems that contain extremely large numbers of documents". In practice, however, these processes are not simple at all, especially when executed to satisfy specific personal or organizational knowledge management requirements. It is also observed that the usefulness of an individual data element, or of a pattern of data elements, changes dramatically from individual to individual, organization to organization, or task to task. This is because the acquisition of knowledge and the associated reasoning involve vagueness and incompleteness. In addition, extracting knowledge or describing data patterns in a generally understandable form is highly problematic. There is therefore a strong need for methods that can deal with incomplete and vague information in classification, concept formulation, and data analysis.

Figure 1. The KDD process
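
To make the stages of Figure 1 concrete, the following is a minimal sketch of a KDD pipeline in Python. It is only illustrative: the record layout, attribute names, and function names (select, clean, enrich_and_code, mine, report) are hypothetical and are not taken from the chapter.

```python
# A minimal, illustrative sketch of the KDD stages described above
# (selection, cleaning, enrichment, coding, data mining, reporting).
# All records, fields, and function names here are hypothetical.

from collections import Counter

raw_records = [
    {"id": 1, "age": "34", "income": "45000", "bought": "yes"},
    {"id": 2, "age": None, "income": "52000", "bought": "no"},
    {"id": 3, "age": "29", "income": "61000", "bought": "yes"},
]

def select(records, fields):
    """Data selection: keep only the attributes relevant to the task."""
    return [{f: r.get(f) for f in fields} for r in records]

def clean(records):
    """Cleaning: drop records with missing values (one simple policy)."""
    return [r for r in records if all(v is not None for v in r.values())]

def enrich_and_code(records):
    """Enrichment and coding: derive new attributes and discretize values."""
    coded = []
    for r in records:
        age = int(r["age"])
        coded.append({
            "age_group": "young" if age < 30 else "adult",
            "bought": r["bought"],
        })
    return coded

def mine(records):
    """Data mining: here, simple frequency counts of coded patterns."""
    return Counter((r["age_group"], r["bought"]) for r in records)

def report(patterns):
    """Reporting: present the discovered patterns."""
    for pattern, count in patterns.items():
        print(pattern, "->", count)

report(mine(enrich_and_code(clean(select(raw_records, ["age", "bought"])))))
```

Each stage is written as a separate function so that any single step, such as the cleaning policy or the coding scheme, can be replaced without disturbing the rest of the pipeline; real KDD systems differ mainly in how sophisticated each of these stages is.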
