Data Mining Fundamental Concepts and Critical Issues

John Wang, Qiyang Chen, James Yao
Copyright: © 2009 |Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch064


Data mining is the process of extracting previously unknown information from large databases or data warehouses and using it to make crucial business decisions. Data mining tools find patterns in the data and infer rules from them. The extracted information can be used to form a prediction or classification model, identify relations between database records, or provide a summary of the databases being mined. Those patterns and rules can be used to guide decision making and forecast the effect of those decisions, and data mining can speed analysis by focusing attention on the most important variables.
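As an illustrative sketch (not part of the chapter), the kind of rule inference described above can be shown on a toy transaction database: count which item pairs co-occur, then report pairs whose support and confidence clear a threshold, in the style of association-rule mining. All item names and thresholds here are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Toy transaction database (hypothetical purchases).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]

# Count how often each item pair occurs together.
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

n = len(transactions)
for (a, b), count in pair_counts.items():
    support = count / n                              # P(a and b)
    n_a = sum(1 for t in transactions if a in t)
    confidence = count / n_a                         # P(b | a)
    if support >= 0.4 and confidence >= 0.7:
        print(f"{a} -> {b}  support={support:.2f}  confidence={confidence:.2f}")
```

On this toy data, every pair appears in three of the five transactions, so all three rules pass the (arbitrary) support and confidence cutoffs; real data-mining tools apply the same idea at a vastly larger scale.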
Chapter Preview


We are drowning in data but starving for knowledge. In recent years the volume of stored information has increased dramatically; some researchers suggest that it doubles every year. Disk storage per person (DSP) is one way to measure the growth in personal data: Edelstein (2003) estimated that DSP grew from 28MB in 1996 to 472MB in 2000.

Data mining seems to be the most promising solution to the dilemma of having too much data but too little knowledge. By using pattern recognition technologies and statistical and mathematical techniques to sift through warehoused information, data mining helps analysts recognize significant facts, relationships, trends, patterns, exceptions, and anomalies. Data mining can advance a company’s position by creating a sustainable competitive advantage. Data warehousing and mining is the science of managing and analyzing large datasets and discovering novel patterns (Davenport & Harris, 2007; Wang, 2006; Olafsson, 2006).

Data mining is taking off for several reasons: organizations are gathering more data about their businesses, storage costs have dropped enormously, competitive business pressures are rising, companies want to leverage their existing information technology investments, and the cost/performance ratio of computer systems has improved dramatically. Another reason is the rise of data warehousing. In the past, it was often necessary to gather the data, cleanse it, and merge it; now, in many cases, the data are already sitting in a data warehouse ready to be used.

Over the last 40 years, the tools and techniques for processing data and information have continued to evolve, from databases to data warehousing and further to data mining. Data warehousing applications have become business-critical, and data mining can squeeze even more value out of these huge repositories of information. Data mining is a multidisciplinary field spanning databases, statistics, artificial intelligence, pattern recognition, machine learning, information theory, control theory, operations research, information retrieval, data visualization, and high-performance (parallel and distributed) computing (Zhou, 2003; Hand, Mannila, & Smyth, 2001).

Certainly, many statistical models emerged long ago, and machine learning has marked a milestone in the evolution of computer science. Although data mining is still in its infancy, it is now being used in a wide range of industries and for a variety of tasks and contexts (Wang, 2003; Lavoie, Dempsey, & Connaway, 2006). Data mining is synonymous with knowledge discovery in databases, knowledge extraction, data/pattern analysis, data archeology, data dredging, data snooping, data fishing, information harvesting, and business intelligence (Han & Kamber, 2001).

Key Terms in this Chapter

Neural Networks: Predictive algorithms modeled loosely on the neurons of the biological brain; a widely used technique within artificial intelligence (AI).

Data Visualization: A technology for helping users to see patterns and relationships in large amounts of data by presenting the data in graphical form.

Information Retrieval: The art and science of searching for information in documents, for documents themselves, for metadata that describe documents, or within databases (whether stand-alone relational databases or hypertext networked databases such as the Internet or intranets) for text, sound, images, or data.

Explanatory Variables: The variables that explain the variation of a particular target variable. Also called driving, descriptive, or independent variables; these terms are used interchangeably.

Pattern Recognition: The act of taking in raw data and taking an action based on the category of the data. It is a field within the area of machine learning.

Segmentation: A major branch of data mining comprising technology that not only identifies statistically significant relationships between explanatory and target variables, but also determines noteworthy segments within variable categories that have a pronounced impact on the target variable.

Machine Learning: The field concerned with developing algorithms and techniques that allow computers to “learn”.

Data Mining: The process of automatically searching large volumes of data for patterns. Data mining is a relatively recent topic in computing.

Information Quality Decay: The degradation of data quality that occurs when facts about real-world objects change over time but are not updated in the database.

Predictive Analysis: Use of data mining techniques, historical data, and assumptions about future conditions to predict outcomes of events.
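As a minimal sketch of the predictive-analysis idea above (not from the chapter), the simplest case fits a model to historical data and extrapolates to a future period. Here an ordinary least-squares line is fitted to hypothetical quarterly figures and used to forecast the next quarter; all numbers are invented for illustration.

```python
# Fit y = a + b*x by ordinary least squares to historical data,
# then predict the next period (all figures are hypothetical).
xs = [1, 2, 3, 4, 5]                  # e.g. quarters
ys = [10.0, 12.1, 13.9, 16.2, 18.0]   # e.g. sales per quarter

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); intercept from the means.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

next_x = 6
forecast = a + b * next_x
print(f"slope={b:.3f}, intercept={a:.3f}, forecast for x=6: {forecast:.2f}")
```

Real predictive analysis layers assumptions about future conditions on top of such fitted models, but the core step, learning parameters from historical data and applying them forward, is the same.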
