Big Data Mining Based on Computational Intelligence and Fuzzy Clustering

Usman Akhtar, Mehdi Hassan
DOI: 10.4018/978-1-4666-8505-5.ch007

Abstract

The availability of huge amounts of heterogeneous data from different sources on the Internet has been termed the problem of Big Data. Clustering is widely used as a knowledge discovery tool that separates data into manageable parts, and clustering algorithms are needed that scale to big databases. In this chapter we explore various schemes that have been used to tackle big databases. Statistical features are extracted from the given dataset, redundant and irrelevant features are eliminated, and the most important and relevant features are selected by a genetic algorithm (GA). Clustering with the reduced feature set requires less computational time and fewer resources. Experiments have been performed on standard datasets, and the results indicate that clustering based on the proposed scheme offers high clustering accuracy. Several quality measures have been computed to assess clustering quality, and the proposed methodology improves on them significantly, indicating that the proposed technique offers high-quality clustering.
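As a rough illustration of the pipeline the abstract describes (extract features, select a relevant subset with a GA, cluster on the reduced set), the sketch below uses a toy dataset, a silhouette-score fitness, and simple GA operators; the dataset, fitness choice, and GA parameters are assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = load_iris().data              # stand-in for one of the "standard datasets"
n_features = X.shape[1]

def fitness(mask):
    """Clustering quality (silhouette score) using only the selected features."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return -1.0
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, cols])
    return silhouette_score(X[:, cols], labels)

# Simple generational GA over binary feature-selection masks.
pop = rng.integers(0, 2, size=(20, n_features))
for _ in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)              # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.1            # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected feature indices:", np.flatnonzero(best))

Clustering is then run only on the selected columns, which is what makes the reduced-feature approach cheaper in time and memory.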
Chapter Preview

3. Data Mining Challenges With Big Data

The goals of data mining techniques go beyond extracting the requested information or even hidden patterns; they must also deal with heterogeneity, scalability, and accuracy. There is a need to design and implement large-scale machine learning and data mining algorithms that can process very large-scale data. There are two main challenge areas for big data mining: the computing platform and the big data mining algorithms themselves.

Key Terms in this Chapter

Volume: The large amount of data generated every second, or data intensity, that must be ingested, analyzed, and managed to make decisions based on complete data analysis.

Feature Extraction: It is a process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy.
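For example, a minimal sketch of statistical feature extraction; the raw signals and the particular statistics chosen here are hypothetical illustrations, not taken from the chapter.

import numpy as np
from scipy import stats

# Hypothetical raw data: 100 signals of 256 samples each.
signals = np.random.default_rng(1).normal(size=(100, 256))

# Replace each 256-value signal with four statistical summary features.
features = np.column_stack([
    signals.mean(axis=1),             # central tendency
    signals.std(axis=1),              # spread
    stats.skew(signals, axis=1),      # asymmetry
    stats.kurtosis(signals, axis=1),  # tail weight
])
print(features.shape)  # (100, 4)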

Cluster Analysis: Cluster analysis aims at identifying groups of similar objects and therefore helps to discover the underlying distribution of the data.

Crisp Clustering: Hard clustering of unlabeled objects into non-empty, mutually disjoint subsets such that the intersection of any two subsets is empty and the union of all subsets is the whole data set.
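In the standard formulation (stated here for clarity, not quoted from the chapter), a crisp c-partition of a data set $X = \{x_1, \dots, x_n\}$ consists of subsets $A_1, \dots, A_c$ with

\[
A_i \neq \varnothing \ \forall i, \qquad A_i \cap A_j = \varnothing \ (i \neq j), \qquad \bigcup_{i=1}^{c} A_i = X .
\]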

Fuzzy Clustering: It is more flexible than crisp methods. Fuzzy clustering allows objects to belong to several clusters with different degrees of membership; each column of the fuzzy partition matrix must sum to 1.
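In the usual fuzzy c-partition notation (standard fuzzy c-means conditions, not quoted from the chapter), the membership $u_{ik}$ of object $k$ in cluster $i$ satisfies

\[
u_{ik} \in [0,1], \qquad \sum_{i=1}^{c} u_{ik} = 1 \ \text{for every object } k, \qquad 0 < \sum_{k=1}^{n} u_{ik} < n ,
\]

where the middle condition is the column-sum constraint mentioned in the definition above.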

Big Data: Big Data is a term for massive datasets with a large, varied, and complex structure that are difficult to store, analyze, and visualize for further processing or results.

Variety: The rise of information coming from new sources both inside and outside the walls of the enterprise or organization, which creates integration, management, governance, and architectural pressures on IT.

Velocity: How fast data is being produced and changed, and the speed with which data must be received, understood, and processed.
