Data Mining Algorithms, Fog Computing

S. Thilagamani, A. Jayanthiladevi, N. Arunkumar
DOI: 10.4018/978-1-5225-5972-6.ch012

Abstract

Different methods are used to mine the large amounts of data present in databases, data warehouses, and data repositories. The methods used for mining include clustering, classification, prediction, regression, and association rules. This chapter explores data mining algorithms and fog computing.
Chapter Preview

Introduction

A cluster is a subset of objects that are “similar”: a subset of objects such that the distance between any two objects in the cluster is less than the distance between any object in the cluster and any object not located inside it. Equivalently, a cluster is a connected region of multidimensional space containing a relatively high density of objects. Clustering is the process of partitioning a set of data (or objects) into a number of meaningful sub-divisions, called clusters, and it helps users understand the natural grouping or structure in a data set.

Clustering is unsupervised classification and has no predefined categories. Unsupervised classification is where the outcomes (groupings of pixels with common characteristics) are based on the software's analysis of an image without the user providing sample classes. The system uses techniques to determine which pixels are related and groups them into categories. Clustering is used either as a stand-alone tool to gain insight into the data distribution or as a preprocessing step for other algorithms (Shridhar et al., 2014).
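As an illustration of such unsupervised grouping, the following minimal Python sketch (assuming NumPy and scikit-learn are installed; the pixel values are invented for illustration) lets k-means decide which pixels are related without any user-provided sample classes.

# Minimal sketch of unsupervised grouping; pixel values are invented.
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "pixels": RGB triples with no class labels supplied by the user.
pixels = np.array([
    [250, 10, 12], [245, 20, 18], [240, 15, 25],   # reddish
    [12, 240, 30], [20, 250, 22], [15, 235, 28],   # greenish
])

# The algorithm decides which pixels are related purely from the data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two groups found without sample classes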

Figure 1. Clustering

A good clustering method creates high-quality clusters in which the intra-class similarity is high and the inter-class similarity is low. The quality of a clustering result depends on both the similarity measure used by the method and its implementation. The quality of a clustering method is also evaluated by its ability to discover some or all of the hidden patterns.
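The sketch below (NumPy only; the points and cluster labels are invented) makes this criterion concrete by comparing the mean intra-cluster distance with the mean inter-cluster distance for a toy two-cluster result.

# High intra-class similarity = small intra-cluster distances;
# low inter-class similarity = large inter-cluster distances.
import numpy as np

points = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],   # cluster 0
                   [5.0, 5.1], [5.2, 4.9], [4.9, 5.2]])  # cluster 1
labels = np.array([0, 0, 0, 1, 1, 1])

def mean_pairwise_distance(a, b):
    # Mean Euclidean distance between every point in `a` and every point in `b`.
    diffs = a[:, None, :] - b[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

# Intra-cluster distance (self-distances of zero are included for simplicity).
intra = np.mean([mean_pairwise_distance(points[labels == k], points[labels == k])
                 for k in np.unique(labels)])
inter = mean_pairwise_distance(points[labels == 0], points[labels == 1])

print(f"mean intra-cluster distance: {intra:.2f}")  # small -> high intra-class similarity
print(f"mean inter-cluster distance: {inter:.2f}")  # large -> low inter-class similarity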

Type of Clustering

Clustering can be divided into two subgroups:

  • 1.

    Hard Clustering: In hard clustering, each data point either belongs to a cluster completely or not at all. For instance, in a customer segmentation scenario, each customer is put into exactly one of the 10 groups.

  • 2.

    Soft Clustering: In soft clustering, instead of putting each data point into a single cluster, a probability or likelihood of that data point belonging to each cluster is assigned. For instance, in the same scenario, each customer is assigned a probability of belonging to each of the 10 clusters of the retail shop. A minimal sketch contrasting the two approaches follows this list.
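A minimal sketch of the hard/soft contrast, assuming scikit-learn is available (the customer features, e.g. annual spend and visit count, are invented for illustration):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

customers = np.array([[100, 2], [110, 3], [105, 2],     # low spenders
                      [900, 20], [950, 22], [920, 21]])  # high spenders

# Hard clustering: every customer falls into exactly one cluster.
hard_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print(hard_labels)  # e.g. [0 0 0 1 1 1]

# Soft clustering: every customer gets a probability of belonging to each cluster.
gmm = GaussianMixture(n_components=2, random_state=0).fit(customers)
print(gmm.predict_proba(customers).round(2))  # one row of probabilities per customer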

Clustering Model

Since the task of clustering is subjective, the means that can be applied for attaining this goal are plentiful. Every methodology follows a different set of rules for determining the ‘similarity’ among data points (Finkel et al., 2012). In fact, there are more than 100 clustering algorithms known, but only a few are commonly used. These are:

  • Connectivity Models: As the name indicates, these models are based on the notion that data points closer together in the data space are more similar to each other than data points lying farther apart. These models can follow two approaches. In the first approach, they begin by classifying all data points into separate clusters and then aggregate them as the distance decreases. In the second approach, all data points are classified as a single cluster and then partitioned as the distance increases. In both cases, the choice of distance function is subjective. These models are easy to interpret but lack the scalability needed to handle large datasets. Examples of these models are the hierarchical clustering algorithm and its variants.

  • Centroid Models: These are iterative clustering algorithms in which the notion of similarity is derived from the closeness of a data point to the centroid of the clusters. The k-means clustering algorithm is a popular algorithm that falls into this class. In these models, the number of clusters required at the end has to be specified in advance, which makes it important to have prior knowledge of the dataset. These models run iteratively to find local optima.

  • Distribution Models: These clustering models are based on the notion of how likely it is that all data points in a cluster belong to the same distribution (for example, normal or Gaussian). These models often suffer from overfitting. A popular example of these models is the expectation-maximization algorithm, which uses multivariate normal distributions.

  • Density Models: These models search the data space for regions of varying density of data points. They isolate regions of differing density and assign the data points within the same region to the same cluster. Popular examples of density models are DBSCAN and OPTICS. A sketch running one representative algorithm from each of these four families follows this list.
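The sketch below, assuming scikit-learn is available, runs one representative algorithm from each family on the same synthetic blobs: AgglomerativeClustering (connectivity), KMeans (centroid), GaussianMixture (distribution), and DBSCAN (density). The blob and eps parameters are arbitrary illustrative choices, not values from the chapter.

from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering, KMeans, DBSCAN
from sklearn.mixture import GaussianMixture

# Toy data: 60 points drawn around 3 centres.
X, _ = make_blobs(n_samples=60, centers=3, cluster_std=0.5, random_state=0)

print(AgglomerativeClustering(n_clusters=3).fit_predict(X))             # connectivity model
print(KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X))   # centroid model
print(GaussianMixture(n_components=3, random_state=0).fit_predict(X))   # distribution model
print(DBSCAN(eps=0.8, min_samples=5).fit_predict(X))                    # density model (-1 marks noise)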
