Self-Boosted With Dynamic Semi-Supervised Clustering Method for Imbalanced Big Data Classification
Akkala Abhilasha, Annan Naidu P.
Copyright: © 2022 |Pages: 24
DOI: 10.4018/IJSI.297990

Abstract

Big data plays a major role in the learning, manipulation, and forecasting of information intelligence. Because the class distribution in such large datasets is often imbalanced, learning from and retrieving information out of them can yield poor classification outcomes and wrong decisions. Although traditional machine learning classifiers can handle imbalanced datasets, they still suffer from overfitting, high training cost, and sample hardness during classification. To achieve better classification, this work proposes the novel "Self-Boosted with Dynamic Semi-Supervised Clustering Method". The data are first preprocessed by constructing sample blocks: Hybrid Associated Nearest Neighbor heuristic over-sampling replicates the minority samples, and each copy is merged with a subset of the majority samples, which mitigates overfitting and slightly reduces the noise in the imbalanced data. After preprocessing, classifying massive data requires a large data space, which leads to high training costs.
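The block-construction idea described above can be illustrated with a short sketch. This is not the authors' exact algorithm (the Hybrid Associated Nearest Neighbor heuristic is not specified here); the function name and the random-partition strategy are illustrative assumptions. It shows only the general pattern: split the majority class into roughly minority-sized subsets and pair each subset with a copy of the minority samples, so every resulting block is near-balanced.

```python
import random

def build_balanced_blocks(majority, minority, seed=0):
    """Illustrative sketch (not the paper's exact method): partition the
    majority class into subsets about the size of the minority class and
    merge each subset with a replicated copy of the minority samples."""
    rng = random.Random(seed)
    shuffled = majority[:]
    rng.shuffle(shuffled)
    # number of blocks ~ imbalance ratio; each block gets a minority copy
    k = max(1, len(majority) // max(1, len(minority)))
    size = len(shuffled) // k
    blocks = []
    for i in range(k):
        subset = shuffled[i * size:(i + 1) * size]
        blocks.append(subset + minority[:])  # each block is near-balanced
    return blocks

maj = [("maj", i) for i in range(90)]
mino = [("min", i) for i in range(10)]
blocks = build_balanced_blocks(maj, mino)
print(len(blocks), len(blocks[0]))  # 9 blocks of 20 samples each
```

Each block can then be handed to a separate base learner, which is the usual motivation for this kind of partition-and-merge balancing.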

1. Introduction

Big data is an emerging topic in the data mining community because of the inexorable demand from a variety of fields such as marketing, bioinformatics, and medicine. It is defined as data whose volume, diversity, and complexity require new techniques, algorithms, and analyses to extract valuable hidden knowledge (Lin et al., 2017). Standard data mining tools may struggle to analyze such a broad amount of data in a reasonable time. Real-time processing of big data is pivotal for many web applications nowadays, such as cybersecurity, online money transactions, and electronic commerce (García et al., 2018). Generally, not all of the big data available today can be handled efficiently to obtain meaningful information, owing to the absence of resources and to poor analysis tools (Maldonado & López, 2018). Hence, a considerable percentage of the big data that should be handled is either delayed, neglected, or deleted, and as a result a large share of networking power, storage, and bandwidth is wasted. Another challenge of big data is its massive veracity, meaning the prevalence of a large number of inaccurate, incomplete, noisy, and redundant objects (Sáez et al., 2016). Moreover, most big data nowadays suffers from a critical issue known as the class imbalance problem. A dataset is generally imbalanced if the instances of one class massively outnumber the instances of the other class (Fernández, Carmona, Jose del Jesus et al, 2017; Vuttipittayamongkol et al., 2018). The main difficulty with class imbalance is that results become biased toward the majority class, which can produce inaccurate classification results and lead to wrong decisions. This occurs because most classifiers do not consider the data distribution when minimizing global measures such as the error rate (Fernández, del Río, Chawla et al, 2017).
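The degree of skew described above is often summarized by an imbalance ratio. As a minimal worked example (the function name is illustrative, not from the paper), the ratio of the majority-class count to the minority-class count can be computed as:

```python
from collections import Counter

def imbalance_ratio(labels):
    """Majority-class count divided by minority-class count; values far
    above 1 indicate a skewed (imbalanced) class distribution."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# 980 negative vs. 20 positive instances: heavily imbalanced
labels = ["neg"] * 980 + ["pos"] * 20
print(imbalance_ratio(labels))  # 49.0
```

A classifier that always predicts "neg" on this dataset reaches 98% accuracy while identifying no positive instance at all, which is exactly the bias toward the majority class noted above.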
So, a preprocessing phase is required for overcoming the issue of imbalanced classes in such datasets before proceeding toward the classification phase.

Classification in the presence of class imbalance has attracted a substantial amount of attention in recent years. One line of work focuses on improving the accurate identification of positive examples without severely degrading performance on the negative class (Hassib et al., 2020). A wide range of solutions has been proposed to address this problem. Several methods exist to handle the classification of imbalanced data, such as data sampling and algorithmic techniques; these are categorized into the data-level approach, the cost-sensitive approach, and the algorithm-level approach (Triguero et al., 2016). Furthermore, data-level approaches are split into groups such as the over-sampling technique, the under-sampling technique, and the hybrid technique (Leevy et al., 2018; Zhai et al., 2018). In the over-sampling technique, new data from the minority classes are added to the original dataset so that a balanced dataset is obtained. In the under-sampling technique, data from the majority classes are removed to balance the dataset (Leevy et al., 2018). In the hybrid technique, the previous techniques are combined to obtain a balanced dataset: generally, over-sampling is applied first to generate new samples for the minority class, and under-sampling is then executed to remove samples from the majority class (Hassib et al., 2019). The over-sampling and under-sampling techniques have some demerits; to overcome these, the Synthetic Minority Oversampling Technique (SMOTE) is considered. Although SMOTE is an accepted technique in the imbalanced domain, it has drawbacks including over-generalization, applicability only to binary-class problems, and sensitivity to the over-sampling rate (Basgall et al., 2018). The remaining techniques rely on existing classifiers, which are altered so they can cope with class imbalance during the learning phase. Integrating both kinds of techniques via ensemble learning algorithms has also been proposed (Chen et al., 2019).
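The SMOTE idea mentioned above can be sketched in a few lines. This is a minimal, dependency-free illustration, not a production implementation (libraries such as imbalanced-learn provide one): for each synthetic point, pick a minority sample, choose one of its k nearest minority-class neighbours, and interpolate at a random position on the line segment between them.

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Minimal SMOTE-style sketch: generate n_new synthetic minority points
    by interpolating between a minority sample and one of its k nearest
    minority neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position along the segment x -> nb
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote_like(minority, n_new=4)
print(len(new_points))  # 4 synthetic minority samples
```

Because every synthetic point lies between two existing minority samples, points generated near the class boundary can drift into majority territory, which is the over-generalization drawback noted above.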
