Analysis of SMOTE: Modified for Diverse Imbalanced Datasets Under the IoT Environment

Ankita Bansal, Makul Saini, Rakshit Singh, Jai Kumar Yadav
Copyright: © 2021 |Pages: 23
DOI: 10.4018/IJIRR.2021040102

Abstract

The tremendous amount of data generated through IoT can be imbalanced causing class imbalance problem (CIP). CIP is one of the major issues in machine learning where most of the samples belong to one of the classes, thus producing biased classifiers. The authors in this paper are working on four imbalanced datasets belonging to diverse domains. The objective of this study is to deal with CIP using oversampling techniques. One of the commonly used oversampling approaches is synthetic minority oversampling technique (SMOTE). In this paper, the authors have suggested modifications in SMOTE and proposed their own algorithm, SMOTE-modified (SMOTE-M). To provide a fair evaluation, it is compared with three oversampling approaches, SMOTE, adaptive synthetic oversampling (ADASYN), and SMOTE-Adaboost. To evaluate the performances of sampling approaches, models are constructed using four classifiers (K-nearest neighbour, decision tree, naive Bayes, logistic regression) on balanced and imbalanced datasets. The study shows that the results of SMOTE-M are comparable to that of ADASYN and SMOTE-Adaboost.

Introduction

The problem of imbalanced data pervades the different spheres of the IoT (Internet of Things). The large variety of sensors connected in IoT produces a great amount of data that is crucial to understand (Sinha et al., 2017). Training machine learning algorithms directly on raw data collected from real networks is inadequate because the data are imbalanced (Zolanvari, Teixeira & Jain, 2018; Choi & Lee, 2018; Makki et al., 2019). For the past few years, the Class Imbalance Problem (CIP) has been a major issue in machine learning and data mining. It arises when the instances of one class, the Majority class, are in abundance while those of the other class, the Minority class, are scarce (Wang & Yao, 2013; Somasundaram & Reddy, 2016). The issue usually surfaces in classification problems (He & Garcia, 2009). For example, to attain higher accuracy and minimize the error rate, a classifier may assign all samples to the Majority class, in which case every Minority class sample is misclassified. Such a classifier reports impressive accuracy while evaluation measures like precision and recall suffer; the constructed model therefore exhibits an accuracy paradox. To overcome this problem, balancing the data from IoT is necessary. Nowadays, CIP is common in a large number of diverse fields such as fraud detection, anomaly detection, medical diagnosis, oil spillage, and facial detection (Nagar et al., 2020; Yong, 2012). These could be sources of imbalanced data in IoT, which might come from an attack on a single device or on a set of sensors connected over the network. In this paper, the focus is on handling CIP in two spheres of IoT in particular, viz. fraud detection and medical diagnosis. We have employed four datasets in total, two belonging to the fraud detection domain and two to the field of medicine. All the datasets are very diverse in terms of their sample sizes, the number and types of attributes, and the imbalance ratio.
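The accuracy paradox described above can be made concrete with a small numeric sketch (the class counts here are hypothetical, chosen only for illustration, and are not taken from the paper's datasets):

```python
# Hypothetical dataset: 990 Majority samples, 10 Minority samples.
majority, minority = 990, 10
total = majority + minority

# A trivial classifier that labels every sample as the Majority class:
accuracy = majority / total        # 0.99 -- looks impressive
recall_minority = 0 / minority     # 0.0  -- every Minority sample is missed
precision_minority = 0.0           # no Minority predictions are made at all

print(f"accuracy = {accuracy:.2f}, minority recall = {recall_minority:.2f}")
```

Despite 99% accuracy, the classifier is useless for the Minority class, which is exactly why precision and recall (rather than accuracy alone) are used to evaluate models on imbalanced data.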

Imbalanced datasets from IoT can be processed using various strategies, namely algorithm-level learning (Ruff et al., 2017), cost-sensitive learning (Nguyen, Gantner & Schmidt-Thieme, 2010), and data-level learning. The last of these is the focus of this paper. Data-level learning includes sampling algorithms, which comprise two major techniques: Oversampling and Undersampling. These techniques calibrate the class distribution of the dataset. Undersampling works on the Majority class by reducing or removing some of its instances without compromising its classifying features (Hulse, Khoshgoftaar & Napolitano, 2009); the Majority class is thus modified to be comparable in size to the Minority class. Oversampling, by contrast, amplifies the Minority class until its size becomes comparable to that of the Majority class. Therefore, the possibility of losing useful classifying features, which exists in Undersampling, is absent in Oversampling. Oversampling also avoids the loss of decisive boundary points that can arise when Undersampling is used to balance the data: because Undersampling removes Majority class points, decisive boundary points may be lost even if the points are removed using an efficient method. Moreover, a model trained after Undersampling sees less data and may not be able to consider all the cases. Owing to these advantages of Oversampling over Undersampling, the authors work on Oversampling techniques in this paper. Various Oversampling techniques have been proposed in the literature (Nguyen et al., 2010; Xiaolong et al., 2019), such as Random Oversampling (ROS), Synthetic Minority Oversampling Technique (SMOTE) (Chawla, Bowyer, Hall & Kegelmeyer, 2002; Chawla, Lazarevic, Hall & Bowyer, 2003), Adaptive Synthetic Oversampling (ADASYN) (He, Bai, Garcia & Li, 2008), SMOTEBoost (Chawla et al., 2003), and AdaBoost (Zhang et al., 2019). One of the most commonly used techniques in the literature is SMOTE.
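The core idea of SMOTE, as introduced by Chawla et al. (2002), is to synthesize new Minority samples by interpolating between each Minority sample and one of its k nearest Minority-class neighbours. The following is a minimal NumPy sketch of that interpolation step; it is an illustration of plain SMOTE, not of the authors' SMOTE-M modification, and the function name and parameters are our own:

```python
import numpy as np

def smote_sketch(X_min, n_synthetic, k=5, seed=0):
    """Generate n_synthetic points by interpolating between a randomly
    chosen Minority sample and one of its k nearest Minority neighbours."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # Pairwise Euclidean distances within the Minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    # Indices of the k nearest neighbours of each point (excluding itself).
    nn = np.argsort(d, axis=1)[:, 1:k + 1]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                    # pick a Minority sample
        j = nn[i, rng.integers(nn.shape[1])]   # pick one of its neighbours
        gap = rng.random()                     # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Each synthetic point lies on the line segment between a Minority sample and a Minority neighbour, so the new samples stay inside the region already occupied by the Minority class rather than being exact duplicates, which is what distinguishes SMOTE from Random Oversampling.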
