1. Introduction
Today’s highly interconnected world generates almost unmanageable amounts of network traffic that must be analyzed in a Big Data framework, and this is a major challenge. With the increase in network traffic comes an increase in the number of network attacks, which manifest as anomalies in regular network traffic. Anomaly detection is used to detect attacks based on irregularities in network traffic. Various Machine Learning (ML) classifiers, including Support Vector Machines (SVM), Decision Trees, Naïve Bayes, and Random Forest (RF), have been used for anomaly detection and for building efficient network intrusion detection systems (NIDS) that quickly and accurately detect network attacks or anomalies in network traffic. The focus of this work is on classifying anomalies in network traffic using the RF classifier, in both binary and multi-class settings, in the distributed Spark Big Data environment. Spark is a cluster computing framework that sits on top of the Hadoop framework. It has an edge over Hadoop in terms of speed because of its in-memory processing architecture: Spark can run up to 100 times faster than Hadoop since Spark’s data and processes reside completely in memory (Guller, 2015). In addition, Spark provides scalability and fault tolerance, along with a rich set of APIs that allow developers to perform many complex analytical operations out of the box. The RF classifier, used in this work, has been a common choice for classification in intrusion detection systems (IDS) mainly because of its fast learning speed and high detection accuracy (Farnaaz & Jabbar, 2016; Johnson & Jain, 2016; Wahyudi et al., 2018).
For creating machine learning based IDSs, studies have previously focused on the popular KDD’99 dataset (UCI, n.d.), the NSL-KDD dataset (n.d.), as well as the UNSW-NB15 dataset (UNSW-NB15 Dataset Description, n.d.). The KDD’99 dataset (UCI, n.d.) had many inherent problems, such as large numbers of redundant records and missing values (Janarthanan and Zargari, 2017), and, more importantly, it is not reflective of modern network traffic. In 2009, the NSL-KDD dataset (n.d.) was created from KDD’99 (Tavallaee et al., 2009), addressing some of the problems of KDD’99 (Janarthanan and Zargari, 2017). To address the issues of the KDD’99 (UCI, n.d.) and NSL-KDD (n.d.) datasets, as well as to reflect modern network traffic, Moustafa and Slay (2015) created the UNSW-NB15 dataset (n.d.) in 2015. This dataset, developed using IXIA PerfectStorm (Ixiacom, n.d.), is a comprehensive network-based intrusion detection dataset that reflects modern network traffic scenarios and a variety of low-footprint intrusions (Moustafa and Slay, 2015).
Prevalent problems in real-world network intrusion detection system datasets are: (i) the traffic is imbalanced, that is, there are generally far fewer network intrusion records than normal records, so the classifier becomes biased toward the more frequently occurring records (Bagui and Li, 2021); and (ii) the high dimensionality of network data (Janarthanan and Zargari, 2017; Yang et al., 2019). In the present-day environment, these problems are magnified in the context of Big Data. High dimensionality leads to: (i) increased training as well as testing times for ML algorithms; (ii) increased demand for computing resources; and (iii) less accurate results due to the incorrect selection of features (Tavallaee et al., 2010). Hence, data preprocessing and feature selection are a very important part of building efficient IDSs. RF is also very sensitive to the proper selection of attributes (Ghorbani et al., 2010).
In this work, Information Gain as well as Principal Component Analysis (PCA) were used for feature selection and dimensionality reduction. First, Information Gain was applied to the data to reduce the number of attributes, and then PCA was applied to the reduced data. To address the problem of imbalanced data, the data was subsampled.
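The three preprocessing steps described above — ranking attributes by Information Gain, projecting the retained attributes with PCA, and subsampling the majority class — can be sketched in NumPy. This is a minimal illustration on synthetic discrete data, not the authors’ pipeline: the feature construction, the number of attributes kept, and the subsampling scheme are all assumptions made for the example.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """Information Gain of one discrete feature with respect to the labels."""
    total = entropy(labels)
    values, counts = np.unique(feature, return_counts=True)
    weighted = sum((c / len(feature)) * entropy(labels[feature == v])
                   for v, c in zip(values, counts))
    return total - weighted

# Toy data: binary labels (1 = attack); feature 0 is informative (a copy of
# the label), features 1 and 2 are random noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.column_stack([y,
                     rng.integers(0, 3, size=200),
                     rng.integers(0, 3, size=200)])

# Step 1: rank attributes by Information Gain and keep the top two.
gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
top = np.argsort(gains)[::-1][:2]
X_kept = X[:, top].astype(float)

# Step 2: PCA on the retained attributes via SVD of the centered matrix;
# project onto the first principal component.
Xc = X_kept - X_kept.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:1].T

# Step 3: subsample the majority class down to the minority class size.
idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
n = min(len(idx0), len(idx1))
keep = np.concatenate([rng.choice(idx0, n, replace=False),
                       rng.choice(idx1, n, replace=False)])
proj_bal, y_bal = proj[keep], y[keep]
bal_counts = np.bincount(y_bal)
```

In practice each step would run on Spark DataFrames rather than in-memory arrays, but the ordering — rank, project, then rebalance — is the same as described above.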