Brain Signal Classification Based on Deep CNN

Terry Gao (Counties Manukau District Health Board (CMDHB), New Zealand) and Grace Ying Wang (Auckland University of Technology, New Zealand)
DOI: 10.4018/IJSPPC.2020040102
Abstract

Increasing the accuracy and robustness of classifying brain data, including EEG, is essential for enabling direct communication between the human brain and computerized devices. Different machine learning approaches, such as support vector machines (SVM), neural networks, and linear discriminant analysis (LDA), have been applied to build automatic classifiers of mental state, but findings on their capabilities have been inconclusive. The present study developed an effective classifier for human mental status using a deep convolutional neural network. In contrast to most previous studies, which commonly use EEG waveforms or numeric values of brain signals for classification, the authors utilised image features generated from EEG data in the alpha frequency band. The model proposed in this study provides a simple and computationally efficient approach to distinguishing mental states during rest. After training, this model could classify new 2D EEG images with above 90% accuracy, an accuracy that traditional machine learning techniques failed to achieve.

2. Methods

2.1 EEG Acquisition and Image Extraction

Prior to commencing this research, ethical approval was granted and informed consent was obtained from all participants. Participants were recruited via advertisements (i.e., flyers posted on notice boards) distributed around a range of local community spaces. A total of 20 participants (10 females) were recruited for this study. A QuickCap (Neuroscan 4.4) 64-sensor shielded cap was used to acquire EEG data from scalp sites. EEG was continuously recorded in a sound-attenuated room while subjects sat relaxed, first with eyes closed (EC, 2 minutes) and then with eyes open (EO, 2 minutes). The sampling rate was 1000 Hz. The impedance of the electrodes was kept below 5 kΩ, and the signal was acquired using a common vertex (Cz) reference. 2D images were extracted from artifact-free EEG data in the alpha band (8-12 Hz) using EEGLAB, as shown in Figure 1. In total, 762 images were extracted for EC and 762 for EO; of these 1,524 samples, 914 were used for training and 610 for validation.
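The study performed the alpha-band extraction in EEGLAB (MATLAB), but the band-pass step itself can be illustrated with a short sketch. The snippet below is a minimal, hypothetical Python equivalent, assuming a zero-phase Butterworth band-pass filter at 8-12 Hz and the paper's 1000 Hz sampling rate; the function name `alpha_bandpass` and the filter order are illustrative choices, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def alpha_bandpass(signal, fs=1000.0, low=8.0, high=12.0, order=4):
    """Isolate the alpha band (8-12 Hz) with a zero-phase Butterworth filter.

    filtfilt runs the filter forward and backward, so the output has no
    phase shift relative to the input - useful before generating images.
    """
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)


if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    # A 10 Hz component (inside the alpha band) plus a 40 Hz component.
    raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)
    alpha = alpha_bandpass(raw, fs)
    print(alpha.shape)  # same length as the input signal
```

In practice each filtered, artifact-free channel epoch would then be rendered as a 2D image (e.g., a scalp topography or time-frequency map) to form the CNN's training set.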

Figure 1.

Images extracted from EEG data
