Nonlinear Complexity Sorting Approach for 2D ECG Data Compression

Anukul Pandey (Dr. B. R. Ambedkar National Institute of Technology, India), Butta Singh (Guru Nanak Dev University Regional Campus, India), Barjinder Singh Saini (Dr. B. R. Ambedkar National Institute of Technology, India) and Neetu Sood (Dr. B. R. Ambedkar National Institute of Technology, India)
DOI: 10.4018/978-1-5225-0660-7.ch001

Abstract

Cardiovascular disease (CVD) is a globally acknowledged research problem, and continuous electrocardiogram (ECG) monitoring can assist in tackling it. The redundancy in monitored ECG signals is reduced by various signal processing techniques in either the 1D or the 2D domain. This chapter has the sole objective of reviewing existing 2D ECG data compression techniques, comparing them with 1D compression techniques, and proposing a novel nonlinear complexity sorting approach for 2D ECG data compression. The basic steps of the procedure are preprocessing, transformation, and encoding. Preprocessing includes QRS detection, 2D ECG image formulation, DC quantization, and complexity sorting. The transformation stage comprises various decomposition techniques, and at the encoding stage a standard image codec (JPEG2000) can be employed. The performance of the proposed complexity sorting algorithm is evaluated on records of the Massachusetts Institute of Technology–Beth Israel Hospital (MIT-BIH) arrhythmia database.
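The preprocessing steps named above can be sketched in code. The following is a minimal illustration, not the chapter's actual implementation: beats are cut at detected R-peaks and stacked into a 2D array, and rows are then reordered by a nonlinear complexity proxy (here, hypothetically, the sum of absolute second differences) so that similar beats become adjacent before transformation and encoding.

```python
import numpy as np

def beats_to_image(signal, r_peaks, beat_len):
    """Cut the 1D ECG around detected R-peaks and stack fixed-length
    beats as rows of a 2D 'ECG image'."""
    rows = []
    for r in r_peaks:
        start = max(r - beat_len // 2, 0)
        beat = signal[start:start + beat_len]
        if len(beat) == beat_len:          # drop incomplete beats at the edges
            rows.append(beat)
    return np.array(rows)

def complexity_sort(image):
    """Reorder beat rows by a complexity proxy (assumed here: sum of
    absolute second differences per row); returns the sorted image and
    the permutation needed to undo the sort at decompression."""
    complexity = np.abs(np.diff(image, n=2, axis=1)).sum(axis=1)
    order = np.argsort(complexity)
    return image[order], order
```

The permutation `order` would have to be stored as side information so the decoder can restore the original beat sequence.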

Introduction

In the modern era, cardiovascular disease (CVD) has emerged as one of the leading causes of mortality in both urban and rural areas (S. Gupta, Gudapati, Gaurav, & Bhise, 2013; Nichols, Townsend, Scarborough, & Rayner, 2014). Electrocardiogram (ECG) monitoring provides the clinical health status of the patient to the healthcare center in case of hostile cardiac behavior (Alesanco & Garcia, 2010). It is broadly adopted in 24-h Holter monitoring, clinical ECG workstations, and telemedicine applications. The ECG signal is the best representative of the heart's electrical functionality and has proven useful in the diagnosis of most heart diseases.

The ECG signal, which monitors the electrical activity of the heart, is usually characterized by its set points (P, QRS, T) and intervals (PR, QT, and RR) that reflect the rhythmic electrical depolarization and repolarization of the atria and ventricles. The ECG offers the possibility of reducing redundant information through inter- and intra-beat correlation, which is the basis of its compression. In general, ECG compression techniques fall into three categories: direct methods, transform methods, and parameter extraction methods. Direct methods (V. Kumar, Saxena, & Giri, 2006) analyze and reduce data points directly in the time domain; examples include the turning point (TP) algorithm, amplitude zone time epoch coding (AZTEC), the improved modified AZTEC technique, the coordinate reduction time encoding system (CORTES), the delta algorithm, the Fan algorithm (Jalaleddine, Hutchens, Strattan, & Coberly, 1990; Sabarimalai Manikandan & Dandapat, 2014), and ASCII character encoding (S. K. Mukhopadhyay, Mitra, & Mitra, 2012). Transform methods analyze the energy distribution by converting the signal from the time domain to another domain; examples include the Fourier transform, Fourier descriptors (Reddy & Murthy, 1986), the Karhunen–Loève transform (KLT), the Walsh transform, the discrete cosine transform (DCT) (Batista, Melcher, & Carvalho, 2001; Benzid, Messaoudi, & Boussaad, 2008), DCT with modified stages, the wavelet transform, and compressed sensing. Parameter extraction methods rely on extracting dominant features from the raw signal; examples include neural-based or syntactic methods, peak picking, and linear prediction. The resulting compression techniques are classified as either lossless or lossy.
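To make the direct-method category concrete, here is a short sketch of the classic turning point (TP) algorithm mentioned above. It achieves roughly 2:1 compression by examining pairs of incoming samples and retaining, from each pair, the sample that preserves a slope reversal (a "turning point"); this is an illustrative reimplementation, not the chapter's code.

```python
def _sign(v):
    """Return -1, 0, or +1 according to the sign of v."""
    return (v > 0) - (v < 0)

def turning_point(x):
    """2:1 direct compression of a sample sequence x.

    For each pair (x1, x2) following the last retained sample x0,
    keep x1 if the slope changes sign there (x1 is a turning point),
    otherwise keep x2.
    """
    out = [x[0]]
    i = 1
    while i + 1 < len(x):
        x0, x1, x2 = out[-1], x[i], x[i + 1]
        s1, s2 = _sign(x1 - x0), _sign(x2 - x1)
        out.append(x1 if s1 * s2 < 0 else x2)
        i += 2
    return out
```

On a monotone ramp the algorithm simply keeps every second sample, while around peaks it preferentially keeps the extrema, which is why it preserves QRS morphology reasonably well at half the data rate.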
To combat the rapid growth of memory requirements and to exploit data sparsity, many ECG compression and decompression algorithms have already been developed to represent the raw ECG in a processed format.

Storing such a huge volume of data requires large memory space: for example, a three-lead ECG signal sampled at a frequency of 1 kHz with 11-bit resolution and recorded for 24 h requires roughly 119 MB of memory per channel without any overhead. Such a volume of data is expected to exhibit inter- and intra-beat correlation, i.e., inherent sparsity.
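As a quick sanity check, the storage requirement follows directly from the stated parameters (1 kHz, 11 bits/sample, 24 h, three leads); a minimal worked computation, assuming decimal megabytes:

```python
fs = 1_000            # sampling frequency (Hz)
resolution = 11       # bits per sample
duration = 24 * 3600  # recording length (s)
channels = 3          # three-lead system

bits_per_channel = fs * resolution * duration   # 950,400,000 bits
mb_per_channel = bits_per_channel / 8 / 1e6     # ~118.8 MB per channel
mb_total = mb_per_channel * channels            # ~356.4 MB for all three leads
```

Even at this modest sampling rate, a single day of uncompressed three-lead data runs to hundreds of megabytes, which motivates the compression pipeline this chapter develops.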
