Cuckoo Search Optimized Reduction and Fuzzy Logic Classifier for Heart Disease and Diabetes Prediction

Thippa Reddy Gadekallu, Neelu Khare
Copyright: © 2017 |Pages: 18
DOI: 10.4018/IJFSA.2017040102


Disease forecasting using soft computing techniques has been a major area of data mining research in recent years. To classify heart disease and diabetes, this paper proposes a diagnosis system that combines cuckoo search optimized rough-set-based attribute reduction with a fuzzy logic classifier. Prediction proceeds in two steps: 1) feature reduction using cuckoo search with rough set theory, and 2) disease prediction using a fuzzy logic system. The first step reduces the computational burden and enhances the performance of the fuzzy logic system; the second classifies the disease datasets using fuzzy rules and membership functions. The authors have tested this approach on the Cleveland, Hungarian, and Switzerland heart disease datasets and on a real-time diabetes dataset. The experimental results demonstrate that the proposed algorithm outperforms existing approaches.
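The first step above can be sketched as a binary cuckoo search over candidate feature subsets. The sketch below is illustrative only, not the authors' implementation: the `fitness` function is a hypothetical placeholder (in the paper it would be a rough-set-based measure of the subset's quality), and all parameter values are assumptions.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(42)

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def fitness(mask, X, y):
    # Placeholder: rewards small subsets; a real system would score the
    # rough-set dependency of the decision on the selected attributes.
    if mask.sum() == 0:
        return 0.0
    return 1.0 / mask.sum()

def cuckoo_feature_select(X, y, n_nests=15, n_iter=50, pa=0.25):
    n_feat = X.shape[1]
    nests = rng.random((n_nests, n_feat)) > 0.5          # binary feature masks
    scores = np.array([fitness(m, X, y) for m in nests])
    for _ in range(n_iter):
        # New candidate subsets via Levy flights, binarised with a sigmoid
        steps = levy_step((n_nests, n_feat))
        probs = 1 / (1 + np.exp(-steps))
        new = rng.random((n_nests, n_feat)) < probs
        new_scores = np.array([fitness(m, X, y) for m in new])
        better = new_scores > scores
        nests[better], scores[better] = new[better], new_scores[better]
        # Abandon a fraction pa of the worst nests and rebuild them randomly
        worst = scores.argsort()[: int(pa * n_nests)]
        nests[worst] = rng.random((len(worst), n_feat)) > 0.5
        scores[worst] = [fitness(m, X, y) for m in nests[worst]]
    return nests[scores.argmax()]
```

The returned boolean mask would then feed the reduced attribute set into the fuzzy classification stage.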

1. Introduction

Data mining is a subfield of knowledge management. In healthcare, data mining supports effective treatment, healthcare management, customer relationship management, fraud and abuse detection, and decision making (Silwattananusarn & Tuamsuk, 2012). The incidence of heart disease has risen considerably over the past ten years, and it has become the leading cause of death in most countries around the world. These diseases impair the structure or the function of the heart in many ways (Chitra & Seenivasagam, 2013). Computer programs known as medical decision-support systems were proposed to help health professionals make clinical decisions (Shortliffe, 1987).

In disease forecasting, feature extraction and selection are important steps. An optimal feature set should contain informative and discriminative features while reducing redundancy among them, to avoid the “curse of dimensionality” (Osareh & Shadgar, 2011). Feature selection strategies address the impact of irrelevant features on the performance of classifier systems (Acir, Ozdamar, & Guzelis, 2006; Valentini, Muselli, & Ruffino, 2004). In this phase, an optimal subset of the necessary features is selected. By reducing dimensionality and discarding irrelevant features, feature selection improves the accuracy of learning algorithms (Zhang, Guo Du, & Li, 2005; Karabak & Ince, 2009).

Traditional Principal Component Analysis (PCA) is one of the most frequently used feature extraction methods; it extracts the axes along which the data exhibit maximum variance (Jolliffe, 1986). Cluster analysis is a commonly applied data mining method for examining the relationships among attributes, among samples, and between attributes and samples. The hierarchical clustering tree (HCT) (Eisen, Spellman, Brown, & Botstein, 1998) and k-means (Tavazoie, Hughes, Campbell, Cho, & Church, 1999) are the two most well-known clustering techniques used to reduce the features of medical data. Alternatively, rough sets provide an efficient method of handling uncertainty and can be used for tasks such as data dependency analysis, feature identification, dimensionality reduction, and pattern classification. Rough set theory (Pawlak, 1991; Polkowski, 2003) is a relatively recent intelligent technique for handling vagueness; it is employed to discover data dependencies, assess the significance of attributes, detect patterns in data, and reduce redundancy.
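The rough-set notion of data dependency mentioned above can be made concrete with the dependency degree: the fraction of objects whose decision class is fully determined by a chosen set of condition attributes (the positive region). The sketch below is a minimal illustration; the toy decision table is invented for the example and is not from the paper's datasets.

```python
from collections import defaultdict

def dependency_degree(table, decisions, attrs):
    """Fraction of objects in the positive region of `decisions` w.r.t. `attrs`."""
    # Partition objects into equivalence classes on the chosen attributes
    blocks = defaultdict(list)
    for i, row in enumerate(table):
        key = tuple(row[a] for a in attrs)
        blocks[key].append(i)
    # An equivalence class lies in the positive region when all of its
    # objects share a single decision value
    pos = sum(len(objs) for objs in blocks.values()
              if len({decisions[i] for i in objs}) == 1)
    return pos / len(table)

# Toy decision table: two condition attributes, one decision
table = [(0, 0), (0, 1), (1, 0), (1, 0), (1, 1)]
decisions = ['no', 'yes', 'yes', 'yes', 'no']
print(dependency_degree(table, decisions, [0, 1]))  # 1.0: both attributes determine the decision
print(dependency_degree(table, decisions, [0]))     # 0.0: attribute 0 alone determines nothing
```

A reduct is a minimal attribute subset that preserves the full dependency degree, which is why a search procedure such as cuckoo search can use this measure to score candidate subsets.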
