CAD-Based Machine Learning Project for Reducing Human-Factor-Related Errors in Medical Image Analysis

Adekanmi Adeyinka Adegun (Landmark University, Nigeria), Roseline Oluwaseun Ogundokun (Landmark University, Nigeria), Marion Olubunmi Adebiyi (Landmark University, Nigeria & Covenant University, Nigeria) and Emmanuel Oluwatobi Asani (Landmark University, Nigeria)
DOI: 10.4018/978-1-7998-1279-1.ch011


Machine learning techniques such as deep learning methods have produced promising results in medical image analysis. This work proposes a user-friendly system that applies deep learning techniques to detect and diagnose diseases from medical images. It includes the design of a CAD-based project that can reduce human-factor-related errors in the manual screening of medical images. The system accepts medical images as input and segments them; the segmentation process analyzes the images and identifies the region of interest (ROI) of a disease. The analysis and segmentation of medical images have assisted in the diagnosis and monitoring of several diseases. Conditions such as skin cancer, age-related macular degeneration, diabetic retinopathy, glaucoma, hypertension, arteriosclerosis, and choroidal neovascularization can be managed more effectively through the analysis of skin-lesion and retinal-vessel images. The proposed system was evaluated on diabetic retinopathy in retina images and skin cancer in dermoscopic images.
Chapter Preview

In the last decade, there has been much research on the application of deep learning to medical image analysis. In particular, several works have applied state-of-the-art techniques to the segmentation stage of medical image analysis. The performance of these deep learning projects has been compared with the manual approach, which is prone to human-factor-related errors. This section reviews related work in this area.
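As a deliberately simplified illustration of the segmentation-then-ROI step described above, the sketch below thresholds a toy image and extracts a bounding box around the detected region. The systems reviewed in this chapter predict such masks with deep networks; `segment_roi` and `bounding_box` are hypothetical names for this illustration, not functions from the chapter.

```python
import numpy as np

def segment_roi(image, threshold=0.5):
    """Binary mask of pixels at or above the intensity threshold.

    A simplified stand-in for the segmentation step: real CAD systems
    would predict this mask with a trained CNN rather than a threshold.
    """
    return image >= threshold

def bounding_box(mask):
    """Bounding box (row_min, row_max, col_min, col_max) of the ROI."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return tuple(int(v) for v in (rmin, rmax, cmin, cmax))

# Toy "lesion": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:5, 3:6] = 0.9
mask = segment_roi(img, threshold=0.5)
print(bounding_box(mask))  # (2, 4, 3, 5)
```

The bounding box would then delimit the ROI handed to downstream diagnosis.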

A deep learning method was used for the detection and segmentation of colorectal liver metastases by Vorontsov et al. (2019). They applied automated three-dimensional segmentation to address the deficiencies of fully automated segmentation for small metastases, and it was faster than manual three-dimensional segmentation. They compared the performance of fully automated and user-corrected segmentations with manual segmentations. Chen, Bentley, and Rueckert (2017) proposed a framework to automatically segment stroke lesion images. The framework consisted of two convolutional neural networks, the second of which evaluated the detected lesions in order to remove potential false positives.
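The chapter does not state which overlap metric was used to compare automated with manual segmentations; the Dice similarity coefficient is a common choice for such comparisons. A minimal sketch, with hypothetical toy masks, might look like:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * intersection / total if total else 1.0

auto = np.zeros((4, 4))
auto[1:3, 1:3] = 1      # hypothetical automated mask (4 pixels)
manual = np.zeros((4, 4))
manual[1:3, 1:4] = 1    # hypothetical manual reference (6 pixels)
print(dice_coefficient(auto, manual))  # 0.8
```

Scores near 1.0 indicate that the automated segmentation closely matches the manual reference.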

Vesal, Ravikumar, and Maier (2018) proposed a convolutional neural network (CNN) project called SkinNet that employed dilated convolutions and dense blocks to incorporate multi-scale and global context information for skin lesion segmentation. Baur, Wiestler, Albarqouni, and Navab (2019) combined the advantages of supervised and unsupervised methods into a novel framework for learning from both labeled and unlabeled data for the challenging task of white matter lesion segmentation in brain MR images. They proposed a semi-supervised setting for tackling domain shift, a known problem in MR image analysis. Chlebus et al. (2018) developed a fully automatic method for liver tumor segmentation in CT images based on a 2D fully convolutional neural network with an object-based post-processing step. The system was compared with human performance.
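Dilated convolutions, as used in SkinNet, enlarge a kernel's receptive field without adding parameters by sampling the input at spaced intervals. The following is a minimal NumPy sketch of a "valid" dilated 2-D convolution, written for illustration only and not SkinNet's actual implementation:

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """'Valid' 2-D convolution with a dilated kernel (no padding)."""
    kh, kw = kernel.shape
    # Effective kernel footprint grows with the dilation rate.
    eh = kh + (kh - 1) * (dilation - 1)
    ew = kw + (kw - 1) * (dilation - 1)
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input every `dilation` pixels under the kernel.
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36.0).reshape(6, 6)
k = np.ones((3, 3))
print(dilated_conv2d(img, k, dilation=1).shape)  # (4, 4)
print(dilated_conv2d(img, k, dilation=2).shape)  # (2, 2)
```

With dilation 2, the same 3x3 kernel covers a 5x5 area of the input, which is how such networks gather multi-scale context cheaply.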
