A Study on Improved Deep Learning Structure Based on DenseNet

Sang-Kwon Yun, Hye Jeong Kwon, Jongbae Kim
Copyright: © 2022 | Pages: 13
DOI: 10.4018/IJSI.289595

Abstract

Existing image-related deep learning research methods are based on algorithms that identify and associate features, but they face limits in accuracy and reliability. These methods are inefficient for artificial neural networks to extract features from and learn, because spatial information is lost when backgrounds are removed and images are flattened, which caps the achievable accuracy and reliability. The deep learning algorithm applied in this study is based on the DenseNet neural network, which currently offers leading performance and accuracy, and its architecture was improved with a focus on increasing learning performance. In the experiments, both the speed and the accuracy of learning improved over the existing DenseNet architecture, which means that more images can be diagnosed within the same amount of time than with existing methods.
Article Preview

Background

Neural network models, which have grown rapidly over the past several years thanks to advances in algorithms and hardware, now offer higher reliability than existing classification methods based on low-level features and have become established as the foundation of deep learning. Against this background, new models for image recognition and learning, such as the CNN, have emerged every year. This chapter therefore introduces MLP, CNN, ResNet, and DenseNet, which are popular algorithms for image recognition and learning, identifies their merits and demerits, and gives an overview of future development.

MLP, which stands for multilayer perceptron, was created based on the idea of artificial neural networks that emulate the structure of the human brain; it refers to artificial neural networks that mathematically model the activity of neurons, the human nerve cells (Schmidhuber, 2015). Understanding the MLP requires first understanding the single perceptron model. The single perceptron is the earliest artificial neural network model: it takes multiple input signals and produces a single output signal, similar to the way neurons transfer information through electrical signals. In the perceptron, the weight (w) plays the role of the dendrite or axon that transmits signals in a neuron. The weights (w) are the values assigned to the respective input signals, and the perceptron outputs 1 when the weighted sum of the input signals exceeds a specified threshold. A unique weight is given to each input signal, and the greater the weight, the more significant that signal is considered. Figure 1 shows a single perceptron model.
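As a minimal sketch of the single perceptron described above, the following Python snippet computes the weighted sum of the input signals and outputs 1 when it exceeds a threshold. The function name, weights, and threshold value are illustrative assumptions, not part of the original study.

```python
import numpy as np

def perceptron(x, w, threshold):
    """Single perceptron: outputs 1 when the weighted sum of the
    input signals exceeds the threshold, otherwise 0."""
    weighted_sum = np.dot(x, w)
    return 1 if weighted_sum > threshold else 0

# Illustrative example: two input signals with different weights.
# A larger weight makes the corresponding signal more significant.
x = np.array([1.0, 0.0])      # input signals
w = np.array([0.7, 0.3])      # weights assigned to each input
print(perceptron(x, w, 0.5))  # -> 1, since 0.7 > 0.5
```

In a multilayer perceptron, many such units are stacked in layers, with the outputs of one layer serving as the inputs to the next.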
