1. Introduction
Image dehazing is a challenging computer-vision task, and supervised learning models built for it are prone to both overfitting and underfitting. Deep networks in particular contain an enormous number of parameters, which makes overfitting and long training times persistent concerns. Batch normalization and dropout are the standard tools for addressing these problems, but their effects overlap, and each has its own strengths and limitations within a network. Because overfitting degrades model performance, regularization techniques are needed to control model complexity; the most widely used are dropout and the L2 norm, and many researchers rely on one or both of them to reduce overfitting.

Dropout improves the performance of deep neural networks on supervised tasks in computer vision, medical imaging, speech recognition, and other domains. Although practical recipes exist, there is no well-defined set of rules, and no comprehensive study, describing how dropout and batch normalization should be configured across network architectures for effective learning. In particular, it is not obvious when to prefer dropout, when to prefer batch normalization, or whether the two should be combined: dropout reduces overfitting by randomly dropping units, batch normalization reduces training time, and combining them can improve accuracy. To obtain empirical evidence, we conducted a series of experiments analyzing combinations of these hyperparameters on a deep neural network trained on images from the CIFAR-10 dataset.

This paper presents an empirical study of the effects of batch normalization and dropout, used separately and in combination, as well as the effects of adding more layers and more feature maps to the network. The results are reported as accuracy plots. The qualitative and quantitative study estimates model accuracy on training and test images with and without batch normalization and dropout, and the comparison identifies which combination of hyperparameters yields a better and more stable model. The experiments show that dropout outperforms L2 regularization when hidden layers contain large numbers of neurons. We also assess the denoising performance of the DnCNN model under batch normalization, dropout, additional feature maps, and additional layers, quantifying model loss and accuracy in the presence and absence of each of these choices. We conclude by identifying which parameters are necessary for better accuracy in the network model.
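The following minimal sketch illustrates the kind of ablation described above, assuming a Keras/TensorFlow setup; the flags, layer sizes, dropout rates, and training schedule are illustrative choices for exposition, not our exact experimental configuration.

```python
# Illustrative ablation over batch normalization and dropout on CIFAR-10.
# The architecture and hyperparameters here are assumed for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(use_bn=True, use_dropout=True, filters=32):
    model = models.Sequential([layers.Input(shape=(32, 32, 3))])
    for _ in range(2):  # two conv blocks; add more to test deeper networks
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        if use_bn:
            model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D())
        if use_dropout:
            model.add(layers.Dropout(0.25))
        filters *= 2  # doubling filters tests the "more feature maps" axis
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    if use_dropout:
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Train one model per combination of the two flags and compare the
# train/test accuracy curves, as in the study described above.
for use_bn in (False, True):
    for use_dropout in (False, True):
        model = build_cnn(use_bn, use_dropout)
        hist = model.fit(x_train, y_train, epochs=5, batch_size=128,
                         validation_data=(x_test, y_test), verbose=0)
        print(use_bn, use_dropout, hist.history["val_accuracy"][-1])
```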
To improve image quality and eliminate noise, many denoising algorithms have been studied in the literature. Important methods include the dark channel prior (He et al., 2010), transmission maps of natural images (Singh et al., 2021), histogram equalization (Stark, 2000), convolutional neural networks (Ren et al., 2016), video denoising (Buades & Lisani, 2017), deep neural networks (Rahangdale & Raut, 2019), denoising autoencoders (Wen & Zhang, 2018), generative adversarial networks (GANs) (Goodfellow et al., 2020), and dehazing with deep neural networks (Hodges et al., 2019). Any model we build should generalize well, so that it predicts accurately on unseen data. Generalization requires learning a mapping from inputs to outputs with a parameter count appropriate to the training data: with too few parameters the model underfits, and with too many it overfits.
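To make the contrast between the two regularizers discussed above concrete, the following sketch (again assuming Keras; the layer width and penalty strength are illustrative assumptions) applies an L2 weight penalty and dropout, respectively, to the same dense architecture.

```python
# Two ways to regularize the same dense network: an L2 penalty on the
# weights versus dropout on the activations. Sizes are illustrative.
from tensorflow.keras import layers, models, regularizers

l2_model = models.Sequential([
    layers.Input(shape=(3072,)),            # flattened 32x32x3 image
    layers.Dense(512, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # penalize large weights
    layers.Dense(10, activation="softmax"),
])

dropout_model = models.Sequential([
    layers.Input(shape=(3072,)),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),                     # randomly drop half the units per step
    layers.Dense(10, activation="softmax"),
])
```

The L2 penalty shrinks all weights continuously, while dropout forces the hidden layer not to depend on any single unit; the experiments in this paper compare how these two mechanisms behave when hidden layers contain many neurons.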