Introduction
Although artificial intelligence has long been expected to achieve far more accurate and faster context recognition than humans, inadequate computing power restrained its implementation. However, the rapid growth of computing power and the increasing demand for various forms of context recognition have made deep learning, a branch of artificial intelligence, practical. As object recognition technology based on deep learning has advanced rapidly, image-based context recognition techniques have been applied in a wide range of areas.
The key technologies for self-driving cars are object recognition and context recognition using sensor and image data. These technologies must enable a car to drive, stop, and park with better decision-making than a human driver, even on difficult roads and in poor weather such as rain, fog, snow, and low lighting. The ADAS (advanced driver assistance system) technology currently applied in self-driving cars provides only simple warning and braking functions through object recognition and depends on the driver's judgment in complex situations. For the self-driving function to replace human driving, however, more accurate and faster context recognition in complex and poor environments must be incorporated into ADAS. Accordingly, many recent studies have focused on deep learning to improve both the context recognition capability of ADAS and the preprocessing of degraded image data (Fukui, Yamashita, Yamauchi, Fujiyoshi & Murase, 2016; Nedevschi et al., 2008).
Preprocessing of the image data obtained from cameras is essential for working with high-quality images under poor road and weather conditions. Preprocessing prevents incorrect judgments by improving the object recognition performance of ADAS, and deep learning can greatly improve the accuracy of image-based object and context classification. Many recent studies and experiments have demonstrated the efficiency of the CNN (convolutional neural network) for classifying objects and contexts (Farabet, Martini, Akselrod, Talay & LeCun, 2010). CNN techniques therefore offer a promising way to push vehicle vision applications to a high level of accuracy, since the autonomous vehicles of the future will rely heavily on high-resolution imaging (Sochor, Herout & Havel, 2016; Liao et al., 2015).
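To illustrate the core operation such a CNN applies (a generic sketch, not taken from the cited works), a convolution layer slides a small kernel over the image to extract local features such as edges. A minimal NumPy version of a single valid 2-D convolution might look like this:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding): the basic feature-extraction
    operation a CNN layer applies to a road-scene image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is the weighted sum of a local patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A real CNN stacks many such layers (with learned kernels, nonlinearities, and pooling), but the classification accuracy discussed above ultimately rests on this operation.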
ADAS must maintain its performance despite changes in weather and lighting, such as rain and low light. Because a conventional CNN-based ADAS stores previously learned weights in memory, it cannot use weights optimized for the current weather conditions; that is, new training cannot be applied in real time. Consequently, the performance of a conventional CNN-based ADAS depends on the quality of the input images. To solve this problem, conventional CNN-based ADAS place a preprocessing accelerator that improves image quality before the CNN-based image classifier (Krizhevsky, Sutskever & Hinton). Such a hardware accelerator, however, imposes a heavy burden on ADAS because of the large amount of hardware it adds.
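The kind of correction such a preprocessing stage performs can be sketched as a gamma adjustment; this is a hypothetical minimal example of the software equivalent, not the actual accelerator design:

```python
import numpy as np

def gamma_correct(image, gamma):
    """Gamma-adjust an image with pixel values in [0, 1].

    For normalized pixels, gamma < 1 brightens dark regions
    (e.g. low-light frames) and gamma > 1 darkens over-exposed ones.
    """
    return np.clip(image, 0.0, 1.0) ** gamma

# A dark synthetic frame: gamma = 0.5 lifts pixel 0.04 up to 0.2.
dark = np.full((4, 4), 0.04)
brightened = gamma_correct(dark, 0.5)
```

In hardware this mapping is typically a fixed lookup table applied per pixel, which is exactly the extra circuitry the proposed training method aims to avoid.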
Instead of using extensive hardware to correct distorted input images, we propose training the convolutional neural network with distorted images. With this training method, a CNN-based ADAS shows a low error rate even under high contrast, providing significant adaptability to changing weather while reducing hardware complexity. In this paper, we compare the conventional training method using a preprocessing accelerator with the proposed training method using gamma variation (Jeong, 1977, pp. 19-21).
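A minimal sketch of this idea, assuming the training set is expanded with gamma-distorted copies of each image so the network learns contrast-robust features (function names and gamma values here are illustrative, not taken from the paper):

```python
import numpy as np

def gamma_augment(images, labels, gammas=(0.5, 1.0, 2.0)):
    """Expand a training batch with gamma-distorted copies.

    images: array of shape (n, h, w), pixel values in [0, 1].
    labels: array of shape (n,); each distorted copy keeps its label,
    so the CNN is trained to classify correctly under varied contrast.
    """
    images = np.clip(images, 0.0, 1.0)
    aug_images = np.concatenate([images ** g for g in gammas], axis=0)
    aug_labels = np.tile(labels, len(gammas))
    return aug_images, aug_labels
```

Training on the augmented set replaces the run-time preprocessing accelerator with a one-time cost at training time, which is the trade-off the experiments in this paper evaluate.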