Introduction
The ubiquitous deployment of wireless embedded camera sensors in imaging applications has resulted in exponential growth of image and video data, creating an emerging need to automatically extract meaningful information from the raw data these sensors generate. For this reason, automatic image and video analysis tools, enabled by machine learning and other mathematical approaches, have been deployed on computing devices to turn raw data into information. For imaging applications, automatic analysis results can be used to trigger alarms for abnormal events, provide situational awareness to human operators, and support automatic control of physical systems, among many other uses.
To assure the accuracy of automatic analysis methods, the embedded cameras in a system should provide images of satisfactory quality. Many image quality assessment (IQA) models have been designed to evaluate the perceptual quality of images as judged by human users (Wang, Bovik, Sheikh, & Simoncelli, 2004). However, the quality of an image evaluated by an automatic analysis tool is not necessarily sensitive to the same factors that drive human perception. Perceptual IQA accounts for characteristics of the human visual system (HVS) such as visual attention and contrast sensitivity. Due to the foveation property of the HVS, at any given instant only a local area of the image is perceived at high resolution at typical viewing distances (Gu et al., 2016), and the HVS is sensitive to relative rather than absolute luminance changes (Wang, Bovik, Sheikh, & Simoncelli, 2004). In contrast, automatic analysis methods executed on computing devices have a global "view" of an image and can "perceive" absolute luminance changes precisely. A few recent studies have explored the distinct characteristics of image quality for automatic analysis algorithms. For example, a study of motion imagery quality for tracking in airborne reconnaissance systems shows that factors such as temporal jitter, noise level, and edge sharpness strongly affect the accuracy of target detection, and that, unlike human users, automatic detection algorithms are less sensitive to spatial resolution (Irvine & Wood, 2013). In our recent work (Kong, Dai, & Zhang, 2016), we found that the performance of object detection algorithms can be affected by the quality of the background areas, unlike human observers, who can easily detect moving objects against blurred backgrounds.
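To make the relative-versus-absolute distinction concrete, the following sketch (an illustration only, not drawn from the cited studies; the luminance values are assumed) compares Weber contrast for the same absolute luminance step on a dark and a bright background. To the HVS the two steps look very different, while an algorithm measures the identical absolute difference in either case:

```python
# Illustrative sketch (assumed values): the HVS responds to relative
# (Weber) contrast, (L_patch - L_background) / L_background, whereas an
# algorithm can measure the absolute difference L_patch - L_background
# precisely regardless of the background level.

def weber_contrast(patch, background):
    """Relative luminance change of a patch against its background."""
    return (patch - background) / background

dark_bg, bright_bg = 20.0, 200.0   # hypothetical background luminances
step = 10.0                        # identical absolute luminance increment

c_dark = weber_contrast(dark_bg + step, dark_bg)        # 0.5  -> clearly visible
c_bright = weber_contrast(bright_bg + step, bright_bg)  # 0.05 -> barely visible

print(c_dark, c_bright)
```

The same 10-unit step yields a Weber contrast ten times larger on the dark background, which is why a perceptual IQA model weights the two cases very differently even though an analysis algorithm sees them as identical.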
Therefore, image quality as evaluated by automatic analysis requires further investigation, and new quality models tailored to automatic analysis algorithms are needed.
For wireless imaging systems, an automatic analysis module can be deployed either on compressed videos at a central server or on uncompressed videos at the local cameras. Several recent works (Kong & Dai, 2016; Kong & Dai, 2017; Wang, Li, Zhang, & Yang, 2018; Kong & Dai, 2018) have studied the impact of video compression on the accuracy of analysis algorithms. Apart from the distortion introduced by lossy compression, the quality of an image or a video can be degraded by factors such as blur or noise during the image sensing process; these factors should also be taken into account when evaluating image quality. Obtaining general conclusions on image quality for automatic analysis is challenging because analysis methods depend heavily on the specific application. However, the common and most fundamental step in automatic analysis is object detection, as the detected objects are the basis for higher-level tasks such as object tracking and behavior understanding. Moreover, many existing embedded camera platforms, such as CITRIC (Chen et al., 2013) and SWEETcam (Abas, Porto, & Obraczka, 2014), incorporate lightweight object detection algorithms on board. For such platforms, it would be helpful if the quality of an image for object detection could be predicted and adjusted with lightweight solutions.
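As a concrete illustration of the kind of lightweight on-board detection such platforms employ, frame differencing against a background model can be sketched in a few lines. This is a generic sketch under assumed values (threshold, synthetic frames), not the CITRIC or SWEETcam implementation:

```python
# Illustrative sketch (assumed, not the CITRIC/SWEETcam code): frame
# differencing, the core of many lightweight on-board object detectors.
# A pixel is flagged as foreground when its absolute difference from a
# background model exceeds a threshold.

def detect_moving_pixels(frame, background, threshold=25):
    """Return a boolean foreground mask for two equally sized grayscale frames."""
    return [
        [abs(f - b) > threshold for f, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]

# Synthetic 8x8 example: a static background (gray level 50) plus one
# bright 2x2 "object" patch in the new frame.
background = [[50] * 8 for _ in range(8)]
frame = [row[:] for row in background]
for r in (3, 4):
    for c in (3, 4):
        frame[r][c] = 200  # hypothetical moving object

mask = detect_moving_pixels(frame, background)
print(sum(map(sum, mask)))  # 4 pixels flagged as foreground
```

Because such a detector compares absolute pixel differences against the background model, degradations like blur or noise in the background directly perturb its output, which is consistent with the earlier observation that background quality affects detection accuracy.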