An Image Quality Adjustment Framework for Object Detection on Embedded Cameras

Lingchao Kong, Ademola Ikusan, Rui Dai, Dara Ros
DOI: 10.4018/IJMDEM.291557

Abstract

Automatic analysis tools are widely deployed on wireless embedded cameras to extract high-level information from raw data. The quality of captured images may be degraded by factors such as noise and blur introduced during the sensing process, which can affect the performance of automatic analysis. Object detection is the first and most fundamental step in the automatic analysis of visual information. This paper introduces a quality adjustment framework that provides satisfactory object detection performance on wireless embedded cameras. Key components of the framework include a blind regression model for predicting the performance of object detection and two distortion type classifiers for determining the presence of noise and blur in an image. Experimental results show that the proposed framework achieves accurate estimates of image distortion types and can be deployed on embedded cameras with low computational complexity to improve the quality of captured images.
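
To make the framework's structure concrete, the following is a minimal sketch of the adjustment loop in Python. The components here are illustrative stand-ins: the variance-of-Laplacian blur measure, the median-filter noise residual, and the fixed thresholds are placeholders, not the learned classifiers or blind regression model proposed in the paper.

    import cv2
    import numpy as np

    # Illustrative stand-ins: the paper's distortion classifiers and
    # blind regression model are learned, not the fixed heuristics below.

    def blur_metric(gray):
        # Variance of the Laplacian: low values suggest a blurred image.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def noise_metric(gray):
        # Mean residual after median filtering: high values suggest noise.
        denoised = cv2.medianBlur(gray, 3)
        return float(np.mean(np.abs(gray.astype(np.float64) -
                                    denoised.astype(np.float64))))

    def adjust_for_detection(frame, blur_thresh=100.0, noise_thresh=3.0):
        # Classify the distortion type, then apply a matching lightweight fix.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if noise_metric(gray) > noise_thresh:
            frame = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)
        if blur_metric(gray) < blur_thresh:
            # Unsharp masking as a lightweight sharpening stand-in.
            soft = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
            frame = cv2.addWeighted(frame, 1.5, soft, -0.5, 0)
        return frame

The structure mirrors the framework: the distortion type is estimated first, and the enhancement applied depends on whether noise, blur, or both are detected.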

Introduction

The ubiquitous deployment of wireless embedded camera sensors for various imaging applications has resulted in an exponential growth of image and video data. There is an emerging need to automatically extract meaningful information from the raw data generated by camera sensors. For this reason, automatic image and video analysis tools, enabled by machine learning and other mathematical approaches, have been deployed on computing devices to distill raw data into information. For imaging applications, automatic analysis results can be used to trigger alarms for abnormal events, provide situational awareness to human operators, and facilitate automatic control of physical systems, among many other uses.

To assure the accuracy of automatic analysis methods, the embedded cameras in a system should provide images of satisfactory quality. Many image quality assessment (IQA) models have been designed to evaluate the perceptual quality of images as judged by human users (Wang, Bovik, Sheikh, & Simoncelli, 2004). However, the quality of an image evaluated by an automatic analysis tool is not necessarily sensitive to the same factors that drive human perception. Perceptual image quality assessment accounts for characteristics of the human visual system (HVS) such as the visual attention and contrast sensitivity mechanisms. Due to the foveation feature of the HVS, only a local area of an image can be perceived at high resolution at any instant at typical viewing distances (Gu et al., 2016), and the HVS is sensitive to relative rather than absolute luminance changes (Wang, Bovik, Sheikh, & Simoncelli, 2004). In contrast, automatic analysis methods executed on computing devices have a global “view” of an image and can “perceive” absolute luminance changes precisely. A few recent studies have explored the unique characteristics of image quality for automatic analysis algorithms. For example, a study of motion imagery quality for tracking in airborne reconnaissance systems shows that factors such as temporal jitter, noise level, and edge sharpness strongly affect the accuracy of target detection, and that, unlike human users, automatic detection algorithms are less sensitive to spatial resolution (Irvine & Wood, 2013). In our recent work (Kong, Dai, & Zhang, 2016), we found that the performance of object detection algorithms can be affected by the quality of the background areas, whereas human observers can easily detect moving objects against blurred backgrounds. Therefore, the quality of images evaluated by automatic analysis requires further investigation, and new quality models are needed for automatic analysis algorithms.
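
The effect of such distortions on a detector can be observed directly. The sketch below uses OpenCV's stock HOG pedestrian detector purely as a stand-in (it is not the algorithm studied in the cited works) and counts how many detections survive as synthetic Gaussian noise and blur of increasing strength are applied; the test image name is hypothetical.

    import cv2
    import numpy as np

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def count_detections(image):
        rects, _ = hog.detectMultiScale(image, winStride=(8, 8))
        return len(rects)

    image = cv2.imread("street_scene.jpg")  # hypothetical test image

    for sigma in (5, 15, 30):
        # Additive Gaussian noise of increasing strength.
        noisy = np.clip(image + np.random.normal(0, sigma, image.shape),
                        0, 255).astype(np.uint8)
        # Gaussian blur of increasing strength.
        blurry = cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)
        print(f"sigma={sigma}: clean={count_detections(image)}, "
              f"noisy={count_detections(noisy)}, blurry={count_detections(blurry)}")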

For wireless imaging systems, an automatic analysis module can be deployed either on compressed videos at a central server or on uncompressed videos at local cameras. Some recent works (Kong & Dai, 2016; Kong & Dai, 2017; Wang, Li, Zhang, & Yang, 2018; Kong & Dai, 2018) have studied the impact of video compression on the accuracy of analysis algorithms. Apart from the distortion introduced by lossy compression, the quality of an image or a video can be degraded by factors such as blur or noise during the image sensing process. These factors should also be taken into account when evaluating image quality. It is challenging to obtain general solutions for image quality for automatic analysis, since analysis methods are largely application specific. However, the common and most fundamental step in automatic analysis is object detection, as the detected objects are the basis for higher-level analysis tasks such as object tracking and behavior understanding. Moreover, many existing embedded camera platforms, such as CITRIC (Chen et al., 2013) and SWEETcam (Abas, Porto, & Obraczka, 2014), have incorporated lightweight object detection algorithms on board. For embedded camera platforms, it would be helpful if the quality of an image for object detection could be predicted and adjusted with lightweight solutions.
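
Background subtraction is a typical example of such a lightweight on-board detector. The sketch below uses OpenCV's MOG2 background model, one common choice (the exact algorithms shipped with CITRIC and SWEETcam are not reproduced here), to extract moving-object bounding boxes from a camera stream:

    import cv2
    import numpy as np

    # MOG2: an adaptive background model cheap enough for embedded use.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    cap = cv2.VideoCapture(0)  # first attached camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels (MOG2 marks them 127) and small speckles.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) > 500]
        # 'boxes' now holds (x, y, w, h) for each detected moving object.
    cap.release()

Because such a detector operates on raw frames, its output depends directly on sensing distortions like the noise and blur discussed above, which motivates predicting and adjusting image quality on the camera itself.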
