A Survey of Approaches for Estimating Meteorological Visibility Distance Under Foggy Weather Conditions

Faouzi Kamoun, Hazar Chaabani, Fatma Outay, Ansar-Ul-Haque Yasar
DOI: 10.4018/978-1-5225-9019-4.ch002

Abstract

The immaturity of fog abatement technologies for highway usage has led to growing interest in developing intelligent transportation systems that are capable of estimating meteorological visibility distance under foggy weather conditions. This capability is crucial to support next-generation cooperative situational awareness and collision avoidance systems as well as onboard driver assistance systems. This chapter presents a survey and a comprehensive taxonomy of daytime visibility distance estimation approaches based on a review and synthesis of the literature. The proposed taxonomy is both comprehensive (i.e., captures a wide spectrum of earlier contributions) and effective (i.e., enables easy comparison among previously proposed approaches). The authors also highlight some open research issues that warrant further investigation.
Chapter Preview

Introduction

According to the U.S. Federal Highway Administration (FHWA), fog is a significant contributor to fatal road accidents, as it creates one of the most dangerous types of adverse weather conditions for motorists. Fog can take drivers by surprise, impair their driving behavior and distort their perception of depth, distance and speed (Hamilton et al., 2014). Earlier studies (see, for example, Abdel-Aty et al., 2011) revealed that although the number of reported car accidents due to fog is not substantial, the resulting vehicle crashes are often associated with large-scale chain collisions and higher fatality rates. For instance, in the US, the American Automobile Association (AAA) Foundation for Traffic Safety identified fog as the top causal factor of fatal multi-vehicle crashes involving 10 or more vehicles (Hamilton et al., 2014).

Fog abatement technologies for highway usage have not yet reached the desired level of efficiency and cost effectiveness. As a result, several advanced roadside and driver assistance systems have been proposed for safer driving in the presence of fog. These include beaded lane delineators, lane departure warning systems, forward collision warning systems, adaptive light control, adaptive cruise control, reflectorized paints on pavement edge striping, electronic message signs and highway advisory radio messages, among many others (Chaabani et al., 2017).

Among the driver assistance systems that received particular attention during the past decade are programmable speed limit signs and variable message signs that can automatically adapt to degraded visibility conditions and warn drivers accordingly (Hautière et al., 2009). There has also been a growing number of research initiatives towards connected vehicular systems based on V2V (Vehicle-to-Vehicle), V2I (Vehicle-to-Infrastructure) and I2I (Infrastructure-to-Infrastructure) technologies that allow intelligent road-side units (RSUs) and vehicles to cooperate for enhanced awareness of driving conditions and for a more proactive response to low visibility conditions that can take motorists by surprise. However, as highlighted by Hautière et al. (2009), in order to react to their surrounding environment, these assistance systems depend on efficient mechanisms to detect the presence of fog and estimate the visibility range. This information can, for instance, be fed as input to (1) electronic traffic warning signs, (2) speed-limit recommender systems, or (3) adaptive cruise control and emergency braking systems. In addition, autonomous vehicles are equipped with navigation systems that rely on image processing to analyze images from the vehicle’s on-board camera; despite many years of thorough research, adverse weather conditions remain a significant limiting factor to their success. For these reasons, various approaches have been proposed to detect the presence of fog and estimate the atmospheric visibility range.

While researchers have proposed and experimented with a wide spectrum of visibility distance (VD) estimation approaches under different assumptions, little effort has been made to thoroughly examine, classify and compare these approaches. Although it is very important to have a good overview of the field, the classifications proposed in the literature overlook many important criteria and do not cover the complete range of visibility distance estimation approaches. Hence, this contribution aims to present a survey of visibility distance estimation approaches and then propose a taxonomy to objectively classify and distinguish among them. The proposed taxonomy has been developed with two main objectives in mind: comprehensiveness (i.e., capturing a wide spectrum of earlier contributions) and effectiveness (i.e., enabling easy comparison among previously proposed approaches). A special focus is devoted to camera-based visibility distance estimation methods under daytime conditions.

Key Terms in this Chapter

Homomorphic Filtering: A signal/image processing technique often used in digital image enhancement to correct non-uniform illumination.
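A minimal sketch of the idea using NumPy (not taken from the chapter; the Gaussian high-emphasis transfer function and the parameter values are illustrative assumptions):

```python
import numpy as np

def homomorphic_filter(img, sigma=30.0, gamma_low=0.5, gamma_high=1.5):
    """Attenuate low-frequency illumination and emphasize high-frequency
    reflectance detail, a common way to correct non-uniform lighting."""
    rows, cols = img.shape
    log_img = np.log1p(img.astype(np.float64))          # multiplicative model -> additive
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))    # centered frequency spectrum

    # Gaussian high-frequency-emphasis transfer function (illustrative choice)
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    h = (gamma_high - gamma_low) * (1.0 - np.exp(-d2 / (2.0 * sigma ** 2))) + gamma_low

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
    return np.expm1(filtered)                           # back to the intensity domain
```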

Edge Detection: An image processing technique used to identify the boundaries of an object within a digital image by detecting discontinuities in the image’s brightness.
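As a simple illustration (one detector among many), the Canny detector in OpenCV extracts such brightness discontinuities; the file name and hysteresis thresholds below are placeholder assumptions:

```python
import cv2

# Load a grayscale road-scene image (placeholder file name).
img = cv2.imread("road_scene.png", cv2.IMREAD_GRAYSCALE)

# Canny edge detection with illustrative weak/strong gradient thresholds.
edges = cv2.Canny(img, 100, 200)
cv2.imwrite("road_scene_edges.png", edges)
```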

Meteorological Visibility (by Day): The greatest distance at which a black object of suitable dimensions located near the ground can be seen and recognized when observed against the horizon sky.
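This definition is commonly operationalized through Koschmieder's law: with the conventional 5% contrast threshold, the meteorological visibility distance relates to the atmospheric extinction coefficient k as V = -ln(0.05)/k ≈ 3/k. A minimal sketch of that relation (the function name is chosen here for illustration):

```python
import math

def meteorological_visibility(extinction_coeff, contrast_threshold=0.05):
    """Koschmieder relation: V = -ln(threshold) / k; with the conventional
    5% threshold this is roughly V ≈ 3 / k (k in 1/m, V in m)."""
    return -math.log(contrast_threshold) / extinction_coeff

# Example: an extinction coefficient of 0.01 m^-1 gives about 300 m of visibility.
print(meteorological_visibility(0.01))   # ≈ 299.6
```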

LiDAR (Light Detection and Ranging): An optical sensing instrument that measures the distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor.
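As a rough sketch of the ranging principle (not a vendor API), the one-way distance follows from half the round-trip time of flight of the laser pulse:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def lidar_range_m(round_trip_time_s):
    """One-way distance from the round-trip time of flight of a laser pulse."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# Example: a 1 microsecond round trip corresponds to roughly 150 m.
print(lidar_range_m(1e-6))   # ≈ 149.9
```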

Lambertian Surface: A surface that appears uniformly bright to an observer regardless of the observer’s angle of view.
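A short sketch of the underlying idealized model: the radiance leaving a Lambertian surface depends on the illumination and the angle of incidence, but not on the viewing direction (function name and values are illustrative):

```python
import math

def lambertian_radiance(albedo, irradiance, incidence_angle_rad):
    """Radiance leaving an ideal diffuse (Lambertian) surface:
    L = (albedo / pi) * E * cos(theta_i). The result does not depend on
    the direction from which the surface is observed."""
    return (albedo / math.pi) * irradiance * math.cos(incidence_angle_rad)

# The same radiance would be measured from any viewing angle.
print(lambertian_radiance(0.7, 1000.0, math.radians(30)))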

Image Segmentation: The process of partitioning a digital image into multiple homogeneous segments in order to simplify/modify the representation of an image for further analysis; typically used to locate objects and boundaries within an image.
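One simple illustration among many segmentation techniques: Otsu thresholding in OpenCV splits a grayscale image into two homogeneous regions; the file names are placeholders:

```python
import cv2

# Load a grayscale image (placeholder file name); Otsu selects the threshold automatically.
img = cv2.imread("road_scene.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("road_scene_mask.png", mask)
```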

RSU (Road-Side Unit): A dedicated wireless communication device installed along the roadside that provides connectivity and information support to passing vehicles, including safety warnings and traffic information.

Linear Discriminant Analysis (LDA): A method used to find a linear combination of features that characterizes or discriminates between two or more classes of objects; used for statistical analysis, dimensionality reduction and classification.
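A toy sketch with scikit-learn, using two made-up image features (e.g., global contrast and edge density) to separate hypothetical "foggy" and "clear" samples; the data and labels are purely illustrative:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Made-up feature vectors: [global contrast, edge density] per image.
X = np.array([[0.10, 0.05], [0.12, 0.07], [0.55, 0.40], [0.60, 0.45]])
y = np.array([1, 1, 0, 0])                  # 1 = foggy, 0 = clear (toy labels)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(lda.predict([[0.15, 0.08]]))          # -> [1], classified as foggy
```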

Contrast of an Object: The difference in brightness or color that makes an object distinguishable from other objects within the same field of view.
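A minimal sketch of one common formulation (Weber contrast), which also underlies the 5% threshold used in the meteorological visibility definition above; the function name is illustrative:

```python
def weber_contrast(object_luminance, background_luminance):
    """Weber contrast: C = (L_object - L_background) / L_background.
    An object is conventionally considered visible while |C| stays
    above roughly 0.05."""
    return (object_luminance - background_luminance) / background_luminance

# A dark object seen against a much brighter horizon sky.
print(weber_contrast(20.0, 100.0))   # -0.8
```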
