Visual and LIDAR Data Processing and Fusion as an Element of Real Time Big Data Analysis for Rail Vehicle Driver Support Systems

Alper M. Selver (Dokuz Eylul University, Turkey), Enes Ataç (Dokuz Eylul University, Turkey), Burak Belenlioglu (Kentkart, Turkey), Sinan Dogan (Kentkart, Turkey) and Yesim E. Zoral (Dokuz Eylul University, Turkey)
Copyright: © 2018 | Pages: 27
DOI: 10.4018/978-1-5225-3176-0.ch003

Abstract

This chapter reviews the challenges, processing, and analysis techniques for visual and LIDAR-generated information and their potential use in big data analysis for railway monitoring in onboard driver support systems. It surveys both sensors' advantages and limitations, as well as innovative approaches for overcoming the challenges they face. Special focus is given to monocular vision because of its dominant use in the field. A novel contribution is provided for rail extraction using a new hybrid approach, and its results are used to demonstrate the shortcomings of similar strategies. To overcome these disadvantages, dynamic modeling of the tracks is considered. This stage is designed by statistically quantifying the assumptions about track curvature presumed in current railway extraction techniques: by fitting polynomials to hundreds of manually delineated video frames, the variations of the polynomial coefficients are analyzed. Future trends for the processing and analysis of additional sensors are also discussed.
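The curvature-quantification step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the chapter's implementation: it assumes each rail delineation is a set of (row, column) pixel coordinates, fits a quadratic column = p(row) per frame, and measures how the fitted coefficients spread across frames. The frame data here is synthetic, standing in for the manually delineated video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_rail_polynomial(rows, cols, degree=2):
    """Fit column = p(row) to one delineated rail; return the coefficients."""
    return np.polyfit(rows, cols, degree)

# Simulate delineations for 100 frames with slightly varying track geometry
# (a: curvature, b: slope, c: lateral offset in image columns).
coeffs = []
for _ in range(100):
    rows = np.linspace(0, 479, 50)          # image rows, near to far
    a = rng.normal(1e-4, 2e-5)
    b = rng.normal(-0.2, 0.02)
    c = rng.normal(320.0, 5.0)
    cols = a * rows**2 + b * rows + c + rng.normal(0, 0.5, rows.size)
    coeffs.append(fit_rail_polynomial(rows, cols))

coeffs = np.asarray(coeffs)
# The per-coefficient standard deviation quantifies how strongly the track
# curvature assumption varies from frame to frame.
print("coefficient std devs:", coeffs.std(axis=0))
```

Analyzing these spreads indicates how restrictive a fixed-curvature assumption would be for a rail extraction method.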

Introduction

Recognition of objects and obstacles in front of a train is an essential component of railway driver support systems, which are expected to generate an alarm to notify the driver or controllers in case of a dangerous and/or unexpected situation (Lenior, Janssen, Neerincx, & Schreibers, 2006). Such safety systems are becoming ever more important as innovative approaches for autonomous trains and personal rapid transit are considered for transportation (Ultra Global Personal Rapid Transit Systems, 2011). As an emerging element of railway condition monitoring (Chen & Roberts, 2006; Hodge, O’Keefe, & Weeks, 2015), these systems serve various purposes, such as obstacle identification and collision prevention (Ruder, Mohler, & Ahmed, 2003; Wohlfeil, 2011), obstacle-free range detection (Maire & Bigdeli, 2010), self-localization (Maire, 2007), near-miss event analysis (Aminmansour, 2014), and road sign and signaling recognition (Kastrinaki, Zervakis, & Kalaitzakis, 2013). Each of these applications requires different processing approaches and pipelines. For instance, obstacle detection must be performed in real time to prevent collisions, while near-miss event analysis can be carried out offline at a later time. Developing systems that fully satisfy the requirements imposed by these applications calls for diverse types of sensors. The most commonly used devices are cameras, including monocular cameras combined with zoom (Nassu & Ukai, 2012), infrared (Razaei & Sabzevari, 2009), thermal (Berg, Öfjäll, Ahlberg, & Felsberg, 2015), bird’s-eye view (Wang et al., 2015), and stereo view systems (Ohta, 2005).
Other devices include radio-frequency identification (RFID) (Mašek, Kolarovksi, & Čamaj, 2016); radar, such as GSM-Railway-based passive radar (He et al., 2016) and millimeter-wave radar (Yan, Fang, Li et al., 2016); light detection and ranging (LIDAR) (Jwa & Sonh, 2015); ultrasonic devices (Sinha & Feroz, 2016); lasers (Amaral, Marques, Lourenço et al., 2016); and other types of sensors (Cañete, Chen, Diaz, Llopis, & Rubio, 2015).

These sensors are employed to detect and identify objects in front of the train, or obstacles on the rails, before an accident occurs. In conventional systems, each sensor operates independently and monitors the scene on its own. If one of the sensors detects a risk, it activates an alarm to warn the driver. Unfortunately, this direct approach can lead to a significant number of false positives (i.e., incorrect alarms) and, even worse, false negatives (i.e., missed risks) (Chen & Roberts, 2006). Such systems simply ignore the integral capabilities of collective data processing over combined sensor information, so the possibility of obtaining an overall picture of the forthcoming scene is lost. Thus, these numerous sensors should be utilized together, and new strategies for integrating the data they acquire should be developed in order to achieve high accuracy and robustness in all scenarios. Moreover, because of the wide variety of operating conditions, these systems should also be low in cost, easy to deploy, and simple to operate.
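The contrast between independent per-sensor alarms and a fused decision can be illustrated with a toy example. The sketch below is an assumption-laden illustration, not a method from the chapter: the sensor names, scores, weights, and thresholds are all hypothetical. It shows a case where no single sensor crosses its own alarm threshold, yet a confidence-weighted combination of all sensors does.

```python
def per_sensor_alarm(scores, threshold=0.8):
    """Conventional scheme: any single sensor over threshold raises the alarm."""
    return any(s >= threshold for s in scores.values())

def fused_alarm(scores, weights, threshold=0.6):
    """Fusion scheme: a confidence-weighted average of all sensor scores."""
    total = sum(weights[name] * scores[name] for name in scores)
    return total / sum(weights.values()) >= threshold

# Camera sees a faint shape and LIDAR returns a consistent obstacle; neither
# alone crosses the per-sensor threshold, but together they warrant an alarm.
scores = {"camera": 0.7, "lidar": 0.75, "radar": 0.4}
weights = {"camera": 1.0, "lidar": 1.5, "radar": 0.8}

print(per_sensor_alarm(scores))          # False: every sensor stays below 0.8
print(fused_alarm(scores, weights))      # True: combined evidence exceeds 0.6
```

The same weighting idea also cuts false positives: a single noisy sensor spiking alone contributes too little weighted evidence to trigger the fused alarm.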
