Meta-Cognition for Inferring Car Driver Cognitive Behavior from Driving Recorder Data

Fumio Mizoguchi (Tokyo University of Science, Chiba, Japan & WisdomTex, Inc., Tokyo, Japan) and Hirotoshi Iwasaki (Denso IT Laboratory, Inc., Tokyo, Japan)
DOI: 10.4018/IJCINI.2016070101


This study focused on driver behavior by inferring it from driving recorder data; the authors refer to this inference function as meta-cognition. Using this meta-cognition, they attempt to characterize driver behavior on the highway. By comparing ACT-R simulation results with recorder data, the authors investigated the driver's cognitive process during highway driving in the lane-keeping, curve-negotiation, and lane-changing subtasks. To keep the driving experiment realistic, they used a simple, commercially available driving recorder (type DVRGPS-04, made by Geanee Corporation in Japan). Using this recorder, they recorded data on the highway, driving for the most part on the Shuto, Kanetsu, Jouban, and Joushinetsu highways from Meguro in Tokyo to Nagano or to Kashiwa in Chiba. The recording time was about two hours, and the data were recorded as video images stored on a microSD memory card in the driving recorder.

1. Introduction

Various fields of study have proposed that automobiles become Internet terminals; Europe, for example, has played a leading role in future Internet planning. Information devices in automobiles add to the growing number of in-car services, resulting in a high driving load, not only for navigation but also for access to all kinds of information. We regarded this load as a workload in a paper presented at Cognitive Informatics 2011 (ICCI*CC2011) (Sega et al., 2011b). The load arises from unconscious actions during driving and cannot be measured directly. We have addressed this problem by measuring eye movement during driving, which in turn guides our measurement of the workload. Based on these measurements, we established a methodology by which car drivers can drive in an optimal state while operating advanced information devices. We presented this new methodology at ICCI*CC2014 (Mizoguchi et al., 2014), where we showed a man-machine system that uses qualitative reasoning about the mental load in an interactive environment. We identify the human cognitive state that determines whether the driver is ready to operate the car navigation device. From this cognitive-state identification, we develop a minimum-mental-load methodology for achieving comfortable machine operation. In addition, we empirically measure and verify the heavy loads experienced by a series of car drivers.

The current study focuses on driver behavior in highway driving, using a simple driving-recorder-based method to determine the normal cognitive process of driving. Driving is a complex cognitive behavior, so a simple method such as the driving recorder used in this study is needed to record the driving process as video images. In this study, we do not measure eye movement or any biological signal: measuring eye movement restricts driving behavior and makes driving in highway traffic risky. We therefore use a driving recorder as an easier way to collect driving behavior in our experiments. The driving recorder is installed in the car as shown in Figure 1.

Figure 1.

Driving recorder set up in the car


2. Earlier Studies

At the beginning of this paper, we explain two of our past papers to enable a more detailed understanding.

2.1. ICCI*CC2014 Paper (Mizoguchi et al., 2014)

We studied car drivers' cognitive mental load (Sega et al., 2011a) and generated rules of relaxation or tension using drivers' eye-movement data and driving data obtained from an experimental car through the Denso lab's in-vehicle LAN. However, neither relaxed nor tense driving is necessarily distracted driving. In one example of tense-driving data, the driver was operating a car navigation system while driving, and this behavior was considered distracted driving. In our previous study (Sega et al., 2011a), we showed that about 80% of the obtained eye-movement and driving data can be interpreted by our qualitative model of the cognitive mental load. In this model, we represent the relationship between the saccades and fixations in the eye-movement data and the operation of the accelerator, steering, and brake in the driving data as an increase or decrease in the driver's mental load. We focus on the eye-movement and driving data taken during difficult situations that meet this description. We consider visual distraction to occur when the driver is viewing an object not related to driving: the driver may, for example, check the car navigation system or operate a mobile phone, and even just looking away is a visual distraction. In our study, we try to detect visual distraction using the obtained eye-movement and driving data as observation data. This is a new perspective on the driver's mental workload and its relation to distraction (Wang et al., 2013), and this paradigm shift is very important for safe driving. In this study, we examine the possibility of detecting cognitive distraction using saccade-frequency information extracted from the eye-movement data. For this purpose, we first introduce new rules of visual distraction using the collected eye-movement and driving data (see Figure 2).

Figure 2.

Eye movement and driving data
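The idea of flagging distraction from saccade frequency can be sketched as a simple windowed classifier. This is only an illustrative sketch, not the authors' actual rules: the `GazeEvent` type, the 5-second window, and the 3 Hz threshold are all hypothetical choices made for the example.

```python
from dataclasses import dataclass

@dataclass
class GazeEvent:
    t: float     # event time in seconds
    kind: str    # "saccade" or "fixation"

def saccade_frequency(events, window_s):
    """Saccades per second within a window of length window_s."""
    n = sum(1 for e in events if e.kind == "saccade")
    return n / window_s

def is_visually_distracted(events, window_s=5.0, threshold_hz=3.0):
    """Flag a window as visually distracted when the saccade
    frequency exceeds a (hypothetical) threshold."""
    return saccade_frequency(events, window_s) > threshold_hz

# Example: 20 saccades in a 5-second window gives 4 Hz,
# which exceeds the assumed 3 Hz threshold.
events = [GazeEvent(t=0.25 * i, kind="saccade") for i in range(20)]
print(is_visually_distracted(events))  # True under these assumed settings
```

In practice, the rules would combine this eye-movement signal with the driving data (accelerator, steering, brake), since a raised saccade rate alone does not distinguish scanning traffic from glancing at a navigation screen.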

