Prototyping Smart Assistance with Bayesian Autonomous Driver Models

Claus Moebus, Mark Eilers
DOI: 10.4018/978-1-61692-857-5.ch023

Abstract

The Human or Cognitive Centered Design (HCD) of intelligent transport systems requires digital Models of Human Behavior and Cognition (MHBC) enabling Ambient Intelligence, e.g. in a smart car. Currently, MHBC are developed and used as driver models in traffic scenario simulations, in proving safety assertions, and in supporting risk-based design. Furthermore, it is tempting to prototype assistance systems (AS) on the basis of a human driver model cloning an expert driver. To that end we propose the Bayesian estimation of MHBC from human behavior traces generated in a new kind of learning experiment: Bayesian model learning under driver control. The models learnt are called Bayesian Autonomous Driver (BAD) models. For the purpose of smart assistance in simulated or real-world scenarios the obtained BAD models can be used as Bayesian Assistance Systems (BAS). The critical question is whether the driving competence of the BAD model is the same as that of the human driver who generated the training data. We believe that our approach is superior to the proposal to model the strategic and tactical skills of an AS with a Markov Decision Process (MDP). The usage of a BAD model or BAS as a prototype for a smart Partial Autonomous Driving Assistant System (PADAS) is demonstrated within a racing game simulation.

1 Introduction

The Human or Cognitive Centered Design (HCD) (Norman, 2007; Sarter et al., 2000) of intelligent transport systems requires digital Models of Human Behavior and Cognition (MHBC) enabling Ambient Intelligence (AMI), e.g. in a smart car. The AMI paradigm is characterized by systems and technologies that are embedded, context aware, personalized, adaptive, and anticipatory (Zelkha et al., 1998). The models and prototypes we propose here are of that type.

Currently, MHBC are developed and used as driver models in traffic scenario simulations (Cacciabue et al., 2007, 2011), in proving safety assertions, and in supporting risk-based design. In all cases it is assumed that the conceptualization and development of MHBC and ambient intelligent assistance systems are parallel and independent activities (Flemisch et al., 2008; Löper et al., 2008). In the near future, with the need for smarter and more intelligent assistance, the problem of transferring human skills (Yangsheng et al., 2005) into the envisioned technical systems becomes more and more apparent, especially when there is no sound skill theory at hand.

The conventional approach to develop smart assistance is to build control-theoretic or artificial-intelligence-based prototypes (Cacciabue et al., 2007, 2011) first and then to evaluate their learnability, usability, and human likeness ex post. This makes revision-evaluation cycles necessary, which further delay time-to-market and introduce extra costs. An alternative approach is the handcrafting of MHBC (Baumann et al., 2009; Gluck et al., 2005; Jürgensohn, 2007; Möbus et al., 2007; Salvucci, 2004, 2007; Weir et al., 2007) on the basis of human behavior traces and their modification into prototypes for smart assistance. Here, an ex post evaluation of their human likeness or empirical validity and revision-evaluation cycles remain obligatory, too.

We propose a third, machine-learning alternative. It is tempting to prototype assistance systems on the basis of a human driver model cloning an expert driver. To that end we propose the Bayesian estimation of MHBC from human behavior traces generated in a new kind of learning experiment: Bayesian model learning under driver control. The models learnt are called Bayesian Autonomous Driver (BAD) models.

Dynamic probabilistic models are appropriate for this challenge, especially when they are learnt online in Bayesian model learning under driver control. For the purpose of smart assistance in simulated or real-world scenarios the obtained BAD models can be used as prototypical Bayesian Assistance Systems (BAS). The critical question is whether the driving competence of the BAD model is the same as that of the human driver who generated the training data.

We believe that our approach is superior to the proposal to model the strategic skills of a PADAS with a Markov Decision Process (MDP) (Tango et al., 2011). An MDP needs a reward function. This function has to be derived either deductively or inductively by solving the inverse reinforcement learning problem (Abbeel et al., 2004). The deductive derivation of a reward function often results in strange, nonhuman overall behaviors. The inductive mining of the reward function from car trajectories or behavior traces seems to be a detour and more challenging than our approach.

The two new concepts, Bayesian learning of agent models under human control and the usage of a BAD model as a BAS or PADAS, are demonstrated by constructing a prototypical smart assistance system for driving stabilization within the racing game simulation TORCS (TORCS, 2011).

Key Terms in this Chapter

Computational Agent Model: Computational agent models have to represent perceptions, beliefs, goals, and actions of ego and alter agents.

Bayesian Filter and Action Model (BFAM): In the Bayesian Filter and Action Model, actions depend not only on the current process state but also on the directly antecedent action; thus the generation of erratic behavior is suppressed. Furthermore, the BFAM includes direct action effects on the next process state. This is important when the influence of action effects should be modeled directly in the state, without making a detour via the environment and the perception of the agent.
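
The dependence structure described above can be written as a two-slice factorization. The following sketch is illustrative only; the variable names S (state), Z (percept), and A (action) and the exact factor set are our assumptions, not the chapter's notation:

    % Illustrative BFAM factorization over two adjacent time slices (t-1, t):
    % the transition depends on the previous action, the action depends on the
    % current state and the previous action, and percepts depend on the state.
    \[
    P(S_t, Z_t, A_t \mid S_{t-1}, A_{t-1})
      = \underbrace{P(S_t \mid S_{t-1}, A_{t-1})}_{\text{action-dependent transition}}
        \, \underbrace{P(Z_t \mid S_t)}_{\text{perception}}
        \, \underbrace{P(A_t \mid S_t, A_{t-1})}_{\text{action smoothed by previous action}}
    \]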

Bayesian Autonomous Driver with Mixture-of-Behaviors (BAD-MoB) Model: The model is suited to represent the sensor-motor system of individuals or groups of human or artificial agents in the functional autonomous layer or stage of Anderson. In a MoB model it is assumed that the behavior can be generated context-dependently as a mixture of ideal schematic behaviors (= experts). The template or class model is distributed across two time slices and tries to avoid the latent-state assumptions of Hidden Markov Models. Learning data are time series or case data of the relevant variables: percepts, goals, and actions. Goals are the only latent variables, which can be set by commands issued by the higher associative layer.
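
A minimal sketch of the mixture idea, assuming discretized actions and two hypothetical behaviors; all names and numbers below are illustrative, not values from the chapter:

    import numpy as np

    # Discretized action space, e.g. candidate steering angles
    action_bins = np.linspace(-1.0, 1.0, 5)

    # P(A | behavior, evidence): one CPD per hypothetical behavior ("expert")
    p_action_given_behavior = {
        "follow_lane": np.array([0.05, 0.15, 0.60, 0.15, 0.05]),
        "curve_entry": np.array([0.40, 0.35, 0.15, 0.07, 0.03]),
    }

    # P(behavior | evidence): responsibilities inferred from percepts and goals
    p_behavior = {"follow_lane": 0.3, "curve_entry": 0.7}

    # Mixture of behaviors: P(A | evidence) = sum_b P(b | evidence) P(A | b, evidence)
    p_action = sum(w * p_action_given_behavior[b] for b, w in p_behavior.items())

    # Sample an action (or take the expectation) for real-time control
    steering = np.random.choice(action_bins, p=p_action)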

Dynamic Bayesian Networks (DBNs): In the case of identical time-slices and several identical temporal links we have a repetitive temporal model called a Dynamic Bayesian Network (DBN). DBNs are dynamic probabilistic models. HMMs and DBNs are mathematically equivalent. However, there is a trade-off between estimation efficiency and descriptive expressiveness: estimation in HMMs is more efficient than in DBNs due to special-purpose algorithms (Viterbi, Baum-Welch), whereas descriptive flexibility is greater in DBNs. At the same time, the state space grows more rapidly in HMMs than in corresponding DBNs.

Shared Space: A traffic design approach based on the observation that individuals’ behavior in traffic is more positively affected by the built environment of the public space than by conventional traffic control devices (signals, signs, road markings, etc.) or regulations.

Bayesian Learning of Agent Models under Human Control: The performance of the BAD model is observed by the human driver while the BAD model is driving. New data are learned only when the model behavior is unsatisfactory. By observing and correcting the actions of the BAD model only when needed, problems can be solved that are nearly impossible to discover by just analyzing its probability distributions.
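
A hedged sketch of this interaction loop; every function and method name below (get_percepts, human_override, update_cpds, and so on) is a hypothetical placeholder, not an API from the chapter or from TORCS:

    def drive_under_human_control(bad_model, simulator, n_steps):
        """The BAD model drives; the human only intervenes when its behavior
        is unsatisfactory, and only those corrected cases are learned."""
        for _ in range(n_steps):
            percepts = simulator.get_percepts()         # current sensory evidence
            action = bad_model.sample_action(percepts)  # model proposes an action

            correction = simulator.human_override()     # None unless the human intervenes
            if correction is not None:
                # Learn only from the corrected case: add (percepts, correction)
                # to the training data and re-estimate the affected CPDs.
                bad_model.update_cpds(percepts, correction)
                action = correction

            simulator.apply(action)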

Partial or Non-Cooperative Scenario: A scenario in which goals are issued by several different principals.

Cooperative Scenario: A scenario in which goals are issued by one single principal.

Dynamic Bayesian Filter (DBF): The DBF is an HMM with state, percept, and motor variables. The general algorithm consists of two steps in each iteration or recursive call: 1. Prediction step: from the most recent a priori belief(state) and the current control (= action), compute a provisional belief(state); 2. Correction step: from the provisional belief(state) and the current measurements (= percepts), compute the a posteriori belief(state).
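
A minimal sketch of this predict/correct recursion for a discrete state space; the transition and sensor matrices are small illustrative toy values, not parameters from the chapter:

    import numpy as np

    n_states = 3
    # P(S_t | S_{t-1}, A_{t-1}): one transition matrix per discrete action
    transition = {
        "keep": np.array([[0.9, 0.1, 0.0],
                          [0.1, 0.8, 0.1],
                          [0.0, 0.1, 0.9]]),
    }
    # P(Z_t | S_t): rows = states, columns = discrete percepts
    sensor = np.array([[0.7, 0.2, 0.1],
                       [0.2, 0.6, 0.2],
                       [0.1, 0.2, 0.7]])

    def bayes_filter_step(belief, action, percept):
        # 1. Prediction: fold the action-dependent transition into the prior belief
        predicted = transition[action].T @ belief
        # 2. Correction: weight by the percept likelihood and renormalize
        corrected = sensor[:, percept] * predicted
        return corrected / corrected.sum()

    belief = np.ones(n_states) / n_states       # uniform initial belief(state)
    belief = bayes_filter_step(belief, "keep", percept=1)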

Cooperative Driving Scenario: A driving scenario with in-vehicle cooperation between a human driver and a BAS.

Bayesian (Robot) Programs (BPs): BP is a simple and generic framework suitable for the description of human sensory-motor models in the presence of incompleteness and uncertainty. It provides integrated model-driven data analysis and model construction. In contrast to conventional Bayesian network models, BP models put emphasis on a recursive structure and infer concrete motor actions for real-time control on the basis of sensory evidence. Actions are sampled from CPDs according to various strategies after propagating sensor or task-goal evidence.

Bayesian Assistance Systems (BAS): For the purpose of smart assistance in simulated or real-world scenarios the obtained Bayesian Autonomous Driver (BAD) models can be used as prototypical Bayesian Assistance Systems (BAS). Due to their probabilistic nature, BAD models or BAS can not only be used for real-time control but also for the real-time detection of anomalies in driver behavior and the real-time generation of supportive interventions (countermeasures).

Bayesian Autonomous Driver (BAD) Model: BAD models describe phenomena on the basis of the variables of interest and the decomposition of their joint probability distribution (JPD) into conditional probability distributions (CPD factors) according to the special chain rule for Bayesian networks. The underlying conditional independence hypotheses (CIHs) between sets of variables can be tested by standard statistical methods (e.g. the conditional mutual information index). The parameters of BAD models can be learnt objectively with statistically sound methods, either in batch from multivariate behavior traces or from single cases. Due to their probabilistic nature, BAD models or BAS can not only be used for the real-time control of vehicles but also for the real-time detection of anomalies in driver behavior and the real-time generation of supportive interventions (countermeasures).
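
As a worked illustration of the chain-rule decomposition, assume the variables of interest are goals G, percepts Z, and actions A; the variable set and the example CIH are our assumptions, not the chapter's concrete model:

    % Chain rule for a Bayesian network over variables X_1, ..., X_n:
    \[
    P(X_1, \dots, X_n) \;=\; \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{pa}(X_i)\right)
    \]
    % Illustrative instance: exact chain rule for goals G, percepts Z, actions A,
    % then a simplification under an assumed (testable) CIH that G and Z are
    % marginally independent.
    \[
    P(G, Z, A) \;=\; P(G)\, P(Z \mid G)\, P(A \mid G, Z)
    \;\approx\; P(G)\, P(Z)\, P(A \mid G, Z) \quad \text{if } G \perp Z .
    \]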

Dynamic Probabilistic Model: Dynamic probabilistic models evolve over time. If the model contains discrete time stamps, one can have a local model for each unit of time. These local models are called time-slices. The time-slices are connected through temporal links to give the full model.

Anticipatory Planning: For anticipatory planning, the conditional probability of the NextFutureDrive under the assumption of the pastDrive, the currentDrive, and the anticipated expectedFutureDrive has to be computed.
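
Written as a conditional probability over time-indexed drive variables, the planning query above reads as follows (the time indices are our notational paraphrase of the chapter's variable names):

    \[
    P\bigl(\mathit{NextFutureDrive}_{t+1} \mid \mathit{pastDrive}_{t-1},\, \mathit{currentDrive}_{t},\, \mathit{expectedFutureDrive}_{t+2}\bigr)
    \]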

Distributed Cognition: A framework originated by Edwin Hutchins in the mid-1980s. He proposed that human knowledge and cognition are not confined to individuals but are also embedded in the objects and tools of the environment. Cognitive processes may be distributed across the members of a social group or across material and environmental structure.

Hidden Markov Models (HMMs): A special category of time-stamped dynamic probabilistic models is the Hidden Markov Model (HMM). HMMs are repetitive temporal models in which the state of the process is described by a single discrete random variable. Because of the Markov assumption, only temporally adjacent time-slices are connected, by a single link between the state nodes. HMMs are sequence classifiers and allow the efficient recognition of situations, goals, and intentions, e.g. diagnosing a driver’s intention to stop at a crossroad. HMMs and DBNs are mathematically equivalent. However, there is a trade-off between estimation efficiency and descriptive expressiveness: estimation in HMMs is more efficient than in DBNs due to special-purpose algorithms (Viterbi, Baum-Welch), whereas descriptive flexibility is greater in DBNs. At the same time, the state space grows more rapidly in HMMs than in corresponding DBNs.

Anomalies: Risky maneuvers are called anomalies when they have a low probability of occurrence in the behavior stream of experienced drivers and when only experienced drivers are able to prevent or anticipate them automatically. A measure of the anomaly of the driver’s behavior is the conditional probability of that behavior under the hypothesis that the observed actions were generated by the stochastic process that generated the trajectories or behaviors of the correct maneuver M+.
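
A hedged sketch of such an anomaly measure as a log-likelihood under a model of the correct maneuver M+; the toy CPD and the memoryless assumption are purely illustrative:

    import numpy as np

    def anomaly_score(observed_actions, p_action_given_mplus):
        """Log-probability of the observed action sequence under M+;
        low values indicate anomalous (risky) behavior."""
        logp = 0.0
        for t, a in enumerate(observed_actions):
            # p_action_given_mplus(a, history) -> P(a_t | a_1..a_{t-1}, M+)
            logp += np.log(p_action_given_mplus(a, observed_actions[:t]))
        return logp

    # Example with a memoryless toy model of the correct maneuver M+
    toy_cpd = {"brake": 0.7, "accelerate": 0.3}
    score = anomaly_score(["brake", "brake", "accelerate"],
                          lambda a, hist: toy_cpd[a])
    # score is compared against a threshold calibrated on experienced drivers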
