1. Introduction
People with a cognitive deficit often have difficulty carrying out their activities of everyday life (VanTassel et al., 2011). These people, mostly elderly, wish to stay at home, where they feel comfortable and safe, for as long as possible. Governments aim to help them for social reasons as well as economic ones. However, keeping cognitively impaired people at home involves many risks that must be controlled (Bouchard et al., 2012). To do so, the physical and human environment must be specifically designed to compensate for the cognitive impairments and the loss of autonomy, thus constituting an economically viable alternative to the exhaustion of caregivers (Lapointe et al., 2013). This is why a growing worldwide community of scientists is now working on the development of new technologies based on the emerging concept of Ambient Intelligence (AmI), which can be considered the key to solving the challenge of keeping semi-autonomous people at home safely.
Ambient intelligence (Capezio et al., 2007; Ramos et al., 2008) refers to a multidisciplinary approach that consists of enriching a common environment (room, building, car, etc.) with technology (sensors, identification tags, etc.) in order to build a system that makes decisions benefiting the users of this environment, based on real-time information and historical data. In this way, technology merges with the environment, becoming non-intrusive, while standing ready to react to the occupant's needs and to provide assistance. The main application of this AmI concept concerns the development of Smart Homes (Augusto & Nugent, 2006), which can be seen as houses equipped with ambient agents able to bring advanced assistive services to a resident in the performance of his Activities of Daily Living (ADL). The main difficulty inherent in this kind of assistance is identifying the inhabitant's on-going ADL from observed basic actions and from the events produced by these actions. This difficulty is yet another instance of the so-called plan recognition problem (Carberry, 2001), studied for many years in the field of Artificial Intelligence (AI) in many and varied application contexts. A plan, in AI, can be defined as the formal description of an activity. It corresponds to a chain of actions linked by different ordering/temporal relations, which describe the causes, the effects and the goal of a particular activity.
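To make the notion of a plan concrete, the following minimal sketch encodes an activity as a chain of actions with ordering constraints and a goal, as described above. All names (`Action`, `Plan`, the tea-making steps) are illustrative assumptions, not taken from any system cited in this paper.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    """A basic action with the effects (facts) it makes true."""
    name: str
    effects: frozenset

@dataclass
class Plan:
    """A plan: a goal, the steps of the activity, and precedence
    constraints (i, j) meaning step i must occur before step j."""
    goal: str
    steps: list
    orderings: set = field(default_factory=set)

    def respects_order(self, performed):
        """Check that an observed action sequence satisfies every
        precedence constraint whose two steps were both observed."""
        pos = {a: k for k, a in enumerate(performed)}
        return all(pos[i] < pos[j]
                   for (i, j) in self.orderings
                   if i in pos and j in pos)
```

A usage example: a "make tea" plan with steps boil → pour → drink accepts the sequence performed in order and rejects one where pouring precedes boiling.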
From the recent AmI point of view, which constitutes our focus in this paper, the activity (plan) recognition problem can be summarized as the process of interpreting low-level actions (detected by sensors placed on objects in the environment) in order to infer the goal (the on-going activities) pursued by a person (Patterson et al., 2003). One of the main objectives of this recognition process is to identify errors in the performance of activities, in order to target the right moment when assistance is needed and to choose one of the various ways a smart home (observer agent) may help its occupant (a cognitively impaired patient) (Bouchard et al., 2007). Hence, given the specificity of our context, the challenging issues related to activity recognition become much more complex and require dealing with a high likelihood of observing errors stemming from the patient's weakening cognitive functions. For instance, a distraction (e.g. a phone call or an unfamiliar noise) or a memory lapse can lead him to perform actions in the wrong order, to skip some steps of his activity, or to perform actions that are not even related to his original goal.
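The recognition process just described can be sketched in a few lines: match a stream of observed low-level actions against a small plan library, score each hypothesis, and flag the error types mentioned above (out-of-order steps, skipped steps, and actions unrelated to any hypothesized plan). This is an illustrative sketch under simplifying assumptions (plans as flat ordered step lists), not the recognition algorithm of the authors or of any cited work.

```python
def recognize(observed, plans):
    """observed: list of detected action names, in order.
    plans: dict mapping a goal name to its ordered list of steps.
    Returns (hypotheses, unrelated): for each goal, the fraction of
    steps observed plus a list of detected error types, and the
    observed actions that belong to no plan at all."""
    hypotheses = {}
    for goal, steps in plans.items():
        order = {a: i for i, a in enumerate(steps)}
        matched = [a for a in observed if a in order]
        errors = []
        # Out-of-order: two matched actions observed against the
        # plan's intended ordering.
        if any(order[x] > order[y]
               for x, y in zip(matched, matched[1:])):
            errors.append("out-of-order")
        # Skipped step: a later step was performed while an
        # earlier one is still missing.
        done = set(matched)
        last = max((order[a] for a in done), default=-1)
        if any(order[s] < last and s not in done for s in steps):
            errors.append("skipped-step")
        hypotheses[goal] = (len(matched) / len(steps), errors)
    # Actions unrelated to every plan in the library.
    unrelated = [a for a in observed
                 if all(a not in steps for steps in plans.values())]
    return hypotheses, unrelated
```

For example, with a single "make tea" plan (boil, pour, drink) and the observations boil, drink, phone, the sketch reports a skipped step (pour) and flags the phone action as unrelated to the hypothesized goal, which is exactly the kind of evidence an observer agent could use to time its assistance.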