1.1. Motivation
The past decade has seen steady growth in the elderly population. Because baby boomers comprise nearly 26 percent of the U.S. population, they may place an increased burden on society in the near future. Compared to other population groups, seniors are more likely to live alone as the sole occupant of a private dwelling. Helping seniors live a better life is therefore very important and has great societal benefit. Many assisted living systems require automated recognition of human daily activities, which can be used to study behavior-related diseases and to detect abnormal behaviors such as falling to the floor. Activity recognition is also indispensable for Human-Robot Interaction (HRI) (Yanco & Drury, 2004), in which a robot companion infers a person's intentions from his or her behavior.
There are two main types of activity recognition: vision-based (Moeslund, Hilton, & Krüger, 2006) and wearable sensor-based (Najafi, Aminian, Paraschiv-Ionescu, Loew, Bula, & Robert, 2003; Maurer, Smailagic, Siewiorek, & Deisher, 2006). Vision-based systems can observe full-body movement. However, recognizing human activities from images is very challenging due to the inherent data association problem and the large volume of data. Wearable sensor-based systems avoid the data association problem and have less data to process, but they become uncomfortable and obtrusive when many sensors must be worn on the body.
In this paper, we propose an approach that combines motion data from a single wearable inertial sensor with location information to recognize human daily activities. This approach has the following advantages: first, a single wireless inertial sensor worn by the user for motion data collection reduces obtrusiveness to a minimum; second, less data is required for activity recognition, so the computational complexity is significantly lower than that of a pure vision-based system; third, the recognition accuracy can be improved through the fusion of motion and location data.
This paper is organized as follows. The rest of Section 1 reviews related work in this area. Section 2 describes the hardware platform of the proposed human daily activity recognition system. Section 3 first explains activity recognition using motion data only, and then explains the fusion of motion data and location information to improve recognition accuracy. Experimental results are provided in Section 4. Conclusions and future work are given in Section 5.