Monitoring System for Persons With Alzheimer's Disease via Video-Object Tracking

Haitham Asaad Al-Anssari (Electrical and Computer Engineering, Western Michigan University, Kalamazoo, USA), Ikhlas Abdel-Qader (Electrical and Computer Engineering, Western Michigan University, Kalamazoo, USA), and Maureen Mickus (Occupational Therapy, Western Michigan University, Kalamazoo, USA)
DOI: 10.4018/978-1-7998-3441-0.ch029


This article presents a framework for a food-intake monitoring system intended for use with persons with Alzheimer's disease and other dementias. Alzheimer's disease significantly impairs an individual's ability to perform daily activities, including eating, and providing assistance with feeding is a major challenge for caregivers that demands a significant time commitment. We present a vision-based system that tracks moving objects, such as the hand, by combining optical flow with skin-region detection. Skin detection is implemented using two different methods: the first operates in the hue, saturation, and value (HSV) color space, which separates the illuminance component from the chrominance components; the second extracts skin-color information from the subject's face, detected using the Viola-Jones algorithm. Once the face and other moving skin regions are detected, bounding boxes are created and used to track all moving regions across the video frames, recognizing eating behavior or the lack of it. Experimental results show that combining optical flow with HSV-based skin segmentation detects the hand-to-mouth eating motion with 92.12% accuracy, while combining optical flow with skin segmentation based on face-color information achieves a higher accuracy of 94.29%.
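To illustrate the first skin-detection method, the sketch below classifies pixels in the HSV color space, thresholding hue and saturation (chrominance) separately from value (illuminance), which makes the rule less sensitive to lighting changes than an RGB threshold. The specific threshold values here are illustrative assumptions, not the chapter's exact parameters.

```python
import colorsys

def is_skin_hsv(r, g, b):
    """Classify one RGB pixel (0-255 channels) as skin via an HSV threshold.

    The hue/saturation band below is a commonly used skin-tone range and
    is only an assumption for this sketch.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Reddish hue, moderate saturation, reasonably bright value.
    return 0.0 <= h <= 0.14 and 0.15 <= s <= 0.9 and v >= 0.35

def skin_mask(frame):
    """Binary mask (1 = skin) for a frame given as rows of (r, g, b) tuples.

    Connected regions of 1s in this mask would then receive bounding boxes
    and be tracked across frames with optical flow.
    """
    return [[1 if is_skin_hsv(*px) else 0 for px in row] for row in frame]

# Example: a tiny 1x2 frame with one skin-toned pixel and one blue pixel.
frame = [[(220, 170, 140), (30, 60, 200)]]
print(skin_mask(frame))  # -> [[1, 0]]
```

In a full pipeline, this mask would be intersected with the moving regions found by optical flow, so that only moving skin regions (hand, face) are boxed and tracked.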
Chapter Preview


Food-intake detection methods have also been studied by other researchers. Some of these methods use motion sensors for food-intake monitoring (Farooq & Sazonov, 2016; Doulah et al., 2017; Mendi et al., 2013; Dong & Biswas, 2016; Kalantarian et al., 2015). Swallowing and chewing sounds can also be used effectively to detect an eating event; these sounds are typically collected using microphones mounted on the subject's body close to the mouth (Farooq et al., 2014; Kalantarian et al., 2016; Turan & Erzin, 2018; Shengjie et al., 2017; Makeyev et al., 2017; Paßler et al., 2012). Because such sensors must usually be attached to the subject's body, they are not applicable in some cases: people with Alzheimer's, for instance, may perceive them as obtrusive and remove them, or simply forget to wear them. Therefore, a vision-based monitoring system may be the best option.
