Development of Facial Recognition in Clinical Decision Support

Hardianto Wibowo, Mario Soflano, Wildan Suharso
DOI: 10.4018/978-1-6684-5092-5.ch009

Abstract

This chapter discusses the current state of the art of facial recognition technology for clinical decision support through a systematic literature review that identifies the medical areas where the technology is used, the sources of the facial datasets, the machine learning approaches and algorithms employed, and how the technology has been evaluated. Findings show that the technology has been used to diagnose genetic disorders, mental illness, and depression. To train the algorithms, the studies identified used publicly available datasets, datasets published in the literature, or data they collected themselves; however, the majority of papers did not explicitly explain how these datasets were obtained. The findings also show that CNN is currently the most used machine learning algorithm for facial recognition, while appearance-based is the most common approach. The evaluations in the shortlisted papers generally focus on the accuracy of the facial recognition capabilities, and the empirical evidence shows the advantage of the technology in supporting clinicians to diagnose symptoms through facial expressions.

Introduction

Facial recognition technology maps a person's facial data for verification or authentication purposes (Martinez-Martin, 2019). Gender, geographic region, age, and character are the most common facial attributes that can be identified through the technology (Kristian et al., 2017; Wibowo et al., 2018; Kasim et al., 2017). The data can be used to identify a person for security purposes or for animation that requires avatars to mimic human faces and facial expressions (Bibliowicz, 2005; Abrantes & Pereira, 1999; Delp & Loan, 2000). In the health domain, identifying and analyzing changes across a series of facial expressions has been recognized as part of the diagnostic process for determining symptoms of diseases whose severity must be measured and treated promptly. In facial anatomy, changes in facial expression are driven by the 7th cranial nerve, which innervates the facial muscles (Prendergast, 2013). Figure 1 shows the various facial muscles. These changes can be identified by facial recognition technology by examining which areas are activated: for example, a neutral expression in which the eyes, brows, and cheeks are relaxed, or an expression in which the brows are lowered and drawn together and the upper eyelids are raised. Ekman and Friesen (1976) proposed the Facial Action Coding System (FACS), which breaks facial expressions down into individual components of muscle movement, called Action Units, so that facial analysis can be performed per unit for greater accuracy (Wibowo et al., 2019; Abrantes & Pereira, 1999).

Figure 1. Anatomy of the facial muscles (Sobotta, 1909)
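To illustrate how FACS-style coding can feed into automated analysis, the sketch below maps a handful of Action Unit activations to candidate expression labels. The AU numbers follow the standard FACS convention, but the intensity threshold, the `au_intensities` input format, and the expression templates are illustrative assumptions rather than values taken from this chapter.

```python
# Minimal sketch: scoring basic expressions from FACS Action Unit intensities.
# AU numbering follows FACS (e.g., AU4 = brow lowerer, AU5 = upper lid raiser);
# the templates and the 0-1 intensity scale here are illustrative assumptions.

EXPRESSION_TEMPLATES = {
    "neutral":   set(),              # eyes, brows, and cheeks relaxed
    "surprise":  {1, 2, 5, 26},      # brow raisers, upper lid raiser, jaw drop
    "anger":     {4, 5, 7, 23},      # brow lowerer, upper lid raiser, lid/lip tighteners
    "happiness": {6, 12},            # cheek raiser, lip corner puller
}

def score_expressions(au_intensities: dict[int, float], threshold: float = 0.5) -> dict[str, float]:
    """Return a simple match score per expression given AU intensities in [0, 1]."""
    active = {au for au, value in au_intensities.items() if value >= threshold}
    scores = {}
    for label, template in EXPRESSION_TEMPLATES.items():
        if not template:
            # Neutral scores highly only when no units are active.
            scores[label] = 1.0 if not active else 0.0
        else:
            scores[label] = len(active & template) / len(template)
    return scores

if __name__ == "__main__":
    # Brows lowered and drawn together (AU4) with raised upper eyelids (AU5),
    # as in the example described in the text.
    print(score_expressions({4: 0.8, 5: 0.7, 7: 0.2}))
```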

Some diseases affect the contraction of the facial muscles, and these changes can be used to detect, for example, facial palsy (Cawthorne & Haynes, 1956; Volk et al., 2014), Bell's palsy (Gilden, 2004; Holland & Weiner, 2004; Baugh et al., 2013), neurological disorders such as Parkinson-plus syndromes (Piercy, 2005; Poewe & Wenning, 2007; Jin et al., 2020; Priebe et al., 2015), pain in infants (Kristian et al., 2017), stroke, and genetic conditions such as DiGeorge syndrome (Kruszka et al., 2017). According to the systematic analysis for the Global Burden of Disease Study covering 1990 to 2016, the four largest contributors to neurological disability-adjusted life-years were stroke, migraine, Alzheimer's disease and other dementias, and meningitis (Feigin et al., 2019). Through facial recognition technology, it is possible to monitor and diagnose symptoms for the early detection of stroke and dementia.
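One way such monitoring is commonly operationalized is to quantify facial asymmetry from 2D landmarks, since conditions such as facial palsy or stroke typically weaken one side of the face. The sketch below assumes landmark coordinates have already been extracted by some detector and that the index pairs passed in are mirror-symmetric across the facial midline; the function name and the asymmetry measure are illustrative, not taken from the chapter.

```python
import numpy as np

def asymmetry_score(landmarks: np.ndarray,
                    mirror_pairs: list[tuple[int, int]]) -> float:
    """Rough facial-asymmetry measure from 2D landmarks.

    landmarks    : (N, 2) array of (x, y) coordinates from any landmark detector.
    mirror_pairs : indices of landmarks that should mirror each other across the
                   facial midline (e.g., left/right mouth corners).
    The midline is estimated from the mean x of all landmarks; the score is the
    mean distance between each point and its mirrored counterpart, normalized
    by face width so that it is comparable across image scales.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    midline_x = landmarks[:, 0].mean()
    face_width = landmarks[:, 0].max() - landmarks[:, 0].min()

    diffs = []
    for left_idx, right_idx in mirror_pairs:
        left = landmarks[left_idx].copy()
        right = landmarks[right_idx].copy()
        # Reflect the right-side point across the estimated midline.
        right[0] = 2.0 * midline_x - right[0]
        diffs.append(np.linalg.norm(left - right))

    return float(np.mean(diffs) / face_width)

# Toy example with two mirror pairs (eye corners and mouth corners):
points = np.array([[30, 60], [70, 60],
                   [35, 90], [68, 95]])
print(asymmetry_score(points, mirror_pairs=[(0, 1), (2, 3)]))
```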

According to Yang, Kriegman, and Ahuja (2002), there are various approaches to facial recognition analysis: knowledge-based methods, feature-invariant approaches, template matching, and appearance-based methods.
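The abstract notes that appearance-based methods, and CNNs in particular, dominate recent work. As a rough illustration of what such an appearance-based classifier looks like, the PyTorch sketch below defines a small CNN that maps a grayscale face crop to expression classes; the layer sizes, the 48x48 input resolution, and the class count are assumptions made for illustration, not the architecture of any study reviewed here.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Small appearance-based CNN for facial expression classification.

    Assumes 48x48 grayscale face crops and 7 expression classes; both are
    illustrative choices, not taken from the chapter.
    """

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 48x48 -> 48x48
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = ExpressionCNN()
    face_batch = torch.randn(4, 1, 48, 48)   # 4 fake grayscale face crops
    logits = model(face_batch)               # shape: (4, 7)
    print(logits.shape)
```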
