Autistic Language Processing by Patterns Detection

Daniela Lopez De Luise, Ben Raul Saad, Pablo D. Pescio, Christian Martin Saliwonczyk
Copyright: © 2018 |Pages: 26
DOI: 10.4018/IJALR.2018010103

The main goal of this article is to present an approach that allows the automatic management of autistic communication patterns by processing audio and video from the therapy sessions of individuals with autism spectrum disorder (ASD). Such patients usually have social and communication alterations that make it difficult to evaluate the meaning of their expressions. As their communication skills may vary to different degrees, it is very hard to understand the semantics behind the verbal behavior. The current work builds on previous work on machine learning for individual performance evaluation. Statistics show that autistic verbal behavior is physically expressed through repetitive sounds and related movements that are evident and stereotyped. The works of Leo Kanner and Ángel Riviere are also considered here. Using machine learning and neural networks with a certain set of parameters, it is possible to automatically detect patterns in audio and video recordings of a patient's performance, which offers an interesting opportunity to communicate with ASD patients.
Article Preview

1. Introduction

ASD patients suffer from an alteration of language (Bustamante et al., 2014), and in some cases they completely lack speech (Wing, 1993). Other symptoms are hyperacusis or hypoacusis, global or focused on a certain frequency range (Greer, 2008). There are many other typical symptoms (Filipek, Accardo & Ashwal, 2000; Palau, Valls Santasusana & Salvadó, 2010; Señor, Shulman & Di Lavore, 2004; Rapin & Katzman, 1998; Dawson, Meltzoff, Osterling, Rinaldi & Brown, 1998; Ruggieri, 2006), but this paper focuses on pitches and sounds. There are some applications that can determine whether a patient suffers from ASD (Greer, 1997; Contreras et al., 2016; Taylor & Baldeweg, 2002; Torras, 2015), some of them processing only sound expressions. Other applications, for instance (López De Luise et al., 2014a), take five musical notes and process their main features: tone, intensity and pitch, described by frequency, amplitude, harmonics and waveform. Patients' reactions are evaluated upon every sound. Sessions are recorded and analyzed using detailed information collected according to a protocol. It is important to note that this type of study contrasts with the current proposal, which only considers sounds actually produced by patients.
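The acoustic features named above (pitch as fundamental frequency, intensity as amplitude) can be illustrated in a few lines. The sketch below is not the pipeline used in the cited works; the function names and the autocorrelation-based pitch estimator are assumptions chosen only to make the feature descriptions concrete:

```python
import numpy as np

def rms_amplitude(signal):
    """Root-mean-square amplitude, a simple proxy for intensity."""
    return float(np.sqrt(np.mean(signal ** 2)))

def estimate_pitch(signal, sample_rate):
    """Estimate the fundamental frequency via the autocorrelation peak."""
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Skip the zero-lag peak: find the first rising point, then the next maximum.
    rising = np.argmax(np.diff(corr) > 0)
    period = np.argmax(corr[rising:]) + rising
    return sample_rate / period

# Synthetic 400 Hz tone, 1 second at 8 kHz (period = exactly 20 samples)
sr = 8000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 400 * t)

print(estimate_pitch(tone, sr))   # fundamental frequency in Hz
print(rms_amplitude(tone))        # intensity proxy
```

Harmonics and waveform shape would similarly be read off the magnitude spectrum (e.g. via `np.fft.rfft`); the same window-level features can then feed a classifier.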

The authors in (Nin & Tewfik, 2010) present a new method for the automatic detection of stereotyped behavior using an accelerometer. They build an orthogonal subspace from sampled data, combined with a clustered dictionary that represents signals, in order to classify them. The algorithm improves the detection of new events.
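The subspace idea can be sketched as follows. This is a simplified stand-in, not the cited algorithm: it assumes an SVD-based subspace in place of the paper's clustered dictionary, and uses synthetic data, but it shows how a window far from the subspace of known behavior is flagged as a new event:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training windows of a "known" behavior: 5-D samples near a 2-D plane.
basis_true = rng.normal(size=(5, 2))
train = rng.normal(size=(200, 2)) @ basis_true.T + 0.01 * rng.normal(size=(200, 5))
mean = train.mean(axis=0)

# Orthonormal basis for the subspace of known behavior, via SVD.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:2].T  # columns span the learned 2-D subspace

def residual(window, basis, mean):
    """Distance of a signal window from the learned subspace."""
    x = window - mean
    proj = basis @ (basis.T @ x)  # orthogonal projection onto the subspace
    return float(np.linalg.norm(x - proj))

known = rng.normal(size=2) @ basis_true.T  # lies in the known subspace
novel = rng.normal(size=5)                 # arbitrary direction: a "new event"

print(residual(known, basis, mean))  # small: matches known behavior
print(residual(novel, basis, mean))  # large: flagged as new
```

Thresholding the residual then separates known stereotyped behavior from new events.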

A study performed by the University of Southern California (USA) (Mower et al., 2011) uses Rachel, an interactive agent that teaches emotions to ASD children. Along the same line, there is a chatterbot (an embodied conversational agent, ECA) that interacts with patients but also requires the participation of the parents. The agent produces different scenarios to express emotions such as anger, sadness, happiness and fear in order to perform a controlled evaluation of the ASD patient's communication abilities. The ECA asks the children (and sometimes also the parents) to interact while it records the interaction. It is important to note that it implements the "Wizard of Oz" method, in which a person (a therapist) operates behind the system without being directly perceived by the children. Many other attempts try to improve the teaching process, but with relative success (Krans, 2013); that work, however, lacks statistics and validation of the results presented by its authors (López De Luise, Hisgen, Cabrera & Morales Rins, 2012).
