
Machine Learning and Virtual Reality on Body Movements’ Behaviors to Classify Children with Autism Spectrum Disorder

1 Instituto de Investigación e Innovación en Bioingeniería (i3B), Universitat Politécnica de Valencia, 46022 Valencia, Spain
2 Red Cenit, Centros de Desarrollo Cognitivo, 46020 Valencia, Spain
* Author to whom correspondence should be addressed.
J. Clin. Med. 2020, 9(5), 1260; https://doi.org/10.3390/jcm9051260
Received: 26 March 2020 / Revised: 21 April 2020 / Accepted: 23 April 2020 / Published: 26 April 2020

Abstract

Autism spectrum disorder (ASD) is mostly diagnosed according to behavioral symptoms in the sensory, social, and motor domains. During diagnosis, motor functioning is assessed through the qualitative evaluation of stereotyped and repetitive behaviors, while quantitative methods that classify the frequency of body movements in children with ASD are less common. Recent advances in neuroscience, technology, and data analysis techniques are improving the quantitative and ecologically valid measurement of specific functioning in children with ASD. On one side, cutting-edge technologies, such as cameras, sensors, and virtual reality, can accurately detect and classify behavioral biomarkers, such as body movements, in real-life simulations. On the other, machine-learning techniques are showing potential for identifying and classifying patient subgroups. Starting from these premises, three real-life-simulated imitation tasks were implemented in a virtual reality system to investigate whether machine-learning methods applied to movement features and frequency can discriminate children with ASD from children with typical neurodevelopment. In this experiment, 24 children with ASD and 25 children with typical neurodevelopment (TD) participated in a multimodal virtual reality experience, and changes in their body movements were tracked by a depth-sensor camera during the presentation of visual, auditory, and olfactory stimuli. The main results showed that children with ASD presented larger body movements than TD children, and that the head, trunk, and feet yielded the highest classification accuracy (82.98%). Regarding stimuli, the visual condition showed the highest accuracy (89.36%), followed by the visual-auditory (74.47%) and visual-auditory-olfactory (70.21%) conditions. Finally, the head showed the most consistent performance across stimuli, from 80.85% in the visual to 89.36% in the visual-auditory-olfactory condition.
The findings show the feasibility of applying machine learning and virtual reality to identify body-movement biomarkers that could contribute to improving ASD diagnosis.
Keywords: autism spectrum disorder; body movements; repetitive behaviors; virtual reality; machine learning

1. Introduction

1.1. Autism Spectrum Disorder and Repetitive Behaviors

Autism spectrum disorder (ASD) is a neurodevelopmental disorder mainly characterized by impairments in social communication and interaction abilities and by the presence of restricted, repetitive patterns of behavior, interests, or activities [1]. It affects 1 in 160 children [2] and its symptomatology tends to appear between two and four years of age, although in some cases it can be detected in toddlers as young as six months [3,4]. ASD studies primarily examine weaknesses in social interaction abilities, and focus less on the stereotyped and repetitive motor behaviors that also affect educational, social, and daily life [1,5].
Repetitive behaviors (RBs) are defined as heterogeneous, observable, stereotyped or repetitive motor sequences characterized by rigidity, invariance, inappropriateness, and purposelessness [6,7]. RBs can occur concurrently with small changes in routine or in the presence of new and unknown stimuli, serving to reduce subjective arousal, to cope with unfamiliar events, and to maintain homeostasis [8,9,10].
Furthermore, RBs can be classified into two groups: common behaviors and complex behaviors [11]. Common behaviors include, for example, nail-biting, thumb sucking, and hair twirling, and they also tend to be frequent in the typically developing (TD) population, particularly in situations that might cause stress or anxiety [12]. Complex behaviors, on the other hand, include more complex stereotypies, such as hand flapping, finger wiggling, head spinning and banging, foot stamping, and high levels of head movement and body rocking. Although complex RBs can also be found in TD individuals, complex head spinning and banging, arm flapping, finger wiggling, and body rocking are mostly related to ASD; indeed, individuals with ASD tend to exhibit RBs more frequently and severely than age-matched TD controls [13,14,15].
Several studies have demonstrated a high prevalence of RBs in ASD, ranging from 60% to 100% of cases [13,14,15]. The relevance of this study lies in understanding whether movement features and frequency can be used as a diagnostic biomarker to classify children with and without ASD.

1.2. Traditional Assessment in ASD: Advantages and Limitations

Traditional ASD assessment and diagnosis involve qualitative and quantitative measures, such as semi-structured behavioral task observation (the Autism Diagnostic Observation Schedule, ADOS) [16] and a semi-structured interview (the Autism Diagnostic Interview-Revised, ADI-R) [17].
The ADOS consists of several structured and semi-structured tasks covering communication, use of imagination, social interaction, play, and restrictive and repetitive behavior. The examiner introduces one task at a time to the child, observing whether ASD symptoms are manifested [16]. The ADI-R is a semi-structured interview for family caregivers, who answer questions related to communication, social interaction, and restricted, repetitive, and stereotyped behaviors [17] (see Materials and Methods for a detailed description of both the ADOS and the ADI-R). Since in the ADOS the examiner has to judge the child's performance by scoring and rating it, the evaluation relies on the examiner's expertise and subjectivity; likewise, in the ADI-R the ASD diagnosis is based on caregivers' reports rather than on objective evaluation. Although these measures have long been considered the gold standard for ASD assessment [18], they present some limitations [19,20].
Regarding the ADOS and ADI-R, limitations relate to the absence of objective assessment methods and to the limited ecological validity of the setting. Examiners need to be trained and prepared to avoid inappropriate task presentation and administration, which might cause over- or under-interpretation of symptoms and provide misleading outcomes [21,22]. Moreover, traditional assessment methods might fail to detect compensatory capabilities that the child has already developed [23], and social desirability [24] might affect the veracity of responses to tests. Because of social desirability bias, individuals might respond to tasks or questions in a manner they believe others perceive as favorable [25]. Likewise, in semi-structured interviews such as the ADI-R, family caregivers might report certain of the child's behaviors differently according to their own interpretation and intentions [26].
Furthermore, traditional ASD assessment takes place in settings that lack ecological validity (i.e., the laboratory) [27,28,29], and results do not mirror performance in real life [30,31]. Indeed, in laboratory-based ASD assessment, children might have learned how to behave according to specific rules and scripts [32]. Concerning the limitations of traditional RB assessment, direct observation consists of watching individual behavioral sequences, and several weaknesses affect its reliability, such as difficulties in observing high-speed RBs, analyzing two concomitant RBs, and detecting the beginning, end, and environmental antecedents of an RB sequence [18]. Paper-and-pencil rating scales report RB frequency and intensity from caregivers' general observations and impressions, raising the methodological issues typical of self-report measures, which, as with the ADI-R, are not accurate and do not properly capture individual RB characteristics [33,34]. Finally, video-based RB assessment is an offline RB coding procedure performed by experts. Although a video-based procedure is more reliable than direct observation and paper-and-pencil procedures, it is laborious and time-consuming [33]; moreover, the examiner's coding ability depends on individual levels of training and expertise [21].
To overcome the lack of ecological validity, and to avoid the use of subjective observational diagnostic methods in ASD, there is a need to quantify and assess RBs automatically, which could be fulfilled using technology [35,36]. Indeed, item-independent methods can grant accurate estimation of RB incidence and co-occurrence [9,35,36], and new technologies, such as virtual reality, can provide more ecologically valid and controlled methodologies.

1.3. Implicit Methods: Biomarkers as Supports for ASD Assessment

Recent advances in social cognitive neuroscience (SCN) shed light on how humans analyze and report beliefs, feelings, and behaviors [37]. SCN is a research area that studies biological processes and related cognition-based elements [38], and it is showing that social interactions rely on implicit psychophysiological processes uncontrolled by conscious awareness [39].
Implicit measures tend to assess automatic biological processes outside conscious awareness [38] that ensue from the interaction with environmental external stimuli and their internal processing. Such biomarkers are a valid alternative to explicit measures, which cannot tap implied brain processes on their own [40]. Thus, to overcome explicit measure weaknesses in the ASD diagnosis, more recent research has included biomarkers along with traditional assessment techniques [41,42].
To date, the foremost available biomarkers for studying unconscious processes are electrodermal activity (EDA) [43,44], functional magnetic resonance imaging (fMRI) [45], functional near-infrared spectroscopy (fNIRS) [46], electroencephalography (EEG) [47], eye tracking [48], and heart rate variability (HRV) [49]. In ASD assessment, fMRI research showed that ASD is related to hyperactivity in neural activation and to alterations in the posterior cingulate cortex and portions of the insula [50], whereas EEG research suggested that in social contexts individuals with ASD exhibit greater activity in the left hemisphere [51]. Furthermore, recently developed technological tools, such as cameras and/or sensors, allow the detection and classification of behavioral biomarkers, such as body movements [52].
Initial studies on RBs with such devices showed that a more automatic and objective assessment is possible, achieving RB recognition [19,33,36,53,54,55,56,57,58]. For instance, to disentangle RBs from other movements in ASD, three wireless accelerometers placed on the wrists and chest of six children with ASD were able to accurately identify 86% and 89.5% of spontaneous hand-flapping and body-rocking cases in the laboratory and the classroom, respectively [33].
Also, an RGB color camera equipped with a depth sensor and a microphone array was used in a laboratory setting to discern simulated hand flapping; applying the dynamic time warping (DTW) algorithm to the camera-recorded data recognized and isolated all simulated hand-flapping instances, leading to the claim that RB detection is possible even with sensors that do not necessarily have to be worn [19].

1.4. Repetitive Behaviors Recognition in ASD

To overcome issues related to traditional RB assessment [18,19], a new research area about movement analysis based on video recordings and automatic tagging has emerged [19,33,36,53,54,55,56,57,58].
A first attempt has been the development of systems that combine video recordings with accelerometers placed on the subject's body to register movements and transmit data wirelessly [35,36,54,59,60,61].
These measures have yielded promising results in RB classification, although some studies have involved the typical population rather than individuals with ASD [61], and, to our knowledge, no studies have assessed RBs with quantitative methods and compared them between ASD and other clinical populations. However, accelerometer- and video-based methods are expensive in terms of time and effort, and children with ASD might feel uncomfortable wearing accelerometers. For this reason, this work used an RGB-D camera for real-time analysis. An RGB-D camera is a video recording device able to augment the conventional image with depth information, indicating how far the recorded moving body is from the sensor.
Owing to deep learning and big data techniques [62,63,64], algorithms have been developed that can estimate subjects' posture in real time and categorize their movements automatically. Over the last decades, great progress has been made in posture estimation, and modern technologies allow postures to be classified regardless of the clothes worn and the point of view considered [65,66,67,68,69]. Furthermore, machine-learning (ML) methods are improving the predictive value of motor behavioral biomarker measures in ASD, supporting the development of objective measures from a diagnostic standpoint [70,71]. For example, Crippa et al. (2015) developed an ML model to discriminate preschool children with ASD from children with typical development using a simple upper-limb reach-to-drop task. The resulting model showed an accuracy rate of 96.7%, suggesting that ML can be a useful method of classification and discrimination in the diagnostic process [70].

1.5. Use of Virtual Reality in ASD

Virtual reality (VR) is a three-dimensional computer-generated environment that allows users to experience simulated and unreal worlds. VR provides ecological validity to experienced situations and consequent users’ reactions, becoming promising in psychological assessment, training, and treatment [72,73].
Over the last two decades, the VR market has grown considerably, owing to the emergence of many companies and the consequent wide offer of VR devices. Overall, head-mounted displays (HMDs) are the most prominent, affordable, and available devices in the VR market [74]. However, a different VR device has been suggested as more suitable for our target population (children with ASD): the CAVE Automatic Virtual Environment (CAVE™), a semi-immersive room where 3 to 6 rear-projected surfaces are installed [74,75,76,77,78].
As a semi-immersive system, the CAVE™ reduces users' risk of experiencing cybersickness, i.e., the discomfort caused by sensory-motor incongruence and cognitive dissonance in the virtual world [79]. Furthermore, specifically for children with ASD, the CAVE system can overcome the significant restrictions of HMDs which, on the one hand, are not suitable for small heads and, on the other, can affect and worsen their sensory and cognitive difficulties [80,81]. Previous studies on the feasibility, safety of use, and learning skills in CAVE environments in children with ASD have shown no significant differences in negative effects between ASD and TD children, and improvements in various skills (e.g., pedestrian crossing) [82].
Regardless of the technology and brand involved, VR systems share three main features: immersion, interaction, and the sense of being present in the environment [83,84,85,86,87,88]. Immersion refers to the system's capability to isolate the user from reality [86,88]. Interaction allows users to interact with virtual objects in real time through control sticks or gloves, providing engagement, motivation, and fun [84,89]. The sense of presence is a consequence of immersion and real-time interaction, and it is defined as the psychological feeling of being physically in the virtual environment despite the awareness of not being there [88,90,91,92,93,94]. Finally, another feature to consider, less addressed in studies and especially in the ASD population, concerns the perception of and interaction with virtual agents. Virtual agents have mainly been used in social training and interventions, showing positive results on skills in children with ASD [82]. These positive results suggest that virtual agents are perceived not as cartoons or passive objects to watch, but as intentional beings that want to communicate with the child through mutually directed behaviors. To our knowledge, one study examined the perception of children with ASD when interacting with a virtual agent to perform a task (picking up flowers) [95]. The quantitative data (accuracy and reaction times) showed that the children with ASD could complete the task, and the qualitative data showed that most of them perceived the virtual agent as an intentional being with mutually directed behavioral intentions, able to engage and motivate the ASD peer.
VR is widely involved in psychological ASD treatment, and it taps all the important macro-areas of the field, such as clinical psychology, neuropsychology, and cognitive and motor rehabilitation [75,85,96,97,98,99,100,101,102,103,104,105]. Specifically, treatment studies applying VR to ASD have mainly addressed social competence [101,102,104], emotional recognition [103], and anxiety and phobias [105], showing initial positive effects of this technique. VR-based ASD assessment has been less addressed and has mainly focused on social communication and interaction symptoms. For example, it has been observed that during a virtual interview about personal life, children with ASD looked less at social avatars than their TD control peers, correctly identifying 76% of ASD cases [102]. Also, children with ASD made atypical social judgments on the kindness of face photographs in a virtual environment compared with TD controls [103]. Although VR's potential in ASD assessment has already been highlighted [75], to our knowledge, no one has investigated whether it is possible to identify ASD in a VR experience using RB movement analysis.
Starting from these premises, the main aim of this study is to discriminate children with ASD from typically developing children through the analysis of body-movement data in a multimodal VR experience characterized by three stimuli: visual, auditory, and olfactory. Applying ML methods to the dataset, we explored: (a) whether movement data make it possible to discriminate between the two populations; (b) which body parameters best discriminate between the two populations; and (c) which virtual stimuli condition best discriminates body parameters between the two populations.

2. Materials and Methods

2.1. Participants

This study included a sample of 49 children between the ages of 4 and 7 years; 24 children with a diagnosis of ASD (age: 5.13 ± 1.35; male = 21, female = 3) and 25 with a typical development (TD) (age: 4.86 ± 0.91; male = 16, female = 9).
The ASD group sample was provided by the Development Neurocognitive Centre, Red Cenit, Valencia, Spain. TD and ASD participants presented an individual assessment report that included the ADOS-2 and ADI-R tests [16,17,21]. The TD group was recruited by a management company through mailings to families.
To participate in the study, family caregivers received written information about the study and were required to give their written consent. Ethical Committee of the Polytechnic University of Valencia approved the study. The study procedure was in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

2.2. Psychological Assessment

The following tests and scales were administered to participants and their family caregivers.

2.2.1. Autism Diagnostic Observation Schedule (ADOS-2)

The Autism Diagnostic Observation Schedule (ADOS) [16,21] is a semi-structured set of observation tasks that measure autism symptoms in social relatedness, communication, play, and repetitive behaviors. A standardized severity score within these domains can be calculated to compare autism symptoms across the different modules, which differ in age and linguistic level. Based on the trained psychologist's observation of these behaviors, the items are scored from 0 to 3 (from no evidence of abnormality related to autism to definitive evidence), and from the sum of the scores two specific indexes (social affectation, and restricted and repetitive behavior) and a global total ASD index are obtained. The ADOS-2 presents excellent test-retest reliability (0.87 for the social affectation index, 0.64 for the repetitive behavior index, and 0.88 for the global total index). In this study, the assessment was performed using module 1.

2.2.2. Autism Diagnostic Interview-Revised (ADI-R)

The ADI-R [17] is a semi-structured interview for family caregivers, designed to provide a lifetime developmental-history framework to detect the presence of ASD in individuals from early childhood to adult life. It consists of 111 questions in three separate domains: communication; social interaction; and restricted, repetitive, and stereotyped behaviors. Answers are scored on a 0–3 Likert scale, from the absence of the behavior to a clear presentation of it. The ADI-R presents high test-retest reliability, ranging from 0.93 to 0.97.

2.3. The Multimodal Virtual Environment (VE) and the Imitation Tasks

The multimodal VE consisted of a simulated city street intersection and was divided into three experimental stimuli conditions: visual (V), visual-auditory (VA), and visual-auditory-olfactory (VAO) (Figure 1a).
First, in the V condition, a boy avatar appeared from the left side of the CAVE™ surface, walked to the middle of the virtual environment, stopped, and waved hello to the participant three times before leaving the virtual scene by disappearing into the background (Figure 1b).
Next, a girl avatar appeared in the center of the CAVE™ surface and walked to the right of the virtual scene, where she also stopped and repeated the three hello waves to the participant before leaving the virtual scene by disappearing into the background. This sequence was repeated identically three times.
Consecutively, in the second (VA) stimuli condition, the same avatars appeared in the same order and from the same directions, and they danced to an animated disco song for 10 s, three times.
Finally, in the last condition (VAO), the same avatars from the two previous conditions, in the same order and from the same directions, bit a buttered muffin, accompanied by an artificial butter odor (Figure 1c). In all three stimuli conditions, participants were asked to imitate the actions performed by the avatars. Specifically, they had to wave three times in the first virtual condition, dance three times in the second, and imitate the action of biting a muffin three times in the third.
The selection of the stimuli and the gradual exposure to the three stimuli conditions depended on the hyper- and hypo-sensitivities to sensory stimuli of the ASD population. In more detail, with respect to visual and auditory stimuli (e.g., bright lights or loud sounds), the ASD population presents hypersensitivity [106,107]; conversely, they present hyposensitivity to olfactory stimuli [108,109]. Sensory hyper- and hypo-sensitivities can consequently affect information processing in ASD, and it has been suggested that they may cause RBs [110,111].
The Institute for Research and Innovation in Bioengineering (i3B) of the Polytechnic University of Valencia (UPV) developed the 3D modeling. The environment was projected inside a three-surface CAVE™ with dimensions of 4 × 4 × 3 m. It was equipped with three ceiling-mounted ultra-short-throw projectors, each able to project a 100″ image from just 55 cm, and a Logitech Speaker Z906 500 W 5.1 THX sound system (Logitech, Canton of Vaud, Switzerland) (Figure 2).

2.4. The Olfactory System

A wireless scent diffuser from Olorama Technology™ (www.olorama.com) delivered the olfactory stimuli. It can hold up to 12 scents arranged in 12 pre-charged channels, which can be selected and triggered by means of a UDP packet. The device includes a programmable fan timing system that controls the duration and intensity of scent delivery. In the VAO condition, we used a butter scent to evoke the real smell of a muffin. The scent valve remained open during the entire last stimuli condition (VAO).
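As a rough sketch of how such a device can be driven, the snippet below sends a UDP datagram selecting a scent channel. The host, port, and payload format (`scent:<channel>`) are placeholders for illustration only; the actual command syntax is defined by the vendor.

```python
import socket

def trigger_scent(channel: int, host: str, port: int) -> bytes:
    """Send a UDP datagram selecting a scent channel.

    The payload format used here is purely illustrative; the real
    device expects its own vendor-defined command string.
    """
    payload = f"scent:{channel}".encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload
```

Because UDP is connectionless, the sender gets no acknowledgment; in an experiment script the trigger would simply be fired at the start of the VAO condition.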

2.5. Experimental Procedure

First, the family caregivers of participants were informed about the general objectives of the research, and, before the experimental session, the setting was also shown and explained to them.
Second, the participant was accompanied into the CAVE™ by the researcher and, according to the child's needs, by his or her family caregiver. The participant was placed in the middle of the virtual room, standing 1.5 m in front of the central surface. The presentation order of the VR stimuli conditions was the same for all participants: visual, visual-auditory, and visual-auditory-olfactory. Before each VR stimulus condition, a two-minute baseline was recorded, and subsequently the stimulus condition was presented. The presentation order was kept the same for all participants to prevent the sensory overload and stress they could otherwise experience.
The total duration of the experiment was 14 min, and each stimulus condition lasted 2 min and 40 s. Movement recording was active during the whole virtual experience. The researcher monitored the child's state during the entire experiment.

2.6. Behavioural Motor Assessment and Data Processing

The experimental design called for an efficient method of estimating pose in real time using an RGB-D camera. The participants' sessions were recorded using an Intel® RealSense™ D435 camera (FRAMOS, Munich, Germany) and the Intel RealSense SDK 2.0 (Intel RealSense Technology, Santa Clara, CA, USA) with cross-platform support. This camera computes depth using stereo vision; it connects via USB and is equipped with a pair of depth sensors, an RGB sensor, an infrared projector, and a global image shutter with a wide field of view (91.2° × 65.5° × 100.6°).
The detection of the body joints in each frame of the recording was calculated using the deep learning algorithm OpenPose [112], which includes the 2D position related to the video and a confidence level for each joint identification between 0 and 1. The skeleton (Figure 3) includes 25 joints that can be divided into different parts of the body: head (0 nose, 15/16 eyes and 17/18 ears), trunk (1 neck and 8 mid hip), arms (2/5 shoulders, 3/6 elbows and 4/7 wrists), legs (9/12 hips, 10/13 knees, 11/14 ankles), and feet (19/22 big toes, 20/23 small toes and 21/24 heels). After detecting the skeleton, the 3D position of each joint during the experiment was obtained using the depth information of the camera. A computer with an Nvidia GTX1060 graphics card with the NVIDIA Pascal architecture capable of executing neural networks very efficiently in a compact size was used.
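The grouping of the 25 BODY_25 keypoints into body parts described above can be written down directly (a small sketch; the joint indices follow the listing in the text):

```python
# OpenPose BODY_25 keypoint indices grouped into the body parts used
# in this study (indices as listed in the text).
BODY_PARTS = {
    "head":  [0, 15, 16, 17, 18],       # nose, eyes, ears
    "trunk": [1, 8],                    # neck, mid hip
    "arms":  [2, 3, 4, 5, 6, 7],        # shoulders, elbows, wrists
    "legs":  [9, 10, 11, 12, 13, 14],   # hips, knees, ankles
    "feet":  [19, 20, 21, 22, 23, 24],  # big toes, small toes, heels
}
```

Together the five groups cover all 25 skeleton joints exactly once, so per-part feature vectors can be built by indexing the joint array with these lists.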
After extracting the 3D positions of the body joints from all the experiments, the data were segmented for each stimulus condition, excluding joint samples with a confidence below 0.5. The displacement of each joint was computed as the Euclidean distance between consecutive frames. Finally, the level of movement of a joint during a stimulus was characterized by computing the mean of all its displacements.
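The displacement computation above can be sketched for a single joint as follows (a minimal NumPy version, assuming the per-frame 3D positions and OpenPose confidences are already extracted; the confidence threshold of 0.5 is the one stated in the text):

```python
import numpy as np

def joint_movement_level(positions, confidences, conf_threshold=0.5):
    """Mean frame-to-frame displacement of one joint.

    positions:   (n_frames, 3) array of 3D joint coordinates.
    confidences: (n_frames,) OpenPose confidence per frame.
    Frames with confidence below the threshold are discarded before
    computing Euclidean distances between consecutive kept frames.
    """
    pos = np.asarray(positions, dtype=float)
    keep = np.asarray(confidences, dtype=float) >= conf_threshold
    pos = pos[keep]
    if len(pos) < 2:
        return 0.0
    # Euclidean distance between consecutive (kept) frames.
    disp = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    return float(disp.mean())
```

The mean of these displacements is the per-joint "level of movement" feature that feeds the later statistical and machine-learning analyses.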

2.7. Statistical Analysis

First, to characterize behavioral differences between ASD and TD children in each stimulus condition, we analyzed the movement frequency of each joint. Since the data did not follow a normal distribution (Shapiro-Wilk test: p < 0.05), Wilcoxon signed-rank tests were applied.
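Per joint, this kind of comparison can be sketched with SciPy. The values below are synthetic and purely illustrative, not the study's data; and since the two groups here are independent and of unequal size (24 vs. 25), a rank-sum comparison (Mann-Whitney U) is sketched in place of the paired signed-rank variant reported in the text:

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu

rng = np.random.default_rng(0)
# Toy per-joint movement levels for two groups (24 "ASD", 25 "TD");
# lognormal values stand in for the skewed displacement features.
asd = rng.lognormal(mean=1.0, sigma=0.4, size=24)
td = rng.lognormal(mean=0.0, sigma=0.4, size=25)

# Normality check, then a nonparametric two-sided group comparison.
_, p_norm = shapiro(np.concatenate([asd, td]))
stat, p = mannwhitneyu(asd, td, alternative="two-sided")
```

In the full analysis this test would be run once per joint and per stimulus condition, flagging the joints whose movement differs significantly between groups.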
Second, we applied a set of machine-learning models to analyze whether the frequency of movement could discriminate between ASD and TD children. The body was divided into five parts: head (joints 15, 16, 17, and 18), trunk (joints 1 and 8), arms (joints 2, 3, 4, 5, 6, and 7), legs (joints 9, 10, 11, 12, 13, and 14), and feet (joints 19, 20, 21, 22, 23, and 24). The machine-learning analysis was based on 24 feature datasets, combining six body groupings (the five body parts plus the whole body) with four time windows (the three stimuli conditions, V, VA, and VAO, plus the entire experiment). Because of the large number of features, a reduction strategy was adopted to decrease the dimensionality of each dataset: principal component analysis (PCA) was applied to select the features that explain 95% of the variability of the dataset. Finally, a supervised machine-learning model was developed using the PCA features of each dataset.
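The 95%-variance reduction step can be sketched with scikit-learn, which accepts a fractional `n_components` directly. The feature matrix `X` below is a synthetic stand-in for one of the study's datasets (subjects × joint-movement features):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy stand-in for a joint-movement feature matrix: 49 subjects x 20
# correlated features (the real features come from the tracked joints).
latent = rng.normal(size=(49, 3))
X = latent @ rng.normal(size=(3, 20)) + 0.01 * rng.normal(size=(49, 20))

# Keep the principal components explaining 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
```

Because the toy features are driven by only three latent factors, the reduced matrix ends up with far fewer columns than the original while retaining at least 95% of the variance.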
To implement the models, support vector machine (SVM)-based pattern recognition with a leave-one-subject-out (LOSO) cross-validation procedure was applied [113]. Within the LOSO scheme, the training set was normalized by subtracting the median value and dividing by the median absolute deviation over each dimension. In each iteration, the validation set consisted of one specific subject and was normalized using the median and deviation of the training set. We used a C-SVM with a Gaussian kernel, varying the cost and gamma parameters over seven values logarithmically spaced between 0.1 and 1000. Also, an SVM recursive feature elimination (SVM-RFE) procedure was included in a wrapper approach. RFE was performed on the training set of each fold, and we computed the median rank of each feature across all folds. In particular, a nonlinear SVM-RFE was implemented, which includes a correlation bias reduction strategy in the feature elimination procedure [114]. To analyze the performance of the models, a set of metrics was considered: accuracy, i.e., the percentage of subjects correctly recognized; the true positive rate, i.e., the percentage of actual ASD subjects recognized as ASD; the true negative rate, i.e., the percentage of actual control subjects recognized as controls; and Cohen's kappa, which describes the performance of the model from 0 to 1, 0 being random class assignment and 1 perfect classification. The model was optimized to achieve the best Cohen's kappa. The algorithms were implemented using Matlab© R2016a and the LIBSVM toolbox [115].
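A minimal sketch of the LOSO evaluation with median/MAD normalization and a Gaussian-kernel C-SVM, using scikit-learn in place of LIBSVM; the grid search over cost and gamma and the SVM-RFE step are omitted for brevity:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

def loso_svm(X, y, C=1.0, gamma="scale"):
    """Leave-one-subject-out evaluation of a Gaussian-kernel C-SVM.

    Each fold is normalized with the training set's median and median
    absolute deviation, as described in the text (one row per subject).
    Returns the per-subject predictions, accuracy, and Cohen's kappa.
    """
    preds = np.empty_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        med = np.median(X[train_idx], axis=0)
        mad = np.median(np.abs(X[train_idx] - med), axis=0) + 1e-12
        Xtr = (X[train_idx] - med) / mad
        Xte = (X[test_idx] - med) / mad   # held-out subject, train stats
        clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(Xtr, y[train_idx])
        preds[test_idx] = clf.predict(Xte)
    acc = (preds == y).mean()
    kappa = cohen_kappa_score(y, preds)
    return preds, acc, kappa
```

In the full pipeline, C and gamma would be searched over seven logarithmically spaced values between 0.1 and 1000, and the model selected by the best Cohen's kappa.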

3. Results

3.1. Analysis of Total Movement

Figure 4 shows the significant differences in the body joints obtained by applying the Wilcoxon signed-rank test. In the V stimuli condition, 4 joints of the legs, 2 of the head, and 1 of the trunk presented greater movement in the ASD population than in TD children. In the VA stimuli condition, 1 joint of the legs and 2 of the head also presented greater movement in the ASD population than in TD children. In the VAO stimuli condition, 3 joints of the head presented the same tendency. Conversely, 1 joint of the head (16) presented lower movement in the ASD population in the visual-condition baseline (BL_V) and in the visual-auditory-olfactory-condition baseline (BL_VAO).

3.2. ASD Classification Performance

Table 1 and Table 2 show the performance of the computed machine-learning models, considering combinations of joint sets and virtual stimuli conditions. Regarding body parameters, the models including the head and trunk joints presented an accuracy of 82.98% with unbalanced confusion matrices. Conversely, the model using the feet joints presents the same accuracy (82.98%) with a balanced confusion matrix. In addition, the models including arm and leg movements achieved a lower accuracy than the other body joints (74.47% and 72.4%, respectively). Finally, the model including all joints and virtual stimuli conditions showed the lowest accuracy (70.21%) and the lowest true positive rate (45.45%).
Regarding the influence of the virtual stimuli conditions, V presented the highest accuracy in the study (89.36%), showing that it was the most relevant stimulus of the experiment. In the VA condition the accuracy decreased to 74.47%, and in VAO to 70.21%. Furthermore, the head joints showed the most consistent performance across the stimuli conditions, from 80.85% in V to 89.36% in VAO, suggesting that the head is the most important part of the body for discriminating the ASD population. The trunk joints also presented consistent accuracy, from 70.21% in V to 76.60% in VAO, but their discriminative performance is considerably lower than that of the head. All the models used three features or fewer after applying the automatic feature selection procedure.
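The metrics reported in Tables 1 and 2 follow directly from the confusion-matrix counts. A minimal sketch (the function name is ours), taking the ASD class as positive:

```python
def classification_metrics(tp, fn, tn, fp):
    """Accuracy, TPR, TNR, and Cohen's kappa from confusion-matrix counts
    (positive class = ASD)."""
    n = tp + fn + tn + fp
    accuracy = (tp + tn) / n
    tpr = tp / (tp + fn)          # actual ASD recognized as ASD
    tnr = tn / (tn + fp)          # actual controls recognized as controls
    # chance agreement expected from the row/column marginals
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)
    p_no = ((tn + fp) / n) * ((tn + fn) / n)
    pe = p_yes + p_no
    kappa = (accuracy - pe) / (1 - pe)
    return accuracy, tpr, tnr, kappa
```

With the study’s group sizes (24 ASD, 25 TD), a perfect split gives kappa = 1, while a coin-flip classifier hovers around kappa = 0 regardless of its raw accuracy.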

4. Discussion

ASD is diagnosed according to qualitative clinician judgments, based on symptoms, through semi-structured observations and interviews (ADOS; ADI-R). Given the qualitative nature of these traditional tools, researchers have focused on improving methods of diagnosis and assessment, pointing to the predictive value of behavioral biomarkers as more objective and quantitative measures. This study aimed, first, to compare tracked body movement data during multimodal VR stimulation between ASD and TD children; second, to identify which body areas might be relevant for discriminating between the two populations; and third, to investigate which virtual stimuli condition best discriminates between them. To reach these aims, we applied a machine learning procedure [70] to the body movements recorded during a multimodal VR experience composed of three stimuli conditions: visual, visual-auditive, and visual-auditive-olfactory.
The study included a preliminary analysis of the frequency distribution of body movements to investigate the differences between groups, and a broad set of supervised ML models combining body parameters and stimuli conditions to evaluate how well movement discriminates ASD from TD children. The results are discussed along four points: (1) the significant differences between the two groups in body movements; (2) machine learning methods on body movements and the features used; (3) the influence of stimuli conditions; and (4) limitations and future studies.

4.1. Body Movement Parameters’ Differences between Groups

The first aim was to identify differences in body movements between children with ASD and TD children. Figure 4 showed significant differences in 15 body joints, 13 of which showed more body movement in ASD children than in TD children. In particular, 4 leg joints, 2 head joints, and 1 trunk joint in the V condition, 1 leg joint and 2 head joints in the VA condition, and 3 head joints in the VAO condition showed greater movement, suggesting that ASD children performed more head and leg movements than TD children during the imitation tasks. Previous studies showing that head tilting, leg flapping, and bilateral repetitive leg movements during walking are movement features related to ASD confirm the presence of larger body movements in ASD than in TD children [8,10]. However, one joint of the head showed greater movement in TD children than in ASD children during the baselines of VA and VAO. This result is partially at odds with the scientific literature, but, as mentioned in the introduction, common and complex body movements are also present in TD children. To improve the quantitative methods for discriminating ASD from TD children using body movement frequencies, we applied machine-learning techniques, developing 24 models that combine body joint groups and virtual stimuli conditions. We discuss these results in depth in the next section.

4.2. Machine Learning Methods on Body Movements and Features Used

Concerning recognition using different parts of the body, the best classification accuracy reached 82.98% for the head parameter using only one selected feature, and the same accuracy was achieved for the trunk and feet parameters using two selected features, independently of the specific stimuli condition. Notable results also came from the models including arms and legs, which achieved classification accuracies of 74.47% and 72.34%, respectively. These results are consistent with the scientific literature, which has identified head spinning and banging, body rocking, and foot stamping as three of the main stereotypies and repetitive movements related to ASD [8]. Furthermore, these results showed that ML can provide an effective solution for describing body movement differences in group classification during imitation tasks, where traditional statistical methods (e.g., mean comparisons) could fail to deal with such complex tasks. To our knowledge, only two previous studies have investigated the predictive value of an imitation task for discriminating between children with and without ASD [70,71]. Both studies focused on the analysis of arm and hand movements during a simple reach-to-drop imitation task, reaching maximum classification accuracies of 96.7% and 93.8%, respectively. Our study focused on a more complex real-simulated imitation task, composed of three imitation subtasks (waving, dancing, and eating), and tracked all body movements.
Thus, the present findings show the feasibility and applicability of an ML method for correctly classifying preschool children with ASD based on real-simulated imitation tasks. In the standard diagnosis of toddlers, it is particularly difficult to evaluate repetitive and stereotyped behaviors. Technologies such as cameras and sensors might have clinical applications to support diagnosis by providing accurate quantitative measures alongside the qualitative traditional methods. Finally, cameras and sensors are more convenient, less expensive, and less invasive than fMRI and EEG for implementation in clinical settings.

4.3. The Influence of Stimuli Conditions

Regarding the virtual stimuli conditions, the head proved to be the core body area for group classification across the three stimuli conditions, achieving classification accuracies of 89.36% in the VAO condition, 82.98% in the VA condition, and 80.85% in the V condition. These results suggest that the greater the sensory stimulation, the greater the stereotypies and repetitive movements, making it possible to discriminate children with ASD from TD children. The different classification accuracies among the three stimuli conditions are consistent with previous studies on the sensory overload of ASD patients under multimodal stimuli [116,117,118,119]. Specifically, ASD patients have shown lower-functioning abilities to filter, process, and integrate simultaneous information compared with TD subjects [116]. For example, in tasks pairing multiple auditory tones with a single visual flash, ASD patients perceived more flashes than were presented [117]. Furthermore, EEG studies on ASD patients showed higher amplitudes and delayed response latencies compared with TD subjects [118,119]. Other multimodal sensory stimulations also achieved good classification accuracy between the two groups, based on stereotypies and repetitive movements of the trunk (VAO: 76.60%), legs (VAO: 74.47%), and arms and feet (VA: 78.72%). Finally, arm and feet movements showed high accuracies of 87.23% and 78.72%, respectively, in the classification between groups during the unimodal visual condition.
In general, the V condition model including all body joints achieved the best accuracy (89.36%), with a 100% true positive rate and using only one selected feature, in contrast to VA (74.47%) and VAO (70.21%). On one hand, these results highlight that real-simulated imitation activities that can occur in daily life, such as waving (V), dancing (VA), and eating (VAO), allow body movements to be evaluated in a way that discriminates ASD from TD children. Previous studies on discriminating ASD and TD children using imitation tasks have used simple reach-to-grasp-and-drop arm movement tasks [70] or a hand movement-to-target task in laboratory settings [71], whereas our study investigated whole-body movements in more complex real-simulated scenarios and tasks.
On the other hand, the predominance of the visual influence may stem from the fact that the V stimuli condition was presented first and could have generated a novelty (“wow”) effect in the subjects. Finally, the model including all the data achieved an accuracy of 70.21% with a 45.45% true positive rate, showing that feature selection procedures play a critical role in achieving good performance in biomarker development. This result could also suggest that the implemented virtual tasks could fail in detecting repetitive behaviors.

4.4. Limitations and Future Studies

Despite the promising results, some methodological limitations of this exploratory study should be reported. The main limitation is the small sample size of each group. Future studies on larger samples would allow validation of the ML method and provide the possibility to test the model. As ASD is a heterogeneous condition, training the model with larger groups of ASD children and children with typical neurodevelopment would help generalize the model to the wide range of ASD presentations. In addition, comparison studies including other neurodevelopmental disorders with movement impairments, such as attention deficit hyperactivity disorder, could be valuable for understanding through ML whether these groups present similar body movements or movements that are distinct across neurodevelopmental conditions (e.g., intellectual disability [120]). Indeed, taking different neurodevelopmental conditions into account could improve the specificity of the classifiers for ASD rather than for neurodevelopmental disorders in general.
Furthermore, the experimental groups were not matched on measures of intelligence quotient, cognitive abilities, or motor comorbidities (e.g., dyspraxia), limiting the confidence that the differences observed in the experiment were due to a diagnosis of ASD rather than to differences in other cognitive or non-cognitive factors. Future studies should match these factors in order to improve the reliability and validity of the model results.
Finally, regarding the stimuli conditions, the order of presentation (V-VA-VAO) was not counterbalanced because of the hypersensitivity of ASD children to multiple stimuli [105,106,107,108]. Future studies including counterbalanced conditions, taking high- and low-functioning children into account, are needed.

5. Conclusions

The present study represents, to our knowledge, the first attempt to discriminate ASD children from children with typical neurodevelopment using body movement parameters from a VR imitation task and machine learning. The significant predictive value of our classification approach might be valuable to support ASD diagnosis, together with the use of more objective methods and ecologically valid VR-aided tasks alongside traditional assessment.

Author Contributions

All authors have contributed to the manuscript as described below: M.A.R. and L.A. designed the study, and M.A.R., L.A., and I.A.C.G. supervised the whole study. Movements’ data processing was conducted by J.M.-M. as well as the statistical analyses. The manuscript was originally written by I.A.C.G., M.E.M., G.T.G., and J.M.-M. and the final manuscript was revised by M.A.R. and I.A.C.G. All authors assisted in the revision process. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Spanish Ministry of Economy, Industry, and Competitiveness funded project “Immersive virtual environment for the evaluation and training of children with autism spectrum disorder: T Room” (IDI-20170912) and by the Generalitat Valenciana funded project REBRAND (PROMETEO/2019/105). Furthermore, this work was co-funded by the European Union through the Operational Program of the European Regional Development Fund (ERDF) of the Valencian Community 2014–2020 (IDIFEDER/2018/029).

Acknowledgments

We thank Zayda Ferrer Lluch for the development of virtual reality environments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5®); American Psychiatric Pub: Washington, DC, USA, 2013. [Google Scholar]
  2. World Health Organization. Available online: https://www.who.int/news-room/fact-sheets/detail/autism-spectrum-disorders (accessed on 20 November 2019).
  3. Anagnostou, E.; Zwaigenbaum, L.; Szatmari, P.; Fombonne, E.; Fernandez, B.A.; Woodbury-Smith, M.; Buchanan, J.A. Autism spectrum disorder: Advances in evidence-based practice. Cmaj 2014, 186, 509–519. [Google Scholar] [CrossRef] [PubMed]
  4. Lord, C.; Risi, S.; DiLavore, P.S.; Shulman, C.; Thurm, A.; Pickles, A. Autism from 2 to 9 years of age. Arch. Gen. Psychiatry 2006, 63, 694–701. [Google Scholar] [CrossRef] [PubMed]
  5. Schmidt, L.; Kirchner, J.; Strunz, S.; Broźus, J.; Ritter, K.; Roepke, S.; Dziobek, I. Psychosocial functioning and life satisfaction in adults with autism spectrum disorder without intellectual impairment. J. Clin. Psychol. 2015, 71, 1259–1268. [Google Scholar] [CrossRef]
  6. Turner, M. Annotation: Repetitive behaviour in autism: A review of psychological research. J. Child Psychol. Psychiatry Allied Discip. 1999, 40, 839–849. [Google Scholar] [CrossRef]
  7. Lewis, M.H.; Bodfish, J.W. Repetitive behavior disorders in autism. Ment. Retard. Dev. Disabil. Res. Rev. 1998, 4, 80–89. [Google Scholar] [CrossRef]
  8. Ghanizadeh, A. Clinical approach to motor stereotypies in autistic children. Iran. J. Pediatr. 2010, 20, 149. [Google Scholar]
  9. Mahone, E.M.; Bridges, D.; Prahme, C.; Singer, H.S. Repetitive arm and hand movements (complex motor stereotypies) in children. J. Pediatr. 2004, 145, 391–395. [Google Scholar] [CrossRef] [PubMed]
  10. MacDonald, R.; Green, G.; Mansfield, R.; Geckeler, A.; Gardenier, N.; Anderson, J.; Sanchez, J. Stereotypy in young children with autism and typically developing children. Res. Dev. Disabil. 2007, 28, 266–277. [Google Scholar] [CrossRef] [PubMed]
  11. Singer, H.S. Motor stereotypies. Semin. Pediatr. Neurol. 2009, 16, 77–81. [Google Scholar] [CrossRef]
  12. Lidstone, J.; Uljarević, M.; Sullivan, J.; Rodgers, J.; McConachie, H.; Freeston, M.; Leekam, S. Relations among restricted and repetitive behaviors, anxiety and sensory features in children with autism spectrum disorders. Res. Autism Spectr. Disord. 2014, 8, 82–92. [Google Scholar] [CrossRef]
  13. Bodfish, J.W.; Symons, F.J.; Parker, D.E.; Lewis, M.H. Varieties of repetitive behavior in autism: Comparisons to mental retardation. J. Autism Dev. Disord. 2000, 30, 237–243. [Google Scholar] [CrossRef] [PubMed]
  14. Campbell, M.; Locascio, J.J.; Choroco, M.C.; Spencer, E.K.; Malone, R.P.; Kafantaris, V.; Overall, J.E. Stereotypies and tardive dyskinesia: Abnormal movements in autistic children. Psychopharmacol. Bull. 1990, 26, 260–266. [Google Scholar] [PubMed]
  15. Goldman, S.; Wang, C.; Salgado, M.W.; Greene, P.E.; Kim, M.; Rapin, I. Motor stereotypies in children with autism and other developmental disorders. Dev. Med. Child Neurol. 2009, 51, 30–38. [Google Scholar] [CrossRef] [PubMed]
  16. Lord, C.; Rutter, M.; DiLavore, P.C.; Risi, S.A. Diagnostic Observation Schedule-WPS (ADOS-WPS); Western Psychological Services: Los Angeles, CA, USA, 1999. [Google Scholar]
  17. Lord, C.; Rutter, M.; Le Couteur, A. Autism Diagnostic Interview-Revised: A revised version of a diagnostic interview for caregivers of individuals with possible pervasive developmental disorders. J. Autism Dev. Disord. 1994, 24, 659–685. [Google Scholar] [CrossRef]
  18. Goldstein, S.; Naglieri, J.A.; Ozonoff, S. Assessment of Autism Spectrum Disorder; The Guilford Press: New York, NY, USA, 2009. [Google Scholar]
  19. Gonçalves, N.; Rodrigues, J.L.; Costa, S.; Soares, F. Preliminary study on determining stereotypical motor movements. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 1598–1601. [Google Scholar]
  20. Volkmar, F.R.; State, M.; Klin, A. Autism and autism spectrum disorders: Diagnostic issues for the coming decade. J. Child Psychol. Psychiatry 2009, 50, 108–115. [Google Scholar] [CrossRef]
  21. Lord, C.; Rutter, M.; DiLavore, P.C.; Risi, S. Autism Diagnostic Observation Schedule; Western Psychological Services: Los Angeles, CA, USA, 2001. [Google Scholar]
  22. Reaven, J.A.; Hepburn, S.L.; Ross, R.G. Use of the ADOS and ADI-R in children with psychosis: Importance of clinical judgment. Clin. Child Psychol. Psychiatry 2008, 13, 81–94. [Google Scholar] [CrossRef]
  23. Torres, E.B.; Brincker, M.; Isenhower, R.W., III; Yanovich, P.; Stigler, K.A.; Nurnberger, J.I., Jr.; José, J.V. Autism: The micro-movement perspective. Front. Integr. Neurosci. 2013, 7, 32. [Google Scholar] [CrossRef]
  24. Paulhus, D.L. Measurement and Control of Response Bias. In Measures of Social Psychological Attitudes, Volume 1. Measures of Personality and Social Psychological Attitudes; Robinson, J.P., Shaver, P.R., Wrightsman, L.S., Eds.; Academic Press: San Diego, CA, USA, 1991; pp. 17–59. [Google Scholar]
  25. Edwards, A.L. The Social Desirability Variable in Personality Assessment and Research; Dryden Press: Dryden, ON, Canada, 1957. [Google Scholar]
  26. Möricke, E.; Buitelaar, J.K.; Rommelse, N.N. Do we need multiple informants when assessing autistic traits? The degree of report bias on offspring, self, and spouse ratings. J. Autism Dev. Disord. 2016, 46, 164–175. [Google Scholar] [CrossRef]
  27. Chaytor, N.; Schmitter-Edgecombe, M.; Burr, R. Improving the ecological validity of executive functioning assessment. Arch. Clin. Neuropsychol. 2006, 21, 217–227. [Google Scholar] [CrossRef]
  28. Franzen, M.D.; Wilhelm, K.L. Conceptual Foundations of Ecological Validity in Neuropsychological Assessment. In Ecological Validity of Neuropsychological Testing; Sbordone, R.J., Long, C.J., Eds.; Gr Press/St Lucie Press Inc.: Delray Beach, FL, USA, 1996; pp. 91–112. [Google Scholar]
  29. Brunswick, E. Symposium of the probability approach in psychology: Representative design and probabilistic theory in a functional psychology. Psychol. Rev. 1955, 62, 193–217. [Google Scholar] [CrossRef]
  30. Gillberg, C.; Rasmussen, P. Brief report: Four case histories and a literature review of Williams syndrome and autistic behavior. J. Autism Dev. Disord. 1994, 24, 381–393. [Google Scholar] [CrossRef] [PubMed]
  31. Parsons, S. Authenticity in Virtual Reality for assessment and intervention in autism: A conceptual review. Educ. Res. Rev. 2016, 19, 138–157. [Google Scholar] [CrossRef]
  32. Francis, K. Autism interventions: A critical update. Dev. Med. Child Neurol. 2005, 47, 493–499. [Google Scholar] [CrossRef] [PubMed]
  33. Albinali, F.; Goodwin, M.S.; Intille, S.S. Recognizing stereotypical motor movements in the laboratory and classroom: A case study with children on the autism spectrum. In Proceedings of the 11th International Conference on Ubiquitous Computing, Orlando, FL, USA, 30 September–3 October 2009; ACM: New York, NY, USA, 2009; pp. 71–80. [Google Scholar] [CrossRef]
  34. Pyles, D.A.; Riordan, M.M.; Bailey, J.S. The stereotypy analysis: An instrument for examining environmental variables associated with differential rates of stereotypic behavior. Res. Dev. Disabil. 1997, 18, 11–38. [Google Scholar] [CrossRef]
  35. Min, C.H.; Tewfik, A.H. Novel pattern detection in children with autism spectrum disorder using iterative subspace identification. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 2266–2269. [Google Scholar]
  36. Min, C.H.; Tewfik, A.H. Automatic characterization and detection of behavioral patterns using linear predictive coding of accelerometer sensor data. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 220–223. [Google Scholar]
  37. Nosek, B.A.; Hawkins, C.B.; Frazier, R.S. Implicit social cognition: From measures to mechanisms. Trends Cogn. Sci. 2011, 15, 152–159. [Google Scholar] [CrossRef] [PubMed]
  38. Lieberman, M.D. Social cognitive neuroscience. In Handbook of Social Psychology; Fiske, S.T., Gilbert, D.T., Lindzey, G., Eds.; John Wiley & Sons Inc.: Hoboken, NJ, USA, 2010; pp. 143–193. [Google Scholar]
  39. Forscher, P.S.; Lai, C.K.; Axt, J.R.; Ebersole, C.R.; Herman, M.; Devine, P.G.; Nosek, B.A. A meta-analysis of procedures to change implicit measures. J. Pers. Soc. Psychol. 2019, 117, 522–559. [Google Scholar] [CrossRef]
  40. LeDoux, J.E.; Pine, D.S. Using neuroscience to help understand fear and anxiety: A two-system framework. Am. J. Psychiatry 2016, 173, 1083–1093. [Google Scholar] [CrossRef] [PubMed]
  41. Fenning, R.M.; Baker, J.K.; Baucom, B.R.; Erath, S.A.; Howland, M.A.; Moffitt, J. Electrodermal variability and symptom severity in children with autism spectrum disorder. J. Autism Dev. Disord. 2017, 47, 1062–1072. [Google Scholar] [CrossRef]
  42. Walsh, P.; Elsabbagh, M.; Bolton, P.; Singh, I. In search of biomarkers for autism: Scientific, social and ethical challenges. Nat. Rev. Neurosc. 2011, 12, 603. [Google Scholar] [CrossRef]
  43. Nikula, R. Psychological correlates of nonspecific skin conductance responses. Psychophysiology 1991, 28, 86–90. [Google Scholar] [CrossRef]
  44. Alcañiz Raya, M.; Chicchi Giglioli, I.A.; Marín-Morales, J.; Higuera-Trujillo, J.L.; Olmos, E.; Minissi, M.E.; Abad, L. Application of Supervised Machine Learning for Behavioral Biomarkers of Autism Spectrum Disorder Based on Electrodermal Activity and Virtual Reality. Front. Hum. Neurosci. 2020, 14, 90. [Google Scholar] [CrossRef] [PubMed]
  45. Cunningham, W.A.; Raye, C.L.; Johnson, M.K. Implicit and explicit evaluation: fMRI correlates of valence, emotional intensity, and control in the processing of attitudes. J. Cogn. Neurosci. 2004, 16, 1717–1729. [Google Scholar] [CrossRef] [PubMed]
  46. Kopton, I.M.; Kenning, P. Near-infrared spectroscopy (NIRS) as a new tool for neuroeconomic research. Front. Hum. Neurosci. 2014, 8, 549. [Google Scholar] [CrossRef] [PubMed]
  47. Knyazev, G.G.; Slobodskaya, H.R.; Wilson, G.D. Personality and Brain Oscillations in the Developmental Perspective. In Advances in Psychology Research; Shohov, S.P., Ed.; Nova Science Publishers: Hauppauge, NY, USA, 2004; Volume 29, pp. 3–34. [Google Scholar]
  48. Gwizdka, J. Characterizing relevance with eye-tracking measures. In Proceedings of the 5th Information Interaction in Context Symposium, Regensburg, Germany, 26–29 August 2014; pp. 58–67. [Google Scholar]
  49. Nickel, P.; Nachreiner, F. Sensitivity and diagnosticity of the 0.1-Hz component of heart rate variability as an indicator of mental workload. Hum. Factors 2003, 45, 575–590. [Google Scholar] [CrossRef] [PubMed]
  50. Di Martino, A.; Yan, C.G.; Li, Q.; Denio, E.; Castellanos, F.X.; Alaerts, K.; Deen, B. The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol. Psychiatry 2014, 19, 659. [Google Scholar] [CrossRef] [PubMed]
  51. Van Hecke, A.V.; Lebow, J.; Bal, E.; Lamb, D.; Harden, E.; Kramer, A.; Porges, S.W. Electroencephalogram and heart rate regulation to familiar and unfamiliar people in children with autism spectrum disorders. Child Dev. 2009, 80, 1118–1133. [Google Scholar] [CrossRef]
  52. Alcañiz, M.L.; Olmos-raya, E.; Abad, L. Uso de entornos virtuales para trastornos del neurodesarrollo: Una revisión del estado del arte y agenda futura. Medicina (Buenos Aires) 2019, 79, 77–81. [Google Scholar]
  53. Amiri, A.; Peltier, N.; Goldberg, C.; Sun, Y.; Nathan, A.; Hiremath, S.; Mankodiya, K. WearSense: Detecting autism stereotypic behaviors through smartwatches. Healthcare 2017, 5, 11. [Google Scholar] [CrossRef]
  54. Coronato, A.; De Pietro, G.; Paragliola, G. A situation-aware system for the detection of motion disorders of patients with autism spectrum disorders. Expert Syst. Appl. 2014, 41, 7868–7877. [Google Scholar] [CrossRef]
  55. Goodwin, M.S.; Intille, S.S.; Velicer, W.F.; Groden, J. Sensor-enabled detection of stereotypical motor movements in persons with autism spectrum disorder. In Proceedings of the 7th International Conference on Interaction Design and Children, Chicago, IL, USA, 11–13 June 2008; pp. 109–112. [Google Scholar]
  56. Goodwin, M.S.; Intille, S.S.; Albinali, F.; Velicer, W.F. Automated detection of stereotypical motor movements. J. Autism Dev. Disord. 2011, 41, 770–782. [Google Scholar] [CrossRef]
  57. Paragliola, G.; Coronato, A. Intelligent monitoring of stereotyped motion disorders in case of children with autism. In Proceedings of the 2013 9th International Conference on Intelligent Environments, Athens, Greece, 16–17 July 2013; pp. 258–261. [Google Scholar]
  58. Rodrigues, J.L.; Gonçalves, N.; Costa, S.; Soares, F. Stereotyped movement recognition in children with ASD. Sens. Actuators A Phys. 2013, 202, 162–169. [Google Scholar] [CrossRef]
  59. Madsen, M.; El Kaliouby, R.; Goodwin, M.; Picard, R. Technology for just-in-time in-situ learning of facial affect for persons diagnosed with an autism spectrum disorder. In Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, Halifax, NS, Canada, 13–15 October 2008; pp. 19–26. [Google Scholar]
  60. Min, C.H.; Tewfik, A.H.; Kim, Y.; Menard, R. Optimal sensor location for body sensor network to detect self-stimulatory behaviors of children with autism spectrum disorder. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 2–6 September 2009; pp. 3489–3492. [Google Scholar]
  61. Westeyn, T.; Vadas, K.; Bian, X.; Starner, T.; Abowd, G.D. Recognizing mimicked autistic self-stimulatory behaviors using hmms. In Proceedings of the Ninth IEEE International Symposium on Wearable Computers (ISWC’05), Osaka, Japan, 11–14 October 2005; pp. 164–167. [Google Scholar]
  62. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  63. Andriluka, M.; Pishchulin, L.; Gehler, P.; Schiele, B. 2D human pose estimation: New benchmark and state of the art analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3686–3693. [Google Scholar]
  64. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
  65. Carreira, J.; Agrawal, P.; Fragkiadaki, K.; Malik, J. Human pose estimation with iterative error feedback. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4733–4742. [Google Scholar]
  66. Iqbal, U.; Garbade, M.; Gall, J. Pose for action-action for pose. In Proceedings of the 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), Washington, DC, USA, 30 May–3 June 2017; pp. 438–445. [Google Scholar]
  67. Pfister, T.; Charles, J.; Zisserman, A. Flowing convnets for human pose estimation in videos. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1913–1921. [Google Scholar]
  68. Toshev, A.; Szegedy, C. Deeppose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660. [Google Scholar]
  69. Wei, S.E.; Ramakrishna, V.; Kanade, T.; Sheikh, Y. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4724–4732. [Google Scholar]
  70. Crippa, A.; Salvatore, C.; Perego, P.; Forti, S.; Nobile, M.; Molteni, M.; Castiglioni, I. Use of machine learning to identify children with autism and their motor abnormalities. J. Autism Dev. Disord. 2015, 45, 2146–2156. [Google Scholar] [CrossRef]
  71. Wedyan, M.; Al-Jumaily, A.; Crippa, A. Using machine learning to perform early diagnosis of autism spectrum disorder based on simple upper limb movements. Int. J. Hybrid Intell. Syst. 2019, 15, 195–206. [Google Scholar] [CrossRef]
  72. Burdea, G.C.; Coiffet, P. Virtual Reality Technology; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  73. Fuchs, H.; Bishop, G. Research Directions in Virtual Environments; University of North Carolina at Chapel Hill: Chapel Hill, NC, USA, 1992. [Google Scholar]
  74. Parsons, S.; Mitchell, P.; Leonard, A. The use and understanding of virtual environments by adolescents with autistic spectrum disorders. J. Autism Dev. Disord. 2004, 34, 449–466. [Google Scholar] [CrossRef]
  75. Parsons, T.D.; Rizzo, A.A.; Rogers, S.; York, P. Virtual reality in paediatric rehabilitation: A review. Dev. Neurorehabil. 2009, 12, 224–238. [Google Scholar] [CrossRef] [PubMed]
  76. Parsons, T.D. Neuropsychological assessment using virtual environments: Enhanced assessment technology for improved ecological validity. In Advanced Computational Intelligence Paradigms in Healthcare 6. Virtual Reality in Psychotherapy, Rehabilitation, and Assessment; Springer: Heidelberg/Berlin, Germany, 2011; pp. 271–289. [Google Scholar]
Figure 1. The virtual environment. (A) City street intersection; (B) visual (V) condition, boy’s avatar saying hello; (C) visual-auditory-olfactory (VAO) condition, boy’s avatar eating a muffin.
Figure 2. Experimental setting.
Figure 3. Virtual disposition of the tracked joints.
Figure 4. Analysis of the level of movement of the joints that present statistically significant differences: (a) boxplot of joints under visual stimuli; (b) boxplot of joints under visual and auditory stimuli; (c) boxplot of joints under visual, auditory, and olfactory stimuli; (d) scheme of the joints indicating the stimuli in which differences were found. Note: * p < 0.05; ** p < 0.01; *** p < 0.001.
Table 1. Overview of the accuracy of each model considering the stimuli and set of joints.
                      Stimuli
Set of Joints    V        VA       VAO      All
Head             80.85%   82.98%   89.36%   82.98%
Trunk            70.21%   72.34%   76.60%   82.98%
Arms             87.23%   78.72%   65.96%   74.47%
Legs             61.70%   63.83%   74.47%   72.34%
Feet             78.72%   78.72%   65.96%   82.98%
All              89.36%   74.47%   70.21%   70.21%
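To make the comparison in Table 1 concrete, the following Python sketch (illustrative only, not part of the study's analysis) encodes the per-joint accuracies and checks which joint set varies least across stimulus conditions; the head shows the smallest spread, matching the reported observation that it performs most consistently across stimuli.

```python
# Per-joint classification accuracies (%) transcribed from Table 1,
# in the order of the V, VA, VAO, and all-stimuli conditions.
accuracy = {
    "Head":  [80.85, 82.98, 89.36, 82.98],
    "Trunk": [70.21, 72.34, 76.60, 82.98],
    "Arms":  [87.23, 78.72, 65.96, 74.47],
    "Legs":  [61.70, 63.83, 74.47, 72.34],
    "Feet":  [78.72, 78.72, 65.96, 82.98],
}

# Read "consistency" as the spread (max - min) of accuracy across conditions:
# a smaller spread means performance depends less on the stimulus presented.
spread = {joint: max(a) - min(a) for joint, a in accuracy.items()}
most_consistent = min(spread, key=spread.get)  # -> "Head"
```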
Table 2. Detailed autism spectrum disorder (ASD) recognition performance of each model, including accuracy, true positive rate (TPR), true negative rate (TNR), Cohen's Kappa, and the PCA features selected.
Stimuli   Set of Joints   Accuracy   TPR       TNR       Kappa   PCA Features Selected
All       All             70.21%     45.45%    92.00%    0.39    1/20
V         All             89.36%     100.00%   80.00%    0.79    1/14
VA        All             74.47%     59.09%    88.00%    0.48    2/12
VAO       All             70.21%     63.64%    76.00%    0.40    3/12
All       Head            82.98%     100.00%   68.00%    0.67    1/11
All       Trunk           82.98%     63.64%    100.00%   0.65    2/8
All       Arms            74.47%     90.91%    60.00%    0.50    1/15
All       Legs            72.34%     68.18%    76.00%    0.44    3/8
All       Feet            82.98%     81.82%    84.00%    0.66    2/13
V         Head            80.85%     68.18%    92.00%    0.61    1/6
V         Trunk           70.21%     81.82%    60.00%    0.41    1/3
V         Arms            87.23%     72.73%    100.00%   0.74    1/8
V         Legs            61.70%     45.45%    76.00%    0.22    1/5
V         Feet            78.72%     54.55%    100.00%   0.56    2/6
VA        Head            82.98%     63.64%    100.00%   0.65    3/6
VA        Trunk           72.34%     72.73%    72.00%    0.45    1/3
VA        Arms            78.72%     95.45%    64.00%    0.58    1/8
VA        Legs            63.83%     45.45%    80.00%    0.26    1/4
VA        Feet            78.72%     72.73%    84.00%    0.57    2/6
VAO       Head            89.36%     77.27%    100.00%   0.78    2/6
VAO       Trunk           76.60%     68.18%    84.00%    0.53    2/3
VAO       Arms            65.96%     50.00%    80.00%    0.30    2/7
VAO       Legs            74.47%     68.18%    80.00%    0.48    1/5
VAO       Feet            65.96%     45.45%    84.00%    0.30    2/7
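All of the statistics in Table 2 can be derived from each model's 2 × 2 confusion matrix. The Python sketch below (illustrative, not from the paper) shows the standard formulas; the test counts of 22 ASD and 25 TD children are inferred from the denominators implied by the reported rates, and the example reproduces the first row of the table.

```python
def binary_metrics(tp, fn, fp, tn):
    """Accuracy, TPR, TNR, and Cohen's kappa from a 2x2 confusion matrix.

    tp/fn: ASD children classified correctly/incorrectly (positives);
    tn/fp: TD children classified correctly/incorrectly (negatives).
    """
    n = tp + fn + fp + tn
    accuracy = (tp + tn) / n
    tpr = tp / (tp + fn)  # sensitivity: proportion of ASD recognized
    tnr = tn / (tn + fp)  # specificity: proportion of TD recognized
    # Chance agreement expected from the marginal totals (Cohen's kappa).
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return accuracy, tpr, tnr, kappa

# First row of Table 2 (All stimuli, all joints): 10 of 22 ASD and
# 23 of 25 TD children classified correctly (counts inferred from the rates).
acc, tpr, tnr, kappa = binary_metrics(tp=10, fn=12, fp=2, tn=23)
# acc ≈ 0.7021, tpr ≈ 0.4545, tnr = 0.92, kappa ≈ 0.39
```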