Article

Employing Classification Techniques on SmartSpeech Biometric Data towards Identification of Neurodevelopmental Disorders

by Eugenia I. Toki 1,2, Giorgos Tatsis 1,3, Vasileios A. Tatsis 1,4, Konstantinos Plachouras 1, Jenny Pange 2 and Ioannis G. Tsoulos 5,*
1 Department of Speech and Language Therapy, School of Health Sciences, University of Ioannina, 45500 Ioannina, Greece
2 Laboratory of New Technologies and Distance Learning, Department of Early Childhood Education, School of Education, University of Ioannina, 45110 Ioannina, Greece
3 Physics Department, University of Ioannina, 45110 Ioannina, Greece
4 Department of Computer Science & Engineering, University of Ioannina, 45110 Ioannina, Greece
5 Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
* Author to whom correspondence should be addressed.
Signals 2023, 4(2), 401-420; https://doi.org/10.3390/signals4020021
Submission received: 25 February 2023 / Revised: 10 May 2023 / Accepted: 23 May 2023 / Published: 30 May 2023

Abstract:
Early detection and evaluation of children at risk of neurodevelopmental disorders and/or communication deficits are critical. While the current literature indicates a high prevalence of neurodevelopmental disorders, many children remain undiagnosed, resulting in missed opportunities for effective interventions that could have had a greater impact if administered earlier. Clinicians face a variety of complications during neurodevelopmental disorder evaluation procedures and must elevate their use of digital tools to aid in early detection efficiently. Artificial intelligence enables novel approaches to decision making, classification, and diagnosis. The current research investigates the efficacy of various machine learning approaches on the biometric SmartSpeech datasets. These datasets come from a new innovative system that includes a serious game which gathers children’s responses to specifically designed speech and language activities and their manifestations, intending to assist during the clinical evaluation of neurodevelopmental disorders. The machine learning approaches utilized the Radial Basis Function neural network, Deep Learning Neural Networks, and a variation of Grammatical Evolution (GenClass). The most significant results show improved accuracy (%) when using the eye tracking dataset; more specifically: (i) for the class Disorder with GenClass (92.83%), (ii) for the class Autism Spectrum Disorders with Deep Learning Neural Networks layer 4 (86.33%), (iii) for the class Attention Deficit Hyperactivity Disorder with Deep Learning Neural Networks layer 4 (87.44%), (iv) for the class Intellectual Disability with GenClass (86.93%), (v) for the class Specific Learning Disorder with GenClass (88.88%), and (vi) for the class Communication Disorders with GenClass (88.70%).
Overall, the results indicated GenClass to be nearly the top competitor, opening up additional avenues for future studies toward automatically classifying and assisting clinical assessments for children with neurodevelopmental disorders.

1. Introduction

Neurodevelopmental disorders (NDs) are a group of disorders that typically appear in childhood and are characterized by impairments in neurological development that affect multiple aspects of functioning, including communication, learning, social behavior, cognition, and emotion [1,2,3,4,5,6]. NDs include Autism Spectrum Disorders (ASD), Attention Deficit Hyperactivity Disorder (ADHD), Intellectual Disability (ID), Specific Learning Disorder (SLD), and Communication Disorders (CD) [1]. The DSM-5 [1] defines these disorders’ profiles with certain characteristics [1,2,3,4,5,6]: (i) ASD exhibits persistent difficulties with social interaction and communication as well as the existence of restrictive, repetitive patterns of behavior, interests, or activities, resulting in clinically severe functional deficits; (ii) inattention, impulsivity, and hyperactivity are the characteristics of ADHD, interfering with day-to-day functioning; (iii) ID comprises impairments of general mental abilities, including verbal abilities, learning aptitude, the capacity for logical reasoning, and practical intelligence (problem-solving), that impact adaptive functioning; (iv) SLD presents significantly poor performance in at least one of these areas: oral expression, listening comprehension, basic reading and/or writing abilities, mathematics calculation and/or problem-solving; and (v) CD refers to a group of disorders (speech sound disorder, language disorder, childhood-onset fluency disorder, social (pragmatic) communication disorder, and unspecified communication disorder) characterized by persistent difficulties in the acquisition, comprehension, and/or use of spoken or written language, which interfere with effective communication.
These disorders commonly onset during childhood, from young infancy to adolescence. For instance, ASD can be diagnosed between 2 and 4 years old, ADHD before 12 years old, and ID before 18 years old, while any of the NDs may go undetected until adulthood [4]. The severity of ND symptoms varies, and they affect individuals’ quality of life as well as that of their families, causing major care needs that require extensive community assets [7,8]. Early screening and evaluation are vital to identify children at risk of NDs and/or communication deficiencies. While the current literature reports a high prevalence of NDs, many children remain underdiagnosed, missing out on effective interventions that could have had more impact if administered early [7,8].
Effective communication is an essential indicator of development from childhood through adult life and is vital for social interactions [4,9]. Delayed speech and language development is often an early indicator of many NDs [4,10]. Clinicians employ various assessment instruments, tests, observations of the child’s perceived behaviors, and parent/caregiver interviews during evaluation procedures [7]. Although all of the aforementioned are meant to be applied with clinical discretion, their use raises concerns such as [11,12,13]: (i) clinical symptoms are shared among neurodevelopmental disorders; (ii) severe specifier values may result in a positive diagnostic decision because most indications are expressed quantitatively; (iii) in the absence of biomarkers, we are unable to distinguish false positives from closely related conditions; (iv) the interpretation of instrument values near the threshold may be challenging; (v) diagnostic instruments do not offer a differential diagnosis and are unhelpful for negative diagnoses; and (vi) diagnostic instrumentation does not establish individual Functional Communication Profiles to highlight deficits and strengths valuable for intervention. As such, they may occasionally result in subjective evaluations [11,12,13]; clinical assessment thus rests on multiparametric, non-standardized, and subjective diagnostic procedures that remain challenging and require a high level of expertise [14]. Moreover, early detection of developmental disabilities in children is crucial for improving the prognostic procedures for NDs across an individual’s developmental stages [12]. Therefore, there is a need for additional support to diminish the over- or under-diagnosis of NDs in children [11,12,14,15].
Speech and language therapy and special education can benefit from advances in biosignal processing techniques, and wearable biosensors have made real-time collection and analysis of biosignals feasible, enabling new possibilities for healthcare monitoring and management. Biosignals are time-varying measures of human body processes that can provide important information about the functioning of the human body [16,17]. There are two main categories of biosignals: (i) physical signals, which are directly related to physical properties of the body, such as movement, force, and pressure (e.g., accelerometry, eye movements, blinks, respiration, facial expressions, voice); and (ii) physiological signals, which reflect the activity of the body’s organs and systems, such as the heart, the lungs, and the brain (e.g., electrocardiography, electroencephalography) [16]. As a result, less invasive devices are available (e.g., eye-trackers), which support child–computer interaction and allow an understanding of how children engage with digital technologies, offering novel insights into their visual and cognitive processing [18]. Eye tracking is a method for identifying diagnostic biomarkers with evidence in children with ASD [19,20,21], ADHD [22,23,24,25], ID [26,27,28], SLD [29,30], and CD [24,27]. The role of the autonomic nervous system has received attention for many types of neurophysiological features of NDs, such as ASD [31,32,33,34]. Many characteristics can be studied by taking heart rate measurements, a very common one being the heart rate variability (HRV) signal, since it has been found to be directly related to health [35], mental stress [36], cognitive functions [37], and psychosomatic state [38]. Autonomic dysregulation is a biomarker for ASD and ADHD. Specifically, assessment using HRV can distinguish sensory reactivity in ASD children from that found in typically developing children [31,39].
Furthermore, ADHD can be assessed using HRV to distinguish measurements regarding sustained attention and emotional and behavioral regulation deficits seen in ADHD, and it may help to define the pathophysiology of the disorder [40,41].
Machine learning (ML) is a subset of AI and a rapidly evolving field of study that aims to establish high-quality prediction models using search strategies, deep learning, and computational analysis to enable machines to learn to make autonomous decisions and improve their performance at specific tasks [42]. There are several uses for ML in health and healthcare [12,43,44,45,46,47,48]. The way we approach disease/disorder screening, diagnosis, and treatment may change as a result; for example, ML algorithms can examine patient data to spot trends and forecast the course of diseases/disorders. Supervised ML for classification is a type of machine learning where a model is trained to predict a categorical output variable. Metrics such as accuracy, error rate, precision, and recall can be used to evaluate a classification model’s performance [39,49]. A good classification model should have high accuracy, precision, and recall, but the optimal values may depend on the specific problem being addressed. For instance, early detection of type 2 diabetes and its complications has been identified from electronically collected data using ML and deep learning techniques [50,51]. Further, towards individualized treatment plans, ML algorithms can examine patient data, including genetic data and medical history improving treatment results [52,53]. Wearable technology and sensor data can be analyzed by ML algorithms to track patient health and spot early disease symptoms [54,55].
In relation to this, a soft computing approach of predictive fuzzy cognitive maps has been employed successfully to represent human reasoning and to derive conclusions and decisions in a way that is human-like for a Medical Decision Support System [48]. This system was intended for medical education, employing a scenario-based learning approach to safely explore extensive “what-if” scenarios in case studies and prepare for dealing with critical adversity [48]. Additionally, a sub-band morphological operation method has also been used successfully to detect cerebral aneurysms [56] and convolutional neural networks have been employed for the classification of leukocytes categories and leukemia prediction [57]. Furthermore, wearable electroencephalogram (EEG) recorders and Brain Computer Interface software have been proposed to aid in the assessment of alcohol-related brain waves [58]. More specifically, calculated spectral and statistical properties were used for classification, and Grammatical Evolution was applied. The suggested approach reported high accuracy results (89.95%), and thus, it was suited for direct drivers’ mental state evaluation for road safety and accident avoidance in a future in-vehicle smart system. Further, for the hemiplegia type classification among patients and healthy individuals, an automatic feature selection and building method based on grammatical evolution (GE) for radial basis function (RBF) networks was presented [59]. Using an accelerometer sensor dataset, this approach was put to the test using four different classification techniques: RBF network, multi-layer perceptron (MLP) trained using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) training algorithm, support vector machine (SVM), and GenClass, a GE-based parallel tool for data classification. The test results showed that the suggested solution had the best classification accuracy (90.07%) [59]. 
Various approaches of neural networks and deep neural networks have been used for classification of speech quality and voice disorders with very promising results [43,44,45,46,47,60,61].
New prospects are presented to assist clinical decision-making through the use of AI algorithms and automated instruments for measurement, decision-making, and classification in communication deficiencies and NDs in the research setting [11,12,13,14,15,62]. Traditional ML approaches use separate feature extraction procedures and classification methods, whereas with Deep Learning the two are combined in a single end-to-end procedure [42]. For ASD diagnosis in young children from 5 to 10 years old, an intelligent model has been presented based on resting-state functional magnetic resonance imaging data from the global Autism Brain Imaging Data Exchange I and II datasets, using convolutional neural networks (CNNs) [63]. The best results were obtained with the Adamax optimization technique. A review of ML research for MRI-based ASD identification concluded that the accuracy of studies with a significant number of participants is generally lower than that of studies with fewer participants, implying the need for further large-scale studies [64]. Regarding participants’ age, the accuracy of automated ASD diagnosis is shown to be higher for younger individuals [64]. Another thorough examination of deep learning approaches looks into the prognosis of neurological and neuropsychiatric disorders, reporting particular potential for diagnosing stroke, cerebral palsy, and migraines using various deep learning models [65].
A deep neural network model employed in the early screening of ASD, assessing children’s eye tracking data applicability, reported outcomes that strongly indicated efficiency in helping clinicians for a quick and reliable evaluation [15]. The outcomes of a review article on ML methods of feature selection and classification for ASD, used to analyze and investigate ASD, indicate an improvement in diagnostic accuracy, time, and quality without complexity [66]. In an analysis and detection of ASD after applying various ML techniques and handling missing values, the results strongly suggest that convolutional neural network- (CNN) based prediction models work better on their datasets with a significantly higher accuracy for ASD screening in children, adolescents, and adult data [67]. A CNN is employed for the classification of ADHD, trained with EEG spectrograms of 20 patients and 20 healthy participants. The model has an accuracy of 88% ± 1.12%, outperforming the Recurrent Neural Network and the Shallow Neural Network, with the advantage of avoiding the manual EEG spectral or channel features [68]. Furthermore, a CNN was used to identify ADHD from a dataset of children (ADHD: 50, Healthy: 57) and the network input data consisted of power spectrum density of EEGs. The accuracy obtained was 90.29% ± 0.58% [69].
Additionally, serious games which embed fine motor activities obtained from a mobile device and deep learning convolutional neural networks (CNN) are proposed as novel digital biomarkers for the classification of developmental disorders [12]. A pilot study of an integrated system that includes a serious game and a mobile app, and utilizes ML models that measure ADHD behaviors, suggests their significant potential in the domain of ADHD prediction [14]. Moreover, a gamified online test and ML using Random Forests for the predictive model were designed with results revealing that their model correctly detected over 80% of the participants with dyslexia, pointing out that dyslexia can be screened using an ML approach [62].
Consequently, more in-depth research is needed which utilizes automatic classification techniques to assist clinicians’ decision making. The aim of the current study is to examine automatic classification for the assistance and support of evaluation procedures in speech and language skills on biometric data gathered for children with a Disorder (NDs or no-NDs). Further, and in more detail, we also examine five types of NDs: ASD, ADHD, ID, SLD, and CD. Hence, we overall study six binary classification problems. The methods utilized to classify the data are a Radial Basis Function (RBF) neural network, a Deep Neural Network (DNN), and a Grammatical Evolution variant named GenClass [70].

2. Materials and Methods

2.1. The SmartSpeech Project

This study is part of the ongoing research project “Smart Computing Models, Sensors, and Early diagnostic speech and language deficiencies indicators in Child Communication” (also known as SmartSpeech), funded by the Region of Epirus in Greece and the European Regional Development Fund (ERDF). Specifically designed activities based on ND assessment procedures were used to create a serious game for the SmartSpeech project [71]. This serious game collects players’ responses. A dedicated server backend service processes gathered data and examines whether specified domains or skills can be used for the early clinical screening/diagnostic procedures toward automated indications.

2.2. The Sample

The sample of this study consists of the SmartSpeech biometric data. A total of 435 participants, mean age 8.8 ± 7.4 years and of mixed gender (M: 224 and F: 211), contributed to the sample of this study. The participants were divided into groups of NDs (96) and no-NDs (339). NDs were categorized in agreement with the DSM-5 (ASD: 17, ADHD: 18, ID: 8, SLD: 19, and CD: 42). Six ND participants presented a co-occurrence of more than one disorder.

2.3. Data Collection

To recruit the sample, many calls were made through health and educational sectors that support children with NDs and no-NDs. For each child participant, an adult was also involved to provide the required (parental) consent and the child’s developmental and communication history. The project’s nature, purpose, procedures, and approval by the University of Ioannina Research Ethics Committee, Greece (Reg. Num.: 18435/15 May 2020), which complies with the General Data Protection Regulation (GDPR), were thoroughly explained to parents during an informative meeting. Parents then signed the consent form.
Next, the child interacts with the serious game, under the clinician’s guidance. The game is designed to record the child’s responses while playing the game along with biometric measurements, i.e., eye tracking and heart rate. The responses are quantified as variables forming four categories, more specifically, hand movements on the touch screen, verbally answering questions or executing commands, eye tracking data while watching scenes, and spontaneous heart rate reactions in real time. Regarding the first category, the game automatically outputs the scores that correspond to the player’s performance. The remaining variables are analyzed as follows.
The digital game employs procedures to recognize words through the built-in speech-to-text (STT) capability. The participant is asked in several phases of the game to pronounce words such as characters’ names and objects of the game. The words that make up the correct answers are predetermined. The duration of each recording is 10 s, long enough to capture the participant’s response. For the word recognition, the speech-to-text program CMUSphinx [72] has been chosen. It is available for free, it can work on different operating systems (desktop, mobile), it is relatively fast, and it works offline. In this software, a corresponding recognition model in Greek has been created and trained [73]. There are a total of 40 words that are expected to be “heard” in the targeted language, that is, Greek. The program essentially detects which word from the above best matches what the child has said and consequently whether the child gave a correct answer or not.
During the game and in real time, the player wears a smartwatch which records the heart rate. The values are sent online to the database, where they are synchronized with the different phases of the game. Depending on the activity of interest, samples are collected from the specific time period and the corresponding statistical variables are calculated. The heart rate (HR) variables are the mean, standard deviation, and range for every distinct activity of the game. HRV is the variation in the time difference between successive heart beats, and several ways of calculating it have been defined [33]. The exact calculation requires the time differences of successive pulses, and the most reliable way to obtain them is with an electrocardiogram (ECG). The wearable device (smartwatch) used in the game allows only the heart rate to be measured, not the individual pulses. From the heart rate (HR) it is not possible to calculate HRV directly, especially when filtering/smoothing techniques used by the measuring devices alter the original information of the measurements. However, to obtain an estimate of the rate variability, since it is considered a more important feature than the rate itself, we calculated the heart rate standard deviation and range statistics as an alternative.
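As a sketch of this computation, the per-activity heart-rate statistics can be derived as follows; the function name and sample values are illustrative, not part of the SmartSpeech pipeline:

```python
import numpy as np

def hr_activity_stats(hr_samples):
    """Mean, standard deviation, and range of the heart-rate samples (bpm)
    recorded during one game activity; std and range serve as rough proxies
    for heart rate variability when only the rate is available."""
    hr = np.asarray(hr_samples, dtype=float)
    return {
        "mean": float(hr.mean()),
        "std": float(hr.std(ddof=1)),          # sample standard deviation
        "range": float(hr.max() - hr.min()),
    }

# Illustrative samples, not SmartSpeech data
stats = hr_activity_stats([92, 95, 91, 98, 94, 96])
```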
The Unity environment (Unity®, 2022) is used to implement the game. Eye tracking is provided through the SeeSo software [74]. This is achieved by detecting the position of the eyes through the camera of the mobile device (tablet), which allows calculating the target the observer is looking at on the screen. In certain activities in the game this functionality is enabled, and the result is a sequence of X, Y coordinates that correspond to the screen of the device at certain time periods. These coordinates give direct information about what the player is looking at (gaze points). Human eye movements made to obtain information when watching a scene are generally very fast, with a duration of a few milliseconds, so one can quickly process any visual stimulus by literally scanning the scene. Fast eye movements are called saccades [75], while gaze points that are relatively close both spatially and temporally constitute what we call a fixation [75], which reflects where and when we mentally process the scene by deriving information from it. The software gives information about the fixations, and we extract three basic variables that are common in eye movement research [76]: (a) the number of fixations (fixation count—FC), (b) the time elapsed until the first fixation (time to first fixation—TTFF), and (c) the total duration of fixations (time spent—TS).
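The extraction of these three variables from a gaze-point sequence can be sketched as follows; the dispersion and duration thresholds are illustrative assumptions, not those used by the SeeSo software:

```python
import numpy as np

def fixation_metrics(times, xs, ys, max_dist=40.0, min_dur=0.1):
    """Derive FC, TTFF, and TS from a gaze-point sequence.

    Consecutive gaze points within `max_dist` pixels of the group's first
    point are grouped; groups lasting at least `min_dur` seconds count as
    fixations. Thresholds are invented for illustration only.
    """
    fixations = []  # list of (start_time, end_time)
    start = 0
    for i in range(1, len(times) + 1):
        moved = (i == len(times)) or np.hypot(xs[i] - xs[start], ys[i] - ys[start]) > max_dist
        if moved:
            if times[i - 1] - times[start] >= min_dur:
                fixations.append((times[start], times[i - 1]))
            start = i
    fc = len(fixations)                                       # fixation count
    ttff = fixations[0][0] - times[0] if fixations else None  # time to first fixation
    ts = sum(e - s for s, e in fixations)                     # total time spent fixating
    return fc, ttff, ts
```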

2.4. Data Formulation

Three datasets were formed, corresponding to the categories of (i) game scores, N: 435 (NDs: 96, no-NDs: 339), (ii) heart rate statistics, N: 321 (NDs: 88, no-NDs: 233), and (iii) eye-tracker metrics, N: 182 (NDs: 42, no-NDs: 140). Each set is a representation of the set of classification input variables. Invalid or missing data were filtered out, which reduced the number of cases, and in particular the pathological cases, in the datasets. Table 1 summarizes the input variables used in each dataset; game-scores dataset: 30 variables, heart-rate dataset: 15, eye tracking dataset: 16 (as depicted visually in Section 2.6). The TTFF variables in the eye tracking dataset had to be removed during the filtering process. Figure 1a–c provides a visualization summarizing descriptive statistics for this study’s variables (means, standard errors).
The classes that were used are defined by the target binary variables. The Disorder variable indicates whether the child has ND(s) or not, i.e., no-ND. The remaining variables, namely ASD, ADHD, ID, SLD, and CD, indicate the existence of the disorder as previously described according to the DSM-5.

2.5. Classification Methods

The methods used in this study to classify the data are RBF, DNN, and a Grammatical Evolution variant named GenClass, which are also depicted visually in Section 2.6.
The RBF is a kind of artificial neural network which has been widely used for a range of tasks, including classification, regression, and clustering with effectiveness in problems with high-dimensional input spaces and complex patterns [77,78,79]. The RBF network has several advantages over other neural network architectures, including its ability to handle high-dimensional data, fast training and testing times, and the ability to approximate any continuous function with arbitrary precision. The RBF network has three layers, according to [79] (input, hidden, and output). An input comes from a variable in Table 1. The hidden layer uses radial basis functions as activation functions to transform the input data into a new representation. This representation is then used for further processing in the output layer. The output of the network is computed as a linear combination of the transformed inputs. Thus, the output is a binary decision in the form of 0 or 1 (TRUE or FALSE), representing the two outcomes of each of the six classes of the study. Figure 2 presents the RBF Neural Network flowchart.
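A minimal sketch of such a three-layer network follows; the even-spread center selection and least-squares output training are simplifying assumptions here, and the study's own 10-neuron RBF implementation may train its parameters differently:

```python
import numpy as np

class RBFClassifier:
    """Minimal RBF network sketch: Gaussian hidden units, linear output layer."""

    def __init__(self, n_centers=10, gamma=0.5):
        self.n_centers, self.gamma = n_centers, gamma

    def _hidden(self, X):
        # Gaussian radial basis activations around each center
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        # Centers spread evenly over the training set (a crude stand-in for k-means)
        idx = np.linspace(0, len(X) - 1, self.n_centers).astype(int)
        self.centers = X[idx]
        # Output weights: least-squares fit of hidden activations to the 0/1 labels
        self.w, *_ = np.linalg.lstsq(self._hidden(X), y.astype(float), rcond=None)
        return self

    def predict(self, X):
        # Threshold the linear combination into a binary 0/1 decision
        return (self._hidden(X) @ self.w > 0.5).astype(int)

# Toy usage: two well-separated clusters standing in for the two classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
model = RBFClassifier(n_centers=4).fit(X, y)
```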
DNNs [80] consist of many artificial neural networks formed as layers, each composed of a specific number of neurons. The received input is transformed nonlinearly at each layer, and the outputs are then passed on to the layer above it until the network’s output. Their architecture (Figure 3) allows them to learn highly complex representations of input data and link them to the desired output, rendering them a suitable and effective tool for a wide range of applications, including but not limited to image recognition, speech recognition, natural language processing, and classification.
The core element of a DNN is the artificial neuron [81]. Specifically, a neuron applies a nonlinear function to the weighted sum of those inputs and outputs. In the case of a fully connected network, each neuron of each layer is connected with every neuron of the next layer, and the weights of those connections are learned during the training. The learning procedure, or, differently, the training phase of a DNN, usually involves the adjustment of the weights of all neuron connections to minimize an error between the network’s predictions and the actual output values. Usually, this is conducted using a method called backpropagation [82], in which the gradient of the error with respect to each weight in the network is calculated to reduce the error. This minimization can also be conducted by using more sophisticated optimization techniques but with a cost. One of the challenges in training DNNs is to avoid overfitting. Overfitting means that the network becomes so specialized to the training data that it cannot perform well on new unseen data. Many techniques have been proposed to mitigate this problem, such as dropout and weight decay [83], among others.
A genetic programming technique called grammatical evolution uses a grammar-based strategy to evolve computer programs [84]. It is an evolutionary process that has been used in various cases such as music composition [85], economics [86], symbolic regression [87], and caching algorithms [88]. In the genetic algorithm, the chromosomes serve as a vector of integer values to represent the production rules of a Backus–Naur Form (BNF) grammar [89].
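The codon-to-production mapping can be sketched with a toy grammar; the grammar below is invented for illustration and is not the actual GenClass grammar:

```python
# Toy BNF grammar for boolean classification rules. Each codon selects a
# production rule via modulo, as in grammatical evolution.
GRAMMAR = {
    "<expr>": [["<expr>", " and ", "<expr>"], ["<cond>"]],
    "<cond>": [["x[", "<idx>", "] > 0.5"], ["x[", "<idx>", "] < 0.5"]],
    "<idx>": [["0"], ["1"], ["2"]],
}

def ge_map(chromosome, symbol="<expr>", max_wraps=2):
    """Expand the start symbol into a program string using the codon sequence."""
    out, stack, i = [], [symbol], 0
    codons = chromosome * (max_wraps + 1)  # wrapping: reuse the chromosome
    while stack:
        s = stack.pop(0)
        if s not in GRAMMAR:
            out.append(s)          # terminal symbol: emit as-is
            continue
        if i >= len(codons):
            raise ValueError("mapping failed: ran out of codons")
        rules = GRAMMAR[s]
        choice = rules[codons[i] % len(rules)]  # codon picks a production rule
        i += 1
        stack = list(choice) + stack            # replace nonterminal with its expansion
    return "".join(out)

rule = ge_map([1, 0, 2, 3])  # yields the condition string "x[2] > 0.5"
```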
The algorithm proposed by Tsoulos, named GenClass [70], is a classification algorithm based on grammatical evolution. The start symbol of the grammar serves as the starting point for the production procedure, which gradually produces the program string by substituting nonterminal symbols with the right-hand side of the chosen production rule. Figure 4 shows the GenClass flowchart.
The main advantage is that it does not require any additional information, such as the derivatives of the objective problem, which are costly in time and memory. Specifically, it generates a series of classification rules in a C-like language that can be easily programmed and used in real C programs without many modifications. The generated rules are constructed with the use of if-else conditions, and the variables represent the corresponding features. The source code of the method can be found at https://github.com/itsoulos/GenClass (accessed on 30 December 2022).
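A generated rule set might therefore look like the following hypothetical example, shown here in Python rather than the C-like output language; the feature indices and thresholds are invented purely for illustration:

```python
def genclass_style_rule(x):
    """Hypothetical evolved if-else rule over a feature vector x.

    GenClass emits rules of this shape; nothing here is an actual
    GenClass output from the study.
    """
    if x[3] > 0.42:
        if x[7] < 1.8:
            return 1   # classify as the positive class (e.g., Disorder)
        return 0
    return 0
```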
The application details of the utilized classifiers are as follows: RBF with 10 processing neurons [79], the DNN approaches described below, and GenClass [70] were used to identify the categories in the three datasets. The number of chromosomes used in GenClass was 500, and a maximum of 2000 generations was allowed. The experimental settings parameters are shown in Table 2.
The DNN approaches were implemented using the Python language and the Keras library. Three different approaches with different numbers of fully connected layers were considered for the comparisons, named according to the adopted layers as DNN-3, DNN-4, and DNN-5. The architecture of DNN-3 consists of three fully connected layers with 64, 32, and 16 neurons, respectively, and a final output layer with three neurons. The hidden neurons use the sigmoid activation function [90], while the final output neurons use the softmax activation. The model is compiled with the Nadam optimizer and the categorical cross-entropy loss function and trained over 1000 epochs with a batch size of 8. Accordingly, the extra layer added for DNN-4 has 128 neurons, and for DNN-5, 256.
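A forward pass through the DNN-3 topology can be sketched in plain NumPy; the study's actual model was built and trained in Keras, and the weights below are random and purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=-1, keepdims=True)

def dnn3_forward(x, weights):
    """Forward pass through the DNN-3 topology (64-32-16 hidden neurons with
    sigmoid activations, softmax output layer)."""
    h = x
    for W, b in weights[:-1]:
        h = sigmoid(h @ W + b)   # fully connected hidden layer
    W, b = weights[-1]
    return softmax(h @ W + b)    # output layer: class probabilities

# Randomly initialized weights for a 16-feature input (e.g., the eye tracking set)
rng = np.random.default_rng(0)
sizes = [16, 64, 32, 16, 3]
weights = [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]
probs = dnn3_forward(rng.normal(size=(5, 16)), weights)  # 5 samples, 3 outputs
```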

2.6. Performance Estimation

The 10-fold cross-validation technique was employed as the evaluation method to fairly assess the predictive ability of each model (Figure 5). We divided each dataset into ten partitions; nine of the partitions were used for training, and the final partition was used for testing. For each instance, we performed thirty independent experiments per algorithm and calculated the average classification errors. Moreover, we used a different seed number for every experiment, generated with the C programming language’s drand48() random number generator. For the experiments, we used freely downloadable software from https://github.com/itsoulos/IntervalGenetic (accessed on 18 February 2023).
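The protocol can be sketched as a single 10-fold run; the study additionally averages the result over thirty seeded repetitions:

```python
import numpy as np

def ten_fold_error(train_and_eval, X, y, seed=0):
    """Average test error over one 10-fold split.

    `train_and_eval(X_train, y_train, X_test, y_test)` must return the
    error rate on the test fold; the callback name and this helper are
    illustrative, not part of the study's software.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), 10)
    errors = []
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        errors.append(train_and_eval(X[train], y[train], X[test], y[test]))
    return float(np.mean(errors))
```

Any of the classifiers above can be plugged in as the `train_and_eval` callback.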
For classification evaluation, a confusion matrix is used to calculate the error rate, precision, recall, and accuracy, presented below in Equations (1)–(4), respectively [39,49]:
Error rate = (FP + FN) / (TP + TN + FP + FN)  (1)
Precision = TP / (TP + FP)  (2)
Recall = TP / (TP + FN)  (3)
Accuracy = (TP + TN) / (TP + TN + FP + FN)  (4)
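These four quantities can be computed directly from the confusion-matrix counts, for example:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Error rate, precision, recall, and accuracy from confusion-matrix counts."""
    total = tp + tn + fp + fn
    return {
        "error_rate": (fp + fn) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / total,
    }

# Illustrative counts, not results from the study
m = confusion_metrics(tp=40, tn=45, fp=5, fn=10)
```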
Overall, to overview the methods followed in this study, Figure 6 visually demonstrates the study’s workflow.

3. Results

The experimental results are shown in Table 3, Table 4, Table 5, Table 6 and Table 7 as average error rate percentages (%) for the eye tracking, heart rate, and game-based datasets.
For the eye tracking dataset, the best overall results are obtained with the GenClass method, with a total average error rate of 12.03%. More specifically, this method is found to be more suitable for the Disorder class, which denotes whether a child has a disorder or not, and particularly for the disorders ID, SLD, and CD, with average error rates of 13.07%, 11.12%, and 11.30%, respectively. DNNs, on the other hand, prove more accurate for distinguishing the NDs of ASD and ADHD, with average error rates of 13.67% and 12.56%, respectively. The number of layers seems to have a small impact on the outcome, with the four-layer DNN achieving the highest performance.
For the heart rate dataset, the RBF classifier is superior to the others for all the classes, with an overall average error rate of 18.73%. This method proves more appropriate when the biometric data consist of heart rate measurements, and the difference in performance compared to the other classifiers is remarkable.
For the game scores dataset, the GenClass classifier is found to be slightly better in detecting all the target disorder variables with an average error rate of 22.08%.
Furthermore, Table 6 compares the precision and recall for the class Disorder across the datasets.
Finally, a comparison in terms of higher classification accuracies is shown in Table 7 for each class and SmartSpeech dataset.

4. Discussion

This study aimed to utilize ML to examine the development of innovative automated solutions for the early identification of NDs in children with communication deficiencies, building on technology-based data-gathering techniques such as motion tracking, heart rate metrics, and eye tracking from the new SmartSpeech dataset developed in Greek. Ten-fold cross-validation was chosen for evaluating model efficacy since it varies the testing and training data, decreases bias, and delivers consistent findings across runs, parameters, and models. The results of this research give a direct comparison of the different machine learning methods employed on this dataset, namely RBF, DNN, and GenClass.
The reported results of this study (Table 3, Table 4 and Table 5) display the comparison of all the methods employing the performance metric of the error rate (%). Thus, a smaller value implies better performance. Precision and recall metrics are also displayed for the class Disorder (Table 6). Finally, the highest performance classification methods in accuracy metrics are reported for each class and dataset (Table 7). Particularly, Table 7 clearly illustrates the tendency of the specific methods to dominate in each dataset and class; more specifically:
  • For the eye tracking measurements, the GenClass and the DNN-4 have proven to be the best choices, with an accuracy of at least 86.33% for the ASD population. GenClass is superior for the classes Disorder, ID, SLD, and CD, whereas DNN-4 is better for ASD and ADHD. For the aggregate class Disorder, GenClass has the highest observed accuracy of 92.83%. This finding may be utilized for automated screening to discriminate whether an individual has NDs.
  • The RBF method is the most accurate in the heart rate dataset, with an accuracy of at least 80.05%. It is notable that it achieves the best performance for all the classes under study.
  • As for the game-based dataset, the GenClass method has the highest accuracy for the classes Disorder, ASD, ID, and CD. The classes ADHD and SLD are better identified using the RBF algorithm.
In most other cases, GenClass and DNN-4 outperform the rest. It is worth noting that GenClass is expected to have longer execution times since it is based on genetic algorithms. Nevertheless, in this study we employed the parallelization feature of the GenClass software [91] to speed up the process.
Similar research attempts to identify NDs have been reported in the literature. For example, one such study evaluated the ability of drag-and-drop data to classify children with developmental disabilities [12]. Data were collected from 223 children with typical development and 147 children with developmental disabilities via a mobile application (DoBrain). A deep learning CNN algorithm was developed that achieved an area under the curve (AUC) of 0.817. Furthermore, in line with our study, a binary classifier has also been trained using paralinguistic features extracted from typically developing children and children suffering from Speech Sound Disorders (SSD), reporting 87% accuracy [60]. In the same direction as our study, HRV was also used as a biomarker to distinguish autistic and typical children by applying several machine learning algorithms, namely Logistic Regression, Linear Discriminant Analysis, and Cubic Support Vector Machine [39]. Logistic Regression proved to be the best classifier for a color stimulus test in that study, whereas Linear Discriminant Analysis was better in the baseline test. Moreover, eye tracking data, which our research also focused on, can be considered an important biomarker for detecting ASD [15]. In seeking the best method to predict autism from eye tracking scan path images, a DNN classifier was compared to traditional machine learning approaches such as Boosted Decision Tree, Deep Support Vector Machine, and Decision Jungle. The DNN model outperformed the other machine learning techniques with an AUC of 97%, sensitivity of 93.28%, specificity of 91.38%, negative predictive value (NPV) of 94.46%, and positive predictive value (PPV) of 90.06% [15].
Moreover, RBF also produced reliable results in a study attempting to identify children with ID using two different feature extraction methods on speech samples, namely Linear Predictive Coding based cepstral parameters and Mel-frequency cepstral coefficients, along with four classifiers: k-nearest neighbor, support vector machine, linear discriminant analysis, and the RBF neural network [92]. The RBF classification model was the best technique for classifying disordered speech, giving higher accuracy (>90%) than the rest of the classifiers.
Furthermore, this study's sample size is comparable to other research [12,15,93] due to the high costs of collecting data involving human subjects and the ongoing development of tasks and experimental techniques that can discriminate between various conditions to the greatest extent possible. As in prior studies [93], collecting a single multi-dimensional data sample may take 1.5 to 4 h of researcher time (such as setting up, testing, and packing down) and 2 to 6 h of participant time (which encompasses travel time). Furthermore, reaching out to people and encouraging participation is complex, making it difficult to recruit many participants with NDs. As a result, the resources available for early-stage studies do not allow for gathering samples from thousands of people. Although this study's sample size is not very large, its results form one of the first attempts at employing ML on data from digital gameplay and sensors to automatically assist the clinician's decision, reducing the inherent uncertainty of clinical diagnosis regarding speech and language activities and their manifestations. This study contributes to the automatic classification of NDs based on new datasets initiated from responses during software interactions, primarily designed and implemented for the Greek language. Future research may focus on enriching the dataset and considering recent advances in classification to enhance accuracy.

5. Conclusions

This study examines a number of ML approaches to explore how to automatically identify children with various neurodevelopmental disorders. The ML techniques utilize modern optimization algorithms, namely the Radial Basis Function (RBF) Neural Network, Deep Learning Neural Networks (DNN), and a variant of the Grammatical Evolution method called GenClass. These methods are used for disorder classification on our dataset, derived from SmartSpeech, an innovative system with a digital mobile serious game designed to assist clinicians in speech and language therapy in Greek. The dataset is split into three parts: one for the game-based data and two for the measured biometric data, that is, eye tracking and heart rate. The results of this study show that the best performing classifiers were GenClass and DNN-4 for the eye tracking dataset, the RBF method for the heart rate dataset, and GenClass and RBF for the game-based dataset.
The outcomes of this study motivate further research. Evidently, modern technologies, and especially ML methodologies, give clinicians an opportunity to improve their assessments in terms of both speed and accuracy.

Author Contributions

Conceptualization, E.I.T. and I.G.T.; methodology, E.I.T. and I.G.T.; software, I.G.T.; validation, J.P., K.P. and V.A.T.; formal analysis, E.I.T., G.T., V.A.T. and I.G.T., investigation, V.A.T. and K.P.; resources, G.T. and K.P.; data curation, G.T., K.P. and J.P.; writing—original draft preparation, E.I.T., G.T., V.A.T. and I.G.T.; writing—review and editing, E.I.T. and I.G.T.; visualization, E.I.T., G.T. and I.G.T.; supervision, E.I.T. and I.G.T.; project administration, E.I.T.; funding acquisition, E.I.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project titled “Smart Computing Models, Sensors, and Early Diagnostic Speech and Language Deficiencies Indicators in Child Communication”, with code HP1AB-28185 (MIS: 5033088), supported by the European Regional Development Fund (ERDF).

Data Availability Statement

The participants of this study did not give written consent for their data to be shared publicly; therefore, due to privacy restrictions and the sensitive nature of this research, data sharing is not applicable to this article.

Acknowledgments

We wish to thank all the participants for their valuable contribution in this study.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5, 5th ed.; American Psychiatric Association: Washington, DC, USA, 2013; ISBN 978-0-89042-554-1. [Google Scholar]
  2. American Psychiatric Association. DSM-5 Intellectual Disability Fact Sheet; American Psychiatric Association: Washington, DC, USA, 2013; Volume 2, Available online: https://www.psychiatry.org/File%20Library/Psychiatrists/Practice/DSM/APA_DSM-5-Intellectual-Disability.pdf (accessed on 22 May 2023).
  3. Lee, K.; Cascella, M.; Marwaha, R. Intellectual Disability. [Updated 2022 Sep 21]. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2023. Available online: https://www.ncbi.nlm.nih.gov/books/NBK547654/ (accessed on 9 April 2023).
  4. Thapar, A.; Cooper, M.; Rutter, M. Neurodevelopmental Disorders. Lancet Psychiatry 2017, 4, 339–346. [Google Scholar] [CrossRef] [PubMed]
  5. Harris, J.C. New Classification for Neurodevelopmental Disorders in DSM-5. Curr. Opin. Psychiatry 2014, 27, 95–97. [Google Scholar] [CrossRef] [PubMed]
  6. Fletcher, J.M.; Miciak, J. The Identification of Specific Learning Disabilities: A Summary of Research on Best Practices; Texas Education Agency: Austin, TX, USA, 2019. [Google Scholar]
  7. Hyman, S.L.; Levy, S.E.; Myers, S.M.; Council on Children with Disabilities, Section on Developmental and Behavioral Pediatrics; Kuo, D.Z.; Apkon, S.; Davidson, L.F.; Ellerbeck, K.A.; Foster, J.E.A.; Noritz, G.H.; et al. Identification, Evaluation, and Management of Children with Autism Spectrum Disorder. Pediatrics 2020, 145, e20193447. [Google Scholar] [CrossRef] [PubMed]
  8. Bishop, D.V.M.; Snowling, M.J.; Thompson, P.A.; Greenhalgh, T.; CATALISE consortium. CATALISE: A Multinational and Multidisciplinary Delphi Consensus Study. Identifying Language Impairments in Children. PLoS ONE 2016, 11, e0158753. [Google Scholar] [CrossRef] [PubMed]
  9. Hobson, H.; Kalsi, M.; Cotton, L.; Forster, M.; Toseeb, U. Supporting the Mental Health of Children with Speech, Language and Communication Needs: The Views and Experiences of Parents. Autism Dev. Lang. Impair. 2022, 7, 239694152211011. [Google Scholar] [CrossRef] [PubMed]
  10. Rice, C.E.; Carpenter, L.A.; Morrier, M.J.; Lord, C.; DiRienzo, M.; Boan, A.; Skowyra, C.; Fusco, A.; Baio, J.; Esler, A.; et al. Defining in Detail and Evaluating Reliability of DSM-5 Criteria for Autism Spectrum Disorder (ASD) among Children. J. Autism Dev. Disord. 2022, 52, 5308–5320. [Google Scholar] [CrossRef]
  11. McPartland, J.C. Considerations in Biomarker Development for Neurodevelopmental Disorders. Curr. Opin. Neurol. 2016, 29, 118–122. [Google Scholar] [CrossRef]
  12. Kim, H.H.; An, J.I.; Park, Y.R. A Prediction Model for Detecting Developmental Disabilities in Preschool-Age Children Through Digital Biomarker-Driven Deep Learning in Serious Games: Development Study. JMIR Serious Games 2021, 9, e23130. [Google Scholar] [CrossRef]
  13. Defresne, P.; Mottron, L. Clinical Situations in Which the Diagnosis of Autism Is Debatable: An Analysis and Recommendations. Can. J. Psychiatry 2022, 67, 331–335. [Google Scholar] [CrossRef]
  14. Pandria, N.; Petronikolou, V.; Lazaridis, A.; Karapiperis, C.; Kouloumpris, E.; Spachos, D.; Fachantidis, A.; Vasiliou, D.; Vlahavas, I.; Bamidis, P. Information System for Symptom Diagnosis and Improvement of Attention Deficit Hyperactivity Disorder: Protocol for a Nonrandomized Controlled Pilot Study. JMIR Res. Protoc. 2022, 11, e40189. [Google Scholar] [CrossRef]
  15. Kanhirakadavath, M.R.; Chandran, M.S.M. Investigation of Eye-Tracking Scan Path as a Biomarker for Autism Screening Using Machine Learning Algorithms. Diagnostics 2022, 12, 518. [Google Scholar] [CrossRef] [PubMed]
  16. Giannakakis, G.; Grigoriadis, D.; Giannakaki, K.; Simantiraki, O.; Roniotis, A.; Tsiknakis, M. Review on Psychological Stress Detection Using Biosignals. IEEE Trans. Affect. Comput. 2022, 13, 440–460. [Google Scholar] [CrossRef]
  17. Kaniusas, E. Fundamentals of Biosignals. In Biomedical Signals and Sensors I; Biological and Medical Physics, Biomedical Engineering; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–26. ISBN 978-3-642-24842-9. [Google Scholar]
  18. Sim, G.; Bond, R. Eye Tracking in Child Computer Interaction: Challenges and Opportunities. Int. J. Child-Comput. Interact. 2021, 30, 100345. [Google Scholar] [CrossRef]
  19. Bacon, E.C.; Moore, A.; Lee, Q.; Barnes, C.C.; Courchesne, E.; Pierce, K. Identifying Prognostic Markers in Autism Spectrum Disorder Using Eye Tracking. Autism 2019, 24, 658–669. [Google Scholar] [CrossRef]
  20. Kou, J.; Le, J.; Fu, M.; Lan, C.; Chen, Z.; Li, Q.; Zhao, W.; Xu, L.; Becker, B.; Kendrick, K.M. Comparison of Three Different Eye-tracking Tasks for Distinguishing Autistic from Typically Developing Children and Autistic Symptom Severity. Autism Res. 2019, 12, 1529–1540. [Google Scholar] [CrossRef] [PubMed]
  21. Tang, W.Y.F. Application of Eye Tracker to Detect Visual Processing of Children with Autism Spectrum Disorder. Curr. Dev. Disord. Rep. 2022, 9, 77–88. [Google Scholar] [CrossRef]
  22. Temeltürk, R.D.; Aydın, Ö.; Güllü, B.Ü.; Kılıç, B.G. Dynamic Eye-Tracking Evaluation of Responding Joint Attention Abilities and Face Scanning Patterns in Children with Attention Deficit Hyperactivity Disorder. Dev. Psychopathol. 2023, 1–12. [Google Scholar] [CrossRef]
  23. Miller, M.; Arnett, A.B.; Shephard, E.; Charman, T.; Gustafsson, H.C.; Joseph, H.M.; Karalunas, S.; Nigg, J.T.; Polanczyk, G.V.; Sullivan, E.L.; et al. Delineating Early Developmental Pathways to ADHD: Setting an International Research Agenda. JCPP Adv. 2023, e12144. [Google Scholar] [CrossRef]
  24. Yang, J.; Chen, Z.; Qiu, G.; Li, X.; Li, C.; Yang, K.; Chen, Z.; Gao, L.; Lu, S. Exploring the Relationship between Children’s Facial Emotion Processing Characteristics and Speech Communication Ability Using Deep Learning on Eye Tracking and Speech Performance Measures. Comput. Speech Lang. 2022, 76, 101389. [Google Scholar] [CrossRef]
  25. Merzon, L.; Pettersson, K.; Aronen, E.T.; Huhdanpää, H.; Seesjärvi, E.; Henriksson, L.; MacInnes, W.J.; Mannerkoski, M.; Macaluso, E.; Salmi, J. Eye Movement Behavior in a Real-World Virtual Reality Task Reveals ADHD in Children. Sci. Rep. 2022, 12, 20308. [Google Scholar] [CrossRef]
  26. Xu, H.; Xuan, X.; Zhang, L.; Zhang, W.; Zhu, M.; Zhao, X. New Approach to Intelligence Screening for Children with Global Development Delay Using Eye-Tracking Technology: A Pilot Study. Front. Neurol. 2021, 12, 723526. [Google Scholar] [CrossRef]
  27. Loth, E.; Evans, D.W. Converting Tests of Fundamental Social, Cognitive, and Affective Processes into Clinically Useful Bio-behavioral Markers for Neurodevelopmental Conditions. WIREs Cogn. Sci. 2019, 10, e1499. [Google Scholar] [CrossRef]
  28. Predescu, E.; Sipos, R.; Costescu, C.A.; Ciocan, A.; Rus, D.I. Executive Functions and Emotion Regulation in Attention-Deficit/Hyperactivity Disorder and Borderline Intellectual Disability. JCM 2020, 9, 986. [Google Scholar] [CrossRef] [PubMed]
  29. Devi, A.; Kavya, G.; Santhanalakshmi, K.; Senthilnayaki, B. ICT Assesment Techniques and Tools for Screening Specific Learning Disabilities. In Proceedings of the 2022 5th International Conference on Advances in Science and Technology (ICAST), Mumbai, India, 2 December 2022; pp. 174–179. [Google Scholar]
  30. Blanchet, M.; Assaiante, C. Specific Learning Disorder in Children and Adolescents, a Scoping Review on Motor Impairments and Their Potential Impacts. Children 2022, 9, 892. [Google Scholar] [CrossRef]
  31. Cheng, Y.-C.; Huang, Y.-C.; Huang, W.-L. Heart Rate Variability in Individuals with Autism Spectrum Disorders: A Meta-Analysis. Neurosci. Biobehav. Rev. 2020, 118, 463–471. [Google Scholar] [CrossRef] [PubMed]
  32. Cai, R.Y.; Richdale, A.L.; Dissanayake, C.; Uljarević, M. Resting Heart Rate Variability, Emotion Regulation, Psychological Wellbeing and Autism Symptomatology in Adults with and without Autism. Int. J. Psychophysiol. 2019, 137, 54–62. [Google Scholar] [CrossRef] [PubMed]
  33. Shaffer, F.; Ginsberg, J.P. An Overview of Heart Rate Variability Metrics and Norms. Front. Public Health 2017, 5, 258. [Google Scholar] [CrossRef]
  34. Esler, A.; Hall-Lande, J.; Hewitt, A. Phenotypic Characteristics of Autism Spectrum Disorder in a Diverse Sample of Somali and Other Children. J. Autism Dev. Disord. 2017, 47, 3150–3165. [Google Scholar] [CrossRef]
  35. Draghici, A.E.; Taylor, J.A. The Physiological Basis and Measurement of Heart Rate Variability in Humans. J. Physiol. Anthropol. 2016, 35, 22. [Google Scholar] [CrossRef]
  36. Chalabianloo, N.; Can, Y.S.; Umair, M.; Sas, C.; Ersoy, C. Application Level Performance Evaluation of Wearable Devices for Stress Classification with Explainable AI. Pervasive Mob. Comput. 2022, 87, 101703. [Google Scholar] [CrossRef]
  37. Luque-Casado, A.; Perales, J.; Vélez, D.; Sanabria, D. Heart Rate Variability and Cognitive Processing: The Autonomic Response to Task Demands. Biol. Psychol. 2015, 113, 83–90. [Google Scholar] [CrossRef] [PubMed]
  38. Goessl, V.C.; Curtiss, J.E.; Hofmann, S.G. The Effect of Heart Rate Variability Biofeedback Training on Stress and Anxiety: A Meta-Analysis. Psychol. Med. 2017, 47, 2578–2586. [Google Scholar] [CrossRef] [PubMed]
  39. Aimie-Salleh, N.; Mtawea, N.E.; Kh’ng, X.Y.; Liaw, C.Y.; Cheng, X.G.; Bah, A.N.; Lim, K.L.; Al Haddad, M.A.Y.; Azaman, A.; Mohamad, M.R.; et al. Assessment of Heart Rate Variability Response in Children with Autism Spectrum Disorder Using Machine Learning. IJIE 2022, 14, 33–38. [Google Scholar]
  40. Griffiths, K.R.; Quintana, D.S.; Hermens, D.F.; Spooner, C.; Tsang, T.W.; Clarke, S.; Kohn, M.R. Sustained Attention and Heart Rate Variability in Children and Adolescents with ADHD. Biol. Psychol. 2017, 124, 11–20. [Google Scholar] [CrossRef]
  41. Loh, H.W.; Ooi, C.P.; Barua, P.D.; Palmer, E.E.; Molinari, F.; Acharya, U.R. Automated Detection of ADHD: Current Trends and Future Perspective. Comput. Biol. Med. 2022, 146, 105525. [Google Scholar] [CrossRef]
  42. Alam, S.; Raja, P.; Gulzar, Y. Investigation of Machine Learning Methods for Early Prediction of Neurodevelopmental Disorders in Children. Wirel. Commun. Mob. Comput. 2022, 2022, 5766386. [Google Scholar] [CrossRef]
  43. Wang, X.; Yang, S.; Tang, M.; Yin, H.; Huang, H.; He, L. HypernasalityNet: Deep Recurrent Neural Network for Automatic Hypernasality Detection. Int. J. Med. Inform. 2019, 129, 1–12. [Google Scholar] [CrossRef]
  44. Muppidi, A.; Radfar, M. Speech Emotion Recognition Using Quaternion Convolutional Neural Networks. In Proceedings of the ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 6309–6313. [Google Scholar]
  45. Kadiri, S.R.; Javanmardi, F.; Alku, P. Convolutional Neural Networks for Classification of Voice Qualities from Speech and Neck Surface Accelerometer Signals. In Proceedings of the Interspeech 2022 ISCA, Incheon, Republic of Korea, 18–22 September 2022; pp. 5253–5257. [Google Scholar]
  46. Georgoulas, G.; Georgopoulos, V.C.; Stylios, C.D. Speech Sound Classification and Detection of Articulation Disorders with Support Vector Machines and Wavelets. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 2199–2202. [Google Scholar]
  47. Georgopoulos, V.C. Advanced Time-Frequency Analysis and Machine Learning for Pathological Voice Detection. In Proceedings of the 2020 12th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Porto, Portugal, 20–22 July 2020; pp. 1–5. [Google Scholar]
  48. Georgopoulos, V.C.; Chouliara, S.; Stylios, C.D. Fuzzy Cognitive Map Scenario-Based Medical Decision Support Systems for Education. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 1813–1816. [Google Scholar]
  49. Vakadkar, K.; Purkayastha, D.; Krishnan, D. Detection of Autism Spectrum Disorder in Children Using Machine Learning Techniques. SN Comput. Sci. 2021, 2, 386. [Google Scholar] [CrossRef]
  50. Kopitar, L.; Kocbek, P.; Cilar, L.; Sheikh, A.; Stiglic, G. Early Detection of Type 2 Diabetes Mellitus Using Machine Learning-Based Prediction Models. Sci. Rep. 2020, 10, 11981. [Google Scholar] [CrossRef]
  51. Fregoso-Aparicio, L.; Noguez, J.; Montesinos, L.; García-García, J.A. Machine Learning and Deep Learning Predictive Models for Type 2 Diabetes: A Systematic Review. Diabetol. Metab. Syndr. 2021, 13, 148. [Google Scholar] [CrossRef]
  52. Johnson, K.B.; Wei, W.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef] [PubMed]
  53. Ahmed, Z.; Mohamed, K.; Zeeshan, S.; Dong, X. Artificial Intelligence with Multi-Functional Machine Learning Platform Development for Better Healthcare and Precision Medicine. Database 2020, 2020, baaa010. [Google Scholar] [CrossRef] [PubMed]
  54. Seshadri, D.R.; Davies, E.V.; Harlow, E.R.; Hsu, J.J.; Knighton, S.C.; Walker, T.A.; Voos, J.E.; Drummond, C.K. Wearable Sensors for COVID-19: A Call to Action to Harness Our Digital Infrastructure for Remote Patient Monitoring and Virtual Assessments. Front. Digit. Health 2020, 2, 8. [Google Scholar] [CrossRef] [PubMed]
  55. Xu, H.; Li, P.; Yang, Z.; Liu, X.; Wang, Z.; Yan, W.; He, M.; Chu, W.; She, Y.; Li, Y.; et al. Construction and Application of a Medical-Grade Wireless Monitoring System for Physiological Signals at General Wards. J. Med. Syst. 2020, 44, 182. [Google Scholar] [CrossRef]
  56. Khan, H.; Sharif, M.; Bibi, N.; Muhammad, N. A Novel Algorithm for the Detection of Cerebral Aneurysm Using Sub-Band Morphological Operation. Eur. Phys. J. Plus 2019, 134, 34. [Google Scholar] [CrossRef]
  57. Naz, I.; Muhammad, N.; Yasmin, M.; Sharif, M.; Shah, J.H.; Fernandes, S.L. Robust Discrimination of Leukocytes Protuberant Types for Early Diagnosis of Leukemia. J. Mech. Med. Biol. 2019, 19, 1950055. [Google Scholar] [CrossRef]
  58. Tzimourta, K.; Tsoulos, I.; Bilero, T.; Tzallas, A.; Tsipouras, M.; Giannakeas, N. Direct Assessment of Alcohol Consumption in Mental State Using Brain Computer Interfaces and Grammatical Evolution. Inventions 2018, 3, 51. [Google Scholar] [CrossRef]
  59. Christou, V.; Tsoulos, I.; Arjmand, A.; Dimopoulos, D.; Varvarousis, D.; Tzallas, A.T.; Gogos, C.; Tsipouras, M.G.; Glavas, E.; Ploumis, A.; et al. Grammatical Evolution-Based Feature Extraction for Hemiplegia Type Detection. Signals 2022, 3, 737–751. [Google Scholar] [CrossRef]
  60. Shahin, M.; Zafar, U.; Ahmed, B. The Automatic Detection of Speech Disorders in Children: Challenges, Opportunities, and Preliminary Results. IEEE J. Sel. Top. Signal Process. 2020, 14, 400–412. [Google Scholar] [CrossRef]
  61. Chaiani, M.; Selouani, S.A.; Boudraa, M.; Sidi Yakoub, M. Voice Disorder Classification Using Speech Enhancement and Deep Learning Models. Biocybern. Biomed. Eng. 2022, 42, 463–480. [Google Scholar] [CrossRef]
  62. Rello, L.; Baeza-Yates, R.; Ali, A.; Bigham, J.P.; Serra, M. Predicting Risk of Dyslexia with an Online Gamified Test. PLoS ONE 2020, 15, e0241687. [Google Scholar] [CrossRef] [PubMed]
  63. Aghdam, M.A.; Sharifi, A.; Pedram, M.M. Diagnosis of Autism Spectrum Disorders in Young Children Based on Resting-State Functional Magnetic Resonance Imaging Data Using Convolutional Neural Networks. J. Digit. Imaging 2019, 32, 899–918. [Google Scholar] [CrossRef]
  64. Nogay, H.S.; Adeli, H. Machine Learning (ML) for the Diagnosis of Autism Spectrum Disorder (ASD) Using Brain Imaging. Rev. Neurosci. 2020, 31, 825–841. [Google Scholar] [CrossRef] [PubMed]
  65. Gautam, R.; Sharma, M. Prevalence and Diagnosis of Neurological Disorders Using Different Deep Learning Techniques: A Meta-Analysis. J. Med. Syst. 2020, 44, 49. [Google Scholar] [CrossRef] [PubMed]
  66. Rahman, M.D.M.; Usman, O.L.; Muniyandi, R.C.; Sahran, S.; Mohamed, S.; Razak, R.A. A Review of Machine Learning Methods of Feature Selection and Classification for Autism Spectrum Disorder. Brain Sci. 2020, 10, 949. [Google Scholar] [CrossRef]
  67. Raj, S.; Masood, S. Analysis and Detection of Autism Spectrum Disorder Using Machine Learning Techniques. Procedia Comput. Sci. 2020, 167, 994–1004. [Google Scholar] [CrossRef]
  68. Dubreuil-Vall, L.; Ruffini, G.; Camprodon, J.A. Deep Learning Convolutional Neural Networks Discriminate Adult ADHD From Healthy Individuals on the Basis of Event-Related Spectral EEG. Front. Neurosci. 2020, 14, 251. [Google Scholar] [CrossRef]
  69. Chen, H.; Song, Y.; Li, X. Use of Deep Learning to Detect Personalized Spatial-Frequency Abnormalities in EEGs of Children with ADHD. J. Neural Eng. 2019, 16, 066046. [Google Scholar] [CrossRef]
  70. Tsoulos, I.G. Creating Classification Rules Using Grammatical Evolution. Int. J. Comput. Intell. Stud. 2020, 9, 161–171. [Google Scholar]
  71. Toki, E.I.; Zakopoulou, V.; Tatsis, G.; Plachouras, K.; Siafaka, V.; Kosma, E.I.; Chronopoulos, S.K.; Filippidis, D.E.; Nikopoulos, G.; Pange, J.; et al. A Game-Based Smart System Identifying Developmental Speech and Language Disorders in Child Communication: A Protocol Towards Digital Clinical Diagnostic Procedures. In New Realities, Mobile Systems and Applications; Auer, M.E., Tsiatsos, T., Eds.; Lecture Notes in Networks and Systems; Springer International Publishing: Cham, Switzerland, 2022; Volume 411, pp. 559–568. ISBN 978-3-030-96295-1. [Google Scholar]
  72. CMUSphinx 2016. Available online: https://cmusphinx.github.io/ (accessed on 22 May 2023).
  73. Pantazoglou, F.K.; Papadakis, N.K.; Kladis, G.P. Implementation of the Generic Greek Model for CMU Sphinx Speech Recognition Toolkit. In Proceedings of the eRA-12, Denver, CO, USA, 12–17 November 2017. [Google Scholar]
  74. VisualCamp Co., Ltd. SeeSo: Eye Tracking Software 2022. Available online: https://manage.seeso.io/#/console/sdk (accessed on 22 May 2023).
  75. Hessels, R.S.; Niehorster, D.C.; Nyström, M.; Andersson, R.; Hooge, I.T. Is the Eye-Movement Field Confused about Fixations and Saccades? A Survey among 124 Researchers. R. Soc. Open Sci. 2018, 5, 180502. [Google Scholar] [CrossRef]
  76. Borys, M.; Plechawska-Wójcik, M. Eye-Tracking Metrics in Perception and Visual Attention Research. EJMT 2017, 3, 11–23. [Google Scholar]
  77. Haykin, S.S.; Haykin, S.S. Neural Networks and Learning Machines, 3rd ed.; Prentice Hall: New York, NY, USA, 2009; ISBN 978-0-13-147139-9. [Google Scholar]
  78. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  79. Tsoulos, I.G.; Tzallas, A.; Tsalikakis, D. Use RBF as a Sampling Method in Multistart Global Optimization Method. Signals 2022, 3, 857–874. [Google Scholar] [CrossRef]
  80. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  81. McCulloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  82. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  83. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  84. O’Neill, M.; Ryan, C. Grammatical Evolution. IEEE Trans. Evol. Comput. 2003, 5, 349–358. [Google Scholar] [CrossRef]
  85. de la Puente, A.O.; Alfonso, R.S.; Moreno, M.A. Automatic Composition of Music by Means of Grammatical Evolution. In Proceedings of the 2002 Conference on APL: Array Processing Languages: Lore, Problems, and Applications, Madrid, Spain, 22–25 July 2002; pp. 148–155. [Google Scholar]
  86. O’Neill, M.; Brabazon, A.; Ryan, C.; Collins, J. Evolving Market Index Trading Rules Using Grammatical Evolution. In Proceedings of the Applications of Evolutionary Computing, EvoWorkshops 2001, EvoCOP, EvoFlight, EvoIASP, EvoLearn, and EvoSTIM, Como, Italy, 18–20 April 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 343–352. [Google Scholar]
  87. O’Neill, M.; Ryan, C. Grammatical Evolution: Evolutionary Automatic Programming in an Arbitrary Language. In Genetic Programming; Springer: Berlin/Heidelberg, Germany, 2003; Volume 4. [Google Scholar]
  88. Miettinen, K. Evolutionary Algorithms in Engineering and Computer Science: Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary Programming, GE; John Wiley & Sons Inc.: Hoboken, NJ, USA, 1999. [Google Scholar]
  89. Backus, J. The Syntax and Semantics of the Proposed International Algebraic Language of the Zurich ACM-GAMM Conference. In Proceedings of the Conference on Information Processing, Paris, France, 15–20 June 1959; pp. 125–131. [Google Scholar]
  90. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  91. Anastasopoulos, N.; Tsoulos, I.G.; Tzallas, A. GenClass: A Parallel Tool for Data Classification Based on Grammatical Evolution. SoftwareX 2021, 16, 100830. [Google Scholar] [CrossRef]
  92. Aggarwal, G.; Singh, L. Comparisons of Speech Parameterisation Techniques for Classification of Intellectual Disability Using Machine Learning. In Research Anthology on Physical and Intellectual Disabilities in an Inclusive Society; IGI Global: Hershey, PA, USA, 2022; pp. 828–847. [Google Scholar]
  93. Vabalas, A.; Gowen, E.; Poliakoff, E.; Casson, A.J. Machine Learning Algorithm Validation with a Limited Sample Size. PLoS ONE 2019, 14, e0224365. [Google Scholar] [CrossRef]
Figure 1. (a) Descriptive statistics for game-score dataset for NDs and no-NDs per feature category. (b) Descriptive statistics for heart-rate dataset for NDs and no-NDs per feature category. (c) Descriptive statistics for eye tracking dataset for NDs and no-NDs per feature category.
Figure 2. RBF Neural Network flowchart.
Figure 3. DNN architecture.
Figure 4. GenClass flowchart.
Figure 5. Ten-fold cross-validation diagram.
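The ten-fold cross-validation protocol depicted in Figure 5 can be sketched with scikit-learn. This is a minimal illustration only: the synthetic data and the stand-in classifier are placeholders, not the study's actual SmartSpeech pipeline or models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier  # stand-in classifier

# Synthetic stand-in for a biometric feature matrix (e.g., 16 eye tracking features).
X, y = make_classification(n_samples=300, n_features=16, random_state=0)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_errors = []
for train_idx, test_idx in skf.split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    # Error rate (%) on the held-out fold.
    fold_errors.append(100.0 * (1.0 - clf.score(X[test_idx], y[test_idx])))

# Average error rate over the ten folds, the metric reported in Tables 3-5.
avg_error = float(np.mean(fold_errors))
```

Each sample is held out exactly once, so the averaged fold error estimates generalization performance without a separate test set.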
Figure 6. Block diagram of the study’s workflow.
Table 1. Variables for the datasets under study.
Dataset | Variable Description | Count
Game-scores | Object recognition | 6
 | Click on objects | 7
 | Vocal intensity | 1
 | Verbal response | 6
 | Memory task | 2
 | Emotion recognition | 3
 | Hearing test | 1
 | Puzzle solving | 2
 | Moving objects | 2
 | Game-scores total | 30
Heart-rate | Mean HR | 5
 | HR standard deviation | 5
 | HR total range | 5
 | Heart-rate total | 15
Eye tracking | Fixation counts | 10
 | Time spent | 6
 | Eye tracking total | 16
Table 2. Experimental settings parameters.
Architecture | Name | Value
GenClass | Chromosomes | 500
GenClass | Generations | 2000
RBF | Processing Neurons | 10
DNN | Layers | 1–3
DNN | Input layer | 16, 17, 31
DNN | Layer 5 (Activation Function: Sigmoid) | 256
DNN | Layer 4 (Activation Function: Sigmoid) | 128
DNN | Layer 3 (Activation Function: Sigmoid) | 64
DNN | Layer 2 (Activation Function: Sigmoid) | 32
DNN | Layer 1 (Activation Function: Softmax) | 16
DNN | Output Layer | 3
DNN | Optimizer | Nadam
DNN | Epochs | 1000
DNN | Batch Size | 8
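The DNN layer stack from Table 2 can be sketched as a NumPy forward pass. This is an untrained, illustrative reconstruction of the architecture's shape only: the 16-feature input corresponds to the eye tracking dataset, the 3-unit softmax output is one reading of the listed output layer, and the Nadam training loop (1000 epochs) is not shown.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Layer widths from Table 2 (16-feature eye tracking input, 3-unit output).
widths = [16, 256, 128, 64, 32, 16, 3]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(widths[:-1], widths[1:])]
biases = [np.zeros(b) for b in widths[1:]]

def forward(x):
    # Hidden layers use sigmoid; the final layer applies softmax over the classes.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = sigmoid(x @ w + b)
    return softmax(x @ weights[-1] + biases[-1])

probs = forward(rng.standard_normal((8, 16)))  # one batch of 8, as in Table 2
```

Each row of `probs` is a probability distribution over the output classes, which is what the softmax layer guarantees.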
Table 3. Eye tracking dataset: comparison of classification techniques using the average error rate (%).
Class | RBF | DNN-3 | DNN-4 | DNN-5 | GenClass
Disorder | 22.01% | 7.67% | 8.00% | 9.67% | 7.17%
ASD | 23.36% | 14.89% | 13.67% | 15.11% | 16.13%
ADHD | 22.67% | 13.22% | 12.56% | 13.11% | 13.40%
ID | 32.03% | 24.67% | 17.33% | 21.33% | 13.07%
SLD | 23.47% | 15.44% | 14.44% | 15.33% | 11.12%
CD | 23.38% | 15.67% | 14.00% | 15.56% | 11.30%
Total | 24.49% | 15.26% | 13.33% | 15.02% | 12.03%
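The Total row in Tables 3-5 is consistent with the unweighted mean of the six per-class error rates; checking the eye tracking RBF column:

```python
# Per-class RBF error rates (%) from the eye tracking dataset (Table 3).
rbf_errors = {"Disorder": 22.01, "ASD": 23.36, "ADHD": 22.67,
              "ID": 32.03, "SLD": 23.47, "CD": 23.38}

# Unweighted mean over the six classes reproduces the reported Total.
total = sum(rbf_errors.values()) / len(rbf_errors)  # 24.49 after rounding
```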
Table 4. Heart-rate dataset: comparison of classification techniques using the average error rate (%).
Class | RBF | DNN-3 | DNN-4 | DNN-5 | GenClass
Disorder | 18.48% | 29.07% | 23.33% | 23.33% | 20.02%
ASD | 18.76% | 28.33% | 26.85% | 30.93% | 22.43%
ADHD | 18.43% | 29.44% | 30.19% | 29.81% | 20.73%
ID | 18.41% | 26.67% | 25.74% | 25.37% | 21.60%
SLD | 18.37% | 26.85% | 28.15% | 27.78% | 21.82%
CD | 19.95% | 30.56% | 29.63% | 31.85% | 21.53%
Total | 18.73% | 28.49% | 27.32% | 28.18% | 21.36%
Table 5. Game-based dataset: comparison of classification techniques using the average error rate (%).
Class | RBF | DNN-3 | DNN-4 | DNN-5 | GenClass
Disorder | 21.62% | 24.26% | 24.26% | 24.26% | 20.44%
ASD | 22.45% | 24.88% | 24.88% | 24.88% | 22.19%
ADHD | 22.05% | 25.19% | 25.19% | 25.19% | 22.40%
ID | 23.12% | 24.26% | 24.81% | 24.19% | 22.19%
SLD | 22.37% | 25.58% | 26.12% | 25.50% | 22.49%
CD | 23.33% | 26.20% | 28.76% | 25.97% | 22.78%
Total | 22.49% | 25.06% | 25.67% | 25.00% | 22.08%
Table 6. NDs Disorder class: comparison of classification methods using precision and recall.
Dataset for Disorder | Classification Method | Precision | Recall
Eye tracking | RBF | 0.7549 | 0.6241
Heart rate | RBF | 0.7671 | 0.6634
Game-based | RBF | 0.7540 | 0.5610
Eye tracking | GenClass | 0.9099 | 0.8982
Heart rate | GenClass | 0.7248 | 0.6327
Game-based | GenClass | 0.7265 | 0.6355
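Precision and recall in Table 6 follow the standard confusion-matrix definitions. A minimal helper is shown below; the counts in the usage line are illustrative, not values from the study:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts only: 90 true positives, 10 false positives, 30 false negatives.
p, r = precision_recall(tp=90, fp=10, fn=30)
```

High precision with lower recall, as seen for RBF in Table 6, indicates the classifier's positive calls are mostly correct but it misses a sizeable share of true Disorder cases.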
Table 7. Overall comparison of classification methods using accuracy.
Class | Eye Tracking | Heart Rate | Game-Based
Disorder | GenClass (92.83%) | RBF (81.52%) | GenClass (79.56%)
ASD | DNN-4 (86.33%) | RBF (81.24%) | GenClass (77.81%)
ADHD | DNN-4 (87.44%) | RBF (81.57%) | RBF (77.95%)
ID | GenClass (86.93%) | RBF (81.59%) | GenClass (77.81%)
SLD | GenClass (88.88%) | RBF (81.63%) | RBF (77.63%)
CD | GenClass (88.70%) | RBF (80.05%) | GenClass (77.92%)
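The accuracies in Table 7 are the complements of the average error rates in Tables 3-5 (accuracy = 100% − error rate); for example, GenClass on the eye tracking Disorder class:

```python
# GenClass average error rate (%) on the eye tracking Disorder class (Table 3).
error_rate = 7.17

# Accuracy is the complement of the error rate, matching Table 7 (92.83%).
accuracy = 100.0 - error_rate
```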
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
