Machines
  • Review
  • Open Access

1 July 2023

Exploring the Potential of Social Robots for Speech and Language Therapy: A Review and Analysis of Interactive Scenarios

1 Institute of Robotics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2 Department of Logopedics, South-West University “Neofit Rilski”, 2700 Blagoevgrad, Bulgaria
3 Faculty of Information Sciences, University of Library Studies and Information Technologies, 1784 Sofia, Bulgaria
4 Department of Medical-Social Sciences, South-West University “Neofit Rilski”, 2700 Blagoevgrad, Bulgaria
This article belongs to the Special Issue Design and Applications of Service Robots

Abstract

The use of innovative technology in the field of Speech and Language Therapy (SLT) has gained significant attention in recent years. Despite being a promising research area, Socially Assistive Robots (SARs) have not been thoroughly studied and used in SLT. This paper makes two main contributions: first, it provides a comprehensive review of existing research on the use of SARs to enhance communication skills in children and adolescents; second, it organizes the information into tables that categorize the interactive play scenarios described in the surveyed papers. Play scenarios were included in the tables only if their effectiveness for SLT was proven by experimental findings. The data, systematically presented in table format, allow readers to easily find relevant information based on factors such as disorder type, age, treatment technique, and robot type. The study concludes that, despite limited research on the use of social robots for children and adolescents with communication disorders (CD), promising outcomes have been reported. The authors discuss the methodological, technical, and ethical limitations related to the use of SARs for SLT in clinical or home environments, as well as the considerable potential of conversational Artificial Intelligence (AI) as a secondary assistive technology to facilitate speech and language interventions.

1. Introduction

Interest in the use of innovative technology in Speech and Language Therapy has grown in recent years, and SARs have drawn significant attention in the field over the last decade. While initial results have been promising, further exploration is needed to fully understand the potential and usefulness of SARs in SLT. Robots have been observed to provide effective and engaging therapy experiences for children and adolescents with different communication disorders. The objectives of this paper are to examine the use of SARs in the treatment of such disorders among juveniles and to explore the benefits and limitations of existing research in this area.
The challenges commonly associated with SLT delivered in a virtual environment or with social robots include technical issues (software compatibility, hardware failures, internet connectivity), methodological issues (lack of customization, limited personal social interaction, constraints in conveying social cues), a lack of ethical guidelines, insufficient training for therapists, integration with traditional therapy methods and with other technology, collaboration between therapists and technology (guidance and feedback during therapy sessions), and high cost. Ongoing research and evaluation of therapy based on social robots can provide evidence-based insights regarding their efficacy and facilitate the identification of areas that could be improved.
A communication disorder refers to a condition in which an individual has difficulty sending, receiving, processing, and comprehending information represented in verbal, nonverbal, or graphical modes [1]. This can include difficulties with speech, language, and/or hearing. An impairment of the articulation of speech sounds, voice, and/or fluency is defined as a speech disorder. According to DSM-V, a language disorder consists of persistent difficulties in the production and/or comprehension of any language form (spoken, written, sign language, etc.) [2]. A hearing disorder that is congenital or acquired in early childhood can limit speech and language development. A communication disorder has an impact on interpersonal, educational, and social functioning, and it can affect the quality of life of a person and their family [1,2].
Play is widely recognized as a fundamental activity for the overall development of every child [3] and is essential for health and growth [4,5]. Play is an important part of speech and language therapy, and a range of play-based interventions have been developed to foster social and communication skills in autistic children [6]. The child needs a combination of skills-based therapeutic work and play-based learning. Assistive technology allows for comfort and can broaden the scope for the child to play [7]. The majority of interventions have focused on children’s social, communication, and language skills [8].
The current survey was performed via exploring the following scientific databases: Scopus, Web of Science, PubMed, Google Scholar, MDPI, and IEEE.
This review investigates the potential of social robots for speech and language therapy. The authors focused on the following research questions:
Are SARs used in speech and language therapy and how common are they?
What type of scenarios with social robots have been developed for therapies of children/adolescents with communication disorders?
What are the technical, methodological, and ethical limitations and challenges of using social robots in speech and language therapy?
The objectives of the current paper are: (1) to present a literature review of research on the application of social robots in logopedic therapy and rehabilitation for children and adolescents with communication disorders, and (2) to design tables that systematically display the interactive play scenarios between humans and robots described in the reviewed papers. The inclusion criteria for the play scenarios in the tables are based solely on their effectiveness for SLT, supported by experimental findings, and on the involvement of socially assistive robots. This tabular presentation enables readers to quickly locate relevant information based on factors such as the type of disorder, age, treatment technique, type of robot, and others.

2. Methodology

Six electronic databases were searched: Scopus, Web of Science, PubMed, Google Scholar, MDPI, and IEEE, covering a period of fifteen years (January 2008–April 2023).
Eligibility criteria:
  • Sample—Children and adolescents with Disabilities, Neurodevelopmental disorders, Language Disorders, Hearing impairments, Cerebral Palsy, Learning Disabilities, Fluency disorder, Stuttering, Autism Spectrum Disorders, Intellectual disability.
  • Intervention—Study reports speech and language therapy, rehabilitation for social skills, language and communication disorders.
Search strategy
Keywords and phrases: Communication disorders, Children with disabilities, Neurodevelopmental disorders, Language Disorders, Hearing impairments, Cerebral Palsy, Learning Disabilities, Fluency disorder, Stuttering, Autism Spectrum Disorders, Intellectual disability, Rehabilitation of Speech and Language Disorders, Speech-language therapy, Communication skills, Turn-taking, Joint attention, Social interaction, Non-verbal communication; Robot-assisted therapy; Play-Based Intervention, Speech therapy, Robot-mediated intervention, Robotic assistant, Humanoid robot, Human–Robot Interaction, Social robot, Nao, Furhat Robot.
Inclusion criteria: The searched keywords had to be present in the title, abstract, and full text of the articles. An article had to include at least two of the keywords (robot plus at least one communication disorder), and the SAR had to be used as an assistant in SLT.
In total, 100 articles were reviewed, but only 30 of them met the inclusion criteria and were analyzed.
The references in Section 5 are related to future directions on how to optimize the role and importance of social robots in speech and language therapy.
The main criterion for the selection of articles for the review was the existence of scenarios of interventions between child and robot.
The human–robot interactive scenarios are presented in two tables: Table 1 includes descriptions of scenarios from pilot studies, and Table 2 from empirical ones.
The structure of the presented scenarios includes: reference number, objectives, treatment domain, type of CD, treatment technique, play type (social, cognitive), interaction technique, age, activity description, robot configuration and mode of operation, software used, setting and time, and variations.
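For illustration only, the scenario structure above can be captured in a small record type. The following Python sketch (the field names are ours, mirroring the listed columns) shows how a reader might encode one table row in order to filter scenarios by disorder type, age, or robot:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PlayScenario:
    """One row of Tables 1-2: an interactive child-robot play scenario."""
    reference: str                 # citation number of the source paper
    objectives: str                # e.g., "turn-taking", "vocabulary growth"
    treatment_domain: str          # e.g., "language", "articulation"
    disorder_type: str             # type of communication disorder (CD)
    treatment_technique: str
    play_type: str                 # "social" or "cognitive"
    interaction_technique: str
    age_range: Tuple[int, int]     # (min_age, max_age) in years
    activity: str                  # free-text activity description
    robot: str                     # robot configuration and mode of operation
    software: str
    setting_and_time: str
    variations: List[str] = field(default_factory=list)
```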

5. Discussion and Future Directions

To summarize, the potential for using SARs in the rehabilitation of communication disorders in children and adolescents is huge. Social robots can assist in vocabulary and language development, articulation therapy, speech rate control, storytelling, and the improvement of social skills. Through engaging and playful activities, social robots can offer real-time feedback and guidance to help individuals practice and enhance their communication skills.
The types of communication disorders (Figure 2) indicated in the studies are few: dyslexia, dysgraphia, specific language impairment, and dyslalia. Few articles report the participation of speech therapists on the team, and for this reason, we assume that the authors preferred to describe the primary disorder, for example, ASD, cerebral palsy, or hearing impairment. All these conditions involve different kinds of communication disorders. They belong to the category of neurodevelopmental disorders; in most of them, language acquisition is affected at different levels, and the impairment varies in severity.
Figure 2. Summary of types of disorders.
Figure 3 presents the age distribution of participants interacting with robots in the pilot and case (empirical) studies. There was a tendency toward a larger and more heterogeneous age range in the groups of children studied in the pilot studies, while in the empirical studies, the children interacting with robots had a small age difference. Sixty percent of the studies focused on children between 2 and 6 years of age. This can be explained by the fact that the first 5–6 years of life are a period of tremendous growth and change in language, cognition, adaptive skills, emotional intelligence, and social functioning. Evidence from many studies in neurolinguistics has shown that the critical and most sensitive period for language development lies in the early years of life; after this period ends, the plasticity of the specific neural pathways responsible for language coding and decoding, and for functional communication, is reduced. This means that early intervention is crucial. Language exposure and multichannel stimulation (engaging more senses: hearing, watching, touching, experiencing) in the early years have a significant effect on children’s verbal skills. Interaction with robots gives children opportunities to play, and to experience and repeat scenarios that copy everyday situations with communication models. Such stimulation at an early age enhances child development and positively affects language, cognition, and behavior in later stages of life.
Figure 3. Age of the participants interacting with robots in the studies: (A) pilot studies; (B) case studies.

5.1. How Social Robots Can Assist in the Intervention of Communication Disorders

Possible applications of SARs in the intervention of communication disorders in children and adolescents based on the reviewed papers are:
  • Vocabulary and language development (verbal and sign language): Social robots can assist children in practicing and improving their language skills through playful and engaging activities, offering real-time feedback and encouragement. SARs are able to initiate and support communication and enrich a child’s vocabulary. They also help therapists train and assess the linguistic capabilities of children and adolescents with language impairments [6,7,8,13,15,16,17,23,32,35,38,39,41,42,43,44,45,47,48,49,53,54,55,56,57,58,59,62].
  • Articulation therapy: Social robots can help children with speech disorders practice pronunciation and articulation exercises. The youngsters are observed to show increased verbal production and participation via SARs, which contribute to improvements in articulation and in phonological, morphosyntactical, and semantic communication [13,33,35,36,37,43,44,48,49,57].
  • Auditory skills: Children learn and develop language through listening. Some SARs are used to develop auditory skills as well as verbal speech. Robots can offer sounds at different frequencies, repeat words, and provide support when necessary. In addition, robots can give visual and auditory feedback, which is essential for therapists [15,48,60].
  • Speech rate control: Social robots can aid children in practicing speaking at a slower rate, offering real-time feedback to improve fluency gradually [22,23,39].
  • Storytelling: Social robots can assist children in practicing storytelling and engaging in conversation. Stories told by robots are found to be more engaging and more fun for children. SARs encourage verbal communication and enhance cognitive abilities in youngsters. Robots can also monitor, gather, and analyze data from the child’s behavior [16,33,35,64].
  • Social skills: Social robots can help children improve social skills, such as turn-taking, joint attention, emotion regulation, and eye contact, through playful and engaging activities. Different participants, together with the robots, can take part in these activities: peers, therapists, or parents. Children are provided support and guidance during play. Youngsters learn to interact and cooperate with others, and robot-based therapies enhance their cognitive and emotional abilities [6,8,15,17,39,42,43,46,55,56,62].
  • Transfer of skills to real life: Some of the studies indicate that the skills acquired in play-based interaction between a child and the SAR are transferred to real life and applied in everyday situations [55,56,57,60].
  • Personalization and adaptation: SARs have the ability to personalize the interactive scenarios by utilizing individual data, performance metrics, and individual progress to adapt therapy exercises, content, and level of difficulty to the specific CD [15,33,36,41,44,46,47,49,57].
The scenarios described in Table 1 and Table 2 represent different studies of child–robot interaction. The main goal for all of them is the development of communication. Table 3 provides a summary of the objectives and the different levels of communication (pre-verbal, non-verbal, and verbal) targeted in the research.
Table 3. Description of communication objectives used in the research with scenarios for child–robot interaction.
NAO is the most commonly utilized robot, as evidenced by the data presented in Table 4, which displays the number of articles reporting the use of each SAR type in pilot or case studies. This finding is in line with other studies on children’s acceptance and perception of SARs [65]. The cost of the robots is indicated in the table; where it is not specified, the cost is considered moderate, neither low nor high. Due to the strong emphasis on communication and language skills in SLT, intensive practice of speaking and listening is crucial. As a result, it is important for robots utilized in SLT to have access to cloud-based chat services, as iRobi [16] and QTrobot [18] do. Unfortunately, the price of these robots is not affordable for home use.
Table 4. Distribution of SAR-types in pilot/case studies and the number of articles they are used in.

5.2. Future Directions on How to Optimize the Role and Importance of Social Robots in Speech and Language Therapy

Our research findings indicate that integrating innovative technologies such as Conversational AI, extended reality, biofeedback, affective computing, and additional tactile, visual, and auditory modalities can enhance the human–robot interaction in the intervention. SLT focuses on communication and language skills, and everyday practice of speaking and listening in a dialog is central. This requires the integration of services and models for Natural Language Processing (NLP) and adaptive dialogue in SARs. Additionally, children’s expectations that robots will understand and produce human-like language are high, which makes Conversational AI, combining NLP with machine and deep learning models, a key enabling technology. An additional challenge is that it is difficult for SARs to replicate complex scenarios from everyday life using only embedded robot skills/behaviors and physical/digital speech therapy materials. More effective rehabilitation through “perception-cognition-action-experience” requires extended reality and multimodal interactions within a virtual environment (VE). Due to its interactivity, extended reality has potential beyond simple replication of the real world. Through the use of VEs that simulate real-world situations, social communication training can become more effective, leading to enhanced development of the brain areas responsible for sensory-motor processing; this, in turn, improves language learning and its usage.
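As a concrete illustration of this direction, the following minimal Python sketch wires an LLM-based dialogue turn into a SAR’s speech loop. It is a sketch under assumptions, not a description of any reviewed system: it presumes an OpenAI-style chat completion API, the model name is illustrative, and the robot’s speech calls are hypothetical placeholders, since those APIs differ per platform:

```python
# Minimal Conversational AI turn for a SAR (sketch; robot_say() below is a
# hypothetical stand-in for the robot's text-to-speech interface).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly robot helping a child practice speech and language. "
    "Use short, simple sentences, be patient, and praise every attempt."
)

def therapy_turn(history: list[dict], child_utterance: str) -> str:
    """One dialogue turn: append the child's utterance and return the reply."""
    history.append({"role": "user", "content": child_utterance})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # pass this string to the robot's TTS, e.g. robot_say(reply)
```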
Here are some of the technological advancements that have great potential to assist SARs in their role in SLT:
  • Natural Language Processing (NLP): Advanced NLP techniques can enable SARs to better understand and interpret speech and language patterns via speech recognition models tailored for specific speech and language therapy tasks, such as articulation exercises or language comprehension activities.
  • Adaptive Dialogue Systems: Implementing adaptive dialogue systems allows SARs to adapt their conversational style, pacing, and prompts based on the individual’s progress and needs (a small sketch follows this list).
  • Multimodal Interaction: SARs can engage in multimodal interaction by incorporating visual, auditory, and tactile modalities. They can use visual aids, such as interactive displays or gesture recognition, to supplement verbal instructions and support visual learning, and can incorporate tactile aids, such as interactive touchscreens, to facilitate hands-on activities.
  • Virtual (VR), Augmented (AR), and Mixed Reality (MR): These technologies can help SARs support intervention sessions with more immersive and interactive therapy environments. Using AR, SARs can overlay virtual objects or visual cues on the real world to support language or articulation exercises. VR can help simulate real-life scenarios for social communication training, providing children and adolescents with a safe and controlled environment in which to practice their skills. Three-dimensional modeling or avatars can be applied to let both children and robots immerse themselves in a shared virtual environment. Furthermore, VR can be a source of adaptation in a protected environment.
  • Affective Computing for improving SARs’ emotion recognition and facial expression capabilities: Incorporating emotion recognition algorithms (visual, voice, speech, gesture, physiologically based) can enhance SARs’ ability to detect and respond to individuals’ emotional states during therapy sessions. By tailoring their responses and interventions accordingly, SARs can create a more personalized and empathetic therapeutic environment. Furthermore, developing expressive capabilities for SARs enables them to display appropriate emotional responses and gestures, further enhancing their ability to provide sympathetic and supportive communication.
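To make the adaptive-dialogue point concrete, here is a small illustrative sketch (our own construction, not taken from any cited system) of how a session manager might adjust the robot’s speaking rate and exercise difficulty from the child’s recent answers:

```python
# Sketch of adaptive pacing: the thresholds and window size are assumptions.
from collections import deque

class AdaptivePacer:
    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)  # 1 = correct answer, 0 = incorrect
        self.speech_rate = 1.0              # relative text-to-speech rate
        self.difficulty = 1                 # exercise level, 1 (easy) to 3 (hard)

    def record(self, correct: bool) -> None:
        """Log one answer; adapt once a full window of answers is available."""
        self.recent.append(1 if correct else 0)
        if len(self.recent) < self.recent.maxlen:
            return
        success = sum(self.recent) / len(self.recent)
        if success > 0.8 and self.difficulty < 3:
            self.difficulty += 1            # child is coping well: harder items
        elif success < 0.4:
            self.difficulty = max(1, self.difficulty - 1)
            self.speech_rate = max(0.7, self.speech_rate - 0.1)  # slow down
```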
Assistive technologies that can support SARs during SLT are described below, with connections to the relevant references:
  • Integrating SARs with Conversational AI can create a more engaging and interactive speech and language experience by providing personalized intervention and real-time feedback to children with CD, as well as encouragement and guidance in tasks or play. Currently, robots such as Furhat [66], iRobi [16], and QTrobot [18] have access to cloud-based services and are used for the rehabilitation of children with ASD. For instance, researchers have successfully integrated OpenAI’s text completion services, including GPT-3 and ChatGPT, into the Furhat robot, so that real-time, non-scripted, automated interactions can be provided [67]. Furthermore, different frameworks based on cloud computing and clustering are employed to enhance the capabilities of social robots and address the limitations of existing embedded platforms [68,69,70]. These papers present approaches that enhance the capabilities of social robots via NLP cloud services; the authors also provide detailed descriptions of the architecture and components of the proposed frameworks and discuss the challenges and opportunities of cloud computing for social robots, such as security and privacy concerns, network latency, and scalability.
  • Integrating SARs with Adaptive Dialogue Systems can contribute to effective dialogue management. In [71], the authors present the design and implementation of a cloud system for knowledge-based autonomous interaction, CAIR (Cloud-based Autonomous Interaction with Robots), created for social robots and other conversational agents. The system is particularly convenient for low-cost robots and devices: it can be used as a stand-alone dialogue system or integrated to provide “background” dialogue capabilities to any preexisting natural-language interaction framework.
  • Automatic Speech Recognition and Text Generation technologies can aid children in language learning through storytelling, and also help them practice social skills and behaviors [72]. The robot Furhat can tell stories to children through interactive conversation, natural language processing, expressive facial expressions and gestures, voice and speech synthesis, and personalization. It engages children in dialogue-like interactions, understands their speech, and adapts the story based on their preferences for a personalized experience [73]. In [74], the authors propose how to recommend books in a child–robot interactive environment based on speech input and the child’s profile. Experimental results show that the proposed recommender system performs well and can operate on embedded consumer devices with limited computational resources. Speech recognition software can also provide real-time feedback on the accuracy of a child’s speech production.
  • Graphics, Touch, Sound, and Barcode user interface design can enhance multimodal interaction with SARs by enriching the visual, auditory, and tactile modalities. Graphical interfaces provide strong support for robot interaction by listing complex information, allowing text input, or showing a game interface. Many SARs provide a touch GUI on their own tablet [54] or on an external touchscreen [16,18,75,76]. Usually, the GUI is used to acquire and repeat knowledge via pictures displayed on a touchscreen connected to the SAR. An additional tool for therapists and children to interact with, either in clinics or at home, is the QR-code-scanning capability of SARs [16,17,18,77] (see the scanning sketch after this list). An auditory interface can also be integrated into SARs to enable users to interact with robots via spoken commands, voice recognition, altered auditory feedback [23], etc. As future work, ref. [25] suggests that a voice analyzer could determine the quality of the patient’s voice (configuration of the vocal tract, anatomy of the larynx, and a scientific component). The AI methods used for the automatic classification of autistic children’s vocalizations include Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs): SVMs use acoustic feature sets from multiple Interspeech ComParE challenges, while CNNs extract deep spectrum features from the spectrogram of autistic speech instances [78].
  • VR/AR/MR technologies can provide significant support for both children and robots to immerse themselves in a shared virtual environment. Through the use of a 3D model, SARs can engage with children and adolescents in a virtual setting, allowing them to explore and interact within the virtual environment together [79]. Virtual reality-based therapy can reduce anxiety and improve speech fluency, as well as social and emotional skills [80]. In terms of application categories, simulations make up approximately 60% of the content, while games account for the remaining 40%. The majority of simulation content focuses on specific scenes and aims to imitate various real-life events and situations, such as classroom scenes, bus stop scenes, and pedestrian crossing scenes. Many studies report on virtual reality training with children with CD and have explored these scenes and their potential therapeutic benefits [81,82,83]. For example, the therapy in [84] involves different stages, including auto-navigation from outside the school to the classroom, welcome and task introduction, and completion of tasks; these stages are accompanied by verbal and nonverbal cues, such as hand-waving animations. Examples of how to use AR to overlay virtual objects or visual cues on the real world to support language or articulation exercises can be found in [85].
  • AI-based visual and audio algorithms for Affective Computing enable enhanced human–robot interaction in therapy sessions by improving social robots’ emotion recognition and facial expression capabilities. Such algorithms can detect the emotional state of the individual, allowing the SAR to tailor its responses and interventions during therapy sessions. The reviewed papers featuring SARs with integrated AI-based emotion recognition and facial expression technologies, showing real-time animation of facial emotions on a display in the head, are [36,45,47,50,61]; the last of these also demonstrates lip-syncing. Visual-based algorithms for affective computing analyze visual cues, such as facial expressions, body language, and gestures, to recognize and interpret emotions; they use computer vision techniques to extract relevant features from visual data and apply machine learning or deep learning models to classify emotional states [86,87,88]. Audio-based algorithms analyze audio signals, such as speech and vocal intonations, to detect and classify emotions; they utilize signal processing, feature extraction, and machine learning to analyze acoustic properties and patterns related to emotional states, extracting acoustic features that capture the valence and arousal dimensions, including voice quality features [89,90,91,92] (a minimal audio-based sketch follows this list). Such AI-based visual and audio algorithms are integrated in the Furhat robot, allowing it to display emotions and expressions on its face through animations that correspond to the emotional content and tone of the speech being delivered.
  • Biofeedback algorithms for Affective Computing monitor and analyze physiological signals to infer emotional states. These algorithms use techniques such as signal processing and machine learning to identify patterns and correlations associated with specific emotions, interpreting physiological responses such as heart rate, muscle tension, pupil dilation and eye tracking, skin conductance, or sweating (see the heart-rate sketch after this list). Children and adolescents with CD, especially ASD, frequently have difficulties with social skills, such as communicating wants and needs and maintaining eye contact. Biofeedback is a technique often recommended for those struggling with anxiety, anger, or stress. Some authors have explored various machine learning methods and types of data collected through wearable sensors worn by children diagnosed with ASD [93,94]. Other authors have designed a prototype to reinforce the mental skills of children with ASD through neurofeedback, using EEG data and a small humanoid robot to stimulate attention towards a joint activity [95]; the results encourage the development of more precise measures of attention combining EEG data and behavioral information. In addition, scientists have worked on EEG measures suitable for robotic neurofeedback systems and able to detect and intervene in case of attention breakdowns [96]. In [97], the children’s gaze towards the robot and other individuals present was observed; the analysis revealed that the children’s attention and engagement towards their parents increased. Eye tracking can help in understanding and quantifying where children direct their visual attention during therapy sessions, their engagement levels, and their emotional responses to different stimuli, such as the robot, other humans, and logopedic materials. Via this feedback, SARs can assess the child’s affect during SLT in real time and personalize the interactive scenarios.
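The QR-code scanning mentioned in the interface bullet above can be sketched with OpenCV’s built-in detector; the mapping from codes to therapy activities is a hypothetical example of ours:

```python
# QR-based interaction sketch: a therapist or child shows a printed card to
# the robot's camera to select the next activity.
import cv2

ACTIVITIES = {
    "animal_game": "Let's name some animals!",
    "story_time": "Time for a story.",
}

detector = cv2.QRCodeDetector()
camera = cv2.VideoCapture(0)  # the robot's camera

ok, frame = camera.read()
if ok:
    data, points, _ = detector.detectAndDecode(frame)
    if data in ACTIVITIES:
        print(ACTIVITIES[data])  # a real SAR would speak this via its TTS
camera.release()
```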
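The audio-based affect pipeline (acoustic features plus a classical classifier) can likewise be sketched in a few lines. The file names and labels are hypothetical, and real systems such as those surveyed in [78] use far richer feature sets than plain MFCC statistics:

```python
# Toy audio-affect classifier: MFCC statistics -> SVM (labels hypothetical).
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of 13 MFCC coefficients."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled clips: 0 = calm, 1 = distressed
X = np.stack([mfcc_features(p) for p in ["calm_01.wav", "upset_01.wav"]])
y = np.array([0, 1])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))  # the SAR would adapt its behaviour to this label
```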
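Finally, the heart-rate side of biofeedback can be illustrated with a rolling-baseline monitor; the window and threshold below are assumptions of ours and are not clinically validated:

```python
# Biofeedback sketch: flag likely arousal when heart rate rises well above
# the child's own recent baseline, so the SAR can switch to a calming task.
from collections import deque

class HeartRateMonitor:
    def __init__(self, baseline_window: int = 60, threshold: float = 1.2):
        self.samples = deque(maxlen=baseline_window)  # beats per minute
        self.threshold = threshold                    # x baseline = "elevated"

    def update(self, bpm: float) -> bool:
        """Return True if the new reading suggests elevated arousal."""
        elevated = (
            len(self.samples) >= 10
            and bpm > self.threshold * (sum(self.samples) / len(self.samples))
        )
        self.samples.append(bpm)
        return elevated

monitor = HeartRateMonitor()
if monitor.update(118.0):
    pass  # e.g., the robot lowers task demands or takes a calming turn
```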

6. Conclusions

After conducting a thorough review and answering the research questions, we can conclude that despite the limited research on the use of social robots for communication disorders, several studies have reported promising results for speech and language therapy. It is important to consider the methodological, technical, and ethical limitations and challenges associated with their use and to carefully evaluate their effectiveness before implementing them in clinical settings.
The use of assistive technologies can create a supportive and non-intrusive environment for children, leading to better outcomes in therapy. However, continuous exploration, evaluation, and monitoring of the effectiveness of these technologies is crucial to ensure that integrating ATs in SLT has a beneficial effect on children’s communication skills. Ethical and privacy concerns should also be taken into account when implementing ATs in speech and language therapy. It is necessary for scientists to conduct more comprehensive experimental studies before considering the widespread implementation of social robots as standard therapeutic interventions in speech and language therapy.

Author Contributions

Conceptualization, A.L. and A.A.; methodology, A.L.; investigation, P.T., M.S., V.S.-P., G.D., K.R.-Y. and I.K.; writing—original draft preparation, G.G.-T. and A.L.; writing—review and editing, A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Scientific Research Fund, Project “Innovative methodology for integration of assistive technologies in speech therapy for children and adolescents”, Grant No. KΠ-06-H67/1, 12 December 2022.

Data Availability Statement

Supporting video materials are available at https://www.youtube.com/watch?v=KpeQcIXG6cA (accessed on 16 April 2023) and https://youtu.be/AZhih7KlaPc (accessed on 16 April 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AAC: Augmentative and Alternative Communication
ABA: Applied Behavior Analysis
ADD: Attention Deficit Disorder
ADHD: Attention Deficit and Hyperactivity Disorder
AI: Artificial Intelligence
AR: Augmented Reality
ASD: Autism Spectrum Disorder
AT: Assistive Technologies
CD: Communication Disorders
DAF: Delayed Auditory Feedback
DMDD: Disruptive Mood Dysregulation Disorder
DSM-V: Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition
IEEE: Institute of Electrical and Electronics Engineers
ICT: Information and Communications Technology
HRI: Human–Robot Interaction
MDPI: Multidisciplinary Digital Publishing Institute
MR: Mixed Reality
ODD: Oppositional Defiant Disorder
RAAT: Robot-Assisted Autism Therapy
RJA: Responding to Joint Attention
SAR: Socially Assistive Robots
SLI: Specific Language Impairment
SLT: Speech and Language Therapy
VR: Virtual Reality

References

  1. Fogle, P.T. Essentials of Communication Sciences & Disorders; Jones & Bartlett Learning: Burlington, MA, USA, 2022; p. 8. [Google Scholar]
  2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Association: Arlington, VA, USA, 2013; Available online: https://dsm.psychiatryonline.org/doi/book/10.1176/appi.books (accessed on 20 April 2023).
  3. Besio, S.; Bulgarelli, D.; Stancheva-Popkostadinova, V. (Eds.) Play Development in Children with Disabilities; De Gruyter: Berlin, Germany, 2017. [Google Scholar]
  4. United Nations. Convention on the Rights of Persons with Disabilities; United Nations: New York, NY, USA, 2007; Available online: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-withdisabilities.html (accessed on 16 April 2023).
  5. World Health Organisation. The Global Strategy for Women’s, Children’s and Adolescents’ Health, 2016–2030; World Health Organization: Geneva, Switzerland, 2015; Available online: https://www.who.int/life-course/partners/global-strategy/globalstrategy-2016-2030/en/ (accessed on 16 April 2023).
  6. Gibson, J.L.; Pritchard, E.; de Lemos, C. Play-based interventions to support social and communication development in autistic children aged 2–8 years: A scoping review. Autism Dev. Lang. Impair. 2021, 6, 1–30. [Google Scholar] [CrossRef] [PubMed]
  7. Baker, F.S. Engaging in play through assistive technology: Closing gaps in research and practice for infants and toddlers with disabilities. In Assistive Technology Research, Practice, and Theory; IGI Global: Hershey, PA, USA, 2014; pp. 207–221. [Google Scholar] [CrossRef]
  8. Francis, G.; Deniz, E.; Torgerson, C.; Toseeb, U. Play-based interventions for mental health: A systematic review and meta-analysis focused on children and adolescents with autism spectrum disorder and developmental language disorder. Autism Dev. Lang. Impair. 2022, 7, 1–44. [Google Scholar] [CrossRef] [PubMed]
  9. Papakostas, G.A.; Sidiropoulos, G.K.; Papadopoulou, C.I.; Vrochidou, E.; Kaburlasos, V.G.; Papadopoulou, M.T.; Holeva, V.; Nikopoulou, V.-A.; Dalivigkas, N. Social Robots in Special Education: A Systematic Review. Electronics 2021, 10, 1398. [Google Scholar] [CrossRef]
  10. Mahdi, H.; Akgun, S.A.; Salen, S.; Dautenhahn, K. A survey on the design and evolution of social robots—Past, present and future. Robot. Auton. Syst. 2022, 156, 104193. [Google Scholar] [CrossRef]
  11. World Health Organization; United Nations Children’s Fund (UNICEF). Global Report on Assistive Technology; World Health Organization: Geneva, Switzerland, 2022; Available online: https://www.unicef.org/reports/global-report-assistive-technology (accessed on 16 April 2023).
  12. Papadopoulou, M.T.; Karageorgiou, E.; Kechayas, P.; Geronikola, N.; Lytridis, C.; Bazinas, C.; Kourampa, E.; Avramidou, E.; Kaburlasos, V.G.; Evangeliou, A.E. Efficacy of a Robot-Assisted Intervention in Improving Learning Performance of Elementary School Children with Specific Learning Disorders. Children 2022, 9, 1155. [Google Scholar] [CrossRef]
  13. Robins, B.; Dautenhahn, K.; Ferrari, E.; Kronreif, G.; Prazak-Aram, B.; Marti, P.; Laudanna, E. Scenarios of robot-assisted play for children with cognitive and physical disabilities. Interact. Stud. 2012, 13, 189–234. [Google Scholar] [CrossRef]
  14. Estévez, D.; Terrón-López, M.-J.; Velasco-Quintana, P.J.; Rodríguez-Jiménez, R.-M.; Álvarez-Manzano, V. A Case Study of a Robot-Assisted Speech Therapy for Children with Language Disorders. Sustainability 2021, 13, 2771. [Google Scholar] [CrossRef]
  15. Ioannou, A.; Andreva, A. Play and Learn with an Intelligent Robot: Enhancing the Therapy of Hearing-Impaired Children. In Proceedings of the IFIP Conference on Human-Computer Interaction—INTERACT 2019. INTERACT 2019. Lecture Notes in Computer Science, Paphos, Cyprus, 2–6 September 2019; Springer: Cham, Switzerland, 2019; Volume 11747. [Google Scholar] [CrossRef]
  16. Hawon, L.; Hyun, E. The Intelligent Robot Contents for Children with Speech-Language Disorder. J. Educ. Technol. Soc. 2015, 18, 100–113. Available online: http://www.jstor.org/stable/jeductechsoci.18.3.100 (accessed on 16 April 2023).
  17. Lekova, A.; Andreeva, A.; Simonska, M.; Tanev, T.; Kostova, S. A system for speech and language therapy with a potential to work in the IoT. In Proceedings of the CompSysTech ‘22: International Conference on Computer Systems and Technologies 2022, Ruse, Bulgaria, 17–18 June 2022; pp. 119–124. [Google Scholar] [CrossRef]
  18. QTrobot for Education of Children with Autism and Other Special Needs. Available online: https://luxai.com/assistive-tech-robot-for-special-needs-education/ (accessed on 16 April 2023).
  19. Vukliš, D.; Krasnik, R.; Mikov, A.; Zvekić Svorcan, J.; Janković, T.; Kovačević, M. Parental Attitudes Towards The Use Of Humanoid Robots In Pediatric (Re)Habilitation. Med. Pregl. 2019, 72, 302–306. [Google Scholar] [CrossRef]
  20. Szymona, B.; Maciejewski, M.; Karpiński, R.; Jonak, K.; Radzikowska-Büchner, E.; Niderla, K.; Prokopiak, A. Robot-Assisted Autism Therapy (RAAT). Criteria and Types of Experiments Using Anthropomorphic and Zoomorphic Robots. Review of the Research. Sensors 2021, 21, 3720. [Google Scholar] [CrossRef]
  21. Nicolae, G.; Vlãdeanu, G.; Saru, L.M.; Burileanu, C.; Grozãvescu, R.; Craciun, G.; Drugã, S.; Hãþiş, M. Programming The Nao Humanoid Robot For Behavioral Therapy In Romania. Rom. J. Child & Adolesc. Psychiatry 2019, 7, 23–30. [Google Scholar]
  22. Gupta, G.; Chandra, S.; Dautenhahn, K.; Loucks, T. Stuttering Treatment Approaches from the Past Two Decades: Comprehensive Survey and Review. J. Stud. Res. 2022, 11, 1–24. [Google Scholar] [CrossRef]
  23. Chandra, S.; Gupta, G.; Loucks, T.; Dautenhahn, K. Opportunities for social robots in the stuttering clinic: A review and proposed scenarios. Paladyn J. Behav. Robot. 2022, 13, 23–44. [Google Scholar] [CrossRef]
  24. Bonarini, A.; Clasadonte, F.; Garzotto, F.; Gelsomini, M.; Romero, M. Playful interaction with Teo, a Mobile Robot for Children with Neurodevelopmental Disorders. In Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2016), Portugal, 1–3 December 2016; pp. 223–231. [Google Scholar] [CrossRef]
  25. Kose, H.; Yorganci, R. Tale of a robot: Humanoid Robot Assisted Sign Language Tutoring. In Proceedings of the 2011 11th IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 26–28 October 2011; pp. 105–111. [Google Scholar]
  26. Robles-Bykbaev, V.; López-Nores, M.; Pazos-Arias, J.; Quisi-Peralta, D.; García-Duque, J. An Ecosystem of Intelligent ICT Tools for Speech-Language Therapy Based on a Formal Knowledge Model. Stud. Health Technol. Inform. 2015, 216, 50–54. [Google Scholar]
  27. Fosch-Villaronga, E.; Millard, C. Cloud Robotics Law and Regulation, Challenges in the Governance of Complex and Dynamic Cyber-Physical Ecosystems. Robot. Auton. Syst. 2019, 119, 77–91. [Google Scholar] [CrossRef]
  28. Samaddar, S.; Desideri, L.; Encarnação, P.; Gollasch, D.; Petrie, H.; Weber, G. Robotic and Virtual Reality Technologies for Children with Disabilities and Older Adults. In Computers Helping People with Special Needs. ICCHP-AAATE 2022. Lecture Notes in Computer Science; Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P., Eds.; Springer: Cham, Switzerland, 2022; Volume 13342. [Google Scholar] [CrossRef]
  29. da Silva, C.A.; Fernandes, A.R.; Grohmann, A.P. STAR: Speech Therapy with Augmented Reality for Children with Autism Spectrum Disorders. In Enterprise Information Systems. ICEIS 2014. Lecture Notes in Business Information Processing; Cordeiro, J., Hammoudi, S., Maciaszek, L., Camp, O., Filipe, J., Eds.; Springer: Cham, Switzerland, 2015; Volume 227. [Google Scholar] [CrossRef]
  30. Lorenzo, G.; Lledó, A.; Pomares, J.; Roig, R. Design and application of an immersive virtual reality system to enhance emotional skills for children with autism spectrum disorders. Comput. Educ. 2016, 98, 192–205. [Google Scholar] [CrossRef]
  31. Kotsopoulos, K.I.; Katsounas, M.G.; Sofios, A.; Skaloumbakas, C.; Papadopoulos, A.; Kanelopoulos, A. VRESS: Designing a platform for the development of personalized Virtual Reality scenarios to support individuals with Autism. In Proceedings of the 2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA), 2021; pp. 1–4. [Google Scholar] [CrossRef]
  32. Furhat and Social Robots in Rehabilitation. Available online: https://furhatrobotics.com/habilitation-concept/ (accessed on 16 April 2023).
  33. Charron, N.; Lindley-Soucy, E.D.K.; Lewis, L.; Craig, M. Robot therapy: Promoting Communication Skills for Students with Autism Spectrum Disorders. New Hampshire J. Edu. 2019, 21, 10983. [Google Scholar]
  34. Silvera-Tawil, D.; Bradford, D.; Roberts-Yates, C. Talk to me: The role of human–robot interaction in improving verbal communication skills in students with autism or intellectual disability. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1–6. [Google Scholar] [CrossRef]
  35. Syrdal, D.S.; Dautenhahn, K.; Robins, B.; Karakosta, E.; Jones, N.C. Kaspar in the wild: Experiences from deploying a small humanoid robot in a nursery school for children with autism. Paladyn J. Behav. Robot. 2020, 11, 301–326. [Google Scholar] [CrossRef]
  36. Robles-Bykbaev, V.; Ochoa-Guaraca, M.; Carpio-Moreta, M.; Pulla-Sánchez, D.; Serpa-Andrade, L.; López-Nores, M.; García-Duque, J. Robotic assistant for support in speech therapy for children with cerebral palsy. In Proceedings of the 2016 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 9–11 November 2016; pp. 1–6. [Google Scholar] [CrossRef]
  37. Pereira, J.; de Melo, M.; Franco, N.; Rodrigues, F.; Coelho, A.; Fidalgo, R. Using assistive robotics for aphasia rehabilitation. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande, Brazil, 23–25 October 2019; pp. 387–392. [Google Scholar] [CrossRef]
  38. Castillo, J.C.; Alvarez-Fernandez, D.; Alonso-Martin, F.; Marques-Villarroya, S.; Salichs, M.A. Social robotics in therapy of apraxia of speech. J. Healthcare Eng. 2018, 2018, 11. [Google Scholar] [CrossRef]
  39. Kwaśniewicz, Ł.; Kuniszyk-Jóźkowiak, W.; Wójcik, G.M.; Masiak, J. Adaptation of the humanoid robot to speech disfluency therapy. Bio-Algorithms Med-Syst. 2016, 12, 169–177. [Google Scholar] [CrossRef]
  40. Charron, N.; Lewis, L.; Craig, M. A Robotic Therapy Case Study: Developing Joint Attention Skills With a Student on the Autism Spectrum. J. Educ. Technol. Syst. 2017, 46, 137–148. [Google Scholar] [CrossRef]
  41. Kajopoulos, J.; Wong, A.H.Y.; Yuen, A.W.C.; Dung, T.A.; Kee, T.Y.; Wykowska, A. Robot-Assisted Training of Joint Attention Skills in Children Diagnosed with Autism. In Social Robotics. ICSR 2015. Lecture Notes in Computer Science; Tapus, A., André, E., Martin, J.C., Ferland, F., Ammi, M., Eds.; Springer: Cham, Switzerland, 2015; Volume 9388. [Google Scholar] [CrossRef]
  42. Taheri, A.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Poorgoldooz, P.; Roohbakhsh, M. Social Robots and Teaching Music to Autistic Children: Myth or Reality? In Social Robotics. ICSR 2016. Lecture Notes in Computer Science; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9979. [Google Scholar] [CrossRef]
  43. Tariq, S.; Baber, S.; Ashfaq, A.; Ayaz, Y.; Naveed, M.; Mohsin, S. Interactive Therapy Approach Through Collaborative Physical Play Between a Socially Assistive Humanoid Robot and Children with Autism Spectrum Disorder. In Social Robotics. ICSR 2016. Lecture Notes in Computer Science; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9979. [Google Scholar] [CrossRef]
  44. Egido-García, V.; Estévez, D.; Corrales-Paredes, A.; Terrón-López, M.-J.; Velasco-Quintana, P.-J. Integration of a Social Robot in a Pedagogical and Logopedic Intervention with Children: A Case Study. Sensors 2020, 20, 6483. [Google Scholar] [CrossRef]
  45. Jeon, K.H.; Yeon, S.J.; Kim, Y.T.; Song, S.; Kim, J. Robot-based augmentative and alternative communication for nonverbal children with communication disorders. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ‘14), Seattle, Washington, USA, 13–17 September 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 853–859. [Google Scholar] [CrossRef]
  46. Spitale, M.; Silleresi, S.; Leonardi, G.; Arosio, F.; Giustolisi, B.; Guasti, M.T.; Garzotto, F. Design Patterns of Technology-based Therapeutic Activities for Children with Language Impairments: A Psycholinguistic-Driven Approach, 2021. In Proceedings of the CHI EA ‘21: Extended Abstracts of the 2021 CHI Virtual Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–7. [Google Scholar] [CrossRef]
  47. Panceri, J.A.C.; Freitas, É.; de Souza, J.C.; da Luz Schreider, S.; Caldeira, E.; Bastos, T.F. A New Socially Assistive Robot with Integrated Serious Games for Therapies with Children with Autism Spectrum Disorder and Down Syndrome: A Pilot Study. Sensors 2021, 21, 8414. [Google Scholar] [CrossRef]
  48. Robles-Bykbaev, V.E.; Lopez-Nores, M.; Pazos-Arias, J.J.; Garcia-Duque, J. RAMSES: A robotic assistant and a mobile support environment for speech and language therapy. In Proceedings of the Fifth International Conference on the Innovative Computing Technology (INTECH 2015), Galcia, Spain, 20–22 May 2015; pp. 1–4. [Google Scholar] [CrossRef]
  49. Ochoa-Guaraca, M.; Carpio-Moreta, M.; Serpa-Andrade, L.; Robles-Bykbaev, V.; Lopez-Nores, M.; Duque, J.G. A robotic assistant to support the development of communication skills of children with disabilities. In Proceedings of the 2016 IEEE 11th Colombian Computing Conference (CCC), Popayan, Colombia, 27–30 September 2016; pp. 1–8. [Google Scholar] [CrossRef]
  50. Velásquez-Angamarca, V.; Mosquera-Cordero, K.; Robles-Bykbaev, V.; León-Pesántez, A.; Krupke, D.; Knox, J.; Torres-Segarra, V.; Chicaiza-Juela, P. An Educational Robotic Assistant for Supporting Therapy Sessions of Children with Communication Disorders. In Proceedings of the 2019 7th International Engineering, Sciences and Technology Conference (IESTEC), Panama, Panama, 9–11 October 2019; pp. 586–591. [Google Scholar] [CrossRef]
  51. Horstmann, A.C.; Mühl, L.; Köppen, L.; Lindhaus, M.; Storch, D.; Bühren, M.; Röttgers, H.R.; Krajewski, J. Important Preliminary Insights for Designing Successful Communication between a Robotic Learning Assistant and Children with Autism Spectrum Disorder in Germany. Robotics 2022, 11, 141. [Google Scholar] [CrossRef]
  52. Farhan, S.A.; Rahman Khan, M.N.; Swaron, M.R.; Saha Shukhon, R.N.; Islam, M.M.; Razzak, M.A. Improvement of Verbal and Non-Verbal Communication Skills of Children with Autism Spectrum Disorder using Human Robot Interaction. In Proceedings of the 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 10–13 May 2021; pp. 356–359. [Google Scholar] [CrossRef]
  53. van den Berk-Smeekens, I.; van Dongen-Boomsma, M.; De Korte, M.W.P.; Boer, J.C.D.; Oosterling, I.J.; Peters-Scheffer, N.C.; Buitelaar, J.K.; Barakova, E.I.; Lourens, T.; Staal, W.G.; et al. Adherence and acceptability of a robot-assisted Pivotal Response Treatment protocol for children with autism spectrum disorder. Sci. Rep. 2020, 10, 8110. [Google Scholar] [CrossRef] [PubMed]
  54. Lekova, A.; Kostadinova, A.; Tsvetkova, P.; Tanev, T. Robot-assisted psychosocial techniques for language learning by hearing-impaired children. Int. J. Inf. Technol. Secur. 2021, 13, 63–76. [Google Scholar]
  55. Simut, R.E.; Vanderfaeillie, J.; Peca, A.; Van de Perre, G.; Vanderborght, B. Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study. J. Autism. Dev. Disord. 2016, 46, 113–126. [Google Scholar] [CrossRef] [PubMed]
  56. Polycarpou, P.; Andreeva, A.; Ioannou, A.; Zaphiris, P. Don’t Read My Lips: Assessing Listening and Speaking Skills Through Play with a Humanoid Robot. In HCI International 2016—Posters’ Extended Abstracts. HCI 2016. Communications in Computer and Information Science; Stephanidis, C., Ed.; Springer: Cham, Switzerland, 2016; Volume 618. [Google Scholar] [CrossRef]
  57. Lewis, L.; Charron, N.; Clamp, C.; Craig, M. Co-robot therapy to foster social skills in special need learners: Three pilot studies. In Methodologies and Intelligent Systems for Technology Enhanced Learning: 6th International Conference; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 131–139. [Google Scholar]
  58. Akalin, N.; Uluer, P.; Kose, H. Non-verbal communication with a social robot peer: Towards robot assisted interactive sign language tutoring. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 1122–1127. [Google Scholar] [CrossRef]
  59. Özkul, A.; Köse, H.; Yorganci, R.; Ince, G. Robostar: An interaction game with humanoid robots for learning sign language. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), Bali, Indonesia, 5–10 December 2014; pp. 522–527. [Google Scholar] [CrossRef]
  60. Andreeva, A.; Lekova, A.; Simonska, M.; Tanev, T. Parents’ Evaluation of Interaction Between Robots and Children with Neurodevelopmental Disorders. In Smart Education and e-Learning—Smart Pedagogy. SEEL-22 2022. Smart Innovation, Systems and Technologies; Uskov, V.L., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2022; Volume 305. [Google Scholar] [CrossRef]
  61. Esfandbod, A.; Rokhi, Z.; Meghdari, A.F.; Taheri, A.; Alemi, M.; Karimi, M. Utilizing an Emotional Robot Capable of Lip-Syncing in Robot-Assisted Speech Therapy Sessions for Children with Language Disorders. Int. J. Soc. Robot. 2023, 15, 165–183. [Google Scholar] [CrossRef]
  62. Boccanfuso, L.; Scarborough, S.; Abramson, R.K.; Hall, A.V.; Wright, H.H.; O’kane, J.M. A low-cost socially assistive robot and robot-assisted intervention for children with autism spectrum disorder: Field trials and lessons learned. Auton Robot 2016, 41, 637–655. [Google Scholar] [CrossRef]
  63. Alabdulkareem, A.; Alhakbani, N.; Al-Nafjan, A. A Systematic Review of Research on Robot-Assisted Therapy for Children with Autism. Sensors 2022, 22, 944. [Google Scholar] [CrossRef]
  64. Fisicaro, D.; Pozzi, F.; Gelsomini, M.; Garzotto, F. Engaging Persons with Neuro-Developmental Disorder with a Plush Social Robot. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 610–611. [Google Scholar] [CrossRef]
  65. Cifuentes, C.; Pinto, M.J.; Céspedes, N.; Múnera, M. Social Robots in Therapy and Care. Curr. Robot. Rep. 2020, 1, 59–74. [Google Scholar] [CrossRef]
  66. Available online: https://furhatrobotics.com/furhat-robot/ (accessed on 16 April 2023).
  67. Integrating Furhat with OpenAI. Available online: https://docs.furhat.io/tutorials/openai/ (accessed on 16 April 2023).
  68. Elfaki, A.O.; Abduljabbar, M.; Ali, L.; Alnajjar, F.; Mehiar, D.; Marei, A.M.; Alhmiedat, T.; Al-Jumaily, A. Revolutionizing Social Robotics: A Cloud-Based Framework for Enhancing the Intelligence and Autonomy of Social Robots. Robotics 2023, 12, 48. [Google Scholar] [CrossRef]
  69. Lekova, A.; Tsvetkova, P.; Andreeva, A. System software architecture for enhancing human-robot interaction by Conversational AI. In Proceedings of the 2023 International Conference on Information Technologies (InfoTech-2023), Bulgaria, 20–21 September 2023; in press. [Google Scholar]
  70. Dino, F.; Zandie, R.; Abdollahi, H.; Schoeder, S.; Mahoor, M.H. Delivering Cognitive Behavioral Therapy Using A Conversational Social Robot. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2089–2095. [Google Scholar] [CrossRef]
  71. Grassi, L.; Recchiuto, C.T.; Sgorbissa, A. Sustainable Cloud Services for Verbal Interaction with Embodied Agents. Intell. Serv. Robot. 2023, in press. [Google Scholar]
  72. Available online: https://furhatrobotics.com/blog/5-ways-social-robots-are-innovating-education/ (accessed on 16 April 2023).
  73. Elgarf, M.; Skantze, G.; Peters, C. Once upon a story: Can a creative storyteller robot stimulate creativity in children? In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, Fukuchiyama, Japan, 14–17 September 2021; pp. 60–67. [Google Scholar]
  74. Liu, Y.; Gao, T.; Song, B.; Huang, C. Personalized Recommender System for Children’s Book Recommendation with A Realtime Interactive Robot. J. Data Sci. Intell. Syst. 2017. [Google Scholar] [CrossRef]
  75. Available online: furhatrobotics.com/Furhat-robot (accessed on 16 April 2023).
  76. AskNAO Tablet. Available online: https://www.asknao-tablet.com/en/home/ (accessed on 16 April 2023).
  77. Available online: https://furhatrobotics.com (accessed on 16 April 2023).
  78. Baird, A.; Amiriparian, S.; Cummins, N.; Alcorn, A.M.; Batliner, A.; Pugachevskiy, S.; Freitag, M.; Gerczuk, M.; Schuller, B. Automatic classification of autistic child vocalisations: A novel database and results. Proc. Interspeech 2017, 849–853. [Google Scholar] [CrossRef]
  79. Shahab, M.; Taheri, A.; Hosseini, S.R.; Mokhtari, M.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Shariati, A.; Pour, A.G. Social Virtual Reality Robot (V2R): A Novel Concept for Education and Rehabilitation of Children with Autism. In Proceedings of the 2017 5th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 25–27 October 2017; pp. 82–87. [Google Scholar] [CrossRef]
  80. Marušić, P.; Krhen, A.L. Virtual reality as a therapy for stuttering. Croat. Rev. Rehabil. Res. 2022, 58. [Google Scholar] [CrossRef]
  81. Jingying, C.; Hu, J.; Zhang, K.; Zeng, X.; Ma, Y.; Lu, W.; Zhang, K.; Wang, G. Virtual reality enhances the social skills of children with autism spectrum disorder: A review. Interact. Learn. Environ. 2022, 1–22. [Google Scholar] [CrossRef]
  82. Lee, S.A.S. Virtual Speech-Language Therapy for Individuals with Communication Disorders: Current Evidence, Limitations, and Benefits. Curr. Dev. Disord. Rep. 2019, 6, 119–125. [Google Scholar] [CrossRef]
  83. Bailey, B.; Bryant, L.; Hemsley, B. Virtual Reality and Augmented Reality for Children, Adolescents, and Adults with Communication Disability and Neurodevelopmental Disorders: A Systematic Review. Rev. J. Autism. Dev. Disord. 2022, 9, 160–183. [Google Scholar] [CrossRef]
  84. Halabi, O.; Abou El-Seoud, S.; Alja’am, J.; Alpona, H.; Al-Hemadi, M.; Al-Hassan, D. Design of Immersive Virtual Reality System to Improve Communication Skills in Individuals with Autism. Int. J. Emerg. Technol. Learn. (iJET) 2017, 12, 50–64. [Google Scholar] [CrossRef]
  85. Almurashi, H.; Bouaziz, R.; Alharthi, W.; Al-Sarem, M.; Hadwan, M.; Kammoun, S. Augmented Reality, Serious Games and Picture Exchange Communication System for People with ASD: Systematic Literature Review and Future Directions. Sensors 2022, 22, 1250. [Google Scholar] [CrossRef] [PubMed]
  86. Chai, J.; Zeng, H.; Li, A.; Ngai, E.W. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
  87. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC); Springer International Publishing: Berlin/Heidelberg, Germany, 2020; Volume 1, pp. 128–144. [Google Scholar]
  88. Debnath, B.; O’brien, M.; Yamaguchi, M.; Behera, A. A review of computer vision-based approaches for physical rehabilitation and assessment. Multimed. Syst. 2022, 28, 209–239. [Google Scholar] [CrossRef]
  89. Aouani, H.; Ayed, Y.B. Speech emotion recognition with deep learning. Procedia Comput. Sci. 2020, 176, 251–260. [Google Scholar] [CrossRef]
  90. Sekkate, S.; Khalil, M.; Adib, A. A statistical feature extraction for deep speech emotion recognition in a bi-lingual scenario. Multimed. Tools Appl. 2023, 82, 11443–11460. [Google Scholar] [CrossRef]
  91. Samyak, S.; Gupta, A.; Raj, T.; Karnam, A.; Mamatha, H.R. Speech Emotion Analyzer. In Innovative Data Communication Technologies and Application: Proceedings of ICIDCA 2021; Springer Nature: Singapore, 2022; pp. 113–124. [Google Scholar]
  92. Zou, C.; Huang, C.; Han, D.; Zhao, L. Detecting Practical Speech Emotion in a Cognitive Task. In Proceedings of the 20th International Conference on Computer Communications and Networks (ICCCN), Lahaina, HI, USA, 31 July–4 August 2011; pp. 1–5. [Google Scholar] [CrossRef]
  93. Fioriello, F.; Maugeri, A.; D’alvia, L.; Pittella, E.; Piuzzi, E.; Rizzuto, E.; Del Prete, Z.; Manti, F.; Sogos, C. A wearable heart rate measurement device for children with autism spectrum disorder. Sci Rep. 2020, 10, 18659. [Google Scholar] [CrossRef]
  94. Alban, A.Q.; Alhaddad, A.Y.; Al-Ali, A.; So, W.-C.; Connor, O.; Ayesh, M.; Qidwai, U.A.; Cabibihan, J.-J. Heart Rate as a Predictor of Challenging Behaviours among Children with Autism from Wearable Sensors in Social Robot Interactions. Robotics 2023, 12, 55. [Google Scholar] [CrossRef]
  95. Anzalone, S.M.; Tanet, A.; Pallanca, O.; Cohen, D.; Chetouani, M. A Humanoid Robot Controlled by Neurofeedback to Reinforce Attention in Autism Spectrum Disorder. In Proceedings of the 3rd Italian Workshop on Artificial Intelligence and Robotics, Genova, Italy, 28 November 2016. [Google Scholar]
  96. Nahaltahmasebi, P.; Chetouani, M.; Cohen, D.; Anzalone, S.M. Detecting Attention Breakdowns in Robotic Neurofeedback Systems. In Proceedings of the 4th Italian Workshop on Artificial Intelligence and Robotics, Bari, Italy, 14–15 November 2017. [Google Scholar]
  97. Van Otterdijk, M.T.H.; de Korte, M.W.P.; van den Berk-Smeekens, I.; Hendrix, J.; van Dongen-Boomsma, M.; den Boer, J.C.; Buitelaar, J.K.; Lourens, T.; Glennon, J.C.; Staal, W.G.; et al. The effects of long-term child–robot interaction on the attention and the engagement of children with autism. Robotics 2020, 9, 79. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
