Article

An Architecture for Safe Child–Robot Interactions in Autism Interventions

by Ilias A. Katsanis * and Vassilis C. Moulianitis
Department of Product and Systems Design Engineering, University of the Aegean, 84100 Syros, Greece
* Author to whom correspondence should be addressed.
Robotics 2021, 10(1), 20; https://doi.org/10.3390/robotics10010020
Submission received: 29 November 2020 / Revised: 4 January 2021 / Accepted: 7 January 2021 / Published: 21 January 2021
(This article belongs to the Special Issue Advances in European Robotics)

Abstract

Autism Spectrum Disorder is a developmental disorder that affects children from a very young age and is characterized by persistent deficits in social, communicational, and behavioral abilities. Since there is no cure for autism, domain experts focus on aiding these children through specific intervention plans that are aimed towards the development of the deficient areas. Using socially assistive robots that interact in a social manner with children in autism interventions, efforts are being made towards alleviating the autistic behavior of children and enhancing their social behavior. However, implementing robots in autism interventions could lead to harmful situations concerning safety. In this paper, an architecture for safe child–robot interactions in autism interventions is proposed. First, a taxonomy of child–robot interactions in autism interventions is presented, explaining its complete framework. Next, the interaction is modelled according to this taxonomy where an interaction case is employed in order for the structure of the interaction to be defined. Based on that, the safety architecture is proposed that will be integrated into the robot’s controller. Focus is placed on detecting possible distracting elements that could influence the performance of the child, affecting their psychological or physical safety. Lastly, the interaction between child and robot is created in a simulated environment through dialogue inputs and outputs, and the code of the architecture is tested, where a virtual robot performs the appropriate actions.

1. Introduction

Autism Spectrum Disorder (ASD) is a pervasive developmental disorder characterized by a set of atypical developmental patterns that appear in the infant or toddler years of a child [1]. ASD deficits fall into three main categories: social interaction, communication, and stereotyped and repetitive behavior [1,2]. If symptoms are evident and clearly distinguishable, ASD can often be diagnosed in the first 18 months of a child’s life or even earlier. In order for a diagnosis to be considered reliable, the child should receive stable assessment by an experienced professional until the age of 2 or 3 years [2]; meanwhile, systematic early intervention works towards improving a child’s behavior and produces demonstrable benefits [3,4].
Despite the development of intervention plans, many of them lack evidence for the expected results [5], and their effectiveness is still under discussion. The rapid progress of robotics offers great possibilities for the implementation of robots in autism interventions, and advancements in the area indicate that human–robot interaction (HRI) for autism interventions could be very useful with the support of socially assistive robots (SARs).
SARs aid people through social rather than physical interaction by creating close interaction with a human and have achieved measurable progress in various populations such as the elderly, individuals with physical impairments, students, etc. [6]. The prospect of using SARs with individuals with cognitive disorders, including autism, has been explored in several studies that examine the possibility of using robots in developing the reduced abilities of children such as joint attention, eye gaze, reciprocal communication, emotion recognition, etc. [7,8,9]. Moreover, SARs can be used as a tool to help in the diagnosis of autism by providing quantitative metrics, tracking the performance of each individual, and giving the possibility of detecting abnormalities in infants and toddlers [10,11].
The implementation of SARs in autism interventions is supported by the widely accepted observation that ASD children tend to interact better with robots and robotic toys than with humans [12]. Children with ASD may feel stress or anxiety in situations that involve interacting with other people. The main reason for this is the unpredictable and complex nature of human behavior, which can hinder every attempt at communication. Thus, there are, indeed, several studies demonstrating that children with ASD exhibit increased levels of communication and social skills when interacting with robots [13,14,15] because they consider them more predictable, controllable, and acceptable partners for interaction [16]. Despite these findings, therapists are not yet convinced of the potential use of robots in autism interventions and, therefore, remain hesitant. An additional major concern is the safety challenges that these robots bring into the intervention. SARs must be demonstrably safe and trustworthy, especially when they are deployed with vulnerable populations such as ASD children [17]. As the use of robots moves towards children’s homes, safety concerns grow because of the unexpected and unstructured nature of this domain.
The design and application criteria of SARs in autism interventions were investigated in a previous study [18], and 14 criteria across 3 categories were determined. Among these categories is safety, comprising eight criteria that capture the safety requirements of implementing SARs in autism interventions. While defining these criteria, it became apparent that works proposing specifications for implementing SARs in autism interventions generally do not explain how these prerequisites arise or under which methodological approach, and do not consider the risks that may emerge during the interaction with the child.
To investigate this issue, this paper proposes an architecture for safer child–robot interactions in autism interventions. The contributions of the proposed architecture are as follows:
  • The robot is able to identify possible risks of interactions with the child in autism interventions that take place in clinics or home settings.
  • An interaction assessment is performed by the robot in real time, before the actual intervention, so that risks are not overlooked.
  • The implementation of robots in autism interventions is strengthened by showing whether the interaction between child and robot is safe.
  • The robot can demonstrate that it is capable of helping children, gaining the trust of specialists and parents and being accepted as an assistive tool.
  • Better relations between child and robot are established, ensuring that the child benefits as much as possible from the interaction.
  • Therapists are aided in creating more personalized interventions, according to the needs of every child.
The rest of the paper is organized as follows: Section 2 reviews the scientific background for this work. Section 3 introduces our approach for risk assessment during the child–robot interaction, describes the procedure, and presents the architecture in detail; it also explains an interaction scenario and exhibits the interaction and control code in the simulated environment. Section 4 gives the discussion and future prospects of the work.

2. Background

2.1. Socially Assistive Robots in Autism Interventions

ASD involves a wide range of difficulties that affect children from a very young age, remaining for the rest of their lives and complicating their daily activities. In addition, long-term interventions require a lot of time, energy, and money [19]. Hence, ASD interventions are challenging and demand continuous effort in order to achieve specific goals. To deal with these challenges, novel solutions have been explored that ensure efficient interventions while removing the burden from both parents and professionals, and at the same time provide personalized interactions that can be adapted to each child’s specific needs.
Among these innovations, SARs are considered a tool that supports autism interventions, able to assist people through social rather than physical interaction [3]. The robot should be equipped with abilities that provide the proper motivation and encouragement for learning, training, and development of social skills in ASD children [18], allowing it to behave in engaging ways.
SARs constitute a low-cost alternative to traditional technology-based approaches, which usually take the form of robots in virtual environments, computer-aided instruction, and video interventions [20]. Recent studies suggest that children interact notably better with physical, embodied robots because their behavior is more predictable and consistent than that of humans [16,20]. The robot KASPAR (Kinesics and Synchronization in Personal Assistant Robotics) is used in interventions that develop skills such as collaborative play, imitation, and body part identification [21,22]. The NAO robot has been deployed to observe the behavior of children and to promote social and behavioral skills [12,15]. Other studies have explored the role of robots as mediators between humans and individuals with ASD [23]. Researchers in [24] stated that robots could be beneficial for children with ASD who face communication deficits. Research in [14] showed that robots could improve language abilities better than a human or a computer, and in [25], how children behave in the four-dimensional (spatial 3D plus time) environment during a joint attention task was investigated.
The robots used in the state of the art have different roles, characteristics, and abilities, but they share a common attribute: they are located in the same place as the child, which makes the interaction experience richer and more natural [26], and they are capable of performing and learning various tasks and behaviors.
In spite of the exceptional progress in the field of SARs, much remains to be done before robots gain acceptance in the everyday life of ASD children. One of these challenges is that they have to be demonstrably safe in order to be useful and socially accepted, gaining children’s, parents’, and therapists’ trust.

2.2. Safety in Human–Robot Interaction

In order to promote safety in HRI for autism interventions, it is prudent, primarily, to define the context of safe interactions between human and robot. Preventing unwanted situations must be included in the design of every HRI task, while with task identification the sources of possible risks can be foreseen and hazards prevented. Given the latest advancements in robotics, which require powerful hardware and software integration, the safety of humans who interact with these robots must not be neglected. At the same time, safety is a multidisciplinary field that investigates the physical, emotional, social, engagement, and design issues that influence the interaction. By ensuring the safety of humans in HRI tasks, not only is their performance increased, but they also gain trust towards the robot, interacting in a more natural way.
When humans interact with robots, whether in manufacturing or home environments, the first priority is to ensure that no accidental physical collisions occur during a task. If physical interaction cannot be avoided, precautions such as limits on the robot’s forces, motions, or speed should be set in order to prevent injuries. This aspect of the interaction is defined as physical safety and specifies the measures and regulations needed to avoid unintended damage.
Ensuring physical safety is not the only factor that must be considered in HRI tasks. When robots interact with humans, stressful circumstances may arise due to the robot’s movements, postures, appearance, or other morphological or design attributes. Such elements can have disastrous consequences, especially in vulnerable populations such as children with ASD. Hence, the maintenance of psychological well-being in HRI must be ensured alongside physical safety, while the context of psychological safety should include the robot’s compliance with social conventions during the interaction. Both physical and psychological safety are revisited later in the child–robot taxonomy together with the related risks.
Since interactions with SAR take place in domestic environments, safety issues tend to concern roboticists and human–robot interaction researchers, especially when robots are being applied in autism interventions. Thus, safety becomes the first requirement in human–robot interactions in daily settings, and the main goal is to enable humans and robots to interact closely, increasing at the same time the acceptance and trust of SAR [17].
Various methods have been investigated prior to the deployment of robots in HRI tasks, where different approaches are implemented in order to achieve safety of interaction. Before their implementation in everyday activities, robots were introduced in industrial environments, where they collaborated with workers, performing tasks characterized as dangerous, difficult, or even monotonous [27]. Due to accidents that have occurred during human–robot collaboration, various safety measures have been taken in order to prevent future incidents. One of the most common precautions is a physical barrier between the work cell and the worker, which is used to impede the access of humans when the robot is fully operational [27]. In [28], SafeMove by ABB was investigated in order to control up to four robots. SafeMove was implemented as a second controller, establishing safe human–robot interaction by configuring different parameters such as standstill monitoring, maximal tool speed, safe axis ranges, and others. In [29], the combination of mechanical and control principles was explored, supervising the operation of the robot at all times. In [30], tactile sensors were proposed as a measure to safely stop a manipulator’s movements when the robot comes into contact with humans. Moreover, there are several standards that define the context of safe human–robot interactions. ISO 10218-1:2011 and 10218-2:2011 set the safety requirements for industrial robots [27], while ISO/TS 15066:2016 specifies safety requirements for collaborative robots and the work environment [31]. Lastly, ANSI/RIA R15.06-2012 provides guidelines for the integration and safe use of industrial robots [32].
Moving outside industrial environments, efforts have been made to promote safety in robots that assist humans in their daily tasks. In [33], the certification of robots at runtime is proposed, where the robot modifies its behavior according to changes in the environment or in itself. In [34], a method to analyze risks in service robots is investigated, based on models of the system’s structure and behavior. This approach is quite limited, though, because it is applied at the beginning of the robot’s design and not during the actual interaction with the human. In [35], the internal simulation of mobile robots, operating in real time, is explored in order to predict the consequences of the robot’s own actions or those of other actors in the environment, while in [36], an approach to verify and validate the control code for HRI tasks based on simulation testing is developed.
While these methods are efficient and offer one route to safe interaction, a major drawback is that they overlook the risks of the actual interaction, since no risk determination process is executed by the robot. Even when an analysis of possible risks is performed, it is carried out at the design stage of the robot, prior to deployment in an HRI scenario. Moreover, exhaustive simulations are difficult to perform with the robot due to the number and types of risks that would have to be defined before its implementation. Lastly, the safety specifications adopted in autism interventions with SARs lack evidence of how they emerged and under which approach or method.
In this direction, this paper proposes a safety architecture for socially assistive robots in autism interventions, which will be integrated into the robot’s controller and will allow the assessment of critical parameters that may impact interaction outcomes. The architecture allows the robot to identify risks before the actual interaction with the child and to propose safety measures in order to eliminate the consequences, both psychological and physical, that may arise for the child.

3. Materials and Methods

3.1. Child–Robot Interaction Taxonomy in Autism Interventions

Before the risk architecture is presented, a taxonomy of child–robot interaction in autism interventions is defined, which includes the elements that shape the interaction with robots in ASD interventions. The taxonomy was developed in order to underline the needs for the implementation of robots in autism interventions and to clarify the context of interaction.
In [18], the design and application criteria of robots in autism interventions are included in three categories. Here, we integrate some of these criteria in a taxonomy that better reflects the framework of interaction between robot and child. The taxonomy consists of nine categories, and each of them includes some parameters that shape the interaction. Figure 1 depicts the present taxonomy that we discuss in detail below.
The proposed child–robot interaction taxonomy includes the following categories:
  • Task: the child’s deficits that are going to be developed during an interaction;
  • Environment of interaction: the setting in which the interaction could take place;
  • Type of interaction: the relationship between child, robot, and therapist and the duration of it;
  • Robot appearance: different types of robots that are being used in an interaction;
  • Roles of robot: the potential roles that a robot can take during the interaction;
  • Interaction modalities: means that a robot uses for interaction with the child;
  • Roles of therapist: the acts that a therapist may perform during interaction;
  • Interaction stage: the phases that an interaction contains;
  • Risks: possible risks that may or may not occur during the interaction with the robot.
The categories are explained in the following subsubsections.

3.1.1. Task

Before intervention starts, the goals of interaction and the skills of the child that are going to be developed should be specified. Every interaction should be task-dependent, defining important parameters that will aid therapists for the implementation of the intervention and the necessary evaluations. Furthermore, task description provides high-level, sufficient information for the parents in order to comprehend the overall aim of the interaction.
Autism interventions should include specific tasks that will be integrated by the robot, and each task should aim at the development of a deficit area of the child. The most common task in ASD child–robot interactions is the development of the child’s social skills, which will permit participation in social activities and engagement such as play, developing interpersonal skills, joint attention abilities, and elaboration of appropriate behavior. Another interaction task is the improvement of emotional skills where the child will be trained to express their emotions, while they will recognize the emotions expressed by other peers. Imitation tasks promote the mimicry of actions in which the child learns how to do what others do, developing an understanding of actions and intentions of others. Communication tasks promote the child’s ability to convey information to others by the use of verbal or non-verbal means such as gestures, requests, expressions, and other cues [12].

3.1.2. Environment of Interaction

SARs tend to be implemented for specific purposes such as the detection, diagnosis, and evaluation of ASD [11]. For diagnostic purposes, SARs have been utilized in clinical and therapy environments, where the robot can be supervised in a more controlled way. A number of studies have examined the use of robots in settings away from the clinic, specifically at home [37,38]. This is clear evidence that robots are moving beyond clinical settings in order to extend the beneficial outcomes of interaction outside them, implemented in familiar places in which children feel safe and relaxed [38].
Besides clinics and homes, interactions with robots could be organized at schools, where children are socialized. Research in [39] explores the beneficial implementation of robots in school environments, indicating that social and communication abilities could be transferred to interactions with humans because children feel at ease and comfortable.

3.1.3. Type of Interaction

The third category describes the relationships that could be formed in the intervention plan. Figure 2 represents the relationship cases that could be formed during child–robot interactions in autism interventions. The shapes are adopted from [40], and further details are given below.
Figure 2a depicts the most common interaction type, a triadic interaction, in which the child (“C”), robot (“R”), and therapist (“T”) interact with each other in order to complete a task.
We further divide the roles of the therapist and robot into active and passive. In an active role (green dotted arrow), the therapist or robot could intervene in the interaction by providing feedback, reward, or other social cues that will encourage the child, while in a passive role (red dotted arrow), the therapist or robot just observes the interaction without interfering in it. These roles could be used when specialists want to monitor or assess the behavior of the child under various circumstances and conditions. Figure 2b represents a case where the child and the robot interact exclusively (blue parallelogram) while the therapist observes the interaction (dotted arrow) and could have an active or passive role with the child. In Figure 2c, the child interacts with the therapist, but this time the robot observes the child with an active role. Figure 2d represents a case where the therapist interacts with the robot while the child observes the interaction with an active role. This means that the child has the ability to intervene in the interaction when they notice something wrong in the behavior or communication between robot and therapist. At the same time, a second therapist observes the child in an active role, interfering when necessary.
In Figure 2e, the specialist and child create an exclusive interaction while the child develops an external interaction (right arrow) with the robot. For example, the specialist describes a task to the child in which the child must transmit orders for execution to the robot.
Figure 2f,g depicts two cases of triadic interaction. In the former, the child and therapist interact independently with the robot, meaning that each participant could create their own interaction relationship, while in the latter, the child and robot interact independently with the therapist. The double-headed arrows with the envelope symbol indicate that the child could exchange messages between therapist and robot, forming a secondary interaction and creating collaborative relationships that will help the child.
Figure 2h depicts a different kind of relationship where the child interacts with the robot exclusively by observing an object (“O”). The therapist supervises the whole interaction while preserving an active role with the child in order to mediate him/her when help is needed.
The abovementioned cases indicate the relationships that could be developed during child–robot interaction in ASD interventions. Active and passive roles could change according to the requirements of each intervention. When exclusive interactions are formed, they are observable either by the specialist or by the robot. Furthermore, different relationships could be formed, including another child with ASD or a typically developing child that will interact with the robot, but these relationships require further examination and are thus left for future work.

3.1.4. Robot Appearance

The robot’s appearance should be determined before proceeding further to the interaction. Different kinds of robots may be used in the interactions, with some robots having a more anthropomorphic appearance, others having animal-like features, and others having more mechanical parts or mascot-type characteristics that allow high personalization. In [7], whether a robot’s appearance influences the interaction with ASD children is investigated. In [8], whether robot features and the suitability of their design impact autism interventions is explored, and researchers in [12] analyze whether robots are ready to participate, and to what extent, in autism interventions.
Features, abilities, and other physical characteristics have an impact not only on the nature of the interaction but on the robot itself. The appearance must be combined with its abilities since it determines how well the child will generalize the acquired skills [5].

3.1.5. Roles of Robot

The primary role of SAR in autism interventions is to aid children during the interaction, and this role could be achieved through various social means, such as providing feedback or assistance when needed. Motivating children often implies a positive reward when they execute a task or behave appropriately in a situation [4]. Provoking social behaviors in children means that the robot, during the interaction, performs actions aiming at the development of certain behaviors and reactions of the child. SARs in autism interventions usually can be used as enablers of interaction and communication between the child and another person (therapist, parent, etc.), developing specific abilities such as joint attention [8]. Moreover, SAR could track the behavior and reactions of the child during the intervention, giving insights to therapists that help them to learn more about the nature of ASD and the behavior of the child [11].

3.1.6. Interaction Modalities

In order for socially assistive robots to be implemented in autism interventions, different modes that allow communication and interaction with ASD children must be used. The interaction modalities, on one hand, have to be relevant to the appearance of the robot, and on the other hand, they must be in accordance with the kind of task that will be developed.
Interaction modalities vary according to the communication channel that is used. Typically, these channels consist of speech and optical attributes of the interaction, and the signals produced should be cohesive and effective in order to maintain a sufficient level of interaction [41]. Therefore, the interaction modalities of SARs consist of vocal cues, speech, and sound, which belong to the hearing channel, and lights and expressions, which form the visual channel. The motion abilities of a robot belong to the kinetic channel, which contains all the possible movements a robot could perform. Multimodality refers to the ability of a robot to use a combination of two or more interaction modes for communicating with the child (i.e., sounds, speech, and motion). In addition, interaction modalities are of significant importance in the communication with the child, and they should be applied carefully in ASD interventions, conforming to the robot’s actual mechanical and computational abilities [42].

3.1.7. Roles of the Therapist

The therapist’s role during the intervention is very important, not only because they observe the interaction between child and robot, but also because they are partners of these children, assisting them to achieve significant progress in various areas. Therapists could empower children, through motivational or inspiring cues, to engage in the intervention plan and thus to establish strong interaction bonds with the robot in order to perform the necessary actions. Therapists during the interaction also control the robot and supervise its functionality in order to assure its safe operation and that it will not cause any harm to the child. Besides, ensuring the child’s safety, both psychological and physical, is the therapist’s top priority. Furthermore, it should be noted that therapists will not be replaced by the robot, but instead their role will be more influential than the robot’s, providing at the same time assistance and motivation to the child and enhancing the interaction quality.

3.1.8. Interaction Stage

The interaction between child and robot is divided into four phases. In the introduction phase, the therapist explains to the child the intervention that will be followed and talks about the robot that is going to be implemented. The conversation about the robot should be sufficient in order for the child to understand what the robot is and its abilities.
In the familiarization phase, the child is acquainted with the robot and explores its form and type. Typically, this phase is the first meeting between the child and the robot, and the main outcome of this stage is for the child to gain familiarity with the object, knowing its morphological characteristics and realizing that they will work together.
In the main interaction phase, the child interacts directly with the robot, performing different tasks and developing specific abilities. How well the child interacts and communicates with the robot depends on the results of the familiarization phase.
The outcome phase is the final stage of the interaction where the specialist assesses the outcomes, evaluates the performance of the child, and elaborates the next intervention plans based on the results.
Every interaction phase is very important in the intervention plan, and its goals should be achieved in order for it to be considered successful. The relationship between child and robot is built from the very first steps, before the physical implementation of the robot, while the overall performance of the child depends on how well each phase achieves its own outcomes.

3.1.9. Risks

The final category of the taxonomy covers the possible risks of the interaction. Due to the vulnerable nature of these children, robots should be implemented carefully, and specialists must be aware of the risks that may occur during the interaction.
In the taxonomy, risks fall into two categories: psychological and physical. Psychological risks involve all the situations that elicit emotions such as fear, anxiety, threat, discomfort, distraction, or any other related condition. Physical risks contain physical harms caused by collision with the robot.
Risks of both categories could originate from different sources. Psychological risks often occur due to morphological characteristics of robots, and they may be elicited by the communication channels that have been described above or due to changes in the environment. Physical risks occur due to the design properties of robots such as speed, force, power, and other features related to software/hardware integration [43].
Consequently, knowing the risks that may occur during the interaction may benefit the interaction outcomes and allow specialists to design more precise and personalized interventions. For this reason, a risk-based safety architecture is proposed in order to explicitly explore and define the possible risks during child–robot interaction; it is presented in more detail in the next section.

3.2. Modelling the Interaction

In this section the methodology for the development of the proposed architecture is presented. First, using a sequence diagram of Unified Modeling Language (UML), a model of child–robot interaction is created in order to represent the taxonomy that was presented earlier and to describe the interaction, focusing on the sequence of actions that needs to be carried out. This approach will be helpful to gain a better understanding of the ASD child–robot interaction in which the architecture is going to be implemented.
Before the actual use of a robot in autism interventions, the structure of interaction should be defined and completely understood. Therefore, a sequence diagram is presented in Figure 3 in order to depict the information and the actions that are carried out during the interaction between child and robot.
The abovementioned sequence diagram presents the interaction between child, robot, and therapist over time during an interaction task in the ASD intervention. The sequence diagram describes a conversation between the child and robot, initiated by the therapist, in which they ask each other questions and give answers. According to the taxonomy, this task could be considered communicational or social, in which the child develops interpersonal skills and communicative abilities. The type of interaction according to the taxonomy is triadic, and the four stages of interaction can be distinguished. In the introduction stage, the therapist explains to the child the intervention and interaction plan. In the familiarization phase, the child and the robot meet and greet one another. In the interaction stage, the conversation between the child and robot is implemented, and in the outcome phase, the interaction is over and the required evaluations are performed. The interaction modality that both child and robot use is speech, since they have only verbal communication, and the roles of the robot can be distinguished as assistive (in a broader perspective) and as a mediator (in particular), since it enables communication with the child. On the contrary, the therapist acts as a facilitator and observer of the interaction between child and robot.
The presented sequence diagram indicates that it is important to model the interaction from its first stages, extracting a lot of information both for its nature and the relationships that could be formed during it. In order to understand the risks according to the taxonomy, a risk assessment process is being proposed in the next section.

3.3. Designing the Safety Architecture

While the implementation of SARs is quite promising, significant concerns may arise when they are used in settings outside clinics, the most notable being their ability to adequately prove that they are safe for the child, specialists, and parents. By demonstrating that a robot is safe, the cognitive load of the human (specialist, parent, etc.) is reduced, and they are able to use the robot effectively. According to the taxonomy in Section 3.1, the risks are divided into two categories, and consequently, a risk assessment process should be performed to gain an understanding of those risks. Therefore, the process of risk assessment performed by the robot is described first, and then the design of the safety architecture is discussed.

3.3.1. Risk Assessment Process

The risk assessment process will be performed by the robot in order to detect possible sources of risk that should be eliminated during the interaction with the child. The robot begins by analyzing the environment and detecting objects or persons whose location could distract the child from the interaction or constitute, according to the taxonomy, potential psychological harms. The robot performs the risk assessment in real time in order to determine whether an action is safe. If the action is safe, the robot announces that it will proceed with the interaction with the child; if the action is unsafe, the robot states the reason why it will not perform the action and proposes a countermeasure to mitigate the risk from a predefined list of specific countermeasures. The robot will not proceed with the interaction unless the countermeasure has been implemented. Figure 4 presents the risk assessment process performed by the robot using a UML activity diagram.
In the abovementioned diagram, the risk assessment process can be divided into two stages. In the first stage, the robot is introduced to the child, and the specialist chooses an action, based on the intervention plan, that the robot will perform during the interaction. Then, the robot tries to detect the child in order to become engaged in the intervention and start the interaction.
When the robot detects the child, the second stage begins, where it analyzes the environment to find possible distracting or harmful objects. If the robot assesses the action as unsafe, it proposes a countermeasure from a predefined list and starts the environment analysis again until it is assured that the countermeasure has been applied and the child is safe. If the action is assessed as safe, the robot proceeds to the task planning, where the interaction is executed. Finally, the robot checks whether the child has abandoned the interaction by trying to detect them again. If the child has given up on the interaction, the robot stops; if the child is still engaged, the robot continues to interact.
This process will be executed by the robot every time a new task is selected by the therapist. We can consider the two stages as preliminary and decision-making, where in the former all the necessary actions are taken by the specialist in order to prepare the interaction, while in the latter the robot performs all the required actions for safe interaction and takes the appropriate decisions.

3.3.2. Safety Architecture for Autism Interventions

The safety architecture integrates the risk assessment process described above and follows the sense–plan–act approach. Furthermore, the architecture comprises different modules, each of which includes submodules for the execution of the required actions. Figure 5 presents the safety architecture, which is discussed further below.
The evaluation module assesses the context of the situation using data that were received from the sensor module. The evaluation module assesses the state of the child based on different variables and includes three submodules: engagement, task definition, and environment analysis. The engagement submodule appraises the child’s participation in the current interaction task, and the robot tracks the child and assesses if it is in its vicinity. In the task definition submodule, the therapist selects a task based on those that are defined and agreed to, and the robot changes from the standby mode to the defined task mode. The environment analysis module is used to evaluate the environment of the interaction. The robot tracks the environment and checks if there are any objects that could be disturbing to the child or if there are other people that could cause anxiety.
In the engagement evaluation submodule, the robot collects data from its sensors, which may be located on the head, hands, or legs of the robot. Once one of the robot’s sensors is triggered, it can begin the evaluation of the environment in order to detect possible distracting elements.
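As an illustration of how the evaluation module and its three submodules could be organized in software, the following minimal Python sketch groups them behind a single interface; all class names, method names, and sensor-data keys (EngagementEvaluator, distractors, "extra_humans", etc.) are assumptions made for this example and are not part of the actual controller.

import math  # not strictly needed here; included so the sketch stands alone

class EngagementEvaluator:
    """Appraises whether the child is engaged, based on sensor data."""
    def child_engaged(self, sensor_data):
        # True if the child has been detected in the robot's vicinity
        return bool(sensor_data.get("child_detected", False))

class TaskDefinition:
    """Holds the task selected by the therapist; the robot leaves standby once a task is set."""
    def __init__(self):
        self.current_task = None
    def select(self, task_name):
        self.current_task = task_name

class EnvironmentAnalysis:
    """Flags objects closer than 1 m to the child and unexpected persons."""
    def distractors(self, sensor_data):
        objects = [o for o in sensor_data.get("objects", [])
                   if o.get("distance_mm", 10000) <= 1000]
        humans = sensor_data.get("extra_humans", [])
        return objects, humans

class EvaluationModule:
    """Groups the engagement, task definition, and environment analysis submodules."""
    def __init__(self):
        self.engagement = EngagementEvaluator()
        self.task = TaskDefinition()
        self.environment = EnvironmentAnalysis()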
When the robot completes the evaluation, it proceeds to the real-time assessment module in which it classifies the interaction, based on the environment analysis, as safe or not. If the interaction is classified as safe, the robot proceeds directly to the task module. If the interaction is unsafe, the robot proposes a countermeasure from a predefined list. The countermeasure list contains a set of possible measures that can mitigate the psychological harm of the child and will be developed according to the specialist’s recommendation. For the representation of a possible countermeasure the Backus–Naur form technique is used [43]. The countermeasure rule for an unsafe interaction is
<Rule1> ::= <emotion_recognition> : IF child_detected | object_distance ≤ 1000 mm THEN “put_object_aside”
In this rule the countermeasure is to put the detected object aside if it is less than 1 m from the child, and the emotion_recognition is the task that has been selected by the therapist for the intervention. Below, Algorithm 1 describes the assessment process for a chosen task, and Algorithm 2 presents the process of proposing a countermeasure.
Algorithm 1. RiskAssessmentProcess.
while camera = true do
    if child_detected() = true and task_selected() = true then
        if object_distance ≤ 1000 mm or human_detected() = true then
            interaction_status ← unsafe
            CountermeasureProposal()
        else
            interaction_status ← safe
            TaskExecution()
        end if
    end if
end while
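A possible Python rendering of Algorithm 1 is sketched below. The helper callables passed as arguments (camera_on, child_detected, task_selected, object_distance_mm, human_detected, countermeasure_proposal, task_execution) are assumed placeholders for the corresponding perception and planning modules and do not appear in the published code.

def risk_assessment_process(camera_on, child_detected, task_selected,
                            object_distance_mm, human_detected,
                            countermeasure_proposal, task_execution):
    # Mirrors Algorithm 1: loop while the camera is active
    while camera_on():
        if not (child_detected() and task_selected()):
            continue
        # A distracting object within 1 m, or an unexpected person, makes the action unsafe
        if object_distance_mm() <= 1000 or human_detected():
            print("interaction status: unsafe")
            countermeasure_proposal()
        else:
            print("interaction status: safe")
            task_execution()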
Algorithm 2. ProposeCountermeasure.
actionCompleted ← false
while actionCompleted = false do
    restrictions ← RobotState()
    disturb ← DisturbanceState()
    if action_selection() = true then
        if interaction_status() == unsafe and object_distance ≤ 1000 mm then
            SetActuatorsSpeedSlow() and propose “put object aside” or “remove humans”
            if restrictions include RobotUnstable then
                return “increase distance from child”
            end if
            if disturb include Obst_Dist and Human_Dist then
                return “remove objects or humans”
            end if
        end if
    end if
end while
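For illustration, the countermeasure rule of Rule 1 and the lookup performed by Algorithm 2 could be encoded as data and evaluated as in the hedged Python sketch below. The rule table, its keys, and the reading of the rule’s condition as “child detected and object closer than 1 m” are assumptions made for this example only.

COUNTERMEASURES = [
    {
        "task": "emotion_recognition",
        # Interpretation of <Rule1>: child detected and object closer than 1 m
        "condition": lambda s: s["child_detected"] and s["object_distance_mm"] <= 1000,
        "measure": "put_object_aside",
    },
]

def propose_countermeasure(state):
    # Return the first measure whose task and condition match the current interaction state
    for rule in COUNTERMEASURES:
        if rule["task"] == state["task"] and rule["condition"](state):
            return rule["measure"]
    return None

# Example: an object detected 800 mm from the child during an emotion recognition task
print(propose_countermeasure({"task": "emotion_recognition",
                              "child_detected": True,
                              "object_distance_mm": 800}))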
The task module includes the task separation submodule, where the robot integrates the results of the real-time assessment and separates the main task into primary tasks based on its abilities; this is executed once the real-time assessment is accomplished. These primary tasks can then be executed through the robot’s main controller. Additionally, in the task module the robot waits for a certain threshold to determine whether the child has withdrawn from the interaction. If so, it changes to standby mode for safety reasons. Algorithm 3 presents this approach.
Algorithm 3. TaskDetermination.
actionCompleted ← false
while actionCompleted = false do
    if child_detected() = true then
        if FaceOrientation() = true then
            if ChildInVicinity() = true then
                return task_separation()
            else
                wait(10)
                return StandbyMode()
            end if
        end if
    end if
end while
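A corresponding Python sketch of Algorithm 3, using a simple timeout in place of wait(10), might look as follows; the detection helpers and module callables are assumed placeholders for the robot’s perception and task modules.

import time

def task_determination(child_detected, face_oriented, child_in_vicinity,
                       task_separation, standby_mode, timeout_s=10):
    # Mirrors Algorithm 3: keep checking for the child until a timeout expires
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if child_detected() and face_oriented() and child_in_vicinity():
            # Child still engaged: split the selected task into primary tasks
            return task_separation()
        time.sleep(0.5)
    # Child withdrew from the interaction: switch to standby mode for safety
    return standby_mode()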
When the task separation is finished, the architecture passes these primary tasks into the robot’s behavior in the action module. Firstly, the sequence of the robot’s actions is determined by the behavior submodule, and then this behavior is transferred to the action submodule, where movements and other behaviors of the robot are controlled. The final step, when the behavior of the robot is determined, is the interaction with the child, and when it is completed the whole process of the architecture starts from the beginning again.

3.3.3. Simulation Testing

In order to verify that the robot fits its purpose in the environment of the interaction and fulfils the desired functionality, we conducted simulation testing, allowing the components and the overall architecture to be integrated into the simulator. It should be noted that during the Coronavirus pandemic several clinics postponed their activities; thus, interventions for children with autism were delayed. Due to this unprecedented situation, it was impossible to test the architecture on a real robot. Therefore, simulations of the robot’s functionality were developed, which allow the robot’s control code to be tested while receiving information about its complete behavior. The architecture was developed and tested on a Windows PC with an i5-3230M 2.60 GHz processor and 4.00 GB of RAM. The time needed to complete each dialogue simulation was approximately 30 s, depending on the typing speed of each user, while the time needed for the complete simulation to execute was about 1 min. Furthermore, the complete code of the architecture with the related simulations and images of the simulation environment is available online in a GitHub repository (see Supplementary Materials).
First, the interaction between child and robot was modelled according to the sequence diagram previously presented. We used Choregraphe for simulating and programming the NAO robot’s behavior and for developing the interaction between the child and robot. Figure 6 shows the Choregraphe interface.
Three different interaction behaviors were created, one for each stage in the presented sequence diagram of Figure 3, using QiChat script language. Figure 7, Figure 8 and Figure 9 present the syntax used for the development of dialogue between child and robot, and Figure 10 shows the dialogue input and output.
In Figure 7, lines 17 to 31 show the dialogue flow between child and robot. Firstly, the child greets the robot, and the robot responds. Then, they develop a basic acquaintance, with the robot asking the child’s name, age, place of residence, grade in school, and favorite color. In all these questions, the robot responds to the child, aiming to elicit their interest in the conversation. Lastly, the robot asks the child whether they want to talk further, and according to their answer the robot proceeds to the next stage.
Figure 8 shows the child–robot conversation in the main interaction stage. First, the child initiates the topic of discussion, and the robot begins the dialogue by asking some questions about it. The dialogue flow, from line 17 to 31, continues according to the child’s answers and the robot’s corresponding questions. At some point, at line 32, the child can change the topic of discussion, and a new dialogue begins from line 35 to 44. At the end, in lines 45–47, the robot prompts the child to rest and, according to their answer, continues correspondingly.
Figure 9 depicts the dialogue between child and robot at the final stage of the intervention. First, the child says whether he/she liked the interaction, and the robot answers back (line 16). Then, the robot asks the child if they would like to talk further (line 17), and whatever the answer, the robot encourages the child to talk about it in a new meeting (lines 18–20). Lastly, in lines 21–22, the child says goodbye, and the robot responds and takes a sitting position, declaring that the interaction is over.
In Figure 10, the conversation between child and robot is depicted, where the child (green) sets the topic and the robot (blue) answers accordingly. It should be noted that Choregraphe simulations do not have the ability to recognize speech, and instead we used the dialogue input–outputs to represent the conversation flow between child and robot.
The aforementioned scenarios give insight into the possible dialogues and cases that could be formed while the child interacts with the robot. When the child answers a question, the robot is pre-programmed to give the appropriate answers or ask the appropriate questions. Additionally, the child is informed beforehand by the specialist about the nature, the topic, and the objectives of the interaction in order to know what to do and how. Thus, in the corresponding interaction with the robot, the child is guided by the specialist throughout the intervention and is aware of the actions that should be performed. In addition, the abovementioned scenarios constitute an example of the sequence that is followed during real interactions, as provided by the specialists.
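Since the QiChat scripts themselves appear only in the figures, the sketch below gives a rough Python analogue of such a pre-programmed, keyword-triggered exchange driven by dialogue inputs and outputs. The keywords and replies are invented for illustration and do not reproduce the actual intervention dialogue.

REPLIES = {
    "hello": "Hello! What is your name?",
    "school": "That sounds nice. What is your favorite lesson?",
    "goodbye": "Goodbye! I enjoyed talking with you.",
}

def robot_reply(child_utterance):
    # Return the first canned reply whose keyword appears in the child's input
    text = child_utterance.lower()
    for keyword, reply in REPLIES.items():
        if keyword in text:
            return reply
    return "Can you tell me more about that?"

while True:
    said = input("Child: ")            # dialogue input, as in the Choregraphe dialog pane
    print("Robot:", robot_reply(said))
    if "goodbye" in said.lower():
        break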
Next, the simulation of the proposed architecture was developed, in which we modelled the behavior of the robot using its libraries and Python scripts. In the architecture case, the robot explores the environment and indicates an item that should be removed. Once the item has been taken away, the robot explores the environment again until everything is correct and the interaction is safe. Figure 11 and Figure 12 show the beginning of the area exploration, where NAO prompts the user to touch its head in order to start.
In Figure 13, NAO raises its right hand to show a distracting or dangerous item, while at the same time it says that this item should be removed from the area.
Figure 14 shows the log info and the dialogue between robot and human during the environment analysis. In the log pane, the instances of the robot’s behavior, executing in Python, can be viewed, while in the dialogue area the conversation between the human and robot about the removal of the item, the messages from the robot, and the assessment result can be retrieved. Furthermore, Figure 15 depicts the complete structure of the architecture in the Choregraphe environment.
In the above architecture, the interaction begins with the robot in a standing position, which means that whatever its previous position is, it will always stand up to begin the interaction. Next, it prompts the user to touch its head in order to understand that the human is present. This sensing mode was selected because the virtual robot lacks camera simulation. Once its tactile sensor is triggered, it begins the exploration of the area. Python scripts control the robot’s walking and turning towards a direction. With these scripts, we tested its ability to walk to a predefined point by setting the x and y coordinates: the distance is computed as the square root of the sum of the squares of these coordinates, while the angle of turn is computed using the atan2 function. When the robot tracks an object, it simply raises its hand and points out that this item should be removed. When the item is removed, the robot continues the exploration, and when it finishes, it says whether the interaction is safe or not, and its behavior is terminated.
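The geometric step described above can be written compactly as in the short Python sketch below; the final comment about passing the result to the robot’s walk command is an assumption about how such values would be used and is not part of the published scripts.

import math

def walk_parameters(x, y):
    # Distance to the target point: square root of the sum of the squared coordinates
    distance = math.sqrt(x * x + y * y)
    # Turning angle towards the target, computed with atan2
    angle = math.atan2(y, x)
    return distance, angle

d, theta = walk_parameters(0.5, 0.3)
print("distance = %.2f m, turn = %.2f rad" % (d, theta))
# Values like these would then be fed to the robot's walk command in the Python
# scripts (e.g., an ALMotion moveTo call), which is assumed here and not shown.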
With these simulations, an effort was made for the robot’s code to be tested, its complete behavior to be explored, and its actions to be modelled in order to gain an understanding of its functionality prior to deployment in real interventions. Although this simulation is a starting point in the development of the interaction, future prospects and limitations still exist, which will be presented in the last section of the paper.

4. Conclusions and Future Work

SARs are an effective means of facilitating the development of social skills in children with autism. As the deployment of these robots expands beyond clinical settings, the major concern is to promote safety of interaction without reducing the robot’s abilities, while at the same time retaining the engagement of the child.
Following the sense–plan–act approach, we proposed a safety architecture in which the robot performs an evaluation of the interaction environment and, based on that, determines whether the interaction is safe. The function of the architecture and some of its algorithms have also been described.
In the aforementioned simulations, both the sequence diagram and the architecture reflect the proposed approach for safe child–robot interaction, expanding our previous work about the safety of SARs and their implementation in autism interventions. We modeled the interaction between child and robot, indicating the relationships that could be formed during an interaction task, and we tested the control code of the robot to ascertain its functionality. Furthermore, the developed simulations could be an alternative while interventions are currently suspended. In order for children not to be excluded from their intervention activities, a future prospect could be the interaction with a virtual robot through a simulated environment, where specific tasks and the objectives of the interaction could be pointed out by the specialist, considering the limitations of a virtual robot. In this way, children will continue their interventions, even far away from therapeutic centers, and new possibilities may arise in child–robot interaction. Despite this prospect, it should be noted that some limitations of the study still exist.
First, neither the camera nor the voice recognition module of the virtual robot was tested, and we could not simulate face detection algorithms or voice interaction between human and robot. Instead, we replicated them with the robot’s tactile sensors and dialogue inputs that imply human–robot interaction. With the presented simulations, we also managed to test both the control code of the robot and, to some extent, the appropriate behavior that could be implemented during interaction scenarios. Hence, the proposed architecture should be further tested on a real robot in order to ensure that it fits the robot and to verify its functionality. Furthermore, the robot interaction tasks for children with ASD should be further enriched and tested with participants in intervention sessions, which will reveal the advantages and disadvantages of the architecture.
In the near future, we plan to work in three major domains: (1) the refinement of the architecture, combining existing control techniques and developing algorithms for evaluating the complete interaction between child and robot; (2) the evaluation of communication quality between child and robot, improving the functionality of the robot by perfecting the existing algorithms and developing new ones; and (3) the integration of modern approaches such as machine learning that will allow a real-time classification of the interaction based on previous results.

Supplementary Materials

The complete simulations with the control code are available online at https://github.com/ikatsanis/NAO_Simulations.git.

Author Contributions

Conceptualization: I.A.K. and V.C.M.; Methodology: I.A.K. and V.C.M.; Writing—original draft: I.A.K.; Writing—review and editing: V.C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in GitHub. Data can be found here: https://github.com/ikatsanis/NAO_Simulations.git.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The child–robot interaction taxonomy in autism interventions.
Figure 2. The relationships that could be formed during the child–robot interaction can be seen in the cases (a–h). Case (a) depicts the simplest form of interaction. Cases (b–d) present the roles of therapist, robot, and child during the interaction. Case (e) shows an external interaction. Cases (f,g) form a cooperative relationship, while case (h) depicts an object observation.
Figure 3. Sequence diagram of the presented use case.
Figure 4. Activity diagram presenting the risk assessment process performed by the robot.
Figure 5. The safety architecture for autism interventions.
Figure 6. Example of Choregraphe interface.
Figure 7. Dialogue between child and robot at introduction stage.
Figure 8. Dialogue between child and robot at main interaction stage.
Figure 9. Dialogue between child and robot at the final stage.
Figure 10. Child–robot dialog inputs and outputs.
Figure 11. NAO prompts for sensing in order to start.
Figure 12. NAO starts the area exploration.
Figure 13. NAO shows a distracting or dangerous item.
Figure 14. Log viewer and dialog pane of the simulated architecture.
Figure 15. The complete structure of the simulated architecture in the Choregraphe environment.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
