1. Introduction
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by persistent difficulties in social communication and interaction, alongside restricted and repetitive patterns of behavior, interests, or activities, according to the American Psychiatric Association [1]. Children with ASD may experience challenges in establishing and maintaining social relationships with parents, peers, teachers, therapists, and healthcare professionals. These difficulties can manifest in reduced eye contact, limited use of gestures or facial expressions, difficulties with turn-taking during conversation, or lack of interest in shared activities [2].
Given the importance of developing interventions that support the social development of children with ASD, researchers and practitioners have increasingly turned to a range of technological approaches. Technologies such as serious games and social robots have been actively developed and tested to support children with ASD, showing considerable promise in enhancing social, cognitive, and behavioral skills. Computer-, tablet-, and mobile-based serious games are commonly designed to improve areas such as social interaction, emotional regulation, attention, and executive functioning. These platforms often incorporate interactive and adaptive features to maintain engagement and provide personalized learning experiences [3,4,5]. On the other hand, social robots—including systems like MARIA T2—are sometimes combined with serious games and used in therapeutic sessions, where they interact with children through speech, gestures, and projected games. This multimodal interaction helps provide structure, motivation, and positive reinforcement in a predictable and low-anxiety environment [4,6,7].
Effectiveness studies have reported positive outcomes: children with ASD who engage with these technologies often show greater improvements in social communication, eye contact, and behavioral regulation compared to those receiving traditional face-to-face interventions [3,8,9,10]. Additionally, serious games and social robots contribute to cognitive and emotional development, supporting abilities such as sustained attention, memory, and emotion recognition—often measured through tools like eye-tracking or performance analytics [4,5]. One of the key benefits observed is their ability to enhance motivation and sustained participation, making therapeutic activities more accessible, interactive, and enjoyable for both children and the professionals involved [4,6,7].
In particular, robot-mediated interventions (ROMI) have demonstrated significant potential for supporting children with ASD. These children often show heightened interest in technology, and robots can act as simplified, predictable social agents that foster engagement and facilitate social interaction in a less overwhelming manner [11,12,13,14].
In this context, this article presents the design of JARI (Joint Attention through Robotic Interaction), a robot capable of expressing emotions through facial expressions, movement, and the use of light and color via LED displays on its body. JARI’s design adopts a non-recognizable zoomorphic form—resembling neither a specific animal nor a humanoid figure—to avoid the rejection common in children with sensory sensitivities and the discomfort related to the Uncanny Valley Effect [15]. Its external appearance includes soft and textured materials that encourage tactile interaction. The robot also includes customizable clothing made from different fabrics and colors, offering additional sensory stimulation and personalization options during therapy or educational sessions.
The paper is structured as follows. Section 2 presents a summary of robots designed for children with ASD, identifying relevant characteristics such as form, body and limb mobility, emotional expression capabilities, and types of sensory interaction. Section 3 describes the JARI robot in detail, including its mechanical structure, aesthetic design, hardware architecture, software architecture, and human–machine interface. Section 4 introduces a pilot study involving children diagnosed with ASD, conducted to validate the robot’s functional features and to explore how children respond to social interactions. Section 5 presents findings from two evaluation scenarios: one involving individual interaction with children diagnosed with moderate ASD and limited or no verbal communication, and another involving group interaction with children who require low levels of support and demonstrate high verbal communication abilities. Finally, the article concludes with a discussion in Section 6 and closing reflections in Section 7.
2. Background
The development of robots designed for interaction with children with ASD has advanced significantly in recent years, with approaches varying depending on their application and design. Within this field, robots can serve different roles, ranging from therapeutic devices to educational or assistive tools.
Humanoid robots such as Kaspar [16], NAO [17], QT-Robot [18], and Zeno [19] have been used in therapeutic settings to facilitate communication and social interaction in children with ASD. These robots are designed with a social interface that allows them to express basic emotions through facial movements and gestures. However, recent studies have questioned the effectiveness of these systems in emotion recognition. Ref. [20] argues that while these robots can act as mediators in communication between children with ASD and neurotypical adults, their ability to enhance emotional recognition is limited, as human emotional interaction depends not only on fixed facial expressions but also on situational context and multimodal cues.
On the other hand, the uncanny valley is a phenomenon widely discussed in the design of social robots. According to Mori’s hypothesis [15], as a robot acquires more human-like characteristics, its acceptance increases until it reaches a point where excessive similarity generates a sense of rejection or discomfort. Ref. [20] points out that this effect is particularly relevant in robots like Kaspar and Zeno, whose humanoid appearance, combined with limited facial movements, may generate a sense of eeriness in users. In contrast, robots not designed for social interaction, such as Spot, developed by Boston Dynamics, have a clearly mechanical appearance and do not produce the same level of cognitive dissonance in users.
Recent studies have explored how the Uncanny Valley Effect (UVE) manifests in children, as their perception of robot appearance may differ from that of adults. Research suggests that while younger children may not experience this effect as strongly, it tends to emerge in older children, particularly in girls around the age of ten, who show a drop in perceived likability and acceptance when exposed to highly humanlike robots [21].
Additional contributions to the field emphasize the importance of robot morphology and its alignment with the therapy phase. Tamaral et al. [22] propose a distinction between humanoid and non-humanoid robots, suggesting that abstract or non-biomimetic designs may be more appropriate in the initial stages of therapy due to their simplicity and lower cognitive load. As therapy progresses, more complex humanoid forms can be introduced gradually. This staged approach aims to reduce overstimulation and increase user comfort, especially in children sensitive to visual and sensory input. Their work supports the idea that robot topology must be adaptable, and that simplicity in design plays a key role in early engagement. In line with this, the authors present TEA-2, a low-cost, non-humanoid robot specifically designed for therapeutic use with children with ASD, featuring a modular structure, minimalistic aesthetic, and basic multimodal interactions—making it well suited for early-phase therapeutic contexts.
Beyond physical design, robots for children with ASD can also be oriented toward sensory experiences or therapeutic interventions. For example, robotic music therapy systems have been shown to be effective in modeling social behaviors and improving motor coordination in children with ASD [23]. Similarly, collaborative robots, such as the (A)MICO device, have been designed to facilitate interaction in structured environments, using multimodal feedback to enhance communication [24].
Robot Typologies and Design Approaches
Robots used in interventions with children with ASD vary widely in morphology, level of autonomy, and interaction modalities, reflecting diverse design philosophies. Humanoid robots such as Kaspar, NAO, QT-Robot, and Zeno are designed with anthropomorphic features that enable the imitation of human social behaviors, including gestures, speech, and facial expressions. These robots have been clinically validated in tasks such as emotional recognition, motor imitation, and social skills training, showing positive outcomes in both educational and therapeutic contexts [25,26,27].
In contrast, some studies advocate for the use of non-humanoid or ambiguously-shaped robots, arguing that lower visual complexity can promote sustained attention and reduce sensory overstimulation in children with ASD. Examples include Keepon, shaped like a snowman; Moxie, a cartoon robot; and Kiwi, a tabletop robot with an abstract form and simplified expressiveness [28,29]. The non-recognizable zoomorphic design aims to provide a middle ground—a friendly, organic shape that does not evoke any specific animal, increasing approachability without triggering pre-existing associations.
The literature also emphasizes the importance of adjusting a robot’s level of realism according to therapeutic goals: while more realistic robots may help children generalize learned behaviors to human contexts, less realistic designs create a more controlled and less intimidating environment [22,28]. The choice between humanoid and non-humanoid designs should therefore be made based on individual needs, usage context, and the intended pedagogical or therapeutic objectives.
To provide a clearer understanding of the physical and expressive diversity among social robots used in ASD-related interventions, Table 1 presents a comparative overview based on key morphological and functional characteristics. These include the robot’s form factor, mobility, articulation of head and neck, visual expression through eyes or screens, emotional expressiveness, and available sensory interaction modalities. This comparison highlights the wide spectrum of design choices, ranging from highly anthropomorphic platforms to abstract or character-like embodiments, each with distinct implications for therapeutic use and user engagement.
Based on this overview, it becomes evident that technical features such as degrees of freedom (DoFs) in movement, screen-based facial animation, and multisensory input channels are carefully chosen to support interaction with children with ASD. Many robots integrate expressive mobility—such as head tilts, arm gestures, or bouncing motions—as a way to maintain engagement and communicate emotions nonverbally. Visual expressiveness also varies: while some robots use mechanical or LED-based cues, others rely on high-resolution animated faces to portray nuanced emotional states, which may help children recognize and respond to affective signals.
Furthermore, sensory interaction plays a critical role in these designs. Robots often include cameras, microphones, and touch sensors to perceive user input, supporting dynamic two-way interaction. In some cases, speech recognition and reactive motion allow the robot to respond in real time, adapting its behavior based on the child’s actions or vocalizations. These features are particularly beneficial for children with ASD, as they create predictable and simplified social scenarios that reduce cognitive load while promoting social, emotional, and communicative development. Thus, the combination of mobility, sensory feedback, and expressive capabilities is essential to shape how effectively a robot can support intervention strategies for children on the autism spectrum.
3. Design of the JARI Robotic Platform
The JARI robotic platform was designed to support social interaction and emotional learning in children with ASD aged 6 to 8 years. The design integrates mechanical, electronic, and software components to ensure that the robot can effectively engage children and foster the development of socio-emotional skills. This section outlines the technical foundations of the platform, including its design requirements, mechanical structure, electronic architecture, and the teleoperation system used for real-time control.
3.1. System Requirements
The design focuses on facilitating social interaction, enabling emotional expression, and ensuring modularity for future enhancements. The main system objectives are as follows:
Engagement: The robot should attract and maintain the attention of children with ASD through expressive movements and dynamic lighting.
Emotion display: JARI must be capable of expressing a range of predefined emotions to promote emotional recognition and interaction. The current set includes the following nine emotions: happy, very happy, sad, very sad, disgusted, angry, fear, dubious, and neutral.
Robustness: The physical design must withstand frequent handling and occasional rough interactions typical in child-robot engagement.
Teleoperation: The system must allow real-time control through a web-based interface employing the Wizard of Oz (WoZ) paradigm [33].
Adaptability: The platform should support easy updates and modifications to expand or customize its capabilities.
Movement: The robot must be capable of neck articulations (nodding for affirmation, shaking for negation, and lateral flexion), as well as whole-body movements, including forward, backward, and rotational motions.
Lighting system: Emotions should also be expressed via color changes. Five colors are defined: yellow (happiness), blue (sadness), red (anger), green (disgust), and light pink (anxiety). A configuration sketch of this emotion and color set follows the list.
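To make the relationship between the emotion set and the lighting palette concrete, the sketch below encodes both requirements in Python. The RGB values, the reuse of a base color for intensity variants, and the neutral fallback are illustrative assumptions; only the nine emotion names and the five color–emotion pairs come from the requirements above.

```python
# Illustrative encoding of JARI's emotion set and lighting palette.
# RGB values and the fallback behavior are assumptions of this sketch.

EMOTIONS = [
    "happy", "very_happy", "sad", "very_sad", "disgusted",
    "angry", "fear", "dubious", "neutral",
]

# The five defined emotion colors; "fear" reuses the light pink
# specified for anxiety, matching how the results report it.
EMOTION_COLORS = {
    "happy":     (255, 220, 0),    # yellow
    "sad":       (0, 80, 255),     # blue
    "angry":     (255, 0, 0),      # red
    "disgusted": (0, 160, 60),     # green
    "fear":      (255, 182, 193),  # light pink
}

def led_color(emotion: str) -> tuple[int, int, int]:
    """Return the LED color for an emotion, defaulting to neutral white."""
    base = emotion.removeprefix("very_")  # "very_happy" -> "happy"
    return EMOTION_COLORS.get(base, (255, 255, 255))
```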
3.2. Mechanical Design
Articulated head movement mechanism: The articulated head mechanism has been designed for robustness and durability, capable of withstanding the abrupt movements generated by users, particularly children, during direct interactions with the robot. This mechanism is based on a three-degree-of-freedom (3 DoF) motion platform consisting of two active RRS (Revolute-Revolute-Spherical) legs and one passive RS (Revolute-Spherical) leg. This configuration allows roll movements (approximately 50° of range) and pitch movements (approximately 70° of range). The platform is built from high-strength materials such as aluminum, steel bars, and high-density polyurethane, selected to ensure stability and resistance to external forces, preventing interactions from compromising the system’s performance. The platform is actuated by two MG946R servomotors from Tower Pro (Taipei, Taiwan), each with a torque of 13 kg·cm, responsible for the roll and pitch movements illustrated in Figure 1. A third MG946R servomotor, with the same 13 kg·cm torque, rotates the platform on its vertical axis, achieving yaw motion (approximately 180° of range). All servomotors are driven by Pulse Width Modulation (PWM) signals generated by an Arduino Nano microcontroller from Arduino (Monza, Italy), ensuring precision and real-time response.
Mobile unit: The mobile base, shown in Figure 2a, uses a differential drive system powered by two DC motors with integrated gear reduction. This configuration enables forward/backward and rotational movements. Each motor delivers a peak speed of 175 RPM and a torque of 1.1 kg·cm under load. The chassis has a low center of gravity to improve balance during motion, while rubber wheels ensure adequate traction across various surfaces for smooth navigation (see the control sketch at the end of this subsection).
External body shell: The robot’s body shell is 3D printed in polylactic acid (PLA), while the limbs, neck, and ears are coated with food-grade silicone to provide a safe and comfortable tactile experience, as presented in Figure 2b. The design includes strategic internal mounting points to house electronic components and mechanical subassemblies, optimizing space and facilitating assembly. Surface finishing techniques, such as sanding and chemical smoothing, were applied to remove sharp edges and achieve a uniform finish, improving both safety and aesthetics.
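As a minimal illustration of how the three neck servomotors and the differential base might be commanded from the high-level side, the following Python sketch sends plain-text messages over a serial link to the Arduino Nano. The message format, port name, baud rate, and the symmetric angle limits are assumptions of this sketch; only the actuators and their approximate ranges come from the text above.

```python
# Hypothetical Raspberry Pi-side helpers for the neck platform and
# differential base described above. The serial protocol is illustrative.
import serial  # pyserial

link = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

# Half-ranges derived from the approximate ranges above:
# roll ~50 deg, pitch ~70 deg, yaw ~180 deg (assumed symmetric about zero).
LIMITS = {"roll": 25.0, "pitch": 35.0, "yaw": 90.0}

def _clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def set_head(roll: float, pitch: float, yaw: float) -> None:
    """Clamp each axis to its mechanical range and send one head command."""
    r = _clamp(roll, LIMITS["roll"])
    p = _clamp(pitch, LIMITS["pitch"])
    y = _clamp(yaw, LIMITS["yaw"])
    link.write(f"HEAD {r:.1f} {p:.1f} {y:.1f}\n".encode())

def drive(linear: float, angular: float) -> None:
    """Standard differential-drive mixing: wheel speeds in [-1, 1],
    which the Arduino is assumed to map onto motor PWM duty cycles."""
    left = _clamp(linear - angular, 1.0)
    right = _clamp(linear + angular, 1.0)
    link.write(f"DRIVE {left:.2f} {right:.2f}\n".encode())
```

For example, drive(1.0, 0.0) commands straight-ahead motion and drive(0.0, 0.5) rotates the base in place, matching the forward/backward and rotational motions listed in the requirements; a nodding gesture could alternate set_head(0, 15, 0) and set_head(0, -15, 0).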
3.3. Electronics Architecture
This subsection describes the key components of the robot’s electronic system:
Processing unit: A Raspberry Pi 3 B+ is used for high-level processing. It includes built-in Wi-Fi capabilities, allowing it to establish a local wireless network for communication with the teleoperation interface, as will be described later.
Control system: An Arduino Nano handles low-level control tasks and sensor integration.
Input sensors:
- Ultrasonic distance sensors for obstacle detection and proximity awareness.
- Camera Module V2 (8 MP, Sony IMX219) from Raspberry Pi (Cambridge, UK) for visual input, including facial recognition and scene analysis (see the sketch at the end of this subsection).
Output devices:
- RGB LED lights for emotional signaling and visual feedback.
- LCD touchscreen: a 3.5-inch USB-connected display with a resolution of 480 × 320 pixels and physical dimensions of approximately 92 mm × 57 mm, used to display pictograms and visual cues.
- Integrated speaker for playback of sound effects and onomatopoeic cues.
Figure 3 presents an annotated diagram of the JARI robot’s physical components. The design features a 3-degree-of-freedom neck, a camera and microphone for perception, a speaker and LCD touchscreen for expressive interaction, and RGB LEDs for emotional signaling. The body is constructed using 3D-printed PLA and incorporates soft textile materials to ensure safe physical contact. The robot includes a mobile base with wheels and ultrasonic sensors for distance sensing. The power connector is positioned at the rear of the chassis.
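The paper does not detail the vision pipeline behind the camera’s “facial recognition and scene analysis”, so the following is only a plausible sketch of the perception step, assuming OpenCV’s stock Haar cascade together with the picamera2 library available on the Raspberry Pi.

```python
# A minimal face-detection sketch for the Pi camera stream; the pipeline
# (Haar cascade + picamera2) is an assumption, not the robot's actual code.
import cv2
from picamera2 import Picamera2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = Picamera2()
cam.configure(cam.create_preview_configuration(main={"format": "RGB888"}))
cam.start()

frame = cam.capture_array()                     # one frame from the IMX219
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)  # cascades work on grayscale
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face(s) detected")         # e.g., to inform the operator
```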
3.4. Aesthetic Design
JARI presents a non-recognizable zoomorphic appearance, deliberately avoiding resemblance to specific animals or humanoid forms. This design decision was made to reduce the risk of fear or rejection in children and to prevent the discomfort commonly associated with Mori’s Uncanny Valley Effect. Unlike clearly defined animal shapes, such as dogs or cats, which may elicit aversions in some children, the abstract design promotes broader acceptance. Similarly, humanoid features were excluded to avoid cognitive dissonance during interaction.
JARI has a compact and child-friendly design, making it suitable for interaction with children in both individual and group settings. It measures 28 cm in height, 20.5 cm in width, and 22 cm in depth, with a total weight of approximately 2.5 kg. These dimensions were chosen to ensure the robot appears approachable and non-threatening, which is particularly important when working with children with ASD, who may be sensitive to large or imposing devices. Research has shown that robot size significantly influences user perception and interpersonal behavior. For example, ref. [34] found that robot size affects proxemic preferences, with smaller robots perceived as more approachable and less intrusive. Similarly, ref. [35] emphasized the advantages of using child-sized robots to foster natural and comfortable interactions in educational and therapeutic contexts. The zoomorphic design was informed by previous research [36,37] suggesting that animal-like robots are often perceived as less threatening and more approachable, especially by children with ASD, due to their simplicity and reduced social expectations compared to humanoid robots. These insights guided the decision to keep JARI’s form factor small and manageable, thereby enhancing its effectiveness as a social and therapeutic tool for children with ASD.
To enhance engagement and encourage physical interaction, the robot incorporates materials selected for their tactile qualities. Soft and textured surfaces were integrated into the body, including silicone-covered ears and additional textile components, which not only increase sensory appeal but also provide opportunities for visual contrast and aesthetic customization. JARI also includes interchangeable garments made from a variety of fabrics and colors (see Figure 4), which are adapted to the context of the interaction. Some garments offer soft textures, while others are intentionally rough, providing diverse sensory input during interaction. These design features support personalization and contribute to making JARI a more engaging and adaptable companion.
3.5. Software Architecture
The software architecture of the JARI robot is modular and designed to support multiple functionalities while maintaining smooth teleoperation. Its structure consists of the following layers:
Control layer: Responsible for motor actuation, sensor data acquisition, and communication with low-level hardware components.
Behavior layer: Manages predefined movement sequences and emotional expression routines.
Communication layer: Facilitates data exchange and synchronization between the Raspberry Pi, Arduino, and the web-based interface.
Web server: Hosts the graphical user interface, interprets remote commands, and ensures real-time communication between the human operator and the robot.
Interaction layer: Enables operator control through a user-friendly web interface.
This layered architecture enables real-time control and facilitates the future integration of autonomous behavior modules.
Figure 5 illustrates the overall communication architecture between the mobile application, the control command list, and the hardware components (Raspberry Pi and Arduino Nano). As shown in the figure, the teleoperator sends commands to the robot through a web browser. These commands are transmitted via Wi-Fi over a local network established between the teleoperator’s device and the Raspberry Pi. The Raspberry Pi processes high-level tasks, such as changing the robot’s emotion or initiating movement, and communicates with the Arduino Nano, which handles low-level components such as the RGB LED lights, the neck servomotors, the mobile wheel base, and the ultrasonic sensors.
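A minimal sketch of this command path is shown below, assuming a Flask server on the Raspberry Pi; the web framework, route names, serial port, and command vocabulary are illustrative, since the paper specifies only that a browser sends commands over the local Wi-Fi network and that the Pi relays low-level actions to the Arduino Nano.

```python
# Sketch of the Figure 5 command path under the assumptions stated above.
import serial
from flask import Flask, abort, jsonify

app = Flask(__name__)
arduino = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

VALID_EMOTIONS = {"happy", "very_happy", "sad", "very_sad", "disgusted",
                  "angry", "fear", "dubious", "neutral"}
VALID_MOVES = {"forward", "backward", "rotate_left", "rotate_right", "stop"}

@app.route("/emotion/<name>", methods=["POST"])
def set_emotion(name):
    """High-level task on the Pi: screen face, LED color, and neck gesture."""
    if name not in VALID_EMOTIONS:
        abort(400, "unknown emotion")
    arduino.write(f"EMOTION {name}\n".encode())  # low-level side on the Nano
    return jsonify(ok=True, emotion=name)

@app.route("/move/<cmd>", methods=["POST"])
def move(cmd):
    """Predefined base movements exposed to the WoZ operator's browser."""
    if cmd not in VALID_MOVES:
        abort(400, "unknown movement")
    arduino.write(f"MOVE {cmd}\n".encode())
    return jsonify(ok=True, move=cmd)

if __name__ == "__main__":
    # Served on the robot's local Wi-Fi network for the teleoperator.
    app.run(host="0.0.0.0", port=8080)
```

In this sketch, the operator’s browser would issue, for example, POST /emotion/happy, after which the Pi updates its own high-level state and forwards the low-level portion of the expression to the Nano.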
3.6. Human-Robot Interface
The robot is operated via a web application hosted on a virtual server running on the onboard Raspberry Pi. The human operator connects remotely through a standard web browser using either a desktop computer or a mobile device. The interface, shown in Figure 6, includes predefined movement commands for the robot’s base and a set of emotional expression patterns, allowing the operator to control the robot’s behavior and adapt interactions in real time.
Figure 7 shows the nine different emotions that the robot can express through a combination of facial expressions, colored LED lights, and movements of its neck and wheels.
This system is based on the Wizard of Oz (WoZ) paradigm, in which the robot is teleoperated to simulate autonomous behaviors. This approach offers several advantages:
Flexible interaction: Enables real-time adaptation to children’s responses, allowing for a more personalized and engaging experience.
Testing before automation: Facilitates the evaluation of different interaction strategies before implementing fully autonomous behavior.
Ethical and safety considerations: Ensures that a human teleoperator remains in control and can intervene immediately in case of unexpected or undesired behavior.
4. Pilot Evaluation
To evaluate the JARI robot in interaction with the target population, a pilot study was conducted involving children diagnosed with ASD. The evaluation aimed to validate key design characteristics of the robot, particularly its emotional expressions, use of color, and children’s behavioral responses during interaction. The study was designed to address the following research questions (RQs):
Research Question 1: Does the design of the JARI robot capture and sustain the attention of children with Autism Spectrum Disorder (ASD) during interaction?
Research Question 2: Are children with ASD able to recognize and interpret the emotional expressions displayed by the JARI robot?
The pilot was implemented in two countries and in two types of educational contexts: (1) a special education school in Peru, and (2) an inclusive mainstream school in Spain. In Peru, the study was conducted at the Centro Ann Sullivan del Perú (CASP) in Lima, which serves children with ASD, Down syndrome, and related neurodevelopmental conditions. In Spain, the pilot took place at the Centro Cultural Palomeras in Madrid, a public school that integrates children with ASD into regular classrooms.
The following two scenarios were defined to address different interaction settings and participant profiles.
4.1. Scenario 1: Children with Limited or Nonverbal Communication
This scenario was implemented in Lima, Peru, with a group of school-aged children (6 to 8 years old) diagnosed with ASD. Most of the participants exhibited minimal or no verbal communication and used pictographic systems to express basic needs. All children attended a specialized educational center and were characterized by limited sustained attention and, in some cases, oppositional behaviors or distress in response to transitions or unexpected instructions.
A total of 11 children participated in this scenario (10 boys and 1 girl), 8 of whom used nonverbal communication. The sessions were conducted individually in a dedicated room at the CASP. Following the center’s recommendations, each child was accompanied by a parent during the session to ensure comfort and reduce anxiety.
Interaction time: Approximately 20 min per child, including a brief “ice-breaking” period of about 2 min, followed by the main interaction with the robot and the associated questionnaire.
Key aspects to evaluate: Identification of the robot’s emotional expressions, children’s behavioral responses to those expressions, the children’s liking of the robot and of the color changes linked to each expression, and overall engagement during the session.
Procedure
Prior to implementation, the researchers coordinated with specialists from the CASP to adapt the protocol and questionnaire to the communication profiles and comfort needs of the children involved.
Robot setup: The JARI robot was placed in the center of a large desk, as shown in Figure 8a. One researcher, positioned at the back of the room, operated the robot remotely using the Wizard of Oz paradigm, while another remained near the participant to monitor and guide the session without interfering. Each child entered the room accompanied by a parent or tutor to ensure emotional support. During the interaction, the researcher asked questions about the robot’s facial expressions using pictograms, as illustrated in Figure 8b.
In this case, the procedure was as follows:
4.2. Scenario 2: Children with Verbal Communication and Higher Functional Skills
This scenario was conducted at the Centro Cultural Palomeras School in Madrid, Spain, and involved a group of seven children aged between 5 and 12 years (five boys and two girls), all diagnosed with ASD. All participants demonstrated verbal communication skills, although their fluency and expressive abilities varied. Some children relied on echolalia or had difficulty initiating speech, while others communicated more fluidly. All were capable of following simple instructions. Attention levels varied across participants, with some easily distracted and one child requiring continuous support from a familiar teacher to remain engaged.
Interaction time: Approximately 25 min per group, including a brief introduction and explanation of the activity.
Key aspects to evaluate: The clarity and recognition of the robot’s emotional expressions and the children’s perception of the robot’s behavior and characteristics through a post-interaction questionnaire.
Procedure
Before conducting the sessions, the research team met with school staff to explain the protocol. Based on their feedback, the wording of the questionnaire was adjusted to better align with the children’s language level and comprehension. All activities and materials were administered in Spanish.
Robot setup: The interaction took place in a familiar classroom environment, with the JARI robot positioned on a desk accessible to the children. Toys and ARASAAC pictograms (https://arasaac.org/ (accessed on 13 December 2024); a pictographic reference system in augmentative and alternative communication) were prepared to facilitate the identification of emotions and support communication during the activity, as shown in Figure 9.
In this case, the procedure was as follows:
Introduction: The session began with a brief explanation of the robot and the activity, presented as a game. Children were encouraged to interact playfully and naturally with the robot.
Interaction task: Children participated in small groups. Each child took turns showing an object to the robot (e.g., plush toy, cookie, broken cup, broken cellphone, fake snake). After each object was presented, the robot changed its emotional expression, controlled by the teleoperator. Children were then asked to identify the robot’s emotion by selecting an ARASAAC pictogram and showing it to the researcher. The emotions included happiness, sadness, anger, disgust, and fear.
Post-interaction questionnaire: At the end of the session, each child completed a short questionnaire based on 11 statements (Table 4). Each child received a response sheet along with three colored stickers: green (agree), yellow (neutral), and red (disagree). The researcher read each statement aloud three to four times, allowing the children enough time to place a sticker next to each one according to their interpretation or preference.
This questionnaire was adapted from a protocol previously developed by the authors in [38], which was originally validated with neurotypical children in mainstream educational settings. In the present pilot, the objective was to explore its applicability to children with ASD and to assess its potential for use in future large-scale studies involving this population.
5. Results
This section presents the findings from both scenarios, including children’s ability to identify the robot’s emotional expressions, their preferences for specific colors associated with each emotion, behavioral responses during the interaction, and post-interaction questionnaire data.
5.1. Scenario 1
5.1.1. Quantitative Findings of Emotion Identification and Preferences
In Scenario 1, results were collected from individual sessions with children, focusing on the following three aspects: whether they correctly identified the robot’s emotional expressions, whether they liked each emotion, and their preferences regarding the colors displayed by the robot during emotional expression.
Additionally, children’s behavior during the interaction was recorded and analyzed through video footage, providing further insights into the types of spontaneous actions observed.
Emotion matching: As shown in Figure 10, most children were able to correctly identify the emotional expression displayed by the robot. Only one case showed a mismatch; the remaining unmatched cases occurred because the children did not provide an answer. This may be because using pictograms as part of a communication system requires cognitive effort, which can lead to fatigue if overused or if the child is overwhelmed by the demands of the interaction [39]. Notably, a few children chose not to continue responding after the initial rounds of interaction.
Emotion and color preferences: Figure 11 presents the children’s preferences regarding the robot’s emotions and associated colors. In Figure 11a, the “happy” emotion was the most positively received, followed by “sad” and “fear”. Figure 11b shows that the color yellow—used during the happy emotion—was the most preferred, followed by green and light pink. However, not all children answered these questions.
5.1.2. Qualitative Analysis of the Children’s Behavior
Based on the video analysis, four main types of interaction were identified:
Physical gestures, such as touching, hugging, clapping, jumping, and kissing the robot.
Facial expressions, including smiling, laughing, and opening the mouth.
Visual attention, such as sustained eye gaze toward the robot.
Proxemics, including how children approached the robot physically.
During the object-based interaction task, all children displayed sustained visual attention toward the robot. They maintained their gaze until the researcher interrupted after each toy interaction, following the dynamics established for the experiment, and after responding they resumed looking at the robot, often orienting their entire body in its direction. These interruptions were not counted as separate gaze events, since the children maintained visual attention until the researcher asked a question, at which point they oriented their gaze toward the researcher, other areas of the classroom, or the toy box.
In terms of proxemics, since the game required children to approach the robot to show it objects, they came close enough to touch, hug, or even kiss it. In addition, observations showed that they willingly moved closer even after the game ended, demonstrating a natural tendency to engage with the robot.
Regarding physical gestures, as shown in Figure 12a, the most common actions were touching parts of the robot. The children primarily touched the body, head, and ears of the robot, as marked in Figure 12b. Interestingly, two children kissed the robot, one clapped, and another jumped while laughing in response to the robot’s angry expression. Facial expressions are shown in Figure 12c. Smiling was the most frequently observed expression. Some children also opened their mouths in reaction to the robot’s fearful or disgusted expressions—emotions in which the robot’s own mouth appeared open. This may suggest an emerging tendency to imitate the robot’s facial features, a behavior that warrants further investigation in future studies.
Table 5 summarizes the frequency of each type of interaction observed, and Figure 13 shows some of these interactions. Smiling was the most frequent facial expression and was recorded separately from laughing due to its lower intensity and clearer consistency. Touching was the most common physical gesture. Interestingly, some children also clapped, jumped, and kissed the robot; though these actions occurred less frequently, they were unexpected. As illustrated in Figure 13, several examples demonstrate how children interacted physically and visually with the robot. In Figure 13a, a child is seen touching the robot’s head. In Figure 13b, another child is closely observing the robot’s face. Figure 13c shows a child touching the robot’s body and playfully placing a carrot on its face. In Figure 13d, a child is gently touching the robot’s face, and in Figure 13e, another child is interacting by touching the robot’s ear. Finally, in Figure 13f, a child is again observed closely inspecting the robot’s face. These interactions suggest a high level of engagement and comfort with the robot, indicating that the children were not only interested in the robot’s physical features but also felt safe enough to approach and explore it through touch and sustained visual attention. Such behaviors can be interpreted as early signs of social connection and curiosity, which are promising indicators in the context of therapeutic or educational interventions for children with ASD.
5.2. Scenario 2
5.2.1. Quantitative Findings
In the case of Scenario 2, the results are related to the recognition of emotions by the children and their responses to the questionnaire.
Response match to the robot’s emotional expression: As shown in Figure 14, most children correctly recognized the robot’s emotional expressions. Only one child mismatched the emotion “fear”.
Response to the questionnaire: Figure 15 shows the frequency of responses for each item. Items 1 and 2 received a high number of “Yes” responses, indicating that the children would like to see or play with the robot again. Item 6, “JARI is funny”, also received a high number of “Yes” responses, with no children selecting “Maybe” or “No”.
For Item 7, “JARI is like a person like me”, four children responded “No”, while three children considered the robot to be like a person. In the case of Item 8, most children viewed the robot as a pet and imagined that it could be alive.
Finally, Items 10 and 11 revealed that most children were not afraid of the robot and believed that using it would not result in breaking it.
5.2.2. Qualitative Analysis—Teacher and Therapist Observations
As part of the evaluation process, two educational professionals from the Centro Cultural Palomeras completed a post-interaction questionnaire regarding the children’s engagement with the JARI robot. Both observers noted that the children appeared generally motivated during the session and interacted spontaneously with the robot. They emphasized that the children were attentive to the robot’s expressions and responded appropriately in most cases, demonstrating a clear effort to interpret its emotional cues.
Additionally, the professionals observed that the use of pictograms facilitated communication and that the group setting promoted collaborative responses. One of the respondents highlighted that the robot’s design and emotional clarity were effective in maintaining the children’s attention, particularly when expressions were exaggerated and supported by color. Both professionals agreed that the robot has strong potential as a complementary tool in therapeutic and educational contexts for children with ASD.
6. Discussion
The paper introduces a new design for a social robot, created through a multidisciplinary approach to address the needs of children with Autism Spectrum Disorder (ASD). The robot was designed with the goal of becoming a social mediator among children, parents, therapists, and doctors. The JARI robot has a zoomorphic but undefined shape which, according to the children’s responses and behavioral observations, encourages children to approach it, touch it, hug it, and even kiss it. Jumping and clapping occurred with one child who became excited when the robot changed its emotion to “disgusted”. These actions reflect engagement [40,41] and directly answer our research question, confirming that the robot’s shape and emotional characteristics seem to attract children. Some researchers suggest that robot-like or animal-like appearances often attract more attention from children with ASD compared to strictly humanoid designs [14,42].
The emotion-matching results were particularly interesting. The “sad” emotion received the highest rating, followed by “happy” and “disgusted”. Additionally, these emotions were attractive to children in the first scenario, with most of them enjoying the robot’s happy and sad expressions. On the other hand, the “disgusted” emotion was the one that some children did not like. It is important to mention that the teachers at the CASP indicated that disgust was an emotion the children had only just started learning. Moreover, the “angry” emotion did not receive many responses, which may have been because it was one of the last emotions presented, and the children were already tired from answering many questions. Some children took longer to respond to the researcher’s questions, sometimes needing two or three attempts, and some became distracted when the researcher started asking questions and did not answer at all. This may be related to the inherent characteristics of Autism Spectrum Disorder (ASD), which often include differences in attention regulation, sensory processing, and responsiveness to social prompts [1,43]. In addition, children with ASD can recognize emotions like anger, but they typically do so less accurately and more slowly than their neurotypical peers [44,45]. Similarly, recognizing disgusted facial expressions is particularly difficult and takes longer for children to master, and many children do not reliably identify this emotion until later in childhood. This difficulty is even greater for children with ASD, highlighting the need for targeted support and interventions [46,47]. Therefore, to reduce the risk of cognitive overload and maintain participant engagement, future iterations of the protocol will limit the number of verbal questions posed to each child. Instead of asking multiple follow-up questions, we will focus on a single question per emotional interaction, followed by one general question regarding the child’s perception of the robot. In addition, video recordings will be used to collect behavioral data, such as facial expressions, gaze direction, and body orientation, which may offer further insight into emotional recognition and engagement without relying solely on verbal responses. Automated behavior analysis tools will help us extract these cues more accurately and non-invasively, making the evaluation process more accessible and less taxing for children with ASD.
The questions related to the robot’s lighting color for each emotion could be considered somewhat redundant, as yellow and the “happy” emotion received the highest ratings. However, an interesting observation concerned the green color: even though some children did not like the “disgusted” emotion, they seemed to enjoy the green color. Similarly, they liked the light pink color associated with “fear”.
In Scenario 2, children were mostly able to identify the emotions, which may suggest that emotions are understood and recognized socially. This could be related to how the school teaches them. Based on this pilot evaluation, the design seems to be heading in the right direction, although emotions like “fear” may require further revision in terms of expression and movement.
On the other hand, the questionnaire was tested to assess whether children with ASD could understand it and to analyze how they responded. This analysis will help determine if the questionnaire can be useful in future studies involving larger samples. From this small group, it was observed that Item 2 (“I want to play with JARI”) received the highest number of “Yes” responses, suggesting that the robot engaged the children in a way that they found interesting or enjoyable. This was further supported by the high rating of “Yes” responses to Item 6 (“JARI is funny”).
Regarding Item 7 (“JARI is like a pet”), children predominantly viewed the robot as a pet. This could be explained by the robot’s shape and size, which may have influenced their perception. Furthermore, responses to Item 8 (“I imagine JARI is alive”) and Item 9 suggest that children may be influenced by the theory of animism proposed by Piaget [48], where children often associate movement with being alive, a cognitive framework that is prominent among children under the age of 8. However, it would be interesting to explore why some participants, even those above 8 years old, responded with “Maybe”, as this could indicate a more complex understanding of animacy or expectations about the robot.
In this initial exploration, the robot’s ability to express emotions through a combination of facial expressions displayed on a screen, dynamic LED lighting, and coordinated movements of its head and body was sufficient to attract the attention of children with ASD. These multimodal features proved effective in engaging both nonverbal children requiring communication support through pictograms and verbal children on the Asperger spectrum. While the results from this study provide valuable insights, future research directions are proposed to further refine and expand the findings: (1) applying Scenario 1 and Scenario 2 to a larger sample to validate the results; (2) incorporating additional features to enable automatic responses to events such as touching or hugging; (3) personalizing certain aspects of the robot, such as integrating onomatopoeic sounds; and (4) enabling gesture and speech recognition functionalities to enhance the robot’s interactive capabilities. These future steps will help optimize the robot’s design and functionality, offering greater potential for supporting children with ASD in social and therapeutic settings.
To enhance the scalability and generalizability of the findings, future research should also explore the implementation of the robot in a broader range of educational and therapeutic settings. Adaptations should consider the communication profiles of children with ASD. For nonverbal children, group-based interactions (with 8–10 participants) could be designed around visual and physical activities, such as emotion recognition using pictograms or following daily routines involving coordinated movements. For verbal children, short verbal interactions, such as group surveys or guided discussions, could be incorporated to support emotional learning and instruction following. These activities can be embedded into structured group sessions in ASD centers or inclusive classrooms, supporting peer engagement and reducing individual response pressure. Additionally, scaling the intervention across different cultural and linguistic contexts will require adaptations in content and delivery methods, ensuring accessibility and relevance for diverse populations.
7. Conclusions
JARI, a robot with an undefined zoomorphic shape and functions such as displaying emotional expressions through lighting colors and movements, effectively encouraged children with Autism Spectrum Disorder (ASD) to interact: they approached, touched, hugged, and even kissed it. These behaviors suggest that the robot’s design and emotional expressions facilitated engagement. The “sad” emotion received the highest rating, followed by “happy” and “disgusted”, while “angry” had fewer responses, probably due to fatigue among the children.
Children perceived the robot mainly as a “pet”, which can be attributed to its design and size. Many also imagined it as “alive”, reflecting a cognitive framework typical for children under eight. The questionnaire showed that the children understood it and responded positively, particularly to items like “I want to play with JARI” and “JARI is funny”. However, they primarily viewed the robot as non-human, mainly as a pet.
Future research should involve larger samples and explore additional features, such as automatic responses to touch or hugging, onomatopoeic sounds, and gesture or speech recognition. These modifications could enhance the robot’s interactivity, further supporting social and therapeutic interactions for children with ASD.
Author Contributions
Conceptualization, E.P.M.R. and H.H.O.F.; methodology, E.P.M.R. and H.H.O.F.; validation, E.P.M.R., R.C.L. and C.E.G.C.; investigation, E.P.M.R., R.C.L. and C.E.G.C.; data curation, E.P.M.R.; writing—original draft preparation, E.P.M.R., H.H.O.F. and R.C.L.; writing—review and editing, E.P.M.R. and R.C.L.; funding acquisition, E.P.M.R. and C.E.G.C. All authors have read and agreed to the published version of the manuscript.
Funding
This work has been funded by the Social Responsibility Direction of the Pontificia Universidad Católica del Perú FCD2017-DARS and by RoboCity 2030-DIH-CM Madrid Robotics Digital Innovation Hub (Applied Robotics for Improving Citizens’ Quality of Life, Phase IV; S2018/NMT-4331), funds from the Community of Madrid, co-financed with EU Structural Funds.
Informed Consent Statement
Informed consent was obtained from all participants involved in the study. Written informed consent has been obtained from the parents or tutors of the participants to publish this paper.
Data Availability Statement
Data are contained within the article.
Acknowledgments
The authors express their gratitude for the collaboration of Centro Ann Sullivan del Perú, Lima, and Colegio Centro Cultural Palomeras, Madrid, which allowed the authors to apply the protocol of this study. They also thank Danna Arias, Marlene Bustamante, Marie Patilongo, and Sheyla Blumen for their contribution in the design of this version of the JARI robot.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
ARASAAC | Aragonese Center of Augmentative and Alternative Communication |
ASD | Autism Spectrum Disorder |
CASP | Centro Ann Sullivan del Perú |
DoF | Degree Of Freedom |
JARI | Joint Attention through Robotic Interaction |
PLA | Polylactic Acid |
RQ | Research Question |
UVE | Uncanny Valley Effect |
WoZ | Wizard of Oz |
References
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5; American Psychiatric Association: Washington, DC, USA, 2013. [Google Scholar]
- Klin, A.; Jones, W.; Schultz, R.; Volkmar, F.; Cohen, D. Defining and Quantifying the Social Phenotype in Autism. Am. J. Psychiatry 2002, 159, 895–908. [Google Scholar] [CrossRef] [PubMed]
- Scarcella, I.; Marino, F.; Failla, C.; Doria, G.; Chilà, P.; Minutoli, R.; Vetrano, N.; Vagni, D.; Pignolo, L.; Di Cara, M.; et al. Information and Communication Technologies-Based Interventions for Children with Autism Spectrum Conditions: A Systematic Review of Randomized Control Trials from a Positive Technology Perspective. Front. Psychiatry 2023, 14, 1212522. [Google Scholar] [CrossRef] [PubMed]
- Freitas, É.V.d.S.; Panceri, J.A.C.; Schreider, S.d.L.; Caldeira, E.M.d.O.; Filho, T.F.B. Cognitive Serious Games Dynamically Modulated as a Therapeutic Tool for Applied Behavior Analysis Therapy in Children with Autism Spectrum Disorder. Int. J. Emerg. Technol. Learn. (iJET) 2024, 19, 80–92. [Google Scholar] [CrossRef]
- Chien, Y.L.; Lee, C.H.; Chiu, Y.N.; Tsai, W.C.; Min, Y.C.; Lin, Y.M.; Wong, J.S.; Tseng, Y.L. Game-Based Social Interaction Platform for Cognitive Assessment of Autism Using Eye Tracking. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 749–758. [Google Scholar] [CrossRef]
- Panceri, J.A.C.; Freitas, É.; de Souza, J.C.; da Luz Schreider, S.; Caldeira, E.; Bastos, T.F. A New Socially Assistive Robot with Integrated Serious Games for Therapies with Children with Autism Spectrum Disorder and Down Syndrome: A Pilot Study. Sensors 2021, 21, 8414. [Google Scholar] [CrossRef]
- Souza, J.C.D.; Schreider, S.D.L.; Freitas, É.V.D.S.; Panceri, J.A.C.; Montero-Contreras, A.; Caldeira, E.M.D.O.; Bastos-Filho, T.F. Proposal of Serious Games and Assistive Robot to Aid Therapies of Children with Autism Spectrum Disorder. In Proceedings of the XIII SEB Simpósio de Engenharia Biomédica, Minas Gerais, Brazil, 13–15 September 2021. [Google Scholar] [CrossRef]
- DiPietro, J.D.; Kelemen, A.; Liang, Y.; Sik-Lányi, C. Computer- and Robot-Assisted Therapies to Aid Social and Intellectual Functioning of Children with Autism Spectrum Disorder. Medicina 2019, 55, 440. [Google Scholar] [CrossRef]
- Grossard, C.; Palestra, G.; Xavier, J.; Chetouani, M.; Grynszpan, O.; Cohen, D. ICT and Autism Care: State of the Art. Curr. Opin. Psychiatry 2018, 31, 474–483. [Google Scholar] [CrossRef]
- Kim, E.S.; Berkovits, L.D.; Bernier, E.P.; Leyzberg, D.; Shic, F.; Paul, R.; Scassellati, B. Social Robots as Embedded Reinforcers of Social Behavior in Children with Autism. J. Autism Dev. Disord. 2013, 43, 1038–1049. [Google Scholar] [CrossRef]
- Zygopoulou, M. A Systematic Review of the Effect of Robot Mediated Interventions in Challenging Behaviors of Children with Autism Spectrum Disorder. SHS Web Conf. 2022, 139, 05002. [Google Scholar] [CrossRef]
- Scassellati, B.; Admoni, H.; Matarić, M. Robots for Use in Autism Research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef]
- Cabibihan, J.J.; Javed, H.; Ang, M.; Aljunied, S.M. Why Robots? A Survey on the Roles and Benefits of Social Robots in the Therapy of Children with Autism. Int. J. Soc. Robot. 2013, 5, 593–618. [Google Scholar] [CrossRef]
- Watson, S.W. Socially Assisted Robotics as an Intervention for Children with Autism Spectrum Disorder. In Using Assistive Technology for Inclusive Learning in K-12 Classrooms; University of Louisiana at Monroe: Monroe, LA, USA, 2023; Volume 1. [Google Scholar] [CrossRef]
- Kelly, D. The Uncanny Valley: The Original Essay by Masahiro Mori. IEEE Spectr. 1970, 6, 6. [Google Scholar]
- Coşkun, B.; Uluer, P.; Toprak, E.; Barkana, D.E.; Kose, H.; Zorcec, T.; Robins, B.; Landowska, A. Stress Detection of Children with Autism Using Physiological Signals in Kaspar Robot-Based Intervention Studies. In Proceedings of the 2022 9th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), Seoul, Republic of Korea, 21–24 August 2022; pp. 1–7. [Google Scholar] [CrossRef]
- Gouaillier, D.; Hugel, V.; Blazevic, P.; Kilner, C.; Monceaux, J.; Lafourcade, P.; Marnier, B.; Serre, J.; Maisonnier, B. Mechatronic Design of NAO Humanoid. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 769–774.
- Hussain, J.; Mangiacotti, A.; Franco, F.; Chinellato, E. Robotic Music Therapy Assistant: A Cognitive Game Playing Robot. In Social Robotics; Ali, A.A., Cabibihan, J.J., Meskin, N., Rossi, S., Jiang, W., He, H., Ge, S.S., Eds.; Springer: Singapore, 2024; pp. 81–94.
- Bugnariu, N.; Young, C.; Rockenbach, K.; Patterson, R.M.; Garver, C.; Ranatunga, I.; Beltran, M.; Torres-Arenas, N.; Popa, D. Human-Robot Interaction as a Tool to Evaluate and Quantify Motor Imitation Behavior in Children with Autism Spectrum Disorders. In Proceedings of the 2013 International Conference on Virtual Rehabilitation (ICVR), Philadelphia, PA, USA, 26–29 August 2013; pp. 57–62.
- Fussi, A. Affective Responses to Embodied Intelligence. The Test-Cases of Spot, Kaspar, and Zeno. Passion J. Eur. Philos. Soc. Study Emot. 2023, 1, 85–102.
- Chien, S.-E.; Chen, Y.-S.; Chen, Y.-C.; Yeh, S.-L. Exploring the Developmental Aspects of the Uncanny Valley Effect on Children’s Preferences for Robot Appearance. Int. J. Hum.–Comput. Interact. 2024, 41, 6366–6376.
- Tamaral, C.; Hernandez, L.; Baltasar, C.; Martin, J.S. Design Techniques for the Optimal Creation of a Robot for Interaction with Children with Autism Spectrum Disorder. Machines 2025, 13, 67.
- Feng, H.; Mahoor, M.H.; Dino, F. A Music-Therapy Robotic Platform for Children with Autism: A Pilot Study. arXiv 2022, arXiv:2205.04251.
- Dei, C.; Falerni, M.M.; Nicora, M.L.; Chiappini, M.; Malosio, M.; Storm, F.A. Design of a Multimodal Device to Improve Well-Being of Autistic Workers Interacting with Collaborative Robots. arXiv 2023, arXiv:2304.14191.
- Puglisi, A.; Caprì, T.; Pignolo, L.; Gismondo, S.; Chilà, P.; Minutoli, R.; Marino, F.; Failla, C.; Arnao, A.A.; Tartarisco, G.; et al. Social Humanoid Robots for Children with Autism Spectrum Disorders: A Review of Modalities, Indications, and Pitfalls. Children 2022, 9, 953.
- Marino, F.; Chilà, P.; Sfrazzetto, S.T.; Carrozza, C.; Crimi, I.; Failla, C.; Busà, M.; Bernava, G.; Tartarisco, G.; Vagni, D.; et al. Outcomes of a Robot-Assisted Social-Emotional Understanding Intervention for Young Children with Autism Spectrum Disorders. J. Autism Dev. Disord. 2020, 50, 1973–1987.
- Arsić, B.; Gajić, A.; Vidojković, S.; Maćešić-Petrović, D.; Bašić, A.; Parezanović, R.Z. The Use of NAO Robots in Teaching Children with Autism. Eur. J. Altern. Educ. Stud. 2022, 7, 1–10.
- Gudlin, M.; Ivankovic, I.; Dadic, K. Robots Used in Therapy for Children with Autism Spectrum Disorder. Am. J. Multidiscip. Res. Dev. (AJMRD) 2022, 4, 33–39.
- Pakkar, R.; Clabaugh, C.; Lee, R.; Deng, E.; Mataric, M.J. Designing a Socially Assistive Robot for Long-Term In-Home Use for Children with Autism Spectrum Disorders. arXiv 2020, arXiv:2001.09981.
- Ali, S.; Abodayeh, A.; Dhuliawala, Z.; Breazeal, C.; Park, H.W. Towards Inclusive Co-creative Child-Robot Interaction: Can Social Robots Support Neurodivergent Children’s Creativity? In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’25), Melbourne, Australia, 4–6 March 2025; pp. 321–330.
- Costescu, C.; Vanderborght, B.; David, D. Robot-Enhanced CBT for Dysfunctional Emotions in Social Situations for Children with ASD. J. Evid.-Based Psychother. 2017, 17, 119–132.
- Hurst, N.; Clabaugh, C.; Baynes, R.; Cohn, J.; Mitroff, D.; Scherer, S. Social and Emotional Skills Training with Embodied Moxie. arXiv 2020, arXiv:2004.12962.
- Riek, L.D. Wizard of Oz Studies in HRI: A Systematic Review and New Reporting Guidelines. J. Hum.-Robot Interact. 2012, 1, 119–136.
- Lehmann, H.; Rojik, A.; Hoffmann, M. Should a Small Robot Have a Small Personal Space? Investigating Personal Spatial Zones and Proxemic Behavior in Human-Robot Interaction. arXiv 2020, arXiv:2009.01818.
- Allgeuer, P.; Farazi, H.; Ficht, G.; Schreiber, M.; Behnke, S. The Igus Humanoid Open Platform: A Child-Sized 3D Printed Open-Source Robot for Research. KI-Künstliche Intell. 2016, 30, 315–319.
- Robins, B.; Dautenhahn, K.; Boekhorst, R.T.; Billard, A. Robotic Assistants in Therapy and Education of Children with Autism: Can a Small Humanoid Robot Help Encourage Social Interaction Skills? Univers. Access Inf. Soc. 2005, 4, 105–120.
- Tanaka, F.; Cicourel, A.; Movellan, J.R. Socialization between Toddlers and Robots at an Early Childhood Education Center. Proc. Natl. Acad. Sci. USA 2007, 104, 17954–17958.
- Madrid Ruiz, E.P.; León, R.C.; García Cena, C.E. Quantifying Intentional Social Acceptance for Effective Child-Robot Interaction. In Proceedings of the 2024 7th Iberian Robotics Conference (ROBOT), Madrid, Spain, 6–8 November 2024; pp. 1–6.
- Thiemann-Bourque, K.; Brady, N.; McGuff, S.; Stump, K.; Naylor, A. Picture Exchange Communication System and Pals: A Peer-Mediated Augmentative and Alternative Communication Intervention for Minimally Verbal Preschoolers with Autism. J. Speech Lang. Hear. Res. 2016, 59, 1133–1145.
- Dubois-Sage, M.; Jacquet, B.; Jamet, F.; Baratgin, J. People with Autism Spectrum Disorder Could Interact More Easily with a Robot than with a Human: Reasons and Limits. Behav. Sci. 2024, 14, 131.
- Schadenberg, B.R.; Reidsma, D.; Heylen, D.K.J.; Evers, V. Differences in Spontaneous Interactions of Autistic Children in an Interaction with an Adult and Humanoid Robot. Front. Robot. AI 2020, 7, 28.
- Pinto-Bernal, M.J.; Sierra, M.S.D.; Munera, M.; Casas, D.; Villa-Moreno, A.; Frizera-Neto, A.; Stoelen, M.F.; Belpaeme, T.; Cifuentes, C.A. Do Different Robot Appearances Change Emotion Recognition in Children with ASD? Front. Neurorobot. 2023, 17, 1044491.
- Lord, C.; Elsabbagh, M.; Baird, G.; Veenstra-Vanderweele, J. Autism Spectrum Disorder. Lancet 2018, 392, 508–520.
- Masoomi, M.; Saeidi, M.; Cedeno, R.; Shahrivar, Z.; Tehrani-Doost, M.; Ramirez, Z.; Gandi, D.A.; Gunturu, S. Emotion Recognition Deficits in Children and Adolescents with Autism Spectrum Disorder: A Comprehensive Meta-Analysis of Accuracy and Response Time. Front. Child Adolesc. Psychiatry 2025, 3, 1520854.
- Bal, E.; Harden, E.; Lamb, D.; Van Hecke, A.V.; Denver, J.W.; Porges, S.W. Emotion Recognition in Children with Autism Spectrum Disorders: Relations to Eye Gaze and Autonomic State. J. Autism Dev. Disord. 2010, 40, 358–370.
- Riddell, C.; Nikolić, M.; Dusseldorp, E.; Kret, M.E. Age-Related Changes in Emotion Recognition across Childhood: A Meta-Analytic Review. Psychol. Bull. 2024, 150, 1094–1117.
- Widen, S.C.; Russell, J.A. Children’s Recognition of Disgust in Others. Psychol. Bull. 2013, 139, 271–299.
- Rabindran; Madanagopal, D. Piaget’s Theory and Stages of Cognitive Development—An Overview. Sch. J. Appl. Med. Sci. 2020, 8, 2152–2157.
Figure 1. Articulated head movement mechanism: roll, pitch, and yaw movements.
Figure 2. Mechanical structure: (a) Articulated head mechanism and mobile base unit. (b) Shell.
Figure 3. Labeled hardware components of the JARI robot.
Figure 4. JARI’s interchangeable clothing.
Figure 5. System architecture of the JARI platform.
Figure 6. Web-based user interface designed for therapists and educators to remotely control the JARI robot.
Figure 7. JARI robot emotional expressions.
Figure 8. Interaction setting for Scenario 1. (a) Classroom configuration showing the JARI robot positioned on a desk, with the researcher nearby and the Wizard of Oz operator seated at the back of the room; the child interacts with the robot using the toys and pictograms provided during the session. (b) Pictograms and toy objects used to facilitate communication and elicit emotional responses from the robot.
Figure 9. Interaction setting for Scenario 2. At the center, children were seated at individual desks facing the JARI robot, which responded with emotional expressions to the objects presented. On both sides, details of the materials used: toys and ARASAAC pictograms for the interaction, and response sheets with colored stickers for the post-interaction questionnaire.
Figure 10. Children’s response frequency when identifying the robot’s emotional expressions in Scenario 1.
Figure 11. Children’s preferences in Scenario 1. (a) Preferred emotional expressions displayed by the robot. (b) Preferred colors associated with the robot’s emotional expressions.
Figure 12. Types of behavioral interactions: (a) Physical gesture actions. (b) Robot’s areas touched by children. (c) Facial expression actions.
Figure 13. Pictures of actions performed by children interacting with the JARI robot: (a) Child touching the robot’s head. (b) Child approaching and looking closely at the robot’s screen. (c) Child placing a carrot on the robot’s screen. (d) Child smiling while interacting with the robot after presenting an object. (e) Child touching the robot’s ears. (f) Child looking very closely at the robot’s screen and touching her head to the robot’s head.
Figure 14. Children’s response frequency when matching the robot’s emotional expressions in Scenario 2.
Figure 15. Children’s answers to the questionnaire.
Table 1. Comparison of social robots by morphology and physical characteristics.

| Robot | Form | Mobility | Head/Neck | Eyes | Emotional Expression | Sensory Interaction |
|---|---|---|---|---|---|---|
| Jibo [30] | Abstract | Rotating base | 3 DoF expressive neck | Screen | High (screen + voice) | Cameras, microphones, voice |
| Kaspar [16] | Humanoid | Arm movement only | Static head | Mechanical | Partial (simple face) | Basic vision |
| Keepon [31] | Character-like (yellow snowman) | Stationary (tabletop) | 4 DoF (bouncing and tilting) | No explicit eyes (uses face orientation) | Moderate (motion + sound cues) | Operator-controlled feedback via motion and sound |
| Kiwi [29] | Animal-like (bird) | Tabletop 6 DoF platform | 6 DoF body movement | LCD face | Moderate (animated face + body motion) | Microphone, camera |
| Moxie [32] | Character-like (cartoon robot) | Stationary tabletop | Static neck, expressive head tilt | Animated face (LCD) | High (facial expressions, gestures, voice) | Microphones, cameras, speech recognition |
| NAO [17] | Humanoid | Walks, gestures | 2 DoF neck | LED | Limited (voice + lights) | Cameras, microphones, touch |
| QT-Robot [18] | Humanoid (cartoon-like) | Stationary upper body | Static neck | LCD screen face | High (animated face + gestures + speech) | Microphones, cameras, touch sensors |
| TEA-2 [22] | Abstract | Basic motorized movement | Static or configurable modules | LEDs | Basic (light signals + movement) | Microphone, tactile sensors |
| Zeno [19] | Humanoid | Head, torso movement | 6 DoF expressive head | LCD screens | High (animated facial screen) | Cameras, microphones |
Table 2. Sequence of objects and the robot’s emotional expression for the interaction.

| Sequence | Object | Robot’s Emotional Expression |
|---|---|---|
| 1 | Plush toy (dog or bear) | Happy |
| 2 | Carrot | Disgusted |
| 3 | Snake toy | Fear |
| 4 | Broken piano | Sad |
| 5 | Broken car | Angry |
| 6 | Cookies | Happy |
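Because the robot’s reactions in this scenario follow the fixed object-to-expression pairing in Table 2, the Wizard of Oz operator only needs to trigger the expression matching the object the child presents. The sketch below illustrates how this trigger table could be encoded; it is a minimal illustration only, and the `JariClient` class, the `/expression` endpoint, and the host address are hypothetical placeholders rather than the published control interface.

```python
import requests  # assumed HTTP client; the actual JARI control API is not reproduced here

# Object-to-expression mapping taken directly from Table 2.
EXPRESSION_FOR_OBJECT = {
    "plush_toy": "happy",
    "carrot": "disgusted",
    "snake_toy": "fear",
    "broken_piano": "sad",
    "broken_car": "angry",
    "cookies": "happy",
}

class JariClient:
    """Hypothetical wrapper around the robot's web-based control interface (Figure 6)."""

    def __init__(self, host: str = "http://jari.local:8080"):  # placeholder address
        self.host = host

    def show_expression(self, expression: str) -> None:
        # Endpoint name and payload shape are illustrative assumptions.
        requests.post(f"{self.host}/expression", json={"name": expression}, timeout=5)

def on_object_presented(client: JariClient, obj: str) -> None:
    """Trigger the expression paired with the presented object per Table 2."""
    expression = EXPRESSION_FOR_OBJECT.get(obj)
    if expression is None:
        raise ValueError(f"No expression defined for object: {obj}")
    client.show_expression(expression)

# Example: the operator sees the child present the snake toy.
# on_object_presented(JariClient(), "snake_toy")  # robot displays "fear"
```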
Table 3. Scenario 1—Questions used during the interaction.

| Question | Purpose |
|---|---|
| How does JARI feel? | To assess whether the child can recognize the emotion displayed by the robot. |
| Do you like JARI? | To evaluate the child’s affective response to the robot’s behavior or expression. |
| Do you like JARI’s color? | To explore the child’s preference for, or perception of, the color associated with the emotion. |
Table 4. Questionnaire—Statements used to assess the children’s perception of the robot.

| Item | Statement |
|---|---|
| 1 | I want to see JARI again |
| 2 | I want to play with JARI again |
| 3 | It would be nice if JARI and I could do something together |
| 4 | I want to take JARI home with me |
| 5 | I like JARI |
| 6 | JARI is funny |
| 7 | JARI is like a person like me |
| 8 | JARI is like a pet |
| 9 | I imagine that JARI is alive |
| 10 | If I use JARI, it could be broken |
| 11 | I am afraid of JARI |
Table 5. Behavioral interactions observed during the interaction sessions.

| Type of Interaction | Category of Interaction | Number of Events |
|---|---|---|
| Facial Expression | Smile | 13 |
|  | Laugh | 3 |
|  | Open mouth | 3 |
|  | Surprise | 2 |
| Physical Gesture | Touch body | 5 |
|  | Touch screen | 2 |
|  | Touch ears | 2 |
|  | Touch head | 5 |
|  | Touch clothes | 1 |
|  | Clap | 1 |
|  | Jump | 1 |
|  | Kiss | 2 |
|  | Hug | 1 |