Search Results (69)

Search Parameters:
Keywords = NAO robot

22 pages, 5469 KB  
Article
Reinforcement-Based Person-Specific Training for Children with Autism Using a Humanoid Robot NAO
by Masud Karim, Md. Solaiman Mia, Saifuddin Md. Tareeq and Md. Hasanuzzaman
Robotics 2026, 15(4), 66; https://doi.org/10.3390/robotics15040066 - 25 Mar 2026
Viewed by 1283
Abstract
Autism Spectrum Disorder (ASD) is defined by ongoing difficulties in social communication, flexibility in behavior, and adaptive learning skills. Interventions that utilize robots have demonstrated potential in providing organized training for children with ASD; however, there is a lack of controlled studies that specifically examine the effects of reinforcement strategies. This research introduces a systematic interaction policy based on reinforcement, founded on the principles of Applied Behavior Analysis (ABA), and assesses its effectiveness through a randomized controlled experimental design with observation. The humanoid robot NAO was used in two different interaction scenarios, one involving a reinforcement condition (RC) and the other a non-reinforcement condition (NRC), ensuring that the instructional material and environment were maintained, while only the availability of contingent positive feedback was altered. A total of 50 participants diagnosed with ASD Level 2 engaged in structured word-learning sessions. Learning outcomes were assessed using institutional performance criteria, average response time, and emotion analysis derived from a CNN-based facial expression model. Independent samples t-tests revealed statistically significant improvements in both performance scores (t(48) = 3.779, p < 0.05) and response times (t(48) = 3.758, p < 0.05) in the reinforcement condition compared to the non-reinforcement condition. The findings demonstrate that structured ABA-based reinforcement within robotic interaction significantly enhances learning efficiency and task engagement, contributing methodologically rigorous evidence to robot-assisted ASD intervention research. Full article
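The independent-samples t-test this abstract reports can be sketched from its textbook definition. The score lists below are invented illustration data, not the study's measurements; the study's own test had df = 48 (two groups of 25).

```python
# Minimal sketch of a pooled-variance independent-samples t-test, the statistic
# reported in the abstract (t(48) = 3.779). Scores here are synthetic examples.
import math
import statistics

def independent_t(a, b):
    """Return the pooled-variance two-sample t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

rc = [84, 79, 88, 91, 76, 85, 90, 82, 87, 80]   # reinforcement condition (hypothetical scores)
nrc = [72, 68, 75, 70, 79, 66, 74, 71, 69, 73]  # non-reinforcement condition
t, df = independent_t(rc, nrc)
print(f"t({df}) = {t:.3f}")
```

In practice one would use a statistics library (e.g. SciPy's `ttest_ind`) rather than hand-rolling the formula; the sketch only makes the reported quantity concrete.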
(This article belongs to the Section AI in Robotics)

19 pages, 5786 KB  
Article
Center of Pressure Measurement Sensing System for Dynamic Biomechanical Signal Acquisition and Its Self-Calibration
by Ni Li, Jianrui Zhang and Keer Zhang
Sensors 2026, 26(3), 910; https://doi.org/10.3390/s26030910 - 30 Jan 2026
Viewed by 397
Abstract
The development of highly dynamic bipedal robots demands sensing capable of capturing key contact-related signals in real time, particularly the Center of Pressure (CoP). CoP is fundamental for locomotion control and state estimation and is also of interest in biomedical applications such as gait analysis and lower-limb assistive devices. To enable reliable CoP acquisition under dynamic walking, this paper presents a foot-mounted measurement system and an online self-calibration method that adapts sensor scale and bias parameters during locomotion using both external foot sensors and the robot’s proprioceptive measurements. We demonstrate an online self-calibration pipeline that updates foot-sensor scale and bias parameters during a walking experiment on a NAO-V5 platform using a sliding window optimization. The reported results indicate improved within-trial consistency relative to an offline-calibrated reference baseline under the tested walking conditions. In addition, the framework reconstructs a digitized estimate of the vertical ground reaction force (vGRF) from load-cell readings; due to ADC quantization and the discrete offline calibration dataset, the vGRF signal may exhibit stepwise behavior and should be interpreted as a reconstructed (digitized) quantity rather than laboratory-grade continuous force metrology. Overall, the proposed sensing-and-calibration pipeline offers a practical solution for dynamic CoP acquisition with low-cost hardware. Full article
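The core quantity this sensing system acquires, the Center of Pressure, is the force-weighted average of the contact-sensor positions. A minimal sketch, assuming a hypothetical four-load-cell foot layout (not the paper's actual sensor placement):

```python
# Hedged sketch: CoP from discrete foot-mounted load cells.
# CoP = sum(p_i * F_i) / sum(F_i) over sensor positions p_i and vertical forces F_i.

def center_of_pressure(positions, forces):
    """CoP (x, y) in metres from sensor positions and vertical forces in newtons."""
    total = sum(forces)
    if total <= 0:
        return None  # foot not in contact: CoP undefined
    x = sum(px * f for (px, _), f in zip(positions, forces)) / total
    y = sum(py * f for (_, py), f in zip(positions, forces)) / total
    return x, y

# Assumed layout: four load cells at the corners of a 0.10 m x 0.06 m foot plate
sensor_xy = [(0.00, 0.00), (0.10, 0.00), (0.00, 0.06), (0.10, 0.06)]
readings_n = [20.0, 20.0, 10.0, 10.0]
print(center_of_pressure(sensor_xy, readings_n))
```

The paper's contribution lies in calibrating the per-sensor scale and bias online; this sketch shows only the CoP definition those calibrated readings feed into.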
(This article belongs to the Special Issue Advanced Biomedical Imaging and Signal Processing)

19 pages, 5137 KB  
Article
An Accessible AI-Assisted Rehabilitation System for Guided Upper Limb Therapy
by Kevin Hou, Md Mahafuzur Rahaman Khan and Mohammad H. Rahman
Sensors 2025, 25(19), 6239; https://doi.org/10.3390/s25196239 - 8 Oct 2025
Cited by 1 | Viewed by 2023
Abstract
Conventional upper limb rehabilitation methods often encounter significant obstacles, including high costs, limited accessibility, and reduced patient adherence. Emerging technological solutions, such as telerehabilitation, virtual reality (VR), and wearable sensor-based systems, address some of these challenges but still face issues concerning supervision quality, affordability, and usability. To overcome these limitations, this study presents an innovative and cost-effective rehabilitation system based on advanced computer vision techniques and artificial intelligence (AI). Developed using Python (3.11.5), the proposed system utilizes a standard webcam in conjunction with robust pose estimation algorithms to provide real-time analysis of patient movements during guided upper limb exercises. Instructional exercise videos featuring an NAO robot facilitate patient engagement and consistency in practice. The system generates instant quantitative feedback on movement precision, repetition accuracy, and exercise phase completion. The core advantages of the proposed approach include minimal equipment requirements, affordability, ease of setup, and enhanced interactive guidance compared to traditional telerehabilitation methods. By reducing the complexity and expense associated with many VR and wearable-sensor solutions, while acknowledging that some lower-cost and haptic-enabled VR options exist, this single-webcam approach aims to broaden access to guided home rehabilitation without specialized hardware. Full article
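The kind of movement-precision check such a webcam-based system runs typically reduces to joint angles computed from pose-estimation keypoints. A minimal sketch with hypothetical 2D landmark coordinates (a real system would take them from a pose estimator such as a MediaPipe-style model, which the paper does not name):

```python
# Hedged sketch: elbow angle from three 2D pose keypoints.
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical keypoints: upper arm vertical, forearm horizontal -> 90 degrees
shoulder, elbow, wrist = (0.0, 0.0), (0.0, -0.3), (0.25, -0.3)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")  # 90.0 deg
```

Comparing such angles per frame against a reference exercise trajectory is one plausible way to score repetition accuracy and phase completion as described above.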
(This article belongs to the Section Biomedical Sensors)

16 pages, 835 KB  
Article
Enhancing Communication in Minimally Verbal Autistic Children: A Study on NAO-Assisted Therapy
by Marcella Di Cara, Margherita La Fauci, Maria Tresoldi, Maria Rita Caputo, Daniele Borzelli, Roberta Maggio, Caterina Campestre, Antonella Barbera, Adriana Piccolo, Carmela De Domenico, Massimo Di Blasi, Rocco Salvatore Calabrò, Emanuela Tripodi, Caterina Impallomeni and Francesca Cucinotta
J. Clin. Med. 2025, 14(11), 3735; https://doi.org/10.3390/jcm14113735 - 26 May 2025
Cited by 5 | Viewed by 5097
Abstract
Background/Objectives: Minimally verbal autistic children face significant communication challenges, often unmet by traditional therapies. Social robots, like NAO, offer predictable, structured interactions that may improve engagement and language skills. This study aimed to evaluate the effectiveness of NAO-assisted therapy in improving communication and social interaction in minimally verbal autistic children compared to standard therapeutic approaches. Methods: In a single-blind, randomized, controlled study, 37 autistic children aged 4–12 years were assigned to either an NAO-assisted therapy group or a standard speech therapy control group. The intervention included 12 weekly 45 min sessions. Communication outcomes were measured using the Language Development Level Test (TVL) and mand request observations. Results: All 37 participants completed the 12 sessions without adverse events, highlighting the intervention’s feasibility and safety. Children in the NAO-assisted therapy group showed greater improvements in verbal communication (on average, 159 ± 49% more children exhibited improvement across verbal aspects (range: 107–284%; p < 0.001)), particularly in spontaneous communication, compared to the control group. The therapy also increased mand production (from 6.8 ± 4.3 in session 1 to 16.7 ± 7.7 in session 12; p < 0.001; average gain: 0.9 per session), demonstrating steady growth in communicative initiative. These findings underscore the structured and engaging nature of NAO-assisted therapy in supporting consistent progress in communication skills. Conclusions: NAO-assisted therapy is a promising, safe, and effective intervention for enhancing communication in minimally verbal autistic children, offering unique benefits in promoting spontaneous and consistent verbal engagement. Full article

11 pages, 1513 KB  
Article
How Human–Robot Interaction Can Influence Task Performance and Perceived Cognitive Load at Different Support Conditions
by Simone Varrasi, Roberto Vagnetti, Nicola Camp, John Hough, Alessandro Di Nuovo, Sabrina Castellano and Daniele Magistro
Information 2025, 16(5), 374; https://doi.org/10.3390/info16050374 - 30 Apr 2025
Cited by 2 | Viewed by 2823
Abstract
Cognitive load refers to the mental resources used for executing simultaneous tasks. Since these resources are limited, individuals can only process a specific amount of information at a time. Daily activities often involve mentally demanding tasks, which is why social robots have been proposed to simplify them and support users. This study aimed to verify whether and how a social robot can enhance performance and support the management of cognitive load. Participants completed a baseline condition in which a cognitive activity was carried out without support, and three further conditions in which similar activities of increasing difficulty were carried out collaboratively with the NAO robot. In each condition, errors, time, and perceived cognitive load were measured. Results revealed that the robot improved performance and reduced perceived cognitive load compared to the baseline, but this support was then thwarted by excessive levels of cognitive load. Future research should focus on developing and designing collaborative human–robot interactions that consider the user’s mental demand, to promote effective and personalized robotic help for independent living. Full article
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)

22 pages, 1669 KB  
Article
Empowering Education with Intelligent Systems: Exploring Large Language Models and the NAO Robot for Information Retrieval
by Nikos Fragakis, Georgios Trichopoulos and George Caridakis
Electronics 2025, 14(6), 1210; https://doi.org/10.3390/electronics14061210 - 19 Mar 2025
Cited by 5 | Viewed by 2593
Abstract
To unlock more aspects of human cognitive structuring, human–AI and human–robot interactions require increasingly advanced communication skills on both the human and robot sides. This paper compares three methods of retrieving cultural heritage information in primary school education: search engines, large language models (LLMs), and the NAO humanoid robot, which serves as a facilitator with programmed answering capabilities for convergent questions. Human–robot interaction has become a critical aspect of modern education, with robots like the NAO providing new opportunities for engaging and personalized learning experiences. The NAO, with its anthropomorphic design and ability to interact with students, presents a unique approach to fostering deeper connections with educational content, particularly in the context of cultural heritage. The paper includes an introduction, extensive literature review, methodology, research results from student questionnaires, and conclusions. The findings highlight the potential of intelligent and embodied technologies for enhancing knowledge retrieval and engagement, demonstrating the NAO’s ability to adapt to student needs and facilitate more dynamic learning interactions. Full article

24 pages, 259 KB  
Article
How Do Older Adults Perceive Technology and Robots? A Participatory Study in a Care Center in Poland
by Paulina Zguda, Zuzanna Radosz-Knawa, Tymon Kukier, Mikołaj Radosz, Alicja Kamińska and Bipin Indurkhya
Electronics 2025, 14(6), 1106; https://doi.org/10.3390/electronics14061106 - 11 Mar 2025
Cited by 6 | Viewed by 3271
Abstract
One of the key areas of application for social robots is healthcare, particularly for the elderly. To better address user needs, a study involving the humanoid robot NAO was conducted at the Municipal Care Center in Krakow, Poland, with the participation of 29 older adults. This participatory design study explored their attitudes toward robots and technology both before and after interacting with the robot. It also identified the most desirable applications of social robots that could simplify everyday life for the elderly. Full article
12 pages, 693 KB  
Systematic Review
Exploring the Impact of Socially Assistive Robots in Rehabilitation Scenarios
by Arianna Carnevale, Alessandra Raso, Carla Antonacci, Letizia Mancini, Alessandra Corradini, Alice Ceccaroli, Carlo Casciaro, Vincenzo Candela, Alessandro de Sire, Pieter D’Hooghe and Umile Giuseppe Longo
Bioengineering 2025, 12(2), 204; https://doi.org/10.3390/bioengineering12020204 - 19 Feb 2025
Cited by 9 | Viewed by 3597
Abstract
Background: Socially Assistive Robots (SARs) represent an innovative approach in rehabilitation technology, significantly enhancing the support and motivation for individuals across diverse rehabilitation settings. Despite their growing utilization, especially in stroke recovery and pediatric rehabilitation, their potential in musculoskeletal and orthopedic rehabilitation remains largely underexplored. Although there is methodological and outcome variability across the included studies, this review aims to critically evaluate and summarize the research on SARs in rehabilitation, providing a thorough overview of the current evidence and practical applications. Methods: A comprehensive search was conducted across multiple databases, resulting in the selection of 20 studies for analysis. The reviewed papers were categorized into three main classes based on the roles of the robots in rehabilitation: Motivation, Imitation, and Feedback Providers. Results: The analysis highlights that SARs significantly improve adherence to rehabilitation programs, enhance motor function, and increase motivation across clinical and home settings. Robots such as NAO, Pepper, and ZORA demonstrated high efficacy, particularly in stroke recovery and pediatric rehabilitation. Conclusions: SARs offer transformative benefits in rehabilitation, providing scalable, personalized solutions through motivational support, guided exercises, and real-time feedback. Their integration into orthopedic rehabilitation could address critical clinical needs, enhancing precision in exercises, adherence to long-term programs, and overall patient outcomes. Future research should prioritize the development and validation of SAR-based interventions for musculoskeletal disorders to unlock their full potential in this domain. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)

28 pages, 9455 KB  
Article
Advancing Emotionally Aware Child–Robot Interaction with Biophysical Data and Insight-Driven Affective Computing
by Diego Resende Faria, Amie Louise Godkin and Pedro Paulo da Silva Ayrosa
Sensors 2025, 25(4), 1161; https://doi.org/10.3390/s25041161 - 14 Feb 2025
Cited by 9 | Viewed by 5263
Abstract
This paper investigates the integration of affective computing techniques using biophysical data to advance emotionally aware machines and enhance child–robot interaction (CRI). By leveraging interdisciplinary insights from neuroscience, psychology, and artificial intelligence, the study focuses on creating adaptive, emotion-aware systems capable of dynamically recognizing and responding to human emotional states. Through a real-world CRI pilot study involving the NAO robot, this research demonstrates how facial expression analysis and speech emotion recognition can be employed to detect and address negative emotions in real time, fostering positive emotional engagement. The emotion recognition system combines handcrafted and deep learning features for facial expressions, achieving an 85% classification accuracy during real-time CRI, while speech emotions are analyzed using acoustic features processed through machine learning models with an 83% accuracy rate. Offline evaluation of the combined emotion dataset using a Dynamic Bayesian Mixture Model (DBMM) achieved a 92% accuracy for facial expressions, and the multilingual speech dataset yielded 98% accuracy for speech emotions using the DBMM ensemble. Observations from psychological and technological aspects, coupled with statistical analysis, reveal the robot’s ability to transition negative emotions into neutral or positive states in most cases, contributing to emotional regulation in children. This work underscores the potential of emotion-aware robots to support therapeutic and educational interventions, particularly for pediatric populations, while setting a foundation for developing personalized and empathetic human–machine interactions. These findings demonstrate the transformative role of affective computing in bridging the gap between technological functionality and emotional intelligence across diverse domains. Full article
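The multimodal fusion this abstract describes can be illustrated with a simple weighted mixture of per-modality class posteriors. This is a deliberately simplified stand-in for the paper's Dynamic Bayesian Mixture Model (DBMM) ensemble; the weights and probabilities below are invented, not the paper's values.

```python
# Hedged sketch: weighted late fusion of facial-expression and speech-emotion
# classifier posteriors into a single emotion decision.

def fuse(prob_sets, weights):
    """Combine classifier posteriors as a weighted mixture, then renormalize."""
    classes = prob_sets[0].keys()
    mixed = {c: sum(w * p[c] for p, w in zip(prob_sets, weights)) for c in classes}
    total = sum(mixed.values())
    return {c: v / total for c, v in mixed.items()}

face = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}    # facial-expression model output
speech = {"happy": 0.5, "neutral": 0.4, "sad": 0.1}  # speech-emotion model output
fused = fuse([face, speech], weights=[0.6, 0.4])
print(max(fused, key=fused.get))  # fused emotion decision
```

A DBMM additionally adapts the mixture weights over time from each classifier's recent reliability; the static weights here show only the fusion step itself.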
(This article belongs to the Special Issue Multisensory AI for Human-Robot Interaction)

22 pages, 3579 KB  
Article
Gait-to-Gait Emotional Human–Robot Interaction Utilizing Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer
by Chenghao Li, Kah Phooi Seng and Li-Minn Ang
Sensors 2025, 25(3), 734; https://doi.org/10.3390/s25030734 - 25 Jan 2025
Cited by 4 | Viewed by 2174
Abstract
The emotional response of robotics is crucial for promoting the socially intelligent level of human–robot interaction (HRI). The development of machine learning has extensively stimulated research on emotional recognition for robots. Our research focuses on emotional gaits, a type of simple modality that stores a series of joint coordinates and is easy for humanoid robots to execute. However, a limited amount of research investigates emotional HRI systems based on gaits, indicating an existing gap in human emotion gait recognition and robotic emotional gait response. To address this challenge, we propose a Gait-to-Gait Emotional HRI system, emphasizing the development of an innovative emotion classification model. In our system, the humanoid robot NAO can recognize emotions from human gaits through our Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer (TS-ST) and respond with pre-set emotional gaits that reflect the same emotion as the human presented. Our TS-ST outperforms the current state-of-the-art human-gait emotion recognition model applied to robots on the Emotion-Gait dataset. Full article

19 pages, 3110 KB  
Article
Improving Imitation Skills in Children with Autism Spectrum Disorder Using the NAO Robot and a Human Action Recognition
by Abeer Alnafjan, Maha Alghamdi, Noura Alhakbani and Yousef Al-Ohali
Diagnostics 2025, 15(1), 60; https://doi.org/10.3390/diagnostics15010060 - 29 Dec 2024
Cited by 11 | Viewed by 3962
Abstract
Background/Objectives: Autism spectrum disorder (ASD) is a group of developmental disorders characterized by poor social skills, low motivation in activities, and a lack of interaction with others. Traditional intervention approaches typically require support under the direct supervision of well-trained professionals. However, teaching and training programs for children with ASD can also be enhanced by assistive technologies, artificial intelligence, and robotics. Methods: In this study, we examined whether robotics can improve the imitation skills of children with autism and support therapists during therapeutic sessions. We designed scenarios for training hand-clapping imitation skills using the NAO robot and analyzed the interaction between children with autism and the robot. Results: We developed a deep learning approach based on a human action recognition algorithm for analyzing clapping imitation. Conclusions: Our findings suggest that integrating robotics into therapeutic practices can effectively enhance the imitation skills of children with ASD, offering valuable support to therapists. Full article

18 pages, 2763 KB  
Article
Impact of Robot Size and Number on Human–Robot Persuasion
by Abeer Alam, Michael Lwin, Aila Khan and Omar Mubin
Information 2024, 15(12), 782; https://doi.org/10.3390/info15120782 - 5 Dec 2024
Cited by 6 | Viewed by 3398
Abstract
Technological progress has seamlessly integrated digital assistants into our everyday lives, sparking an interest in social robots that communicate through both verbal and non-verbal means. The potential of these robots to influence human behaviour and attitudes holds significant implications for fields such as healthcare, marketing, and promoting sustainability. This study investigates how the design and behavioural aspects of social robots affect their ability to persuade, drawing on principles from human interaction to enhance the quality of human–robot interactions. Conducted in three stages, the experiments involved 73 participants, offering a comprehensive view of human responses to robotic persuasion. Surprisingly, the findings reveal that individuals tend to be more receptive to a single robot than to groups of robots. Nao was identified as more effective and capable of persuasion than Pepper. This study shows that successful persuasion by robots depends on social influence, the robot’s appearance, and people’s past experiences with technology. Full article
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)

35 pages, 5660 KB  
Article
“Warning!” Benefits and Pitfalls of Anthropomorphising Autonomous Vehicle Informational Assistants in the Case of an Accident
by Christopher D. Wallbridge, Qiyuan Zhang, Victoria Marcinkiewicz, Louise Bowen, Theodor Kozlowski, Dylan M. Jones and Phillip L. Morgan
Multimodal Technol. Interact. 2024, 8(12), 110; https://doi.org/10.3390/mti8120110 - 5 Dec 2024
Cited by 3 | Viewed by 2747
Abstract
Despite the increasing sophistication of autonomous vehicles (AVs) and promises of increased safety, accidents will occur. These will corrode public trust and negatively impact user acceptance, adoption and continued use. It is imperative to explore methods that can potentially reduce this impact. The aim of the current paper is to investigate the efficacy of informational assistants (IAs) varying by anthropomorphism (humanoid robot vs. no robot) and dialogue style (conversational vs. informational) on trust in and blame on a highly autonomous vehicle in the event of an accident. The accident scenario involved a pedestrian violating the Highway Code by stepping out in front of a parked bus and the AV not being able to stop in time during an overtake manoeuvre. The humanoid (Nao) robot IA did not improve trust (across three measures) or reduce blame on the AV in Experiment 1, although communicated intentions and actions were perceived by some as being assertive and risky. Reducing assertiveness in Experiment 2 resulted in higher trust (on one measure) in the robot condition, especially with the conversational dialogue style. However, there were again no effects on blame. In Experiment 3, participants had multiple experiences of the AV negotiating parked buses without negative outcomes. Trust significantly increased across each event, although it plummeted following the accident with no differences due to anthropomorphism or dialogue style. The perceived capabilities of the AV and IA before the critical accident event may have had a counterintuitive effect. Overall, evidence was found for a few benefits and many pitfalls of anthropomorphising an AV with a humanoid robot IA in the event of an accident situation. Full article
(This article belongs to the Special Issue Cooperative Intelligence in Automated Driving-2nd Edition)

26 pages, 2800 KB  
Article
Reflective Dialogues with a Humanoid Robot Integrated with an LLM and a Curated NLU System for Positive Behavioral Change in Older Adults
by Ryan Browne, Mirza Mohtashim Alam, Qasid Saleem, Abrar Hyder, Tatsuya Kudo, Francesca D’Agresti, Martino Maggio, Keiko Homma, Eerik-Juhanna Siitonen, Naoko Kounosu, Kristiina Jokinen, Michael McTear, Giulio Napolitano, Kyoungsook Kim, Junichi Tsujii, Rainer Wieching, Toshimi Ogawa and Yasuyuki Taki
Electronics 2024, 13(22), 4364; https://doi.org/10.3390/electronics13224364 - 7 Nov 2024
Cited by 3 | Viewed by 3717
Abstract
We developed an innovative system that combines Natural Language Understanding (NLU), a curated knowledge base, and the efficient management of a Large Language Model (LLM) to support motivational health coaching. Using Rasa as the core framework, we enhanced it by integrating the GPT-3.5-turbo model. Users opt into reflective dialogues during conversations. When they respond to open-ended questions, their input goes directly to the GPT-3.5-turbo model, allowing for more flexible responses. To provide curated trustworthy content, we integrated a knowledge provision component that searches a PDF-based knowledge base and generates user-friendly responses using Retrieval-Augmented Generation. We tested the system in a real-world scenario by deploying it on a Nao robot in seven older adults’ homes for 1–2 weeks, encouraging positive behavioral changes in some users. Our system serves as a valuable foundation for building an even more integrated, personalized system that can connect with other Application Programing Interfaces (APIs) and integrate with home sensors and edge devices. Full article
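The knowledge-provision component described above follows the standard Retrieval-Augmented Generation pattern: retrieve the most relevant chunk of the curated knowledge base, then hand it to the LLM as grounding context. A minimal sketch of the retrieval step, reduced to bag-of-words overlap scoring; the chunk texts are invented, and a production system like the paper's would use embedding-based similarity over text extracted from the PDF knowledge base.

```python
# Hedged sketch: keyword-overlap retrieval, the simplest form of the RAG
# retrieval step. Chunks and query are illustrative.

def score(query, chunk):
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q)  # fraction of query words present in the chunk

def retrieve(query, chunks, k=1):
    """Return the k chunks with the highest overlap score."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

chunks = [
    "Regular walking improves balance and cardiovascular health in older adults",
    "A balanced diet with enough protein supports muscle maintenance",
    "Social activities reduce feelings of loneliness",
]
top = retrieve("how does walking help my health", chunks)[0]
print(top)  # this chunk would be prepended to the LLM prompt as context
```

Routing open-ended user turns to the LLM while answering in-scope questions from retrieved curated content is what lets such a system stay both flexible and trustworthy.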
(This article belongs to the Special Issue Human-Computer Interactions in E-health)

17 pages, 8291 KB  
Article
Experimental Validation of the Essential Model for a Complete Walking Gait with the NAO Robot
by Emanuel Marquez-Acosta, Victor De-León-Gómez, Victor Santibañez, Christine Chevallereau and Yannick Aoustin
Robotics 2024, 13(8), 123; https://doi.org/10.3390/robotics13080123 - 22 Aug 2024
Cited by 3 | Viewed by 2488
Abstract
In this paper, for the first time, experimental tests of complete offline walking gaits generated by the essential model are performed. This model does not simplify the dynamics of the robot, and its main advantage is the definition of desired Zero Moment Point trajectories. The designed gaits are implemented on the NAO robot, with starting and stopping stages also included. Simulations in MATLAB and Webots, and experiments with the real robot, are shown. Important remarks about implementing walking trajectories on the NAO robot are also included, such as dealing with the hip joint shared by both legs. A comparison between the linear inverted pendulum (LIP) model and the essential model is also addressed in the experiments. As expected, the robot fails to follow the offline gait generated by the LIP model, but succeeds with the essential model. Moreover, to push the boundaries of the essential model, a complex gait is designed with a vertical motion of the center of mass and an abrupt movement of the arms. As shown in the experiments, no external balance controller is required to perform this complex gait. This verifies the efficiency of the essential model for designing stable open-loop complex gaits. Full article
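The LIP model the abstract compares against relates the Zero Moment Point to the center-of-mass state through zmp = x - (z_c / g) * ẍ, for a point mass at constant height z_c. A minimal sketch with illustrative values (not the NAO experiment's trajectories):

```python
# Hedged sketch: ZMP under the linear inverted pendulum (LIP) approximation.
# With zero CoM acceleration the ZMP sits directly under the CoM.
G = 9.81  # gravitational acceleration, m/s^2

def zmp_lip(com_x, com_x_ddot, z_c):
    """ZMP along x (metres) for CoM position com_x, acceleration com_x_ddot,
    and constant CoM height z_c under the LIP model."""
    return com_x - (z_c / G) * com_x_ddot

# Illustrative NAO-scale values: CoM at 0.02 m, 0.5 m/s^2 forward, height 0.26 m
print(f"{zmp_lip(0.02, 0.5, 0.26):.4f} m")
```

The constant-height assumption is exactly what the essential model relaxes, which is why the complex gait with vertical CoM motion described above falls outside what the LIP model can track.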
(This article belongs to the Section Humanoid and Human Robotics)
