Search Results (1,819)

Search Parameters:
Keywords = human-robot interaction

29 pages, 3356 KB  
Review
Comparative Analysis of Actuation Methods in Flexible Upper-Limb Exoskeleton Robots
by Cuizhi Fei, Zheng Deng, Chongyu Wang, Shuai Wang and Hui Li
Actuators 2026, 15(3), 171; https://doi.org/10.3390/act15030171 - 18 Mar 2026
Abstract
The flexible upper-limb exoskeleton robot (exosuit) is composed of fabrics, soft actuators and compliant force-transmitting structures, and provides assistance or rehabilitation training for the shoulders, elbows, wrists and hands. By realizing human–robot collaboration, this kind of system offers comfort, light weight and portability, thus promoting motor function recovery and neural plasticity. This review establishes a classification and comparison framework for flexible upper-limb exoskeletons based on actuation modality and systematically summarizes the research progress under each modality. The relevant literature published from 2015 to 2025 was retrieved from the EI, IEEE Xplore, PubMed and Web of Science databases. After screening against preset inclusion and exclusion criteria, 64 original research papers were included for analysis. The flexible upper-limb exoskeleton robots are classified according to actuation modality, and the resulting classes of systems are summarized and compared. Motor–cable/tendon actuation and pneumatic/hydraulic actuation have advanced substantially and are approaching technical maturity for flexible upper-limb exoskeletons. Meanwhile, designs based on passive/hybrid mechanisms (e.g., elastic energy storage elements and clutches) and new intelligent material actuations show a diversified development trend. Future development is expected to focus further on light weight and compliance; by integrating multimodal sensing and feedback control, motion intention recognition and human–robot interaction theories, actuation systems are expected to move towards modularization, intelligence and high power density, in order to achieve more comfortable, lighter and more effective flexible upper-limb exoskeleton systems.
(This article belongs to the Section Actuators for Robotics)

23 pages, 376 KB  
Article
INTELLECTUM: A Hybrid AR-VR Metaverse Framework for Smart Cities
by Andrey Nechesov and Janne Ruponen
Appl. Syst. Innov. 2026, 9(3), 61; https://doi.org/10.3390/asi9030061 - 17 Mar 2026
Abstract
This work presents INTELLECTUM as a reference architecture and design-time evaluation framework for multi-entity XR–AI–digital twin systems. Rather than optimizing a specific implementation, the paper formalizes architectural invariants, event semantics, and coordination mechanisms that precede and inform system realization. INTELLECTUM provides a conceptual framework for structuring interactions across physical and virtual environments, emphasizing human-centered design, immersive digital twins, and collaborative extended-reality workspaces. The technical specification defines core architectural components, human integration modalities via WebXR and heterogeneous sensor networks, and representative usage scenarios within smart city ecosystems. By enabling AI-assisted urban planning, interactive simulation, and multi-actor coordination, INTELLECTUM positions itself as an XR-based architectural foundation for next-generation smart city platforms.
(This article belongs to the Special Issue Information Industry and Intelligence Innovation)

20 pages, 1971 KB  
Article
Human–Robot Interaction Strategy of Service Robot with Insufficient Capability in Self-Service Shop
by Wa Gao, Tao He, Yang Ji, Yue Kan and Fusheng Zha
Biomimetics 2026, 11(3), 213; https://doi.org/10.3390/biomimetics11030213 - 16 Mar 2026
Abstract
This paper explores the interaction strategies of service robots in self-service shops from a user experience perspective, focusing on robots with insufficient capabilities. A Yanshee robot and a self-developed localization-rotation system are employed as the experimental platform, with a sales return in a self-service shop as the experimental scenario. Two types of robot capability insufficiency, three robot apology strategies, and a social interaction cue imitated from human salespeople are considered in the design of the human–robot interaction strategy in this scenario. The results show that robots' social insufficiency has a more negative influence than robots' performance insufficiency on customer experiences of fluency, comprehensibility, impression, intelligence, and willingness for future interaction. An empathetic apology when the robot has insufficient performance is an effective interaction strategy. The interaction cue of the robot turning to face customers is not beneficial to customer experiences, but it does influence the internal relationship between customer experiences during and after HRI. In the case of robots with social insufficiency in a self-service shop, impression, intelligence and interaction capability have positive impacts on the willingness for future interaction, and are themselves positively affected by fluency or comprehensibility. In the case of robots with performance insufficiency, impression has a positive impact on willingness but is not directly related to fluency. The findings are valuable for informing the interaction design of service robots deployed in shopping, especially in real environments where performance and cost must be balanced.
(This article belongs to the Section Biomimetic Design, Constructions and Devices)

25 pages, 2220 KB  
Article
HRC Metrology: Assessment Criteria, Metrics and Methods for Human–Robot Co-Manipulation Tasks
by S. M. Mizanoor Rahman
Machines 2026, 14(3), 336; https://doi.org/10.3390/machines14030336 - 16 Mar 2026
Abstract
We developed a human–robot collaborative manipulation (co-manipulation) system in the form of a power assist robotic system (PARS), in which a human and a robot collaborated to co-manipulate an object with power assistance. In a first experiment, a human subject performed the co-manipulation task with the PARS in each trial while an expert human–robot co-manipulation researcher observed the task. We collected the co-manipulation and observation data, analyzed them, reviewed the related literature, and developed an HRC (human–robot collaboration) metrology consisting of the criteria, metrics and methods needed to assess human–robot collaborative manipulation tasks. The proposed HRC metrology covers assessment criteria for both human–robot collaborative performance and human–robot interactions (HRI). We then developed a second human–robot co-manipulation system using a robot manipulator, in which the co-manipulation task was performed in conjunction with a collaborative assembly task between robot and human co-workers. In a second experiment, we assessed the co-manipulation task for each robotic system separately against the developed HRC metrology to verify and validate the practicality, usability and effectiveness of the criteria, metrics and methods. The results showed that the HRC metrology was effective and practical in assessing the co-manipulation tasks. We then discussed the strengths and limitations of the assessment criteria, metrics and methods. The proposed HRC metrology can be used to assess human–robot collaborative performance and human–robot interactions in co-manipulation tasks, with potential real-world applications in industrial manipulation and manufacturing, transport, logistics, civil construction, rescue and disaster management, timber processing, etc.
(This article belongs to the Special Issue Design and Control of Assistive Robots)

58 pages, 7331 KB  
Review
Human–Robot Interaction in Indoor Mobile Robotics: Current State, Interaction Modalities, Applications, and Future Challenges
by Arman Ahmed Khan and Kerstin Thurow
Sensors 2026, 26(6), 1840; https://doi.org/10.3390/s26061840 - 14 Mar 2026
Abstract
This paper provides a comprehensive survey of Human–Robot Interaction (HRI) for indoor mobile robots operating in human-centered environments such as hospitals, laboratories, offices, and homes. We review interaction modalities—including speech, gesture, touch, visual, and multimodal interfaces—and examine key user experience factors such as usability, trust, and social acceptance. Implementation challenges are discussed, encompassing safety, privacy, and regulatory considerations. Representative case studies, including healthcare and domestic platforms, highlight design trade-offs and integration lessons. We identify critical technical challenges, including robust perception, reliable multimodal fusion, navigation in dynamic spaces, and constraints on computation and power. Finally, we outline future directions, including embodied AI, adaptive context-aware interactions, and standards for safety and data protection. This survey aims to guide the development of indoor mobile robots capable of collaborating with humans naturally, safely, and effectively.

22 pages, 4960 KB  
Article
Development of a Neural-Fuzzy-Based Variable Admittance Control Strategy for an Upper Limb Rehabilitation Exoskeleton
by Yixing Shi, Keyi Li, Yehong Zhang and Qingcong Wu
Sensors 2026, 26(6), 1838; https://doi.org/10.3390/s26061838 - 14 Mar 2026
Abstract
Upper limb motor dysfunction resulting from stroke requires effective rehabilitation solutions; however, current exoskeletons are limited by single-input control, inadequate adaptation to various rehabilitation stages, and restriction to one limb. This study presents the development of a three-degree-of-freedom upper limb rehabilitation exoskeleton with three core innovations: (1) a neuro-fuzzy adaptive admittance control architecture that integrates human–robot interaction force and joint angular velocity as dual inputs for real-time damping adjustment, enabling accurate capture of dynamic movement intentions; (2) a Brunnstrom stage-specific fuzzy rule base that directly links clinical rehabilitation needs to adaptive control parameters; (3) a bilateral adaptable mechanical structure, allowing dual-upper limb training to enhance practical application. By combining radial basis function (RBF) neural network-based adaptive proportional–integral–derivative (PID) control with fuzzy variable-parameter admittance control, the system achieves a maximum trajectory tracking error of less than 1.2° and a root mean square (RMS) error of ≤0.13°. Trajectory tracing experiments confirm an RMS error of 2.99 mm for a circular trajectory at Bd = 2. The proposed strategy, validated through position tracking, admittance interaction, and trajectory tracing experiments, effectively balances tracking accuracy and human–machine compliance, providing valuable technical support for robot-assisted upper limb rehabilitation.
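The abstract above describes variable admittance control without reproducing its equations. Purely as a generic illustration of the underlying idea, and not the authors' neuro-fuzzy controller, a discrete mass-damper admittance filter that maps a measured interaction force to a velocity reference can be sketched as follows; all parameter values are arbitrary:

```python
def admittance_step(v, f_ext, m=1.0, b=5.0, dt=0.01):
    """One explicit-Euler step of a mass-damper admittance filter:
        m * dv/dt + b * v = f_ext
    It maps the measured interaction force f_ext to a velocity reference v.
    Parameter names and values are illustrative, not taken from the paper."""
    dv = (f_ext - b * v) / m
    return v + dv * dt

# Under a constant force, the velocity reference settles at f_ext / b.
v = 0.0
for _ in range(2000):
    v = admittance_step(v, f_ext=10.0)
```

Making the damping `b` vary online with interaction force and joint velocity, as the paper describes, would turn this fixed filter into a variable admittance law.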

32 pages, 7928 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses a three-dimensional SNN architecture, NeuCube, that is spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable in terms of revealing patterns in data and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also achieves higher or competitive accuracy, although this was not the main goal here: experiments on benchmark datasets show classification accuracy above 80% on single-modality data and 88.9% when multimodal audio–visual data are used and a "don't know" output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson's disease, Alzheimer's disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games. The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher performance accuracy, and improved adaptability and explainability. This research represents a step toward building brain-inspired AI systems.

29 pages, 645 KB  
Article
BCI-Inspired Adaptive Agents in Human–Robot Interaction: A Structural Framework for Coordinated Interaction Design
by Ionica Oncioiu, Iustin Priescu, Daniela Joița, Geanina Silviana Banu and Cătălina-Mihaela Priescu
Electronics 2026, 15(6), 1206; https://doi.org/10.3390/electronics15061206 - 13 Mar 2026
Abstract
The accelerated integration of intelligent agents in user-centered digital environments has intensified research in the field of Human–Robot Interaction, especially regarding mechanisms for adaptive, intuitive, and cognitively aligned communication. The present study develops and empirically examines a structural model of BCI-inspired adaptive agents designed to support coordinated interaction in HRI contexts. The study analyzes users' perceptions of standardized hypothetical interaction scenarios involving BCI-inspired adaptive digital agents, where BCI inspiration is conceptual and refers to adaptive architectures interpreting behavioral cues rather than direct neural signal acquisition. The proposed model integrates four main constructs—perceived technological innovation, user involvement, agent adaptivity, and digital synergy—and examines their associations with user satisfaction in digital collaborative environments. Data were collected through an anonymous questionnaire (N = 268) and analyzed using structural equation modeling with the PLS-SEM method. The structural model demonstrates substantial explanatory power, accounting for 66.8% of the variance in user satisfaction (R² = 0.668). The study contributes by empirically supporting a scenario-based structural evaluation framework suitable for early-stage adaptive HRI system design. The results highlight the role of digital synergy in aligning innovation, engagement, and adaptive behavior in BCI-inspired adaptive HRI systems, providing directions for the design of adaptive robotic agents oriented toward coordinated interaction, user-centered integration, and responsible use in collaborative digital ecosystems.
(This article belongs to the Special Issue Human Robot Interaction: Techniques, Applications, and Future Trends)

26 pages, 4174 KB  
Article
An Adaptive Neuro-Fuzzy Fractional-Order PID Controller for Energy-Efficient Tracking of a 2-DOF Hip–Knee Lower-Limb Exoskeleton
by Mukhtar Fatihu Hamza and Auwalu Muhammad Abdullahi
Modelling 2026, 7(2), 54; https://doi.org/10.3390/modelling7020054 - 12 Mar 2026
Abstract
For safe and efficient human–robot interaction, lower-limb exoskeletons used for assistance and rehabilitation need to be controlled precisely and energy-efficiently. By creating an adaptive neuro-fuzzy fractional-order PID (ANFIS-FOPID) controller, this project seeks to improve tracking accuracy, robustness, and energy efficiency in a two-degree-of-freedom hip–knee exoskeleton. The Euler–Lagrange formulation is used to derive a nonlinear dynamic model, and a Lyapunov-based stability analysis shows that the closed-loop system remains uniformly ultimately bounded under disturbances and parameter uncertainties. Numerical simulations conducted under both normal and perturbed conditions show that the proposed controller performs noticeably better than traditional PID and fixed-parameter FOPID controllers. The ANFIS-FOPID achieves root mean square errors below 0.028 rad and lowers the integral absolute errors at the hip and knee joints to 0.1454 and 0.1480, as opposed to 0.3496–0.3712 for PID controllers. Under ±10% parameter uncertainty, the total control-energy proxy drops from 2870.0 (PID) to 936.25, a 67.4% decrease, and stays at 1587.93. Statistically significant variations in energy consumption are confirmed by one-way ANOVA (p < 10^−176), with large effect sizes (η² = 0.237–0.314). These results demonstrate the superior tracking performance, robustness, and energy efficiency of the ANFIS-FOPID controller. Although based on high-fidelity simulations, the results set a quantitative benchmark for future experimental validation and hardware-in-the-loop implementation.
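The ANFIS tuning itself cannot be reconstructed from the abstract. As a hedged sketch of the underlying fractional-order PID (PI^λD^μ) law, the two fractional operators can be approximated with truncated Grünwald–Letnikov sums over the stored error history; the function names and gains below are illustrative, not the paper's:

```python
def gl_coeffs(alpha, n):
    """Grünwald–Letnikov coefficients c_j = (-1)^j * C(alpha, j),
    via the standard recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def fopid_output(err_hist, kp, ki, kd, lam, mu, h):
    """PI^lambda D^mu law u = Kp*e + Ki*D^(-lam)e + Kd*D^(mu)e, with both
    fractional operators approximated by truncated GL sums over the error
    history (newest sample last, sample time h). Illustrative gains only;
    this is not the ANFIS-tuned controller from the paper."""
    n = len(err_hist)
    ci = gl_coeffs(-lam, n)            # fractional integral: order -lambda
    cd = gl_coeffs(mu, n)              # fractional derivative: order mu
    rev = err_hist[::-1]               # rev[j] = e(t - j*h)
    i_term = h ** lam * sum(c * e for c, e in zip(ci, rev))
    d_term = h ** -mu * sum(c * e for c, e in zip(cd, rev))
    return kp * err_hist[-1] + ki * i_term + kd * d_term

errs = [0.0, 0.1, 0.3]                 # error history, newest sample last
u = fopid_output(errs, kp=2.0, ki=1.0, kd=0.5, lam=1.0, mu=1.0, h=0.1)
```

With λ = μ = 1 the sums collapse to the classical rectangular-rule integral and first difference, which is a convenient sanity check on the approximation.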

27 pages, 8343 KB  
Article
Modeling Human–Robot Impact Dynamics in Collaborative Applications
by Alessio Caneschi, Matteo Bottin and Giulio Rosati
Actuators 2026, 15(3), 165; https://doi.org/10.3390/act15030165 - 12 Mar 2026
Abstract
This study presents an integrated experimental and modeling framework to investigate human–robot collision dynamics involving a collaborative manipulator (KUKA LBR iiwa 14 R820). A dedicated impact test prototype was developed to reproduce controlled contact scenarios between the robot and human body analogues under various dynamic conditions. The experimental setup enables the acquisition of synchronized force, velocity, and displacement signals during contact events. These data are used to calibrate and validate a set of contact models, ranging from classical formulations such as Hertz and Hunt–Crossley to more recent supervised machine learning models. The proposed methodology allows a quantitative assessment of model accuracy and physical consistency in replicating real collision phenomena. Furthermore, the effective mass of the robot along its kinematic chain is estimated to compute impact energy and predict the interaction severity according to ISO 10218-1/2:2025 safety limits. The results highlight the trade-off between model complexity and predictive capability, offering alternative guidelines for collision severity evaluation in collaborative robotics applications.
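The calibrated model parameters are not given in this listing. As a generic illustration of the Hertz and Hunt–Crossley formulations the paper compares, with arbitrary stiffness and damping values:

```python
def hunt_crossley_force(x, x_dot, k=1e4, n=1.5, chi=0.3):
    """Hunt–Crossley contact force F = k * x**n * (1 + chi * x_dot) for
    penetration depth x >= 0; zero when the bodies are not in contact.
    Hertz is the special case chi = 0 (no damping). All parameter values
    are illustrative, not the calibrated ones from the paper."""
    if x <= 0.0:
        return 0.0
    return k * x ** n * (1.0 + chi * x_dot)

# During loading (x_dot > 0) damping adds force; during unloading it
# subtracts, producing a hysteresis loop that dissipates impact energy.
f_load = hunt_crossley_force(0.01, 0.5)
f_unload = hunt_crossley_force(0.01, -0.5)
f_hertz = hunt_crossley_force(0.01, 0.0)
```

The velocity-dependent term makes the loading force exceed the unloading force at equal penetration; the pure Hertz model (chi = 0) is lossless.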

22 pages, 1747 KB  
Review
Talking Head Generation Through Generative Models and Cross-Modal Synthesis Techniques
by Hira Nisar, Salman Masood, Zaki Malik and Adnan Abid
J. Imaging 2026, 12(3), 119; https://doi.org/10.3390/jimaging12030119 - 10 Mar 2026
Abstract
Talking Head Generation (THG) is a rapidly advancing field at the intersection of computer vision, deep learning, and speech synthesis, enabling the creation of animated human-like heads that can produce speech and express emotions with high visual realism. The core objective of THG systems is to synthesize coherent and natural audio–visual outputs by modeling the intricate relationship between speech signals, facial dynamics, and emotional cues. These systems find widespread applications in virtual assistants, interactive avatars, video dubbing for multilingual content, educational technologies, and immersive virtual and augmented reality environments. Moreover, the development of THG has significant implications for accessibility technologies, cultural preservation, and remote healthcare interfaces. This survey paper presents a comprehensive and systematic overview of the technological landscape of Talking Head Generation. We begin by outlining the foundational methodologies that underpin the synthesis process, including generative adversarial networks (GANs), motion-aware recurrent architectures, and attention-based models. A taxonomy is introduced to organize the diverse approaches based on the nature of input modalities and generation goals. We further examine the contributions of various domains such as computer vision, speech processing, and human–robot interaction, each of which plays a critical role in advancing the capabilities of THG systems. The paper also provides a detailed review of datasets used for training and evaluating THG models, highlighting their coverage, structure, and relevance. In parallel, we analyze widely adopted evaluation metrics, categorized by their focus on image quality, motion accuracy, synchronization, and semantic fidelity. Operating parameters such as latency, frame rate, resolution, and real-time capability are also discussed to assess deployment feasibility. Special emphasis is placed on the integration of generative artificial intelligence (GenAI), which has significantly enhanced the adaptability and realism of talking head systems through more powerful and generalizable learning frameworks.

24 pages, 14494 KB  
Article
Volumetric Obstacle Avoidance Based on Dynamic Movement Primitives for Robot Path Planning in Human–Robot Collaboration
by Arturo Daniel Sosa-Ceron, Hugo G. Gonzalez-Hernandez and Jorge Antonio Reyes-Avendaño
Appl. Sci. 2026, 16(5), 2531; https://doi.org/10.3390/app16052531 - 6 Mar 2026
Abstract
Human–robot collaboration (HRC) can be defined as the close interaction between a human user and a robot working together to accomplish a specific task. True collaboration, however, can only be realized when humans and robots share the same workspace simultaneously and move freely within it. To address this, Learning from Demonstrations (LfD) helps robots master complicated tasks, greatly reducing programming time and allowing task generalization. However, complex robot tasks require complex path planning so that a robot can move from one place to another in a heavily constrained workspace along a collision-free path. To this end, a robot programming framework based on Dynamic Movement Primitives (DMPs) is proposed. The framework derives and implements a solution for robot path planning and includes a new DMP formulation with volumetric obstacle avoidance for robot LfD. The formulation equips robotic systems with the capability of online adaptation in the presence of dynamic obstacles. Quantitative evaluations demonstrate high success rates (>96% in tested scenarios) in collision avoidance and typical trajectory adaptation times on the order of milliseconds (<5 ms), supporting its applicability. These methods have been applied in both simulation and real robotic scenarios using a UR10e collaborative robot from Universal Robots for testing and validation purposes. The results indicate that the proposed approach can effectively make the robot follow a user-defined trajectory and learn how to adapt it to avoid collisions with volumetric obstacles of different shapes and poses in an unconstrained human–robot collaborative environment.
(This article belongs to the Special Issue Human–Robot Interaction and Control)
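The paper's volumetric formulation is not reproduced here. As a hypothetical one-dimensional sketch of a DMP-style point attractor with an obstacle coupling term (all names, gains and the braking-only coupling are invented for illustration):

```python
import math

def dmp_rollout(x0, g, obstacle=None, tau=1.0, k=150.0, d=25.0,
                gamma=100.0, beta=8.0, dt=0.001, steps=3000):
    """Minimal 1-D dynamic-movement-primitive sketch:
        tau * v' = k * (g - x) - d * v + p(x, v)
    where p brakes motion toward a point obstacle and decays exponentially
    with distance. A toy stand-in for the paper's volumetric formulation;
    every parameter value here is illustrative."""
    x, v, path = x0, 0.0, [x0]
    for _ in range(steps):
        p = 0.0
        if obstacle is not None and v * (obstacle - x) > 0:
            # Moving toward the obstacle: add repulsive braking.
            p = -gamma * v * math.exp(-beta * abs(x - obstacle))
        v += dt * (k * (g - x) - d * v + p) / tau
        x += dt * v
        path.append(x)
    return path

# The rollout is slowed near the obstacle but still reaches the goal.
path = dmp_rollout(0.0, 1.0, obstacle=0.5)
path_free = dmp_rollout(0.0, 1.0)
```

Because the coupling only ever opposes velocity toward the obstacle, it adds damping rather than energy, so the goal attractor still converges once the obstacle region is cleared.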

22 pages, 7057 KB  
Article
Educational Simulator for Sustainable Energy Management for a Typical Household
by Flaviu Mihai Frigura-Iliasa, Grigorie Dennis Sergiu, Krzysztof Sornek, Maksymilian Homa and Mihaela Frigura-Iliasa
Sustainability 2026, 18(5), 2506; https://doi.org/10.3390/su18052506 - 4 Mar 2026
Abstract
This paper presents the development of Electrohouse, a 3D educational simulator that illustrates the electricity consumption of a household equipped with a photovoltaic (PV) system and is designed to teach users how to manage electrical equipment efficiently from an energy perspective. The paper addresses elements of energy system modeling, human–computer interaction and educational visualization. The application connects electricity consumption graphs with practical appliance controls, providing a comprehensive view of kilowatt-hour usage through an intuitive interface. The software offers two consumption scenarios, one covering 28 days and one covering 30 days. Furthermore, the household integrates a photovoltaic solar panel for direct energy production, with the system simulating an actual meter by deducting the generated energy from the accumulated consumption. Relevant for sustainability, especially in the field of energy education, the project also incorporates a prototype of a night-time home surveillance robot designed for intruder detection and control. This study contributes to the global framework of Sustainable Development Goals (SDGs) adopted by the United Nations. The simulator supports SDG 7 (Affordable and Clean Energy) by promoting awareness of photovoltaic integration with household energy optimization, and SDG 4 (Quality Education) by providing an interactive digital learning environment that improves energy literacy and sustainability-oriented skills.
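The metering behavior described above, with generation deducted from accumulated consumption, can be illustrated with a minimal hypothetical helper; the zero floor is my assumption about a one-way residential meter, not a detail from the paper:

```python
def net_meter_kwh(consumption_kwh, generation_kwh):
    """Net metered reading in kWh: PV generation is deducted from the
    accumulated consumption, floored at zero as a simple one-way meter
    would read. A generic illustration, not Electrohouse's actual model."""
    return max(consumption_kwh - generation_kwh, 0.0)

reading = net_meter_kwh(300.0, 120.0)   # 300 kWh used, 120 kWh generated
```

A bidirectional meter or a feed-in tariff would instead track the surplus separately rather than clamping it.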

53 pages, 5533 KB  
Systematic Review
Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review
by Matthew Lisondra, Beno Benhabib and Goldie Nejat
Robotics 2026, 15(3), 55; https://doi.org/10.3390/robotics15030055 - 4 Mar 2026
Abstract
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, where intelligent systems perceive, [...] Read more.
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, where intelligent systems perceive, reason, and act through physical interaction, mobile service robots can achieve more flexible understanding, adaptive behavior, and robust task execution in dynamic real-world environments. Despite this progress, embodied AI for mobile service robots continues to face fundamental challenges related to the translation of natural language instructions into executable robot actions, multimodal perception in human-centered environments, uncertainty estimation for safe decision-making, and computational constraints for real-time onboard deployment. In this paper, we present the first systematic review of foundation models in mobile service robotics, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Using an OpenAlex literature search, we considered 7506 papers spanning the years 1968–2025. Our detailed analysis identified these four main challenges and examined how recent advances in foundation models have addressed them. We further examine real-world applications in domestic assistance, healthcare, and service automation, highlighting how foundation models enable context-aware, socially responsive, and generalizable robot behaviors.
Beyond technical considerations, we discuss the ethical, societal, and human-interaction implications, as well as the physical design and ergonomic considerations, associated with deploying foundation-model-enabled service robots in human environments. Finally, we outline future research directions emphasizing reliability and lifelong adaptation, privacy-aware and resource-constrained deployment, as well as the governance and human-in-the-loop frameworks required for safe, scalable, and trustworthy mobile service robotics. Full article
(This article belongs to the Special Issue Embodied Intelligence: Physical Human–Robot Interaction)
19 pages, 4128 KB  
Review
When Robots Learn: A Bibliometric Review of Artificial Intelligence in Engineering Applications of Robotics
by Eduardo García-Sardón, Pablo Fernández-Arias, Antonio del Bosque and Diego Vergara
Appl. Sci. 2026, 16(5), 2466; https://doi.org/10.3390/app16052466 - 4 Mar 2026
Viewed by 280
Abstract
The convergence of robotics and artificial intelligence (AI) has transformed engineering by enabling the design of intelligent systems capable of learning, adapting, and performing complex tasks. These synergies are driving innovation across multiple engineering disciplines, including mechanical, materials, electrical, industrial, civil, and aerospace engineering. This review provides a comprehensive overview of the knowledge structure and emerging research directions of robotics and AI in engineering, with the aim of identifying research trends, influential authors, leading institutions, and emerging thematic areas. Data were collected from the Web of Science and Scopus databases, covering the period from 2020 to 2025, and analyzed using bibliometric mapping techniques and performance indicators. The results reveal sustained growth in research on autonomous systems, collaborative robots, and human–robot interaction within engineering contexts, with a strong emphasis on AI-driven optimization. Bibliometric analyses show that deep learning, reinforcement learning, and computer vision constitute the core enabling technologies structuring the field. In addition, the results highlight a high degree of international collaboration and a concentration of scientific output and impact in a limited number of leading countries, institutions, and journals. Full article
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)
