Human-Robot Interaction and Applications: Challenges and Future Perspectives

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 31 December 2025 | Viewed by 13256

Special Issue Editors


Prof. Dr. Bipin Indurkhya
Guest Editor
Cognitive Science Department, Jagiellonian University, 31-007 Krakow, Poland
Interests: cognitive science; cognitive robotics; intelligent systems for multimodal human–computer interaction

Dr. Eleuda Nuñez
Guest Editor
Center for Cybernics Research, University of Tsukuba, Ibaraki 305-8577, Japan
Interests: human-centered computing; haptic devices; computational models of human behavior; interaction design studies

Prof. Dr. Marie-Monique Schaper
Guest Editor
Department for Computer Engineering and Digital Design, Universitat de Lleida, 25006 Lleida, Spain
Interests: educational robotics; embodied interaction; computing education; design research; participatory design

Special Issue Information

Dear Colleagues,

The emergence of new technologies and application areas in human–robot interaction is significantly changing how people experience and interact with the world. This Special Issue sheds light on challenges in the design and use of robots and intelligent systems in everyday life, and it invites scholars to critically reflect on the future perspectives of this research field.

We encourage the following types of submissions (among others):

  • Design studies or evaluative research that highlight how the features of robots and intelligent systems are intended to support people’s engagement, learning, and behaviors in everyday life.
  • Critical, sociological, or methodological articles on the opportunities and challenges of designing and using intelligent technology in this field.
  • Towards the development of empathic machines: understanding and modeling human behavior to create machines that can respond to and understand humans at an emotional level.
  • Affective haptics: sensors and/or actuators designed to support human–robot interaction through touch.
  • Ethnographic and cultural topics related to human–robot interaction.
  • Calm-technology approaches to human–robot interaction.
  • In-the-wild and field studies on human–robot interaction.
  • Child–robot interaction.
  • Infant–robot interaction.

Prof. Dr. Bipin Indurkhya
Dr. Eleuda Nuñez
Prof. Dr. Marie-Monique Schaper
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • affective haptics
  • calm technology
  • child–robot interaction
  • co-design
  • empathetic HRI
  • ethnographic studies on HRI
  • infant–robot interaction
  • multimodal HRI
  • participatory design

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (9 papers)


Research

14 pages, 2032 KiB  
Article
Surface Reading Model via Haptic Device: An Application Based on Internet of Things and Cloud Environment
by Andreas P. Plageras, Christos L. Stergiou, Vasileios A. Memos, George Kokkonis, Yutaka Ishibashi and Konstantinos E. Psannis
Electronics 2025, 14(16), 3185; https://doi.org/10.3390/electronics14163185 - 11 Aug 2025
Viewed by 211
Abstract
In this research paper, we implemented a computer program based on the XML language to sense differences in image color depth by using haptic/tactile devices. With the use of bump maps and tools such as Autodesk's 3D Studio Max, Adobe Photoshop, and Adobe Illustrator, we were able to obtain the desired results. The haptic devices used for the experiments were the PHANTOM Touch and the PHANTOM Omni from 3D Systems. The programs installed and configured to model the surfaces, run the experiments, and achieve the desired goal were the H3D API, Geomagic_OpenHaptics, and OpenHaptics_Developer_Edition. The purpose of this project was to let users feel different textures, shapes, and objects in images by using a haptic device. The primary objective was to create a system from the ground up that renders visuals on the screen and facilitates interaction with them via the haptic device. The main focus of this work is to propose a novel pattern of images that can be classified as different textures so that they can be identified by people with reduced vision. Full article
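
As an illustrative sketch of the general bump-map idea rather than the paper's actual code, an image can be reduced to a grayscale height map that a haptic renderer might sample; the Pillow/NumPy usage and the input file name below are assumptions.

```python
# Minimal sketch: derive a height map from an image's luminance, as a stand-in
# for the bump-map surfaces rendered to a haptic device in the paper.
# Assumes Pillow and NumPy are installed; "texture.png" is a hypothetical file.
import numpy as np
from PIL import Image

def image_to_height_map(path: str, max_height_mm: float = 2.0) -> np.ndarray:
    """Map pixel luminance (0-255) to a surface height in millimetres."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return (gray / 255.0) * max_height_mm  # brighter pixels -> taller surface

if __name__ == "__main__":
    heights = image_to_height_map("texture.png")
    print(heights.shape, heights.min(), heights.max())
```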

16 pages, 5104 KiB  
Article
Integrating OpenPose for Proactive Human–Robot Interaction Through Upper-Body Pose Recognition
by Shih-Huan Tseng, Jhih-Ciang Chiang, Cheng-En Shiue and Hsiu-Ping Yueh
Electronics 2025, 14(15), 3112; https://doi.org/10.3390/electronics14153112 - 5 Aug 2025
Viewed by 319
Abstract
This paper introduces a novel system that utilizes OpenPose for skeleton estimation to enable a tabletop robot to interact with humans proactively. By accurately recognizing upper-body poses based on the skeleton information, the robot autonomously approaches individuals and initiates conversations. The contributions of this paper can be summarized into three main features. Firstly, we conducted a comprehensive data collection process, capturing five different table-front poses: looking down, looking at the screen, looking at the robot, resting the head on hands, and stretching both hands. These poses were selected to represent common interaction scenarios. Secondly, we designed the robot’s dialog content and movement patterns to correspond with the identified table-front poses. By aligning the robot’s responses with the specific pose, we aimed to create a more engaging and intuitive interaction experience for users. Finally, we performed an extensive evaluation by exploring the performance of three classification models—non-linear Support Vector Machine (SVM), Artificial Neural Network (ANN), and convolutional neural network (CNN)—for accurately recognizing table-front poses. We used an Asus Zenbo Junior robot to acquire images and leveraged OpenPose to extract 12 upper-body skeleton points as input for training the classification models. The experimental results indicate that the ANN model outperformed the other models, demonstrating its effectiveness in pose recognition. Overall, the proposed system not only showcases the potential of utilizing OpenPose for proactive human–robot interaction but also demonstrates its real-world applicability. By combining advanced pose recognition techniques with carefully designed dialog and movement patterns, the tabletop robot successfully engages with humans in a proactive manner. Full article
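
As a hedged sketch of this general pipeline (not the authors' implementation), the 12 upper-body keypoints can be flattened into a 24-dimensional feature vector and fed to a small neural-network classifier; the example below uses scikit-learn's MLPClassifier on random placeholder data.

```python
# Minimal sketch: classify table-front poses from 12 upper-body keypoints
# (x, y pairs -> 24 features). Data here is random placeholder; in practice the
# features would come from OpenPose and the labels from the collected dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

POSES = ["looking_down", "looking_at_screen", "looking_at_robot",
         "head_on_hands", "stretching_hands"]

rng = np.random.default_rng(0)
X = rng.random((500, 24))                  # 500 samples x 24 keypoint coordinates
y = rng.integers(0, len(POSES), size=500)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```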

21 pages, 1112 KiB  
Article
Observation of Human–Robot Interactions at a Science Museum: A Dual-Level Analytical Approach
by Heeyoon Yoon, Gahyeon Shim, Hanna Lee, Min-Gyu Kim and SunKyoung Kim
Electronics 2025, 14(12), 2368; https://doi.org/10.3390/electronics14122368 - 10 Jun 2025
Viewed by 567
Abstract
This study proposes a dual-level analytical approach to observing human–robot interactions in a real-world public setting, specifically a science museum. Observation plays a crucial role in human–robot interaction research by enabling the capture of nuanced and context-sensitive behaviors that are often missed by post-interaction surveys or controlled laboratory experiments. Public environments such as museums pose particular challenges due to their dynamic and open-ended nature, requiring methodological approaches that balance ecological validity with analytical rigor. To address these challenges, we introduce a dual-level approach for behavioral observation, integrating statistical analysis across demographic groups with time-series modeling of individual engagement dynamics. At the group level, we analyzed engagement patterns based on age and gender, revealing significantly higher interaction levels among children and adolescents compared to adults. At the individual level, we employed temporal behavioral analysis using a Hidden Markov Model to identify sequential engagement states—low, moderate, and high—derived from time-series behavioral patterns. This approach offers both broad and detailed insights into visitor engagement, providing actionable implications for designing adaptive and socially engaging robot behaviors in complex public environments. Furthermore, it can facilitate the analysis of social robot interactions in everyday contexts and contribute to building a practical foundation for their implementation in real-world settings. Full article
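
For readers unfamiliar with the technique, the sketch below shows how a three-state Gaussian Hidden Markov Model can be fitted to a behavioral time series with the hmmlearn package; the synthetic data and model settings are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: infer low/moderate/high engagement states from a 1-D
# time series of per-interval behavior counts using a 3-state Gaussian HMM.
# Synthetic data stands in for the observed visitor behaviors.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)
# Pretend observation: number of engagement behaviors per 10-second window.
series = np.concatenate([rng.poisson(1, 60), rng.poisson(5, 60), rng.poisson(10, 60)])
X = series.reshape(-1, 1).astype(float)

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100, random_state=1)
model.fit(X)
states = model.predict(X)          # sequence of hidden-state labels (0, 1, 2)
print("inferred state sequence:", states[:20])
print("state means:", model.means_.ravel())   # interpret as low/moderate/high
```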

26 pages, 5529 KiB  
Article
Statistically Informed Multimodal (Domain Adaptation by Transfer) Learning Framework: A Domain Adaptation Use-Case for Industrial Human–Robot Communication
by Debasmita Mukherjee and Homayoun Najjaran
Electronics 2025, 14(7), 1419; https://doi.org/10.3390/electronics14071419 - 31 Mar 2025
Viewed by 508
Abstract
Cohesive human–robot collaboration can be achieved through seamless communication between human and robot partners. We posit that the design aspects of human–robot communication (HRCom) can take inspiration from human communication to create more intuitive systems. A key component of HRCom systems is perception models developed using machine learning. Being data-driven, these models suffer from the dearth of comprehensive, labelled datasets while models trained on standard, publicly available datasets do not generalize well to application-specific scenarios. Complex interactions and real-world variability lead to shifts in data that require domain adaptation by the models. Existing domain adaptation techniques do not account for incommensurable modes of communication between humans and robot perception systems. Taking into account these challenges, a novel framework is presented that leverages existing domain adaptation techniques off-the-shelf and uses statistical measures to start and stop the training of models when they encounter domain-shifted data. Statistically informed multimodal (domain adaptation by transfer) learning (SIMLea) takes inspiration from human communication to use human feedback to auto-label for iterative domain adaptation. The framework can handle incommensurable multimodal inputs, is mode and model agnostic, and allows statistically informed extension of datasets, leading to more intuitive and naturalistic HRCom systems. Full article
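
The paper's exact statistical start/stop criterion is not reproduced here, but as one hedged illustration, a two-sample Kolmogorov-Smirnov test can flag when incoming features drift away from the training distribution and adaptation should be triggered; the univariate feature and the 0.05 threshold below are assumptions.

```python
# Minimal sketch: flag domain shift with a two-sample KS test on a single
# feature. The 0.05 threshold and the univariate feature are illustrative
# assumptions, not the SIMLea framework's actual criterion.
import numpy as np
from scipy.stats import ks_2samp

def domain_shift_detected(train_feature: np.ndarray,
                          incoming_feature: np.ndarray,
                          alpha: float = 0.05) -> bool:
    """Return True when the incoming batch likely comes from a shifted domain."""
    statistic, p_value = ks_2samp(train_feature, incoming_feature)
    return p_value < alpha

rng = np.random.default_rng(2)
source = rng.normal(0.0, 1.0, 1000)           # features seen during training
shifted = rng.normal(0.8, 1.2, 200)           # features from a new deployment
if domain_shift_detected(source, shifted):
    print("shift detected -> start domain adaptation / request human feedback")
```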

24 pages, 259 KiB  
Article
How Do Older Adults Perceive Technology and Robots? A Participatory Study in a Care Center in Poland
by Paulina Zguda, Zuzanna Radosz-Knawa, Tymon Kukier, Mikołaj Radosz, Alicja Kamińska and Bipin Indurkhya
Electronics 2025, 14(6), 1106; https://doi.org/10.3390/electronics14061106 - 11 Mar 2025
Cited by 1 | Viewed by 1544
Abstract
One of the key areas of application for social robots is healthcare, particularly for the elderly. To better address user needs, a study involving the humanoid robot NAO was conducted at the Municipal Care Center in Krakow, Poland, with the participation of 29 older adults. This participatory design study explored their attitudes toward robots and technology both before and after interacting with the robot. It also identified the most desirable applications of social robots that could simplify everyday life for the elderly. Full article

21 pages, 1178 KiB  
Article
User Behavior on Value Co-Creation in Human–Computer Interaction: A Meta-Analysis and Research Synthesis
by Xiaohong Chen and Yuan Zhou
Electronics 2025, 14(6), 1071; https://doi.org/10.3390/electronics14061071 - 7 Mar 2025
Viewed by 1125
Abstract
Value co-creation in online communities refers to a process in which all participants within a platform’s ecosystem exchange and integrate resources while engaging in mutually beneficial interactive processes to generate perceived value-in-use. User behavior plays a crucial role in influencing value co-creation in human–computer interaction. However, existing research contains controversies, and there is a lack of comprehensive studies exploring which factors of user behavior influence it and the mechanisms through which they operate. This paper employs meta-analysis to examine the factors and mechanisms based on 42 studies from 2006 to 2023 with a sample size of 30,016. It examines the relationships at the individual, interaction, and environment layers and explores moderating effects through subgroup analysis. The results reveal a positive overall effect between user behavior and value co-creation performance. Factors including self-efficacy, social identity, enjoyment, and belonging (individual layer); information support, social interaction, trust, and reciprocity (interaction layer); as well as shared values, incentives, community culture, and subjective norms (environment layer) positively influence value co-creation. The moderating effect of situational and measurement factors indicates that Chinese communities and monocultural environments have more significant effects than international and multicultural ones, while community type is not significant. Structural equation models and subjective collaboration willingness have a stronger moderating effect than linear regression and objective behavior, which constitutes a counterintuitive finding. This study enhances theoretical research on user behavior and provides insights for managing value co-creation in human–computer interaction. Full article
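
As a hedged illustration of the underlying computation only (the study's data are not reproduced here), a random-effects meta-analysis pools per-study correlations after Fisher's z transformation; all numbers below are invented.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of correlation
# coefficients via Fisher's z transform. The effect sizes and sample sizes
# are invented for illustration only.
import numpy as np

r = np.array([0.30, 0.45, 0.25, 0.50])    # per-study correlations (made up)
n = np.array([150, 200, 120, 300])        # per-study sample sizes (made up)

z = np.arctanh(r)                 # Fisher's z transform
v = 1.0 / (n - 3)                 # within-study variance of z
w = 1.0 / v
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)                    # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)               # between-study variance
w_re = 1.0 / (v + tau2)
z_pooled = np.sum(w_re * z) / np.sum(w_re)
print("pooled correlation:", np.tanh(z_pooled))
```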

14 pages, 1900 KiB  
Article
Combining Genetic Algorithm with Local Search Method in Solving Optimization Problems
by Velin Kralev and Radoslava Kraleva
Electronics 2024, 13(20), 4126; https://doi.org/10.3390/electronics13204126 - 20 Oct 2024
Cited by 4 | Viewed by 2247
Abstract
This research is focused on evolutionary algorithms, with genetic and memetic algorithms discussed in more detail. A graph theory problem related to finding a minimal Hamiltonian cycle in a complete undirected graph (Travelling Salesman Problem—TSP) is considered. The implementations of two approximate algorithms for solving this problem, genetic and memetic, are presented. The main objective of this study is to determine the influence of the local search method versus the influence of the genetic crossover operator on the quality of the solutions generated by the memetic algorithm for the same input data. The results show that when the number of possible Hamiltonian cycles in a graph is increased, the memetic algorithm finds better solutions. The execution time of both algorithms is comparable. Also, the number of solutions that mutated during the execution of the genetic algorithm exceeds 50% of the total number of all solutions generated by the crossover operator. In the memetic algorithm, the number of solutions that mutate does not exceed 10% of the total number of all solutions generated by the crossover operator, summed with those of the local search method. Full article
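
To make the memetic idea concrete, the sketch below combines a simple genetic algorithm (order-based crossover and swap mutation) with 2-opt local search on a small random TSP instance; it illustrates the general technique under assumed parameters, not the authors' implementation.

```python
# Minimal sketch of a memetic algorithm for the TSP: a genetic algorithm with
# order-based crossover and swap mutation, where each offspring is improved by
# a 2-opt local search. Problem size and parameters are illustrative.
import random

random.seed(0)
N = 30
cities = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % N]]) for i in range(N))

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]                       # copy a segment from parent 1
    fill = [c for c in p2 if c not in child]   # remaining cities in parent-2 order
    for k in range(N):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def two_opt(tour):
    improved = True
    while improved:
        improved = False
        for i in range(1, N - 1):
            for j in range(i + 1, N):
                new = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse a segment
                if tour_length(new) < tour_length(tour):
                    tour, improved = new, True
    return tour

population = [random.sample(range(N), N) for _ in range(20)]
for generation in range(20):
    population.sort(key=tour_length)
    parents = population[:10]
    offspring = []
    for _ in range(10):
        p1, p2 = random.sample(parents, 2)
        child = order_crossover(p1, p2)
        if random.random() < 0.1:            # occasional swap mutation
            a, b = random.sample(range(N), 2)
            child[a], child[b] = child[b], child[a]
        offspring.append(two_opt(child))     # local search = the "memetic" step
    population = parents + offspring

print("best tour length:", tour_length(min(population, key=tour_length)))
```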

17 pages, 7640 KiB  
Article
Research on Designing Context-Aware Interactive Experiences for Sustainable Aging-Friendly Smart Homes
by Yi Lu, Lejia Zhou, Aili Zhang, Mengyao Wang, Shan Zhang and Minghua Wang
Electronics 2024, 13(17), 3507; https://doi.org/10.3390/electronics13173507 - 4 Sep 2024
Cited by 4 | Viewed by 3169
Abstract
With the advancement of artificial intelligence, the home care environment for elderly users is becoming increasingly intelligent and systematic. The context-aware human–computer interaction technology of sustainable aging-friendly smart homes can effectively identify user needs, enhance energy efficiency, and optimize resource utilization, thereby improving the convenience and sustainability of smart home care services. This paper reviews the literature and analyzes cases to summarize the background and current state of context-aware interaction experience research in aging-friendly smart homes. Targeting solitary elderly users aged 60–74, the study involves field observations and user interviews to analyze their characteristics and needs and to summarize interaction design principles for aging-friendly smart homes. We explore processes for context awareness and methods for identifying user behaviors, emphasizing the integration of green, eco-friendly, and energy-saving principles in the design process. Focusing on the living experience and quality of life of elderly users living alone, this paper constructs a context-aware user experience model based on multimodal interaction technology. Using elderly falls as a case example, we design typical scenarios for aging-friendly smart homes from the perspectives of equipment layout and innovative hardware and software design. The goal is to optimize the home care experience for elderly users, providing theoretical and practical guidance for smart home services in an aging society. Ultimately, the study aims to develop safer, more convenient, and sustainable home care solutions. Full article
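
As a purely illustrative sketch of the kind of context-aware rule such a fall scenario might involve (not the paper's design), a detected fall event could trigger an escalating response; all sensor fields and thresholds below are hypothetical.

```python
# Minimal sketch: a context-aware rule that escalates when a possible fall is
# detected and the resident does not respond. Sensor fields and thresholds are
# hypothetical placeholders, not the paper's design.
from dataclasses import dataclass

@dataclass
class HomeContext:
    impact_detected: bool       # e.g., from a floor vibration or wearable sensor
    motionless_seconds: int     # time without motion after the impact
    voice_response: bool        # resident answered the system's voice prompt

def decide_action(ctx: HomeContext) -> str:
    if not ctx.impact_detected:
        return "monitor"
    if ctx.voice_response:
        return "log_event"                       # resident confirmed they are fine
    if ctx.motionless_seconds > 30:
        return "alert_caregiver_and_emergency"   # escalate: likely fall
    return "voice_prompt"                        # ask the resident if they are OK

print(decide_action(HomeContext(True, 45, False)))
```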

20 pages, 629 KiB  
Article
Lessons in Developing a Behavioral Coding Protocol to Analyze In-the-Wild Child–Robot Interaction Events and Experiments
by Xela Indurkhya and Gentiane Venture
Electronics 2024, 13(7), 1175; https://doi.org/10.3390/electronics13071175 - 22 Mar 2024
Cited by 2 | Viewed by 1995
Abstract
Behavioral analyses of in-the-wild HRI studies generally rely on interviews or visual information from videos. This can be very limiting in settings where video recordings are not allowed or limited. We designed and tested a vocalization-based protocol to analyze in-the-wild child–robot interactions based upon a behavioral coding scheme utilized in wildlife biology, specifically in studies of wild dolphin populations. The audio of a video or audio recording is converted into a transcript, which is then analyzed using a behavioral coding protocol consisting of 5–6 categories (one indicating non-robot-related behavior, and 4–5 categories of robot-related behavior). Refining the code categories and training coders resulted in increased agreement between coders, but only to a level of moderate reliability, leading to our recommendation that it be used with three coders to assess where there is majority consensus, and thereby correct for subjectivity. We discuss lessons learned in the design and implementation of this protocol and the potential for future child–robot experiments analyzed through vocalization behavior. We also perform a few observational behavior analyses from vocalizations alone to demonstrate the potential of this field. Full article
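
As a hedged illustration of the three-coder consensus idea (not the authors' protocol or data), per-utterance majority agreement and Fleiss' kappa can be computed as follows; the category labels and ratings below are placeholders.

```python
# Minimal sketch: majority consensus and Fleiss' kappa for three coders over a
# handful of coded utterances. The category labels and ratings are placeholders.
from collections import Counter
import numpy as np

CATEGORIES = ["non_robot", "robot_attention", "robot_command",
              "robot_emotion", "robot_question"]

# One row per utterance: the category assigned by each of the three coders.
ratings = [
    ["robot_attention", "robot_attention", "robot_command"],
    ["non_robot", "non_robot", "non_robot"],
    ["robot_emotion", "robot_question", "robot_emotion"],
    ["robot_command", "robot_command", "robot_command"],
]

# Majority consensus per utterance (a tie would need a further opinion).
for row in ratings:
    label, votes = Counter(row).most_common(1)[0]
    print(label if votes >= 2 else "no_consensus")

# Fleiss' kappa over the same ratings.
counts = np.array([[row.count(c) for c in CATEGORIES] for row in ratings], dtype=float)
n_items, n_raters = counts.shape[0], counts.sum(axis=1)[0]
p_j = counts.sum(axis=0) / (n_items * n_raters)            # category proportions
P_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
kappa = (P_i.mean() - np.sum(p_j ** 2)) / (1 - np.sum(p_j ** 2))
print("Fleiss' kappa:", round(kappa, 3))
```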
