

Research on Deep Learning and Human-Robot Collaboration

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (15 January 2026) | Viewed by 2768

Special Issue Editors


Guest Editor
Mechanical Engineering, University of Maryland, College Park, MD 20783, USA
Interests: industrial AI; machine learning; deep learning; computer vision; human–robot collaboration

Guest Editor
Science Department, Università degli Studi “Roma Tre”, Via della Vasca Navale n. 84, 00100 Rome, Italy
Interests: UAV; AUV; underwater gliders; navigation and attitude; fuzzy logic system

Guest Editor
Computer Science Department, The University of Hong Kong, Hong Kong 999077, China
Interests: active perception; scene understanding; field robotics

Guest Editor
Computer Science Department, The University of Hong Kong, Hong Kong, China
Interests: robot manipulation; robot learning; task and motion planning

Guest Editor
Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, University of Coimbra, 3030-790 Coimbra, Portugal
Interests: human-computer interaction; virtual and augmented reality; tangible user interfaces; interaction techniques; interaction design

Special Issue Information

Dear Colleagues,

This Special Issue focuses on advancements in deep learning and its transformative role in enabling human–robot collaboration (HRC). Emphasizing the integration of machine learning models with robotic systems, we explore how deep learning techniques can enhance robotic perception, decision-making, and interaction for seamless and intelligent collaboration with humans in diverse settings.

This Special Issue will cover a wide array of topics including, but not limited to, the following:

  • Deep learning for robotic perception: Object recognition, scene understanding, and real-time pose estimation for human–robot interaction.
  • HRC for intelligent manufacturing: Collaborative robots (cobots) that work alongside humans in dynamic environments by using speech, gesture, and gaze recognition.
  • Learning-driven control and adaptation: Reinforcement learning and domain adaptation techniques for optimizing robot behavior in unstructured or changing environments.
  • Multi-modal interaction: Integration of vision, speech, and sensor data to enhance the intuitiveness and efficiency of HRC systems.
  • Safety and trust in HRC: Using deep learning to model human intent, ensure safety, and foster trust in human–robot interactions.

This Special Issue aims to advance the field by highlighting how deep learning can bridge gaps between human cognition and robotic systems, fostering smoother collaboration. By showcasing innovative methods and real-world applications, it seeks to inspire new research directions and encourage interdisciplinary collaboration across robotics, AI, and human factors engineering.

This Special Issue complements the existing literature by providing fresh perspectives on the integration of deep learning into human–robot collaboration. It expands on traditional approaches by incorporating emerging technologies such as transfer learning, temporal action recognition, and multi-modal fusion for HRC, and it addresses critical challenges such as adaptability, safety, and real-time interaction in robotics. The contributions to this Special Issue are expected to push boundaries, offering insights into the design of more robust, intelligent, and human-centered collaborative systems.

Dr. Haodong Chen
Dr. Enrico Petritoli
Dr. Liang Lu
Dr. Peng Zhou
Dr. Jorge C. S. Cardoso
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • gesture recognition
  • gaze estimation
  • speech recognition
  • multi-modal interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

35 pages, 1070 KB  
Article
Adaptive Deep Learning Framework for Emotion Recognition in Social Robots: Toward Inclusive Human–Robot Interaction for Users with Special Needs
by Eryka Probierz and Adam Gałuszka
Electronics 2026, 15(5), 924; https://doi.org/10.3390/electronics15050924 - 25 Feb 2026
Viewed by 555
Abstract
Emotion recognition is a key capability of social robots operating in real-world human-centered environments, especially when interacting with users with special needs. Such users may express emotions in atypical, subtle, or strongly context-dependent ways. These characteristics pose significant challenges for conventional emotion recognition systems. This paper proposes an adaptive deep learning framework for emotion recognition in social robots. The framework is designed to support inclusive and accessible human–robot interaction. It combines region-based convolutional neural networks with adaptive learning mechanisms. These mechanisms explicitly model individual variability, contextual information, and interaction dynamics. Multiple deep architectures are evaluated to assess robustness across diverse emotional expressions, including those influenced by cognitive, sensory, or developmental differences. Rather than relying on fixed emotion models, the proposed approach emphasizes adaptability. The system dynamically adjusts its perception strategies to user-specific expressive patterns. Experimental validation is conducted using context-aware emotion datasets. Performance is evaluated in terms of detection accuracy, robustness to variability, and generalization across emotion categories. The results show that adaptive mechanisms improve recognition performance in scenarios characterized by non-standard or low-intensity expressions, compared to static baseline models. This study highlights the importance of flexible, context-sensitive perception for inclusive social robotics. It also discusses design implications for deploying emotion-aware robots in assistive, educational, and therapeutic settings. Overall, the proposed framework represents a step toward socially intelligent robots capable of engaging more effectively with users with special needs.
(This article belongs to the Special Issue Research on Deep Learning and Human-Robot Collaboration)

14 pages, 11409 KB  
Article
Automatic Parallel Parking System Design with Fuzzy Control and LiDAR Detection
by Jung-Shan Lin, Hao-Jheng Wu and Jeih-Weih Hung
Electronics 2025, 14(13), 2520; https://doi.org/10.3390/electronics14132520 - 21 Jun 2025
Viewed by 1713
Abstract
This paper presents a self-driving system for automatic parallel parking, integrating obstacle avoidance for enhanced safety. The vehicle platform employs three primary sensors—a web camera, a Zed depth camera, and LiDAR—to perceive its surroundings, including sidewalks and potential obstacles. By processing camera and LiDAR data, the system determines the vehicle’s position and assesses parking space availability, with LiDAR also aiding in malfunction detection. The system operates in three stages: parking space identification, path planning using geometric circles, and fine-tuning with fuzzy control if misalignment is detected. Experimental results, evaluated visually in a model-scale setup, confirm the system’s ability to achieve smooth and reliable parallel parking maneuvers. Quantitative performance metrics, such as precise parking accuracy or total execution time, were not recorded in this study but will be included in future work to further support the system’s effectiveness.
(This article belongs to the Special Issue Research on Deep Learning and Human-Robot Collaboration)
