
A New Era of Embodiment: Cognitive Breakthroughs and Scene Adaptation in Robot Perception, Decision-Making and Autonomous Control

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 30 July 2026 | Viewed by 1640

Special Issue Editors


Dr. Yuanlong Xie
Guest Editor

Co-Guest Editor
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: robotic machining; 3D optical measurement; multi-robot cooperative system

Co-Guest Editor
School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
Interests: digital design methods and technologies (CAD/CAE/optimisation); modern equipment; structural optimisation; intelligent testing

Special Issue Information

Dear Colleagues,

Advanced autonomous robots equipped with multiple sensors are becoming increasingly intelligent and integrated, with remarkable progress in dexterity and task adaptability. Their applications have expanded beyond manufacturing and automation to diverse scenarios such as domestic services, medical care, outdoor farming, and environmental exploration, all of which increasingly require embodied intelligence to bridge environmental perception, physical interaction, and cognitive reasoning. These emerging scenarios pose unprecedented challenges: achieving multimodal perception with context-aware accuracy, coordinating heterogeneous robot teams, and supporting human–robot interaction grounded in embodied common sense.

Against this backdrop, this Special Issue centers on embodied intelligence as its core research pillar and aims to chart a cross-domain innovation pathway for exploring the transformative potential of cutting-edge technologies: machine learning-based dynamic adaptation mechanisms for embodied perception and sensing, large-model-empowered semantic understanding for embodied decision-making, and advanced algorithms that enable robots to learn embodied skills through physical-world interaction. By integrating interdisciplinary advances in embodied cognition, machine learning, and large-model-driven robotics, this Special Issue will provide an excellent platform for researchers to share breakthroughs in autonomous robots with embodied intelligence.

Dr. Yuanlong Xie
Guest Editor

Prof. Dr. Wenlong Li
Prof. Dr. Shuting Wang
Co-Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning (ML)-based robot grasping detection
  • visual language model (VLM)-driven robot perception–plan–grasp framework
  • sensor-augmented visual language action (VLA) model for complex dual-arm operations
  • multi-modal sensing, recognition, mapping, and localization based on learning
  • perception fusion, decision-making, and collaboration in cross-domain robot clusters
  • foundational models for environmental perception and interaction under embodied intelligence
  • sensing and modeling of a collaborative environment for multiple autonomous systems
  • vision–language-based general target recognition and localization
  • human–machine–environment interaction modeling of embodied intelligence
  • perception, planning, and control of integrated embodied intelligence navigation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

15 pages, 3396 KB  
Article
Latent Code Predictor for Accelerating Disparity Estimation in Stereo-Endoscopic Surface Reconstruction
by Jiawei Dang, Bo Yang, Guan Yao, Chao Liu and Wenfeng Zheng
Sensors 2026, 26(8), 2529; https://doi.org/10.3390/s26082529 - 20 Apr 2026
Viewed by 225
Abstract
Disparity estimation from stereo-endoscopic images is critical for 3D reconstruction in minimally invasive surgery (MIS). However, surgical environments have inherent interference factors including soft tissue deformation, motion blur, and photometric inconsistency. Currently, self-supervised generative networks such as StyleGAN offer an alternative method, but their reliance on iterative latent optimization leads to high computational latency and limits practical deployment. In this work, we propose a temporal latent prediction method to accelerate this optimization process. Instead of designing a brand new generator, our framework learns to predict an optimized initial latent vector, thereby reducing the number of optimization steps and per-frame inference time. Crucially, this prediction-guided mechanism does not alter the architecture or inference logic of the generator, ensuring the fidelity of reconstruction is comparable to that of the original method. Experiments on Phantom and In vivo datasets demonstrate that our method reduces average optimization steps by 16–59% and cuts per-frame latency by about 2.3×, compared to baseline predictors and initialization strategies. Importantly, the final photometric loss remains nearly identical across all methods, confirming that acceleration does not compromise reconstruction quality. These results position our approach as a practical step toward efficient, self-supervised stereo-endoscopic reconstruction in clinical settings.
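The warm-start idea in the abstract above can be illustrated with a toy example. A minimal sketch, assuming a quadratic stand-in for the photometric loss and plain gradient descent for the latent optimization: the loop runs once from a random latent (cold start) and once from a predicted initial latent, here simply the previous frame's solution. All names, dimensions, and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def photometric_loss(z, target):
    """Stand-in for the real photometric loss: squared distance
    between the current latent code and the target code."""
    return np.sum((z - target) ** 2)

def optimize_latent(z0, target, lr=0.1, tol=1e-4, max_steps=500):
    """Plain gradient descent on the latent code, mimicking iterative
    latent optimization; returns the refined code and step count."""
    z = z0.copy()
    for step in range(1, max_steps + 1):
        grad = 2.0 * (z - target)  # analytic gradient of the toy loss
        z -= lr * grad
        if photometric_loss(z, target) < tol:
            return z, step
    return z, max_steps

rng = np.random.default_rng(0)
dim = 16
prev_target = rng.normal(size=dim)                        # solved at frame t-1
curr_target = prev_target + 0.05 * rng.normal(size=dim)   # frame t drifts slightly

# Cold start: optimize from a fresh random latent every frame.
_, cold_steps = optimize_latent(rng.normal(size=dim), curr_target)

# Warm start: a "predictor" supplies an initial latent near the answer
# (here simply the previous frame's solution).
_, warm_steps = optimize_latent(prev_target, curr_target)

print(cold_steps, warm_steps)  # warm start converges in fewer steps
```

Because consecutive endoscopic frames change little, the predicted latent starts close to the optimum and the same optimizer needs far fewer iterations, which is the mechanism behind the reported step and latency reductions.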

29 pages, 15025 KB  
Article
Robot End-Effectors Adaptive Design Method Based on Embedding Domain Knowledge into Reinforcement Learning
by Yong Zhu, Taihua Zhang, Yao Lu and Liguo Yao
Sensors 2026, 26(6), 1933; https://doi.org/10.3390/s26061933 - 19 Mar 2026
Viewed by 352
Abstract
Existing robot end-effector design methods lack structured domain prior knowledge and interact insufficiently with the environment, making it difficult to guarantee the accuracy of the design results. An adaptive design method is proposed that deeply embeds domain knowledge of end-effectors into the design process, treats key design parameters as environmental variables, and optimizes them adaptively through reinforcement learning in a perception-and-feedback loop. In a simulation environment constructed in combination with a knowledge graph, a two-finger translational gripper is used as an example end-effector: target data are acquired via sensors, and reinforcement learning adaptively optimizes the gripper's key parameters. Experiments on a simulation platform with three typical tasks yield the optimal parameter ranges. Compared with the proximal policy optimization (PPO) algorithm, which has no prior knowledge input, the knowledge graph embedding proximal policy optimization (KGPPO) algorithm improves the average reward for gripper length and gripper force by 63.96% and 43.09%, respectively, when grasping eggs. KGPPO also achieves the highest average reward and the best stability among the compared algorithms. The experiments show that this method significantly improves the efficiency, stability, and accuracy of design parameter optimization.
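As a rough illustration of embedding domain knowledge into a parameter search (not the authors' KGPPO implementation), the sketch below compares a blind random search over grasp force with one restricted to a feasible band assumed to come from a knowledge graph. The force band, ideal force, and reward are invented for illustration; random search stands in for the full policy-optimization loop.

```python
import random

# Hypothetical prior, assumed to be extracted from a knowledge graph:
# for a fragile object such as an egg, the feasible grasp force lies
# in a known band (values in newtons, invented for illustration).
KG_FORCE_RANGE = (1.0, 5.0)

def task_reward(force):
    """Toy task reward, unknown to the searcher: peaks at the ideal force."""
    ideal = 3.0
    return -abs(force - ideal)

def sample_force(rng, use_prior):
    """Candidate generator: the prior restricts sampling to the feasible band."""
    if use_prior:
        lo, hi = KG_FORCE_RANGE
        return rng.uniform(lo, hi)
    return rng.uniform(0.0, 20.0)  # blind search over the whole design space

def average_reward(use_prior, trials=1000, seed=1):
    """Average reward over a random search, standing in for the RL loop."""
    rng = random.Random(seed)
    return sum(task_reward(sample_force(rng, use_prior))
               for _ in range(trials)) / trials

print(average_reward(False), average_reward(True))
# the knowledge-constrained search earns a much higher average reward
```

Constraining candidates to the knowledge-derived band means fewer samples are wasted on infeasible designs, which is one plausible reading of why the prior-informed algorithm attains higher average rewards than plain PPO in the reported experiments.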

27 pages, 8664 KB  
Article
Research on Robot Collision Response Based on Human–Robot Collaboration
by Sicheng Zhong, Chaoyang Xu, Guoqiang Chen, Yanghuan Xu and Zhijun Wang
Sensors 2026, 26(2), 495; https://doi.org/10.3390/s26020495 - 12 Jan 2026
Viewed by 733
Abstract
With the rapid advancement of science and technology, robotics is evolving towards more profound and extensive applications. Nevertheless, the inherent limitations of traditional industrial “caged” robots have significantly impeded the full utilization of their capabilities. Consequently, breaking free from these constraints and realizing human–robot collaboration has emerged as a new developmental trend in the robotics field. The collision-response mechanism, as a crucial safeguard for human–robot collaboration safety, has become a pivotal issue in enhancing the performance of human–robot interaction. To address this, an adaptive admittance control collision-response algorithm is proposed in this paper, grounded in the principle of admittance control. A collision simulation model of the AUBO-i5 collaborative robot is constructed. The effectiveness of the proposed algorithm is verified through simulation experiments focusing on both the end-effector collision and body collision of the robot, and by comparing it with existing admittance control algorithms. Furthermore, a collision-response experimental platform is established based on the AUBO-i5 collaborative robot. Experimental studies on end-effector and body collisions are conducted, providing practical validation of the reliability and utility of the proposed adaptive admittance control collision-response algorithm.
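The admittance-control principle the paper builds on can be sketched with the standard second-order law M*a + B*v + K*(x - x_ref) = f_ext, integrated in discrete time: an external contact force drives a virtual mass-spring-damper, and the resulting compliant position is sent to the robot. The parameter values, the 1-D setting, and the force profile below are illustrative assumptions, not the paper's adaptive scheme or experimental setup.

```python
# Classical fixed-parameter admittance control in one Cartesian direction.
# Virtual mass M, damping B, and stiffness K are assumed values chosen
# for a critically damped response (illustration only).
M, B, K = 2.0, 20.0, 50.0
DT = 0.001  # control period (s)

def admittance_step(x, v, f_ext, x_ref=0.0):
    """One semi-implicit Euler step of M*a + B*v + K*(x - x_ref) = f_ext,
    yielding the next compliant position and velocity."""
    a = (f_ext - B * v - K * (x - x_ref)) / M
    v = v + a * DT
    x = x + v * DT
    return x, v

x, v = 0.0, 0.0
peak = 0.0
for k in range(3000):                 # 3 s of simulated motion
    f = 10.0 if k < 500 else 0.0      # 10 N collision force for 0.5 s
    x, v = admittance_step(x, v, f)
    peak = max(peak, abs(x))

print(peak, abs(x))  # robot yields under contact, then returns to x_ref
```

Under contact the virtual spring lets the robot retreat (here toward f/K = 0.2 m), and once the force vanishes the commanded position settles back to the reference; an adaptive variant, as proposed in the paper, would adjust M, B, and K online rather than keep them fixed.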
