
Sensors for Object Detection, Pose Estimation, and 3D Reconstruction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 November 2025 | Viewed by 3530

Special Issue Editors


Guest Editor
College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Interests: machine vision; 3D optical inspection; industrial augmented reality

Guest Editor
College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Interests: computer vision; 3D reconstruction

Guest Editor
College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Interests: machine vision; photogrammetry; non-contact strain measurement

Special Issue Information

Dear Colleagues,

This Special Issue encompasses a range of sensing technologies dedicated to gathering data from the environment in order to identify objects, estimate their spatial orientations, and construct detailed 3D representations. Such sensors are pivotal across numerous domains, including robotics, augmented reality, autonomous vehicles, and computer vision.

Object detection systems utilize various sensing technologies, such as cameras, LiDAR, radar, and depth sensors, to perceive objects within their surroundings. These technologies are crucial for applications such as autonomous driving and surveillance.

In the realm of 3D reconstruction, sensors collect data points from multiple vantage points and use algorithms to generate intricate 3D models of either the entire scene or specific objects within it. This process often involves methodologies like point cloud registration, surface reconstruction, and texture mapping. 
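The point cloud registration step mentioned above can be illustrated with a minimal sketch. The example below is a hypothetical, pure-Python, 2D Kabsch-style estimator (real pipelines work in 3D with iterative correspondence search, e.g. ICP): given corresponding point pairs from two scans, it recovers in closed form the rigid rotation and translation that align one scan to the other.

```python
import math

def estimate_rigid_2d(src, dst):
    """Estimate the rotation angle and translation aligning paired 2D points.

    src, dst: lists of (x, y) tuples with known correspondences,
    assumed related by dst = R(theta) @ src + t.
    Returns (theta, (tx, ty)).
    """
    n = len(src)
    # Centroids of both point sets.
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of the centred points.
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy = sx - cx_s, sy - cy_s
        dx, dy = dx - cx_d, dy - cy_d
        num += sx * dy - sy * dx   # cross product -> sine component
        den += sx * dx + sy * dy   # dot product   -> cosine component
    theta = math.atan2(num, den)
    # Translation maps the rotated source centroid onto the destination centroid.
    tx = cx_d - (math.cos(theta) * cx_s - math.sin(theta) * cy_s)
    ty = cy_d - (math.sin(theta) * cx_s + math.cos(theta) * cy_s)
    return theta, (tx, ty)
```

In practice the correspondences are unknown, so methods such as ICP alternate between nearest-neighbour matching and this closed-form alignment until convergence.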

Overall, the integration of various sensors for object detection, pose estimation, and 3D reconstruction enables advanced perception capabilities in robotics, computer vision applications, aviation, aerospace, automotive, and beyond.

Prof. Dr. Liyan Zhang
Dr. Shenglan Liu
Dr. Nan Ye
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine vision
  • 3D optical inspection
  • industrial augmented reality
  • computer vision
  • 3D reconstruction
  • photogrammetry
  • non-contact strain measurement
  • image processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

16 pages, 3014 KiB  
Article
Cross-Modal Interaction Between Perception and Vision of Grasping a Slanted Handrail to Reproduce the Sensation of Walking on a Slope in Virtual Reality
by Yuto Ohashi, Monica Perusquía-Hernández, Kiyoshi Kiyokawa and Nobuchika Sakata
Sensors 2025, 25(3), 938; https://doi.org/10.3390/s25030938 - 4 Feb 2025
Viewed by 578
Abstract
Numerous studies have previously explored the perception of horizontal movements. This includes research on Redirected Walking (RDW). However, the challenge of replicating the sensation of vertical movement has remained a recurring theme. Many conventional methods rely on physically mimicking steps or slopes, which can be hazardous and induce fear. This is especially true when head-mounted displays (HMDs) obstruct the user’s field of vision. Our primary objective was to reproduce the sensation of ascending a slope while traversing a flat surface. This effect is achieved by giving the users the haptic sensation of gripping a tilted handrail similar to those commonly found on ramps or escalators. To achieve this, we developed a walker-type handrail device capable of tilting across a wide range of angles. We induced a cross-modal effect to enhance the perception of walking up a slope. This was achieved by combining haptic feedback from the hardware with an HMD-driven visual simulation of an upward-sloping scene. The results indicated that the condition with tactile presentation significantly alleviated fear and enhanced the sensation of walking uphill compared to the condition without tactile presentation. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)

26 pages, 6416 KiB  
Article
Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal
by Alireza Ghasemieh and Rasha Kashef
Sensors 2024, 24(24), 8040; https://doi.org/10.3390/s24248040 - 17 Dec 2024
Cited by 1 | Viewed by 1019
Abstract
Autonomous technologies have revolutionized transportation, military operations, and space exploration, necessitating precise localization in environments where traditional GPS-based systems are unreliable or unavailable. While widespread for outdoor localization, GPS systems face limitations in obstructed environments such as dense urban areas, forests, and indoor spaces. Moreover, GPS reliance introduces vulnerabilities to signal disruptions, which can lead to significant operational failures. Hence, developing alternative localization techniques that do not depend on external signals is essential, showing a critical need for robust, GPS-independent localization solutions adaptable to different applications, ranging from Earth-based autonomous vehicles to robotic missions on Mars. This paper addresses these challenges using Visual odometry (VO) to estimate a camera’s pose by analyzing captured image sequences in GPS-denied areas tailored for autonomous vehicles (AVs), where safety and real-time decision-making are paramount. Extensive research has been dedicated to pose estimation using LiDAR or stereo cameras, which, despite their accuracy, are constrained by weight, cost, and complexity. In contrast, monocular vision is practical and cost-effective, making it a popular choice for drones, cars, and autonomous vehicles. However, robust and reliable monocular pose estimation models remain underexplored. This research aims to fill this gap by developing a novel adaptive framework for outdoor pose estimation and safe navigation using enhanced visual odometry systems with monocular cameras, especially for applications where deploying additional sensors is not feasible due to cost or physical constraints. This framework is designed to be adaptable across different vehicles and platforms, ensuring accurate and reliable pose estimation. 
We integrate advanced control theory to provide safety guarantees for motion control, ensuring that the AV can react safely to the imminent hazards and unknown trajectories of nearby traffic agents. The focus is on creating an AI-driven model(s) that meets the performance standards of multi-sensor systems while leveraging the inherent advantages of monocular vision. This research uses state-of-the-art machine learning techniques to advance visual odometry’s technical capabilities and ensure its adaptability across different platforms, cameras, and environments. By merging cutting-edge visual odometry techniques with robust control theory, our approach enhances both the safety and performance of AVs in complex traffic situations, directly addressing the challenge of safe and adaptive navigation. Experimental results on the KITTI odometry dataset demonstrate a significant improvement in pose estimation accuracy, offering a cost-effective and robust solution for real-world applications. Full article
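The incremental nature of visual odometry described in this abstract can be sketched by composing per-frame relative motions into a global trajectory. The snippet below is a hypothetical SE(2) illustration (the paper's actual pipeline estimates full camera poses from monocular imagery); it shows only the dead-reckoning step in which body-frame increments are chained into world coordinates.

```python
import math

def compose(pose, delta):
    """Compose a global SE(2) pose (x, y, heading) with a relative
    motion (dx, dy, dtheta) expressed in the current body frame."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the body-frame increment into the world frame, then accumulate.
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            th + dth)

def integrate(deltas, start=(0.0, 0.0, 0.0)):
    """Chain relative motions (e.g. per-frame VO estimates) into a trajectory."""
    traj = [start]
    for d in deltas:
        traj.append(compose(traj[-1], d))
    return traj
```

Because each step compounds, small per-frame errors accumulate as drift, which is why VO systems add loop closure or other corrections on top of this basic integration.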
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)

16 pages, 8801 KiB  
Article
Noise-Robust 3D Pose Estimation Using Appearance Similarity Based on the Distributed Multiple Views
by Taemin Hwang and Minjoon Kim
Sensors 2024, 24(17), 5645; https://doi.org/10.3390/s24175645 - 30 Aug 2024
Viewed by 1531
Abstract
In this paper, we present a noise-robust approach for the 3D pose estimation of multiple people using appearance similarity. The common methods identify the cross-view correspondences between the detected keypoints and determine their association with a specific person by measuring the distances between the epipolar lines and the joint locations of the 2D keypoints across all the views. Although existing methods achieve remarkable accuracy, they are still sensitive to camera calibration, making them unsuitable for noisy environments where any of the cameras slightly change angle or position. To address these limitations and fix camera calibration error in real-time, we propose a framework for 3D pose estimation which uses appearance similarity. In the proposed framework, we detect the 2D keypoints and extract the appearance feature and transfer it to the central server. The central server uses geometrical affinity and appearance similarity to match the detected 2D human poses to each person. Then, it compares these two groups to identify calibration errors. If a camera with the wrong calibration is identified, the central server fixes the calibration error, ensuring accuracy in the 3D reconstruction of skeletons. In the experimental environment, we verified that the proposed algorithm is robust against false geometrical errors. It achieves around 11.5% and 8% improvement in the accuracy of 3D pose estimation on the Campus and Shelf datasets, respectively. Full article
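The point-to-epipolar-line distance test that this abstract builds on can be sketched as follows. This is a minimal, pure-Python illustration assuming a known fundamental matrix F (not the paper's implementation); for a rectified horizontal stereo pair F takes the simple skew form used in the test below, and epipolar lines become image rows.

```python
def epipolar_distance(F, x1, x2):
    """Distance from point x2 in view 2 to the epipolar line F @ x1.

    F: 3x3 fundamental matrix as nested lists; x1, x2: pixel
    coordinates (u, v), converted internally to homogeneous form.
    """
    p1 = (x1[0], x1[1], 1.0)
    # Epipolar line l = F @ p1, with line coefficients (a, b, c).
    a = sum(F[0][k] * p1[k] for k in range(3))
    b = sum(F[1][k] * p1[k] for k in range(3))
    c = sum(F[2][k] * p1[k] for k in range(3))
    # Point-to-line distance |a*u + b*v + c| / sqrt(a^2 + b^2).
    return abs(a * x2[0] + b * x2[1] + c) / (a * a + b * b) ** 0.5
```

A cross-view keypoint pair is accepted as belonging to the same joint when this distance falls below a threshold; a miscalibrated camera systematically inflates these distances, which is the cue such frameworks exploit to detect calibration errors.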
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
