3D Sensing, Semantic Reconstruction and Modelling

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 13409

Special Issue Editor

Dr. Jason Rambach
Guest Editor
German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany
Interests: 3D computer vision; augmented reality; object pose estimation and tracking; machine learning; sensor fusion; domain adaptation; SLAM; 3D sensing

Special Issue Information

Dear Colleagues,

The key challenge for autonomous systems is real-time perception of the environment, in terms of both geometry and semantics, enabling truly intelligent applications in areas such as robotics and XR. Recent developments in machine learning and computer vision, together with advances in 3D-sensing technologies, show great potential towards achieving this vision. Starting from unstructured 3D sensor and 2D camera data, ongoing research focuses on semantic and relational mapping, as well as on the use of geometric prior information, to build accurate, rich, and compact digital representations of the environment. This Special Issue will be a collection of state-of-the-art contributions on topics including, but not limited to:

  • 3D/depth sensing (ToF, lidar, radar);
  • Semantic segmentation and reconstruction;
  • Machine learning on 3D data (point clouds, depth maps);
  • Hybrid methods (machine learning + geometric computer vision);
  • 3D scan to model (scan-to-digital twin, scan-to-BIM);
  • SLAM and scene graphs;
  • Neural fields.

Dr. Jason Rambach
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • robotics
  • XR
  • point cloud
  • depth map
  • segmentation
  • semantics
  • 3D sensing
  • scan-to-X
  • semantic SLAM
  • scene graphs

Published Papers (6 papers)


Research

17 pages, 4954 KiB  
Article
Robust and Fast Normal Mollification via Consistent Neighborhood Reconstruction for Unorganized Point Clouds
by Guangshuai Liu, Xurui Li, Si Sun and Wenyu Yi
Sensors 2023, 23(6), 3292; https://doi.org/10.3390/s23063292 - 20 Mar 2023
Cited by 1 | Viewed by 1221
Abstract
This paper introduces a robust normal estimation method for point cloud data that can handle both smooth and sharp features. Our method is based on the inclusion of neighborhood recognition into the normal mollification process in the neighborhood of the current point. First, the point cloud surfaces are assigned normals via a normal estimator of robust location (NERL), which guarantees the reliability of smooth-region normals; then, a robust feature-point recognition method is proposed to accurately identify points around sharp features. Furthermore, Gaussian maps and clustering are adopted for feature points to seek a rough isotropic neighborhood for the first-stage normal mollification. A second-stage, residual-based normal mollification is proposed to further deal with non-uniform sampling and various complex scenes efficiently. The proposed method was experimentally validated on synthetic and real-world datasets and compared to state-of-the-art methods. Full article
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)
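For readers looking for a concrete entry point, the sketch below shows a plain k-nearest-neighbour PCA normal estimator in Python (NumPy/SciPy). It is only a minimal baseline under generic assumptions; it does not implement the paper's NERL estimator, feature-point recognition, or two-stage mollification.

```python
# Minimal k-NN + PCA point-normal estimation (hedged baseline, not the
# authors' NERL / two-stage mollification pipeline).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Estimate a unit normal per point from the covariance of its k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # neighbor indices, shape (N, k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)          # 3x3 neighborhood covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]            # eigenvector of the smallest eigenvalue
    # Orient normals consistently toward a common viewpoint (here: the origin).
    flip = np.einsum('ij,ij->i', normals, -points) < 0
    normals[flip] *= -1.0
    return normals
```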

14 pages, 11797 KiB  
Article
Three-Dimensional Immersion Scanning Technique: A Scalable Low-Cost Solution for 3D Scanning Using Water-Based Fluid
by Ricardo Spyrides Boabaid Pimentel Gonçalves and Jens Haueisen
Sensors 2023, 23(6), 3214; https://doi.org/10.3390/s23063214 - 17 Mar 2023
Cited by 1 | Viewed by 1463
Abstract
Three-dimensional scanning technology has traditionally been used in the medical and engineering industries, but these scanners can be expensive or limited in their capabilities. This research aimed to develop low-cost 3D scanning using rotation and immersion in a water-based fluid. This technique uses a reconstruction approach similar to CT scanners but with significantly less instrumentation and cost than traditional CT scanners or other optical scanning techniques. The setup consisted of a container filled with a mixture of water and Xanthan gum. The object to be scanned was submerged at various rotation angles. A stepper motor slide with a needle was used to measure the fluid level increment as the object being scanned was submerged into the container. The results showed that 3D scanning using immersion in a water-based fluid is feasible and can be adapted to a wide range of object sizes. The technique produced reconstructed images of objects with gaps or irregularly shaped openings in a low-cost fashion. A 3D printed model with a width of 30.7200 ± 0.2388 mm and height of 31.6800 ± 0.3445 mm was compared to its scan to evaluate the precision of the technique. Its width/height ratio (0.9697 ± 0.0084) overlaps, within the margin of error, the width/height ratio of the reconstructed image (0.9649 ± 0.0191), showing statistical similarity. The signal-to-noise ratio was calculated at around 6 dB. Suggestions for future work are made to improve the parameters of this promising, low-cost technique. Full article
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)
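As a rough illustration of the displacement arithmetic behind the technique, the hypothetical sketch below recovers an object's cross-sectional area at the waterline from measured fluid-level increments, assuming a straight-walled container of known cross-section. The paper's actual CT-style reconstruction from multiple rotation angles is not reproduced here.

```python
# Hypothetical sketch of fluid-displacement scanning arithmetic (assumed
# straight-walled container; not the authors' full reconstruction pipeline).
import numpy as np

def cross_section_profile(levels_mm: np.ndarray,
                          step_mm: float,
                          container_area_mm2: float) -> np.ndarray:
    """levels_mm[i] is the fluid level after lowering the object i steps of step_mm."""
    dh = np.diff(levels_mm)                   # level rise per lowering step
    # Displaced volume per step is A_container * dh; the object is submerged an
    # extra (step + dh) per step, so its waterline cross-section is roughly:
    return container_area_mm2 * dh / (step_mm + dh)

# Synthetic example: a cylinder of ~300 mm^2 cross-section in a 10,000 mm^2 container.
levels = 50.0 + np.cumsum(np.r_[0.0, np.full(10, 300.0 / (10_000 - 300))])
print(cross_section_profile(levels, step_mm=1.0, container_area_mm2=10_000))  # ~300 each
```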

14 pages, 11284 KiB  
Article
A Structure-Based Iterative Closest Point Using Anderson Acceleration for Point Clouds with Low Overlap
by Chao Zeng, Xiaomei Chen, Yongtian Zhang and Kun Gao
Sensors 2023, 23(4), 2049; https://doi.org/10.3390/s23042049 - 11 Feb 2023
Cited by 1 | Viewed by 1478
Abstract
Traditional point-cloud registration algorithms require large overlap between scans, which imposes strict constraints on data acquisition. To facilitate registration, the user has to strategically position or move the scanner to ensure proper overlap. In this work, we design a feature-extraction method based on high-level information to establish structure correspondences and an optimization problem. We then rewrite the problem as a fixed-point problem and apply the Lie algebra to parameterize the transformation matrix. To speed up convergence, we introduce Anderson acceleration, an approach enhanced by heuristics. Our model attends to the structural features of the region of overlap rather than the correspondence between points. The experimental results show that the proposed ICP method is robust, achieves high registration accuracy on point clouds with low overlap on laser datasets, and achieves a computational time that is competitive with that of prevalent methods. Full article
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)
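For reference, a classic point-to-point ICP loop (closest-point correspondences plus an SVD/Kabsch rigid fit) can be sketched as below. This is only a baseline; it does not use the paper's structure-based correspondences, Lie-algebra parameterization, or Anderson acceleration.

```python
# Classic point-to-point ICP (NumPy/SciPy) as a hedged reference sketch.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source: np.ndarray, target: np.ndarray, iters: int = 50, tol: float = 1e-6):
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)           # closest-point correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```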

31 pages, 11917 KiB  
Article
Segmentation of Structural Elements from 3D Point Cloud Using Spatial Dependencies for Sustainability Studies
by Joram Ntiyakunze and Tomo Inoue
Sensors 2023, 23(4), 1924; https://doi.org/10.3390/s23041924 - 08 Feb 2023
Cited by 2 | Viewed by 2962
Abstract
The segmentation of point clouds obtained from existing buildings provides the ability to perform a detailed structural analysis and an overall life-cycle assessment of buildings. The major challenge in dealing with existing buildings is the presence of diverse and large amounts of occluding objects, which limits the segmentation process. In this study, we use unsupervised methods that integrate knowledge about the structural forms of buildings and their spatial dependencies to segment points into common structural classes. We first develop a novel approach for joining remotely disconnected patches, which arise from data missing due to occluding objects, using pairs of detected planar patches. Afterward, segmentation approaches are introduced to classify the pairs of refined planes into floor slabs, floor beams, walls, and columns. Finally, we test our approach on a large dataset with high levels of occlusion and compare it to recent segmentation methods. Compared to many other segmentation methods, the study shows good results in segmenting structural elements by their constituent surfaces. Potential areas of improvement, particularly in segmenting the wall and beam classes, are highlighted for further studies. Full article
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)
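Pipelines of this kind typically start from detected planar patches. The hedged sketch below shows a minimal RANSAC plane detector in NumPy; the paper's patch-joining and structural classification steps are not reproduced here.

```python
# Minimal RANSAC plane detection (illustrative sketch only).
import numpy as np

def ransac_plane(points: np.ndarray, threshold: float = 0.02,
                 iters: int = 1000, seed: int = 0):
    """Return (normal, d, inlier_indices) for the best plane n·x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    best_plane = (np.zeros(3), 0.0)
    n_pts = len(points)
    for _ in range(iters):
        sample = points[rng.choice(n_pts, 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)         # point-to-plane distances
        inliers = np.flatnonzero(dist < threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```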

15 pages, 2734 KiB  
Article
A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry
by Aleksandra Jasińska, Krystian Pyka, Elżbieta Pastucha and Henrik Skov Midtiby
Sensors 2023, 23(2), 728; https://doi.org/10.3390/s23020728 - 09 Jan 2023
Cited by 6 | Viewed by 2249
Abstract
Recently, the term smartphone photogrammetry has gained popularity, suggesting that photogrammetry may become a simple measurement tool available to virtually every smartphone user. This research was undertaken to clarify whether it is appropriate to use the Structure from Motion–Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in Uncrewed Aerial Vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested. Fourteen smartphones were calibrated on a checkerboard test field, and the process was repeated multiple times. The following observations were made: (1) most smartphone cameras have lower stability of the internal orientation parameters than a Digital Single-Lens Reflex (DSLR) camera, and (2) the principal distance and the position of the principal point are constantly changing. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed with the SfM-MVS method, in both self-calibration and pre-calibration variants. By comparing the resultant models with the reference DSLR-created model, it was shown that introducing the calibration obtained on the test field, instead of self-calibration, improves the geometry of the 3D models; in particular, deformations of local concavities and convexities decreased. In conclusion, there is real potential in smartphone photogrammetry, but it also has its limits. Full article
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)
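The repeated checkerboard calibration described above can be approximated with standard OpenCV calls, as in the hedged sketch below; the session folder names, pattern size, and square size are illustrative assumptions, not the authors' protocol.

```python
# Repeated checkerboard calibration with OpenCV to inspect the stability of the
# principal distance (fx, fy) and principal point (cx, cy) across runs.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)                                  # inner corners (assumed)
SQUARE = 25.0                                     # square size in mm (assumed)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def calibrate(image_paths):
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K  # fx = K[0,0], fy = K[1,1], cx = K[0,2], cy = K[1,2]

# Hypothetical session folders: calibrate each run separately, then compare the
# spread of K[0,2] / K[1,2] (principal point) and K[0,0] / K[1,1] (principal distance).
Ks = [calibrate(sorted(glob.glob(f"session_{i}/*.jpg"))) for i in range(3)]
```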

21 pages, 18570 KiB  
Article
3D Radiometric Mapping by Means of LiDAR SLAM and Thermal Camera Data Fusion
by Davide De Pazzi, Marco Pertile and Sebastiano Chiodini
Sensors 2022, 22(21), 8512; https://doi.org/10.3390/s22218512 - 04 Nov 2022
Cited by 6 | Viewed by 3440
Abstract
The ability to produce 3D maps with infrared radiometric information is of great interest for many applications, such as rover navigation, industrial plant monitoring, and rescue robotics. In this paper, we present a system for large-scale thermal mapping based on the fusion of IR thermal images and 3D LiDAR point cloud data. The alignment between the point clouds and the thermal images is carried out using the extrinsic camera-to-LiDAR parameters, obtained by means of a dedicated calibration process. The rover's trajectory, which is necessary for point cloud registration, is obtained by means of a LiDAR Simultaneous Localization and Mapping (SLAM) algorithm. Finally, the registered and merged thermal point clouds are represented through an OcTree data structure, where each voxel is associated with the average temperature of the 3D points contained within it. Furthermore, the paper presents in detail the method for determining the extrinsic parameters, which is based on the identification of a hot cardboard box. Both methods were validated in a laboratory environment and outdoors. It is shown that the developed system is capable of locating a thermal object with an accuracy of up to 9 cm in a 45 m map with a voxelization of 14 cm. Full article
(This article belongs to the Special Issue 3D Sensing, Semantic Reconstruction and Modelling)
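The core fusion step, projecting LiDAR points into a radiometric thermal image and averaging temperature per voxel, can be sketched as below under assumed intrinsics K and camera-from-LiDAR extrinsics (R, t). The paper's LiDAR SLAM, extrinsic calibration, and OcTree map are not reproduced here.

```python
# Hedged sketch: sample per-point temperatures from a thermal image and average
# them per voxel (simple dict-based grid instead of the paper's OcTree).
import numpy as np
from collections import defaultdict

def colorize_and_voxelize(points, temps_img, K, R, t, voxel=0.14):
    """points: (N,3) LiDAR points; temps_img: (H,W) radiometric temperatures."""
    cam = points @ R.T + t                        # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0
    uvw = cam[in_front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # pixel coordinates
    H, W = temps_img.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    pts = points[in_front][valid]
    temps = temps_img[uv[valid, 1], uv[valid, 0]]
    # Accumulate the mean temperature of the points falling in each voxel.
    buckets = defaultdict(list)
    for cell, T in zip(np.floor(pts / voxel).astype(int), temps):
        buckets[tuple(cell)].append(T)
    return {cell: float(np.mean(Ts)) for cell, Ts in buckets.items()}
```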
