

Advances in 3D Measurement Technology and Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 March 2023) | Viewed by 14325

Special Issue Editor


Dr. Shijie Feng
Guest Editor
Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
Interests: 3D measurement; deep learning; computer vision; depth estimation; image processing

Special Issue Information

Dear Colleagues,

The physical space in which human beings live is three-dimensional (3D). The acquisition and processing of 3D information reflect humankind's ability to perceive and understand the world. Although our eyes can perceive qualitative depth and size information, they only meet the requirements of daily life and tend to fail in applications where accurate, quantitative information is needed. At present, 3D measurement technologies such as structured-light illumination, interferometry, stereo vision, time of flight (ToF), X-ray imaging, ultrasonic imaging, and magnetic resonance imaging play essential roles in many fields (e.g., industrial manufacturing, remote sensing, biomedicine, optical engineering, and computer vision), and there is rapidly growing interest in employing these technologies in further applications.

In recent years, the advent of the big data era has driven the vigorous development of Internet and computing technologies. Benefiting from abundant accessible data, machine learning techniques such as deep learning are offering new opportunities for the development of novel 3D measurement technologies and sensors. Driven by data, hidden rules can be uncovered and, in turn, exploited to recover 3D information.

This Special Issue seeks innovative works exploring new measurement theories and sensors for the acquisition and analysis of 3D information. Review papers presenting an in-depth analysis of 3D measurement technologies and systems are also welcome.

This Special Issue invites contributions addressing topics including but not limited to the following:

  • Three-dimensional shape and deformation measurement;
  • Three-dimensional reconstruction;
  • Three-dimensional data processing;
  • Three-dimensional sensors and devices;
  • Three-dimensional imaging;
  • Depth data acquisition and processing;
  • Artificial-intelligence-assisted 3D techniques.

Dr. Shijie Feng
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

18 pages, 12232 KiB  
Article
Generalized Fringe-to-Phase Framework for Single-Shot 3D Reconstruction Integrating Structured Light with Deep Learning
by Andrew-Hieu Nguyen, Khanh L. Ly, Van Khanh Lam and Zhaoyang Wang
Sensors 2023, 23(9), 4209; https://doi.org/10.3390/s23094209 - 23 Apr 2023
Cited by 5 | Viewed by 2480
Abstract
Three-dimensional (3D) shape acquisition of objects from a single-shot image has been highly demanded by numerous applications in many fields, such as medical imaging, robotic navigation, virtual reality, and product in-line inspection. This paper presents a robust 3D shape reconstruction approach integrating a structured-light technique with a deep learning-based artificial neural network. The proposed approach employs a single-input dual-output network capable of transforming a single structured-light image into two intermediate outputs of multiple phase-shifted fringe patterns and a coarse phase map, through which the unwrapped true phase distributions containing the depth information of the imaging target can be accurately determined for the subsequent 3D reconstruction process. A conventional fringe projection technique is employed to prepare the ground-truth training labels, and part of its classic algorithm is adopted to preserve the accuracy of the 3D reconstruction. Numerous experiments have been conducted to assess the proposed technique, and its robustness makes it a promising and much-needed tool for scientific research and engineering applications.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)
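As background for the fringe-to-phase pipeline summarized above, the classic steps that the network's two intermediate outputs feed into can be sketched as follows: a wrapped phase is computed from phase-shifted fringes, and the 2π ambiguity is resolved with a coarse phase map. This is a generic illustration assuming a four-step phase shift, not the authors' implementation; the array layout and function names are ours.

```python
import numpy as np

def wrapped_phase(fringes):
    """Four-step phase shifting: 'fringes' is a (4, H, W) array of intensities
    I_k = A + B*cos(phi + k*pi/2), k = 0..3 (indexing is an assumption)."""
    I0, I1, I2, I3 = fringes
    return np.arctan2(I3 - I1, I0 - I2)   # wrapped phase in (-pi, pi]

def unwrap_with_coarse(phi_wrapped, phi_coarse):
    """Resolve the 2*pi ambiguity with a coarse but absolute phase map
    (here standing in for the network's second output)."""
    k = np.round((phi_coarse - phi_wrapped) / (2 * np.pi))   # per-pixel fringe order
    return phi_wrapped + 2 * np.pi * k
```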

15 pages, 6652 KiB  
Article
Fringe Projection Profilometry Based on Saturated Fringe Restoration in High Dynamic Range Scenes
by Hongru Li, Hao Wei, Jiangtao Liu, Guoliang Deng, Shouhuan Zhou, Wenwu Wang, Liang He and Peng Tian
Sensors 2023, 23(6), 3133; https://doi.org/10.3390/s23063133 - 15 Mar 2023
Cited by 5 | Viewed by 1838 | Correction
Abstract
In high-dynamic-range scenes, fringe projection profilometry (FPP) may encounter fringe saturation, and the calculated phase will consequently contain errors. This paper proposes a saturated fringe restoration method to solve this problem, taking the four-step phase shift as an example. Firstly, according to the saturation of the fringe group, the concepts of reliable area, shallow saturated area, and deep saturated area are introduced. Then, the parameter A, related to the reflectivity of the object, is calculated in the reliable area and interpolated into the shallow and deep saturated areas. The true shallow and deep saturated areas are not known in actual experiments; however, morphological operations can be used to dilate and erode the reliable areas to produce cubic spline interpolation (CSI) areas and biharmonic spline interpolation (BSI) areas, which roughly correspond to the shallow and deep saturated areas. After A is restored, it can be used as a known quantity to restore the saturated fringe from the unsaturated fringe at the same position; the remaining unrecoverable part of the fringe can be completed using CSI, and the corresponding part of the symmetrical fringe can then be further restored. To further reduce the influence of nonlinear error, the Hilbert transform is also used in the phase calculation of the actual experiment. Simulation and experimental results validate that the proposed method obtains correct results without adding equipment or increasing the number of projections, demonstrating its feasibility and robustness.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)
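For reference, the abstract's "parameter A" and the wrapped phase follow from the standard four-step phase-shifting model in reliable (unsaturated) pixels; the notation and indexing below are ours, and the paper's conventions may differ.

```latex
% Standard four-step phase-shifting model (indexing is an assumption):
I_k(x,y) = A(x,y) + B(x,y)\cos\!\left[\phi(x,y) + \tfrac{k\pi}{2}\right], \quad k = 0,1,2,3
% In unsaturated pixels, the reflectivity-related term A and the wrapped phase
% follow directly from the four intensities:
A(x,y) = \frac{1}{4}\sum_{k=0}^{3} I_k(x,y), \qquad
\phi(x,y) = \arctan\frac{I_3(x,y) - I_1(x,y)}{I_0(x,y) - I_2(x,y)}
```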

12 pages, 5647 KiB  
Article
High-Accuracy Three-Dimensional Deformation Measurement System Based on Fringe Projection and Speckle Correlation
by Chuang Zhang, Cong Liu and Zhihong Xu
Sensors 2023, 23(2), 680; https://doi.org/10.3390/s23020680 - 6 Jan 2023
Cited by 4 | Viewed by 1447
Abstract
Fringe projection profilometry (FPP) and digital image correlation (DIC) are widely applied in three-dimensional (3D) measurements. The combination of DIC and FPP can effectively overcome their respective shortcomings. However, the speckle on the surface of an object seriously affects the quality and modulation of fringe images captured by cameras, which will lead to non-negligible errors in the measurement results. In this paper, we propose a fringe image extraction method based on deep learning technology, which transforms speckle-embedded fringe images into speckle-free fringe images. The principle of the proposed method, 3D coordinate calculation, and deformation measurements are introduced. Compared with the traditional 3D-DIC method, the experimental results show that this method is effective and precise.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)

13 pages, 54940 KiB  
Article
Semi-Global Matching Assisted Absolute Phase Unwrapping
by Yi-Hong Liao and Song Zhang
Sensors 2023, 23(1), 411; https://doi.org/10.3390/s23010411 - 30 Dec 2022
Cited by 2 | Viewed by 1732
Abstract
Measurement speed is a critical factor in reducing motion artifacts when capturing dynamic scenes. Phase-shifting methods have the advantage of providing high-accuracy and dense 3D point clouds, but the phase unwrapping process affects the measurement speed. This paper presents an absolute phase unwrapping method capable of using only three speckle-embedded phase-shifted patterns for high-speed three-dimensional (3D) shape measurement on a single-camera, single-projector structured light system. The proposed method obtains the wrapped phase of the object from the speckle-embedded three-step phase-shifted patterns. Next, it utilizes the Semi-Global Matching (SGM) algorithm to establish the coarse correspondence between the image of the object with the embedded speckle pattern and the pre-obtained image of a flat surface with the same embedded speckle pattern. Then, a computational framework uses the coarse correspondence information to determine the fringe order pixel by pixel. The experimental results demonstrated that the proposed method can achieve high-speed and high-quality 3D measurements of complex scenes.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)
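A minimal sketch of the fringe-order step described above, assuming the semi-global matching against the flat reference surface has already produced a per-pixel disparity map and that the reference surface's absolute phase map is known. The function name, the purely horizontal correspondence, and the variable names are assumptions, not the paper's notation.

```python
import numpy as np

def unwrap_with_sgm_correspondence(phi_wrapped, disparity, phi_reference):
    """phi_wrapped: wrapped phase of the object (H, W);
    disparity: per-pixel horizontal shift from SGM against the reference speckle image;
    phi_reference: known absolute phase of the flat reference surface (H, W)."""
    h, w = phi_wrapped.shape
    cols = np.clip(np.arange(w) - disparity, 0, w - 1).astype(int)   # matched reference column
    phi_coarse = phi_reference[np.arange(h)[:, None], cols]          # coarse absolute phase estimate
    K = np.round((phi_coarse - phi_wrapped) / (2 * np.pi))           # per-pixel fringe order
    return phi_wrapped + 2 * np.pi * K
```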

13 pages, 4374 KiB  
Article
Automated Camera Exposure Control for Accuracy-Enhanced Stereo-Digital Image Correlation Measurement
by Xiaoying Zhang, Xiaojun Tang, Liping Yu and Bing Pan
Sensors 2022, 22(24), 9641; https://doi.org/10.3390/s22249641 - 9 Dec 2022
Cited by 1 | Viewed by 1974
Abstract
An automated camera exposure control method, which allows a two-camera stereo-digital image correlation (stereo-DIC) system to capture high-quality speckle image pairs, is presented for accuracy-enhanced stereo-DIC measurement. By using this method, the two synchronized cameras can automatically determine the optimal camera exposure and ideal average grayscale for capturing the optimal reference image pair in the reference state. Furthermore, high-quality deformed image pairs can be recorded during the test by adaptively adjusting the camera exposure in the case of severe ambient light variations. Validation tests, including varying-illumination tests and translation tests, were performed to verify the effectiveness and robustness of this method. Experimental results indicate that the proposed method outperforms the existing stereo-DIC technique with an empirically determined fixed camera exposure time. The practicality of the proposed automated camera exposure control method was verified using real high-temperature experiments.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)
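As an illustration of the closed-loop idea in the abstract, the sketch below drives the mean image grayscale toward a target value by scaling the exposure time. The camera object and its get_exposure()/set_exposure()/grab() methods are hypothetical, and the proportional update is only one plausible rule, not the authors' algorithm.

```python
import numpy as np

def adjust_exposure(camera, target_gray=128.0, tol=5.0, max_iters=20):
    """Iteratively scale the exposure time until the mean grayscale of the
    captured image reaches the target, assuming a roughly linear sensor
    response; 'camera' is a hypothetical driver object."""
    for _ in range(max_iters):
        img = camera.grab().astype(np.float64)
        mean_gray = img.mean()
        if abs(mean_gray - target_gray) < tol:
            break
        # proportional update: a darker image calls for a longer exposure
        camera.set_exposure(camera.get_exposure() * target_gray / max(mean_gray, 1.0))
```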

18 pages, 11493 KiB  
Article
EPI Light Field Depth Estimation Based on a Directional Relationship Model and Multiviewpoint Attention Mechanism
by Ming Gao, Huiping Deng, Sen Xiang, Jin Wu and Zeyang He
Sensors 2022, 22(16), 6291; https://doi.org/10.3390/s22166291 - 21 Aug 2022
Cited by 4 | Viewed by 2125
Abstract
Light field (LF) image depth estimation is a critical technique for LF-related applications such as 3D reconstruction, target detection, and tracking. The refocusing property of LF images provides rich information for depth estimation; however, it remains challenging in occlusion regions, edge regions, and under noise interference. The epipolar plane image (EPI) of an LF is well suited to depth estimation because of its multidirectionality and pixel consistency, which allow depth estimation to be converted into computing the EPI slope. This paper proposes an EPI-based LF depth estimation algorithm built on a directional relationship model and an attention mechanism. Unlike subaperture-based LF depth estimation methods, the proposed method takes EPIs as input images. Specifically, a directional relationship model is used to extract direction features from the horizontal and vertical EPIs, respectively. Then, a multiviewpoint attention mechanism combining channel attention and spatial attention gives more weight to the EPI slope information. Subsequently, multiple residual modules eliminate redundant features that interfere with the EPI slope information, and a small-stride convolution operation is used to avoid losing key slope information. The experimental results show that the proposed algorithm outperforms the compared algorithms in terms of accuracy.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)
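The slope-to-depth conversion that underlies EPI-based estimation is the standard light-field relation below; the symbols are ours, and the paper estimates the slope with a learned model rather than by explicit line fitting.

```latex
% A scene point traces a line in the (u, s) epipolar plane image; its slope is
% the disparity d between adjacent sub-aperture views, and the depth Z follows
% from triangulation with focal length f and view baseline b:
d = \frac{\Delta u}{\Delta s}, \qquad Z = \frac{f\, b}{d}
```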

14 pages, 5362 KiB  
Article
An Improved Circular Fringe Fourier Transform Profilometry
by Qili Chen, Mengqi Han, Ye Wang and Wenjing Chen
Sensors 2022, 22(16), 6048; https://doi.org/10.3390/s22166048 - 12 Aug 2022
Cited by 6 | Viewed by 1652
Abstract
Circular fringe projection profilometry (CFPP), as a branch of carrier fringe projection profilometry, has attracted research interest in recent years. Circular fringe Fourier transform profilometry (CFFTP) has been used to measure out-of-plane objects quickly because the absolute phase can be obtained with fewer fringes. However, the existing CFFTP method needs to solve a quadratic equation to calculate the pixel displacement related to the height of the object, and the root-finding process may fail because of phase error and the non-uniform period of the reference fringe. In this paper, an improved CFFTP method based on a non-telecentric model is presented. The displacement is calculated by solving a linear equation instead of a quadratic equation after introducing an extra projection of a circular fringe whose center is translated. In addition, Gerchberg iteration is employed to eliminate the phase error in the region close to the circular center, and a plane calibration technique is used to eliminate system error by establishing a displacement-to-height look-up table. The mathematical model and theoretical analysis are presented. Simulations and experiments have demonstrated the effectiveness of the proposed method.
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)
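For context, classic Fourier transform profilometry with a linear carrier recovers the phase as sketched below; the circular-fringe variant studied here replaces the linear carrier with concentric fringes, but the band-pass-and-arctangent principle is analogous. The notation is ours, not the paper's.

```latex
% Linear-carrier fringe model with carrier frequency f_0:
I(x,y) = a(x,y) + b(x,y)\cos\!\left[2\pi f_0 x + \phi(x,y)\right]
% Band-pass filtering the +f_0 spectral lobe and inverse transforming gives
% c(x,y) = \tfrac{1}{2} b(x,y)\, e^{i[2\pi f_0 x + \phi(x,y)]}, hence
\phi(x,y) = \arg\!\left[c(x,y)\, e^{-i 2\pi f_0 x}\right]
```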
