Special Issue "Depth Sensors and 3D Vision"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 20 December 2018

Special Issue Editor

Guest Editor
Prof. Roberto Vezzani

AImageLab, Dipartimento di Ingegneria "Enzo Ferrari", University of Modena and Reggio Emilia, Modena, Italy
Interests: computer vision; image processing; machine vision; pattern recognition; surveillance; people behavior understanding; human-computer interaction; depth sensors; 3D vision

Special Issue Information

Dear Colleagues,

The recent diffusion of inexpensive RGB-D sensors has encouraged the computer vision community to explore new solutions based on depth images. Depth information contributes significantly to solving or simplifying several challenging tasks, such as shape analysis and classification, scene reconstruction, object segmentation, people detection, and body part recognition. Intrinsic metric information and robustness to texture and illumination variations of objects and scenes are just two of its advantages over pure RGB images.

For example, the hardware and software technologies included in the Microsoft Kinect framework allow easy estimation of the 3D positions of skeleton joints, providing a new, compact, and expressive representation of the human body.

Although the Kinect failed as a gaming device, it has been a launch pad for the spread of depth sensors and, with them, of 3D vision. From a hardware perspective, several stereo, structured-IR-light, and ToF sensors have appeared on the market and are being studied by the scientific community. At the same time, the computer vision and machine learning communities have proposed new solutions to process depth data, either on its own or fused with other information such as RGB images.

This Special Issue seeks innovative work exploring new hardware and software solutions for the generation and analysis of depth data, including representation models, machine learning approaches, datasets, and benchmarks.

Topics of interest include, but are not limited to:

  • Depth acquisition techniques
  • Depth data processing
  • Analysis of depth data
  • Fusion of depth data with other modalities
  • Translation to and from the depth domain
  • 3D scene reconstruction
  • 3D shape modeling and retrieval
  • 3D object recognition
  • 3D biometrics
  • 3D imaging for cultural heritage applications
  • Point cloud modelling and processing
  • Human action recognition on depth data
  • Biomedical applications of depth data
  • Other applications of depth data analysis
  • Depth datasets and benchmarks
  • Depth data visualization

Prof. Roberto Vezzani
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Depth sensors
  • 3D vision
  • Depth data generation
  • Depth data analysis
  • Depth datasets

Published Papers (32 papers)


Research


Open Access Article: Simultaneous All-Parameters Calibration and Assessment of a Stereo Camera Pair Using a Scale Bar
Sensors 2018, 18(11), 3964; https://doi.org/10.3390/s18113964
Received: 26 September 2018 / Revised: 25 October 2018 / Accepted: 10 November 2018 / Published: 15 November 2018
Abstract
Highly accurate and easy-to-operate calibration (to determine the interior and distortion parameters) and orientation (to determine the exterior parameters) methods for cameras in large volume is a very important topic for expanding the application scope of 3D vision and photogrammetry techniques. This paper proposes a method for simultaneously calibrating, orienting and assessing multi-camera 3D measurement systems in large measurement volume scenarios. The primary idea is building 3D point and length arrays by moving a scale bar in the measurement volume and then conducting a self-calibrating bundle adjustment that involves all the image points and lengths of both cameras. Relative exterior parameters between the camera pair are estimated by the five point relative orientation method. The interior, distortion parameters of each camera and the relative exterior parameters are optimized through bundle adjustment of the network geometry that is strengthened through applying the distance constraints. This method provides both internal precision and external accuracy assessment of the calibration performance. Simulations and real data experiments are designed and conducted to validate the effectivity of the method and analyze its performance under different network geometries. The RMSE of length measurement is less than 0.25 mm and the relative precision is higher than 1/25,000 for a two camera system calibrated by the proposed method in a volume of 12 m × 8 m × 4 m. Compared with the state-of-the-art point array self-calibrating bundle adjustment method, the proposed method is easier to operate and can significantly reduce systematic errors caused by wrong scaling. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
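
To make the role of the scale bar concrete, here is a toy least-squares refinement (not the authors' full self-calibrating bundle adjustment) in which a certified bar length enters as one extra residual alongside the data terms; the bar length, the noisy endpoint observations, and the weighting factor are all invented for illustration.

```python
# Hypothetical sketch: a known scale-bar length added as an extra residual in a
# small least-squares problem that refines two noisy 3D endpoint estimates.
import numpy as np
from scipy.optimize import least_squares

BAR_LENGTH = 1.0                               # certified bar length in metres (assumed)
observed = np.array([[0.02, 0.01, 2.98],       # noisy triangulated endpoint A
                     [0.99, 0.03, 3.02]])      # noisy triangulated endpoint B

def residuals(x, weight_length=100.0):
    pts = x.reshape(2, 3)
    data_terms = (pts - observed).ravel()                        # stay close to observations
    length_error = np.linalg.norm(pts[0] - pts[1]) - BAR_LENGTH  # distance constraint
    return np.concatenate([data_terms, [weight_length * length_error]])

solution = least_squares(residuals, observed.ravel())
refined = solution.x.reshape(2, 3)
print("refined bar length:", np.linalg.norm(refined[0] - refined[1]))
```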

Open Access Article: A FAST-BRISK Feature Detector with Depth Information
Sensors 2018, 18(11), 3908; https://doi.org/10.3390/s18113908
Received: 7 October 2018 / Revised: 3 November 2018 / Accepted: 7 November 2018 / Published: 13 November 2018
Abstract
RGB-D cameras offer both color and depth images of the surrounding environment, making them an attractive option for robotic and vision applications. This work introduces the BRISK_D algorithm, which efficiently combines Features from Accelerated Segment Test (FAST) and Binary Robust Invariant Scalable Keypoints (BRISK) methods. In the BRISK_D algorithm, the keypoints are detected by the FAST algorithm and the location of the keypoint is refined in the scale and the space. The scale factor of the keypoint is directly computed with the depth information of the image. In the experiment, we have made a detailed comparative analysis of the three algorithms SURF, BRISK and BRISK_D from the aspects of scaling, rotation, perspective and blur. The BRISK_D algorithm combines depth information and has good algorithm performance. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
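
The general FAST-plus-BRISK combination is easy to prototype with stock OpenCV components; the sketch below scales each FAST keypoint by its depth before computing BRISK descriptors, which captures the spirit of the abstract but is not the authors' BRISK_D implementation. The file names and the reference depth are placeholders.

```python
# Sketch with stock OpenCV: FAST detection, depth-derived keypoint scale,
# BRISK description. Input files and the reference depth are assumptions.
import cv2

rgb = cv2.imread("frame_color.png", cv2.IMREAD_GRAYSCALE)
depth = cv2.imread("frame_depth.png", cv2.IMREAD_UNCHANGED)   # depth in millimetres

fast = cv2.FastFeatureDetector_create(threshold=30)
brisk = cv2.BRISK_create()

keypoints = fast.detect(rgb, None)

REFERENCE_DEPTH_MM = 1000.0          # keypoints at 1 m keep their nominal size (assumed)
for kp in keypoints:
    x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
    z = float(depth[y, x])
    if z > 0:                        # scale inversely with distance: nearer looks larger
        kp.size *= REFERENCE_DEPTH_MM / z

keypoints, descriptors = brisk.compute(rgb, keypoints)
print(len(keypoints), "keypoints described")
```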

Open Access Article: Microscopic Three-Dimensional Measurement Based on Telecentric Stereo and Speckle Projection Methods
Sensors 2018, 18(11), 3882; https://doi.org/10.3390/s18113882
Received: 20 September 2018 / Revised: 18 October 2018 / Accepted: 9 November 2018 / Published: 11 November 2018
Abstract
Three-dimensional (3D) measurement of microstructures has become increasingly important, and many microscopic measurement methods have been developed. For the dimension in several millimeters together with the accuracy at sub-pixel or sub-micron level, there is almost no effective measurement method now. Here we present a method combining the microscopic stereo measurement with the digital speckle projection. A microscopy experimental setup mainly composed of two telecentric cameras and an industrial projection module is established and a telecentric binocular stereo reconstruction procedure is carried out. The measurement accuracy has firstly been verified by performing 3D measurements of grid arrays at different locations and cylinder arrays with different height differences. Then two Mitutoyo step masters have been used for further verification. The experimental results show that the proposed method can obtain 3D information of the microstructure with a sub-pixel and even sub-micron measuring accuracy in millimeter scale. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Temperature Compensation Method for Digital Cameras in 2D and 3D Measurement Applications
Sensors 2018, 18(11), 3685; https://doi.org/10.3390/s18113685
Received: 12 September 2018 / Revised: 26 October 2018 / Accepted: 27 October 2018 / Published: 30 October 2018
Abstract
This paper presents the results of several studies concerning the effect of temperature on digital cameras. Experiments were performed using three different camera models. The presented results conclusively demonstrate that the typical camera design does not adequately take into account the effect of temperature variation on the device’s performance. In this regard, a modified camera design is proposed that exhibits a highly predictable behavior under varying ambient temperature and facilitates thermal compensation. A novel temperature compensation method is also proposed. This compensation model can be applied in almost every existing camera application, as it is compatible with every camera calibration model. A two-dimensional (2D) and three-dimensional (3D) application of the proposed compensation model is also described. The results of the application of the proposed compensation approach are presented herein. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Robust and Efficient CPU-Based RGB-D Scene Reconstruction
Sensors 2018, 18(11), 3652; https://doi.org/10.3390/s18113652
Received: 29 September 2018 / Revised: 25 October 2018 / Accepted: 25 October 2018 / Published: 28 October 2018
Abstract
3D scene reconstruction is an important topic in computer vision. A complete scene is reconstructed from views acquired along the camera trajectory, each view containing a small part of the scene. Tracking in textureless scenes is well known to be a Gordian knot of camera tracking, and how to obtain accurate 3D models quickly is a major challenge for existing systems. For the application of robotics, we propose a robust CPU-based approach to reconstruct indoor scenes efficiently with a consumer RGB-D camera. The proposed approach bridges feature-based camera tracking and volumetric-based data integration together and has a good reconstruction performance in terms of both robustness and efficiency. The key points in our approach include: (i) a robust and fast camera tracking method combining points and edges, which improves tracking stability in textureless scenes; (ii) an efficient data fusion strategy to select camera views and integrate RGB-D images on multiple scales, which enhances the efficiency of volumetric integration; (iii) a novel RGB-D scene reconstruction system, which can be quickly implemented on a standard CPU. Experimental results demonstrate that our approach reconstructs scenes with higher robustness and efficiency compared to state-of-the-art reconstruction systems. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
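
For context, volumetric integration of depth frames usually relies on the classic weighted running-average TSDF update; the snippet below shows only that textbook rule on toy arrays, not the paper's view-selection or multi-scale fusion strategy.

```python
# Textbook weighted-average TSDF update on toy arrays; array names are illustrative.
import numpy as np

def integrate_tsdf(tsdf, weight, new_sdf, new_weight, max_weight=64.0):
    """Fuse one frame's truncated signed distances into the volume."""
    valid = ~np.isnan(new_sdf)                       # only voxels observed this frame
    w_old, w_new = weight[valid], new_weight[valid]
    tsdf[valid] = (w_old * tsdf[valid] + w_new * new_sdf[valid]) / (w_old + w_new)
    weight[valid] = np.minimum(w_old + w_new, max_weight)
    return tsdf, weight

vol = np.zeros((64, 64, 64), dtype=np.float32)       # signed-distance volume
wgt = np.zeros_like(vol)                             # per-voxel weights
obs = np.full_like(vol, np.nan)
obs[32] = 0.5                                        # pretend one slice was observed
vol, wgt = integrate_tsdf(vol, wgt, obs, np.ones_like(vol))
```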

Open Access Article: Assessment of Fringe Pattern Decomposition with a Cross-Correlation Index for Phase Retrieval in Fringe Projection 3D Measurements
Sensors 2018, 18(10), 3578; https://doi.org/10.3390/s18103578
Received: 8 September 2018 / Revised: 17 October 2018 / Accepted: 18 October 2018 / Published: 22 October 2018
Abstract
Phase retrieval from single frame projection fringe patterns, a fundamental and challenging problem in fringe projection measurement, attracts wide attention and various new methods have emerged to address this challenge. Many phase retrieval methods are based on the decomposition of fringe patterns into a background part and a fringe part, and then the phase is obtained from the decomposed fringe part. However, the decomposition results are subject to the selection of model parameters, which is usually performed manually by trial and error due to the lack of decomposition assessment rules under a no ground truth data situation. In this paper, we propose a cross-correlation index to assess the decomposition and phase retrieval results without the need of ground truth data. The feasibility of the proposed metric is verified by simulated and real fringe patterns with the well-known Fourier transform method and recently proposed Shearlet transform method. This work contributes to the automatic phase retrieval and three-dimensional (3D) measurement with less human intervention, and can be potentially employed in other fields such as phase retrieval in digital holography. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Person Re-Identification with RGB-D Camera in Top-View Configuration through Multiple Nearest Neighbor Classifiers and Neighborhood Component Features Selection
Sensors 2018, 18(10), 3471; https://doi.org/10.3390/s18103471
Received: 30 August 2018 / Revised: 2 October 2018 / Accepted: 11 October 2018 / Published: 15 October 2018
Abstract
Person re-identification is an important topic in retail, scene monitoring, human-computer interaction, people counting, ambient assisted living and many other application fields. A dataset for person re-identification TVPR (Top View Person Re-Identification) based on a number of significant features derived from both depth and color images has been previously built. This dataset uses an RGB-D camera in a top-view configuration to extract anthropometric features for the recognition of people in view of the camera, reducing the problem of occlusions while being privacy preserving. In this paper, we introduce a machine learning method for person re-identification using the TVPR dataset. In particular, we propose the combination of multiple k-nearest neighbor classifiers based on different distance functions and feature subsets derived from depth and color images. Moreover, the neighborhood component feature selection is used to learn the depth features’ weighting vector by minimizing the leave-one-out regularized training error. The classification process is performed by selecting the first passage under the camera for training and using the others as the testing set. Experimental results show that the proposed methodology outperforms standard supervised classifiers widely used for the re-identification task. This improvement encourages the application of this approach in the retail context in order to improve retail analytics, customer service and shopping space management. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
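
A rough sketch of the two ingredients named in the abstract, neighbourhood component analysis for feature weighting and an ensemble of k-nearest-neighbour classifiers with different distance metrics combined by majority vote, is given below on synthetic data; it does not use the TVPR features or the authors' leave-one-out training scheme.

```python
# Synthetic-data sketch: NCA feature weighting plus a small k-NN ensemble
# with different metrics, combined by majority vote.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))        # stand-in for depth/colour descriptors
y_train = rng.integers(0, 10, size=200)    # ten person identities
X_test = rng.normal(size=(50, 8))

classifiers = [
    make_pipeline(NeighborhoodComponentsAnalysis(random_state=0),
                  KNeighborsClassifier(n_neighbors=3)),
    KNeighborsClassifier(n_neighbors=3, metric="euclidean"),
    KNeighborsClassifier(n_neighbors=3, metric="manhattan"),
]

votes = np.stack([clf.fit(X_train, y_train).predict(X_test) for clf in classifiers])
prediction = np.array([np.bincount(col).argmax() for col in votes.T])   # majority vote
```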

Open Access Article: Large Depth-of-Field Integral Microscopy by Use of a Liquid Lens
Sensors 2018, 18(10), 3383; https://doi.org/10.3390/s18103383
Received: 4 September 2018 / Revised: 28 September 2018 / Accepted: 5 October 2018 / Published: 10 October 2018
Abstract
Integral microscopy is a 3D imaging technique that permits the recording of spatial and angular information of microscopic samples. From this information it is possible to calculate a collection of orthographic views with full parallax and to refocus computationally, at will, through the 3D specimen. An important drawback of integral microscopy, especially when dealing with thick samples, is the limited depth of field (DOF) of the perspective views. This imposes a significant limitation on the depth range of computationally refocused images. To overcome this problem, we propose here a new method that is based on the insertion, at the pupil plane of the microscope objective, of an electrically controlled liquid lens (LL) whose optical power can be changed by simply tuning the voltage. This new apparatus has the advantage of controlling the axial position of the objective focal plane while keeping constant the essential parameters of the integral microscope, that is, the magnification, the numerical aperture and the amount of parallax. Thus, given a 3D sample, the new microscope can provide a stack of integral images with complementary depth ranges. The fusion of the set of refocused images permits to enlarge the reconstruction range, obtaining images in focus over the whole region. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Direct Depth SLAM: Sparse Geometric Feature Enhanced Direct Depth SLAM System for Low-Texture Environments
Sensors 2018, 18(10), 3339; https://doi.org/10.3390/s18103339
Received: 28 July 2018 / Revised: 22 September 2018 / Accepted: 24 September 2018 / Published: 6 October 2018
Abstract
This paper presents a real-time, robust and low-drift depth-only SLAM (simultaneous localization and mapping) method for depth cameras by utilizing both dense range flow and sparse geometry features from sequential depth images. The proposed method is mainly composed of three optimization layers, namely Direct Depth layer, ICP (Iterative closest point) Refined layer and Graph Optimization layer. The Direct Depth layer uses a range flow constraint equation to solve the fast 6-DOF (six degrees of freedom) frame-to-frame pose estimation problem. Then, the ICP Refined layer is used to reduce the local drift by applying local map based motion estimation strategy. After that, we propose a loop closure detection algorithm by extracting and matching sparse geometric features and construct a pose graph for the purpose of global pose optimization. We evaluate the performance of our method using benchmark datasets and real scene data. Experiment results show that our front-end algorithm clearly over performs the classic methods and our back-end algorithm is robust to find loop closures and reduce the global drift. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: GesID: 3D Gesture Authentication Based on Depth Camera and One-Class Classification
Sensors 2018, 18(10), 3265; https://doi.org/10.3390/s18103265
Received: 18 August 2018 / Revised: 20 September 2018 / Accepted: 26 September 2018 / Published: 28 September 2018
Abstract
Biometric authentication is popular in authentication systems, and gesture as a carrier of behavior characteristics has the advantages of being difficult to imitate and containing abundant information. This research aims to use three-dimensional (3D) depth information of gesture movement to perform authentication with less user effort. We propose an approach based on depth cameras, which satisfies three requirements: Can authenticate from a single, customized gesture; achieves high accuracy without an excessive number of gestures for training; and continues learning the gesture during use of the system. To satisfy these requirements respectively: We use a sparse autoencoder to memorize the single gesture; we employ data augmentation technology to solve the problem of insufficient data; and we use incremental learning technology for allowing the system to memorize the gesture incrementally over time. An experiment has been performed on different gestures in different user situations that demonstrates the accuracy of one-class classification (OCC), and proves the effectiveness and reliability of the approach. Gesture authentication based on 3D depth cameras could be achieved with reduced user effort. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: RGB Colour Encoding Improvement for Three-Dimensional Shapes and Displacement Measurement Using the Integration of Fringe Projection and Digital Image Correlation
Sensors 2018, 18(9), 3130; https://doi.org/10.3390/s18093130
Received: 1 August 2018 / Revised: 10 September 2018 / Accepted: 13 September 2018 / Published: 17 September 2018
Abstract
Three-dimensional digital image correlation (3D-DIC) has become the most popular full-field optical technique for measuring 3D shapes and displacements in experimental mechanics. The integration of fringe projection (FP) and two-dimensional digital image correlation (FP + DIC) has been recently established as an intelligent low-cost alternative to 3D-DIC, overcoming the drawbacks of a stereoscopic system. Its experimentation is based on the colour encoding of the characterized fringe and speckle patterns required for FP and DIC implementation, respectively. In the present work, innovations in experimentation using FP + DIC for more accurate results are presented. Specifically, they are based on the improvement of the colour pattern encoding. To achieve this, in this work, a multisensor camera and/or laser structural illumination were employed. Both alternatives are analysed and evaluated. Results show that improvements both in three-dimensional and in-plane displacement are obtained with the proposed alternatives. Nonetheless, multisensor high-speed cameras are uncommon, and laser structural illumination is established as an important improvement when low uncertainty is required for 2D-displacement measurement. Hence, the uncertainty has been demonstrated to be reduced by up to 50% compared with results obtained in previous experimental approaches of FP + DIC. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: A Versatile Method for Depth Data Error Estimation in RGB-D Sensors
Sensors 2018, 18(9), 3122; https://doi.org/10.3390/s18093122
Received: 8 August 2018 / Revised: 10 September 2018 / Accepted: 13 September 2018 / Published: 16 September 2018
Abstract
We propose a versatile method for estimating the RMS error of depth data provided by generic 3D sensors with the capability of generating RGB and depth (D) data of the scene, i.e., the ones based on techniques such as structured light, time of flight and stereo. A common checkerboard is used, the corners are detected and two point clouds are created, one with the real coordinates of the pattern corners and one with the corner coordinates given by the device. After a registration of these two clouds, the RMS error is computed. Then, using curve fittings methods, an equation is obtained that generalizes the RMS error as a function of the distance between the sensor and the checkerboard pattern. The depth errors estimated by our method are compared to those estimated by state-of-the-art approaches, validating its accuracy and utility. This method can be used to rapidly estimate the quality of RGB-D sensors, facilitating robotics applications as SLAM and object recognition. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
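
The final generalisation step described above, fitting a curve that expresses RMS depth error as a function of sensor-to-pattern distance, can be sketched as follows; the (distance, RMSE) samples and the quadratic noise model are assumptions, not values from the paper.

```python
# Curve fit of RMS depth error versus distance; sample values are invented.
import numpy as np
from scipy.optimize import curve_fit

distances_m = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
rmse_mm = np.array([1.5, 3.1, 6.4, 11.0, 17.2, 24.9])     # per-distance RMSE (fake)

def error_model(z, a, b, c):
    # Quadratic growth with distance is a common assumption for consumer
    # depth sensors; it is a modelling choice here, not taken from the paper.
    return a * z**2 + b * z + c

params, _ = curve_fit(error_model, distances_m, rmse_mm)
print("predicted RMSE at 1.8 m: %.2f mm" % error_model(1.8, *params))
```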

Open Access Article: Development and Experimental Evaluation of a 3D Vision System for Grinding Robot
Sensors 2018, 18(9), 3078; https://doi.org/10.3390/s18093078
Received: 3 July 2018 / Revised: 7 September 2018 / Accepted: 10 September 2018 / Published: 13 September 2018
Abstract
If the grinding robot can automatically position and measure the machining target on the workpiece, it will significantly improve its machining efficiency and intelligence level. However, unfortunately, the current grinding robot cannot do this because of economic and precision reasons. This paper proposes a 3D vision system mounted on the robot’s fourth joint, which is used to detect the machining target of the grinding robot. Also, the hardware architecture and data processing method of the 3D vision system is described in detail. In the data processing process, we first use the voxel grid filter to preprocess the point cloud and obtain the feature descriptor. Then use fast library for approximate nearest neighbors (FLANN) to search out the difference point cloud from the precisely registered point cloud pair and use the point cloud segmentation method proposed in this paper to extract machining path points. Finally, the detection error compensation model is used to accurately calibrate the 3D vision system to transform the machining information into the grinding robot base frame. Experimental results show that the absolute average error of repeated measurements at different locations is 0.154 mm, and the absolute measurement error of the vision system caused by compound error is usually less than 0.25 mm. The proposed 3D vision system could easily integrate into an intelligent grinding system and may be suitable for industrial sites. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
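
Two of the data-processing steps mentioned above, voxel-grid downsampling and a FLANN nearest-neighbour search that isolates the "difference" points between a registered cloud pair, can be sketched with Open3D as below; the file names, voxel size, and distance threshold are placeholders rather than the authors' settings.

```python
# Open3D sketch: voxel-grid filtering and FLANN-based extraction of points in a
# scan that have no close counterpart in a reference cloud.
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("reference_workpiece.pcd")   # placeholder files
scan = o3d.io.read_point_cloud("scanned_workpiece.pcd")          # assumed pre-registered

reference = reference.voxel_down_sample(voxel_size=0.002)        # 2 mm voxels (assumed)
scan = scan.voxel_down_sample(voxel_size=0.002)

tree = o3d.geometry.KDTreeFlann(reference)
threshold = 0.003            # points farther than 3 mm count as "difference" (assumed)

difference = []
for p in np.asarray(scan.points):
    _, _, dist2 = tree.search_knn_vector_3d(p, 1)   # squared distance to nearest point
    if dist2[0] > threshold ** 2:
        difference.append(p)

difference_cloud = o3d.geometry.PointCloud(
    o3d.utility.Vector3dVector(np.asarray(difference).reshape(-1, 3)))
```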

Open Access Article: A System for In-Line 3D Inspection without Hidden Surfaces
Sensors 2018, 18(9), 2993; https://doi.org/10.3390/s18092993
Received: 24 July 2018 / Revised: 1 September 2018 / Accepted: 4 September 2018 / Published: 7 September 2018
Abstract
This work presents a 3D scanner able to reconstruct a complete object without occlusions, including its surface appearance. The technique presents a number of differences in relation to current scanners: it does not require mechanical handling like robot arms or spinning plates, it is free of occlusions since the scanned part is not resting on any surface and, unlike stereo-based methods, the object does not need to have visual singularities on its surface. This system, among other applications, allows its integration in production lines that require the inspection of a large volume of parts or products, especially if there is an important variability of the objects to be inspected, since there is no mechanical manipulation. The scanner consists of a variable number of industrial quality cameras conveniently distributed so that they can capture all the surfaces of the object without any blind spot. The object is dropped through the common visual field of all the cameras, so no surface or tool occludes the views that are captured simultaneously when the part is in the center of the visible volume. A carving procedure that uses the silhouettes segmented from each image gives rise to a volumetric representation and, by means of isosurface generation techniques, to a 3D model. These techniques have certain limitations on the reconstruction of object regions with particular geometric configurations. Estimating the inherent maximum error in each area is important to bound the precision of the reconstruction. A number of experiments are presented reporting the differences between ideal and reconstructed objects in the system. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Structured-Light-Based System for Shape Measurement of the Human Body in Motion
Sensors 2018, 18(9), 2827; https://doi.org/10.3390/s18092827
Received: 20 July 2018 / Revised: 21 August 2018 / Accepted: 23 August 2018 / Published: 27 August 2018
Abstract
The existing methods for measuring the shape of the human body in motion are limited in their practical application owing to immaturity, complexity, and/or high price. Therefore, we propose a method based on structured light supported by multispectral separation to achieve multidirectional and parallel acquisition. Single-frame fringe projection is employed in this method for detailed geometry reconstruction. An extended phase unwrapping method adapted for measurement of the human body is also proposed. This method utilizes local fringe parameter information to identify the optimal unwrapping path for reconstruction. Subsequently, we present a prototype 4DBODY system with a working volume of 2.0 × 1.5 × 1.5 m3, a measurement uncertainty less than 0.5 mm and an average spatial resolution of 1.0 mm for three-dimensional (3D) points. The system consists of eight directional 3D scanners functioning synchronously with an acquisition frequency of 120 Hz. The efficacy of the proposed system is demonstrated by presenting the measurement results obtained for known geometrical objects moving at various speeds as well actual human movements. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: New Method of Microimages Generation for 3D Display
Sensors 2018, 18(9), 2805; https://doi.org/10.3390/s18092805
Received: 4 August 2018 / Revised: 22 August 2018 / Accepted: 23 August 2018 / Published: 25 August 2018
Abstract
In this paper, we propose a new method for the generation of microimages, which processes real 3D scenes captured with any method that permits the extraction of its depth information. The depth map of the scene, together with its color information, is used to create a point cloud. A set of elemental images of this point cloud is captured synthetically and from it the microimages are computed. The main feature of this method is that the reference plane of displayed images can be set at will, while the empty pixels are avoided. Another advantage of the method is that the center point of displayed images and also their scale and field of view can be set. To show the final results, a 3D InI display prototype is implemented through a tablet and a microlens array. We demonstrate that this new technique overcomes the drawbacks of previous similar ones and provides more flexibility setting the characteristics of the final image. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots
Sensors 2018, 18(8), 2730; https://doi.org/10.3390/s18082730
Received: 9 May 2018 / Revised: 6 August 2018 / Accepted: 15 August 2018 / Published: 20 August 2018
Abstract
Autonomous robots that assist humans in day to day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors such as LiDAR, radar, ultrasound sensors and cameras are utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams are different from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of a uncertainty aware free space detection algorithm. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
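
The resolution-matching idea, interpolating sparse LiDAR ranges onto a denser, image-aligned grid with a quantified uncertainty, maps naturally onto off-the-shelf Gaussian Process regression; a one-dimensional toy version is sketched below and should not be read as the paper's calibrated pipeline.

```python
# 1D toy: GP regression upsamples sparse range samples and reports uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

angles = np.linspace(-0.5, 0.5, 30)[:, None]            # sparse beam angles (rad)
rng = np.random.default_rng(1)
ranges = 5.0 + 0.3 * np.sin(6 * angles).ravel() + 0.02 * rng.normal(size=30)  # synthetic

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(angles, ranges)

dense_angles = np.linspace(-0.5, 0.5, 300)[:, None]     # pixel-aligned query grid
mean, std = gp.predict(dense_angles, return_std=True)   # interpolated range + std dev
```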

Open Access Article: A Method for 6D Pose Estimation of Free-Form Rigid Objects Using Point Pair Features on Range Data
Sensors 2018, 18(8), 2678; https://doi.org/10.3390/s18082678
Received: 10 July 2018 / Revised: 12 August 2018 / Accepted: 13 August 2018 / Published: 15 August 2018
Abstract
Pose estimation of free-form objects is a crucial task towards flexible and reliable highly complex autonomous systems. Recently, methods based on range and RGB-D data have shown promising results with relatively high recognition rates and fast running times. On this line, this paper presents a feature-based method for 6D pose estimation of rigid objects based on the Point Pair Features voting approach. The presented solution combines a novel preprocessing step, which takes into consideration the discriminative value of surface information, with an improved matching method for Point Pair Features. In addition, an improved clustering step and a novel view-dependent re-scoring process are proposed alongside two scene consistency verification steps. The proposed method performance is evaluated against 15 state-of-the-art solutions on a set of extensive and variate publicly available datasets with real-world scenarios under clutter and occlusion. The presented results show that the proposed method outperforms all tested state-of-the-art methods for all datasets with an overall 6.6% relative improvement compared to the second best method. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
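
For readers unfamiliar with the underlying descriptor, the classic four-dimensional Point Pair Feature that this family of methods votes with is written out below in NumPy; the paper's preprocessing, improved matching, clustering, and re-scoring steps are not shown.

```python
# Generic Point Pair Feature: F(m1, m2) = (||d||, ang(n1, d), ang(n2, d), ang(n1, n2)).
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_hat = d / dist
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, ang(n1, d_hat), ang(n2, d_hat), ang(n1, n2)])

# Two oriented points as a toy example.
f = point_pair_feature(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                       np.array([0.1, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(f)
```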

Open Access Article: Towards a Meaningful 3D Map Using a 3D Lidar and a Camera
Sensors 2018, 18(8), 2571; https://doi.org/10.3390/s18082571
Received: 30 May 2018 / Revised: 27 July 2018 / Accepted: 1 August 2018 / Published: 6 August 2018
Abstract
Semantic 3D maps are required for various applications including robot navigation and surveying, and their importance has significantly increased. Generally, existing studies on semantic mapping were camera-based approaches that could not be operated in large-scale environments owing to their computational burden. Recently, a method of combining a 3D Lidar with a camera was introduced to address this problem, and a 3D Lidar and a camera were also utilized for semantic 3D mapping. In this study, our algorithm consists of semantic mapping and map refinement. In the semantic mapping, a GPS and an IMU are integrated to estimate the odometry of the system, and subsequently, the point clouds measured from a 3D Lidar are registered by using this information. Furthermore, we use the latest CNN-based semantic segmentation to obtain semantic information on the surrounding environment. To integrate the point cloud with semantic information, we developed incremental semantic labeling including coordinate alignment, error minimization, and semantic information fusion. Additionally, to improve the quality of the generated semantic map, the map refinement is processed in a batch. It enhances the spatial distribution of labels and removes traces produced by moving vehicles effectively. We conduct experiments on challenging sequences to demonstrate that our algorithm outperforms state-of-the-art methods in terms of accuracy and intersection over union. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks
Sensors 2018, 18(8), 2430; https://doi.org/10.3390/s18082430
Received: 13 June 2018 / Revised: 18 July 2018 / Accepted: 20 July 2018 / Published: 26 July 2018
Abstract
In this paper, the Relative Pose based Redundancy Removal (RPRR) scheme is presented, which has been designed for mobile RGB-D sensor networks operating under bandwidth-constrained operational scenarios. The scheme considers a multiview scenario in which pairs of sensors observe the same scene from different viewpoints, and detect the redundant visual and depth information to prevent their transmission leading to a significant improvement in wireless channel usage efficiency and power savings. We envisage applications in which the environment is static, and rapid 3D mapping of an enclosed area of interest is required, such as disaster recovery and support operations after earthquakes or industrial accidents. Experimental results show that wireless channel utilization is improved by 250% and battery consumption is halved when the RPRR scheme is used instead of sending the sensor images independently. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
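
The geometric core of detecting redundant content between two RGB-D views is a depth-aware reprojection: back-project the pixels of one camera, transform them with the relative pose, and check which of them land inside the other camera's frustum. The sketch below illustrates only that step, with made-up intrinsics, pose, and a flat toy depth map; it is not the RPRR coding scheme itself.

```python
# Reprojection sketch: which pixels of view A are also visible in view B?
import numpy as np

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])               # assumed pinhole intrinsics (VGA)
R = np.eye(3)                                 # assumed relative rotation A -> B
t = np.array([0.1, 0.0, 0.0])                 # assumed 10 cm baseline

depth_a = np.full((480, 640), 2.0)            # flat toy depth map in metres

v, u = np.mgrid[0:480, 0:640]
z = depth_a.ravel()
pix = np.stack([u.ravel() * z, v.ravel() * z, z])     # pixel coords scaled by depth
points_a = np.linalg.inv(K) @ pix                     # back-project into camera A
points_b = R @ points_a + t[:, None]                  # express in camera B
proj = K @ points_b
u_b, v_b = proj[0] / proj[2], proj[1] / proj[2]

redundant = (u_b >= 0) & (u_b < 640) & (v_b >= 0) & (v_b < 480) & (points_b[2] > 0)
print("fraction of view A also covered by view B: %.2f" % redundant.mean())
```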

Open Access Article: Efficient 3D Objects Recognition Using Multifoveated Point Clouds
Sensors 2018, 18(7), 2302; https://doi.org/10.3390/s18072302
Received: 16 June 2018 / Revised: 10 July 2018 / Accepted: 11 July 2018 / Published: 16 July 2018
Abstract
Technological innovations in the hardware of RGB-D sensors have allowed the acquisition of 3D point clouds in real time. Consequently, various applications have arisen related to the 3D world, which are receiving increasing attention from researchers. Nevertheless, one of the main problems that remains is the demand for computationally intensive processing that required optimized approaches to deal with 3D vision modeling, especially when it is necessary to perform tasks in real time. A previously proposed multi-resolution 3D model known as foveated point clouds can be a possible solution to this problem. Nevertheless, this is a model limited to a single foveated structure with context dependent mobility. In this work, we propose a new solution for data reduction and feature detection using multifoveation in the point cloud. Nonetheless, the application of several foveated structures results in a considerable increase of processing since there are intersections between regions of distinct structures, which are processed multiple times. Towards solving this problem, the current proposal brings an approach that avoids the processing of redundant regions, which results in even more reduced processing time. Such approach can be used to identify objects in 3D point clouds, one of the key tasks for real-time applications as robotics vision, with efficient synchronization allowing the validation of the model and verification of its applicability in the context of computer vision. Experimental results demonstrate a performance gain of at least 27.21% in processing time while retaining the main features of the original, and maintaining the recognition quality rate in comparison with state-of-the-art 3D object recognition methods. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: A Miniature Binocular Endoscope with Local Feature Matching and Stereo Matching for 3D Measurement and 3D Reconstruction
Sensors 2018, 18(7), 2243; https://doi.org/10.3390/s18072243
Received: 17 May 2018 / Revised: 17 June 2018 / Accepted: 28 June 2018 / Published: 12 July 2018
Abstract
As the traditional single camera endoscope can only provide clear images without 3D measurement and 3D reconstruction, a miniature binocular endoscope based on the principle of binocular stereoscopic vision to implement 3D measurement and 3D reconstruction in tight and restricted spaces is presented. In order to realize the exact matching of points of interest in the left and right images, a novel construction method of the weighted orthogonal-symmetric local binary pattern (WOS-LBP) descriptor is presented. Then a stereo matching algorithm based on Gaussian-weighted AD-Census transform and improved cross-based adaptive regions is studied to realize 3D reconstruction for real scenes. In the algorithm, we adjust determination criterions of adaptive regions for edge and discontinuous areas in particular and as well extract mismatched pixels caused by occlusion through image entropy and region-growing algorithm. This paper develops a binocular endoscope with an external diameter of 3.17 mm and the above algorithms are applied in it. The endoscope contains two CMOS cameras and four fiber optics for illumination. Three conclusions are drawn from experiments: (1) the proposed descriptor has good rotation invariance, distinctiveness and robustness to light change as well as noises; (2) the proposed stereo matching algorithm has a mean relative error of 8.48% for Middlebury standard pairs of images and compared with several classical stereo matching algorithms, our algorithm performs better in edge and discontinuous areas; (3) the mean relative error of length measurement is 3.22%, and the endoscope can be utilized to measure and reconstruct real scenes effectively. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Accurate Calibration of Multi-LiDAR-Multi-Camera Systems
Sensors 2018, 18(7), 2139; https://doi.org/10.3390/s18072139
Received: 26 May 2018 / Revised: 25 June 2018 / Accepted: 29 June 2018 / Published: 3 July 2018
Abstract
As autonomous driving attracts more and more attention these days, the algorithms and sensors used for machine perception become popular in research, as well. This paper investigates the extrinsic calibration of two frequently-applied sensors: the camera and Light Detection and Ranging (LiDAR). The calibration can be done with the help of ordinary boxes. It contains an iterative refinement step, which is proven to converge to the box in the LiDAR point cloud, and can be used for system calibration containing multiple LiDARs and cameras. For that purpose, a bundle adjustment-like minimization is also presented. The accuracy of the method is evaluated on both synthetic and real-world data, outperforming the state-of-the-art techniques. The method is general in the sense that it is both LiDAR and camera-type independent, and only the intrinsic camera parameters have to be known. Finally, a method for determining the 2D bounding box of the car chassis from LiDAR point clouds is also presented in order to determine the car body border with respect to the calibrated sensors. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Open Access Article: Human Part Segmentation in Depth Images with Annotated Part Positions
Sensors 2018, 18(6), 1900; https://doi.org/10.3390/s18061900
Received: 2 May 2018 / Revised: 31 May 2018 / Accepted: 8 June 2018 / Published: 11 June 2018
Abstract
We present a method of segmenting human parts in depth images, when provided the image positions of the body parts. The goal is to facilitate per-pixel labelling of large datasets of human images, which are used for training and testing algorithms for pose estimation and automatic segmentation. A common technique in image segmentation is to represent an image as a two-dimensional grid graph, with one node for each pixel and edges between neighbouring pixels. We introduce a graph with distinct layers of nodes to model occlusion of the body by the arms. Once the graph is constructed, the annotated part positions are used as seeds for a standard interactive segmentation algorithm. Our method is evaluated on two public datasets containing depth images of humans from a frontal view. It produces a mean per-class accuracy of 93.55% on the first dataset, compared to 87.91% (random forest and graph cuts) and 90.31% (random forest and Markov random field). It also achieves a per-class accuracy of 90.60% on the second dataset. Future work can experiment with various methods for creating the graph layers to accurately model occlusion. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
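
The basic grid-graph construction the abstract builds on can be sketched in a few lines: one node per pixel, 4-connected edges, and weights that fall off with depth difference, ready for a seeded segmentation algorithm. The layered occlusion modelling that is the paper's contribution is not reproduced; the depth tile and the sigma parameter below are invented.

```python
# 4-connected grid graph over a toy depth tile, edge weights decaying with
# depth difference (a common choice, assumed here).
import numpy as np
import networkx as nx

depth = np.random.default_rng(0).uniform(0.5, 3.0, size=(8, 8))   # toy depth tile (m)
sigma = 0.1                                                        # assumed falloff

g = nx.Graph()
h, w = depth.shape
for y in range(h):
    for x in range(w):
        g.add_node((y, x))
        for dy, dx in ((0, 1), (1, 0)):          # right and down: 4-connectivity
            yy, xx = y + dy, x + dx
            if yy < h and xx < w:
                wgt = np.exp(-(depth[y, x] - depth[yy, xx]) ** 2 / (2 * sigma ** 2))
                g.add_edge((y, x), (yy, xx), weight=wgt)

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```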

Open Access Article: Pulse Based Time-of-Flight Range Sensing
Sensors 2018, 18(6), 1679; https://doi.org/10.3390/s18061679
Received: 3 April 2018 / Revised: 10 May 2018 / Accepted: 21 May 2018 / Published: 23 May 2018
Abstract
Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative range imaging approach, compared to the widely commercialized Amplitude Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail, and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
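
As background to the prototype evaluated above, the generic two-gate pulse-ToF range model is easy to state: with gate width T, the ratio of the charge collected in the second gate to the total charge locates the echo inside the pulse, giving depth ≈ (c·T/2)·Q2/(Q1+Q2). The numbers below are illustrative and do not describe the Hamamatsu sensor's actual mode of operation.

```python
# Generic two-gate pulse-based ToF range estimate; gate width and charges are
# illustrative values, not sensor specifications.
C = 299_792_458.0      # speed of light, m/s
T = 30e-9              # assumed gate/pulse width: 30 ns -> ~4.5 m unambiguous span

def pb_tof_depth(q1, q2):
    return 0.5 * C * T * q2 / (q1 + q2)

print(pb_tof_depth(700.0, 300.0))   # echo mostly in the first gate -> about 1.35 m
```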

Open Access Article: Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera
Sensors 2018, 18(5), 1478; https://doi.org/10.3390/s18051478
Received: 13 March 2018 / Revised: 21 April 2018 / Accepted: 21 April 2018 / Published: 8 May 2018
Abstract
Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is not sufficiently accurate anymore. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam to this object. In experiments, the mean accuracy of aiming the laser beam on an object is below 10 mm for 95% of the measurements. This achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
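The extrinsic calibration amounts to expressing points measured by the range camera in the coordinate frame of the galvanometric scanner. As a simplified stand-in for the Non-Perspective-n-Point solution used in the paper, the sketch below fits a rigid transform to corresponding 3D points with a plain least-squares (Kabsch) alignment; the function and variable names are hypothetical.

```python
# Least-squares rigid-transform fit (Kabsch); NOT the NPnP solver from the paper.
import numpy as np

def rigid_transform(p_cam, p_galvo):
    """p_cam, p_galvo: (N, 3) arrays of corresponding 3D points.

    Returns R, t such that p_galvo ~ R @ p_cam + t.
    """
    c_cam, c_galvo = p_cam.mean(axis=0), p_galvo.mean(axis=0)
    H = (p_cam - c_cam).T @ (p_galvo - c_galvo)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_galvo - R @ c_cam
    return R, t

# Aiming step: a point measured by the range camera, expressed in the galvo frame:
# p_galvo = R @ p_cam + t
```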
Open Access Article: Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking
Sensors 2018, 18(5), 1385; https://doi.org/10.3390/s18051385
Received: 9 April 2018 / Revised: 23 April 2018 / Accepted: 27 April 2018 / Published: 1 May 2018
PDF Full-text (8899 KB) | HTML Full-text | XML Full-text
Abstract
Traditionally, visual-based RGB-D SLAM systems use only correspondences with valid depth values for camera tracking, thus ignoring regions without 3D information. Due to the strict limitations on measurement distance and viewing angle, such systems rely only on short-range constraints, which may introduce large drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploiting visual features both where depth information is available and where it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates initial camera poses using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with the 3D correspondences, 2D-3D correspondences, and 2D correspondences identified from frame pairs. The initial 3D location of a correspondence is determined in one of two ways: from the depth image, or by triangulation using the initial poses. The model iteratively improves the camera poses and decreases drift error during long-distance RGB-D tracking. Experiments were conducted on data sequences collected with commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets containing both 2D and 3D features.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
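One ingredient of the hybrid model is lifting 2D-2D correspondences to 3D by triangulating them with the initial camera poses. The sketch below shows standard linear (DLT) triangulation under that assumption; it is not the paper's geometric-integration model itself.

```python
# Linear (DLT) triangulation of one 2D-2D correspondence from two known poses.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices K [R|t]; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)    # the null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]            # homogeneous -> Euclidean coordinates
```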
Open Access Article: Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model
Sensors 2018, 18(5), 1318; https://doi.org/10.3390/s18051318
Received: 30 March 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 24 April 2018
PDF Full-text (1630 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and the relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local detail, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
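To make the CNN-plus-CRF coupling concrete, the toy function below evaluates a continuous pairwise CRF energy over a depth map: unary terms pull each pixel towards the CNN prediction, and pairwise terms smooth neighbouring pixels weighted by colour similarity. The weighting scheme and parameter names are illustrative assumptions, not the paper's loss.

```python
# Toy continuous pairwise CRF energy for a depth map (illustrative assumptions).
import numpy as np

def crf_energy(depth, cnn_depth, image, lam=1.0, sigma=10.0):
    """depth, cnn_depth: (H, W) depth maps; image: (H, W) grayscale image."""
    unary = np.sum((depth - cnn_depth) ** 2)        # stay close to the CNN prediction
    pairwise = 0.0
    for axis in (0, 1):                             # right and down neighbours
        dd = np.diff(depth, axis=axis)              # depth difference between neighbours
        di = np.diff(image.astype(float), axis=axis)
        w = np.exp(-di ** 2 / (2 * sigma ** 2))     # similar intensity -> stronger smoothing
        pairwise += np.sum(w * dd ** 2)
    return unary + lam * pairwise
```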
Open Access Article: In Situ 3D Monitoring of Geometric Signatures in the Powder-Bed-Fusion Additive Manufacturing Process via Vision Sensing Methods
Sensors 2018, 18(4), 1180; https://doi.org/10.3390/s18041180
Received: 1 March 2018 / Revised: 29 March 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
Cited by 3 | PDF Full-text (22909 KB) | HTML Full-text | XML Full-text
Abstract
The lack of monitoring of in situ process signatures is one of the challenges restricting the improvement of Powder-Bed-Fusion Additive Manufacturing (PBF AM). Among the various process signatures, the monitoring of geometric signatures is of high importance. This paper presents the use of vision sensing methods as a non-destructive in situ 3D measurement technique to monitor two main categories of geometric signatures: the 3D surface topography and the 3D contour data of the fusion area. To increase efficiency and accuracy, an enhanced phase measuring profilometry (EPMP) is proposed to monitor the 3D surface topography of the powder bed and the fusion area reliably and rapidly. A slice-model-assisted contour detection method is developed to extract the contours of the fusion area. The performance of the techniques is demonstrated with selected measurements. Experimental results indicate that the proposed method can reveal irregularities caused by various defects and inspect contour accuracy and surface quality. It holds the potential to be a powerful in situ 3D monitoring tool for manufacturing process optimization, closed-loop control, and data visualization.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
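Phase measuring profilometry recovers the fringe phase (and hence surface height) from a set of phase-shifted fringe images. The sketch below implements the standard N-step wrapped-phase calculation as a minimal stand-in for the enhanced PMP (EPMP) proposed in the paper, whose enhancements are not reproduced here.

```python
# Standard N-step phase-shifting: wrapped phase from fringe images
# I_n = A + B*cos(phi + 2*pi*n/N).
import numpy as np

def wrapped_phase(images):
    """images: array-like of N fringe images, each of shape (H, W)."""
    images = np.asarray(images, dtype=float)
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(deltas), images, axes=1)   # sum_n I_n * sin(delta_n)
    c = np.tensordot(np.cos(deltas), images, axes=1)   # sum_n I_n * cos(delta_n)
    return np.arctan2(-s, c)    # wrapped phase in (-pi, pi]; still requires unwrapping
```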
Open Access Article: Three-Dimensional Registration for Handheld Profiling Systems Based on Multiple Shot Structured Light
Sensors 2018, 18(4), 1146; https://doi.org/10.3390/s18041146
Received: 15 February 2018 / Revised: 27 March 2018 / Accepted: 3 April 2018 / Published: 9 April 2018
PDF Full-text (8871 KB) | HTML Full-text | XML Full-text
Abstract
In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The approach consists of coarse registration followed by point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box. For a quantitative evaluation of the accuracy of a single 3D scan, the paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm with other variants of ICP was performed. The root mean square error of the ICP algorithm when registering a pair of point clouds of the skull object was found to be less than 1 mm.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
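The refinement stage relies on the iterative closest point algorithm. A bare-bones point-to-point ICP loop is sketched below for orientation only; the paper evaluates several more sophisticated ICP variants.

```python
# Minimal point-to-point ICP: nearest-neighbour matching + least-squares alignment.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """source: (N, 3), target: (M, 3) point clouds; returns the aligned source cloud."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                  # nearest target point for each source point
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cm - R @ cs
        src = src @ R.T + t                       # apply the incremental transform
    return src
```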
Open Access Article: Accurate Object Pose Estimation Using Depth Only
Sensors 2018, 18(4), 1045; https://doi.org/10.3390/s18041045
Received: 14 February 2018 / Revised: 7 March 2018 / Accepted: 28 March 2018 / Published: 30 March 2018
PDF Full-text (3978 KB) | HTML Full-text | XML Full-text
Abstract
Object recognition and pose estimation are important tasks in computer vision. A pose estimation algorithm that uses only depth information is proposed in this paper. Foreground and background points are distinguished based on their positions relative to boundaries. Model templates are selected using synthetic scenes to compensate for the limitations of the point pair feature algorithm. An accurate and fast pose verification method is introduced to select the final poses from among thousands of candidates. Our algorithm is evaluated on a large number of scenes and proves to be more accurate than algorithms that use both color and depth information.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
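Point pair features describe a pair of oriented points by the distance between them and the angles their normals make with the connecting vector and with each other (the usual Drost-style formulation). The helper below computes such a feature; the paper's boundary handling, template selection, and pose verification are not reproduced here.

```python
# Point pair feature (PPF) in its common four-component form.
import numpy as np

def angle(a, b):
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """p1, p2: 3D points; n1, n2: their surface normals."""
    d = p2 - p1
    return (np.linalg.norm(d),      # distance between the two points
            angle(n1, d),           # angle between normal 1 and the connecting vector
            angle(n2, d),           # angle between normal 2 and the connecting vector
            angle(n1, n2))          # angle between the two normals
```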
Review

Open Access Review: 3D Imaging Based on Depth Measurement Technologies
Sensors 2018, 18(11), 3711; https://doi.org/10.3390/s18113711
Received: 3 September 2018 / Revised: 26 October 2018 / Accepted: 26 October 2018 / Published: 31 October 2018
PDF Full-text (10731 KB) | HTML Full-text | XML Full-text
Abstract
Three-dimensional (3D) imaging has attracted increasing interest because of its widespread applications, especially in the information and life sciences. These techniques can be broadly divided into two types: ray-based and wavefront-based 3D imaging. Issues such as imaging quality and system complexity significantly limit the applications of these techniques, and many investigations have therefore focused on 3D imaging from depth measurements. This paper presents an overview of 3D imaging from depth measurements and summarizes the connection between ray-based and wavefront-based 3D imaging techniques.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
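As a reminder of the simplest ray-based case covered by such surveys, the one-liner below converts stereo disparity to depth via Z = fB/d (f: focal length in pixels, B: baseline, d: disparity in pixels); the example values are arbitrary.

```python
# Depth from stereo disparity: Z = f * B / d (a textbook ray-based depth measurement).
def stereo_depth(disparity_px, focal_px, baseline_m):
    return focal_px * baseline_m / disparity_px

print(stereo_depth(disparity_px=32.0, focal_px=700.0, baseline_m=0.12))  # 2.625 m
```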