Special Issue "Depth Sensors and 3D Vision"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 31 August 2018

Special Issue Editor

Guest Editor
Prof. Roberto Vezzani

AImageLab, Dipartimento di Ingegneria "Enzo Ferrari", University of Modena and Reggio Emilia, Modena, Italy
Interests: computer vision; image processing; machine vision; pattern recognition; surveillance; people behavior understanding; human-computer interaction; depth sensors; 3D vision

Special Issue Information

Dear Colleagues,

The recent diffusion of inexpensive RGB-D sensors has encouraged the computer vision community to explore new solutions based on depth images. Depth information contributes significantly to solving or simplifying several challenging tasks, such as shape analysis and classification, scene reconstruction, object segmentation, people detection, and body part recognition. The intrinsic metric information and the robustness to texture and illumination variations of objects and scenes are only two of the advantages over pure RGB images.

For example, the hardware and software technologies included in the Microsoft Kinect framework allow easy estimation of the 3D positions of skeleton joints, providing a new, compact, and expressive representation of the human body.

Although the Kinect failed as a gaming-first device, it has been a launch pad for the spread of depth sensors and, with them, of 3D vision. From a hardware perspective, several stereo, structured IR light, and ToF sensors have appeared on the market and have been studied by the scientific community. At the same time, the computer vision and machine learning communities have proposed new solutions for processing depth data, either on its own or fused with other information such as RGB images.

This Special Issue seeks innovative work to explore new hardware and software solutions for the generation and analysis of depth data, including representation models, machine learning approaches, datasets, and benchmarks.

Particular topics of interest include, but are not limited to:

  • Depth acquisition techniques
  • Depth data processing
  • Analysis of depth data
  • Fusion of depth data with other modalities
  • Domain translation from and to depth data
  • 3D scene reconstruction
  • 3D shape modeling and retrieval
  • 3D object recognition
  • 3D biometrics
  • 3D imaging for cultural heritage applications
  • Point cloud modeling and processing
  • Human action recognition on depth data
  • Biomedical applications of depth data
  • Other applications of depth data analysis
  • Depth datasets and benchmarks
  • Depth data visualization

Prof. Roberto Vezzani
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Depth sensors
  • 3D vision
  • Depth data generation
  • Depth data analysis
  • Depth datasets

Published Papers (6 papers)


Research

Open Access Article: Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera
Sensors 2018, 18(5), 1478; https://doi.org/10.3390/s18051478
Received: 13 March 2018 / Revised: 21 April 2018 / Accepted: 21 April 2018 / Published: 8 May 2018
Abstract
Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is no longer sufficiently accurate. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam at this object. In experiments, the mean accuracy of aiming the laser beam at an object is below 10 mm for 95% of the measurements. This achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup yields errors below 68 mm for 95% of the measurements.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
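
As background for the extrinsic calibration described above, a minimal sketch of rigid-transform estimation from matched 3D points (the classic Kabsch/SVD solution) is shown below. This only illustrates the general 3D-3D alignment step; the paper's actual Non-Perspective-n-Point solver is different, and all function and variable names here are hypothetical.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3.
    Classic Kabsch/Umeyama solution via SVD of the cross-covariance.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det(R) = +1
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Given points measured by the range camera and the same points expressed in the galvanometer frame, such a transform would map one coordinate frame into the other.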

Open Access Article: Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking
Sensors 2018, 18(5), 1385; https://doi.org/10.3390/s18051385
Received: 9 April 2018 / Revised: 23 April 2018 / Accepted: 27 April 2018 / Published: 1 May 2018
Abstract
Traditionally, visual-based RGB-D SLAM systems use only correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitations on measurement distance and view angle, such systems adopt only short-range constraints, which may introduce large drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploiting visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with the 3D correspondences, 2D-3D correspondences, and 2D correspondences identified from frame pairs. The initial 3D location of each correspondence is determined in two ways: from the depth image, or by triangulation using the initial poses. The model iteratively improves the camera poses and decreases drift error during long-distance RGB-D tracking. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets including both 2D and 3D features.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
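
The abstract mentions that 2D correspondences lacking depth are assigned an initial 3D location by triangulating with the initial camera poses. A minimal sketch of that triangulation step, assuming OpenCV and a world-to-camera pose convention, is given below; the names are illustrative, not the authors' code.

```python
import numpy as np
import cv2

def triangulate_2d_matches(K, pose1, pose2, pts1, pts2):
    """Initial 3D locations for 2D-2D matches, given two camera poses.

    K: 3x3 intrinsics; pose1, pose2: 4x4 world-to-camera transforms;
    pts1, pts2: (N, 2) matched pixel coordinates in the two frames.
    """
    P1 = K @ pose1[:3, :]                            # 3x4 projection matrices
    P2 = K @ pose2[:3, :]
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T                  # (N, 3) Euclidean points
```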

Open Access Article: Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model
Sensors 2018, 18(5), 1318; https://doi.org/10.3390/s18051318
Received: 30 March 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 24 April 2018
Abstract
This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and the relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local detail, the relative depth trends of local regions are incorporated into the network. Combined with the semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
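
As a rough illustration of a continuous pairwise CRF over depth, the toy energy below combines a unary term (agreement with the CNN-regressed depth) with an appearance-weighted pairwise smoothness term. The exact potentials, weights, and features used in the paper are not reproduced here; everything in this sketch is an assumption.

```python
import numpy as np

def crf_energy(depth, cnn_depth, features, edges, lam=1.0, sigma=0.1):
    """Toy continuous pairwise CRF energy for depth estimation.

    depth:     (N,) candidate depth values (one per pixel/superpixel)
    cnn_depth: (N,) depths regressed by the CNN (unary term)
    features:  (N, D) appearance features gating the pairwise term
    edges:     iterable of (i, j) index pairs defining neighbor relations
    """
    unary = np.sum((depth - cnn_depth) ** 2)         # data fidelity
    pairwise = 0.0
    for i, j in edges:
        w = np.exp(-np.sum((features[i] - features[j]) ** 2) / (2 * sigma ** 2))
        pairwise += w * (depth[i] - depth[j]) ** 2   # smooth where appearance agrees
    return unary + lam * pairwise                    # lower energy is better
```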

Open Access Article: In Situ 3D Monitoring of Geometric Signatures in the Powder-Bed-Fusion Additive Manufacturing Process via Vision Sensing Methods
Sensors 2018, 18(4), 1180; https://doi.org/10.3390/s18041180
Received: 1 March 2018 / Revised: 29 March 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
Abstract
The lack of in situ monitoring of process signatures is one of the challenges that has been restricting the improvement of Powder-Bed-Fusion Additive Manufacturing (PBF AM). Among the various process signatures, the monitoring of geometric signatures is of high importance. This paper presents the use of vision sensing methods as a non-destructive in situ 3D measurement technique to monitor two main categories of geometric signatures: 3D surface topography and 3D contour data of the fusion area. To increase efficiency and accuracy, an enhanced phase measuring profilometry (EPMP) is proposed to monitor the 3D surface topography of the powder bed and the fusion area reliably and rapidly. A slice-model-assisted contour detection method is developed to extract the contours of the fusion area. The performance of the techniques is demonstrated with selected measurements. Experimental results indicate that the proposed method can reveal irregularities caused by various defects and inspect contour accuracy and surface quality. It holds the potential to be a powerful in situ 3D monitoring tool for manufacturing process optimization, closed-loop control, and data visualization.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
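
The proposed EPMP builds on standard phase measuring profilometry. For background, a minimal sketch of the classic four-step phase-shifting computation (shifts of pi/2) is shown below; the paper's enhancements are not reflected here, and a phase-unwrapping step would still be required before converting phase to height.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images i_k = A + B*cos(phi + (k-1)*pi/2).

    With these shifts, i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi),
    so the wrapped phase is atan2(i4 - i2, i1 - i3), in (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

def modulation(i1, i2, i3, i4):
    """Fringe modulation B; low values mark unreliable pixels (e.g., loose powder)."""
    return 0.5 * np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)
```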

Open Access Article: Three-Dimensional Registration for Handheld Profiling Systems Based on Multiple Shot Structured Light
Sensors 2018, 18(4), 1146; https://doi.org/10.3390/s18041146
Received: 15 February 2018 / Revised: 27 March 2018 / Accepted: 3 April 2018 / Published: 9 April 2018
Abstract
In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The approach is divided into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm against other variants of ICP was performed. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
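
For readers unfamiliar with the refinement stage, the sketch below implements a minimal point-to-point ICP loop: nearest-neighbor matching with a k-d tree followed by an SVD-based rigid update. It is a generic baseline under assumed conventions, not the variant evaluated in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def _kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def icp_refine(source, target, iters=30, tol=1e-6):
    """Refine a coarse alignment of source onto target (point-to-point ICP)."""
    tree = cKDTree(target)                           # index target once
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)                  # closest-point correspondences
        R, t = _kabsch(src, target[idx])             # incremental rigid update
        src = src @ R.T + t
        err = np.sqrt(np.mean(dist ** 2))            # RMSE between matched points
        if abs(prev_err - err) < tol:                # stop when improvement stalls
            break
        prev_err = err
    return src, err                                  # aligned cloud, final RMSE
```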

Open Access Article: Accurate Object Pose Estimation Using Depth Only
Sensors 2018, 18(4), 1045; https://doi.org/10.3390/s18041045
Received: 14 February 2018 / Revised: 7 March 2018 / Accepted: 28 March 2018 / Published: 30 March 2018
Abstract
Object recognition and pose estimation are important tasks in computer vision. A pose estimation algorithm using only depth information is proposed in this paper. Foreground and background points are distinguished based on their positions relative to object boundaries. Model templates are selected using synthetic scenes to compensate for the limitations of the point pair feature algorithm. An accurate and fast pose verification method is introduced to select the resulting poses from among thousands of candidates. Our algorithm is evaluated on a large number of scenes and shown to be more accurate than algorithms using both color and depth information.
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
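
The abstract builds on the point pair feature (PPF) algorithm. For reference, the standard four-dimensional PPF descriptor (in the style of Drost et al.) can be computed as sketched below; the paper's boundary-based foreground/background separation and pose verification steps are not shown.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Standard 4D PPF: F = (||d||, ang(n1, d), ang(n2, d), ang(n1, n2)).

    p1, p2: 3D surface points; n1, n2: their unit normals; d = p2 - p1.
    """
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / dist

    def ang(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    return np.array([dist, ang(n1, d_unit), ang(n2, d_unit), ang(n1, n2)])
```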
