Article

Efficient Depth Enhancement Using a Combination of Color and Depth Information

Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Sensors 2017, 17(7), 1544; https://doi.org/10.3390/s17071544
Submission received: 30 April 2017 / Revised: 12 June 2017 / Accepted: 26 June 2017 / Published: 1 July 2017
(This article belongs to the Special Issue Imaging Depth Sensors—Sensors, Algorithms and Applications)

Abstract

Studies on depth images containing three-dimensional information have been performed for many practical applications. However, the depth images acquired from depth sensors have inherent problems, such as missing values and noisy boundaries. These problems significantly affect the performance of applications that use a depth image as their input. This paper describes a depth enhancement algorithm based on a combination of color and depth information. To fill depth holes and recover object shapes, asynchronous cellular automata with neighborhood distance maps are used. Image segmentation and a weighted linear combination of spatial filtering algorithms are applied to extract object regions and fill disocclusion in the object regions. Experimental results on both real-world and public datasets show that the proposed method enhances the quality of the depth image with low computational complexity, outperforming conventional methods on a number of metrics. Furthermore, to verify the performance of the proposed method, we present stereoscopic images generated by the enhanced depth image to illustrate the improvement in quality.

1. Introduction

RGB-D sensors are used to identify color and depth simultaneously in real time. With the development of low-cost commercial RGB-D sensors such as Kinect and PrimeSense, computer vision technologies utilizing depth images or color and depth images have been used to develop many vision applications such as object tracking [1,2], pose estimation [3,4,5] for human-computer interaction (HCI), 3D modeling [6,7,8] and video surveillance [9,10,11].
The practical use of depth information is recognized as a key technology for many three-dimensional multimedia applications. Over the years, researchers have attempted to develop technologies that generate a high-quality three-dimensional view. Using depth information, high-quality three-dimensional images can be generated in the form of a stereoscopic image, which provides the necessary sense of reality [12]. Accordingly, extensive multimedia research based on depth information has been conducted, such as depth image-based rendering (DIBR) [12,13], free-viewpoint television (FTV) [14,15], augmented reality (AR) [16], virtual reality (VR) [17] and mixed reality (MR) [18].
However, depth sensors that rely on infrared laser light with a speckle pattern (e.g., the Kinect sensor) suffer from missing or inaccurate depth information. These problems are caused by the incorrect matching of infrared patterns and a positional difference between the internal infrared sensors. Incorrect pattern matching yields numerous errors, such as optical noise, loss of depth values and flickering. Moreover, the different positions of the depth sensor, which is composed of an infrared projector and camera [19], mean that the rear regions may be occluded by the front object, making it difficult for depth information to be measured. In particular, there can be much noise around the object shape, as shown in Figure 1. The result is low-quality depth information, which makes it difficult to utilize the computer vision technologies [20,21,22]. For this reason, enhanced depth information is urgently required for applications.
A number of methods for enhancing the quality of depth information and overcoming the limitations of depth sensors have been proposed. Matyunin et al. [23] suggested an algorithm that uses color and motion information derived from the image sequences to fill occlusion regions of the depth image and improve the temporal stability. This algorithm can make depth images more stable, rectify errors and smooth the image. The confidence metric for motion vectors, spatial proximity and occlusion is highly dependent on the depth image. Fu et al. [24] proposed a divisive normalized bilateral filtering method that is a modification of the method proposed in [25], filling up the depth holes in the spatial domain and reducing the noise in the temporal domain. However, this approach leads to a blurry depth image and has a high computational cost. Joint bilateral-based methods, such as joint bilateral filter [26], joint bilateral upsampling [27] and weighted mode filtering [28], aim to improve the quality of the depth image by utilizing an aligned color and depth image. In these methods, the color image is used as a guide while the edges are preserved. Unfortunately, these methods frequently yield blurring effects and artifacts around boundaries in regions with large holes. Chan et al. [29] presented a noise-aware filtering method that enhances the quality and resolution of the depth image using an adaptive multi-lateral upsampling filter. However, this approach must be implemented on a GPU for real-time performance, and the parameters in the heuristic model must be set manually. Le et al. [30] suggested a directional joint bilateral filtering scheme based on [26]. This method fills the holes and suppresses the noise in the depth image using an adaptive directional filter that is adjusted on the basis of the edge direction of a color image. Although the directional joint bilateral filter performs well if the depth hole regions are located near the object boundaries, it is only applicable to four cases described by the edge directions. Lin et al. [31] proposed a method based on inpainting [32] for removing artifacts and padding the occlusions in a depth image. This approach is designed to inpaint the removed regions in a color image by assigning a priority to pixel locations and filling the removed regions based on these priorities. Though this method can eliminate depth noise and temporal variations and smooth inaccurate depth values, the processed depth values are changed from their original values. The computation time remains a problem for real-time applications. Gong et al. [33] incorporated guidance information from an aligned color image for depth inpainting by extending the inpainting model and the propagation strategy of the fast marching method [34]. This method reconstructs unknown regions simply but efficiently from the surrounding areas without additional information. However, this approach cannot convey texture information in the holes. Despite all efforts, these methods are time consuming and deliver blurry results, especially when the depth hole area is large.
To extract the object regions, many image segmentation techniques based on color information have been developed [35,36,37,38,39]. However, these methods suffer from challenging issues concerning illumination variations, shadows, and complex textures. RGB-D sensors have been employed to solve the problems of color-based image segmentation methods, because depth information is less affected by these issues, even if an image has shadows or complex textures [10]. One of the first approaches based on the fusion of color and depth information was developed by Gordon et al. [40], who presented the background model using an approximation of a 4D Gaussian mixture. Using a unimodal approximation, each image pixel is classified as foreground when the background exists in fewer sequences. However, the background model does not provide the correct fit when the background is dynamic and has various values per pixel. Schiller and Koch [41] proposed an object segmentation method by combining the segmentation of depth measurements with segmentation in the color domain using adaptive background mixture of Gaussian (MoG) models. To determine the depth reliability, the authors concluded that the amplitude information provided by the ToF camera is more effective than the depth variance. Fernandez-Sanchez et al. [9] generalized the background subtraction algorithm by fusing color and depth information based on a Codebook-based model [42]. In this method, the depth information is considered as the fourth channel of the codebook, and provides the bias for the foreground based on color information. This approach was extended [10] by building a late fusion mask technique based on morphological reconstruction to reduce the noise of the disparity estimated by stereo vision. Camplani and Salgado [43] suggested an efficient combination of classifiers based on a weighted average. One of the classifiers is based on the color features and the other is based on the depth feature, and the support of each classifier in the ensemble is adaptively modified by considering the foreground detected in the previous sequences and the edges of the color and depth images. del Blanco et al. [11] developed a Bayesian network using a background subtraction method based on [43] to distinguish foreground and background regions from depth sequence images. This method takes advantage of a spatial estimation model and an algorithm for predicting the changes of foreground depth distribution. However, many of these approaches are designed for video surveillance and require image sequence pairs. Moreover, the segmentation results still contain much noise in the foreground and background.
In this paper, we propose a high-performance, low-complexity algorithm based on color and depth information by using asynchronous cellular automata with neighborhood distance maps. Our approach aims to fill the missing depth holes and recover inaccurate object shapes in depth images. The proposed cellular automata-based depth recovery covers whole regions of the inaccurate and noisy depth image. Moreover, a weighted linear combination of spatial filtering algorithms is utilized to fill the inner depth holes in the object. Considering that humans are more sensitive to objects in an image than to its background [44], we focus on depth holes in the object regions. In general, depth hole filling methods based on color information utilize the color values of pixels that have a valid depth value to fill the neighboring depth holes. These methods fill the depth holes by calculating color-metric distances between the color pixel corresponding to the depth hole and the color pixels having a valid depth value. However, if the depth values of the reference pixels are inaccurate because of inherent depth sensor issues (e.g., misaligned color and depth values around the hand, as depicted in Figure 1c, top row), there is a high risk of incorrect depth values filling in the hole regions. To minimize this risk, we design a weighted linear combination of spatial filtering algorithms by reflecting the characteristics of the depth holes in the object (e.g., the blue and green markers in Figure 1). In this algorithm, depth information from the rear regions is used to fill the inner holes. To extract the object depth regions, we introduce an image segmentation algorithm using the connectivity values in the depth domain.
The remainder of this paper is organized as follows. Section 2 describes the proposed method in detail, including an introduction to image segmentation based on the depth domain, the procedure for filling inner depth holes in an object, and the recovery of a depth image. Section 3 presents our experimental results, and Section 4 states the conclusions from this research.

2. Proposed Methodology

In this section, we propose a method to enhance depth images using both color and depth information. The central premise is based on using a color image that has a relatively high resolution and more image information, such as texture and colors, than the depth image. The proposed calculations on the color image are intended to enhance the depth quality.
The problems with the images captured by depth sensors are as follows:
  • Intermittent gaps in depth values in object regions, mainly because of reflections on the surface of the object (blue areas in Figure 1).
  • Depth information of the rear regions cannot be estimated because the different positions of internal sensors in the depth sensor cause the front object to interfere with the depth measurement (green markers in Figure 1).
  • Inaccuracies in the shape of objects compared to the actual scene. The depth value of an actual object consists of the object depth value (correct), background depth value (incorrect), and a missing depth value (incorrect) (red areas in Figure 1 show the inaccurate object boundaries).
In this study, we define an inner hole as the region with a missing depth value on account of gaps and interference from front objects, as stated above. Missing depth values are also called depth holes. To solve the problems of gaps and interference, inner holes are filled by a weighted linear combination of spatial filtering algorithms. In the case of shape inaccuracies, color and depth information is used to fill depth holes and recover the object shape. Our approach has three phases: image acquisition and preprocessing, image segmentation and weighted linear combination of spatial filtering, and depth recovery by asynchronous cellular automata (see Figure 2). In the first phase, the color and depth sensors are calibrated for the aligned color and depth image, and the depth image is filtered for the next phases. A morphological operation and spatial filtering are used to reduce and stabilize the depth noise. In the second phase, each object of the depth image is labeled according to the distribution, distance, and connectivity of depth values to separate the object regions and background. The inner holes in the object regions are filled using a weighted linear combination from the spatial filtering framework. The object and background depth regions are reduced using the morphological operation to recover accurate depth information in the next phase. The final phase uses a depth recovery algorithm to fill the remaining depth holes and refine the object boundary in the depth image. Details are explained in the following subsections.

2.1. Image Acquisition and Preprocessing

A color and depth image pair is acquired from the RGB-D sensor. As mentioned above, the image captured by the depth sensor contains noise, which may have an undesirable effect on the next phases. Hence, depth noise is reduced to stabilize the depth image.
To align the color and depth images, the color and depth sensors are calibrated using the camera geometrical model and calibration formulation [45]. Real depth values obtained from the depth sensor are normalized to the 8-bit range [0, 255], as shown in Figure 3b. The normalized depth values are utilized for object segmentation.
Equation (1) implements the linear quantization of depth: the pixel value is set to zero if the real depth value is less than Z_A, and to 255 if the real depth value is higher than Z_B.
$$
D_N(i,j) =
\begin{cases}
0, & \text{if } Z(i,j) < Z_A \\
255 \cdot \dfrac{Z(i,j) - Z_A}{Z_B - Z_A}, & \text{if } Z_A \le Z(i,j) \le Z_B \\
255, & \text{if } Z(i,j) > Z_B
\end{cases}
\tag{1}
$$
where Z(i,j) and D_N(i,j) are the real and eight-bit normalized depth values, respectively, and i and j are the pixel indices in the depth image. Z_A and Z_B are the minimum (near) and maximum (far) real depth values, respectively, and are set within the reliable measurement range specified for the depth sensor. In this study, we set Z_A = 0.4 m and Z_B = 3 m in accordance with the Kinect specifications [46]. Thus, quantization darkens near real depth values and brightens far real depth values. Zero values represent missing depth values or real depth values of less than Z_A.
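As an illustration, the following is a minimal Python sketch of Equation (1), assuming the raw depth map is a floating-point array in metres in which a value of zero marks a missing measurement; the function name and defaults are ours, not part of the original implementation.

```python
import numpy as np

def normalize_depth(Z, Z_A=0.4, Z_B=3.0):
    """Linear quantization of real depth (metres) to 8 bits, as in Equation (1).
    Depths below Z_A (including missing values, i.e., zero) map to 0;
    depths above Z_B map to 255."""
    D_N = np.zeros(Z.shape, dtype=np.uint8)
    in_range = (Z >= Z_A) & (Z <= Z_B)
    D_N[in_range] = np.round(255.0 * (Z[in_range] - Z_A) / (Z_B - Z_A)).astype(np.uint8)
    D_N[Z > Z_B] = 255
    return D_N
```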
Morphological operations and a median filter are used to stabilize the initial depth image according to Equation (2). Before using the median filter, erosion is employed to reduce the size of the object regions. The median filter is then applied to smooth the image. Finally, a dilation process restores the object regions to their original size.
$$
D = \big(\operatorname{median}(D_N \ominus A)\big) \oplus B
\tag{2}
$$
where ⊖ and ⊕ denote erosion by pixel set A and dilation by pixel set B, respectively, and D is the stabilized result of the normalized depth image D_N. The preprocessing steps of erosion, median filtering, and dilation have the advantages of reducing the noise and smoothing the boundaries of objects in the depth image without changing their size. Furthermore, the size of the depth regions can be reduced by changing the kernel size of the morphological operation when the object regions in the depth image exceed the boundary of the corresponding object in the color image.
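A compact sketch of this preprocessing step using OpenCV is shown below; the kernel shape and window sizes are illustrative assumptions rather than the values used by the authors.

```python
import cv2

def stabilize_depth(D_N, struct_size=3, median_size=5):
    """Equation (2): erosion by A, median filtering, then dilation by B.
    The erosion/dilation pair smooths object boundaries without changing their size."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (struct_size, struct_size))
    eroded = cv2.erode(D_N, kernel)            # D_N eroded by A
    smoothed = cv2.medianBlur(eroded, median_size)
    return cv2.dilate(smoothed, kernel)        # dilated by B
```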

2.2. Image Segmentation and Weighted Linear Combination of Spatial Filtering

First, the x-y pixel coordinates of the depth image are transformed into x-D coordinates by projecting all pixels in the pixel coordinate system onto the x-D coordinate system. Subsequently, a morphological operation is applied to connect neighboring valid points, and adjacent points on the transformed depth image are clustered by applying the connected component labeling algorithm [47]. The object regions in the depth domain are extracted by using an object detection method in the visual image. As a result, we can discriminate between the object and the background, and a weighted linear combination of spatial filtering algorithms is used to fill the inner depth holes in the object regions. A detailed explanation is provided in the following subsections.
Figure 4 shows the flowchart of a coordinate transformation and image segmentation for a depth image. In this section, x and y denote the horizontal and vertical axes of the 2D pixel coordinates; Z and D indicate the real and normalized depth axes, respectively; and X is the horizontal axis of the 3D world coordinates.

2.2.1. Coordinate Transformation of Depth Image

Each pixel of the color image (e.g., in the RGB color space) represents color information from the red, green, and blue channels, whereas each pixel of the depth image represents only depth information. This depth information can be transformed to another depth-based coordinate system. By using the D values in place of the y axis of the x-y coordinates (Figure 5a), a new two-dimensional image can be represented in the x and D domains, as shown in Figure 5b; its pixel values are the accumulated D values along each column (each x position) of the x-y image. Accordingly, a depth image in x-D coordinates represents the three-dimensional information viewed from the top. The x-D coordinate system of the depth image is useful for analysis because each object has similar depth values, which helps in clustering the various objects and the background.
The advantage of the x-D coordinate system (Figure 5b) over the X-Z coordinate system (Figure 5c) is that the x-D system produces salient objects from the normalized depth information. In addition, the sharing of the x axis allows us to project and re-project the images between x-y and x-D coordinates more easily than with X-Z coordinates.
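The transformation can be summarized by the following sketch, which accumulates, for every column x, a histogram over the normalized depth values D; the exact binning used by the authors is not specified, so this is only one plausible realization.

```python
import numpy as np

def to_xD(D, d_levels=256):
    """Project an x-y depth image into x-D (top-view) coordinates:
    cell (d, x) counts the pixels of column x whose normalized depth is d.
    Depth 0 (holes or out-of-range values) is ignored."""
    h, w = D.shape
    xD = np.zeros((d_levels, w), dtype=np.int32)
    for x in range(w):
        column = D[:, x]
        column = column[column > 0]
        if column.size > 0:
            counts = np.bincount(column, minlength=d_levels)
            xD[:, x] = counts[:d_levels]
    return xD
```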

2.2.2. Image Segmentation in Depth Domain

To extract object regions that have connective pixels in terms of their normalized depth values and locations, a connected component labeling algorithm is applied to the depth image in x-D coordinates. Figure 5b shows that the pixels of each object are close together. The morphological operation of closing is performed to reinforce the connectivity of the objects.
After closing the depth image in x-D coordinates, the connected components are labeled. Figure 6a shows an example of connected component labeling, in which the labeled objects are marked in different colors and the pixel values are binarized. To extract one of the labeled objects, as described in Figure 6a, object detection is applied to the color image. In this study, a pre-trained object detector [48] based on [49] is employed. The depth value at the detected position (x, y) is then obtained, and the detected location is projected onto the depth image in x-D coordinates so that the matching object can be selected (indicated by the circle in Figure 6b). After object selection in x-D coordinates, we extract the object regions (Figure 7b) in x-y coordinates by re-projecting the x-D information onto the depth image in x-y coordinates. Other regions are considered to be the background (Figure 7c).
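A sketch of this segmentation step is given below, assuming the x-D map from the previous sketch and a detector that returns a position (x_det, y_det) in the color image, from which d_det = D[y_det, x_det]; the kernel size and helper names are our own.

```python
import cv2
import numpy as np

def select_object_xD(xD, x_det, d_det, close_size=5):
    """Close the x-D occupancy map, label its connected components, and keep
    the component containing the detected object position (x_det, d_det)."""
    occupied = (xD > 0).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (close_size, close_size))
    closed = cv2.morphologyEx(occupied, cv2.MORPH_CLOSE, kernel)
    _, labels = cv2.connectedComponents(closed)
    target = labels[d_det, x_det]
    if target == 0:                      # detection fell on an empty x-D cell
        return np.zeros(closed.shape, dtype=bool)
    return labels == target

def reproject_to_xy(D, xD_mask):
    """Re-project the selected x-D component back to x-y coordinates: a pixel
    belongs to the object if its (depth, column) bin lies in the component."""
    obj = np.zeros(D.shape, dtype=bool)
    ys, xs = np.nonzero(D > 0)
    obj[ys, xs] = xD_mask[D[ys, xs], xs]
    return obj
```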

2.2.3. Weighted Linear Combination of Spatial Filtering for Inner Hole Filling

Depth sensors cannot measure depth information in regions of shadow and in the background. Regions of shadow are generally caused by objects in front, which is a geometrical limitation of depth sensors. These sensors consist of an infrared projector and an infrared camera at different positions. Accordingly, the different views of these compositions inevitably create problems such as inner holes on the boundary between the front and rear regions (the green areas in Figure 1b,c). Moreover, technical issues with depth sensors generate noise, i.e., reflection errors on a surface in which depth values cannot be measured (the blue areas in Figure 1b,c). To solve these problems, we propose a weighted linear combination of spatial filtering algorithms. The weighted linear combination is composed of the weighted sum of two terms, one related to the depth information of segmented depth regions and the other related to the depth information in the vicinity of inner holes, as shown in Equation (3).
$$
H = \alpha \times \operatorname{mean}(Z_{seg}) + \beta \times Z_N, \qquad N = \operatorname*{argmax}_{n_k}\,(Z_{n_k})
\tag{3}
$$
where H denotes the inner hole pixels and Z is a real depth value. Z_seg denotes the pixels in the segmented depth regions. α and β are the weights of the two terms, with α + β = 1. n indicates the size of the search mask of pixels surrounding the inner hole, and k is the index within n. α, β, and n are determined empirically according to the problem being considered.
The inner holes in the segmented regions are filled using the above equation, which combines the mean real depth value of the segmented depth regions with the maximum (far) real depth value surrounding each inner hole. The equation operates on real depth values. The mean depth value of the segmented depth regions is used to balance the depth biases of the holes, and the maximum real depth value surrounding the inner hole is used to account for depth similarities in the rear regions. Inner holes in the rear regions are mainly caused by the front objects; hence, the depth values of the front regions are not considered. Therefore, the mean depth value of the segmented depth regions reflects global properties of the segmented regions, and the maximum real depth value reflects local properties of the inner holes within them.
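A sketch of Equation (3) is shown below, using the parameter values reported later in the experiments (n = 23, α = 0.3, β = 0.7) and assuming Z is a floating-point array of real depth values; border handling is simplified.

```python
import numpy as np

def fill_inner_holes(Z, seg_mask, hole_mask, n=23, alpha=0.3, beta=0.7):
    """Fill inner-hole pixels with alpha * mean(Z_seg) + beta * Z_N, where Z_N
    is the farthest (maximum) valid real depth inside an n x n window around
    the hole pixel."""
    mean_seg = Z[seg_mask & (Z > 0)].mean()      # global term: segmented region
    half = n // 2
    filled = Z.copy()
    for y, x in zip(*np.nonzero(hole_mask)):
        window = Z[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
        valid = window[window > 0]
        if valid.size > 0:
            filled[y, x] = alpha * mean_seg + beta * valid.max()  # local far depth
    return filled
```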

2.3. Depth Recovery by Asynchronous Cellular Automata

To fill the depth holes and recover depth information for distorted object shapes in a depth image (the red areas in Figure 1b,c), we propose a depth recovery method inspired by [36] and based on cellular automata [50]. Cellular automata are described by a triplet A = (S, N, δ) that reflects a discrete model in both space and time. For each cell, S indicates the state set and N is the neighborhood system, which is defined as the relationship between the specified cell and its surrounding cells (the von Neumann (4-connected) or Moore (8-connected) neighborhood is generally used). δ indicates a local transition function that defines the rules for calculating the next state of each cell. The next state is determined from the current state of the cell and its neighboring cells.
In our proposal, asynchronous cellular automata (ACA) are applied. The ACA change states immediately, regardless of the processing steps, to reduce the number of iterations and computation time. In contrast, synchronous cellular automata (SCA) maintain their current states until the operation of the current step has been completed, and then change states simultaneously before the next step starts. The maximum strength value is given to pixels that have depth values; conversely, pixels in depth holes are assigned the minimum strength value. These hole pixels are filled by taking advantage of the feature vectors given by the pixel values in a given color space, the strength values of these pixels, and the transition function. The feature vectors of the input image do not change over time, so it is unnecessary to repeatedly calculate the distance between the feature vectors of the current cell and its neighboring cells in every step. Finally, we change the RGB color space to the Lab color space to improve the performance of the algorithm. The pixel values represented in a given color space are considered as feature vectors. The details are explained in the following subsections.

2.3.1. Asynchronous Cellular Automata

In an SCA system, all cells have the same state during the computation in each step. When a local transition function is applied to all cells in the current step, the states are updated simultaneously before the next step starts. Therefore, the states of time t and time t + 1 are independent of each other. In other words, the result of the local transition at time t has no effect on other cells at the same time.
In the ACA system applied in the proposed method, however, the states change immediately when the local transition function is computed. The results of this local transition have an effect on the other cells, regardless of the step. Thus, an algorithm that spreads the state of the cell to the neighborhood can be efficiently represented by ACA. Using ACA in place of SCA reduces the number of iterations, and thus the computation time.
In this study, we adopted a vertical scan order as shown in Figure 8 and Figure 9. Figure 8 illustrates the cell evolution steps given by SCA. The current defender (colored yellow and marked X in Figure 8) does not change state until the current time step has been completed, although the defender has been conquered by the attacker and will be changed to the attacker’s state. The defenders’ states are updated simultaneously at the end of the current time. For instance, although the empty cells will be changed by the attackers, the empty state cells are not changed in the current time and have no effect on neighboring cells, as shown in Figure 8. In contrast, the current defender (colored yellow and marked X in Figure 9) changes state immediately when conquered by the attacker in the ACA system. The empty state cells immediately affect the neighboring cells when the state has changed, as shown in Figure 9, which illustrates the cell evolution under ACA. Comparing Figure 8 with Figure 9, the result that requires three steps for SCA takes only one step for ACA.

2.3.2. Depth Recovery by Cellular Automata

To estimate a depth value and refine an object shape, we focus on the strength and feature vectors of cells. The cellular space P is defined by the image, and each pixel is considered as a cell. For each cell p in P, the cell state S_p has four terms (d_p, C_p, θ_p, b_p), where d_p is a depth value, C_p is a feature vector, θ_p is a strength, and b_p is a Boolean flag. The depth value d_p, strength θ_p, and flag b_p are defined by the depth image. The feature vector C_p is defined by the color image. We assume that θ_p ∈ [0, 1]. If cell p has a valid depth value, then θ_p is set to the maximum value of 1 and b_p is set to true. If cell p has an invalid depth value, θ_p and b_p are set to zero and false, respectively. The Boolean flag b_p indicates whether cell p has any depth value in the input depth image.
Algorithm 1 (Lines 6–28) depicts the entire process of the depth recovery method. To explain our method using a biological metaphor, a bacterium p (attacker) attacks its neighboring bacteria N ( p ) (defenders) using an attack force. The attack force is defined by the product of the strength θ p of the attacker and the value obtained from Equation (4), expressed as follows [36].
$$
g(x) = 1 - \frac{x}{\max \lVert C \rVert_2}
\tag{4}
$$
in which x is the distance between the feature vectors of attacker C_p and defender C_q, as given by Equation (7), and C is the feature vector. The function g(x) is a monotonically decreasing function with a minimum value of zero and a maximum value of one.
Algorithm 1 Depth recovery by asynchronous cellular automata.
Input: color image I_c (feature vectors C); depth image I_d (depth values d)
Output: enhanced depth image I_d (recovered depth values d)
Initialize: condition flag: k ← true
1: for p ∈ P do
2:   for q ∈ N(p) do
3:     NDM_{p,q} ← g(‖C_p − C_q‖_2)
4:   end for
5: end for
6: for p ∈ P do
7:   if d_p ≠ 0 then
8:     θ_p ← 1
9:     b_p ← true
10:  else
11:    θ_p ← 0
12:    b_p ← false
13:  end if
14: end for
15: while k = true do
16:   k ← false
17:   for p ∈ P do
18:     if b_p ≠ true then
19:       for q ∈ N(p) do
20:         if NDM_{p,q} · θ_q > θ_p then
21:           d_p ← d_q
22:           θ_p ← NDM_{p,q} · θ_q
23:           k ← true
24:         end if
25:       end for
26:     end if
27:   end for
28: end while
If the attack force is greater than the strength θ_q of the defender, the depth value d_q and the strength θ_q of the defender are replaced by the attacker's depth value d_p and the attack force, respectively. When the replaced bacteria attack their neighboring defenders, they use the changed values immediately, regardless of the step. Only those bacteria that have a false flag (b_p = false) are repeatedly attacked. These operations are repeated until there is no change in the state of the cells. In this iterative process, the holes are filled by spreading the bacteria. For this reason, we called this method "GrowFill". The computational complexity of GrowFill is O(snk), where s is the number of invalid pixels in the input depth image, n is the size of the neighborhood system, and k is the number of iterations.
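For illustration only, the following Python sketch mirrors Algorithm 1 (the authors' implementation is in C); `ndm` is assumed to map each neighbor offset to a precomputed distance map, as sketched in Section 2.3.3.

```python
import numpy as np

# Moore (8-connected) neighborhood offsets
MOORE = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def growfill(depth, ndm, neighbors=MOORE):
    """Asynchronous cellular-automata depth recovery in the spirit of Algorithm 1.
    Cells with a valid depth start with strength 1; hole cells (depth 0) keep a
    false flag and are repeatedly attacked. State changes take effect immediately
    (ACA), so fewer sweeps are needed than with SCA."""
    h, w = depth.shape
    d = depth.astype(np.float64)
    theta = (depth > 0).astype(np.float64)       # strength of each cell
    hole = depth == 0                            # flag stays false for these cells
    changed = True
    while changed:
        changed = False
        for x in range(w):                       # vertical scan order (column by column)
            for y in range(h):
                if not hole[y, x]:
                    continue
                for dy, dx in neighbors:
                    qy, qx = y + dy, x + dx
                    if 0 <= qy < h and 0 <= qx < w:
                        force = ndm[(dy, dx)][y, x] * theta[qy, qx]
                        if force > theta[y, x]:
                            d[y, x] = d[qy, qx]   # take the attacker's depth
                            theta[y, x] = force   # ... and the attack force
                            changed = True
    return d
```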

2.3.3. Neighborhood Distance Map

The steps involved in calculating the evolution of automata are continuously processed until the stable condition is reached. Equation (5) calculates the Euclidean distance between the feature vector of the current cell p and that of its neighboring cell q:
$$
\lVert C_p - C_q \rVert_2 = \sqrt{(R_p - R_q)^2 + (G_p - G_q)^2 + (B_p - B_q)^2}
\tag{5}
$$
where C is the feature vector of a specific pixel, which includes visual information. If the RGB color space is used for the feature vector, R, G, and B are the values of the red, green, and blue channels, respectively, as described in Equation (5). p is the pixel indicating the current cell and q is a pixel in the neighborhood of p.
The feature vector is indicated by pixel information from a color image. When the algorithm is executed, however, the feature vectors do not change until the end. The color image is a hard constraint, because the visual information does not change while the algorithm is being processed. Hence, the distance calculated between two feature vectors does not change, and there is no need to repeat the distance calculations at every step. Therefore, the neighborhood distance map can be generated before entering the automata evolution steps and used to find the necessary distances.
$$
NDM_{p,q} = g\big(\lVert C_p - C_q \rVert_2\big) = 1 - \frac{\lVert C_p - C_q \rVert_2}{\max \lVert C \rVert_2}
\tag{6}
$$
in which NDM_{p,q} is the neighborhood distance map (NDM). The NDMs are generated before starting the evolution steps in Algorithm 1 (Lines 1–5). After the NDMs have been generated, they are used in every iterative step (Algorithm 1, Lines 15–28). As a result, during the operation of the algorithm, Equation (6) is not calculated in each iteration.
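A sketch of the NDM precomputation is given below; the normalizer max‖C‖₂ is taken here as the largest possible feature-vector norm for three 8-bit channels, which is an assumption, and border wrap-around is ignored for brevity. The returned maps can be passed directly to the GrowFill sketch in Section 2.3.2.

```python
import numpy as np

def build_ndm(C, neighbors):
    """Precompute NDM[p, q] = 1 - ||C_p - C_q||_2 / max||C||_2 for each
    neighbor offset. C is an (h, w, 3) feature image (e.g., Lab channels)."""
    feat = C.astype(np.float64)
    max_norm = np.sqrt(3.0) * 255.0              # assumed bound for 8-bit channels
    ndm = {}
    for dy, dx in neighbors:
        # shifted[y, x] = feat[y + dy, x + dx] (values wrap at the image borders)
        shifted = np.roll(feat, (-dy, -dx), axis=(0, 1))
        dist = np.linalg.norm(feat - shifted, axis=2)
        ndm[(dy, dx)] = 1.0 - dist / max_norm
    return ndm
```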

2.3.4. Lab Color Space

The RGB color space is commonly used to calculate the color-metric distance between feature vectors. Although the RGB color space is designed for hardware-oriented systems and is convenient for representing colors, it is not useful for object specification and recognition [51] and is not similar to the human perception of colors [52]. In contrast, the Lab color space is known to give a good representation of human color perception and is widely used for the evaluation of color differences and color matching systems [51]. Therefore, we use the Lab color space in the proposed algorithm.
Equation (7) is used to calculate the distance between feature vectors in our method.
$$
\lVert C_p - C_q \rVert_2 = \sqrt{(L_p - L_q)^2 + (a_p - a_q)^2 + (b_p - b_q)^2}
\tag{7}
$$
where C is a feature vector and L, a, and b denote the values of the L, a, and b channels. p is the pixel indicating the current cell, and q is a pixel in the neighborhood of p.
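For completeness, a minimal conversion with OpenCV is shown below (OpenCV scales 8-bit Lab channels to 0–255, a convention that may differ from the authors' implementation); the result can be used as the feature image C when building the NDM, so that Equation (7) replaces Equation (5) as the feature distance.

```python
import cv2

def lab_features(bgr_image):
    """Convert an 8-bit BGR color image to the Lab color space; the result is
    used as the feature image C for the neighborhood distance map."""
    return cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
```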

3. Experiments and Discussion

To validate our proposed method, we conducted a series of experiments on real-world Kinect datasets and the Tsukuba Stereo Dataset [53,54]. For the real-world datasets, we captured color and depth image pairs using the Kinect and also obtained a public Kinect dataset [9,43,55]. The experimental results were compared with state-of-the-art methods. All experiments were conducted on a desktop computer with an Intel i7-3770 3.4 GHz CPU and 16 GB of RAM.
The experiments were as follows:
  • Object segmentation (quantitative and qualitative evaluations).
  • Inner hole filling (qualitative evaluation).
  • Depth recovery (quantitative and qualitative evaluations).
  • ACA, NDMs, and Lab color space on the proposed method (quantitative evaluation).
  • Enhanced depth images and a practical application of the proposed method.
We evaluated the performance of the object segmentation method with Fernandez’s Kinect dataset [9] and compared our method with the mixture of Gaussians based on color and depth (MOG4D) [41], the codebook [42] based on depth (CB1D) and based on color and depth (CB4D), and the depth-extended codebook (DECB) [9].
To evaluate the results, the following measures are used:
  • True positive (TP): the number of foreground pixels classified as foreground.
  • True negative (TN): the number of background pixels classified as background.
  • False positive (FP): the number of background pixels misclassified as foreground.
  • False negative (FN): the number of foreground pixels misclassified as background.
  • Precision (P): the proportion of correctly classified foreground among all pixels classified as foreground, P = TP / (TP + FP).
  • Recall (R): the proportion of correctly classified foreground among the ground-truth foreground, R = TP / (TP + FN).
  • F1 score: the harmonic mean of precision and recall, F1 = 2 · P · R / (P + R).
F1 ranges from 0 to 1, with higher values indicating better performance.
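These measures can be computed directly from binary foreground masks, as in the short helper below (our own, for illustration).

```python
import numpy as np

def segmentation_scores(pred_fg, gt_fg):
    """Precision, recall, and F1 from binary foreground masks (boolean arrays)."""
    tp = np.sum(pred_fg & gt_fg)
    fp = np.sum(pred_fg & ~gt_fg)
    fn = np.sum(~pred_fg & gt_fg)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f1
```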
Fernandez’s Kinect dataset [9] provides image pairs including color, depth, and ground-truth foreground images. As our proposed method focuses on a single object, five different image pairs (Wall #93, Hallway #120, Chair Box #278 and #286, Shelves #197) were selected for the quantitative and qualitative tests. Following the literature, we compare against the results reported in [9], as shown in Table 1 and Figure 10. Pre-trained body [56] and hand [57] detectors were used as the object detectors in our algorithm.
Table 1 presents the F1 scores. Our method outperforms MOG4D, CB1D, and CB4D, and has very similar performance to DECB. From Figure 10, we can observe that all the compared methods generate much noise over the whole image. The DECB results, which give an average F1 score that is 0.008 higher than that of our method, also contain much more noise than the image given by our algorithm. In particular, none of the compared methods can extract object regions that have the depth values of the depth image, as shown in Figure 10e. As the results are used by the following depth recovery algorithms, all the depth regions of the object should be extracted; otherwise, the actual depth information may be distorted. In addition, when a region with no assigned depth is generated as a segmentation result, the region cannot be estimated by the following algorithms. The purpose of the segmentation at this stage is to extract only the object regions that have actual depth values, in order to fill depth holes or manipulate the object boundary to recover depth values. Therefore, the object segmentation results should be object-oriented and the noise level should be low. Our method is best suited for this purpose.
The following describes the performance of the inner hole filling methods, as shown in Figure 11. To evaluate the performance of inner hole filling, we collected color and depth image pairs acquired by the Kinect sensor in an indoor environment. As shown in Figure 11e, inner holes exist in the rear object (body) as a result of the front object (hand) in the segmented regions. The results of inner hole filling by the proposed method are compared to those of five previous methods: flood-fill based on morphological reconstruction [58], Navier–Stokes-based inpainting [59], fast marching inpainting [34], joint bilateral filtering [26], and guided depth inpainting followed by guided filtering [33]. We set n = 23, α = 0.3, and β = 0.7 in Equation (3) for the proposed method, and set the radius to 11, σ_d = 2, and σ_c = 10 for the methods in [26,33,34,59], as per the values recommended in [33].
From the results of the methods in [26,33,34,59], we can easily observe that the depth values in the inner holes are filled by the depth values of both the front and rear objects, so that the filled regions are blurred and have incorrect depth values. The methods in [26,33] use both the color and depth images. In these methods, the hole regions of the rear object are affected by the front depth values when the inner holes are filled based on color information, because the limitations of the depth sensor cause the depth and color regions of the object to be imprecisely matched. In the case of [34,59], which use only depth information, the blur effect is inevitable because the information on the boundary is initially unknown. In contrast, the method based on [58] and the proposed method fill the holes without spreading the depth values of the front object or blurring the output. The difference is that the method based on [58] fills each hole with a single depth value, which results in a dissimilarity between the filled and actual depth values, whereas the proposed method fills the holes with depth values similar to the actual ones. The proposed method considers the characteristics of the inner holes and fills them with depth values similar to those of the rear object, without expanding the depth values of the front object. As a result, the proposed method gives the best results among all the methods compared in this experiment.
To evaluate the GrowFill results given by the proposed method, we used the Tsukuba Stereo Dataset. This dataset provides a total of 1800 image pairs including color, ground-truth depth (disparity), and occlusion images. The experiments were conducted using both the color images and the occluded depth images, where the occluded depth images are generated by excluding the occlusion regions from the ground-truth depth. In the dataset, all image pairs are based on the right camera, and the color images are illuminated in daylight. We compared our method with the techniques developed by Telea [34], Lin [31], and Gong [33]. The results of Lin’s method [31] are those reported in the corresponding paper. Unless specified otherwise, the neighborhood system of our method was implemented with Moore’s system. The numerical results are evaluated in terms of the peak signal-to-noise ratio (PSNR) [60] in decibels (dB), the structural similarity (SSIM) [61] against the ground truth, and the runtime in seconds (s). The runtime is averaged over 10 repeated experiments of our implementation in the C language. Ten image pairs (frame numbers 1, 214, 291, 347, 459, 481, 509, 525, 715 and 991) were selected, following [31], and both quantitative and qualitative tests were performed. Figure 12 presents the visual results of the qualitative evaluation, and Table 2 and Figure 13 illustrate the results of the quantitative evaluation. The results obtained from each method show that the proposed method performs better than the previous techniques in both the quantitative and qualitative evaluations. The proposed method gives the best performance in all but two cases in the quantitative evaluation: frame 214, where the PSNR of Gong’s method [33] is 0.425 dB higher than that of the proposed method, and frame 525, where the SSIM of Gong’s method [33] is about 0.002 higher than that of the proposed method. In particular, the proposed method is the fastest among those compared here for all selected datasets. On average, for the selected dataset, the proposed method improves the PSNR by 10.898 dB, whereas the methods of Telea [34], Lin [31], and Gong [33] produce improvements of 6.627 dB, 6.772 dB, and 9.620 dB, respectively. Our method improves the SSIM value by 0.126, compared with enhancements of 0.116, 0.105, and 0.124, respectively, for the other approaches. The average runtime of the proposed method is 0.118 s, faster than that of Telea’s method [34] (0.187 s) and Gong’s method [33] (0.615 s), and considerably quicker than Lin’s method [31] (12.543 s).
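For reference, PSNR and SSIM values of this kind can be computed with scikit-image, assuming 8-bit depth/disparity maps of equal size:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_depth(enhanced, ground_truth):
    """PSNR (dB) and SSIM of an enhanced depth map against the ground truth."""
    psnr = peak_signal_noise_ratio(ground_truth, enhanced, data_range=255)
    ssim = structural_similarity(ground_truth, enhanced, data_range=255)
    return psnr, ssim
```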
Table 3 presents the experimental results using the entire Tsukuba Stereo Dataset. In this experiment, the proposed method was compared with the methods of Telea [34] and Gong [33], which represent the fastest and best performing methods among those compared in the previous experiments, respectively. Additionally, we implemented the proposed method with both the Moore and von Neumann neighborhood systems. It is clear that the proposed method outperforms the compared methods. On average, for the entire dataset, the proposed method with the Moore and von Neumann neighborhood systems improves the PSNR by 14.485 dB and 14.067 dB and enhances the SSIM value by 0.116 and 0.115 in 0.138 s and 0.057 s, respectively. The methods of Telea [34] and Gong [33] improve the PSNR by 10.691 dB and 13.298 dB and the SSIM value by 0.109 and 0.114 in 0.117 s and 0.544 s, respectively. In particular, the proposed method with Moore’s neighborhood system achieves the best results in terms of PSNR and SSIM, and the proposed method with the von Neumann neighborhood system is the fastest. From these results, we observe that the proposed method performs best among all compared methods, regardless of the neighborhood system used.
In addition, we compared the performance of the internal algorithms of the proposed method (GrowFill) to verify the effects of the ACA and the NDM. Table 4 and Table 5 present the quantitative results for the SCA- and ACA-based methods with Moore’s neighborhood system on the selected Tsukuba Stereo Dataset, respectively. In the experiments, the NDM of our method was compared with the skipping method (SKP) suggested in [62] to reduce the computational cost. We can see that the PSNR, SSIM, and number of iterations of the algorithms did not deteriorate with the SKP or NDM schemes. However, the runtime is reduced by using these schemes. The pure ACA-based method is about 4.4 times faster than the pure SCA-based method. Nonetheless, the proposed method based on ACA combined with NDM is about 1.3 times faster than the pure ACA-based method, with no fall-off in quality. As a result, the proposed method (ACA + NDM) is about six times faster than the pure SCA-based method. The method based on ACA combined with SKP is slower than the pure ACA-based method, although the method based on SCA combined with SKP is faster than the pure SCA-based method. From these results, we can observe that SKP works faster with SCA, not with ACA. In the ACA-based experiments, the method with NDM is about 1.4 times faster than the ACA-based method with SKP. Figure 14 compares the runtimes of each internal algorithm. In all cases, the ACA-based methods are faster than the SCA-based methods. Further, the proposed method (ACA + NDM) is the fastest. The results in the tables show that the pure ACA-based method requires only one-third of the number of iterations of the SCA-based method under the same experimental conditions. Note that the runtime can only be reduced by reducing the number of iterations. In Appendix A, the results obtained with the von Neumann neighborhood system are described in detail.
Table 6 compares the internal algorithms of our method with the Moore and von Neumann neighborhood systems on the entire Tsukuba Stereo Dataset. We can see that the proposed method (ACA + NDM) with the Moore and von Neumann neighborhood systems is about 6.5 and 8 times faster than the pure SCA-based method, respectively, though the PSNR decreases slightly (by about 0.09 dB and 0.105 dB, respectively).
The results of the comparison between the RGB and Lab color spaces are presented in Table 7. The experiments show that the PSNR and SSIM performance is improved, and the number of iterations and runtime are decreased, by transforming from the RGB to Lab color space. Thus, the change of color space is an effective means of improving the performance of the algorithm.
Finally, we conducted experiments on the real-world dataset [43,55] and our own dataset to verify the effectiveness of our enhancement method. For the depth normalization, we set Z_A = 0.4 m and Z_B = 3 m (near range) for our data and Z_A = 0.8 m and Z_B = 4 m (default range) for the dataset in [43,55]. The extracted object (Figure 15c) and background (Figure 15d) regions were utilized to recover accurate depth information around the object. By taking advantage of the extracted object regions and morphological operations, the depth regions around the object were set as the estimable regions in the GrowFill. The yellow marker in Figure 15e indicates the original depth holes. The red and orange markers in Figure 15e indicate the depth holes expanded by applying the morphological operations to the object and background regions, respectively. Disk-shaped kernels with r = 6 for the object regions and r = 3 for the background regions were used in the morphology. The reason for expanding the depth holes is to recover the correct depth information by removing the incorrect depth information in the original depth image, as shown in Figure 16, top row, in which the color regions indicate the corresponding object depth regions; it can be noticed that the background also appears in the object depth regions. Figure 15f shows the enhanced depth image processed by the proposed method using Figure 15e as the input image, from which we can easily observe that the quality of the depth image has improved compared with the original depth image (Figure 15b). In particular, not only are the depth values of the depth images complete, but the object boundaries have also been clearly recovered. The enhanced depth images (Figure 16, bottom row) show that the object shape is more accurate than in the original depth images (Figure 16, top row). In addition, the results in Figure 17 were obtained by applying the DIBR technique to generate stereoscopic images, with background pixel extrapolation on newly exposed regions after 3D image warping. Figure 17b shows the visual enhancement given by the proposed method.

4. Conclusions

The main goal of this study was to enhance the quality of depth efficiently. To achieve this goal, a new depth enhancement approach has been introduced. The proposed method consists of an image segmentation algorithm to extract object regions and a weighted linear combination of spatial filtering algorithms. For inner holes, the characteristics of the hole regions inside the object regions were considered, and for other hole regions, an ACA-based depth recovery algorithm was combined with NDMs. Compared with the initial depth image, our experimental results on the Tsukuba Stereo Dataset show an improvement of 14.485 dB in PSNR and 0.116 in SSIM with Moore’s neighborhood system with an average runtime of only 0.138 s. With the von Neumann neighborhood system, our method achieves improvements of 14.067 dB in PSNR and 0.115 in SSIM in 0.057 s. Comparative experiments show that our method outperforms all compared approaches in terms of both quantitative and qualitative evaluations. Moreover, through experiments with a real-world dataset, we have confirmed that the object shape is recovered and the performance is improved. It is important to note that the proposed method is efficient enough to be employed in near-real-time applications, and it is expected that object regions extracted using our image segmentation algorithm could easily be utilized for activities such as view synthesis and virtual conference systems.

Acknowledgments

This work was supported by Institute for Information & Communications Technology Promotion (IITP) grants funded by the Korea government (MSIP) (No. 2016-0-00197 and No. 2016-0-00562).

Author Contributions

Kyungjae Lee developed the methodology, led the entire research including evaluations, wrote and revised the manuscript. Yuseok Ban was in charge of developing the weighted linear combination of spatial filtering algorithms. Sangyoun Lee guided the research direction and verified the research results.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

We compared the performance of the internal algorithms of the proposed method with the von Neumann neighborhood system to verify the effects of the ACA and the NDM. The quantitative performance of the SCA- and ACA-based methods with the von Neumann system on the selected Tsukuba Stereo Dataset is presented in Table A1 and Table A2, respectively. The pure ACA-based method is about 4.3 times faster than the pure SCA-based method. Nonetheless, the proposed method based on ACA combined with NDM is about 1.8 times faster than the pure ACA-based method without any degradation in quality. As a result, the proposed method (ACA + NDM) is about 7.7 times faster than the pure SCA-based method.
Table A1. Quantitative evaluation results comparing the internal algorithms of the SCA-based method with the von Neumann neighborhood system on the selected Tsukuba Stereo Dataset. The best performance is highlighted in bold.

| Frame | PSNR (dB): SCA | SCA + SKP | SCA + NDM | SSIM: SCA | SCA + SKP | SCA + NDM | Iterations: SCA | SCA + SKP | SCA + NDM | Time (s): SCA | SCA + SKP | SCA + NDM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #001 | 32.987 | 32.987 | 32.987 | 0.973 | 0.973 | 0.973 | 79 | 79 | 79 | 0.272 | 0.239 | 0.222 |
| #214 | 28.210 | 28.210 | 28.210 | 0.976 | 0.976 | 0.976 | 96 | 96 | 96 | 0.312 | 0.275 | 0.264 |
| #291 | 39.993 | 39.993 | 39.993 | 0.983 | 0.983 | 0.983 | 101 | 101 | 101 | 0.339 | 0.296 | 0.289 |
| #347 | 36.986 | 36.986 | 36.986 | 0.986 | 0.986 | 0.986 | 120 | 120 | 120 | 0.413 | 0.360 | 0.335 |
| #459 | 42.733 | 42.733 | 42.733 | 0.991 | 0.991 | 0.991 | 64 | 64 | 64 | 0.183 | 0.162 | 0.159 |
| #481 | 40.221 | 40.221 | 40.221 | 0.985 | 0.985 | 0.985 | 153 | 153 | 153 | 0.490 | 0.432 | 0.407 |
| #509 | 33.009 | 33.009 | 33.009 | 0.970 | 0.970 | 0.970 | 104 | 104 | 104 | 0.396 | 0.344 | 0.322 |
| #525 | 25.073 | 25.073 | 25.073 | 0.946 | 0.946 | 0.946 | 162 | 162 | 162 | 0.569 | 0.490 | 0.471 |
| #715 | 49.741 | 49.741 | 49.741 | 0.997 | 0.997 | 0.997 | 94 | 94 | 94 | 0.244 | 0.208 | 0.225 |
| #991 | 48.372 | 48.372 | 48.372 | 0.996 | 0.996 | 0.996 | 63 | 63 | 63 | 0.165 | 0.147 | 0.149 |
| Mean | 37.733 | 37.733 | 37.733 | 0.980 | 0.980 | 0.980 | 103.6 | 103.6 | 103.6 | 0.338 | 0.295 | 0.284 |
Table A2. Quantitative evaluation results comparing the internal algorithms of the proposed ACA-based method with the von Neumann neighborhood system on the selected Tsukuba Stereo Dataset. The best performance is highlighted in bold.

| Frame | PSNR (dB): ACA | ACA + SKP | ACA + NDM | SSIM: ACA | ACA + SKP | ACA + NDM | Iterations: ACA | ACA + SKP | ACA + NDM | Time (s): ACA | ACA + SKP | ACA + NDM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #001 | 32.987 | 32.987 | 32.987 | 0.973 | 0.973 | 0.973 | 48 | 48 | 48 | 0.099 | 0.102 | 0.054 |
| #214 | 28.210 | 28.210 | 28.210 | 0.976 | 0.976 | 0.976 | 21 | 21 | 21 | 0.045 | 0.046 | 0.027 |
| #291 | 39.981 | 39.981 | 39.981 | 0.983 | 0.983 | 0.983 | 31 | 31 | 31 | 0.067 | 0.070 | 0.040 |
| #347 | 36.986 | 36.986 | 36.986 | 0.986 | 0.986 | 0.986 | 62 | 62 | 62 | 0.129 | 0.133 | 0.067 |
| #459 | 42.706 | 42.706 | 42.706 | 0.990 | 0.990 | 0.990 | 42 | 42 | 42 | 0.066 | 0.067 | 0.035 |
| #481 | 40.221 | 40.221 | 40.221 | 0.985 | 0.985 | 0.985 | 51 | 51 | 51 | 0.099 | 0.101 | 0.055 |
| #509 | 33.009 | 33.009 | 33.009 | 0.970 | 0.970 | 0.970 | 38 | 38 | 38 | 0.096 | 0.098 | 0.052 |
| #525 | 25.073 | 25.073 | 25.073 | 0.946 | 0.946 | 0.946 | 44 | 44 | 44 | 0.101 | 0.101 | 0.060 |
| #715 | 49.412 | 49.412 | 49.412 | 0.997 | 0.997 | 0.997 | 21 | 21 | 21 | 0.033 | 0.032 | 0.021 |
| #991 | 48.370 | 48.370 | 48.370 | 0.996 | 0.996 | 0.996 | 37 | 37 | 37 | 0.050 | 0.051 | 0.029 |
| Mean | 37.696 | 37.696 | 37.696 | 0.980 | 0.980 | 0.980 | 39.5 | 39.5 | 39.5 | 0.078 | 0.080 | 0.044 |

References

  1. Park, S.; Yu, S.; Kim, J.; Kim, S.; Lee, S. 3D hand tracking using Kalman filter in depth space. EURASIP J. Adv. Signal Process. 2012, 2012, 36. [Google Scholar] [CrossRef]
  2. Kim, J.; Yu, S.; Kim, D.; Toh, K.A.; Lee, S. An adaptive local binary pattern for 3D hand tracking. Pattern Recognit. 2017, 61, 139–152. [Google Scholar] [CrossRef]
  3. Kirac, F.; Kara, Y.E.; Akarun, L. Hierarchically constrained 3D hand pose estimation using regression forests from single frame depth data. Pattern Recognit. Lett. 2014, 50, 91–100. [Google Scholar] [CrossRef]
  4. Shotton, J.; Sharp, T.; Kipman, A.; Fitzgibbon, A.; Finocchio, M.; Blake, A.; Cook, M.; Moore, R. Real-time human pose recognition in parts from single depth images. Commun. ACM 2013, 56, 116–124. [Google Scholar] [CrossRef]
  5. Plantard, P.; Auvinet, E.; Pierres, A.S.L.; Multon, F. Pose estimation with a kinect for ergonomic studies: Evaluation of the accuracy using a virtual mannequin. Sensors 2015, 15, 1785–1803. [Google Scholar] [CrossRef] [PubMed]
  6. Chen, X.; Zhou, B.; Lu, F.; Wang, L.; Bi, L.; Tan, P. Garment modeling with a depth camera. ACM Trans. Graph. 2015, 34. [Google Scholar] [CrossRef]
  7. Taylor, J.; Stebbing, R.; Ramakrishna, V.; Keskin, C.; Shotton, J.; Izadi, S.; Hertzmann, A.; Fitzgibbon, A. User-specific hand modeling from monocular depth sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 644–651. [Google Scholar]
  8. Tang, S.; Zhu, Q.; Chen, W.; Darwish, W.; Wu, B.; Hu, H.; Chen, M. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling. Sensors 2016, 16, 1589. [Google Scholar] [CrossRef] [PubMed]
  9. Fernandez-Sanchez, E.J.; Diaz, J.; Ros, E. Background subtraction based on color and depth using active sensors. Sensors 2013, 13, 8895–8915. [Google Scholar] [CrossRef] [PubMed]
  10. Fernandez-Sanchez, E.J.; Rubio, L.; Diaz, J.; Ros, E. Background subtraction model based on color and depth cues. Mach. Vis. Appl. 2014, 25, 1211–1225. [Google Scholar] [CrossRef]
  11. Del Blanco, C.R.; Mantecón, T.; Camplani, M.; Jaureguizar, F.; Salgado, L.; García, N. Foreground segmentation in depth imagery using depth and spatial dynamic models for video surveillance applications. Sensors 2014, 14, 1961–1987. [Google Scholar] [CrossRef] [PubMed]
  12. Fehn, C. Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV. In Proceedings of the International Society for Optics and Photonics, Electronic Imaging 2004, San Jose, CA, USA, 21 May 2004; pp. 93–104. [Google Scholar]
  13. Yin, S.; Dong, H.; Jiang, G.; Liu, L.; Wei, S. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps. Sensors 2015, 15, 15246–15264. [Google Scholar] [CrossRef] [PubMed]
  14. Tanimoto, M.; Tehrani, M.P.; Fujii, T.; Yendo, T. Free-viewpoint TV. IEEE Signal Process. Mag. 2011, 28, 67–76. [Google Scholar] [CrossRef]
  15. Cho, J.H.; Song, W.; Choi, H.; Kim, T. Hole Filling Method for Depth Image-Based Rendering Based on Boundary Decision. IEEE Signal Process. Lett. 2017, 24. [Google Scholar] [CrossRef]
  16. Billinghurst, M.; Clark, A.; Lee, G. A survey of augmented reality. Found. Trends® Hum. Comput. Interact. 2015, 8, 73–272. [Google Scholar] [CrossRef]
  17. Wang, L.; Hou, C.; Lei, J.; Yan, W. View generation with DIBR for 3D display system. Multimedia Tools Appl. 2015, 74, 9529–9545. [Google Scholar] [CrossRef]
  18. Fairchild, A.J.; Campion, S.P.; García, A.S.; Wolff, R.; Fernando, T.; Roberts, D.J. A mixed reality telepresence system for collaborative space operation. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 814–827. [Google Scholar] [CrossRef]
  19. Zhang, Z. Microsoft kinect sensor and its effect. IEEE Multimedia 2012, 19, 4–10. [Google Scholar] [CrossRef]
  20. Chen, L.; Wei, H.; Ferryman, J. A survey of human motion analysis using depth imagery. Pattern Recognit. Lett. 2013, 34, 1995–2006. [Google Scholar] [CrossRef]
  21. Vijayanagar, K.R.; Loghman, M.; Kim, J. Real-time refinement of Kinect depth maps using multi-resolution anisotropic diffusion. Mob. Netw. Appl. 2014, 19, 414–425. [Google Scholar] [CrossRef]
  22. Lasang, P.; Kumwilaisak, W.; Liu, Y.; Shen, S.M. Optimal depth recovery using image guided TGV with depth confidence for high-quality view synthesis. J. Vis. Commun. Image Represent. 2016, 39, 24–39. [Google Scholar] [CrossRef]
  23. Matyunin, S.; Vatolin, D.; Berdnikov, Y.; Smirnov, M. Temporal filtering for depth maps generated by Kinect depth camera. In Proceedings of the 2011 IEEE 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), Antalya, Turkey, 16–18 May 2011; pp. 1–4. [Google Scholar]
  24. Fu, J.; Miao, D.; Yu, W.; Wang, S.; Lu, Y.; Li, S. Kinect-like depth data compression. IEEE Trans. Multimedia 2013, 15, 1340–1352. [Google Scholar] [CrossRef]
  25. Fleishman, S.; Drori, I.; Cohen-Or, D. Bilateral mesh denoising. ACM Trans. Graph. 2003, 22, 950–953. [Google Scholar] [CrossRef]
  26. Petschnigg, G.; Szeliski, R.; Agrawala, M.; Cohen, M.; Hoppe, H.; Toyama, K. Digital photography with flash and no-flash image pairs. ACM Trans. Graph. 2004, 23, 664–672. [Google Scholar] [CrossRef]
  27. Kopf, J.; Cohen, M.F.; Lischinski, D.; Uyttendaele, M. Joint bilateral upsampling. ACM Trans. Graph. 2007, 26, 96. [Google Scholar] [CrossRef]
  28. Min, D.; Lu, J.; Do, M.N. Depth video enhancement based on weighted mode filtering. IEEE Trans. Image Process. 2012, 21, 1176–1190. [Google Scholar] [PubMed]
  29. Chan, D.; Buisman, H.; Theobalt, C.; Thrun, S. A noise-aware filter for real-time depth upsampling. In Proceedings of the Workshop on Multi-Camera and Multi-Modal Sensor Fusion Algorithms and Applications, Marseille, France, 5–6 October 2008. [Google Scholar]
  30. Le, A.V.; Jung, S.W.; Won, C.S. Directional joint bilateral filter for depth images. Sensors 2014, 14, 11362–11378. [Google Scholar] [CrossRef] [PubMed]
  31. Lin, B.S.; Su, M.J.; Cheng, P.H.; Tseng, P.J.; Chen, S.J. Temporal and Spatial Denoising of Depth Maps. Sensors 2015, 15, 18506–18525. [Google Scholar] [CrossRef] [PubMed]
  32. Criminisi, A.; Perez, P.; Toyama, K. Object removal by exemplar-based inpainting. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 16–22 June 2003; Volume 2. [Google Scholar] [CrossRef]
  33. Gong, X.; Liu, J.; Zhou, W.; Liu, J. Guided depth enhancement via a fast marching method. Image Vis. Comput. 2013, 31, 695–703. [Google Scholar] [CrossRef]
  34. Telea, A. An image inpainting technique based on the fast marching method. J. Graph. Tools 2004, 9, 23–34. [Google Scholar] [CrossRef]
  35. Rother, C.; Kolmogorov, V.; Blake, A. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 2004, 23, 309–314. [Google Scholar] [CrossRef]
  36. Vezhnevets, V.; Konouchine, V. GrowCut: Interactive multi-label N-D image segmentation by cellular automata. In Proceedings of Graphicon 2005; pp. 150–156. [Google Scholar]
  37. Boykov, Y.; Funka-Lea, G. Graph cuts and efficient N-D image segmentation. Int. J. Comput. Vis. 2006, 70, 109–131. [Google Scholar] [CrossRef]
  38. Grady, L. Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1768–1783. [Google Scholar] [CrossRef] [PubMed]
  39. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916. [Google Scholar] [CrossRef] [PubMed]
  40. Gordon, G.; Darrell, T.; Harville, M.; Woodfill, J. Background estimation and removal based on range and color. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Ft. Collins, CO, USA, 23–25 June 1999; Volume 2, pp. 459–464. [Google Scholar]
  41. Schiller, I.; Koch, R. Improved video segmentation by adaptive combination of depth keying and mixture-of-Gaussians. In Proceedings of the 17th Scandinavian Conference on Image Analysis, Ystad, Sweden, 23–27 May 2011; pp. 59–68. [Google Scholar]
  42. Kim, K.; Chalidabhongse, T.H.; Harwood, D.; Davis, L. Real-time foreground–background segmentation using codebook model. Real-Time Imag. 2005, 11, 172–185. [Google Scholar] [CrossRef]
  43. Camplani, M.; Salgado, L. Background foreground segmentation with RGB-D Kinect data: An efficient combination of classifiers. J. Vis. Commun. Image Represent. 2014, 25, 122–136. [Google Scholar] [CrossRef]
  44. Han, J.; Ngan, K.N.; Li, M.; Zhang, H.J. Unsupervised extraction of visual attention objects in color images. IEEE Trans. Circuits Syst. Video Technol. 2006, 16, 141–145. [Google Scholar] [CrossRef]
  45. Smisek, J.; Jancosek, M.; Pajdla, T. 3D with Kinect. In Consumer Depth Cameras for Computer Vision; Springer: Berlin, Germany, 2013; pp. 3–25. [Google Scholar]
  46. Microsoft Corporation, Kinect-Coordinate Spaces. Available online: https://msdn.microsoft.com/en-us/library/hh973078.aspx/ (accessed on 22 May 2017).
  47. He, L.; Chao, Y.; Suzuki, K.; Wu, K. Fast connected-component labeling. Pattern Recognit. 2009, 42, 1977–1987. [Google Scholar] [CrossRef]
  48. Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–123. [Google Scholar]
  49. Lienhart, R.; Kuranov, A.; Pisarevsky, V. Empirical analysis of detection cascades of boosted classifiers for rapid object detection. In Joint Pattern Recognition Symposium; Springer: Berlin, Germany, 2003; pp. 297–304. [Google Scholar]
  50. Von Neumann, J. Theory of Self-Reproducing Automata; University of Illinois Press: Champaign, IL, USA, 2002. [Google Scholar]
  51. Ibraheem, N.A.; Hasan, M.M.; Khan, R.Z.; Mishra, P.K. Understanding color models: A review. ARPN J. Sci. Technol. 2012, 2, 265–275. [Google Scholar]
  52. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  53. Peris, M.; Martull, S.; Maki, A.; Ohkawa, Y.; Fukui, K. Towards a simulation driven stereo vision system. In Proceedings of the 2012 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1038–1042. [Google Scholar]
  54. Martull, S.; Peris, M.; Fukui, K. Realistic CG stereo image dataset with ground truth disparity maps. In Proceedings of the ICPR Workshop TrakMark2012, Tsukuba, Japan, 11 November 2012; Volume 111, pp. 117–118. [Google Scholar]
  55. Moyà-Alcover, G.; Elgammal, A.; Jaume-i Capó, A.; Varona, J. Modeling depth for nonparametric foreground segmentation using RGBD devices. Pattern Recognit. Lett. 2016, in press. [Google Scholar]
  56. Castrillón, M.; Déniz, O.; Guerra, C.; Hernández, M. ENCARA2: Real-time detection of multiple faces at different resolutions in video streams. J. Vis. Commun. Image Represent. 2007, 18, 130–140. [Google Scholar] [CrossRef]
  57. Nambissan, A. Haarcascade Trained Model for Hand Detection, 2013. Available online: https://github.com/Aravindlivewire/Opencv/commit/a932f2defc22b0497173a5bea819bf14d9abe3d5/ (accessed on 22 May 2017).
  58. Soille, P. Morphological Image Analysis: Principles and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  59. Bertalmio, M.; Bertozzi, A.L.; Sapiro, G. Navier-Stokes, fluid dynamics, and image and video inpainting. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1. [Google Scholar] [CrossRef]
  60. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
  61. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  62. Yamasaki, T.; Chen, T.; Yagi, M.; Hirai, T.; Murakami, R. GrowCut-based fast tumor segmentation for 3D magnetic resonance images. In Proceedings of SPIE Medical Imaging, San Diego, CA, USA, 23 February 2012. [Google Scholar] [CrossRef]
Figure 1. (a) The initial depth image; (b) the depth image with colored markers; (c) the depth image overlaid on the color image, with colored markers. (The depth images are normalized and aligned with the color images. Blue, green and red markers indicate the first, second and third cases introduced in Section 2, respectively, and the black regions represent missing depth values.)
Figure 2. Flowchart of the proposed method.
Figure 3. Aligned (a) color and (b) depth image pair. The depth images are normalized to D_N (values between 0 and 255).
Figure 4. Flowchart of the coordinate transformation and image segmentation process.
Figure 5. Depth image in (a) x-y coordinates; (b) x-D coordinates; and (c) X-Z coordinates. (b,c) are binarized, and (c) is converted from mm to cm for visualization.
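For illustration, the following Python sketch (not the authors' implementation) shows one way of obtaining the top-view X-Z representation of Figure 5c from a depth image. The focal length fx and principal point cx are assumed Kinect-like values, and bin_size_cm is a hypothetical grid resolution.

```python
import numpy as np

def depth_to_top_view(depth_mm, fx=525.0, cx=319.5, bin_size_cm=1.0):
    """Accumulate valid depth pixels into an occupancy grid on the X-Z plane."""
    h, w = depth_mm.shape
    xs = np.tile(np.arange(w, dtype=np.float32), (h, 1))
    valid = depth_mm > 0                                   # zero depth = missing value
    Z = depth_mm[valid].astype(np.float32) / 10.0          # mm -> cm
    X = (xs[valid] - cx) * Z / fx                          # pinhole back-projection
    xi = ((X - X.min()) / bin_size_cm).astype(int)         # shift and quantize X
    zi = (Z / bin_size_cm).astype(int)
    grid = np.zeros((zi.max() + 1, xi.max() + 1), dtype=np.int32)
    np.add.at(grid, (zi, xi), 1)                           # count hits per (Z, X) cell
    return grid
```

Thresholding such an occupancy grid yields a binarized X-Z view comparable to the one in (c), which is then used for the labeling step illustrated in Figure 6.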
Figure 6. (a) Connected component labeling result (each colored marker indicates a distinct labeled object); (b) result of object selection (the circle indicates the detected position).
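A minimal sketch of this labeling and selection step is shown below. It uses OpenCV's connected-component routine rather than the labeling algorithm of [47], and it simply keeps the component whose centroid is closest to a detected position (e.g., a face or hand location); this selection criterion is an assumption made for illustration.

```python
import cv2
import numpy as np

def select_object_region(binary_top_view, detected_xy):
    """Label connected components and keep the one nearest to a detected position."""
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary_top_view.astype(np.uint8), connectivity=8)
    if num <= 1:                                   # only the background is present
        return None
    # Skip label 0 (background); choose the centroid closest to the detection.
    dists = np.linalg.norm(centroids[1:] - np.asarray(detected_xy, np.float32), axis=1)
    target = 1 + int(np.argmin(dists))
    return (labels == target).astype(np.uint8)
```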
Figure 7. (a) Labeled depth image (colored markers correspond to the connected component labeling results in Figure 6a); extracted (b) object and (c) background regions of the depth image in x-y coordinates.
Figure 8. Cell evolution steps of SCA. (a) is at time t; (b) at time t + 1; and (c) at time t + 2. The first column shows the initial cell state at each time step. (The yellow area marked X indicates the current defender, and the red arrow is the direction of attack on the defender by its neighboring cell, the attacker. The rectangular areas in red and yellow indicate cells whose state has changed.)
Figure 9. Cell evolution step of ACA. The first column is the initial cell state at time t. (The yellow marker denoted X indicates the current defender, and the red arrow is the direction of attack by its neighboring cell, the attacker. The rectangular areas in red indicate cells whose state has changed.)
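To make the difference between the two evolution schemes concrete, the sketch below implements one asynchronous (ACA-style) pass of a GrowCut-type attack rule [36]. It is an illustrative reimplementation rather than the paper's code; the von Neumann neighborhood and the color-distance strength function follow the usual GrowCut formulation.

```python
import numpy as np

def aca_pass(labels, strength, feature):
    """One asynchronous pass: conquered cells are updated in place, so they can
    immediately attack their own neighbors within the same pass (cf. Figure 9)."""
    h, w = labels.shape
    max_dist = np.sqrt(feature.shape[-1]) * 255.0           # maximum color distance
    neighbors = ((-1, 0), (1, 0), (0, -1), (0, 1))           # von Neumann system
    changed = 0
    for y in range(h):
        for x in range(w):                                   # (y, x) is the defender
            for dy, dx in neighbors:
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                dist = np.linalg.norm(feature[ny, nx] - feature[y, x])
                attack = (1.0 - dist / max_dist) * strength[ny, nx]
                if attack > strength[y, x]:                  # attacker conquers defender
                    labels[y, x] = labels[ny, nx]            # applied immediately (ACA)
                    strength[y, x] = attack
                    changed += 1
    return changed

# feature: float32 color image (H, W, 3); labels: integer cell states;
# strength: float32 array in [0, 1]. A synchronous (SCA) pass would instead
# write all updates to copies of labels/strength and swap them afterwards.
```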
Figure 10. Experimental results using Fernandez’s Kinect dataset ((a–e) indicate Wall #93; Hallway #120; Chair Box #278, #286; Shelves #197, respectively). Rows 1–3 are the color images, depth images, and ground truth, respectively. Rows 4–8 present the results given by MOG4D, CB1D, CB4D, DECB, and the proposed method, respectively.
Figure 11. (a,e) are the segmented depth regions and the masking region indicating inner depth holes, respectively; the others show the experimental results of the inner hole-filling methods: (b) method based on [58]; (c) method in [59]; (d) method in [34]; (f) method in [26]; (g) method in [33]; (h) proposed method. (The contrast of the depth images has been adjusted for visualization.)
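For reference, two of the compared baselines are available directly in OpenCV. The snippet below is a usage sketch, under the assumption that depth_u8 is an 8-bit normalized depth image and that missing measurements are stored as zeros.

```python
import cv2
import numpy as np

def fill_inner_holes(depth_u8):
    """Fill depth holes with two classic inpainting baselines from Figure 11."""
    hole_mask = np.uint8(depth_u8 == 0) * 255          # non-zero where depth is missing
    telea = cv2.inpaint(depth_u8, hole_mask, 3, cv2.INPAINT_TELEA)   # Telea [34]
    ns = cv2.inpaint(depth_u8, hole_mask, 3, cv2.INPAINT_NS)         # Navier-Stokes [59]
    return telea, ns
```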
Figure 12. Experimental results using the Tsukuba Stereo Dataset (# 001; # 214; # 291; # 347; # 459; # 481; # 509; # 525; # 715; # 991). (a–c) are the color images, ground-truth depth, and input depth images, respectively; (d) is the method in [34]; (e) is the method in [31]; (f) is the method in [33]; and (g) is the proposed method.
Figure 13. Comparison of (a) PSNR; (b) SSIM; and (c) running time on the selected Tsukuba Stereo Dataset.
Figure 14. Comparison of runtimes using the selected Tsukuba Stereo Dataset.
Figure 15. Examples of depth enhancement using the proposed method. (a,b) are the color and original depth images, respectively; (c,d) are the object and background depth regions obtained by the proposed method, respectively; (f) shows the enhanced depth images obtained by the GrowFill algorithm using (e) as the input depth image; the yellow marker in (e) indicates the original depth holes; red and orange markers in (e) show the regions expanded by the morphological operations based on (c,d), respectively.
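The region expansion marked in Figure 15e can be approximated with simple morphological operations; the sketch below erodes the valid object and background masks so that unreliable boundary pixels become holes before the filling step is applied. The kernel size is an illustrative assumption rather than the value used in the experiments.

```python
import cv2
import numpy as np

def expand_hole_regions(object_depth, background_depth, kernel_size=5):
    """Erode the valid-depth masks; everything stripped away becomes a hole."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    obj_keep = cv2.erode(np.uint8(object_depth > 0), kernel)
    bg_keep = cv2.erode(np.uint8(background_depth > 0), kernel)
    depth_in = object_depth * obj_keep + background_depth * bg_keep
    holes = np.uint8(depth_in == 0)                  # original plus expanded holes
    return depth_in, holes
```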
Figure 16. Object images synthesized using the object depth regions. The top row is based on the original depth image (Figure 15b); the bottom row is based on the enhanced depth image (Figure 15f).
Figure 17. Comparison of the quality of the stereoscopic images. (a,b) are generated using the original depth image and the depth image enhanced by the proposed method, respectively.
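As an illustration of how such a stereoscopic pair can be produced, the following sketch performs a simple DIBR-style horizontal warp. It assumes that larger values in the normalized depth image mean nearer surfaces and uses a hypothetical maximum disparity; the disoccluded pixels it leaves black would still need to be filled, which is where depth quality directly affects the synthesized view.

```python
import numpy as np

def render_second_view(color, depth_u8, max_disparity=24):
    """Warp each pixel horizontally by a depth-dependent disparity (z-buffered)."""
    h, w = depth_u8.shape
    disp = (depth_u8.astype(np.float32) / 255.0 * max_disparity).astype(np.int32)
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -1, dtype=np.int32)       # keep the nearest pixel per target
    rows = np.arange(h)
    for x in range(w):
        tx = x - disp[:, x]                          # target column for every row
        ok = (tx >= 0) & (disp[:, x] > zbuf[rows, np.clip(tx, 0, w - 1)])
        out[rows[ok], tx[ok]] = color[rows[ok], x]
        zbuf[rows[ok], tx[ok]] = disp[ok, x]
    return out                                       # black pixels are disocclusions
```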
Table 1. Quantitative evaluation results using Fernandez’s Kinect dataset. Red text indicates the best, and green text indicates the second best F1 score. MOG, mixture of Gaussians; CB, codebook; DECB, depth-extended codebook.
Method | Wall # 93 | Hallway # 120 | Chair Box # 278 | Chair Box # 286 | Shelves # 197 | Global Mean | Global Std
MOG4D | 0.406 | 0.424 | 0.883 | 0.865 | 0.927 | 0.701 | 0.262
CB1D | 0.927 | 0.791 | 0.904 | 0.904 | 0.897 | 0.885 | 0.054
CB4D | 0.843 | 0.606 | 0.936 | 0.907 | 0.855 | 0.829 | 0.131
DECB | 0.966 | 0.782 | 0.937 | 0.928 | 0.926 | 0.908 | 0.072
Ours | 0.930 | 0.800 | 0.907 | 0.911 | 0.950 | 0.900 | 0.058
Table 2. Quantitative evaluation results on the selected Tsukuba Stereo Dataset. Red text indicates the best, and green text indicates the second best performance. PSNR, peak signal-to-noise ratio; SSIM, structural similarity.
Frame | Depth Image (PSNR / SSIM) | Telea [34] (PSNR / SSIM / Time) | Lin [31] (PSNR / SSIM / Time) | Gong [33] (PSNR / SSIM / Time) | Proposed Method (PSNR / SSIM / Time)
# 001 | 27.659 / 0.843 | 30.912 / 0.962 / 0.188 | 32.036 / 0.950 / 13.125 | 32.579 / 0.971 / 0.663 | 33.314 / 0.975 / 0.127
# 214 | 22.824 / 0.827 | 28.138 / 0.973 / 0.209 | 28.141 / 0.969 / 14.833 | 28.657 / 0.976 / 0.643 | 28.232 / 0.977 / 0.065
# 291 | 26.972 / 0.838 | 37.056 / 0.976 / 0.198 | 37.665 / 0.970 / 17.013 | 39.590 / 0.982 / 0.679 | 40.730 / 0.985 / 0.118
# 347 | 26.549 / 0.833 | 31.699 / 0.971 / 0.218 | 32.343 / 0.942 / 13.666 | 35.435 / 0.982 / 0.691 | 37.177 / 0.986 / 0.181
# 459 | 31.920 / 0.897 | 37.222 / 0.980 / 0.149 | 38.480 / 0.977 / 8.858 | 42.535 / 0.989 / 0.507 | 45.723 / 0.992 / 0.066
# 481 | 29.272 / 0.854 | 37.192 / 0.978 / 0.206 | 37.488 / 0.970 / 14.389 | 39.944 / 0.982 / 0.634 | 40.691 / 0.986 / 0.143
# 509 | 27.038 / 0.808 | 29.132 / 0.951 / 0.264 | 30.120 / 0.933 / 18.415 | 33.096 / 0.967 / 0.818 | 33.299 / 0.972 / 0.163
# 525 | 20.006 / 0.832 | 23.665 / 0.940 / 0.198 | 24.044 / 0.916 / 12.270 | 25.731 / 0.950 / 0.692 | 25.732 / 0.948 / 0.191
# 715 | 29.665 / 0.902 | 46.269 / 0.995 / 0.136 | 43.772 / 0.993 / 7.555 | 48.984 / 0.995 / 0.417 | 50.255 / 0.996 / 0.067
# 991 | 32.781 / 0.921 | 39.666 / 0.988 / 0.107 | 38.312 / 0.981 / 5.309 | 44.331 / 0.992 / 0.402 | 48.511 / 0.996 / 0.056
Mean | 27.468 / 0.855 | 34.095 / 0.971 / 0.187 | 34.240 / 0.960 / 12.543 | 37.088 / 0.979 / 0.615 | 38.366 / 0.981 / 0.118
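The PSNR and SSIM figures in Tables 2 and 3 can be reproduced with standard implementations of the two metrics [60,61]. A minimal sketch using scikit-image (an assumption about tooling, not the authors' evaluation code) is shown below, with both depth maps given as 8-bit images of identical size.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_depth(enhanced, ground_truth):
    """Return (PSNR, SSIM) between two 8-bit depth maps of identical size."""
    psnr = peak_signal_noise_ratio(ground_truth, enhanced, data_range=255)
    ssim = structural_similarity(ground_truth, enhanced, data_range=255)
    return psnr, ssim
```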
Table 3. Quantitative evaluation results using the Tsukuba Stereo Dataset. The best performance is highlighted in bold.
Method | Mean PSNR | Mean SSIM | Mean Time
Depth Image | 26.762 | 0.871 | -
Telea [34] | 37.453 | 0.980 | 0.117
Gong [33] | 40.060 | 0.985 | 0.544
Ours (von Neumann) | 40.829 | 0.986 | 0.057
Ours (Moore) | 41.247 | 0.987 | 0.138
Table 4. Quantitative evaluation results for comparing internal algorithms of the SCA-based method on the selected Tsukuba Stereo Dataset. The best performance is highlighted in bold. SCA, synchronous cellular automata; SKP, skipping method; NDM, neighborhood distance map.
Frame | PSNR | SSIM | Iterations | Time (SCA) | Time (SCA + SKP) | Time (SCA + NDM)
# 001 | 33.313 | 0.975 | 118 | 0.602 | 0.493 | 0.513
# 214 | 28.232 | 0.977 | 131 | 0.615 | 0.503 | 0.538
# 291 | 40.730 | 0.985 | 142 | 0.712 | 0.579 | 0.624
# 347 | 37.173 | 0.986 | 147 | 0.748 | 0.603 | 0.637
# 459 | 45.783 | 0.993 | 89 | 0.351 | 0.292 | 0.305
# 481 | 40.693 | 0.986 | 211 | 0.993 | 0.824 | 0.874
# 509 | 33.299 | 0.972 | 164 | 0.949 | 0.778 | 0.810
# 525 | 25.732 | 0.948 | 254 | 1.361 | 1.111 | 1.174
# 715 | 49.491 | 0.996 | 130 | 0.454 | 0.353 | 0.434
# 991 | 48.516 | 0.996 | 88 | 0.308 | 0.255 | 0.274
Mean | 38.296 | 0.981 | 147.4 | 0.709 | 0.579 | 0.619
(PSNR, SSIM and iteration counts are identical for SCA, SCA + SKP and SCA + NDM; only the computation time differs.)
Table 5. Quantitative evaluation results for comparing internal algorithms of the proposed method on the selected Tsukuba Stereo Dataset. The best performance is highlighted in bold. ACA, asynchronous cellular automata.
Frame | PSNR | SSIM | Iterations | Time (ACA) | Time (ACA + SKP) | Time (ACA + NDM)
# 001 | 33.314 | 0.975 | 50 | 0.178 | 0.181 | 0.127
# 214 | 28.232 | 0.977 | 26 | 0.088 | 0.090 | 0.065
# 291 | 40.730 | 0.985 | 44 | 0.158 | 0.160 | 0.118
# 347 | 37.177 | 0.986 | 68 | 0.242 | 0.248 | 0.181
# 459 | 45.723 | 0.992 | 37 | 0.097 | 0.098 | 0.066
# 481 | 40.691 | 0.986 | 60 | 0.198 | 0.203 | 0.143
# 509 | 33.299 | 0.972 | 51 | 0.219 | 0.224 | 0.163
# 525 | 25.732 | 0.948 | 66 | 0.255 | 0.262 | 0.191
# 715 | 50.255 | 0.996 | 38 | 0.084 | 0.084 | 0.067
# 991 | 48.511 | 0.996 | 39 | 0.085 | 0.085 | 0.056
Mean | 38.366 | 0.981 | 47.9 | 0.160 | 0.164 | 0.118
(PSNR, SSIM and iteration counts are identical for ACA, ACA + SKP and ACA + NDM; only the computation time differs.)
Table 6. Quantitative evaluation results for comparing internal algorithms of the proposed method on the entire Tsukuba Stereo Dataset. The two sub-tables show the results obtained with the Moore and von Neumann neighborhood systems, respectively. The best computation times are highlighted in bold.
Moore neighborhood (mean):
Method | PSNR | SSIM | Iterations | Time
Depth Image | 26.762 | 0.871 | - | -
SCA | 41.337 | 0.987 | 197.7 | 0.902
ACA | 41.247 | 0.987 | 59.4 | 0.186
SCA + SKP | 41.337 | 0.987 | 197.7 | 0.714
ACA + SKP | 41.247 | 0.987 | 59.4 | 0.189
SCA + NDM | 41.337 | 0.987 | 197.7 | 0.798
ACA + NDM | 41.247 | 0.987 | 59.4 | 0.138

von Neumann neighborhood (mean):
Method | PSNR | SSIM | Iterations | Time
Depth Image | 26.762 | 0.871 | - | -
SCA | 40.934 | 0.986 | 153.1 | 0.467
ACA | 40.829 | 0.986 | 51.0 | 0.095
SCA + SKP | 40.934 | 0.986 | 153.1 | 0.401
ACA + SKP | 40.829 | 0.986 | 51.0 | 0.097
SCA + NDM | 40.934 | 0.986 | 153.1 | 0.412
ACA + NDM | 40.829 | 0.986 | 51.0 | 0.057
Table 7. Comparison of quantitative evaluation results for different color spaces on the entire Tsukuba Stereo Dataset. The best performance is highlighted in bold.
Method | Mean PSNR | Mean SSIM | Mean Iterations | Mean Time
Depth Image | 26.762 | 0.871 | - | -
SCA (RGB) | 41.290 | 0.986 | 211.6 | 0.978
SCA (Lab) | 41.337 | 0.987 | 197.7 | 0.902
ACA + NDM (RGB) | 41.198 | 0.986 | 63.4 | 0.150
ACA + NDM (Lab) | 41.247 | 0.987 | 59.4 | 0.138
