Article

A Fractional-Order Total Variation Regularization-Based Method for Recovering Geiger-Mode Avalanche Photodiode Light Detection and Ranging Depth Images

1 School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
2 Xi'an Key Laboratory of Active Photoelectric Imaging Detection Technology, Xi'an Technological University, Xi'an 710021, China
* Authors to whom correspondence should be addressed.
Fractal Fract. 2023, 7(6), 445; https://doi.org/10.3390/fractalfract7060445
Submission received: 21 April 2023 / Revised: 18 May 2023 / Accepted: 29 May 2023 / Published: 31 May 2023

Abstract

High-quality image restoration is typically challenging due to low signal-to-background ratios (SBRs) and limited statistical frames. To address these challenges, this paper devises a method based on fractional-order total variation (FOTV) regularization for recovering Geiger-mode avalanche photodiode (GM-APD) light detection and ranging (lidar) depth images. First, a spatial-domain differential peak-picking method is used to extract the target depth image from low-SBR, few-frame data. FOTV regularization is then introduced into the total variation (TV) regularization recovery model by incorporating the fractional-order differential operator, yielding FOTV-regularization-based depth image recovery. These frameworks are combined into an algorithm for GM-APD depth image recovery based on FOTV. The simulation and experimental results demonstrate that, under the same SBR and statistical frame conditions, the devised FOTV-recovery algorithm improves the target reduction degree, peak signal-to-noise ratio, and structural similarity index measure by 76.6%, 3.5%, and 6.9%, respectively, over the TV algorithm. Thus, the devised approach can effectively recover GM-APD lidar depth images under low-SBR, few-frame conditions.

1. Introduction

Geiger-mode avalanche photodiode (GM-APD) light detection and ranging (lidar) is an active laser imaging radar that exploits the photon-level detection sensitivity of the GM-APD to detect the weak echo signals of long-range targets. It offers high detection sensitivity, a long operating range, and high range resolution [1,2] and has significant applications in target detection, remote sensing, military guidance, security monitoring, and other fields. However, GM-APD lidar is seriously disturbed by noise such as daylight and atmospheric backscattering. When the signal-to-background ratio (SBR, the ratio of target echo signal photons to background noise photons within the range gate) of the laser echo is low, the target echo is easily drowned in the noise, and the recovered target depth image then contains a large amount of noise. In addition, the GM-APD is a first-photon-triggered device: a single detection corresponds to only a single echo photon, and a signal photon cannot be distinguished from a noise photon in a single measurement. Multiple statistical measurements must therefore be accumulated, and a depth recovery algorithm must be applied to obtain the target depth image. In applications such as target detection and security monitoring, relative motion between the lidar and the target often occurs and the integration time is very short, so a sufficient number of statistical frames cannot be guaranteed; the performance of most depth recovery algorithms degrades under these conditions. Therefore, there is an urgent need to investigate methods for recovering GM-APD lidar depth images under low-SBR, few-frame conditions.
Considerable research has been carried out on GM-APD lidar depth image recovery. References [3,4] used a peak-thresholding method and a center-of-mass algorithm to recover target depth images, obtaining good results when the noise was low; however, when the echo SBR was low, the target signal was drowned in the background noise and the depth image could not be recovered. References [5,6,7,8] used parameter-estimation and data-fitting algorithms to recover target depth images with some improvement in noise immunity, but when the number of statistical frames is small, the image recovery of this type of algorithm is poor. The authors of [9,10,11,12,13,14,15] used color-image-assisted methods for depth image recovery; algorithms of this type use color-image information to remove noise from depth images and improve their accuracy and clarity. However, color images and depth images are usually acquired by different sensors, and differences in sensor response and calibration between them can lead to inconsistencies between the color and depth images, which in turn degrades the assisted recovery. Moreover, color and depth images have different data representations and processing pipelines, so color-image-assisted recovery requires complex processing steps such as data alignment, feature matching, and fusion, which increase the complexity and computational cost of the algorithm. The authors of [16,17] proposed depth image recovery methods based on deep learning, but such algorithms rely on large amounts of training data; the available GM-APD lidar datasets are small, so deep learning recovery currently offers no significant advantage. In Ref. [18], a filtering method was used to remove the noise in the depth image; the method is simple to operate, easy to implement, and suppresses nonlinear noise in the range image well, but it destroys target edges and cannot retain target detail. In Ref. [8], a two-dimensional double-threshold approximation recovery algorithm was proposed that removes noise by setting sub-pixel neighborhood thresholds and recovers the true values of noisy pixels with a neighborhood smoothing algorithm; because it only constructs connections between a sub-pixel and its surrounding pixels, the recovered depth information deviates more, and the overall smoothing of the target is poor compared with global filtering algorithms. In Ref. [19], a range image recovery algorithm based on nonlocal correlation was proposed, which constructs an energy equation with nonlocal spatial-correlation regularization terms between pixels and solves it iteratively with the ADMM algorithm to recover range images under sparse photons, but range image recovery under low-SBR conditions still requires further study. In Ref. [20], a depth image recovery method based on TV regularization and L1 data fidelity was proposed that reduces the noise level by minimizing the weighted sum of the TV regularization term and the L1 data-fidelity term; it is effective when the depth image contains little noise but performs poorly on low-SBR, few-frame GM-APD lidar depth images.
For low-SBR, few-frame GM-APD lidar depth images, these methods are therefore less effective and can destroy the original depth values of the target. The fractional-order total variation image recovery algorithm, by contrast, takes more spatial scales into account in the derivative operation, makes full use of the neighborhood information of the central pixel, balances the degree of smoothing, better preserves image detail, and recovers more accurate depth values. Compared with existing integer-order methods, fractional-order total variation is adaptive: the degree of smoothing can be adjusted to the content and noise level of the image, and choosing an appropriate fractional order balances the trade-off between noise suppression and detail retention. Because fractional-order total variation adapts better to different noise distributions and intensities, it can more effectively suppress various types of noise, including Gaussian, salt-and-pepper, and Poisson noise. By introducing fractional-order regularization terms, FOTV recovery provides higher-quality results, making it a powerful and flexible tool for image recovery that better balances noise suppression and detail preservation. Image recovery based on FOTV regularization has been widely applied to 2D images [21,22,23,24,25,26], but little research has been reported on GM-APD lidar range image recovery. To restore GM-APD lidar depth images under low-SBR, few-frame conditions, this paper devises a FOTV-regularization-based depth image restoration method. First, a spatial-domain differential peak-picking method is used to extract target depth images from low-SBR, few-frame GM-APD lidar data. Second, fractional-order differential operators are introduced into the TV-regularization-based image recovery model to construct a GM-APD depth image recovery model based on FOTV regularization. This fractional-order model is then used to process the target depth images and restore the true values of the noisy points. The effectiveness of the devised algorithm is verified through simulations and experiments.
The remainder of this paper is organized as follows. Section 2 describes the construction and solution of the FOTV regularization recovery model. Section 3 describes the FOTV-based method for GM-APD depth image recovery. Section 4 evaluates the recovery performance of the devised algorithm using the target reduction degree (K), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) as metrics. Section 5 presents the concluding remarks.

2. FOTV Regularization Recovery Model

2.1. TV Regularization Recovery Model

The variational method is typically used to solve ill-posed inverse problems by converting them into functional extremum problems. Because image recovery is such an inverse problem, variational methods can be applied to it. The TV regularization method can be mathematically expressed as follows [27]:
$$ TV(u) = \|\nabla u\|_1 = \sum_{i,j} \left| (\nabla u)_{i,j} \right| \tag{1}$$
where $|(\nabla u)_{i,j}| = \sqrt{(\nabla_1 u)_{i,j}^2 + (\nabla_2 u)_{i,j}^2}$ and $\nabla$ represents the gradient operator. For an image with a resolution of $M \times N$, the horizontal gradient $(\nabla_1 u)_{i,j}$ and vertical gradient $(\nabla_2 u)_{i,j}$ can be represented as follows:
$$ (\nabla_1 u)_{i,j} = \begin{cases} u_{i+1,j} - u_{i,j}, & i < M \\ 0, & i = M \end{cases} \tag{2}$$
$$ (\nabla_2 u)_{i,j} = \begin{cases} u_{i,j+1} - u_{i,j}, & j < N \\ 0, & j = N \end{cases} \tag{3}$$
The energy functional form of the TV regularization recovery model can be obtained by introducing a Lagrange multiplier $\lambda$ to generate an unconstrained extremum model:
$$ \min_u E(u) = TV(u) + \frac{\lambda}{2}\|u - f\|_2^2 \tag{4}$$
In Equation (4), the first term is the regularization term with the differential operator, which extracts the inherent structural features of a noisy image during recovery and removes the noise points that do not belong to the image features. The second term is the data fidelity term, which minimizes the difference between the denoised and original images, keeping the two images close so that the fidelity of the image is preserved without distortion. Here, $u$ and $f$ represent the denoised and noisy images, respectively. The parameter $\lambda > 0$ is the Lagrange multiplier, or regularization weight, which balances the regularization and data fidelity terms during recovery.
The TV-recovery algorithm exhibits a satisfactory recovery performance and edge preservation ability due to its anisotropic diffusion characteristics. However, the denoised images typically suffer from the “staircase effect” or virtual edges. Moreover, as shown in Equations (2) and (3), the TV regularization term considers only the first-order differences at the pixel points, i.e., it establishes connections between neighboring pixels without considering distant pixels, and thus it cannot fully exploit the information of the neighboring pixels.

2.2. Fractional-Order Derivatives

The introduction of fractional derivatives can enable more accurate descriptions of certain phenomena than is possible with the above-described method. Specifically, many physical phenomena with memory and hereditary characteristics can be effectively described by introducing fractional-order systems, and thus, such systems have attracted significant attention [28].
The commonly used definitions of fractional derivatives are the Grünwald–Letnikov, Riemann–Liouville, and Caputo definitions [29,30]. The Grünwald–Letnikov definition has a simple formula, which facilitates numerical computation; a flexible, tunable order, which enables adaptation to different signal-processing needs; and insensitivity to noise, which enables noise suppression. The Grünwald–Letnikov definition is therefore well suited to image-processing applications.
The Grünwald–Letnikov fractional derivative of a real function $f(x)$, with $x \in [a, t]$, $a < t$, $a \in \mathbb{R}$, is defined as follows:
$$ {}_{a}^{G}D_{t}^{\alpha} f(x) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{k=0}^{\left[\frac{t-a}{h}\right]} (-1)^k \binom{\alpha}{k} f(x - kh) \tag{5}$$
where $\binom{\alpha}{k} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)}$ and $h$ represents the step size for differentiation.
The equivalent expression for the $v$-order fractional derivative of a one-dimensional (1D) signal $f(t)$ over the interval $[a, t]$, with uniform partitioning using $h = 1$ and $m = \left[\frac{t-a}{h}\right] = [t-a]$ partitions, is
$$ D_t^v f(t) \approx f(t) + (-1)^1 v f(t-1) + (-1)^2 \frac{v(v-1)}{2} f(t-2) + \cdots + (-1)^j \frac{\Gamma(v+1)}{\Gamma(j+1)\,\Gamma(v-j+1)} f(t-j) \tag{6}$$
For a 2D image signal $f(x,y)$, the fractional derivatives along the x- and y-axis directions are assumed to be separable under certain conditions. Using the separability of the Fourier transform, the fractional calculus framework can be extended from one to two dimensions. The fractional derivatives along the x- and y-axes can be obtained by uniformly partitioning $f(x,y)$ according to the pixel spacing, as follows:
$$ D_x^v f(x,y) = \lim_{N \to \infty} \left[ \sum_{i=0}^{N-1} (-1)^i \frac{\Gamma(v+1)}{\Gamma(i+1)\,\Gamma(v-i+1)} f(x-i, y) \right] \tag{7}$$
$$ D_y^v f(x,y) = \lim_{N \to \infty} \left[ \sum_{j=0}^{N-1} (-1)^j \frac{\Gamma(v+1)}{\Gamma(j+1)\,\Gamma(v-j+1)} f(x, y-j) \right] \tag{8}$$
The coefficient $w_m^v$ of the $v$-order fractional derivative can be read off from Equations (7) and (8) (where $N$ is the number of polynomial terms) as follows:
$$ w_m^v = (-1)^m \frac{\Gamma(v+1)}{\Gamma(m+1)\,\Gamma(v-m+1)} \tag{9}$$
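As a concrete illustration, the coefficients of Equation (9) satisfy the recurrence $w_0^v = 1$, $w_m^v = \left(1 - \frac{v+1}{m}\right) w_{m-1}^v$, which avoids evaluating large Gamma-function ratios directly. The following minimal Python sketch (illustrative, not from the original paper) computes them:

```python
# Grünwald-Letnikov coefficients w_m^v of Equation (9) via the stable
# recurrence w_0^v = 1, w_m^v = (1 - (v + 1)/m) * w_{m-1}^v.
import numpy as np

def gl_coefficients(v: float, n: int) -> np.ndarray:
    """Return w_0^v, ..., w_{n-1}^v for fractional order v."""
    w = np.empty(n)
    w[0] = 1.0
    for m in range(1, n):
        w[m] = (1.0 - (v + 1.0) / m) * w[m - 1]
    return w

print(gl_coefficients(0.5, 4))   # [ 1.  -0.5  -0.125  -0.0625]
```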

2.3. FOTV-Regularization Recovery Model

Because the TV regularization model uses only first-order differences and cannot fully exploit the information of neighboring pixels, it may result in “block artifacts” in images. Therefore, fractional-order differences must be used to process neighboring pixels. Fractional-order differences possess memory, which allows them to not only use information from adjacent pixels but also incorporate information from distant pixels. Therefore, theoretically, fractional-order differences can help capture more abundant pixel information and mitigate block artifacts.
Fractional calculus is an extension of integer-order calculus, and the fractional TV regularization term is obtained from the TV regularization term $TV(u) = \|\nabla u\|_1$ as follows:
$$ FOTV(u) = TV^v(u) = \|D^v u\|_1 = \sum_{i,j} \left| (D^v u)_{i,j} \right| \tag{10}$$
where $|(D^v u)_{i,j}| = \sqrt{(D_1^v u)_{i,j}^2 + (D_2^v u)_{i,j}^2}$ and $D^v = (D_1^v, D_2^v)^T$; $D^v$ is a linear operator for the fractional-order derivatives, and $D_1^v$ and $D_2^v$ represent the fractional-order derivative operators in the horizontal and vertical directions, respectively. Equations (5)–(8) can be used to obtain the fractional-order finite forward difference scheme:
$$ D_1^v u_{x,y} = \sum_{i=0}^{N-1} w_i^v u_{x+i,y}, \qquad D_2^v u_{x,y} = \sum_{j=0}^{N-1} w_j^v u_{x,y+j} \tag{11}$$
Using the matrix approximation method, Equation (11) can be expressed as follows:
$$ D_1^v u \approx uD, \qquad D_2^v u \approx D^T u \tag{12}$$
where the matrix $D$ has the following lower-triangular Toeplitz form:
$$ D = \begin{bmatrix} w_0^v & 0 & \cdots & 0 \\ w_1^v & w_0^v & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ w_m^v & w_{m-1}^v & \cdots & w_0^v \end{bmatrix} \tag{13}$$
Substituting Equations (10)–(13) into Equation (4) yields the following mathematical model for FOTV regularization:
$$ \min_u E(u) = \lambda\, FOTV(u) + \frac{\mu}{2}\|u - f\|_2^2 = \lambda \|D^v u\|_1 + \frac{\mu}{2}\|u - f\|_2^2 \tag{14}$$
When $v = 1$, $D$ is a sparse banded matrix consisting of two diagonals, and the gradient information is determined by only the two adjacent points. When $v$ is not an integer, $D$ is a lower triangular matrix, as shown in Equation (13); that is, when calculating the fractional-order derivative at the $k$th point, all points before $k$ are involved. Thus, the fractional-order derivative is a global operator with long memory, which distinguishes it from integer-order derivatives.
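The following short sketch (an illustration under the stated matrix approximation, not the authors' code) builds the lower-triangular Toeplitz matrix $D$ of Equation (13) and applies the fractional differences of Equation (12) to an image:

```python
# Build D of Equation (13) and apply the horizontal/vertical fractional
# differences of Equation (12); the image u here is a random stand-in.
import numpy as np
from scipy.linalg import toeplitz

def fractional_diff_matrix(v: float, n: int) -> np.ndarray:
    w = np.empty(n)
    w[0] = 1.0
    for m in range(1, n):                    # w_m^v = (1 - (v+1)/m) w_{m-1}^v
        w[m] = (1.0 - (v + 1.0) / m) * w[m - 1]
    return toeplitz(w, np.zeros(n))          # first column w, zeros above diagonal

u = np.random.rand(64, 64)                   # stand-in for a 64 x 64 depth image
D = fractional_diff_matrix(0.5, 64)
Dx_u = u @ D                                 # D_1^v u ~ uD   (Equation (12))
Dy_u = D.T @ u                               # D_2^v u ~ D^T u
```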

2.4. Solution of the FOTV-Regularization Recovery Model

Traditional optimization algorithms cannot be used to solve the TV-regularization recovery model because it often exhibits non-smooth and non-convex characteristics. Thus, this model is typically solved using iterative algorithms. Among such algorithms, the split Bregman algorithm, which transforms the original problem into a series of subproblems, is commonly used, as it can efficiently and rapidly obtain solutions [31,32,33].
If $H(\cdot)$ and $\phi(\cdot)$ are convex functions and $\phi(\cdot)$ is non-differentiable, the constrained optimization problem can be formulated as follows:
$$ \min_u \|\phi(u)\|_1 + H(u) \quad \text{s.t.} \quad d = \phi(u) \tag{15}$$
This problem can be transformed into an unconstrained problem:
$$ \min_{d,u} \|d\|_1 + H(u) + \frac{\lambda}{2}\|d - \phi(u)\|_2^2 \tag{16}$$
where $\lambda$ is the penalty parameter. Introducing the auxiliary variable $b$ to facilitate computation yields the iterations
$$ \begin{cases} u^{k+1} = \arg\min_u H(u) + \frac{\lambda}{2}\|d^k - \phi(u) - b^k\|_2^2 \\ d^{k+1} = \arg\min_d \|d\|_1 + \frac{\lambda}{2}\|d - \phi(u^{k+1}) - b^k\|_2^2 \\ b^{k+1} = b^k + \left(\phi(u^{k+1}) - d^{k+1}\right) \end{cases} \tag{17}$$
The anisotropic FOTV-regularization recovery problem considered in this study is
$$ \min_u |D^v u| + \frac{\mu}{2}\|u - f\|_2^2 \tag{18}$$
The auxiliary variable $d$ is introduced as follows:
$$ \min_u |d| + \frac{\mu}{2}\|u - f\|_2^2 \quad \text{s.t.} \quad d = D^v u \tag{19}$$
The penalty term is added as follows:
$$ \min_{d,u} |d| + \frac{\mu}{2}\|u - f\|_2^2 + \frac{\lambda}{2}\|d - D^v u\|_2^2 \tag{20}$$
The solution is then obtained using Bregman iterations:
$$ \min_{d,u} |d| + \frac{\mu}{2}\|u - f\|_2^2 + \frac{\lambda}{2}\|d - D^v u - b^k\|_2^2 \tag{21}$$
where the Bregman variable $b^k$ is updated at each iteration so that the solution approaches the optimum.
To solve this minimization problem, the subproblems in $d$ and $u$ are solved separately, giving the iterative scheme
$$ \begin{cases} u^{k+1} = \arg\min_u \frac{\mu}{2}\|u - f\|_2^2 + \frac{\lambda}{2}\|d^k - D^v u - b^k\|_2^2 \\ d^{k+1} = \arg\min_d |d| + \frac{\lambda}{2}\|d - D^v u^{k+1} - b^k\|_2^2 \end{cases} \tag{22}$$
The subproblem in $d$ can be solved using the shrink operation, namely
$$ d_j^{k+1} = \mathrm{shrink}\!\left(D^v u_j^{k+1} + b_j^k,\ \frac{1}{\lambda}\right) \tag{23}$$
where $\mathrm{shrink}(x, \gamma) = \frac{x}{|x|}\max(|x| - \gamma, 0)$.
The subproblem in $u$ satisfies the optimality condition
$$ \left(\mu I + \lambda (D^v)^T D^v\right) u^{k+1} = \mu f + \lambda (D^v)^T \left(d^k - b^k\right) \tag{24}$$
Because Equation (24) is strictly diagonally dominant, it can be solved using the Gauss–Seidel method. Let $u_{i,j}^{k+1} = G_{i,j}^k$, where
$$ G_{i,j}^k = \frac{\lambda}{\mu + 4\lambda}\left( u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k + d_{x,i-1,j}^k - d_{x,i,j}^k + d_{y,i,j-1}^k - d_{y,i,j}^k - b_{x,i-1,j}^k + b_{x,i,j}^k - b_{y,i,j-1}^k + b_{y,i,j}^k \right) + \frac{\mu}{\mu + 4\lambda} f_{i,j} \tag{25}$$
The flow of the algorithm is shown in Algorithm 1.

Algorithm 1: Split Bregman algorithm for solving the anisotropic FOTV-regularization recovery model
Initialization: $u^0 = f$, $d^0 = 0$, $b^0 = 0$
While $\|u^k - u^{k-1}\|_2 \ge tol$:
    $u^{k+1} = G^k$
    $d^{k+1} = \mathrm{shrink}\left(D^v u^{k+1} + b^k,\ 1/\lambda\right)$
    $b^{k+1} = b^k + \left(D^v u^{k+1} - d^{k+1}\right)$
End while
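The sketch below illustrates Algorithm 1 in Python. It is not the authors' implementation: for compactness, the $u$-subproblem is solved exactly as a Sylvester equation derived from the normal equations of Equation (22), instead of the Gauss–Seidel sweep of Equation (25), and all parameter values are illustrative assumptions.

```python
# Illustrative split Bregman loop for anisotropic FOTV denoising.
import numpy as np
from scipy.linalg import toeplitz, solve_sylvester

def gl_matrix(v: float, n: int) -> np.ndarray:
    """Lower-triangular Toeplitz matrix D of Equation (13)."""
    w = np.empty(n)
    w[0] = 1.0
    for m in range(1, n):
        w[m] = (1.0 - (v + 1.0) / m) * w[m - 1]
    return toeplitz(w, np.zeros(n))

def shrink(x: np.ndarray, gamma: float) -> np.ndarray:
    """Soft-thresholding operator of Equation (23)."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def fotv_split_bregman(f, v=0.5, mu=20.0, lam=1.0, tol=1e-4, max_iter=200):
    n = f.shape[0]                               # assumes a square image
    D = gl_matrix(v, n)
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    A = lam * D @ D.T + 0.5 * mu * np.eye(n)     # same matrix on both sides
    for _ in range(max_iter):
        u_prev = u
        q = mu * f + lam * (dx - bx) @ D.T + lam * D @ (dy - by)
        u = solve_sylvester(A, A, q)             # exact u-subproblem solve
        gx, gy = u @ D, D.T @ u                  # fractional differences, Eq. (12)
        dx = shrink(gx + bx, 1.0 / lam)          # d-subproblem, Eq. (23)
        dy = shrink(gy + by, 1.0 / lam)
        bx = bx + gx - dx                        # Bregman variable updates
        by = by + gy - dy
        if np.linalg.norm(u - u_prev) < tol * np.linalg.norm(f):
            break
    return u
```

The exact solve works because the normal equations of Equation (22) take the Sylvester form $(\lambda DD^T + \tfrac{\mu}{2}I)u + u(\lambda DD^T + \tfrac{\mu}{2}I) = q$ under the matrix approximation of Equation (12); for large images, the Gauss–Seidel sweep of Equation (25) would be the cheaper choice.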

3. GM-APD Depth Image FOTV Restoration Algorithm

3.1. Depth-Image Extraction from Low SBR and Few-Frame Data Using a Spatial-Domain Differential Peak-Picking Method

Assuming that the noise photons in the GM-APD lidar return follow a uniform distribution and that, in the absence of a target in the range gate, only noise photons trigger the detector, Equation (26) gives the probability that the single-photon detector is triggered in the jth bin within the range gate:
$$ P_j = \left[\prod_{i=1}^{j-1} \exp\left(-N(i)\right)\right]\left[1 - \exp\left(-N(j)\right)\right] \tag{26}$$
where $N(i)$ is the number of noise photons in the $i$th bin.
Figure 1 shows the probability density curve of noise-photon triggering, i.e., the probability that the photons in each time interval of the GM-APD lidar range gate trigger the GM-APD to generate an avalanche event and output an avalanche current.
In Figure 1, the range gate width of the GM-APD lidar is 70 ns and the time resolution of the GM-APD is 1 ns, so the gate is divided into 70 bins of 1 ns width; the vertical coordinate of each bin represents its probability of being triggered.
As Equation (27) shows, the triggering probability of the GM-APD is a monotonically decreasing function of the bin index:
$$ \Delta_1 = P(j+1) - P(j) = \left[\prod_{i=1}^{j} \exp\left(-N(i)\right)\right]\left[1 - \exp\left(-N(j+1)\right)\right] - \left[\prod_{i=1}^{j-1} \exp\left(-N(i)\right)\right]\left[1 - \exp\left(-N(j)\right)\right] \tag{27}$$
Because the noise photons are uniformly distributed within the range gate,
$$ \exp\left(-N(j)\right) = \exp\left(-N(1)\right) = \exp\left(-N(2)\right) = \cdots = \exp\left(-N(j+m)\right), \quad m = 1, 2, \ldots \tag{28}$$
Substituting Equation (28) into Equation (27) yields
$$ \Delta_1 = -\left[\prod_{i=1}^{j-1} \exp\left(-N(i)\right)\right]\left[1 - \exp\left(-N(j)\right)\right]^2 < 0 \tag{29}$$
According to Equation (29), when the GM-APD is triggered only by noise, its triggering probability curve decays approximately exponentially: it decreases rapidly at first and more gradually in later stages.
Setting a target at the 20th bin, the triggering probability for each time interval becomes
$$ P(j) = \left[\prod_{i=1}^{j-1} \exp\left(-\left(S(i) + N(i)\right)\right)\right]\left[1 - \exp\left(-\left(S(j) + N(j)\right)\right)\right] \tag{30}$$
where S(i) and N(i) denote the number of signal and noise photons in the ith bin, respectively. Figure 2 shows the triggering probability density curve of the GM-APD under this condition.
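For illustration, the trigger-probability curves of Equations (26) and (30) can be reproduced numerically; the per-bin photon numbers below are assumed values, not those used in the paper:

```python
# First-photon trigger probabilities per bin, Equations (26) and (30).
import numpy as np

bins = 70                       # 70 ns gate, 1 ns bins (Section 3.1)
N = np.full(bins, 0.05)         # assumed uniform noise photons per bin
S = np.zeros(bins)
S[19] = 0.5                     # assumed signal photons in the 20th bin

rate = S + N
no_trigger = np.exp(-np.cumsum(rate))              # no avalanche through bin j
prefix = np.concatenate(([1.0], no_trigger[:-1]))  # product over bins 1..j-1
P = prefix * (1.0 - np.exp(-rate))                 # Eq. (30); Eq. (26) when S = 0

signal_bin = np.argmax(np.diff(P)) + 1             # P rises fastest at the signal
print(signal_bin)                                  # -> 19 (0-indexed), the 20th bin
```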
The triggering probability density curve of the GM-APD changes from an exponentially decaying curve to a locally convex bump near the bins containing the signal photons (the extent of which is determined by the laser pulse width). When a signal photon arrives at time $t$, $\Delta_1 = P(t) - P(t-1) > 0$. The derivative of the GM-APD trigger probability density curve can therefore be calculated; its distribution is shown in Figure 3.
In Figure 3, the vertical coordinate is the value obtained by differentiating the frequency histogram of triggers within the range gate.
When the GM-APD performs cumulative detection, the trigger-frequency histogram within the range gate follows the same distribution as the probability density curve. The target position can then be expressed as
$$ d = \arg\max\left(\mathrm{diff}\left(\mathrm{histogram}\right)\right) \tag{31}$$
where diff represents the first-order difference. During detection, the target echo signal occupies multiple bins, but the triggering probability is highest at the arrival time of the first photon, where the first-order difference is largest. This method therefore reduces the ranging error caused by the laser pulse width.
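A minimal sketch of this extraction step, assuming the per-frame trigger events have already been binned into an array (the array layout and names are illustrative):

```python
# Depth extraction via Equation (31). `events` is an assumed (frames, H, W)
# integer array of per-frame trigger bins, with -1 meaning no avalanche
# occurred in that frame.
import numpy as np

def extract_depth(events: np.ndarray, bins: int,
                  bin_width_m: float = 0.15) -> np.ndarray:
    """Per pixel: histogram the triggers over frames, then take
    d = argmax(diff(histogram)) as in Equation (31)."""
    _, H, W = events.shape
    depth = np.zeros((H, W))
    for r in range(H):
        for c in range(W):
            hits = events[:, r, c]
            hist = np.bincount(hits[hits >= 0], minlength=bins)
            j = np.argmax(np.diff(hist)) + 1   # bin with the sharpest rise
            depth[r, c] = j * bin_width_m      # 1 ns bin ~ 0.15 m in range
    return depth
```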

3.2. FOTV-Regularization Recovery Algorithm

Because a target typically occupies multiple pixels in the focal plane of a detector, the recovery accuracy can be enhanced by introducing fractional-order operators to establish connections between multiple pixels. However, when the depth image of a target contains a large amount of noise, blindly establishing connections between pixels can increase the influence of noise on the current pixel. To address this problem, noise judgment is introduced to retain the depth values of target pixels and perform only fractional-order recovery on the noise points in the depth image. Furthermore, when multiple distance values exist in the target region, the establishment of too many connections between pixels may reduce the accuracy of the depth value of the current pixel. In this study, a 5 × 5 neighborhood pixel calibration method was used to strengthen the connections of the current pixel with the surrounding pixels while decreasing the influence of distant pixels with different depth values, thereby improving the recovery accuracy.
The recovery process involves the following steps:
Step one. Identify the noise points. Generally, points with maximum or minimum values represent noise in an image, as an image is typically composed of pixels with similar and continuous values. However, certain extreme points may be edge points instead of noise points. To accurately detect noise and retain edge information in an image, a fractional-order gradient judgment is introduced. Figure 4 shows the neighborhood pixels of point $f(x,y)$.
To determine whether $f(x,y)$ is a noise point, the fractional-order gradients in eight directions around the point must be calculated. Let $D_\alpha^v$ be the gradient of $f(x,y)$ in the $\alpha$ direction, where $\alpha$ = 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°, and $v$ is the fractional order. The directional gradients are calculated as follows:
$$ \begin{aligned} D_{0}^v &= a_0 f(x,y) + a_1 f(x, y+1) + a_2 f(x, y+2) \\ D_{45}^v &= a_0 f(x,y) + a_1 f(x-1, y+1) + a_2 f(x-2, y+2) \\ D_{90}^v &= a_0 f(x,y) + a_1 f(x-1, y) + a_2 f(x-2, y) \\ D_{135}^v &= a_0 f(x,y) + a_1 f(x-1, y-1) + a_2 f(x-2, y-2) \\ D_{180}^v &= a_0 f(x,y) + a_1 f(x, y-1) + a_2 f(x, y-2) \\ D_{225}^v &= a_0 f(x,y) + a_1 f(x+1, y-1) + a_2 f(x+2, y-2) \\ D_{270}^v &= a_0 f(x,y) + a_1 f(x+1, y) + a_2 f(x+2, y) \\ D_{315}^v &= a_0 f(x,y) + a_1 f(x+1, y+1) + a_2 f(x+2, y+2) \end{aligned} \tag{32}$$
where
$$ a_0 = 1, \quad a_1 = -v, \quad a_2 = \frac{v(v-1)}{2} \tag{33}$$
If the derivative values in all eight directions are greater than a given threshold T, then the current pixel is a noise point; otherwise, it is a signal point.
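A minimal sketch of this Step-one noise test for an interior pixel follows; the gradients implement Equation (32) with the coefficients of Equation (33), while the threshold $T$, the order $v$, and the use of the gradient magnitude are assumptions:

```python
# Eight-direction fractional-gradient noise test (interior pixels only).
import numpy as np

# unit steps for the eight directions 0, 45, 90, ..., 315 degrees
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def is_noise_point(f: np.ndarray, x: int, y: int, v: float, T: float) -> bool:
    a0, a1, a2 = 1.0, -v, v * (v - 1.0) / 2.0        # Equation (33)
    for dx, dy in DIRS:
        g = (a0 * f[x, y]
             + a1 * f[x + dx, y + dy]
             + a2 * f[x + 2 * dx, y + 2 * dy])
        if abs(g) <= T:          # a consistent direction exists -> signal point
            return False
    return True                  # all eight directional gradients exceed T
```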
Step two. Compute the local fractional-order gradient operator $D_{loc}^v$.
Fractional-order local TV regularization and fractional-order global TV regularization are commonly used image-recovery methods that differ in the range of image structure they consider during recovery. The former operates within a small window and thus preserves image details better; the latter considers the structure of the entire image and produces better overall smoothing, but it may lose some details.
Therefore, because the focus when recovering depth images is on image details such as the depth value information, the local fractional-order gradient operator matrix $D_{loc}^v$ is selected as follows:
$$ D_{loc}^v = \begin{bmatrix} w_0^v & 0 & 0 & 0 & 0 \\ w_1^v & w_0^v & 0 & 0 & 0 \\ w_2^v & w_1^v & w_0^v & 0 & 0 \\ w_3^v & w_2^v & w_1^v & w_0^v & 0 \\ w_4^v & w_3^v & w_2^v & w_1^v & w_0^v \end{bmatrix} \tag{34}$$
Step three. Solve the recovery model with anisotropic FOTV regularization using the split Bregman algorithm.
Figure 5 shows the process flow of FOTV recovery for depth images.

4. Simulation and Experimental Verification

Computer simulations and experiments were conducted to evaluate and analyze the performance of the devised approach. The evaluation metrics were K, PSNR, and SSIM.

4.1. Evaluation Metrics

4.1.1. K

The target reduction degree is denoted as K; the higher its value, the better the depth image recovery. It is defined as follows:
$$ m = \begin{cases} 1, & |d - d_s| < d_b \\ 0, & |d - d_s| \ge d_b \end{cases} \tag{35}$$
$$ K = \frac{m}{n} \tag{36}$$
where $d$ is the reconstructed distance value, $d_s$ is the standard (ground-truth) distance value, $d_b$ is the allowed distance error, $n$ is the total number of target pixels, and $m$ is the number of pixels whose distance error is within the allowed value. K thus measures the ratio of correctly recovered target pixels to the total number of target pixels; it is computed by keeping the target recovery rate to four decimal places and taking the integer part when multiplied by the total number of target pixels.
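A minimal sketch of computing K from Equations (35) and (36), with illustrative array names and an assumed allowed error $d_b$:

```python
# Target reduction degree K of Equations (35)-(36).
import numpy as np

def target_reduction_degree(d_rec, d_true, target_mask, d_b=0.5):
    ok = np.abs(d_rec - d_true) < d_b          # per-pixel test, Equation (35)
    m = np.count_nonzero(ok & target_mask)     # correctly recovered target pixels
    n = np.count_nonzero(target_mask)          # total target pixels
    return m / n                               # K = m / n, Equation (36)
```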

4.1.2. PSNR

The PSNR evaluates the difference between two corresponding pixels in two images based on the mean square error. The distortion rate of the restored image is evaluated using a standard image as the reference. A higher PSNR corresponds to higher fidelity of the image. The PSNR can be expressed as follows:
$$ PSNR = 10 \log_{10}\left[\frac{Max^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(x(i,j) - y(i,j)\right)^2}\right] \tag{37}$$
where $x(i,j)$ represents the depth image data with noise, $y(i,j)$ represents the denoised depth image data, and $Max$ denotes the maximum possible value of the image data.

4.1.3. SSIM

The SSIM is an evaluation metric used to quantify the structural similarity between two images. It can measure the distortion level of an image or the similarity between two images; the performance of an algorithm is evaluated using a standard image as the reference. A high SSIM corresponds to high similarity between the two images and indicates that the algorithm effectively denoises and restores the image while maintaining fidelity. The SSIM can be expressed as follows:
$$ SSIM(x,y) = \frac{\left(2\mu_x \mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)} \tag{38}$$
The SSIM is defined based on three comparison measurements between the $x$ and $y$ samples: luminance ($l$), contrast ($c$), and structure ($s$):
$$ l(x,y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \tag{39}$$
$$ c(x,y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \tag{40}$$
$$ s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3} \tag{41}$$
where
$x$: reference image;
$y$: processed image;
$\mu_x$: mean of the pixel values of $x$;
$\mu_y$: mean of the pixel values of $y$;
$\sigma_x^2$: variance of $x$;
$\sigma_y^2$: variance of $y$;
$\sigma_{xy}$: covariance of $x$ and $y$;
$C_1 = (K_1 L)^2$, $C_2 = (K_2 L)^2$, $C_3 = C_2 / 2$, with $K_1 \ll 1$ and $K_2 \ll 1$;
$L$: dynamic range of the pixel values (usually $2^{\,\text{bits per pixel}} - 1$).
Thus, the SSIM is a weighted combination of $l$, $c$, and $s$:
$$ SSIM(x,y) = l(x,y)^{\alpha} \cdot c(x,y)^{\beta} \cdot s(x,y)^{\gamma} \tag{42}$$
When $\alpha = \beta = \gamma = 1$, Equation (42) reduces to Equation (38).
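A minimal sketch of the PSNR of Equation (37) and a single-window (global) evaluation of the SSIM of Equation (38) follows; practical SSIM implementations use local windows, so this illustrates the formulas only, and the $K_1$, $K_2$ defaults are assumptions:

```python
# PSNR (Eq. 37) and a global, single-window SSIM (Eq. 38).
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float) -> float:
    mse = np.mean((x - y) ** 2)                 # mean square error
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray, L: float,
                K1: float = 0.01, K2: float = 0.03) -> float:
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```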

4.2. Simulation Analysis

4.2.1. Depth Image Extraction

The Monte Carlo method [34] was used to simulate the GM-APD lidar echo data. The system parameters were set as follows: the imaging resolution of the GM-APD detector was 64 × 64 pixels; the transmittances of the receiving and transmitting lenses were 0.8 and 0.9, respectively; the laser pulse energy was 10 nJ; the laser wavelength was 1064 nm; the target reflectivity was 0.1; and the receiving aperture was 50 mm. The time resolution of the detector was 1 ns, and the range gate width was set to 70 ns. A cup model was simulated, with the distance between the cup and the detection system set to 20 m, as shown in Figure 6.
In the depth image shown in Figure 6, the horizontal and vertical coordinates represent the pixel locations of the image, and the color values represent the actual distance of the target from the GM-APD camera.
The peak-picking method and devised spatial-domain differential peak-picking method were applied to process echo data with SBRs of 0.1, 0.11, and 0.2. Figure 7 shows the imaging results of the two methods for 20, 40, 60, 80, and 100 frames.
When the SBR is 0.2, both the peak-picking method and the devised algorithm can obtain the depth image of the cup, which becomes clearer as the number of statistical frames increases. For a given number of statistical frames, the depth image obtained by the devised algorithm is superior to that obtained by the peak-picking method. When the SBR is 0.1 or 0.11, the peak-picking method cannot obtain a clear depth image of the cup, whereas the devised algorithm can obtain a relatively clear one. The depth image quality was evaluated over 1000 Monte Carlo repetitions at each statistical frame number using the mean values of three metrics: the target reduction degree, PSNR, and SSIM, as shown in Figure 8, Figure 9 and Figure 10.
The devised algorithm outperforms the peak-picking method in terms of K and SSIM, and this advantage increases as the SBR decreases. Moreover, when there are more than 50 frames, the PSNR of the devised algorithm is significantly better than that of the peak-picking method.

4.2.2. Depth-Image Recovery Using the FOTV Method

1. Selection of the optimal fractional order for fractional calculus
To explore the influence of the fractional order on recovery performance for the same statistical frame number but different SBRs, the statistical frame number was set to 50, and the devised recovery algorithm was applied to simulated echo data with SBRs of 0.1, 0.11, and 0.2. For each case, 1000 Monte Carlo repetitions were carried out, and the mean values of the target reduction degree, PSNR, and SSIM were used to evaluate the depth image quality. The results are shown in Figure 11, Figure 12 and Figure 13.
The results show that the fractional order of the devised algorithm considerably affects the recovery of simulated echo data with different SBRs at the same statistical frame number, and the effect of the fractional order on the evaluation indices depends on the SBR. Table 1 shows the optimal orders corresponding to each evaluation index for SBRs of 0.1, 0.11, and 0.2.
For the same statistical frame number, the optimal orders for the different evaluation metrics differ across echo-signal SBRs, whereas the optimal orders are similar when the SBR is low.
To explore the influence of the fractional order on recovery performance for the same SBR but different statistical frame numbers, the SBR of the echo signal was set to 0.11, and the fractional order was set to 0.1, 0.5, 0.9, 1.3, and 1.7 in turn. Depth image recovery was performed on simulated echo data with statistical frame numbers of 10–100, and the mean values of the target reduction degree, PSNR, and SSIM over 1000 Monte Carlo repetitions were used to evaluate the recovered depth images. The results are shown in Figure 14, Figure 15 and Figure 16.
The fractional order of the devised algorithm considerably affects the recovery of simulated echo data with the same SBR, and the effect of the fractional order on the evaluation indices depends on the number of statistical frames. To clarify this, we analyzed the effect of the fractional order on the evaluation indices for data containing 40 and 70 frames; the results are summarized in Table 2.
The optimal orders corresponding to different evaluation metrics vary with the statistical frame numbers, and the optimal orders corresponding to K and SSIM are similar.
Overall, the optimal orders corresponding to the different evaluation metrics differ across echo data with different statistical frame numbers and SBRs. The target reduction degree (K) best reflects the target detection and identification capability of the lidar system, as it evaluates performance by comparing the detected target with the real target. Because K is the most important metric for the considered application, the fractional order yielding the highest K is selected as the optimal order when recovering each type of echo data.
2. FOTV-recovery algorithm
To evaluate the recovery performance of the devised algorithm on low-SBR, few-frame target echo data, the devised algorithm and the TV-recovery algorithm were applied to simulated echo data with an SBR of 0.1 and 30, 50, and 70 statistical frames. The recovery results are shown in Figure 17.
The target area is visibly smoother after recovery by the devised algorithm. To evaluate the image quality quantitatively, 1000 Monte Carlo repetitions were conducted at each statistical frame number; the mean values of each metric are shown in Table 3.
Both the devised FOTV-recovery algorithm and the traditional TV-recovery algorithm improve the K, PSNR, and SSIM metrics. With 30 statistical frames, the K, PSNR, and SSIM values of FOTV recovery are 10.1%, 14.4%, and 2.99% better, respectively, than those of TV recovery.
To verify the level of advancement of the algorithm, it was compared with the few-frame detection algorithm for GM-APD lidar in [8], as shown in Table 4.
The lower the SBR and the fewer the statistical frames, the more difficult target recovery becomes. As Table 4 shows, with an SBR of 0.1 and 100 statistical frames, the devised algorithm achieves better K, PSNR, and SSIM values than [8] does with an SBR of 0.12 and 200 statistical frames, indicating that the performance of the devised algorithm improves on [8] by a certain margin.

4.3. Experimental Verification

4.3.1. Experimental Platform

A 64 × 64 array GM-APD lidar system was established to validate the performance of the devised algorithm, as shown in Figure 18. The lidar system consisted of a 1064 nm fiber laser, a 64 × 64 array GM-APD, and transmission and reception optical paths with a transmittance of 0.9. The fields of view for transmission and reception were 0.8° × 0.8°. The maximum output energy of the laser was 100 µJ, the pulse width was 5 ns, and the repetition frequency was 10 kHz.
The laser emits a beam that is transmitted through the optical system and illuminates the target area. The feedback signal from the laser triggers the GM-APD to begin timing. The laser beam is diffusely reflected from the target surface and collected by the optical system onto the focal plane of the GM-APD, which stops the timing process. The readout circuitry completes the time-to-digital conversion of the laser photon flight time and transfers the data to the host computer. The host computer then extracts and denoises the depth image of the target and displays it.

4.3.2. Outdoor Experiment

To validate the recovery performance of the devised algorithm, imaging experiments were conducted on a residential building at a distance of 1.531 km under strong sunlight. The target scene is shown in Figure 19a. To obtain the ideal target depth image, the same target area was imaged and detected at night using the peak-picking method with 5000 frames to accumulate the data. The resulting image was used as the ideal target depth image of this scene, as shown in Figure 19b.
The SBR of the daytime imaging experiment was calculated to be 0.1 by dividing the average number of target photons detected by each pixel within the range gate by the total number of photons within the gate. With 200 statistical frames, the target depth image was reconstructed using the peak-picking method and the devised spatial-domain differential peak-picking method; the results are shown in Figure 20.
The devised algorithm obtained a clear contour image of the target. The quality of the reconstructed depth image was quantitatively evaluated using various metrics, as summarized in Table 5.
The devised method outperforms the classical peak-picking method, with improvements of 188%, 23.6%, and 87.9% in the K, PSNR, and SSIM values, respectively.
The depth image obtained by the spatial-domain differential peak-picking method was denoised through both TV recovery and FOTV recovery. Figure 21 shows the denoised results.
The devised FOTV-recovery algorithm removed more of the noise present in the target depth image and produced a smoother target area than the TV-recovery approach. The quality of the reconstructed depth image was quantitatively evaluated using the same metrics, as summarized in Table 6.
Compared with the spatial-domain differential peak-picking method, TV recovery improves the SSIM by 0.28% but decreases K by 23.7%. This decrease occurs because TV recovery corrects the noise points in the depth image, but the true target points are also affected by the noise and thus deviate from their true depth values. In contrast, the devised FOTV-recovery method achieves improvements of 34.6%, 3.5%, and 7.2% in the K, PSNR, and SSIM values, respectively. These results highlight the potential of the devised FOTV-recovery method for effectively recovering GM-APD lidar depth images.

5. Discussion

This paper devised a FOTV-based method to restore GM-APD lidar depth images in low SBR and limited statistical frame conditions. First, the target depth image was extracted, and then, FOTV-based recovery was performed. The simulation results show that compared with the peak-picking method, the devised spatial-domain differential peak-picking method significantly improved the K, PSNR, and SSIM metrics when the SBR was 0.1 and the number of statistical frames was 30. Both the FOTV- and TV-recovery methods enhanced these metrics, but compared with TV recovery, FOTV recovery achieved improvements of 10.1%, 14.4%, and 2.99% in the K, PSNR, and SSIM metrics, respectively. The experimental results demonstrate the effectiveness of the devised method: when the SBR was 0.1 and the number of statistical frames was 200, the K, PSNR, and SSIM values of the spatial-domain differential peak-picking method were 2.88, 1.236, and 1.87 times better than those of the peak-picking method, respectively. Combined with FOTV recovery, the devised method achieved improvements of 76.6%, 3.5%, and 6.9% in the K, PSNR, and SSIM metrics, respectively, compared with those when it was combined with TV recovery. These results indicate that the devised FOTV regularization method effectively restores, denoises, and preserves the fidelity of GM-APD depth images under low SBR and limited statistical frame conditions.
Based on the research presented in this paper, there are several potential directions for future work. First, the depth image-recovery algorithm can be improved further: the current method improves the quality of GM-APD lidar depth images to some extent, but some noise and artifacts remain, and improving the quality of depth images acquired under low-SBR conditions would be of great value. Second, research on optimizing the extraction of the target depth image under low-SBR, few-frame conditions, for example, by combining motion-compensation techniques or exploiting prior knowledge of the scene, could further improve the restoration results. In addition, since the application scenarios of GM-APD lidar demand real-time performance, improving the data-processing efficiency of the algorithm is also an important direction for future research.

Author Contributions

Conceptualization, D.X., K.Y. and X.W. (Xuyang Wei); methodology, D.X. and X.W. (Xinjian Wang); software, D.X. and X.W. (Xinjian Wang); formal analysis, D.X., X.W. (Xinjian Wang), K.Y. and X.W. (Xuyang Wei); data curation, K.Y., X.W. (Xuyang Wei) and T.H.; writing—original draft preparation, D.X. and X.W. (Xinjian Wang); writing—review and editing, K.Y., X.W. (Xuyang Wei), X.L. and C.W.; funding acquisition, X.L. and C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2022YFC3803700.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shi, X.; Lu, W.; Sun, J.; Ge, W.; Zhang, H.; Li, S. Suppressing the influence of GM-APD coherent lidar saturation by signal modulation. Optik 2023, 275, 170619.
2. Ding, Y.; Qu, Y.; Sun, J.; Du, D.; Jiang, Y.; Zhang, H. Long-distance multi-vehicle detection at night based on Gm-APD lidar. Remote Sens. 2022, 14, 3553.
3. Dai, J.; Li, S.; Gao, F.; Cao, H.; Guo, G.; Liu, X.; Wang, Q. Performance analysis of the photon-counting lidar based on the statistical property. In Signal and Information Processing, Networking and Computers; Sun, J., Wang, Y., Huo, M., Xu, L., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2023; p. 917.
4. Li, Y.; Zhou, J.; Huang, F.; Liu, L. Sub-pixel extraction of laser stripe center using an improved gray-gravity method. Sensors 2017, 17, 814.
5. Liu, D.; Sun, J.; Lu, W.; Li, S.; Zhou, X. 3D reconstruction of the dynamic scene with high-speed targets for GM-APD lidar. Opt. Laser Technol. 2023, 161, 109114.
6. Zhang, Y.; Li, S.; Sun, J.; Liu, D.; Zhang, X.; Yang, X.; Zhou, X. Dual-parameter estimation algorithm for Gm-APD lidar depth imaging through smoke. Measurement 2022, 196, 111269.
7. Liu, D.; Sun, J.; Gao, S.; Ma, L.; Jiang, P.; Guo, S.; Zhou, X. Single-parameter estimation construction algorithm for Gm-APD lidar imaging through fog. Opt. Commun. 2021, 482, 126558.
8. Wang, M.; Sun, J.; Li, S.; Lu, W.; Zhou, X.; Zhang, H. A photon-number-based systematic algorithm for range image recovery of GM-APD lidar under few-frames detection. Infrared Phys. Technol. 2022, 125, 104267.
9. Zhang, Y.; Liu, Z.; Huang, M.; Zhu, Q.; Yang, B. Multi-resolution depth image restoration. Mach. Vis. Appl. 2021, 32, 65.
10. Kang, Y.; Xue, R.; Wang, X.; Zhang, T.; Meng, F.; Li, L.; Zhao, W. High-resolution depth imaging with a small-scale SPAD array based on the temporal-spatial filter and intensity image guidance. Opt. Express 2022, 30, 33994–34011.
11. Ibrahim, M.M.; Liu, Q. Optimized color-guided filter for depth image denoising. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8568–8572.
12. Chen, L.; Lin, H.; Li, S. Depth image enhancement for Kinect using region growing and bilateral filter. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012), Tsukuba, Japan, 11–15 November 2012; pp. 3070–3073.
13. Chen, D.; Wu, J.; Zhu, X.; Jia, T. Depth image restoration based on bimodal joint sequential filling. Infrared Phys. Technol. 2021, 116, 103663.
14. Liu, W.; Chen, X.; Yang, J.; Wu, Q. Robust color guided depth map restoration. IEEE Trans. Image Process. 2017, 26, 315–327.
15. Zhang, Y.; Feng, Y.; Liu, X.; Zhai, D. Color-guided depth image recovery with adaptive data fidelity and transferred graph Laplacian regularization. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 320–333.
16. Jiang, B.; Lu, Y.; Wang, J.; Lu, G.; Zhang, D. Deep image denoising with adaptive priors. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5124–5136.
17. Tan, Z.; Ou, J.; Zhang, J.; He, J. A laminar denoising algorithm for depth image. Acta Opt. Sin. 2017, 37, 0510002.
18. Ibrahim, M.M.; Liu, Q.; Yang, Y. Adaptive colour-guided non-local means algorithm for compound noise reduction of depth maps. IET Image Process. 2020, 14, 2768–2779.
19. Chen, S.; Halimi, A.; Ren, X.; Su, X.; McCarthy, A.; McLaughlin, S.; Buller, G. Learning non-local spatial correlations to restore sparse 3D single-photon data. IEEE Trans. Image Process. 2020, 29, 3119–3131.
20. Liu, Q.; Gao, X.; He, L.; Lu, W. Single image dehazing with depth-aware non-local total variation regularization. IEEE Trans. Image Process. 2018, 27, 5178–5191.
21. Zhu, J.; Wei, J.; Lv, H.; Hao, B. Truncated fractional-order total variation for image denoising under Cauchy noise. Axioms 2022, 11, 101.
22. Zhang, Y.; Liu, T.; Yang, F.; Yang, Q. A study of adaptive fractional-order total variational medical image denoising. Fractal Fract. 2022, 6, 508.
23. Chan, R.; Liang, H. Truncated fractional-order total variation model for image restoration. J. Oper. Res. Soc. China 2019, 7, 561–578.
24. Chen, D.; Sun, S.; Zhang, C. Fractional-order TV-L2 model for image denoising. Cent. Eur. J. Phys. 2013, 11, 1414–1422.
25. Wang, Y.; Wang, Z. Image denoising method based on variable exponential fractional-integer-order total variation and tight frame sparse regularization. IET Image Process. 2021, 15, 101–114.
26. Bai, J.; Feng, X.-C. Image decomposition and denoising using fractional-order partial differential equations. IET Image Process. 2020, 14, 3471–3480.
27. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
28. Zhang, X. Relationship between integer order systems and fractional order system and its two applications. IEEE/CAA J. Autom. Sin. 2018, 5, 639–643.
29. Xu, L.; Huang, G.; Chen, Q.; Qin, H.; Men, T.; Pu, Y. An improved method for image denoising based on fractional-order integration. Front. Inf. Technol. Electron. Eng. 2020, 21, 1485–1493.
30. Zhang, X.; Liu, R.; Ren, J.; Gui, Q. Adaptive fractional image enhancement algorithm based on rough set and particle swarm optimization. Fractal Fract. 2022, 6, 100.
31. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
32. Micchelli, C.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009.
33. Zhao, M.; Wang, Q.; Ning, J. A region fusion based split Bregman method for TV denoising algorithm. Multimed. Tools Appl. 2021, 80, 15875–15900.
34. O'Brien, M.E.; Fouche, D.G. Simulation of 3D laser radar systems. Linc. Lab. J. 2005, 15, 37–60.
Figure 1. Probability curve of triggered noise photons.
Figure 2. Probability density curves of signal and noise triggering for the GM-APD.
Figure 3. Derivative curves of the probability density functions of signal and noise triggering.
Figure 4. Neighborhood pixels of point $f(x,y)$ within a 5 × 5 pixel window.
Figure 5. Process flow of FOTV recovery for depth images.
Figure 6. Simulated depth image of the cup model.
Figure 7. Imaging results for different SBRs and statistical frame numbers. (a) SBR = 0.2; (b) SBR = 0.11; (c) SBR = 0.1.
Figure 8. Target reduction degree (K) curves of the peak-picking method and the devised method.
Figure 9. Structural similarity index measure (SSIM) curves of the peak-picking method and the devised method.
Figure 10. Peak signal-to-noise ratio (PSNR) curves of the peak-picking method and the devised method.
Figure 11. Target reduction degree (K) curves for different fractional orders under different SBR conditions.
Figure 12. Structural similarity index measure (SSIM) curves for different fractional orders under different SBR conditions.
Figure 13. Peak signal-to-noise ratio (PSNR) curves for different fractional orders under different SBR conditions.
Figure 14. Target reduction degree (K) curves for different fractional orders under different frame-number conditions.
Figure 15. Structural similarity index measure (SSIM) curves for different fractional orders under different frame-number conditions.
Figure 16. Peak signal-to-noise ratio (PSNR) curves for different fractional orders under different frame-number conditions.
Figure 17. Denoised images for different statistical frame numbers.
Figure 18. Schematic of the GM-APD lidar imaging principle.
Figure 19. Imaging experiment: (a) target scene; (b) ideal target depth image.
Figure 20. Results of (a) the peak-picking method and (b) the devised reconstruction algorithm.
Figure 21. Recovery results based on (a) TV recovery and (b) FOTV recovery.
Table 1. Optimal orders for different evaluation indexes and different SBRs.

SBR                  |  0.1              |  0.11             |  0.2
Evaluation metrics   |  K    SSIM  PSNR  |  K    SSIM  PSNR  |  K    SSIM  PSNR
Optimal order        |  0.5  1.3   1.7   |  0.5  1.3   1.7   |  0.1  1.7   1.7
Table 2. Optimal orders for different statistical frame numbers.

Statistical Frame Numbers  |  Evaluation Metrics  |  Optimal Order
40                         |  K                   |  0.1
40                         |  SSIM                |  0.1
40                         |  PSNR                |  1.7
70                         |  K                   |  1.3
70                         |  SSIM                |  1.3
70                         |  PSNR                |  1.7
Table 3. Recovery results for different statistical frame numbers.

Number of frames  |             30              |             50              |             70
Algorithm         |  Original  TV       FOTV    |  Original  TV       FOTV    |  Original  TV       FOTV
K                 |  0.5000    0.5237   0.5768  |  0.6885    0.7277   0.8655  |  0.7612    0.7905   0.9232
PSNR              |  19.6235   20.2623  23.1700 |  22.4426   23.3513  28.4516 |  24.1864   25.0615  29.8380
SSIM              |  0.9423    0.9463   0.9746  |  0.9744    0.9774   0.9942  |  0.9839    0.9861   0.9960
Table 4. Comparison between [8] and the algorithm in this paper.

Conditions  |  [8]    |  Ours
SBR         |  0.12   |  0.1
Frames      |  200    |  100
K           |  0.95   |  0.9777
PSNR        |  20.83  |  33.3639
SSIM        |  0.940  |  0.9982
Table 5. Evaluation metrics for the peak-picking method and the spatial-domain differential peak-picking method.

Evaluation Metrics  |  Peak-Picking Method  |  Spatial-Domain Differential Peak-Picking Method
K                   |  0.1058               |  0.3051
PSNR                |  14.0479              |  17.3686
SSIM                |  0.4065               |  0.7637
Table 6. Evaluation metrics for the reconstructed depth image.

Evaluation Metric  |  TV Recovery  |  FOTV Recovery
K                  |  0.2327       |  0.4109
PSNR               |  17.3441      |  17.9471
SSIM               |  0.7659       |  0.8186
