Article

Adaptive Optics Retinal Image Restoration Using Total Variation with Overlapping Group Sparsity

School of Science, Dalian Maritime University, Dalian 116026, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(5), 660; https://doi.org/10.3390/sym17050660
Submission received: 20 March 2025 / Revised: 15 April 2025 / Accepted: 24 April 2025 / Published: 26 April 2025
(This article belongs to the Special Issue Computational Mathematics and Its Applications in Numerical Analysis)

Abstract

Adaptive optics (AO)-corrected flood illumination retinal imaging is widely used for investigating both structural and functional aspects of the retina. Given the inherently low contrast of the raw retinal images, image restoration is necessary. Total variation (TV) regularization is an efficient regularization technique for AO retinal image restoration. However, a main shortcoming of TV regularization is its tendency to produce staircase artifacts, particularly in smooth regions of the image. To overcome this drawback, a new image restoration model is proposed for AO retinal images. This model uses overlapping group sparse total variation (OGSTV) as the regularization term. Due to the structural characteristics of AO retinal images, only partial information about the PSF is known. Consequently, we have to solve a more complicated myopic deconvolution problem. To address this computational challenge, we propose an ADMM-MM-LAP method to solve the proposed model. First, we apply the alternating direction method of multipliers (ADMM) as the outer-layer optimization method. Then, appropriate algorithms are employed to solve the ADMM subproblems based on their inherent structures. Specifically, the majorization–minimization (MM) method handles the anisotropic OGSTV regularization component, while a modified version of the linearize and project (LAP) method addresses the tightly coupled subproblem. Theoretically, we establish the complexity analysis of the proposed method. Numerical results demonstrate that the proposed model outperforms the existing state-of-the-art TV model across several metrics.

1. Introduction

Adaptive optics (AO) is an optoelectronic technique that compensates for wavefront distortion in optical systems. This technique was initially developed in the field of astronomical observation [1,2] and later applied to retinal imaging. AO retinal imaging is a widely used technique to explore the structure and function of the living retina [3]. By compensating for wavefront distortions during real-time imaging, AO enables us to examine the architecture of the retina at a cellular scale. However, AO retinal images inherently contain information from both in-focus and out-of-focus planes, leading to resolution degradation in AO flood images. Therefore, accurate interpretation of these images requires advanced post-processing techniques. From a mathematical perspective, retinal imaging can be modeled as a three-dimensional (3D) convolution. When the real object demonstrates translational invariance along the optical axis, the original 3D model can be simplified to a two-dimensional (2D) representation.
In conventional 2D image restoration, the point spread function (PSF) is generally fully known or treated as completely unknown [4]. However, AO retinal image restoration presents a distinctive scenario where the true PSF (and the blurring matrix A) is only partially known. This intermediate regime between fully known and completely unknown PSFs is formally classified as a mildly blind or myopic deconvolution problem. Specifically, the global PSF of AO retinal images can be represented through a model that combines unknown parameters and a few PSFs. This combination tends to create coupled terms, making the problem more complex. Therefore, this poses a challenge to the existing methods for AO retinal image restoration problems [5]. The central difficulty lies in the inherent composite nature of the global PSF, which requires simultaneous parameter estimation during image restoration.
The mathematical formulation of myopic deconvolution for AO retinal imaging can be expressed as
$$ d = A(w)\,x + \eta, $$
where $d \in \mathbb{R}^{n^2}$ denotes the observed noisy image, $\eta \in \mathbb{R}^{n^2}$ represents additive noise, and $x \in \mathbb{R}^{n^2}$ represents the unknown true image. $A(w) \in \mathbb{R}^{n^2 \times n^2}$ is the blurring matrix defined by the PSF, and $w \in \mathbb{R}^p$ is an unknown parameter vector in the blurring matrix $A(w)$. $A(w)$ is constructed as a linear combination of $p$ known blurring matrices $A_j$, formulated as
$$ A(w) = \sum_{j=1}^{p} w_j A_j = w_1 A_1 + w_2 A_2 + \cdots + w_p A_p. $$
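To illustrate, here is a minimal sketch of applying $A(w)x$ as a weighted sum of $p$ periodic convolutions via the FFT. The function name and the convention that each PSF is image-sized and centered are assumptions, not part of the paper.

```python
import numpy as np

def apply_blur_combination(x, psfs, w):
    """Apply A(w)x = sum_j w_j A_j x, where each A_j acts as a periodic
    (circular) convolution with the PSF psfs[j]. Sketch: PSFs are assumed
    image-sized and centered, hence the ifftshift before the FFT."""
    X = np.fft.fft2(x)
    out = np.zeros(x.shape, dtype=complex)
    for wj, h in zip(w, psfs):
        # circular convolution of x with h, scaled by the weight w_j
        out += wj * np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(h)) * X)
    return out.real
```

With two delta-function PSFs and weights summing to 1, the operator reduces to the identity, which gives a quick sanity check.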
Regularization methods play a central role in solving such deconvolution problems. As a classical technique in image denoising, total variation (TV) regularization [6] effectively balances noise suppression and detail preservation through its edge-preserving diffusion mechanism. TV regularization has demonstrated robust performance across various noise environments, including Gaussian noise, Poisson noise, and other complex noise scenarios [7,8,9,10]. To address the AO retinal image denoising challenge, Chen et al. developed a novel model incorporating isotropic TV regularization [5]
$$ \min_{x \in C_x,\, w \in C_w} \Phi(x, w) = \frac{\mu}{2}\| A(w)x - d \|_2^2 + \sum_{i=1}^{n^2} \| D_i x \|_2, \quad \text{s.t.} \quad \sum_{j=1}^{p} w_j = 1, $$
where $C_x = \{ x \mid x_i \ge 0 \text{ for } i = 1, \dots, n^2 \}$ and $C_w = \{ w \mid w_j \ge 0 \text{ for } j = 1, \dots, p \}$. The term $\frac{\mu}{2}\| A(w)x - d \|_2^2$ is the data fidelity term, and $\sum_{i=1}^{n^2} \| D_i x \|_2$ is the isotropic TV regularizer, which is rotationally invariant. $\mu > 0$ is a weighting parameter balancing the two terms of the objective, and the weights $w_j$ satisfy the constraints above. The TV model achieves simultaneous image restoration and unknown parameter estimation.
However, the primary limitation of TV regularization lies in its tendency to produce staircase artifacts. To address this issue, numerous enhanced regularization methods have been developed, including fractional-order TV (FOTV) [11,12], higher-order TV (HOTV) [13,14,15], non-local TV (NLTV) [16,17], total generalized variation (TGV) [18,19], total variation with overlapping group sparsity (OGSTV) [20,21,22], and so on. Among them, Selesnick et al. proposed OGSTV [22] as an extension of TV. By exploiting the group sparsity of the signal derivatives and the correlation between neighboring gradient values, OGSTV retains image detail while effectively reducing staircase artifacts, and it has been successfully applied to image denoising in recent years.
Specifically, the OGSTV regularizer covers the image gradient field with overlapping $K \times K$ pixel groups ($K$ a fixed constant) centered at each pixel, so that every gradient value is jointly constrained by multiple overlapping groups. Mathematically, for the two-dimensional gradient fields $D^{(1)+} x$ and $D^{(2)+} x$, OGSTV constructs the corresponding overlapping group vectors and computes the sum of the $\ell_2$ norms of the gradients within each group, enforcing global consistency of local smoothness. The OGSTV regularizer has been widely applied to different noise removal scenarios. Liu et al. effectively eliminated salt-and-pepper noise using OGSTV [23]. Yin et al. enhanced impulse noise suppression by integrating OGSTV with an $L_0$-norm data fidelity term [24]. Li et al. achieved high-quality restoration of Gaussian noise-degraded images through a hybrid framework combining generalized nonconvex nonsmooth fourth-order total variation with OGS regularization [25].
Inspired by the AO retinal image model proposed by Chen et al. [5] and the superior performance of OGSTV in image restoration, we propose the following image myopic deconvolution model with an OGSTV regularization term:
$$ \min_{x \in C_x,\, w \in C_w} \Phi(x, w) = \frac{\mu}{2}\| A(w)x - d \|_2^2 + \varphi(x), \quad \text{s.t.} \quad \sum_{j=1}^{p} w_j = 1, $$
where φ ( x ) denotes the OGSTV regularization function. This is a non-convex model. To the best of our knowledge, there have been no attempts to apply OGSTV to AO retinal imaging to simultaneously perform image restoration and parameter estimation in the coupled problem.
Since the proposed model is non-convex, we adopt the alternating direction method of multipliers (ADMM) as the outer-layer optimization strategy. While ADMM was originally designed for 2-block convex optimization problems, recent theoretical advances have demonstrated its effectiveness on non-convex objective functions and non-convex sets [26,27,28,29,30,31]. This motivates our application of ADMM to the AO retinal image restoration task. Exploiting the special structure of the proposed model, we further employ the majorization–minimization (MM) method [24] and the linearize and project (LAP) method [32]: the MM method handles the OGSTV regularizer arising in each ADMM iteration, while the LAP method solves the tightly coupled subproblem.
The main contributions of this paper are outlined below:
(i)
We propose a new myopic deconvolution model with an OGSTV regularization term arising from AO retinal image restoration. The proposed model achieves superior image restoration and enhanced parameter estimation accuracy.
(ii)
To address the nonconvex nature of the model and the tight coupling between variables x and w , we propose an ADMM-MM-LAP method.
(iii)
We conduct comprehensive numerical experiments to evaluate the performance of our proposed OGSTV model.
The following outlines the organization of this paper. First, we propose an AO retinal imaging model with the OGSTV regularization term in Section 2. In Section 3, we present the iterative framework of ADMM and provide a concise overview of the MM method and the LAP method. Then, we propose the ADMM-MM-LAP method. Theoretically, we also analyze the computational complexity. Section 4 provides the numerical experiments, and Section 5 offers the concluding remarks.

2. AO Retinal Image Restoration Model with OGSTV Regularization

In this section, we construct a myopic deconvolution model with overlapping group sparse total variation (OGSTV) regularization. First, we introduce the discrete gradient operator $\nabla x$ and the overlapping group sparsity (OGS) function, and provide the definition of the OGSTV regularization term. Subsequently, we propose a new myopic deconvolution model with an OGSTV regularization term for AO retinal image restoration.

2.1. The First-Order Difference Operator $\nabla x$

First, we provide the definition of the first-order finite differences. For a gray image $x \in \mathbb{R}^{n^2}$ under periodic boundary conditions, the first-order difference operator $\nabla : \mathbb{R}^{n^2} \to \mathbb{R}^{n^2 \times 2}$ is defined as
$$ (\nabla x)_i = \big( (D^{(1)+} x)_i,\ (D^{(2)+} x)_i \big), $$
where $D^{(1)+} x$ and $D^{(2)+} x$ are the first-order forward differences of $x$ in the horizontal and vertical directions, respectively. Specifically, we have
$$ (D^{(1)+} x)_i := \begin{cases} x_{i+n} - x_i, & \text{if } 1 \le i \le n(n-1), \\ x_{\mathrm{mod}(i,n)} - x_i, & \text{otherwise}, \end{cases} $$
and
$$ (D^{(2)+} x)_i := \begin{cases} x_{i+1} - x_i, & \text{if } \mathrm{mod}(i,n) \ne 0, \\ x_{i-n+1} - x_i, & \text{otherwise}, \end{cases} $$
where $\mathrm{mod}(\cdot,\cdot)$ denotes the remainder function and $x_i$ stands for the $i$-th pixel of the image, $i = 1, \dots, n^2$.
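A minimal sketch of these forward differences under periodic boundary conditions, working directly on the 2D image array rather than the vectorized form used in the text:

```python
import numpy as np

def forward_diff(x):
    """First-order forward differences of a 2D image under periodic
    boundary conditions (sketch; the paper's definition acts on the
    column-stacked image, here we stay in 2D for clarity)."""
    d1 = np.roll(x, -1, axis=1) - x   # horizontal forward difference
    d2 = np.roll(x, -1, axis=0) - x   # vertical forward difference
    return d1, d2
```

Under periodic boundary conditions each difference field sums to zero over the image, which is a convenient sanity check.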

2.2. OGSTV and Proposed Model

For a given vector $y \in \mathbb{R}^n$, the $K$-point group is defined as
$$ y_{i,K} = [\, y(i), \dots, y(i+K-1) \,]^\top \in \mathbb{R}^K, $$
where $y_{i,K}$ represents a block of size $K$ starting at index $i$. A widely used group sparse regularizer [22] is formulated as
$$ \phi(y) = \sum_i \Big( \sum_{k=0}^{K-1} | y(i+k) |^2 \Big)^{1/2}. $$
In the 2D scenario, the $K \times K$-point group of an image $g \in \mathbb{R}^{n \times n}$ is given by
$$ \tilde{g}_{(i,j),K} = \begin{bmatrix} g(i-m_1, j-m_1) & g(i-m_1, j-m_1+1) & \cdots & g(i-m_1, j+m_2) \\ g(i-m_1+1, j-m_1) & g(i-m_1+1, j-m_1+1) & \cdots & g(i-m_1+1, j+m_2) \\ \vdots & \vdots & \ddots & \vdots \\ g(i+m_2, j-m_1) & g(i+m_2, j-m_1+1) & \cdots & g(i+m_2, j+m_2) \end{bmatrix} \in \mathbb{R}^{K \times K}, $$
with $m_1 = \lfloor \frac{K-1}{2} \rfloor$ and $m_2 = \lfloor \frac{K}{2} \rfloor$, where $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$. The center of $\tilde{g}_{(i,j),K}$ is $(i,j)$. Let $g_{(i,j),K}$ denote the vector formed by stacking the $K$ columns of the matrix $\tilde{g}_{(i,j),K}$, i.e., $g_{(i,j),K} = \tilde{g}_{(i,j),K}(:)$. Subsequently, the OGS function for the grayscale image $g$ is defined as
$$ \phi(g) = \sum_{i,j=1}^{n} \| g_{(i,j),K} \|_2. $$
Thus, combining (2) and (3), the OGSTV regularization term can be represented as
$$ \varphi(x) = \phi( D^{(1)+} x ) + \phi( D^{(2)+} x ). $$
We would like to point out that when $K = 1$, the OGSTV regularization term reduces to the traditional anisotropic TV functional, which is not rotationally symmetric.
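An illustrative, deliberately naive sketch of the OGS function and the resulting OGSTV value, using periodic wrap-around for the groups; efficient implementations would replace the explicit loops with convolutions. Function names are ours, not the paper's.

```python
import numpy as np

def ogs_norm(g, K=3):
    """Overlapping group sparsity function phi(g): sum over all pixels of
    the l2 norm of the K-by-K group centered at that pixel (sketch,
    periodic wrap-around)."""
    m1, m2 = (K - 1) // 2, K // 2
    n1, n2 = g.shape
    total = 0.0
    for i in range(n1):
        for j in range(n2):
            rows = [(i + k) % n1 for k in range(-m1, m2 + 1)]
            cols = [(j + k) % n2 for k in range(-m1, m2 + 1)]
            total += np.sqrt(sum(g[r, c] ** 2 for r in rows for c in cols))
    return total

def ogstv(x, K=3):
    """OGSTV regularizer phi(D1 x) + phi(D2 x) under periodic BC."""
    d1 = np.roll(x, -1, axis=1) - x
    d2 = np.roll(x, -1, axis=0) - x
    return ogs_norm(d1, K) + ogs_norm(d2, K)
```

For K = 1 this reduces to the anisotropic TV value, matching the remark above.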
Subsequently, the proposed myopic deconvolution model incorporating the OGSTV regularization term can be expressed as
$$ \min_{x \in C_x,\, w \in C_w} \Phi(x, w) = \frac{\mu}{2}\| A(w)x - d \|_2^2 + \varphi(x), \quad \text{s.t.} \quad \sum_{j=1}^{p} w_j = 1, $$
where $C_x = \{ x \mid x_i \ge 0 \text{ for } i = 1, \dots, n^2 \}$, $C_w = \{ w \mid w_j \ge 0 \text{ for } j = 1, \dots, p \}$, $A(w) = \sum_{j=1}^{p} w_j A_j$, and $\varphi(x)$ denotes the OGSTV regularization term.
Moreover, to keep the sum of the weights close to 1, a suitable regularization term on $w$ is introduced. Specifically, we use a quadratic penalty function $S(w)$, which for sufficiently large $\xi$ effectively enforces $\sum_{j=1}^{p} w_j = 1$. Consequently, we propose the following model:
$$ \min_{x \in C_x,\, w \in C_w} \frac{\mu}{2}\| A(w)x - d \|_2^2 + \varphi(x) + S(w), $$
where the regularization term $S(w)$ is defined as
$$ S(w) = \frac{\xi}{2}\, ( e^\top w - 1 )^2, $$
where $\xi > 0$ denotes a weighting parameter and $e \in \mathbb{R}^p$ denotes the all-ones vector.
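A one-line sketch of the penalty and its gradient; the value xi = 100 merely anticipates the setting used in the numerical experiments and is not intrinsic to the definition.

```python
import numpy as np

def S(w, xi=100.0):
    """Quadratic penalty S(w) = (xi/2) (e^T w - 1)^2 encouraging the
    weights to sum to 1 (sketch)."""
    return 0.5 * xi * (np.sum(w) - 1.0) ** 2

def grad_S(w, xi=100.0):
    """Gradient: xi * (e^T w - 1) * e."""
    return xi * (np.sum(w) - 1.0) * np.ones_like(w)
```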

3. Optimization Schemes

In this section, we employ the alternating direction method of multipliers (ADMM) as an outer-layer optimization framework to solve the proposed model (6). The ADMM subproblems are efficiently solved by two strategies: the majorization–minimization (MM) method is applied to address the OGSTV component, while a modified version of the linearize and project (LAP) method is implemented to address the tightly coupled subproblem. Based on the strategies above, we propose an ADMM-MM-LAP method.

3.1. ADMM Scheme

Considering the structure of the model (6), we apply ADMM as the outer-layer algorithm to solve it. By introducing an auxiliary variable $v$, we rewrite (6) as the equivalent optimization problem
$$ \min_{x, w, v}\ \frac{\mu}{2}\| A(w)x - d \|_2^2 + \varphi(v) + S(w) + \delta_{C_x}(x) + \delta_{C_w}(w) \quad \text{s.t.} \quad v = x, $$
where $\delta_C(\cdot)$ represents the indicator function
$$ \delta_C(s) = \begin{cases} 0, & s \in C, \\ +\infty, & s \notin C. \end{cases} $$
The augmented Lagrangian function for (7) is given by
$$ \mathcal{L}(x, w, v, \alpha) = \frac{\mu}{2}\| A(w)x - d \|_2^2 + S(w) + \delta_{C_x}(x) + \delta_{C_w}(w) + \varphi(v) - \alpha^\top (v - x) + \frac{\rho}{2}\| v - x \|_2^2, $$
where α represents the Lagrangian multiplier and ρ > 0 represents the penalty parameter.
The main idea of ADMM for solving (7) is to alternately update the variables by minimizing the augmented Lagrangian function (8). We treat $v$ as one block of variables and $(x, w)$ as the other block. Given an initial point $(v^0, x^0, w^0, \alpha^0)$, ADMM iteratively solves the following $v$-subproblem and $(x, w)$-subproblem:
$$ \begin{aligned} v^{k+1} &= \arg\min_v \mathcal{L}(x^k, w^k, v, \alpha^k), \\ (x^{k+1}, w^{k+1}) &= \arg\min_{x, w} \mathcal{L}(x, w, v^{k+1}, \alpha^k), \\ \alpha^{k+1} &= \alpha^k + \rho\, ( x^{k+1} - v^{k+1} ). \end{aligned} $$
First, the $v$-subproblem can be formulated as
$$ v^{k+1} = \arg\min_v\ \frac{\rho}{2}\| v - x^k \|_2^2 - (\alpha^k)^\top (v - x^k) + \varphi(v) = \arg\min_v\ \frac{\rho}{2}\Big\| v - \Big( x^k + \frac{\alpha^k}{\rho} \Big) \Big\|_2^2 + \varphi(v). $$
Second, the $(x, w)$-subproblem reads
$$ (x^{k+1}, w^{k+1}) = \arg\min_{x, w}\ \frac{\mu}{2}\| A(w)x - d \|_2^2 + S(w) - (\alpha^k)^\top ( v^{k+1} - x ) + \frac{\rho}{2}\| v^{k+1} - x \|_2^2. $$
Let
$$ C(x, v, \alpha) = \varphi(v) - \alpha^\top (v - x) + \frac{\rho}{2}\| v - x \|_2^2, $$
and define
$$ \hat{\Psi}(x, w, v, \alpha) := \frac{\mu}{2}\| A(w)x - d \|_2^2 + S(w) + C(x, v, \alpha). $$
Then, the $(x, w)$-subproblem can be formulated as
$$ (x^{k+1}, w^{k+1}) = \arg\min_{x \in C_x,\, w \in C_w} \hat{\Psi}(x, w, v^{k+1}, \alpha^k). $$
Finally, we update the Lagrangian multiplier:
$$ \alpha^{k+1} = \alpha^k + \rho\, ( x^{k+1} - v^{k+1} ). $$
Next, we will solve these two subproblems according to their respective structural characteristics.
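The three-step iteration above can be sketched as a plain outer loop. Here `solve_v` and `solve_xw` are hypothetical placeholders standing in for the inner MM and LAP solvers described in Section 3.2 and Section 3.3:

```python
import numpy as np

def admm_outer(x0, w0, solve_v, solve_xw, rho, max_iter=50):
    """Skeleton of the outer ADMM loop (sketch). solve_v(x, alpha, rho)
    solves the v-subproblem; solve_xw(v, alpha, w) solves the coupled
    (x, w)-subproblem; both are supplied by the caller."""
    x, w = x0.copy(), w0.copy()
    v = x0.copy()
    alpha = np.zeros_like(x0)  # Lagrange multiplier
    for _ in range(max_iter):
        v = solve_v(x, alpha, rho)        # v-subproblem (MM method)
        x, w = solve_xw(v, alpha, w)      # (x, w)-subproblem (LAP method)
        alpha = alpha + rho * (x - v)     # multiplier update
    return x, w, v, alpha
```

With trivial inner solvers that leave the iterates consistent, the multiplier stays at zero, which checks the update wiring.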

3.2. The Majorization–Minimization Algorithm (MM)

To tackle the $v$-subproblem (9), we adopt the MM method in [24]. Before providing the solution to (9), we consider the more general problem
$$ \min_{v \in \mathbb{R}^{n^2}} \Big\{ F(v) = \frac{1}{2}\| v - v_0 \|_2^2 + \frac{1}{\rho}\varphi(v) \Big\}, $$
where $v_0 = x^k + \frac{\alpha^k}{\rho}$ and the function $\varphi(\cdot)$ is given by (4). Rather than directly optimizing the complicated problem (9), we employ the MM method to solve it iteratively. Specifically, we look for a sequence of convex surrogate functions $G(v, v^k)$, $k = 0, 1, 2, \dots$. The principle underlying the MM method is that each $G(v, v^k)$ is an upper bound of $F(v)$ for any $v$ and touches it at $v^k$, i.e., $G(v^k, v^k) = F(v^k)$; minimizing the surrogate therefore yields a non-increasing cost. The main difficulty within the MM framework lies in constructing an appropriate surrogate $G(v, v^k)$.
Notice the inequality
$$ \frac{1}{2}\Big( \frac{\| v \|_2^2}{\| u \|_2} + \| u \|_2 \Big) \ge \| v \|_2 $$
for all $v$ and $u \ne 0$, with equality when $\| u \|_2 = \| v \|_2$. Applying (13) to each group of $\varphi(v)$ and summing the results, we obtain a majorizer of $\varphi(v)$. Let
$$ T(v, u) = \frac{1}{2} \sum_{i,j=1}^{n} \bigg( \frac{\| v_{(i,j),K} \|_2^2}{\| u_{(i,j),K} \|_2} + \| u_{(i,j),K} \|_2 \bigg), $$
so that $T(v, u) \ge \varphi(v)$ and $T(u, u) = \varphi(u)$, provided that $\| u_{(i,j),K} \|_2 \ne 0$. Collecting terms, $T(v, u)$ can be reformulated as
$$ T(v, u) = \frac{1}{2}\| \Omega(u)\, v \|_2^2 + C(u), $$
where $C(u)$ is a constant independent of $v$, and $\Omega(u)$ denotes a diagonal matrix whose diagonal elements are defined as
$$ [\Omega(u)]_{l,l} = \Bigg[ \sum_{i,j=-m_1}^{m_2} \bigg( \sum_{k_1, k_2 = -m_1}^{m_2} | u_{r-i+k_1,\, t-j+k_2} |^2 \bigg)^{-1/2} \Bigg]^{1/2}, $$
with $l = (r-1)n + t$, $r, t = 1, 2, \dots, n$; the entries of $\Omega(u)$ can be computed efficiently via a convolution operation.
Then, the majorizer of $F(v)$ can be expressed as
$$ G(v, u) = \frac{1}{2}\| v - v_0 \|_2^2 + \frac{1}{\rho} T(v, u) = \frac{1}{2}\| v - v_0 \|_2^2 + \frac{1}{2\rho}\| \Omega(u)\, v \|_2^2 + \frac{1}{\rho} C(u). $$
We have $G(v, u) \ge F(v)$ for any $u, v$, and $G(u, u) = F(u)$. To minimize $F(v)$, the MM iterations are performed as
$$ v^{k+1} = \Big( I + \frac{1}{\rho}\, \Omega(v^k)^\top \Omega(v^k) \Big)^{-1} v_0, \quad k = 0, 1, \dots, $$
where $I$ is the identity matrix of the same dimension as $\Omega(v^k)$. We would like to point out that the inverse of $I + \frac{1}{\rho}\Omega(v^k)^\top \Omega(v^k)$ can be computed efficiently, since the matrix is diagonal and the inversion reduces to elementwise division. To sum up, Algorithm 1 shows the iterative scheme of the MM method for (12).
Algorithm 1 The MM method for (12)
1: Input: $v_0 \in \mathbb{R}^{n^2}$, $\rho$, group size $K$, maximum number of iterations $N$.
2: Initialize: $v^0 = v_0$, $k = 0$.
3: repeat
4:   Compute $\Omega(v^k)$ according to (15).
5:   Compute $v^{k+1}$ according to (16).
6:   $k = k + 1$.
7: until iteration $N$ is reached.
8: Output: $v = v^{k}$.
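Algorithm 1 can be sketched as follows, working with 2D arrays and periodic wrap-around instead of the vectorized notation; the small epsilon guarding against empty groups and the function names are implementation assumptions.

```python
import numpy as np

def omega_sq_diag(u, K=3):
    """Diagonal of Omega(u)^T Omega(u): for each pixel, the sum over the
    K*K overlapping groups containing it of 1/||group||_2 (sketch, with
    periodic wrap-around and an epsilon to avoid division by zero)."""
    m1, m2 = (K - 1) // 2, K // 2
    # squared l2 norm of the group centered at each pixel
    gnorm2 = np.zeros_like(u)
    for k1 in range(-m1, m2 + 1):
        for k2 in range(-m1, m2 + 1):
            gnorm2 += np.roll(np.roll(u, -k1, axis=0), -k2, axis=1) ** 2
    inv = 1.0 / np.sqrt(gnorm2 + 1e-12)
    # each pixel belongs to the groups centered at its K*K neighbors
    diag = np.zeros_like(u)
    for i in range(-m1, m2 + 1):
        for j in range(-m1, m2 + 1):
            diag += np.roll(np.roll(inv, i, axis=0), j, axis=1)
    return diag

def mm_solve_v(v0, rho, K=3, n_iter=10):
    """MM iterations for the v-subproblem: since Omega(v)^T Omega(v) is
    diagonal, each update is the elementwise division
    v <- v0 / (1 + diag / rho)."""
    v = v0.copy()
    for _ in range(n_iter):
        v = v0 / (1.0 + omega_sq_diag(v, K) / rho)
    return v
```

For K = 1 on a constant image the diagonal equals 1/|v| at each pixel, and the iterates shrink monotonically toward the soft-thresholded solution.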

3.3. The Linearize and Project (LAP)

To solve the $(x, w)$-subproblem (10), we employ a modified version of the linearize and project (LAP) method. The original LAP framework was proposed by Herring et al. [32] for coupled optimization problems and builds on the Gauss–Newton method. By linearizing the residual and eliminating one block of variables through a projection step, the method effectively handles tightly coupled optimization problems with pointwise bound constraints.
The main steps of the LAP method are to linearize the residual first and then eliminate one set of variables by a projection operation, which yields a reduced approximate problem. The variables are classified into two distinct groups: active-set variables and inactive-set variables. Subsequently, the update steps $\delta x$ and $\delta w$ are computed on both sets. First, we define the index set of the stacked variable $[x;\, w]$ as
$$ \mathcal{N} := \big\{ q \in \mathbb{N} \ \big|\ [x;\, w]_q \ge 0 \big\}. $$
The active set and inactive set are defined as
$$ \mathcal{A} := \big\{ q \in \mathcal{N} \ \big|\ [x;\, w]_q = 0 \big\}, \qquad \mathcal{I} := \mathcal{N} \setminus \mathcal{A}, $$
respectively. The variables are accordingly divided into active-set variables $[x_{\mathcal{A}};\, w_{\mathcal{A}}]$ and inactive-set variables $[x_{\mathcal{I}};\, w_{\mathcal{I}}]$, with corresponding update steps $[\delta x_{\mathcal{A}};\, \delta w_{\mathcal{A}}]$ and $[\delta x_{\mathcal{I}};\, \delta w_{\mathcal{I}}]$.
The computation of $\delta x_{\mathcal{I}}$ and $\delta w_{\mathcal{I}}$ is similar to the unconstrained case. Following Newton's method, the update step $[\delta x;\, \delta w]$ is given by
$$ \begin{pmatrix} \delta x \\ \delta w \end{pmatrix} = -\,\nabla_{x,w}^2 \hat{\Psi}(x, w, v^{k+1}, \alpha^k)^{-1}\, \nabla_{x,w} \hat{\Psi}(x, w, v^{k+1}, \alpha^k). $$
That is,
$$ \begin{pmatrix} \delta x \\ \delta w \end{pmatrix} = -\begin{pmatrix} J_x^\top J_x + \frac{1}{\mu}\nabla_x^2 C(x, v^{k+1}, \alpha^k) & J_x^\top J_w \\ J_w^\top J_x & J_w^\top J_w + \nabla_w^2 S(w) \end{pmatrix}^{-1} \begin{pmatrix} J_x^\top r + \frac{1}{\mu}\nabla_x C(x, v^{k+1}, \alpha^k) \\ J_w^\top r + \nabla_w S(w) \end{pmatrix}, $$
where $r$ denotes the residual, $\nabla_x C(x, v^{k+1}, \alpha^k) = \alpha^k - \rho\,( v^{k+1} - x )$, and $\nabla_x^2 C(x, v^{k+1}, \alpha^k) = \rho I$. $J_x$ and $J_w$ denote the Jacobian matrices with respect to the $x$ block and the $w$ block, respectively. However, the variables must be projected onto the inactive set. Specifically, the value of $\delta x_{\mathcal{I}}$ at the current iterate $(x, w)$ can be computed from
$$ \hat{M}\, \delta x_{\mathcal{I}} = \hat{b}, $$
$$ \begin{aligned} \hat{M} &:= \hat{J}_x^\top \Big( I - \hat{J}_w \big( \hat{J}_w^\top \hat{J}_w + \nabla_w^2 \hat{S}(w) \big)^{-1} \hat{J}_w^\top \Big) \hat{J}_x + \frac{1}{\mu}\nabla_x^2 \hat{C}(x, v^{k+1}, \alpha^k), \\ \hat{b} &:= -\hat{J}_x^\top \Big( I - \hat{J}_w \big( \hat{J}_w^\top \hat{J}_w + \nabla_w^2 \hat{S}(w) \big)^{-1} \hat{J}_w^\top \Big) r - \frac{1}{\mu}\nabla_x \hat{C}(x, v^{k+1}, \alpha^k) + \hat{J}_x^\top \hat{J}_w \big( \hat{J}_w^\top \hat{J}_w + \nabla_w^2 \hat{S}(w) \big)^{-1} \nabla_w \hat{S}(w), \end{aligned} $$
where $\hat{J}_x$, $\hat{J}_w$, $\nabla_x \hat{C}$, $\nabla_x^2 \hat{C}$, $\nabla_w \hat{S}$, $\nabla_w^2 \hat{S}$ denote $J_x$, $J_w$, $\nabla_x C$, $\nabla_x^2 C$, $\nabla_w S$, $\nabla_w^2 S$ restricted to the inactive set, respectively. The reduced problem (17) does not need to be solved to high precision; for instance, in [32] a stopping tolerance of $10^{-1}$ is used for the iterative solver. Once $\delta x_{\mathcal{I}}$ has been obtained, the value of $\delta w_{\mathcal{I}}$ can be calculated by
$$ \delta w_{\mathcal{I}} = -\big( \hat{J}_w^\top \hat{J}_w + \nabla_w^2 \hat{S}(w) \big)^{-1} \Big( \hat{J}_w^\top \hat{J}_x\, \delta x_{\mathcal{I}} + \hat{J}_w^\top r + \nabla_w \hat{S}(w) \Big). $$
For the active set, $\delta x_{\mathcal{A}}$ and $\delta w_{\mathcal{A}}$ are obtained through the projected gradient step
$$ \begin{pmatrix} \delta x_{\mathcal{A}} \\ \delta w_{\mathcal{A}} \end{pmatrix} = -\begin{pmatrix} \mu \tilde{J}_x^\top r + \nabla_x \tilde{C}(x, v^{k+1}, \alpha^k) \\ \mu \tilde{J}_w^\top r + \nabla_w \tilde{S}(w) \end{pmatrix}, $$
where $\tilde{J}_x$, $\tilde{J}_w$, $\nabla_x \tilde{C}$, $\nabla_w \tilde{S}$ denote the projections of $J_x$, $J_w$, $\nabla_x C$, $\nabla_w S$ onto the active set, respectively.
Subsequently, $\delta x$ and $\delta w$ are computed as a combination of $\delta x_{\mathcal{A}}$, $\delta w_{\mathcal{A}}$, $\delta x_{\mathcal{I}}$, and $\delta w_{\mathcal{I}}$:
$$ \begin{pmatrix} \delta x \\ \delta w \end{pmatrix} = \begin{pmatrix} \delta x_{\mathcal{I}} \\ \delta w_{\mathcal{I}} \end{pmatrix} + \theta \begin{pmatrix} \delta x_{\mathcal{A}} \\ \delta w_{\mathcal{A}} \end{pmatrix}, $$
where an appropriate parameter $\theta$ can be selected following [33].
After that, the update steps are combined through a projected gradient descent framework, with the active and inactive sets updated at each iteration to maintain the pointwise bound constraints. This approach enables the effective handling of large-scale optimization problems with tightly coupled variables and pointwise bound constraints. The LAP method solves the $(x, w)$-subproblem inexactly. The residual is
$$ \begin{pmatrix} \eta_x^{k+1} \\ \eta_w^{k+1} \end{pmatrix} := \nabla_{(x,w)} \hat{\Psi}(x, w, v, \alpha). $$
For the stopping criterion, we require
$$ \bigg\| \begin{pmatrix} \eta_x^{k+1} \\ \eta_w^{k+1} \end{pmatrix} \bigg\| \le \varepsilon^{k+1}, $$
where $\{ \varepsilon^{k+1} \}_{k=0}^{\infty}$ is a non-negative sequence satisfying $\sum_{k=0}^{\infty} \varepsilon^{k+1} < \infty$.
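As a minimal illustration of the step combination (20), the projection back onto the bound constraints, and the summable tolerance sequence; all function names, and the default values of theta and the step length, are assumptions for the sketch.

```python
import numpy as np

def projected_update(z, dz_I, dz_A, inactive, active, theta=1.0, step=1.0):
    """Combine the inactive- and active-set steps and project the result
    onto the non-negativity constraints (illustrative sketch)."""
    dz = np.zeros_like(z)
    dz[inactive] = dz_I            # Newton-type step on the inactive set
    dz[active] = theta * dz_A      # scaled gradient step on the active set
    return np.maximum(z + step * dz, 0.0)  # projection onto z >= 0

def tolerance(k, b=1.0):
    """Summable inexactness tolerance eps_{k+1} = 1 / (b (k+1)^2)."""
    return 1.0 / (b * (k + 1) ** 2)
```

The quadratic decay of the tolerance ensures the inexactness sequence is summable, as required by the stopping criterion above.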
In summary, based on the above analysis, we present the iteration scheme for solving the $(x, w)$-subproblem (10) in Algorithm 2.
Algorithm 2 The LAP method for (10)
1: Input: $(x^0, w^0) \in \mathbb{R}^{n^2} \times \mathbb{R}^p$. Set $k = 0$.
2: repeat
3:   Apply the LAP method to compute $\delta x_{\mathcal{I}}$ and $\delta w_{\mathcal{I}}$ according to (17) and (18), respectively.
4:   Apply projected gradient descent to compute $\delta x_{\mathcal{A}}$ and $\delta w_{\mathcal{A}}$ according to (19).
5:   Combine $\delta x_{\mathcal{A}}$, $\delta w_{\mathcal{A}}$, $\delta x_{\mathcal{I}}$, $\delta w_{\mathcal{I}}$ according to (20).
6:   Compute $x^{k+1}$ and $w^{k+1}$ with a projected Armijo line search.
7:   Update the active set $\mathcal{A}$ and the inactive set $\mathcal{I}$.
8:   $k := k + 1$.
9: until the termination condition is satisfied.
10: Output: $(x^k, w^k)$.

3.4. ADMM-MM-LAP Method

Based on the representations and analysis above, we present the iterative scheme of the ADMM-MM-LAP method for solving the AO retinal image model (7) with the OGSTV regularization term in Algorithm 3.
Algorithm 3 ADMM-MM-LAP method for (7)
1: Input: $v^0$, $x^0$, $w^0$ and parameters $\mu > 0$, $\rho > 0$, group size $K$.
2: Initialize: $x^0 = f$, $k = 0$, $\alpha^0 = 0$.
3: repeat
4:   Apply Algorithm 1 to compute $v^{k+1}$.
5:   Apply Algorithm 2 to compute $(x^{k+1}, w^{k+1})$.
6:   Update the multiplier $\alpha^{k+1}$ using (11).
7:   $k := k + 1$.
8: until the termination condition is satisfied.
9: Output: $v^k$, $x^k$, $w^k$, $\alpha^k$.

3.5. Complexity Analysis

In this subsection, the computational cost generated at each step of Algorithm 3 is considered, and the computational complexity analysis is given.
First, the $v$-subproblem is solved by Algorithm 1. Each iteration needs to compute the diagonal matrix $\Omega(v^k)$; its $n^2$ diagonal elements are obtained through a convolution with a fixed $K \times K$ stencil, where the group size $K$ is a constant, so this costs $O(n^2)$ per iteration. The subsequent matrix inversion reduces to elementwise division, so the overall cost remains $O(n^2)$. Therefore, the complexity of the $v$-subproblem is $O(n^2)$.
Second, the $(x, w)$-subproblem is solved by Algorithm 2, whose cost comprises the computation of $\delta x_{\mathcal{I}}$, $\delta w_{\mathcal{I}}$, $\delta x_{\mathcal{A}}$, and $\delta w_{\mathcal{A}}$. The computation primarily involves three core components: matrix–vector products, vector norm computations, and the conjugate gradient method. We proceed with a step-by-step analysis. First, computing the update step $\delta x_{\mathcal{I}}$ requires applying the matrix $\hat{M}$ of (17) and forming the vectors appearing in $\hat{b}$. Since the Jacobian $J_x = A(w)$ and each PSF matrix $A_j$ support matrix–vector multiplication via the fast Fourier transform (FFT), multiplying $J_x$ by a vector costs $O(n^2 \log n)$; similarly, multiplying $J_w$ by a vector also achieves $O(n^2 \log n)$ complexity by FFTs. Hence the matrix–vector product with $\hat{J}_x^\top \big( I - \hat{J}_w ( \hat{J}_w^\top \hat{J}_w + \nabla_w^2 \hat{S}(w) )^{-1} \hat{J}_w^\top \big) \hat{J}_x$ has the same $O(n^2 \log n)$ complexity, while the product with $\frac{1}{\mu}\nabla_x^2 \hat{C}(x, v^{k+1}, \alpha^k)$ costs $O(n^2)$. The remaining FFT-based terms in $\hat{b}$ likewise cost $O(n^2 \log n)$. Thus, the overall cost of computing $\delta x_{\mathcal{I}}$ is $O(n^2 \log n)$. Similarly, the matrix–vector products required for $\delta w_{\mathcal{I}}$, $\delta x_{\mathcal{A}}$, and $\delta w_{\mathcal{A}}$, such as $\hat{J}_w^\top \hat{J}_x \delta x_{\mathcal{I}}$, $\hat{J}_w^\top r$, $\tilde{J}_x^\top r$, and $\tilde{J}_w^\top r$, are accelerated by FFTs, so their cost is also $O(n^2 \log n)$.
In addition, the update steps $\delta x$ and $\delta w$ mainly involve scalar–vector multiplications and vector additions, while the projected Armijo line search involves evaluating $\nabla_{x,w} \hat{\Psi}(x, w, v^{k+1}, \alpha^k)$ and inner products; this cost is $O(n^2)$. In summary, the complexity of the $(x, w)$-subproblem is $O(n^2 \log n)$.
Finally, in the Lagrangian multiplier update step, the cost is dominated by the computations of $\rho\, v^{k+1}$ and $\rho\, x^{k+1}$, each requiring $O(n^2)$ operations. Combining the above, we obtain the following theorem.
Theorem 1. 
The computational complexity of Algorithm 3 is $O(n^2 \log n)$ per iteration.

4. Numerical Experiments

In this section, we present the numerical experimental results of the proposed model, which incorporates the OGSTV regularization term for myopic deconvolution of AO retinal images. All experimental results were implemented using MATLAB 2020a on a PC with 16.0 GB RAM and AMD Ryzen 5 4600U with Radeon Graphics at 2.10 GHz.
The restoration performance is evaluated by five principal metrics: the relative errors of $x$ and $w$, the signal-to-noise ratio (SNR), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM). Specifically, the metrics are defined as follows:
  • $\mathrm{RE}_x = \frac{\| x - x^* \|_2}{\| x^* \|_2}$, where $x^*$ denotes the true image and $x$ denotes the restored image.
  • $\mathrm{RE}_w = \frac{\| w - w^* \|_2}{\| w^* \|_2}$, where $w^*$ is the true parameter and $w$ is the estimated parameter.
  • $\mathrm{SNR} = 10 \log_{10} \frac{\| x^* - \tilde{x} \|_2^2}{\| x^* - x \|_2^2}$, where $\tilde{x}$ denotes the mean intensity value of $x^*$.
  • $\mathrm{PSNR} = 10 \log_{10} \frac{N_p}{\| x^* - x \|_2^2}$, where $N_p$ refers to the size of the image, and $x^*$ and $x$ denote the true and restored images, respectively.
  • $\mathrm{SSIM} = \frac{ (2 \mu_x \mu_{x^*} + C_1)(2 \sigma_{x x^*} + C_2) }{ (\mu_x^2 + \mu_{x^*}^2 + C_1)(\sigma_x^2 + \sigma_{x^*}^2 + C_2) }$, where $C_1$ and $C_2$ are constants, $\sigma_{x x^*}$ denotes the covariance between $x$ and $x^*$, and $\mu_{x^*}, \sigma_{x^*}$ and $\mu_x, \sigma_x$ denote the means and standard deviations of $x^*$ and $x$, respectively.
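The scalar metrics above can be sketched directly from their definitions (SSIM is omitted here, as library implementations are typically used for it; function names are ours):

```python
import numpy as np

def relative_error(x, x_true):
    """RE = ||x - x*||_2 / ||x*||_2."""
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

def snr(x, x_true):
    """SNR = 10 log10( ||x* - mean(x*)||^2 / ||x* - x||^2 )."""
    num = np.linalg.norm(x_true - x_true.mean()) ** 2
    den = np.linalg.norm(x_true - x) ** 2
    return 10.0 * np.log10(num / den)

def psnr(x, x_true):
    """PSNR = 10 log10( Np / ||x* - x||^2 ) with Np the number of pixels,
    following the definition used in this paper (many references instead
    include a peak-intensity term)."""
    return 10.0 * np.log10(x_true.size / np.linalg.norm(x_true - x) ** 2)
```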
The quantitative evaluation of retinal image restoration requires multiple metrics. Although PSNR provides a computationally efficient distortion measure via the mean squared error (MSE), its correlation with clinical diagnostic relevance is limited because human visual perception is non-linear. In contrast, SSIM aligns better with subjective evaluations by modeling luminance, contrast, and structural correlations. For a comprehensive analysis, we therefore employ both metrics alongside the relative errors, assessing both restoration quality and parameter estimation accuracy.
The parameters are configured through experimental verification. In the OGSTV model, the group size $K$ is set to 3 to ensure restoration performance [12]. We set the parameter $\xi = 100$ so that the estimated parameter satisfies $\sum_{j=1}^{p} w_j = 1$. The parameter $\rho$ is chosen as 9. To guarantee that the non-negative sequence $\{ \varepsilon^{k+1} \}_{k=0}^{\infty}$ satisfies $\sum_{k=0}^{\infty} \varepsilon^{k+1} < \infty$, we choose $\varepsilon^{k+1} = \frac{1}{b (k+1)^2}$ with $b > 0$. We implement a unified stopping criterion
$$ \frac{ | \hat{\Psi}(x^{k+1}, w^{k+1}, v^{k+1}, \alpha^{k+1}) - \hat{\Psi}(x^k, w^k, v^k, \alpha^k) | }{ | \hat{\Psi}(x^k, w^k, v^k, \alpha^k) | } < \varepsilon, $$
where $\hat{\Psi}(x^{k+1}, w^{k+1}, v^{k+1}, \alpha^{k+1})$ represents the objective value at the $(k+1)$-th iteration. The maximum number of ADMM iterations is set to 50. For the MM method, the inner iteration number $N$ is set to 10.
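The relative-change stopping test can be sketched as a small helper (a hypothetical function mirroring the criterion above):

```python
def converged(f_new, f_old, eps=0.05):
    """Relative-change stopping test |f_{k+1} - f_k| / |f_k| < eps,
    with eps = 0.05 as in the experimental setting (sketch)."""
    return abs(f_new - f_old) / abs(f_old) < eps
```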
In the context of AO retinal imaging, we use the TV model combined with the ADMM-LAP algorithm as the benchmark method [5]. In the following numerical experiments, we present quantitative comparisons between the proposed OGSTV model and the classical TV model, including relative errors for both reconstructed images and estimated parameters, as well as SNR, PSNR, and SSIM.

4.1. Example 1

  • Problem Setting
For the myopic deconvolution problem of AO retinal images, we set $p = 2$ in this example. The global PSF is synthesized by combining two distinct components: a focused PSF and a defocused PSF. The test image is a $256 \times 256$ segment extracted from an AO retinal image. The test problem is constructed using the regularization toolbox IR Tools [34], which provides different BlurLevel settings ('mild', 'medium', 'severe') and simulates spatially invariant Gaussian blur and out-of-focus blur through its PRblurgauss and PRblurdefocus functions, respectively. Three cases are designed in this example. One PSF is fixed as a spatially invariant Gaussian blur generated by PRblurgauss with a 'mild' blur level, whereas the other PSF is a composite of a 'mild' Gaussian blur from PRblurgauss and a defocus blur from PRblurdefocus at three distinct levels ('mild', 'medium', and 'severe'). We choose the parameter $\mu = 6 \times 10^5$ and set the stopping tolerance to $\varepsilon = 0.05$. Gaussian noise is added via the PRnoise function with a noise level of 0.01. The true parameter is set to $w = [0.3;\ 0.7]$, and the initial value is chosen as $w^0 = [0.5;\ 0.5]$. The initial guess $x^0$ is a random image with $256 \times 256$ pixels.
  • Experimental results
As an example, when the BlurLevel of PRblurdefocus is set to ‘medium’, we compare the restoration results of the proposed OGSTV model and the classical TV model. Figure 1c,d shows that both models restore the image relatively clearly, and it is difficult to judge by eye alone which restoration is better. To show the differences more clearly, Figure 1e–h presents enlarged fragments of the corresponding images. The image recovered by the OGSTV model in Figure 1g exhibits enhanced clarity and contrast, whereas the background in Figure 1h has a coarser texture and more residual noise. This indicates that the image restored by the OGSTV model is more similar to the true image, so its restoration quality is better than that of the TV model.
For a quantitative comparison of Example 1, Figure 2 and Table 1 report the RE_x, RE_w, SNR, PSNR, and SSIM of the restored images. In Figure 2, we plot RE_x and RE_w against the number of iterations. The relative errors of the OGSTV model, for both the recovered image and the estimated parameters, are consistently lower than those of the TV model. This shows that the OGSTV model simultaneously improves image restoration accuracy and parameter recovery precision, validating its ability to outperform the TV model in AO retinal image processing.
Table 1 details the comparison between the two models under the three BlurLevels. The OGSTV model outperforms the TV model on the key metrics. Specifically, it achieves consistently lower relative errors RE_x and RE_w, reflecting enhanced precision in reconstructing AO retinal images and estimating parameters, and its superior SNR, PSNR, and SSIM values further corroborate its accuracy. Although the OGSTV model requires slightly more computational time than the TV model in most cases, the disparity remains minor; more importantly, the OGSTV model recovers higher-quality AO retinal images within a similar computation time. Notably, under the ‘severe’ BlurLevel, OGSTV achieves a 35.7% lower RE_x and a 33.2% higher SNR than the TV model, indicating that the performance gap widens in the most computationally demanding cases. Therefore, this test problem again shows that the OGSTV model outperforms the TV model.
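The quantitative metrics used throughout these experiments can be sketched as follows. This assumes the standard textbook definitions of relative error, SNR, and PSNR (SSIM is omitted for brevity); the paper's exact normalizations may differ.

```python
import numpy as np

def rel_err(x_rec, x_true):
    # Relative error ||x_rec - x_true||_2 / ||x_true||_2 (RE_x; the same
    # formula applied to the weight vector gives RE_w).
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)

def snr_db(x_rec, x_true):
    # SNR in dB: signal power over reconstruction-error power.
    err = x_rec - x_true
    return 10.0 * np.log10(np.sum(x_true**2) / np.sum(err**2))

def psnr_db(x_rec, x_true, peak=1.0):
    # PSNR in dB for images with intensities in [0, peak].
    mse = np.mean((x_rec - x_true) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

Lower RE and higher SNR/PSNR indicate a better reconstruction, which is how the comparisons in Tables 1–3 should be read.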

4.2. Example 2

  • Problem Setting
The myopic deconvolution problem of AO retinal imaging is again considered, with an image size of 256 × 256. In this example, one PSF combines PRblurgauss with a ‘mild’ BlurLevel and PRblurdefocus with one of three BlurLevels (‘mild’, ‘medium’, ‘severe’) in IR Tools; the other PSF is constructed using only PRblurgauss with a ‘mild’ BlurLevel. We choose the parameter μ = 5 × 10 5 and set ε = 0.05 in the stopping criterion. Gaussian noise with a noise level of 0.01 is added. The true parameter is ω = [0.3; 0.7]. To initialize, we select an arbitrary 256 × 256 image as x^0 and set ω^0 = [ω_1^0; ω_2^0] ∈ R^2, where ω_1^0, ω_2^0 are constants in [0, 1] satisfying ω_1^0 + ω_2^0 = 1.
  • Experimental results
The experimental results are consistent with those of Example 1. In this example, the BlurLevel of PRblurdefocus is ‘mild’. Examining the enlarged fragments for the two models in Figure 3, we observe that the background region in Figure 3g retains more complete texture details with significantly less residual noise. The images in Figure 3 further indicate that the restoration produced by the proposed OGSTV model resembles the true image more closely.
Figure 4 shows the relative errors of the reconstructed image and the estimated parameters against the number of iterations. Although the TV model achieves a marginally lower image relative error at the fourth iteration, the OGSTV model demonstrates superior stability. Under identical stopping criteria, the OGSTV model performs additional iterations and attains a significantly smaller final error. Notably, its parameter estimation error converges to a negligible magnitude.
Table 2 presents a detailed comparison of the two models under the three BlurLevels. The OGSTV model is consistently superior to the TV model in both image reconstruction quality and parameter estimation accuracy across all tested conditions, and it achieves significantly higher values of the key quantitative metrics SNR, PSNR, and SSIM. Although the TV model exhibits marginally shorter computational times in some blur scenarios, the OGSTV model maintains a better balance between computational efficiency and restoration precision.

4.3. Example 3

  • Problem Setting
In this example, we examine the scenario where the parameter p is set to 3. We use the same AO retinal image as in Example 1, cropped to 256 × 256. The first PSF is constructed by PRblurgauss (‘mild’). The second PSF combines PRblurgauss (‘mild’) with PRblurdefocus (‘medium’), and the third PSF combines PRblurgauss (‘mild’) with PRblurdefocus at one of three levels (‘mild’, ‘medium’, or ‘severe’), yielding three cases. The parameter μ is chosen as μ = 6 × 10 5, and ε in the stopping criterion is set to ε = 0.01. We add Gaussian noise with a noise level of 0.01. The true parameter ω* = [ω_1*; ω_2*; ω_3*] ∈ R^3 is a random vector with ω_i* ≥ 0 and ω_1* + ω_2* + ω_3* = 1. A random 256 × 256 image serves as the initial value x^0. The initial parameter value is chosen as ω^0 = [ω_1^0; ω_2^0; ω_3^0], where ω_1^0, ω_2^0, ω_3^0 are constants in [0, 1] satisfying ω_1^0 + ω_2^0 + ω_3^0 = 1.
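Drawing nonnegative weights that sum to one, as required for ω* and ω^0 above, amounts to sampling a point on the probability simplex. A minimal sketch (the normalization-of-uniform-draws scheme is one simple choice; the paper does not specify how its random weights are generated):

```python
import numpy as np

def random_simplex_weights(p, rng):
    # Draw p nonnegative weights summing to one by normalizing
    # uniform random draws (illustrative; not the paper's exact scheme).
    w = rng.random(p)
    return w / w.sum()

rng = np.random.default_rng(0)   # fixed seed for reproducibility
omega0 = random_simplex_weights(3, rng)  # initial guess for p = 3
```

The same helper with p = 2 would produce the kind of initial pair [ω_1^0; ω_2^0] used in Example 2.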
  • Experimental results
In Figure 5, we show the resulting images when the BlurLevel of PRblurdefocus is ‘severe’. A larger p makes the AO image restoration problem more complex, so Example 3 is significantly more computationally challenging than the previous two examples. Unlike Examples 1 and 2, enlarged views are not needed to evaluate the restoration: from Figure 5c,d alone, it is obvious that the image recovered by the OGSTV model is closer to the true image than that recovered by the TV model. This demonstrates the superior denoising capability of the OGSTV model, particularly when processing images with intricate structures.
In Figure 6, we plot the relative errors against the number of iterations for both models. As in Examples 1 and 2, the OGSTV model outperforms the TV model in both image restoration and parameter estimation. In addition, the TV model terminates at the fifth iteration, whereas the OGSTV model performs more iterations and yields smaller relative errors at every iteration.
Table 3 presents the comparison between the two models at the three BlurLevels. The results indicate a correlation between increased blur and higher computational cost, as evidenced by the rise in processing time for both models. Although the TV model has some advantage in computation time, its key metrics degrade significantly under severe blur. In contrast, the OGSTV model shows clear advantages, with better relative errors, SNR, PSNR, and SSIM, which verifies the validity and reliability of the proposed OGSTV model for AO retinal image restoration. In conclusion, the proposed OGSTV model outperforms the TV model in all test cases.

4.4. Performance Comparison

To evaluate the performance differences between the proposed OGSTV model and the TV model, we show the PSNR variations of both models during the iterative process across the above three examples. The corresponding curves of PSNR versus iteration number are shown in Figure 7. The results demonstrate that the proposed OGSTV model converges steadily and achieves higher PSNR levels than the TV model.
Moreover, we would like to point out that the OGSTV model requires substantial computational resources for processing high-resolution images. Consequently, in the aforementioned simulations, OGSTV exhibited longer processing times compared to TV in some examples. Future research will prioritize the development of lightweight variants of OGSTV to reduce the computational cost while preserving the reconstruction accuracy.

5. Conclusions

In this paper, we propose a myopic deconvolution model with OGSTV regularization and develop an ADMM-MM-LAP method to solve it efficiently. Specifically, ADMM serves as the outer-layer optimization method, while the MM method and a modified LAP method act as inner solvers for the corresponding subproblems. Theoretically, we give the complexity analysis of the ADMM-MM-LAP method. Numerical experiments demonstrate that the proposed OGSTV model is superior to the existing state-of-the-art TV model. In future work, we plan to work with more real clinical AO retinal datasets and to compare our algorithm with algorithms developed for high-performance AO systems such as the Large Binocular Telescope.

Author Contributions

Conceptualization, X.C. and H.F.; methodology, X.C. and H.F.; software, Y.S.; validation, X.C.; formal analysis, Y.S. and X.C.; writing—original draft preparation, X.C. and Y.S.; writing—review and editing, H.F.; visualization, Y.S.; supervision, H.F.; funding acquisition, X.C. and H.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 12301477, No. 42274166), the Fundamental Scientific Research Projects of Higher Education Institutions of Liaoning Provincial Department of Education (No. JYTMS20230165), and the Fundamental Research Funds for the Central Universities.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Beckers, J.M. Adaptive optics for astronomy: Principles, performance, and applications. Annu. Rev. Astron. Astrophys. 1993, 31, 13–62. [Google Scholar] [CrossRef]
  2. Davies, R.; Kasper, M. Adaptive optics for astronomy. Annu. Rev. Astron. Astrophys. 2012, 50, 305–351. [Google Scholar] [CrossRef]
  3. Blanco, L.; Mugnier, L.M. Marginal blind deconvolution of adaptive optics retinal images. Opt. Express 2011, 19, 23227–23239. [Google Scholar] [CrossRef] [PubMed]
  4. Hansen, P.C.; Nagy, J.G.; O'Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering; SIAM: Philadelphia, PA, USA, 2006. [Google Scholar] [CrossRef]
  5. Chen, X.; Herring, J.L.; Nagy, J.G.; Xi, Y.; Yu, B. An ADMM-LAP method for total variation myopic deconvolution of adaptive optics retinal images. Inverse Probl. 2020, 37, 014001. [Google Scholar] [CrossRef]
  6. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  7. Ding, M.; Huang, T.; Wang, S.; Mei, J.; Zhao, X. Total variation with overlapping group sparsity for deblurring images under Cauchy noise. Appl. Math. Comput. 2019, 341, 128–147. [Google Scholar] [CrossRef]
  8. Hu, Y.; Nagy, J.G.; Zhang, J.; Andersen, M.S. Nonlinear optimization for mixed attenuation polyenergetic image reconstruction. Inverse Probl. 2019, 35, 064004. [Google Scholar] [CrossRef]
  9. Huo, L.; Chen, W.; Ge, H. Image restoration based on transformed total variation and deep image prior. Appl. Math. Model. 2024, 130, 191–207. [Google Scholar] [CrossRef]
  10. Zhao, R. Nanocrystalline SEM image restoration based on fractional-order TV and nuclear norm. Electron. Res. Arch. 2024, 32, 4954–4968. [Google Scholar] [CrossRef]
  11. Ren, Z.; He, C.; Zhang, Q. Fractional order total variation regularization for image super-resolution. Signal Process. 2013, 93, 2408–2421. [Google Scholar] [CrossRef]
  12. Zhang, X.; Cai, G.; Li, M.; Bi, S. An image-denoising framework using Lq norm-based higher order variation and fractional variation with overlapping group sparsity. Fractal Fract. 2023, 7, 573. [Google Scholar] [CrossRef]
  13. Adam, T.; Paramesran, R. Image denoising using combined higher order non-convex total variation with overlapping group sparsity. Multidimens. Syst. Signal Process. 2019, 30, 503–527. [Google Scholar] [CrossRef]
  14. Thanh, D.N.; Prasath, V.S.; Hieu, L.M.; Dvoenko, S. An adaptive method for image restoration based on high-order total variation and inverse gradient. Signal Image Video Process. 2020, 14, 1189–1197. [Google Scholar] [CrossRef]
  15. Chan, T.; Marquina, A.; Mulet, P. High-order total variation-based image restoration. SIAM J. Sci. Comput. 2000, 22, 503–516. [Google Scholar] [CrossRef]
  16. Hou, G.; Pan, Z.; Wang, G.; Yang, H.; Duan, J. An efficient nonlocal variational method with application to underwater image restoration. Neurocomputing 2019, 369, 106–121. [Google Scholar] [CrossRef]
  17. Jidesh, P.; Holla, S. Non-local total variation regularization models for image restoration. Comput. Electr. Eng. 2018, 67, 114–133. [Google Scholar] [CrossRef]
  18. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526. [Google Scholar] [CrossRef]
  19. Lv, Y. Total generalized variation denoising of speckled images using a primal-dual algorithm. J. Appl. Math. Comput. 2020, 62, 489–509. [Google Scholar] [CrossRef]
  20. Chen, P.; Selesnick, I.W. Group-sparse signal denoising: Non-convex regularization, convex optimization. IEEE Trans. Signal Process. 2014, 62, 3464–3478. [Google Scholar] [CrossRef]
  21. Selesnick, I.W.; Farshchian, M. Sparse signal approximation via nonseparable regularization. IEEE Trans. Signal Process. 2017, 65, 2561–2575. [Google Scholar] [CrossRef]
  22. Selesnick, I.W.; Chen, P. Total variation denoising with overlapping group sparsity. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 5696–5700. [Google Scholar] [CrossRef]
  23. Liu, J.; Huang, T.; Selesnick, I.W.; Lv, X.; Chen, P. Image restoration using total variation with overlapping group sparsity. Inf. Sci. 2015, 295, 232–246. [Google Scholar] [CrossRef]
  24. Yin, M.; Adam, T.; Paramesran, R.; Hassan, M.F. An l0-overlapping group sparse total variation for impulse noise image restoration. Signal Process. Image Commun. 2022, 102, 116620. [Google Scholar] [CrossRef]
  25. Li, R.; Zheng, B. Generalized nonconvex nonsmooth four-directional total variation with overlapping group sparsity for image restoration. J. Comput. Appl. Math. 2024, 451, 116045. [Google Scholar] [CrossRef]
  26. Hien, L.T.K.; Phan, D.N.; Gillis, N. Inertial alternating direction method of multipliers for non-convex non-smooth optimization. Comput. Optim. Appl. 2022, 83, 247–285. [Google Scholar] [CrossRef]
  27. Wang, Y.; Yin, W.; Zeng, J. Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comput. 2019, 78, 29–63. [Google Scholar] [CrossRef]
  28. Xiu, X.; Liu, W.; Li, L.; Kong, L. Alternating direction method of multipliers for nonconvex fused regression problems. Comput. Stat. Data Anal. 2019, 136, 59–71. [Google Scholar] [CrossRef]
  29. Zhang, B.; Zhu, G.; Zhu, Z.; Kwong, S. Alternating direction method of multipliers for nonconvex log total variation image restoration. Appl. Math. Model. 2023, 114, 338–359. [Google Scholar] [CrossRef]
  30. Li, C.; Zhao, D. A non-convex fractional-order differential equation for medical image Restoration. Symmetry 2024, 16, 258. [Google Scholar] [CrossRef]
  31. Xie, Z.; Liu, L.; Luo, Z.; Huang, J. Image denoising using nonlocal regularized deep image prior. Symmetry 2021, 13, 2114. [Google Scholar] [CrossRef]
  32. Herring, J.L.; Nagy, J.G.; Ruthotto, L. LAP: A linearize and project method for solving inverse problems with coupled variables. Sampl. Theory Signal Image Process. 2018, 17, 127–151. [Google Scholar] [CrossRef]
  33. Haber, E. Computational Methods in Geophysical Electromagnetics; SIAM: Philadelphia, PA, USA, 2014. [Google Scholar] [CrossRef]
  34. Gazzola, S.; Hansen, P.C.; Nagy, J.G. IR Tools: A MATLAB package of iterative regularization methods and large-scale test problems. Numer. Algorithms 2019, 81, 773–811. [Google Scholar] [CrossRef]
Figure 1. Results of AO retinal image restoration in Example 1. The BlurLevel of P R b l u r d e f o c u s is set to ‘medium’. (a) True image; (b) degraded image; (c) restored image by the OGSTV model; (d) restored image by the TV model; (eh) correspond to the enlarged segments of (ad).
Figure 2. RE x (left) and RE w (right) versus iteration number in Example 1.
Figure 3. Results of AO retinal image restoration in Example 2. The BlurLevel of P R b l u r d e f o c u s is set to ‘mild’. (a) True image; (b) degraded image; (c) restored image by the OGSTV model; (d) restored image by the TV model; (eh) correspond to the enlarged segments of (ad).
Figure 4. RE x (left) and RE w (right) versus iteration number in Example 2.
Figure 5. Results of AO retinal image restoration in Example 3. The BlurLevel of PRblurdefocus in the third PSF is set to ‘severe’. (a) True image; (b) degraded image; (c) restored image by the OGSTV model; (d) restored image by the TV model; (eh) correspond to the enlarged segments of (ad).
Figure 6. RE x (left) and RE w (right) versus iteration number in Example 3.
Figure 7. Comparison of PSNR between the proposed OGSTV model and the TV model across iterations. (a) Example 1, (b) Example 2, (c) Example 3.
Table 1. The numerical results for Example 1.

BlurLevel   Model   RE_x          RE_w          SNR     PSNR    SSIM   Time (s)
‘mild’      OGSTV   1.61 × 10⁻¹   6.01 × 10⁻²   8.86    25.24   0.72   33.65
‘mild’      TV      2.98 × 10⁻¹   2.21 × 10⁻¹   3.50    19.88   0.52   21.46
‘medium’    OGSTV   1.33 × 10⁻¹   4.48 × 10⁻²   10.47   26.86   0.81   35.60
‘medium’    TV      1.43 × 10⁻¹   6.81 × 10⁻²   9.87    26.25   0.78   31.29
‘severe’    OGSTV   7.52 × 10⁻²   4.38 × 10⁻²   15.46   31.84   0.92   37.53
‘severe’    TV      1.17 × 10⁻¹   4.39 × 10⁻²   11.60   27.99   0.85   50.11

Bold in the original table indicates the better numerical result.
Table 2. The numerical results for Example 2.

BlurLevel   Model   RE_x          RE_w          SNR     PSNR    SSIM   Time (s)
‘mild’      OGSTV   1.58 × 10⁻¹   3.12 × 10⁻²   8.11    24.86   0.68   16.12
‘mild’      TV      3.70 × 10⁻¹   2.01 × 10⁻¹   0.74    17.48   0.41   21.34
‘medium’    OGSTV   1.10 × 10⁻¹   1.58 × 10⁻²   11.30   28.05   0.80   17.11
‘medium’    TV      1.63 × 10⁻¹   3.70 × 10⁻¹   7.85    24.59   0.72   49.06
‘severe’    OGSTV   1.13 × 10⁻¹   4.06 × 10⁻¹   11.05   27.79   0.89   45.83
‘severe’    TV      1.70 × 10⁻¹   8.23 × 10⁻¹   7.48    24.22   0.77   30.19

Bold in the original table indicates the better numerical result.
Table 3. The numerical results for Example 3.

BlurLevel   Model   RE_x          RE_w          SNR     PSNR    SSIM   Time (s)
‘mild’      OGSTV   1.30 × 10⁻¹   8.64 × 10⁻²   10.70   27.09   0.78   111.65
‘mild’      TV      1.32 × 10⁻¹   1.81 × 10⁻¹   10.54   26.93   0.77   93.94
‘medium’    OGSTV   1.38 × 10⁻¹   1.97 × 10⁻¹   10.20   26.59   0.76   548.61
‘medium’    TV      3.17 × 10⁻¹   7.04 × 10⁻¹   2.97    19.35   0.52   381.43
‘severe’    OGSTV   1.51 × 10⁻¹   2.28 × 10⁻¹   9.43    25.81   0.73   378.33
‘severe’    TV      3.10 × 10⁻¹   3.28 × 10⁻¹   3.14    19.52   0.52   225.78

Bold in the original table indicates the better numerical result.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
