
Enhanced BiCGSTAB with Restrictive Preconditioning for Nonlinear Systems: A Mean Curvature Image Deblurring Approach

Rizwan Khalid, Shahbaz Ahmad, Iftikhar Ali and Manuel De la Sen

1
ASSMS, Government College University, Lahore 54000, Pakistan
2
Department of Mathematics, University of Hafr Al Batin, Hafar Al Batin 39524, Saudi Arabia
3
Automatic Control Group—ACG, Institute of Research and Development of Processes, Department of Electricity and Electronics, Faculty of Science and Technology, University of the Basque Country—UPV/EHU, 48940 Leioa, Spain
*
Author to whom correspondence should be addressed.
Math. Comput. Appl. 2025, 30(4), 76; https://doi.org/10.3390/mca30040076
Submission received: 29 May 2025 / Revised: 10 July 2025 / Accepted: 14 July 2025 / Published: 17 July 2025

Abstract

We present an advanced restrictively preconditioned biconjugate gradient-stabilized (RPBiCGSTAB) algorithm specifically designed to improve the convergence speed of Krylov subspace methods for nonlinear systems characterized by a structured 5-by-5 block configuration. This configuration frequently arises from cell-centered finite difference discretizations employed in solving image deblurring problems governed by mean curvature dynamics. The RPBiCGSTAB method is crafted to exploit this block structure, thereby optimizing both computational efficiency and convergence behavior in complex image processing tasks. Analyzing the spectral characteristics of the preconditioned matrices reveals a favorable distribution of eigenvalues, with a pronounced clustering around 1, which plays a critical role in accelerating the convergence of the RPBiCGSTAB algorithm and contributes to enhanced stability. Furthermore, our numerical experiments validate the computational efficiency and practical applicability of the method for the nonlinear systems commonly encountered in image deblurring. Through numerical simulations that focus on mean curvature-driven image deblurring, we highlight the superior performance of the RPBiCGSTAB method in comparison to other techniques in this specialized field.

1. Introduction

In recent years, nonlinear variational techniques have gained considerable attention in the field of image processing [1,2]. Applying these methods to noisy, large-scale datasets presents distinct challenges, especially when addressing blurred images. These difficulties stem from the complexity of handling nonlinearity and solving the large-scale systems that arise through linearization. This paper seeks to tackle these computational challenges. A widely adopted nonlinear variational approach for tackling image deblurring is the mean curvature (MC)-based regularization model. This technique has garnered substantial interest and has been extensively studied within the literature due to its capability to effectively preserve critical structural features while mitigating noise in the deblurring process [3,4,5,6,7]. The method’s efficacy in balancing detail retention with noise reduction has established it as a valuable tool in image restoration applications.
Regularization models based on mean curvature (MC) techniques have been widely utilized in various image processing tasks, achieving significant success in preserving edge details and mitigating staircase artifacts in digital image restoration. However, discretizing the Euler–Lagrange equations associated with these models leads to a complex, nonlinear, and ill-conditioned problem that can hinder the efficiency of numerical methods. The Euler–Lagrange equations associated with the image deblurring model arise from minimizing the MC-based functional, which incorporates both the data fidelity term and the regularization term. The regularization term uses the mean curvature κ ( u ) , with the parameter α > 0 serving as a regularization parameter to control the trade-off between the fidelity to the blurred image z and the smoothness of the solution u. The derivation and detailed explanation of these Euler–Lagrange equations, including the regularization mechanism and the underlying variational formulation, have been thoroughly examined in [3], which provides a comprehensive introduction to variational image processing models and their applications.
Furthermore, the Jacobian matrix derived from the nonlinear system associated with mean curvature (MC) methods exhibits a distinct block-banded structure characterized by a specific bandwidth [8,9,10,11,12]. The Jacobian matrix in the context of image deblurring problems, particularly for mean curvature (MC)-based methods, represents the matrix of partial derivatives of the system’s equations. It characterizes the relationship between the image variables and the residuals resulting from the nonlinear system of equations that model the deblurring process. This structured form plays an essential role in computational efficiency, facilitating faster matrix operations within the deblurring algorithm. While mean curvature-based regularization methods offer significant advantages, the inherent nonlinearity and ill-conditioning of the resulting system highlight the importance of a robust and efficient numerical solution. Tackling these complexities is essential for obtaining precise and stable outcomes in advanced image deblurring applications. In what follows, we outline the problem in a 5 × 5 block structure derived from the discretization of the image deblurring model using MC-based techniques [8,9,12]. This block format aids in systematically addressing the computational challenges presented by MC regularization models.
$$\begin{pmatrix}
\mathcal{K}_h^{*}\mathcal{K}_h & -\alpha A_h & 0 & \alpha B_h^{*} & -\alpha B_h^{*}\\
0 & I_h & -B_h^{*} & 0 & 0\\
-B_h & 0 & D_h & 0 & 0\\
0 & -B_h & 0 & D_h & 0\\
0 & 0 & -C_h & 0 & D_h
\end{pmatrix}
\begin{pmatrix} u_1\\ u_2\\ u_3\\ u_4\\ u_5 \end{pmatrix}
=
\begin{pmatrix} \mathcal{K}_h^{*} z\\ 0\\ 0\\ 0\\ 0 \end{pmatrix}.$$
Solving these systems efficiently poses a substantial challenge for numerical approaches, even with advanced Krylov subspace algorithms such as the Generalized Minimal Residual (GMRES) method. These difficulties highlight the importance of optimizing both the convergence rate and the computational cost. The complexity arises from the inherent nonlinearity and poor conditioning of the systems involved, and in this setting such methods often exhibit slow convergence rates. These limitations can hinder their effectiveness in practical applications and motivate the exploration of alternative strategies. One effective strategy is the use of preconditioning techniques [13,14,15,16], which can significantly enhance the convergence properties of the underlying algorithms. In practice, the preconditioner is often applied only approximately within inner iterations, which balances computational efficiency against solution accuracy; such schemes are frequently employed alongside Flexible GMRES (FGMRES) [17], a combination that enhances the flexibility and efficiency of the solution process. These iterative techniques typically incorporate at least one parameter to improve their implementation efficiency, which helps optimize the convergence process and reduce computational costs. Nevertheless, parameters that are optimal or nearly optimal for these methods are not necessarily the best choices when the preconditioning matrices are used within a Krylov subspace context; these insights serve as the basis for the findings discussed in [18]. In this work, we introduce a novel block restrictive preconditioner specifically designed for the 5 × 5 block structure arising in mean curvature (MC)-based image deblurring tasks. In addition, we introduce a restrictively preconditioned biconjugate gradient-stabilized (RPBiCGSTAB) algorithm, a robust approach for solving MC-based image deblurring problems. The RPBiCGSTAB method is a recent advancement over the BiCGSTAB method [19,20,21,22,23,24] for solving large, sparse, and nonsymmetric linear systems. Notably, the proposed preconditioner functions effectively without requiring any additional parameters, offering a streamlined way to enhance numerical stability and efficiency in such systems. We begin by examining a distinctive partitioning of the coefficient matrix of the system, which ensures that the associated iterative method converges robustly to the solution of the equation. Remarkably, the method operates without requiring any predefined convergence criteria, which simplifies implementation and enhances computational efficiency.
This strategy highlights the effectiveness of our method in practical applications. Importantly, the proposed preconditioned matrix demonstrates a unique eigenvalue clustering property, with eigenvalues primarily concentrated near one. The observed distribution indicates a faster convergence rate for the preconditioned Krylov subspace methods. This paper offers several key contributions:
  • We introduce an innovative restrictive block preconditioner based on partitioning the five-by-five block structure of the MC-based image deblurring problem.
  • We propose a robust RPBiCGSTAB algorithm for solving the MC-based image deblurring problem, which converges unconditionally.
  • Spectral analysis indicates that the preconditioned matrix displays a favorable distribution of eigenvalues, thereby facilitating the rapid convergence of the proposed RPBiCGSTAB technique.
  • We incorporate experimental data, subsequently comparing the results with those obtained from established methodologies.
The subsequent sections are organized as follows: In Section 2, we analyze the problem statement, exploring the relationship between the original image u and its blurred representation z. This examination sets the foundation for understanding how image degradation affects the data and informs the subsequent reconstruction approach, and we outline the underlying principles that connect the image degradation process to the observable outputs. Section 3 provides a detailed account of the discretization process and outlines the configuration of the 5 × 5 block system as it applies to the MC-based image deblurring framework; this section further elucidates the structure of the block system, highlighting its role in accurately modeling the deblurring procedure. In Section 4, we describe the numerical methods, with a specific emphasis on the proposed RPBiCGSTAB technique. Section 5 is dedicated to the examination of the eigenvalues of the preconditioned matrices. In Section 6, we compare the performance of RPBiCGSTAB with $P_S$-GMRES, as discussed in [25], and we perform numerical experiments to evaluate the efficacy of the introduced preconditioning strategies. Finally, Section 7 provides a summary of our findings and conclusions.

2. Problem Description

Our study addresses the challenge of image deblurring. We start with a concise overview of the problem. Mathematically, the following equation characterizes the relationship between the original image u and the observed blurred image z:
$$z = \mathcal{K}u + \epsilon. \tag{1}$$
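To make the forward model concrete, the following minimal NumPy/SciPy sketch simulates Equation (1) with a translation-invariant Gaussian blur and additive Gaussian noise. The random test image, the blur width `sigma`, and the noise level are illustrative assumptions only, not the settings used in the experiments of Section 6.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

u = rng.random((128, 128))                      # stand-in for the true image u
Ku = gaussian_filter(u, sigma=2.0)              # K u: translation-invariant blur k(x - y)
epsilon = 0.01 * rng.standard_normal(u.shape)   # additive Gaussian noise
z = Ku + epsilon                                # observed blurred, noisy image z
```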
In this context, the noise inherent in the image is represented by ϵ. This noise can take on various forms, including Gaussian, Brownian, and salt-and-pepper noise [3,4,5]. Gaussian noise is a statistical noise characterized by a bell-shaped probability density function, modeled as a normal distribution with mean zero and a given standard deviation, and it often appears in images due to sensor limitations or transmission errors. Brownian noise, also known as random walk noise, is caused by random fluctuations in the intensity values, leading to a more erratic, continuous variation across the image. Salt-and-pepper noise, on the other hand, manifests as random occurrences of black and white pixels, resembling grains of salt and pepper; it typically results from sensor malfunction or transmission errors. Specifically, our research focuses on Gaussian noise. We define the blurring operator $\mathcal{K}$ using a Fredholm integral of the first kind, which is expressed in the following manner:
$$(\mathcal{K}u)(\mathbf{x}) = \int_{\Omega} k(\mathbf{x},\mathbf{y})\, u(\mathbf{y})\, d\mathbf{y}, \qquad \text{for } \mathbf{x} \in \Omega.$$
The operator is characterized by translation invariance, as expressed by $k(\mathbf{x},\mathbf{y}) = k(\mathbf{x}-\mathbf{y})$. The difficulty associated with Equation (1) originates from its ill-posed nature, which is attributed to the compactness of $\mathcal{K}$. We consider $\Omega$ to be a square in the two-dimensional space $\mathbb{R}^2$, where $u$ within $\Omega$ denotes a function that characterizes the image intensity. The coordinates within $\Omega$ are denoted by $\mathbf{x} = (x, y)$, with $\|\mathbf{x}\| = \sqrt{x^2 + y^2}$ representing the Euclidean norm, while $\|\cdot\|$ signifies the $L^2(\Omega)$ norm. Equation (1) defines an inverse problem, and its fundamental instability hinders the recovery of $u$ from $z$. An effective way to improve stability is to apply the mean curvature (MC) regularization functional [3,4,5,6,7], which is defined as follows:
$$\int_{\Omega} \kappa(u)^{2}\, d\mathbf{x} = \int_{\Omega} \left( \nabla\cdot \frac{\nabla u}{|\nabla u|} \right)^{2} d\mathbf{x}. \tag{2}$$
As a result, Equation (1) is restructured to highlight the goal of determining the function u that minimizes the given functional.
$$T(u) = \frac{1}{2}\,\|\mathcal{K}u - z\|^{2} + \frac{\alpha}{2} \int_{\Omega} \kappa(u)^{2}\, d\mathbf{x}. \tag{3}$$
In this context, the parameter α > 0 functions as a regularization parameter. The defined properties of problem (3), under particular conditions and notably in the area of synthetic image denoising, have been examined in [6]. The system of Euler–Lagrange equations corresponding to problem (3) is formulated as follows:

Derivation of the Euler–Lagrange Equation for the Curvature Model

Among the many proposed alternatives, the mean curvature model of [26,27] is one effective method, efficiently solved by Brito and Chen [27], where the mean curvature is $\kappa(u) = \nabla\cdot\big(\nabla u/|\nabla u|\big)$. Below, we give a brief derivation of its Euler–Lagrange (EL) equation [3] for the MC-based image deblurring model.
Take any function v = v ( x , y ) and define
$$A = (u_x v_x + u_y v_y)\, u_x, \qquad B = (u_x v_x + u_y v_y)\, u_y.$$
Then, we see that
$$\lim_{\epsilon \to 0} \frac{F(u+\epsilon v) - F(u)}{\epsilon} = \alpha \lim_{\epsilon \to 0}\int_{\Omega} \frac{\big(\kappa(u+\epsilon v) - \kappa(u)\big)\big(\kappa(u+\epsilon v) + \kappa(u)\big) + 2\epsilon\, v\,(u-z)/\alpha}{2\epsilon}\, d\Omega$$
$$= \alpha \int_{\Omega} \kappa(u) \left[ \nabla\cdot\frac{\nabla v}{|\nabla u|} - \nabla\cdot\frac{(A,B)}{|\nabla u|^{3}} \right] d\Omega + \int_{\Omega} v\,(u-z)\, d\Omega = 0. \tag{4}$$
Below, we apply Green’s first identity for scalar v = v ( x , y ) and vector ω = ( ω 1 , ω 2 ) . Applying Green’s first identity to term 1 of (4), we obtain
$$\int_{\Omega} \kappa(u)\, \nabla\cdot\frac{\nabla v}{|\nabla u|}\, d\Omega = -\int_{\Omega} \frac{\nabla \kappa(u)}{|\nabla u|}\cdot\nabla v\, d\Omega + \int_{\partial\Omega} \frac{\kappa(u)}{|\nabla u|}\, \nabla v\cdot\mathbf{n}\, d\Gamma.$$
Next, applying Green’s first identity to term 2 of (4), we obtain
$$\int_{\Omega} \kappa(u)\, \nabla\cdot\frac{(A,B)}{|\nabla u|^{3}}\, d\Omega = \int_{\Omega} \frac{\nabla\kappa(u)}{|\nabla u|}\cdot\nabla v\, d\Omega - \int_{\Omega} \nabla\kappa(u)\cdot\nabla v\, d\Omega.$$
In isolating $\nabla v$, we obtain the simplification
$$\nabla\kappa(u)\cdot(A,B) = \frac{1}{|\nabla u|}\big(u_x\, \partial_x \kappa(u) + u_y\, \partial_y \kappa(u)\big).$$
Here, we have used the equality
$$\nabla u \cdot \nabla\kappa(u) = u_x\, \partial_x \kappa(u) + u_y\, \partial_y \kappa(u).$$
A further step of using Green’s first identity leads to
$$\int_{\Omega} \nabla\kappa(u)\cdot\frac{(A,B)}{|\nabla u|^{3}}\, d\Omega = -\int_{\Omega} \frac{\nabla\kappa(u)}{|\nabla u|}\cdot\nabla v\, d\Omega + \int_{\partial\Omega} \frac{\kappa(u)}{|\nabla u|}\, \nabla v\cdot\mathbf{n}\, d\Gamma.$$
Finally, collecting both terms together, we obtain the Euler–Lagrange (EL) equation as
$$\int_{\Omega}\left[\alpha\, \nabla\cdot\!\left(\frac{\nabla\kappa(u)}{|\nabla u|} - \frac{\nabla\kappa(u)\cdot\nabla u}{|\nabla u|^{3}}\, \nabla u\right) + \mathcal{K}^{*}(\mathcal{K}u - z)\right] v\, d\Omega - \int_{\partial\Omega} \frac{\alpha\,\kappa(u)}{|\nabla u|}\, \nabla v\cdot\mathbf{n}\, d\Gamma = 0.$$
That is, with the two boundary conditions $\kappa = 0$ and $\nabla u \cdot \mathbf{n} = 0$, we have
$$\alpha\left[\nabla\cdot\frac{\nabla\kappa}{|\nabla u|} - \nabla\cdot\!\left(\frac{\nabla\kappa\cdot\nabla u}{|\nabla u|^{3}}\, \nabla u\right)\right] + \mathcal{K}^{*}(\mathcal{K}u - z) = 0,$$
or, the same result in its divergence form,
$$\alpha\, \nabla\cdot\!\left(\frac{\nabla\kappa}{|\nabla u|} - \frac{\nabla\kappa\cdot\nabla u}{|\nabla u|^{3}}\, \nabla u\right) + \mathcal{K}^{*}(\mathcal{K}u - z) = 0.$$
In this expression, the adjoint operator of $\mathcal{K}$ is denoted by $\mathcal{K}^{*}$. By using a positive constant $\beta > 0$ to guarantee differentiability at zero, the EL equations for the MC-based image deblurring problem take the following form:
$$\mathcal{K}^{*}(\mathcal{K}u - z) + \alpha\, \nabla\cdot\!\left(\frac{\nabla\kappa}{\sqrt{|\nabla u|^{2}+\beta^{2}}} - \frac{\nabla\kappa\cdot\nabla u}{\big(\sqrt{|\nabla u|^{2}+\beta^{2}}\,\big)^{3}}\, \nabla u\right) = 0 \quad \text{in } \Omega, \tag{5}$$
$$\frac{\partial u}{\partial n} = 0 \quad \text{on } \partial\Omega, \tag{6}$$
$$\kappa(u) = 0 \quad \text{on } \partial\Omega. \tag{7}$$
The equation given by (5) represents a nonlinear fourth-order differential equation. Furthermore, this equation can also be reformulated into a first-order nonlinear system. By substituting
$$v = \frac{\nabla u}{\sqrt{|\nabla u|^{2}+\beta^{2}}}, \qquad w = \nabla\cdot v, \qquad p = \frac{\nabla w}{\sqrt{|\nabla u|^{2}+\beta^{2}}},$$
and
$$t = \frac{(\nabla w \cdot v)\, v}{\sqrt{|\nabla u|^{2}+\beta^{2}}},$$
we have the following system:
$$\mathcal{K}^{*}\mathcal{K}u + \alpha\, \nabla\cdot p - \alpha\, \nabla\cdot t = \mathcal{K}^{*}z, \tag{8}$$
$$w - \nabla\cdot v = 0, \tag{9}$$
$$\sqrt{|\nabla u|^{2}+\beta^{2}}\; v - \nabla u = 0, \tag{10}$$
$$\sqrt{|\nabla u|^{2}+\beta^{2}}\; p - \nabla w = 0, \tag{11}$$
$$\sqrt{|\nabla u|^{2}+\beta^{2}}\; t - (\nabla w \cdot v)\, v = 0. \tag{12}$$
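To see the substitutions in action, the rough NumPy sketch below evaluates the auxiliary fields v, w, p, and t from a given image u with simple centered differences (`np.gradient`). It is illustrative only and does not reproduce the cell-centered finite-difference discretization introduced in Section 3; the grid spacing `h` and the value of `beta` are placeholder choices.

```python
import numpy as np

def curvature_fields(u, beta=0.1, h=1.0):
    """Rough evaluation of the auxiliary fields v, w, p, t of (8)-(12)
    using centered differences; not the CCFD scheme of Section 3."""
    g0, g1 = np.gradient(u, h)                           # components of grad(u)
    mag = np.sqrt(g0**2 + g1**2 + beta**2)               # sqrt(|grad u|^2 + beta^2)
    v0, v1 = g0 / mag, g1 / mag                          # v = grad(u) / mag
    w = np.gradient(v0, h)[0] + np.gradient(v1, h)[1]    # w = div(v)
    w0, w1 = np.gradient(w, h)                           # grad(w)
    p0, p1 = w0 / mag, w1 / mag                          # p = grad(w) / mag
    dot = w0 * v0 + w1 * v1                              # grad(w) . v
    t0, t1 = dot * v0 / mag, dot * v1 / mag              # t = (grad(w).v) v / mag
    return (v0, v1), w, (p0, p1), (t0, t1)
```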
The system of equations presented in (8) to (12) is a reformulation of the Euler–Lagrange Equation (5), which describes a fourth-order nonlinear differential equation for image processing, with the goal of edge preservation and noise reduction. The following provides a discretization of the Euler–Lagrange equation.

3. Cell Discretization

In the framework of the image deblurring problem based on mean curvature (MC) regularization, the region $\Omega = (0,1) \times (0,1)$ is divided into cells by the partitions $\delta_x$ and $\delta_y$, as outlined in [28]. The partitioning is defined as follows:
$$\delta_x:\; 0 = x_{1/2} < x_{3/2} < x_{5/2} < \cdots < x_{n_x-1/2} < x_{n_x+1/2} = 1, \qquad \delta_y:\; 0 = y_{1/2} < y_{3/2} < y_{5/2} < \cdots < y_{n_x-1/2} < y_{n_x+1/2} = 1.$$
In this context, n x denotes the number of evenly spaced subdivisions along the x or y axis, while ( x i , y j ) signifies the centers of the respective cells. The definitions for the coordinates x i and y j are provided as follows:
$$x_i = \Big(i - \tfrac{1}{2}\Big)h \quad \text{for } i = 1, 2, 3, \ldots, n_x, \qquad y_j = \Big(j - \tfrac{1}{2}\Big)h \quad \text{for } j = 1, 2, 3, \ldots, n_x,$$
where $h = 1/n_x$. The midpoints of the edges of the cells, denoted as $(x_{i\pm 1/2}, y_j)$ and $(x_i, y_{j\pm 1/2})$, are described as follows:
$$x_{i\pm 1/2} = x_i \pm \tfrac{h}{2} \quad \text{for } i = 1, 2, 3, \ldots, n_x, \qquad y_{j\pm 1/2} = y_j \pm \tfrac{h}{2} \quad \text{for } j = 1, 2, 3, \ldots, n_x.$$
These specified regions are established for every i = 1 , 2 , , n x and j = 1 , 2 , , n y .
$$\Omega_{i,j} = (x_{i-1/2},\, x_{i+1/2}) \times (y_{j-1/2},\, y_{j+1/2}),$$
$$\Omega_{i+1/2,j} = (x_{i},\, x_{i+1}) \times (y_{j-1/2},\, y_{j+1/2}),$$
$$\Omega_{i,j+1/2} = (x_{i-1/2},\, x_{i+1/2}) \times (y_{j},\, y_{j+1}).$$
The function θ ( x , y ) is expressed as θ k , l , with k and l corresponding to the indices i, i + 1 2 , j, and j + 1 2 , respectively, where i , j are integers greater than or equal to zero. We utilize the notation θ ( x l , y m ) to refer to these values. To indicate the values of discrete functions at the relevant discrete points, we have
$$[d_x\theta]_{i+1/2,j} = \frac{\theta_{i+1,j} - \theta_{i,j}}{h}, \quad [D_x\theta]_{i,j} = \frac{\theta_{i+1/2,j} - \theta_{i-1/2,j}}{h}, \quad [d_y\theta]_{i,j+1/2} = \frac{\theta_{i,j+1} - \theta_{i,j}}{h}, \quad [D_y\theta]_{i,j} = \frac{\theta_{i,j+1/2} - \theta_{i,j-1/2}}{h}.$$
Using the midpoint quadrature approximation, the expression is reformulated as
$$(\mathcal{K}u)(x_i, y_j) \approx [\mathcal{K}_h U]_{(ij)}.$$
Considering the lexicographical arrangement of the unknowns, we have
$$u_1 = \big[\bar{U}_{11}\; \cdots\; \bar{U}_{n_x n_x}\big]^{t}, \qquad u_2 = \big[\bar{W}_{11}\; \cdots\; \bar{W}_{n_x n_x}\big]^{t},$$
$$u_3 = \big[\bar{V}^{x}_{11}\; \cdots\; \bar{V}^{x}_{n_x-1\, n_x-1}\;\; \bar{V}^{y}_{11}\; \cdots\; \bar{V}^{y}_{n_x-1\, n_x-1}\big]^{t},$$
$$u_4 = \big[\bar{P}^{x}_{11}\; \cdots\; \bar{P}^{x}_{n_x-1\, n_x-1}\;\; \bar{P}^{y}_{11}\; \cdots\; \bar{P}^{y}_{n_x-1\, n_x-1}\big]^{t},$$
and
$$u_5 = \big[\bar{T}^{x}_{11}\; \cdots\; \bar{T}^{x}_{n_x-1\, n_x-1}\;\; \bar{T}^{y}_{11}\; \cdots\; \bar{T}^{y}_{n_x-1\, n_x-1}\big]^{t}.$$
By employing the CCFD method on Equations (8)–(12), we derive the subsequent system:
$$\mathcal{K}_h^{*}\mathcal{K}_h u_1 - \alpha A_h u_2 + \alpha B_h^{*} u_4 - \alpha B_h^{*} u_5 = \mathcal{K}_h^{*} z,$$
$$I_h u_2 - B_h^{*} u_3 = 0,$$
$$-B_h u_1 + D_h u_3 = 0,$$
$$-B_h u_2 + D_h u_4 = 0,$$
$$-C_h u_3 + D_h u_5 = 0.$$
The integral term is evaluated using the midpoint quadrature rule. The matrices $\mathcal{K}_h$, $A_h$, and $I_h$ are each of size $n_x^2 \times n_x^2$, while the matrix $B_h$ has dimensions $2n_x(n_x-1) \times n_x^2$. Furthermore, both matrices $C_h$ and $D_h$ are of size $2n_x(n_x-1) \times 2n_x(n_x-1)$. Consequently, the system can be expressed as follows:
$$\begin{pmatrix}
\mathcal{K}_h^{*}\mathcal{K}_h & -\alpha A_h & 0 & \alpha B_h^{*} & -\alpha B_h^{*}\\
0 & I_h & -B_h^{*} & 0 & 0\\
-B_h & 0 & D_h & 0 & 0\\
0 & -B_h & 0 & D_h & 0\\
0 & 0 & -C_h & 0 & D_h
\end{pmatrix}
\begin{pmatrix} u_1\\ u_2\\ u_3\\ u_4\\ u_5 \end{pmatrix}
=
\begin{pmatrix} \mathcal{K}_h^{*} z\\ 0\\ 0\\ 0\\ 0 \end{pmatrix}. \tag{19}$$
The matrix K h exhibits a Block Toeplitz structure with Toeplitz blocks (BTTB).
$$\mathcal{K}_h = h \begin{pmatrix}
k(0) & k(h) & \cdots & k((n-1)h)\\
k(h) & k(0) & \cdots & k((n-2)h)\\
\vdots & \vdots & \ddots & \vdots\\
k((n-1)h) & k((n-2)h) & \cdots & k(0)
\end{pmatrix}.$$
Additionally, the product K h * K h is confirmed to be symmetric positive-definite (SPD) [29].
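The practical payoff of this structure is that matrix–vector products with $\mathcal{K}_h$ never require forming the dense matrix: a Toeplitz (or BTTB) matrix can be embedded in a circulant one and applied with the FFT. The sketch below shows the one-dimensional analogue; the kernel samples and sizes are illustrative, and the two-dimensional BTTB case applies the same idea with 2-D FFTs.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n-by-n Toeplitz matrix (first column c, first row r) by x
    in O(n log n) via circulant embedding and the FFT."""
    n = len(x)
    # First column of the 2n-by-2n circulant matrix that embeds the Toeplitz one.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Small check against an explicitly assembled dense Toeplitz matrix.
n = 6
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1, 0.05])   # symmetric kernel samples (placeholders)
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
x = np.arange(1.0, n + 1.0)
print(np.allclose(T @ x, toeplitz_matvec(c, c, x)))   # True
```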
In contrast, the matrix A h is described as a diagonal matrix, with each diagonal element defined as follows:
$$A_h = 2\beta h\, (A_1 + A_2),$$
Both matrices A 1 and A 2 have the same dimensions, specifically n x 2 × n x 2 , and they are defined as follows:
$$A_1 = \widetilde{I} \otimes E \quad \text{and} \quad A_2 = E \otimes \widetilde{I},$$
In this notation, ⊗ signifies the tensor product. The matrix I ˜ has dimensions n x × n x and functions as the identity matrix. The matrix E is defined as follows:
$$E = \begin{pmatrix} 1 & & 0\\ & \ddots & \\ 0 & & 1 \end{pmatrix},$$
has dimensions $n_x \times n_x$. The matrix $B_h$ is defined as follows:
$$B_h = \frac{1}{h}\begin{pmatrix} B_1 \\ B_2 \end{pmatrix},$$
where B 1 and B 2 share identical dimensions, specifically n x ( n x 1 ) × n x 2 .
$$B_1 = \widetilde{E} \otimes \widetilde{I} \quad \text{and} \quad B_2 = \widetilde{I} \otimes \widetilde{E}.$$
$$\widetilde{E} = \begin{pmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{pmatrix},$$
is an $(n_x - 1) \times n_x$ matrix. The matrix $C_h$ is defined as
$$C_h = \begin{pmatrix} C_x & 0\\ 0 & C_y \end{pmatrix},$$
The matrix $C_h$ comprises only diagonal entries, with its elements derived from the discretization of the expression $(\nabla w \cdot v)$. The matrix $C_x$ has dimensions $(n_x - 1) \times n_x$, while the matrix $C_y$ has dimensions $n_x \times (n_x - 1)$. Similarly, the matrix $D_h$ is also diagonal, with positive entries obtained from the discretization of the term $\sqrt{|\nabla u|^{2} + \beta^{2}}$. The configuration of the matrix $D_h$ is as follows:
$$D_h = \begin{pmatrix} D_x & 0\\ 0 & D_y \end{pmatrix}.$$
In this context, the matrices $D_x$ and $D_y$ have dimensions $(n_x - 1) \times n_x$ and $n_x \times (n_x - 1)$, respectively. It is important to note that the unknown values situated at the horizontal and vertical boundaries of each cell $e_{ij}$ are not readily available. To overcome this challenge, averaging operators are utilized to approximate these boundary values.
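The Kronecker-product structure above makes these operators straightforward to assemble. The SciPy sketch below builds a one-dimensional difference matrix and the corresponding $B_1 = \widetilde{E} \otimes \widetilde{I}$ and $B_2 = \widetilde{I} \otimes \widetilde{E}$ as sparse matrices; the grid size and the sign/scaling convention of the difference matrix are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np
from scipy.sparse import identity, diags, kron

nx = 8
h = 1.0 / nx

# Cell centers x_i = (i - 1/2) h of the cell-centered grid.
x = (np.arange(1, nx + 1) - 0.5) * h

# One-dimensional forward-difference matrix of size (nx-1) x nx
# (a standard choice for the edge-to-center differences of Section 3).
E_tilde = diags([-np.ones(nx - 1), np.ones(nx - 1)], offsets=[0, 1], shape=(nx - 1, nx))

I_tilde = identity(nx)
B1 = kron(E_tilde, I_tilde)      # differences along one axis, identity along the other
B2 = kron(I_tilde, E_tilde)
print(B1.shape, B2.shape)        # (nx*(nx-1), nx**2) each
```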

4. RPBiCGSTAB Method

In this section, we introduce the RPBiCGSTAB method along with its corresponding restrictive preconditioner P . For clarity, the original system (19) leads to
$$Ax = \begin{pmatrix} J & M & N\\ Q & W & 0\\ 0 & V & Y \end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} = \begin{pmatrix} \mathcal{K}^{*}z\\ 0\\ 0 \end{pmatrix} = b, \tag{21}$$
where
$$x_1 = \begin{pmatrix} u_1\\ u_2 \end{pmatrix}, \qquad x_2 = \begin{pmatrix} u_3\\ u_4 \end{pmatrix}, \qquad x_3 = u_5, \qquad \mathcal{K}^{*}z = \begin{pmatrix} \mathcal{K}_h^{*}z\\ 0 \end{pmatrix},$$
$$J = \begin{pmatrix} \mathcal{K}_h^{*}\mathcal{K}_h & -\alpha A_h\\ 0 & I_h \end{pmatrix}, \qquad M = \begin{pmatrix} 0 & \alpha B_h^{*}\\ -B_h^{*} & 0 \end{pmatrix}, \qquad N = \begin{pmatrix} -\alpha B_h^{*}\\ 0 \end{pmatrix},$$
and
$$Q = \begin{pmatrix} -B_h & 0\\ 0 & -B_h \end{pmatrix}, \qquad W = \begin{pmatrix} D_h & 0\\ 0 & D_h \end{pmatrix}, \qquad V = \begin{pmatrix} -C_h & 0 \end{pmatrix}, \qquad Y = D_h.$$
We begin by introducing a decomposition of the coefficient matrix A for the system (21) as
$$A = MHL$$
$$= \begin{pmatrix} I & 0 & 0\\ QJ^{-1} & I & 0\\ 0 & V\big(W - QJ^{-1}M\big)^{-1} & I \end{pmatrix}
\begin{pmatrix} J & 0 & 0\\ 0 & W - QJ^{-1}M & 0\\ 0 & 0 & Y + V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}$$
$$\times\begin{pmatrix} I & J^{-1}M & J^{-1}N\\ 0 & I & -\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & I \end{pmatrix}.$$
If H ˜ is an appropriate approximation for the symmetric positive definite matrix H, it becomes possible to derive a preconditioner P = M H ˜ L for the image deblurring problem. Consequently, we introduce the restrictive preconditioner P as
$$P = \begin{pmatrix} I & 0 & 0\\ QJ^{-1} & I & 0\\ 0 & V\big(W - QJ^{-1}M\big)^{-1} & I \end{pmatrix}
\begin{pmatrix} J & 0 & 0\\ 0 & W - QJ^{-1}M & 0\\ 0 & 0 & Y \end{pmatrix}
\begin{pmatrix} I & J^{-1}M & J^{-1}N\\ 0 & I & -\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & I \end{pmatrix}.$$
Starting with H ˜ = F T F , we apply the biconjugate gradient-stabilized (BiCGSTAB) method to tackle the following equivalent system related to the image deblurring system (21):
$$R\hat{x} = \hat{b},$$
with
$$R = (MF^{T})^{-1} A\, (FL)^{-1} = F^{-T} H F^{-1}, \qquad \hat{x} = FLx, \qquad \hat{b} = (MF^{T})^{-1} b.$$
Let $Pz = r$ and $Kv = z$, where $K = L^{-1}M^{T}$. By direct calculation, we derive the RPBiCGSTAB method for solving the linear system (21) in Algorithm 1.
Algorithm 1: RPBiCGSTAB method
1. Initialize: Choose $x^{(0)}$, compute $r_0 = b - Ax^{(0)}$, and set $\hat{r} = r_0$, $\rho_{\mathrm{old}} = 1$, $\alpha = 1$, $\omega = 1$, $v = 0$, $p = 0$.
2. For $k = 1, 2, \ldots, \mathrm{max\_iter}$ do
3.   Compute $\rho_{\mathrm{new}} = \hat{r}^{T} r_k$.
4.   If $|\rho_{\mathrm{new}}| < \epsilon$, break.
5.   Compute $\beta = \dfrac{\rho_{\mathrm{new}}}{\rho_{\mathrm{old}}}\cdot\dfrac{\alpha}{\omega}$.
6.   Solve $P z_k = r_k + \beta\,(p_k - \omega v)$ to get $p_k$.
7.   Compute $v_k = A p_k$.
8.   Compute $\alpha_k = \dfrac{\rho_{\mathrm{new}}}{\hat{r}^{T} v_k}$.
9.   Compute $s_k = r_k - \alpha_k v_k$.
10.  Solve $P z_{k+1} = s_k$ to get $z_{k+1}$.
11.  Compute $t_{k+1} = A z_{k+1}$.
12.  Compute $\omega_k = \dfrac{t_{k+1}^{T} s_k}{t_{k+1}^{T} t_{k+1}}$.
13.  Update $x^{(k+1)} = x^{(k)} + \alpha_k p_k + \omega_k z_{k+1}$.
14.  Update $r_{k+1} = s_k - \omega_k t_{k+1}$.
15.  Compute the residual $\|r_{k+1}\|$.
16.  If $\|r_{k+1}\| < \mathrm{tol}$, break.
17.  Set $\rho_{\mathrm{old}} = \rho_{\mathrm{new}}$.
18. end do
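For orientation, the following NumPy sketch implements the generic preconditioned BiCGSTAB iteration that Algorithm 1 specializes; it is not the paper's exact restrictive variant. The callables `A` and `solve_P` are assumed to be supplied by the user, and in the RPBiCGSTAB setting `solve_P` would apply the restrictive preconditioner $P$ via the block solves of Algorithm 2 below.

```python
import numpy as np

def pbicgstab(A, b, solve_P, x0=None, tol=1e-7, max_iter=500):
    """Generic preconditioned BiCGSTAB sketch (textbook variant).
    A       : callable returning A @ x
    solve_P : callable returning an (approximate) solution of P y = x"""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A(x)
    r_hat = r.copy()                          # shadow residual
    rho_old = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        if abs(rho_new) < 1e-30:              # breakdown guard
            break
        beta = (rho_new / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        p_hat = solve_P(p)                    # first preconditioning solve
        v = A(p_hat)
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:
            x += alpha * p_hat
            break
        s_hat = solve_P(s)                    # second preconditioning solve
        t = A(s_hat)
        omega = (t @ s) / (t @ t)
        x += alpha * p_hat + omega * s_hat
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho_old = rho_new
    return x
```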
In Algorithm 1, matrix–vector multiplications and linear system solves are the dominant operations, and each iteration requires $O(n)$ operations. Indeed, the linear system $Ax = b$ can be solved by iteratively addressing the two linear systems $Pz = r$ and $Kv = z$. To construct the symmetric positive definite matrix $\widetilde{H}$ as an approximation of the symmetric positive definite matrix $H$, we redefine the vectors $r$, $z$, and $v$ as follows:
$$r = (r_1, r_2, r_3)^{T}, \qquad z = (z_1, z_2, z_3)^{T}, \qquad v = (v_1, v_2, v_3)^{T}.$$
In Algorithm 2, we outline the implementation details of the RPBiCGSTAB method, which incorporates the preconditioner P to tackle the image deblurring problem.
Algorithm 2: The preconditioning
1. Solve $t_1 = J^{-1}\big(r_1 - M z_2 - N z_3\big)$ for $t_1$,
2. Solve $t_2 = W^{-1}\big(r_2 - Q t_1\big)$ for $t_2$,
3. Solve $t_3 = V^{-1} r_3 + V\big(W - QJ^{-1}M\big)^{-1} QJ^{-1}N z_3$ for $t_3$,
4. Solve $t_4 = J^{-1}\big(Q z_2 + W z_3\big)$ for $t_4$,
5. Compute $v_1 = t_1 - t_4$,
6. Compute $v_2 = z_2 - \Big(V\big(W - QJ^{-1}M\big)^{-1} + \big(W - QJ^{-1}M\big)^{-1} QJ^{-1}N\Big) t_3$,
7. Set $v_3 = z_3$,
8. Solve $z_1 = t_1 - 2 t_4$,
9. Compute $z_2 = v_2$,
10. Compute $z_3 = v_3$.
In Algorithm 2, the most computationally expensive operations at each step are matrix–vector multiplications and the sub-block solves. Each of these takes $O(n)$ operations in iterative solvers; thus, the overall complexity of this preconditioning algorithm is $O(n)$.
Upon examining Algorithm 2, it is clear that each iteration necessitates solving four linear subsystems, all of which are symmetric positive definite. Given this characteristic, utilizing Cholesky factorization proves to be a suitable strategy for their solution.
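Because those sub-blocks are symmetric positive definite, each can be factored once with a Cholesky decomposition and the factor reused for every subsequent inner solve. The following SciPy sketch illustrates this idea on a synthetic SPD block; the matrix, its size, and the diagonal shift are placeholders, not the actual deblurring blocks.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)

# Illustrative stand-in for one SPD sub-block (e.g. a block of J or W).
B = rng.standard_normal((200, 200))
J_block = B @ B.T + 200.0 * np.eye(200)   # synthetic SPD matrix

# Factor once, outside the Krylov loop ...
factor = cho_factor(J_block)

def solve_J(rhs):
    # ... so each inner solve J_block @ y = rhs costs only two triangular solves.
    return cho_solve(factor, rhs)

rhs = rng.standard_normal(200)
y = solve_J(rhs)
print(np.linalg.norm(J_block @ y - rhs))   # tiny residual: the sub-solve is exact up to round-off
```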

5. Spectral Analysis

To begin with, we introduce the convergence theorem pertaining to the RPBiCGSTAB method.
Lemma 1.
After k iterations, starting from an initial vector x ( 0 ) , the RPBiCGSTAB method produces an approximate solution x ( k ) that closely estimates the true solution x for the image deblurring problem (21). This approximate solution fulfills the following inequality:
$$\|x^{(k)} - x\|_{K^{T}A} \leq 2\left(\frac{\sqrt{\kappa(P^{-1}A)} - 1}{\sqrt{\kappa(P^{-1}A)} + 1}\right)^{\!k} \|x^{(0)} - x\|_{K^{T}A}.$$
In this context, $\kappa(\cdot)$ represents the Euclidean matrix condition number, while $\|\cdot\|_{X}$ denotes a weighted norm.
Proof. 
Consider
$$K^{T}A = (L^{T}M^{-1})(MHL) = L^{T}HL.$$
So, $K^{T}A$ is an SPD matrix. Also, from $\widetilde{H} = F^{T}F$, for vectors $x$ and $\hat{x} = FLx$ we have
$$\|\hat{x}\|_{R}^{2} = \langle \hat{x},\, R\hat{x}\rangle = \big\langle FLx,\ (MF^{T})^{-1}A(FL)^{-1}\, FLx \big\rangle = \big\langle FLx,\ (MF^{T})^{-1}Ax \big\rangle = x^{T}K^{T}Ax = \|x\|_{K^{T}A}^{2}.$$
Also,
$$P^{-1}A = L^{-1}\widetilde{H}^{-1}HL = (FL)^{-1}\big(F^{-T}HF^{-1}\big)(FL) = (FL)^{-1}R\,(FL).$$
As a result, we establish that $\kappa(P^{-1}A) = \kappa(R)$. By applying the convergence theorem for the BiCGSTAB method to the linear system (21), we reach the conclusion presented in the lemma. According to Lemma 1, the convergence characteristics of the RPBiCGSTAB method are heavily influenced by the parameter $\kappa(P^{-1}A)$; therefore, it is crucial to analyze the eigenvalues of $P^{-1}A$. □
Theorem 1.
Let $J \in \mathbb{R}^{n\times n}$ and $Y \in \mathbb{R}^{p\times p}$ be positive-definite matrices, while $M \in \mathbb{R}^{n\times m}$ and $Q \in \mathbb{R}^{m\times n}$ are well defined. Under these conditions, the iterative method converges to the unique solution of (21) for any initial guess when utilizing the preconditioned matrix $P$. This establishes the robustness of the method in ensuring convergence regardless of the starting point.
Proof. 
Consider
$$P = \begin{pmatrix} I & 0 & 0\\ QJ^{-1} & I & 0\\ 0 & V\big(W - QJ^{-1}M\big)^{-1} & I \end{pmatrix}
\begin{pmatrix} J & 0 & 0\\ 0 & W - QJ^{-1}M & 0\\ 0 & 0 & Y \end{pmatrix}
\begin{pmatrix} I & J^{-1}M & J^{-1}N\\ 0 & I & -\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & I \end{pmatrix}$$
$$= \begin{pmatrix} I & 0 & 0\\ QJ^{-1} & I & 0\\ 0 & V\big(W - QJ^{-1}M\big)^{-1} & I \end{pmatrix}
\begin{pmatrix} J & M & N\\ 0 & W - QJ^{-1}M & -QJ^{-1}N\\ 0 & 0 & Y \end{pmatrix}$$
$$= \begin{pmatrix} J & M & N\\ Q & W & 0\\ 0 & V & Y - V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}.$$
So, we have
$$A = P + R = \begin{pmatrix} J & M & N\\ Q & W & 0\\ 0 & V & Y - V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}.$$
Clearly, P is also invertible. So, consider
$$P^{-1}R = \begin{pmatrix} I & J^{-1}M & J^{-1}N\\ 0 & I & -\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & I \end{pmatrix}^{\!-1}
\begin{pmatrix} J & 0 & 0\\ 0 & W-QJ^{-1}M & 0\\ 0 & 0 & Y \end{pmatrix}^{\!-1}
\begin{pmatrix} I & 0 & 0\\ QJ^{-1} & I & 0\\ 0 & V\big(W-QJ^{-1}M\big)^{-1} & I \end{pmatrix}^{\!-1}
\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & V\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}$$
$$= \begin{pmatrix} I & J^{-1}M & J^{-1}N\\ 0 & I & -\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & I \end{pmatrix}^{\!-1}
\begin{pmatrix} J & 0 & 0\\ 0 & W-QJ^{-1}M & 0\\ 0 & 0 & Y \end{pmatrix}^{\!-1}
\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & V\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}$$
$$= \begin{pmatrix} J^{-1} & -J^{-1}M\big(W-QJ^{-1}M\big)^{-1} & -J^{-1}N\,Y^{-1}\\ 0 & \big(W-QJ^{-1}M\big)^{-1} & \big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N\,Y^{-1}\\ 0 & 0 & Y^{-1} \end{pmatrix}
\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & V\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}$$
$$= \begin{pmatrix} 0 & 0 & -J^{-1}N\,Y^{-1}V\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & \big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N\,Y^{-1}V\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & Y^{-1}V\big(W-QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}.$$
Therefore, we conclude that $\rho(P^{-1}R) < 1$, and hence the iterative method converges to the unique solution of (21), regardless of the initial guess, when the preconditioning matrix $P$ is utilized. This concludes the proof of the theorem. □
Theorem 2.
Let $J \in \mathbb{R}^{n\times n}$ and $Y \in \mathbb{R}^{p\times p}$ be positive-definite matrices, while $M \in \mathbb{R}^{n\times m}$ and $Q \in \mathbb{R}^{m\times n}$ are well defined. Then, for $A$, the preconditioner $P$ satisfies
$$\sigma(P^{-1}A) = \{1\}.$$
In this context, σ ( · ) represents the collection of all eigenvalues associated with a specific matrix.
Proof. 
It can be readily demonstrated that
$$P^{-1}A = I + P^{-1}R$$
$$= \begin{pmatrix} I & 0 & -J^{-1}N\,Y^{-1}V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & I & \big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N\,Y^{-1}V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N\\ 0 & 0 & I + Y^{-1}V\big(W - QJ^{-1}M\big)^{-1}QJ^{-1}N \end{pmatrix}.$$
Thus, $\lambda = 1$ is an eigenvalue of $P^{-1}A$ with multiplicity at least $m + n + p$, and
$$\sigma(P^{-1}A) = \{1\}.$$
This completes the proof. □
Building upon the previous examination of eigenvalues, it is evident that the RPBiCGSTAB method with the preconditioner P will converge to the exact solution in just a few iterations.
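The structure established above is easy to probe numerically. The sketch below builds $A$ and the restrictive preconditioner $P$ from small random blocks ($J$, $W$, $Y$ symmetric positive definite; the sizes and entries are arbitrary placeholders, not the deblurring matrices) and counts how many eigenvalues of $P^{-1}A$ equal 1. At least the $n + m$ eigenvalues tied to the first two block columns are exactly 1; for the actual MC deblurring blocks the remaining eigenvalues also cluster tightly around 1, as Figure 4 shows.

```python
import numpy as np

rng = np.random.default_rng(2)

def spd(k):
    # random symmetric positive-definite block
    B = rng.standard_normal((k, k))
    return B @ B.T + k * np.eye(k)

n, m, p = 8, 6, 4                         # toy block sizes (placeholders)
J, W, Y = spd(n), spd(m), spd(p)
M, N = rng.standard_normal((n, m)), rng.standard_normal((n, p))
Q, V = rng.standard_normal((m, n)), rng.standard_normal((p, m))

A = np.block([[J, M, N],
              [Q, W, np.zeros((m, p))],
              [np.zeros((p, n)), V, Y]])

S = W - Q @ np.linalg.solve(J, M)          # Schur complement W - Q J^{-1} M
X = V @ np.linalg.solve(S, Q @ np.linalg.solve(J, N))
P = A.copy()
P[n + m:, n + m:] = Y - X                  # P differs from A only in the (3,3) block

eigs = np.linalg.eigvals(np.linalg.solve(P, A))
print(np.sum(np.abs(eigs - 1.0) < 1e-10), "of", eigs.size, "eigenvalues equal 1")
print("largest deviation from 1:", np.abs(eigs - 1.0).max())
```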

6. Numerical Experiments

In this section, we introduce numerical experiments designed to tackle the problem of image deblurring, providing a practical assessment of our proposed method. These experiments are essential for evaluating the practical feasibility and effectiveness of the proposed methods in real-world applications. Through these tests, we gain valuable insights into the methods’ reliability and robustness under various conditions. Initially, we implement a discrete form of the Fixed-Point Iteration (FPI) method to tackle the nonlinearities present within system (21), subsequently applying the RPBiCGSTAB algorithm to find the solution of the image deblurring problem. This sequential approach is integral for effectively managing the complexities of nonlinearity and ensuring accurate computational results. The computational experiments were conducted on a system with an Intel® Core™ i5-6300U processor running at 2.40 GHz and equipped with 8.00 GB of RAM, with MATLAB R2022a serving as the implementation platform.
In this study, we employed the Goldhill, Moon, and Kids [8] images as examples, with Figure 1, Figure 2 and Figure 3 showcasing their unique features. These images serve to illustrate the effectiveness of the proposed methods in various scenarios. Each subfigure presented in Figure 1, Figure 2 and Figure 3 measures 512 × 512; this uniformity in size facilitates a consistent comparison across the various images. Our numerical approach included a stopping criterion, with the tolerance set to $\mathrm{tol} = 1 \times 10^{-7}$. This precision ensured that the results met a specified level of accuracy before the computation was concluded. The numerical experiments employed the kegen(N, 300, 5) kernel, with the parameters specified as $\alpha = 1 \times 10^{-8}$ and $\beta = 0.1$, as in [8,9,12]. However, our preliminary results indicated that the method showed reasonable robustness to small variations in $\alpha$ and $\beta$, and the choice of these parameters did not significantly alter the overall performance in the tasks considered. Here, the Peak Signal-to-Noise Ratio (PSNR) serves as a key metric for evaluating the fidelity of reconstructed images, indicating how well these images replicate their original counterparts; higher PSNR values generally reflect improved image quality, signaling a closer resemblance to the original image. For comparison with our proposed RPBiCGSTAB method, we employed GMRES, BiCGSTAB, and the $P_S$-GMRES method developed by Ahmad [25], who introduced a preconditioner $P_S$ for the GMRES method to address a 5 × 5 block image deblurring problem.
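For reference, PSNR can be computed in a few lines; the peak value of 255 below assumes 8-bit grayscale images and is an illustrative choice.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between a reference and a restored image."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```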

Remarks

• Our computational analysis indicates that the eigenvalues are optimally concentrated around 1. The most favorable spectrum is depicted in Figure 4, which illustrates the eigenvalue distributions across the different scenarios. Clearly, our proposed preconditioner $P$ has a much better spectrum than the preconditioner $P_S$ [25] for the Goldhill image of size 512 × 512.
• The effect of preconditioning is clearly demonstrated in Figure 5. The RPBiCGSTAB method attained the desired accuracy with substantially fewer iterations than the other methods, whereas the unpreconditioned methods (GMRES and BiCGSTAB) required over 50 iterations to reach convergence for the Goldhill image of size 512 × 512. Similar behavior was observed for the other image dimensions.
• Tables 1–3 demonstrate that the PSNR achieved by the RPBiCGSTAB method surpasses that of all other methods, including $P_S$-GMRES [25], and this was accomplished with a significantly smaller number of iterations. The RPBiCGSTAB method also reduced the CPU time by more than 60%. Consequently, it outperformed the other methods.
• Figure 1, Figure 2 and Figure 3 clearly illustrate that the RPBiCGSTAB method yielded a slight enhancement in image quality.

7. Conclusions

This study presents an innovative block preconditioning approach tailored for the 5 × 5 block matrix system arising from the discretization of the Euler–Lagrange equations. The preconditioner is designed to enhance computational efficiency and stability, effectively addressing the complex structures of these discretized equations. We assessed the efficacy of the newly introduced restrictively preconditioned biconjugate gradient-stabilized (RPBiCGSTAB) method in curvature-based image deblurring applications, comparing its performance against established techniques such as GMRES, BiCGSTAB, and P S GMRES [25]. This evaluation highlights the RPBiCGSTAB method’s advancements in tackling image deblurring challenges.
Our comprehensive theoretical analysis confirms that this iterative method ensures unconditional convergence with an appropriate matrix splitting strategy, underscoring its robustness across various applications. We also conducted a spectral analysis of the preconditioned matrix, providing critical insights into its eigenvalues and properties, thereby validating our preconditioning strategy.
The performance of the RPBiCGSTAB method was demonstrated through numerical experiments, showcasing its capabilities in image deblurring tasks. Our findings reveal that the RPBiCGSTAB method significantly enhanced the quality of reconstructed images, as indicated by the PSNR metrics. Furthermore, it exhibited a slight improvement in convergence speed compared to GMRES, BiCGSTAB, and P S GMRES [25]. This method is efficient, requiring minimal CPU time and achieving rapid convergence within a few iterations, while the eigenvalues of the preconditioned matrix cluster around the value of 1, reinforcing the method’s stability and robustness in practical applications. Beyond image processing, the RPBiCGSTAB can be extended to various other applications, including medical image processing, satellite image analysis, and remote sensing, where similar block-structured systems are prevalent.

Author Contributions

All authors contributed equally to this research. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Basque Government, grant number [IT1555-22].

Data Availability Statement

The source codes can be accessed at https://github.com/shahbaz1982/BiCSTAB-Preconditioned-/tree/main (accessed on 28 May 2025). The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to thank the Basque Government for supporting this research work through Grant IT1555-22. They also thank MICIU/AEI/10.13039/501100011033 and FEDER/UE for partially funding their research work through Grants PID2021-123543OB-C21 and PID2021-123543OB-C22.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ren, W.; Deng, S.; Zhang, K.; Song, F.; Cao, X.; Yang, M.-H. Fast ultra high-definition video deblurring via multi-scale separable network. Int. J. Comput. Vis. 2024, 132, 1817–1834. [Google Scholar] [CrossRef]
  2. Ren, W.; Wu, L.; Yan, Y.; Xu, S.; Huang, F.; Cao, X. INformer: Inertial-Based Fusion Transformer for Camera Shake Deblurring. IEEE Trans. Image Process. 2024, 33, 6045–6056. [Google Scholar] [CrossRef]
  3. Chen, K. Introduction to variational image-processing models and applications. Int. J. Comput. Math. 2013, 90, 1–8. [Google Scholar] [CrossRef]
  4. Sun, L.; Chen, K. A new iterative algorithm for mean curvature-based variational image denoising. BIT Numer. Math. 2014, 54, 523–553. [Google Scholar] [CrossRef]
  5. Yang, F.; Chen, K.; Yu, B.; Fang, D. A relaxed fixed point method for a mean curvature-based denoising model. Optim. Methods Softw. 2014, 29, 274–285. [Google Scholar] [CrossRef]
  6. Zhu, W.; Chan, T. Image denoising using mean curvature of image surface. SIAM J. Imaging Sci. 2012, 5, 1–32. [Google Scholar] [CrossRef]
  7. Zhu, W.; Tai, X.C.; Chan, T. Augmented Lagrangian method for a mean curvature based image denoising model. Inverse Probl. Imaging 2013, 7, 1409–1432. [Google Scholar] [CrossRef]
  8. Fairag, F.; Chen, K.; Ahmad, S. An effective algorithm for mean curvature-based image deblurring problem. Comput. Appl. Math. 2022, 41, 176. [Google Scholar] [CrossRef]
  9. Fairag, F.; Chen, K.; Ahmad, S. Analysis of the CCFD method for MC-based image denoising problems. Electron. Trans. Numer. Anal. 2021, 54, 108–127. [Google Scholar] [CrossRef]
  10. Mobeen, A.; Ahmad, S.; Fairag, F. Non-blind constraint image deblurring problem with mean curvature functional. Numer. Algorithms 2025, 98, 1703–1723. [Google Scholar] [CrossRef]
  11. Khalid, R.; Ahmad, S.; Medani, M.; Said, Y.; Ali, I. Efficient preconditioning strategies for accelerating GMRES in block-structured nonlinear systems for image deblurring. PLoS ONE 2025, 20, e0322146. [Google Scholar] [CrossRef]
  12. Fairag, F.; Chen, K.; Brito-Loeza, C.; Ahmad, S. A two-level method for image denoising and image deblurring models using mean curvature regularization. Int. J. Comput. Math. 2022, 99, 693–713. [Google Scholar] [CrossRef]
  13. Ahmad, S.; Fairag, F. Circulant preconditioners for mean curvature-based image deblurring problem. J. Algorithms Comput. Technol. 2021, 15, 17483026211055679. [Google Scholar] [CrossRef]
  14. Ahmad, S.; Al-Mahdi, A.M.; Ahmed, R. Two new preconditioners for mean curvature-based image deblurring problem. AIMS Math. 2021, 6, 13824–13844. [Google Scholar] [CrossRef]
  15. Beik, F.P.A.; Benzi, M. Preconditioning techniques for the coupled Stokes–Darcy problem: Spectral and field-of-values analysis. Numer. Math. 2022, 150, 257–298. [Google Scholar] [CrossRef]
  16. Kim, J.; Ahmad, S. On the preconditioning of the primal form of TFOV-based image deblurring model. Sci. Rep. 2023, 13, 17422. [Google Scholar] [CrossRef]
  17. Saad, Y. A flexible inner-outer preconditioned GMRES algorithm. SIAM J. Sci. Comput. 1993, 14, 461–469. [Google Scholar] [CrossRef]
  18. Cao, Y.; Jiang, M.Q.; Zheng, Y.L. A splitting preconditioner for saddle point problems. Numer. Linear Algebra Appl. 2011, 18, 875–895. [Google Scholar] [CrossRef]
  19. Knight, F.M.; Wathen, A.J.; Trefethen, L.N. The BiCGSTAB method for solving large sparse linear systems. SIAM J. Sci. Stat. Comput. 1994, 15, 237–248. [Google Scholar]
  20. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2003; pp. 1–462. [Google Scholar]
  21. Van der Vorst, H.A. BiCGSTAB: A fast and robust iterative solver for large, sparse, non-symmetric linear systems. Numer. Linear Algebra Appl. 2003, 10, 79–94. [Google Scholar]
  22. Chen, J.; Li, Y.; Zhang, Z. Efficient implementation of BiCGSTAB for large-scale sparse linear systems with preconditioning. J. Comput. Appl. Math. 2018, 334, 343–356. [Google Scholar]
  23. Bouyghf, A.; Abdessalem, M. An enhanced version of the BiCGSTAB method with global and block orthogonal projectors. Numer. Linear Algebra Appl. 2023, 30, 201–221. [Google Scholar]
  24. Tadano, Y.; Kuramoto, H. A block-wise updating method for BiCGSTAB with applications to multiple right-hand sides. J. Appl. Comput. Math. 2019, 6, 100–120. [Google Scholar]
  25. Ahmad, S. Optimized five-by-five block preconditioning for efficient GMRES convergence in curvature-based image deblurring. Comput. Math. Appl. 2024, 175, 174–183. [Google Scholar] [CrossRef]
  26. Lysaker, M.; Osher, S.; Tai, X.C. Noise removal using smoothed normals and surface fitting. IEEE Trans. Image Process. 2004, 13, 345–1457. [Google Scholar] [CrossRef]
  27. Brito, C.; Chen, K. Multigrid algorithm for high order denoising. SIAM J. Imaging Sci. 2010, 3, 363–389. [Google Scholar] [CrossRef]
  28. Rui, H.; Pan, H. A block-centered finite difference method for the Darcy–Forchheimer model. SIAM J. Numer. Anal. 2012, 5, 2612–2631. [Google Scholar] [CrossRef]
  29. Vogel, C.R.; Oman, M.E. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 1998, 7, 813–824. [Google Scholar] [CrossRef]
Figure 1. Goldhill image: (a) shows the original image; (b) presents the blurred version; (c) displays the deblurred result using GMRES; (d) shows the deblurred image obtained via BiCGSTAB; (e) depicts the deblurred image produced by P S GMRES ; (f) illustrates the deblurred image using RPBiCGSTAB.
Figure 2. Moon image: (a) presents the original image; (b) displays the blurred image; (c) shows the deblurred result obtained with GMRES; (d) illustrates the deblurred image produced using BiCGSTAB; (e) depicts the deblurred image from P S GMRES ; (f) demonstrates the deblurred image achieved with RPBiCGSTAB.
Figure 3. Kids image [8]: (a) presents the original image; (b) shows the blurred image; (c) depicts the deblurred image obtained with GMRES; (d) illustrates the deblurred image produced by BiCGSTAB; (e) displays the deblurred image generated using P S GMRES ; (f) demonstrates the deblurred image achieved with RPBiCGSTAB.
Figure 4. The eigenvalue distributions of matrices (a) A , (b) P S 1 A , and (c) P 1 A for Goldhill image of size 512 × 512 .
Figure 5. Residual norms and first 50 iteration counts for GMRES, BiCGSTAB, RPBiCGSTAB, and P S GMRES for Goldhill image of size 512 × 512 .
Table 1. The numerical results for the Goldhill image.

| $n_x$ | Blurry PSNR | Method | Deblurred PSNR | Error | Iterations | CPU Time |
|---|---|---|---|---|---|---|
| 128 | 24.7144 | GMRES | 48.0376 | $2.78522 \times 10^{-4}$ | 46 | 71.4365 |
| | | BiCGSTAB | 48.4597 | $2.34269 \times 10^{-8}$ | 22 | 21.7596 |
| | | $P_S$-GMRES | 49.7696 | $9.95176 \times 10^{-8}$ | 1(2) | 18.1246 |
| | | RPBiCGSTAB | 49.8697 | $2.31212 \times 10^{-12}$ | 3 | 16.1349 |
| 256 | 24.5531 | GMRES | 47.9646 | $2.78522 \times 10^{-4}$ | 56 | 91.2974 |
| | | BiCGSTAB | 47.4377 | $2.83523 \times 10^{-8}$ | 28 | 27.7834 |
| | | $P_S$-GMRES | 48.7696 | $8.91346 \times 10^{-8}$ | 1(3) | 26.3425 |
| | | RPBiCGSTAB | 48.9845 | $2.78521 \times 10^{-12}$ | 4 | 19.3124 |
| 512 | 24.6983 | GMRES | 44.2732 | $2.78522 \times 10^{-4}$ | 83 | 106.7548 |
| | | BiCGSTAB | 44.5315 | $2.96524 \times 10^{-8}$ | 37 | 41.2586 |
| | | $P_S$-GMRES | 46.7696 | $6.78523 \times 10^{-8}$ | 2(5) | 27.2659 |
| | | RPBiCGSTAB | 46.9897 | $2.78522 \times 10^{-12}$ | 6 | 22.2791 |
Table 2. The numerical results for the Moon image.

| $n_x$ | Blurry PSNR | Method | Deblurred PSNR | Error | Iterations | CPU Time |
|---|---|---|---|---|---|---|
| 128 | 28.4896 | GMRES | 49.4596 | $2.43256 \times 10^{-4}$ | 48 | 81.5213 |
| | | BiCGSTAB | 49.4789 | $2.12896 \times 10^{-8}$ | 24 | 31.8512 |
| | | $P_S$-GMRES | 50.4578 | $8.45896 \times 10^{-8}$ | 1(3) | 19.5963 |
| | | RPBiCGSTAB | 50.4586 | $2.25693 \times 10^{-12}$ | 3 | 16.1349 |
| 256 | 28.4596 | GMRES | 48.4786 | $2.18963 \times 10^{-4}$ | 58 | 95.4963 |
| | | BiCGSTAB | 48.3125 | $2.12364 \times 10^{-8}$ | 29 | 29.1259 |
| | | $P_S$-GMRES | 49.4369 | $7.14369 \times 10^{-8}$ | 1(4) | 28.1256 |
| | | RPBiCGSTAB | 49.1425 | $2.13689 \times 10^{-12}$ | 5 | 19.1456 |
| 512 | 28.5429 | GMRES | 45.4963 | $2.74269 \times 10^{-4}$ | 85 | 109.8549 |
| | | BiCGSTAB | 45.4236 | $2.45236 \times 10^{-8}$ | 41 | 45.2369 |
| | | $P_S$-GMRES | 47.1236 | $5.12496 \times 10^{-8}$ | 2(6) | 29.5896 |
| | | RPBiCGSTAB | 47.1456 | $2.49631 \times 10^{-12}$ | 7 | 21.1263 |
Table 3. The numerical results for the Kids image.

| $n_x$ | Blurry PSNR | Method | Deblurred PSNR | Error | Iterations | CPU Time |
|---|---|---|---|---|---|---|
| 128 | 23.2189 | GMRES | 47.1253 | $2.25893 \times 10^{-4}$ | 45 | 81.1258 |
| | | BiCGSTAB | 47.8963 | $2.14785 \times 10^{-8}$ | 22 | 31.1149 |
| | | $P_S$-GMRES | 48.2589 | $9.12356 \times 10^{-8}$ | 1(2) | 18.1858 |
| | | RPBiCGSTAB | 48.1589 | $2.18964 \times 10^{-12}$ | 3 | 16.0012 |
| 256 | 23.1256 | GMRES | 46.1256 | $2.14239 \times 10^{-4}$ | 55 | 91.0025 |
| | | BiCGSTAB | 46.4589 | $2.12569 \times 10^{-8}$ | 28 | 27.7034 |
| | | $P_S$-GMRES | 47.7589 | $8.25814 \times 10^{-8}$ | 1(3) | 26.0421 |
| | | RPBiCGSTAB | 47.1526 | $2.21852 \times 10^{-12}$ | 6 | 17.0125 |
| 512 | 23.1079 | GMRES | 43.5896 | $2.12846 \times 10^{-4}$ | 82 | 106.2109 |
| | | BiCGSTAB | 43.8596 | $2.85219 \times 10^{-8}$ | 37 | 41.2586 |
| | | $P_S$-GMRES | 45.5896 | $6.14763 \times 10^{-8}$ | 2(7) | 27.2307 |
| | | RPBiCGSTAB | 45.1478 | $2.14256 \times 10^{-12}$ | 8 | 20.4839 |