Preconditioning Technique for an Image Deblurring Problem with the Total Fractional-Order Variation Model
Abstract
1. Introduction
 We propose two block triangular preconditioners and study the bounds of the eigenvalues of the preconditioned matrices. In addition, we demonstrate the effectiveness of our algorithm in the numerical results: we start with the fixed point iteration (FPI) method, as in [28], to linearize the nonlinear primal system $\left[{K}^{T}K+\lambda {L}_{h}^{\alpha}\left({U}^{m}\right)\right]{U}^{m+1}={K}^{T}Z,\ m=0,1,\dots$, then we use the preconditioned conjugate gradient (PCG) method [29] for the inner iterations and the FGMRES method for the outer iterations. We illustrate the performance of our approach by reporting the peak signal-to-noise ratio (PSNR), CPU time, residuals, and number of iterations. Finally, we compute the PSNR for different values of the fractional order $\alpha$ to show the impact of using the TFOV model.
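The outer/inner structure described above can be sketched as follows. This is a minimal toy version under stated assumptions: a 1-D Gaussian blur matrix and a plain Laplacian stand in for the paper's operator $L_h^{\alpha}(U^m)$ (neither is the paper's actual discretization), and unpreconditioned CG is used for the inner solve for brevity:

```python
import numpy as np
from scipy.sparse.linalg import cg

# Hypothetical toy setup (not the paper's discretization): a 1-D Gaussian
# blur K and a plain Laplacian standing in for the lagged fractional
# operator L_h^alpha(U^m).
rng = np.random.default_rng(0)
n = 32
idx = np.arange(n)
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)            # row-normalized blur matrix
Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian
u_true = np.sin(np.linspace(0.0, np.pi, n))
z = K @ u_true + 1e-3 * rng.standard_normal(n)
lam = 1e-2

# Fixed point iteration (FPI): freeze the nonlinear operator at U^m, then
# solve the linearized SPD system by CG (the inner iteration).  With the
# linear stand-in operator the outer loop converges after one update.
u = np.zeros(n)
for m in range(10):
    A = K.T @ K + lam * Lap                  # [K^T K + lam * L_h^alpha(U^m)]
    u_new, info = cg(A, K.T @ z)
    if np.linalg.norm(u_new - u) <= 1e-6 * (np.linalg.norm(u_new) + 1e-30):
        u = u_new
        break
    u = u_new
```

In the paper's setting the operator $L_h^{\alpha}(U^m)$ is rebuilt from the current iterate at every outer step, which is what makes the fixed point (lagged diffusivity) loop necessary.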
2. Problem Setup
 Tikhonov regularization [32], also called penalized least squares, is used to stabilize the problem (2). In this case, the problem is to find a u that minimizes the functional$$F\left(u\right)=\frac{1}{2}{\Vert \mathbf{K}u-z\Vert}^{2}+\lambda J\left(u\right),$$with, for example,$$J\left(u\right)={\int}_{\Omega}{u}^{2}\,dx \qquad \text{or} \qquad J\left(u\right)={\int}_{\Omega}\mid \nabla u{\mid}^{2}\,dx.$$
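As a concrete illustration of Tikhonov regularization with the penalty $J(u)=\int_{\Omega}u^2\,dx$: in the discrete setting, minimizing $F(u)=\frac{1}{2}\Vert Ku-z\Vert^2+\lambda\Vert u\Vert^2$ gives the normal equations $(K^TK+2\lambda I)u=K^Tz$. The random test problem below is a hypothetical example, not taken from the paper:

```python
import numpy as np

# Hypothetical random test problem; with the discrete penalty J(u) = ||u||^2,
# minimizing F(u) = 0.5*||K u - z||^2 + lam*J(u) gives the normal equations
# (K^T K + 2*lam*I) u = K^T z.
rng = np.random.default_rng(1)
n = 16
K = rng.standard_normal((n, n)) / np.sqrt(n)
z = K @ rng.standard_normal(n)
lam = 1e-3
u = np.linalg.solve(K.T @ K + 2.0 * lam * np.eye(n), K.T @ z)

# First-order optimality: the gradient of F vanishes at the minimizer.
grad = K.T @ (K @ u - z) + 2.0 * lam * u
```

The shift $2\lambda I$ is what stabilizes the ill-conditioned operator $K^TK$: even when $K$ is nearly singular, the regularized system is uniformly positive definite.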
 Total Variation (TV): One of the most commonly used regularization models is TV. It was first introduced in edge-preserving image denoising by Rudin, Osher and Fatemi (ROF) [33] and has been improved in recent years for image denoising, deblurring, inpainting, blind deconvolution, and processing [1,2,3,4,34,35,36,37,38,39]. When using the TV model, the problem is to find a u that minimizes the functional$$F\left(u\right)=\frac{1}{2}{\Vert \mathbf{K}u-z\Vert}^{2}+\lambda {J}_{TV}\left(u\right),$$$${J}_{TV}\left(u\right)={\int}_{\Omega}\mid \nabla u\mid \,dx.$$Note that we do not require the continuity of u; hence, (8) is a good regularizer in image processing. However, the Euclidean norm $\mid \nabla u\mid$ is not differentiable at zero. A common modification is to add a small positive parameter $\beta$, resulting in the modified functional$${J}_{{TV}_{\beta}}\left(u\right)={\int}_{\Omega}\sqrt{\mid \nabla u{\mid}^{2}+{\beta}^{2}}\,dx.$$The well-posedness of the minimization problem (7) with the functional given in (9) has been studied and analyzed in the literature, for example in [1]. The success of TV regularization lies in the balance it strikes between the ability to describe piecewise smooth images and the complexity of the resulting algorithms. Moreover, TV regularization performs very well at removing noise/blur while preserving edges. Despite these good properties, TV regularization favors piecewise constant solutions in the bounded variation (BV) space, which often leads to the staircase effect. Thus, staircasing remains one of the drawbacks of TV regularization. To remove the staircase effect, two modifications of the TV regularization model have been used in the literature. The first approach is to raise the order of the derivatives in the TV regularization term, using, for example, the mean curvature or a nonlinear combination of the first and second derivatives [40,41,42,43,44,45].
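A minimal discrete version of the smoothed functional $J_{TV_{\beta}}$ can be written with forward differences; unit grid spacing and replicated boundaries are illustrative choices here, not taken from the paper:

```python
import numpy as np

# Discrete smoothed TV functional J_{TV,beta}(u) on a 2-D image, using
# forward differences with replicated boundaries and grid spacing h = 1
# (illustrative assumptions).
def tv_beta(u, beta=1e-3):
    ux = np.diff(u, axis=1, append=u[:, -1:])   # forward difference in x
    uy = np.diff(u, axis=0, append=u[-1:, :])   # forward difference in y
    return np.sum(np.sqrt(ux**2 + uy**2 + beta**2))
```

For a constant image the gradient vanishes and the functional reduces to $\beta$ times the pixel count, which makes the role of the smoothing parameter explicit; as $\beta\to 0$ the value approaches the discrete TV seminorm.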
These modifications remove or reduce the staircase effect and are effective, but they are computationally expensive because of the higher-order derivatives or the nonlinear terms. The second approach is to use fractional-order derivatives in the TV regularization term, as shown in [46,47].
2.1. Fractional-Order Derivative in Image Deblurring
2.2. The TFOV Model
2.3. Fractional-Order Derivatives
 Riemann–Liouville (RL) definitions: The left- and right-sided RL derivatives of order $\alpha$ (with $n-1\le \alpha <n$) of a function $f\left(x\right)$ are given by$${D}^{\alpha}_{[a,x]}f\left(x\right)=\frac{1}{\Gamma (n-\alpha )}{\left(\frac{d}{dx}\right)}^{n}{\int}_{a}^{x}{(x-t)}^{n-\alpha -1}f\left(t\right)\,dt,$$$${D}^{\alpha}_{[x,b]}f\left(x\right)=\frac{{(-1)}^{n}}{\Gamma (n-\alpha )}{\left(\frac{d}{dx}\right)}^{n}{\int}_{x}^{b}{(t-x)}^{n-\alpha -1}f\left(t\right)\,dt,$$where $\Gamma$ is the Gamma function,$$\Gamma \left(z\right)={\int}_{0}^{\infty}{e}^{-t}{t}^{z-1}\,dt.$$
 Grünwald–Letnikov (GL) definitions: The left- and right-sided GL derivatives are defined by$${}^{G}{D}_{[a,x]}^{\alpha}f\left(x\right)=\underset{h\to 0}{lim}\frac{1}{{h}^{\alpha}}\sum_{j=0}^{\left[\frac{x-a}{h}\right]}{(-1)}^{j}{C}_{\alpha}^{j}\,f(x-jh),$$$${}^{G}{D}_{[x,b]}^{\alpha}f\left(x\right)=\underset{h\to 0}{lim}\frac{1}{{h}^{\alpha}}\sum_{j=0}^{\left[\frac{b-x}{h}\right]}{(-1)}^{j}{C}_{\alpha}^{j}\,f(x+jh),$$where the generalized binomial coefficients are$${C}_{\alpha}^{j}=\frac{\alpha (\alpha -1)\cdots (\alpha -j+1)}{j!}.$$
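The signed GL coefficients $\omega_j=(-1)^j C_{\alpha}^{j}$ satisfy the standard recursion $\omega_0=1$, $\omega_j=\omega_{j-1}\bigl(1-\frac{\alpha+1}{j}\bigr)$, which turns the GL definition into a direct finite-$h$ approximation of the left-sided derivative. The sketch below checks it against the known Riemann–Liouville value $D^{1/2}_{[0,x]}\,t = x^{1/2}/\Gamma(3/2)$ at $x=1$ (a textbook identity, not taken from this paper):

```python
import math
import numpy as np

# Left-sided GL derivative on [0, x] via the coefficient recursion
# w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1)/j), where w_j = (-1)^j C_alpha^j.
def gl_derivative(f, x, alpha, h):
    N = int(round(x / h))
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    t = x - h * np.arange(N + 1)           # evaluation nodes x - j*h
    return np.dot(w, f(t)) / h**alpha

alpha = 0.5
approx = gl_derivative(lambda t: t, 1.0, alpha, h=1e-3)
exact = 1.0 / math.gamma(2.0 - alpha)      # RL derivative of f(t) = t at x = 1
```

Here the recursion avoids evaluating $C_{\alpha}^{j}$ directly from its factorial formula, which would overflow and lose accuracy for large $j$.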
 Caputo (C) definitions: The left- and right-sided Caputo derivatives are defined by$${}^{C}{D}_{[a,x]}^{\alpha}f\left(x\right)=\frac{1}{\Gamma (n-\alpha )}{\int}_{a}^{x}{(x-t)}^{n-\alpha -1}{f}^{\left(n\right)}\left(t\right)\,dt,$$$${}^{C}{D}_{[x,b]}^{\alpha}f\left(x\right)=\frac{{(-1)}^{n}}{\Gamma (n-\alpha )}{\int}_{x}^{b}{(t-x)}^{n-\alpha -1}{f}^{\left(n\right)}\left(t\right)\,dt.$$
2.4. Euler–Lagrange Equations
2.5. Discretization of the Fractional Derivative
 (1) ${\omega}_{0}^{\alpha}=1,\ {\omega}_{1}^{\alpha}=-\alpha <0,\ 1\ge {\omega}_{2}^{\alpha}\ge {\omega}_{3}^{\alpha}\ge \dots \ge 0;$
 (2) ${\sum}_{k=0}^{\infty}{\omega}_{k}^{\alpha}=0,\ {\sum}_{k=0}^{m}{\omega}_{k}^{\alpha}\le 0\ (m\ge 1).$
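These two properties can be verified numerically from the coefficient recursion $\omega_k^{\alpha}=\omega_{k-1}^{\alpha}\bigl(1-\frac{\alpha+1}{k}\bigr)$; the order $\alpha=1.5$ and the truncation length below are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative numerical check of properties (1) and (2) for 1 < alpha < 2.
alpha = 1.5
M = 200
w = np.empty(M + 1)
w[0] = 1.0
for k in range(1, M + 1):
    w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)   # w_k = (-1)^k C_alpha^k

partial = np.cumsum(w)   # partial sums sum_{k=0}^{m} w_k, m = 0..M
```

For $1<\alpha<2$ the ratio $\omega_{k}^{\alpha}/\omega_{k-1}^{\alpha}=1-\frac{\alpha+1}{k}$ lies in $(0,1)$ once $k>\alpha+1$, which is exactly why the tail $\omega_2^{\alpha},\omega_3^{\alpha},\dots$ is nonnegative and monotonically decreasing, and the partial sums increase toward zero from below.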
2.6. Difficulties in the TFOV Model Compared to the TV Model
3. Preconditioning Technique
4. Preconditioned GMRES Algorithm
Algorithm 1: Preconditioned GMRES

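The body of Algorithm 1 is not reproduced here, but the mechanics of GMRES with a block upper-triangular preconditioner can be sketched generically with SciPy. The $2\times 2$ block matrix and the exact Schur complement below are assumptions for illustration only; they are not the paper's $P_1$ or $P_2$:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Generic sketch: solve A x = b with a block upper-triangular preconditioner
#   P = [[A11, A12], [0, S]],  S = A22 - A12^T A11^{-1} A12 (exact Schur
# complement here, purely for illustration).
rng = np.random.default_rng(2)
n = 20
R = rng.standard_normal((n, n))
A11 = np.eye(n) + 0.05 * (R + R.T)        # symmetric, safely positive definite
A12 = 0.02 * rng.standard_normal((n, n))
A22 = np.eye(n)
A = np.block([[A11, A12], [A12.T, A22]])
b = rng.standard_normal(2 * n)
S = A22 - A12.T @ np.linalg.solve(A11, A12)

def apply_Pinv(r):
    # Block back-substitution: solve the S block first, then the A11 block.
    r1, r2 = r[:n], r[n:]
    y2 = np.linalg.solve(S, r2)
    y1 = np.linalg.solve(A11, r1 - A12 @ y2)
    return np.concatenate([y1, y2])

M = LinearOperator((2 * n, 2 * n), matvec=apply_Pinv)
x, info = gmres(A, b, M=M)
```

With the exact Schur complement, the preconditioned matrix has a minimal polynomial of degree at most two, so GMRES converges in at most two iterations; the paper's preconditioners replace the exact (dense, expensive) Schur complement with cheap approximations, which is where the eigenvalue bounds of Section 4 come in.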
Algorithm 2: ${P}_{1}$-Conjugate Gradient Method

Algorithm 3: ${P}_{2}$-Conjugate Gradient Method

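The preconditioned conjugate gradient solver underlying Algorithms 2 and 3 can be sketched as follows; a Jacobi (diagonal) preconditioner stands in for the paper's $P_1$/$P_2$, and the SPD test matrix is an arbitrary illustrative choice:

```python
import numpy as np

# Hand-rolled preconditioned conjugate gradient (PCG).  Minv applies the
# inverse of the preconditioner to a residual vector.
def pcg(A, b, Minv, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)                      # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate search direction update
        rz = rz_new
    return x

n = 30
rng = np.random.default_rng(3)
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # SPD test matrix
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)       # Jacobi (diagonal) preconditioning
```

The only change relative to plain CG is the extra application of `Minv` per iteration; the payoff is that the iteration count is governed by the spectrum of the preconditioned operator rather than of $A$ itself.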
Eigenvalue Estimates
5. Numerical Results
5.1. Selecting the Parameters $\beta $ and $\lambda $
5.2. GMRES versus FGMRES
 From Figure 31, Figure 32 and Figure 33, we can clearly see the effectiveness of preconditioning. For all values of N, the number of ${P}_{1}$ and ${P}_{2}$ iterations is much lower than the number of TFOV-based NP and TV-based ${P}_{1}$ iterations needed to reach the required accuracy $tol={10}^{-7}$. The later fixed-point iterations show similar results.
 From Table 2, we observe that the PSNR of the TFOV-based PGMRES method is almost the same as that of the ordinary TFOV-based GMRES method, but much higher than that of the TV-based ${P}_{1}$ method for all values of N. However, the ${P}_{1}$ and ${P}_{2}$ methods reach this better PSNR in far fewer iterations. For example, to achieve a better PSNR for $N=64$, the ${P}_{1}$ method needs only 18 iterations and the ${P}_{2}$ method only 20, whereas the NP method needs 120+ iterations to reach the same PSNR. The TV-based ${P}_{1}$ method also takes 120+ iterations to reach its lower PSNR. The same holds for the other values of N. This means that the TFOV-based FGMRES method is faster than the TFOV-based GMRES and TV-based ${P}_{1}$ methods.
6. Conclusions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
 1. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217.
 2. Agarwal, V.; Gribok, A.V.; Abidi, M.A. Image restoration using L1 norm penalty function. Inverse Probl. Sci. Eng. 2007, 15, 785–809.
 3. Aujol, J.F. Some first-order algorithms for total variation based image restoration. J. Math. Imaging Vis. 2009, 34, 307–327.
 4. Tai, X.C.; Lie, K.A.; Chan, T.F.; Osher, S. Image processing based on partial differential equations. In Proceedings of the International Conference on PDE-Based Image Processing and Related Inverse Problems, CMA, Oslo, Norway, 8–12 August 2005; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
 5. Chen, D.; Chen, Y.; Xue, D. Fractional-order total variation image restoration based on primal-dual algorithm. In Abstract and Applied Analysis; Hindawi Publishing Corporation: London, UK, 2013; Volume 2013.
 6. Williams, B.M.; Zhang, J.; Chen, K. A new image deconvolution method with fractional regularisation. J. Algorithms Comput. Technol. 2016, 10, 265–276.
 7. Chan, R.; Lanza, A.; Morigi, S.; Sgallari, F. An adaptive strategy for the restoration of textured images using fractional order regularization. Numer. Math. Theory Methods Appl. 2013, 6, 276–296.
 8. Zhang, J.; Chen, K. Variational image registration by a total fractional-order variation model. J. Comput. Phys. 2015, 293, 442–461.
 9. Benzi, M.; Golub, G.H.; Liesen, J. Numerical solution of saddle point problems. Acta Numer. 2005, 14, 1–137.
 10. Silvester, D.; Wathen, A. Fast iterative solution of stabilised Stokes systems. Part II: Using general block preconditioners. SIAM J. Numer. Anal. 1994, 31, 1352–1367.
 11. Wathen, A.; Silvester, D. Fast iterative solution of stabilised Stokes systems. Part I: Using simple diagonal preconditioners. SIAM J. Numer. Anal. 1993, 30, 630–649.
 12. Bramble, J.H.; Pasciak, J.E. A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems. Math. Comput. 1988, 50, 1–17.
 13. Cao, Z.H. Positive stable block triangular preconditioners for symmetric saddle point problems. Appl. Numer. Math. 2007, 57, 899–910.
 14. Klawonn, A. Block-triangular preconditioners for saddle point problems with a penalty term. SIAM J. Sci. Comput. 1998, 19, 172–184.
 15. Pestana, J. On the eigenvalues and eigenvectors of block triangular preconditioned block matrices. SIAM J. Matrix Anal. Appl. 2014, 35, 517–525.
 16. Simoncini, V. Block triangular preconditioners for symmetric saddle-point problems. Appl. Numer. Math. 2004, 49, 63–80.
 17. Axelsson, O.; Neytcheva, M. Preconditioning methods for linear systems arising in constrained optimization problems. Numer. Linear Algebr. Appl. 2003, 10, 3–31.
 18. Bai, Z.Z.; Golub, G.H. Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. Anal. 2007, 27, 1–23.
 19. Benzi, M.; Ng, M.K. Preconditioned iterative methods for weighted Toeplitz least squares problems. SIAM J. Matrix Anal. Appl. 2006, 27, 1106–1124.
 20. Ng, M.K.; Pan, J. Weighted Toeplitz regularized least squares computation for image restoration. SIAM J. Sci. Comput. 2014, 36, B94–B121.
 21. Cao, Z.H. Block triangular Schur complement preconditioners for saddle point problems and application to the Oseen equations. Appl. Numer. Math. 2010, 60, 193–207.
 22. Chen, C.; Ma, C. A generalized shift-splitting preconditioner for saddle point problems. Appl. Math. Lett. 2015, 43, 49–55.
 23. Salkuyeh, D.K.; Masoudi, M.; Hezari, D. On the generalized shift-splitting preconditioner for saddle point problems. Appl. Math. Lett. 2015, 48, 55–61.
 24. Beik, F.P.A.; Benzi, M.; Chaparpordi, S.H.A. On block diagonal and block triangular iterative schemes and preconditioners for stabilized saddle point problems. J. Comput. Appl. Math. 2017, 326, 15–30.
 25. Murphy, M.F.; Golub, G.H.; Wathen, A.J. A note on preconditioning for indefinite linear systems. SIAM J. Sci. Comput. 2000, 21, 1969–1972.
 26. Benzi, M.; Golub, G.H. A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 2004, 26, 20–41.
 27. Saad, Y. Iterative Methods for Sparse Linear Systems; SIAM: Philadelphia, PA, USA, 2003.
 28. Vogel, C.R.; Oman, M.E. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 1998, 7, 813–824.
 29. Axelsson, O. Iterative Solution Methods; Cambridge University Press: Cambridge, UK, 1996.
 30. Campisi, P.; Egiazarian, K. Blind Image Deconvolution: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2016.
 31. Groetsch, C.W. Inverse Problems in the Mathematical Sciences; Springer: Berlin/Heidelberg, Germany, 1993; Volume 52.
 32. Tikhonov, A.N. Regularization of incorrectly posed problems. Sov. Math. Dokl. 1963, 4, 1624–1627.
 33. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
 34. Osher, S.; Solé, A.; Vese, L. Image decomposition and restoration using total variation minimization and the H^{-1} norm. Multiscale Model. Simul. 2003, 1, 349–370.
 35. Getreuer, P. Total variation inpainting using split Bregman. Image Process. On Line 2012, 2, 147–157.
 36. Guo, W.; Qiao, L.H. Inpainting based on total variation. In Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2–4 November 2007; Volume 2, pp. 939–943.
 37. Bresson, X.; Esedoglu, S.; Vandergheynst, P.; Thiran, J.P.; Osher, S. Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 2007, 28, 151–167.
 38. Unger, M.; Pock, T.; Trobin, W.; Cremers, D.; Bischof, H. TVSeg: Interactive total variation based image segmentation. BMVC 2008, 31, 44–46.
 39. Yan, H.; Zhang, J.X.; Zhang, X. Injected infrared and visible image fusion via L1 decomposition model and guided filtering. IEEE Trans. Comput. Imaging 2022, 8, 162–173.
 40. Chan, T.; Marquina, A.; Mulet, P. High-order total variation-based image restoration. SIAM J. Sci. Comput. 2000, 22, 503–516.
 41. Steidl, G.; Didas, S.; Neumann, J. Relations between higher order TV regularization and support vector regression. In International Conference on Scale-Space Theories in Computer Vision; Springer: Berlin/Heidelberg, Germany, 2005; pp. 515–527.
 42. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526.
 43. Zhu, W.; Chan, T. Image denoising using mean curvature of image surface. SIAM J. Imaging Sci. 2012, 5, 1–32.
 44. Lysaker, M.; Osher, S.; Tai, X.C. Noise removal using smoothed normals and surface fitting. IEEE Trans. Image Process. 2004, 13, 1345–1357.
 45. Ahmad, S.; Al-Mahdi, A.M.; Ahmed, R. Two new preconditioners for mean curvature-based image deblurring problem. AIMS Math. 2021, 6, 13824–13844.
 46. Al-Mahdi, A.; Fairag, F. Block diagonal preconditioners for an image deblurring problem with fractional total variation. J. Phys. Conf. Ser. 2018, 1132, 012063.
 47. Fairag, F.; Al-Mahdi, A.; Ahmad, S. Two-level method for the total fractional-order variation model in image deblurring problem. Numer. Algorithms 2020, 85, 931–950.
 48. Sohail, A.; Bég, O.; Li, Z.; Celik, S. Physics of fractional imaging in biomedicine. Prog. Biophys. Mol. Biol. 2018, 140, 13–20.
 49. Xu, K.D.; Zhang, J.X. Prescribed performance tracking control of lower-triangular systems with unknown fractional powers. Fractal Fract. 2023, 7, 594.
 50. Wang, Y.; Zhang, X.; Boutat, D.; Shi, P. Quadratic admissibility for a class of LTI uncertain singular fractional-order systems with 0 < α < 2. Fractal Fract. 2022, 7, 1.
 51. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 8, 2487–2518.
 52. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; Wiley: Hoboken, NJ, USA, 1993.
 53. Oldham, K.; Spanier, J. The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974; Volume 111.
 54. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: Cambridge, MA, USA, 1998; Volume 198.
 55. Chan, T.F.; Golub, G.H.; Mulet, P. A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 1999, 20, 1964–1977.
 56. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for fractional advection–dispersion flow equations. J. Comput. Appl. Math. 2004, 172, 65–77.
 57. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 2006, 56, 80–90.
 58. Wang, H.; Du, N. Fast solution methods for space-fractional diffusion equations. J. Comput. Appl. Math. 2014, 255, 376–383.
 59. Strang, G. A proposal for Toeplitz matrix calculations. Stud. Appl. Math. 1986, 74, 171–176.
 60. Olkin, J.A. Linear and Nonlinear Deconvolution Problems (Optimization). Ph.D. Thesis, Rice University, Houston, TX, USA, 1986.
 61. Chan, T.F.; Olkin, J.A. Circulant preconditioners for Toeplitz-block matrices. Numer. Algorithms 1994, 6, 89–101.
 62. Chan, R.H.; Ng, K.P. Toeplitz preconditioners for Hermitian Toeplitz systems. Linear Algebra Appl. 1993, 190, 181–208.
 63. Lin, F.R. Preconditioners for block Toeplitz systems based on circulant preconditioners. Numer. Algorithms 2001, 26, 365–379.
 64. Chan, R.H. Toeplitz preconditioners for Toeplitz systems with nonnegative generating functions. IMA J. Numer. Anal. 1991, 11, 333–345.
 65. Serra, S. Preconditioning strategies for asymptotically ill-conditioned block Toeplitz systems. BIT Numer. Math. 1994, 34, 579–594.
 66. Lin, F.R.; Wang, C.X. BTTB preconditioners for BTTB systems. Numer. Algorithms 2012, 60, 153–167.
 67. Chan, R.H.; Strang, G. Toeplitz equations by conjugate gradients with circulant preconditioner. SIAM J. Sci. Stat. Comput. 1989, 10, 104–119.
 68. Chan, R.H.; Yeung, M.C. Circulant preconditioners constructed from kernels. SIAM J. Numer. Anal. 1992, 29, 1093–1103.
 69. Chan, T.F. An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput. 1988, 9, 766–771.
 70. Davis, P.J. Circulant Matrices; American Mathematical Society: New York, NY, USA, 2012.
 71. Chowdhury, M.R.; Qin, J.; Lou, Y. Non-blind and blind deconvolution under Poisson noise using fractional-order total variation. J. Math. Imaging Vis. 2020, 62, 1238–1255.
Table 1. Parameters, iteration counts, and CPU time (s) for the NP, ${P}_{1}$, and ${P}_{2}$ methods.

| N | $\alpha$ | $\lambda$ | $\beta$ | NP iter. | ${P}_{1}$ iter. | ${P}_{2}$ iter. | NP CPU | ${P}_{1}$ CPU | ${P}_{2}$ CPU |
|---|---|---|---|---|---|---|---|---|---|
| 32 | 1.3 | ${10}^{-3}$ | 1 | 53 | 30 | 32 | 3.44 | 1.88 | 1.98 |
| 64 | 1.8 | ${10}^{-8}$ | 0.1 | 301 | 166 | 194 | 39.71 | 20.97 | 20.55 |
| 128 | 1.6 | ${10}^{-6}$ | 0.01 | 178 | 68 | 91 | 76.64 | 35.86 | 38.22 |
Table 2. Parameters, iteration counts, and deblurred PSNR for the TV ($\alpha=1$), NP, ${P}_{1}$, and ${P}_{2}$ methods.

| N | $\alpha$ | $\lambda$ | $\beta$ | TV iter. | NP iter. | ${P}_{1}$ iter. | ${P}_{2}$ iter. | TV PSNR | NP PSNR | ${P}_{1}$ PSNR | ${P}_{2}$ PSNR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 64 | 1.6 | ${10}^{-4}$ | 1 | $120+$ | $120+$ | 20 | 18 | 47.2230 | 48.6422 | 49.0131 | 48.9233 |
| 128 | 1.8 | ${10}^{-4}$ | 1 | $120+$ | $120+$ | 40 | 22 | 45.2243 | 46.0352 | 46.8526 | 46.8957 |
| 256 | 1.9 | ${10}^{-7}$ | 1 | $120+$ | $120+$ | 60 | 38 | 40.3331 | 44.1220 | 44.6277 | 44.6241 |
Table 3. Parameters, iteration counts, and deblurred PSNR for the NFOV, NP, ${P}_{1}$, and ${P}_{2}$ methods.

| N | $\alpha$ | $\lambda$ | $\beta$ | NFOV iter. | NP iter. | ${P}_{1}$ iter. | ${P}_{2}$ iter. | NFOV PSNR | NP PSNR | ${P}_{1}$ PSNR | ${P}_{2}$ PSNR |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 64 | 1.7 | ${10}^{-4}$ | 1 | $120+$ | $120+$ | 41 | 26 | 25.9869 | 26.5625 | 26.7861 | 26.8283 |
| 128 | 1.9 | ${10}^{-7}$ | 1 | $120+$ | $120+$ | 65 | 45 | 24.1417 | 25.1908 | 25.4312 | 25.6952 |
Al-Mahdi, A.M. Preconditioning Technique for an Image Deblurring Problem with the Total Fractional-Order Variation Model. Math. Comput. Appl. 2023, 28, 97. https://doi.org/10.3390/mca28050097