Article

Adaptive Fractional-Order Total Variation and Minimax-Concave Based Image Denoising Model

1 School of Primary Education, Changsha Normal University, Changsha 410100, China
2 School of Mathematics and Statistics, Hunan First Normal University, Changsha 410205, China
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(7), 1105; https://doi.org/10.3390/math14071105
Submission received: 23 February 2026 / Revised: 16 March 2026 / Accepted: 20 March 2026 / Published: 25 March 2026

Abstract

Total variation (TV)-based image denoising effectively suppresses noise while preserving edges, but it often introduces staircase artifacts in flat regions. To address this limitation, we propose a novel denoising model that combines adaptive fractional-order total variation with a minimax-concave (MC) penalty in the regularization term. The adaptive fractional-order TV alleviates staircase effects in homogeneous areas while preserving fine details in textured regions. The MC penalty provides a more accurate estimation of image sparsity, improving restoration fidelity compared to traditional L1-based regularization. The resulting model, termed AFTVMC, is efficiently solved using an alternating direction method of multipliers (ADMM). Extensive numerical experiments on synthetic and natural images demonstrate that AFTVMC outperforms classical TV, higher-order LLT, adaptive ATV, and state-of-the-art MCFOTV models in both objective metrics—peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM)—and subjective visual quality, particularly in suppressing staircase artifacts and preserving complex texture details.

1. Introduction

In the field of digital image processing, images are inevitably corrupted by various types of noise, such as Gaussian noise and Poisson noise, during acquisition, transmission, and storage. Noise not only severely degrades the visual quality of images but also significantly hinders subsequent high-level processing tasks, including image segmentation, object recognition, medical diagnosis, and computer vision analysis [1]. Therefore, recovering a clean original image from its noisy observation, known as image denoising, remains a fundamental and critical research topic in image processing and computer vision [2,3].
Image denoising is inherently an ill-posed inverse problem. Regularization methods provide an effective framework for solving such problems, typically formulated as a trade-off between a data fidelity term and a regularization term [4]. The data fidelity term (often using the L2-norm) ensures consistency between the restored image and the observed image, while the regularization term constrains the solution space by incorporating prior knowledge of the image to suppress noise and recover structures. Total Variation (TV) regularization, proposed by Rudin, Osher, and Fatemi, is a landmark work in image denoising [5]. The TV model uses the L1-norm of the image gradient as the regularizer, which tends to produce piecewise constant solutions, effectively removing noise while preserving edge information. However, a well-known drawback of the TV model is the introduction of unnatural “staircase effects” in flat regions, where smooth-intensity transitions are approximated by piecewise constant patches. Furthermore, the TV model suffers from the loss of image contrast [6].
To overcome the staircase effect and contrast loss of the TV model, various improvements have been proposed, which can be broadly categorized as follows:
1. Higher-order regularization models: Examples include the Lysaker–Lundervold–Tai (LLT) model [7] and the Total Generalized Variation (TGV) model [8,9]. These models incorporate higher-order derivative information to better represent smooth image regions, effectively mitigating staircase effects. However, they often lead to over-smoothed edges or introduce speckle artifacts, compromising detail preservation in texture-rich areas [10].
2. Fractional-order regularization models: Fractional differential operators with order greater than one possess non-locality and weak singularity [11], enabling them to capture both edge and texture details simultaneously. Models based on fractional-order TV (FOTV) have demonstrated unique advantages in suppressing staircase effects while preserving textures [12,13,14,15]. Moreover, FOTV has been shown to better preserve image contrasts compared to the standard TV model [6]. Nevertheless, as noted by Chen et al. [16], the denoising performance of pure fractional-order TV models on images with high noise levels still requires improvement.
3. Adaptive and anisotropic models: The core idea of these models is to dynamically adjust the strength or direction of regularization based on local image structure. For instance, the anisotropic TV (ATV) model proposed by Pang et al. [17] introduces an adaptive weighting matrix derived from local gradients, applying weaker smoothing at edges for protection and stronger smoothing in flat regions for noise removal. The works of Wu et al. [18] and Yang et al. [3] also demonstrate the effectiveness of adaptive strategies in image segmentation and denoising. Although the ATV model excels in edge preservation, its performance on images with complex textures remains limited.
4. Non-convex regularization models: To more accurately estimate signal sparsity (e.g., the sparsity of image gradients), non-convex regularizers are introduced to replace the convex L1-norm. The minimax-concave (MC) penalty is a typical non-convex regularization tool that produces a sharper thresholding function than the L1-norm, leading to more accurate estimation of sparse signals [19]. Chen et al. [16] first combined the MC penalty with fractional-order TV, proposing the MCFOTV model, which achieved superior performance compared to traditional TV and FOTV models. However, this model’s adaptive capability is still insufficient when handling images with high noise levels and complex local structures.
In summary, existing advanced models often excel in only one or a subset of the following goals: suppressing staircase effects, preserving edges and textures, handling high noise levels, or achieving local adaptivity. It remains an open and challenging problem to design a model that can adaptively integrate the texture-preserving capability of fractional-order differentiation with the accurate estimation power of non-convex regularization to simultaneously achieve: (a) effective elimination of staircase effects in flat regions, (b) maximum preservation of details in edge and texture regions, and (c) robust denoising performance across low to high noise levels.
Inspired by the ATV model [17] and the MCFOTV model [16], this paper proposes a novel Adaptive Fractional-order Total Variation and Minimax-Concave-based image denoising model (AFTVMC). The core innovation of our model lies in the organic integration of an adaptive weighting mechanism, a fractional-order differential operator, and the non-convex MC penalty within a unified variational framework. Specifically:
  • We employ an adaptive weighting matrix (T) generated from the local gradient information of the noisy image. This allows the model to adaptively adjust the strength of the fractional-order differential operator based on image content (flat areas, edges, textures), enabling finer local control.
  • We introduce a fractional-order gradient operator $D^{\alpha}$ with order α between 1 and 2 as the basis for sparsity measurement. This operator inherently balances edge preservation and staircase effect suppression.
  • We apply the minimax-concave (MC) penalty to regularize the weighted fractional-order gradient, aiming for a more accurate estimation of the true image sparsity prior than the traditional L1-norm, thereby enhancing restoration accuracy.
Through this combination, our model aims to inherit the local adaptivity of the ATV model, the texture-preserving capability of FOTV models, and the accurate estimation power of the MC penalty, thereby achieving higher-quality restoration with fewer artifacts for various images, especially those containing rich textures and different noise levels.
For numerical solution, we adopt the efficient Alternating Direction Method of Multipliers (ADMM) to solve the proposed optimization problem. Through theoretical analysis, we select appropriate parameters (e.g., a small β value) to ensure that the overall objective function remains convex under specific conditions despite the non-convex term, guaranteeing stable convergence of the ADMM algorithm. Extensive experiments on synthetic and natural images demonstrate that compared to the classical TV model, the LLT model, the ATV model, and the state-of-the-art MCFOTV model, our proposed AFTVMC model shows significant advantages in both objective metrics (Peak Signal-to-Noise Ratio—PSNR, Structural Similarity Index—SSIM) and subjective visual quality, particularly in suppressing staircase effects and preserving complex texture details.
The remainder of this paper is organized as follows: Section 2 details the proposed AFTVMC model and its mathematical formulation. Section 3 describes the ADMM-based numerical algorithm. Section 4 presents extensive numerical experiments and comparative analysis with existing advanced methods. Section 5 concludes the paper and outlines future research directions.

2. The Proposed Model

Let the image domain be $\Omega \subset \mathbb{R}^2$. The contamination of an image by noise can be described as $f = u + w$, where $f: \Omega \to \mathbb{R}$ is the observed noisy image, $u: \Omega \to \mathbb{R}$ is the clean image, and $w: \Omega \to \mathbb{R}$ represents Gaussian noise. To recover u from the degraded image f, we propose the following AFTVMC denoising model:
$$\min_u \left\{ \frac{\lambda}{2}\|f-u\|_2^2 + \|TD^{\alpha}u\|_1 - \min_v \left[ \frac{\beta}{2}\|TD^{\alpha}u - v\|_2^2 + \|v\|_1 \right] \right\}, \tag{1}$$
where λ and β are nonnegative parameters, $D^{\alpha}$ denotes the fractional derivative of real order α, and T represents the adaptive weighting matrix.
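For intuition, the inner minimization over v in (1) has a closed-form solution given by soft-thresholding, so the MC-type penalty can be evaluated directly. The following is a minimal numeric sketch (written in Python/NumPy purely for illustration; the paper's experiments use MATLAB, and the function names here are ours):

```python
import numpy as np

def soft(x, t):
    # Elementwise soft-thresholding (prox of the L1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mc_penalty(x, beta):
    # MC-type penalty of model (1): ||x||_1 - min_v { beta/2 ||x-v||^2 + ||v||_1 }.
    # The inner minimizer has the closed form v* = soft(x, 1/beta).
    v = soft(x, 1.0 / beta)
    inner = 0.5 * beta * np.sum((x - v) ** 2) + np.sum(np.abs(v))
    return np.sum(np.abs(x)) - inner
```

Because the inner minimum is always between 0 and $\|x\|_1$, the penalty satisfies $0 \le \psi(x) \le \|x\|_1$, which is the sense in which it is a "debiased" surrogate for the L1 norm.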
There are various definitions of the fractional-order derivative, among which the most popular are the Grünwald–Letnikov (G-L), Riemann–Liouville (R-L), and frequency-domain definitions [20]. We adopt the easily implementable G-L fractional derivative. The G-L fractional derivative of a function $f(t)$ is as follows [11]
$${}_{a}D_t^{\alpha} f(t) = \lim_{\substack{h\to 0 \\ nh = t-a}} h^{-\alpha} \sum_{r=0}^{n} (-1)^r C_r^{\alpha}\, f(t-rh),$$
where $C_k^{\alpha} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)}$ is the generalized binomial coefficient, $\Gamma(\cdot)$ denotes the Gamma function, and a, t are the terminals of the fractional derivative. The G-L fractional α-order gradient of a two-dimensional image u at a point $(x,y) \in \Omega$ is defined as
$$D^{\alpha}u(x,y) = \left[ D_x^{\alpha}u(x,y);\ D_y^{\alpha}u(x,y) \right],$$
where $D_x^{\alpha}u(x,y)$ and $D_y^{\alpha}u(x,y)$ represent the G-L fractional derivatives of $u(x,y)$ along the x and y directions, respectively. In this paper, we set $1 < \alpha \le 2$.
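The weights $(-1)^k C_k^{\alpha}$ of the G-L sum can be generated without evaluating the Gamma function directly, via the standard recursion $c_k = c_{k-1}(k-1-\alpha)/k$, $c_0 = 1$. An illustrative sketch (Python/NumPy; the function name is ours, not the authors'):

```python
import numpy as np

def gl_weights(alpha, K):
    # Coefficients c_k = (-1)^k * C_k^alpha of the G-L sum, generated by the
    # stable recursion c_k = c_{k-1} * (k - 1 - alpha) / k with c_0 = 1,
    # equivalent to the Gamma-function formula above.
    c = np.empty(K)
    c[0] = 1.0
    for k in range(1, K):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c
```

For integer orders the weights truncate: α = 1 gives [1, −1, 0, …] and α = 2 gives [1, −2, 1, 0, …], matching the classical difference stencils.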
Many adaptive regularization denoising models have been proposed [17,18,21,22,23,24]; they reflect the local structure of u and lead to a good estimation of the edges. Inspired by Pang et al. [17], Bollt et al. [23], Yang et al. [3], and Wu et al. [18], we use an adaptive weighting matrix operator T in model (1), defined by
$$T\begin{bmatrix}\hat v_1(x,y)\\ \hat v_2(x,y)\end{bmatrix} = \begin{bmatrix}t_1(x,y) & 0\\ 0 & t_2(x,y)\end{bmatrix}\begin{bmatrix}\hat v_1(x,y)\\ \hat v_2(x,y)\end{bmatrix} = \begin{bmatrix}\dfrac{1}{1+\kappa\left(G_{\hat\sigma}(x,y) * \partial_x f(x,y)\right)^2} & 0\\[2ex] 0 & \dfrac{1}{1+\kappa\left(G_{\hat\sigma}(x,y) * \partial_y f(x,y)\right)^2}\end{bmatrix}\begin{bmatrix}\hat v_1(x,y)\\ \hat v_2(x,y)\end{bmatrix},$$
where $G_{\hat\sigma}(x,y)$ is the Gaussian kernel and κ, $\hat\sigma$ are nonnegative parameters; $\partial_x f(x,y)$ and $\partial_y f(x,y)$ are the first-order derivatives of the noisy image f in the x- and y-directions, respectively. The adaptive weighting matrix T utilizes the gradient information of the noisy image, enabling model (1) to effectively describe local structures. If κ = 0, then T becomes the identity matrix I and model (1) reduces to the MCFOTV model in [16].
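As an illustration of how T can be built in practice, the sketch below smooths the noisy image with a periodic Gaussian kernel, takes first differences, and applies the decay $1/(1+\kappa|\cdot|^2)$. It is a plain reimplementation of the definition above (Python/NumPy; the default values of kappa and sigma are hypothetical, not the paper's tuned settings):

```python
import numpy as np

def smooth_periodic(f, sigma):
    # Separable Gaussian smoothing G_sigma * f with periodic boundaries.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    g = f.copy()
    for axis in (0, 1):
        g = sum(w * np.roll(g, j - r, axis=axis) for j, w in enumerate(k))
    return g

def adaptive_weights(f, kappa=10.0, sigma=1.0):
    # Diagonal entries t1, t2 of T: t_i = 1 / (1 + kappa * (d_i(G_sigma*f))^2),
    # using periodic first differences as in the paper's discretization.
    g = smooth_periodic(f, sigma)
    gx = g - np.roll(g, 1, axis=0)
    gy = g - np.roll(g, 1, axis=1)
    t1 = 1.0 / (1.0 + kappa * gx**2)
    t2 = 1.0 / (1.0 + kappa * gy**2)
    return t1, t2
```

By construction the weights lie in (0, 1]: they stay near 1 in flat regions (full regularization strength) and shrink near strong smoothed gradients (edges are protected).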
The objective function of model (1) is
$$\frac{\lambda}{2}\|f-u\|_2^2 + \|TD^{\alpha}u\|_1 - \min_v\left\{\frac{\beta}{2}\|TD^{\alpha}u-v\|_2^2 + \|v\|_1\right\}.$$
It can be converted into
$$\max_v\left\{\frac{\lambda}{2}\|f-u\|_2^2 + \|TD^{\alpha}u\|_1 - \frac{\beta}{2}\|TD^{\alpha}u-v\|_2^2 - \|v\|_1\right\},$$
i.e.,
$$\max_v\left\{\frac{\lambda}{2}\|f\|_2^2 - \lambda\langle f,u\rangle + \beta\langle TD^{\alpha}u,\,v\rangle - \frac{\beta}{2}\|v\|_2^2 - \|v\|_1 + \|TD^{\alpha}u\|_1\right\} + \frac{\lambda}{2}\|u\|_2^2 - \frac{\beta}{2}\|TD^{\alpha}u\|_2^2. \tag{2}$$
Because the bracketed terms in (2) are convex in u for each fixed v, and a pointwise maximum of convex functions is convex, (2) is convex whenever $\frac{\lambda}{2}\|u\|_2^2 - \frac{\beta}{2}\|TD^{\alpha}u\|_2^2$ is convex. The value of the parameter β therefore governs the convexity of the objective function: a larger β can render the function non-convex, making it difficult to find the optimal solution of model (1). Based on this analysis, we set β to 0.001 in this paper.

3. Algorithm

3.1. Preliminaries

A digital image is usually defined on the grid points of a rectangular area; images u, f are then matrices of size M × N, i.e., $u, f \in \mathbb{R}^{M\times N}$. Each pixel of u and f is denoted by $u_{i,j}$ and $f_{i,j}$, where $i \in \{1,2,\dots,M\}$, $j \in \{1,2,\dots,N\}$. The first-order derivatives in the weighting matrix T are discretized using first-order differences with the periodic boundary condition:
$$\partial_x f_{i,j} = \begin{cases} f_{i,j} - f_{i-1,j}, & 1 < i \le M,\ 1 \le j \le N,\\ f_{i,j} - f_{M,j}, & i = 1,\ 1 \le j \le N,\end{cases} \qquad \partial_y f_{i,j} = \begin{cases} f_{i,j} - f_{i,j-1}, & 1 \le i \le M,\ 1 < j \le N,\\ f_{i,j} - f_{i,N}, & 1 \le i \le M,\ j = 1.\end{cases}$$
The discrete G-L fractional α-order gradient with the periodic boundary condition is defined as [25]
$$D^{\alpha}u_{i,j} = \left[D_x^{\alpha}u_{i,j};\ D_y^{\alpha}u_{i,j}\right], \tag{3}$$
with
$$D_x^{\alpha}u_{i,j} = \sum_{k=0}^{K-1}(-1)^k C_k^{\alpha}\, u_{i-k,j}, \qquad D_y^{\alpha}u_{i,j} = \sum_{k=0}^{K-1}(-1)^k C_k^{\alpha}\, u_{i,j-k}, \tag{4}$$
where the parameter K denotes the number of neighboring pixels used at each pixel, and $u_{-\nu,j} = u_{M-\nu,j}$, $\nu \in \{0,1,\dots,K-1\}$, $u_{i,-\mu} = u_{i,N-\mu}$, $\mu \in \{0,1,\dots,K-1\}$.
When α = 1 in (4), then $C_0^1 = 1$, $C_1^1 = 1$, and $C_k^1 = 0$ for $2 \le k \le K-1$; thus (3) reduces to first-order differences along the x-axis and the y-axis. When α = 2 in (4), then $C_0^2 = 1$, $C_1^2 = 2$, $C_2^2 = 1$, and $C_k^2 = 0$ for $3 \le k \le K-1$; therefore (3) reduces to second-order differences along the coordinate axes. In this paper, we take K = 20.
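The discrete fractional gradient (3)-(4) with the periodic boundary condition is a weighted sum of circular shifts of the image, so it can be sketched with np.roll; the assertions below check the reduction to first- and second-order differences noted above (Python/NumPy, illustrative only):

```python
import numpy as np

def gl_weights(alpha, K):
    # c_k = (-1)^k * C_k^alpha via the stable recursion.
    c = np.empty(K); c[0] = 1.0
    for k in range(1, K):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def frac_grad(u, alpha, K=20):
    # D_x^a u[i,j] = sum_k c_k u[i-k,j] (periodic), and likewise for y;
    # np.roll(u, k, axis) realizes the index i-k with wrap-around.
    c = gl_weights(alpha, K)
    dx = sum(c[k] * np.roll(u, k, axis=0) for k in range(K))
    dy = sum(c[k] * np.roll(u, k, axis=1) for k in range(K))
    return dx, dy
```

For non-integer α all K weights are nonzero, which is exactly the non-locality that lets the operator respond to textures as well as edges.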
The conjugate of the fractional-order derivative satisfies the following relationship [10]
$$(D^{\alpha})^{*} = (-1)^{\alpha}\,\mathrm{div}^{\alpha}.$$
For $p = [p^1; p^2]$ with $p^1, p^2 \in \mathbb{R}^{M\times N}$, the discrete fractional-order divergence is defined as [10,25]
$$(\mathrm{div}^{\alpha} p)_{i,j} = (-1)^{\alpha}\sum_{k=0}^{K-1}(-1)^k C_k^{\alpha}\left(p^1_{i+k,j} + p^2_{i,j+k}\right),$$
where $p^1_{M+\nu,j} = p^1_{\nu,j}$, $\nu \in \{0,1,\dots,K-1\}$, and $p^2_{i,N+\mu} = p^2_{i,\mu}$, $\mu \in \{0,1,\dots,K-1\}$.

3.2. ADMM Strategy

In the following, we solve model (1) using the ADMM algorithm, which decomposes an optimization problem into several subproblems. The convergence analysis of the ADMM algorithm is available in [26]. The model (1) can be reformulated as follows
$$\min_u \max_v\ \frac{\lambda}{2}\|f-u\|_2^2 + \|TD^{\alpha}u\|_1 - \frac{\beta}{2}\|TD^{\alpha}u-v\|_2^2 - \|v\|_1.$$
By introducing auxiliary variables x and w, the above optimization problem can be transformed into an equivalent form
$$\min_u \max_v\ \frac{\lambda}{2}\|f-u\|_2^2 + \|x\|_1 - \frac{\beta}{2}\|x-v\|_2^2 - \|v\|_1 \quad \text{s.t.}\ x = Tw,\ w = D^{\alpha}u. \tag{5}$$
Based on the augmented Lagrangian method, we rewrite problem (5) as follows
$$\mathcal{L}(u,v,x,w,\xi,\mu) = \frac{\lambda}{2}\|f-u\|_2^2 - \frac{\beta}{2}\|x-v\|_2^2 + \|x\|_1 - \|v\|_1 + \frac{\gamma_1}{2}\|x-Tw\|_2^2 - \langle\mu,\, x-Tw\rangle + \frac{\gamma_2}{2}\|w-D^{\alpha}u\|_2^2 - \langle\xi,\, w-D^{\alpha}u\rangle,$$
where γ 1 and γ 2 are the penalty parameters, μ and ξ are the Lagrange multipliers. Next, we elaborate on how to solve subproblems on x , w , v , and u, respectively.

3.2.1. X-Subproblem

X-subproblem can be written as
$$x^{k+1} = \arg\min_x\left\{-\frac{\beta}{2}\|x-v^k\|_2^2 + \|x\|_1 + \frac{\gamma_1}{2}\|x-Tw^k\|_2^2 - \langle\mu^k,\, x-Tw^k\rangle\right\}.$$
The above equation can be equivalently transformed into
$$x^{k+1} = \arg\min_x\ \|x\|_1 + \frac{\gamma_1-\beta}{2}\left\|x - \frac{\gamma_1 Tw^k + \mu^k - \beta v^k}{\gamma_1-\beta}\right\|_2^2.$$
Using the soft-thresholding operator in [27], we obtain the closed-form solution
$$x^{k+1}_{i,j} = \max\left(\|t_{i,j}\|_2 - \frac{1}{\gamma_1-\beta},\ 0\right)\frac{t_{i,j}}{\|t_{i,j}\|_2}, \tag{6}$$
where $t = \dfrac{\gamma_1 Tw^k + \mu^k - \beta v^k}{\gamma_1-\beta}$.
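The shrinkage in (6) is a per-pixel magnitude thresholding of the two-channel field t. A minimal sketch (Python/NumPy, illustrative; the small epsilon guards the division when the magnitude is zero):

```python
import numpy as np

def block_soft_threshold(t, tau):
    # Per-pixel shrinkage of a 2-channel field t (shape (2, M, N)):
    # shrink the magnitude sqrt(t1^2 + t2^2) by tau, keeping the direction.
    mag = np.sqrt((t**2).sum(axis=0, keepdims=True))
    return np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12) * t
```

For example, a pixel with components (3, 4) has magnitude 5, so thresholding with tau = 1 rescales it by 4/5 to (2.4, 3.2); any pixel whose magnitude is below tau is set to zero.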

3.2.2. W-Subproblem

W-subproblem can be expressed as
$$w^{k+1} = \arg\min_w\left\{\frac{\gamma_1}{2}\|x^{k+1}-Tw\|_2^2 - \langle\mu^k,\, x^{k+1}-Tw\rangle + \frac{\gamma_2}{2}\|w-D^{\alpha}u^k\|_2^2 - \langle\xi^k,\, w-D^{\alpha}u^k\rangle\right\}.$$
Setting the gradient to zero and solving pointwise, we obtain
$$w^{k+1}_{i,j} = \frac{\gamma_2\, D^{\alpha}u^k_{i,j} + \gamma_1 T_{i,j}\, x^{k+1}_{i,j} - T_{i,j}\,\mu^k_{i,j} + \xi^k_{i,j}}{\gamma_2 + \gamma_1 T_{i,j}^2}. \tag{9}$$

3.2.3. V-Subproblem

The v-subproblem corresponds to the following optimization problem
$$v^{k+1} = \arg\max_v\left\{-\frac{\beta}{2}\|x^{k+1}-v\|_2^2 - \|v\|_1\right\} = \arg\min_v\left\{\frac{\beta}{2}\|x^{k+1}-v\|_2^2 + \|v\|_1\right\}.$$
Similar to the x-subproblem, we use the soft-thresholding operator to obtain the solution
$$v^{k+1}_{i,j} = \max\left(\|x^{k+1}_{i,j}\|_2 - \frac{1}{\beta},\ 0\right)\frac{x^{k+1}_{i,j}}{\|x^{k+1}_{i,j}\|_2}. \tag{10}$$

3.2.4. U-Subproblem

U-subproblem can be written as
$$u^{k+1} = \arg\min_u\left\{\frac{\lambda}{2}\|f-u\|_2^2 + \frac{\gamma_2}{2}\|w^{k+1}-D^{\alpha}u\|_2^2 - \langle\xi^k,\, w^{k+1}-D^{\alpha}u\rangle\right\},$$
which leads to the linear system
$$\left(\lambda I + \gamma_2 (D^{\alpha})^{*} D^{\alpha}\right) u^{k+1} = \lambda f + (D^{\alpha})^{*}\left(\gamma_2 w^{k+1} - \xi^k\right).$$
Under the assumption of the periodic boundary of u, D α * D α is block circulant with circulant blocks (BCCB). So the above equation can be solved efficiently by the fast Fourier transform (FFT). Then its solution can be obtained by
$$u^{k+1} = \mathcal{F}^{-1}\left(\frac{\lambda\,\mathcal{F}(f) + \mathcal{F}\!\left((D^{\alpha})^{*}\left(\gamma_2 w^{k+1} - \xi^k\right)\right)}{\lambda + \gamma_2\,\mathcal{F}\!\left((D^{\alpha})^{*} D^{\alpha}\right)}\right), \tag{13}$$
where the division is performed elementwise.
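To see why this u-update is cheap, consider the simplest case α = 1: under periodic boundaries, $D^{*}D$ is diagonalized by the 2-D DFT with eigenvalues $(2 - 2\cos\omega_i) + (2 - 2\cos\omega_j)$, so the linear system is solved by one FFT pair. A sketch (Python/NumPy, illustrative; the fractional case replaces these eigenvalues by the DFT of the G-L kernel):

```python
import numpy as np

def solve_u_periodic(rhs, lam, gamma):
    # Solve (lam*I + gamma*D'D) u = rhs, with D the periodic first-difference
    # operator: D'D is diagonalized by the 2-D DFT, so the inverse is an
    # elementwise division in the frequency domain.
    M, N = rhs.shape
    wx = 2.0 - 2.0 * np.cos(2 * np.pi * np.arange(M) / M)
    wy = 2.0 - 2.0 * np.cos(2 * np.pi * np.arange(N) / N)
    eig = lam + gamma * (wx[:, None] + wy[None, :])
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / eig))
```

One can verify the solve by applying the operator $\lambda I + \gamma (D^{*}_x D_x + D^{*}_y D_y)$, where $D^{*}D$ acts as $2u - u(\cdot{-}1) - u(\cdot{+}1)$ along each axis, and recovering the right-hand side.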
The algorithm with ADMM is summarized in Algorithm 1.
Algorithm 1: ADMM for AFTVMC Denoising Model
1. Input: $f, \lambda, \gamma_1, \gamma_2, \alpha, \beta$;
2. Initialize: $u^1 = f$, $\xi^1 = 0$, $\mu^1 = 0$, $v^1 = 0$, $w^1 = [D_x^{\alpha} f;\ D_y^{\alpha} f]$;
3. For $k = 1, 2, \dots$
   Update $x^{k+1}$ by (6),
   Update $w^{k+1}$ by (9),
   Update $v^{k+1}$ by (10),
   Update $u^{k+1}$ by (13),
   $\xi^{k+1} = \xi^k - \gamma_2\left(w^{k+1} - D^{\alpha}u^{k+1}\right)$,
   $\mu^{k+1} = \mu^k - \gamma_1\left(x^{k+1} - T w^{k+1}\right)$;
4. End for when the stopping rule ($\|u^k - u^{k-1}\|_2 / \|u^{k-1}\|_2 \le 10^{-5}$ or iterations > 500) is met;
5. Output $u^{*} = u^{k+1}$ as the denoised image.
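To make the structure of Algorithm 1 concrete, the following reduces the same update pattern (shrinkage step, FFT-based u-solve, multiplier update with the paper's sign convention) to the simplest special case: plain anisotropic TV with α = 1, T = I, and no MC term. This is an illustrative toy, not the full AFTVMC algorithm, and the parameter defaults are our own (Python/NumPy):

```python
import numpy as np

def tv_denoise_admm(f, lam=10.0, gamma=5.0, iters=200):
    # ADMM for min_u lam/2 ||f-u||^2 + ||Du||_1 with periodic first
    # differences D; same loop skeleton as Algorithm 1.
    M, N = f.shape
    wx = 2 - 2*np.cos(2*np.pi*np.arange(M)/M)
    wy = 2 - 2*np.cos(2*np.pi*np.arange(N)/N)
    eig = lam + gamma*(wx[:, None] + wy[None, :])          # spectrum of lam*I + gamma*D'D
    D = lambda u: np.stack([u - np.roll(u, 1, 0), u - np.roll(u, 1, 1)])
    Dt = lambda p: (p[0] - np.roll(p[0], -1, 0)) + (p[1] - np.roll(p[1], -1, 1))
    soft = lambda x, t: np.sign(x)*np.maximum(np.abs(x) - t, 0.0)
    u, xi = f.copy(), np.zeros((2, M, N))
    for _ in range(iters):
        w = soft(D(u) + xi/gamma, 1.0/gamma)               # shrinkage step
        rhs = lam*f + Dt(gamma*w - xi)                     # u-subproblem rhs
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs)/eig))    # FFT-based u-solve
        xi = xi - gamma*(w - D(u))                         # multiplier update
    return u
```

On a noisy piecewise-constant image this loop visibly reduces the mean squared error relative to the noisy input; the full AFTVMC adds the second splitting variable, the weighting T, the MC shrinkage of v, and the G-L kernel in place of the first-difference spectrum.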

4. Numerical Experiments

In this section, we conduct comprehensive denoising experiments to validate the effectiveness and superiority of the proposed Adaptive Fractional-order Total Variation and Minimax-Concave (AFTVMC) model. We compare our method with several state-of-the-art TV-based models, including the classical TV model [5], the high-order LLT model [7], the adaptive ATV model [17], and the recent MCFOTV model [16]. All experiments are performed using MATLAB (R2018a) on a PC with a 2.50 GHz 12th Gen Intel(R) Core(TM) i5-12500H CPU and 16.0 GB RAM. The iterative process for each model is terminated when the relative difference between successive solutions satisfies $\|u^k - u^{k-1}\|_2 / \|u^{k-1}\|_2 \le 10^{-5}$ or the number of iterations exceeds 500.

4.1. Evaluation Criteria

To evaluate denoising performance, we consider three aspects: quantitative metrics, convergence behavior, and visual quality. As quantitative measures, we employ the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR) [28]. SSIM is defined by
$$\mathrm{SSIM}(x,y) = \frac{\left(2\mu_x\mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)\left(\sigma_x^2+\sigma_y^2+C_2\right)},$$
where $\mu_x, \mu_y$ are local means, $\sigma_x, \sigma_y$ are standard deviations, $\sigma_{xy}$ is the cross-covariance, and $C_1, C_2$ are constants. PSNR is defined as
$$\mathrm{PSNR}(u, I) = 10\log_{10}\frac{\mathrm{MAX}^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(u_{i,j}-I_{i,j}\right)^2},$$
where u and I denote the denoised and reference images, respectively. In our experiments, images are normalized to [0, 1], hence MAX = 1. Larger PSNR and SSIM values indicate better denoising performance.
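The PSNR formula translates directly into code; a sketch for images normalized to [0, 1] (Python/NumPy, illustrative):

```python
import numpy as np

def psnr(u, ref, max_val=1.0):
    # Peak signal-to-noise ratio in dB; images normalized to [0, 1], MAX = 1.
    mse = np.mean((u - ref)**2)
    return 10.0 * np.log10(max_val**2 / mse)
```

For instance, a uniform error of 0.1 gives an MSE of 0.01 and hence a PSNR of 20 dB.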

4.2. Benchmark Images for Evaluation

Six benchmark images are used: “lena” ( 512 × 512 ), “boat” ( 512 × 512 ), “pepper” ( 256 × 256 ), “barbara” ( 512 × 512 ), “man” ( 1024 × 1024 ), and “bird” ( 128 × 128 ), as illustrated in Figure 1. These images contain both smooth regions and textured structures, making them suitable for evaluating denoising models. Each image is normalized to [ 0 , 1 ], and additive white Gaussian noise is generated using MATLAB’s imnoise with noise level σ ∈ { 15 , 20 , 25 , 30 , 35 , 40 }, covering both low- and high-noise scenarios; here σ denotes the noise level parameter of the noise generation procedure.

4.3. Parameter Settings

The regularization parameter λ , which balances noise suppression and detail preservation, is empirically chosen from the range [ 1 , 60 ]. Since the denoising performance is sensitive to the fractional order α , α is searched over [ 1.1 , 2.0 ] with step 0.1 and tuned for each test image at each noise level to maximize PSNR. The resulting PSNR values are reported in Table 1, where the best PSNR for each image and noise level (and its corresponding α ) is highlighted in bold.
We also evaluate the effect of the adaptive weighting matrix T by comparing the cases T ≠ I and T = I. The results in Figure 2 indicate that incorporating the adaptive matrix improves performance; the parameters κ and σ̂ in T are selected following the choices suggested in [17]. Moreover, the penalty parameters γ1 and γ2 mainly influence the convergence speed rather than the final denoising quality; we set γ1 = γ2 = 10α + 1 in all experiments.

4.4. Image Denoising

4.4.1. Metric Comparison (PSNR, SSIM)

We first present a quantitative comparison among TV, LLT, ATV, MCFOTV, and the proposed AFTVMC in terms of PSNR and SSIM. The denoising results for all test images at all noise levels are summarized in Table 2. Overall, AFTVMC achieves the best PSNR and/or SSIM in most cases, demonstrating its strong denoising capability over a wide range of image contents and noise levels.
More specifically, AFTVMC attains the highest PSNR for nearly all images at all noise levels. In terms of SSIM, AFTVMC also performs competitively and achieves the best results in most cases. Nevertheless, there are a few cases in which ATV yields slightly higher SSIM than AFTVMC. For example, ATV performs better on “lena” at noise level σ = 25 , on “barbara” at σ = 15 and 35, on “boat” at σ = 30 , 35 , and 40, and on “bird” at σ = 15 and 25. These results indicate that ATV may have a slight advantage in preserving structural similarity under some specific texture and noise conditions. In contrast, AFTVMC shows more stable overall performance by combining adaptive fractional-order regularization with the minimax-concave penalty, which leads to a better balance between noise removal and detail preservation.

4.4.2. Convergence Rate Comparison (PSNR vs. Iterations; Relative Error vs. Iterations)

To compare convergence behavior and denoising efficiency, we record the relative error and PSNR at each iteration for all five methods. Figure 3 illustrates the curves of (i) relative error versus iteration number and (ii) PSNR versus iteration number at σ = 30 . The relative error curves show that all methods converge rapidly, while AFTVMC and MCFOTV converge the fastest. Furthermore, the PSNR-versus-iterations curves indicate that AFTVMC reaches its peak PSNR with fewer iterations, suggesting higher denoising efficiency under the same computational budget.

4.4.3. Visual Comparison (Profile Plots First, Then Global and Local Views)

We evaluate visual performance in three aspects: intensity profile (cross-section) plots, global denoised images, and local zoom-ins.
(1)
Profile comparison.
To highlight structural fidelity along a line, we first examine the intensity profiles of the “bird” image at noise level σ = 30 along the 39th column. The profile plots are shown in Figure 4. The noisy profile exhibits large fluctuations. In certain regions, such as the second peak, the TV and ATV models deviate more from the reference profile compared with the other methods, resulting in greater contrast loss. While the reference profile rises smoothly, the denoised curves from the TV and ATV models exhibit oscillations. In flat regions, the denoised curves from the LLT and MCFOTV models also exhibit more pronounced oscillations. In comparison, the proposed model provides a closer fit to the reference profile.
(2)
Global and local visual comparisons.
Following the profile analysis, the global denoised results and corresponding local patches are shown in Figure 5 and Figure 6 for “lena”, “boat”, “pepper”, “barbara”, “man”, and “bird” at σ = 30. TV shows staircase artifacts in flat regions. TV and ATV oversmooth fine textures (e.g., brim details in “lena”, structural lines in “boat”, and fine features in “pepper”). LLT and MCFOTV suppress noise but slightly attenuate textures. In contrast, AFTVMC effectively preserves both edges and textures while reducing speckle artifacts, owing to the adaptive fractional-order regularization. Overall, AFTVMC provides the most balanced visual quality across global structures and local details.
Figure 4. Intensity profiles along column 39 of the “bird” image at noise level σ = 30. Panel (a) compares the clean image (green) with the noisy image (red). Panels (b–f) compare the clean image (green) with the profiles of the denoised images (red) obtained by TV, LLT, ATV, MCFOTV, and AFTVMC, respectively. These plots illustrate how closely each denoising method preserves the original structure and details along the selected column.
Figure 5. Denoising results of the five models for “lena”, “boat”, and “pepper” with σ = 30: the denoised images (rows 1, 3, 5) and the locally enlarged views (rows 2, 4, 6). Images from left to right are the restored results of TV, LLT, ATV, MCFOTV, and AFTVMC.
Figure 6. Denoising results of the five models for “barbara”, “man”, and “bird” with σ = 30: the denoised images (rows 1, 3, 5) and the locally enlarged views (rows 2, 4, 6). Images from left to right are the restored results of TV, LLT, ATV, MCFOTV, and AFTVMC.

5. Conclusions

In this paper, we propose a new denoising model that combines adaptive fractional-order TV and minimax-concave. We employ the ADMM iterative method to solve the new model. In numerical experiments, we compare our results with other state-of-the-art TV-based models from three perspectives: visual effects, quantitative indicators, and convergence rate. The numerical results confirm the effectiveness and superiority of the AFTVMC model. In future work, we will expand the method to address the removal of Poisson noise and Cauchy noise.

Author Contributions

Conceptualization, Y.Q. and C.D.; methodology, Y.Q.; software, Y.Q. and Y.Y.; formal analysis, Y.Q. and C.D.; data curation, Y.Q. and Y.Y.; writing—original draft preparation, Y.Q.; writing—review and editing, Y.Q. and C.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Hunan Provincial Natural Science Foundation grant number 2026JJ81207.

Institutional Review Board Statement

Not applicable. The study does not involve humans or animals.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aubert, G.; Kornprobst, P.; Aubert, G. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations; Springer: New York, NY, USA, 2006; p. 147. [Google Scholar]
  2. Wang, Y.; Wang, Z. Image denoising method based on variable exponential fractional-integer-order total variation and tight frame sparse regularization. IET Image Process. 2021, 15, 101–114. [Google Scholar] [CrossRef]
  3. Yang, J.; Ma, M.; Zhang, J.; Wang, C. Noise removal using an adaptive Euler’s elastica-based model. Vis. Comput. 2023, 39, 5485–5496. [Google Scholar] [CrossRef]
  4. Willoughby, R.A. Solutions of Ill-Posed Problems (A. N. Tikhonov and V. Y. Arsenin). SIAM Rev. 1979, 21, 266. [Google Scholar]
  5. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  6. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 8, 2487–2518. [Google Scholar] [CrossRef]
  7. Lysaker, M.; Lundervold, A.; Tai, X.-C. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Process. 2003, 12, 1579–1590. [Google Scholar] [CrossRef]
  8. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526. [Google Scholar] [CrossRef]
  9. Bredies, K.; Valkonen, T. Inverse problems with second-order total generalized variation constraints. arXiv 2020, arXiv:2005.09725. [Google Scholar] [CrossRef]
  10. Rahman Chowdhury, M.; Zhang, J.; Qin, J.; Lou, Y. Poisson image denoising based on fractional-order total variation. Inverse Probl. Imaging 2020, 14, 77–96. [Google Scholar] [CrossRef]
  11. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Elsevier: New York, NY, USA, 1998. [Google Scholar]
  12. Chen, D.; Chen, Y.; Xue, D. Three fractional-order TV-L2 models for image denoising. J. Comput. Inf. Syst. 2013, 9, 4773–4780. [Google Scholar]
  13. Tian, D.; Xue, D.; Wang, D. A fractional-order adaptive regularization primal–dual algorithm for image denoising. Inf. Sci. 2015, 296, 147–159. [Google Scholar] [CrossRef]
  14. Dong, F.; Chen, Y. A fractional-order derivative based variational framework for image denoising. Inverse Probl. Imaging 2016, 10, 27. [Google Scholar] [CrossRef]
  15. Ben-Loghfyry, A.; Charkaoui, A.; Bouchriti, A.; Alaa, N.E. A novel evolutionary model using the Caputo time-fractional derivative and noise estimator for image denoising and contrast enhancement. Comput. Math. Appl. 2026, 204, 305–346. [Google Scholar] [CrossRef]
  16. Chen, X.; Zhao, P. Image denoising based on the fractional-order total variation and the minimax-concave. Signal Image Video Process. 2023, 18, 1601–1608. [Google Scholar] [CrossRef]
  17. Pang, Z.-F.; Zhou, Y.-M.; Wu, T.; Li, D.-J. Image denoising via a new anisotropic total-variation-based model. Signal Process. Image Commun. 2019, 74, 140–152. [Google Scholar] [CrossRef]
  18. Wu, T.; Gu, X.; Wang, Y.; Zeng, T. Adaptive total variation based image segmentation with semi-proximal alternating minimization. Signal Process. 2021, 183, 108017. [Google Scholar] [CrossRef]
  19. Du, H.; Liu, Y. Minmax-concave total variation denoising. Signal Image Video Process. 2018, 12, 1027–1034. [Google Scholar] [CrossRef]
  20. Bai, J.; Feng, X.-C. Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Process. 2007, 16, 2492–2502. [Google Scholar] [CrossRef]
  21. Zhang, H.; Wang, Y. Edge adaptive directional total variation. J. Eng. 2013, 2013, 61–62. [Google Scholar] [CrossRef]
  22. Grasmair, M.; Lenzen, F. Anisotropic total variation filtering. Appl. Math. Optim. 2010, 62, 323–339. [Google Scholar] [CrossRef]
  23. Boltt, E.M.; Chartrand, R.; Esedoğlu, S.; Schultz, P.; Vixie, K.R. Graduated adaptive image denoising: Local compromise between total variation and isotropic diffusion. Adv. Comput. Math. 2009, 31, 61–85. [Google Scholar] [CrossRef]
  24. Pang, Z.-F.; Zhang, H.-L.; Luo, S.; Zeng, T. Image denoising based on the adaptive weighted tvp regularization. Signal Process. 2020, 167, 107325. [Google Scholar] [CrossRef]
  25. Zhang, J.; Wei, Z.; Xiao, L. Adaptive fractional-order multi-scale method for image denoising. J. Math. Imaging Vis. 2012, 43, 39–49. [Google Scholar] [CrossRef]
  26. Eckstein, J.; Yao, W. Augmented lagrangian and alternating direction methods for convex optimization: A tutorial and some illustrative computational results. RUTCOR Res. Rep. 2012, 32, 44. [Google Scholar]
  27. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
  28. Ilham, W.; Ahmad, A. A comprehensive review of ConvNeXt architecture in image classification: Performance, applications, and prospects. Int. J. Adv. Comput. Inform. 2026, 2, 108–114. [Google Scholar] [CrossRef]
Figure 1. Benchmark images used for evaluation.
Figure 2. Curves of the PSNR of the AFTVMC model with T ≠ I and T = I.
Figure 3. The curves of relative error and PSNR versus the number of iterations at σ = 30.
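The relative-error curve of Figure 3 tracks how quickly the ADMM iterates settle. A minimal sketch, assuming the common definition ‖u_k − u_{k−1}‖ / ‖u_k‖ (the paper's exact stopping criterion may differ):

```python
import math

def relative_error(u_new, u_old):
    # Common iterative-solver convergence measure: ||u_k - u_{k-1}|| / ||u_k||.
    # Images are treated here as flat lists of pixel values.
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(u_new, u_old)))
    den = math.sqrt(sum(a * a for a in u_new))
    return num / den
```

Iteration typically stops once this quantity drops below a small tolerance (e.g. 1e-4).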
Table 1. The PSNR of the AFTVMC model with various α and noise levels.

| Image | σ | α = 1.1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 1.7 | 1.8 | 1.9 | 2.0 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| lena | 15 | 32.0150 | 32.1642 | 32.2931 | 32.4086 | 32.5048 | 32.5804 | 32.6341 | 32.6671 | 32.6831 | 32.6845 |
| | 20 | 31.2245 | 31.2181 | 31.2186 | 31.2308 | 31.2501 | 31.2684 | 31.2826 | 31.2882 | 31.2837 | 31.2685 |
| | 25 | 29.9022 | 30.0252 | 30.1219 | 30.2083 | 30.2804 | 30.3387 | 30.3795 | 30.4044 | 30.4155 | 30.4118 |
| | 30 | 29.5136 | 29.5232 | 29.5307 | 29.5531 | 29.5792 | 29.6019 | 29.6181 | 29.6252 | 29.6221 | 29.6067 |
| | 35 | 28.9406 | 28.9432 | 28.9485 | 28.9656 | 28.9909 | 29.0187 | 29.0430 | 29.0564 | 29.0567 | 29.0419 |
| | 40 | 28.2809 | 28.2903 | 28.2959 | 28.3109 | 28.3344 | 28.3549 | 28.3694 | 28.3739 | 28.3687 | 28.3495 |
| boat | 15 | 30.6809 | 30.7453 | 30.7794 | 30.7900 | 30.7816 | 30.7570 | 30.7170 | 30.6649 | 30.6016 | 30.5318 |
| | 20 | 29.4218 | 29.4636 | 29.4760 | 29.4682 | 29.4441 | 29.4069 | 29.3610 | 29.3062 | 29.2452 | 29.1790 |
| | 25 | 28.3858 | 28.4466 | 28.4741 | 28.4815 | 28.4717 | 28.4477 | 28.4129 | 28.3705 | 28.3227 | 28.2682 |
| | 30 | 27.4005 | 27.5019 | 27.5670 | 27.6073 | 27.6268 | 27.6287 | 27.6151 | 27.5901 | 27.5581 | 27.5198 |
| | 35 | 26.8944 | 26.9548 | 26.9838 | 26.9926 | 26.9856 | 26.9684 | 26.9414 | 26.9064 | 26.8692 | 26.8279 |
| | 40 | 26.4076 | 26.4378 | 26.4430 | 26.4317 | 26.4100 | 26.3820 | 26.3478 | 26.3095 | 26.2708 | 26.2300 |
| pepper | 15 | 31.8172 | 31.8304 | 31.8256 | 31.8282 | 31.8431 | 31.8566 | 31.8562 | 31.8407 | 31.8101 | 31.7573 |
| | 20 | 30.2425 | 30.3138 | 30.3628 | 30.4130 | 30.4597 | 30.4994 | 30.5273 | 30.5438 | 30.5451 | 30.5230 |
| | 25 | 29.0004 | 29.0650 | 29.0982 | 29.1279 | 29.1569 | 29.1824 | 29.2072 | 29.2304 | 29.2475 | 29.2526 |
| | 30 | 28.0025 | 28.0659 | 28.0971 | 28.1219 | 28.1504 | 28.1857 | 28.2230 | 28.2538 | 28.2698 | 28.2696 |
| | 35 | 27.2813 | 27.3449 | 27.3744 | 27.3999 | 27.4244 | 27.4524 | 27.4835 | 27.5097 | 27.5326 | 27.5466 |
| | 40 | 26.6106 | 26.6678 | 26.6777 | 26.6802 | 26.6873 | 26.6980 | 26.7126 | 26.7312 | 26.7475 | 26.7598 |
| barbara | 15 | 28.1651 | 28.3136 | 28.4536 | 28.5868 | 28.7131 | 28.8309 | 28.9384 | 29.0317 | 29.1079 | 29.1640 |
| | 20 | 26.7034 | 26.8447 | 26.9725 | 27.0890 | 27.1933 | 27.2829 | 27.3559 | 27.4099 | 27.4429 | 27.4545 |
| | 25 | 25.5283 | 25.6545 | 25.7649 | 25.8628 | 25.9495 | 26.0226 | 26.0792 | 26.1174 | 26.1361 | 26.1347 |
| | 30 | 24.6202 | 24.7366 | 24.8396 | 24.9327 | 25.0140 | 25.0820 | 25.1342 | 25.1694 | 25.1882 | 25.1910 |
| | 35 | 24.0048 | 24.1009 | 24.1857 | 24.2605 | 24.3250 | 24.3767 | 24.4147 | 24.4379 | 24.4480 | 24.4450 |
| | 40 | 23.6436 | 23.7134 | 23.7717 | 23.8222 | 23.8641 | 23.8967 | 23.9199 | 23.9310 | 23.9302 | 23.9182 |
| man | 15 | 30.7079 | 30.8562 | 30.9838 | 31.0928 | 31.1833 | 31.2554 | 31.3105 | 31.3489 | 31.3721 | 31.3809 |
| | 20 | 29.4806 | 29.6058 | 29.7081 | 29.7912 | 29.8554 | 29.9031 | 29.9361 | 29.9547 | 29.9619 | 29.9590 |
| | 25 | 28.4486 | 28.5547 | 28.6377 | 28.7037 | 28.7554 | 28.7945 | 28.8221 | 28.8384 | 28.8460 | 28.8452 |
| | 30 | 27.6053 | 27.6857 | 27.7465 | 27.7962 | 27.8344 | 27.8635 | 27.8845 | 27.8968 | 27.9030 | 27.9025 |
| | 35 | 26.7526 | 26.8277 | 26.8844 | 26.9305 | 26.9684 | 27.0006 | 27.0254 | 27.0403 | 27.0500 | 27.0537 |
| | 40 | 26.0632 | 26.1182 | 26.1596 | 26.1944 | 26.2244 | 26.2504 | 26.2708 | 26.2836 | 26.2921 | 26.2956 |
| bird | 15 | 32.9145 | 32.9772 | 33.0805 | 33.2178 | 33.3593 | 33.4964 | 33.6159 | 33.7186 | 33.8028 | 33.8453 |
| | 20 | 31.8634 | 31.8124 | 31.8500 | 31.9392 | 32.0521 | 32.1652 | 32.2760 | 32.3801 | 32.4687 | 32.5038 |
| | 25 | 30.8911 | 30.8039 | 30.7820 | 30.8171 | 30.9071 | 31.0114 | 31.1165 | 31.2108 | 31.2788 | 31.3121 |
| | 30 | 29.9077 | 29.8697 | 29.9073 | 29.9796 | 30.0720 | 30.1900 | 30.3067 | 30.4125 | 30.4997 | 30.5480 |
| | 35 | 29.1492 | 28.9984 | 28.9337 | 28.9606 | 29.0477 | 29.1588 | 29.2799 | 29.3874 | 29.4696 | 29.5246 |
| | 40 | 28.3207 | 28.3018 | 28.3132 | 28.3550 | 28.4065 | 28.4934 | 28.5983 | 28.7033 | 28.7878 | 28.8426 |

α is the fractional order in model (1), and σ is the noise level. The bold entries denote the best PSNR for each image at each noise level, and the corresponding α is adopted.
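The selection rule in the footnote above (for each image and noise level, adopt the α that attains the highest PSNR) can be sketched as follows; the sample dictionary uses values taken directly from Table 1 for "lena" at σ = 20:

```python
def best_alpha(psnr_by_alpha):
    # Return the fractional order alpha that attains the highest PSNR.
    return max(psnr_by_alpha, key=psnr_by_alpha.get)

# A subset of the "lena", sigma = 20 row of Table 1 (alpha -> PSNR).
lena_sigma20 = {1.1: 31.2245, 1.4: 31.2308, 1.7: 31.2826, 1.8: 31.2882, 2.0: 31.2685}
```

For this row, `best_alpha(lena_sigma20)` picks α = 1.8, matching the table's best entry.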
Table 2. SSIM and PSNR obtained by five denoising models under all noise levels.

| Image | Model | σ = 15 | σ = 20 | σ = 25 | σ = 30 | σ = 35 | σ = 40 |
|---|---|---|---|---|---|---|---|
| lena | TV | 32.2737 / 0.8573 | 30.9751 / 0.8296 | 30.0408 / 0.8104 | 29.2534 / 0.7894 | 28.6340 / 0.7674 | 27.9789 / 0.7640 |
| | LLT | 32.4521 / 0.8567 | 31.1233 / 0.8319 | 30.1334 / 0.8113 | 29.2943 / 0.7898 | 28.7330 / 0.7755 | 28.0409 / 0.7628 |
| | ATV | 32.4256 / 0.8597 | 31.1337 / 0.8349 | 30.2252 / 0.8154 | 29.4507 / 0.7964 | 28.8399 / 0.7782 | 28.1688 / 0.7584 |
| | MCFOTV | 32.4717 / 0.8581 | 31.1446 / 0.8308 | 30.1737 / 0.8116 | 29.3398 / 0.7901 | 28.7714 / 0.7764 | 28.0836 / 0.7647 |
| | AFTVMC | 32.6845 / 0.8618 | 31.2882 / 0.8426 | 30.4155 / 0.8132 | 29.6252 / 0.8024 | 29.0564 / 0.7917 | 28.3739 / 0.7738 |
| boat | TV | 30.5012 / 0.8140 | 29.1690 / 0.7754 | 28.1442 / 0.7437 | 27.3797 / 0.7158 | 26.7040 / 0.6931 | 26.1434 / 0.6715 |
| | LLT | 30.5139 / 0.8126 | 29.1363 / 0.7730 | 28.1021 / 0.7394 | 27.2752 / 0.7127 | 26.5839 / 0.6882 | 25.9928 / 0.6660 |
| | ATV | 30.6820 / 0.8172 | 29.3788 / 0.7801 | 28.4014 / 0.7486 | 27.6172 / 0.7227 | 26.9460 / 0.7000 | 26.3848 / 0.6799 |
| | MCFOTV | 30.6242 / 0.8158 | 29.2868 / 0.7779 | 28.2854 / 0.7453 | 27.4293 / 0.7161 | 26.7982 / 0.6910 | 26.2378 / 0.6704 |
| | AFTVMC | 30.7900 / 0.8194 | 29.4760 / 0.7817 | 28.4815 / 0.7506 | 27.6287 / 0.7225 | 26.9926 / 0.6981 | 26.4430 / 0.6779 |
| pepper | TV | 31.3150 / 0.8820 | 29.8609 / 0.8530 | 28.7007 / 0.8322 | 27.6987 / 0.8049 | 26.9277 / 0.7862 | 26.1661 / 0.7620 |
| | LLT | 31.2961 / 0.8805 | 29.8454 / 0.8521 | 28.5695 / 0.8187 | 27.5487 / 0.7925 | 26.8589 / 0.7675 | 26.1027 / 0.7554 |
| | ATV | 31.7219 / 0.8911 | 30.3094 / 0.8700 | 29.1729 / 0.8450 | 28.1438 / 0.8207 | 27.3618 / 0.8019 | 26.5427 / 0.7796 |
| | MCFOTV | 31.3298 / 0.8834 | 29.8771 / 0.8544 | 28.5696 / 0.8187 | 27.5679 / 0.7914 | 26.8582 / 0.7772 | 26.1028 / 0.7554 |
| | AFTVMC | 31.8566 / 0.8999 | 30.5451 / 0.8783 | 29.2526 / 0.8496 | 28.2698 / 0.8244 | 27.5466 / 0.8054 | 26.7598 / 0.7878 |
| barbara | TV | 28.5862 / 0.8100 | 26.9629 / 0.7603 | 25.8187 / 0.7169 | 25.0373 / 0.6831 | 24.3833 / 0.6592 | 23.9037 / 0.6431 |
| | LLT | 28.9141 / 0.8133 | 27.1623 / 0.7549 | 25.8664 / 0.6987 | 24.9474 / 0.6673 | 24.1913 / 0.6285 | 23.6374 / 0.6101 |
| | ATV | 28.8497 / 0.8275 | 27.1908 / 0.7770 | 25.9690 / 0.7359 | 25.0591 / 0.6912 | 24.3919 / 0.6673 | 23.9083 / 0.6361 |
| | MCFOTV | 28.9091 / 0.8150 | 27.1626 / 0.7552 | 25.8695 / 0.7034 | 24.9476 / 0.6673 | 24.1914 / 0.6285 | 23.6375 / 0.6101 |
| | AFTVMC | 29.1640 / 0.8196 | 27.4545 / 0.7776 | 26.1361 / 0.7267 | 25.1910 / 0.6947 | 24.4480 / 0.6617 | 23.9310 / 0.6447 |
| man | TV | 30.8834 / 0.7877 | 29.4651 / 0.7401 | 28.3854 / 0.7011 | 27.4545 / 0.6691 | 26.6395 / 0.6367 | 25.9203 / 0.6167 |
| | LLT | 31.1876 / 0.8004 | 29.7420 / 0.7554 | 28.6161 / 0.7183 | 27.6650 / 0.6853 | 26.8249 / 0.6571 | 26.0854 / 0.6322 |
| | ATV | 31.0585 / 0.7913 | 29.6495 / 0.7452 | 28.5643 / 0.7073 | 27.6194 / 0.6746 | 26.7790 / 0.6458 | 26.0311 / 0.6205 |
| | MCFOTV | 31.1877 / 0.8004 | 29.7480 / 0.7549 | 28.6246 / 0.7179 | 27.6736 / 0.6850 | 26.8249 / 0.6571 | 26.0855 / 0.6322 |
| | AFTVMC | 31.3809 / 0.8041 | 29.9619 / 0.7608 | 28.8460 / 0.7246 | 27.9030 / 0.6934 | 27.0537 / 0.6651 | 26.2956 / 0.6409 |
| bird | TV | 32.9522 / 0.9006 | 31.5693 / 0.8777 | 30.4361 / 0.8520 | 29.4807 / 0.8323 | 28.6358 / 0.8235 | 27.9817 / 0.8055 |
| | LLT | 32.7531 / 0.8740 | 31.2190 / 0.8472 | 29.9444 / 0.8164 | 29.3278 / 0.8071 | 28.2246 / 0.7881 | 27.7324 / 0.7808 |
| | ATV | 33.3542 / 0.9109 | 32.0309 / 0.8890 | 30.9539 / 0.8753 | 29.9178 / 0.8485 | 29.2357 / 0.8443 | 28.2889 / 0.8159 |
| | MCFOTV | 32.7533 / 0.8740 | 31.2230 / 0.8433 | 29.9446 / 0.8164 | 29.3279 / 0.8071 | 28.2246 / 0.7881 | 27.7325 / 0.7808 |
| | AFTVMC | 33.8453 / 0.9070 | 32.5038 / 0.8915 | 31.3121 / 0.8716 | 30.5480 / 0.8582 | 29.5246 / 0.8457 | 28.8426 / 0.8287 |

Each cell lists PSNR / SSIM. Bold values represent the maximum PSNR or SSIM values among different denoising models for the corresponding noisy image.
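The PSNR values in the tables follow the standard definition 10·log10(peak² / MSE) for 8-bit images (peak = 255 assumed). A minimal sketch:

```python
import math

def psnr(reference, estimate, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    # Images are treated as flat lists of pixel values; higher is better.
    mse = sum((a - b) ** 2 for a, b in zip(reference, estimate)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```

SSIM is more involved (it compares local means, variances, and covariances over sliding windows) and is omitted here; library implementations such as scikit-image's `structural_similarity` are typically used in practice.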

Share and Cite

MDPI and ACS Style

Qin, Y.; Du, C.; Yin, Y. Adaptive Fractional-Order Total Variation and Minimax-Concave Based Image Denoising Model. Mathematics 2026, 14, 1105. https://doi.org/10.3390/math14071105
