Article

Image Inpainting with Fractional Laplacian Regularization: An Lp Norm Approach

1 School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
2 College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(14), 2254; https://doi.org/10.3390/math13142254
Submission received: 14 May 2025 / Revised: 5 July 2025 / Accepted: 8 July 2025 / Published: 11 July 2025
(This article belongs to the Special Issue Numerical and Computational Methods in Engineering)

Abstract

This study presents an image inpainting model based on an energy functional that incorporates the $L^p$ norm of the fractional Laplacian operator as a regularization term and the $H^{-1}$ norm as a fidelity term. Using the properties of the fractional Laplacian operator, the $L^p$ norm is employed with an adjustable parameter $p$ to enhance the operator's ability to restore fine details in various types of images. The replacement of the conventional $L^2$ norm with the $H^{-1}$ norm enables better preservation of global structures in denoising and restoration tasks. This paper introduces a diffusion partial differential equation by adding an intermediate term and provides a theoretical proof of the existence and uniqueness of its solution in Sobolev spaces. Furthermore, it demonstrates that the solution converges to the minimizer of the energy functional as time approaches infinity. Numerical experiments that compare the proposed method with traditional and deep learning models validate its effectiveness in image inpainting tasks.
MSC:
68U10; 94A08

1. Introduction

Image inpainting is a vital task in image processing, with applications ranging from restoring old paintings and removing unwanted scratches or text from images to recovering lost data during transmission. Mathematically, images are considered as functions defined on a continuum, though in practice, digital images are discrete representations of their continuous counterparts. Formally, a digital image can be represented as a matrix $u \in \mathbb{R}^{I \times J}$. Due to various interferences in the image acquisition and transmission processes, the observed image data, $u_0 \in \mathbb{R}^{I \times J}$, is often expressed as
$$u_0 = A u + \eta,$$
where $A$ is a degenerate operator and $\eta \in \mathbb{R}^{I \times J}$ is random noise, typically assumed to be white Gaussian noise in this context. Thus, image inpainting becomes an inverse problem where the goal is to estimate $u \in \mathbb{R}^{I \times J}$ from $u_0$. The problem is ill-posed, making its solution nontrivial and requiring specialized methods for effective restoration.
A variety of techniques have been proposed to solve the image inpainting problem. Among the most widely used are diffusion-based methods, which leverage the information from neighboring regions to propagate data into damaged areas. These methods typically use partial differential equation (PDE)-based and variational-based frameworks for image restoration. Bertalmio et al. [1] introduced the first PDE-based inpainting model, which was simple but prone to producing blurred results due to the smoothness of differential operators. Subsequent work [2,3] sought to refine this approach by improving the handling of image features, but the issue of blur remained.
Variational methods, based on minimizing an appropriately designed energy functional, have become another cornerstone of image inpainting. A typical variational formulation is given by
$$\min_{u \in \mathbb{R}^{I \times J}} \frac{1}{2}\|A u - u_0\|_F^2 + \lambda R(u),$$
where λ > 0 is a regularization parameter, and R ( u ) is a regularization term that controls the smoothness of the solution.  Total variation (TV) models, introduced by Chan and Shen [4], are particularly popular due to their ability to preserve edges while filling in missing regions. However, TV methods often struggle with large missing areas or disconnected regions, leading to the staircase effect [5].  To address these issues, higher-order models such as those by Lysaker et al. [6] and adaptive PDE methods have been developed, offering improved smoothness and edge preservation [7,8].
In recent years, fractional-order PDEs have gained attention for image processing, offering better control over the smoothness and sharpness of restored images.  Fractional differential operators replace standard differential operators with their fractional counterparts, leading to models that mitigate the oversmoothing effect commonly observed in traditional PDE-based methods [9,10]. These fractional models have shown promising results in both theoretical studies and practical applications [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28].
Another notable approach to image inpainting is exemplar-based methods, which grow image regions pixel by pixel or patch by patch while maintaining coherence with the surrounding texture. These methods, popularized by Criminisi et al. [29], are especially effective in images with repetitive textures but struggle in the absence of suitable matching samples. Recent work has improved these methods by introducing nonlocal techniques [30,31] and hierarchical schemes [32], though structural connectivity remains a challenge.
In more recent years, deep learning-based inpainting techniques have achieved impressive results by leveraging generative models, such as Variational Autoencoders (VAEs) [33,34] and Generative Adversarial Networks (GANs) [35,36], to learn both high- and low-frequency features of the image. These models generate plausible content for missing regions by learning the underlying structure and texture of the image. Several architectures have been proposed to address different types of inpainting challenges, such as irregular hole filling [37], high-resolution restoration [38], and multiple-solution generation [39].
In the context of image inpainting, the use of the fractional Laplacian has gained significant interest in recent years. The fractional Laplacian, a nonlocal operator, provides a more accurate representation of spatial interactions compared to traditional local differential operators. The operator is defined as
$$(-\Delta)^{\frac{\alpha}{2}} u(x) := C_{n,\alpha}\, \mathrm{P.V.} \int_{\mathbb{R}^n} \frac{u(x) - u(y)}{|x - y|^{n + \alpha}}\, dy, \qquad C_{n,\alpha} = \frac{2^{\alpha}\,\Gamma\!\left(\frac{n+\alpha}{2}\right)}{\pi^{\frac{n}{2}}\,\left|\Gamma\!\left(-\frac{\alpha}{2}\right)\right|},$$
where $\alpha$ is a real number between 0 and 2, and $\mathrm{P.V.}$ denotes the principal value of the integral. Recent work [40,41] has focused on numerical methods for discretizing the fractional Laplacian, demonstrating its ability to improve image quality while avoiding the oversmoothing problem common to traditional methods.
Our original model [23] was based on the fractional Laplacian operator, with the $L^2$ norm serving as the regularization term and the $L^2$ norm as the fidelity term. However, that paper lacked a theoretical proof for the solution of the diffusion equation. It primarily explored the application of the fractional Laplacian operator in image inpainting tasks, and the inpainting results were quite promising. The present paper optimizes the original model by using the $L^p$ norm of the fractional Laplacian operator as the regularization term and the $H^{-1}$ norm as the fidelity term, and it completes the theoretical framework. The inpainting results not only surpass those of the original model but also outperform some other models that were previously incomparable.
In summary, the main contributions of this work are as follows:
  • Proposal of a novel image inpainting model: We introduce an image inpainting model based on an energy functional that combines the $L^p$ norm of the fractional Laplacian operator as a regularization term and the $H^{-1}$ norm as a fidelity term.
  • Theoretical derivation of the PDE system: We rigorously derive the Euler–Lagrange equations and the corresponding nonlinear partial differential equation (PDE) system within the variational framework. We prove the existence and uniqueness of weak solutions to this PDE system and show that, as time tends to infinity, the weak solution converges to the minimizer of the original energy functional.
  • Stable numerical method: We design a stable numerical method for solving the PDE system, provide a discrete scheme for the fractional Laplacian operator, and conduct a theoretical analysis of the method’s stability and convergence.

2. Proposed Model

First of all, we explain some notation used in this paper. We use the notation $\mathcal{F}$ to stand for the Fourier transform, defined as
$$(\mathcal{F} u)(\xi) = \int_{\mathbb{R}^n} u(x)\, e^{-i x \cdot \xi}\, dx,$$
where $i$ stands for the imaginary unit and $u \in L^1(\mathbb{R}^n)$. We use $\mathcal{S}(\mathbb{R}^n)$ to denote the Schwartz space. For $u \in \mathcal{S}(\mathbb{R}^n)$, the fractional Laplacian operator $(-\Delta)^{\frac{\alpha}{2}} u$ is defined as
$$(-\Delta)^{\frac{\alpha}{2}} u = \mathcal{F}^{-1}\big(|\xi|^{\alpha}\, \mathcal{F} u(\xi)\big).$$
The generalized fractional Laplacian operator is defined on the distribution space $\mathcal{S}'(\mathbb{R}^n)$, the dual space of $\mathcal{S}(\mathbb{R}^n)$, as
$$\big\langle (-\Delta)^{\frac{\alpha}{2}} u, \varphi \big\rangle = \big\langle u, (-\Delta)^{\frac{\alpha}{2}} \varphi \big\rangle := \int_{\mathbb{R}^n} u\, (-\Delta)^{\frac{\alpha}{2}} \varphi\, dx, \qquad \forall \varphi \in \mathcal{S}(\mathbb{R}^n).$$
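For intuition, the Fourier-multiplier form above can be evaluated numerically on a uniformly sampled, periodically extended grid with the discrete Fourier transform. The following NumPy sketch is only an illustration under that periodicity assumption; the function name and grid conventions are ours, not part of the model.

```python
import numpy as np

def fractional_laplacian_fft(u, alpha):
    """Apply (-Delta)^(alpha/2) to a 2D array u via the Fourier-multiplier
    definition, assuming a periodic extension of u (illustration only)."""
    n1, n2 = u.shape
    # Discrete frequencies xi = (xi_1, xi_2) on the periodic grid
    xi1 = 2.0 * np.pi * np.fft.fftfreq(n1)
    xi2 = 2.0 * np.pi * np.fft.fftfreq(n2)
    X1, X2 = np.meshgrid(xi1, xi2, indexing="ij")
    symbol = (X1**2 + X2**2) ** (alpha / 2.0)   # |xi|^alpha, equal to zero at xi = 0
    return np.real(np.fft.ifft2(symbol * np.fft.fft2(u)))

# Example: apply the operator with alpha = 1.5 to a random 64x64 image
u = np.random.rand(64, 64)
v = fractional_laplacian_fft(u, alpha=1.5)
```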
Using the above notation, we define the Bessel potential space.
Definition 1 (Bessel potential space).
Let $\alpha > 0$, $1 < p < \infty$; the Bessel potential space is defined as
$$H^{\alpha,p}(\mathbb{R}^n) = \Big\{ u \in L^p(\mathbb{R}^n) : \mathcal{F}^{-1}\big[(1 + |\xi|^2)^{\frac{\alpha}{2}}\, \mathcal{F} u\big] \in L^p(\mathbb{R}^n) \Big\}.$$
The norm of the Bessel potential space is defined as
$$\|u\|_{H^{\alpha,p}(\mathbb{R}^n)} := \big\|\mathcal{F}^{-1}\big[(1 + |\xi|^2)^{\frac{\alpha}{2}}\, \mathcal{F} u\big]\big\|_{L^p(\mathbb{R}^n)}.$$
This space has some special properties that we will use in the subsequent proofs; see Propositions 3.10 and 3.15 in [42].
Theorem 1.
For every $\alpha > 0$, $1 < p < \infty$, the space $H^{\alpha,p}(\mathbb{R}^n)$ is complete and separable.
Theorem 2 (Fractional Sobolev embedding theorem).
Let $\alpha > 0$, $1 < p < \infty$; then:
  • If $\alpha p < n$, then
    $$H^{\alpha,p}(\mathbb{R}^n) \hookrightarrow L^q(\mathbb{R}^n), \qquad q \in \Big[p, \frac{np}{n - \alpha p}\Big],$$
    and the embedding is continuous.
  • If $\alpha p = n$, then
    $$H^{\alpha,p}(\mathbb{R}^n) \hookrightarrow L^q(\mathbb{R}^n), \qquad q \in [p, \infty),$$
    and the embedding is continuous.
  • If $\alpha p > n$, then
    $$H^{\alpha,p}(\mathbb{R}^n) \hookrightarrow C^{0,\mu}(\mathbb{R}^n), \qquad 0 < \mu \le \alpha - \frac{n}{p},$$
    and the embedding is continuous.
  • Let $0 < t < \alpha$, $1 < p < \infty$; then
    $$H^{\alpha,p}(\mathbb{R}^n) \hookrightarrow H^{t,p}(\mathbb{R}^n),$$
    and the embedding is continuous.
  • Let $0 < t < \alpha$, $1 < p < \infty$ such that $(\alpha - t)p < n$; then
    $$H^{\alpha,p}(\mathbb{R}^n) \hookrightarrow H^{t,q}(\mathbb{R}^n), \qquad q := \frac{np}{n - (\alpha - t)p},$$
    and the embedding is continuous.
When processing an image $u$ defined on a finite domain, we need to extend it to the entire space $\mathbb{R}^n$ while preserving the required regularity conditions. Let $\Omega \subset \mathbb{R}^n$ be a bounded Lipschitz domain. The extension of $u$ to $\mathbb{R}^n$, denoted by $\tilde{u}$, must satisfy the following requirements:
$$\tilde{u}|_{\Omega} = u; \qquad \|\tilde{u}\|_{L^p(\mathbb{R}^n)} \le C \|u\|_{L^p(\Omega)}, \quad C > 0.$$
Without loss of generality, we use $u$ to represent the extension $\tilde{u}$, and the subsequent function extensions in this paper are constructed in the same way, so that the fractional Laplacian of a function $u$ defined on the region $\Omega$ can be expressed as
$$(-\Delta)^{\frac{\alpha}{2}} u = (-\Delta)^{\frac{\alpha}{2}} \tilde{u},$$
where $\tilde{u}$ is the extension of $u$.
We expand the definition of Bessel potential space in domain Ω .
Definition 2.
The Bessel potential space $H_0^{\alpha,p}(\Omega)$ on an open set $\Omega \subset \mathbb{R}^n$ is defined as
$$H_0^{\alpha,p}(\Omega) = \big\{ u \in H^{\alpha,p}(\mathbb{R}^n) : \operatorname{supp} u \subset \overline{\Omega} \big\}.$$
The norm is defined as
$$\|u\|_{H_0^{\alpha,p}(\Omega)} := \|u\|_{H^{\alpha,p}(\mathbb{R}^n)}.$$
Theorem 3.
Let $\Omega \subset \mathbb{R}^n$ be a Lipschitz domain; then there exists a bounded linear extension operator
$$E : H_0^{\alpha,p}(\Omega) \to H^{\alpha,p}(\mathbb{R}^n)$$
such that $E(u)|_{\Omega} = u$.
By Theorem 3, the embeddings of Theorem 2 also hold on Lipschitz domains.
Theorem 4
(Equivalent norm in $H_0^{\alpha,p}(\Omega)$). The norm
$$\|\cdot\| = \|\cdot\|_{L^p(\Omega)} + \big\|(-\Delta)^{\frac{\alpha}{2}}(\cdot)\big\|_{L^p(\Omega)}$$
is equivalent to $\|\cdot\|_{H_0^{\alpha,p}(\Omega)}$ on $H_0^{\alpha,p}(\Omega)$.

2.1. Inpainting Model

Suppose that $\Omega \subset \mathbb{R}^n$ is a Lipschitz domain and $D \subset \Omega$ is a closed set representing the inpainting area, $\alpha \in (0,2)$, $p \in (1,2]$, and $\lambda > 0$. Let
$$J(u) = \big\|(-\Delta)^{\frac{\alpha}{2}} u\big\|_{L^p(\Omega)}^p + \frac{\lambda}{2}\|u - u_0\|_{H^{-1}(\Omega \setminus D)}^2 = \int_{\Omega} \big|(-\Delta)^{\frac{\alpha}{2}} u\big|^p dx + \frac{\lambda}{2}\int_{\Omega \setminus D} \big|\nabla \Delta^{-1}(u - u_0)\big|^2 dx;$$
then our inpainting model is
$$u = \arg\min_{u \in H_0^{\alpha,p}(\Omega) \cap H^{-1}(\Omega)} J(u).$$
Our model consists of two terms. The first is the $p$-th power of the $L^p$ norm of $(-\Delta)^{\frac{\alpha}{2}} u$, and the second is the square of the $H^{-1}$ norm of $u - u_0$, where the distribution space $H^{-1}$ is the dual space of $H_0^1$. The $H^{-1}$ norm was first used for image inpainting in [43], where it is noted that not only $L^2(\Omega) \subset H^{-1}(\Omega)$, but this norm also separates oscillatory components from high-frequency components better than the $L^2$ norm, while removing noise more effectively and preserving edges.
Remark 1.
The operator $\Delta^{-1}$ denotes the weak inverse of $\Delta$. Given an open set $E$, $\Delta^{-1} : L^2(E) \to H_0^1(E)$ is defined as follows: for $f \in L^2(E)$, $\Delta^{-1} f$ is the unique weak solution in $H_0^1(E)$ of the PDE (7):
$$-\Delta \varphi = f, \quad x \in E; \qquad \varphi = 0, \quad x \in \partial E.$$
In other words, $\Delta^{-1} f = \varphi$, where $\varphi$ is a weak solution of Equation (7), i.e.,
$$\int_E \nabla \varphi \cdot \nabla \psi\, dx = \int_E f \psi\, dx, \qquad \forall \psi \in H_0^1(E).$$
By taking $\Delta^{-1} f$ in place of $\varphi$, we obtain
$$\int_E \nabla \Delta^{-1} f \cdot \nabla \psi\, dx = \int_E f \psi\, dx, \qquad \forall \psi \in H_0^1(E).$$
We will use this operator to show why the $H^{-1}$ norm can be written as $\|\nabla \Delta^{-1}(\cdot)\|_{L^2}$. Let $f \in H^{-1}(E)$; by Riesz's theorem, there exists a unique $\varphi \in H_0^1(E)$ such that
$$\langle f, \psi \rangle = \int_E \nabla \varphi \cdot \nabla \psi\, dx, \qquad \forall \psi \in H_0^1(E).$$
We extend the definition of $\Delta^{-1} : H^{-1}(E) \to H_0^1(E)$ by setting $\Delta^{-1} f = \varphi$ and substituting it for $\varphi$ in the above equation, yielding
$$\langle f, \psi \rangle = \int_E \nabla \Delta^{-1} f \cdot \nabla \psi\, dx, \qquad \forall \psi \in H_0^1(E).$$
Thus, using the definition of the $H^{-1}$ norm, we obtain
$$\|f\|_{H^{-1}(E)} = \sup_{\|\psi\|_{H_0^1(E)} = 1} \langle f, \psi \rangle = \sup_{\|\psi\|_{H_0^1(E)} = 1} \int_E \nabla \Delta^{-1} f \cdot \nabla \psi\, dx \le \sup_{\|\psi\|_{H_0^1(E)} = 1} \|\Delta^{-1} f\|_{H_0^1(E)}\, \|\psi\|_{H_0^1(E)} = \|\Delta^{-1} f\|_{H_0^1(E)},$$
while choosing $\psi = (\Delta^{-1} f)/\|\Delta^{-1} f\|_{H_0^1(E)}$ yields
$$\|f\|_{H^{-1}(E)} = \sup_{\|\psi\|_{H_0^1(E)} = 1} \langle f, \psi \rangle \ge \frac{\int_E |\nabla \Delta^{-1} f|^2\, dx}{\|\Delta^{-1} f\|_{H_0^1(E)}} = \|\Delta^{-1} f\|_{H_0^1(E)}.$$
This gives a simple but useful fact: for all $f \in H^{-1}(E)$, we have $\Delta^{-1} f \in H_0^1(E)$ and $\Delta^{-1} f = 0$ for $x \in \partial E$.
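As an illustration of Remark 1, the weak inverse $\Delta^{-1}$ can be realized discretely by solving the homogeneous-Dirichlet Poisson problem $-\Delta\varphi = f$ with a standard five-point stencil. The sketch below is only one possible realization; the grid spacing, the column-major vectorization, and the sparse direct solver are our own assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inverse_laplacian(f, h=1.0):
    """Discrete weak inverse of the Laplacian on a rectangle:
    solve -Delta(phi) = f with phi = 0 on the boundary (five-point stencil)."""
    m, n = f.shape
    # 1D second-difference operator with homogeneous Dirichlet boundary conditions
    def d2(k):
        return sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(k, k)) / h**2
    # 2D Laplacian via Kronecker sums (column-major ordering); negate to get -Delta
    A = -(sp.kron(sp.identity(n), d2(m)) + sp.kron(d2(n), sp.identity(m)))
    phi = spla.spsolve(A.tocsc(), f.reshape(-1, order="F"))
    return phi.reshape((m, n), order="F")

# Example: phi approximates Delta^{-1} f and can feed H^{-1}-type quantities
f = np.random.rand(32, 32)
phi = inverse_laplacian(f)
```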

2.2. Euler–Lagrange Equation

Our derivation of the Euler–Lagrange (E-L) equations is formal.
Suppose $u$ is the solution of problem (6). For any $v \in H_0^{\alpha,p}(\Omega) \cap H^{-1}(\Omega)$ and $\delta \in \mathbb{R}$, let
$$J(u + \delta v) = \int_{\Omega} \big|(-\Delta)^{\frac{\alpha}{2}} u + \delta (-\Delta)^{\frac{\alpha}{2}} v\big|^p dx + \frac{\lambda}{2}\Big(\int_{\Omega \setminus D} \big|\nabla \Delta^{-1}(u - u_0)\big|^2 dx + \delta^2 \int_{\Omega \setminus D} \big|\nabla \Delta^{-1} v\big|^2 dx + 2\delta\, \mathrm{Re}\int_{\Omega \setminus D} \nabla \Delta^{-1}(u - u_0) \cdot \overline{\nabla \Delta^{-1} v}\, dx\Big) = I_1(\delta) + I_2(\delta),$$
where
$$I_1(\delta) = \int_{\Omega} \big|(-\Delta)^{\frac{\alpha}{2}} u + \delta (-\Delta)^{\frac{\alpha}{2}} v\big|^p dx,$$
and
$$I_2(\delta) = \frac{\lambda}{2}\Big(\int_{\Omega \setminus D} \big|\nabla \Delta^{-1}(u - u_0)\big|^2 dx + \delta^2 \int_{\Omega \setminus D} \big|\nabla \Delta^{-1} v\big|^2 dx + 2\delta\, \mathrm{Re}\int_{\Omega \setminus D} \nabla \Delta^{-1}(u - u_0) \cdot \overline{\nabla \Delta^{-1} v}\, dx\Big).$$
Taking the derivative of (10) with respect to $\delta$, we have
$$\frac{d}{d\delta} I_1(\delta) = \int_{\Omega} p\, \big|(-\Delta)^{\frac{\alpha}{2}} u + \delta (-\Delta)^{\frac{\alpha}{2}} v\big|^{p-2} \big((-\Delta)^{\frac{\alpha}{2}} u + \delta (-\Delta)^{\frac{\alpha}{2}} v\big)\, (-\Delta)^{\frac{\alpha}{2}} v\, dx;$$
by the variational principle, setting $\delta = 0$ in (12) gives
$$\frac{d I_1(\delta)}{d \delta}\Big|_{\delta = 0} = p \int_{\Omega} \big|(-\Delta)^{\frac{\alpha}{2}} u\big|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\, (-\Delta)^{\frac{\alpha}{2}} v\, dx.$$
By the fractional Green's identity (see Lemma 3.3 in [44]), we have
$$\int_{\Omega} u\, (-\Delta)^{\frac{\alpha}{2}} v\, dx + \int_{\mathbb{R}^n \setminus \Omega} u\, \mathcal{N}_{\alpha} v\, dx = \frac{C_{n,\alpha}}{2} \iint_{\mathbb{R}^{2n} \setminus (\mathbb{R}^n \setminus \overline{\Omega})^2} \frac{(u(x) - u(y))(v(x) - v(y))}{|x - y|^{n + \alpha}}\, dx\, dy,$$
where $\mathcal{N}_{\alpha}$ is the fractional Neumann boundary operator defined as
$$\mathcal{N}_{\alpha} v(x) = C_{n,\alpha} \int_{\Omega} \frac{v(x) - v(y)}{|x - y|^{n + \alpha}}\, dy, \qquad x \in \mathbb{R}^n \setminus \Omega.$$
Then we have
$$\int_{\Omega} u\, (-\Delta)^{\frac{\alpha}{2}} v\, dx = \int_{\Omega} (-\Delta)^{\frac{\alpha}{2}} u\, v\, dx + \int_{\mathbb{R}^n \setminus \Omega} \mathcal{N}_{\alpha} u \cdot v\, dx - \int_{\mathbb{R}^n \setminus \Omega} u \cdot \mathcal{N}_{\alpha} v\, dx.$$
Using Equation (16) with $|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u$ in place of $u$, and noting that $v(x) = 0$ for $x \in \mathbb{R}^n \setminus \Omega$ (so the term containing $\mathcal{N}_{\alpha}$ acting on $|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u$ vanishes and $\mathcal{N}_{\alpha} v$ reduces to an integral over $\Omega$), after exchanging the order of integration we obtain
$$\frac{d I_1(\delta)}{d \delta}\Big|_{\delta = 0} = p \int_{\Omega} (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\big)\, v\, dx + p\, C_{n,\alpha} \int_{\Omega} \Big(\int_{\mathbb{R}^n \setminus \Omega} \frac{|(-\Delta)^{\frac{\alpha}{2}} u(y)|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u(y)}{|x - y|^{n + \alpha}}\, dy\Big) v(x)\, dx.$$
Taking the derivative of (11) with respect to $\delta$, letting $\delta = 0$, and using Equation (8) with $\Delta^{-1}(u - u_0)$ in place of $\psi$ and $v$ in place of $f$, we derive
$$\frac{d I_2(\delta)}{d \delta}\Big|_{\delta = 0} = \lambda \int_{\Omega \setminus D} \nabla \Delta^{-1}(u - u_0) \cdot \nabla \Delta^{-1} v\, dx = \lambda \int_{\Omega \setminus D} \Delta^{-1}(u - u_0)\, v\, dx;$$
then
$$0 = \frac{d J(\delta)}{d \delta}\Big|_{\delta = 0} = \frac{d I_1(\delta)}{d \delta}\Big|_{\delta = 0} + \frac{d I_2(\delta)}{d \delta}\Big|_{\delta = 0} = \int_{\Omega} \Big[ p\, (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\big) + p\, C_{n,\alpha} \int_{\mathbb{R}^n \setminus \Omega} \frac{|(-\Delta)^{\frac{\alpha}{2}} u(y)|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u(y)}{|x - y|^{n + \alpha}}\, dy + \lambda\, \chi_{\Omega \setminus D}\, \Delta^{-1}(u - u_0) \Big] v\, dx,$$
from which we obtain the Euler–Lagrange equation
$$p\, (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\big) + p\, C_{n,\alpha} \int_{\mathbb{R}^n \setminus \Omega} \frac{|(-\Delta)^{\frac{\alpha}{2}} u(y)|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u(y)}{|x - y|^{n + \alpha}}\, dy = -\lambda\, \chi_{\Omega \setminus D}\, \Delta^{-1}(u - u_0), \qquad x \in \Omega,$$
since we need this equality only for $x \in \Omega$; moreover, the integral term does not exist for $x \in \mathbb{R}^n \setminus \Omega$.
Introduce the intermediate variable $\omega = \lambda\, \chi_{\Omega \setminus D}\, \Delta^{-1}(u - u_0)$, where $\Delta^{-1}$ is defined from $H^{-1}(\Omega \setminus D)$ to $H_0^1(\Omega \setminus D)$. Applying Equation (8) with $u - u_0$ in place of $f$ and $v \in C_0^{\infty}(\Omega \setminus D)$ in place of $\psi$, we obtain
$$\lambda \int_{\Omega \setminus D} (u_0 - u)\, v\, dx = -\int_{\Omega \setminus D} \lambda\, \nabla \Delta^{-1}(u - u_0) \cdot \nabla v\, dx = -\int_{\Omega \setminus D} \nabla \omega \cdot \nabla v\, dx = \int_{\Omega \setminus D} \Delta \omega\, v\, dx,$$
and (20) can be further written as
$$\begin{cases} p\, (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\big) + p\, C_{n,\alpha} \displaystyle\int_{\mathbb{R}^n \setminus \Omega} \frac{|(-\Delta)^{\frac{\alpha}{2}} u(y)|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u(y)}{|x - y|^{n + \alpha}}\, dy + \omega = 0, & x \in \Omega, \\ \Delta \omega + \lambda (u - u_0) = 0, & x \in \Omega \setminus D, \\ \omega = 0, & x \in D \cup \partial(\Omega \setminus D). \end{cases}$$
The Euler–Lagrange Equation (20) is the variational gradient of the minimization problem (6). Using it, we can construct diffusion equations that approach the minimizer through time evolution; this is also known as gradient flow. Gradient flow is a dynamic process that describes how a function or system evolves over time to minimize a certain energy functional: its core idea is to evolve along the negative gradient direction of the energy functional, gradually approaching the minimum.
$$\begin{cases} u_t = -p\, (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\big) - p\, C_{n,\alpha} \displaystyle\int_{\mathbb{R}^n \setminus \Omega} \frac{|(-\Delta)^{\frac{\alpha}{2}} u(t,y)|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u(t,y)}{|x - y|^{n + \alpha}}\, dy - \omega, & (t,x) \in (0,T) \times \Omega, \\ \omega_t = \Delta \omega + \lambda (u - u_0), & (t,x) \in (0,T) \times (\Omega \setminus D), \\ u(0,x) = u_0, \quad \omega(0,x) = 0, & x \in \Omega, \\ \omega(t,x) = 0, & (t,x) \in (0,T) \times \big(D \cup \partial(\Omega \setminus D)\big). \end{cases}$$

2.3. Model Analysis

In this section, we will prove that diffusion Equation (23) yields a unique weak solution. The main reason for Equation (23) yielding a unique weak solution is that it consists of monotone operators. Our proof follows the book [45] about monotone operators in nonlinear partial differential equations while also drawing upon established theoretical frameworks in fractional PDE analysis [46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65]. First, we define the monotone operators.
Define the operator
$$T(u) = (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\big) + C_{n,\alpha} \int_{\mathbb{R}^n \setminus \Omega} \frac{|(-\Delta)^{\frac{\alpha}{2}} u(y)|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u(y)}{|x - y|^{n + \alpha}}\, dy,$$
and the corresponding weak form $T_{\alpha,p} : H_0^{\alpha,p}(\Omega) \times H_0^{\alpha,p}(\Omega) \to \mathbb{R}$ as
$$T_{\alpha,p}(u,v) = \int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\, (-\Delta)^{\frac{\alpha}{2}} v\, dx.$$
Let $A_p(x) = |x|^{p-2} x$. It is easy to see that $A_p(x)$ is continuous when $p > 1$ and increasing since $A_p'(x) \ge 0$. Moreover, it can be regarded as an operator from $L^p(\Omega)$ to $L^{p'}(\Omega) = L^q(\Omega)$ with $\|A_p(u)\|_{L^q(\Omega)} = \|u\|_{L^p(\Omega)}^{p-1}$. Hence, $T_{\alpha,p}(u,v)$ can be seen as a linear functional in the second variable on $H_0^{\alpha,p}(\Omega)$, which means $T_{\alpha,p}(u,v) = \big\langle A_p\big((-\Delta)^{\frac{\alpha}{2}} u\big), (-\Delta)^{\frac{\alpha}{2}} v \big\rangle$.
$T_{\alpha,p}$ has the following properties:
  •  Non-negative: $\forall u \in H_0^{\alpha,p}(\Omega)$,
    $$T_{\alpha,p}(u,u) = \int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\, (-\Delta)^{\frac{\alpha}{2}} u\, dx = \int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} u|^p\, dx \ge 0.$$
  •  Bounded, i.e., $\forall u, v \in H_0^{\alpha,p}(\Omega)$, $T_{\alpha,p}(u,v) < \infty$:
    $$T_{\alpha,p}(u,v) = \int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\, (-\Delta)^{\frac{\alpha}{2}} v\, dx \le \int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} u|^{p-1}\, |(-\Delta)^{\frac{\alpha}{2}} v|\, dx \le \Big(\int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} u|^{q(p-1)} dx\Big)^{\frac{1}{q}} \Big(\int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} v|^{p} dx\Big)^{\frac{1}{p}} \le \|u\|_{H_0^{\alpha,p}(\Omega)}^{p-1}\, \|v\|_{H_0^{\alpha,p}(\Omega)}.$$
  •  Continuous in each variable.
    Let $\{u_k\}_{k=1}^{\infty} \subset H_0^{\alpha,p}(\Omega)$, $v \in H_0^{\alpha,p}(\Omega)$, and $u_k \to u$ in $H_0^{\alpha,p}(\Omega)$; $(-\Delta)^{\frac{\alpha}{2}}$ is a bounded linear operator from $H_0^{\alpha,p}(\Omega)$ to $L^p(\Omega)$, so $(-\Delta)^{\frac{\alpha}{2}} u_k \to (-\Delta)^{\frac{\alpha}{2}} u$ in $L^p(\Omega)$. We use the fact that, when $p \in (1,2]$,
    $$\big|\, |s|^{p-2} s - |t|^{p-2} t\, \big| \le C_p\, |s - t|^{p-1}, \qquad \forall s, t \in \mathbb{R},$$
    where $C_p$ is a constant independent of $s, t$. So we have
    $$\big| T_{\alpha,p}(u_k, v) - T_{\alpha,p}(u, v) \big| \le \int_{\Omega} \big|\, |(-\Delta)^{\frac{\alpha}{2}} u_k|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u_k - |(-\Delta)^{\frac{\alpha}{2}} u|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u\, \big| \cdot |(-\Delta)^{\frac{\alpha}{2}} v|\, dx \le C_p \int_{\Omega} |(-\Delta)^{\frac{\alpha}{2}} (u_k - u)|^{p-1}\, |(-\Delta)^{\frac{\alpha}{2}} v|\, dx \le C_p\, \|u_k - u\|_{H_0^{\alpha,p}(\Omega)}^{p-1}\, \|v\|_{H_0^{\alpha,p}(\Omega)} \to 0.$$
    The same argument shows that the operator $A_p$ satisfies that the real-valued function $t \mapsto \langle A_p(u + t v), v \rangle$ is continuous for all $u, v \in L^p(\Omega)$.
    For the second variable,
    $$\big| T_{\alpha,p}(u, v_1) - T_{\alpha,p}(u, v_2) \big| = \big| T_{\alpha,p}(u, v_1 - v_2) \big| \le \|u\|_{H_0^{\alpha,p}(\Omega)}^{p-1}\, \|v_1 - v_2\|_{H_0^{\alpha,p}(\Omega)} \to 0,$$
    as $\|v_1 - v_2\|_{H_0^{\alpha,p}(\Omega)} \to 0$.
  •  Monotone, i.e., $\forall u, v \in H_0^{\alpha,p}(\Omega)$,
    $$T_{\alpha,p}(u, u - v) - T_{\alpha,p}(v, u - v) \ge 0.$$
    Since
    $$T_{\alpha,p}(u, u - v) - T_{\alpha,p}(v, u - v) = \int_{\Omega} \big( A_p((-\Delta)^{\frac{\alpha}{2}} u) - A_p((-\Delta)^{\frac{\alpha}{2}} v) \big)\, (-\Delta)^{\frac{\alpha}{2}} (u - v)\, dx,$$
    and the function $A_p(x) = |x|^{p-2} x$ is increasing: if $(-\Delta)^{\frac{\alpha}{2}} u \ge (-\Delta)^{\frac{\alpha}{2}} v$, then $A_p((-\Delta)^{\frac{\alpha}{2}} u) \ge A_p((-\Delta)^{\frac{\alpha}{2}} v)$; similarly, when $(-\Delta)^{\frac{\alpha}{2}} u \le (-\Delta)^{\frac{\alpha}{2}} v$, then $A_p((-\Delta)^{\frac{\alpha}{2}} u) \le A_p((-\Delta)^{\frac{\alpha}{2}} v)$; therefore, $T_{\alpha,p}(u, u - v) - T_{\alpha,p}(v, u - v) \ge 0$.
The integral (weak) formulation of system (23) is given by
$$\begin{cases} \displaystyle\int_0^T \int_{\Omega} u_t\, \varphi\, dx\, dt + p \int_0^T T_{\alpha,p}(u, \varphi)\, dt + \int_0^T \int_{\Omega \setminus D} \omega\, \varphi\, dx\, dt = 0, \\ \displaystyle\int_0^T \int_{\Omega \setminus D} \omega_t\, \psi\, dx\, dt + \int_0^T \int_{\Omega \setminus D} \nabla \omega \cdot \nabla \psi\, dx\, dt - \int_0^T \int_{\Omega \setminus D} \lambda (u - u_0)\, \psi\, dx\, dt = 0, \\ \displaystyle\lim_{t \to 0^+} \|u - u_0\|_{L^2(\Omega)} = 0, \quad \lim_{t \to 0^+} \|\omega\|_{L^2(\Omega)} = 0, \quad \omega = 0, \ x \in D, \end{cases}$$
and we should define the weak solution of integral system (28).
Definition 3.
For any initial data u 0 L 2 ( Ω ) H 0 α , p ( Ω ) and given T > 0 , a pair ( u , ω ) is called a weak solution of system (23) if it satisfies the following:
$$u \in L^{\infty}\big(0, T; H_0^{\alpha,p}(\Omega) \cap L^2(\Omega)\big), \qquad u_t \in L^2\big((0,T) \times \Omega\big);$$
$$\omega \in L^{\infty}\big(0, T; H_0^1(\Omega \setminus D)\big), \qquad \omega_t \in L^2\big((0,T) \times (\Omega \setminus D)\big),$$
and for all test functions
$$\varphi \in L^2\big((0,T) \times \Omega\big) \cap L^1\big(0, T; H_0^{\alpha,p}(\Omega)\big); \qquad \psi \in L^2\big((0,T) \times (\Omega \setminus D)\big) \cap L^1\big(0, T; H_0^1(\Omega \setminus D)\big),$$
the pair ( u , ω ) satisfies the integral Equation (28).
Theorem 5.
For any $\alpha \in (0,2)$, $p \in (1,2]$, let $\Omega \subset \mathbb{R}^n$ be a Lipschitz domain and $D$ a closed subset of $\Omega$; then, for any initial value $u_0 \in H_0^{\alpha,p}(\Omega) \cap L^2(\Omega)$ and given $T > 0$, Equation (23) has a unique weak solution $(u, \omega)$. Moreover, $u \in AC\big([0,T]; L^2(\Omega)\big)$, $\omega \in AC\big([0,T]; L^2(\Omega \setminus D)\big)$, and there exists a constant $M > 0$, independent of $\alpha$ and $p$, such that
$$\|u\|_{L^{\infty}(0,T;\, L^2(\Omega))} + \|u\|_{L^{\infty}(0,T;\, H_0^{\alpha,p}(\Omega))} + \|u_t\|_{L^2((0,T) \times \Omega)} \le M,$$
and
$$\|\omega\|_{L^{\infty}(0,T;\, H_0^1(\Omega \setminus D))} + \|\omega_t\|_{L^2((0,T) \times (\Omega \setminus D))} \le M.$$
Proof. 
We use Galerkin discretization to prove its existence. Let
r max n 1 α 1 p + 2 , 1 ,
then, through the Sobolev embedding theorem, we know that
H 0 r , 2 ( Ω ) H 0 α , p ( Ω ) , H 0 r , 2 ( Ω ) H 0 1 ( Ω ) ,
and both embeddings are dense. Moreover, let · , · H 0 r , 2 ( Ω ) be the inner product of H 0 r , 2 ( Ω ) , and T : L 2 ( Ω ) H 0 r , 2 ( Ω ) be the solution operator of problem finding u H 0 r , 2 ( Ω ) to given f L 2 ( Ω ) such that
u , v H 0 r , 2 ( Ω ) = f , v L 2 ( Ω ) , v H 0 r , 2 ( Ω ) ,
is a self-adjoint, compact, non-negative, and k e r ( T ) = 0 , so through Hilbert–Schmidt’s theorem, there exists an orthonormal basis φ m H 0 r , 2 ( Ω ) of L 2 ( Ω ) consisting of eigenfunctions of T, and
T φ m = μ m φ m , m = 1 , 2 , ,
where μ m > 0 are the corresponding eigenvalues with μ m 0 as m , then μ m φ m ) is an orthonormal basis of H 0 r , 2 ( Ω ) , which is the Galerkin basis that we need. Define U m = span φ 1 , , φ m H 0 r , 2 ( Ω ) , then
m N U m ¯ , · H 0 r , 2 ( Ω ) = H 0 r , 2 ( Ω ) .
Due to the dense embedding, we have
m N U m ¯ , · H 0 α , p ( Ω ) = H 0 α , p ( Ω ) .
Using the same arguments, there exists an orthonormal basis ψ m W r , 2 ( Ω D ) of L 2 ( Ω D ) , let 𝒱 m = span ψ 1 , , ψ m , then
m N 𝒱 m ¯ , · H 0 1 ( Ω D ) = H 0 1 ( Ω D ) .
We use the following function:
u m ( t , x ) = k = 1 m c k m ( t ) φ k ( x ) ; ω m ( t , x ) = k = 1 m d k m ( t ) ψ k ( x ) ,
to approximate the solution, where coefficient c k m ( t ) , d k m ( t ) k = 1 m is the following ordinary differential equations:
Ω u m t φ k d x + p T α , p ( u m , φ k ) + Ω D ω m φ k d x = 0 , Ω D ω m t ψ k d x + Ω D ω m · ψ k d x Ω D λ ( u m u 0 ) ψ k d x = 0 , u m ( 0 , x ) = f m U m ; ω m ( 0 , x ) = 0 ,
where f m converge to u 0 in H 0 α , p ( Ω ) as m . In order to simplify this, let
c m ( t ) = c 1 m ( t ) , , c m m ( t ) T , d m ( t ) = d 1 m ( t ) , , d m m ( t ) T ,
A = Ω D φ i ( x ) ψ j ( x ) d x C m × m ; B = Ω D ψ i ( x ) · ψ j ( x ) d x C m × m ,
and
c 0 = Ω D u 0 ψ 1 d x , , Ω D u 0 ψ m d x T ; I ( y ) = I 1 ( y ) , I m ( y ) T ,
where
I k ( y ) = Ω j = 1 m y j ( t ) ( Δ ) α 2 φ j p 2 j = 1 m y j ( t ) ( Δ ) α 2 φ j ( Δ ) α 2 φ k d x ,
then ordinary differential Equation (29) can be simplified as
d d t c m d m = p I ( c m ) A d m B d m + λ A T c m λ c 0 ; u m ( 0 , x ) = f m U m ; ω m ( 0 , x ) = 0 .
In ordinary differential Equation (30), let Z = [ z 1 , z 2 ] T R m × R m , and the function
F ( Z ) = F z 1 z 2 = p I ( z 1 ) A z 2 B z 2 + λ A T z 1 λ c 0 ,
can be proven that it is continuous in R × R m × R m because of 1 < p 2 . Moreover, in region G = R × R m 0 × R m , it satisfies the local Lipschitz condition, and N > 0 such that F z 1 , z 2 N ( z 1 , z 2 ) when ( z 1 , z 2 ) is large enough, where ( x , y ) stands for the Euclidean norm in R 2 . Hence, for any m N , there exists a unique solution ( u m , ω m ) that can be extended to ( 0 , + ) ; for any given T, we consider the solution in finite region [ 0 , T ] .
Since the equations are linear for φ k and ψ k , we take u m , ω m in place of φ k , ψ k in (29), through which we get
Ω u m t u m d x + p T α , p ( u m , u m ) + Ω D ω m u m d x = 0 , Ω D ω m t ω m d x + Ω D ω m 2 d x Ω D λ ( u m u 0 ) ω m d x = 0 ,
the first equation in (31) multiplies λ and adds up with the second equation in (31), which we have as
d d t λ Ω u m 2 d x + Ω D ω m 2 d x + λ p T α , p ( u m , u m ) + Ω D ω m 2 d x = λ Ω D u 0 ω m d x λ 2 Ω u 0 2 d x + λ 2 Ω D ω m 2 d x ,
since T α , p ( u m , u m ) 0 and Ω D ω m 2 d x 0 using Grönwall inequality, we know C T > 0 , which only depends on T and u 0 L 2 ( Ω ) , such that
sup 0 t T λ Ω u m 2 d x + Ω D ω m 2 d x C T .
Taking u m t as a test function in (29), this leads to
Ω u m t 2 d x + p T α , p ( u m , u m t ) = Ω D ω m u m t d x 1 2 Ω u m t 2 d x + 1 2 Ω D ω m 2 d x ,
so we have
1 2 Ω u m t 2 d x + d d t Ω | ( Δ ) α 2 u ( t , x ) | p d x C T ,
then integrating in ( 0 , t ) for any t [ 0 , T ] yields
1 2 0 t Ω u m t 2 d x d t + Ω | ( Δ ) α 2 u m ( t , x ) | p d x C T T + Ω | ( Δ ) α 2 f m ( x ) | p d x .
Using the similar method, taking ω m t as test function in (29) leads to
1 2 0 t Ω D ω m t 2 d x d t + 1 2 Ω D | ω m ( t , x ) | 2 d x C T T + 1 2 Ω D | f m ( x ) | 2 d x .
Hence, using the estimates (32), (33), and (34), we know that M > 0 such that
max u m L ( 0 , T ; L 2 ( Ω ) ) , ( Δ ) α 2 u m L ( 0 , T ; L p ( Ω ) ) , u m t L 2 ( ( 0 , T ) × Ω ) M ,
and
max ω m L ( 0 , T ; H 0 1 ( Ω D ) ) , ω m t L 2 ( ( 0 , T ) × Ω D ) M ,
since Ω is bounded, we have u m L ( 0 , T ; H 0 α , p ( Ω ) L 2 ( Ω ) ) M ; then, there exists functions u L ( 0 , T ; H 0 α , p ( Ω ) L 2 ( Ω ) ) , u t L 2 ( ( 0 , T ) × Ω ) , and ω L ( 0 , T ; H 0 1 ( Ω D ) ) C ( [ 0 , T ] ; L 2 ( Ω ) ) , ω t L 2 ( ( 0 , T ) × Ω D ) such that there exists ( u m k , ω m k ) which is a sub-sequence of ( u m , ω m ) that satisfies as m k ,
u m k u , in L ( 0 , T ; H 0 α , p ( Ω ) L 2 ( Ω ) ) , u m k t u t , in L 2 ( ( 0 , T ) × Ω ) , ω m k ω , in L ( 0 , T ; H 0 1 ( Ω D ) ) , ω m k t ω t , in L 2 ( ( 0 , T ) × Ω D ) .
it should be noted that by embedding theorem, some weak convergence can be strengthened, i.e.,
u m k u , i n C ( [ 0 , T ] ; L 1 ( Ω ) ) , ω m k ω , i n C ( [ 0 , T ] ; L 2 ( Ω D ) ) .
Moreover, using inequality (33), we know that when p > 1 ,
sup 0 t T ( Δ ) α 2 u m p 2 ( Δ ) α 2 u m L q ( Ω ) = sup 0 t T Ω ( Δ ) α 2 u m p 2 ( Δ ) α 2 u m q d x 1 q = sup 0 t T Ω ( Δ ) α 2 u m p d x 1 q M p 1 ,
hence, there exists a subsequence of $u_{m_k}$, still denoted by $u_{m_k}$ without loss of generality, satisfying
A p ( ( Δ ) α 2 u m k ) = ( Δ ) α 2 u m k p 2 ( Δ ) α 2 u m k ξ , in L ( 0 , T ; L q ( Ω ) ) ,
then for any φ L 2 ( ( 0 , T ) × Ω ) L 1 ( 0 , T ; H 0 α , p ( Ω ) ) , we have
0 T Ω u t φ d x d t + p 0 T Ω ξ ( Δ ) α 2 φ d x d t + 0 T Ω D ω φ d x d t = 0 ,
take u in place of φ ; then,
Ω | u ( T , x ) | 2 d x Ω | u 0 | 2 d x + p 0 T Ω ξ ( Δ ) α 2 u d x d t + 0 T Ω D ω u d x d t = 0 ,
while integrating t in ( 0 , T ) for (31),
Ω | u ( T , x ) m | 2 d x Ω | f m | 2 d x + p 0 T Ω ( Δ ) α 2 u m p d x d t + 0 T Ω D ω m u m d x d t = 0 .
By letting m k and applying the weak lower-semi-continuity of the norm together with (35), we have
lim sup m k 0 T Ω ( Δ ) α 2 u m k p d x d t 0 T Ω ξ ( Δ ) α 2 u d x d t ,
then, for all $v \in L^p(\Omega)$, using the fact that $u_{m_k} \rightharpoonup u$ and $(-\Delta)^{\frac{\alpha}{2}} u_{m_k} \rightharpoonup (-\Delta)^{\frac{\alpha}{2}} u$, together with the monotonicity of $A_p$, we have
0 T Ω ξ A p ( v ) ( Δ ) α 2 u v d x d t lim sup m k 0 T Ω A p ( Δ ) α 2 u m k A p ( v ) ( Δ ) α 2 ( u m k v ) d x d t 0 ,
for any w L p ( Ω ) and t > 0 , set v = ( Δ ) α 2 u t w and let t 0 + , after which we get
0 T Ω ξ A p ( ( Δ ) α 2 u ) w d x d t 0 ,
then use the same method by setting v = ( Δ ) α 2 u + t w , through which we have
ξ = A p ( ( Δ ) α 2 u ) = ( Δ ) α 2 u p 2 ( Δ ) α 2 u , L q ( Ω ) .
Next, we prove u t , ω t are the weak derivative of u , ω , for all Φ C 0 [ 0 , T ] , φ k U m , ψ k 𝒱 m , then with the definition of weak derivative, we obtain the following:
0 T Ω u m t ( t , x ) φ k ( x ) d x Φ ( t ) d t = 0 T Ω u m ( t , x ) φ k ( x ) d x Φ ( t ) d t ; 0 T Ω D ω m t ( t , x ) ψ k ( x ) d x Φ ( t ) d t = 0 T Ω D ω m ( t , x ) ψ k ( x ) d x Φ ( t ) d t ,
and then take m k and the completeness of φ k , ψ k , which satisfies that for all φ H 0 α , p ( Ω ) , ψ H 0 1 ( Ω ) :
0 T Ω u t ( t , x ) φ ( x ) d x Φ ( t ) d t = 0 T Ω u ( t , x ) φ ( x ) d x Φ ( t ) d t ; 0 T Ω D ω t ( t , x ) ψ ( x ) d x Φ ( t ) d t = 0 T Ω D ω ( t , x ) ψ ( x ) d x Φ ( t ) d t ,
which show that u t , ω t are the weak derivatives of u , ω . Now, consider ( u , ω ) when t 0 + ; for all φ k U m , we have
Ω f m ( x ) φ k d x = 0 T Ω u m ( t , x ) φ k d x T t T d t = 0 T p T α , p ( u m , φ k ) + Ω D ω m ( t , x ) φ k d x T t T d t + 1 T 0 T Ω u m ( t , x ) φ k d x d t ,
and on the other hand, since u t L 2 ( ( 0 , T ) × Ω ) and Ω is bounded, u AC ( [ 0 , T ] , L 2 ( Ω ) ) ; therefore,
Ω u ( 0 , x ) φ k d x = 0 T Ω u ( t , x ) φ k d x T t T d t = 0 T p T α , p ( u , φ k ) + Ω D ω ( t , x ) φ k d x T t T d t + 1 T 0 T Ω u ( t , x ) φ k d x d t ,
taking m j , through the completeness of φ k L 2 ( Ω ) , we have u ( 0 , x ) = u 0 ( x ) in L 2 ( Ω ) . Using the same method, we can prove that ω AC ( [ 0 , T ] , L 2 ( Ω D ) ) and ω ( 0 , x ) = 0 in L 2 ( Ω D ) .
Lastly, we prove the uniqueness of the solution: suppose $(u_1, \omega_1)$ and $(u_2, \omega_2)$ are two solutions of system (28); let $\hat{u} = u_1 - u_2$, $\hat{\omega} = \omega_1 - \omega_2$, then
φ L 2 ( ( 0 , T ) × Ω ) L 1 ( 0 , T ; H 0 α , p ( Ω ) ) ; ψ L 2 ( ( 0 , T ) × Ω D ) L 1 ( 0 , T ; H 0 1 ( Ω D ) ) ,
we have t > 0 ,
0 t Ω u ^ t φ d x d τ + p 0 t T α , p u 1 , φ T α , p u 2 , φ d x d τ + 0 t Ω D ω ^ φ d x d τ = 0 ; 0 t Ω D ω ^ t ψ d x d τ + 0 t Ω D ω ^ · ψ d x d τ λ 0 t Ω D u ^ ψ d x d τ = 0 ,
in the equation above, let φ = λ u ^ , ψ = ω ^ ; adding two equations, we have
λ 0 t Ω u ^ t u ^ d x d τ + 0 t Ω D ω ^ t ω ^ d x d τ + 0 t Ω D ω ^ · ω ^ d x d τ + p λ 0 t T α , p u 1 , u 1 u 2 T α , p u 2 , u 1 u 2 d x d τ = 0 ,
using the monotonicity of T α , p , then
λ Ω | u ^ | 2 d x + Ω D | ω ^ | 2 d x 0 .
The proof is complete.    □
The proof above demonstrates that for every T > 0 , the diffusion Equation (23) yields a unique weak solution ( u T , ω T ) in ( 0 , T ) × Ω . Moreover, due to the uniqueness of the solution, we can extend it to R + × Ω . We now investigate the behavior of the solution as T . First, however, we establish the following lemma.
Lemma 1.
Let ( u , ω ) be the weak solution of diffusion system (23), then
φ L 2 ( ( 0 , T ) × Ω ) L 1 ( 0 , T ; H 0 α , p ( Ω ) ) , ψ L 2 ( ( 0 , T ) × ( Ω D ) ) L 1 ( 0 , T ; H 0 1 ( Ω D ) ) ,
for a.e. t > 0 , we have
Ω u t φ d x + p Ω | ( Δ ) α 2 u | p 2 ( Δ ) α 2 u ( Δ ) α 2 φ d x + Ω D ω φ d x = 0 , Ω D ω t ψ d x + Ω D ω · ψ d x Ω D λ ( u u 0 ) ψ d x = 0 .
Proof. 
For all 0 < T 1 T 2 and ( u , ω ) satisfies the integral Equation (28), we have
T 1 T 2 Ω u t φ d x d t + p T 1 T 2 T α , p ( u , φ ) d t + T 1 T 2 Ω D ω φ d x d t = 0 , T 1 T 2 Ω D ω t ψ d x d t + T 1 T 2 Ω D ω · ψ d x d t T 1 T 2 Ω D λ ( u u 0 ) ψ d x d t = 0 ,
hence, we can get Equation (36) using the Lebesgue differentiation theorem.    □
Theorem 6.
For any $\alpha \in (0,2)$, $p \in (1,2]$, let $\Omega \subset \mathbb{R}^n$ be a Lipschitz domain; then there exists a $\delta_0 > 0$ such that for any initial value $u_0 \in H_0^{\alpha,p}(\Omega) \cap L^2(\Omega)$, when $0 < \lambda < \min\{\frac{1}{\delta_0}, 1\}$, the weak solution of diffusion Equation (23) converges weakly, as $t \to \infty$, to a solution of the Euler–Lagrange Equation (20). Moreover, this limit is the solution of the minimum problem (6).
Proof. 
For any T > 0 , the weak solution ( u , ω ) satisfies integral Equation (28); we take u t in place of φ and Δ 1 u t in place of ψ , then
0 T u t L 2 ( Ω ) 2 d t + p 0 T T α , p ( u , u t ) d t + 0 T Ω D ω u t d x d t = 0 , 0 T Ω D ω t Δ 1 u t d x d t + 0 T Ω D ω · Δ 1 u t d x d t 0 T Ω D λ ( u u 0 ) Δ 1 u t d x d t = 0 ,
using the fact that
d d t J u ( t , x ) = p T α , p ( u , u t ) + λ Ω D Δ 1 ( u u 0 ) · Δ 1 u t d x ,
and
Ω D ω u t d x = Ω D ω · Δ 1 u t d x ; Ω D ( u u 0 ) Δ 1 u t d x = Ω D Δ 1 ( u u 0 ) · Δ 1 u t d x ,
we can get the first estimation as follows:
0 T u t L 2 ( Ω ) 2 d t + 0 T d d t J u ( t , x ) d t + 0 T Ω D ω t Δ 1 u t d x d t = 0 .
Considering ω t t , we have
ω t t = Δ ω t + λ u t , i n L 1 ( 0 , T ; H 0 1 ( Ω D ) ) ,
multiply Δ 1 ω t , and integrating ( 0 , T ) × Ω D , then
0 T Ω D ω t t Δ 1 ω t d x d t = 0 T Ω D Δ ω t Δ 1 ω t d x d t + λ 0 T Ω D Δ 1 ω t u t d x d t ,
and we can get the second estimation as follows:
1 2 0 T d dt ω t H 1 ( Ω D ) 2 d t + 0 T ω t L 2 ( Ω D ) 2 d t + λ 0 T Ω D ω t Δ 1 u t d x d t = 0 .
Using (38) and (39), let δ 0 = Δ 1 L ( L 2 ( Ω D ) , L 2 ( Ω D ) ) 2   , then
0 T u t L 2 ( Ω ) 2 + d d J u ( t , x ) + 1 2 λ ω t H 1 ( Ω D ) 2 + 1 λ ω t L 2 ( Ω D ) 2 d t = 2 0 T Ω D u t Δ 1 ω t d x d t 0 T u t L 2 ( Ω D ) 2 + Δ 1 ω t L 2 ( Ω D ) 2 d t 0 T u t L 2 ( Ω D ) 2 + δ 0 ω t L 2 ( Ω D ) 2 d t ,
and then we have
0 T u t L 2 ( D ) 2 d t + 1 λ δ 0 0 T ω t L 2 ( Ω D ) 2 d t + 1 2 λ ω t ( T , · ) H 1 ( Ω D ) 2 + J u ( T , x ) J u 0 ,
on the other hand, we have
0 T u t L 2 ( Ω ) 2 d t + p 0 T T α , p ( u , u t ) d t + 0 T Ω D ω + λ Δ 1 ( u u 0 ) λ Δ 1 ( u u 0 ) u t d x d t = 0 ,
then
0 T u t L 2 ( Ω ) 2 d t + 0 T d d J u ( t , x ) d t = 0 T Ω D ω + λ Δ 1 ( u u 0 ) u t d x d t 1 2 0 T ω + λ Δ 1 ( u u 0 ) L 2 ( Ω D ) 2 d t + 1 2 0 T u t L 2 ( Ω D ) 2 d t 1 2 0 T Δ 1 ω t L 2 ( Ω D ) 2 d t + 1 2 0 T u t L 2 ( Ω ) 2 d t δ 0 2 0 T ω t L 2 ( Ω D ) 2 d t + 1 2 0 T u t L 2 ( Ω ) 2 d t ,
so we have
1 2 0 T u t L 2 ( Ω ) 2 d t + J u ( T , x ) J ( u 0 ) + δ 0 2 0 T ω t L 2 ( Ω D ) 2 d t C 1 J ( u 0 ) ,
where C 1 = 1 + λ δ 0 2 ( 1 λ δ 0 ) .
Meanwhile, consider the following equation
0 T ω t L 2 ( Ω D ) 2 d t + 1 2 0 T d dt Ω D | ω | 2 d x d t = 0 T Ω D λ ( u u 0 ) ω t d x d t ,
using the first equation of (37) and
0 T d dt Ω D λ ( u u 0 ) ω d x d t = 0 T Ω D λ ( u u 0 ) ω t d x d t + 0 T Ω D λ u t ω d x d t ,
we have
0 T ω t L 2 ( Ω D ) 2 d t + 1 2 ω H 0 1 ( Ω D ) 2 = 0 T d dt Ω D λ ( u u 0 ) ω d x d t 0 T Ω D λ u t ω d x d t = Ω D λ ( u u 0 ) ω d x + λ 0 T u t L 2 ( Ω ) 2 d t + λ Ω ( Δ ) α 2 u p d x Ω ( Δ ) α 2 u 0 p d x = Ω D λ Δ 1 ( u u 0 ) · ω d x + λ 0 T u t L 2 ( Ω ) 2 d t + λ Ω ( Δ ) α 2 u p d x Ω ( Δ ) α 2 u 0 p d x λ 2 u u 0 H 1 ( Ω D ) 2 + λ 2 ω H 0 1 ( Ω D ) 2 + λ 0 T u t L 2 ( Ω ) 2 d t + λ Ω ( Δ ) α 2 u p d x Ω ( Δ ) α 2 u 0 p d x ,
in estimation (40), we have J ( u ) J ( u 0 ) , i.e.,
Ω ( Δ ) α 2 u ( t , x ) p d x + λ 2 u u 0 H 1 ( Ω D ) 2 Ω ( Δ ) α 2 u 0 p d x , t > 0 ,
then we have the last estimation:
0 T ω t L 2 ( Ω D ) 2 d t + 1 λ 2 ω H 0 1 ( Ω D ) 2 C 2 J ( u 0 ) ,
using inequalities (40) and (41), we know that
0 u t L 2 ( Ω ) 2 d t < ; 0 ω t L 2 ( Ω D ) 2 d t < ,
so we know there exist a sub-sequence t j j = 1 such that u t ( t j , · ) L 2 ( Ω ) 2 0 and ω t ( t j , · ) L 2 ( Ω D ) 2 0 as t , and from Lemma 1, we know
φ L 2 ( Ω ) H 0 α , p ( Ω ) , ψ L 2 ( Ω D ) H 0 1 ( Ω D ) ,
we have
p Ω | ( Δ ) α 2 u ( t j , x ) | p 2 ( Δ ) α 2 u ( t j , x ) ( Δ ) α 2 φ ( x ) d x + Ω D ω ( t j , x ) φ ( x ) d x 0 , t j , Ω D ω ( t j , x ) · ψ ( x ) d x Ω D λ ( u ( t j , x ) u 0 ( x ) ) ψ ( x ) d x 0 , t j ,
which means φ H 0 α , p ( Ω ) L 2 ( Ω ) , and we have
Ω p | ( Δ ) α 2 u | p 2 ( Δ ) α 2 u ( Δ ) α 2 φ λ χ Ω D Δ 1 ( u u 0 ) φ d x = p Ω | ( Δ ) α 2 u | p 2 ( Δ ) α 2 u ( Δ ) α 2 φ d x + λ Ω D Δ 1 ( u u 0 ) Δ 1 φ d x p Ω | ( Δ ) α 2 u | p 2 ( Δ ) α 2 u ( Δ ) α 2 φ d x + Ω D ω φ d x + Ω D ω φ d x + Ω D ω Δ 1 φ d x + Ω D ω Δ 1 φ d x + λ Ω D Δ 1 ( u u 0 ) Δ 1 φ d x 0 , t j 0 ,
Equation (43) shows that as t j ,
p ( Δ ) α 2 ( Δ ) α 2 u p 2 ( Δ ) α 2 u λ χ Ω D Δ 1 ( u u 0 ) 0 , i n H 0 α , p ( Ω ) L 2 ( Ω ) ,
which shows that the weak solution $u$ converges weakly-* to a solution of the Euler–Lagrange Equation (20). Using estimates (40)–(42), there exists $M > 0$ depending on $\lambda$, $\Omega$, and $D$ such that
sup t > 0 ( Δ ) α 2 u L p ( Ω ) , u H 1 ( Ω D ) , ω H 0 1 ( Ω D ) M J ( u 0 ) + u 0 L 2 ( Ω ) .
By Theorem 4, then there exists functions u H 0 α , p ( Ω ) H 1 ( Ω D ) , ω H 0 1 ( Ω D ) , and a sub-sequence t j j = 1 such that when j ,
u ( t j ) u , i n H 0 α , p ( Ω ) , u ( t j ) u , i n H 1 ( Ω D ) , ω ( t j ) ω , i n H 0 1 ( Ω D ) ,
and u ( t j ) u strongly in L 1 ( Ω ) , ω ( t j ) ω strongly in L 2 ( Ω D ) . Moreover,
sup t > 0 ( Δ ) α 2 u p 2 ( Δ ) α 2 u L q ( Ω ) = sup t > 0 Ω ( Δ ) α 2 u p 2 ( Δ ) α 2 u q d x 1 q = sup t > 0 Ω ( Δ ) α 2 u p d x 1 q M p 1 ,
hence, there exists a sub-sequence of u ( t j , · ) without losing the generality, still noted as u ( t j , · ) , and it satisfies
A p ( ( Δ ) α 2 u ( t j ) ) = ( Δ ) α 2 u ( t j ) p 2 ( Δ ) α 2 u ( t j ) ξ , in L q ( Ω ) ,
using the same method in proving Theorem 5, we have
ξ = A p ( ( Δ ) α 2 u ) = ( Δ ) α 2 u p 2 ( Δ ) α 2 u , L q ( Ω ) .
Moreover, φ H 0 α , p ( Ω ) H 1 ( Ω ) ,
p Ω | ( Δ ) α 2 u | p 2 ( Δ ) α 2 u ( Δ ) α 2 φ λ χ Ω D Δ 1 ( u u 0 ) φ d x = lim j Ω p ( Δ ) α 2 u ( t j , x ) p 2 ( Δ ) α 2 u ( t j , x ) ( Δ ) α 2 φ ( x ) λ χ Ω D Δ 1 ( u ( t j , x ) u 0 ( x ) ) φ ( x ) d x = 0 ,
therefore, u is the solution of Euler–Lagrange Equation (20). In the end, we will prove u is the solution of inpainting model (6); to prove this, we need two basic inequalities, and we will prove this later.
$$2(a - b)\cdot(c - a) \le |c - b|^2 - |a - b|^2, \quad \forall a, b, c \in \mathbb{R}^n,\ |a| = \sqrt{\textstyle\sum_{i=1}^n a_i^2}; \qquad |v|^p - |u|^p \ge p\,|u|^{p-2} u\,(v - u), \quad \forall u, v \in \mathbb{R},\ p \ge 1.$$
Given a v H 0 α , p ( Ω ) H 1 ( Ω ) , take v u in place of φ in (44), and we have
J ( v ) J ( u ) = Ω ( Δ ) α 2 v p ( Δ ) α 2 u p d x + λ 2 Ω D Δ 1 ( v u 0 ) 2 Δ 1 ( u u 0 ) 2 d x Ω ( Δ ) α 2 v p ( Δ ) α 2 u p d x + λ Ω D Δ 1 ( u u 0 ) · Δ 1 ( v u ) d x = Ω ( Δ ) α 2 v p ( Δ ) α 2 u p p | ( Δ ) α 2 u | p 2 ( Δ ) α 2 u ( Δ ) α 2 ( v u ) d x 0 ,
thus, $u$ is the solution of the minimum problem (6).    □
Proof. 
The proof of inequality (45):
For all $a, b, c \in \mathbb{R}^n$,
$$2(a - b)\cdot(c - a) \le |c - b|^2 - |a - b|^2 \iff 2a\cdot c - 2|a|^2 - 2b\cdot c + 2a\cdot b \le |c|^2 + |b|^2 - |a|^2 - |b|^2 - 2b\cdot c + 2a\cdot b \iff 2a\cdot c \le |a|^2 + |c|^2,$$
which always holds;
the second inequality:
for $u, v \in \mathbb{R}$ and $p \ge 1$, the claim $|v|^p - |u|^p \ge p|u|^{p-2}u(v - u)$ is equivalent to
$$|v|^p + (p-1)|u|^p \ge p\,|u|^{p-2} u\, v,$$
which we prove with Young's inequality:
$$p\,|u|^{p-2} u\, v \le p\,|u|^{p-1}|v| = \big(p^{\frac{1}{p}}|v|\big)\cdot\big(p^{1-\frac{1}{p}}|u|^{p-1}\big) \le \frac{1}{p}\big(p^{\frac{1}{p}}|v|\big)^p + \frac{1}{q}\big(p^{1-\frac{1}{p}}|u|^{p-1}\big)^q = |v|^p + (p-1)|u|^p.$$
   □

3. Numerical Formats

3.1. Difference Scheme

Let $\Delta t$ and $\Delta h$ denote the time and space steps, respectively, so that $T_m = T_{m-1} + \Delta t$, $x_i = x_{i-1} + \Delta h$, $y_j = y_{j-1} + \Delta h$. Let $u^m = \big(u(T_m, x_i, y_j)\big)_{i,j=1}^{I,J}$ be the matrix of values of $u$ and $\omega^m = \big(\omega(T_m, x_i, y_j)\big)_{i,j=1}^{I,J}$ the matrix of values of $\omega$, $m = 0, 1, \ldots$, with initial data $u^0 = u_0$, $\omega^0 = 0$. To approximate $u_t^m$, we use the forward difference
$$u_t^m = \big(u_t(T_m, x_i, y_j)\big)_{i,j=1}^{I,J} \approx \frac{1}{\Delta t} \int_{T_m}^{T_m + \Delta t} u_t\, dt = \frac{u^{m+1} - u^m}{\Delta t}.$$
Similarly, for $\omega_t^m$ we have
$$\omega_t^m = \big(\omega_t(T_m, x_i, y_j)\big)_{i,j=1}^{I,J} \approx \frac{1}{\Delta t} \int_{T_m}^{T_m + \Delta t} \omega_t\, dt = \frac{\omega^{m+1} - \omega^m}{\Delta t}.$$
Since the spatial discretization step size for images is fixed at $\Delta h = 1$, the boundary conditions of the Euler–Lagrange Equation (20) pose challenges for finite difference implementations. For image-related problems, we assume that the boundary conditions satisfy favorable properties, thus avoiding direct treatment of boundary issues. In the Euler–Lagrange (E-L) equations, the integral term outside $\Omega$ is relatively complex; for this reason, our algorithm keeps only the principal part of the equations. The difference scheme for (23), semi-discrete in time, is given by
$$\begin{cases} \dfrac{u^{m+1} - u^m}{\Delta t} = -p\, (-\Delta)^{\frac{\alpha}{2}}\big(|(-\Delta)^{\frac{\alpha}{2}} u^m|^{p-2} (-\Delta)^{\frac{\alpha}{2}} u^m\big) - \omega^m, \\ \dfrac{\omega^{m+1} - \omega^m}{\Delta t} = \Delta \omega^m + \lambda\, \chi_{\Omega \setminus D}\,(u^m - u_0), \\ u^0 = u_0; \quad \omega^0 = 0, \quad \omega^m = 0,\ (x_i, y_j) \in D. \end{cases}$$
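A minimal sketch of one explicit step of this semi-discrete scheme is given below. Here `frac_lap` stands for any discretization of $(-\Delta)^{\alpha/2}$ (for example, the windowed operator of Section 3.2); the mask handling, the zero padding at the border, the small epsilon guard, and the parameter defaults are our own assumptions, not part of the paper's implementation.

```python
import numpy as np

def time_step(u, w, u0, known, frac_lap, dt=0.1, lam=1.0, p=2.0, alpha=1.5):
    """One explicit step of the semi-discrete scheme (48).
    known: 1 where the image data is observed, 0 on the inpainting region."""
    Lu = frac_lap(u, alpha)                         # (-Delta)^(alpha/2) u^m
    Ap = (np.abs(Lu) + 1e-12) ** (p - 2.0) * Lu     # A_p(.); epsilon avoids 0^(p-2)
    u_new = u - dt * (p * frac_lap(Ap, alpha) + w)
    # Five-point Laplacian of w with zero padding at the image border
    wp = np.pad(w, 1)
    lap_w = wp[2:, 1:-1] + wp[:-2, 1:-1] + wp[1:-1, 2:] + wp[1:-1, :-2] - 4.0 * w
    w_new = w + dt * (lap_w + lam * known * (u - u0))
    return u_new, w_new
```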

3.2. Discretization of Fractional Laplacian

The integral form of the fractional Laplacian is
$$(-\Delta)^{\frac{\alpha}{2}} u = C_{2,\alpha} \lim_{r \to 0} \int_{\mathbb{R}^2 \setminus B(x, r)} \frac{u(x) - u(y)}{|x - y|^{2 + \alpha}}\, dy,$$
where
$$C_{2,\alpha} = \frac{2^{\alpha}\, \Gamma\!\left(1 + \frac{\alpha}{2}\right)}{\pi\, \left|\Gamma\!\left(-\frac{\alpha}{2}\right)\right|}.$$
In order to obtain the fractional Laplacian of a discrete image $u$, we introduce a discrete fractional Laplace operator. Let $u(i,j) \in \mathbb{R}^{I \times J}$ be an image; since $\Delta h = 1$, we write $u(x_i, y_j) = u(i,j)$. The fractional Laplacian value of $u(i,j)$ can be expressed as
$$(-\Delta)^{\frac{\alpha}{2}} u(i,j) = C_{2,\alpha} \sum_{\substack{l,k = -\infty \\ (l,k) \neq (0,0)}}^{\infty} \frac{u(i,j) - u(i-l, j-k)}{(l^2 + k^2)^{1 + \frac{\alpha}{2}}}.$$
Assume the computation is restricted to a square window of side length $2n+1$ with $n \in \mathbb{N}$. In such a window, the fractional Laplacian value of $u(i,j)$ can be expressed as
$$(-\Delta)^{\frac{\alpha}{2}}_n u(i,j) = C_{2,\alpha} \sum_{\substack{l,k = -n \\ (l,k) \neq (0,0)}}^{n} \frac{u(i,j) - u(i-l, j-k)}{(l^2 + k^2)^{1 + \frac{\alpha}{2}}}.$$
We now give an error analysis of the above numerical scheme. Assuming $\delta u$ is the error in the image $u$, let $\varepsilon_n(\delta u)$ denote the difference between $(-\Delta)^{\frac{\alpha}{2}} u$ and $(-\Delta)^{\frac{\alpha}{2}}_n (u + \delta u)$, given by
$$\varepsilon_n(\delta u)(i,j) = (-\Delta)^{\frac{\alpha}{2}} u(i,j) - (-\Delta)^{\frac{\alpha}{2}}_n (u + \delta u)(i,j).$$
Thus, the absolute error is
$$\big|\varepsilon_n(\delta u)(i,j)\big| = |C_{2,\alpha}|\,\Bigg| \sum_{\substack{k,l=-\infty \\ (k,l)\neq(0,0)}}^{\infty} \frac{u(i,j) - u(i-l,j-k)}{(k^2+l^2)^{1+\frac{\alpha}{2}}} - \sum_{\substack{k,l=-n \\ (k,l)\neq(0,0)}}^{n} \frac{(u+\delta u)(i,j) - (u+\delta u)(i-l,j-k)}{(k^2+l^2)^{1+\frac{\alpha}{2}}} \Bigg| = |C_{2,\alpha}|\,\Bigg| \sum_{|k|>n \ \text{or}\ |l|>n} \frac{u(i,j) - u(i-l,j-k)}{(k^2+l^2)^{1+\frac{\alpha}{2}}} - \sum_{\substack{k,l=-n \\ (k,l)\neq(0,0)}}^{n} \frac{\delta u(i,j) - \delta u(i-l,j-k)}{(k^2+l^2)^{1+\frac{\alpha}{2}}} \Bigg| \le |C_{2,\alpha}|\,\Big[\big(L(\alpha) - L_n(\alpha)\big)\cdot 2\|u\|_{\infty} + 2 L_n(\alpha)\,\|\delta u\|_{\infty}\Big],$$
where
$$L(\alpha) = \sum_{\substack{k,l = -\infty \\ (k,l) \neq (0,0)}}^{\infty} \frac{1}{(k^2 + l^2)^{1 + \frac{\alpha}{2}}} = 4 \sum_{k=1}^{\infty} \sum_{l=1}^{\infty} \frac{1}{(k^2 + l^2)^{1 + \frac{\alpha}{2}}} + 4 \sum_{k=1}^{\infty} \frac{1}{(k^2)^{1 + \frac{\alpha}{2}}} < \infty,$$
and
$$L_n(\alpha) = \sum_{\substack{k,l = -n \\ (k,l) \neq (0,0)}}^{n} \frac{1}{(k^2 + l^2)^{1 + \frac{\alpha}{2}}} = 4 \sum_{k=1}^{n} \sum_{l=1}^{n} \frac{1}{(k^2 + l^2)^{1 + \frac{\alpha}{2}}} + 4 \sum_{k=1}^{n} \frac{1}{(k^2)^{1 + \frac{\alpha}{2}}};$$
the relative error is then
$$\frac{\big|\varepsilon_n(\delta u)(i,j)\big|}{\big|(-\Delta)^{\frac{\alpha}{2}} u(i,j)\big|} \le \frac{2 |C_{2,\alpha}|\, \|u\|_{\infty}\, L(\alpha)}{\big|(-\Delta)^{\frac{\alpha}{2}} u(i,j)\big|} \cdot \frac{L(\alpha) - L_n(\alpha)}{L(\alpha)} + \frac{2 |C_{2,\alpha}|\, L_n(\alpha)\, \|u\|_{\infty}}{\big|(-\Delta)^{\frac{\alpha}{2}} u(i,j)\big|} \cdot \frac{\|\delta u\|_{\infty}}{\|u\|_{\infty}}.$$
It is hard to compute the exact value of $L(\alpha)$; therefore, we use $L_n(\alpha)$ to approximate it, and the increment at each step is
$$L_{n+1}(\alpha) - L_n(\alpha) = 8 \sum_{k=1}^{n} \frac{1}{\big((n+1)^2 + k^2\big)^{1 + \frac{\alpha}{2}}} + \frac{4}{(n+1)^{2 + \alpha}} + \frac{4}{\big(2(n+1)^2\big)^{1 + \frac{\alpha}{2}}} \to 0, \quad (n \to \infty),$$
and then
$$\big|\varepsilon_{n+1}(\delta u) - \varepsilon_n(\delta u)\big| \le C_{2,\alpha} \cdot \big(L_{n+1}(\alpha) - L_n(\alpha)\big) \times \big(2\|u\|_{\infty} + \|\delta u\|_{\infty}\big).$$
The operator on the right-hand side of the first equation in (48) can be represented by the operator $T$ defined in (24); we analogously use a window of size $n$ to approximate it, i.e.,
$$T_n := (-\Delta)^{\frac{\alpha}{2}}_n\, A_p\big((-\Delta)^{\frac{\alpha}{2}}_n\big),$$
where $A_p(x) = |x|^{p-2} x$, whose relative error satisfies
$$\frac{\big|A_p(x + \delta x) - A_p(x)\big|}{\big|A_p(x)\big|} \approx \frac{|A_p'(x)|\,|x|}{|A_p(x)|} \cdot \frac{|\delta x|}{|x|} = (p-1)\,\frac{|\delta x|}{|x|};$$
then the relative error of T n is
T ( u ) T n ( u ) T ( u ) 2 C 2 , α A p ( u ) L ( α ) T ( u ) L ( α ) L n ( α ) L ( α ) + 2 C 2 , α L n ( α ) A p ( u ) T ( u ) ( p 1 ) · 2 | C 2 , α | u L ( α ) ( Δ ) α 2 u ( i , j ) L ( α ) L n ( α ) L ( α ) ,
using the fact that | A p ( x ) | = | x | p 1 max 1 , | x | when 1 p 2 , the absolute error is
ε T ( n ) = T ( u ) T n ( u ) 2 | C 2 , α | max 1 , u L ( α ) L n ( α ) 1 + 2 ( p 1 ) | C 2 , α | u L n ( α ) ( Δ ) α 2 u ( i , j ) ,
and then
ε T ( n ) ε T ( n + 1 ) 2 | C 2 , α | max 1 , u L n + 1 ( α ) L n ( α ) 1 + 2 ( p 1 ) | C 2 , α | u L n + 1 ( α ) + L n ( α ) L ( α ) ( Δ ) α 2 u ( i , j ) 2 | C 2 , α | max 1 , u L n + 1 ( α ) L n ( α ) 1 + 2 ( p 1 ) | C 2 , α | u L n ( α ) ( Δ ) n α 2 u ( i , j ) .
Since an image is stored in a computer as integers ranging from 0 to 255, we assume that a function $u$ is mapped to this range by
$$\mathrm{round}\Big(255 \cdot \frac{u - u_{\min}}{u_{\max} - u_{\min}}\Big);$$
then we say that two functions $u, v$ are the same when
$$\Big| 255\Big(\frac{u - u_{\min}}{u_{\max} - u_{\min}} - \frac{v - v_{\min}}{v_{\max} - v_{\min}}\Big) \Big| \approx \Big| \frac{255\big((u - v) - (u_{\min} - v_{\min})\big)}{u_{\max} - u_{\min}} \Big| < 0.5.$$
We assume that the output image $(-\Delta)^{\frac{\alpha}{2}}_n u$ is not constant, i.e., $\big((-\Delta)^{\frac{\alpha}{2}}_n u\big)_{\max} - \big((-\Delta)^{\frac{\alpha}{2}}_n u\big)_{\min} \ge 1$, and has no error, i.e., $\delta u = 0$. We want to compute the smallest $n$ such that $(-\Delta)^{\frac{\alpha}{2}}_n u(i,j) = (-\Delta)^{\frac{\alpha}{2}}_{n+1} u(i,j)$ in the sense of (52); to do so, we need
$$255 \times 2 C_{2,\alpha} \cdot \big(L_{n+1}(\alpha) - L_n(\alpha)\big) \times 2\|u\|_{\infty} < 0.5.$$
Similarly, we want to compute the smallest $n$ such that $T_n u = T_{n+1} u$ in the sense of (52), for which we need
$$2 |C_{2,\alpha}| \max\{1, \|u\|_{\infty}\}\,\big(L_{n+1}(\alpha) - L_n(\alpha)\big) \cdot \Big(1 + \frac{2(p-1)\,|C_{2,\alpha}|\, \|u\|_{\infty}\, L_n(\alpha)}{\big|(-\Delta)^{\frac{\alpha}{2}}_n u(i,j)\big|}\Big) < 0.5.$$
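The stopping criteria above only require the partial sums $L_n(\alpha)$ and the constant $C_{2,\alpha}$. A small sketch of how the window size could be selected from criterion (53) is given below; the loop bound, the default intensity scale, and the helper names are our assumptions.

```python
import numpy as np
from math import gamma, pi

def c2_alpha(alpha):
    """Normalizing constant C_{2,alpha} of the 2D fractional Laplacian."""
    return 2.0**alpha * gamma(1.0 + alpha / 2.0) / (pi * abs(gamma(-alpha / 2.0)))

def Ln(alpha, n):
    """Partial sum L_n(alpha) over the (2n+1)x(2n+1) window minus the center."""
    k = np.arange(-n, n + 1)
    K, L = np.meshgrid(k, k, indexing="ij")
    d = (K**2 + L**2).astype(float)
    d[n, n] = np.inf                      # exclude the (0, 0) term
    return np.sum(d ** (-(1.0 + alpha / 2.0)))

def window_size(alpha, u_max=1.0, n_max=200):
    """Smallest n whose increment L_{n+1} - L_n satisfies criterion (53)."""
    for n in range(1, n_max):
        if 255.0 * 2.0 * c2_alpha(alpha) * (Ln(alpha, n + 1) - Ln(alpha, n)) * 2.0 * u_max < 0.5:
            return n
    return n_max
```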
In (48), we need to compute $\Delta \omega$. Although we could use the expression $(-\Delta)^{\frac{\alpha}{2}}_n \omega$ with $\alpha = 2$, this approach proves to be overly complicated. Here, we use
$$\Delta \omega(i,j) = \omega(i+1, j) + \omega(i-1, j) + \omega(i, j+1) + \omega(i, j-1) - 4\,\omega(i,j).$$

3.3. Algorithms

The following Algorithms 1–3 describe the workflow of our image inpainting model.
Algorithm 1 Compute T n ( u )
  • Input: image u, the order of fractional Laplace operator α and L p norm p
  • Calculate n using (53) or (54)
  • for i ← 1 to 2 n + 1  do
  •    for j ← 1 to 2 n + 1  do
  •      if  i = n + 1 and j = n + 1  then
  •         continue
  •      else
  •          $W_{i,j} = -\big((n+1-i)^2 + (n+1-j)^2\big)^{-\frac{2+\alpha}{2}}$
  •      end if
  •    end for
  • end for
  • $W_{n+1,\,n+1} \leftarrow -\mathrm{sum}(W)$
  • $W \leftarrow \dfrac{2^{\alpha}\,\Gamma(1+\frac{\alpha}{2})}{\pi\,|\Gamma(-\frac{\alpha}{2})|} \times W$
  • $(-\Delta)^{\frac{\alpha}{2}}_n u \leftarrow W \circledast u$
  • $T_n(u) \leftarrow W \circledast \big(|(-\Delta)^{\frac{\alpha}{2}}_n u|^{p-2}\,(-\Delta)^{\frac{\alpha}{2}}_n u\big)$
  • Output:  T n ( u )
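A NumPy rendition of Algorithm 1 might look as follows: it builds the $(2n+1)\times(2n+1)$ kernel $W$, applies it by convolution to obtain $(-\Delta)^{\alpha/2}_n u$, and applies it again to $A_p$ of the result. The FFT-based convolution with "same" padding and the small stabilizing epsilon are our implementation choices, not prescribed by the paper.

```python
import numpy as np
from math import gamma, pi
from scipy.signal import fftconvolve

def frac_lap_kernel(alpha, n):
    """Kernel W of the windowed discrete fractional Laplacian (Algorithm 1)."""
    idx = np.arange(-n, n + 1)
    K, L = np.meshgrid(idx, idx, indexing="ij")
    d = (K**2 + L**2).astype(float)
    W = np.zeros_like(d)
    off = d > 0
    W[off] = -d[off] ** (-(2.0 + alpha) / 2.0)     # off-center weights
    W[n, n] = -W.sum()                             # center weight balances the rest
    C = 2.0**alpha * gamma(1.0 + alpha / 2.0) / (pi * abs(gamma(-alpha / 2.0)))
    return C * W

def T_n(u, alpha, p, n):
    """T_n(u) = (-Delta)^(alpha/2)_n applied to A_p((-Delta)^(alpha/2)_n u)."""
    W = frac_lap_kernel(alpha, n)
    Lu = fftconvolve(u, W, mode="same")            # (-Delta)^(alpha/2)_n u
    Ap = (np.abs(Lu) + 1e-12) ** (p - 2.0) * Lu    # A_p; epsilon avoids 0^(p-2)
    return fftconvolve(Ap, W, mode="same")
```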
Based on Algorithm 1, we present two algorithms (Algorithms 2 and 3) for image inpainting, in which $D$ denotes the inpainting mask with $D_{i,j} = 0$ if $(i,j)$ lies in the inpainting area and $D_{i,j} = 1$ otherwise.
Algorithm 2 Inpainting for noise-free image
  • Input: Deteriorated image u, inpainting area D, step size Δ t , final time T, the order of fractional Laplace operator α , and L p norm p
  • u 0 u
  • M ← floor( T / Δ t )
  • for m ← 0 to M do
  •     Compute $T_n(u^m)$ using Algorithm 1
  •     $u^{m+1} \leftarrow u^m - p\,\Delta t\; T_n(u^m) \odot D$
  • end for
  • Output:  u m
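A compact sketch of Algorithm 2 using the `T_n` routine sketched after Algorithm 1 is shown below. The parameter defaults are our assumptions, and keeping the observed pixels fixed by resetting them after each step is our implementation choice for handling the mask.

```python
import numpy as np

def inpaint_noise_free(u0, D, alpha=1.5, p=2.0, dt=0.05, T=50.0, n=10):
    """Sketch of Algorithm 2: gradient-flow steps on the regularizer, with the
    observed pixels held fixed (D = 0 on the inpainting area, 1 elsewhere)."""
    u = u0.astype(float).copy()
    observed = D.astype(bool)
    for _ in range(int(np.floor(T / dt))):
        u = u - p * dt * T_n(u, alpha, p, n)   # diffusion step driven by T_n
        u[observed] = u0[observed]             # our choice: re-impose known pixels
    return u
```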
Algorithm 3 Inpainting and denoising
  • Input: Deteriorated image u, inpainting area D, step size Δ t , final time T, the order of fractional Laplace operator α , L p norm p, λ
  • u 0 u
  • ω 0 0
  • M ← floor( T / Δ t )
  • for m ← 0 to M do
  •     Compute $T_n(u^m)$ using Algorithm 1
  •     Compute $\Delta \omega^m$ using (55)
  •     $u^{m+1} \leftarrow u^m + \Delta t\,\big(-p\, T_n(u^m) - \omega^m\big)$
  •     $\omega^{m+1} \leftarrow \omega^m + \Delta t\,\big(\Delta \omega^m + \lambda\,(u^{m+1} - u_0) \odot (1 - D)\big)$
  •     $\omega^{m+1} \leftarrow \omega^{m+1} \odot D$
  • end for
  • Output: u m , ω m
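Similarly, a sketch of Algorithm 3, which couples the $u$ update with the auxiliary variable $\omega$ for joint inpainting and denoising, is given below. The five-point Laplacian follows (55); the zero padding at the image border, the parameter defaults, and the reading of the mask factors are our assumptions based on the pseudocode as printed.

```python
import numpy as np

def laplacian5(w):
    """Five-point Laplacian of (55) with zero padding at the image border."""
    wp = np.pad(w, 1)
    return wp[2:, 1:-1] + wp[:-2, 1:-1] + wp[1:-1, 2:] + wp[1:-1, :-2] - 4.0 * w

def inpaint_denoise(u0, D, alpha=1.5, p=2.0, lam=0.5, dt=0.05, T=50.0, n=10):
    """Sketch of Algorithm 3 (D = 0 on the inpainting area, 1 elsewhere)."""
    u = u0.astype(float).copy()
    w = np.zeros_like(u)
    for _ in range(int(np.floor(T / dt))):
        u = u + dt * (-p * T_n(u, alpha, p, n) - w)
        w = w + dt * (laplacian5(w) + lam * (u - u0) * (1 - D))
        w = w * D                          # omega = 0 on the mask, as in Algorithm 3
    return u, w
```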
In the above algorithms, the notation $\odot$ denotes the element-wise (Hadamard) multiplication of matrices, i.e.,
$$(A \odot B)_{i,j} = A_{i,j} \cdot B_{i,j}.$$

4. Experimental Setup and Results

This section presents the experimental results including the effect of parameters α and p in Algorithm 2 and relative analysis to demonstrate the effectiveness of our proposed image inpainting models. We compared our approach with several existing models, including traditional mathematical models and some deep learning models.
Our method operates on scalar-valued (single-channel) functions, so it is well suited to grayscale image processing. When processing RGB images, we process each RGB channel separately and then combine the processed channels as the restoration result. Our method is designed to fill in light intensity in the missing area and requires local smoothness of the intensity values. When processing an HLS or HSB image, hue and saturation may not have such smoothness, so it is necessary to convert the HLS or HSB image to the RGB color space, apply our algorithm, and then transform the RGB image back to the HLS or HSB color space.
The metrics used to evaluate the quality of inpainted images are the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Given an $m \times n$ image $u$ and its noisy or inpainted approximation $u'$, PSNR and SSIM are defined as
$$\mathrm{PSNR} = 10 \cdot \log_{10} \frac{m\, n \cdot \mathrm{MAX}_u^2}{\|u - u'\|_F^2},$$
$$\mathrm{SSIM} = \frac{\big(2\, E(u)\, E(u') + c_1\big)\big(2\, \mathrm{Cov}(u, u') + c_2\big)}{\big(E^2(u) + E^2(u') + c_1\big)\big(\mathrm{Var}(u) + \mathrm{Var}(u') + c_2\big)},$$
where $\mathrm{MAX}_u$ is the maximum intensity value of image $u$, and $E(\cdot)$, $\mathrm{Cov}(\cdot,\cdot)$, and $\mathrm{Var}(\cdot)$ denote the expectation, covariance, and variance of images, respectively. Usually, the higher these values, the better $u'$ approximates $u$, indicating better inpainting performance. The images used in the experiments are shown in Figure 1. The masks to be added to the images are shown in Figure 2, comprising the mark, 50% random loss, scratch, and watermark.
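The two metrics can be computed directly from definitions (56) and (57). In the sketch below, the single-window (global) form of SSIM and the constants $c_1, c_2$ follow the usual convention and are our assumptions.

```python
import numpy as np

def psnr(u, u_rec, max_val=255.0):
    """PSNR of (56): 10*log10(m*n*MAX^2 / ||u - u_rec||_F^2) = 10*log10(MAX^2/MSE)."""
    mse = np.mean((u.astype(float) - u_rec.astype(float)) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

def ssim_global(u, u_rec, max_val=255.0):
    """Single-window SSIM of (57) computed over the whole image."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2   # usual constants (assumption)
    a, b = u.astype(float), u_rec.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))
```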
All experimental programs are coded in MATLAB R2023b and Python 3.11.8 under Windows 11 64-bit and run on a system equipped with a 3.70 GHz Intel Core i5-12600KF CPU and 32 GB of memory with GPU NVIDIA GeForce RTX 4070 12G and CUDA Version: 12.7.

4.1. Parameter Effect on Algorithms

The main computational part of our algorithm is Algorithm 1, in which the parameters $\alpha$ and $p$ affect both the results and the computing speed. To determine which values of $\alpha$ and $p$ attain the best result, we vary $\alpha$ in steps of 0.1 over the open interval $(0, 2)$ and $p$ in steps of 0.1 over the closed interval $[1, 2]$, using the cat image in Figure 1a and the mask in Figure 2a to show the effect.
Using inequality (53), we show how $n$ changes as the parameter $\alpha$ increases, together with the time needed to calculate $T_n(u)$ 100 times using Algorithm 1. The results shown in Table 1 indicate that the window size increases when $\alpha < 0.2$ and decreases when $\alpha > 0.3$, and that the computation time grows with $n$, since the main computation in Algorithm 1 is a convolution with an $n \times n$ matrix.
Both $\alpha$ and $p$ affect the inpainting result. To determine which values of $\alpha$ and $p$ give the best result, we used the cat image in Figure 1a with the mark mask in Figure 2a, applied Algorithm 2 for inpainting, and used PSNR (56) to evaluate each combination of the two parameters. The results are shown in Figure 3, where the rows represent the value of $p$ and the columns the value of $\alpha$. The best result occurs at $\alpha = 1.5$ and $p = 2$.
Furthermore, after many experiments on various images and masks, the results show that the highest PSNR values are achieved when $1.3 \le \alpha \le 1.7$ and $1.8 \le p \le 2$. When using our algorithm for image inpainting, we recommend choosing a smaller value of $\alpha$ when the mask consists of thicker lines.

4.2. Inpainting

This section presents the experimental results and relative analysis to demonstrate the effectiveness of our proposed image inpainting models. We compared our approach with several existing models, including traditional mathematical models, Total Variation (TV) [66], Total Generalized Variation (TGV) [67], Frequency Total Variation (ftTV) [68], and Adaptive modified Mumford–Shah Inpainting (AMSI) [69], and some deep learning models, Globally and Locally Consistent Image Completion (GLCIC) [70], Rethinking Image Inpainting via a Mutual Encoder–Decoder with Feature Equalizations (MEDFE) [71], and Aggregated Contextual Transformations for High-Resolution Image Inpainting (AOT-GAN) [72].
In the inpainting problem, masks are applied directly to ground-truth images that are assumed to be noise-free. The results of our model and those of Total Variation (TV) [66], Total Generalized Variation (TGV) [67], Adaptive modified Mumford–Shah Inpainting (AMSI) [69], Globally and Locally Consistent Image Completion (GLCIC) [70], and Aggregated Contextual Transformations for High-Resolution Image Inpainting (AOT-GAN) [72] are all shown in Figures 4–31. To enhance the visual comparison, we added multiple color-coded boxes to highlight the details.
We can see that the deep learning models are incapable of inpainting images with thin curve masks, and the random mask of Figure 2b completely destroys their ability to recover the image. The advantage of deep learning models lies in inpainting large masked areas, because traditional models cannot inpaint an image when a large area of image information is missing; they need the surrounding information to infer the pixel values in the masked area.
As can be clearly seen for the cat image with the mark mask in Figure 8, our model based on the fractional Laplacian operator recovers hair details in the masked image better than other traditional models such as Total Variation (TV) [66], Total Generalized Variation (TGV) [67], and Adaptive modified Mumford–Shah Inpainting (AMSI) [69]; meanwhile, those models cannot recover a random mask at the edge of the image. All PSNR and SSIM results are shown in Table 2.

4.3. Comparison of Calculation Time

In practical applications, in addition to the inpainting result, the time required for inpainting is also an indicator for evaluating an algorithm. Most traditional algorithms require many iterations of computation, while deep learning algorithms generally need only a short forward pass through the neural network. In reality, however, the most time-consuming part of deep learning methods is the training phase, which can take hours, days, or even weeks. The time required to process a single image depends on the mask, and the results are shown in Table 3.
The computational time of the deep learning methods is much lower than that of the traditional mathematical models, but if the training time is taken into account, traditional models still have significant advantages.

4.4. Inpainting and Denoising

To demonstrate the inpainting and denoising ability of our model, we add several different types of noise to all images in Figure 1. We use the following seven kinds of noise (a sampling sketch follows the list):
  • Exponential Noise: The probability density function of exponential noise is
    $$p(z) = \begin{cases} \lambda e^{-\lambda z}, & z \ge 0; \\ 0, & z < 0. \end{cases}$$
  • Gamma Noise: The probability density function of gamma noise is
    $$p(z) = \begin{cases} \dfrac{\lambda^{k} z^{k-1} e^{-\lambda z}}{(k-1)!}, & z \ge 0; \\ 0, & z < 0. \end{cases}$$
  • Gaussian Noise: The probability density function of Gaussian noise is
    $$p(z) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\, e^{-\frac{(z-\mu)^{2}}{2\sigma^{2}}}.$$
  • Poisson Noise: The probability mass function of Poisson noise is
    $$P(k) = \frac{\lambda^{k} e^{-\lambda}}{k!}.$$
  • Rayleigh Noise: The probability density function of Rayleigh noise is
    $$p(z) = \begin{cases} \dfrac{z}{\sigma^{2}}\, e^{-z^{2}/(2\sigma^{2})}, & z \ge 0; \\ 0, & z < 0. \end{cases}$$
  • Salt-and-Pepper Noise: The corrupted pixel value under salt-and-pepper noise is
    $$I(x, y) = \begin{cases} 0, & \text{with probability } p_{\mathrm{pepper}}; \\ 255, & \text{with probability } p_{\mathrm{salt}}; \\ I_{0}(x, y), & \text{with probability } 1 - p_{\mathrm{pepper}} - p_{\mathrm{salt}}. \end{cases}$$
  • Uniform Noise: The probability density function of uniform noise is
    $$p(z) = \begin{cases} \dfrac{1}{b-a}, & a \le z \le b; \\ 0, & \text{otherwise.} \end{cases}$$
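The sketch below shows one way to synthesize these noise models with NumPy for a grayscale image in [0, 255]; the parameter values (λ, k, σ, a, b, and the salt/pepper probabilities) are illustrative placeholders rather than the settings used in our experiments.

```python
# Illustrative noise synthesis (assumed parameters, not the experimental settings).
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, kind):
    img = img.astype(np.float64)
    if kind == "exponential":
        noisy = img + rng.exponential(scale=10.0, size=img.shape)        # scale = 1/lambda
    elif kind == "gamma":
        noisy = img + rng.gamma(shape=2.0, scale=5.0, size=img.shape)    # shape = k
    elif kind == "gaussian":
        noisy = img + rng.normal(loc=0.0, scale=8.0, size=img.shape)     # mu, sigma
    elif kind == "poisson":
        noisy = rng.poisson(img).astype(np.float64)                      # signal-dependent
    elif kind == "rayleigh":
        noisy = img + rng.rayleigh(scale=10.0, size=img.shape)           # sigma
    elif kind == "salt_and_pepper":
        noisy = img.copy()
        u = rng.random(img.shape)
        noisy[u < 0.02] = 0.0        # pepper with probability 0.02
        noisy[u > 0.98] = 255.0      # salt with probability 0.02
    elif kind == "uniform":
        noisy = img + rng.uniform(low=-20.0, high=20.0, size=img.shape)  # [a, b]
    else:
        raise ValueError(f"unknown noise type: {kind}")
    return np.clip(noisy, 0.0, 255.0)
```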
Table 4 reports the PSNR and SSIM results of Algorithm 3 for joint inpainting and denoising, and Figures 32–43 show the images recovered using the H^1 norm. The method repairs the masked regions and suppresses the noise at the same time.
In Figure 40, the noise is clearly reduced and the masked area is repaired, which illustrates the effect of our algorithm.
We also observed that the H^1 norm separates oscillatory noise components from the underlying image structure better than the L^2 norm, removing noise while preserving edges. To quantify this improvement, we add Gaussian noise with mean μ = 0 and standard deviation σ = 8 or σ = 36 to all images in Figure 1. Tables 5 and 6 compare the H^1 and L^2 fidelity terms for Algorithm 3, and Figures 44–47 show the images recovered using the H^1 norm. Consistent with Osher et al. [43], replacing the L^2 norm with the H^1 norm yields better inpainting and denoising results, confirming that our method can restore the damaged image and reduce noise simultaneously.
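To make the difference between the two fidelity terms concrete, the sketch below evaluates both on the residual r = u − u0 using simple finite differences; it is an illustration of the comparison only, not the discretization used in Algorithm 3.

```python
# Illustrative comparison of the L2 and H1 fidelity terms on the residual r = u - u0.
import numpy as np

def l2_fidelity(u, u0):
    r = u - u0
    return float(np.sum(r ** 2))

def h1_fidelity(u, u0):
    r = u - u0
    ry, rx = np.gradient(r)  # central/one-sided finite differences
    # The extra gradient term penalizes oscillatory (high-frequency) residuals,
    # which is what helps separate noise from image structure.
    return float(np.sum(r ** 2) + np.sum(rx ** 2 + ry ** 2))
```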

5. Discussion

In this paper, we present an improved model based on the fractional Laplacian operator, which employs the L^p norm of the fractional Laplacian as the regularization term and the H^1 norm as the fidelity term; these choices lead to better results in restoring damaged regions. The introduction of an intermediate variable into the diffusion equation resolves the difficulty of handling the H^1 term. For intuition, an illustrative spectral sketch of the fractional Laplacian is given below.
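The following sketch applies the fractional Laplacian through its Fourier symbol |ξ|^α on a periodic grid; this spectral form is shown purely for illustration and is not necessarily the windowed discrete approximation used in our experiments (cf. Table 1).

```python
# Illustrative spectral application of (-Delta)^(alpha/2) on a periodic grid;
# the Fourier symbol of the fractional Laplacian is |xi|^alpha.
import numpy as np

def fractional_laplacian(u, alpha):
    ny, nx = u.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    symbol = (KX ** 2 + KY ** 2) ** (alpha / 2.0)   # |xi|^alpha
    return np.real(np.fft.ifft2(symbol * np.fft.fft2(u)))
```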
However, the theoretical analysis shows that the fractional Bessel potential space with p = 1 is not a Banach space under the same norm definition, nor is it reflexive or separable. Since the relevant theory is still being developed, we have not extended our existence and uniqueness theorem to the case p = 1. Consistent with this, our experiments show that p = 1 yields unsatisfactory inpainting performance. Nevertheless, because the constraint at p = 1 induces sparse solutions, there may be special cases where it achieves better outcomes, and future work should investigate whether similar results hold for p = 1.
Model analysis confirms that our method is theoretically sound and effective, with guaranteed convergence to the minimizer. Comparative experiments show that it consistently outperforms traditional models in preserving texture details and surpasses deep learning approaches in recovering thin curve masks. To further improve performance, we propose three potential strategies: (1) replacing deep learning methods with our approach for solving the diffusion equation, (2) using our energy functional as the loss function for network training, and (3) designing a hybrid pipeline in which deep models handle large missing regions and our method refines fine structures. Such combined strategies could further enhance inpainting quality.

6. Conclusions

This paper proposes a novel image inpainting framework based on minimizing an energy functional that integrates an L p -norm regularization term involving fractional-order Laplacian operators with an H 1 -norm fidelity term. The model is theoretically grounded via an associated diffusion equation whose solution converges to the variational minimizer, with convergence rigorously justified. A finite difference-based numerical scheme is derived from the diffusion formulation and is systematically evaluated through comprehensive experiments. Comparative studies against classical variational methods and deep learning models demonstrate the proposed method’s superior performance in preserving structural details and suppressing noise under various degradation scenarios.

Author Contributions

Methodology, H.Y., Z.-A.Y. and D.H.; Computational Implementation, W.S.; Validation, W.S.; Formal analysis, W.S. and H.Y.; Resources, H.Y.; Writing—original draft, W.S.; Writing—review and editing, H.Y. and X.L.; Supervision, Z.-A.Y. and D.H.; H.Y. and W.S. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China, Grant Number 2020YFA0712500; the Research and Development Project of Pazhou Lab (Huangpu), Grant Number 2023K0601; and the Science and Technology Innovation Program of Hunan Province, Grant Number 2024QK2006.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We extend our deepest gratitude to LIU Qiang, HUANG Jingchi, and ZHOU Yulong for their contributions to the analytical framework of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image Inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424. [Google Scholar]
  2. Perona, P.; Malik, J. Scale-space and Edge Detection Using Anisotropic Diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639. [Google Scholar] [CrossRef]
  3. Catté, F.; Lions, P.L.; Morel, J.M.; Coll, T. Image Selective Smoothing and Edge Detection by Nonlinear Diffusion. SIAM J. Numer. Anal. 1992, 29, 182–193. [Google Scholar] [CrossRef]
  4. Chan, T.F.; Shen, J. Variational Image Inpainting. Commun. Pure Appl. Math. J. Issued Courant Inst. Math. Sci. 2005, 58, 579–619. [Google Scholar] [CrossRef]
  5. Fadili, M.J.; Starck, J.L.; Murtagh, F. Inpainting and Zooming Using Sparse Representations. Comput. J. 2009, 52, 64–79. [Google Scholar] [CrossRef]
  6. Lysaker, M.; Lundervold, A.; Tai, X.C. Noise Removal Using Fourth-order Partial Differential Equation with Applications to Medical Magnetic Resonance Images in Space and Time. IEEE Trans. Image Process. 2003, 12, 1579–1590. [Google Scholar] [CrossRef]
  7. Kim, S.; Lim, H. Fourth-order Partial Differential Equations for Effective Image Denoising. Electron. J. Differ. Equ. (EJDE) 2009, 2009, 107–121. [Google Scholar]
  8. Zhao, D.H. Adaptive Fourth Order Partial Differential Equation Filter From the Webers Total Variation for Image Restoration. Appl. Mech. Mater. 2014, 475, 394–400. [Google Scholar] [CrossRef]
  9. Bai, J.; Feng, X.C. Fractional-order Anisotropic Diffusion for Image Denoising. IEEE Trans. Image Process. 2007, 16, 2492–2502. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Pu, Y.F.; Hu, J.R.; Liu, Y.; Chen, Q.L.; Zhou, J.L. Efficient CT Metal Artifact Reduction Based on Fractional-order Curvature Diffusion. Comput. Math. Methods Med. 2011, 2011, 173748. [Google Scholar] [CrossRef]
  11. Chen, D.; Chen, Y.; Xue, D. Fractional-order total variation image restoration based on primal-dual algorithm. Abstr. Appl. Anal. 2013, 2013, 585310. [Google Scholar] [CrossRef]
  12. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 8, 2487–2518. [Google Scholar] [CrossRef]
  13. Yin, X.; Zhou, S.; Siddique, M.A. Fractional nonlinear anisotropic diffusion with p-Laplace variation method for image restoration. Multimed. Tools Appl. 2016, 75, 4505–4526. [Google Scholar] [CrossRef]
  14. Dong, B.; Jiang, Q.; Shen, Z. Image Restoration: Wavelet Frame Shrinkage, Nonlinear Evolution PDEs, and Beyond. Multiscale Model. Simul. 2017, 15, 606–660. [Google Scholar] [CrossRef]
  15. Zhou, L.; Tang, J. Fraction-order total variation blind image restoration based on L1-norm. Appl. Math. Model. 2017, 51, 469–476. [Google Scholar] [CrossRef]
  16. Li, D.; Tian, X.; Jin, Q.; Hirasawa, K. Adaptive fractional-order total variation image restoration with split Bregman iteration. ISA Trans. 2018, 82, 210–222. [Google Scholar] [CrossRef]
  17. Liu, Q.; Zhang, Z.; Guo, Z. On a fractional reaction–diffusion system applied to image decomposition and restoration. Comput. Math. Appl. 2019, 78, 1739–1751. [Google Scholar] [CrossRef]
  18. Sridevi, G.; Srinivas Kumar, S. Image inpainting based on fractional-order nonlinear diffusion for image reconstruction. Circuits Syst. Signal Process. 2019, 38, 3802–3817. [Google Scholar] [CrossRef]
  19. Sadaf, M.; Akram, G. Effects of fractional order derivative on the solution of time-fractional Cahn–Hilliard equation arising in digital image inpainting. Indian J. Phys. 2021, 95, 891–899. [Google Scholar] [CrossRef]
  20. Yan, S.; Ni, G.; Zeng, T. Image restoration based on fractional-order model with decomposition: Texture and cartoon. Comput. Appl. Math. 2021, 40, 304. [Google Scholar] [CrossRef]
  21. Gouasnouane, O.; Moussaid, N.; Boujena, S.; Kabli, K. A nonlinear fractional partial differential equation for image inpainting. Math. Model. Comput. 2022, 9, 536–546. [Google Scholar] [CrossRef]
  22. Bhutto, J.A.; Khan, A.; Rahman, Z. Image restoration with fractional-order total variation regularization and group sparsity. Mathematics 2023, 11, 3302. [Google Scholar] [CrossRef]
  23. Lian, X.; Fu, Q.; Su, W.; Zhang, X.; Li, J.; Yao, Z.A. The Fractional Laplacian Based Image Inpainting. Inverse Probl. Imaging 2024, 18, 326–365. [Google Scholar] [CrossRef]
  24. Li, Y.; Guo, Z.; Shao, J.; Li, Y.; Wu, B. Variable-order fractional 1-Laplacian diffusion equations for multiplicative noise removal. Fract. Calc. Appl. Anal. 2024, 27, 3374–3413. [Google Scholar] [CrossRef]
  25. Li, C.; Zhao, D. A non-convex fractional-order differential equation for medical image restoration. Symmetry 2024, 16, 258. [Google Scholar] [CrossRef]
  26. Khan, M.A.; Ullah, A.; Fu, Z.J.; Khan, S.; Khan, S. Image restoration via combining a fractional order variational filter and a TGV penalty. Multimed. Tools Appl. 2024, 83, 60393–60418. [Google Scholar] [CrossRef]
  27. Lian, W.; Liu, X.; Chen, Y. Non-convex Fractional-order TV Model for Image Inpainting. Multimed. Syst. 2025, 31, 17. [Google Scholar] [CrossRef]
  28. Halim, A.; Rohim, A.; Kumar, B.; Saha, R. A fractional image inpainting model using a variant of Mumford-Shah model. Multimed. Tools Appl. 2025, 1–22. [Google Scholar] [CrossRef]
  29. Criminisi, A.; Pérez, P.; Toyama, K. Region Filling and Object Removal by Exemplar-based Image Inpainting. IEEE Trans. Image Process. 2004, 13, 1200–1212. [Google Scholar] [CrossRef]
  30. Komodakis, N.; Tziritas, G. Image Completion Using Efficient Belief Propagation via Priority Scheduling and Dynamic Pruning. IEEE Trans. Image Process. 2007, 16, 2649–2661. [Google Scholar] [CrossRef]
  31. Xu, Z.; Sun, J. Image Inpainting by Patch Propagation Using Patch Sparsity. IEEE Trans. Image Process. 2010, 19, 1153–1165. [Google Scholar]
  32. Newson, A.; Almansa, A.; Gousseau, Y.; Pérez, P. Non-local Patch-based Image Inpainting. Image Process. Line 2017, 7, 373–385. [Google Scholar] [CrossRef]
  33. Kingma, D.P.; Welling, M. Auto-encoding Variational Bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
  34. Lopez, R.; Regier, J.; Jordan, M.I.; Yosef, N. Information Constraints on Auto-encoding Variational Bayes. In Proceedings of the 32nd International Conference on Neural Information Processing System, Montréal, QC, Canada, 3–8 December 2018. [Google Scholar]
  35. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  36. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  37. Liu, G.; Reda, F.A.; Shih, K.J.; Wang, T.C.; Tao, A.; Catanzaro, B. Image Inpainting for Irregular Holes Using Partial Convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 85–100. [Google Scholar]
  38. Zeng, Y.; Lin, Z.; Yang, J.; Zhang, J.; Shechtman, E.; Lu, H. High-resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XIX 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–17. [Google Scholar]
  39. Zhao, L.; Mo, Q.; Lin, S.; Wang, Z.; Zuo, Z.; Chen, H.; Xing, W.; Lu, D. Uctgan: Diverse Image Inpainting Based on Unsupervised Cross-space Translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5741–5750. [Google Scholar]
  40. Lischke, A.; Pang, G.; Gulian, M.; Song, F.; Glusa, C.; Zheng, X.; Mao, Z.; Cai, W.; Meerschaert, M.M.; Ainsworth, M.; et al. What is the Fractional Laplacian? A Comparative Review with New Results. J. Comput. Phys. 2020, 404, 109009. [Google Scholar] [CrossRef]
  41. Waheed, W.; Deng, G.; Liu, B. Discrete Laplacian Operator and Its Applications in Signal Processing. IEEE Access 2020, 8, 89692–89707. [Google Scholar] [CrossRef]
  42. Bellido, J.C.; García-Sáez, G. Bessel Potential Spaces and Complex Interpolation: Continuous Embeddings. arXiv 2025, arXiv:2503.04310. [Google Scholar]
  43. Osher, S.; Solé, A.; Vese, L. Image Decomposition and Restoration Using Total Variation Minimization and the H−1 Norm. Multiscale Model. Simul. 2003, 1, 349–370. [Google Scholar] [CrossRef]
  44. Dipierro, S.; Ros-Oton, X.; Valdinoci, E. Nonlocal problems with Neumann boundary conditions. Rev. Mat. Iberoam. 2017, 33, 377–416. [Google Scholar] [CrossRef]
  45. Showalter, R.E. Monotone Operators in Banach Space and Nonlinear Partial Differential Equations; American Mathematical Soc.: Washington, DC, USA, 2013; Volume 49. [Google Scholar]
  46. Yang, Q.; Liu, F.; Turner, I. Numerical methods for fractional partial differential equations with Riesz space fractional derivatives. Appl. Math. Model. 2010, 34, 200–218. [Google Scholar] [CrossRef]
  47. Dehghan, M.; Manafian, J.; Saadatmandi, A. Solving nonlinear fractional partial differential equations using the homotopy analysis method. Numer. Methods Partial Differ. Equ. Int. J. 2010, 26, 448–479. [Google Scholar] [CrossRef]
  48. Ford, N.J.; Xiao, J.; Yan, Y. A finite element method for time fractional partial differential equations. Fract. Calc. Appl. Anal. 2011, 14, 454–474. [Google Scholar] [CrossRef]
  49. Barrios, B.; Colorado, E.; De Pablo, A.; Sánchez, U. On some critical problems for the fractional Laplacian operator. J. Differ. Equ. 2012, 252, 6133–6162. [Google Scholar] [CrossRef]
  50. Musina, R.; Nazarov, A.I. On fractional Laplacians. Commun. Partial Differ. Equ. 2014, 39, 1780–1790. [Google Scholar] [CrossRef]
  51. Guo, B.; Pu, X.; Huang, F. Fractional Partial Differential Equations and Their Numerical Solutions; World Scientific: Singapore, 2015. [Google Scholar]
  52. Liu, F.; Zhuang, P.; Liu, Q. Numerical Methods of Fractional Partial Differential Equations and Applications; Science Press, LLC: Nanjing, China, 2015. [Google Scholar]
  53. Cheng, T.; Huang, G.; Li, C. The maximum principles for fractional Laplacian equations and their applications. Commun. Contemp. Math. 2017, 19, 1750018. [Google Scholar] [CrossRef]
  54. Pozrikidis, C. The Fractional Laplacian; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018. [Google Scholar]
  55. Li, C.; Chen, A. Numerical methods for fractional partial differential equations. Int. J. Comput. Math. 2018, 95, 1048–1099. [Google Scholar] [CrossRef]
  56. Yao, Z.A.; Zhou, Y.L. High order approximation for the Boltzmann equation without angular cutoff under moderately soft potentials. Kinet. Relat. Models 2020, 13, 435–478. [Google Scholar] [CrossRef]
  57. Chen, W.; Li, Y.; Ma, P. The Fractional Laplacian; World Scientific: Singapore, 2020. [Google Scholar]
  58. Shao, J.; Guo, B. Existence of solutions and Hyers–Ulam stability for a coupled system of nonlinear fractional differential equations with p-Laplacian operator. Symmetry 2021, 13, 1160. [Google Scholar] [CrossRef]
  59. Mu, X.; Huang, J.; Wen, L.; Zhuang, S. Modeling viscoacoustic wave propagation using a new spatial variable-order fractional Laplacian wave equation. Geophysics 2021, 86, T487–T507. [Google Scholar] [CrossRef]
  60. Ansari, A.; Derakhshan, M.H.; Askari, H. Distributed order fractional diffusion equation with fractional Laplacian in axisymmetric cylindrical configuration. Commun. Nonlinear Sci. Numer. Simul. 2022, 113, 106590. [Google Scholar] [CrossRef]
  61. Daoud, M.; Laamri, E.H. Fractional Laplacians: A short survey. Discret. Contin. Dyn. Syst.—S 2022, 15, 95–116. [Google Scholar] [CrossRef]
  62. Qiu, H.; Yao, Z.A. Local existence for the non-resistive magnetohydrodynamic system with fractional dissipation in the Lp framework. Partial Differ. Equ. Appl. 2022, 3, 73. [Google Scholar] [CrossRef]
  63. Sousa, J.V.d.C.; Zuo, J.; O’Regan, D. The Nehari manifold for a ψ-Hilfer fractional p-Laplacian. Appl. Anal. 2022, 101, 5076–5106. [Google Scholar] [CrossRef]
  64. Borthagaray, J.P.; Nochetto, R.H. Besov regularity for the Dirichlet integral fractional Laplacian in Lipschitz domains. J. Funct. Anal. 2023, 284, 109829. [Google Scholar] [CrossRef]
  65. Ansari, A.; Derakhshan, M.H. On spectral polar fractional Laplacian. Math. Comput. Simul. 2023, 206, 636–663. [Google Scholar] [CrossRef]
  66. Shen, J.; Chan, T.F. Mathematical Models for Local Nontexture Inpaintings. SIAM J. Appl. Math. 2002, 62, 1019–1043. [Google Scholar] [CrossRef]
  67. Bredies, K.; Sun, H.P. Preconditioned Douglas–Rachford Algorithms for TV- and TGV-Regularized Variational Imaging Problems. J. Math. Imaging Vis. 2015, 52, 317–344. [Google Scholar] [CrossRef]
  68. Paulsen, K.D.; Jiang, H. Enhanced Frequency-domain Optical Image Reconstruction in Tissues Through Total-variation Minimization. Appl. Opt. 1996, 35, 3447–3458. [Google Scholar] [CrossRef]
  69. Thanh, D.N.H.; Surya, P.V.; Van Son, N.; Hieu, L.M. An Adaptive Image Inpainting Method Based on the Modified Mumford-Shah Model and Multiscale Parameter Estimation. Comput. Opt. 2019, 43, 251–257. [Google Scholar] [CrossRef]
  70. Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Globally and Locally Consistent Image Completion. ACM Trans. Graph. (ToG) 2017, 36, 1–14. [Google Scholar] [CrossRef]
  71. Liu, H.; Jiang, B.; Song, Y.; Huang, W.; Yang, C. Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part II 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 725–741. [Google Scholar]
  72. Zeng, Y.; Fu, J.; Chao, H.; Guo, B. Aggregated Contextual Transformations for High-resolution Image Inpainting. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3266–3280. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Ground-truth images.
Figure 2. Mask images.
Figure 3. PSNR result with the change in α and p. PSNR values are color-coded from red (lowest: 14.01 dB) to green (highest: 42.68 dB), with intermediate values transitioning through yellow.
Figure 4. Examples of building image with mark recovery.
Figure 5. Examples of building image with random recovery.
Figure 6. Examples of building image with scratch recovery.
Figure 7. Examples of building image with watermark recovery.
Figure 8. Examples of cat image with mark recovery.
Figure 9. Examples of cat image with random recovery.
Figure 10. Examples of cat image with scratch recovery.
Figure 11. Examples of cat image with watermark recovery.
Figure 12. Examples of face image with mark recovery.
Figure 13. Examples of face image with random recovery.
Figure 14. Examples of face image with scratch recovery.
Figure 15. Examples of face image with watermark recovery.
Figure 16. Examples of forest image with mark recovery.
Figure 17. Examples of forest image with random recovery.
Figure 18. Examples of forest image with scratch recovery.
Figure 19. Examples of forest image with watermark recovery.
Figure 20. Examples of fox image with mark recovery.
Figure 21. Examples of fox image with random recovery.
Figure 22. Examples of fox image with scratch recovery.
Figure 23. Examples of fox image with watermark recovery.
Figure 24. Examples of penguin image with mark recovery.
Figure 25. Examples of penguin image with random recovery.
Figure 26. Examples of penguin image with scratch recovery.
Figure 27. Examples of penguin image with watermark recovery.
Figure 28. Examples of rabbit image with mark recovery.
Figure 29. Examples of rabbit image with random recovery.
Figure 30. Examples of rabbit image with scratch recovery.
Figure 31. Examples of rabbit image with watermark recovery.
Figure 32. Examples of image inpainting and denoising with mask mark in exponential noise and Gamma noise.
Figure 33. Examples of image inpainting and denoising with mask mark in Poisson noise and Rayleigh noise.
Figure 34. Examples of image inpainting and denoising with mask mark in salt-and-pepper noise and uniform noise.
Figure 35. Examples of image inpainting and denoising with mask random in exponential noise and Gamma noise.
Figure 36. Examples of image inpainting and denoising with mask random in Poisson noise and Rayleigh noise.
Figure 37. Examples of image inpainting and denoising with mask random in salt-and-pepper noise and uniform noise.
Figure 38. Examples of image inpainting and denoising with mask scratch in exponential noise and Gamma noise.
Figure 39. Examples of image inpainting and denoising with mask scratch in Poisson noise and Rayleigh noise.
Figure 40. Examples of image inpainting and denoising with mask scratch in salt-and-pepper noise and uniform noise.
Figure 41. Examples of image inpainting and denoising with mask watermark in exponential noise and Gamma noise.
Figure 42. Examples of image inpainting and denoising with mask watermark in Poisson noise and Rayleigh noise.
Figure 43. Examples of image inpainting and denoising with mask watermark in salt-and-pepper noise and uniform noise.
Figure 44. Examples of image inpainting and denoising with mask mark in Gaussian noise.
Figure 45. Examples of image inpainting and denoising with mask random in Gaussian noise.
Figure 46. Examples of image inpainting and denoising with mask scratch in Gaussian noise.
Figure 47. Examples of image inpainting and denoising with mask watermark in Gaussian noise.
Table 1. Window size n and calculation time with a change in α.
α 10 9 0.10.150.20.30.40.50.60.70.80.9
n167428481736354464034
Time (s)0.1913.7819.084.1720.0616.3312.209.026.581.013.65
α 1.01.11.21.31.41.51.61.71.81.9 2 10 9
n3026232017151311971
Time (s)2.872.181.730.310.990.790.620.470.3510.250.19
Table 2. Image inpainting comparison results in terms of PSNR and SSIM. For each image–mask pair, the first row reports PSNR (dB) and the second row reports SSIM.
Image | Mask | TV | TGV | ftTV | AMSI | GLCIC | MEDFE | AOT-GAN | Ours
Building | Mark | 24.59 | 24.38 | 24.66 | 24.66 | 23.43 | 18.20 | 23.73 | 24.69
 | | 0.9409 | 0.9417 | 0.9424 | 0.9424 | 0.9344 | 0.6249 | 0.9358 | 0.9431
Building | Random | 20.38 | 20.28 | 20.47 | 20.47 | 18.93 | 18.10 | 16.71 | 20.55
 | | 0.8359 | 0.8434 | 0.8403 | 0.8403 | 0.7944 | 0.6124 | 0.6821 | 0.8422
Building | Scratch | 31.57 | 31.41 | 31.63 | 31.63 | 30.60 | 18.32 | 29.95 | 31.71
 | | 0.9884 | 0.9888 | 0.9887 | 0.9887 | 0.9864 | 0.6365 | 0.9806 | 0.9889
Building | Watermark | 25.82 | 25.64 | 25.87 | 25.87 | 24.47 | 18.27 | 24.81 | 25.95
 | | 0.9546 | 0.9561 | 0.9556 | 0.9556 | 0.9449 | 0.6315 | 0.9381 | 0.9565
Cat | Mark | 41.20 | 42.04 | 41.85 | 41.85 | 39.71 | 32.37 | 34.51 | 42.73
 | | 0.9925 | 0.9935 | 0.9940 | 0.9940 | 0.9907 | 0.9460 | 0.9662 | 0.9948
Cat | Random | 39.42 | 38.63 | 40.24 | 40.23 | 33.17 | 35.94 | 21.30 | 40.62
 | | 0.9864 | 0.9839 | 0.9894 | 0.9894 | 0.9671 | 0.9753 | 0.6288 | 0.9900
Cat | Scratch | 49.18 | 49.81 | 50.06 | 50.06 | 47.89 | 37.03 | 39.43 | 50.41
 | | 0.9985 | 0.9987 | 0.9988 | 0.9988 | 0.9981 | 0.9772 | 0.9851 | 0.9989
Cat | Watermark | 43.98 | 44.89 | 44.82 | 44.82 | 41.19 | 36.82 | 35.47 | 45.22
 | | 0.9955 | 0.9962 | 0.9965 | 0.9965 | 0.9936 | 0.9767 | 0.9608 | 0.9967
Face | Mark | 39.22 | 39.77 | 39.96 | 39.96 | 38.43 | 31.54 | 34.75 | 40.21
 | | 0.9882 | 0.9895 | 0.9901 | 0.9901 | 0.9850 | 0.9239 | 0.9649 | 0.9905
Face | Random | 36.67 | 36.05 | 38.13 | 38.13 | 30.81 | 33.46 | 14.51 | 38.88
 | | 0.9750 | 0.9758 | 0.9826 | 0.9826 | 0.9500 | 0.9419 | 0.4029 | 0.9844
Face | Scratch | 47.56 | 48.27 | 48.80 | 48.80 | 46.82 | 34.53 | 39.20 | 49.55
 | | 0.9982 | 0.9985 | 0.9987 | 0.9987 | 0.9979 | 0.9552 | 0.9810 | 0.9988
Face | Watermark | 41.01 | 41.85 | 42.16 | 42.17 | 39.91 | 34.31 | 33.45 | 42.70
 | | 0.9914 | 0.9927 | 0.9934 | 0.9934 | 0.9877 | 0.9526 | 0.9374 | 0.9940
Forest | Mark | 30.52 | 30.42 | 30.51 | 30.51 | 29.44 | 23.35 | 27.80 | 30.58
 | | 0.9488 | 0.9495 | 0.9496 | 0.9496 | 0.9434 | 0.6290 | 0.9155 | 0.9489
Forest | Random | 25.92 | 25.70 | 25.92 | 25.92 | 24.62 | 23.74 | 18.71 | 25.94
 | | 0.8370 | 0.8348 | 0.8382 | 0.8381 | 0.8000 | 0.6405 | 0.5602 | 0.8361
Forest | Scratch | 37.59 | 37.43 | 37.58 | 37.58 | 36.57 | 23.89 | 32.94 | 37.63
 | | 0.9899 | 0.9897 | 0.9898 | 0.9898 | 0.9880 | 0.6553 | 0.9699 | 0.9898
Forest | Watermark | 31.33 | 31.27 | 31.32 | 31.32 | 30.23 | 23.86 | 26.93 | 31.35
 | | 0.9559 | 0.9564 | 0.9563 | 0.9563 | 0.9486 | 0.6520 | 0.8901 | 0.9560
Fox | Mark | 41.07 | 41.26 | 41.24 | 41.24 | 39.54 | 31.16 | 35.10 | 41.34
 | | 0.9893 | 0.9898 | 0.9901 | 0.9901 | 0.9876 | 0.9107 | 0.9653 | 0.9902
Fox | Random | 36.90 | 34.10 | 37.23 | 37.22 | 34.71 | 34.21 | 16.75 | 36.96
 | | 0.9698 | 0.9610 | 0.9730 | 0.9730 | 0.9581 | 0.9389 | 0.5147 | 0.9731
Fox | Scratch | 48.04 | 48.05 | 48.51 | 48.51 | 47.00 | 34.62 | 35.07 | 48.63
 | | 0.9976 | 0.9977 | 0.9979 | 0.9979 | 0.9974 | 0.9416 | 0.9738 | 0.9979
Fox | Watermark | 41.61 | 42.01 | 41.98 | 41.98 | 40.02 | 34.56 | 32.69 | 42.19
 | | 0.9913 | 0.9919 | 0.9924 | 0.9924 | 0.9901 | 0.9410 | 0.9437 | 0.9925
Penguin | Mark | 39.08 | 40.12 | 39.48 | 39.48 | 37.94 | 30.10 | 33.38 | 40.45
 | | 0.9898 | 0.9912 | 0.9925 | 0.9925 | 0.9895 | 0.9208 | 0.9424 | 0.9935
Penguin | Random | 37.18 | 34.80 | 37.48 | 37.48 | 34.37 | 33.22 | 19.25 | 37.60
 | | 0.9808 | 0.9708 | 0.9882 | 0.9882 | 0.9676 | 0.9633 | 0.4581 | 0.9897
Penguin | Scratch | 46.34 | 46.43 | 46.61 | 46.61 | 45.71 | 34.33 | 38.20 | 46.93
 | | 0.9985 | 0.9986 | 0.9990 | 0.9990 | 0.9982 | 0.9733 | 0.9774 | 0.9991
Penguin | Watermark | 42.13 | 42.76 | 42.42 | 42.42 | 40.12 | 34.12 | 31.85 | 43.12
 | | 0.9933 | 0.9941 | 0.9955 | 0.9955 | 0.9920 | 0.9708 | 0.9073 | 0.9962
Rabbit | Mark | 37.88 | 38.31 | 38.35 | 38.35 | 37.03 | 29.56 | 31.26 | 38.38
 | | 0.9951 | 0.9956 | 0.9957 | 0.9957 | 0.9948 | 0.9634 | 0.9662 | 0.9958
Rabbit | Random | 33.10 | 32.92 | 33.74 | 33.74 | 30.73 | 30.49 | 20.03 | 33.82
 | | 0.9849 | 0.9845 | 0.9873 | 0.9873 | 0.9761 | 0.9703 | 0.6611 | 0.9882
Rabbit | Scratch | 43.00 | 43.59 | 43.53 | 43.53 | 42.38 | 30.62 | 36.07 | 43.79
 | | 0.9986 | 0.9988 | 0.9988 | 0.9988 | 0.9985 | 0.9701 | 0.9885 | 0.9989
Rabbit | Watermark | 38.25 | 38.84 | 38.83 | 38.83 | 37.35 | 30.58 | 29.59 | 39.01
 | | 0.9955 | 0.9961 | 0.9962 | 0.9962 | 0.9951 | 0.9701 | 0.9461 | 0.9964
Note: Bold numbers indicate the best PSNR and SSIM values in each row.
Table 3. Image inpainting comparison results in terms of computation time per image (seconds).
Mask | TV | TGV | ftTV | AMSI | GLCIC | MEDFE | AOT-GAN | Ours
Mark | 7.868 | 203.646 | 6.865 | 35.524 | 0.549 | 0.122 | 1.101 | 44.721
Random | 5.722 | 171.770 | 4.785 | 31.137 | 15.692 | 0.061 | 1.099 | 15.175
Scratch | 4.395 | 138.177 | 3.948 | 34.716 | 0.611 | 0.061 | 1.098 | 12.106
Watermark | 5.556 | 166.638 | 4.710 | 35.190 | 0.795 | 0.065 | 1.096 | 17.969
Table 4. Inpainting results for images with denoising in additive noise. Each entry gives PSNR (dB) / SSIM.
Exponential Noise | Mark | Random | Scratch | Watermark
Building | 23.5219 / 0.9194 | 20.1381 / 0.8180 | 27.4947 / 0.9631 | 24.2992 / 0.9260
Cat | 31.3146 / 0.9546 | 30.5670 / 0.8946 | 31.4633 / 0.9561 | 31.2976 / 0.9601
Face | 30.8266 / 0.9121 | 30.5651 / 0.8991 | 31.1467 / 0.9190 | 30.9665 / 0.9139
Forest | 26.6060 / 0.8831 | 24.4644 / 0.7803 | 24.7045 / 0.9017 | 24.4132 / 0.8687
Fox | 30.3924 / 0.9374 | 30.7804 / 0.9306 | 30.8616 / 0.9489 | 30.5092 / 0.9463
Penguin | 30.5294 / 0.9402 | 30.7373 / 0.9271 | 30.7464 / 0.9524 | 30.8528 / 0.9482
Rabbit | 30.0392 / 0.9835 | 29.2628 / 0.9733 | 30.4775 / 0.9856 | 30.2554 / 0.9831
Gamma Noise | Mark | Random | Scratch | Watermark
Building | 23.5345 / 0.9191 | 20.1374 / 0.8177 | 27.4883 / 0.9629 | 24.3454 / 0.9278
Cat | 31.0909 / 0.9550 | 30.8529 / 0.9153 | 31.5462 / 0.9569 | 31.2054 / 0.9599
Face | 30.8984 / 0.9125 | 30.5897 / 0.8991 | 31.2104 / 0.9195 | 30.9707 / 0.9141
Forest | 26.7144 / 0.8839 | 24.4632 / 0.7803 | 24.8728 / 0.9033 | 25.8153 / 0.8861
Fox | 30.3399 / 0.9371 | 30.6962 / 0.9318 | 31.0956 / 0.9463 | 30.4271 / 0.9445
Penguin | 30.3736 / 0.9402 | 30.0680 / 0.9407 | 30.5794 / 0.9512 | 30.8383 / 0.9485
Rabbit | 30.0301 / 0.9834 | 29.2411 / 0.9735 | 30.4779 / 0.9855 | 30.2499 / 0.9831
Poisson Noise | Mark | Random | Scratch | Watermark
Building | 23.0979 / 0.8803 | 20.0634 / 0.7957 | 26.3553 / 0.9218 | 23.9323 / 0.8941
Cat | 32.3407 / 0.8950 | 30.0989 / 0.8327 | 32.6814 / 0.8973 | 33.1093 / 0.9077
Face | 34.1826 / 0.9593 | 33.1467 / 0.9480 | 35.2935 / 0.9677 | 34.4262 / 0.9620
Forest | 26.0793 / 0.7895 | 24.3409 / 0.7113 | 27.1964 / 0.8167 | 26.2036 / 0.7885
Fox | 32.6150 / 0.9036 | 31.3355 / 0.8669 | 32.3688 / 0.8871 | 31.6808 / 0.8722
Penguin | 31.4094 / 0.8124 | 31.2699 / 0.8321 | 32.1187 / 0.8392 | 32.0490 / 0.8556
Rabbit | 30.6436 / 0.9665 | 29.0811 / 0.9536 | 30.9963 / 0.9681 | 30.6285 / 0.9663
Rayleigh Noise | Mark | Random | Scratch | Watermark
Building | 22.0915 / 0.9075 | 19.2451 / 0.7863 | 23.8002 / 0.9451 | 22.0345 / 0.9029
Cat | 26.1147 / 0.9420 | 26.2009 / 0.8374 | 25.9137 / 0.9446 | 25.7753 / 0.9302
Face | 25.9880 / 0.8552 | 26.2727 / 0.8352 | 25.8990 / 0.8602 | 26.0377 / 0.8520
Forest | 23.5722 / 0.8684 | 20.9930 / 0.7091 | 20.3927 / 0.8348 | 20.2043 / 0.7962
Fox | 25.3740 / 0.9284 | 26.7237 / 0.9003 | 25.5398 / 0.9386 | 25.3137 / 0.9317
Penguin | 25.7521 / 0.9295 | 26.8164 / 0.8680 | 25.6746 / 0.9465 | 26.0382 / 0.9304
Rabbit | 25.6232 / 0.9793 | 25.6816 / 0.9656 | 25.5858 / 0.9772 | 25.8484 / 0.9772
Salt-and-Pepper Noise | Mark | Random | Scratch | Watermark
Building | 23.7426 / 0.9245 | 20.2712 / 0.8300 | 27.9141 / 0.9690 | 24.7012 / 0.9385
Cat | 35.1192 / 0.9645 | 33.1050 / 0.9574 | 35.9147 / 0.9675 | 35.4334 / 0.9653
Face | 34.2389 / 0.9563 | 33.5288 / 0.9552 | 35.4364 / 0.9650 | 34.4533 / 0.9589
Forest | 27.4748 / 0.8913 | 25.1746 / 0.8000 | 29.4569 / 0.9343 | 28.0936 / 0.9091
Fox | 33.7617 / 0.9350 | 33.3341 / 0.9375 | 34.6032 / 0.9487 | 34.1221 / 0.9484
Penguin | 33.8500 / 0.9537 | 33.1736 / 0.9554 | 34.3207 / 0.9584 | 33.9043 / 0.9584
Rabbit | 32.7510 / 0.9827 | 31.0736 / 0.9756 | 33.7107 / 0.9858 | 32.9432 / 0.9833
Uniform Noise | Mark | Random | Scratch | Watermark
Building | 24.1427 / 0.9232 | 20.3883 / 0.8276 | 29.3024 / 0.9678 | 25.2204 / 0.9366
Cat | 36.2064 / 0.9609 | 33.7892 / 0.9299 | 36.7609 / 0.9615 | 36.9413 / 0.9657
Face | 35.2313 / 0.9574 | 34.4223 / 0.9449 | 36.8067 / 0.9655 | 35.7369 / 0.9597
Forest | 28.5367 / 0.8953 | 25.4301 / 0.7935 | 31.4508 / 0.9361 | 29.1955 / 0.9059
Fox | 34.3727 / 0.9396 | 34.1130 / 0.9350 | 35.6981 / 0.9513 | 35.3842 / 0.9494
Penguin | 34.9857 / 0.9465 | 33.8864 / 0.9462 | 35.8017 / 0.9582 | 34.9193 / 0.9567
Rabbit | 33.7146 / 0.9855 | 31.5987 / 0.9765 | 34.5199 / 0.9875 | 33.6418 / 0.9851
Table 5. Comparison of the H^1 and L^2 fidelity norms for inpainting and denoising with Gaussian noise of standard deviation σ = 8. Each entry gives PSNR (dB) / SSIM.
Image (σ = 8) | Norm | Mark | Random | Scratch | Watermark
Building | H^1 | 23.5013 / 0.9003 | 20.1948 / 0.8115 | 27.3354 / 0.9430 | 24.4183 / 0.9141
Building | L^2 | 19.0140 / 0.7072 | 17.9458 / 0.6146 | 20.6456 / 0.8922 | 19.2468 / 0.7158
Cat | H^1 | 34.1832 / 0.9287 | 32.0829 / 0.8827 | 34.7117 / 0.9316 | 34.9303 / 0.9386
Cat | L^2 | 33.2248 / 0.9413 | 26.2452 / 0.8865 | 33.7216 / 0.9338 | 31.1999 / 0.9358
Face | H^1 | 34.0763 / 0.9382 | 33.0326 / 0.9189 | 35.1232 / 0.9451 | 34.2459 / 0.9382
Face | L^2 | 33.8527 / 0.9334 | 32.7048 / 0.9077 | 33.3967 / 0.9112 | 33.1220 / 0.9095
Forest | H^1 | 27.5368 / 0.8575 | 25.0017 / 0.7635 | 29.4249 / 0.8916 | 27.8671 / 0.8614
Forest | L^2 | 22.8940 / 0.5629 | 23.1994 / 0.5984 | 23.4636 / 0.6089 | 23.8516 / 0.6505
Fox | H^1 | 33.7629 / 0.9287 | 32.9811 / 0.9125 | 34.3072 / 0.9302 | 33.8023 / 0.9237
Fox | L^2 | 34.1767 / 0.9322 | 33.0466 / 0.9171 | 32.9092 / 0.8976 | 33.6286 / 0.9184
Penguin | H^1 | 33.5357 / 0.9003 | 32.8536 / 0.9074 | 34.6136 / 0.9187 | 34.0214 / 0.9232
Penguin | L^2 | 33.5320 / 0.9058 | 32.7783 / 0.8876 | 33.6433 / 0.8640 | 33.7135 / 0.8840
Rabbit | H^1 | 32.1686 / 0.9775 | 30.3236 / 0.9666 | 32.6918 / 0.9793 | 32.1175 / 0.9771
Rabbit | L^2 | 31.3957 / 0.9751 | 30.1936 / 0.9671 | 32.1703 / 0.9755 | 31.9561 / 0.9766
Table 6. Comparison of the H^1 and L^2 fidelity norms for inpainting and denoising with Gaussian noise of standard deviation σ = 36. Each entry gives PSNR (dB) / SSIM.
Image (σ = 36) | Norm | Mark | Random | Scratch | Watermark
Building | H^1 | 18.7108 / 0.6828 | 17.7757 / 0.6074 | 19.4323 / 0.7166 | 18.9695 / 0.6944
Building | L^2 | 18.2491 / 0.6327 | 17.3849 / 0.5510 | 18.9258 / 0.6748 | 18.4319 / 0.6406
Cat | H^1 | 28.3353 / 0.8140 | 25.9881 / 0.7903 | 28.0589 / 0.7944 | 29.1774 / 0.8496
Cat | L^2 | 25.3165 / 0.6375 | 25.2741 / 0.6480 | 24.2393 / 0.5773 | 26.1413 / 0.6769
Face | H^1 | 26.6930 / 0.7640 | 25.0878 / 0.6956 | 26.6180 / 0.7679 | 25.7667 / 0.7324
Face | L^2 | 24.2351 / 0.6294 | 22.0929 / 0.5284 | 21.4625 / 0.4956 | 21.5980 / 0.5028
Forest | H^1 | 22.5199 / 0.5496 | 22.2628 / 0.5150 | 22.6790 / 0.5607 | 22.5657 / 0.5521
Forest | L^2 | 22.5861 / 0.5406 | 22.3767 / 0.5265 | 22.8957 / 0.5670 | 22.9009 / 0.5713
Fox | H^1 | 27.6064 / 0.8030 | 27.3667 / 0.7853 | 27.0020 / 0.7763 | 26.6973 / 0.7511
Fox | L^2 | 24.7721 / 0.6116 | 25.3076 / 0.6448 | 21.3783 / 0.4194 | 23.2889 / 0.5236
Penguin | H^1 | 27.2103 / 0.7281 | 26.1111 / 0.6589 | 27.9634 / 0.7501 | 27.0132 / 0.6988
Penguin | L^2 | 24.5030 / 0.4801 | 23.5745 / 0.4362 | 21.9900 / 0.3493 | 23.3234 / 0.4132
Rabbit | H^1 | 26.0637 / 0.9118 | 25.2335 / 0.9048 | 25.9572 / 0.9084 | 26.1536 / 0.9168
Rabbit | L^2 | 24.4858 / 0.8689 | 22.7802 / 0.8154 | 20.7357 / 0.7262 | 22.6686 / 0.8075
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
