Fractal and Fractional
  • Article
  • Open Access

26 January 2026

Reduced-Order Legendre–Galerkin Extrapolation Method with Scalar Auxiliary Variable for Time-Fractional Allen–Cahn Equation

1 School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China
2 Inner Mongolia Key Laboratory of Mathematical Modeling and Scientific Computing, Hohhot 010021, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2026, 10(2), 83; https://doi.org/10.3390/fractalfract10020083
This article belongs to the Section Numerical and Computational Methods

Abstract

This paper presents a reduced-order Legendre–Galerkin extrapolation (ROLGE) method combined with the scalar auxiliary variable (SAV) approach (ROLGE-SAV) to numerically solve the time-fractional Allen–Cahn equation (tFAC). First, the nonlinear term is linearized via the SAV method, and the resulting linear system is discretized in time using the shifted fractional trapezoidal rule (SFTR), yielding an unconditionally stable semi-discrete scheme (SFTR-SAV). The scheme is then fully discretized by incorporating Legendre–Galerkin (LG) spatial discretization. To enhance computational efficiency, a proper orthogonal decomposition (POD) basis is constructed from a small set of snapshots of the fully discrete solutions on a short initial time interval. A reduced-order LG extrapolation SFTR-SAV model (ROLGE-SFTR-SAV) is then employed over the subsequent extended time interval, thereby avoiding redundant computations. Theoretical analysis establishes the stability of the reduced-order scheme and provides its error estimates. Numerical experiments validate the effectiveness of the proposed method and the correctness of the theoretical results.

1. Introduction

The Allen–Cahn (AC) equation stands as one of the core phase-field models for describing phenomena such as phase separation and interface evolution, and it finds extensive applications in materials science, fluid mechanics, and biological modeling. It is essentially a gradient flow driven by a double-well potential and nonlinear terms. In the AC equation, replacing the first-order time derivative with a fractional derivative (such as the Caputo or Riemann–Liouville derivative) leverages the memory (non-locality) and heredity properties inherent to fractional derivatives. These properties enable the description of physical processes characterized by anomalous diffusion, memory effects, and viscoelasticity: for example, the more complex phase-separation dynamics and interface roughening in certain polymer materials [1,2], amorphous materials [3], and porous media [4,5], and the viscoelastic interface behaviors in some non-Newtonian fluids [6,7]. In this work, we are interested in the tFAC equation [8]
$$\partial_t^{\alpha} u = \Delta u - f(u), \quad x \in \Omega,\ 0 < t \le T, \qquad u(x, 0) = u_0, \quad x \in \Omega,$$
where $\Omega = [-1, 1]$, and the Caputo derivative $\partial_t^{\alpha} u(t)$ is defined by
$$\partial_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u'(v)}{(t-v)^{\alpha}}\, \mathrm{d}v,$$
with $\Gamma(\cdot)$ denoting the Gamma function. Here, the constant $\varepsilon$ represents the interfacial thickness, $F(u) = \frac{1}{4\varepsilon^2}(u^2 - 1)^2$ is the double-well potential, and $f(u) = F'(u) = \frac{1}{\varepsilon^2}(u^3 - u)$ represents the nonlinear bulk force. It is well known that this problem satisfies the following energy law:
$$\partial_t^{\alpha} u = -\frac{\delta E}{\delta u}, \quad x \in \Omega,\ 0 < t \le T,$$
where
$$E(u) = \int_{\Omega} \Big( \frac12 |\nabla u|^2 + F(u) \Big)\, \mathrm{d}x.$$
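As a quick numerical illustration of the Caputo definition above, the following sketch evaluates the singular integral with a midpoint rule (the midpoints avoid the integrable singularity at $v = t$). The test function $u(t) = t^2$ and the function names are illustrative only; its Caputo derivative has the closed form $2t^{2-\alpha}/\Gamma(3-\alpha)$.

```python
import math

def caputo_midpoint(du, t, alpha, n=200_000):
    """Approximate (1/Gamma(1-alpha)) * int_0^t du(v) (t-v)^(-alpha) dv
    with a midpoint rule; midpoints never touch the singularity at v = t."""
    h = t / n
    acc = 0.0
    for k in range(n):
        v = (k + 0.5) * h
        acc += du(v) * (t - v) ** (-alpha)
    return acc * h / math.gamma(1.0 - alpha)

alpha, t = 0.5, 1.0
# u(t) = t^2, so u'(v) = 2v; exact Caputo derivative: 2 t^(2-alpha) / Gamma(3-alpha)
approx = caputo_midpoint(lambda v: 2.0 * v, t, alpha)
exact = 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)
assert abs(approx - exact) / exact < 1e-2
```

The slow $O(h^{1-\alpha})$ convergence of this naive quadrature near $v = t$ is precisely why dedicated discretizations such as the SFTR are used in practice.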
The tFAC equation has attracted extensive research in both theoretical and numerical fields, and numerous methods have been proposed for its numerical solution. The main concerns in designing numerical methods are computational complexity and stability. Currently, a great deal of research effort has been devoted to the development of stable, low-cost, and efficient time-stepping schemes. Broadly, the existing numerical techniques can be categorized into fully implicit methods [9,10], semi-implicit methods [11,12,13], stabilized linearly implicit methods [14], exponential time discretization methods [15], and SAV methods [16,17,18].
To linearize the nonlinear terms, Jie Shen proposed the SAV approach for gradient flows in [19]. In [8], by introducing some auxiliary constants and variables, the energy functional (3) is reformulated into the following form
$$E(u) = \int_{\Omega} \Big( \frac12 |\nabla u|^2 + \frac{a}{2\varepsilon^2} u^2 + \frac{(u^2 - 1 - a)^2}{4\varepsilon^2} - \frac{a^2 + 2a}{4\varepsilon^2} \Big)\, \mathrm{d}x,$$
where $a > 0$ is a constant. The SAV method is a powerful tool for handling nonlinear terms in gradient flows and ensuring unconditional energy stability, and it has achieved great success for integer-order equations. In recent years, the SAV framework has been systematically extended to the numerical solution of fractional differential equations. Ref. [16] proposes a high-order and efficient scheme for the time-fractional AC equation, based on the L1 discretization of the time-fractional derivative and the SAV method; this scheme is more robust than existing methods, and its efficiency is not restricted by the specific form of the nonlinear potential. In [17], the authors consider a class of time-fractional phase-field models including the AC equation and the Cahn–Hilliard (CH) equation. Based on the exponential scalar auxiliary variable (ESAV) approach, two explicit time-stepping schemes are constructed, where the fractional derivatives are discretized via the L1 and L1$^{+}$ formulas, respectively, and the accuracy and efficiency of the proposed methods are verified through several numerical experiments. For the numerical solution of time-fractional phase-field models, a second-order numerical scheme is proposed in [18]; it employs the fractional backward difference formula to approximate the time-fractional derivative and utilizes the extended SAV method to handle the nonlinear terms. The authors prove that this scheme possesses the energy dissipation property, and the discussion covers the time-fractional AC equation, the time-fractional CH equation, and the time-fractional molecular beam epitaxy model. Similarly, we can linearize Equation (1) using the SAV method. It is straightforward to verify that the reformulated free energy (4) is mathematically equivalent to the original definition (3) for any constant $a$.
The gradient flow corresponding to the free energy (4), along with the equation for the auxiliary function, is expressed as
$$\partial_t^{\alpha} u = \Delta u - \frac{a}{\varepsilon^2} u - Q(u)\, s(t), \quad x \in \Omega,\ 0 < t \le T, \qquad s_t = \frac12\big(Q(u),\, u_t\big), \quad 0 < t \le T,$$
with
$$E_1(u) = \int_{\Omega} \frac{(u^2 - 1 - a)^2}{4\varepsilon^2}\, \mathrm{d}x + 1, \qquad s(t) = \sqrt{E_1(u)}, \qquad Q(u) = \frac{f(u) - \frac{a}{\varepsilon^2} u}{\sqrt{E_1(u)}},$$
and the initial and boundary conditions
$$u(x, 0) = u_0, \quad x \in \Omega, \qquad s(0) = \sqrt{E_1(u_0)}, \qquad u = 0, \quad x \in \partial\Omega.$$
Thus, problem (5) constitutes a reformulation of (1) and is mathematically equivalent to it. We will work with this reformulated system (5) because it facilitates the derivation of an unconditionally energy-stable and computationally efficient scheme.
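The two equivalences invoked here, namely the potential splitting behind (4) and the identity $Q(u)\,s = f(u) - \frac{a}{\varepsilon^2} u$ when $s = \sqrt{E_1(u)}$, can be sanity-checked numerically. A minimal sketch, assuming an illustrative profile $u(x) = 0.3\sin(\pi x)$ and parameter values $\varepsilon = 0.1$, $a = 1$ chosen only for the check:

```python
import math

eps, a = 0.1, 1.0
f = lambda u: (u**3 - u) / eps**2                   # nonlinear bulk force
F = lambda u: (u**2 - 1.0)**2 / (4.0 * eps**2)      # double-well potential

# split potential appearing in the reformulated energy (4)
F_split = lambda u: (a/(2*eps**2))*u**2 + (u**2 - 1 - a)**2/(4*eps**2) \
                    - (a**2 + 2*a)/(4*eps**2)

# pointwise identity F(u) = split form, checked at sample values
for u in [-0.9, -0.3, 0.0, 0.4, 1.2]:
    assert abs(F(u) - F_split(u)) < 1e-10

# E1(u) = int_Omega (u^2-1-a)^2/(4 eps^2) dx + 1 via midpoint quadrature on [-1, 1]
n, h = 20_000, 2.0 / 20_000
prof = lambda x: 0.3 * math.sin(math.pi * x)        # illustrative profile
E1 = sum((prof(-1 + (k + 0.5)*h)**2 - 1 - a)**2 for k in range(n)) * h / (4*eps**2) + 1.0
s = math.sqrt(E1)                                   # auxiliary variable s = sqrt(E1(u))
Q = lambda u: (f(u) - (a/eps**2) * u) / math.sqrt(E1)

# with s = sqrt(E1(u)), Q(u)*s restores f(u) - (a/eps^2) u pointwise
for x in [-0.7, 0.1, 0.5]:
    u = prof(x)
    assert abs(Q(u)*s - (f(u) - (a/eps**2)*u)) < 1e-10
```

At the continuous level the second identity is exact regardless of quadrature accuracy, since the same $E_1$ enters both $s$ and $Q$; it is only in the discrete scheme, where $S^n$ evolves by its own equation, that the two differ.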
Although the SAV method successfully converts the nonlinear terms into a linear system, the discretization of the tFAC equation remains an independent and critical challenge whose difficulty is not eliminated by the SAV method. To discretize the resulting linearized system (5), we adopt the fractional trapezoidal rule with a special shift (SFTR$_{1/2}$, i.e., the SFTR with shift parameter $\frac12$) [20,21,22] in the temporal direction to handle the fractional derivative, thereby obtaining the SFTR-SAV formulation. This SFTR-SAV discretization is not only unconditionally stable but, more importantly, its mathematical structure facilitates the subsequent rigorous error analysis. In particular, SFTR$_{1/2}$ can effectively deal with the nonlocal historical dependence caused by the time-fractional derivative, while ensuring that the discrete scheme preserves the energy decay law and maximum principle of the original problem during long-time integration. In [20], the authors propose a second-order accurate formula for the SFTR, reveal its unique characteristics that most other second-order formulas do not possess, and investigate high-order methods on uniform grids that simultaneously preserve the two intrinsic properties of the tFAC equation, namely energy decay and the maximum principle. Spatially, the tFAC equation can become highly stiff due to the small interfacial parameter $\varepsilon$ and the nonlinearity, leading to extremely high computational costs, particularly in high dimensions. To overcome this challenge and significantly enhance computational efficiency, the reduced-order method (ROM) can be adopted. Its goal is to reduce the total number of unknowns in the temporal-spatial discretization of the tFAC equation while ensuring theoretical accuracy. The motivation for this study stems from the success of ROMs as powerful tools for numerical simulation in modern engineering [23,24,25,26,27,28].
It is particularly worth noting that the classical POD-based ROM entails redundant computations, as it constructs a POD basis from classical approximate solutions across the entire time interval $[0, T]$ and subsequently performs iterative calculations to obtain reduced-order approximate solutions over that very same interval $[0, T]$. To reduce redundant computations and save storage space, thereby further improving computational efficiency, Luo and colleagues proposed the reduced-order extrapolation (ROE) method based on POD.
Since 2013, Luo and colleagues have proposed a series of POD-based ROE methods, including the ROE finite element method, the ROE finite difference method, the ROE finite volume element method, and the ROE natural boundary element method. These methods effectively reduce the computational load by eliminating redundancy while preserving the properties of the original solution space, and they have been successfully applied to various partial differential equations such as parabolic equations [29], Stokes equations [30], heat equations [31], viscoelastic wave equations [32,33], hyperbolic equations [34,35], Sobolev equations [36], Boussinesq equations [37], and Navier–Stokes equations [38]. The core idea is to construct a POD basis using the classical solutions from a short initial time interval $[0, T_0]$ (with $T_0 \ll T$), and then compute the ROE solution on the remaining interval $[T_0, T]$, thereby eliminating redundant computations.
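The snapshot-to-basis step shared by all of these ROE variants can be sketched with a truncated SVD. The snapshot matrix below is synthetic (a tanh profile mimicking a phase-field interface) and purely illustrative:

```python
import numpy as np

# synthetic snapshots: columns are solution vectors at time steps in [0, T0]
x = np.linspace(-1.0, 1.0, 200)
t = np.linspace(0.0, 0.2, 30)
A = np.array([np.tanh(np.sin(np.pi * x) / (0.2 + tk)) for tk in t]).T  # (200, 30)

# POD basis = leading left singular vectors of the snapshot matrix
U, sig, _ = np.linalg.svd(A, full_matrices=False)
d = 5                                   # retained POD rank
Phi = U[:, :d]                          # orthonormal POD basis, shape (200, d)

# orthonormality of the basis
assert np.allclose(Phi.T @ Phi, np.eye(d), atol=1e-10)

# classical POD identity: Frobenius projection error equals the discarded tail
proj_err = np.linalg.norm(A - Phi @ (Phi.T @ A))
tail = np.sqrt(np.sum(sig[d:] ** 2))
assert abs(proj_err - tail) < 1e-8
```

A reduced solver then evolves only the $d$ coefficients $\Phi^{\mathsf T} u$ on $[T_0, T]$ instead of the full 200 unknowns; the discarded singular values quantify the price of the truncation, mirroring the role of $d$ in the error estimates below.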
The numerical solution of the tFAC equation poses multiple challenges: the inherent nonlocality of fractional derivatives leads to high storage and computational complexity; the nonlinear double-well potential term induces strong nonlinear coupling; the initial weak singularity of the solution impairs numerical accuracy; and the phase separation interface requires high spatial resolution. All these factors result in extremely high computational costs for simulating the full-order model. To the best of our knowledge, no prior study has investigated the application of the POD-based ROE method to the tFAC equation.
To address the aforementioned challenges, this work makes the following key contributions:
  • We propose, for the first time, a ROLGE-SAV method for the tFAC equation. The framework integrates:
    The SAV method for linearizing the nonlinear term and ensuring unconditional energy stability.
    An LG spectral method for high-resolution spatial discretization.
  • We develop a computationally efficient POD-based reduced-order algorithm that fundamentally avoids the redundancy common in conventional approaches. Instead of collecting snapshots over the entire time domain $[0, T]$, our algorithm constructs the POD basis using solutions from only a short initial interval $[0, T_0]$ (with $T_0 \ll T$). The ROLGE-SFTR-SAV is then employed to extrapolate the solution efficiently over the extended interval $[T_0, T]$, leading to significant savings in both computation time and storage.
  • We establish a complete theoretical foundation, including:
    Proofs of unconditional energy stability for both the SFTR-SAV and LG-SFTR-SAV schemes.
    Detailed error estimates for the SFTR-SAV, LG-SFTR-SAV, and ROLGE-SFTR-SAV models. These estimates provide practical guidance for selecting critical parameters such as the POD basis rank $d$ and the snapshot interval length $T_0$.
  • Extensive experiments are conducted to validate the theory and demonstrate efficiency.
    The scheme achieves second-order temporal convergence and spectral spatial convergence.
    The POD eigenvalues exhibit rapid exponential decay, justifying the low-dimensional approximation.
    The ROLGE-SFTR-SAV scheme achieves a significant reduction in CPU time compared to the full-order model, while maintaining comparable numerical accuracy.
The remainder of this paper is organized as follows. In Section 2, we present the SFTR-SAV scheme for temporal discretization. For the reformulated tFAC equation, we also derive the property that the energy of the SAV scheme is monotonically non-increasing over time. In Section 3, we introduce the full discretization of the tFAC equation and establish the error estimates for the LG-SFTR-SAV scheme. In Section 4, we construct the reduced-order model, derive its stability properties, and perform the error analysis of the ROLGE-SFTR-SAV model. We present several numerical examples to validate the theoretical findings in Section 5. Finally, we summarize the key conclusions in Section 6.
We adopt the following notation throughout this paper. The standard Sobolev spaces and their norms are denoted by $H^m(\Omega)$ and $\|\cdot\|_m$ ($m = 0, \pm 1, \ldots$), respectively [39]. Specifically, for the space $L^2(\Omega) = H^0(\Omega)$, we use $\|\cdot\|$ for its norm and $(\cdot, \cdot)$ for its inner product. Additionally, the norm of the space $L^\infty(\Omega)$ is denoted by $\|\cdot\|_\infty$.
In the temporal direction, we will adopt the SFTR scheme to discretize Equation (5), resulting in the SFTR-SAV scheme. On this basis, we will further provide a rigorous proof for the energy non-increasing property and error estimation of this scheme.

2. SFTR-SAV Scheme and Error Estimate

2.1. Kernel Properties of the SFTR Formula

Consider a uniform temporal grid on $[0, T]$ with mesh size $\tau = T/M$, defined by nodal points $t_n = n\tau$ for $n = 0, \ldots, M$. Let $u^n = u(x, t_n)$ denote the solution at time $t_n$. The SFTR$_{1/2}$ approximation to the Caputo fractional derivative $\partial_t^{\alpha} u$ at the intermediate time $t_{n-\frac12}$ takes the form [20,21,22]
$$\partial_\tau^{\alpha,n-\frac12} u = \tau^{-\alpha} \sum_{m=0}^{n} \varpi_m \big(u^{n-m} - u^0\big),$$
where the coefficients $\varpi_m$ are defined via the generating function
$$\varpi(\zeta) = \sum_{m=0}^{\infty} \varpi_m \zeta^m = \left( \frac{1-\zeta}{\frac12(1+\zeta) + \frac{1}{2\alpha}(1-\zeta)} \right)^{\alpha},$$
or can be computed directly using the recurrence formula
$$\varpi_0 = \Big(\frac{2\alpha}{\alpha+1}\Big)^{\alpha}, \qquad \varpi_1 = -\alpha \Big(\frac{2\alpha}{\alpha+1}\Big)^{\alpha+1},$$
$$\varpi_m = \frac{2\alpha}{m(\alpha+1)} \left[ \Big(\frac{m-1}{\alpha} - \alpha\Big) \varpi_{m-1} + \frac{\alpha-1}{2\alpha}\,(m-2)\, \varpi_{m-2} \right], \quad m \ge 2.$$
Lemma 1
([20]). For the weights $\varpi_m$ defined in (7), the following properties hold:
(i) 
$\varpi_0 > 0$ and $\varpi_m < 0$ for all $m \ge 1$.
(ii) 
$\varpi_m > \varpi_{m-1}$ for all $m \ge 2$.
(iii) 
$\sum_{m=0}^{n} \varpi_m > 0$ for any $n \ge 0$.
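A hedged sketch of the weight computation, assuming the recurrence as reconstructed above: it cross-checks the recurrence against a direct Taylor expansion of the generating function (7) and verifies the sign properties of Lemma 1 numerically.

```python
def sftr_weights(alpha, n):
    """SFTR_{1/2} weights via the three-term recurrence."""
    w = [(2*alpha/(alpha+1))**alpha,
         -alpha*(2*alpha/(alpha+1))**(alpha+1)]
    for m in range(2, n+1):
        w.append(2*alpha/(m*(alpha+1)) * (((m-1)/alpha - alpha)*w[m-1]
                 + (alpha-1)/(2*alpha)*(m-2)*w[m-2]))
    return w

def sftr_weights_series(alpha, n):
    """Same weights from the generating function
    ((1-z) / (0.5(1+z) + (1-z)/(2 alpha)))^alpha, expanded as a Taylor series
    using (1-z)^alpha * a0^(-alpha) * (1 + r z)^(-alpha) with a binomial recurrence."""
    a0 = (alpha+1)/(2*alpha); a1 = (alpha-1)/(2*alpha); r = a1/a0
    p, q = [1.0], [1.0]          # coefficients of (1-z)^alpha and (1+r z)^(-alpha)
    for m in range(1, n+1):
        p.append(-p[m-1]*(alpha-m+1)/m)
        q.append(q[m-1]*(-alpha-m+1)/m*r)
    return [a0**(-alpha)*sum(p[j]*q[m-j] for j in range(m+1)) for m in range(n+1)]

alpha, n = 0.6, 50
w = sftr_weights(alpha, n)
w_ref = sftr_weights_series(alpha, n)
assert all(abs(x - y) < 1e-10 for x, y in zip(w, w_ref))

# Lemma 1: w0 > 0, w_m < 0 for m >= 1, w_m increasing for m >= 2, positive partial sums
assert w[0] > 0 and all(wm < 0 for wm in w[1:])
assert all(w[m] > w[m-1] for m in range(2, n+1))
assert all(sum(w[:k+1]) > 0 for k in range(n+1))
```

The agreement between the two independent constructions (recurrence vs. series expansion) gives confidence that the recurrence has been transcribed correctly.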
Discretizing the continuous-time system (5) with the SFTR$_{1/2}$ scheme (7) yields the following time-discrete system.
Problem 1.
For $n \ge 1$, find $U^n \in H_0^1(\Omega)$ satisfying the following equations
$$\big(\partial_\tau^{\alpha,n-\frac12} U,\, \upsilon\big) = \big(\Delta U^{n-\frac12} - \tfrac{a}{\varepsilon^2} U^{n-\frac12} - Q(\bar{U}^{n-1}) S^{n-\frac12},\, \upsilon\big),$$
$$\beta_\tau S^n = \tfrac12\big(Q(\bar{U}^{n-1}),\, \beta_\tau U^n\big),$$
$$U^n = 0, \quad x \in \partial\Omega,$$
where $U^{n-\frac12} = \frac{U^n + U^{n-1}}{2}$, $\bar{U}^{n-1} = \frac{3U^{n-1} - U^{n-2}}{2}$, and $\beta_\tau U^n = \frac{U^n - U^{n-1}}{\tau}$.
The following sequence plays a fundamental role in establishing the energy decay property and the subsequent error estimates. In [20], the sequence $\{\theta_m\}_{m=0}^{\infty}$ is defined by
$$\sum_{m=0}^{\infty} \theta_m \zeta^m = \frac{1-\zeta}{\varpi(\zeta)} =: \theta(\zeta), \quad \text{for } |\zeta| < 1.$$
Lemma 2
([20]). The sequence $\{\theta_m\}_{m=0}^{\infty}$, defined in (9), can be computed via the following recurrence relation
$$\theta_m = \frac{2\alpha}{m(1+\alpha)} \left[ \Big( \frac{m-1}{\alpha} - \frac{1-\alpha}{2\alpha}(2\alpha+1) \Big) \theta_{m-1} + \frac{1-\alpha}{2\alpha}\,(3-m)\, \theta_{m-2} \right], \quad m \ge 2,$$
initialized by
$$\theta_0 = \Big(\frac{\alpha+1}{2\alpha}\Big)^{\alpha}, \qquad \theta_1 = \frac{(\alpha-1)(2\alpha+1)}{\alpha+1}\, \theta_0.$$
Lemma 3
([20]). For the weights $\theta_m$ defined in (9), the following properties hold:
(i) 
$\theta_0 > 0$ and $\theta_m < 0$ for all $m \ge 1$,
(ii) 
$\sum_{m=0}^{n} \theta_m > 0$ for any $n \ge 0$.
In [20], another sequence $\{\kappa_m\}_{m=0}^{\infty}$ is defined as
$$\sum_{m=0}^{\infty} \kappa_m \zeta^m = \frac{\theta(\zeta)}{1-\zeta} = \frac{1}{\varpi(\zeta)} =: \kappa(\zeta), \quad \text{for } |\zeta| < 1,$$
which satisfies
$$\kappa_m = \sum_{i=0}^{m} \theta_i, \qquad \kappa_0 > \kappa_1 > \kappa_2 > \cdots > \kappa_n > \cdots > 0.$$
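The structural facts used repeatedly below, namely $\varpi(\zeta)\theta(\zeta) = 1 - \zeta$ (so the convolution coefficients are $1, -1, 0, 0, \ldots$) and the positivity and monotonicity of $\kappa_m$, can be checked numerically. A sketch, assuming the two recurrences as reconstructed above:

```python
def sftr_weights(alpha, n):
    w = [(2*alpha/(alpha+1))**alpha, -alpha*(2*alpha/(alpha+1))**(alpha+1)]
    for m in range(2, n+1):
        w.append(2*alpha/(m*(alpha+1)) * (((m-1)/alpha - alpha)*w[m-1]
                 + (alpha-1)/(2*alpha)*(m-2)*w[m-2]))
    return w

def theta_weights(alpha, n):
    """Coefficients of theta(z) = (1 - z)/omega(z) via the recurrence of Lemma 2."""
    t0 = ((alpha+1)/(2*alpha))**alpha
    th = [t0, (alpha-1)*(2*alpha+1)/(alpha+1)*t0]
    for m in range(2, n+1):
        th.append(2*alpha/(m*(1+alpha)) * (((m-1)/alpha
                  - (1-alpha)/(2*alpha)*(2*alpha+1))*th[m-1]
                  + (1-alpha)/(2*alpha)*(3-m)*th[m-2]))
    return th

alpha, n = 0.4, 40
w, th = sftr_weights(alpha, n), theta_weights(alpha, n)

# omega(z)*theta(z) = 1 - z, so the convolution C_m must be 1, -1, 0, 0, ...
C = [sum(w[j]*th[m-j] for j in range(m+1)) for m in range(n+1)]
assert abs(C[0] - 1) < 1e-10 and abs(C[1] + 1) < 1e-10
assert all(abs(c) < 1e-10 for c in C[2:])

# kappa_m = theta_0 + ... + theta_m is positive and strictly decreasing
kappa = [sum(th[:m+1]) for m in range(n+1)]
assert all(k > 0 for k in kappa)
assert all(kappa[m] < kappa[m-1] for m in range(1, n+1))
```

The convolution identity is exactly what collapses the double sums in the proofs of Lemma 4 and Theorem 1 to the simple difference $(U^n - U^{n-1})/\tau$.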

2.2. Discrete Energy Dissipation Law of the SFTR-SAV Scheme

Define $W^{n-\frac12} = \Delta U^{n-\frac12} - \frac{a}{\varepsilon^2} U^{n-\frac12} - Q(\bar{U}^{n-1}) S^{n-\frac12}$.
Lemma 4.
The SFTR-SAV scheme (8) is unconditionally stable and satisfies the following discrete energy dissipation law
$$E(U^n) - E(U^{n-1}) \le 0,$$
where
$$E(U^n) = \frac12 \|\nabla U^n\|^2 + \frac{a}{2\varepsilon^2}\|U^n\|^2 + |S^n|^2 + \frac{\tau^{\alpha}}{2} \sum_{j=1}^{n} \kappa_{n-j} \big\|W^{j-\frac12}\big\|^2.$$
Proof. 
Rewrite Equation (8a) as
$$\Big( \tau^{-\alpha} \sum_{m=0}^{n} \varpi_m \big(U^{n-m} - U^0\big),\, \upsilon \Big) = \big(W^{n-\frac12},\, \upsilon\big).$$
Multiplying Equation (14) at step $j$ by $\tau^{\alpha-1}\theta_{n-j}$, summing over $j = 1, \ldots, n$, and then applying an index transformation yields
$$\tau^{-1} \sum_{j=1}^{n} \theta_{n-j} \sum_{m=0}^{j} \varpi_m \big(U^{j-m} - U^0\big) = \tau^{\alpha-1} \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12},$$
$$\begin{aligned} \tau^{-1} \sum_{j=1}^{n} \theta_{n-j} \sum_{m=0}^{j} \varpi_m \big(U^{j-m} - U^0\big) &= \tau^{-1} \sum_{j=0}^{n} \theta_{n-j} \sum_{m=0}^{j} \varpi_{j-m} \big(U^{m} - U^0\big) = \tau^{-1} \sum_{m=0}^{n} \big(U^{m} - U^0\big) \sum_{j=m}^{n} \varpi_{j-m}\, \theta_{n-j} \\ &= \tau^{-1} \sum_{m=0}^{n} \big(U^{m} - U^0\big) \sum_{j=0}^{n-m} \varpi_j\, \theta_{n-j-m} = \tau^{-1} \sum_{m=0}^{n} \big(U^{n-m} - U^0\big)\, C_m, \end{aligned}$$
where $C_m = \sum_{j=0}^{m} \varpi_j\, \theta_{m-j}$, which corresponds to the $m$-th coefficient in the series expansion of the product $\varpi(\zeta)\theta(\zeta) = 1 - \zeta$. From Equation (9), the convolution coefficients satisfy
$$C_m = \begin{cases} 1, & m = 0, \\ -1, & m = 1, \\ 0, & m \ge 2. \end{cases}$$
We then have
$$\Big( \frac{U^n - U^{n-1}}{\tau},\, \upsilon \Big) = \Big( \tau^{\alpha-1} \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12},\, \upsilon \Big).$$
Taking $\upsilon = W^{n-\frac12}$ in (17), we obtain
$$\tau^{-\alpha} \Big[ \big(\nabla(U^n - U^{n-1}),\, \nabla U^{n-\frac12}\big) + \frac{a}{\varepsilon^2}\big(U^n - U^{n-1},\, U^{n-\frac12}\big) + \big(Q(\bar{U}^{n-1}) S^{n-\frac12},\, U^n - U^{n-1}\big) \Big] = -\Big\langle \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12},\, W^{n-\frac12} \Big\rangle.$$
Multiplying both sides of (8b) by $2\tau S^{n-\frac12}$ yields
$$|S^n|^2 - |S^{n-1}|^2 = \big(Q(\bar{U}^{n-1}) S^{n-\frac12},\, U^n - U^{n-1}\big).$$
Combining (18) and (19) gives
$$\tau^{-\alpha} \Big[ \frac12\big(\|\nabla U^n\|^2 - \|\nabla U^{n-1}\|^2\big) + \frac{a}{2\varepsilon^2}\big(\|U^n\|^2 - \|U^{n-1}\|^2\big) + |S^n|^2 - |S^{n-1}|^2 \Big] = -\Big\langle \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12},\, W^{n-\frac12} \Big\rangle.$$
From the relation $\theta_j = \kappa_j - \kappa_{j-1}$ established in (12), where $\theta_j < 0$ for $j \ge 1$ and $\theta_0 = \kappa_0 > 0$, it follows that
$$\Big\langle \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12},\, W^{n-\frac12} \Big\rangle \ge \frac12 \kappa_{n-1} \big\|W^{n-\frac12}\big\|^2 + \frac12 \sum_{j=1}^{n} \kappa_{n-j} \big\|W^{j-\frac12}\big\|^2 - \frac12 \sum_{j=1}^{n-1} \kappa_{n-j-1} \big\|W^{j-\frac12}\big\|^2.$$
Hence,
$$\frac12\big(\|\nabla U^n\|^2 - \|\nabla U^{n-1}\|^2\big) + \frac{a}{2\varepsilon^2}\big(\|U^n\|^2 - \|U^{n-1}\|^2\big) + |S^n|^2 - |S^{n-1}|^2 + \frac{\tau^{\alpha}}{2} \sum_{j=1}^{n} \kappa_{n-j} \big\|W^{j-\frac12}\big\|^2 - \frac{\tau^{\alpha}}{2} \sum_{j=1}^{n-1} \kappa_{n-j-1} \big\|W^{j-\frac12}\big\|^2 \le 0,$$
which is exactly $E(U^n) - E(U^{n-1}) \le 0$. □
Lemma 5.
For the solution $U^n$ of the discrete problem (8), there exists a positive constant $C$ such that
$$\|\beta_\tau U^n\| \le C, \quad 1 \le n \le M.$$
Proof. 
By (17), $\beta_\tau U^n = \tau^{\alpha-1} \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12}$, and convolution quadrature theory [40] guarantees that this expression is a second-order approximation at $t_{n-\frac12}$ under mild conditions. Let
$$\gamma^{n-\frac12} = \tau^{\alpha-1} \sum_{j=1}^{n} \theta_{n-j} W^{j-\frac12} - W^{n-\frac12}.$$
Then Equation (17) can be rewritten in the following form
$$\big(\beta_\tau U^n,\, \upsilon\big) + \big(\nabla U^{n-\frac12},\, \nabla\upsilon\big) + \frac{a}{\varepsilon^2}\big(U^{n-\frac12},\, \upsilon\big) = -\big(Q(\bar{U}^{n-1}) S^{n-\frac12},\, \upsilon\big) + \big(\gamma^{n-\frac12},\, \upsilon\big).$$
Taking the difference of (23) at steps $n$ and $n-1$, we have
$$\big(\beta_\tau U^n - \beta_\tau U^{n-1},\, \upsilon\big) + \big(\nabla(U^{n-\frac12} - U^{n-\frac32}),\, \nabla\upsilon\big) + \frac{a}{\varepsilon^2}\big(U^{n-\frac12} - U^{n-\frac32},\, \upsilon\big) = -\big(Q(\bar{U}^{n-1}) S^{n-\frac12},\, \upsilon\big) + \big(Q(\bar{U}^{n-2}) S^{n-\frac32},\, \upsilon\big) + \big(\gamma^{n-\frac12} - \gamma^{n-\frac32},\, \upsilon\big).$$
Similarly, it follows from (8b) that
$$S^{n-\frac12} - S^{n-\frac32} = \frac{\tau}{4}\Big[ \big(Q(\bar{U}^{n-1}),\, \beta_\tau U^n\big) + \big(Q(\bar{U}^{n-2}),\, \beta_\tau U^{n-1}\big) \Big].$$
Taking $\upsilon = \beta_\tau U^n + \beta_\tau U^{n-1}$ in (24) and using (25), we obtain
$$\begin{aligned} &\|\beta_\tau U^n\|^2 - \|\beta_\tau U^{n-1}\|^2 + \frac{\tau}{2}\big\|\nabla(\beta_\tau U^n + \beta_\tau U^{n-1})\big\|^2 + \frac{a\tau}{2\varepsilon^2}\big\|\beta_\tau U^n + \beta_\tau U^{n-1}\big\|^2 \\ &\quad = -\big(S^{n-\frac12} - S^{n-\frac32}\big)\big(Q(\bar{U}^{n-1}),\, \beta_\tau U^n + \beta_\tau U^{n-1}\big) - S^{n-\frac32}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}^{n-2}),\, \beta_\tau U^n + \beta_\tau U^{n-1}\big) + \big(\gamma^{n-\frac12} - \gamma^{n-\frac32},\, \beta_\tau U^n + \beta_\tau U^{n-1}\big) \\ &\quad = -\frac{\tau}{4}\Big[\big(Q(\bar{U}^{n-1}),\, \beta_\tau U^n\big) + \big(Q(\bar{U}^{n-2}),\, \beta_\tau U^{n-1}\big)\Big]\big(Q(\bar{U}^{n-1}),\, \beta_\tau U^n + \beta_\tau U^{n-1}\big) - S^{n-\frac32}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}^{n-2}),\, \beta_\tau U^n + \beta_\tau U^{n-1}\big) \\ &\qquad + \big(\gamma^{n-\frac12},\, \beta_\tau U^n + \beta_\tau U^{n-1}\big) - \big(\gamma^{n-\frac32},\, \beta_\tau U^n + \beta_\tau U^{n-1}\big). \end{aligned}$$
Omitting the positive third and fourth terms on the left-hand side, we obtain
$$\|\beta_\tau U^n\|^2 - \|\beta_\tau U^{n-1}\|^2 \le C\tau \|\beta_\tau U^n\|^2 + C\tau \|\beta_\tau U^{n-1}\|^2.$$
Summing (27) for $n = 2, \ldots, M$ and applying the discrete Grönwall inequality yields the desired bound. □

2.3. Temporal Error Estimate

In this section, we define the error terms as $e^n = U^n - u^n$ and $e_s^n = S^n - s^n$.
Rewrite Equation (5) in the following form
$$\partial_\tau^{\alpha,n-\frac12} u = \Delta u^{n-\frac12} - \frac{a}{\varepsilon^2} u^{n-\frac12} - Q(\bar{u}^{n-1}) s^{n-\frac12} - P_1 + P_2 + P_3 + P_4,$$
$$\beta_\tau s^n = \frac12\big(Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big) - K^n + \frac12\big(Q(\bar{u}^{n-1}),\, \sigma^n\big) + \frac12\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \sigma^n\big) + \frac12\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big),$$
where $P_1 = \partial_t^{\alpha} u(t_{n-\frac12}) - \partial_\tau^{\alpha,n-\frac12} u$, $K^n = s_t(t_{n-\frac12}) - \beta_\tau s^n$, $\sigma^n = u_t(t_{n-\frac12}) - \beta_\tau u^n$, $\bar{u}^{n-1} = \frac{3u^{n-1} - u^{n-2}}{2}$, $P_2 = \Delta\big[u(t_{n-\frac12}) - u^{n-\frac12}\big]$, $P_3 = \frac{a}{\varepsilon^2}\big(u^{n-\frac12} - u(t_{n-\frac12})\big)$, $P_4 = -Q\big(u(t_{n-\frac12})\big) s(t_{n-\frac12}) + Q(\bar{u}^{n-1}) s^{n-\frac12}$, and $u^{n-\frac12} = \frac{u^n + u^{n-1}}{2}$.
Taking the inner product of Equation (28) with $\upsilon$, we obtain
$$\big(\partial_\tau^{\alpha,n-\frac12} u,\, \upsilon\big) = \big(\Delta u^{n-\frac12} - \tfrac{a}{\varepsilon^2} u^{n-\frac12} - Q(\bar{u}^{n-1}) s^{n-\frac12} - P_1,\, \upsilon\big) + \big(P_2 + P_3 + P_4,\, \upsilon\big),$$
$$\beta_\tau s^n = \frac12\big(Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big) - K^n + \frac12\big(Q(\bar{u}^{n-1}),\, \sigma^n\big) + \frac12\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \sigma^n\big) + \frac12\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big).$$
Subtracting (29) from (8), we have
$$\big(\partial_\tau^{\alpha,n-\frac12} e,\, \upsilon\big) = \big(\Delta e^{n-\frac12} - \tfrac{a}{\varepsilon^2} e^{n-\frac12} - Q(\bar{U}^{n-1}) S^{n-\frac12} + Q(\bar{u}^{n-1}) s^{n-\frac12} + P_1 - P_2 - P_3 - P_4,\, \upsilon\big),$$
$$\beta_\tau e_s^n = \frac12\big(Q(\bar{U}^{n-1}) - Q(\bar{u}^{n-1}),\, \beta_\tau U^n\big) + \frac12\big(Q(\bar{u}^{n-1}),\, \beta_\tau e^n\big) + K^n - \frac12\big(Q(\bar{u}^{n-1}),\, \sigma^n\big) - \frac12\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \sigma^n\big) - \frac12\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big).$$
Theorem 1.
For the solutions $U^n$ of (8) and $u(t_n)$ of (5), the following error estimate is valid:
$$\frac12 \|\nabla e^n\|^2 + \frac{a}{2\varepsilon^2}\|e^n\|^2 + |e_s^n|^2 \le C\tau^4.$$
Proof. 
Following [8], we note that, as with other nonlinear equations, the boundedness of the numerical solution is vital for deriving the error estimate. Specifically, the boundedness of $\|U^n\|_{L^\infty}$ is a straightforward consequence of the stability inequality (13) and the equivalence of norms in finite-dimensional spaces. Therefore, we denote
$$b := \max_{1 \le n \le M}\big\{ \|U^n\|_{L^\infty},\, \|u^n\|_{L^\infty} \big\}.$$
Define $V^{n-\frac12} = \Delta e^{n-\frac12} - \frac{a}{\varepsilon^2} e^{n-\frac12} - Q(\bar{U}^{n-1}) S^{n-\frac12} + Q(\bar{u}^{n-1}) s^{n-\frac12} + P_1 - P_2 - P_3 - P_4$. Then, Equation (30a) can be written as
$$\Big( \tau^{-\alpha} \sum_{m=0}^{n} \varpi_m \big(e^{n-m} - e^0\big),\, \upsilon \Big) = \big(V^{n-\frac12},\, \upsilon\big).$$
Multiplying Equation (33) at step $j$ by $\tau^{\alpha-1}\theta_{n-j}$, summing over $j = 1, \ldots, n$, and applying an index substitution yields
$$\tau^{-1} \sum_{j=1}^{n} \theta_{n-j} \sum_{m=0}^{j} \varpi_m \big(e^{j-m} - e^0\big) = \tau^{\alpha-1} \sum_{j=1}^{n} \theta_{n-j} V^{j-\frac12}.$$
Upon rearranging the terms on the left-hand side of (34), we have
$$\begin{aligned} \tau^{-1} \sum_{j=1}^{n} \theta_{n-j} \sum_{m=0}^{j} \varpi_m \big(e^{j-m} - e^0\big) &= \tau^{-1} \sum_{j=0}^{n} \theta_{n-j} \sum_{m=0}^{j} \varpi_{j-m} \big(e^{m} - e^0\big) = \tau^{-1} \sum_{m=0}^{n} \big(e^{m} - e^0\big) \sum_{j=m}^{n} \varpi_{j-m}\, \theta_{n-j} \\ &= \tau^{-1} \sum_{m=0}^{n} \big(e^{m} - e^0\big) \sum_{j=0}^{n-m} \varpi_j\, \theta_{n-j-m} = \tau^{-1} \sum_{m=0}^{n} \big(e^{n-m} - e^0\big)\, C_m. \end{aligned}$$
Since $C_0 = 1$, $C_1 = -1$, and $C_k = 0$ for $k > 1$, we get
$$\Big( \frac{e^n - e^{n-1}}{\tau},\, \upsilon \Big) = \Big( \tau^{\alpha-1} \sum_{j=1}^{n} \theta_{n-j} V^{j-\frac12},\, \upsilon \Big).$$
Setting $\upsilon = V^{n-\frac12}$ in (36), we obtain
$$\big(e^n - e^{n-1},\, \Delta e^{n-\frac12} - \tfrac{a}{\varepsilon^2} e^{n-\frac12} - Q(\bar{U}^{n-1}) S^{n-\frac12} + Q(\bar{u}^{n-1}) s^{n-\frac12} + P_1 - P_2 - P_3 - P_4 \big) = \tau^{\alpha} \Big( V^{n-\frac12},\, \sum_{j=1}^{n} \theta_{n-j} V^{j-\frac12} \Big).$$
Similarly to Equation (21), we deduce that
$$\Big\langle \sum_{j=1}^{n} \theta_{n-j} V^{j-\frac12},\, V^{n-\frac12} \Big\rangle \ge \frac12 \kappa_{n-1} \big\|V^{n-\frac12}\big\|^2 + \frac12 \sum_{j=1}^{n} \kappa_{n-j} \big\|V^{j-\frac12}\big\|^2 - \frac12 \sum_{j=1}^{n-1} \kappa_{n-j-1} \big\|V^{j-\frac12}\big\|^2.$$
Substituting Equation (38) into Equation (37) yields
$$\begin{aligned} &\frac12\big(\|\nabla e^n\|^2 - \|\nabla e^{n-1}\|^2\big) + \frac{a}{2\varepsilon^2}\big(\|e^n\|^2 - \|e^{n-1}\|^2\big) + \frac{\tau^{\alpha}}{2}\kappa_{n-1}\big\|V^{n-\frac12}\big\|^2 + \frac{\tau^{\alpha}}{2}\Big( \sum_{j=1}^{n} \kappa_{n-j} \big\|V^{j-\frac12}\big\|^2 - \sum_{j=1}^{n-1} \kappa_{n-j-1} \big\|V^{j-\frac12}\big\|^2 \Big) \\ &\quad \le -\tau\big(\beta_\tau e^n,\, Q(\bar{u}^{n-1})\, e_s^{n-\frac12} - S^{n-\frac12}\big(Q(\bar{u}^{n-1}) - Q(\bar{U}^{n-1})\big)\big) + \tau\big(\beta_\tau e^n,\, P_1\big) - \tau\big(P_2 + P_3 + P_4,\, \beta_\tau e^n\big). \end{aligned}$$
Multiplying both sides of (30b) by $2\tau e_s^{n-\frac12}$, we get
$$\begin{aligned} |e_s^n|^2 - |e_s^{n-1}|^2 &= \tau e_s^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{u}^{n-1}),\, \beta_\tau U^n\big) + \tau e_s^{n-\frac12}\big(Q(\bar{u}^{n-1}),\, \beta_\tau e^n\big) + 2\tau K^n e_s^{n-\frac12} \\ &\quad - \tau e_s^{n-\frac12}\big(Q(\bar{u}^{n-1}),\, \sigma^n\big) - \tau e_s^{n-\frac12}\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \sigma^n\big) - \tau e_s^{n-\frac12}\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big). \end{aligned}$$
Substituting (40) into (39), we get
$$\begin{aligned} &\frac12\big(\|\nabla e^n\|^2 - \|\nabla e^{n-1}\|^2\big) + \frac{a}{2\varepsilon^2}\big(\|e^n\|^2 - \|e^{n-1}\|^2\big) + \big(|e_s^n|^2 - |e_s^{n-1}|^2\big) + \frac{\tau^{\alpha}}{2}\kappa_{n-1}\big\|V^{n-\frac12}\big\|^2 + \frac{\tau^{\alpha}}{2}\Big( \sum_{j=1}^{n} \kappa_{n-j} \big\|V^{j-\frac12}\big\|^2 - \sum_{j=1}^{n-1} \kappa_{n-j-1} \big\|V^{j-\frac12}\big\|^2 \Big) \\ &\quad \le \tau S^{n-\frac12}\big(\omega,\, \beta_\tau u^n\big) - \tau\big(\beta_\tau e^n,\, P_4\big) + \tau s^{n-\frac12}\big(\omega,\, \beta_\tau U^n\big) + 2\tau K^n e_s^{n-\frac12} - \tau e_s^{n-\frac12}\big(Q(\bar{u}^{n-1}),\, \sigma^n\big) + \tau\big(P_1 - P_2 - P_3,\, \beta_\tau e^n\big) \\ &\qquad - \tau e_s^{n-\frac12}\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \sigma^n\big) - \tau e_s^{n-\frac12}\big(Q(u(t_{n-\frac12})) - Q(\bar{u}^{n-1}),\, \beta_\tau u^n\big) =: \sum_{i=1}^{8} X_i, \end{aligned}$$
where $\omega = Q(\bar{u}^{n-1}) - Q(\bar{U}^{n-1})$.
According to [8], and using $E_1 \ge 1$, we estimate $\omega$ as
$$\begin{aligned} \|\omega\| &= \big\|Q(\bar{u}^{n-1}) - Q(\bar{U}^{n-1})\big\| \\ &\le \frac{\big\| f(\bar{u}^{n-1}) - f(\bar{U}^{n-1}) - \frac{a}{\varepsilon^2}\big(\bar{u}^{n-1} - \bar{U}^{n-1}\big) \big\|}{\sqrt{E_1(\bar{U}^{n-1})}} + \Big\| f(\bar{u}^{n-1}) - \frac{a}{\varepsilon^2}\bar{u}^{n-1} \Big\|\, \frac{\big|E_1(\bar{u}^{n-1}) - E_1(\bar{U}^{n-1})\big|}{\sqrt{E_1(\bar{u}^{n-1})}\,\sqrt{E_1(\bar{U}^{n-1})}\,\big(\sqrt{E_1(\bar{u}^{n-1})} + \sqrt{E_1(\bar{U}^{n-1})}\big)} \\ &=: A_1 + A_2, \end{aligned}$$
$$A_1 \le \big\|f'\big(\xi \bar{U}^{n-1} + (1-\xi)\bar{u}^{n-1}\big)\big\|_{\infty}\big\|\bar{U}^{n-1} - \bar{u}^{n-1}\big\| + \frac{a}{\varepsilon^2}\big\|\bar{U}^{n-1} - \bar{u}^{n-1}\big\| \le \frac{9b^2 + 3a + 3}{2\varepsilon^2}\big(\|e^{n-1}\| + \|e^{n-2}\|\big),$$
$$A_2 \le \frac{3b^2\big(b^2 + a + 1\big)^2 |\Omega|}{2\varepsilon^4}\big(\|e^{n-1}\| + \|e^{n-2}\|\big).$$
Therefore,
$$\|\omega\| \le A\big(\|e^{n-1}\| + \|e^{n-2}\|\big),$$
where $A = \frac{9b^2 + 3a + 3}{2\varepsilon^2} + \frac{3b^2(b^2 + a + 1)^2|\Omega|}{2\varepsilon^4}$ and $\xi \in (0, 1)$.
A bound for $\nabla\omega$ is derived in [41]. Hence,
$$\|\nabla\omega\| \le C\big(\|e^{n-1}\| + \|e^{n-2}\| + \|\nabla e^{n-1}\| + \|\nabla e^{n-2}\|\big).$$
By applying Lemma 5, we have
$$X_1 \le \tau\big|S^{n-\frac12}\big(\omega,\, \beta_\tau u^n\big)\big| \le C\tau\|\omega\|_1\big\|\beta_\tau u^n\big\|_{-1} \le C\tau\big(\|e^{n-1}\|^2 + \|e^{n-2}\|^2 + \|\nabla e^{n-1}\|^2 + \|\nabla e^{n-2}\|^2\big),$$
$$X_2 \le \tau\big|\big(P_4,\, \beta_\tau e^n\big)\big| \le C\tau\|P_4\|^2 \le C\tau^5,$$
$$X_3 \le \tau\big|s^{n-\frac12}\big(\omega,\, \beta_\tau U^n\big)\big| \le C\tau\big(\|e^{n-2}\|^2 + \|e^{n-1}\|^2\big).$$
Using Equation (44), the Young inequality, and the Cauchy–Schwarz inequality, we derive
$$X_4 \le \tau\big|2K^n e_s^{n-\frac12}\big| \le C\tau\big(|e_s^n|^2 + |e_s^{n-1}|^2\big) + C\tau|K^n|^2,$$
$$X_5 \le \tau\big|e_s^{n-\frac12}\big(Q(\bar{u}^{n-1}),\, \sigma^n\big)\big| \le \frac{\tau\big(b^3 + (a+1)b\big)^2}{4\varepsilon^4}\big|e_s^{n-\frac12}\big|^2 + \tau\|\sigma^n\|^2.$$
Given that $P_1$, $P_2$, and $P_3$ all achieve second-order temporal accuracy, it can be concluded that
$$|X_6| \le C\tau^5.$$
Expanding the terms $X_7$ and $X_8$ through Equation (42) leads to the following estimates:
$$|X_7| \le C\tau\big(\tau^4 + \|\sigma^n\|^2 + \big|e_s^{n-\frac12}\big|^2\big), \qquad |X_8| \le C\tau\big(\tau^4 + \big|e_s^{n-\frac12}\big|^2\big).$$
Incorporating the above inequalities into (41), we obtain
$$\begin{aligned} &\frac12\big(\|\nabla e^n\|^2 - \|\nabla e^{n-1}\|^2\big) + \frac{a}{2\varepsilon^2}\big(\|e^n\|^2 - \|e^{n-1}\|^2\big) + \big(|e_s^n|^2 - |e_s^{n-1}|^2\big) \\ &\quad \le \tau\big(|K^n|^2 + \|\sigma^n\|^2\big) + C\tau^5 + C\tau\big(\|e^{n-1}\|^2 + \|e^{n-2}\|^2 + \|\nabla e^{n-1}\|^2 + \|\nabla e^{n-2}\|^2 + |e_s^n|^2 + |e_s^{n-1}|^2\big). \end{aligned}$$
Summation for $n = 1, 2, 3, \ldots$ yields
$$\frac12\|\nabla e^n\|^2 + \frac{a}{2\varepsilon^2}\|e^n\|^2 + |e_s^n|^2 \le \tau\sum_{i=1}^{n}\big(|K^i|^2 + \|\sigma^i\|^2\big) + C\tau^4 + C\tau\sum_{i=0}^{n}\big(\|e^i\|^2 + \|\nabla e^i\|^2 + |e_s^i|^2\big).$$
Through the application of Taylor's theorem, we can easily deduce that
$$\Big( \sum_{n=1}^{M} \tau |K^n|^2 \Big)^{1/2} \le C\tau^2, \qquad \Big( \sum_{n=1}^{M} \tau \|\sigma^n\|^2 \Big)^{1/2} \le C\tau^2.$$
Applying the above inequalities, we obtain
$$\frac12\|\nabla e^n\|^2 + \frac{a}{2\varepsilon^2}\|e^n\|^2 + |e_s^n|^2 \le C\tau^4 + C\tau\sum_{i=0}^{n-1}\big(\|e^i\|^2 + \|\nabla e^i\|^2 + |e_s^i|^2\big).$$
Applying Grönwall's inequality, we deduce
$$\frac12\|\nabla e^n\|^2 + \frac{a}{2\varepsilon^2}\|e^n\|^2 + |e_s^n|^2 \le C\tau^4,$$
for sufficiently small τ , which completes the proof of the theorem. □
By applying the LG method to spatially discretize the semi-discrete SFTR-SAV scheme (LG-SFTR-SAV), the following fully discrete LG-SFTR-SAV scheme is obtained.

3. The LG-SFTR-SAV Scheme

For spatial discretization, we employ the Legendre–Gauss–Lobatto (LGL) interpolation points [42] as computational nodes. Specifically, we define the discrete nodal set $\{x_j\}_{j=0}^{N}$ along the $x$-axis. The solution is approximated by a polynomial of degree at most $N$, expressed as
$$u_N^n(x) = \sum_{i=0}^{N-2} \hat{u}_i^n\, \psi_i(x),$$
where the basis functions $\{\psi_i(x)\}_{i=0}^{N-2}$ are defined as combinations of Legendre polynomials,
$$\psi_i(x) = L_i(x) - L_{i+2}(x),$$
constructed specifically to enforce the homogeneous Dirichlet boundary conditions [42]. The approximation space is then defined as
$$Y_N = \mathrm{span}\big\{ \psi_i(x) : 0 \le i \le N-2 \big\}.$$
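Two practical consequences of this basis choice can be verified directly with numpy's Legendre utilities: each $\psi_i$ vanishes at $x = \pm 1$ (since $L_i(1) = 1$ and $L_i(-1) = (-1)^i$), and Legendre orthogonality makes the mass matrix $(\psi_i, \psi_j)$ nonzero only for $|i - j| \in \{0, 2\}$. A small check, assuming the basis $\psi_i = L_i - L_{i+2}$ as defined above:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 16

def psi_coeffs(i):
    """Legendre-coefficient vector of psi_i = L_i - L_{i+2}."""
    c = np.zeros(i + 3)
    c[i], c[i + 2] = 1.0, -1.0
    return c

# each basis function vanishes at both endpoints
for i in range(N - 1):
    vals = leg.legval(np.array([-1.0, 1.0]), psi_coeffs(i))
    assert np.allclose(vals, 0.0, atol=1e-12)

# mass matrix (psi_i, psi_j) via Gauss-Legendre quadrature (exact here)
xq, wq = leg.leggauss(2 * N)
P = np.array([leg.legval(xq, psi_coeffs(i)) for i in range(N - 1)])
M = (P * wq) @ P.T
for i in range(N - 1):
    for j in range(N - 1):
        if abs(i - j) not in (0, 2):
            assert abs(M[i, j]) < 1e-10          # banded structure
assert abs(M[0, 2] + 2.0 / 5.0) < 1e-10          # (psi_0, psi_2) = -int L_2^2 = -2/5
```

This banded structure is what keeps the Galerkin linear systems sparse despite the global nature of spectral basis functions.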
To ensure spectral accuracy, we introduce the orthogonal projection operator $\Pi_N : H_0^1(\Omega) \to Y_N$ characterized by the variational formulation
$$\big(\nabla(u^n - \Pi_N u^n),\, \nabla\upsilon\big) = 0, \quad \forall\, \upsilon \in Y_N,$$
which satisfies the following optimal approximation property.
Lemma 6
([42]). For any $u^n \in H^m(\Omega) \cap H_0^1(\Omega)$ with $m \ge 1$,
$$\big\|u^n - \Pi_N u^n\big\|_{H^{m_1}(\Omega)} \le C N^{m_1 - m}\, \|u^n\|_{H^m(\Omega)}, \qquad m_1 = -1, 0, 1.$$
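Lemma 6 is stated for the projection $\Pi_N$; the same spectral decay can be illustrated with the simpler $L^2$ Legendre truncation of a smooth function (an illustration under that substitution, not the projection used in the analysis):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_L2_truncation_error(f, N, nq=200):
    """L2([-1,1]) error of truncating the Legendre expansion of f at degree N."""
    xq, wq = leg.leggauss(nq)
    fx = f(xq)
    # Legendre coefficients c_k = (2k+1)/2 * int_{-1}^{1} f L_k dx
    c = np.array([(2*k + 1) / 2 * np.sum(wq * fx * leg.legval(xq, np.eye(k + 1)[k]))
                  for k in range(N + 1)])
    err2 = np.sum(wq * (fx - leg.legval(xq, c)) ** 2)
    return np.sqrt(max(err2, 0.0))

f = lambda x: np.sin(np.pi * x)      # smooth test function with f(+-1) = 0
e8 = legendre_L2_truncation_error(f, 8)
e16 = legendre_L2_truncation_error(f, 16)
assert e16 < 1e-10 and e16 < e8      # faster-than-algebraic decay for smooth f
```

For analytic functions the error decays faster than any power of $N$, which is the "spectral accuracy" that motivates the LG discretization here.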
Problem 2.
Given $1 \le n \le M$, find $U_N^n \in Y_N$ such that
$$\big(\partial_\tau^{\alpha,n-\frac12} U_N,\, \upsilon_N\big) = \big(\Delta U_N^{n-\frac12} - \tfrac{a}{\varepsilon^2} U_N^{n-\frac12} - Q(\bar{U}_N^{n-1}) S_N^{n-\frac12},\, \upsilon_N\big),$$
$$\beta_\tau S_N^n = \tfrac12\big(Q(\bar{U}_N^{n-1}),\, \beta_\tau U_N^n\big).$$
We introduce the approximation errors
$$E_N^n = U^n - U_N^n = \xi_N^n + \rho_N^n = \big(U^n - \Pi_N U^n\big) + \big(\Pi_N U^n - U_N^n\big), \qquad E_S^n = S^n - S_N^n,$$
where $U^n$, $S^n$ denote the semi-discrete solutions and $U_N^n$, $S_N^n$ their fully discrete counterparts.

Error Analysis of LG-SFTR-SAV Solutions

Theorem 2.
Let $\big(\{U_N^n\}, \{S_N^n\}\big)$ be the solution of (62). Then the following energy stability holds:
$$E(U_N^n) - E(U_N^{n-1}) \le 0,$$
where
$$E(U_N^n) = \frac12\big\|\nabla U_N^n\big\|^2 + \frac{a}{2\varepsilon^2}\big\|U_N^n\big\|^2 + \big|S_N^n\big|^2 + \frac{\tau^{\alpha}}{2}\sum_{j=1}^{n}\kappa_{n-j}\big\|W_N^{j-\frac12}\big\|^2,$$
$$W_N^{n-\frac12} = \Delta U_N^{n-\frac12} - \frac{a}{\varepsilon^2} U_N^{n-\frac12} - Q(\bar{U}_N^{n-1}) S_N^{n-\frac12}.$$
For $0 \le n \le M$, the following error estimate holds:
$$\frac12\big\|\nabla(\Pi_N U^n - U_N^n)\big\|^2 + \frac{a}{2\varepsilon^2}\big\|\Pi_N U^n - U_N^n\big\|^2 + \big|S^n - S_N^n\big|^2 \le C N^{2-2m}.$$
Proof. 
The approach developed in Lemma 4 extends directly to prove the energy decay of the fully discrete system (63). By a method similar to that of Lemma 5, it can be shown that $\|\beta_\tau U_N^n\|$ is bounded. As noted in [8], for nonlinear equations, ensuring the boundedness of the numerical solution is crucial for establishing the error estimate. The boundedness of $\|U^n\|_{L^\infty}$ and $\|U_N^n\|_{L^\infty}$ is a direct consequence of the stability result of Lemma 4 and the equivalence of norms in finite-dimensional spaces. For notational simplicity, we define
$$b^* := \max_{1 \le n \le M}\big\{ \|U^n\|_{L^\infty},\, \|U_N^n\|_{L^\infty} \big\}.$$
By subtracting Equation (62) from Equation (8), we obtain
$$\big(\partial_\tau^{\alpha,n-\frac12} E_N,\, \upsilon_N\big) = \big(\Delta E_N^{n-\frac12} - \tfrac{a}{\varepsilon^2} E_N^{n-\frac12} - Q(\bar{U}^{n-1})\big(S^{n-\frac12} - S_N^{n-\frac12}\big) - S_N^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1})\big),\, \upsilon_N\big),$$
$$\beta_\tau E_S^n = \tfrac12\big(Q(\bar{U}^{n-1}),\, \beta_\tau(U^n - U_N^n)\big) + \tfrac12\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1}),\, \beta_\tau U_N^n\big).$$
Set $V_N^{n-\frac12} = \Delta E_N^{n-\frac12} - \frac{a}{\varepsilon^2} E_N^{n-\frac12} - Q(\bar{U}^{n-1})\big(S^{n-\frac12} - S_N^{n-\frac12}\big) - S_N^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1})\big)$, so that
$$\partial_\tau^{\alpha,n-\frac12} E_N = V_N^{n-\frac12}.$$
Multiplying Equation (67) at step $j$ by $\tau^{\alpha-1}\theta_{n-j}$ and summing over $j = 1, \ldots, n$, we derive
$$\tau^{-1}\sum_{j=1}^{n}\theta_{n-j}\sum_{m=0}^{j}\varpi_m\big(E_N^{j-m} - E_N^0\big) = \tau^{\alpha-1}\sum_{j=1}^{n}\theta_{n-j}\, V_N^{j-\frac12}.$$
Upon rearranging the terms on the left-hand side of (68), we have
$$\begin{aligned} \tau^{-1}\sum_{j=1}^{n}\theta_{n-j}\sum_{m=0}^{j}\varpi_m\big(E_N^{j-m} - E_N^0\big) &= \tau^{-1}\sum_{j=0}^{n}\theta_{n-j}\sum_{m=0}^{j}\varpi_{j-m}\big(E_N^{m} - E_N^0\big) = \tau^{-1}\sum_{m=0}^{n}\big(E_N^{m} - E_N^0\big)\sum_{j=m}^{n}\varpi_{j-m}\,\theta_{n-j} \\ &= \tau^{-1}\sum_{m=0}^{n}\big(E_N^{m} - E_N^0\big)\sum_{j=0}^{n-m}\varpi_j\,\theta_{n-j-m} = \tau^{-1}\sum_{m=0}^{n}\big(E_N^{n-m} - E_N^0\big)\, C_m. \end{aligned}$$
Equation (68) is thus rewritten as
$$\Big(\frac{E_N^n - E_N^{n-1}}{\tau},\, \upsilon_N\Big) = \Big(\tau^{\alpha-1}\sum_{j=1}^{n}\theta_{n-j}\, V_N^{j-\frac12},\, \upsilon_N\Big),$$
or, splitting $E_N^n = \xi_N^n + \rho_N^n$,
$$\Big(\frac{\rho_N^n - \rho_N^{n-1}}{\tau},\, \upsilon_N\Big) = -\Big(\frac{\xi_N^n - \xi_N^{n-1}}{\tau},\, \upsilon_N\Big) + \Big(\tau^{\alpha-1}\sum_{j=1}^{n}\theta_{n-j}\, V_N^{j-\frac12},\, \upsilon_N\Big).$$
Taking $\upsilon_N = V_N^{n-\frac12}$ in (71), we have
$$\Big(\frac{\rho_N^n - \rho_N^{n-1}}{\tau},\, V_N^{n-\frac12}\Big) = -\Big(\frac{\xi_N^n - \xi_N^{n-1}}{\tau},\, V_N^{n-\frac12}\Big) + \tau^{\alpha-1}\Big(V_N^{n-\frac12},\, \sum_{j=1}^{n}\theta_{n-j}\, V_N^{j-\frac12}\Big).$$
Multiplying both sides of (66b) by $2E_S^{n-\frac12}$, we get
$$\frac1\tau\big(|E_S^n|^2 - |E_S^{n-1}|^2\big) = E_S^{n-\frac12}\big(Q(\bar{U}^{n-1}),\, \beta_\tau\xi_N^n + \beta_\tau\rho_N^n\big) + E_S^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1}),\, \beta_\tau U_N^n\big).$$
The left-hand side of Equation (72) is rearranged into the following form:
$$\begin{aligned} \Big(\frac{\rho_N^n - \rho_N^{n-1}}{\tau},\, V_N^{n-\frac12}\Big) &= -\frac{1}{2\tau}\big(\|\nabla\rho_N^n\|^2 - \|\nabla\rho_N^{n-1}\|^2\big) - \frac{1}{2\tau}\big(\nabla(\rho_N^n - \rho_N^{n-1}),\, \nabla(\xi_N^n + \xi_N^{n-1})\big) + E_S^{n-\frac12}\big(Q(\bar{U}^{n-1}),\, \beta_\tau\xi_N^n\big) \\ &\quad - \frac{a}{2\tau\varepsilon^2}\big(\|\rho_N^n\|^2 - \|\rho_N^{n-1}\|^2\big) - \frac{a}{2\tau\varepsilon^2}\big(\rho_N^n - \rho_N^{n-1},\, \xi_N^n + \xi_N^{n-1}\big) - \frac1\tau\big(|E_S^n|^2 - |E_S^{n-1}|^2\big) \\ &\quad + E_S^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1}),\, \beta_\tau U_N^n\big) - S_N^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1}),\, \beta_\tau\rho_N^n\big). \end{aligned}$$
The first term on the right-hand side of (72) is rewritten as
$$\begin{aligned} -\Big(\frac{\xi_N^n - \xi_N^{n-1}}{\tau},\, V_N^{n-\frac12}\Big) &= \frac{1}{2\tau}\big(\|\nabla\xi_N^n\|^2 - \|\nabla\xi_N^{n-1}\|^2\big) + \frac{1}{2\tau}\big(\nabla(\xi_N^n - \xi_N^{n-1}),\, \nabla(\rho_N^n + \rho_N^{n-1})\big) + E_S^{n-\frac12}\big(Q(\bar{U}^{n-1}),\, \beta_\tau\xi_N^n\big) \\ &\quad + \frac{a}{2\tau\varepsilon^2}\big(\|\xi_N^n\|^2 - \|\xi_N^{n-1}\|^2\big) + \frac{a}{2\tau\varepsilon^2}\big(\xi_N^n - \xi_N^{n-1},\, \rho_N^n + \rho_N^{n-1}\big) + S_N^{n-\frac12}\big(Q(\bar{U}^{n-1}) - Q(\bar{U}_N^{n-1}),\, \beta_\tau\xi_N^n\big). \end{aligned}$$
The second term on the right-hand side of (72) satisfies
$$\Big\langle \sum_{j=1}^{n}\theta_{n-j}\, V_N^{j-\frac12},\, V_N^{n-\frac12} \Big\rangle \ge \frac12\kappa_{n-1}\big\|V_N^{n-\frac12}\big\|^2 + \frac12\sum_{j=1}^{n}\kappa_{n-j}\big\|V_N^{j-\frac12}\big\|^2 - \frac12\sum_{j=1}^{n-1}\kappa_{n-j-1}\big\|V_N^{j-\frac12}\big\|^2.$$
Upon substituting equations (73)–(76) into Equation (72), it follows that
  1 2 ( ρ N n 2 ρ N n 1 2 ) + a 2 ε 2 ( ρ N n 2 ρ N n 1 2 ) + ( | E S n | 2 | E S n 1 | 2 ) + τ α 1 2 κ n 1 V N n 1 2 2 + 1 2 j = 1 n κ n j V N j 1 2 2 1 2 j = 1 n 1 κ n j 1 V N j 1 2 2 1 2 ( ξ N n 2 ξ N n 1 2 ) + a ε 2 ( ρ N n 1 , ξ N n 1 ) a ε 2 ( ρ N n , ξ N n ) a 2 ε 2 ( ξ N n 2 ξ N n 1 2 ) τ S N n 1 2 ( ω N , β τ E N n ) + τ E S n 1 2 ( ω N , β τ U N n ) ,
where $\omega_N=Q(U^{n-1})-Q(U_N^{n-1})$.
We provide the following norm bounds for ω N ,
ω N = Q ( U n 1 ) Q ( U N n 1 ) a ε 2 ( U n 1 U N n 1 ) E 1 ( U N n 1 ) + ( Q ( U n 1 ) a ε 2 U n 1 ) ( E 1 ( U n 1 ) E 1 ( U N n 1 ) ) E 1 ( U n 1 ) E 1 ( U N n 1 ) ( E 1 ( U n 1 ) + E 1 ( U N n 1 ) ) = B 1 + B 2 ,
B 1   9 b * 2 + 3 a + 3 ε 2 ( ξ N n 1 + ξ N n 2 + ρ N n 1 + ρ N n 2 ) ,
B 2   3 b * 2 ( b * 2 + a + 1 ) 2 | Ω | 2 ε 4 ( ξ N n 1 + ξ N n 2 + ρ N n 1 + ρ N n 2 ) ,
ω N   θ N ( ξ N n 1 + ξ N n 2 + ρ N n 1 + ρ N n 2 ) .
where θ N = 2 ε 2 ( 9 b * 2 + 3 a + 3 ) + 3 b * 2 ( b * 2 + a + 1 ) 2 | Ω | 2 ε 4 .
Similarly, for ω N , we can derive
ω N C ( ξ N n 1 + ξ N n 2 + ξ N n 1 + ξ N n 2 + ρ N n 1 + ρ N n 2 + ρ N n 1 + ρ N n 2 ) .
A term-by-term analysis is conducted for the fifth and sixth terms of Equation (77):
$$\big|E_S^{n-\frac12}\big(\omega_N,\ \beta_\tau U_N^n\big)\big|\le C\big|E_S^{n-\frac12}\big|\,\|\omega_N\|\le C\big(\|\omega_N\|^{2}+\big|E_S^{n-\frac12}\big|^{2}\big)\le C\big(\|\xi_N^{n-1}\|^{2}+\|\xi_N^{n-2}\|^{2}+\|\rho_N^{n-1}\|^{2}+\|\rho_N^{n-2}\|^{2}+|E_S^{n}|^{2}+|E_S^{n-1}|^{2}\big),$$
  | ( ω N , β τ E N n ) | ω N 1 β τ E N n 1 C ω N 2 C ( ξ N n 1 + ξ N n 2 + ξ N n 1 + ξ N n 2 + ρ N n 1 + ρ N n 2 + ρ N n 1 + ρ N n 2 ) .
Substituting the above expressions into Equation (77), taking $\tau=O(N^{-1})$, and applying Lemma 6, we obtain
  1 2 ( ρ N n 2 ρ N n 1 2 ) + a 2 ε 2 ( ρ N n 2 ρ N n 1 2 ) + ( | E S n | 2 | E S n 1 | 2 ) C τ ( | E S n | 2 + | E S n 1 | 2 + ρ N n 2 + ρ N n 1 2 + ρ N n 2 2 + ρ N n 2 + ρ N n 1 2 + ρ N n 2 2 ) + C N 2 2 m .
Summing the above inequality for $n=1,2,3,\ldots$, we have
$$\frac12\|\nabla\rho_N^n\|^{2}+\frac{a}{2\varepsilon^{2}}\|\rho_N^n\|^{2}+|E_S^n|^{2}\le C\tau\sum_{i=0}^{n}\Big(|E_S^i|^{2}+\|\nabla\rho_N^i\|^{2}+\|\rho_N^i\|^{2}\Big)+CN^{2-2m}.$$
By applying Gronwall’s lemma, we deduce that
$$\frac12\|\nabla\rho_N^n\|^{2}+\frac{a}{2\varepsilon^{2}}\|\rho_N^n\|^{2}+|E_S^n|^{2}\le CN^{2-2m}.$$
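The final step uses the standard discrete Gronwall lemma: if $y_n\le a+C\tau\sum_{i<n}y_i$, then $y_n\le a\,e^{Cn\tau}$. A quick numerical check on the equality case of the recursion (all constants here are synthetic, chosen only for illustration):

```python
import math

# Synthetic constants; the sequence y attains the Gronwall hypothesis with equality.
C, tau, a, n_max = 2.0, 0.01, 1.0, 200
y = [a]
for n in range(1, n_max + 1):
    y.append(a + C * tau * sum(y))  # y_n = a + C*tau*(y_0 + ... + y_{n-1})
# The lemma's exponential bound dominates the whole sequence.
ok = all(y[n] <= a * math.exp(C * n * tau) for n in range(n_max + 1))
print(ok)  # → True
```

Here the equality case gives $y_n=a(1+C\tau)^n$, which indeed stays below $a\,e^{Cn\tau}$ since $1+x\le e^x$.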
Corollary 1.
For the solution $u(t_n)$ of Equation (5) and the solution $U_N^n$ of Equation (62), under the conditions of Theorem 2, the following error estimate holds:
$$\|\nabla(u(t_n)-U_N^n)\|+\frac{\sqrt{a}}{\varepsilon}\|u(t_n)-U_N^n\|+\sqrt{2}\,|s^{n}-S_N^{n}|\le C\big(N^{1-m}+\tau^{2}\big).$$

4. The ROLGE-SFTR-SAV Scheme

In this section, the concept introduced in [29,43] is adopted to construct a POD basis and develop the ROLGE-SFTR-SAV formulation for the tFAC equation.

4.1. Construct the POD Basis and Establish ROLGE-SFTR-SAV Formulation

Consider the snapshot solutions { U N n } n = 1 L defined in Section 3. The space spanned by these snapshots is denoted as
V = span { U N 1 , U N 2 , , U N L } .
The existence of at least one non-zero snapshot function is assumed. Let { ψ j } j = 1 l denote an orthonormal basis for V , where l = dim V . Consequently, any snapshot vector admits the expansion
$$U_N^i=\sum_{j=1}^{l}\big(U_N^i,\psi_j\big)\psi_j,\quad i=1,2,\ldots,L.$$
Definition 1
([29]). The POD method seeks a standard orthonormal basis { ψ j } j = 1 d that minimizes the projection error
$$\min_{\{\psi_j\}_{j=1}^{d}}\frac{1}{L}\sum_{i=1}^{L}\Big\|U_N^i-\sum_{j=1}^{d}\big(U_N^i,\psi_j\big)\psi_j\Big\|^{2}.$$
The basis { ψ j } j = 1 d of rank d optimally captures the dominant features of the snapshot ensemble while reducing dimensionality. A set of solutions { ψ j } j = 1 d of (87) is termed a POD basis of rank d.
We construct the L × L correlation matrix B with entries
$$B_{ij}=\frac{1}{L}\big(U_N^i,U_N^j\big).$$
Since B is positive semidefinite with rank l, this construction yields the following fundamental result.
Proposition 1
([31]). Let { λ j } j = 1 l with λ 1 λ 2 λ l > 0 denote the positive eigenvalues of the correlation matrix B, and { a j } j = 1 l their corresponding orthonormal eigenvectors. A rank-d POD basis ( d l ) is then constructed as
$$\psi_i=\frac{1}{\sqrt{L\lambda_i}}\sum_{k=1}^{L}a_k^{i}U_N^{k},\quad 1\le i\le d,$$
where a k i denotes the k-th component of eigenvector a i . Moreover, the optimal approximation error satisfies
$$\frac{1}{L}\sum_{i=1}^{L}\Big\|U_N^i-\sum_{j=1}^{d}\big(U_N^i,\psi_j\big)\psi_j\Big\|^{2}=\sum_{j=d+1}^{l}\lambda_j.$$
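Proposition 1 can be checked numerically. The sketch below, in plain NumPy, builds the correlation matrix from a synthetic snapshot matrix (the snapshot data and all sizes are illustrative stand-ins, not the paper's experimental data), extracts the POD basis from the eigenpairs as in (89), and verifies that the mean squared projection error equals the eigenvalue tail:

```python
import numpy as np

# Hypothetical snapshot ensemble: L snapshots of an N-dimensional solution,
# stored as columns of U; the decaying diagonal scaling mimics a smooth flow.
rng = np.random.default_rng(0)
N, L = 64, 8
U = rng.standard_normal((N, L)) @ np.diag(2.0 ** -np.arange(L))

# Correlation matrix B_ij = (U^i, U^j) / L
B = (U.T @ U) / L

# Eigenpairs of B, sorted so lambda_1 >= lambda_2 >= ...
lam, a = np.linalg.eigh(B)
idx = np.argsort(lam)[::-1]
lam, a = lam[idx], a[:, idx]

# Rank-d POD basis: psi_i = (1 / sqrt(L * lambda_i)) * sum_k a_k^i U^k
d = 3
Psi = U @ a[:, :d] / np.sqrt(L * lam[:d])

# Optimality identity of Proposition 1: the mean squared projection error
# equals the tail sum of the discarded eigenvalues.
proj_err = np.mean(np.sum((U - Psi @ (Psi.T @ U)) ** 2, axis=0))
tail = lam[d:].sum()
print(np.allclose(proj_err, tail))  # → True
```

The same computation also confirms that the columns of `Psi` are orthonormal, which is what makes the expansion coefficients simple inner products.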
Define the reduced-order subspace
χ d = span { ψ 1 , ψ 2 , , ψ d } .
For $U_N\in Y_N$, the Ritz projection $\Pi^d: Y_N\to\chi_d$ is defined by
$$\big(\nabla\Pi^dU_N,\nabla\upsilon_d\big)=\big(\nabla U_N,\nabla\upsilon_d\big),\quad\forall\upsilon_d\in\chi_d.$$
Then, by standard arguments from functional analysis, there exists an extension $\Pi^N: H^m(\Omega)\to Y_N$ of $\Pi^d$ such that $\Pi^N|_{Y_N}=\Pi^d: Y_N\to\chi_d$. This projection is defined variationally by
$$\big(\nabla u,\nabla\upsilon_N\big)=\big(\nabla\Pi^Nu,\nabla\upsilon_N\big),\quad\forall\upsilon_N\in Y_N,$$
where u H m ( Ω ) . The variational formulation (91) ensures Π N is uniquely determined and satisfies the stability bound
$$\big\|\nabla(\Pi^du)\big\|\le\|\nabla u\|,\quad\forall u\in H^m(\Omega).$$
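The stability bound has a simple finite-dimensional analogue: an orthogonal (variational) projection onto a subspace never increases the norm in which it is defined. The toy sketch below uses a generic Euclidean norm and a random subspace, both hypothetical stand-ins for the $H^1$-seminorm setting of (92):

```python
import numpy as np

# Hypothetical finite-dimensional analogue of the projection stability bound:
# the orthogonal projector onto span(Q) is norm-nonexpansive.
rng = np.random.default_rng(2)
n, k = 30, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal basis of the subspace
u = rng.standard_normal(n)

# Variational definition: (Pu, v) = (u, v) for all v in span(Q)  =>  Pu = Q Q^T u
Pu = Q @ (Q.T @ u)
print(np.linalg.norm(Pu) <= np.linalg.norm(u))  # → True
```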
Lemma 7
([44]). For each integer d ( 1 d l ) , the projection operator Π d fulfills the inequality:
1 L i = 1 L ( U N i Π d U N i ) 2 j = d + 1 l λ j ,
1 L i = 1 L U N i Π d U N i 2 C N 2 j = d + 1 l λ j .
1 L i = 1 L U N i Π d U N i 1 2   C N 2 L i = 1 L ( U N i Π d U N i ) 2 C N 2 j = d + 1 l λ j .
Problem 3.
Find U d n χ d ( n = 1 , 2 , , M ) such that
$$U_d^n=\Pi^dU_N^n=\sum_{j=1}^{d}\big(U_N^n,\psi_j\big)\psi_j,\quad n=1,2,\ldots,L,$$
$$\big(\partial_\tau^{\alpha,n-\frac12}U_d,\ \upsilon_d\big)=\Big(\Delta U_d^{n-\frac12}-\frac{a}{\varepsilon^{2}}U_d^{n-\frac12}-Q(U_d^{n-1})S_d^{n-\frac12},\ \upsilon_d\Big),\quad n=L+1,L+2,\ldots,M,$$
$$\beta_\tau S_d^{n}=\frac12\big(Q(U_d^{n-1}),\ \beta_\tau U_d^{n}\big),\quad n=L+1,L+2,\ldots,M.$$
Remark 1.
Problem 3 demonstrates the ROLGE-SFTR-SAV solutions for the tFAC equation. Specifically, the initial L ROLGE-SFTR-SAV solutions are constructed by projecting the first L classical LG solutions onto a POD basis, and subsequent ( M L ) solutions are generated through extrapolation and iterative solution of equations. This hybrid approach fundamentally differentiates Problem 3 from conventional POD-based reduced-order models [8,28], particularly in how it integrates projection with extrapolation and iteration steps.
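At each time level $n>L$, the online stage amounts to a Galerkin projection of the full linear system onto $\chi_d$, replacing an $N$-dimensional solve with a $d$-dimensional one. The toy sketch below uses a generic SPD matrix; `A_full`, `b_full`, and `Psi` are illustrative stand-ins, not the actual SFTR-SAV operators:

```python
import numpy as np

# Generic full-order SPD system and an orthonormal reduced basis Psi (N x d).
rng = np.random.default_rng(1)
N, d = 50, 4
M = rng.standard_normal((N, N))
A_full = M @ M.T + N * np.eye(N)                    # SPD full-order operator
Psi, _ = np.linalg.qr(rng.standard_normal((N, d)))  # reduced basis

# Right-hand side chosen so the exact solution lies in span(Psi).
b_full = A_full @ (Psi @ rng.standard_normal(d))

# Reduced-order Galerkin system: (Psi^T A Psi) c = Psi^T b, then lift U_d = Psi c.
A_red = Psi.T @ A_full @ Psi                        # d x d instead of N x N
c = np.linalg.solve(A_red, Psi.T @ b_full)
U_d = Psi @ c

# Because the true solution is in the reduced subspace, the reduced solve is exact.
U_full = np.linalg.solve(A_full, b_full)
print(np.allclose(U_d, U_full))  # → True
```

In general the reduced solution is only as good as the subspace, which is exactly what the eigenvalue-tail terms in the error estimates of Section 4.2 quantify.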
We define the approximation errors
$$E_d^n=U_d^n-U_N^n=\rho_d^n+\xi_d^n=\big(U_d^n-\Pi^dU_N^n\big)+\big(\Pi^dU_N^n-U_N^n\big),\qquad E_{S_d}^n=S_d^n-S_N^n,$$
where $U_N^n$, $S_N^n$ denote the fully discrete solutions obtained from Problem 2 and $U_d^n$, $S_d^n$ represent the ROLGE solutions computed in Problem 3. These error metrics will be analyzed in Section 4.

4.2. Error Estimates of the ROLGE-SFTR-SAV Solution for tFAC Equation

Theorem 3.
The solution $U_d^n\in\chi_d$ $(1\le n\le M)$ to Problem 3 satisfies
$$E(U_d^n,S_d^n)-E(U_d^{n-1},S_d^{n-1})\le0,$$
where
$$E(U_d^n,S_d^n)=\frac12\|\nabla U_d^n\|^{2}+\frac{a}{2\varepsilon^{2}}\|U_d^n\|^{2}+|S_d^n|^{2}+\frac{\tau^{\alpha}}{2}\sum_{i=1}^{n}\kappa_{n-i}\big\|W_d^{i-\frac12}\big\|^{2},$$
$$W_d^{n-\frac12}=\Delta U_d^{n-\frac12}-\frac{a}{\varepsilon^{2}}U_d^{n-\frac12}-Q(U_d^{n-1})S_d^{n-\frac12}.$$
When $\tau=O(N^{-1})$, the following error estimate holds:
$$\big\|\nabla\big(U_d^n-\Pi^dU_N^n\big)\big\|+\frac{\sqrt{a}}{\varepsilon}\big\|U_d^n-\Pi^dU_N^n\big\|+\sqrt{2}\,\big|S_d^n-S_N^n\big|\le C\Big(L\sum_{j=d+1}^{l}\lambda_j\Big)^{\frac12}+C\sigma(M)\big(N^{1-m}+\tau^{2}\big),$$
where $U_N^i\in Y_N$ $(i=1,2,\ldots,M)$ is the solution of Problem 2, and $\sigma(M)=0$ for $1\le n\le L$, $\sigma(M)=N^{-1}(M-L)$ for $L+1\le n\le M$.
Proof. 
The method established in Lemma 4 extends directly to the ROLGE-SFTR-SAV scheme and yields the energy decay property (97). Following an approach analogous to that employed in Lemma 5, one can show that $\beta_\tau U_d^n$ is bounded. As noted in [8], for general nonlinear equations, ensuring the boundedness of the numerical solution plays a vital role in establishing the error estimate. The boundedness of $\|U_d^n\|_{L^\infty}$ and $\|U_N^n\|_{L^\infty}$ follows directly from the stability result presented in Equation (97) and the equivalence of norms in finite-dimensional spaces. For the sake of notational simplicity, we define
$$\bar b:=\max_{1\le n\le M}\big\{\|U_d^n\|_{L^{\infty}},\ \|U_N^n\|_{L^{\infty}}\big\}.$$
By subtracting Equation (62) from Equation (96), we obtain
$$E_d^n=U_d^n-U_N^n=\Pi^dU_N^n-U_N^n,\quad n=1,\ldots,L,$$
$$\big(\partial_\tau^{\alpha,n-\frac12}E_d,\ \upsilon_d\big)=\Big(\Delta E_d^{n-\frac12}-\frac{a}{\varepsilon^{2}}E_d^{n-\frac12}-Q(U_d^{n-1})\big(S_d^{n-\frac12}-S_N^{n-\frac12}\big)-S_N^{n-\frac12}\big(Q(U_d^{n-1})-Q(U_N^{n-1})\big),\ \upsilon_d\Big),\quad n=L+1,L+2,\ldots,M,$$
$$\beta_\tau E_{S_d}^{n}=\frac12\big(Q(U_d^{n-1}),\ \beta_\tau(U_d^n-U_N^n)\big)+\frac12\big(Q(U_d^{n-1})-Q(U_N^{n-1}),\ \beta_\tau U_N^n\big),\quad n=L+1,L+2,\ldots,M.$$
For n = 1 , , L , by Lemma 7 and Equation (101), we have
$$\|\nabla E_d^n\|=\|\nabla(U_N^n-U_d^n)\|=\|\nabla(U_N^n-\Pi^dU_N^n)\|\le\Big(L\sum_{j=d+1}^{l}\lambda_j\Big)^{\frac12}.$$
Set $V_d^{n-\frac12}=\Delta E_d^{n-\frac12}-\frac{a}{\varepsilon^{2}}E_d^{n-\frac12}-Q(U_d^{n-1})\big(S_d^{n-\frac12}-S_N^{n-\frac12}\big)-S_N^{n-\frac12}\big(Q(U_d^{n-1})-Q(U_N^{n-1})\big)$; then
$$\big(\partial_\tau^{\alpha,n-\frac12}E_d,\ \upsilon_d\big)=\big(V_d^{n-\frac12},\ \upsilon_d\big).$$
Multiplying both sides of (104) by $\tau^{\alpha-1}\theta_{n-j}$ after replacing $n$ with $j$, and summing the index $j$ from 1 to $n$, we obtain
$$\tau^{-1}\sum_{j=1}^{n}\theta_{n-j}\sum_{m=0}^{j}\varpi_{m}\big(E_d^{j-m}-E_d^{0}\big)=\tau^{\alpha-1}\sum_{j=1}^{n}\theta_{n-j}V_d^{j-\frac12}.$$
Upon rearranging the terms in the left-hand side of (105), we obtain
$$\tau^{-1}\sum_{j=1}^{n}\theta_{n-j}\sum_{m=0}^{j}\varpi_{m}\big(E_d^{j-m}-E_d^{0}\big)=\tau^{-1}\sum_{j=0}^{n}\theta_{n-j}\sum_{m=0}^{j}\varpi_{j-m}\big(E_d^{m}-E_d^{0}\big)=\tau^{-1}\sum_{m=0}^{n}\big(E_d^{m}-E_d^{0}\big)\sum_{j=m}^{n}\varpi_{j-m}\theta_{n-j}=\tau^{-1}\sum_{m=0}^{n}\big(E_d^{m}-E_d^{0}\big)\sum_{j=0}^{n-m}\varpi_{j}\theta_{n-j-m}=\tau^{-1}\sum_{m=0}^{n}\big(E_d^{n-m}-E_d^{0}\big)C_{m},$$
where $C_m=\sum_{j=0}^{m}\varpi_j\theta_{m-j}$ is the $m$-th coefficient of the power series of the product $\varpi(\zeta)\theta(\zeta)$, which by Lemma 3 yields $C_0=1$, $C_1=-1$, and $C_k=0$ for $k>1$. We then have
$$\Big(\frac{E_d^n-E_d^{n-1}}{\tau},\ \upsilon_d\Big)=\Big(\tau^{\alpha-1}\sum_{j=1}^{n}\theta_{n-j}V_d^{j-\frac12},\ \upsilon_d\Big),$$
$$\Big(\frac{\rho_d^n-\rho_d^{n-1}}{\tau},\ \upsilon_d\Big)=-\Big(\frac{\xi_d^n-\xi_d^{n-1}}{\tau},\ \upsilon_d\Big)+\Big(\tau^{\alpha-1}\sum_{j=1}^{n}\theta_{n-j}V_d^{j-\frac12},\ \upsilon_d\Big).$$
Taking $\upsilon_d=V_d^{n-\frac12}$ in (108), we have
$$\Big(\frac{\rho_d^n-\rho_d^{n-1}}{\tau},\ V_d^{n-\frac12}\Big)=-\Big(\frac{\xi_d^n-\xi_d^{n-1}}{\tau},\ V_d^{n-\frac12}\Big)+\tau^{\alpha-1}\Big(V_d^{n-\frac12},\ \sum_{j=1}^{n}\theta_{n-j}V_d^{j-\frac12}\Big).$$
Multiplying both sides of (102b) by $2E_{S_d}^{n-\frac12}$, we get
$$\frac{1}{\tau}\big(|E_{S_d}^n|^{2}-|E_{S_d}^{n-1}|^{2}\big)=E_{S_d}^{n-\frac12}\big(Q(U_d^{n-1}),\ \beta_\tau\rho_d^n+\beta_\tau\xi_d^n\big)+E_{S_d}^{n-\frac12}\big(Q(U_d^{n-1})-Q(U_N^{n-1}),\ \beta_\tau U_N^n\big).$$
The left-hand side of Equation (109) is rearranged into the following form
  ( ρ d n ρ d n 1 τ , Δ E d n 1 2 a ε 2 E d n 1 2 Q ( U d n 1 ) ( S d n 1 2 S N n 1 2 ) S N n 1 2 ( Q ( U d n 1 ) Q ( U N n 1 ) ) = 1 2 τ ( ρ d n 2 ρ d n 1 2 ) 1 2 τ ( ρ d n ρ d n 1 , ξ d n + ξ d n 1 ) + E S d n 1 2 ( Q ( U d n 1 ) , β τ ξ d n ) a 2 τ ε 2 ( ρ d n 2 ρ d n 1 2 ) a 2 τ ε 2 ( ρ d n ρ d n 1 , ξ d n + ξ d n 1 ) 1 τ ( | E S d n | 2 | E S d n 1 | 2 ) + E S d n 1 2 ( Q ( U d n 1 ) Q ( U N n 1 ) , β τ U N n ) S N n 1 2 ( Q ( U d n 1 ) Q ( U N n 1 ) , β τ ρ d n ) .
The first term on the right-hand side of (109) is rewritten as
  ( ξ d n ξ d n 1 τ , Δ E d n 1 2 a ε 2 E d n 1 2 Q ( U d n 1 ) ( S d n 1 2 S N n 1 2 ) S N n 1 2 ( Q ( U d n 1 ) Q ( U N n 1 ) ) = 1 2 τ ( ξ d n 2 ξ d n 1 2 ) 1 2 τ ( ξ d n ξ d n 1 , ρ d n + ρ d n 1 ) E S d n 1 2 ( Q ( U d n 1 ) , β τ ξ d n ) a 2 τ ε 2 ( ξ d n 2 ξ d n 1 2 ) a 2 τ ε 2 ( ξ d n ξ d n 1 , ρ d n + ρ d n 1 ) S N n 1 2 ( Q ( U d n 1 ) Q ( U N n 1 ) , β τ ξ d n ) .
The second term on the right-hand side of (109) satisfies
$$\Big(\sum_{j=1}^{n}\theta_{n-j}V_d^{j-\frac12},\ V_d^{n-\frac12}\Big)\ge\frac12\kappa_{n-1}\big\|V_d^{n-\frac12}\big\|^{2}+\frac12\sum_{j=1}^{n}\kappa_{n-j}\big\|V_d^{j-\frac12}\big\|^{2}-\frac12\sum_{j=1}^{n-1}\kappa_{n-j-1}\big\|V_d^{j-\frac12}\big\|^{2}.$$
Upon substituting Equations (110)–(113) into Equation (109), it follows that
  1 2 ( ρ d n 2 ρ d n 1 2 ) + a 2 ε 2 ( ρ d n 2 ρ d n 1 2 ) + ( | E S d n | 2 | E S d n 1 | 2 ) + τ α 1 2 κ n 1 V d n 1 2 2 + 1 2 j = 1 n κ n j V d j 1 2 2 1 2 j = 1 n 1 κ n j 1 V d j 1 2 2 1 2 ( ξ d n 2 ξ d n 1 2 ) a ε 2 ( ρ d n , ξ d n ) + a ε 2 ( ρ d n 1 , ξ d n 1 ) + a 2 ε 2 ( ξ d n 2 ξ d n 1 2 ) τ S N n 1 2 ( ω d , β τ E d n ) + τ E S d n 1 2 ( ω d , β τ U N n ) ,
where $\omega_d=Q(U_d^{n-1})-Q(U_N^{n-1})$.
We decompose ω d into the following components
ω d = Q ( U d n 1 ) Q ( U N n 1 ) a ε 2 ( U d n 1 U N n 1 ) E 1 ( U N n 1 ) + ( Q ( U d n 1 ) a ε 2 U d n 1 ) ( E 1 ( U d n 1 ) E 1 ( U N n 1 ) ) E 1 ( U d n 1 ) E 1 ( U N n 1 ) ( E 1 ( U d n 1 ) + E 1 ( U N n 1 ) ) = B ¯ 1 + B ¯ 2 .
The terms can be bounded separately by
B ¯ 1 9 b ¯ 2 + 3 a + 3 ε 2 ( ξ d n 1 + ξ d n 2 + ρ d n 1 + ρ d n 2 ) .
B ¯ 2 3 b ¯ 2 ( b ¯ 2 + a + 1 ) 2 | Ω | 2 ε 4 ( ξ d n 1 + ξ d n 2 + ρ d n 1 + ρ d n 2 ) .
Let θ d = 2 ε 2 ( 9 b ¯ 2 + 3 a + 3 ) + 3 b ¯ 2 ( b ¯ 2 + a + 1 ) 2 | Ω | 2 ε 4 , we have
ω d θ d ( ξ d n 1 + ξ d n 2 + ρ d n 1 + ρ d n 2 ) .
Similarly, for ω d , we can derive
ω d C ( ξ d n 1 + ξ d n 2 + ξ d n 1 + ξ d n 2 + ρ d n 1 + ρ d n 2 + ρ d n 1 + ρ d n 2 ) .
Using Equation (116) and Lemma 7, we have
$$\big|E_{S_d}^{n-\frac12}\big(\omega_d,\ \beta_\tau U_N^n\big)\big|\le C\big|E_{S_d}^{n-\frac12}\big|\,\|\omega_d\|\le C\big(\|\xi_d^{n-1}\|^{2}+\|\xi_d^{n-2}\|^{2}+\|\rho_d^{n-1}\|^{2}+\|\rho_d^{n-2}\|^{2}+|E_{S_d}^{n}|^{2}+|E_{S_d}^{n-1}|^{2}\big).$$
For n = L + 1 , L + 2 , , M , using Lemma 6 and Theorem 1, we obtain
$$\|\nabla\xi_d^n\|\le\|\nabla(U_N^n-U^n)\|+\|\nabla(U^n-\Pi^NU^n)\|+\|\nabla(\Pi^NU^n-\Pi^dU^n)\|+\|\nabla(\Pi^NU^n-\Pi^NU_N^n)\|\le C\big(\tau^{2}+N^{1-m}\big).$$
Using Equations (117) and (119) and Lemma 6, we have
  | ( ω d , β τ E d n ) | ω d 1 β τ E d n 1 C ( ξ d n 1 + ξ d n 2 + ξ d n 1 + ξ d n 2 + ρ d n 1 + ρ d n 2 + ρ d n 1 + ρ d n 2 ) C ( τ 4 + N 2 2 m ) + C ( ξ d n 1 + ξ d n 2 + ρ d n 1 + ρ d n 2 + ρ d n 1 + ρ d n 2 ) .
If $\tau=O(N^{-1})$, upon substituting Equations (118)–(120) into Equation (114), we obtain
  1 2 ( ρ d n 2 ρ d n 1 2 ) + a 2 ε 2 ( ρ d n 2 ρ d n 1 2 ) + ( | E S d n | 2 | E S d n 1 | 2 ) C τ ( ξ d n 1 2 + ξ d n 2 2 + | E S d n | 2 + | E S d n 1 | 2 + ρ d n 2 + ρ d n 1 2 + ρ d n 2 2 + ρ d n 2 + ρ d n 1 2 + ρ d n 2 2 ) + C τ ( τ 4 + N 2 2 m ) .
By summing (121) for $n=L+1,\ldots,M$ and using Lemma 7 and Equations (103) and (119), we have
$$\frac12\|\nabla\rho_d^n\|^{2}+\frac{a}{2\varepsilon^{2}}\|\rho_d^n\|^{2}+|E_{S_d}^n|^{2}\le C\tau\sum_{i=L+1}^{n}\Big(|E_{S_d}^i|^{2}+\|\nabla\rho_d^i\|^{2}+\|\rho_d^i\|^{2}\Big)+C\tau(M-L)\big(\tau^{4}+N^{2-2m}\big)+CN^{-1}L\sum_{j=d+1}^{l}\lambda_j.$$
Employing the Gronwall lemma, we derive
$$\frac12\|\nabla\rho_d^n\|^{2}+\frac{a}{2\varepsilon^{2}}\|\rho_d^n\|^{2}+|E_{S_d}^n|^{2}\le C\tau(M-L)\big(\tau^{4}+N^{2-2m}\big)+CN^{-1}L\sum_{j=d+1}^{l}\lambda_j.$$
Corollary 2.
Let $(\{U_d^n\},\{S_d^n\})$ be the solution of (96); then the following error estimate holds:
$$\|\nabla(u(t_n)-U_d^n)\|+\frac{\sqrt{a}}{\varepsilon}\|u(t_n)-U_d^n\|+\sqrt{2}\,|s^{n}-S_d^{n}|\le C\big(\delta(M)+1\big)\big(\tau^{2}+N^{1-m}\big)+C\Big(N^{-1}L\sum_{j=d+1}^{l}\lambda_j\Big)^{\frac12},$$
where δ ( M ) = 0 for 1 n L , and δ ( M ) = N 1 ( M L ) for L + 1 n M .
Remark 2.
The error bounds derived in Theorem 3 and Corollary 2 inform the choice of the number of POD modes d. Specifically, d should satisfy
$$N^{-1}L\sum_{j=d+1}^{l}\lambda_j=O\big(N^{2-2m},\tau^{4}\big),$$
where λ j denotes the POD eigenvalues. The iterative error, quantified by N 1 ( M L ) ( N 1 m + τ 2 ) , dictates the need for POD basis updates during extrapolation. Specifically, a POD basis update is required when
$$N^{-1}(M-L)\big(N^{1-m}+\tau^{2}\big)>1.$$
Remark 3.
We remark that while the numerical scheme and the accompanying analysis presented in this work are developed directly for the one-dimensional spatial case, they can be naturally extended to higher-dimensional settings. In fact, the efficiency gain of the reduced-order algorithm, when compared to its non-reduced counterpart, becomes even more pronounced in multi-dimensional problems. The one-dimensional framework is adopted primarily for clarity of exposition, as it allows for a more transparent illustration of the algorithmic construction and a streamlined theoretical analysis, without loss of generality.

5. Numerical Tests

In this section, we verify the convergence rates of the ROLGE-SFTR-SAV scheme and demonstrate its superior efficiency relative to the LG-SFTR-SAV scheme. The temporal convergence rate for the variable u is obtained by the formula
$$\mathrm{Rate}=\log_{2}\frac{\max_{0\le n\le M}\|u(t_n)-U_N^n\|}{\max_{0\le n\le 2M}\|u(t_n)-U_N^n\|},$$
by which the rates for $\nabla u$ and $s$ can also be derived similarly. Since the explicit form of the solution of (1) is unknown, we take the manufactured solution $u(x,t)=\big(1-\tfrac12t^{3}\big)\sin(\ell\pi x)$ with $\ell\in\mathbb{Z}^{+}$ and rewrite (1) by adding the source term
$$g(x,t)=-\frac{(t^{3}-2)^{3}}{8\varepsilon^{2}}\sin^{3}(\ell\pi x)+\frac{t^{3}-2}{2}\Big(\frac{1}{\varepsilon^{2}}-\ell^{2}\pi^{2}\Big)\sin(\ell\pi x)-\frac{3t^{3-\alpha}}{\Gamma(4-\alpha)}\sin(\ell\pi x).$$
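As a sanity check on the rate formula, synthetic errors with an exact $C\tau^{2}$ scaling reproduce a computed rate of 2; the constant and the step counts below are fabricated for illustration only:

```python
import math

# Synthetic max-norm errors obeying err(M) = C * tau^2 with tau = 1/M.
C = 0.7
errs = {M: C * (1.0 / M) ** 2 for M in (1024, 2048, 4096)}

# Rate = log2( err with M steps / err with 2M steps )
rates = [math.log2(errs[M] / errs[2 * M]) for M in (1024, 2048)]
print(rates)  # → [2.0, 2.0]
```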
We illustrate the fast decay of the eigenvalues $\lambda_j$ in Figure 1 (left) for $\tau=2^{-8},2^{-9},2^{-10},2^{-11},2^{-12}$ with fixed $\alpha=0.5$, and in Figure 1 (right) for $\alpha=0.1,0.3,0.5,0.7,0.9$ with fixed $\tau=2^{-10}$, under the setting $N=256$, $\ell=20$, $L=20$. It is clear that only the first several $\lambda_j$ are sufficient to capture the essential properties. For the following tests, we set $d=5$. While smaller values of $d$ can yield acceptable results, the choice $d=5$ provides a more robust solution.
Figure 1. The fast decay of the eigenvalues $\lambda_j$ for $\tau=2^{-8},2^{-9},2^{-10},2^{-11},2^{-12}$ with fixed $\alpha=0.5$ (left) and for $\alpha=0.1,0.3,0.5,0.7,0.9$ with fixed $\tau=2^{-10}$ (right), under the setting $N=256$, $\ell=20$, $L=20$.
In Table 1 and Table 2, we set $\varepsilon=1$, $S=4$, $T=1$, $\ell=40$, and fix $N=512$. Both tables present the temporal convergence behavior and computational efficiency of the LG-SFTR-SAV and ROLGE-SFTR-SAV schemes, respectively, for varying values of the fractional order parameter $\alpha=0.1,0.5,0.9$. The time step is successively refined as $\tau=2^{-10},2^{-11},2^{-12}$, and the corresponding $L^{2}$-norm errors of the solution, its gradient, and the auxiliary variable $s$ are reported, along with their convergence rates and CPU times.
Table 1. The temporal convergence rates and CPU time of the LG-SFTR-SAV scheme.
Table 2. The temporal convergence rates and CPU time of the ROLGE-SFTR-SAV scheme.
Both schemes exhibit a clear second-order temporal convergence across all error measures, as indicated by the consistent rate value of 2.00. In terms of computational cost, the ROLGE-SFTR-SAV scheme shows a significant reduction in CPU time compared to the LG-SFTR-SAV scheme, while maintaining nearly identical accuracy and convergence rates. For instance, at τ = 2 10 , the CPU time for the ROLGE variant is approximately 2.5 s, compared to about 4.3 s for the LG variant—a reduction of over 40 % . This efficiency gain is consistent across all time steps and values of α , underscoring the advantage of the reduced-order approach.
In Figure 2 we investigate the spatial discretization errors under the setting $\ell=10$, $M=2^{14}$, $L=20$, $d=5$. The errors for both the LG-SFTR-SAV and ROLGE-SFTR-SAV schemes demonstrate spectral accuracy, decreasing rapidly with increasing spatial resolution $N$. Both schemes exhibit nearly identical error levels and convergence rates, confirming that the reduced-order approach preserves the excellent spatial accuracy of the spectral method while maintaining computational efficiency.
Figure 2. The spectral accuracy of the LG-SFTR-SAV scheme (left) and the ROLGE-SFTR-SAV scheme (right), under the setting $\ell=10$, $M=2^{14}$, $L=20$, $d=5$.
These results confirm that the ROLGE-SFTR-SAV scheme preserves the high-order accuracy of the original LG-SFTR-SAV method while offering enhanced computational efficiency, making it a preferable choice for long-time simulations or problems requiring repeated evaluations.

6. Conclusions

This study has presented a novel and efficient numerical framework for solving the tFAC equation. By combining the SAV method for linearization, the SFTR for temporal discretization, and an LG spectral method in space, we developed a fully discrete scheme that is both energy-stable and computationally efficient. To further reduce the computational cost, a reduced-order extrapolation (ROE) model based on POD was introduced, effectively minimizing redundant computations while preserving numerical accuracy. Theoretical proofs of stability and error estimates were provided, confirming the reliability of the proposed approach. Numerical experiments demonstrated the validity of the method and aligned well with the theoretical predictions. This work offers a practical and theoretical foundation for simulating time-fractional phase-field models over long time intervals.

Author Contributions

Conceptualization, C.H. and H.L.; methodology, C.H.; numerical simulation, B.Y. and C.H.; formal analysis, C.H. and B.Y.; writing—original draft preparation, C.H.; validation, C.H. and B.Y.; writing—review, B.Y. and H.L.; supervision, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 12561068 to H.L. and No. 12201322 to B.Y.), the Key Project of the Natural Science Foundation of Inner Mongolia Autonomous Region (No. 2025ZD036 to H.L.), the Inner Mongolia Autonomous Region Science and Technology Program Project (No. 2025KYPT0098 to H.L.), and the Autonomous Region Level High-Level Talent Introduction Research Support Program in 2022 (No. 12000-15042224 to B.Y.).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the reviewers and editors for their invaluable comments, which greatly refined the content of this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
POD: proper orthogonal decomposition
ROLGE: reduced-order Legendre–Galerkin extrapolation
ROLGE-SFTR-SAV: reduced-order LG extrapolation SFTR-SAV model

References

  1. Sasso, M.; Palmieri, G.; Amodio, D. Application of fractional derivative models in linear viscoelastic problems. Mech. Time-Depend. Mat. 2011, 15, 367–387. [Google Scholar]
  2. Zeng, H.; Xie, Z.; Gu, J.; Sun, H. A 1D thermomechanical network transition constitutive model coupled with multiple structural relaxation for shape memory polymers. Smart Mater. Struct. 2018, 27, 035024. [Google Scholar] [CrossRef]
  3. Liao, G.-K.; Long, Z.; Xu, F.; Liu, W.; Zhang, Z.-Y.; Yang, M. Investigation on the viscoelastic behavior of an fe-base bulk amorphous alloys based on the fractional order rheological model. Acta Phys. Sin. 2015, 64, 136101. [Google Scholar] [CrossRef]
  4. Zhou, H.; Yang, S. Fractional derivative approach to non-Darcian flow in porous media. J. Hydrol. 2018, 566, 910–918. [Google Scholar] [CrossRef]
  5. Linchao, L.; Xiao, Y. Analysis of vertical vibrations of a pile in saturated soil described by fractional derivative model. Rock Soil Mech. 2011, 32, 526–532. [Google Scholar]
  6. Li, B.; Liu, F. Boundary layer flows of viscoelastic fluids over a non-uniform permeable surface. Comput. Math. Appl. 2020, 79, 2376–2387. [Google Scholar] [CrossRef]
  7. Zeng, H.; Leng, J.; Gu, J.; Yin, C.; Sun, H. Modeling the strain rate-, hold time-, and temperature-dependent cyclic behaviors of amorphous shape memory polymers. Smart Mater. Struct. 2018, 27, 075050. [Google Scholar] [CrossRef]
  8. Guo, Y.; Azaïez, M.; Xu, C. Error analysis of a reduced order method for the Allen-Cahn equation. Appl. Numer. Math. 2024, 203, 186–201. [Google Scholar] [CrossRef]
  9. Acosta, G.; Bersetche, F.M. Numerical approximations for a fully fractional Allen–Cahn equation. Math. Model. Numer. Anal. 2021, 55, S3–S28. [Google Scholar] [CrossRef]
  10. Du, Q.; Yang, J.; Zhou, Z. Time-fractional Allen–Cahn equations: Analysis and numerical methods. J. Sci. Comput. 2020, 85, 42. [Google Scholar]
  11. Khalid, N.; Abbas, M.; Iqbal, M.K.; Baleanu, D. A numerical investigation of Caputo time fractional Allen–Cahn equation using redefined cubic B-spline functions. Adv. Differ. Equ. 2020, 2020, 158. [Google Scholar]
  12. Ji, B.; Liao, H.l.; Zhang, L. Simple maximum principle preserving time-stepping methods for time-fractional Allen-Cahn equation. Adv. Comput. Math. 2020, 46, 37. [Google Scholar]
  13. Liao, H.l.; Tang, T.; Zhou, T. An energy stable and maximum bound preserving scheme with variable time steps for time fractional Allen–Cahn equation. SIAM J. Sci. Comput. 2021, 43, A3503–A3526. [Google Scholar]
  14. Jiang, H.; Hu, D.; Huang, H.; Liu, H. Linearly Implicit Schemes Preserve the Maximum Bound Principle and Energy Dissipation for the Time-fractional Allen–Cahn Equation. J. Sci. Comput. 2024, 101, 25. [Google Scholar]
  15. Chen, H.; Sun, H.W. A dimensional splitting exponential time differencing scheme for multidimensional fractional Allen-Cahn equations. J. Sci. Comput. 2021, 87, 30. [Google Scholar] [CrossRef]
  16. Hou, D.; Zhu, H.; Xu, C. Highly efficient schemes for time-fractional Allen-Cahn equation using extended SAV approach. Numer. Algorithms 2021, 88, 1077–1108. [Google Scholar]
  17. Yu, Y.; Zhang, J.; Qin, R. The exponential SAV approach for the time-fractional Allen–Cahn and Cahn–Hilliard phase-field models. J. Sci. Comput. 2023, 94, 33. [Google Scholar]
  18. Zhang, H.; Jiang, X. A high-efficiency second-order numerical scheme for time-fractional phase field models by using extended SAV method. Nonlinear Dyn. 2020, 102, 589–603. [Google Scholar] [CrossRef]
  19. Shen, J.; Xu, J.; Yang, J. The scalar auxiliary variable (SAV) approach for gradient flows. J. Comput. Phys. 2018, 353, 407–416. [Google Scholar] [CrossRef]
  20. Zhang, G.; Huang, C.; Alikhanov, A.A.; Yin, B. A high-order discrete energy decay and maximum-principle preserving scheme for time fractional Allen–Cahn equation. J. Sci. Comput. 2023, 96, 39. [Google Scholar]
  21. Yin, B.; Liu, Y.; Li, H.; Zhang, Z. Efficient shifted fractional trapezoidal rule for subdiffusion problems with nonsmooth solutions on uniform meshes. Bit Numer. Math. 2022, 62, 631–666. [Google Scholar] [CrossRef]
  22. Yin, B.; Liu, Y.; Li, H. Necessity of introducing non-integer shifted parameters by constructing high accuracy finite difference algorithms for a two-sided space-fractional advection–diffusion model. Appl. Math. Lett. 2020, 105, 106347. [Google Scholar] [CrossRef]
  23. Volkwein, S. Proper Orthogonal Decomposition: Theory and Reduced-Order Modelling; Lecture Notes; University of Konstanz: Konstanz, Germany, 2013; Volume 4, pp. 1–29. [Google Scholar]
  24. Li, H.; Song, Z. A reduced-order energy-stability-preserving finite difference iterative scheme based on POD for the Allen-Cahn equation. J. Math. Anal. Appl. 2020, 491, 124245. [Google Scholar] [CrossRef]
  25. Li, H.; Song, Z. A reduced-order finite element method based on proper orthogonal decomposition for the Allen-Cahn model. J. Math. Anal. Appl. 2021, 500, 125103. [Google Scholar]
  26. Li, H.; Wang, D.; Song, Z.; Zhang, F. Numerical analysis of an unconditionally energy-stable reduced-order finite element method for the Allen-Cahn phase field model. Comput. Math. Appl. 2021, 96, 67–76. [Google Scholar]
  27. Song, H.; Jiang, L.; Li, Q. A reduced order method for Allen–Cahn equations. J. Comput. Appl. Math. 2016, 292, 213–229. [Google Scholar] [CrossRef]
  28. Zhou, X.; Azaiez, M.; Xu, C. Reduced-order modelling for the Allen-Cahn equation based on scalar auxiliary variable approaches. J. Math. Study 2019, 52, 258–276. [Google Scholar] [CrossRef]
  29. Luo, Z.; Chen, G. Proper Orthogonal Decomposition Methods for Partial Differential Equations; Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
  30. Luo, Z. A reduced-order extrapolation algorithm based on SFVE method and POD technique for non-stationary Stokes equations. Appl. Math. Comput. 2014, 247, 976–995. [Google Scholar]
  31. Luo, Z. A POD-based reduced-order TSCFE extrapolation iterative format for two-dimensional heat equations. Bound. Value Probl. 2015, 2015, 1–15. [Google Scholar]
  32. Xia, H.; Luo, Z. An optimized finite element extrapolating method for 2D viscoelastic wave equation. J. Inequal. Appl. 2017, 2017, 218. [Google Scholar]
  33. Luo, Z.; Teng, F. An optimized SPDMFE extrapolation approach based on the POD technique for 2D viscoelastic wave equation. Bound. Value Probl. 2017, 2017, 6. [Google Scholar] [CrossRef]
  34. Luo, Z.; Teng, F. Reduced-order proper orthogonal decomposition extrapolating finite volume element format for two-dimensional hyperbolic equations. J. Comput. Appl. Math. 2017, 38, 289–310. [Google Scholar] [CrossRef]
  35. Teng, F.; Luo, Z.; Yang, J. A reduced-order extrapolated natural boundary element method based on POD for the 2D hyperbolic equation in unbounded domain. Math. Methods Appl. Sci. 2019, 42, 4273–4291. [Google Scholar] [CrossRef]
  36. Luo, Z.; Teng, F.; Chen, J. A POD-based reduced-order Crank–Nicolson finite volume element extrapolating algorithm for 2D Sobolev equations. Math. Comput. Simul. 2018, 146, 118–133. [Google Scholar] [CrossRef]
  37. Luo, Z. Proper orthogonal decomposition-based reduced-order stabilized mixed finite volume element extrapolating model for the nonstationary incompressible Boussinesq equations. J. Math. Anal. Appl. 2015, 425, 259–280. [Google Scholar] [CrossRef]
  38. Gunzburger, M.; Jiang, N.; Schneier, M. An ensemble-proper orthogonal decomposition method for the nonstationary Navier–Stokes equations. J. Numer. Anal. 2017, 55, 286–304. [Google Scholar] [CrossRef]
  39. Adams, R.A.; Fournier, J.J. Sobolev Spaces; Elsevier: Amsterdam, The Netherlands, 2003; Volume 140. [Google Scholar]
  40. Lubich, C. Discretized fractional calculus. SIAM J. Math. Anal. 1986, 17, 704–719. [Google Scholar] [CrossRef]
  41. Shen, J.; Xu, J. Convergence and error analysis for the scalar auxiliary variable (SAV) schemes to gradient flows. J. Numer. Anal. 2018, 56, 2895–2912. [Google Scholar] [CrossRef]
  42. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; Volume 41. [Google Scholar]
  43. Kunisch, K.; Volkwein, S. Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. J. Numer. Anal. 2002, 40, 492–515. [Google Scholar] [CrossRef]
  44. Huang, C.; Li, H.; Yin, B. Reduced-order Legendre-Galerkin extrapolation method based on proper orthogonal decomposition for Allen-Cahn equation. J. Appl. Math. Comput. 2025. submitted. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
