
Tensor Conjugate Gradient Methods with Automatic Determination of Regularization Parameters for Ill-Posed Problems with t-Product

Shi-Wei Wang, Guang-Xin Huang and Feng Yin
1 Geomathematics Key Laboratory of Sichuan, College of Mathematics and Physics, Chengdu University of Technology, Chengdu 610059, China
2 College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu 610059, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(1), 159; https://doi.org/10.3390/math12010159
Submission received: 24 October 2023 / Revised: 18 November 2023 / Accepted: 27 December 2023 / Published: 3 January 2024

Abstract: Ill-posed problems arise in many areas of science and engineering. Tikhonov regularization is a common approach that replaces the original problem with a minimization problem containing a fidelity term and a regularization term. In this paper, a tensor Conjugate Gradient (tCG) method that preserves the t-product structure is presented to solve the regularized minimization problem. We provide a truncated version of the tCG method and a preconditioned version of the truncated method, and the discrepancy principle is used to determine the regularization parameter automatically. Several examples on image and video recovery are given to show the effectiveness of the proposed methods by comparing them with some previous algorithms.

1. Introduction

Tensors are high-dimensional arrays that have many applications in science and engineering, including image, video and signal processing, computer vision, and network analysis [1,2,3,4,5,6,7,8]. A new t-product for third-order tensors was proposed by Kilmer et al. [9,10]. For high-dimensional data, the t-product shows greater potential than matricization; see [1,2,10,11,12,13,14,15,16]. Compared to other products, the t-product preserves the inherent natural order and the higher-order correlations embedded in the data, avoiding the loss of intrinsic information during flattening of the tensor; see [10]. The t-product has proved especially valuable in many application fields, including image deblurring [1,2,9,11], image and video compression [8], and facial recognition [10].
The t-product is widely used in image and video restoration problems; see, e.g., [1,2,9]. In this paper, we consider the solution of large minimization problems of the form
$$\min_{\mathcal{X}\in\mathbb{R}^{m\times 1\times n}}\ \|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F,\qquad \mathcal{A}=[a_{ijk}]_{i,j,k=1}^{l,m,n}\in\mathbb{R}^{l\times m\times n},\quad \mathcal{B}\in\mathbb{R}^{l\times 1\times n}. \tag{1}$$
The tubal rank of tensor $\mathcal{A}$ is difficult to determine: many of its singular tubes are nonzero but have tiny Frobenius norms of different orders of magnitude, and these norms decay rapidly to zero as the index increases. Problems of the form (1) with this property are called tensor discrete linear ill-posed problems.
We assume that $\mathcal{B}\in\mathbb{R}^{m\times 1\times n}$ is derived from an unknown, error-free tensor $\mathcal{B}_{\mathrm{true}}$ contaminated by noise $\mathcal{E}\in\mathbb{R}^{m\times 1\times n}$,
$$\mathcal{B}=\mathcal{B}_{\mathrm{true}}+\mathcal{E}. \tag{2}$$
We let $\mathcal{X}_{\mathrm{true}}$ denote the exact solution of Problem (1) to be found, with $\mathcal{A}*\mathcal{X}_{\mathrm{true}}=\mathcal{B}_{\mathrm{true}}$. We assume that an upper bound for the Frobenius norm of $\mathcal{E}$ is known,
$$\|\mathcal{E}\|_F\le\delta. \tag{3}$$
A straightforward solution of (1) is usually a meaningless approximation of $\mathcal{X}_{\mathrm{true}}$ because of the ill-posedness of $\mathcal{A}=[a_{ijk}]_{i,j,k=1}^{l,m,n}$ and because the error $\mathcal{E}$ is severely amplified. The Tikhonov regularization method is a mathematical approach proposed by Tikhonov [17] to address ill-posed problems. This method introduces a regularization term into the objective function, modeling properties of the solution based on prior information; this constrains the solution space and enhances the stability of the problem. Therefore, we apply Tikhonov regularization to Problems (1) and instead solve problems of the form
$$\min_{\mathcal{X}\in\mathbb{R}^{m\times 1\times n}}\ \|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F^2+\mu\|\mathcal{X}\|_F^2, \tag{4}$$
where μ is a regularization parameter. We refer to (4) as the tensor penalty least-squares problems. We assume that
$$\mathcal{N}(\mathcal{A})\cap\mathcal{N}(\mathcal{I})=\{\mathcal{O}\}, \tag{5}$$
where N ( A ) denotes the null space of A , I is the identity tensor and O R m × 1 × n is a lateral slice whose elements are all zero. The normal equation of (4) is represented by
$$(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{X}=\mathcal{A}^T*\mathcal{B}, \tag{6}$$
and under the condition given in (5), it admits a unique solution
$$\mathcal{X}_\mu=(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})^{-1}*\mathcal{A}^T*\mathcal{B}. \tag{7}$$
There are many techniques to determine regularization parameter μ , such as the L-curve criterion, generalized cross-validation (GCV), and the discrepancy principle. We refer to [18,19,20,21,22] for more details. In this paper, the discrepancy principle is extended to tensors based on t-product and is employed to determine a suitable μ in (4). The solution X μ of (4) satisfies
$$\|\mathcal{A}*\mathcal{X}_\mu-\mathcal{B}\|_F\le\eta\delta, \tag{8}$$
where $\eta>1$ is a user-specified constant independent of $\delta$ in (3). When $\|\mathcal{E}\|_F$ is small enough and $\delta$ approaches 0, the solution $\mathcal{X}_\mu$ determined in this way converges to $\mathcal{X}_{\mathrm{true}}$. For more details regarding the discrepancy principle, please refer to [23].
In this paper, we additionally explore the extension of the minimization problems represented by (1), where the formulation takes the form
$$\min_{\mathcal{X}\in\mathbb{R}^{m\times p\times n}}\ \|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F^2+\mu\|\mathcal{X}\|_F^2, \tag{9}$$
where B R m × p × n , p > 1 .
In the recent literature addressing the discrete ill-posed Problems (1), the prevailing methodologies predominantly feature Tikhonov regularization and truncated singular value methods. Reichel et al. addressed Problems (4) through a subspace construction technique, mitigating the inherent challenges of large-scale regularization problems by transforming them into more tractable smaller-scale formulations. As a result, they introduced the tensor Arnoldi–Tikhonov (tAT) and GMRES-type (tGMRES) methods [2] and the tensor Golub–Kahan–Tikhonov (tGKT) method [1]. The truncated tensor singular value decomposition (T-tSVD) was introduced by Kilmer et al. in [9]. Zhang et al. introduced the randomized tensor singular value decomposition (rt-SVD) in [24]; this method exhibits notable advantages in handling large-scale datasets and holds significant potential for image data compression and analysis. Ugwu and Reichel [25] proposed a new randomized tensor singular value decomposition (R-tSVD), which improves on T-tSVD.
The conjugate gradient (CG) method, initially proposed in [26], is well suited for large-scale problems, particularly when many iterations are required. Compared to alternative methods, it demonstrates a relatively fast convergence rate, and its low memory requirements make it suitable for handling large datasets and high-dimensional problems efficiently. Detailed discussions of this approach can be found in [26,27,28,29,30]. Song et al. [31] proposed a tensor conjugate gradient method with automatic parameter determination in the Fourier domain (A-tCG-FFT). The A-tCG-FFT method projects Problems (1) into the Fourier domain and applies a structure-preserving matrix CG method there; the solution it obtains is of higher quality than that obtained by directly matricizing or vectorizing the data. The tensor Conjugate Gradient (tCG) method proposed by Kilmer et al. [10] is employed to address Problem (4), where the regularization parameter is user specified. In this article, we extend the tCG method from [10] and utilize the discrepancy principle for automatic parameter estimation. The proposed method is called the tCG method with automatic determination of regularization parameters (auto-tCG). We also present a truncated auto-tCG method (auto-ttCG), which improves the auto-tCG method by reducing computation. Finally, a preconditioned version of the auto-ttCG method is proposed, abbreviated as auto-ttpCG. We remark that the auto-tCG, auto-ttCG, and auto-ttpCG methods differ substantially from the methods in [31] because they do not project Problems (1) into the Fourier domain and maintain the t-product structure of the tensors during the iteration process.
The remainder of this manuscript is structured as follows: Section 2 provides an introduction to relevant symbols and foundational concepts essential for the ensuing discussion. In Section 3, we expound upon the auto-tCG, auto-ttCG, and auto-ttpCG methodologies designed to address minimization Problems (4) and (9). Subsequently, Section 4 illustrates various examples pertaining to image and video restoration, while Section 5 encapsulates concluding remarks.

2. Preliminaries

This section provides notations and definitions, offering a concise overview of relevant results that are subsequently applied in the ensuing discourse. Figure 1 illustrates the frontal slices ( A ( : , : , k ) ), lateral slices ( A ( : , j , : ) ), and tube fibers ( A ( i , j , : ) ).
In this manuscript, we employ two operators, unfold and fold. The operator unfold unfolds a tensor into a matrix of dimensions $ln\times m$, while fold is the inverse of unfold, folding the matrix back into its original third-order tensor form. For clarity of exposition, we denote by $A_k$ the $k$th frontal slice of the third-order tensor $\mathcal{A}\in\mathbb{R}^{l\times m\times n}$, i.e., $A_k=\mathcal{A}(:,:,k)$. We have
$$\mathrm{unfold}(\mathcal{A})=\begin{bmatrix}A_1\\ A_2\\ \vdots\\ A_n\end{bmatrix},\qquad \mathrm{fold}(\mathrm{unfold}(\mathcal{A}))=\mathcal{A}.$$
The forthcoming definitions and remarks, introduced herein, are utilized in the subsequent theoretical proofs.
Definition 1.
Assuming A is a third-order tensor, the block-circulant matrix bcirc ( A ) is defined as follows:
$$\mathrm{bcirc}(\mathcal{A})=\begin{bmatrix}A_1 & A_n & A_{n-1} & \cdots & A_2\\ A_2 & A_1 & A_n & \cdots & A_3\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ A_n & A_{n-1} & \cdots & A_2 & A_1\end{bmatrix}.$$
Definition 2
([9]). Given two tensors A R l × m × n and B R m × p × n , the t-product A B is defined as
$$\mathcal{A}*\mathcal{B}=\mathrm{fold}\big(\mathrm{bcirc}(\mathcal{A})\,\mathrm{unfold}(\mathcal{B})\big)=\mathcal{C}, \tag{10}$$
where C R l × p × n .
Remark 1
([32]). For any two tensors A and B for which the t-product is defined, they satisfy
(1). $\mathrm{bcirc}(\mathcal{A}*\mathcal{B})=\mathrm{bcirc}(\mathcal{A})\,\mathrm{bcirc}(\mathcal{B})$.
(2). $\mathrm{bcirc}(\mathcal{A}+\mathcal{B})=\mathrm{bcirc}(\mathcal{A})+\mathrm{bcirc}(\mathcal{B})$.
(3). $\mathrm{bcirc}(\mathcal{A}^T)=\mathrm{bcirc}(\mathcal{A})^T$.
We define the tensor $\hat{\mathcal{A}}$ obtained by applying the Fast Fourier Transform (FFT) along each tube of $\mathcal{A}$, i.e.,
$$\mathrm{bdiag}(\hat{\mathcal{A}})=\begin{bmatrix}\hat{A}_1 & & & \\ & \hat{A}_2 & & \\ & & \ddots & \\ & & & \hat{A}_n\end{bmatrix}=(F_n\otimes I_l)\,\mathrm{bcirc}(\mathcal{A})\,(F_n^*\otimes I_m),$$
where $\otimes$ is the Kronecker product, and the matrix $F_n^*$ represents the conjugate transpose of the $n$-by-$n$ unitary discrete Fourier transform matrix $F_n$. The structure of $F_n$ is defined as follows:
$$F_n=\frac{1}{\sqrt{n}}\begin{bmatrix}1 & 1 & 1 & \cdots & 1\\ 1 & \omega & \omega^2 & \cdots & \omega^{n-1}\\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(n-1)}\\ \vdots & \vdots & \vdots & & \vdots\\ 1 & \omega^{n-1} & \omega^{2(n-1)} & \cdots & \omega^{(n-1)(n-1)}\end{bmatrix},$$
where $\omega=e^{-2\pi i/n}$. Thus, the t-product in (10) can be represented as
$$\mathcal{A}*\mathcal{B}=\mathrm{fold}\Big((F_n^*\otimes I_l)\,(F_n\otimes I_l)\,\mathrm{bcirc}(\mathcal{A})\,(F_n^*\otimes I_m)\,(F_n\otimes I_m)\,\mathrm{unfold}(\mathcal{B})\Big),$$
and (10) is reformulated slice-wise as
$$\begin{bmatrix}\hat{A}_1 & & \\ & \ddots & \\ & & \hat{A}_n\end{bmatrix}\begin{bmatrix}\hat{B}_1\\ \vdots\\ \hat{B}_n\end{bmatrix}=\begin{bmatrix}\hat{C}_1\\ \vdots\\ \hat{C}_n\end{bmatrix},\qquad \text{i.e.,}\quad \hat{A}_k\hat{B}_k=\hat{C}_k,\quad k=1,\dots,n. \tag{12}$$
It is easy to calculate (12) in MATLAB.
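For illustration, the following is a minimal MATLAB sketch of this slice-wise computation (the function name tprod is our own choice and is not part of the paper's code):

function C = tprod(A, B)
% t-product C = A * B of A (l x m x n) and B (m x p x n), computed
% slice-wise in the Fourier domain as in (12); a sketch, not a reference
% implementation.
n    = size(A, 3);
Ahat = fft(A, [], 3);                       % DFT along the third mode
Bhat = fft(B, [], 3);
Chat = zeros(size(A, 1), size(B, 2), n);
for k = 1:n
    Chat(:, :, k) = Ahat(:, :, k) * Bhat(:, :, k);   % frontal-slice products
end
C = real(ifft(Chat, [], 3));                % back to the spatial domain
end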
For a non-zero tensor X R m × 1 × n , we can decompose it in the form
$$\mathcal{X}=\mathcal{D}*\mathbf{d}, \tag{14}$$
where D R m × 1 × n is a normalized tensor; see, e.g., ref. [11], and d R 1 × 1 × n is a tube scalar. Algorithm 1 summarizes the decomposition in (14).
Algorithm 1 Normalization
1: Input: X ∈ R^{m×1×n} is a nonzero tensor
2: Output: D, d with X = D*d, ||D|| = 1
3: D ← fft(X, [ ], 3)
4: for j = 1, 2, ..., n do
5:    d_j ← ||D_j||_2 (D_j is a vector)
6:    if d_j > tol then
7:       D_j ← (1/d_j) D_j
8:    else
9:       D_j ← randn(m, 1); d_j ← ||D_j||_2; D_j ← (1/d_j) D_j; d_j ← 0
10:   end if
11: end for
12: D ← ifft(D, [ ], 3); d ← ifft(d, [ ], 3)
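For reference, a runnable MATLAB sketch of Algorithm 1 follows (the function name tnormalize and the explicit tol argument are our own choices; for a real input whose Fourier-domain slices are all nonzero, the returned D and d are real):

function [D, d] = tnormalize(X, tol)
% Decompose a nonzero tensor column X (m x 1 x n) as X = D * d with ||D|| = 1
% (sketch of Algorithm 1).
[m, ~, n] = size(X);
Dhat = fft(X, [], 3);
dhat = zeros(1, 1, n);
for j = 1:n
    dj = norm(Dhat(:, 1, j));
    if dj > tol
        Dhat(:, 1, j) = Dhat(:, 1, j) / dj;
    else                                  % degenerate slice: use a random direction
        Dhat(:, 1, j) = randn(m, 1);
        Dhat(:, 1, j) = Dhat(:, 1, j) / norm(Dhat(:, 1, j));
        dj = 0;
    end
    dhat(1, 1, j) = dj;
end
D = ifft(Dhat, [], 3);
d = ifft(dhat, [], 3);
end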
Given tensor A R l × m × n , the singular value decomposition (tSVD) of A is expressed as
$$\mathcal{A}=\mathcal{U}*\mathcal{S}*\mathcal{V}^T,$$
where U R l × l × n and V R m × m × n are orthogonal under the t-product;
$$\mathcal{S}=\mathrm{diag}[\mathbf{s}_1,\mathbf{s}_2,\dots,\mathbf{s}_{\min\{l,m\}}]\in\mathbb{R}^{l\times m\times n}$$
is an f-diagonal tensor whose singular tubes $\mathbf{s}_j$ satisfy
$$\|\mathbf{s}_1\|_F\ge\|\mathbf{s}_2\|_F\ge\cdots\ge\|\mathbf{s}_{\min\{l,m\}}\|_F.$$
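Like the t-product, the tSVD can be computed slice-wise in the Fourier domain; the following MATLAB sketch (the name tsvd is ours) illustrates this:

function [U, S, V] = tsvd(A)
% Tensor SVD A = U * S * V^T via matrix SVDs of the Fourier-domain slices
% (a sketch under the conventions of the tprod example above).
[l, m, n] = size(A);
Ahat = fft(A, [], 3);
Uhat = zeros(l, l, n); Shat = zeros(l, m, n); Vhat = zeros(m, m, n);
for k = 1:n
    [Uhat(:, :, k), Shat(:, :, k), Vhat(:, :, k)] = svd(Ahat(:, :, k));
end
U = real(ifft(Uhat, [], 3));
S = real(ifft(Shat, [], 3));
V = real(ifft(Vhat, [], 3));
end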
Algorithm 2 presents the tensor Conjugate Gradient (tCG) method from [11] for solving the Tikhonov-regularized least-squares Problem (4) associated with the tensor linear system (1).
Algorithm 2 The tCG method for solving (4)
1: Input: A ∈ R^{m×m×n}, B ∈ R^{m×1×n}, μ.
2: Output: Approximate solution X* of Problem (4).
3: X_0 = O, i = 0.
4: [R_0, a] ← Normalize(A^T*B − (A^T*A + μI)*X_0); P_0 ← R_0.
5: for i = 1, 2, ... until σ < tol do
6:    c = (P_{i−1}^T*(A^T*A + μI)*P_{i−1})^{−1}*(R_{i−1}^T*R_{i−1}).
7:    X_i = X_{i−1} + P_{i−1}*c.
8:    R_i = R_{i−1} − (A^T*A + μI)*P_{i−1}*c.
9:    σ = | ||R_i||_F − ||R_{i−1}||_F |.
10:   d = (R_{i−1}^T*R_{i−1})^{−1}*(R_i^T*R_i).
11:   P_i = R_i + P_{i−1}*d.
12: end for
13: X* = X_i*a
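The following MATLAB sketch mirrors the iteration of Algorithm 2 for a fixed μ. It relies on the tprod and tnormalize sketches above and on two small helpers defined below (ttrans for the tensor transpose and tubeinv for the tube inverse); it is an illustration of the scheme, not the authors' implementation.

function X = tcg_fixed_mu(A, B, mu, tol, maxit)
% Sketch of Algorithm 2: CG on (A^T*A + mu*I)*X = A^T*B under the t-product.
At = ttrans(A);
op = @(Z) tprod(At, tprod(A, Z)) + mu * Z;        % (A^T*A + mu*I)*Z
X  = zeros(size(B));
[R, a] = tnormalize(tprod(At, B) - op(X), 1e-12);
P = R;
for i = 1:maxit
    Q = op(P);
    c = tprod(tubeinv(tprod(ttrans(P), Q)), tprod(ttrans(R), R));
    X = X + tprod(P, c);
    Rnew = R - tprod(Q, c);
    if abs(norm(Rnew(:)) - norm(R(:))) < tol
        break
    end
    d = tprod(tubeinv(tprod(ttrans(R), R)), tprod(ttrans(Rnew), Rnew));
    P = Rnew + tprod(P, d);
    R = Rnew;
end
X = tprod(X, a);                                   % undo the normalization of R_0
end

function t = tubeinv(t)
% Elementwise inverse of a 1 x 1 x n tube, computed in the Fourier domain.
t = ifft(1 ./ fft(t, [], 3), [], 3);
end

function At = ttrans(A)
% Tensor transpose: transpose each frontal slice and reverse slices 2..n.
At = permute(A, [2 1 3]);
At(:, :, 2:end) = At(:, :, end:-1:2);
end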
The operators squeeze and twist [11] are expressed by
$$X=\mathrm{squeeze}(\mathcal{X}_j),\qquad X(i,k)=\mathcal{X}_j(i,1,k),\qquad \mathrm{twist}(\mathrm{squeeze}(\mathcal{X}_j))=\mathcal{X}_j.$$
Figure 2 illustrates the transformation between a matrix and a tensor column by using squeeze and twist. More generally, the operators multi squeeze and multi twist are defined so that a third-order tensor can be squeezed or twisted: for a tensor $\mathcal{D}\in\mathbb{R}^{m\times p\times n}$ with $p>1$, $\mathcal{C}=\mathrm{multi\ squeeze}(\mathcal{D})$ means that all lateral slices of $\mathcal{D}$ are squeezed and stacked as frontal slices of $\mathcal{C}$, and multi twist is the inverse operation of multi squeeze. Thus, $\mathrm{multi\ twist}(\mathrm{multi\ squeeze}(\mathcal{D}))=\mathcal{D}$. We refer to Table 1 for more notations and definitions.
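In MATLAB, squeeze is built in and twist amounts to a reshape; a small sketch of the correspondence illustrated in Figure 2:

% squeeze / twist between an m x n matrix and an m x 1 x n tensor column
X  = rand(5, 7);                 % a matrix
Xc = reshape(X, [5, 1, 7]);      % twist(X): Xc(i,1,k) = X(i,k)
Y  = squeeze(Xc);                % squeeze recovers the matrix
assert(isequal(X, Y));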

3. Tensor Conjugate Gradient Methods

This section initiates the discussion on the automated determination of an appropriate regularization parameter for the tensor Conjugate Gradient (tCG) method; we abbreviate the improved method as auto-tCG. A truncated auto-tCG method, abbreviated as auto-ttCG, is developed to improve the auto-tCG method, and a preconditioned version of the auto-ttCG method, abbreviated as auto-ttpCG, is then presented.

3.1. The Auto-tCG Method

The regularization parameter in the tCG method was not discussed and was user specified. This subsection enhances the tCG method by utilizing the discrepancy principle, under Assumption (3), to determine an appropriate regularization parameter, and uses it to solve the normal Equation (6). We consider the geometric sequence of parameters
$$\mu_k=\mu_0 q^k,\qquad k=0,1,\dots,$$
where $q\in(0,1)$. We set $\mu_0=\|\mathcal{A}\|_F$ and obtain a suitable regularization parameter by successively reducing $\mu_k$. An effective way to deal with the general Problems (9) is to regard them as $p$ independent subproblems of the form (4), i.e.,
$$\min_{\mathcal{X}_j\in\mathbb{R}^{m\times 1\times n}}\ \|\mathcal{A}*\mathcal{X}_j-\mathcal{B}_j\|_F^2+\mu\|\mathcal{X}_j\|_F^2,\qquad j=1,\dots,p,$$
where $\mathcal{B}_j$ is the $j$th tensor column of $\mathcal{B}$ and is contaminated by noise $\mathcal{E}_j$, and $\mathcal{B}_{j,\mathrm{true}}$ represents the corresponding unknown error-free tensor column. We assume that the noise tensor
$$\mathcal{E}_j=\mathcal{B}_j-\mathcal{B}_{j,\mathrm{true}}$$
is available or that its norm can be estimated, i.e.,
$$\|\mathcal{E}_j\|_F\le\delta_j,\qquad j=1,\dots,p.$$
Algorithm 3 encapsulates the auto-tCG method designed for solving Equation (9). The initial tensor of Algorithm 3 is set as a zero tensor. The iteration concludes when the Frobenius norm of the residual tensor
$$\mathcal{R}_{j,\mu_k}^i=\mathcal{A}^T*\mathcal{B}_j-(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{X}_{j,\mu_k}^i$$
is small enough, where $\mathcal{R}_{j,\mu_k}^i$ denotes the residual generated by the $i$th iterate $\mathcal{X}_{j,\mu_k}^i$ of the normal equation with parameter $\mu_k$ for the $j$th independent subproblem. We let $\mathcal{X}_{int}=\mathcal{X}_{\mu_k}^*$ be the initial tensor for the normal equation with $\mu_{k+1}$. When $\mu=\mu_k$ and $m$ iterations of the CG process are carried out, the iterates lie in the affine space $\mathcal{X}_{\mu_k}^0+\mathbb{K}_m\big(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I},\ \mathcal{R}_{\mu_k}^0\big)$, where $\mathcal{R}_{\mu_k}^0=\mathcal{A}^T*\mathcal{B}-(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{X}_{\mu_k}^0$.
Algorithm 3 The auto-tCG method for solving (9)
1: Input: A ∈ R^{m×m×n}, B_j ∈ R^{m×1×n}, δ_j, j = 1, ..., p, μ_0, η > 1.
2: Output: Approximate solution X* of Problem (9).
3: for j = 1, 2, ..., p do
4:    X_int = O, k = 0.
5:    while ||A*X_{j,μ_k}* − B_j||_F^2 > η²δ_j² do
6:       k = k + 1; form the normal equation (A^T*A + μ_k I)*X_j = A^T*B_j with, e.g., μ_k = μ_0 q^k.
7:       [R_0, a] ← Normalize(A^T*B_j − (A^T*A + μ_k I)*X_int); P_0 ← R_0.
8:       i = 0, σ = 10·tol.
9:       while σ > tol do
10:         i = i + 1.
11:         c = (P_{i−1}^T*(A^T*A + μ_k I)*P_{i−1})^{−1}*(R_{i−1}^T*R_{i−1}).
12:         X_i = X_{i−1} + P_{i−1}*c.
13:         R_i = R_{i−1} − (A^T*A + μ_k I)*P_{i−1}*c.
14:         σ = | ||R_i||_F − ||R_{i−1}||_F |.
15:         d = (R_{i−1}^T*R_{i−1})^{−1}*(R_i^T*R_i).
16:         P_i = R_i + P_{i−1}*d.
17:      end while
18:      X_{j,μ_k}* = X_i*a (X_{j,μ_k}* is the solution of the normal equation with μ_k for the jth independent Subproblem (4)).
19:      X_int = X_{j,μ_k}*.
20:   end while
21:   X*(:,j,:) = X_{j,μ_k}*.
22: end for
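A high-level MATLAB sketch of the outer loop of Algorithm 3 for a single tensor column B_j follows; cg_normal_eq stands for any inner solver of the normal equation (for instance, the tCG sketch above), deltaj denotes the noise bound δ_j, and all variable names are ours.

% Discrepancy-principle loop of Algorithm 3 (sketch) for one column B_j
mu0  = norm(A(:));                 % mu_0 = ||A||_F
q    = 0.5;  eta = 1.05;  tol = 1e-6;
Xint = zeros(size(Bj));
Xj   = Xint;  k = 0;
while norm(reshape(tprod(A, Xj) - Bj, [], 1))^2 > eta^2 * deltaj^2
    k    = k + 1;
    mu   = mu0 * q^k;                            % mu_k = mu_0 * q^k
    Xj   = cg_normal_eq(A, Bj, mu, Xint, tol);   % solve (A^T*A + mu_k*I)*X = A^T*B_j
    Xint = Xj;                                   % warm start for mu_{k+1}
end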

3.2. The Truncated Tensor Conjugate Gradient Method

Frommer and Maass [33] proposed a condition that can identify inappropriate values of μ in advance. We adapt this condition to improve Algorithm 3 by excluding unsuitable values of μ, and we obtain a truncated tensor Conjugate Gradient method for solving (9). We first provide the following result.
Theorem 1.
Given A R l × m × n , we define a t-linear operator T: R m × 1 × n R l × 1 × n , i.e.,   T ( X ) = A X with X R m × 1 × n . We let X μ * be the exact solution of the normal equations
$$(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{X}=\mathcal{A}^T*\mathcal{B};$$
then, for an arbitrary X R m × 1 × n , we have
$$\|\mathcal{A}*\mathcal{X}_\mu^*-\mathcal{B}\|_F^2\ \ge\ \|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F^2-\frac{1}{4\mu}\,\|\mathcal{A}^T*\mathcal{B}-(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{X}\|_F^2.$$
Proof. 
For an arbitrary $\mathcal{X}\in\mathbb{R}^{m\times 1\times n}$, we set $\mathcal{Z}=\mathcal{X}_\mu^*-\mathcal{X}$. We let the tensor singular value decomposition of $\mathcal{A}$ be $\mathcal{A}=\mathcal{U}*\mathcal{S}*\mathcal{V}^T$; then,
$$\mathcal{A}*\mathcal{Z}=\mathcal{U}*\mathcal{S}*\mathcal{V}^T*\mathcal{Z}.$$
We suppose $\mathcal{V}^T*\mathcal{Z}=\mathcal{D}\in\mathbb{R}^{m\times 1\times n}$; then,
$$\|\mathcal{A}*\mathcal{Z}\|_F^2=\|\mathcal{U}*\mathcal{S}*\mathcal{V}^T*\mathcal{Z}\|_F^2=\|\mathcal{S}*\mathcal{D}\|_F^2=\|\mathrm{bcirc}(\mathcal{S})\,\mathrm{unfold}(\mathcal{D})\|_2^2.$$
Thus,
$$\begin{aligned}
\|(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{Z}\|_F^2
&=\|\mathcal{V}*(\mathcal{S}^T*\mathcal{S}+\mu\mathcal{I})*\mathcal{V}^T*\mathcal{Z}\|_F^2
=\|(\mathcal{S}^T*\mathcal{S}+\mu\mathcal{I})*\mathcal{D}\|_F^2\\
&=\|(\mathrm{bcirc}(\mathcal{S}^T*\mathcal{S})+\mu\,\mathrm{bcirc}(\mathcal{I}))\,\mathrm{unfold}(\mathcal{D})\|_2^2
=\|(\mathrm{bcirc}(\mathcal{S})^T\mathrm{bcirc}(\mathcal{S})+\mu\,\mathrm{bcirc}(\mathcal{I}))\,\mathrm{unfold}(\mathcal{D})\|_2^2.
\end{aligned}$$
We denote $\mathrm{bcirc}(\mathcal{S})=S\in\mathbb{R}^{nl\times nm}$, $\mathrm{bcirc}(\mathcal{I})=I\in\mathbb{R}^{nm\times nm}$ and $\mathrm{unfold}(\mathcal{D})=d\in\mathbb{R}^{nm\times 1}$; then $\|\mathcal{A}*\mathcal{Z}\|_F^2=\|Sd\|_2^2$ and $\|(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{Z}\|_F^2=\|(S^TS+\mu I)d\|_2^2$. Thus, the tensor norms are transformed into equivalent matrix norms. We let the singular value decomposition of $S$ be $S=U\Sigma V^T$, where $\Sigma=\mathrm{diag}(\sigma_1,\sigma_2,\dots,\sigma_r)$, $r\le\min\{nl,nm\}$, and $U=[u_1,u_2,\dots,u_r]$ and $V=[v_1,v_2,\dots,v_r]$ have orthonormal columns $u_k\in\mathbb{R}^{nl\times 1}$ and $v_k\in\mathbb{R}^{nm\times 1}$, respectively. Thus, we have
$$Sd=\sum_{\sigma_k>0}\sigma_k\,\langle d,v_k\rangle\,u_k.$$
Using the identity $s^2=(s+\mu s^{-1})^{-2}(s^2+\mu)^2$ together with the estimate
$$\frac{1}{s+\mu s^{-1}}\le\frac{1}{2\sqrt{\mu}},\qquad (s,\mu>0),$$
we have
$$\|Sd\|_2^2=\sum_{\sigma_k>0}\sigma_k^2\,\langle d,v_k\rangle^2=\sum_{\sigma_k>0}(\sigma_k+\mu\sigma_k^{-1})^{-2}(\sigma_k^2+\mu)^2\,\langle d,v_k\rangle^2\le\frac{1}{4\mu}\sum_{\sigma_k>0}(\sigma_k^2+\mu)^2\,\langle d,v_k\rangle^2. \tag{19}$$
We note that
$$\|(S^TS+\mu I)d\|_2^2=\sum_{\sigma_k>0}(\sigma_k^2+\mu)^2\,\langle d,v_k\rangle^2. \tag{20}$$
It results from (19) and (20) that
$$\|Sd\|_2^2\le\frac{1}{4\mu}\|(S^TS+\mu I)d\|_2^2. \tag{21}$$
Since $\|\mathcal{A}*\mathcal{Z}\|_F^2=\|Sd\|_2^2$ and $\|(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{Z}\|_F^2=\|(S^TS+\mu I)d\|_2^2$, we have
$$\|\mathcal{A}*\mathcal{Z}\|_F^2\le\frac{1}{4\mu}\|(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{Z}\|_F^2. \tag{22}$$
Thus,
$$\|\mathcal{A}*\mathcal{X}_\mu^*-\mathcal{B}\|_F^2=\|\mathcal{A}*\mathcal{X}-\mathcal{B}+\mathcal{A}*(\mathcal{X}_\mu^*-\mathcal{X})\|_F^2\ge\|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F^2-\|\mathcal{A}*\mathcal{Z}\|_F^2\ge\|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F^2-\frac{1}{4\mu}\|(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{Z}\|_F^2.$$
We note that
$$(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{Z}=(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*(\mathcal{X}_\mu^*-\mathcal{X})=\mathcal{A}^T*\mathcal{B}-(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{X}; \tag{23}$$
then, (23) and (22) result in
$$\|\mathcal{A}*\mathcal{X}_\mu^*-\mathcal{B}\|_F^2\ge\|\mathcal{A}*\mathcal{X}-\mathcal{B}\|_F^2-\frac{1}{4\mu}\|\mathcal{A}^T*\mathcal{B}-(\mathcal{A}^T*\mathcal{A}+\mu\mathcal{I})*\mathcal{X}\|_F^2.$$
   □
We apply Theorem 1 to predict in advance whether the exact solution X μ k * satisfies the discrepancy principle in Algorithm 3. We add condition
$$\|\mathcal{A}*\mathcal{X}_{\mu_k}^i-\mathcal{B}\|_F^2-\frac{1}{4\mu_k}\|\mathcal{R}_{\mu_k}^i\|_F^2>\eta^2\delta^2 \tag{24}$$
in Steps 9–16 of Algorithm 3. If the $i$th iterate of the normal equation with $\mu_k$ is $\mathcal{X}_{\mu_k}^i$ and its residual $\mathcal{R}_{\mu_k}^i$ satisfies (24), then $\|\mathcal{A}*\mathcal{X}_{\mu_k}^*-\mathcal{B}\|_F^2>\eta^2\delta^2$ by Theorem 1. This indicates that the exact solution of the normal equation with $\mu_k$ does not satisfy the discrepancy principle, so we continue with the next normal equation, with $\mu_{k+1}$. We therefore obtain a truncated tensor Conjugate Gradient method with automatic determination of a suitable regularization parameter, abbreviated as auto-ttCG. Algorithm 4 summarizes the auto-ttCG method.
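In code, the check (24) costs little beyond quantities that the CG loop already forms; a sketch, with Xi the current iterate X_{μ_k}^i, Ri and a the normalized residual and its tube from the Normalize step, and muk, eta, delta as in Algorithm 4 (all names are ours):

% Early-exit test (24): if even the exact solution for mu_k cannot satisfy
% the discrepancy principle, abandon mu_k and move on to mu_{k+1}.
res = tprod(A, Xi) - B;                                    % A*X_i - B
lhs = norm(res(:))^2 - norm(reshape(tprod(Ri, a), [], 1))^2 / (4 * muk);
if lhs > eta^2 * delta^2
    % stop the inner CG loop for mu_k and decrease the parameter
end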

3.3. A Preconditioned Truncated Tensor Conjugate Gradient Method

In this section, we explore the acceleration of Algorithm 4 through preconditioning. When tensor M is symmetric positive definite under the t-product structure, we can obtain its tensor approximate Cholesky decomposition (tChol) by Algorithm 5.
In Algorithm 4, the coefficient tensor $\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I}$ of the $k$th normal equation
$$(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{X}=\mathcal{A}^T*\mathcal{B} \tag{25}$$
is symmetric and positive definite. We set $\mathcal{M}=\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I}$ and apply Algorithm 5 to obtain the decomposition $\mathcal{M}=\mathcal{H}*\mathcal{H}^T$, where each frontal slice of $\mathcal{H}$ is a sparse lower triangular matrix. After the normal Equation (25) is preconditioned by this factorization of $\mathcal{M}$, we solve the preconditioned normal equations
$$\tilde{\mathcal{A}}*\tilde{\mathcal{X}}=\tilde{\mathcal{B}} \tag{26}$$
instead of Equation (25) in Algorithm 4, where $\tilde{\mathcal{A}}=\mathcal{H}^{-1}*(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{H}^{-T}$, $\tilde{\mathcal{X}}=\mathcal{H}^T*\mathcal{X}$ and $\tilde{\mathcal{B}}=\mathcal{H}^{-1}*\mathcal{A}^T*\mathcal{B}$.
Algorithm 4 The auto-ttCG method for solving (9)
1: Input: A ∈ R^{m×m×n}, B_j ∈ R^{m×1×n}, δ_j, j = 1, ..., p, μ_0, η > 1, tol.
2: Output: Approximate solution X* of Problem (9).
3: for j = 1, 2, ..., p do
4:    X_int = O, k = 0.
5:    while ||A*X_{j,μ_k}^i − B_j||_F^2 > η²δ_j² do
6:       k = k + 1; form the normal equation (A^T*A + μ_k I)*X_j = A^T*B_j with, e.g., μ_k = μ_0 q^k.
7:       [R_0, a] ← Normalize(A^T*B_j − (A^T*A + μ_k I)*X_int); P_0 ← R_0.
8:       i = 0, σ = 10·tol, X_{j,μ_k}^0 = X_int.
9:       while σ > tol and ||A*X_{j,μ_k}^i − B_j||_F^2 − (1/(4μ_k))||R_i*a||_F^2 < η²δ_j² do
10:         i = i + 1.
11:         c = (P_{i−1}^T*(A^T*A + μ_k I)*P_{i−1})^{−1}*(R_{i−1}^T*R_{i−1}).
12:         X_i = X_{i−1} + P_{i−1}*c, X_{j,μ_k}^i = X_i*a.
13:         R_i = R_{i−1} − (A^T*A + μ_k I)*P_{i−1}*c.
14:         σ = | ||R_i||_F − ||R_{i−1}||_F |.
15:         d = (R_{i−1}^T*R_{i−1})^{−1}*(R_i^T*R_i).
16:         P_i = R_i + P_{i−1}*d.
17:      end while
18:      X_int = X_{j,μ_k}^i.
19:   end while
20:   X*(:,j,:) = X_{j,μ_k}^i.
21: end for
Algorithm 5 Tensor Cholesky decomposition (tChol)
1: Input: M ∈ R^{m×m×n}, symmetric positive definite under the t-product
2: Output: H ∈ R^{m×m×n} with M = H*H^T.
3: M̂ ← fft(M, [ ], 3)
4: for j = 1, 2, ..., n do
5:    H ← chol(M̂(:,:,j)), where H is the lower triangular factor obtained by the (approximate) Cholesky decomposition.
6:    Ĥ(:,:,j) ← H.
7: end for
8: H ← ifft(Ĥ, [ ], 3).
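A runnable MATLAB sketch of Algorithm 5 (the name tchol is ours; chol(·,'lower') returns the lower triangular Cholesky factor of each Fourier-domain slice):

function H = tchol(M)
% Tensor Cholesky factorization M = H * H^T via slice-wise Cholesky
% factorizations in the Fourier domain (sketch of Algorithm 5).
n    = size(M, 3);
Mhat = fft(M, [], 3);
Hhat = complex(zeros(size(M)));
for j = 1:n
    Hhat(:, :, j) = chol(Mhat(:, :, j), 'lower');   % lower triangular factor
end
H = ifft(Hhat, [], 3);
end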
We apply Algorithm 4 to solve (26) instead of (25). We let $\mathcal{X}_i$ and $\tilde{\mathcal{X}}_i$ denote the $i$th iterates for (25) and (26), respectively. Then, we have
$$\tilde{\mathcal{R}}_i=\tilde{\mathcal{B}}-\tilde{\mathcal{A}}*\tilde{\mathcal{X}}_i=\mathcal{H}^{-1}*\mathcal{A}^T*\mathcal{B}-\big(\mathcal{H}^{-1}*(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{H}^{-T}\big)*\mathcal{H}^T*\mathcal{X}_i=\mathcal{H}^{-1}*\big(\mathcal{A}^T*\mathcal{B}-(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{X}_i\big)=\mathcal{H}^{-1}*\mathcal{R}_i. \tag{27}$$
We let $\mathcal{W}_i=\mathcal{H}^{-1}*\mathcal{R}_i$ and $\tilde{\mathcal{P}}_{i-1}=\mathcal{H}^T*\mathcal{P}_{i-1}$; then, we have
$$\tilde{d}=(\tilde{\mathcal{R}}_{i-1}^T*\tilde{\mathcal{R}}_{i-1})^{-1}*(\tilde{\mathcal{R}}_i^T*\tilde{\mathcal{R}}_i)=\big((\mathcal{H}^{-1}*\mathcal{R}_{i-1})^T*\mathcal{H}^{-1}*\mathcal{R}_{i-1}\big)^{-1}*\big((\mathcal{H}^{-1}*\mathcal{R}_i)^T*\mathcal{H}^{-1}*\mathcal{R}_i\big)=(\mathcal{W}_{i-1}^T*\mathcal{W}_{i-1})^{-1}*(\mathcal{W}_i^T*\mathcal{W}_i),$$
and
$$\begin{aligned}
\tilde{c}&=(\tilde{\mathcal{P}}_{i-1}^T*\tilde{\mathcal{A}}*\tilde{\mathcal{P}}_{i-1})^{-1}*(\tilde{\mathcal{R}}_{i-1}^T*\tilde{\mathcal{R}}_{i-1})\\
&=\big((\mathcal{H}^T*\mathcal{P}_{i-1})^T*\mathcal{H}^{-1}*(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{H}^{-T}*(\mathcal{H}^T*\mathcal{P}_{i-1})\big)^{-1}*\big((\mathcal{H}^{-1}*\mathcal{R}_{i-1})^T*\mathcal{H}^{-1}*\mathcal{R}_{i-1}\big)\\
&=\big((\mathcal{H}^T*\mathcal{P}_{i-1})^T*\mathcal{H}^{-1}*(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{P}_{i-1}\big)^{-1}*(\mathcal{W}_{i-1}^T*\mathcal{W}_{i-1})\\
&=\big(\mathcal{P}_{i-1}^T*(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{P}_{i-1}\big)^{-1}*(\mathcal{W}_{i-1}^T*\mathcal{W}_{i-1}).
\end{aligned}$$
In addition, we have the iteration
$$\tilde{\mathcal{X}}_i=\tilde{\mathcal{X}}_{i-1}+\tilde{\mathcal{P}}_{i-1}*\tilde{c}\ \Longleftrightarrow\ \mathcal{H}^T*\mathcal{X}_i=\mathcal{H}^T*\mathcal{X}_{i-1}+\mathcal{H}^T*\mathcal{P}_{i-1}*\tilde{c}\ \Longleftrightarrow\ \mathcal{X}_i=\mathcal{X}_{i-1}+\mathcal{P}_{i-1}*\tilde{c},$$
and
$$\tilde{\mathcal{R}}_i=\tilde{\mathcal{R}}_{i-1}-\tilde{\mathcal{A}}*\tilde{\mathcal{P}}_{i-1}*\tilde{c}\ \Longleftrightarrow\ \mathcal{H}^{-1}*\mathcal{R}_i=\mathcal{H}^{-1}*\mathcal{R}_{i-1}-\mathcal{H}^{-1}*(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{H}^{-T}*\mathcal{H}^T*\mathcal{P}_{i-1}*\tilde{c}\ \Longleftrightarrow\ \mathcal{R}_i=\mathcal{R}_{i-1}-(\mathcal{A}^T*\mathcal{A}+\mu_k\mathcal{I})*\mathcal{P}_{i-1}*\tilde{c},$$
together with
$$\tilde{\mathcal{P}}_i=\tilde{\mathcal{R}}_i+\tilde{\mathcal{P}}_{i-1}*\tilde{d}\ \Longleftrightarrow\ \mathcal{H}^T*\mathcal{P}_i=\mathcal{H}^{-1}*\mathcal{R}_i+\mathcal{H}^T*\mathcal{P}_{i-1}*\tilde{d}\ \Longleftrightarrow\ \mathcal{P}_i=\mathcal{H}^{-T}*\mathcal{H}^{-1}*\mathcal{R}_i+\mathcal{P}_{i-1}*\tilde{d}=\mathcal{H}^{-T}*\mathcal{W}_i+\mathcal{P}_{i-1}*\tilde{d}. \tag{35}$$
Implementing the preconditioning relations (27)–(35) in Algorithm 4, we obtain the improved auto-ttCG method, which is called the truncated tensor preconditioned Conjugate Gradient method with automatic determination of a suitable regularization parameter and is abbreviated as auto-ttpCG. Algorithm 6 summarizes the auto-ttpCG method. Numerical experiments in Section 4 show that Algorithm 6 converges faster than Algorithm 4.
Algorithm 6 The auto-ttpCG method for solving (9)
1: Input: A ∈ R^{m×m×n}, B_j ∈ R^{m×1×n}, δ_j, j = 1, ..., p, μ_0, η > 1, tol.
2: Output: Approximate solution X* of Problem (9).
3: for j = 1, 2, ..., p do
4:    X_int = O, k = 0.
5:    while ||A*X_{j,μ_k}^i − B_j||_F^2 > η²δ_j² do
6:       k = k + 1, μ_k = μ_0 q^k.
7:       H = tChol(A^T*A + μ_k I).
8:       [R_0, a] ← Normalize(A^T*B_j − (A^T*A + μ_k I)*X_int).
9:       W_0 = H^{−1}*R_0, P_0 = H^{−T}*W_0.
10:      i = 0, σ = 10·tol, X_{j,μ_k}^0 = X_int.
11:      while σ > tol and ||A*X_{j,μ_k}^i − B_j||_F^2 − (1/(4μ_k))||R_i*a||_F^2 < η²δ_j² do
12:         i = i + 1.
13:         c̃ = (P_{i−1}^T*(A^T*A + μ_k I)*P_{i−1})^{−1}*(W_{i−1}^T*W_{i−1}).
14:         X_i = X_{i−1} + P_{i−1}*c̃, X_{j,μ_k}^i = X_i*a.
15:         R_i = R_{i−1} − (A^T*A + μ_k I)*P_{i−1}*c̃, W_i = H^{−1}*R_i.
16:         σ = | ||R_i||_F − ||R_{i−1}||_F |.
17:         d̃ = (W_{i−1}^T*W_{i−1})^{−1}*(W_i^T*W_i).
18:         P_i = H^{−T}*W_i + P_{i−1}*d̃.
19:      end while
20:      X_int = X_{j,μ_k}^i.
21:   end while
22:   X*(:,j,:) = X_{j,μ_k}^i.
23: end for

4. Numerical Examples

This section presents three illustrative examples showcasing the application of Algorithms 3, 4 and 6 in the context of image and video restoration. All computations are executed using MATLAB R2018a on computing platforms equipped with Intel Core i7 processors and 16 GB of RAM.
We suppose X k is the kth approximate solution to Minimization problem (9). The quality of the approximate solution X k is defined by the relative error
$$\mathrm{Err}_k=\frac{\|\mathcal{X}_k-\mathcal{X}_{\mathrm{true}}\|_F}{\|\mathcal{X}_{\mathrm{true}}\|_F},$$
and the signal-to-noise ratio (SNR)
$$\mathrm{SNR}(\mathcal{X}_k)=10\log_{10}\frac{\|\mathcal{X}_{\mathrm{true}}-E(\mathcal{X}_{\mathrm{true}})\|_F^2}{\|\mathcal{X}_k-\mathcal{X}_{\mathrm{true}}\|_F^2},$$
where X t r u e represents the uncontaminated data tensor and E ( X t r u e ) is the average gray-level of X t r u e . The observed data, B , in (9) is contaminated by a “noise” tensor E , i.e., B = B t r u e + E . E is determined as follows. We let E j be the jth transverse slice of E , whose entries are scaled and normally distributed with a mean of zero, i.e.,
$$\mathcal{E}_j=\nu\,\frac{\mathcal{E}_{r,j}}{\|\mathcal{E}_{r,j}\|_F}\,\|\mathcal{B}_{\mathrm{true},j}\|_F,\qquad j=1,\dots,p, \tag{36}$$
where the entries of $\mathcal{E}_{r,j}$ are generated according to $N(0,1)$.
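The noise model (36) and the two quality measures are straightforward to form in MATLAB; a sketch for one tensor column (Btrue_j, Xk, Xtrue and nu are placeholder names of our own):

% Noise of level nu added to the j-th tensor column, as in (36)
Er = randn(size(Btrue_j));                        % entries drawn from N(0,1)
Ej = nu * (Er / norm(Er(:))) * norm(Btrue_j(:));
Bj = Btrue_j + Ej;

% Relative error and SNR of an approximate solution Xk
relerr = norm(Xk(:) - Xtrue(:)) / norm(Xtrue(:));
snr_k  = 10 * log10(norm(Xtrue(:) - mean(Xtrue(:)))^2 / norm(Xk(:) - Xtrue(:))^2);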
Example 1
(Gray image). This example concerns the restoration of a blurred and noisy cameraman image of size 256 × 256, stored as a tensor column of size 256 × 1 × 256. For the operator A, the frontal slices A(:,:,i), i = 1, ..., 256, are generated by using the MATLAB function blur, i.e.,
z = [exp(−([0:band−1].^2)/(2σ²)), zeros(1, N−band)],  A = (1/(σ√(2π))) · toeplitz([z(1), fliplr(z(2:end))], z),  A(:,:,i) = A(i,1) · A, (37)
with N = 256, σ = 4 and band = 12. The condition numbers of the frontal slices satisfy cond(A(:,:,1)) = cond(A(:,:,246)) = ⋯ = cond(A(:,:,256)) = 11.1559, while the condition numbers of the remaining slices are infinite. We let X_true denote the original uncontaminated cameraman image; the operator twist converts X_true into a tensor column X_true ∈ R^{256×1×256} for storage. The noise tensor E is generated by (36) with noise levels ν = 10^{−i}, i = 2, 3. The blurred and noisy data are generated through B = A*X_true + E.
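For concreteness, the blurring tensor of this example can be assembled in MATLAB as follows (a sketch that simply follows the reconstructed formula (37) above; the matrix is called A2 here to distinguish it from the tensor A):

% Gaussian blurring tensor for Example 1 (N = 256, sigma = 4, band = 12)
N = 256; sigma = 4; band = 12;
z  = [exp(-((0:band-1).^2) / (2*sigma^2)), zeros(1, N - band)];
A2 = (1 / (sigma * sqrt(2*pi))) * toeplitz([z(1), fliplr(z(2:end))], z);
A  = zeros(N, N, N);
for i = 1:N
    A(:, :, i) = A2(i, 1) * A2;        % i-th frontal slice
end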
The auto-tCG, auto-ttCG and auto-ttpCG methods are used to solve the tensor discrete linear ill-posed Problems (1). The discrepancy principle is utilized to determine an appropriate regularization parameter, with μ_k = μ_0 q^k, μ_0 = ||A||_F and q = 1/2. We set η = 1.05 in (8).
Figure 3 shows the convergence of the relative errors versus (a) the iteration number k and (b) the CPU time for the tCG, auto-tCG, auto-ttCG and auto-ttpCG methods with the noise level ν = 10^{−3}, corresponding to Table 2. The iteration process is terminated when the discrepancy principle is satisfied. From Figure 3a, we can see that the auto-ttCG and auto-ttpCG methods do not need to fully solve the normal equation for all μ_k (k < 8); this shows that the auto-ttCG and auto-ttpCG methods improve the auto-tCG method via Condition (24). Figure 3b shows that the auto-ttpCG method converges fastest among the three methods.
Table 2 lists the regularization parameter, the iteration number, the relative error, the SNR and the CPU time of the optimal solution obtained by the tCG, A-tCG-FFT, A-CGLS-FFT, A-tpCG-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods with different noise levels ν = 10^{−i}, i = 2, 3. The regularization parameter for the tCG method was determined by conducting several experiments to obtain a reasonably appropriate value. The CPU time represents only the usage of a single CG process.
In the image restoration experiment of Example 1, the A-tCG-FFT, A-CGLS-FFT, and A-tpCG-FFT methods proposed by Song et al. [31] project the problem into the Fourier domain and solve 256 ill-posed problems in matrix form. In Song et al.'s setting, when the number of frontal slices is small, the required time is very small; in the setting of our article, the number of frontal slices is related to the size of the image, and as the number of frontal slices increases, the time cost increases. The calculation process of the auto-tCG, auto-ttCG and auto-ttpCG methods always maintains the tensor t-product structure, resulting in higher-quality image restoration. The quality of the regularized solution obtained by the auto-tCG method surpasses that of the solution obtained by the tCG method. It can be seen from Table 2 that the auto-ttpCG method has the lowest relative error, the highest SNR and the least CPU time for the different noise levels.
Figure 4 shows the reconstructed images obtained by the tCG, A-tCG-FFT, A-CGLS-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods on the blurred and noisy image with the noise level ν = 10^{−3} in Table 2. From Figure 4, we can see that the image restored by the auto-ttpCG method looks slightly better than the others while requiring the least CPU time. The image restoration performance of the tCG method is inferior to that of the three conjugate gradient methods with automatically determined parameters.
Example 2
(Color image). This example illustrates the restoration of a blurred Lena color image using Algorithms 3, 4 and 6. The original Lena image X_ori ∈ R^{256×256×3} is stored as a tensor X_true ∈ R^{256×3×256} through the operator multi twist. We set N = 256, σ = 3 and band = 12, and obtain A ∈ R^{256×256×256} by
z = [exp(−((0:band−1).^2)/(2σ²)), zeros(1, N−band)],
A = toeplitz(z),  A(:,:,i) = (1/(√(2π)σ)) · A(i,1) · A,  i = 1, ..., 256.
Then, cond(A(:,:,1)) = ⋯ = cond(A(:,:,12)) = 4.68 × 10^7, and the condition numbers of the other frontal slices of A are infinite. The noise tensor E is defined by (36). The blurred and noisy tensor is obtained as B = A*X_true + E, which is shown in Figure 5b.
The color image B is divided into multiple lateral slices, and each slice is processed independently as a Problem (1) by the tCG, auto-tCG, auto-ttCG and auto-ttpCG methods. Figure 6 shows the convergence of the relative errors versus (a) the iteration number k and (b) the CPU time for the auto-tCG, auto-ttCG and auto-ttpCG methods when dealing with the first lateral slice B(:,1,:) of B with ν = 10^{−3}. Results similar to those of Example 1 can be derived from Figure 6: the auto-ttCG and auto-ttpCG methods need fewer iterations than the auto-tCG method (Figure 6a), and the auto-ttpCG method converges fastest among all methods (Figure 6b).
Table 3 lists the relative error, the SNR and the CPU time of the optimal solution obtained by the tCG, A-tCG-FFT, A-CGLS-FFT, A-tpCG-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods with different noise levels ν = 10^{−i}, i = 2, 3. The results are very similar to those in Table 2 for the different noise levels. In the application of the tCG method, we define distinct regularization parameters, specifically setting μ = 0.01 when ν = 10^{−2} and μ = 0.005 when ν = 10^{−3}. The regularization parameter for the tCG method was determined through multiple trials, yielding a reasonably suitable value; however, when applying tCG with other regularization parameters attempted during this process, divergence or excessively large relative errors were commonly observed. When the condition numbers of the frontal slices are larger, the condition numbers of the matrices projected into the Fourier domain also increase, which leads to increased ill-posedness and results in more CPU time for the A-tCG-FFT, A-CGLS-FFT and A-tpCG-FFT methods to obtain regularization parameters. The quality of the solutions obtained by the tCG method is inferior to that of the three versions with automatic parameter tuning. However, the CPU time used by the tCG method is shorter than that of the other three methods, because it only represents the time spent solving a single regularized equation, whereas the process of manually selecting regularization parameters consumes more time. Table 3 also reflects the advantages of both the truncation and the preconditioning operations.
Figure 5 shows the images recovered by the tCG, A-tCG-FFT, A-CGLS-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods corresponding to the results with noise level ν = 10^{−3}. The results are very similar to those in Figure 4.
Example 3
(Video). In this example, we employ the three proposed reconstruction methods in MATLAB to recover the initial 10 consecutive frames of the blurred and noisy Rhinos video, with each frame containing 240 × 240 pixels. We store the ten uncontaminated frames of the original video in the tensor X_true ∈ R^{240×10×240}. We let z be defined by (37) with N = 240, σ = 2 and band = 12. The coefficient tensor A is defined as follows:
A = (1/(√(2π)σ)) · toeplitz(z),  A(:,:,i) = A(i,1) · A,  i = 1, ..., 240.
The condition numbers of the frontal slices of A are cond(A(:,:,i)) = 7.4484 × 10^9 for i ≤ 12, and the condition numbers of the remaining frontal slices of A are infinite. A suitable regularization parameter is determined by using the discrepancy principle with η = 1.1. The blurred and noisy tensor B is generated by B = A*X_true + E, with E ∈ R^{240×10×240} defined by (36).
Figure 7 shows the convergence of the relative errors versus the iteration number k and versus the CPU time for the auto-tCG, auto-ttCG and auto-ttpCG methods when the second frame of the video with ν = 10^{−3} is restored. Results very similar to those of Example 1 can be derived from Figure 7.
Table 4 displays the relative error, the SNR and the CPU time of the optimal solution obtained by the tCG, A-tCG-FFT, A-CGLS-FFT, A-tpCG-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods for the second frame with different noise levels ν = 10^{−i}, i = 2, 3. In video restoration experiments with consecutive frames, using the A-tCG-FFT, A-CGLS-FFT and A-tpCG-FFT methods to perform matrix calculations in the Fourier domain may damage, to a certain degree, the spatial structure that exists between consecutive frames, resulting in a decrease in restoration quality. When employing the tCG method, we configured distinct regularization parameters: μ = 0.05 when ν = 10^{−2}, and μ = 0.001 when ν = 10^{−3}. With the increase in data volume, the auto-tCG method demonstrates better solution quality than the tCG method. Additionally, the truncation and preconditioning strategies exhibit superior solution quality and time advantages over auto-tCG. We can see that the auto-ttpCG method has the largest SNR and the lowest CPU time for the different noise levels ν = 10^{−i}, i = 2, 3.
Figure 8 shows the original video, the blurred and noisy video, and the second frame of the video recovered by the tCG, A-tCG-FFT, A-CGLS-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods with noise level ν = 10^{−3}, corresponding to the results in Table 4. The frame recovered by the auto-ttpCG method looks best among all recovered frames.

5. Conclusions

This paper introduces three tensor Conjugate Gradient methods designed for the solution of large-scale linear discrete ill-posed problems formulated in tensor form. First, we introduce an automated strategy for determining an appropriate regularization parameter for the tensor Conjugate Gradient (tCG) method. Furthermore, we develop a truncated version and a preconditioned version of this method. The introduced methodologies are employed in diverse instances of image and video restoration, and their efficacy is demonstrated through illustrative examples. These approaches circumvent the need for matricization or vectorization of the problem. Notably, they exhibit significant potential in terms of both the speed and the quality of the computed restorations, as assessed by relative errors and SNR values.

Author Contributions

Software, S.-W.W.; Formal analysis, F.Y.; Investigation, F.Y.; Writing—original draft, S.-W.W.; Writing—review & editing, G.-X.H.; Supervision, G.-X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Sichuan Science and Technology Program (Grant No. 2022ZYD0008).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the referees for their helpful and constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Reichel, L.; Ugwu, U.O. The tensor Golub–Kahan–Tikhonov method applied to the solution of ill-posed problems with at-product structure. Numer. Linear Algebr. Appl. 2022, 29, e2412. [Google Scholar] [CrossRef]
  2. Ugwu, U.O.; Reichel, L. Tensor Arnoldi–Tikhonov and GMRES-Type Methods for Ill-Posed Problems with a t-Product Structure. J. Sci. Comput. 2022, 90, 1–39. [Google Scholar]
  3. Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, H.A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163. [Google Scholar] [CrossRef]
  4. Signoretto, M.; Tran Dinh, Q.; De Lathauwer, L.; Suykens, J.A. Learning with tensors: A framework based on convex optimization and spectral regularization. Mach. Learn. 2014, 94, 303–351. [Google Scholar] [CrossRef]
  5. Kilmer, M.E.; Horesh, L.; Avron, H.; Newman, E. Tensor-tensor algebra for optimal representation and compression of multiway data. Proc. Natl. Acad. Sci. USA 2021, 118, e2015851118. [Google Scholar] [CrossRef] [PubMed]
  6. Beik, F.P.A.; Najafi–Kalyani, M.; Reichel, L. Iterative Tikhonov regularization of tensor equations based on the Arnoldi process and some of its generalizations. Appl. Numer. Math. 2020, 151, 425–447. [Google Scholar] [CrossRef]
  7. Bentbib, A.H.; Khouia, A.; Sadok, H. The LSQR method for solving tensor least-squares problems. Electron. Trans. Numer. Anal. 2022, 55, 92–111. [Google Scholar] [CrossRef]
  8. Zheng, M.M.; Ni, G. Approximation strategy based on the T-product for third-order quaternion tensors with application to color video compression. Appl. Math. Lett. 2023, 140, 108587. [Google Scholar] [CrossRef]
  9. Kilmer, M.E.; Martin, C.D. Factorization strategies for third order tensors. Linear Alg. Appl. 2011, 435, 641–658. [Google Scholar] [CrossRef]
  10. Hao, N.; Kilmer, M.E.; Braman, K.; Hoover, R.C. Facial recognition using tensor-tensor decompositions. SIAM J. Imaging Sci. 2013, 6, 437–463. [Google Scholar] [CrossRef]
  11. Kilmer, M.E.; Braman, K.; Hao, N.; Hoover, R.C. Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. Appl. 2013, 34, 148–172. [Google Scholar] [CrossRef]
  12. Bentbib, A.H.; El Hachimi, A.; Jbilou, K.; Ratnani, A. Fast multidimensional completion and principal component analysis methods via the cosine product. Calcolo 2022, 59, 26. [Google Scholar] [CrossRef]
  13. Khaleel, H.S.; Sagheer, S.V.M.; Baburaj, M.; George, S.N. Denoising of Rician corrupted 3D magnetic resonance images using tensor-SVD. Biomed. Signal Process. Control 2018, 44, 82–95. [Google Scholar] [CrossRef]
  14. Zeng, C.; Ng, M.K. Decompositions of third-order tensors: HOSVD, T-SVD, and Beyond. Numer. Linear Algebr. Appl. 2020, 27, e2290. [Google Scholar] [CrossRef]
  15. El Hachimi, A.; Jbilou, K.; Ratnani, A.; Reichel, L. Spectral computation with third-order tensors using the t-product. Appl. Numer. Math. 2023, 193, 1–21. [Google Scholar] [CrossRef]
  16. Yu, Q.; Zhang, X. T-product factorization based method for matrix and tensor completion problems. Comput. Optim. Appl. 2023, 84, 761–788. [Google Scholar] [CrossRef]
  17. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; Transl. from Russian; V.H. Winston & Sons: Washington, DC, USA, 1977. [Google Scholar]
  18. Fenu, C.; Reichel, L.; Rodriguez, G. GCV for Tikhonov regularization via global Golub–Kahan decomposition. Numer. Linear Algebr. Appl. 2016, 25, 467–484. [Google Scholar] [CrossRef]
  19. Hansen, P.C. Rank-deficient and Discrete Ill-posed Problems: Numerical Aspects of Linear Inversion. SIAM J. Sci. Comput. 1998, 20, 684–696. [Google Scholar]
  20. Kindermann, S. Convergence analysis of minimization-based noise level-free parameter choice rules for linear ill-posed problems. Electron. Trans. Numer. Anal. 2011, 38, 233–257. [Google Scholar]
  21. Kindermann, S.; Raik, K. A simplified L-curve method as error estimator. Electron. Trans. Numer. Anal. 2020, 53, 217–238. [Google Scholar] [CrossRef]
  22. Reichel, L.; Rodriguez, G. Old and new parameter choice rules for discrete ill-posed problems. Numer. Algorithms 2013, 63, 65–87. [Google Scholar] [CrossRef]
  23. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer: Dordrecht, The Netherlands, 1996. [Google Scholar]
  24. Zhang, J.; Saibaba, A.K.; Kilmer, M.E.; Aeron, S. A randomized tensor singular value decomposition based on the t-product. Numer. Linear Algebr. Appl. 2018, 25, e2179. [Google Scholar] [CrossRef]
  25. Ugwu, U.O.; Reichel, L. Tensor regularization by truncated iteration: A comparison of some solution methods for large-scale linear discrete ill-posed problem with a t-product. arXiv 2021, arXiv:2110.02485. [Google Scholar]
  26. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
  27. Polyak, B.T. The conjugate gradient method in extremal problems. U.S.S.R. Comput. Math. Math. Phys. 1969, 9, 94–112. [Google Scholar] [CrossRef]
  28. Gilbert, J.C.; Nocedal, J. Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 1992, 2, 21–42. [Google Scholar] [CrossRef]
  29. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
  30. Saad, Y. Iterative Methods for Sparse Linear Systems; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  31. Song, H.M.; Wang, S.W.; Huang, G.X. Tensor Conjugate-Gradient methods for tensor linear discrete ill-posed problems. AIMS Math. 2023, 8, 26782–26800. [Google Scholar] [CrossRef]
  32. Lund, K. The tensor t-function: A definition for functions of third-order tensors. Numer. Linear Algebr. Appl. 2020, 27, e2288. [Google Scholar] [CrossRef]
  33. Frommer, A.; Maass, P. Fast CG-based methods for Tikhonov–Phillips regularization. SIAM J. Sci. Comput. 1999, 20, 1831–1850. [Google Scholar] [CrossRef]
Figure 1. (a) Frontal slices A(:,:,k), (b) lateral slices A(:,j,:) and (c) tube fibers A(i,j,:).
Figure 2. Twist and squeeze.
Figure 3. Example 1: Comparison of convergence between (a) relative errors versus the iteration number k and (b) relative errors versus the CPU time for the auto-tCG, auto-ttCG and auto-ttpCG methods with the noise level ν = 10^{−3}.
Figure 4. Example 1: (a) The original image and (b) the blurred and noised image; reconstructed images by (c) the tCG method (SNR = 12.21, CPU = 9.87), (d) the A-tCG-FFT method (SNR = 18.63, CPU = 88.02), (e) the A-CGLS-FFT method (SNR = 18.63, CPU = 82.23), (f) the auto-tCG method (SNR = 22.36, CPU = 109.87), (g) the auto-ttCG method (SNR = 22.41, CPU = 80.93) and (h) the auto-ttpCG method (SNR = 22.48, CPU = 33.98), according to the noise level ν = 10^{−3} in Table 2.
Figure 5. Example 2: (a) The original image Lena, (b) the blurred and noised image, and reconstructed images by (c) the tCG method, (d) the A-tCG-FFT method, (e) the A-CGLS-FFT method, (f) the auto-tCG method, (g) the auto-ttCG method and (h) the auto-ttpCG method, according to the noise level ν = 10^{−3} in Table 3.
Figure 6. Example 2: Comparison of convergence between (a) relative errors versus the iteration number k and (b) relative errors versus the CPU time for the auto-tCG, auto-ttCG and auto-ttpCG methods with the noise level ν = 10^{−3}.
Figure 7. Example 3: Comparison of convergence between (a) relative errors versus the iteration number k and (b) relative errors versus the CPU time for the auto-tCG, auto-ttCG and auto-ttpCG methods with the noise level ν = 10^{−3}.
Figure 8. Example 3: (a) The second frame image of the original video, (b) the blurred and noisy image, and recovered images by (c) the tCG method, (d) the A-tCG-FFT method, (e) the A-CGLS-FFT method, (f) the auto-tCG method, (g) the auto-ttCG method and (h) the auto-ttpCG method, according to the noise level ν = 10^{−3} in Table 4.
Table 1. Notation explanation.

Notation | Interpretation
$A$ | matrix
$I$ | identity matrix
$\mathcal{A}^T$ | transpose of tensor $\mathcal{A}$
$\mathcal{A}^{-1}$ | inverse of tensor $\mathcal{A}$, $\mathcal{A}^{-T}=(\mathcal{A}^{-1})^T=(\mathcal{A}^T)^{-1}$
$\hat{\mathcal{A}}$ | FFT of $\mathcal{A}$ along the third mode
$\mathcal{I}$ | identity tensor
$\|\mathcal{A}\|_F$ | Frobenius norm, i.e., $\|\mathcal{A}\|_F=\sqrt{\sum_{i=1}^{l}\sum_{j=1}^{m}\sum_{k=1}^{n}a_{ijk}^2}$
$*$ | t-product
$\mathcal{A}_j$, $\mathcal{A}(:,j,:)$ | $j$th tensor column of $\mathcal{A}$, i.e., $j$th lateral slice of $\mathcal{A}$
$\mathcal{A}(:,:,k)$ | $k$th frontal slice of tensor $\mathcal{A}$
$\mathbf{d}$ | tube
$\langle\mathcal{A},\mathcal{B}\rangle$ | $\langle\mathcal{A},\mathcal{B}\rangle=\sum_{ijk}a_{ijk}b_{ijk}$
$\langle\mathcal{A}_j,\mathcal{B}_j\rangle$ | $\langle\mathcal{A}_j,\mathcal{B}_j\rangle=\sum_{ik}a_{i1k}b_{i1k}$
Table 2. Example 1: Comparison of relative error, SNR, and CPU time between the tCG (with μ = 1 × 10^{−3}), A-tCG-FFT, A-CGLS-FFT, A-tpCG-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods with different noise levels ν = 10^{−i}, i = 2, 3.

Noise Level | Method | k | μ_k | Relative Error | SNR | CPU (s)
10^{−3} | tCG | - | 1 × 10^{−3} | 9.14 × 10^{−2} | 12.21 | 9.87
10^{−3} | A-tCG-FFT | - | - | 5.44 × 10^{−2} | 18.63 | 88.02
10^{−3} | A-tCGLS-FFT | - | - | 5.44 × 10^{−2} | 18.63 | 82.23
10^{−3} | A-tPCG-FFT | - | - | 5.42 × 10^{−2} | 18.67 | 76.09
10^{−3} | auto-tCG | 15 | 1.96 × 10^{−5} | 3.54 × 10^{−2} | 22.36 | 109.87
10^{−3} | auto-ttCG | 15 | 1.96 × 10^{−5} | 3.52 × 10^{−2} | 22.41 | 80.93
10^{−3} | auto-ttpCG | 15 | 1.96 × 10^{−5} | 3.49 × 10^{−2} | 22.48 | 33.98
10^{−2} | tCG | - | 1 × 10^{−3} | 1.17 × 10^{−1} | 11.97 | 9.64
10^{−2} | A-tCG-FFT | - | - | 1.04 × 10^{−2} | 12.75 | 79.33
10^{−2} | A-tCGLS-FFT | - | - | 1.04 × 10^{−2} | 12.75 | 72.29
10^{−2} | A-tPCG-FFT | - | - | 9.81 × 10^{−2} | 12.90 | 61.75
10^{−2} | auto-tCG | 11 | 3.14 × 10^{−4} | 8.74 × 10^{−2} | 14.51 | 81.94
10^{−2} | auto-ttCG | 11 | 3.14 × 10^{−4} | 8.64 × 10^{−2} | 14.61 | 26.42
10^{−2} | auto-ttpCG | 11 | 3.14 × 10^{−4} | 8.54 × 10^{−2} | 14.72 | 18.50
Table 3. Example 2: Comparison of relative error, SNR, and CPU time between the tCG (ν = 10^{−2}: μ = 0.01; ν = 10^{−3}: μ = 0.005), A-tCG-FFT, A-CGLS-FFT, A-tpCG-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods with different noise levels ν = 10^{−i}, i = 2, 3.

Noise Level | Method | Relative Error | SNR | Time (s)
10^{−3} | tCG | 8.90 × 10^{−2} | 11.15 | 34.73
10^{−3} | A-tCG-FFT | 7.34 × 10^{−2} | 13.39 | 6939.36
10^{−3} | A-tCGLS-FFT | 7.34 × 10^{−2} | 13.39 | 5634.69
10^{−3} | A-tPCG-FFT | 7.34 × 10^{−2} | 13.39 | 1638.91
10^{−3} | auto-tCG | 5.90 × 10^{−2} | 14.62 | 314.73
10^{−3} | auto-ttCG | 5.90 × 10^{−2} | 14.62 | 262.81
10^{−3} | auto-ttpCG | 5.43 × 10^{−2} | 15.37 | 103.41
10^{−2} | tCG | 9.64 × 10^{−2} | 10.23 | 31.63
10^{−2} | A-tCG-FFT | 8.98 × 10^{−2} | 11.01 | 5236.55
10^{−2} | A-tCGLS-FFT | 8.98 × 10^{−2} | 11.01 | 4895.52
10^{−2} | A-tPCG-FFT | 8.98 × 10^{−2} | 11.01 | 1236.21
10^{−2} | auto-tCG | 7.64 × 10^{−2} | 12.37 | 117.48
10^{−2} | auto-ttCG | 7.48 × 10^{−2} | 12.55 | 62.01
10^{−2} | auto-ttpCG | 7.01 × 10^{−2} | 13.13 | 54.85
Table 4. Example 3: Comparison of relative error, SNR, and CPU time between the tCG (ν = 10^{−2}: μ = 0.05; ν = 10^{−3}: μ = 0.001), A-tCG-FFT, A-CGLS-FFT, A-tpCG-FFT, auto-tCG, auto-ttCG and auto-ttpCG methods with different noise levels ν = 10^{−i}, i = 2, 3.

Noise Level | Method | Relative Error | SNR | Time (s)
10^{−3} | tCG | 3.94 × 10^{−2} | 21.43 | 96.33
10^{−3} | A-tCG-FFT | 3.67 × 10^{−2} | 21.95 | 9396.36
10^{−3} | A-tCGLS-FFT | 3.67 × 10^{−2} | 21.95 | 7423.69
10^{−3} | A-tPCG-FFT | 3.67 × 10^{−2} | 21.95 | 3798.81
10^{−3} | auto-tCG | 2.94 × 10^{−2} | 23.17 | 697.78
10^{−3} | auto-ttCG | 2.92 × 10^{−2} | 23.23 | 487.35
10^{−3} | auto-ttpCG | 2.66 × 10^{−2} | 24.05 | 214.16
10^{−2} | tCG | 8.31 × 10^{−2} | 14.14 | 80.61
10^{−2} | A-tCG-FFT | 7.89 × 10^{−2} | 14.89 | 8972.69
10^{−2} | A-tCGLS-FFT | 7.89 × 10^{−2} | 14.89 | 7263.02
10^{−2} | A-tPCG-FFT | 7.89 × 10^{−2} | 14.89 | 3269.36
10^{−2} | auto-tCG | 5.24 × 10^{−2} | 18.15 | 480.75
10^{−2} | auto-ttCG | 5.10 × 10^{−2} | 18.38 | 281.54
10^{−2} | auto-ttpCG | 4.74 × 10^{−2} | 19.02 | 156.44