Color Image Recovery Using Generalized Matrix Completion over Higher-Order Finite Dimensional Algebra

To improve the accuracy of color image completion with missing entries, we present a recovery method based on generalized higher-order scalars. We extend the traditional second-order matrix model to a more comprehensive higher-order equivalent, called the "t-matrix" model, which incorporates a pixel neighborhood expansion strategy to characterize local pixel constraints. The "t-matrix" model is then used to extend some commonly used matrix and tensor completion algorithms to their higher-order versions. We conduct extensive experiments on simulated data and publicly available images and compare the performance of the resulting algorithms. The results show that our generalized matrix completion model and the corresponding algorithm compare favorably with their lower-order tensor and conventional matrix counterparts.


Introduction

Background and Related Works
Vectors and matrices are fundamental to data analysis and processing, but they often struggle to encapsulate the complex, higher-order structures found in real-world applications such as color images, video sequences, and hyperspectral images. These multilinear data structures defy satisfactory representation by traditional vectors and matrices, prompting the use of tensors, higher-order extensions of vectors and matrices, for more accurate representation.
In real-world scenarios, it is common to find that higher-order, high-dimensional data often have low intrinsic dimensionality. This property facilitates several advanced techniques and applications.
For example, Floryan et al. introduced a method to reduce data to their intrinsic dimensionality, allowing more accurate and low-dimensional dynamical models to capture the essential behavior of high-dimensional systems with low-dimensional features [1]. Li et al. achieved efficiency and robustness by training deep neural networks in low-dimensional spaces without sacrificing performance [2].

Chen et al. used deep learning networks for nonparametric regression on low-dimensional manifolds, emphasizing their adaptability to low-dimensional geometric structures in data [3]. This notion of low intrinsic dimensionality often leads to the low-rank or approximately low-rank nature of higher-order, high-dimensional data when represented as matrices or tensors. Several methods take advantage of this property. Fu et al. developed a low-rank tensor approximation model for multiview intrinsic subspace clustering that effectively reduces view-specific constraints and improves optimization, with notable success on real-world datasets [6]. Wang et al.'s tensor low-rank and sparse representation method skillfully preserves intrinsic 3D structures in hyperspectral anomaly detection [7]. In addition, Liu et al. comprehensively surveyed low-rank tensor approximation for hyperspectral image restoration, highlighting state-of-the-art techniques and current challenges in the field [8]. This collective body of work underscores the flexibility and potential of using low-rank approximations to manage and interpret complex, high-dimensional data.
In addition, the collection of high-dimensional data can result in the loss of some elements. Low-Rank Tensor Completion (LRTC) addresses this problem by reconstructing the missing components from known data elements. Unlike Low-Rank Matrix Completion (LRMC), which relies solely on second-order information, LRTC exploits higher-order information, making the study of low-rank tensor completion techniques an important frontier in many fields.
For example, Liu et al. introduced the Sum of Matricization-based Nuclear Norms (SMNN), which is based on the Tucker rank of the tensor, formulated three optimization algorithms for tensor completion via SMNN minimization, and successfully applied them to visual data completion [9]. Zhang et al. developed the Tubal Nuclear Norm (Tubal-NN) approach based on the tubal-rank of a tensor, and designed an algorithm that uses tensor nuclear norm penalization for tensor completion, which proved to be effective in video recovery and denoising [10,20]. Lu et al. constructed a tensor completion model using the tensor nuclear norm (TNN), showed that TNN is a specific atomic norm, and established a bound for guaranteed low-tubal-rank tensor recovery, thus providing recovery guarantees for tensor completion [23]. Xue et al. extended the tensor completion model to include the tensor truncated nuclear norm (T-TNN), thereby improving its effectiveness in real-world video and image processing [24].
In addition to these newly defined tensor norms, researchers are now developing other techniques that extend traditional concepts to new applications in tensor completion. For example, Zeng introduced a multimodal nuclear tensor factorization technique, which incorporates low-rank insights and an efficient block successive upper bound minimization algorithm. The method was applied to tasks such as hyperspectral image, video, and MRI completion, with experimental results confirming its superior performance [11]. Similarly, Wu presented the tensor wheel decomposition method, a new tensor completion approach that characterizes complex interactions with fewer hyperparameters, improving performance on both synthetic and real data [12]. Zhao et al. presented a nonconvex model with a proximal majorization-minimization algorithm for robust low-rank tensor completion, providing theoretical guarantees and demonstrating high efficiency on visual data, including color and multispectral images [13].
In recommendation systems, Deng et al. applied a meta-learning strategy with low-rank tensor completion for hyperparameter optimization and demonstrated its effectiveness [14]. Nguyen developed a consistency-based framework that emphasizes unit-scale consistency for matrix and tensor completion, with attributes such as fairness and the ability to exploit high-dimensional relationships [15]. Hui et al. integrated social-spatial context into tensor completion for time-aware point-of-interest recommendations, outperforming existing methods [16].
Tensor completion has also contributed to advances in data mining. Song et al. reviewed recent tensor completion algorithms, examining four perspectives and various applications in data mining [17].
Wu et al. introduced a multiattentional tensor completion network for handling missing entries in road sensor data, demonstrating improved performance [18]. Lee's sign-representable tensor model addressed both low- and high-rank signals for tensor completion, improving performance on human brain connectivity and topic data mining datasets [19].

Contributions and Organization of this Paper
Building on the success of Low-Rank Tensor Completion (LRTC) over Low-Rank Matrix Completion (LRMC), this paper takes a leap forward by employing a higher-order t-matrix model with higher-order circular convolution. This novel method forms a specific generalization of LRMC tailored to multi-way image recovery challenges, including the completion of RGB images with missing entries.
Our model is inspired by and extends the well-regarded completion algorithm proposed by Lu et al. by incorporating a higher-order methodology that exploits the intricate relationships within high-dimensional data. Evaluations of our algorithm indicate that this generalized higher-order approach exhibits favorable recovery performance compared to existing algorithms.
The practical implications of this work are multifaceted, providing solutions that not only improve visual data completion, but also offer broader applications in areas such as video recovery, data mining, and medical imaging. By integrating our higher-order generalization into existing systems, it is possible to create more efficient and robust mechanisms for handling higher-order, high-dimensional data.
The contributions of this research can be summarized as follows.
• This research uses the higher-order t-matrix model, which generalizes low-rank matrix completion, to recover RGB images with missing entries. The model uses a higher-order methodology to exploit complex relationships within high-dimensional data. The proposed method, termed "Higher-order TNN", compares favorably with its lower-order counterparts in terms of recovery performance, demonstrating distinct advantages.
• This research provides consistent solutions for visual data completion that have potential for broader applications. By integrating higher-order generalization into existing systems, it lays the groundwork for more effective analysis of higher-order, high-dimensional data.
• By generalizing the matrix model over a finite-dimensional algebra, the approach employed extends several image analysis algorithms to their higher-order versions using a novel pixel neighborhood strategy.
• This research presents a consistent methodology for defining many of the notions of the t-matrix model, including rank, norm, and inner product, compared with the existing ones of the "t-product" model. This methodology provides insights into generalized scalars/matrices from the perspective of representation and operator theory. In addition, the study explores the application of higher-order Lagrange multipliers with generalized matrix variables.
The rest of the paper is organized as follows: Section 2 introduces generalized matrices (t-matrices), outlining their structure, representation, and extension potential. Section 3 describes the Low-Rank Matrix Completion (LRMC) methodology and its higher-order counterparts, covering mathematical formulations and generalizations. Section 4 provides an in-depth exploration of rank considerations, presenting different notions of rank and the concept of higher-order rank. Section 5 details experimental validation and performance analysis, using both simulated random data and real-world datasets such as the Berkeley segmentation dataset. Section 6 summarizes the content of this paper and its implications. Appendix A provides further mathematical justification, explaining the mechanism of t-scalars and t-matrices from a unique matrix perspective of representation and operator theory, along with an exploration of the Lagrange multiplier with t-matrix variables.

Generalized Matrices
A generalized matrix (t-matrix) is a rectangular array composed of elements called generalized scalars (t-scalars) [25]. Since a generalized scalar forms an array in C^{I_1×⋯×I_N}, a generalized matrix with D_1 rows and D_2 columns can be represented by a multiway complex array in C^{I_1×⋯×I_N×D_1×D_2}. While various authors, including Kilmer et al., categorize these generalized matrices as tensors [20,21,22], we use the term "generalized matrix over higher-order scalars". Using this generalized matrix model provides an opportunity to extend many existing matrix algorithms.

Generalized Scalars
Let us consider a complex array of order N to be an element of the set C, where C ≡ C^{I_1×⋯×I_N}. In parallel, a real array of order N is identified as an element of the set R, where R ≡ R^{I_1×⋯×I_N}. The sets R and C share a commutative ring structure, where the multiplication of their elements is defined by circular convolution of order N, and addition corresponds to entry-wise array addition.
Elements of C and R are called generalized scalars. In this paper, we focus primarily on C, since R is a subset of C. By further defining the multiplication of a generalized scalar by a complex number as conventional scalar multiplication, we can elevate the ring C to a finite-dimensional commutative algebra.
Using generalized scalars not only allows us to construct novel matrices, but also extends many classical matrix algorithms into the realm of generalized matrix algorithms. In 2011, Kilmer et al. pioneered the "t-product" model [20]. In this model, scalar elements in traditional matrices are replaced by fixed-size one-dimensional arrays, allowing the extension of many classical matrix algorithms.
Taking advantage of this extension, the new generalized matrices use elements from the commutative algebras R or C (the generalized scalars) and are thus matrices built on the foundation of finite-dimensional commutative algebras.
In this paper, we adopt generalized matrices inspired by the work of Kilmer et al. [20], following the model of Liao and Maybank [25]. This research, based on the notions introduced in the generalized matrix model [25], called "t-matrix", extends Kilmer et al.'s order-one generalized scalars to higher orders through a neighborhood strategy, thereby extending conventional matrices to their higher-order versions. The multi-way circular convolution of generalized scalars in the spatial domain is then translated into Hadamard multiplication in the Fourier domain via the Fourier transform, facilitating the relevant computations. For example, the following definitions are given for generalized scalars.

Definition 2.1 (Addition of Generalized Scalars [25]) Consider two generalized scalars, called t-scalars, ẋ, ẏ ∈ C^{I_1×⋯×I_N}. Their sum ċ = ẋ + ẏ is calculated element-wise, meaning the complex entry of ċ at position (i_1, …, i_N) is ċ(i_1, …, i_N) = ẋ(i_1, …, i_N) + ẏ(i_1, …, i_N).

Definition 2.2 (Multiplication of Generalized Scalars [25]) The product of two t-scalars is defined as ċ = ẋ ◦ ẏ, where ċ results from the order-N circular convolution of ẋ and ẏ. Specifically, we have

ċ(i_1, …, i_N) = Σ_{j_1=1}^{I_1} ⋯ Σ_{j_N=1}^{I_N} ẋ(j_1, …, j_N) · ẏ(i'_1, …, i'_N),  where i'_n = ((i_n − j_n) mod I_n) + 1 for all n.

While order-two t-scalars share the data structure of an order-two numerical array, they are not matrices, since their multiplication is by definition commutative. However, to describe linear transformations, it can be convenient to consider the underlying order-two arrays as matrices.
We use the notation tensor(ẋ) to elevate the underlying order-N array of ẋ to a conventional tensor of identical size and entries. With tensor(ẋ) as the conventional tensor, the multiplication of two t-scalars is equivalently characterized in the Fourier domain. Let X̂ = F(ẋ), Ŷ = F(ẏ), and Ĉ = F(ċ) be their respective multilinear Fourier transforms. For any t-scalar ẋ ∈ C, its multilinear Fourier transform is given by the following multi-mode multiplication:

F(ẋ) = tensor(ẋ) ×_1 W_1 ×_2 W_2 ⋯ ×_N W_N ,    (1)

where W_n denotes the Fourier matrix of appropriate size. Consequently, the following Hadamard product holds for all i_1, …, i_N:

Ĉ(i_1, …, i_N) = X̂(i_1, …, i_N) · Ŷ(i_1, …, i_N).

Definitions 2.1 and 2.2 qualify all t-scalars as elements of a commutative ring C. An essential operation to bring the ring C to life as an algebra is the multiplication of any element of C with a conventional scalar. This leads to the following definition.
Definition 2.3 (Scalar Multiplication [25]) Let ẋ ∈ C be a generalized scalar and λ a complex number. Their multiplication, denoted λ · ẋ, multiplies every entry of ẋ by λ, i.e., (λ · ẋ)(i_1, …, i_N) = λ · ẋ(i_1, …, i_N).

With the previous definitions as a basis, we can easily identify the identity and zero t-scalars within the algebra C. These two unique t-scalars are characterized as follows.

Proposition 2.1 (Identity T-scalar and Zero T-scalar [25]) If ė(1, …, 1) = 1 while all other entries are 0, then ė is the identity t-scalar. Alternatively, if every entry of a t-scalar is 0, it is the zero t-scalar ż. Note that each entry of the identity t-scalar is 1 in the Fourier domain, while the zero t-scalar remains unchanged in the Fourier domain.
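The equivalence between circular convolution and the Fourier-domain Hadamard product can be checked numerically. The sketch below (an illustration, not part of the original model; the function name `t_scalar_product` is ours, and NumPy's `fftn` conventions stand in for the multilinear Fourier transform of Equation 1) also verifies the identity t-scalar ė and the commutativity noted above.

```python
import numpy as np

def t_scalar_product(x, y):
    """Order-N circular convolution of two t-scalars, computed as an
    entrywise (Hadamard) product in the Fourier domain."""
    return np.fft.ifftn(np.fft.fftn(x) * np.fft.fftn(y))

# identity t-scalar: 1 at position (1, ..., 1) (index (0, 0) in NumPy), 0 elsewhere
e = np.zeros((3, 3))
e[0, 0] = 1.0
x = np.random.randn(3, 3)
y = np.random.randn(3, 3)

assert np.allclose(t_scalar_product(x, e).real, x)                  # e acts as identity
assert np.allclose(t_scalar_product(x, y), t_scalar_product(y, x))  # commutative
```

The identity check works because the Fourier transform of ė is an array of all ones, so multiplying by it leaves every Fourier entry unchanged.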

T-scalars as Finite-dimensional Linear Operators
Since every generalized scalar in the algebra C operates as a finite-dimensional commutative linear operator, operator theory allows us to determine the spectrum of any t-scalar ẋ ∈ C. The spectrum, i.e., the set of complex eigenvalues of ẋ, corresponds to the K entries of the Fourier transform X̂ = F(ẋ), where K = I_1 I_2 ⋯ I_N. With the eigenvalues (i.e., Fourier entries) of an arbitrary t-scalar at hand, let us now delve into the following definitions.
Definition 2.4 (Conjugate [25]) A unique t-scalar ẏ in C is the conjugate of a t-scalar ẋ in C if each eigenvalue of ẏ is the complex conjugate of the corresponding eigenvalue of ẋ. The conjugate is denoted by ẋ*.
Definition 2.5 (Nonnegativity [25]) A t-scalar ẋ is said to be nonnegative if and only if all of its complex eigenvalues (i.e., Fourier entries) are nonnegative real numbers.
Definition 2.5 is crucial because it facilitates the generalization of various concepts of nonnegativity, including matrix rank, space dimension, norm, and distance, to nonnegative elements in C. We will explore these generalizations as needed.
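Definitions 2.4 and 2.5 can be realized directly in the Fourier domain. A minimal sketch (function names `t_conjugate` and `t_is_nonnegative` are ours, and NumPy's `fftn` stands in for the multilinear Fourier transform) is given below; it confirms that ẋ ◦ ẋ* is always nonnegative, since its Fourier entries are the squared moduli |X̂|².

```python
import numpy as np

def t_conjugate(x):
    # conjugate every Fourier entry (eigenvalue), then transform back
    return np.fft.ifftn(np.conj(np.fft.fftn(x)))

def t_is_nonnegative(x, tol=1e-10):
    # nonnegative iff every Fourier entry is a nonnegative real number
    f = np.fft.fftn(x)
    return bool(np.all(np.abs(f.imag) < tol) and np.all(f.real >= -tol))

x = np.random.randn(3, 3)
# x ∘ x* has Fourier entries |X̂|^2, so it is always a nonnegative t-scalar
sq = np.fft.ifftn(np.fft.fftn(x) * np.fft.fftn(t_conjugate(x)))
assert t_is_nonnegative(sq)
```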

Generalized Matrices
A generalized matrix, called a t-matrix, is a rectangular array of generalized scalars. Since these generalized scalars (i.e., t-scalars) are arrays in C^{I_1×⋯×I_N}, it is logical to represent the underlying data form of a generalized matrix in C^{D_1×D_2} as an order-(N+2) array (order-four when N = 2) in C^{I_1×⋯×I_N×D_1×D_2}. We refer to this format as the little-endian representation of a t-matrix. Conversely, some authors arrange the underlying data as an array in C^{D_1×D_2×I_1×⋯×I_N}. We call this form the big-endian representation, which is used by Kilmer et al. in their paper [20]. Despite the two protocols, the conversion between the little-endian and big-endian representations is straightforward.
Because of the underlying multiway array structure of t-matrices, some authors refer to these t-matrices as tensors [30], although they are different from ordinary tensors with complex entries.
The operations on t-matrices are analogous to those on traditional matrices. Specifically, if Ẋ ∈ C^{D_1×D_2} and Ẏ ∈ C^{D_2×D_3} are two t-matrices, their product Ż = Ẋ • Ẏ ∈ C^{D_1×D_3} has entries ż(d_1, d_3) = Σ_{d_2=1}^{D_2} ẋ(d_1, d_2) ◦ ẏ(d_2, d_3), exactly as in conventional matrix multiplication but with t-scalar multiplication and addition. Similarly, constructs such as the conjugate transpose and the diagonal t-matrix can be defined analogously. For a more detailed discussion of these concepts, see [25].
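Because t-scalar multiplication becomes entrywise in the Fourier domain, the t-matrix product reduces to an ordinary matrix product on every spectral slice. A minimal sketch (the function name `t_matmul` and the little-endian axis convention, t-scalar axes first, are assumptions made here for illustration):

```python
import numpy as np

def t_matmul(X, Y):
    """Multiply t-matrices stored little-endian as arrays of shape
    (I1, ..., IN, D1, D2): FFT over the t-scalar axes, then an ordinary
    matrix product on every spectral slice, then the inverse FFT."""
    ax = tuple(range(X.ndim - 2))          # the leading t-scalar axes
    Xf = np.fft.fftn(X, axes=ax)
    Yf = np.fft.fftn(Y, axes=ax)
    Zf = Xf @ Yf                           # batched matmul over spectral slices
    return np.fft.ifftn(Zf, axes=ax)

X = np.random.randn(3, 3, 4, 5)   # t-matrix of size 4 x 5 with 3 x 3 t-scalars
Y = np.random.randn(3, 3, 5, 2)
Z = t_matmul(X, Y)
assert Z.shape == (3, 3, 4, 2)
```

NumPy's `@` operator broadcasts over the leading axes, so every spectral slice is multiplied in one batched call.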

Singular Value Decomposition of a Generalized Matrix
To exploit the structure of a generalized matrix, it is often decomposed into a sequence of simpler components, usually written in matrix form. Parallel to the compact Singular Value Decomposition (SVD) of traditional matrices, a crucial decomposition, called TSVD (Tensorial SVD), of a generalized matrix Ẋ ∈ C^{D_1×D_2} is given below:

Ẋ = U • Ṡ • V* ,    (2)

where U ∈ C^{D_1×D}, Ṡ ∈ C^{D×D}, and V ∈ C^{D_2×D}, with D ≐ min(D_1, D_2). The symbol V* denotes the conjugate transpose of the t-matrix V, and Ṡ ≐ diag(σ̇_1, ⋯, σ̇_D) is a diagonal t-matrix with nonnegative t-scalars as its diagonal entries, subject to the partial order σ̇_1 ⪰ σ̇_2 ⪰ ⋯ ⪰ σ̇_D ⪰ ż. In addition, the following generalized orthogonality constraints hold:

U* • U = V* • V = İ ,    (3)

where İ denotes the identity t-matrix, which has diagonal entries of ė and off-diagonal entries of ż.
Equation 2 defines the tensorial singular value decomposition (TSVD) of a t-matrix. Although the TSVD is not unique, numerous methods expound on its computational and operational aspects. Among these, one particularly practical method employs the mechanism of spectral slices.
Given a t-matrix Ẋ ∈ C^{D_1×D_2}, represented as a little-endian complex array in C^{I_1×⋯×I_N×D_1×D_2}, let tensor(Ẋ) map this underlying array into a conventional tensor with identical size and entries.
Complying with Equation 1, the Fourier transform of Ẋ can be expressed as the following multi-mode multiplication:

X̂ ≐ F(Ẋ) = tensor(Ẋ) ×_1 W_1 ×_2 W_2 ⋯ ×_N W_N .    (4)

Since all operations on t-scalars in the Fourier domain are Fourier-entry-wise, the following definition can be used to decompose t-matrices in the Fourier domain and to establish further constructs.
Definition 2.6 (Spectral Slice [25]) Any t-matrix transform X̂ as defined in Equation 4 can be partitioned into K spectral slices, where K = I_1 I_2 ⋯ I_N. Each spectral slice, indexed by (i_1, …, i_N), is a conventional complex matrix X̂(i_1, …, i_N) ∈ C^{D_1×D_2} satisfying, for all i_1, …, i_N and all (d_1, d_2),

X̂(i_1, …, i_N)(d_1, d_2) = X̂(i_1, …, i_N, d_1, d_2).    (5)

Using spectral slices, various constructs can be introduced. For example, the Tensorial Singular Value Decomposition (TSVD) of a generalized matrix is outlined in Algorithm 1.
Algorithm 1 Tensorial Singular Value Decomposition via Spectral Slices
1: procedure [U, Ṡ, V] = TSVD(Ẋ)
2: Apply Equation 4 to compute the transform X̂ from Ẋ.
3: for each slice index (i_1, …, i_N), compute the compact SVD of the spectral slice X̂(i_1, …, i_N) = U · S · V^H.
4: Store U, S, and V^H in the corresponding spectral slices of Ũ, S̃, and Ṽ.
5: end for
6: Apply the inverse transform F^{−1} to each of Ũ, S̃, and Ṽ to obtain U, Ṡ, and V, respectively.
7: end procedure

An analogous slice-wise mechanism yields the tensorial singular value thresholding used later for completion, outlined in Algorithm 2.

Algorithm 2 Tensorial Singular Value Thresholding via Spectral Slices
1: procedure Ẏ = TSVT(Ẋ, τ)
2: Use Equation 4 to compute the transform X̂ of Ẋ.
3: for each slice index (i_1, …, i_N), compute the compact SVD of the spectral slice X̂(i_1, …, i_N) = U · S · V^H.
4: Apply singular value thresholding to the diagonal of S.
5: Store the result of the singular value thresholding in the corresponding spectral slice of Ỹ.
6: end for
7: Apply the inverse transform F^{−1} to Ỹ to obtain Ẏ, the approximation of Ẋ.
8: end procedure
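Both slice-wise procedures are short in NumPy. The sketch below (function names `tsvd` and `tsvt` are ours; the little-endian layout with t-scalar axes leading, the use of `fftn` for Equation 4, and the soft-thresholding rule max(S − τ, 0) are illustrative assumptions) verifies that the TSVD factors reconstruct Ẋ and that thresholding with τ = 0 is lossless.

```python
import numpy as np

def _fft(X, N):   # transform over the leading N t-scalar axes (Equation 4)
    return np.fft.fftn(X, axes=tuple(range(N)))

def _ifft(X, N):
    return np.fft.ifftn(X, axes=tuple(range(N)))

def tsvd(X, N):
    """Algorithm 1 sketch: compact SVD of every spectral slice."""
    Uf, sf, Vhf = np.linalg.svd(_fft(X, N), full_matrices=False)
    Sf = sf[..., None, :] * np.eye(sf.shape[-1])   # diagonal slice matrices
    return _ifft(Uf, N), _ifft(Sf, N), _ifft(Vhf, N)

def tsvt(X, N, tau):
    """Algorithm 2 sketch: shrink each slice's singular values by tau."""
    Uf, sf, Vhf = np.linalg.svd(_fft(X, N), full_matrices=False)
    sf = np.maximum(sf - tau, 0.0)
    return _ifft((Uf * sf[..., None, :]) @ Vhf, N)

X = np.random.randn(3, 3, 5, 4)          # order-2 t-scalars, 5 x 4 t-matrix
U, S, Vh = tsvd(X, 2)
# verify the factorization in the Fourier domain, slice by slice
Xf_rec = _fft(U, 2) @ _fft(S, 2) @ _fft(Vh, 2)
assert np.allclose(_ifft(Xf_rec, 2).real, X)
assert np.allclose(tsvt(X, 2, 0.0).real, X)   # zero threshold is lossless
```

`np.linalg.svd` batches over the leading axes, so one call decomposes all K spectral slices at once.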

Low-Rank Matrix Completion and Its Generalizations
Besides the generalization of SVD to its higher-order counterpart TSVD, many other matrix algorithms can be extended analogously over t-scalars.One of them is the so-called low-rank matrix completion (LRMC) problem.

Low Rank Matrix Completion
A variant of the matrix completion problem is to determine the minimum-rank matrix X ∈ R^{D_1×D_2} that matches the desired matrix M for all observed entries within the index set Ω [5]. This problem can be expressed mathematically as

minimize_X rank(X)  subject to  X_{i,j} = M_{i,j}, ∀(i, j) ∈ Ω.    (6)
Given the NP-hard nature of the initial minimization problem, in most practical scenarios the solution to Equation 6 can most likely be reformulated as the solution to the following convex optimization problem:

minimize_X ‖X‖_*  subject to  G_Ω(X) = G_Ω(M),    (7)

where ‖·‖_* denotes the nuclear norm and G_Ω : R^{D_1×D_2} → R^{D_1×D_2} represents the linear operator that preserves entries within the set Ω and sets entries outside Ω to zero.
Introducing an auxiliary variable E with G_Ω(E) = 0, so that the constraint becomes X + E = M, the augmented Lagrange multiplier function for the minimization problem is formalized as

L(X, E, Y) = ‖X‖_* + ⟨Y, X + E − M⟩ + (τ/2) ‖X + E − M‖_F² ,

where Y is the dual variable and τ > 0.
The Alternating Direction Method of Multipliers (ADMM) [28,29] can be used to iteratively refine the optimization variables X and E, as described in Algorithm 3.

Algorithm 3 ADMM for solving Equation 7
1: procedure X_COMP = MatrixCompletion(M, Ω)
2: Initialize X_0, E_0, Y_0, and k ← 0
3: Set the missing entries of M, i.e., (i, j) ∈ Ω^c, to zero
4: while neither convergence nor the predefined maximum number of iterations is achieved do
5: Update X_{k+1} by singular value thresholding of (M − E_k − τ^{−1} Y_k)
6: Update E_{k+1} ← G_{Ω^c}(M − X_{k+1} − τ^{−1} Y_k)
7: Update the dual variable Y_{k+1} ← Y_k + τ (X_{k+1} + E_{k+1} − M)
8: k ← k + 1
9: end while
10: X_COMP ← X_k
11: end procedure
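One common realization of this ADMM scheme can be sketched as follows. The code is an illustration, not the authors' implementation: the function names, the fixed penalty τ, the iteration count, and the convergence check (a fixed number of iterations) are all choices made here for brevity.

```python
import numpy as np

def svt(A, tau):
    # singular value thresholding: shrink singular values by tau
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def matrix_completion(M, mask, tau=1.0, iters=500):
    """ADMM sketch for Equation 7: recover a low-rank X agreeing with M
    on the observed entries (mask == True)."""
    M = np.where(mask, M, 0.0)            # zero the missing entries
    X = M.copy()
    E = np.zeros_like(M)                  # supported on the complement of Omega
    Y = np.zeros_like(M)                  # dual variable
    for _ in range(iters):
        X = svt(M - E - Y / tau, 1.0 / tau)
        E = np.where(mask, 0.0, M - X - Y / tau)
        Y = Y + tau * (X + E - M)
    return X

# rank-2 ground truth, roughly half the entries observed
rng = np.random.default_rng(0)
G = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((30, 30)) < 0.5
X = matrix_completion(G, mask)
err = np.linalg.norm(X - G) / np.linalg.norm(G)
baseline = np.linalg.norm(np.where(mask, G, 0.0) - G) / np.linalg.norm(G)
assert err < baseline   # completion improves on the zero-filled observation
```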

Generalization of Matrix Completion over Higher-Order Generalized Scalars
Following the application of ADMM to the optimization of Equation 7, many authors have proposed methods to extend the completion process to third-order arrays. For example, using Kilmer et al.'s "t-product" model, Lu et al. extended the above completion approach to third-order tensors [23].
Although called tensor algorithms, Lu et al.'s approach [23] and other variants [10,24,27] are essentially matrix completion algorithms operating on first-order generalized scalars. However, as noted above, the order of generalized scalars can be defined to be higher. Since higher-order arrays encapsulate more structural information than their lower-order counterparts in real-world scenarios, we exploit this aspect by increasing the order of the arrays via a pixel neighborhood strategy, originally introduced in [31] but largely overlooked by the research community.
Specifically, Figure 1 shows the application of a "3 × 3 pixel neighborhood" strategy to increase the order of a 4 × 4 pixel grayscale image. Note that the figure represents the resulting order-four array as a two-dimensional array of two-dimensional blocks, aligning the underlying array format (as in Equation 4) with that of generalized scalars (as in Equation 1).
Since each pixel value can be elevated to a generalized scalar (i.e., a t-scalar), allowing the conversion of a traditional matrix into a generalized one with higher-order fixed-size arrays as entries, the extension of Algorithm 3 to generalized matrix completion is straightforward. The generalization of line 5 in Algorithm 3 can be achieved using TSVD-based thresholding, as described in Algorithm 2. Meanwhile, line 6 in Algorithm 3 is extended by the elevated linear operator G_Θ, which preserves entries within the enhanced set Θ and sets those outside Θ to the zero t-scalar ż.
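The pixel neighborhood lifting can be sketched as follows. This is an illustration of the idea, not the paper's exact procedure: the function name `neighborhood_lift` is ours, and circular (wrap-around) boundary handling is one convention assumed here.

```python
import numpy as np

def neighborhood_lift(img, k=3):
    """Lift an H x W image to a k x k x H x W array: slice (a, b, :, :) is the
    image circularly shifted so that each pixel's (a, b)-neighbor sits at that
    pixel's own position. Each pixel thus becomes an order-two t-scalar of
    size k x k holding its local neighborhood."""
    r = k // 2
    out = np.empty((k, k) + img.shape, dtype=img.dtype)
    for a in range(k):
        for b in range(k):
            out[a, b] = np.roll(img, shift=(a - r, b - r), axis=(0, 1))
    return out

img = np.arange(16.0).reshape(4, 4)      # the 4 x 4 grayscale example of Figure 1
lifted = neighborhood_lift(img)
assert lifted.shape == (3, 3, 4, 4)
assert np.allclose(lifted[1, 1], img)    # the central slice is the image itself
```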
We propose a generalized matrix (t-matrix) completion algorithm for recovering multispectral images with missing values, as described in Algorithm 4, where M ∈ R^{D_1×D_2×D_3}, Ω represents a random non-empty proper subset of the Cartesian product {1, …, D_1} × {1, …, D_2} × {1, …, D_3}, and the size of the t-scalars is I_1 × I_2 × D_3, where I_1 and I_2 are both odd numbers.
Algorithm 4 Higher-Order TNN: ADMM for recovering an image with missing values
1: procedure X_COMP = SpectralImageCompletion(M, Ω)
2: Assign an improbable value, such as −1, to the missing entries indexed by Ω.
3-4: Apply the pixel neighborhood strategy to up-convert M into an array M_UP ∈ R^{I_1×I_2×D_3×D_1×D_2}, and determine the enhanced index set Θ from Ω.
5: Convert M_UP into a generalized matrix Ṁ ∈ C^{D_1×D_2} ≡ R^{I_1×I_2×D_3×D_1×D_2} by permuting the indices of the array.
6-13: Run the ADMM iterations of Algorithm 3 on Ṁ, with the singular value thresholding of line 5 replaced by the TSVT of Algorithm 2 and the operator G_Ω replaced by G_Θ, yielding the optimal generalized matrix Ẋ_k.
14: Use the row-index-first (MATLAB-compliant) protocol to reshape Ẋ_k into an I_1 I_2 × D_3 D_1 D_2 matrix, extract the central row, and subsequently reshape it with the row-index-first protocol into an array X_DOWN ∈ R^{D_3×D_1×D_2}.
15: Permute the indices of X_DOWN to convert it into an array in R^{D_1×D_2×D_3}.
16: Adjust the entries of X_DOWN to nonnegative integers and store the adjusted array as X_COMP, the recovered multispectral image.
17: end procedure

Lines 3, 4 and 5 of the proposed algorithm up-convert the input multispectral image M, an initial third-order array, into a generalized matrix. Conversely, lines 14 and 15 down-convert the optimal generalized matrix Ẋ_k into a third-order array X_DOWN. Algorithm 4 is based on the tensor completion algorithm of Lu et al. [23]. Section 5 of this paper focuses primarily on its empirical validation.

Rank Considerations
Matrix completion aims to recover a complete low-rank matrix. Its generalization aims to recover an analogous higher-order t-matrix. However, the rank of a t-matrix has not yet been specified, so let us look at some novel rank notions defined for higher-order arrays.

Tubal Rank and Average Rank
An RGB image, a special case of multispectral images, consists of three monochromatic channels.
Each channel of a real RGB image can be adequately approximated by a lower-rank matrix. However, when viewed as a third-order tensor, the canonical rank of an RGB image, defined by the minimal number of rank-one tensor addends, becomes computationally intractable. Consequently, the canonical tensor rank is unsuitable for modeling the optimal recovery of an RGB image.
Kilmer et al.'s approach is to introduce a novel rank concept, called tubal rank, for a third-order array.Specifically, for a given third-order array Ẋ, with its TSVD defined as Ẋ = U • Ṡ • V * , the tubal rank of Ẋ corresponds to the number of non-zero (i.e., not equal to ż) diagonal t-scalars in Ṡ.
Nevertheless, by this definition, a t-matrix of full tubal rank can consist of a full-rank matrix as one of its spectral slices, with all other spectral slices being zero matrices.
To address this problem, Lu et al. proposed defining the average of all spectral slice ranks as the "average rank" of a generalized matrix [27]. This "average rank" definition is more appropriate than the tubal rank. However, this term is only used in Lu et al.'s generalization of robust component analysis [27], not in the higher-order array recovery problem presented in [23]. Moreover, despite the potential for the "average rank" to be fractional, its mathematical justification has not been adequately addressed.

Higher-Order Rank and Its Trace Variant
In addition to the tubal rank of Kilmer et al. and the average rank of Lu et al., another relevant concept is the higher-order rank introduced by Liao and Maybank in their paper [25]. Specifically, given a t-matrix Ẋ with its tensor singular value decomposition (TSVD) Ẋ = U • Ṡ • V*, the higher-order rank of Ẋ is a nonnegative t-scalar. It is computed as the sum of the diagonal entries of the product Ṡ† • Ṡ, where Ṡ† denotes the pseudoinverse of Ṡ.
The above definition corresponds to the analogous concept for traditional matrices. The pseudoinverse of a t-matrix can be computed using spectral slices, similar to Algorithms 1 and 2. Specifically, the pseudoinverse Ẋ† of a t-matrix is defined by assigning the pseudoinverse of each spectral slice to the corresponding slice of the result. It is not difficult to verify that the pseudoinverse Ẋ† defined above is equal to the product V • Ṡ† • U*. From the previous definition, it is easy to see that the higher-order rank of any t-matrix is a nonnegative t-scalar. It can be sorted alongside comparable nonnegative counterparts using the partial order introduced in Section 2.2. However, we sometimes prefer a more efficient, fully ordered rank notion, similar to those proposed by Kilmer et al. and Lu et al., as opposed to the partially ordered higher-order rank. Reassuringly, the Szpilrajn extension theorem asserts that the partially ordered rank system proposed in [25] can always be extended to a fully ordered construct.
There are several strategies for transforming the higher-order rank system into fully ordered equivalents. For any higher-order rank of a t-matrix, all spectral points (i.e., Fourier entries) are nonnegative integers. Consequently, Kilmer et al.'s tubal rank denotes the maximum value among these spectral points, while Lu et al.'s average rank is equal to their arithmetic mean.
Typically, in most scenarios, the average rank of Lu et al. is considered a superior statistic for a higher-order rank. However, to avoid fractional rank values, we propose to use the sum, rather than the arithmetic mean, of the spectral points of a higher-order rank to define its corresponding fully ordered rank. Furthermore, since every t-scalar also functions as a finite-dimensional linear endomorphic operator, the previously defined "sum rank" of a t-matrix is equivalent to the trace of the higher-order rank, so it is appropriate to formally label it the "trace rank".
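All three fully ordered notions can be read off the ranks of the spectral slices. A minimal numerical check (the helper name `spectral_slice_ranks`, the little-endian layout, and the construction of a low-tubal-rank t-matrix as a product of two random factors are illustrative assumptions):

```python
import numpy as np

def spectral_slice_ranks(X, N, tol=1e-8):
    """Matrix rank of every spectral slice of a t-matrix stored as an
    (I1, ..., IN, D1, D2) array."""
    Xf = np.fft.fftn(X, axes=tuple(range(N)))
    flat = Xf.reshape(-1, X.shape[-2], X.shape[-1])
    return np.array([np.linalg.matrix_rank(S, tol=tol) for S in flat])

# build a t-matrix whose tubal rank is (with high probability) r = 2
I1, I2, D, r = 3, 3, 6, 2
P = np.random.randn(I1, I2, D, r)
Q = np.random.randn(I1, I2, r, D)
Pf = np.fft.fftn(P, axes=(0, 1))
Qf = np.fft.fftn(Q, axes=(0, 1))
Y = np.fft.ifftn(Pf @ Qf, axes=(0, 1))

ranks = spectral_slice_ranks(Y, 2)
tubal_rank = ranks.max()        # Kilmer et al.
average_rank = ranks.mean()     # Lu et al. (may be fractional)
trace_rank = ranks.sum()        # the fully ordered "trace rank" used here
assert tubal_rank == r and trace_rank == I1 * I2 * r
```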
The soundness of the above definitions can be established using representation theory, well known in the mathematical community. We give a brief discussion of representation theory and its application to justify these definitions in Appendix A.

Experiments
This section presents experimental validation and performance analysis of the related algorithms.

Experiments on Simulated Random Data
To evaluate the completion capability of the proposed Higher-order TNN algorithm, we use simulated random t-matrices for verification. Specifically, we generate two random t-matrices Ṗ ∈ C^{D×r} and Q̇ ∈ C^{r×D}, represented as arrays in R^{I_1×I_2×I_3×D×r} and R^{I_1×I_2×I_3×r×D}, where I_1×I_2×I_3 = 3×3×3 and r < D. The parameter r is (with high probability) the tubal rank as defined by Kilmer et al. in [20,30]. The real numbers in the underlying arrays of Ṗ and Q̇ are independently sampled from the distribution N(0, 1).
The product Ẏ = Ṗ • Q̇ gives a random t-matrix in C^{D×D} ≡ R^{I_1×I_2×I_3×D×D}. The trace rank of Ẏ is, with high probability, rank_trace(Ẏ) = I_1 I_2 I_3 · r. From the underlying array of Ẏ, we uniformly select entries to simulate missing data. The resulting incomplete Ẏ, with a varying percentage of missing entries and rank parameter r, serves as input to the Higher-order TNN algorithm.
The Higher-order TNN algorithm produces a t-matrix Ẋ ∈ C^{D×D} ≡ R^{I_1×I_2×I_3×D×D}, which serves as an estimate of Ẏ. If RSE ≐ ‖tensor(Ẏ) − tensor(Ẋ)‖_F / ‖tensor(Ẏ)‖_F is less than a threshold, the completion by the Higher-order TNN algorithm is considered successful.
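The RSE success criterion is straightforward to compute on the underlying arrays. A minimal sketch (the function name `rse` is ours; the perturbation below merely illustrates the metric, not the completion algorithm itself):

```python
import numpy as np

def rse(Y_true, Y_est):
    """Relative squared error between the underlying arrays of two t-matrices:
    ||Y_est - Y_true||_F / ||Y_true||_F over all entries."""
    return np.linalg.norm(Y_est - Y_true) / np.linalg.norm(Y_true)

Y = np.random.randn(3, 3, 3, 10, 10)        # underlying array of a t-matrix
noisy = Y + 1e-3 * np.random.randn(*Y.shape)

assert rse(Y, Y) == 0.0
assert rse(Y, noisy) < 1e-2                 # small perturbation, small RSE
```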
Figure 2 illustrates the RSE distributions and phase transitions for fifth-order array completions using the proposed higher-order TNN.

Experiments on BSD Color Images
In the following experiments, we use the Berkeley segmentation dataset as a benchmark to compare the performance of four related algorithms: Tubal-NN [10], T-TNN [24], TNN [23], and our Higher-order TNN. Three RGB images, namely "Resort", "Insect", and "Seagulls", are selected for the first experiment. These images are represented as 321 × 481 × 3 unsigned integer arrays.
To compare the completion performance, we randomly select 70% of the pixel values of each image as "missing" entries.The uncompleted observed images, with missing values set to zero, provide a visual representation in Figure 4, which shows the original complete images alongside their incomplete versions.
We use the three competing algorithms and our Higher-Order TNN, described in Algorithm 4, to obtain an optimal, complete RGB image of equal size. The quality of the image completion is quantified by the peak signal-to-noise ratio (PSNR), defined as PSNR = 10 · log₁₀(255² / MSE), where MSE denotes the mean squared error between the original and recovered images. In our second experiment, we use three different images, "Temple", "Chapel", and "Grass-flower", each with 50% of their entries randomly missing, to perform completions analogous to those in the first experiment, where the Berkeley images each had 70% of their entries missing. Figure 6 shows the original images along with their incomplete versions.
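The PSNR measure for 8-bit images can be sketched as follows (the function name `psnr` is ours; the peak value 255 corresponds to the unsigned-integer images used in the experiments):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between an 8-bit reference
    image and its reconstruction."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)  # mean squared error over all entries
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates better reconstruction; identical images give an infinite PSNR, so the measure is only evaluated on imperfect recoveries.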
Figure 6 also provides visual and quantitative comparisons of the four image completion algorithms. Consistent with the results of the first experiment, the Higher-Order TNN significantly outperforms the other algorithms, achieving PSNR gains of at least 1.5 dB, 1.2 dB, and 2.2 dB on the three images, respectively.
Beyond the experiments on these six RGB images, we extend the evaluation to 10 randomly selected RGB images from the Berkeley Segmentation Dataset, with the percentage of missing entries varied over {0.1, 0.2, …, 0.9}.
Figure 7 shows the 10 randomly selected RGB images used in these experiments. Figure 8 shows the PSNR heatmaps for the four algorithms: Tubal-NN [10], T-TNN [24], TNN [23], and our Higher-Order TNN, on the images shown in Figure 7. It is worth noting that similar results were obtained in experiments on other randomly selected images not included in this paper.
These results demonstrate the superior performance of the higher-order TNN, as it outperforms its counterparts in terms of PSNR.
For a clearer demonstration, Figure 9 shows the PSNR gains of our Higher-Order TNN over Tubal-NN, T-TNN, and TNN. It shows that the PSNR gains of the Higher-Order TNN over its counterparts are consistently positive across the tested missing-entry percentages.
(a) Tubal-NN [10] (b) T-TNN [24] (c) TNN [23] (d) Higher-Order TNN (ours). PSNR comparison of four competing algorithms.

Conclusions
In this paper, we consider the problem of higher-order array completion using the higher-order t-matrix model. By adopting a consistent algebraic framework, we generalize low-rank matrix completion to the recovery of RGB images with missing entries. Our proposed Higher-Order TNN method outperforms competitors such as Tubal-TNN, T-TNN, and TNN in terms of recovery performance, demonstrating its ability to exploit higher-order relationships within high-dimensional data.
Our solution not only improves visual data completion but also paves the way for broader applications. Integrating higher-order generalization into existing systems lays the foundation for more robust and efficient handling of higher-order, high-dimensional data.
We demonstrate this with a generalization of Lu et al.'s tensor completion algorithm to its higher-order version by formulating its matrix model over a finite-dimensional algebra. This is achieved through a novel pixel neighborhood strategy.
The study also provides a consistent methodology for exploring various properties of the t-matrix model, including the notions of rank, norm, and inner product. Compared to the existing "t-product" model, our approach offers new insights into generalized scalars and matrices from the perspective of representation and operator theory. Moreover, the higher-order Lagrange multipliers with generalized matrix variables add to our contributions.
The novel "trace rank", nuclear norm, and Schatten p-norm, together with the adaptation of the recent tensor completion algorithm by Lu et al. to higher-order scenarios, highlight our results. The experiments on public images emphasize the competitive advantage of our higher-order matrix completion algorithm in RGB image recovery.

A. Appendix: A Mathematical Justification
The ADMM optimization for low-rank matrix completion described in Algorithm 3 relies on the Lagrange multiplier given in Equation 8. However, the generalized ADMM optimization presented in Algorithm 4 currently lacks a corresponding Lagrange multiplier. To construct a valid Lagrange multiplier with t-matrix variables, we first develop matrix representations of t-scalars and t-matrices.

A.1. Matrix Representation for t-Scalars and Higher-Order Measures

In this appendix, we offer an organized exposition of t-matrices via representation theory, an area that has yet to gain widespread recognition in computer science. A representation of an algebra C requires a vector space V and a homomorphism from C into End(V), the endomorphism algebra of V.
The representation of the algebra C allows us to represent each t-scalar in C ≡ C^{I₁×⋯×I_N} as a diagonal complex matrix. The diagonal entries of the matrix correspond to the Fourier components of the t-scalar, resulting in the following mapping for all ẋ ∈ C: ẋ ↦ M(ẋ) .= diag(F₁(ẋ), …, F_K(ẋ)). Here, F₁(ẋ), …, F_K(ẋ) represent the Fourier entries of F(ẋ).
The conjugate ẋ* maps to the conjugate transpose of M(ẋ), giving the one-to-one mapping ẋ* ↦ M(ẋ)^H for all ẋ ∈ C. Since any t-scalar ẋ is a normal operator, i.e., ẋ* • ẋ = ẋ • ẋ*, there exists a unique non-negative square root |ẋ| .= √(ẋ* • ẋ), which leads to the one-to-one mapping |ẋ| ↦ diag(|F₁(ẋ)|, …, |F_K(ẋ)|). In operator theory, such a non-negative square root is called a positive operator, or alternatively a "non-negative operator" by Definition 2.5, despite its less common usage. This non-negative operator (t-scalar) can appropriately be called the higher-order absolute value of ẋ.
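The diagonal representation of a t-scalar is easy to verify numerically: the map sends t-scalar multiplication (multiway circular convolution) to ordinary matrix multiplication of diagonal matrices. A minimal sketch, assuming the Fourier components are given by the multidimensional DFT and using our own function names `M` and `t_mul`:

```python
import numpy as np

def M(x):
    """Diagonal matrix representation of a t-scalar: a diagonal matrix
    whose entries are the Fourier components of x."""
    return np.diag(np.fft.fftn(x).ravel())

def t_mul(x, y):
    """t-scalar multiplication: multiway circular convolution,
    computed as entrywise multiplication in the Fourier domain."""
    return np.fft.ifftn(np.fft.fftn(x) * np.fft.fftn(y)).real

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))
y = rng.standard_normal((2, 3))
```

Because the representation is an algebra homomorphism, M(x • y) should equal M(x) · M(y) up to floating-point error.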
In addition, a t-scalar behaves as an endomorphism within the finite-dimensional algebra C and as such has a trace. The trace of any t-scalar ẋ can be computed as trace(ẋ) = Σ_{k=1}^{K} F_k(ẋ). When the t-scalar is nonnegative (a nonnegative operator), this trace is a nonnegative real number. This property elevates the partial order of nonnegative t-scalars to a total order. Therefore, the trace of the higher-order absolute value √(ẋ* • ẋ), i.e., the trace norm of ẋ ∈ C, is well defined.
The trace norm of a t-scalar equals the nuclear norm of its matrix representation. For any t-scalar ẋ, this relation can be expressed as ∥ẋ∥_tr .= trace(|ẋ|) = Σ_{k=1}^{K} |F_k(ẋ)| = ∥M(ẋ)∥_*, where K is the dimension of the algebra C.
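Both the trace and the trace norm reduce to sums over the Fourier components, so they can be checked directly against the diagonal matrix representation. A sketch under the same DFT assumption as above (function names `t_trace` and `trace_norm` are ours):

```python
import numpy as np

def t_trace(x):
    """Trace of a t-scalar: the sum of its Fourier components."""
    return np.fft.fftn(x).sum()

def trace_norm(x):
    """Trace norm of a t-scalar: trace of its higher-order absolute
    value, i.e. the sum of the moduli of its Fourier components."""
    return np.abs(np.fft.fftn(x)).sum()

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 2, 2))  # K = 8
```

For a real t-scalar, summing all K Fourier components collapses to K times the zero-index entry, and the trace norm matches the nuclear norm of the diagonal representation.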

A.2. A Representation Model for T-Matrices and Higher-Order Measures
Consider a t-matrix Ẋ ∈ C^{D₁×D₂}, characterized by its higher-order singular values σ̇₁ ⪰ σ̇₂ ⪰ ⋯ ⪰ σ̇_n ⪰ ⋯ ⪰ ż. The higher-order Schatten p-norm of Ẋ is defined by N_p(Ẋ) .= (Σ_i σ̇_i^p)^{1/p}. A t-matrix Ẋ ∈ C^{D₁×D₂} can be represented by placing the diagonal matrix of each t-scalar entry into its corresponding block of a larger matrix. If the diagonal matrix of a t-scalar is of size K × K, the resulting matrix representation of Ẋ lies in C^{KD₁×KD₂}, as shown by Liao et al. [32].
However, matrix representations are not unique. In addition to the above format, a more convenient representation uses the direct sum of matrices, also known as the block diagonal sum. This operation combines several matrices into a larger one, with the summand matrices arranged along the main diagonal and the off-diagonal blocks filled with zeros.
Given a t-matrix Ẋ ∈ C^{D₁×D₂} with spectral slices X̂₁, …, X̂_K, the direct sum representation of Ẋ is established via the bijective mapping Ẋ ↦ M(Ẋ) .= ⊕_{k=1}^{K} X̂_k. This mapping extends the bijective mapping for t-scalars given in Equation 10. Kilmer et al. proposed the first version of this mapping for the analysis of generalized matrices with order-one entries [20, 30]. However, the convenience of its direct sum properties has been largely overlooked by subsequent authors, indicating the underexplored potential of this matrix representation.
It is easy to see that Ẋ* ↦ M(Ẋ)^H = ⊕_{k=1}^{K} X̂_k^H. Furthermore, given the one-to-one nature of the mapping, we can define the rank of Ẋ by the rank of M(Ẋ), which leads to the following equation: rank(Ẋ) .= rank M(Ẋ) = Σ_{k=1}^{K} rank X̂_k. This definition corresponds to the "trace rank" discussed in Section 4.2. The direct sum representation also provides a mathematical justification for Algorithm 1. In particular, if the singular value decomposition of the summand X̂_k is X̂_k = U_k · S_k · V_k^H, then M(Ẋ) = (⊕_{k=1}^{K} U_k) · (⊕_{k=1}^{K} S_k) · (⊕_{k=1}^{K} V_k)^H holds for all Ẋ. In addition, the following equation, which is the TSVD of Ẋ, is valid: Ẋ = M^{-1}(⊕_k U_k) • M^{-1}(⊕_k S_k) • M^{-*}(⊕_k V_k), where M^{-*}(•) is shorthand for (M^{-1}(•))*.
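The direct sum representation and its homomorphism property can be illustrated concretely: arrange the Fourier (spectral) slices of a t-matrix along the diagonal of a block matrix, and the t-product becomes ordinary matrix multiplication. A sketch under the DFT assumption used throughout (the function name `direct_sum_rep` is ours):

```python
import numpy as np

def direct_sum_rep(X):
    """Block-diagonal (direct sum) matrix representation of a t-matrix
    stored as an array of shape (I1, I2, I3, D1, D2): the K = I1*I2*I3
    spectral slices are placed along the main diagonal."""
    Xf = np.fft.fftn(X, axes=(0, 1, 2))
    slices = Xf.reshape(-1, X.shape[-2], X.shape[-1])
    K, D1, D2 = slices.shape
    Mx = np.zeros((K * D1, K * D2), dtype=complex)
    for k, S in enumerate(slices):
        Mx[k * D1:(k + 1) * D1, k * D2:(k + 1) * D2] = S
    return Mx

rng = np.random.default_rng(0)
P = rng.standard_normal((2, 2, 2, 4, 2))
Q = rng.standard_normal((2, 2, 2, 2, 4))
# t-product of P and Q via slice-wise multiplication in the Fourier domain
Y = np.fft.ifftn(np.fft.fftn(P, axes=(0, 1, 2)) @ np.fft.fftn(Q, axes=(0, 1, 2)),
                 axes=(0, 1, 2)).real
```

The rank of the block-diagonal representation is the sum of the slice ranks, which is the trace rank discussed in Section 4.2.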

A.3. Lagrange Multiplier with t-matrix variables
The direct sum properties also validate the spectral-slice-wise mechanism exhibited in the algorithms above. The real-valued Schatten 2-norm ∥Ẋ∥₂ can also be derived from the higher-order Schatten 2-norm N₂(Ẋ) given in Equation 16. Consequently, for all t-matrices Ẋ, ∥Ẋ∥₂ .= ∥M(Ẋ)∥_F = √(trace(N₂(Ẋ)²)). It is important to note that ∥Ẋ∥₂ is measured over the spectral slices and the Fourier transform is not isometric. Therefore, unlike the case of conventional matrices, the Schatten 2-norm ∥Ẋ∥₂ and the Frobenius norm ∥tensor(Ẋ)∥_F are different: for any t-matrix Ẋ, the equality ∥Ẋ∥₂ = √K · ∥tensor(Ẋ)∥_F holds.
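The √K relation between the Schatten 2-norm and the Frobenius norm follows from Parseval's identity for the unnormalized DFT, and is easy to confirm numerically (the slice-wise norm of the block-diagonal representation equals the norm of the FFT of the underlying array):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3, 3, 4, 5))
K = 3 * 3 * 3

# Frobenius norm of the block-diagonal representation M(X): since the
# blocks are exactly the Fourier slices, this equals the norm of fftn(X)
schatten2 = np.linalg.norm(np.fft.fftn(X, axes=(0, 1, 2)))
frobenius = np.linalg.norm(X)  # Frobenius norm of the underlying array
```

By Parseval's identity, the squared Fourier-domain norm is K times the squared spatial-domain norm, giving the √K factor.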
Similarly, the nuclear norm of any t-matrix Ẋ is defined by the nuclear norm of M(Ẋ). As with the Schatten 2-norm, the real-valued nuclear norm ∥Ẋ∥_* can be derived from its higher-order counterpart N₁(Ẋ) as follows: ∥Ẋ∥_* .= ∥M(Ẋ)∥_* = trace(N₁(Ẋ)). A real inner product ⟨Ẋ, Ẏ⟩ of a pair of t-matrices Ẋ, Ẏ is now required to formulate the full Lagrange multiplier in Equation 21. However, if the inner product ⟨Ẋ, Ẏ⟩ is defined as trace(M(Ẋ)^H M(Ẏ)), it is in general complex.
To ensure a real-valued inner product rather than a complex-valued one, a feasible strategy is to isomorphically map M(Ẋ) to a real matrix for any t-matrix Ẋ.
According to representation theory, any complex number a + b√−1 can be represented as a 2 × 2 real matrix via the mapping a + b√−1 ↦ [[a, −b], [b, a]]. By replacing each complex entry of M(Ẋ) ∈ C^{KD₁×KD₂} with its 2 × 2 real equivalent, the complex matrix M(Ẋ) can be isomorphically transformed into a real matrix R(Ẋ) ∈ R^{2KD₁×2KD₂}. Thus, the real inner product for any pair of t-matrices Ẋ and Ẏ can be defined as ⟨Ẋ, Ẏ⟩ .= (1/2) · trace(R(Ẋ)^T R(Ẏ)). The coefficient 1/2 is essential to account for the doubling of squared norms in the real representation.
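The 2 × 2 real representation of a complex number is a ring homomorphism, and its Frobenius norm carries an extra √2 factor, which is exactly why the coefficient 1/2 appears in the inner product. A minimal sketch (the function name `real2x2` is ours):

```python
import numpy as np

def real2x2(z):
    """Represent the complex number a + b*sqrt(-1) as the 2x2 real
    matrix [[a, -b], [b, a]]."""
    z = complex(z)
    return np.array([[z.real, -z.imag], [z.imag, z.real]])
```

Multiplication of complex numbers corresponds to multiplication of their real representations, and ∥real2x2(z)∥_F = √2 · |z|.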
The remaining discussion is standard in convex analysis with the Lagrange multiplier given by Equation 21 and has been studied extensively by previous authors. It is therefore beyond the scope of this appendix.
Here, Ω represents a random non-empty proper subset of the Cartesian product [D₁] × [D₂]. Furthermore, D_τ in line 5 denotes the SVT (singular value thresholding) operator with threshold τ.
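The SVT operator D_τ soft-thresholds the singular values of its argument. A minimal matrix-level sketch (the generalized algorithm applies this slice-wise in the Fourier domain; the function name `svt` is ours):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: shrink each singular value of A
    by tau, clipping at zero, and reassemble the matrix."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh  # scale columns of U by shrunk values
```

With τ = 0 the operator is the identity; larger thresholds progressively reduce the rank of the result.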

Figure 4
Figure 4 presents visual and quantitative comparisons of the performance of four competing algorithms in completing the observed images "Resort", "Insect", and "Seagulls" from the Berkeley Segmentation Dataset. The proposed Higher-Order TNN outperforms its competitors in terms of PSNR by at least 1 dB, 1.4 dB, and 1.6 dB, respectively.

Figure 4 :
Figure 4: Visual and quantitative comparisons of the performance of four related algorithms in completing the images "Resort", "Insect", and "Seagulls"

Figure 6 :
Figure 6: Visual and quantitative comparisons of the performance of four related algorithms in completing the images "Temple", "Chapel", and "Grass-flower"

Figure 7 :
Figure 7: 10 random RGB images selected from the Berkeley Segmentation Dataset

Figure 8 :
Figure 8: PSNR heatmaps of four completion algorithms with different percentages of missing entries on 10 RGB images.
Equation 14 implies that the norm of M(ẋ) serves as a real-valued, totally ordered amplitude of ẋ. Together with ∥M(ẋ)∥_*, the Schatten 2-norm ∥M(ẋ)∥₂ is also a valid norm of ẋ. However, due to the non-isometric nature of the Fourier transform, ∥M(ẋ)∥₂ is not equal to the Frobenius norm ∥tensor(ẋ)∥_F. Both the Schatten 2-norm and the Frobenius norm of the underlying tensor of ẋ can be derived from |ẋ| as follows: ∥M(ẋ)∥₂ = √(trace(|ẋ|²)) and ∥tensor(ẋ)∥_F = √(trace(|ẋ|²) / K).