A Novel Tensor Ring Sparsity Measurement for Image Completion

As a promising data analysis technique, sparse modeling has gained widespread traction in the field of image processing, particularly for image recovery. The matrix rank, serving as a measure of data sparsity, quantifies the sparsity within the Kronecker basis representation of a given piece of data in the matrix format. Nevertheless, in practical scenarios, much of the data is intrinsically multi-dimensional, and thus using a matrix format for data representation will inevitably yield sub-optimal outcomes. Tensor decomposition (TD), as a high-order generalization of matrix decomposition, has been widely used to analyze multi-dimensional data. As a direct generalization of the matrix rank, low-rank tensor modeling has been developed for multi-dimensional data analysis and has achieved great success. Despite its efficacy, the connection between the TD rank and the sparsity of tensor data is not direct. In this work, we introduce a novel tensor ring sparsity measurement (TRSM) for measuring the sparsity of a tensor. This metric relies on the tensor ring (TR) Kronecker basis representation of the tensor, providing a unified interpretation akin to matrix sparsity measurements, wherein the Kronecker basis serves as the foundational representation component. Moreover, TRSM can be efficiently computed as the product of the ranks of the mode-2 unfolded TR cores. To enhance the practical performance of TRSM, the folded-concave minimax concave penalty is introduced as a nonconvex relaxation. Lastly, we extend TRSM to the tensor completion problem and use an alternating direction method of multipliers scheme to solve it. Experiments on image and video data completion demonstrate the effectiveness of the proposed method.


Introduction
Image completion is a well-known problem in the field of image processing, which aims to recover the missing entries of partially observed image data [1,2]. Image completion methods have extensive applications in practical scenarios, including hyperspectral image recovery [3] and video inpainting [4]. These techniques have also been successfully applied across various domains within the natural sciences. For example, the authors of [5,6] used image completion methods to obtain reliable, high-quality seismic data, which is crucial for the subsequent processing steps. Moreover, in [7], an image completion method was used to acquire dense earthquake data, which greatly promotes the understanding of the structure and dynamics of the Earth.
Sparsity is an important property in data representation and is extremely important for data analysis. Examples include anisotropic interaction rule recognition in biological groups [8], face modeling [9], image compressive sensing [10,11], MRI compressive sensing [12][13][14], signal restoration [15,16], etc. For image completion, capturing the inherent sparsity of the image data is essential for the successful recovery of the missing entries. A common way to measure the sparsity of data is the rank of the matrix, which essentially counts the number of second-order Kronecker bases $u_i v_i^T$ ($u_i$ and $v_i$ are vectors), $i = 1, \ldots, R$, used for the matrix representation, where $i$ is an index referring to the $i$-th Kronecker basis and $R$ is the matrix rank. The image completion task, thus, can be formulated as an optimization problem with the objective of minimizing the rank of the candidate matrices. However, the rank minimization problem is NP-hard due to its discrete nature, and the trace norm is widely used as its convex surrogate [17,18]. Successful applications of trace norm minimization for image completion can be found in [19]. Furthermore, it has been proved theoretically that the rank minimization problem can be solved by the trace norm minimization problem under some reasonable conditions [20].
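To make the Kronecker-basis view of the matrix rank concrete, the following minimal NumPy sketch (our own illustration, not code from the paper) builds a rank-$R$ matrix from $R$ bases $u_i v_i^T$ and checks that the matrix rank recovers the number of bases:

```python
import numpy as np

# Build a rank-R matrix as a sum of R second-order Kronecker bases u_i v_i^T.
rng = np.random.default_rng(0)
m, n, R = 64, 48, 5
U = rng.standard_normal((m, R))
V = rng.standard_normal((n, R))
X = sum(np.outer(U[:, i], V[:, i]) for i in range(R))

# The matrix rank recovers the number of Kronecker bases needed to represent X.
print(np.linalg.matrix_rank(X))  # 5

# The trace (nuclear) norm, the usual convex surrogate, is the sum of singular values.
print(np.linalg.norm(X, ord='nuc'))
```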
However, in practice, images are usually generated from the interaction of multiple factors. While a matrix is well-suited for representing data arising from the interaction of two factors, employing a matrix rank for image completion in such cases requires transforming the data into a matrix format. This transformation may break the multi-mode correlation of the data and result in sub-optimal performance [21]. As a high-order generalization of a matrix, a tensor provides a natural way to represent such multi-way data and has been widely used for data representation in many areas such as electrodynamics [22,23], signal processing [24][25][26], and computer vision [27][28][29]. One of the most important tensor analysis techniques is tensor decomposition, which provides a compact representation of the tensor. The most classical tensor decomposition method is called CANDECOMP/PARAFAC or canonical polyadic (CP) decomposition, which was proposed by Carroll et al. [30]. CP decomposition decomposes a tensor into a sum of rank-one tensors $\sum_{r=1}^{R} a_r^{(1)} \circ \cdots \circ a_r^{(d)}$, where $a_r^{(k)}$, $k = 1, \ldots, d$, are vectors, $d$ is the order of the tensor, $R$ is the CP rank, and $\circ$ is the outer product operation. It can be observed that the CP rank serves as a natural generalization for the sparsity measurement of the tensor, since matrix decomposition and the matrix rank are special cases of CP decomposition when $d = 2$. However, unlike its lower-order counterpart, the computation of the CP rank is an NP-hard problem [31], and it is also difficult to establish a solvable relaxation form [32].
Besides CP decomposition, many studies focus on the Tucker decomposition proposed in [33], which decomposes a tensor into a core tensor multiplied by factor matrices along each mode. Image completion algorithms based on low-rank Tucker decomposition can be found in [34,35]. Recently, tensor network decomposition, initially derived from the physics community for quantum many-body simulations, has been brought to the signal processing community and is emerging as a powerful tool for the analysis of tensor data [36,37]. Among them, the tensor train (TT) decomposition [38], the quantized tensor train (QTT) decomposition [39], and the tensor ring (TR) decomposition [40] have attracted the most research interest. Many low-rank TT and TR completion algorithms have been developed [41][42][43], and the experimental results demonstrate the superiority of the tensor network methods over Tucker decomposition methods in image completion, which is mainly due to the powerful representation ability of tensor network decomposition.
While the Tucker, TT, and TR decomposition ranks are commonly employed in tensor data analysis, they lack an interpretation of tensor sparsity comparable to that of the CP rank. To bridge this gap, several studies measure the sparsity of a tensor using the Kronecker bases constructed by the factor matrices of the Tucker decomposition [44][45][46][47]. Different from the previous studies, and inspired by the strong expressiveness of the tensor network, in this work, we focus on TR decomposition, as shown in Figure 1, and the main contributions of the paper are the following:

• We define a novel tensor sparsity measurement, termed tensor ring sparsity measurement (TRSM), which can be efficiently computed as the product of the ranks of the mode-2 unfolded TR cores. Specifically, it measures the sparsity of a tensor as the number of Kronecker bases constructed by the TR cores for tensor representation. A graphical demonstration of the Kronecker bases representation of a tensor based on TR decomposition is shown in Figure 2.

• To improve the practicality of TRSM, the minimax concave penalty (MCP) folded-concave penalty, which has previously been applied in computer vision and pattern recognition, is introduced as a nonconvex relaxation and then applied to the tensor completion problem. As a result, we formulate a new tensor completion model called tensor ring sparsity measurement tensor completion (TRSM-TC). An efficient algorithm based on the alternating direction method of multipliers (ADMM) is developed to optimize the proposed model. Experiments show that TRSM-TC achieves better performance than other algorithms in recovering hyperspectral images and videos with high missing rates.

The remainder of this paper is organized as follows. Section 2 introduces the preliminaries of tensor algebra and TR decomposition. Section 3 presents the proposed TRSM. The TRSM-TC problem and its optimization are formulated in Section 4. The complexity analysis is presented in Section 5. Section 6 demonstrates the experimental results, followed by the conclusions in Section 7.

Preliminaries

Tensor Algebra
In this paper, some notations and preliminaries of tensors [48] are adopted. Scalars, vectors, and matrices are denoted by lowercase letters (e.g., $x \in \mathbb{R}$), boldface lowercase letters (e.g., $\mathbf{x} \in \mathbb{R}^{n_1}$), and capital letters (e.g., $X \in \mathbb{R}^{n_1 \times n_2}$), respectively. A tensor is a multidimensional array and is denoted by calligraphic letters (e.g., $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$), where $n_i$ is the size of the corresponding mode. $\mathcal{X}(i_1, i_2, \ldots, i_d)$ or $x_{i_1 i_2 \cdots i_d}$ denotes an element of tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ in position $(i_1, i_2, \ldots, i_d)$. A mode-$k$ fiber of tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is a vector denoted as $x_{i_1 \cdots i_{k-1} : i_{k+1} \cdots i_d}$, where a colon is used to indicate all elements of a mode. A tensor sequence $\{\mathcal{X}^{(1)}, \ldots, \mathcal{X}^{(d)}\}$ can be denoted as $\{\mathcal{X}^{(k)}\}_{k=1}^{d}$. Where appropriate, a tensor sequence can also be written as $[\mathcal{X}]$. Matrix sequences and vector sequences can be defined similarly. The inner product of two tensors $\mathcal{X}, \mathcal{Y}$ with the same size is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1, \ldots, i_d} x_{i_1 \cdots i_d}\, y_{i_1 \cdots i_d}$. Furthermore, the Frobenius norm of $\mathcal{X}$ is defined by $\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$. Two types of tensor unfolding expressions are defined in this article. The mode-$k$ unfolding of tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ converts the tensor to a matrix, which is denoted as $X_{(k)} \in \mathbb{R}^{n_k \times n_1 \cdots n_{k-1} n_{k+1} \cdots n_d}$, and, using the multi-indices defined in [36], its elements are defined by $X_{(k)}(i_k, \overline{i_1 \cdots i_{k-1} i_{k+1} \cdots i_d}) = \mathcal{X}(i_1, i_2, \ldots, i_d)$. Another mode-$k$ unfolding of tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is denoted as $X_{[k]} \in \mathbb{R}^{n_k \times n_{k+1} \cdots n_d n_1 \cdots n_{k-1}}$, and its elements are defined by $X_{[k]}(i_k, \overline{i_{k+1} \cdots i_d i_1 \cdots i_{k-1}}) = \mathcal{X}(i_1, i_2, \ldots, i_d)$. Furthermore, the inverse operation of tensor unfolding is matrix folding, which transforms matrices back to higher-order tensors. In this paper, we only define the folding operation $\mathrm{fold}_k(\cdot)$ for the first type of mode-$k$ unfolding, i.e., $\mathrm{fold}_k(X_{(k)}) = \mathcal{X}$.
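The two unfolding conventions above can be sketched in a few lines of NumPy. This is our own illustrative code; the exact column ordering produced by reshape may differ from the multi-index convention of [36], but folding remains the exact inverse of the corresponding unfolding:

```python
import numpy as np

def unfold(X, k):
    """Mode-k unfolding X_(k): mode k first, remaining modes in their natural order."""
    return np.reshape(np.moveaxis(X, k, 0), (X.shape[k], -1))

def unfold2(X, k):
    """Second mode-k unfolding X_[k]: mode k first, then modes k+1, ..., d, 1, ..., k-1."""
    d = X.ndim
    order = [k] + [(k + i) % d for i in range(1, d)]
    return np.reshape(np.transpose(X, order), (X.shape[k], -1))

def fold(Xk, k, shape):
    """Inverse of the first unfolding: fold_k(X_(k)) = X."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(np.reshape(Xk, full), 0, k)

X = np.random.rand(3, 4, 5)
assert np.allclose(fold(unfold(X, 1), 1, X.shape), X)
```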

Tensor Ring Decomposition
TR decomposition represents a high-order tensor by a sequence of three-order tensors that are multiplied together circularly. Specifically, given a $d$-order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, TR decomposition decomposes it into a sequence of latent tensors $\mathcal{Z}_k \in \mathbb{R}^{R_k \times n_k \times R_{k+1}}$, $k = 1, \ldots, d$, which can be expressed in an element-wise form given by

$\mathcal{X}(i_1, i_2, \ldots, i_d) = \mathrm{Tr}\big(Z_1(i_1) Z_2(i_2) \cdots Z_d(i_d)\big), \qquad (1)$

where $Z_k(i_k)$ is the $i_k$-th lateral slice matrix of the latent tensor $\mathcal{Z}_k$, which is of size $R_k \times R_{k+1}$, with $R_{d+1} = R_1$. The values $R_k$, $k = 1, \ldots, d$, are defined as the TR ranks. The illustration of TR decomposition is shown in Figure 1.
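As an illustration of Equation (1), the following NumPy sketch (our own example, not code from the paper) evaluates a single entry of a TR-format tensor as the trace of the product of the lateral slices of randomly generated cores:

```python
import numpy as np

def tr_element(cores, idx):
    """Element-wise TR evaluation: X(i_1, ..., i_d) = Tr(Z_1(i_1) ... Z_d(i_d))."""
    # cores[k] has shape (R_k, n_k, R_{k+1}); its i_k-th lateral slice is (R_k, R_{k+1}).
    prod = cores[0][:, idx[0], :]
    for Z, i in zip(cores[1:], idx[1:]):
        prod = prod @ Z[:, i, :]
    return np.trace(prod)

# A random TR tensor of size 4 x 5 x 6 with TR ranks (2, 3, 2).
rng = np.random.default_rng(1)
ranks = [2, 3, 2, 2]  # R_1, R_2, R_3, R_4 = R_1 (closed ring)
shape = (4, 5, 6)
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1])) for k in range(3)]
print(tr_element(cores, (1, 2, 3)))
```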

Tensor Ring Sparsity Measurement
By expressing the TR decomposition of Equation (1) in the tensor form, we obtain the following expression

$\mathcal{X} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_d=1}^{R_d} z_1(r_1, r_2) \circ z_2(r_2, r_3) \circ \cdots \circ z_d(r_d, r_1), \qquad (2)$

where $z_k(r_k, r_{k+1})$ is the mode-2 fiber of the TR core $\mathcal{Z}_k$. As we can see, based on TR decomposition, a tensor $\mathcal{X}$ can be represented by $\prod_{i=1}^{d} R_i$ Kronecker bases constructed by the TR cores. In Figure 2, we provide a graphical demonstration of this representation for a three-order tensor. Consequently, when considering tensors represented through TR decomposition, the total number of Kronecker bases required in Equation (2) naturally provides a way to measure the sparsity of the tensor. In this work, we utilize the following formulation to measure the Kronecker bases sparsity of a tensor $\mathcal{X}$ defined upon the TR decomposition:

$K(\mathcal{X}) = \prod_{i=1}^{d} \mathrm{rank}(Z_{i(2)}).$

The following theorem explains the relation between $K(\mathcal{X})$ and the TR Kronecker bases sparsity of the tensor.

Theorem 1. Given a $d$-th order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, which can be represented by Equation (1), the following inequality holds:

$\sqrt{\prod_{i=1}^{d} \mathrm{rank}(Z_{i(2)})} \leq \prod_{i=1}^{d} R_i.$

Proof. For the mode-2 unfolded TR cores $Z_{i(2)} \in \mathbb{R}^{n_i \times R_i R_{i+1}}$, according to the property of the matrix rank, the following inequality holds: $\mathrm{rank}(Z_{i(2)}) \leq \min(n_i, R_i R_{i+1}) \leq R_i R_{i+1}$. The proof is completed by $\prod_{i=1}^{d} \mathrm{rank}(Z_{i(2)}) \leq \prod_{i=1}^{d} R_i R_{i+1} = \big(\prod_{i=1}^{d} R_i\big)^2$.

We can conclude that $\sqrt{\prod_{i=1}^{d} \mathrm{rank}(Z_{i(2)})}$ serves as a lower bound of the TR Kronecker bases sparsity of the tensor. However, there is no straightforward optimization method for handling a matrix optimization problem involving a square root, so we drop the square root and relax it to the form of $K(\mathcal{X})$. Moreover, directly optimizing $\mathrm{rank}(Z_{i(2)})$ leads to a combinatorial optimization problem. Instead of using the trace norm as the convex surrogate, in this work, we use a nonconvex relaxation over the rank of the matrix to ensure the practical performance of the proposed method. Specifically, the MCP folded-concave penalty [49] is chosen here due to its nearly unbiased statistical properties in variable selection. As a result, we obtain a relaxation of $K(\mathcal{X})$ in which each factor $\mathrm{rank}(Z_{i(2)})$ is replaced by its MCP surrogate $\mathrm{rank}_{\mathrm{MCP}}(Z_{i(2)})$. Here, $\mathrm{rank}_{\mathrm{MCP}}(\cdot)$ denotes the MCP folded-concave penalty of a matrix, which is defined as

$\mathrm{rank}_{\mathrm{MCP}}(X) = \sum_{i} \psi_{\mathrm{mcp}}(\sigma_i(X)),$

where $\sigma_i(X)$ denotes the $i$-th singular value of matrix $X$ and

$\psi_{\mathrm{mcp}}(x) = \lambda \int_0^{|x|} \max\big(0,\, 1 - t/(a\lambda)\big)\, dt.$

The function $\psi_{\mathrm{mcp}}(\cdot)$ is the MCP folded-concave penalty, which can be seen as a continuous interpolation between the $\ell_1$ penalty when $a = \infty$ and the $\ell_0$ penalty when $a \to 0^{+}$. Therefore, applying the MCP penalty to the singular values results in a tighter surrogate of the matrix rank compared to the nuclear norm. Here, $\lambda$ and $a$ are two hyperparameters that play different roles in determining the shape of $\psi_{\mathrm{mcp}}(\cdot)$: $a$ mainly controls the concavity of the function, while $\lambda$ controls the penalty level [49].
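The following NumPy sketch is our own illustration of the quantities defined above: it computes $K(\mathcal{X})$ as the product of the ranks of the mode-2 unfolded cores and evaluates the MCP penalty $\psi_{\mathrm{mcp}}$ on their singular values; the parameter values lam and a are arbitrary placeholders:

```python
import numpy as np

def mcp_penalty(x, lam, a):
    """MCP folded-concave penalty psi_mcp(x) applied elementwise."""
    x = np.abs(x)
    return np.where(x <= a * lam, lam * x - x**2 / (2 * a), 0.5 * a * lam**2)

def mode2_unfold(Z):
    """Mode-2 unfolding of a TR core of shape (R_i, n_i, R_{i+1}) into (n_i, R_i * R_{i+1})."""
    return np.reshape(np.moveaxis(Z, 1, 0), (Z.shape[1], -1))

def trsm(cores, tol=1e-10):
    """TRSM: product of the ranks of the mode-2 unfolded TR cores Z_{i(2)}."""
    return int(np.prod([np.linalg.matrix_rank(mode2_unfold(Z), tol=tol) for Z in cores]))

def trsm_mcp(cores, lam=1.0, a=2.0):
    """MCP-relaxed surrogate: penalty applied to the singular values of each Z_{i(2)}."""
    return [mcp_penalty(np.linalg.svd(mode2_unfold(Z), compute_uv=False), lam, a).sum()
            for Z in cores]
```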

Tensor Ring Sparsity Measurement-Based Tensor Completion
The core problem of missing value estimation lies in how to build the relationship between the known elements and the unknown ones. In this paper, we incorporate the proposed TRSM in the tensor completion problem as the optimization objective and formulate the TRSM-TC model as

$\min_{[\mathcal{Z}], \mathcal{X}} \ \sum_{i=1}^{d} \mathrm{rank}_{\mathrm{MCP}}(Z_{i(2)}) + \frac{\lambda}{2}\|\mathcal{X} - \Psi([\mathcal{Z}])\|_F^2, \quad \mathrm{s.t.} \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T}), \qquad (10)$

where $P_{\Omega}(\mathcal{T})$ denotes all the observed entries with respect to the set of indices of observed entries represented by $\Omega$, $\mathcal{T}$ denotes the observed tensor, and $\Psi([\mathcal{Z}])$ denotes the TR-format tensor generated from $[\mathcal{Z}]$. Every element of $\Psi([\mathcal{Z}])$ is calculated by Equation (1).

Due to the complex folded-concave penalties, a slightly modified ADMM inspired by [46] is needed to solve the model. In the modified ADMM, the solution process for Equation (10) is divided into two stages. In the first stage, we use ADMM to solve the trace norm version of Equation (10), demonstrated as follows:

$\min_{[\mathcal{Z}], [\mathcal{M}], \mathcal{X}} \ \sum_{i=1}^{d} \|M_{i(2)}\|_* + \frac{\lambda}{2}\|\mathcal{X} - \Psi([\mathcal{Z}])\|_F^2, \quad \mathrm{s.t.} \ M_{i(2)} = Z_{i(2)},\ i = 1, \ldots, d, \ \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T}). \qquad (11)$

Here, we introduce $[\mathcal{M}]$ as auxiliary variables over the TR cores, and the updated values of $M_{i(2)}$, $i = 1, \ldots, d$, will be used in the subsequent optimization. It is worth mentioning that the first stage of ADMM runs for only one iteration. In the second stage, we instead solve the following problem:

$\min_{[\mathcal{Z}], [\mathcal{M}], \mathcal{X}} \ \sum_{i=1}^{d} \widehat{\mathrm{rank}}_{\mathrm{MCP}}(M_{i(2)}\,|\,M_{i(2)}^{0}) + \frac{\lambda}{2}\|\mathcal{X} - \Psi([\mathcal{Z}])\|_F^2, \quad \mathrm{s.t.} \ M_{i(2)} = Z_{i(2)},\ i = 1, \ldots, d, \ \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T}). \qquad (12)$

Here, $\widehat{\mathrm{rank}}_{\mathrm{MCP}}(X|Y) = \sum_{i} \widehat{\psi}_{\mathrm{MCP}}(\sigma_i(X)\,|\,\sigma_i(Y))$, where $\widehat{\psi}_{\mathrm{MCP}}(\sigma_i(X)\,|\,\sigma_i(Y)) = \psi_{\mathrm{MCP}}(\sigma_i(Y)) + \psi'_{\mathrm{MCP}}(\sigma_i(Y))(\sigma_i(X) - \sigma_i(Y))$ and $\psi'_{\mathrm{MCP}}(\cdot)$ is the derivative of $\psi_{\mathrm{MCP}}(\cdot)$. The matrices $M_{i}^{0}$, $i = 1, \ldots, d$, are fixed parameters of the optimization problem, and their values are assigned element-wise from the $M_{i}$ obtained in the first stage. For Equation (12), the augmented Lagrangian function is

$L([\mathcal{Z}], \mathcal{X}, [\mathcal{M}], [\mathcal{Y}]) = \sum_{i=1}^{d} \Big( \widehat{\mathrm{rank}}_{\mathrm{MCP}}(M_{i(2)}\,|\,M_{i(2)}^{0}) + \langle Y_{i(2)}, M_{i(2)} - Z_{i(2)} \rangle + \frac{\mu}{2}\|M_{i(2)} - Z_{i(2)}\|_F^2 \Big) + \frac{\lambda}{2}\|\mathcal{X} - \Psi([\mathcal{Z}])\|_F^2, \quad \mathrm{s.t.} \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T}), \qquad (13)$

where $[\mathcal{Y}]$ are the Lagrangian multipliers and $\mu > 0$ is a penalty parameter. For $k = 1, \ldots, d$, $\mathcal{Z}_k$, $\mathcal{M}_k$, and $\mathcal{Y}_k$ are all independent, and we update them using the following schemes.

Update of $\mathcal{Z}_k$. For $k = 1, \ldots, d$, the augmented Lagrangian function with respect to $\mathcal{Z}_k$ can be simplified as

$L(\mathcal{Z}_k) = \langle Y_{k(2)}, M_{k(2)} - Z_{k(2)} \rangle + \frac{\mu}{2}\|M_{k(2)} - Z_{k(2)}\|_F^2 + \frac{\lambda}{2}\big\|X_{[k]} - Z_{k(2)}\big(Z^{\neq k}_{[2]}\big)^T\big\|_F^2 + C_{Z_k}, \qquad (14)$

where the constant $C_{Z_k}$ collects the parts of the Lagrangian function that are irrelevant to updating $\mathcal{Z}_k$. This is a least squares problem, and the optimum is obtained when the derivative of Equation (14) equals zero, so, for $k = 1, \ldots, d$, $\mathcal{Z}_k$ can be updated by

$\mathcal{Z}_k = \mathrm{fold}_2\Big(\big(\mu M_{k(2)} + Y_{k(2)} + \lambda X_{[k]} Z^{\neq k}_{[2]}\big)\big(\mu I + \lambda (Z^{\neq k}_{[2]})^T Z^{\neq k}_{[2]}\big)^{-1}\Big), \qquad (15)$

where $I$ denotes the identity matrix, and $Z^{\neq k}_{[2]}$ is the subchain of the TR decomposition with respect to mode $k$, whose definition can be found in [40].
Update of $\mathcal{M}_i$. For $i = 1, \ldots, d$, the augmented Lagrangian function with respect to $\mathcal{M}_i$ is expressed as

$L(\mathcal{M}_i) = \widehat{\mathrm{rank}}_{\mathrm{MCP}}(M_{i(2)}\,|\,M_{i(2)}^{0}) + \frac{\mu}{2}\|M_{i(2)} - a_i\|_F^2 + C_{M_i}, \qquad (16)$

where $a_i = Z_{i(2)} - \frac{1}{\mu} Y_{i(2)}$. Here, we introduce $a_i$ as an auxiliary variable, for brevity, to represent the constant factor that is not related to the optimization variable $\mathcal{M}_i$. The above formulation has a closed-form solution, which is given by

$M_{i(2)} = \mathcal{D}_{\frac{1}{\mu},\, v_i}(a_i), \qquad (17)$

where $v_i = \big(\psi'_{\mathrm{MCP}}(\sigma_1(M_{i(2)}^{0})), \ldots, \psi'_{\mathrm{MCP}}(\sigma_r(M_{i(2)}^{0}))\big)^T$ collects the MCP derivatives evaluated at the singular values of $M_{i(2)}^{0}$, and $\mathcal{D}_{r, w}(X) = U \Sigma_{r, w} V^T$ is the generalized shrinkage operator over $X$, where $\Sigma_{r, w} = \mathrm{diag}(\max(\sigma_i - r w_i, 0))$ ($\sigma_i$ is the $i$-th singular value of $X$, and $w_i$ is the $i$-th element of $w$).
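As a sketch of the $\mathcal{M}_i$ update under the reconstruction above (our own illustration, not the authors' code), the generalized shrinkage operator $\mathcal{D}_{r,w}$ and the MCP derivative can be implemented as follows; M0 and A are random stand-ins for $M_{i(2)}^{0}$ and $a_i$:

```python
import numpy as np

def mcp_derivative(sigma, lam, a):
    """psi'_MCP(sigma) = max(0, lam - sigma / a) for sigma >= 0."""
    return np.maximum(0.0, lam - sigma / a)

def generalized_shrinkage(X, tau, w):
    """D_{tau, w}(X) = U diag(max(sigma_i - tau * w_i, 0)) V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau * w, 0.0)
    return (U * s_shrunk) @ Vt

# One M-update step: weights come from the singular values of the previous estimate M0.
rng = np.random.default_rng(2)
M0 = rng.standard_normal((20, 12))
A = M0 + 0.1 * rng.standard_normal((20, 12))   # stand-in for a_i = Z_{i(2)} - Y_{i(2)} / mu
mu, lam, a = 5.0, 1.0, 2.0
w = mcp_derivative(np.linalg.svd(M0, compute_uv=False), lam, a)
M_new = generalized_shrinkage(A, 1.0 / mu, w)
```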
Update of $\mathcal{X}$. The augmented Lagrangian function with respect to $\mathcal{X}$ is given by

$L(\mathcal{X}) = \frac{\lambda}{2}\|\mathcal{X} - \Psi([\mathcal{Z}])\|_F^2 + C_{\mathcal{X}}, \quad \mathrm{s.t.} \ P_{\Omega}(\mathcal{X}) = P_{\Omega}(\mathcal{T}). \qquad (18)$

The recovery tensor $\mathcal{X}$ is updated at every iteration by keeping the observed values in the corresponding entries and approximating the missing entries with the TR-format tensor generated from the updated TR factors $[\mathcal{Z}]$:
X " P Ω pT q `P ΩpΨprZ sqq, where Ω is the set of indices of missing entries, which is a complement to Ω. Update of Y k .For k " 1, . . ., d and the Lagrangian multiplier, Y k is updated as The penalty term of the Lagrangian functions L is restricted by µ, and it is updated for every iteration by µ " minpρµ, µ max q, where 1 ă ρ ă 1.5 is a tuning hyperparameter.Moreover, at the end of each iteration, the values of M ip2q will be assigned to M 0 ip2q elementwisely as an update of M 0 ip2q , for i " 1, . . ., d.The ADMM-based solving scheme is updated iteratively based on the above updating scheme.Two optimization stopping conditions are set: (i) maximum number of iterations maxiter and (ii) the difference between two iterations (i.e., }X ´Xlast } F {}X } F ), which is thresholded by the tolerance tol.The implementation process of TRSM-TC is summarized in Algorithm 1.The convergent property of the proposed algorithm is demonstrated in Figure 3.

Computational Complexity
We analyze the computational complexity of the proposed TRSM-TC as follows. For a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ with TR ranks set as $R_1 = R_2 = \cdots = R_d = j$, the main computational cost of updating the auxiliary variables $[\mathcal{M}]$ comes from the SVD operation, which is $O(\sum_{k=1}^{d} 2(d-1) n_k^2 j^2)$. The computational complexities of calculating $Z^{\neq k}_{[2]}$ and updating $[\mathcal{Z}]$ are $O(d j^3 \prod_{i=1, i\neq k}^{d} n_i)$ and $O(d j^2 \prod_{i=1}^{d} n_i + d j^6)$, respectively. If we assume $n_1 = n_2 = \cdots = n_d = n$, then the overall complexity of the proposed algorithm can be written as $O(d j^2 n^d + d j^6)$.

Color Images Completion
In this section, we test the proposed TRSM-TC against other completion algorithms on eight benchmark color images shown in Figure 4. In recent years, reshaping low-order tensors into high-order tensors and then selecting an appropriate high-order form has become a commonly used strategy to improve the performance of TT/TR-based methods on visual-data completion [41,43]. To evaluate the performances of different methods in high-order forms and investigate in which forms they perform best, we further reshaped the color images of size $256 \times 256 \times 3$ (3D) to $16 \times 16 \times 16 \times 16 \times 3$ (5D), $4 \times 4 \times 4 \times 4 \times 4 \times 4 \times 4 \times 4 \times 3$ (9D), and a visual data tensorization (VDT) form (9D). The VDT tensorization was introduced in [55]. The idea is simple: if the size of an RGB image is $u \times v \times 3$ and $u = u_1 \times u_2 \times \cdots \times u_l$ and $v = v_1 \times v_2 \times \cdots \times v_l$ are satisfied, then the image can be tensorized to an $(l+1)$-dimensional tensor of size $u_1 v_1 \times u_2 v_2 \times \cdots \times u_l v_l \times 3$. In this experiment, we first reshape the three-way color image to a seventeen-way tensor of size $2 \times 2 \times \cdots \times 2 \times 3$ and permute the tensor according to the order $\{1, 9, 2, 10, 3, 11, 4, 12, 5, 13, 6, 14, 7, 15, 8, 16, 17\}$. Then, we reshape the tensor to a nine-way tensor of size $4 \times 4 \times 4 \times 4 \times 4 \times 4 \times 4 \times 4 \times 3$.
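The VDT tensorization described above can be sketched as follows. This is our own NumPy illustration; whether the reshape/permute index conventions exactly match the implementation used in [55] and in the paper is an assumption:

```python
import numpy as np

def vdt_tensorize(img):
    """VDT tensorization of a 256 x 256 x 3 image into a nine-way 4 x ... x 4 x 3 tensor."""
    # Step 1: reshape to a seventeen-way tensor of size 2 x 2 x ... x 2 x 3.
    X = img.reshape([2] * 16 + [3])
    # Step 2: permute according to {1, 9, 2, 10, ..., 8, 16, 17} (1-based in the paper).
    order = [0, 8, 1, 9, 2, 10, 3, 11, 4, 12, 5, 13, 6, 14, 7, 15, 16]
    X = np.transpose(X, order)
    # Step 3: merge neighbouring pairs of 2-modes into 4-modes.
    return X.reshape([4] * 8 + [3])

img = np.random.rand(256, 256, 3)
print(vdt_tensorize(img).shape)  # (4, 4, 4, 4, 4, 4, 4, 4, 3)
```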
In this experiment, we set the random missing rates to 0.8 and 0.9 and evaluated the recovery performance using RSE, which is defined as $\mathrm{RSE} = \|\mathcal{T} - \mathcal{X}\|_F / \|\mathcal{X}\|_F$, where $\mathcal{X}$ is the ground-truth tensor and $\mathcal{T}$ is the tensor recovered by the tensor completion algorithm. A smaller RSE means better recovery results. In addition to RSE, we also adopt the structural similarity index measure (SSIM) [56] to assess the performance of image recovery. A higher SSIM value indicates better recovery performance. For TR-related algorithms, we set the TR ranks to the same value (i.e., $R_1 = R_2 = \cdots = R_d$) and vary the TR ranks to obtain the best results. The parameters of the other algorithms are tuned to achieve their best performance. Table 1 lists the averaged performances of the eight color images over the different algorithms under different sampling rates and tensor shapes. It can be seen from Table 1 that the proposed method and the other TR-based algorithms (i.e., TRLRF, TR-WOPT) obtain significantly lower RSE in the 5D form. These experimental results suggest that an appropriate high-order tensorization scheme does help improve the performances of TR-based methods. For the other compared methods, we can observe that they achieve worse performances after reshaping into high-order forms. Compared with the other methods, the proposed TRSM-TC obtains the best recovery performance in the 5D form. In the following experiments, we keep the 5D tensorization scheme for the TR-based methods and adopt the original data structure for the other methods. In Table 2, we compare the recovery SSIM between the different methods. It is evident that the proposed method consistently achieves the highest SSIM across various missing rates, thereby demonstrating its effectiveness in recovering color images.

In Figure 5, we visualize the house image recovered under a 0.9 missing rate using a portion of the Kronecker bases constructed by the learned TR cores from different TR-based completion methods. Besides showing the fully recovered tensors constructed by the TR-related algorithms, the recovered images constructed from the first 200,000, 210,000, 220,000, and 230,000 largest Kronecker bases (ranked by the Frobenius norm of the Kronecker basis) are also demonstrated. As we can see, when adding more Kronecker bases, the recovery results of all the methods improve. However, the improvements of TR-WOPT and TRLRF are relatively small. Moreover, TRSM-TC consistently achieves better recovery performance compared with TR-WOPT and TRLRF under the same number of Kronecker bases. These results indicate that the proposed TRSM-TC is encouraged to find more meaningful and expressive TR Kronecker bases.

In practice, there is additive noise in the image data. Therefore, in this section, we also test the robustness of the proposed method to noise. Taking the eight benchmark color images described above as the ground truth, we add Gaussian noise with different intensities according to signal-to-noise ratios (SNRs) of 21, 26, and 31. Then, we set the random missing rates to 0.8 and 0.9 and evaluate the recovery performance of the different methods using RSE and SSIM.
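For reference, the RSE as reconstructed above can be computed with a short NumPy helper (our own sketch):

```python
import numpy as np

def rse(X_true, T_rec):
    """RSE between the ground-truth tensor X and the recovered tensor T."""
    return np.linalg.norm(T_rec - X_true) / np.linalg.norm(X_true)
```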
In Table 3, we report the average recovery performances of the different methods over the eight color images under different missing rates and noise levels. From the results, we can see that the recovery performances of all the methods become worse than in their noise-free cases. However, the proposed TRSM-TC still outperforms the other tested methods in the noisy cases in terms of both RSE and SSIM.

Multispectral Images Completion
In this section, we use the Columbia Multispectral Image Database (CAVE) [57], which contains multispectral images of 32 real-world scenes, each with a spatial resolution of $512 \times 512$ and 31 bands (varying from 400 nm to 700 nm in 10 nm steps). Each image is resized to $256 \times 256$ for all spectral bands and rescaled to [0, 1]. For the TR-related methods, the multispectral images of size $256 \times 256 \times 31$ (3D) are directly reshaped to $16 \times 16 \times 16 \times 16 \times 31$ (5D).
In this experiment, we considered the high missing rates of 0.95 and 0.98. RSE, SSIM, and the erreur relative globale adimensionnelle de synthèse (ERGAS) [58] were employed for performance evaluation, and ERGAS is defined as

$\mathrm{ERGAS} = 100 \sqrt{\frac{1}{s}\sum_{i=1}^{s} \frac{\mathrm{MSE}(\mathcal{X}_{::i}, \mathcal{T}_{::i})}{\mathrm{MEAN}(\mathcal{T}_{::i})^2}},$

where $s$ is equal to the spectral dimension of the tensor, and $\mathcal{X}_{::i}$ and $\mathcal{T}_{::i}$ are the $i$-th frontal slices of the ground-truth tensor $\mathcal{X}$ and the recovery tensor $\mathcal{T}$, respectively. Moreover, $\mathrm{MSE}(\cdot)$ is the mean square error of two matrices and $\mathrm{MEAN}(\cdot)$ is the mean value of a matrix. For ERGAS, good completion results correspond to smaller values. The averaged performances of the 32 multispectral images under different sampling rates are summarized in Table 4. As shown in Table 4, TRSM-TC performs the best with respect to all the evaluation metrics. Moreover, it should be noted that it is challenging for image completion algorithms to recover images with a 0.98 missing rate. While most of the algorithms fail to achieve satisfactory completion performances, the proposed method still achieves high performance. To visually compare the performances of the competing methods at a 0.98 missing rate, we show in Figure 6 the 31st band of fake and real apples and the 29th band of sponges. We only demonstrate the results of TRSM-TC, TR-WOPT, TRLRF, and FBCP because the other methods fail in this case. From the figures, the superiority of the proposed method can be observed.
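Under the ERGAS formula as reconstructed above, a simple NumPy sketch (our own illustration, assuming the spectral bands lie along the last mode) is:

```python
import numpy as np

def ergas(X_true, T_rec):
    """ERGAS over the spectral (last) mode: slices are X[:, :, i] and T[:, :, i]."""
    s = X_true.shape[-1]
    terms = []
    for i in range(s):
        mse = np.mean((X_true[..., i] - T_rec[..., i]) ** 2)
        terms.append(mse / np.mean(T_rec[..., i]) ** 2)
    return 100.0 * np.sqrt(np.mean(terms))
```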
To further demonstrate the superiority of the proposed method in recovering data with a high missing rate, we consider the case where only 2% of the video data are observed, and the average completion results are shown in Table 5. From the table, we can see that TRSM-TC achieves the lowest RSE and ERGAS and the highest SSIM compared with the other methods. Moreover, the visual completion results of TRSM-TC, TRLRF, TR-WOPT, and FBCP are demonstrated in Figure 7. It can be seen from the results that the proposed method outperforms all the other algorithms. In particular, the proposed method can recover both the finer-grained textures and the coarser-grained structures of the videos, while the other methods cannot.
Furthermore, since the proposed method includes an explicit low-rank constraint on the TR cores, the resulting TR decomposition tends to be low-rank even when larger initial TR ranks are set. In the following experiment, we test the rank robustness of the proposed algorithm. Figure 8 shows the recovery results of container and akiyo under different choices of TR ranks. The best completion results for container and akiyo are obtained when the TR ranks are set to 15 and 21, respectively. Beyond these values, as the TR ranks increase, the recovery performances decline due to overfitting. However, the results indicate that the performance remains relatively stable. Specifically, in the case of the recovery of container, even when the TR ranks increase to 21, which is 6 higher than the best ranks, the overall structure of the image is still preserved. These findings confirm the rank robustness of the proposed method.

Conclusions
In this paper, we propose a novel tensor sparsity measurement based on TR decomposition, termed TRSM. The proposed TRSM has a unified interpretation with matrix sparsity measurements by taking the Kronecker basis as the fundamental representation component. By incorporating TRSM into the tensor completion problem as an optimization objective and solving it with a modified ADMM, we conducted extensive image and video data completion experiments to demonstrate the effectiveness of the proposed TRSM.
In our future research, we will apply TRSM to TRPCA and other tensor analysis applications to improve their performance.

Figure 1 .
Figure 1. Illustrations of the TR decomposition.

Figure 2 .
Figure 2. Illustrations of TR decomposition and its Kronecker bases representation.

Figure 3 .
Figure 3. Illustration of the convergence property of TRSM-TC under different choices of TR ranks. A synthetic tensor with TR structure (size $7 \times 8 \times 7 \times 8 \times 7$ with TR ranks $[5, 5, 5, 5, 5]$, missing rate 0.8) is tested. The experiment records the change in the objective function values along the number of iterations. Each independent experiment is conducted 100 times, the average results are shown in the graphs, and the convergence curve is presented.

Figure 4 .
Figure 4. The eight benchmark color images.

Figure 5 .
Figure 5. Visual completion results of the house image for a missing rate of 0.9 with 5D tensorization and TR ranks of 12. Top row from left to right: the original image, its 0.9 missing case, and the ground truth error image. The second and third rows show the images recovered by TRSM-TC and their corresponding error images, respectively; the fourth and fifth rows show the images recovered by TRLRF and their corresponding error images, respectively; the sixth and seventh rows show the images recovered by TR-WOPT and their corresponding error images, respectively. (a) The fully recovered images. (b) The images recovered by the first 230,000 Kronecker bases. (c) The images recovered by the first 220,000 Kronecker bases. (d) The images recovered by the first 210,000 Kronecker bases. (e) The images recovered by the first 200,000 Kronecker bases. In each figure, we zoom in on the small red box located on the left and display the result in the upper right area.

Figure 6 .
Figure 6. Visual completion results of the 31st band of fake and real apples and the 29th band of sponges under a 0.98 missing rate. The first and second rows represent the recovery results of fake and real apples and sponges, respectively, and from left to right are the original images, the 0.98 missing rate case of the images, and the images recovered by the algorithms TRSM-TC, TR-WOPT, TRLRF, and FBCP, respectively. In each figure, we zoom in on the small red box and display the result in the larger red box.

Figure 7 .
Figure 7. Visual completion results of 6 videos under a 0.98 missing rate. The columns from left to right: the original videos, the 0.98 missing cases, and the videos recovered by the algorithms TRSM-TC, TRLRF, TR-WOPT, and FBCP, respectively.

Figure 9 .
Figure 9. Ablation studies examining the effects of λ and the first TR-rank. In figures focusing on the first TR-rank, the horizontal dashed line represents results achieved when the first TR-rank is set equal to the other ranks.

Table 1 .
The average performance comparison of 7 competing TC methods with different missing rates on 8 color images. All the methods' best performances over different tensorization schemes are highlighted in bold.

Table 2 .
The averaged recovery SSIM between 7 competing TC methods with different missing rates on 8 color images. The best results are highlighted in bold.

Table 3 .
The average performance comparison of 7 competing TC methods with different missing rates and SNR on 8 color images. The upper and the lower tables show the recovery performances under missing rates 0.8 and 0.9, respectively. The best RSE and SSIM are highlighted in bold.

Table 4 .
The average performance comparison of 7 competing TC methods with different missing rates on 32 MSI images. The best results achieved over different recovery metrics are highlighted in bold.

Table 5 .
The average video completion performances of 7 competing TC methods with a missing rate of 0.98. The best results achieved over different recovery metrics are highlighted in bold.