We perform experiments on synthetic data and two real-world video sequences to validate the feasibility and effectiveness of the proposed method. The experimental results of GLRTR are compared with those of MRPCA, where the missing values in MRPCA are replaced by zeros.
6.1. Synthetic Data
In this subsection, we synthesize data tensors with missing entries. First, we generate an Nth-order low-rank tensor $\mathcal{L}_0 = \mathcal{C} \times_1 U_1 \times_2 \cdots \times_N U_N$ with the core tensor $\mathcal{C} \in \mathbb{R}^{r \times \cdots \times r}$ and mode matrices $U_n \in \mathbb{R}^{I_n \times r}$. The entries of $\mathcal{C}$ and $U_n$ are independently drawn from the standard normal distribution. Then, we randomly generate a dense noise tensor $\mathcal{E}_0$ whose entries also obey the standard normal distribution. Next, we construct a sparse noise tensor $\mathcal{S}_0 = \mathcal{P}_{\Omega'}(\mathcal{T})$, where $\mathcal{T}$ is produced by a uniform distribution on the interval $[-a, a]$ and the index set $\Omega'$ is produced by uniform sampling with probability $p'\%$. Finally, the sampling index set $\Omega$ is generated in the same way as $\Omega'$, and the corresponding sampling rate is set to $p\%$. Therefore, an incomplete data tensor is synthesized as $\mathcal{X} = \mathcal{P}_{\Omega}(\mathcal{L}_0 + \mathcal{S}_0 + \mathcal{E}_0)$.
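The generation procedure above can be sketched in NumPy as follows. This is a minimal sketch, not the authors' script: the uniform interval $[-a, a]$ for the gross corruptions, the i.i.d. Bernoulli sampling of Ω and Ω′, and the example rates are assumptions consistent with the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def tucker_tensor(dims, r, rng):
    """Low-rank tensor: an r x ... x r Gaussian core multiplied by a
    Gaussian mode matrix along every mode (mode-n products)."""
    t = rng.standard_normal((r,) * len(dims))
    for n, d in enumerate(dims):
        U = rng.standard_normal((d, r))
        t = np.moveaxis(np.tensordot(U, t, axes=(1, n)), 0, n)
    return t

dims, r, a = (50, 50, 50), 5, 500.0
p_sparse, p_obs = 0.05, 0.80                      # p'% = 5, p% = 80 (example rates)

L0 = tucker_tensor(dims, r, rng)                  # low-rank component
E0 = rng.standard_normal(dims)                    # dense Gaussian noise
S0 = np.where(rng.random(dims) < p_sparse,        # sparse support Omega'
              rng.uniform(-a, a, dims), 0.0)      # gross corruptions
Omega = rng.random(dims) < p_obs                  # observed index set
X = np.where(Omega, L0 + S0 + E0, 0.0)            # incomplete data tensor
```

With missing entries stored as zeros, `X` together with the mask `Omega` is the input handed to a recovery method.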
For a given data tensor with missing values, the low-rank component recovered by some method is denoted by $\hat{\mathcal{L}}$. The Relative Error (RE) is employed to evaluate the recovery performance of the low-rank structure and is defined as $\mathrm{RE} = \|\hat{\mathcal{L}} - \mathcal{L}_0\|_F / \|\mathcal{L}_0\|_F$. A small relative error means good recovery performance. The experiments are carried out on 50 × 50 × 50 tensors and 20 × 20 × 20 × 20 tensors, respectively. Furthermore, we set a = 500.
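In code, the relative error is a plain Frobenius-norm ratio (a minimal sketch, assuming the norm is taken over all tensor entries):

```python
import numpy as np

def relative_error(L_hat, L0):
    """RE = ||L_hat - L0||_F / ||L0||_F."""
    return np.linalg.norm((L_hat - L0).ravel()) / np.linalg.norm(L0.ravel())
```

A perfect recovery gives 0, and `relative_error(1.1 * L0, L0)` is 0.1 for any nonzero `L0`.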
For convenience of comparison, we design three groups of experiments. In the first group of experiments, we only consider the case where there are no missing entries, that is, p = 100. The values of $\|\mathcal{S}_0\|_F / \|\mathcal{L}_0\|_F$ and $\|\mathcal{E}_0\|_F / \|\mathcal{L}_0\|_F$ are adopted to indicate the Inverse Signal-to-Noise Ratio (ISNR) with respect to the sparse and the Gaussian noise, respectively. Three different degrees of sparsity are taken into account, that is, p′ = 5, 10 and 15. In addition, different values of the rank parameter r are taken for 3rd-order and for 4th-order tensors. For given parameters, we repeat the experiments ten times and report the average results. As a low-rank approximation method for tensors, the Higher-Order SVD (HOSVD) truncation [12] to multilinear rank $(r, \ldots, r)$ is not suitable for gross corruptions, owing to the fact that its relative error reaches up to 97% or even 100%. Hence, we do not compare this method with our GLRTR in subsequent experiments. The experimental results are shown in Table 1 and Table 2, respectively.
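The HOSVD truncation used as a baseline can be sketched as below. This is a standard implementation; the unfolding convention and the choice of truncating every mode to the same rank are assumptions.

```python
import numpy as np

def mode_n_product(T, M, n):
    """Multiply tensor T by matrix M along mode n."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def hosvd_truncate(X, ranks):
    """Truncated HOSVD: project each mode onto the top-r_n left singular
    vectors of the corresponding mode-n unfolding, then reconstruct."""
    factors = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for n, U in enumerate(factors):
        core = mode_n_product(core, U.T, n)                # compress
    approx = core
    for n, U in enumerate(factors):
        approx = mode_n_product(approx, U, n)              # reconstruct
    return approx
```

On a clean low-rank tensor this projection is exact; its failure under gross corruptions is precisely what the 97-100% relative errors above reflect.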
From the above two tables, we have the following observations: (I) Although the values of ISNR on sparse noise are very large, both MRPCA and GLRTR efficiently remove the sparse noise to some extent. Meanwhile, large values of ISNR on Gaussian noise are disadvantageous for recovering the low-rank components; (II) GLRTR has better recovery performance than MRPCA. In an average sense, the relative error of GLRTR is 2.68% smaller than that of MRPCA for 50 × 50 × 50 tensors, and 14.17% smaller for 20 × 20 × 20 × 20 tensors; (III) For 3rd-order tensors, GLRTR effectively removes Gaussian noise, and, on average, its relative error is 5.96% smaller than the value of ISNR on Gaussian noise; for 4th-order tensors, GLRTR effectively removes Gaussian noise only in the case r = 2. In summary, GLRTR is more effective than MRPCA in recovering the low-rank components.
The second group of experiments considers four different sampling rates for Ω and one fixed degree of sparsity for Ω′, namely p′ = 5. We set τ = 0.02 for both 3rd-order and 4th-order tensors, and choose the best-performing tradeoff parameter λ for each p.
The comparisons of experimental results between MRPCA and GLRTR are shown in Table 3 and Table 4, respectively. We can see from these two tables that MRPCA is very sensitive to the sampling rate p%, and it hardly recovers the low-rank components. In contrast, GLRTR achieves better recovery performance for 3rd-order tensors. As for 4th-order tensors, it also has a smaller relative error when the sampling rate p% is relatively large. These observations show that GLRTR is more robust to missing values than MRPCA.
We evaluate the sensitivity of GLRTR to the choice of λ and τ in the last group of experiments. For convenience, we only perform experiments on 50 × 50 × 50 tensors and consider the case p = 100. The values of λ and τ are set in the following manner: we vary the value of one parameter while keeping the other fixed. In the first case, the parameter τ is chosen as 0.01. Under this circumstance, the relative errors of MRPCA and GLRTR versus different λ are shown in Figure 1. We take λ = 0.01 in the second case, and the relative errors of GLRTR versus different τ are shown in Figure 2.
It can be seen from Figure 1 that the relative errors of MRPCA and GLRTR are about 9.90% and 4.62%, respectively, when 0.02 ≤ λ ≤ 0.07, which means the latter has better recovery performance than the former. Furthermore, their relative errors are relatively stable when λ lies within a certain interval. Figure 2 illustrates that the relative error tends to increase monotonically with τ and becomes almost stationary when τ ≥ 1, at which point the relative errors lie in the interval (0.037, 0.080). This group of experiments implies that, for our synthetic data, GLRTR is not very sensitive to the choice of λ and τ.
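A sensitivity sweep of this kind is straightforward to script. The sketch below uses a stub in place of the actual solver; the name and signature `glrtr(X, Omega, lam, tau)` are hypothetical, and only the sweep harness itself is the point.

```python
import numpy as np

def glrtr(X, Omega, lam, tau):
    # Stand-in for the GLRTR solver (hypothetical signature). The real
    # routine returns the recovered low-rank component; this stub just
    # returns the observed data so the harness below is runnable.
    return np.where(Omega, X, 0.0)

def sweep_lambda(X, Omega, L0, lams, tau=0.01):
    """Relative error of the recovered low-rank term for each candidate lambda."""
    norm0 = np.linalg.norm(L0)
    return {lam: np.linalg.norm(glrtr(X, Omega, lam, tau) - L0) / norm0
            for lam in lams}
```

Swapping the stub for a real solver and plotting the returned dictionary reproduces curves like those in Figure 1.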
6.2. Influence of Noise and Sampling Rate on the Relative Error
This subsection will evaluate the influence of noise and sampling rate on the relative error. For this purpose, we design four groups of experiments and use the synthetic data generated in the same manner as in the previous subsection. For the sake of convenience, we only carry out experiments on 50 × 50 × 50 tensors.
The first group of experiments aims to investigate the influence of noise on the recovery performance. In the data generation process, we only change the manner of generating $\mathcal{E}_0$: each entry of $\mathcal{E}_0$ is drawn independently from the normal distribution with mean zero and standard deviation b, where b = 0.2j for different positive integers j. For different combinations of a and b, the relative errors of GLRTR are shown in Figure 3. We can draw two conclusions from this figure. For a given b, the relative error is relatively stable as a increases, which means the relative error is not very sensitive to the magnitude of the sparse noise. The relative error increases monotonically with the Gaussian noise level, which validates that large Gaussian noise is disadvantageous for recovering the low-rank component.
Next, we study the influence of the sampling rate p% on the relative error for different r. We vary the value of p% from 30 to 100 in steps of size 10. For fixed r and p, we obtain the low-rank component according to GLRTR and then plot the relative errors in Figure 4. From the 3-D colored surface in Figure 4, we can see that both r and p have a significant influence on the relative error. This observation indicates that a small r or a large p is conducive to the recovery of the low-rank term.
The third group of experiments validates the robustness of GLRTR to sparse noise. Concretely speaking, we investigate the recovery performance under different values of ISNR on sparse noise, without Gaussian noise. The experimental results are shown in Figure 5, where the horizontal and vertical coordinates represent the ISNR on sparse noise and the relative error, respectively. This figure illustrates that the relative error is less than 4.5% for synthetic 3rd-order tensors, which verifies experimentally that our method is very robust to sparse noise.
In the last group of experiments, we discuss the performance of GLRTR in removing small dense noise in the presence of large sparse noise. We also propose a combination strategy, GLRTR + HOSVD, in which GLRTR is followed by HOSVD. The goal of this new method is to improve the denoising performance of GLRTR. Different values of b lead to different values of ISNR on Gaussian noise. We draw four curves to reflect the relationship between the relative error and the ISNR on Gaussian noise, as shown in Figure 6, where the black dashed line is a reference line. We have two observations from this figure. When the ISNR on Gaussian noise is larger than 3.5%, GLRTR not only separates the sparse noise to some extent but also effectively removes the Gaussian noise. The GLRTR + HOSVD method has better denoising performance than GLRTR in the presence of large Gaussian noise.
6.3. Applications in Background Modeling
In this subsection, we test our method on two real-world surveillance videos for object detection and background subtraction: the Lobby and Bootstrap datasets [26]. For convenience of computation, we only consider the first 200 frames of each dataset and transform the color images into gray-level images. The resolution of each image in the Lobby and Bootstrap datasets is 128 × 160 and 120 × 160, respectively. We add Gaussian noise with mean zero and standard deviation 5 to each image. Hence, we obtain two data tensors of order 3 with sizes 128 × 160 × 200 and 120 × 160 × 200, respectively. For the two given tensors, we execute random sampling on them with a probability of 50%.
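Assembling a video tensor of this kind can be sketched as follows, with random frames standing in for the actual Lobby data (the frame array and its value range are placeholders, not the dataset itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 200 gray-level Lobby frames (128 x 160 each).
frames = rng.integers(0, 256, size=(200, 128, 160)).astype(float)

# Stack frames into a 128 x 160 x 200 tensor, add N(0, 5^2) noise,
# then keep each entry with probability 50% (observed set Omega).
video = np.moveaxis(frames, 0, -1)                 # shape (128, 160, 200)
noisy = video + rng.normal(0.0, 5.0, video.shape)
Omega = rng.random(video.shape) < 0.5
X = np.where(Omega, noisy, 0.0)                    # incomplete input tensor
```

The same recipe with 120 × 160 frames yields the Bootstrap tensor.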
Considering the fact that MRPCA fails to recover the low-rank components on the synthetic data with missing values, we only apply GLRTR to the video datasets. The two tradeoff parameters are set as follows: λ = 0.0072 and τ = 0.001. We obtain the low-rank, sparse, and completed components from the incomplete data tensors according to the proposed method. Here, the low-rank terms are the backgrounds and the sparse noise terms correspond to the foregrounds. The experimental results are partially shown in Figure 7 and Figure 8, respectively, where the missing entries in incomplete images are shown in white. From Figure 7 and Figure 8, we can see that GLRTR can efficiently recover the low-rank images and the sparse noise images. Moreover, we observe from the recovered images that a large proportion of missing entries are effectively completed.
To evaluate the completion performance of GLRTR, we define the Relative Approximate Error (RAE) as $\mathrm{RAE} = \|\hat{\mathcal{X}} - \mathcal{X}_0\|_F / \|\mathcal{X}_0\|_F$, where $\mathcal{X}_0$ is the original video tensor without Gaussian noise corruptions and missing entries, and $\hat{\mathcal{X}}$ is its approximation. The RAE of the Lobby dataset is 8.94% and that of the Bootstrap dataset is 20.11%. These results demonstrate that GLRTR can approximately complete the missing entries to a certain degree. There are two reasons why the Bootstrap dataset has a relatively large RAE: one is its more complex foreground, and the other is that the entries of the foreground cannot be recovered when they are missing. In summary, GLRTR is robust to gross corruptions, Gaussian noise, and missing values.