Article

Tensorized Multi-View Subspace Clustering via Tensor Nuclear Norm and Block Diagonal Representation

1 School of Computer Science and Information, Anhui Polytechnic University, Wuhu 241000, China
2 Anhui Provincial Medical Big Data Intelligent System Engineering Research Center, Anhui Normal University, Wuhu 241000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(17), 2710; https://doi.org/10.3390/math13172710
Submission received: 9 July 2025 / Revised: 16 August 2025 / Accepted: 20 August 2025 / Published: 22 August 2025

Abstract

Recently, a growing number of researchers have focused on multi-view subspace clustering (MSC) due to its potential for integrating heterogeneous data. However, current MSC methods remain challenged by limited robustness and insufficient exploitation of cross-view high-order latent information. To address these challenges, we develop a novel tensorized MSC framework, termed TMSC-TNNBDR, that leverages t-SVD-based tensor nuclear norm (TNN) regularization and block diagonal representation (BDR) learning to unify view consistency and structural sparsity. Specifically, each subspace representation matrix is constrained by a block diagonal regularizer to enforce cluster structure, while all matrices are aggregated into a tensor to capture high-order interactions. To efficiently optimize the model, we develop an optimization algorithm based on the inexact augmented Lagrange multiplier (ALM) method. TMSC-TNNBDR exhibits both an optimized block-diagonal structure and low-rank properties, thereby enabling enhanced mining of latent higher-order inter-view correlations while demonstrating greater resilience to noise. To investigate the capability of TMSC-TNNBDR, we conducted experiments on several benchmark datasets. The results demonstrate that our method delivers superior clustering performance over comparative algorithms while maintaining competitive computational overhead.

1. Introduction

In the field of pattern recognition, subspace clustering (SC) is a very important research topic [1,2,3,4]. In recent decades, scholars have created many subspace clustering algorithms, among which spectral-type methods have shown good performance.
Sparse subspace clustering (SSC) [5] and low-rank representation (LRR) [6], which have achieved significant success, are two typical self-representation subspace clustering methods. A block diagonal structure is a matrix form in which non-zero elements are confined to square blocks along the main diagonal, with all elements outside these blocks being zero. Recent studies [7,8] have shown that a block diagonal structure within the learned low-dimensional subspace representation is key to obtaining correct clustering results. However, SSC and LRR pursue the block diagonal representation (BDR) matrix only indirectly, since they impose the L1-norm and the nuclear norm on the subspace representation, respectively. Feng et al. [8] imposed a block diagonal prior on the subspace representation matrices obtained using SSC and LRR, and their clustering performance was improved. However, Feng's method is difficult to optimize since the rank constraint is NP-hard. To tackle this problem, Lu et al. [7] developed a simple BDR that relaxes the rank constraint. Compared with Feng's method, BDR is easier to optimize since it is smooth. Xu et al. [9] developed a projective learning model for BDR to deal with the large-scale subspace clustering problem. Xing et al. [10] proposed an enhanced version of DBSCAN, a highly prevalent algorithm in data mining, which improves the clustering process using the block diagonal property of affinity matrices. Meanwhile, Guo et al. [11] put forward a spectral clustering algorithm with BDR for large-scale datasets.
The above-mentioned approaches are single-view-based, since they assume that there is only one data source. In practice, however, data generally originate from various sources; for instance, one event can be represented by images, text, and videos. As such, multi-view clustering (MVC) methods, which often demonstrate better clustering performance than single-view methods [12,13,14,15,16,17,18,19], are becoming increasingly popular.
The authors of [20] proposed co-training learning to fuse the multi-view features. Additionally, the study in [21] investigated the key factors contributing to the success of the co-training method. However, the co-training learning model is not robust against noise pollution, which can lead to error amplification. Kumar et al. [17] proposed an MSC framework in which the clustering hypotheses among views are co-regularized. Graph-based methods are another category of MSC methods; they generally use a multiple-graph fusion strategy to exploit the information among different views. De Sa [22] developed a two-view clustering method, which utilizes the different information between two views by constructing bipartite graphs. Moreover, the authors of [13] developed a multi-view spectral clustering algorithm based on low-rank and sparse decomposition (RMSC), achieving encouraging results on several real datasets. In the work of Cao et al. [15], diversity-induced MSC (DiMSC) was presented, leveraging the Hilbert–Schmidt Independence Criterion (HSIC), which plays a key role in utilizing complementary information among different views to enhance the clustering.
By assuming that the different views of an object come from a potential subspace, subspace learning MVC methods can be developed to capture the shared potential subspace. Blaschko and Lampert [23] introduced a novel spectral clustering technique that utilizes canonical correlation analysis (CCA) in its linear and kernel forms for dimensionality reduction. In [24], a low-rank common subspace (LRCS) MVC method is proposed, which can obtain compatible intrinsic information among views by using a common low-rank projection.
These MVC methods show promising performance in clustering applications; however, they only use pairwise associations between views and may overlook the higher-order associations hidden in multi-view data [12,14,19,25,26,27]. Zhang et al. [12,19] developed a novel multi-view spectral clustering method named LTMSC, incorporating low-rank tensor constraints. In this method, the subspace representations are stacked into a single tensor, making it possible to explore higher-order relationships hidden in the multi-view data. Lu et al. [28] introduced an MSC method with hyper-Laplacian regularization and low-rank tensor constraints (HLR-MSCLRT), which can uncover the local information hidden in the data on the manifold. Nevertheless, the tensor norm employed in both LTMSC and HLR-MSCLRT lacks a clear physical interpretation.
Zhang et al. [29] introduced the TNN, leveraging the tensor singular value decomposition (t-SVD). The TNN, defined as the summation of singular values, provides a rigorous measure of the low-rankness of tensor data. In [14], Xie et al. developed a t-SVD-based MSC model, namely t-SVD-MSC, which preserves the low-rank property through the TNN. With the use of the TNN, t-SVD-MSC can more effectively explore the complementary information among all the views [30,31,32,33,34]. Furthermore, in [18], an essential tensor learning method for MSC using a TNN constraint, known as ETLMSC, is proposed. Pan et al. [35] proposed a non-negative non-convex low-rank tensor kernel function in an MSC model (NLRTGC) to reduce the bias introduced by the rank surrogate. To exploit high-dimensional hidden information, Pan et al. [36] proposed a low-rank fuzzy MSC learning algorithm with a TNN constraint (LRTGFL). Peng et al. [37] designed log-based non-convex functions to approximate tensor rank and tensor sparsity in the Finger-MVC model; these are more precise than the convex surrogates. Wang et al. [38] integrated noise elimination and subspace learning into a unified MSC framework, in which the high-order associations among views are constrained by the TNN. Du et al. [39] proposed a robust t-SVD-based multi-view clustering method that simultaneously uses low-rank and local smoothness priors. Luo et al. [40] used an adaptively weighted tensor Schatten-p norm with an adjustable p-value to eliminate the biased estimate of rank.
The optimized BDR structure in affinity matrices inherently encodes cluster information, thereby substantially enhancing clustering efficacy. The low-rank tensor representations intrinsically capture latent high-order correlations across multi-view data through subspace embeddings, resulting in statistically significant clustering improvements. In this paper, inspired by the optimized BDR structure and low-rank tensor representations, we propose a novel MSC method called TMSC-TNNBDR, which integrates the advantages of TNN and BDR. The proposed model imposes BDR constraints on each subspace representation matrix, and all affinity matrices are combined into a tensor regularized by TNN. Finally, an efficient optimization algorithm based on ALM is developed.
The primary contributions of our work are as follows:
  • The proposed TMSC-TNNBDR incorporates a BDR regularizer, which promotes a more pronounced block diagonal structure and improves clustering robustness.
  • In the TMSC-TNNBDR model, a TNN constraint is imposed on the stacked representation tensor, under which TMSC-TNNBDR captures the global structure across all views, thereby effectively exploiting latent complementary information and high-order interactions among views.
  • We propose an ALM-based optimizer for TMSC-TNNBDR. The resulting approach demonstrates superior clustering performance over comparative algorithms while maintaining competitive computational efficiency.
The remainder of this work is organized as follows. In Section 2, we summarize the notations used and some preliminary definitions. In Section 3, we briefly review two methods, namely LRR [6] and BDR [7]. We then propose TMSC-TNNBDR and a solving procedure for it in Section 4. Subsequently, we document the experimental findings in Section 5. Finally, we conclude our work in Section 6.

2. Notations and Preliminaries

2.1. Notations

For a clear explanation of TMSC-TNNBDR, we summarize the notations in Table 1. Bold calligraphic letters, e.g., $\mathcal{U}$, denote tensors; bold upper-case letters, e.g., $U$, denote matrices; bold lower-case letters, e.g., $u$, denote vectors; and lower-case letters with subscripts, e.g., $u_{ij}$, denote entries. In addition, $\mathbf{1}$ denotes the all-ones column vector. Lining up the diagonal elements of a matrix $W$ as a column vector is written $\mathrm{diag}(W)$, and expanding a column vector $w$ into a diagonal matrix is written $\mathrm{Diag}(w)$.
Let $\|W\|_*$ denote the nuclear norm of a matrix $W$, i.e., $\|W\|_* = \sum_i \sigma_i(W)$, where $\sigma_i(W)$ is the $i$th largest singular value of $W$. Assuming that the singular value decomposition of $W$ is $W = U \Sigma V^T$, then $D_\tau(W)$ denotes the singular-value thresholding operator applied to $W$ with threshold $\tau$, i.e., $D_\tau(W) = U \Sigma_\tau V^T$, where $\Sigma_\tau = \mathrm{diag}\big(\max(\sigma_i(W) - \tau, 0)\big)$.
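For concreteness, the following is a minimal NumPy sketch of the singular-value thresholding operator $D_\tau(\cdot)$ defined above; the function name svt and the use of a reduced SVD are our own choices.

```python
import numpy as np

def svt(W, tau):
    """Singular-value thresholding D_tau(W) = U diag(max(sigma_i - tau, 0)) V^T."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)          # shrink each singular value by tau
    return U @ np.diag(s_thr) @ Vt
```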
Matrices extend naturally to tensors, which are multidimensional arrays; mathematically, a matrix is a two-way tensor. A three-way tensor $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ can be regarded as a stack of its frontal slice matrices $Y^{(1)}, \ldots, Y^{(n_3)}$. Some block-based operators [31] for $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ are defined as follows:
$$\mathrm{bcirc}(\mathcal{Y}) = \begin{bmatrix} Y^{(1)} & Y^{(n_3)} & \cdots & Y^{(2)} \\ Y^{(2)} & Y^{(1)} & \cdots & Y^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ Y^{(n_3)} & Y^{(n_3 - 1)} & \cdots & Y^{(1)} \end{bmatrix}$$
$$\mathrm{bvec}(\mathcal{Y}) := \begin{bmatrix} Y^{(1)} \\ Y^{(2)} \\ \vdots \\ Y^{(n_3)} \end{bmatrix}, \qquad \mathrm{bvfold}(\mathrm{bvec}(\mathcal{Y})) = \mathcal{Y}$$
$$\mathrm{bdiag}(\mathcal{Y}) := \begin{bmatrix} Y^{(1)} & & \\ & \ddots & \\ & & Y^{(n_3)} \end{bmatrix}, \qquad \mathrm{bdfold}(\mathrm{bdiag}(\mathcal{Y})) = \mathcal{Y}$$
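The block-based operators above can be written down directly; the sketch below is a minimal NumPy illustration, assuming the tensor is stored as an array of shape (n1, n2, n3) whose frontal slices are Y[:, :, k].

```python
import numpy as np
from scipy.linalg import block_diag

def bcirc(Y):
    """Block-circulant matrix of a three-way tensor Y (n1 x n2 x n3)."""
    n1, n2, n3 = Y.shape
    slices = [Y[:, :, k] for k in range(n3)]
    rows = []
    for i in range(n3):
        # block (i, j) is the frontal slice with index (i - j) mod n3
        rows.append(np.hstack([slices[(i - j) % n3] for j in range(n3)]))
    return np.vstack(rows)

def bvec(Y):
    """Stack the frontal slices vertically."""
    return np.vstack([Y[:, :, k] for k in range(Y.shape[2])])

def bdiag(Y):
    """Place the frontal slices on the diagonal of a block-diagonal matrix."""
    return block_diag(*[Y[:, :, k] for k in range(Y.shape[2])])
```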

2.2. Preliminaries

We introduce some preliminary definitions [31] as follows:
Definition 1.
Suppose $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{Z} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$; the t-product $\mathcal{Y} * \mathcal{Z}$ is the tensor $\mathcal{M} \in \mathbb{R}^{n_1 \times n_4 \times n_3}$, i.e.,
$$\mathcal{M} = \mathcal{Y} * \mathcal{Z} := \mathrm{bvfold}\{\mathrm{bcirc}(\mathcal{Y})\,\mathrm{bvec}(\mathcal{Z})\}$$
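As an illustration of Definition 1, the t-product can be computed slice-wise in the Fourier domain, which is the standard efficient equivalent of forming bcirc explicitly [30,31]; the sketch below assumes the same (n1, n2, n3) storage convention as above.

```python
import numpy as np

def t_product(Y, Z):
    """t-product M = Y * Z, computed via FFT along the third mode
    (equivalent to bvfold(bcirc(Y) @ bvec(Z)))."""
    n1, n2, n3 = Y.shape
    assert Z.shape[0] == n2 and Z.shape[2] == n3
    Yf = np.fft.fft(Y, axis=2)
    Zf = np.fft.fft(Z, axis=2)
    Mf = np.empty((n1, Z.shape[1], n3), dtype=complex)
    for k in range(n3):
        Mf[:, :, k] = Yf[:, :, k] @ Zf[:, :, k]   # slice-wise product in the Fourier domain
    return np.real(np.fft.ifft(Mf, axis=2))
```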
Definition 2.
Suppose $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; then, the tensor transpose of $\mathcal{Y}$ is $\mathcal{Y}^T \in \mathbb{R}^{n_2 \times n_1 \times n_3}$, obtained by transposing each frontal slice and then reversing the order of the transposed slices 2 through $n_3$.
Definition 3.
Suppose $\mathcal{I} \in \mathbb{R}^{n_1 \times n_1 \times n_2}$; then, $\mathcal{I}$ is the identity tensor if its first frontal slice is the identity matrix $I \in \mathbb{R}^{n_1 \times n_1}$ and all other frontal slices are zero.
Definition 4.
If $\mathcal{P} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ satisfies
$$\mathcal{P}^T * \mathcal{P} = \mathcal{P} * \mathcal{P}^T = \mathcal{I},$$
then $\mathcal{P}$ is an orthogonal tensor.
Definition 5.
Suppose $\mathcal{P} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; $\mathcal{P}$ is an f-diagonal tensor if all of its frontal slices are diagonal matrices.
Definition 6.
Suppose the tensor $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; then, the t-SVD of $\mathcal{Y}$ is defined as follows:
$$\mathcal{Y} = \mathcal{U} * \mathcal{L} * \mathcal{V}^T$$
where $\mathcal{L} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is f-diagonal, and $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are both orthogonal.
Definition 7.
Suppose $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; then, the TNN of $\mathcal{Y}$, denoted $\|\mathcal{Y}\|_{TNN}$, is the summation of the singular values obtained from the t-SVD of $\mathcal{Y}$. It has been proven that the t-SVD-based TNN is the tightest convex relaxation of the tensor tubal rank [29].
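A minimal sketch of how $\|\mathcal{Y}\|_{TNN}$ can be evaluated in practice, summing the singular values of the Fourier-domain frontal slices; whether a $1/n_3$ normalization is applied differs between conventions, so it is left as an option here.

```python
import numpy as np

def tnn(Y, normalize=False):
    """t-SVD-based tensor nuclear norm: sum of the singular values of the
    Fourier-domain frontal slices of Y (optionally divided by n3)."""
    n3 = Y.shape[2]
    Yf = np.fft.fft(Y, axis=2)
    total = sum(np.linalg.svd(Yf[:, :, k], compute_uv=False).sum() for k in range(n3))
    return total / n3 if normalize else total
```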

3. Related Work

Before presenting our method, this section establishes the theoretical foundation by first reviewing two classical methods: LRR [6] and BDR [7].

3.1. Low-Rank Representation (LRR)

Let us assume that $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{d \times N}$ is a set of $N$ data points whose dimensionality is $d$. LRR seeks a low-rank self-representation of the samples for clustering. The objective of LRR can be formulated as follows:
$$\min_{Z, E}\ \lambda \|E\|_{2,1} + \|Z\|_* \quad \mathrm{s.t.}\ X = XZ + E$$
where $Z = [z_1, z_2, \ldots, z_N] \in \mathbb{R}^{N \times N}$ is the representation of the dataset $X$, $E$ is the approximation error, $\|\cdot\|_{2,1}$ denotes the $L_{2,1}$-norm, and $\|\cdot\|_*$ denotes the nuclear norm.
LRR then performs spectral clustering on the affinity matrix $W = \frac{Z + Z^T}{2}$.

3.2. Block Diagonal Representation (BDR)

The authors of [7] proposed the following block diagonal regularizer to pursue the optimal representation.
Definition 8.
The k-block diagonal regularizer of the affinity matrix $W \in \mathbb{R}^{N \times N}$ is formulated as follows:
$$\|W\|_k = \sum_{i = N - k + 1}^{N} \delta_i(L_W)$$
where $L_W = \mathrm{Diag}(W\mathbf{1}) - W$, i.e., $L_W$ is the Laplacian matrix of $W$, $\delta_i(L_W)$ denotes the $i$th eigenvalue of $L_W$ in decreasing order, and $k$ is the number of subspaces. The loss function of BDR is defined as follows:
$$\min_{Z, W}\ \frac{1}{2}\|X - XZ\|_F^2 + \frac{\lambda}{2}\|Z - W\|_F^2 + \gamma \|W\|_k \quad \mathrm{s.t.}\ \mathrm{diag}(W) = 0,\ W \geq 0,\ W = W^T$$
The affinity matrix used for spectral clustering is again defined as $\frac{Z + Z^T}{2}$.
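For illustration, the value of the regularizer $\|W\|_k$ in Definition 8 can be evaluated as follows (a minimal sketch assuming a symmetric, non-negative W):

```python
import numpy as np

def block_diag_reg(W, k):
    """k-block diagonal regularizer ||W||_k: sum of the k smallest
    eigenvalues of the Laplacian L_W = Diag(W @ 1) - W (Definition 8)."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh((L + L.T) / 2)   # ascending order; symmetrize for safety
    return eigvals[:k].sum()
```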

4. The Proposed TMSC-TNNBDR

In this section, we introduce the TMSC-TNNBDR framework that extends classical LRR and BDR approaches. Subsequently, we derive an ALM-based optimization scheme to solve the resulting non-convex problem.

4.1. Problem Formulation

Let $X^{(v)} = [x_1^{(v)}, x_2^{(v)}, \ldots, x_N^{(v)}] \in \mathbb{R}^{d \times N}$ and $H^{(v)} = [h_1^{(v)}, h_2^{(v)}, \ldots, h_N^{(v)}] \in \mathbb{R}^{N \times N}$ be, respectively, the feature matrix and the subspace coefficient matrix of the $v$th view. The loss function of TMSC-TNNBDR is as follows:
$$\min_{H^{(v)}, B^{(v)}}\ \sum_{v=1}^{V}\left(\frac{\lambda}{2}\left\|X^{(v)} - X^{(v)} H^{(v)}\right\|_F^2 + \frac{\alpha}{2}\left\|H^{(v)} - B^{(v)}\right\|_F^2 + \gamma \left\|B^{(v)}\right\|_k\right) + \|\mathcal{H}\|_{TNN} \qquad (10)$$
$$\mathrm{s.t.}\ \mathcal{H} = \Psi(H^{(1)}, H^{(2)}, \ldots, H^{(V)}),\quad \mathrm{diag}(B^{(v)}) = 0,\ B^{(v)} \geq 0,\ B^{(v)} = B^{(v)T}$$
where $\Psi(\cdot)$ is a function that stacks all $H^{(v)}$ ($v = 1, 2, \ldots, V$) into a tensor in $\mathbb{R}^{N \times N \times V}$ and then applies a rotation so that $\mathcal{H} \in \mathbb{R}^{N \times V \times N}$.
In Equation (10), $\|X^{(v)} - X^{(v)} H^{(v)}\|_F^2$ is the self-representation reconstruction error, $\|B^{(v)}\|_k$ is the BDR constraint on $B^{(v)}$, $\|\mathcal{H}\|_{TNN}$ is the TNN low-rank constraint on $\mathcal{H}$, and $\|H^{(v)} - B^{(v)}\|_F^2$ can be seen as a robust-PCA-like term that removes the noise contained in $H^{(v)}$. Moreover, $\lambda$, $\alpha$, and $\gamma$ are tunable hyperparameters.
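The stacking-and-rotation operator $\Psi(\cdot)$ can be illustrated with a short sketch; the helper names and the exact axis permutation used to obtain the $N \times V \times N$ shape are our own reading of Equation (10).

```python
import numpy as np

def stack_and_rotate(H_list):
    """Psi(.): stack the V subspace matrices H^(v) (each N x N) into an
    N x N x V tensor, then rotate it to shape N x V x N so that the TNN
    acts across views (the permutation below is an implementation choice)."""
    T = np.stack(H_list, axis=2)          # N x N x V
    return np.transpose(T, (0, 2, 1))     # N x V x N

def unrotate(H_tensor):
    """Inverse of stack_and_rotate: recover the list of N x N matrices."""
    T = np.transpose(H_tensor, (0, 2, 1)) # back to N x N x V
    return [T[:, :, v] for v in range(T.shape[2])]
```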

4.2. Optimization

The loss function of TMSC-TNNBDR, i.e., Equation (10), can be optimized through the ALM. The theorem relating to $\|B^{(v)}\|_k$ is as follows:
Theorem 1
([41]). Suppose $L \in \mathbb{R}^{n \times n}$ is positive semi-definite; then, the following holds:
$$\sum_{i = n - k + 1}^{n} \lambda_i(L) = \min_{W}\ \langle L, W \rangle \quad \mathrm{s.t.}\ 0 \preceq W \preceq I,\ \mathrm{tr}(W) = k \qquad (11)$$
In accordance with Theorem 1, Equation (10) can be rewritten as Equation (12):
$$\min_{H^{(v)}, W^{(v)}, B^{(v)}}\ \sum_{v=1}^{V}\left(\frac{\lambda}{2}\left\|X^{(v)} - X^{(v)} H^{(v)}\right\|_F^2 + \frac{\alpha}{2}\left\|H^{(v)} - B^{(v)}\right\|_F^2 + \gamma \left\langle \mathrm{Diag}(B^{(v)}\mathbf{1}) - B^{(v)},\ W^{(v)} \right\rangle\right) + \|\mathcal{H}\|_{TNN} \qquad (12)$$
$$\mathrm{s.t.}\ \mathcal{H} = \Psi(H^{(1)}, H^{(2)}, \ldots, H^{(V)}),\ \mathrm{diag}(B^{(v)}) = 0,\ B^{(v)} \geq 0,\ B^{(v)} = B^{(v)T},\ 0 \preceq W^{(v)} \preceq I,\ \mathrm{tr}(W^{(v)}) = k$$
To solve Equation (12), an auxiliary tensor variable G is introduced to replace H . Then, the loss function of TMSC-TNNBDR is converted into the following:
$$\min_{H^{(v)}, W^{(v)}, B^{(v)}, \mathcal{G}}\ \sum_{v=1}^{V}\left(\frac{\lambda}{2}\left\|X^{(v)} - X^{(v)} H^{(v)}\right\|_F^2 + \frac{\alpha}{2}\left\|H^{(v)} - B^{(v)}\right\|_F^2 + \gamma \left\langle \mathrm{Diag}(B^{(v)}\mathbf{1}) - B^{(v)},\ W^{(v)} \right\rangle\right) + \|\mathcal{G}\|_{TNN} \qquad (13)$$
$$\mathrm{s.t.}\ \mathcal{G} = \mathcal{H},\ \mathcal{H} = \Psi(H^{(1)}, \ldots, H^{(V)}),\ \mathcal{G} = \Psi(G^{(1)}, \ldots, G^{(V)}),\ \mathrm{diag}(B^{(v)}) = 0,\ B^{(v)} \geq 0,\ B^{(v)} = B^{(v)T},\ 0 \preceq W^{(v)} \preceq I,\ \mathrm{tr}(W^{(v)}) = k$$
Equation (13) is then converted into the following augmented Lagrangian form:
$$L\big(H^{(1)}, \ldots, H^{(V)}; W^{(1)}, \ldots, W^{(V)}; B^{(1)}, \ldots, B^{(V)}; \mathcal{G}\big) = \sum_{v=1}^{V}\left(\frac{\lambda}{2}\left\|X^{(v)} - X^{(v)} H^{(v)}\right\|_F^2 + \frac{\alpha}{2}\left\|H^{(v)} - B^{(v)}\right\|_F^2 + \gamma \left\langle \mathrm{Diag}(B^{(v)}\mathbf{1}) - B^{(v)},\ W^{(v)} \right\rangle\right) + \|\mathcal{G}\|_{TNN} + \left\langle \mathcal{P},\ \mathcal{H} - \mathcal{G} \right\rangle + \frac{\rho}{2}\left\|\mathcal{H} - \mathcal{G}\right\|_F^2 \qquad (14)$$
$$\mathrm{s.t.}\ \mathrm{diag}(B^{(v)}) = 0,\ B^{(v)} \geq 0,\ B^{(v)} = B^{(v)T},\ 0 \preceq W^{(v)} \preceq I,\ \mathrm{tr}(W^{(v)}) = k$$
where $\mathcal{P}$ denotes the Lagrange multiplier and $\rho$ is the penalty parameter.
We obtain the solutions for $H^{(v)}$, $W^{(v)}$, $B^{(v)}$, and $\mathcal{G}$ by updating each variable alternately in Equation (14). The steps are described as follows:
$H^{(v)}$-subproblem: To compute $H^{(v)}$, we fix the other variables and solve the following problem:
$$H^{(v)} = \arg\min_{H^{(v)}}\ \frac{\lambda}{2}\left\|X^{(v)} - X^{(v)} H^{(v)}\right\|_F^2 + \frac{\alpha}{2}\left\|H^{(v)} - B^{(v)}\right\|_F^2 + \left\langle P^{(v)},\ H^{(v)} - G^{(v)} \right\rangle + \frac{\rho}{2}\left\|H^{(v)} - G^{(v)}\right\|_F^2 \qquad (15)$$
Setting the derivative with respect to $H^{(v)}$ to zero, we obtain the following:
$$H^{(v)} = \left(\lambda X^{(v)T} X^{(v)} + (\alpha + \rho) I\right)^{-1}\left(\lambda X^{(v)T} X^{(v)} + \alpha B^{(v)} + \rho G^{(v)} - P^{(v)}\right) \qquad (16)$$
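A minimal sketch of the closed-form update in Equation (16) for a single view; the function name and the use of a linear solve instead of an explicit inverse are implementation choices.

```python
import numpy as np

def update_H(X, B, G, P, lam, alpha, rho):
    """Closed-form H^(v) update of Equation (16) for one view.
    X: d x N data matrix; B, G, P: N x N matrices from the current iteration."""
    N = X.shape[1]
    XtX = X.T @ X
    A = lam * XtX + (alpha + rho) * np.eye(N)
    rhs = lam * XtX + alpha * B + rho * G - P
    return np.linalg.solve(A, rhs)        # solves A @ H = rhs
```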
$W^{(v)}$-subproblem: $W^{(v)}$ is computed as follows:
$$W^{(v)} = \arg\min_{W^{(v)}}\ \left\langle \mathrm{Diag}(B^{(v)}\mathbf{1}) - B^{(v)},\ W^{(v)} \right\rangle \quad \mathrm{s.t.}\ 0 \preceq W^{(v)} \preceq I,\ \mathrm{tr}(W^{(v)}) = k \qquad (17)$$
For Equation (17), $W^{(v)} = U U^T$, where $U \in \mathbb{R}^{N \times k}$ consists of the $k$ eigenvectors corresponding to the $k$ smallest eigenvalues of $\mathrm{Diag}(B^{(v)}\mathbf{1}) - B^{(v)}$ [7].
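A corresponding sketch of the $W^{(v)}$ update in Equation (17):

```python
import numpy as np

def update_W(B, k):
    """W^(v) update of Equation (17): W = U U^T, where U holds the k
    eigenvectors of Diag(B @ 1) - B with the smallest eigenvalues [7]."""
    L = np.diag(B.sum(axis=1)) - B
    _, vecs = np.linalg.eigh((L + L.T) / 2)   # eigenvalues in ascending order
    U = vecs[:, :k]
    return U @ U.T
```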
$B^{(v)}$-subproblem: $B^{(v)}$ is computed as follows:
$$B^{(v)} = \arg\min_{B^{(v)}}\ \frac{\alpha}{2}\left\|H^{(v)} - B^{(v)}\right\|_F^2 + \gamma \left\langle \mathrm{Diag}(B^{(v)}\mathbf{1}) - B^{(v)},\ W^{(v)} \right\rangle \quad \mathrm{s.t.}\ \mathrm{diag}(B^{(v)}) = 0,\ B^{(v)} \geq 0,\ B^{(v)} = B^{(v)T} \qquad (18)$$
Equation (18) can be converted into the following:
$$B^{(v)} = \arg\min_{B^{(v)}}\ \frac{1}{2}\left\|B^{(v)} - \left(H^{(v)} - \frac{\gamma}{\alpha}\left(\mathrm{diag}(W^{(v)})\mathbf{1}^T - W^{(v)}\right)\right)\right\|_F^2 \quad \mathrm{s.t.}\ \mathrm{diag}(B^{(v)}) = 0,\ B^{(v)} \geq 0,\ B^{(v)} = B^{(v)T} \qquad (19)$$
The theorem in [7] gives the closed-form solution of Equation (19).
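The following sketch illustrates the $B^{(v)}$ step of Equation (19); the closed-form projection onto the constraint set follows our reading of the corresponding theorem in [7] and should be checked against the original statement.

```python
import numpy as np

def update_B(H, W, alpha, gamma):
    """B^(v) update of Equation (19): project A = H - (gamma/alpha)(diag(W) 1^T - W)
    onto {B : diag(B) = 0, B >= 0, B = B^T}.  The projection formula is our
    reading of the theorem in [7]."""
    N = H.shape[0]
    A = H - (gamma / alpha) * (np.diag(W)[:, None] * np.ones((1, N)) - W)
    A_hat = A - np.diag(np.diag(A))              # zero out the diagonal
    return np.maximum((A_hat + A_hat.T) / 2, 0)  # symmetrize and keep the non-negative part
```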
$\mathcal{G}$-subproblem: We fix the other variables and update $\mathcal{G}$ as follows:
$$\mathcal{G} = \arg\min_{\mathcal{G}}\ \|\mathcal{G}\|_{TNN} + \frac{\rho}{2}\left\|\mathcal{G} - \left(\mathcal{H} + \frac{\mathcal{P}}{\rho}\right)\right\|_F^2 \qquad (20)$$
The solution of Equation (20) can be obtained using the tensor singular-value thresholding theorem in [14,29].
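Equation (20) is a standard TNN proximal problem, solved by tensor singular-value thresholding; the sketch below applies matrix SVT to each Fourier-domain frontal slice, and the effective threshold (e.g., $1/\rho$) depends on the TNN normalization convention adopted in [14,29].

```python
import numpy as np

def tensor_svt(M, tau):
    """Tensor singular-value thresholding: apply matrix SVT with threshold tau
    to every Fourier-domain frontal slice of M, then transform back."""
    n1, n2, n3 = M.shape
    Mf = np.fft.fft(M, axis=2)
    Gf = np.empty_like(Mf)
    for k in range(n3):
        U, s, Vt = np.linalg.svd(Mf[:, :, k], full_matrices=False)
        Gf[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt
    return np.real(np.fft.ifft(Gf, axis=2))

# Sketch of the G-step of Equation (20): G = tensor_svt(H + P / rho, 1.0 / rho)
```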
$\mathcal{P}$-subproblem: the Lagrange multiplier $\mathcal{P}$ is updated as follows:
$$\mathcal{P} = \mathcal{P} + \mu\left(\mathcal{H} - \mathcal{G}\right) \qquad (21)$$
Finally, the TMSC-TNNBDR procedure is outlined in Algorithm 1.
Algorithm 1 TMSC-TNNBDR
  • Input: $X^{(1)}, X^{(2)}, \ldots, X^{(V)}$, $\lambda$, $\alpha$, $\gamma$, and cluster number $k$;
  • Output: Clustering result
  • Initialize:
  •    $H^{(v)} = 0$, $W^{(v)} = 0$, $B^{(v)} = 0$ ($v = 1, \ldots, V$), $\mathcal{G} = \mathcal{P} = 0$, $\mu = 10^{-5}$, $\rho = 10^{-4}$, $\varepsilon = 10^{-7}$,
  •    $\mu_{\max} = \rho_{\max} = 10^{10}$
  • While not converged do
  •   for $v = 1, \ldots, V$ do
  •      Update $H^{(v)}$ in accordance with Equation (16);
  •      Update $W^{(v)}$ in accordance with Equation (17);
  •      Update $B^{(v)}$ in accordance with Equation (19);
  •   end
  •   Update $\mathcal{G}$ in accordance with Equation (20);
  •   Update $\mathcal{P}$ in accordance with Equation (21);
  •   Update $\mu$ by $\mu = \min(\rho\mu,\ \mu_{\max})$;
  •   Check the convergence condition: $\|H^{(v)} - G^{(v)}\| < \varepsilon$;
  • end
  • Let $S = \frac{1}{V}\sum_{v=1}^{V}\left(H^{(v)} + H^{(v)T}\right)$;
  • Perform spectral clustering on S.
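To show how the pieces fit together, the following skeleton mirrors Algorithm 1 using the hypothetical helpers sketched earlier (update_H, update_W, update_B, tensor_svt, stack_and_rotate, unrotate); the growth factor, the SVT threshold, and the stopping rule are assumptions rather than the exact settings of the paper.

```python
import numpy as np

def tmsc_tnnbdr(X_list, k, lam, alpha, gamma,
                rho=1e-4, mu=1e-5, growth=1.1, eps=1e-7, max_iter=200):
    """Skeleton of Algorithm 1 (a sketch, not the authors' reference code)."""
    V, N = len(X_list), X_list[0].shape[1]
    H = [np.zeros((N, N)) for _ in range(V)]
    B = [np.zeros((N, N)) for _ in range(V)]
    G = [np.zeros((N, N)) for _ in range(V)]
    P = [np.zeros((N, N)) for _ in range(V)]
    for _ in range(max_iter):
        for v in range(V):
            H[v] = update_H(X_list[v], B[v], G[v], P[v], lam, alpha, rho)
            W = update_W(B[v], k)
            B[v] = update_B(H[v], W, alpha, gamma)
        # G-step: tensor SVT on the rotated tensor built from H + P/rho
        M = stack_and_rotate([H[v] + P[v] / rho for v in range(V)])
        G = unrotate(tensor_svt(M, 1.0 / rho))
        for v in range(V):
            P[v] = P[v] + mu * (H[v] - G[v])
        mu = min(growth * mu, 1e10)
        if max(np.abs(H[v] - G[v]).max() for v in range(V)) < eps:
            break
    S = sum(H[v] + H[v].T for v in range(V)) / V
    return S   # feed S to spectral clustering to obtain the final labels
```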

4.3. Computational Complexity and Convergence

Computing $H^{(v)}$ involves matrix multiplication and matrix inversion, whose complexities are $O(dN^2)$ and $O(N^3)$, respectively. Computing $W^{(v)}$ costs $O(N^3 + kN^2)$, since the main computational burdens are the eigenvalue decomposition and a matrix product. The computation of $B^{(v)}$ is $O(N^3)$, since it mainly depends on matrix multiplication. Computing $\mathcal{G}$ has a complexity of $O(N^2 V \log(N))$. Therefore, the total complexity of TMSC-TNNBDR is $O(V(dN^2 + N^3) + N^2 V \log(N))$.
The optimization problem of TMSC-TNNBDR is non-convex, which means a global optimal solution cannot be guaranteed. Nevertheless, TMSC-TNNBDR can converge to a local optimum. In fact, each variable in Algorithm 1 has a closed-form solution, so the value of the loss function decreases monotonically and remains bounded below. Clustering experiments performed on several classic datasets show that TMSC-TNNBDR converges stably.

5. Experimental Results

To evaluate the performance of the TMSC-TNNBDR model, we conduct experiments on the five image datasets described below. We compare TMSC-TNNBDR with several representative single-view methods, including SPCbest, LRRbest, and BDRbest; multi-view methods, including Co-Reg SPC, RMSC, LTMSC, and DiMSC; and a TNN-based method, i.e., t-SVD-MSC.
  • SPCbest: a single-view clustering model that applies spectral clustering [42] to each view and reports the best result among all views.
  • LRRbest: a single-view clustering model that applies LRR [6] to each view and reports the best result among all views.
  • BDRbest: a single-view clustering model that applies BDR [7] to each view and reports the best result among all views.
  • Co-Reg SPC [17]: a co-regularized multi-view spectral clustering method.
  • RMSC [13]: a multi-view spectral clustering algorithm based on low-rank and sparse decomposition.
  • LTMSC [12]: a multi-view spectral clustering method incorporating low-rank tensor constraints.
  • DiMSC [15]: a multi-view model utilizing the HSIC to enhance diversity.
  • t-SVD-MSC [14]: an MSC model using the t-SVD.

5.1. Description of the Dataset

UCI-Digits: (https://archive.ics.uci.edu/dataset/80/optical+recognition+of+handwritten+digits (accessed on 17 August 2025)). There are a total of 2000 digit images corresponding to 10 classes in this dataset. Similarly to the work in [13], three feature types (morphological features, Fourier coefficients, and pixel averages) are extracted to assemble the multi-view data. Ten UCI-Digits samples are shown in Figure 1.
ORL: (https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html (accessed on 17 August 2025)). There are a total of 400 face images in ORL, corresponding to 40 people. We first resize each image to a resolution of 64 × 64. Then, similarly to the work of [12,14], we extract LBP [43], intensity, and Gabor [44] features to form the multi-view data. Some ORL samples are shown in Figure 2.
Yale: (https://gitcode.com/open-source-toolkit/885dd (accessed on 17 August 2025)). There are 165 face images in Yale, corresponding to 15 individuals with 11 images each. We resize each image to a resolution of 64 × 64. Similarly to the work of [12,14], we extract LBP [43], intensity, and Gabor [44] features to form the multi-view data. Some Yale samples are shown in Figure 3.
Extended YaleB: (https://gitcode.com/open-source-toolkit/3d6b2 (accessed on 17 August 2025)). This dataset consists of pictures of 38 people; each individual has about 64 images under different illuminations. Similarly to the work in [12,14], we extract LBP [43], intensity, and Gabor [44] features to form the multi-view data. We resize each image to a resolution of 32 × 32. Ten examples of Extended YaleB are shown in Figure 4.
COIL-20: (https://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php (accessed on 17 August 2025)). There are a total of 1440 pictures in COIL-20, which are associated with 20 object classes. We resize each picture to a resolution of 32 × 32. Similarly to the work in [12,14], we extract LBP [43], intensity, and Gabor [44] features to form the multi-view data. Some COIL-20 samples are shown in Figure 5.

5.2. Evaluation Metrics

We employ ACC and NMI to compare the performance of various algorithms.
ACC, i.e., accuracy, is defined as follows:
$$\mathrm{ACC} = \frac{\sum_{i=1}^{n}\delta\big(gnd_i,\ \mathrm{map}(r_i)\big)}{n}$$
where $gnd_i$ is the true category label of the $i$th sample, $\mathrm{map}(r_i)$ is the clustering label of the $i$th sample after the best permutation mapping, and $\delta(x, y)$ is the Kronecker delta, defined as follows:
$$\delta(x, y) = \begin{cases} 1, & \mathrm{if}\ x = y \\ 0, & \mathrm{otherwise} \end{cases}$$
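The best mapping map(·) in the ACC definition is usually obtained with the Hungarian algorithm; a minimal sketch (the function name is our own) follows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_acc(gnd, labels):
    """Clustering accuracy: find the label permutation that best matches the
    ground truth (Hungarian algorithm), then apply the ACC definition above."""
    gnd = np.asarray(gnd)
    labels = np.asarray(labels)
    classes = np.unique(gnd)
    clusters = np.unique(labels)
    cost = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, g in enumerate(classes):
            cost[i, j] = -np.sum((labels == c) & (gnd == g))   # negative overlap
    row, col = linear_sum_assignment(cost)                      # maximizes total overlap
    matched = sum(-cost[r, c] for r, c in zip(row, col))
    return matched / len(gnd)
```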
MI, i.e., mutual information, is defined as follows:
$$\mathrm{MI}(D, D') = \sum_{d_i \in D,\ d'_j \in D'} p(d_i, d'_j) \log \frac{p(d_i, d'_j)}{p(d_i)\, p(d'_j)}$$
where $D$ and $D'$ are two clusterings, $p(d_i)$ and $p(d'_j)$ are the probabilities that a sample belongs to clusters $d_i$ and $d'_j$, respectively, and $p(d_i, d'_j)$ is the corresponding joint probability.
NMI, i.e., normalized mutual information, is defined as follows:
$$\mathrm{NMI}(D, D') = \frac{\mathrm{MI}(D, D')}{\max\big(H(D),\ H(D')\big)}$$
where $H(\cdot)$ is the entropy of the corresponding clustering.
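A corresponding sketch of the NMI computation defined above (a direct implementation; an off-the-shelf metric routine could be used instead).

```python
import numpy as np

def nmi(gnd, labels):
    """Normalized mutual information: MI(D, D') / max(H(D), H(D'))."""
    gnd = np.asarray(gnd)
    labels = np.asarray(labels)
    n = len(gnd)
    mi = 0.0
    for g in np.unique(gnd):
        for c in np.unique(labels):
            p_joint = np.sum((gnd == g) & (labels == c)) / n
            if p_joint > 0:
                p_g = np.sum(gnd == g) / n
                p_c = np.sum(labels == c) / n
                mi += p_joint * np.log(p_joint / (p_g * p_c))

    def entropy(x):
        _, counts = np.unique(x, return_counts=True)
        p = counts / n
        return -np.sum(p * np.log(p))

    denom = max(entropy(gnd), entropy(labels))
    return mi / denom if denom > 0 else 0.0
```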

5.3. Experiment Results

Notably, all evaluated clustering methods (TMSC-TNNBDR, LRRbest, SPCbest, BDRbest, t-SVD-MSC, DiMSC, Co-Reg SPC, RMSC, and LTMSC) incorporate a K-means step in the final stage of spectral clustering. Randomness in K-means stems from both the random choice of initial cluster centers and the non-convex objective function. Twenty repetitions typically represent a sound compromise between mitigating the effects of randomness and maintaining computational feasibility. Therefore, we report results over 20 experimental repetitions (mean ± standard deviation) to ensure statistical robustness. Table 2, Table 3, Table 4, Table 5 and Table 6 report the clustering results on five public databases: UCI-Digits, ORL, Yale, Extended YaleB, and COIL-20. Bold values in the tables indicate the best results. The results clearly indicate that TMSC-TNNBDR exceeds the performance of the other comparison algorithms. Our observations and analysis are as follows:
  • In comparative analyses of single-view clustering methodologies, LRR and BDR consistently outperform conventional spectral clustering (SPC). This performance advantage likely stems from their enhancement of the SPC framework through the incorporation of prior structural knowledge—specifically, low-rank constraints and block-diagonal regularization, respectively.
  • Multi-view methods, including TMSC-TNNBDR, demonstrate superior performance compared with single-view approaches. Our experimental results show that even selecting the best outcome among all individual single-view clustering procedures still underperforms multi-view clustering in the vast majority of cases, which confirms the superiority of multi-view clustering. It is generally believed that the success of existing multi-view clustering methods lies in learning latent cross-view correlations, discovering underlying patterns, and integrating this prior knowledge into the clustering model. For instance, Co-Reg SPC embeds co-regularization of the clustering consensus into spectral clustering. Similarly, RMSC incorporates both low-rank and sparse constraints into its framework. DiMSC leverages the Hilbert–Schmidt Independence Criterion (HSIC) to extract complementary information across views, while LTMSC and t-SVD-MSC employ low-rank tensor constraints to explore such complementary information.
  • In most cases, multi-view clustering based on tensor representation outperforms co-regularization-based Co-Reg SPC in clustering performance. This superiority is generally attributed to tensor representation’s ability to integrate multiple views into a unified structure, easily capturing complementary information and high-order interactions across views. Experimental results validate this explanation.
  • TMSC-TNNBDR achieves superior performance compared to benchmark methods while maintaining favorable time efficiency. This advantage is likely attributed to the complementary interplay between the low-rank property and block-diagonal structure of the similarity matrix—where Tensor Nuclear Norm (TNN) and Block Diagonal Regularization (BDR), as two priori constraints, synergistically enhance the multi-view subspace clustering framework from distinct perspectives.
Table 2. Clustering results (mean ± standard deviation) on UCI-Digits.

Algorithm        ACC              NMI
LRRbest          0.968 ± 0.001    0.769 ± 0.002
SPCbest          0.740 ± 0.021    0.639 ± 0.013
BDRbest          0.814 ± 0.006    0.765 ± 0.005
t-SVD-MSC        0.953 ± 0.001    0.929 ± 0.002
DiMSC            0.719 ± 0.013    0.776 ± 0.008
Co-Reg SPC       0.786 ± 0.007    0.801 ± 0.004
RMSC             0.776 ± 0.008    0.791 ± 0.003
LTMSC            0.912 ± 0.003    0.923 ± 0.002
TMSC-TNNBDR      0.994 ± 0.001    0.984 ± 0.001
Table 3. Clustering results (mean ± standard deviation) on ORL.

Algorithm        ACC              NMI
LRRbest          0.773 ± 0.003    0.895 ± 0.006
SPCbest          0.726 ± 0.025    0.884 ± 0.002
BDRbest          0.848 ± 0.003    0.938 ± 0.002
t-SVD-MSC        0.973 ± 0.003    0.992 ± 0.002
DiMSC            0.837 ± 0.001    0.939 ± 0.003
Co-Reg SPC       0.715 ± 0.000    0.853 ± 0.003
RMSC             0.735 ± 0.006    0.873 ± 0.011
LTMSC            0.793 ± 0.008    0.932 ± 0.993
TMSC-TNNBDR      1.000 ± 0.000    1.000 ± 0.000
Table 4. Clustering results (mean ± standard deviation) on Yale.

Algorithm        ACC              NMI
LRRbest          0.703 ± 0.002    0.706 ± 0.012
SPCbest          0.634 ± 0.015    0.646 ± 0.009
BDRbest          0.712 ± 0.004    0.716 ± 0.002
t-SVD-MSC        0.878 ± 0.013    0.913 ± 0.009
DiMSC            0.703 ± 0.004    0.728 ± 0.009
Co-Reg SPC       0.668 ± 0.002    0.715 ± 0.003
RMSC             0.639 ± 0.038    0.685 ± 0.029
LTMSC            0.743 ± 0.003    0.759 ± 0.09
TMSC-TNNBDR      0.992 ± 0.002    0.994 ± 0.001
Table 5. Clustering results (mean ± standard deviation) on Extended YaleB.

Algorithm        ACC              NMI
LRRbest          0.447 ± 0.023    0.408 ± 0.032
SPCbest          0.283 ± 0.035    0.225 ± 0.043
BDRbest          0.464 ± 0.012    0.432 ± 0.007
t-SVD-MSC        0.568 ± 0.003    0.605 ± 0.002
DiMSC            0.470 ± 0.007    0.397 ± 0.006
Co-Reg SPC       0.240 ± 0.001    0.148 ± 0.001
RMSC             0.223 ± 0.011    0.161 ± 0.021
LTMSC            0.626 ± 0.009    0.621 ± 0.005
TMSC-TNNBDR      0.641 ± 0.002    0.631 ± 0.004
Table 6. Clustering results (mean ± standard deviation) on COIL-20.

Algorithm        ACC              NMI
LRRbest          0.767 ± 0.002    0.870 ± 0.003
SPCbest          0.682 ± 0.024    0.769 ± 0.011
BDRbest          0.805 ± 0.002    0.872 ± 0.003
t-SVD-MSC        0.803 ± 0.004    0.865 ± 0.003
DiMSC            0.774 ± 0.014    0.846 ± 0.002
Co-Reg SPC       0.720 ± 0.007    0.809 ± 0.005
RMSC             0.687 ± 0.043    0.802 ± 0.016
LTMSC            0.802 ± 0.009    0.853 ± 0.005
TMSC-TNNBDR      0.823 ± 0.004    0.892 ± 0.003
To evaluate the actual running time of our method, we conducted experiments on the aforementioned five datasets. For fairness in comparison, single-view clustering approaches (e.g., LRRbest, SPCbest, BDRbest) are intentionally excluded. Table 7 summarizes computational times of selected multi-view methods, with bold values indicating the best performance.
In the experiments, all compared multi-view clustering methods are based on subspace affinity matrix computations using complete graphs and spectral clustering, and therefore exhibit time complexity that is cubic in N. Results across the five datasets indicate that Co-Reg SPC and our proposed TMSC-TNNBDR achieve the best time efficiency. While notable runtime gaps persist between algorithms, these gaps do not expand drastically with increasing dataset size. Considering the noticeably inferior clustering performance of Co-Reg SPC compared with tensor-based multi-view clustering methods, TMSC-TNNBDR delivers an attractive trade-off between accuracy and running time.

5.4. Convergence Analysis

Proving the global convergence of TMSC-TNNBDR is difficult. However, we conducted experiments to show its convergence behavior. Following the convergence condition in Algorithm 1, the match error is defined as follows:
$$\mathrm{ME} = \frac{1}{V}\sum_{v=1}^{V}\left\|H^{(v)} - G^{(v)}\right\|$$
The convergence curves of the TMSC-TNNBDR method on ORL and Yale are presented in Figure 6. As shown in Figure 6, the TMSC-TNNBDR method exhibits rapid convergence, achieving stable results within roughly 15 iterations.

5.5. Parametric Sensitivity

Three tunable hyperparameters characterize our proposed TMSC-TNNBDR: $\lambda$, $\alpha$, and $\gamma$, formally defined in Equation (10). During tuning, candidate values of $\lambda$, $\alpha$, and $\gamma$ are selected from {0.002, 0.006, 0.02, 0.06, 0.2, 0.6, 2, 6, 12}. Where applicable, the search space is further refined to narrower ranges based on the preliminary selections. The resulting parameter configurations are documented in Table 8.
The clustering performance on the ORL and Yale databases is depicted in Figure 7. As shown in Figure 7, it is evident that the performance of the TMSC-TNNBDR is generally good and insensitive to varying values of α and γ , especially when α and γ are relatively small.

6. Conclusions

In this paper, we propose a novel tensorized MSC method with the tensor nuclear norm and block diagonal representation (TMSC-TNNBDR). In this model, a BDR constraint is utilized to enforce the block diagonal structure of the learned similarity matrix. Additionally, low-rank tensor representations are applied to uncover the high-order correlations within multi-view data, ultimately advancing the clustering objective. We then develop an efficient augmented Lagrangian-based procedure for optimizing the TMSC-TNNBDR model. The method harnesses dual priors, namely the block-diagonal structure and low-rank tensor representations, to steer subspace clustering, enabling more thorough exploitation of latent information in multi-view data while significantly alleviating noise interference.
Comparative evaluations demonstrate the superior clustering performance of TMSC-TNNBDR over the baseline approaches. While the efficacy of TMSC-TNNBDR is empirically validated, significant challenges persist in multi-view subspace clustering. Our methodology leverages two distinct priors (a block-diagonal structure and low-rank tensor constraints), which raises new research questions: What additional priors could be incorporated? Is there demonstrable value in integrating more than two priors? Furthermore, algorithmic complexity remains a critical limitation; akin to other classical graph-based multi-view clustering techniques, TMSC-TNNBDR has a time complexity that is cubic in N and is therefore inevitably challenged as real-world datasets grow. For larger-scale datasets, using anchor graphs instead of full-sample graphs may be an effective alternative. In such scenarios, leveraging prior knowledge to design regularizers remains highly valuable.

Author Contributions

Conceptualization, G.-F.L. and G.-Y.T.; methodology, G.-Y.T., G.-F.L. and Y.W.; software, G.-Y.T. and G.-F.L.; validation, G.-Y.T., G.-F.L., Y.W. and L.-L.F.; formal analysis, G.-Y.T. and G.-F.L.; investigation, G.-Y.T. and L.-L.F.; resources, G.-F.L., Y.W. and G.-Y.T.; data curation, G.-Y.T. and L.-L.F.; writing—original draft preparation, G.-Y.T. and G.-F.L.; writing—review and editing, G.-Y.T., G.-F.L., Y.W. and L.-L.F.; visualization, G.-Y.T. and L.-L.F.; supervision, G.-F.L. and Y.W.; project administration, G.-Y.T.; funding acquisition, G.-F.L., G.-Y.T., Y.W. and L.-L.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NSFC (No. 61976005), the University Natural Science Research Project of Anhui Province (No. 2022AH050970, KJ2020A0363), the Open Project of Anhui Provincial Medical Big Data Intelligent System Engineering Research Center (No. MBD2024P04), and the 2024 Anhui Provincial Quality Engineering Project for Higher Education Institutions (No. 2024aijy199).

Data Availability Statement

The original data presented in the study are openly available in: UCI Machine Learning Repository—Optical Recognition of Handwritten Digits at https://archive.ics.uci.edu/dataset/80/optical+recognition+of+handwritten+digits (accessed on 17 August 2025). Cambridge University AT&T Lab—ORL Face Database at https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html (accessed on 17 August 2025). GitCode Open Source—Yale Face Database at https://gitcode.com/open-source-toolkit/885dd (accessed on 17 August 2025). GitCode Open Source—Extended YaleB Face Database at https://gitcode.com/open-source-toolkit/3d6b2 (accessed on 17 August 2025). Columbia University CAVE Lab—COIL-20 Dataset at https://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php (accessed on 17 August 2025).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALM: Augmented Lagrange multiplier
BDR: Block diagonal representation
Co-Reg SPC: Co-regularized multi-view spectral clustering
DiMSC: Diversity-induced multi-view subspace clustering
HSIC: Hilbert–Schmidt Independence Criterion
LRR: Low-rank representation
LTMSC: Low-rank tensor constrained multi-view subspace clustering
MSC: Multi-view subspace clustering
RMSC: Robust multi-view spectral clustering via low-rank and sparse decomposition
SC: Subspace clustering
SPC: Spectral clustering
SSC: Sparse subspace clustering
TMSC-TNNBDR: Tensorized multi-view subspace clustering via tensor nuclear norm and block diagonal representation
TNN: Tensor nuclear norm
t-SVD: Tensor singular value decomposition
t-SVD-MSC: t-SVD-based multi-view subspace clustering

References

  1. Fu, L.; Lin, P.; Vasilakos, A.V.; Wang, S. An overview of recent multi-view clustering. Neurocomputing 2020, 302, 148–161. [Google Scholar] [CrossRef]
  2. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2008. [Google Scholar]
  3. Berkhin, P. Survey of clustering data mining techniques. In Grouping Multidimensional Data: Recent Advances in Clustering; Kogan, J., Nicholas, C., Teboulle, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 25–71. [Google Scholar]
  4. Wang, J.; Wang, X.; Tian, F.; Liu, C.H.; Yu, H. Constrained low-rank representation for robust subspace clustering. IEEE Trans. Cybern. 2017, 47, 4534–4546. [Google Scholar] [CrossRef] [PubMed]
  5. Elhamifar, E.; Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. Pattern Anal. 2013, 35, 2765–2781. [Google Scholar] [CrossRef] [PubMed]
  6. Liu, G.; Lin, Z.; Yan, S.; Sun, J.; Yu, Y.; Ma, Y. Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. 2013, 35, 171–184. [Google Scholar] [CrossRef]
  7. Lu, C.; Feng, J.; Lin, Z.; Mei, T.; Yan, S. Subspace clustering by block diagonal representation. IEEE Trans. Pattern Anal. 2019, 41, 487–501. [Google Scholar] [CrossRef]
  8. Feng, J.; Lin, Z.; Xu, H.; Yan, S. Robust subspace segmentation with block-diagonal prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  9. Xu, Y.Y.; Chen, S.; Li, J.; Li, C.; Yang, J. Fast subspace clustering by learning projective block diagonal representation. Pattern Recogn. 2023, 135, 109152. [Google Scholar] [CrossRef]
  10. Xing, Z.; Wu, G. Block-diagonal guided DBSCAN clustering. IEEE Trans. Knowl. Data Eng. 2024, 36, 5709–5722. [Google Scholar] [CrossRef]
  11. Guo, Y.; Chen, S. A restarted large-scale spectral clustering with self-guiding and block diagonal representation. Pattern Recogn. 2023, 156, 110746. [Google Scholar] [CrossRef]
  12. Zhang, C.; Fu, H.; Liu, S.; Liu, G.; Cao, X. Low-rank tensor constrained multiview subspace clustering. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
  13. Xia, R.; Pan, Y.; Du, L.; Yin, J. Robust multi-view spectral clustering via low-rank and sparse decomposition. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI), Québec City, QC, Canada, 27–31 July 2014. [Google Scholar]
  14. Xie, Y.; Tao, D.; Zhang, W.; Liu, Y.; Zhang, L.; Qu, Y. On unifying multi-view self-representations for clustering by tensor multi-tank minimization. Int. J. Comput. Vision. 2018, 126, 1157–1179. [Google Scholar] [CrossRef]
  15. Cao, X.; Zhang, C.; Fu, H.; Liu, S.; Zhang, H. Diversity induced multi-view subspace clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  16. Bickel, S.; Scheffer, T. Multi-view clustering. In Proceedings of the Fourth IEEE International Conference on Data Mining (ICDM), Brighton, UK, 1–4 November 2004. [Google Scholar]
  17. Kumar, A.; Rai, P.; Daumé, H. Co-regularized multi-view spectral clustering. In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS), Granada, Spain, 12–15 December 2011. [Google Scholar]
  18. Wu, J.; Lin, Z.; Zha, H. Essential tensor learning for multi-view spectral clustering. IEEE Trans. Image Process. 2019, 28, 5910–5922. [Google Scholar] [CrossRef]
  19. Zhang, C.; Fu, H.; Wang, J.; Li, W.; Cao, X.; Hu, Q. Tensorized multi-view subspace representation learning. Int. J. Comput. Vision. 2020, 128, 2344–2361. [Google Scholar] [CrossRef]
  20. Ghani, R. Combining labeled and unlabeled data for multiclass text categorization. In Proceedings of the Nineteenth International Conference on Machine Learning, Sydney, Australia, 8–12 July 2002. [Google Scholar]
  21. Wang, W.; Zhou, Z.H. A new analysis of co-training. In Proceedings of the 27th International Conference on International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010. [Google Scholar]
  22. Sa, V.R.d. Spectral clustering with two views. In Proceedings of the ICML Workshop on Learning with Multiple Views, Bonn, Germany, 7–11 August 2005. [Google Scholar]
  23. Blaschko, M.B.; Lampert, C.H. Correlational spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
  24. Ding, Z.; Fu, Y. Low-rank common subspace for multi-view learning. In Proceedings of the IEEE International Conference on Data Mining, Shenzhen, China, 14–17 December 2014. [Google Scholar]
  25. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  26. Zhou, P.; Feng, J. Outlier-robust tensor PCA. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  27. Zhou, P.; Lu, C.; Lin, Z.; Zhang, C. Tensor factorization for low-rank tensor completion. IEEE Trans. Image Process. 2018, 27, 1152–1163. [Google Scholar] [CrossRef] [PubMed]
  28. Lu, G.F.; Yu, Q.R.; Wang, Y.; Tang, G.Y. Hyper-Laplacian regularized multi-view subspace clustering with low-rank tensor constraint. Neural Netw. 2020, 125, 214–223. [Google Scholar] [CrossRef] [PubMed]
  29. Zhang, Z.; Ely, G.; Aeron, S.; Hao, N.; Kilmer, M. Novel methods for multilinear data completion and de-noising based on Tensor-SVD. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  30. Kilmer, M.E.; Martin, C.D. Factorization strategies for third-order tensors. Linear Algebra Appl. 2011, 435, 641–658. [Google Scholar] [CrossRef]
  31. Kilmer, M.E.; Braman, K.S.; Hao, N.; Hoover, R.C. Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging. SIAM J. Matrix Anal. A. 2013, 34, 148–172. [Google Scholar] [CrossRef]
  32. Du, Y.F.; Lu, G.F.; Ji, G.Y. Robust least squares regression for subspace clustering: A multi-view clustering perspective. IEEE Trans. Image Process. 2024, 33, 216–227. [Google Scholar] [CrossRef]
  33. Cai, B.; Lu, G.F.; Li, H.; Song, W.H. Tensorized scaled simplex representation for multi-view clustering. IEEE Trans. Multimed. 2024, 26, 6621–6631. [Google Scholar] [CrossRef]
  34. Lu, C.; Feng, J.; Chen, Y.; Liu, W.; Lin, Z.; Yan, S. Tensor robust principal component analysis with a new tensor nuclear norm. IEEE Trans. Pattern Anal. 2020, 42, 925–938. [Google Scholar] [CrossRef]
  35. Pan, B.; Li, C.; Che, H. Nonconvex low-rank tensor approximation with graph and consistent regularizations for multi-view subspace learning. Neural Networks. 2023, 161, 638–658. [Google Scholar] [CrossRef]
  36. Pan, B.; Li, C.; Che, H.; Leung, M.F.; Yu, K. Low-rank tensor regularized graph fuzzy learning for multi-view data processing. IEEE Trans. Consum. Electr. 2023, 70, 2925–2938. [Google Scholar] [CrossRef]
  37. Peng, C.; Kang, K.; Chen, Y.; Kang, Z.; Chen, C.; Cheng, Q. Fine-grained essential tensor learning for robust multi-view spectral clustering. IEEE Trans. Image Process. 2024, 33, 3145–3159. [Google Scholar] [CrossRef]
  38. Wang, S.; Chen, Y.; Lin, Z.; Cen, Y.; Cao, Q. Robustness meets low-rankness: Unified entropy and tensor learning for multi-view subspace clustering. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 6302–6316. [Google Scholar] [CrossRef]
  39. Du, Y.F.; Lu, G.F. Joint local smoothness and low-rank tensor representation for robust multi-view clustering. Pattern Recogn. 2025, 157, 110944. [Google Scholar] [CrossRef]
  40. Luo, C.; Zhang, J.; Zhang, X. Tensor multi-view clustering method for natural image segmentation. Expert Syst. Appl. 2025, 260, 125431. [Google Scholar] [CrossRef]
  41. Dattorro, J. Convex Optimization & Euclidean Distance Geometry, 2nd ed.; Meboo Publishing: Palo Alto, CA, USA, 2018. [Google Scholar]
  42. Ng, A.Y.; Jordan, M.I.; Weiss, Y. On spectral clustering: Analysis and an algorithm. In Proceedings of the 15th International Conference on Neural Information Processing Systems: Natural and Synthetic (NIPS), Vancouver, BC, Canada, 3–8 December 2001. [Google Scholar]
  43. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  44. Lades, M.; Vorbruggen, J.C.; Buhmann, J.; Lange, J.; Malsburg, C.v.d.; Wurtz, R.P.; Konen, W. Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Comput. 1993, 42, 300–311. [Google Scholar] [CrossRef]
Figure 1. Example images in UCI-Digits.
Figure 2. Example images in ORL.
Figure 3. Example images in Yale.
Figure 4. Example images in Extended YaleB.
Figure 5. Example images in COIL-20.
Figure 6. Convergence curves on different datasets: (a) UCI-Digits; (b) ORL; (c) Yale; (d) COIL-20.
Figure 7. Clustering performance of TMSC-TNNBDR versus α and γ on different datasets: (a) ACC on ORL; (b) NMI on ORL; (c) ACC on Yale; (d) NMI on Yale.
Table 1. Notations summarized.

Notation                       Description
$u$                            A scalar
$U$                            A constant
$\mathbf{u}$                   A column vector
$\mathbf{u}_i$                 The $i$th column of a matrix
$\mathbf{U}$                   A matrix
$\mathcal{U}$                  A tensor
$\mathbf{U}^T$                 The matrix transpose
$\mathcal{U}^T$                The tensor transpose
$U^{(v)}$                      The $v$th frontal slice (matrix) of the tensor $\mathcal{U}$
$\|U\|_*$                      The nuclear norm
$\|\mathcal{U}\|_{TNN}$        The t-SVD-based TNN
$\|U\|_{2,1}$                  The $L_{2,1}$-norm
$\mathcal{I}$                  The identity tensor
$\|U\|_F$                      The Frobenius norm
$\mathcal{U} * \mathcal{V}$    The t-product
$\|U\|_k$                      The $k$-block diagonal regularizer
$\langle U, V \rangle$         The Frobenius inner product
$\mathrm{diag}(U)$             The diagonal elements of $U$ lined up as a column vector
$\mathrm{Diag}(w)$             The column vector $w$ expanded into a diagonal matrix
Table 7. Computational time (unit: seconds) of the comparative multi-view methods.

Dataset           t-SVD-MSC   DiMSC     Co-Reg SPC   TMSC-TNNBDR
UCI-Digits        96.57       171.39    30.38        107.82
ORL               19.17       5.07      3.26         2.74
Yale              5.93        1.09      0.77         0.47
Extended YaleB    236.58      343.68    63.11        222.59
COIL-20           77.18       71.54     14.36        44.05
Table 8. Configurations of the tunable parameters.

Database          λ     α      γ
UCI-Digits        0.2   0.03   0.003
ORL               0.2   0.06   0.006
Yale              0.2   0.02   0.002
Extended YaleB    0.2   0.02   0.002
COIL-20           0.2   0.02   0.002
