Article

Orthogonal-Constrained Graph Non-Negative Matrix Factorization for Clustering

School of Mathematical Sciences, Tiangong University, Tianjin 300387, China
* Authors to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1154; https://doi.org/10.3390/sym17071154
Submission received: 21 June 2025 / Revised: 14 July 2025 / Accepted: 15 July 2025 / Published: 19 July 2025
(This article belongs to the Section Mathematics)

Abstract

We propose a novel approach to the clustering problem, which we refer to as Graph Regularized Nonnegative Matrix Factorization on Orthogonal Subspace (GNMFOS). The model introduces both graph regularization and orthogonality as penalty terms in the objective function. It not only promotes the uniqueness of the matrix decomposition but also improves the sparsity of the decomposition and reduces computational complexity. Most importantly, using an iteration scheme under weak orthogonality, we construct an auxiliary function for the algorithm and obtain a convergence proof, compensating for the lack of such proofs in similar models. The experimental results show that, compared with classical models such as GNMF and NMFOS, our algorithm significantly improves clustering performance and the quality of reconstructed images.

1. Introduction

Clustering is a fundamental technique in data analysis, extensively applied in image processing, document classification, and other fields. As an unsupervised learning method, it aims to partition a dataset into distinct groups such that data points within the same group are more similar to each other than to those in different groups. Common clustering algorithms determine the relationship between data points by measuring their similarity, and assign data points with similar features to the same cluster. Due to the non-negativity requirement in many real-world applications, Non-negative Matrix Factorization (NMF) has attracted considerable attention as a dimensionality reduction and clustering technique. NMF effectively reduces storage cost and computation complexity by producing interpretable parts-based representations. NMF has been successfully applied and extended to various domains including clustering and classification [1,2,3], image analysis [4,5,6], spectral analysis [7,8], and blind source separation [9,10]. More recently, NMF-related methods have also found applications in micro-video classification [11], RNA-seq data deconvolution [12], attributed network anomaly detection [13], and signal processing tasks such as dynamic spectrum cartography and tensor completion [14], which further demonstrate the method’s adaptability and growing relevance.
Various extensions of NMF have been proposed to capture the latent structures within data. For example, Cai et al. [15] proposed Graph Regularized NMF (GNMF) by incorporating a graph Laplacian term into the objective function, enabling the preservation of intrinsic geometric structure. Hu et al. [16] further introduced GCNMF by integrating GNMF with Convex NMF. Meng et al. [17] extended the regularization to both the sample and feature spaces, resulting in the Dual-graph Sparse NMF (DSNMF) method. Other notable variants include GDNMF [18], DNMF [19], RHDNMF [20], AWMDNMF [21], TGLMC [22], and PGCNMF [23], each offering improvements for non-linear or structured data modeling.
In the context of image clustering, Li et al. [24] observed that local representations are often associated with orthogonality. Ding et al. [25] introduced Orthogonal NMF (ONMF), demonstrating that orthogonality enhances sparsity, reduces redundancy, and improves the interpretability and uniqueness of clustering results. However, enforcing strict orthogonality often leads to high computational cost, especially for large document datasets.
To overcome such problems, researchers have proposed a series of methods with orthogonal constraints [26,27,28]. Unfortunately, to the best of our knowledge, the existing literature lacks NMF methods that simultaneously consider the orthogonality of coefficient matrices in orthogonal subspaces and the intrinsic geometric structures hidden in the data and feature manifolds. Moreover, even where a similar model has been proposed, the convergence proof of its update algorithm is missing or incorrect.
In this paper, we propose a novel approach that combines the graph Laplacian and nonnegative matrix factorization on orthogonal subspaces. The orthogonality of the factor matrices is incorporated as a penalty term into the objective function, together with a graph regularization term. On one hand, by introducing orthogonal penalties and graph Laplacian terms, orthogonality of the factor matrices is achieved through the factorization process itself, reducing the computational burden of enforcing hard orthogonality constraints. On the other hand, it enhances sparsity in the coefficient matrix to some extent and exploits the geometric structure of the data, thereby improving the ability to recover underlying nonlinear relationships. We uniformly name this series of models Graph Regularized Nonnegative Matrix Factorization on Orthogonal Subspace (GNMFOS); special cases are annotated with suffixes, such as GNMFOSv2. We develop two efficient algorithms to solve the proposed models and prove their convergence. Moreover, we construct the objective function of GNMFOS using the Euclidean distance and optimize it using multiplicative update rules (MUR). Finally, we conduct experiments to evaluate the proposed algorithms. The experimental results show that GNMFOS and GNMFOSv2 combine the advantages of previous algorithms, resulting in improved effectiveness and robustness.
In this study, the orthogonal constraints in matrix factorization inherently exhibit mathematical symmetry, ensuring balanced and reciprocal relationships between data representations. This symmetry not only simplifies the computational model but also enhances the interpretability of clustering results.
The remainder of this paper is organized as follows. Section 2 reviews related work, including NMF, NMFOS, and GNMF. Section 3 provides a detailed description of the proposed method, including the derivation of the update rules, the corresponding algorithms, and their convergence analysis. Section 4 presents the experimental results and the relevant analysis. Section 5 summarizes the paper and discusses future research directions.

2. Preliminaries

In this section, we first introduce the notation summarized in Table 1 and then review several models and related works that serve as the foundation for this paper.

2.1. NMF

The classical NMF [29] is a matrix factorization algorithm that analyzes data matrices consisting of nonnegative elements. Given a nonnegative data matrix X, the aim of NMF is to find two nonnegative matrices W and H such that their product closely approximates the original matrix, i.e., $X \approx WH$.
The optimization model for NMF can be formulated as follows:
\min_{W,H} \|X - WH\|_F^2, \quad \text{s.t. } W \ge 0, \ H \ge 0.
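As a concrete illustration, the following minimal NumPy sketch implements the standard multiplicative update rules for this model (Lee and Seung); the iteration count and the small constant added to the denominators are illustrative choices, not values prescribed in this paper.

```python
import numpy as np

def nmf(X, K, n_iter=200, eps=1e-10, seed=0):
    """Factorize a nonnegative X (M x N) into W (M x K) and H (K x N)."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    W = rng.random((M, K))
    H = rng.random((K, N))
    for _ in range(n_iter):
        # H <- H * (W^T X) / (W^T W H)
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        # W <- W * (X H^T) / (W H H^T)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```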

2.2. NMFOS

Although NMF has many advantages, it is not easy to obtain the desired solution. Chris Ding et al. [25] extended the traditional NMF by adding an orthogonal constraint on the basis or coefficient matrix into the objective function’s constraints. Li et al. [30] optimized the above model; they proposed a novel approach called Nonnegative Matrix Factorization with Orthogonal Subspace (NMFOS), which is based on the idea of putting the orthogonality as a penalty term into the objective function, and obtained the following new model.
\min_{W,H} \|X - WH\|_F^2 + \alpha \|HH^T - I\|_F^2, \quad \text{s.t. } W \ge 0, \ H \ge 0.
Here, orthogonality is imposed on H through the penalty term $\|HH^T - I\|_F^2$ (for a basis-side factor $G$, the analogous term would involve $G^TG$), with the parameter $\alpha \ge 0$ controlling the degree of orthogonality of the vectors in H.

2.3. GNMF

Graph regularization is a data processing method based on manifolds. The core idea is that, by minimizing the term in (1) below, the manifold assumption enhances the smoothness of the data representation in both linear and nonlinear spaces. As demonstrated by existing studies, incorporating the geometric structure information of the data points can enhance the clustering performance of NMF [31,32]. Therefore, a p-nearest neighbor graph is constructed from the input data matrix X to capture the geometric structure. Meanwhile, in order to characterize the similarity between adjacent points, a weight matrix $S = \{S_{jl}\}$ is defined. In this paper, the 0-1 weight matrix below is commonly used:
S_{jl} = \begin{cases} 1, & x_j \in N(x_l) \ \text{or} \ x_l \in N(x_j), \\ 0, & \text{otherwise}, \end{cases} \qquad j, l \in \{1, \dots, N\}.
Here, $N(x_j)$ denotes the set of p-nearest neighbors of the data point $x_j$. There are other options for defining $S = \{S_{jl}\}$, including the heat kernel weight matrix $S_{jl} = e^{-\|x_j - x_l\|^2/\sigma}$ and the dot-product weight matrix $S_{jl} = x_j^T x_l$. With the help of the weight matrix S, the Laplacian matrix $L = D - S$ is defined to characterize the degree of similarity between points in the geometric structure, where D is the degree matrix with $D_{ii} = \sum_j S_{ij}$. Inspired by manifold learning, Cai et al. [15] proposed the GNMF algorithm. The core idea is the graph regularization
\frac{1}{2}\sum_{l=1}^{n}\sum_{j=1}^{n} \|h_l - h_j\|_2^2 \, S_{lj} = \sum_{j=1}^{n} D_{jj} h_j^T h_j - \sum_{l=1}^{n}\sum_{j=1}^{n} S_{lj} h_l^T h_j = \mathrm{Tr}(HDH^T) - \mathrm{Tr}(HSH^T) = \mathrm{Tr}(HLH^T).  (1)
Then, the optimization model of GNMF is given by
\min_{W,H} \|X - WH\|_F^2 + \lambda \, \mathrm{Tr}(HLH^T), \quad \text{s.t. } W \ge 0, \ H \ge 0.
In fact, the minimization of (1) shows that if two data points are close in the original data distribution, their representations $h_l$ and $h_j$ under the new basis will also be close. The regularization parameter $\lambda \ge 0$ controls the smoothness of the model.
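For concreteness, the sketch below builds the p-nearest-neighbor graph with 0-1 weights and the resulting Laplacian $L = D - S$; the symmetrization mirrors the "or" condition in the definition of $S_{jl}$ above, and the helper name is our own.

```python
import numpy as np

def knn_graph_laplacian(X, p=5):
    """X is M x N with samples as columns; returns (S, D, L), each N x N."""
    N = X.shape[1]
    # Pairwise squared Euclidean distances between the columns of X.
    sq = np.sum(X**2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    S = np.zeros((N, N))
    for j in range(N):
        # Indices of the p nearest neighbors of x_j (excluding x_j itself).
        idx = np.argsort(dist[j])[1:p + 1]
        S[j, idx] = 1.0
    S = np.maximum(S, S.T)       # 0-1 weights, symmetrized
    D = np.diag(S.sum(axis=1))   # degree matrix
    L = D - S                    # graph Laplacian
    return S, D, L
```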

3. A New Model

In this section, we will combine NMFOS and GNMF to introduce a new method called Graph Regularized Non-negative Matrix Factorization on Orthogonal Subspaces (GNMFOS). More importantly, we also provide a convergence proof for this new model to compensate for the lack or inadequacy of algorithm convergence proof. Note that in order to enhance the clustering performance, our model GNMFOS integrates the advantages of existing methods.

3.1. GNMFOS

By putting both the orthogonality and the Graph Laplacian into the objective function, GNMFOS is formulated as follows.
\min J(W, H) = \min_{W,H} \frac{1}{2}\|X - WH\|_F^2 + \frac{\alpha}{4}\|HH^T - I\|_F^2 + \frac{\lambda}{2}\mathrm{Tr}(HLH^T), \quad \text{s.t. } W \ge 0, \ H \ge 0,  (2)
where $X \in \mathbb{R}_+^{M \times N}$, $W \in \mathbb{R}_+^{M \times K}$, and $H \in \mathbb{R}_+^{K \times N}$. Considering the Lagrange function of (2), we obtain the following update rules:
W_{ik} \leftarrow W_{ik} \frac{(XH^T)_{ik}}{(WHH^T)_{ik}},  (3)
H_{kj} \leftarrow H_{kj} \frac{(W^TX + \alpha H + \lambda HS)_{kj}}{(W^TWH + \alpha HH^TH + \lambda HD)_{kj}}.  (4)
The update rule (3) and its convergence proof are the same as in [15]. However, (4) is new, and it is hard to prove its convergence directly: the orthogonality penalty on H introduces a fourth-power term in H, which prevents the commonly used approach of constructing an auxiliary function from a second-order Taylor expansion and then deriving a convergence proof for the update rule. To illustrate this difficulty, the first and second derivatives of the objective function $J(W, H)$ with respect to $H_{kj}$ are given below.
F'_{H_{kj}} = \frac{\partial J(W, H)}{\partial H_{kj}} = (W^TWH - W^TX + \alpha HH^TH - \alpha H + \lambda HL)_{kj},
F''_{H_{kj}} = (W^TW + \alpha HH^T - \alpha I)_{kk} + (\alpha H^TH + \lambda L)_{jj} + \alpha H_{kj}^2.
According to tradition, the auxiliary function of H should be defined as follows:
G_H(H_{kj}, H_{kj}^t) = F_{H_{kj}}(H_{kj}^t) + F'_{H_{kj}}(H_{kj}^t)(H_{kj} - H_{kj}^t) + \frac{1}{2}\frac{(W^TWH^t + \alpha H^t {H^t}^T H^t + \lambda H^t D)_{kj}}{H_{kj}^t}(H_{kj} - H_{kj}^t)^2.
In this way, setting $\partial G_H(H_{kj}, H_{kj}^t)/\partial H_{kj} = 0$ leads to the same update rule as in (4). Returning to the Taylor series expansion of $F_{H_{kj}}(H_{kj})$, it is easy to obtain
F_{H_{kj}}(H_{kj}) = F_{H_{kj}}(H_{kj}^t) + F'_{H_{kj}}(H_{kj}^t)(H_{kj} - H_{kj}^t) + \frac{1}{2}\big[(W^TW + \alpha H^t {H^t}^T - \alpha I)_{kk} + (\alpha {H^t}^T H^t + \lambda L)_{jj}\big](H_{kj} - H_{kj}^t)^2.
In this way, a difficulty arises in the convergence proof relating $G_H(H_{kj}, H_{kj}^t)$ and $F_{H_{kj}}(H_{kj})$: when attempting to prove that $G_H(H_{kj}, H_{kj}^t)$ is an auxiliary function, we cannot prove that
\frac{(H^t {H^t}^T H^t)_{kj}}{H_{kj}^t} \ge (H^t {H^t}^T - I)_{kk} + ({H^t}^T H^t)_{jj} + H_{kj}^2.
So, we improve (2) by proposing a new variant model in which H is only weakly orthogonal. The model is called weakly orthogonal because the t-th iterate of H is orthogonalized against the (t − 1)-th iterate, where the t-th iterate is treated as the unknown variable and the (t − 1)-th iterate is treated as a constant ($t = 1, 2, \dots$). This leads to the following new model:
\min J(W, H^t, H^{t-1}) = \min_{W,H} \frac{1}{2}\|X - WH^t\|_F^2 + \frac{\alpha}{2}\|H^t {H^{t-1}}^T - I\|_F^2 + \frac{\lambda}{2}\mathrm{Tr}(H^t L {H^t}^T), \quad \text{s.t. } W \ge 0, \ H \ge 0.  (5)
Here, $\alpha \ge 0$ and $\lambda \ge 0$ are the orthogonal penalty parameter and the regularization parameter, respectively. The parameter $\alpha$ adjusts the degree of orthogonality of H, while $\lambda$ balances the reconstruction error in the first term against the graph regularization in the third term of the objective function. It is worth mentioning that if the update rule for H converges, we may expect $H^{t-1}$ to approximate $H^t$, so that $H^t {H^{t-1}}^T \approx H^t {H^t}^T$.
Due to the non-convexity of the objective function in both variables W and H in (5), we adopt an iterative optimization algorithm and utilize multiplicative update rules to obtain local optima (original idea comes from [33,34,35]). The variables W and H are updated alternately while keeping the other variable fixed.
We can rewrite the objective function of J ( W , H t , H t 1 ) in trace form as follows.
J(W, H^t, H^{t-1}) = \frac{1}{2}\mathrm{Tr}(X^TX - 2X^TWH^t + {H^t}^T W^TWH^t) + \frac{\alpha}{2}\mathrm{Tr}(H^{t-1}{H^t}^T H^t {H^{t-1}}^T - 2H^{t-1}{H^t}^T + I) + \frac{\lambda}{2}\mathrm{Tr}(H^t L {H^t}^T).
Let $\psi_{ik}$ and $\phi_{kj}$ denote the Lagrange multipliers for the constraints $W_{ik} \ge 0$ and $H^t_{kj} \ge 0$ in (5), respectively, and write $\Psi = [\psi_{ik}]$ and $\Phi = [\phi_{kj}]$. The Lagrangian function is then
\mathcal{L} = \frac{1}{2}\mathrm{Tr}(X^TX - 2X^TWH^t + {H^t}^T W^TWH^t) + \frac{\alpha}{2}\mathrm{Tr}(H^{t-1}{H^t}^T H^t {H^{t-1}}^T - 2H^{t-1}{H^t}^T + I) + \frac{\lambda}{2}\mathrm{Tr}(H^t L {H^t}^T) - \mathrm{Tr}(\Psi^T W) - \mathrm{Tr}(\Phi^T H^t).
Setting the partial derivatives of $\mathcal{L}$ with respect to W and $H^t$ to zero yields the following equations:
\Psi = WH^t{H^t}^T - X{H^t}^T,  (6)
\Phi = W^TWH^t - W^TX + \alpha H^t {H^{t-1}}^T H^{t-1} - \alpha H^{t-1} + \lambda H^t D - \lambda H^t S.  (7)
With the help of the KKT conditions $\psi_{ik} W_{ik} = 0$ and $\phi_{kj} H^t_{kj} = 0$, multiplying both sides of (6) by $W_{ik}$ and both sides of (7) by $H^t_{kj}$, we obtain the iterative update rules for (5):
W^{t+1}_{ik} \leftarrow W^t_{ik} \frac{(X{H^t}^T)_{ik}}{(W^tH^t{H^t}^T)_{ik}},  (8)
H^{t+1}_{kj} \leftarrow H^t_{kj} \frac{({W^{t+1}}^TX + \alpha H^{t-1} + \lambda H^tS)_{kj}}{({W^{t+1}}^TW^{t+1}H^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}}.  (9)
In summary, we can now develop the efficient algorithm below to solve (5). For a detailed proof of the convergence of Algorithm 1, please refer to Appendix A.
Algorithm 1 GNMFOS
  • Input: Nonnegative data matrix $X = [x_1, \dots, x_N] \in \mathbb{R}_+^{M \times N}$, the number of clusters K, the orthogonality parameter $\alpha \ge 0$, the regularization parameter $\lambda \ge 0$, the maximum number of iterations T;
  • Initialization: randomly choose initial matrices $W^0 \in \mathbb{R}_+^{M \times K}$ and $H^0 \in \mathbb{R}_+^{K \times N}$
  • while the termination conditions are not satisfied and $t \le T$ do
  •    Fixing H, updating W according to (8)
  •    Fixing W, updating H according to (9)
  • end while
  • return Locally optimal basis matrix W and solution coefficient matrix H
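The following NumPy sketch mirrors Algorithm 1, i.e., update rules (8) and (9), using the graph matrices S and D from the p-nearest-neighbor construction above; the stopping rule based on the relative change of H and all default parameter values are illustrative assumptions rather than part of the algorithm specification.

```python
import numpy as np

def gnmfos(X, S, D, K, alpha=0.01, lam=100.0, T=100, tol=1e-6, seed=0):
    """GNMFOS sketch: X (M x N) nonnegative, S and D (N x N) from the p-NN graph."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    W = rng.random((M, K))
    H = rng.random((K, N))
    H_prev = H.copy()                       # plays the role of H^{t-1}
    for t in range(T):
        # Rule (8): multiplicative update of W
        W *= (X @ H.T) / (W @ H @ H.T)
        # Rule (9): update of H with weak orthogonality against H^{t-1}
        num = W.T @ X + alpha * H_prev + lam * (H @ S)
        den = W.T @ W @ H + alpha * (H @ H_prev.T @ H_prev) + lam * (H @ D)
        H_new = H * (num / den)
        H_prev, H = H, H_new
        # Simple stopping criterion: relative change of H between iterations
        if np.linalg.norm(H - H_prev) <= tol * max(np.linalg.norm(H_prev), 1e-12):
            break
    return W, H
```

Note that a zero entry in W or H freezes the corresponding update (the zero-locking phenomenon discussed in Section 3.2), which is exactly what the δ-guarded variant GNMFOSv2 addresses.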

3.2. GNMFOSv2

In Algorithm 1, if $W_{ik} = 0$ or $H^t_{kj} = 0$ occurs in a certain iteration, the corresponding entry stops updating, regardless of whether a stationary point has been reached. This phenomenon is called “zero locking”. In [36], Mirzal observed that almost all MUR-based NMF algorithms exhibit the zero-locking phenomenon and proposed adding a small positive number δ to the denominator of the update rule to avoid it. Following this approach, we modify the update rules (8) and (9) as follows and name the resulting algorithm GNMFOSv2.
W^{t+1}_{ik} \leftarrow W^t_{ik} \frac{(X{H^t}^T)_{ik}}{(W^tH^t{H^t}^T)_{ik} + \delta},  (10)
H^{t+1}_{kj} \leftarrow H^t_{kj} \frac{({W^{t+1}}^TX + \alpha H^{t-1} + \lambda H^tS)_{kj}}{({W^{t+1}}^TW^{t+1}H^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj} + \delta}.  (11)
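In code, GNMFOSv2 differs from the previous sketch only in the δ-guarded denominators of (10) and (11); the single-step helper below (our own naming, with an illustrative default δ) makes the change explicit.

```python
import numpy as np

def gnmfosv2_step(X, W, H, H_prev, S, D, alpha, lam, delta=1e-8):
    """One GNMFOSv2 iteration: updates (10)-(11) with delta in the denominators."""
    W = W * (X @ H.T) / (W @ H @ H.T + delta)
    num = W.T @ X + alpha * H_prev + lam * (H @ S)
    den = W.T @ W @ H + alpha * (H @ H_prev.T @ H_prev) + lam * (H @ D) + delta
    H_new = H * (num / den)
    return W, H_new
```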

3.3. Complexity of Algorithms

In this subsection, the computational complexity of Algorithms 1 and 2 is analyzed. In computer science, computational complexity measures the resources required to run an algorithm, especially time (CPU time) and space (memory). Here, we mainly measure the time requirement, counted as the number of basic floating-point arithmetic operations needed per computing step. The symbol O(n) means the value is of the same order as n.
Recall that M denotes the number of features, N the number of samples, K the number of clusters, and p the number of nearest neighbors. In each iteration of Algorithms 1 and 2, computing $XH^T$, $WHH^T$, $W^TX$, and $W^TWH$ costs $O(MNK)$ each. Similarly, $HD$ and $HS$ each cost $O(N^2K)$, while $H^t{H^{t-1}}^TH^{t-1}$ costs $O(NK^2)$. In addition, constructing the p-nearest neighbor graph requires an additional $O(N^2M)$.
In conclusion, the computational complexity of each iteration in Algorithms 1 and 2 is $O(MNK + N^2M + N^2K + NK^2)$.
Algorithm 2 GNMFOSv2
  • Input: Nonnegative data matrix $X = [x_1, \dots, x_N] \in \mathbb{R}_+^{M \times N}$, the number of clusters K, the orthogonality parameter $\alpha \ge 0$, the regularization parameter $\lambda \ge 0$, the maximum number of iterations T;
  • Initialization: randomly choose initial matrices $W^0 \in \mathbb{R}_+^{M \times K}$ and $H^0 \in \mathbb{R}_+^{K \times N}$
  • while the termination conditions are not satisfied and $t \le T$ do
  •    Fixing H, updating W according to (10)
  •    Fixing W, updating H according to (11)
  • end while
  • return Locally optimal basis matrix W and solution coefficient matrix H

4. Experimental Results and Analysis

In this section, we will assess the performance of Algorithms 1 and 2 through experimental evaluation. Firstly, we compare our algorithms with other classical algorithms in the clustering performance based on image datasets. Secondly, the clustering results for document datasets are compared. Thirdly, parameter sensitivity and weighting scheme selection for S are also analyzed, and this helps us to identify suitable parameters and construction schemes for the algorithm proposed in this paper.

4.1. Evaluation Indicators and Data Sets

In order to compare the clustering performance, we use four evaluation indicators: Purity, Mutual Information (MI), Normalized Mutual Information (NMI), and Accuracy (ACC); higher values indicate better clustering quality [36]. These indicators reflect the advantages of our algorithms from different perspectives.
Specifically, the Purity measures the extent to which clusters contain a single class, defined as
\mathrm{Purity} = \frac{\sum_{i=1}^{K} \max_j n_{ij}}{N}.
The notation n i j represents the count of occurrences where the j-th input class is assigned to the i-th cluster, with N denoting the total number of samples.
The ACC reflects the percentage of correctly clustered samples after optimal label mapping, defined as
\mathrm{ACC} = \frac{\sum_{i=1}^{N} \delta\big(c_i, \mathrm{map}(c_j)\big)}{N}.
Here, N is the total number of samples, $\delta(a, b)$ equals 1 if $a = b$ and 0 otherwise, and $\mathrm{map}(\cdot)$ is the mapping function that maps the computed clustering label $c_j$ to the true class label $c_i$.
MI and NMI quantify the shared information between the cluster assignments and true labels, measuring the quality of clustering from an information-theoretic perspective.
\mathrm{MI}(C, C') = \sum_{c_i \in C,\, c_j \in C'} p(c_i, c_j) \cdot \log \frac{p(c_i, c_j)}{p(c_i)\, p(c_j)},
\mathrm{NMI}(C, C') = \frac{\mathrm{MI}(C, C')}{\max\big(H(C), H(C')\big)}.
Here, C and C' represent the set of true labels and the set of clustering labels, respectively, and $H(\cdot)$ denotes the entropy function. $c_i$ and $c_j$ denote the i-th class and the j-th cluster. $p(c_i, c_j)$ is the joint probability distribution of class and cluster labels, while $p(c_i)$ and $p(c_j)$ are the marginal probability distributions of classes and clusters, respectively.
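For reference, the sketch below computes these four indicators with NumPy, SciPy, and scikit-learn; labels are assumed to be integer-coded from 0, the helper names are our own, and the `average_method='max'` option is used so that the library NMI matches the max-entropy normalization above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import mutual_info_score, normalized_mutual_info_score

def purity(y_true, y_pred):
    # For each cluster, count the most frequent true class, then average.
    clusters = np.unique(y_pred)
    hits = sum(np.bincount(y_true[y_pred == c]).max() for c in clusters)
    return hits / len(y_true)

def clustering_acc(y_true, y_pred):
    # Optimal cluster-to-class mapping via the Hungarian algorithm.
    K = max(y_true.max(), y_pred.max()) + 1
    table = np.zeros((K, K), dtype=int)
    for t, p in zip(y_true, y_pred):
        table[p, t] += 1
    row, col = linear_sum_assignment(-table)   # maximize matched samples
    return table[row, col].sum() / len(y_true)

def mi(y_true, y_pred):
    return mutual_info_score(y_true, y_pred)

def nmi(y_true, y_pred):
    return normalized_mutual_info_score(y_true, y_pred, average_method='max')
```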
To assess the algorithms' performance, we conduct tests using three publicly available image datasets (AR, ORL, and Yale) and two public document datasets (TDT2 and TDT2all).

4.2. Comparison Algorithms

This section will list several classic algorithms for comparison and provide a brief introduction to each of them.
  • K-means [37]: This algorithm performs clustering directly on the original data matrices, without any factorization or feature extraction.
  • NMF [29]: This algorithm only directly obtains the factorization of two non-negative matrices without adding any other conditions or constraints.
  • ONMF [25]: Building upon NMF, an orthogonal constraint is introduced to the basis matrix or coefficient matrix.
  • NMFSC [38]: The algorithm adds sparse representation on NMF, representing the data as a linear combination of a small number of basis vectors.
  • GNMF [15]: By integrating graph theory and manifold assumptions with the original NMF, the algorithm extracts the intrinsic geometric structure of the data.
  • NMFOS [30]: Building on ONMF, this algorithm incorporates an orthogonal constraint on the coefficient matrix as a penalty term in the objective function.
  • NMFOSv2 [36]: This algorithm, based on NMFOS, introduces a small δ term into the denominator of the update rule formula.
Meanwhile, our new models proposed in this paper are listed below.
  • GNMFOS: Building upon NMFOS, this algorithm incorporates the graph Laplacian term into the objective function.
  • GNMFOSv2: Building upon GNMFOS, this algorithm introduces a small δ term into the denominator of the update rule formula.
For consistency, we consider using Frobenius norm for all calculations. In the experiments, the termination criteria and the selection of relevant parameters are computed according to the numerical values chosen by the corresponding original papers’ authors. Additionally, aspects such as the input raw dataset and related settings are kept consistent across all methods in this paper. Each algorithm is run 10 times, and the average values of Purity, NMI, MI, and ACC from these 10 runs are calculated for evaluation.

4.3. Parameter Settings

In Algorithms 1 and 2, there are four parameters: the regularization parameter λ, the orthogonality parameter α, the number of nearest neighbors p, and the δ in the denominator of Algorithm 2. Different parameter values lead to different clustering results. To control variables and investigate the clustering effects of different algorithms, the experiments in Section 4.4.1 initially use the same set of parameters: the maximum number of iterations T = 100, λ = 100, α = 0.01, p = 5, and δ = 1 × 10^{-8}. In the parameter sensitivity subsection, different parameters will be further explored to identify the optimal parameters for the proposed algorithms.

4.4. Clustering Results for Image Datasets

In this subsection, we will compare the clustering performance based on the AR, ORL, and Yale datasets.

4.4.1. Comparison Display on Image Datasets

We compare the values of the various indicators under different K (number of clusters) on each dataset. The first column in Table 2, Table 3 and Table 4 therefore lists all values of K used. Each table consists of four sub-tables, which display the values of all comparison algorithms for one dataset and one evaluation indicator, respectively. The maximum value is shown in bold. To ensure accuracy, the experiments are conducted with 10 runs at each cluster number, and the results are averaged. To make the differences between the four evaluation metrics on each dataset easier to see, we also draw line graphs in different colors to show the corresponding effects; for details, please refer to Figure 1, Figure 2 and Figure 3.
Discussion of Results on AR Dataset:
  • For the ACC, MI, NMI, and Purity indicators, GNMFOS achieves the highest values at the cluster number K = 120 on the AR dataset, while GNMFOSv2 outperforms GNMFOS at the other cluster numbers. Overall, GNMFOSv2 has the highest clustering indicator values.
  • The differences in ACC, MI, NMI, and Purity, as well as their average values, between GNMFOS and GNMFOSv2 are not significant across different cluster numbers K. In contrast, most of the other algorithms show larger fluctuations in clustering indicator values at different K. This shows that GNMFOS and GNMFOSv2 exhibit better stability across different cluster numbers, making them more suitable for applications.
  • Among the orthogonal-based algorithms, ONMF, NMFOS, NMFOSv2, GNMFOS, and GNMFOSv2, the clustering performance of the proposed GNMFOS and GNMFOSv2 algorithms is the best. This indicates that our algorithms enhance the clustering performance and improve the sparse representation capability based on NMF methods.
  • Among the graph Laplacian-based algorithms, GNMFOSv2 and GNMFOS consistently rank among the top two (followed by GNMF). This indicates that our models effectively utilize the hidden geometric structure information in data space.
Discussion of Results on ORL Dataset:
  • When K is small (≤10), K-means achieves the highest values in ACC, MI, NMI, and Purity. When K is larger, the GNMFOS and GNMFOSv2 algorithms consistently rank among the top two in terms of the four clustering indicators. This shows that our algorithms are more suitable for larger numbers of clusters in the ORL dataset.
  • Overall, GNMFOS has higher average values in ACC, MI, and Purity compared to GNMFOSv2 (NMI is slightly lower than GNMFOSv2), but the differences between GNMFOS and GNMFOSv2 are not significant. This indicates that, for the ORL dataset, GNMFOS is slightly more applicable.
  • In terms of ACC, MI, NMI, and Purity, GNMFOS achieves the overall highest values at the same number of clusters K = 40 . This indirectly indicates that the GNMFOS algorithm is suitable for clustering the dataset into the original classes.
  • Among the orthogonal-based algorithms (including ONMF, NMFOS, and NMFOSv2), GNMFOS and GNMFOSv2 exhibit the best clustering performance. This also indicates that our algorithms enhance the clustering performance and improve the sparse representation capability of NMF-based methods.
  • Among the graph Laplacian-based algorithms including GNMF, GNMFOS and GNMFOSv2 exhibit the best clustering performance. This also indicates that our models effectively utilize the hidden geometric structure information in data space.
Discussion of Results on Yale Dataset:
  • On the Yale dataset, no algorithm consistently leads in all four clustering indicators, so they need to be discussed separately. For ACC, NMFOS has the highest value (e.g., 0.4424 when K = 12 ). However, it is worth noting that GNMFOS and GNMFOSv2 do not significantly lag behind NMFOS. Specifically, at K = 6 , GNMFOS achieves the second-highest rank. For MI, NMFOS also achieves the highest value, but GNMFOS lags behind it by only 0.04%, and GNMFOS attains the overall highest value at K = 15 , indicating strong competitiveness in MI. For NMI, GNMFOS has the highest value and achieves the overall highest value at K = 15 . For Purity, NMFOS has the highest value, but GNMFOS attains the overall highest values at K = 6 and K = 15 , indicating that GNMFOS is also highly competitive.
  • Although in some indicators, the values of NMFOS are slightly higher than those of GNMFOS and GNMFOSv2, these differences are moderate. The results indicate that both the GNMFOS and NMFOS algorithms are viable choices for the Yale dataset.
  • For all four clustering indicators at K = 3, ONMF has the highest values, indicating that the ONMF algorithm may be more suitable for smaller K.
  • When K increases, the values of ACC, MI, NMI, and Purity for NMFOS do not consistently increase or decrease monotonically. But for GNMFOS, the values of MI and NMI significantly increase. This indicates that our algorithms are more suitable for clustering experiments with larger numbers of clusters on the Yale dataset.
  • The orthogonal-based algorithms (ONMF, NMFOS, NMFOSv2, GNMFOS, and GNMFOSv2) exhibit the overall best performance, indicating that algorithms with orthogonality are more suitable for the Yale dataset.
Overall, the algorithms proposed in this paper outperform algorithms that solely possess orthogonal, manifold, or sparse constraints. The results indicate that our algorithms successfully integrate the advantages of orthogonal constraints and geometric structures, further optimizing over the compared algorithms for a better learning of parts-based representation in the dataset. So, the above results demonstrate that our GNMFOS and GNMFOSv2 algorithms have strong competitiveness in clustering.
However, it is worth noting that no single algorithm consistently outperforms others across all four clustering indicators on the Yale dataset. This reflects the inherent trade-offs among different clustering evaluation metrics and highlights the limitations of current algorithms in achieving uniformly best performance. Such a phenomenon is common in clustering tasks where different metrics emphasize distinct aspects of clustering quality. Future work could focus on developing methods that balance multiple criteria more effectively.

4.4.2. Summary of Analysis on Image Datasets

In this part, we will summarize the above analysis on image datasets.
  • The orthogonality-based algorithms, such as ONMF, NMFOS, NMFOSv2, GNMFOS, and GNMFOSv2, outperform the NMF, GNMF, SNMF-H, and SNMF-W algorithms on the Yale dataset. Moreover, on the AR and ORL datasets, GNMFOS and GNMFOSv2 exhibit the best clustering performance among them. Among the graph Laplacian-based algorithms, such as GNMF, GNMFOS, and GNMFOSv2, the clustering performance is almost always superior to that of the other algorithms. This is because graph-based algorithms consider local geometric structures, preserving locality in the low-dimensional data space. The GNMFOS and GNMFOSv2 algorithms combine the advantages of orthogonality and the graph, enhancing clustering performance and improving the sparse representation capability of NMF-based models.
  • Due to the varying backgrounds in each dataset (e.g., lighting conditions, poses, etc.), the performances of GNMFOS and GNMFOSv2 differ. For different feature dimensions, the values of ACC, MI, NMI, and Purity also show slight variations, but the overall performance of GNMFOS and GNMFOSv2 remains stable.
  • For the Yale dataset, GNMFOS and GNMFOSv2 do not significantly outperform, indicating that orthogonal penalty constraints and graph Laplacian constraints may not necessarily enhance clustering performance in certain situations. Therefore, further research is needed to develop more effective algorithms in the future.
  • Considering the results for different clustering numbers K across the three image datasets, the clustering indicators (ACC, MI, NMI, and Purity) of the GNMFOS and GNMFOSv2 algorithms exhibit some fluctuations. However, overall, they outperform algorithms that only have orthogonal or manifold structure constraints, sparse constraints, or no constraints.
In conclusion, our algorithms demonstrate the best overall clustering performance for image datasets, and the above results have clearly shown that incorporating the orthogonality and graph regularity into the NMF model can enhance the clustering performance of the corresponding algorithms.

4.5. Clustering Results for Document Datasets

This subsection conducts clustering experiments on the document datasets TDT2all and TDT2. In addition to the newly proposed GNMFOS and GNMFOSv2 models, the NMF, GNMF, NMFOS, and NMFOSv2 algorithms, which have shown excellent performance on the image datasets, are selected for comparison. Similarly, detailed information on the cluster numbers K is provided in the first column of Table 5 and Table 6. The tables consist of four sub-tables corresponding to the four evaluation indicators, with the maximum values highlighted in bold. Adopting the same approach, we run tests separately for different cluster numbers and average the results over 10 runs. Details are listed in Table 5 and Table 6. Figure 4 and Figure 5 display the corresponding line charts.
  • For the TDT2all dataset, when K ≥ 70, GNMFOSv2 achieves the maximum values of ACC, NMI, MI, and Purity; when 30 ≤ K < 70, GNMFOS achieves the maximum values; when K < 30, GNMF achieves the maximum values. Although GNMF has the best average performance in the ACC metric, GNMFOS, proposed in this paper, achieves the highest ACC value at K = 50, reaching 42.27%. This suggests that GNMFOSv2 is more suitable when the number of clusters is close to the number of classes in the document dataset itself; GNMFOS is more suitable when the number of clusters is moderate; and GNMF is more suitable when the number of clusters is very small. Additionally, GNMFOS ranks first in the average Purity indicator, while GNMFOSv2 ranks first in the average NMI and MI indicators.
  • From Figure 4, it can be observed that, for the TDT2all dataset, the clustering performance is relatively optimal when the number of clusters K is set to 50. This suggests that the optimal number of clusters for the TDT2all dataset is 50. Additionally, the GNMF, GNMFOS, and GNMFOSv2 algorithms significantly outperform the NMF, NMFOS, and NMFOSv2 algorithms across all four indicators. This indicates that graph Laplacian-based algorithms can enhance clustering performance on the TDT2all document dataset.
  • For the TDT2 dataset, the GNMFOSv2 algorithm achieves the best performance in terms of MI, NMI, and Purity across different numbers of clusters K and their average values. The GNMF algorithm performs better in terms of ACC; however, the ACC values of GNMFOSv2 increase as the number of clusters K increases. Particularly at K = 30 , the ACC value of GNMFOSv2 is only 0.7% lower than that of the GNMF algorithm. This indicates that when the number of clusters is close to the class structure of the document dataset itself, the GNMFOSv2 algorithm has significant potential in terms of ACC. Combining MI, NMI, and Purity, it can be concluded that the GNMFOSv2 algorithm is more suitable for the TDT2 document dataset when the number of clusters is relatively large.
  • Additionally, it is worth noting that the NMFOS and GNMFOS algorithms exhibit extremely low values of ACC, NMI, MI, and Purity on the TDT2 dataset, and these values do not change with the number of clusters K. This could be due to the zero-locking phenomenon mentioned earlier. In contrast, the NMFOSv2 and GNMFOSv2 algorithms incorporate δ into their iterations, preventing zero locking. This suggests that the proposed GNMFOSv2 algorithm is practically significant, addressing issues observed in other algorithms.
  • Considering the results of the two document datasets at different cluster numbers K, it can be concluded that the clustering indicators ACC, MI, NMI, and Purity of the GNMFOS and GNMFOSv2 algorithms exhibit some fluctuations, but overall these algorithms outperform the others. Therefore, the proposed GNMFOS and GNMFOSv2 algorithms can effectively learn the parts-based representation of the dataset, making them effective for document clustering.

4.6. Parameter Sensitivity

We now investigate how different choices of the algorithm parameters affect numerical performance. Clearly, if an algorithm is not sensitive to different parameter choices, it can be considered more robust and easier to apply in practice. Four parameters affect the experimental results: the regularization parameter λ, the orthogonality parameter α, the number of nearest neighbors p, and the δ in Algorithm 2. In this subsection, we compare the parameter choices for the GNMFOS and GNMFOSv2 algorithms.
Considering the comprehensiveness of the experiments, we first fix the number of nearest neighbors p and δ, and let α and λ vary over $\{10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, 10^{3}\}$. Specifically, in the experiments, we test the parameters $\{\alpha, \lambda\}$ as a pair, and the experimental results are shown in Figure 6 and Figure 7.
For the AR dataset, taking the number of classes in the dataset itself as the clustering number K = 120 , when the parameters α , λ are set to { 1000 , 1000 } , the clustering performance of the GNMFOS algorithm is relatively optimal. When the parameters α , λ are set to { 1 , 1 } , the clustering performance of the GNMFOSv2 algorithm is relatively optimal. At the same time, it can be observed that the clustering metrics of GNMFOS and GNMFOSv2 algorithms do not fluctuate much when the parameters α and λ change. This indicates that these two algorithms are not sensitive to different parameter choices, making them more suitable for practical applications.
Then, based on Figure 6 and Figure 7, fixing the optimal parameters $\{\alpha, \lambda\} = \{1000, 1000\}$ for GNMFOS and $\{\alpha, \lambda\} = \{1, 1\}$ for GNMFOSv2, we let the number of nearest neighbors p vary over $\{1, 3, 5, 7, 9\}$. The experimental results are shown in Figure 8. It can be observed that, for the AR dataset, the clustering performance of the GNMFOS algorithm is relatively optimal when p = 3. As noted above, the performance of GNMFOS is stable with respect to the parameters α and λ.
However, it is important to note that GNMFOS and GNMFOSv2 rely on the assumption that two neighboring data points share the same label. Clearly, as p increases, this assumption may fail. This is also why the performance of GNMFOS and GNMFOSv2 decreases with increasing p. It demonstrates the existence of an optimal neighborhood size p for effectively learning the underlying manifold structures of the data.
Because the parameter δ in the denominator of Algorithm 2's MUR cannot be too large, as it might affect the convergence of the update rules, we uniformly set it to $1 \times 10^{-8}$.
Considering the overall results, the performance of the GNMFOS and GNMFOSv2 algorithms is highly robust to different values of the parameters λ, α, and p. The algorithms remain stable across a wide range of parameter values, demonstrating their ability to capture both global and local geometric structures effectively and showcasing good generalization ability. The bar charts in Figure 6 and Figure 7 demonstrate the robustness of our new algorithms, and the line graphs in Figure 8 further illustrate that the choice of p is also robust.

4.7. Weighting Scheme Selection

There are several choices for defining the weight matrix S on the p-nearest neighbors. Three popular schemes are 0-1 weighting, heat kernel weighting, and dot product weighting. In our previous experiments, for simplicity, we used 0-1 weighting, where, for a given point x, the p-nearest neighbors are treated as equally important. However, in many cases, it is necessary to distinguish these p nearest neighbors, especially when p is large. In such cases, heat kernel weighting or dot product weighting can be used [15].
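The three schemes can be written compactly as follows; here X holds samples as columns, A is the symmetrized 0-1 p-NN adjacency mask from the earlier construction, and σ is the heat-kernel width (all helper names are our own shorthand).

```python
import numpy as np

def zero_one_weights(A):
    # Every neighbor of x_j is treated as equally important.
    return A.astype(float)

def heat_kernel_weights(X, A, sigma=1.0):
    # S_jl = exp(-||x_j - x_l||^2 / sigma) restricted to the p-NN graph.
    sq = np.sum(X**2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    return A * np.exp(-dist / sigma)

def dot_product_weights(X, A):
    # For unit-normalized document vectors this is the cosine similarity.
    return A * (X.T @ X)
```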
For image data, Figure 9 and Figure 10 illustrate the relationship between the clustering performance of the GNMFOS algorithm and that of the GNMFOSv2 algorithm on the AR dataset as the number of nearest neighbors p varies. The cluster number is set to 120, matching the number of classes in the dataset. It can be observed that for the GNMFOSv2 algorithm, different values of p correspond to different optimal schemes. This variation might be attributed to the impact of the parameter δ in the update rule of the algorithm. The optimal weighting scheme can be chosen based on practical considerations.
For the GNMFOS algorithm, the heat kernel scheme is superior to the 0-1 and dot-product weighting schemes when p ≤ 10. When 10 ≤ p ≤ 15, the 0-1 weighting scheme is optimal, and when p ≥ 15, the heat kernel scheme is again optimal. However, the heat kernel scheme involves a parameter σ that is crucial for this scheme, and automatically selecting σ is a challenging problem that has attracted significant interest among researchers.
For document data, document vectors are often normalized to unit vectors. In this case, the dot product of two document vectors becomes their cosine similarity, a widely used document similarity metric in the information retrieval community. Therefore, using dot product weighting is common for document data. Unlike the 0-1 weighting, the dot product weighting does not involve any parameters.
Figure 11 and Figure 12 illustrate the relationship between the clustering performance of the GNMFOS and GNMFOSv2 algorithms with the 0-1 weighting and dot product weighting schemes on the TDT2all document dataset as the number of nearest neighbors p varies. The cluster number is set to 96, matching the number of classes in the dataset. It can be observed that for the GNMFOS algorithm, the dot product weighting scheme performs better, especially when p is large. For the GNMFOSv2 algorithm, the 0-1 weighting scheme exhibits significant fluctuations. Specifically, at p = 12 and p = 21, the values of the four clustering metrics are low, while at p = 15, the values are high. However, at p = 3, the clustering performance is optimal. Overall, the 0-1 weighting scheme is more suitable for the GNMFOSv2 algorithm.

5. Conclusions

In this paper, we have proposed a novel clustering model, GNMFOS, which incorporates both graph regularization and orthogonality as penalty terms into the NMF objective function. Two effective algorithms are developed to solve the proposed model, and, more importantly, their convergence has been theoretically proven.
The main contributions of this work are summarized as follows:
  • GNMFOS integrates the strengths of both NMFOS and GNMF by incorporating orthogonality constraints and a graph Laplacian term into the objective function. This enhances the model’s capability in sparse representation, reduces computational complexity, and improves clustering accuracy by leveraging the underlying geometric structure of both data and feature spaces.
  • The model supports flexible control over orthogonality and regularization through parameters α and λ , respectively. This makes GNMFOS adaptable to various real-world scenarios.
  • Extensive experiments on three image datasets and two document datasets demonstrate the superior clustering performance of GNMFOS and its variant GNMFOSv2. The image experiments show better parts-based feature learning and higher discriminative power, while the document experiments suggest improved semantic structure representation.
In future work, we aim to apply GNMFOS to data from diverse fields such as medicine, engineering, and management to further validate its effectiveness. In addition, considering the inherent complexity of NMF-based optimization models, developing more efficient algorithms for large-scale and constrained scenarios remains an important research direction.

Author Contributions

W.L., J.Z., and Y.C. wrote the main manuscript text. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The second and third authors are partially supported by the China Tobacco Hebei industrial co., Ltd. Technology Project, China (Grant No. HBZY2023A034), Natural Science Foundation of Tianjin City, China (Grant Nos. 24JCYBJC00430 and 18JCYBJC16300), the Natural Science Foundation of China (Grant No. 11401433), and the Ordinary Universities Undergraduate Education Quality Reform Research Project of Tianjin, China (Grant No. A231005802).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to acknowledge the anonymous reviewers for their meticulous and effective review of this article.

Conflicts of Interest

Authors Wen Li, Jujian Zhao, and Yasong Chen were employed by the School of Mathematical Sciences, Tiangong University, China. The authors declare that this study received funding as listed in the 'Funding' section. The funders were not involved in the study design, collection, analysis, or interpretation of data, the writing of this article, or the decision to submit it for publication.

Appendix A. Convergence Proof of the Algorithm 1

In this section, we will prove the convergence of Algorithm 1. Since the objective function of (5) has a non-negative lower bound, we only need to prove that the objective function value of (5) does not increase with respect to the update rules (8) and (9). Before introducing the convergence theorem and its proof, we first import a definition from [39]; then, a lemma follows.
Definition A1. 
Let $F: \mathbb{R}_+^{\upsilon \times \omega} \to \mathbb{R}$ be continuously differentiable with $\upsilon, \omega \in \mathbb{N}$, and let $G: \mathbb{R}_+^{\upsilon \times \omega} \times \mathbb{R}_+^{\upsilon \times \omega} \to \mathbb{R}$ also be continuously differentiable. Then, G is called an auxiliary function of F if it satisfies the following conditions:
G(V, V) = F(V), \qquad G(V, V^t) \ge F(V),
for any $V, V^t \in \mathbb{R}_+^{\upsilon \times \omega}$ [39].
Lemma A1. 
Let G be an auxiliary function of F. Then, F is nonincreasing under the sequence of matrices { V t } t = 0 + generated by the following update rule:
V^{t+1} = \arg\min_{V \in \mathbb{R}^{\upsilon \times \omega}} G(V, V^t).  (A1)
Proof. 
By Definition (A1), the formula
G(V^{t+1}, V^{t+1}) = F(V^{t+1}) \le G(V^{t+1}, V^t) \le G(V^t, V^t) = F(V^t)
holds naturally. □
Next, we prove two important lemmas. First, we rewrite the symbols involved in the objective function $F = J(W, H^t, H^{t-1})$. Let $F_{W_{ik}}$ and $F_{H_{kj}}$ be the parts of (5) related to $W_{ik}$ and $H^t_{kj}$, respectively. Then, the first- and second-order derivatives of $F_{W_{ik}}$ with respect to $W_{ik}$ are
F'_{W_{ik}} = \frac{\partial J(W, H^t, H^{t-1})}{\partial W_{ik}} = (WH^t{H^t}^T - X{H^t}^T)_{ik},
F''_{W_{ik}} = (H^t{H^t}^T)_{kk}.
Similarly, taking the first- and second-order derivatives of $F_{H_{kj}}$ with respect to $H^t_{kj}$, we have
F'_{H_{kj}} = \frac{\partial J(W, H^t, H^{t-1})}{\partial H^t_{kj}} = (W^TWH^t - W^TX + \alpha H^t{H^{t-1}}^TH^{t-1} - \alpha H^{t-1} + \lambda H^tL)_{kj},
F''_{H_{kj}} = (W^TW - \alpha I)_{kk} + (\alpha {H^{t-1}}^TH^{t-1} + \lambda L)_{jj}.
Second, according to Lemma A1, it is sufficient to prove that $F_{W_{ik}}$ and $F_{H_{kj}}$ do not increase under the sequences $\{W^t_{ik}\}_{t=0}^{+\infty}$ and $\{H^t_{kj}\}_{t=0}^{+\infty}$ generated by (8) and (9), respectively. We now construct the following auxiliary functions:
G_W(W^{t+1}_{ik}, W^t_{ik}) = F_{W_{ik}}(W^t_{ik}) + F'_{W_{ik}}(W^t_{ik})(W^{t+1}_{ik} - W^t_{ik}) + \frac{1}{2}\frac{(W^tH^t{H^t}^T)_{ik}}{W^t_{ik}}(W^{t+1}_{ik} - W^t_{ik})^2,  (A2)
G_H(H^{t+1}_{kj}, H^t_{kj}) = F_{H_{kj}}(H^t_{kj}) + F'_{H_{kj}}(H^t_{kj})(H^{t+1}_{kj} - H^t_{kj}) + \frac{1}{2}\frac{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}}{H^t_{kj}}(H^{t+1}_{kj} - H^t_{kj})^2.  (A3)
Thus, the two key lemmas and their proofs are presented below.
Lemma A2. 
Let $G_W(W^{t+1}_{ik}, W^t_{ik})$ be defined as in (A2). Then, $G_W(W^{t+1}_{ik}, W^t_{ik})$ is an auxiliary function of $F_{W_{ik}}$.
Proof. 
The following equation is obtained from F W i k ( W i k ) ’s Taylor expansion:
F_{W_{ik}}(W^{t+1}_{ik}) = F_{W_{ik}}(W^t_{ik}) + F'_{W_{ik}}(W^t_{ik})(W^{t+1}_{ik} - W^t_{ik}) + \frac{1}{2}(H^t{H^t}^T)_{kk}(W^{t+1}_{ik} - W^t_{ik})^2.
To prove that $G_W(W^{t+1}_{ik}, W^t_{ik}) \ge F_{W_{ik}}(W^{t+1}_{ik})$, it is sufficient to show that
\frac{(W^tH^t{H^t}^T)_{ik}}{W^t_{ik}} \ge (H^t{H^t}^T)_{kk}
holds, and in fact, this is true because of the following inequality:
(W^tH^t{H^t}^T)_{ik} = \sum_{r=1}^{K} W^t_{ir}(H^t{H^t}^T)_{rk} \ge W^t_{ik}(H^t{H^t}^T)_{kk}.  (A4)
So, by (A4), one has that
\frac{1}{2}\frac{(W^tH^t{H^t}^T)_{ik}}{W^t_{ik}}(W^{t+1}_{ik} - W^t_{ik})^2 \ge \frac{1}{2}\frac{W^t_{ik}(H^t{H^t}^T)_{kk}}{W^t_{ik}}(W^{t+1}_{ik} - W^t_{ik})^2 = \frac{1}{2}(H^t{H^t}^T)_{kk}(W^{t+1}_{ik} - W^t_{ik})^2.
Therefore,
G_W(W^{t+1}_{ik}, W^t_{ik}) \ge F_{W_{ik}}(W^{t+1}_{ik})
follows. Meanwhile, it is easy to see that $G_W(W^{t+1}_{ik}, W^{t+1}_{ik}) = F_{W_{ik}}(W^{t+1}_{ik})$. By Definition A1, we obtain that $G_W(W^{t+1}_{ik}, W^t_{ik})$ is an auxiliary function of $F_{W_{ik}}$. □
Lemma A3. 
Let $G_H(H^{t+1}_{kj}, H^t_{kj})$ be defined as in (A3). Then, $G_H(H^{t+1}_{kj}, H^t_{kj})$ is an auxiliary function of $F_{H_{kj}}$.
Proof. 
Compared to Lemma A2, the proof method is similar, but due to the complexity of auxiliary function, the steps are a bit cumbersome. Therefore, we provide them in detail. By a Taylor series expansion of F H k j ( H k j t + 1 ) , we have
F_{H_{kj}}(H^{t+1}_{kj}) = F_{H_{kj}}(H^t_{kj}) + F'_{H_{kj}}(H^t_{kj})(H^{t+1}_{kj} - H^t_{kj}) + \frac{1}{2}\big[(W^TW - \alpha I)_{kk} + (\alpha {H^{t-1}}^TH^{t-1} + \lambda L)_{jj}\big](H^{t+1}_{kj} - H^t_{kj})^2.
Similar to (A4), the following three inequalities are obtained:
(W^TWH^t)_{kj} = \sum_{r=1}^{K} (W^TW)_{kr}H^t_{rj} \ge (W^TW)_{kk}H^t_{kj},
(H^t{H^{t-1}}^TH^{t-1})_{kj} = \sum_{r=1}^{N} H^t_{kr}({H^{t-1}}^TH^{t-1})_{rj} \ge H^t_{kj}({H^{t-1}}^TH^{t-1})_{jj} \ge H^t_{kj}({H^{t-1}}^TH^{t-1})_{jj} - H^t_{kj},
(H^tD)_{kj} = \sum_{r=1}^{N} H^t_{kr}D_{rj} \ge H^t_{kj}D_{jj} \ge H^t_{kj}(D - S)_{jj} = H^t_{kj}L_{jj}.
With the help of these three inequalities, one can prove that
\frac{1}{2}\frac{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}}{H^t_{kj}}(H^{t+1}_{kj} - H^t_{kj})^2 \ge \frac{1}{2}\frac{(W^TW)_{kk}H^t_{kj} + \alpha H^t_{kj}({H^{t-1}}^TH^{t-1})_{jj} - \alpha H^t_{kj} + \lambda H^t_{kj}L_{jj}}{H^t_{kj}}(H^{t+1}_{kj} - H^t_{kj})^2 = \frac{1}{2}\big[(W^TW - \alpha I)_{kk} + (\alpha {H^{t-1}}^TH^{t-1} + \lambda L)_{jj}\big](H^{t+1}_{kj} - H^t_{kj})^2.
Therefore,
G_H(H^{t+1}_{kj}, H^t_{kj}) \ge F_{H_{kj}}(H^{t+1}_{kj})
is obtained. Additionally, $G_H(H^{t+1}_{kj}, H^{t+1}_{kj}) = F_{H_{kj}}(H^{t+1}_{kj})$ holds. Then, $G_H(H^{t+1}_{kj}, H^t_{kj})$ is an auxiliary function of $F_{H_{kj}}$. □
Based on the above preparations, we can now provide the convergence theorem and its proof for the main conclusion of this paper.
Theorem A1. 
For given matrices $X \in \mathbb{R}_+^{M \times N}$, $W^0 \in \mathbb{R}_+^{M \times K}$, and $H^0 \in \mathbb{R}_+^{K \times N}$, the objective function in (5) is non-increasing along the matrix sequence $\{W^t, H^t\}_{t=0}^{+\infty}$ generated by the update rules (8) and (9). In other words, (8) and (9) are two convergent iterative schemes.
Proof. 
Note that
\frac{\partial G_W(W^{t+1}_{ik}, W^t_{ik})}{\partial W^{t+1}_{ik}} = F'_{W_{ik}}(W^t_{ik}) + \frac{(W^tH^t{H^t}^T)_{ik}}{W^t_{ik}}(W^{t+1}_{ik} - W^t_{ik}).
Setting $\partial G_W(W^{t+1}_{ik}, W^t_{ik})/\partial W^{t+1}_{ik} = 0$ gives
F'_{W_{ik}}(W^t_{ik})W^t_{ik} + (W^tH^t{H^t}^T)_{ik}W^{t+1}_{ik} - (W^tH^t{H^t}^T)_{ik}W^t_{ik} = 0.
According to Lemma A2, we can obtain
W^{t+1}_{ik} = \frac{W^t_{ik}(W^tH^t{H^t}^T)_{ik} - W^t_{ik}F'_{W_{ik}}(W^t_{ik})}{(W^tH^t{H^t}^T)_{ik}} = \frac{W^t_{ik}(W^tH^t{H^t}^T)_{ik} - W^t_{ik}(W^tH^t{H^t}^T - X{H^t}^T)_{ik}}{(W^tH^t{H^t}^T)_{ik}} = W^t_{ik}\frac{(X{H^t}^T)_{ik}}{(W^tH^t{H^t}^T)_{ik}},
and according to Lemma A1, we can also obtain that if
W^{t+1}_{ik} = \arg\min_{W_{ik}} G_W(W_{ik}, W^t_{ik}) = W^t_{ik}\frac{(X{H^t}^T)_{ik}}{(W^tH^t{H^t}^T)_{ik}},
then, because the objective function in (5) has a non-negative lower bound, it can be concluded that (8) is a convergent iterative formula.
Similarly,
\frac{\partial G_H(H^{t+1}_{kj}, H^t_{kj})}{\partial H^{t+1}_{kj}} = F'_{H_{kj}}(H^t_{kj}) + \frac{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}}{H^t_{kj}}(H^{t+1}_{kj} - H^t_{kj}),
and setting $\partial G_H(H^{t+1}_{kj}, H^t_{kj})/\partial H^{t+1}_{kj} = 0$ leads to
F'_{H_{kj}}(H^t_{kj})H^t_{kj} + (W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}H^{t+1}_{kj} - (W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}H^t_{kj} = 0.
According to Lemma A3, it follows that
H^{t+1}_{kj} = \frac{H^t_{kj}(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj} - H^t_{kj}F'_{H_{kj}}(H^t_{kj})}{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}} = \frac{H^t_{kj}\Theta^1_{kj} - H^t_{kj}\Theta^2_{kj}}{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}} = H^t_{kj}\frac{(W^TX + \alpha H^{t-1} + \lambda H^tS)_{kj}}{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}},
where
\Theta^1 = W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD
and
\Theta^2 = W^TWH^t - W^TX + \alpha H^t{H^{t-1}}^TH^{t-1} - \alpha H^{t-1} + \lambda H^tL.
By Lemma A1, it is easy to see that if
H^{t+1}_{kj} = \arg\min_{H_{kj}} G_H(H_{kj}, H^t_{kj}) = H^t_{kj}\frac{(W^TX + \alpha H^{t-1} + \lambda H^tS)_{kj}}{(W^TWH^t + \alpha H^t{H^{t-1}}^TH^{t-1} + \lambda H^tD)_{kj}}
holds (also because the objective function (5) has a nonnegative lower bound), it can be concluded that (9) is a convergent iterative formula. □

References

  1. Devarajan, K. Nonnegative matrix factorization: An analytical and interpretive tool in computational biology. PLoS Comput. Biol. 2008, 4, e1000029. [Google Scholar] [CrossRef] [PubMed]
  2. Gao, Z.; Wang, Y.T.; Wu, Q.W.; Ni, J.C.; Zheng, C.H. Graph regularized l2,1-nonnegative matrix factorization for miRNA-disease association prediction. BMC Bioinform. 2020, 21, 1–13. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, L.; Liu, Z.; Pu, J.; Song, B. Adaptive graph regularized nonnegative matrix factorization for data representation. Appl. Intell. 2020, 50, 438–447. [Google Scholar] [CrossRef]
  4. Gillis, N.; Glineur, F. A multilevel approach for nonnegative matrix factorization. J. Comput. Appl. Math. 2012, 236, 1708–1723. [Google Scholar] [CrossRef]
  5. Hoyer, P.O. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 2002, 5, 457–469. [Google Scholar]
  6. Wang, D.; Lu, H. On-line learning parts-based representation via incremental orthogonal projective non-negative matrix factorization. Signal Process. 2013, 93, 1608–1623. [Google Scholar] [CrossRef]
  7. Berry, M.W.; Browne, M.; Langville, A.N.; Pauca, V.P.; Plemmons, R.J. Algorithms and applications for approximate nonnegative matrix factorization. Comput. Stat. Data Anal. 2007, 52, 155–173. [Google Scholar] [CrossRef]
  8. Pauca, V.P.; Piper, J.; Plemmons, R.J. Nonnegative matrix factorization for spectral data analysis. Linear Algebra Its Appl. 2006, 416, 29–47. [Google Scholar] [CrossRef]
  9. Bertin, N.; Badeau, R.; Vincent, E. Enforcing harmonicity and smoothness in Bayesian non-negative matrix factorization applied to polyphonic music transcription. IEEE Trans. Audio Speech Lang. Process. 2010, 18, 538–549. [Google Scholar] [CrossRef]
  10. Zhou, G.; Yang, Z.; Xie, S.; Yang, J.M. Online blind source separation using incremental nonnegative matrix factorization with volume constraint. IEEE Trans. Neural Netw. 2023, 22, 550–560. [Google Scholar] [CrossRef] [PubMed]
  11. Fan, F.; Jing, P.; Nie, L.; Gu, H.; Su, Y. SADCMF: Self-Attentive Deep Consistent Matrix Factorization for Micro-Video Multi-Label Classification. IEEE Trans. Multimed. 2024, 26, 10331–10341. [Google Scholar] [CrossRef]
  12. Chen, D.; Li, S.X. An augmented GSNMF model for complete deconvolution of bulk RNA-seq data. Math. Biosci. Eng. 2025, 22, 988. [Google Scholar] [CrossRef] [PubMed]
  13. Xi, L.; Li, R.; Li, M.; Miao, D.; Wang, R.; Haas, Z. NMFAD: Neighbor-aware mask-filling attributed network anomaly detection. IEEE Trans. Inf. Forensics Secur. 2024, 20, 364–374. [Google Scholar] [CrossRef]
  14. Chen, X.; Wang, J.; Huang, Q. Dynamic Spectrum Cartography: Reconstructing Spatial-Spectral-Temporal Radio Frequency Map via Tensor Completion. IEEE Trans. Signal Process. 2025, 73, 1184–1199. [Google Scholar] [CrossRef]
  15. Cai, D.; He, X.; Han, J.; Huang, T.S. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 1548–1560. [Google Scholar] [CrossRef] [PubMed]
  16. Hu, W.; Choi, K.S.; Wang, P.; Jiang, Y.; Wang, S. Convex nonnegative matrix factorization with manifold regularization. Neural Netw. 2015, 63, 94–103. [Google Scholar] [CrossRef] [PubMed]
  17. Meng, Y.; Shang, R.; Jiao, L.; Zhang, W.; Yuan, Y.; Yang, S. Feature selection based dual-graph sparse non-negative matrix factorization for local discriminative clustering. Neurocomputing 2018, 290, 87–99. [Google Scholar] [CrossRef]
  18. Long, X.; Lu, H.; Peng, Y.; Li, W. Graph regularized discriminative non-negative matrix factorization for face recognition. Multimed. Tools Appl. 2014, 72, 2679–2699. [Google Scholar] [CrossRef]
  19. Shang, F.; Jiao, L.; Wang, F. Graph dual regularization non-negative matrix factorization for co-clustering. Pattern Recognit. 2012, 45, 2237–2250. [Google Scholar] [CrossRef]
  20. Che, H.; Li, C.; Leung, M.; Ouyang, D.; Dai, X.; Wen, S. Robust hypergraph regularized deep non-negative matrix factorization for multi-view clustering. IEEE Trans. Emerg. Top. Comput. Intell. 2025, 9, 1817–1829. [Google Scholar] [CrossRef]
  21. Yang, X.; Che, H.; Leung, M.; Liu, C.; Wen, S. Auto-weighted multi-view deep non-negative matrix factorization with multi-kernel learning. IEEE Trans. Signal Inf. Process. Netw. 2025, 11, 23–34. [Google Scholar] [CrossRef]
  22. Liu, C.; Li, R.; Che, H.; Leung, M.F.; Wu, S.; Yu, Z.; Wong, H.S. Beyond euclidean structures: Collaborative topological graph learning for multiview clustering. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 10606–10618. [Google Scholar] [CrossRef] [PubMed]
  23. Li, G.; Zhang, X.; Zheng, S.; Li, D. Semi-supervised convex nonnegative matrix factorizations with graph regularized for image representation. Neurocomputing 2017, 237, 1–11. [Google Scholar] [CrossRef]
  24. Li, S.Z.; Hou, X.W.; Zhang, H.J.; Cheng, Q.S. Learning spatially localized, parts-based representation. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  25. Ding, C.; Li, T.; Peng, W.; Park, H. Orthogonal nonnegative matrix t-factorizations for clustering. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, PA, USA, 20–23 August 2006; pp. 126–135. [Google Scholar]
  26. Li, S.; Li, W.; Hu, J.; Li, Y. Semi-supervised bi-orthogonal constraints dual-graph regularized NMF for subspace clustering. Appl. Intell. 2022, 52, 3227–3248. [Google Scholar] [CrossRef]
  27. Liang, N.; Yang, Z.; Li, Z.; Han, W. Incomplete multi-view clustering with incomplete graph-regularized orthogonal non-negative matrix factorization. Appl. Intell. 2022, 52, 14607–14623. [Google Scholar] [CrossRef]
  28. Zhang, X.; Xiu, X.; Zhang, C. Structured joint sparse orthogonal nonnegative matrix factorization for fault detection. IEEE Trans. Instrum. Meas. 2023, 72, 1–15. [Google Scholar] [CrossRef]
  29. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef] [PubMed]
  30. Li, Z.; Wu, X.; Peng, H. Nonnegative matrix factorization on orthogonal subspace. Pattern Recognit. Lett. 2010, 31, 905–911. [Google Scholar] [CrossRef]
  31. Peng, S.; Ser, W.; Chen, B.; Sun, L.; Lin, Z. Robust nonnegative matrix factorization with local coordinate constraint for image clustering. Eng. Appl. Artif. Intell. 2020, 88, 103354. [Google Scholar] [CrossRef]
  32. Tosyali, A.; Kim, J.; Choi, J.; Jeong, M.K. Regularized asymmetric nonnegative matrix factorization for clustering in directed networks. Pattern Recognit. Lett. 2019, 125, 750–757. [Google Scholar] [CrossRef]
  33. Guo, J.; Wan, Z. A modified spectral PRP conjugate gradient projection method for solving large-scale monotone equations and its application in compressed sensing. Math. Probl. Eng. 2019, 2019, 5261830. [Google Scholar] [CrossRef]
  34. Li, T.; Wan, Z. New adaptive Barzilai–Borwein step size and its application in solving large-scale optimization problems. ANZIAM J. 2019, 61, 76–98. [Google Scholar]
  35. Lv, J.; Deng, S.; Wan, Z. An efficient single-parameter scaling memoryless Broyden–Fletcher–Goldfarb–Shanno algorithm for solving large scale unconstrained optimization problems. IEEE Access 2020, 8, 85664–85674. [Google Scholar] [CrossRef]
  36. Mirzal, A. A convergent algorithm for orthogonal nonnegative matrix factorization. J. Comput. Appl. Math. 2014, 260, 149–166. [Google Scholar] [CrossRef]
  37. Wang, J.; Wang, J.; Ke, Q.; Zeng, G.; Li, S. Fast approximate k-means via cluster closures. In Multimedia Data Mining and Analytics: Disruptive Innovation; Springer: Berlin/Heidelberg, Germany, 2015; pp. 373–395. [Google Scholar]
  38. Hoyer, P.O. Non-negative sparse coding. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, Martigny, Switzerland, 6 September 2002; pp. 557–565. [Google Scholar]
  39. Lee, D.D.; Seung, H.S. Algorithms for non-negative matrix factorization. Adv. Neural Inf. Process. Syst. 2000, 13, 535–541. [Google Scholar]
Figure 1. The clustering performance on AR. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 2. The clustering performance on ORL. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 3. The clustering performance on Yale. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 4. The clustering performance on TDT2all. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 5. The clustering performance on TDT2. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 6. Performance variations in ACC, MI, NMI, and Purity with respect to different combinations of α and λ values on AR dataset in GNMFOS. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 7. Performance variations in ACC, MI, NMI, and Purity with respect to different combinations of α and λ values on AR dataset in GNMFOSv2. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 8. The performance of GNMFOS and GNMFOSv2 as p increases on AR dataset. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 9. The performance of GNMFOS with different weighting schemes on the AR dataset. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 10. The performance of GNMFOSv2 with different weighting schemes on the AR dataset. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 11. The performance of GNMFOS with different weighting schemes on the TDT2all dataset. (a) ACC; (b) MI; (c) NMI; (d) Purity.
Figure 12. The performance of GNMFOSv2 with different weighting schemes on the TDT2all dataset. (a) ACC; (b) MI; (c) NMI; (d) Purity.
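Figures 6–8 report how ACC, MI, NMI, and Purity respond to different (α, λ) combinations and to the neighborhood size p. As a rough guide to reproducing such a sensitivity sweep, the sketch below loops a solver over a parameter grid; run_gnmfos and evaluate are hypothetical placeholders (the former standing in for the paper's GNMFOS solver, the latter for a metric helper such as the one sketched after Table 2), and the grid values are illustrative rather than the settings used in the experiments.

```python
import itertools
import numpy as np

def sensitivity_sweep(X, L, y_true, K, alphas, lambdas, run_gnmfos, evaluate):
    """Grid-evaluate a GNMFOS-style solver over (alpha, lambda) pairs.

    run_gnmfos(X, L, K, alpha, lam) is assumed to return a K x N coefficient
    matrix H; cluster labels are taken as the argmax over the K rows, which is
    one common convention (the paper may instead run k-means on H).
    """
    results = {}
    for alpha, lam in itertools.product(alphas, lambdas):
        H = run_gnmfos(X, L, K, alpha=alpha, lam=lam)   # K x N coefficients
        y_pred = np.argmax(H, axis=0)                   # one label per sample
        results[(alpha, lam)] = evaluate(y_true, y_pred)
    return results

# Illustrative call (grid values are examples only):
# scores = sensitivity_sweep(X, L, y_true, K=20,
#                            alphas=[0.01, 0.1, 1, 10, 100],
#                            lambdas=[0.01, 0.1, 1, 10, 100],
#                            run_gnmfos=run_gnmfos, evaluate=evaluate)
```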
Table 1. Important symbols used in this paper.

Symbol  Description
ℕ  The set of all natural numbers
ℝ_+^(υ×ω)  The set of all nonnegative matrices, with υ, ω ∈ ℕ
X  The input nonnegative data matrix X = [x_1, …, x_N] ∈ ℝ_+^(M×N)
W  The basis matrix W = [w_ik] ∈ ℝ_+^(M×K)
H  The coefficient matrix H = [h_kj] ∈ ℝ_+^(K×N)
M  The number of data features (the data dimension)
N  The number of data instances (the data points)
K  The number of data clusters
i  Index ranging over the set {1, 2, …, M}
j  Index ranging over the set {1, 2, …, N}
k  Index ranging over the set {1, 2, …, K}
||·||_F  The Frobenius norm
Tr(·)  The trace operator
L  The graph Laplacian operator of the data manifold
S  The weight matrix of the nearest-neighbor graph
D  The diagonal matrix with D_ii = Σ_j S_ij
p  The number of nearest neighbors
I  The identity matrix
α  The penalty parameter controlling orthogonality, α ≥ 0
λ  The regularization parameter, λ ≥ 0
ψ, ϕ  The Lagrange multipliers
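To make the graph-related symbols above concrete, the following NumPy sketch shows one standard way, assumed here rather than taken from the authors' code, to build a p-nearest-neighbor weight matrix S under a binary, heat-kernel, or dot-product weighting, together with the degree matrix D with D_ii = Σ_j S_ij and the graph Laplacian L = D - S. The bandwidth sigma is an illustrative parameter.

```python
import numpy as np

def build_graph(X, p=5, scheme="heat", sigma=1.0):
    """Sketch of a p-nearest-neighbor graph for X in R_+^(M x N), columns = samples.

    Returns the weight matrix S, the degree matrix D, and the Laplacian L = D - S.
    The weighting schemes are illustrative: 'binary' (0-1), 'heat' (Gaussian
    kernel on squared distances), and 'cosine' (dot product of normalized samples).
    """
    N = X.shape[1]
    # Pairwise squared Euclidean distances between columns of X.
    sq = np.sum(X**2, axis=0)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    np.fill_diagonal(dist2, np.inf)              # exclude self-loops
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)

    S = np.zeros((N, N))
    for j in range(N):
        nbrs = np.argsort(dist2[j])[:p]          # indices of the p nearest neighbors
        if scheme == "binary":
            S[j, nbrs] = 1.0
        elif scheme == "heat":
            S[j, nbrs] = np.exp(-dist2[j, nbrs] / (2.0 * sigma**2))
        elif scheme == "cosine":
            S[j, nbrs] = Xn[:, j] @ Xn[:, nbrs]
    S = np.maximum(S, S.T)                       # symmetrize the affinity matrix

    D = np.diag(S.sum(axis=1))                   # D_ii = sum_j S_ij
    L = D - S                                    # graph Laplacian
    return S, D, L
```

With X as in Table 1, a call such as S, D, L = build_graph(X, p=5, scheme="heat") would then supply the quantities entering the graph regularization term.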
Table 2. The clustering performance on AR.

K  K-Means  NMF  ONMF  GNMF  SNMF-H  SNMF-W  NMFOS  NMFOSv2  GNMFOS  GNMFOSv2

ACC
20  0.3994  0.3708  0.3321  0.4500  0.3018  0.3905  0.3946  0.3696  0.4655  0.5083
40  0.4143  0.4298  0.3875  0.4702  0.3131  0.4315  0.4048  0.4137  0.4982  0.5036
60  0.4071  0.4429  0.4232  0.4690  0.4250  0.4875  0.4393  0.4214  0.4827  0.4839
80  0.3720  0.4607  0.4095  0.4554  0.3637  0.4810  0.4655  0.4577  0.4905  0.4988
100  0.3690  0.4952  0.4208  0.4542  0.3976  0.4815  0.4863  0.4577  0.5036  0.5131
120  0.3952  0.4881  0.4357  0.4667  0.3917  0.4905  0.4976  0.4756  0.5083  0.4810
Avg.  0.3929  0.4479  0.4015  0.4609  0.3655  0.4604  0.4480  0.4326  0.4915  0.4981

MI
20  0.6927  0.6825  0.6692  0.7381  0.6392  0.6839  0.6885  0.6770  0.7668  0.7823
40  0.7100  0.7281  0.6970  0.7472  0.6405  0.7392  0.7150  0.7107  0.7741  0.7727
60  0.7113  0.7346  0.7214  0.7449  0.6689  0.7518  0.7269  0.7127  0.7651  0.7641
80  0.6934  0.7405  0.7224  0.7403  0.6756  0.7479  0.7383  0.7302  0.7588  0.7656
100  0.6908  0.7543  0.7143  0.7393  0.6919  0.7419  0.7539  0.7460  0.7705  0.7717
120  0.6948  0.7570  0.7199  0.7466  0.6823  0.7477  0.7399  0.7424  0.7730  0.7650
Avg.  0.6988  0.7328  0.7074  0.7428  0.6664  0.7354  0.7271  0.7198  0.7681  0.7702

NMI
20  0.7033  0.6909  0.6783  0.7542  0.6464  0.6938  0.6961  0.6857  0.7800  0.7948
40  0.7204  0.7377  0.7070  0.7603  0.6474  0.7497  0.7239  0.7204  0.7863  0.7837
60  0.7220  0.7464  0.7288  0.7591  0.6797  0.7644  0.7357  0.7247  0.7774  0.7781
80  0.7059  0.7514  0.7331  0.7549  0.6843  0.7612  0.7490  0.7431  0.7721  0.7776
100  0.7059  0.7527  0.7249  0.7560  0.7013  0.7537  0.7539  0.7479  0.7798  0.7824
120  0.7066  0.7681  0.7333  0.7613  0.6956  0.7616  0.7546  0.7546  0.7849  0.7776
Avg.  0.7107  0.7412  0.7176  0.7576  0.6758  0.7474  0.7355  0.7294  0.7801  0.7824

Purity
20  0.4321  0.4042  0.3571  0.4964  0.3339  0.4339  0.4155  0.3935  0.5202  0.5440
40  0.4423  0.4661  0.4137  0.4976  0.4315  0.4679  0.4625  0.4601  0.5333  0.5381
60  0.4464  0.4887  0.4500  0.4976  0.4048  0.5494  0.4851  0.4512  0.5167  0.5202
80  0.4095  0.5048  0.4411  0.4899  0.5173  0.5149  0.5024  0.4911  0.5256  0.5298
100  0.3958  0.4893  0.4554  0.4946  0.4702  0.5149  0.5119  0.5006  0.5369  0.5440
120  0.3958  0.5185  0.4613  0.5042  0.5321  0.5387  0.5298  0.5089  0.5423  0.5262
Avg.  0.4203  0.4786  0.4298  0.4967  0.4483  0.5033  0.4845  0.4676  0.5292  0.5337
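The four scores reported in Tables 2–6 (ACC, MI, NMI, and Purity) are standard external clustering measures. The sketch below shows how such scores are commonly computed from predicted and ground-truth labels; it is not the paper's evaluation code, ACC is assumed to use the usual Hungarian best-match relabeling, and MI is given here as raw mutual information, whose scaling may differ from the normalization used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import mutual_info_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one match between predicted clusters and true classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((clusters.size, classes.size))
    for i, c in enumerate(clusters):
        for j, t in enumerate(classes):
            cost[i, j] = -np.sum((y_pred == c) & (y_true == t))  # negative match count
    row, col = linear_sum_assignment(cost)        # Hungarian matching
    return -cost[row, col].sum() / y_true.size

def purity(y_true, y_pred):
    """Purity: each cluster is credited with its majority class (integer-coded labels)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total = 0
    for c in np.unique(y_pred):
        total += np.bincount(y_true[y_pred == c]).max()
    return total / y_true.size

def evaluate(y_true, y_pred):
    return {
        "ACC": clustering_accuracy(y_true, y_pred),
        # Raw mutual information; the paper's MI column may be scaled differently.
        "MI": mutual_info_score(y_true, y_pred),
        "NMI": normalized_mutual_info_score(y_true, y_pred),
        "Purity": purity(y_true, y_pred),
    }
```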
Table 3. The clustering performance on ORL.

Clusters K  K-Means  NMF  ONMF  GNMF  SNMF-H  SNMF-W  NMFOS  NMFOSv2  GNMFOS  GNMFOSv2

ACC
5  0.4725  0.3625  0.3200  0.4375  0.3825  0.3775  0.3550  0.3350  0.4325  0.4350
10  0.5225  0.4375  0.4300  0.4600  0.3350  0.4175  0.4175  0.3750  0.4500  0.4525
15  0.4725  0.4150  0.4075  0.4850  0.3575  0.4375  0.4150  0.4425  0.5525  0.5025
20  0.4975  0.4125  0.4350  0.5125  0.4275  0.4300  0.4350  0.4400  0.5525  0.5375
25  0.4600  0.4400  0.4775  0.4525  0.4275  0.4950  0.4425  0.4750  0.5425  0.5425
30  0.5200  0.4750  0.4725  0.4675  0.5050  0.4950  0.4825  0.4750  0.4725  0.5300
35  0.4825  0.4600  0.4100  0.4800  0.4425  0.4775  0.4375  0.4525  0.5100  0.5425
40  0.4700  0.4475  0.4750  0.4150  0.4250  0.4625  0.4825  0.4975  0.5750  0.5075
Avg.  0.4872  0.4313  0.4284  0.4638  0.4128  0.4491  0.4334  0.4366  0.5109  0.5063

MI
5  0.6887  0.6058  0.5470  0.6543  0.5339  0.6016  0.6243  0.5788  0.6456  0.6387
10  0.7025  0.6587  0.6535  0.6880  0.5688  0.6422  0.6279  0.6241  0.6580  0.6951
15  0.6883  0.6472  0.6305  0.6813  0.5894  0.6683  0.6289  0.6725  0.7421  0.7132
20  0.7068  0.6372  0.6538  0.6988  0.6185  0.6583  0.6395  0.6545  0.7349  0.7378
25  0.6832  0.6655  0.6917  0.6604  0.6312  0.6835  0.6433  0.6972  0.7444  0.7395
30  0.6975  0.6772  0.6498  0.6757  0.6202  0.6893  0.6890  0.6800  0.7143  0.7480
35  0.7001  0.6745  0.6486  0.6941  0.6456  0.6942  0.6706  0.6640  0.7367  0.7428
40  0.6860  0.6794  0.6747  0.6738  0.6356  0.6790  0.6898  0.6911  0.7493  0.7039
Avg.  0.6941  0.6557  0.6437  0.6783  0.6054  0.6645  0.6517  0.6578  0.7157  0.7149

NMI
5  0.7122  0.6124  0.5543  0.6698  0.5432  0.6109  0.6315  0.5909  0.6656  0.6733
10  0.7198  0.6300  0.6679  0.6853  0.5756  0.6532  0.6361  0.6339  0.6775  0.7082
15  0.7128  0.6618  0.6439  0.6980  0.5965  0.6853  0.6417  0.6620  0.7514  0.7269
20  0.7280  0.6485  0.6671  0.7137  0.6338  0.6709  0.6537  0.6687  0.7453  0.7468
25  0.7151  0.6828  0.7041  0.6797  0.6401  0.6958  0.6597  0.6902  0.7575  0.7519
30  0.7111  0.6909  0.6681  0.6924  0.6362  0.7058  0.7008  0.6960  0.7313  0.7643
35  0.7150  0.7165  0.6593  0.6884  0.6627  0.7095  0.6849  0.6963  0.7525  0.7627
40  0.6870  0.6976  0.6935  0.6894  0.6501  0.7034  0.7413  0.7288  0.7638  0.7260
Avg.  0.7126  0.6676  0.6573  0.6896  0.6173  0.6794  0.6687  0.6708  0.7306  0.7325

Purity
5  0.5200  0.3950  0.3950  0.4650  0.3325  0.4075  0.4000  0.3825  0.4675  0.4725
10  0.5475  0.4225  0.4775  0.5075  0.3700  0.4550  0.4550  0.4300  0.5100  0.5025
15  0.5250  0.4625  0.4450  0.5250  0.4950  0.4725  0.5075  0.4975  0.5925  0.5450
20  0.5500  0.4550  0.4850  0.5425  0.4650  0.4825  0.4775  0.4650  0.5900  0.5700
25  0.5300  0.4950  0.5125  0.4800  0.4625  0.5375  0.4775  0.5075  0.5975  0.5825
30  0.5550  0.5050  0.5250  0.4975  0.4450  0.5275  0.5150  0.5200  0.5375  0.5800
35  0.5250  0.5350  0.4475  0.5025  0.4850  0.5250  0.4925  0.5025  0.5475  0.5950
40  0.4800  0.5000  0.5025  0.4625  0.4850  0.5075  0.5325  0.5225  0.6100  0.5300
Avg.  0.5291  0.4713  0.4738  0.4978  0.4425  0.4894  0.4822  0.4784  0.5566  0.5472
Table 4. The clustering performance on Yale.

Clusters K  K-Means  NMF  ONMF  GNMF  SNMF-H  SNMF-W  NMFOS  NMFOSv2  GNMFOS  GNMFOSv2

ACC
3  0.3576  0.3212  0.3697  0.3212  0.3030  0.3576  0.3333  0.3394  0.3333  0.3152
6  0.2788  0.3576  0.3455  0.3273  0.4000  0.3212  0.3818  0.3576  0.4242  0.3818
9  0.3758  0.3091  0.4000  0.3394  0.3879  0.3091  0.4000  0.3697  0.3576  0.3758
12  0.3394  0.3455  0.3273  0.3030  0.4061  0.3273  0.4424  0.3273  0.3576  0.3455
15  0.3030  0.3212  0.3636  0.3576  0.4182  0.4121  0.3758  0.3273  0.4121  0.4061
Avg.  0.3309  0.3309  0.3612  0.3297  0.3830  0.3455  0.3867  0.3442  0.3770  0.3648

MI
3  0.4249  0.3945  0.4647  0.3573  0.3648  0.4067  0.4040  0.3879  0.3915  0.3945
6  0.3818  0.4185  0.4064  0.3917  0.4408  0.3754  0.4221  0.3803  0.4679  0.4441
9  0.4464  0.4072  0.4441  0.3790  0.4253  0.3921  0.4519  0.4017  0.4087  0.4297
12  0.3947  0.3816  0.4251  0.3456  0.4693  0.4101  0.4780  0.4082  0.4142  0.4223
15  0.3944  0.3736  0.4248  0.4124  0.4593  0.4476  0.4328  0.3995  0.5048  0.4717
Avg.  0.4085  0.3951  0.4330  0.3772  0.4319  0.4064  0.4378  0.3955  0.4374  0.4325

NMI
3  0.4362  0.4029  0.4740  0.3806  0.3754  0.4120  0.4148  0.3993  0.3955  0.4028
6  0.4060  0.4228  0.4177  0.4045  0.4455  0.3855  0.4273  0.3942  0.4798  0.4523
9  0.4624  0.4148  0.4511  0.3894  0.4341  0.4058  0.4539  0.4102  0.4209  0.4501
12  0.4179  0.3970  0.4370  0.3670  0.4765  0.4310  0.4837  0.4183  0.4231  0.4335
15  0.4106  0.3828  0.4365  0.4305  0.4677  0.4513  0.4457  0.4109  0.5141  0.4771
Avg.  0.4266  0.4040  0.4433  0.3944  0.4399  0.4171  0.4451  0.4066  0.4467  0.4432

Purity
3  0.3758  0.3333  0.4000  0.3333  0.3212  0.3636  0.3576  0.3515  0.3455  0.3273
6  0.3455  0.3818  0.3515  0.3394  0.4061  0.3394  0.3879  0.3697  0.4242  0.3818
9  0.3939  0.3394  0.4121  0.3394  0.3939  0.3333  0.4061  0.3758  0.3758  0.3939
12  0.3455  0.3697  0.3576  0.3091  0.4121  0.3697  0.4424  0.3576  0.3818  0.3939
15  0.3455  0.3333  0.3758  0.3636  0.4303  0.4364  0.4061  0.3697  0.4424  0.4303
Avg.  0.3612  0.3515  0.3794  0.3370  0.3927  0.3685  0.4000  0.3648  0.3939  0.3855
Table 5. The clustering performance on TDT2all.

Clusters K  NMF  GNMF  NMFOS  NMFOSv2  GNMFOS  GNMFOSv2

ACC
10  0.1806  0.3945  0.1703  0.1711  0.2919  0.2582
30  0.2401  0.3517  0.2391  0.2465  0.3851  0.3087
50  0.2771  0.3851  0.3168  0.2711  0.4227  0.4049
70  0.3038  0.3741  0.3290  0.3277  0.3840  0.3934
90  0.3437  0.3949  0.3234  0.3248  0.3829  0.4135
96  0.3131  0.3816  0.3223  0.3458  0.3835  0.4094
Avg.  0.2764  0.3803  0.2835  0.2811  0.3750  0.3647

MI
10  0.4433  0.5623  0.4273  0.4149  0.5217  0.5125
30  0.4823  0.5572  0.4816  0.4883  0.6068  0.5910
50  0.5119  0.5603  0.5125  0.4994  0.6211  0.6105
70  0.5197  0.5666  0.5210  0.5226  0.6001  0.6135
90  0.5395  0.5651  0.5323  0.5279  0.6081  0.6190
96  0.5246  0.5642  0.5377  0.5385  0.6105  0.6227
Avg.  0.5036  0.5626  0.5021  0.4986  0.5947  0.5949

NMI
10  0.5318  0.6570  0.5111  0.4950  0.6269  0.6140
30  0.5765  0.6526  0.5774  0.5825  0.7265  0.7133
50  0.6108  0.6578  0.6111  0.5954  0.7339  0.7333
70  0.6229  0.6649  0.6136  0.6235  0.7225  0.7356
90  0.6428  0.6647  0.6374  0.6260  0.7302  0.7411
96  0.6259  0.6659  0.6355  0.6442  0.7309  0.7456
Avg.  0.6018  0.6605  0.5977  0.5944  0.7118  0.7138

Purity
10  0.6817  0.8023  0.6645  0.6501  0.7714  0.7477
30  0.7380  0.7896  0.7389  0.7404  0.8866  0.8700
50  0.7659  0.8104  0.7789  0.7541  0.8929  0.8866
70  0.7770  0.8110  0.7725  0.7846  0.8838  0.8925
90  0.7916  0.8204  0.8065  0.7957  0.8846  0.9000
96  0.7818  0.8249  0.8050  0.8142  0.8897  0.9107
Avg.  0.7560  0.8098  0.7611  0.7565  0.8682  0.8679
Table 6. The clustering performance on TDT2.

Clusters K  NMF  GNMF  NMFOS  NMFOSv2  GNMFOS  GNMFOSv2

ACC
5  0.2482  0.6080  0.1947  0.2673  0.1947  0.3980
10  0.2912  0.6108  0.1947  0.3258  0.1947  0.4256
15  0.3719  0.5496  0.1947  0.3652  0.1947  0.4825
20  0.3562  0.6387  0.1947  0.3967  0.1947  0.5513
25  0.4044  0.5786  0.1947  0.4221  0.1947  0.5199
30  0.3587  0.6300  0.1947  0.4188  0.1947  0.6230
Avg.  0.3385  0.6026  0.1947  0.3660  0.1947  0.5001

MI
5  0.4330  0.6583  0.0019  0.4683  0.0019  0.6892
10  0.4884  0.6810  0.0019  0.5007  0.0019  0.7007
15  0.5204  0.6469  0.0019  0.5178  0.0019  0.6369
20  0.5373  0.6788  0.0019  0.5633  0.0019  0.6948
25  0.5737  0.6799  0.0019  0.5644  0.0019  0.6958
30  0.5550  0.6996  0.0019  0.5789  0.0019  0.7400
Avg.  0.5180  0.6741  0.0019  0.5322  0.0019  0.6929

NMI
5  0.4682  0.6878  0.0174  0.5143  0.0174  0.7309
10  0.5352  0.7077  0.0174  0.5511  0.0174  0.7648
15  0.5718  0.6739  0.0174  0.5663  0.0174  0.7029
20  0.5999  0.7251  0.0174  0.6121  0.0174  0.7597
25  0.6315  0.7025  0.0174  0.6173  0.0174  0.7384
30  0.6043  0.7217  0.0174  0.6297  0.0174  0.7956
Avg.  0.5685  0.7031  0.0174  0.5818  0.0174  0.7487

Purity
5  0.6137  0.7623  0.1977  0.6523  0.1977  0.7886
10  0.6652  0.7919  0.1977  0.6831  0.1977  0.8046
15  0.6622  0.7586  0.1977  0.7022  0.1977  0.8067
20  0.7261  0.8035  0.1977  0.7267  0.1977  0.8313
25  0.7330  0.7663  0.1977  0.7459  0.1977  0.8039
30  0.7193  0.7971  0.1977  0.7260  0.1977  0.8761
Avg.  0.6866  0.7799  0.1977  0.7060  0.1977  0.8185
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
