Semi-Supervised Fuzzy Clustering Based on Prior Membership
Abstract
1. Introduction
- We propose a new semi-supervised fuzzy clustering algorithm named SFCM-PM, which introduces prior information entropy as an auxiliary partition criterion to quantify the divergence between partition membership and prior membership.
- The algorithm integrates sample similarity and prior consistency into a unified objective function, allowing effective utilization of limited prior information.
- Extensive experiments are conducted on standard and real-world datasets, demonstrating the superior performance of SFCM-PM over traditional and semi-supervised clustering algorithms, especially in imbalanced data scenarios.
2. Related Work
Fuzzy c-Means Clustering
- Proposal of prior information entropy as a direct metric for quantifying membership divergence.
- Development of a dual-criteria optimization framework that combines data similarity and prior consistency.
- Theoretical analysis of convergence properties with mathematical guarantees.
3. Methodology
3.1. Semi-Supervised Fuzzy Clustering Based on Prior Membership
3.1.1. Basic Idea of the Algorithm
- Handling Incomplete Information: The algorithm is fundamentally a semi-supervised approach. The penalty coefficient in the objective function (Equation (6)) acts as a switch: it is nonzero for samples with available prior membership and zero for unlabeled samples. Consequently, the prior information entropy term influences the clustering process only for the supervised subset. For the vast majority of unlabeled data, the algorithm reverts to a data-driven approach, relying solely on sample similarity. This design ensures that the algorithm remains effective even when only a small fraction of the data is labeled, which is the common case in real-world applications.
- Robustness to Inaccurate Information: The use of prior information entropy as a soft constraint, rather than a hard constraint, is key to the robustness of the algorithm. Unlike methods that strictly enforce prior labels (e.g., must-link/cannot-link constraints), our entropy term gently guides the membership degrees toward the prior. If a prior label is incorrect, the algorithm can still correct it through joint optimization of the objective function, which balances the prior-consistency term against the data-similarity term. The fuzziness coefficient m provides further flexibility, allowing memberships to adjust even for labeled samples. This prevents the algorithm from being overly sensitive to noisy or erroneous prior labels. A minimal sketch of this switched, entropy-penalized objective follows this list.
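To make the switching behavior concrete, the following minimal sketch evaluates a dual-criteria objective of this shape in Python. It is an illustration only: the exact entropy form of Equation (6) is not reproduced in this excerpt, so a cross-entropy-style divergence between the partition memberships U and the priors F is assumed, and the names (lam for the penalty coefficient, b for the per-sample switch) are ours.

```python
import numpy as np

def sfcm_pm_objective(X, V, U, F, b, m=2.0, lam=1.0, eps=1e-12):
    """Sketch of a dual-criteria objective in the spirit of Equation (6).

    X: (n, d) samples; V: (c, d) cluster centers; U: (n, c) partition
    memberships; F: (n, c) prior memberships; b: (n,) 0/1 switch that is 1
    only for samples with an available prior. The divergence below is an
    assumed cross-entropy-style stand-in for the paper's entropy term.
    """
    # FCM-style data-similarity term: sum_ij u_ij^m * ||x_j - v_i||^2
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    similarity = ((U ** m) * d2).sum()
    # Prior-consistency term, active only on the supervised subset (b == 1).
    divergence = -(F * np.log(U + eps)).sum(axis=1)
    prior = lam * (b * divergence).sum()
    return similarity + prior
```

For unlabeled samples (b = 0) the second term vanishes and the objective reduces to the plain FCM data term, which is exactly the degenerate behavior described above.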
3.1.2. Algorithm Description
- Direct Optimization: Unlike methods that use prior information to modify distance metrics (e.g., SFCM [13]) or impose hard constraints (e.g., constrained k-means [10]), our entropy term directly optimizes the membership values themselves. This allows for a more precise and targeted adjustment of the fuzzy partition.
- Soft and Probabilistic Guidance: The entropy term acts as a soft regularization. It does not force a sample to belong to a specific cluster (hard constraint) but gently encourages the membership distribution to align with the prior. This is particularly beneficial when prior information might be noisy or incomplete, as it prevents overfitting to potentially erroneous labels while still guiding the clustering process.
- Mathematical Tractability: The entropy function is smooth and differentiable with respect to the membership degrees. This property is crucial for deriving the update rules via gradient-based optimization, ensuring a stable and efficient iterative solution; a one-line symbolic check is sketched below.
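As a quick symbolic check of this smoothness claim, the snippet below differentiates an assumed cross-entropy-style term for a single membership degree. The exact entropy form of the paper is not reproduced here, so this only illustrates the kind of tractability being argued.

```python
import sympy as sp

u, f = sp.symbols('u f', positive=True)
H = -f * sp.log(u)      # assumed entropy-style prior term for one membership degree
print(sp.diff(H, u))    # -f/u: smooth on (0, 1], so gradient-based update
                        # rules (and Lagrangian stationarity) are well defined
```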
3.1.3. Objective Function Design
3.1.4. Optimization Solution Method
3.2. Algorithm Steps
- Cluster Center Update: The cluster centers are recalculated using the cluster center update formula (Equation (9)), which incorporates both data similarity and prior consistency. This step adjusts the cluster prototypes based on the weighted positions of the data points, where the weights are derived from the difference between the current partition memberships and the prior memberships.
- Membership Update: The membership degrees for each sample belonging to each cluster are updated using the membership update formula (Equation (11)). This update rule is influenced by three factors: the distance between the sample and the cluster center, the prior membership for the sample–cluster pair, and the current membership values themselves (implicitly, through the normalization term). This ensures that the new memberships reflect both data similarity and consistency with the provided prior information (controlled by the penalty coefficient). A runnable sketch of the full iteration follows Algorithm 1 below.
Algorithm 1 SFCM-PM algorithm parameters and process.

Input: Sample set X, prior membership F, cluster number c, allowable error ε, max iterations T, penalty coefficient.
Output: Cluster labels.
Process:
1. Initialize the membership matrix from the cluster number and sample set; set t = 1.
2. Repeat:
3.   Calculate the cluster centers (Equation (9)).
4.   Calculate the memberships (Equation (11)).
5.   If t ≥ T or the change in membership falls below ε:
6.     break
7.   Else:
8.     t = t + 1
9. End
10. Return the membership matrix and assign each sample the label of its highest-membership cluster.
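A runnable sketch of this loop is shown below. Because Equations (9) and (11) are not reproduced in this section, the updates here are standard FCM updates plus an assumed convex blend toward the prior memberships on labeled samples; the blending weight lam and all helper names are ours, not the paper's.

```python
import numpy as np

def sfcm_pm(X, F, b, c, m=2.0, lam=0.5, eps=1e-4, T=100, seed=0):
    """Hedged sketch of the Algorithm 1 loop (not the paper's exact updates).

    X: (n, d) samples; F: (n, c) prior memberships (one-hot rows for labeled
    samples, zero rows otherwise); b: (n,) 0/1 supervision switch.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)              # step 1: initialize memberships
    for t in range(T):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]     # step 3: weighted cluster centers
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U_new = d2 ** (-1.0 / (m - 1))             # step 4: FCM-style memberships
        U_new /= U_new.sum(axis=1, keepdims=True)
        # Assumed prior correction: pull labeled rows toward F with strength lam.
        U_new = np.where(b[:, None] == 1, (1 - lam) * U_new + lam * F, U_new)
        if np.abs(U_new - U).max() < eps:          # step 5: convergence check
            U = U_new
            break
        U = U_new                                   # step 8: t = t + 1
    return U.argmax(axis=1), U                      # step 10: hard labels + memberships
```

In practice F and b would come from the prior-membership generation procedure described in Section 4.3.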
3.3. Algorithm Complexity Analysis
4. Experimental Section
4.1. Datasets
4.2. Baselines and Evaluation Metric
- K-Means: The classic hard clustering algorithm iteratively updates the cluster centers by minimizing the squared error within the clusters. Each sample belongs only to a single cluster, and the algorithm does not have the capability to model prior information; it serves as a benchmark for unsupervised clustering [10].
- FCM: The foundational work in fuzzy clustering introduces membership degrees to describe the attribution probability of samples to clusters. Soft partitioning is achieved by minimizing the weighted squared error, but prior information is not utilized [11].
- SFCM: The semi-supervised fuzzy clustering algorithm converts pairwise constraints into membership degree constraints. Supervisory information is integrated by modifying the objective function, but differences in prior membership degrees are not explicitly optimized [13].
- SSFCM: An improved variant of SFCM that introduces entropy regularization and a metric learning mechanism. The fuzziness of the membership degrees is controlled by regularization terms to enhance robustness to noisy data, but the use of prior membership degrees remains insufficient [28].
- cGFCM: A general FCM clustering algorithm based on the Minkowski distance and contraction mapping; it updates the prototypes by constructing a contraction mapping [16].
- True Positive (TP): The number of sample pairs that are in the same cluster in both the ground truth and the clustering result.
- False Positive (FP): The number of sample pairs that are in the same cluster in the clustering result but in different clusters in the ground truth.
- True Negative (TN): The number of sample pairs that are in different clusters in both the ground truth and the clustering result.
- False Negative (FN): The number of sample pairs that are in different clusters in the clustering result but in the same cluster in the ground truth. Together, these four counts define the Rand Index (RI) reported in the experiments; a pair-counting sketch follows below.
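From these four counts, the Rand Index is RI = (TP + TN) / (TP + FP + TN + FN). The sketch below computes it by direct pair enumeration; it is an O(n²) illustration rather than an optimized implementation.

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Pair-counting Rand Index: RI = (TP + TN) / (TP + FP + TN + FN)."""
    tp = fp = tn = fn = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        if same_pred and same_true:
            tp += 1          # same cluster in both result and ground truth
        elif same_pred:
            fp += 1          # same cluster in result, different in ground truth
        elif not same_true:
            tn += 1          # different clusters in both
        else:
            fn += 1          # different in result, same in ground truth
    return (tp + tn) / (tp + fp + tn + fn)
```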
4.3. Implementation Details
Prior Membership Generation Strategy
- Random Sampling: For each dataset, a predefined proportion (5%, 10%, 15%, or 20%) of the total samples was randomly selected without replacement. This subset represents the “supervised” portion of the data.
- Label Assignment: The ground-truth class label for each selected sample was used to define its prior membership. Specifically, a one-hot encoding scheme was applied: for a sample known to belong to cluster i, its prior membership vector was set to 1 for the i-th component and 0 for all other components.
- Handling Unlabeled Data: For the remaining samples (the “unsupervised” portion), no prior membership information was assumed. This was represented mathematically by setting all components of their prior membership vectors to zero. A sketch of this generation procedure is given below.
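Under the assumptions above, the following sketch reproduces this generation procedure; the function name make_prior and the returned 0/1 switch b are our notation, not the paper's.

```python
import numpy as np

def make_prior(labels, c, ratio=0.1, seed=0):
    """Build prior memberships as described in Section 4.3.

    labels: (n,) integer ground-truth classes in {0, ..., c-1}.
    Returns F (n, c): one-hot rows for a random `ratio` of samples, zero rows
    elsewhere, plus the 0/1 supervision switch b.
    """
    rng = np.random.default_rng(seed)
    n = len(labels)
    idx = rng.choice(n, size=int(ratio * n), replace=False)  # without replacement
    F = np.zeros((n, c))
    F[idx, labels[idx]] = 1.0       # one-hot prior for the supervised subset
    b = np.zeros(n, dtype=int)
    b[idx] = 1
    return F, b
```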
4.4. Experimental Result Analysis
4.4.1. Overall Performance Comparison
4.4.2. Friedman Test
5. Discussion
- As shown in the tables, when the prior sample ratio is 0%, the SFCM-PM algorithm degenerates into an unsupervised fuzzy clustering method, with performance that is basically consistent with the FCM algorithm and shows no obvious advantages. After introducing prior information (prior membership sample ratios of 5% and above), the SFCM-PM algorithm significantly outperforms the other algorithms. In the Segment, Ecoli, Air, and Parkinsons datasets, when the prior sample ratio reaches 20%, the RI values are 0.894, 0.808, 0.719, and 0.547, respectively, significantly higher than other algorithms. This indicates that the auxiliary partitioning criterion based on prior information entropy proposed in the SFCM-PM algorithm can more effectively guide partition memberships to approach prior memberships, thereby improving clustering accuracy. On the Digits-389 dataset, the SFCM-PM algorithm performs relatively poorly, mainly due to the difficulty in distinguishing between classes caused by the visual similarity of digits. This prevents the algorithm from achieving ideal performance on this dataset, although it still outperforms traditional methods. After introducing prior membership samples, the SFCM-PM algorithm’s RI values on most datasets are superior to those of the other comparison algorithms, demonstrating its excellent clustering performance. As unsupervised methods, k-means and FCM exhibit stable but relatively low clustering accuracy throughout the experiment, with no significant changes due to the introduction of prior information. In contrast, semi-supervised fuzzy clustering methods such as SFCM, SSFCM, cGFCM, and SFCM-PM show varying degrees of performance improvement as the prior sample ratio increases, indicating that these algorithms can effectively model prior information and integrate it into the clustering process.
- As illustrated in Figure 2, a systematic evaluation of the impact of varying proportions of supervised samples on the performance of the six clustering algorithms was conducted using the Friedman two-way analysis of variance by ranks. Note that a lower average rank indicates superior algorithmic performance. The results demonstrate that SFCM-PM consistently exhibits significant and stable performance advantages across all levels of supervision, with its average rank remaining relatively low. This indicates that the algorithm effectively exploits limited supervisory information to optimize clustering outcomes. When the proportion of prior samples is 0%, the average rank of SFCM-PM is 3.06, showing no marked advantage, which reflects the absence of prior information at this stage. As the proportion of prior samples increases, the performance gap between SFCM-PM and the other algorithms widens considerably, signifying its ability to efficiently integrate prior knowledge to enhance clustering performance. Specifically, at a 5% prior sample proportion, the average rank of SFCM-PM drops to 1.06, significantly outperforming the other algorithms; evidently, a small amount of supervision is sufficient to boost performance markedly. In contrast, the k-means and FCM algorithms, which cannot incorporate prior knowledge, maintain stable average ranks of 5.75 and 4.31, respectively, consistently ranking lower. Other algorithms such as SFCM, SSFCM, and cGFCM show minor fluctuations in average rank and exhibit moderate improvements as labeled data increase. Nevertheless, the average rank of SFCM-PM remains stable between 1.13 and 1.31, consistently occupying the top position, thereby validating its superiority and efficiency under limited prior supervision. A minimal sketch of this rank analysis follows below.
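For reference, the following sketch shows how such a Friedman analysis and the average ranks can be computed with scipy. The RI rows are copied from the 5% prior-ratio table above for three of the eight datasets, purely as a hypothetical illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Rows = datasets, columns = algorithms
# (k-Means, FCM, SFCM, cGFCM, SSFCM, SFCM-PM); RI values from the 5% table.
ri = np.array([
    [0.822, 0.886, 0.886, 0.885, 0.886, 0.887],  # Segment
    [0.721, 0.832, 0.832, 0.832, 0.832, 0.836],  # Iris
    [0.707, 0.666, 0.808, 0.818, 0.800, 0.824],  # Digits
])
stat, p = friedmanchisquare(*ri.T)               # Friedman test over algorithms
avg_rank = rankdata(-ri, axis=1).mean(axis=0)    # lower average rank = better RI
print(f"chi2 = {stat:.3f}, p = {p:.4f}")
print("average ranks:", avg_rank)
```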
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Abdelaziz, F.B.; Limam, O. Multiobjective fuzzy clustering with coalition formation: The case of brain image processing. INFOR Inf. Syst. Oper. Res. 2016, 55, 52–69.
- Anand, R.; Veni, S.; Aravinth, J. An application of image processing techniques for detection of diseases on brinjal leaves using k-means clustering method. In Proceedings of the 2016 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, 8–9 April 2016; pp. 1–6.
- Elmoataz, A.; Desquesnes, X.; Toutain, M. On the game p-Laplacian on weighted graphs with applications in image processing and data clustering. Eur. J. Appl. Math. 2017, 28, 1–27.
- Xu, Q.; Tang, D.; Cai, Q. Improved Fast Fuzzy C-Means Clustering Algorithm for Image Segmentation. J. Nanjing Univ. Sci. Technol. 2016, 40, 309–314.
- Sun, Y.; Jiang, Z.; Shan, G.; Liu, H.; Rao, Y. Key Frame Extraction Based on Optimal Distance Clustering and Feature Fusion Expression. J. Nanjing Univ. Sci. Technol. 2018, 42, 416–423.
- Qin, P.; Chen, W.; Zhang, M.; Li, D.; Feng, G. CC-GNN: A Clustering Contrastive Learning Network for Graph Semi-Supervised Learning. IEEE Access 2024, 12, 71956–71969.
- Frémal, S.; Lecron, F. Weighting Strategies for a Recommender System Using Item Clustering Based on Genres. Expert Syst. Appl. 2017, 77, 105–113.
- Forsati, R.; Doustdar, H.M.; Shamsfard, M.; Keikha, A.; Meybodi, M.R. A Fuzzy Co-Clustering Approach for Hybrid Recommender Systems. Int. J. Hybrid Intell. Syst. 2013, 10, 71–81.
- Selvi, C.; Sivasankar, E. A Novel Adaptive Genetic Neural Network (AGNN) Model for Recommender Systems Using Modified K-Means Clustering Approach. Multimed. Tools Appl. 2018, 78, 14303–14330.
- Anderberg, M.R. Cluster Analysis for Applications; Academic Press: New York, NY, USA, 1973; pp. 347–353.
- Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms. Adv. Appl. Pattern Recognit. 1981, 22, 203–239.
- Wagstaff, K.; Cardie, C.; Rogers, S.; Schrödl, S. Constrained K-means Clustering with Background Knowledge. In Proceedings of the Eighteenth International Conference on Machine Learning, Williamstown, MA, USA, 28 June–1 July 2001.
- Yasunori, E.; Yukihiro, H.; Makito, Y.; Sadaaki, M. On semi-supervised fuzzy c-means clustering. In Proceedings of the IEEE International Conference on Fuzzy Systems, Jeju, Republic of Korea, 20–24 August 2009; pp. 1119–1124.
- Yin, X.; Shu, T.; Huang, Q. Semi-supervised fuzzy clustering with metric learning and entropy regularization. Knowl.-Based Syst. 2012, 35, 304–311.
- Zhao, Q.; Zhan, S.; Cheng, R.; Zhu, J.; Zeng, H. A Benchmark for Vehicle Re-Identification in Mixed Visible and Infrared Domains. IEEE Signal Process. Lett. 2024, 31, 726–730.
- Zhao, K.; Dai, Y.; Jia, Z.; Ji, Y. General Fuzzy C-Means Clustering Strategy: Using Objective Function to Control Fuzziness of Clustering Results. IEEE Trans. Fuzzy Syst. 2022, 30, 3601–3616.
- Wang, Z.; Wang, S.-S.; Bai, L.; Wang, W.-S.; Shao, Y.-H. Semisupervised Fuzzy Clustering With Fuzzy Pairwise Constraints. IEEE Trans. Fuzzy Syst. 2022, 30, 3797–3811.
- Yao, L.; Yan, H.; Wei, Z. Robust Fuzzy Clustering Algorithm Based on Cross Entropy. Appl. Res. Comput. 2019, 36, 2948–2951.
- Li, L.; Wang, F. Research on Semi-Supervised K-Medoids Algorithm Based on Hierarchical Strategy. Appl. Res. Comput. 2021, 38, 1387–1392.
- Li, Q.; Yang, X.; Xie, X.; Liu, G. The Data Recovery Strategy on Machine Learning Against False Data Injection Attacks in Power Cyber Physical Systems. Meas. Control 2025, 58, 632–642.
- Tan, C.; Wu, H.; Tang, K.; Tan, C. An Extendable Gaussian Mixture Model for Lane-Based Queue Length Estimation Based on License Plate Recognition Data. J. Adv. Transp. 2022, 2022, 5119209.
- Gao, J.; Tian, Y.-B.; Chen, X.-Z. Modeling of antenna resonant frequency based on co-training of semi-supervised Gaussian process with different kernel functions. Int. J. RF Microw. Comput.-Aided Eng. 2021, 31, e22627.
- Chen, J.; Xu, J.; Gong, Y.; Xu, L. Ship Hull Principal Dimensions Optimization Employing Fuzzy Decision-Making Theory. Math. Probl. Eng. 2016, 2016, 5262160.
- Ren, Y.; Wang, C.; Li, B.; Yu, C.; Zhang, S. A Genetic Algorithm for Fuzzy Random and Low-Carbon Integrated Forward/Reverse Logistics Network Design. Neural Comput. Appl. 2020, 32, 2005–2025.
- Tang, H.; Wu, J.; Wu, F.; Chen, L.; Liu, Z.; Yan, H. An Optimization Framework for Collaborative Control of Power Loss and Voltage in Distribution Systems With DGs and EVs Using Stochastic Fuzzy Chance Constrained Programming. IEEE Access 2020, 8, 49013–49027.
- Feng, Y.; Liu, P.; Du, Y.; Jiang, Z. Cross Working Condition Bearing Fault Diagnosis Based on the Combination of Multimodal Network and Entropy Conditional Domain Adversarial Network. J. Vib. Control 2024, 30.
- Zhang, L.; Yang, T.; Xiao, G.; Xu, S. MATLAB Data Analysis and Data Mining; China Machine Press: Beijing, China, 2015.
- Bai, F.J.; Gao, J.L.; Song, W.H.; He, S.Y. Research and improvement of semi-supervised fuzzy clustering algorithm. Commun. Technol. 2018, 51, 5.
- Baghshah, M.S.; Shouraki, S.B. Kernel-Based Metric Learning for Semi-Supervised Clustering. Neurocomputing 2010, 73, 1352–1361.
- Grira, N. Active Semi-Supervised Fuzzy Clustering. Pattern Recognit. 2008, 41, 1834–1844.
- Huang, X.; Yang, X.; Zhao, J.; Xiong, L.; Ye, Y. A New Weighting k-Means Type Clustering Framework with an l2-Norm Regularization. Knowl.-Based Syst. 2018, 151, 165–179.
Dataset | Number of Samples | Number of Attributes | Number of Clusters |
---|---|---|---|
Segment | 2310 | 19 | 7 |
Digits-389 | 3165 | 16 | 3 |
Parkinsons | 195 | 22 | 2 |
Ecoli | 336 | 7 | 8 |
Iris | 150 | 4 | 3 |
Banknote Authentication | 1372 | 4 | 2 |
Water | 173 | 9 | 3 |
Air | 284 | 6 | 3 |
Structure | Configuration |
---|---|
CPU | Intel(R) Core(TM) i7-6700K @ 4.00 GHz |
Memory | 16.0 GB |
Operating System | 64-bit Windows 10 |
Software Platform | PyCharm 2022.2.2 |
RI comparison when the prior sample ratio is 0% (a trailing +, −, or = marks a baseline as better than, worse than, or equal to SFCM-PM on that dataset; the final row tallies these counts):

Dataset | k-Means | FCM | SFCM | cGFCM | SSFCM | SFCM-PM |
---|---|---|---|---|---|---|
Segment | 0.822− | 0.886= | 0.886= | 0.886= | 0.886= | 0.886 |
Iris | 0.721− | 0.832= | 0.832= | 0.832= | 0.832= | 0.832 |
Digits | 0.707+ | 0.666= | 0.667= | 0.668= | 0.667= | 0.667 |
Water | 0.587− | 0.646− | 0.675− | 0.675− | 0.675− | 0.675 |
Air | 0.705= | 0.706= | 0.706= | 0.708= | 0.706= | 0.706 |
Banknote Authentication | 0.532− | 0.551= | 0.551= | 0.551= | 0.551= | 0.551 |
Parkinsons | 0.518− | 0.522= | 0.522= | 0.5− | 0.522= | 0.522 |
Ecoli | 0.79= | 0.794= | 0.794= | 0.797= | 0.794= | 0.794 |
+ / − / = | 1/5/2 | 0/1/7 | 0/0/8 | 0/1/7 | 0/0/8 |
RI comparison when the prior sample ratio is 5%:

Dataset | k-Means | FCM | SFCM | cGFCM | SSFCM | SFCM-PM |
---|---|---|---|---|---|---|
Segment | 0.822− | 0.886= | 0.886= | 0.885= | 0.886= | 0.887 |
Iris | 0.721− | 0.832= | 0.832= | 0.832= | 0.832= | 0.836 |
Digits | 0.707− | 0.666− | 0.808− | 0.818= | 0.8− | 0.824 |
Water | 0.587− | 0.645− | 0.696− | 0.708− | 0.697− | 0.725 |
Air | 0.705− | 0.706= | 0.706= | 0.709= | 0.706= | 0.711 |
Banknote Authentication | 0.532− | 0.551= | 0.551= | 0.537− | 0.551= | 0.559 |
Parkinsons | 0.518− | 0.522− | 0.522− | 0.5− | 0.522− | 0.541 |
Ecoli | 0.79= | 0.794= | 0.794= | 0.797= | 0.794= | 0.797 |
+ / − / = | 0/7/1 | 0/3/5 | 0/3/5 | 0/3/5 | 0/3/5 |
RI comparison when the prior sample ratio is 10%:

Dataset | k-Means | FCM | SFCM | cGFCM | SSFCM | SFCM-PM |
---|---|---|---|---|---|---|
Segment | 0.822− | 0.886= | 0.886= | 0.882= | 0.886= | 0.887 |
Iris | 0.721− | 0.832= | 0.832= | 0.832= | 0.832= | 0.835 |
Digits | 0.707− | 0.666− | 0.839− | 0.855= | 0.831− | 0.852 |
Water | 0.587− | 0.645− | 0.716− | 0.721− | 0.719− | 0.751 |
Air | 0.705− | 0.706− | 0.706− | 0.71= | 0.706− | 0.716 |
Banknote Authentication | 0.532− | 0.551− | 0.551− | 0.537− | 0.551− | 0.565 |
Parkinsons | 0.518− | 0.522− | 0.523− | 0.5− | 0.524− | 0.541 |
Ecoli | 0.79= | 0.794= | 0.794= | 0.797= | 0.795= | 0.8 |
+ / − / = | 0/7/1 | 0/5/3 | 0/5/3 | 0/3/5 | 0/5/3 |
RI comparison when the prior sample ratio is 15%:

Dataset | k-Means | FCM | SFCM | cGFCM | SSFCM | SFCM-PM |
---|---|---|---|---|---|---|
Segment | 0.822− | 0.886= | 0.892= | 0.886= | 0.886= | 0.892 |
Iris | 0.721− | 0.832= | 0.832= | 0.832= | 0.832= | 0.835 |
Digits | 0.707− | 0.666− | 0.85= | 0.864+ | 0.845− | 0.856 |
Water | 0.587− | 0.645− | 0.724− | 0.737− | 0.727− | 0.758 |
Air | 0.705− | 0.706− | 0.706− | 0.709− | 0.706− | 0.717 |
Banknote Authentication | 0.532− | 0.551− | 0.551− | 0.537− | 0.551− | 0.568 |
Parkinsons | 0.518− | 0.522− | 0.526− | 0.501− | 0.525− | 0.542 |
Ecoli | 0.79− | 0.793− | 0.795− | 0.797− | 0.795− | 0.805 |
+ / − / = | 0/8/0 | 0/6/2 | 0/5/3 | 1/5/2 | 0/6/2 |
RI comparison when the prior sample ratio is 20%:

Dataset | k-Means | FCM | SFCM | cGFCM | SSFCM | SFCM-PM |
---|---|---|---|---|---|---|
Segment | 0.822− | 0.886− | 0.894= | 0.886− | 0.886− | 0.894 |
Iris | 0.721− | 0.832= | 0.832= | 0.832= | 0.832= | 0.833 |
Digits | 0.707− | 0.666− | 0.867= | 0.879+ | 0.858= | 0.863 |
Water | 0.587− | 0.645− | 0.745− | 0.751− | 0.746− | 0.77 |
Air | 0.705− | 0.706− | 0.706− | 0.709− | 0.706− | 0.719 |
Banknote Authentication | 0.532− | 0.551− | 0.551− | 0.537− | 0.551− | 0.569 |
Parkinsons | 0.518− | 0.522− | 0.529− | 0.507− | 0.529− | 0.547 |
Ecoli | 0.79− | 0.793− | 0.794− | 0.797− | 0.794− | 0.808 |
+ / − / = | 0/8/0 | 0/7/1 | 0/5/3 | 1/6/1 | 0/6/2 |