Unified Open-Set Recognition and Novel Class Discovery via Prototype-Guided Representation
Abstract
1. Introduction
- (i) OSR and NCD are both critical tasks in open-world machine learning, where the “open-world” assumption typically lies between completely closed and entirely open scenarios. Completely open scenarios assume the model may encounter an unbounded number of unknown classes, which could be entirely unrelated or unpredictable. Such scenarios are highly challenging and nearly unsolvable with existing machine-learning methods; they are also often impractical. For instance, a car classifier is unlikely to encounter an image of a bird.
- (ii) Test datasets for evaluating OSR and NCD methods typically include both known and unknown classes, with the known and unknown classes being mutually exclusive. Known classes are often referred to as “in-distribution classes,” “normal classes,” or “labeled classes.” Conversely, unknown classes may be referred to as “out-of-distribution classes,” “abnormal classes,” or “unlabeled classes.” Despite differences in terminology across tasks, from a representation-learning perspective, known classes generally represent the part of the dataset that follows the same independent and identically distributed (i.i.d.) assumptions as the training set.
- (iii) Known and unknown classes in OSR and NCD tasks usually belong to different classes within the same target domain. In other words, there exists a categorical relationship between known and unknown classes. For example, in the CUB-200-2011 dataset, both known and unknown classes fall under the domain of birds. In contrast, asking a bird classifier to distinguish classes within aircraft is clearly unreasonable and impractical. However, such scenarios might be meaningful in specific applications, such as out-of-distribution detection [8].
- Distance-based evaluation for OSR: We propose a robust distance-based score that mitigates the sensitivity of existing OSR methods to hyperparameter tuning, enhancing the detection of unknown samples. This metric is also utilized within our prototype-based classification head to improve feature representations.
- Prototype-based classification head: Based on the score in the previous stage, we design a prototype-based classification head to facilitate compact and discriminative feature representations for known classes. These compact representations enable a better clustering of unknown samples, enhancing NCD accuracy.
- Unified training pipeline: Our framework combines OSR and NCD into a systematic pipeline, automating the identification and categorization of unseen samples. This approach reduces manual intervention and hyperparameter dependency, making it more efficient.
- Extensive experimental validation: Our framework achieves notable performance gains on benchmark datasets, including a 4.85% AUROC improvement for OSR and a 3.19% boost in novel class accuracy for NCD on CUB-200-2011.
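The distance-based score behind the first contribution can be illustrated concisely: each test sample is scored by its (negated) distance to the nearest known-class prototype, so unknown samples, which lie far from every prototype, receive low scores. A minimal numpy sketch of this idea (the exact metric and normalization used by the method may differ; `prototype_distance_score` is an illustrative name):

```python
import numpy as np

def prototype_distance_score(features, prototypes):
    """Negated distance to the nearest known-class prototype.

    features:   (n_samples, d) embeddings
    prototypes: (n_classes, d) one prototype (e.g. the class mean) per known class
    Returns a score per sample; HIGHER means "more likely known", matching
    the usual AUROC convention for open-set scores.
    """
    # pairwise Euclidean distances, shape (n_samples, n_classes)
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return -dists.min(axis=1)
```

A sample sitting near a prototype thus scores close to zero, while a sample far from all prototypes scores strongly negative and can be rejected by thresholding.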
2. Related Works
2.1. OSR Methods Based on Discriminative Representation Scores
2.2. Novel Class Discovery
3. Methodology
3.1. Problem Statement
3.2. Unified Framework
3.3. Estimating the Number of Novel Classes
4. Experiments
4.1. Implementation Details
4.2. Datasets and Splits
4.3. Evaluation Metrics
4.4. Main Result
4.4.1. Accuracy on Known Classes
4.4.2. Open-Set Recognition Performance
Each cell reports FPR@95 ↓ / AUROC ↑ (%).

| Dataset (Known/Unknown) | Energy | Entropy | Variance | MSP | Max-Logits | Ours |
|---|---|---|---|---|---|---|
| CUB-200-2011 | 57.66/82.37 | 50.85/87.58 | 54.52/87.32 | 55.12/87.21 | 55.32/83.55 | 48.68/88.40 |
| (100/100) | 48.75/88.49 | 48.62/88.53 | 48.58/88.58 | 48.58/88.53 | 48.65/88.49 | - |
| FGVC-Aircraft | 96.94/42.96 | 86.99/59.54 | 82.22/62.43 | 82.88/62.80 | 96.43/45.71 | 80.75/81.84 |
| (50/50) | 81.44/81.74 | 82.31/81.98 | 81.89/81.95 | 81.68/81.91 | 80.93/81.77 | - |
| Herbarium19 | 94.88/51.82 | 71.31/80.75 | 70.52/81.51 | 69.74/81.44 | 89.45/63.89 | 66.75/82.99 |
| (341/342) | 68.82/82.74 | 66.12/83.26 | 66.11/83.39 | 65.87/83.35 | 67.77/82.84 | - |
| Plant Village | 97.41/26.88 | 21.39/93.08 | 21.33/92.96 | 21.33/92.95 | 96.51/28.84 | 5.45/98.81 |
| (T/A) | 4.55/98.89 | 5.51/98.79 | 5.51/98.79 | 5.51/98.80 | 4.61/98.89 | - |
| Plant Village | 72.82/50.41 | 12.34/96.07 | 11.49/96.31 | 11.49/96.31 | 72.01/52.76 | 0.90/99.68 |
| (T/C) | 0.81/99.72 | 0.94/99.68 | 0.94/99.68 | 0.90/99.68 | 0.86/99.72 | - |
| Plant Village | 99.38/16.37 | 8.98/97.55 | 9.60/97.49 | 9.60/97.48 | 99.34/16.98 | 28.13/95.53 |
| (T/G) | 27.88/95.47 | 27.80/95.68 | 27.97/95.63 | 28.01/95.60 | 27.84/95.47 | - |
| Plant Village | 99.69/25.21 | 53.89/70.64 | 53.51/70.53 | 53.51/70.53 | 99.61/25.37 | 47.34/79.06 |
| (T/P) | 48.42/79.47 | 47.26/79.02 | 47.26/79.02 | 47.26/79.04 | 48.42/79.47 | - |
| Plant Village | 97.97/26.22 | 25.83/90.82 | 25.30/91.00 | 25.29/91.00 | 97.00/28.30 | 7.21/98.08 |
| (T/H) | 7.20/98.06 | 7.13/98.10 | 7.17/98.09 | 7.17/98.09 | 7.20/98.06 | - |
| Plant Village | 93.03/41.72 | 28.11/90.84 | 27.56/91.00 | 27.54/91.00 | 91.41/43.80 | 4.35/99.13 |
| (T/D) | 3.19/99.30 | 4.37/99.12 | 4.37/99.12 | 4.37/99.12 | 3.19/99.30 | - |
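The baseline columns in the table above are standard logit-based open-set scores. As a reference, a compact sketch of how these five scores are commonly computed from a model's logits (signs chosen so that higher scores mean "more likely known", matching the AUROC convention; this is a generic sketch, not the exact evaluation code):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def osr_scores(logits, T=1.0):
    """Common logit-based open-set scores; logits has shape (n_samples, n_classes)."""
    p = softmax(logits / T)
    return {
        "MSP": p.max(axis=1),                                  # maximum softmax probability
        "Max-Logits": logits.max(axis=1),                      # raw maximum logit
        "Energy": T * np.log(np.exp(logits / T).sum(axis=1)),  # negative energy (logsumexp)
        "Entropy": (p * np.log(p + 1e-12)).sum(axis=1),        # negated predictive entropy
        "Variance": logits.var(axis=1),                        # variance of the logits
    }
```

A confidently classified known sample yields a peaked logit vector and therefore a high value under every one of these scores, while near-uniform logits (typical for unknowns) yield low values.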
4.4.3. Novel Class Discovery Performance
4.5. Ablation Study
4.6. Estimating the Number of Unknown Classes
5. Discussion
5.1. The Number of Clusters Is Unknown
5.2. The Impact of Openness
5.3. Visualization Analysis and Feature Distribution
5.4. Complexity, Limitations and Future Directions
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Estimation Methods for the Number of Novel Classes
- Compute the discrete second derivative of the SSE curve: $\mathrm{SSE}''(k) = \mathrm{SSE}(k+1) - 2\,\mathrm{SSE}(k) + \mathrm{SSE}(k-1)$.
- The optimal number of clusters is found by identifying the point where the second derivative reaches its maximum value: $k^{*} = \arg\max_k \mathrm{SSE}''(k)$.
- Define the line between the endpoints $(k_{\min}, \mathrm{SSE}(k_{\min}))$ and $(k_{\max}, \mathrm{SSE}(k_{\max}))$.
- For each point $(k, \mathrm{SSE}(k))$ on the SSE curve, calculate the perpendicular distance $D(k)$ to this line, where $D$ denotes the distance.
- The optimal k is the one with the maximum calculated distance $D(k)$.
- For each $k_i$ (where $i$ ranges from 2 to $n-1$), define three points: $P_{i-1} = (k_{i-1}, \mathrm{SSE}(k_{i-1}))$, $P_i = (k_i, \mathrm{SSE}(k_i))$, and $P_{i+1} = (k_{i+1}, \mathrm{SSE}(k_{i+1}))$.
- Compute the vectors $\vec{v}_1 = P_{i-1} - P_i$ and $\vec{v}_2 = P_{i+1} - P_i$ between these points.
- Calculate the cosine of the angle between these vectors: $\cos\theta_i = \dfrac{\vec{v}_1 \cdot \vec{v}_2}{\lVert\vec{v}_1\rVert\,\lVert\vec{v}_2\rVert}$.
- The optimal k corresponds to the point with the smallest angle $\theta_i$.
- For each k, compute the silhouette score for the clustering solution: $s = \dfrac{b - a}{\max(a, b)}$, where $a$ is the average intra-cluster distance and $b$ is the average nearest-cluster distance.
- The optimal k is the one that maximizes the silhouette score.
- For each k, compute the Calinski–Harabasz score: $\mathrm{CH} = \dfrac{\operatorname{tr}(B_k)/(k-1)}{\operatorname{tr}(W_k)/(N-k)}$, where $B_k$ is the between-group dispersion matrix, $W_k$ is the within-group dispersion matrix, and $N$ is the number of data points.
- The optimal k is the one that maximizes the Calinski–Harabasz score.
- For each k, calculate the Davies–Bouldin score: $\mathrm{DB} = \dfrac{1}{k}\sum_{i=1}^{k}\max_{j \neq i}\dfrac{s_i + s_j}{d_{ij}}$, where $s_i$ and $s_j$ are the intra-cluster distances, and $d_{ij}$ is the distance between the centroids of clusters $i$ and $j$.
- The optimal k is the one that minimizes the Davies–Bouldin score.
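The first two criteria above operate directly on the SSE-versus-k curve. A small numpy sketch of the second-derivative and maximum-distance rules under that reading (function names are illustrative; the silhouette, Calinski–Harabasz, and Davies–Bouldin criteria are available off the shelf in scikit-learn):

```python
import numpy as np

def elbow_second_derivative(ks, sse):
    """Pick k where the discrete second derivative
    SSE''(k) = SSE(k+1) - 2*SSE(k) + SSE(k-1) is maximal."""
    sse = np.asarray(sse, dtype=float)
    d2 = sse[2:] - 2 * sse[1:-1] + sse[:-2]   # aligned with ks[1:-1]
    return ks[1 + int(np.argmax(d2))]

def elbow_max_distance(ks, sse):
    """Pick the k whose point on the SSE curve is farthest from the
    chord joining the first and last points of the curve."""
    ks = np.asarray(ks, dtype=float)
    sse = np.asarray(sse, dtype=float)
    p1 = np.array([ks[0], sse[0]])
    p2 = np.array([ks[-1], sse[-1]])
    line = (p2 - p1) / np.linalg.norm(p2 - p1)  # unit chord direction
    pts = np.stack([ks, sse], axis=1) - p1
    # perpendicular distance = |2D cross product| with the unit chord
    d = np.abs(pts[:, 0] * line[1] - pts[:, 1] * line[0])
    return int(ks[int(np.argmax(d))])
```

On a typical elbow-shaped curve, both rules agree on the bend; they diverge mainly when the curve flattens gradually, which is one reason the appendix compares six criteria.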
Appendix B. Datasets Information
| Dataset | Known Training | Known Test | Known Classes | Unknown Training | Unknown Test | Unknown Classes |
|---|---|---|---|---|---|---|
| CUB-200-2011 | 2997 | 2884 | 100 | 2997 | 2910 | 100 |
| FGVC-Aircraft | 3332 | 1668 | 50 | 3335 | 1665 | 50 |
| Herbarium19 | 17,013 | 1335 | 341 | 17,212 | 1344 | 342 |
| Plant Village (T/A) | 10,785 | 7374 | 10 | 1889 | 1282 | 4 |
| Plant Village (T/C) | 10,785 | 7374 | 10 | 2333 | 1519 | 4 |
| Plant Village (T/G) | 10,785 | 7374 | 10 | 2428 | 1634 | 4 |
| Plant Village (T/P) | 10,785 | 7374 | 10 | 1297 | 855 | 3 |
| Plant Village (T/H) | 10,785 | 7374 | 10 | 6111 | 3999 | 7 |
| Plant Village (T/D) | 10,785 | 7374 | 10 | 7696 | 5101 | 6 |
References
- Sharifani, K.; Amini, M. Machine learning and deep learning: A review of methods and applications. World Inf. Technol. Eng. J. 2023, 10, 3897–3904. [Google Scholar]
- Opanasenko, V.; Fazilov, S.K.; Radjabov, S.; Kakharov, S.S. Multilevel Face Recognition System. Cybern. Syst. Anal. 2024, 60, 146–151. [Google Scholar] [CrossRef]
- Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
- Koley, S.; Bhunia, A.K.; Sain, A.; Chowdhury, P.N.; Xiang, T.; Song, Y.Z. You’ll Never Walk Alone: A Sketch and Text Duet for Fine-Grained Image Retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16509–16519. [Google Scholar]
- Kejriwal, M.; Kildebeck, E.; Steininger, R.; Shrivastava, A. Challenges, evaluation and opportunities for open-world learning. Nat. Mach. Intell. 2024, 6, 580–588. [Google Scholar] [CrossRef]
- Yang, J.; Wang, P.; Zou, D.; Zhou, Z.; Ding, K.; Peng, W.; Wang, H.; Chen, G.; Li, B.; Sun, Y.; et al. OpenOOD: Benchmarking Generalized Out-of-Distribution Detection. In Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, Virtual, 28 November–9 December 2022. [Google Scholar]
- Fini, E.; Sangineto, E.; Lathuilière, S.; Zhong, Z.; Nabi, M.; Ricci, E. A unified objective for novel class discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 9284–9292. [Google Scholar]
- Dong, J.; Yao, Y.; Jin, W.; Zhou, H.; Gao, Y.; Fang, Z. Enhancing Few-Shot Out-of-Distribution Detection with Pre-Trained Model Features. IEEE Trans. Image Process. 2024, 33, 6309–6323. [Google Scholar] [CrossRef]
- Hendrycks, D.; Gimpel, K. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. In Proceedings of the International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017. [Google Scholar]
- Liang, S.; Li, Y.; Srikant, R. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Liu, W.; Wang, X.; Owens, J.; Li, Y. Energy-based out-of-distribution detection. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, BC, Canada, 6–12 December 2020; pp. 21464–21475. [Google Scholar]
- Hendrycks, D.; Basart, S.; Mazeika, M.; Zou, A.; Kwon, J.; Mostajabi, M.; Steinhardt, J.; Song, D. Scaling Out-of-Distribution Detection for Real-World Settings. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 8759–8773. [Google Scholar]
- Miller, D.; Sunderhauf, N.; Milford, M.; Dayoub, F. Class anchor clustering: A loss for distance-based open set recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 3570–3578. [Google Scholar]
- Liu, Z.G.; Fu, Y.M.; Pan, Q.; Zhang, Z.W. Orientational distribution learning with hierarchical spatial attention for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 8757–8772. [Google Scholar] [CrossRef]
- Hsu, Y.C.; Lv, Z.; Kira, Z. Learning to cluster in order to transfer across domains and tasks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Han, K.; Vedaldi, A.; Zisserman, A. Learning to discover novel visual categories via deep transfer clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8401–8409. [Google Scholar]
- Han, K.; Rebuffi, S.A.; Ehrhardt, S.; Vedaldi, A.; Zisserman, A. Autonovel: Automatically discovering and learning novel visual categories. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6767–6781. [Google Scholar] [CrossRef] [PubMed]
- Zhong, Z.; Fini, E.; Roy, S.; Luo, Z.; Ricci, E.; Sebe, N. Neighborhood contrastive learning for novel class discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10867–10875. [Google Scholar]
- Han, K.; Rebuffi, S.; Ehrhardt, S.; Vedaldi, A.; Zisserman, A. Automatically discovering and learning new visual categories with ranking statistics. In Proceedings of the 8th International Conference on Learning Representations, ICLR 2020, Virtual, 26 April–1 May 2020. [Google Scholar]
- Jaiswal, A.; Babu, A.R.; Zadeh, M.Z.; Banerjee, D.; Makedon, F. A survey on contrastive self-supervised learning. Technologies 2020, 9, 2. [Google Scholar] [CrossRef]
- Jing, M.; Zhu, Y.; Zang, T.; Wang, K. Contrastive self-supervised learning in recommender systems: A survey. ACM Trans. Inf. Syst. 2023, 42, 1–39. [Google Scholar] [CrossRef]
- Asano, Y.; Rupprecht, C.; Vedaldi, A. Self-labelling via simultaneous clustering and representation learning. In Proceedings of the International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 26 April–1 May 2020. [Google Scholar]
- Gu, P.; Zhang, C.; Xu, R.; He, X. Class-relation Knowledge Distillation for Novel Class Discovery. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; IEEE Computer Society: Los Alamitos, CA, USA, 2023; pp. 16428–16437. [Google Scholar]
- An, W.; Tian, F.; Shi, W.; Chen, Y.; Wu, Y.; Wang, Q.; Chen, P. Transfer and alignment network for generalized category discovery. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 26–27 February 2024; pp. 10856–10864. [Google Scholar]
- Bharadwaj, R.; Naseer, M.; Khan, S.; Khan, F.S. Enhancing Novel Object Detection via Cooperative Foundational Models. arXiv 2023, arXiv:2311.12068. [Google Scholar] [CrossRef]
- Hayes, T.L.; de Souza, C.R.; Kim, N.; Kim, J.; Volpi, R.; Larlus, D. PANDAS: Prototype-based Novel Class Discovery and Detection. arXiv 2024, arXiv:2402.17420. [Google Scholar] [CrossRef]
- Liu, J.; Wang, Y.; Zhang, T.; Fan, Y.; Yang, Q.; Shao, J. Open-world semi-supervised novel class discovery. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023; pp. 4002–4010. [Google Scholar]
- Xiao, R.; Feng, L.; Tang, K.; Zhao, J.; Li, Y.; Chen, G.; Wang, H. Targeted representation alignment for open-world semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 23072–23082. [Google Scholar]
- Zhang, C.; Xu, R.; He, X. Novel Class Discovery for Long-tailed Recognition. arXiv 2023, arXiv:2308.02989. [Google Scholar] [CrossRef]
- Huang, H.; Gao, F.; Sun, J.; Wang, J.; Hussain, A.; Zhou, H. Novel category discovery without forgetting for automatic target recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2024, 17, 4408–4420. [Google Scholar] [CrossRef]
- Liu, Y.; Cai, Y.; Jia, Q.; Qiu, B.; Wang, W.; Pu, N. Novel class discovery for ultra-fine-grained visual categorization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 17679–17688. [Google Scholar]
- Liu, F.; Deng, Y. Determine the number of unknown targets in open world based on elbow method. IEEE Trans. Fuzzy Syst. 2020, 29, 986–995. [Google Scholar] [CrossRef]
- Ketchen, D.J.; Shook, C.L. The application of cluster analysis in strategic management research: An analysis and critique. Strateg. Manag. J. 1996, 17, 441–458. [Google Scholar] [CrossRef]
- Milligan, G.W.; Cooper, M.C. An examination of procedures for determining the number of clusters in a data set. Psychometrika 1985, 50, 159–179. [Google Scholar] [CrossRef]
- Thorndike, R.L. Who belongs in the family? Psychometrika 1953, 18, 267–276. [Google Scholar] [CrossRef]
- Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
- Caliński, T.; Harabasz, J. A dendrite method for cluster analysis. Commun. Stat. Theory Methods 1974, 3, 1–27. [Google Scholar] [CrossRef]
- Davies, D.L.; Bouldin, D.W. A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227. [Google Scholar] [CrossRef]
- Oquab, M.; Darcet, T.; Moutakanni, T.; Vo, H.V.; Szafraniec, M.; Khalidov, V.; Fernandez, P.; Haziza, D.; Massa, F.; El-Nouby, A.; et al. DINOv2: Learning Robust Visual Features without Supervision. arXiv 2023, arXiv:2304.07193. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, NV, USA, 3–6 December 2012. [Google Scholar]
- Du, X.; Wang, Z.; Cai, M.; Li, Y. Vos: Learning what you don’t know by virtual outlier synthesis. arXiv 2022, arXiv:2202.01197. [Google Scholar]
- Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar] [CrossRef]
- Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef]
- Yuan, Y.; He, X.; Jiang, Z. Adaptive open domain recognition by coarse-to-fine prototype-based network. Pattern Recognit. 2022, 128, 108657. [Google Scholar] [CrossRef]
- Scheirer, W.J.; de Rezende Rocha, A.; Sapkota, A.; Boult, T.E. Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1757–1772. [Google Scholar] [CrossRef] [PubMed]
- Zhao, B.; Wen, X.; Han, K. Learning semi-supervised gaussian mixture models for generalized category discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 16623–16633. [Google Scholar]
- Krause, J.; Jin, H.; Yang, J.; Fei-Fei, L. Fine-grained recognition without part annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5546–5555. [Google Scholar]
- Maji, S.; Rahtu, E.; Kannala, J.; Blaschko, M.; Vedaldi, A. Fine-grained visual classification of aircraft. arXiv 2013, arXiv:1306.5151. [Google Scholar] [CrossRef]
- Luo, Z.; Liu, Y.; Schiele, B.; Sun, Q. Class-incremental exemplar compression for class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 11371–11380. [Google Scholar]
- Tan, K.C.; Liu, Y.; Ambrose, B.; Tulig, M.; Belongie, S. The herbarium challenge 2019 dataset. arXiv 2019, arXiv:1906.05372. [Google Scholar] [CrossRef]
- Vaze, S.; Han, K.; Vedaldi, A.; Zisserman, A. Generalized category discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 7492–7501. [Google Scholar]
- Hughes, D.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060. [Google Scholar]
| Datasets | CUB-200-2011 | FGVC-Aircraft | Herbarium19 | Plant Village |
|---|---|---|---|---|
| Linear Classification Head | 94.95 | 89.05 | 85.18 | 99.95 |
| Prototype Classification Head | 95.37 | 88.85 | 86.43 | 99.95 |
| Δ | +0.42 | −0.20 | +1.25 | +0.00 |
| Dataset (Known/Unknown) | Method | Task-Aware Novel (%) | Task-Agnostic All (%) | Task-Agnostic Novel (%) | Task-Agnostic Labeled (%) |
|---|---|---|---|---|---|
| CUB-200-2011 (100/100) | K-means | 79.41 | 76.01 | 65.26 | 86.86 |
| | Cr-kd | 79.48 | 84.24 | 79.83 | 88.70 |
| | Ours | 83.38 | 85.86 | 83.02 | 88.71 |
| FGVC-Aircraft (50/50) | K-means | 70.01 | 68.11 | 57.60 | 78.60 |
| | Cr-kd | 69.72 | 77.14 | 70.30 | 83.96 |
| | Ours | 77.54 | 80.39 | 76.70 | 84.08 |
| Herbarium19 (341/342) | K-means | 33.69 | 52.03 | 31.77 | 72.43 |
| | Cr-kd | 34.71 | 58.47 | 39.03 | 78.05 |
| | Ours | 35.88 | 59.29 | 40.18 | 78.54 |
| Plant Village (T/A) | K-means | 74.01 | 93.43 | 57.49 | 99.67 |
| | Cr-kd | 63.98 | 94.00 | 61.47 | 99.65 |
| | Ours | 93.70 | 98.45 | 91.38 | 99.68 |
| Plant Village (T/C) | K-means | 69.70 | 95.61 | 74.92 | 99.88 |
| | Cr-kd | 69.01 | 94.38 | 67.54 | 99.92 |
| | Ours | 90.76 | 98.19 | 91.28 | 99.62 |
| Plant Village (T/G) | K-means | 72.08 | 94.04 | 69.46 | 99.68 |
| | Cr-kd | 89.11 | 97.84 | 88.98 | 99.87 |
| | Ours | 99.59 | 99.57 | 99.48 | 99.59 |
| Plant Village (T/P) | K-means | 94.76 | 92.45 | 65.26 | 95.64 |
| | Cr-kd | 86.93 | 98.22 | 85.85 | 99.67 |
| | Ours | 85.39 | 98.36 | 86.43 | 99.76 |
| Plant Village (T/H) | K-means | 42.99 | 79.85 | 43.64 | 99.88 |
| | Cr-kd | 56.93 | 83.87 | 54.89 | 99.90 |
| | Ours | 67.65 | 88.06 | 66.77 | 99.83 |
| Plant Village (T/D) | K-means | 65.47 | 79.03 | 49.21 | 99.85 |
| | Cr-kd | 67.02 | 86.40 | 67.07 | 99.90 |
| | Ours | 67.35 | 87.02 | 68.75 | 99.77 |
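Novel-class accuracies of this kind are conventionally computed by finding the optimal one-to-one assignment between predicted cluster ids and ground-truth class labels via the Hungarian method (Kuhn, 1955). A small sketch of that evaluation (brute-force matching over permutations for clarity; at realistic class counts `scipy.optimize.linear_sum_assignment` is the usual choice):

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one mapping of cluster ids to labels.

    Brute-force search over label permutations; assumes the number of
    clusters does not exceed the number of ground-truth classes.
    """
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    # confusion counts: rows = predicted clusters, cols = true classes
    C = np.zeros((len(clusters), len(classes)), dtype=int)
    for c_i, c in enumerate(clusters):
        for k_i, k in enumerate(classes):
            C[c_i, k_i] = np.sum((y_pred == c) & (y_true == k))
    # best total of correctly matched samples over all cluster->class mappings
    best = max(sum(C[i, p[i]] for i in range(len(clusters)))
               for p in permutations(range(len(classes))))
    return best / len(y_true)
```

Because cluster ids are arbitrary, a clustering that perfectly separates the novel classes scores 1.0 regardless of how the ids are numbered.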
| Prototype | Pre-Training | Accuracy (%) | Task-Aware Novel (%) | Task-Agnostic All (%) | Task-Agnostic Novel (%) | Task-Agnostic Labeled (%) |
|---|---|---|---|---|---|---|
| ✗ | ✗ | 94.95 | 78.87 | 79.69 | 73.45 | 85.91 |
| ✔ | ✗ | 94.95 | 79.48 | 84.24 | 79.83 | 88.70 |
| ✔ | ✔ | 95.37 | 83.38 | 85.86 | 83.02 | 88.71 |
Estimated value and absolute error (in parentheses); the true number of unknown classes is shown next to each dataset.

| Method | CUB-200-2011 (100) | FGVC-Aircraft (50) | Herbarium19 (342) |
|---|---|---|---|
| Second Derivative | 92 (8) | 44 (6) | 302 (40) |
| Maximum Distance | 96 (4) | 50 (0) | 322 (20) |
| Minimum Angle | 90 (30) | 50 (0) | 412 (70) |
| Silhouette Coefficient | 86 (14) | 42 (8) | 272 (70) |
| Calinski–Harabasz Index | 80 (20) | 40 (10) | 262 (80) |
| Davies–Bouldin Index | 80 (20) | 42 (8) | 262 (80) |

| Method | Plant Village (A)-4 | (C)-4 | (G)-4 | (P)-3 | (H)-7 | (D)-6 |
|---|---|---|---|---|---|---|
| Second Derivative | 4 (0) | 4 (0) | 4 (0) | 5 (2) | 4 (3) | 4 (2) |
| Maximum Distance | 5 (1) | 4 (0) | 4 (0) | 4 (1) | 6 (1) | 5 (1) |
| Minimum Angle | 8 (4) | 4 (0) | 9 (5) | 8 (5) | 8 (1) | 7 (1) |
| Silhouette Coefficient | 6 (2) | 4 (0) | 2 (2) | 3 (0) | 4 (3) | 8 (2) |
| Calinski–Harabasz Index | 3 (1) | 3 (1) | 2 (2) | 4 (1) | 3 (4) | 2 (4) |
| Davies–Bouldin Index | 6 (2) | 4 (0) | 3 (1) | 2 (1) | 3 (4) | 2 (4) |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Dong, J.; Wang, S.; Xue, J.; Zhang, S.; Li, Z.; Zhou, H. Unified Open-Set Recognition and Novel Class Discovery via Prototype-Guided Representation. Appl. Sci. 2025, 15, 11468. https://doi.org/10.3390/app152111468