AnomalyNLP: Noisy-Label Prompt Learning for Few-Shot Industrial Anomaly Detection
Abstract
1. Introduction
- We propose a Noisy-Label Prompt-Learning framework grounded in feature learning theory. By decoupling latent representations into task-relevant and task-irrelevant components, our method effectively suppresses noise propagation while enhancing robustness in vision-language prompt optimization.
- We introduce a prompt-driven optimal transport purification mechanism that leverages the expressive representation power and semantic alignment capabilities of vision-language foundation models, enabling robust prompt learning against label corruption.
- We perform comprehensive experiments on MVTecAD and VisA, two publicly available industrial anomaly detection datasets, and the results show that AnomalyNLP attains state-of-the-art performance in few-shot anomaly detection.
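The prompt-driven optimal transport purification of Section 4.2 is not reproduced in this outline. As a rough sketch of the underlying machinery only, entropy-regularized optimal transport (Sinkhorn iterations, as in Cuturi's reference above) can turn a feature-to-prompt cost matrix into soft label assignments; the cost construction, marginals, and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport plan via Sinkhorn iterations.

    `cost` is an (n, m) matrix, e.g. 1 - cosine similarity between n image
    features and m class-prompt embeddings (an assumption for illustration).
    Rows of the returned plan act as soft label assignments that a
    purification step could use to down-weight corrupted labels.
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)   # uniform marginal over samples
    b = np.full(m, 1.0 / m)   # uniform marginal over prompt classes
    K = np.exp(-cost / reg)   # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):  # alternating projections onto the marginals
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy usage: 4 samples vs. 2 prompts; low cost means high prompt affinity
cost = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5], [0.3, 0.7]])
plan = sinkhorn_plan(cost)
soft_labels = plan / plan.sum(axis=1, keepdims=True)  # per-sample label posterior
```

Row-normalizing the plan yields one purified label distribution per sample, which is the general shape of OT-based label cleaning.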
2. Related Work
2.1. Vision-Language Foundation Models
2.2. Industrial Anomaly Detection
2.3. Few-Shot Industrial Anomaly Detection
3. Preliminaries
4. Methodology
4.1. Vision-Language Feature Representation
4.2. Prompt-Driven Optimal Transport Purification
4.3. Noisy-Label Prompt Learning
- : Projection coefficient onto the initialization subspace, capturing the optimization inertia or “memory” of the starting point. Its decay rate is an indicator of optimization efficiency.
- : Alignment strength with the true semantic feature . The growth of this coefficient is the primary signal of meaningful learning progress and generalization.
- : Susceptibility to nuisance feature . The premature or excessive growth of these coefficients is a telltale sign of overfitting and noise memorization, a common failure mode in late-stage training on noisy datasets.
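The coefficient symbols in the list above were lost in extraction, so as an illustrative sketch only (not the paper's formalism), the three quantities can be monitored during training by projecting the learned prompt vector onto the corresponding feature directions; the orthonormal basis and all variable names here are assumptions.

```python
import numpy as np

def prompt_coefficients(prompt, init_dir, semantic_dir, nuisance_dirs):
    """Project a learned prompt vector onto (assumed orthonormal) directions
    to obtain the three coefficient groups described above."""
    memory = float(prompt @ init_dir)          # initialization-subspace coefficient
    alignment = float(prompt @ semantic_dir)   # true-semantic-feature coefficient
    noise = np.array([prompt @ d for d in nuisance_dirs])  # nuisance coefficients
    return memory, alignment, noise

# Toy 4-d example using the standard basis as hypothetical feature directions
e = np.eye(4)
prompt = 0.2 * e[0] + 0.9 * e[1] + 0.1 * e[2]  # hypothetical learned prompt
memory, alignment, noise = prompt_coefficients(prompt, e[0], e[1], [e[2], e[3]])
# Healthy training: `alignment` grows while `memory` decays and `noise` stays small.
```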
4.4. Experiment Setup
4.4.1. Datasets
4.4.2. Evaluation Metrics
4.4.3. Implementation Details
5. Results
5.1. Image-Level Comparison Results
5.2. Pixel-Level Comparison Results
5.3. Compared with Many-Shot Methods
5.4. Ablation Study
5.5. Visualization Results
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Algorithm A1: AnomalyNLP: Optimal Transport-Based Noisy-Label Prompt Learning (Detailed Version)
References
- Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; Gehler, P. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 14318–14328. [Google Scholar]
- Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. (CSUR) 2009, 41, 1–58. [Google Scholar] [CrossRef]
- Ahmed, M.; Mahmood, A.N.; Hu, J. A survey of network anomaly detection techniques. J. Netw. Comput. Appl. 2016, 60, 19–31. [Google Scholar] [CrossRef]
- Wang, Z.; Zhou, Y.; Wang, R.; Lin, T.Y.; Shah, A.; Lim, S.N. Few-shot fast-adaptive anomaly detection. Adv. Neural Inf. Process. Syst. 2022, 35, 4957–4970. [Google Scholar]
- Zhao, Z.; Liu, Y.; Wu, H.; Wang, M.; Li, Y.; Wang, S.; Teng, L.; Liu, D.; Cui, Z.; Wang, Q.; et al. Clip in medical imaging: A comprehensive survey. arXiv 2023, arXiv:2312.07353. [Google Scholar] [CrossRef]
- Zhang, R.; Guo, Z.; Zhang, W.; Li, K.; Miao, X.; Cui, B.; Qiao, Y.; Gao, P.; Li, H. Pointclip: Point cloud understanding by clip. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8552–8562. [Google Scholar]
- Hafner, M.; Katsantoni, M.; Köster, T.; Marks, J.; Mukherjee, J.; Staiger, D.; Ule, J.; Zavolan, M. CLIP and complementary methods. Nat. Rev. Methods Prim. 2021, 1, 20. [Google Scholar] [CrossRef]
- Pan, B.; Li, Q.; Tang, X.; Huang, W.; Fang, Z.; Liu, F.; Wang, J.; Yu, J.; Shi, Y. NLPrompt: Noise-Label Prompt Learning for Vision-Language Models. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 11–15 June 2025; pp. 19963–19973. [Google Scholar]
- Jeong, J.; Zou, Y.; Kim, T.; Zhang, D.; Ravichandran, A.; Dabeer, O. Winclip: Zero-/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 19606–19616. [Google Scholar]
- Cao, Y.; Xu, X.; Sun, C.; Cheng, Y.; Du, Z.; Gao, L.; Shen, W. Segment any anomaly without training via hybrid prompt regularization. arXiv 2023, arXiv:2305.10724. [Google Scholar] [CrossRef]
- Deng, H.; Zhang, Z.; Bao, J.; Li, X. Bootstrap fine-grained vision-language alignment for unified zero-shot anomaly localization. arXiv 2023, arXiv:2308.15939. [Google Scholar]
- Gu, Z.; Zhu, B.; Zhu, G.; Chen, Y.; Tang, M.; Wang, J. Anomalygpt: Detecting industrial anomalies using large vision-language models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 1932–1940. [Google Scholar]
- Su, Y.; Lan, T.; Li, H.; Xu, J.; Wang, Y.; Cai, D. Pandagpt: One model to instruction-follow them all. arXiv 2023, arXiv:2305.16355. [Google Scholar] [CrossRef]
- Villani, C. Optimal Transport: Old and New; Springer: Berlin/Heidelberg, Germany, 2008; Volume 338. [Google Scholar]
- Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; Volume 26. [Google Scholar]
- Montesuma, E.F.; Mboula, F.M.N.; Souloumiac, A. Recent advances in optimal transport for machine learning. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 47, 1161–1180. [Google Scholar] [CrossRef] [PubMed]
- Bao, H.; Wang, W.; Dong, L.; Liu, Q.; Mohammed, O.K.; Aggarwal, K.; Som, S.; Piao, S.; Wei, F. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. Adv. Neural Inf. Process. Syst. 2022, 35, 32897–32912. [Google Scholar]
- Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.H.; Li, Z.; Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; PMLR, 2021; pp. 4904–4916. [Google Scholar]
- Li, J.; Li, D.; Xiong, C.; Hoi, S. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; PMLR, 2022; pp. 12888–12900. [Google Scholar]
- Abdelhamed, A.; Afifi, M.; Go, A. What do you see? Enhancing zero-shot image classification with multimodal large language models. arXiv 2024, arXiv:2405.15668. [Google Scholar] [CrossRef]
- Naeem, M.F.; Khan, M.G.Z.A.; Xian, Y.; Afzal, M.Z.; Stricker, D.; Van Gool, L.; Tombari, F. I2mvformer: Large language model generated multi-view document supervision for zero-shot image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 15169–15179. [Google Scholar]
- Lai, Z.; Li, Z.; Oliveira, L.C.; Chauhan, J.; Dugger, B.N.; Chuah, C.N. Clipath: Fine-tune clip with visual feature fusion for pathology image analysis towards minimizing data collection efforts. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 2374–2380. [Google Scholar]
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; PMLR, 2021; pp. 8748–8763. [Google Scholar]
- Liu, H.; Li, C.; Li, Y.; Lee, Y.J. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 26296–26306. [Google Scholar]
- Liu, H.; Li, C.; Wu, Q.; Lee, Y.J. Visual instruction tuning. Adv. Neural Inf. Process. Syst. 2023, 36, 34892–34916. [Google Scholar]
- Li, J.; Li, D.; Savarese, S.; Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; PMLR, 2023; pp. 19730–19742. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Chung, H.W.; Hou, L.; Longpre, S.; Zoph, B.; Tay, Y.; Fedus, W.; Li, Y.; Wang, X.; Dehghani, M.; Brahma, S.; et al. Scaling instruction-finetuned language models. J. Mach. Learn. Res. 2024, 25, 1–53. [Google Scholar]
- Girdhar, R.; El-Nouby, A.; Liu, Z.; Singh, M.; Alwala, K.V.; Joulin, A.; Misra, I. Imagebind: One embedding space to bind them all. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 15180–15190. [Google Scholar]
- Chiang, W.L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J.E.; et al. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* Chatgpt Quality. 2023. Available online: https://vicuna.lmsys.org (accessed on 14 April 2023).
- Yi, J.; Yoon, S. Patch svdd: Patch-level svdd for anomaly detection and segmentation. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020. [Google Scholar]
- Gudovskiy, D.; Ishizaka, S.; Kozuka, K. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 98–107. [Google Scholar]
- Lei, J.; Hu, X.; Wang, Y.; Liu, D. Pyramidflow: High-resolution defect contrastive localization using pyramid normalizing flow. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14143–14152. [Google Scholar]
- Lee, S.; Lee, S.; Song, B.C. Cfa: Coupled-hypersphere-based feature adaptation for target-oriented anomaly localization. IEEE Access 2022, 10, 78446–78454. [Google Scholar] [CrossRef]
- Gu, Z.; Liu, L.; Chen, X.; Yi, R.; Zhang, J.; Wang, Y.; Wang, C.; Shu, A.; Jiang, G.; Ma, L. Remembering normality: Memory-guided knowledge distillation for unsupervised anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 16401–16409. [Google Scholar]
- Salehi, M.; Sadjadi, N.; Baselizadeh, S.; Rohban, M.H.; Rabiee, H.R. Multiresolution knowledge distillation for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 14902–14912. [Google Scholar]
- Wang, G.; Han, S.; Ding, E.; Huang, D. Student-teacher feature pyramid matching for anomaly detection. arXiv 2021, arXiv:2103.04257. [Google Scholar] [CrossRef]
- Zhang, X.; Li, S.; Li, X.; Huang, P.; Shan, J.; Chen, T. Destseg: Segmentation guided denoising student-teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 3914–3923. [Google Scholar]
- Zavrtanik, V.; Kristan, M.; Skočaj, D. Reconstruction by inpainting for visual anomaly detection. Pattern Recognit. 2021, 112, 107706. [Google Scholar] [CrossRef]
- Yan, X.; Zhang, H.; Xu, X.; Hu, X.; Heng, P.A. Learning semantic context from normal samples for unsupervised anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 3110–3118. [Google Scholar]
- Pirnay, J.; Chai, K. Inpainting transformer for anomaly detection. In Proceedings of the 21st International Conference on Image Analysis and Processing, Lecce, Italy, 23–27 May 2022; Springer: Cham, Switzerland, 2022; pp. 394–406. [Google Scholar]
- Wyatt, J.; Leach, A.; Schmon, S.M.; Willcocks, C.G. Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 21–24 June 2022; pp. 650–656. [Google Scholar]
- Huang, C.; Guan, H.; Jiang, A.; Zhang, Y.; Spratling, M.; Wang, Y.F. Registration based few-shot anomaly detection. In Proceedings of the 17th European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Cham, Switzerland, 2022; pp. 303–319. [Google Scholar]
- Chen, X.; Han, Y.; Zhang, J. April-gan: A zero-/few-shot anomaly classification and segmentation method for cvpr 2023 vand workshop challenge tracks 1&2: 1st place on zero-shot ad and 4th place on few-shot ad. arXiv 2023, arXiv:2305.17382. [Google Scholar]
- Li, X.; Zhang, Z.; Tan, X.; Chen, C.; Qu, Y.; Xie, Y.; Ma, L. Promptad: Learning prompts with only normal samples for few-shot anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16838–16848. [Google Scholar]
- Li, Y.; Wang, S.; Tian, Q.; Ding, X. Feature representation for statistical-learning-based object detection: A review. Pattern Recognit. 2015, 48, 3542–3559. [Google Scholar] [CrossRef]
- Pérez, D.; Alonso, S.; Morán, A.; Prada, M.A.; Fuertes, J.J.; Domínguez, M. Comparison of network intrusion detection performance using feature representation. In Proceedings of the 20th International Conference on Engineering Applications of Neural Networks, Crete, Greece, 24–26 May 2019; Springer: Cham, Switzerland, 2019; pp. 463–475. [Google Scholar]
- Cohen, N.; Hoshen, Y. Sub-image anomaly detection with deep pyramid correspondences. arXiv 2020, arXiv:2005.02357. [Google Scholar]
- Defard, T.; Setkov, A.; Loesch, A.; Audigier, R. Padim: A patch distribution modeling framework for anomaly detection and localization. In Proceedings of the International Conference on Pattern Recognition, Virtual, 10–15 January 2021; Springer: Cham, Switzerland, 2021; pp. 475–489. [Google Scholar]
- Liu, Z.; Zhou, Y.; Xu, Y.; Wang, Z. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 20402–20411. [Google Scholar]
- Rudolph, M.; Wandt, B.; Rosenhahn, B. Same same but differnet: Semi-supervised defect detection with normalizing flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 1907–1916. [Google Scholar]
- Sheynin, S.; Benaim, S.; Wolf, L. A hierarchical transformation-discriminating generative model for few shot anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 8495–8504. [Google Scholar]
- Fang, Z.; Wang, X.; Li, H.; Liu, J.; Hu, Q.; Xiao, J. Fastrecon: Few-shot industrial anomaly detection via fast feature reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 17481–17490. [Google Scholar]
| Setup | Method | Public | MVTecAD Image-AUC | MVTecAD Pixel-AUC | VisA Image-AUC | VisA Pixel-AUC | AVG |
|---|---|---|---|---|---|---|---|
| 1-shot | SPADE | arXiv’2020 | 81.0 ± 2.0 | 91.2 ± 0.4 | 79.5 ± 4.0 | 95.6 ± 0.4 | 86.8 |
| 1-shot | PaDiM | ICPR’2020 | 76.6 ± 3.1 | 89.3 ± 0.9 | 62.8 ± 5.4 | 89.9 ± 0.8 | 79.7 |
| 1-shot | PatchCore | CVPR’2022 | 83.4 ± 3.0 | 92.0 ± 1.0 | 79.9 ± 2.9 | 95.4 ± 0.6 | 87.7 |
| 1-shot | WinCLIP | CVPR’2023 | 93.1 ± 2.0 | 95.2 ± 0.5 | 83.8 ± 4.0 | 96.4 ± 0.4 | 92.1 |
| 1-shot | APRIL-GAN | arXiv’2023 | 92.0 ± 1.0 | 95.1 ± 0.3 | 91.2 ± 2.0 | 96.0 ± 0.7 | 93.6 |
| 1-shot | AnomalyGPT | AAAI’2024 | 94.1 ± 1.1 | 95.3 ± 0.1 | 87.4 ± 0.8 | 96.2 ± 0.1 | 93.3 |
| 1-shot | PromptAD | CVPR’2024 | 94.6 ± 0.3 | 95.9 ± 1.1 | 86.9 ± 0.8 | 96.7 ± 2.0 | 93.5 |
| 1-shot | KAG-prompt | AAAI’2025 | 95.8 ± 0.2 | 96.2 ± 1.1 | 91.6 ± 0.4 | 97.0 ± 3.0 | 95.2 |
| 1-shot | AnomalyNLP | — | 96.3 ± 0.6 | 96.9 ± 1.2 | 92.8 ± 1.3 | 97.8 ± 2.1 | 96.0 |
| 2-shot | SPADE | arXiv’2020 | 82.9 ± 2.6 | 92.0 ± 0.3 | 80.7 ± 5.0 | 96.2 ± 0.4 | 88.0 |
| 2-shot | PaDiM | ICPR’2020 | 78.9 ± 3.1 | 91.3 ± 0.7 | 67.4 ± 5.1 | 92.0 ± 0.7 | 82.4 |
| 2-shot | PatchCore | CVPR’2022 | 86.3 ± 3.3 | 93.3 ± 0.6 | 81.6 ± 4.0 | 96.1 ± 0.5 | 89.3 |
| 2-shot | WinCLIP | CVPR’2023 | 94.4 ± 1.3 | 96.0 ± 0.3 | 84.6 ± 2.4 | 96.8 ± 0.3 | 93.0 |
| 2-shot | APRIL-GAN | arXiv’2023 | 92.4 ± 2.3 | 95.0 ± 1.5 | 92.2 ± 0.1 | 96.2 ± 1.6 | 94.0 |
| 2-shot | AnomalyGPT | AAAI’2024 | 95.5 ± 0.8 | 95.6 ± 0.2 | 88.6 ± 0.7 | 96.4 ± 0.1 | 94.0 |
| 2-shot | PromptAD | CVPR’2024 | 95.7 ± 1.0 | 96.2 ± 0.3 | 88.3 ± 1.3 | 97.1 ± 2.0 | 94.3 |
| 2-shot | KAG-prompt | AAAI’2025 | 96.6 ± 1.3 | 96.5 ± 0.5 | 92.7 ± 2.0 | 97.4 ± 2.0 | 95.8 |
| 2-shot | AnomalyNLP | — | 97.9 ± 0.4 | 97.4 ± 1.2 | 94.0 ± 1.7 | 98.2 ± 0.6 | 96.9 |
| 4-shot | SPADE | arXiv’2020 | 84.8 ± 2.5 | 92.7 ± 0.3 | 81.7 ± 3.4 | 96.6 ± 0.3 | 89.0 |
| 4-shot | PaDiM | ICPR’2020 | 80.4 ± 2.5 | 92.6 ± 0.7 | 72.8 ± 2.9 | 93.2 ± 0.5 | 84.8 |
| 4-shot | PatchCore | CVPR’2022 | 88.8 ± 2.6 | 94.3 ± 0.5 | 85.3 ± 2.1 | 96.8 ± 0.3 | 91.3 |
| 4-shot | WinCLIP | CVPR’2023 | 95.2 ± 1.3 | 96.2 ± 0.3 | 87.3 ± 1.8 | 97.2 ± 0.2 | 94.0 |
| 4-shot | APRIL-GAN | arXiv’2023 | 92.8 ± 2.0 | 95.9 ± 0.5 | 92.2 ± 1.7 | 96.2 ± 3.0 | 94.3 |
| 4-shot | AnomalyGPT | AAAI’2024 | 96.3 ± 0.3 | 96.2 ± 0.1 | 90.6 ± 0.7 | 96.7 ± 0.1 | 95.0 |
| 4-shot | PromptAD | CVPR’2024 | 96.6 ± 0.4 | 96.5 ± 2.1 | 89.1 ± 0.5 | 97.4 ± 1.1 | 94.9 |
| 4-shot | KAG-prompt | AAAI’2025 | 97.1 ± 0.4 | 96.7 ± 1.3 | 93.3 ± 0.9 | 97.7 ± 2.7 | 96.2 |
| 4-shot | AnomalyNLP | — | 98.1 ± 0.1 | 98.6 ± 2.0 | 94.4 ± 1.5 | 98.8 ± 0.3 | 97.5 |
| Model | Public | Setting | Image-AUC | Pixel-AUC |
|---|---|---|---|---|
| AnomalyNLP | — | 1-shot | 96.3 | 96.9 |
| AnomalyNLP | — | 4-shot | 98.1 | 98.6 |
| DiffNet [51] | WACV’2021 | 16-shot | 87.3 | — |
| TDG [52] | ICCV’2021 | 10-shot | 78.0 | — |
| RegAD [43] | ECCV’2022 | 8-shot | 91.2 | 96.7 |
| FastRecon [53] | ICCV’2023 | 8-shot | 95.2 | 97.3 |
| MKD [36] | CVPR’2021 | full-shot | 87.8 | 90.7 |
| P-SVDD [31] | ACCV’2021 | full-shot | 95.2 | 96.0 |
| PatchCore [1] | CVPR’2022 | full-shot | 99.1 | 98.1 |
| SimpleNet [50] | CVPR’2023 | full-shot | 99.6 | 98.1 |
| Model | NLPL | OT |  | MVTecAD Image-AUC | MVTecAD Pixel-AUC | VisA Image-AUC | VisA Pixel-AUC | AVG |
|---|---|---|---|---|---|---|---|---|
| A | ✗ | ✗ | ✗ | 93.1 | 95.2 | 83.8 | 96.4 | 92.1 |
| B | ✓ | ✗ | ✗ | 93.4 | 95.7 | 84.3 | 96.9 | 92.6 |
| C | ✓ | ✓ | ✗ | 95.3 | 96.0 | 87.6 | 97.1 | 94.0 |
| D | ✗ | ✗ | ✓ | 95.1 | 96.3 | 91.3 | 97.4 | 95.0 |
| E | ✓ | ✓ | ✓ | 96.3 | 96.9 | 92.8 | 97.8 | 96.0 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hua, L.; Qian, J. AnomalyNLP: Noisy-Label Prompt Learning for Few-Shot Industrial Anomaly Detection. Electronics 2025, 14, 4016. https://doi.org/10.3390/electronics14204016