MLG-STPM: Meta-Learning Guided STPM for Robust Industrial Anomaly Detection Under Label Noise
Abstract
1. Introduction
- We systematically analyze the detrimental effect of varying levels of label noise on the performance of the STPM unsupervised anomaly detection method.
- We propose a novel framework, MLG-STPM, which augments STPM with an Evolving Meta-Set (EMS) mechanism. This enhances robustness by dynamically generating a high-confidence set of training samples, thereby removing the dependency on an external clean dataset common in traditional meta-learning.
- We demonstrate through comprehensive experiments on the MVTec AD and VisA datasets that MLG-STPM consistently and significantly outperforms the baseline STPM under various noise conditions, achieving competitive results against other relevant approaches.
2. Related Work
2.1. Unsupervised Anomaly Detection in Industrial Images
- Reconstruction-based Methods: These methods train models such as Autoencoders (AEs) [9], Variational Autoencoders (VAEs) [10], and GANs [11] to reconstruct normal images accurately, and detect anomalies from high reconstruction errors. While intuitive, these methods must carefully control the model's generalization so that anomalies themselves are not reconstructed equally well; a minimal sketch of this scoring principle follows this list.
- Feature Embedding-based Methods: These methods map images into a feature space using pretrained or specially trained networks [3,5]. Prominent techniques include one-class classification methods such as Deep SVDD [13] and memory-bank-based approaches such as PaDiM [14] and PatchCore [25]. Other significant paradigms, including normalizing flows [26,27] and diffusion probabilistic models [4], have also demonstrated strong performance by modeling the distribution of normal features. Student–teacher frameworks, which we discuss next, are a particularly effective subclass within this broad category.
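As a concrete illustration of the reconstruction-based principle referenced in the first bullet above, the following minimal PyTorch sketch scores an image by its per-pixel reconstruction error. The `autoencoder` argument is a placeholder for any encoder–decoder trained only on normal images; it is not a specific model from the cited works.

```python
import torch
import torch.nn as nn

def reconstruction_anomaly_score(autoencoder: nn.Module, image: torch.Tensor):
    """Score an image by how poorly a normal-data autoencoder reconstructs it.

    image: (B, C, H, W) tensor. `autoencoder` is assumed to output a tensor of
    the same shape as its input.
    """
    autoencoder.eval()
    with torch.no_grad():
        reconstruction = autoencoder(image)
    # Per-pixel squared error (averaged over channels) acts as the anomaly map;
    # its maximum serves as a simple image-level anomaly score.
    anomaly_map = ((image - reconstruction) ** 2).mean(dim=1)  # (B, H, W)
    image_score = anomaly_map.amax(dim=(1, 2))                 # (B,)
    return anomaly_map, image_score
```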
2.2. Student–Teacher Frameworks for Anomaly Detection
2.3. Robust Learning via Sample Selection
3. Methodology
3.1. Preliminaries: Student–Teacher Feature Pyramid Matching
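The body of this preliminaries subsection is not reproduced here, but the STPM objective it reviews is documented in Wang et al. [17] and in the symbol list of Appendix A (Equations (1) and (2)): teacher and student feature maps at each selected pyramid layer are L2-normalized along the channel dimension and compared with a squared-error loss. The PyTorch sketch below restates that per-sample loss under assumed variable names (`teacher_feats`, `student_feats`); it averages rather than sums over layers, which may differ from the paper's exact weighting.

```python
import torch
import torch.nn.functional as F

def stpm_per_sample_loss(teacher_feats, student_feats):
    """Per-sample STPM-style loss for one image.

    teacher_feats / student_feats: lists of (C_l, H_l, W_l) tensors, one per
    selected pyramid layer, extracted by the frozen teacher T and student S.
    """
    layer_losses = []
    for f_t, f_s in zip(teacher_feats, student_feats):
        f_t_hat = F.normalize(f_t, p=2, dim=0)  # channel-wise L2 normalization (Equation (1))
        f_s_hat = F.normalize(f_s, p=2, dim=0)
        # 0.5 * squared distance between normalized vectors, averaged over H_l x W_l
        layer_losses.append(0.5 * (f_t_hat - f_s_hat).pow(2).sum(dim=0).mean())
    # Aggregate across the L pyramid layers (Equation (2), up to layer weighting).
    return torch.stack(layer_losses).mean()
```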
3.2. Proposed Method: MLG-STPM
3.2.1. Stage 1: Warm-Up for Reliable Loss Evaluation
3.2.2. Stage 2: EMS-Guided Robust Training
EMS Update
EMS-Guided Student Learning
Algorithm 1 MLG-STPM Training Process
Require: Pretrained teacher network T; student network S with randomly initialized parameters; training dataset D; mini-batch size; meta-set capacity; number of warm-up iterations; learning rate.
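Because the algorithm box is only partially legible in this rendering, the following Python-style sketch reconstructs its two stages from the descriptions in Section 3.2 and Appendix A: after a warm-up phase, each iteration forms a candidate pool from the current mini-batch and the previous EMS, keeps the lowest-loss (highest-confidence) samples as the updated EMS, and then trains the student on the mini-batch augmented with the EMS. The small-loss selection rule, the function name `batch_loss`, and all hyperparameter defaults are assumptions consistent with the text and with small-loss criteria such as [23], not a verbatim transcription of Algorithm 1.

```python
import torch

def train_mlg_stpm(student, teacher, train_loader, optimizer, batch_loss,
                   ems_capacity=64, warmup_iters=500, total_iters=10000):
    """Sketch of the two-stage MLG-STPM training loop (defaults are placeholders).

    batch_loss(teacher, student, images) -> 1-D tensor of per-image losses,
    e.g. the per-sample STPM loss applied image by image.
    train_loader is assumed to yield image tensors of shape (B, C, H, W).
    """
    teacher.eval()
    ems = None                       # Evolving Meta-Set, stored as a tensor of images
    data_iter = iter(train_loader)
    for t in range(total_iters):
        try:
            images = next(data_iter)
        except StopIteration:        # restart the loader when an epoch ends
            data_iter = iter(train_loader)
            images = next(data_iter)

        if t >= warmup_iters:
            # Stage 2, EMS update: rank the candidate pool (mini-batch + previous EMS)
            # by per-sample loss and keep the lowest-loss samples (assumed criterion).
            pool = images if ems is None else torch.cat([images, ems], dim=0)
            with torch.no_grad():
                losses = batch_loss(teacher, student, pool)
            keep = torch.argsort(losses)[:ems_capacity]
            ems = pool[keep].detach()
            # EMS-guided learning: train on the mini-batch augmented with the EMS.
            train_images = torch.cat([images, ems], dim=0)
        else:
            # Stage 1, warm-up: train on the raw (possibly noise-contaminated) mini-batch.
            train_images = images

        loss = batch_loss(teacher, student, train_images).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student
```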
3.3. Inference and Anomaly Scoring
- Feature Discrepancy Calculation: For each layer l, a per-layer anomaly map is computed. The score at each spatial location is the cosine distance between the teacher's and the student's L2-normalized feature vectors at that location.
- Pixel-level Anomaly Map Generation: Each per-layer map is up-sampled to the input image resolution, and the up-sampled maps are aggregated via element-wise multiplication. This yields the final pixel-level anomaly map.
- Image-level Score Derivation: While the anomaly map provides detailed spatial information, a single scalar value is required for the subsequent image-level classification task. We therefore derive the final score for a test image J by taking the maximum value of its pixel-level anomaly map. A code sketch of this inference path follows this list.
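To make these three steps concrete (as noted above, a code sketch follows this list), here is a minimal PyTorch version of the inference path. The helper `extract_pyramid_features` is an assumed abstraction for collecting the selected layer outputs; it is not part of the paper's interface, and the equation numbers in the comments refer to Appendix A.

```python
import torch
import torch.nn.functional as F

def stpm_inference(extract_pyramid_features, teacher, student, image):
    """Pixel-level anomaly map and image-level score for a single test image.

    extract_pyramid_features(net, image) -> list of (1, C_l, H_l, W_l) feature
    maps, one per selected layer (assumed helper). image: (1, C, H, W) tensor.
    """
    _, _, H, W = image.shape
    with torch.no_grad():
        feats_t = extract_pyramid_features(teacher, image)
        feats_s = extract_pyramid_features(student, image)

        anomaly_map = torch.ones(1, 1, H, W, device=image.device)
        for f_t, f_s in zip(feats_t, feats_s):
            # Per-layer map: cosine distance between normalized features (Equation (8)).
            layer_map = 1.0 - F.cosine_similarity(f_t, f_s, dim=1)      # (1, H_l, W_l)
            # Up-sample to the input resolution, then aggregate by element-wise
            # multiplication across layers (Equation (9)).
            layer_map = F.interpolate(layer_map.unsqueeze(1), size=(H, W),
                                      mode="bilinear", align_corners=False)
            anomaly_map = anomaly_map * layer_map
        # Image-level score: maximum of the pixel-level map (Equation (10)).
        image_score = anomaly_map.max()
    return anomaly_map.squeeze(), image_score
```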
4. Experiments
4.1. Datasets and Evaluation Setup
- MVTec AD [7,31]: A foundational benchmark for unsupervised anomaly detection. It comprises 5 texture and 10 object categories, with 3629 normal images for training and 1725 test images (467 normal and 1258 anomalous). It covers over 70 distinct artificially induced defect types, as illustrated in Figure 3.
Figure 3. Examples of normal and anomalous samples from the MVTec AD dataset. The top row illustrates normal pristine items, the middle row presents anomalous counterparts, and the bottom row provides close-up views of these anomalies, often highlighted with contours to emphasize the defect regions. This selection demonstrates the diverse defect types encountered across various object and texture categories within the dataset.
- VisA [3]: A more recent and challenging dataset designed to reflect complex industrial inspection scenarios. It contains 10,621 normal and 1200 anomaly images across 12 object categories. The dataset is particularly difficult as it includes images with multiple objects and lacks consistent camera alignment, posing additional challenges for anomaly detection algorithms. Representative samples illustrating these complexities are shown in Figure 4.
Figure 4. Representative samples from the VisA dataset. The top row showcases various normal instances, the middle row displays anomalous counterparts, and the bottom row offers detailed zoomed-in views of the defects, frequently outlined with contours to highlight the anomalous regions. This illustrates the complexity and variability of anomalies in this challenging multi-object dataset.
4.1.1. Noisy IAD Setting
4.1.2. Evaluation Protocol
4.2. Implementation Details
4.3. Evaluation Metrics
- Area Under the ROC Curve (AU-ROC): This metric evaluates a model's ability to distinguish between normal and anomalous instances. We report it for both tasks: for image-level classification, the AU-ROC is computed from the final anomaly score of each image (I-AUROC); for pixel-level localization, it is computed from the pixel-wise anomaly map (P-AUROC). The formula is $\mathrm{AUROC} = \int_0^1 \mathrm{TPR}\,\mathrm{d}(\mathrm{FPR})$, i.e., the true positive rate integrated over the false positive rate.
- Area Under the Precision–Recall Curve (AU-PR): This metric is particularly robust for imbalanced datasets. It is also applied to both the final image scores (I-AP) and the pixel-wise maps (P-AP). For consistency with the related literature, we refer to it as Average Precision (AP) in our result tables. Its conceptual formula is $\mathrm{AUPR} = \int_0^1 P(R)\,\mathrm{d}R$, the precision integrated over recall.
- Area Under the Per-Region Overlap Curve (AUPRO): This metric specifically assesses segmentation quality. As it inherently measures spatial accuracy, it is used exclusively to evaluate pixel-level localization performance. AUPRO integrates the Per-Region Overlap (PRO) metric over multiple thresholds, $\mathrm{AUPRO} = \frac{1}{f_{\max}} \int_0^{f_{\max}} \mathrm{PRO}(f)\,\mathrm{d}f$, where $f$ denotes the false positive rate and $f_{\max}$ is an upper integration limit (commonly 0.3). A scikit-learn-based sketch of the AU-ROC and AP computations follows this list.
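As referenced at the end of the list above, the AU-ROC and AU-PR values at both granularities can be computed with standard scikit-learn utilities, as sketched below; AUPRO is omitted because it additionally requires connected-component analysis of each ground-truth defect region over a sweep of thresholds. Variable names are illustrative, not taken from our evaluation code.

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def image_and_pixel_metrics(image_labels, image_scores, pixel_labels, pixel_maps):
    """AU-ROC and Average Precision at image and pixel level.

    image_labels / image_scores: 1-D arrays with one entry per test image.
    pixel_labels / pixel_maps: ground-truth masks and anomaly maps of identical
    shape; they are flattened so every pixel becomes one binary classification.
    """
    return {
        "I-AUROC": roc_auc_score(image_labels, image_scores),
        "I-AP": average_precision_score(image_labels, image_scores),
        "P-AUROC": roc_auc_score(pixel_labels.ravel(), pixel_maps.ravel()),
        "P-AP": average_precision_score(pixel_labels.ravel(), pixel_maps.ravel()),
    }
```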
4.4. Results and Analysis
4.4.1. Comparative Results
4.4.2. Robustness to Varying Label Noise
4.4.3. Qualitative Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. List of Symbols
Symbol | Ref. | Description |
---|---|---|
T | Section 3.1 | The pretrained and frozen teacher network. |
S | Section 3.1 | The trainable student network. |
 | Section 3.1 | Learnable parameters of the student network S. |
 | Section 3.1 | An input image from the training set (I) or test set (J). |
 | Section 3.1 | The height, width, and number of channels of an input image. |
l | Section 3.1 | Index for a specific layer in the network's feature pyramid (l = 1, …, L). |
L | Section 3.1 | Total number of selected layers for feature extraction. |
 | Section 3.1 | Feature map extracted from the l-th layer of the teacher network T. |
 | Section 3.1 | Feature map extracted from the l-th layer of the student network S. |
 | Section 3.1 | The height, width, and number of channels of a feature map at layer l. |
 | Equation (1) | L2-normalized feature map. |
 | Equation (1) | The L2-norm operation. |
 | Equation (2) | The per-sample loss, used to evaluate a sample's normality for EMS selection. |
 | Section 3.2.1 | The number of warm-up iterations before activating the EMS. |
 | Section 3.2.2 | The Evolving Meta-Set (EMS), a dynamic buffer for high-confidence images. |
 | Section 3.2.2 | The fixed maximum capacity (a hyperparameter) of the EMS. |
 | Equation (3) | The candidate pool for the EMS update, composed of the current mini-batch and the previous EMS. |
 | Equation (4) | The state of the EMS at training iteration t. |
 | Equation (3) | The current mini-batch of training data sampled from the training dataset at iteration t. |
 | Equation (7) | The parameters of the student network S at iteration t. |
 | Equation (5) | The augmented training batch at iteration t, composed of the current mini-batch and the EMS. |
 | Equation (6) | The guided training loss for the student network, computed over the augmented training batch. |
 | Equation (7) | The learning rate for the optimizer. |
 | Equation (7) | The gradient operator with respect to the student network's parameters. |
 | Section 3.3 | The final trained parameters of the student network. |
 | Equation (8) | The per-layer anomaly map generated from the feature discrepancy at layer l. |
 | Equation (8) | Spatial coordinates within a feature map or image. |
 | Equation (9) | The final aggregated anomaly map for a test image J, used for pixel-level localization. |
 | Equation (10) | The final image-level anomaly score for a test image J, used for image-level classification. |
References
- Wang, Z.; Li, S.; Xuan, J.; Shi, T. Biologically Inspired Compound Defect Detection Using a Spiking Neural Network with Continuous Time–Frequency Gradients. Adv. Eng. Inform. 2025, 65, 103132. [Google Scholar] [CrossRef]
- Peng, C.; Shangguan, W.; Wang, Z.; Peng, J.; Chai, L.; Xing, Y.; Cai, B. Reliability Assessment of Urban Rail Transit Vehicle On-Board Controller with Multi-Component Failure Dependence Based on R-vine-copula. Reliab. Eng. Syst. Saf. 2025, 257, 110795. [Google Scholar] [CrossRef]
- Zou, Y.; Jeong, J.; Pemula, L.; Zhang, D.; Dabeer, O. SPot-the-Difference Self-supervised Pre-training for Anomaly Detection and Segmentation. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T., Eds.; Springer: Cham, Switzerland, 2022; pp. 392–408. [Google Scholar] [CrossRef]
- Zhang, X.; Li, N.; Li, J.; Dai, T.; Jiang, Y.; Xia, S.T. Unsupervised Surface Anomaly Detection with Diffusion Probabilistic Model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 6782–6791. [Google Scholar]
- Liu, J.; Xie, G.; Wang, J.; Li, S.; Wang, C.; Zheng, F.; Jin, Y. Deep Industrial Image Anomaly Detection: A Survey. Mach. Intell. Res. 2024, 21, 104–135. [Google Scholar] [CrossRef]
- Li, C.; Li, J.; Li, Y.; He, L.; Fu, X.; Chen, J. Fabric Defect Detection in Textile Manufacturing: A Survey of the State of the Art. Secur. Commun. Netw. 2021, 2021, 9948808. [Google Scholar] [CrossRef]
- Bergmann, P.; Fauser, M.; Sattlegger, D.; Steger, C. MVTec AD - A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9592–9600. [Google Scholar]
- Xie, G.; Wang, J.; Liu, J.; Lyu, J.; Liu, Y.; Wang, C.; Zheng, F.; Jin, Y. IM-IAD: Industrial Image Anomaly Detection Benchmark in Manufacturing. arXiv 2024, arXiv:2301.13359. [Google Scholar] [CrossRef] [PubMed]
- Bergmann, P.; Löwe, S.; Fauser, M.; Sattlegger, D.; Steger, C. Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 372–380. [Google Scholar] [CrossRef]
- Dehaene, D.; Eline, P. Anomaly Localization by Modeling Perceptual Features. arXiv 2020, arXiv:2008.05369. [Google Scholar] [CrossRef]
- Yan, X.; Zhang, H.; Xu, X.; Hu, X.; Heng, P.A. Learning Semantic Context from Normal Samples for Unsupervised Anomaly Detection. Proc. AAAI Conf. Artif. Intell. 2021, 35, 3110–3118. [Google Scholar] [CrossRef]
- Bergmann, P.; Fauser, M.; Sattlegger, D.; Steger, C. Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4182–4191. [Google Scholar] [CrossRef]
- Sohn, K.; Li, C.L.; Yoon, J.; Jin, M.; Pfister, T. Learning and Evaluating Representations for Deep One-class Classification. arXiv 2021, arXiv:2011.02578. [Google Scholar] [CrossRef]
- Defard, T.; Setkov, A.; Loesch, A.; Audigier, R. PaDiM: A Patch Distribution Modeling Framework for Anomaly Detection and Localization. In Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Virtual, 10–15 January 2021; Del Bimbo, A., Cucchiara, R., Sclaroff, S., Farinella, G.M., Mei, T., Bertini, M., Escalante, H.J., Vezzani, R., Eds.; Springer: Cham, Switzerland, 2021; pp. 475–489. [Google Scholar] [CrossRef]
- Li, C.L.; Sohn, K.; Yoon, J.; Pfister, T. CutPaste: Self-Supervised Learning for Anomaly Detection and Localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 9664–9674. [Google Scholar]
- Cohen, N.; Hoshen, Y. Sub-Image Anomaly Detection with Deep Pyramid Correspondences. arXiv 2021, arXiv:2005.02357. [Google Scholar] [CrossRef]
- Wang, G.; Han, S.; Ding, E.; Huang, D. Student-Teacher Feature Pyramid Matching for Anomaly Detection. arXiv 2021, arXiv:2103.04257. [Google Scholar] [CrossRef]
- Salehi, M.; Sadjadi, N.; Baselizadeh, S.; Rohban, M.H.; Rabiee, H.R. Multiresolution Knowledge Distillation for Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14902–14912. [Google Scholar]
- Chen, Y.; Tian, Y.; Pang, G.; Carneiro, G. Deep One-Class Classification via Interpolated Gaussian Descriptor. Proc. AAAI Conf. Artif. Intell. 2022, 36, 383–392. [Google Scholar] [CrossRef]
- Qiu, C.; Li, A.; Kloft, M.; Rudolph, M.; Mandt, S. Latent Outlier Exposure for Anomaly Detection with Contaminated Data. In Proceedings of the 39th International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; PMLR: Cambridge, MA, USA, 2022; pp. 18153–18167. [Google Scholar]
- Jiang, X.; Liu, J.; Wang, J.; Nie, Q.; Wu, K.; Liu, Y.; Wang, C.; Zheng, F. SoftPatch: Unsupervised Anomaly Detection with Noisy Data. Adv. Neural Inf. Process. Syst. 2022, 35, 15433–15445. [Google Scholar]
- Arpit, D.; Jastrzębski, S.; Ballas, N.; Krueger, D.; Bengio, E.; Kanwal, M.S.; Maharaj, T.; Fischer, A.; Courville, A.; Bengio, Y.; et al. A Closer Look at Memorization in Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: Cambridge, MA, USA, 2017; pp. 233–242. [Google Scholar]
- Gui, X.J.; Wang, W.; Tian, Z.H. Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion. arXiv 2021, arXiv:2106.09291. [Google Scholar] [CrossRef]
- Shu, J.; Yuan, X.; Meng, D.; Xu, Z. CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 11521–11539. [Google Scholar] [CrossRef]
- Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; Gehler, P. Towards Total Recall in Industrial Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 14318–14328. [Google Scholar]
- Rudolph, M.; Wandt, B.; Rosenhahn, B. Same Same but DifferNet: Semi-Supervised Defect Detection with Normalizing Flows. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 1907–1916. [Google Scholar]
- Yu, J.; Zheng, Y.; Wang, X.; Li, W.; Wu, Y.; Zhao, R.; Wu, L. FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows. arXiv 2021, arXiv:2111.07677. [Google Scholar] [CrossRef]
- Deng, H.; Li, X. Anomaly Detection via Reverse Distillation From One-Class Embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 9737–9746. [Google Scholar]
- Yamada, S.; Hotta, K. Reconstruction Student with Attention for Student-Teacher Pyramid Matching. arXiv 2022, arXiv:2111.15376. [Google Scholar] [CrossRef]
- Liu, T.; Tao, D. Classification with Noisy Labels by Importance Reweighting. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 447–461. [Google Scholar] [CrossRef] [PubMed]
- Bergmann, P.; Batzner, K.; Fauser, M.; Sattlegger, D.; Steger, C. The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. Int. J. Comput. Vis. 2021, 129, 1038–1059. [Google Scholar] [CrossRef]
- Lee, S.; Lee, S.; Song, B.C. CFA: Coupled-Hypersphere-Based Feature Adaptation for Target-Oriented Anomaly Localization. IEEE Access 2022, 10, 78446–78454. [Google Scholar] [CrossRef]
Dataset | Normal Samples | Anomaly Samples | Anomaly Types | Object Classes | Min Resolution | Max Resolution
---|---|---|---|---|---|---
MVTec AD | 4096 | 1258 | 73 | 15 | 700 | 1024
VisA | 10,621 | 1200 | 78 | 12 | 960 | 1562
Method | Training Epochs | Batch Size | Image Size | Learning Rate |
---|---|---|---|---
CFA [32] | 50 | 4 | 256 | 0.001 |
FastFlow [27] | 500 | 32 | 256 | 0.001 |
FAVAE [10] | 100 | 64 | 256 | 0.00001 |
Reverse [28] | 200 | 8 | 256 | 0.005 |
SPADE [16] | 1 | 8 | 256 | N/A |
STPM (baseline) [17] | 100 | 8 | 256 | 0.4 |
MLG-STPM (Ours) | 100 | 8 | 256 | 0.4 |
Method | Dataset | P-AUROC | I-AUROC | P-AP | I-AP | P-AUPRO
---|---|---|---|---|---|---
CFA [32] | MVTec AD | 0.9340 | 0.9817 | 0.5759 | 0.9948 | 0.8459
CFA [32] | VisA | 0.8465 | 0.8673 | 0.1286 | 0.9010 | 0.7177
FastFlow [27] | MVTec AD | 0.9499 | 0.9603 | 0.4909 | 0.9860 | 0.8347
FastFlow [27] | VisA | 0.9518 | 0.8837 | 0.0605 | 0.8739 | 0.8730
FAVAE [10] | MVTec AD | 0.9775 | 0.9881 | 0.7052 | 0.9965 | 0.9183
FAVAE [10] | VisA | 0.9715 | 0.8422 | 0.1123 | 0.8470 | 0.9023
Reverse [28] | MVTec AD | 0.9770 | 0.8944 | 0.6989 | 0.9690 | 0.9207
Reverse [28] | VisA | 0.9713 | 0.8375 | 0.1552 | 0.9037 | 0.9044
SPADE [16] | MVTec AD | 0.6311 | 0.9889 | 0.0784 | 0.9963 | 0.2274
SPADE [16] | VisA | 0.6263 | 0.8447 | 0.0018 | 0.8682 | 0.1886
STPM (baseline) [17] | MVTec AD | 0.9729 | 0.9738 | 0.6916 | 0.9924 | 0.9253
STPM (baseline) [17] | VisA | 0.9750 | 0.8932 | 0.1402 | 0.9025 | 0.8997
MLG-STPM (Ours) | MVTec AD | 0.9817 | 0.9929 | 0.7093 | 0.9976 | 0.9419
MLG-STPM (Ours) | VisA | 0.9723 | 0.8953 | 0.1598 | 0.9085 | 0.9126
Dataset | Method | Noise Ratio | P-AUROC | I-AUROC | P-AP | I-AP | P-AUPRO
---|---|---|---|---|---|---|---
MVTec AD | STPM (baseline) | 0.00 | 0.9698 | 0.9238 | 0.6611 | 0.9777 | 0.9088
MVTec AD | STPM (baseline) | 0.05 | 0.9729 | 0.9913 | 0.6603 | 0.9974 | 0.9296
MVTec AD | STPM (baseline) | 0.10 | 0.9729 | 0.9738 | 0.6916 | 0.9924 | 0.9253
MVTec AD | STPM (baseline) | 0.15 | 0.9561 | 0.9460 | 0.5855 | 0.9845 | 0.8904
MVTec AD | STPM (baseline) | 0.20 | 0.9580 | 0.9698 | 0.5625 | 0.9910 | 0.8998
MVTec AD | MLG-STPM (Ours) | 0.00 | 0.9815 | 0.9929 | 0.7075 | 0.9976 | 0.9419
MVTec AD | MLG-STPM (Ours) | 0.05 | 0.9818 | 0.9937 | 0.7103 | 0.9979 | 0.9425
MVTec AD | MLG-STPM (Ours) | 0.10 | 0.9817 | 0.9929 | 0.7093 | 0.9976 | 0.9419
MVTec AD | MLG-STPM (Ours) | 0.15 | 0.9816 | 0.9921 | 0.7085 | 0.9973 | 0.9378
MVTec AD | MLG-STPM (Ours) | 0.20 | 0.9812 | 0.9929 | 0.7051 | 0.9976 | 0.9415
VisA | STPM (baseline) | 0.00 | 0.9675 | 0.8943 | 0.1599 | 0.9015 | 0.9180
VisA | STPM (baseline) | 0.05 | 0.9619 | 0.8830 | 0.1838 | 0.8939 | 0.9161
VisA | STPM (baseline) | 0.10 | 0.9750 | 0.8932 | 0.1402 | 0.9025 | 0.8997
VisA | STPM (baseline) | 0.15 | 0.9692 | 0.8810 | 0.1653 | 0.8952 | 0.9183
VisA | STPM (baseline) | 0.20 | 0.8941 | 0.8356 | 0.1235 | 0.8534 | 0.7844
VisA | MLG-STPM (Ours) | 0.00 | 0.9716 | 0.8734 | 0.1203 | 0.8742 | 0.9116
VisA | MLG-STPM (Ours) | 0.05 | 0.9719 | 0.8765 | 0.1230 | 0.8776 | 0.9121
VisA | MLG-STPM (Ours) | 0.10 | 0.9723 | 0.8953 | 0.1598 | 0.9085 | 0.9126
VisA | MLG-STPM (Ours) | 0.15 | 0.9717 | 0.8732 | 0.1207 | 0.8741 | 0.9114
VisA | MLG-STPM (Ours) | 0.20 | 0.9721 | 0.8727 | 0.1229 | 0.8729 | 0.9131
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).