Article

Exploring Kolmogorov–Arnold Networks for Unsupervised Anomaly Detection in Industrial Processes

by Enrique Luna-Villagómez and Vladimir Mahalec *
Department of Chemical Engineering, McMaster University, 1280 Main St W, Hamilton, ON L8S 4L8, Canada
* Author to whom correspondence should be addressed.
Processes 2025, 13(11), 3672; https://doi.org/10.3390/pr13113672
Submission received: 27 October 2025 / Revised: 10 November 2025 / Accepted: 11 November 2025 / Published: 13 November 2025
(This article belongs to the Special Issue AI-Driven Advanced Process Control for Smart Energy Systems)

Abstract

Designing reliable fault detection and diagnosis (FDD) systems remains difficult when only limited fault-free data are available. Kolmogorov–Arnold Networks (KANs) have recently been proposed as parameter-efficient alternatives to multilayer perceptrons, yet their effectiveness for unsupervised FDD has not been systematically established. This study presents a statistically grounded comparison of Kolmogorov–Arnold Autoencoders (KAN-AEs) against an orthogonal autoencoder and a PCA baseline using the Tennessee Eastman Process benchmark. Four KAN-AE variants (EfficientKAN-AE, FastKAN-AE, FourierKAN-AE, and WavKAN-AE) were trained on fault-free data subsets ranging from 625 to 250,000 samples and evaluated over 30 independent runs. Detection performance was assessed using Bayesian signed-rank tests to estimate posterior probabilities of model superiority across fault scenarios. The results show that WavKAN-AE and EfficientKAN-AE achieve fault detection rates of approximately 90–92% with only 2500 samples. In contrast, the orthogonal autoencoder requires over 30,000 samples to reach comparable performance, while PCA remains markedly below this level regardless of data size. Under data-rich conditions, Bayesian tests show that the orthogonal autoencoder matches or slightly outperforms the KAN-AEs on the more challenging fault scenarios, while remaining computationally more efficient. These findings position KAN-AEs as compact, data-efficient tools for industrial fault detection when historical fault-free data are scarce.
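The methods compared in the abstract share a common unsupervised monitoring scheme: a model is fit only on fault-free operating data, and a sample is flagged as faulty when its reconstruction error exceeds a threshold derived from the fault-free data. The sketch below illustrates that general scheme with the PCA baseline named in the abstract; it is not the authors' implementation, and the synthetic data, the 90% retained-variance setting, and the 99th-percentile alarm threshold are assumptions made for illustration only.

# Minimal illustrative sketch (not the authors' code): reconstruction-error
# fault detection with a PCA baseline trained on fault-free data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2500, 52))   # fault-free training data (e.g., 52 TEP variables; synthetic here)
X_test = rng.normal(size=(960, 52))     # online data to be monitored

# Standardize using fault-free statistics only
scaler = StandardScaler().fit(X_train)
Z_train = scaler.transform(X_train)
Z_test = scaler.transform(X_test)

# Fit PCA on fault-free data; retaining ~90% variance is an illustrative choice
pca = PCA(n_components=0.9).fit(Z_train)

def reconstruction_error(Z):
    """Squared prediction error (SPE) of each sample under the PCA model."""
    Z_hat = pca.inverse_transform(pca.transform(Z))
    return np.sum((Z - Z_hat) ** 2, axis=1)

# Alarm threshold from fault-free reconstruction errors (99th percentile, assumed)
threshold = np.percentile(reconstruction_error(Z_train), 99)
alarms = reconstruction_error(Z_test) > threshold
print(f"Flagged {alarms.mean():.1%} of test samples as faulty")

In the paper's setting, the KAN-AE variants and the orthogonal autoencoder would replace the PCA reconstruction step, while the threshold is still calibrated on fault-free data.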
Keywords: autoencoders; data efficiency; fault detection and diagnosis (FDD); industrial process monitoring; Kolmogorov–Arnold Networks (KANs); Tennessee Eastman Process; unsupervised learning

