Search Results (3,347)

Search Parameters:
Keywords = anomaly data detection

21 pages, 4406 KB  
Article
An Abnormal File Access Detection Model for Containers Based on eBPF Listening
by Naqin Zhou, Hao Chen, Zeyu Chen, Chao Li and Fan Li
Mathematics 2026, 14(6), 991; https://doi.org/10.3390/math14060991 (registering DOI) - 14 Mar 2026
Abstract
With the widespread adoption of container technology, its shared kernel architecture has made abnormal file access behavior a key precursor to container escape and lateral attacks, necessitating precise and efficient runtime detection mechanisms. However, existing monitoring methods typically suffer from issues such as insufficient granularity in data collection, limited path semantic modeling capabilities, and low anomaly detection accuracy. To address these challenges, this paper proposes an eBPF-based method for detecting abnormal file access in containers. A lightweight kernel-level monitoring mechanism is constructed to capture access behavior in real time at the system call level, effectively enhancing both the granularity of data collection and the completeness of context. At the feature modeling layer, a multimodal path semantic representation method is designed, combining risk-layer rules and semantic vectorization strategies to enhance the hierarchical expression of path structures and improve context modeling ability. In the detection layer, an attention-enhanced autoencoder model is introduced, achieving high-precision identification of abnormal access behavior and low false-positive monitoring under unsupervised conditions through a path segment attention mechanism and weighted reconstruction loss function. Experiments in real container environments show that the proposed method achieves a recall rate of 82.0%, a false-positive rate of 0.79%, and a Matthews correlation coefficient of 0.852, significantly outperforming mainstream unsupervised detection methods such as Isolation Forest, One-Class SVM, and Local Outlier Factor. These results verify the advantages of the proposed method in terms of detection accuracy, real-time performance, and system friendliness, providing an efficient and feasible solution for enhancing the detection of unknown attacks in container runtimes. Full article
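The risk-layer rules mentioned in this abstract can be sketched as a prefix-to-tier mapping over file paths; the tier numbers and path prefixes below are illustrative assumptions, not the paper's actual rule set.

```python
# Illustrative sketch: assign container file paths to hypothetical risk layers
# before semantic vectorization. Tier numbers and prefixes are assumptions.
RISK_LAYERS = {
    3: ("/proc/sys", "/sys/kernel", "/etc/shadow"),   # high: kernel/credential surfaces
    2: ("/etc", "/usr/bin", "/var/run"),              # medium: config and binaries
    1: ("/tmp", "/var/log"),                          # low: scratch and logs
}

def risk_layer(path: str) -> int:
    """Return the highest matching risk layer for a path, 0 if unmatched."""
    for layer in sorted(RISK_LAYERS, reverse=True):
        if any(path.startswith(p) for p in RISK_LAYERS[layer]):
            return layer
    return 0
```

A real deployment would derive such prefixes from the container image and mount configuration rather than hard-coding them.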
26 pages, 4041 KB  
Article
Outlier Curve Detection in Functional Data Using Robust FPCA
by Wilson Pérez-Rocano, Antonio Gabriel López-Herrera and Manuel Escabias
Mathematics 2026, 14(6), 988; https://doi.org/10.3390/math14060988 (registering DOI) - 14 Mar 2026
Abstract
We propose a robust method for outlier detection in functional data analysis. This approach uses the robust Minimum Covariance Determinant estimator to compute the Mahalanobis distance applied to functional principal component scores. The main contribution of this research is the detection of outlier curves using the robust covariance matrix of functional principal components, in contrast to existing methods that use principal components on the discrete dataset. The proposed method is practical because it considers the entire functional form of the data, through their functional principal components, providing a comprehensive analysis that can detect anomalies across the entire functional range. A simulation study compares this approach with existing methods to evaluate their performance, followed by applications to El Niño Sea Surface Temperature data and SCImago Journal Rank data. The results show that the proposed method provides greater accuracy, demonstrating its effectiveness in detecting outlier curves. Full article
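The core scoring step, a Mahalanobis distance over principal component scores, can be illustrated in two dimensions; for simplicity this sketch uses the plain sample covariance, whereas the paper substitutes the robust Minimum Covariance Determinant estimate.

```python
import statistics

def mahalanobis_2d(scores, x):
    """Squared Mahalanobis distance of point x from the mean of 2-D score
    vectors, using the plain sample covariance (the paper's method uses
    the robust MCD estimate instead; only the distance is sketched here)."""
    xs = [s[0] for s in scores]
    ys = [s[1] for s in scores]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx, syy = statistics.variance(xs), statistics.variance(ys)
    sxy = sum((a - mx) * (b - my) for a, b in scores) / (len(scores) - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = x[0] - mx, x[1] - my
    # (dx, dy) @ inv(cov) @ (dx, dy)^T, with the 2x2 inverse written out
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
```

Scores with a large distance relative to a chi-squared cutoff would be flagged as outlier curves.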
17 pages, 2083 KB  
Article
Monitoring of Liquid Metal Reactor Heater Zones with Recurrent Neural Network Learning of Temperature Time Series
by Maria Pantopoulou, Derek Kultgen, Lefteri Tsoukalas and Alexander Heifetz
Energies 2026, 19(6), 1462; https://doi.org/10.3390/en19061462 (registering DOI) - 14 Mar 2026
Abstract
Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs), utilize high-temperature fluids at ambient pressure. To melt the fluid during reactor startup and prevent fluid freezing during cooldown, the thermal–hydraulic systems of such ARs include heater zones consisting of specific heaters with controllers, temperature sensors, and thermal insulation. The failure of heater zones due to insulation material degradation or improper installation, resulting in parasitic heat losses, can lead to fluid freezing. The detection of faults using a heat-transfer model is difficult because of a lack of knowledge of the experimental details. Data-driven machine learning of heater zone temperature time series offers a viable alternative. In this study, we benchmarked the performance of recurrent neural networks (RNNs) in an analysis of heat-up transient temperature time series of heater zones installed on a liquid sodium vessel. The RNN models include long short-term memory (LSTM) and gated recurrent unit (GRU) networks, as well as their bi-directional variants, BiLSTM and BiGRU. Anomalous temperature points were designated using a percentile-based threshold applied to residual fluctuations in the detrended temperature time series. Additionally, the impact of the exponentially weighted moving average (EWMA) method on detection accuracy was examined. The RNN models’ performance was assessed using precision, recall, and F1 score metrics. Results demonstrated that RNN models effectively detect anomalies in temperature time series, with the best models for each heater zone achieving F1 scores of over 93%. To explain the variations in RNN model performance across different heater zones, we used Kullback–Leibler (KL) divergence to quantify the relative entropy between training and testing data, and the Detrended Fluctuation Analysis (DFA) to assess long-range temporal correlations.
For datasets with strong long-range correlations and minimal relative entropy between training and testing data, GRU is the best-performing model. When the data exhibits weaker long-term correlations and a significant relative entropy between training and testing distributions, BiGRU shows the best performance. For the datasets with intermediate values of both KL divergence and DFA, the best performance is obtained with LSTM and BiLSTM, respectively. Full article
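The percentile-based residual thresholding described above can be sketched with a centered moving-average trend standing in for the RNN prediction; the window size and percentile are illustrative choices, not the paper's settings.

```python
import statistics

def flag_anomalies(series, window=5, pct=0.95):
    """Flag points whose residual from a centered moving-average trend
    exceeds the given percentile of absolute residuals (a simplified
    stand-in for thresholding residuals of an RNN-predicted trend)."""
    half = window // 2
    resid = []
    for i, v in enumerate(series):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        resid.append(abs(v - statistics.mean(series[lo:hi])))
    cutoff = sorted(resid)[int(pct * (len(resid) - 1))]
    return [i for i, r in enumerate(resid) if r > cutoff]
```

On a flat series with a single spike, only the spike index survives the cutoff.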
22 pages, 10587 KB  
Article
Accelerating Optimal Building Control Through Reinforcement Learning with Surrogate Building Models
by Andres Sebastian Cespedes Cubides, Christian Friborg Laursen and Muhyiddine Jradi
Appl. Sci. 2026, 16(6), 2790; https://doi.org/10.3390/app16062790 - 13 Mar 2026
Abstract
Buildings account for a substantial share of global energy use, yet the adoption of advanced optimal control strategies remains limited due to high computational costs and the difficulty of safe deployment. This paper presents a fully Python-based, data-driven deep reinforcement learning (DRL) supervisory control framework that leverages gray box surrogate modeling and Imitation Learning to overcome these barriers. The novelty of this work lies in the integration of an ontology-based Twin4Build surrogate model with Imitation Learning and Deep Reinforcement Learning, enabling efficient training of building control policies in a low-cost environment before transfer to a high-fidelity BOPTEST emulator. Results demonstrate that the trade-off of using a lower-accuracy surrogate accelerates training by a factor of 11 compared to high-fidelity models. Furthermore, the RL agent successfully learned load-shifting and peak-shaving strategies, eliminating start-up power spikes and achieving energy savings of up to 28.9%. Beyond substantial energy reductions, this pipeline yields a calibrated digital twin suitable for ongoing building services like anomaly detection, presenting a scalable path for real-world smart building optimization. Full article
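The Imitation Learning warm start can be illustrated with a toy behavior-cloning policy that mimics demonstrations from a rule-based controller via nearest-neighbor lookup; the states, actions, and setpoints below are invented for illustration and are far simpler than the paper's DRL agent.

```python
def clone_policy(demonstrations):
    """Behavior-cloning warm start: imitate a controller by nearest-neighbor
    lookup over (state, action) demonstration pairs. A toy stand-in for the
    Imitation Learning pre-training stage; states here are scalar temperatures."""
    def policy(state):
        # Return the action of the closest demonstrated state.
        return min(demonstrations, key=lambda sa: abs(sa[0] - state))[1]
    return policy
```

The cloned policy would then seed the DRL agent before fine-tuning in the surrogate environment.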
31 pages, 15712 KB  
Article
Real-Time Anomaly Detection for Civil Aviation VHF Communications Using Learnable Kernels and Conditional GANs
by Junyi Zhai, Gang Sun, Zhengqiang Li, Quanxin Cao and Yufeng Huang
Aerospace 2026, 13(3), 270; https://doi.org/10.3390/aerospace13030270 - 13 Mar 2026
Abstract
Civil aviation VHF communication is safety-critical, yet operational links are routinely disturbed by atmospheric effects, aging hardware, and electromagnetic interference. The resulting anomalies are typically weak, intermittent, and extremely rare, which makes real-time detection difficult under strong temporal dependence and severe class imbalance. We propose an end-to-end framework that couples (i) a learnable kernel projection for adaptive nonlinear feature extraction, (ii) a differentiable relevance–redundancy objective for feature refinement, and (iii) conditional temporal generation to augment minority anomaly patterns. A lightweight CNN–LSTM head is used for streaming inference. Training uses a mixture of operational anomalies and simulated degradation scenarios, while evaluation is conducted using operational data only. Experiments on 1.2 million VHF frames collected from real flight operations and ground station monitoring achieve an F1-score of 0.947, ROC-AUC of 0.972, and PR-AUC of 0.968, with an average inference latency of 34.7 ms. Full article
(This article belongs to the Section Air Traffic and Transportation)
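The reported precision, recall, F1-score, and AUC values follow the standard definitions; F1 can be computed directly from confusion counts (the counts below are illustrative, not the paper's).

```python
def f1_score(tp, fp, fn):
    """Precision, recall, and F1 from confusion counts, the metric family
    behind the reported 0.947 F1 (counts here are illustrative)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```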
13 pages, 332 KB  
Article
Data-Driven Operational Bounds of Transmembrane Pressure for Modelling and Digital Twin Development in Haemodialysis and Haemodiafiltration
by Alexandru Dinu, Mădălin Frunzete and Denis Mihailovschi
Bioengineering 2026, 13(3), 331; https://doi.org/10.3390/bioengineering13030331 - 12 Mar 2026
Abstract
Transmembrane pressure (TMP) is a central state variable in haemodialysis (HD) and haemodiafiltration (HDF), governing ultrafiltration dynamics, convective transport, and membrane performance. Although dialysis devices specify high maximum allowable pressure limits derived from in vitro testing and mechanical safety margins, the effective operating pressure space encountered under routine clinical conditions remains insufficiently quantified from a systems engineering perspective. In this study, aggregated real-world minimum–maximum TMP intervals collected from four geographically distributed dialysis centres were used to anchor a model-based characterisation of operational pressure ranges. To enable reproducible modelling and numerical exploration, Gaussian-based synthetic datasets were constructed from empirically observed pressure intervals while incorporating physiological and operational constraints. Across all centres, HD exhibited stable and narrowly distributed TMP values (typically 20–60 mmHg), whereas HDF operated within higher but well-defined pressure regimes (approximately 120–260 mmHg). Values above 300 mmHg were rare, and pressures exceeding 400 mmHg were not observed under routine conditions. Statistical tail modelling, extreme value theory, and unsupervised anomaly detection consistently identified such extreme pressures as structurally incompatible with the learned operational state space. These results provide quantitative engineering bounds for TMP that may be directly integrated into reduced-order models, control design, and digital twin development for dialysis systems. By constraining modelling environments to empirically supported pressure regimes, the proposed framework enhances numerical stability, prevents non-physical extrapolation, and supports physiologically realistic data-driven applications in biomedical engineering. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
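The Gaussian synthetic datasets anchored to observed min-max TMP intervals can be sketched as truncated Gaussian sampling; centering the mean on the interval midpoint and setting sigma to one sixth of the range are assumptions of this sketch, not the paper's stated construction.

```python
import random
import statistics

def synth_tmp(lo, hi, n=1000, seed=0):
    """Gaussian synthetic TMP samples anchored to an observed min-max
    interval [lo, hi] in mmHg, resampling any draw outside the interval.
    Midpoint mean and sigma = range/6 are assumptions of this sketch."""
    rng = random.Random(seed)
    mu, sigma = (lo + hi) / 2, (hi - lo) / 6  # ~99.7% of mass inside the band
    out = []
    while len(out) < n:
        v = rng.gauss(mu, sigma)
        if lo <= v <= hi:
            out.append(v)
    return out
```

For the HDF regime quoted above (approximately 120-260 mmHg), every sample stays inside the empirically supported band by construction.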
43 pages, 2166 KB  
Article
Research on Root Cause Analysis Method for Certain Civil Aircraft Based on Ensemble Learning and Large Language Model Reasoning
by Wenyou Du, Jingtao Du, Haoran Zhang and Dongsheng Yang
Machines 2026, 14(3), 322; https://doi.org/10.3390/machines14030322 - 12 Mar 2026
Abstract
To address the challenges commonly encountered in civil aircraft operating under multi-mode, strongly coupled closed-loop control—namely scarce fault samples, pronounced distribution shift, and root-cause explanations that are easily confounded by covariates—this paper proposes a root-cause analysis method that integrates ensemble learning with constraint-guided reasoning by large language models (LLMs). First, for Full Authority Digital Engine Control (FADEC) monitoring sequences, a feature system comprising environment-normalized ratios, mechanism-informed mixing indices, and multi-scale temporal statistics is constructed, thereby improving cross-mode comparability and enhancing engineering-semantic expressiveness. Second, in the anomaly detection stage, a cost-sensitive LightGBM model is adopted and a validation-set-based adaptive thresholding strategy is introduced to achieve robust identification under highly imbalanced fault conditions. Furthermore, for Root Cause Analysis (RCA), a “computation–reasoning decoupling” framework is developed: Shapley Additive exPlanations (SHAP) are used to generate segment-level contribution evidence, while causal chains, engineering prohibitions, and structured output templates are injected into prompts to constrain the LLM, enabling it to infer root-cause candidates and produce structured explanations under mechanism-consistency constraints. Experiments on real flight data demonstrate that our method yields an anomaly detection F1-score of 0.9577 and improves overall RCA accuracy to 97.1% (versus 62.3% for a pure SHAP baseline). Practically, by translating complex high-dimensional data into actionable natural language diagnostic reports, the proposed method provides reliable and interpretable decision support for rapid RCA. Full article
(This article belongs to the Section Automation and Control Systems)
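The validation-set-based adaptive thresholding can be sketched as a search over candidate thresholds for the one maximizing F1 on held-out scores; using the scores themselves as the candidate set is an assumption of this sketch.

```python
def best_threshold(scores, labels):
    """Pick the anomaly-score threshold that maximizes F1 on a validation
    set, a simple form of the adaptive thresholding step described in the
    abstract. Candidate thresholds are the observed scores themselves."""
    best_t, best_f1 = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

Under heavy class imbalance, maximizing F1 on validation data avoids fixing an arbitrary global cutoff.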
28 pages, 6053 KB  
Article
A Low-Cost Predictive Maintenance System for CO2 Laser Cutting Machines Based on Multi-Sensor Data and Supervised Machine Learning
by Mayra Comina Tubón, Joe Guerrero and Cristina Manobanda
Appl. Sci. 2026, 16(6), 2689; https://doi.org/10.3390/app16062689 - 11 Mar 2026
Abstract
This study presents a structured multi-sensor predictive maintenance framework for CO2 laser cutting machines based on real-time data acquisition and supervised machine learning. The proposed architecture integrates heterogeneous sensor signals—including vibration, temperature, humidity, and acoustic measurements—through synchronized feature-level fusion to characterize machine operational states. A statistically grounded thresholding strategy, validated using two years of operational observations and controlled experimental perturbations, is employed to distinguish normal and abnormal behavior. Sensor data are processed using a Decision Tree classifier implemented in Python with Scikit-learn, enabling short-horizon probabilistic fault prediction during operational cycles. The system is deployed in a real industrial environment and validated using cross-validation and structured dataset partitioning to assess generalization performance. Results demonstrate reliable fault discrimination capability under controlled operational conditions, highlighting the effectiveness of feature-level sensor integration for early anomaly detection. The modular hardware–software architecture supports adaptability to other CNC platforms with appropriate recalibration and retraining. The proposed framework provides a low-cost, interpretable, and computationally efficient solution for real-time industrial predictive maintenance applications. Full article
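The statistically grounded thresholding strategy can be sketched as a mean plus/minus k-sigma band learned from baseline operation; k = 3 and the classification into two states are assumptions of this sketch.

```python
import statistics

def threshold_state(baseline, value, k=3.0):
    """Classify a sensor reading as 'normal' or 'abnormal' against a
    mean +/- k*std band learned from baseline operation, a simplified
    form of the validated thresholding described in the abstract."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return "normal" if abs(value - mu) <= k * sd else "abnormal"
```

Per-sensor bands like this would feed the fused feature vector consumed by the Decision Tree classifier.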
45 pages, 9532 KB  
Review
Advances, Challenges, and Recommendations for Non-Destructive Testing Technologies for Wind Turbine Blade Damage: A Review of the Literature from the Past Decade
by Guodong Qin, Yongchang Jin, Lizheng Qiao and Zhenyu Wu
Sensors 2026, 26(6), 1773; https://doi.org/10.3390/s26061773 - 11 Mar 2026
Abstract
As critical components of wind energy systems, the structural integrity of wind turbine blades is directly tied to the operational safety and economic performance of wind turbines. With blade designs trending toward larger and more flexible structures and operating environments becoming increasingly harsh, maintenance strategies must urgently shift from reactive approaches to predictive maintenance paradigms. From an engineering application perspective, this study conducts a systematic and critical review of non-destructive testing (NDT) and structural health monitoring (SHM) technologies for wind turbine blades. Drawing on the literature published over the past decade, we examine the field applicability, limitations, and engineering challenges of core NDT techniques—including vision-based methods, acoustic approaches, vibration analysis, ultrasound, and infrared thermography. Particular emphasis is placed on the integration of data-driven approaches with engineering practice, evaluating the role of machine learning in fault classification and anomaly diagnosis, as well as the contributions of deep learning to automated defect detection in image and signal data. Moreover, this paper critically discusses the growing use of robotic inspection platforms, such as unmanned aerial vehicles and climbing robots, as multi-sensor carriers enabling rapid and comprehensive blade assessment. By comparatively analyzing detection performance, cost, and automation levels across technologies, we identify key engineering barriers, including environmental noise robustness, signal attenuation within complex blade structures, and the persistent gap between laboratory methods and field deployment. Finally, we outline forward-looking research directions, encompassing multi-modal sensor fusion, edge computing for real-time diagnostics, and the development of standardized SHM systems aimed at supporting full lifecycle blade management. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
14 pages, 3131 KB  
Article
Prenatal Classification and Perinatal Outcomes of Fetal Umbilical–Portal–Systemic Venous Shunts: A Tertiary Center Experience
by Kubra Kurt Bilirer, Hale Özer Caltek, Tuğçe Arslanoğlu, Fırat Ersan and Hakan Erenel
Diagnostics 2026, 16(6), 829; https://doi.org/10.3390/diagnostics16060829 - 11 Mar 2026
Abstract
Background/Objectives: Umbilical–portal–systemic venous shunts (UPSVS) are rare fetal vascular anomalies with heterogeneous embryologic origins and variable perinatal implications. Although prenatal diagnosis has increased with advances in fetal imaging, data correlating prenatal subclassification with structural/genetic abnormalities and neonatal outcomes remain limited. Methods: This retrospective study included 50 fetuses prenatally diagnosed with UPSVS at a tertiary referral perinatology center between 2021 and 2025. Cases were subclassified according to the Achiron prenatal classification into Type 1 umbilical–systemic shunt (USS), Type 2 ductus venosus–systemic shunt (DVSS), Type 3a intrahepatic portosystemic shunt (IHPSS), and Type 3b extrahepatic portosystemic shunt (EHPSS). Prenatal ultrasound, Doppler, fetal echocardiography, and genetic testing (karyotype and chromosomal microarray) were analyzed. Perinatal metrics—including structural/genetic anomalies, fetal growth restriction (FGR), termination of pregnancy (TOP), and neonatal outcomes—were evaluated with postnatal verification. Results: The distribution of subtypes was Type 1: 28% (14/50), Type 2: 48% (24/50), Type 3a: 20% (10/50), and Type 3b: 4% (2/50). Gestational age at diagnosis was significantly higher in Type 3a compared with Type 1 and Type 2 (32.2 ± 2.4 vs. 21.1 ± 6.7 and 22.4 ± 5.8 weeks; p < 0.001). Structural anomalies were most frequent in Type 1 (13/14, 92.9%; p < 0.001), while FGR predominated in Type 3a (9/10, 90%; p = 0.006). Ductus venosus (DV) agenesis was universal in Type 1 (14/14) and Type 3b (2/2), absent in Type 2 (0/24), and present in 20% of Type 3a (2/10) (p < 0.001). Genetic abnormalities were detected in 57% of Type 1 (4/7) and 56% of Type 2 (9/16) fetuses, with trisomy 21 most prevalent in Type 2. TOP was highest in Type 1 (8/14, 57.1%; p < 0.001). 
Adverse neonatal outcomes occurred primarily in Type 1 and Type 3b (p < 0.001), whereas Type 2 demonstrated favorable neonatal outcomes. Conclusions: UPSVS subtype is strongly associated with structural/genetic anomalies, FGR, and neonatal outcomes, underscoring the importance of prenatal subclassification in prognostic assessment and counseling. Type 1 and Type 3b represent the highest-risk subgroups requiring delivery planning in tertiary centers, while Type 2 generally exhibits a benign perinatal course. The association between Type 3a and FGR highlights the need for detailed evaluation of the hepatic venous system in growth-restricted fetuses. However, interpretation of subgroup-specific associations should consider the relatively small sample size of Type 3b cases and the limited genetic testing performed in some Type 3a fetuses. Multicenter prospective studies are warranted to standardize diagnostic algorithms, optimize genetic testing strategies, and refine perinatal management. Full article
(This article belongs to the Section Clinical Diagnosis and Prognosis)
32 pages, 6386 KB  
Article
Crossing the Threshold: Land Cover Change Triggers Hydrological Regime Shift in Brazil’s Itaipu Hydropower Region
by Jessica Besnier, Augusto Getirana and Venkataraman Lakshmi
Remote Sens. 2026, 18(6), 848; https://doi.org/10.3390/rs18060848 - 10 Mar 2026
Abstract
Rapid agricultural expansion threatens water security in one of the world’s largest hydroelectric systems, the Itaipu dam, located on the Brazil–Paraguay border. Yet regional hydrological responses to land cover change and climate variability remain insufficiently characterized at management-relevant scales. The Upper Paraná River Basin (UPRB), which sustains agriculture, hydropower, and municipal water supply across both countries, exemplifies this challenge as accelerating cropland conversion raises concerns about long-term water availability. This study investigates hydrological transitions and their statistical associations with land cover changes in the Itaipu study region from 2002 to 2023. We integrate GRACE/GRACE-FO (Gravity Recovery and Climate Experiment Follow-On), Terrestrial Water Storage Anomalies (TWSAs), MODIS (Moderate Resolution Imaging Spectroradiometer) land cover, CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) precipitation, and LandScan population density using Pettitt’s breakpoint test and Mann–Kendall trend analysis to detect temporal breakpoints and quantify co-variability between hydrology and land surface dynamics. Together, these methods identify a significant basin-wide shift in TWSAs in mid-2009, with storage increases of 151.6 cm at Itaipu and 103.1 cm at Yguazú Reservoir. Over the study period, cropland expanded from 13.5% to 37.9% of total land cover, while savanna declined from 28.1% to 24.2%. After 2009, correlations between land cover and TWSAs strengthened substantially, particularly for wetlands (r = 0.88), croplands (r = 0.73), and savannas (r = −0.81; all p < 0.001), indicating strong coupling between landscape transformation and basin-scale storage variability. Principal Component Analysis shows land use change explains 39–41% of TWSA variance, exceeding hydroclimatic contributions. 
Granger causality analysis reveals bidirectional coupling between wetlands and water storage at Itaipu, while cropland and savanna dynamics exert predictive influence on downstream hydrology in the Yguazú basin. Water balance decomposition further indicates a post-2009 regime shift, with residual storage transitioning from −10.6 to +4.7 and 78% greater runoff generation per unit precipitation, consistent with reduced infiltration capacity. Together, these findings underscore intensifying land–water feedback and the need for adaptive watershed management under expanding agriculture and climate variability. Full article
(This article belongs to the Special Issue Satellite Gravimetry for the Retrieval of Hydrological Variables)
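The Mann-Kendall trend analysis used above reduces to the S statistic, a signed count over all ordered pairs of observations; a minimal version (without the variance normalization used for significance testing):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: positive for an increasing trend,
    negative for decreasing, near zero for no trend. The abstract pairs
    this with Pettitt's breakpoint test; only S is sketched here."""
    s = 0
    for i in range(len(series) - 1):
        for j in range(i + 1, len(series)):
            d = series[j] - series[i]
            s += (d > 0) - (d < 0)  # sign of each later-minus-earlier pair
    return s
```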
27 pages, 1259 KB  
Review
Integrating Artificial Intelligence in Audit Workflow: Opportunities, Architecture, and Challenges: A Systematic Review
by Ashif Anwar and Muhammad Osama Akeel
Account. Audit. 2026, 2(1), 4; https://doi.org/10.3390/accountaudit2010004 - 9 Mar 2026
Abstract
Background: This paper is a systematic review of 100 peer-reviewed articles (2015–2025) related to artificial intelligence (AI) applications in the auditing field, and includes machine learning, natural language processing, robotic process automation, and other AI methods. Purpose: The paper delves into the integration of these AI technologies into the audit workflow; empirical implications of these technologies on audit effectiveness; efficiency and quality; and technical, organizational, and regulatory obstacles that suggest more widespread adoption is still limited. Methods: Five large-scale databases and other sources were searched and selected using PRISMA; structured data were extracted, assessed in quality and narrative, and thematically analyzed. Results: The discussion indicates that machine learning-based anomaly detection and predictive analytics, document analysis through NLP, and automation through RPA are becoming part of planning, risk assessments, control tests, and substantive procedures/reporting, with improvements in detection capabilities, coverage and efficiency reported in various empirical and design science studies. The review also presents common architectural models of AI-enabled audit processes, including layered data and governance, model development and oversight, orchestration and automation, auditor-facing applications, and human-in-the-loop controls. Conclusions: The article proposes an AI-based audit workflow reference architecture and summarizes evidence on opportunities, threats, and implementation obstacles, highlighting gaps in longitudinal assessment, comparative evaluation of AI methods, and regulatory recommendations. The results have practical implications for auditors, standard-setters, and system designers seeking to revise the audit approach and regulations to enable AI-driven assurance. Full article
22 pages, 3908 KB  
Article
Physics-Topology-Anchored Learning: A Robust and Lightweight Framework for Time-Series Prediction and Anomaly Detection Under Data Scarcity
by Xuanhao Hua, Weiqi Yin, Libin Wang, Meng Ma, Jianfeng Yuan and Jing Zhang
Sensors 2026, 26(5), 1721; https://doi.org/10.3390/s26051721 - 9 Mar 2026
Abstract
Health monitoring of complex systems is critical for ensuring reliability and achieving cost-effective reusability. However, deploying deep learning models in this domain is impeded by two primary constraints: the scarcity of high-quality fault samples and the restricted computational resources available on-board. To address these challenges, this paper proposes a Physics-Topology-Anchored Learning (PTAL) framework. The core innovation lies in the effective integration of physical inductive bias into the model architecture. Specifically, PTAL incorporates a predefined adjacency matrix, derived from the physical mechanism, as a structural prior. This design anchors the neural network to explicit physical causality, effectively constraining the hypothesis space and reducing the model’s dependency on large-scale data. Furthermore, by coupling this physics-informed structure with a lightweight recurrent attention mechanism, the model avoids the high computational overhead typical of generic large-scale networks. Experimental evaluations demonstrate that PTAL achieves a peak diagnostic accuracy of 97.8% and a low standard deviation of 0.1145, significantly outperforming baseline models in data-scarce regimes. The results confirm that the proposed model successfully leverages physical bias to maintain a favorable trade-off between diagnostic performance and computational efficiency, making it highly suitable for the resource-constrained environments of complex systems. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
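The abstract's core idea — a predefined adjacency matrix, derived from the physical mechanism, acting as a structural prior that constrains the hypothesis space — can be sketched in a few lines: weights whose connections the physical topology rules out are masked to zero, so the network can only couple physically related channels. The toy topology, the `anchor_weights` helper, and the layer below are hypothetical illustrations of that general technique, not the paper's PTAL implementation.

```python
# Hypothetical sketch: a physics-derived adjacency matrix used as a hard
# structural prior on a linear layer, so only physically connected sensor
# channels can influence each other.

# Toy topology: channel 0 feeds 1, channel 1 feeds 2; no direct 0 -> 2 path.
# ADJACENCY[i][j] = 1 means channel j may contribute to channel i.
ADJACENCY = [
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
]

def anchor_weights(weights, adjacency):
    """Zero out every weight whose connection the physical topology forbids."""
    return [
        [w if a else 0.0 for w, a in zip(w_row, a_row)]
        for w_row, a_row in zip(weights, adjacency)
    ]

def masked_linear(x, weights, adjacency):
    """One physics-anchored layer: y_i = sum_j (A_ij * W_ij) * x_j."""
    anchored = anchor_weights(weights, adjacency)
    return [sum(w * xj for w, xj in zip(row, x)) for row in anchored]

# Freely initialised weights; the prior prunes the physically impossible ones,
# shrinking the hypothesis space before any training data is seen.
W = [
    [0.5, 0.9, -0.3],
    [0.2, 0.4, 0.8],
    [-0.6, 0.1, 0.7],
]
y = masked_linear([1.0, 2.0, 3.0], W, ADJACENCY)
```

In a trainable version, the same mask would be applied to the weight matrix at every forward pass, so gradient descent can only adjust the physically admissible couplings.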

41 pages, 5011 KB  
Review
Recent Techniques Used for Anomaly Detection in the Automotive Sector: A Comprehensive Survey
by Cihangir Derse, Sajib Chakraborty and Omar Hegazy
Appl. Sci. 2026, 16(5), 2584; https://doi.org/10.3390/app16052584 - 8 Mar 2026
Abstract
The rapid digital transformation of industrial systems in the 21st century has led to an exponential growth in data generated by manufacturing processes and end-user products, particularly in the automotive sector. While this big data creates new opportunities for monitoring and diagnostics, it also introduces significant challenges related to system complexity, scalability, and nonlinearity, as well as the increasing shortage of experienced domain experts. These challenges motivate the adoption of intelligent, automated fault and anomaly detection techniques capable of operating reliably under real-world conditions. The primary objective of this paper is to provide a comprehensive and structured review of anomaly detection methodologies for automotive applications, with particular emphasis on intelligent fault diagnosis, fault tolerance, and monitoring architectures. To this end, the paper systematically categorizes existing approaches, including model-based, data-driven, and hybrid techniques, and analyzes their underlying principles, data requirements, computational complexity, and applicability to safety-critical systems. Based on this analysis, the paper highlights current limitations, open research challenges, and emerging trends, including the integration of machine learning and artificial intelligence with domain knowledge and control-oriented frameworks. The main contribution of this work is a unified perspective that supports researchers and practitioners in selecting, designing, and deploying effective anomaly detection solutions for next-generation automotive and cyber-physical systems. Full article
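As a minimal illustration of the model-based, data-driven, and hybrid categories this survey contrasts, the sketch below combines the first two: a hypothetical first-order thermal model supplies residuals (model-based), and a threshold learned from fault-free residuals flags anomalies (data-driven). The cooling model, its constant `k`, and all function names are illustrative assumptions, not drawn from the survey.

```python
import statistics

def model_predict(prev_temp, ambient, k=0.1):
    # Model-based part: hypothetical first-order cooling model of a component,
    # relaxing the temperature toward ambient at rate k per sample.
    return prev_temp + k * (ambient - prev_temp)

def fit_threshold(normal_residuals, n_sigma=3.0):
    # Data-driven part: learn a residual band from fault-free training data.
    mu = statistics.fmean(normal_residuals)
    sigma = statistics.stdev(normal_residuals)
    return mu, n_sigma * sigma

def detect(measurements, ambient, mu, band):
    # Hybrid detector: flag samples whose model residual leaves the band.
    flags = []
    for prev, curr in zip(measurements, measurements[1:]):
        residual = curr - model_predict(prev, ambient)
        flags.append(abs(residual - mu) > band)
    return flags

# Usage: residuals recorded under normal operation set the threshold; a
# measurement that jumps against the physical model is then flagged.
mu, band = fit_threshold([0.1, -0.1, 0.05, -0.05, 0.0])
flags = detect([85.0, 79.0, 73.6, 80.0], ambient=25.0, mu=mu, band=band)
```

A purely data-driven method would learn the signal's statistics directly, with no physical model in the loop; the hybrid form above tends to need less training data because the model already explains the nominal dynamics.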

29 pages, 6223 KB  
Article
Distinguishing Process Faults from Model Drift Through Variable Contribution Analysis: A Novel Perspective on Anomaly Diagnosis
by Thiago K. Anzai and José Carlos Costa da Silva Pinto
Processes 2026, 14(5), 859; https://doi.org/10.3390/pr14050859 - 7 Mar 2026
Abstract
Conventional anomaly diagnosis methods often treat process faults and model drift as distinct, independent issues: anomalous behavior is attributed to process problems, whereas drift is seen as a secondary concern. This traditional perspective neglects the fact that, when a fault is detected, the first diagnosis that must be provided concerns the source of the observed deviation: a process fault or a model malfunction. In this context, the present study tackles this fundamental diagnosis problem, proposing that effective anomaly diagnosis should distinguish process faults from model inadequacies originating from operational changes. To address this challenge, the Nearest Normal Value (NNV) contribution analysis technique was developed to quantify individual variable contributions through counterfactual analysis. Unlike conventional diagnostic methods that rely on static references, the NNV technique provides contribution profiles that characterize the operational state dynamically. The methodology was validated using three distinct datasets, including actual operational data from an oil production system. On the real data, the normalized dispersion index (S) decreased from 0.92 to 0.58 during a documented fault (a 37% change), whereas it changed from 0.76 to 0.63 during an operating mode shift (a 17% change), thus showing distinct contribution signatures for faults versus drift-related regime changes. The findings suggest that incorporating the proposed approach into anomaly diagnosis systems could reduce false alarms and improve diagnostic accuracy in dynamic industrial environments where operating conditions evolve over time. Full article
(This article belongs to the Section Process Safety and Risk Management)
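The counterfactual idea behind NNV, as described in the abstract, can be caricatured in a few lines: for a flagged observation, find the nearest point in a library of normal operation, and read each variable's contribution off the counterfactual difference. The nearest-neighbour search, the contribution definition, and the concentration measure below are illustrative assumptions; the paper's actual formulation of the contribution profiles and of the dispersion index S may differ.

```python
import math

def nearest_normal_value(x, normal_library):
    """Counterfactual reference: the closest fault-free operating point."""
    return min(normal_library, key=lambda p: math.dist(x, p))

def contributions(x, normal_library):
    """Per-variable contribution: distance to the nearest normal value."""
    nnv = nearest_normal_value(x, normal_library)
    return [abs(xi - ni) for xi, ni in zip(x, nnv)]

def concentration(contrib):
    """Hypothetical concentration measure over contribution shares:
    1.0 when one variable dominates (fault-like localized deviation),
    near 1/n when the deviation is spread evenly (drift-like shift)."""
    total = sum(contrib) or 1.0
    shares = [c / total for c in contrib]
    return max(shares)

# Usage: a two-variable observation compared against a tiny normal library.
normal = [[1.0, 1.0], [2.0, 2.0]]
x = [2.0, 5.0]
profile = contributions(x, normal)   # only variable 2 deviates
```

Because the reference point is recomputed per observation rather than fixed, the contribution profile follows the current operating region, which is the property the abstract credits for separating faults from drift-related regime changes.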
