Search Results (178)

Search Parameters:
Keywords = ML-based anomaly detection

20 pages, 4549 KB  
Article
Online Track Anomaly Detection: Comparison of Different Machine Learning Techniques Through Injection of Synthetic Defects on Experimental Datasets
by Giovanni Bellacci, Luca Di Carlo, Marco Fiaschi, Luca Bocciolini, Carmine Zappacosta and Luca Pugi
Machines 2026, 14(4), 424; https://doi.org/10.3390/machines14040424 - 10 Apr 2026
Abstract
The adoption of instrumented wheelsets on diagnostic trains offers the possibility of continuously monitoring wheel–rail contact forces. The resulting large datasets can be exploited for diagnostic purposes, aiming to localize specific track defects and allowing significant improvements in terms of safety and maintenance costs. Machine learning (ML) techniques can be used to automate anomaly detection. In this work, the authors compare various ML algorithms based on different frequency- or time-domain features of the analyzed signals. To support this comparison, a significant number and variety of local defects were included in the recorded data. From a practical point of view, inserting real, known defects into an existing line is extremely time-consuming, expensive, and not free of safety issues, whereas the design of anomaly detection algorithms requires relatively large datasets covering different faulty conditions. The authors therefore propose combining real contact-force profiles of healthy lines with synthetic signals that reproduce the behavior and variability of the foreseen faulty conditions. Although preliminary, this work offers a contribution to the scientific community in terms of both the obtained results and the adopted methodology. Full article
(This article belongs to the Special Issue AI-Driven Reliability Analysis and Predictive Maintenance)
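As a rough illustration of the injection-and-detection idea this abstract describes, the sketch below adds a synthetic local defect to a stand-in "healthy" contact-force signal and flags it from simple windowed time/frequency features. The signal, defect shape, feature set, and the IsolationForest detector are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): inject a synthetic local defect
# into a stand-in healthy contact-force signal and detect it from windowed features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=60_000)          # placeholder for a healthy force record

def inject_defect(signal, position, width=200, amplitude=4.0):
    """Add a localized, damped oscillation as a synthetic track defect."""
    sig = signal.copy()
    t = np.arange(width)
    sig[position:position + width] += amplitude * np.exp(-t / 50) * np.sin(2 * np.pi * t / 20)
    return sig

def window_features(signal, win=500):
    """Per-window RMS, peak and high-frequency energy (simple time/frequency features)."""
    feats = []
    for start in range(0, len(signal) - win, win):
        w = signal[start:start + win]
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([np.sqrt(np.mean(w ** 2)), np.max(np.abs(w)), spectrum[len(spectrum) // 2:].sum()])
    return np.array(feats)

faulty = inject_defect(healthy, position=30_000)
detector = IsolationForest(random_state=0).fit(window_features(healthy))   # train on healthy windows only
scores = detector.decision_function(window_features(faulty))
print("most anomalous window:", int(np.argmin(scores)))   # should lie near the injected defect
```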

24 pages, 4459 KB  
Article
AI-Driven Decision Support System for Proactive Risk Management in Construction Projects
by Jon Zorrilla, Sandra Seijo, Unai Arenal and Juan Ramón Mena
Intell. Infrastruct. Constr. 2026, 2(2), 4; https://doi.org/10.3390/iic2020004 - 26 Mar 2026
Viewed by 480
Abstract
Construction projects frequently face risks such as anomalies, delays, and bottlenecks, which can substantially affect timelines and budgets. This study proposes a machine learning (ML)-based framework for early identification of risks in construction projects, enabling pattern understanding and decision-making through clustering, outlier and bottleneck detection, and relevant variables identification. It uses a business process management (BPM) dataset of construction documents and applies clustering techniques to both numerical and mixed datasets to group documents with similar characteristics, enabling the detection of temporal deviations and the patterns behind them. Additionally, an ensemble anomaly detection model based on different algorithms is implemented to identify outliers through key variables, which may indicate hidden risks and planning errors. Explainable artificial intelligence (XAI) techniques are then used to analyse the importance of the variables, supporting the identification and analysis of bottlenecks that may compromise project success. The results reveal an F1 score of 0.73 in bottleneck detection using three understandable decision rules, a 6% rate of anomalies within the dataset, and three distinct project clusters. This approach enables accurate and timely detection of risks while providing valuable insights for decision-making, improving risk management, and optimising project execution in the architecture, engineering and construction (AEC) industry. Full article
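To make the "ensemble anomaly detection model based on different algorithms" concrete, the sketch below combines three standard outlier detectors by majority vote over toy numeric project features. The detectors, contamination rate, and data are assumptions for illustration; the paper's actual algorithms and BPM dataset are not reproduced here.

```python
# Illustrative sketch: majority-vote ensemble of outlier detectors over toy
# numeric project variables (not the paper's implementation or data).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.covariance import EllipticEnvelope
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))          # stand-in for document/process features
X[:25] += 6                            # a few planted "anomalous" records

Xs = StandardScaler().fit_transform(X)
detectors = [
    IsolationForest(contamination=0.06, random_state=0),
    EllipticEnvelope(contamination=0.06),
    OneClassSVM(nu=0.06),
]
votes = np.zeros(len(Xs), dtype=int)
for det in detectors:
    det.fit(Xs)
    votes += (det.predict(Xs) == -1)   # each detector flags outliers with -1

is_anomaly = votes >= 2                # flag a record when at least two detectors agree
print(f"{is_anomaly.mean():.1%} of records flagged as anomalous")
```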

82 pages, 6468 KB  
Article
Correction Functions and Refinement Algorithms for Enhancing the Performance of Machine Learning Models
by Attila Kovács, Judit Kovácsné Molnár and Károly Jármai
Automation 2026, 7(2), 45; https://doi.org/10.3390/automation7020045 - 6 Mar 2026
Viewed by 732
Abstract
The aim of this study is to investigate and demonstrate the role of correction functions and optimisation-based refinement algorithms in enhancing the performance of machine learning models, particularly in predictive anomaly detection tasks applied in industrial environments. The performance of machine learning models is highly dependent on the quality of data preprocessing, model architecture, and post-processing methodology. In many practical applications—particularly in time-series forecasting and anomaly detection—the conventional training pipeline alone is insufficient, because model uncertainty, structural bias and the handling of rare events require specialised post hoc calibration and refinement mechanisms. This study provides a systematic overview of the role of correction functions (e.g., Principal Component Analysis (PCA), Squared Prediction Error (SPE)/Q-statistics, Hotelling’s T2, Bayesian calibration) and adaptive improvement algorithms (e.g., Genetic Algorithms (GA), Particle Swarm Optimisation (PSO), Simulated Annealing (SA), Gaussian Mixture Model (GMM) and ensemble-based techniques) in enhancing the performance of machine learning pipelines. The models were trained on a real industrial dataset compiled from power network analytics and harmonic-injection-based loading conditions. Model validation and equipment-level testing were performed using a large-scale harmonic measurement dataset collected over a five-year period. The reliability of the approach was confirmed by comparing predicted state transitions with actual fault occurrences, demonstrating its practical applicability and suitability for integration into predictive maintenance frameworks. The analysis demonstrates that correction functions introduce deterministic transformations in the data or error space, whereas improvement algorithms apply adaptive optimisation to fine-tune model parameters or decision boundaries. The combined use of these approaches significantly reduces overfitting, improves predictive accuracy and lowers false alarm rates. This work introduces the concept of an Organically Adaptive Predictive (OAP) ML model. The proposed model presents organic adaptivity, continuously adjusting its predictive behaviour in response to dynamic variations in network loading and harmonic spectrum composition. The introduced terminology characterises the organically emergent nature of the adaptive learning mechanism. Full article
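Two of the correction statistics named in this abstract, the SPE/Q statistic and Hotelling's T², can be computed directly from a PCA decomposition. The minimal NumPy sketch below shows how; the data, the number of retained components, and the absence of control limits are placeholders, not the paper's industrial dataset or calibration procedure.

```python
# Illustrative sketch of two monitoring statistics named in the abstract:
# SPE/Q (residual energy) and Hotelling's T^2 on PCA scores. Data and the
# number of components are placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # training data (assumed standardized)
x_new = rng.normal(size=8) + np.array([3, 0, 0, 0, 0, 0, 0, 0])  # test sample

# PCA via SVD on mean-centred data, keeping k components
mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
P = Vt[:k].T                                    # loadings (8 x k)
lam = (s[:k] ** 2) / (len(X) - 1)               # retained component variances

def spe_and_t2(x):
    xc = x - mu
    t = xc @ P                                  # scores in the PCA subspace
    x_hat = t @ P.T                             # reconstruction from retained components
    spe = float(np.sum((xc - x_hat) ** 2))      # SPE / Q statistic
    t2 = float(np.sum(t ** 2 / lam))            # Hotelling's T^2
    return spe, t2

print("SPE, T^2 for new sample:", spe_and_t2(x_new))
```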

39 pages, 3580 KB  
Review
Application of AI in Cyberattack Detection: A Review
by Yaw Jantuah Boateng, Nusrat Jahan Mim, Nasrin Akhter, Ranesh Naha, Aniket Mahanti and Alistair Barros
Sensors 2026, 26(5), 1518; https://doi.org/10.3390/s26051518 - 28 Feb 2026
Viewed by 784
Abstract
In today’s fast-changing digital environment, cyber-physical systems face escalating security challenges due to increasingly sophisticated cyberattacks. Artificial Intelligence (AI) has emerged as a powerful enabler of modern cyberattack detection, offering scalable, accurate, and adaptive solutions to counter dynamic threats. This paper provides a comprehensive review of recent advancements in AI-based cyberattack detection, focusing on Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), Federated Learning (FL), and emerging techniques such as generative AI, neuro-symbolic AI, swarm intelligence, lightweight AI, and quantum Computing. We evaluate the strengths and limitations of these approaches, highlighting their performance on benchmark datasets. The review discusses traditional signature-based Intrusion Detection Systems (IDS) and their limitations against novel attack patterns, contrasted with AI-driven anomaly-based and hybrid detection methods that improve detection rates for unknown and zero-day attacks. Key challenges, including computational costs, data quality, privacy concerns, and model interpretability, are analysed alongside the role of Explainable AI (XAI) in enhancing trust and transparency. The impact of computational resources, dataset representativeness, and evaluation metrics on AI model performance is also explored. Furthermore, we investigate the potential of lightweight AI for resource-constrained environments like IoT and edge devices, and quantum computing’s role in advancing detection efficiency and cryptographic security. The paper also draws attention to future research directions, particularly the development of up-to-date datasets, integration of hybrid quantum–classical models, and optimisation of asynchronous FL protocols to address evolving cybersecurity challenges. This study aims to inspire innovation in AI-driven cyberattack detection, fostering robust, interpretable, and efficient solutions for securing complex digital environments. Full article
(This article belongs to the Section Communications)

24 pages, 1160 KB  
Article
Enhancing Data Security in Satellite Communication Systems: Integrating Quantum Cryptography with CatBoost Machine Learning
by Mohd Nadeem, Syed Anas Ansar, Sakshi Halwai, Arpita Singh and Rajeev Kumar
Information 2026, 17(3), 220; https://doi.org/10.3390/info17030220 - 25 Feb 2026
Viewed by 517
Abstract
In modern communication networks, particularly satellite-based systems, data security faces significant challenges from vulnerabilities such as signal interception, jamming, and latency during long-distance transmissions. Traditional cryptographic methods are increasingly vulnerable to quantum computing threats, underscoring the need for advanced solutions to protect data integrity, confidentiality, and availability. This research investigates the fusion of quantum cryptography and Machine Learning (ML) to improve security in satellite communication. Quantum Key Distribution (QKD), grounded in quantum mechanics, enables unbreakable encryption by detecting eavesdropping via quantum state disturbances. The CatBoost ML algorithm is applied to a dataset of 10,000 records with categorical attributes for prioritizing security elements such as anomaly detection, encryption types, and access controls. The model yields an accuracy of 89.23% and an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) score of 94.56%, effectively predicting threat levels. Feature importance analysis reveals anomaly detection (28.5%) and quantum encryption (22.3%) as the primary contributors. While hurdles such as high implementation costs and transmission range limitations persist, this quantum–ML synergy provides a proactive, adaptive framework for resilient, future-ready communication networks. Full article
(This article belongs to the Special Issue 2nd Edition of 5G Networks and Wireless Communication Systems)
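The CatBoost-on-categorical-attributes workflow the abstract describes can be sketched as below. The column names, toy labels, and hyperparameters are invented for illustration and do not reflect the paper's 10,000-record dataset; the CatBoost calls themselves (fit with `cat_features`, `predict_proba`, `get_feature_importance`) are standard library usage.

```python
# Illustrative sketch (toy data, invented column names): CatBoost over
# categorical security attributes, reporting AUC-ROC and feature importance.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "anomaly_detection": rng.choice(["none", "rule_based", "ml_based"], n),
    "encryption_type":   rng.choice(["classical", "qkd"], n),
    "access_control":    rng.choice(["open", "rbac", "mfa"], n),
})
# toy label: higher threat when anomaly detection is absent and encryption is classical
y = (df["anomaly_detection"] == "none") & (df["encryption_type"] == "classical")
y = (y | (rng.random(n) < 0.1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.3, random_state=0)
model = CatBoostClassifier(iterations=300, depth=4, verbose=0, random_seed=0)
model.fit(X_tr, y_tr, cat_features=list(df.columns))

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("AUC-ROC:", round(auc, 3))
for name, imp in zip(df.columns, model.get_feature_importance()):
    print(f"{name}: {imp:.1f}")
```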

53 pages, 3028 KB  
Review
Optimization and Machine Learning for Electric Vehicles Management in Distribution Networks: A Review
by Stefania Conti, Giovanni Aiello, Salvatore Coco, Antonino Laudani, Santi Agatino Rizzo, Nunzio Salerno, Giuseppe Marco Tina and Cristina Ventura
Energies 2026, 19(4), 986; https://doi.org/10.3390/en19040986 - 13 Feb 2026
Viewed by 933
Abstract
The growing penetration of Electric Vehicles (EVs) in power distribution networks presents both challenges and opportunities for grid operators and planners. This paper provides a comprehensive review of recent advances in optimization techniques and machine learning (ML) approaches for the efficient management of EV charging and integration in low- and medium-voltage distribution systems. Optimization methods are analyzed with reference to their objectives—such as load flattening, voltage regulation, loss minimization, and infrastructure cost reduction—and their capability to handle multi-objective, stochastic, and real-time constraints. Concurrently, the role of ML is explored in load forecasting, user behavior modeling, anomaly detection, and adaptive control strategies. Particular attention is given to hybrid approaches that combine optimization algorithms (e.g., MILP, heuristic methods) with data-driven models (e.g., neural networks, reinforcement learning), highlighting their effectiveness in enhancing grid flexibility and resilience. This review adopts a unified system-level perspective that links EV management objectives, optimization techniques, and machine learning-based solutions within distribution networks. In addition, particular attention is devoted to data availability, reproducibility, and practical deployment aspects, with the aim of identifying current limitations and providing actionable insights for future research and real-world applications. This study aims to support the development of intelligent energy management strategies for EVs, fostering a sustainable and reliable evolution of distribution networks. Full article

22 pages, 561 KB  
Review
A Systematic Review of Anomaly and Fault Detection Using Machine Learning for Industrial Machinery
by Syed Haseeb Haider Zaidi, Alex Shenfield, Hongwei Zhang and Augustine Ikpehai
Algorithms 2026, 19(2), 108; https://doi.org/10.3390/a19020108 - 1 Feb 2026
Cited by 2 | Viewed by 1409
Abstract
Unplanned downtime in industrial machinery remains a major challenge, causing substantial economic losses and safety risks across sectors such as manufacturing, food processing, oil and gas, and transportation. This systematic review investigates the application of machine learning (ML) techniques for anomaly and fault detection within the broader context of predictive maintenance. Following a hybrid review methodology, relevant studies published between 2010 and 2025 were collected from major databases including IEEE Xplore, ScienceDirect, SpringerLink, Scopus, Web of Science, and arXiv. The review categorizes approaches into supervised, unsupervised, and hybrid paradigms, analyzing their pipelines from data collection and preprocessing to model deployment. Findings highlight the effectiveness of deep learning architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), autoencoders, and hybrid frameworks in detecting faults from time series and multimodal sensor data. At the same time, key limitations persist, including data scarcity, class imbalance, limited generalizability across equipment types, and a lack of interpretability in deep models. This review concludes that while ML-based predictive maintenance systems are enabling a transition from reactive to proactive strategies, future progress requires improved hybrid architectures, Explainable AI, and scalable real-time deployment. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))

31 pages, 1140 KB  
Review
A Survey of Multi-Layer IoT Security Using SDN, Blockchain, and Machine Learning
by Reorapetse Molose and Bassey Isong
Electronics 2026, 15(3), 494; https://doi.org/10.3390/electronics15030494 - 23 Jan 2026
Viewed by 857
Abstract
The integration of Software-Defined Networking (SDN), blockchain (BC), and machine learning (ML) has emerged as a promising approach to securing Internet of Things (IoT) and Industrial IoT (IIoT) networks. This paper conducted a comprehensive review of recent studies focusing on multi-layered security across device, control, network, and application layers. The analysis reveals that BC technology ensures decentralised trust, immutability, and secure access validation, while SDN enables programmability, load balancing, and real-time monitoring. In addition, ML/deep learning (DL) techniques, including federated and hybrid learning, strengthen anomaly detection, predictive security, and adaptive mitigation. Reported evaluations show similar gains in detection accuracy, latency, throughput, and energy efficiency, with effective defence against threats, though differing experimental contexts limit direct comparison. It also shows that the solutions’ effectiveness depends on ecosystem factors such as SDN controllers, BC platforms, cryptographic protocols, and ML frameworks. However, most studies rely on simulations or small-scale testbeds, leaving large-scale and heterogeneous deployments unverified. Significant challenges include scalability, computational and energy overhead, dataset dependency, limited adversarial resilience, and the explainability of ML-driven decisions. Based on the findings, future research should focus on lightweight consensus mechanisms for constrained devices, privacy-preserving ML/DL, and cross-layer adversarial-resilient frameworks. Advancing these directions will be important in achieving scalable, interoperable, and trustworthy SDN-IoT/IIoT security solutions. Full article
(This article belongs to the Section Artificial Intelligence)

32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Cited by 1 | Viewed by 1328
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces), which operates both at the level of individual HTTP (Hypertext Transfer Protocol) requests and in an access-log batch-reading mode, explicitly linking technical anomalies with business risks. The system processes each event/access log through parallel numerical and textual branches: a set of anomaly detectors trained on engineered traffic features and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their results are integrated using a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and produces a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated using Docker Compose and instrumented with OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), Data Exposure, and Business Logic Abuse, while per-request processing time remains within real-time limits even in sequential testing mode. The pipeline thus bridges the gap between ML/NLP security research and practical API protection systems that can evolve over time through feedback and retraining. Full article
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
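The parallel numeric/textual idea can be sketched with off-the-shelf components: an IsolationForest over per-request numeric features, a character n-gram TF-IDF model over the request string, and a toy risk policy that also considers endpoint sensitivity. The features, training strings, thresholds, and risk labels below are invented for illustration; this is not the paper's pipeline.

```python
# Illustrative sketch of the parallel numeric + textual idea (toy data; the
# pipeline's real features, models and risk policy are not reproduced here).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# numeric branch: per-request engineered features (size, latency, status, ...)
numeric_model = IsolationForest(random_state=0).fit(rng.normal(size=(1000, 4)))

# textual branch: char n-gram TF-IDF over request strings
benign = ["GET /api/users?id=42", "POST /api/orders", "GET /health"]
malicious = ["GET /api/users?id=1 OR 1=1", "GET /../../etc/passwd", "POST /api?q=<script>"]
text_model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
).fit(benign + malicious, [0] * len(benign) + [1] * len(malicious))

def risk_level(request_text, numeric_features, endpoint_sensitive):
    """Toy risk-aware merge: escalate when a branch fires on a sensitive endpoint."""
    text_score = text_model.predict_proba([request_text])[0, 1]
    numeric_flag = numeric_model.predict([numeric_features])[0] == -1
    if text_score > 0.8 or (numeric_flag and endpoint_sensitive):
        return "HIGH"
    if text_score > 0.5 or numeric_flag:
        return "MEDIUM"
    return "LOW"

print(risk_level("GET /api/users?id=1 OR 1=1", rng.normal(size=4), endpoint_sensitive=True))
```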

32 pages, 3734 KB  
Article
A Hierarchical Framework Leveraging IIoT Networks, IoT Hub, and Device Twins for Intelligent Industrial Automation
by Cornelia Ionela Bădoi, Bilge Kartal Çetin, Kamil Çetin, Çağdaş Karataş, Mehmet Erdal Özbek and Savaş Şahin
Appl. Sci. 2026, 16(2), 645; https://doi.org/10.3390/app16020645 - 8 Jan 2026
Viewed by 1123
Abstract
Industrial Internet of Things (IIoT) networks, Microsoft Azure Internet of Things (IoT) Hub, and device twins (DvT) are increasingly recognized as core enablers of adaptive, data-driven manufacturing. This paper proposes a hierarchical IIoT framework that integrates industrial IoT networking, DvT for asset-level virtualisation, system-level digital twins (DT) for cell orchestration, and cloud-native services to support the digital transformation of brownfield, programmable logic controller (PLC)-centric modular automation (MA) environments. Traditional PLC/supervisory control and data acquisition (SCADA) paradigms struggle to meet interoperability, observability, and adaptability requirements at scale, motivating architectures in which DvT and IoT Hub underpin real-time orchestration, virtualisation, and predictive-maintenance workflows. Building on and extending a previously introduced conceptual model, the present work instantiates a multilayered, end-to-end design that combines a federated Message Queuing Telemetry Transport (MQTT) mesh on the on-premises side, a ZigBee-based backup mesh, and a secure bridge to Azure IoT Hub, together with a systematic DvT modelling and orchestration strategy. The methodology is supported by a structured analysis of relevant IIoT and DvT design choices and by a concrete implementation in a nine-cell MA laboratory featuring a robotic arm predictive-maintenance scenario. The resulting framework sustains closed-loop monitoring, anomaly detection, and control under realistic workloads, while providing explicit envelopes for telemetry volume, buffering depth, and latency budgets in edge-cloud integration. Overall, the proposed architecture offers a transferable blueprint for evolving PLC-centric automation toward more adaptive, secure, and scalable IIoT systems and establishes a foundation for future extensions toward full DvT ecosystems, tighter artificial intelligence/machine learning (AI/ML) integration, and fifth/sixth generation (5G/6G) and time-sensitive networking (TSN) support in industrial networks. Full article
(This article belongs to the Special Issue Novel Technologies of Smart Manufacturing)
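As a narrow illustration of the on-premises MQTT side of such an architecture, the sketch below publishes telemetry from an edge node and attaches a crude rolling-baseline anomaly flag before the data leaves the device. The broker address, topic, payload fields, and the 3-sigma check are placeholders; the paper's Azure IoT Hub and device-twin configuration is not reproduced, and a local MQTT broker (e.g., Mosquitto) plus the paho-mqtt client library are assumed.

```python
# Illustrative edge-side sketch (placeholder broker/topic; not the paper's
# Azure IoT Hub / device-twin setup): publish telemetry over MQTT and flag
# samples that drift far from a rolling baseline before they leave the edge.
import json
import random
import time
from collections import deque

import paho.mqtt.client as mqtt

BROKER, TOPIC = "localhost", "cell7/robot_arm/telemetry"   # assumed names

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

window = deque(maxlen=50)                 # rolling baseline of recent readings

def read_vibration_rms():
    """Placeholder for the real sensor read on the robotic arm."""
    return random.gauss(0.8, 0.05)

for _ in range(100):
    value = read_vibration_rms()
    window.append(value)
    mean = sum(window) / len(window)
    anomaly = len(window) > 10 and abs(value - mean) > 3 * 0.05   # crude 3-sigma check
    payload = {"ts": time.time(), "vibration_rms": value, "anomaly": anomaly}
    client.publish(TOPIC, json.dumps(payload))
    time.sleep(0.1)

client.loop_stop()
```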

22 pages, 1021 KB  
Article
A Multiclass Machine Learning Framework for Detecting Routing Attacks in RPL-Based IoT Networks Using a Novel Simulation-Driven Dataset
by Niharika Panda and Supriya Muthuraman
Future Internet 2026, 18(1), 35; https://doi.org/10.3390/fi18010035 - 7 Jan 2026
Cited by 2 | Viewed by 707
Abstract
The use of resource-constrained Low-Power and Lossy Networks (LLNs), where the IPv6 Routing Protocol for LLNs (RPL) is the de facto routing standard, has increased due to the Internet of Things’ (IoT) explosive growth. Because of the dynamic nature of IoT deployments and the lack of in-protocol security, RPL is still quite susceptible to routing-layer attacks like Blackhole, Lowered Rank, version number manipulation, and Flooding despite its lightweight architecture. Lightweight, data-driven intrusion detection methods are necessary since traditional cryptographic countermeasures are frequently unfeasible for LLNs. However, the lack of RPL-specific control-plane semantics in current cybersecurity datasets restricts the use of machine learning (ML) for practical anomaly identification. In order to close this gap, this work models both static and mobile networks under benign and adversarial settings by creating a novel, large-scale multiclass RPL attack dataset using Contiki-NG’s Cooja simulator. To record detailed packet-level and control-plane activity including DODAG Information Object (DIO), DODAG Information Solicitation (DIS), and Destination Advertisement Object (DAO) message statistics along with forwarding and dropping patterns and objective-function fluctuations, a protocol-aware feature extraction pipeline is developed. This dataset is used to evaluate fifteen classifiers, including Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), k-Nearest Neighbors (KNN), Random Forest (RF), Extra Trees (ET), Gradient Boosting (GB), AdaBoost (AB), and XGBoost (XGB) and several ensemble strategies like soft/hard voting, stacking, and bagging, as part of a comprehensive ML-based detection system. Numerous tests show that ensemble approaches offer better generalization and prediction performance. With overfitting gaps less than 0.006 and low cross-validation variance, the Soft Voting Classifier obtains the greatest accuracy of 99.47%, closely followed by XGBoost with 99.45% and Random Forest with 99.44%. Full article
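The soft-voting strategy highlighted in the results can be sketched with scikit-learn alone, as below. The synthetic features stand in for the paper's Cooja-derived control-plane statistics, and XGBoost is omitted; the VotingClassifier usage itself is standard.

```python
# Illustrative sketch of soft-voting multiclass classification (synthetic data;
# XGBoost and the paper's RPL control-plane features are not reproduced here).
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# stand-ins for control-plane features (DIO/DIS/DAO counts, drop rates, rank changes, ...)
X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",                      # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, ensemble.predict(X_te)), 4))
```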

25 pages, 4839 KB  
Article
AI/ML Based Anomaly Detection and Fault Diagnosis of Turbocharged Marine Diesel Engines: Experimental Study on Engine of an Operational Vessel
by Deepesh Upadrashta and Tomi Wijaya
Information 2026, 17(1), 16; https://doi.org/10.3390/info17010016 - 24 Dec 2025
Viewed by 1229
Abstract
Turbocharged diesel engines are widely used in marine applications, both for propulsion and as generators powering auxiliary systems. Many works have been published on the development of diagnosis tools for these engines using data from simulation models or from experiments on sophisticated engine test benches. However, simulation data differ considerably from actual operational data, and the sensor data available on an actual vessel are far more limited than those from test benches. It is therefore necessary to develop anomaly prediction and fault diagnosis models from the limited data available from in-service engines. In this paper, an artificial intelligence (AI)-based anomaly detection model and a machine learning (ML)-based fault diagnosis model were developed using actual data acquired from a diesel engine of a cargo vessel. Unlike previous works, the study uses operational, thermodynamic, and vibration data for anomaly detection and fault diagnosis. The paper presents the overall architecture of the proposed predictive maintenance system, including details on the sensorization of assets, data acquisition, edge computation, the AI model for anomaly prediction, and the ML algorithm for fault diagnosis. Faults with varying severity levels were induced in subcomponents of the engine to validate the accuracy of the anomaly detection and fault diagnosis models. The unsupervised stacked-autoencoder AI model predicts engine anomalies with 87.6% accuracy, and the balanced accuracy of the supervised fault diagnosis model using the Support Vector Machine algorithm is 99.7%. The proposed models are vital in moving toward sustainable shipping and have the potential to be deployed across various applications. Full article
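The two-stage structure described here, an unsupervised autoencoder whose reconstruction error flags anomalies followed by a supervised SVM that names the fault, can be sketched as below. The layer sizes, threshold, synthetic "sensor" data, and fault classes are illustrative assumptions, not the authors' trained models.

```python
# Illustrative two-stage sketch (synthetic data, illustrative layer sizes):
# stage 1 flags anomalies via autoencoder reconstruction error, stage 2
# diagnoses the fault class with a supervised SVM.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_features = 16
healthy = rng.normal(size=(2000, n_features))           # stand-in healthy engine feature vectors

# Stage 1: stacked autoencoder trained on healthy data only
autoencoder = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(4, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(n_features, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(healthy, healthy, epochs=20, batch_size=64, verbose=0)
threshold = np.percentile(
    np.mean((healthy - autoencoder.predict(healthy, verbose=0)) ** 2, axis=1), 99)

# Three synthetic fault types, each disturbing a different sensor channel
blocks, labels = [], []
for cls, channel in enumerate([0, 5, 10]):
    block = rng.normal(size=(100, n_features))
    block[:, channel] += 5.0
    blocks.append(block)
    labels += [cls] * 100
faulty, fault_labels = np.vstack(blocks), np.array(labels)

errors = np.mean((faulty - autoencoder.predict(faulty, verbose=0)) ** 2, axis=1)
print("flagged as anomalous:", int((errors > threshold).sum()), "of", len(faulty))

# Stage 2: supervised fault diagnosis on labelled fault windows
svm = SVC(kernel="rbf").fit(faulty, fault_labels)
print("diagnosis for first window of each type:", svm.predict(faulty[[0, 100, 200]]))
```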

21 pages, 2313 KB  
Review
A Bibliometric and Network Analysis of Digital Twins and BIM in Water Distribution Systems
by Chiamba Ricardo Chiteculo Canivete, Mercy Chitauro, Martina Flörke and Maduako E. Okorie
Technologies 2025, 13(12), 575; https://doi.org/10.3390/technologies13120575 - 8 Dec 2025
Viewed by 820
Abstract
The increasing complexity of water distribution systems (WDSs) and the growing demand for sustainable infrastructure management have spurred interest in Building Information Modelling (BIM) and Digital Twin (DT) technologies. This study presents a comprehensive bibliometric and thematic literature review aiming to identify key trends, research clusters, and knowledge gaps at the intersection of BIM, DT, and WDSs. Using the Scopus database, 95 relevant publications from 2004 to 2024 were systematically analyzed. VOSviewer was applied to create, visualize, and analyze maps of countries, journals, documents, and keywords based on citation, co-citation, collaboration, and co-occurrence data. The results indicate a sharp rise in scholarly attention after 2020, with dominant contributions from European institutions. Co-authorship networks show limited global interconnectedness, suggesting that developing countries in particular should prioritize integrated DT and BIM research to build more inclusive and diverse research partnerships. This study characterizes the state of the art and future requirements for research on the use of DT and BIM technologies in WDSs and makes a noteworthy contribution to the body of knowledge. Future research should focus on integrating DT and BIM technologies with ML, addressing the scalability challenges of real-time anomaly detection models and advancing decision-making and operational resilience in WDSs. Full article

25 pages, 1859 KB  
Review
Artificial Intelligence in Anaerobic Digestion: A Review of Sensors, Modeling Approaches, and Optimization Strategies
by Milena Marycz, Izabela Turowska, Szymon Glazik and Piotr Jasiński
Sensors 2025, 25(22), 6961; https://doi.org/10.3390/s25226961 - 14 Nov 2025
Cited by 5 | Viewed by 3139
Abstract
Anaerobic digestion (AD) is increasingly recognized as a key technology for renewable energy generation and sustainable waste management within the circular economy. However, its performance is highly sensitive to feedstock variability and environmental fluctuations, making stable operation and high methane yields difficult to sustain. Conventional monitoring and control systems, based on limited sensors and mechanistic models, often fail to anticipate disturbances or optimize process performance. This review discusses recent progress in electrochemical, optical, spectroscopic, microbial, and hybrid sensors, highlighting their advantages and limitations in artificial intelligence (AI)-assisted monitoring. The role of soft sensors, data preprocessing, feature engineering, and explainable AI is emphasized to enable predictive and adaptive process control. Various machine learning (ML) techniques, including neural networks, support vector machines, ensemble methods, and hybrid gray-box models, are evaluated for yield forecasting, anomaly detection, and operational optimization. Persistent challenges include sensor fouling, calibration drift, and the lack of standardized open datasets. Emerging strategies such as digital twins, data augmentation, and automated optimization frameworks are proposed to address these issues. Future progress will rely on more robust sensors, shared datasets, and interpretable AI tools to achieve predictive, transparent, and efficient biogas production supporting the energy transition. Full article
(This article belongs to the Section Biosensors)

22 pages, 2549 KB  
Article
Lightweight Signal Processing and Edge AI for Real-Time Anomaly Detection in IoT Sensor Networks
by Manuel J. C. S. Reis
Sensors 2025, 25(21), 6629; https://doi.org/10.3390/s25216629 - 28 Oct 2025
Cited by 5 | Viewed by 5721
Abstract
The proliferation of IoT devices has created vast sensor networks that generate continuous time-series data. Efficient and real-time processing of these signals is crucial for applications such as predictive maintenance, healthcare monitoring, and environmental sensing. This paper proposes a lightweight framework that combines classical signal processing techniques (Fourier and Wavelet-based feature extraction) with edge-deployed machine learning models for anomaly detection. By performing feature extraction and classification locally, the approach reduces communication overhead, minimizes latency, and improves energy efficiency in IoT nodes. Experiments with synthetic vibration, acoustic, and environmental datasets showed that the proposed Shallow Neural Network achieved the highest detection performance (F1-score ≈ 0.94), while a Quantized TinyML model offered a favorable trade-off (F1-score ≈ 0.92) with a 3× reduction in memory footprint and 60% lower energy consumption. Decision Trees remained competitive for ultra-constrained devices, providing sub-millisecond latency with limited recall. Additional analyses confirmed robustness against noise, missing data, and variations in anomaly characteristics, while ablation studies highlighted the contributions of each pipeline component. These results demonstrate the feasibility of accurate, resource-efficient anomaly detection at the edge, paving the way for practical deployment in large-scale IoT sensor networks. Full article
(This article belongs to the Special Issue Internet of Things Cybersecurity)
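The Fourier-feature half of the pipeline this abstract describes can be sketched in a few lines: band energies from an FFT feed a very small classifier suitable for constrained devices. The synthetic "vibration" windows, band edges, and tree depth are placeholders, and wavelet features (e.g., via PyWavelets) would be appended to the same feature vector in an analogous way.

```python
# Illustrative sketch of Fourier band-energy features feeding a small classifier
# (synthetic "vibration" windows; band edges, tree depth and data are placeholders).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
fs, win = 1000, 256                         # assumed sampling rate and window length

def make_window(anomalous):
    t = np.arange(win) / fs
    x = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=win)   # normal 50 Hz component
    if anomalous:
        x += 0.8 * np.sin(2 * np.pi * 180 * t)                    # extra high-frequency tone
    return x

def band_energies(x, n_bands=8):
    """Cheap feature vector: spectral energy summed in equal-width bands."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

X = np.array([band_energies(make_window(i % 4 == 0)) for i in range(2000)])
y = np.array([i % 4 == 0 for i in range(2000)], dtype=int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)   # tiny, edge-friendly model
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```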