Search Results (311)

Search Parameters:
Keywords = concept drift

23 pages, 1137 KB  
Article
Adaptive Healthcare Monitoring Through Drift-Aware Edge-Cloud Intelligence
by Aleksandra Stojnev Ilic, Milos Ilic, Natalija Stojanovic and Dragan Stojanovic
Future Internet 2026, 18(3), 156; https://doi.org/10.3390/fi18030156 - 17 Mar 2026
Abstract
Continuous healthcare monitoring systems generate non-stationary physiological data streams, where evolving statistical properties and patterns often invalidate static models and fixed user classifications. To address this challenge, we propose a drift-aware adaptive architecture that integrates concept drift detection into a distributed edge–cloud data analytics pipeline. In the proposed design, concept drift is elevated from a maintenance signal to the primary mechanism governing user-state adaptation, model evolution, and inference consistency. Within the proposed system, the edge tier performs low-latency inference and preliminary drift screening under strict resource constraints, while the cloud tier executes advanced drift detection and validation, orchestrates user reclassification and model retraining, and manages model evolution. A feedback loop synchronizes edge and cloud operations, ensuring that detected drift triggers appropriate system transitions, either reassigning a user to an updated state category or initiating targeted model updates. This architecture reduces reliance on static group assignments, improves personalization, and preserves model fidelity under evolving physiological conditions. We analyze the drift types most relevant to healthcare data streams, evaluate the suitability of lightweight and cloud-grade drift detectors, and define the system requirements for stability, responsiveness, and clinical safety. Evaluation across 21 concurrent users demonstrates that drift-aware adaptation reduced prediction MAE by 40.6% relative to periodic retraining, with an end-to-end adaptation latency of 66 ± 37 s. Hierarchical cloud validation reduced the false-positive retraining rate from 88.9% (edge-only triggering) to 27.3%, while maintaining uninterrupted inference throughout all adaptation events.
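The two-tier triggering logic described above (cheap drift screening at the edge, stronger validation in the cloud before retraining) can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the window sizes, thresholds, and function names are invented for the example.

```python
from statistics import mean

def edge_screen(errors, window=30, threshold=0.5):
    """Cheap edge-side check: flag possible drift when the recent
    mean prediction error exceeds a fixed threshold."""
    return mean(errors[-window:]) > threshold

def cloud_validate(ref_errors, recent_errors, min_gap=0.2):
    """Stronger cloud-side check: confirm drift only when the recent
    error mean exceeds the reference-period mean by a margin, which
    is what suppresses false-positive retraining triggers."""
    return mean(recent_errors) - mean(ref_errors) > min_gap

def maybe_retrain(ref_errors, errors):
    """Retrain only when both tiers agree that drift occurred."""
    if edge_screen(errors) and cloud_validate(ref_errors, errors[-30:]):
        return "retrain"
    return "keep"
```

A stable error stream leaves the model untouched, while a sustained error jump passes both tiers and triggers retraining.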

27 pages, 3391 KB  
Article
A Hybrid Federated–Incremental Learning Framework for Continuous Authentication in Zero-Trust Networks
by Jie Ji, Shi Qiu, Shengpeng Ye and Xin Liu
Future Internet 2026, 18(3), 154; https://doi.org/10.3390/fi18030154 - 16 Mar 2026
Abstract
Zero-trust architecture (ZTA) requires continuous and adaptive identity authentication to maintain security in dynamic environments. However, current federated learning (FL)-based authentication models often struggle to incorporate evolving attack patterns without experiencing catastrophic forgetting. Moreover, non-independent and identically distributed (non-IID) client data and concept drift frequently lead to degraded model robustness and personalization. To address these issues, this paper presents a hybrid learning framework that integrates federated learning with incremental learning (IL) for sustainable authentication. A Dynamic Weighted Federated Aggregation (DWFA) algorithm is developed to mitigate concept drift by adjusting aggregation weights in real time, ensuring that the global model adapts to changing data distributions. This approach enables continuous learning from distributed threat data while maintaining privacy and eliminating the need for historical data retention. Experimental results on real-world traffic datasets indicate that the proposed framework outperforms conventional FL baselines, reducing the overall error rate by approximately 56% and improving the detection rate for novel attack types by over 17.8%. Furthermore, the framework remains stable against performance decay while maintaining efficient communication overhead. This study provides an adaptive, privacy-preserving solution for identity authentication in zero-trust systems.
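The abstract does not specify DWFA's weighting rule, but the general shape of loss-aware dynamic aggregation can be sketched as follows; the inverse-loss weighting, the flat parameter vectors, and all names are assumptions made for illustration only.

```python
def aggregate(client_params, client_losses, eps=1e-8):
    """Weighted federated averaging sketch: clients whose recent
    validation loss is lower receive larger aggregation weights,
    so the global model tracks the better-adapted clients.
    client_params: one flat parameter list per client."""
    weights = [1.0 / (loss + eps) for loss in client_losses]
    total = sum(weights)
    weights = [w / total for w in weights]       # normalize to sum to 1
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params))
            for i in range(dim)]
```

With equal losses this reduces to plain FedAvg-style averaging; with skewed losses the low-loss client dominates the aggregate.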
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)

18 pages, 4434 KB  
Article
A Novel Spiral Si Drift Detector with a Constant Cathode Gap and Arbitrary Cathode Pitch Profiles
by Hongfei Wang and Zheng Li
Micromachines 2026, 17(3), 354; https://doi.org/10.3390/mi17030354 - 13 Mar 2026
Abstract
In this paper, an innovative design of a silicon spiral drift detector (SDD) is proposed. In this design, gaps under the SiO2 layer between the cathode rings are kept constant, with a minimum value to reduce the surface leakage current. The cathode pitch profile P(r), as a function of radius r, is allowed to change in an arbitrary way to achieve the optimum field distribution. The concept, design considerations, modeling, and electrical simulations have been carried out for this novel structure with a hexagonal spiral silicon drift detector. Using one-dimensional analyses, we obtain the exact solution of the spiral design r = r(θ) with a near-arbitrary pitch profile P(r) = P1(r/r1)^(1/(1−η)), with η an arbitrary real number. We also obtained the electric potential and field profiles on both surfaces of the detector. Using a Technology Computer-Aided Design (TCAD) tool, we made 3D simulations of the detector’s electrical properties. The hexagonal spiral silicon drift detector has excellent electrical properties: a uniform electric field, smooth distribution of electric potential and electron concentration, and a clear electron drift channel. The distributions of the electric field, electric potential, and electron concentration are symmetrical and smooth, which is beneficial for applications in photon sciences (X-ray) and safeguards and homeland security (particle radiation). The theoretical work and simulation results serve as solid foundations for the detector design and the expansion of semiconductor technology.
(This article belongs to the Special Issue Photonic and Optoelectronic Devices and Systems, 4th Edition)

31 pages, 23331 KB  
Article
Drift-Aware Online Ensemble Learning for Real-Time Cybersecurity in Internet of Medical Things Networks
by Fazliddin Makhmudov, Gayrat Juraev, Ozod Yusupov, Parvina Nasriddinova and Dusmurod Kilichev
Mach. Learn. Knowl. Extr. 2026, 8(3), 67; https://doi.org/10.3390/make8030067 - 9 Mar 2026
Abstract
The rapid growth of Internet of Medical Things (IoMT) devices has revolutionized diagnostics and patient care within smart healthcare networks. However, this progress has also expanded the attack surface due to the heterogeneity and interconnectivity of medical devices. To overcome the limitations of traditional batch-trained security models, this study proposes an adaptive online intrusion detection framework designed for real-time operation in dynamic healthcare environments. The system combines Leveraging Bagging with Hoeffding Tree classifiers for incremental learning while integrating the Page–Hinkley test to detect and adapt to concept drift in evolving attack patterns. A modular and scalable network architecture supports centralized monitoring and ensures seamless interoperability across various IoMT protocols. Implemented within a low-latency, high-throughput stream-processing pipeline, the framework meets the stringent clinical requirements for responsiveness and reliability. To simulate streaming conditions, we evaluated the model using the CICIoMT2024 dataset, presenting one instance at a time in random order to reflect dynamic, real-time traffic in IoMT networks. Experimental results demonstrate exceptional performance, achieving accuracies of 0.9963 for binary classification, 0.9949 for six-class detection, and 0.9860 for nineteen-class categorization. These results underscore the framework’s practical efficacy in protecting modern healthcare infrastructures from evolving cyber threats.
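The Page–Hinkley test this framework uses for drift triggering is compact enough to state directly. Below is a minimal standalone version that monitors a stream of per-instance errors for an upward mean shift; the parameter values are illustrative defaults, not the paper's settings.

```python
class PageHinkley:
    """Minimal Page–Hinkley test for an upward shift in the mean of a
    stream (e.g., a classifier's 0/1 error signal).
    delta: tolerated magnitude of change; lam: detection threshold."""
    def __init__(self, delta=0.005, lam=5.0):
        self.delta, self.lam = delta, lam
        self.n = 0
        self.mean = 0.0     # running mean of the stream
        self.cum = 0.0      # cumulative deviation m_t
        self.cum_min = 0.0  # running minimum M_t

    def update(self, x):
        """Feed one value; return True when drift is detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam
```

Feeding a long run of low errors never fires; a sustained jump to high errors fires within a handful of samples, which is the behavior an online ensemble needs to trigger adaptation quickly.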

10 pages, 255 KB  
Proceeding Paper
Adaptive Multimodal LSTM with Online Learning for Evolving IoT Data Streams
by Osaretin Edith Okoro, Nurudeen Mahmud Ibrahim, Prema Kirubakan and Suleiman Aliyu Muhammad
Eng. Proc. 2026, 124(1), 57; https://doi.org/10.3390/engproc2026124057 - 7 Mar 2026
Abstract
The Internet of Things (IoT) uses networked devices, dispersed sensors, and cameras to create huge, diverse data streams. Concept drift, in which the underlying data distribution shifts over time, is frequently caused by the non-stationary and multimodal character of these streams. Static machine learning models, based on fixed data distributions, reduce forecast accuracy and system reliability since they are unable to adapt to such changes. This paper proposes an Adaptive Multimodal Long Short-Term Memory (AM-LSTM) architecture to address these challenges by combining modality-specific temporal modelling, attention-based dynamic fusion, and drift-aware online learning. An attention mechanism adaptively weights informative streams to mitigate the impact of noisy or missing input, while specialist LSTM encoders capture the temporal correlations of each modality. Concept drift is detected using a sliding-window error monitoring technique, and adaptive learning rate adjustment and selective retraining are triggered when significant distributional changes occur. The proposed system is tested under synthetic drift conditions using the Edge-IoT and UNSW-NB15 benchmark datasets. Experimental results demonstrate that AM-LSTM achieves 88.7% accuracy and an F1-score of 0.85, adapting to drift within 620 samples while maintaining an average update latency of 47 ms per batch. Compared with static and existing adaptive baselines, the proposed approach provides improved robustness, faster drift adaptation, and computational efficiency suitable for real-time IoT environments.
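The sliding-window error monitoring step can be sketched as below. The windowed mean-gap rule, window size, and threshold here are illustrative assumptions, not the paper's exact detector.

```python
from collections import deque
from statistics import mean

class WindowDriftMonitor:
    """Illustrative sliding-window error monitor: compare the current
    window's mean error against a frozen reference window and signal
    drift when the gap exceeds a threshold."""
    def __init__(self, window=100, threshold=0.15):
        self.window, self.threshold = window, threshold
        self.ref = None
        self.buf = deque(maxlen=window)

    def update(self, error):
        """Feed one per-sample error; True means 'adapt now'
        (e.g., raise the learning rate or retrain selectively)."""
        self.buf.append(error)
        if len(self.buf) < self.window:
            return False
        if self.ref is None:
            self.ref = mean(self.buf)   # freeze the reference level
            return False
        if mean(self.buf) - self.ref > self.threshold:
            self.ref = None             # re-baseline after adapting
            self.buf.clear()
            return True
        return False
```

The reference is re-established after each detection, so the monitor keeps working across successive drifts rather than firing continuously.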
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
27 pages, 2849 KB  
Systematic Review
Intrusion Detection in Fog Computing: A Systematic Review of Security Advances and Challenges
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Computers 2026, 15(3), 169; https://doi.org/10.3390/computers15030169 - 5 Mar 2026
Abstract
Fog computing extends cloud services to the network edge to support low-latency IoT applications. However, since fog environments are distributed and resource-constrained, intrusion detection systems must be adapted to defend against cyberattacks while keeping computation and communication overhead minimal. This systematic review presents research on intrusion detection systems (IDSs) for fog computing and synthesizes advances and research gaps. The study was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Scopus and Web of Science were searched in the title field using TITLE/TI = (“intrusion detection” AND “fog computing”) for 2021–2025. The inclusion criteria were (i) 2021–2025 publications, (ii) journal or conference papers, (iii) English language, and (iv) open access availability; duplicates were removed programmatically using a DOI-first key, with title, year, and author as a fallback. The search identified 8560 records, of which 4905 were unique and included for qualitative grouping and bibliometric synthesis. Metadata (year, venue, authors, affiliations, keywords, and citations) were extracted and analyzed in Python to compute trends and collaboration patterns. Intrusion detection systems in fog networks were categorized into traditional/signature-based, machine learning, deep learning, and hybrid/ensemble approaches. Hybrid and DL approaches reported accuracies ranging from 95 to 99% on benchmark datasets (such as NSL-KDD, UNSW-NB15, CIC-IDS2017, KDD99, and BoT-IoT). Notable bottlenecks included computational load relative to real-time latency on resource-constrained nodes, elevated false-positive rates for anomaly detection under concept drift, limited generalization to unseen attacks, privacy risks from centralizing data, and limited real-world validation. Bibliometric analyses highlighted the field’s concentration in fast-turnaround, open-access journals such as IEEE Access and Sensors, as well as a small number of highly collaborative author clusters, alongside dominant terms such as “learning,” “federated,” “ensemble,” “lightweight,” and “explainability.” Emerging directions include federated and distributed training to preserve privacy, as well as online/continual learning adaptation. Future work should include real-world evaluation of fog networks; ultra-lightweight yet adaptive hybrid IDSs; and self-learning, secure cooperative frameworks. These insights help researchers select appropriate IDS models for fog networks.

41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 - 2 Mar 2026
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles, and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest, toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. By linking explanation reports, drift statistics, and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations.
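The "tamper-evident evidence bundle" idea can be illustrated with a hash-chained log: each entry commits to the previous entry's hash, so altering any earlier record invalidates every later hash. The record schema and function names below are assumptions for illustration, not the paper's format.

```python
import hashlib
import json

def append_evidence(chain, record):
    """Append a record (e.g., an explanation report or drift statistic)
    to a tamper-evident, hash-chained evidence log."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

An auditor can then verify the whole lifecycle trail offline without trusting the system that produced it.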
(This article belongs to the Section Security Engineering & Applications)

10 pages, 847 KB  
Proceeding Paper
Enhancing Precision Farming Security Through IoT-Driven Adaptive Anomaly Detection Using a Hybrid CNN–PSO–GA Framework
by Faruk Salihu Umar and Nurudeen Mahmud Ibrahim
Biol. Life Sci. Forum 2025, 54(1), 29; https://doi.org/10.3390/blsf2025054029 - 28 Feb 2026
Abstract
The adoption of Internet of Things (IoT) technologies has significantly enhanced precision farming by enabling continuous environmental monitoring and data-driven agricultural management. However, the increasing reliance on distributed sensor networks introduces critical challenges, including sensor faults, data anomalies, and cyber-physical security threats, which can undermine system reliability and decision accuracy. This study proposes an IoT-driven anomaly detection framework for smart agriculture that integrates a Convolutional Neural Network (CNN) optimized using a hybrid Particle Swarm Optimization and Genetic Algorithm (PSO–GA). The CNN learns complex spatio-temporal patterns from multivariate sensor data, while the PSO–GA strategy automatically tunes CNN hyperparameters to improve detection accuracy and model stability. To enhance adaptability under dynamic agricultural conditions, the proposed framework incorporates an online learning mechanism that incrementally updates the CNN model using newly arriving sensor data, enabling continuous adaptation to environmental changes and concept drift without full model retraining. Experiments conducted on a publicly available smart agriculture dataset demonstrate that the proposed CNN–PSO–GA framework achieves an accuracy of 74%, precision of 74%, recall of 100%, and an F1-score of 85%, outperforming baseline methods such as One-Class Support Vector Machine and Isolation Forest, particularly in reducing missed anomaly events. The results confirm the robustness, adaptability, and reliability of the proposed approach. Overall, the framework provides a practical and scalable solution for enhancing security, resilience, and operational effectiveness in precision farming systems.
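The PSO half of the PSO–GA hyperparameter search can be sketched with a minimal particle swarm over a single continuous hyperparameter. The objective, bounds, and coefficients below are assumptions for illustration; the paper's hybrid adds GA-style operators on top.

```python
import random
random.seed(2)  # deterministic for the example

def pso_minimize(objective, lo, hi, n_particles=10, iters=50,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimization: each particle tracks its
    personal best, the swarm tracks the global best, and velocities pull
    particles toward both."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # clamp to bounds
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val
```

In a real tuning loop the objective would be validation loss of the CNN under a candidate hyperparameter value rather than an analytic function.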
(This article belongs to the Proceedings of The 3rd International Online Conference on Agriculture)

32 pages, 4314 KB  
Article
A Hardware-Aware Federated Meta-Learning Framework for Intraday Return Prediction Under Data Scarcity and Edge Constraints
by Zhe Wen, Xin Cheng, Ruixin Xue, Jinao Ye, Zhongfeng Wang and Meiqi Wang
Appl. Sci. 2026, 16(5), 2319; https://doi.org/10.3390/app16052319 - 27 Feb 2026
Abstract
Although deep learning has achieved remarkable success in time-series prediction, intraday algorithmic trading is characterized by frequent regime shifts (concept drift), which can rapidly render models trained on historical data obsolete in real applications. This motivates on-device adaptation at edge trading terminals. However, practical deployment is constrained by a tripartite bottleneck: real-time samples are scarce, hardware resources at the edge are limited, and communication overhead between cloud and edge must be kept low to satisfy stringent latency requirements. To address these challenges, we develop a hardware-aware edge learning framework that combines federated learning (FL) and meta-learning to enable rapid few-shot personalization without exposing local data. Importantly, the framework incorporates our proposed Sleep Node Algorithm (SNA), which turns the “FL + meta-learning” combination into a practical and efficient edge solution. Specifically, SNA dynamically deactivates “inertial” (insensitive) network components during adaptation: it provides a structural regularizer that stabilizes few-shot updates and mitigates overfitting under concept drift, while inducing sparsity that reduces both on-device computation and cloud–edge communication. To efficiently leverage the unstructured zero nodes introduced by SNA, we further design a dedicated accelerator, EPAST (Energy-efficient Pipelined Accelerator for Sparse Training). EPAST adopts a heterogeneous architecture and introduces a dedicated Backward Pipeline (BPIP) dataflow that overlaps backpropagation stages, thereby improving hardware utilization under irregular sparse workloads. Experimental results demonstrate that our system consistently outperforms strong baselines, including DQN, GARCH-XGBoost, and LRU, in terms of Pearson IC. A 55 nm CMOS ASIC implementation further validates robust learning under an extreme 5-shot setting (IC = 0.1176), achieving an end-to-end training speed-up of 11.35× and an energy efficiency of 45.78 TOPS/W.
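One way to picture the "sleep node" idea is a mask that freezes nodes whose recent sensitivity is low, updating only the most responsive fraction. The gradient-magnitude criterion and keep-fraction below are assumptions for illustration; the abstract does not state SNA's actual selection rule.

```python
def sleep_mask(grad_norms, keep_fraction=0.5):
    """Illustrative sleep mask: keep (mask=1) only the nodes whose recent
    gradient magnitude is in the top keep_fraction; the rest 'sleep'
    (mask=0), inducing sparsity in both compute and communication."""
    ranked = sorted(grad_norms, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    threshold = ranked[k - 1]
    return [1 if g >= threshold else 0 for g in grad_norms]
```

During a few-shot adaptation step, parameter updates would be multiplied by this mask, so sleeping nodes transmit and compute nothing.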
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)

21 pages, 3724 KB  
Article
Fault Diagnosis for IP-Based Networks Using Incremental Learning Algorithms and Data Stream Methods
by Angela María Vargas-Arcila, Angela Rodríguez-Vivas, Juan Carlos Corrales, Araceli Sanchis and Álvaro Rendón Gallón
Technologies 2026, 14(2), 132; https://doi.org/10.3390/technologies14020132 - 19 Feb 2026
Abstract
Network fault diagnosis has evolved in response to the needs of modern networks, transitioning from traditional methods, such as passive and active monitoring, to advanced learning techniques. While conventional methods often introduce invasive traffic and control overhead, newer approaches face challenges such as increased internal processes and the need for extensive knowledge of network behavior. Learning-based methods offer an advantage by not requiring a complete network model, allowing the use of statistical and Machine Learning techniques to process historical data. However, existing learning methods face limitations, such as the need for extensive data samples and extended retraining periods, which can leave systems vulnerable to failures, particularly in dynamic environments. This work addresses these issues by proposing an incremental learning approach for continuous fault diagnosis in IP-based networks. The approach utilizes online learning to process symptoms in real-time, adapting to network changes while managing data imbalance through drift detection and rebalancing strategies, such as ADWIN and SMOTE. We evaluated the performance of this method using 25 incremental algorithms on the SOFI dataset. The results, assessed using metrics such as recall, G-mean, kappa, and MCC, demonstrated high performance over time, indicating the potential for resilient, adaptive fault detection processes in dynamic network environments. Additionally, a non-invasive process can be ensured through peripheral observation of failure symptoms, provided that data collection does not increase network traffic, overhead control, or internal network processes.
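ADWIN's core decision, dropping the stale prefix of an adaptive window when two sub-windows differ significantly, can be illustrated with a simplified single-pass check. Real ADWIN uses bucket compression and repeated shrinking; the `delta` value and this flat scan are simplifying assumptions.

```python
from math import log, sqrt

def adwin_cut(window, delta=0.002):
    """Simplified ADWIN-style split test: scan every split point and
    return the first index where the two sub-window means differ by
    more than a Hoeffding-style bound (the caller would then drop
    the stale prefix window[:i]). Returns None when no drift is found."""
    n = len(window)
    total = sum(window)
    left_sum = 0.0
    for i in range(1, n):
        left_sum += window[i - 1]
        n0, n1 = i, n - i
        m0, m1 = left_sum / n0, (total - left_sum) / n1
        m_harm = 1.0 / (1.0 / n0 + 1.0 / n1)        # harmonic-mean size
        eps = sqrt(log(4.0 * n / delta) / (2.0 * m_harm))
        if abs(m0 - m1) > eps:
            return i
    return None
```

A window containing an abrupt mean change yields a cut point, while a stationary window yields None, so the detector shrinks its memory only when the distribution actually moves.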

32 pages, 1453 KB  
Review
A Review of Artificial Intelligence for Financial Fraud Detection
by Haiquan Yang, Zarina Shukur and Shahnorbanun Sahran
Appl. Sci. 2026, 16(4), 1931; https://doi.org/10.3390/app16041931 - 14 Feb 2026
Abstract
Financial fraud has expanded rapidly with the growth of the digital economy, evolving from conventional transactional misconduct to more complex and data-intensive forms. Traditional rule-based detection methods are increasingly inadequate for addressing the scale, heterogeneity, and dynamic behavior of modern fraud. In this context, artificial intelligence (AI) has become a core tool in financial fraud detection research. This review systematically surveys AI-based financial fraud detection studies published between 2015 and 2025. It summarizes representative machine learning and deep learning approaches, including tree-based models, neural networks, and graph-based methods, and examines their applications in major fraud scenarios such as credit card fraud, loan fraud, and anti-money laundering. In addition, emerging research on cryptocurrency- and blockchain-related fraud is reviewed, highlighting the distinct challenges posed by decentralized transaction environments. Through a comparative analysis of methods, datasets, and evaluation practices, this review identifies persistent issues in the literature, including severe class imbalance, concept drift, limited access to labeled data, and trade-offs between detection performance and interpretability. Based on these findings, the paper discusses practical considerations for applied fraud detection systems and outlines future research directions from a data-centric and application-oriented perspective. This review aims to provide a structured reference for researchers and practitioners working on real-world financial fraud detection problems.
(This article belongs to the Section Computing and Artificial Intelligence)

25 pages, 2045 KB  
Article
A Comparative Analysis of Self-Aware Reinforcement Learning Models for Real-Time Intrusion Detection in Fog Networks
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Future Internet 2026, 18(2), 100; https://doi.org/10.3390/fi18020100 - 14 Feb 2026
Abstract
Fog computing extends cloud services to the network edge, enabling low-latency processing for Internet of Things (IoT) applications. However, this distributed approach is vulnerable to a wide range of attacks, necessitating advanced intrusion detection systems (IDSs) that operate under resource constraints. This study proposes integrating self-awareness (online learning and concept drift adaptation) into a lightweight RL (reinforcement learning)-based IDS for fog networks and quantitatively comparing it with non-RL static thresholds and bandit-based approaches in real time. Two novel self-aware reinforcement learning (RL) models, the Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (HATS-RL) model and the Federated Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (F-HATS-RL) model, were proposed for real-time intrusion detection in a fog network. These self-aware RL policies integrated online uncertainty estimation and concept-drift detection to adapt to evolving attacks. The RL models were benchmarked against the static threshold (ST) model and a widely adopted linear bandit (Linear Upper Confidence Bound/LinUCB). A realistic fog network simulator with heterogeneous nodes and streaming traffic, including multi-type attack bursts and gradual concept drift, was established. The models’ detection performance was compared using metrics including latency, energy consumption, detection accuracy, the area under the precision–recall curve (AUPR), and the area under the receiver operating characteristic curve (AUROC). Notably, the federated self-aware agent (F-HATS-RL) achieved the best AUROC (0.933) and AUPR (0.857), with a latency of 0.27 ms and the lowest energy consumption of 0.0137 mJ, indicating its ability to detect intrusions in fog networks with minimal energy. The findings suggest that self-aware RL agents can detect dynamic attack patterns in traffic and adapt accordingly, resulting in more stable long-term performance. By contrast, a static model’s accuracy degrades under drift.
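The Thompson-sampling core underlying the HATS-RL agents can be illustrated with a minimal Beta-Bernoulli bandit; the hierarchy, drift detection, and federation layers are beyond this sketch, and all names here are illustrative.

```python
import random
random.seed(1)  # deterministic for the example

class ThompsonSampler:
    """Minimal Beta-Bernoulli Thompson sampling: each arm keeps a Beta
    posterior over its success probability; selection samples from each
    posterior and plays the arm with the largest draw."""
    def __init__(self, n_arms):
        self.alpha = [1.0] * n_arms  # prior successes + 1
        self.beta = [1.0] * n_arms   # prior failures + 1

    def select(self):
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return samples.index(max(samples))

    def update(self, arm, reward):
        """reward in {0, 1}: e.g., whether the chosen detection action
        correctly classified the traffic."""
        if reward:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1
```

Because posteriors stay wide for rarely played arms, the sampler keeps exploring, which is what lets a drift-aware variant rediscover a changed best action after reset.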

14 pages, 6484 KB  
Article
Short-Term Electricity Price Forecasting via a Reinforcement Learning-Based Dynamic Soft Ensemble Strategy
by Yan Wang, Yongxi Zhao, Kun Liang and Hong Fan
Energies 2026, 19(3), 761; https://doi.org/10.3390/en19030761 - 1 Feb 2026
Viewed by 310
Abstract
To address the high volatility of spot market prices and the feature extraction limitations of single models, a short-term electricity price forecasting method based on a reinforcement learning dynamic soft ensemble strategy is proposed. First, a complementary dual-branch architecture is constructed: the CNN-LSTM-Attention branch mines local temporal features, while the Transformer branch captures long-range global dependencies. Second, the Q-learning algorithm is introduced to model weight optimization as a Markov Decision Process. An intelligent agent perceives fluctuation states to adaptively allocate weights, overcoming the rigidity of traditional ensembles. Case studies on PJM market data demonstrate that the proposed model outperforms advanced benchmarks in MAE and RMSE metrics. Notably, prediction accuracy is significantly improved during price spikes and negative price periods. The results verify that the strategy effectively copes with market concept drift, supporting reliable bidding and risk mitigation. Full article
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 5th Edition)
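The abstract describes the Q-learning weight allocation only at a high level; the toy sketch below shows the general shape of the idea under invented assumptions: a contextual Q-table maps a coarse volatility state to a soft weight split between two forecasting branches, with reward equal to negative ensemble error. The states, candidate weights, and error model are hypothetical, not the paper's actual MDP formulation.

```python
import random

# Toy sketch of a Q-learning dynamic soft ensemble: the agent picks a
# weight for branch A (branch B receives 1 - w) based on a coarse
# volatility state, rewarded by negative ensemble error.

WEIGHTS = [0.2, 0.5, 0.8]        # candidate weights for branch A
STATES = ["calm", "spike"]       # coarse price-volatility states

def ensemble_error(state, w):
    # Hypothetical ground truth: branch A (local/temporal) is better in
    # calm periods, branch B (global/long-range) during spikes.
    err_a = 1.0 if state == "calm" else 4.0
    err_b = 3.0 if state == "calm" else 1.5
    return w * err_a + (1 - w) * err_b

Q = {(s, i): 0.0 for s in STATES for i in range(len(WEIGHTS))}
rng = random.Random(0)
alpha, eps = 0.1, 0.2

for _ in range(2000):
    s = rng.choice(STATES)
    if rng.random() < eps:       # epsilon-greedy exploration
        a = rng.randrange(len(WEIGHTS))
    else:
        a = max(range(len(WEIGHTS)), key=lambda i: Q[(s, i)])
    r = -ensemble_error(s, WEIGHTS[a])          # reward = negative error
    Q[(s, a)] += alpha * (r - Q[(s, a)])        # one-step bandit update

best_calm = WEIGHTS[max(range(len(WEIGHTS)), key=lambda i: Q[("calm", i)])]
best_spike = WEIGHTS[max(range(len(WEIGHTS)), key=lambda i: Q[("spike", i)])]
```

Because the agent conditions on the volatility state, it learns to shift weight toward the long-range branch during spikes, which mirrors the adaptivity the abstract claims over fixed-weight ensembles.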
16 pages, 567 KB  
Article
Insuring Algorithmic Operations: Liability Risk, Pricing, and Risk Control
by Zhiyong (John) Liu, Jin Park, Mengying Wang and He Wen
Risks 2026, 14(2), 26; https://doi.org/10.3390/risks14020026 - 31 Jan 2026
Viewed by 476
Abstract
Businesses increasingly rely on algorithmic systems and machine learning models to make operational decisions about customers, employees, and counterparties. These “algorithmic operations” can improve efficiency but also concentrate liability in a small number of technically complex, drifting models. Algorithmic operations liability (AOL) risk arises when these systems generate legally cognizable harm. We develop a simple taxonomy of AOL risk sources: model error and bias, data quality failures, distribution shift and concept drift, miscalibration, machine learning operations (MLOps) and integration failures, governance gaps, and ecosystem-level externalities. Building on this taxonomy, we outline a simple analysis of AOL risk pricing using some basic actuarial building blocks: (i) a confusion-matrix-based expected-loss model for false positives and false negatives; (ii) drift-adjusted error rates and stress scenarios; and (iii) credibility-weighted rates when insureds have limited experience data. We then introduce capital and loss surcharges that incorporate distributional uncertainty and tail risk. Finally, we link the framework to AOL risk controls by identifying governance, documentation, model-monitoring, and MLOps practices that both reduce loss frequency and severity and serve as underwriting prerequisites. Full article
(This article belongs to the Special Issue AI for Financial Risk Perception)
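The confusion-matrix expected-loss building block described above can be written down directly; the sketch below does so with invented decision volumes, error rates, and unit costs, purely for illustration of the pricing mechanics rather than as the authors' actual model.

```python
# Sketch of the confusion-matrix expected-loss building block: price a
# policy off expected annual losses from false positives and false
# negatives, with a drift stress multiplier on the error rates. All
# volumes, rates, and unit costs below are invented for illustration.

def expected_annual_loss(n_decisions, base_fpr, base_fnr,
                         cost_fp, cost_fn, drift_multiplier=1.0):
    """E[loss] = N * (FPR * cost_FP + FNR * cost_FN), with the error
    rates inflated (and capped at 1) under a drift stress scenario."""
    fpr = min(1.0, base_fpr * drift_multiplier)
    fnr = min(1.0, base_fnr * drift_multiplier)
    return n_decisions * (fpr * cost_fp + fnr * cost_fn)

# 100k automated decisions per year; a false positive (e.g. a wrongly
# declined customer) costs far less than a false negative (e.g. an
# undetected harmful action).
baseline = expected_annual_loss(100_000, base_fpr=0.01, base_fnr=0.002,
                                cost_fp=50.0, cost_fn=2_000.0)
stressed = expected_annual_loss(100_000, base_fpr=0.01, base_fnr=0.002,
                                cost_fp=50.0, cost_fn=2_000.0,
                                drift_multiplier=2.5)
# baseline is about 450,000; the 2.5x drift stress scales it to about
# 1,125,000, since neither inflated rate hits the cap.
```

In practice the drift multiplier would come from monitored error-rate trends or stress scenarios rather than a fixed constant, and credibility weighting would blend an insured's own experience with this book rate.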
25 pages, 969 KB  
Article
H-CLAS: A Hybrid Continual Learning Framework for Adaptive Fault Detection and Self-Healing in IoT-Enabled Smart Grids
by Tina Babu, Rekha R. Nair, Balamurugan Balusamy and Sumendra Yogarayan
IoT 2026, 7(1), 12; https://doi.org/10.3390/iot7010012 - 27 Jan 2026
Viewed by 438
Abstract
The rapid expansion of Internet of Things (IoT)-enabled smart grids has intensified the need for reliable fault detection and autonomous self-healing under non-stationary operating conditions characterized by frequent concept drift. To address the limitations of static and single-strategy adaptive models, this paper proposes H-CLAS, a novel Hybrid Continual Learning for Adaptive Self-healing framework that unifies regularization-based, memory-based, architectural, and meta-learning strategies within a single adaptive pipeline. The framework integrates convolutional neural networks (CNNs) for fault detection, graph neural networks for topology-aware fault localization, reinforcement learning for self-healing control, and a hybrid drift detection mechanism combining ADWIN and Page–Hinkley tests. Continual adaptation is achieved through the synergistic use of Elastic Weight Consolidation, memory-augmented replay, progressive neural network expansion, and Model-Agnostic Meta-Learning for rapid adaptation to emerging drifts. Extensive experiments conducted on the Smart City Air Quality and Network Intrusion Detection Dataset (NSL-KDD) demonstrate that H-CLAS achieves accuracy improvements of 12–15% over baseline methods, reduces false positives by over 50%, and enables 2–3× faster recovery after drift events. By enhancing resilience, reliability, and autonomy in critical IoT-driven infrastructures, the proposed framework contributes to improved grid stability, reduced downtime, and safer, more sustainable energy and urban monitoring systems, thereby providing significant societal and environmental benefits. Full article
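Of the two drift tests named in the H-CLAS abstract's hybrid detection mechanism, the Page–Hinkley test is simple enough to sketch in a few lines. The implementation below is a standard textbook variant for upward mean shifts, with illustrative parameter values rather than the framework's actual configuration.

```python
# Minimal Page-Hinkley drift detector (standard textbook variant for
# upward mean shifts; parameter values are illustrative only).

class PageHinkley:
    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # tolerated drift magnitude
        self.threshold = threshold  # alarm threshold (lambda)
        self.n = 0
        self.mean = 0.0             # running mean of the stream
        self.cum = 0.0              # cumulative deviation m_t
        self.min_cum = 0.0          # running minimum of m_t

    def update(self, x):
        """Feed one observation; return True when upward drift fires."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        # Alarm when the deviation climbs far above its historic minimum.
        return self.cum - self.min_cum > self.threshold

ph = PageHinkley()
drift_at = None
# 200 samples at 0.0, then an abrupt upward shift to 1.0.
stream = [0.0] * 200 + [1.0] * 100
for i, x in enumerate(stream):
    if ph.update(x):
        drift_at = i   # fires a few samples after the change point
        break
```

ADWIN complements a test like this: Page–Hinkley is cheap and reacts quickly to abrupt mean shifts, while ADWIN's adaptive windowing also gives guarantees on false alarms under gradual drift, which motivates combining the two.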