Search Results (790)

Search Parameters:
Keywords = ML for security

13 pages, 741 KB  
Review
Model-Informed Precision Dosing: Conceptual Framework for Therapeutic Drug Monitoring Integrating Machine Learning and Artificial Intelligence Within Population Health Informatics
by Jennifer Le, Hien N. Le, Giang Nguyen, Rebecca Kim, Sean N. Avedissian, Connie Vo, Ba Hai Le, Thanh Hai Nguyen, Dua Thi Nguyen, Dylan Huy Do, Brian Le, Austin-Phong Nguyen, Tu Tran, Chi Kien Phung, Duong Anh Minh Vu, Karandeep Singh and Amy M. Sitapati
J. Pers. Med. 2026, 16(2), 76; https://doi.org/10.3390/jpm16020076 - 31 Jan 2026
Abstract
Background/Objective: Traditional therapeutic drug monitoring is limited by manual interpretation and specific constraints, such as sampling at steady state and requiring a minimum of two drug concentrations. The integration of model-informed precision dosing (MIPD) into population health informatics represents a promising approach to addressing drug safety and efficacy. This article explores the integration of MIPD within population health informatics and evaluates its potential to enhance precision dosing using artificial intelligence (AI), machine learning (ML), and electronic health records (EHRs). Methods: PubMed and Embase searches were conducted, and all relevant peer-reviewed studies in English published between 1958 and December 2024 were included if they pertained to MIPD and population-level health, with the use of AI/ML algorithms to predict individualized drug dosing requirements. Emphasis was placed on vulnerable populations such as critically ill, geriatric, and pediatric groups. Results: MIPD with the Bayesian method represents a scalable innovation in precision medicine, with significant implications for population health informatics. By combining AI/ML with comprehensive EHRs, MIPD can offer real-time, precise dosing adjustments. This integration has the potential to improve patient safety, optimize therapeutic outcomes, and reduce healthcare costs, especially for vulnerable populations where evidence is limited. Successful implementation requires collaboration among clinicians, pharmacists, and health informatics professionals, alongside secure data management and interoperability solutions. Conclusions: Further research is needed to define, implement, and evaluate practical applications of AI/ML. This insight may help develop standards and identify drugs for MIPD to advance personalized medicine within population health informatics.
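
To make the Bayesian step concrete, below is a minimal sketch of maximum a posteriori (MAP) estimation of one patient's pharmacokinetic parameters from sparse concentrations, the computation at the heart of Bayesian MIPD. The one-compartment model, population priors, dose, and observations are illustrative assumptions, not values from the article.

```python
# MAP estimation of individual clearance (CL) and volume (V) from two
# observed concentrations, shrinking toward population priors.
# All numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

dose = 1000.0                                   # mg, IV bolus (assumed)
times = np.array([1.0, 6.0])                    # sampling times (h)
obs = np.array([18.0, 7.5])                     # observed concentrations (mg/L)
pop_cl, pop_v = 5.0, 50.0                       # population means (L/h, L)
omega_cl, omega_v, sigma = 0.3, 0.25, 0.2       # IIV and residual SDs (log scale)

def neg_log_posterior(theta):
    log_cl, log_v = theta
    cl, v = np.exp(log_cl), np.exp(log_v)
    pred = (dose / v) * np.exp(-(cl / v) * times)   # one-compartment model
    loglik = np.sum(((np.log(obs) - np.log(pred)) / sigma) ** 2)
    prior = (((log_cl - np.log(pop_cl)) / omega_cl) ** 2
             + ((log_v - np.log(pop_v)) / omega_v) ** 2)
    return 0.5 * (loglik + prior)

fit = minimize(neg_log_posterior, x0=[np.log(pop_cl), np.log(pop_v)])
cl_i, v_i = np.exp(fit.x)
print(f"individual CL = {cl_i:.2f} L/h, V = {v_i:.1f} L")  # basis for dose adjustment
```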

38 pages, 783 KB  
Article
A Review on Protection and Cybersecurity in Hybrid AC/DC Microgrids: Conventional Challenges and AI/ML Approaches
by Farzaneh Eslami, Manaswini Gangineni, Ali Ebrahimi, Menaka Rathnayake, Mihirkumar Patel and Olga Lavrova
Energies 2026, 19(3), 744; https://doi.org/10.3390/en19030744 - 30 Jan 2026
Abstract
Hybrid AC/DC microgrids (HMGs) are increasingly recognized as a solution for the transition toward future energy systems because they combine the efficiency of DC networks with conventional AC systems. Despite these advantages, HMGs still face challenges in protection, cybersecurity, and reliability. Conventional protection schemes often fail due to reduced fault currents and the dominance of power electronic converters in islanded or dynamically reconfigured topologies. At the same time, IEC 61850 protocols remain vulnerable to advanced cyberattacks such as Denial of Service (DoS), false data injection (FDIA), and man-in-the-middle (MITM) attacks, posing serious threats to the stability and operational security of intelligent power networks. Previous surveys have typically examined these challenges in isolation; this paper, in contrast, provides the first integrated review of HMG protection across three complementary dimensions: traditional protection schemes, cybersecurity threats, and artificial intelligence/machine learning (AI/ML)-based approaches. By analyzing more than 100 studies published between 2012 and 2024, we show that AI/ML methods in simulation environments can achieve detection accuracies of 95–98% with response times under 10 ms, although these values are case-specific and depend on the evaluation setting: network scale, sampling configuration, noise levels, inverter control mode, and whether results are obtained in simulation, hardware-in-the-loop (HIL)/real-time digital simulator (RTDS), or field conditions. Nevertheless, the absence of standardized datasets and limited field validation remain key barriers to industrial adoption. Likewise, existing cybersecurity frameworks provide acceptable protection timing but lack resilience against emerging threats, while conventional methods underperform in clustered and islanded scenarios. Therefore, the future of HMG protection requires the integration of traditional schemes, resilient cybersecurity architectures, and explainable AI models, along with the development of benchmark datasets, hardware-in-the-loop validation, and implementation on platforms such as field-programmable gate arrays (FPGAs) and μPMUs.
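
As a rough illustration of the AI/ML detection approaches the review surveys, the sketch below trains a Random Forest to separate normal operation from fault/attack conditions using waveform-derived features. The feature set and synthetic class statistics are invented for illustration and do not correspond to any benchmark dataset.

```python
# Toy fault/anomaly detector over assumed waveform features:
# RMS current, dI/dt, DC-bus voltage dip, and total harmonic distortion.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal([1.0, 0.1, 0.02, 0.03], 0.05, size=(1000, 4))
fault = rng.normal([1.6, 0.9, 0.30, 0.12], 0.15, size=(1000, 4))
X = np.vstack([normal, fault])
y = np.array([0] * 1000 + [1] * 1000)           # 0 = normal, 1 = fault/attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"detection accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```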

24 pages, 2441 KB  
Article
Parametric Studies and Semi-Continuous Harvesting Strategies for Enhancing CO2 Bio-Fixation Rate and High-Density Biomass Production Using Adaptive Laboratory-Evolved Chlorella vulgaris
by Sufia Hena, Tejas Bhatelia, Nadia Leinecker and Milinkumar Shah
Microorganisms 2026, 14(2), 324; https://doi.org/10.3390/microorganisms14020324 - 30 Jan 2026
Abstract
This study adopts a biochemical approach to sequester CO2 while producing biomass rich in protein and lipids, using an adapted strain of Chlorella vulgaris (ALE-Cv) that had previously evolved to tolerate a gas mixture containing 10% CO2 and 90% air. The research studied the operating parameters of the batch photobioreactor for ALE-Cv to evaluate the effects of inoculum size, photoperiod, light intensity, culture pH, and CO2 supply rate on biomass productivity and CO2 bio-fixation rate. The optimal conditions were identified as 16:8 h light–dark cycles, 5000 lux, pH 7, 20 mL of 10 g/L inoculum, and 0.6 VVM; the system achieved a maximum total biomass production of 7.03 ± 0.21 g/L with a specific growth rate of 0.712 day−1, corresponding to a CO2 bio-fixation of 13.4 ± 0.45 g/L in batch cultivation. In contrast, the pre-adapted strain of Chlorella vulgaris, cultivated under the same operating conditions except that the gas supply was air, achieved a maximum total biomass production of 0.52 ± 0.008 g/L and a total CO2 bio-fixation of 1.036 ± 0.021 g/L during 7-day cultivation. A novel semi-continuous harvesting process, with and without nutrient addition, was also investigated to maximise biomass yield and enable water recycling for culture media. The maximum biomass production in the semi-continuous harvesting process with and without nutrient addition was 5.29 ± 0.09 and 9.91 ± 0.11 g/L, while the corresponding total CO2 bio-fixation was 9.70 ± 0.13 and 18.16 ± 0.11 g/L, respectively, during 15-day cultivation. The findings provide critical insights into enhancing CO2 bio-fixation through the adaptive evolution of ALE-Cv and offer optimal operational parameters for future large-scale microalgae cultivation. This research also links microalgae-based CO2 sequestration to green technologies and the bioeconomy, highlighting its potential contribution to climate change mitigation while supporting environmental sustainability, food security, and ecosystem resilience.
(This article belongs to the Special Issue Contribution of Microalgae and Cyanobacteria in One Health Approach)
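
The two headline quantities above follow from standard formulas: the specific growth rate from two biomass measurements, and CO2 bio-fixation estimated from accumulated biomass via its carbon content. The sketch below works through both; the biomass points, time interval, and 0.507 carbon fraction are textbook-style assumptions, not the study's raw data.

```python
# Specific growth rate mu = ln(X1/X0) / (t1 - t0), and CO2 fixed from
# biomass gain times carbon fraction times the CO2/C molar-mass ratio.
# All inputs are illustrative assumptions.
import math

x0, x1 = 0.5, 7.03          # biomass (g/L) at t0 and t1 (assumed)
t0, t1 = 0.0, 3.72          # cultivation time (days, assumed)

mu = math.log(x1 / x0) / (t1 - t0)                       # 1/day
carbon_fraction = 0.507                                   # typical for Chlorella
co2_fixed = (x1 - x0) * carbon_fraction * (44.0 / 12.0)   # g CO2 per L

print(f"mu = {mu:.3f} 1/day, CO2 fixed ~ {co2_fixed:.1f} g/L")
```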

31 pages, 947 KB  
Systematic Review
A Systematic Review of Cyber Risk Analysis Approaches for Wind Power Plants
by Muhammad Arsal, Tamer Kamel, Hafizul Asad and Asiya Khan
Energies 2026, 19(3), 677; https://doi.org/10.3390/en19030677 - 28 Jan 2026
Abstract
Wind power plants (WPPs), as large-scale cyber–physical systems (CPSs), have become essential to renewable energy generation but are increasingly exposed to cyber threats. Attacks on supervisory control and data acquisition (SCADA) networks can cause cascading physical and economic impacts. A systematic synthesis of cyber risk analysis methods specific to WPPs and cyber–physical energy systems (CPESs) is therefore urgently needed to identify research gaps and guide the development of resilient protection frameworks. This study employs a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol to review the state of the art in this area. Peer-reviewed studies published between January 2010 and January 2025 were retrieved from four major journals using a structured set of nine search queries. After removing duplicates, applying inclusion and exclusion criteria, and screening titles and abstracts, 62 studies were analyzed on the basis of a synthesis framework. The studies were classified along three methodological dimensions (qualitative vs. quantitative, model-based vs. data-driven, and informal vs. formal), yielding a unified taxonomy of cyber risk analysis approaches. Among the included studies, 45% were qualitative or semi-quantitative frameworks such as STRIDE, DREAD, or MITRE ATT&CK; 35% were quantitative or model-based techniques such as Bayesian networks, Markov decision processes, and Petri nets; and 20% adopted data-driven or hybrid AI/ML methods. Only 28% implemented formal verification, and fewer than 10% explicitly linked cyber vulnerabilities to safety consequences. Key research gaps include limited integration of safety–security interdependencies, scarce operational datasets, and inadequate modelling of environmental factors in WPPs. This systematic review highlights a predominance of qualitative approaches and a shortage of data-driven and formally verified frameworks for WPP cybersecurity. Future research should prioritise hybrid methods that integrate formal modelling, synthetic data generation, and machine learning-based risk prioritisation to enhance the resilience and operational safety of renewable-energy infrastructures.
(This article belongs to the Special Issue Trends and Challenges in Cyber-Physical Energy Systems)

14 pages, 286 KB  
Article
Trusted Yet Flexible: High-Level Runtimes for Secure ML Inference in TEEs
by Nikolaos-Achilleas Steiakakis and Giorgos Vasiliadis
J. Cybersecur. Priv. 2026, 6(1), 23; https://doi.org/10.3390/jcp6010023 - 27 Jan 2026
Abstract
Machine learning inference is increasingly deployed on shared and cloud infrastructures, where both user inputs and model parameters are highly sensitive. Confidential computing promises to protect these assets using Trusted Execution Environments (TEEs), yet existing TEE-based inference systems remain fundamentally constrained: they rely almost exclusively on low-level, memory-unsafe languages to enforce confinement, sacrificing developer productivity, portability, and access to modern ML ecosystems. At the same time, mainstream high-level runtimes, such as Python, are widely considered incompatible with enclave execution due to their large memory footprints and unsafe model-loading mechanisms that permit arbitrary code execution. To bridge this gap, we present the first Python-based ML inference system that executes entirely inside Intel SGX enclaves while safely supporting untrusted third-party models. Our design enforces standardized, declarative model representations (ONNX), eliminating deserialization-time code execution and confining model behavior through interpreter-mediated execution. The entire inference pipeline (including model loading, execution, and I/O) remains enclave-resident, with cryptographic protection and integrity verification throughout. Our experimental results show that Python incurs modest overheads for small models (≈17%) and outperforms a low-level baseline on larger workloads (97% vs. 265% overhead), demonstrating that enclave-resident high-level runtimes can achieve competitive performance. Overall, our findings indicate that Python-based TEE inference is practical and secure, enabling the deployment of untrusted models with strong confidentiality and integrity guarantees while maintaining developer productivity and ecosystem advantages.
(This article belongs to the Section Security Engineering & Applications)
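
The paper's key safety idea, loading models as declarative ONNX graphs so that deserialization cannot run arbitrary code (unlike pickle-based formats), can be illustrated outside the enclave with onnxruntime. The model path and input shape below are placeholder assumptions, and the SGX attestation, sealing, and I/O protection layers are omitted.

```python
# Declarative model loading: parsing an ONNX graph executes no model-supplied
# code; the runtime interprets the graph ops. "model.onnx" and the 1x3x224x224
# input shape are assumptions for illustration.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")        # parses a graph description only
inp = sess.get_inputs()[0]

x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape
outputs = sess.run(None, {inp.name: x})                  # interpreter-mediated run
print(outputs[0].shape)
```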

20 pages, 1854 KB  
Article
Dual-Optimized Genetic Algorithm for Edge-Ready IoT Intrusion Detection on Raspberry Pi
by Khawlah Harasheh, Satinder Gill, Kendra Brinkley, Salah Garada, Dindin Aro Roque, Hayat MacHrouhi, Janera Manning-Kuzmanovski, Jesus Marin-Leal, Melissa Isabelle Arganda-Villapando and Sayed Ahmad Shah Sekandary
J 2026, 9(1), 3; https://doi.org/10.3390/j9010003 - 25 Jan 2026
Abstract
The Internet of Things (IoT) is increasingly deployed at the edge under resource and environmental constraints, which limits the practicality of traditional intrusion detection systems (IDSs) on IoT hardware. This paper presents two IDS configurations. First, we develop a baseline IDS with fixed hyperparameters, achieving 99.20% accuracy and ~0.002 ms/sample inference latency on a desktop machine; this configuration is suitable for high-performance platforms but is not intended for constrained IoT deployment. Second, we propose a lightweight, edge-oriented IDS that applies ANOVA-based filter feature selection and uses a genetic algorithm (GA) for bounded hyperparameter tuning of the classifier under stratified cross-validation, enabling efficient execution on Raspberry Pi-class devices. The lightweight IDS achieves 98.95% accuracy with ~4.3 ms/sample end-to-end inference latency on a Raspberry Pi while detecting both low-volume and high-volume (DoS/DDoS) attacks. Experiments are conducted on a real Raspberry Pi-based lab testbed using an up-to-date mixed-modal dataset combining system/network telemetry and heterogeneous physical sensors. Overall, the proposed framework demonstrates a practical, hardware-aware, and reproducible way to balance detection performance and edge-level latency using established techniques for real-world IoT IDS deployment.
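
Below is a minimal sketch of the two ingredients named above: an ANOVA (f_classif) feature filter, followed by a small genetic algorithm tuning bounded Random Forest hyperparameters under stratified cross-validation. The synthetic dataset, bounds, population size, and mutation rate are illustrative assumptions, not the paper's configuration.

```python
# ANOVA filter feature selection + GA over (n_estimators, max_depth).
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           random_state=0)
X = SelectKBest(f_classif, k=10).fit_transform(X, y)        # ANOVA filter
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

def fitness(genes):
    n_est, depth = genes
    clf = RandomForestClassifier(n_estimators=n_est, max_depth=depth,
                                 random_state=0)
    return cross_val_score(clf, X, y, cv=cv).mean()

random.seed(0)
pop = [(random.randint(20, 200), random.randint(2, 20)) for _ in range(8)]
for _ in range(5):                                          # generations
    parents = sorted(pop, key=fitness, reverse=True)[:4]    # elitist selection
    children = []
    for _ in range(4):
        a, b = random.sample(parents, 2)
        n_est, depth = a[0], b[1]                           # one-point crossover
        if random.random() < 0.3:                           # bounded mutation
            n_est = min(200, max(20, n_est + random.randint(-20, 20)))
        children.append((n_est, depth))
    pop = parents + children

best = max(pop, key=fitness)
print("best (n_estimators, max_depth):", best, f"CV acc = {fitness(best):.4f}")
```
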
31 pages, 1140 KB  
Review
A Survey of Multi-Layer IoT Security Using SDN, Blockchain, and Machine Learning
by Reorapetse Molose and Bassey Isong
Electronics 2026, 15(3), 494; https://doi.org/10.3390/electronics15030494 - 23 Jan 2026
Abstract
The integration of Software-Defined Networking (SDN), blockchain (BC), and machine learning (ML) has emerged as a promising approach to securing Internet of Things (IoT) and Industrial IoT (IIoT) networks. This paper presents a comprehensive review of recent studies focusing on multi-layered security across the device, control, network, and application layers. The analysis reveals that BC technology ensures decentralised trust, immutability, and secure access validation, while SDN enables programmability, load balancing, and real-time monitoring. In addition, ML/deep learning (DL) techniques, including federated and hybrid learning, strengthen anomaly detection, predictive security, and adaptive mitigation. Reported evaluations show comparable gains in detection accuracy, latency, throughput, and energy efficiency, with effective defence against threats, though differing experimental contexts limit direct comparison. The review also shows that the solutions' effectiveness depends on ecosystem factors such as SDN controllers, BC platforms, cryptographic protocols, and ML frameworks. However, most studies rely on simulations or small-scale testbeds, leaving large-scale and heterogeneous deployments unverified. Significant challenges include scalability, computational and energy overhead, dataset dependency, limited adversarial resilience, and the explainability of ML-driven decisions. Based on these findings, future research should focus on lightweight consensus mechanisms for constrained devices, privacy-preserving ML/DL, and cross-layer adversarial-resilient frameworks. Advancing these directions will be important in achieving scalable, interoperable, and trustworthy SDN-IoT/IIoT security solutions.
(This article belongs to the Section Artificial Intelligence)

32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces), which operates both on individual HTTP (Hypertext Transfer Protocol) requests and in a batch access-log reading mode, explicitly linking technical anomalies to business risks. The system processes each event/access log through parallel numerical and textual branches: a set of anomaly detectors trained on engineered traffic characteristics, and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their results are integrated using a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and produces a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated using Docker Compose and instrumented using OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), data exposure, and business logic abuse, while the processing time for each request remains within real-time limits even in sequential testing mode. The pipeline thus bridges the gap between ML/NLP security research and practical API protection that can evolve over time through feedback and retraining.
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
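
The NLP branch and the risk-aware merge can be sketched as follows: a character-level TF-IDF model scores raw request strings, and a small policy combines that score with endpoint context to produce a discrete risk level. The training strings, thresholds, and context weights are illustrative assumptions.

```python
# Character-level TF-IDF + logistic regression over raw request strings,
# merged with endpoint context into a discrete risk level.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

benign = ["GET /api/v1/users?page=2", 'POST /api/v1/orders {"qty": 3}']
attacks = ["GET /api/v1/users?id=1' OR '1'='1",           # SQL injection
           "GET /internal?url=http://169.254.169.254/"]   # SSRF probe

nlp = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
).fit(benign + attacks, [0, 0, 1, 1])

def risk_level(request, sensitive_endpoint, authenticated):
    score = nlp.predict_proba([request])[0, 1]             # attack probability
    score += 0.2 * sensitive_endpoint + 0.2 * (not authenticated)  # context boost
    return "HIGH" if score > 0.8 else "MEDIUM" if score > 0.5 else "LOW"

print(risk_level("GET /api/v1/users?id=1' OR '1'='1",
                 sensitive_endpoint=True, authenticated=False))
```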

24 pages, 2337 KB  
Article
Cutting-Edge DoS Attack Detection in Drone Networks: Leveraging Machine Learning for Robust Security
by Albandari Alsumayt, Naya Nagy, Shatha Alsharyofi, Resal Alahmadi, Renad Al-Rabie, Roaa Alesse, Noor Alibrahim, Amal Alahmadi, Fatemah H. Alghamedy and Zeyad Alfawaer
Sci 2026, 8(1), 20; https://doi.org/10.3390/sci8010020 - 20 Jan 2026
Abstract
This study aims to enhance the security of unmanned aerial vehicles (UAVs) within the Internet of Drones (IoD) ecosystem by detecting and preventing Denial-of-Service (DoS) attacks. We introduce DroneDefender, a web-based intrusion detection system (IDS) that employs machine learning (ML) techniques to identify anomalous network traffic patterns associated with DoS attacks. The system is evaluated using the CIC-IDS 2018 dataset and utilizes the Random Forest algorithm, optimized with the SMOTEENN technique to tackle dataset imbalance. Our results demonstrate that DroneDefender significantly outperforms traditional IDS solutions, achieving a detection accuracy of 99.93%. Key improvements include reduced latency, enhanced scalability, and a user-friendly graphical interface for network administrators. The innovative aspect of this research lies in the development of an ML-driven, web-based IDS specifically designed for IoD environments. The system provides a reliable, adaptable, and highly accurate method for safeguarding drone operations against evolving cyber threats, thereby bolstering the security and resilience of UAV applications in critical sectors such as emergency services, delivery, and surveillance.
(This article belongs to the Topic Trends and Prospects in Security, Encryption and Encoding)
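
The detection core, SMOTEENN resampling to correct class imbalance followed by a Random Forest, can be sketched with imbalanced-learn. The synthetic imbalanced data below stands in for the CIC-IDS 2018 flow features.

```python
# SMOTEENN (oversample minority, then clean with edited nearest neighbours)
# applied to the training split only, followed by Random Forest.
from imblearn.combine import SMOTEENN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "DoS"]))
```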

26 pages, 925 KB  
Review
Integrating Artificial Intelligence and Machine Learning for Sustainable Development in Agriculture and Allied Sectors of the Temperate Himalayas
by Arnav Saxena, Mir Faiq, Shirin Ghatrehsamani and Syed Rameem Zahra
AgriEngineering 2026, 8(1), 35; https://doi.org/10.3390/agriengineering8010035 - 19 Jan 2026
Abstract
The temperate Himalayan states of Jammu and Kashmir, Himachal Pradesh, Uttarakhand, Ladakh, Sikkim, and Arunachal Pradesh in India face unique agro-ecological challenges across agriculture and allied sectors, including pest and disease pressures, inefficient resource use, post-harvest losses, and fragmented supply chains. This review systematically examines 21 critical problem areas, with three key challenges identified per sector across agriculture, agricultural engineering, fisheries, forestry, horticulture, sericulture, and animal husbandry. Artificial Intelligence (AI) and Machine Learning (ML) interventions, including computer vision, predictive modeling, Internet of Things (IoT)-based monitoring, robotics, and blockchain-enabled traceability, are evaluated for their regional applicability, pilot-level outcomes, and operational limitations under temperate Himalayan conditions. The analysis highlights that AI-enabled solutions demonstrate strong potential for early pest and disease detection, improved resource-use efficiency, ecosystem monitoring, and market integration. However, large-scale adoption remains constrained by limited digital infrastructure, data scarcity, high capital costs, low digital literacy, and fragmented institutional frameworks. The novelty of this review lies in its cross-sectoral synthesis of AI/ML applications tailored to the Himalayan context, combined with a sector-wise revenue-loss assessment to quantify economic impacts and guide prioritization. Based on the identified gaps, the review proposes feasible, context-aware strategies, including lightweight edge-AI models, localized data platforms, capacity-building initiatives, and policy-aligned implementation pathways. Collectively, these recommendations aim to enhance sustainability, resilience, and livelihood security across agriculture and allied sectors in the temperate Himalayan region.

24 pages, 3303 KB  
Article
Deep Learning-Based Human Activity Recognition Using Binary Ambient Sensors
by Qixuan Zhao, Alireza Ghasemi, Ahmed Saif and Lila Bossard
Electronics 2026, 15(2), 428; https://doi.org/10.3390/electronics15020428 - 19 Jan 2026
Abstract
Human Activity Recognition (HAR) has become crucial across various domains, including healthcare, smart homes, and security systems, owing to the proliferation of Internet of Things (IoT) devices. Several Machine Learning (ML) techniques, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), have been proposed for HAR. However, they are still deficient in addressing the challenges of noisy features and insufficient data. This paper introduces a novel approach to tackle these two challenges, employing a Deep Learning (DL) Ensemble-Based Stacking Neural Network (SNN) combined with Generative Adversarial Networks (GANs) for HAR based on ambient sensors. Our proposed deep learning ensemble-based approach outperforms traditional ML techniques and enables robust and reliable recognition of activities in real-world scenarios. Comprehensive experiments conducted on six benchmark datasets from the CASAS smart home project demonstrate that the proposed stacking framework achieves superior accuracy on five out of six datasets when compared to literature-reported state-of-the-art baselines, with improvements ranging from 3.36 to 39.21 percentage points and an average gain of 13.28 percentage points. Although the baseline marginally outperforms the proposed models on one dataset (Aruba) in terms of accuracy, this exception does not alter the overall trend of consistent performance gains across diverse environments. Statistical significance of these improvements is further confirmed using the Wilcoxon signed-rank test. Moreover, the ASGAN-augmented models consistently improve macro-F1 performance over the corresponding baselines on five out of six datasets, while achieving comparable performance on the Milan dataset. The proposed GAN-based method further improves activity recognition accuracy by a maximum of 4.77 percentage points and an average of 1.28 percentage points compared to baseline models. By combining ensemble-based DL with GAN-generated synthetic data, the approach achieves a more robust and effective solution for ambient HAR, addressing both accuracy and data imbalance challenges in real-world smart home settings.
(This article belongs to the Section Computer Science & Engineering)
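
The stacking idea can be illustrated with scikit-learn, with classical learners standing in for the paper's deep base models and without the GAN augmentation step. Synthetic multi-class data replaces the CASAS binary-sensor windows.

```python
# Stacking: base learners' out-of-fold predictions feed a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1500, n_features=30, n_classes=5,
                           n_informative=10, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-learner
    cv=3,                                                # out-of-fold stacking
)
print(f"stacked 3-fold accuracy: {cross_val_score(stack, X, y, cv=3).mean():.3f}")
```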

21 pages, 1209 KB  
Review
Intelligent Discrimination of Grain Aging Using Volatile Organic Compound Fingerprints and Machine Learning: A Comprehensive Review
by Liuping Zhang, Jingtao Zhou, Guoping Qian, Shuyi Liu, Mohammed Obadi, Tianyue Xu and Bin Xu
Foods 2026, 15(2), 216; https://doi.org/10.3390/foods15020216 - 8 Jan 2026
Abstract
Grain aging during storage leads to quality deterioration and significant economic losses. Traditional analytical approaches are often labor-intensive, slow, and inadequate for modern intelligent grain storage management. This review summarizes recent advances in the intelligent discrimination of grain aging using volatile organic compound (VOC) fingerprints combined with machine learning (ML) techniques. It first outlines the biochemical mechanisms underlying grain aging and identifies VOCs as early and sensitive biomarkers for timely detection. The review then examines VOC determination methodologies, with a focus on headspace solid-phase microextraction coupled with gas chromatography-mass spectrometry (HS-SPME-GC-MS), for constructing volatile fingerprinting profiles, and discusses related method standardization. A central theme is the application of ML algorithms, including Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machines (SVM), Random Forest (RF), and Convolutional Neural Networks (CNN), for feature extraction and pattern recognition in high-dimensional datasets, enabling effective discrimination of aging stages, spoilage types, and grain varieties. Despite these advances, key challenges remain, such as limited model generalizability, the lack of large-scale multi-source databases, and insufficient validation under real storage conditions. Finally, future directions are proposed that emphasize methodological standardization, algorithmic innovation, and system-level integration to support intelligent, non-destructive, real-time grain quality monitoring. This emerging framework provides a promising pathway for enhancing global food security.
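
Of the listed algorithms, PLS-DA is straightforward to sketch: fit partial least squares regression against one-hot class labels and assign each sample to the class with the largest predicted response. The mock VOC peak-area matrix and three aging classes below are illustrative assumptions.

```python
# PLS-DA on a mock grain-VOC feature table (samples x 12 peak areas).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
classes = ["fresh", "mid-aged", "aged"]
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(30, 12)) for i in range(3)])
y = np.repeat([0, 1, 2], 30)

pls = PLSRegression(n_components=2).fit(X, label_binarize(y, classes=[0, 1, 2]))
pred = pls.predict(X).argmax(axis=1)          # class with largest response
print(f"training accuracy: {(pred == y).mean():.2f}, "
      f"first sample -> {classes[pred[0]]}")
```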

32 pages, 3734 KB  
Article
A Hierarchical Framework Leveraging IIoT Networks, IoT Hub, and Device Twins for Intelligent Industrial Automation
by Cornelia Ionela Bădoi, Bilge Kartal Çetin, Kamil Çetin, Çağdaş Karataş, Mehmet Erdal Özbek and Savaş Şahin
Appl. Sci. 2026, 16(2), 645; https://doi.org/10.3390/app16020645 - 8 Jan 2026
Abstract
Industrial Internet of Things (IIoT) networks, Microsoft Azure Internet of Things (IoT) Hub, and device twins (DvT) are increasingly recognized as core enablers of adaptive, data-driven manufacturing. This paper proposes a hierarchical IIoT framework that integrates industrial IoT networking, DvT for asset-level virtualisation, system-level digital twins (DT) for cell orchestration, and cloud-native services to support the digital transformation of brownfield, programmable logic controller (PLC)-centric modular automation (MA) environments. Traditional PLC/supervisory control and data acquisition (SCADA) paradigms struggle to meet interoperability, observability, and adaptability requirements at scale, motivating architectures in which DvT and IoT Hub underpin real-time orchestration, virtualisation, and predictive-maintenance workflows. Building on and extending a previously introduced conceptual model, the present work instantiates a multilayered, end-to-end design that combines a federated Message Queuing Telemetry Transport (MQTT) mesh on the on-premises side, a ZigBee-based backup mesh, and a secure bridge to Azure IoT Hub, together with a systematic DvT modelling and orchestration strategy. The methodology is supported by a structured analysis of relevant IIoT and DvT design choices and by a concrete implementation in a nine-cell MA laboratory featuring a robotic arm predictive-maintenance scenario. The resulting framework sustains closed-loop monitoring, anomaly detection, and control under realistic workloads, while providing explicit envelopes for telemetry volume, buffering depth, and latency budgets in edge-cloud integration. Overall, the proposed architecture offers a transferable blueprint for evolving PLC-centric automation toward more adaptive, secure, and scalable IIoT systems and establishes a foundation for future extensions toward full DvT ecosystems, tighter artificial intelligence/machine learning (AI/ML) integration, and fifth/sixth generation (5G/6G) and time-sensitive networking (TSN) support in industrial networks.
(This article belongs to the Special Issue Novel Technologies of Smart Manufacturing)
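
The asset-level device-twin step can be sketched with the azure-iot-device SDK: a device publishes telemetry to IoT Hub and mirrors its state into the twin's reported properties. The connection string, field names, and values are placeholders, and the MQTT mesh, ZigBee backup path, and error handling are omitted.

```python
# Telemetry + device-twin reported-property sync against Azure IoT Hub.
# Connection string and sensor fields are placeholder assumptions.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<robot-arm>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

telemetry = {"joint_temp_c": 41.7, "vibration_rms": 0.012}
client.send_message(Message(json.dumps(telemetry)))        # telemetry path

client.patch_twin_reported_properties(                     # twin path: reported
    {"health": "nominal", "last_vibration_rms": 0.012})    # state for orchestration
client.disconnect()
```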

28 pages, 2746 KB  
Systematic Review
A Review of the Transition from Industry 4.0 to Industry 5.0: Unlocking the Potential of TinyML in Industrial IoT Systems
by Margarita Terziyska, Iliana Ilieva, Zhelyazko Terziyski and Nikolay Komitov
Sci 2026, 8(1), 10; https://doi.org/10.3390/sci8010010 - 7 Jan 2026
Abstract
The integration of artificial intelligence into the Industrial Internet of Things (IIoT), supported by edge computing architectures, marks a new paradigm of intelligent automation. Tiny Machine Learning (TinyML) is emerging as a key technology that enables the deployment of machine learning models on ultra-low-power devices. This study presents a systematic review of 110 peer-reviewed publications (2020–2025) identified from Scopus, Web of Science, and IEEE Xplore following the PRISMA protocol. Bibliometric and thematic analyses were conducted using Biblioshiny and VOSviewer to identify major trends, architectural approaches, and industrial applications of TinyML. The results reveal four principal research clusters: edge intelligence and energy efficiency, federated and explainable learning, human-centric systems, and sustainable resource management. Importantly, the surveyed industrial implementations report measurable gains—typically reducing inference latency to the millisecond range, lowering on-device energy cost to the sub-milliwatt regime, and sustaining high task accuracy, thereby substantiating the practical feasibility of TinyML in real IIoT settings. The analysis indicates a conceptual shift from engineering- and energy-focused studies toward cognitive, ethical, and security-oriented perspectives aligned with the principles of Industry 5.0. TinyML is positioned as a catalyst for the transition from automation to cognitive autonomy and as a technological foundation for building energy-efficient, ethical, and sustainable industrial ecosystems.
(This article belongs to the Section Computer Sciences, Mathematics and AI)
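
The standard TinyML workflow at the centre of the review, training a small model and converting it with post-training quantization into a flatbuffer sized for microcontroller-class devices, can be sketched with TensorFlow Lite. The toy model and data are illustrative.

```python
# Train a tiny Keras model, then convert with post-training quantization.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 8).astype(np.float32)        # mock sensor windows
y = (X.sum(axis=1) > 4).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()
print(f"TFLite model size: {len(tflite_model)} bytes")  # must fit MCU flash
```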

44 pages, 4883 KB  
Article
Mapping the Role of Artificial Intelligence and Machine Learning in Advancing Sustainable Banking
by Alina Georgiana Manta, Claudia Gherțescu, Roxana Maria Bădîrcea, Liviu Florin Manta, Jenica Popescu and Mihail Olaru
Sustainability 2026, 18(2), 618; https://doi.org/10.3390/su18020618 - 7 Jan 2026
Abstract
The convergence of artificial intelligence (AI), machine learning (ML), blockchain, and big data analytics is transforming the governance, sustainability, and resilience of modern banking ecosystems. This study provides a multivariate bibliometric analysis using Principal Component Analysis (PCA) of research indexed in Scopus and Web of Science to explore how decentralized digital infrastructures and AI-driven analytical capabilities contribute to sustainable financial development, transparent governance, and climate-resilient digital societies. Findings indicate a rapid increase in interdisciplinary work integrating Distributed Ledger Technology (DLT) with large-scale data processing, federated learning, privacy-preserving computation, and intelligent automation—tools that can enhance financial inclusion, regulatory integrity, and environmental risk management. Keyword network analyses reveal blockchain's growing role in improving data provenance, security, and trust—key governance dimensions for sustainable and resilient financial systems—while AI/ML and big data analytics dominate research on predictive intelligence, ESG-related risk modeling, customer well-being analytics, and real-time decision support for sustainable finance. Comparative analyses show distinct emphases: Web of Science highlights decentralized architectures, consensus mechanisms, and smart contracts relevant to transparent financial governance, whereas Scopus emphasizes customer-centered analytics, natural language processing, and high-throughput data environments supporting inclusive and equitable financial services. Patterns of global collaboration demonstrate strong internationalization, with Europe, China, and the United States emerging as key hubs in shaping sustainable and digitally resilient banking infrastructures. By mapping intellectual, technological, and collaborative structures, this study clarifies how decentralized intelligence—enabled by the fusion of AI/ML, blockchain, and big data—supports secure, scalable, and sustainability-driven financial ecosystems. The results identify critical research pathways for strengthening financial governance, enhancing climate and social resilience, and advancing digital transformation, which contributes to more inclusive, equitable, and sustainable societies.
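
The PCA step of such a bibliometric analysis can be sketched as follows: reduce a document-by-keyword indicator matrix to a few principal components whose loadings expose thematic clusters. The keyword list and mock corpus are assumptions.

```python
# PCA over a papers-by-keywords presence matrix.
import numpy as np
from sklearn.decomposition import PCA

keywords = ["blockchain", "machine learning", "big data", "ESG",
            "federated learning"]
rng = np.random.default_rng(0)
docs = rng.integers(0, 2, size=(100, len(keywords))).astype(float)

pca = PCA(n_components=2).fit(docs)
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))
print("PC1 loadings:", dict(zip(keywords, np.round(pca.components_[0], 2))))
```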
