Search Results (307)

Search Parameters:
Keywords = real-time cybersecurity

26 pages, 747 KB  
Article
Adaptive Real-Time Risk and Impact Assessment for 5G Network Security
by Dionysia Varvarigou, Kostas Lampropoulos, Spyros Denazis and Paris Kitsos
Network 2026, 6(1), 3; https://doi.org/10.3390/network6010003 - 24 Dec 2025
Abstract
The expansion of 5G networks has led to larger attack surfaces due to more applications
and use cases, more IoT connections, and the distributed 5G system architecture. Existing
security frameworks often lack the ability to perform real-time, context-aware risk
assessments that are specifically adapted to dynamic 5G environments. In this paper, we
present an integrated framework that combines Snort intrusion detection with a risk and
impact assessment model to evaluate threats in real time. By correlating intrusion alerts
with contextual risk metrics tied to 5G core functions, the framework prioritizes incidents
and supports timely mitigation. Evaluation in a controlled testbed shows the framework’s
stability, scalability, and effective risk classification, thereby strengthening cybersecurity for
next-generation networks.
(This article belongs to the Special Issue Cybersecurity in the 5G Era)
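The framework above prioritizes incidents by correlating Snort intrusion alerts with contextual risk metrics tied to 5G core functions. The paper's actual scoring model is not given in the abstract; as a hedged illustration only, with invented criticality weights, alert fields, and core-function names, such context-aware triage could look like:

```python
# Hypothetical sketch: rank intrusion alerts by a contextual risk score.
# The severity-times-criticality weighting below is an assumption made for
# this example, not the paper's model.

CORE_FUNCTION_CRITICALITY = {   # assumed contextual risk metrics
    "AMF": 0.9,    # access and mobility management function
    "UPF": 0.8,    # user plane function
    "NRF": 0.6,    # network repository function
}

def risk_score(alert: dict) -> float:
    """Combine detector severity (1-5) with the criticality of the
    targeted 5G core function; unknown targets get a default weight."""
    return alert["severity"] * CORE_FUNCTION_CRITICALITY.get(alert["target"], 0.5)

def prioritize(alerts: list[dict]) -> list[dict]:
    """Order alerts so the highest contextual risk is mitigated first."""
    return sorted(alerts, key=risk_score, reverse=True)

alerts = [
    {"sid": 1001, "severity": 2, "target": "NRF"},
    {"sid": 1002, "severity": 4, "target": "AMF"},
    {"sid": 1003, "severity": 5, "target": "UPF"},
]
print([a["sid"] for a in prioritize(alerts)])  # [1003, 1002, 1001]
```

Weighting raw detector severity by the criticality of the affected core function is one simple way to make triage context-aware; the paper's risk and impact assessment model is more elaborate.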
29 pages, 29480 KB  
Article
FPGA-Based Dual Learning Model for Wheel Speed Sensor Fault Detection in ABS Systems Using HIL Simulations
by Farshideh Kordi, Paul Fortier and Amine Miled
Electronics 2026, 15(1), 58; https://doi.org/10.3390/electronics15010058 - 23 Dec 2025
Abstract
The rapid evolution of modern vehicles into intelligent and interconnected systems presents new complexities in both functional safety and cybersecurity. In this context, ensuring the reliability and integrity of critical sensor data, such as wheel speed inputs for anti-lock brake systems (ABS), is essential. Effective detection of wheel speed sensor faults not only improves functional safety but also plays a vital role in maintaining system resilience against potential cyber–physical threats. Although data-driven approaches have gained popularity for system development due to their ability to extract meaningful patterns from historical data, a major limitation is the lack of diverse and representative faulty datasets. This study proposes a novel dual learning model, based on Temporal Convolutional Networks (TCN), designed to accurately distinguish between normal and faulty wheel speed sensor behavior within a hardware-in-the-loop (HIL) simulation platform implemented on an FPGA. To address dataset limitations, a TruckSim–MATLAB/Simulink co-simulation environment is used to generate realistic datasets under normal operation and eight representative fault scenarios, yielding up to 5000 labeled sequences (balanced between normal and faulty behaviors) at a sampling rate of 60 Hz. Two TCN models are trained independently to learn normal and faulty dynamics, and fault decisions are made by comparing the reconstruction errors (MSE and MAE) of both models, thus avoiding manually tuned thresholds. On a test set of 1000 sequences (500 normal and 500 faulty) from the 5000-sample configuration, the proposed dual TCN framework achieves a detection accuracy of 97.8%, a precision of 96.5%, a recall of 98.2%, and an F1-score of 97.3%, outperforming a single TCN baseline, which achieves 91.4% accuracy and an 88.9% F1-score.
The complete dual TCN architecture is implemented on a Xilinx ZCU102 FPGA evaluation kit (AMD, Santa Clara, CA, USA) and supports real-time inference in the HIL loop. These results demonstrate that the proposed approach provides accurate, low-latency fault detection suitable for safety-critical ABS applications and contributes to improving both functional safety and cyber-resilience of braking systems.
(This article belongs to the Special Issue Artificial Intelligence and Microsystems)
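The dual-model decision rule summarized above, which labels a sequence by whichever model reconstructs it with lower error instead of using a hand-tuned threshold, can be sketched as follows. The toy reconstruction functions stand in for the paper's two trained TCNs and are assumptions for illustration only:

```python
# Sketch of the dual-model decision rule: each "model" reconstructs the
# input sequence, and the class whose model reconstructs it better (lower
# MSE) wins, so no manual threshold is needed. The toy reconstructors
# below are invented stand-ins for the paper's trained TCNs.

def mse(a, b):
    """Mean squared reconstruction error between two sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def reconstruct_normal(seq):
    # Stand-in for the TCN trained on normal wheel-speed dynamics:
    # assume it lightly smooths samples toward the sequence mean.
    m = sum(seq) / len(seq)
    return [0.9 * x + 0.1 * m for x in seq]

def reconstruct_faulty(seq):
    # Stand-in for the TCN trained on faulty dynamics: assume it has
    # learned stuck-at-zero dropouts, so it reproduces the first sample
    # and predicts zeros afterwards.
    return [seq[0]] + [0.0] * (len(seq) - 1)

def classify(seq):
    """Label a sequence by whichever model reconstructs it with lower MSE."""
    err_normal = mse(seq, reconstruct_normal(seq))
    err_faulty = mse(seq, reconstruct_faulty(seq))
    return "faulty" if err_faulty < err_normal else "normal"

healthy = [50.0, 50.5, 49.8, 50.2]   # steady wheel speed (km/h)
dropout = [50.0, 0.0, 0.0, 0.0]      # sensor stuck at zero
print(classify(healthy), classify(dropout))  # normal faulty
```

The appeal of the comparison-based rule is that the decision boundary adapts with the two models rather than being fixed by a tuned constant.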
22 pages, 1920 KB  
Article
Industry 4.0 Enabled Sustainable Manufacturing
by Ibrahim Abdelfadeel Shaban, Rahaf Ajaj, Haitham Elshimy and Hussien Hegab
Sustainability 2026, 18(1), 156; https://doi.org/10.3390/su18010156 - 23 Dec 2025
Abstract
The nexus of sustainable manufacturing and Industry 4.0 technologies is redefining modern industrial practices. Conventional manufacturing, characterized by intensive energy use, resource depletion, and waste generation, is increasingly unsustainable in the face of environmental pressures and evolving regulations. Industry 4.0 technologies—including IoT, artificial intelligence, data analytics, cloud computing platforms, and, recently, digital twins—provide opportunities to embed sustainability by enabling real-time monitoring, predictive analytics, and adaptive decision-making. This paper addresses key methods and strategies for the sustainability and Industry 4.0 nexus. It covers IoT systems for data-driven monitoring, AI for process optimization, and cloud platforms for supply chain sustainability, and emphasizes the use of digital twins for predictive maintenance. Organizational strategies such as cross-functional collaboration, customized software, dual-focus performance metrics, and workforce reskilling are explored, alongside barriers including high capital costs, cybersecurity risks, and system integration challenges. The findings present a structured perspective on harmonizing sustainability and Industry 4.0, demonstrating how this nexus can reduce environmental impact, enhance efficiency, and support long-term industrial resilience.
67 pages, 2221 KB  
Systematic Review
Artificial Intelligence of Things for Next-Generation Predictive Maintenance
by Taimia Bitam, Aya Yahiaoui, Djallel Eddine Boubiche, Rafael Martínez-Peláez, Homero Toral-Cruz and Pablo Velarde-Alvarado
Sensors 2025, 25(24), 7636; https://doi.org/10.3390/s25247636 - 16 Dec 2025
Viewed by 471
Abstract
Industry 5.0 introduces a shift toward human-centric, sustainable, and resilient industrial ecosystems, emphasizing intelligent automation, collaboration, and adaptive operations. Predictive Maintenance (PdM) plays a critical role in this transition, addressing the limitations of traditional maintenance approaches in increasingly complex and data-driven environments. The convergence of Artificial Intelligence and the Industrial Internet of Things, referred to as the Artificial Intelligence of Things (AIoT), enables real-time sensing, learning, and decision-making for advanced fault detection, Remaining Useful Life estimation, and prescriptive maintenance actions. This study provides a systematic and structured review of AIoT-enabled PdM aligned with Industry 5.0 objectives. It presents a unified taxonomy integrating AI models, Industrial Internet of Things (IIoT) infrastructures, and AIoT architectures; reviews AI-driven techniques and sector-specific implementations in manufacturing, transportation, and energy; and analyzes emerging paradigms such as Edge–Cloud collaboration, federated learning, self-supervised learning, and digital twins for autonomous and privacy-preserving maintenance. Furthermore, this paper synthesizes strengths, limitations, and cross-industry challenges, and outlines future research directions centered on explainability, data quality and heterogeneity, resource-constrained intelligence, cybersecurity, and human–AI collaboration. By bridging technological advancements with Industry 5.0 principles, this review contributes a comprehensive foundation for the development of scalable, trustworthy, and next-generation AIoT-based predictive maintenance systems.
(This article belongs to the Section Internet of Things)
24 pages, 1617 KB  
Systematic Review
A Systematic Review on the Intersection of the Cold Chain and Digital Transformation
by Nadin Alherimi and Mohamed Ben-Daya
Sustainability 2025, 17(24), 11202; https://doi.org/10.3390/su172411202 - 14 Dec 2025
Viewed by 751
Abstract
Digital transformation (DT) is reshaping cold chain operations through technologies such as the Internet of Things (IoT), artificial intelligence (AI), blockchain, and digital twins. However, evidence remains fragmented, and a systematic synthesis focused on how these technologies affect cold chain performance, sustainability, and cost-efficiency is limited. This PRISMA-based systematic literature review analyzes 107 studies published between 2009 and 2025 to examine enabling technologies and application areas, operational and sustainability impacts, and the main adoption challenges. The reviewed evidence suggests that digitalization can improve real-time visibility, temperature control, traceability, and energy management, supporting waste reduction and improved quality assurance. Key challenges include high implementation costs and uncertain returns on investment, interoperability constraints, data governance and cybersecurity concerns, and organizational readiness gaps. The paper concludes with implications for managers and policymakers and a future research agenda emphasizing integrated multi-technology solutions, standardized sustainability assessment, and rigorous validation through pilots, testbeds, and real-world deployments to enable scalable and resilient cold chain digitalization.
22 pages, 371 KB  
Review
Artificial Intelligence as the Next Frontier in Cyber Defense: Opportunities and Risks
by Oladele Afolalu and Mohohlo Samuel Tsoeu
Electronics 2025, 14(24), 4853; https://doi.org/10.3390/electronics14244853 - 10 Dec 2025
Cited by 1 | Viewed by 512
Abstract
The limitations of conventional rule-based security systems have been exposed by the rapid evolution of cyber threats, necessitating more proactive, intelligent, and flexible solutions. In cybersecurity, Artificial Intelligence (AI) has emerged as a transformative factor, offering improved threat detection, prediction, and automated response capabilities. This paper explores the advantages of using AI in strengthening cybersecurity, focusing on its applications in machine learning, Deep Learning, Natural Language Processing, and reinforcement learning. We highlight the improvements brought by AI in terms of real-time incident response, detection accuracy, scalability, and false positive reduction while processing massive datasets. Furthermore, we examine the challenges that accompany the integration of AI into cybersecurity, including adversarial attacks, data quality constraints, interpretability, and ethical implications. The study concludes by identifying potential future directions, such as integration with blockchain and IoT, Explainable AI, and the implementation of autonomous security systems. By presenting a comprehensive analysis, this paper underscores the exceptional potential of AI to transform cybersecurity into a field that is more robust, adaptive, and predictive.
10 pages, 488 KB  
Proceeding Paper
Enhancing Critical Industrial Processes with Artificial Intelligence Models
by Karim Amzil, Rajaa Saidi and Walid Cherif
Eng. Proc. 2025, 112(1), 75; https://doi.org/10.3390/engproc2025112075 - 8 Dec 2025
Viewed by 258
Abstract
This review explores the deployment of Artificial Intelligence (AI) technologies to augment key industry processes in the new paradigm of Industry 5.0. Based on a handpicked collection of 35 peer-reviewed articles and leading resources, the study integrates the latest breakthroughs in Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), and Federated Learning (FL) with their applications in predictive maintenance, process planning, real-time monitoring, and operational excellence. The results emphasize AI’s central role in making manufacturing smarter, minimizing system downtime, and facilitating information-driven decision-making in industries such as aerospace, energy, and intelligent manufacturing. Yet, the review also highlights significant challenges, ranging from data heterogeneity to model interpretability, security risks, and the ethics of automation. Solutions in the making, including Explainable AI (XAI), privacy-enhancing collaborative models, and enhanced cybersecurity protocols, are postulated to be the key drivers for the development of dependable and resilient industrial AI systems. The study concludes by proposing directions for further research and practice to secure the safe, transparent, and human-centered deployment of AI in industrial settings.
41 pages, 6103 KB  
Article
H-RT-IDPS: A Hierarchical Real-Time Intrusion Detection and Prevention System for the Smart Internet of Vehicles via TinyML-Distilled CNN and Hybrid BiLSTM-XGBoost Models
by Ikram Hamdaoui, Chaymae Rami, Zakaria El Allali and Khalid El Makkaoui
Technologies 2025, 13(12), 572; https://doi.org/10.3390/technologies13120572 - 5 Dec 2025
Viewed by 432
Abstract
The integration of connected vehicles into smart city infrastructure introduces critical cybersecurity challenges for the Internet of Vehicles (IoV), where resource-constrained vehicles and powerful roadside units (RSUs) must collaborate for secure communication. We propose H-RT-IDPS, a hierarchical real-time intrusion detection and prevention system targeting two high-priority IoV security pillars: availability (traffic overload) and integrity/authenticity (spoofing), with spoofing evaluated across multiple subclasses (GAS, RPM, SPEED, and steering wheel). In the offline phase, deep learning and hybrid models were benchmarked on the vehicular CAN bus dataset CICIoV2024, with the BiLSTM-XGBoost hybrid chosen for its balance between accuracy and inference speed. Real-time deployment uses a TinyML-distilled CNN on vehicles for ultra-lightweight, low-latency detection, while RSU-level BiLSTM-XGBoost performs a deeper temporal analysis. A Kafka–Spark Streaming pipeline supports localized classification, prevention, and dashboard-based monitoring. In baseline, stealth, and coordinated modes, the evaluation achieved accuracy, precision, recall, and F1-scores all above 97%. The mean end-to-end inference latency was 148.67 ms, and the resource usage was stable. The framework remains robust in both high-traffic and low-frequency attack scenarios, enhancing operator situational awareness through real-time visualizations. These results demonstrate a scalable, explainable, and operator-focused IDPS well suited for securing SC-IoV deployments against evolving threats.
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)
47 pages, 12434 KB  
Article
AI-Driven Blockchain and Federated Learning for Secure Electronic Health Records Sharing
by Muhammad Saeed Javed, Ali Hennache, Muhammad Imran and Muhammad Kamran Khan
Electronics 2025, 14(23), 4774; https://doi.org/10.3390/electronics14234774 - 4 Dec 2025
Viewed by 453
Abstract
The proliferation of electronic health records necessitates secure and privacy-preserving data sharing frameworks to combat escalating cybersecurity threats in healthcare. Current systems face critical limitations including centralized data repositories vulnerable to breaches, static consent mechanisms, and inadequate audit capabilities. This paper introduces an integrated blockchain and federated learning framework that enables privacy-preserving collaborative AI across healthcare institutions without centralized data pooling. The proposed approach combines federated distillation for heterogeneous model collaboration with dynamic differential privacy that adapts noise injection to data sensitivity levels. A novel threshold key-sharing protocol ensures decentralized access control, while a dual-layer Quorum blockchain establishes immutable audit trails for all data sharing transactions. Experimental evaluation on clinical datasets (Mortality Prediction and Clinical Deterioration from eICU-CRD) demonstrates that our framework maintains diagnostic accuracy within 3.6% of centralized approaches while reducing communication overhead by 71% and providing formal privacy guarantees. For Clinical Deterioration prediction, the framework achieves 96.9% absolute accuracy with FD-DP at ϵ = 1.0, representing only 0.14% degradation from centralized performance. The solution supports HIPAA-aligned technical safeguards, mitigates inference and membership attacks, and enables secure cross-institutional data sharing with real-time auditability. This work establishes a new paradigm for privacy-preserving healthcare AI that balances data utility, regulatory requirements, and protection against emerging threats in distributed clinical environments.
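The abstract above mentions dynamic differential privacy that adapts noise injection to data sensitivity levels. A minimal sketch of that idea, assuming the classic Gaussian mechanism and an invented sensitivity-tier-to-privacy-budget mapping (the paper's actual calibration is not specified in the abstract):

```python
# Illustrative sketch of "dynamic differential privacy": clip a model
# update to bound its L2 sensitivity, then add Gaussian noise whose scale
# depends on a declared data-sensitivity tier. The tier-to-epsilon mapping
# and clipping norm are assumptions made for this example.
import math
import random

def gaussian_sigma(epsilon, delta, l2_sensitivity):
    """Classic Gaussian-mechanism noise scale for (epsilon, delta)-DP."""
    return l2_sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def privatize_update(update, tier, delta=1e-5, clip=1.0):
    """Clip a gradient update and add tier-dependent Gaussian noise."""
    budgets = {"low": 4.0, "medium": 1.0, "high": 0.5}  # assumed epsilons
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]       # bound L2 sensitivity
    sigma = gaussian_sigma(budgets[tier], delta, clip)
    return [x + random.gauss(0.0, sigma) for x in clipped]

random.seed(0)
noisy = privatize_update([0.3, -0.4, 0.2], tier="high")  # most sensitive
print(len(noisy))
```

The key point is that more sensitive data gets a smaller privacy budget and therefore a larger noise scale, trading accuracy for stronger protection on exactly the records that need it.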
26 pages, 729 KB  
Article
Sensor-Based Cyber Risk Management in Railway Infrastructure Under the NIS2 Directive
by Rafał Wachnik, Katarzyna Chruzik and Bolesław Pochopień
Sensors 2025, 25(23), 7384; https://doi.org/10.3390/s25237384 - 4 Dec 2025
Viewed by 369
Abstract
This study introduces a sensor-centric cybersecurity framework for railway infrastructure that extends Failure Mode and Effects Analysis (FMEA) from traditional reliability evaluation into the domain of cyber-induced failures affecting data integrity, availability and authenticity. The contribution lies in bridging regulatory obligations of the NIS2 Directive with field-layer monitoring by enabling risk indicators to evolve dynamically rather than remain static documentation artefacts. The approach is demonstrated using a scenario-based dataset collected from approximately 250 trackside, rolling-stock, environmental and power-monitoring sensors deployed over a 25 km operational segment, with representative anomalies generated through controlled spoofing, replay and injection conditions. Risk was evaluated using RPN scores derived from Severity–Occurrence–Detectability scales, while anomaly-detection performance was observed through detection-latency variation, changes in RPN distribution, and qualitative responsiveness of timestamp-based alerts. Instead of presenting a fixed benchmark, the results show how evidence from real sensor streams can recalibrate O and D factors in near-real-time and reduce undetected exposure windows, enabling measurable compliance documentation aligned with NIS2 Article 21. The findings confirm that coupling FMEA with streaming telemetry creates a verifiable risk-evaluation loop and supports a transition toward continuous, evidence-driven cybersecurity governance in railway systems.
(This article belongs to the Section Internet of Things)
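The FMEA extension described above rests on the standard Risk Priority Number, RPN = Severity × Occurrence × Detectability, with streaming telemetry recalibrating the O and D factors instead of leaving them as static documentation values. A minimal sketch under assumed recalibration rules (the thresholds and the spoofing scenario below are invented for illustration):

```python
# Minimal sketch of evidence-driven FMEA: RPN is the product of Severity,
# Occurrence, and Detectability (each on a 1-10 scale), and live sensor
# evidence nudges O and D. The recalibration thresholds are assumptions.

def rpn(severity, occurrence, detectability):
    """Classic FMEA Risk Priority Number, each factor on a 1-10 scale."""
    return severity * occurrence * detectability

def recalibrate(occurrence, detectability, anomalies_seen, mean_latency_s):
    """Raise O when telemetry shows repeated anomalies; improve (lower) D
    when alerts arrive quickly. Both rules are illustrative only."""
    if anomalies_seen >= 3:
        occurrence = min(10, occurrence + 1)
    if mean_latency_s < 5.0:
        detectability = max(1, detectability - 1)
    return occurrence, detectability

# Hypothetical spoofing failure mode for a trackside sensor:
s, o, d = 8, 3, 6
baseline = rpn(s, o, d)                        # 8 * 3 * 6 = 144
o, d = recalibrate(o, d, anomalies_seen=4, mean_latency_s=2.1)
updated = rpn(s, o, d)                         # 8 * 4 * 5 = 160
print(baseline, updated)                       # 144 160
```

Note the two adjustments pull in opposite directions: observed anomalies make the failure mode more likely (higher O), while fast timestamp-based alerts make it easier to detect (lower D), so the net RPN shift reflects real evidence rather than a static worksheet.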
33 pages, 2499 KB  
Review
Adaptive Control and Interoperability Frameworks for Wind Power Plant Integration: A Comprehensive Review of Strategies, Standards, and Real-Time Validation
by Sinawo Nomandela, Mkhululi E. S. Mnguni and Atanda K. Raji
Appl. Sci. 2025, 15(23), 12729; https://doi.org/10.3390/app152312729 - 1 Dec 2025
Viewed by 318
Abstract
The rapid integration of wind power plants (WPPs) into modern electrical power systems (MEPSs) is crucial to global decarbonization, but it introduces significant technical challenges. Variability, intermittency, and forecasting uncertainty compromise frequency stability, voltage regulation, and grid reliability, particularly at high levels of renewable energy integration. To address these issues, adaptive control strategies have been proposed at the turbine, plant, and system levels, including reinforcement learning-based optimization, cooperative plant-level dispatch, and hybrid energy schemes with battery energy storage systems (BESS). At the same time, interoperability frameworks based on international standards, notably IEC 61850 and IEC 61400-25, provide the communication backbone for vendor-independent coordination; however, their application remains largely limited to monitoring and protection, rather than holistic adaptive operation. Real-Time Automation Controllers (RTACs) emerge as promising platforms for unifying monitoring, operation, and protection functions, but their deployment in large-scale WPPs remains underexplored. Validation of these frameworks is still dominated by simulation-only studies, while real-time digital simulation (RTDS) and hardware-in-the-loop (HIL) environments have only recently begun to bridge the gap between theory and practice. This review consolidates advances in adaptive control, interoperability, and validation, identifies critical gaps, including limited PCC-level integration, underutilization of IEC standards, and insufficient cyber–physical resilience, and outlines future research directions. Emphasis is placed on holistic adaptive frameworks, IEC–RTAC integration, digital twin–HIL environments, and AI-enabled adaptive methods with embedded cybersecurity. 
By synthesizing these perspectives, the review highlights pathways toward resilient, secure, and standards-compliant renewable power systems that can support the transition to a low-carbon future.
(This article belongs to the Special Issue Energy and Power Systems: Control and Management)
28 pages, 3050 KB  
Review
Safety Engineering for Humanoid Robots in Everyday Life—Scoping Review
by Dávid Kóczi and József Sárosi
Electronics 2025, 14(23), 4734; https://doi.org/10.3390/electronics14234734 - 1 Dec 2025
Viewed by 1029
Abstract
As humanoid robots move from controlled industrial environments into everyday human life, their safe integration is essential for societal acceptance and effective human–robot interaction (HRI). This scoping review examines engineering safety frameworks for humanoid robots across four core domains: (1) physical safety in HRI, (2) cybersecurity and software robustness, (3) safety standards and regulatory frameworks, and (4) ethical and societal implications. In the area of physical safety, recent research trends emphasize proactive, multimodal perception-based collision avoidance, the use of compliance mechanisms, and fault-tolerant control to handle hardware failures and falls. In cybersecurity and software robustness, studies increasingly address the full threat landscape, secure real-time communication, and reliability of artificial intelligence (AI)-based control. The analysis of standards and regulations reveals a lag between technological advances and the adaptation of key safety standards in current research. Ethical and societal studies show that safety is also shaped by user trust, perceived safety, and data protection. Within the corpus of 121 peer-reviewed studies published between 2021 and 2025 and included in this review, most work concentrates on physical safety, while cybersecurity, standardization, and socio-ethical aspects are addressed less frequently. These gaps point to the need for more integrated, cross-domain approaches to safety engineering for humanoid robots.
16 pages, 333 KB  
Article
Compact Models for Some Cluster Problems on Node-Colored Graphs
by Roberto Montemanni, Derek H. Smith, Pongchanun Luangpaiboon and Pasura Aungkulanon
Algorithms 2025, 18(12), 759; https://doi.org/10.3390/a18120759 - 29 Nov 2025
Viewed by 209
Abstract
Three optimization problems based on node-colored undirected graphs are the subject of the present study. These problems model real-world applications in several domains, such as cybersecurity, bioinformatics, and social networks, although they have a similar abstract representation. In all of the problems, the goal is to partition the graph into colorful connected components, which means that in each of the connected components, a color can appear in at most one node. The problems are optimized according to different objective functions, leading to different optimal partitions. We propose a compact Mixed Integer Linear Programming formulation for each of the three problems. These models are based on spanning trees, represented through multi-commodity flows. The compact nature of the new linear models is easier to handle than the approaches that previously appeared in the literature. These were based on models with an exponential number of constraints, which, therefore, required complex solving techniques based on the dynamic generation of constraints within a branch-and-cut framework. Computational experiments carried out on the standard benchmark instances for the problems show the potential of the new compact methods, which, once fed into modern state-of-the-art solvers, are able to obtain results better than the previous algorithmic approaches. As an outcome of the experimental campaign, a dozen instances of the different problems considered are closed for the first time.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
29 pages, 2191 KB  
Review
IoT Applications and Challenges in Global Healthcare Systems: A Comprehensive Review
by Fadele Ayotunde Alaba, Alvaro Rocha, Hakeem Adewale Sulaimon and Owamoyo Najeem
Future Internet 2025, 17(12), 549; https://doi.org/10.3390/fi17120549 - 29 Nov 2025
Viewed by 666
Abstract
The Internet of Things (IoT) has influenced the healthcare industry by enabling real-time monitoring, data-driven decision-making, and automation of medical activities. IoT in healthcare comprises a network of interconnected medical devices, sensors, and software systems that gather, analyse, and transmit patient data, enhancing the efficiency, accuracy, and accessibility of healthcare services. Despite its benefits, the deployment and impact of IoT in healthcare vary between countries due to differences in healthcare infrastructure, regulatory frameworks, and technical advancements. This review highlights how IoT technologies underpin the efficiency of EHR and HIE systems by enabling continuous data flow, interoperability, and real-time patient care. It also addresses the challenges involved in IoT adoption, including data privacy concerns, interoperability issues, high implementation costs, and cybersecurity risks. Additionally, the paper examines future trends in IoT healthcare, including 5G integration, AI-enhanced healthcare analytics, blockchain-based security solutions, and the creation of energy-efficient IoT medical equipment. Through an analysis of worldwide trends and obstacles, this research offers suggestions for policies, methods, and best practices to close the digital healthcare gap and ensure that healthcare solutions powered by the IoT are available, safe, and effective everywhere.
18 pages, 1417 KB  
Review
The Aho-Corasick Paradigm in Modern Antivirus Engines: A Cornerstone of Signature-Based Malware Detection
by Paul A. Gagniuc, Ionel-Bujorel Păvăloiu and Maria-Iuliana Dascălu
Algorithms 2025, 18(12), 742; https://doi.org/10.3390/a18120742 - 25 Nov 2025
Viewed by 494
Abstract
The Aho-Corasick (AC) algorithm remains one of the most influential developments in deterministic multi-pattern matching due to its ability to recognize multiple strings in linear time within a single data stream. Originally conceived for bibliographic text retrieval, the structure of the algorithm is based on a trie augmented with failure links and output functions, which has proven to be remarkably adaptable across computational domains. This review presents a comprehensive synthesis of the AC algorithm, with details on its theoretical foundations, formal automaton structure, and operational principles, as well as tracing its historical evolution from text-search systems to large-scale malware detection. This work further explores the integration of Aho-Corasick automata within modern antivirus architectures, describing mechanisms of signature compilation, real-time scanning pipelines, and large-scale deployment in contemporary cybersecurity systems. The deterministic structure of the Aho-Corasick automaton provides linear-time pattern recognition relative to input size, while practical performance characteristics reflect memory and architecture constraints in large signature sets. This linear-time property enables predictable and efficient malware detection, where each byte of input induces a constant computational cost. Such deterministic efficiency makes the algorithm ideally suited for real-time antivirus scanning and signature-based threat identification. Thus, nearly fifty years after its inception, AC continues to bridge formal automata theory and modern cybersecurity practice.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
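The trie-with-failure-links structure described above is simple enough to show in a compact sketch (a textbook illustration of the classic algorithm, not the scanning engine of any particular antivirus): a trie is built over the pattern set, failure links are computed breadth-first, and output sets are merged along those links so every input character is processed in amortized constant time.

```python
# Compact Aho-Corasick matcher: trie + failure links + merged output sets.
from collections import deque

def build_automaton(patterns):
    """Build goto/fail/output tables for an Aho-Corasick automaton."""
    goto = [{}]             # per-state transitions: {char: next_state}
    out = [set()]           # patterns recognized on entering each state
    for pat in patterns:    # 1) trie over all patterns
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    fail = [0] * len(goto)
    queue = deque(goto[0].values())   # 2) failure links, breadth-first
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]           # walk up until a ch-transition exists
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]    # inherit matches ending here
    return goto, fail, out

def search(text, automaton):
    """Return (start_index, pattern) for every match in one pass."""
    goto, fail, out = automaton
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]       # follow failure links on mismatch
        state = goto[state].get(ch, 0)
        hits.extend((i - len(p) + 1, p) for p in out[state])
    return hits

ac = build_automaton(["evil", "vile", "il"])
print(sorted(search("devilish", ac)))   # [(1, 'evil'), (3, 'il')]
```

Because the automaton is built once from the signature set and then consumes the input a character at a time, scan cost is independent of how many signatures are loaded, which is exactly the property that makes the paradigm attractive for signature-based engines.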