Search Results (1,759)

Search Parameters: Keywords = networked rule-based

28 pages, 7980 KB  
Article
Smart Predictive Maintenance: A TCN-Based System for Early Fault Detection in Industrial Machinery
by Abuzar Khan, Ahmad Junaid, Muhammad Farooq Siddique, Abid Iqbal, Husam S. Samkari, Mohammed F. Allehyani and Ghassan Husnain
Machines 2026, 14(2), 164; https://doi.org/10.3390/machines14020164 - 1 Feb 2026
Abstract
Modern factories still struggle with unexpected machine failures because traditional maintenance systems depend on fixed rules and threshold-based alerts. These older approaches often overlook subtle or complex patterns in multimodal sensor data, causing them to miss early signs of wear and leading to late or incorrect maintenance decisions. As a result, production can slow down, costs increase and equipment reliability suffers. To address this challenge, this study introduces a smart and interpretable fault diagnosis and predictive maintenance framework designed to detect wear, degradation and potential failures before they disrupt operations. The proposed framework integrates multiscale feature extraction, multimodal sensor fusion and cross-sensor correlation analysis with advanced temporal modeling using a Temporal Convolutional Network (TCN). By jointly performing tool-health classification and Remaining Useful Life (RUL) estimation, the framework provides a comprehensive assessment of machine condition. When evaluated on the NASA Ames milling dataset, the model achieved an overall accuracy of 86%, correctly classifying healthy and failed tools in more than 88% of cases and worn tools in over 75%, demonstrating consistent performance across different stages of tool wear. Explainable artificial intelligence (XAI) techniques, including attention-based visualizations and SHAP-based feature attribution, reveal that electrical and vibration signals are the most influential early indicators of tool degradation. The proposed framework exhibits low computational latency and minimal memory requirements, making it suitable for real-time fault diagnosis and deployment on industrial edge devices. Overall, the framework balances predictive accuracy, interpretability and practical applicability, enabling proactive and reliable maintenance decisions that enhance machine uptime and support efficient smart manufacturing operations.
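A minimal sketch, not the authors' implementation, of the kind of TCN described above: a stack of dilated causal 1D convolutions feeding a tool-health classification head and an RUL regression head. The sensor count, channel widths, kernel size and class count are illustrative assumptions.

```python
# Illustrative only: dilated causal TCN with classification and RUL heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation        # left padding keeps the conv causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                              # x: (batch, sensor channels, time)
        return self.act(self.conv(F.pad(x, (self.pad, 0))))

class TCNPredictor(nn.Module):
    def __init__(self, n_sensors=6, hidden=32, n_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(
            CausalConvBlock(n_sensors, hidden, dilation=1),
            CausalConvBlock(hidden, hidden, dilation=2),
            CausalConvBlock(hidden, hidden, dilation=4),
        )
        self.cls_head = nn.Linear(hidden, n_classes)    # healthy / worn / failed
        self.rul_head = nn.Linear(hidden, 1)            # remaining useful life estimate

    def forward(self, x):
        h = self.backbone(x)[:, :, -1]                  # summary at the last time step
        return self.cls_head(h), self.rul_head(h)

logits, rul = TCNPredictor()(torch.randn(8, 6, 256))    # 8 windows, 6 sensors, 256 steps
```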

22 pages, 3678 KB  
Article
Neuro-Adaptive Finite-Time Command-Filter Backstepping Control of Full State Feedback Nonlinear System
by Jiaxun Che, Mengxuan Zhang and Lin Sun
Symmetry 2026, 18(2), 274; https://doi.org/10.3390/sym18020274 - 31 Jan 2026
Abstract
This work develops a neuro-adaptive finite-time command-filtered backstepping (CFB) control framework for full-state feedback systems. The design methodology begins with error transformation techniques to embed finite-time prescribed performance (FT-PP) specifications into the control architecture. Building upon this foundation, a dynamic error compensation system is formulated to neutralize filtering artifacts induced by the finite-time command filter (FT-CF), thereby achieving precise finite-time convergence. To address state estimation requirements, we construct a neural network-based state estimation framework utilizing radial basis function neural networks (RBFNNs) for simultaneous uncertainty approximation and unmeasurable state reconstruction. The synthesis of FT-PP constraints and neural state estimation culminates in the derivation of an adaptive control law with Lyapunov-stable update rules, theoretically ensuring that tracking errors enter and remain within small neighborhoods of target compact sets within predefined finite time horizons. The experiments cover both numerical simulations and practical case studies, which verify the feasibility and effectiveness of the proposed control scheme.
(This article belongs to the Special Issue Symmetry in Control Systems: Theory, Design, and Application)
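As a small illustration of the RBFNN approximation idea mentioned in the abstract, the sketch below fits Gaussian basis-function output weights to an unknown nonlinearity. The centres, widths, target function, and the offline least-squares fit (in place of the adaptive Lyapunov-based update used in such designs) are all assumptions for illustration only.

```python
# Illustrative only: approximate an unknown scalar nonlinearity with an RBF network.
import numpy as np

centres = np.linspace(-2.0, 2.0, 9)                   # Gaussian basis-function centres
width = 0.5

def phi(x):
    """Vector of Gaussian basis-function activations at a scalar input x."""
    return np.exp(-((x - centres) ** 2) / (2.0 * width ** 2))

xs = np.linspace(-2.0, 2.0, 200)
targets = np.sin(2.0 * xs) + 0.3 * xs ** 2            # stand-in for the unknown dynamics term
Phi = np.stack([phi(x) for x in xs])                   # (200, 9) design matrix
W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)      # output-layer weights

print("max approximation error:", np.max(np.abs(Phi @ W - targets)))
```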
22 pages, 2193 KB  
Article
Deep Reinforcement Learning-Based Experimental Scheduling System for Clay Mineral Extraction
by Bo Zhou, Lei He, Yongqiang Li, Zhandong Lv and Shiping Zhang
Electronics 2026, 15(3), 617; https://doi.org/10.3390/electronics15030617 - 31 Jan 2026
Abstract
Efficient and non-destructive extraction of clay minerals is fundamental for shale oil and gas reservoir evaluation and enrichment mechanism studies. However, traditional manual extraction experiments face bottlenecks such as low efficiency and reliance on operator experience, which limit their scalability and adaptability to intelligent research demands. To address this, this paper proposes an intelligent experimental scheduling system for clay mineral extraction based on deep reinforcement learning. First, the complex experimental process is deconstructed, and its core scheduling stages are abstracted into a Flexible Job Shop Scheduling Problem (FJSP) model with resting time constraints. Then, a scheduling agent based on the Proximal Policy Optimization (PPO) algorithm is developed and integrated with an improved Heterogeneous Graph Neural Network (HGNN) to represent the relationships among operations, machines, and constraints. This enables effective capture of the complex topological structure of the experimental environment and facilitates efficient sequential decision-making. To facilitate future practical applicability, a four-layer system architecture is proposed, comprising the physical equipment layer, execution control layer, scheduling decision layer, and interactive application layer. A digital twin module is designed to bridge the gap between theoretical scheduling and physical execution. This study focuses on validating the core scheduling algorithm through realistic simulations. Simulation results demonstrate that the proposed HGNN-PPO scheduling method significantly outperforms traditional heuristic rules (FIFO, SPT), meta-heuristic algorithms (GA), and simplified reinforcement learning methods (PPO-MLP). Specifically, in large-scale problems, our method reduces the makespan by over 9% compared to the PPO-MLP baseline, and the algorithm runs more than 30 times faster than GA. This highlights its superior performance and scalability. This study provides an effective solution for intelligent scheduling in automated chemical laboratory workflows and holds significant theoretical and practical value for advancing the intelligentization of experimental sciences, including shale oil and gas research.
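A minimal sketch of the PPO clipped surrogate loss at the core of such a scheduling agent; the HGNN state encoder, action masking, and the FJSP environment are omitted, and the tensors below are illustrative stand-ins rather than the paper's code.

```python
# Illustrative only: PPO clipped surrogate objective for a dispatching policy.
import torch

def ppo_policy_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped surrogate objective (returned as a loss to minimize)."""
    ratio = torch.exp(new_logp - old_logp)               # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))

# Log-probabilities of chosen (operation, machine) actions under the old and
# updated policies, with advantages derived from makespan improvement.
old_logp = torch.tensor([-1.2, -0.7, -2.1])
new_logp = torch.tensor([-1.0, -0.9, -1.8])
advantages = torch.tensor([0.5, -0.2, 1.1])
print(ppo_policy_loss(new_logp, old_logp, advantages))
```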

25 pages, 1078 KB  
Article
Novel Global Network Signal Station Sorting Algorithm Based on Hop Describe Word (HDW) and Clustering-Assisted Temporal Sorting
by Huijie Zhu, Wei Wang, Cui Yang, Youjun Xiang and Qi Ding
Mathematics 2026, 14(3), 495; https://doi.org/10.3390/math14030495 - 30 Jan 2026
Abstract
To address the sorting challenge of multiple stations and multiple networking modes (networks with inconsistent features and networks with similar features but asynchrony) in complex electromagnetic environments, this paper proposes a full-network station sorting algorithm that integrates Hop Describe Word (HDW), hierarchical clustering, and temporal sorting. First, from the perspectives of hardware differences, channel interference, and networking strategies, it is demonstrated that “there theoretically exist no multiple frequency-hopping networks that are completely synchronized and have consistent features”, thereby defining the sorting boundary of the algorithm. Second, “preliminary clustering sorting” is used to separate networks with significant differences in HDW static features, and then the “temporal sorting algorithm” designed in this paper is applied to overcome the sorting bottleneck of networks with similar features but asynchrony. Finally, based on the feature rules of the sorted networks, ARIMA temporal prediction and K-nearest neighbor (KNN) feature completion are adopted to achieve accurate recovery of missing signals. Experimental results show that in scenarios with a signal-to-noise ratio (SNR) of −8 dB to 5 dB and a signal loss rate of 0% to 15%, the proposed algorithm achieves an average sorting accuracy of 96.7%, a sorting completeness of 94.3%, and a robustness fluctuation range of only ±4.2% for 10 mixed networks. It significantly outperforms the traditional K-means algorithm and the single HDW clustering algorithm, and can effectively meet the needs of military and civilian spectrum reconnaissance and station sorting.
(This article belongs to the Special Issue Application of Neural Networks and Deep Learning, 2nd Edition)
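A minimal sketch, using standard libraries, of the two recovery steps named in the abstract: ARIMA forecasting of a sorted network's feature sequence and KNN-based completion of missing feature entries. The model orders, neighbour count, and toy data are illustrative assumptions.

```python
# Illustrative only: ARIMA forecasting plus KNN imputation for missing signals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.impute import KNNImputer

# ARIMA prediction of the next hop-feature values for one sorted network.
hop_feature = np.array([4.1, 4.3, 4.2, 4.5, 4.4, 4.6, 4.7, 4.6, 4.8, 4.9])
arima = ARIMA(hop_feature, order=(1, 1, 1)).fit()
next_hops = arima.forecast(steps=3)               # predicted values for missing hops

# KNN completion of a feature matrix (rows = hops, columns = HDW-style features)
# where np.nan marks lost signals.
features = np.array([[1.0, 2.0, np.nan],
                     [1.1, np.nan, 3.0],
                     [0.9, 2.1, 2.9],
                     [1.2, 2.2, 3.1]])
completed = KNNImputer(n_neighbors=2).fit_transform(features)
print(next_hops, completed, sep="\n")
```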

40 pages, 581 KB  
Review
A Survey of AI-Enabled Predictive Maintenance for Railway Infrastructure: Models, Data Sources, and Research Challenges
by Francisco Javier Bris-Peñalver, Randy Verdecia-Peña and José I. Alonso
Sensors 2026, 26(3), 906; https://doi.org/10.3390/s26030906 - 30 Jan 2026
Abstract
Rail transport is central to achieving sustainable and energy-efficient mobility, and its digitalization is accelerating the adoption of condition-based maintenance (CBM) strategies. However, existing maintenance practices remain largely reactive or rely on limited rule-based diagnostics, which constrain safety, interoperability, and lifecycle optimization. This survey provides a comprehensive and structured review of Artificial Intelligence techniques applied to the preventive, predictive, and prescriptive maintenance of railway infrastructure. We analyze and compare machine learning and deep learning approaches—including neural networks, support vector machines, random forests, genetic algorithms, and end-to-end deep models—applied to parameters such as track geometry, vibration-based monitoring, and imaging-based inspection. The survey highlights the dominant data sources and feature engineering techniques, evaluates the model performance across subsystems, and identifies research gaps related to data quality, cross-network generalization, model robustness, and integration with real-time asset management platforms. We further discuss emerging research directions, including Digital Twins, edge AI, and Cyber–Physical predictive systems, which position AI as an enabler of autonomous infrastructure management. This survey defines the key challenges and opportunities to guide future research and standardization in intelligent railway maintenance ecosystems.

24 pages, 3822 KB  
Article
Optimising Calculation Logic in Emergency Management: A Framework for Strategic Decision-Making
by Yuqi Hang and Kexi Wang
Systems 2026, 14(2), 139; https://doi.org/10.3390/systems14020139 - 29 Jan 2026
Abstract
Emergency management decision-making must be both timely and reliable, as even slight delays can result in substantial human and economic losses. However, current systems and recent state-of-the-art work often use inflexible rule-based logic that cannot adapt to rapidly changing emergency conditions or dynamically optimise response allocation. This study therefore presents the Calculation Logic Optimisation Framework (CLOF), a novel data-driven approach that enhances decision-making strategically through learning-based prediction and multi-objective optimisation, utilising the 911 Emergency Calls data set, comprising more than half a million records from Montgomery County, Pennsylvania, USA. The CLOF examines patterns over space and time and uses optimised calculation logic to reduce response latency and increase decision reliability. The suggested framework outperforms the standard Decision Tree, Random Forest, Gradient Boosting, and XGBoost baselines, achieving 94.68% accuracy, a log-loss of 0.081, and a reliability score (R2) of 0.955. The mean response time error is reported to have been reduced by 19%, illustrating robustness to real-world uncertainty. The CLOF aims to deliver results that confirm the scalability, interpretability, and efficiency of modern EM frameworks, thereby improving safety, risk awareness, and operational quality in large-scale emergency networks.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

18 pages, 3833 KB  
Article
A Data-Driven Two-Phase Energy Consumption Prediction Method for Injection Compressor Systems in Underground Gas Storage
by Ying Yang, De Tang, Guicheng Yu, Junchi Zhou, Jinsong Yang, Tingting Jiang, Zixu Huang and Jianguo Miao
Appl. Syst. Innov. 2026, 9(2), 32; https://doi.org/10.3390/asi9020032 - 28 Jan 2026
Abstract
Since the compressor system in underground gas storage (UGS) facilities operates under highly dynamic and complex injection conditions, traditional rule-based operation and mechanism-based modeling approaches prove inadequate for meeting the stringent requirements of high-accuracy prediction under such variable conditions. To address this, a data-driven two-phase prediction framework for compressor energy consumption is proposed. In the first phase, a convolutional neural network with efficient channel attention (CNN-ECA) is developed to accurately forecast key operating condition parameters. Based on these outputs, the second phase employs a compressor performance prediction model to estimate unit energy consumption with improved precision. In addition, a hybrid prediction strategy integrating a Transformer architecture is introduced to capture long-range temporal dependencies, thereby enhancing both single-step and multi-step forecasting performance. The proposed method is evaluated using operational data from eight compressors at the Xiangguosi underground gas storage. Experimental results show that the framework achieves high prediction accuracy, with a MAPE of 4.0779% (single-step) and 4.2449% (multi-step), outperforming advanced benchmark models.
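A minimal sketch of an efficient channel attention (ECA) block applied to a 1D CNN feature map, in the spirit of the CNN-ECA stage described above; the kernel size and tensor shapes are illustrative assumptions, not the paper's architecture.

```python
# Illustrative only: ECA-style channel attention for 1D time-series features.
import torch
import torch.nn as nn

class ECA1d(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                       # x: (batch, channels, time)
        w = x.mean(dim=-1, keepdim=True)        # global average pool over time -> (B, C, 1)
        w = self.conv(w.transpose(1, 2))        # lightweight 1D conv across the channel axis
        w = torch.sigmoid(w.transpose(1, 2))    # per-channel attention weights
        return x * w                            # reweight channels

feat = torch.randn(4, 16, 128)                  # 4 samples, 16 channels, 128 time steps
out = ECA1d()(feat)
```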

24 pages, 6313 KB  
Article
IoT-Driven Pull Scheduling to Avoid Congestion in Human Emergency Evacuation
by Erol Gelenbe and Yuting Ma
Sensors 2026, 26(3), 837; https://doi.org/10.3390/s26030837 - 27 Jan 2026
Abstract
The efficient and timely management of human evacuation during emergency events is an important area of research where the Internet of Things (IoT) can be of great value. Significant areas of application for optimum evacuation strategies include buildings, sports arenas, cultural venues, such as museums and concert halls, and ships that carry passengers, such as cruise ships. In many cases, the evacuation process is complicated by constraints on space and movement, such as corridors, staircases, and passageways, that can cause congestion and slow the evacuation process. In such circumstances, the IoT can be used to sense the presence of evacuees in different locations, to sense hazards and congestion, to make sensing-based decisions that guide the evacuees dynamically in the most effective direction so as to limit or eliminate congestion and maximize safety, and to notify passengers of the directions they should take or whether they should stop and wait, through signaling with active IoT devices that can include voice and visual indications and signposts. This paper uses an analytical queueing network approach to analyze an emergency evacuation system, and suggests the use of the Pull Policy, which employs the IoT to direct evacuees in a manner that reduces downstream congestion by signalling them to move forward when the preceding evacuees exit the system. The IoT-based Pull Policy is analyzed using a realistic representation of evacuation from an existing commercial cruise ship, with a queueing network model that also allows for a computationally very efficient comparison of different routing rules with wide-ranging variations in speed parameters of each of the individual evacuees. Numerical examples are used to demonstrate its value for the timely evacuation of passengers within the confined space of a cruise ship.
(This article belongs to the Section Internet of Things)
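A toy sketch, not the paper's queueing network model, of the pull idea: evacuees at an upstream hold point are released toward a constrained downstream segment only when its occupancy falls below a capacity threshold, which caps congestion. Capacities, exit probabilities, and the random seed are illustrative assumptions.

```python
# Illustrative only: pull-style release of evacuees into a constrained segment.
import random

random.seed(0)
upstream = 30          # evacuees waiting at the hold point
downstream = 0         # evacuees currently in the constrained segment
capacity = 5           # maximum allowed occupancy of the segment
exited = 0
steps = 0

while exited < 30:
    steps += 1
    # Downstream service: each occupant exits this step with some probability.
    leaving = sum(random.random() < 0.4 for _ in range(downstream))
    downstream -= leaving
    exited += leaving
    # Pull signal: release just enough evacuees to refill the segment.
    release = min(upstream, capacity - downstream)
    upstream -= release
    downstream += release

print(f"all evacuees exited after {steps} steps")
```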

22 pages, 3101 KB  
Article
A Real-Time Pedestrian Situation Detection Method Using CNN and DeepSORT with Rule-Based Analysis for Autonomous Mobility
by Yun Hee Lee and Manbok Park
Electronics 2026, 15(3), 532; https://doi.org/10.3390/electronics15030532 - 26 Jan 2026
Abstract
This paper presents a real-time pedestrian situation detection framework for autonomous mobility platforms. The proposed approach extracts pedestrians from images acquired by a camera mounted on an autonomous mobility system, classifies their postures, tracks their trajectories, and subsequently detects pedestrian situations. A convolutional neural network (CNN) is employed for pedestrian detection and posture classification, where the YOLOv12 model is fine-tuned via transfer learning for this purpose. To improve detection and classification performance, a region of interest (ROI) is defined using camera calibration data, enabling robust detection of small-scale pedestrians over long distances. Using a custom-labeled dataset, the proposed method achieves a precision of 96.6% and a recall of 97.0% for pedestrian detection and posture classification. The detected pedestrians are tracked using the DeepSORT algorithm, and their situations are inferred through a rule-based analysis module. Experimental results demonstrate that the proposed system operates at an execution speed of 58.11 ms per frame, corresponding to 17.2 fps, thereby satisfying the real-time requirements for autonomous mobility applications. These results confirm that the proposed framework enables reliable real-time pedestrian extraction and situation awareness in real-world autonomous mobility environments.
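A minimal sketch of the kind of rule-based situation analysis that could sit on top of detector and tracker output; the situation labels, thresholds, and track attributes are illustrative assumptions, not the rules used in the paper.

```python
# Illustrative only: simple rules applied to tracked pedestrian state.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    posture: str          # e.g. "standing", "sitting", "lying"
    speed_mps: float      # estimated ground-plane speed
    distance_m: float     # distance from the ego platform

def classify_situation(t: Track) -> str:
    if t.posture == "lying":
        return "fallen pedestrian: stop"
    if t.distance_m < 5.0 and t.speed_mps > 1.0:
        return "pedestrian moving nearby: slow down"
    if t.speed_mps < 0.2:
        return "stationary pedestrian: monitor"
    return "normal"

for trk in [Track(1, "standing", 1.4, 4.2), Track(2, "lying", 0.0, 12.0)]:
    print(trk.track_id, classify_situation(trk))
```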
47 pages, 2599 KB  
Review
The Role of Artificial Intelligence in Next-Generation Handover Decision Techniques for UAVs over 6G Networks
by Mohammed Zaid, Rosdiadee Nordin and Ibraheem Shayea
Drones 2026, 10(2), 85; https://doi.org/10.3390/drones10020085 - 26 Jan 2026
Abstract
The rapid integration of unmanned aerial vehicles (UAVs) into next-generation wireless systems demands seamless and reliable handover (HO) mechanisms to ensure continuous connectivity. However, frequent topology changes, high mobility, and dynamic channel variations make traditional HO schemes inadequate for UAV-assisted 6G networks. This paper presents a comprehensive review of existing HO optimization studies, emphasizing artificial intelligence (AI) and machine learning (ML) approaches as enablers of intelligent mobility management. The surveyed works are categorized into three main scenarios: non-UAV HOs, UAVs acting as aerial base stations, and UAVs operating as user equipment, each examined under traditional rule-based and AI/ML-based paradigms. Comparative insights reveal that while conventional methods remain effective for static or low-mobility environments, AI- and ML-driven approaches significantly enhance adaptability, prediction accuracy, and overall network robustness. Emerging techniques such as deep reinforcement learning and federated learning (FL) demonstrate strong potential for proactive, scalable, and energy-efficient HO decisions in future 6G ecosystems. The paper concludes by outlining key open issues and identifying future directions toward hybrid, distributed, and context-aware learning frameworks for resilient UAV-enabled HO management.
52 pages, 3528 KB  
Review
Advanced Fault Detection and Diagnosis Exploiting Machine Learning and Artificial Intelligence for Engineering Applications
by Davide Paolini, Pierpaolo Dini, Abdussalam Elhanashi and Sergio Saponara
Electronics 2026, 15(2), 476; https://doi.org/10.3390/electronics15020476 - 22 Jan 2026
Abstract
Modern engineering systems require reliable and timely Fault Detection and Diagnosis (FDD) to ensure operational safety and resilience. Traditional model-based and rule-based approaches, although interpretable, exhibit limited scalability and adaptability in complex, data-intensive environments. This survey provides a systematic overview of recent studies exploring Machine Learning (ML) and Artificial Intelligence (AI) techniques for FDD across industrial, energy, Cyber-Physical Systems (CPS)/Internet of Things (IoT), and cybersecurity domains. Deep architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, and Graph Neural Networks (GNNs) are compared with unsupervised, hybrid, and physics-informed frameworks, emphasizing their respective strengths in adaptability, robustness, and interpretability. Quantitative synthesis and radar-based assessments suggest that AI-driven FDD approaches offer increased adaptability, scalability, and early fault detection capabilities compared to classical methods, while also introducing new challenges related to interpretability, robustness, and deployment. Emerging research directions include the development of foundation and multimodal models, federated learning (FL), and privacy-preserving learning, as well as physics-guided trustworthy AI. These trends indicate a paradigm shift toward self-adaptive, interpretable, and collaborative FDD systems capable of sustaining reliability, transparency, and autonomy across critical infrastructures.

33 pages, 32306 KB  
Article
A Reward-and-Punishment-Aware Incentive Mechanism for Directed Acyclic Graph Blockchain-Based Federated Learning in Unmanned Aerial Vehicle Networks
by Xiaofeng Xue, Qiong Li and Haokun Mao
Drones 2026, 10(1), 70; https://doi.org/10.3390/drones10010070 - 21 Jan 2026
Abstract
The integration of unmanned aerial vehicles (UAVs) and Federated Learning (FL) enables distributed model training while preserving data privacy. To overcome the challenges caused by centralized and synchronous model updates, we integrate Directed Acyclic Graph (DAG) blockchain-based FL into UAV networks. In this decentralized and asynchronous framework, UAVs can independently and autonomously participate in the FL process according to their own requirements. To achieve high FL performance, it is essential for UAVs to actively contribute their computational and data resources to the FL process. However, it is challenging to ensure that UAVs consistently contribute their resources, as they may have a propensity to prioritize their own self-interest. Therefore, it is crucial to design effective incentive mechanisms that encourage UAVs to actively participate in the FL process and contribute their computational and data resources. Currently, research on effective incentive mechanisms for DAG blockchain-based FL frameworks in UAV networks remains limited. To address these challenges, this paper proposes a novel incentive mechanism that integrates both rewards and punishments to encourage UAVs to actively contribute to FL and to deter free riding under incomplete information. We formulate the interactions among UAVs as an evolutionary game, and an aspiration-driven rule is employed to model the UAVs' decision-making processes. We evaluate the proposed mechanism for UAVs within a DAG blockchain-based FL framework. Experimental results show that the proposed incentive mechanism substantially increases the average UAV contribution rate from 77.04±0.84% (without incentive mechanism) to 97.48±1.29%. Furthermore, the higher contribution rate results in an approximate 2.23% improvement in FL performance. Additionally, we evaluate the impact of different parameter configurations to analyze how they affect the performance and efficiency of the FL system.
(This article belongs to the Section Drone Communications)
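A toy sketch of an aspiration-driven strategy update of the kind the abstract mentions: a UAV keeps its current strategy (contribute or free-ride) when its payoff meets its aspiration and switches with a probability that grows with the shortfall. The sigmoid switching function and all numeric values are illustrative assumptions.

```python
# Illustrative only: aspiration-driven strategy switching for one UAV.
import math
import random

def switch_probability(payoff, aspiration, sensitivity=1.0):
    """Probability of abandoning the current strategy, sigmoid in the shortfall."""
    return 1.0 / (1.0 + math.exp(-sensitivity * (aspiration - payoff)))

def update_strategy(current, payoff, aspiration):
    if random.random() < switch_probability(payoff, aspiration):
        return "free-ride" if current == "contribute" else "contribute"
    return current

random.seed(1)
strategy = "free-ride"
for rnd in range(5):
    payoff = 0.9 if strategy == "contribute" else 0.4   # reward-and-punishment shaped payoff
    strategy = update_strategy(strategy, payoff, aspiration=0.7)
    print(rnd, strategy)
```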

24 pages, 1137 KB  
Article
Detecting TLS Protocol Anomalies Through Network Monitoring and Compliance Tools
by Diana Gratiela Berbecaru and Marco De Santo
Future Internet 2026, 18(1), 62; https://doi.org/10.3390/fi18010062 - 21 Jan 2026
Abstract
The Transport Layer Security (TLS) protocol is widely used nowadays to create secure communications over TCP/IP networks. Its purpose is to ensure confidentiality, authentication, and data integrity for messages exchanged between two endpoints. In order to facilitate its integration into widely used applications, the protocol is typically implemented through libraries, such as OpenSSL, BoringSSL, LibreSSL, WolfSSL, NSS, or mbedTLS. These libraries encompass functions that execute the specialized TLS handshake required for channel establishment, as well as the construction and processing of TLS records, and the procedures for closing the secure channel. However, these software libraries may contain vulnerabilities or errors that could potentially jeopardize the security of the TLS channel. To identify flaws or deviations from established standards within the implemented TLS code, a specialized tool known as TLS-Anvil can be utilized. This tool also verifies the compliance of TLS libraries with the specifications outlined in the Request for Comments documents published by the IETF. TLS-Anvil conducts numerous tests with a client/server configuration utilizing a specified TLS library and subsequently generates a report that details the number of successful tests. In this work, we exploit the results obtained from a selected subset of TLS-Anvil tests to generate rules used for anomaly detection in Suricata, a well-known signature-based Intrusion Detection System. During the tests, TLS-Anvil generates .pcap capture files that report all the messages exchanged. Such files can be subsequently analyzed with Wireshark, allowing for a detailed examination of the messages exchanged during the tests and a thorough understanding of their structure on a byte-by-byte basis. Through the analysis of the TLS handshake messages produced during testing, we develop customized Suricata rules aimed at detecting TLS anomalies that result from flawed implementations within the intercepted traffic. Furthermore, we describe the specific test environment established for the purpose of deriving and validating certain Suricata rules intended to identify anomalies in nodes utilizing a version of the OpenSSL library that does not conform to the TLS specification. The rules that delineate TLS deviations or potential attacks may subsequently be integrated into a threat detection platform supporting Suricata. This integration will enhance the capability to identify TLS anomalies arising from code that fails to adhere to the established specifications.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
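A minimal sketch of the plumbing for turning an observed deviation into a Suricata rule file; the placeholder rule below simply matches any TLS session and is purely illustrative, since the actual match conditions would come from the TLS-Anvil and Wireshark analysis described above. The file name and sid range are assumptions.

```python
# Illustrative only: write a placeholder Suricata rule to a local rule file.
# Real conditions on handshake fields (identified from the .pcap analysis)
# would replace the placeholder body of this rule.
rule = (
    'alert tls any any -> any any '
    '(msg:"Illustrative placeholder for a TLS anomaly rule"; '
    'sid:1000001; rev:1;)'
)

with open("custom-tls.rules", "w") as f:
    f.write(rule + "\n")

# The file can then be listed under rule-files in suricata.yaml so that
# Suricata loads it alongside the standard rule sets.
```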

17 pages, 783 KB  
Article
Hospital-Wide Sepsis Detection: A Machine Learning Model Based on Prospectively Expert-Validated Cohort
by Marcio Borges-Sa, Andres Giglio, Maria Aranda, Antonia Socias, Alberto del Castillo, Cristina Pruenza, Gonzalo Hernández, Sofía Cerdá, Lorenzo Socias, Victor Estrada, Roberto de la Rica, Elisa Martin and Ignacio Martin-Loeches
J. Clin. Med. 2026, 15(2), 855; https://doi.org/10.3390/jcm15020855 - 21 Jan 2026
Abstract
Background/Objectives: Sepsis detection remains challenging due to clinical heterogeneity and limitations of traditional scoring systems. This study developed and validated a hospital-wide machine learning model for sepsis detection using retrospectively developed data from prospectively expert-validated cases, aiming to improve diagnostic accuracy beyond conventional approaches. Methods: This retrospective cohort study analysed 218,715 hospital episodes (2014–2018) at a tertiary care centre. Sepsis cases (n = 11,864, 5.42%) were prospectively validated in real-time by a Multidisciplinary Sepsis Unit using modified Sepsis-2 criteria with organ dysfunction. The model integrated structured data (26.95%) and unstructured clinical notes (73.04%) extracted via natural language processing from 2829 variables, selecting 230 relevant predictors. Thirty models including random forests, support vector machines, neural networks, and gradient boosting were developed and evaluated. The dataset was randomly split (5/7 training, 2/7 testing) with preserved patient-level independence. Results: The BiAlert Sepsis model (random forest + Sepsis-2 ensemble) achieved an AUC-ROC of 0.95, sensitivity of 0.93, and specificity of 0.84, significantly outperforming traditional approaches. Compared to the best rule-based method (Sepsis-2 + qSOFA, AUC-ROC 0.90), BiAlert reduced false positives by 39.6% (13.10% vs. 21.70%, p < 0.01). Novel predictors included eosinopenia and hypoalbuminemia, while traditional variables (MAP, GCS, platelets) showed minimal univariate association. The model received European Medicines Agency approval as a medical device in June 2024. Conclusions: This hospital-wide machine learning model, trained on prospectively expert-validated cases and integrating extensive NLP-derived features, demonstrates superior sepsis detection performance compared to conventional scoring systems. External validation and prospective clinical impact studies are needed before widespread implementation.
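As a small illustration of the reported model family, the sketch below trains a random forest on an imbalanced synthetic dataset and evaluates it with AUC-ROC; the data, features, split, and hyperparameters are assumptions and bear no relation to the study's 230 predictors.

```python
# Illustrative only: random forest on a class-imbalanced toy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.95, 0.05],
                           random_state=0)                  # ~5% positives, roughly sepsis-like prevalence
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=2 / 7, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC-ROC on held-out samples: {auc:.3f}")
```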

23 pages, 21878 KB  
Article
STC-SORT: A Dynamic Spatio-Temporal Consistency Framework for Multi-Object Tracking in UAV Videos
by Ziang Ma, Chuanzhi Chen, Jinbao Chen and Yuhan Jiang
Appl. Sci. 2026, 16(2), 1062; https://doi.org/10.3390/app16021062 - 20 Jan 2026
Abstract
Multi-object tracking (MOT) in videos captured by Unmanned Aerial Vehicles (UAVs) is critically challenged by significant camera ego-motion, frequent occlusions, and complex object interactions. To address the limitations of conventional trackers that depend on static, rule-based association strategies, this paper introduces STC-SORT, a novel tracking framework whose core is a two-level reasoning architecture for data association. First, a Spatio-Temporal Consistency Graph Network (STC-GN) models inter-object relationships via graph attention to learn adaptive weights for fusing motion, appearance, and geometric cues. Second, these dynamic weights are integrated into a 4D association cost volume, enabling globally optimal matching across a temporal window. When integrated with an enhanced AEE-YOLO detector, STC-SORT achieves significant and statistically robust improvements on major UAV tracking benchmarks. It elevates MOTA by 13.0% on UAVDT and 6.5% on VisDrone, while boosting IDF1 by 9.7% and 9.9%, respectively. The framework also maintains real-time inference speed (75.5 FPS) and demonstrates substantial reductions in identity switches. These results validate STC-SORT as having strong potential for robust multi-object tracking in challenging UAV scenarios.
(This article belongs to the Section Aerospace Science and Engineering)
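A toy sketch of the cue-fusion idea behind the association step: motion, appearance, and geometric cost matrices are combined with weights (fixed and illustrative here, whereas the paper learns them per pair with the STC-GN) and then matched globally; the single-frame Hungarian matching below stands in for the paper's matching over a temporal window.

```python
# Illustrative only: weighted fusion of association cues plus optimal matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

motion_cost = np.array([[0.2, 0.9], [0.8, 0.3]])       # rows: tracks, cols: detections
appearance_cost = np.array([[0.1, 0.7], [0.9, 0.2]])
geometry_cost = np.array([[0.3, 0.8], [0.7, 0.4]])

weights = np.array([0.5, 0.3, 0.2])                     # illustrative fixed fusion weights
fused = (weights[0] * motion_cost +
         weights[1] * appearance_cost +
         weights[2] * geometry_cost)

rows, cols = linear_sum_assignment(fused)               # globally optimal track-detection matching
print(list(zip(rows.tolist(), cols.tolist())))
```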
