Search Results (4,202)

Search Parameters:
Keywords = computational decision-making

38 pages, 519 KB  
Review
Advancements in CO2 Capture and Storage: Technologies, Performance, and Strategic Pathways to Net-Zero by 2050
by Ahmed A. Bhran and Abeer M. Shoaib
Materials 2026, 19(8), 1497; https://doi.org/10.3390/ma19081497 - 8 Apr 2026
Abstract
Reaching net-zero by 2050 requires strong decarbonization policies, especially in hard-to-abate sectors such as steel (8% of global emissions), cement (7%), and power generation (30%), alongside negative emissions through direct air capture (DAC) and bioenergy with carbon capture and storage (BECCS). This review summarizes progress in CO2 capture, compression, transportation, and storage technologies between 2020 and 2025, including energy penalty (20–40%) and cost (15–30%) reductions, with innovations such as metal–organic frameworks (MOFs), bio-inspired catalysts, ionic liquids, and artificial intelligence (AI)-based optimization. As a new contribution to the carbon capture and storage (CCS) field, this paper uses the Weighted Sum Model (WSM) as a multi-criteria decision-making tool to rank the best technologies in the capture, storage, monitoring, and transportation sectors. The criteria weights are calculated from Shannon entropy, and the assessment is performed under three conditions, namely optimistic, pessimistic, and expected; sensitivity analysis of the weights makes the assessment robust. The viability of key projects, such as Northern Lights (Norway, 1.5 MtCO2/year), Porthos (The Netherlands, 2.5 MtCO2/year), Quest (Canada, 1 MtCO2/year), and Petra Nova (USA, 1.6 MtCO2/year), is evident, and it is projected that, globally, CCS will reach 49 MtCO2/year across 43 plants in 2025. The review incorporates socio-economic and environmental justice considerations, including barriers such as high costs ($30–600/tCO2), energy penalties (1–10 GJ/tCO2), and public opposition (20–40% in the EU/US). Compared with previous reviews, this article has a more comprehensive focus, provides quantitative synthesis through the WSM, and discusses implications for researchers, policymakers, and stakeholders towards faster CCS implementation on the path to net-zero. Full article
(This article belongs to the Section Energy Materials)
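The entropy-weighted WSM ranking this abstract describes can be sketched in a few lines. The decision matrix, criteria, and technology names below are purely illustrative placeholders, not the paper's data:

```python
import numpy as np

# Toy decision matrix: rows = candidate capture technologies (hypothetical),
# columns = benefit criteria (e.g. capture rate, maturity, cost-effectiveness).
X = np.array([
    [0.90, 0.7, 0.5],   # amine scrubbing (illustrative scores)
    [0.85, 0.4, 0.7],   # MOF adsorption
    [0.70, 0.3, 0.9],   # membrane separation
], dtype=float)

# 1) Column-normalize to proportions p_ij.
P = X / X.sum(axis=0)

# 2) Shannon entropy per criterion; m = number of alternatives.
m = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(m)

# 3) Degree of divergence -> entropy weights (criteria that discriminate
#    more between alternatives receive larger weights).
d = 1.0 - E
w = d / d.sum()

# 4) Weighted Sum Model: score each alternative on max-normalized criteria.
scores = (X / X.max(axis=0)) @ w
ranking = np.argsort(scores)[::-1]  # best alternative first
```

The same skeleton extends to the paper's optimistic/pessimistic/expected conditions by repeating steps 1–4 on three decision matrices.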
23 pages, 995 KB  
Article
Eye-Tracking Response Modeling and Design Optimization Method for Smart Home Interface Based on Transformer Attention Mechanism
by Yanping Lu and Myun Kim
Electronics 2026, 15(8), 1562; https://doi.org/10.3390/electronics15081562 - 8 Apr 2026
Abstract
To address redundant spatio-temporal modeling and insufficient adaptation to dynamic decision-making in eye-tracking interaction with smart home interfaces, an eye-tracking response optimization model based on a spatio-temporal Transformer and gated cross-attention is proposed. It adapts to the physiological characteristics of saccadic eye movements through dynamic sparse attention gating to compress computational redundancy, and combines multi-objective reinforcement-learning attention modulation to construct a closed-loop decision-making mechanism that optimizes interface parameters in real time. Experiments showed that the model reduced eye-tracking trajectory prediction error by 23.7% compared to advanced benchmarks, increased the success rate of adapting to dynamic mutation scenarios to 89.2%, and kept performance fluctuations within 2.3% under noise interference. In high-fidelity user testing, the accuracy of cross-task gaze transfer reached 93.4%, the failure rate under glare interference was reduced to 2.4%, and the user cognitive load index was reduced by 27.9%. Resource consumption and energy consumption were reduced by 26.7% and 44.9%, respectively, while posture deviation tolerance remained at 3.5°. The sparse spatio-temporal modeling of the spatio-temporal adaptive Transformer module and the enhanced gating mechanism of the hierarchical gated cross-attention module work together to overcome the limitations of traditional methods in computational efficiency and dynamic feedback, providing a high-precision, low-latency eye-tracking interaction solution for smart home interface systems and promoting the practical evolution of personalized human–machine collaborative control. Full article
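The gated cross-attention idea in this abstract can be sketched minimally: attention output is modulated by a sigmoid gate before a residual update. The gate parameterization, dimensions, and residual form below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(q, kv, wg, bg):
    """Minimal gated cross-attention: queries attend over key/value
    features, and a per-query sigmoid gate decides how much of the
    attended signal passes through (gate near 0 suppresses the update)."""
    d = q.shape[-1]
    attn = softmax(q @ kv.T / np.sqrt(d))        # (nq, nkv) attention weights
    ctx = attn @ kv                              # attended context
    gate = 1.0 / (1.0 + np.exp(-(q @ wg + bg)))  # scalar gate per query
    return q + gate[:, None] * ctx               # gated residual update

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # e.g. interface-element queries
kv = rng.standard_normal((6, 8))   # e.g. gaze-feature tokens
out = gated_cross_attention(q, kv, rng.standard_normal(8) * 0.1, 0.0)
```

With a strongly negative gate bias the update is suppressed and the queries pass through unchanged, which is the mechanism that lets such models skip redundant spatio-temporal context.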
34 pages, 3638 KB  
Article
Multi-Station UAV–UGV Cooperative Delivery Scheduling Problem with Temporally Discontinuous Service Availability Under Diverse Urban Scenarios
by Yinying Liu, Jianmeng Liu, Xin Shi and Cheng Tang
Drones 2026, 10(4), 269; https://doi.org/10.3390/drones10040269 - 8 Apr 2026
Abstract
Urban logistics systems face growing delivery demand and complex traffic and operational constraints, which make unmanned delivery carriers, including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), a promising solution. Existing studies typically focus on a single delivery carrier type and rely on idealized assumptions, overlooking heterogeneous cooperation under multiple stations, multiple time windows, and real-world transport conditions. To address these gaps, we propose the Multi-Station UAV–UGV Cooperative Delivery Scheduling Problem with Temporally Discontinuous Service Availability (MSUUCDSP) to minimize the total travel and waiting time of UAVs and UGVs. To solve the problem, we propose a mixed-integer linear programming (MILP) model with a novel mathematical approach and a Hybrid Large Neighborhood Search (HLNS) algorithm. Additionally, we adopt a Hidden Markov Model (HMM)-based map-matching method and big data techniques to capture realistic operational characteristics. Computational experiments are conducted on various realistic instances under four diverse scenarios. Results show that UAV–UGV cooperation significantly improves efficiency, reducing total time cost by 17.12% compared with single-mode delivery, and they reveal substantial discrepancies between idealized assumptions and realistic scenarios. We further develop an ArcGIS-based simulation to support practical implementation. The findings provide valuable insights for decision-making and engineering applications for logistics operators. Full article
(This article belongs to the Special Issue Advances in Drone Applications for Last-Mile Delivery Operations)
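The Large Neighborhood Search idea underlying the HLNS algorithm can be sketched as a destroy-and-repair loop on a toy UAV/UGV assignment. All numbers and parcel names are hypothetical; the toy has no coupling constraints, so greedy repair is already optimal here — in the real MSUUCDSP, stations, time windows, and service-availability constraints make the destroy/repair loop do the actual work:

```python
import random

# Toy instance: service time of each parcel if carried by a UAV vs. a UGV.
times = {"p1": (4, 9), "p2": (7, 3), "p3": (5, 5), "p4": (2, 8), "p5": (6, 4)}

def cost(assign):
    return sum(times[p][0] if c == "uav" else times[p][1]
               for p, c in assign.items())

def greedy_repair(assign, removed):
    for p in removed:                       # reinsert each parcel on its
        uav_t, ugv_t = times[p]             # cheaper carrier
        assign[p] = "uav" if uav_t <= ugv_t else "ugv"
    return assign

def lns(iters=100, seed=1):
    rng = random.Random(seed)
    best = greedy_repair({}, list(times))   # initial solution
    for _ in range(iters):
        cand = dict(best)
        removed = rng.sample(list(times), 2)    # destroy: drop 2 parcels
        for p in removed:
            del cand[p]
        cand = greedy_repair(cand, removed)     # repair: greedy reinsert
        if cost(cand) < cost(best):
            best = cand
    return best, cost(best)
```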
15 pages, 1103 KB  
Article
Multi-Output Probabilistic Prediction of Drug Side Effects Using Classical Machine Learning Algorithms
by Diego Quiguango Farias, Juan Sarasti Espejo, Marlene Arce Salcedo and Byron Velasquez Ron
Pharmaceuticals 2026, 19(4), 595; https://doi.org/10.3390/ph19040595 - 8 Apr 2026
Abstract
Introduction: Drug side effects are a relevant problem for patient safety and public health, and traditional methods have limitations in capturing complex patterns between clinical and pharmacological variables. Objective: To evaluate machine learning models to probabilistically predict multiple side effects associated with drug use. Materials and methods: A cross-sectional computational study was carried out with data from 1000 medications that included clinical condition, dosage and duration of treatment. Random Forest, Decision Tree, Support Vector Classifier and KNN were trained and optimized using Grid Search and an 80:20 split for training and testing. Chi-square tests and Principal Component Analysis were applied to explore associations and overlap between categories. Results: Significant associations were found between side effects and clinical condition (p < 0.05) and the drug administered (p < 0.05). The PCA showed a high overlap between categories, which justified a probabilistic approach. Tree-based models showed better performance (accuracy ≈ 0.35). Conclusions: Prediction of side effects is a multifactorial and non-deterministic problem; probabilistic machine learning models allow for estimating several plausible adverse events and can support clinical decision-making and pharmacovigilance. Full article
(This article belongs to the Section Biopharmaceuticals)
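The probabilistic framing of this study — estimating a distribution over plausible side effects rather than a single label — can be illustrated with plain counting. The records and drug/effect names below are invented placeholders; the paper itself trains Random Forest, Decision Tree, SVC, and KNN models on real data:

```python
from collections import Counter, defaultdict

# Toy records (hypothetical): (clinical condition, drug) -> observed side effect.
records = [
    ("hypertension", "drugA", "dizziness"),
    ("hypertension", "drugA", "dizziness"),
    ("hypertension", "drugA", "nausea"),
    ("diabetes", "drugB", "nausea"),
    ("diabetes", "drugB", "headache"),
]

# Estimate P(side effect | condition, drug) from co-occurrence counts --
# the same multi-output probabilistic framing, with counting standing in
# for the trained classifiers' predict_proba outputs.
counts = defaultdict(Counter)
for cond, drug, effect in records:
    counts[(cond, drug)][effect] += 1

def predict_proba(cond, drug):
    c = counts[(cond, drug)]
    total = sum(c.values())
    return {eff: n / total for eff, n in c.items()}
```

Because several effects get nonzero probability for the same input, the output naturally expresses the "multiple plausible adverse events" the conclusions describe.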
29 pages, 4375 KB  
Article
Application of AI in Tablet Development: An Integrated Machine Learning Framework for Pre-Formulation Property Prediction
by Masugu Hamaguchi, Tomoki Adachi and Noriyoshi Arai
Pharmaceutics 2026, 18(4), 452; https://doi.org/10.3390/pharmaceutics18040452 - 8 Apr 2026
Abstract
Background/Objectives: Tablet development requires simultaneous optimization of multiple quality attributes under limited experimental budgets, yet formulation–property relationships are highly nonlinear in mixture systems. To support pre-formulation decision-making prior to extensive tablet prototyping, this study proposes an AI framework that organizes formulation and process data together with raw-material property records into a reusable database, and enriches conventional composition/process features with physically motivated mixture descriptors derived from raw-material properties and formulation/process settings. Methods: Mixture-level scalar descriptors are constructed by composition-weighted aggregation of material properties, and particle size distribution (PSD) is incorporated via a compact set of summary statistics computed from composition-weighted mixture PSDs. Three feature sets are compared: (i) Materials + Processes (MP), (ii) MP with scalar Descriptors (MPD), and (iii) MPD with PSD summaries (MPDD). Five target properties are modeled: hardness, disintegration time, flow function, cohesion, and thickness. We train and evaluate Random Forest, Extra Trees Regressor, Lasso, Partial Least Squares, Support Vector Regression, and a multi-branch neural network that processes the three feature blocks separately and concatenates them for prediction. For interpolation assessment, repeated Train/Dev/Test splitting (5:3:2) across multiple random seeds is used, and the effect of feature augmentation is quantified by paired RMSE improvements with bootstrap confidence intervals and paired Wilcoxon signed-rank tests. To assess robustness under practical formulation updates, rolling-origin time-series splits are employed and Applicability Domain indicators are computed to characterize out-of-distribution coverage. 
Results: Across interpolation evaluations, mixture-descriptor augmentation (MPD/MPDD) improves hardness and disintegration time in most settings, whereas gains for flow function are smaller and cohesion/thickness show mixed effects under limited sample sizes. Conclusions: Under extrapolation-oriented evaluation, the descriptors can improve hardness but may degrade disintegration-time prediction under covariate shift, emphasizing the need for careful descriptor selection and dimensionality control when deploying pre-formulation predictors. Full article
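The composition-weighted aggregation step described in the Methods is a simple weighted sum of raw-material properties by mass fraction. The property values and material names below are illustrative assumptions, not the paper's database:

```python
import numpy as np

# Hypothetical raw-material property table: rows = materials,
# columns = properties (e.g. true density g/cm^3, mean particle size um).
props = np.array([
    [1.54, 180.0],   # lactose (illustrative values)
    [1.52,  60.0],   # microcrystalline cellulose
    [1.30,  20.0],   # API
])

# Formulation composition as mass fractions (must sum to 1).
w = np.array([0.60, 0.35, 0.05])

# Composition-weighted aggregation: one mixture-level scalar descriptor
# per property, used to enrich the Materials + Processes feature set.
mixture_descriptors = w @ props
```

The PSD summary statistics in the paper follow the same pattern: composition-weight the per-material distributions, then reduce the mixture distribution to a compact set of scalars.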
25 pages, 1501 KB  
Article
MA-JTATO: Multi-Agent Joint Task Association and Trajectory Optimization in UAV-Assisted Edge Computing System
by Yunxi Zhang and Zhigang Wen
Drones 2026, 10(4), 267; https://doi.org/10.3390/drones10040267 - 7 Apr 2026
Abstract
With the rapid development of applications such as smart cities and the industrial internet, the computation-intensive tasks generated by massive sensing devices pose significant challenges to traditional cloud computing paradigms. Unmanned aerial vehicle (UAV)-assisted edge computing systems, leveraging their high mobility and wide-area coverage capabilities, offer an innovative architecture for low-latency and highly reliable edge services. However, the practical deployment of such systems faces a highly complex multi-objective optimization problem characterized by the tight coupling of task offloading decisions, UAV trajectory planning, and edge server resource allocation. Conventional optimization methods struggle to adapt to the dynamic and high-dimensional characteristics of this problem, leading to suboptimal system performance. To address this critical challenge, this paper constructs an intelligent collaborative optimization framework for UAV-assisted edge computing systems and formulates the system quality of service (QoS) optimization problem as a mixed-integer non-convex programming problem with the dual objectives of minimizing task processing latency and reducing overall system energy consumption. A multi-agent joint task association and trajectory optimization (MA-JTATO) algorithm based on hybrid reinforcement learning is proposed to solve this intractable problem, which innovatively decouples the original coupled optimization problem into three interrelated subproblems and realizes their collaborative and efficient solution. 
Specifically, the Advantage Actor-Critic (A2C) algorithm is adopted to realize dynamic and optimal task association between UAVs and edge servers for discrete decision-making requirements; the multi-agent deep deterministic policy gradient (MADDPG) method is employed to achieve cooperative and energy-efficient trajectory planning for multiple UAVs to meet the needs of continuous control in dynamic environments; and convex optimization theory is applied to obtain a closed-form optimal solution for the efficient allocation of computational resources on edge servers. Simulation results demonstrate that the proposed MA-JTATO algorithm significantly outperforms traditional baseline algorithms in enhancing overall QoS, effectively validating the framework’s superior performance and robustness in dynamic and complex scenarios. Full article
(This article belongs to the Section Drone Communications)
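The closed-form resource-allocation subproblem mentioned in this abstract has a classic shape worth showing: splitting a total CPU frequency among tasks to minimize total latency. The latency model sum(c_k / f_k) below is an assumption chosen for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def allocate(c, F):
    """Closed-form split of total edge CPU frequency F among tasks with
    computation loads c_k, minimizing total latency sum_k c_k / f_k
    subject to sum_k f_k = F. Lagrangian stationarity
    (-c_k / f_k**2 + lam = 0) gives f_k proportional to sqrt(c_k)."""
    s = np.sqrt(np.asarray(c, dtype=float))
    return F * s / s.sum()

c = [1.0, 4.0, 9.0]       # task workloads (cycles, illustrative)
f = allocate(c, F=12.0)   # allocation proportional to [1, 2, 3]
```

The square-root rule beats an equal split whenever workloads are unbalanced, which is why such subproblems are solved analytically inside learning-based frameworks rather than by the RL agents themselves.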
24 pages, 988 KB  
Article
An Improved Tracklet Generation Approach for Radar Maneuvering Target Tracking
by Songyao Dou, Ying Chen and Yaobing Lu
Electronics 2026, 15(7), 1538; https://doi.org/10.3390/electronics15071538 - 7 Apr 2026
Abstract
Aiming to improve radar multi-target tracking (MTT) accuracy and association performance in complex scenarios involving dense clutter, missed detections, and maneuvering targets, an improved tracklet generation approach based on the expectation–maximization (EM) framework is proposed in which data association variables and motion model variables are jointly modeled as latent variables. These variables are estimated through iterative updates based on the loopy belief propagation (LBP) algorithm and the interacting multiple model (IMM) filtering and smoothing algorithms to generate high-confidence tracklets. Then, a delayed decision-making strategy based on the multi-hypothesis approach is employed to associate these tracklets into complete target trajectories. The resulting algorithm is named IMM-TrackletMHT. The performance of the IMM-TrackletMHT algorithm is evaluated and compared with several baseline algorithms in simulated scenarios under different clutter rates and detection probabilities. The simulation results demonstrate that the proposed algorithm consistently outperforms the baseline methods in terms of tracking accuracy, exhibits strong robustness to variations in the operating environment, and achieves higher computational efficiency in multi-scan measurement processing, thereby demonstrating the effectiveness and superiority of the proposed tracklet generation approach for maneuvering MTT. Full article
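The IMM component of this approach maintains a probability over motion models and updates it from measurement likelihoods. A minimal sketch of that model-probability update, with invented transition probabilities and likelihoods:

```python
import numpy as np

# Two motion models (e.g. constant velocity vs. coordinated turn) with a
# Markov switching matrix: Pi[i, j] = P(model j at k | model i at k-1).
Pi = np.array([[0.95, 0.05],
               [0.10, 0.90]])
mu = np.array([0.5, 0.5])       # prior model probabilities

def imm_update(mu, Pi, likelihoods):
    """Propagate model probabilities through the Markov chain, then
    reweight by each model's measurement likelihood and renormalize."""
    pred = Pi.T @ mu
    post = pred * likelihoods
    return post / post.sum()

# A maneuver-consistent measurement favors model 2:
mu = imm_update(mu, Pi, likelihoods=np.array([0.2, 0.8]))
```

In the paper this update runs inside an EM loop alongside loopy belief propagation over data-association variables; the sketch shows only the motion-model half of the latent-variable estimation.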
20 pages, 860 KB  
Article
Two-Stage Robust Optimization for Coupled Multi-Agent Task Allocation in Disaster Response Under Demand Uncertainty
by Chenxi Duan, Chongshuang Hu, Minghao Li and Jiang Jiang
Systems 2026, 14(4), 405; https://doi.org/10.3390/systems14040405 - 7 Apr 2026
Abstract
Multi-agent systems (MASs), with unmanned aerial vehicles (UAVs) as a representative embodiment, have become increasingly vital in time-sensitive disaster response scenarios, where multiple agents must collaborate to execute “observe-and-intervene” emergency tasks and jointly cope with dynamic environmental uncertainties. Existing research on task allocation mostly eliminates uncertainty through deterministic models; the few studies that directly consider uncertainty focus primarily on time uncertainty, overlooking the critical importance of demand uncertainty. To this end, this study accounts for the impact of harsh environmental conditions and incident complexity factors on intervention resource demands. We establish an uncertainty set for these demands and construct a two-stage robust optimization model to solve the coupled multi-agent task allocation problem. Compared with deterministic models, this framework enhances risk resistance while simultaneously reducing the conservatism of decisions. Furthermore, to overcome the computational challenges of large-scale instances, a Learning-Enhanced Column and Constraint Generation (LE-C&CG) algorithm is proposed. Experimental results demonstrate that LE-C&CG converges over an order of magnitude faster than standard Benders and C&CG algorithms, consistently achieving a 0% optimality gap within fractions of a second, making it highly suitable for time-critical emergency applications. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
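The demand uncertainty set in two-stage robust models of this kind is often budgeted: at most Γ demand points deviate from their nominal values. A sketch of the resulting worst-case demand (the adversarial subproblem that a C&CG loop solves repeatedly), with invented numbers and assuming cost increases in demand:

```python
import numpy as np

def worst_case_demand(nominal, deviation, gamma):
    """Worst case of a budgeted uncertainty set: at most `gamma` demand
    points deviate upward by their full deviation. For a cost that is
    increasing in demand, the adversary spends the budget on the
    largest deviations."""
    d = np.array(nominal, dtype=float)
    worst_idx = np.argsort(deviation)[::-1][:gamma]
    d[worst_idx] += np.asarray(deviation, dtype=float)[worst_idx]
    return d

nominal = [10.0, 20.0, 15.0, 8.0]    # illustrative intervention demands
deviation = [2.0, 6.0, 4.0, 1.0]     # maximum deviations per site
d = worst_case_demand(nominal, deviation, gamma=2)   # budget Γ = 2
```

Tuning Γ is what trades robustness against conservatism: Γ = 0 recovers the deterministic model, Γ = n the fully pessimistic one.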
20 pages, 1234 KB  
Article
Lightweight Real-Time Navigation for Autonomous Driving Using TinyML and Few-Shot Learning
by Wajahat Ali, Arshad Iqbal, Abdul Wadood, Herie Park and Byung O Kang
Sensors 2026, 26(7), 2271; https://doi.org/10.3390/s26072271 - 7 Apr 2026
Abstract
Autonomous vehicle navigation requires low-latency and energy-efficient machine learning models capable of operating in dynamic and resource-constrained environments. Conventional deep learning approaches are often unsuitable for real-time deployment on embedded edge devices due to their high computational and memory demands. In this work, we propose a unified TinyML-optimized navigation framework that integrates a lightweight convolutional feature extractor (MobileNetV2) with a metric-based few-shot learning classifier to enable rapid adaptation to unseen driving scenarios with minimal data. The proposed framework jointly combines feature extraction, few-shot generalization, and edge-aware optimization into a single end-to-end pipeline designed specifically for real-time autonomous decision-making. Furthermore, post-training quantization and structured pruning are employed to significantly reduce the memory footprint and inference latency while preserving the classification performance. Experimental results demonstrate that the proposed model achieved a 93.4% accuracy on previously unseen road conditions, with an average inference latency of 68 ms and a memory usage of 18 MB, outperforming traditional CNN and LSTM models in efficiency while maintaining a competitive predictive performance. These results highlight the effectiveness of the proposed approach in enabling scalable, real-time navigation on low-power edge devices. Full article
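The post-training quantization step this abstract relies on can be sketched directly: map float32 weights to int8 with a single scale factor, cutting memory roughly 4x at the cost of a bounded rounding error. This is a generic symmetric-quantization sketch, not the paper's specific TinyML toolchain:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8:
    store one float scale plus int8 values, dequantize on the fly at
    inference time."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()   # bounded by scale / 2
```

Structured pruning composes with this: zeroed channels shrink the tensor before quantization, compounding the memory and latency savings the paper reports.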
18 pages, 535 KB  
Review
Artificial Intelligence in Intraoperative Imaging and Navigation for Spine Surgery: A Narrative Review
by Mina Girgis, Allison Kelliher, Michael S. Pheasant, Alex Tang, Siddharth Badve and Tan Chen
J. Clin. Med. 2026, 15(7), 2779; https://doi.org/10.3390/jcm15072779 - 7 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly transforming spine surgery, with expanding applications in diagnostics, intraoperative imaging, and surgical navigation. As the field advances toward greater precision and safety, machine learning (ML) and deep learning technologies are being integrated to augment surgeon expertise and optimize operative workflows. In particular, AI-driven innovations in image acquisition and navigation are reshaping intraoperative decision-making and technical execution. This narrative review provides an overview of AI applications relevant to intraoperative imaging and navigation in spine surgery. We begin by defining key concepts in AI, ML, and deep learning and briefly outline the historical evolution of AI within spine practice. We then examine current capabilities in image recognition and automated pathology detection, emphasizing their clinical relevance. Given the central role of imaging accuracy in modern navigation-assisted procedures, we review conventional acquisition platforms, including intraoperative computed tomography (CT) systems (e.g., O-arm, GE, Airo), surface-based registration to preoperative CT (Stryker, Medtronic), and optical surface mapping technologies (e.g., 7D Surgical). Emerging AI-optimized advancements are subsequently discussed, including low-dose intraoperative CT protocols, expanded scan windows, metal artifact reduction algorithms, integration of 2D fluoroscopy with preoperative CT datasets, and 3D reconstruction derived from 2D imaging. These developments aim to improve image quality, reduce radiation exposure, and enhance navigational accuracy. By synthesizing current evidence and technological progress, this review highlights how AI-enhanced imaging systems are redefining intraoperative spine surgery and shaping the future of precision-based care. 
The primary purpose of this review is to outline the applications of AI and its potential for perioperative and intraoperative optimization, including radiation exposure reduction, workflow streamlining, preoperative planning, robot-assisted surgery, and navigation. The secondary purpose is to define AI, machine learning, and deep learning within the medical context, describe image and pathology recognition, and provide a historical overview of AI in orthopedic spine surgery. Full article
(This article belongs to the Special Issue Spine Surgery: Current Practice and Future Directions)
39 pages, 1002 KB  
Review
Patient and Healthcare Provider Barriers in the LDCT Lung Cancer Screening Continuum
by Rodica Anghel, Antonia-Ruxandra Folea, Vlad-Luca Moga, Cristian Pavel, Diana Troncotă, Corneliu-Octavian Dumitru, Andreea-Iren Șerban and Liviu Bîlteanu
Diagnostics 2026, 16(7), 1092; https://doi.org/10.3390/diagnostics16071092 - 4 Apr 2026
Abstract
Background/Objectives: Despite demonstrated mortality benefits, annual low-dose computed tomography (LDCT) screening faces challenges in real-world adoption due to low uptake and poor longitudinal adherence. This review evaluates patient- and provider-level factors that influence screening participation and highlights strategies to strengthen equitable engagement throughout the screening pathway. Methods: A structured literature search of PubMed and Web of Science was performed to identify studies published between 2013 and November 2025 (search conducted on 25 November 2025). Eligible publications included qualitative and quantitative studies, study protocols, and reviews examining LDCT screening uptake, adherence, and follow-up practices. Extracted evidence was synthesized, with particular attention being paid to patient- and provider-level determinants. Results: The evidence demonstrates that both patient- and provider-level factors substantially influence screening participation and continuity. At the patient level, limited awareness of screening, misconceptions regarding asymptomatic disease, and psychosocial factors such as fear, fatalism, stigma, and medical mistrust were consistently associated with reduced uptake and adherence. At the provider level, gaps in guideline familiarity, time constraints, and challenges in delivering high-quality shared decision-making limited referrals and follow-up. Conclusions: Improving real-world effectiveness of LDCT lung cancer screening requires reframing screening as a longitudinal program of care. Strategies that support patient navigation, enhance provider capacity for sustained engagement, and integrate tobacco dependence treatment into screening pathways are central to improving adherence and reducing disparities. Full article
(This article belongs to the Special Issue Lung Cancer: Screening, Diagnosis and Survival Outcomes)
40 pages, 6859 KB  
Article
Safe Cooperative Decision-Making for Multi-UAV Pursuit–Evasion Games via Opponent Intent Inference
by Wenxin Li, Yongxin Feng and Wenbo Zhang
Sensors 2026, 26(7), 2243; https://doi.org/10.3390/s26072243 - 4 Apr 2026
Abstract
Cooperative multi-UAV pursuit–evasion under occlusions and sensor noise is challenged by intermittent observability of the evader, varying observation-window lengths, and non-stationary evader tactics, all of which destabilize prediction and undermine safety-constrained cooperation. To address these challenges, we propose a safe decision-making framework that uses behavior mode and subgoal inference as intermediate representations for interpretable, uncertainty-aware cooperation. Specifically, an observation-driven generative intent–subgoal model infers the evader’s behavior mode and subgoal from short observation windows. Building on this model, a length-agnostic trajectory predictor is trained via multi-window knowledge distillation and consistency regularization to produce future trajectory predictions with calibrated uncertainty for arbitrary observation-window lengths, thereby reducing cross-window inference inconsistency and lowering online computational cost. Based on these predictions, we derive belief and risk features and develop a belief–risk-gated hierarchical multi-agent policy based on soft actor-critic with a safety projection layer, enabling adaptive strategy switching and a controllable trade-off between efficiency and safety. Experiments in obstacle-rich pursuit–evasion environments with randomized layouts and diverse obstacle configurations demonstrate more stable cooperative capture, safer maneuvering, and lower decision variance than representative baselines, indicating strong robustness and real-time feasibility. Specifically, across different observation-window settings, the proposed method improves the normalized expected return by approximately 5–7% over the strongest baseline and reduces pursuer losses by roughly 22–25%. Moreover, its end-to-end decision latency consistently remains within the 50 ms control cycle. Full article
(This article belongs to the Section Sensors and Robotics)
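The safety projection layer mentioned in this abstract can be illustrated in its simplest form: projecting a raw policy action onto a halfspace safety constraint. The single linear constraint below is an assumption for illustration; the paper's projection handles richer belief- and risk-derived constraints:

```python
import numpy as np

def project_to_halfspace(a, g, h):
    """Project a raw policy action a onto the safe set {x : g.x <= h}
    (e.g. a linearized collision-avoidance constraint). An already-safe
    action is returned unchanged; otherwise the minimal-norm correction
    along g is applied."""
    g = np.asarray(g, dtype=float)
    slack = g @ a - h
    if slack <= 0:
        return np.asarray(a, dtype=float)
    return a - slack * g / (g @ g)

a_safe = project_to_halfspace(np.array([2.0, 0.0]), g=[1.0, 0.0], h=1.0)
```

Because the projection only alters unsafe actions, it leaves the learned policy's behavior intact inside the safe set, which is what makes the efficiency/safety trade-off controllable.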
52 pages, 14386 KB  
Review
Trustworthy Intelligence: Split Learning–Embedded Large Language Models for Smart IoT Healthcare Systems
by Mahbuba Ferdowsi, Nour Moustafa, Marwa Keshk and Benjamin Turnbull
Electronics 2026, 15(7), 1519; https://doi.org/10.3390/electronics15071519 - 4 Apr 2026
Abstract
The Internet of Things (IoT) plays an increasingly central role in healthcare by enabling continuous patient monitoring, remote diagnosis, and data-driven clinical decision-making through interconnected medical devices and sensing infrastructures. Despite these advances, IoT healthcare systems remain constrained by persistent challenges related to data privacy, computational efficiency, scalability, and regulatory compliance. Federated learning (FL) reduces reliance on centralised data aggregation but remains vulnerable to inference-based privacy risks, while edge-oriented approaches are limited by device heterogeneity and restricted computational and energy resources; the deployment of large language models (LLMs) further exacerbates concerns surrounding privacy exposure, communication overhead, and practical feasibility. This study introduces Trustworthy Intelligence (TI) as a guiding framework for privacy-preserving distributed intelligence in IoT healthcare, explicitly integrating predictive performance, privacy protection, and deployment-oriented system design. Within this framework, split learning (SL) is examined as a core architectural mechanism and extended to support split-aware LLM integration across heterogeneous devices, supported by a structured taxonomy spanning architectural configurations, system adaptation strategies, and evaluation considerations. The study establishes a systematic mapping between SL design choices and representative healthcare scenarios, including wearable monitoring, multi-modal data fusion, clinical text analytics, and cross-institutional collaboration, and analyses key technical challenges such as activation-level privacy leakage, early-round vulnerability, reconstruction risks, and communication–computation trade-offs. An energy- and resource-aware adaptive cut layer selection strategy is outlined to support efficient deployment across devices with varying capabilities. 
A proof-of-concept experimental evaluation compares the proposed SL–LLM framework with centralised learning (CL), federated learning (FL), and conventional SL in terms of training latency, communication overhead, model accuracy, and privacy exposure under realistic IoT constraints, providing system-level evidence for the applicability of the TI framework in distributed healthcare environments and outlining directions for clinically viable and regulation-aligned IoT healthcare intelligence. Full article
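The core split-learning mechanism the abstract builds on can be summarised in a few lines: the client device runs the model only up to a chosen "cut layer" and transmits the intermediate ("smashed") activations, while the server completes the forward pass, so raw data never leaves the device. The layer sizes and cut position below are illustrative assumptions.

```python
import random

def linear(w, b, x):
    """Dense layer with ReLU: y[i] = max(0, sum_j w[i][j]*x[j] + b[i])."""
    return [max(0.0, sum(wi[j] * x[j] for j in range(len(x))) + bi)
            for wi, bi in zip(w, b)]

class SplitModel:
    def __init__(self, sizes, cut):
        rnd = random.Random(0)
        self.layers = [
            ([[rnd.uniform(-0.1, 0.1) for _ in range(n_in)]
              for _ in range(n_out)],
             [0.0] * n_out)
            for n_in, n_out in zip(sizes, sizes[1:])
        ]
        self.cut = cut  # index of the cut layer

    def client_forward(self, x):
        """Runs on the IoT device; raw data never leaves it."""
        for w, b in self.layers[:self.cut]:
            x = linear(w, b, x)
        return x  # "smashed" activations sent to the server

    def server_forward(self, smashed):
        """Runs on the server; sees activations, not raw inputs."""
        x = smashed
        for w, b in self.layers[self.cut:]:
            x = linear(w, b, x)
        return x

model = SplitModel(sizes=[8, 16, 16, 4], cut=1)
smashed = model.client_forward([0.5] * 8)
out = model.server_forward(smashed)
```

Moving `cut` deeper keeps more computation on the device and reduces what the server can infer from the activations, at the cost of on-device compute and energy, which is exactly the trade-off an adaptive cut-layer selection strategy navigates.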
27 pages, 1577 KB  
Article
An Intelligent Fuzzy Protocol with Automated Optimization for Energy-Efficient Electric Vehicle Communication in Vehicular Ad Hoc Network-Based Smart Transportation Systems
by Ghassan Samara, Ibrahim Obeidat, Mahmoud Odeh and Raed Alazaidah
World Electr. Veh. J. 2026, 17(4), 191; https://doi.org/10.3390/wevj17040191 - 4 Apr 2026
Abstract
Vehicular ad hoc networks (VANETs) operating in dense urban environments are characterized by highly dynamic topology, fluctuating traffic conditions, and stringent latency requirements, which significantly complicate reliable data routing and packet forwarding. To address these challenges, this paper proposes an Intelligent Fuzzy Protocol (IFP) for adaptive vehicle-to-vehicle data routing under uncertain and rapidly changing traffic scenarios. The proposed protocol integrates fuzzy logic decision making with the real-time vehicular context, including vehicle velocity, traffic congestion level, distance to road junctions, and data urgency, to dynamically select appropriate forwarding actions. IFP employs a structured fuzzy inference engine comprising fuzzification, rule evaluation, inference aggregation, and centroid-based defuzzification to determine routing and forwarding decisions in a decentralized manner. To further enhance performance robustness, the fuzzy membership parameters and rule weights are optimized using metaheuristic techniques, namely, genetic algorithms (GAs) and particle swarm optimization (PSO). Extensive simulations are conducted using NS-3 coupled with SUMO under realistic urban mobility scenarios and varying network densities. The simulation results demonstrate that IFP significantly outperforms conventional routing approaches in terms of end-to-end delay, packet delivery ratio, and routing overhead. In particular, the optimized IFP variants achieve notable reductions in latency and improvements in delivery reliability under high-congestion conditions, while maintaining low computational and communication overhead. These findings confirm that IFP offers an interpretable, scalable, and energy-aware routing solution suitable for large-scale intelligent transportation systems and next-generation vehicular networks. Full article
(This article belongs to the Special Issue Power and Energy Systems for E-Mobility, 2nd Edition)
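The fuzzy pipeline the abstract names (fuzzification, rule evaluation, aggregation, centroid defuzzification) can be sketched compactly. The membership shapes, rule base, and variable ranges below are invented for illustration; the paper's actual parameters are GA/PSO-optimized and not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def centroid(universe, membership):
    """Centroid defuzzification over a sampled output universe."""
    num = sum(u * m for u, m in zip(universe, membership))
    den = sum(membership)
    return num / den if den else 0.0

def forwarding_urgency(speed, congestion):
    # Fuzzification: crisp inputs -> membership degrees.
    slow  = tri(speed, -1, 0, 60)
    fast  = tri(speed, 40, 100, 161)
    light = tri(congestion, -0.1, 0.0, 0.6)
    heavy = tri(congestion, 0.4, 1.0, 1.1)
    # Rule evaluation with min() as fuzzy AND:
    #   IF fast AND heavy THEN urgency HIGH
    #   IF slow AND light THEN urgency LOW
    high = min(fast, heavy)
    low  = min(slow, light)
    # Aggregation: clip each output set by its rule strength, take max.
    universe = [i / 100 for i in range(101)]
    agg = [max(min(tri(u, 0.5, 1.0, 1.5), high),
               min(tri(u, -0.5, 0.0, 0.5), low)) for u in universe]
    # Centroid defuzzification yields a crisp urgency in [0, 1].
    return centroid(universe, agg)
```

A metaheuristic optimizer would then tune the `(a, b, c)` breakpoints and per-rule weights against a simulation objective such as delivery ratio or delay.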
43 pages, 1881 KB  
Article
Cognitive ZTNA: A Neuro-Symbolic AI Approach for Adaptive and Explainable Zero Trust Access Control
by Ahmed Alzahrani
Mathematics 2026, 14(7), 1211; https://doi.org/10.3390/math14071211 - 3 Apr 2026
Abstract
Zero Trust Network Access (ZTNA) has emerged as a fundamental paradigm for securing cloud-native and distributed computing environments. However, existing ZTNA implementations remain largely limited by static policy enforcement and opaque machine-learning-based anomaly detection mechanisms, which often lack contextual adaptability, policy awareness, and interpretable decision-making capabilities. These limitations create significant challenges in dynamic multi-cloud environments where access behavior continuously evolves and security decisions must be both accurate and explainable. To address these challenges, this study proposes the Cognitive ZTNA framework, a unified neuro-symbolic trust enforcement framework that integrates transformer-based behavioral trust modeling with ontology-guided symbolic reasoning. The proposed architecture enables continuous trust evaluation by combining behavioral access patterns with explicit policy semantics through a hybrid trust fusion mechanism. This design allows the system to capture long-range behavioral dependencies while maintaining policy-compliant and interpretable access control decisions. The framework is evaluated using the CloudZT-Bench-2025 dataset, comprising 4.2 million cross-platform access events derived from enterprise security telemetry, AWS CloudTrail logs, and simulated adversarial scenarios. Experimental results demonstrate that Cognitive ZTNA achieves Precision = 0.96, Recall = 0.93, and F1-score = 0.95, significantly outperforming rule-based and machine-learning baselines while reducing the false positive rate to 0.03. In addition, the system maintains real-time feasibility with an average decision latency of 24 ms and explanation latency below 5 ms, while achieving 92% analyst-rated explanation sufficiency. These findings demonstrate that integrating behavioral intelligence with symbolic policy reasoning enables adaptive, interpretable, and policy-aware Zero Trust enforcement. 
The proposed framework therefore provides a practical foundation for next-generation ZTNA systems capable of supporting secure, transparent, and context-aware access control in modern cloud environments. Full article
(This article belongs to the Special Issue New Advances in Network Security and Data Privacy)
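A toy sketch of the "hybrid trust fusion" idea described above: a behavioral trust score (a stand-in for the transformer model's output) is blended with a symbolic policy check, and access is granted only when the fused score clears a threshold and no rule fires. The rules, weights, threshold, and request fields below are invented for illustration, not the paper's ontology.

```python
# Symbolic policy rules: (name, predicate over an access request).
POLICY_RULES = [
    ("deny_unmanaged_admin",
     lambda req: req["role"] == "admin" and not req["managed_device"]),
    ("deny_off_hours_export",
     lambda req: req["action"] == "export" and req["off_hours"]),
]

def fuse_trust(behavioral_score, req, w_behav=0.7, w_policy=0.3,
               threshold=0.6):
    """Fuse a neural trust score with a symbolic policy verdict."""
    violations = [name for name, rule in POLICY_RULES if rule(req)]
    policy_score = 0.0 if violations else 1.0
    fused = w_behav * behavioral_score + w_policy * policy_score
    decision = "allow" if fused >= threshold and not violations else "deny"
    # Violated rule names double as the symbolic explanation.
    return decision, fused, violations

decision, fused, why = fuse_trust(
    0.9, {"role": "admin", "managed_device": False,
          "action": "read", "off_hours": False})
# decision == "deny": the fused score clears the threshold, but the
# symbolic rule "deny_unmanaged_admin" fires and vetoes access.
```

Keeping the symbolic check as a hard veto (rather than just a weighted term) is one way such a design stays policy-compliant even when the behavioral model is confidently wrong, and the fired rule names give the analyst-facing explanation essentially for free.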
