Search Results (10,204)

Search Parameters:
Keywords = optimization task

20 pages, 1202 KB  
Perspective
The Innovative Potential of Artificial Intelligence Applied to Patient Registries to Implement Clinical Guidelines
by Sebastiano Gangemi, Alessandro Allegra, Mario Di Gioacchino, Luca Gammeri, Irene Cacciola and Giorgio Walter Canonica
Mach. Learn. Knowl. Extr. 2026, 8(2), 38; https://doi.org/10.3390/make8020038 (registering DOI) - 7 Feb 2026
Abstract
Guidelines provide specific recommendations based on the best available medical knowledge, summarizing and balancing the advantages and disadvantages of various diagnostic and treatment options. Currently, consensus methods are the best and most common practices in creating clinical guidelines, even though these approaches have several limitations. However, the rapid pace of biomedical innovation and the growing availability of real-world data (RWD) from clinical registries (containing data like clinical outcomes, treatment variables, imaging, and laboratory results) call for a complementary paradigm in which recommendations are continuously stress-tested against high-quality, interoperable data and auditable artificial intelligence (AI) pipelines. AI, based on information retrieved from patient registries, can optimize the process of creating guidelines. In fact, AI can analyze large volumes of data, ensuring essential tasks such as correct feature identification, prediction, classification, and pattern recognition of all information. In this work, we propose a four-phase lifecycle, comprising data curation, causal analysis and estimation, objective validation, and real-time updates, complemented by governance and machine learning operations (MLOps). A comparative analysis with consensus-only methods, a pilot protocol, and a compliance checklist are provided. We believe that the use of AI will be a valuable support in drafting clinical guidelines to complement expert consensus and ensure continuous updates to standards, providing a higher level of evidence. The integration of AI with high-quality patient registries has the potential to substantially modernize guideline development, enabling continuously updated, data-driven recommendations. Full article
14 pages, 801 KB  
Article
Phishing Email Detection Using BERT and RoBERTa
by Mariam Ibrahim and Ruba Elhafiz
Computation 2026, 14(2), 46; https://doi.org/10.3390/computation14020046 (registering DOI) - 7 Feb 2026
Abstract
One of the most harmful and deceptive forms of cybercrime is phishing, which targets users with malicious emails and websites. In this paper, we focus on the use of natural language processing (NLP) techniques and transformer models for phishing email detection. The Nazario Phishing Corpus is preprocessed and blended with real emails from the Enron dataset to create a robustly balanced dataset. The text underwent tokenization, lemmatization, and noise filtering, with attention to often-neglected sociolinguistic features such as urgency cues, deceptive phrasing, and structural anomalies. We fine-tuned two transformer models, Bidirectional Encoder Representations from Transformers (BERT) and the Robustly Optimized BERT Pretraining Approach (RoBERTa), for binary classification. The models were evaluated on the standard metrics of accuracy, precision, recall, and F1-score. Given the context of phishing, emphasis was placed on recall to reduce the number of phishing attacks that go unnoticed. The results show that RoBERTa delivers better overall performance and fewer false negatives than BERT and is therefore a better candidate for deployment in security-critical settings. Full article
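The recall-first evaluation the authors describe can be made concrete with a small metrics helper (a generic sketch, not the paper's code; the label convention 1 = phishing, 0 = legitimate is an assumption):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels
    (1 = phishing, 0 = legitimate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall is the fraction of phishing emails actually caught:
    # every false negative is an attack that slips through.
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Prioritizing recall over precision, as the paper does, accepts a few more false alarms in exchange for fewer missed attacks.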
28 pages, 3081 KB  
Article
An Abnormal Increase in Switching Frequency in the Multi-Source Line-Commutated Converter and a Suppression Method
by Xintong Mao, Xianmeng Zhang, Jian Ling, Honglin Yan, Rui Jing, Zhihan Liu and Chuyang Wang
Energies 2026, 19(4), 870; https://doi.org/10.3390/en19040870 (registering DOI) - 7 Feb 2026
Abstract
Distinct from the traditional Modular Multilevel Converter (MMC) which focuses on fundamental frequency operation, the Static Var and Filter (SVF) within the Multi-Source Line-Commutated Converter (SLCC) system is tasked with the core function of high-frequency harmonic filtering. This paper reveals a unique engineering reliability issue stemming from this functional difference: to satisfy the Nyquist sampling theorem for precise tracking and elimination of high-frequency harmonics, the update frequency of the capacitor voltage balancing algorithm in the SLCC-SVF system is forced to increase significantly. Mathematical modeling and quantitative analysis demonstrate that this strong coupling between harmonic tracking demands and the voltage sorting strategy directly drives an abnormal surge in the average switching frequency (reaching over five times that of the fundamental condition), severely threatening device safety. To address this, an optimized adaptive hybrid modulation strategy is proposed. The system operates under Nearest Level Modulation (NLM) in normal conditions and automatically transitions to Carrier Phase-Shifted PWM (CPS-PWM)—leveraging its closed-loop balancing capability—when switching frequency or junction temperature exceeds safety thresholds. Furthermore, a non-integer frequency ratio optimization theory for low-modulation indices is constructed specifically for SVF conditions to prevent low-frequency oscillations. PLECS simulation results validate the theoretical analysis, showing that the proposed strategy effectively reduces the average switching frequency by approximately 20% under complex harmonic conditions, significantly enhancing thermal stability and operational reliability while guaranteeing filtering performance. Full article
22 pages, 1982 KB  
Article
Perceptual Decision Advantages in Open-Skill Athletes Emerge near the Threshold of Awareness: Behavioral, Computational, and Electrophysiological Evidence
by Xudong Liu, Shiying Gao, Yanglan Yu and Anmin Li
Brain Sci. 2026, 16(2), 198; https://doi.org/10.3390/brainsci16020198 (registering DOI) - 7 Feb 2026
Abstract
Background/Objectives: Perceptual awareness and decision formation unfold gradually as sensory evidence increases. Near the threshold of awareness, small differences in neural processing efficiency can be amplified into marked behavioral variability. Open-skill athletes are trained to make rapid decisions under dynamic and uncertain conditions, yet it remains unclear whether their perceptual advantage reflects enhanced early sensory sensitivity or more efficient late-stage evidence accumulation. This study aimed to identify the processing stage at which open-skill sports expertise exerts its influence. Methods: Twenty-five open-skill athletes and twenty-three non-athlete controls completed a visual orientation discrimination task with eight graded levels of stimulus visibility, ranging from subliminal to clearly visible. Behavioral performance was analyzed together with hierarchical drift–diffusion modeling to estimate latent decision parameters. Event-related potentials (ERPs) were recorded using a 64-channel EEG system during an active decision task and a passive viewing task, focusing on early (N2, P2) and late (P3) components. ERP–behavior correlations were examined across visibility levels. Results: No group differences were observed at the lowest visibility levels. Group differences emerged selectively at intermediate to high visibility levels, where athletes showed higher accuracy and a tendency toward faster responses. Drift–diffusion modeling revealed that this advantage was driven by higher drift rates in athletes, with no group differences in non-decision time, boundary separation, or starting point. Early ERP components (N2, P2) were strongly modulated by stimulus visibility but showed no consistent group differences. In contrast, the P3 component exhibited earlier and more pronounced differentiation across visibility levels in athletes. In the passive viewing task, group differences were substantially reduced. ERP–behavior analyses showed stronger and earlier P3–behavior coupling in athletes. Conclusions: Open-skill sports expertise selectively optimizes late-stage evidence accumulation and its translation into behavior, rather than enhancing unconscious or early sensory processing. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
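The drift-diffusion account above — a higher drift rate with unchanged boundary separation — can be illustrated with a minimal simulation (a generic DDM sketch with illustrative parameters, not the authors' hierarchical model):

```python
import random


def simulate_ddm(drift, boundary=1.0, dt=0.002, noise=1.0,
                 max_t=10.0, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates with a
    constant drift plus Gaussian noise until it crosses +boundary
    (correct response) or -boundary (error).
    Returns (correct, reaction_time)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # noise scales with sqrt(dt)
    while abs(x) < boundary and t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return x >= boundary, t
```

Running many trials at a low versus a high drift rate reproduces the paper's qualitative pattern: the high-drift "athlete" condition yields both higher accuracy and shorter decision times, with no change to the boundary or starting point.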
19 pages, 2021 KB  
Article
An Adaptive Super-Resolution Network for Drone Ship Images
by Haoran Li, Wei Xiong, Yaqi Cui and Libo Yao
Entropy 2026, 28(2), 187; https://doi.org/10.3390/e28020187 (registering DOI) - 7 Feb 2026
Abstract
Uncovering latent structures from complex, degraded data is a central challenge in modern unsupervised learning, with critical implications for downstream tasks. This principle is exemplified in the domain of aerial imagery, where the quality of images captured by drones is often compromised by complex, flight-induced degradations, thereby raising the information entropy and obscuring essential semantic patterns. Conventional super-resolution methods, trained on generic data, fail to restore these unique artifacts, thereby limiting their effectiveness for vessel identification, a task that fundamentally relies on clear pattern recognition. To bridge this gap, we introduce a novel adaptive super-resolution framework for ship images captured by drones. The approach integrates a static stage for foundational feature extraction and a dynamic stage for adaptive scene reconstruction, enabling robust performance in complex aerial environments. Furthermore, to ensure the super-resolution model’s generalizability and effectiveness, we optimize the design of degradation methods based on the characteristics of drone aerial images and construct a high-resolution dataset of ship images captured by drones. Extensive experiments demonstrate that our method surpasses existing state-of-the-art algorithms, confirming the efficacy of our proposed model and dataset. Full article
21 pages, 1387 KB  
Article
Dynamic Assessment of Reconnaissance Requirements for Fire Response in Large-Scale Hazardous Chemical Logistics Warehouses
by Boyang Qin, Chaoqing Wang, Dengyou Xia, Jianhang Li, Changqi Liu, Jun Shen, Jun Yang and Zhiang Chen
Fire 2026, 9(2), 72; https://doi.org/10.3390/fire9020072 (registering DOI) - 7 Feb 2026
Abstract
At present, large-scale hazardous chemical logistics warehouses are characterized by complex structural layouts, diverse stored materials, and high operational risks, which pose significant challenges to fire emergency response. The awareness of hazardous material inventory, orderliness, and timeliness of on-site reconnaissance directly determine the efficiency and safety of firefighting and rescue operations. In response to these challenges, this study, based on 77 fire cases involving hazardous chemical logistics warehouses, proposes an evaluation framework that integrates a TOWA–TOWGA hybrid operator with complex network analysis. Accordingly, a fire scene core reconnaissance task identification model is developed. The new model is capable of identifying key reconnaissance tasks while capturing the dynamic evolutionary patterns of fire development across three distinct stages. The research findings demonstrate that identifying the fire’s spread direction, locating accessible water sources, and pinpointing the fire’s ignition point constitute the core tasks throughout the entire fire emergency response cycle. The priority ranking of these core tasks exhibits distinct temporal variability as the fire evolves dynamically. This model enables the accurate identification of key reconnaissance tasks and critical operational pathways, thereby providing robust theoretical support and a solid practical foundation for fire rescue teams to optimize resource allocation strategies and formulate science-based reconnaissance protocols. Full article
(This article belongs to the Special Issue Fire and Explosion Hazards in Energy Systems)
25 pages, 7057 KB  
Article
Reinforcement-Learning-Based Adaptive PID Depth Control for Underwater Vehicles Against Buoyancy Variations
by Jian Wang, Shuxue Yan, Honghao Bao, Cong Chen, Deyong Yu, Jixu Li, Xi Chen, Rui Dou, Yuangui Tang and Shuo Li
J. Mar. Sci. Eng. 2026, 14(4), 323; https://doi.org/10.3390/jmse14040323 (registering DOI) - 7 Feb 2026
Abstract
Underwater vehicles performing sampling tasks often encounter significant buoyancy variations due to payload adjustments and environmental changes, which severely challenge the stability and accuracy of controllers. To address this issue, this paper proposes a hybrid control framework that integrates Proximal Policy Optimization (PPO) with adaptive PID tuning. The framework employs PPO to dynamically adjust PID parameters online while incorporating output saturation, stepwise quantization, and dead zone filtering to ensure control safety and actuator longevity. A dual-error state representation—combining instantaneous error and its derivative—along with actuator command buffering is introduced to compensate for system lag and inertia. Comparative simulations and experimental tests demonstrate that the proposed method achieves faster convergence, lower steady-state error, and smoother control signals compared to both conventional PID and pure PPO-based control. The framework is validated through pool tests and field trials, confirming its robustness under realistic hydrodynamic disturbances. This work provides a practical and safe solution for adaptive depth control of sampling-capable AUVs operating in dynamic underwater environments. Full article
(This article belongs to the Section Ocean Engineering)
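The three control safeguards the abstract lists — output saturation, stepwise quantization, and dead-zone filtering — can be sketched around a plain discrete PID loop (fixed gains here; the paper adjusts them online via PPO, and all limit values below are illustrative assumptions, not the paper's):

```python
class SafePID:
    """Discrete PID controller with output saturation, stepwise
    quantization, and dead-zone filtering on the control signal."""

    def __init__(self, kp, ki, kd, dt, u_max=1.0, step=0.05,
                 dead_zone=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_max, self.step, self.dead_zone = u_max, step, dead_zone
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Dead-zone filtering: ignore errors too small to act on,
        # which avoids chattering the actuator around the setpoint.
        if abs(error) < self.dead_zone:
            error = 0.0
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Output saturation: clamp to the actuator's physical limits.
        u = max(-self.u_max, min(self.u_max, u))
        # Stepwise quantization: snap to discrete actuator increments,
        # reducing wear from tiny continuous adjustments.
        return round(u / self.step) * self.step
```

Quantization and the dead zone trade a little tracking precision for actuator longevity, which matches the abstract's stated motivation.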
27 pages, 3121 KB  
Article
DI-WOA: Symmetry-Aware Dual-Improved Whale Optimization for Monetized Cloud Compute Scheduling with Dual-Rollback Constraint Handling
by Yuanzhe Kuang, Zhen Zhang and Hanshen Li
Symmetry 2026, 18(2), 303; https://doi.org/10.3390/sym18020303 - 6 Feb 2026
Abstract
With the continuous growth in the scale of engineering simulation and intelligent manufacturing workflows, more and more problem-solving tasks are migrating to cloud computing platforms to obtain elastic computing power. However, a core operational challenge for cloud platforms lies in the difficulty of stably obtaining high-quality scheduling solutions that are both efficient and free of symmetric redundancy, due to the coupling of multiple constraints, partial resource interchangeability, inconsistent multi-objective evaluation scales, and heterogeneous resource fluctuations. To address this, this paper proposes a Dual-Improved Whale Optimization Algorithm (DI-WOA) accompanied by a modeling framework featuring discrete–continuous divide-and-conquer modeling, a unified monetization mechanism for the objective function, and separation of soft/hard constraints; its iterative trajectory follows an augmented Lagrangian dual-rollback mechanism, while being rooted in a three-layer "discrete gene–real-valued encoding–decoder" structure. Scalability experiments show that as the number of tasks J increases, DI-WOA ranks first or second at most scale points, indicating its effectiveness in reducing unified billing costs even under intensified task coupling and resource contention. Ablation experiment results demonstrate that the complete DI-WOA achieves final objective values (OBJ) 8.33%, 5.45%, and 13.31% lower than the baseline, the variant without dual update (w/o dual), and the variant without perturbation (w/o perturb), respectively, significantly enhancing convergence performance and final solution quality on this scheduling model. In robustness experiments, DI-WOA exhibits the lowest or second-lowest OBJ and soft constraint violation, indicating higher controllability under perturbations. In multi-workload generalization experiments, DI-WOA achieves the optimal or sub-optimal mean OBJ across all scenarios with H = 3/4, leading the runner-up algorithm by up to 13.85%, demonstrating good adaptability to workload variations. A comprehensive analysis of the experimental results shows that DI-WOA can stably produce high-quality scheduling solutions that are efficient and free of symmetric redundancy in complex and diverse environments. Full article
(This article belongs to the Section Computer)
35 pages, 2737 KB  
Article
Joint Trajectory and Power Optimization for Loosely Coupled Tasks: A Decoupled-Critic MAPPO Approach
by Xiangyu Wu, Changbo Hou, Guojing Meng, Zhichao Zhou and Qin Liu
Drones 2026, 10(2), 116; https://doi.org/10.3390/drones10020116 - 6 Feb 2026
Abstract
Multi-unmanned aerial vehicle (UAV) systems are crucial for establishing resilient communication networks in disaster-stricken areas, but their limited energy and dynamic characteristics pose significant challenges for sustained and reliable service provision. Optimizing resource allocation in this situation is a complex sequential decision-making problem, which is naturally suitable for multi-agent reinforcement learning (MARL). However, the most advanced MARL methods (e.g., multi-agent proximal policy optimization (MAPPO)) often encounter difficulties in the “loosely coupled” multi-UAV environment due to their overly centralized evaluation mechanism, resulting in unclear credit assignment and inhibiting personalized optimization. To overcome this, we propose a novel hierarchical framework supported by MAPPO with decoupled critics (MAPPO-DC). Our framework employs an efficient clustering algorithm for user association in the upper layer, while MAPPO-DC is used in the lower layer to enable each UAV to learn customized trajectories and power control strategies. MAPPO-DC achieves a complex balance between global coordination and personalized exploration by redesigning the update rules of the critic network, allowing for precise and personalized credit assignment in a loosely coupled environment. In addition, we designed a composite reward function to guide the learning process towards the goal of proportional fairness. The simulation results show that our proposed MAPPO-DC outperforms existing baselines, including independent proximal policy optimization (IPPO) and standard MAPPO, in terms of communication performance and sample efficiency, validating the effectiveness of our tailored MARL architecture for the task. Through model robustness experiments, we have verified that our proposed MAPPO-DC still has certain advantages in strongly coupled environments. Full article
(This article belongs to the Section Drone Communications)
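The proportional-fairness goal the reward function targets is conventionally expressed as the sum of log user rates; a minimal sketch of that objective (an illustrative reward term only — the paper's composite reward adds further components):

```python
import math


def proportional_fairness(rates, eps=1e-9):
    """Proportional-fairness objective: the sum of log user rates.
    Maximizing it balances total throughput against per-user equity,
    because the logarithm sharply penalizes starving any single user.
    `eps` guards against log(0) for users with zero rate."""
    return sum(math.log(r + eps) for r in rates)
```

Under this objective, an even allocation beats a skewed one with the same total throughput, which is exactly the behavior a fairness-guided reward should encourage.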
20 pages, 1878 KB  
Article
Research on Scheduling of Metal Structural Part Blanking Workshop with Feeding Constraints
by Yaping Wang, Xuebing Wei, Xiaofei Zhu, Lili Wan and Zihui Zhao
Math. Comput. Appl. 2026, 31(1), 24; https://doi.org/10.3390/mca31010024 - 6 Feb 2026
Abstract
Taking a metal structural part blanking workshop as the application background, this study addresses the challenges of high material variety, long crane feeding travel caused by heterogeneous line-side storage layouts, and frequent machine stoppages due to the limited feeding capacity of a single overhead crane. To this end, an integrated machine–crane dual-resource scheduling model is developed by explicitly considering line-side storage locations. The objective is to minimize the maximum waiting time among all machine tools. Under constraints of material assignment, processing sequence, and the crane’s single-task execution and travel requirements, the storage positions of materials in line-side buffers are jointly optimized. To solve the problem, a genetic algorithm with fitness-value-based crossover is proposed, and a simulated-annealing acceptance criterion is embedded to suppress premature convergence and enhance the ability to escape local optima. Comparative experiments on randomly generated instances show that the proposed algorithm can significantly reduce the maximum waiting time and yield more stable results for medium- and large-scale cases. Furthermore, a simulation based on real production data from an industrial enterprise verifies that, under limited feeding capacity, the proposed method effectively shortens material-waiting time, improves equipment utilization, and enhances production efficiency, demonstrating its effectiveness. Full article
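The simulated-annealing acceptance criterion embedded in the genetic algorithm can be sketched as follows (a textbook Metropolis rule assuming a cost-minimization objective, not the authors' exact implementation):

```python
import math
import random


def sa_accept(current_cost, candidate_cost, temperature, rng=None):
    """Simulated-annealing acceptance: always accept improvements;
    accept a worse candidate with probability exp(-delta / T), which
    shrinks as the temperature cools. Embedded after crossover/mutation,
    this lets occasionally-worse offspring survive, suppressing
    premature convergence and helping escape local optima."""
    rng = rng or random.Random()
    delta = candidate_cost - current_cost
    if delta <= 0:          # better (or equal) solution: accept
        return True
    if temperature <= 0:    # fully cooled: greedy acceptance only
        return False
    return rng.random() < math.exp(-delta / temperature)
```

A geometric cooling schedule (T ← αT with α just below 1, an assumed convention here) gradually turns the search from exploratory to greedy across generations.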
39 pages, 2550 KB  
Article
An Enhanced Projection-Iterative-Methods-Based Optimizer for Complex Constrained Engineering Design Problems
by Xuemei Zhu, Han Peng, Haoyu Cai, Yu Liu, Shirong Li and Wei Peng
Computation 2026, 14(2), 45; https://doi.org/10.3390/computation14020045 - 6 Feb 2026
Abstract
This paper proposes an Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO) to overcome the limitations of its predecessor, the Projection-Iterative-Methods-based Optimizer (PIMO), including deterministic parameter decay, insufficient diversity maintenance, and static exploration–exploitation balance. The enhancements incorporate three core strategies: (1) an adaptive decay strategy that introduces stochastic perturbations into the step-size evolution; (2) a mirror opposition-based learning strategy to actively inject structured population diversity; and (3) an adaptive adjustment mechanism for the Lévy flight parameter β to enable phase-sensitive optimization behavior. The effectiveness of EPIMO is validated through a multi-stage experimental framework. Systematic evaluations on the CEC 2017 and CEC 2022 benchmark suites, alongside four classical engineering optimization problems (Himmelblau function, step-cone pulley design, hydrostatic thrust bearing design, and three-bar truss design), demonstrate its comprehensive superiority. The Wilcoxon rank-sum test confirms statistically significant performance improvements over its predecessor (PIMO) and a range of state-of-the-art and classical algorithms. EPIMO exhibits exceptional performance in convergence accuracy, stability, robustness, and constraint-handling capability, establishing it as a highly reliable and efficient metaheuristic optimizer. This research contributes a systematic, adaptive enhancement framework for projection-based metaheuristics, which can be generalized to improve other swarm intelligence systems when facing complex, constrained, and high-dimensional engineering optimization tasks. Full article
(This article belongs to the Section Computational Engineering)
24 pages, 4662 KB  
Article
A Unified Complementary Regularization Framework for Long-Tailed Image Classification
by Xingyu Shen, Lei Zhang, Lituan Wang and Yan Wang
Appl. Sci. 2026, 16(3), 1656; https://doi.org/10.3390/app16031656 - 6 Feb 2026
Abstract
Class imbalance is a formidable and ongoing challenge in image classification tasks. Existing methods address this issue by emphasizing minority classes through class redistribution in the feature space or adjusting decision boundaries. Although such approaches improve the accuracy of minority classes, they often lead to unstable training and performance degradation on majority classes. To alleviate these challenges, we propose a unified redistribution framework termed ComReg, which explicitly enforces complementary regularization on feature learning and decision boundary optimization in long-tailed image classification. Specifically, ComReg employs a multi-expert learning framework combined with prior-knowledge-guided online distillation to construct distribution-aware decision boundaries. From the feature space learning perspective, we enhance intra-class compactness and inter-class separability through decoupled-balanced contrastive learning. To further align the distributions in both spaces, we introduce a delay-weighted prototype learning strategy, which incorporates the decision boundary constructed by the head-class expert into the decoupled-balanced contrastive learning process. Extensive experiments on widely used long-tailed benchmarks, including CIFAR10-LT and CIFAR100-LT, as well as real-world long-tailed datasets such as subsets of MedMNIST v2, demonstrate that our method achieves state-of-the-art performance. Full article
(This article belongs to the Special Issue AI-Driven Image and Signal Processing)
23 pages, 5683 KB  
Article
Optimizing RTAB-Map Viewability to Reduce Cognitive Workload in VR Teleoperation: A User-Centric Approach
by Hojin Yoon, Haegyeom Choi, Jaehoon Jeong and Donghun Lee
Mathematics 2026, 14(3), 579; https://doi.org/10.3390/math14030579 - 6 Feb 2026
Abstract
In industrial environments, providing intuitive spatial information via 3D maps is essential for maximizing the efficiency of teleoperation. However, existing SLAM algorithms generating 3D maps predominantly focus on improving robot localization accuracy, often neglecting the optimization of viewability required for human operators to clearly perceive object depth and structure in virtual environments. To address this, this study proposes a methodology to optimize the viewability of RTAB-Map-based 3D maps using the Taguchi method, aiming to enhance VR teleoperation efficiency and reduce cognitive workload. We identified eight key parameters that critically affect visual quality and utilized an L18 orthogonal array to derive an optimal combination that controls point cloud density and noise levels. Experimental results from a target object picking task demonstrated that the optimized 3D map reduced task completion time by approximately 9 s compared to the RGB image condition, achieving efficiency levels approaching those of the physical-world baseline. Furthermore, evaluations using NASA-TLX confirmed that intuitive visual feedback minimized situational awareness errors and substantially alleviated cognitive workload. This study suggests a new direction for constructing high-efficiency teleoperation interfaces from a Human–Robot Interaction perspective by expanding SLAM optimization criteria from geometric precision to user-centric visual quality. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Systems)
22 pages, 1664 KB  
Article
KAN+Transformer: An Explainable and Efficient Approach for Electric Load Forecasting
by Long Ma, Changna Guo, Yangyang Wang, Yan Zhang and Bin Zhang
Sustainability 2026, 18(3), 1677; https://doi.org/10.3390/su18031677 - 6 Feb 2026
Abstract
Short-Term Residential Load Forecasting (STRLF) is a core task in smart grid dispatching and energy management, and its accuracy directly affects the economic operation and stability of power systems. Current mainstream methods still have limitations in handling complex temporal patterns, the strong stochasticity of load data, and insufficient model interpretability. To this end, this paper proposes an explainable and efficient forecasting framework named KAN+Transformer, which integrates Kolmogorov–Arnold Networks (KAN) with the Transformer. The framework achieves performance gains through three innovative designs: a Reversible Mixture of KAN Experts (RMoK) layer, which optimizes expert weight allocation with a load-balancing loss to enhance feature extraction while preserving interpretability; an attention-guided cascading mechanism that dynamically fuses the local temporal patterns extracted by KAN with the global dependencies captured by the Transformer; and a multi-objective loss function that explicitly models the periodicity and trend characteristics of load data. Experiments on four power benchmark datasets show that KAN+Transformer significantly outperforms advanced models such as Autoformer and Informer; ablation studies confirm that the KAN module and the specialized loss function contribute accuracy improvements of 7.2% and 4.8%, respectively; and visualization analysis further verifies the model's decision-making interpretability through weight–feature correlation, providing a new paradigm for high-precision, explainable load forecasting in smart grids. Collectively, the results demonstrate the model's capability to represent complex residential load dynamics and to capture both transient and stable consumption behaviors. By enabling more accurate, interpretable, and computationally efficient short-term load forecasting, the proposed framework supports demand-side management, renewable energy integration, and intelligent grid operation, thereby improving energy utilization efficiency and enhancing the sustainability and resilience of modern power systems.
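The abstract does not spell out the load-balancing loss used in the RMoK layer, but a common formulation for mixture-of-experts routing (in the style of the Switch Transformer auxiliary loss) can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the loss multiplies, per expert, the fraction of tokens routed to it by its mean gate probability, so it is minimized when routing is uniform across experts.

```python
# Illustrative sketch of a mixture-of-experts load-balancing auxiliary loss:
# loss = n_experts * sum_e (fraction of tokens routed to e) * (mean gate prob of e).
# Perfectly balanced routing gives the minimum value 1.0; collapse onto a
# single expert raises it, penalizing unbalanced expert weight allocation.

def load_balancing_loss(gate_probs, assignments, n_experts):
    """gate_probs: per-token list of per-expert probabilities (each row sums to 1).
    assignments: index of the expert each token was routed to (argmax)."""
    n_tokens = len(gate_probs)
    loss = 0.0
    for e in range(n_experts):
        frac = sum(1 for a in assignments if a == e) / n_tokens
        mean_prob = sum(p[e] for p in gate_probs) / n_tokens
        loss += frac * mean_prob
    return n_experts * loss

# Balanced routing across two experts -> loss of 1.0 (the minimum).
balanced = load_balancing_loss(
    [[0.5, 0.5], [0.5, 0.5]], assignments=[0, 1], n_experts=2)
# All tokens routed to expert 0 -> loss above 1.0.
collapsed = load_balancing_loss(
    [[0.9, 0.1], [0.8, 0.2]], assignments=[0, 0], n_experts=2)
print(balanced, collapsed)
```

In training, such a term would be added to the forecasting loss with a small weight, nudging the gate toward spreading load across the KAN experts without dictating their individual specializations.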

17 pages, 335 KB  
Article
A Macrocognitive Design Taxonomy for Simulation-Based Training Systems: Bridging Cognitive Theory and Human–Computer Interaction
by Jessica M. Johnson
Computers 2026, 15(2), 110; https://doi.org/10.3390/computers15020110 - 6 Feb 2026
Abstract
Simulation-based training systems are increasingly deployed to prepare learners for complex, safety-critical, and dynamic work environments. While advances in computing have enabled immersive and data-rich simulations, many systems remain optimized for procedural accuracy and surface-level task performance rather than the macrocognitive processes that underpin adaptive expertise. Macrocognition encompasses higher-order cognitive processes that are essential for performance transfer beyond controlled training conditions. When these processes are insufficiently supported, training systems risk fostering brittle strategies and negative training effects. This paper introduces a macrocognitive design taxonomy for simulation-based training systems derived from a large-scale meta-analysis examining the transfer of macrocognitive skills from immersive simulations to real-world training environments. Drawing on evidence synthesized from 111 studies spanning healthcare, industrial safety, skilled trades, and defense contexts, the taxonomy links macrocognitive theory to human–computer interaction (HCI) design affordances, computational data traces, and feedback and adaptation mechanisms shown to support transfer. Grounded in joint cognitive systems theory and learning engineering practice, the taxonomy treats macrocognition as a designable and computable system concern informed by empirical transfer effects rather than as an abstract explanatory construct.
(This article belongs to the Special Issue Innovative Research in Human–Computer Interactions)