Search Results (5,728)

Search Parameters:
Keywords = heuristics

37 pages, 3754 KB  
Article
A Multi-UAV Cooperative Decision-Making Method in Dynamic Aerial Interaction Environments Based on GA-GAT-PPO
by Maoming Zou, Zhengyu Guo, Jian Zhang, Yu Han, Caiyi Chen, Huimin Chen and Delin Luo
Drones 2026, 10(5), 313; https://doi.org/10.3390/drones10050313 - 22 Apr 2026
Abstract
Autonomous task assignment in multi-unmanned aerial vehicle (UAV) systems operating in dynamic and safety-critical airspace environments is highly challenging due to complex spatial interactions and rapidly changing relative geometries. This paper proposes a hierarchical decision-making framework that bridges individual maneuvering behaviors with cooperative task allocation in multi-agent aerial systems. First, a high-fidelity single-agent maneuver model is learned using a physics-consistent simulation environment, where spatial advantage is evaluated based on relative distance and angular relationships within a kinematically feasible interaction zone (KIZ). Subsequently, a Geometry-Aware Graph Attention Network (GA-GAT) is developed to address scalable multi-agent assignment problems. Unlike conventional approaches that rely on flat feature representations, the proposed method explicitly incorporates kinematic feasibility constraints into the attention mechanism via a novel gating module, enabling efficient relational reasoning under dynamic conditions. The proposed framework is applicable to a range of civilian and safety-oriented scenarios, including UAV swarm coordination, emergency response monitoring, infrastructure inspection, and autonomous airspace management. Simulation results demonstrate that the GA-GAT-based approach significantly outperforms heuristic baselines in terms of coordination efficiency and overall system performance in complex multi-agent environments. This study highlights that decoupling maneuver-level control from high-level coordination provides a scalable and computationally efficient solution for real-time multi-UAV decision-making in safety-critical applications. The proposed framework is designed for general multi-agent coordination problems in civilian aerial applications. Full article
(This article belongs to the Special Issue UAV Swarm Intelligent Control and Decision-Making)
27 pages, 2391 KB  
Article
Oracle Upper Bounds on Clean-EEG Recoverability from Single-Channel Decompositions Under EOG/EMG Contamination
by Usman Qamar Shaikh, Anubha Manju Kalra, Andrew Lowe and Imran Khan Niazi
Sensors 2026, 26(9), 2581; https://doi.org/10.3390/s26092581 - 22 Apr 2026
Abstract
Objective: Single-channel EEG artifact suppression often relies on signal decomposition; however, it is not always clear how much clean EEG is recoverable from a given decomposition when component weighting is ideal. We present an oracle-based benchmark that characterises this best-case recoverability across common 1-D decomposition families under controlled EOG, EMG, and mixed contamination. This work does not propose a new denoising algorithm; rather, it isolates representation capacity from component-selection heuristics by computing an upper bound on reconstruction quality. Approach: Using EEGdenoiseNet, we constructed a synthetic benchmark of 4500 single-channel 2 s segments (125 Hz; T = 250) by mixing clean EEG with ocular (EOG) and/or cranial EMG exemplars at noise-to-signal ratios (NSRs) spanning −10 to +10 dB (floor −10 dB denotes an absent modality). We evaluated variational mode decomposition (VMD), singular spectrum analysis (SSA), discrete wavelet transform (DWT), and CEEMDAN by decomposing each mixture and reconstructing the clean EEG using a bounded nonnegative linear combination of components obtained via constrained least squares (the oracle). Main results: Under this oracle benchmark, SSA achieved the lowest reconstruction error in most tested conditions, while DWT tended to rank best in milder ocular regimes; VMD performance improved, with an increased mode count at higher computational cost. CEEMDAN exhibited higher latency dominated by ensemble settings. Significance: These results should be interpreted as decomposition-level upper bounds under controlled mixtures, not field-ready denoising performance. The benchmark provides a tool with which to compare representational recoverability across decompositions and to inform the subsequent design of practical component-selection strategies. Full article
(This article belongs to the Section Biomedical Sensors)
26 pages, 1020 KB  
Article
A Hybrid Heuristic Algorithm for the Traveling Salesman Problem with Structured Initialization in Global–Local Search
by Eduardo Chandomí-Castellanos, Elías N. Escobar-Gómez, Jorge Antonio Orozco Torres, Alejandro Medina Santiago, Betty Yolanda López Zapata, Juan Antonio Arizaga Silva, José Roberto-Bermúdez and Héctor Daniel Vázquez-Delgado
Algorithms 2026, 19(5), 324; https://doi.org/10.3390/a19050324 - 22 Apr 2026
Abstract
This work proposes solving the Traveling Salesman Problem by applying combined heuristic global and local search methods. The proposed method is divided into three phases: first, it evaluates an initial route built by choosing the minimum value of each row in a distance matrix. The next phase improves the route’s cost globally and, with a 2-opt local search method, removes the crossings to further reduce the tour cost. Finally, the last phase evaluates and conserves each cost using tabu search, introducing a parameter β that describes the algorithm’s convergence factor. The algorithm was assessed on 29 TSPLIB instances and compared with the ant colony optimization algorithm (ACO), artificial neural network (ANN), particle swarm optimization (PSO), and genetic algorithm (GA). The proposed algorithm obtains results close to the optimal ones: based on 30 independent runs per instance, it achieves a mean absolute percentage error (MAPE) of 1.4484% relative to the known optima. Furthermore, statistical comparisons using the coefficient of variation (CV) for runtime and the Wilcoxon signed-rank test confirm that the proposed hybrid algorithm is significantly faster than traditional ant colony optimization (T-ACO) and a new ant colony optimization algorithm (N-ACO) while maintaining competitive solution quality. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
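The 2-opt local search step this abstract describes (removing crossings by reversing tour segments) can be sketched as follows. This is a generic textbook 2-opt, not the authors' implementation; the distance-matrix representation and the square example are illustrative assumptions:

```python
import math

def route_length(route, dist):
    """Total length of the closed tour visiting `route` in order."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

def two_opt(route, dist):
    """Repeatedly reverse a tour segment whenever the reversal shortens the tour;
    in Euclidean instances this is what removes route crossings."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if route_length(candidate, dist) < route_length(route, dist):
                    route, improved = candidate, True
    return route

# Four corners of a unit square; the initial tour 0-1-2-3 crosses itself.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour = two_opt([0, 1, 2, 3], dist)  # uncrossed tour of length 4.0
```

In the hybrid scheme described above, a pass like this would sit between the row-minimum initialization and the tabu-search phase.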
15 pages, 11487 KB  
Article
DaN: A Comprehensive Semi-Real Dataset for Extreme Low-Light Image Enhancement
by Qiuyang Sun, Shaonan Liu, Hong Li, Yingchao Feng, Liuqing Sun, Kun Lu and Kangtai Liu
Computers 2026, 15(5), 261; https://doi.org/10.3390/computers15050261 - 22 Apr 2026
Abstract
Extreme low-light image enhancement (ELLIE) targets the restoration of visual quality under ultra-dim environments (<0.1 lux). Conventional image signal processing (ISP) pipelines often fail in such scenarios due to the limitations of heuristic, hand-crafted algorithms. While deep learning has advanced the field via end-to-end mapping, existing models suffer from constrained generalization and suboptimal perceptual fidelity, primarily stemming from the scarcity of large-scale, high-diversity datasets. To bridge this gap, we present the Day and Night (DaN) dataset, a semi-synthetic benchmark synthesized through a rigorous physics-based noise model. This approach effectively captures authentic noise characteristics while enabling the scalable generation of paired samples across multifaceted illumination conditions and scenes. Furthermore, we propose No Longer Vigil (NLV), a fully differentiable AI-ISP framework. By replacing traditional rigid blocks with adaptive non-linear networks, NLV facilitates scene-dependent transformations without requiring manual priors. Comprehensive evaluations demonstrate that our method significantly outshines state-of-the-art approaches, yielding a 4.15 dB gain in PSNR and a 0.026 improvement in SSIM. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
20 pages, 1109 KB  
Article
Economic Rationality and Management of Denetworking in Infrastructure Maintenance
by Chihiro Konasugawa and Akira Nagamatsu
Businesses 2026, 6(2), 20; https://doi.org/10.3390/businesses6020020 - 21 Apr 2026
Abstract
Shrinking and aging societies undermine the economic viability of network-based infrastructure once supported by economies of scale and network externalities. This paper develops a conceptual framing of “Denetworking” as a possible reconfiguration strategy in the contraction phase: reducing dependence on highly asset-specific dedicated networks (e.g., pipes and rail tracks) and shifting service functions to distributed systems or generic shared networks (e.g., roads) while maintaining minimum service standards. Rather than presenting a calibrated optimization model or full life-cycle cost (LCC) estimation, the paper proposes a heuristic decision condition for comparing a “keep” scenario (renew and maintain the dedicated network) with a “shift” scenario (Denetworking) and uses quantitative anchors from public sources to illustrate the associated fiscal and institutional trade-offs. Two Japanese cases are used as contrasting illustrations: physical Denetworking, referring to the reduction in or substitution of dedicated physical network assets, in wastewater services (centralized sewerage to decentralized treatment); and functional Denetworking, referring to the transfer of service functions from dedicated networks to more generic shared networks, in regional mobility (local rail to bus/BRT on the road network). The cross-case discussion suggests that Denetworking may become a rational policy option under certain conditions, particularly when demand density declines near renewal-investment peaks and asset specificity increases lock-in. The paper contributes a conceptual vocabulary and comparative policy framing for discussing infrastructure reconfiguration in shrinking societies and highlights practical issues of timing, cost sharing, phased implementation, and stakeholder engagement. Full article
22 pages, 4808 KB  
Article
Transforming Opportunistic Routing: A Deep Reinforcement Learning Framework for Reliable and Energy-Efficient Communication in Mobile Cognitive Radio Sensor Networks
by Suleiman Zubair, Bala Alhaji Salihu, Altyeb Altaher Taha, Yakubu Suleiman Baguda, Ahmed Hamza Osman and Asif Hassan Syed
IoT 2026, 7(2), 34; https://doi.org/10.3390/iot7020034 - 21 Apr 2026
Abstract
The Mobile Reliable Opportunistic Routing (MROR) protocol improves data-forwarding reliability in Cognitive Radio Sensor Networks (CRSNs) through mobility-aware virtual contention groups and handover zoning. However, its heuristic decision logic is difficult to optimize under highly dynamic spectrum access and random node mobility. To address this limitation, we present DRL-MROR, a refined routing framework that incorporates deep reinforcement learning (DRL) to enable intelligent and adaptive forwarding decisions. In DRL-MROR, the secondary users (SUs) act as autonomous agents that observe local state information, including primary-user activity, link quality, residual energy, and neighbor-mobility patterns. Each agent learns a forwarding policy through a Deep Q-Network (DQN) optimized for long-term network utility in terms of throughput, delay, and energy efficiency. We formulate routing as a Markov Decision Process (MDP) and use experience replay with prioritized sampling to improve learning stability and convergence. The DQN used at each node is intentionally lightweight, requiring 5514 trainable parameters, about 21.5 kB of weight storage in 32-bit precision, and approximately 5.4k multiply-accumulate operations per inference, which supports practical deployment on edge-capable CRSN nodes. Extensive simulations show that DRL-MROR outperforms the original MROR protocol and representative AI-based routing baselines such as AIRoute under diverse operating conditions. The results indicate gains of up to 38% in throughput, 42% in goodput, a 29% reduction in energy consumed per packet, and an approximately 18% improvement in network lifetime, while maintaining high route stability and fairness. DRL-MROR also reduces control overhead by about 30% and average end-to-end delay by up to 32%, maintaining strong performance even under elevated PU activity and higher node mobility. 
These results show that augmenting opportunistic routing with lightweight DRL can substantially improve adaptability and efficiency in next-generation IoT-oriented CRSNs. Full article
(This article belongs to the Special Issue Advances in Wireless Communication Technologies for IoT Devices)
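The per-node footprint figures quoted in this abstract are easy to sanity-check: for a dense network, parameters = Σ(n_in·n_out + n_out) and multiply-accumulates = Σ(n_in·n_out) over consecutive layers. The 10-64-64-10 layer shape below is a hypothetical configuration chosen only because it reproduces the quoted counts; the abstract does not state the actual architecture:

```python
def mlp_counts(sizes):
    """Trainable parameters and multiply-accumulates per inference for a dense MLP."""
    pairs = list(zip(sizes, sizes[1:]))
    params = sum(n_in * n_out + n_out for n_in, n_out in pairs)  # weights + biases
    macs = sum(n_in * n_out for n_in, n_out in pairs)            # one MAC per weight
    return params, macs

# Hypothetical layer widths, not taken from the paper.
params, macs = mlp_counts([10, 64, 64, 10])
# 5514 parameters -> 5514 * 4 bytes = 22,056 bytes (about 21.5 kB in 32-bit precision)
# 5376 MACs per inference (about 5.4k)
```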
42 pages, 10596 KB  
Systematic Review
Measurement and Modeling of Sustainable Food Choice and Purchasing Behavior: A Systematic Review of Methods and Models
by Tiago Negrão Andrade and Helena Maria André Bolini
Foods 2026, 15(8), 1442; https://doi.org/10.3390/foods15081442 - 21 Apr 2026
Abstract
Despite decades of methodological sophistication, research on sustainable food behavior remains critically limited in predicting actual purchases. This study aims to examine how methodological fragmentation across psychometric, econometric, and behavioral approaches affects the predictive validity of sustainable food choice and purchasing behavior. This integrative systematic review of 62 empirical studies across psychometric validation, discrete choice experiments (DCEs), trust and cognitive biases, and objective behavioral measurement diagnoses the structural disarticulation between these traditions as the primary cause of limited predictive validity. Findings reveal a pronounced inversion of the evidence hierarchy: while self-report studies report moderate attitude–behavior correlations (β ≈ 0.40–0.50, self-report), the only long-term study using objective scanner data demonstrates that this relationship collapses to a virtually null effect (β = 0.022), representing a 95.6% decay in predictive capacity. Psychometric instruments demonstrate strong structural validity but lack ecological validation against actual purchases. DCEs have evolved econometrically (from MNL to GMNL models), yet remain isolated from psychological theory and real-world validation. Critically, no reviewed study integrated validated scales, a DCE, and objective behavioral data within a single design. Key moderators—skepticism, halo effects, and affective heuristics—are systematically underoperationalized. To overcome this impasse, we propose Hybrid Choice Models (HCM) as the central tool to formally articulate latent attitudes, stated preferences, and observed behavior, enabling cumulative evidence to inform policy and market strategies with greater predictive accuracy. These findings indicate that predictive advances depend on integrating measurement paradigms to achieve ecologically valid and policy-relevant models of sustainable consumer behavior. Full article
26 pages, 31446 KB  
Article
A Training-Free Paradigm for Data-Scarce Maritime Scene Classification Using Vision-Language Models
by Jiabao Wu, Yujie Chen, Wentao Chen, Yicheng Lai, Junjun Li, Xuhang Chen and Wangyu Wu
Sensors 2026, 26(8), 2549; https://doi.org/10.3390/s26082549 - 21 Apr 2026
Abstract
Maritime Domain Awareness (MDA) relies heavily on data acquired from high-resolution optical spaceborne sensors; however, processing this massive quantity of sensor data via traditional supervised deep learning is severely bottlenecked by its dependency on exhaustively annotated datasets. Under extreme data scarcity, conventional architectures suffer severe performance degradation, rendering them impractical for time-critical, zero-day deployments. To overcome this barrier, we propose a training-free inference paradigm that leverages the extensive pre-trained knowledge of Large Vision-Language Models (VLMs). Specifically, we introduce a Domain Knowledge-Enhanced In-Context Learning (DK-ICL) framework coupled with a Macro-Topological Chain-of-Thought (MT-CoT) strategy. This approach bridges the perspective gap between natural images and top–down optical sensor imagery by translating expert remote sensing heuristics into a strict, step-by-step reasoning pipeline. Extensive evaluations demonstrate the substantial efficacy of this framework. Armed with merely 4 visual exemplars per category as in-context triggers, our MT-CoT augmented VLMs outperform traditional models trained under identical scarcity by over 38% in F1-score. Crucially, real-world case studies confirm that this zero-gradient approach maintains robust generalization on unannotated, out-of-distribution coastal clutters, achieving performance parity with data-heavy networks trained on 50 times the data volume. By substituting massive human annotation and GPU optimization with scalable logical deduction, this paradigm establishes a resource-efficient foundation for next-generation intelligent maritime sensing networks. Full article
43 pages, 1926 KB  
Article
Static Solitons in an Expanding Universe
by Nagabhushana Prabhu
Universe 2026, 12(4), 119; https://doi.org/10.3390/universe12040119 - 20 Apr 2026
Abstract
We show, analytically, that a static sine-Gordon soliton cannot exist in 1 + 1 non-dynamical de Sitter spacetime if α := (m/H)² < 2, where m is the mass parameter of the sine-Gordon theory and H is the Hubble constant. Conversely, we also show that static sine-Gordon solitons exist in 1 + 1 non-dynamical de Sitter spacetime if α > 2. The above threshold is explained—qualitatively and to within an O(1) factor—using a heuristic argument involving the interplay of the tensile force in the Lorentzian sine-Gordon soliton and the tidal force in de Sitter spacetime. A similar heuristic argument, which remains to be confirmed analytically, also suggests the existence of a threshold, (mV/H)² ∼ O(1), below which the tidal forces are too strong to permit the existence of a static 't Hooft–Polyakov monopole in non-dynamical 3 + 1 de Sitter spacetime; mV is the mass of the vector boson. Linde has suggested that new inflation could have triggered secondary inflation at the core of a GUT (Grand Unified Theory) monopole even if the Hubble constant at or after the GUT phase transition was significantly smaller than the mass of the X boson. We present a heuristic argument which suggests that the SO(3) 't Hooft–Polyakov monopole does not allow secondary inflation at its core when the inflationary background is weak. Based on the above, as yet analytically unconfirmed, heuristic argument for the SO(3) 't Hooft–Polyakov monopole, we conjecture that secondary inflation at the core of a GUT monopole is infeasible. Full article
(This article belongs to the Section Cosmology)
31 pages, 1360 KB  
Article
Optimizing Post-Earthquake Relief with Combined Ground and Air Routing: ε-Constraint and NSGAII-Nearest Neighbor Approaches
by Sogol Mousavi, Mohammadreza Taghizadeh-Yazdi and Seyed Mojtaba Sajadi
Systems 2026, 14(4), 449; https://doi.org/10.3390/systems14040449 - 20 Apr 2026
Abstract
In the wake of an earthquake, severe infrastructure disruption and limited access to affected areas pose serious challenges to the relief process. Therefore, developing efficient models for vehicle allocation and routing plays a crucial role in reducing response time and improving operational efficiency. In this study, a multi-objective routing model is proposed for a hybrid ground–air transportation system, where trucks are responsible for covering accessible areas and drones are deployed to serve inaccessible locations. The model’s objectives include reducing service time, travel distance, total cost, and fuel consumption. To solve the model, the ε-constraint (epsilon-constraint) approach is used for small-scale problems, and a heuristic approach combining the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and the nearest-neighbor concept is used for large-scale problems. The computational results show that the proposed hybrid system can reduce response time and significantly improve cost and fuel consumption compared to the ground-fleet-only scenario through the optimal assignment of routes and drone missions. The proposed hybrid model resulted in a reduction of approximately 15% in total cost, 12% in service time, and nearly 10% in fuel consumption compared to using the ground fleet alone. These findings demonstrate the effectiveness and efficiency of the proposed framework in post-crisis relief operations. Full article
(This article belongs to the Special Issue Simulation and Digital Twins in Humanitarian Supply Chain Management)
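The nearest-neighbor concept that this abstract pairs with NSGA-II can be sketched as a greedy route constructor: from the current node, always visit the closest unvisited node. The depot/area graph below is an illustrative assumption, not the paper's model:

```python
def nearest_neighbor_route(start, nodes, dist):
    """Build a route greedily: from the current node, always move to the
    closest not-yet-visited node."""
    route, unvisited = [start], set(nodes) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda n: dist[route[-1]][n])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Hypothetical depot (0) and three affected areas, with symmetric travel times.
dist = {
    0: {1: 2, 2: 9, 3: 4},
    1: {0: 2, 2: 6, 3: 3},
    2: {0: 9, 1: 6, 3: 8},
    3: {0: 4, 1: 3, 2: 8},
}
route = nearest_neighbor_route(0, [0, 1, 2, 3], dist)  # greedy order: 0, 1, 3, 2
```

In hybrid schemes of this kind, such greedy routes typically seed or repair the genetic algorithm's population rather than serve as final solutions.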
20 pages, 9297 KB  
Article
D3QN-Guided Sand Cat Swarm Optimization with Hybrid Exploration for Multi-Objective Cloud Task Scheduling
by Minghao Shao, Ying Guo, Jibin Wang and Hu Zhang
Algorithms 2026, 19(4), 321; https://doi.org/10.3390/a19040321 - 20 Apr 2026
Abstract
Task scheduling in cloud computing environments is a complex NP-hard problem that requires maximizing resource utilization while satisfying quality-of-service (QoS) constraints. Traditional meta-heuristic algorithms often become stuck in local optima, while single deep reinforcement learning (DRL) models exhibit instability when exploring large-scale solution spaces. To address this, this paper proposes a hybrid scheduling algorithm based on multi-objective sand cat colony optimization (MoSCO). This algorithm utilizes a D3QN network to extract task features and guide population initialization, followed by a multi-objective Sand Cat Swarm Optimization (SCSO) algorithm for refined local search. Results from 50 independent replicate experiments conducted in a simulated cloud environment, coupled with an analysis of the dynamic convergence process, demonstrate that MoSCO exhibits significant superiority and robustness. Scatter plot convergence analysis further confirms that MoSCO’s knowledge injection mechanism effectively overcomes the blind exploration phase of traditional algorithms and successfully breaks through the local optimum bottleneck in the late iteration stages of single reinforcement learning, achieving higher-quality, denser, and more stable convergence. Furthermore, 3D and 2D Pareto front analyses show that MoSCO generates highly competitive, well-distributed non-dominated solutions, offering flexible trade-off options for conflicting objectives. Compared to PureD3QN, H-SCSO, and NSGA-II, MoSCO exhibits the smallest performance fluctuations in box plots. Specifically, MoSCO elevates the average resource utilization of clusters to 92.20%, while reducing the average maximum Makespan and Tardiness to 528 and 4187, respectively. Experimental data confirm that MoSCO effectively balances global exploration with local exploitation, delivering stable, high-quality solutions for dynamic cloud task scheduling. Full article
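The non-dominated (Pareto) solutions this abstract analyzes can be filtered with a simple dominance check. This is a minimal sketch for minimization objectives, e.g. (makespan, tardiness) pairs; it is not the authors' NSGA-II-style fast non-dominated sorting, and the candidate values are invented:

```python
def pareto_front(points):
    """Return the points not dominated by any other (all objectives minimized)."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (makespan, tardiness) pairs for three candidate schedules.
front = pareto_front([(528, 4300), (560, 4187), (600, 5000)])
# The first two trade off against each other; the third is dominated by both.
```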
22 pages, 3673 KB  
Article
A Novel Gradient-Based Method for Decision Trees Optimizing Arbitrary Differential Loss Functions
by Andrei Konstantinov, Lev Utkin and Vladimir Muliukha
Mathematics 2026, 14(8), 1379; https://doi.org/10.3390/math14081379 - 20 Apr 2026
Abstract
There are many approaches for training decision trees. This work introduces a novel gradient-based method for constructing decision trees that optimize arbitrary differentiable loss functions, overcoming the limitations of heuristic splitting rules. Unlike traditional approaches that rely on heuristic splitting rules, the proposed method refines predictions using the first and second derivatives of the loss function, enabling the optimization of complex tasks such as classification, regression, and survival analysis. We demonstrate the method’s applicability to classification, regression, and survival analysis tasks, including those with censored data. Numerical experiments on both real and synthetic datasets compare the proposed method with traditional decision tree algorithms such as CART, Extremely Randomized Trees, and SurvTree. The implementation of the method is publicly available, providing a practical tool for researchers and practitioners. This work advances the field of decision tree-based modeling, offering a more flexible and accurate approach for handling structured data and complex tasks. By leveraging gradient-based optimization, the proposed method bridges the gap between traditional decision trees and modern machine learning techniques, paving the way for further innovations in interpretable and high-performing models. Full article
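As context for the first/second-derivative refinement described above, the standard second-order (Newton) criterion from gradient boosting shows how loss derivatives can drive tree construction instead of heuristic splitting rules. This is the well-known XGBoost-style formula, shown for comparison; it is not the method proposed in the article:

```python
def leaf_value(g, h, lam=1.0):
    """Newton-step leaf prediction minimizing sum_i (g_i*w + 0.5*h_i*w^2) + 0.5*lam*w^2,
    where g_i, h_i are first and second derivatives of the loss."""
    return -sum(g) / (sum(h) + lam)

def split_gain(g, h, left, lam=1.0):
    """Loss reduction from sending samples flagged in `left` to the left child."""
    def score(idx):
        G = sum(g[i] for i in idx)
        H = sum(h[i] for i in idx)
        return G * G / (H + lam)
    n = range(len(g))
    left_idx = [i for i in n if left[i]]
    right_idx = [i for i in n if not left[i]]
    return 0.5 * (score(left_idx) + score(right_idx) - score(n))

# Squared loss around a zero prediction: g_i = -y_i, h_i = 1.
y = [0.0, 0.0, 10.0, 10.0]
g, h = [-v for v in y], [1.0] * len(y)
gain = split_gain(g, h, [True, True, False, False])  # separating the groups pays off
```

Any twice-differentiable loss (classification, regression, or censored survival objectives) plugs into the same g/h interface, which is what makes derivative-driven tree construction so flexible.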
44 pages, 7084 KB  
Article
Fractional-Order Anteater Foraging Optimization Algorithm for Compact Layout Design of Electro-Hydrostatic Actuator Controllers
by Shuai Cao, Wei Xu, Weibo Li, Kangzheng Huang and Xiaoqing Deng
Fractal Fract. 2026, 10(4), 269; https://doi.org/10.3390/fractalfract10040269 - 20 Apr 2026
Abstract
The development of More Electric Aircraft (MEA) necessitates that Electro-Hydrostatic Actuator (EHA) controllers achieve exceptional power density within rigorously constrained volumes. However, the compact layout design of these controllers constitutes a challenging NP-hard problem, characterized by strong multi-physics coupling—such as electromagnetic, thermal, and structural fields—and complex nonlinear constraints. Traditional meta-heuristic algorithms frequently suffer from premature convergence and struggle to balance global exploration with local exploitation. To address these challenges, the core contribution of this paper is the proposal of a novel Fractional-Order Anteater Foraging Optimization Algorithm (AFO), which is successfully applied to an established EHA controller layout optimization model. At the algorithmic level, by incorporating the Grünwald–Letnikov fractional derivative, the algorithm exploits the inherent memory property of fractional calculus to dynamically adjust the search step size and direction based on historical evolutionary information, thereby preventing stagnation in local optima. At the engineering application level, a high-fidelity mathematical model of the EHA controller is established, comprising 11 design variables and 10 critical physical constraints, including parasitic inductance minimization, thermal radiation efficiency, and electromagnetic interference (EMI) isolation. Extensive validation against the CEC2005 and CEC2022 benchmark functions demonstrates the superior convergence accuracy and stability of the AFO algorithm. In a specific EHA case study, the proposed method reduced the controller volume by 33.9% while strictly satisfying all multi-physics constraints, compared to traditional methods. Furthermore, a physical prototype was fabricated based on the optimized layout, and experimental tests confirmed its stable operation and excellent thermal performance. 
The results validate the efficacy of incorporating fractional calculus into bio-inspired algorithms to solve complex, high-dimensional engineering optimization problems. Full article
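The Grünwald–Letnikov construction this abstract refers to can be sketched in a few lines. The recursive binomial weights below and the `fractional_memory_term` helper are an illustrative reduction of how fractional memory enters a metaheuristic's position update; the function names, the history depth, and the update form are assumptions for illustration, not the paper's actual AFO implementation.

```python
def gl_coefficients(alpha, n):
    """Grünwald–Letnikov binomial weights c_0..c_n, via the standard
    recursion c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return c

def fractional_memory_term(history, alpha):
    """History-weighted part of a fractional-order position update.

    `history` lists recent positions, newest first. For alpha = 1 the
    weights reduce to [1, -1, 0, 0, ...], so the update collapses to the
    classical memoryless step x_{t+1} = x_t + step.
    """
    c = gl_coefficients(alpha, len(history))
    return -sum(ck * xk for ck, xk in zip(c[1:], history))
```

With `alpha` between 0 and 1, older positions receive small but nonzero weights, which is the "inherent memory property" the abstract credits with avoiding stagnation: the next step is biased by the whole evolutionary trajectory rather than by the current position alone.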
(This article belongs to the Section Engineering)
15 pages, 1357 KB  
Article
Quantitative Assessment of Human Error Effects on Evacuation Performance in Underground Stations Using a Node–Link Simulation Model
by Chiyeong Kang, Kyeonghwan Seong and Mintaek Yoo
Appl. Sci. 2026, 16(8), 3987; https://doi.org/10.3390/app16083987 - 20 Apr 2026
Abstract
Human error in evacuation guidance systems can significantly affect evacuation performance, particularly in complex underground environments where large numbers of occupants are concentrated. While previous studies have focused on optimizing evacuation routes and modeling crowd dynamics, the direct quantitative impact of human error in evacuation guidance has not been sufficiently addressed. This study aims to evaluate the effects of human error on evacuation efficiency in underground stations using a node–link-based evacuation model. A virtual three-level underground station was modeled, and evacuation simulations were conducted using two representative pathfinding algorithms, Dijkstra and A*, to compare classical and heuristic routing approaches under both normal and error conditions. Three scenarios were considered: a normal condition with accurate guidance, a misguidance scenario with incorrect information on exit availability, and a delayed evacuation scenario in which a subset of evacuees started evacuation later than others. In addition, congestion effects were incorporated by adjusting walking speeds based on crowd density. The results show that human error significantly increases evacuation time and alters congestion patterns. Compared to the normal condition, the misguidance scenario increased evacuation time by approximately 17.6%, while the delayed evacuation scenario resulted in an increase of up to 37.9%, indicating that delayed response has the most critical impact due to the interaction between late-starting evacuees and existing congestion. Although the A* algorithm demonstrated higher computational efficiency, its advantage did not consistently translate into improved evacuation performance under dynamic conditions. These findings highlight that evacuation performance is highly sensitive to the accuracy and timing of evacuation guidance, rather than being determined solely by optimal pathfinding. 
Therefore, improving the reliability and timeliness of evacuation guidance systems is essential for enhancing safety in underground environments. Full article
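The node–link routing comparison this abstract describes can be sketched as a single priority-queue search that runs as Dijkstra when no heuristic is supplied and as A* when one is. The linear speed–density congestion law, the node names, and all numbers below are invented for illustration and are not the paper's calibrated model.

```python
import heapq

def walking_speed(density, v0=1.3, d_jam=5.0):
    """Hypothetical linear speed-density law (illustrative only):
    free speed v0 m/s, jam density d_jam persons/m^2, floored at
    10% of free speed so evacuees never fully stop."""
    return max(0.1 * v0, v0 * (1.0 - density / d_jam))

def shortest_time(graph, src, dst, h=None):
    """Minimum-travel-time route on a node-link graph.

    graph: node -> list of (neighbor, link_length_m, link_density).
    With h=None this is Dijkstra; passing an admissible heuristic
    h(node) -> lower bound on remaining time makes it A*.
    Returns (total_time_s, path), or (inf, []) if dst is unreachable.
    """
    h = h or (lambda n: 0.0)
    pq = [(h(src), 0.0, src, [src])]
    settled = {}
    while pq:
        f, g, node, path = heapq.heappop(pq)
        if node == dst:
            return g, path
        if settled.get(node, float("inf")) <= g:
            continue
        settled[node] = g
        for nbr, length, density in graph.get(node, []):
            t = g + length / walking_speed(density)
            heapq.heappush(pq, (t + h(nbr), t, nbr, path + [nbr]))
    return float("inf"), []

# Toy station fragment (node names and numbers invented):
station = {
    "platform":  [("concourse", 20.0, 2.0), ("stairs", 15.0, 4.0)],
    "concourse": [("exit", 30.0, 1.0)],
    "stairs":    [("exit", 25.0, 0.5)],
}
```

In this toy graph the heavily congested stairs link (4.0 persons/m^2) makes the nominally shorter route slower, so the search routes evacuees via the concourse. Misguidance and delayed-start scenarios of the kind the study examines would be modeled by perturbing the density inputs or the departure times, not the routing itself.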

35 pages, 1350 KB  
Article
A Bayesian Approach to Bad Data Identification in Power System State Estimation
by Gabriele D’Antona
Electronics 2026, 15(8), 1732; https://doi.org/10.3390/electronics15081732 - 19 Apr 2026
Abstract
This paper addresses the problem of robust identification of gross errors affecting both measurements and network parameters in power system state estimation. The study is conducted within a steady-state framework and focuses on improving bad data identification in the presence of modeling and measurement uncertainties, explicitly accounting for the limited observability of gross errors. Building on an Extended Weighted Least Squares (EWLS) estimator and a theoretically refined eigenvalue-based clustering of dominant error components, a novel Bayesian identification framework is introduced. The proposed Bayesian approach assigns probabilities to competing gross error models, including scenarios involving multiple simultaneous errors, given the observed clusters of dominant errors. This probabilistic formulation enables a systematic and quantitative decision-making process for identifying the most likely sources of gross errors, extending existing deterministic or heuristic approaches. The methodology is evaluated through numerical simulations on the IEEE-14 bus test system, considering several gross error scenarios and significant parameter uncertainties. The results demonstrate that the proposed Bayesian framework enhances the interpretability and discriminative capability of gross error identification, highlighting its potential for robust bad data identification in power system state estimation. Full article
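The Bayesian step this abstract describes, assigning posterior probabilities to competing gross-error models, is at its core Bayes' rule over an enumerated hypothesis set. The sketch below assumes the per-model likelihoods of the observed clusters of dominant error components have already been computed (for example, from the EWLS residual statistics); the hypothesis names and all numbers are invented for illustration.

```python
def posterior(models):
    """Bayes' rule over competing gross-error hypotheses.

    models: name -> (prior, likelihood of the observed clusters of
    dominant error components under that hypothesis).
    Returns normalized posterior probabilities P(model | clusters).
    """
    joint = {name: prior * lik for name, (prior, lik) in models.items()}
    z = sum(joint.values())
    if z == 0.0:
        raise ValueError("all hypotheses assign zero probability to the data")
    return {name: v / z for name, v in joint.items()}

# Invented example: a single bad measurement vs. a single bad branch
# parameter vs. no gross error at all.
hypotheses = {
    "no_error":      (0.90, 0.001),
    "bad_meas_V4":   (0.05, 0.40),
    "bad_param_L23": (0.05, 0.10),
}
```

Even with a strong prior on the error-free model, a likelihood ratio of a few hundred flips the posterior toward the single-bad-measurement hypothesis; extending the dictionary with multi-error hypotheses covers the simultaneous-error scenarios the abstract mentions.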