Search Results (808)

Search Parameters:
Keywords = heuristic technique

25 pages, 1717 KiB  
Article
Optimal Midcourse Guidance with Terminal Relaxation and Range Convex Optimization
by Jiong Li, Jinlin Zhang, Jikun Ye, Lei Shao and Xiangwei Bu
Aerospace 2025, 12(7), 618; https://doi.org/10.3390/aerospace12070618 - 9 Jul 2025
Viewed by 155
Abstract
In midcourse guidance, strong constraints and dual-channel control coupling pose major challenges for trajectory optimization. To address this, this paper proposes an optimal guidance method based on terminal relaxation and range convex programming. The study first derived a range-domain dynamics model with the angle of attack and bank angle as dual control inputs, augmented with path constraints including heat flux limitations, to formulate the midcourse guidance optimization problem. A terminal relaxation strategy was then proposed to mitigate numerical infeasibility induced by rigid terminal constraints, thereby guaranteeing the solvability of successive subproblems. Through the integration of affine variable transformations and successive linearization techniques, the original nonconvex problem was systematically converted into a second-order cone programming (SOCP) formulation, with theoretical equivalence between the relaxed and original problems established under well-justified assumptions. Furthermore, a heuristic initial trajectory generation scheme was devised, and the solution was obtained via a sequential convex programming (SCP) algorithm. Numerical simulation results demonstrated that the proposed method effectively satisfies strict path constraints, successfully generates feasible midcourse guidance trajectories, and exhibits strong computational efficiency and robustness. Additionally, a systematic comparison was conducted to evaluate the impact of different interpolation methods and discretization point quantities on algorithm performance. Full article
(This article belongs to the Special Issue Dynamics, Guidance and Control of Aerospace Vehicles)
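
To make the sequential convex programming pattern in the abstract above concrete, here is a minimal, self-contained sketch in Python (assuming the cvxpy package is available): a nonconvex toy constraint is linearized around a reference point, a penalized slack plays the role of the relaxation that keeps each subproblem feasible, and a second-order-cone trust region bounds the step. It illustrates the SCP loop only, not the paper's guidance formulation.

```python
import numpy as np
import cvxpy as cp

# Toy nonconvex problem: minimize ||x||^2 subject to x0*x1 >= 1.
# Each SCP iteration linearizes the constraint around the reference point xk,
# adds a penalized slack (the "relaxation" idea), and a trust-region SOC constraint.
xk = np.array([2.0, 2.0])
for it in range(20):
    x = cp.Variable(2)
    s = cp.Variable(nonneg=True)                      # relaxation slack
    g_k = xk[0] * xk[1] - 1.0                         # constraint value at xk
    grad = np.array([xk[1], xk[0]])                   # gradient of x0*x1 at xk
    linearized = g_k + grad @ (x - xk)                # first-order model of the constraint
    constraints = [linearized + s >= 0,
                   cp.norm(x - xk, 2) <= 1.0]         # trust region (second-order cone)
    cp.Problem(cp.Minimize(cp.sum_squares(x) + 1e3 * s), constraints).solve()
    if np.linalg.norm(x.value - xk) < 1e-6:           # converged to a fixed point
        break
    xk = x.value
print("SCP solution:", xk)   # should approach (1, 1), the true optimum
```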

27 pages, 3702 KiB  
Article
Domain Knowledge-Enhanced Process Mining for Anomaly Detection in Commercial Bank Business Processes
by Yanying Li, Zaiwen Ni and Binqing Xiao
Systems 2025, 13(7), 545; https://doi.org/10.3390/systems13070545 - 4 Jul 2025
Viewed by 194
Abstract
Process anomaly detection in financial services systems is crucial for operational compliance and risk management. However, traditional process mining techniques frequently neglect the detection of significant low-frequency abnormalities due to their dependence on frequency and the inadequate incorporation of domain-specific knowledge. Therefore, we develop an enhanced process mining algorithm by incorporating a domain-specific follow-relationship matrix derived from standard operating procedures (SOPs). We empirically evaluated the effectiveness of the proposed algorithm based on real-world event logs from a corporate account-opening process conducted from January to December 2022 in a Chinese commercial bank. Additionally, we employed large language models (LLMs) for root cause analysis and process optimization recommendations. The empirical results demonstrate that the E-Heuristic Miner significantly outperforms traditional machine learning methods and process mining algorithms in process anomaly detection. Furthermore, the integration of LLMs provides promising capabilities in semantic reasoning and offers explainable optimization suggestions, enhancing decision-making support in complex financial scenarios. Our study significantly improves the precision of process anomaly detection in financial contexts by incorporating banking-specific domain knowledge into process mining algorithms. Meanwhile, it extends theoretical boundaries and the practical applicability of process mining in intelligent, semantic-aware financial service management. Full article
(This article belongs to the Special Issue Business Process Management Based on Big Data Analytics)
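
As a rough illustration of the domain-knowledge idea in the abstract above (not the E-Heuristic Miner itself), the sketch below flags directly-follows pairs in an event log that never appear in a hand-written SOP follow-relationship matrix, independently of how often they occur; the activity names and SOP pairs are hypothetical.

```python
from collections import defaultdict

# Hypothetical SOP follow-relationship matrix: allowed activity transitions.
sop_follows = {
    ("receive_application", "verify_identity"),
    ("verify_identity", "risk_screening"),
    ("risk_screening", "approve_account"),
    ("approve_account", "activate_account"),
}

def anomalous_transitions(event_log):
    """Flag directly-follows pairs that never appear in the SOP matrix,
    regardless of how rarely they occur (frequency-independent check)."""
    counts = defaultdict(int)
    for trace in event_log:            # each trace: ordered list of activities
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return {pair: n for pair, n in counts.items() if pair not in sop_follows}

log = [
    ["receive_application", "verify_identity", "risk_screening", "approve_account", "activate_account"],
    ["receive_application", "approve_account", "activate_account"],  # skips the checks
]
print(anomalous_transitions(log))
```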

21 pages, 6305 KiB  
Article
Use of BOIvy Optimization Algorithm-Based Machine Learning Models in Predicting the Compressive Strength of Bentonite Plastic Concrete
by Shuai Huang, Chuanqi Li, Jian Zhou, Xiancheng Mei and Jiamin Zhang
Materials 2025, 18(13), 3123; https://doi.org/10.3390/ma18133123 - 1 Jul 2025
Viewed by 232
Abstract
The combination of bentonite and conventional plastic concrete is an effective method for protecting structures and adsorbing heavy metals. Determining the compressive strength (CS) is a crucial step in the design of bentonite plastic concrete (BPC). Traditional experimental analyses are resource-intensive, time-consuming, and prone to high uncertainties. To address these challenges, several machine learning (ML) models, including support vector regression (SVR), artificial neural network (ANN), and random forest (RF), are developed to forecast the CS of BPC materials. To improve prediction accuracy, a meta-heuristic optimizer, the Ivy algorithm, is integrated with Bayesian optimization (BOIvy) to optimize the ML models. Several statistical indices, including the coefficient of determination (R2), root mean square error (RMSE), prediction accuracy (U1), prediction quality (U2), and variance accounted for (VAF), are adopted to evaluate the predictive performance of all models. Additionally, Shapley additive explanation (SHAP) and sensitivity analysis are conducted to enhance model interpretability. The results indicate that the best model is the BOIvy-ANN model, which achieves the best indices during testing. Moreover, water, curing time, and cement are found to be more influential on the prediction of the CS of BPC than other features. This paper provides a strong example of applying artificial intelligence (AI) techniques to estimate the performance of BPC materials. Full article
(This article belongs to the Section Construction and Building Materials)
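
A minimal sketch of the prediction-and-evaluation workflow the abstract describes, using scikit-learn on synthetic stand-in data for a BPC mix-design table; it uses a plain random forest without the BOIvy optimizer and reports only R2 and RMSE.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Synthetic stand-in for a BPC mix-design table: water, cement, bentonite, curing time.
X = rng.uniform([140, 100, 20, 3], [220, 300, 120, 90], size=(200, 4))
# Invented linear relationship plus noise, standing in for measured CS values.
y = 0.08 * X[:, 1] - 0.05 * X[:, 0] - 0.02 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```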

21 pages, 677 KiB  
Article
Exploring Tabu Tenure Policies with Machine Learning
by Anna Konovalenko and Lars Magnus Hvattum
Electronics 2025, 14(13), 2642; https://doi.org/10.3390/electronics14132642 - 30 Jun 2025
Viewed by 206
Abstract
Tabu search is a well-known local search-based metaheuristic, widely used for tackling complex combinatorial optimization problems. As with other metaheuristics, its performance is sensitive to parameter configurations, requiring careful tuning. Among the critical parameters of tabu search is the tabu tenure. This study aims to identify key search attributes and instance characteristics that can help establish comprehensive guidelines for a robust tabu tenure policy. First, a review of different tabu tenure policies is provided. Next, critical baselines are established to understand the fundamental relationship between tabu tenure settings and solution quality. We verified that generalizable parameter selection rules provide value when implementing metaheuristic frameworks, specifically showing that a more robust tabu tenure policy can be achieved by considering whether a move is improving or non-improving. Finally, we explore the integration of machine learning techniques that exploit both dynamic search attributes and static instance characteristics to obtain effective and robust tabu tenure policies. A statistical analysis confirms that the integration of machine learning yields statistically significant performance gains, achieving a mean improvement of 12.23 (standard deviation 137.25, n = 10,000 observations) when compared to a standard randomized tabu tenure selection (p-value < 0.001). While the integration of machine learning introduces additional computational overhead, it may be justified in scenarios where heuristics are repeatedly applied to structurally similar problem instances, and even small improvements in solution quality can accumulate to large overall gains. Nonetheless, our methods have limitations. The influence of the tabu tenure parameter is difficult to detect in real time during the search process, complicating the reliable identification of when and how tenure adjustments impact search performance. Additionally, the proposed policies exhibit similar performance on the chosen instances, further complicating the evaluation and differentiation of policy effectiveness. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
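
The policy the study highlights, giving non-improving moves a longer tenure than improving ones, can be sketched on a toy penalized knapsack problem; the tenure values, aspiration rule, and problem instance are illustrative assumptions, not the paper's configuration.

```python
import random

random.seed(1)
n = 30
values = [random.randint(1, 20) for _ in range(n)]
weights = [random.randint(1, 10) for _ in range(n)]
capacity = 0.4 * sum(weights)

def objective(sol):
    w = sum(wi for wi, s in zip(weights, sol) if s)
    v = sum(vi for vi, s in zip(values, sol) if s)
    return v if w <= capacity else v - 50 * (w - capacity)   # penalized infeasibility

def tabu_search(iters=2000, tenure_improving=5, tenure_nonimproving=12):
    sol = [0] * n
    best_val = objective(sol)
    tabu_until = [0] * n                  # iteration until which flipping item i is tabu
    for it in range(1, iters + 1):
        cur_val = objective(sol)
        candidates = []
        for i in range(n):
            neigh = sol[:]
            neigh[i] ^= 1                 # move = flip one item in or out
            val = objective(neigh)
            # aspiration: a tabu move is allowed if it beats the global best
            if it >= tabu_until[i] or val > best_val:
                candidates.append((val, i, neigh))
        val, i, neigh = max(candidates)
        # the highlighted policy: non-improving moves get a longer tenure
        tenure = tenure_improving if val > cur_val else tenure_nonimproving
        tabu_until[i] = it + tenure
        sol = neigh
        best_val = max(best_val, val)
    return best_val

print("best value found:", tabu_search())
```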

38 pages, 7430 KiB  
Article
Against Expectations: A Simple Greedy Heuristic Outperforms Advanced Methods in Bitmap Decomposition
by Ville Pitkäkangas
Electronics 2025, 14(13), 2615; https://doi.org/10.3390/electronics14132615 - 28 Jun 2025
Viewed by 218
Abstract
Partitioning rectangular and rectilinear shapes in n-dimensional binary images into the smallest set of axis-aligned n-cuboids is a fundamental problem in image analysis, pattern recognition, and computational geometry, with applications in object detection, shape simplification, and data compression. This paper introduces and evaluates four deterministic decomposition methods: pure greedy selection, greedy with backtracking, greedy with a priority queue, and an iterative integer linear programming (IILP) approach. These methods are benchmarked against three established baseline techniques across 13 diverse 1D–4D images (up to 8 × 8 × 8 × 8 elements), featuring holes, concavities, and varying orientations. Surprisingly, the simplest approach—a purely greedy heuristic selecting the largest unvisited region at each step—consistently achieved optimal or near-optimal decompositions, even for complex images, and maintained optimality under rotation without post-processing. By contrast, the more sophisticated methods (backtracking, prioritization, and IILP) exhibited trade-offs between speed and quality, with IILP adding overhead without superior results. Runtime testing showed IILP was on average ~37× slower than the fastest greedy method (ranging from ~3× to 100× slower). These findings highlight that a well-designed greedy strategy can outperform more complex algorithms for practical binary shape decomposition, offering a compelling balance between computational efficiency and solution quality in pattern recognition and image analysis. Full article
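
The "pure greedy selection" strategy, repeatedly taking the largest axis-aligned all-ones rectangle that is still uncovered, can be sketched for the 2-D case as follows; this is a brute-force toy suitable only for small bitmaps, not the authors' implementation, and it omits the backtracking, priority-queue, and IILP variants.

```python
import numpy as np

def largest_rectangle(mask):
    """Brute-force search for the largest all-True axis-aligned rectangle in a
    small 2-D boolean mask. Returns (area, top, left, height, width)."""
    rows, cols = mask.shape
    best = None
    for top in range(rows):
        for left in range(cols):
            if not mask[top, left]:
                continue
            max_w = cols - left
            for height in range(1, rows - top + 1):
                row = mask[top + height - 1, left:left + max_w]
                run = len(row) if row.all() else int(np.argmin(row))  # leading run of Trues
                if run == 0:
                    break
                max_w = min(max_w, run)                 # admissible width shrinks with height
                area = height * max_w
                if best is None or area > best[0]:
                    best = (area, top, left, height, max_w)
    return best

def greedy_decompose(bitmap):
    """Greedily cover the set pixels of a binary image with axis-aligned rectangles."""
    remaining = bitmap.astype(bool).copy()
    rects = []
    while remaining.any():
        _, top, left, h, w = largest_rectangle(remaining)
        rects.append((top, left, h, w))
        remaining[top:top + h, left:left + w] = False   # mark the region as covered
    return rects

img = np.array([[1, 1, 1, 0],
                [1, 1, 1, 1],
                [0, 0, 1, 1]])
print(greedy_decompose(img))
```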

20 pages, 1686 KiB  
Article
Dynamic Security-Aware Resource Allocation in Quantum Key Distribution-Enabled Optical Networks
by Vimal Bhatia, Adolph Kasegenya and Bowen Chen
Photonics 2025, 12(7), 645; https://doi.org/10.3390/photonics12070645 - 25 Jun 2025
Viewed by 299
Abstract
The demand for secure communication in the age of quantum technologies has driven progress in quantum key distribution (QKD) techniques for optical networks. This research addresses the issues of high blocking probabilities (BPs) and the proper utilization of quantum resources under varying network loads by introducing a novel heuristic approach, termed dynamic security-aware quantum resource allocation (D-SQRA), designed for dynamic resource allocation in QKD-enabled optical networks. We propose two D-SQRA algorithms that employ an adaptive resource assignment (RA) strategy, concurrently addressing routing, wavelength, and time-slot selection while dynamically modifying security levels according to real-time network load and resource availability. We evaluate the performance of the proposed D-SQRA algorithms against two conventional methods, namely, fixed security quantum resource allocation (F-SQRA) and baseline quantum resource allocation (B-QRA). We discuss the results for the NSFNET and UBN24 topologies in terms of metrics such as network security performance (NSP), BP, quantum key utilization (QKU), and time-slot utilization. The results show that the proposed D-SQRA algorithms significantly improve on conventional techniques in resource utilization and management, reducing the BPs of new incoming connection requests. Full article
(This article belongs to the Special Issue Enabling Technologies for Optical Communications and Networking)
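
As a loose illustration of the joint wavelength/time-slot bookkeeping the abstract refers to (not the D-SQRA heuristic itself), the sketch below performs a first-fit assignment along an assumed precomputed route; the link names, grid sizes, and slot demand are invented, and the security-level adaptation is not modeled.

```python
import numpy as np

N_WAVELENGTHS, N_SLOTS = 4, 8

# occupancy[link][w, t] is True when wavelength w / time-slot t is in use on that link.
occupancy = {link: np.zeros((N_WAVELENGTHS, N_SLOTS), dtype=bool)
             for link in [("A", "B"), ("B", "C"), ("C", "D")]}

def first_fit(route, slots_needed):
    """Find the first wavelength and contiguous block of time slots free on every
    link of the route; return (wavelength, start_slot) or None (request blocked)."""
    for w in range(N_WAVELENGTHS):
        for start in range(N_SLOTS - slots_needed + 1):
            if all(not occupancy[link][w, start:start + slots_needed].any()
                   for link in route):
                for link in route:                      # reserve the resources
                    occupancy[link][w, start:start + slots_needed] = True
                return w, start
    return None

route = [("A", "B"), ("B", "C")]                        # assumed precomputed route
print(first_fit(route, slots_needed=3))                 # -> (0, 0)
print(first_fit(route, slots_needed=3))                 # -> (0, 3)
```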

35 pages, 1485 KiB  
Article
Detecting Cyber Threats in UWF-ZeekDataFall22 Using K-Means Clustering in the Big Data Environment
by Sikha S. Bagui, Germano Correa Silva De Carvalho, Asmi Mishra, Dustin Mink, Subhash C. Bagui and Stephanie Eager
Future Internet 2025, 17(6), 267; https://doi.org/10.3390/fi17060267 - 18 Jun 2025
Viewed by 325
Abstract
In an era marked by the rapid growth of the Internet of Things (IoT), network security has become increasingly critical. Traditional Intrusion Detection Systems, particularly signature-based methods, struggle to identify evolving cyber threats such as Advanced Persistent Threats (APTs) and zero-day attacks. Such threats or attacks go undetected with supervised machine-learning methods. In this paper, we apply K-means clustering, an unsupervised clustering technique, to a newly created modern network attack dataset, UWF-ZeekDataFall22. Since this dataset contains labeled Zeek logs, it was de-labeled before being used for K-means clustering. The labels, however, were used in the evaluation phase to determine the attack clusters post-clustering. To identify APT as well as zero-day attack clusters, three different labeling heuristics were evaluated. To address the challenges posed by Big Data, the Apache Spark framework, with PySpark, was used as our development environment. In addition, the uniqueness of this work also lies in its use of connection-based features. Using connection-based features, an in-depth study was conducted to determine the effect of the number of clusters, seeds, and features for each of the labeling heuristics. If the objective is to detect every single attack, the results indicate that 325 clusters with a seed of 200, using an optimal set of features, would be able to correctly place 99% of attacks. Full article
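
A minimal sketch of the de-label, cluster, then relabel-for-evaluation workflow described above, using scikit-learn on synthetic stand-in features; the majority-vote rule shown is just one plausible labeling heuristic, not necessarily any of the three the paper evaluates.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Synthetic stand-in for connection-based features (duration, bytes, packets).
benign = rng.normal([1.0, 500, 10], [0.3, 100, 3], size=(900, 3))
attack = rng.normal([8.0, 50, 200], [2.0, 20, 50], size=(100, 3))
X = np.vstack([benign, attack])
y = np.array([0] * 900 + [1] * 100)           # withheld: used only for evaluation

X_scaled = StandardScaler().fit_transform(X)  # cluster on de-labeled data
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_scaled)

# Labeling heuristic: call a cluster an "attack cluster" when the majority of its
# members carry the attack label in the withheld ground truth.
attack_clusters = {c for c in range(8) if y[clusters == c].mean() > 0.5}
detected = np.isin(clusters, list(attack_clusters))
recall = (detected & (y == 1)).sum() / (y == 1).sum()
print("attack clusters:", attack_clusters, "attack recall:", round(recall, 3))
```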

21 pages, 438 KiB  
Article
Confucian Educational Thought and Its Relevance to Contemporary Vietnamese Education
by Phuong Thi Nguyen, Khoa Ngoc Vo Nguyen, Huyen Thanh Thi Do and Quyet Thi Nguyen
Philosophies 2025, 10(3), 70; https://doi.org/10.3390/philosophies10030070 - 17 Jun 2025
Viewed by 663
Abstract
This study explores the contemporary relevance of Confucian educational thought in the context of Vietnam’s ongoing educational reform. It examines how foundational Confucian principles—particularly those related to moral cultivation, pedagogical methods, and the role of the learner—can be adapted to align with modern educational objectives. Employing a qualitative, comparative methodology, the research analyzes classical Confucian texts, historical records, and current Vietnamese education policy documents, alongside Humboldtian liberal ideals. The findings demonstrate that Confucian values such as benevolence (ren), ritual propriety (li), and exemplary moral conduct continue to offer meaningful frameworks for promoting ethical development and civic responsibility. Pedagogical techniques, including heuristic questioning, modeling, and situational teaching, remain relevant to modern goals like critical thinking and learner autonomy. While some critiques highlight limitations in Confucianism’s hierarchical structure or insufficient scientific orientation, this study also incorporates existing research showing that Confucian education—particularly across East Asia—has been positively associated with fostering students’ creativity and critical thinking. This paper distinguishes itself by proposing a hybrid model that critically adapts Confucian pedagogy in conjunction with Humboldtian liberalism to enhance both moral grounding and cognitive autonomy in Vietnamese education. The research concludes that a critically integrative approach can support Vietnam in building a culturally grounded, morally resilient, and globally competitive education system. Full article
(This article belongs to the Section Virtues)
28 pages, 1509 KiB  
Article
Adaptive Congestion Detection and Traffic Control in Software-Defined Networks via Data-Driven Multi-Agent Reinforcement Learning
by Kaoutar Boussaoud, Abdeslam En-Nouaary and Meryeme Ayache
Computers 2025, 14(6), 236; https://doi.org/10.3390/computers14060236 - 16 Jun 2025
Viewed by 428
Abstract
Efficient congestion management in Software-Defined Networks (SDNs) remains a significant challenge due to dynamic traffic patterns and complex topologies. Conventional congestion control techniques based on static or heuristic rules often fail to adapt effectively to real-time network variations. This paper proposes a data-driven framework based on Multi-Agent Reinforcement Learning (MARL) to enable intelligent, adaptive congestion control in SDNs. The framework integrates two collaborative agents: a Congestion Classification Agent that identifies congestion levels using metrics such as delay and packet loss, and a Decision-Making Agent based on Deep Q-Learning (DQN or its variants), which selects the optimal actions for routing and bandwidth management. The agents are trained offline using both synthetic and real network traces (e.g., the MAWI dataset), and deployed in a simulated SDN testbed using Mininet and the Ryu controller. Extensive experiments demonstrate the superiority of the proposed system across key performance metrics. Compared to baseline controllers, including standalone DQN and static heuristics, the MARL system achieves up to 3.0% higher throughput, maintains end-to-end delay below 10 ms, and reduces packet loss by over 10% in real traffic scenarios. Furthermore, the architecture exhibits stable cumulative reward progression and balanced action selection, reflecting effective learning and policy convergence. These results validate the benefit of agent specialization and modular learning in scalable and intelligent SDN traffic engineering. Full article
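
As a simplified stand-in for the decision-making agent described above, here is a single-agent tabular Q-learning sketch that maps a congestion level to an action; the toy environment dynamics, reward, and action set are invented for illustration and do not reproduce the paper's DQN-based multi-agent system.

```python
import random

random.seed(0)
STATES = ["low", "medium", "high"]           # congestion level from the classifier agent
ACTIONS = ["keep_route", "reroute", "throttle"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: rerouting/throttling relieves congestion but costs some reward."""
    relief = {"keep_route": 0.0, "reroute": 0.6, "throttle": 0.4}[action]
    level = STATES.index(state)
    next_level = max(0, min(2, level + (1 if random.random() > 0.3 + relief else -1)))
    reward = -level - {"keep_route": 0.0, "reroute": 0.2, "throttle": 0.3}[action]
    return STATES[next_level], reward

alpha, gamma, eps = 0.1, 0.9, 0.1
state = "low"
for _ in range(20000):
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    nxt, reward = step(state, action)
    Q[(state, action)] += alpha * (reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
                                   - Q[(state, action)])
    state = nxt

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))   # learned policy per level
```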

24 pages, 2850 KiB  
Article
Solving Three-Stage Operating Room Scheduling Problems with Uncertain Surgery Durations
by Yang-Kuei Lin and Chin Soon Chong
Mathematics 2025, 13(12), 1973; https://doi.org/10.3390/math13121973 - 15 Jun 2025
Viewed by 418
Abstract
Operating room (OR) scheduling problems are often addressed using deterministic models that assume surgery durations are known in advance. However, such assumptions fail to reflect the uncertainty that often occurs in real surgical environments, especially during the surgery and recovery stages. This study focuses on a robust scheduling problem involving a three-stage surgical process that includes pre-surgery, surgery, and post-surgery stages. The scheduling needs to coordinate multiple resources—pre-operative holding unit (PHU) beds, ORs, and post-anesthesia care unit (PACU) beds—while following a strict no-wait rule to keep patient flow continuous without delays between stages. The main goal is to minimize the makespan and improve schedule robustness when surgery and post-surgery durations are uncertain. To solve this problem, we propose a Genetic Algorithm for Robust Scheduling (GARS), which evaluates solutions using a scenario-based robustness criterion derived from multiple sampled instances. GARS is compared with four other algorithms: a deterministic GA (GAD), a random search (BRS), a greedy randomized insertion and swap heuristic (GRIS), and an improved version of GARS with simulated annealing (GARS_SA). The results from different problem sizes and uncertainty levels show that GARS and GARS_SA consistently perform better than the other algorithms. In large-scale tests with moderate uncertainty (30 surgeries, α = 0.5), GARS achieves an average makespan of 633.85, a standard deviation of 40.81, and a worst-case performance ratio (WPR) of 1.00, while GAD reaches 673.75, 54.21, and 1.11, respectively. GARS can achieve robust performance without using any extra techniques to strengthen the search process. Its structure remains simple and easy to use, making it a practical and effective approach for creating reliable and efficient surgical schedules under uncertainty. Full article
(This article belongs to the Special Issue Theory and Applications of Scheduling and Optimization)
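
The scenario-based robustness criterion mentioned above can be sketched as the fitness function a GARS-style genetic algorithm would call: sample perturbed surgery and post-surgery durations, compute the no-wait makespan of a candidate sequence under each scenario, and combine the mean and the worst case. The single-unit-per-stage model and the mean-plus-worst-case aggregation are simplifying assumptions.

```python
import random

random.seed(3)
N_PATIENTS, N_SCENARIOS, ALPHA = 10, 50, 0.5

# Nominal (pre, surgery, post) durations; surgery and post-surgery are the uncertain stages.
nominal = [(random.randint(10, 20), random.randint(40, 120), random.randint(30, 60))
           for _ in range(N_PATIENTS)]

def sample_scenario():
    """Perturb only the surgery and post-surgery stages."""
    return [(p, s * (1 + random.uniform(-ALPHA, ALPHA)),
                r * (1 + random.uniform(-ALPHA, ALPHA))) for p, s, r in nominal]

def no_wait_makespan(sequence, durations):
    """Makespan for a no-wait PHU -> OR -> PACU flow with one unit per stage."""
    free = [0.0, 0.0, 0.0]                       # next free time of each stage
    for j in sequence:
        p = durations[j]
        offsets = [0.0, p[0], p[0] + p[1]]       # stage start offsets under no-wait
        start = max(max(free[k] - offsets[k] for k in range(3)), 0.0)
        for k in range(3):
            free[k] = start + offsets[k] + p[k]
    return free[2]

def robust_fitness(sequence, scenarios):
    """Scenario-based criterion: penalize both the mean and the worst case."""
    values = [no_wait_makespan(sequence, sc) for sc in scenarios]
    return sum(values) / len(values) + max(values)

scenarios = [sample_scenario() for _ in range(N_SCENARIOS)]
seq = list(range(N_PATIENTS))
random.shuffle(seq)
print("robust fitness of a random sequence:", round(robust_fitness(seq, scenarios), 1))
```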

30 pages, 2368 KiB  
Article
A Hybrid Approach for Reachability Analysis of Complex Software Systems Using Fuzzy Adaptive Particle Swarm Optimization Algorithm and Rule Composition
by Nahid Salimi, Seyfollah Soleimani, Vahid Rafe and Davood Khodadad
Math. Comput. Appl. 2025, 30(3), 65; https://doi.org/10.3390/mca30030065 - 10 Jun 2025
Viewed by 399
Abstract
Model checking has become a widely used and precise technique for verifying software systems. However, a major challenge in model checking is state space explosion, which occurs due to the exponential memory usage required by the model checker. To address this issue, meta-heuristic and evolutionary algorithms offer a promising solution by searching for a state where a property is either satisfied or violated. Recently, various evolutionary algorithms, such as Genetic Algorithms and Particle Swarm Optimization, have been applied to detect deadlock states. While these approaches have been useful, they primarily focus on deadlock detection. This paper proposes a fuzzy algorithm to analyse reachability properties in systems specified through Graph Transformation Systems with large state spaces. To achieve this, the existing Particle Swarm Optimization algorithm, which is typically used for deadlock detection, has been extended to analyse reachability properties. To further enhance accuracy, a Fuzzy Adaptive Particle Swarm Optimization algorithm is introduced to determine which states and paths should be explored at each step in order to find the corresponding reachable state. Additionally, the proposed hybrid algorithm was applied to models generated through rule composition to assess the impact of rule composition on execution time and the number of explored states. These approaches were implemented within an open-source toolset called GROOVE, which is used for designing and model checking Graph Transformation Systems. Experimental results demonstrate that the proposed hybrid algorithm reduced verification time by up to 49.86% compared to Particle Swarm Optimization and 65.17% compared to Genetic Algorithms in reachability analysis of complex models. Furthermore, it explored 32.7% fewer states on average than the hybrid method based on Particle Swarm Optimization and Gravitational Search Algorithms, and 57.4% fewer states compared to Genetic Algorithms, indicating improved search efficiency. The application of rule composition further reduced execution time by 35.7% and the number of explored states by 41.2% in large-scale models. These results confirm that the proposed hybrid algorithm significantly enhances reachability analysis in systems modelled via Graph Transformation, improving both computational efficiency and scalability. Full article
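
A compact sketch of the PSO backbone with a crude adaptive inertia weight, standing in for the fuzzy rule base, on a continuous test function; it does not model the graph-transformation state space or the GROOVE integration.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, N_PARTICLES, ITERS = 5, 30, 200

def cost(x):                 # toy objective standing in for "distance to the goal state"
    return np.sum(x ** 2, axis=-1)

pos = rng.uniform(-5, 5, (N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_val)].copy()
w = 0.7

for it in range(ITERS):
    r1, r2 = rng.random((2, N_PARTICLES, DIM))
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    new_best = pbest[np.argmin(pbest_val)].copy()
    # crude adaptation (stand-in for the fuzzy rule base): exploit more when the
    # global best improved, explore a little more when it stagnated
    w = max(0.4, w - 0.02) if cost(new_best) < cost(gbest) else min(0.9, w + 0.02)
    gbest = new_best

print("best cost found:", cost(gbest))
```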

17 pages, 8639 KiB  
Article
Route Optimization for UGVs: A Systematic Analysis of Applications, Algorithms and Challenges
by Dario Fernando Yépez-Ponce, William Montalvo, Ximena Alexandra Guamán-Gavilanes and Mauricio David Echeverría-Cadena
Appl. Sci. 2025, 15(12), 6477; https://doi.org/10.3390/app15126477 - 9 Jun 2025
Viewed by 530
Abstract
This research focuses on route optimization for autonomous ground vehicles, with key applications in precision agriculture, logistics and surveillance. Its goal is to create planning techniques that increase productivity and flexibility in changing settings. To achieve this, a PRISMA-based systematic literature review was carried out, encompassing works published during the last five years in databases like IEEE Xplore, ScienceDirect and Scopus. The search focused on topics related to route optimization, unmanned ground vehicles and heuristic algorithms. From the analysis of 56 selected articles, trends, technologies and challenges in real-time route planning were identified. Fifty-seven percent of the recent studies focus on UGV optimization, with prominent applications in agriculture, aiming to maximize efficiency and reduce costs. Heuristic algorithms, such as Humpback Whale Optimization, Firefly Search and Particle Swarm Optimization, are commonly employed to solve complex search problems. The findings underscore the need for more flexible planning techniques that integrate spatiotemporal and curvature constraints, allowing systems to respond effectively to unforeseen changes. By increasing their effectiveness and adaptability in practical situations, our research helps to provide more reliable autonomous navigation solutions for crucial applications. Full article
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)

21 pages, 442 KiB  
Article
A Mixed-Integer Convex Optimization Framework for Cost-Effective Conductor Selection in Radial Distribution Networks While Considering Load and Renewable Variations
by Oscar Danilo Montoya, Oscar David Florez-Cediel, Luis Fernando Grisales-Noreña, Walter Gil-González and Diego Armando Giral-Ramírez
Sci 2025, 7(2), 72; https://doi.org/10.3390/sci7020072 - 3 Jun 2025
Viewed by 363
Abstract
The optimal selection of conductors (OCS) in radial distribution networks is a critical aspect of system planning, directly impacting both investment costs and energy losses. This paper proposed a mixed-integer convex (MI-Convex) optimization framework to solve the OCS problem under balanced operating conditions, integrating the costs of conductor investment and energy losses into a single convex objective. This formulation leveraged second-order conic constraints and was solved using a combination of branch-and-bound and interior-point methods. Numerical validations on standard 27-, 33-, and 85-bus test systems confirmed the effectiveness of the proposal. In the 27-bus grid, the MI-Convex approach achieved a total cost of $550,680.25, outperforming or matching the best results reported by state-of-the-art metaheuristic algorithms, including the vortex search algorithm (VSA), Newton’s metaheuristic algorithm (NMA), the generalized normal distribution optimizer (GNDO), and the tabu search algorithm (TSA). The MI-Convex method demonstrated consistent and repeatable results, in contrast to the variability observed in heuristic techniques. Further analyses considering three-period and daily load profiles led to cost reductions of up to 27.6%, and incorporating distributed renewable generation into the 85-bus system achieved a total cost of $705,197.06—approximately 22.97% lower than under peak-load planning. Moreover, the methodology proved computationally efficient, requiring only 1.84 s for the 27-bus and 12.27 s for the peak scenario of the 85-bus. These results demonstrate the superiority of the MI-Convex approach in achieving globally optimal, reproducible, and computationally tractable solutions for cost-effective conductor selection. Full article
(This article belongs to the Section Computer Sciences, Mathematics and AI)
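
Not the MI-Convex formulation, but a brute-force toy that shows the trade-off its objective combines (conductor investment cost versus the cost of I²R energy losses) on a hypothetical three-segment feeder; the conductor data, currents, and prices are invented, and voltage-drop and gauge-ordering constraints are omitted.

```python
from itertools import product

# Hypothetical conductor catalogue: gauge -> (resistance ohm/km, cost $/km)
catalogue = {"1/0": (0.55, 9000), "2/0": (0.44, 11000), "4/0": (0.27, 16000)}

# Hypothetical feeder segments: (length km, current A carried by the segment)
segments = [(2.0, 180.0), (1.5, 120.0), (1.0, 60.0)]

HOURS_PER_YEAR, YEARS, ENERGY_PRICE = 8760, 20, 0.12   # planning horizon and $/kWh

def total_cost(choice):
    """Investment cost plus the undiscounted value of three-phase I^2 R energy losses."""
    cost = 0.0
    for (length, current), gauge in zip(segments, choice):
        r_per_km, price_per_km = catalogue[gauge]
        losses_kw = 3 * current ** 2 * r_per_km * length / 1000.0
        cost += price_per_km * length
        cost += losses_kw * HOURS_PER_YEAR * YEARS * ENERGY_PRICE
    return cost

best = min(product(catalogue, repeat=len(segments)), key=total_cost)
print("best gauges per segment:", best, "total cost:", round(total_cost(best)))
```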

36 pages, 1612 KiB  
Article
Quantum-Inspired Hyperheuristic Framework for Solving Dynamic Multi-Objective Combinatorial Problems in Disaster Logistics
by Kassem Danach, Hassan Harb, Louai Saker and Ali Raad
World Electr. Veh. J. 2025, 16(6), 310; https://doi.org/10.3390/wevj16060310 - 2 Jun 2025
Viewed by 1107
Abstract
Disaster logistics presents a highly complex decision-making challenge under conditions of uncertainty, where the timely and efficient allocation of scarce resources is essential to minimize human suffering. In this context, we propose a novel Quantum-Inspired Hyperheuristic Framework (QHHF) designed to solve Dynamic Multi-Objective Combinatorial Optimization Problems (DMOCOPs) arising in disaster relief operations. The proposed framework integrates Quantum-Inspired Evolutionary Algorithms (QIEAs), which facilitate diverse and explorative solution generation, with a Reinforcement Learning (RL)-based hyperheuristic capable of dynamically selecting the most suitable low-level heuristic in response to evolving disaster conditions. A dynamic multi-objective mathematical model is formulated to simultaneously minimize total travel cost and risk exposure, while maximizing priority-weighted demand satisfaction. The model captures real-world complexity through time-dependent variables, stochastic demand variations, and fluctuating transportation risks. Extensive simulations using real-world disaster scenarios demonstrate the effectiveness of the proposed approach in generating high-quality solutions within stringent response time constraints. Comparative evaluations reveal that QHHF consistently outperforms traditional heuristics and metaheuristics in terms of adaptability, scalability, and solution quality across multiple objective trade-offs. Notably, our method achieves a 9.6% reduction in total travel cost, a 6.5% decrease in cumulative risk exposure, and a 4.7% increase in priority-weighted demand satisfaction when benchmarked against existing techniques. This work contributes both to the advancement of hyperheuristic theory and to the development of practical, AI-enabled decision-support tools for emergency logistics management. Full article
(This article belongs to the Special Issue Modeling for Intelligent Vehicles)
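
A bare sketch of the reinforcement-learning hyperheuristic layer described above: an epsilon-greedy selector chooses between two low-level move operators on a toy routing permutation, rewarding whichever operator improves the tour; the quantum-inspired population machinery and the multi-objective disaster model are not reproduced.

```python
import random

random.seed(5)
N_SITES = 12
dist = [[abs(i - j) + random.random() for j in range(N_SITES)] for i in range(N_SITES)]

def tour_cost(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N_SITES]] for i in range(N_SITES))

def swap_move(tour):                       # low-level heuristic 1: swap two sites
    i, j = random.sample(range(N_SITES), 2)
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def segment_reverse(tour):                 # low-level heuristic 2: reverse a segment
    i, j = sorted(random.sample(range(N_SITES), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

heuristics = [swap_move, segment_reverse]  # the low-level heuristic pool
value = [0.0, 0.0]                         # running estimate of each heuristic's reward
counts = [0, 0]
tour = list(range(N_SITES))
random.shuffle(tour)
cost = tour_cost(tour)

for step in range(5000):
    # epsilon-greedy selection of which low-level heuristic to apply next
    h = random.randrange(2) if random.random() < 0.1 else max(range(2), key=lambda k: value[k])
    candidate = heuristics[h](tour)
    reward = cost - tour_cost(candidate)   # positive when the move improved the tour
    counts[h] += 1
    value[h] += (reward - value[h]) / counts[h]
    if reward > 0:                         # greedy acceptance of improving moves only
        tour, cost = candidate, cost - reward

print("final cost:", round(cost, 2), "heuristic value estimates:", [round(v, 3) for v in value])
```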

41 pages, 4206 KiB  
Systematic Review
A Systematic Literature Review on Load-Balancing Techniques in Fog Computing: Architectures, Strategies, and Emerging Trends
by Danah Aldossary, Ezaz Aldahasi, Taghreed Balharith and Tarek Helmy
Computers 2025, 14(6), 217; https://doi.org/10.3390/computers14060217 - 2 Jun 2025
Viewed by 542
Abstract
Fog computing has emerged as a promising paradigm to extend cloud services toward the edge of the network, enabling low-latency processing and real-time responsiveness for Internet of Things (IoT) applications. However, the distributed, heterogeneous, and resource-constrained nature of fog environments introduces significant challenges in balancing workloads efficiently. This study presents a systematic literature review (SLR) of 113 peer-reviewed articles published between 2020 and 2024, aiming to provide a comprehensive overview of load-balancing strategies in fog computing. This review categorizes fog computing architectures, load-balancing algorithms, scheduling and offloading techniques, fault-tolerance mechanisms, security models, and evaluation metrics. The analysis reveals that three-layer (IoT–Fog–Cloud) architectures remain predominant, with dynamic clustering and virtualization commonly employed to enhance adaptability. Heuristic and hybrid load-balancing approaches are most widely adopted due to their scalability and flexibility. Evaluation frequently centers on latency, energy consumption, and resource utilization, while simulation is primarily conducted using tools such as iFogSim and YAFS. Despite considerable progress, key challenges persist, including workload diversity, security enforcement, and real-time decision-making under dynamic conditions. Emerging trends highlight the growing use of artificial intelligence, software-defined networking, and blockchain to support intelligent, secure, and autonomous load balancing. This review synthesizes current research directions, identifies critical gaps, and offers recommendations for designing efficient and resilient fog-based load-balancing systems. Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))
