Article

ARQ: A Cohesive Optimization Design for Stable Performance on Noisy Landscapes

by Vasileios Charilogis 1, Ioannis G. Tsoulos 1,*, Anna Maria Gianni 1 and Dimitrios Tsalikakis 2
1 Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
2 Department of Engineering Informatics and Telecommunications, University of Western Macedonia, 50100 Kozani, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 12180; https://doi.org/10.3390/app152212180
Submission received: 28 October 2025 / Revised: 13 November 2025 / Accepted: 14 November 2025 / Published: 17 November 2025
(This article belongs to the Special Issue Engineering Applications of Hybrid Artificial Intelligence Tools)

Abstract

The proposed Adaptive RTR with Quarantine (ARQ) method integrates three mature ideas within a single evolutionary scheme for continuous optimization: pbest differential evolution with an archive, success-history parameter adaptation, and restricted tournament replacement (RTR). It extends them with a novel outlier-quarantine mechanism. At the heart of ARQ is a combination of the following complementary mechanisms: (1) an event-driven outlier-quarantine loop that triggers on robustly detected tail behavior; (2) a robust center, computed from the best half of the population, toward which quarantined candidates are gently repaired under feasibility projections; (3) local RTR-based replacement that preserves spatial diversity and avoids premature collapse; (4) archive-guided trial generation that blends current and archived differences while steering toward strong exemplars; and (5) success-history adaptation that self-regulates the search from recent successes and reduces manual fine-tuning. Together, these parts sustain focused progress while periodically renewing diversity: search pressure remains concentrated, yet diversity is steadily replenished through micro-restarts when progress stalls, producing smooth and reliable improvement on noisy or rugged landscapes. In a comprehensive benchmark campaign spanning separable, ill-conditioned, multimodal, hybrid, and composition problems, ARQ was compared against leading state-of-the-art baselines, including top entrants and winners from CEC competitions, under identical evaluation budgets and rigorous protocols. Across these settings, ARQ delivered competitive peak results while maintaining favorable average behavior, thereby narrowing the gap between best and typical outcomes. Overall, this design positions ARQ as a robust choice for practical performance and consistency, providing a dependable tool that can meaningfully strengthen the methodological repertoire of the research community.

1. Introduction

We study continuous minimization over a rectangular domain
$$\min_{x \in \Omega} f(x), \qquad \Omega := [\ell_1, u_1] \times \cdots \times [\ell_D, u_D] \subset \mathbb{R}^D,$$
where $f : \Omega \to \mathbb{R}$ is accessed via pointwise evaluations (derivatives may be unavailable).
Feasible set and constraints. If constraints are present,
$$\mathcal{F} = \{\, x \in \Omega : g_i(x) \le 0,\; i = 1, \ldots, m,\;\; h_k(x) = 0,\; k = 1, \ldots, p \,\},$$
and the task becomes $\min_{x \in \mathcal{F}} f(x)$. Box feasibility is enforced by the projection
$$\big(\Pi_\Omega(z)\big)_j = \min\{u_j, \max\{\ell_j, z_j\}\}, \qquad j = 1, \ldots, D.$$
General constraints can be handled with a penalty functional:
$$\varphi_\rho(x) = f(x) + \rho \left( \sum_{i=1}^{m} [g_i(x)]_+^{\nu} + \sum_{k=1}^{p} |h_k(x)|^{\nu} \right), \qquad [a]_+ := \max\{a, 0\},\; \rho > 0,\; \nu \ge 1.$$
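The box projection and penalty functional above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's C++/OPTIMUS implementation; the function names `project_box` and `penalty` are our own.

```python
import numpy as np

def project_box(z, lo, hi):
    """Componentwise projection onto the box [lo, hi] (Pi_Omega)."""
    return np.minimum(hi, np.maximum(lo, z))

def penalty(f, x, gs=(), hs=(), rho=1.0, nu=2.0):
    """Penalty functional: objective plus rho-weighted constraint violations."""
    viol = sum(max(g(x), 0.0) ** nu for g in gs)   # [g_i(x)]_+^nu terms
    viol += sum(abs(h(x)) ** nu for h in hs)       # |h_k(x)|^nu terms
    return f(x) + rho * viol
```

For example, projecting `[2.0, -3.0]` onto the unit box clips each coordinate to its nearest bound.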
Best-so-far and termination. Given an evaluation budget $N_{\mathrm{fe}}$, an algorithm produces iterates $\{x_t\}_{t=1}^{N_{\mathrm{fe}}}$ and tracks the best-so-far value
$$f_{\min}(t) := \min_{1 \le s \le t} f(x_s).$$
Termination occurs when either $t = N_{\mathrm{fe}}$ or a target quality $f_{\mathrm{target}}$ is reached.
Performance reporting (best and mean). Over $R$ independent runs, let $f_{\min}^{(r)} := f_{\min}(N_{\mathrm{fe}})$ denote the per-run best-so-far value at budget. We report
$$\mathrm{Best} = \min_{1 \le r \le R} f_{\min}^{(r)}, \qquad \mathrm{Mean} = \frac{1}{R} \sum_{r=1}^{R} f_{\min}^{(r)},$$
optionally complemented by average rank across a problem suite.
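The best-so-far tracking and end-of-budget reporting can be expressed as a small Python sketch (names are ours, for illustration only):

```python
import numpy as np

def best_so_far(values):
    """Running minimum f_min(t) over a stream of objective evaluations."""
    out, cur = [], float("inf")
    for v in values:
        cur = min(cur, v)
        out.append(cur)
    return out

def report(run_bests):
    """Summarize end-of-budget best-so-far values over R independent runs."""
    a = np.asarray(run_bests, dtype=float)
    return {"Best": float(a.min()), "Mean": float(a.mean())}
```

For instance, `best_so_far([3, 1, 2, 0.5])` yields the nonincreasing trace `[3, 1, 1, 0.5]`.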
Following the problem statement, we position our contribution within the main families of continuous optimization. When smoothness and dependable directional information are available, Gradient Descent (GD) remains a natural choice, and modern momentum or learning-rate schedules extend its reach, although the dependence on differentiability and well-behaved curvature can limit robustness on rugged, high-dimensional, or strongly multimodal objectives [1]. Newton-Raphson can achieve rapid local convergence by exploiting curvature, provided stable Hessian information or proper regularization is available [2]. Deterministic derivative-free local search such as the Nelder-Mead simplex method offers geometry-aware progress without gradients and is effective in low to moderate dimensions, yet performance typically degrades as dimensionality and multimodality grow [3]. Stochastic search complements these tools. Monte Carlo simulation legitimizes randomized proposals and statistical summaries to probe complex landscapes [4], while simulated annealing regulates uphill acceptance via a temperature schedule to promote principled escapes from local minima [5].
Population based methods expand these ideas with distributed exploration and information sharing. Genetic algorithms model recombination and selection over evolving candidate sets [6]. Particle Swarm Optimization coordinates velocity and position updates to disseminate useful search directions and has inspired stability oriented and diversity oriented refinements, for example CLPSO and modern parallel or optimized variants [7,8,9,10]. Ant Colony Optimization and Artificial Bee Colony bias sampling toward historically promising regions while preserving exploratory motion [11,12,13].
Within this ecosystem, Differential Evolution (DE) has become a central tool for real-parameter search due to expressive trial generators and strong empirical performance across heterogeneous test beds [14]. Recent DE progress converges on several effective motifs. Memory-based parameter adaptation as in SHADE reduces manual tuning [15]. Population-size scheduling as in L-SHADE improves efficiency on diverse suites [16]. Refined success-history updates as in jSO further stabilize performance under tight evaluation budgets [17]. Multi-population flows, elite regeneration, and adaptive mutation or selection policies add resilience [18,19]. Orthogonal advances include coordinate-system learning and eigen-informed operators that mitigate ill-conditioning and non-separability [20,21], while parallel implementations extend practicality under limited budgets [22]. Targeted DE modifications and implementation guidelines also systematize robust choices in scaling, crossover, and selection [23]. Beyond these families, recent DE advances report improved mutation and parameter-control mechanisms [24], and a multi-operator ensemble L-SHADE with restart and local search for single-objective optimization [25], further broadening the toolkit. Additional ensemble/restart-style enhancements likewise expand the L-SHADE lineage [16].
ARQ is motivated by two converging lines: multi-strategy/co-evolutionary DE that strengthens the exploration-exploitation balance across heterogeneous landscapes [26], and robust-statistics principles where trimmed/median location and robust scale safeguard estimates under heavy tails [27]. Adjacent domains (e.g., systems leveraging contextual knowledge to stabilize signals under domain noise) point to the same practical need: preserve a reliable core while injecting measured diversity [28,29]. Within this perspective, we aim for a lightweight yet effective mechanism that turns extremes from destabilizing noise into controlled exploration stimuli.
We propose ARQ, a method aimed at dependable best-so-far progress and competitive mean performance under finite budgets. ARQ integrates pbest DE with an external archive in the JADE/SHADE lineage of archive-assisted search and success-history adaptation [15,16,17,30]. It applies success-history updates of the scale and crossover rates through gain-weighted statistics [15,16]. It enforces a neighborhood-aware replacement rule inspired by restricted tournament reasoning to preserve structured diversity [31]. Our contribution is an explicitly triggered tail quarantine coupled with a trimmed center and micro-restarts, so that stabilization coexists with a steady, low-intensity injection of diversity. The key innovation is an outlier-quarantine mechanism. Individuals identified in the extreme tail of the fitness distribution relative to a robust threshold are gently attracted toward a robust population center computed from the better half of the population, with mild perturbation and box projection $x' = \Pi_\Omega(c + \varepsilon)$. When a repaired candidate improves, it is accepted, and the previous one is archived. By trimming distribution tails and re-aligning weak individuals with promising regions, quarantine stabilizes average progress while maintaining the ability to intensify around incumbents. Complementary micro-restarts refresh a small fraction of the worst solutions around the current elite to provide targeted escapes without global resets, while mini-batch updates modulate per-step cost and support scalability [22]. We recommend ARQ in scenarios where one seeks a robust optimizer that consistently achieves strong best-so-far values and reliable mean performance across independent runs.
By combining selective attraction, self tuning, neighborhood-aware selection, and principled repair and regeneration, ARQ addresses common failure modes such as premature convergence, tail accumulation of poor individuals, and inefficient restarts, and offers a dependable and broadly applicable tool for continuous optimization.
In relation to recent DE variants, multi-strategy and cooperative/co-evolutionary schemes show that combining complementary operators improves resilience on heterogeneous problems [26]. Applied strands in consumer and industrial settings also report restart- and scheduling-oriented enhancements, such as NSGA-III delay-recovery pipelines and AHMQDE-ACO co-optimization [32,33]. In parallel, micro/partial restarts provide controlled diversity without dissolving the incumbent core [34]. ARQ differs by making tail isolation event-driven (triggered only upon tail inflation), defining a practical robust center as the mean of the top-50%, and systematically coupling quarantine with micro-restarts. The result is a retain-refresh loop that improves convergence consistency with negligible overhead [27,34]. Evidence from business-process control corroborates robustness under operational constraints [35], while e-commerce pricing studies highlight stability in fast-changing demand environments [36]. In transportation analytics, arrival-time prediction pipelines benefit from resilient learning components [37], and distributed privacy-robust learning demonstrates complementary stability in federated settings [38]. Finally, adjacent optimization tasks including trajectory generation and macro-analytics illustrate portability beyond canonical benchmarks [39,40].
ARQ introduces an explicitly event-driven outlier-quarantine loop triggered by tail behavior using a robust cutoff $\theta = Q_3 + \alpha \cdot \mathrm{IQR}$. Flagged samples are quarantined up to a proportion $\rho$ and repaired around a robust center (the mean of the best 50%), in tandem with a low-overhead micro-restart. Unlike success-history-only DE variants, this mechanism turns extremes from destabilizing noise into controlled exploration stimuli, yielding a smaller best-mean gap without sacrificing peak performance.
Finally, the remainder of this article is organized as follows: Section 2 details the ARQ method, covering its control flow, the parameter policy with success-history updates, pbest/1/bin trial construction with archive support, neighborhood-aware RTR selection, and the quarantine and micro-restart mechanisms, linking the pseudo-code routines to their roles in the overall algorithm. Section 3 describes the experimental setup and real-world benchmarks, the evaluation protocol, and reporting conventions; it then presents the parameter sensitivity analysis (Section 3.3), the complexity analysis with respect to dimension (Section 3.4), and a comparative performance study against strong baselines (Section 3.5). The article concludes with a discussion of findings and implications for robust black-box optimization (Section 4).

2. The ARQ Method

2.1. The ARQ Method with Its Mechanisms

Scope and references to preliminaries. We adopt the continuous minimization setting and feasibility operators introduced in Equations (1)–(4), which define the objective over the box domain, the projection operator onto the box, the optional penalty model for general constraints, and the base distance or normalization metric used throughout.
State and mini-batch size. At iteration $t$, the algorithm maintains a population $P_t$ of size $N$ and an external archive $A_t$, together with the incumbent best
$$x_{\mathrm{best}}^{(t)} \in \arg\min_{x \in P_0 \cup \cdots \cup P_t} f(x), \qquad f_{\mathrm{best}}^{(t)} = f\big(x_{\mathrm{best}}^{(t)}\big).$$
A mini-batch of size $m$ is used per iteration:
$$m = \lceil \mathrm{agent\_fraction} \cdot N \rceil.$$
Trial generation. For a selected agent $x \in P_t$, the mutant is formed with a pbest/1/bin scheme with archive support:
$$v = x + F\,(x_{\mathrm{pbest}} - x) + F\,(r_1 - r_2), \qquad r_1 \in P_t,\; r_2 \in P_t \cup A_t,\; r_1 \ne r_2 \ne x.$$
Then, binomial crossover with an index guard produces $u$:
$$u_j = \begin{cases} v_j, & \text{if } \mathrm{rand}() < CR \text{ or } j = j_{\mathrm{rand}}, \\ x_j, & \text{otherwise}, \end{cases} \qquad j_{\mathrm{rand}} \sim \mathrm{Unif}\{1, \ldots, D\},$$
and the candidate is projected to the box, $y = \Pi_\Omega(u)$, using the projection already defined in Equation (2).
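The mutation, crossover, and projection steps above can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation: we assume the population rows are pre-sorted by ascending fitness, and we omit the distinctness checks on $r_1, r_2$ for brevity.

```python
import numpy as np

def trial_pbest1bin(x, pop_sorted, archive, p, F, CR, lo, hi, rng):
    """pbest/1/bin trial with archive support and box projection.
    pop_sorted: rows sorted by ascending fitness (assumption of this sketch)."""
    n, d = pop_sorted.shape
    top = max(1, int(p * n))                       # elite guide from top-p fraction
    x_pbest = pop_sorted[rng.integers(top)]
    r1 = pop_sorted[rng.integers(n)]               # difference pair: population ...
    union = pop_sorted if len(archive) == 0 else np.vstack([pop_sorted, archive])
    r2 = union[rng.integers(len(union))]           # ... and population-plus-archive
    v = x + F * (x_pbest - x) + F * (r1 - r2)      # mutant vector
    jrand = rng.integers(d)                        # index guard
    mask = rng.random(d) < CR
    mask[jrand] = True                             # at least one mutant coordinate
    u = np.where(mask, v, x)                       # binomial crossover
    return np.minimum(hi, np.maximum(lo, u))       # projection onto the box
```

With `F = 0` and `CR = 1` the trial degenerates to the parent clipped into the box, which is a convenient sanity check.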
Success-history parameter adaptation. On sampling calls, the control rates are drawn around running means and clipped to bounds:
$$F \sim \mathrm{Cauchy}(\mu_F, \gamma) \text{ clipped to } [F_{\mathrm{lo}}, F_{\mathrm{hi}}], \qquad CR \sim \mathcal{N}(\mu_{CR}, \sigma^2) \text{ clipped to } [0, 1].$$
Let $S$ index successful trials within the current iteration, with gains $g_i = f(\mathrm{parent}_i) - f(\mathrm{child}_i) > 0$ and normalized weights $w_i = g_i / \sum_{k \in S} g_k$.
The running means are updated by
$$\mu_F \leftarrow \frac{\sum_{i \in S} w_i F_i^2}{\sum_{i \in S} w_i F_i}, \qquad \mu_{CR} \leftarrow \sum_{i \in S} w_i\, CR_i,$$
and remain unchanged if $S = \varnothing$.
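The gain-weighted mean update (a weighted Lehmer mean for $F$, a weighted arithmetic mean for $CR$) can be sketched as follows; the function name is ours, and the sketch assumes gains are already filtered to be positive.

```python
import numpy as np

def update_means(mu_F, mu_CR, S_F, S_CR, S_gain):
    """Gain-weighted success-history update: Lehmer mean for F, arithmetic
    mean for CR; means stay unchanged when there were no successes."""
    if len(S_gain) == 0:
        return mu_F, mu_CR
    w = np.asarray(S_gain, dtype=float)
    w = w / w.sum()                                    # normalized gain weights
    F = np.asarray(S_F, dtype=float)
    CR = np.asarray(S_CR, dtype=float)
    mu_F = float((w * F ** 2).sum() / (w * F).sum())   # weighted Lehmer mean
    mu_CR = float((w * CR).sum())                      # weighted arithmetic mean
    return mu_F, mu_CR
```

With equal gains, `S_F = [0.4, 0.8]` gives the Lehmer mean `(0.16 + 0.64) / (0.4 + 0.8) = 2/3`, which exceeds the plain average and counteracts the downward bias of small F values.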
Local replacement via RTR. Let $d_{BN}(\cdot,\cdot)$ denote the bounds-normalized Euclidean distance introduced in the preliminaries, cf. Equation (4). The replacement rule is:
if $f(y) < f(x)$: $x \leftarrow y$, $\mathrm{Arch} \leftarrow \mathrm{Arch} \cup \{x_{\mathrm{old}}\}$;
else draw $Q \subset P_t$ with $|Q| = \mathrm{rtr\_pool}$, and let $q^{\star} \in \arg\min_{q \in Q} d_{BN}(y, q)$;
if $f(y) < f(q^{\star})$: $q^{\star} \leftarrow y$, $\mathrm{Arch} \leftarrow \mathrm{Arch} \cup \{q^{\star}_{\mathrm{old}}\}$.
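The restricted-tournament step can be sketched in NumPy as below. This is an illustrative sketch with our own names; the parent-vs-trial comparison is assumed to have happened before this call, so only the nearest-neighbor tournament is shown.

```python
import numpy as np

def rtr_replace(y, fy, pop, fit, lo, hi, pool_size, rng):
    """Restricted tournament replacement: the trial competes only with its
    nearest pool member under bounds-normalized Euclidean distance."""
    idx = rng.choice(len(pop), size=min(pool_size, len(pop)), replace=False)
    scale = hi - lo
    dist = np.linalg.norm((pop[idx] - y) / scale, axis=1)
    q = idx[np.argmin(dist)]                      # nearest neighbor in the pool
    if fy < fit[q]:                               # replace on improvement only
        displaced = pop[q].copy()                 # displaced point -> archive
        pop[q], fit[q] = y, fy
        return int(q), displaced
    return None, None
```

Because the distance is normalized by the box widths, the locality metric is scale-invariant across coordinates, as the text requires.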
Outlier quarantine. Using the fitness values of the current population, compute quartiles $Q_1, Q_3$ and the interquartile range $\mathrm{IQR} = Q_3 - Q_1$ as in the preliminaries. Define the robust threshold
$$\theta = Q_3 + \alpha \cdot \mathrm{IQR}, \qquad O = \{\, x \in P_t \mid f(x) \ge \theta \,\}.$$
Select a subset $O_\rho$ with $|O_\rho| = \lfloor \rho \cdot |O| \rfloor$ and a robust center $c$ as the mean of the best fifty percent of $P_t$. For each $x \in O_\rho$, propose a repair
$$x' = \Pi_\Omega(c + \epsilon), \qquad \epsilon \text{ a small zero-mean perturbation};$$
accept if $f(x') < f(x)$, archiving the displaced point.
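The quarantine step can be sketched as below. This is an illustrative NumPy sketch, not the paper's implementation; the function name and the `eps_scale` parameter (the perturbation magnitude) are our own.

```python
import numpy as np

def quarantine(pop, fit, lo, hi, alpha, rho, eps_scale, rng, f):
    """Tail quarantine: flag f(x) >= Q3 + alpha*IQR, repair a rho-fraction
    of the flagged set toward the mean of the best half, and accept only
    on improvement (the displaced point would go to the archive)."""
    q1, q3 = np.percentile(fit, [25, 75])
    theta = q3 + alpha * (q3 - q1)                 # robust tail threshold
    flagged = np.flatnonzero(fit >= theta)
    if flagged.size == 0:
        return []
    best_half = np.argsort(fit)[: max(1, len(pop) // 2)]
    c = pop[best_half].mean(axis=0)                # robust center
    k = max(1, int(rho * flagged.size))
    repaired = []
    for i in rng.choice(flagged, size=k, replace=False):
        eps = eps_scale * rng.standard_normal(pop.shape[1])
        cand = np.minimum(hi, np.maximum(lo, c + eps))   # box projection
        fc = f(cand)
        if fc < fit[i]:                            # accept only on improvement
            pop[i], fit[i] = cand, fc
            repaired.append(int(i))
    return repaired
```

On a sphere objective with one far-out individual, only that individual crosses the IQR threshold and is pulled back toward the center of the best half.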
Targeted micro-restart. Let $W$ be the set containing a fraction $w$ of the worst individuals. For each $x \in W$, propose a restart around the incumbent
$$x' = \Pi_\Omega(x_{\mathrm{best}} + \eta), \qquad \eta \sim \mathcal{N}\big(0,\; \sigma^2 \cdot (u - \ell)^2\big) \text{ componentwise},$$
and accept if $f(x') < f(x)$, archiving the displaced point. Activation of this mechanism is controlled by the dedicated stagnation trigger.
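The micro-restart step admits a similarly compact sketch (illustrative only; the function name and argument layout are our own, and the stagnation trigger is assumed to be checked by the caller):

```python
import numpy as np

def micro_restart(pop, fit, x_best, lo, hi, w, sigma, rng, f):
    """Refresh a w-fraction of the worst individuals around the incumbent
    with box-scaled Gaussian steps; accept only on improvement."""
    k = max(1, int(w * len(pop)))
    worst = np.argsort(fit)[-k:]                   # indices of the k worst
    step = sigma * (hi - lo)                       # componentwise box scaling
    for i in worst:
        eta = step * rng.standard_normal(pop.shape[1])
        cand = np.minimum(hi, np.maximum(lo, x_best + eta))
        fc = f(cand)
        if fc < fit[i]:                            # conditional replacement
            pop[i], fit[i] = cand, fc
    return [int(i) for i in worst]
```

Because the step is proportional to the box widths, the restart radius adapts automatically to the problem's scaling.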
Termination and reporting. The loop continues while $\mathrm{iteration} < T$. Upon exit, the method returns
$$\big(x_{\mathrm{best}},\; f_{\mathrm{best}} = f(x_{\mathrm{best}})\big),$$
and, across $R$ independent runs, we summarize the end-of-budget best-so-far via Equation (6).
Building on the preliminaries and problem conventions, we present ARQ from the overall control flow to its constituent mechanisms. We begin with the overall control flow (Algorithm 1 and Figure 1), detailing initialization, the iterative cycle with mini-batch agent updates, success-history parameter adaptation, neighborhood-aware RTR selection, outlier quarantine, and the targeted micro-restart, terminating with the incumbent best. We then analyze the four core subroutines that implement ARQ's key building blocks: ParameterPolicy (Algorithm 2 and Figure 2) for sampling and gain-weighted updates of μF and μCR; TrialGeneration (Algorithm 3 and Figure 3) for pbest/1/bin trial construction with archive support and box projection; SelectionRTR (Algorithm 4 and Figure 4) for local replacement via restricted tournament; and PopulationMaintenance (Algorithm 5 and Figure 5), which introduces the outlier-quarantine innovation together with the micro-restart mechanism. We conclude by mapping pseudo-code parameters to implementation controls and highlighting how these components interact to ensure stability and strong performance across diverse search regimes.
Algorithm 1 ARQ main pseudo-code
INPUT
- f: objective function
- Ω: box domain [ℓ1, u1] × … × [ℓD, uD]
- N: population size
- T: max evaluations
- p: pbest fraction
- agent_fraction: fraction updated per step
- μF: mean scale factor
- μCR: mean crossover rate
- α: quarantine threshold coefficient
- ρ: repaired outliers fraction
- w: worst fraction for micro-restart
OUTPUT
- x_best, f_best
INITIALIZATION
- Initialize P with N random solutions in Ω and evaluate f
- Set archive and means μF, μCR
- Extract x_best, f_best from the best member of P
- Set evals ← N
ARQ main pseudo-code
01 While evals < T do
02     Compute m ← ceil(agent_fraction · N)
03     Draw random subset A ⊂ P with |A| = m
04     Clear S_F, S_CR, S_gain
05     For each x ∈ A do
06         Obtain F, CR from ParameterPolicy in sampling mode with input (μF, μCR)
07         Obtain y from TrialGeneration with input (x, P, archive, p, F, CR, Ω)
08         Set f_y ← f(y)
09         Set evals ← evals + 1
10         Obtain x_new, f_x_new from SelectionRTR with input (x, y, f_y, P)
11         If f_x_new < f(x) then
12             Append F to S_F
13             Append CR to S_CR
14             Append f(x) − f_x_new to S_gain
15         Endif
16         If f_x_new < f_best then
17             Set x_best ← x_new
18             Set f_best ← f_x_new
19         Endif
20     Endfor
21     Update (μF, μCR) via ParameterPolicy.update(S_F, S_CR, S_gain, μCR)
22     Apply PopulationMaintenance with input (P, archive, Ω, x_best, α, ρ, w)
23 Endwhile
24 Return x_best, f_best
Algorithm 1 together with Figure 1 summarizes the control flow of ARQ. The process starts with population and archive initialization and with current means for F and CR. At each iteration, the termination condition evals < T is assessed. If budget remains, a mini-batch of agents is selected and a compact improvement cycle is executed. This cycle samples parameters from the success-history policy, constructs a pbest/1/bin trial with archive support and box projection, evaluates the trial, and applies neighborhood-aware RTR selection. Successful updates are recorded so that they inform the subsequent mean update. After the mini-batch is processed, the parameter means are updated using gain-weighted statistics, outlier quarantine is applied to robustly handle poor solutions, and a targeted micro-restart is performed only when needed around the current incumbent. Once the loop condition no longer holds, the algorithm returns x_best and f_best. This design couples parameter adaptivity, local selection pressure, and controlled population refresh to deliver steady progress while avoiding derailment by outliers or premature convergence.
Algorithm 2 and Figure 2 define ARQ's parameter policy. The routine has two clear modes. In sampling mode, it draws F and CR around their current means and clips them to preset bounds, supplying the improvement cycle with a controlled blend of exploration and exploitation. In update mode, it gathers the successful trials of the current cycle, forms gains as weights, and computes new means, using a weighted Lehmer mean for F and a weighted arithmetic mean for CR. This rewards parameters that delivered substantive improvement while damping isolated lucky events. If no successes are recorded, the means are kept unchanged to preserve stability. The result is fast adaptation when profitable directions are coherent and conservative updates when improvements are scattered, aligning the learning pace with the search landscape and avoiding excessive oscillations.
Algorithm 2 Subroutine ParameterPolicy sampling and update of F, C R
INPUT (μF, μCR, S_F, S_CR, S_gain, mode)
OUTPUT sampling: (F, CR); update: (μF, μCR)
01 If mode = sampling then
02     Draw F from Cauchy centered at μF and clip to 0 < F ≤ 1
03     Draw CR from Normal with mean μCR and clip to 0 ≤ CR ≤ 1
04     Return (F, CR)
05 Endif
06 If mode = update then
07     If S_gain empty then
08         Return (μF, μCR)
09     Endif
10     Normalize weights w from S_gain so that sum(w) = 1
11     Set μF ← sum(w · S_F²) divided by sum(w · S_F)
12     Set μCR ← sum(w · S_CR)
13     Return (μF, μCR)
14 Endif
Algorithm 3 and Figure 3 show details of the trial construction pipeline that feeds ARQ’s local improvement. A guide is first drawn from the top fraction of the population so that movement is biased toward promising regions. Two additional guides are then selected, one from the current population and one from either the population or the archive, injecting diverse directions. On these anchors, a pbest/1/bin style mutation is formed that blends an attraction toward the elite guide with a difference of two solutions, striking a balance between exploitation and exploration. Next, component-wise binomial crossover is applied with a guarantee that at least one component originates from the mutant, preventing stagnation. Finally, the candidate is projected back to the box to enforce feasibility, which stabilizes behavior across variable scales. The archive broadens directional cues when progress stalls, and the projection keeps the process constraint compliant without sacrificing trial diversity.
Algorithm 3 Subroutine TrialGeneration pbest/1/bin with archive and projection
INPUT (x, P, archive, p, F, CR, Ω)
OUTPUT (y)
01 Choose x_pbest from the top p fraction of P
02 Choose r1 ∈ P with r1 ≠ x
03 Choose r2 ∈ (P ∪ archive) with r2 ≠ x and r2 ≠ r1 if feasible
04 Compute v ← x + F · (x_pbest − x) + F · (r1 − r2)
05 Sample j_rand uniformly from {1, 2, …, D}
06 For each coordinate j do
07     If rand() < CR or j = j_rand then
08         set u_j ← v_j
09     else
10         set u_j ← x_j
11     Endif
12 Endfor
13 Project u to Ω componentwise to obtain y
14 Return (y)
The selection routine in Algorithm 4 and Figure 4 implements a restricted tournament to keep replacements local and meaningful. The trial is first tested against its parent and, if superior, it directly replaces the parent while the old solution is archived. Otherwise, a small candidate pool is drawn to maintain computational efficiency. From this pool, the nearest neighbor under bounds normalized distance is identified, making the locality metric scale invariant. The trial is compared only to this neighbor and replaces it if better, with the displaced solution pushed to the archive. If not, the parent is retained. This mechanism concentrates selection pressure where it yields real gains, preserves the spatial structure of the population, mitigates premature convergence, and sustains diversity without incurring substantial overhead.
Algorithm 4 Subroutine SelectionRTR local replacement with restricted tournament
INPUT (x, y, f_y, P)
OUTPUT (x_out, f_x_out)
01 If f_y < f(x) then
02     Replace x by y in P and archive old x
03     Return (y, f_y)
04 Endif
05 Draw fixed-size pool Q ⊂ P
06 Find q_star ∈ Q with minimum bounds-normalized distance to y
07 If f_y < f(q_star) then
08     Replace q_star by y and archive old q_star
09     Return (y, f_y)
10 Else
11     Return (x, f(x))
12 Endif
Unlike the previous routines, Algorithm 5 and Figure 5 explicitly shape the population. The first phase identifies outliers using the threshold θ = Q3 + α · IQR and selects a subset for repair. Each repair proposes a candidate near a robust center computed from the best half of the population and is accepted only if it yields a measurable improvement, with the displaced solution archived. The second phase targets a fraction of the worst individuals for a micro-restart around the current incumbent, using mild, box-scaled perturbations; replacements are again conditional and archived. Together, outlier quarantine and micro-restart prevent the accumulation of toxic points, continually steer the population geometry toward productive regions, and provide a controlled recovery mechanism when progress slows.
Algorithm 5 Subroutine PopulationMaintenance outlier quarantine and micro-restart
INPUT (P, archive, Ω, x_best, α, ρ, w)
OUTPUT (P, archive)
01 Compute Q1, Q3, IQR from fitness values and set θ ← Q3 + α · IQR
02 Set O ← {x ∈ P with f(x) ≥ θ}
03 Compute center c as mean of the best fifty percent of P
04 Choose random subset O_ρ with |O_ρ| = floor(ρ · |O|)
05 For each x ∈ O_ρ do
06     Set x_new ← c + ϵ and project to Ω
07     If f(x_new) < f(x) then
08         replace and archive the old one
09     Endif
10 Endfor
11 Choose fraction w of the worst individuals of P
12 For each x in that fraction do
13     Set x_new ← x_best + Normal(0, σ · (u − ℓ)) and project to Ω
14     If f(x_new) < f(x) then
15         replace and archive the old one
16     Endif
17 Endfor
18 Return (P, archive)

2.2. Design and Parameterization

The outlier quarantine mechanism is introduced to stabilize the search whenever a few very distant samples stretch the solution cloud and weaken the convergence signals. In this situation, standard means and spreads become brittle because the tails dominate. Quarantine temporarily isolates the most extreme portion of the population so that the core can update a reliable location reference and then reinsert the isolated points with a mild, controlled perturbation.
The quarantine proportion ρ is chosen to provide a simple and stable balance between robustness and diversity. We use a small-to-moderate ρ so that only the far tail is removed while the effective population remains intact. In practice, ρ adapts to population size and current dispersion: when the cloud is tight, a very small ρ suffices because outliers are rare; when tails visibly inflate, a slightly larger ρ removes disruptive noise without drying out exploration. This mirrors the logic of a trimmed mean, where clipping a small top slice stabilizes the location estimate.
The perturbation scale q_sigma determines how strongly quarantined points are reintroduced. We set it relative to the cloud's current geometry, using robust dispersion measures such as the median absolute deviation or a bounds-normalized per-coordinate standard deviation. When the distribution is compact, a small q_sigma yields fine local search around the center. When the distribution is scattered or the search is in its early phases, a larger q_sigma helps recapture underrepresented regions. Consequently, $x' = \Pi_\Omega(c + \varepsilon)$ with zero-mean $\varepsilon$ always scales to the landscape and avoids unnecessarily large jumps.
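One of the robust dispersion measures mentioned above, the bounds-normalized per-coordinate median absolute deviation (MAD), can be sketched as follows; the function name is ours, and using it to size q_sigma is one possible choice, not the paper's prescribed formula.

```python
import numpy as np

def robust_scale(pop, lo, hi):
    """Bounds-normalized per-coordinate MAD, a robust dispersion estimate
    that could size the reinsertion scale q_sigma. The 1.4826 factor makes
    the MAD consistent with the standard deviation under Gaussian data."""
    z = (pop - lo) / (hi - lo)                 # normalize to the unit box
    med = np.median(z, axis=0)
    mad = np.median(np.abs(z - med), axis=0)   # median absolute deviation
    return 1.4826 * mad
```

Unlike the sample standard deviation, this estimate is essentially unaffected by a single extreme individual, which is exactly the property the quarantine mechanism relies on.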
We define the robust center as the simple mean of the top half of individuals by fitness because this estimate remains sensitive to improvement and resistant to outliers. The top fifty percent is large enough to reduce random noise and clean enough to exclude low-quality or extreme points. Since the subset is formed by fitness ranking, it concentrates near the promising basin; its plain mean therefore becomes a practical, stable location reference without requiring sophisticated robust estimators.
Synergy with micro-restart is direct and functional. Quarantine suppresses the heavy tail and protects the center estimate from jitter, allowing exploitative operators to move steadily toward the active basin. Micro-restart re-injects small but meaningful diversity near the robust center or along uncertain directions, without dissolving the core. Alternating the two makes the search operate in a retain-refresh cycle: first the tail is smoothed, then the center is updated reliably, and finally exploration resumes with a measured step.
Parameter sensitivity clarifies each effect. Increasing ρ stabilizes references faster but can reduce diversity unless backed by micro-restart. Increasing q_sigma helps rediscover neighborhoods that would otherwise be missed, but if too large, it weakens local improvement. Using fifty percent to compute the center proved a dependable balance because it keeps statistical efficiency high while remaining outlier-resistant; small deviations around this level do not change qualitative behavior, making it a safe and portable default.
Operationally, the mechanism triggers periodically or when tail inflation is detected via bounds-normalized distances from the center and fitness ranking. Quarantined points are removed, resampled around the center at scale q_sigma, and reinserted under an acceptance rule that respects the objective while archiving displaced points. In this way, we convert extremes from a source of destabilization into controlled exploration stimuli.
The method’s novelty lies in tying three ideas into a light yet effective loop: it makes tail isolation explicit and actionable at the moment it harms estimation, it defines a practical robust center via a simple trimmed average aligned with improvement, and it systematically couples this stabilization with micro-restarts to maintain a steady trickle of fresh variability. The result is a retain-refresh process that improves convergence consistency without heavy adaptation or hyperparameter overhead and that integrates cleanly with existing evolutionary operators. See Figure 6 for the concise flow of quarantine, repair, and reinsertion.

3. Experimental Setup and Benchmark Results

3.1. Setup

The following Table 1 and Table 2 summarize all experimental settings and how results are reported.
Protocol and configurations. Table 2 enumerates the hyper-parameters of all competing methods to enable strict reproducibility under a common budget. Population size is fixed at N = 100. A single termination rule, a fixed number of function evaluations (FEs), is enforced, and every solver runs under the same budget T. Method-specific controls (e.g., the CLPSO comprehensive-learning probability, CMA-ES population and coefficients, EA4Eig JADE-style parameters, mLSHADE_RL and UDE3 [42] success-history memories and pbest ranges, SaDE adaptation schemes) follow the literature and public reference implementations. Complete settings appear in Table 1 and Table 2.
Implementation and environment. All algorithms including the proposed method and baselines were implemented in optimized ANSI C++ and integrated into the open-source OPTIMUS framework [43]. Source code: https://github.com/itsoulos/GLOBALOPTIMUS (accessed on 15 October 2025). Builds used Debian 12.12 with GCC 13.4.
Hardware. Experiments ran on a high-performance node with an AMD Ryzen 9 5950X (16 cores, 32 threads) and 128 GB DDR4 memory, under Debian Linux.
Evaluation protocol. Each benchmark function was executed in 30 independent runs with distinct random seeds to capture stochastic variability. With a fixed FE budget, comparisons are made at the same budget for all solvers.
Metrics and reporting. The primary outcomes are the best and mean objective values at T across 30 runs per test function. In the ranking tables, 1st place entries are highlighted green and 2nd place entries blue. Parameter choices exactly follow Table 1 and Table 2.

3.2. Benchmark Functions

Table 3 compiles the real-world optimization problems used in our evaluation. For each case, we report a brief description, the dimensionality and variable types (continuous/mixed-integer), the nature and count of constraints (inequalities/equalities), salient landscape properties (nonconvexity and multi-modality), as well as the evaluation budget and comparison criteria. The set spans, indicatively, mechanical design, energy scheduling, process optimization, and parameter estimation with black-box simulators, ensuring that conclusions extend beyond synthetic test functions. Where applicable, we also note any normalizations or constraint reformulations adopted for fair comparison.

3.3. Parameter Sensitivity Analysis of ARQ

Following Lee et al.’s [64] parameter-sensitivity methodology, we constructed a structured analysis to quantify responsiveness to parameter changes and the preservation of reliability across diverse operating regimes.
For the static economic load dispatch problem (ELD1), the dominant finding of the sensitivity analysis is that p exhibits by far the largest main effect on mean best (range ≈ 0.937), making it the primary regulator of the exploitation–exploration balance. The trend is clearly decreasing: increasing p from 0.05 to 0.30 is accompanied by an almost monotonic drop in mean best, with 0.05 emerging as the best setting. This indicates that exploitation pressure should remain modest to preserve diversity and avoid population alignment that harms the mean; see Table 4 for the exact main-effect ranges and the per-setting summaries. The main effects and stability ranges are visualized in Figure 7.
The second most influential factor is the success-history memory rate sh_c (range ≈ 0.208). Values 0.1 and 0.5 deliver the strongest averages, whereas 0.3 and 0.7 underperform. This matches the expected trade-off: too-fast forgetting induces oscillations in F/CR, while too-slow memory anchors prematurely in suboptimal ranges. The data suggest two sweet spots, a more agile setting at 0.1 and a more conservative one at 0.5, implying that problem attributes likely modulate the preferable regime; an adaptive schedule that moves from 0.5 to 0.1 when progress resumes would be reasonable.
The α parameter in quarantine/repair shows a moderate main effect (≈0.159), with a peak around 1.5 and only minor differences nearby. Values far below 1.0 or above 2.0 do not improve mean best in this sample. A mid-level aggressiveness for tail definition appears optimal: sufficient to trim sporadic failures without crushing diversity, supporting the view of quarantine as a noise regulator rather than a hard filter.
The RTR geometry parameter rtr_k has the smallest main effect in the tested band (≈0.075), with shallow maxima at 5–10 neighbors. This suggests robustness to modest neighborhood changes within the examined range; extremes would likely matter more. A mid-range choice around 5–10 preserves local structure without trapping the population.
The stagnation_trigger exhibits a small-to-moderate effect (≈0.099), with a best value near 30. Short windows (10–20) do not help mean best, presumably due to premature micro-restarts that cut off promising paths, while very long windows (50) delay exiting genuine stagnation. A trigger around 30 strikes the best balance for the present budget.
Putting this together, the effect ranking is unambiguous: p ≫ sh_c > α > stagnation_trigger > rtr_k. A consolidated view of these effects and recommended settings is reported in Table 4. The data-driven recommendation is to fix p at a low level around 0.05, set sh_c to either 0.1 or 0.5 depending on algorithmic phase, place α near 1.5, choose stagnation_trigger around 30 generations, and keep rtr_k in a mid-range of 5–10. While the main-effect signals are clear, confirmatory runs should check for adverse interactions at extreme combinations, especially between p and sh_c, and between α and stagnation_trigger.
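The main-effect range behind these rankings can be sketched in a few lines: for each parameter, average the mean-best over all trials sharing a setting, then take the max-minus-min across settings. The data below are made up for illustration, not the paper's measurements.

```python
# Sketch of the main-effect computation underlying Table 4 (illustrative data):
# for each parameter, average mean-best per setting, then range = max - min.
from collections import defaultdict

def main_effect_ranges(trials):
    """trials: list of (settings_dict, mean_best). Returns {param: range}."""
    sums = defaultdict(lambda: [0.0, 0])  # (param, value) -> [total, count]
    for settings, mean_best in trials:
        for param, value in settings.items():
            cell = sums[(param, value)]
            cell[0] += mean_best
            cell[1] += 1
    per_setting = defaultdict(dict)
    for (param, value), (total, n) in sums.items():
        per_setting[param][value] = total / n
    return {param: max(means.values()) - min(means.values())
            for param, means in per_setting.items()}

trials = [({"p": 0.05, "shc": 0.1}, 1.0), ({"p": 0.30, "shc": 0.1}, 2.0),
          ({"p": 0.05, "shc": 0.5}, 1.2), ({"p": 0.30, "shc": 0.5}, 2.1)]
# p averages 1.1 vs 2.05 (range 0.95), shc averages 1.5 vs 1.65 (range 0.15),
# so p would be the dominant factor in this toy grid
```

A larger range means the parameter moves the average outcome more, which is exactly the sense in which p dominates the static ELD1 analysis.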
For the dynamic economic dispatch problem (ELD1), sensitivity is dominated by p and α, with main-effect ranges of ≈0.311 and ≈0.295, whereas rtr_k, sh_c, and stagnation_trigger play a much smaller role. See Table 5 for the exact main-effect ranges and the per-setting summaries. Unlike the static case, increasing p from 0.05 to 0.30 consistently raises the mean best (peaking at 0.30), indicating that this problem benefits from stronger exploitation: the intertemporal coupling and ramp constraints reward elite-guided moves and reduce the cost of exploratory drift. Higher α values (2.0–2.5) further improve the mean best, suggesting that a more permissive tail threshold is advantageous: over-aggressive trimming of outliers can strip away useful diversity needed for smooth cross-period transitions. Changes in sh_c are tiny, with a slight edge at 0.7, implying that a heavier memory stabilizes F/CR adaptation without suppressing peaks; rtr_k is marginally better around k = 3, with negligible differences across the tested band; and stagnation_trigger is essentially neutral, with a mild optimum near 30 generations. Overall, the effect ordering is p ≈ α ≫ rtr_k ≈ sh_c > stagnation_trigger, and the configuration profile emerging for the dynamic case favors a higher p (≈0.30), elevated α (≈2.0–2.5), slightly slower sh_c (≈0.7), small rtr_k (≈3), and stagnation_trigger around 30: choices that strengthen guided exploitation while avoiding over-sterilization of diversity across time-coupled periods. The corresponding graphical summary is given in Figure 8.
For Lennard-Jones (13 atoms, 43D), the dominant driver of mean best is sh_c (main-effect range ≈ 0.991): heavier success-history memory around 0.7 yields more negative (better) means, improving from −39.33 at 0.1 to −40.32 at 0.7. See Table 6 for the exact main-effect ranges and the per-setting summaries. This indicates that the highly multimodal, deceptive LJ-13 landscape benefits from slower, stabilizing adaptation of F/CR, which dampens oscillations and sustains coherent progress into deeper energy wells. p shows a moderate effect (≈0.202) with a clear preference for low values: increasing it from 0.05 to 0.30 degrades the mean from −40.02 to −39.82, so restrained exploitation and preserved exploration are essential to avoid alignment around shallow basins. α exhibits a modest, favorable trend (≈0.195) as it rises toward 2.0–2.5; a more permissive outlier threshold makes quarantine gentler, which here helps retain useful diversity in challenging basins. rtr_k and stagnation_trigger have small effects (≈0.040 and ≈0.014): performance is marginally better near k ≈ 10 and trigger ≈ 50, but differences are minor within the tested band. Overall, LJ-13 favors sh_c ≈ 0.7, p ≈ 0.05, and α ≈ 2.0–2.5, with rtr_k mid-range and a slightly larger stagnation_trigger: a configuration that promotes steady, directed exploration without overly sterilizing the tails. Notably, the minimum consistently reaches about −44.33 across settings, suggesting that global-level depths are sporadically discovered regardless, while the recommended choices compress the best–mean gap and systematically improve average quality. See Figure 9 for the main-effect curves.
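For readers unfamiliar with the benchmark, the Lennard-Jones cluster objective in reduced units (ε = σ = 1) can be written compactly; the paper's 43D encoding for 13 atoms presumably includes coordinate scaling that we do not reproduce here, so this is the plain Cartesian form of the objective family.

```python
# Lennard-Jones cluster energy in reduced units: sum over atom pairs of
# 4 * (r^-12 - r^-6). The LJ-13 global minimum lies near -44.33, matching
# the best values reported in the text.
def lj_energy(coords):
    """coords: flat list [x0, y0, z0, x1, y1, z1, ...] of atom positions."""
    n = len(coords) // 3
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = coords[3 * i]     - coords[3 * j]
            dy = coords[3 * i + 1] - coords[3 * j + 1]
            dz = coords[3 * i + 2] - coords[3 * j + 2]
            r2 = dx * dx + dy * dy + dz * dz
            inv6 = 1.0 / (r2 * r2 * r2)      # (1/r)^6
            e += 4.0 * (inv6 * inv6 - inv6)  # pair term 4(r^-12 - r^-6)
    return e

# two atoms at the pair-optimal distance 2^(1/6) give energy exactly -1
dimer = [0.0, 0.0, 0.0, 2.0 ** (1.0 / 6.0), 0.0, 0.0]
```

The dense web of local minima in such pair-sum landscapes is what makes the slow, stabilizing sh_c adaptation valuable here.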

3.4. Analysis of Complexity of ARQ

Below, we present three real-world problems with brief descriptions: shell-and-tube heat-exchanger design, a Brayton-type gas-cycle surrogate, and the TANDEM multi-gravity-assist space-trajectory surrogate. In the experiments, we varied only the problem dimension while keeping all modeling and algorithmic settings fixed, so that the observed effects reflect purely the impact of dimensionality. This protocol lets us examine how the proposed optimization method scales as the number of decision variables grows, without confounding changes to constraints, penalties, or hyperparameters. For these measurements, the termination rule was 500 iterations, which proved sufficient to reach the optimal value, so no additional iterations were required.

3.4.1. Shell-And-Tube Heat Exchanger (Surrogate) [65,66,67]

Vars: $x = [D_s, D_t, L, p_r, B, c_b]$. Bounds: $D_s \in [0.30, 1.50]$ m, $D_t \in [0.010, 0.050]$ m, $L \in [1, 6]$ m, $p_r \in [1.25, 2.00]$, $B \in [0.10, 0.60]$ m, $c_b \in [0.15, 0.45]$.
$$\min_x J(x) = C_{\mathrm{area}} A + w_{\mathrm{pump}} P_{\mathrm{pump}} t_y + \Pi_{UA} + \Pi_{\mathrm{soft}}.$$
Heat transfer: $A = \pi D_t L N_t \kappa_f$, $U = \left(1/h_i + R_w + 1/h_o + R_f\right)^{-1}$, $UA = U \cdot A$.
$h_i = a_t v_t^{0.8} D_i^{-0.2}$, $h_o = a_s v_s^{0.8} D_s^{-0.2}$, $R_w = t_w / k_{\mathrm{wall}}$.
Requirement/penalties: $UA_{\mathrm{req}} = Q_{\mathrm{req}} / \mathrm{LMTD}$, $\Pi_{UA} = \lambda_h \left( [UA_{\mathrm{req}} - UA]_+ / UA_{\mathrm{req}} \right)^2$, $\Pi_{\mathrm{soft}} = \lambda_s \left( [v_t - 3]_+^2 + 0.5\,[v_s - 5]_+^2 + [0.2 D_s - B]_+^2 \right)$.
Hydraulics: $P_{\mathrm{pump}} = \left( \Delta P_{\mathrm{tube}} \dot{V}_c + \Delta P_{\mathrm{shell}} \dot{V}_h \right) / \eta_{\mathrm{pump}}$, $\Delta P_{\mathrm{tube}} = k_t L v_t^2 / D_i$, $\Delta P_{\mathrm{shell}} = k_s (D_s / B)\, v_s^2$.
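The hinge operator $[z]_+ = \max(z, 0)$ that appears in the penalty terms can be made concrete with a short sketch of $\Pi_{\mathrm{soft}}$. The coefficient values (lambda_s and the limits 3, 5, 0.2) follow the formulas above, but treat the exact calibration as a surrogate assumption rather than the paper's tuned constants.

```python
# Hedged sketch of the surrogate's soft-penalty structure: quadratic hinge
# penalties on tube velocity, shell velocity, and baffle spacing.
def hinge(z):
    """[z]_+ = max(z, 0): zero when feasible, linear in the violation."""
    return max(z, 0.0)

def pi_soft(v_t, v_s, D_s, B, lam_s=1.0):
    # soft limits: v_t <= 3 m/s, v_s <= 5 m/s, and B >= 0.2 * D_s
    return lam_s * (hinge(v_t - 3.0) ** 2
                    + 0.5 * hinge(v_s - 5.0) ** 2
                    + hinge(0.2 * D_s - B) ** 2)

# a feasible design incurs zero penalty; violations grow quadratically,
# which keeps the objective smooth for the optimizer near the boundary
```

Quadratic hinges keep the penalty differentiable at the constraint boundary, which is why they are a common choice in black-box surrogates of this kind.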

3.4.2. Gas Cycle (Brayton-like Surrogate) [68]

Vars: $x = [T_1, T_3, P_1, P_3]$. Bounds: $T_1 \in [300, 1500]$ K, $T_3 \in [1200, 2000]$ K, $P_1, P_3 \in [1, 20]$ bar.
$$\min_x f(x) = -\eta(x), \qquad \eta(x) = 1 - \frac{T_1}{T_3}\, r^{(\gamma - 1)/\gamma}, \qquad r = \frac{P_3}{P_1}, \qquad \gamma = 1.4.$$
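As a worked sketch, one plausible reading of this surrogate (matching the regenerative-Brayton efficiency form, with minus signs that the rendered equation leaves ambiguous) can be coded directly; treat the exact functional form as an assumption.

```python
# One plausible reading of the Brayton-like surrogate: eta depends on the
# temperature ratio T1/T3 and the pressure ratio r = P3/P1, and the solver
# minimizes f = -eta (i.e., maximizes thermal efficiency).
def gas_cycle_objective(T1, T3, P1, P3, gamma=1.4):
    r = P3 / P1
    eta = 1.0 - (T1 / T3) * r ** ((gamma - 1.0) / gamma)
    return -eta  # negated so that lower objective = higher efficiency

# at r = 1 this reduces to eta = 1 - T1/T3, the Carnot-like limit of the form
```

With only four variables, this problem isolates scaling overhead well: the landscape is smooth, so runtime differences reflect evaluation cost rather than search difficulty.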

3.4.3. Space Trajectory: TANDEM (MGA-1DSM Surrogate) [69,70,71]

Vars (D = 18): $x = [t_0, T_1, T_2, T_3, T_4, T_{5A}, T_{5B}, s_1, s_2, s_3, s_4, s_{5A}, s_{5B}, r_p, k_{A1}, k_{A2}, k_{B1}, k_{B2}]$. Bounds: $t_0 \in [7000, 10000]$ (MJD2000 d), $T_1 \in [30, 500]$, $T_2 \in [30, 600]$, $T_3 \in [30, 1200]$, $T_4 \in [30, 1600]$, $T_{5A}, T_{5B} \in [30, 2000]$ d, $s_\cdot, r_p, k_\cdot \in [0, 1]$.
$$\min_x \Delta V_{\mathrm{tot}} = \Delta V_{\mathrm{launch}}(T_1) + \sum_{i=1}^{4} \Delta V_{\mathrm{leg}}(T_i) + \Delta V_{\mathrm{branch}}(T_{5A}, s_{5A}, k_{A1}, k_{A2}) + \Delta V_{\mathrm{branch}}(T_{5B}, s_{5B}, k_{B1}, k_{B2}) + \sum_{i=1}^{4} \Delta V_{\mathrm{DSM}}(s_i) - G_{\mathrm{GA}}(T_1, T_2, T_3) - G_J(T_4) + \Pi_{\mathrm{ToF}} + \Pi_{\mathrm{bounds}}.$$
Compact component forms (surrogate): $\Delta V_{\mathrm{leg}}(T) = s / \left(1 + 2\, \mathrm{clip}(T / t_r, 0.2, 4)\right)$, $\Delta V_{\mathrm{DSM}}(s) = c_{\mathrm{DSM}} (0.25 + 0.75 s)$; $\Delta V_{\mathrm{branch}}(\cdot)$ adds DSM shaping terms with $r_p, k_\cdot$; $G_{\mathrm{GA}}, G_J$ are decreasing functions of leg times; $\Pi_{\mathrm{ToF}} = \beta \left[ \left( T_1 + T_2 + T_3 + T_4 + \tfrac{1}{2}(T_{5A} + T_{5B}) \right) - T_{\mathrm{soft}} \right]_+$.
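Two of the building blocks named above, the clip function and the time-of-flight hinge penalty, are easy to sketch. The values of beta and T_soft below are placeholders, not the surrogate's calibrated constants.

```python
# Sketch of two components of the TANDEM surrogate: clip(.) bounds a ratio,
# and Pi_ToF penalizes only flight time exceeding a soft limit T_soft.
def clip(x, lo, hi):
    """Clamp x into [lo, hi]."""
    return max(lo, min(hi, x))

def pi_tof(T1, T2, T3, T4, T5A, T5B, beta=1.0, T_soft=3500.0):
    # total flight time weights the two branch legs by one half each
    total = T1 + T2 + T3 + T4 + 0.5 * (T5A + T5B)
    return beta * max(total - T_soft, 0.0)  # hinge: only over-budget time costs

# short missions incur zero penalty; the hinge activates smoothly past T_soft
```

The hinge keeps short trajectories unpenalized, so the optimizer trades transfer time against delta-V only once the soft limit is approached.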
Across the three problems, runtime grows smoothly with dimension from 40 to 400, and the trend is very close to linear (Figure 10). The heat exchanger rises from 1.838 s at 40D to 12.878 s at 400D, the gas cycle from 1.686 s to 11.742 s, and TANDEM from 2.268 s to 13.116 s. Doubling the dimensionality typically increases runtime by roughly a factor of 1.7–1.9, indicating linear growth with a small fixed overhead at low dimensions. The gas cycle is consistently the fastest, the heat exchanger sits in the middle, and TANDEM is the slowest, largely due to a higher constant setup cost per evaluation even though its growth rate with dimension is similar to the others. Averaged over the grid, each additional 40 variables adds about 1.1–1.3 s, a rate that remains stable across the range. Overall, the evidence points to approximately linear time complexity in dimension for the proposed method, with differences between problems attributable mainly to per-evaluation overhead rather than to divergent asymptotic behavior.
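The per-40-variable rate quoted above can be checked directly from the reported endpoint timings:

```python
# Quick arithmetic check of the near-linear scaling claim using the two
# reported endpoints per problem (seconds at 40D and at 400D, from the text).
def slope_per_40_vars(t40, t400):
    """Average added seconds per extra 40 variables between 40D and 400D."""
    steps = (400 - 40) / 40  # nine 40-variable increments
    return (t400 - t40) / steps

timings = {
    "heat_exchanger": (1.838, 12.878),
    "gas_cycle":      (1.686, 11.742),
    "tandem":         (2.268, 13.116),
}
# each problem comes out at roughly 1.1-1.3 s per 40 added variables,
# consistent with the linear-complexity reading in the text
```

Since all three slopes fall in the same narrow band, the between-problem differences indeed come from the intercept (per-evaluation overhead) rather than the slope.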

3.5. Comparative Performance Analysis of ARQ

Our comparator suite (JSO, TRIDENT-DE, UDE3, EA4Eig, mLSHADE_RL, SaDE, CMA-ES, jDE, and CLPSO) is a deliberate, sharp, and diverse set that enables fair and generalizable assessment under a common protocol with uniform end-of-budget best-so-far and mean metrics. It blends state-of-the-art DE variants with canonical non-DE references, prioritizing excellence and representativeness (strong CEC and real-world records), so the proposed method is tested against genuinely formidable opponents.
The suite spans the mechanisms that govern the best vs. mean trade-off: aggressive exploitation (pbest/1/bin, current-to-best/1/bin, adaptive F/CR), diversity preservation (self-adaptation and mutation ensembles), stagnation handling (restarts and re-initialization), and learning-driven exploration (policy selection). JSO and mLSHADE_RL serve as state-of-the-art DE with robust adaptation/learning and consistent means; TRIDENT-DE is a practitioner-grade baseline known for strong best-so-far; UDE3 probes robustness via ensemble strategies; EA4Eig leverages eigenspace/second-order cues for ill-conditioned landscapes; SaDE tests whether endogenous strategy learning suffices; CMA-ES is the non-DE gold standard, learning local geometry via covariance; jDE provides a lightweight self-adaptation lower bound; and CLPSO adds cooperative PSO strength on multimodal/noisy tasks. Together, this mix stresses the proposed method across peak best-so-far, stable mean, and overall robustness, making any observed gains substantive and credible.
Table 7 provides a CEC-style descriptive suite-level summary (best/mean). Statistical tests and significance markers are reported per problem in Table 8, Table 9 and Table 10.
Table 7 provides the end-of-budget snapshot, reporting for each real-world task the best-so-far alongside the mean over multiple independent runs. This dual perspective separates peak attainment from run-to-run consistency, revealing not only who reaches high but also who does so reliably. Within this lens, ARQ exhibits the profile it was designed for: strong best-so-far on rugged or noisy landscapes without sacrificing mean stability. The pattern is mechanistically grounded rather than accidental. Success-history parameter adaptation quickly locks F/CR into profitable ranges, curbing wasteful oscillations that typically inflate variance. Neighborhood-aware RTR replacement focuses selection where it matters while preserving population geometry and diversity, thus avoiding brittle premature convergence. The quarantine mechanism, paired with targeted micro-restarts around the incumbent, trims toxic tails by gently pulling a small fraction of the worst individuals toward a robust center and accepting repairs only when they yield genuine improvement. The net effect is a tighter best–mean gap: high peaks without bleeding the average.
Reading across problem families reinforces the same message. On geometry-sensitive or ill-conditioned settings, where eigenspace-aware or covariance-adapting methods traditionally shine, ARQ maintains a competitive mean precisely because tail control suppresses rare catastrophic runs that would otherwise drag the average down. On energy and network planning tasks, micro-restarts provide inexpensive, targeted escapes from stagnation that accumulate into steady gains in both best and mean. On classic multimodal benchmarks, the combination of pbest/1/bin with an archive and success-history keeps exploration directed late into the budget, which shows up as robust best-so-far without a collapse in mean.
The deltas between best and mean act as a health indicator of the performance distribution. Where ARQ ties or narrowly trails specialized competitors in best, it often compensates with a superior mean, signaling run-level resilience. Where it attains the top best, the mean remains stable, indicating not a solitary spike but a cohesive cloud of good solutions around it. Because all solvers run under a harmonized protocol, identical budgets, and multiple independent trials, this advantage cannot be dismissed as a tuning artifact. Table 7 therefore suggests that ARQ achieves the hard balance between peak best-so-far and preserved mean through a cohesive triad of mechanisms: gain-weighted parameter learning, neighborhood-sensitive replacement that protects geometry and diversity, and tail control via quarantine and micro-restarts. This sets the stage for the rankings that follow, in which superior peaks are not purchased at the expense of stability.
Table 8 converts the raw outcomes of Table 7 into per-problem ranks and aggregate indicators, making it explicit whether a solver wins consistently or only sporadically. The pattern for ARQ is a high density of top 1/2 finishes across a broad portion of the suite, accompanied by a low average rank and a tight spread. This indicates breadth and stability rather than a handful of outlier peaks.
The juxtaposition of best-rank and mean-rank indicates a balanced profile. When ARQ achieves a top end-of-budget best-so-far, it does not incur a collapse in mean-rank, this suggests that tail control and micro-restarts keep the population concentrated around the most promising basin. Conversely, in cases where highly specialized competitors hold a slight edge in best-rank, ARQ often compensates with a superior mean-rank, evidencing run-level resilience and an ability to avoid degenerative trajectories.
The geography of ranks across problem families reinforces this reading. On ill-conditioned or geometry-sensitive tasks, where methods that learn search-space geometry traditionally excel, ARQ remains near the front with limited variance, consistent with neighborhood-aware replacement preserving structure and diversity. On multimodal landscapes, where occasional spectacular hits can inflate best-rank while harming mean behavior, Table 8 shows a small gap between ARQ’s two ranking columns, in line with directed exploration sustained late into the budget. On energy and network design applications, micro-restarts around the incumbent accumulate incremental gains that lower average rank and increase the frequency of top-2 placements.
Head-to-head comparisons against recognized state-of-the-art references highlight that ARQ competes not only within the DE family but also against non-DE geometry-learning approaches, reducing the likelihood that its advantage is a tuning artifact. The overall rank and the accompanying aggregates distill this behavior into a single index, yielding clear evidence of generalizability: ARQ does not win only here and there, but stays consistently near the top across heterogeneous classes of problems.
In Figure 11, we report one-sided paired Wilcoxon signed-rank tests (hypothesis ARQ < Other) across all datasets, with Benjamini–Hochberg (FDR) adjustment. ARQ achieves lower ranks on average and is significantly better than most competitors after FDR correction: specifically, ARQ vs. UDE3, mLSHADE_RL, CMA-ES, EA4Eig, SaDE, jDE, and CLPSO are significant, whereas ARQ vs. JSO and ARQ vs. TRIDENT-DE remain non-significant. Overall, ARQ shows consistent improvements in rank relative to most alternatives, while being statistically indistinguishable from the two strongest baselines (JSO and TRIDENT-DE). Consistent with the per-problem best outcomes, Figure 12 presents the aggregate mean ranking, condensing the run averages, and Figure 13 highlights ARQ's frequent top-tier placements.
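The Benjamini–Hochberg step-up adjustment used for these comparisons is compact enough to state in full; the p-values below are illustrative, not the paper's results.

```python
# Minimal Benjamini-Hochberg (FDR) adjustment, as applied to the Wilcoxon
# p-values of Figure 11. Step-up: sort p-values, scale p_(k) by m/k, then
# enforce monotonicity from the largest rank downward.
def bh_adjust(pvals):
    """Return BH-adjusted p-values in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):          # walk from largest p downward
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev                # running minimum keeps monotonicity
    return adjusted

# a comparison is declared significant when its adjusted p falls below alpha
adj = bh_adjust([0.01, 0.04, 0.03, 0.50])
```

Each raw p-value comes from one paired ARQ-vs-competitor test over the problem suite; the adjustment controls the expected proportion of false positives among the declared wins.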
Table 9 acts as a distillation of the evidence: per-problem ranks are aggregated into global indicators that capture overall behavior. ARQ’s position on these summary measures reflects a solver that excels not by isolated flashes, but by maintaining persistently strong showings across the board. Totals and average ranks remain low, with a tight dispersion around the central tendency, in practice, this means frequent proximity to the top with limited variability from task to task.
The alignment between summary metrics derived from best and from mean confirms the balance observed previously. When ARQ signals strength at the end-of-budget best-so-far, the mean-based summary does not erode, indicating that the cloud of solutions is cohesive rather than driven by rare spikes. Even where narrowly specialized competitors gain a local edge, the cumulative ranking over the whole suite shifts the advantage back to ARQ, because small, steady improvements add up.
What emerges from Table 9 is not merely many first places, but a combination of high top-2 frequency with a scarcity of poor outcomes. ARQ exhibits a compact performance profile: the tails of weak runs are curtailed while strong results appear repeatedly, not accidentally. This distributional shape explains why the global indicators favor the method. In practical terms, Table 9 certifies that the proposed approach preserves a competitive margin as problem geometry, conditioning, and multimodality vary. Consistency across these shifts is precisely what converts scattered victories into an overall lead.
In Figure 14, ARQ secures a clearly lower average rank and leaves most rivals behind even after multiple-comparison control. The strongest pairwise gains are against UDE3, mLSHADE_RL, CMA-ES, EA4Eig, SaDE, jDE, and CLPSO, whereas its gaps to JSO and TRIDENT-DE do not rise to statistical significance. In short, ARQ consistently ranks better than the bulk of competing methods while being on par with the two top baselines (JSO, TRIDENT-DE). Figure 12 and Figure 13 condense this performance into a single comparative view.
Table 10 consolidates the picture established earlier: ARQ attains the lowest total rank on both best and mean (49 and 28, respectively), for a combined sum of 77, corresponding to an average rank of 2.139. The next best competitor sits noticeably higher (e.g., JSO: 61 + 52 = 113), and others trail further behind, indicating that ARQ's advantage stems from consistent top placements across the suite rather than isolated spikes. This result rests on the harmonized experimental protocol and the carefully matched competitor settings, which lends credibility to the comparison.
The pattern strong totals in best without erosion in mean is consistent with ARQ’s design. Success-history parameter adaptation stabilizes profitable ranges for F and CR early on, neighborhood-aware RTR preserves structure and diversity via local replacement, and the outlier-quarantine mechanism with targeted micro-restarts trims tail failures. Together, these components compress the gap between peak and average performance, a behavior visible in Table 7 and now distilled by the aggregates in Table 10. In practical terms, ARQ achieves high end-of-budget best-so-far without paying for it with a degraded mean, and this translates directly into low overall ranks.
In Figure 15, we compare ten algorithms over 36 problems using average ranks and assess pairwise differences with the Nemenyi test ( α = 0.05 ). The pattern is clear: ARQ attains the best mean rank and is significantly better than UDE3 ( p 0.0096 ), mLSHADE_RL ( p 0.0039 ), CMA-ES ( p 3.4 × 10 5 ), EA4Eig ( p 1.2 × 10 7 ), SaDE ( p 1.7 × 10 8 ), jDE ( p 2.8 × 10 9 ) and CLPSO ( p 1.0 × 10 13 ), while its differences with JSO (p ≈ 0.93) and TRIDENT-DE ( p 0.87 ) are not significant. JSO and TRIDENT-DE form the next tier: each is significantly better than CMA-ES ( p 0.015 and p 4.6 × 10 4 , respectively) and markedly better than EA4Eig, SaDE, jDE and CLPSO ( p 10 3 ), but neither differs from ARQ, and they do not differ from each other. The mid-ranked group (UDE3, mLSHADE_RL, CMA-ES) shows mostly non-significant pairwise differences, except that CMA-ES is worse than JSO and TRIDENT-DE, as noted above. CLPSO is the worst on average and is significantly worse than ARQ, JSO, TRIDENT-DE, UDE3, mLSHADE_RL, CMA-ES and EA4Eig ( p 0.03 ), but its gaps to SaDE and jDE are not significant at α = 0.05 ( p 0.073 and p 0.15 , respectively). Overall, the global ranking differences are strong, with ARQ leading; JSO and TRIDENT-DE are competitive with ARQ, whereas the remaining methods, particularly CMA-ES, EA4Eig, SaDE, jDE and CLPSO, trail with multiple significant deficits. Figure 16 reports the combined (best + mean) ranking as a single bar chart, offering a holistic view that aligns with the consolidated statistics in Table 10.
An additional point is breadth. The per-problem rankings show competitiveness on geometry-sensitive and ill-conditioned landscapes as well as on multimodal and application-driven tasks in energy and network planning, where incremental gains must accumulate reliably. That ARQ remains first overall despite strong specialists (CMA-ES in highly correlated settings, JSO/mLSHADE_RL for mean stability) suggests that its mechanisms address the common failure modes of stochastic optimization: premature convergence, accumulation of weak individuals in the tails, and coarse-grained restarts. Given the aligned configurations and transparent implementation, the evidence supports a substantive rather than accidental advantage.

4. Conclusions

This study demonstrates that ARQ achieves the intended balance between strong end-of-budget best-so-far and preserved mean performance under a strictly harmonized protocol and a demanding comparator suite. Population size, evaluation budget, and opponent configurations are aligned to minimize confounders and ensure reproducibility.
The comparator set spans state-of-the-art DE variants and canonical non-DE references with complementary strengths (self-adaptation, policy learning, eigenspace/second-order cues, and swarm strategies), so conclusions are not confined to a single operator family.
In terms of metrics, Table 10 consolidates ARQ’s advantage: it attains the lowest total rank on best and mean (49 and 28, sum 77, average rank 2.139), with the next best solver noticeably higher, indicating breadth rather than isolated spikes.
Table 9 clarifies the distributional shape behind this outcome: a compact profile with frequent top placements and few poor runs. Table 8 shows that peaks do not come at the expense of stability: when ARQ narrowly trails in best, it typically compensates with a superior mean.
Mechanistically, the behavior is consistent with ARQ's cohesive design. Success-history adaptation quickly locks F/CR into profitable ranges; neighborhood-aware RTR preserves geometry and diversity while channeling selection pressure locally; and outlier quarantine, together with targeted micro-restarts, trims tail failures and provides controlled recovery when progress stalls. The net effect is a persistently small best–mean gap that appears at the per-problem level and crystallizes in the aggregate ranks.
In the present study we targeted α, rtr_k, sh_c, p, and stagnation_trigger because they map to ARQ's four core mechanisms (TrialGeneration, ParameterPolicy, SelectionRTR, and PopulationMaintenance with micro-restart) and they exhibit material, measured main effects in Table 4, Table 5 and Table 6 and Figure 7, Figure 8 and Figure 9, thereby covering the decisive levers for mean and best performance as well as stability. In a forthcoming release of ARQ, we will also include principled techniques for parameter setting, such as fractional-factorial screening and response-surface tuning, together with automated controllers such as success-history adaptation and auto-tuning of the restart trigger, so that recommended defaults emerge from data rather than manual selection.
The contribution thus lies not in a single trick but in a principled combination of mature ideas that interact constructively, explaining why ARQ competes strongly against both DE and non-DE geometry-learning methods on ill-conditioned or correlated landscapes.
Validity is reinforced by three factors. First, strict resource alignment reduces the chance of artificial gains.
Second, the heterogeneous set of real-world tasks from physico-chemical potentials and code design to energy/network planning and interplanetary trajectories supports generalization.
Third, the explicit exposition of subroutines and controls makes the link between design choices and observed behavior transparent.
There are limitations. Results are reported for a 1.5 × 10^5 evaluation budget and a specific population size. While the parameter-sensitivity study follows a structured methodology, a deeper map of the interactions among α, ρ, and w, together with the micro-restart trigger, would clarify edge-case trade-offs.
Moreover, although the benchmark suite is diverse, scenarios with time-varying noise or large-scale mixed-integer constraints remain under-explored.
These observations motivate several research avenues. Adaptive control of quarantine intensity via data-driven tuning of θ = Q3 + α × IQR and the repair fraction ρ, informed by tail statistics and diversity indices, could balance tail-cutting with exploration more finely.
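The quarantine threshold θ = Q3 + α × IQR can be sketched directly; the quartile convention used here (index-based quartiles over the sorted fitnesses, assuming a minimization objective) is an implementation assumption, not necessarily ARQ's exact choice.

```python
# Sketch of the tail-quarantine rule: individuals whose fitness exceeds
# theta = Q3 + alpha * IQR are flagged as outliers for repair. Quartiles are
# taken by index over the sorted values (an assumed convention).
def quarantine_mask(fitnesses, alpha=1.5):
    """Return a boolean list marking individuals beyond the tail threshold."""
    s = sorted(fitnesses)          # minimization: larger fitness = worse
    n = len(s)
    q1 = s[n // 4]
    q3 = s[(3 * n) // 4]
    theta = q3 + alpha * (q3 - q1)  # robust tail cutoff
    return [f > theta for f in fitnesses]

# one extreme value among mild ones is flagged; raising alpha (as in the
# dynamic-dispatch and LJ-13 analyses) makes the rule more permissive
vals = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 50.0]
```

Tuning α online, as suggested above, amounts to moving this cutoff in response to tail statistics rather than fixing it a priori.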
Policy-learning to orchestrate the handover between RTR replacement and restart regimes may further compress the best–mean gap on stagnation-prone landscapes. Geometry-aware differences and archive schemes that periodically estimate principal directions could improve progress on highly correlated objectives without sacrificing DE’s simplicity. Asynchronous mini-batch updates and population-fraction scheduling deserve attention for better hardware utilization. Finally, coupling ARQ with active resampling on noisy objectives, and profiling per-subroutine complexity relative to f-call costs, would sharpen guidance for tight-budget or expensive-evaluation settings.
What distinguishes ARQ. The tail-quarantine loop (with α for a robust threshold and ρ for controlled repair intensity), coupled with the robust center and micro-restarts, acts as a diversity-stabilizing mechanism that cooperates with success-history parameter adaptation. In practice, this delivers consistent mean performance together with strong best results, avoiding the large volatility often observed with adaptation-only DE settings.
Overall, the evidence supports a clear conclusion: a cohesive blend of parameter self-adaptation, neighborhood-aware selection, tail control, and targeted regeneration can deliver top-tier best-so-far while keeping the mean stable. Given resource alignment and implementation transparency, ARQ’s advantage appears substantive and practically relevant, providing a dependable optimization tool for heterogeneous, challenging real-world landscapes.

Author Contributions

Conceptualization, I.G.T.; Software, V.C.; Validation, A.M.G. and D.T.; Visualization, A.M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been financed by the European Union: Next Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH-CREATE-INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tapkin, A. A Comprehensive Overview of Gradient Descent and its Optimization Algorithms. Int. Adv. Res. J. Sci. Eng. Technol. 2023, 10, 37–45. [Google Scholar] [CrossRef]
  2. Cawade, S.; Kudtarkar, A.; Sawant, S.; Wadekar, H. The Newton-Raphson Method: A Detailed Analysis. Int. J. Res. Appl. Sci. Eng. (IJRASET) 2024, 12, 729–734. [Google Scholar] [CrossRef]
  3. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  4. Bonate, P.L. A Brief Introduction to Monte Carlo Simulation. Clin. Pharmacokinet. 2001, 40, 15–22. [Google Scholar] [CrossRef] [PubMed]
  5. Eglese, R.W. Simulated annealing: A tool for operational research. Eur. J. Oper. Res. 1990, 46, 271–281. [Google Scholar] [CrossRef]
  6. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  7. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  8. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  9. Charilogis, V.; Tsoulos, I.G. Toward an Ideal Particle Swarm Optimizer for Multidimensional Functions. Information 2022, 13, 217. [Google Scholar] [CrossRef]
  10. Charilogis, V.; Tsoulos, I.G.; Tzallas, A. An Improved Parallel Particle Swarm Optimization. SN Comput. Sci. 2023, 4, 766. [Google Scholar] [CrossRef]
  11. Dorigo, M.; Di Caro, G. Ant Colony Optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; Volume 2, pp. 1470–1477. [Google Scholar] [CrossRef]
  12. Karaboga, D. An idea based on honey bee swarm for numerical optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2005, 39, 459–471. [Google Scholar] [CrossRef]
  13. Kyrou, G.; Charilogis, V.; Tsoulos, I.G. Improving the Giant-Armadillo Optimization Method. Analytics 2024, 3, 225–240. [Google Scholar] [CrossRef]
  14. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  15. Tanabe, R.; Fukunaga, A. Success-History Based Parameter Adaptation for Differential Evolution (SHADE). In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancún, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar] [CrossRef]
  16. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction (L-SHADE). In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar] [CrossRef]
  17. Brest, J.; Maučec, M.S.; Boskovic, B. jSO: An advanced differential evolution algorithm using success-history and linear population size reduction. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia/San Sebastián, Spain, 5–8 June 2017; pp. 1995–2002. [Google Scholar] [CrossRef]
  18. Cao, Y.; Luan, J. A novel differential evolution algorithm with multi-population and elites regeneration. PLoS ONE 2024, 19, e0302207. [Google Scholar] [CrossRef]
  19. Sun, Y.; Wu, Y.; Liu, Z. An improved differential evolution with adaptive population allocation and mutation selection. Expert Syst. Appl. 2024, 258, 125130. [Google Scholar] [CrossRef]
  20. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 2001, 9, 159–195. [Google Scholar] [CrossRef]
  21. Bujok, P.; Kolenovský, P. Eigen crossover in cooperative model of evolutionary algorithms applied to CEC 2022 single objective numerical optimisation. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; pp. 1–8. [Google Scholar] [CrossRef]
  22. Charilogis, V.; Tsoulos, I.G. Parallel Implementation of the Differential Evolution Method. Analytics 2023, 2, 17–30. [Google Scholar] [CrossRef]
  23. Charilogis, V.; Tsoulos, I.G.; Tzallas, A.; Karvounis, E. Modifications for the Differential Evolution Algorithm. Symmetry 2022, 14, 447. [Google Scholar] [CrossRef]
  24. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Song, Y.; Xu, J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  25. Chauhan, D.; Trivedi, A.; Shivani. A Multi-operator Ensemble LSHADE with Restart and Local Search Mechanisms for Single-objective Optimization. arXiv 2024, arXiv:2409.15994. [Google Scholar] [CrossRef]
  26. Deng, W.; Shang, S.; Zhang, L.; Lin, Y.; Huang, C.; Zhao, H.; Ran, X.; Zhou, X.; Chen, H. Multi-strategy quantum differential evolution algorithm with cooperative co-evolution and hybrid search for capacitated vehicle routing. IEEE Trans. Intell. Transp. Syst. 2025, 26, 18460–18470. [Google Scholar] [CrossRef]
  27. Huber, P.J.; Ronchetti, E.M. Robust Statistics, 2nd ed.; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar] [CrossRef]
  28. Guo, D.; Zhang, S.; Yang, B.; Lin, Y.; Li, J. Exploring contextual knowledge-enhanced speech recognition in air traffic control communication: A comparative study. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 16085–16099. [Google Scholar] [CrossRef] [PubMed]
  29. Guo, D.; Zhang, S.; Lin, Y. Multi-modal intelligent situation awareness in real-time air traffic control: Control intent understanding and flight trajectory prediction. Chin. J. Aeronaut. 2024, 37, 103376. [Google Scholar] [CrossRef]
  30. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  31. Lima, C.F.; Lobo, F.G.; Goldberg, D.E. Investigating restricted tournament replacement in ECGA for stationary and non-stationary optimization. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO ’08), Atlanta, GA, USA, 12–16 July 2008; pp. 1–8. [Google Scholar] [CrossRef]
  32. Deng, W.; Li, X.; Zhao, H.; Xu, J. PSO-K-means clustering-based NSGA-III for delay recovery. IEEE Trans. Consum. Electron. 2025; Advance online publication. [Google Scholar] [CrossRef]
  33. Zhao, H.; Deng, W.; Lin, Y. Joint optimization scheduling using AHMQDE-ACO for key resources in smart operations. IEEE Trans. Consum. Electron. 2025; Advance online publication. [Google Scholar] [CrossRef]
  34. Shen, Y.; Zhang, N.; Wang, Z.; Wang, X. Dual-performance multi-subpopulation adaptive restart differential evolution (DPR-MGDE). Symmetry 2025, 17, 223. [Google Scholar] [CrossRef]
  35. Horita, H. Optimizing runtime business processes with fair workload distribution. J. Compr. Bus. Adm. Res. 2025, 2, 162–173. [Google Scholar] [CrossRef]
  36. Sun, J.; Wang, Z.; Qiao, Z.; Li, X. Dynamic pricing model for e-commerce products based on DDQN. J. Compr. Bus. Adm. Res. 2024, 1, 171–178. [Google Scholar] [CrossRef]
  37. Deng, W.; Li, K.; Zhao, H. A flight arrival time prediction method based on cluster clustering-based modular with deep neural network. IEEE Trans. Intell. Transp. Syst. 2024, 25, 6238–6247. [Google Scholar] [CrossRef]
  38. Li, X.; Zhao, H.; Xu, J.; Deng, W. APDPFL: Anti-poisoning attack decentralized privacy-enhanced federated learning scheme for flight operation data sharing. IEEE Trans. Wirel. Commun. 2024, 23, 19098–19109. [Google Scholar] [CrossRef]
  39. Ran, X.; Suyaroj, N.; Tepsan, W.; Lei, M.; Ma, H.; Zhou, X.; Deng, W. A novel fuzzy system-based genetic algorithm for trajectory segment generation in urban GPS. J. Adv. Res. 2025; in press. [Google Scholar] [CrossRef]
  40. Lopatin, A. Intelligent system of estimation of total factor productivity (TFP) and investment efficiency in the economy with external technology gaps. J. Compr. Bus. Adm. Res. 2023, 1, 160–170. [Google Scholar] [CrossRef]
  41. Charilogis, V.; Tsoulos, I.G.; Gianni, A.M. TRIDENT-DE: Triple-Operator Differential Evolution with Adaptive Restarts and Greedy Refinement. Future Internet 2025, 17, 488. [Google Scholar] [CrossRef]
  42. Dehghani, M.; Trojovská, E.; Trojovský, P.; Malik, O.P. OOBO: A new metaheuristic algorithm for solving optimization problems. Biomimetics 2023, 8, 468. [Google Scholar] [CrossRef] [PubMed]
  43. Tsoulos, I.G.; Charilogis, V.; Kyrou, G.; Stavrou, V.N.; Tzallas, A. OPTIMUS: A Multidimensional Global Optimization Package. J. Open Source Softw. 2025, 10, 7584. [Google Scholar] [CrossRef]
  44. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential evolution using a neighborhood-based mutation operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553. [Google Scholar] [CrossRef]
  45. Kluabwang, J.; Thomthong, T. Solving parameter identification of frequency modulation sounds problem by modified adaptive tabu search under management agent. Procedia Eng. 2012, 31, 1006–1011. [Google Scholar] [CrossRef]
  46. Ahandan, M.A.; Alavi-Rad, H.; Jafari, N. Frequency modulation sound parameter identification using shuffled particle swarm optimization. Int. J. Appl. Evol. Comput. 2013, 4, 1–15. [Google Scholar] [CrossRef]
  47. Lennard-Jones, J.E. On the determination of molecular fields. Proc. R. Soc. A 1924, 106, 463–477. [Google Scholar] [CrossRef]
  48. Hofer, E.P. Optimization of bifunctional catalysts in tubular reactors. J. Optim. Theory Appl. 1976, 18, 379–393. [Google Scholar] [CrossRef]
  49. Luus, R.; Dittrich, J.; Keil, F.J. Multiplicity of solutions in the optimization of a bifunctional catalyst blend in a tubular reactor. Can. J. Chem. Eng. 1992, 70, 780–785. [Google Scholar] [CrossRef]
  50. Luus, R.; Bojkov, B. Global optimization of the bifunctional catalyst problem. Can. J. Chem. Eng. 1994, 72, 160–163. [Google Scholar] [CrossRef]
  51. Javinsky, M.A.; Kadlec, R.H. Optimal control of a continuous flow stirred tank chemical reactor. AIChE J. 1970, 16, 916–924. [Google Scholar] [CrossRef]
  52. Soukkou, A.; Khellaf, A.; Leulmi, S.; Boudeghdegh, K. Optimal control of a CSTR process. Braz. J. Chem. Eng. 2008, 25, 799–812. [Google Scholar] [CrossRef]
  53. Pinheiro, C.I.C.; de Souza, M.B., Jr.; Lima, E.L. Model predictive control of reactor temperature in a CSTR with constraints. Comput. Chem. Eng. 1999, 23, 1553–1563. [Google Scholar] [CrossRef]
  54. Tersoff, J. New empirical approach for the structure and energy of covalent systems. Phys. Rev. B 1988, 37, 6991–7000. [Google Scholar] [CrossRef] [PubMed]
  55. Tersoff, J. Modeling solid-state chemistry: Interatomic potentials for multicomponent systems. Phys. Rev. B 1989, 39, 5566–5568. [Google Scholar] [CrossRef] [PubMed]
  56. He, H.; Stoica, P.; Li, J. Designing unimodular sequence sets with good correlations-Including an application to MIMO radar. IEEE Trans. Signal Process. 2009, 57, 4391–4405. [Google Scholar] [CrossRef]
  57. Garver, L.L. Transmission network estimation using linear programming. IEEE Trans. Power Appar. Syst. 1970, PAS-89, 1688–1697. [Google Scholar] [CrossRef]
  58. Schweppe, F.C.; Caramanis, M.; Tabors, R.D.; Bohn, R.E. Spot Pricing of Electricity; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988. [Google Scholar] [CrossRef]
  59. Balanis, C.A. Antenna Theory: Analysis and Design, 4th ed.; Wiley: Hoboken, NJ, USA, 2016. [Google Scholar]
  60. Biscani, F.; Izzo, D.; Yam, C.H. Global Optimization for Space Trajectory Design (GTOP Database and Benchmarks). European Space Agency, Advanced Concepts Team. (GTOP Online Resource; See Also Related ACT Publications). 2010. Available online: https://www.esa.int/gsp/ACT/projects/gtop/ (accessed on 13 November 2025).
  61. Calles-Esteban, F.; Olmedo, A.A.; Hellín, C.J.; Valledor, A.; Gómez, J.; Tayebi, A. Optimizing antenna positioning for enhanced wireless coverage: A genetic algorithm approach. Sensors 2024, 24, 2165. [Google Scholar] [CrossRef] [PubMed]
  62. Li, J.; Chen, X.; Zhang, Y. Optimization of 5G base station coverage based on self-adaptive genetic algorithm. Comput. Commun. 2024, 218, 1–12. [Google Scholar] [CrossRef]
  63. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Technical Report; Jadavpur University: Kolkata, India; Nanyang Technological University: Singapore, 2010. Available online: https://www.semanticscholar.org/paper/Problem-Definitions-and-Evaluation-Criteria-for-CEC-Das-Suganthan/d2f546248edd0c66d833c3e5f67e094e6922d262#citing-papers (accessed on 13 November 2025).
  64. Lee, Y.; Filliben, J.; Micheals, R.; Phillips, J. Sensitivity Analysis for Biometric Systems: A Methodology Based on Orthogonal Experiment Designs; National Institute of Standards and Technology Gaithersburg (NISTIR): Gaithersburg, MD, USA, 2012. [CrossRef]
  65. Shah, R.K.; Sekulić, D.P. Fundamentals of Heat Exchanger Design; Wiley: Hoboken, NJ, USA, 2003. [Google Scholar] [CrossRef]
  66. Serna, M.; Jiménez, A. A compact formulation of the Bell–Delaware method for heat exchanger design and optimization. Chem. Eng. Res. Des. 2005, 83, 539–550. [Google Scholar] [CrossRef]
  67. Gonçalves, C.d.O.; Costa, A.L.H.; Bagajewicz, M.J. Linear method for the design of shell and tube heat exchangers using the Bell–Delaware method. AIChE J. 2019, 65, e16602. [Google Scholar] [CrossRef]
  68. Moran, M.J.; Shapiro, H.N.; Boettner, D.D.; Bailey, M.B. Fundamentals of Engineering Thermodynamics, 9th ed.; Wiley: Hoboken, NJ, USA, 2019. [Google Scholar]
  69. Coustenis, A.; Atreya, S.K.; Balint, T.; Brown, R.H.; Dougherty, M.; Dragonetti, Y.; Zarnecki, J.C. TandEM: Titan and Enceladus mission. Exp. Astron. 2009, 23, 893–946. [Google Scholar] [CrossRef]
  70. Ceriotti, M. Global Optimisation of Multiple Gravity Assist Trajectories. Doctoral Dissertation, University of Glasgow, Glasgow, Scotland, 2010. Available online: https://theses.gla.ac.uk/2003/ (accessed on 13 November 2025).
  71. Hinckley, D., Jr.; Parker, J.S. Global optimization of interplanetary trajectories using MGA-1DSM transcription (AAS 15-582). In Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, Vail, CO, USA, 9–13 August 2015; NASA Technical Reports. Available online: https://ntrs.nasa.gov/api/citations/20150020817/downloads/20150020817.pdf (accessed on 13 November 2025).
Figure 1. The ARQ method, an accelerated evolutionary optimization method with success-history adaptation, RTR selection, outlier quarantine, and targeted micro-restart.
Figure 2. ParameterPolicy: sampling of control rates and success-history update of running means (fast self-tuning for coherent gains, conservative for scattered ones).
Figure 3. TrialGeneration: pbest/1/bin trial construction with archive support and box projection (targeted drift toward strong exemplars).
Figure 4. SelectionRTR: local replacement via restricted tournament with nearest-neighbor check (archiving displaced solutions for future diversity).
Figure 5. PopulationMaintenance: outlier quarantine and targeted micro-restart (controlled re-injection/refresh).
Figure 6. ARQ flow: detect tail, isolate, resample, and reinsert.
Figure 7. Sensitivity to the selection controllers: main effects and stability ranges. Graphical representation of outlier_alpha (α), rtr_k, sh_c, pbest_frac (p), and stagnation_trigger for the Static Economic Load Dispatch 1 problem.
Figure 8. Sensitivity to the selection controllers: main effects and stability ranges. Graphical representation of outlier_alpha (α), rtr_k, sh_c, pbest_frac (p), and stagnation_trigger for the Dynamic Economic Dispatch 1 problem.
Figure 9. Sensitivity to the selection controllers: main effects and stability ranges. Graphical representation of outlier_alpha (α), rtr_k, sh_c, pbest_frac (p), and stagnation_trigger for the Lennard-Jones Potential problem (13 atoms, Dim: 43).
Figure 10. Runtime vs. dimension (40–400) for ARQ on three real-world problems (500-iteration budget).
Figure 11. Statistical comparison of optimization methods by best-so-far over 30 runs (evaluation budget 150,000 FEs).
Figure 12. Aggregate mean ranking across ten algorithms.
Figure 13. Aggregate best ranking across ten algorithms.
Figure 14. Statistical comparison of optimization methods by mean best-so-far over 30 runs (evaluation budget 150,000 FEs).
Figure 15. Per-method significance heatmap: ARQ vs. competitors (Wilcoxon signed-rank with Holm correction; effect direction encoded by color).
Figure 16. Total ranking combining best and mean across ten algorithms.
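The significance protocol behind Figure 15 pairs per-problem Wilcoxon signed-rank tests with a Holm step-down correction over the nine ARQ-vs-competitor comparisons. The correction step can be sketched as follows; this is a minimal illustration, not the authors' analysis script, and `holm_correction` is a hypothetical helper whose input p-values would come from e.g. `scipy.stats.wilcoxon`:

```python
def holm_correction(pvals, alpha=0.05):
    """Holm step-down multiple-comparison correction.

    Sorts the raw p-values, multiplies the k-th smallest by (m - k),
    enforces monotonicity, and returns (adjusted p-values, reject flags)
    in the original order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Holm factor shrinks as we move to larger p-values.
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    reject = [p <= alpha for p in adjusted]
    return adjusted, reject

adj, rej = holm_correction([0.01, 0.04, 0.03, 0.20])
```

With these four raw p-values only the smallest survives at α = 0.05, which is exactly the conservatism the heatmap encodes.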
Table 1. ARQ parameters in the pseudo-code.
Name | Value | Description
N | 100 | Population size
T | 150,000 | Maximum evaluations
p | 0.12 | Top fraction for pbest
agent_fraction | 0.60 | Agents updated per step
μ_F | 0.6 | Mean scale factor
μ_CR | 0.85 | Mean crossover rate
α | 1.0 | Outlier threshold coefficient in θ = Q3 + α·IQR
ρ | 0.08 | Fraction of outliers repaired
w | 0.08 | Worst fraction for micro-restart
sh_c | 0.10 | SH learning rate
F_lo | 0.05 | F sample minimum
F_hi | 1.40 | F sample maximum
CR_low | 0.0 | CR sample minimum
CR_hi | 1.0 | CR sample maximum
archive_rate | 1.5 | Archive capacity
rtr_k | 7 | RTR neighborhood size
rtr_pool | 14 | RTR pool size
rtr_min | 0.0 | RTR minimum gain
q_sigma | 0.10 | Quarantine perturbation scale
stagnation_trigger | 24 | No-improvement trigger
r_sigma | 0.18 | Restart noise st. dev.
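As a concrete reading of the θ = Q3 + α·IQR rule in the table, the tail test that triggers quarantine can be sketched as follows. This is a minimal illustration under the stated rule, not the authors' implementation, and `quarantine_mask` is a hypothetical helper name:

```python
import numpy as np

def quarantine_mask(fitness, alpha=1.0):
    """Flag candidates whose fitness lies in the upper tail.

    Implements theta = Q3 + alpha * IQR from Table 1: for minimization,
    fitness values above theta are treated as outliers to be quarantined.
    """
    q1, q3 = np.percentile(fitness, [25, 75])
    theta = q3 + alpha * (q3 - q1)
    return fitness > theta

# Example: one grossly worse candidate is flagged.
f = np.array([1.0, 1.1, 0.9, 1.2, 1.05, 50.0])
print(quarantine_mask(f))  # only the last entry is True
```

The IQR rule is robust in the sense of Huber and Ronchetti [27]: the threshold depends on quartiles, so a few extreme values cannot drag it upward.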
Table 2. Parameters of other methods.
Name | Value | Description
N | 100 | Population size for all methods

jSO
NP_factor | 18 | Initial population size NP_0 = NP_factor × Dim
H | 20 | Memory size for success-history means M_F/M_CR
pbest | 0.11 | Fraction for p-best selection (top-p set)
archive_rate | 1 | Archive size as a multiple of NP_0 (archiveMax = archive_rate × NP_0)

CLPSO
cl_p | 0.3 | Comprehensive learning probability
cognitive_weight | 1.49445 | Cognitive weight
inertia_weight | 0.729 | Inertia weight
mutation_rate | 0.01 | Mutation rate
social_weight | 1.49445 | Social weight

CMA-ES
N_CMAES | 4 + ⌊3·ln(dim)⌋ | Population size

EA4Eig
archive_size | 100 | Archive size for JADE-style mutation
eig_interval | 5 | Recompute eigenbasis every k iterations
CR_max | 1 | Upper bound for CR
F_max | 1 | Upper bound for F
CR_min | 0 | Lower bound for CR
F_min | 0.1 | Lower bound for F
pbest | 0.2 | pbest fraction (current-to-pbest/1/bin)
CR_tau | 0.1 | Self-adaptation probability for CR
F_tau | 0.1 | Self-adaptation probability for F

mLSHADE_RL
archive_size | 500 | Archive size
Memory_size | 10 | Success-history memory size
Population_min | 4 | Minimum population size
pbest_max | 0.2 | Maximum pbest fraction
pbest_min | 0.05 | Minimum pbest fraction

SaDE
CR_sigma | 0.1 | Std. dev. for CR sampling
F_gamma | 0.1 | Scale for Cauchy F sampling
CR_init | 0.5 | Initial CR mean
F_init | 0.7 | Initial F mean
learning_period | 25 | Iterations per adaptation window

UDE3
Population_min | 4 | Minimum population size
Memory_size | 10 | Success-history memory size
archive_size | 100 | Archive size
pbest_min | 0.05 | Minimum pbest fraction
pbest_max | 0.2 | Maximum pbest fraction

TRIDENT-DE: parameters as reported in the corresponding table of [41].
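The success-history means appearing in Tables 1 and 2 (μ_F, μ_CR; M_F/M_CR) are conventionally updated from the F and CR values of offspring that survived selection. A minimal sketch, assuming the JADE-style running-mean form with learning rate sh_c (the Lehmer mean for F is the JADE/SHADE convention, not a detail stated in this paper; `update_means` is an illustrative name):

```python
import numpy as np

def update_means(mu_F, mu_CR, successful_F, successful_CR, c=0.10):
    """JADE-style running-mean update with learning rate c (sh_c in Table 1).

    mu_F is pulled toward the Lehmer mean of successful F values (which
    biases F upward, favoring exploration); mu_CR toward the arithmetic
    mean of successful CR values.
    """
    sF = np.asarray(successful_F, dtype=float)
    sCR = np.asarray(successful_CR, dtype=float)
    if sF.size:
        lehmer = np.sum(sF**2) / np.sum(sF)
        mu_F = (1 - c) * mu_F + c * lehmer
    if sCR.size:
        mu_CR = (1 - c) * mu_CR + c * sCR.mean()
    return mu_F, mu_CR

mu_F, mu_CR = update_means(0.6, 0.85, [0.9, 0.5], [0.8, 0.9])
```

With no successes in a generation, both means are left unchanged, which matches the conservative behavior described for scattered gains in Figure 2.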
Table 3. The real-world benchmark functions used in the conducted experiments.

Parameter Estimation for Frequency-Modulated Sound Waves [44,45,46]
Formula: min_{x ∈ [−6.4, 6.35]^6} f(x) = (1/N) Σ_{n=1}^{N} (y(n; x) − y_target(n))^2, where y(n; x) = x_0 sin(x_1 n + x_2 sin(x_3 n + x_4 sin(x_5 n)))
Dim: 6. Bounds: x_i ∈ [−6.4, 6.35].

Lennard-Jones Potential (atoms: 10, 13, 38) [47]
Formula: min f(x) = 4 Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} (1/r_ij^12 − 1/r_ij^6)
Dim: 24, 43, 108. Bounds: x_0 = (0, 0, 0); x_1, x_2 ∈ [0, 4]; x_3 ∈ [0, π]; remaining coordinates x_{3k−3}, x_{3k−2}, x_{3k−1} ∈ [−b_k, +b_k].

Bifunctional Catalyst Blend Optimal Control [48,49,50]
Formula: dx_1/dt = −k_1 x_1; dx_2/dt = k_1 x_1 − (k_2 + k_3) x_2 + k_4 x_5; dx_3/dt = k_2 x_2; dx_4/dt = −k_6 x_4 + k_5 x_5; dx_5/dt = k_3 x_2 + k_6 x_4 − (k_4 + k_5 + k_8 + k_9) x_5 + k_7 x_6 + k_10 x_7; dx_6/dt = k_8 x_5 − k_7 x_6; dx_7/dt = k_9 x_5 − k_10 x_7; with k_i(u) = c_{i1} + c_{i2} u + c_{i3} u^2 + c_{i4} u^3
Dim: 1. Bounds: u ∈ [0.6, 0.9].

Optimal Control of a Non-Linear Stirred Tank Reactor [51,52,53]
Formula: J(u) = ∫_0^{0.72} (x_1(t)^2 + x_2(t)^2 + 0.1 u^2) dt; dx_1/dt = −(2 + u)(x_1 + 0.25) + (x_2 + 0.5) exp(25 x_1/(x_1 + 2)); dx_2/dt = 0.5 − x_2 − (x_2 + 0.5) exp(25 x_1/(x_1 + 2)); x_1(0) = 0.09, x_2(0) = 0.09, t ∈ [0, 0.72]
Dim: 1. Bounds: u ∈ [0, 5].

Tersoff Potential for Model Si (B) [54,55]
Formula: min_{x ∈ Ω} f(x) = Σ_{i=1}^{N} E(x_i), with E(x_i) = (1/2) Σ_{j≠i} f_c(r_ij) [V_R(r_ij) − B_ij V_A(r_ij)], where r_ij = ‖x_i − x_j‖, V_R(r) = A exp(−λ_1 r), V_A(r) = B exp(−λ_2 r); f_c(r) is the cutoff function and g(θ) the bond-angle term.
Dim: 30. Bounds: x_1 ∈ [0, 4], x_2 ∈ [0, 4], x_3 ∈ [0, π], x_i ∈ [−4 − (1/4)⌊(i − 4)/3⌋, 4 + (1/4)⌊(i − 4)/3⌋] for i ≥ 4.

Tersoff Potential for Model Si (C) [54,55]
Formula: min_x V(x) = Σ_{i=1}^{N} Σ_{j>i} f_C(r_ij) [a_ij f_R(r_ij) + b_ij f_A(r_ij)]; f_C(r) = 1 for r < R − D, 1/2 + (1/2) cos(π (r − R + D)/(2D)) for |r − R| ≤ D, 0 for r > R + D; f_R(r) = A exp(−λ_1 r); f_A(r) = −B exp(−λ_2 r); b_ij = (1 + β^n ζ_ij^n)^{−1/(2n)}, ζ_ij = Σ_{k≠i,j} f_C(r_ik) g(θ_ijk) exp(λ_3^3 (r_ij − r_ik)^3)
Dim: 30. Bounds: same as Si (B).

Spread Spectrum Radar Polyphase Code Design [56]
Formula: min_{x ∈ X} f(x) = max{|φ_1(x)|, …, |φ_m(x)|}, X = {x ∈ R^n : 0 ≤ x_j ≤ 2π}, m = 2n − 1; φ_j(x) = Σ_{k=1}^{n−j} cos(x_k − x_{k+j}) for j = 1, …, n − 1; φ_n(x) = n; φ_{n+ℓ}(x) = −φ_{n−ℓ}(x), ℓ = 1, …, n − 1
Dim: 20. Bounds: x_j ∈ [0, 2π].

Transmission Network Expansion Planning [57]
Formula: min Σ_{l ∈ Ω} c_l n_l + W_1 Σ_{l ∈ OL} |f_l − f̄_l| + W_2 Σ_{l ∈ Ω} max(0, n_l − n̄_l), s.t. S f = g − d; f_l = γ_l n_l Δθ_l, l ∈ Ω; |f_l| ≤ f̄_l n_l, l ∈ Ω; 0 ≤ n_l ≤ n̄_l, n_l ∈ Z, l ∈ Ω
Dim: 7. Bounds: 0 ≤ n_l ≤ n̄_l, n_l ∈ Z.

Electricity Transmission Pricing [58]
Formula: min_x f(x) = Σ_{i=1}^{N_g} C_i^gen (P_i^gen − R_i^gen)^2 + Σ_{j=1}^{N_d} C_j^load (P_j^load − R_j^load)^2, s.t. Σ_j GD_{i,j} + Σ_j BT_{i,j} = P_i^gen for all i; Σ_i GD_{i,j} + Σ_i BT_{i,j} = P_j^load for all j; GD_{i,j}^max = min(P_i^gen − BT_{i,j}, P_j^load − BT_{i,j})
Dim: 126. Bounds: GD_{i,j} ∈ [0, GD_{i,j}^max].

Circular Antenna Array Design [59]
Formula: min_{r_1,…,r_6, φ_1,…,φ_6} f(x) = max_{θ ∈ Ω} AF(x, θ), with AF(x, θ) = |Σ_{k=1}^{6} exp(j (2π r_k cos(θ − θ_k) + φ_k π/180))|
Dim: 12. Bounds: r_k ∈ [0.2, 1], φ_k ∈ [−180, 180].

Cassini 2: Spacecraft Trajectory Optimization Problem [60]
Formula: x = (t_0, Δt_1, …, Δt_5, p_1, …, p_4); min_x J(x) = Δv_launch + Σ_{ℓ=1}^{4} Δv_{DSM,ℓ} + Δv_arrival over the Earth-Venus-Venus-Earth-Jupiter-Saturn (EVVEJS) flyby sequence; t_0 ∈ [850, 2000], Δt_ℓ ∈ [20, 1200], p_ℓ ∈ [1.0, 10.0]
Dim: 10. Constraints: r_{p,ℓ} ≥ R_{Pℓ} + h_{min,Pℓ}; δ_ℓ ≤ δ_{max,Pℓ}; t_0^min ≤ t_0 ≤ t_0^max; Δt^min ≤ Δt_ℓ ≤ Δt^max.

Wireless Coverage Antenna Placement [61,62]
Formula: variables x_i, y_i ∈ R (position), P_i ≥ 0 (transmission power); S_{iu}(x_i, y_i, P_i) = G P_i/(((x_i − x_u)^2 + (y_i − y_u)^2)^{α/2} + ε); min_{x,y,P} Σ_{u ∈ U} [τ − S_u(x, P)]_+ (coverage deficiency) + λ_P Σ_{i=1}^{N} P_i + λ_O Σ_{u ∈ U} Σ_{i<j} [min{S_{iu}, S_{ju}} − κ]_+, with S_u(x, P) = max_{i=1,…,N} S_{iu}(x_i, y_i, P_i)
Dim: 30. Bounds: 0 ≤ x_i ≤ W, 0 ≤ y_i ≤ H, 0 ≤ P_i ≤ P_max.

Dynamic Economic Dispatch 1 [63]
Formula: min_P f(P) = Σ_{t=1}^{24} Σ_{i=1}^{5} (a_i P_{i,t}^2 + b_i P_{i,t} + c_i), s.t. P_i^min ≤ P_{i,t} ≤ P_i^max, i = 1, …, 5; Σ_{i=1}^{5} P_{i,t} = D_t, t = 1, …, 24; P^min = [10, 20, 30, 40, 50], P^max = [75, 125, 175, 250, 300]
Dim: 120. Bounds: P_i^min ≤ P_{i,t} ≤ P_i^max.

Dynamic Economic Dispatch 2 [63]
Formula: min_P f(P) = Σ_{t=1}^{24} Σ_{i=1}^{9} (a_i P_{i,t}^2 + b_i P_{i,t} + c_i), s.t. P_i^min ≤ P_{i,t} ≤ P_i^max, i = 1, …, 9; Σ_{i=1}^{9} P_{i,t} = D_t, t = 1, …, 24; P^min = [150, 135, 73, 60, 73, 57, 20, 47, 20], P^max = [470, 460, 340, 300, 243, 160, 130, 120, 80]
Dim: 216. Bounds: P_i^min ≤ P_{i,t} ≤ P_i^max.

Static Economic Load Dispatch (1, 2, 3, 4, 5) [63]
Formula: min_{P_1,…,P_NG} F = Σ_{i=1}^{NG} f_i(P_i), with f_i(P_i) = a_i P_i^2 + b_i P_i + c_i (smooth case) or f_i(P_i) = a_i P_i^2 + b_i P_i + c_i + |e_i sin(f_i (P_i^min − P_i))| (valve-point case), s.t. P_i^min ≤ P_i ≤ P_i^max; Σ_{i=1}^{NG} P_i = P_D + P_L; P_L = Σ_i Σ_j P_i B_ij P_j + Σ_i B_{0i} P_i + B_00; ramp limits P_i − P_i^0 ≤ UR_i and P_i^0 − P_i ≤ DR_i
Dim: 6, 13, 15, 40, 140. Constraints: see the CEC 2011 technical report.
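The Lennard-Jones objective in the table reduces to a plain pairwise sum over interatomic distances. A minimal sketch of that energy (illustrative only; it evaluates the raw sum and ignores the benchmark's fixed-coordinate reduction of the first atoms):

```python
import numpy as np

def lennard_jones(x):
    """Total LJ energy 4 * sum_{i<j} (r_ij^-12 - r_ij^-6) for flat coordinates.

    x holds 3N Cartesian coordinates, one (x, y, z) triple per atom.
    """
    pts = np.asarray(x, dtype=float).reshape(-1, 3)
    n = len(pts)
    energy = 0.0
    for i in range(n - 1):
        # Distances from atom i to all later atoms, vectorized.
        d = np.linalg.norm(pts[i + 1:] - pts[i], axis=1)
        energy += np.sum(d**-12 - d**-6)
    return 4.0 * energy

# Two atoms at the minimum-energy separation 2**(1/6) give energy -1.
print(lennard_jones([0, 0, 0, 2**(1/6), 0, 0]))  # ≈ -1.0
```

The pair minimum at r = 2^(1/6) with energy −1 is a quick sanity check before running the 10-, 13-, and 38-atom instances.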
Table 4. Sensitivity analysis of the method parameters for the Static Economic Load Dispatch 1 problem.

Parameter | Value | Mean | Min | Max | Iters | Main Effect Range
α | 0.5 | 2978.9797 | 2967.2491 | 3169.5417 | 9600 | 0.1594
α | 1 | 2978.9852 | 2967.2491 | 3169.5417 | 9600
α | 1.5 | 2979.0918 | 2967.2491 | 3169.5417 | 9600
α | 2 | 2978.9324 | 2967.2491 | 3166.8100 | 9600
α | 2.5 | 2978.9569 | 2967.2491 | 3166.8100 | 9600
rtr_k | 3 | 2978.9785 | 2967.2491 | 3169.5417 | 9600 | 0.0748
rtr_k | 5 | 2979.0218 | 2967.2491 | 3169.5417 | 9600
rtr_k | 7 | 2978.9469 | 2967.2491 | 3169.5417 | 9600
rtr_k | 10 | 2979.0129 | 2967.2491 | 3169.5417 | 9600
rtr_k | 14 | 2978.9860 | 2967.2491 | 3166.8100 | 9600
sh_c | 0.1 | 2979.0735 | 2967.2491 | 3169.5417 | 12,000 | 0.2084
sh_c | 0.3 | 2978.8650 | 2967.2491 | 3169.5417 | 12,000
sh_c | 0.5 | 2979.0562 | 2967.2491 | 3169.5417 | 12,000
sh_c | 0.7 | 2979.0562 | 2967.2491 | 3166.8100 | 12,000
p | 0.05 | 2979.4978 | 2967.2491 | 3169.5417 | 12,000 | 0.9368
p | 0.1 | 2979.0781 | 2967.2491 | 3169.5417 | 12,000
p | 0.2 | 2978.8201 | 2967.2491 | 3167.0430 | 12,000
p | 0.3 | 2978.5609 | 2967.2491 | 3169.5417 | 12,000
stagnation_trigger | 10 | 2979.0096 | 2967.2491 | 3169.5417 | 12,000 | 0.0990
stagnation_trigger | 20 | 2978.9588 | 2967.2491 | 3166.8100 | 12,000
stagnation_trigger | 30 | 2979.0438 | 2967.2491 | 3169.5417 | 12,000
stagnation_trigger | 50 | 2978.9447 | 2967.2491 | 3169.5417 | 12,000
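The "Main Effect Range" column in Tables 4–6 appears to be the spread of the per-level means for each parameter (largest minus smallest mean across the tested values); under that assumption, the computation is one line:

```python
def main_effect_range(level_means):
    """Spread of per-level means for one parameter: max(mean) - min(mean).

    A small range means the outcome is insensitive to that parameter
    across the tested levels.
    """
    return max(level_means) - min(level_means)

# Per-level means for outlier_alpha from Table 4 (values 0.5, 1, 1.5, 2, 2.5):
alpha_means = [2978.9797, 2978.9852, 2979.0918, 2978.9324, 2978.9569]
print(round(main_effect_range(alpha_means), 4))  # 0.1594
```

Recomputing the column this way reproduces the reported ranges (e.g. 0.1594 for α and 0.9368 for p in Table 4), which is how the per-level splits above were cross-checked.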
Table 5. Sensitivity analysis of the method parameters for the Dynamic Economic Dispatch 1 problem.

Parameter | Value | Mean | Min | Max | Iters | Main Effect Range
α | 0.5 | 130,643.6614 | 130,642.7267 | 130,668.1727 | 9600 | 0.2952
α | 1 | 130,643.6546 | 130,642.7276 | 130,667.1070 | 9600
α | 1.5 | 130,643.7393 | 130,642.7131 | 130,662.7681 | 9600
α | 2 | 130,643.9058 | 130,642.7216 | 130,709.2564 | 9600
α | 2.5 | 130,643.9498 | 130,642.7275 | 130,702.0175 | 9600
rtr_k | 3 | 130,643.7943 | 130,642.7253 | 130,709.2564 | 9600 | 0.0309
rtr_k | 5 | 130,643.7775 | 130,642.7275 | 130,702.0175 | 9600
rtr_k | 7 | 130,643.7889 | 130,642.7233 | 130,684.7783 | 9600
rtr_k | 10 | 130,643.7634 | 130,642.7282 | 130,662.7681 | 9600
rtr_k | 14 | 130,643.7869 | 130,642.7131 | 130,667.5166 | 9600
sh_c | 0.1 | 130,643.7862 | 130,642.7282 | 130,680.0939 | 12,000 | 0.0295
sh_c | 0.3 | 130,643.7630 | 130,642.7168 | 130,680.9305 | 12,000
sh_c | 0.5 | 130,643.7870 | 130,642.7131 | 130,709.2564 | 12,000
sh_c | 0.7 | 130,643.7925 | 130,642.7253 | 130,702.0175 | 12,000
p | 0.05 | 130,643.6902 | 130,642.7131 | 130,709.2564 | 12,000 | 0.3111
p | 0.1 | 130,643.6697 | 130,642.7259 | 130,702.0175 | 12,000
p | 0.2 | 130,643.7879 | 130,642.7393 | 130,684.7783 | 12,000
p | 0.3 | 130,643.9809 | 130,642.7459 | 130,682.8584 | 12,000
stagnation_trigger | 10 | 130,643.7790 | 130,642.7233 | 130,709.2564 | 12,000 | 0.0164
stagnation_trigger | 20 | 130,643.7830 | 130,642.7294 | 130,684.7783 | 12,000
stagnation_trigger | 30 | 130,643.7915 | 130,642.7253 | 130,702.0175 | 12,000
stagnation_trigger | 50 | 130,643.7751 | 130,642.7131 | 130,680.0939 | 12,000
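For reference, the dispatch objective minimized in Tables 4 and 5 is a sum of per-unit quadratic fuel costs (Table 3). A minimal sketch of that cost, with the demand-balance constraint left to the optimizer's constraint handling; `dispatch_cost` is an illustrative name, not code from the paper:

```python
def dispatch_cost(P, a, b, c):
    """Quadratic fuel cost sum_t sum_i (a_i P_it^2 + b_i P_it + c_i).

    P is a list of per-period rows of unit outputs; a, b, c are the
    per-unit cost coefficients. The power-balance equality
    sum_i P_it = D_t is enforced separately (penalty or repair).
    """
    total = 0.0
    for row in P:
        for Pi, ai, bi, ci in zip(row, a, b, c):
            total += ai * Pi * Pi + bi * Pi + ci
    return total

# One period, two units: 0.1*100 + 2*10 + 5 = 35 and 0.2*400 + 1.5*20 + 3 = 113.
cost = dispatch_cost([[10.0, 20.0]], a=[0.1, 0.2], b=[2.0, 1.5], c=[5.0, 3.0])
```

The near-identical means across all parameter levels in Table 5 then read naturally: on this smooth quadratic landscape, ARQ's controllers have little left to influence.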
Table 6. Sensitivity analysis of the method parameters for the Lennard-Jones Potential problem (13 atoms, Dim: 43).

Parameter | Value | Mean | Min | Max | Iters | Main Effect Range
α | 0.5 | −39.8220 | −44.32680 | −35.8189 | 9600 | 0.1952
α | 1 | −39.9118 | −44.32680 | −35.1252 | 9600
α | 1.5 | −39.9657 | −44.32680 | −35.3909 | 9600
α | 2 | −39.9857 | −44.32680 | −36.0638 | 9600
α | 2.5 | −40.0173 | −44.32680 | −35.3308 | 9600
rtr_k | 3 | −39.9465 | −44.32680 | −35.1252 | 9600 | 0.0404
rtr_k | 5 | −39.9392 | −44.32680 | −35.3909 | 9600
rtr_k | 7 | −39.9172 | −44.32680 | −35.3308 | 9600
rtr_k | 10 | −39.9577 | −44.32680 | −35.4352 | 9600
rtr_k | 14 | −39.9419 | −44.32680 | −35.8104 | 9600
sh_c | 0.1 | −39.3275 | −44.32680 | −35.3909 | 12,000 | 0.9911
sh_c | 0.3 | −39.9237 | −44.32680 | −35.5921 | 12,000
sh_c | 0.5 | −40.1921 | −44.32680 | −35.1252 | 12,000
sh_c | 0.7 | −40.3186 | −44.32680 | −36.0487 | 12,000
p | 0.05 | −40.0216 | −44.32680 | −35.1252 | 12,000 | 0.2021
p | 0.1 | −40.0054 | −44.32680 | −35.3909 | 12,000
p | 0.2 | −39.9157 | −44.32680 | −35.3308 | 12,000
p | 0.3 | −39.8194 | −44.32680 | −35.4769 | 12,000
stagnation_trigger | 10 | −39.9334 | −44.32680 | −35.3308 | 12,000 | 0.0141
stagnation_trigger | 20 | −39.9398 | −44.32680 | −35.8387 | 12,000
stagnation_trigger | 30 | −39.9412 | −44.32680 | −35.7413 | 12,000
stagnation_trigger | 50 | −39.94763 | −44.32680 | −35.1252 | 12,000
Table 7. Comparison based on best and mean values after 1.5 × 10^5 FEs.

Problem | ARQ | jSO | TRIDENT-DE | UDE3 | EA4Eig | mLSHADE_RL | SaDE | CMA-ES | jDE | CLPSO
Lennard-Jones Potential (10 atoms), best | −28.42253189 | −27.68965427 | −28.42253189 | −17.60964115 | −22.47842596 | −28.42252711 | −22.86077544 | −28.42253189 | −15.91366007 | −16.59269921
Lennard-Jones Potential (10 atoms), mean | −27.62259837 | −26.43980674 | −27.19833972 | −16.33758634 | −19.48663385 | −23.77189104 | −21.22806787 | −27.52754934 | −13.75563015 | −13.55503647
Lennard-Jones Potential (13 atoms), best | −44.12080311 | −36.56391435 | −41.39220729 | −21.90945922 | −28.01101572 | −40.6992486 | −29.31393114 | −44.32680142 | −18.77073675 | −18.00250514
Lennard-Jones Potential (13 atoms), mean | −38.97673056 | −35.18151099 | −39.14390841 | −19.49372047 | −24.58810405 | −30.67706246 | −27.48874237 | −41.44245617 | −15.4243072 | −15.86691988
Lennard-Jones Potential (38 atoms), best | −157.3383508 | −73.79663557 | −125.1886792 | −2.015038465 | −55.05415428 | −120.0680452 | −68.91313107 | −167.7369019 | 9186.25096 | 330.9934848
Lennard-Jones Potential (38 atoms), mean | −125.0267732 | −67.48713618 | −107.1751456 | 140.2211284 | −3.350181902 | −73.95117658 | −45.80265994 | −163.6091673 | 9186.25096 | 1087.035295
Tersoff Potential for model Si (B), best | −29.30289228 | −28.73373322 | −28.93480467 | −25.43447342 | −26.90746941 | −28.12281867 | −26.65687822 | −28.33045002 | −24.75133772 | −22.67387001
Tersoff Potential for model Si (B), mean | −28.279056 | −28.0408539 | −27.70615763 | −23.30318979 | −24.69059932 | −25.49977206 | −25.27422603 | −27.38991233 | −22.94168766 | −21.21150428
Tersoff Potential for model Si (C), best | −33.78825497 | −33.24194738 | −33.8820283 | −29.30227462 | −30.88865174 | −31.70444684 | −30.94469385 | −32.50963421 | −29.44789882 | −26.88039528
Tersoff Potential for model Si (C), mean | −32.71055796 | −32.47083022 | −31.91749393 | −27.53891341 | −29.0199918 | −29.44303263 | −29.70029831 | −31.53772845 | −29.44789882 | −24.653633
Parameter Estimation for Frequency-Modulated Sound Waves, best | 5.37 × 10^−20 | 2.40 × 10^−5 | 0.116157535 | 0 | 0.15272453 | 0.116157535 | 0.148007602 | 0.210122687 | 3.21 × 10^−25 | 0.131483748
Parameter Estimation for Frequency-Modulated Sound Waves, mean | 0.122436815 | 0.114197358 | 0.134324544 | 0.103406319 | 0.213099692 | 0.208210846 | 0.148007602 | 0.267329914 | 0.132539923 | 0.212498169
Circular Antenna Array Design, best | 0.006809638 | 0.006809638 | 0.006809638 | 0.006809665 | 0.006809638 | 0.006809662 | 0.006814682 | 0.007253731 | 0.00681715 | 0.006933401
Circular Antenna Array Design, mean | 0.006810417 | 0.006819653 | 0.006809683 | 0.006817385 | 0.006809638 | 0.006825338 | 0.00790701 | 0.008755359 | 0.006835764 | 0.051815518
Spread Spectrum Radar Polyphase Code Design, best | 0.006902215 | 0.331062321 | 0.014426836 | 0.953872709 | 0.601567824 | 0.074552911 | 0.550837019 | 0.062519409 | 1.005739785 | 0.860294378
Spread Spectrum Radar Polyphase Code Design, mean | 0.351872002 | 0.44224577 | 0.26254527 | 1.206577385 | 0.869257599 | 0.535028919 | 0.803527605 | 0.197713522 | 1.331416957 | 1.273200439
Cassini 2: Spacecraft Trajectory Optimization, best | 1.42 × 10^−13 | 1.42 × 10^−13 | 4.54 × 10^−10 | 0.000926598 | 1.42 × 10^−13 | 1.76 × 10^−12 | 0.039231105 | 2.84 × 10^−13 | 2.70 × 10^−5 | 1.22633022
Cassini 2: Spacecraft Trajectory Optimization, mean | 2.90 × 10^−7 | 1.42 × 10^−13 | 1.12 × 10^−5 | 0.008206106 | 5.96 × 10^−14 | 1.73 × 10^−5 | 0.070230417 | 5.929143722 | 0.000285411 | 3.639687905
Wireless Coverage Antenna Placement, best | 0.946350736 | 0.946350736 | 0.946350736 | 0.946350736 | 0.946655032 | 0.946350736 | 0.946361987 | 1.18939375 | 0.946350736 | 0.946365969
Wireless Coverage Antenna Placement, mean | 0.946371023 | 0.94636088 | 0.94662124 | 0.946502884 | 0.946659575 | 0.946875107 | 0.946688757 | 1.190699803 | 0.946401452 | 0.946727502
Transmission Network Expansion Planning, best | 4.485292926 | 4.485292926 | 4.485292926 | 4.485295106 | 4.485292926 | 4.485292926 | 4.485299525 | 4.485292926 | 4.485292926 | 4.486699087
Transmission Network Expansion Planning, mean | 4.485292926 | 4.485292926 | 4.485304003 | 4.485304003 | 4.485292926 | 4.485292926 | 4.485311924 | 4.485292948 | 4.485292926 | 4.495857336
Dynamic Economic Dispatch 1, best | 130,642.8565 | 130,661.111 | 130,850.0389 | 130,693.5423 | 130,694.29 | 130,882.0646 | 131,010.8769 | 130,650.9354 | 131,225.2453 | 131,834.4235
Dynamic Economic Dispatch 1, mean | 130,643.4825 | 130,681.9224 | 130,931.1074 | 130,717.6052 | 130,862.9893 | 130,955.331 | 131,099.1959 | 130,654.2758 | 131,225.2453 | 132,151.5397
Dynamic Economic Dispatch 2, best | 165,304.7824 | 171,554.4678 | 165,980.9574 | 164,946.164 | 172,067.4426 | 167,519.3281 | 167,908.0605 | 165,847.1092 | 186,121.2812 | 177,120.3822
Dynamic Economic Dispatch 2, mean | 165,662.994 | 172,600.3374 | 166,478.1534 | 165,614.9256 | 172,931.6964 | 168,275.5429 | 168,495.9731 | 166,233.691 | 190,793.5621 | 178,190.4198
Static Economic Load Dispatch 1, best | 2967.249196 | 2967.249196 | 2967.249196 | 2967.249196 | 2979.803369 | 2967.249196 | 2967.249196 | 2967.249586 | 2967.249196 | 2967.249197
Static Economic Load Dispatch 1, mean | 2978.966424 | 2973.649521 | 2975.721343 | 2976.057657 | 2979.803369 | 2976.956221 | 2976.139816 | 3108.931917 | 2967.659992 | 2973.33009
Static Economic Load Dispatch 2, best | 17,866.8974 | 17,864.04428 | 17,879.73679 | 17,864.69687 | 18,006.65976 | 17,882.28384 | 17,892.38129 | 17,960.84734 | 17,867.57447 | 17,910.47794
17,915.7646317,864.6503917,928.6560317,890.8341318,063.1208517,950.1589217,958.9420418,077.5800617,992.0948618,089.83526
Static Economic Load Dispatch 332,367.5773532,367.5773532,367.5773532,367.5773532,415.8036732,367.5776532,376.0219732,645.3410232,391.6498132,384.85409
32,539.3739632,367.5773532,440.8775232,400.8633732,573.0357232,491.5723532,491.2897132,867.172932,476.5840532,532.19143
Static Economic Load Dispatch 4121,093.4505121,078.7215121,071.4654121,066.9247121,197.2468121,085.9922121,195.4656122,350.1013121,234.0466121,328.7006
121,331.4674508,807.9108121,422.9908121,197.2468121,545.1315121,308.5801121,517.4918122,957.6217121,526.7414121,541.4182
Static Economic Load Dispatch 5508,614.1309508,807.9108508,663.8176508,661.3113508,872.6908508,851.668508,985.9092508,717.1467511,174.5326509,025.7426
508,614.8964508,858.0755508,703.0424508,676.4938508,986.6092508,988.5396509,125.2079508,770.1661562,012.6548509,080.3439
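For context, the Lennard-Jones entries above minimize the standard pairwise cluster potential. A minimal sketch of that objective, assuming reduced units (ε = σ = 1) and an illustrative function name of our choosing:

```python
import itertools


def lj_energy(coords):
    """Total Lennard-Jones energy of an atom cluster in reduced units.

    coords: list of (x, y, z) positions.
    E = sum over all atom pairs of 4 * (r^-12 - r^-6).
    """
    energy = 0.0
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(coords, 2):
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        inv6 = 1.0 / r2 ** 3  # (1/r)^6 without a square root
        energy += 4.0 * (inv6 * inv6 - inv6)
    return energy


# Two atoms at the pair-equilibrium distance 2**(1/6) give energy ≈ -1.
print(lj_energy([(0.0, 0.0, 0.0), (2 ** (1 / 6), 0.0, 0.0)]))
```

Each benchmark instance optimizes the 3N coordinates of an N-atom cluster (N = 10, 13, 38), so the search space is 30- to 114-dimensional and highly multimodal.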
Table 8. Detailed ranking of algorithms based on best after 1.5 × 10⁵ FEs (1 = green, 2 = blue).
| Problem | ARQ | JSO | TRIDENT-DE | UDE3 | EA4Eig | mLSHADE_RL | SaDE | CMA-ES | jDE | CLPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| Lennard-Jones Potential (10 atoms) | 1 | 4 | 3 | 8 | 7 | 5 | 6 | 1 | 9 | 10 |
| Lennard-Jones Potential (13 atoms) | 3 | 4 | 2 | 8 | 7 | 5 | 6 | 1 | 10 | 9 |
| Lennard-Jones Potential (38 atoms) | 2 | 5 | 3 | 8 | 7 | 4 | 6 | 1 | 10 | 9 |
| Tersoff Potential for model Si (B) | 1 | 2 | 3 | 8 | 7 | 5 | 6 | 4 | 9 | 10 |
| Tersoff Potential for model Si (C) | 1 | 2 | 3 | 9 | 8 | 7 | 5 | 4 | 6 | 10 |
| Parameter Estimation for Frequency-Modulated Sound Waves | 3 | 2 | 5 | 1 | 9 | 7 | 6 | 10 | 4 | 8 |
| Circular Antenna Array Design | 3 | 5 | 2 | 4 | 1 | 6 | 8 | 9 | 7 | 10 |
| Spread Spectrum Radar Polyphase Code Design | 3 | 4 | 2 | 8 | 7 | 5 | 6 | 1 | 10 | 9 |
| Cassini 2: Spacecraft Trajectory Optimization Problem | 3 | 1 | 4 | 7 | 1 | 5 | 8 | 10 | 6 | 9 |
| Wireless Coverage Antenna Placement | 2 | 1 | 5 | 4 | 6 | 9 | 7 | 10 | 3 | 8 |
| Transmission Network Expansion Planning | 1 | 1 | 7 | 7 | 1 | 1 | 9 | 6 | 1 | 10 |
| Dynamic Economic Dispatch 1 | 1 | 3 | 6 | 4 | 5 | 7 | 8 | 2 | 9 | 10 |
| Dynamic Economic Dispatch 2 | 2 | 7 | 4 | 1 | 8 | 5 | 6 | 3 | 10 | 9 |
| Static Economic Load Dispatch 1 | 8 | 3 | 4 | 5 | 9 | 7 | 6 | 10 | 1 | 2 |
| Static Economic Load Dispatch 2 | 3 | 1 | 4 | 2 | 8 | 5 | 6 | 9 | 7 | 10 |
| Static Economic Load Dispatch 3 | 8 | 1 | 3 | 2 | 9 | 6 | 5 | 10 | 4 | 7 |
| Static Economic Load Dispatch 4 | 3 | 10 | 4 | 1 | 8 | 2 | 5 | 9 | 6 | 7 |
| Static Economic Load Dispatch 5 | 1 | 5 | 3 | 2 | 6 | 7 | 9 | 4 | 10 | 8 |
Table 9. Detailed ranking of algorithms based on mean after 1.5 × 10⁵ FEs (1 = green, 2 = blue).
| Problem | ARQ | JSO | TRIDENT-DE | UDE3 | EA4Eig | mLSHADE_RL | SaDE | CMA-ES | jDE | CLPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| Lennard-Jones Potential (10 atoms) | 2 | 5 | 3 | 8 | 7 | 4 | 6 | 1 | 9 | 10 |
| Lennard-Jones Potential (13 atoms) | 2 | 5 | 3 | 8 | 7 | 4 | 6 | 1 | 10 | 9 |
| Lennard-Jones Potential (38 atoms) | 1 | 3 | 2 | 8 | 6 | 5 | 7 | 4 | 9 | 10 |
| Tersoff Potential for model Si (B) | 2 | 3 | 1 | 9 | 7 | 5 | 6 | 4 | 8 | 10 |
| Tersoff Potential for model Si (C) | 1 | 4 | 5 | 1 | 9 | 5 | 8 | 10 | 1 | 7 |
| Parameter Estimation for Frequency-Modulated Sound Waves | 1 | 1 | 1 | 6 | 1 | 5 | 7 | 10 | 8 | 9 |
| Circular Antenna Array Design | 1 | 5 | 2 | 9 | 7 | 4 | 6 | 3 | 10 | 8 |
| Spread Spectrum Radar Polyphase Code Design | 1 | 1 | 6 | 8 | 1 | 5 | 9 | 1 | 7 | 10 |
| Cassini 2: Spacecraft Trajectory Optimization Problem | 1 | 1 | 1 | 1 | 9 | 1 | 7 | 10 | 1 | 8 |
| Wireless Coverage Antenna Placement | 1 | 1 | 1 | 8 | 1 | 1 | 9 | 1 | 1 | 10 |
| Transmission Network Expansion Planning | 1 | 3 | 6 | 4 | 5 | 7 | 8 | 2 | 9 | 10 |
| Dynamic Economic Dispatch 1 | 2 | 7 | 4 | 1 | 8 | 5 | 6 | 3 | 10 | 9 |
| Dynamic Economic Dispatch 2 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 9 | 1 | 8 |
| Static Economic Load Dispatch 1 | 3 | 1 | 5 | 2 | 10 | 6 | 7 | 9 | 4 | 8 |
| Static Economic Load Dispatch 2 | 1 | 1 | 1 | 1 | 9 | 5 | 6 | 10 | 8 | 7 |
| Static Economic Load Dispatch 3 | 5 | 3 | 2 | 1 | 7 | 4 | 6 | 10 | 8 | 9 |
| Static Economic Load Dispatch 4 | 1 | 5 | 3 | 2 | 7 | 6 | 8 | 4 | 10 | 9 |
| Static Economic Load Dispatch 5 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
Table 10. Comparison of algorithms and final ranking (1 = green, 2 = blue).
| Algorithm | Best Total Rank | Mean Total Rank | Overall Rank Sum | Average Rank |
|---|---|---|---|---|
| ARQ | 49 | 28 | 77 | 2.139 |
| JSO | 61 | 52 | 113 | 3.139 |
| TRIDENT-DE | 67 | 50 | 117 | 3.250 |
| UDE3 | 89 | 82 | 171 | 4.750 |
| mLSHADE_RL | 98 | 79 | 177 | 4.917 |
| CMA-ES | 104 | 100 | 204 | 5.666 |
| EA4Eig | 114 | 116 | 230 | 6.389 |
| SaDE | 118 | 120 | 238 | 6.611 |
| jDE | 122 | 123 | 245 | 6.806 |
| CLPSO | 155 | 161 | 316 | 8.778 |
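Table 10's summary columns follow directly from Tables 8 and 9: each algorithm's 18 per-problem ranks are summed separately for best and mean, the two sums are added, and the average divides by the total number of rankings (2 × 18 = 36). A minimal sketch of that aggregation (the function name is illustrative; the example ranks are ARQ's rows from Tables 8 and 9):

```python
def aggregate_ranks(best_ranks, mean_ranks):
    """Combine per-problem ranks into the summary columns of Table 10."""
    best_total = sum(best_ranks)
    mean_total = sum(mean_ranks)
    overall = best_total + mean_total
    average = overall / (len(best_ranks) + len(mean_ranks))  # 36 rankings total
    return best_total, mean_total, overall, round(average, 3)


# ARQ's 18 per-problem ranks from Table 8 (best) and Table 9 (mean):
arq_best = [1, 3, 2, 1, 1, 3, 3, 3, 3, 2, 1, 1, 2, 8, 3, 8, 3, 1]
arq_mean = [2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 3, 1, 5, 1, 1]
print(aggregate_ranks(arq_best, arq_mean))  # (49, 28, 77, 2.139)
```

The same computation reproduces every row of Table 10, e.g. CLPSO's totals of 155 and 161 give 316/36 = 8.778.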

Share and Cite

MDPI and ACS Style

Charilogis, V.; Tsoulos, I.G.; Gianni, A.M.; Tsalikakis, D. ARQ: A Cohesive Optimization Design for Stable Performance on Noisy Landscapes. Appl. Sci. 2025, 15, 12180. https://doi.org/10.3390/app152212180


