Article

Healing Intelligence: A Bio-Inspired Metaheuristic Optimization Method Using Recovery Dynamics

by
Vasileios Charilogis
and
Ioannis G. Tsoulos
*
Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
*
Author to whom correspondence should be addressed.
Future Internet 2025, 17(10), 441; https://doi.org/10.3390/fi17100441
Submission received: 17 August 2025 / Revised: 21 September 2025 / Accepted: 25 September 2025 / Published: 27 September 2025

Abstract

BioHealing Optimization (BHO) is a bio-inspired metaheuristic that operationalizes the injury–recovery paradigm through an iterative loop of recombination, stochastic injury, and guided healing. The algorithm is further enhanced by adaptive mechanisms, including a scar map, hot-dimension focusing, RAGE/hyper-RAGE bursts (Rapid Aggressive Global Exploration), and healing-rate modulation, enabling a dynamic balance between exploration and exploitation. Across 17 benchmark problems with 30 runs each, under a fixed budget of 1.5 × 10^5 function evaluations, BHO achieves the lowest overall rank in both the “best-of-runs” (47) and the “mean-of-runs” (48) rankings, giving an overall rank sum of 95 and an average rank of 2.794. Representative first-place results include Frequency-Modulated Sound Waves, the Lennard–Jones potential, and Electricity Transmission Pricing. In contrast to prior healing-inspired optimizers such as Wound Healing Optimization (WHO) and Synergistic Fibroblast Optimization (SFO), BHO uniquely integrates (i) an explicit tri-phasic architecture (DE/best/1/bin recombination → Gaussian/Lévy injury → guided healing), (ii) per-dimension stateful adaptation (scar map, hot-dims), and (iii) stagnation-triggered bursts (RAGE/hyper-RAGE). These features provide a principled exploration–exploitation separation that is absent in WHO/SFO.

Graphical Abstract

1. Introduction

Global optimization refers to the task of identifying the global minimum of a real-valued, continuous objective function f(x), where the variable x belongs to a predefined, bounded search space S ⊂ R^n. The goal is to determine the point x* ∈ S such that the function f(x) achieves its lowest possible value over the entire domain:
x* = arg min_{x ∈ S} f(x)
where f(x) is the objective function to be minimized. This function can represent a variety of criteria depending on the problem context, such as cost, loss, error, potential energy, or any other performance metric. S is the feasible search space, a compact subset of R^n, meaning it is both closed and bounded. Typically, S is defined as an n-dimensional hyperrectangle (also called a box constraint), given by
S = [a_1, b_1] × [a_2, b_2] × ⋯ × [a_n, b_n]
This denotes that each variable x_i is constrained within a finite interval: x_i ∈ [a_i, b_i], for i = 1, 2, …, n. The Cartesian product of these intervals defines the multidimensional region where the search for the global optimum takes place.
Optimization is one of the most fundamental and widely applied domains of computational intelligence, with a vast range of applications in scientific, technological, and industrial fields. Although classical optimization techniques can be effective for small- or medium-scale problems, they often fail to deliver satisfactory results when applied to complex, nonlinear, and high-dimensional environments, where issues such as non-convexity, high dimensionality, and the presence of multiple local optima dominate the search landscape. In this context, recent years have seen a continuous rise in interest in metaheuristic methods, which offer flexible and stochastic search tools capable of addressing complex optimization problems without requiring derivative information or assumptions of continuity in the solution space.
Metaheuristic techniques are typically inspired by natural, biological, social, or physical processes, aiming to simulate powerful mechanisms for balancing exploration and exploitation within complex search spaces. Classic examples include Genetic Algorithms (GAs) [1], Particle Swarm Optimization (PSO) [2], and Ant Colony Optimization (ACO) [3], which have been widely used for decades. In recent years, however, a multitude of novel metaheuristics have emerged, motivated by the desire to overcome common limitations such as premature convergence and weak performance on rugged, multimodal landscapes [4]. These methods draw inspiration from a broad range of biological and ecological systems. From the animal kingdom, algorithms like Artificial Bee Colony (ABC) [5], Grey Wolf Optimizer (GWO) [6], the Whale Optimization Algorithm (WOA) [7], the Dragonfly Algorithm (DA) [8], Cuckoo Search (CS) [9], and the Bat Algorithm [10] emulate foraging, social, or navigation behaviors. Predator–prey-based algorithms such as Harris Hawks Optimization (HHO) [11] and the Snake Optimizer [12] capture hunting dynamics. Others derive from insect colonies or swarm intelligence, including the Firefly Algorithm [13], Glowworm Swarm Optimization (GSO) [14], and Butterfly Optimization [15]. Likewise, bacterial or microbial behaviors inspired algorithms such as Bacterial Foraging Optimization (BFO) [16], Virus Colony Search [17], and COVIDOA [18]. Some algorithms are motivated by botanical and plant behavior, such as the Plant Propagation Algorithm (PPA) [19], Invasive Weed Optimization (IWO) [20], and the Root Growth Optimizer [21]. Other methods are inspired by natural physical phenomena, including the Gravitational Search Algorithm (GSA) [22], Simulated Annealing [23], and the Harmony Search algorithm [24]. More recently, complex hybrid and bio-inspired models such as Gorilla Troops Optimization (GTO) [25], the Reptile Search Algorithm (RSA) [26], the Sine Cosine Algorithm (SCA) [27], and the Slime Mould Algorithm (SMA) [28] have been introduced. Despite the thematic diversity and creativity of modern bio-inspired metaheuristic techniques, many of them continue to face common shortcomings, such as the lack of truly adaptive dynamics, a static and rigid balance between exploration and exploitation, and the absence of documented convergence guarantees [29]. These limitations underline the need for next-generation algorithms capable of self-regulating their behavior according to the state of the search, maintaining stability in convergence, and faithfully reflecting the principles and rhythms of complex biological processes.
BioHealing Optimization (BHO) is positioned within this scientific and technological context, drawing inspiration from the regenerative process of wound healing in living organisms. Wound healing is a natural function characterized by a delicate balance between disruption and restoration, aimed at re-establishing homeostasis. BHO translates this biological principle into the optimization domain, creating a multi-phase methodology in which each phase has a distinct yet interdependent role:
  • Injury phase: Rather than relying on static or simplistic modifications, BHO applies stochastic disturbances to selected dimensions of candidate solutions, emulating the initial, uncontrolled nature of biological injury. These disturbances may follow distributions that favor either gentler changes or rare, high-impact shifts, with their intensity dynamically adapted as the search progresses, encouraging broad exploration in the early stages and gradually reducing disruption near convergence.
  • Healing phase: Subsequently, BHO selectively guides the modified dimensions toward the best-known solution in a process that mirrors the progressive restoration of biological tissues. This movement is neither mechanical nor fixed; the proportion and direction of adjustments are adapted to the search conditions, maintaining diversity while also enhancing the exploitation of high-quality solutions.
  • Recombination phase: Optionally, this process is preceded by an information-exchange mechanism inspired by differential evolution (DE) [30,31,32], where components of the best solution are combined with differences from other members of the population. This allows the inheritance of strong traits while simultaneously introducing variations that keep the search active.
Relative to earlier healing-inspired optimizers such as Wound Healing Optimization (WHO) [33] and Synergistic Fibroblast Optimization (SFO) [34], BHO instantiates a concrete three-phase loop with per-dimension stochastic injury using Gaussian and Lévy steps under a decaying intensity schedule, guided healing toward the incumbent best, and an optional DE/best/1/bin recombination path. These operators are governed by stateful per-dimension controllers that update from accepted improvements, including a scar map with momentum, hot-dimension focusing, and stagnation-triggered RAGE and hyper-RAGE bursts, together with dynamic modulation of healing probabilities and intensities. As a result, BHO separates exploration from exploitation, preserves feasibility within bounds, and adapts both where it perturbs and how strongly it perturbs over time, providing capabilities that WHO and SFO do not explicitly implement.
Beyond healing-inspired methods, the last three years have produced a stream of contemporary population-based optimizers and refined DE and ES variants that frame the immediate state of the art for positioning BHO. Recent nature-inspired single-swarm methods include the Osprey Optimization Algorithm (OOA), the One-to-One-Based Optimizer (OOBO), the Secretary Bird Optimization Algorithm (SBOA), and the Dream Optimization Algorithm (DOA), each reporting competitive results on standard benchmarks and releasing enough detail for replication [35,36,37,38]. Within the differential evolution family, modern designs emphasize efficiency and adaptivity, for example, EBJADE with multi-population management and elite regeneration, iDE-APAMS with adaptive population allocation and mutation selection, and RLDMDE, which uses reinforcement learning to choose mutation strategies from diversity signals [39,40,41]. On the evolution strategies side for discrete and mixed variables, the line of work on (1 + 1)-CMA-ES with Margin and CMA-ES-SoP stabilizes sampling for integer-valued domains through margin control and point-set modeling [42,43]. Together, these threads capture the current landscape and provide up-to-date baselines against which BHO can be compared in the remainder of this paper.
Section 2 presents BHO. Section 2.1 provides the core pseudocode, and Section 2.2 details the adaptive mechanisms and their integration in the main loop. Section 3 describes the experimental protocol and the benchmark suite. Section 3.1 lists the test functions. Section 3.2 reports the parameter-sensitivity study. Section 3.3 presents the comparative experimental results under a fixed evaluation budget. Section 3.4 introduces the BHO-assisted neural network training case study on classification datasets. Section 4 states the conclusions, and Section 5 outlines directions for future research.

2. The BioHealing Optimization Algorithm

2.1. The Basic Body of the BioHealing Optimizer Pseudocode

The overall algorithm of the method is described below.
Notes for pseudocode:
U(0,1) returns a sample from Uniform[0,1]; randInt(1, dim) returns a uniformly random integer in {1, …, dim}; clamp(v, lower, upper) projects v onto [lower, upper]. stochasticStep() produces a zero-mean random step; in practice, this is instantiated either as Gaussian N(0,1) for fine local search or as a Lévy step (e.g., Mantegna) to enable rare large jumps, with the selection following the schedule/policy described in this paper. healStep(hr) returns a scalar gain a used to bias coordinates toward x_best when the per-dimension healing trigger U(0,1) < hr fires. FE counts every call to f(·) (initialized after the first NP evaluations and incremented after each new evaluation inside the loop). The ws schedule line maintains a small, nonzero intensity to avoid premature freezing late in the run.
The core loop of Algorithm 1 of the BHO maintains a population of candidate solutions within box constraints and repeatedly balances broad exploration with guided exploitation. It begins by sampling each vector uniformly within the per-dimension bounds, evaluating all candidates, and designating the incumbent best. At every iteration, the current elite is identified, and the wound intensity follows a monotone decay schedule so that early updates encourage wide exploration while later ones stabilize around promising regions. For each non-elite individual, an optional differential evolution recombination (best/1, bin) may combine the incumbent best with a scaled difference of two distinct peers; all values are kept feasible through clamping to the bounds. The injury phase then applies a per-dimension stochastic disturbance with a specified probability, using either Gaussian noise or a Lévy-tailed step produced by a generic stochasticStep() procedure and scaled by the current wound intensity and the variable range; feasibility is again enforced by clamping. The healing phase gently attracts modified components toward the incumbent best with a specified probability, using a step a = healStep( h r ) that increases with the healing rate and preserves bounds. The resulting trial is evaluated and accepted greedily only if it improves the previous fitness; whenever an improvement is accepted, the global best is also updated. The procedure terminates upon exhausting either the iteration budget or the cap on function evaluations, and returns the pair ( x b e s t , f b e s t ). This description captures the clean, modular backbone of BHO (elite selection, optional recombination, injury, healing, and greedy replacement) while allowing optional extensions to be integrated without altering the fundamental methodology.
Below are the core equations of BHO’s three components (injury, healing, and recombination), together with the minimal auxiliary relations required for completeness.
Notations:
  • x i , j ( i t e r ) : coordinate j of individual i at iteration i t e r ;
  • [ lower j , upper j ] : bounds;
  • x b e s t , j ( i t e r ) : incumbent best;
  • clamp(z, lower, upper) = min(max(z, lower), upper);
  • U ( 0 , 1 ) : uniform random in [0,1].
Algorithm 1 The basic body of the BioHealing Optimizer pseudocode
  • Input:
       f: objective function to minimize
        d i m : problem dimensionality
        N P : population size
        i t e r m a x : maximum number of iterations
        F E : counts all calls to f ( · ) so far
        F E m a x : maximum number of function evaluations
        l o w e r , u p p e r : bounds for each variable
    Params:
        w s 0 : initial wound intensity
        w p : probability of wounding per dimension
        h r : probability of healing per dimension
        r p : probability of recombination with the best solution
       F : differential weight scaling factor in recombination
        C R : crossover probability in recombination
    Output:
        x b e s t : the best solution found
        f b e s t : the value of f at best solution
    Initialization:
    01 For i=1.. N P  do:
    02         For j=1.. d i m  do:
    03                 x i , j U ( l o w e r j , u p p e r j )
    04         End for
    05 f i t i = f ( x i )
    06 End for
    07 x b e s t , f b e s t = argmin( f i t i ) // elite at initialization
    08 i t e r = 0
    09 F E = 0
    Main loop:
    10 While  i t e r < i t e r m a x or F E < F E m a x do
    11         i t e r = i t e r + 1
    12         e l i t e = argmin( f i t i )
    13         x b e s t = x e l i t e
    14         f b e s t = f i t e l i t e
    15         // decay wound intensity but keep a safety floor at 5% of w s 0
    16         w s = max(0.05 · w s 0 , w s 0 · (1 − i t e r / i t e r m a x ))
    17         For i=1.. N P  do
    18                 If i = e l i t e  then
    19                         continue
    20                 end if
    21                  x o l d = x i , f o l d = f i t i
    22                  f b e s t = f i t e l i t e
    23                  // Optional DE(best/1,bin) recombination
    24                 if  U ( 0 , 1 ) < r p then
    25                         Choose  r 1 ≠ i , r 2 ≠ i , r 2 ≠ r 1
    26                          j r = randInt(1, d i m ) // ensure at least one dimension crosses
    27                         For j=1.. d i m  do
    28                               If  U ( 0 , 1 ) < C R or j = j r  then
    29                               v = x b e s t , j + F · ( x r 1 , j − x r 2 , j )
    30                                  x i , j = clamp(v, l o w e r j , u p p e r j )
    31                              End if
    32                         End for
    33                End if
    34                // Stochastic injury: Gaussian or Lévy step per dimension
    35                For j=1.. d i m  do
    36                        If  U ( 0 , 1 ) < w p  then
    37                               ξ = stochasticStep() // returns zero-mean step, Gaussian or Lévy
    38                              d = w s · ξ · ( u p p e r j − l o w e r j )
    39                               x i , j = clamp( x i , j + d, l o w e r j , u p p e r j )
    40                        End if
    41                End for
    42                // Guided healing toward x b e s t
    43                a = healStep( h r ) // scalar healing gain, e.g., in (0,1)
    44                For j=1.. d i m  do
    45                         If  U ( 0 , 1 ) < h r  then
    46                               x i , j = clamp( x i , j + a · ( x b e s t , j − x i , j ), l o w e r j , u p p e r j )
    47                         End if
    48                End for
    49                 f n e w = f( x i )
    50                 F E = F E +1
    51                // Greedy acceptance with rollback
    52                If  f n e w < f o l d  then
    53                          f i t i = f n e w
    54                         If  f n e w < f b e s t  then
    55                               f b e s t = f n e w
    56                               x b e s t = x i
    57                        End if
    58                Else
    59                          x i = x o l d
    60                          f i t i = f o l d
    61                End if
    62         End for
    63 End while
    64 Return  x b e s t , f b e s t
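For readers who prefer compilable code to pseudocode, the sketch below mirrors the backbone of Algorithm 1 in C++ under simplifying assumptions: the injury step is purely Gaussian, the healing gain follows the simple linear rule a = 0.15 + 0.35·hr capped below 1, and the optional DE recombination path is omitted. All identifiers (BhoParams, bho_minimize, and so on) are illustrative and are not part of the released OPTIMUS implementation.

// Minimal BHO backbone: uniform initialization, Gaussian injury, guided healing
// toward the incumbent best, and greedy acceptance with rollback (recombination omitted).
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

struct BhoParams {
    int NP = 30, iter_max = 1000, FE_max = 150000;
    double ws0 = 0.30;   // initial wound intensity
    double wp  = 0.30;   // per-dimension wounding probability
    double hr  = 0.50;   // per-dimension healing probability
};

struct BhoResult { std::vector<double> x_best; double f_best; };

BhoResult bho_minimize(const std::function<double(const std::vector<double>&)>& f,
                       const std::vector<double>& lower, const std::vector<double>& upper,
                       const BhoParams& p, unsigned seed = 1) {
    const int dim = static_cast<int>(lower.size());
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> U(0.0, 1.0);
    std::normal_distribution<double> G(0.0, 1.0);
    auto clamp = [](double v, double lo, double up) { return std::min(std::max(v, lo), up); };

    // Initialization: sample every coordinate uniformly within the box constraints.
    std::vector<std::vector<double>> x(p.NP, std::vector<double>(dim));
    std::vector<double> fit(p.NP);
    int FE = 0;
    for (int i = 0; i < p.NP; ++i) {
        for (int j = 0; j < dim; ++j) x[i][j] = lower[j] + U(rng) * (upper[j] - lower[j]);
        fit[i] = f(x[i]); ++FE;
    }
    int elite = static_cast<int>(std::min_element(fit.begin(), fit.end()) - fit.begin());
    std::vector<double> x_best = x[elite];
    double f_best = fit[elite];

    for (int iter = 1; iter <= p.iter_max && FE < p.FE_max; ++iter) {
        // Wound intensity decays with iterations but keeps a 5% safety floor.
        double ws = std::max(0.05 * p.ws0, p.ws0 * (1.0 - double(iter) / p.iter_max));
        double a  = std::min(0.99, 0.15 + 0.35 * p.hr);   // simple healing gain

        for (int i = 0; i < p.NP && FE < p.FE_max; ++i) {
            if (i == elite) continue;
            std::vector<double> x_old = x[i];
            double f_old = fit[i];

            // Injury: per-dimension Gaussian disturbance scaled by ws and the variable range.
            for (int j = 0; j < dim; ++j)
                if (U(rng) < p.wp)
                    x[i][j] = clamp(x[i][j] + ws * G(rng) * (upper[j] - lower[j]), lower[j], upper[j]);

            // Healing: attract modified coordinates toward the incumbent best.
            for (int j = 0; j < dim; ++j)
                if (U(rng) < p.hr)
                    x[i][j] = clamp(x[i][j] + a * (x_best[j] - x[i][j]), lower[j], upper[j]);

            // Greedy acceptance with rollback.
            double f_new = f(x[i]); ++FE;
            if (f_new < f_old) {
                fit[i] = f_new;
                if (f_new < f_best) { f_best = f_new; x_best = x[i]; }
            } else { x[i] = x_old; fit[i] = f_old; }
        }
        elite = static_cast<int>(std::min_element(fit.begin(), fit.end()) - fit.begin());
    }
    return {x_best, f_best};
}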
1.
Injury (Stochastic perturbation)
Notations and definitions:
  • Bounds and projection: lower and upper are vectors of per-dimension bounds, clamp ( lower j ,  upper j ) projects a scalar to [ lower j ,  upper j ];
  • Wound intensity: ws(iter) is the injury intensity at iteration iter; ws_0 is its initial value.
  • Stochastic step ξ: At iteration iter, ξ_{i,j}(iter) is a zero-mean scalar, sampled independently across i, j, and iter from either (i) a standard Gaussian (unit variance) or (ii) a Lévy/Mantegna generator with stability index α ∈ [1, 2] and a global Lévy scale. An optional truncation |ξ_{i,j}(iter)| ≤ ξ_max may be applied to avoid extreme outliers.
With per-dimension wound probability w p (or adaptive w p , j ):
   x_{i,j}(iter) ← clamp( x_{i,j}(iter) + ws(iter) · ξ_{i,j}(iter) · (upper_j − lower_j), lower_j, upper_j )
where ξ_{i,j}(iter) = stochasticStep(), and the wound intensity ws(iter) can be scheduled, e.g.,
           ws(iter) = max( 0.05 · ws_0, ws_0 · (1 − iter/iter_max) )
Auxiliary (stochastic step definition):
     ξ = N(0, 1) (Gaussian), or ξ = levyScale · u / |v|^{1/α}, with u ∼ N(0, σ_u²), v ∼ N(0, 1) (Lévy/Mantegna)
with
             σ_u = [ Γ(1+α) · sin(πα/2) / ( Γ((1+α)/2) · α · 2^{(α−1)/2} ) ]^{1/α}
and an optional global Lévy scale factor.
The injury mechanism relies on stochastic perturbations that are complementary in their search behavior. Gaussian steps exhibit light tails and finite variance, which favor fine-grained local exploration and controlled diffusion around the current region of interest. In contrast, Lévy-distributed steps (with stability index α [ 1 , 2 ] ) have heavy tails, producing occasional long jumps that help the algorithm escape local basins and probe distant regions of the search space. By enabling per-dimension perturbations, BHO can adjust the disruption intensity across coordinates, which is beneficial in non-separable and multimodal landscapes where sensitivity is uneven across variables. This dual-mode design (Gaussian vs. Lévy) therefore provides a principled mechanism to balance local refinement and global exploration within the same injury phase.
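As a concrete illustration of the two step generators, the following C++ snippet implements the Mantegna draw defined above alongside the Gaussian alternative; the default stability index, the global Lévy scale, and the truncation bound are illustrative choices rather than prescribed values.

// Stochastic steps for the injury phase: heavy-tailed Lévy (Mantegna) and light-tailed Gaussian.
#include <cmath>
#include <random>

double levy_step(std::mt19937& rng, double alpha = 1.5, double levy_scale = 0.01,
                 double xi_max = 10.0) {
    const double pi = std::acos(-1.0);
    // sigma_u from the Mantegna formula (sigma_v = 1).
    double sigma_u = std::pow(
        std::tgamma(1.0 + alpha) * std::sin(pi * alpha / 2.0) /
            (std::tgamma((1.0 + alpha) / 2.0) * alpha * std::pow(2.0, (alpha - 1.0) / 2.0)),
        1.0 / alpha);
    std::normal_distribution<double> Nu(0.0, sigma_u), Nv(0.0, 1.0);
    double u = Nu(rng), v = Nv(rng);
    double xi = levy_scale * u / std::pow(std::fabs(v), 1.0 / alpha);
    // Optional truncation to suppress extreme outliers.
    return std::max(-xi_max, std::min(xi, xi_max));
}

double gaussian_step(std::mt19937& rng) {
    std::normal_distribution<double> N01(0.0, 1.0);
    return N01(rng);   // fine-grained local exploration
}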
2.
Healing (guided move toward x b e s t )
Notations and definitions:
  • Healing probability: h r [ 0 , 1 ] is the per-dimension probability that a healing move is applied (if adaptive per coordinate, write h r , j ).
  • Healing gain: a(iter) is a scalar returned by healStep(hr) at iteration iter; it controls the attraction strength toward the incumbent best. Unless stated otherwise, a(iter) ∈ [0, 1] and is non-decreasing in hr (e.g., the simple rule a(iter) = 0.15 + 0.35 · hr, capped to stay below 1).
  • Incumbent best: x b e s t i t e r denotes the best solution available at iteration i t e r .
  • Bounds and projection: lower and upper are vectors of per-dimension bounds; clamp (z, lower , upper j ) projects a scalar to [ lower j , upper j ].
  • Order of updates: Healing takes as input the post-injury state x i , j ( i t e r , inj ) and produces x i , j ( i t e r , heal ) .
With per-dimension healing probability h r :
    x_{i,j}(iter, heal) = clamp( x_{i,j}(iter, inj) + a(iter) · ( x_{best,j}(iter) − x_{i,j}(iter, inj) ), lower_j, upper_j ),
where a(iter) = healStep(hr) (e.g., a simple linear rule a(iter) = 0.15 + 0.35 · hr, bounded in [0, 1)).
3.
Recombination (DE/best/1/bin)
Notations and definitions:
  • Recombination probability: r p is the probability that differential evolution (DE) recombination is applied to candidate i. Otherwise, the candidate proceeds without recombination.
  • Mutation strategy: The variant used is DE/best/1, meaning one difference vector is added to the current best solution. Specifically, two distinct indices r1, r2 ≠ i are sampled uniformly, and the donor vector is constructed as v_j = x_{best,j}(iter) + F · ( x_{r1,j}(iter) − x_{r2,j}(iter) ).
  • Scaling factor: F [ 0 , 2 ] is the differential weight controlling the magnitude of the perturbation.
  • Crossover mask: C is a binary mask with entries C j ∼Bernoulli( CR ), where C R [ 0 , 1 ] is the crossover probability. To ensure at least one coordinate crosses over, one randomly selected dimension j r a n d is forced to have C j r a n d = 1.
  • Offspring construction: The trial vector after recombination is x i ( i t e r , r e c ) , which combines donor and parent according to the mask and is then projected to the feasible range by clamp.
With probability r p the DE/best/1 with binomial crossover is applied:
             v_{i,j} = x_{best,j}(iter) + F · ( x_{r1,j}(iter) − x_{r2,j}(iter) )
           C_j ∼ Bernoulli(CR), enforce C_{j_rand} = 1
       x_{i,j}(iter, rec) = clamp( C_j · v_{i,j} + (1 − C_j) · x_{i,j}(iter), lower_j, upper_j )
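The following C++ sketch shows one way to realize the DE/best/1/bin step above for a single candidate (assuming a population of at least three individuals); function and variable names are illustrative and do not correspond to the OPTIMUS API.

// DE/best/1/bin recombination applied to candidate i; j_rand forces at least one crossover.
#include <algorithm>
#include <random>
#include <vector>

void de_best_1_bin(std::vector<double>& xi, int i,
                   const std::vector<std::vector<double>>& pop,
                   const std::vector<double>& x_best,
                   const std::vector<double>& lower, const std::vector<double>& upper,
                   double F, double CR, std::mt19937& rng) {
    const int NP  = static_cast<int>(pop.size());
    const int dim = static_cast<int>(xi.size());
    std::uniform_real_distribution<double> U(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, NP - 1), pickDim(0, dim - 1);

    // Two distinct peers, both different from the target index i.
    int r1, r2;
    do { r1 = pick(rng); } while (r1 == i);
    do { r2 = pick(rng); } while (r2 == i || r2 == r1);
    int j_rand = pickDim(rng);

    for (int j = 0; j < dim; ++j) {
        if (U(rng) < CR || j == j_rand) {
            double v = x_best[j] + F * (pop[r1][j] - pop[r2][j]);   // donor coordinate
            xi[j] = std::min(std::max(v, lower[j]), upper[j]);      // clamp to the bounds
        }
    }
}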
4.
Greedy acceptance
Notations and definitions:
  • Trial fitness: After injury, healing, or recombination, the candidate x i ( i t e r · ) is evaluated to obtain f n e w . The previous fitness is f o l d .
  • Greedy rule: The update is strictly elitist: if f n e w < f o l d , the candidate is accepted and its fitness is updated; otherwise, the previous state is restored.
  • Best solution update: If the accepted candidate also improves upon the current best value f b e s t , then both f b e s t and x b e s t are updated.
  • Rollback mechanism: If the trial vector fails to improve, the algorithm explicitly resets x_i ← x_old and fit_i ← f_old to prevent quality degradation.
After the phases (in the algorithm’s prescribed order):
         accept x_i^{new} if f(x_i^{new}) < f(x_i^{old}), else revert.

2.2. Integration of Adaptive Mechanisms into the BHO Core Loop

The following mechanisms plug into the injury and healing stages of Algorithm 1, changing only how perturbations are generated and how guidance toward the best is applied, while selection and acceptance remain unchanged.
The integration of the extensions into the BHO core loop follows the method’s natural flow without altering its backbone. After initialization and before processing individuals in each iteration, the algorithm updates the stagnation status and configures any exploration bursts. At this point, a targeted micro-restart around the incumbent best may also be triggered when a lack of improvement persists. In the same pre-loop stage, the per-dimension importance scores are decayed, and the current hot set is selected so that subsequent perturbations are preferentially strengthened where recent gains have been observed.
At the heart of the iteration, immediately before applying stochastic disturbances, the noise generator is determined: the random step may use a heavy-tailed Mantegna draw to allow rare long jumps, or fall back to Gaussian noise, depending on the settings. The injury phase then exploits the hot-dimension priorities, the multiplicative burst effects when RAGE or hyper-RAGE windows are active, and a mild intensity boost when early stagnation is detected. The random step is gently biased toward the recent momentum direction of each coordinate through a dedicated bias coefficient, while bandage protection can be bypassed only during bursts so as not to throttle exploration. Immediately after the standard injuries, a targeted alpha-strike may rarely fire on a few coordinates, scaled by variable ranges and current wound strength, before control passes to healing.
The healing phase is modulated so that, during bursts, the restorative rate is temporarily reduced to avoid erasing exploration, followed by a short cooldown with amplified healing to smooth the return to stabilization. Optionally, while bursts are active, selected components may be copied from the incumbent best to inject direction without sacrificing diversity. Acceptance remains greedy, and only upon true improvement are the scar-map learning states updated: the per-dimension wounding probabilities and strengths, momentum, importance scores, and bandage. These updates feed the next injury cycle, conveying where and how stronger or more frequent disturbances are worthwhile. When no improvement occurs, the distance since the last best increases and can re-trigger soft boosts, bursts, or, if needed, micro-restarts at the beginning of the next iteration. In this way, the mechanisms are woven into injury, healing, and their transitions, while preserving the clean core architecture of elite selection, recombination, disturbance, restoration, and greedy replacement.
1.
Scar Map: Momentum and Bandage
Notations and definitions:
  • Per-dimension wound schedule: p^(w) = (p_1^(w), …, p_dim^(w)), s^(w) = (s_1^(w), …, s_dim^(w)), clamped in [p_min, p_max] and [s_min, s_max].
  • Momentum: m_j ∈ [−1, 1], the decayed running sign of recent accepted moves on dimension j.
  • Improvement score: d_j ≥ 0, a per-dimension score.
  • Bandage counter: b_{i,j} ∈ N_0, the freeze length for coordinate (i, j).
  • Signals: towardBest_j ∈ [0, 1] and signDir_j ∈ {−1, +1}.
  • Hyper-parameters: learning rate η, momentum decay ρ ∈ (0, 1), bias coefficient κ > 0, bandage length L_b, with bounds p_min, p_max, s_min, s_max.
After greedy acceptance ( f n e w < f o l d ), the scar map updates, per improved coordinate j, the wound probability p j ( w ) and s j ( w ) using learning rate η and clamping to admissible ranges. The update strength grows with t o w a r d B e s t j , so dimensions aligned with the incumbent best are reinforced. Momentum is refreshed as
m_j ← (1 − ρ) · m_j + ρ · signDir_j,
which biases future perturbations by κ · m_j. The score d_j accumulates proportional improvement, supporting hot-dimension selection. Finally, the bandage sets b_{i,j} ← L_b, freezing recently improved coordinates.
Scar map is invoked immediately after acceptance; in subsequent iterations, injury samples the per-dimension parameters p_j^(w), s_j^(w) (instead of a global wp), applies the bias from m_j, and skips coordinates with b_{i,j} > 0. Healing remains unchanged; the procedure is illustrated graphically in Figure 1.
The algorithm for Scar map is provided in Algorithm 2.
Algorithm 2 Scar map: momentum and bandage
  • Input: Set D of improved coordinates, t o w a r d B e s t j , s i g n D i r j for j D
    Params: η , p m i n , p m a x , s m i n , s m a x , ρ , κ , L b
    State: p j ( w ) , s j ( w ) , m j , d j , b i , j
    01 For jD do
    02      g p = η · (0.5 + 0.5 ·   t o w a r d B e s t j )
    03      g s = η · (0.25 + 0.75 ·   t o w a r d B e s t j )
    04      p j ( w ) = clamp( p j ( w ) + g p , p m i n , p m a x )
    05      s j ( w ) = clamp( s j ( w ) + g s , s m i n , s m a x )
    06      m j = (1 − ρ ) · m j + ρ · s i g n D i r j
    07      d j = d j + improvement_signal()
    08      If L b > 0 then
    09           b i , j = L b
    10     End if
    11 End for
    12 For each (i,j) do
    13     If b i , j > 0 then
    14          b i , j = b i , j − 1
    15         Continue
    16     End if
    17     Apply injury with prob. p j ( w ) , scale s j ( w ) , and bias κ · m j (momentum bias m o m b i a s j = κ · sign( m j ))
    18 End for
Scar map updates per-dimension memory and momentum after successful acceptance and provides the scores that hot-dims will use to focus exploration.
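A compact C++ sketch of this post-acceptance update is given below; the field names, the default constants, and the way the improvement signal is passed in (as a per-dimension gain) are illustrative assumptions, while the bounds and update rules follow Algorithm 2.

// Scar map: per-dimension wound schedule, momentum, score, and bandage, updated on acceptance.
#include <algorithm>
#include <vector>

struct ScarMap {
    // All vectors must be resized to the problem dimensionality before use.
    std::vector<double> p_w, s_w;    // per-dimension wound probability and scale
    std::vector<double> m, d;        // momentum in [-1,1] and improvement score >= 0
    std::vector<int> bandage;        // freeze counters b_{i,j} (one candidate shown)
    double eta = 0.05, rho = 0.30;
    double p_min = 0.05, p_max = 0.90, s_min = 0.01, s_max = 1.00;
    int L_b = 2;

    void on_accept(const std::vector<int>& improved,            // coordinates that improved
                   const std::vector<double>& towardBest,       // alignment with x_best, in [0,1]
                   const std::vector<double>& signDir,          // direction sign, -1 or +1
                   const std::vector<double>& gain) {           // proportional improvement signal
        auto clamp = [](double v, double lo, double up) { return std::min(std::max(v, lo), up); };
        for (int j : improved) {
            double gp = eta * (0.50 + 0.50 * towardBest[j]);
            double gs = eta * (0.25 + 0.75 * towardBest[j]);
            p_w[j] = clamp(p_w[j] + gp, p_min, p_max);       // reinforce wounding probability
            s_w[j] = clamp(s_w[j] + gs, s_min, s_max);       // reinforce wounding scale
            m[j]   = (1.0 - rho) * m[j] + rho * signDir[j];  // momentum toward recent direction
            d[j]  += gain[j];                                // score consumed by hot-dims
            if (L_b > 0) bandage[j] = L_b;                   // protect the improved coordinate
        }
    }
};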
2.
Hot-Dims Focus (Top-K and Boosts for Injury)
Notations and definitions:
  • Per-dimension score: d j 0 accumulates recent accepted improvements (same d j used by scar map).
  • Decay: δ (0,1) applies a mild forgetting to down-weight stale gains.
  • Top-K hot set: H_iter = topK(d, K) selects the K highest-scoring dimensions at iteration iter.
  • Boost factors (used at injury time only): β p 1 for probability, β s 1 for scale.
  • These do not modify the stored p j ( w ) , s j ( w ) ; they produce effective values for the current injury step:
    p̃_j = min(p_max, β_p · p_j^(w)) if j ∈ H_iter, and p̃_j = p_j^(w) otherwise;  s̃_j = min(s_max, β_s · s_j^(w)) if j ∈ H_iter, and s̃_j = s_j^(w) otherwise.
    (Bounds p m i n , p m a x , s m i n , s m a x as in scar map.)
  • Integration: Hot-dims runs once per iteration before injury: decay d j , select H i t e r , then injury uses p ˜ j , s ˜ j in place of the base p j ( w ) , s j ( w ) . Scar-map updating of d j still happens after greedy acceptance. The hot dims algorithm is listed in Algorithm 3.
During injury (this iteration), use p ˜ j as the per-dimension injury probability and s ˜ j as the intensity scale in place of p j ( w ) , s j ( w ) ; feasibility remains enforced via clamp. (No persistent change to the stored scar-map state.) The Hot-dims procedure is graphically illustrated in Figure 2.
Algorithm 3 Hot-dims focus: boosting probability and intensity on top-K dimensions
  • Params: δ , K, β p , β s
    State: d j (scores), p j ( w ) , s j ( w ) from Scar Map
    01 For j= 1.. d i m  do
    02        d j = (1 — δ ) · d j // mild forgetting
    03 End for
    04 H i t e r = t o p K ( d , K )
    05 // Injury-time effective parameters
    06 For j= 1.. d i m  do
    07       If  j H i t e r  then
    08             p ˜ j = min( p max , β p · p j ( w ) )
    09             s ˜ j = min( s max , β s · s j ( w ) )
    10       Else
    11             p ˜ j = p j ( w )
    12             s ˜ j = s j ( w )
    13       End if
    14 End for
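The sketch below expresses the same procedure in C++: the scores are decayed in place, the top-K indices are found with a partial sort, and the boosted values are returned as temporary effective parameters so that the stored scar-map state is never overwritten. Identifiers are illustrative.

// Hot-dims focus: decay scores, select the top-K dimensions, derive injury-time parameters.
#include <algorithm>
#include <numeric>
#include <vector>

struct HotDims {
    std::vector<double> p_eff, s_eff;   // effective per-dimension parameters for this iteration
    std::vector<int> hot;               // indices of the current hot set
};

HotDims hot_dims_focus(std::vector<double>& d,                 // per-dimension scores (decayed in place)
                       const std::vector<double>& p_w, const std::vector<double>& s_w,
                       int K, double delta, double beta_p, double beta_s,
                       double p_max, double s_max) {
    const int dim = static_cast<int>(d.size());
    for (double& dj : d) dj *= (1.0 - delta);                  // mild forgetting

    std::vector<int> idx(dim);
    std::iota(idx.begin(), idx.end(), 0);
    K = std::min(K, dim);
    std::partial_sort(idx.begin(), idx.begin() + K, idx.end(),
                      [&](int a, int b) { return d[a] > d[b]; });

    HotDims out{p_w, s_w, std::vector<int>(idx.begin(), idx.begin() + K)};
    for (int j : out.hot) {                                    // boost only the hot set
        out.p_eff[j] = std::min(p_max, beta_p * p_w[j]);
        out.s_eff[j] = std::min(s_max, beta_s * s_w[j]);
    }
    return out;
}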
Once hot dimensions are identified, exploration intensity is dynamically escalated under stagnation through RAGE and hyper-RAGE.
3.
RAGE and Hyper-RAGE
Notations and definitions:
  • Trigger threshold: τ > 0 . A burst is armed when imprSignal ( ) τ and no cooldown is active.
  • Burst length: B N 0 . During a burst, the counter b rage > 0 is decremented each iteration.
  • Hot-set magnifier: μ ≥ 1. While b_rage > 0, the hot set size is enlarged to K′ = min(dim, μ · K); otherwise it remains K.
  • Boost multipliers (RAGE): φ 1 for probability and γ 1 for scale; these are applied o n l y during a burst.
  • Score shaping (hyper-RAGE): ν 0 and per-dimension weights w j [ 0 , 1 ] obtained from the scores d j (e.g., w j = d j / max d or rank-based).
  • Cooldown: L c N 0 . After a burst ends ( b rage = 0 ), a cooldown c rage L c prevents immediate retriggering.
  • Effective injury parameters: As in hot-dims, the current-iteration values are p ˜ j , s ˜ j (possibly boosted for j H t ). The stored schedules p j ( w ) , s j ( w ) remain unchanged.
RAGE triggers immediately after greedy acceptance if imprSignal() ≤ τ and c_rage = 0, opening a B-iteration window. Before injury, the hot set H_t is computed with size K′ (if in burst) or K (otherwise). The injury step then uses the effective parameters
p̂_j = min(p_max, φ · p̃_j),  ŝ_j = min(s_max, γ · s̃_j),  for j ∈ H_t,
and p̂_j = p̃_j, ŝ_j = s̃_j for j ∉ H_t.
Hyper-RAGE makes boosts dimension-aware via
φ_j = φ · (1 + ν · w_j),  γ_j = γ · (1 + ν · w_j),
so that higher-scoring coordinates receive proportionally stronger yet bounded pressure. When the burst ends, cooldown starts, and healing remains unchanged. All boosts are non-persistent (state schedules p j ( w ) , s j ( w ) are not overwritten). The rage procedure is given in Figure 3.
The Explosive exploration under stagnation procedure is described in Algorithm 4.
Algorithm 4 RAGE/hyper-RAGE: Explosive exploration under stagnation
  • Input current scores d j , base per-dimension schedules p j ( w ) , s j ( w ) , hot-set size K.
    Parameters: trigger τ > 0 , burst length B N 0 , magnifier μ 1 , boosts φ 1 , γ 1 , shaping ν 0 , cooldown L c N 0 .
    State: burst counter b rage N 0 , cooldown c rage N 0 .
    01 At acceptance (after f new < f old )
    02 If  c rage = 0  and  imprSignal() = | f old − f new | ≤ τ  then
    03       b rage = B
    04 End if
    05 //Before injury (each iteration):
    06 Hot-set size:
    07 If b rage > 0 then
    08      K′ = min(dim, μ · K)
    09 Else
    10      K′ = K
    11 End if
    12 Hot set selection: H t ← topK(d, K′).
    13 Effective bases (from hot-dims): compute p ˜ j , s ˜ j for all j (temporary, non-persistent).
    14 //RAGE / hyper-RAGE boosts (for use in this injury):
    15 For  j = 1 ,, dim do
    16      If  b rage > 0 and   j H t then
    17           // RAGE: uniform boosts on hot dimensions
    18            p̂ j = min( p max , φ · p̃ j )
    19            ŝ j = min( s max , γ · s̃ j )
    20           // Hyper-RAGE: score-shaped boosts (optional)
    21           choose w j [ 0 , 1 ] from d j (e.g., w j = d j / max d )
    22            p ^ j = min p max , φ ( 1 + ν w j ) p ˜ j
    23            s ^ j = min s max , γ ( 1 + ν w j ) s ˜ j
    24      Else
    25            p̂ j = p̃ j
    26            ŝ j = s̃ j
    27      End if
    28 End For
    29 //Pass to injury: use p ^ j , s ^ j for the current iteration’s stochastic perturbations, feasibility via clamp remains unchanged. (Stored p j ( w ) , s j ( w ) are not overwritten.)
    30 //After injury (end of iteration):
    31 If  b rage > 0 then
    32       b rage = b rage − 1
    33      If  b rage = 0 then
    34            c rage = L c
    35      End if
    36 Else
    37      If  c rage > 0 then
    38            c rage = c rage − 1
    39      End if
    40 End if
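A minimal C++ state holder for the burst logic is sketched below, assuming that the score-shaped (hyper-RAGE) boosts are always used on hot dimensions while a burst is active; the thresholds and multipliers are illustrative defaults rather than tuned values.

// RAGE / hyper-RAGE: burst armed on a tiny accepted improvement, boosts applied during the burst.
#include <algorithm>
#include <cmath>
#include <vector>

struct RageState {
    int b_rage = 0, c_rage = 0;                 // burst and cooldown counters
    double tau = 1e-8;                          // trigger threshold on improvement size
    int B = 10, L_c = 20;                       // burst length and cooldown length
    double phi = 2.0, gamma_ = 2.0, nu = 1.0;   // probability/scale boosts and score shaping

    void on_accept(double f_old, double f_new) {
        if (c_rage == 0 && std::fabs(f_old - f_new) <= tau) b_rage = B;
    }

    // Apply score-shaped boosts to the hot-dims effective parameters (non-persistent).
    void boost(std::vector<double>& p_eff, std::vector<double>& s_eff,
               const std::vector<int>& hot, const std::vector<double>& d,
               double p_max, double s_max) const {
        if (b_rage <= 0) return;
        double d_max = 1e-12;
        for (double dj : d) d_max = std::max(d_max, dj);
        for (int j : hot) {
            double w = d[j] / d_max;                           // weight in [0,1]
            p_eff[j] = std::min(p_max, phi    * (1.0 + nu * w) * p_eff[j]);
            s_eff[j] = std::min(s_max, gamma_ * (1.0 + nu * w) * s_eff[j]);
        }
    }

    void end_of_iteration() {                   // decrement the burst, then start the cooldown
        if (b_rage > 0) { if (--b_rage == 0) c_rage = L_c; }
        else if (c_rage > 0) --c_rage;
    }
};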
Outside burst windows, the injury sampling is governed by Lévy-wounds to allow rare long jumps when progress diminishes.
4.
Lévy-Wounds (Mantegna)
Notations and definitions:
  • Stability index: α ( 1 , 2 ] .
  • Global Lévy scale: c > 0 (scalar).
  • Per-dimension injury scheduling: use p j ( w ) (probability) and s j ( w ) (scale) from scar map/hot-dims; bandage/cooldowns apply as defined earlier.
  • Stochastic step (Lévy/Mantegna) for iteration t and coordinate ( i , j ) ,
    u ∼ N(0, σ_u²),  v ∼ N(0, 1),
with
σ_u = [ Γ(1+α) · sin(πα/2) / ( Γ((1+α)/2) · α · 2^{(α−1)/2} ) ]^{1/α}.
Optional truncation: |ξ_{i,j}(t)| ≤ ξ_max to avoid extreme outliers.
  • Range scaling and projection: the raw move is scaled by ( upper j lower j ) and projected with clamp ( · ) to enforce feasibility.
Lévy-wounds replaces the Gaussian injury step with a heavy-tailed Mantegna draw, producing infrequent long jumps that enhance basin escape while retaining many small moves. For each dimension j of candidate i, with probability p j ( w ) we apply the update
x_{i,j}(t, inj) ← clamp( x_{i,j}(t) + s_j^(w) · ξ_{i,j}(t) · (upper_j − lower_j), lower_j, upper_j ).
The stability α controls tail/heaviness (smaller α ⇒ rarer but longer jumps), and c sets the overall Lévy magnitude and can be tuned jointly with s j ( w ) . If momentum m j is active, a mild directional bias κ m j (with small κ > 0 ) may be added to the step before clamping. Lévy-wounds integrates seamlessly with hot-dims and RAGE/hyper-RAGE: when a dimension j is hot, the effective injury parameters p ˜ j , s ˜ j (or p ^ j , s ^ j under bursts) are used in place of p j ( w ) , s j ( w ) , while the Lévy draw ξ i , j ( t ) remains unchanged. This procedure is given in Algorithm 5.
Algorithm 5 Lévy-wounds (Mantegna): heavy-tailed jumps for rare long-range moves
  • Input:  α , c , p j ( w ) , s j ( w ) , optional m j , κ , ξ max .
    State: candidate x i , bounds lower , upper .
    01 For  j = 1 dim  do
    02       If bandage is active on ( i , j ) Then
    03            Continue
    04       End if
    05       Draw r U ( 0 , 1 )
    06       If  r < p j ( w )  then
    07            Sample u ∼ N(0, σ_u²)
    08            Sample v ∼ N(0, 1)
    09             ξ ← c · u / |v|^{1/α}
    10            If truncation enabled then
    11                  ξ ← clip(ξ, −ξ_max, ξ_max)
    12            End if
    13       (optional bias) ξ ← ξ + κ · m j
    14        Δ ← s j ( w ) · ξ · (upper j − lower j )
    15        x i , j ( t , inj ) ← clamp( x i , j ( t ) + Δ, lower j , upper j )
    16       End if
    17 End for
The flowchart of this method is given in Figure 4.
Beyond step distribution, alpha-strike adds rare targeted large moves on a few coordinates to vault past barriers.
5.
Alpha-Strike
Notations and definitions:
  • Trigger condition: Alpha-strike activates when the algorithm stagnates for more than T stall iterations (no improvement in f best ) or when the diversity indicator D falls below a threshold D min .
  • Strike length:  L α N , the number of consecutive iterations in which strike mode remains active.
  • Boost multipliers:  ϕ α 1 (probability) and γ α 1 (scale), applied uniformly to hot dimensions during the strike.
  • Hot set magnification: factor μ α 1 , enlarging the set H t selected from dimension scores.
  • Reset option: fraction ρ [ 0 , 1 ] of the worst individuals may be re-initialized uniformly in [ lower , upper ] to maximize disruption.
Mechanism (text). Alpha-strike is a deliberate, short burst of intensified perturbations designed to break strong stagnation. Once triggered, the algorithm enters a strike window of length L α . For all individuals and dimensions j H t , the effective injury parameters are magnified:
p̂_j = min(p_max, ϕ_α · p̃_j),  ŝ_j = min(s_max, γ_α · s̃_j).
In addition, with probability ρ , entire individuals are reinitialized within the feasible bounds. When the strike window expires, normal operation resumes. Healing is not modified, while scar map, momentum, and dimension scores continue to update, ensuring smooth integration with the other auxiliary mechanisms. The Alpha-strike procedure is given in Algorithm 6.
Algorithm 6 Alpha-strike: targeted, rare, large jump on a few coordinates
  • Input:  T stall , D min , L α , ϕ α , γ α , μ α , ρ
    State: best fitness f best , stagnation counter t stall , diversity indicator D, strike counter b α
    01 //At each iteration:
    02 If  t stall > T stall  or  D < D min  then
    03       b α = L α
    04       t stall = 0
    05 End if
    06 If  b α > 0 then
    07      Expand hot set: K′ = min(dim, μ_α · K)
    08       H t ← topK(d, K′)
    09      Compute effective parameters p ˜ j , s ˜ j for all j
    10      For each individual i do
    11           Draw r U ( 0 , 1 )
    12           If  r < ρ then
    13           Reinitialize x i U ( lower , upper )
    14           Continue to next individual
    15           End if
    16           For each j H t  do
    17                 p̂ j ← min( p max , ϕ α · p̃ j )
    18                 ŝ j ← min( s max , γ α · s̃ j )
    19                Apply injury using p ^ j , s ^ j
    20           End for
    21      End for
    22       b α ← b α − 1
    23 Else
    24      Run normal injury / healing / recombination with parameters from scar map, hot-dims, RAGE/hyper-RAGE
    25 End if
Notes.
  • Alpha-strike is designed to be rare but disruptive, acting as a restart-like operator while preserving the algorithmic structure.
  • Magnifiers ϕ_α, γ_α should be chosen conservatively (e.g., 2–3× boosts) to avoid instability.
  • The diversity indicator D can be population variance, centroid spread, or any normalized measure.
The flowchart of Alpha-strike is provided in Figure 5.
If mild or targeted boosts do not revive progress, a small-scale restart around the incumbent best follows.
6.
Catastrophic Micro-Reset
Notations and definitions:
  • Trigger condition: Activates when stagnation persists despite alpha-strike and Lévy-wounds, i.e., after T cat consecutive iterations without improvement in f best .
  • Reset fraction:  ρ cat ( 0 , 1 ] , fraction of the worst individuals to be replaced.
  • Reset policy: New candidates are sampled uniformly within [ lower , upper ] , optionally blended with the elite centroid.
  • Cooldown:  C cat prevents consecutive activations within a short window.
Catastrophic micro-reset is a last-resort disruption operator. Instead of modifying coordinates dimension by dimension, it directly replaces a portion of the population with fresh samples, injecting diversity. It is “micro” because only a small fraction ρ cat of individuals are reset, rather than the entire population, thus preserving useful structure and elite solutions. Optionally, reset positions may be drawn from a mixture of uniform distribution and a Gaussian centered on the elite centroid, balancing exploration of new areas and exploitation near promising regions. After activation, a cooldown C cat ensures micro-reset cannot be triggered again too soon. The steps of the algorithm are shown in Algorithm 7.
Notes.
  • Catastrophic micro-reset should be rare and disruptive, serving as a population-level restart.
  • The fraction ρ cat is usually small (e.g., 5–15%) to avoid destroying global progress.
  • The cooldown C cat prevents repeated resets and stabilizes the dynamics. The flowchart for this procedure is given in Figure 6.
    Algorithm 7 Catastrophic micro-reset: targeted mini-restart around the incumbent best
    • Input:  T cat , ρ cat , C cat (optionally: elite centroid x ¯ , blend λ [ 0 , 1 ] )
      State: best fitness f best , stagnation counter t stall , cooldown counter c cat , population x i = 1 N P with bounds lower , upper
      01 At each iteration:
      02 If  t stall > T cat  and  c cat = 0  then
      03      Identify worst set W of size ρ cat · N P (largest objective values).
      04      For each i W  do
      05           Sample z U ( lower , upper )
      06           If elite biasing enabled then
      07                  x i = ( 1 λ ) z + λ x ¯
      08           Else
      09                  x i = z
      10           End if
      11      End for
      12       t stall = 0
      13       c cat = C cat
      14 End if
      15 If  c cat > 0  then
      16       c cat = c cat − 1
      17 End if
      18 Continue normal injury/healing/recombination.
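The following C++ sketch implements the replacement step of Algorithm 7, assuming the caller re-evaluates the reset individuals afterwards; the blend with the elite centroid follows the optional elite-biasing branch, and all names are illustrative.

// Catastrophic micro-reset: replace the worst rho_cat fraction of the population.
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

void micro_reset(std::vector<std::vector<double>>& x, const std::vector<double>& fit,
                 const std::vector<double>& lower, const std::vector<double>& upper,
                 const std::vector<double>& elite_centroid,
                 double rho_cat, double lambda, std::mt19937& rng) {
    const int NP  = static_cast<int>(x.size());
    const int dim = static_cast<int>(lower.size());
    std::uniform_real_distribution<double> U(0.0, 1.0);

    // Rank individuals by objective value; the largest values form the worst set W.
    std::vector<int> order(NP);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int a, int b) { return fit[a] > fit[b]; });

    int n_reset = std::max(1, static_cast<int>(rho_cat * NP));
    for (int k = 0; k < n_reset; ++k) {
        int i = order[k];
        for (int j = 0; j < dim; ++j) {
            double z = lower[j] + U(rng) * (upper[j] - lower[j]);     // fresh uniform sample
            // Optional elite biasing: blend toward the elite centroid (lambda = 0 disables it).
            x[i][j] = (1.0 - lambda) * z + lambda * elite_centroid[j];
        }
    }
}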
After a restart or a burst, a smooth return to stable exploitation is needed, which is controlled by healing adjustments with a short cooldown.
7.
Healing Adjustments and Cooldown
Notations and definitions:
  • Adaptive healing gain:  a ( t ) , normally produced by healStep ( h r ) , is scaled dynamically based on recent outcomes; cap a max .
  • Boost/decay factors:  β h > 1 (boost on success), δ h ( 0 , 1 ) (decay on failure).
  • Cooldown threshold:  τ h ( 0 , 1 ) ; if a single healing move exceeds τ h of the range, healing on that coordinate cools down.
  • Cooldown length:  C h N ; per-dimension timer c j .
  • Counters:  s j consecutive successes, f j consecutive failures (per dimension j).
  • Bounds/projection: vectors lower , upper ; clamp ( z , lower j , upper j ) .
After each healing attempt on coordinate j, the algorithm checks whether the new state improves fitness. On success, the healing gain is strengthened a ( t ) min ( a max , β h a ( t ) ) and s j increments (reset f j = 0 ). On failure, the gain is reduced a ( t ) δ h a ( t ) and f j increments (reset s j = 0 ). If the displacement magnitude exceeds a fraction τ h of the variable’s range, a cooldown c j C h starts, and while c j > 0 , no healing is applied to that coordinate. This self-regulation complements scar map/bandage: bandage protects just-improved coordinates from injury, whereas cooldown throttles over-correction by healing. The steps of the algorithm are shown in Algorithm 8.
Algorithm 8 Healing adjustments and cooldown: modulating healing during bursts with a smooth post-burst ramp
  • Input:  β h , δ h , a max , τ h , C h
    State: gains a ( t ) , counters s j , f j , cooldown timers c j , candidate x i , best x best , bounds lower , upper
    01 For each dimension j (on healing step) do
    02      If  c j > 0  then
    03             c j = c j − 1
    04            Continue
    05      End if
    06      Propose update: x i , j ( t , heal ) ← clamp( x i , j ( t ) + a ( t ) · ( x best , j ( t ) − x i , j ( t ) ), lower j , upper j ).
    07      //Evaluate fitness, let outcome be success if improved, else failure.
    08      If success then
    09             s j = s j + 1
    10             f j = 0
    11             a ( t ) = min ( a max , β h a ( t ) )
    12      Else
    13             f j = f j + 1 ; s j = 0
    14             a ( t ) = δ h a ( t )
    15      End if
    16      If  | x i , j ( t , heal ) − x i , j ( t ) | > τ h · ( upper j − lower j )  then
    17             c j = C h
    18      End if
    19 End for
Notes.
  • Boosts/decays make healing stronger in promising directions and weaker in insensitive ones.
  • Cooldown prevents oscillations and over-correction after large moves.
  • Conservative tuning (e.g., β_h ≈ 1.1, δ_h ≈ 0.9) is often adequate; C_h should be small (e.g., 2–5). The flowchart for this method is shown in Figure 7.
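A minimal C++ controller corresponding to Algorithm 8 is sketched below; the initial gain and the boost, decay, and cooldown constants are illustrative defaults consistent with the conservative tuning suggested in the notes above.

// Healing adjustments and cooldown: boost the gain on success, decay it on failure,
// and freeze healing on a coordinate whose displacement exceeds tau_h of its range.
#include <algorithm>
#include <cmath>
#include <vector>

struct HealingController {
    double a = 0.30, a_max = 0.95;        // current and maximum healing gain
    double beta_h = 1.1, delta_h = 0.9;   // boost on success, decay on failure
    double tau_h = 0.25;                  // fraction of the range that triggers cooldown
    int C_h = 3;                          // cooldown length
    std::vector<int> cooldown;            // per-dimension timers c_j (resize to dim before use)

    // Returns false (and ticks the timer) while coordinate j is cooling down.
    bool can_heal(int j) {
        if (cooldown[j] > 0) { --cooldown[j]; return false; }
        return true;
    }

    // Report the outcome of one healing move on coordinate j.
    void report(int j, bool success, double displacement, double range) {
        a = success ? std::min(a_max, beta_h * a) : delta_h * a;
        if (std::fabs(displacement) > tau_h * range) cooldown[j] = C_h;   // throttle over-correction
    }
};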
For early or mild stagnation, a gentle injury amplification is applied before more aggressive mechanisms are engaged.
8.
Soft-Stagnation Boost
Notations and definitions:
  • Global restart threshold:  T restart , maximum stagnation length tolerated across all mechanisms.
  • Restart fraction:  ρ restart ( 0 , 1 ] , fraction of worst individuals replaced.
  • Hybrid injection pool:  P inj , contains elite centroid, archived bests, or random samples.
  • Blend parameter:  λ [ 0 , 1 ] , probability of using P inj vs. uniform resampling.
  • Diversity safeguard:  D min must be re-established after restart; else ρ restart increases temporarily.
Adaptive restarts and hybrid injection serve as a final safeguard against premature convergence. While scar map, hot-dims, RAGE/hyper-RAGE, alpha-strike, Lévy-wounds, healing and cooldown, and catastrophic micro-reset progressively increase disruption, there remains a chance that the population collapses to a narrow region. A global restart is triggered when no improvement in f best occurs for more than T restart iterations.
At restart, a fraction ρ restart of the worst individuals is replaced. Each replacement is drawn using a hybrid scheme:
  • With probability λ , sample from P inj (e.g., elite centroid, archived bests).
  • With probability 1 λ , sample uniformly from [ lower , upper ] .
After injection, population diversity D is recomputed. If D < D min , ρ restart is temporarily increased until diversity is restored. This mechanism thus preserves elites while re-introducing fresh search directions. The steps for this method are shown in Algorithm 9.
Algorithm 9 Soft-stagnation boost: gentle injury amplification under early stagnation
  • Input:  T restart , ρ restart , λ , D min
    State: stagnation counter t stall , population x i , injection pool P inj , bounds lower , upper
    01 At each iteration:
    02 If t stall > T restart then
    03      Identify worst set W of size ρ restart · N P .
    04      For each i W do
    05            Draw r U ( 0 , 1 )
    06            If r < λ then
    07                 inject from P inj
    08            Else
    09                  x i ∼ U ( lower , upper )
    10            End if
    11      End for
    12       t stall ← 0
    13 End if
    14 Recompute diversity D
    15 If  D < D min then
    16      increase ρ restart temporarily
    17 End if
    18 Continue normal injury/healing/recombination.
Notes.
  • Acts as a population-level reboot, but partial: elites are preserved.
  • P inj can evolve dynamically, e.g., storing historical elites.
  • Parameters T restart and ρ restart should be tuned conservatively to avoid overly frequent resets. The flowchart for this method is shown in Figure 8.
    Figure 8. Soft-stagnation boost.
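A possible C++ rendering of the hybrid sampling rule is shown below; the injection pool is modeled simply as a list of stored vectors (elite centroid, archived bests), which is an illustrative simplification.

// Hybrid injection: with probability lambda draw from the pool, otherwise sample uniformly.
#include <random>
#include <vector>

std::vector<double> hybrid_sample(const std::vector<std::vector<double>>& pool,
                                  const std::vector<double>& lower,
                                  const std::vector<double>& upper,
                                  double lambda, std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    const int dim = static_cast<int>(lower.size());
    if (!pool.empty() && U(rng) < lambda) {
        std::uniform_int_distribution<int> pick(0, static_cast<int>(pool.size()) - 1);
        return pool[pick(rng)];                     // inject an archived/elite solution
    }
    std::vector<double> z(dim);
    for (int j = 0; j < dim; ++j) z[j] = lower[j] + U(rng) * (upper[j] - lower[j]);
    return z;                                       // fresh uniform sample
}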
These mechanisms integrate without altering the backbone of Algorithm 1 and are triggered under simple stagnation or improvement conditions. Also, the flowchart for this technique is shown in Figure 9.

3. Experimental Setup and Benchmark Results

This section first introduces the benchmark functions selected for experimental evaluation, followed by a comprehensive analysis of the experiments conducted. The study systematically examines the various parameters of the proposed algorithm to assess its reliability and effectiveness in different optimization scenarios. The experimental settings for the proposed method are outlined in Table 1 and for every other used method in Table 2.
Beyond the settings summarized in the accompanying comparator table, we selected baselines that span the main metaheuristic families and design assumptions: CMA-ES for smooth, well-conditioned landscapes with strong local adaptation; DE variants (jDE, SaDE) for robust performance on multimodal or non-separable problems; advanced DE methods (mLSHADE_RL, UDE3) that maintain diversity and adapt step intensities; CLPSO, which often yields fast early descent but risks premature convergence; and EA4Eig, which leverages eigen-directions on anisotropic tasks. These choices follow methodological diversity, prevalence in the literature, code/settings availability, and reproducibility, ensuring a fair comparison under a harmonized evaluation budget and framing the analysis in Section 3.3.

3.1. Test Functions

The performance assessment of the proposed method was carried out using a comprehensive and diverse collection of well-established benchmark functions [44,45,46], as listed in Table 3. These test functions represent a standard suite commonly utilized in the global optimization literature for validating and comparing metaheuristic algorithms. Each function exhibits distinct characteristics in terms of modality, separability, dimensionality, and landscape complexity, thus providing a robust basis for evaluating the generalization capability of the algorithm. Notably, the functions were employed in their original, unaltered form; no additional transformations such as shifting, rotation, or scaling were applied, allowing for a transparent and reproducible comparison with prior studies.
Table 4 provides a standardized description of the test set along three dimensions: modality (unimodal vs. multimodal), separability (separable vs. non-separable), and conditioning (well-/ill-conditioned or near-flat where applicable). The labels reflect the structural forms presented in the problem definitions (Table 3) and the qualitative behavior noted in the text for near-flat cases. This overview is intended to document the landscape characteristics of the suite without anticipating performance conclusions.

3.2. Parameter Sensitivity

Following the parameter-sensitivity protocol introduced by Lee et al. [28], we systematically examine the algorithm's responsiveness to configuration changes and the extent to which its reliability is preserved across diverse operating conditions. The results are shown in Table 5.
The graphical representation of w p and h r for the “Parameter Estimation for Frequency-Modulated Sound Waves” problem is shown in Figure 10.
A sensitivity analysis for the Lennard–Jones problem is shown in Table 6.
Also, a graphical representation of w p and h r for the “Lennard–Jones Potential” problem is shown in Figure 11.
A sensitivity analysis for the “Tersoff Potential for Model Si (B)” problem is given in Table 7.
The sensitivity study for wp and hr was conducted on three representative problems with N = 150 runs per setting and is summarized via the mean best indicator, complemented by min/max extremes. The main-effect range, defined as the spread of mean best across the scanned values, serves as a comparable measure of parameter influence. Also, a graphical representation of w p and h r for the “Tersoff Potential for Model Si (B)” problem is shown in Figure 12.
For Parameter Estimation of FM Sound Waves, the main-effect ranges are very small for both parameters (w p: 0.0067, h r: 0.00545), i.e., roughly 4–5% of the typical mean best level (∼0.137). This pattern indicates strong robustness to moderate mis-specification. Slight improvements are observed toward higher w p (≈0.9) and lower h r (≈0.1), but the gains are marginal and do not justify aggressive fine-tuning. The extremes (Min on the order of 10^−20 and Max around 0.27) remain stable across settings, reflecting the spread expected from repeated independent runs under a fixed budget.
For the Lennard–Jones potential, sensitivity is more pronounced and asymmetric across parameters: the main effect is moderate for w p (0.248) and clearly larger for h r (0.566). Because lower (more negative) values are better here, the mean performance improves around intermediate h r (≈0.5), while moving toward the extremes degrades the average. The w p effect is milder but present, with a mild preference for mid-to-higher values. The best observed minima (down to −33.31) appear near intermediate settings, which is consistent with the need to balance injury intensity and healing rate on rugged, non-separable landscapes.
For Tersoff (Si, B), the picture is intermediate: main effects are modest (w p: 0.203, h r: 0.164) relative to the mean best level (≈ −26.2), corresponding to sub-percent relative changes. Mean performance tends to favor slightly lower h r (≈0.1–0.3), while w p performs well in the mid-to-high range (≈0.7). The smallest minima are obtained for w p ≈ 0.3 and h r ≈ 0.1, suggesting that occasional deep improvements are more reliably triggered in these zones, even though the mean shows limited variation.
Taken together, the results substantiate that the algorithm is robust to reasonable parameter deviations, with negligible sensitivity on FM Sound Waves, a stronger dependence on hr for Lennard–Jones, and mild-to-moderate sensitivity on Tersoff (Si, B). A practical guideline is that a single conservative default can be used across the suite, while targeted, problem-aware adjustments are beneficial for highly multimodal and poorly conditioned cases, where hr in particular appears to govern the exploration–exploitation balance most effectively. While min/max broaden our sense of dispersion, conclusions should primarily rely on the mean behavior under the prescribed multi-run protocol.

3.3. Experimental Results

All experimental procedures were executed on a high-performance machine running Debian Linux with 16 cores. The evaluation protocol was designed to ensure statistical rigor and reproducibility. Specifically, each benchmark function was subjected to 30 independent runs, with each trial initialized using a distinct random seed to account for stochastic variability in the algorithm's behavior.
The proposed method and all comparators were implemented in C++ and integrated into the open-source OPTIMUS framework [47]. The source code is publicly available at https://github.com/itsoulos/GLOBALOPTIMUS (accessed on 9 September 2025) to promote transparency and reproducibility. The environment was Debian 12.12 with GCC 13.4. The primary performance metric is the average number of objective function evaluations over 30 runs for each test function. We also report success rates (in parentheses next to the mean), defined as the percentage of runs that reached the global optimum; when all runs converged optimally, this indicator is omitted for clarity. In the results tables, best values are shown in bold and second-best values are underlined.
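As a concrete illustration of this protocol, the sketch below aggregates per-run records into the two reported quantities. It is illustrative rather than the OPTIMUS implementation; the record fields, tolerance, and names are assumptions for this example.

```cpp
// Illustrative aggregation of the evaluation protocol: each run contributes the
// number of objective evaluations it used and whether it reached the known
// global optimum (within a tolerance). Names and the tolerance are assumptions.
#include <cstddef>
#include <vector>

struct RunRecord {
    long   functionEvaluations;  // evaluations consumed by this run
    double bestValue;            // best objective value found in this run
};

struct Summary {
    double meanEvaluations;      // averaged over the 30 independent runs
    double successRate;          // % of runs that reached the global optimum
};

Summary summarize(const std::vector<RunRecord>& runs,
                  double globalOptimum, double tolerance = 1e-6)
{
    double evalSum = 0.0;
    std::size_t successes = 0;
    for (const RunRecord& r : runs)
    {
        evalSum += static_cast<double>(r.functionEvaluations);
        if (r.bestValue <= globalOptimum + tolerance)
            ++successes;
    }
    Summary s;
    s.meanEvaluations = evalSum / static_cast<double>(runs.size());
    s.successRate     = 100.0 * static_cast<double>(successes)
                              / static_cast<double>(runs.size());
    return s;
}
```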
With a fixed termination of 500 iterations and all other parameters as in Table 1, we measured the wall-clock time (s) for the proposed method on two classic tests as the dimensionality (D) increased from 20 to 200 (Figure 13). The results indicate almost linear growth with dimension, consistent with the per-iteration complexity O(NP · D): for Ellipsoidal, the average slope was about 0.139 s per additional dimension (≈2.78 s per +20D), whereas for Rosenbrock, it was ≈0.149 s per dimension (≈2.97 s per +20D). Both curves increased monotonically, while Rosenbrock exhibited a slightly steeper slope and a noticeable uptick beyond 160D, consistent with the higher per-evaluation cost induced by its cross-dimensional coupling. A crossover was observed: at low dimensions, Rosenbrock ran faster than Ellipsoidal, but from ~160D onward, Rosenbrock became slower, indicating that the interaction terms started to dominate the arithmetic in high D. Overall, these findings confirm the expected near-linear scaling of BHO with dimension under a fixed iteration budget, with function-dependent differences driven primarily by the cost of evaluating f(·).
The test functions were as follows:
  • Ellipsoidal: $f(x) = \sum_{i=1}^{n} 10^{\,6\frac{i-1}{n-1}} x_i^2$;
  • Rosenbrock: $f(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right]$, with $-30 \le x_i \le 30$.
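The two objectives above can be transcribed directly into code. The following is a reference sketch of the formulas (with 1-based indices mapped to 0-based arrays); it is provided for clarity and is not taken from the OPTIMUS sources.

```cpp
// Direct transcription of the two scaling-test objectives defined above.
#include <cmath>
#include <cstddef>
#include <vector>

// Ellipsoidal: f(x) = sum_{i=1..n} 10^{6 (i-1)/(n-1)} x_i^2
double ellipsoidal(const std::vector<double>& x)
{
    const std::size_t n = x.size();
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
    {
        const double exponent = 6.0 * static_cast<double>(i) / static_cast<double>(n - 1);
        sum += std::pow(10.0, exponent) * x[i] * x[i];
    }
    return sum;
}

// Rosenbrock: f(x) = sum_{i=1..n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2], x_i in [-30, 30]
double rosenbrock(const std::vector<double>& x)
{
    double sum = 0.0;
    for (std::size_t i = 0; i + 1 < x.size(); ++i)
    {
        const double a = x[i + 1] - x[i] * x[i];
        const double b = x[i] - 1.0;
        sum += 100.0 * a * a + b * b;
    }
    return sum;
}
```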
For the experimental evaluation of BHO, we selected the following optimization algorithms as baselines for comparison:
  • EA4Eig [35], a cooperative evolutionary algorithm with an eigen-crossover operator. It was introduced at the IEEE Congress on Evolutionary Computation 2022 (CEC 2022) and evaluated on that event’s official single-objective benchmark suite. The method was primarily designed and tested within the CEC single-objective context.
  • UDE3 (UDE-III) [36], a recent member of the Unified DE family for constrained problems. It was evaluated on the CEC 2024 constrained set, while its predecessor (UDE-II/IUDE) won first place at CEC 2018, giving UDE3 a lineage of proven competitive performance.
  • mLSHADE_RL [37] is a multi-operator descendant of LSHADE-cnEpSin, and was one of the CEC 2017 winners for real-parameter optimization. This variant augments the base algorithm with restarts and local search, and it has been assessed on modern test suites.
  • CLPSO [38], a classic PSO variant with comprehensive learning (2006) that has long served as a strong baseline in CEC benchmarks; it is not tied to a single award entry but is widely used in comparative studies.
  • SaDE [39], the self-adaptive DE developed by Qin and Suganthan, was evaluated on the CEC 2005 test set and has since become a reference point for adaptive strategies.
  • jDE [40], developed by Brest et al., introduced self-adaptation of F and CR and has been extensively evaluated, including special CEC 2009 sessions on dynamic/uncertain optimization. This work helped popularize self-adaptation in DE.
  • CMA-ES [48] is the established covariance matrix adaptation evolution strategy for continuous domains. Beyond its vast literature, it is a staple baseline and frequent participant/reference in BBOB benchmarking at GECCO, effectively serving as a standard black-box comparison point.
On synthetic and physical potentials, BHO delivered particularly strong best values, with nearly zero error on Parameter Estimation for Frequency-Modulated Sound Waves, and attained the lowest best value on the Lennard–Jones Potential. However, for the mean on the Lennard–Jones Potential, jDE ranked first, followed by CMA-ES, indicating that classical, Gaussian-driven strategies remain very stable when the landscape exhibits symmetric curvature. On the Tersoff Potential for Model Si (B) and the Tersoff Potential for Model Si (C), the picture shifted: for Si (B), the best mean was achieved by EA4Eig, with CMA-ES close behind, while UDE3 achieved the best best; for Si (C), EA4Eig achieved the best best and CMA-ES achieved the best mean. Overall, advanced DE variants with stronger recombination (EA4Eig, UDE3) and CMA-ES alternated at the top, which aligns with the literature on rough but moderately structured landscapes.
In electro-economic and industrial test cases, the pattern diverged in informative ways. On Electricity Transmission Pricing, BHO attained the lowest best, whereas UDE3 achieved the best mean, suggesting that BHO can reach excellent extremes, while UDE3 maintained consistently low variability across runs. In Dynamic Economic Dispatch 1 and Dynamic Economic Dispatch 2, CMA-ES dominated first place in both the best and the mean, confirming its strength on smooth, nearly quadratic geometries; in second place, BHO recorded the lowest mean and EA4Eig the best best, indicating that BHO’s injury–healing dynamics act as a variance-damping safety net. Across the Static Dispatch problems, the results were mixed: in Static Economic Load Dispatch 1, the best best was achieved by mLSHADE_RL, and the best mean was achieved by EA4Eig; in Static Economic Load Dispatch 2, mLSHADE_RL yielded the lowest mean (with EA4Eig achieving the best best); and in Static Economic Load Dispatch 4, mLSHADE_RL again achieved the best best, while EA4Eig achieved the best mean. Static Economic Load Dispatch 3 showed practical ties around the same value and was not very discriminative. Static Economic Load Dispatch 5 contained a notable outlier, where BHO’s mean was orders of magnitude lower than the others. This result merits independent verification, such as re-running the experiments or checking units, because the unusually large gap likely signals a scaling difference or an unexpected effect. A comparison of the algorithms based on best and mean after 1.5 × 10^5 FEs is shown in Table 8.
On more “classic” comparative tests, CMA-ES showed impressive robustness on smooth, well-scaled landscapes: beyond Dynamic Economic Dispatch 1, it also achieved both the best best and the best mean on Spread-Spectrum Radar Polly Phase-Code Design. EA4Eig stood out when strong anisotropy or correlation mattered: on Circular Antenna Array Design, it led in both the best best and the mean, consistent with its eigen-crossover design. UDE3 performed very well on problems with additional structure or constraints, such as the mean on Electricity Transmission Pricing, matching its remit as a unified DE for constrained scenarios. mLSHADE_RL frequently attained best-case wins, for example in Static Economic Load Dispatch 1 and Static Economic Load Dispatch 4, and remained competitive in the mean across several cases, underscoring the value of ensemble mutation plus restarts on fractured landscapes. SaDE, while a solid baseline, tended to trail the newer DE descendants, and CLPSO underperformed in most tables, as expected where anisotropy demanded directional information beyond standard swarm-velocity updates. jDE remained competitive on certain physical potentials, for example, the mean on Lennard–Jones, but showed larger dispersion on industrial dispatch tasks.
On flat or near-flat landscapes (Bifunctional-Catalyst Blend Optimal Control, Transmission Network Expansion Planning, and, essentially, Optimal Control of a Nonlinear Stirred-Tank Reactor), where differences were on the order of 10^−10, all methods tied or were practically indistinguishable, so these tests add little diagnostic power. In sum, there was no single winner: CMA-ES was the reference choice for smooth, well-conditioned cases and remained very strong on mean performance; EA4Eig excelled where alignment with principal variance directions helped; UDE3 often won on mean under constrained or pricing-structure scenarios; mLSHADE_RL frequently achieved best-case wins on difficult static dispatch variants; and BHO achieved the top best values on several critical functions and competitive means in demanding settings, indicating an effective exploration-to-exploitation transition.
Linking outcomes to the benchmark landscape (see Table 4) revealed consistent patterns. On multimodal, non-separable problems such as FM Sound Waves, Lennard–Jones Potential, and Electricity Transmission Pricing, BHO frequently attained first-place “best” values, whereas CMA-ES was strongest on smooth, well-conditioned cases (e.g., Dynamic Economic Dispatch 1). EA4Eig and mLSHADE_RL traded advantages on anisotropic or fractured landscapes (e.g., Tersoff and Static Economic Load Dispatch). By contrast, three near-flat/structured tests (Bifunctional-Catalyst Blend Optimal Control, Transmission Network Expansion Planning, and Stirred-Tank Reactor) were effectively non-discriminative and had little impact on the global ordering. The rank-based view in Table 9 and Table 10 separates peak performance (“best”) from reliability (“mean”) under a fixed 1.5 × 10^5-FE budget, reflecting these patterns.
The evaluation distinguished peak performance (“best”) from reliability (“mean”) over seventeen problems. Summing per-function ranks showed that BHO had the lowest totals in both views (best: 47; mean: 48) and therefore the best overall score (overall: 95; average rank: 2.794). UDE3 followed closely (best/mean: 51/54; overall: 105; average rank: 3.088). EA4Eig and mLSHADE_RL formed the next tier with near-equal totals (112 and 113; average ranks: 3.294 and 3.323). CMA-ES sat mid-pack (overall: 125; average rank: 3.676), followed by SaDE (overall: 137; average rank: 4.029), while CLPSO and jDE ranked lowest overall (overall: 189 and 198; average ranks: 5.558 and 5.823). These rankings reflect consistently higher placements across most functions.
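For transparency, the rank totals reported here can be reproduced from the per-function score tables. The routine below is a minimal sketch (not the exact analysis script), assuming a score matrix with one row per benchmark and one column per algorithm, where lower scores are better; ties share the same rank, as in the “all methods tied” rows of Tables 9 and 10.

```cpp
// Illustrative rank aggregation: for each benchmark, algorithms are ranked by
// their score (competition ranking: rank = 1 + number of strictly better
// scores, so ties share a rank), and per-algorithm ranks are summed.
#include <cstddef>
#include <vector>

// scores[f][a] = score of algorithm a on benchmark f (lower is better).
std::vector<int> rankSums(const std::vector<std::vector<double>>& scores)
{
    const std::size_t numAlgorithms = scores.empty() ? 0 : scores.front().size();
    std::vector<int> totals(numAlgorithms, 0);
    for (const std::vector<double>& row : scores)
    {
        for (std::size_t a = 0; a < numAlgorithms; ++a)
        {
            int rank = 1;                          // 1 + #strictly better
            for (std::size_t b = 0; b < numAlgorithms; ++b)
                if (row[b] < row[a])
                    ++rank;
            totals[a] += rank;
        }
    }
    return totals;   // average rank = total / number of benchmarks
}
```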
The detailed “best” table shows that BHO frequently ranked first in demanding tasks (e.g., Parameter Estimation for Frequency-Modulated Sound Waves, Lennard–Jones Potential, and Electricity Transmission Pricing), indicating strong exploration and high upside. UDE3 recorded many top-three finishes and several wins, especially where structure/constraints were prominent, explaining its overall second place and good generalization. EA4Eig and mLSHADE_RL traded advantages on anisotropic or fractured landscapes (e.g., Tersoff Si(B)/Si(C) and Static Economic Load Dispatch), consistent with eigen-guided recombination in EA4Eig and ensemble mutation with restarts in mLSHADE_RL. CMA-ES showed the expected resilience on smooth, well-conditioned geometries; its mean rank was often better than its best rank, reflecting low variance rather than aggressive extremes. SaDE remained a sturdy baseline but lagged behind newer DE descendants. CLPSO underperformed (as is typical under strong anisotropy). jDE exhibited occasional peaks but greater dispersion, which inflated its ranks in both the best and mean.
A comparison of all algorithms and a final ranking is given in Table 11.
Contrasting the best and mean revealed performance consistency. BHO’s lead in best values did not come at the expense of reliability; its mean total was also the lowest, indicating that top outcomes were not isolated “lucky” runs. Likewise, UDE3’s small best-mean gap indicates stable performance across repeats. EA4Eig and mLSHADE_RL displayed complementary behaviors (alignment with principal directions for EA4Eig, collective mutations and restarts for mLSHADE_RL). CMA-ES often “won” on mean where smoothness enforced small fluctuations.
Three problems were practically non-discriminative. In Bifunctional-Catalyst Blend Optimal Control and Transmission Network Expansion Planning, all methods tied, and in Optimal Control of a Nonlinear Stirred-Tank Reactor, differences were negligible. These problems diluted separability without altering the final ordering. By contrast, problems such as Electricity Transmission Pricing and the Dynamic/Static Economic Load Dispatch families yielded substantive differences, driving a clear lead for BHO and UDE3, and a tight contest for third and fourth place between EA4Eig and mLSHADE_RL.
In summary, with a fixed budget of 1.5 × 10^5 evaluations, BHO was the strongest overall method in both peak and average performance, followed closely by UDE3 with high consistency, then EA4Eig and mLSHADE_RL, which leveraged different mechanisms. CMA-ES remained a reliable reference baseline, while SaDE, CLPSO, and jDE underperformed under the present conditions.

3.4. BHO-Assisted Neural Network Training

We conducted an additional experiment in which BHO was used to train artificial neural networks [49,50] by minimizing the training error defined as
E\big(N(x, w)\big) = \sum_{i=1}^{M} \big( N(x_i, w) - y_i \big)^2
In this expression, N(x, w) denotes the neural network applied to an input vector x, and w is the network’s parameter vector. The set {(x_i, y_i)}, i = 1, ..., M, is the training dataset, with y_i representing the desired target for each pattern x_i. To assess BHO across diverse conditions, we employed a broad set of classification datasets obtained from public repositories.
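A minimal sketch of this objective, as it would be exposed to BHO as a black-box function of the weight vector w, is given below. The one-hidden-layer sigmoid network and the weight layout are assumptions made purely for illustration and are not the architecture used in the experiments.

```cpp
// Illustrative training-error objective for BHO: the sum of squared differences
// between the network output N(x_i, w) and the target y_i over the M patterns.
// A one-hidden-layer sigmoid network serves only as a placeholder here.
#include <cmath>
#include <cstddef>
#include <vector>

static double sigmoid(double v) { return 1.0 / (1.0 + std::exp(-v)); }

// Placeholder network with `hidden` hidden nodes; the per-node weight layout
// [bias, input weights (d), output weight] is an assumption for this sketch.
double networkOutput(const std::vector<double>& x,
                     const std::vector<double>& w, std::size_t hidden)
{
    const std::size_t d = x.size();
    const std::size_t block = d + 2;           // bias + d input weights + output weight
    double out = 0.0;
    for (std::size_t h = 0; h < hidden; ++h)
    {
        const std::size_t base = h * block;
        double act = w[base];                  // hidden-node bias
        for (std::size_t j = 0; j < d; ++j)
            act += w[base + 1 + j] * x[j];
        out += w[base + 1 + d] * sigmoid(act); // output weight times activation
    }
    return out;
}

double trainingError(const std::vector<std::vector<double>>& patterns,  // x_1..x_M
                     const std::vector<double>& targets,                // y_1..y_M
                     const std::vector<double>& weights,                // w
                     std::size_t hidden)
{
    double error = 0.0;
    for (std::size_t i = 0; i < patterns.size(); ++i)
    {
        const double diff = networkOutput(patterns[i], weights, hidden) - targets[i];
        error += diff * diff;
    }
    return error;   // E(N(x, w)), the quantity minimized by BHO over w
}
```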
The experiments were conducted using the following datasets:
  • APPENDICITIS, a medical dataset [53].
  • ALCOHOL, a dataset related to alcohol consumption [54].
  • AUSTRALIAN, a dataset derived from various bank transactions [55].
  • BALANCE [56], a dataset derived from various psychological experiments.
  • CLEVELAND, a medical dataset that was discussed in a series of papers [57,58].
  • CIRCULAR, an artificial dataset.
  • DERMATOLOGY, a medical dataset for dermatology problems [59].
  • ECOLI, a dataset related to protein problems [60].
  • GLASS, a dataset containing measurements from glass component analysis.
  • HABERMAN, a medical dataset related to breast cancer.
  • HAYES-ROTH [61].
  • HEART, a dataset related to heart diseases [62].
  • HEARTATTACK, a medical dataset for the detection of heart diseases.
  • HOUSEVOTES, a dataset related to Congressional voting in the USA [63].
  • IONOSPHERE, a dataset containing measurements from the ionosphere [64,65].
  • LIVERDISORDER, a medical dataset extensively studied in a series of papers [66,67].
  • LYMOGRAPHY [68].
  • MAMMOGRAPHIC, a medical dataset used for the prediction of breast cancer [69].
  • PARKINSONS, a medical dataset used for the detection of Parkinson’s disease [70,71].
  • PIMA, a medical dataset for the detection of diabetes [72].
  • PHONEME, a dataset that contains sound measurements.
  • POPFAILURES, a dataset related to experiments regarding climate [73].
  • REGIONS2, a medical dataset applied to liver problems [74].
  • SAHEART, a medical dataset concerning heart diseases [75].
  • SEGMENT [76].
  • STATHEART, a medical dataset related to heart diseases.
  • SPIRAL, an artificial dataset with two classes.
  • STUDENT, a dataset related to experiments in schools [77].
  • TRANSFUSION, which is a medical dataset [78].
  • WDBC, a medical dataset related to breast cancer [79,80].
  • WINE, a dataset related to measurements of the quality of wines [81,82].
  • EEG, a dataset containing EEG recordings [83,84]. From this dataset the following cases were used: Z_F_S, ZO_NF_S, ZONF_S, and Z_O_N_F_S.
  • ZOO, a dataset related to animal classification [85].
    Interpreting the entries as classification error rates, where lower is better, BHO attained the lowest average error at 21.52%, ahead of Genetic at 26.55%, BFGS at 34.34%, and Adam at 34.48%. This corresponded to a 5.03 percentage-point improvement over Genetic (≈19% relative reduction) and ≈ a 13 percentage-point improvement over Adam and BFGS (≈37% relative reduction). At the dataset level, BHO recorded the best error on a majority of datasets, including ALCOHOL 16.89%, AUSTRALIAN 29.20%, BALANCE 8.06%, CLEVELAND 46.36%, CIRCULAR 4.23%, DERMATOLOGY 9.89%, ECOLI 48.72%, HEART 20.96%, HEARTATTACK 21.06%, HOUSEVOTES 6.45%, LYMOGRAPHY 27.24%, PARKINSONS 14.72%, PIMA 29.40%, REGIONS2 26.83%, SAHEART 33.88%, SEGMENT 27.26%, SPIRAL 43.75%, STATHEART 20.71%, TRANSFUSION 24.56%, WDBC 6.43%, WINE 15.86%, ZO_NF_S 9.76%, ZONF_S 2.68%, and ZOO 7.27%, and it tied for best on MAMMOGRAPHIC at 17.24%. There were, however, datasets where other optimizers had an edge, notably Adam on APPENDICITIS and STUDENT, and Genetic on GLASS, IONOSPHERE, LIVERDISORDER, and Z_F_S, with BHO typically close behind in these cases. Overall, the pattern showed that BHO delivered consistently low error across heterogeneous benchmarks, while conceding a few instances where gradient-based or classical evolutionary baselines aligned better with the data geometry. Reporting dispersion measures or statistical tests alongside these point estimates would further substantiate the observed gains.
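As a worked check of these figures using only the mean errors quoted above: 26.55% − 21.52% = 5.03 percentage points, and 5.03/26.55 ≈ 18.9% relative reduction versus Genetic; 34.48% − 21.52% = 12.96 points (12.96/34.48 ≈ 37.6%) versus Adam, and 34.34% − 21.52% = 12.82 points (12.82/34.34 ≈ 37.3%) versus BFGS.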
Significance levels were computed in R on the classification error data and encoded, as shown in Figure 14, with ns for p > 0.05, * for p < 0.05, ** for p < 0.01, *** for p < 0.001, and **** for p < 0.0001. For the pairwise comparisons, BHO vs. Adam, BHO vs. BFGS, and BHO vs. Genetic, we obtained p = ****, i.e., p < 0.0001, indicating extremely significant differences. Combined with BHO’s lower average error rates, this suggests that its advantage is not due to random variation but reflects a systematic performance difference.

4. Conclusions

We presented BioHealing Optimization, a population-based metaheuristic that blends stochastic “injury,” guided “healing” toward the incumbent best, and an optional DE(best/1,bin) recombination step, augmented by adaptive per-dimension controllers (scar map and momentum, hot-dimension focusing, RAGE/hyper-RAGE bursts, Lévy steps, and healing modulation). The architecture self-regulates exploration–exploitation while preserving the core loop of elite selection, recombination, disturbance, greedy acceptance, and restoration.
Our protocol used 30 independent runs per problem, a fixed budget of 1.5 × 10^5 evaluations, and harmonized parameter disclosure. Rank aggregation over 17 problems separated peak performance (best-of-runs) from reliability (mean-of-runs). Within this suite and budget, BHO attained the best combined ranking (best/mean), while strong baselines such as UDE3, EA4Eig, mLSHADE_RL, and CMA-ES remained competitive. This does not imply dominance on every instance, but supports the view that injury–healing–recombination with adaptive, per-dimension control is an effective strategy for challenging continuous optimization.
We commonly observed rapid early descent from Lévy/Gaussian perturbations, plateau escape when RAGE/hyper-RAGE fired, and stabilization via healing/protection (scar/bandage). On smooth, well-conditioned landscapes, CMA-ES often enjoyed faster early convergence, whereas BHO tended to excel on multimodal or non-separable cases.
The findings hold under the fixed budget and the evaluated corpus. Performance depends on problem traits (smoothness, separability, conditioning), and heavy-tailed steps may need conservative tuning under tight constraints. Mechanism overhead is linear in D and modest relative to evaluation cost, though non-zero. Sensitivity to key parameters (e.g., w p , h r ) is small to moderate, not null. Non-discriminative tests (near-ties) do not drive the conclusions. Adding a real-world case (e.g., Lennard–Jones) strengthens practical evidence but does not exhaust application breadth.
Additionally, Section 3.4 assessed external validity by applying BHO to the training of feedforward neural networks on thirty-three public classification datasets. Under the same protocol and evaluation budget, BHO attained the lowest mean classification error (21.52 percent), compared with Genetic (26.55 percent), BFGS (34.34 percent), and Adam (34.48 percent), as summarized in Table 12. Pairwise significance tests on the error data yielded p = **** (i.e., p < 0.0001) for BHO versus each comparator, as shown in Figure 14. These findings indicate that BHO’s exploration and healing dynamics transfer beyond synthetic benchmarks to supervised training objectives, while a small number of datasets still favor gradient-based or classical evolutionary baselines, which motivates broader architectures and hyperparameter sweeps in future work.

5. Future Research Directions

Future work can first aim at a clearer theory of convergence under stochastic, non-stationary dynamics. A practical route is to model the algorithm’s evolution (population, incumbent best, scar/momentum/bandage) as an extended state and study conditions that ensure stability when injury intensity decays over time. Within this lens, it is important to characterize when heavy-tailed Lévy jumps accelerate basin escape without hurting late-stage stability, relative to small Gaussian steps. In parallel, an explicit complexity account will clarify per-iteration time and memory costs, and indicate when objective evaluations dominate bookkeeping overhead.
A second priority is mechanism attribution and reduced hyperparameter sensitivity. Budget-matched ablations can reveal when RAGE/hyper-RAGE, hot-dims, or Lévy steps dominate and under which landscape types they are most effective (smooth, ill-conditioned, multimodal, constrained). Building on this, self-adaptation can be introduced: Intensity and probability knobs (e.g., wp, hr) can be adjusted online from simple performance signals, while burst triggers and hot-dim boosts are governed by lightweight feedback policies, maintaining stable performance without meticulous manual tuning.
Next, scaling to high dimensions and handling realistic constraints call for targeted extensions. Subspace search, scar-weighted directions, and lightweight preconditioning (diagonal or low-rank) can improve robustness and efficiency under strong anisotropy. For constraints, noise, and non-stationarity, healing can incorporate projection/prox steps, acceptance can be made noise-robust with budgeted re-evaluations, and bursts can be triggered by simple change detectors, maintaining safety within bounded domains.
Finally, a broader evaluation program will strengthen generality. Useful steps include larger test families with budget sweeps, convergence curves that report central tendency and dispersion, and time-to-target metrics. Incorporating representative real-world cases (e.g., Lennard–Jones, economic dispatch, materials models) under identical budgets and with reproducible code/environments will clarify where BHO excels, where classical baselines such as CMA-ES or UDE-type methods are preferable, and when hybrids are the right choice. Stating limitations and contraindications alongside results will guide responsible deployment in practical settings.

Author Contributions

V.C. implemented the methodology; I.G.T. and V.C. conducted the experiments, employing all optimization methods and problems, and provided the comparative experiments; I.G.T. and V.C. performed the statistical analysis and prepared the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financed by the European Union: Next-Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH-CREATE-INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  2. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  3. Dorigo, M.; Di Caro, G. Ant Colony Optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 1472, pp. 1470–1477. [Google Scholar] [CrossRef]
  4. Talbi, E.G. Metaheuristics: From Design to Implementation; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar] [CrossRef]
  5. Karaboga, D. An idea based on honey bee swarm for numerical optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2005, 39, 459–471. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Lewis, A. Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  8. Mirjalili, S. Dragonfly Algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  9. Yang, X.S.; Deb, S. Cuckoo Search via Lévy Flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  10. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar] [CrossRef]
  11. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and applications. Future Gener. Comput. Syst. 2020, 97, 849–872. [Google Scholar] [CrossRef]
  12. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  13. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Frome, UK, 2008. [Google Scholar]
  14. Krishnanand, K.N.; Ghose, D. Glowworm Swarm Optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell. 2009, 3, 87–124. [Google Scholar] [CrossRef]
  15. Arora, S.; Singh, S. Butterfly Optimization Algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  16. Passino, K.M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 2002, 22, 52–67. [Google Scholar] [CrossRef]
  17. Li, M.D.; Zhao, H.; Weng, X.W.; Han, T. A novel nature-inspired algorithm for optimization: Virus colony search. Adv. Eng. Softw. 2016, 92, 65–88. [Google Scholar] [CrossRef]
  18. Al-Betar, M.A.; Alyasseri, Z.A.A.; Awadallah, M.A.; Abu Doush, I. Coronavirus herd immunity optimizer (CHIO). Neural Comput. Appl. 2021, 33, 5011–5042. [Google Scholar] [CrossRef]
  19. Salhi, A.; Fraga, E.S. Nature-inspired optimisation approaches and the new plant propagation algorithm. In Proceedings of the International Conference on Numerical Analysis and Optimization (ICeMATH 2011), Halkidiki, Greece, 19–25 September 2011. [Google Scholar]
  20. Mehrabian, A.R.; Lucas, C. A novel numerical optimization algorithm inspired from weed colonization. Ecol. Inform. 2006, 1, 355–366. [Google Scholar] [CrossRef]
  21. Zhou, Y.; Zhang, J.; Yang, X. Root growth optimizer: A metaheuristic algorithm inspired by root growth. IEEE Access 2020, 8, 109376–109389. [Google Scholar]
  22. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  23. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  24. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  25. Sallam, K.M.; Chakraborty, S.; Elsayed, S.M. Gorilla troops optimizer for real-world engineering optimization problems. IEEE Access 2022, 10, 121396–121423. [Google Scholar]
  26. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2021, 191, 116158. [Google Scholar] [CrossRef]
  27. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  28. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  29. Boussaïd, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar] [CrossRef]
  30. Storn, R. On the usage of differential evolution for function optimization. In Proceedings of the 1996 Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS), Berkeley, CA, USA, 19–22 June 1996; IEEE Press: New York, NY, USA, 1996; pp. 519–523. [Google Scholar]
  31. Charilogis, V.; Tsoulos, I.G.; Tzallas, A.; Karvounis, E. Modifications for the Differential Evolution Algorithm. Symmetry 2022, 14, 447. [Google Scholar] [CrossRef]
  32. Charilogis, V.; Tsoulos, I.G. A Parallel Implementation of the Differential Evolution Method. Analytics 2023, 2, 17–30. [Google Scholar] [CrossRef]
  33. Chawla, S.; Saini, J.S.; Kumar, M. Wound healing based optimization—Vision and framework. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 88–91. [Google Scholar]
  34. Dhivyaprabha, T.T.; Subashini, P.; Krishnaveni, M. Synergistic fibroblast optimization: A novel nature-inspired computing algorithm. Front. Inf. Technol. Electron. Eng. 2018, 19, 815–833. [Google Scholar] [CrossRef]
  35. Dehghani, M.; Trojovský, P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
  36. Dehghani, M.; Trojovská, E.; Trojovský, P.; Malik, O.P. OOBO: A new metaheuristic algorithm for solving optimization problems. Biomimetics 2023, 8, 468. [Google Scholar] [CrossRef] [PubMed]
  37. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  38. Lang, Y.; Gao, Y. Dream Optimization Algorithm: A cognition-inspired metaheuristic and its applications to engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 436, 117718. [Google Scholar] [CrossRef]
  39. Cao, Y.; Luan, J. A novel differential evolution algorithm with multi-population and elites regeneration. PLoS ONE 2024, 19, e0302207. [Google Scholar] [CrossRef]
  40. Sun, Y.; Wu, Y.; Liu, Z. An improved differential evolution with adaptive population allocation and mutation selection. Expert Syst. Appl. 2024, 258, 125130. [Google Scholar] [CrossRef]
  41. Yang, Q.; Chu, S.C.; Pan, J.S.; Chou, J.H.; Watada, J. Dynamic multi-strategy integrated differential evolution algorithm based on reinforcement learning. Complex Intell. Syst. 2024, 10, 1845–1877. [Google Scholar] [CrossRef]
  42. Watanabe, Y.; Uchida, K.; Hamano, R.; Saito, S.; Nomura, M.; Shirakawa, S. (1+1)-CMA-ES with margin for discrete and mixed-integer problems. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’23, Lisbon, Portugal, 15–19 July 2023; pp. 882–890. [Google Scholar] [CrossRef]
  43. Uchida, K.; Hamano, R.; Nomura, M.; Saito, S.; Shirakawa, S. CMA-ES for discrete and mixed-variable optimization on sets of points. In Proceedings of the Parallel Problem Solving from Nature—PPSN XVIII, Hagenberg, Austria, 14–18 September 2024; pp. 236–251. [Google Scholar] [CrossRef]
  44. Siarry, P.; Berthiau, G.; Durdin, F.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. (TOMS) 1997, 23, 209–228. [Google Scholar] [CrossRef]
  45. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142. [Google Scholar] [CrossRef]
  46. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973. [Google Scholar] [CrossRef]
  47. Tsoulos, I.G.; Charilogis, V.; Kyrou, G.; Stavrou, V.N.; Tzallas, A. OPTIMUS: A Multidimensional Global Optimization Package. J. Open Source Softw. 2025, 10, 7584. [Google Scholar] [CrossRef]
  48. Yang, X.S. Nature-Inspired Metaheuristic Algorithms, 2nd ed.; Luniver Press: Frome, UK, 2010. [Google Scholar]
  49. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938. [Google Scholar] [CrossRef]
  50. Suryadevara, S.; Yanamala, A.K.Y. A Comprehensive Overview of Artificial Neural Networks: Evolution, Architectures, and Applications. Rev. Intel. Artif. Med. 2021, 12, 51–76. [Google Scholar]
  51. Kelly, M.; Longjohn, R.; Nottingham, K. The UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu (accessed on 24 September 2025).
  52. Alcalá-Fdez, J.; Fernandez, A.; Luengo, J.; Derrac, J.; García, S.; Sánchez, L.; Herrera, F. KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis Framework. J. Mult.-Valued Log. Soft Comput. 2011, 17, 255–287. [Google Scholar]
  53. Weiss, S.M.; Kulikowski, C.A. Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1991. [Google Scholar]
  54. Tzimourta, K.D.; Tsoulos, I.; Bilero, I.T.; Tzallas, A.T.; Tsipouras, M.G.; Giannakeas, N. Direct Assessment of Alcohol Consumption in Mental State Using Brain Computer Interfaces and Grammatical Evolution. Inventions 2018, 3, 51. [Google Scholar] [CrossRef]
  55. Quinlan, J.R. Simplifying Decision Trees. Int. J. Man-Mach. Stud. 1987, 27, 221–234. [Google Scholar] [CrossRef]
  56. Shultz, T.; Mareschal, D.; Schmidt, W. Modeling Cognitive Development on Balance Scale Phenomena. Mach. Learn. 1994, 16, 59–88. [Google Scholar] [CrossRef]
  57. Zhou, Z.H.; Jiang, Y. NeC4.5: Neural ensemble based C4.5. IEEE Trans. Knowl. Data Eng. 2004, 16, 770–773. [Google Scholar] [CrossRef]
  58. Setiono, R.; Leow, W.K. FERNN: An Algorithm for Fast Extraction of Rules from Neural Networks. Appl. Intell. 2000, 12, 15–25. [Google Scholar] [CrossRef]
  59. Demiroz, G.; Govenir, H.A.; Ilter, N. Learning Differential Diagnosis of Erythemato-Squamous Diseases using Voting Feature Intervals. Artif. Intell. Med. 1998, 13, 147–165. [Google Scholar]
  60. Horton, P.; Nakai, K. A Probabilistic Classification System for Predicting the Cellular Localization Sites of Proteins. In Proceedings of the International Conference on Intelligent Systems for Molecular Biology, St. Louis, MO, USA, 12–15 June 1996; Volume 4, pp. 109–115. [Google Scholar]
  61. Hayes-Roth, B.; Hayes-Roth, F. Concept learning and the recognition and classification of exemplars. J. Verbal Learn. Verbal Behav. 1977, 16, 321–338. [Google Scholar] [CrossRef]
  62. Kononenko, I.; Šimec, E.; Robnik-Šikonja, M. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55. [Google Scholar] [CrossRef]
  63. French, R.M.; Chater, N. Using noise to compute error surfaces in connectionist networks: A novel means of reducing catastrophic forgetting. Neural Comput. 2002, 14, 1755–1769. [Google Scholar] [CrossRef]
  64. Dy, J.G.; Brodley, C.E. Feature Selection for Unsupervised Learning. J. Mach. Learn. Res. 2004, 5, 845–889. [Google Scholar]
  65. Perantonis, S.J.; Virvilis, V. Input Feature Extraction for Multilayered Perceptrons Using Supervised Principal Component Analysis. Neural Process. Lett. 1999, 10, 243–252. [Google Scholar] [CrossRef]
  66. Garcke, J.; Griebel, M. Classification with sparse grids using simplicial basis functions. Intell. Data Anal. 2002, 6, 483–502. [Google Scholar] [CrossRef]
  67. Mcdermott, J.; Forsyth, R.S. Diagnosing a disorder in a classification benchmark. Pattern Recognit. Lett. 2016, 73, 41–43. [Google Scholar] [CrossRef]
  68. Cestnik, G.; Konenenko, I.; Bratko, I. Assistant-86: A Knowledge-Elicitation Tool for Sophisticated Users. In Progress in Machine Learning; Bratko, I., Lavrac, N., Eds.; Sigma Press: Wilmslow, UK, 1987; pp. 31–45. [Google Scholar]
  69. Elter, M.; Schulz-Wendtland, R.; Wittenberg, T. The prediction of breast cancer biopsy outcomes using two CAD approaches that both emphasize an intelligible decision process. Med Phys. 2007, 34, 4164–4172. [Google Scholar] [CrossRef]
  70. Little, M.A.; McSharry, P.E.; Roberts, S.J.; Costello, D.A.E.; Moroz, I.M. Exploiting Nonlinear Recurrence and Fractal Scaling Properties for Voice Disorder Detection. BioMed Eng. OnLine 2007, 6, 23. [Google Scholar] [CrossRef]
  71. Little, M.A.; McSharry, P.E.; Hunter, E.J.; Spielman, J.; Ramig, L.O. Suitability of dysphonia measurements for telemonitoring of Parkinson’s disease. IEEE Trans. Biomed. Eng. 2009, 56, 1015–1022. [Google Scholar] [CrossRef] [PubMed]
  72. Smith, J.W.; Everhart, J.E.; Dickson, W.C.; Knowler, W.C.; Johannes, R.S. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. Proc. Annu. Symp. Comput. Appl. Med. Care 1988, 9, 261–265. [Google Scholar]
  73. Lucas, D.D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y. Failure analysis of parameter-induced simulation crashes in climate models. Geosci. Model Dev. 2013, 6, 1157–1171. [Google Scholar] [CrossRef]
  74. Giannakeas, N.; Tsipouras, M.G.; Tzallas, A.T.; Kyriakidi, K.; Tsianou, Z.E.; Manousou, P.; Hall, A.; Karvounis, E.C.; Tsianos, V.; Tsianos, E. A clustering based method for collagen proportional area extraction in liver biopsy images. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Virtual, 1–5 November 2015; art. no. 7319047. pp. 3097–3100. [Google Scholar]
  75. Hastie, T.; Tibshirani, R. Non-parametric logistic and proportional odds regression, JRSS-C. Appl. Stat. 1987, 36, 260–276. [Google Scholar] [CrossRef]
  76. Dash, M.; Liu, H.; Scheuermann, P.; Tan, K.L. Fast hierarchical clustering and its validation. Data Knowl. Eng. 2003, 44, 109–138. [Google Scholar] [CrossRef]
  77. Cortez, P.; Silva, A.M.G. Using data mining to predict secondary school student performance. In Proceedings of the 5th Annual Future Business Technology Conference, EUROSIS-ETI, Porto, Portugal, 9–11 April 2008; pp. 5–12. [Google Scholar]
  78. Yeh, I.-C.; Yang, K.-J.; Ting, T.-M. Knowledge discovery on RFM model using Bernoulli sequence. Expert Syst. Appl. 2009, 36, 5866–5871. [Google Scholar] [CrossRef]
  79. Jeyasingh, S.; Veluchamy, M. Modified bat algorithm for feature selection with the Wisconsin diagnosis breast cancer (WDBC) dataset. Asian Pac. J. Cancer Prev. APJCP 2017, 18, 1257. [Google Scholar] [PubMed]
  80. Alshayeji, M.H.; Ellethy, H.; Gupta, R. Computer-aided detection of breast cancer on the Wisconsin dataset: An artificial neural networks approach. Biomed. Signal Process. Control 2022, 71, 103141. [Google Scholar] [CrossRef]
  81. Raymer, M.; Doom, T.E.; Kuhn, L.A.; Punch, W.F. Knowledge discovery in medical and biological datasets using a hybrid Bayes classifier/evolutionary algorithm. IEEE Trans. Syst. Man Cybern. 2003, 33, 802–813. [Google Scholar] [CrossRef]
  82. Zhong, P.; Fukushima, M. Regularized nonsmooth Newton method for multi-class support vector machines. Optim. Methods Softw. 2007, 22, 225–236. [Google Scholar] [CrossRef]
  83. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef]
  84. Tzallas, A.T.; Tsipouras, M.G.; Fotiadis, D.I. Automatic Seizure Detection Based on Time-Frequency Analysis and Artificial Neural Networks. Comput. Intell. Neurosci. 2007, 2007, 80510. [Google Scholar] [CrossRef]
  85. Koivisto, M.; Sood, K. Exact Bayesian Structure Discovery in Bayesian Networks. J. Mach. Learn. Res. 2004, 5, 549–573. [Google Scholar]
Figure 1. Scar map: momentum and bandage.
Figure 2. Hot-dims focus (top-K and boosts for injury).
Figure 3. RAGE and hyper-RAGE.
Figure 4. Lévy-wounds (Mantegna).
Figure 5. Alpha-strike.
Figure 6. Catastrophic micro-reset: reinitialize a small subset when stagnation persists.
Figure 7. Healing adjustments and cooldown.
Figure 9. BHO flowchart: core algorithm and mechanism integration.
Figure 10. Graphical representation of w p and h r for the “Parameter Estimation for Frequency-Modulated Sound Waves” problem.
Figure 11. Graphical representation of w p and h r for the “Lennard–Jones Potential” problem.
Figure 12. Graphical representation of w p and h r for the “Tersoff Potential for Model Si (B)” problem.
Figure 13. Time scaling under a fixed iteration budget.
Figure 14. Statistical comparison of the proposed method with other approaches on the classification datasets.
Table 1. Parameters and settings of BHO.
GroupSymbol/NameValueDescription (Upright)
Core N P 100Population size.
i t e r max 500Max iterations (also used in w s schedule).
F E max 150,000Max function evaluations (stopping).
w s 0 0.14Initial injury intensity; decays with i t e r .
w p 0.38Per-dimension wounding probability.
h r 0.80Per-dimension healing probability.
r p 0.15Probability of DE (best/1,bin) move.
F0.70DE differential weight (scale factor).
C R 0.50DE crossover probability.
enabledyesEnable scar map.
η 0.06Learning rate for p j ( w ) , s j ( w ) updates.
L b 4Bandage freeze length (iters).
p min / p max 0.05/0.98Bounds for per-dimension wound prob.
s min / s max 0.50/3.00Bounds for per-dimension wound scale.
ρ 0.25Momentum forgetting (decay).
κ 0.75Bias strength toward momentum direction.
Hot-Dims Focus δ 0.05Decay rate for d j (dim scores).
K6Number of top (hot) dimensions.
β p 1.5Prob. boost on hot-dims (effective, non-persistent).
β s 1.6Scale boost on hot-dims (effective, non-persistent).
RAGE/hyper-RAGEenabledyesEnable RAGE bursts.
T rage 12No-improve threshold to arm RAGE (iters).
B10RAGE burst length (iters).
φ 2.2Prob. multiplier during RAGE.
γ 2.8Scale multiplier during RAGE.
ignore bandageyesIgnore bandage during RAGE window.
enabledyesEnable hyper-RAGE.
T HR 28Second stagnation threshold.
B HR 12Hyper-RAGE burst length (iters).
φ HR 3.0Prob. multiplier during hyper-RAGE.
γ HR 3.5Scale multiplier during hyper-RAGE.
Lévy-Wounds (Mantegna)enabledyesEnable Lévy steps.
α 1.5Stability index (tail heaviness).
c 0.6Global Lévy scale.
Alpha-Strike ρ 0.10Reinit probability per individual within strike.
σ α 0.90Strike step scaling modifier (implementation-specific).
Catastrophic Micro-ResetenabledyesEnable micro-reset.
T cat 40No-improve iters ⇒ reset.
ρ cat 0.08Fraction of population to reset.
σ cat 0.25Std. around elite centroid for reset (if biased).
Healing Adjustments and Cooldown r h burst 0.35Healing-rate reduction during bursts.
r h post 0.25Healing-rate increase post-burst.
C h 8Healing cooldown length (iters).
Adaptive Restarts and Hybrid Injection T restart 10Global stagnation threshold for restart.
ρ restart Replacement fraction at restart (set per experiment).
λ Probability of injection from P inj (vs. uniform).
D min Minimum target diversity after restart.
Table 2. Parameters and settings of other methods.
NameValueDescription
N P 100Population size for all methods
i t e r m a x 500Maximum number of iterations for all methods
CLPSO
clProb0.3Comprehensive learning probability
cognitiveWeight1.49445Cognitive weight
inertiaWeight0.729Inertia weight
mutationRate0.01Mutation rate
socialWeight1.49445Social weight
CMA-ES
N P C M A E S 4 + 3 · log ( d i m ) Population size
EA4Eig
archiveSize100Archive size for JADE-style mutation
eig_interval5Recompute eigenbasis every k iterations
maxCR1Upper bound for CR
maxF1Upper bound for F
minCR0Lower bound for CR
minF0.1Lower bound for F
pbest0.2p-best fraction (current-to-pbest/1)
tauCR0.1Self-adaptation prob. for CR
tauF0.1Self-adaptation prob. for F
mLSHADE_RL
archiveSize500Archive size
memorySize10Success-history memory size (H)
minPopulation4Minimum population size
pmax0.2Maximum p-best fraction
pmin0.05Minimum p-best fraction
SaDE
crSigma0.1Std for CR sampling
fGamma0.1Scale for Cauchy F sampling
initCR0.5Initial CR mean
initF0.7Initial F mean
learningPeriod25Iterations per adaptation window
UDE3
minPopulation4Minimum population size.
memorySize10Success-history memory size (H).
archiveSize100Archive size.
pmin0.05Minimum p-best fraction.
pmax0.2Maximum p-best fraction.
Table 3. The benchmark functions used in the experiments.
ProblemFormulaDimBounds
Parameter
Estimation for
Frequency-Modulated
Sound Waves
min x [ 6.4 , 6.35 ] 6 f ( x ) = 1 N n = 1 N y ( n ; x ) y target ( n ) 2 y ( n ; x ) = x 0 sin x 1 n + x 2 sin ( x 3 n + x 4 sin ( x 5 n ) ) 6 x i [ 6.4 , 6.35 ]
Lennard–Jones
Potential
min x R 3 N 6 f ( x ) = 4 i = 1 N 1 j = i + 1 N 1 r i j 12 1 r i j 6 30 x 0 ( 0 , 0 , 0 )   x 1 , x 2 [ 0 , 4 ]   x 3 [ 0 , π ]   x 3 k 3   x 3 k 2   x i [ b k , + b k ]
Bifunctional
Catalyst
Blend
Optimal
Control
d x 1 d t = k 1 x 1 , d x 2 d t = k 1 x 1 k 2 x 2 + k 3 x 2 + k 4 x 3 , d x 3 d t = k 2 x 2 , d x 4 d t = k 4 x 4 + k 5 x 5 , d x 5 d t = k 3 x 2 + k 6 x 4 k 5 x 5 + k 7 x 6 + k 8 x 7 + k 9 x 5 + k 10 x 7   d x 6 d t = k 8 x 5 k 7 x 6 , d x 7 d t = k 9 x 5 k 10 x 7   k i ( u ) = c i 1 + c i 2 u + c i 3 u 2 + c i 4 u 3 1 u [ 0.6 , 0.9 ]
Optimal
Control of a
Nonlinear
Stirred-
Tank Reactor
J ( u ) = 0 0.72 x 1 ( t ) 2 + x 2 ( t ) 2 + 0.1 u 2 d t   d x 1 d t = 2 x 1 + x 2 + 1.25 u + 0.5 exp x 1 x 1 + 2   d x 2 d t = x 2 + 0.5 exp x 1 x 1 + 2   x 1 ( 0 ) = 0.9 , x 2 ( 0 ) = 0.09 , t [ 0 , 0.72 ] 1 u [ 0 , 5 ]
Tersoff
Potential
for Model Si (B)
min x Ω f ( x ) = i = 1 N E ( x i )   E ( x i ) = 1 2 j i f c ( r i j ) V R ( r i j ) B i j V A ( r i j ) where r i j = x i x j , V R ( r ) = A exp ( λ 1 r )   V A ( r ) = B exp ( λ 2 r )   f c ( r ) : cutoff function with f c ( r ) : angle parameter 30 x 1 [ 0 , 4 ]   x 2 [ 0 , 4 ]   x 3 [ 0 , π ]   x i 4 ( i 3 ) 4 , 4
Tersoff
Potential
for Model Si (C)
min x V ( x ) = i = 1 N j > i N f C ( r i j ) a i j f R ( r i j ) + b i j f A ( r i j )   f C ( r ) = 1 , r < R D 1 2 + 1 2 cos π ( r R + D ) 2 D , | r R | D 0 , r > R + D   f R ( r ) = A exp ( λ 1 r )   f A ( r ) = B exp ( λ 2 r )   b i j = 1 + ( β n ) ζ i j n 1 / ( 2 n )   k i , j f C ( r i k ) g ( θ i j k ) exp λ 3 3 ( r i j r i k ) 3 30 x 1 [ 0 , 4 ]   x 2 [ 0 , 4 ]   x 3 [ 0 , π ]   x i 4 ( i 3 ) 4 , 4
Spread
Spectrum Radar
Polly phase
Code Design
min x X f ( x ) = max | φ 1 ( x ) | , | φ 2 ( x ) | , , | φ m ( x ) |   X = { x R n 0 x j 2 π , j = 1 , , n } m = 2 n 1   φ j ( x ) = k = 1 n j cos ( x k x k + j ) for j = 1 , , n 1 n for j = n φ 2 n j ( x ) for j = n + 1 , , 2 n 1   φ j ( x ) = k = 1 n j cos ( x k x k + j ) , j = 1 , , n 1   φ n ( x ) = n , φ n + ( x ) = φ n ( x ) , = 1 , , n 1 20 x j [ 0 , 2 π ]
Transmission
Network
Expansion
Planning
min l Ω c l n l + W 1 l O L | f l f ¯ l | + W 2 l Ω max ( 0 , n l n ¯ l )   S f = g d   f l = γ l n l Δ θ l , l Ω   | f l | f ¯ l n l , l Ω   0 n l n ¯ l , n l Z , l Ω 7 0 n i n ¯ l
n i Z
Electricity
Transmission
Pricing
min x f ( x ) = i = 1 N g C i g e n P i g e n R i g e n 2 + j = 1 N d C j l o a d P j l o a d R j l o a d 2   j G D i , j + j B T i , j = P i g e n , i   i G D i , j + i B T i , j = P j l o a d , j   G D i , j m a x = min ( P i g e n B T i , j , P j l o a d B T i , j ) 126 G D i , j [ 0 , G D i , j m a x ]
Circular
Antenna
Array
Design
min r 1 , , r 6 , φ 1 , , φ 6 f ( x ) = max θ Ω A F ( x , θ )   A F ( x , θ ) = k = 1 6 exp j 2 π r k cos ( θ θ k ) + φ k π 180 12 r k [ 0.2 , 1 ]   φ k [ 180 , 180 ]
Dynamic
Economic
Dispatch 1
min P f ( P ) = t = 1 24 i = 1 5 a i P i , t 2 + b i P i , t + c i   P i min P i , t P i max , i = 1 , , 5 , t = 1 , , 24   i = 1 5 P i , t = D t , t = 1 , , 24   P min = [ 10 , 20 , 30 , 40 , 50 ]   P max = [ 75 , 125 , 175 , 250 , 300 ] 120 P i min P i , t P i max
Dynamic
Economic
Dispatch 2
min P f ( P ) = t = 1 24 i = 1 9 a i P i , t 2 + b i P i , t + c i   P i min P i , t P i max , i = 1 , , 5 , t = 1 , , 24   i = 1 5 P i , t = D t , t = 1 , , 24   P min = [ 150 , 135 , 73 , 60 , 73 , 57 , 20 , 47 , 20 ]   P max = [ 470 , 460 , 340 , 300 , 243 , 160 , 130 , 120 , 80 ] 216 P i min P i , t P i max
Static
Economic
Load
Dispatch
(1, 2, 3, 4, 5)
min P 1 , , P N G F = i = 1 N G f i ( P i )   f i ( P i ) = a i P i 2 + b i P i + c i , i = 1 , 2 , , N G   f i ( P i ) = a i P i 2 + b i P i + c i + | e i sin ( f i ( P i min P i ) ) |   P i min P i P i max , i = 1 , 2 , , N G   i = 1 N G P i = P D + P L   P L = i = 1 N G j = 1 N G P i B i j P j + i = 1 N G B 0 i P i + B 00   P i P i 0 U R i P i 0 P i D R i 6
13
15
40
140
See
Technical
Report
of
CEC2011
Table 4. Properties of benchmark functions used.
ProblemModalitySeparabilityConditioning
Parameter
Estimation for
Frequency-Modulated
Sound Waves
MultimodalNon-separableModerate
Lennard–Jones
Potential
Near-flat/
effectively
Non-separableIll-conditioned
Bifunctional-
Catalyst Blend
Optimal Control
Near-flat /
effectively unimodal
Non-separable Near-flat/
well-conditioned
Optimal Control of a
Nonlinear Stirred-
Tank Reactor
Near-flat/
effectively unimodal
Non-separable Near-flat/
well-conditioned
Tersoff Potential
for Model Si (B)
MultimodalNon-separableIll-conditioned
Tersoff Potential
for Model Si (C)
MultimodalNon-separableIll-conditioned
Spread-Spectrum
Radar Polly Phase-
Code Design
MultimodalNon-separableModerate
Transmission Network
Expansion Planning
Near-flat/
effectively unimodal
Non-separableNear-flat
Electricity
Transmission Pricing
Structured/
likely multimodal
Non-separableModerate
Circular Antenna
Array Design
MultimodalNon-separableModerate
Dynamic Economic
Dispatch 1
Smooth (nearly quadratic)Non-separable
Dynamic Economic
Dispatch 2
Structured/
possibly multimodal
Non-separableModerate
Static Economic
Load Dispatch 1
Multimodal
(valve-point effects)
Non-separableIll-conditioned
Static Economic
Load Dispatch 2
Multimodal
(valve-point effects)
Non-separableIll-conditioned
Static Economic
Load Dispatch 3
Multimodal
(valve-point effects)
Non-separableIll-conditioned
Static Economic
Load Dispatch 4
Multimodal
(valve-point effects)
Non-separableIll-conditioned
Static Economic
Load Dispatch 5
Multimodal
(valve-point effects)
Non-separableIll-conditioned
Table 5. Sensitivity analysis of the method parameters for the “Parameter Estimation for Frequency-Modulated Sound Waves” problem.
Parameter | Value | Mean | Min | Max | Iters | Main-Effect Range
w p | 0.1 | 0.138292 | 2.55618 × 10^−20 | 0.274259 | 150 | 0.0066789
w p | 0.3 | 0.141141 | 3.30715 × 10^−17 | 0.274394 | 150 |
w p | 0.5 | 0.135853 | 8.66494 × 10^−21 | 0.234322 | 150 |
w p | 0.7 | 0.136441 | 6.84876 × 10^−20 | 0.272128 | 150 |
w p | 0.9 | 0.134732 | 3.07218 × 10^−21 | 0.21554 | 150 |
h r | 0.1 | 0.133427 | 8.66494 × 10^−21 | 0.221196 | 150 | 0.00544895
h r | 0.3 | 0.138797 | 5.62092 × 10^−17 | 0.274394 | 150 |
h r | 0.5 | 0.137095 | 3.07218 × 10^−21 | 0.274259 | 150 |
h r | 0.7 | 0.137095 | 5.95873 × 10^−20 | 0.235609 | 150 |
h r | 0.9 | 0.138533 | 2.08349 × 10^−16 | 0.23928 | 150 |
Table 6. Sensitivity analysis of the method parameters for the “Lennard–Jones Potential” problem.
Parameter | Value | Mean | Min | Max | Iters | Main-Effect Range
w p | 0.1 | −27.1628 | −32.3234 | −19.509 | 150 | 0.247722
w p | 0.3 | −27.2453 | −32.3291 | −20.1075 | 150 |
w p | 0.5 | −27.1499 | −33.3141 | −18.626 | 150 |
w p | 0.7 | −26.9975 | −33.0304 | −20.6726 | 150 |
w p | 0.9 | −27.243 | −33.216 | −19.6771 | 150 |
h r | 0.1 | −27.0514 | −33.0304 | −20.1075 | 150 | 0.565867
h r | 0.3 | −27.1281 | −32.3291 | −18.626 | 150 |
h r | 0.5 | −27.5176 | −33.3141 | −21.4231 | 150 |
h r | 0.7 | −27.1498 | −33.216 | −19.509 | 150 |
h r | 0.9 | −26.9517 | −31.699 | −19.6771 | 150 |
Table 7. Sensitivity analysis of the method parameters for the “Tersoff Potential for Model Si (B)” problem.
Parameter | Value | Mean | Min | Max | Iters | Main-Effect Range
w p | 0.1 | −26.2187 | −27.4563 | −24.7514 | 150 | 0.20291
w p | 0.3 | −26.2884 | −28.385 | −24.788 | 150 |
w p | 0.5 | −26.1057 | −27.5849 | −25.209 | 150 |
w p | 0.7 | −26.3086 | −28.1733 | −24.8844 | 150 |
w p | 0.9 | −26.2761 | −27.4406 | −25.5369 | 150 |
h r | 0.1 | −26.3295 | −28.385 | −24.7514 | 150 | 0.164339
h r | 0.3 | −26.2855 | −28.1733 | −25.1714 | 150 |
h r | 0.5 | −26.1948 | −27.7785 | −24.788 | 150 |
h r | 0.7 | −26.2224 | −27.1772 | −25.2564 | 150 |
h r | 0.9 | −26.1652 | −27.5849 | −25.1464 | 150 |
Table 8. Comparison of algorithms based on best and mean after 1.5 × 10^5 FEs.
Function EA4Eig
Best/Mean
(2022) [35]
UDE3
Best/Mean
(2024) [36]
mLSHADE_RL
Best/Mean
(2024) [37]
CLPSO
Best/Mean
(2006) [38]
SaDE
Best/Mean
(2009) [39]
jDE
Best/Mean
(2006) [40]
CMA-ES
Best/Mean
(2001) [48]
BHO
Best/Mean
Parameter Estimation for Frequency-Modulated Sound Waves (best) | 0.153993305 | 0.03808755 | 0.116157535 | 0.1314837477 | 0.095829444 | 0.116157541 | 0.18160916 | 1.543774 × 10⁻²⁵
Parameter Estimation for Frequency-Modulated Sound Waves (mean) | 0.213447296 | 0.115833815 | 0.205011062 | 0.2124981688 | 0.195600253 | 0.146008756 | 0.256863966 | 0.200220764
Lennard–Jones Potential (best) | -18.48174236 | -21.41786661 | -28.41816707 | -13.43649135 | -21.93636189 | -29.98126575 | -28.42253189 | -32.07742417
Lennard–Jones Potential (mean) | -16.3133561 | -17.33977959 | -22.49792055 | -10.25073403 | -17.95333019 | -27.49258505 | -25.78783328 | -24.33212306
Bifunctional-Catalyst Blend Optimal Control (best) | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591
Bifunctional-Catalyst Blend Optimal Control (mean) | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591 | -0.000286591
Optimal Control of a Nonlinear Stirred-Tank Reactor (best) | 0.390376723 | 0.390376723 | 0.390376723 | 0.3903767228 | 0.390376723 | 0.390376723 | 0.3903767228 | 0.3903767228
Optimal Control of a Nonlinear Stirred-Tank Reactor (mean) | 0.390376723 | 0.390376723 | 0.390376723 | 0.3903767228 | 0.390376723 | 0.390376723 | 0.390376723 | 0.390376723
Tersoff Potential for Model Si (B) (best) | -29.11960284 | -29.44152761 | -28.60814558 | -28.23544117 | -27.25703406 | -13.51157064 | -29.26244222 | -29.03183049
Tersoff Potential for Model Si (B) (mean) | -27.89597789 | -25.70627602 | -26.07976794 | -26.18834522 | -25.25867422 | -3.983690794 | -27.5889735 | -27.26630121
Tersoff Potential for Model Si (C) (best) | -33.39767521 | -33.12997729 | -32.28575942 | -30.85200257 | -31.85343594 | -18.76214649 | -33.19699356 | -33.38947338
Tersoff Potential for Model Si (C) (mean) | -31.11610936 | -28.6603137 | -30.03594436 | -28.87349048 | -29.59692733 | -8.506037168 | -31.79270914 | -31.31864105
Spread-Spectrum Radar Polly Phase-Code Design (best) | 0.517866993 | 1.048240196 | 0.033146096 | 1.085334991 | 0.572731322 | 1.525870558 | 0.01484822722 | 0.195096433
Spread-Spectrum Radar Polly Phase-Code Design (mean) | 0.838752978 | 1.265152951 | 0.625788451 | 1.343956153 | 0.844014075 | 1.812042166 | 0.171988666 | 0.601190863
Transmission Network Expansion Planning (best) | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250
Transmission Network Expansion Planning (mean) | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250
Electricity Transmission Pricing (best) | 13773680 | 13773582.53 | 13773567.36 | 13775010.1 | 13773468.68 | 13774627.84 | 13775841.77 | 13773334.9
Electricity Transmission Pricing (mean) | 13774198.28 | 13773582.53 | 13773852.63 | 13775395.07 | 13773930.93 | 14020953.78 | 13787550.18 | 13773632.45
Circular Antenna Array Design (best) | 0.006809638 | 0.006809653 | 0.006809701 | 0.006933401045 | 0.00681287 | 0.006820072 | 0.007204797576 | 0.007101505
Circular Antenna Array Design (mean) | 0.006809638 | 0.011722944 | 0.006823547 | 0.05181551798 | 0.008186204 | 0.017657998 | 0.008635655364 | 0.158523549
Dynamic Economic Dispatch 1 (best) | 412736103.9 | 410197836.9 | 415275891.6 | 428607927.6 | 411226317.3 | 968042312.1 | 88285.6024 | 410074526.4
Dynamic Economic Dispatch 1 (mean) | 421199260.5 | 410628483.3 | 418526775.3 | 435250914.5 | 413699347.4 | 1034393036 | 102776.7103 | 410079513.3
Dynamic Economic Dispatch 2 (best) | 346855.5418 | 357530.5408 | 392247.7213 | 33031590.31 | 519820.5596 | 340091475.3 | 502699.4187 | 347469.862
Dynamic Economic Dispatch 2 (mean) | 12332507.25 | 537163.4139 | 5966956.365 | 53906147.38 | 4090304.183 | 97471715.14 | 77720.15113 | 54734.7105
Static Economic Load Dispatch 1 (best) | 6163.560978 | 6164.766919 | 6163.546883 | 6554.672173 | 6360.353305 | 6163.749006 | 6657.613028 | 6512.525519
Static Economic Load Dispatch 1 (mean) | 6170.965013 | 6266.250251 | 6353.722019 | 7668.333603 | 6464.861828 | 6778.527028 | 415917.4625 | 6772.880257
Static Economic Load Dispatch 2 (best) | 14.46160489 | 18725.64707 | 17905.85383 | 19030.36081 | 18455.37286 | 1161578.904 | 763001.2185 | 18754.99866
Static Economic Load Dispatch 2 (mean) | 18779.92036 | 19660.49074 | 18661.20763 | 20699.00219 | 21829.10208 | 3671587.605 | 1425815.44 | 19232.05211
Static Economic Load Dispatch 3 (best) | 470023233.3 | 470023232.3 | 470023232.6 | 470192288.3 | 470023232.7 | 471058115.8 | 470023232.3 | 470023233.2
Static Economic Load Dispatch 3 (mean) | 470023234.5 | 470023232.3 | 470023234.7 | 470294703.2 | 470023232.7 | 471963142.3 | 470023232.3 | 470023278
Static Economic Load Dispatch 4 (best) | 71193.07649 | 168334.7003 | 71067.8441 | 884980.5569 | 862196.4326 | 482592.7144 | 76053.5197 | 71089.03508
Static Economic Load Dispatch 4 (mean) | 71193.07649 | 348720.6677 | 406986.2181 | 1423887.358 | 1469886.142 | 17527314.24 | 2925852.935 | 100831.805
Static Economic Load Dispatch 5 (best) | 8085796774 | 8070408727 | 8079118012 | 8105947615 | 8078489742 | 8453090778 | 8072077963 | 8072061692
Static Economic Load Dispatch 5 (mean) | 8145304780 | 8071802922 | 8104455647 | 8110924071 | 8081680478 | 8459337082 | 8084017791 | 74458095
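Tables 9 and 10 summarize Table 8 by ranking the algorithms separately on each problem. Below is a minimal sketch of that per-problem ranking (not the authors' code), assuming plain ordinal ranks on the objective value, lower being better for these minimization problems, and ignoring the shared ranks that the tables assign to exact ties; the example uses the Lennard–Jones best-of-runs row of Table 8.

```python
# Minimal sketch (not the authors' code): per-problem ranking used to build
# Tables 9 and 10 from Table 8. Algorithms are ranked by objective value,
# lower being better; ties (shared ranks in the tables) are not handled here.
# Values: "best" row of the Lennard-Jones Potential entry in Table 8.
best = {"EA4Eig": -18.48174236, "UDE3": -21.41786661, "mLSHADE_RL": -28.41816707,
        "CLPSO": -13.43649135, "SaDE": -21.93636189, "jDE": -29.98126575,
        "CMA-ES": -28.42253189, "BHO": -32.07742417}

order = sorted(best, key=best.get)                  # ascending objective value
ranks = {alg: i + 1 for i, alg in enumerate(order)} # 1 = best
print(ranks)  # BHO: 1, jDE: 2, CMA-ES: 3, ..., CLPSO: 8, matching the Table 9 row
```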
Table 9. Detailed ranking of algorithms based on best after 1.5 × 10⁵ FEs.
Function | EA4Eig (2022) [35] | UDE3 (2024) [36] | mLSHADE_RL (2024) [37] | CLPSO (2006) [38] | SaDE (2009) [39] | jDE (2006) [40] | CMA-ES (2001) [48] | BHO
Parameter Estimation for Frequency-Modulated Sound Waves | 7 | 2 | 4 | 6 | 3 | 5 | 8 | 1
Lennard–Jones Potential | 7 | 6 | 4 | 8 | 5 | 2 | 3 | 1
Bifunctional-Catalyst Blend Optimal Control | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Optimal Control of a Nonlinear Stirred-Tank Reactor | 4 | 4 | 4 | 1 | 4 | 4 | 1 | 1
Tersoff Potential for Model Si (B) | 3 | 1 | 5 | 6 | 7 | 8 | 2 | 4
Tersoff Potential for Model Si (C) | 1 | 4 | 5 | 7 | 6 | 8 | 3 | 2
Spread-Spectrum Radar Polly Phase-Code Design | 4 | 6 | 2 | 7 | 5 | 8 | 1 | 3
Transmission Network Expansion Planning | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Electricity Transmission Pricing | 5 | 4 | 3 | 7 | 2 | 6 | 8 | 1
Circular Antenna Array Design | 1 | 2 | 3 | 6 | 4 | 5 | 8 | 7
Dynamic Economic Dispatch 1 | 5 | 3 | 6 | 7 | 4 | 8 | 1 | 2
Dynamic Economic Dispatch 2 | 1 | 3 | 4 | 7 | 6 | 8 | 5 | 2
Static Economic Load Dispatch 1 | 2 | 4 | 1 | 7 | 5 | 3 | 8 | 6
Static Economic Load Dispatch 2 | 1 | 4 | 2 | 6 | 3 | 8 | 7 | 5
Static Economic Load Dispatch 3 | 1 | 1 | 4 | 7 | 5 | 8 | 1 | 6
Static Economic Load Dispatch 4 | 3 | 4 | 1 | 7 | 6 | 8 | 5 | 2
Static Economic Load Dispatch 5 | 6 | 1 | 5 | 7 | 4 | 8 | 3 | 2
Total | 53 | 51 | 55 | 98 | 71 | 99 | 66 | 47
Table 10. Detailed ranking of algorithms based on mean after 1.5 × 10⁵ FEs.
Function | EA4Eig (2022) [35] | UDE3 (2024) [36] | mLSHADE_RL (2024) [37] | CLPSO (2006) [38] | SaDE (2009) [39] | jDE (2006) [40] | CMA-ES (2001) [48] | BHO
Parameter Estimation for Frequency-Modulated Sound Waves | 7 | 1 | 5 | 6 | 3 | 2 | 8 | 4
Lennard–Jones Potential | 7 | 6 | 4 | 8 | 5 | 1 | 2 | 3
Bifunctional-Catalyst Blend Optimal Control | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Optimal Control of a Nonlinear Stirred-Tank Reactor | 2 | 2 | 2 | 1 | 2 | 2 | 2 | 2
Tersoff Potential for Model Si (B) | 1 | 6 | 5 | 4 | 7 | 8 | 2 | 3
Tersoff Potential for Model Si (C) | 3 | 7 | 4 | 6 | 5 | 8 | 1 | 2
Spread-Spectrum Radar Polly Phase-Code Design | 4 | 6 | 3 | 7 | 5 | 8 | 1 | 2
Transmission Network Expansion Planning | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Electricity Transmission Pricing | 5 | 1 | 3 | 6 | 4 | 8 | 7 | 2
Circular Antenna Array Design | 1 | 5 | 2 | 7 | 3 | 6 | 4 | 8
Dynamic Economic Dispatch 1 | 6 | 3 | 5 | 7 | 4 | 8 | 1 | 2
Dynamic Economic Dispatch 2 | 6 | 3 | 5 | 7 | 4 | 8 | 2 | 1
Static Economic Load Dispatch 1 | 1 | 2 | 3 | 7 | 4 | 6 | 8 | 5
Static Economic Load Dispatch 2 | 2 | 4 | 1 | 5 | 6 | 8 | 7 | 3
Static Economic Load Dispatch 3 | 4 | 1 | 5 | 7 | 3 | 8 | 1 | 6
Static Economic Load Dispatch 4 | 1 | 3 | 4 | 5 | 6 | 8 | 7 | 2
Static Economic Load Dispatch 5 | 7 | 2 | 5 | 6 | 3 | 8 | 4 | 1
Total | 59 | 54 | 58 | 91 | 66 | 99 | 59 | 48
Table 11. Comparison of algorithms and final ranking.
Algorithm | Rank Sum (Best) | Rank Sum (Mean) | Overall Rank Sum | Average Rank | Final Rank
BHO | 47 | 48 | 95 | 2.794 | 1
UDE3 (2024) [36] | 51 | 54 | 105 | 3.088 | 2
EA4Eig (2022) [35] | 53 | 59 | 112 | 3.294 | 3
mLSHADE_RL (2024) [37] | 55 | 58 | 113 | 3.323 | 4
CMA-ES (2001) [48] | 66 | 59 | 125 | 3.676 | 5
SaDE (2009) [39] | 71 | 66 | 137 | 4.029 | 6
CLPSO (2006) [38] | 98 | 91 | 189 | 5.558 | 7
jDE (2006) [40] | 99 | 99 | 198 | 5.823 | 8
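Table 11 aggregates the per-problem ranks of Tables 9 and 10. A minimal sketch of the aggregation is shown below (not the authors' code), assuming the overall score is the sum of the two rank sums and the average rank is that sum divided by the 34 rankings (17 problems, each ranked once on best and once on mean); it reproduces the ordering and the BHO figures of Table 11.

```python
# Minimal sketch (not the authors' code) of the aggregation behind Table 11,
# assuming Overall = rank sum on "best" + rank sum on "mean" and
# Average = Overall / 34 (17 problems x 2 criteria).
best_rank_sum = {"BHO": 47, "UDE3": 51, "EA4Eig": 53, "mLSHADE_RL": 55,
                 "CMA-ES": 66, "SaDE": 71, "CLPSO": 98, "jDE": 99}   # Table 9 totals
mean_rank_sum = {"BHO": 48, "UDE3": 54, "EA4Eig": 59, "mLSHADE_RL": 58,
                 "CMA-ES": 59, "SaDE": 66, "CLPSO": 91, "jDE": 99}   # Table 10 totals

n_rankings = 17 * 2
overall = {alg: best_rank_sum[alg] + mean_rank_sum[alg] for alg in best_rank_sum}

for place, alg in enumerate(sorted(overall, key=overall.get), start=1):
    print(place, alg, overall[alg], round(overall[alg] / n_rankings, 3))
# 1 BHO 95 2.794 ... (cf. Table 11)
```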
Table 12. Experimental results on the classification datasets across various machine learning methods. The bold notation is used for the method with the lowest error.
Dataset | Adam | BFGS | Genetic | BHO
APPENDICITIS | 16.50% | 18.00% | 24.40% | 22.07%
ALCOHOL | 57.78% | 41.50% | 39.57% | 16.89%
AUSTRALIAN | 35.65% | 38.13% | 32.21% | 29.20%
BALANCE | 12.27% | 8.64% | 8.97% | 8.06%
CLEVELAND | 67.55% | 77.55% | 51.60% | 46.36%
CIRCULAR | 19.95% | 6.08% | 5.99% | 4.23%
DERMATOLOGY | 26.14% | 52.92% | 30.58% | 9.89%
ECOLI | 64.43% | 69.52% | 54.67% | 48.72%
GLASS | 61.38% | 54.67% | 52.86% | 53.43%
HABERMAN | 29.00% | 29.34% | 28.66% | 28.82%
HAYES-ROTH | 59.70% | 37.33% | 56.18% | 35.49%
HEART | 38.53% | 39.44% | 28.34% | 20.96%
HEARTATTACK | 45.55% | 46.67% | 29.03% | 21.06%
HOUSEVOTES | 7.48% | 7.13% | 6.62% | 6.45%
IONOSPHERE | 16.64% | 15.29% | 15.14% | 15.54%
LIVERDISORDER | 41.53% | 42.59% | 31.11% | 31.87%
LYMOGRAPHY | 39.79% | 35.43% | 28.42% | 27.24%
MAMMOGRAPHIC | 46.25% | 17.24% | 19.88% | 17.24%
PARKINSONS | 24.06% | 27.58% | 18.05% | 14.72%
PIMA | 34.85% | 35.59% | 32.19% | 29.40%
POPFAILURES | 5.18% | 5.24% | 5.94% | 7.40%
REGIONS2 | 29.85% | 36.28% | 29.39% | 26.83%
SAHEART | 34.04% | 37.48% | 34.86% | 33.88%
SEGMENT | 49.75% | 68.97% | 57.72% | 27.26%
SPIRAL | 47.67% | 47.99% | 48.66% | 43.75%
STATHEART | 44.04% | 39.65% | 27.25% | 20.71%
STUDENT | 5.13% | 7.14% | 5.61% | 6.66%
TRANSFUSION | 25.68% | 25.84% | 24.87% | 24.56%
WDBC | 35.35% | 29.91% | 8.56% | 6.43%
WINE | 29.40% | 59.71% | 19.20% | 15.86%
Z_F_S | 47.81% | 39.37% | 10.73% | 10.88%
ZO_NF_S | 47.43% | 43.04% | 21.54% | 9.76%
ZONF_S | 11.99% | 15.62% | 4.36% | 2.68%
ZOO | 14.13% | 10.70% | 9.50% | 7.27%
AVERAGE | 34.48% | 34.34% | 26.55% | 21.52%
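For completeness, a small sketch (not the authors' code) of how the entries of Table 12 can be tallied: picking the lowest-error method per dataset (the entries shown in bold in the original table) and averaging each method's error column. Only a few of the 34 datasets are included here for illustration; the full table yields the AVERAGE row above.

```python
# Minimal sketch (not the authors' code): tallying Table 12 by picking the
# lowest-error method per dataset and averaging each method's error column.
# Only a few of the 34 datasets are listed, purely for illustration.
errors = {  # dataset: (Adam, BFGS, Genetic, BHO), classification error in %
    "APPENDICITIS": (16.50, 18.00, 24.40, 22.07),
    "ALCOHOL":      (57.78, 41.50, 39.57, 16.89),
    "WINE":         (29.40, 59.71, 19.20, 15.86),
    "ZOO":          (14.13, 10.70,  9.50,  7.27),
}
methods = ("Adam", "BFGS", "Genetic", "BHO")

for dataset, row in errors.items():
    winner = methods[row.index(min(row))]  # method that would appear in bold
    print(f"{dataset}: lowest error {min(row):.2f}% ({winner})")

for j, method in enumerate(methods):       # column averages over this subset only
    avg = sum(row[j] for row in errors.values()) / len(errors)
    print(f"{method}: average error {avg:.2f}%")
```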
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
