Article

Phymastichus–Hypothenemus Algorithm for Minimizing and Determining the Number of Pinned Nodes in Pinning Control of Complex Networks

by Jorge A. Lizarraga 1, Alberto J. Pita 1, Javier Ruiz-Leon 2, Alma Y. Alanis 3, Luis F. Luque-Vega 4,*, Rocío Carrasco-Navarro 5, Carlos Lara-Álvarez 6, Yehoshua Aguilar-Molina 7 and Héctor A. Guerrero-Osuna 8

1 Departamento de Investigación, Centro de Enseñanza Técnica Industrial, Guadalajara 44638, Jalisco, Mexico
2 CINVESTAV, Unidad Guadalajara, Guadalajara 45017, Jalisco, Mexico
3 CUCEI, Universidad de Guadalajara, Guadalajara 44430, Jalisco, Mexico
4 Department of Technological and Industrial Processes, ITESO, Tlaquepaque 45604, Jalisco, Mexico
5 Department of Mathematics and Physics, ITESO, Tlaquepaque 45604, Jalisco, Mexico
6 Centro de Investigación en Matemáticas, Unidad Zacatecas, Zacatecas 98160, Zacatecas, Mexico
7 Departamento de Ciencias Computacionales e Ingenierías, Centro Universitario de los Valles de la Universidad de Guadalajara, Ameca 46600, Jalisco, Mexico
8 Posgrado en Ingeniería y Tecnología Aplicada, Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Zacatecas 98000, Zacatecas, Mexico
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(10), 637; https://doi.org/10.3390/a18100637
Submission received: 7 September 2025 / Revised: 28 September 2025 / Accepted: 30 September 2025 / Published: 9 October 2025
(This article belongs to the Special Issue Bio-Inspired Algorithms: 2nd Edition)

Abstract

Pinning control is a key strategy for stabilizing complex networks through a limited set of nodes. However, determining the optimal number and location of pinned nodes under dynamic and structural constraints remains a computational challenge. This work proposes an improved version of the Phymastichus–Hypothenemus Algorithm—Minimized and Determinated (PHA-MD) to solve multi-constraint, hybrid optimization problems in pinning control without requiring a predefined number of control nodes. Inspired by the parasitic behavior of Phymastichus coffea on Hypothenemus hampei, the algorithm models each agent as a parasitoid capable of propagating influence across a network, inheriting node importance and dynamically expanding search dimensions through its “offspring.” Unlike its original formulation, PHA-MD integrates variable-length encoding and V-stability assessment to autonomously identify a minimal yet effective pinning set. The method was evaluated on benchmark network topologies and compared against state-of-the-art heuristic algorithms. The results show that PHA-MD consistently achieves asymptotic stability using fewer pinned nodes while maintaining energy efficiency and convergence robustness. These findings highlight the potential of biologically inspired, dimension-adaptive algorithms in solving high-dimensional, combinatorial control problems in complex dynamical systems.

1. Introduction

Recently, research on the analysis and control of complex networks has advanced rapidly [1]. The presence of complex networks in nature and society has sparked significant interest in this field [2], as they are integral to various systems in our daily lives [3]. For instance, the organization and function of a cell result from complex interactions between genes, proteins, and other molecules [4]; the brain comprises a vast network of interconnected neurons [5]; social systems can be represented by graphs illustrating interactions among individuals [6]; ecosystems consist of species whose interdependencies can be mapped into food webs [7]; and large networked infrastructures, such as power grids and transportation networks, can be represented by graphs describing interactions among components [8,9].
In complex networks, a key challenge is ensuring control and stability while minimizing resource consumption. This requires selecting a subset of nodes, known as pinning nodes, to control the entire network. Pinning control achieves global stabilization by intervening in these selected nodes. The optimal selection of these nodes is crucial, with various centrality metrics, like Degree Centrality [10], Betweenness Centrality [11], and Closeness Centrality [12], proposed in the literature. These metrics offer different advantages depending on the network’s structure and dynamics [3,13,14].
Recent advances in complex network control have led to more sophisticated algorithms designed to handle the growing complexity of modern networks. Techniques such as adaptive control [15], robust control [16], and sliding mode control [17] have been developed to improve stability and robustness under various uncertainties. However, these methods often struggle with the optimal selection of pinning nodes, especially in large networks with vast combinations of nodes.
Existing methods are limited by their reliance on fixed-dimension approaches and their difficulty adapting to dynamic network environments. These algorithms often require prior knowledge of the network’s structure, which may not always be available, and can become computationally expensive as the network grows, making them impractical for real-time use.
The PHA-MD algorithm proposed in this work addresses these limitations by offering a novel approach to pinning control. Inspired by the parasitic relationship between the Phymastichus coffea wasp and the Hypothenemus hampei coffee borer, PHA-MD prioritizes the network’s asymptotic stability while optimizing pinning node selection. Unlike its predecessor [18,19], PHA-MD autonomously determines the number of pinning nodes, avoiding the issues associated with fixed-dimension algorithms. This innovation allows PHA-MD to dynamically adapt to network changes, ensuring stability even in complex scenarios.
PHA-MD also leverages the V-stability tool [20] to ensure stability during optimization. Comparative simulations demonstrate PHA-MD’s superior performance compared to several heuristic optimization algorithms, particularly in achieving network stability with fewer pinned nodes and efficient energy use.
The performance of PHA-MD is compared with other heuristic optimization algorithms, such as Ant Lion Optimizer (ALO) [21], Teaching–Learning-Based Optimization (TLBO) [22], Grey Wolf Optimizer (GWO) [23], Animal Migration Optimization (AMO) [24], Particle Swarm Optimization (PSO) [25], Artificial Bee Colony (ABC) [26], Gaining–Sharing Knowledge Based Algorithm (GSK) [27], Biogeography-Based Optimization (BBO) [28], Whale Optimization Algorithm (WOA) [29], Ant Colony Optimization (ACO) [30], Osprey Optimization Algorithm (OOA) [31], Mayfly Algorithm (MA) [32], Archimedes Optimization Algorithm (AOA) [33], Coronavirus Herd Immunity Optimizer (CHIO) [34], and Driving Training-Based Optimization (DTBO) [35].
These algorithms were selected for several reasons. They represent a wide range of optimization techniques, from bio-inspired methods like ALO [21] and PSO [25] to recent innovations such as CHIO [34] and OOA [31]. This diversity allows for a comprehensive comparison across different strategies. Many of these algorithms have shown high performance in solving complex optimization problems, making them robust benchmarks for evaluating PHA-MD. Their popularity in the research community ensures that the comparison results are relevant and easily interpretable.
The selected algorithms also cover different optimization mechanisms, including swarm intelligence (PSO, WOA [29]), natural evolution (Genetic Algorithm (GA) [36]), intensive local search (GWO [23]), and reinforcement learning (Q-Learning based algorithms [37]). Comparing PHA-MD with these approaches helps establish its competitiveness with the latest developments in heuristic optimization.
In summary, the selection of these algorithms provides a rigorous benchmarking framework, allowing for a thorough assessment of PHA-MD’s capabilities, particularly in solving the permutation problem in node selection and ensuring the network’s asymptotic stability. Additionally, PHA-MD’s ability to handle multi-constraint optimization problems makes it versatile and effective in various applications.
The rest of this paper is organized as follows. Section 2 provides an overview of complex network pinning control, detailing the V-stability tool and other essential mathematical preliminaries. This section also defines the optimization problem, focusing on minimizing the energy consumed by control actions at the pinning nodes and the number of pinning nodes required for network stability. In Section 3, the biological foundation of the PHA-MD algorithm is described, highlighting its unique characteristics and mechanisms. Section 4 presents a comprehensive simulation study of the proposed PHA-MD algorithm across various complex network topologies, including a detailed comparison with other heuristic optimization algorithms. Finally, Section 5 discusses the conclusions and potential future directions of the research.

2. Problem Statement

The optimization problem in complex networks is formulated to achieve two primary objectives:
(i)
Minimizing the energy consumed by the control actions at the pinning nodes: this objective focuses on reducing the overall energy required to stabilize the network through targeted interventions at selected nodes.
(ii)
Minimizing the number of pinning nodes required to achieve network stability: this objective aims to reduce the number of nodes that need to be pinned to maintain the stability of the entire network, thereby optimizing resource usage.
These objectives define the optimization problem as follows. Let $M \equiv Q(\Theta, G, K)$ denote the closed-loop matrix built from the node passivity degrees $\Theta$, the coupling matrix $G$, and the diagonal gain matrix $K$; let $\lambda\{\cdot\}$ denote the spectrum (set of eigenvalues); and let $\operatorname{Re}\{\cdot\}$ denote the real-part operator applied to those eigenvalues. Then
$$\operatorname{minimize}\ \{ f(\mathcal{U}),\ \operatorname{card}(\mathcal{U}) \} \quad \text{subject to} \quad \mathcal{U} \subseteq \{1, \dots, N_n\}, \quad \max \operatorname{Re}\{\lambda\{M\}\} < 0,$$
where $\mathcal{U}$ is the index set of pinning nodes and $\operatorname{card}(\mathcal{U})$ is its cardinality. The control law for node $n_i$ is $u_i = -k_i x_i$, with $x_i = (x_{i,1}, x_{i,2}, \dots, x_{i,N_s})^T \in \mathbb{R}^{N_s}$ and $k_i \geq 0$ the $i$-th diagonal entry of $K$, defined as
$$K_{i,i} = \begin{cases} k_i, & i \in \mathcal{U}, \\ 0, & i \notin \mathcal{U}. \end{cases}$$
The inequality $\max \operatorname{Re}\{\lambda\{M\}\} < 0$ expresses exponential stability.
Specifically, the optimization problem involves the construction of the control gain matrix $K$: the goal is to find the optimal set of pinning nodes that achieves efficient network control. The matrix $K$ determines the effectiveness of the pinning control strategy, directly impacting the stability and efficiency of the network, so an optimal configuration of $K$ is essential for achieving network stability with minimal resource consumption. The following describes the relevance of $K$ in the pinning control of complex networks and its impact on network stability.
Consider the following Network (2) of $N_n$ nodes with linear diffusive couplings and an $N_s$-dimensional dynamical system [38]:
$$\dot{x}_i = f_i(x_i) + \sum_{j=1,\, j \neq i}^{N_n} c_{ij} a_{ij} \Gamma \left( x_j - x_i \right),$$
where $f_i: \mathbb{R}^{N_s} \to \mathbb{R}^{N_s}$ denotes the self-dynamics of node $n_i$, i.e., the intrinsic vector field that governs its evolution in isolation (all couplings and controls set to zero). The constants $c_{ij}$ are the coupling strengths between $n_i$ and $n_j$, $\Gamma \in \mathbb{R}^{N_s \times N_s}$ specifies how state components are coupled for each connected pair $(x_j, x_i)$, and the connection matrix $A = [a_{ij}] \in \mathbb{R}^{N_n \times N_n}$ encodes the network topology: if there is a connection between $n_i$ and $n_j$ for $i \neq j$, then $a_{ij} = a_{ji} = 1$; otherwise $a_{ij} = a_{ji} = 0$ for $i \neq j$. The diagonal elements are defined by $a_{ii} = -\sum_{j=1,\, j \neq i}^{N_n} a_{ij} = -d_i$, where $d_i$ is the degree of node $n_i$. A small subset of nodes in the network is subjected to local feedback as part of the pinning control, and these are called pinning nodes [39]. Assuming the diffusive condition
$$c_{ii} a_{ii} + \sum_{j=1,\, j \neq i}^{N_n} c_{ij} a_{ij} = 0, \quad i = 1, 2, \dots, N_n,$$
Network (2) can be rewritten in a compact controlled form as
$$\dot{x}_i = f_i(x_i) + \sum_{j=1}^{N_n} c_{ij} a_{ij} \Gamma x_j + B_i u_i,$$
where $B_i \in \mathbb{R}^{N_s \times N_s}$ is the input (actuation) matrix that maps the control vector $u_i \in \mathbb{R}^{N_s}$ into the state derivatives of node $n_i$. In the context of pinning control, $B_i$ specifies which state components are directly actuated (full or partial channels). Typical choices are $B_i = I_{N_s}$ (full-state actuation) or $B_i = \operatorname{diag}(b_i)$ with $b_i \in \{0,1\}^{N_s}$ (selected coordinates), where $I_{N_s}$ denotes the $N_s \times N_s$ identity matrix (ones on the diagonal and zeros elsewhere). Unless otherwise stated, we adopt $B_i = I_{N_s}$ in the simulations.
The control input is defined nodewise as
$$u_i(x_i) = \begin{cases} -k_i x_i, & i \in \mathcal{U}, \\ 0, & i \notin \mathcal{U}, \end{cases}$$
where $\mathcal{U}$ is the set of pinning nodes with $1 \leq \operatorname{card}(\mathcal{U}) \leq N_n$ [40]. Hence, the self-dynamics of the controlled nodes becomes
$$\dot{x}_i = f_i(x_i) - k_i B_i x_i, \quad i \in \mathcal{U}.$$
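For concreteness, the controlled dynamics (4)-(6) can be integrated numerically as in the following minimal Python sketch. Full-state actuation $B_i = I$ is assumed, and the node model, coupling strength, and initial conditions are illustrative placeholders rather than values used in this paper.

```python
import numpy as np

def simulate_pinned_network(f, G, Gamma, k, x0, dt=1e-3, steps=5000):
    """Forward-Euler integration of the controlled network:
    x_dot_i = f_i(x_i) + sum_j c_ij a_ij Gamma x_j - k_i B_i x_i,
    with B_i = I (full-state actuation) and G holding the products c_ij * a_ij."""
    Nn, Ns = x0.shape
    x = x0.copy()
    for _ in range(steps):
        coupling = G @ x @ Gamma.T          # row i equals Gamma applied to sum_j G[i, j] * x_j
        control = -(k[:, None] * x)         # -k_i * x_i for pinned nodes (k_i = 0 otherwise)
        drift = np.stack([f(x[i]) for i in range(Nn)])
        x = x + dt * (drift + coupling + control)
    return x

# Illustrative use (placeholder node model and data, not taken from the paper).
Nn, Ns = 4, 2
f = lambda xi: np.array([xi[1], -0.5 * xi[0] + xi[1]])     # a toy, mildly unstable node
Gamma = np.eye(Ns)
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
              [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
A -= np.diag(A.sum(axis=1))                                 # a_ii = -d_i
G = 0.8 * A                                                  # uniform coupling strength
k = np.array([5.0, 0.0, 5.0, 0.0])                           # pin the first and third nodes
x_final = simulate_pinned_network(f, G, Gamma, k, 0.1 * np.ones((Nn, Ns)))
```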
To demonstrate network stability and to calculate the lower bound $N_{\lambda+}$ for the number of pinned nodes, consider a continuously differentiable Lyapunov function $V(x_i): D \subset \mathbb{R}^{N_s} \to \mathbb{R}_+$, satisfying $V(\bar{x}) = 0$ with $\bar{x} \in D$, such that for the self-dynamics (6) there exist scalars $\theta_i$ and $k_i$ guaranteeing
$$\frac{\partial V(x_i)}{\partial x_i} \left( f_i(x_i) - k_i B x_i + \theta_i \Gamma x_i + k_i \Gamma x_i \right) < 0 \quad \forall\, x_i \in D_i \subset D,\ x_i \neq 0,$$
where
$$D_i = \{ x_i : \| x_i - \bar{x}_i \| < \alpha \}, \quad \alpha > 0, \quad D = \bigcup_{i=1}^{N_n} D_i,$$
and $\theta_i$ is the passivity degree of $f_i(x_i)$. Then, to determine whether Network (4) is synchronized at the equilibrium point $\bar{X} = (\bar{x}_1^T, \dots, \bar{x}_{N_n}^T)^T$, such that
$$x_1(t) = x_2(t) = \dots = x_{N_n}(t) \to \bar{x} \quad \text{as } t \to \infty,$$
let us consider
$$V_N(X) = \sum_{i=1}^{N_n} \frac{1}{2} x_i^T P x_i, \quad P = P^T > 0,$$
for the controlled Network (4); then
$$\dot{V}_N(X) = \sum_{i=1}^{N_n} x_i^T P \left( f_i(x_i) + \sum_{j=1}^{N_n} c_{ij} a_{ij} \Gamma x_j - k_i B x_i \right),$$
$$\dot{V}_N(X) < \sum_{i=1}^{N_n} x_i^T P \left( \theta_i \Gamma x_i + \sum_{j=1}^{N_n} c_{ij} a_{ij} \Gamma x_j + k_i \Gamma x_i \right),$$
or, using the Kronecker product,
$$\dot{V}_N(X) < X^T \left( \left( \Theta + G - K \right) \otimes P\Gamma \right) X,$$
where $c_{ij} a_{ij}$ are the entries of $G \in \mathbb{R}^{N_n \times N_n}$, $\Theta = \operatorname{diag}(\theta_1, \theta_2, \dots, \theta_{N_n})$, $K = \operatorname{diag}(k_1, k_2, \dots, k_{N_n})$, and $P\Gamma \geq 0$. Then, according to [20], Network (2) is locally asymptotically stable around its equilibrium point if the closed-loop characteristic matrix
$$Q = \Theta + G - K$$
is negative definite; the number of pinned nodes then cannot be less than the number of positive eigenvalues $N_{\lambda+}$ obtained with $K = [0]_{N_n \times N_n}$, so that $\operatorname{card}(\mathcal{U}) \geq N_{\lambda+}$. Thus, satisfying the above conditions, Network (2) is V-stable [20], demonstrating the importance of the $K$ matrix to ensure stability in pinning control.
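The stability constraint and the bound $N_{\lambda+}$ can be verified numerically. The sketch below (NumPy assumed) evaluates the largest real part of the eigenvalues of $Q$ and counts the positive eigenvalues obtained with $K = 0$; all numerical values are placeholders.

```python
import numpy as np

def v_stability_check(theta, G, k):
    """Stability test used in the constraint of (1): Q = Theta + G - K must have all
    eigenvalues with strictly negative real part (max Re{lambda{Q}} < 0)."""
    Q = np.diag(theta) + G - np.diag(k)
    max_re = float(np.max(np.real(np.linalg.eigvals(Q))))
    return max_re < 0.0, max_re

def lower_bound_pinned_nodes(theta, G):
    """N_lambda+ : number of positive eigenvalues of Theta + G (i.e. with K = 0),
    which lower-bounds the number of pinned nodes."""
    eigs = np.real(np.linalg.eigvals(np.diag(theta) + G))
    return int(np.sum(eigs > 0))

# Illustrative 4-node example (placeholder values, not a network from the paper).
theta = np.array([0.5, -1.0, 0.2, -0.8])               # passivity degrees
G = np.array([[-2., 1., 1., 0.], [1., -2., 0., 1.],
              [1., 0., -2., 1.], [0., 1., 1., -2.]])    # entries c_ij * a_ij
k = np.array([3.0, 0.0, 2.5, 0.0])                      # gains: first and third nodes pinned
print(lower_bound_pinned_nodes(theta, G), v_stability_check(theta, G, k))
```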
In (11), the importance of the K matrix for ensuring stability in the pinning control is evident. However, the energy consumed by the pinning nodes during the control process is a significant factor that impacts the overall efficiency of the network. Therefore, optimizing the energy consumption is essential to ensure that the network operates effectively without unnecessary expenditure of resources.
The objective function f ( U ) in (1) focuses on minimizing the energy consumed by the pinning nodes at a specific time t 0 . This approach not only helps in achieving the desired control with minimal energy, but also ensures that the network remains stable and efficient. By concentrating on energy optimization, the control strategy can be made more sustainable and cost-effective, addressing both performance and resource utilization concerns in complex network systems.
The selected objective function for this problem is the energy consumed by the pinning nodes at time $t_0$, $E_{t_0}(\mathcal{U})$, defined as
$$E_{t_0}(\mathcal{U}) = \frac{1}{2} \sum_{i \in \mathcal{U}} k_i \left\| x_i(t_0) \right\|^2,$$
where $\mathcal{U}$ is the set of pinned nodes (only these contribute to the sum; equivalently, summing over all $i$ with $K_{ii} = 0$ for $i \notin \mathcal{U}$ yields the same value).
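In code, this objective reduces to a weighted sum of squared pinned-node states; the gains and initial states below are placeholders.

```python
import numpy as np

def pinning_energy(U, k, x_t0):
    """E_t0(U) = 1/2 * sum_{i in U} k_i * ||x_i(t0)||^2 (only pinned nodes contribute)."""
    return 0.5 * sum(k[i] * float(np.dot(x_t0[i], x_t0[i])) for i in U)

# Placeholder values: 4 nodes with 2 states each, pinning the first and third nodes.
k = np.array([5.0, 0.0, 5.0, 0.0])
x_t0 = np.array([[0.4, -0.1], [0.2, 0.3], [-0.5, 0.2], [0.1, 0.1]])
E = pinning_energy({0, 2}, k, x_t0)
```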
Remark 1.
Considering V-stability and that energy is consumed only by the pinning nodes, the maximum energy occurs at time $t_0$ [20]. This allows the optimization to jointly minimize $E_{t_0}(\mathcal{U})$ and $\operatorname{card}(\mathcal{U})$, ensuring efficient and stable control of the complex network.

3. PHA-MD Algorithm

The PHA-MD algorithm iteratively constructs the control gain matrix K, leveraging the adaptive nature of its agents. Each agent presents a unique and variable candidate solution in each iteration, inheriting information and discarding non-optimal nodes to improve the solution in successive generations. This approach enables the optimization of the pinning node set configuration U , achieving a network controlled efficiently with the minimum number of pinning nodes and the lowest possible energy consumption. In summary, the PHA-MD algorithm excels in simultaneously optimizing the energy consumed and the cardinality of the set of pinning nodes, optimizing the control gain matrix K for complex networks.

3.1. Biological Basis

The proposed PHA-MD algorithm is inspired by the parasitic symbiotic relationship between the parasitoid wasp (Phymastichus coffea, LaSalle, Hymenoptera: Eulophidae, Figure 1a) and the coffee borer (Hypothenemus hampei, Ferrari, Coleoptera: Curculionidae: Scolytinae, Figure 1b). This symbiotic relationship is used as a biological control mechanism because the borer is considered the most harmful pest affecting the coffee crop due to its attack on the berry, producing weight loss, depreciation of the grain, and loss of quality due to the presence of impurities in the infected beans [41].
The life cycle of the wasp begins when P. coffea parasitizes H. hampei, depositing up to two eggs per host, typically one male and one female. From this point, the incubation process begins. Upon hatching, the larvae feed on the abdominal tissues of the host, the coffee borer, until they complete their metamorphosis. Once this stage is finished, the larvae emerge from the host as adult wasps [42].
The $i$-th symbiotic relationship $\phi_i(t)$ between the $i$-th P. coffea and H. hampei agents at iteration $t$ is represented as
$$\phi_i(t) = \{ F_i(t), \Lambda_i(t), p_i(t), h_i(t) \},$$
where $p_i \in \mathbb{N}^{j}$, $j \in [1, N_n]$, and $h_i \in \mathbb{R}^{N_n}$ are the $i$-th P. coffea and H. hampei agents, respectively, whose populations are defined as
$$P(t) = \left[ p_1(t), \dots, p_{N_a}(t) \right], \quad H(t) = \left[ h_1(t), \dots, h_{N_a}(t) \right],$$
where $N_a$ is the number of agents for both populations. $F_i(t)$ is a penalty function, defined as
$$F_i(t) = E_{t_0}(\mathcal{U}) + \alpha_1 c_i^{(1)}(t) + \alpha_2 \Lambda(h_i(t)),$$
where $\alpha_1$ and $\alpha_2$ are penalty parameters, with the difference between them defining the relative importance of the second and third terms in (15); $c_i^{(1)} = \operatorname{card}(p_i)$ is the cardinality of the agent $p_i(t)$; and $\Lambda(h_i(t))$ is the maximum between zero and the largest real eigenvalue of matrix (11), defined as follows:
$$\Lambda(h_i(t)) = \left[ \max \operatorname{Re}\{ \lambda\{ Q(\Theta, G, K_i) \} \} \right]^{+},$$
where the elements $k_{i,j}$ of the main diagonal of the $i$-th control gain matrix $K_i$ are determined by the relationship between the agents $p_i$ and $h_i$, such that
$$k_{i,j} = \begin{cases} h_{i,j}, & j \in p_i, \\ 0, & j \notin p_i. \end{cases}$$
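The mapping from an agent pair $(p_i, h_i)$ to the gains $K_i$ and the penalty of Equation (15) can be sketched as follows; the weights $\alpha_1$ and $\alpha_2$ and all data are illustrative assumptions.

```python
import numpy as np

def gains_from_agents(p_i, h_i, Nn):
    """Build the diagonal control gains from one agent pair: node indices selected by
    the P. coffea agent p_i (0-based here) take their gain values from the H. hampei
    agent h_i; all other entries stay at zero."""
    k = np.zeros(Nn)
    for j in p_i:
        k[j] = h_i[j]
    return k

def penalty(p_i, h_i, theta, G, x_t0, alpha1=1.0, alpha2=100.0):
    """Penalty of Equation (15): pinning energy plus alpha1 * cardinality plus
    alpha2 * Lambda, where Lambda is the positive part of the largest real
    eigenvalue of the closed-loop matrix Q = Theta + G - K_i.
    The weights alpha1 and alpha2 are illustrative placeholders."""
    Nn = len(theta)
    k = gains_from_agents(p_i, h_i, Nn)
    Q = np.diag(theta) + G - np.diag(k)
    Lam = max(0.0, float(np.max(np.real(np.linalg.eigvals(Q)))))
    E = 0.5 * sum(k[j] * float(np.dot(x_t0[j], x_t0[j])) for j in p_i)
    return E + alpha1 * len(p_i) + alpha2 * Lam

# Placeholder data: 4 nodes, 2 states per node, an agent proposing the first and third nodes.
theta = np.array([0.5, -1.0, 0.2, -0.8])
G = np.array([[-2., 1., 1., 0.], [1., -2., 0., 1.],
              [1., 0., -2., 1.], [0., 1., 1., -2.]])
x_t0 = np.array([[0.4, -0.1], [0.2, 0.3], [-0.5, 0.2], [0.1, 0.1]])
F_i = penalty([0, 2], np.array([4.0, 1.0, 3.5, 2.0]), theta, G, x_t0)
```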
The symbiotic relationship $\phi$ is classified into three possible favorable events for the P. coffea species (see Figure 2), which are as follows:
$$\phi_i^{(1)}(t) = \{ F_i^{(1)}(t), \Lambda_i^{(1)}(t), p_i^{(1)}(t), h_i^{(1)}(t) \},$$
$$\phi^{(2)}(t) = \{ F^{(2)}(t), \Lambda^{(2)}(t), p^{(2)}(t), h^{(2)}(t) \},$$
$$\phi^{(3)}(t) = \{ F^{(3)}(t), \Lambda^{(3)}(t), p^{(3)}(t), h^{(3)}(t) \},$$
where $\phi_i^{(1)}(t)$ is the favorable parasitization event, in which the wasp parasitizes the beetle; $\phi^{(2)}(t)$ implies the birth of only a female; and $\phi^{(3)}(t)$ corresponds to the birth of a female and a male. Note that types 2 and 3 affect the entire population, whereas type 1 is particular to each agent, which is reflected in the subscript $i$ for this type of symbiotic relationship. A single male fertilizes numerous females; however, it requires a female to bore through the host abdomen in order to emerge, and therefore the birth of a female and a male is the ideal scenario for the P. coffea species [42].
The concept of Memory Inheritance justifies the relevance of these events. This attribute implies that each $i$-th wasp can pass down two types of memories, $m_i \in \mathbb{N}^{N_n}$ and $w_i \in \mathbb{R}^{N_n}$, to its future generations. Since the coffee tree can be modeled as a graph whose berries are equivalent to nodes, $m$ and $w$ are, respectively, "how many nodes to select" and "which nodes to select".
The example in Figure 3 shows the decision of the $i$-th agent $p_i(t)$ to visit four nodes, selecting $n_{18}$, $n_{15}$, $n_{10}$, and $n_8$. The elements $m_{i,j}$ and $w_{i,j}$ of each memory are selected by the roulette method. Note that, even when $m_{i,j}(t) \neq j$ or $w_{i,j}(t) \neq i$, a selected element returns the index it represents and not its occurrence value. The process of modifying the frequency of occurrence is explained in the following sections.
Another attribute of biological inspiration is the behavior of H. hampei. In nature, the beetle drills into coffee berries at or near the apex, which is the softest area of the coffee bean [43] (Figure 4). Just as gravity and the shape, relief, or orientation of the berry influence the beetle's movement, the borer functions guide H. hampei towards the optimal regions identified by the group of beetles, without direct interference. These functions reduce the search space in favor of the symbiotic relationship $\phi^{(2)}$, because the beetles that manage to bore into the fruit will be able to feed, thereby increasing the probability that at least one P. coffea individual will be born. The borer functions are defined as
$$\beta^{(1)} = h^{(2)} - \zeta \left| h^{(2)} \right| + \left( \beta^{(1)}(0) - h^{(2)} + \zeta \left| h^{(2)} \right| \right) e^{-\gamma \tau / N_\tau},$$
$$\beta^{(2)} = h^{(2)} + \zeta \left| h^{(2)} \right| - \left( h^{(2)} - \beta^{(2)}(0) + \zeta \left| h^{(2)} \right| \right) e^{-\gamma \tau / N_\tau},$$
where $\beta^{(1)}$ and $\beta^{(2)}$ are the lower and upper borer functions, respectively, with $\beta_j^{(1)} \leq h_{i,j} \leq \beta_j^{(2)}\ \forall j \in [1, N_n]$; $|\cdot|$ denotes the absolute value; $\zeta$ is the compression parameter, which prevents the limits $\beta^{(2)}$ and $\beta^{(1)}$ from converging to $h^{(2)}$; $N_\tau$ is the maximum iteration number per epoch; $\gamma$ is an iteration coefficient; and $\beta^{(1)}(0)$ and $\beta^{(2)}(0)$ are, respectively, the initial lower and upper bounds. It is important to mention that the borer functions limit the values of each dimension $j$; however, they can be specific to each $i$-th H. hampei agent. It should be understood that, with the notation of (21) and (22), the entire population $H$ is delimited equally.
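Under the reconstruction of Equations (21) and (22) given above, the borer bounds can be computed as in the following sketch; the iteration coefficient $\gamma$ and all numerical values are assumed for illustration.

```python
import numpy as np

def borer_bounds(h2, beta1_0, beta2_0, zeta, tau, N_tau, gamma=5.0):
    """Lower/upper borer functions (Equations (21)-(22)): the bounds start at the
    initial box [beta1_0, beta2_0] and decay exponentially, at rate gamma*tau/N_tau,
    towards the band h2 -/+ zeta*|h2| around the global-best gain vector h2."""
    decay = np.exp(-gamma * tau / N_tau)
    lower_target = h2 - zeta * np.abs(h2)
    upper_target = h2 + zeta * np.abs(h2)
    beta1 = lower_target + (beta1_0 - lower_target) * decay
    beta2 = upper_target - (upper_target - beta2_0) * decay
    return beta1, beta2

# Placeholder example: bounds contracting around a best gain vector within one epoch.
h2 = np.array([4.0, 0.0, 3.0, 0.0])
b1, b2 = borer_bounds(h2, beta1_0=np.full(4, -10.0), beta2_0=np.full(4, 10.0),
                      zeta=0.1, tau=5, N_tau=10)
```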

3.2. Attribute Updating

This section describes the creation of new generations for each type of agent, based on updating the symbiotic relationships and their subtypes and on optimizing the inheritance memories. The subtypes of symbiotic relationships are updated as follows:
$$\phi_i^{(1)}(t) = \begin{cases} \phi_i(t), & \text{if } t = 1 \lor \tau = 1, \\ \phi_i(t), & \text{if } t > 1 \land \tau > 1 \land F_i(t) < F_i^{(1)}(t-1), \\ \phi_i^{(1)}(t-1), & \text{else}, \end{cases}$$
$$\phi^{(2)}(t) = \begin{cases} \phi_{\min}^{(1)}(t), & \text{if } t = 1 \lor \tau = 1, \\ \phi_{\min}^{(1)}(t), & \text{if } t > 1 \land \tau > 1 \land F_{\min}^{(1)}(t) < F^{(2)}(t-1), \\ \phi^{(2)}(t-1), & \text{else}, \end{cases}$$
$$\phi^{(3)}(t) = \begin{cases} \phi^{(2)}(t), & \text{if } \tau = N_\tau \land \mathcal{B}(t) = \varnothing \land h^{(2)}(t) = 0, \\ \phi^{(2)}(t), & \text{if } \tau = N_\tau \land \mathcal{B}(t) \neq \varnothing \land h^{(2)}(t) = 0 \land c^{(2)}(t) < c^{(3)}(t-1), \\ \phi^{(2)}(t), & \text{if } \tau = N_\tau \land \mathcal{B}(t) \neq \varnothing \land h^{(2)}(t) = 0 \land c^{(2)}(t) = c^{(3)}(t-1) \land F^{(2)}(t) < F^{(3)}(t-1), \\ \phi^{(3)}(t-1), & \text{else}, \end{cases}$$
where $\tau$ is the epoch trigger, which increases by one with each iteration $t$. When $\tau = N_\tau$, a new epoch starts, whose first generation has its initial conditions reset, including the epoch trigger (Figure 4), and $F_{\min}^{(1)}(t) = \min_{i \in [1, N_a]} F_i^{(1)}(t)$.
Figure 5 details how PHA-MD updates the three symbiotic subtypes at each iteration and at the end of every epoch. First, each agent evaluates its candidate and updates the per-agent best $\phi_i^{(1)}(t)$ whenever its current cost $F_i(t)$ improves (Equation (23)). Next, the algorithm promotes the population-wide best $\phi^{(2)}(t)$ using the minimum among $\{ F_i^{(1)}(t) \}$ (Equation (24)). When the epoch boundary is reached ($\tau = N_\tau$), the blocking stage is invoked; based on stability ($h^{(2)} = 0$), cardinality, and cost, $\phi^{(2)}(t)$ may be adopted as the epoch-level best $\phi^{(3)}(t)$ (Equation (25)). If the stated conditions are not met, the corresponding records are preserved ("else" branches in Equations (23)-(25)). While the flow focuses on $\phi$-updates, it is coordinated with the blocking-set dynamics $\mathcal{B}(t)$ (Equations (26) and (27)) that are executed at epoch end.
In the PHA-MD algorithm, a new feature has been introduced compared to its previous version: a node-blocking stage. This stage involves the identification and blocking of nodes, with the set of blocked nodes denoted as $\mathcal{B}(t)$. During each epoch, if and only if $h^{(1)}(t) = 0$, a node will be selected for blocking. This action increases the cardinality of the set $\mathcal{B}(t)$ by one. The set $\mathcal{B}(t)$ captures the nodes that are currently blocked at time $t$, reflecting the evolving state of the algorithm as it progresses through each epoch. The set of blocked nodes is defined as
$$\mathcal{B}(t) = \begin{cases} \varnothing, & \text{if } t < N_\tau, \\ \{ b(t) \}, & \text{if } \tau = N_\tau \land \mathcal{B}(t-1) = \varnothing \land h^{(2)}(t) = 0, \\ \mathcal{B}(t-1) \cup \{ b(t) \}, & \text{if } \tau = N_\tau \land \mathcal{B}(t-1) \neq \varnothing \land h^{(2)}(t) = 0 \land N_n - c(\mathcal{B}) > N_{\lambda+}, \\ \mathcal{B}_{\max}(t), & \text{if } \tau = N_\tau \land \mathcal{B}(t-1) \neq \varnothing \land h^{(2)}(t) \neq 0, \\ \mathcal{B}(t-1), & \text{else}, \end{cases}$$
where $b(t)$ is a blocked node, and $\mathcal{B}_{\max}(t)$ is the best set of blocked nodes, such that
$$\mathcal{B}_{\max}(t) = \begin{cases} \varnothing, & \text{if } t < N_\tau, \\ \mathcal{B}(t-1), & \text{if } \tau = N_\tau \land \mathcal{B}(t) = \varnothing \land h^{(2)}(t) = 0, \\ \mathcal{B}(t-1), & \text{if } \tau = N_\tau \land \mathcal{B}(t) \neq \varnothing \land h^{(2)}(t) = 0 \land c(\mathcal{B}) \geq c(\mathcal{B}_{\max}), \\ \mathcal{B}_{\max}(t-1), & \text{else}, \end{cases}$$
with $c(\mathcal{B}) = \operatorname{card}(\mathcal{B})$ and $c(\mathcal{B}_{\max}) = \operatorname{card}(\mathcal{B}_{\max})$. The $i$-th node is blocked when, relative to its number of visits, it has achieved the fewest successes; thus
$$b(t) = \underset{i \in [1, N_n]}{\arg\max} \sum_{j=1}^{N_a} \left( v_{i,j}(t) - w_{i,j}(t) \right),$$
where $v_{i,j}(t)$ represents the number of visits made by the $i$-th P. coffea agent to the $j$-th node. A visit by this agent is considered successful if it leads to a performance improvement. The memory vector $w_i(t) \in \mathbb{R}^{1 \times N_n}$ increases its $j$-th dimension by 1 to signify successful visits.
In this context, $v_{i,j}(t)$ accounts for both successful and unsuccessful visits, while $w_{i,j}(t)$ specifically tracks successful visits by incrementing its $j$-th dimension when the $i$-th agent enhances its performance after visiting the $j$-th node. As shown in Figure 3, the first type of memory that P. coffea agents pass on to their offspring is $M \in \mathbb{R}^{N_a \times N_n}$. This memory represents "how many nodes to select", that is, the cardinality of each agent, and its elements $m_{i,j}(t)$ are updated in the following way:
$$m_{i,j}(t+1) = \begin{cases} 1\ \forall i, & \text{if } t = 1 \land j > N_{\lambda+}, \\ 0\ \forall i, & \text{if } t = 1 \land j \leq N_{\lambda+}, \\ m_{i,j}(t) + 1, & \text{if } F_i^{(1)}(t) < F_i^{(1)}(t-1) \land j = c^{(1)}(t), \\ 0\ \forall i, & \text{if } h_i^{(1)}(t) = 0 \land j > c^{(1)}(t), \\ 1\ \forall i, & \text{if } \tau = N_\tau \land j \in [N_n - c(\mathcal{B}) - 1,\, N_n - c(\mathcal{B})], \\ 0\ \forall i, & \text{if } \tau = N_\tau \land j \notin [N_n - c(\mathcal{B}) - 1,\, N_n - c(\mathcal{B})], \\ m_{i,j}(t), & \text{else}, \end{cases}$$
where $c^{(1)}(t) = \operatorname{card}(p_i^{(1)}(t))$. The second type of memory that P. coffea agents pass on to their offspring is $W \in \mathbb{R}^{N_a \times N_n}$. This memory represents "which nodes to select", and its elements $w_{i,j}(t)$ are updated in the following way:
$$w_{i,j}(t+1) = \begin{cases} 1, & \text{if } t = 1, \\ w_{i,j}(t) + 1, & \text{if } F_i^{(1)}(t) < F_i^{(1)}(t-1) \land j \in p_i^{(1)}(t), \\ w_{i,j}(t) + 1, & \text{if } F^{(2)}(t) < F^{(2)}(t-1) \land i \in \mathcal{R}(t) \land j \in p^{(2)}(t), \\ 0\ \forall i, & \text{if } \tau = N_\tau \land j \in \mathcal{B}(t), \\ 1\ \forall i, & \text{if } \tau = N_\tau \land j \notin \mathcal{B}(t), \\ w_{i,j}(t), & \text{else}, \end{cases}$$
where $\mathcal{R}(t)$ is a set of uniformly distributed random nodes (without repetition) with random cardinality.
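The epoch-end blocking decision of Equation (28) amounts to comparing the visit and success counters, as the following sketch illustrates; excluding already blocked nodes from the argmax is an added convenience, not a rule stated in the text.

```python
import numpy as np

def select_blocked_node(V, W, blocked=()):
    """Epoch-end blocking choice (Equation (28)): block the node whose visits most
    exceed its successful visits, i.e. argmax over nodes of the sum over agents of (V - W).
    Skipping already blocked nodes is an added convenience, not a stated rule."""
    score = (V - W).sum(axis=0)            # V, W have shape (N_a, N_n): agents x nodes
    score[list(blocked)] = -np.inf
    return int(np.argmax(score))

# Placeholder counters for 3 agents and 5 nodes.
V = np.array([[4, 1, 3, 0, 2], [2, 2, 1, 1, 3], [3, 0, 2, 1, 1]], dtype=float)
W = np.array([[1, 1, 2, 0, 2], [1, 2, 1, 1, 1], [1, 0, 2, 1, 1]], dtype=float)
b = select_blocked_node(V, W)
```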

3.3. Saturated Sigmoid Switch Functions

Saturated sigmoids are employed as compact binary comparators (switches) with explicit tie handling through a small $\epsilon > 0$. The parameter $\epsilon$ is taken as a fixed tolerance (machine epsilon, eps) and is used to disambiguate equalities [44]. Let $x = (a - b)$ denote the comparison residual. The ideal steep-slope limit is indicated by "$\cdot \infty$" (in code, a large gain $g \gg 1$ is used).
$$S_1(x) = \frac{1}{1 + e^{(x + \epsilon) \cdot \infty}} = \begin{cases} 1, & x < 0, \\ 0, & x > 0, \\ 0, & x = 0, \end{cases} \qquad \text{"$<$" switch.}$$
$$S_2(x) = 1 - \frac{1}{1 + e^{(x - \epsilon) \cdot \infty}} = \begin{cases} 1, & x > 0, \\ 0, & x < 0, \\ 0, & x = 0, \end{cases} \qquad \text{"$>$" switch.}$$
$$S_3(x) = \frac{1}{1 + e^{(x - \epsilon) \cdot \infty}} = \begin{cases} 1, & x \leq 0, \\ 0, & x > 0, \end{cases} \qquad \text{"$\leq$" switch.}$$
$$S_4(x) = 1 - \frac{1}{1 + e^{(x + \epsilon) \cdot \infty}} = \begin{cases} 1, & x \geq 0, \\ 0, & x < 0, \end{cases} \qquad \text{"$\geq$" switch.}$$
Non-constancy of the switches in the proposed rules. The value of $x$ depends on random draws and evolving state/memory variables, so both activation and deactivation are produced:
  • In the cardinality rule (Equation (35)), $S_3(\mu_1 - z_j)$ compares a uniform draw $\mu_1 \in [0,1]$ with cumulative weights $z_j \in [0,1]$; across $j$, both cases $x < 0$ and $x > 0$ are obtained, so $S_3$ toggles. The quantity $\sum_q S_2(w_{i,q}(t))$ counts strictly positive success weights $w_{i,q}(t)$, and $S_2$ is activated only when $w_{i,q}(t) > 0$.
  • In node selection (Equation (36)), $S_3(\mu_2 - z_j)$ operates analogously with a fresh $\mu_2 \in [0,1]$ and cumulative weights derived from $w_i(t)$, yielding both $x \leq 0$ and $x > 0$.
  • In the clipping function (Equation (39)), $S_3(\beta_j^{(2)} - x)$ and $S_4(\beta_j^{(1)} - x)$ compare the current value $x$ with the moving upper and lower bounds $\beta_j^{(2)}$ and $\beta_j^{(1)}$; depending on the outcome, $x$ is passed through or snapped to the nearest bound.
The $\epsilon$ offsets enforce a consistent tie policy: equality is included in $S_3$ (≤) and $S_4$ (≥) and excluded in $S_1$ and $S_2$. This policy is applied consistently in all the referenced rules.
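A NumPy rendering of the four switches is sketched below; the finite gain and the exponent clipping are implementation choices that approximate the ideal steep-slope limit and are not prescribed by the paper.

```python
import numpy as np

EPS = np.finfo(float).eps    # tie-breaking tolerance, as stated in the text
GAIN = 1e20                  # large slope approximating the ideal steep limit (assumed value)

def _sig(z):
    """Numerically safe logistic with the exponent clipped to avoid overflow."""
    return 1.0 / (1.0 + np.exp(np.clip(z, -50.0, 50.0)))

def S1(x):  # "<" switch: 1 if x < 0, else 0 (ties excluded)
    return _sig(GAIN * (x + EPS))

def S2(x):  # ">" switch: 1 if x > 0, else 0 (ties excluded)
    return 1.0 - _sig(GAIN * (x - EPS))

def S3(x):  # "<=" switch: 1 if x <= 0, else 0 (ties included)
    return _sig(GAIN * (x - EPS))

def S4(x):  # ">=" switch: 1 if x >= 0, else 0 (ties included)
    return 1.0 - _sig(GAIN * (x + EPS))

# With these choices, e.g. S3(0.0) is approximately 1 while S3(0.1) is approximately 0.
```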

3.4. Offspring: Phymastichus

This subsection delves into the process governing the generation of new offspring of P. coffea agents. As previously discussed, PHA–MD introduces the capability to block nodes, reducing the pool of available nodes. This attribute materially influences the decision-making of each P. coffea agent through the memory M. Situations may arise where the number of available nodes is fewer than the destinations desired by an agent. To avoid this condition, the cardinality of each agent p i ( t ) is defined as
$$c_i(t) = \min\left\{ \min_{j \in [1, N_n]} \left\{ n_j\, S_3\!\left( \mu_1 - z_j \right) \right\},\ \sum_{q} S_2\!\left( w_{i,q}(t) \right) \right\}, \quad z(M) = [z_1, \dots, z_{N_n}], \quad z_k = \frac{\sum_{j=1}^{k} m_{i,j}(t)}{\sum_{j=1}^{N_n} m_{i,j}(t)}, \quad n = [1, \dots, N_n],$$
where $S_2(\cdot)$ and $S_3(\cdot)$ are the saturated sigmoid switch functions defined in Section 3.3, and $\mu_1 \in [0,1]$ is a uniformly distributed random number. Once $c_i(t)$ is selected, the set of pinning nodes is defined (Figure 3) so that the $j$-th node visited by agent $p_i(t)$ is
$$p_{i,j}(t) = \min_{j \in [1, c_i(t)]} \left\{ n_j\, S_3\!\left( \mu_2 - z_j \right) \right\}, \quad z(W) = [z_1, \dots, z_{N_n}], \quad z_k = \frac{\sum_{j=1}^{k} \omega_j}{\sum_{j=1}^{N_n} \omega_j}, \quad \omega = w_i(t), \quad n = [1, \dots, N_n],$$
where $S_3(\cdot)$ is defined in Section 3.3 and $\mu_2 \in [0,1]$ is uniformly distributed. If node $n_j$ is selected, its corresponding $w_{i,j}(t)$ during iteration $t$ is set to 0 so that $n_j$ is not re-selected by the same agent. The auxiliary vector $\omega$ is used for this purpose, allowing temporary modification without altering the original $w_i$.
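The roulette-based construction of one Phymastichus offspring can be sketched as follows; the zero-based index shift and the cap at the number of available nodes reflect one reading of Equations (35) and (36), and the memory vectors are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_pick(weights, mu):
    """Return the first index whose cumulative normalized weight reaches the draw mu,
    i.e. the smallest j with mu <= z_j (the role played by S3(mu - z_j))."""
    z = np.cumsum(weights) / np.sum(weights)
    return int(np.argmax(mu <= z))

def phymastichus_offspring(m_i, w_i, blocked=()):
    """One agent's proposal: draw the cardinality from memory m_i ("how many nodes",
    0-based column j standing for cardinality j+1), then draw that many distinct nodes
    from memory w_i ("which nodes"), zeroing a node's weight once it is selected."""
    w = w_i.astype(float).copy()
    w[list(blocked)] = 0.0
    available = int(np.sum(w > 0))                 # role of sum_q S2(w_iq) in Eq. (35)
    c_i = min(roulette_pick(m_i, rng.uniform()) + 1, available)
    p_i = []
    for _ in range(c_i):
        node = roulette_pick(w, rng.uniform())
        p_i.append(node)
        w[node] = 0.0                              # prevent reselection by this agent
    return p_i

# Placeholder memories for a 6-node network.
m_i = np.array([0.0, 0.0, 3.0, 2.0, 1.0, 1.0])     # favours cardinalities 3-4
w_i = np.array([5.0, 1.0, 4.0, 0.0, 2.0, 3.0])     # node preferences
p_i = phymastichus_offspring(m_i, w_i)
```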

3.5. Offspring: Hypothenemus

In this subsection, the generation process of the offspring in the population H ( t ) is described. The population H ( t ) evolves under random perturbations and position increments, and is given by
$$h_{i,j}(t+1) = \begin{cases} \beta_j^{(1)}(0) + \left( \beta_j^{(2)}(0) - \beta_j^{(1)}(0) \right) \mu_3, & \text{if } t = 1 \lor \tau = 1, \\ o\!\left( h_{i,j}(t) + \rho_3 \Delta_{i,j}(t) \right), & \text{else}, \end{cases}$$
where $\mu_3 \in [0,1]$ is uniformly distributed. The position increment of the $i$-th individual is
$$\Delta_{i,j}(t+1) = \begin{cases} 0, & \text{if } t = 1 \lor \tau = 1, \\ \Delta_{i,j}(t) + \rho_1 \mu_4 \left( h_{i,j}^{(1)}(t) - h_{i,j}(t) \right) + \rho_2 \mu_5 \left( h_j^{(2)}(t) - h_{i,j}(t) \right), & \text{else}, \end{cases}$$
where $\rho_1$ is the local-influence constant (two-female births), $\rho_2$ is the global-influence constant (female–male births), $\rho_3$ is an inertial constant, and $(\mu_4, \mu_5) \in [0,1]^2$ is uniformly distributed.
The clipping function o ( x ) prevents H. hampei agents from exceeding the search space bounds:
$$o(x) = x\, S_3\!\left( \beta_j^{(1)} - x \right) S_4\!\left( \beta_j^{(2)} - x \right) + S_4\!\left( \beta_j^{(1)} - x \right) \beta_j^{(1)} + S_3\!\left( \beta_j^{(2)} - x \right) \beta_j^{(2)},$$
where S 3 ( · ) and S 4 ( · ) are the saturated sigmoid switch functions defined in Section 3.3. In this way, values inside the interval [ β j ( 1 ) , β j ( 2 ) ] are preserved, whereas values attempting to cross a bound are snapped to the closest limit. This mechanism ensures that the population remains within the feasible search space while adapting to the evolving influence cues.
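The Hypothenemus update of Equations (38) and (39) can be sketched as follows; hard min/max clipping stands in for the saturated-switch form of $o(\cdot)$, and the constants $\rho_1$, $\rho_2$, $\rho_3$ are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def clip_to_borer(x, beta1, beta2):
    """Role of the clipping function o(.) in Equation (39): values inside
    [beta1, beta2] pass through, values outside snap to the nearest bound
    (hard min/max used here in place of the saturated switches S3/S4)."""
    return np.minimum(np.maximum(x, beta1), beta2)

def hypothenemus_step(H, Delta, H1, h2, beta1, beta2, rho1=1.5, rho2=1.5, rho3=0.7):
    """One H. hampei update (Equations (38)-(39)): the increment Delta accumulates
    attraction towards the per-agent bests H1 and the global best h2, and the new
    positions are clipped to the borer bounds. The rho constants are assumed values."""
    mu4 = rng.uniform(size=H.shape)
    mu5 = rng.uniform(size=H.shape)
    Delta = Delta + rho1 * mu4 * (H1 - H) + rho2 * mu5 * (h2 - H)
    H = clip_to_borer(H + rho3 * Delta, beta1, beta2)
    return H, Delta

# Placeholder population of 3 agents over a 4-node network.
H = rng.uniform(-10, 10, size=(3, 4))
Delta = np.zeros_like(H)
H1 = H.copy()                          # per-agent bests (initialised to current positions)
h2 = np.array([4.0, 0.0, 3.0, 0.0])    # global-best gains
H, Delta = hypothenemus_step(H, Delta, H1, h2,
                             beta1=np.full(4, -10.0), beta2=np.full(4, 10.0))
```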

3.6. Flowchart Analysis of the PHA-MD Algorithm

Figure 6 sketches the overall control flow of PHA-MD. It highlights, at a high level, the cycle that repeats across iterations and epochs: (i) initialization at the start of each epoch; (ii) agent-driven updates—Phymastichus constructs the candidate pinning set via the inheritance memories, while Hypothenemus refines the continuous gains through the borer and clipping functions; and (iii) the epoch-end blocking step, where visits and successes are evaluated to update the blocking set. The diagram intentionally omits low-level details to emphasize data flow and stage interactions. For completeness, Algorithm 1 provides the corresponding pseudocode, aligned with Section 3, which can be read in parallel with the equations to recover the full procedural detail.
The steps that describe the Phymastichus Agent Behavior stage are as follows. "Initialization": Each agent initializes with a random set of potential pinning nodes. "Node Selection": Using cumulative probabilities, the agent selects nodes to visit based on the success memories ($W(t)$) and counts ($M(t)$). "Objective Function Evaluation": The selected nodes are evaluated using the objective function to determine their effectiveness in controlling the network. "Memory Update": The agent updates its success memories and counts based on the evaluation results (Figure 7).
The steps that describe the Hypothenemus Agent Behavior stage are as follows. “Parameter Adjustment”: The agents adjust their parameters ( β ( 2 ) and β ( 1 ) ) using the sigmoid functions based on the iteration count. “Delta Update”: Each agent updates its delta values, incorporating local and global influences. “Node Evaluation”: Nodes are evaluated using the clipping function to ensure they remain within the bounds of β ( 1 ) and β ( 2 ) . “Update Influence”: The agents update the network influence values based on their evaluations (Figure 8).
The steps that describe the Update and Evaluation of the Blocking Group stage are as follows. “Epoch Check”: The algorithm checks if the current epoch or iteration limit has been reached. “Node Blocking”: Nodes are blocked based on their visitation and success rates. “Reset Counters”: Visitation and success memories are reset for the next iteration. “Stability Check”: The algorithm checks the stability of the network and updates the best blocking group if stability is achieved. In summary, these flowcharts provide a visual representation of the PHA-MD algorithm’s structure and operations, offering insights into the systematic and iterative processes that underlie its effectiveness in optimizing pinning control for complex networks (Figure 9).
Algorithm 1 PHA-MD
Data: $N_n$, $N_t$, $N_\tau$, $N_{\lambda+}$, $N_a$, bounds $\beta^{(1)}(0) = l_0$, $\beta^{(2)}(0) = u_0$, $\zeta$, $(\rho_1, \rho_2, \rho_3)$
Result: $\phi^{(2)}$ (global best), $\phi^{(3)}$ (best under blocking), $\mathcal{B}$, $\mathcal{B}_{\max}$
Init: $t \leftarrow 1$, $\tau \leftarrow 1$, $\mathcal{B} \leftarrow \varnothing$, $\mathcal{B}_{\max} \leftarrow \varnothing$
$W \leftarrow \mathbf{1}_{N_a \times N_n}$,  $M \leftarrow \mathbf{1}_{N_a \times N_n}$ with $M(:, 1{:}N_{\lambda+}) \leftarrow 0$,  $V \leftarrow \mathbf{0}$
Records: $\phi$ (current), $\phi^{(1)}$ (per-agent best), $\phi^{(2)}$ (global best), $\phi^{(3)}$ (blocked best)
while $t \leq N_t$ do
    if $t = 1$ or $\tau = 1$ then  ▹ epoch start
        Initialize $H$ within $[l_0, u_0]$ as in Equation (37); set $\Delta \leftarrow 0$
    end if
    for $i = 1$ to $N_a$ do  ▹ Phymastichus: node selection via memories
        Compute $Z_M = \operatorname{cumsum}(M(i,:)) / \sum M(i,:)$
        $c_i \leftarrow$ cardinality by Equation (35) using $Z_M$, $S_2$, $S_3$
        $\omega \leftarrow W(i,:)$  ▹ temporary copy to avoid reselection
        $p_i \leftarrow \varnothing$
        for $j = 1$ to $c_i$ do
            $Z_W = \operatorname{cumsum}(\omega) / \sum \omega$
            Select $p \leftarrow$ node by Equation (36) using $Z_W$, $S_3$; set $\omega(p) \leftarrow 0$  ▹ no reselection
            $p_i \leftarrow p_i \cup \{p\}$;  $V(i,p) \leftarrow V(i,p) + 1$
        end for
        Evaluate $F_i$, $h_i$ by Equation (15); form $\phi_i(t)$ (Equation (13))
        Update $\phi_i^{(1)}$ by Equation (23); if improved then $W(i, p_i) \mathrel{+}= 1$, $M(i, |p_i|) \mathrel{+}= 1$
        if $h_i = 0$ then  ▹ stable at current size
            $M(:, |p_i|{+}1 : N_n) \leftarrow 0$  ▹ prune larger cardinalities
        end if
    end for
    Update $\phi^{(2)}$ by Equation (24) using $F_{\min}^{(1)}(t)$ and $\arg\min k$
    Broadcast $\phi^{(2)}.H \leftarrow \phi^{(1)}.H$ at row $k$ replicated to $N_a$ rows
    if $\phi^{(2)}$ improved this step then  ▹ as in code: only if $f_{\min}$ improves
        Draw random subset $\mathcal{R} \subset \{1, \dots, N_a\}$; set $W(\mathcal{R}, \phi^{(2)}.p) \leftarrow W(\mathcal{R}, \phi^{(2)}.p) + 1$
    end if
    Hypothenemus: borer and clipping
    Update lower/upper bounds $\beta^{(1)}$, $\beta^{(2)}$ by Equations (21) (lower) and (22) (upper) using $\phi^{(2)}.H$, $\zeta$, $l_0$, $u_0$
    $\Delta \leftarrow \Delta + \rho_1 \cdot \operatorname{rand} \cdot (\phi^{(1)}.H - H) + \rho_2 \cdot \operatorname{rand} \cdot (\phi^{(2)}.H - H)$  ▹ Equation (38)
    $H \leftarrow o\!\left(H + \rho_3 \Delta,\ \beta^{(1)}, \beta^{(2)}\right)$  ▹ Equation (39) with $S_3$, $S_4$
    if $\tau = N_\tau$ or $t = N_t$ then  ▹ blocking stage
        $b \leftarrow \arg\max_i \sum_j \left( V(j,i) - W(j,i) \right)$  ▹ Equation (28)
        Reset epoch counters: $V \leftarrow \mathbf{0}$, $W \leftarrow \mathbf{1}$, $M \leftarrow \mathbf{0}$, $\tau \leftarrow 0$
        if $\phi^{(2)}.h = 0$ then  ▹ stability at epoch end
            if $\mathcal{B} = \varnothing$ then
                $\phi^{(3)} \leftarrow \phi^{(2)}$;  $\mathcal{B} \leftarrow \{b\}$;  $\mathcal{B}_{\max} \leftarrow \mathcal{B}$
            else
                If $|\phi^{(2)}.p| < |\phi^{(3)}.p|$ or (tie and $F^{(2)} < F^{(3)}$): $\phi^{(3)} \leftarrow \phi^{(2)}$
                If $|\mathcal{B}| \geq |\mathcal{B}_{\max}|$: $\mathcal{B}_{\max} \leftarrow \mathcal{B}$
                if $N_n - |\mathcal{B}| > N_{\lambda+}$ then
                    $\mathcal{B} \leftarrow \operatorname{unique}(\mathcal{B} \cup \{b\})$
                end if
            end if
            $W(:, \mathcal{B}) \leftarrow 0$;  $M(:, N_n{-}|\mathcal{B}|{-}1 : N_n{-}|\mathcal{B}|) \leftarrow 1$
        else if $\mathcal{B} \neq \varnothing$ and $\phi^{(2)}.h \neq 0$ then  ▹ recover best blocking if no stability
            $\mathcal{B} \leftarrow \mathcal{B}_{\max}$;  $W(:, \mathcal{B}) \leftarrow 0$;  $M(:, N_n{-}|\mathcal{B}|{-}1 : N_n{-}|\mathcal{B}|) \leftarrow 1$
        end if
    end if
    $t \leftarrow t + 1$;  $\tau \leftarrow \tau + 1$
end while
return $\phi^{(2)}$, $\phi^{(3)}$, $\mathcal{B}$, $\mathcal{B}_{\max}$

3.7. Spatial Complexity Analysis

The spatial complexity of the proposed algorithm is determined by analyzing the memory usage in relation to the input parameters, specifically the number of agents N a and the number of nodes N n in the complex network. The key data structures contributing to memory consumption include matrices and cell arrays that are dynamically allocated during the algorithm’s execution.
The primary matrices W, M, V, and ϕ , as well as the matrices β ( 1 ) and β ( 2 ) , each have a size of N a × N n . Each of these matrices contributes O ( N a × N n ) to the spatial complexity. Since there are six such matrices, their combined complexity is O ( 6 × N a × N n ) .
The cell array p , where each cell contains a vector of selected nodes, also contributes to the spatial complexity. In the worst case, the memory required is O ( N a × N n ) . Additional vectors, such as F, h, and their counterparts in ϕ ( 1 ) , ϕ ( 2 ) , and ϕ ( 3 ) , have a size of N a , contributing O ( 6 × N a ) in total. When combined, the total spatial complexity is
S ( n ) = O ( 6 × N a × N n ) + O ( N a × N n ) + O ( 6 × N a ) .
Given that the term O ( N a × N n ) dominates, the spatial complexity can be simplified to S ( n ) = O ( N a × N n ) . This indicates that the memory usage of the algorithm increases linearly with both the number of agents and the number of nodes in the complex network, demonstrating the algorithm’s spatial efficiency.

4. Simulation Study

It is important to note that the proposed PHA-MD algorithm features variable dimensions across its agents, with each agent having a different number of dimensions. This characteristic initially precluded a direct comparison with other algorithms, such as ALO [21], TLBO [22], GWO [23], AMO [24], PSO [25], ABC [26], GSK [27], BBO [28], WOA [29], ACO [30], OOA [31], MA [32], AOA [33], CHIO [34], and DTBO [35]. These algorithms assume a fixed number of dimensions throughout the optimization process, which is not compatible with the dynamic dimensionality of PHA-MD.
The optimization problem addressed by PHA-MD involves both combinatorial and continuous elements, further complicating the use of traditional algorithms. PHA-MD was specifically designed to handle the dynamic dimensionality needed to identify the minimum number of pinning nodes, a task that requires agents to propose different sets of nodes. This flexibility in dimension handling is not supported by the aforementioned algorithms, which rely on a fixed dimension setup. For instance, initializing agents with a predefined number of dimensions for both gain constants and potential pinning nodes imposes assumptions that may not be accurate.
To allow a fair comparison, a reconditioning function was introduced. This function adjusts the dimensions by rounding them to the nearest integer corresponding to node indices in the matrix K and selecting unique elements. This approach standardizes the input, enabling fixed-dimension algorithms to adapt to the variable dimensionality required by PHA-MD.
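A minimal sketch of such a reconditioning wrapper is given below; folding out-of-range values back into range is an added assumption, since the text only specifies rounding to node indices and keeping unique elements.

```python
import numpy as np

def recondition(candidate, Nn):
    """Map a fixed-dimension real-valued candidate to a valid pinning proposal:
    round each entry to the nearest node index in [1, Nn], fold out-of-range values
    back into range, and keep unique indices only (returned 0-based)."""
    idx = np.rint(np.asarray(candidate)).astype(int)
    idx = np.clip(idx, 1, Nn) - 1
    return np.unique(idx)

# Example: a 5-dimensional particle proposing nodes of a 20-node network.
nodes = recondition([3.2, 3.7, 11.9, 0.4, 25.0], Nn=20)   # -> array([ 0,  2,  3, 11, 19])
```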
All runs were performed on a single workstation under Ubuntu 22.04.3 LTS (64-bit) using MATLAB R2023b in double precision. Default multithreaded BLAS (Intel oneMKL) was enabled; GPU acceleration was not used. Randomness was controlled with MATLAB's rng('twister',2024). The population size, iteration budget, and epoch length followed the values stated in Section 3, PHA-MD Algorithm, and the figure set Convergence curves for the Networks. The hardware and software stack is summarized in Table 1.
Remark 2.
The number of runs required to obtain a representative solution cannot be specified a priori in a universal manner, since the selection of pinning nodes constitutes a combinatorial NP–hard problem. Consequently, the evaluation protocol is grounded in a theoretical feasibility criterion: under the V-stability of Xiang and Chen, the number of pinned nodes cannot be less than the number of non–negative eigenvalues of the characteristic matrix; thus, N λ + is a natural lower bound. Operationally, each method was executed up to the point at which the baseline algorithms exhibited instability, whereas PHA–MD preserved feasible solutions. Accordingly, a finite iteration budget was adopted as the stopping criterion; its value was selected experimentally to reach a regime in which baseline methods typically displayed instability in their solutions, while PHA–MD continued to yield feasible trajectories. Stability therefore informs the choice of iteration budget, although simulations are not terminated upon the detection of instability itself.
Figure 10 illustrates the reconditioning function used in the simulation study. This function is essential for aligning fixed-dimension algorithms with the variable dimensionality necessary for pinning node selection in complex networks. The properties of the six complex networks analyzed are presented in Table 2. These properties include the number of edges ( N e ), positive eigenvalues ( N λ + ), average connection degree ( c d ¯ ), average coupling strength ( c ¯ ), average passivity degree ( θ ¯ ), average initial conditions ( x ¯ 0 ), and their respective distribution intervals ( I c , θ , x 0 ). Additionally, the search space ( S S ) defines the feasible region for selecting control gains during the optimization process.
The six synthetic networks were sampled uniformly within the intervals reported in Table 2 under two design criteria: (i) passivity degrees θ and coupling strengths c i j chosen to yield a nonzero count of positive eigenvalues N λ + in the V-stability analysis, and (ii) a relatively large N λ + compared to N n , which increases the theoretical lower bound on the number of required pinning nodes. This produces challenging instances where stability is nontrivial and energy–cardinality trade-offs are visible. Although synthetic, these topologies are representative of interaction patterns commonly found in signed social influence graphs (trust/distrust) [45], neuronal microcircuits with mixed excitatory/inhibitory synapses [46], gene-regulatory subnetworks with activation/repression [47], distribution grids with grid-connected power-electronic converters exhibiting impedance interactions [48,49], and antagonistic ecological webs (predator–prey) [50]. Thus, “Network 1–6” should be read as topology classes that capture these motifs, rather than as single specific datasets.
The parameter settings used in the simulations for the various algorithms are summarized in Table 3. For each baseline, hyperparameters were taken from the authors’ recommended operating conditions in their original sources; no per-benchmark retuning was performed to avoid overfitting to the synthetic networks. The population size and number of iterations were kept constant across all methods to enforce a common computational budget. A reconditioning wrapper was employed to standardize candidate representations so fixed-dimension heuristics could accept the variable-cardinality solutions produced by PHA–MD. Because the control objective was stability-first, candidates that failed the stability test are reported as unstable and are not considered acceptable low-energy optima. This protocol separates algorithmic design differences (fixed versus variable dimensionality with stability feedback) from hyperparameter tweaks and supports a fair comparison.
As mentioned, the performance of the proposed PHA-MD algorithm is compared with several other heuristic optimization algorithms across six different complex network topologies. These topologies were generated uniformly at random within the intervals specified in Table 2. The networks were designed to challenge current optimization algorithms and reveal scenarios where stability is difficult to achieve. The aim was to create increasingly complex networks to showcase the superior capability of the PHA-MD algorithm. If a network were encountered where even PHA-MD could not find stable solutions, the algorithm would need to be reconfigured to address these new challenges.
The simulation results include two sets of graphs. The first set (Figure 11, Figure 12 and Figure 13) presents the convergence curves for each network’s optimization, comparing the performance of the PHA-MD algorithm with that of other algorithms. The second set (Figure 14, Figure 15 and Figure 16) of graphs shows the optimization of energy E t 0 against the percentage of unstable solutions, emphasizing solution stability.
In Figure 11a, the convergence behavior for Network 1 is depicted. Due to the scale, only the ABC algorithm is prominently visible, with others appearing unchanged. Figure 11b allows for a comparison across all algorithms, showing similar convergence except for in the case of GSK and ACO.
The PHA-MD algorithm reaches its minimum value earlier than others, demonstrating its efficiency. Figure 12a complicates comparisons due to the ABC algorithm’s scale, yet most algorithms show early convergence, with PHA-MD continuing to converge rapidly.
As shown in Figure 12b, the optimization process becomes more challenging due to Network 4’s size, with a clear gap between PHA-MD and the others, and only four out of five epochs observed due to unchanged candidate solutions. Figure 13a displays similar behavior, with drilling functions proving effective, and BBO and AMO following PHA-MD in performance. Finally, Figure 13b shows an even greater gap between PHA-MD and the other algorithms, highlighting PHA-MD’s superior performance in minimizing the cost function F.
In addition to the convergence curves, Figure 14, Figure 15 and Figure 16 present the optimization of energy E t 0 against the percentage of unstable solutions. Blue bars represent the average energy consumption of each algorithm’s solutions, with percentages indicating the proportion of unstable solutions. Figure 14a shows that the ABC algorithm achieves the lowest energy consumption with 0% unstable solutions, while ACO, despite being energy-efficient, has 94% unstable solutions, highlighting the trade-off between energy efficiency and stability.
In Figure 14b, the ABC algorithm remains the most energy-efficient, but several algorithms, including ALO, GWO, PSO, WOA, GSK, and ACO, present unstable solutions, with ACO having the highest instability rate at 95%. Figure 15a shows a reduction in unstable solutions due to Network 3 having fewer states. By Figure 15b, nearly all algorithms exhibit 100% unstable solutions, despite Network 4 having the same number of states, indicating that complexity depends on more than just the number of states. Figure 16a shows some algorithms achieving stable solutions, while Figure 16b highlights that only PHA-MD provides stable solutions, underscoring its prioritization of stability over energy efficiency.
These results demonstrate the PHA-MD algorithm’s effectiveness in achieving rapid convergence and superior optimization compared to other algorithms. Its ability to handle variable dimensionality and combinatorial optimization makes it particularly suitable for pinning control in complex networks. The simulation results are summarized in Table 4, which provides a comprehensive overview of each algorithm’s performance across the six networks.
The Average Cost Function (Avg. CF) represents the mean value of the cost function, indicating overall performance efficiency. The Best Cost Function Result (Best CF) shows the lowest cost function value achieved, reflecting the algorithm’s capability to find optimal solutions. The Best Number of Pinning Nodes (Best NP) and the Average Number of Pinning Nodes (Avg. NP) provide insight into the algorithm’s effectiveness in minimizing control nodes. Energy consumption is also evaluated, with the Best Energy Consumption (Best E) indicating the lowest energy usage, and the Average Energy Consumption (Avg. E) representing mean energy consumption.
To emphasize stability, values corresponding to unstable solutions are marked in red, indicating networks that did not achieve stability, while stable solutions are marked in black. This color-coding facilitates quick assessment of which algorithms balance energy efficiency with network stability. Overall, Table 4 underscores the PHA-MD algorithm’s superior performance, particularly in achieving stable solutions with efficient energy use and minimal pinning nodes.

Diagnostic Visualization of the Node-Blocking Stage

To illustrate the internal mechanics of the proposed blocking stage—biologically, how P. coffea wasps cease to visit unpromising nodes—we include the following diagnostic plots (Figure 17, Figure 18, Figure 19, Figure 20, Figure 21 and Figure 22). These figures are intended to explain how nodes are progressively blocked or excluded from the pinning set, rather than to provide additional performance benchmarks. For clarity, let n * denote the current average number of successful visits across the remaining (non-blocked) nodes; nodes below this running threshold are provisionally excluded from the pinning set, while blocked nodes become members of B ( t ) . If stability is still not achieved after blocking or excluding a node, that node can be returned to the candidate pool, consistent with the update rules in Section 3.2.
As shown in Figure 17, the first network begins without any blocked nodes, but several nodes lie below the threshold. Figure 18 presents the corresponding behavior for Network 2, while Figure 19 illustrates a topology that requires fewer pinning nodes. Figure 20, Figure 21 and Figure 22 summarize the results for the remaining networks.

5. Conclusions

The proposed PHA-MD algorithm is an effective method for pinning control of complex networks, eliminating the need for prior information on the minimal number of pinned nodes. It is designed to solve multi-constraint optimization problems, focusing on optimal node selection with the minimal number of pinned nodes, while ensuring network stabilization through the V-stability tool.
A stability-first evaluation has been adopted. In the pinning control setting, feasibility (stability) is the primary objective; therefore, candidates that are energy-efficient but unstable are not considered valid solutions. Cross-method comparisons have been reported under equal population and iteration budgets and with explicit instability flags, so as not to conflate algorithmic merit with implementation details or with external enumeration required by fixed-dimension heuristics. Within this framing, PHA–MD consistently identified smaller, stable pinning sets and competitive energy levels across increasingly challenging network topologies.
A key advantage of PHA-MD is its ability to handle variable dimensions for its agents, unlike current algorithms that operate with fixed dimensions. This adaptability allows PHA-MD to address challenges related to permutations in pinning node selection, making it a robust and versatile solution for complex network control. Consequently, the hybrid nature of the optimization problem tackled by PHA-MD complicates direct comparisons with other collaborative behavior algorithms.
Future research could explore various weighting methodologies in penalty functions, analyze convergence times in border functions, and investigate more complex decision vector dynamics. These studies could further enhance the algorithm’s efficiency and applicability in more challenging scenarios, reinforcing its role as a leading solution in complex network control.

Author Contributions

Conceptualization, J.A.L. and A.Y.A.; methodology, J.A.L. and J.R.-L.; software, J.A.L. and A.J.P.; validation, R.C.-N. and C.L.-Á.; formal analysis, J.R.-L. and Y.A.-M.; investigation, J.A.L., A.J.P., and J.R.-L.; resources, C.L.-Á. and H.A.G.-O.; data curation, R.C.-N. and A.J.P.; writing—original draft preparation, J.A.L. and J.R.-L.; writing—review and editing, L.F.L.-V., A.Y.A., and Y.A.-M.; visualization, R.C.-N. and C.L.-Á.; supervision, Y.A.-M. and H.A.G.-O.; project administration, L.F.L.-V.; funding acquisition, C.L.-Á. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the size and complexity of the raw simulation files.

Acknowledgments

The authors greatly appreciate Edgar Nelson Sánchez Camperos for allowing us to develop this project and helping us grow as research scientists.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PHA-MD    Phymastichus–Hypothenemus Algorithm—Minimized and Determinated
ALO       Ant Lion Optimizer
TLBO      Teaching–Learning-Based Optimization
GWO       Grey Wolf Optimizer
AMO       Animal Migration Optimization
PSO       Particle Swarm Optimization
ABC       Artificial Bee Colony
GSK       Gaining–Sharing Knowledge Based Algorithm
BBO       Biogeography-Based Optimization
WOA       Whale Optimization Algorithm
ACO       Ant Colony Optimization
OOA       Osprey Optimization Algorithm
MA        Mayfly Algorithm
AOA       Archimedes Optimization Algorithm
CHIO      Coronavirus Herd Immunity Optimizer
DTBO      Driving Training-Based Optimization
GA        Genetic Algorithm

References

1. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512.
2. Strogatz, S.H. Exploring complex networks. Nature 2001, 410, 268–276.
3. Newman, M.E.J. The structure and function of complex networks. SIAM Rev. 2003, 45, 167–256.
4. Alon, U. An Introduction to Systems Biology: Design Principles of Biological Circuits; Chapman and Hall and CRC: Boca Raton, FL, USA, 2006.
5. Sporns, O.; Chialvo, D.R.; Kaiser, M.; Hilgetag, C.C. Organization, development and function of complex brain networks. Trends Cogn. Sci. 2004, 8, 418–425.
6. Wasserman, S.; Faust, K. Social Network Analysis: Methods and Applications; Cambridge University Press: Cambridge, UK, 1994.
7. Pimm, S.L. Food Webs; University of Chicago Press: Chicago, IL, USA, 1982.
8. Motter, A.E.; Lai, Y.C. Cascade-based attacks on complex networks. Phys. Rev. E 2002, 66, 065102.
9. Buldyrev, S.V.; Parshani, R.; Paul, G.; Stanley, H.E.; Havlin, S. Catastrophic cascade of failures in interdependent networks. Nature 2010, 464, 1025–1028.
10. Freeman, L.C. Centrality in social networks conceptual clarification. Soc. Netw. 1978, 1, 215–239.
11. Girvan, M.; Newman, M.E.J. Community structure in social and biological networks. Proc. Natl. Acad. Sci. USA 2002, 99, 7821–7826.
12. Sabidussi, G. The centrality index of a graph. Psychometrika 1966, 31, 581–603.
13. Borgatti, S.P. Centrality and network flow. Soc. Netw. 2005, 27, 55–71.
14. Newman, M.E.J.; Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 2004, 69, 026113.
15. Zhang, H.; Liu, Y.; Wang, J. Adaptive control for complex networks: A review. IEEE Trans. Control Netw. Syst. 2020, 7, 1234–1245.
16. Chen, G. Controllability robustness of complex networks. J. Autom. Intell. 2022, 1, 100004.
17. Hu, J.; Zhang, H.; Liu, H.; Yu, X. A survey on sliding mode control for networked control systems. Int. J. Syst. Sci. 2021, 52, 1129–1147.
18. Lizarraga, J.A.; Vega, C.J.; Sanchez, E.N. Particles swarm optimization for minimal energy consumption in complex networks node search. In Proceedings of the 2020 17th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 11–13 November 2020; pp. 1–6.
19. Lizarraga, J.A.; Ruiz-Leon, J.; Sanchez, E.N. Phymastichus-hypothenemus-based algorithm for optimal node selection on pinning control of complex networks. In Proceedings of the 2022 19th International Conference on Electrical Engineering, Computing Science and Automatic Control, Mexico City, Mexico, 9–11 November 2022; pp. 1–6.
20. Xiang, J.; Chen, G. On the V-stability of complex dynamical networks. Automatica 2007, 43, 1049–1057.
21. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
22. Rao, R.; Savsani, V.; Vakharia, D. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15.
23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
24. Li, X.; Zhang, J.; Yin, M. Animal migration optimization: An optimization algorithm inspired by animal migration behavior. Neural Comput. Appl. 2013, 24, 1867–1877.
25. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
26. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (abc) algorithm. J. Glob. Optim. 2007, 39, 459–471.
27. Wagdy, A.; Hadi, A.; Khater, A. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529.
28. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
29. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
30. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 29–41.
31. Sulaiman, M.; Saif, N.; Khader, A.T. Osprey Optimization Algorithm: A Novel Nature-Inspired Metaheuristic. IEEE Access 2022, 10, 54629–54641.
32. Zhang, T.; Zhou, Y.; Zhou, G.; Deng, W.; Luo, Q. Bioinspired Bare Bones Mayfly Algorithm for Large-Scale Spherical Minimum Spanning Tree. Front. Bioeng. Biotechnol. 2022, 10, 830037.
33. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Archimedes Optimization Algorithm. J. Comput. Des. Eng. 2020, 7, 258–268.
34. Hatamlou, A. Coronavirus Herd Immunity Optimizer (CHIO): A novel metaheuristic algorithm for global optimization. Comput. Ind. Eng. 2021, 161, 107606.
35. Nasiri, M.; Mousavirad, S.M.; Mohammadi, S.R. Driving Training-Based Optimization (DTBO): A new optimization algorithm. Eng. Appl. Artif. Intell. 2020, 91, 103570.
36. Holland, J. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
37. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998.
38. Chen, G.; Wang, X.; Li, X. Fundamentals of Complex Networks: Models, Structures and Dynamics; John Wiley & Sons: Hoboken, NJ, USA, 2014.
39. Su, H.; Wang, X. Pinning Control of Complex Networked Systems: Synchronization, Consensus and Flocking of Networked Systems via Pinning; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
40. Chen, G. Pinning control and synchronization on complex dynamical networks. Int. J. Control Autom. Syst. 2014, 12, 221–230.
41. Bustillo, P.; Villagalba, G.; Chaves, C. Consideraciones sobre el uso de insecticidas químicos en la zona cafetera en el control de la broca del café, hypothenemus hampei. Congr. Soc. Colomb. Entomol. 1993, 20, 152–158.
42. Espinoza, J.C. The biology of phymastichus coffea lasalle (hymenoptera: Eulophidae) under field conditions. Biol. Control 2009, 49, 227–233.
43. Waller, J.M.; Bigger, M.; Hillocks, R.J. Coffee Pests, Diseases and Their Management; CAB Books; CABI Pub: Oxon, UK, 2007; Volume 1, pp. 70–71.
44. Cody, W.J. Algorithm 665: Machar: A subroutine to dynamically determine machine parameters. ACM Trans. Math. Softw. 1988, 14, 303–311.
45. Leskovec, J.; Huttenlocher, D.; Kleinberg, J. Predicting Positive and Negative Links in Online Social Networks. In Proceedings of the 19th International Conference on World Wide Web (WWW ’10), New York, NY, USA, 26–30 April 2010; pp. 641–650.
46. Yizhar, O.; Fenno, L.E.; Davidson, T.J.; Mogri, M.A.; Deisseroth, K. Neocortical Excitation/Inhibition Balance in Information Processing and Social Dysfunction. Nature 2011, 477, 171–178.
47. Karlebach, G.; Shamir, R. Modelling and analysis of gene regulatory networks. Nat. Rev. Mol. Cell Biol. 2008, 9, 770–780.
48. Sun, J. Impedance-Based Stability Criterion for Grid-Connected Inverters. IEEE Trans. Power Electron. 2011, 26, 3075–3078.
49. Kundur, P. Power System Stability and Control; McGraw-Hill: New York, NY, USA, 1994.
50. Allesina, S.; Tang, S. Stability criteria for complex ecosystems. Nature 2012, 483, 205–208.
Figure 1. (a) Phymastichus coffea Lasalle (P. coffea). (b) Hypothenemus hampei Ferrari (H. hampei).
Figure 2. Parasitic symbiotic relationship between agents, favorable events. (1) Parasitization, (2) birth of a female wasp, (3) birth of a female and a male wasp.
Figure 3. Behavior and modeling of P. coffea wasp: visits of P. coffea wasp, coffee tree graph.
Figure 4. The biological inspiration for the borer functions: the path of Hypothenemus hampei towards the coffee berry apex.
Figure 5. A flowchart for the updating of symbiotic relationships in PHA-MD. The diagram summarizes the decision logic for the three favorable subtypes: per-agent best ϕ i ( 1 ) ( t ) (Equation (23)), population best ϕ ( 2 ) ( t ) (Equation (24)), and epoch-level best under blocking ϕ ( 3 ) ( t ) (Equation (25)), with the epoch trigger τ and the blocking set B ( t ) governing transitions at τ = N τ .
Figure 6. General loop.
Figure 7. Phymastichus agent behavior.
Figure 8. Hypothenemus agent behavior.
Figure 9. Updating and evaluation of the blocking group.
Figure 10. A flowchart of the reconditioning function.
Figure 11. Convergence curves for Networks 1 and 2.
Figure 12. Convergence curves for Networks 3 and 4.
Figure 13. Convergence curves for Networks 5 and 6.
Figure 14. Energy optimization and percentage of unstable solutions for Networks 1 and 2.
Figure 15. Energy optimization and percentage of unstable solutions for Networks 3 and 4.
Figure 16. Energy optimization and percentage of unstable solutions for Networks 5 and 6.
Figure 17. Blocked and excluded nodes for Network 1. Before the first epoch, there are no blocked nodes; however, several nodes lie below the performance threshold n * . Between iterations 150 and 600, only one node is blocked, indicating that the population had not yet identified which node should be blocked to guarantee stability. After iteration 900, nodes n 10 , n 8 , n 2 , and n 1 are blocked, and nodes n 4 and n 5 are excluded from the pinning set; the remaining nodes then ensure network stability.
Figure 18. Blocked and excluded nodes for Network 2. Between iterations 150 and 900, the blocking set has a cardinality of six. The blocking node n 8 at iteration 750 causes the average number of successes of node n 20 to exceed the threshold, yet node n 12 is blocked at iteration 900 and nodes n 20 and n 10 are subsequently excluded from the pinning set.
Figure 19. Blocked and excluded nodes for Network 3. Owing to the network topology and initial conditions, this case requires fewer pinning nodes than Network 2 for stabilization. Between iterations 900 and 1000, the excluded node n 18 appears above the threshold n * due to randomness and the local/global influence coefficients of the agents.
Figure 20. Blocked and excluded nodes for Network 4. In this case, the algorithm finishes the iterative process with blocked nodes; unlike previous simulations, Network 4 requires a high number of pinning nodes for stabilization, and the blocked nodes remain below the threshold n * .
Figure 21. Blocked and excluded nodes for Network 5. This figure shows one of two permutations of the final blocked/excluded sets. PHA-MD may vary the order in which nodes are blocked or excluded; some nodes are excluded because they were previously blocked, and conversely, some are blocked because they were excluded. Therefore, solutions can differ depending on the blocking/exclusion order while still satisfying stability.
Figure 22. Blocked and excluded nodes for Network 6. If stability is not achieved after blocking or excluding a node, that node can return to the candidate set (e.g., node n 17 at iteration 900).
Table 1. Hardware and software used for all computations.
Component | Specification
CPU | Intel Core i7-12700K (12 cores: 8P+4E, 20 threads, 3.6–5.0 GHz)
Memory | 32 GB DDR4-3200
GPU | NVIDIA GeForce RTX 3060, 12 GB VRAM (not used)
Storage | 1 TB NVMe SSD
Operating system | Ubuntu 22.04.3 LTS (64-bit), Linux kernel 5.15
MATLAB | R2023b (double precision)
Toolboxes | Optimization, Global Optimization; Parallel Computing (installed, disabled)
Math kernel | Intel oneMKL (default in MATLAB R2023b)
Threading | Default multithreaded linear algebra; no GPU acceleration
Random seed | rng('twister', 2024)
Measurement | MATLAB tic/toc; single-process execution
Table 2. Complex network data analyzed.
Quantity | Network 1 | Network 2 | Network 3 | Network 4 | Network 5 | Network 6
N_n | 10 | 20 | 30 | 40 | 50 | 60
N_e | 18 | 93 | 224 | 383 | 653 | 876
N_λ+ | 3 | 7 | 5 | 21 | 25 | 33
c̄_d | 3.6 | 9.3 | 14.93 | 19.15 | 26.12 | 29.2
min{c_d} | 2 | 5 | 10 | 12 | 19 | 19
max{c_d} | 7 | 13 | 20 | 24 | 34 | 38
c̄ | 10.92 | 5.78 | 9.53 | −11.35 | −6.90 | −4.64
I_c | [4, 30] | [2, 15] | [2, 20] | [−40, 20] | [−24, 10] | [−24, 16]
θ̄ | 6.69 | 6.31 | 45.93 | −6.31 | 0.46 | −7.69
I_θ | [1, 13] | [−10, 15] | [4, 80] | [−23, 8] | [−16, 12] | [−56, 35]
x̄_0 | 16.85 10.73 10.2 22.13 1.03 12.59 0.01 2.02 2.02 2.21 2.33 2.83 23.83 26 31.85
I_x0 | [−80, 70] | [−90, 55] | [8, 9] | [2, 7] | [−14, 8] | [23, 80]
S_S | 10 1×10^10 1×10^4 2×10^5 1×10^2 1×10^7 1×10^4 1×10^6 1×10^5 1×10^7 1×10^7 1×10^9
Table 3. The parameter settings for the algorithms used in the simulations.
Algorithm | Parameter | Value
ALO | Elite rate | 0.1
ALO | Ant rate | 0.9
WOA | Convergence parameter (a) | [2, 0]
WOA | r (random vector) | [0, 1]
WOA | l (random number) | [−1, 1]
TLBO | T_F (teaching factor) | [(1 + rand)]
TLBO | rand (random number) | [0, 1]
GWO | a (convergence parameter) | [2, 0]
GWO | α (alternative parameter) | 2 − 2(g/max g)
AMO | Animals in each group | 5
PSO | ω (inertia weight) | 0.6
PSO | c_1, c_2 (cognitive and social constants) | 2
ABC | Abandonment criteria | 25
GSK | P (probability) | 0.1
GSK | k_f (forward coeff) | 0.5
GSK | k_r (return coeff) | 0.9
GSK | K (number of iters) | 10
BBO | Habitat modification probability | 1
BBO | Immigration probability bounds per gene | [0, 1]
BBO | Step size for numerical integration | 1
BBO | Max. immigration and migration rates | 1
BBO | Mutation probability | 0.1
ACO | Initial pheromone value | 1×10^−6
ACO | Pheromone update constant | 20
ACO | Exploration constant | 1
ACO | Global pheromone decay rate | 0.9
ACO | Local pheromone decay rate | 0.5
ACO | Pheromone sensitivity | 1
ACO | Visibility sensitivity | 5
PHA | ρ_1 (local influence coeff) | 0.5
PHA | ρ_2 (global influence coeff) | 0.6
PHA | ρ_3 (increment coeff) | 0.7
PHA | ζ (clustering tolerance) | 0.02
PHA | γ (control gain) | 10
OOA | Elite rate | 0.2
OOA | Infection rate | 0.1
MA | Attraction rate | 0.5
MA | Repulsion rate | 0.3
AOA | Exploration rate | 0.4
AOA | Exploitation rate | 0.6
CHIO | Infection probability | 0.3
CHIO | Recovery probability | 0.1
DTBO | Learning rate | 0.1
DTBO | Feedback rate | 0.2
All algorithms | Population size | 50
All algorithms | Number of iters | 500
Table 4. Simulation results for different algorithms across six networks.
Algorithm | Measure | Network 1 | Network 2 | Network 3 | Network 4 | Network 5 | Network 6
ALO | Avg. CF | 5.1×10^60 | 6.8602×10^68 | 1.912×10^61 | 4.6041×10^71 | 3.6848×10^71 | 6.132×10^71
ALO | Best CF | 4×10^60 | 1.3×10^61 | 1.7×10^61 | 1.7079×10^71 | 1.6174×10^71 | 3.2368×10^71
ALO | Avg. NP | 5 | 15 | 19 | 33 | 40 | 46
ALO | Best NP | 4 | 13 | 17 | 30 | 35 | 40
ALO | Avg. EC | 6.2082×10^23 | 5.9117×10^14 | 1.7025×10^16 | 1.2553×10^14 | 5.7283×10^18 | 3.6252×10^22
ALO | Best EC | 1.1201×10^23 | 2.404×10^14 | 7.8747×10^15 | 3.7911×10^13 | 2.5242×10^18 | 2.3748×10^22
TLBO | Avg. CF | 4×10^60 | 1.498×10^61 | 1.962×10^61 | 6.7339×10^71 | 4.942×10^71 | 7.6473×10^71
TLBO | Best CF | 4×10^60 | 1.4×10^61 | 1.7×10^61 | 5.7775×10^71 | 3.6628×10^71 | 5.9984×10^71
TLBO | Avg. NP | 4 | 15 | 20 | 31 | 37 | 43
TLBO | Best NP | 4 | 14 | 17 | 27 | 33 | 37
TLBO | Avg. E | 3.8762×10^23 | 4.8607×10^14 | 1.4766×10^16 | 1.1278×10^14 | 4.8949×10^18 | 3.0739×10^22
TLBO | Best E | 4.5673×10^22 | 1.9126×10^14 | 8.0718×10^15 | 5.7146×10^13 | 2.0877×10^18 | 1.382×10^22
GWO | Avg. CF | 4.75×10^60 | 2.4728×10^68 | 1.6844×10^67 | 6.434×10^71 | 4.6174×10^71 | 7.2403×10^71
GWO | Best CF | 4×10^60 | 1.2×10^61 | 1.8×10^61 | 3.745×10^71 | 2.6409×10^71 | 3.9817×10^71
GWO | Avg. NP | 5 | 15 | 20 | 30 | 37 | 43
GWO | Best NP | 4 | 12 | 18 | 26 | 33 | 37
GWO | Avg. E | 5.2435×10^23 | 3.9094×10^14 | 1.3449×10^16 | 8.5704×10^13 | 3.7608×10^18 | 2.3976×10^22
GWO | Best E | 8.0848×10^22 | 1.1117×10^14 | 2.5247×10^15 | 1.9739×10^13 | 1.0436×10^18 | 1.268×10^22
AMO | Avg. CF | 4×10^60 | 1.282×10^61 | 1.653×10^61 | 1.8614×10^71 | 1.9714×10^71 | 3.3892×10^71
AMO | Best CF | 4×10^60 | 1.2×10^61 | 1.5×10^61 | 2.5401×10^70 | 1.624×10^70 | 2.0201×10^71
AMO | Avg. NP | 4 | 13 | 17 | 36 | 44 | 51
AMO | Best NP | 4 | 12 | 15 | 34 | 41 | 46
AMO | Avg. E | 3.979×10^23 | 4.4029×10^14 | 1.3695×10^16 | 1.9327×10^14 | 8.0387×10^18 | 4.3284×10^22
AMO | Best E | 5.5216×10^22 | 1.9064×10^14 | 7.1131×10^15 | 1.0368×10^14 | 5.0968×10^18 | 2.4297×10^22
PSO | Avg. CF | 4.79×10^60 | 3.4255×10^69 | 1.4353×10^69 | 7.4066×10^71 | 5.4168×10^71 | 8.231×10^71
PSO | Best CF | 4×10^60 | 1.3×10^61 | 2×10^61 | 6.2805×10^71 | 4.7809×10^71 | 6.5008×10^71
PSO | Avg. NP | 5 | 15 | 22 | 29 | 35 | 41
PSO | Best NP | 4 | 13 | 20 | 24 | 31 | 35
PSO | Avg. E | 5.1762×10^23 | 5.8063×10^14 | 2.0949×10^16 | 1.3113×10^14 | 5.6448×10^18 | 3.3657×10^22
PSO | Best E | 4.7222×10^22 | 2.3678×10^14 | 8.6401×10^15 | 5.8626×10^13 | 3.1002×10^18 | 1.9188×10^22
ABC | Avg. CF | 7.84×10^60 | 1.496×10^61 | 2.659×10^61 | 1.7899×10^71 | 1.8478×10^71 | 6.2455×10^71
ABC | Best CF | 6×10^60 | 1.3×10^61 | 2.4×10^61 | 3.7×10^61 | 4.7×10^61 | 3.7078×10^71
ABC | Avg. NP | 8 | 15 | 27 | 37 | 45 | 48
ABC | Best NP | 6 | 13 | 24 | 32 | 39 | 41
ABC | Avg. E | 6.0857×10^13 | 3.8347×10^12 | 2.0035×10^7 | 4.4134×10^10 | 1.9193×10^13 | 1.0839×10^19
ABC | Best E | 6.3576×10^7 | 3.3042×10^12 | 1.0895×10^7 | 3.7038×10^10 | 1.4898×10^13 | 8.7471×10^18
GSK | Avg. CF | 5.7×10^60 | 4.7397×10^70 | 2.0125×10^70 | 7.9435×10^71 | 5.601×10^71 | 8.3242×10^71
GSK | Best CF | 4×10^60 | 1.6×10^61 | 1.9×10^61 | 6.2524×10^71 | 4.7282×10^71 | 7.0532×10^71
GSK | Avg. NP | 6 | 15 | 22 | 29 | 36 | 42
GSK | Best NP | 4 | 13 | 19 | 25 | 32 | 36
GSK | Avg. E | 6.0977×10^23 | 5.672×10^14 | 1.8551×10^16 | 1.1749×10^14 | 5.1288×10^18 | 3.27×10^22
GSK | Best E | 5.7557×10^22 | 1.4781×10^14 | 7.3913×10^15 | 5.3564×10^13 | 1.9935×10^18 | 2.2152×10^22
BBO | Avg. CF | 4.27×10^60 | 1.441×10^61 | 1.733×10^61 | 1.8776×10^71 | 1.5097×10^71 | 2.7855×10^71
BBO | Best CF | 4×10^60 | 1.3×10^61 | 1.5×10^61 | 3.9×10^61 | 4.5×10^61 | 1.2984×10^70
BBO | Avg. NP | 4 | 14 | 17 | 36 | 45 | 52
BBO | Best NP | 4 | 13 | 15 | 31 | 41 | 45
BBO | Avg. E | 3.8848×10^23 | 5.0742×10^14 | 1.3723×10^16 | 1.5802×10^14 | 6.4197×10^18 | 3.9379×10^22
BBO | Best E | 6.4946×10^22 | 2.1085×10^14 | 4.5255×10^15 | 8.8315×10^13 | 3.9199×10^18 | 2.7205×10^22
WOA | Avg. CF | 5.09×10^60 | 7.1636×10^69 | 1.1383×10^68 | 6.548×10^71 | 4.6704×10^71 | 7.2743×10^71
WOA | Best CF | 4×10^60 | 1.4×10^61 | 1.8×10^61 | 3.6658×10^71 | 3.512×10^71 | 5.6986×10^71
WOA | Avg. NP | 5 | 15 | 20 | 31 | 37 | 44
WOA | Best NP | 4 | 14 | 18 | 26 | 32 | 38
WOA | Avg. E | 6.1491×10^23 | 5.6463×10^14 | 1.8316×10^16 | 1.2598×10^14 | 5.2791×10^18 | 3.392×10^22
WOA | Best E | 4.7187×10^22 | 1.895×10^14 | 8.9749×10^15 | 4.7129×10^13 | 2.0887×10^18 | 1.7534×10^22
ACO | Avg. CF | 4.5699×10^71 | 3.7148×10^71 | 9.3099×10^71 | 1.4033×10^72 | 1.0814×10^72 | 1.4512×10^72
ACO | Best CF | 8×10^60 | 1.5×10^61 | 2.2×10^61 | 1.2746×10^72 | 1.0048×10^72 | 1.2674×10^72
ACO | Avg. NP | 4 | 7 | 8 | 9 | 10 | 12
ACO | Best NP | 2 | 4 | 5 | 6 | 7 | 6
ACO | Avg. E | 3.0953×10^22 | 4.0813×10^13 | 5.9113×10^14 | 1.751×10^12 | 3.6959×10^16 | 4.261×10^20
ACO | Best E | 4.4537×10^5 | 1.1508×10^12 | 1.3255×10^6 | 1.957×10^10 | 1.7838×10^12 | 7.7962×10^18
OOA | Avg. CF | 4.96×10^60 | 4.7818×10^70 | 5.1587×10^69 | 7.7914×10^71 | 5.4469×10^71 | 8.1832×10^71
OOA | Best CF | 4×10^60 | 1.5×10^61 | 1.9×10^61 | 6.0594×10^71 | 4.679×10^71 | 6.7698×10^71
OOA | Avg. NP | 5 | 15 | 21 | 29 | 36 | 42
OOA | Best NP | 4 | 13 | 19 | 25 | 32 | 37
OOA | Avg. E | 9.5457×10^23 | 6.7351×10^14 | 2.2913×10^16 | 1.4039×10^14 | 5.9964×10^18 | 3.5683×10^22
OOA | Best E | 2.0688×10^23 | 2.1862×10^14 | 1.129×10^16 | 7.4459×10^13 | 2.9095×10^18 | 1.8181×10^22
MA | Avg. CF | 4.43×10^60 | 1.516×10^61 | 2×10^61 | 6.5542×10^71 | 4.7063×10^71 | 7.2175×10^71
MA | Best CF | 4×10^60 | 1.3×10^61 | 1.8×10^61 | 5.0182×10^71 | 3.8797×10^71 | 5.8847×10^71
MA | Avg. NP | 4 | 15 | 20 | 31 | 38 | 43
MA | Best NP | 4 | 13 | 18 | 28 | 34 | 37
MA | Avg. E | 4.697×10^23 | 5.641×10^14 | 1.6752×10^16 | 1.2139×10^14 | 5.5114×10^18 | 3.3116×10^22
MA | Best E | 9.1966×10^22 | 2.7977×10^14 | 7.8853×10^15 | 3.1976×10^13 | 3.5254×10^18 | 2.0607×10^22
AOA | Avg. CF | 4.69×10^60 | 3.3716×10^70 | 1.0224×10^70 | 7.8676×10^71 | 5.4175×10^71 | 8.0266×10^71
AOA | Best CF | 4×10^60 | 1.4×10^61 | 1.8×10^61 | 5.593×10^71 | 4.2144×10^71 | 6.4571×10^71
AOA | Avg. NP | 5 | 15 | 20 | 28 | 34 | 40
AOA | Best NP | 4 | 12 | 17 | 23 | 29 | 34
AOA | Avg. E | 4.5233×10^23 | 2.8891×10^14 | 1.05×10^16 | 6.0699×10^13 | 2.7815×10^18 | 2.0416×10^22
AOA | Best E | 1.2327×10^23 | 7.5247×10^13 | 4.099×10^15 | 1.6334×10^13 | 8.9783×10^17 | 7.737×10^21
CHIO | Avg. CF | 4.7×10^60 | 3.0251×10^70 | 1.4891×10^70 | 7.7324×10^71 | 5.5317×10^71 | 8.2538×10^71
CHIO | Best CF | 4×10^60 | 1.4×10^61 | 1.8×10^61 | 4.2028×10^71 | 4.3911×10^71 | 6.7268×10^71
CHIO | Avg. NP | 5 | 15 | 21 | 29 | 36 | 42
CHIO | Best NP | 4 | 13 | 18 | 24 | 29 | 35
CHIO | Avg. E | 4.7572×10^23 | 5.6382×10^14 | 1.576×10^16 | 1.0685×10^14 | 4.7282×10^18 | 3.0955×10^22
CHIO | Best E | 1.0896×10^23 | 2.081×10^14 | 8.5226×10^15 | 4.7669×10^13 | 2.0199×10^18 | 1.8926×10^22
DTBO | Avg. CF | 4.43×10^60 | 2.3611×10^70 | 4.6668×10^69 | 6.4274×10^71 | 4.5929×10^71 | 6.933×10^71
DTBO | Best CF | 4×10^60 | 1.3×10^61 | 1.7×10^61 | 3.1498×10^71 | 2.9082×10^71 | 4.7492×10^71
DTBO | Avg. NP | 4 | 16 | 19 | 30 | 38 | 45
DTBO | Best NP | 4 | 13 | 17 | 25 | 31 | 37
DTBO | Avg. E | 4.1237×10^23 | 5.4967×10^14 | 1.3802×10^16 | 1.1513×10^14 | 4.9397×10^18 | 3.2404×10^22
DTBO | Best E | 1.5942×10^23 | 2.6794×10^14 | 7.1215×10^15 | 6.0937×10^13 | 3.0635×10^18 | 2.1081×10^22
PHA-MD | Avg. CF | 4×10^60 | 1.324×10^61 | 1.716×10^61 | 3.636×10^61 | 4.502×10^61 | 5.489×10^61
PHA-MD | Best CF | 4×10^60 | 1.2×10^61 | 1.5×10^61 | 3.6×10^61 | 4.4×10^61 | 5.3×10^61
PHA-MD | Avg. NP | 4 | 13 | 17 | 36 | 45 | 53
PHA-MD | Best NP | 4 | 12 | 15 | 36 | 44 | 50
PHA-MD | Avg. E | 3.6528×10^23 | 4.2558×10^14 | 1.2833×10^16 | 1.3892×10^14 | 6.647×10^18 | 4.0136×10^22
PHA-MD | Best E | 3.4829×10^22 | 1.4022×10^14 | 6.1203×10^15 | 8.0683×10^13 | 4.0684×10^18 | 2.7623×10^22
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
