Article

Making Networks Less Amplifiers Under Resource Constraints

by Noël Bonneuil 1,2
1 Institut national d’études démographiques, 9, cours des Humanités, 93322 Aubervilliers Cedex, France
2 Centre d’Analyse et de Mathématique Sociales, École des hautes études en sciences sociales, 54, bld Raspail, 75006 Paris, France
Mathematics 2026, 14(7), 1121; https://doi.org/10.3390/math14071121
Submission received: 23 February 2026 / Revised: 25 March 2026 / Accepted: 26 March 2026 / Published: 27 March 2026

Abstract

In a network invaded by a mutant according to the birth–death updating rule and uniform initialization, in order to minimize the amplifying effect of any directed network, the adjacency matrix is modified at each time step up to a given time horizon, subject to resource constraints. The fixation probability of an invasive mutant is deduced from the first eigenvector of the resulting modified Markov transition matrix. Large-scale minimization is solved numerically for a representative sample of directed graphs of dimensions 6 to 8. The effects of the determinants of the optimal reduction of the fixation probability are estimated using a Heckman selection model. The number of neighbors, the heterogeneity of the incoming edge weights, and the homogeneity of the outgoing edge weights of the initial network increase the likelihood that the graphs are amendable. Among the amended networks, the reduction in the fixation probability is greater when the outgoing edge weights of the initial network are heterogeneous, those of its incoming edges are homogeneous, and the sequence of modifications increases the variance of the outgoing edge weights and decreases that of the incoming edge weights, thereby creating a trade-off, which is estimated numerically.

1. Introduction

Evolutionary dynamics deals with networks where each node represents an individual that can be either a resident or a mutant. These networks can be complete—in which case the process is a Moran process [1]—or, more generally, change according to a birth–death updating rule and uniform initialization process: at each discrete time point, each individual at a given node can implant its offspring in an adjacent node. The probability of implantation is proportional to the fitness of the parent individual and the weight of the edge connecting the two nodes. Depending on the structure of the network, certain configurations facilitate the spread of the mutant, accelerating fixation, while others slow it down or prevent the spread of the mutant [2,3,4]. The probability of fixation is the probability that a single mutant initially placed in one of the nodes of the network will asymptotically establish itself in all nodes [2]. The probabilities of mutant fixation depend on the temporal trajectories of the networks [5].
How can the amplifying power of a network be reduced? Looking for a way to reduce the amplification power of a network, Bhaumik and Masuda [6] proposed oscillating the network with another fixed network, at intervals of equal duration or not. Both networks can be amplifiers. Even assuming constant mutant fitness over time, they obtained less amplification than with either of the two networks taken over the entire period. Alcalde Cuesta et al. [7] modified the edge weights by removing all non-trivial loops, allowing the expected time to fixation or extinction to be shortened. Alcalde Cuesta et al. [8,9] gave examples of suppressor graphs, noting that these graphs are “hard to isolate or identify”. This is precisely the purpose of this article: to treat any network with a given number of neighbors in order to make it less amplifying, if not suppressive.
Moreover, Li et al. [10] modified a given network C = C(0) into a given network C(T) after T time steps with the addition of sufficient “energy” E(t) at time t from 0 to T. The dynamic is C(t+1) = W_1 C(t) + W_2 E(t), and the energy Σ_t E^⊤(t) E(t), where ⊤ here denotes transposition, is to be minimized. These authors did not address the issue of reducing the amplifying effect of networks, which is the topic of this article.
Research question: Instead of choosing a network that oscillates between two fixed networks, can a given directed graph invaded by a mutant with fixed relative fitness s—which measures the reproductive success of an individual relative to others in the population—have its edge weights modified under resource constraints in T time steps, so as to make the resulting network as weakly amplifying as possible?
The successive modifications of the initial network have no target network but are required to provide the smallest possible fixation probability at horizon T. Optimization over the perturbation matrices first characterizes whether the initial graph is likely to be amendable. Second, if it is, the optimization modifies the initial matrix under resource constraints so as to reduce the amplifying effect as much as possible. The mathematical point of interest is to interweave stochastic optimization within a network with eigenvector solving, and to deal with a question relevant to evolutionary biology.
Network control is relevant to cancer therapies, for example. Knowledge of the effects of individual drugs and of the interactions between multiple drugs has made it possible to predict the effect of drug synergies on cell-to-cell signaling [11]. Pharmacologically, it is possible to manipulate protein interactions (which constitute the interactome) [12] and influence the plasticity of intercellular networks [13]. Li et al. [10] provide further examples of network control in oncology, in ant behavior, and in networks amongst friends.
After recalling the calculation of the fixation probability (Section 2.1), the initial networks, the resource constraint, the number of nodes, the relative fitness of the mutant, and the time horizon are varied to estimate the difference between the fixation probabilities associated with the initial networks and their modified versions (Section 2.2). In addition to the parameters, these quantities will be related to the variation in the mean variances of the incoming or outgoing edge weights.
The main finding is that the amplification effect of certain initial graphs can be reduced. The probability that a graph is amendable is higher when each node has a larger number of neighbors, when the mean variance of the weights of incoming edges is high, and when that of the weights of outgoing edges is low. Among these amendable graphs, the probability of fixation is reduced as the time horizon increases, the nodes have fewer neighbors, the resource and the relative fitness are higher, and the mean variance of incoming edge weights is low and that of outgoing edge weights is high. Moreover, the modification leads to more heterogeneous outgoing edge weights and to more homogeneous incoming edge weights. This optimal trade-off on the edge weight distributions is estimated numerically.

2. Materials and Methods

2.1. The Fixation Probability for a Static Network as a Fixed Point of Recursion

The N vertices of a network associated with an adjacency matrix C are occupied either by a wild (resident) individual with fitness 1 or by a mutant with fitness s. A state of the network is described by the N-dimensional vector b = (b_1, b_2, …, b_N), with b_i = 1 if vertex i is occupied by a mutant type and b_i = 0 if i is occupied by a resident type. The set B of all states b has cardinality 2^N. A bijection h assigns a unique integer from {1, 2, …, 2^N} to each state b ∈ B and conversely. An individual placed at a vertex i reproduces with a probability proportional to its fitness (s if it is a mutant, 1 if not) [3]. Its offspring replaces the occupant of vertex i′ with a probability proportional to the weight of the directed edge (i, i′). The process continues until all nodes are occupied by the same type, which is then said to be “fixated”. This recursion is described by a 2^N × 2^N Markov transition matrix θ(C), whose entries are the probabilities that the state of the network changes from one value to another in one time step [6].
A state b with m mutants, with b_i = 1 for i ∈ {σ(1), …, σ(m)} and b_i = 0 for i ∈ {σ(m+1), …, σ(N)}, where σ is a permutation of {1, …, N}, is changed into a state b′ with m+1 mutants, such that b′_i = 1 for i ∈ {σ(1), …, σ(m), σ(m+1)} and b′_i = 0 for i ∈ {σ(m+2), …, σ(N)}. The entry of the Markov transition matrix θ(C) from b to b′ is

$$\theta_{h(b),h(b')} = \frac{s}{sm+N-m}\,\sum_{m'=1}^{m}\frac{C_{\sigma(m'),\sigma(m+1)}}{\sum_{j=1}^{N} C_{\sigma(m'),j}}. \tag{1}$$
Besides, the entry of the Markov transition matrix θ(C) from b to the state b″ having m−1 mutants, such that b″_i = 1 for i ∈ {σ(1), …, σ(m̃−1), σ(m̃+1), …, σ(m)} and b″_i = 0 for i ∈ {σ(m̃), σ(m+1), …, σ(N)}, where m̃ ∈ {1, …, m}, is

$$\theta_{h(b),h(b'')} = \frac{1}{sm+N-m}\,\sum_{m'=m+1}^{N}\frac{C_{\sigma(m'),\sigma(\tilde m)}}{\sum_{j=1}^{N} C_{\sigma(m'),j}}. \tag{2}$$
The probability that b is unchanged is
$$\theta_{h(b),h(b)} = 1 - \frac{s}{sm+N-m}\sum_{m'=1}^{m}\frac{C_{\sigma(m'),\sigma(m+1)}}{\sum_{j=1}^{N} C_{\sigma(m'),j}} - \frac{1}{sm+N-m}\sum_{m'=m+1}^{N}\frac{C_{\sigma(m'),\sigma(\tilde m)}}{\sum_{j=1}^{N} C_{\sigma(m'),j}}. \tag{3}$$
The fixation probability of a mutant initially occupying state b is the h(b)th coordinate p^(f)(C)_{h(b)} of the eigenvector p^(f)(C) = (p^(f)(C)_1, …, p^(f)(C)_{2^N})^⊤, where ⊤ here denotes transposition, of θ(C) associated with the eigenvalue 1 [2,14]:

$$p^{(f)}(C) = \theta(C)\, p^{(f)}(C). \tag{4}$$
As p^(f)(C)_{h(0,…,0)} = 0—because if the mutant is absent from all nodes, it has no chance of becoming fixated—and p^(f)(C)_{h(1,…,1)} = 1—because if the mutant already occupies all nodes, it is certainly fixated—solving Equation (4) amounts to solving 2^N − 2 equations.
The fixation probability p ¯ ( f ) ( C ) is the mean over all N vertices of the coordinates of p ( f ) ( C ) for which initially the mutant occupies a single vertex [6]. The subset of the states associated with these coordinates is denoted B ( 1 ) , so that the fixation probability is
$$\bar p^{(f)}(C) := \frac{1}{N}\sum_{b\in B^{(1)}} p^{(f)}(C)_{h(b)}. \tag{5}$$
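For small N, the recursion (1)–(4) and the average (5) can be reproduced by brute-force enumeration of the 2^N states. The following is a minimal Python sketch (the article’s own code is in Ratfor calling IMSL routines); the bitmask encoding of states and the check against the Moran closed form are illustrative choices, not taken from the article.

```python
import numpy as np

def transition_matrix(C, s):
    """Build the 2^N x 2^N transition matrix theta(C) of Equations (1)-(3).

    A state b is encoded as a bitmask: bit i = 1 iff vertex i holds a mutant.
    """
    N = C.shape[0]
    W = C / C.sum(axis=1, keepdims=True)   # row-normalized edge weights
    S = 1 << N
    theta = np.zeros((S, S))
    for b in range(S):
        m = bin(b).count("1")              # number of mutants in state b
        F = s * m + (N - m)                # total fitness in state b
        for i in range(N):
            fit = s if (b >> i) & 1 else 1.0
            for j in range(N):
                # the offspring of i replaces the occupant of j
                b2 = (b | (1 << j)) if (b >> i) & 1 else (b & ~(1 << j))
                theta[b, b2] += (fit / F) * W[i, j]
    return theta

def mean_fixation_probability(C, s):
    """Solve p = theta(C) p with the boundary values p(empty) = 0 and
    p(full) = 1, i.e. the 2^N - 2 interior equations, then average over
    the N single-mutant initial states (Equation (5))."""
    N = C.shape[0]
    S = 1 << N
    theta = transition_matrix(C, s)
    interior = list(range(1, S - 1))
    A = np.eye(len(interior)) - theta[np.ix_(interior, interior)]
    rhs = theta[np.ix_(interior, [S - 1])].ravel()  # flow into the all-mutant state
    p = np.zeros(S)
    p[S - 1] = 1.0
    p[interior] = np.linalg.solve(A, rhs)
    return np.mean([p[1 << i] for i in range(N)])
```

On the complete graph without self-loops, the process is the Moran process, so the output can be checked against the closed form (1 − 1/s)/(1 − 1/s^N).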

2.2. Controlling for the Least Possible Amplification

The adjacency matrix C(t), with C(0) = C, is augmented with a matrix U_t to C(t) = C(t−1) + U_t at time t = 1, …, T. U_t is denoted with the subscript t, instead of U(t), to emphasize that U_t is a control. Each U_t = (U_{t,i,j})_{i,j=1,…,N} satisfies the resource constraint Σ_{i,j=1}^N |U_{t,i,j}| ≤ u, where u is a small enough positive constant.
The variation in the fixation probability after adding the matrices U t , t = 1 , , T , is
$$\Delta\bar p^{(f)}(C(T)) := \bar p^{(f)}(C(T)) - \bar p^{(f)}(C) = \frac{1}{N}\sum_{b\in B^{(1)}}\left[p^{(f)}_{h(b)}\Big(C+\sum_{t=1}^{T}U_t\Big) - p^{(f)}_{h(b)}(C)\right]. \tag{6}$$
The minimization program of the fixation probability after T time steps is
$$\min_{U(\cdot)}\ \Delta\bar p^{(f)}(C(T)) \tag{7}$$
under
$$\begin{cases} C(t) = C(t-1) + U_t;\quad C(0) = C,\\[2pt] \displaystyle\sum_{i,j=1}^{N} |U_{t,i,j}| \le u,\quad t = 1,\dots,T,\\[2pt] p^{(f)}(C(t)) = \theta(C(t))\,p^{(f)}(C(t)),\\[2pt] \displaystyle\bar p^{(f)}(C(t)) = \frac{1}{N}\sum_{b\in B^{(1)}} p^{(f)}_{h(b)}(C(t)),\quad t = 0,\dots,T. \end{cases} \tag{8}$$
The expression of Δp̄^(f)(C(T)) provides no analytical formula for the optimal (U_t)_{t=1,…,T}, owing to the non-linear entanglement of the successive U_t in C(T), each row of which is re-scaled, then in the transition matrix θ(C(T)), then in its first eigenvector p^(f)(C(T)), then in the expression of the fixation probability p̄^(f)(C(T)), which involves only the components of B^(1). Numerical minimization is required. The computer code is written in Ratfor, calling IMSL routines to solve the system of equations. It uses a stochastic optimization algorithm, presented in Appendix A and adapted from [15]. The calculation with a powerful computer (two 64-bit “AMD EPYC 7402” physical processors (maximum speed: 3 GHz), for a total of 96 logical processors) took more than six months. The article therefore fills a gap for N ≤ 8, but the solution is computational and the analysis is necessarily statistical.
As with all network problems, network dimension is an issue. For networks of dimension 8 with 4 neighbors at time horizon T = 4, the minimization involves 1 + TN(ν+1) = 161 controls (1 control to parameterize the upper bound of max_{t=1,…,T} Σ_{i,j=1}^N |U_{t,i,j}|, and the TN(ν+1) controls U_{t,i,j}, t = 1,…,T, i,j = 1,…,N, for the ν closest neighbors plus the reflexive loop). For larger networks, parallelization and distributed architectures can handle very large dimensions, particularly with distributed stochastic methods [16], which compute the gradient on subsets of data on different nodes and then aggregate the results. With this method, minimization can in principle involve millions of variables, and the network can have millions of nodes. The purpose of this article, however, is not to conquer large dimensions (up to what value?), but to demonstrate the potential of a program designed to minimize the fixation probability. Doing so up to dimension 8 proves the point, and analyzing results for dimensions 6 through 8 is sufficient to identify econometric patterns.
For each value of (T, N, ν, s, u), with time horizon T = 1, 2, 3, 4, dimension N = 6, 7, 8, number of closest neighbors ν = 2, 3, 4, relative fitness s = 0.7, 0.9, 1.1, 1.3, 1.5, and resource constraint u = 0.5, 1.0, 1.5, 2.0, at least 30 initial matrices C = C(0) associated with weighted directed graphs are simulated. Robustness is verified using bootstrapping. The mean obtained by direct calculation fell outside the bootstrap confidence interval for ν = 4 and T = 1, 2 for each dimension N. Ten more draws were added for each of these (N, ν, s, u), ensuring that the associated bootstrap confidence intervals included the means obtained by direct calculation. The weights of the edges between the ν closest neighbors were drawn by uniform sampling between 0 and 1, so that the addition of U_t was calibrated relative to C. The edges between the ν closest neighbors are weighted in each direction; all other edge weights are set to zero.
The minimization program {(7), (8)} is solved numerically by stochastic optimization on all entries U_{t,i,j}, i,j = 1,…,N, of the matrices U_t, t = 1,…,T. At each step, it requires calculating θ(C + Σ_{t=1}^T U_t), solving p^(f) = θ(C + Σ_{t=1}^T U_t) p^(f) for p^(f), computing Δp̄^(f)(C(T)), and repeating this process within the stochastic optimization framework.
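The stochastic search can be sketched as a simulated-annealing loop over the stacked control matrices. In this sketch, `objective` is a hypothetical placeholder standing in for the costly map (U_1, …, U_T) ↦ Δp̄^(f)(C(T)); the perturbation size, cooling schedule, and the rescaling used to enforce the resource constraint are illustrative assumptions, not the article’s Ratfor/IMSL implementation (which uses the annealed simplex of Appendix A).

```python
import numpy as np

rng = np.random.default_rng(0)

def project_resource(U, u):
    """Rescale U so that sum |U_{t,i,j}| <= u (the resource constraint in (8))."""
    total = np.abs(U).sum()
    return U if total <= u else U * (u / total)

def anneal(objective, T_steps, N, u, n_iter=2000, temp0=1.0, cooling=0.995):
    """Simulated-annealing minimization over the T stacked control matrices.

    objective: maps an array of shape (T_steps, N, N) to a scalar cost
    (in the article, the variation of the fixation probability)."""
    U = np.stack([project_resource(rng.uniform(-u, u, (N, N)), u)
                  for _ in range(T_steps)])
    cost = objective(U)
    temp = temp0
    for _ in range(n_iter):
        cand = U.copy()
        t = rng.integers(T_steps)          # perturb one control matrix
        cand[t] = project_resource(cand[t] + 0.1 * rng.standard_normal((N, N)), u)
        c = objective(cand)
        # accept improvements always, deteriorations with Boltzmann probability
        if c < cost or rng.random() < np.exp(-(c - cost) / temp):
            U, cost = cand, c
        temp *= cooling                    # geometric cooling
    return U, cost
```

With a toy quadratic objective, every visited candidate satisfies the resource constraint, and the returned cost equals the objective at the returned controls.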

3. Results

Figure 1 shows an example of such an optimal modification. In this example, p ¯ ( f ) ( C ) decreases at each time step, but this is not compulsory.
Figure 2 presents the means of Δ p ¯ ( f ) ( C ( T ) ) computed on samples of 30 or 40 matrices (to obtain the means by direct calculation included in their bootstrap confidence intervals) drawn at random by value of ( N , ν , s , u ) for T = 4 time steps taken as an example (similar figures are obtained for T = 1 ,   2 ,   3 ).
At horizon T, for certain matrices C, the additions of U_t fail numerically to reduce the fixation probability. The percentages of amendable matrices in Table 1 for T = 4 increase with the resource u, the relative fitness s, and the number ν of neighbors. The percentages for other values of T do not differ much according to the value of (N, ν, u): the overall percentage is 56% for T = 1, 56% for T = 2, 57% for T = 3, and 58% for T = 4.
Estimation: As previously mentioned, for some simulated matrices, the fixation probability is not reduced after minimization in T steps. The question arises as to whether to account for this fact, which requires using Heckman’s two-stage selection regression [17], or to estimate the determinants of Δ p ¯ ( f ) ( C ( T ) ) in a single regression, regardless of whether this criterion is zero or not. The lack of improvement in Δ p ¯ ( f ) ( C ( T ) ) after T steps may indicate a lack of linearity in the relationship between the initial matrices and the final difference in fixation probabilities. Matrices for which the criterion does not improve may have a particular structure: the first random draw would yield the optimal adjacency matrix. This possibility is supported by the fact that the difference Δ p ¯ ( f ) ( C ( T ) ) , when strictly negative, is small, as Figure 2 shows. This structural difference between amendable and non-amendable networks is accounted for by Heckman’s two-stage regression. The first stage characterizes this structural difference, while the second stage applies only to the amended matrices. Equation (9) below will show that amendable matrices have a different structure from matrices that are not amendable. Therefore, Heckman’s procedure is more consistent than mixing all the matrices into a single regression.
The estimation system consists of the probit model of the probability that Δ p ¯ ( f ) ( C ( T ) ) < 0 , which provides the inverse Mills ratio (IMR), to be included as an explanatory variable in the regression of Δ p ¯ ( f ) ( C ( T ) ) for the draws for which Δ p ¯ ( f ) ( C ( T ) ) < 0 .
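For reference, the IMR appended to the second stage is λ(z) = φ(z)/Φ(z), evaluated at the fitted probit index z. A one-line sketch using SciPy (the function name is illustrative):

```python
from scipy.stats import norm

def inverse_mills(z):
    """Inverse Mills ratio lambda(z) = phi(z) / Phi(z), where z is the fitted
    probit index; appended as a regressor in Heckman's second stage."""
    return norm.pdf(z) / norm.cdf(z)
```

λ is positive and decreasing; λ(0) = φ(0)/Φ(0) = 2/√(2π) ≈ 0.798.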
In answer to a reviewer’s question, the probit model is based on the cumulative distribution function of the normal distribution. Its sigmoid curve, which allows for a gradual transition between 0 and 1, reflects a nonlinear relationship between the explanatory variables and the probability that the binary variable equals 1 ( Δ p ¯ ( f ) ( C ( T ) ) < 0 ) versus 0 ( Δ p ¯ ( f ) ( C ( T ) ) = 0 ). The use of normality in the probit response function, as in any binary regression model, does not mean that the residuals must be normally distributed. The reason is that the residuals from binary regression are of the deviance or Pearson type. They measure the difference between the observed value (0 or 1) and the predicted probability. These residuals are discrete or binary, or fit a binomial distribution, which does not correspond to a continuous normal distribution. In logistic or probit regression, more important than the normality of the residuals are the significance of the explanatory variables and the consistency of the estimates, which is the case here for Equation (9) below. The distribution of the residuals may be skewed or have heavy tails [18,19,20].
The probit model includes the explanatory variables T, N, ν, s, u, V̄_out(C), V̄_in(C), where V̄_out(C) and V̄_in(C) are the mean variances of the outgoing and incoming edge weights of the initial adjacency matrix C. Formally,

$$\bar V_{\mathrm{out}}(C(t)) := \frac{1}{N}\sum_{i=1}^{N}\frac{1}{N-1}\sum_{j=1}^{N}\Big(C_{i,j}(t) - \frac{1}{N}\sum_{k=1}^{N} C_{i,k}(t)\Big)^{2},$$

$$\bar V_{\mathrm{in}}(C(t)) := \frac{1}{N}\sum_{j=1}^{N}\frac{1}{N-1}\sum_{i=1}^{N}\Big(C_{i,j}(t) - \frac{1}{N}\sum_{k=1}^{N} C_{k,j}(t)\Big)^{2}.$$

Because each row of C(t), t = 0, …, T, is rescaled to build θ(C), as C_{i,j}(t)/Σ_{k=1}^N C_{i,k}(t) is a probability, the mean out-degree (the out-degree at a vertex is the sum of the weights of edges leaving the vertex) is constant (equal to 1). This is also the case for the mean in-degree (the in-degree at a vertex is the sum of the weights of edges arriving at this vertex).
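These two mean variances follow directly from the formulas above; a small numpy sketch with the same 1/(N−1) sample-variance normalization:

```python
import numpy as np

def mean_variances(C):
    """Mean variance of outgoing (per-row) and incoming (per-column)
    edge weights of an N x N adjacency matrix C."""
    N = C.shape[0]
    v_out = ((C - C.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / (N - 1)
    v_in = ((C - C.mean(axis=0, keepdims=True)) ** 2).sum(axis=0) / (N - 1)
    return v_out.mean(), v_in.mean()
```

A constant matrix gives (0, 0); the 2 × 2 identity gives (0.5, 0.5) in both directions.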
The possible explanatory variable p ¯ ( f ) ( C ) on the entire set of simulated data turns out to be collinear with s and is therefore not included in the probit of Δ p ¯ ( f ) ( C ( T ) ) < 0 . For the regression of Δ p ¯ ( f ) ( C ( T ) ) conditioned on Δ p ¯ ( f ) ( C ( T ) ) < 0 , the collinearity of p ¯ ( f ) ( C ) with s is sufficiently reduced such that p ¯ ( f ) ( C ) can be included in the explanatory variables, with quadratic and cross-effects with s. The presence of p ¯ ( f ) ( C ) among the explanatory variables reflects the fluidity properties of the initial network C. Adding the matrices U t is likely to increase or decrease the weights of the edges, which is reflected in the variations Δ V ¯ out ( C ( T ) ) : = V ¯ out ( C ( T ) ) V ¯ out ( C ( 0 ) ) and Δ V ¯ in ( C ( T ) ) : = V ¯ in ( C ( T ) ) V ¯ in ( C ( 0 ) ) . These two variables themselves depend on T , N , ν , s , u , p ¯ ( f ) ( C ) ; hence a system of three regressions, each incorporating the IMR and estimated by taking into account the variance–covariance matrix between the residuals of these three equations.
Estimation result: On normalized variables, Heckman’s two-step system is estimated on 21,600 observations for the probit as
$$\mathrm{Probit}\big(\Delta\bar p^{(f)}(C(T)) < 0\big) = \underset{(0.06,\;\cdot)}{0.43} - \underset{(0.03,\;-0.19)}{0.48^{*}}\,T - \underset{(0.03,\;-0.14)}{0.34^{*}}\,N + \underset{(0.05,\;0.46)}{1.14^{*}}\,\nu + \underset{(0.03,\;0.07)}{0.18^{*}}\,s + \underset{(0.02,\;0.04)}{0.10^{*}}\,u - \underset{(0.12,\;-0.67)}{1.70^{*}}\,\bar V_{\mathrm{out}}(C) + \underset{(0.18,\;1.89)}{4.73^{*}}\,\bar V_{\mathrm{in}}(C) + \epsilon_{0,j} \tag{9}$$
for the jth draw and where the star indicates significance at the 5% threshold. The first number under the coefficient is the associated standard deviation. The perturbations ϵ 0 , j are homoscedastic of zero expectation. The model correctly classifies 63% of the predictions, which is not low, but not that high either. The point is that several explanatory variables have significant coefficients, so taking the selection effect into account should improve the estimation of the associated three regressions below. The average marginal effects (AME) presented under the coefficients, in second position, highlight the major conflicting influences (AME = −0.67 and 1.89) of the two mean variances of the weights of the outgoing and incoming edges of the initial adjacency matrix C, and, in third place, overshadowing the effects of the other parameters, the role played by the number ν of neighbors of each node (AME = 0.46). The probit model was run successively without V ¯ out ( C ) and without V ¯ in ( C ) : The coefficients varied only slightly compared to the results of the probit model that included these two variables. This proves that the multicollinearity between V ¯ out ( C ) and V ¯ in ( C ) is sufficiently low to validate the probit model including these two variables.
Out of the 21,600 simulated draws, 12,308 have Δ p ¯ ( f ) ( C ( T ) ) < 0 . For the jth draw of them, estimation on normalized variables is
$$\begin{aligned}
\Delta\bar p^{(f)}(C(T))_j ={}& -\underset{(0.04)}{0.94^{*}} - \underset{(0.01)}{0.04^{*}}\,T - \underset{(0.01)}{0.02^{*}}\,N + \underset{(0.02)}{0.08^{*}}\,\nu - \underset{(0.003)}{0.02^{*}}\,u + \underset{(0.02)}{0.41^{*}}\,s - \underset{(0.03)}{0.57^{*}}\,\bar p^{(f)}(C)\\
&- \underset{(0.09)}{0.76^{*}}\,s\times\bar p^{(f)}(C) - \underset{(0.03)}{0.13^{*}}\,s^{2} + \underset{(0.07)}{1.06^{*}}\,\bar p^{(f)}(C)^{2} - \underset{(0.04)}{0.32^{*}}\,\bar V_{\mathrm{out}}(C) + \underset{(0.08)}{0.45^{*}}\,\bar V_{\mathrm{in}}(C)\\
&- \underset{(0.01)}{0.44^{*}}\,\Delta\bar V_{\mathrm{out}}(C(T)) + \underset{(0.01)}{0.17^{*}}\,\Delta\bar V_{\mathrm{in}}(C(T)) + \underset{(0.04)}{0.13^{*}}\,\mathrm{IMR} + \epsilon_{1,j}\\[4pt]
\Delta\bar V_{\mathrm{out}}(C(T))_j ={}& \underset{(0.01)}{0.35^{*}} + \underset{(0.01)}{0.11^{*}}\,T + \underset{(0.01)}{0.02^{*}}\,N - \underset{(0.01)}{0.15^{*}}\,\nu + \underset{(0.01)}{0.09^{*}}\,s + \underset{(0^{+})}{0.02^{*}}\,u - \underset{(0.01)}{0.11^{*}}\,\bar p^{(f)}(C)\\
&- \underset{(0.01)}{0.12^{*}}\,\bar V_{\mathrm{out}}(C) - \underset{(0.01)}{0.13^{*}}\,\mathrm{IMR} + \epsilon_{2,j}\\[4pt]
\Delta\bar V_{\mathrm{in}}(C(T))_j ={}& -\underset{(0.01)}{0.26^{*}} - \underset{(0.002)}{0.002}\,T - \underset{(0.002)}{0.007^{*}}\,N + \underset{(0.01)}{0.02^{*}}\,\nu - \underset{(0.01)}{0.01}\,s + \underset{(0.01)}{0.03^{*}}\,u + \underset{(0.01)}{0.01}\,\bar p^{(f)}(C)\\
&+ \underset{(0.01)}{0.07^{*}}\,\bar V_{\mathrm{in}}(C) + \underset{(0.01)}{0.04^{*}}\,\mathrm{IMR} + \epsilon_{3,j}
\end{aligned} \tag{10}$$
where the numbers in parentheses are standard deviations; the ϵ_{i,j}, i = 1, 2, 3, are homoscedastic with zero expectation; and the correlations are estimated as corr(ϵ_1, ϵ_2) = 0, corr(ϵ_1, ϵ_3) = 0, and corr(ϵ_2, ϵ_3) = 0.36. The coefficients of determination are R² = 0.40, 0.45, and 0.30, respectively. In a Heckman model with a nonlinear relationship and cross-effects, R² can often appear relatively low, even if the model is actually relevant and useful [21]. This stems from the nonlinear and inextricable dependence of the dependent variable on its predictors. An R² around 0.40 is acceptable and even good in this context of nonlinear relationships [21], and specifically in the Heckman model [22].

4. Discussion

The probit model of Equation (9) has shown that the graph modified by the (U_t)_{t=1,…,T} is more likely to be less amplifying with more neighbors (ν), which gives more possibilities, with more resources u, and with a higher relative fitness s. The more nodes the graph has (the higher N), the less the graph can be amended towards less amplification, but this effect is small (0.02). The time horizon T has a negative effect, because if the matrices are not modified at the first step t = 1, they are not modified at the subsequent steps t = 2, …, T.
On the subset of draws for which C manages to be amended towards less amplification, System (10) of simultaneous equations shows the determinants that reduce amplification (their coefficients are significant and negative) and those that increase amplification (their coefficients are significant and positive). System (10) thus shows that a longer time horizon T favors reduction (coefficient −0.04), as a longer time horizon allows more resources to be used to modify the graph. This is also the case for the dimension N of the graph (coefficient −0.02): the larger it is, the less likely the mutant becomes fixated, as it faces greater competition from more numerous residents. This is also the case for the relative fitness s and its cross-effect with the initial probability of fixation p̄^(f)(C), as the term 0.41 s − 0.57 p̄^(f)(C) − 0.76 s × p̄^(f)(C) − 0.13 s² + 1.06 p̄^(f)(C)² corresponds to a downward curve as a function of s for s > 0.2, regardless of p̄^(f)(C) ∈ [0, 1]. From this term, the decrease in Δp̄^(f)(C(T)) accelerates when p̄^(f)(C) increases. Therefore, from the probit model, an initial matrix C with a higher relative fitness s has a higher probability of being amended towards less amplification, and, from the first regression of (10), the expected reduction in amplification is all the greater.
This regression also shows that the fixation probability is reduced when the initial adjacency matrix has a high mean variance of outgoing edge weights (coefficient −0.32) and a small mean variance of incoming edge weights (0.45).
Moreover, an increase in the mean variance of the weights of the outgoing edges between C and C + t = 1 T U t favors a reduction in the amplification effect (coefficient −0.44). As the weights of the outgoing edges are normalized by row for the calculation of p ¯ ( f ) ( C ( t ) ) , the increase in the mean variance corresponds to greater heterogeneity of the outgoing edge weights. Then, after the modification sequence ( U t ) t = 1 , , T , certain weights are relatively weaker compared to the strongest links, which amounts to weakening the probability for the mutant to spread through these edges. This is consistent with the coefficient 0.08 of the number of neighbors, which thwarts the reduction in amplification.
The growth in the mean variance of the weights of the incoming edges also thwarts the reduction in the amplifying effect (coefficient 0.17). Limiting amplification requires distributing the risks of contamination among the incoming flows, thus tending toward uniform weighting of incoming edges, which, because of the correlation at 0.36 between Δ V ¯ out ( C ( T ) ) and Δ V ¯ in ( C ( T ) ) , runs up against the amplification-inhibiting role of V ¯ out ( C ( T ) ) . The regression of Δ V ¯ out ( T ) on Δ V ¯ in ( T ) gives a coefficient of 0.49 (SD = 0.01), which leaves an advantage for the effect of Δ V ¯ out ( T ) . The search for less amplification therefore involves increasing the heterogeneity of the weights of the outgoing edges, at the cost of increasing the homogeneity of incoming edge weights.
Finally, the significance of the IMR coefficient confirms that the bias arising from selection must be taken into account.
The variance–covariance matrix provides correlations close to 0 between the regression residuals of Δ p ¯ ( f ) ( C ( T ) ) and those of the regressions of the two other dependent variables, so that taking the variance–covariance matrix into account does not change the estimates in the regression of Δ p ¯ ( f ) ( C ( T ) ) taken individually, but this deserved to be tested.
Limitations: As is often the case with graph-related problems, a limitation is the size N of the graph. Parallelization and distributed architectures could handle larger networks. Also, rather than solving 2^N − 2 equations, the fixation probability could be computed by letting x(τ+1) = θ(C(t)) x(τ) converge for large τ and fixed t from an initial 2^N-vector x(0), which adds even more computing time. The aim here, however, is not to explore large graphs; it is simply to show that minimization is possible and that the largest reduction in the fixation probability is related to the properties of the graph. The time horizon is another limitation, as it lengthens the sequence of control matrices accordingly. Each adjacency matrix was simulated with a fixed number of neighbors. This has the advantage of quantifying the influence of this number by varying it and of working with a fixed set of parameters. However, modifying the edge weights in stages can be adapted to any given initial network, whether it was, for example, generated to be scale-free, random, or small-world.

5. Conclusions

The question was whether it was possible to amend a given directed network in order to reduce its power to amplify the spread of a mutant spreading according to the birth–death rule. The idea was to modify the weights of the edges of the graph associated with each step of a given horizon, subject to resource constraints. As amplification power is measured by the fixation probability associated with the adjacency matrix C of this graph, reducing this probability as much as possible led to a minimization program under resource constraints. This program cannot be solved analytically. A stochastic minimization in large dimension showed that, yes, reducing the fixation probability under resource constraints is feasible. This is the first main result. However, the computational cost is quite high for dimensions ranging from 6 to 8, because stochastic minimization requires repeated calculation of the transition matrices and their first eigenvectors in a simulated annealing scheme.
The second main result comes from the simulation of 21,600 random directed matrices, which helped clarify the influences of the parameters, the fixation probability of the initial adjacency matrix, the mean variances of the weights of outgoing and incoming edges of the initial graph, and the variations of these mean variances. The trade-off between these mean variances is estimated in the probit model by the coefficients −1.70 and 4.73, and in the first regression of System (10) by the coefficients −0.32 and 0.45 for the initial graph. It is backed up by the effects in the same opposite directions (−0.44 and 0.17), in the first regression of System (10), of the differences between the initial and final values of these mean variances. A promising research direction is the simultaneous minimization of V̄_in(C(T)) and maximization of V̄_out(C(T)) in order to examine how the resulting Pareto-optimal solutions [23] influence the fixation probability.
These results make it possible to estimate the probability of reducing the amplifying effect of any given adjacency matrix C and to stipulate which perturbation matrices ( U t ) t = 1 , , T to use in order to reduce the amplifying effect from C ( 0 ) to C ( T ) as much as possible, by exploiting the variability and distribution of connection weights.  

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Algorithm A1 Minimizing the difference of fixation probabilities by modifying an initial adjacency matrix C in T steps under resource constraints
1: Initialize the temperature for the simulated annealing;
2: Initialize the random distance matrix D representing the random distances between nodes;
3: For each node, select its ν closest neighbors in D;
4: Set the weights of the edges between each node and its ν neighbors between 0 and 1, and to 0 otherwise;
5: Repeat until $\theta(C + U_1 + \dots + U_T)$ is regular:
  • Randomly generate the weights of the non-zero edges of D between 0 and 1, which yields the adjacency matrix C;
  • Initialize the solution: one control equal to $\max_{t=1,\dots,T} \sum_{i,j=1}^{N} U_{t,i,j}$, positive and at most u, and the $TN(\nu+1)$ controls $U_{t,i,j}$, $t = 1, \dots, T$, $i, j = 1, \dots, N$, between 0 and 1 for the ν closest neighbors plus the reflexive loop;
  • Compute the matrix $C + U_1 + \dots + U_T$ and normalize it by row;
  • Generate the transition matrix $\theta(C + U_1 + \dots + U_T)$ from the row-normalized sum;
  • Compute the determinant of $\theta(C + U_1 + \dots + U_T)$;
6: Solve the system of $2^N - 2$ linear equations $p^{(f)}(C) = \theta(C)\, p^{(f)}(C)$;
7: Solve the system of $2^N - 2$ linear equations $p^{(f)}(C + U_1 + \dots + U_T) = \theta(C + U_1 + \dots + U_T)\, p^{(f)}(C + U_1 + \dots + U_T)$;
8: Compute the associated cost, equal to the difference $p^{(f)}(C + U_1 + \dots + U_T) - p^{(f)}(C)$ between the fixation probabilities;
9: Optimize with the simulated-annealing simplex. Repeat until the cost no longer improves, for a fractional convergence tolerance of $10^{-7}$:
  • Construct the simplex;
  • Evaluate the solutions associated with the vertices of the simplex;
  • Modify the simplex by reflection, reflection and expansion, contraction, or multiple contraction, which yields updated $\sum_{i,j=1}^{N} U_{ij}$ and $U_1, \dots, U_T$;
  • At each step, update $C + U_1 + \dots + U_T$ and normalize it by row;
  • Repeat the update until the determinant is non-zero;
  • Solve the system of $2^N - 2$ linear equations $p^{(f)}(C + U_1 + \dots + U_T) = \theta(C + U_1 + \dots + U_T)\, p^{(f)}(C + U_1 + \dots + U_T)$;
  • Compute the associated cost $p^{(f)}(C + U_1 + \dots + U_T) - p^{(f)}(C)$;
  • Accept the candidate solution $(U_t)_{t=1,\dots,T}$ with a probability that is a function of the temperature;
  • Slowly lower the temperature and continue the simulated annealing;
10: Restart p = 30 times, perturbing the candidate vector of controls $U_{t,i,j}$. In practice, there was no improvement after 10 restarts.
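The acceptance rule of steps 9–10 can be sketched as a generic Metropolis loop. This is a simplified Python illustration, not the author's annealed-simplex code; the cooling factor, step proposal, and toy quadratic cost are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal(cost, x0, step, t0=1.0, cooling=0.95, iters=2000):
    """Metropolis acceptance with a slowly lowered temperature:
    a worse candidate is still kept with probability exp(-increase / t)."""
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        cand = step(x)
        cc = cost(cand)
        if cc < c or rng.random() < np.exp((c - cc) / t):
            x, c = cand, cc                # accept the candidate
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling                       # slowly lower the temperature
    return best_x, best_c

# Toy usage: minimize a quadratic centered at (1, 1, 1) from the origin.
x, c = anneal(lambda v: float(((v - 1.0) ** 2).sum()),
              np.zeros(3),
              lambda v: v + rng.normal(scale=0.1, size=v.shape))
```

At high temperature the loop explores freely; as t decays it degenerates into a greedy local search, which is why the algorithm restarts from perturbed controls.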

References

  1. Moran, P.A.P. Random processes in genetics. Proc. Camb. Philos. Soc. 1958, 54, 60–71.
  2. Lieberman, E.; Hauert, C.; Nowak, M.A. Evolutionary dynamics on graphs. Nature 2005, 433, 312–316.
  3. Nowak, M.A. Evolutionary Dynamics: Exploring the Equations of Life; Belknap Press of Harvard University Press: Cambridge, MA, USA, 2006.
  4. Pavlogiannis, A.; Tkadlec, J.; Chatterjee, K.; Nowak, M.A. Construction of arbitrarily strong amplifiers of natural selection using evolutionary graph theory. Commun. Biol. 2018, 1, 71.
  5. Cardillo, A.; Petri, G.; Nicosia, V.; Sinatra, R.; Gómez-Gardeñes, J.; Latora, V. Evolutionary dynamics of time-resolved social interactions. Phys. Rev. E 2014, 90, 052825.
  6. Bhaumik, J.; Masuda, N. Fixation probability in evolutionary dynamics on switching temporal networks. J. Math. Biol. 2023, 87, 64.
  7. Alcalde Cuesta, F.; González Sequeiros, P.; Lozano Rojo, Á. Fast and asymptotic computation of the fixation probability for Moran processes on graphs. Biosystems 2015, 129, 25–35.
  8. Alcalde Cuesta, F.; González Sequeiros, P.; Lozano Rojo, Á. Suppressors of selection. PLoS ONE 2017, 12, e0180549.
  9. Alcalde Cuesta, F.; González Sequeiros, P.; Lozano Rojo, Á. Evolutionary regime transitions in structured populations. PLoS ONE 2018, 13, e0200670.
  10. Li, A.; Cornelius, S.P.; Liu, Y.Y.; Wang, L.; Barabási, A.L. The fundamental advantages of temporal networks. Science 2017, 358, 1042–1046.
  11. Li, H.; Li, T.; Quang, D.; Guan, Y. Network propagation predicts drug synergy in cancers. Cancer Res. 2018, 78, 5446–5457.
  12. Joshi, S.; Gomes, E.D.; Wang, T.; Corben, A.; Taldone, T.; Gandu, S.; Xu, C.; Sharma, S.; Buddaseth, S.; Yan, P.; et al. Pharmacologically controlling protein-protein interactions through epichaperomes for therapeutic vulnerability in cancer. Commun. Biol. 2021, 4, 1333.
  13. Kerestély, M.; Narozsny, I.; Szarka, L.; Veres, D.V.; Csermely, P.; Keresztes, D. Modulation of network plasticity opens novel therapeutic possibilities in cancer, diabetes, and neurodegeneration. Adv. Sci. 2026, 13, e22532.
  14. Hindersin, L.; Müller, M.; Traulsen, A.; Bauer, B. Exact numerical calculation of fixation probability and time on graphs. Biosystems 2016, 150, 87–91.
  15. Press, W.H.; Flannery, B.P.; Teukolsky, S.A.; Vetterling, W.T. Numerical Recipes in Fortran: The Art of Scientific Computing; Cambridge University Press: Cambridge, UK, 1992.
  16. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers; Foundations and Trends in Machine Learning; Now Publishers: Hanover, MA, USA, 2011; Volume 3, pp. 1–122.
  17. Heckman, J.J. Sample selection bias as a specification error. Econometrica 1979, 47, 153–161.
  18. Gelman, A.; Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models; Cambridge University Press: Cambridge, UK, 2007.
  19. Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: New York, NY, USA, 2013.
  20. Harrell, F.E. Regression Modeling Strategies; Springer: Berlin/Heidelberg, Germany, 2015.
  21. Greene, W.H. Econometric Analysis, 7th ed.; Prentice Hall: New York, NY, USA, 2012.
  22. Wooldridge, J.M. Econometric Analysis of Cross Section and Panel Data; The MIT Press: Cambridge, MA, USA, 2010.
  23. Dang, Q.; Liu, Q.; Yang, S.; He, X. Data-driven evolutionary algorithm based on inductive graph neural networks for multimodal multiobjective optimization. IEEE Trans. Evol. Comput. 2026, 30, 186–198.
Figure 1. Example of modifying a network to reduce its amplification as much as possible over the horizon T = 4, with ν = 3 neighbors, relative fitness s = 1.3, and resource u = 0.5.
Figure 2. Mean values over 30 or 40 (to ensure robustness) random draws per value of ( T , N , ν , s , u ) , shown here for N = 8 nodes, ν = 2 , 4 , and T = 4 time steps.
Table 1. Percentages of trials for which $\Delta \bar{p}^{(f)}(C(T)) < 0$ versus $\Delta \bar{p}^{(f)}(C(T)) = 0$. T = 4, 7200 observations. Rows give the resource u; columns give the dimension N of the graph and the number of neighbors ν.

 u    | N = 6: ν = 2, 3, 4 | N = 7: ν = 2, 3, 4 | N = 8: ν = 2, 3, 4

Relative fitness s = 0.7
 0.5  |     43  67  90     |     27  53  43     |     33  60  63
 1.0  |     43  50  73     |     50  67  67     |     43  60  60
 1.5  |     37  77  50     |     23  60  67     |     47  57  80
 2.0  |     60  63  70     |     50  57  67     |     43  60  53

Relative fitness s = 0.9
 0.5  |     43  63  73     |     37  57  53     |     23  50  67
 1.0  |     43  47  70     |     37  53  50     |     40  57  70
 1.5  |     43  63  47     |     30  63  53     |     63  70  87
 2.0  |     37  63  57     |     37  77  50     |     50  63  67

Relative fitness s = 1.1
 0.5  |     40  63  57     |     40  73  70     |     37  50  70
 1.0  |     47  70  60     |     43  70  70     |     47  70  67
 1.5  |     53  57  87     |     57  63  77     |     53  47  50
 2.0  |     50  57  83     |     43  63  70     |     43  60  67

Relative fitness s = 1.3
 0.5  |     37  70  60     |     50  63  83     |     30  53  60
 1.0  |     43  63  60     |     33  87  70     |     27  53  83
 1.5  |     53  67  83     |     43  53  77     |     53  63  63
 2.0  |     43  63  80     |     47  73  77     |     57  67  87

Relative fitness s = 1.5
 0.5  |     50  67  60     |     53  60  83     |     20  63  77
 1.0  |     50  57  60     |     43  67  80     |     43  60  63
 1.5  |     53  67  83     |     42  67  60     |     43  63  80
 2.0  |     47  67  87     |     50  57  87     |     23  50  77

Bonneuil, N. Making Networks Less Amplifiers Under Resource Constraints. Mathematics 2026, 14, 1121. https://doi.org/10.3390/math14071121
