In this study, we aimed to simulate the progression from NC to AD through network modeling. Here, we call the networks generated by our proposed model synthetic brain networks, and the networks constructed from the preprocessed images real brain networks. By comparing the properties of the synthetic networks with the real target brain networks, we can infer the connection mechanism underlying the topological alterations of AD.

#### 2.4.2. Connection Probabilities Models

To explore the mechanism that generates human brain network topologies, Vértes et al. proposed the Economical Clustering Model (ECM) [26]. Their experimental results show that both the **topological similarity** of common neighbors (CN) and the **Euclidean distance similarity** between two brain regions are essential factors in brain network modeling. The connection probability of ECM is defined as follows.

Connection probability of ECM: the more common neighbors (CN) that node u and node v have, the higher their topological similarity to one another [48]. The number of common neighbors between node u and node v can be described by

$${S}_{(u,v)}^{CN}=\left|\Gamma \left(u\right)\cap \Gamma \left(v\right)\right| \qquad (1)$$

where $\Gamma \left(u\right)$ and $\Gamma \left(v\right)$ represent the sets of neighboring nodes of node u and node v, respectively. Scaling ${S}_{(u,v)}^{CN}$ by the Euclidean distance similarity $E(u,v)$ between two brain regions, the ECM connection probability between node u and node v, i.e., the probability that u prefers to build a connection with v, is given by

$$P(u,v)={\left({S}_{(u,v)}^{CN}\right)}^{\gamma }\,{E(u,v)}^{\eta } \qquad (2)$$

In this expression, $P(u,v)$ is the connection probability of ECM, and ${S}_{(u,v)}^{CN}$ is the contribution of the topological similarity computed by CN. The other term, $E(u,v)$, is the contribution of the Euclidean distance similarity, which reflects the anatomical distance between nodes u and v. $\gamma $ and $\eta $ represent the parameters of topological similarity and anatomical distance penalization, respectively. Although ECM can construct networks similar to the real target brain networks on several essential topological features, two major problems have been pointed out.
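To make the ECM score concrete, the following Python sketch computes unnormalized connection scores for all node pairs. It assumes the power-law form of the ECM probability with the distance similarity taken as the inverse Euclidean distance; the function name and parameter defaults are illustrative and are not the fitted values from [26].

```python
import numpy as np

def ecm_probability(adj, coords, gamma=2.0, eta=1.0):
    """Unnormalized ECM connection scores for every node pair.

    adj    : (n, n) binary adjacency matrix
    coords : (n, 3) coordinates of the brain regions
    gamma  : topological-similarity exponent
    eta    : anatomical distance penalization exponent
    """
    adj = np.asarray(adj, dtype=float)
    coords = np.asarray(coords, dtype=float)
    cn = adj @ adj                                # cn[u, v] = number of common neighbours
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                # no self-connections
    sim = 1.0 / dist                              # assumed form of the distance similarity E(u, v)
    score = cn ** gamma * sim ** eta
    np.fill_diagonal(score, 0.0)
    return score
```

In a generative run, these scores would be normalized into probabilities and one edge added at a time, with the scores recomputed after each addition.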

One problem is that ECM calculates the topological similarity between node u and node v only according to the number of their common neighbors; it does not consider the individual characteristics of node u and node v. In fact, the individual topological characteristics of node u and node v are paramount for the formation of a connection between them. The other problem is that ECM gives each common neighbor the same score in the topological similarity. However, different common neighbors may have different topological characteristics, e.g., degree and clustering coefficient, and therefore make different contributions to establishing the connection. In this study, we propose a novel brain network model named MINM from the perspective of mutual information to solve the above issues. Mutual information provides a measure of the statistical relationship between two random variables; more specifically, it measures the reduction in uncertainty about one random variable given knowledge of another [32]. MINM adopts not only the individual features of node u and node v, but also the topology-based mutual information of their common neighbors. The topology-based mutual information is used to distinguish the different contributions of the common neighbors of the node pair $(u,v)$ when calculating the existence probability of a connection. Moreover, higher mutual information indicates a substantial reduction in the uncertainty of the formation of a connection, whereas lower mutual information means a smaller probability of the existence of a connection. The question we investigate is whether the mutual information between node u and node v can be helpful for the modeling of brain networks during the evolution from NC to AD. Next, we introduce the definitions of self-information and mutual information, and then give a precise definition of our proposed connection probability of MINM.

**Definition** **1.** (Self-information) The self-information of a random variable quantifies the information involved in the value of the variable. Given a random variable X with a probability distribution $P\left(X\right)$, where X takes on values in a set $\{{x}_{1},{x}_{2},\cdots ,{x}_{n}\}$, the self-information of the value ${x}_{i}$ can be expressed by

$$I\left({x}_{i}\right)=-\mathrm{log}\,P\left({x}_{i}\right) \qquad (3)$$

**Definition** **2.** (Mutual information) Mutual information is a quantity that measures the amount of information that can be acquired about one random variable by observing another. Formally, the mutual information of two random variables X and Y, whose joint distribution is given by $P(X,Y)$, can be defined as

$$I(X;Y)={\sum }_{x}{\sum }_{y}P(x,y)\,\mathrm{log}\frac{P(x,y)}{P\left(x\right)P\left(y\right)} \qquad (4)$$

where $P\left(X\right)$ and $P\left(Y\right)$ represent the marginal distributions of X and Y, respectively. $I(X;Y)=0$ if and only if X and Y are independent.
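The two definitions can be checked numerically with the short sketch below; the use of the natural logarithm is an assumption, since the base is not fixed here.

```python
import numpy as np

def self_information(p):
    """Self-information -log p(x) of an outcome with probability p > 0."""
    return -np.log(p)

def mutual_information(joint):
    """Mutual information I(X;Y) of a discrete joint distribution.

    joint : (|X|, |Y|) array whose entries P(x, y) sum to 1.
    """
    px = joint.sum(axis=1, keepdims=True)    # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)    # marginal P(Y)
    mask = joint > 0                         # convention: 0 log 0 = 0
    return float(np.sum(joint[mask] * np.log((joint / (px * py))[mask])))
```

For a product distribution $P(x,y)=P(x)P(y)$ the function returns 0, matching the statement that $I(X;Y)=0$ if and only if X and Y are independent.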

Connection probability of MINM: given a node pair $(u,v)$ whose common neighbors are ${\omega}_{u,v}=\Gamma \left(u\right)\cap \Gamma \left(v\right)$, the connection probability of MINM between them is given by

$$P(u,v)={\left({S}_{u,v}^{MI}\right)}^{\gamma }\,{E(u,v)}^{\eta } \qquad (5)$$

where ${S}_{u,v}^{MI}$ is the topological similarity between node u and node v, which represents the topology-based mutual information between them, and $E(u,v)$ is the Euclidean distance similarity, defined as in ECM. ${S}_{u,v}^{MI}$ can be given by

$${S}_{u,v}^{MI}=-I\left({L}_{u,v}^{1}|{\omega}_{u,v}\right) \qquad (6)$$

where $I\left({L}_{u,v}^{1}|{\omega}_{u,v}\right)$ denotes the conditional self-information of the event that there is a connection between node u and node v, given that their common neighbors are known to be ${\omega}_{u,v}$. From Equation (6), we can see that the smaller $I\left({L}_{u,v}^{1}|{\omega}_{u,v}\right)$ is, the higher the topological similarity ${S}_{u,v}^{MI}$ is, which indicates a larger probability for node u and node v to establish a connection between them.

Next, we give a comprehensive illustration of how to calculate ${S}_{u,v}^{MI}$. According to the definition of mutual information in Equation (4), the topological similarity ${S}_{u,v}^{MI}$ can be written as

$${S}_{u,v}^{MI}=-I\left({L}_{u,v}^{1}|{\omega}_{u,v}\right)=-I\left({L}_{u,v}^{1}\right)+I\left({L}_{u,v}^{1};{\omega}_{u,v}\right) \qquad (7)$$

where $I({L}_{u,v}^{1};{\omega}_{u,v})$ denotes the mutual information between the two events ${L}_{u,v}^{1}$ and ${\omega}_{u,v}$.

${L}_{u,v}^{1}$ describes the event that there is a link between node u and node v, and ${\omega}_{u,v}$ represents the event that the common neighbors between node u and node v are observed. $I({L}_{u,v}^{1};{\omega}_{u,v})$ evaluates the increase in the probability of the formation of a connection between node u and node v that comes from knowing their common neighbors. $I\left({L}_{u,v}^{1}\right)$ represents the self-information of the event that node u and node v are connected; according to the definition in Equation (3), it can be obtained by calculating the probability $p\left({L}_{u,v}^{1}\right)$. In this work, we assume that the common neighbors in ${\omega}_{u,v}$ are independent of each other, and thus

$$I\left({L}_{u,v}^{1};{\omega}_{u,v}\right)={\sum }_{z\in {\omega}_{u,v}}I\left({L}_{u,v}^{1};z\right) \qquad (8)$$

Given one common neighbor z in ${\omega}_{u,v}$, $I({L}_{u,v}^{1};z)$ can be estimated by calculating the average mutual information over all pairs of nodes that have node z as a common neighbor:

$$I\left({L}_{u,v}^{1};z\right)=\frac{1}{\left|\Gamma \left(z\right)\right|\left(\left|\Gamma \left(z\right)\right|-1\right)}{\sum }_{m,n\in \Gamma \left(z\right),\,m\ne n}\left[I\left({L}_{m,n}^{1}\right)-I\left({L}_{m,n}^{1}|z\right)\right] \qquad (9)$$

where $\Gamma \left(z\right)$ is the neighbor set of z; $I\left({L}_{m,n}^{1}\right)$ represents the self-information of the event that node m and node n are connected; and $I\left({L}_{m,n}^{1}|z\right)$ describes the conditional self-information of the event that node m and node n are connected, given that node z is known to be one of their common neighbors. Here, we obtain $I\left({L}_{m,n}^{1}\right)$ by calculating $p\left({L}_{m,n}^{1}\right)$, which is given by

$$p\left({L}_{m,n}^{1}\right)=1-p\left({L}_{m,n}^{0}\right)=1-\frac{{C}_{\left|E\right|-{k}_{m}}^{{k}_{n}}}{{C}_{\left|E\right|}^{{k}_{n}}} \qquad (10)$$

where ${L}_{m,n}^{0}$ represents the event that node m and node n are disconnected from each other, $\left|E\right|$ represents the number of existing connections in the network, and ${k}_{m}$ and ${k}_{n}$ denote the degrees of node m and node n, respectively.
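The degree-based prior above reduces to a ratio of binomial coefficients, which can be sketched directly with `math.comb` (writing ${C}_{a}^{b}$ as `comb(a, b)`); the function name is illustrative.

```python
from math import comb

def p_link(k_m, k_n, num_edges):
    """Prior probability p(L1_mn) that two nodes of degrees k_m, k_n are linked.

    p(L0_mn) = C(|E| - k_m, k_n) / C(|E|, k_n): the chance that none of the
    k_n edge endpoints of node n lands on one of the k_m endpoints of node m.
    """
    p_disconnected = comb(num_edges - k_m, k_n) / comb(num_edges, k_n)
    return 1.0 - p_disconnected
```

Note the limiting cases: a degree-0 node gives probability 0, and a node incident to every edge gives probability 1, as the formula requires.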

Then, we explain the calculation of $I\left({L}_{m,n}^{1}|z\right)$, which appears in Equation (9). According to the definition of self-information, to obtain $I\left({L}_{m,n}^{1}|z\right)$ we need to calculate $p\left({L}_{m,n}^{1}|z\right)$. In the current work, $p\left({L}_{m,n}^{1}|z\right)$ is set equal to the clustering coefficient of node z:

$$p\left({L}_{m,n}^{1}|z\right)=\frac{{N}_{\wedge z}}{{N}_{\wedge z}+{N}_{\vee z}} \qquad (11)$$

where ${N}_{\wedge z}$ and ${N}_{\vee z}$ represent the numbers of connected and disconnected node pairs in $\Gamma \left(z\right)$, respectively. Obviously, ${N}_{\wedge z}+{N}_{\vee z}=\left|\Gamma \left(z\right)\right|\left(\left|\Gamma \left(z\right)\right|-1\right)/2$, which is the total number of connections that could possibly exist within $\Gamma \left(z\right)$. Then, $p\left({L}_{m,n}^{1}|z\right)$ can be described by

$$p\left({L}_{m,n}^{1}|z\right)=\frac{2\left|{E}_{z}\right|}{\left|\Gamma \left(z\right)\right|\left(\left|\Gamma \left(z\right)\right|-1\right)} \qquad (12)$$

where $\Gamma \left(z\right)$ represents the neighbor set of node z and $\left|{E}_{z}\right|$ denotes the number of existing links among the nodes in $\Gamma \left(z\right)$. Therefore, $I({L}_{u,v}^{1};z)$ is given by

$$I\left({L}_{u,v}^{1};z\right)=\frac{1}{\left|\Gamma \left(z\right)\right|\left(\left|\Gamma \left(z\right)\right|-1\right)}{\sum }_{m,n\in \Gamma \left(z\right),\,m\ne n}\mathrm{log}\frac{p\left({L}_{m,n}^{1}|z\right)}{p\left({L}_{m,n}^{1}\right)} \qquad (13)$$

Finally, substituting Equations (8), (10) and (13) back into Equation (7), we obtain the topological similarity ${S}_{u,v}^{MI}$:

$${S}_{u,v}^{MI}=\mathrm{log}\,p\left({L}_{u,v}^{1}\right)+{\sum }_{z\in {\omega}_{u,v}}I\left({L}_{u,v}^{1};z\right) \qquad (14)$$
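The conditional probability used here is simply the clustering coefficient of node z, which can be read directly off an adjacency matrix; the sketch below follows the reconstruction above.

```python
import numpy as np

def p_link_given_z(adj, z):
    """p(L1_mn | z): the clustering coefficient of node z.

    Counts the links that exist among the neighbours of z and divides by the
    number of possible neighbour pairs |Gamma(z)| (|Gamma(z)| - 1) / 2.
    """
    adj = np.asarray(adj)
    neighbors = np.flatnonzero(adj[z])
    k = neighbors.size
    if k < 2:
        return 0.0                        # fewer than two neighbours: no pairs
    sub = adj[np.ix_(neighbors, neighbors)]
    links = sub.sum() / 2                 # |E_z|: links among the neighbours of z
    return 2.0 * links / (k * (k - 1))
```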

As we can see from Equations (7) and (14), both the mutual information ${\sum}_{z\in {\omega}_{u,v}}I({L}_{u,v}^{1};z)$ and the self-information $I\left({L}_{u,v}^{1}\right)$ are used to define the topological similarity between node u and node v. Thus, two node pairs $(u,v)$ that have the same number of common neighbors may obtain different topological similarities.
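Putting the pieces together, the similarity ${S}_{u,v}^{MI}$ can be sketched end to end. This follows the reconstruction above (natural logarithm, ordered-pair averaging over the neighbors of z) and is an illustration under those assumptions, not the authors' reference implementation.

```python
import numpy as np
from math import comb, log

def s_mi(adj, u, v):
    """Topological similarity S_uv^MI = log p(L1_uv) + sum_z I(L1_uv; z)."""
    adj = np.asarray(adj)
    deg = adj.sum(axis=1).astype(int)
    E = int(adj.sum() // 2)               # |E|: number of links in the network

    def p_link(m, n):                     # degree-based prior p(L1_mn)
        return 1.0 - comb(E - deg[m], deg[n]) / comb(E, deg[n])

    def clustering(z):                    # p(L1_mn | z): clustering coefficient of z
        nbrs = np.flatnonzero(adj[z])
        k = nbrs.size
        if k < 2:
            return 0.0
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2
        return 2.0 * links / (k * (k - 1))

    def mi_z(z):                          # I(L1_uv; z): average log-ratio over pairs
        nbrs = np.flatnonzero(adj[z])
        cz = clustering(z)
        if cz == 0.0:
            return 0.0                    # assumption: skip degenerate neighbourhoods
        total, pairs = 0.0, 0
        for m in nbrs:
            for n in nbrs:
                if m != n:
                    total += log(cz) - log(p_link(m, n))
                    pairs += 1
        return total / pairs

    common = np.flatnonzero((adj[u] > 0) & (adj[v] > 0))
    return log(p_link(u, v)) + sum(mi_z(z) for z in common)
```

Because both the prior and the per-neighbor terms are symmetric in the pair, the score satisfies $s\_mi(u,v) = s\_mi(v,u)$, and two pairs with the same number of common neighbors can still receive different similarities when those neighbors differ in degree or clustering, which is the point of MINM.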