2. Methodology
In practical applications, the computation of a global index to measure the importance of each node is a main task. If the system under study contains several types of relations between actors, it is expected that the measures take into account, in some way, the importance obtained from the different layers. A simple choice could be to combine the centralities of the nodes computed independently from the different layers according to some heuristic.
In this section, both the well-established methods and the proposed centrality for multiplex networks with data are described in detail.
2.1. The Adapted PageRank Algorithm (APA) Model
Let us establish some notation that will be used in the following. Let
$\mathcal{G}=(\mathcal{N},\mathcal{E})$ be a graph where
$\mathcal{N}=\{1,2,\dots ,n\}$ and
$n\in \mathbb{N}$. The link
$(i,j)$ belongs to the set
$\mathcal{E}$ if and only if there exists a link connecting node
i to
j. The adjacency matrix of
$\mathcal{G}$ is an
$n\times n$ matrix
$A=({a}_{ij})$, where ${a}_{ij}=1$ if $(i,j)\in \mathcal{E}$ and ${a}_{ij}=0$ otherwise.
The adapted PageRank algorithm (APA) proposed by Agryzkov et al. [
25] provides us with a model to establish a ranking of nodes in an urban network, taking into account the data present in it. This centrality was originally proposed for urban networks, although it may be generalized to spatial networks or, more generally, networks with data. Its main characteristic is that it is able to incorporate the importance of data obtained from any source into the whole process of computing the centrality of the individual nodes. Starting from the basic idea of the PageRank vector, the construction of the matrix used for obtaining the classification of the nodes is modified.
In its original approach, PageRank was based on a model of a web surfer who probabilistically browses the web graph, starting at a node chosen at random according to a personalization vector whose components give the probability of starting at node v. At each step, if the current node had outgoing links to other nodes, the surfer next browsed with probability $\alpha $ one of those nodes (chosen uniformly at random), and with probability $1-\alpha $ a node chosen at random according to the personalization vector. For the web graph, the most popular value of the damping factor was $0.85$. If the current node was a sink with no outgoing links, the surfer automatically chose the next node at random according to the personalization vector.
In the APA model, the data matrix was constructed following a reasoning similar to the original idea of the PageRank vector: a random walker can jump between connected nodes following the local links given by the network, or can jump between nodes that are not directly connected with the same probability, regardless of the topological distance between them (number of nodes in the walk).
In the algorithm implemented to calculate the APA centrality (see [
25], p. 2190), a new matrix
${A}^{*}=({p}_{ij})$ is constructed from the adjacency matrix
A, as
${p}_{ij}={a}_{ij}/{c}_{j}$,
where
${c}_{j}$ represents the sum of the
jth column of the adjacency matrix.
Algebraically,
${A}^{*}$ may be obtained as ${A}^{*}=A\Delta $,
where
$\Delta =({\delta}_{ij})$ is the degree matrix of the graph, that is,
${\delta}_{ij}={c}_{j}^{-1}$, for
$i=j$ and
${\delta}_{ij}=0$, for
$i\ne j$. We refer to
${A}^{*}$ as the transition matrix; it represents, by columns, the probability of navigating from one page to another. In the literature related to this topic the matrix
${A}^{*}$ is also denoted as
P or
${P}_{A}$, so
${A}^{*}$,
P and
${P}_{A}$ are the same matrix. Following the notation of Pedroche et al. [
34], we will preferably use
P.
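The column normalization that produces $P={A}^{*}=A\Delta $ can be sketched as follows (a minimal illustration; the example graph and the guard for dangling nodes, i.e., zero columns, are assumptions, since the text resolves dangling nodes through the teleportation step):

```python
import numpy as np

def transition_matrix(A):
    """Column-normalize an adjacency matrix: p_ij = a_ij / c_j,
    where c_j is the sum of the j-th column of A."""
    A = np.asarray(A, dtype=float)
    c = A.sum(axis=0)          # column sums c_j
    c[c == 0] = 1.0            # guard for dangling nodes (assumption of this sketch)
    return A / c               # broadcasting divides each column j by c_j

# Small undirected example: a triangle plus a pendant node
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
P = transition_matrix(A)
# Each column of P sums to 1, so P is column stochastic
```

The division by the column-sum vector is exactly the product $A\Delta $ with ${\delta}_{jj}={c}_{j}^{-1}$, without forming the diagonal matrix explicitly.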
The transition matrix
$P={A}^{*}$ has the following characteristics (see [
25]):
The key point of the model is the construction of the so-called data matrix D of size $n\times k$, with its n rows representing the n nodes of the network, and each of its k columns representing one of the attributes of the data under study. Specifically, an element ${d}_{ij}\in D$ is the value we attach to the data class ${k}_{j}$ at node i.
However, not all the characteristics of the data may have the same relevance or influence in the question under analysis. Therefore, a vector ${\mathit{v}}_{\mathbf{0}}\in {\mathbb{R}}^{k\times 1}$ is constructed, where the element in row i is the multiplicative factor associated with the property or characteristic ${k}_{i}$. This vector ${\mathit{v}}_{\mathbf{0}}$ introduces a weighting of the data, making it possible to work with the entire dataset or only a part of it.
Then, multiplying
D and
${\mathit{v}}_{\mathbf{0}}$,
$\mathit{v}$ may be obtained as $\mathit{v}=D{\mathit{v}}_{\mathbf{0}}$,
with
$\mathit{v}\in {\mathbb{R}}^{n\times 1}$.
The construction of vector
$\mathit{v}$ allows us to associate with each node a value that represents the amount of data assigned to it. Thus, two different values are associated with every node: on the one hand, its degree, related to the topology, and, on the other hand, the value of the data associated with it. For a more detailed description of how the data are associated with the nodes, see [
25,
35].
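As a small illustration of how the weighting vector ${\mathit{v}}_{\mathbf{0}}$ selects or discards data classes, consider a toy data matrix (all figures here are invented for the example):

```python
import numpy as np

# Data matrix D: n = 4 nodes, k = 3 data classes (hypothetical values)
D = np.array([[15.0, 2.0, 0.0],
              [ 3.0, 1.0, 4.0],
              [ 8.0, 0.0, 1.0],
              [ 0.0, 5.0, 2.0]])

# v0 weights the k data classes; a zero entry discards that class entirely
v0 = np.array([1.0, 0.5, 0.0])   # keep class 1, halve class 2, ignore class 3

v = D @ v0                        # v in R^{n x 1}: weighted amount of data per node
v_star = v / v.sum()              # normalized data vector v*
```

A zero component in `v0` removes the corresponding data class from the centrality computation, which is how the model works "with the entire data set or a part of it".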
After normalizing
$\mathit{v}$, denoted as
${\mathit{v}}^{*}$, it is possible to define the matrix
${M}_{APA}$ as
${M}_{APA}=(1-\alpha )P+\alpha V$,
where
$V\in {\mathbb{R}}^{n\times n}$ is a matrix in which all of its components in the
ith row are equal to
${\mathit{v}}_{\mathit{i}}^{*}$. The parameter
$\alpha $ is fixed and is related to the teleportation idea. The value that is traditionally used is
$\alpha =0.15$.
In practice, the vector ${\mathit{v}}^{*}$ is repeated (n times) in every column of the matrix V.
The matrix ${M}_{APA}$ is used to compute the ranking vector for the network.
With these considerations, the APA algorithm proposed in [
25] may be summarized as Algorithm 1:
Algorithm 1: (Adapted PageRank algorithm (APA)). Let $G=(V,E)$ be a primary graph representing a network with n nodes.
1. Compute the matrix P from the graph G.
2. Construct the data matrix D.
3. Construct the weighting vector ${\mathit{v}}_{0}$.
4. Compute $\mathit{v}$ as $\mathit{v}=D{\mathit{v}}_{0}$.
5. Normalize $\mathit{v}$, and denote it as ${\mathit{v}}^{*}$.
6. Construct V as $V={\mathit{v}}^{*}{\mathit{e}}^{T}$.
7. Construct the matrix ${M}_{APA}$ as ${M}_{APA}=(1-\alpha )P+\alpha V$.
8. Compute the eigenvector $\mathit{x}$ of the matrix ${M}_{APA}$ associated with the eigenvalue $\lambda =1$. The components of the resulting eigenvector $\mathit{x}$ represent the ranking of the nodes in the graph G.

The main feature of this algorithm is the construction of the data matrix D and the weighting vector ${\mathit{v}}_{\mathbf{0}}$. The matrix D allows us to represent the dataset numerically, while ${\mathit{v}}_{\mathbf{0}}$ determines the importance of each of the factors or characteristics measured by means of D.
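The steps of Algorithm 1 can be sketched end to end as follows (a toy dense implementation; the example network, the data values, and the dangling-node guard are assumptions of this sketch, and the dominant eigenvector is obtained by power iteration):

```python
import numpy as np

def apa_centrality(A, D, v0, alpha=0.15, tol=1e-10, max_iter=1000):
    """Sketch of the Adapted PageRank Algorithm (APA), following Algorithm 1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Step 1: transition matrix P (column stochastic)
    c = A.sum(axis=0)
    c[c == 0] = 1.0                        # dangling-node guard (assumption)
    P = A / c
    # Steps 2-5: data vector v = D v0, normalized to v*
    v = D @ v0
    v_star = v / v.sum()
    # Step 6: V = v* e^T (v* repeated in every column)
    V = np.outer(v_star, np.ones(n))
    # Step 7: M_APA = (1 - alpha) P + alpha V (column stochastic)
    M = (1 - alpha) * P + alpha * V
    # Step 8: dominant eigenvector (lambda = 1) via power iteration
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = M @ x
        x_new /= x_new.sum()
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x_new

# Toy network and a single data class (values invented for illustration)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
D = np.array([[15.0], [3.0], [8.0], [2.0]])
v0 = np.array([1.0])
x = apa_centrality(A, D, v0)
# x is positive and sums to 1; sorting its components ranks the nodes
```

Because ${M}_{APA}$ is column stochastic and positive, the iteration converges to the unique dominant eigenvector whose positivity is guaranteed by the Perron–Frobenius theorem, as discussed below.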
The Perron–Frobenius theorem is of great importance in this problem, since it constitutes the theoretical basis ensuring that there exists an eigenvector
$\mathit{x}$ associated with the dominant eigenvalue
$\lambda =1$ whose components are all positive, which allows an order or classification of the nodes to be established. In our case, due to the way in which
P and
V have been constructed, it can be seen that
${M}_{APA}$ is a column stochastic matrix, which guarantees the spectral properties necessary for the Perron–Frobenius theorem to hold. Therefore, the existence and uniqueness of a dominant eigenvector with all its components positive is guaranteed. See [
36,
37] for further study of spectral and algebraic properties of the models based on PageRank.
Vector $\mathit{x}$ constitutes the adapted PageRank vector and provides a classification or ranking of the nodes according to the connectivity criterion between them and the presence of data.
2.2. The Biplex Approach for Classic PageRank
Pedroche et al. [
34] propose a two-layer approach for the classic PageRank classification vector based on an idea that we now briefly describe. The two-layer approach considers the PageRank classification of the nodes as a process divided into two clearly differentiated parts. The first part is related to the topology of the network, where the connections of the nodes are taken into account by means of the adjacency matrix. The second part concerns the teleportation from one node to another, following a criterion of equiprobability.
They affirm that the PageRank classification for a graph G with personalization vector $\mathit{v}$ can be understood as the stationary distribution of a Markov chain occurring in a network with two layers:
${l}_{1}$, the physical layer, which is the network G;
${l}_{2}$, the teleportation layer, which is an all-to-all network, with weights given by the personalization vector.
Under this perspective, it is easy to construct a block matrix
${M}_{A}$ based on these two layers, where each of the diagonal blocks is associated with a given layer. Therefore, we can construct a matrix
${M}_{A}$, given by expression (4), that defines a Markov chain in a network with two layers.
Due to the good spectral characteristics of
${M}_{A}$ (it is irreducible and primitive), they arrive at the conclusion that, given a network with
n nodes whose adjacency matrix is
A, the two-layer approach PageRank of
A is the vector ${\mathit{\pi}}_{u}+{\mathit{\pi}}_{l}$,
where
${\left({\mathit{\pi}}_{u}^{T}\;{\mathit{\pi}}_{l}^{T}\right)}^{T}$ is the unique normalized and positive eigenvector of the matrix
${M}_{A}$ given by expression (
4).
In [
38], the authors propose a new centrality measure for complex networks with geolocated data based on the application of the two-layer PageRank approach to the APA centrality measure for spatial networks with data. They design an algorithm to evaluate this centrality and show the coherence of this measure with respect to the original APA by calculating the correlation and the quantitative difference between both centralities using different network models. This coherence between the APA and the two-layer centrality is essential for our objective of extending the two-layer approach to multiplex networks with data.
Therefore, the two-layer approach may be extended to the case of multiplex networks, where we have several networks with the same nodes but with different topologies and connections between nodes. Following the notation used by Pedroche et al. [
34], let us consider a multiplex network
$\mathcal{M}=(\mathcal{N},\mathcal{E},\mathcal{S})$ with layers
$\mathcal{S}=({l}_{1},{l}_{2},\dots ,{l}_{k})$. Given a multiplex network
$\mathcal{M}$ with several layers, a multiplex PageRank centrality can be defined by associating with each layer
${l}_{i}$ a two-layer random walker, with one layer being the physical layer and the other a teleportation layer. In addition, transitions between these layers must be allowed. The idea behind this process is the application of the two-layer approach to each layer of the multiplex network.
For now, let us consider our problem restricted to biplex networks $\mathcal{M}=(\mathcal{N},\mathcal{E},\mathcal{S})$, with layers $\mathcal{S}=({l}_{1},{l}_{2})$ whose adjacency matrices are ${A}_{1},{A}_{2}\in {\mathbb{R}}^{n\times n}$, respectively. For convenience of notation, we will write ${P}_{A}$, ${P}_{1}$ and ${P}_{2}$ instead of ${A}^{*}$, ${A}_{1}^{*}$ and ${A}_{2}^{*}$, respectively.
The authors (see, [
34]) construct a general matrix
${M}_{2}$ as a new block matrix by associating with each layer
${l}_{i}$ a two-layer multiplex defined, for
$i=1,2$, as:
Reordering the blocks in such a way that the physical layers appear in the first block, the final matrix is
with
${P}_{i}$, for
$i=1,2$, row stochastic matrices and
${\mathit{v}}_{i}$, for
$i=1,2$, the personalization vectors.
It is straightforward to check that all the spectral properties of
${M}_{2}$ are essentially the same as those of the Google matrix in the PageRank model. Then, there exists an eigenvector
associated with the dominant eigenvalue
$\lambda =1$. This vector is the key to obtaining the classification vector representing the centrality of the nodes.
Consequently, summarizing the main characteristic of the biplex PageRank approach: given a biplex network
$\mathcal{M}$ with
n nodes, with two layers
$\mathcal{S}=({l}_{1},{l}_{2})$, and whose adjacency matrices are
${A}_{1},{A}_{2}\in {\mathbb{R}}^{n\times n}$, it can be affirmed that the PageRank vector that classifies the nodes of this biplex network is the unique vector
${\mathit{\pi}}_{\mathbf{2}}$ such that
with
${\mathit{\pi}}_{\mathbf{2}}$ normalized.
2.3. Constructing the APA-BI Centrality by Applying the Two-Layer Approach
The treatment of the PageRank concept by means of two layers fits naturally within the idea of APA centrality, since the influence of the data in the network is already measured separately in the original algorithm. Paying attention to the construction of
${M}_{APA}$ given by (
3), note that
V is the matrix summarizing all the data information. The application of this concept is interesting for our centrality not only for this reason; it also makes it possible to analyze the differences that occur between both techniques of calculating the importance of the nodes.
In this section, we describe how to adapt the APA centrality taking as a reference the two-layer technique, where a block matrix is used to distinguish the topology from the personalization vector.
The original APA centrality model, described in
Section 2.1, presents some differences from the model described and implemented by Pedroche et al. [
34], where the final matrix involved in the eigenvector computation is row stochastic. In our approach, the basis of the original APA model is the construction of a column stochastic matrix, in which the topology of the network is reflected by the probability matrix
P and the influence of the data by the matrix
V.
In order to build a
$2\times 2$ block matrix, the same approach used in [
34] may be reproduced. The upper diagonal block contains the information referring to the network topology, while the lower diagonal block is associated with the data collected in the network and assigned to each of its nodes.
Taking as a reference the APA algorithm, the matrix used to compute the eigenvector associated with the dominant eigenvalue
$\lambda =1$ is given by
A new
$2\times 2$ block matrix is constructed as
The idea underlying the construction of the block matrix given by (
6) is to maintain the spectral properties of the original matrix
${M}_{APA}$, so that the numerical algorithms for determining the dominant eigenvalue and eigenvector remain stable and fast. Note that we have doubled the size of the original matrix.
Following the same reasoning used in
Section 2.2 to construct a model for biplex networks taking the classical PageRank vector as a basis, it is necessary to extend the two-layer APA approach given by the block matrix (
6). Using the same notation, let
$\mathcal{M}=(\mathcal{N},\mathcal{E},\mathcal{S})$, with layers
$\mathcal{S}=({l}_{1},{l}_{2})$, be a biplex network.
Reordering the blocks in such a way that the physical layers appear in the first block, the final matrix is given by
with
${P}_{i}$, for
$i=1,2$ column stochastic matrices and
${V}_{i}$, for
$i=1,2$, the matrices containing the data information.
Note the differences between matrices
${M}_{2}$ (
5) and
${M}_{BI}$ (
7). The matrix
${M}_{2}$ is row stochastic; however, in the APA centrality the basic matrix
${M}_{APA}$ is column stochastic, so the definition of the matrix
${M}_{BI}$ is determined by the need to maintain the spectral properties suitable for obtaining the centrality eigenvector. These desirable spectral properties are ensured by the way in which
${M}_{BI}$ has been built, being column stochastic as well as irreducible.
Then, there exists an eigenvector
associated with the dominant eigenvalue
$\lambda =1$. This vector is the key to obtaining the classification vector representing the centrality of the nodes. Therefore, a unique vector can be obtained
with
$\mathit{x}$ a normalized vector.
In
Figure 1, a schematic representation of the extended APA model for biplex networks is shown. All the graphs share the same
n nodes, although the relationships between them in the two layers
${l}_{1}$ and
${l}_{2}$ are different, which produces two different adjacency matrices
${A}_{1}$ and
${A}_{2}$. Data are also different in each layer; consequently, two data matrices, ${D}_{1}$ and ${D}_{2}$, are constructed.
The existence of the weighting vectors ${v}_{01}$ and ${v}_{02}$ allows us to select those data that are the object of our interest, discarding those that are not relevant for the study being carried out.
The model presented in this section may be summarized by Algorithm 2.
Algorithm 2: (Adapted PageRank algorithm biplex (APA-BI)). Let $\mathcal{M}=(\mathcal{N},\mathcal{E},\mathcal{S})$, with layers $\mathcal{S}=({l}_{1},{l}_{2})$ and adjacency matrices ${A}_{1},{A}_{2}$, be a biplex network with n nodes.
1. For the layers ${l}_{i}$, with $i=1,2$, construct the probability matrices ${P}_{i}$, respectively.
2. From the data, construct the matrices ${D}_{i}$, for $i=1,2$, respectively.
3. Define the weighting vectors ${\mathit{v}}_{0i}$, for each layer.
4. For the layers ${l}_{i}$, for $i=1,2$, compute ${\mathit{v}}_{\mathit{i}}$ as ${D}_{i}{\mathit{v}}_{0i}$, respectively.
5. Normalize ${\mathit{v}}_{\mathit{i}}$, for $i=1,2$, and denote them as ${\mathit{v}}_{i}^{*}$.
6. For the layers ${l}_{i}$, for $i=1,2$, construct ${V}_{i}$ as ${V}_{i}={\mathit{v}}_{i}^{*}{\mathit{e}}^{T}$, respectively.
7. From ${P}_{i}$, ${V}_{i}$, for $i=1,2$, and the parameter $\alpha $, construct ${M}_{BI}$ using the expression (7).
8. Compute the eigenvector ${\widehat{\mathit{\pi}}}_{\mathit{BI}}$ using the expression (8).
9. The components of the resulting vector $\mathit{x}$, given by the expression (9), represent the ranking of the nodes in the biplex network.

Algorithm 2 summarizes the steps necessary to calculate a centrality measure that will be denoted as the adapted PageRank algorithm biplex (APA-BI). This measure provides us with a vector for classifying the nodes of the network according to their importance within a biplex network. This classification is obtained from the importance of the nodes in two layers where the nodes are the same and what changes are the associations (links) between the nodes and the data associated with them.
Note that the
${M}_{BI}$ matrix is built for biplex networks. However, the same reasoning can easily be extended to a multiplex network with
k layers
$({l}_{1},{l}_{2},\dots ,{l}_{k})$, defining the adjacency matrices
$({A}_{1},{A}_{2},\dots ,{A}_{k})$ and a set of
k data matrices
$({D}_{1},{D}_{2},\dots ,{D}_{k})$. The matrix
${M}_{BI}$ is extended to a multiplex network with
k layers as
with
where
${M}_{1,2}$,
${M}_{2,1}$ are diagonal matrices. More exactly,
${M}_{1,2}$ is formed by
k blocks
$2(1-\alpha )I$ and
${M}_{2,1}$ is formed by
k identity blocks
I.
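The structure of the off-diagonal blocks just described can be assembled mechanically. The following sketch builds only $M_{1,2}$ and $M_{2,1}$ for k layers of n nodes each (the diagonal blocks depend on expressions (10) and (11) and are not reproduced here; the sample sizes and the value of $\alpha $ are arbitrary):

```python
import numpy as np

def off_diagonal_blocks(n, k, alpha):
    """M_{1,2}: k diagonal blocks 2(1 - alpha)I; M_{2,1}: k identity blocks I.
    Since every block is a multiple of the identity, both results are
    (n*k) x (n*k) diagonal matrices."""
    I = np.eye(n * k)
    M12 = 2.0 * (1.0 - alpha) * I   # k blocks of 2(1 - alpha)I on the diagonal
    M21 = I.copy()                  # k identity blocks on the diagonal
    return M12, M21

# Example: 4 nodes, 3 layers, alpha = 0.15 (arbitrary choices)
M12, M21 = off_diagonal_blocks(n=4, k=3, alpha=0.15)
```

With a single value of $\alpha $ both blocks are scalar multiples of the identity; the variant with layer-dependent ${\alpha}_{i}$ discussed below would simply replace the scalar factor by a different one in each of the k diagonal blocks.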
In the approach made so far in this section, we have considered the same value of the parameter
$\alpha $ in all the layers that make up the network. However, the
$\alpha $ value could be different in the different layers, as a consequence of the need to differentiate the importance that must be assigned to the data in each of the layers. That leads us to consider various values
${\alpha}_{i}$, one for each layer
i. Note that this variant does not in any way modify the spectral properties of the matrices involved in the calculation of the centrality. Consequently, the matrix
${M}_{BI}$ should now be written as
The generalization of the matrix
${M}_{BI}$ to
k layers with
k parameters
${\alpha}_{i}$ consists simply of replacing each
$\alpha $ in the expressions (
10) and (
11) with its corresponding
${\alpha}_{i}$ in the
ith row.
2.4. A Note about the Computational Cost
We discuss certain general aspects of the computational cost of the proposed model. Looking closely at Algorithm 2, the most expensive algebraic operations carried out are the product of a scalar by a matrix, the matrix–vector product, and the calculation of the dominant eigenpair
$(\lambda ,\mathbf{x})$ of matrix
${M}_{BI}$, given by the expression (
8).
As is well known and can be seen in any linear algebra textbook, the product of a scalar by a square matrix of size n requires $n\times n$ multiplications, while the product of a square matrix of size n by a vector has a computational cost of $O\left({n}^{2}\right)$. In our case, we need to compute the product ${D}_{i}\cdot {\mathit{v}}_{0i}$, for $i=1,2$, where ${D}_{i}$ is the data matrix of size $n\times k$ and ${\mathit{v}}_{0i}$ is a column vector of size k. Therefore, the computational cost of ${D}_{i}{\mathit{v}}_{0i}$ is $O\left(nk\right)$.
However, the most expensive part from the computational point of view is found in step 8 of the algorithm, in which, once the
${M}_{BI}$ matrix is constructed, it is necessary to obtain its dominant eigenpair
$(\lambda ,\mathbf{x})$. The numerical problem of calculating the eigenvalues and eigenvectors of a matrix is, in general, very expensive if the matrix in question does not have a structure that simplifies the calculation in some way. For matrices of low dimension (say,
$N<150$), there are efficient methods for finding all the eigenvalues and eigenvectors. For example, the Householder–QL–Wilkinson modification of the Givens method is built into the EISPACK routines and is routinely used. The computation time for any of these methods grows as
${N}^{3}$ and the memory requirement grows as
${N}^{2}$. For large matrices, a very commonly used algorithm is the Lanczos algorithm, an adaptation of power methods to find the
m most useful eigenvalues and eigenvectors of an
$n\times n$ Hermitian matrix. For a more detailed description of numerical matrix eigenvalue problems, see [
Due to the way we have built the
${M}_{BI}$ matrix, following the original idea of the Google matrix used in PageRank combined with the two-layer PageRank approach, we have ensured that this matrix inherits the spectral properties of the Google matrix in the original PageRank model. It is a column stochastic matrix of which we can affirm, using a variant of the Perron–Frobenius theorem, that its dominant eigenvector associated with the eigenvalue
$\lambda =1$ corresponds to the stationary distribution of the Markov chain defined by the column normalized matrix
${M}_{BI}$. This stationary vector
${\widehat{\mathit{\pi}}}_{\mathit{BI}}$ verifies that ${M}_{BI}\,{\widehat{\mathit{\pi}}}_{\mathit{BI}}={\widehat{\mathit{\pi}}}_{\mathit{BI}}$,
and may be obtained by using the well-known power iteration method, applying it until convergence of the iterative process
${\mathit{x}}^{(k)}={M}_{BI}\,{\mathit{x}}^{(k-1)}$,
for
$k=1,2,\dots $.
In addition, it should be noted that the use of the power method for the calculation of the dominant eigenvector is especially useful when applied to sparse matrices.
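Because each step of the iteration above needs only one matrix–vector product, the method can be written against an abstract `matvec` callback, which is precisely what makes it suitable for sparse storage (a sketch; the tolerance, iteration cap, and the small test matrix are arbitrary choices):

```python
import numpy as np

def power_iteration(matvec, n, tol=1e-10, max_iter=1000):
    """Stationary vector of a column-stochastic matrix given only its
    action x -> M x (one matrix-vector product per iteration)."""
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = matvec(x)
        x_new /= x_new.sum()             # keep the iterates normalized
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

# Example with a small column-stochastic matrix (every column sums to 1)
M = np.array([[0.1, 0.5, 0.3],
              [0.6, 0.2, 0.3],
              [0.3, 0.3, 0.4]])
pi = power_iteration(lambda x: M @ x, 3)
# pi satisfies M pi = pi up to the tolerance
```

The same routine works unchanged if `matvec` is backed by a sparse representation of ${M}_{BI}$, since the dense matrix is never materialized inside the loop.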
3. Results
In this section, an example of a biplex network is analyzed in detail in order to highlight the possibilities offered by having a centrality measure that establishes a classification of the nodes in order of importance.
For this purpose, let us consider a graph of 20 nodes, where each node represents a physical person; specifically, a player from a football team. The example is developed around this group.
With these 20 nodes, let us proceed to construct a network with two layers that relate the nodes in different ways, taking two datasets (one for each layer). The calculation of the APA-BI centrality on this biplex network will allow us to determine the importance of each member of the team and obtain a classification of them in order of importance.
First, a layer
${l}_{1}$ is constructed with the 20 nodes, in which the relationships between the members of the team are analyzed from the social or virtual point of view. Thus, an undirected graph is constructed with an adjacency matrix
${A}_{1}$ in which two nodes are joined by an edge if they are related or linked through a social network. The graph of social relations between the nodes is shown in
Figure 2. The data that are considered in this layer associated with each node are related to the number of messages that each person receives from their teammates in a period of time.
Second, another layer
${l}_{2}$ is constructed from the same 20 nodes of layer
${l}_{1}$, but analyzing in
${l}_{2}$ how the players relate to each other within the game, that is, whether they associate with each other during play. Thus, two players who combine with each other or pass the ball with some frequency during a match are connected by an edge. From these links, it is possible to build a new adjacency matrix associated with this layer, which we denote by
${A}_{2}$. In most team sports with a ball, each player occupies a specific position in the field of play, covering a certain area. Players (nodes) that occupy closer positions associate or relate more easily with each other than with those who are further away. For example, a defender is associated more with a midfielder than with a forward. The graph of game relations between the nodes is shown in
Figure 2.
Both the links and the data associated with each of the 20 nodes of the graph are summarized in
Table 1. There, the second column specifies the social links of every node, while the fourth column details the game links between them. So, for example, the table lists the nodes with which node 1 (the player labeled as 1) has social interactions, as well as the nodes with which it presents strong interactions within the game.
Datasets about the number of messages received during one day through social networks and the number of games played during the season are detailed in columns three and five, respectively. So, node 1 has received 15 messages in a day and has played 33 games in the season.
In this example, one of the advantages of working with biplex networks becomes manifest: the possibility of studying different relationships between the same set of nodes and analyzing the correlation between them. This example also shows the advantage offered by adding what we can call a data layer to each of the layers of the network. We can assign the data that we consider appropriate to the specific relationships that we are representing by means of the corresponding graph. Thus, as can be seen in this example, in the layer where the social relations between the nodes are considered, we introduce the data corresponding to the number of received messages. However, in the second layer, where game relations are represented, the data are completely different, since now the number of games played is considered. Thus, each layer allows us to introduce one or more datasets related to the relationships of the nodes. This leads us to affirm that the inclusion of data in each of the layers enriches the nature of the problems that can be analyzed.
The objective in this example is to determine the most important or influential players within the team. For this task, two different aspects may be evaluated. On the one hand, there is the importance of the nodes from the point of view of the social relations established between them through messages, social networks, or any other virtual means; the nodes that have an intense social activity create a very important bond within the group, being very influential for other nodes. On the other hand, the importance of the nodes from the point of view of the game may also be evaluated, that is, which players are more important in the game, for their participation or quality. In other words, it is decisive to look for the leaders of the group, analyzing their importance from the social and technical points of view.
In order to determine the importance of the nodes of the biplex network of this example, the APA-BI centrality described in
Section 2.3 has been calculated. Algorithm 2 has been executed using the information shown in
Table 1. The numerical results shown in
Table 2 are graphically displayed in
Figure 3. This calculation gives us the importance of the nodes relating both layers.
Figure 4 shows the final graph considering the information of the two layers and the final result of the APA-BI centrality for each node, with the size of each node drawn according to its importance.
The APA-BI centrality shows that the nodes that can be classified as the most important, the leaders within the group, are nodes 20 and 18, respectively. Note that node 20, which is the most important, is not the node that receives the most messages from its colleagues; node 18 is the one that receives the most messages from his teammates, yet it is second in the ranking.
Finally, it should be noted that, as discussed in
Section 2.4, the use of the power method to calculate the stationary vector of the Markov chain defined by the stochastic matrix
${M}_{BI}$ provides the numerical stability needed in the implemented algorithm.
We have performed several tests with randomly generated adjacency matrices of sizes ranging from 10 to 10,000 and we have obtained stable results. Matrices of dimension exceeding ${10}^{5}$ cannot be stored in the central memory of most computers, except for sparse matrices; consequently, the only matrix-arithmetic operation that is easily performed is the matrix–vector product. This makes it possible to use this centrality algorithm for large matrices. In the scope of our research with urban network matrices, the sizes of the case studies are relatively large, with approximately 2000–5000 nodes.
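The point about sparse matrices can be illustrated with a minimal matrix–vector product over a coordinate (COO) representation, which touches only the nonzero entries (a toy sketch for illustration, not a replacement for a sparse linear algebra library; the sample matrix is invented):

```python
import numpy as np

def sparse_matvec(data, rows, cols, n, x):
    """y = M x for M stored as COO triplets (data[t] = M[rows[t], cols[t]]).
    Cost is O(nnz), independent of the n*n dense size."""
    y = np.zeros(n)
    for v, i, j in zip(data, rows, cols):
        y[i] += v * x[j]
    return y

# Column-stochastic 3x3 matrix with 5 nonzeros, stored as triplets
data = [0.5, 0.5, 1.0, 0.7, 0.3]
rows = [0, 1, 0, 1, 2]
cols = [0, 0, 1, 2, 2]
x = np.array([0.2, 0.3, 0.5])
y = sparse_matvec(data, rows, cols, 3, x)
```

Plugging such a product into the power iteration keeps both the memory footprint and the per-iteration cost proportional to the number of links, which is what makes the centrality computable for networks whose dense matrices would not fit in memory.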
4. Discussion
In the discussion of the model presented and evaluated in the previous sections, let us pay attention to the example studied, the network of the members of a football team. Results have been presented related to the proposed APA-BI centrality measure for a biplex network of this kind.
We also consider that it may be relevant to calculate the centrality of the nodes in each of the layers separately and independently. In this way, it is possible to analyze the differences between the centrality computed in a single network and in a multiplex network, verifying whether there is a certain correlation between the results obtained. We have calculated the APA centrality of the nodes in each of the layers. The numerical results obtained are shown in
Table 2, where the numerical values of the centrality and the position of the nodes in the classification or ranking can be seen.
It should be noted that all the calculations have been made taking the value of the parameter $\alpha $ equal to $0.5$. This means that we assign the same importance in the centrality computation to the connections of the nodes as to the data associated with them.
As already mentioned, the APA-BI centrality provides a classification of the nodes according to their importance in the biplex network studied; nodes 20 and 18 were the most important in the team. This can also be analyzed when the layers are considered independently.
The comparison between the APA-BI centrality and the APA centrality computed by independent layers offers us some remarkable facts. For instance, the most active nodes from the point of view of social networks are 20, 18, 10 and 5, while the most important nodes from the point of view of the game are 4, 7, 1 and 15. It is evident that there is no correlation between both relationships; the leaders from the social point of view are not necessarily the most decisive players in the team's game. Thus, the most participative player of the team, the one who relates most in the game with his teammates, is not socially the most active, nor will he be the most influential individual within the group.
We appreciate that the most important nodes in a biplex network are nodes that maintain high positions in the two rankings obtained in each of the layers. Node 20 is the one that presents the greatest importance from the social point of view; however, it is not among the first three nodes with the greatest participation in the team game. On the other hand, node 18 is not as socially active as node 20, although it has a greater degree of association with its teammates in the team game and has had a greater presence in the team's matches, having played more than 5 additional games. The numerical results show that this positive rating does not compensate for the high social importance of node 20, although it should be highlighted that the difference in centrality between both nodes is very small. It can be concluded that both nodes are the team leaders.