
A Methodology for Redesigning Networks by Using Markov Random Fields

Department of Applied Mathematics, University of Granada, 18071 Granada, Spain
Department of Computer Architecture and Computer Technology, University of Granada, 18071 Granada, Spain
Department of Social Psychology, University of Granada, 18071 Granada, Spain
Institute of Artificial Intelligence, De Montfort University, Leicester LE1 9BH, UK
Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
Author to whom correspondence should be addressed.
Academic Editor: Vassilis C. Gerogiannis
Mathematics 2021, 9(12), 1389;
Received: 10 May 2021 / Revised: 8 June 2021 / Accepted: 10 June 2021 / Published: 15 June 2021
(This article belongs to the Special Issue Group Decision Making Based on Artificial Intelligence)


Standard methodologies for redesigning physical networks rely on Geographic Information Systems (GIS), which strongly depend on local demographic specifications. The absence of a universal definition of demography makes its use for cross-border purposes much more difficult. This paper presents a Decision Making Model (DMM) for redesigning networks that works without geographical constraints. There are multiple advantages of this approach: on one hand, it can be used in any country of the world; on the other hand, the absence of geographical constraints widens the application scope of our approach, meaning that it can be successfully implemented either in physical (ATM networks) or non-physical networks such as in group decision making, social networks, e-commerce, e-governance and all fields in which user groups make decisions collectively. Case studies involving both types of situations are conducted in order to illustrate the methodology. The model has been designed under a data reduction strategy in order to improve application performance.
Keywords: universal decision making model; redesigning networks; Markov random fields

1. Introduction

Network redesigning is a dynamic problem that arises in several fields. The wide range of reasons for redesigning includes the need to adapt to changing requirements (new regulatory scenarios, for instance) or to enhance a sector's capacities [1,2,3]. Not only is the further expansion of the network considered; reductions may also be involved. Likewise, redesigning affects a large number of sectors in which networks play a role: apart from areas with standard branch networks (banking institutions, supermarkets and petrol or gas stations), it concerns the telecommunications and electricity sectors, transport industries by land, sea and air, and many others in which the consideration of physical networks is required [4].
Redesigning processes go much further than adapting to changes as they involve several factors. However, at their core, all these variables may be considered to be within two main groups: (i) identifying operational shortfalls and (ii) projecting future performance in order to anticipate future needs. The latter concern could be interpreted as re-distributing nodes in the network (adding, removing and merging nodes) and assessing the results for each possibility. This paper is related to the latter concept: an approach is developed to model the evaluation of the future performance of a network in different scenarios in order to design the future network structure.
Standard methodologies for redesigning physical networks rely on Geographic Information Systems [5,6], which strongly depend on local specifications. However, the absence of a universal definition of demography makes the joint use of such technologies in domestic and international scenarios more difficult, especially for cross-border enterprises. Our contribution is to present a Decision Making Model to restructure networks that works without geographical constraints. More specifically, the novelty of our model is that local geographical specifications are optional (they could be included if required by the specific context) instead of compulsory. There are multiple advantages of our approach: on one hand, it may be used in any country of the world; on the other hand, there is a broad application scope, as our approach could successfully be implemented either in physical (ATM networks [7]) or in non-physical networks such as e-commerce, e-governance and all fields in which user groups make decisions collectively [8,9,10].
An additional contribution is that the model has been designed under a data reduction strategy (i.e., only significant information is required) in order to improve application performance. The advantages of this vary from reducing the storage space to increasing the speed of the decision-making model. Specifically, our approach is based on a joint probability distribution, which can be expressed as a function of a few significant subsets of nodes of the network, called the cliques. While the employment of probability functions in decision-making models is not new (they have been widely used in Bayesian-based techniques), the novelty of our approach lies in the choice of Markov random fields instead of Bayesian networks as a supporting structure [11].
As a previous stage to the decision model itself, a universal (geographical constraint-free) network is overlapped with the given network. Such a universal network is constructed from the selected criteria for redesigning in such a way that, if multiple criteria were needed, the methodology may be executed in parallel for all the considered criteria, thereby enhancing the decision capabilities of the model. Moreover, the independence of the processes guarantees that the model may monitor the application performance automatically, without the need for a driver (a moderator in GDM contexts, for instance; see [12,13]). As a result, different levels of precision can be considered simultaneously.
Related works on Markov fields include [14,15], where Markov Decision Processes are the tools used to analyze the given information in order to make decisions. In [16], Markov networks are the framework used to develop a scoring function (called BJP) that computes the joint posterior distribution of the collection of Markov blankets.
Network redesigning may be addressed from several standpoints; it is usually related to a given criterion and performed by means of many techniques. In [17], the authors presented a branch redesigning framework based on integer 0–1 programming, the objective of which is to restructure the network after mergers and takeovers. Other approaches rely on Fuzzy Cognitive Maps (FCMs) [18], in which a dynamic network of interconnected knowledge nodes is designed through the use of FCMs. The authors could not find any previous works which jointly address the restructuring of both physical and non-physical networks.
The organization of the paper is as follows. Section 2 provides an overview of Markov random fields. Section 3 and Section 4 are devoted to the decision-making model. Specifically, in Section 3, the pre-processing steps are established (Section 3.1 shows the construction of the universal network and Section 3.2 shows its resulting properties), while in Section 4, the decision-making model itself is developed. In Section 5 and Section 6, some applications are presented (Section 5: the banking context, Section 6: group decision making problems). Finally, Section 7 concludes the paper.

2. Preliminaries on Markov Random Fields

In the beginning, Graph Theory was devoted to finding walks and paths (Euler, Hamilton), but it is now used to find substructures (communities) in networks. As this is also the philosophy that underlies this paper, let us briefly review Markov random fields by starting with Graph Theory.
In a graph $G = (V, E)$ ($V$ represents the set of vertices (nodes or sites) and $E$ represents the set of edges), two vertices $u, v$ are said to be adjacent, $u \sim v$, if $(u, v) \in E$. Thus, the collection of vertices adjacent to a given vertex $u$ is called the neighborhood of $u$, $N(u) = \{v \in V \mid u \sim v\}$. In this regard, it is well known that defining a neighborhood is equivalent to defining the set $E$. Cliques in a graph are communities (subgraphs) of fully connected nodes: $C \subseteq V$ is a clique if $C \subseteq \{u\} \cup N(u)$, $\forall u \in C$.
Depending on whether the edges are bidirectional or not, graphs are called undirected or directed, respectively. Graphs whose nodes are random variables $X_v$, $v \in V$, and whose edges encode statistical relationships between those variables are called graphical models. Graphical models over an undirected graph are called Markov random fields (MRFs), while Bayesian networks are graphical models whose underlying graph is directed.
MRFs may also be viewed as spatial networks that fulfill an extension of the memory-less property of Markov chains. More specifically, Markov chains are linear random processes $\{X_n \mid n \in \mathbb{N}\}$ in which the probability of occurrence of each state $X_n$ depends solely on the immediately previous state $X_{n-1}$:
$$P[X_n = x_n \mid X_k = x_k,\ k < n] = P[X_n = x_n \mid X_{n-1} = x_{n-1}].$$
Now, let us consider a spatial random process $X = \{X_v \mid v \in V\}$, $V = \{1, 2, \dots, n\} \times \{1, 2, \dots, n\}$, and let $P[X]$ denote the joint distribution of $X$ in the following sense: $P[X] = P[\{X_v = x_v \mid v \in V\}]$. Then, $X = \{X_v \mid v \in V\}$ is an MRF if the probability of a state depends solely on the nearest states, on the understanding that the nearest nodes are those in the neighborhood $N$:
$$P[X_v = x_v \mid X_{V \setminus \{v\}} = x_{V \setminus \{v\}}] = P[X_v = x_v \mid X_{N(v)} = x_{N(v)}].$$
For our purposes, it is important to highlight that the joint probability distribution of an MRF $X$ takes a specific form provided that $P[X] > 0$. In particular, it may be written in terms of functions that only take values on cliques of the network (called clique potentials), as follows:
$$P[X] = \frac{1}{Z} \prod_{C \in \mathcal{C}} \phi_C(X_C), \qquad Z = \sum_{X} \prod_{C \in \mathcal{C}} \phi_C(X_C),$$
where the functions $\phi_C(X_C)$ over the cliques $C \in \mathcal{C}$ are the clique potentials and $Z$ is a normalization constant. In Gibbs distribution contexts, the normalizing constant $Z$ is also known as the partition function; its primary role is to ensure that the joint distribution sums to 1. While such a function $\phi_C$ may take several forms, it is usually $\phi_C(X_C) = \exp(-f(C))$, with $f(C)$ being a function known as an energy function on $C$. Thus, the joint distribution of an MRF $X$ may be expressed as
$$P[X] = \frac{1}{Z} \exp\left(-\sum_{C \in \mathcal{C}} f(C)\right).$$
In image modeling and processing, such a joint probability distribution is called a Gibbs distribution, and it gives its name to those spatial random processes whose joint distribution takes the form above. These are known as Gibbs random fields and, by the Hammersley–Clifford theorem, they are essentially the same as MRFs.
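To make the factorization concrete, the following Python sketch computes the Gibbs distribution of a toy MRF with three binary variables and two cliques. The choice of energy function $f$ (which penalizes disagreement inside a clique) and all data are illustrative assumptions, not part of the methodology itself.

```python
import itertools
import math

# Toy MRF: three binary variables with cliques {0, 1} and {1, 2}.
cliques = [(0, 1), (1, 2)]

def energy(assignment, clique):
    # Assumed energy f(C): 0 when the clique's variables agree, 1 otherwise,
    # so agreeing configurations get higher probability.
    values = {assignment[v] for v in clique}
    return 0.0 if len(values) == 1 else 1.0

def unnormalized(assignment):
    # Product of clique potentials phi_C = exp(-f(C)).
    return math.exp(-sum(energy(assignment, c) for c in cliques))

# The partition function Z sums the unnormalized mass over all configurations.
states = list(itertools.product([0, 1], repeat=3))
Z = sum(unnormalized(s) for s in states)

def gibbs(assignment):
    # P[X] = (1/Z) * prod_C phi_C(X_C)
    return unnormalized(assignment) / Z
```

Here the fully agreeing configurations $(0,0,0)$ and $(1,1,1)$ receive the largest probability, and the probabilities sum to 1, as the partition function guarantees.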

3. The Redesigning Process: Pre-Processing Steps

The required steps in the redesigning process may be mainly divided into the pre-processing steps (this section) and the decision-making model itself (Section 4).

3.1. Creation of a Universal Support Network $(X_v, N_v)_{v \in V}$

This section presents the design of a support network that is overlapped with the original one in order to granulate the original set of nodes depending on the selected criteria for redesigning. This support network is called universal in the sense that it works without local constraints. Let $(V, E)$ be the network that we aim to restructure according to a selected criterion. $(V, E)$ could be either physical or non-physical: in physical networks, nodes are physically connected, while in non-physical contexts, edges are abstract links (for instance, friendships in social networks or business relationships in market scenarios). The steps to follow are:
(i) A random variable X related to the criterion for redesigning should be selected. For example, in the case of restructuring a bank branch network under the criterion of branch size (as shown in Section 5), we have to select the random variable which best gathers the main features of the branch size.
(ii) Construct a support network $(X_v, N_v)_{v \in V}$. To achieve our goal, each node $v \in V$ in the original network produces a node $X_v$ in the support network when $v$ is identified with the random variable $X_v$ tailored to $v$ and extensively detailed by a collection of features $x_v^k$, $k = 1, \dots, n$; i.e., $v \equiv X_v \equiv (x_v^1, x_v^2, \dots, x_v^n)^t$.
From now on, the terms $v$ and $X_v$ shall be used interchangeably. Note that in $X_v$, the vector coordinates (which represent the features $x_v^k$, $k = 1, \dots, n$) and their number can be chosen as needed. As mentioned in the Introduction, in the universal support network $(X_v, N_v)_{v \in V}$, local geographical specifications are optional: if required by the specific context, they may be included as part of the features of the random variable $X$. We refer to $X_v$ as a variable-in-a-site, thereby providing a set of random variables $X = \{X_v \mid v \in V\}$.
Definition 1.
Consider nodes $X_{v_i} = (x_{v_i}^1, \dots, x_{v_i}^n)^t$, $X_{v_j} = (x_{v_j}^1, \dots, x_{v_j}^n)^t$, $i \neq j$, in the support network, and let $\overline{X_{v_i}}$ be the mean value in a given interval of time. The distance between $X_{v_i}$ and $X_{v_j}$, $i \neq j$, written as $d_{ij}$, is
$$d_{ij} = d(v_i, v_j) \left(= d(X_{v_i}, X_{v_j})\right) = +\sqrt{\sum_{k=1}^{n} \left(\overline{x^k_{v_i}} - \overline{x^k_{v_j}}\right)^2}.$$
Remark 1.
Recall that the realization of a random variable X is the process of taking a concrete value x v over the full range of values that may be considered: X v = x v . In this regard, the mean value is a choice of realization of the random variable, but many other choices may be considered instead depending on what best suits each particular context. The definition of distance may be freely selected as well.
Definition 2.
The neighborhood of a node $v_i \in V$, $N_{v_i}$, is defined as
$$N_{v_i} = \{v_j \in V \mid d(v_i, v_j) \leq k\}, \quad k \in \mathbb{R},\ k \geq 0,$$
where the benchmark $k$ (which states the degree of similarity) must be specifically defined.
Note that, due to the equivalence between defining edges in a graph and defining a topology through a system of neighborhoods, the edges in the support network $(X_v, N_v)_{v \in V}$ are now defined: for $v_j \in N(v_i)$, an edge $(v_i, v_j)$ is added if $d(v_i, v_j) \leq k$, $k \geq 0$. It should be noticed that Definition 2 could be seen as a more general definition of a neighborhood (a multi-hop neighborhood; see [19]).
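The construction of the support network can be sketched in Python as follows: hypothetical feature vectors play the role of the realized variables-in-sites, the Euclidean distance of Definition 1 induces the neighborhoods of Definition 2, and maximal cliques are found by brute force. All names and values are illustrative assumptions.

```python
import itertools
import math

# Hypothetical realized feature vectors (e.g., mean values over a time window).
features = {
    "v1": (1.0, 2.0),
    "v2": (1.1, 2.1),
    "v3": (5.0, 5.0),
    "v4": (5.2, 4.9),
    "v5": (9.0, 1.0),
}
k = 0.5  # assumed similarity benchmark

def d(u, v):
    # Euclidean distance between realized feature vectors (Definition 1).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(features[u], features[v])))

# Neighborhoods N(v) = {u != v : d(u, v) <= k}; edges follow from them.
nodes = sorted(features)
N = {v: {u for u in nodes if u != v and d(u, v) <= k} for v in nodes}

def is_clique(C):
    # C is a clique iff every node is adjacent to all the others: C \ {u} ⊆ N(u).
    return all(set(C) - {u} <= N[u] for u in C)

# Maximal cliques by brute force (fine for a toy support network).
candidates = [set(c) for r in range(len(nodes), 0, -1)
              for c in itertools.combinations(nodes, r) if is_clique(c)]
maximal = [c for c in candidates if not any(c < m for m in candidates)]
# Three communities emerge: {v1, v2}, {v3, v4} and the isolated {v5}.
```

For realistic network sizes, a dedicated clique-enumeration routine (e.g., Bron–Kerbosch, as implemented in graph libraries) would replace the brute-force search.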

3.2. Properties

In this section, some of the properties of the universal support network are described. Firstly, let us describe the nature of the neighborhood of a random variable:
Proposition 1
(Neighborhoods). The neighborhood of a random variable-in-a-site is formed by random variables that are very similar with regard to the selected criterion for redesigning the network.
Proof. Having identified every node with its set of features (according to the selected criterion) and considering that the benchmark $k$ may be as small as desired, the definition of a neighborhood implies that all random variables in a neighborhood are similar with respect to the selected features. □
Proposition 2
(Cliques). A clique $C$ in the universal network $(X_v, N_v)_{v \in V}$ (and in the original network $(V, E)$ as well) is a community consisting of nodes $c_1^i, c_2^i$ that are highly similar with respect to the features of the random variable $X_v$; i.e., $d(c_1^i, c_2^i) \leq k_i$, $i = 1, 2$.
Proof. The result follows from the fact that the nodes in a clique are also in the corresponding neighborhoods. Then, $d(c_1^i, c_2^i) \leq \min\{k_1, k_2\} \leq k_1, k_2$. □
Remark 2.
Note that the selection of $X$ directly affects the type of network categorization, in the sense that for each choice of the random variable $X$ there is an associated network re-configuration. It is also important to note that the level of similarity for nodes in each $C_i$ is variable; i.e., nodes $c_1^i, c_2^i \in C_i$ are similar with degree $k_i$, while $c_1^j, c_2^j \in C_j$ may be similar with a different degree of similarity $k_j \neq k_i$ if $i \neq j$ (an instance of this is the Internet (the universal network), in which a community of routers is linked by cables of different lengths). In short, cliques are groups of similar nodes, with the degree of similarity depending on each clique.
The following property is very useful in practice: it states that, even if the entire network is not an MRF, its cliques are MRFs when viewed as subnetworks:
Lemma 1.
As subnetworks, cliques are MRFs.
Proof. Since cliques are subgraphs of fully connected nodes, $C$ is a clique if $\forall u \in C$ it holds that $C \setminus \{u\} \subseteq N(u)$. This implies that the (Markov) local property is satisfied. □
Finally, we focus on the entire universal network. The fact that the entire network is an MRF would provide extra information about the behavior of the network under the selected criterion (i.e., the selected random variable $X$). Let it be noted that, as long as the universal network $(X_v, N_v)_{v \in V}$ is an MRF, the next result follows by the Hammersley–Clifford theorem:
Theorem 1.
Assuming that $(X_v, N_v)_{v \in V}$ is an MRF, the joint probability distribution of the universal network can be written as a product of clique potentials based on the energy function $f$ of each clique $C$:
$$P[X] = \frac{1}{Z} \prod_{C \in \mathcal{C}} \exp(-f(C)).$$
The function f may be freely selected depending on the context. This opens the door for sensitivity tests to be performed to empirically determine the best functions for each scenario. This result is of high practical functionality thanks to the simple and direct form of determining the joint probability distribution, expressed only in terms of cliques.

4. The Redesigning Process: The Decision-Making Model

In this section, the process by which the best location for a new node is identified is fully described. To this end, let us suppose that the universal support network $(X_v, N_v)$ is divided into subnets $S_i(X_v, N_v)$. Such a partition offers different scenarios for locating a new node $v^*$:
$$(X_v, N_v) = \bigcup_i S_i(X_v, N_v).$$
The key point is to consider not the subnets but the cliques, since cliques on a network, considered as subnetworks, are MRFs themselves. Thus, the decision-making model for best locating a new node according to a given criterion consists of the following steps:
Step 1: The universal support network $(X_v, N_v)$ should be divided into its own cliques $C_i(X_v, N_v)$.
This division yields different possibilities for locating a new node $v^*$: $(X_v, N_v) = \bigcup_i C_i(X_v, N_v)$.
Step 2: The subnetworks are considered together with the new node $v^*$: $C^i_{v^*}(X_v, N_v) = C_i(X_v, N_v) \cup \{v^*\}$.
In this regard, the following remark should be made:
Remark 3.
Let it be noted that the subnetworks $C^i_{v^*}(X_v, N_v) = C_i(X_v, N_v) \cup \{v^*\}$ are cliques as well: actually, a clique consists of either a single node or a maximally connected subgraph of the whole graph.
Step 3: By Lemma 1, the joint distributions of the subnetworks $\{P[C^i_{v^*}(X_v, N_v)]\}_i$ may be computed.
Comparisons between these numerical scores allow us to make decisions regarding the most convenient locations.
Step 4: In the case of multiple outputs, the output most in accordance with the established criteria should be considered.
The criteria considered can vary from cost minimization to the minimization of distances, and even combinations of criteria.
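The steps above can be sketched in Python. The clique data, the candidate node $v^*$, and the energy function are illustrative assumptions; comparisons are made through unnormalized Gibbs scores, an assumed simplification.

```python
import math

# Assumed cliques of the support network, given as lists of realized feature
# vectors, plus a hypothetical candidate node v*.
cliques = {
    "C1": [(10.0, 3.0), (10.5, 3.2)],
    "C2": [(2.0, 8.0), (2.2, 7.9), (1.9, 8.1)],
}
v_star = (2.1, 8.0)

def energy(members):
    # Assumed energy f: total pairwise squared distance, so tight (similar)
    # groups get low energy and hence a high Gibbs score.
    return sum(sum((a - b) ** 2 for a, b in zip(p, q))
               for i, p in enumerate(members) for q in members[i + 1:])

def score(members):
    # Unnormalized Gibbs mass exp(-f), used here only for comparisons.
    return math.exp(-energy(members))

# Steps 2-3: extend each clique with v* and compute its joint score.
scores = {name: score(members + [v_star]) for name, members in cliques.items()}
# Step 4: keep the placement with the highest score.
best = max(scores, key=scores.get)
```

With these made-up features, $v^*$ is placed in C2, whose members it closely resembles.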

5. Case Study for Physical Networks: The Bank Branch Network

In this section, the decision making model is applied to the banking context: a scenario that continually undergoes changes (mergers, acquisitions, entries and exits of banks into the market, changes in banking regulation) that require dynamic redesigning. In this regard, let us suppose that a new branch with a specific size has to be opened. Thus, we need to locate a new node in the bank branch network depending on the size of the branch; that is, size is the criterion under which the network will be restructured.
In order to identify the random variable which best fits the criterion (size), let it be noted that size may be set according to several parameters (number of users, number of staff, brick-and-mortar dimensions, number of credits/deposits, ⋯). Following the advice of branch managers, we select the most accepted measure: the branch cash holdings, $CH$, which represent the total amount of cash which is allowed to be stored in a branch: $X = CH$.
On the one hand, note that this random variable has a strong dependence on local demographics. Actually, branch cash holdings greatly rely on local demographics: they depend on branch cash transactions, which depend on customers' needs for cash, which in turn strongly depend on branch locations. In general, most branch variables have a strong dependence on local demographics. This is one of the reasons why our approach, which is geographical constraint-free, would be useful in the banking scenario.
On the other hand, the two stochastic processes associated with $CH$ are as follows: firstly, the temporal process that models the temporal movements of cash holdings, denoted by $\{CH_n,\ n \in \mathbb{N}\}$, with $n$ being the unit of time (see [20]); and secondly, the spatial process $\{CH_b,\ b \in BN\}$, in which $b$ denotes a branch in the network $BN$. Here, we are concerned with the spatial stochastic process $\{CH_b,\ b \in BN\}$. Now, features that explicitly describe the random variable $CH$ should be selected. Branch cash holdings are mainly determined by cash transactions. For simplicity, only two determinants of branch cash transactions are chosen, $CH_b = (n_b, v_b)$, where $n_b$ represents the number of branch transactions at branch $b$, while $v_b$ denotes the maximum volume of branch transactions permitted at branch $b$. As a result, branches $b \in BN$ (nodes in the universal support network) are identified with their cash holdings, $CH_b = (n_b, v_b)$.
Following the definition of the support network, edges are defined by alternatively defining the neighborhood of a branch. This is formed by those branches which are nearby branches, $N(b_i) = \{b_j \in BN \mid d(b_i, b_j) \leq k\}$, where "nearby" is understood to mean that there are great similarities in the features related to branches' cash holdings: following the branch managers' criteria, this is equivalent to having the same size. As a result, there is an edge linking two branches if and only if they have the same size. Moreover, cliques in the branch network $BN$ are communities of branches with the same size. For instance, cliques in $BN$ may be $C_l$, $C_m$ or $C_s$, representing the clique of branches with a large size, a medium size and a small size, respectively, as shown in Figure 1, Figure 2, Figure 3 and Figure 4.
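As an illustration, granulating a toy branch network $BN$ into the size cliques $C_l$, $C_m$, $C_s$ might look as follows; the branch data and the bucketing rule on $n_b$ are made-up assumptions.

```python
# Made-up branch data: CH_b = (n_b, v_b) for each branch b in BN.
branches = {
    "b1": (120, 9000), "b2": (115, 8800),  # large
    "b3": (60, 4000), "b4": (55, 4200),    # medium
    "b5": (12, 700),                       # small
}

def size_class(n_b):
    # Assumed bucketing rule: branches of the "same size" share a clique
    # (only n_b is used in this toy rule).
    if n_b >= 100:
        return "C_l"
    if n_b >= 40:
        return "C_m"
    return "C_s"

cliques = {}
for b, (n_b, v_b) in branches.items():
    cliques.setdefault(size_class(n_b), []).append(b)
# cliques: C_l = [b1, b2], C_m = [b3, b4], C_s = [b5]
```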
Remark 4.
Each selection of the random variable (which is stated according to the selected criterion for redesigning) leads to cliques representing different communities of branches. Thus, in a natural way, the original network ( B N ) is granulated depending on the selected criterion.
Thus, the decision making model can now be applied:
  • Step 1: The universal network $(X_v, N_v)$ is divided into its own cliques $C_i(X_v, N_v)$ depending on the different possibilities for locating a new branch $b^*$.
  • Step 2: The new branch $b^*$ is added to each of the above subnetworks: $C^i_{v^*}(X_v, N_v) = C_i(X_v, N_v) \cup \{v^*\}$.
  • Step 3: According to Lemma 1, the joint distributions of the subnetworks $\{P[C^i_{v^*}(X_v, N_v)]\}_i$ may be computed, thereby allowing comparisons between these numerical scores.
  • Step 4: When more than one output is obtained, the most suitable output is selected according to the given criteria.
Additionally, the bank branch network $BN$ is an MRF with respect to branch cash holdings, as shown in the next theorem:
Theorem 2.
The branch network $\{CH_b,\ b \in BN\}$ is an MRF.
Proof. We show here that $BN$ is an MRF by proving the corresponding Markov requirement:
$$P[CH_b = c_b \mid CH_{BN \setminus \{b\}} = c_{BN \setminus \{b\}}] = P[CH_b = c_b \mid CH_{N(b)} = c_{N(b)}].$$
The result follows from the choice of the features considered. □
According to the Hammersley–Clifford theorem, we may prove the following result:
Corollary 1.
The joint distribution function $P[CH]$ of the branch network's cash holdings $CH = \{CH_b,\ b \in BN\}$ is of the form
$$P[CH] = \frac{1}{Z} \prod_{c \in \mathcal{C}} \phi_c(CH_C) = \frac{1}{Z} \prod_{c \in \mathcal{C}} e^{-\frac{1}{T} V_c(CH_C)} = \frac{1}{Z}\, e^{-\frac{1}{T} \sum_{c \in \mathcal{C}} V_c(CH_C)},$$
where ϕ c are clique potentials and V c denote energy functions.
Note that Theorem 2 also allows us to specify the weight $w$ of the cliques $C_i$ (see Hao et al., 2018) by computing the ratio between the joint distribution of the clique as an MRF and the joint distribution of the support network, as follows:
$$w_{C_i} = \frac{P[\{CH_b,\ b \in C_i\}]}{P[\{CH_b,\ b \in BN\}]},$$
through a selected realization of these random variables (the same for both), for instance the mean value of the random variable in a given time interval.
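A minimal sketch of such clique weights, under assumed binary states for each branch and an assumed disagreement energy; each clique and the whole support network are treated as MRFs evaluated at the same realization.

```python
import itertools
import math

def gibbs_prob(realized, members):
    # Normalized Gibbs probability of the realized configuration of `members`,
    # treating each node as binary with an assumed disagreement energy.
    def f(config):
        return sum(1.0 for a, b in itertools.combinations(config, 2) if a != b)
    config = tuple(realized[m] for m in members)
    Z = sum(math.exp(-f(c))
            for c in itertools.product([0, 1], repeat=len(members)))
    return math.exp(-f(config)) / Z

# Hypothetical realization (e.g., thresholded mean cash holdings per branch).
realized = {"b1": 1, "b2": 1, "b3": 0, "b4": 0}
cliques = {"C1": ["b1", "b2"], "C2": ["b3", "b4"]}
all_branches = list(realized)

# Weight of each clique: its joint probability over the support network's,
# both evaluated at the same realization.
weights = {name: gibbs_prob(realized, ms) / gibbs_prob(realized, all_branches)
           for name, ms in cliques.items()}
```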

6. Case Study for Non-Physical Networks: Group Decision Making

This section demonstrates the application of the proposed universal decision-making model to the framework of Qualitative Reasoning (QR) as part of an AI approach that analyzes human reasoning with incomplete information. Specifically, an application to reach consensus in Group Decision Making (GDM) problems is provided. Actually, the application to consensus presented in this paper produces a model that is able to measure the consensus within a committee without the need for a moderator (Section 6.1). Further, the theoretical model applied to this context may be used in order to measure the impact of changes in opinions when reaching consensus (Section 6.2).
Group decision making is the process of multiple individuals making decisions by acting collectively, analyzing problems and assessing alternative courses of action in order to select a solution (see the papers [21,22,23,24] for further details). This resembles the general description of our decision-making model (see the Introduction) as a process that would allow the evaluation of the future performance of the network in different scenarios to design the future network structure.
There are several contingencies that define group decision making: the number and nature of the people involved, the target of the decision-making groups, either formal (formally designated and charged with a specific task) or informal, and the process used to reach decisions (structured or unstructured). Due to the global nature of the universal decision-making model, the process described here is valid for group decision making of any kind. A typical group decision making problem can be formally defined by a set of experts $E$ and a set of alternatives $A$. Thus, the group decision making problem consists of sorting $A$ using the preference values provided by the experts. In general, to solve a group decision making problem, the following steps need to be taken:
  • The definition of the DMP (decision making process), which includes the descriptions of alternatives and a list of experts, $E = \{e_1, \dots, e_n\}$;
  • Outlining the alternatives, $A = \{a_1, \dots, a_m\}$;
  • Extracting individual preferences: for each participant, a specific preference value is assigned, $P_{e_k}$, $k \in \{1, \dots, n\}$;
  • Calculating the aggregation of collective preferences;
  • Ranking alternatives (sorted using the collective preference values);
  • Calculating consensus values: the level of consensus reached allows the determination of whether participants in the network community reach a common opinion or not.
The preferences are considered to be fuzzy preference relations in which the preference matrix is additive reciprocal. In this regard, recall that a fuzzy preference relation $P$ on a set of alternatives $A$ is defined as a fuzzy set on the product set $A \times A$, which is identified with a membership function $\mu_P : A \times A \to [0, 1]$ and is usually represented by a matrix $P = (p_{ij})$, where $p_{ij} = \mu_P(a_i, a_j)$ is interpreted as the preference degree of the alternative $a_i$ over $a_j$ (for instance, $p_{ij} = 1/2$ indicates indifference between $a_i$ and $a_j$, $p_{ij} = 1$ indicates that $a_i$ is absolutely preferred to $a_j$ and $p_{ij} > 1/2$ indicates that $a_i$ is preferred to $a_j$). As mentioned, it is assumed that the preference matrix $P$ is additive reciprocal; i.e., $p_{ij} + p_{ji} = 1$, $\forall i, j \in \{1, \dots, n\}$. Reciprocal multiplicative preference relations ($p_{ij} \cdot p_{ji} = 1$) could be considered instead (the equivalence between reciprocal multiplicative preference relations with values in the range $[1/9, 9]$ and reciprocal fuzzy preference relations with values in the range $[0, 1]$ is well established).
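As a small illustration, an additive reciprocal fuzzy preference relation for one expert can be built from the elicited upper triangle alone; the preference values here are made up.

```python
# Alternatives A = {a1, a2, a3}; the elicited values are illustrative.
n = 3
upper = {(0, 1): 0.7, (0, 2): 0.9, (1, 2): 0.6}  # only the upper triangle

P = [[0.5] * n for _ in range(n)]  # p_ii = 1/2: indifference
for (i, j), p in upper.items():
    P[i][j] = p
    P[j][i] = 1.0 - p              # additive reciprocity: p_ij + p_ji = 1

# p_01 = 0.7 > 1/2, so a1 is preferred to a2.
assert all(abs(P[i][j] + P[j][i] - 1.0) < 1e-12
           for i in range(n) for j in range(n))
```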
For the purposes of applying our universal decision making model, the support network must be defined first. Thus, let us consider the group decision making problem as a network in which each node represents an expert $e_k$, $k = 1, \dots, n$. As the random variable $X$ must be selected according to the goal of reaching consensus, the list of individual fuzzy preferences of each expert over the set of alternatives shall be considered: $X = P$ (individual fuzzy preferences). That is, any expert $e_k$ (i.e., any node in the GDM problem) is identified with the corresponding node of the support network, with the variable-in-a-site $P_{e_k} = (p_{ij}^{e_k})^t$, $k = 1, \dots, n$, in such a way that any expert $e_k$ is identified with their list of preferences $P_{e_k}$. Note that, using the reciprocity condition $p_{ji} = 1 - p_{ij}$, the resulting pairwise comparisons (sometimes structured in a matrix shape) may be gathered into a vector of individual preferences. Following the steps provided before, edges in the support network are defined by alternatively defining the neighborhood of an expert. To this end, apart from the Euclidean distance, many other definitions used in the GDM context (Manhattan, Cosine) may be taken. For further details, see [8], for instance.
Furthermore, as mentioned, the realization of the random variable P e k may be freely selected. In this context, this can be simply considered as the scores (i.e., the numerical preferences) provided by the experts. Once the support network has been stated, some of its properties should apply: the neighborhood of an expert is composed of those experts with the closest preferences. Moreover, a clique C is a community of experts with a high degree of similarity regarding their preferences with respect to the discussion topic (closest preferences, where the degree of similarity may vary depending on the clique). Importantly, let it be noted that the degree of similarity between preferences is measured by means of the selected distance (as stated in Definition 1), where this distance may be freely selected as necessary.

6.1. The Universal Decision-Making Model Used to Measure Consensus

We now show the use of the decision-making model to derive a model that is capable of measuring the consensus within a committee without the use of a moderator. For this, recall that a probability distribution may be associated with each of these cliques (either the original cliques or the original ones plus one more expert); it provides the likelihood of the preferences (as a random variable) of the whole clique, in such a way that it enables comparisons between cliques (or cliques plus one more expert).
The process of reaching consensus may be divided into three stages:
  • The counting process, which consists of calculating the number of participants who have selected each preference value for each alternative pair.
  • The coincidence process, the main goal of which is to aggregate the distances between individual preferences. The objective is twofold: on the one hand, to find similarities between preferences by computing the distances between them (this is the LCR process, which consists of finding a common label for each of the groups containing the most similar preferences); on the other hand, to compute the number of experts who make up each of the previously labeled groups.
  • The computing process: if $D$ denotes the set of decisions, the calculation of some mean-value decision schema of $D$ implies the selection of either an algebraic consensus (i.e., a mapping $D \times D \to D$) or a topological consensus (i.e., a mapping $D \times D \to L$ for some complete lattice $L$). An example of an algebraic consensus is the (L)OWA ((Label) Ordered Weighted Averaging) operator.
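The three stages above can be sketched in code. The following is a minimal illustration, not the paper's algorithm: the vote values, the unit-distance grouping rule and the size-weighted mean (a simple stand-in for a full (L)OWA aggregation) are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical preference labels on one alternative pair, one per expert,
# drawn from an ordered label set (0 = "much worse" ... 4 = "much better").
votes = [3, 3, 4, 2, 3, 4, 0]

# 1. Counting process: tally how many experts chose each value.
counts = Counter(votes)

# 2. Coincidence process (LCR-style grouping): merge labels at distance <= 1
#    into a common labeled group, keeping the size of each group.
groups = {}
for label, n in sorted(counts.items()):
    close = [g for g in groups if abs(g - label) <= 1]
    if close:
        groups[close[0]] += n  # attach to an existing nearby group
    else:
        groups[label] = n      # open a new group under this common label

# 3. Computing process: a weighted mean of the group labels, with weights
#    proportional to group sizes (an algebraic consensus D x D -> D).
total = sum(groups.values())
consensus = sum(label * n for label, n in groups.items()) / total
```

Running this on the sample votes yields the groups {0: 1, 2: 4, 4: 2} and a consensus value of 16/7.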
Note that our approach automatically carries out steps two and three. On the one hand, for the coincidence process, our model aggregates the individual preferences by computing the cliques of the universal network (each consisting of those people whose preferences present a high degree of similarity with respect to the distance chosen for this task). On the other hand, for the computing process, the distribution functions corresponding to cliques (viewed as a score attached to each clique) allow us to find the consensus level by simply comparing the scores and selecting the highest one. Many other levels of consensus can be considered. For instance, the level of consensus may be defined as the following ratio:
$$\frac{\displaystyle \max_{C_k \text{ is a clique}} P\left[\left\{P_{e_k} \mid e_k \in C_k\right\}\right]}{\displaystyle \sum_{C_k \text{ is a clique}} P\left[\left\{P_{e_k} \mid e_k \in C_k\right\}\right]}.$$
The main advantage of this proposal is that different levels of precision can be considered simultaneously, without the presence of a moderator, as can many other functions (as shown in the next subsection), which may be executed in parallel.
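A minimal sketch of computing such a consensus level, assuming it is taken as the likelihood of the best-supported clique relative to the total likelihood mass over all cliques; the clique likelihoods below are invented numbers standing in for the outputs of the cliques' joint distributions.

```python
# Hypothetical likelihoods P[{P_ek | ek in Ck}] attached to each clique by its
# joint distribution (values are illustrative, not taken from the paper).
clique_likelihoods = {"C1": 0.42, "C2": 0.31, "C3": 0.12}

# Consensus level: likelihood of the best-supported clique over the total
# likelihood mass of all cliques.
consensus_level = (max(clique_likelihoods.values())
                   / sum(clique_likelihoods.values()))
```

A value close to 1 indicates that a single clique dominates (preferences concentrate), while a value near 1/(number of cliques) indicates dispersed opinions.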

6.2. The Universal Decision-Making Model to Assess the Impact of Changes in Opinions

The proposed model can evaluate either the impact of a new expert entering the network or the impact of changes in the clique of an existing expert. In the context of GDM, this means that the model may consider the impact of experts changing their preferences. This could be used to reach consensus when it has not been achieved after several attempts.
Based on the standard notion of herd behavior (herding refers to an alignment of behaviors of individuals in a group through local interactions among them), the main result is as follows:
Theorem 3.
Assuming that there is no herd behavior (and no mimetic contagion in consequence), the group decision making (GDM) problem is a Markov random field.
Proof. Let $X_{v_i}$, $X_{v_j}$ be two random variables in the same neighborhood. Then their distance in terms of similarity with regard to the selected features is as small as desired (as small as the benchmark $k$). Thus, their marginal distributions are equal, $P[X_{v_i}] = P[X_{v_j}]$, and the corresponding conditional distributions are related by Bayes' theorem: $P[X_{v_i} \mid X_{v_j}] = P[X_{v_j} \mid X_{v_i}]$.
As a consequence, an opinion depends on those of the cliques with maximum likelihood. Moreover, the assumption of no herd behavior ensures that there will not be any mimetic contagion modifying the collective learning, except through the nearest opinions (i.e., from the neighborhood of an expert). This means that the Markov property holds: $P[P_{e_k} = p_k \mid P_{GDM \setminus \{e_k\}} = p_{GDM \setminus \{e_k\}}] = P[P_{e_k} = p_k \mid P_{N(e_k)} = p_{N(e_k)}]$.  □
Similar to the branch network case, the previous theorem provides extra information that allows us to assign a weight $w_{C_i}$ to the $i$th clique by computing the ratio between the joint distribution of the clique as an MRF and the joint distribution of the support network.
Moreover, based on primary notions of collective learning through mimetic contagion, a sketch of a procedure to change opinions in a GDM when consensus has not been reached may be provided. From the perspective of economic behavior and sociological theories on the interactions and collective dynamics of opinion, one effective way to produce changes in opinion (from opinion A to opinion B) is to "surround" an individual holding opinion A with people holding opinion B.
In GDM, that would mean moving an expert from their own clique (of opinion A) to another clique: changes in the projects/teams/departments of an employee within a company provide a new framework in which they are "surrounded" by new opinions. In that case, the proposed universal decision-making model allows the comparison of all the possibilities of change through the corresponding probability distributions, whose outputs are the likelihoods of the new preferences, expressed as numerical values (see Section 5, step 3 in the model). Thus, comparing these outputs allows decisions to be made.
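The comparison of candidate cliques for a relocated expert can be sketched as follows, assuming (as with the Gaussian energies discussed in Section 7) that each clique is summarized by the mean and variance of its members' scalarized preferences; all clique names, parameters and the preference value below are hypothetical.

```python
import math

# Illustrative clique summaries: (mu, sigma^2) of each clique's preferences.
cliques = {
    "C_l": (0.8, 0.01),
    "C_m": (0.5, 0.04),
    "C_s": (0.2, 0.02),
}

def clique_score(p, mu, var):
    """Unnormalized likelihood exp(-(p - mu)^2 / sigma^2) of preference p
    under a clique's Gaussian energy."""
    return math.exp(-(p - mu) ** 2 / var)

p_new = 0.55  # preference of the expert being relocated (made up)
# The clique whose distribution assigns the highest likelihood to p_new
# is the most natural destination for the expert.
best = max(cliques, key=lambda c: clique_score(p_new, *cliques[c]))
```

Here the expert with preference 0.55 would be assigned to clique "C_m", whose members' opinions are closest to theirs.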

7. Sensitivity Study

The previous Theorem 1 shows that the joint probability distribution of an MRF can be expressed in terms of its cliques $C \in \mathcal{C}$ as
$$P[X] = \frac{1}{Z} \exp\left(-\sum_{C \in \mathcal{C}} f(C)\right).$$
It is worth mentioning that the comments on this theorem state that the form of the function $f$ on a clique $C$ (called the energy function) strongly determines the behavior of the whole probability distribution, so it is highly advisable to carry out sensitivity tests in order to determine which function best represents each particular situation. By and large, the function $f$ on a clique $C$ may be arbitrarily selected as long as it meets the energy constraint: if the features in a clique match the features in a given template, the energy function decreases; otherwise, it increases. Thus, the general procedure for defining $f$ on a clique $C$ consists of taking $f(C) = 1 - \lambda(C)$, where $\lambda$ is an assessment of the similarities between the features of the clique and those of a given template, which may be taken as a distance. Some examples of energy functions $f(C)$ are $\exp\left(c + \sum_i w_i f_i(X_{C_i})\right)$ (log-linear) or $\sum_i \frac{(X_{C_i} - \mu_i)^2}{\sigma_i^2}$ (Gaussian), where $\mu_i$ and $\sigma_i^2$ are the mean and the variance, respectively.

Let us consider the Gaussian energy functions in the particular context of the GDM, with $P_e$ denoting the preference of an expert $e$. Following Theorem 3, the GDM process is an MRF. Let us assume that the GDM is granulated into $n$ cliques $C_k$, $k = 1, \ldots, n$, each of them containing experts with similar preferences $P_{e_k}$, $e_k \in C_k$. Then, since the joint probability distribution is $P[P_e] = \frac{1}{Z} e^{-\sum_{i=1}^{n} \frac{(P_{e_i} - \mu_i)^2}{\sigma_i^2}}$, the probability of the GDM yielding a concrete preference $p_{e_i} = P_{C_i}^{e}(e_i)$ may be computed simply as $P[P_e = p_{e_i}] = \frac{1}{Z} e^{-\sum_{i=1}^{n} \frac{(p_{e_i} - \mu_i)^2}{\sigma_i^2}}$, and a similar approach can be taken to compute $P[P_e \leq p_{e_i}]$ or $P[P_e > p_{e_i}]$.
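A small numerical sketch of this Gaussian-energy distribution, with the preference scale and the clique parameters $(\mu_i, \sigma_i^2)$ invented for illustration; the partition function $Z$ is computed by summing over a discrete grid of admissible preference values.

```python
import math

# Admissible preference values (a small discrete scale, assumed for the example).
scale = [0.0, 0.25, 0.5, 0.75, 1.0]
# Hypothetical (mu_i, sigma_i^2) per clique.
params = [(0.75, 0.05), (0.5, 0.1)]

def energy(p):
    # Sum of Gaussian clique energies (p - mu_i)^2 / sigma_i^2.
    return sum((p - mu) ** 2 / var for mu, var in params)

# Partition function Z over the grid, then the normalized distribution.
Z = sum(math.exp(-energy(p)) for p in scale)
prob = {p: math.exp(-energy(p)) / Z for p in scale}

def tail(p0):
    """P[P_e > p0], by summing the tail of the discrete distribution."""
    return sum(q for p, q in prob.items() if p > p0)
```

With these parameters, the most likely preference on the grid is 0.75, the value closest to the clique means; lowering a clique's variance sharpens the distribution around its mean, which is precisely the sensitivity the energy function controls.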

8. Conclusions

The performance of any network-based system can be improved by an efficient and reliable redistribution. Redesigning involves both managing and anticipating changes. This paper presents a decision-making model that can be used to design the future network structure of either physical or non-physical networks. It works on the basis that it allows the evaluation of the future performance of the network under different scenarios, thereby anticipating future needs. As for managing the redesign, our approach benefits from easy-to-handle application performance, as it is based on joint probability distributions (working independently) that measure the likelihood of success of any alternative scenario. If multiple criteria for redesigning are needed, the methodology may be executed in parallel, thereby enhancing the decision-making capabilities of the model. Regarding the potential applications of our model, it has been shown in game theory, for instance [25], that every correlated equilibrium of every graphical game is a Markov random field. In this sense, similar models have been proposed in recent works to solve the problem of opponent state modeling in RTS games (StarCraft) (see [26]), giving players an extra dimension with which to compete against the AI, and for social media spam detection on Twitter (see [27]).
Moreover, one of the advantages of our approach is that it is free of geographical constraints, which makes it suitable for cross-border enterprises and expands its range of potential applications. Apart from physical networks, group decision making, social networks and all groups of users that make decisions together (ranking and recommendation systems, online prediction markets) can now be included in the application scope.
Regarding future lines of research, alternative fuzzy versions of our DMM that incorporate the use and development of linguistic information are under consideration (see [28]). Furthermore, although the case of multiple criteria is possible now in our model, new proposals for decision-making models with multiple criteria based on incomplete information should be considered (see [29]).

Author Contributions

Formal analysis, Investigation, Original draft, J.G.C.; Writing, Review, Editing, P.A.C., M.-d.-C.A.-L., F.C. and E.H.-V. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.


Acknowledgments

Support from the Spanish State Research Agency under Project PID2019-103880RB-I00/AEI/10.13039/501100011033 and Junta de Andalucía (SEJ340) is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript or in the decision to publish the results.

References
  1. Morente-Molinera, J.A.; Kou, G.; Peng, Y.; Torres-Albero, C.; Herrera-Viedma, E. Analysing discussions in social networks using group decision making methods and sentiment analysis. Inf. Sci. 2018, 447, 157–168.
  2. Reguieg, S.; Taghezout, N. Supporting Multi-agent Coordination and Computational Collective Intelligence in Enterprise 2.0 Platform. Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 70–80.
  3. Ureña, R.; Kou, G.; Dong, Y.; Chiclana, F.; Herrera-Viedma, E. A review on trust propagation and opinion dynamics in social networks and group decision making frameworks. Inf. Sci. 2019, 478, 461–475.
  4. Kaur, R.; Arora, S. Nature Inspired Range Based Wireless Sensor Node Localization Algorithms. Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 7–17.
  5. Allahi, S.; Mobin, M.; Vafadarnikjoo, A.; Salmon, C. An Integrated AHP-GIS-MCLP Method to Locate Bank Branches. In Proceedings of the 2015 Industrial and Systems Engineering Research Conference, Nashville, TX, USA, 30 May–2 June 2015; ISBN 978-098376244-7.
  6. Rezaei, H.; Zare, H.K.; Bashiri, M.; Fakhrzad, M.B. A Multi-stage Stochastic Programming for Redesigning the Relief Logistics Network: A Real Case Study. In Proceedings of ICIME, Barcelona, Spain, 9–11 October 2017; pp. 111–115.
  7. Sykas, E.D.; Vlakos, K.M.; Hillyard, M.J. Overview of ATM networks: Functions and procedures. Comput. Commun. 1991, 14, 615–626.
  8. Del Moral, M.J.; Chiclana, F.; Tapia, J.M.; Herrera-Viedma, E. A Comparative Study on Consensus Measures in Group Decision Making. Int. J. Intell. Syst. 2018, 33, 1624–1638.
  9. Dong, Y.; Zha, Q.; Zhang, H.; Kou, G.; Fujita, H.; Chiclana, F.; Herrera-Viedma, E. Consensus Reaching in Social Network Group Decision Making: Research Paradigms and Challenges. Knowl. Based Syst. 2018, 162, 3–13.
  10. Perez, I.J.; Cabrerizo, F.J.; Alonso, S.; Dong, Y.C.; Chiclana, F.; Herrera-Viedma, E. On Dynamic Consensus Processes in Group Decision Making Problems. Inf. Sci. 2018, 459, 20–35.
  11. Dabrowski, J.J.; Beyers, C.; Villiers, J.P. Systemic banking crisis early warning systems using dynamic Bayesian networks. Expert Syst. Appl. 2016, 62, 225–242.
  12. Cabrerizo, F.J.; Chiclana, F.; Al-Hmouz, R.; Morfeq, A.; Balamash, A.S.; Herrera-Viedma, E. Fuzzy decision making and consensus: Challenges. J. Intell. Fuzzy Syst. 2015, 29, 1109–1118.
  13. Dong, Y.; Zhao, S.; Zhang, H.; Chiclana, F.; Herrera-Viedma, E. A self-management mechanism for non-cooperative behaviors in large-scale group consensus reaching processes. IEEE Trans. Fuzzy Syst. 2018, 26, 3276–3288.
  14. Amor, N.B.; El Khalfi, Z.; Fargier, H.; Sabbadin, R. Lexicographic refinements in stationary possibilistic Markov Decision Processes. Int. J. Approx. Reason. 2018, 103, 343–363.
  15. Fagundes, M.S.; Ossowski, S.; Cerquides, J.; Noriega, P. Design and evaluation of norm-aware agents based on Normative Markov Decision Processes. Int. J. Approx. Reason. 2016, 78, 33–61.
  16. Schluter, F.; Strappa, Y.; Milone, D.H.; Bromberg, F. Blankets Joint Posterior score for learning Markov network structures. Int. J. Approx. Reason. 2018, 92, 295–320.
  17. Ruiz-Hernandez, D.; Delgado-Gomez, D.; Lopez-Pascual, J. Restructuring bank networks after mergers and acquisitions: A capacitated delocation model for closing and resizing branches. Comput. Oper. Res. 2015, 62, 316–324.
  18. Glykas, M.; Xirogiannis, A.G. A soft knowledge modeling approach for geographically dispersed financial organizations. Soft Comput. 2005, 9, 579–593.
  19. Shang, Y. Multi-hop generalized core percolation on complex networks. Adv. Complex Syst. 2020, 23, 2050001.
  20. García Cabello, J. The future of branch cash holdings management is here: New Markov chains. Eur. J. Oper. Res. 2017, 259, 789–799.
  21. Gong, Z.; Xu, X.; Guo, W.; Herrera-Viedma, E.; Cabrerizo, F.J. Minimum cost consensus modeling under various linear uncertain-constrained scenarios. Inf. Fusion 2021, 66, 1–17.
  22. Cabrerizo, F.J.; Morente-Molinera, J.A.; Pedrycz, W.; Taghavi, A.; Herrera-Viedma, E. Granulating linguistic information in decision making under consensus and consistency. Expert Syst. Appl. 2018, 99, 83–92.
  23. Cabrerizo, F.J.; Al-Hmouz, R.; Morfeq, A.; Balamash, A.S.; Martinez, M.A.; Herrera-Viedma, E. Soft consensus measures in group decision making using unbalanced fuzzy linguistic information. Soft Comput. 2017, 21, 3037–3050.
  24. Cabrerizo, F.J.; Ureña, M.R.; Pedrycz, W.; Herrera-Viedma, E. Building consensus in group decision making with an allocation of information granularity. Fuzzy Sets Syst. 2014, 255, 115–127.
  25. Kakade, S.; Kearns, M.; Langford, J.; Ortiz, L. Correlated equilibria in graphical games. In Proceedings of the 4th ACM Conference on Electronic Commerce, San Diego, CA, USA, 9–12 June 2003; pp. 2–47.
  26. Leece, M.; Jhala, A. Opponent state modeling in RTS games with limited information using Markov random fields. In Proceedings of the IEEE Conference on Computational Intelligence and Games, Dortmund, Germany, 26–29 August 2014; pp. 1–7.
  27. El-Mawass, N.; Honeine, P.; Vercouter, L. SimilCatch: Enhanced social spammers detection on Twitter using Markov random fields. Inf. Process. Manag. 2020, 57, 102317.
  28. Li, C.-C.; Dong, Y.; Herrera, F.; Herrera-Viedma, E.; Martínez, L. Personalized individual semantics in Computing with Words for supporting linguistic Group Decision Making. An Application on Consensus reaching. Inf. Fusion 2017, 33, 29–40.
  29. Ureña, M.R.; Chiclana, F.; Morente-Molinera, J.A.; Herrera-Viedma, E. Managing Incomplete Preference Relations in Decision Making: A Review and Future Trends. Inf. Sci. 2015, 302, 14–32.
Figure 1. Partition of the network into its cliques.
Figure 2. Subnetwork $C_{b^*}^{l} = C^{l} \cup \{b^*\}$.
Figure 3. Subnetwork $C_{b^*}^{m} = C^{m} \cup \{b^*\}$.
Figure 4. Subnetwork $C_{b^*}^{s} = C^{s} \cup \{b^*\}$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.