Article

The Euler-Equation Approach in Average-Oriented Opinion Dynamics †

by Vladimir Mazalov 1,2,3,‡ and Elena Parilina 2,3,4,*,‡
1 Institute of Applied Mathematical Research, Karelian Research Center of the Russian Academy of Sciences, 11, Pushkinskaya str., 185910 Petrozavodsk, Russia
2 School of Mathematics and Statistics, Qingdao University, Qingdao 266071, China
3 Institute of Applied Mathematics of Shandong, Qingdao 266071, China
4 Saint Petersburg State University, 7/9 Universitetskaya nab., 199034 Saint Petersburg, Russia
* Author to whom correspondence should be addressed.
† The work was supported by the Shandong Province “Double-Hundred Talent Plan” (No. WST2017009).
‡ These authors contributed equally to this work.
Mathematics 2020, 8(3), 355; https://doi.org/10.3390/math8030355
Submission received: 30 December 2019 / Revised: 23 February 2020 / Accepted: 28 February 2020 / Published: 5 March 2020
(This article belongs to the Special Issue Mathematical Game Theory 2019)

Abstract: We consider models of average-oriented opinion dynamics, in which an opinion about an event is distributed among the agents of a social network. We study an optimization problem and two game-theoretic models in which players, acting as centers of influence, aim to make the agents' opinions closer to the target ones over a finite time horizon while minimizing their costs. The optimization problem and the games of competition for the agents' opinions are linear-quadratic and are solved using the Euler-equation approach. We find the optimal strategies for the optimization problem and the Nash equilibria in open-loop strategies for the games. Numerical simulations illustrate the theoretical results.

1. Introduction

The new informational aspects of our life make it possible to imagine and realize mechanisms of influence on society’s members and their opinions on political and other social events. We can also assume that the level of influence may be regulated by centers of influence in different ways (e.g., by increasing spending on advertising policies, political debates, etc.). Different centers of influence may want to obtain opposite opinions, or various levels of opinion, on some event, and the process of forming these opinions among the members of the society becomes a competition. We propose several models with one or two players who compete for the opinion of society members, based on DeGroot’s model of information diffusion [1], in which the opinion dynamics are described as a process where agents, or society members, influence each other’s opinions with a given intensity rate at any discrete time.
The models of finding a consensus are close to the models of informational influence on a population and to information control models (see, e.g., papers [2,3] and books [4,5]). A model of opinion dynamics with two principals, or centers of influence, is presented in [6], in which conditions for the existence of limit opinions are obtained for a given matrix of influences. The problem of reaching a consensus can be modeled as a repeated game [7] or a mean field game [8]. There are also imitation models of opinion dynamics [9]. Competitive and cooperative models of opinion dynamics are also considered in [10,11], in which the dynamic games of opinion formation are linear-quadratic. The authors find the equilibria representing the level of influence under different information structures (open-loop and closed-loop).
We introduce models of opinion dynamics in which agents, being members of the network, form their opinions under the influence of the average opinion of the society. The society's average opinion is considered a natural way to represent a global trend in present-day communities [12]. Besides the agents who form the social network, there are one or two centers of influence which are interested in making the opinion of any agent as close as possible to the target one. First, we consider an optimization problem with one player (a unique center of influence) minimizing costs that also depend on the level of influence. In this case, there are only two agents in the network: one of them is directly influenced by the player, and the other is influenced by the player via the opinion of the first agent. Second, we introduce a model of opinion dynamics with two players and two agents in the social network. Each player influences the opinion of the corresponding agent directly and the other agent's opinion indirectly. Players are willing to minimize quadratic costs, trying to make the agents' opinions closer to the target ones. We use the Nash equilibrium as a solution concept for the game. Third, we extend the previous model to the case when there are many agents and two players, and the agents are divided into two groups according to which player influences them.
A social network whose agents are the nodes of a star-shaped network is examined in [13]. In that paper, the agents of the society communicate within the network and form opinions under the influence of other society members who have direct links with them. Agents cannot vary the intensity of influence over time; we assume that the intensities are defined by a given matrix. In a star graph, one agent is central, and every non-central agent has a unique link connecting it with the center. The non-central agents are assumed to be identical in their influential power on the central agent. A communication graph with a star structure can describe small social networks in a company or office with a single head [14]. Models using game-theoretic ideas to describe network structures are considered in [15]. When the network has many nodes and is difficult to examine directly, the problem of partitioning the nodes is solved first (see such models in [16,17]).
In all models presented in the paper, the state of the system, considered in discrete time, is defined by the profile of opinions of the social network members. The dynamics of the state variable are given by a linear equation depending on the state in the previous period and the players' intensities of influence. The players' costs are linear-quadratic functions of the state variables and players' strategies; therefore, the game we model is a linear-quadratic difference game. See [18,19] for a detailed analysis of the results on LQ games. The Euler-equation approach to finding solutions in this class of games is considered in [20]. In our paper, we apply the Euler-equation approach to problems of opinion dynamics for the first time. This approach allows one to find optimal or equilibrium strategies in a simpler way than Pontryagin's maximum principle. In the numerical simulations carried out for the paper, the Euler-equation approach demonstrates better running times, especially when the number of periods is large.
The rest of the paper is organized as follows. We describe the Euler-equation approach in Section 2. In Section 3, we describe the optimization model with the given average-oriented opinion dynamics. Section 4 contains the main result for the case of two agents and one player and derives the player's optimal strategy and the optimal opinion trajectory in an explicit way. Then we introduce a game-theoretical model of competition for the agents' opinions in Section 5. An extension of this model to the case of many agents is presented in Section 6. We briefly conclude in Section 7.

2. The Euler-Equation Approach

Let $X \subseteq \mathbb{R}^n$ and $U \subseteq \mathbb{R}^m$ be the state space and the control set, respectively. The state of the system evolves according to the dynamics
$$x(t+1) = f_t(x(t), u(t)), \quad t = 0, 1, \ldots, T-1, \qquad (1)$$
with initial state $x_0 \in X$. The optimal control problem is to find a control $u(t) \in U$ maximizing the functional
$$\sum_{t=0}^{T} \delta^t r_t(x(t), u(t)) \qquad (2)$$
subject to the state dynamics Equation (1) and a given initial condition $x(0) = x_0$.
We can reformulate this problem in terms of the state trajectory $x(t)$. Suppose that we can express $u(t)$ from Equation (1) as a function of $x(t)$ and $x(t+1)$, say $u(t) = q(x(t), x(t+1))$. Then we can rewrite Functional (2) in the following form:
$$\sum_{t=0}^{T} \delta^t g_t(x(t), x(t+1)), \qquad (3)$$
where $g_t(x(t), x(t+1)) = r_t(x(t), q(x(t), x(t+1)))$, $t = 0, 1, \ldots, T-1$.
The Euler-equation approach gives the necessary conditions (see [20,21,22]) for the optimal trajectory $x^*(t)$:
$$\frac{\partial g_{t-1}(x^*(t-1), x^*(t))}{\partial y} + \delta\, \frac{\partial g_t(x^*(t), x^*(t+1))}{\partial x} = 0, \quad t = 1, \ldots, T-1, \qquad (4)$$
where $\partial/\partial x$ and $\partial/\partial y$ denote differentiation with respect to the first and second arguments of $g_t$, respectively.
Note that the games considered in the paper belong to the class of linear-quadratic games, so the necessary conditions are also sufficient. We will apply the Euler-equation method to find the optimal strategies in the dynamic games with average-oriented opinion dynamics.
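The following minimal sketch illustrates the approach on a scalar linear-quadratic problem (all parameter values are illustrative, not taken from the paper). With dynamics $x(t+1) = x(t) + u(t)$ we have $u(t) = x(t+1) - x(t)$ and per-period cost $g_t(x, y) = (x - s)^2 + c(y - x)^2$; a trajectory found by direct minimization must then satisfy the Euler equation at every interior time.

```python
# Scalar LQ sketch of the Euler-equation approach (illustrative parameters):
# dynamics x(t+1) = x(t) + u(t), so u(t) = x(t+1) - x(t) and the per-period
# cost is g_t(x, y) = (x - s)^2 + c*(y - x)^2; we minimize, so signs are
# flipped relative to the maximization form in the text.

T, delta, c, s, x0 = 5, 0.9, 0.5, 1.0, 0.0

def total_cost(x):
    # x = [x(0), ..., x(T)], with terminal term delta^T * (x(T) - s)^2
    J = sum(delta**t * ((x[t] - s)**2 + c * (x[t + 1] - x[t])**2)
            for t in range(T))
    return J + delta**T * (x[T] - s)**2

# Minimize over the trajectory x(1..T) by gradient descent; the problem is
# convex quadratic, so this converges to the unique optimum.
x = [x0] * (T + 1)
for _ in range(20000):
    grad = [0.0] * (T + 1)
    for t in range(1, T + 1):
        grad[t] += delta**(t - 1) * 2 * c * (x[t] - x[t - 1])
        if t < T:
            grad[t] += delta**t * (2 * (x[t] - s) - 2 * c * (x[t + 1] - x[t]))
        else:
            grad[t] += delta**T * 2 * (x[T] - s)
    for t in range(1, T + 1):
        x[t] -= 0.05 * grad[t]

# Euler equation at interior times: dg_{t-1}/dy + delta * dg_t/dx = 0.
residuals = [2 * c * (x[t] - x[t - 1])
             + delta * (2 * (x[t] - s) - 2 * c * (x[t + 1] - x[t]))
             for t in range(1, T)]
print(max(abs(r) for r in residuals))  # essentially zero at the optimum
```

The residuals vanish precisely because the first-order conditions of the trajectory formulation coincide with the Euler equation, which is the observation the method rests on.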

3. Optimization Problem

We consider a society consisting of $n$ agents. Each agent $i$ has an opinion $x_i \in \mathbb{R}$, $i = 1, \ldots, n$. The agents communicate with each other and change their opinions over time, which is assumed to be discrete and finite. Let the opinion of agent $i$ at time $t$ be $x_i(t) \in \mathbb{R}$, $t = 0, \ldots, T$. We also assume that there exists a player who influences the opinion of one particular agent, say agent 1, with intensity $u(t) \in \mathbb{R}$ at time $t = 0, 1, \ldots, T-1$. In period $T$ the player does not influence the agent's opinion. The opinion of agent 1 changes depending on his own opinion, the average opinion of the society, and the influence of the player. The opinion of any agent $i = 2, \ldots, n$ changes over time depending on his own opinion and the average opinion of the society. The opinion dynamics are defined by the following equations:
$$x_1(t+1) = x_1(t) + a_1\left(\frac{\sum_{j=1}^{n} x_j(t)}{n} - x_1(t)\right) + u(t), \qquad (5)$$
$$x_i(t+1) = x_i(t) + a_i\left(\frac{\sum_{j=1}^{n} x_j(t)}{n} - x_i(t)\right), \quad i = 2, \ldots, n, \qquad (6)$$
with initial condition
$$x_i(0) = x_i^0, \quad i = 1, \ldots, n. \qquad (7)$$
In Equations (5) and (6), the constants $a_i \in \mathbb{R}_+$, $i = 1, \ldots, n$, represent the agents' trust in the average opinion of the society: the higher the average opinion, the higher the agent's opinion in the next period.
Let the player aim to bring the opinions of all agents toward some particular level $s \in \mathbb{R}$ while minimizing the total costs
$$J(u) = \sum_{t=0}^{T-1} \delta^t \left[ \sum_{i=1}^{n} (x_i(t) - s)^2 + c\, u^2(t) \right] + \delta^T \sum_{i=1}^{n} (x_i(T) - s)^2, \qquad (8)$$
where $\delta \in (0, 1]$ is a discount factor and $c > 0$ is the player's cost per unit of influence intensity.
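Before a player is introduced, it is instructive to simulate the uncontrolled dynamics of Equations (5) and (6) (with $u(t) = 0$): every agent moves toward the current mean opinion, so the opinions contract to a consensus. The parameter values below are illustrative, not from the paper.

```python
# Uncontrolled averaging dynamics (u(t) = 0): each agent i moves toward the
# current mean with its own trust coefficient a_i, so the spread of opinions
# shrinks to zero. Parameters are illustrative, not from the paper.

a = [0.8, 0.6, 0.5, 0.3]   # trust coefficients a_i in (0, 1]
x = [0.1, 0.9, 0.4, 0.7]   # initial opinions x_i(0)
n = len(x)

for t in range(100):
    mean = sum(x) / n
    x = [x[i] + a[i] * (mean - x[i]) for i in range(n)]

print(max(x) - min(x))  # spread after 100 steps: essentially zero
```

Each update is a convex combination of the old opinions, so the opinions stay in the convex hull of the initial ones while the spread contracts; the player's control in (5) is what pulls this consensus toward the target level $s$.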

4. Case of Two Agents and One Player

For simplicity, we first consider the problem when the society consists of two agents and the opinion of agent 1 is influenced by the player. Therefore, we have the dynamics equations:
$$x_1(t+1) = x_1(t) + a_1\left(\frac{x_1(t) + x_2(t)}{2} - x_1(t)\right) + u(t), \qquad x_2(t+1) = x_2(t) + a_2\left(\frac{x_1(t) + x_2(t)}{2} - x_2(t)\right),$$
with initial condition
$$x_1(0) = x_1^0, \quad x_2(0) = x_2^0. \qquad (9)$$
The dynamics equations may be rewritten in the following form:
$$x_1(t+1) = x_1(t) + \frac{a_1}{2}\left(x_2(t) - x_1(t)\right) + u(t), \qquad (10)$$
$$x_2(t+1) = x_2(t) + \frac{a_2}{2}\left(x_1(t) - x_2(t)\right), \qquad (11)$$
with the same initial conditions.
The optimization problem for the player is to minimize the functional
$$J(u) = \sum_{t=0}^{T-1} \delta^t \left[ (x_1(t) - s)^2 + (x_2(t) - s)^2 + c\, u^2(t) \right] + \delta^T \left[ (x_1(T) - s)^2 + (x_2(T) - s)^2 \right]. \qquad (12)$$
The following theorem provides conditions for finding the optimal strategy $u(t)$ minimizing Functional (12) subject to initial Conditions (9) and the state dynamics of Systems (10) and (11).
Theorem 1.
Let $\{u^*(t) : t = 0, \ldots, T-1\}$ be the optimal strategy minimizing Functional (12) subject to initial Conditions (9) and state dynamics Equations (10) and (11), and let $\{(x_1^*(t), x_2^*(t)) : t = 0, \ldots, T\}$ be the corresponding state trajectory. Then the optimal strategy $u^*(t)$, $t = 0, \ldots, T-1$, is
$$u^*(t) = z^*(t+1) - A z^*(t),$$
and the corresponding optimal state trajectory $(x_1^*(t), x_2^*(t))$, $t = 1, \ldots, T$, satisfies the equations
$$\begin{aligned} &-Ac\delta\, z(t+1) + B z(t) - C z(t-1) + \frac{Ac}{\delta}\, z(t-2) = \delta(a_2 - 1)(x_2(t) - s) + (x_2(t-1) - s), \quad t = 2, \ldots, T-1, \\ &\left(Ac - \frac{a_2 \delta}{2}\right) z(T) - \left(A^2 c + 1 + \frac{c}{\delta}\right) z(T-1) + \frac{Ac}{\delta}\, z(T-2) = \delta a_2 (x_2(T) - s) + (x_2(T-1) - s), \\ &(\delta + c)\, z(T) - Ac\, z(T-1) + \delta (x_2(T) - s) = 0, \\ &x_2(t+1) = x_2(t) + \frac{a_2}{2}\, z(t), \quad t = 1, \ldots, T-1, \end{aligned} \qquad (13)$$
where $z^*(t) = x_1^*(t) - x_2^*(t)$, $A = 1 - \frac{a_1 + a_2}{2}$, $B = \delta + c + A^2 c \delta + Ac - \frac{a_2 \delta}{2}$, and $C = Ac(A + 1) + 1 + \frac{c}{\delta}$.
Proof of Theorem 1.
For simplicity, we introduce a new variable z ( t ) as
$$z(t) = x_1(t) - x_2(t), \quad t = 0, \ldots, T.$$
Then we rewrite the state equations in the following way:
$$z(t+1) = A z(t) + u(t), \qquad (14)$$
$$x_2(t+1) = x_2(t) + \frac{a_2}{2}\, z(t) \qquad (15)$$
with initial conditions $z(0) = x_1^0 - x_2^0$, $x_2(0) = x_2^0$. Notice that $x_2(1) = x_2(0) + \frac{a_2}{2}\, z(0)$.
To use the method of Euler equations, we express $u(t)$ from Equation (14), substitute it into Functional (3), and obtain
$$\begin{aligned} J(z, x_2) = {}& (x_1^0 - s)^2 + (x_2^0 - s)^2 + c\, (z(1) - A z(0))^2 \\ &+ \sum_{t=1}^{T-1} \delta^t \left[ (z(t) + x_2(t) - s)^2 + (x_2(t) - s)^2 + c\, (z(t+1) - A z(t))^2 \right] \\ &+ \delta^T \left[ (z(T) + x_2(T) - s)^2 + (x_2(T) - s)^2 \right]. \end{aligned}$$
The function $J(z, x_2)$ is convex in $z$ and $x_2$, so its minimum under the condition given by Equation (15) always exists. We form the Lagrange function
$$L(z, x_2, k) = J(z, x_2) + \sum_{t=1}^{T-1} k_t \left( x_2(t+1) - x_2(t) - \frac{a_2}{2}\, z(t) \right)$$
and write the first-order conditions $\frac{\partial L(z, x_2, k)}{\partial z(t)} = 0$, $t = 1, \ldots, T$, and $\frac{\partial L(z, x_2, k)}{\partial x_2(t)} = 0$, $t = 2, \ldots, T$, obtaining the system of equations
$$\begin{aligned} &-Ac\, z(t+1) + \left(1 + \frac{c}{\delta} + A^2 c\right) z(t) - \frac{Ac}{\delta}\, z(t-1) + (x_2(t) - s) = \frac{a_2}{4}\, k_t \delta^{-t}, \quad t = 1, \ldots, T-1, \\ &(\delta + c)\, z(t) - Ac\, z(t-1) + \delta (x_2(t) - s) = 0, \quad t = T, \\ &2 z(t) + 4 (x_2(t) - s) = (k_t - k_{t-1})\, \delta^{-t}, \quad t = 2, \ldots, T-1, \\ &2 z(t) + 4 (x_2(t) - s) = -k_{t-1}\, \delta^{-t}, \quad t = T, \end{aligned} \qquad (16)$$
with initial conditions $z(0) = x_1^0 - x_2^0$, $x_2(0) = x_2^0$.
Excluding $k_t$ from System (16), we obtain
$$\begin{aligned} &-Ac\delta\, z(t+1) + B z(t) - C z(t-1) + \frac{Ac}{\delta}\, z(t-2) = \delta(a_2 - 1)(x_2(t) - s) + (x_2(t-1) - s), \quad t = 2, \ldots, T-1, \\ &\left(Ac - \frac{a_2 \delta}{2}\right) z(T) - \left(A^2 c + 1 + \frac{c}{\delta}\right) z(T-1) + \frac{Ac}{\delta}\, z(T-2) = \delta a_2 (x_2(T) - s) + (x_2(T-1) - s), \\ &(\delta + c)\, z(T) - Ac\, z(T-1) + \delta (x_2(T) - s) = 0, \end{aligned}$$
where $B = \delta + c + A^2 c \delta + Ac - \frac{a_2 \delta}{2}$ and $C = Ac(A + 1) + 1 + \frac{c}{\delta}$.
The theorem is proved. □
Remark 1.
The Euler-equation approach used to prove Theorem 1 and, further, Theorems 2 and 3 can be called an inverse method of solving optimal control problems. System (13) in Theorem 1 is solved with respect to the state variables, i.e., the solution gives the optimal trajectory, and the corresponding optimal controls are recovered uniquely from the optimal state trajectory. This approach can be used only if the control or strategy can be represented as a function of the state variables.
Example 1.
Let the parameters of the optimal control problem be the following: $a_1 = 0.8$, $a_2 = 0.6$, $\delta = 0.9$, $c = 0.1$, and let the initial opinions be $x_1(0) = 0.4$, $x_2(0) = 0.3$. For time horizon $T = 8$ and target opinion $s = 0.9$, the optimal trajectory and optimal control are presented in Table 1. The optimal value of Functional (3) is $1.141$.
For the same parameters and duration $T = 50$, the state trajectories and the strategy trajectory are presented in Figure 1 and Figure 2.
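The value reported in Example 1 can be cross-checked without the Euler equation: minimize Functional (12) directly over the control sequence $u(0), \ldots, u(T-1)$, simulating the dynamics (10) and (11) for each candidate control. The sketch below uses plain gradient descent with finite-difference gradients; the learning rate, step count, and difference step are illustrative choices, not values from the paper.

```python
# Direct numerical minimization of Functional (12) for the parameters of
# Example 1 (a sketch, not the Euler-equation method of Theorem 1).

a1, a2, delta, c, s, T = 0.8, 0.6, 0.9, 0.1, 0.9, 8
x10, x20 = 0.4, 0.3

def cost(u):
    # Simulate dynamics (10)-(11) under control u and accumulate (12).
    x1, x2, J = x10, x20, 0.0
    for t in range(T):
        J += delta**t * ((x1 - s)**2 + (x2 - s)**2 + c * u[t]**2)
        x1, x2 = (x1 + a1 / 2 * (x2 - x1) + u[t],
                  x2 + a2 / 2 * (x1 - x2))
    return J + delta**T * ((x1 - s)**2 + (x2 - s)**2)

u = [0.0] * T
eps, lr = 1e-6, 0.02
for _ in range(20000):
    base = cost(u)
    grad = [(cost(u[:t] + [u[t] + eps] + u[t + 1:]) - base) / eps
            for t in range(T)]
    u = [u[t] - lr * grad[t] for t in range(T)]

print(round(cost(u), 3))
```

Since the problem is convex quadratic in the controls, the descent converges to the global minimum; the printed value can be compared against the optimal value $1.141$ reported in Example 1.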

5. Case of Two Agents and Two Players

Now we consider the case of competition for the agents' opinions. In comparison with the model described above, we assume that there are two players: one directly influences agent 1, and the other directly influences agent 2. The players have the same discount factor but differ in their costs per unit of influence intensity ($c_1$ and $c_2$) and their target opinion levels ($s_1$ and $s_2$).
The state equations for opinions of agents 1 and 2 are
$$x_1(t+1) = x_1(t) + a_1\left(\frac{x_1(t) + x_2(t)}{2} - x_1(t)\right) + u_1(t), \qquad (17)$$
$$x_2(t+1) = x_2(t) + a_2\left(\frac{x_1(t) + x_2(t)}{2} - x_2(t)\right) + u_2(t) \qquad (18)$$
with initial Condition (9).
Players 1 and 2 are willing to minimize the functionals
$$J_1(u_1, u_2) = \sum_{t=0}^{T-1} \delta^t \left[ (x_1(t) - s_1)^2 + (x_2(t) - s_1)^2 + c_1 u_1^2(t) \right] + \delta^T \left[ (x_1(T) - s_1)^2 + (x_2(T) - s_1)^2 \right], \qquad (19)$$
$$J_2(u_1, u_2) = \sum_{t=0}^{T-1} \delta^t \left[ (x_1(t) - s_2)^2 + (x_2(t) - s_2)^2 + c_2 u_2^2(t) \right] + \delta^T \left[ (x_1(T) - s_2)^2 + (x_2(T) - s_2)^2 \right] \qquad (20)$$
correspondingly, where $c_i > 0$ is player $i$'s cost per unit of influence intensity.
We can define a two-player game in normal form with the sets of players' strategies $U_1, U_2$, where $U_j = (u_j(t) \in \mathbb{R} : t = 0, \ldots, T-1)$, $j = 1, 2$, and the players' cost functions $J_1, J_2$ defined by Formulas (19) and (20) subject to the state Equations (17) and (18) with initial state $x(0) = (x_1(0), x_2(0)) = (x_1^0, x_2^0)$. The game belongs to the class of linear-quadratic games.
The Nash equilibrium is used as the optimality principle, i.e., a solution of the game is a profile of strategies $(u_1^*, u_2^*)$ such that the inequalities
$$J_1(u_1^*, u_2^*) \le J_1(u_1, u_2^*), \qquad J_2(u_1^*, u_2^*) \le J_2(u_1^*, u_2)$$
hold for any $u_1 \in U_1$ and $u_2 \in U_2$.
Let us fix $u_2(t)$, $t = 0, \ldots, T-1$, and find the best response of Player 1. We use the Euler-equation approach to find the Nash equilibrium in the game described. First, we express $u_1(t)$ from state Equation (17) as a function of $x_1$ and $x_2$ (and, similarly, $u_2(t)$ from state Equation (18)):
$$u_1(t) = x_1(t+1) - \left(1 - \frac{a_1}{2}\right) x_1(t) - \frac{a_1}{2}\, x_2(t).$$
The goal of the first player is to minimize
$$J_1(x_1, x_2, u_2) = \sum_{t=0}^{T-1} \delta^t \left[ (x_1(t) - s_1)^2 + (x_2(t) - s_1)^2 + c_1 \left( x_1(t+1) - \left(1 - \frac{a_1}{2}\right) x_1(t) - \frac{a_1}{2}\, x_2(t) \right)^2 \right] + \delta^T \left[ (x_1(T) - s_1)^2 + (x_2(T) - s_1)^2 \right],$$
subject to Condition (18). We form the Lagrange function
$$L_1(x_1, x_2, k^1) = J_1(x_1, x_2, u_2) + \sum_{t=0}^{T-1} k_t^1 \left( x_2(t+1) - \left(1 - \frac{a_2}{2}\right) x_2(t) - \frac{a_2}{2}\, x_1(t) - u_2(t) \right),$$
where $k^1 = \{k_t^1, t = 0, \ldots, T-1\}$. Finding the derivatives $\frac{\partial L_1(x_1, x_2, k^1)}{\partial x_1(t)}$ and $\frac{\partial L_1(x_1, x_2, k^1)}{\partial x_2(t)}$, $t = 1, \ldots, T$, and equating them to zero, we obtain the corresponding systems:
$$\begin{aligned} &c_1 \left[ x_1(t) - \left(1 - \frac{a_1}{2}\right) x_1(t-1) - \frac{a_1}{2}\, x_2(t-1) \right] + \delta \left\{ x_1(t) - s_1 - c_1 \left(1 - \frac{a_1}{2}\right) \left[ x_1(t+1) - \left(1 - \frac{a_1}{2}\right) x_1(t) - \frac{a_1}{2}\, x_2(t) \right] \right\} = \frac{a_2}{4}\, k_t^1 \delta^{-(t-1)}, \quad t = 1, \ldots, T-1, \\ &c_1 \left[ x_1(t) - \left(1 - \frac{a_1}{2}\right) x_1(t-1) - \frac{a_1}{2}\, x_2(t-1) \right] + \delta (x_1(t) - s_1) = 0, \quad t = T, \end{aligned} \qquad (21)$$
$$\begin{aligned} &2 (x_2(t) - s_1) - c_1 a_1 \left[ x_1(t+1) - \left(1 - \frac{a_1}{2}\right) x_1(t) - \frac{a_1}{2}\, x_2(t) \right] = \left[ \left(1 - \frac{a_2}{2}\right) k_t^1 - k_{t-1}^1 \right] \delta^{-t}, \quad t = 1, \ldots, T-1, \\ &x_2(t) - s_1 = -\frac{1}{2}\, k_{t-1}^1 \delta^{-t}, \quad t = T. \end{aligned} \qquad (22)$$
From the first equation of System (21) we obtain
$$k_t^1 = \frac{4}{a_2}\, \delta^{t-1} \left[ -\delta c_1 \left(1 - \frac{a_1}{2}\right) x_1(t+1) + \left( \delta + c_1 + c_1 \delta \left(1 - \frac{a_1}{2}\right)^2 \right) x_1(t) - c_1 \left(1 - \frac{a_1}{2}\right) x_1(t-1) + \delta c_1 \frac{a_1}{2} \left(1 - \frac{a_1}{2}\right) x_2(t) - c_1 \frac{a_1}{2}\, x_2(t-1) - \delta s_1 \right], \quad t = 1, \ldots, T-1.$$
Denoting $\hat{k}_t^1 = k_t^1 \delta^{-t}$, substituting it into System (22), and taking into account the second equation in System (21), we obtain the system of equations defining the best response of Player 1:
$$\begin{aligned} &2 (x_2(t) - s_1) - c_1 a_1 \left[ x_1(t+1) - \left(1 - \frac{a_1}{2}\right) x_1(t) - \frac{a_1}{2}\, x_2(t) \right] = \left(1 - \frac{a_2}{2}\right) \hat{k}_t^1 - \frac{\hat{k}_{t-1}^1}{\delta}, \quad t = 1, \ldots, T-1, \\ &c_1 \left[ x_1(t) - \left(1 - \frac{a_1}{2}\right) x_1(t-1) - \frac{a_1}{2}\, x_2(t-1) \right] + \delta (x_1(t) - s_1) = 0, \quad t = T, \\ &x_2(t) = s_1 - \frac{1}{2\delta}\, \hat{k}_{t-1}^1, \quad t = T, \end{aligned}$$
where
$$\hat{k}_t^1 = \frac{4}{a_2 \delta} \left[ -\delta c_1 \left(1 - \frac{a_1}{2}\right) x_1(t+1) + \left( \delta + c_1 + c_1 \delta \left(1 - \frac{a_1}{2}\right)^2 \right) x_1(t) - c_1 \left(1 - \frac{a_1}{2}\right) x_1(t-1) + \delta c_1 \frac{a_1}{2} \left(1 - \frac{a_1}{2}\right) x_2(t) - c_1 \frac{a_1}{2}\, x_2(t-1) - \delta s_1 \right], \quad t = 1, \ldots, T-1.$$
We determine the best response strategy of Player 2 in the same way.
The following theorem gives the necessary conditions of the Nash equilibrium for the game described above.
Theorem 2.
Let $\{(u_1, u_2),\ u_i = (u_i(t) : t = 0, \ldots, T-1),\ i = 1, 2\}$ be the Nash equilibrium in the game and $\{(x_1(t), x_2(t)) : t = 0, \ldots, T\}$ be the state trajectory corresponding to this equilibrium with initial condition $x_1(0) = x_1^0$, $x_2(0) = x_2^0$. Then they satisfy the system
$$\begin{aligned} &2 (x_j(t) - s_i) - c_i a_i \left[ x_i(t+1) - \left(1 - \frac{a_i}{2}\right) x_i(t) - \frac{a_i}{2}\, x_j(t) \right] = \left(1 - \frac{a_j}{2}\right) \hat{k}_t^i - \frac{\hat{k}_{t-1}^i}{\delta}, \quad t = 2, \ldots, T-1, \\ &c_i \left[ x_i(t) - \left(1 - \frac{a_i}{2}\right) x_i(t-1) - \frac{a_i}{2}\, x_j(t-1) \right] + \delta (x_i(t) - s_i) = 0, \quad t = T, \\ &x_j(t) = s_i - \frac{1}{2\delta}\, \hat{k}_{t-1}^i, \quad t = T, \end{aligned}$$
where $i, j = 1, 2$, $i \ne j$, and
$$\hat{k}_t^i = \frac{4}{a_j \delta} \left[ -\delta c_i \left(1 - \frac{a_i}{2}\right) x_i(t+1) + \left( \delta + c_i + c_i \delta \left(1 - \frac{a_i}{2}\right)^2 \right) x_i(t) - c_i \left(1 - \frac{a_i}{2}\right) x_i(t-1) + \delta c_i \frac{a_i}{2} \left(1 - \frac{a_i}{2}\right) x_j(t) - c_i \frac{a_i}{2}\, x_j(t-1) - \delta s_i \right], \quad t = 1, \ldots, T-1,$$
taking into account the state Equations (17) and (18) and the initial state $(x_1(0), x_2(0)) = (x_1^0, x_2^0)$.
Example 2.
Let the parameters of the game considered in this section be the following: $a_1 = 0.2$, $a_2 = 0.5$, $\delta = 1$, $c_1 = 10$, $c_2 = 5$, and let the initial opinions be $x_1(0) = 0.1$, $x_2(0) = 0.7$. For time horizon $T = 10$, the target opinions are $s_1 = 0.7$, $s_2 = 0.9$. The equilibrium state and strategy trajectories are presented in Figure 3 and Figure 4. The equilibrium costs of Players 1 and 2 are $1.13548$ and $2.99631$, respectively.

6. Case of Many Agents and Two Players

In this section we extend the model introduced in Section 5 to the case when there are more than two agents. The finite set of agents is divided into two groups. The first group contains agent 1, whose opinion is influenced by the average opinion of the society and by Player 1. The second group contains $n$ identical agents, whose opinions are influenced by the average opinion of the society and by Player 2. All agents in the second group have the same initial opinion. The dynamics of the agents' opinions are given by the equations
$$x_1(t+1) = x_1(t) + a_1\left(\frac{\sum_{j=1}^{n+1} x_j(t)}{n+1} - x_1(t)\right) + u_1(t), \qquad (24)$$
$$x_i(t+1) = x_i(t) + a_2\left(\frac{\sum_{j=1}^{n+1} x_j(t)}{n+1} - x_i(t)\right) + u_2(t), \quad i = 2, \ldots, n+1, \qquad (25)$$
with initial condition $x_1(0) = x_1^0$ and $x_2(0) = \cdots = x_{n+1}(0) = x_2^0$. Since agents $2, \ldots, n+1$ are identical, we obtain $x_i(t) = x_j(t)$ for any $i, j = 2, \ldots, n+1$ and any $t = 0, \ldots, T$.
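The symmetry claim can be checked directly: under the dynamics (24) and (25), agents $2, \ldots, n+1$ receive identical influence and the same control $u_2(t)$, so starting from a common opinion they stay identical forever, and the system effectively reduces to two state variables. The control sequences below are arbitrary illustrative choices.

```python
# Symmetry check for the many-agent dynamics (24)-(25): group-2 agents that
# start equal remain exactly equal under any common control u2.

n, T = 4, 20
a1, a2 = 0.3, 0.6
x = [0.2] + [0.8] * n                  # x_1(0) = 0.2, common group-2 opinion

for t in range(T):
    u1 = 0.05 * (-1) ** t              # arbitrary bounded controls
    u2 = 0.02
    mean = sum(x) / (n + 1)
    new = [x[0] + a1 * (mean - x[0]) + u1]
    new += [x[i] + a2 * (mean - x[i]) + u2 for i in range(1, n + 1)]
    x = new

print(max(x[1:]) - min(x[1:]))  # 0.0: the group stays perfectly symmetric
```

This is why the analysis below can work with a single representative opinion $x_2(t)$ for the whole second group.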
Players 1 and 2 are willing to minimize the functionals
$$J_1(u_1, u_2) = \sum_{t=0}^{T-1} \delta^t \left[ \sum_{j=1}^{n+1} (x_j(t) - s_1)^2 + c_1 u_1^2(t) \right] + \delta^T \sum_{j=1}^{n+1} (x_j(T) - s_1)^2, \qquad (26)$$
$$J_2(u_1, u_2) = \sum_{t=0}^{T-1} \delta^t \left[ \sum_{j=1}^{n+1} (x_j(t) - s_2)^2 + n c_2 u_2^2(t) \right] + \delta^T \sum_{j=1}^{n+1} (x_j(T) - s_2)^2, \qquad (27)$$
correspondingly.
Thus, we define a two-player game in normal form with the sets of players' strategies $U_1, U_2$, where $U_j = (u_j(t) \in \mathbb{R} : t = 0, \ldots, T-1)$, $j = 1, 2$, and cost functions of the players $J_1, J_2$ given by Functionals (26) and (27) subject to the state Equations (24) and (25) with initial state $x(0) = (x_1(0), x_2(0), \ldots, x_{n+1}(0)) = (x_1^0, x_2^0, \ldots, x_2^0)$. The game belongs to the class of linear-quadratic games.
The following theorem gives the necessary conditions of the Nash equilibrium for the game described above.
Theorem 3.
Let $\{(u_1, u_2),\ u_i = (u_i(t) : t = 0, \ldots, T-1),\ i = 1, 2\}$ be the Nash equilibrium and $\{(x_1(t), x_2(t), \ldots, x_{n+1}(t)) : t = 0, \ldots, T\}$, where $x_2(t) = x_3(t) = \cdots = x_{n+1}(t)$, be the corresponding state trajectory with initial condition $x_1(0) = x_1^0$, $x_2(0) = \cdots = x_{n+1}(0) = x_2^0$. Then they satisfy the system
$$\begin{aligned} &(x_j(t) - s_i) - c_i a_i \frac{n^{i-1}}{n+1} \left[ x_i(t+1) - \left(1 - \frac{n^{j-1}}{n+1}\, a_i\right) x_i(t) - \frac{n^{j-1}}{n+1}\, a_i x_j(t) \right] = \frac{1}{2} \left(1 - \frac{n}{n+1}\, a_j\right) \hat{k}_t^i - \frac{\hat{k}_{t-1}^i}{2\delta}, \quad t = 1, \ldots, T-1, \\ &n^{i-1} c_i \left[ x_i(t) - \left(1 - \frac{n^{j-1}}{n+1}\, a_i\right) x_i(t-1) - \frac{n^{j-1}}{n+1}\, a_i x_j(t-1) \right] + \delta (x_i(t) - s_i) = 0, \quad t = T, \\ &x_j(t) = s_i - \frac{1}{2\delta}\, \hat{k}_{t-1}^i, \quad t = T, \end{aligned} \qquad (28)$$
where $i, j = 1, 2$, $i \ne j$, and
$$\hat{k}_t^1 = \frac{2(n+1)}{n a_2 \delta} \left[ -\delta c_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t+1) + \left( \delta + c_1 + c_1 \delta \left(1 - \frac{n}{n+1}\, a_1\right)^2 \right) x_1(t) - c_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t-1) + \delta c_1 \frac{n}{n+1}\, a_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_2(t) - c_1 \frac{n}{n+1}\, a_1 x_2(t-1) - \delta s_1 \right], \quad t = 1, \ldots, T-1,$$
$$\hat{k}_t^2 = \frac{2(n+1)}{n a_1 \delta} \left[ -\delta c_2 \left(1 - \frac{n}{n+1}\, a_2\right) x_2(t+1) + \left( \delta + n c_2 + n c_2 \delta \left(1 - \frac{n}{n+1}\, a_2\right) \left(1 - \frac{1}{n+1}\, a_2\right) \right) x_2(t) - n c_2 \left(1 - \frac{1}{n+1}\, a_2\right) x_2(t-1) + \delta n c_2 \frac{1}{n+1}\, a_2 \left(1 - \frac{n}{n+1}\, a_2\right) x_1(t) - n c_2 \frac{1}{n+1}\, a_2 x_1(t-1) - \delta s_2 \right], \quad t = 1, \ldots, T-1,$$
taking into account the state Equations (24) and (25) and the initial state $(x_1(0), x_2(0), x_3(0), \ldots, x_{n+1}(0)) = (x_1^0, x_2^0, x_2^0, \ldots, x_2^0)$.
Proof of Theorem 3.
First, we fix $u_2(t)$, $t = 0, \ldots, T-1$, and find the best response of Player 1. Using the Euler-equation approach to find the Nash equilibrium in the game described, we express $u_1(t)$ from state Equation (24) as a function of $x_1, x_2, \ldots, x_{n+1}$ (and, similarly, $u_2(t)$ from state Equation (25)):
$$u_1(t) = x_1(t+1) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t) - \frac{a_1}{n+1} \sum_{j=2}^{n+1} x_j(t).$$
The goal of the first player is to minimize
$$J_1(x_1, x_2, \ldots, x_{n+1}, u_2) = \sum_{t=0}^{T-1} \delta^t \left\{ (x_1(t) - s_1)^2 + \sum_{j=2}^{n+1} (x_j(t) - s_1)^2 + c_1 \left( x_1(t+1) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t) - \frac{a_1}{n+1} \sum_{j=2}^{n+1} x_j(t) \right)^2 \right\} + \delta^T \left[ (x_1(T) - s_1)^2 + \sum_{j=2}^{n+1} (x_j(T) - s_1)^2 \right],$$
subject to Equation (25). We form the Lagrange function
$$L_1(x_1, \ldots, x_{n+1}, k^1) = J_1(x_1, x_2, \ldots, x_{n+1}, u_2) + \sum_{t=0}^{T-1} \sum_{i=2}^{n+1} k_t^{1i} \left( x_i(t+1) - \left(1 - \frac{1}{n+1}\, a_2\right) x_i(t) - \frac{a_2}{n+1} \sum_{j=1, j \ne i}^{n+1} x_j(t) - u_2(t) \right),$$
where $k^1 = \{k_t^{1j}, j = 2, \ldots, n+1, t = 0, \ldots, T-1\}$. Finding the derivatives $\frac{\partial L_1}{\partial x_1(t)}$ and $\frac{\partial L_1}{\partial x_i(t)}$, $i = 2, \ldots, n+1$, $t = 1, \ldots, T$, equating them to zero, and taking into account the symmetry of agents $2, \ldots, n+1$, we obtain the corresponding systems:
$$\begin{aligned} &c_1 \left[ x_1(t) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t-1) - \frac{n}{n+1}\, a_1 x_2(t-1) \right] + \delta \left\{ x_1(t) - s_1 - c_1 \left(1 - \frac{n}{n+1}\, a_1\right) \left[ x_1(t+1) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t) - \frac{n}{n+1}\, a_1 x_2(t) \right] \right\} = \frac{n}{2(n+1)}\, a_2 k_t^1 \delta^{-(t-1)}, \quad t = 1, \ldots, T-1, \\ &c_1 \left[ x_1(t) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t-1) - \frac{n}{n+1}\, a_1 x_2(t-1) \right] + \delta (x_1(t) - s_1) = 0, \quad t = T, \end{aligned} \qquad (29)$$
$$\begin{aligned} &x_2(t) - s_1 - c_1 a_1 \frac{1}{n+1} \left[ x_1(t+1) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t) - \frac{n}{n+1}\, a_1 x_2(t) \right] = \left[ \left(1 - \frac{n}{n+1}\, a_2\right) k_t^1 - k_{t-1}^1 \right] \delta^{-t}, \quad t = 1, \ldots, T-1, \\ &x_2(t) - s_1 = -\frac{1}{2}\, k_{t-1}^1 \delta^{-t}, \quad t = T. \end{aligned} \qquad (30)$$
From the first equation of System (29) we obtain
$$k_t^1 = \frac{2(n+1)}{n a_2}\, \delta^{t-1} \left[ -\delta c_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t+1) + \left( \delta + c_1 + c_1 \delta \left(1 - \frac{n}{n+1}\, a_1\right)^2 \right) x_1(t) - c_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t-1) + \delta c_1 \frac{n}{n+1}\, a_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_2(t) - c_1 \frac{n}{n+1}\, a_1 x_2(t-1) - \delta s_1 \right], \quad t = 1, \ldots, T-1.$$
Denoting $\hat{k}_t^1 = k_t^1 \delta^{-t}$, substituting it into System (30), and taking into account the second equation in System (29), we obtain the system of equations defining the best response of Player 1:
$$\begin{aligned} &x_2(t) - s_1 - a_1 c_1 \frac{1}{n+1} \left[ x_1(t+1) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t) - \frac{n}{n+1}\, a_1 x_2(t) \right] = \frac{1}{2} \left(1 - \frac{n}{n+1}\, a_2\right) \hat{k}_t^1 - \frac{\hat{k}_{t-1}^1}{2\delta}, \quad t = 1, \ldots, T-1, \\ &c_1 \left[ x_1(t) - \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t-1) - \frac{n}{n+1}\, a_1 x_2(t-1) \right] + \delta (x_1(t) - s_1) = 0, \quad t = T, \\ &x_2(t) = s_1 - \frac{1}{2\delta}\, \hat{k}_{t-1}^1, \quad t = T, \end{aligned}$$
where
$$\hat{k}_t^1 = \frac{2(n+1)}{n a_2 \delta} \left[ -\delta c_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t+1) + \left( \delta + c_1 + c_1 \delta \left(1 - \frac{n}{n+1}\, a_1\right)^2 \right) x_1(t) - c_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_1(t-1) + \delta c_1 \frac{n}{n+1}\, a_1 \left(1 - \frac{n}{n+1}\, a_1\right) x_2(t) - c_1 \frac{n}{n+1}\, a_1 x_2(t-1) - \delta s_1 \right], \quad t = 1, \ldots, T-1.$$
We determine the best response strategy of Player 2 in the same way and obtain the System (28). □
Example 3.
Let the parameters of the game considered in this section be the following: $a_1 = 0.01$, $a_2 = 0.5$, $\delta = 1$, $c_1 = 10$, $c_2 = 7$, and let the initial opinions be $x_1(0) = 0.3$, $x_2(0) = 0.7$. For time horizon $T = 10$, the target opinions are $s_1 = 0.7$, $s_2 = 0.9$. The equilibrium state and strategy trajectories are presented in Figure 5 and Figure 6. The equilibrium costs of Players 1 and 2 are $2.36558$ and $4.39855$, respectively.
Example 4.
We also demonstrate how the costs of the players depend on the number of agents $n$. Consider the game with two players and the following parameters: $a_1 = 0.01$, $a_2 = 0.4$, $\delta = 1$, $c_1 = 15$, $c_2 = 5$, and let the initial opinions be $x_1(0) = 0.2$, $x_2(0) = 0.5$. For time horizon $T = 10$, the target opinions are $s_1 = 0.6$, $s_2 = 0.9$. The corresponding graphs of the costs as functions of $n$ are depicted in Figure 7. One can notice that the cost function of Player 1, who directly influences agent 1, is non-monotonic: it first decreases for $n$ from 1 to 5 and then increases. As this fact is not clearly recognizable in Figure 7, we provide the values of the costs of Player 1 for $n$ from 1 to 20: 0.97, 0.94, 0.90, 0.87, 0.85, 0.86, 0.89, 0.92, 0.97, 1.03, 1.09, 1.16, 1.23, 1.31, 1.39, 1.47, 1.56, 1.64, 1.73, 1.83.
The cost function of Player 2 is an increasing function of the number of agents in the network.
Remark 2.
The Euler-equation approach can be considered an indirect method of solving optimal control problems, in contrast to Pontryagin's maximum principle. It defines a system of equations with respect to the state variables; in many cases the method reduces to solving a difference equation in the state variables. We should also note the computational advantage of the Euler equation: its implementations (see Examples 1–4) provide better results than implementations of Pontryagin's maximum principle. For example, the program finding the equilibrium strategies and costs in Example 2 for 20 periods returns results in 5 s with the Euler-equation approach, whereas with Pontryagin's maximum principle it takes more than 20 min.

7. Conclusions

We propose models of opinion dynamics with one or two players who may influence the society's opinion by interacting with the agents. The opinion of an agent, being a member of a society, is formed taking into account not only his own past opinion but also the past average opinion of the society. We use the arithmetic mean of the opinions as the average opinion, while other methods of averaging, including the geometric mean, a power mean, or a random mean [23], may be considered in future research. We find the optimal influence intensity of the player in the case when he is the unique center of influence, and the Nash equilibria when there are two centers, both in the case of two agents and in the case of many agents in the society. We use the Euler-equation approach to find the solutions. The equilibrium policies found in the paper may help in planning the "influence strategy" of the players, minimizing their costs to make the opinions of the members of a social network as close to the desired ones as possible. The method of solving the problem may be applied to models with more than two players and many groups of influence, although the presentation of the resulting systems of equations providing the Nash equilibrium may be difficult in paper format. Numerical modeling demonstrates the theoretical results of the paper.

Author Contributions

Conceptualization, V.M. and E.P.; methodology, V.M. and E.P.; software, V.M. and E.P.; validation, V.M. and E.P.; formal analysis, V.M. and E.P.; investigation, V.M. and E.P.; resources, V.M. and E.P.; data curation, V.M. and E.P.; writing—original draft preparation, V.M. and E.P.; writing—review and editing, V.M. and E.P.; visualization, V.M. and E.P.; supervision, V.M. and E.P.; project administration, V.M. and E.P.; funding acquisition, V.M. and E.P. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Shandong Province “Double-Hundred Talent Plan” (No. WST2017009).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. DeGroot, M.H. Reaching a Consensus. J. Am. Stat. Assoc. 1974, 69, 118–121. [Google Scholar] [CrossRef]
  2. Barabanov, I.N.; Korgin, N.A.; Novikov, D.A.; Chkhartishvili, A.G. Dynamic models of informational control in social networks. Autom. Remote Control. 2010, 71, 2417–2426. [Google Scholar] [CrossRef]
  3. Gubanov, D.A.; Novikov, D.A.; Chkhartishvili, A.G. Informational Influence and Informational Control Models in Social Networks. Autom. Remote Control 2011, 72, 1557–1567. [Google Scholar] [CrossRef]
  4. Chkhartishvili, A.G.; Gubanov, D.A.; Novikov, D.A. Social Networks: Models of Information Influence, Control and Confrontation; Springer Nature: Cham, Switzerland, 2019. [Google Scholar]
  5. Gubanov, D.A.; Novikov, D.A.; Chkhartishvili, A.G. Sotsial’nye Seti: Modeli Informatsionnogo Vliyaniya, Upravleniya i Protivoborstva (Social Networks: Models of Informational Influence, Control and Confrontation); Fizmatlit: Moscow, Russia, 2010. [Google Scholar]
  6. Bure, V.M.; Parilina, E.M.; Sedakov, A.A. Consensus in a social network with two principals. Autom. Remote Control 2017, 78, 1489–1499. [Google Scholar] [CrossRef]
  7. Bauso, D.; Cannon, M. Consensus in opinion dynamics as a repeated game. Automatica 2018, 90, 204–211. [Google Scholar] [CrossRef]
  8. Bauso, D.; Tembine, H.; Basar, T. Opinion dynamics in social networks through mean field games. SIAM J. Control Optim. 2016, 54, 3225–3257. [Google Scholar] [CrossRef]
  9. Bure, V.M.; Ekimov, A.V.; Svirkin, M.V. A simulation model of forming profile opinions within the collective. Vestn. St. Petersburg Univ. Appl. Math. Comput. Sci. Control Process. 2015, 3, 93–98. [Google Scholar]
  10. Rogov, M.A.; Sedakov, A.A. Coordinated influence on the beliefs of social network members. Math. Game Theory Appl. 2018, 10, 30–58. [Google Scholar]
  11. Sedakov, A.; Zhen, M. Opinion dynamics game in a social network with two influence nodes. Vestn. St. Petersburg Univ. Appl. Math. Comput. Sci. Control Process. 2019, 15, 118–125. [Google Scholar]
  12. Epitropou, M.; Fotakis, D.; Hoefer, M.; Skoulakis, S. Opinion Formation Games with Aggregation and Negative Influence. Theory Comput. Syst. 2019, 63, 1531–1553. [Google Scholar] [CrossRef]
  13. Mazalov, V.; Parilina, E. Game of competition for opinion with two centers of influence. In Mathematical Optimization Theory and Operations Research, MOTOR 2019, Proceedings; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11548, pp. 673–684. [Google Scholar]
  14. Parilina, E.; Sedakov, A. Stable Cooperation in a Game with a Major Player. Int. Game Theory Rev. 2016, 18, 1640005. [Google Scholar] [CrossRef]
  15. Mazalov, V.V. Mathematical Game Theory and Applications; Wiley, Wiley Desktop Editions: Hoboken, NJ, USA, 2014. [Google Scholar]
  16. Avrachenkov, K.E.; Kondratev, A.Y.; Mazalov, V.V. Cooperative Game Theory Approaches for Network Partitioning. In Computing and Combinatorics, COCOON 2017, LNCS; Cao, Y., Chen, J., Eds.; Springer: Heidelberg, Germany, 2017; Volume 10392, pp. 591–602. [Google Scholar]
  17. Mazalov, V.V. Comparing game-theoretic and maximum likelihood approaches for network partitioning. In Transactions on Computational Collective Intelligence XXXI; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11290, pp. 37–46. [Google Scholar]
  18. Engwerda, J. LQ Dynamic Optimization and Differential Games; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  19. Haurie, A.; Krawczyk, J.B.; Zaccour, G. Games and Dynamic Games; World Scientific Publishing: Singapore, 2012. [Google Scholar]
  20. González-Sánchez, D.; Hernandez-Lerma, O. Discrete-Time Stochastic Control and Dynamic Potential Games: The Euler-Equation Approach; Springer International Publishing: Cham, Switzerland, 2013. [Google Scholar]
  21. Dechert, D. Optimal Control Problems from Second-Order Difference Equations. J. Econ. Theory 1978, 19, 50–63. [Google Scholar] [CrossRef]
  22. González-Sánchez, D.; Hernandez-Lerma, O. On the Euler-Equation Approach to Discrete-Time Nonstationary Optimal Control Problems. J. Dyn. Games 2014, 1, 57–78. [Google Scholar] [CrossRef]
  23. Hegselmann, R.; Krause, U. Opinion Dynamics Driven by Various Ways of Averaging. Comput. Econ. 2005, 25, 381–405. [Google Scholar] [CrossRef]
Figure 1. State trajectories (red—x_1(t), blue—x_2(t)).
Figure 2. Strategy trajectory u(t).
Figure 3. State trajectories (red—x_1(t), blue—x_2(t)).
Figure 4. Strategy trajectories (red—u_1(t), blue—u_2(t)).
Figure 5. State trajectories (red—x_1(t), blue—x_2(t)).
Figure 6. Strategy trajectories (red—u_1(t), blue—u_2(t)).
Figure 7. Players' cost functions (red—J_1, blue—J_2).
Table 1. Optimal state trajectory and control.

t        t=0    t=1    t=2    t=3    t=4    t=5    t=6    t=7    t=8
x_1(t)   0.4    1.009  0.985  0.954  0.933  0.921  0.911  0.905  0.899
x_2(t)   0.3    0.33   0.533  0.669  0.755  0.808  0.842  0.863  0.875
z(t)     0.1    0.679  0.452  0.285  0.178  0.112  0.069  0.042  0.023
u(t)     0.649  0.248  0.149  0.093  0.058  0.036  0.021  0.011

