Discrete Versus Continuous Algorithms in Dynamics of Affective Decision Making

The dynamics of affective decision making is considered for an intelligent network composed of agents with different types of memory: long-term and short-term memory. The consideration is based on probabilistic affective decision theory, which takes into account the rational utility of alternatives as well as the emotional attractiveness of the alternatives. The objective of this paper is the comparison of two multistep operational algorithms of the intelligent network: one based on discrete dynamics and the other on continuous dynamics. By means of numerical analysis, it is shown that, depending on the network parameters, the characteristic probabilities for continuous and discrete operations can exhibit either close or drastically different behavior. Thus, depending on which algorithm is employed, either discrete or continuous, theoretical predictions can be rather different, which does not allow for a uniquely defined description of practical problems. This finding is important for understanding which of the algorithms is more appropriate for the correct analysis of decision-making tasks. A discussion is given, revealing that the discrete operation seems to be more realistic for describing intelligent networks as well as affective artificial intelligence.


Introduction
Algorithms for modeling dynamic decision making are important for understanding and predicting the behavior of societies with regard to many principal problems that people encounter in their lives. As examples of such problems, it is possible to mention climate change, factory production, traffic control, firefighting, driving a car, military command, and so on. Research in dynamic decision making has focused on investigating the extent to which decision makers can use the information they obtain and the experience they acquire in making decisions. Dynamic decision making is a multiple, interdependent, real-time decision process occurring in a changing environment. The latter can change independently or as a function of a sequence of actions by decision makers [1-4].
A society of decision makers forms a network, where separate agents play the role of network nodes. Decision making in networks has been studied in many papers, which are summarized in the recent reviews [5-8]. The role of moral preferences, with agents following their personal and social norms, has also been studied [7].
Here, we consider dynamic decision making in a network of intelligent agents. The agents make decisions in the framework of affective decision theory, a probabilistic theory in which the agents choose alternatives taking account of both utility and emotions [9,10]. This theory can serve as a basis for creating affective artificial intelligence [11]. The society of intelligent agents forms an intelligent network. Interactions between the agents occur through the exchange of information and through the herding effect.
Real-life situations are usually modeled by computer simulations, which is termed microworld modeling [1,12]. The derivation of equations in dynamic decision making can be achieved by assuming the time variation of an observable quantity in the presence of noise and then passing to the equations for the corresponding probabilities [13]. An important point in dynamic decision making is that living beings need to accumulate information adaptively in order to make sound decisions [14,15]. This stresses the necessity of obtaining sufficient information for making optimal decisions. The received information accumulates in memory, which can be of different types, say, long-term and short-term. Generally, the type of memory depends on the environment and on the personality of the decision makers. For example, in quickly changing environments, animals use decision strategies that value recent observations more than older ones [16-18], whereas in gradually varying environments, they can have rather long-term memory. Human beings can have both types of memory, long-term and short-term [19].
Decision making in a society of many agents involves several problems. One of them is associated with multi-agent reinforcement learning [20]. In the latter, one considers a society of many agents in an environment shared by all members. The agents can accomplish actions leading to a change of the environmental state, with a transition probability usually characterized by a Markov process. At each step of the procedure, each agent receives an immediate reward, generally diminishing with time due to time discounting. The aim of each agent is to find a behavioral policy, that is, a strategy that can guide the agent to take sequential actions maximizing the discounted cumulative reward.
The setup we consider has some analogies with, although is quite different from, multi-agent reinforcement learning. We consider a society where the environment for an agent consists of the other society members. The state of the society is the set of probabilities of choosing alternatives by each member, with the probabilities taking account of the utility of alternatives as well as their attractiveness influencing the agents' emotions. The actions executed by the agents are the exchange of information on the choices of all other members. The aim of the agents is to find out whether stable distributions over the set of alternatives exist and, if so, what type of attractors they correspond to. The principal difference from multi-agent reinforcement learning is in two aspects: first, the aim is not a maximal reward, but a stable probability distribution over the given alternatives; and second, the influence of emotions is taken into account.
Considering a sequence of multistep decision events, it is possible to accept two types of dynamics, based on an algorithm with either discrete or continuous time. The aim of the present paper is to compare these two kinds of algorithms in order to understand whether they are equivalent and whether they could lead to qualitatively different results. If the conclusions turn out to be principally different, it is necessary to decide which of the two ways has to be used for the correct description of realistic situations.
The layout of the paper is as follows. So that the reader can better understand the approach to affective decision making used in the present paper, it seems necessary to recall the main points of this approach, which is presented in Section 2. In Section 3, the process of affective decision making in a society is formulated. In Section 4, the picture is specified for a society composed of two groups of agents choosing between two alternatives in a multistep dynamics of decision making. One group of agents enjoys long-term memory, while the other has short-term memory. Section 5 reformulates the dynamical process of multistep discrete decision making into a continuous process characterized by continuous time. In Section 6, a detailed numerical investigation comparing the discrete and continuous algorithms of affective decision making is presented. Section 7 concludes.

Affective Decision Making by Individuals
The usual approach to decision making is based on constructing a utility functional for each of the alternatives from the considered set [21,22]. In order to include the role of emotions, the expected utility is modified by adding terms characterizing the influence of emotions [23-26]. Thus, one tries to incorporate into the utility at once both sides of decision making: rational reasoning, based on logical normative rules, and irrational unconscious emotions, such as joy, sadness, anger, fear, happiness, disgust, and surprise. The alternative that corresponds to the largest expected utility is treated as optimal and is the one to be preferred.
The approach we are using is principally different in several aspects: (i) This is a probabilistic theory, where the main characteristics are the probabilities of choosing each of the given alternatives. (ii) The probability of a choice is the sum of a utility factor, describing the probability of a choice based on rational reasoning, and an attraction factor, characterizing the influence of emotions. (iii) The optimal, or more correctly, stochastically optimal alternative is that which is associated with the largest probability.
The mathematically rigorous axiomatic formulation of the theory has been carried out in Refs. [9-11]. The theory starts with the process of making decisions by separate individuals. Here, we state the main points of the approach so that the reader can better understand the extension to decision making by a society, as presented in this paper.
First of all, decision making is understood as a probabilistic process. Let us consider decision makers choosing between the alternatives from a set

𝒜 = { A_n : n = 1, 2, …, N_A } .

The decision makers are considered as separate agents making decisions independently of each other. Equivalently, it is possible to keep in mind a single decision maker deciding on the given alternatives. The aim is to define the probability p(A_n) of choosing an alternative A_n. This probability can be understood as either the fraction of agents choosing this alternative or the frequency of choices of the alternative A_n by a separate decision maker. Of course, the probability is normalized:

∑_n p(A_n) = 1 ,    p(A_n) ≥ 0 .

The process of taking decisions consists of two sides. One evaluates the utility of the alternatives as well as the attractiveness of the alternatives, which is influenced by emotions with respect to the choice. Therefore, the probability p(A_n) of choosing an alternative A_n is a behavioral probability consisting of two terms, a utility factor f(A_n) and an attraction factor q(A_n):

p(A_n) = f(A_n) + q(A_n) .

The utility factor f(A_n) represents the rational probability of choosing an alternative A_n, being based on the rational evaluation of the alternative utility, with the normalization

∑_n f(A_n) = 1 ,    f(A_n) ≥ 0 .

The attraction factor characterizes the influence of emotions on the choice of the alternative A_n. Emotions can be positive or negative. For instance, positive emotions include joy, happiness, pride, calm, serenity, love, gratitude, cheerfulness, euphoria, satisfaction (moral or physical), inspiration, amusement, and pleasure. Examples of negative emotions are sadness, anger, fear, disgust, guilt, shame, anxiety, loneliness, and disappointment.
Taking into account the above normalization conditions, together with the relation p(A_n) = f(A_n) + q(A_n), implies that the attraction factors sum to zero,

∑_n q(A_n) = 0 ,

so that −1 ≤ q(A_n) ≤ 1. To be more precise, the attraction factor varies in the interval

−f(A_n) ≤ q(A_n) ≤ 1 − f(A_n) .

An alternative A_opt is stochastically optimal if and only if it corresponds to the maximal behavioral probability,

p(A_opt) = max_n p(A_n) .

Let the alternatives be characterized by utilities (or value functionals) U(A_n). The utility factor (rational probability) f(A_n) can be derived from the minimization of an information functional, with a prior distribution f_0(A_n) defined by the Luce rule [27,28],

f_0(A_n) = U(A_n) / ∑_m U(A_m) ,    U(A_n) ≥ 0 ,

which gives

f(A_n) = f_0(A_n) e^{β U(A_n)} / ∑_m f_0(A_m) e^{β U(A_m)} .

The parameter β is a belief parameter characterizing the level of certainty of a decision maker in the fairness of the decision task and the subject's confidence in their understanding of the overall rules and conditions of the decision problem [9-11]. Here, we keep in mind rational beliefs representing reasonable, objective, flexible, and constructive conclusions or inferences about reality [29,30].
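As a numerical illustration of the utility factor described above, the following sketch computes f(A_n) from given utilities via the Luce prior and the logit-type weighting with the belief parameter β. The utilities and the value of β are arbitrary illustrative numbers, not taken from the text.

```python
import math

def utility_factors(utilities, beta):
    """Utility factors f(A_n) from non-negative utilities U(A_n):
    Luce prior f0(A_n) = U(A_n) / sum_m U(A_m), then a logit-type
    weighting with the belief parameter beta."""
    total = sum(utilities)
    f0 = [u / total for u in utilities]
    weights = [f * math.exp(beta * u) for f, u in zip(f0, utilities)]
    norm = sum(weights)
    return [w / norm for w in weights]

# Three alternatives with illustrative utilities
f = utility_factors([1.0, 2.0, 3.0], beta=1.0)
print(f)          # normalized rational probabilities, increasing with utility
print(sum(f))     # normalization check
```

Note that β = 0 reduces the utility factor to the Luce prior itself, while large β concentrates the choice on the alternative with the largest utility.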
The attraction factor is a random quantity that is different for different decision makers and even for the same decision maker at different times. The average values of the attraction factor for positive or negative emotions can be estimated by non-informative priors as ±0.25, respectively [10,11]. The description of decision making by independent agents in the frame of probabilistic affective decision making has been studied and expounded in detail in Refs. [9-11]. The aim of the present paper is to extend the theory from single-step affective decision making of a single agent to multistep dynamic affective decision making by a society of many decision makers.
Utility factors are objective quantities that can be calculated provided the utilities of alternatives U(A_n) are defined. Generally, U(A_n) can be an expected utility, a value functional, or any other functional measuring the rational utility of alternatives. For example, in the case of multi-criteria decision making, this can be an objective function defined by one of the known multi-criteria evaluation methods [31-34]. For the purpose of the present paper, we do not need to plunge into the numerous methods of evaluating the utility of alternatives. We assume that the utility factor is defined in one of these ways. Our basic goal is the investigation of the role of emotions.
In what follows, we assume that the utility factors, evaluated at the initial moment of time, do not change, since their values have been objectively defined. On the contrary, the attraction factors depend on emotions that change in the process of decision making, due to the exchange of information between the society members and because the behavior of decision makers is influenced by the actions of other members of a society.

Discrete Dynamics in Affective Decision Making
The approach to affective decision making considered in the present paper is based on the probabilistic theory [9-11] characterized by the probabilities of choosing an alternative among the set of given alternatives, taking account of utility as well as emotions. In studying dynamic equations, one has to define initial conditions, that is, the utility factors and attraction factors at time t = 0. At the initial time, the decisions are taken by the agents independently, since they have had no time for exchanging information and observing the behavior of their neighbors. Thus, the initial behavioral probabilities define the required initial conditions for the following dynamics.
A society, or a network, is considered to consist of many agents. For each member of the society, the other members play the role of the surrounding environment. The agents of a society interact with each other through the exchange of information and by imitating the actions of others. The probability dynamics is due to these features [35-37].
Let us consider N_A alternatives between which one needs to make a choice. The alternatives are enumerated by the index n = 1, 2, …, N_A. A society of N_tot agents is making a choice among the available alternatives. The overall society is structured into N groups enumerated by the index j = 1, 2, …, N. Each group differs from the other groups by its specific features, such as its type of memory and its inclination to replicate the actions of others, which is termed herding. The herding effect is well known and has been studied in a voluminous literature [38-46].
The number of agents in a group j is N_j, so that summation over all groups gives the total number of agents,

∑_{j=1}^{N} N_j = N_tot .

The number of agents in a group j choosing an alternative A_n at time t is N_j(A_n, t). Since each member of a group j chooses one alternative,

∑_{n=1}^{N_A} N_j(A_n, t) = N_j .

The probability that a member of a group j chooses an alternative A_n at time t is

p_j(A_n, t) = N_j(A_n, t) / N_j ,

which satisfies the normalization condition

∑_{n=1}^{N_A} p_j(A_n, t) = 1 ,    0 ≤ p_j(A_n, t) ≤ 1 .

This probability is a functional of the utility factor f_j(A_n, t) and the attraction factor q_j(A_n, t). The utility factor characterizes the utility of an alternative A_n at time t and obeys the normalization condition

∑_{n=1}^{N_A} f_j(A_n, t) = 1 ,    f_j(A_n, t) ≥ 0 .

The attraction factor quantifies the influence of emotions when selecting an alternative A_n at time t and satisfies the condition

∑_{n=1}^{N_A} q_j(A_n, t) = 0 .

At the initial moment of time t = 0, the functional dependence of the probability on the utility and attraction factors has the form

p_j(A_n, 0) = f_j(A_n, 0) + q_j(A_n, 0) ,

where the initial utility factor and attraction factor can be calculated following the rules explained in detail in earlier works [9-11,46-48].
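The bookkeeping behind these definitions is straightforward to sketch; the group sizes and choice counts below are made-up illustrative numbers.

```python
# Two groups (j = 1, 2) choosing among three alternatives at some time t.
# counts[j][n] = N_j(A_n, t), the number of agents of group j choosing A_n.
counts = [
    [30, 50, 20],   # group 1, N_1 = 100
    [10, 25, 15],   # group 2, N_2 = 50
]

group_sizes = [sum(row) for row in counts]    # N_j
N_tot = sum(group_sizes)                      # total number of agents

# p_j(A_n, t) = N_j(A_n, t) / N_j
probs = [[n_jn / N_j for n_jn in row] for row, N_j in zip(counts, group_sizes)]

for row in probs:
    assert abs(sum(row) - 1.0) < 1e-12        # normalization over alternatives
print(N_tot, probs)
```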
The tendency of the agents of a group j to replicate the actions of the members of other groups is described by the herding parameters ε_j, which lie in the interval

0 ≤ ε_j ≤ 1 .

These parameters can also be understood as the level of the tendency to act as others do, which in the present setup models the agents' cooperation.
Generally, the value of ε_j can vary in time. However, this variation is usually very slow, so that the herding parameters can be treated as constants characterizing the members of the related groups.
The time evolution, consisting of a number of subsequent decisions at the discrete moments of time t/τ = 1, 2, …, is given by the dynamic Equation (18), in which τ is a delay time required for taking a decision by an agent. It is possible to measure time in units of τ, keeping in mind the dimensionless time t = 1, 2, …. The time dependence of the utility factor can be prescribed by a discount function [11,49,50], and the temporal dependence of the attraction factor for an agent of a group j is defined by the amount of information received from other society members and kept in the memory M_j(t) by time t:

q_j(A_n, t) = q_j(A_n, 0) exp{ −M_j(t) } .    (19)

The derivation of Relation (19) can be achieved by resorting to the theory of quantum measurements [51] or by accepting the empirical fact [52-66] that the increase in information kept in the memory decreases the role of emotions, so that δq_j = −q_j δM_j. At the beginning, when t < 1, there is not yet any memory with respect to the choice between the present alternatives, so that M_j(t) = 0, and one returns to the initial condition (16). For times t ≥ 1, the memory is written as

M_j(t) = ∑_{t′=1}^{t} ∑_{i=1}^{N} J_ij(t, t′) μ_ji(t′) ,

where J_ij(t, t′) is the interaction transfer function describing the interaction between the agents i and j during the time from t′ to t, μ_ji is the information gain received by the agent j from the agent i, and the unit-step function

Θ(t) = 1 for t ≥ 0 ,    Θ(t) = 0 for t < 0

is used. In contemporary societies, the interaction between agents is of long-range type, since the society members are able to interact by exchanging information through numerous sources not depending on the distance, e.g.,
through phone, Skype, WhatsApp, and a number of other messengers. The long-range interactions are characterized by an interaction transfer that does not depend on the locations of the agents, J_ij(t, t′) = J(t, t′). In the case of short-range interactions, on the contrary, J_ij(t, t′) essentially depends on the fixed locations of the agents. However, the members of modern societies are not fixed forever at precisely prescribed locations. This concerns not only human societies, but animal groups as well. Therefore, the long-range interaction (22) looks to be the most realistic case.
The information gain can be taken in the Kullback-Leibler [67,68] form

μ_ji(t) = ∑_{n=1}^{N_A} p_i(A_n, t) ln [ p_i(A_n, t) / p_j(A_n, t) ] ,

so that the memory function (21) accumulates the weighted information gains received at all previous steps. From the point of view of duration, there exist two types of memory: long-term and short-term memory [19,69-72]. Long-term memory allows us to store information for long periods of time, including information that can be retrieved much later. This implies a weak dependence of the interaction transfer on time, J(t, t′) ≃ J = const, which defines the long-term memory accumulating all the received information gains. Short-term memory is the capacity to store a small amount of information in the mind and keep it readily available for a short period of time. Then, the interaction transfer is modeled by the function J(t, t′) = J δ_{t t′}, so that the short-term memory keeps only the latest information gain.

Two Groups with Binary Choice

For concreteness, let us study the case where the choice is between two alternatives, A_1 and A_2. Then, it is convenient to simplify the notation by setting, for the probabilities,

p_j(t) ≡ p_j(A_1, t) ,    p_j(A_2, t) = 1 − p_j(t) ,

for the utility factors,

f_j(t) ≡ f_j(A_1, t) ,    f_j(A_2, t) = 1 − f_j(t) ,

and for the attraction factors,

q_j(t) ≡ q_j(A_1, t) ,    q_j(A_2, t) = −q_j(t) ,

where the normalization conditions (13)-(15) are taken into account.
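The qualitative distinction between the two memory types, together with the exponential suppression of the attraction factor that follows from δq_j = −q_j δM_j, can be sketched as follows; the sequence of information gains is made up for illustration.

```python
import math

def long_term_memory(gains):
    """Long-term memory: M(t) accumulates all information gains received so far."""
    return sum(gains)

def short_term_memory(gains):
    """Short-term memory: M(t) keeps only the most recent information gain."""
    return gains[-1]

def attraction(q0, memory):
    """delta q = -q * delta M integrates to q(t) = q(0) * exp(-M(t))."""
    return q0 * math.exp(-memory)

gains = [0.4, 0.3, 0.2, 0.1]   # illustrative information gains mu(t')
q0 = 0.5
q_long = attraction(q0, long_term_memory(gains))    # memory keeps growing -> emotions fade
q_short = attraction(q0, short_term_memory(gains))  # memory stays small  -> emotions persist
print(q_long, q_short)
```

The sketch makes the mechanism of the model visible: agents with long-term memory gradually suppress the emotional component of their choice, while agents with short-term memory remain sensitive to the latest information only.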
Let the society consist of two groups, one whose members possess long-term memory and the other consisting of members with short-term memory. In the following numerical modeling, we set J = 1. Now, the long-term memory reads as

M_1(t) = ∑_{t′=1}^{t} μ(t′) ,

while the short-term memory becomes

M_2(t) = μ(t) ,

where, for brevity, μ(t) denotes the information gain between the two groups. The information gain (23) takes the form

μ(t) = p_1(t) ln [ p_1(t) / p_2(t) ] + [ 1 − p_1(t) ] ln [ (1 − p_1(t)) / (1 − p_2(t)) ] .

Also, we assume that the process of making decisions concerns the alternatives with given utilities, so that the utility factors do not change in time,

f_j(t) = f_j(0) ≡ f_j ,

although emotions can vary due to the exchange of information between the agents. Thus, we come to the equations of dynamic decision making (37), with the initial conditions

p_j(0) = f_j + q_j(0) .

The attraction factors have the form

q_j(t) = q_j(0) exp{ −M_j(t) } ,

with the long- and short-term memories (32) and (33).

Continuous Dynamics of Affective Decision Making
Repeated multistep decision making is a discrete process, as described above. However, if the time of taking a decision is much shorter than the duration of the whole multistep process, τ/t ≪ 1, then it is admissible to pass from the equations with discrete time to equations with continuous time by expanding the probabilities in powers of τ,

p_j(A_n, t + τ) ≃ p_j(A_n, t) + τ (d/dt) p_j(A_n, t) .

Measuring time again in units of τ, Equation (18) transforms into the continuous-time Equation (43); for the binary case of the previous section, we obtain the corresponding continuous equations for p_1(t) and p_2(t). For small τ, it is possible to replace the sum of the information gains over discrete times by an integral, which yields the long-term memory

M_1(t) = ∫_0^t μ(t′) dt′ ,

while, employing the corresponding approximate equality for the interaction transfer, the short-term memory can be represented as

M_2(t) ≃ μ(t) .

In numerical calculations, τ is taken as the step of the used numerical scheme.
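The discrete-to-continuous passage above is the standard first-order expansion. The following toy relaxation (with a made-up rate and target, not the paper's system) shows the discrete iteration approaching the continuous solution as the step τ shrinks.

```python
import math

# Toy relaxation toward a fixed "behavioral" value p_star:
# the continuous dynamics dp/dt = p_star - p has the exact solution
# p(t) = p_star + (p0 - p_star) * exp(-t).
p_star, p0, T = 0.7, 0.2, 5.0
exact = p_star + (p0 - p_star) * math.exp(-T)

def discrete_run(tau):
    """Iterate the discrete analogue p(t + tau) = p(t) + tau * (p_star - p(t))."""
    p, steps = p0, round(T / tau)
    for _ in range(steps):
        p += tau * (p_star - p)
    return p

for tau in (1.0, 0.1, 0.01):
    print(tau, abs(discrete_run(tau) - exact))   # error shrinks with tau
```

This is exactly the sense in which τ plays the role of the step of the numerical scheme: the smaller the decision delay relative to the whole process, the closer the two descriptions.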

Comparison of Discrete Versus Continuous Algorithms
Formally, it looks like the fixed points, if they exist, of the discrete (37) and continuous (43) dynamical systems are the same, being given by the same equations in which q*_j is the limit of q_j(t) as time tends to infinity. However, strictly speaking, the discrete and continuous limits can be different, since the related expressions for the memory functions in the discrete and continuous cases are different. Also, the considered equations are not autonomous and contain a time delay. In addition, even if the fixed points were the same, the stability conditions of discrete, continuous, and delay equations are, generally, different [73-75]. Thus, numerical investigations are necessary.
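The remark on stability can be made concrete with a one-dimensional toy example (not the paper's system): a map x_{t+1} = g(x_t) and its continuous counterpart dx/dt = g(x) − x share the fixed point g(x*) = x*, but the map is stable only when |g′(x*)| < 1, while the flow is stable whenever g′(x*) < 1. Choosing g linear with slope −1.5 gives a fixed point that attracts the continuous dynamics but repels the discrete one.

```python
import math

X_STAR, SLOPE = 0.5, -1.5     # g(x) = X_STAR + SLOPE * (x - X_STAR)

def g(x):
    return X_STAR + SLOPE * (x - X_STAR)

# Discrete iteration: |g'(x*)| = 1.5 > 1, so deviations grow while oscillating.
x = 0.51
for _ in range(10):
    x = g(x)
discrete_dev = abs(x - X_STAR)            # grows as 1.5**10 * 0.01

# Continuous flow dx/dt = g(x) - x = (SLOPE - 1) * (x - x*): stable, since SLOPE - 1 < 0.
# The deviation decays exactly as exp((SLOPE - 1) * t).
continuous_dev = abs(0.01 * math.exp((SLOPE - 1) * 10.0))

print(discrete_dev, continuous_dev)       # discrete diverges, continuous decays
```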
We have compared the solutions to the discrete-time Equation (37) and the continuous-time Equation (43) for the same sets of parameters and initial conditions. The society is composed of two groups, one whose members enjoy long-term memory and the other consisting of members with short-term memory. Solutions to the discrete equations are denoted p_j^dis(t) and to the continuous equations p_j^con(t). In all figures, time is dimensionless, being measured in units of τ. The results are discussed below.
Figure 1 presents the case where the fractions (probabilities) p_j^con(t) and p_j^dis(t), starting from the same values, smoothly tend to the same fixed points, being only slightly different at intermediate times.

Figure 1: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 0.4, f_2 = 0.1, q_1 = 0.59, and q_2 = 0.6, in the absence of the herding effect, when ε_1 = ε_2 = 0: (a) Discrete solution p_1^dis(t) (solid line) and continuous solution p_1^con(t) (dashed-dotted line). Both solutions tend to the same fixed point p*_1 = 0.4; (b) Discrete solution p_2^dis(t) (solid line) and continuous solution p_2^con(t) (dashed-dotted line). Both solutions tend to the same fixed point p*_2 = 0.636, which is a stable node.
Figure 2 shows the situation when the probabilities of choosing an alternative by agents with long-term memory smoothly tend to the same fixed point, while the probabilities for agents with short-term memory, although tending to the same fixed point, approach it in rather different ways: the continuous solution tends to it smoothly, while the discrete solution approaches it through oscillations.

Figure 2: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 0.8, f_2 = 0.9, q_1 = 0.19, and q_2 = −0.8, when there is no herding effect, hence ε_1 = ε_2 = 0: (a) Discrete solution p_1^dis(t) (solid line) and continuous solution p_1^con(t) (dashed-dotted line). Both solutions tend to the same fixed point p*_1 = 0.8; (b) Discrete solution p_2^dis(t) (solid line) and continuous solution p_2^con(t) (dashed-dotted line). Probability p_2^con(t) tends monotonically, while p_2^dis(t) tends with oscillations, to the same fixed point p*_2 = 0.377. Discrete and continuous solutions tend to the same fixed point, but for the agents with long-term memory the fixed point is a stable node, whereas for the agents with short-term memory the continuous solution tends to a stable node, while the discrete solution tends to a stable focus.
Figure 3 demonstrates that the fixed points of discrete and continuous solutions can be of a different nature. Thus, for the group of agents with long-term memory, the discrete and continuous solutions tend to the same stable node. However, for the agents with short-term memory, it is a stable node for the continuous solution, but a center for the discrete solution.
Figure 4 shows that the fixed points of agents with long-term memory can coincide for discrete and continuous solutions, both being stable nodes, while for agents with short-term memory, the continuous solution tends to a stable node, whereas the discrete solution at the beginning almost coincides with the continuous one, but starts oscillating at a finite time and then continues oscillating for all times.
Figure 5 shows that discrete and continuous probabilities, though both tending to stable nodes, can tend to different fixed points, which do not coincide. This happens in the presence of a strong herding effect.
Figures 6 and 7 illustrate qualitatively different behaviors of discrete and continuous solutions in the presence of the herding effect, when the related p_j^dis(t) and p_j^con(t) can either tend to coinciding stable nodes, or p_j^dis(t) can exhibit oscillations, while p_j^con(t) smoothly tends to a stable node.
Figure 8 shows a rare case, where all probabilities, for the groups with long-term as well as short-term memory and for discrete as well as continuous solutions, tend to the common fixed point p*_1dis = p*_1con = p*_2dis = p*_2con = f_2 + q_2 = 0.99.

Figure 9 gives an example where the continuous solutions for both groups, with long-term and short-term memory, tend to coinciding limits, while the related discrete solutions for these groups are very different: one solution permanently oscillates, and the other tends to a stable node.
Finally, Figures 10 and 11 demonstrate the possibility of chaotic behavior for discrete solutions, when, for the same parameters, continuous solutions smoothly converge to stable nodes.

Figure 3: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 0.8, f_2 = 1, q_1 = 0.1, and q_2 = −0.99, in the absence of the herding effect, when ε_1 = ε_2 = 0: (a) Discrete solution p_1^dis(t) (solid line) and continuous solution p_1^con(t) (dashed-dotted line). Solutions p_1^con(t) and p_1^dis(t) tend to the same fixed point p*_1 = 0.8; (b) Discrete solution p_2^dis(t) (solid line) and continuous solution p_2^con(t) (dashed-dotted line). Solution p_2^con(t) tends to the fixed point p*_2 = 0.366, whereas p_2^dis(t) oscillates around p*_2 with a constant amplitude. For the agents with long-term memory, both probabilities, discrete and continuous, tend to the same stable node, but for the agents with short-term memory, the attractor of the discrete probability is a stable limit cycle, while the continuous probability tends to a stable node.

Figure 4: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 0.3, f_2 = 0, q_1 = 0.699, and q_2 = 0.98, without the herding effect, when ε_1 = ε_2 = 0: (a) Discrete solution p_1^dis(t) (solid line) and continuous solution p_1^con(t) (dashed-dotted line). Solutions p_1^con(t) and p_1^dis(t) tend to the same fixed point p*_1 = 0.3; (b) Discrete solution p_2^dis(t) (solid line) and continuous solution p_2^con(t) (dashed-dotted line). Solution p_2^con(t) tends to p*_2 = 0.699, whereas p_2^dis(t) starts oscillating around p*_2 at a finite time and continues oscillating with a constant amplitude for t → ∞. The fixed points of agents with long-term memory coincide for discrete and continuous solutions, both being stable nodes, while for agents with short-term memory, the continuous solution tends to a stable node, whereas the discrete one oscillates.

Figure 5: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 1, f_2 = 0.2, q_1 = −0.9, and q_2 = 0.6, in the presence of the strong herding effect, when ε_1 = ε_2 = 1: (a) Discrete solution p_1^dis(t) (solid line) tends to the fixed point p*_1dis = 0.5, and continuous solution p_1^con(t) (dashed-dotted line) tends to the fixed point p*_1con = f_2 + q_2 = 0.8.

Summarizing the possible types of behavior, we see that continuous decision making always displays smooth behavior of the probabilities for both groups, with either long-term or short-term memory, always converging to a stable node. However, discrete decision making can exhibit, for the same parameters, a larger variety of behavior types: the probabilities can be smooth, tending to a stable node, or oscillating, hence tending to a stable focus or limit cycle, or even chaotic.
As far as the temporal behavior of the probabilities of choosing the related alternatives can be essentially different for discrete and continuous decision making, the natural question arises: Which of the algorithms, discrete or continuous, better corresponds to the real decision making of social groups? It seems there are activities, such as car driving, where decisions can be well approximated by a continuous process. At the same time, such processes can also be described by a series of decisions occurring discretely, although with rather small time intervals between the subsequent steps. It may happen that, despite the small time intervals, the discrete and continuous decision algorithms lead to different conclusions. From our point of view, the discrete algorithm is preferable, since decisions, anyway, are complex discrete actions composed of several subactions: receiving information, processing this information, and making a decision, so that there is always a delay time from the start of receiving information to the moment of making a decision. The continuous algorithm can provide a reasonable approximation in some cases, although it can sometimes result in wrong conclusions.

Figures 6 and 7: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 0.6, f_2 = 1, q_1 = 0.39, and q_2 = −0.9: (a) Discrete solution p_1^dis(t) (solid line) and continuous solution p_1^con(t) (dashed-dotted line) for the herding parameters ε_1 = ε_2 = 1. Solution p_1^con(t) tends to the fixed point p*_1con = 0.280, whereas solution p_1^dis(t) oscillates with a constant amplitude around p*_1con for t → ∞; (b) Discrete solution p_2^dis(t) (solid line) and continuous solution p_2^con(t) (dashed-dotted line) for the herding parameters ε_1 = ε_2 = 1. Solutions p_2^dis(t) and p_2^con(t) tend to the same fixed point p*_2dis = p*_2con = f_1 = 0.6; (c) Discrete solution p_1^dis(t) (solid line) and continuous solution p_1^con(t) (dashed-dotted line) for the herding parameters ε_1 = 0.9 and ε_2 = 0.8. Solution p_1^dis(t) oscillates, and solution p_1^con(t) monotonically tends to the fixed point p*_1con = 0.265; (d) Discrete solution p_2^dis(t) (solid line) and continuous solution p_2^con(t) (dashed-dotted line) for the herding parameters ε_1 = 0.9 and ε_2 = 0.8. Solution p_2^dis(t) oscillates, and solution p_2^con(t) monotonically tends to the limit p*_2con = 0.525. The behavior of discrete and continuous solutions is qualitatively different.
The probabilities evolve owing to the exchange of information with other agents of all groups, taking account of the agents' emotions and their tendency to herding. When a probability oscillates, either periodically or chaotically, this implies that the agents are not able to come to a decision and cannot stop hesitating. There exist numerous examples of chaotic behavior in decision making in medicine, economics, and different types of management [76-87].
Figure 8: Solutions to discrete Equation (37) and continuous Equation (43) for the initial conditions f_1 = 0.3, f_2 = 0, q_1 = 0.699, and q_2 = 0.99, with the herding parameters ε_1 = 0.9 and ε_2 = 0.8: (a) Solutions p_1^dis(t) (solid line) and p_1^con(t) (dashed-dotted line). Solutions p_1^dis(t) and p_1^con(t) tend to the same limit p*_1 = f_2 + q_2 = 0.99; (b) Solutions p_2^dis(t) (solid line) and p_2^con(t) (dashed-dotted line). Solutions p_2^dis(t) and p_2^con(t) tend to the same limit p*_2 = f_2 + q_2 = 0.99. Note that here p*_1 = p*_2. All probabilities, for the groups with long-term as well as short-term memory and for discrete as well as continuous solutions, tend to the common fixed point.

Figure 9: Solutions to discrete Equation (37) and to continuous Equation (43) for the initial conditions f_1 = 0.1, f_2 = 0, q_1 = 0.899, and q_2 = 0.93, with the herding parameters ε_1 = ε_2 = 1: (a) Solution to discrete Equation (37) p_1^dis(t) (solid line) oscillates, but solution p_2^dis(t) (dashed-dotted line) tends to the fixed point p*_2 = f_1 = 0.1; (b) Solutions to continuous Equation (43) p_1^con(t) (solid line) and p_2^con(t) (dashed-dotted line) tend to the same fixed point p*_1 = p*_2 = f_2 + q_2 = 0.93. Continuous solutions for both groups, with long-term and short-term memory, tend to coinciding limits, while the related discrete solutions for these groups are very different: one solution permanently oscillates, and the other tends to a stable node.

The mathematical reason why the considered continuous solutions for the probabilities cannot display chaos is as follows. The probabilities, by definition, are bounded, hence Lagrange stable. Then, for a plane motion, the Poincare-Bendixson theorem tells us that if a trajectory of a continuous two-dimensional dynamical system is Lagrange stable, then it approaches either a stable node or a limit cycle [75]. However, for discrete equations, there is no such theorem, and a discrete dynamical system can exhibit chaos.
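A classic illustration of this difference (not the paper's model) is the logistic system: the map x_{t+1} = r x_t (1 − x_t) is chaotic at r = 4, its orbit wandering over the whole unit interval, while the bounded one-dimensional flow dx/dt = r x (1 − x) can only relax monotonically to a fixed point.

```python
import math

def logistic_orbit(x, r=4.0, steps=100):
    """Discrete logistic map x_{t+1} = r * x_t * (1 - x_t); chaotic at r = 4."""
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

def logistic_flow(x0, t, r=4.0):
    """Exact solution of the continuous logistic flow dx/dt = r * x * (1 - x),
    which, being a bounded one-dimensional flow, relaxes monotonically."""
    return 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-r * t))

orbit = logistic_orbit(0.2)
print(min(orbit), max(orbit))    # the discrete orbit keeps wandering over (0, 1)
print(logistic_flow(0.2, 5.0))   # the flow settles near the stable fixed point x = 1
```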
The present model can be extended to a society consisting of several groups, for instance differing from each other by memory longevity or by the strength of mutual interactions in the process of exchanging information. It is also possible to take into account time discounting, diminishing the utility factors with time. These extensions are planned for future research.
