Consider that, after period

$t=0$ and before period

$t=1,$ a

central planner can influence the structure of the network by changing the neighborhood function

g. This type of intervention makes sense only if the social planner has access to the signals available to all agents in the population and thus makes use of the information available at the

interim stage of the game. In this sense, the current paper addresses interim efficiency issues. The expected loss of all agents that observe a subset of signals

$\mathit{s}\in {\mathcal{S}}_{g}$, under a symmetric BNE action function

${a}^{*}$, is

The goal of this paper is to investigate how the network described by

g influences the shape of the

social welfare loss function. Since the technical details required to obtain the welfare loss function in this setup are constructive, they are provided in the main text; using such arguments, Proposition 1 then derives the relevant welfare loss function for our environment with fundamental and coordination motives.
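For concreteness, "fundamental and coordination motives" of this kind are typically captured by a quadratic payoff of the beauty-contest type. The display below is the canonical form, stated only as an illustration; the coordination weight $r$ is notation assumed here, and the paper's model section fixes the exact primitives.

```latex
% Canonical quadratic loss with a fundamental motive (distance to theta)
% and a coordination motive (distance to the average action \bar{a});
% the weight r \in (0,1) is notation assumed for this illustration.
L\bigl(a_i,\overline{a},\theta\bigr)
  = (1-r)\,\bigl(a_i-\theta\bigr)^{2} + r\,\bigl(a_i-\overline{a}\bigr)^{2},
\qquad
a_i^{*} = (1-r)\,E_i[\theta] + r\,E_i[\overline{a}].
```

Iterating a best response of this form replaces $E_i[\overline{a}]$ with expectations of others' expectations, which is the kind of recursive dependence on average posterior expectations that Equation (2) captures.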

To address this central question, we must first characterize the class of linear symmetric BNE of the game that the agents play once they receive their signals at

$t=1$ under the restrictions imposed by the network. As in the related literature (see, e.g., [

2,

4,

6,

7,

8]), the existence of symmetric BNE is guaranteed under the quadratic-Gaussian structure that the model assumes. To obtain a solution to Equation (

2), we must study how information is aggregated and how this influences the agents’ optimal actions. For a neighborhood function

g and for a subset of signals

$\mathit{s}\in {\mathcal{S}}_{g}$, the pairs

$(\theta ,\mathit{s})$ are jointly normally distributed. Let us use

$\mathrm{Cov}[\theta ,\mathit{s}]$ to denote the vector of covariances between the state of the world and each of the signals in

$\mathit{s}$ and

$\mathrm{Var}\left[\mathit{s}\right]$ to denote the variance–covariance matrix of the signals in

$\mathit{s}$. It follows from some basic results on normal distributions that

$${E}_{\mathit{s}}\left[\theta \right]=E\left[\theta \right]+\mathrm{Cov}[\theta ,\mathit{s}{]}^{\top}\,\mathrm{Var}{\left[\mathit{s}\right]}^{-1}\left(\mathit{s}-E\left[\mathit{s}\right]\right) \qquad (5)$$

and

$${\mathrm{Var}}_{\mathit{s}}\left[\theta \right]=\mathrm{Var}\left[\theta \right]-\mathrm{Cov}[\theta ,\mathit{s}{]}^{\top}\,\mathrm{Var}{\left[\mathit{s}\right]}^{-1}\,\mathrm{Cov}[\theta ,\mathit{s}] \qquad (6)$$

Hence, normality ensures that the conditional expectations of the state are linear in the signals contained in

$\mathit{s}$. This implication allows us to focus the analysis of BNE on linear strategies. If the agents that observe a signal profile

${\mathit{s}}^{\prime}$ use a linear strategy with respect to such signals, then the optimal action of the agents that observe another signal profile

$\mathit{s}$ must also be linear in such signals. While linear strategies are in general fairly simple and intuitive to interpret, in the current context they are also robust.
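As a concrete numerical illustration of this linearity, the Gaussian conditioning formulas can be checked directly. The specification below, with prior $\theta \sim N(0,\sigma^2)$ and conditionally i.i.d. noise of variance $\tau^2$, is an assumption made only for this sketch; the paper pins down $\mathrm{Cov}[\theta,\mathit{s}]=\sigma^2\underline{1}$ but not the noise covariance used here.

```python
import numpy as np

# Numerical check of the Gaussian conditioning formulas in the text.
# Assumed specification (for this sketch only): theta ~ N(0, sigma2) and
# n signals s_j = theta + eps_j with i.i.d. noise eps_j ~ N(0, tau2).
sigma2, tau2, n = 1.5, 0.8, 4

cov_theta_s = sigma2 * np.ones(n)                    # Cov[theta, s] = sigma^2 * 1
var_s = sigma2 * np.ones((n, n)) + tau2 * np.eye(n)  # Var[s]

# E_s[theta] = Cov[theta,s]' Var[s]^{-1} s is linear in s, with coefficients b:
b = np.linalg.solve(var_s, cov_theta_s)

# Var_s[theta] = Var[theta] - Cov[theta,s]' Var[s]^{-1} Cov[theta,s]:
var_post = sigma2 - cov_theta_s @ b

# With i.i.d. noise these reduce to the familiar precision-weighted forms:
precision = 1.0 / sigma2 + n / tau2
print(np.allclose(b, np.full(n, (1.0 / tau2) / precision)))  # True
print(np.isclose(var_post, 1.0 / precision))                 # True

# The total weight on the signals, b.sum(), lies strictly between 0 and 1
# and equals 1 - Var_s[theta] / sigma2: the contraction that makes the
# higher-order average expectations shrink geometrically toward the prior.
print(np.isclose(b.sum(), 1.0 - var_post / sigma2))          # True
```

The last identity is the per-profile counterpart of the contraction driving the geometric decay of higher-order expectations discussed below.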

Equation (

2) reveals that the optimal action followed by the agents that observe a signal profile

$\mathit{s}$ depends in a recursive way on the average posterior expectation of the true state. Hence, we need to account for average posterior expectations of arbitrarily high order over

θ. To formalize these average posterior expectations, let

$\overline{E}\left[\theta \right]=\frac{1}{|{\mathcal{S}}_{g}|}{\sum}_{\mathit{s}\in {\mathcal{S}}_{g}}{E}_{\mathit{s}}\left[\theta \right]$ be the average posterior expectation of the state over the collection of possible subsets of signals

. We begin with the 0–order average posterior expectation, which must coincide with the true realization of the state, so we set

${\overline{E}}^{(0)}\left[\theta \right]=\theta $. Then, for the 1–order average posterior expectation, we have

whereas for higher-order average posterior expectations, we use

${\overline{E}}^{(m)}[\theta ]=\overline{E}[{\overline{E}}^{(m-1)}\left[\theta \right]]$ to indicate in a recursive way the

m–order average posterior expectation over

θ, for

$m\ge 2$. With such higher-order average posterior expectations in place, recursive application of Equation (

2) allows us to express the optimal action followed by the agents that observe

$\mathit{s}$ as

Under the assumed information structure, we have

$\mathrm{Cov}[\theta ,\mathit{s}]={\sigma}^{2}\underline{1}$, where

$\underline{1}$ is a vector of ones with the same dimension as the number of signals contained in the restricted profile

$\mathit{s}$. Furthermore, recall that

${s}_{j}=\theta +{\epsilon}_{j}$, where

$E\left[{\epsilon}_{j}\right]=0$ for all signals

$j=1,\dots ,n$. Take a given realization of the state

θ. Then, using the expression in Equation (

5), we obtain that

${E}_{\mathit{s}}\left[{\overline{E}}^{(0)}\left[\theta \right]\right]={E}_{\mathit{s}}\left[\theta \right]$, for the 0–order average posterior expectation, and

for the 1–order average posterior expectation. Here again,

$\underline{1}$ is a vector of ones whose dimension equals the number of signals in the profile

$\mathit{s}$. Let us use

to denote the average of the inverses of the posterior variances of the state across signal profiles in the network. Given this notation for the average across (the inverse of) posterior variances, we can write

$\overline{E}\left[\theta \right]={\overline{\omega}}_{g}\theta $ and iterate to obtain that

${\overline{E}}^{(m)}\left[\theta \right]={\overline{\omega}}_{g}^{m}\theta $ for each

$m\ge 0$. Thus, we can express the equality in Equation (

7) as

where

${E}_{\mathit{s}}\left[\theta \right]$ satisfies the equality in Equation (

5). Now, if we average the expression above over all possible subsets of signals observed in the network, we obtain

Therefore, in a BNE, each agent that observes a subset of signals

$\mathit{s}$ wishes to match his action to the objective

By plugging the expressions in Equations (

8) and (

9) into the expected loss function given by Equation (

3), we obtain:

where the conditional variance

${\mathrm{Var}}_{\mathit{s}}\left[\theta \right]$ is given by the expression in Equation (

6). By combining this with the expression in Equation (

4), we obtain that the social welfare loss function is given by