Open Access
*Games*
**2017**,
*8*(2),
23;
https://doi.org/10.3390/g8020023

Article

Security Investment, Hacking, and Information Sharing between Firms and between Hackers

Faculty of Social Sciences, University of Stavanger, 4036 Stavanger, Norway

Academic Editor:
Christos Dimitrakakis

Received: 5 April 2017 / Accepted: 21 May 2017 / Published: 25 May 2017

## Abstract


A four period game between two firms and two hackers is analyzed. The firms first defend and the hackers thereafter attack and share information. Each hacker seeks financial gain, beneficial information exchange, and reputation gain. The two hackers’ attacks and the firms’ defenses are inverse U-shaped in each other. A hacker shifts from attack to information sharing when attack is costly or the firm’s defense is cheap. The two hackers share information, but a second more disadvantaged hacker receives less information, and mixed motives may exist between information sharing and own reputation gain. The second hacker’s attack is deterred by the first hacker’s reputation gain. Increasing information sharing effectiveness causes firms to substitute from defense to information sharing, which also increases in the firms’ unit defense cost, decreases in each firm’s unit cost of own information leakage, and increases in the unit benefit of joint leakage. Increasing interdependence between firms causes more information sharing between hackers caused by larger aggregate attacks, which firms should be conscious about. We consider three corner solutions. First and second, the firms deter disadvantaged hackers. When the second hacker is deterred, the first hacker does not share information. Third, the first hacker shares a maximum amount of information when certain conditions are met. Policy and managerial implications are provided for how firms should defend against hackers with various characteristics.

Keywords:

information sharing; cyber security; game theory; asset allocation; cyber war; contest success function; security investment; policy

## 1. Introduction

#### 1.1. Background

The Internet enables cyber hackers to attack and gain information from firms, requiring firms to design a variety of defensive security measures. So many firms, institutions, elections, etc. have been hacked that assessing who may be exempt is challenging or impossible. This raises the issue of countermeasures. The gathering, analysis, and sharing of information has been launched as one countermeasure. Encouraging information sharing, the US federal government recommends Security Based Information Sharing Organizations (SB/ISOs), e.g., Information Sharing & Analysis Centers (ISACs), CERT, INFRAGARD, etc. Kampanakis [1] elaborates upon attempts to standardize security information sharing. Cyber attacks and information sharing differ in that the former demands funding, planning, effort, competence, infrastructure, etc., while the latter may be practically costless apart from providing the information, which today is possible in almost innumerable ways. One benefit of information sharing for firms is that if several firms know what each firm knows individually, they may benefit collectively in preventing future security breaches. That may improve their reputation, and enhance sales and profits. One benefit of information sharing for hackers is that if they cooperate, they may become more successful. Hackers may be malevolent agents, but may also be firms exploiting rival firms.

#### 1.2. Early and General Literature

Novshek and Sonnenschein [2], Gal-Or [3], Shapiro [4], Kirby [5], and Vives [6] consider information sharing in duopolies, oligopolies, and trade associations. Cremonini and Nizovtsev [7] show that well-protected targets can deter strategic attackers through signaling. Fultz and Grossklags [8] conceptualize distributed security attacks. Herley [9] considers collisions among attackers. Lin [10] assesses how hacking practices are institutionalized. Sarvari, et al. [11] evaluate criminal networks. August, et al. [12] assess how software network structure and security risks are impacted by cloud technology. Dey, et al. [13] assess quality competition and market segmentation in the security software market. Dey, et al. [14] analyze the security software market, including network effects and hacker behavior. Galbreth and Shor [15] evaluate how the enterprise software industry is impacted by malevolent agents. Chul Ho, et al. [16] consider double moral hazard when contracting information security. Ransbotham and Mitra [17] develop a model of paths to information security compromise.

#### 1.3. Information Sharing among Firms

Information sharing among firms to defend against cyber attacks has received scrutiny. Gordon, et al. [18] evaluate how information sharing affects information security, focusing on the cost side effects. They show that firms have a tradeoff between investing in information security and free riding, which may cause under-investment in security. Gal-Or and Ghose [19] assess the competition in the product market on information sharing and security investment, focusing on the demand side effects. Hausken [20,21] determines that information sharing and security investment for two firms are inverse U-shaped in the aggregate attack, impacted by their interdependence.

Making different assumptions, Gal-Or and Ghose [19] find that security investments and information sharing are strategic complements, while Hausken [21] finds that they are strategic substitutes. Gordon, Loeb and Lucyshyn [18] determine that sharing information induces a firm to invest less in information security.

Gao, et al. [22] consider how two firms with complementary information assets approach information sharing and security investments. Liu, et al. [23] show that complementary firms share information, and substitutable firms free ride and require a social planner to ensure information sharing. Mallinder and Drabwell [24] investigate information sharing and data sensitivity. Choras [25] assesses technical, human, organizational, and regulatory dimensions related to information sharing and network security. Tamjidyamcholo, et al. [26] relate information sharing to self-efficacy, trust, reciprocity, and shared language. Rocha Flores, et al. [27] assess how behavioral information security governance and national culture impact information sharing. Tamjidyamcholo, et al. [28] find that knowledge sharing depends crucially on perceived consequences, affect, and facilitating conditions, and marginally on social factors.

In a related stream of work, Png and Wang [29] consider user precautions vis-à-vis enforcement against attackers, and strategic interaction among end-users and between users and hackers with a continuum of user types. They show that users’ effort in fixing depends on hackers’ targeting and vice versa. Prior work (e.g., by Choi, et al. [30], Nizovtsev and Thursby [31], Arora, et al. [32], and Temizkan, et al. [33]) has considered incentives to disclose security flaws and provide patches. Cavusoglu, et al. [34] and Moore, et al. [35] argue that misplaced incentives rather than technical reasons may cause systems failure. See Skopik, et al. [36] for a review.

#### 1.4. Information Sharing among Hackers

Information sharing among hackers operates differently, and has hardly been studied except statically by Hausken [37] and in a repeated game by Hausken [38]. Firms being hacked prefer to avoid or obstruct anything that may give hackers a competitive edge, such as sharing information or otherwise cooperating to improve their attacks. Hackers gather information about firms’ weaknesses, vulnerabilities, defenses, and the information firms gather about security breaches. Hackers may choose to share this information with each other, and/or make it publicly available.

Raymond [39] argues that hackers may prefer not to share information due to competition and, as also argued by Ritchie [40], to enhance their own reputation. However, Brunker [41] offers the contrasting argument that hackers seldom keep secrets. This paper incorporates both competition and reputation seeking, thus accounting for the multiple possibilities.

#### 1.5. This Paper’s Contribution

In this paper, we make the context especially realistic by simultaneously studying the impact of information sharing amongst hackers and information sharing amongst firms. The analysis endogenizes firms’ decisions to share information and allows comparison between the firms’ strategies when they share information vis-à-vis when they do not. The analysis strengthens the managerial implications compared with isolated analyses of information sharing between hackers, or information sharing between firms.

More specifically, this paper analyzes two hackers who may share information about firms’ vulnerabilities, in addition to deciding on the size of their attacks. The firms invest in information security to defend against the attacks, and additionally share information with each other after the first hacker’s attack. Naturally, each hacker prefers to receive information from the other hacker, but may be reluctant to deliver information, though there are benefits from joint information sharing. We assume that both the hackers and the defending firms are strategic players. The opponent does not have a given, fixed, or immutable strategy, as has been common in much prior research in information security. The absence of an assumption about a fixed threat, or a fixed defense, enables a much richer analysis.

The two hackers and two firms are considered as unitary players. Firms are usually collective players. Hackers may also be collective players. For non-unitary players that are sufficiently aligned e.g., regarding preferences, or can somehow be assigned similar preferences, Simon’s [42] principle of near-decomposability may be applicable. That means that players that are not entirely unitary may be interpreted as unitary as an approximation. For example, firms may perceive each hacker as some unidentified player out there which may either be coordinated, uncoordinated, or may perhaps even consist of disparate players who do not know each other but may have a common objective. Similarly, each firm may be a division within a company, or a conglomerate that is somehow able to design a unitary defense and share information with another conglomerate.

We build a model where a hacker has a triple motivation. The first is attacking for financial gain, e.g., through stealing assets like credit card information of the firms’ customers. The second is information exchange with the other hacker for joint benefit and synergy to lay the foundation for future superior exploits. The third is to obtain reputation, e.g., through sharing information on websites etc., showcasing the flaws in the firms’ security, and demonstrating in various ways the hacker’s capabilities to the world.

Hackers often conduct concerted attacks, which means that they work together and benefit from each other’s penetration. In our model, the firms first defend against the first hacker. Second, the first hacker attacks the firms and shares information with the second hacker. Third, the firms share information with each other and defend against the second hacker. Fourth, the second hacker uses the information from the first hacker and attacks the firms. After the attacks, hackers share their information and experiences with other hackers in various hacking community forums, and more hackers may launch similar attacks on the same firms or similar firms. Characteristics of the information are the type of firewalls (e.g., network layers or packet filters, application layers, proxy servers, network address translation), encryption techniques (e.g., hashing, private-key cryptography, public-key cryptography), access control mechanisms, intrusion detection systems, etc. employed by the firms, the training and procedures of the firms’ security experts, the nature of the defense, and the properties of the vulnerabilities. As the hackers share information with each other, synergies emerge. For instance, they discuss the available information, transformation occurs, missing pieces are filled in, and reasoning based on the joint information generates new knowledge. Joint information sharing by the two hackers can thus be expected to generate even deeper insight into the firms’ vulnerabilities and defense.

We interpret “attack” and “defense” broadly, inspired by Hirshleifer [43], who states that “falling also into the category of interference struggles are political campaigns, rent-seeking maneuvers for licenses and monopoly privileges [44], commercial efforts to raise rivals’ costs [45], strikes and lockouts, and litigation—all being conflicting activities that need not involve actual violence”. In the model we use credible specific functional forms to produce exact analytical solutions for the variables. In return for the sacrifice of generality, a successful specification demonstrates internal consistency, illumination, and ranges of parameter values where the various equilibria exist.

## 2. Model

We develop a sequential move four period model for the interaction between two hackers i and j and two firms A and B. The players are fully rational and have complete information. Table 1 provides the nomenclature. Figure 1 illustrates the four time periods in the game. Figure 2 shows the interaction between the players.

Period 1: Both firms exert defense efforts $t_{Ai}$ and $t_{Bi}$ to protect against potential future attacks.

Period 2: Hacker i, without loss of generality, exerts attack effort $T_{Ai}$ against firm A and attack effort $T_{Bi}$ against firm B, and shares with hacker j information $S_i$, which includes knowledge about the firms’ vulnerabilities. Hacker i knows that hacker j does not already possess the information $S_i$ before it is provided. The actual breach, if the attacker succeeds so that a breach occurs, and to the extent a breach occurs, occurs in period 2.

Period 3: Knowing that hacker i may or may not share the information gained from its attack in period 2 with other hackers, the firms exert defense efforts $t_{Aj}$ and $t_{Bj}$ for firms A and B to protect against future attacks. Additionally, firms A and B share information $s_A$ and $s_B$, respectively, with each other based on what they learned from the two attacks by hacker i.

Period 4: Hacker j exerts attack efforts $T_{Aj}$ and $T_{Bj}$ against firms A and B to obtain further information, and shares information $S_j$ with hacker i for future joint benefit. The actual breach by hacker j, if it occurs and to the extent it occurs, occurs in period 4. Hacker j is either another attacker than hacker i, a combination of attackers considered as unitary, or a combination of attackers including hacker i.

In period 1 the firms have one strategic choice variable each, namely their defenses $t_{Ai}$ and $t_{Bi}$. The firms do not know which hacker attacks first, but prepare by defending against any hacker. In period 2 hacker i, the first hacker that happens to attack, has three strategic choice variables: the attacks $T_{Ai}$ and $T_{Bi}$ and information sharing $S_i$. Information $S_i$ is delivered by hacker i to hacker j in period 2. Hacker i chooses $T_{Ai}$ and $T_{Bi}$ before $S_i$, using the attacks to gather information, but since the three choices are made in period 2, it is mathematically sufficient to state that $T_{Ai}$, $T_{Bi}$, and $S_i$ are chosen in period 2. The firms’ defense efforts in period 1 last two periods, and thereafter have to be renewed. In period 3 the firms again have one strategic choice variable each, namely their defenses $t_{Aj}$ and $t_{Bj}$. In period 4 hacker j has two strategic choice variables, the attacks $T_{Aj}$ and $T_{Bj}$, while its information sharing $S_j$ is a parameter since the game ends after period 4. Hacker j uses the information $S_i$ from hacker i when exerting its attacks. In real life subsequent defense, attacks, and information sharing occur after period 4, with $S_j$ as a free choice variable; however, considering more periods than the four in Figure 1 is beyond this paper’s scope.

Each firm has an asset valued as $v_i$ before hacker i’s attack, and valued as $V_i$ by hacker i. The firms invest $t_{Ai}$ and $t_{Bi}$ to defend their assets, with defense expenditures $f_{Ai}$ and $f_{Bi}$, where $\partial {f}_{Ai}/\partial {t}_{Ai} > 0$ and $\partial {f}_{Bi}/\partial {t}_{Bi} > 0$. To obtain financial gain, hacker i invests $T_{Ai}$ and $T_{Bi}$ to attack the assets, with attack expenditures $F_{Ai}$ and $F_{Bi}$, where $\partial {F}_{Ai}/\partial {T}_{Ai} > 0$ and $\partial {F}_{Bi}/\partial {T}_{Bi} > 0$. We consider, for simplicity, linear functions $f_{Ai}={c}_{i}{t}_{Ai}$, $f_{Bi}={c}_{i}{t}_{Bi}$, $F_{Ai}={C}_{i}{T}_{Ai}$, and $F_{Bi}={C}_{i}{T}_{Bi}$, where $c_i$ is the unit cost (inefficiency) of cyber defense for both firms and $C_i$ is the unit cost (inefficiency) of cyber attack for hacker i. Highly competent players (defenders or attackers) have lower unit costs than less competent players, since they can exert efforts (defense or attack) more efficiently with less effort. An incompetent player has infinite unit cost, and is incapable of defending or attacking. An attack means attempting to break through the security defense of the firm in order to appropriate something that is valuable to the firm. Examples are customer related information, business strategy information, or accounting related information. We assume, for simplicity, risk-neutral players, which does not change the nature of the argument. The expenditures ${c}_{i}{t}_{Ai}$, ${c}_{i}{t}_{Bi}$, ${C}_{i}{T}_{Ai}$, and ${C}_{i}{T}_{Bi}$ can be interpreted as expenses in capital and/or labor.

Hacker i has a triple motivation of financial gain through the attacks $T_{Ai}$ and $T_{Bi}$, information exchange with hacker j for mutual benefit, and reputation gain through information sharing $S_i$. Information sharing $S_i$ has three interpretations in this model: it is provided exclusively to hacker j, provided exclusively to the entire hacking community, or released publicly.

For the first motivation, the cyber contest between hacker i and firm Q, Q = A,B, takes the common ratio form [46,47]. We consider the contest success function

$${g}_{Qi}^{\alpha =0}=\frac{{T}_{Qi}}{{T}_{Qi}+{t}_{Qi}}\tag{1}$$

which is the probability that hacker i wins and the firm loses the contest, $\partial {g}_{Qi}^{\alpha =0}/\partial {T}_{Qi} > 0$, $\partial {g}_{Qi}^{\alpha =0}/\partial {t}_{Qi} < 0$, where α = 0 means independent firms. This means that firm Q benefits from its own security investment, and suffers from hacker i’s attack. When penetration occurs, the loss incurred by firm Q may not be the same as the value gained by hacker i. Moreover, hacker i may attack a subset of the firm’s assets, and the same subset may be valued differently by hacker i and firm Q. This is accounted for by the different valuations $v_i$ by each firm and $V_i$ by hacker i. Hacker i’s utility is thus its benefit ${g}_{Qi}^{\alpha =0}{V}_{i}$ minus its expenditure ${C}_{i}{T}_{Qi}$. Firm Q’s utility is its initial asset value $v_i$ minus its loss ${g}_{Qi}^{\alpha =0}{v}_{i}$ minus its expenditure ${c}_{i}{t}_{Qi}$. Applying (1), the utilities from the first attack for hacker i and firm Q, respectively, are

$${U}_{i}^{first,\alpha =0}=\frac{{T}_{Ai}}{{T}_{Ai}+{t}_{Ai}}{V}_{i}+\frac{{T}_{Bi}}{{T}_{Bi}+{t}_{Bi}}{V}_{i}-{C}_{i}{T}_{Ai}-{C}_{i}{T}_{Bi},\text{\hspace{1em}}{u}_{Q}^{first}={v}_{i}-\frac{{T}_{Qi}}{{T}_{Qi}+{t}_{Qi}}{v}_{i}-{c}_{i}{t}_{Qi}\tag{2}$$
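As a numerical illustration, the ratio-form contest success function in (1) and the period-2 utilities in (2) can be sketched as follows. This is not from the paper: the function names and all parameter values are illustrative assumptions.

```python
# Sketch (not from the paper): the ratio-form contest success function (1)
# and the utilities from the first attack (2) for independent firms (alpha = 0).
# All parameter values are illustrative.

def g(T, t):
    """Probability that the hacker wins the contest for one firm's asset, Equation (1)."""
    return T / (T + t)

def hacker_utility_first(T_A, T_B, t_A, t_B, V, C):
    """Hacker i's utility from the first attack, Equation (2)."""
    return g(T_A, t_A) * V + g(T_B, t_B) * V - C * T_A - C * T_B

def firm_utility_first(T, t, v, c):
    """Firm Q's utility from the first attack, Equation (2)."""
    return v - g(T, t) * v - c * t

# Equal attack and defense efforts give a win probability of 1/2.
print(g(1.0, 1.0))                                          # 0.5
print(hacker_utility_first(1.0, 1.0, 1.0, 1.0, 10.0, 2.0))  # 0.5*10 + 0.5*10 - 2 - 2 = 6.0
print(firm_utility_first(1.0, 1.0, 10.0, 2.0))              # 10 - 5 - 2 = 3.0
```

The sketch only illustrates the functional forms; the equilibrium efforts are derived in Section 3.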

As in Kunreuther and Heal [48] and Hausken [21,49], we assume interdependence α between the firms, so that an attack on one firm gets transferred with a proportionality parameter α as an attack on the other firm. Analogously, one firm’s defense also defends the other firm with proportionality parameter α. We assume α ≤ 1, where α = 0 means independent firms and negative α means that each firm’s security investment is detrimental to the other firm, and merely strengthens one’s own firm. Thus, generalizing (1) from α = 0 to general α, the contest for firm A’s asset gives the probability

$${g}_{Ak}=\frac{{T}_{Ak}+\alpha {T}_{Bk}}{{t}_{Ak}+{T}_{Ak}+\alpha ({t}_{Bk}+{T}_{Bk})}\tag{3}$$

that hacker k gains the asset, k = i,j, where the attack on firm A consists of $T_{Ak}$ directly from hacker k and $\alpha {T}_{Bk}$ indirectly from hacker k through firm B and onto firm A. Analogously, the contest for firm B’s asset gives the probability

$${g}_{Bk}=\frac{{T}_{Bk}+\alpha {T}_{Ak}}{{t}_{Bk}+{T}_{Bk}+\alpha ({t}_{Ak}+{T}_{Ak})}\tag{4}$$

that hacker k gains the asset, k = i,j.
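The generalized contest success functions (3) and (4) can be sketched numerically as follows. This is not from the paper: the function names and all parameter values are illustrative assumptions.

```python
# Sketch (not from the paper): the generalized contest success functions
# (3) and (4) with interdependence alpha between the two firms.
# All parameter values are illustrative.

def g_A(T_A, T_B, t_A, t_B, alpha):
    """Probability that hacker k gains firm A's asset, Equation (3)."""
    return (T_A + alpha * T_B) / (t_A + T_A + alpha * (t_B + T_B))

def g_B(T_A, T_B, t_A, t_B, alpha):
    """Probability that hacker k gains firm B's asset, Equation (4)."""
    return (T_B + alpha * T_A) / (t_B + T_B + alpha * (t_A + T_A))

# alpha = 0 reduces to the independent-firms form (1), ignoring the other firm:
print(g_A(1.0, 3.0, 1.0, 5.0, 0.0))  # 1/(1+1) = 0.5
# Positive interdependence transfers part of the attack on firm B onto firm A:
print(g_A(1.0, 3.0, 1.0, 1.0, 0.5))  # (1 + 1.5)/(1 + 1 + 0.5*(1 + 3)) = 0.625
```

The design choice in (3) and (4) is that both attacks and defenses spill over with the same proportionality parameter α.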

After hacker i’s attack in period 2, we assume in period 3 that firm A shares information $s_A$ with firm B with sharing effectiveness γ, and firm B shares information $s_B$ with firm A with sharing effectiveness γ. Receiving information from the other firm strengthens firm A’s defense from $t_{Aj}$ to ${t}_{Aj}+\gamma {s}_{B}$, and strengthens firm B’s defense from $t_{Bj}$ to ${t}_{Bj}+\gamma {s}_{A}$, against hacker j. We thus replace the probabilities in (3) and (4) with

$${h}_{Aj}=\frac{{T}_{Aj}+\alpha {T}_{Bj}}{{t}_{Aj}+\gamma {s}_{B}+{T}_{Aj}+\alpha ({t}_{Bj}+\gamma {s}_{A}+{T}_{Bj})},\text{\hspace{1em}}{h}_{Bj}=\frac{{T}_{Bj}+\alpha {T}_{Aj}}{{t}_{Bj}+\gamma {s}_{A}+{T}_{Bj}+\alpha ({t}_{Aj}+\gamma {s}_{B}+{T}_{Aj})}\tag{5}$$

respectively, where ${t}_{Aj}+\gamma {s}_{B}+\alpha ({t}_{Bj}+\gamma {s}_{A})$ and ${t}_{Bj}+\gamma {s}_{A}+\alpha ({t}_{Aj}+\gamma {s}_{B})$ are firm A’s and firm B’s, respectively, aggregate defenses against hacker j.

When hacker i shares information $S_i$ with hacker j, the effectiveness of hacker i’s sharing is a function of its attacking effort levels ${T}_{Ai}+{T}_{Bi}$. The reason is that when hacker i exerts higher effort in attacking, e.g., more effort on scanning and probing the firms before attacks, the information it collects and shares becomes more valuable to hacker j. We assume for simplicity linear effectiveness ${\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi})$, proportional to effort ${T}_{Ai}+{T}_{Bi}$, where the parameter ${\mathrm{\Gamma}}_{i}$ is hacker i’s sharing effectiveness. Consequently, hacker j can utilize the effectiveness ${\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi})$ multiplied with the amount $S_i$ that hacker i shares, i.e., ${\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}$, scaled in the same denomination as hacker j’s effort $T_j$ in the second attack. Hacker i cannot share more information than what has become available through its attacks, i.e., $0\le {S}_{i}\le {\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi})$. Hence we replace the probabilities in (5) for hacker j with

$$\begin{array}{l}{q}_{Aj}=\frac{{T}_{Aj}+\alpha {T}_{Bj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{{t}_{Aj}+\gamma {s}_{B}+{T}_{Aj}+\alpha ({t}_{Bj}+\gamma {s}_{A}+{T}_{Bj})+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}},\\ {q}_{Bj}=\frac{{T}_{Bj}+\alpha {T}_{Aj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{{t}_{Bj}+\gamma {s}_{A}+{T}_{Bj}+\alpha ({t}_{Aj}+\gamma {s}_{B}+{T}_{Aj})+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}\end{array}\tag{6}$$

against firms A and B, respectively, where ${T}_{Aj}+\alpha {T}_{Bj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}$ and ${T}_{Bj}+\alpha {T}_{Aj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}$ are hacker j’s aggregate attacks against firms A and B, respectively.

After both hackers’ attacks, the two hackers share their information with each other for mutual benefit, which is their second motivation. First, ${\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}$ expresses what hacker j can utilize from hacker i. Second, ${\mathrm{\Gamma}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}$ expresses what hacker i can utilize from hacker j. The two hackers have different sharing effectiveness parameters ${\mathrm{\Gamma}}_{i}$ and ${\mathrm{\Gamma}}_{j}$, caused by differences in sharing competence, skills, motivations, beliefs, and information processing capacities. The sharing effectiveness ${\mathrm{\Gamma}}_{i}$ also depends on how well hacker i extracts information from its attacks $T_{Ai}$ and $T_{Bi}$, how effectively hacker i shares information with hacker j, and hacker j’s capability and willingness to use the information, and it scales $({T}_{Ai}+{T}_{Bi}){S}_{i}$ relative to ${T}_{Bj}+\alpha {T}_{Aj}$. The two hackers’ joint benefit is expressed by the product of these two expressions, i.e., ${\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}{\mathrm{\Gamma}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}$. Hackers i and j earn a utility proportional to this joint benefit, with proportionality parameters ${\mathrm{\Lambda}}_{i}$ and ${\mathrm{\Lambda}}_{j}$, respectively. The parameters ${\mathrm{\Lambda}}_{i}$ and ${\mathrm{\Lambda}}_{j}$ are scaling parameters in the hackers’ utility functions and reflect differences in the two hackers’ ability to utilize and process joint sharing. They account only for mutual information sharing, expressed with the product ${S}_{i}{S}_{j}$, in contrast to ${\mathrm{\Gamma}}_{i}$ and ${\mathrm{\Gamma}}_{j}$, which account only for one-way information sharing. If ${\mathrm{\Lambda}}_{i}={\mathrm{\Lambda}}_{j}=0$, the two hackers are unable to utilize joint sharing. Upper limits exist for ${\mathrm{\Lambda}}_{i}$ and ${\mathrm{\Lambda}}_{j}$ so that information shared by the two hackers is not more valuable than if the same amount of information is generated by only one hacker. This gives

$${\mathrm{\Theta}}_{i}={\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}{\mathrm{\Gamma}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j},\text{\hspace{1em}}{\mathrm{\Theta}}_{j}={\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}{\mathrm{\Gamma}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}\tag{7}$$

to hackers i and j, respectively.
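Hacker j’s augmented success probability in (6), where hacker i’s shared information strengthens the attack and the firms’ shared information strengthens the defense, can be sketched as follows. This is not from the paper: the function name, argument order, and all parameter values are illustrative assumptions.

```python
# Sketch (not from the paper): hacker j's success probability against firm A,
# Equation (6). Hacker i's shared information Gamma_i*(T_Ai + T_Bi)*S_i augments
# hacker j's attack, while the firms' shared information (scaled by gamma)
# augments their defenses. All parameter values are illustrative.

def q_Aj(T_Aj, T_Bj, t_Aj, t_Bj, s_A, s_B, alpha, gamma, Gamma_i, T_Ai, T_Bi, S_i):
    share = Gamma_i * (T_Ai + T_Bi) * S_i                        # what hacker j utilizes from hacker i
    attack = T_Aj + alpha * T_Bj + share                         # aggregate attack on firm A
    defense = t_Aj + gamma * s_B + alpha * (t_Bj + gamma * s_A)  # aggregate defense of firm A
    return attack / (attack + defense)

# With no sharing by either side, (6) reduces to the form of (3):
print(q_Aj(1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.0))  # 0.5
# Hacker i's sharing (S_i = 1) raises hacker j's success probability:
print(q_Aj(1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0))  # 2/3
```

Decomposing the denominator into attack plus defense terms matches the aggregate attack and aggregate defense expressions stated after (6).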

Hacker k’s third motivation of information sharing for reputation gain is also obtained through $S_k$. Also here we scale proportional to effort ${T}_{Ak}+{T}_{Bk}$, yielding

$${\mathrm{\Psi}}_{i}={\mathsf{\Omega}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i},\text{\hspace{1em}}{\mathrm{\Psi}}_{j}={\mathsf{\Omega}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}\tag{8}$$

to hackers i and j, respectively, where ${\mathsf{\Omega}}_{k}$ is the reputation gain parameter, which expresses hacker k’s capabilities of obtaining and marketing its reputation gain. The parameters ${\mathsf{\Omega}}_{i}$ and ${\mathsf{\Omega}}_{j}$ differ since the hackers generally gain reputation from the attack and information sharing differently.

We finally assume that hacker k values firm Q’s asset as $V_k$, and that hacker k’s attack on firm Q has unit cost $C_k$, Q = A,B, k = i,j. The two hackers’ utilities are

$$\begin{array}{l}{U}_{i}={g}_{Ai}{V}_{i}+{g}_{Bi}{V}_{i}+{\mathrm{\Theta}}_{i}+{\mathrm{\Psi}}_{i}-{C}_{i}{T}_{Ai}-{C}_{i}{T}_{Bi}\\ =\frac{{T}_{Ai}+\alpha {T}_{Bi}}{{t}_{Ai}+{T}_{Ai}+\alpha ({t}_{Bi}+{T}_{Bi})}{V}_{i}+\frac{{T}_{Bi}+\alpha {T}_{Ai}}{{t}_{Bi}+{T}_{Bi}+\alpha ({t}_{Ai}+{T}_{Ai})}{V}_{i}\\ \text{\hspace{1em}}+{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}{\mathrm{\Gamma}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}+{\mathsf{\Omega}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}-{C}_{i}{T}_{Ai}-{C}_{i}{T}_{Bi}\\ {U}_{j}={q}_{Aj}{V}_{j}+{q}_{Bj}{V}_{j}+{\mathrm{\Theta}}_{j}+{\mathrm{\Psi}}_{j}-{C}_{j}{T}_{Aj}-{C}_{j}{T}_{Bj}\\ =\frac{{T}_{Aj}+\alpha {T}_{Bj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{{t}_{Aj}+\gamma {s}_{B}+{T}_{Aj}+\alpha ({t}_{Bj}+\gamma {s}_{A}+{T}_{Bj})+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{V}_{j}\\ \text{\hspace{1em}}+\frac{{T}_{Bj}+\alpha {T}_{Aj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{{t}_{Bj}+\gamma {s}_{A}+{T}_{Bj}+\alpha ({t}_{Aj}+\gamma {s}_{B}+{T}_{Aj})+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{V}_{j}\\ \text{\hspace{1em}}+{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}{\mathrm{\Gamma}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}+{\mathsf{\Omega}}_{j}({T}_{Aj}+{T}_{Bj}){S}_{j}-{C}_{j}{T}_{Aj}-{C}_{j}{T}_{Bj}\end{array}\tag{9}$$

In (9) each hacker has six terms in its utility. The first four correspond to each hacker’s three motivations, and the two negative terms are the attack expenditures.
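The structure of a hacker’s utility in (9) can be assembled from its three motivations. This is not from the paper: the function names, decomposition into helper functions, and all parameter values are illustrative assumptions.

```python
# Sketch (not from the paper): assembling hacker i's utility in Equation (9)
# from its three motivations: financial gain (two contest terms), joint benefit
# Theta_i from mutual sharing (7), and reputation gain Psi_i (8), minus attack
# expenditures. All parameter values are illustrative.

def Theta(Lam, Gamma_i, T_i_sum, S_i, Gamma_j, T_j_sum, S_j):
    """Joint-sharing benefit, Equation (7)."""
    return Lam * Gamma_i * T_i_sum * S_i * Gamma_j * T_j_sum * S_j

def Psi(Omega, T_sum, S):
    """Reputation gain, Equation (8)."""
    return Omega * T_sum * S

def U_i(g_Ai, g_Bi, V_i, theta_i, psi_i, C_i, T_Ai, T_Bi):
    """Hacker i's utility, Equation (9), given the contest probabilities."""
    return g_Ai * V_i + g_Bi * V_i + theta_i + psi_i - C_i * (T_Ai + T_Bi)

theta_i = Theta(2.0, 0.5, 2.0, 1.0, 0.5, 2.0, 1.0)  # 2.0
psi_i = Psi(1.0, 2.0, 1.0)                          # 2.0
print(U_i(0.5, 0.5, 10.0, theta_i, psi_i, 1.0, 1.0, 1.0))  # 5 + 5 + 2 + 2 - 2 = 12.0
```

Taking the contest probabilities as arguments keeps the sketch focused on how the six terms of (9) combine.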

As in Gal-Or and Ghose [19] and Hausken [21], we assign to the firms leakage costs of information sharing. The transfer channels, and the usually broad domain within which the information transferred between firms exists, give hackers larger room for maneuver. Players within or associated with the two firms may choose to leak shared information to criminals and hackers, or to agents with a conflict of interest with one or both firms. We consider the functional forms

$${\xi}_{A}={\varphi}_{1}{s}_{A}^{2}-{\varphi}_{2}{s}_{B}^{2}-{\varphi}_{3}{s}_{A}{s}_{B},\text{\hspace{1em}}{\xi}_{B}={\varphi}_{1}{s}_{B}^{2}-{\varphi}_{2}{s}_{A}^{2}-{\varphi}_{3}{s}_{A}{s}_{B},\text{\hspace{1em}}{\varphi}_{1}\ge {\varphi}_{2}+{\varphi}_{3}\tag{10}$$

where ${\varphi}_{1}\ge 0$ is the inefficiency (unit cost) of own leakage, ${\varphi}_{2}\ge 0$ is the efficiency (unit benefit) of the other firm’s leakage (since the first firm benefits from it), and ${\varphi}_{3}\ge 0$ is the efficiency (unit benefit) of joint leakage.
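The leakage costs in (10) can be sketched as follows. This is not from the paper: the function name and all parameter values are illustrative assumptions.

```python
# Sketch (not from the paper): the firms' leakage costs of information sharing,
# Equation (10). The stated condition phi1 >= phi2 + phi3 keeps own leakage
# at least as costly as the benefits from the other firm's and joint leakage.
# All parameter values are illustrative.

def leakage_costs(s_A, s_B, phi1, phi2, phi3):
    assert phi1 >= phi2 + phi3, "Equation (10) requires phi1 >= phi2 + phi3"
    xi_A = phi1 * s_A**2 - phi2 * s_B**2 - phi3 * s_A * s_B
    xi_B = phi1 * s_B**2 - phi2 * s_A**2 - phi3 * s_A * s_B
    return xi_A, xi_B

# A firm that shares little while the other shares much can have negative
# leakage cost, i.e., a net benefit:
print(leakage_costs(1.0, 2.0, 1.0, 0.25, 0.5))  # (1 - 1 - 1, 4 - 0.25 - 1) = (-1.0, 2.75)
```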

Firm Q’s valuation of its asset as defended against hacker k is $v_k$, and firm Q’s unit cost of defense against hacker k is $c_k$, Q = A,B, k = i,j. Thus, the two firms’ utilities are

$$\begin{array}{l}{u}_{A}={v}_{i}-{g}_{Ai}{v}_{i}-{c}_{i}{t}_{Ai}+{v}_{j}-{q}_{Aj}{v}_{j}-{c}_{j}{t}_{Aj}-{\xi}_{A}\\ ={v}_{i}-\frac{{T}_{Ai}+\alpha {T}_{Bi}}{{t}_{Ai}+{T}_{Ai}+\alpha ({t}_{Bi}+{T}_{Bi})}{v}_{i}-{c}_{i}{t}_{Ai}+{v}_{j}\\ \text{\hspace{1em}}-\frac{{T}_{Aj}+\alpha {T}_{Bj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{{t}_{Aj}+\gamma {s}_{B}+{T}_{Aj}+\alpha ({t}_{Bj}+\gamma {s}_{A}+{T}_{Bj})+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{v}_{j}-{c}_{j}{t}_{Aj}-({\varphi}_{1}{s}_{A}^{2}-{\varphi}_{2}{s}_{B}^{2}-{\varphi}_{3}{s}_{A}{s}_{B})\\ {u}_{B}={v}_{i}-{g}_{Bi}{v}_{i}-{c}_{i}{t}_{Bi}+{v}_{j}-{q}_{Bj}{v}_{j}-{c}_{j}{t}_{Bj}-{\xi}_{B}\\ ={v}_{i}-\frac{{T}_{Bi}+\alpha {T}_{Ai}}{{t}_{Bi}+{T}_{Bi}+\alpha ({t}_{Ai}+{T}_{Ai})}{v}_{i}-{c}_{i}{t}_{Bi}+{v}_{j}\\ \text{\hspace{1em}}-\frac{{T}_{Bj}+\alpha {T}_{Aj}+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{{t}_{Bj}+\gamma {s}_{A}+{T}_{Bj}+\alpha ({t}_{Aj}+\gamma {s}_{B}+{T}_{Aj})+{\mathrm{\Gamma}}_{i}({T}_{Ai}+{T}_{Bi}){S}_{i}}{v}_{j}-{c}_{j}{t}_{Bj}-({\varphi}_{1}{s}_{B}^{2}-{\varphi}_{2}{s}_{A}^{2}-{\varphi}_{3}{s}_{A}{s}_{B})\end{array}\tag{11}$$

For each firm, the two ratio terms correspond to defense against the hackers’ first motivation of financial gain. These two negative ratio terms are subtracted from the firm’s asset values. Two of the negative terms are the firm’s defense expenditures. The final negative term is the leakage cost of information sharing.
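Firm A’s utility in (11) can be assembled from its losses to both hackers, its defense expenditures, and its leakage cost. This is not from the paper: the function name and all parameter values are illustrative assumptions, and the contest probabilities are taken as given.

```python
# Sketch (not from the paper): firm A's utility in Equation (11), assembled
# from its losses to hackers i and j, its defense expenditures in periods 1
# and 3, and its leakage cost. The contest probabilities g_Ai (3) and q_Aj (6)
# are taken as given here. All parameter values are illustrative.

def u_A(v_i, v_j, g_Ai, q_Aj, c_i, t_Ai, c_j, t_Aj, xi_A):
    return (v_i - g_Ai * v_i - c_i * t_Ai    # asset, loss, and defense vs. hacker i
            + v_j - q_Aj * v_j - c_j * t_Aj  # asset, loss, and defense vs. hacker j
            - xi_A)                          # leakage cost of information sharing

print(u_A(10.0, 10.0, 0.5, 0.25, 1.0, 2.0, 1.0, 1.0, 0.5))
# 10 - 5 - 2 + 10 - 2.5 - 1 - 0.5 = 9.0
```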

## 3. Analysis

This section provides the interior solution in Section 3.1, the corner solution when hacker i is deterred in Section 3.2, the corner solution when hacker j is deterred in Section 3.3, the corner solution when hacker i shares a maximum amount of information in Section 3.4, and some special cases of advantage for hackers i and j in Section 3.5. Appendix A.1 solves the game with backward induction.

#### 3.1. Interior Solution

This subsection states in Assumption 1 four conditions for an interior solution, where all four players exert efforts and share information, and thereafter presents the related propositions. For an interior solution, we assume the following:

$$\text{Assumption 1.}\ (\mathrm{a})\ \frac{2{c}_{i}}{{v}_{i}}>\frac{{C}_{i}}{{V}_{i}};\ (\mathrm{b})\ \frac{2{c}_{j}}{{v}_{j}}>\frac{{C}_{j}}{{V}_{j}}+\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}};\phantom{\rule{0ex}{0ex}}(\mathrm{c})\ \frac{2{c}_{j}}{{v}_{j}}>\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}\ \text{and}\ \alpha \ge -1;\phantom{\rule{0ex}{0ex}}(\mathrm{d})\ \frac{\left(\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)\left(\frac{8{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)-2{\mathrm{\Lambda}}_{j}(1+\alpha )\frac{{c}_{j}}{{v}_{j}}\left(\frac{{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}+\frac{{\mathsf{\Omega}}_{i}{c}_{j}/{v}_{j}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{V}_{j}}\right)}{8\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right){c}_{j}^{2}/{v}_{j}^{2}}>\gamma s$$

Assumption 1a ensures that hacker i is not deterred by the firms’ defense in period 1, which would give a corner solution analyzed in Section 3.2. If hacker i’s unit attack cost C_{i} relative to its valuation V_{i} is less than twice the firms’ unit defense cost c_{i} relative to their valuation v_{i}, the firms’ moderate defense t_{i} is not perceived as overwhelming, and hacker i attacks. Conversely, if hacker i suffers a high unit attack cost C_{i} or has a low valuation V_{i}, hacker i is deterred by the overwhelming defense t_{i} and does not attack, i.e., T_{i} = 0.

Assumption 1b ensures that hacker j attacks with T_{j} > 0 in period 4 and is not deterred by the firms’ defense t_{j} in period 3, which would give a corner solution analyzed in Section 3.3. When Ω_{i} = Ω_{j} = 0, if the firms’ unit defense cost c_{j} relative to their valuation v_{j} is larger than half of hacker j’s unit attack cost C_{j} relative to its valuation V_{j}, the firms’ moderate defense t_{j} is not perceived as overwhelming and deterrent, and hacker j attacks. When Ω_{i} = 0 and Ω_{j} > 0, motivated by its own reputation gain, hacker j attacks even when 2c_{j}/v_{j} is lower. When Ω_{i} > 0 and Ω_{j} = 0, deterred by hacker i’s reputation gain, hacker j requires a higher 2c_{j}/v_{j} (i.e., more disadvantaged firms) in order to attack. Finally, if Ω_{i} = Ω_{j} = 0 and the firms enjoy a low unit defense cost c_{j} or have a high valuation v_{j}, hacker j is deterred by the overwhelming defense t_{j} and does not attack, i.e., T_{j} = 0.

Assumption 1c ensures positive and finite information sharing 0 < S_{i} < ∞ for hacker i, which occurs when the firms’ unit defense cost c_{j} relative to their valuation v_{j} is high, so that the firms can afford only moderate defense. Thus, hacker i does not share information when sharing is not worthwhile assessed against the strength of the firms’ defense. High interdependence α between the firms may prevent hacker i from sharing information. More specifically, the size of c_{j}/v_{j} needed to ensure S_{i} > 0 must be large if the interdependence α between the firms is large, if hacker j shares much information (S_{j} is high), if hacker j utilizes joint sharing (Λ_{j} is high), if hacker j’s sharing effectiveness Γ_{j} is high, and if hacker j’s valuation V_{j} is low. This means that both hackers benefit from information sharing, and information sharing between the hackers is ensured when the firms are disadvantaged with a large c_{j}/v_{j} so that the defense is not too large. The condition α ≥ −1 is common in practice and prevents negative values under the root. See the corner solution in Section 3.4 when Assumption 1c is satisfied with a small margin.

Assumption 1d follows from ${C}_{j}>({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}$, which is needed in hacker j’s utility in (6) so that hacker j experiences a cost of attacking, and more generally ensures that hacker j’s attack T_{j} is positive. If hacker j’s unit cost C_{j} is too low, hacker j benefits so much from information sharing, expressed with $({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}$, that the attack effort T_{j} determined by C_{j} is not needed and would decrease hacker j’s utility because of the high expenditure C_{j}T_{j}. Assumption 1d is less likely satisfied when γs is large, i.e., when the firms share much information and the sharing effectiveness γ is large, which prevents hacker j from attacking.

With these four assumptions, we present 10 propositions. First come 1. the interior solution and 2. the mutual reaction between each firm’s defense t_{i} and hacker i’s attack T_{i} in the first attack. Thereafter follow six propositions for the six independent variables in Table 1, i.e., 3. hacker i’s information sharing S_{i}, 4. hacker i’s effort T_{i}, 5. the firms’ defense t_{i} against hacker i, 6. the firms’ defense t_{j} against hacker j, 8. the firms’ information sharing s, and 9. hacker j’s attack effort T_{j}. We supplement with 7. the firms’ aggregate defense ${t}_{j}^{agg}$ and 10. hacker j’s aggregate attack ${T}_{j}^{agg}$.

**Proposition 1.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, the players’ efforts and information sharing are

$$\begin{array}{l}{t}_{i}=\frac{{C}_{i}/{V}_{i}}{4{c}_{i}^{2}/{v}_{i}^{2}},\text{\hspace{1em}}{T}_{i}=\frac{1}{4{c}_{i}^{2}/{v}_{i}^{2}}\left(\frac{2{c}_{i}}{{v}_{i}}-\frac{{C}_{i}}{{V}_{i}}\right),\text{\hspace{1em}}{S}_{i}=\frac{(1+\alpha )\left(\frac{2{c}_{j}}{{v}_{j}}-\frac{{C}_{j}}{{V}_{j}}+\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}+\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)}{\frac{{\mathrm{\Gamma}}_{i}}{{c}_{i}^{2}/{v}_{i}^{2}}\left(\frac{2{c}_{i}}{{v}_{i}}-\frac{{C}_{i}}{{V}_{i}}\right)\left(\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)},\\ {t}_{j}=\frac{\left(\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)\left(\frac{8{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)-2{\mathrm{\Lambda}}_{j}(1+\alpha )\frac{{c}_{j}}{{v}_{j}}\left(\frac{{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}+\frac{{\mathsf{\Omega}}_{i}{c}_{j}/{v}_{j}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{V}_{j}}\right)}{8\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right){c}_{j}^{2}/{v}_{j}^{2}}-\gamma s,\\ s=\frac{\gamma {c}_{j}}{2{\varphi}_{1}-{\varphi}_{3}},\text{\hspace{1em}}{T}_{j}=\frac{2{c}_{j}/{v}_{j}-{C}_{j}/{V}_{j}+{\mathsf{\Omega}}_{j}{S}_{j}/{V}_{j}}{8{c}_{j}^{2}/{v}_{j}^{2}}-\frac{{\mathsf{\Omega}}_{i}}{4{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}\end{array}$$

and the utilities follow from inserting into (9) and (11).

**Proof.**

Appendix A.1. ☐
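As a numerical illustration (not part of the original paper), the closed-form expressions in Proposition 1 can be evaluated directly. The sketch below assumes the illustrative parameter values from Section 3.5 as defaults: all unit costs, valuations, Λ’s, α, and γ equal to 1, Γ_{i} = Γ_{j} = 4, S_{j} = 0.25, and Ω_{i} = Ω_{j} = 0.

```python
# Illustrative evaluation of the interior solution in Proposition 1.
# Default parameter values follow Section 3.5; they are assumptions for
# this sketch, not general results.

def interior_solution(C_i, C_j, c_i=1.0, v_i=1.0, c_j=1.0, v_j=1.0,
                      V_i=1.0, V_j=1.0, alpha=1.0, Lam_i=1.0, Lam_j=1.0,
                      Gam_i=4.0, Gam_j=4.0, Om_i=0.0, Om_j=0.0, S_j=0.25):
    """Closed-form equilibrium expressions for t_i, T_i, S_i, and T_j."""
    t_i = (C_i / V_i) / (4 * c_i**2 / v_i**2)
    T_i = (2 * c_i / v_i - C_i / V_i) / (4 * c_i**2 / v_i**2)
    # Hacker i's information sharing; the Omega terms vanish when Om_i = Om_j = 0.
    num = (1 + alpha) * (2 * c_j / v_j - C_j / V_j
                         + (2 * Om_i * c_j**2 / v_j**2) / (Lam_i * Gam_i * Gam_j * S_j)
                         + Om_j * S_j / V_j)
    den = (Gam_i / (c_i**2 / v_i**2)) * (2 * c_i / v_i - C_i / V_i) \
          * (4 * c_j**2 / v_j**2 - (1 + alpha) * Lam_j * Gam_j * S_j / V_j)
    S_i = num / den
    # Hacker j's attack; the last term vanishes when Om_i = 0.
    T_j = (2 * c_j / v_j - C_j / V_j + Om_j * S_j / V_j) / (8 * c_j**2 / v_j**2) \
          - Om_i / (4 * Lam_i * Gam_i * Gam_j * S_j)
    return t_i, T_i, S_i, T_j

# Symmetric hackers: t_i = 0.25, T_i = 0.25, S_i = 0.25, T_j = 0.125
t_i, T_i, S_i, T_j = interior_solution(C_i=1.0, C_j=1.0)
```

With C_{i} = C_{j} = 1 this reproduces S_{i} = S_{j} = 0.25 (row 2 of Table 2); C_{i} = 1, C_{j} = 3/2 gives S_{i} = 0.125 (row 3), and C_{i} = 3/2, C_{j} = 1 gives S_{i} = 0.5 (row 4), matching Section 3.5.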

**Proposition 2.**

Mutual reaction between each firm and hacker i in the first attack: for the first attack in isolation, hacker i’s attack T_{i} is inverse U-shaped in the defense t_{i}, and each firm’s defense t_{i} is inverse U-shaped in the attack T_{i}.

**Proof.**

Appendix A.2. ☐
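The inverse U shape can be illustrated with a stylized ratio-form contest, a deliberate simplification of the paper’s full model used only for intuition. Assuming a contest success function T/(T + t), an attacker with valuation V and unit cost C maximizes V·T/(T + t) − C·T, giving the best response T(t) = max(0, √(Vt/C) − t), which first rises and then falls in the defense t:

```python
import math

def best_response_attack(t, V=1.0, C=1.0):
    """Attacker's best response in a stylized ratio-form contest T/(T + t):
    maximize V*T/(T + t) - C*T over T >= 0, giving T = sqrt(V*t/C) - t.
    This is an illustration, not the paper's full four period model."""
    return max(0.0, math.sqrt(V * t / C) - t)

# Attack rises, peaks at t = V/(4C), then falls, reaching zero at t = V/C.
levels = [best_response_attack(t) for t in (0.1, 0.25, 0.5, 1.0)]
```

The best response peaks when the players are evenly poised and vanishes once the defense reaches t = V/C, consistent with the deterrence level in Proposition 11.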

Proposition 2 considers the non-equilibrium values of t_{i} and T_{i} relative to each other, in contrast to the unique equilibrium values of t_{i} and T_{i} in Proposition 1. Proposition 2 states that hacker i’s attack and each firm’s defense are inverse U-shaped in each other. The amount of information uncovered by hacker i is proportional to hacker i’s attack. Consequently, if hacker i is disadvantaged relative to each firm, C_{i}/V_{i} > c_{i}/v_{i}, so that its attack T_{i} is small compared with each firm’s defense t_{i}, then hacker i uncovers little information through the attack. This is reflected in hacker i’s sharing effectiveness Γ_{i}(T_{Ai} + T_{Bi}), which is 2Γ_{i}T_{i} in equilibrium, which is low when T_{i} is low, so that little information can be transferred to hacker j. As T_{i} increases, hacker i uncovers more information through the attack. If hacker i and the firm are equally matched, C_{i}/V_{i} ≈ c_{i}/v_{i}, both T_{i} and t_{i} are large, and hacker i has large sharing effectiveness. If hacker i is advantaged relative to the firm, C_{i}/V_{i} < c_{i}/v_{i}, so that its attack T_{i} is large compared with each firm’s defense t_{i}, then hacker i uncovers much information through the attack.

**Proposition 3.**

Assume that Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}. Then $\partial {S}_{i}/\partial \alpha $ > 0, $\partial {S}_{i}/\partial ({C}_{i}/{V}_{i})$ > 0, $\partial {S}_{i}/\partial ({C}_{j}/{V}_{j})$ < 0, $\partial {S}_{i}/\partial {\mathrm{\Lambda}}_{i}$ < 0, $\partial {S}_{i}/\partial {\mathrm{\Lambda}}_{j}$ > 0, $\partial {S}_{i}/\partial {\mathrm{\Gamma}}_{i}$ < 0, $\partial {S}_{i}/\partial {\mathsf{\Omega}}_{i}$ > 0, $\partial {S}_{i}/\partial {\mathsf{\Omega}}_{j}$ > 0. When ${C}_{i}/{V}_{i}>{c}_{i}/{v}_{i}$, $\partial {S}_{i}/\partial ({c}_{i}/{v}_{i})$ < 0. When ${C}_{i}/{V}_{i}<{c}_{i}/{v}_{i}$, $\partial {S}_{i}/\partial ({c}_{i}/{v}_{i})$ > 0. When additionally Ω_{i} = 0, $\partial {S}_{i}/\partial {\mathrm{\Gamma}}_{j}$ > 0 and $\partial {S}_{i}/\partial {S}_{j}$ > 0.

**Proof.**

$\frac{\partial {S}_{i}}{\partial \alpha}=\frac{{S}_{i}}{(1+\alpha )}\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}/\left(\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)$. The other inequalities follow straightforwardly from differentiating S_{i} in (13). ☐

Proposition 3 states, first, that hacker i’s information sharing S_{i} increases in the interdependence α between the firms. When the firms are interdependent, the hackers’ attacks propagate more easily to the firm not under direct attack. This causes larger aggregate attacks that enable the hackers to compile more information and share more information with each other. Second, information sharing S_{i} increases in hacker i’s ratio C_{i}/V_{i} of unit cost to valuation. This is a substitution effect. When exerting effort T_{i} becomes too costly relative to the valuation, hacker i substitutes into information sharing instead, limited by 0 ≤ S_{i} ≤ 2Γ_{i}T_{i} since a small attack T_{i} provides hacker i with limited information. Third, when the firms are disadvantaged, quantified as ${C}_{i}/{V}_{i}<{c}_{i}/{v}_{i}$, S_{i} conversely decreases in the firms’ ratio c_{i}/v_{i} of unit cost to valuation. This is also a substitution effect operating the other way, since increasing C_{i}/V_{i} and decreasing c_{i}/v_{i} have qualitatively the same impact on T_{i}. However, when the firms are advantaged, quantified as ${C}_{i}/{V}_{i}>{c}_{i}/{v}_{i}$, S_{i} increases in c_{i}/v_{i}. Fourth, hacker i’s information sharing increases in both hackers’ reputation gain parameters Ω_{i} and Ω_{j}, which motivate information sharing.

Fifth, and most interestingly, S_{i} decreases in hacker j’s ratio C_{j}/V_{j} of unit cost to valuation. This means that when hacker j is disadvantaged with a large ratio C_{j}/V_{j} of unit cost to valuation, and thus exerts low effort T_{j}, hacker i shares less information. Hacker j would hope for the opposite, that hacker i would compensate hacker j’s disadvantage of a high C_{j}/V_{j} by sharing more information, but that is not the case. Instead, hacker i uses hacker j’s high C_{j}/V_{j} against hacker j, so that when hacker j exerts lower effort T_{j}, hacker j is also disadvantaged by receiving less information S_{i}. This follows since hacker i does not expect hacker j to use the shared information S_{i} cost efficiently in a manner that benefits hacker i. This can also be interpreted as hacker i not trusting hacker j, or not thinking that hacker j deserves to receive more information.

Except for this fifth point, when Ω_{i} = 0 hackers i and j focus on their joint interests and support each other when sharing information. Thus, S_{i} increases in hacker j’s sharing effectiveness Γ_{j}, decreases in hacker i’s sharing effectiveness Γ_{i}, increases in hacker j’s utilization Λ_{j} of joint sharing, and increases in hacker j’s sharing S_{j}. Summing up, when Ω_{i} = 0, the two hackers reinforce information sharing with each other, except that hacker i shares less with hacker j when hacker j is unable to exert high attack effort T_{j}. When Ω_{i} > 0, the dependence of S_{i} on hacker j’s sharing effectiveness Γ_{j} and hacker j’s sharing S_{j} is mixed and has to be assessed in each individual case as the hackers seek individual reputation gain.

**Proposition 4.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, hacker i’s effort T_{i} and information sharing S_{i} are strategic substitutes as impacted by C_{i}/V_{i} and c_{i}/v_{i}.

**Proof.**

Follows from (13), where $\partial {T}_{i}/\partial ({C}_{i}/{V}_{i})$ < 0 and $\partial {S}_{i}/\partial ({C}_{i}/{V}_{i})$ > 0. When ${C}_{i}/{V}_{i}>{c}_{i}/{v}_{i}$, then $\partial {T}_{i}/\partial ({c}_{i}/{v}_{i})$ > 0 and $\partial {S}_{i}/\partial ({c}_{i}/{v}_{i})$ < 0. When ${C}_{i}/{V}_{i}<{c}_{i}/{v}_{i}$, then $\partial {T}_{i}/\partial ({c}_{i}/{v}_{i})$ < 0 and $\partial {S}_{i}/\partial ({c}_{i}/{v}_{i})$ > 0. ☐
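The substitution can be seen numerically by evaluating T_{i} and S_{i} from (13) while raising hacker i’s ratio C_{i}/V_{i}. The sketch below again assumes the illustrative Section 3.5 parameter values with Ω_{i} = Ω_{j} = 0; it is an illustration, not part of the original analysis:

```python
def hacker_i_choices(C_i, c_i=1.0, v_i=1.0, V_i=1.0, C_j=1.0, c_j=1.0, v_j=1.0,
                     V_j=1.0, alpha=1.0, Lam_j=1.0, Gam_i=4.0, Gam_j=4.0, S_j=0.25):
    """T_i and S_i from (13), specialized to Omega_i = Omega_j = 0.
    Default parameter values are the illustrative ones from Section 3.5."""
    T_i = (2 * c_i / v_i - C_i / V_i) / (4 * c_i**2 / v_i**2)
    S_i = (1 + alpha) * (2 * c_j / v_j - C_j / V_j) / (
        (Gam_i / (c_i**2 / v_i**2)) * (2 * c_i / v_i - C_i / V_i)
        * (4 * c_j**2 / v_j**2 - (1 + alpha) * Lam_j * Gam_j * S_j / V_j))
    return T_i, S_i

# Raising C_i/V_i from 1 to 3/2: attack falls from 0.25 to 0.125 while
# information sharing rises from 0.25 to 0.5 -- strategic substitutes.
T_low, S_low = hacker_i_choices(C_i=1.0)
T_high, S_high = hacker_i_choices(C_i=1.5)
```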

Proposition 4 implies that hacker i adjusts its attack effort T_{i} and information sharing S_{i} in opposite directions dependent on changes in C_{i}/V_{i} and c_{i}/v_{i}, limited by 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}. That is, if hacker i’s own unit cost to valuation ratio C_{i}/V_{i} increases relative to the firms’ unit cost to valuation ratio c_{i}/v_{i}, hacker i chooses lower T_{i} and higher S_{i}, and conversely if C_{i}/V_{i} decreases relative to c_{i}/v_{i}. Hacker i’s attack T_{i} increases in c_{i}/v_{i} when hacker i is disadvantaged (${C}_{i}/{V}_{i}>{c}_{i}/{v}_{i}$), and decreases in c_{i}/v_{i} when hacker i is advantaged.

**Proposition 5.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, $\partial {t}_{i}/\partial ({c}_{i}/{v}_{i})$ < 0 and $\partial {t}_{i}/\partial ({C}_{i}/{V}_{i})$ > 0.

**Proof.**

Follows from differentiating t_{i} in (13). ☐

Proposition 5 states that the firms’ defense t_{i} against hacker i intuitively decreases in their own ratio c_{i}/v_{i} of unit cost to valuation, since defense becomes more costly (high c_{i}) and/or less desirable (low v_{i}). For the opposite reason, and thus also intuitively, the firms’ defense t_{i} against hacker i increases in hacker i’s ratio C_{i}/V_{i} of unit cost to valuation, which comparatively corresponds to increasing c_{i}/v_{i}.

**Proposition 6.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, $\partial {t}_{j}/\partial \alpha $ < 0, $\partial {t}_{j}/\partial \gamma $ < 0, $\partial {t}_{j}/\partial ({C}_{j}/{V}_{j})$ > 0, $\partial {t}_{j}/\partial {\mathrm{\Gamma}}_{j}$ < 0, $\partial {t}_{j}/\partial {\mathrm{\Lambda}}_{j}$ < 0, $\partial {t}_{j}/\partial {\mathsf{\Omega}}_{i}$ < 0, $\partial {t}_{j}/\partial {\mathsf{\Omega}}_{j}$ < 0. When additionally Ω_{j} = 0, $\partial {t}_{j}/\partial {S}_{j}$ < 0.

**Proof.**

Follows from differentiating t_{j} in (13). ☐

Proposition 6 states that the firms’ defense t_{j} decreases in their interdependence α. One possible explanation is that when attacks propagate more easily between firms, each firm may prefer the other firm to incur the defense burden. Mathematically, for t_{j} in (13) the terms with α are subtracted in the numerator, and in (A5) the term T_{i}S_{i}, which increases in α, is subtracted in the numerator, causing lower t_{j}. Further, the firms’ defense t_{j} against hacker j increases in C_{j}/V_{j}, regardless of whether the firms are disadvantaged or not, and decreases in hacker j’s sharing effectiveness Γ_{j} and utilization Λ_{j} of joint sharing. The defense t_{j} decreases in the information sharing S_{j} when Ω_{j} = 0. Furthermore, the firms defend less as the reputation gain parameters Ω_{i} and Ω_{j} increase, which may be controversial, as discussed in Section 5.

**Proposition 7.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, except for $\partial {t}_{j}^{agg}/\partial \alpha $, which can be negative or positive, the firms’ aggregate defense ${t}_{j}^{agg}=(1+\alpha )({t}_{j}+\gamma s)$ has derivatives equivalent to those for ${t}_{j}$ in Proposition 6, i.e., $\partial {t}_{j}^{agg}/\partial z=\partial {t}_{j}/\partial z$, where $z={C}_{j}/{V}_{j}$, $z={\mathrm{\Gamma}}_{j}$, $z={\mathrm{\Lambda}}_{j}$, $z={\mathsf{\Omega}}_{i}$, $z={\mathsf{\Omega}}_{j}$, and $z={S}_{j}$.

**Proof.**

Follows from (13) and Proposition 6 where $\partial {t}_{j}/\partial \alpha $ < 0. ☐

Proposition 7 illustrates how the firms strike a balance or tradeoff between defense t_{j} and information sharing γs, and earn a reinforced defense through α. If defense becomes costly or undesirable for some reason, the firms substitute into information sharing, and vice versa.

**Proposition 8.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, $\partial s/\partial \gamma $ > 0, $\partial s/\partial {c}_{j}$ > 0, $\partial s/\partial {\varphi}_{1}$ < 0, $\partial s/\partial {\varphi}_{3}$ > 0.

**Proof.**

Follows from differentiating s in (13). $2{\varphi}_{1}$ > ${\varphi}_{3}$ since ${\varphi}_{1}\ge {\varphi}_{2}+{\varphi}_{3}$. ☐
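Since s = γc_{j}/(2ϕ_{1} − ϕ_{3}) in (13) is a simple closed form, these comparative statics can be checked in a few lines. The parameter values below are illustrative assumptions (ϕ_{1} = 3, ϕ_{3} = 1 as in Section 3.5):

```python
def firm_sharing(gamma, c_j, phi1, phi3):
    """Firms' information sharing s = gamma * c_j / (2*phi1 - phi3) from (13).
    Well defined since 2*phi1 > phi3, which holds because phi1 >= phi2 + phi3."""
    assert 2 * phi1 > phi3
    return gamma * c_j / (2 * phi1 - phi3)

# Baseline with the illustrative Section 3.5 values: s = 1/(6 - 1) = 0.2.
base = firm_sharing(gamma=1.0, c_j=1.0, phi1=3.0, phi3=1.0)
```

Raising γ or c_{j} raises s; raising the own-leakage cost ϕ_{1} lowers s; raising the joint-leakage benefit ϕ_{3} raises s, matching Proposition 8.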

Proposition 8 states that the firms’ information sharing s increases in their sharing effectiveness $\gamma $, since sharing then becomes more useful for the firms, and increases in their unit defense cost c_{j} against hacker j, since defense then becomes more costly, making it beneficial to substitute into information sharing instead. Further, s decreases in each firm’s unit cost ${\varphi}_{1}$ of own information leakage, and increases in the unit benefit ${\varphi}_{3}$ of joint leakage.

Comparing large sharing effectiveness $\gamma $ > 0 with zero sharing effectiveness $\gamma $ = 0 enables comparing the firms’ strategies when they share information with those when they do not. The most useful insight from the subtraction of γs in the expression for t_{j} in (13) is that large sharing effectiveness enables firms to rely on information sharing as directly useful in defending against hackers, which in turn enables firms to cut back on their security defense t_{j}.

**Proposition 9.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, $\partial {T}_{j}/\partial \alpha $ = 0, $\partial {T}_{j}/\partial ({C}_{j}/{V}_{j})$ < 0, $\partial {T}_{j}/\partial {\mathsf{\Omega}}_{i}$ < 0, $\partial {T}_{j}/\partial {\mathsf{\Omega}}_{j}$ > 0, $\partial {T}_{j}/\partial {S}_{j}$ > 0, $\partial {T}_{j}/\partial {\mathrm{\Gamma}}_{i}$ > 0, $\partial {T}_{j}/\partial {\mathrm{\Gamma}}_{j}$ > 0, $\partial {T}_{j}/\partial {\mathrm{\Lambda}}_{i}$ > 0. When additionally $\frac{{c}_{j}}{{v}_{j}}<\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}$, $\partial {T}_{j}/\partial ({c}_{j}/{v}_{j})$ > 0.

**Proof.**

Follows from differentiating T_{j} in (13). ☐

Proposition 9 states that hacker j’s attack effort T_{j} decreases in C_{j}/V_{j}, increases in its reputation gain parameter Ω_{j}, decreases in hacker i’s reputation gain parameter Ω_{i}, and increases in its information sharing S_{j}, in hacker i’s utilization Λ_{i} of joint sharing, and in both sharing effectiveness parameters Γ_{i} and Γ_{j}. Further, hacker j’s attack effort T_{j} increases in the firms’ ratio c_{j}/v_{j} when the firms are advantaged with a low c_{j}/v_{j}. In this event hacker j is disadvantaged and takes advantage of an increasing c_{j}/v_{j} by attacking more. Conversely, a high c_{j}/v_{j} means that hacker j is advantaged and a large attack is not needed against disadvantaged firms.

**Proposition 10.**

When Assumption 1 is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, hacker j’s aggregate attack ${T}_{j}^{agg}=(1+\alpha ){T}_{j}+2{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}$ increases in the firms’ interdependence α, i.e., $\partial {T}_{j}^{agg}/\partial \alpha $ > 0.

**Proof.**

Follows from (13) and $\partial {S}_{i}/\partial \alpha $ > 0 in Proposition 3. ☐
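The growth of the aggregate attack in α can be verified numerically. The sketch below assumes the illustrative Section 3.5 parameter values with Ω_{i} = Ω_{j} = 0, so that T_{j} itself is independent of α (consistent with Proposition 9) and the increase comes through the factor (1 + α) and through S_{i}:

```python
def aggregate_attack(alpha, C_i=1.0, C_j=1.0, Gam_i=4.0, Gam_j=4.0,
                     Lam_j=1.0, S_j=0.25):
    """T_j^agg = (1 + alpha)*T_j + 2*Gam_i*T_i*S_i from (13), with all unit
    costs, valuations, and Lambda_i set to 1 and Omega_i = Omega_j = 0.
    Valid for alpha below the level where Assumption 1c fails (here alpha < 3)."""
    T_i = (2 - C_i) / 4
    T_j = (2 - C_j) / 8
    S_i = (1 + alpha) * (2 - C_j) / (
        Gam_i * (2 - C_i) * (4 - (1 + alpha) * Lam_j * Gam_j * S_j))
    return (1 + alpha) * T_j + 2 * Gam_i * T_i * S_i

# Aggregate attack increases in interdependence alpha (Proposition 10).
vals = [aggregate_attack(a) for a in (0.0, 0.5, 1.0)]
```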

Comparing Propositions 10 and 7 suggests that hacker j’s aggregate attack ${T}_{j}^{agg}$, directed toward each firm and channeled through α to the other firm, increases in the firms’ interdependence α, whereas the firms’ aggregate defense ${t}_{j}^{agg}$, furnished by their own defense t_{j} and reinforced by information sharing from the other firm, either decreases or increases in the firms’ interdependence α. Interdependence between firms is thus a potential liability firms should be conscious about. It causes attacks against firms to propagate to other firms, and may possibly cause firms to defend less.

#### 3.2. Corner Solution When Hacker i Is Deterred

**Proposition 11.**

When Assumption 1a is not satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, the firms choose t_{i} = V_{i}/C_{i} + ε, where ε is arbitrarily small but positive, causing T_{i} = S_{i} = 0.

**Proof.**

Appendix A.3. ☐
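The deterrence logic can be illustrated with a stylized ratio-form contest, a simplification used only for intuition rather than the paper’s full model: an attacker maximizing V_{i}·T/(T + t) − C_{i}·T has best response max(0, √(V_{i}t/C_{i}) − t), which is zero once the defense reaches t = V_{i}/C_{i}. The values below assume C_{i}/V_{i} = 2.5 > 2c_{i}/v_{i} = 2, mimicking a violation of Assumption 1a:

```python
import math

def attack_response(t, V_i=1.0, C_i=2.5):
    """Best-response attack in a stylized contest T/(T + t):
    sqrt(V_i*t/C_i) - t, truncated at zero once t >= V_i/C_i."""
    return max(0.0, math.sqrt(V_i * t / C_i) - t)

# Defending slightly above V_i/C_i deters the attack, as in Proposition 11.
eps = 1e-6
t_deter = 1.0 / 2.5 + eps   # t_i = V_i/C_i + epsilon
```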

That Assumption 1a is not satisfied means that hacker i is disadvantaged, i.e., hacker i’s unit attack cost C_{i} relative to its valuation V_{i} is larger than twice the firms’ unit defense cost c_{i} relative to their valuation v_{i}. Against such a disadvantaged hacker i, the firms choose their defense t_{i} slightly above the level that makes hacker i indifferent between attacking and not attacking. This deters hacker i (T_{i} = S_{i} = 0). The game between the firms and hacker j in periods 3 and 4 is thus without information sharing, with t_{j} + γs and T_{j} as t_{i} and T_{i} in (13).

#### 3.3. Corner Solution When Hacker j Is Deterred

**Proposition 12.**

When Assumption 1b is not satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, the firms choose t_{j} = V_{j}/C_{j} + γs + ε, where ε is arbitrarily small but positive, causing T_{j} = S_{i} = S_{j} = 0.

**Proof.**

Appendix A.4. ☐

That Assumption 1b is not satisfied means that hacker j is disadvantaged, which when Ω_{i} = Ω_{j} = 0 means that hacker j’s C_{j}/V_{j} is larger than twice the firms’ c_{j}/v_{j}. The firms then deter hacker j (T_{j} = S_{j} = 0). Hacker j’s unwillingness to attack in period 4 has ripple effects back to period 1. Hacker i realizes that nothing is gained by sharing information with hacker j, and thus chooses not to share information, S_{i} = 0. The game between the firms and hacker i in periods 1 and 2 is thus without information sharing between the two hackers, with t_{i} and T_{i} as in (13).

#### 3.4. Corner Solution When Hacker i Shares a Maximum Amount of Information

**Proposition 13.**

When Assumption 1b is satisfied and 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}, and Assumption 1c is satisfied with a small margin, hacker i shares a maximum amount of information with hacker j, i.e., S_{i} = 2Γ_{i}T_{i}.

**Proof.**

When $\frac{2{c}_{j}}{{v}_{j}}=\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}+\epsilon >\frac{{C}_{j}}{{V}_{j}}-\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}$, the interior solution for S_{i} in (13) applies with a positive numerator and a small positive denominator. As ε decreases towards zero, the denominator decreases towards zero, causing S_{i} to increase towards infinity. As ε becomes negative, the interior solution for S_{i} in (13) no longer applies, and hacker i shares a maximum amount of information with hacker j, i.e., S_{i} = 2Γ_{i}T_{i}. ☐

Proposition 13 assumes that the firms’ ratio c_{j}/v_{j} of unit defense cost relative to their valuation is intermediate. That is, c_{j}/v_{j} is not so low that hacker j is deterred (Proposition 12), and not so high that the interior solution applies. Instead, driven by hacker j’s large information sharing S_{j} relative to its valuation V_{j}, hacker j’s large sharing effectiveness Γ_{j}, and hacker j’s large utilization Λ_{j} of joint sharing, both hackers benefit substantially from hacker i’s sharing, and hacker i thus shares information maximally. In this solution T_{i} follows from solving $\partial {U}_{i}/\partial {T}_{i}$ = 0 in (A8) when S_{i} = S_{imax} (not shown because it is a voluminous solution of a third order equation in T_{i}), t_{j} follows from (A5), T_{j} follows from (A1), and t_{i} follows from using (A3) to differentiate firm A’s period 1 utility with respect to t_{Ai} and setting t_{i} = t_{Bi} = t_{Ai}.

#### 3.5. Some Special Cases of Advantage for Hackers i and j

Assume Ω_{i} = Ω_{j} = 0, Λ_{i} = Λ_{j} = c_{i} = v_{i} = c_{j} = v_{j} = V_{i} = V_{j} = α = γ = ϕ_{2} = ϕ_{3} = 1, ϕ_{1} = 3, Γ_{i} = Γ_{j} = 4, and S_{j} = 0.25, which gives S_{i} = S_{j} when C_{i} = C_{j}; see row 2 in Table 2.

Row 3 assumes that hacker i is advantaged relative to hacker j in terms of unit cost divided by valuation, with ${C}_{i}$ = 1 and ${C}_{j}$ = 3/2. The advantaged hacker i shares less, S_{i} = 0.125, causing hacker j to attack less. Both hackers earn lower expected utilities and the firms earn higher expected utilities. Conversely, row 4 assumes that hacker j is advantaged, with ${C}_{i}$ = 3/2 and ${C}_{j}$ = 1. Then the disadvantaged hacker i shares more, S_{i} = 0.5, causing higher expected utility for the advantaged hacker j. Comparing the bottom two rows in Table 2, the hackers’ collective expected utility U_{i} + U_{j} is largest when they benefit from more substantial mutual information sharing. Hence, with these strong assumptions, hacker j should be the advantaged hacker from the two hackers’ collective point of view. Intuitively, the firms prefer the hackers to be disadvantaged with large unit costs C_{i} or C_{j}.

## 4. Policy and Managerial Implications

First, our analysis reveals that the first hacker shares less information when the second hacker can be expected to attack inefficiently. Hence if hackers believe that their attacks may not be followed up by subsequent attacks, they may share less information.

Second, unit costs of effort and asset valuations are influential in the analysis. Firms cannot do much about their own asset valuations, since their utility flows from these valuations, but they can acquire defense technology to decrease their own unit effort costs. Firms can further seek to design their defenses so that the available attack technology incurs a high unit attack cost. Large firms may have the expertise to lobby lawmakers to hamper the availability of, or forbid, certain attack technologies, e.g., spyware. Firms may also seek to decrease the hackers’ valuations of their assets so that the assets become less usable, or not usable elsewhere, e.g., by ensuring that assets are destroyed upon being procured, or by enabling law enforcement to interfere with hackers’ successful exploitation of hacked information assets.

Third, especially large firms may possess the ability to influence public and hacker opinion, e.g., so that sharing information acquired by hacking causes lower or negative reputation. For example, some communities have successfully handled graffiti tagging by shaming perpetrators into other activities, which may be tried for hacking.

Fourth, that the first hacker’s reputation gain deters the second hacker’s attack causes a dilemma for the firms. Firms prefer that hackers not earn a reputation gain. However, if one hacker’s reputation can deter other hackers, that may be preferable for the firms if they have found a way to handle the reputed hacker.

Fifth, one may attempt to decrease the hackers’ sharing effectiveness parameters and utilization of joint sharing. To the extent that hackers meet online, firms and law enforcement can attempt to survey or hack these online sites, making it more difficult for hackers to share information without being noticed, or plant incorrect information about the firms, making it costly for hackers to distinguish correct from incorrect information. To the extent that hackers meet offline, e.g., in Internet cafes or various gathering places, these places can be placed under surveillance to prevent hackers from feeling safe from supervision.

Sixth, that hackers’ information sharing increases in the interdependence between firms is a vulnerability firms should be conscious about.

Seventh, the corner solution where the advantaged firm deters a disadvantaged hacker confirms for the firms that their defense strategy works, and may continue to work if the first hacker does not share information with the second hacker.

Eighth, the corner solution where the first hacker shares information maximally may be handled by the firms by attempting to hinder hackers from sharing information.

Ninth, if a hacker’s attack can be reduced, information sharing increases since attack and information sharing are strategic substitutes. Understanding this relationship may enable combating one or the other.

Tenth, our analysis suggests the need to heighten firms’ awareness that hackers not only choose strategically how much to invest in an attack, and may compete with each other to attack more successfully, but may also cooperate by sharing information with each other about firms’ vulnerabilities.

## 5. Limitations and Future Research

One challenge for a complex model such as the one in this paper is that the requirements for a reality check of the results are higher. Although many of the results are plausible, some may be interpreted as indicative, and others may need further scrutiny, especially if they sound counterintuitive.

Let us interpret Proposition 3 about hacker i’s information sharing S_{i}, Proposition 6 about the firms’ defense t_{j} against hacker j, and Proposition 9 about hacker j’s attack T_{j}. The three expressions for S_{i}, t_{j}, and T_{j} are the most complicated in (13), with many functional dependencies. Proposition 3 seems largely intuitive. For example, as hacker i’s ratio C_{i}/V_{i} of unit cost to valuation increases, hacker i can be expected to cut back on hacking and substitute into alternatives, which in the current model means information sharing. Propositions 6 and 9 suggest many ways in which the firms’ defense t_{j} and hacker j’s attack T_{j} may increase or decrease. These results may need further scrutiny, since increases or decreases in defense or attack may be due to how two opposing players are advantaged or disadvantaged relative to each other. In this regard, Proposition 2 states that hacker i’s attack and each firm’s defense are inverse U-shaped in each other. The inverse U shape follows since a player may exert high effort when the opponents are similarly matched, expressed as similar unit effort costs relative to valuations, and may exert low effort when the opponents are differently matched. Being differently matched means being either advantaged or disadvantaged. When advantaged, the player exerts low effort since the opponent is merely a nuisance not worth paying too much attention to; a cost benefit analysis thus suggests low effort. When disadvantaged, the player exerts low effort since the opponent’s effort is so overwhelmingly large that the player’s effort does not make much difference; a cost benefit analysis again suggests low effort. It seems theoretically possible that the complex model captures only one side of the story for the various findings in Propositions 6 and 9, and future research should check how firms defend against advantaged versus disadvantaged hackers due to the firms themselves being advantaged versus disadvantaged. The inverse U shape has also been found in earlier research. For example, Hausken [20,21] finds that information sharing and security investment for two firms are inverse U-shaped in the aggregate attack.

The finding in Proposition 6 that firms defend less as the hackers’ reputation gain parameters Ω_{i} and Ω_{j} increase may be controversial for the same reason as this inverse U shape. For example, a larger Ω_{j} causes a larger attack T_{j} by hacker j (Proposition 9). Whether the firms react to the increased attack with larger or smaller defense t_{j} may depend on weighing benefits and costs related to being advantaged versus disadvantaged.

Logical implications of complex models benefit from a reality test. In the earlier sections we have tried to indicate whether results seem intuitive or plausible. Complex models may uncover hidden, hitherto unknown, and sometimes bizarre relations, and reveal new insight. However, if the results of modeling are counterintuitive or do not match experience, the model may be insufficiently expressive in various respects. That is, some results may constitute spurious effects and fail a reality test despite flowing from the model. Thus, Levins [50] and Levins and Lewontin [51] suggest, regarding modeling, that “truth is the intersection of multiple lies”. This work proceeds in the right direction. Future research should extend game theoretic modeling of complex strategic scenarios between defenders and attackers for cybersecurity. Particular focus may be devoted to reputation gain, interdependence, and being advantaged versus disadvantaged.

We have considered a scenario with two hackers and two firms, which are interpreted to be sufficiently unitary. The literature, e.g., about duopoly versus oligopoly, reveals that much insight is often obtained by considering a limited number of players. Generalizing to n hackers and N firms, to scrutinize the system’s scalability, is interesting but analytically challenging. We reasonably assume that many of the qualitative insights of the model carry through to scenarios with more than two hackers. One difference is that firms facing more than two hackers are subject to an opposition that may share information in more sophisticated manners.

The chosen four period defense and attack scenario is one of the simplest that seems possible and realistic. The phenomenon inevitably involves the time dimension, where players react to each other through time. Information has to be obtained before it can be shared. Future research, with four or more players, should consider alternative defense and attack scenarios, and alternative sequences and manners in which players choose strategies and share information.

Other extensions are to different kinds of security investment, and to distinguishing between different kinds of information that hackers can share. Information is multidimensional. Security breaches occur at low and high levels of sophistication, and variation is large regarding methods, success of earlier attacks, identities of hackers, and secrets about research, development, future plans, trade, capacities, personnel dispositions, etc. Future research may also consider case studies, assess how the model conforms with empirical data, and apply various forms of performance evaluation.

## 6. Conclusions

We consider two firms under cyber attack by two hackers who share information with each other about the firms’ vulnerabilities and security breaches. We analyze a game where, first, the firms defend against hacking. Second, the first hacker chooses whether or not to attack, and if it attacks it chooses how much information to share with a second hacker. Third, the firms defend against subsequent attacks and share information with each other about the first hacker’s attack. Fourth, the second hacker attacks the firms and shares information with the first hacker. Each hacker has a triple motivation of financial gain, information exchange as a basis for future superior attacks, and reputation gain. The firms choose optimal defenses, which are costly and consist in investing in information technology security to ensure protection. The firms also choose optimal information sharing and incur leakage costs. The hackers collect information in various manners, and attempt to gain access to the information the firms collect about their security breaches. Each hacker prefers to receive information from the other hacker about the firms’ vulnerabilities, but synergies of joint sharing also provide incentives to provide information. The paper analyzes the extent to which a hacker has incentives to provide information voluntarily to the other hacker, and the tradeoffs each hacker makes between sharing information and investing in costly attacks.

We find that the first hacker’s attack and each firm’s defense are inverse U-shaped in each other. A disadvantaged player refrains from exerting effort due to weakness, and an advantaged player refrains from exerting effort due to strength, causing the largest efforts to be exerted when the hacker and firm are equally matched.
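This inverse U-shape can be illustrated with the best-response attack derived in Appendix A.2, T_{i} = √(t_{i}V_{i}/C_{i}) − t_{i} when t_{i} < V_{i}/C_{i}. A minimal numerical sketch in Python, with illustrative values V_{i} = C_{i} = 1:

```python
import math

def best_response_attack(t_i, V_i=1.0, C_i=1.0):
    """Hacker i's best-response attack from (A16):
    T_i = sqrt(t_i*V_i/C_i) - t_i when t_i < V_i/C_i, else 0."""
    if t_i >= V_i / C_i:
        return 0.0
    return math.sqrt(t_i * V_i / C_i) - t_i

# The attack rises in the defense up to t_i = V_i/(4*C_i), then falls,
# vanishing both for a negligible and for an overwhelming defense.
low = best_response_attack(0.01)    # advantaged hacker: opponent barely defends
peak = best_response_attack(0.25)   # equally matched: maximal attack at V_i/(4*C_i)
high = best_response_attack(0.81)   # disadvantaged hacker: overwhelming defense
print(low, peak, high)
```

The same logic applies to the firm’s best-response defense t_{i}(T_{i}), which peaks at T_{i} = v_{i}/(4c_{i}).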

Driven by the substitution effect, the first hacker shares more information and attacks less if its unit cost of attack increases relative to its valuation. When the second hacker is disadvantaged with a high unit cost relative to its valuation, it receives less information from the first hacker, which does not expect the shared information to be used efficiently. As the hackers’ reputation gain parameters increase, both hackers share more information.

The second hacker’s attack increases in its own reputation gain parameter, and decreases in the first hacker’s reputation gain parameter. Although the second hacker is motivated by its own reputation, it is deterred by the first hacker’s reputation gain. The second hacker’s attack increases in both sharing effectiveness parameters and in the first hacker’s utilization of joint sharing, which illustrates the benefits of joint sharing and attack.

As firms’ information sharing effectiveness increases, they substitute from defense to information sharing, which also increases in the firms’ unit defense cost, decreases in each firm’s unit cost of own information leakage, and increases in the unit benefit of joint leakage. This shows how firms’ information sharing furnishes a solid foundation for their aggregate defense and enables them to cut back on regular defense not based on information sharing.
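This substitution pattern can be sketched from the closed form s = γc_{j}/(2φ_{1} − φ_{3}) derived in the Appendix, where γ is the firms’ sharing effectiveness, c_{j} the unit defense cost, φ_{1} the unit cost of own leakage, and φ_{3} the unit benefit of joint leakage. A minimal Python sketch with illustrative values:

```python
def sharing(gamma, c_j, phi1, phi3):
    """Firms' information sharing s = gamma*c_j/(2*phi1 - phi3) from (A5)/(A17)."""
    return gamma * c_j / (2 * phi1 - phi3)

base = sharing(0.4, 1.0, 1.0, 0.2)
assert sharing(0.5, 1.0, 1.0, 0.2) > base  # larger sharing effectiveness gamma
assert sharing(0.4, 1.5, 1.0, 0.2) > base  # larger unit defense cost c_j
assert sharing(0.4, 1.0, 1.5, 0.2) < base  # larger unit cost phi1 of own leakage
assert sharing(0.4, 1.0, 1.0, 0.4) > base  # larger unit benefit phi3 of joint leakage
print(base)
```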

Increasing interdependence between firms has multiple impacts. It causes hackers’ attacks to propagate to the firm not attacked directly, which lets the hackers obtain more information and hence share more information with each other. Firms need to be conscious of such enhanced aggregate attacks. The firms’ defense is further reinforced by information sharing between the firms.

We consider three corner solutions. The first two involve deterrence when players move sequentially and the first-moving advantaged players, i.e., the firms, choose a strategy that suffices to deter the subsequent disadvantaged player, i.e., the first or the second hacker. First, the firms deter the first hacker when the first hacker is disadvantaged. The deterrence defense is proportional to the first hacker’s valuation and inversely proportional to the first hacker’s unit attack cost. Second, and with the same logic, the firms deter the second hacker when the second hacker is disadvantaged. Furthermore, when the second hacker is deterred in period 4, the first hacker does not share information in period 2. Third, a corner solution exists where the first hacker shares a maximum amount of information. This occurs when the second hacker shares much information relative to its valuation, and has large sharing effectiveness and large utilization of joint sharing, so that both hackers benefit substantially from joint sharing.
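The deterrence logic can be sketched numerically using the best-response attack from Appendix A.2 and illustrative numbers: a defense at the level V_{i}/C_{i} (valuation divided by unit attack cost) drives the hacker’s optimal attack to zero, while any lower defense invites a positive attack.

```python
import math

def attack(t_i, V_i, C_i):
    # Hacker i's best-response attack from (A16); zero when t_i >= V_i/C_i.
    return max(math.sqrt(t_i * V_i / C_i) - t_i, 0.0)

V_i, C_i = 2.0, 0.5          # illustrative valuation and unit attack cost
t_deter = V_i / C_i          # deterrence defense from Appendix A.3
print(attack(t_deter, V_i, C_i), attack(0.9 * t_deter, V_i, C_i))
```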

## Acknowledgments

We thank two anonymous reviewers of this journal for useful comments. No sources of funding exist.

## Conflicts of Interest

The author declares no conflict of interest.

## Appendix A

#### Appendix A.1. Interior Solution

We solve the symmetric game with backward induction starting with period 4. Differentiating hacker j’s utility U_{j} in (9) with respect to T_{Aj}, and thereafter setting T_{j} = T_{Aj} = T_{Bj} and analogously for all variables and parameters, equating with zero and solving, gives

$$\begin{array}{l}\frac{\partial {U}_{j}}{\partial {T}_{Aj}}=\frac{{(1+\alpha )}^{2}({t}_{j}+\gamma s){V}_{j}}{{\left((1+\alpha )({T}_{j}+{t}_{j}+\gamma s)+2{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}\right)}^{2}}+({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}-{C}_{j}=0\Rightarrow \\ {T}_{j}=\{\begin{array}{l}\begin{array}{l}\sqrt{\frac{({t}_{j}+\gamma s){V}_{j}}{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}-\frac{2{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}-{t}_{j}-\gamma s\\ \text{\hspace{1em}}when\text{\hspace{0.17em}}{C}_{j}>({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}\sqrt{\frac{({t}_{j}+\gamma s){V}_{j}}{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}>\frac{2{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}+{t}_{j}+\gamma s\end{array}\\ 0\text{\hspace{0.17em}}otherwise\end{array}\end{array}$$

where hacker j assumes that firms A and B behave equivalently in equilibrium. The second order condition is always satisfied as negative;

$$\frac{{\partial}^{2}{U}_{j}}{\partial {{T}_{Aj}}^{2}}=\frac{-2(1+\alpha )(1+{\alpha}^{2})({t}_{j}+\gamma s){V}_{j}}{{\left((1+\alpha )({T}_{j}+{t}_{j}+\gamma s)+2{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}\right)}^{3}}$$

Without loss of generality we consider firm A in period 3, and replace t_{j} and s in (A1) with t_{Aj} and s_{A}, respectively, since firm A’s optimization is based on taking firm B’s behavior as given. Inserting T_{Aj} = T_{Bj} = T_{j} in (A1) into (11) gives firm A’s period 3 utility
$$\begin{array}{l}{u}_{A}={v}_{i}-\frac{{T}_{Ai}+\alpha {T}_{Bi}}{{t}_{Ai}+{T}_{Ai}+\alpha ({t}_{Bi}+{T}_{Bi})}{v}_{i}-{c}_{i}{t}_{Ai}\\ +\frac{\sqrt{({t}_{Aj}+\gamma {s}_{A})}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{\sqrt{{V}_{j}}}{v}_{j}-{c}_{j}{t}_{Aj}-({\varphi}_{1}{s}_{A}^{2}-{\varphi}_{2}{s}_{B}^{2}-{\varphi}_{3}{s}_{A}{s}_{B})\end{array}$$

Differentiating u_{A} in (A3) with respect to t_{Aj} and s_{A}, and equating with zero, gives
$$\begin{array}{l}\frac{\partial {u}_{A}}{\partial {t}_{Aj}}=\frac{{v}_{j}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{2\sqrt{{t}_{Aj}+\gamma {s}_{A}}\sqrt{{V}_{j}}}-{c}_{j}=0\\ \frac{\partial {u}_{A}}{\partial {s}_{A}}=\frac{\gamma {v}_{j}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{2\sqrt{({t}_{Aj}+\gamma {s}_{A})}\sqrt{{V}_{j}}}-2{\varphi}_{1}{s}_{A}+{\varphi}_{3}{s}_{B}=0\end{array}$$

Inserting T_{i} = T_{Ai} = T_{Bi}, t_{i} = t_{Ai} = t_{Bi}, t_{j} = t_{Aj} = t_{Bj}, s = s_{A} = s_{B}, and equivalent parameters into (A4) and solving yields
$$\begin{array}{l}s=\{\begin{array}{c}\frac{\gamma {c}_{j}}{2{\varphi}_{1}-{\varphi}_{3}}\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{4{c}_{j}^{2}/{v}_{j}^{2}}>\gamma s\\ 0\text{\hspace{0.17em}}otherwise\end{array}\\ {t}_{j}=\{\begin{array}{c}\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{4{c}_{j}^{2}/{v}_{j}^{2}}-\gamma s\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{4{c}_{j}^{2}/{v}_{j}^{2}}>\gamma s\\ 0\text{\hspace{0.17em}}otherwise\end{array}\end{array}$$
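The closed form for s above follows because the first-order condition for t_{Aj} sets the square-root term in (A4) equal to c_{j}, so the condition for s_{A} collapses, in the symmetric solution, to γc_{j} − 2φ_{1}s + φ_{3}s = 0. A quick numerical check with illustrative parameter values:

```python
# Symmetric first-order condition for information sharing s from (A4):
# gamma*c_j - 2*phi1*s + phi3*s = 0  =>  s = gamma*c_j/(2*phi1 - phi3), as in (A5).
gamma, c_j, phi1, phi3 = 0.6, 1.2, 1.0, 0.4   # illustrative values, phi1 > phi3/2
s = gamma * c_j / (2 * phi1 - phi3)
residual = gamma * c_j - 2 * phi1 * s + phi3 * s
print(s, residual)   # the residual vanishes at the optimum
```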

The second order conditions are always satisfied as negative, and the Hessian matrix is negative semi-definite, i.e.,

$$\begin{array}{l}\frac{{\partial}^{2}{u}_{A}}{\partial {t}_{Aj}^{2}}=-\frac{{v}_{j}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{4{({t}_{Aj}+\gamma {s}_{A})}^{3/2}\sqrt{{V}_{j}}}\\ \frac{{\partial}^{2}{u}_{A}}{\partial {s}_{A}^{2}}=-\frac{{\gamma}^{2}{v}_{j}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{4{({t}_{Aj}+\gamma {s}_{A})}^{3/2}\sqrt{{V}_{j}}}\\ \frac{{\partial}^{2}{u}_{A}}{\partial {t}_{Aj}\partial {s}_{A}}=\frac{{\partial}^{2}{u}_{A}}{\partial {s}_{A}\partial {t}_{Aj}}=-\frac{\gamma {v}_{j}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{4{({t}_{Aj}+\gamma {s}_{A})}^{3/2}\sqrt{{V}_{j}}}\\ \left|H\right|=\left|\begin{array}{cc}\frac{{\partial}^{2}{u}_{A}}{\partial {t}_{Aj}^{2}}& \frac{{\partial}^{2}{u}_{A}}{\partial {t}_{Aj}\partial {s}_{A}}\\ \frac{{\partial}^{2}{u}_{A}}{\partial {s}_{A}\partial {t}_{Aj}}& \frac{{\partial}^{2}{u}_{A}}{\partial {s}_{A}^{2}}\end{array}\right|=\frac{{\partial}^{2}{u}_{A}}{\partial {t}_{Aj}^{2}}\frac{{\partial}^{2}{u}_{A}}{\partial {s}_{A}^{2}}-\frac{{\partial}^{2}{u}_{A}}{\partial {t}_{Aj}\partial {s}_{A}}\frac{{\partial}^{2}{u}_{A}}{\partial {s}_{A}\partial {t}_{Aj}}=0\end{array}$$

Inserting T_{j} in (A1) and t_{j} in (A5) into (9), and setting T_{i} = T_{Ai} = T_{Bi} and t_{i} = t_{Ai} = t_{Bi}, gives hacker i’s period 2 utility
$${U}_{i}=2{T}_{i}\left[\frac{{V}_{i}}{{t}_{i}+{T}_{i}}+{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}\left(\frac{1}{{c}_{j}/{v}_{j}}-\frac{4{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}-\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{2{c}_{j}^{2}/{v}_{j}^{2}}\right){S}_{j}+{\mathsf{\Omega}}_{i}{S}_{i}-{C}_{i}\right]$$

Differentiating U_{i} in (A7) with respect to T_{i} and S_{i}, and equating with zero, gives

$$\begin{array}{l}\frac{\partial {U}_{i}}{\partial {T}_{i}}=2{S}_{i}{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}\left(\frac{1}{{c}_{j}/{v}_{j}}-\frac{8{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}-\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+4{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{2{c}_{j}^{2}/{v}_{j}^{2}}\right){S}_{j}\\ \text{\hspace{1em}}+\frac{2{V}_{i}}{{t}_{i}+{T}_{i}}-\frac{2{T}_{i}{V}_{i}}{{({t}_{i}+{T}_{i})}^{2}}+2{\mathsf{\Omega}}_{i}{S}_{i}-2{C}_{i}=0,\\ \frac{\partial {U}_{i}}{\partial {S}_{i}}=2{T}_{i}\left[{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}\left(\frac{1}{{c}_{j}/{v}_{j}}-\frac{8{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}-\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+4{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{2{c}_{j}^{2}/{v}_{j}^{2}}\right){S}_{j}+{\mathsf{\Omega}}_{i}\right]=0\end{array}$$

which are solved to yield

$$\begin{array}{l}{T}_{i}=\{\begin{array}{c}\sqrt{{t}_{i}}(\sqrt{{V}_{i}/{C}_{i}}-\sqrt{{t}_{i}})\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{{V}_{i}}{{C}_{i}}>{t}_{i}\\ 0\text{\hspace{0.17em}}otherwise\end{array}\\ {S}_{i}=\{\begin{array}{l}\begin{array}{l}\frac{(1+\alpha )\left(2{c}_{j}/{v}_{j}-{C}_{j}/{V}_{j}+\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}+\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)}{4{T}_{i}{\mathrm{\Gamma}}_{i}\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right)}\\ \text{\hspace{1em}}when\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\frac{{C}_{j}}{{V}_{j}}-\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}\text{\hspace{1em}}and\text{\hspace{0.17em}}\frac{{V}_{i}}{{C}_{i}}>{t}_{i}\end{array}\\ 0\text{\hspace{0.17em}}otherwise\end{array}\end{array}$$

We assume 0 ≤ S_{i} ≤ 2Γ_{i}T_{i}. When (A9) yields S_{i} > 2Γ_{i}T_{i}, inserting S_{i} = 2Γ_{i}T_{i} into the first equation in (A8) gives a fifth order equation in T_{i} which we do not solve.

Inserting T_{i} and S_{i} in (A9) into (A3), and inserting T_{Ai} = T_{Bi} = T_{i} and t_{Ai} = t_{Bi} = t_{i} due to symmetry, gives firm A’s period 1 utility

$${u}_{A}=\frac{{v}_{i}\sqrt{{t}_{i}}}{\sqrt{{V}_{i}/{C}_{i}}}-{c}_{i}{t}_{i}+\frac{\sqrt{({t}_{Aj}+\gamma {s}_{A})}\sqrt{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}{\sqrt{{V}_{j}}}{v}_{j}-{c}_{j}{t}_{Aj}-({\varphi}_{1}{s}_{A}^{2}-{\varphi}_{2}{s}_{B}^{2}-{\varphi}_{3}{s}_{A}{s}_{B})$$

where T_{i}S_{i} and t_{Aj} do not depend on t_{i}. Differentiating u_{A} in (A10) with respect to t_{i}, equating with zero and solving, gives

$$\frac{\partial u}{\partial {t}_{i}}=\frac{{v}_{i}}{2\sqrt{{t}_{i}{V}_{i}/{C}_{i}}}-{c}_{i}=0\Rightarrow {t}_{i}=\{\begin{array}{l}\frac{{C}_{i}/{V}_{i}}{4{c}_{i}^{2}/{v}_{i}^{2}}\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{{C}_{i}}{{V}_{i}}<\frac{2{c}_{i}}{{v}_{i}}\\ 0\text{\hspace{0.17em}}otherwise\end{array}$$

which is inserted into (A9) to yield

$$\begin{array}{l}{T}_{i}=\{\begin{array}{l}\frac{1}{4{c}_{i}^{2}/{v}_{i}^{2}}\left(\frac{2{c}_{i}}{{v}_{i}}-\frac{{C}_{i}}{{V}_{i}}\right)\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{{C}_{i}}{{V}_{i}}<\frac{2{c}_{i}}{{v}_{i}}\\ 0\text{\hspace{0.17em}}otherwise\end{array}\\ {S}_{i}=\{\begin{array}{l}\begin{array}{l}\frac{(1+\alpha )\left(\frac{2{c}_{j}}{{v}_{j}}-\frac{{C}_{j}}{{V}_{j}}+\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}+\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)}{\frac{{\mathrm{\Gamma}}_{i}}{{c}_{i}^{2}/{v}_{i}^{2}}\left(\frac{2{c}_{i}}{{v}_{i}}-\frac{{C}_{i}}{{V}_{i}}\right)\left(\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)}\\ \text{\hspace{1em}}when\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\frac{{C}_{j}}{{V}_{j}}-\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}\\ \text{\hspace{1em}}and\text{\hspace{0.17em}}\frac{{C}_{i}}{{V}_{i}}<\frac{2{c}_{i}}{{v}_{i}}\end{array}\\ 0\text{\hspace{0.17em}}otherwise\end{array}\end{array}$$
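As a numerical sanity check of these closed forms, the defense t_{i} in (A11) should satisfy the first-order condition ∂u/∂t_{i} = 0, and the attack T_{i} in (A12) should coincide with the best response T_{i} = √t_{i}(√(V_{i}/C_{i}) − √t_{i}) in (A9). A sketch with illustrative values:

```python
import math

# Illustrative values (an interior solution requires C_i/V_i < 2*c_i/v_i)
C_i, V_i = 1.0, 2.0      # hacker i's unit attack cost and valuation
c_i, v_i = 1.0, 1.0      # each firm's unit defense cost and valuation

t_i = (C_i / V_i) / (4 * c_i**2 / v_i**2)                        # defense, (A11)
T_i = (1 / (4 * c_i**2 / v_i**2)) * (2 * c_i / v_i - C_i / V_i)  # attack, (A12)

# First-order condition from (A11): v_i/(2*sqrt(t_i*V_i/C_i)) - c_i = 0
foc = v_i / (2 * math.sqrt(t_i * V_i / C_i)) - c_i
# Consistency with hacker i's best response in (A9)
T_i_br = math.sqrt(t_i) * (math.sqrt(V_i / C_i) - math.sqrt(t_i))
print(t_i, T_i, foc, abs(T_i - T_i_br))
```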

The second order conditions and Hessian matrix are

$$\begin{array}{l}\frac{{\partial}^{2}{U}_{i}}{\partial {T}_{i}^{2}}=-\frac{4{S}_{i}^{2}{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}^{2}{\mathrm{\Gamma}}_{j}}{(1+\alpha ){c}_{j}^{2}/{v}_{j}^{2}}\left(\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right){S}_{j}-\frac{4{t}_{i}{V}_{i}}{{({t}_{i}+{T}_{i})}^{3}}\\ \frac{{\partial}^{2}{U}_{i}}{\partial {S}_{i}^{2}}=-\frac{4{T}_{i}^{2}{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}^{2}{\mathrm{\Gamma}}_{j}}{(1+\alpha ){c}_{j}^{2}/{v}_{j}^{2}}\left(\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right){S}_{j}\\ \frac{{\partial}^{2}{U}_{i}}{\partial {T}_{i}\partial {S}_{i}}=\frac{{\partial}^{2}{U}_{i}}{\partial {S}_{i}\partial {T}_{i}}=2{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}\left(\frac{1}{{c}_{j}/{v}_{j}}-\frac{8{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}-\frac{{C}_{j}/{V}_{j}-({\mathsf{\Omega}}_{j}+4{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}/{V}_{j}}{2{c}_{j}^{2}/{v}_{j}^{2}}\right){S}_{j}\\ \text{\hspace{1em}}-\frac{4{T}_{i}{S}_{i}{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}^{2}{\mathrm{\Gamma}}_{j}}{(1+\alpha ){c}_{j}^{2}/{v}_{j}^{2}}\left(\frac{4{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right){S}_{j}+2{\mathsf{\Omega}}_{i}{S}_{i}\\ \left|H\right|=\left|\begin{array}{cc}\frac{{\partial}^{2}{U}_{i}}{\partial {T}_{i}^{2}}& \frac{{\partial}^{2}{U}_{i}}{\partial {T}_{i}\partial {S}_{i}}\\ \frac{{\partial}^{2}{U}_{i}}{\partial {S}_{i}\partial {T}_{i}}& \frac{{\partial}^{2}{U}_{i}}{\partial {S}_{i}^{2}}\end{array}\right|=\frac{{\partial}^{2}{U}_{i}}{\partial {T}_{i}^{2}}\frac{{\partial}^{2}{U}_{i}}{\partial {S}_{i}^{2}}-\frac{{\partial}^{2}{U}_{i}}{\partial {T}_{i}\partial {S}_{i}}\frac{{\partial}^{2}{U}_{i}}{\partial {S}_{i}\partial {T}_{i}}\ge 0\end{array}$$

Inserting the values for t_{i} in (A11) and T_{i} and S_{i} in (A12) into (A13) gives
$$\left|H\right|=\frac{2{C}_{i}{S}_{j}{(2{c}_{i}/{v}_{i}-{C}_{i}/{V}_{i})}^{2}{\mathrm{\Gamma}}_{i}^{2}{\mathrm{\Gamma}}_{j}{\mathrm{\Lambda}}_{i}(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j})}{(1+\alpha )({c}_{i}^{3}/{v}_{i}^{3}){c}_{j}^{2}/{v}_{j}^{2}}\ge 0$$

The Hessian matrix is negative semi-definite when $\frac{2{c}_{j}}{{v}_{j}}>\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}$.
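A numerical spot check of (A14): with illustrative parameter values satisfying 2c_{j}/v_{j} > √((1+α)Λ_{j}Γ_{j}S_{j}/V_{j}), every factor of |H| is nonnegative, so the determinant is nonnegative as claimed.

```python
import math

# Determinant |H| from (A14); illustrative parameter values.
C_i, V_i, c_i, v_i = 1.0, 2.0, 1.0, 1.0
c_j, v_j, V_j = 1.0, 1.0, 2.0
alpha, S_j = 0.5, 0.3
Gamma_i, Gamma_j, Lambda_i, Lambda_j = 0.5, 0.5, 0.4, 0.4

cond = 2 * c_j / v_j > math.sqrt((1 + alpha) * Lambda_j * Gamma_j * S_j / V_j)
H = (2 * C_i * S_j * (2 * c_i / v_i - C_i / V_i) ** 2 * Gamma_i ** 2 * Gamma_j
     * Lambda_i * (4 * c_j ** 2 / v_j ** 2
                   - (1 + alpha) * Lambda_j * Gamma_j * S_j / V_j)
     ) / ((1 + alpha) * (c_i ** 3 / v_i ** 3) * c_j ** 2 / v_j ** 2)
print(cond, H)
```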

Inserting (A12) into (A5) gives

$$\begin{array}{l}s=\{\begin{array}{l}\frac{\gamma {c}_{j}}{2{\varphi}_{1}-{\varphi}_{3}}\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{\left(\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)\left(\frac{8{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)-2{\mathrm{\Lambda}}_{j}(1+\alpha )\frac{{c}_{j}}{{v}_{j}}\left(\frac{{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}+\frac{{\mathsf{\Omega}}_{i}{c}_{j}/{v}_{j}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{V}_{j}}\right)}{8\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right){c}_{j}^{2}/{v}_{j}^{2}}>\gamma s\\ 0\text{\hspace{0.17em}}otherwise\end{array}\\ {t}_{j}=\{\begin{array}{l}\begin{array}{l}\frac{\left(\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)\left(\frac{8{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)-2{\mathrm{\Lambda}}_{j}(1+\alpha )\frac{{c}_{j}}{{v}_{j}}\left(\frac{{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}+\frac{{\mathsf{\Omega}}_{i}{c}_{j}/{v}_{j}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{V}_{j}}\right)}{8\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right){c}_{j}^{2}/{v}_{j}^{2}}-\gamma s\\ \text{\hspace{1em}}when\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\frac{{C}_{j}}{{V}_{j}}-\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}\text{\hspace{1em}}and\text{\hspace{0.17em}}\frac{{C}_{i}}{{V}_{i}}<\frac{2{c}_{i}}{{v}_{i}}\\ 
\text{\hspace{1em}}and\text{\hspace{1em}}\frac{\left(\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)\left(\frac{8{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)-2{\mathrm{\Lambda}}_{j}(1+\alpha )\frac{{c}_{j}}{{v}_{j}}\left(\frac{{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}+\frac{{\mathsf{\Omega}}_{i}{c}_{j}/{v}_{j}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{V}_{j}}\right)}{8\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right){c}_{j}^{2}/{v}_{j}^{2}}>\gamma s\end{array}\\ 0\text{\hspace{0.17em}}otherwise\end{array}\end{array}$$

Inserting (A12) and (A15) into (A1) gives

$${T}_{j}=\{\begin{array}{l}\begin{array}{l}\frac{2{c}_{j}/{v}_{j}-{C}_{j}/{V}_{j}+{\mathsf{\Omega}}_{j}{S}_{j}/{V}_{j}}{8{c}_{j}^{2}/{v}_{j}^{2}}-\frac{{\mathsf{\Omega}}_{i}}{4{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}\\ \text{\hspace{1em}}when\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\frac{{C}_{j}}{{V}_{j}}+\frac{2{\mathsf{\Omega}}_{i}{c}_{j}^{2}/{v}_{j}^{2}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{\mathrm{\Gamma}}_{j}{S}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\text{\hspace{0.17em}}and\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\sqrt{\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}}\text{\hspace{1em}}and\text{\hspace{0.17em}}\frac{{C}_{i}}{{V}_{i}}<\frac{2{c}_{i}}{{v}_{i}}\\ \text{\hspace{1em}}and\text{\hspace{0.17em}}\frac{\left(\frac{{C}_{j}}{{V}_{j}}-\frac{{\mathsf{\Omega}}_{j}{S}_{j}}{{V}_{j}}\right)\left(\frac{8{c}_{j}^{2}}{{v}_{j}^{2}}-\frac{(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}\right)-2{\mathrm{\Lambda}}_{j}(1+\alpha )\frac{{c}_{j}}{{v}_{j}}\left(\frac{{\mathrm{\Gamma}}_{j}{S}_{j}}{{V}_{j}}+\frac{{\mathsf{\Omega}}_{i}{c}_{j}/{v}_{j}}{{\mathrm{\Lambda}}_{i}{\mathrm{\Gamma}}_{i}{V}_{j}}\right)}{8\left(4{c}_{j}^{2}/{v}_{j}^{2}-(1+\alpha ){\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{j}{S}_{j}/{V}_{j}\right){c}_{j}^{2}/{v}_{j}^{2}}>\gamma s\end{array}\\ 0\text{\hspace{0.17em}}otherwise\end{array}$$

#### Appendix A.2. Mutual Reaction between Hacker i and Each Firm in the First Attack

Differentiating (2) gives

$$\begin{array}{l}\frac{\partial {U}_{i}^{first}}{\partial {T}_{Ai}}=\frac{{t}_{Ai}{V}_{i}}{{({T}_{Ai}+{t}_{Ai})}^{2}}-{C}_{i}=0\Rightarrow {T}_{Ai}={T}_{i}=\{\begin{array}{c}\left(\sqrt{\frac{{V}_{i}}{{C}_{i}}}-\sqrt{{t}_{i}}\right)\sqrt{{t}_{i}}\text{\hspace{0.17em}}when\text{\hspace{0.17em}}{t}_{i}<\frac{{V}_{i}}{{C}_{i}}\\ 0\text{\hspace{0.17em}}otherwise\end{array},\\ \text{\hspace{1em}}\frac{\partial {T}_{i}}{\partial {t}_{i}}=\frac{1}{2}\sqrt{\frac{{V}_{i}}{{C}_{i}{t}_{i}}}-1,\frac{{\partial}^{2}{T}_{i}}{\partial {t}_{i}^{2}}=-\frac{\sqrt{{V}_{i}}}{2{t}_{i}^{3/2}\sqrt{{C}_{i}}}\\ \frac{\partial {u}^{first}}{\partial {t}_{Ai}}=\frac{{T}_{Ai}{v}_{i}}{{({T}_{Ai}+{t}_{Ai})}^{2}}-{c}_{i}=0\Rightarrow {t}_{Ai}={t}_{i}=\{\begin{array}{c}\left(\sqrt{\frac{{v}_{i}}{{c}_{i}}}-\sqrt{{T}_{i}}\right)\sqrt{{T}_{i}}\text{\hspace{0.17em}}when\text{\hspace{0.17em}}{T}_{i}<\frac{{v}_{i}}{{c}_{i}}\\ 0\text{\hspace{0.17em}}otherwise\end{array},\\ \text{\hspace{1em}}\frac{\partial {t}_{i}}{\partial {T}_{i}}=\frac{1}{2}\sqrt{\frac{{v}_{i}}{{c}_{i}{T}_{i}}}-1,\frac{{\partial}^{2}{t}_{i}}{\partial {T}_{i}^{2}}=-\frac{\sqrt{{v}_{i}}}{2{T}_{i}^{3/2}\sqrt{{c}_{i}}}\end{array}$$

where $\partial {T}_{i}/\partial {t}_{i}$ > 0 when t_{i} < V_{i}/4C_{i}, $\partial {T}_{i}/\partial {t}_{i}$ < 0 when V_{i}/C_{i} > t_{i} > V_{i}/4C_{i}, $\partial {t}_{i}/\partial {T}_{i}$ > 0 when T_{i} < v_{i}/4c_{i}, and $\partial {t}_{i}/\partial {T}_{i}$ < 0 when v_{i}/c_{i} > T_{i} > v_{i}/4c_{i}.

#### Appendix A.3. Corner Solution When Hacker i Is Deterred

T_{i} = T_{Ai} = T_{Bi} = 0 causes S_{i} = 0 according to U_{i} in (9). Inserting T_{i} = S_{i} = 0, and S_{j} = 0 since hacker j does not gain from information sharing, into (A9) and solving gives t_{i} = V_{i}/C_{i}. When hacker i is deterred, inserting T_{i} = S_{i} = S_{j} = 0 into (A5) and (A1) gives

$${t}_{j}=\frac{{C}_{j}/{V}_{j}}{4{c}_{j}^{2}/{v}_{j}^{2}}-\gamma s,\text{\hspace{1em}}s=\frac{\gamma {c}_{j}}{2{\varphi}_{1}-{\varphi}_{3}},\text{\hspace{1em}}{T}_{j}=\{\begin{array}{c}\frac{1}{4{c}_{j}^{2}/{v}_{j}^{2}}\left(\frac{2{c}_{j}}{{v}_{j}}-\frac{{C}_{j}}{{V}_{j}}\right)\text{\hspace{0.17em}}when\text{\hspace{0.17em}}\frac{2{c}_{j}}{{v}_{j}}>\frac{{C}_{j}}{{V}_{j}}\\ 0\text{\hspace{0.17em}}otherwise\end{array}$$

which is the same solution as for t_{i} and T_{i} in (13) except that we now have t_{j} + γs instead of t_{i} because of information sharing.

#### Appendix A.4. Corner Solution When Hacker j Is Deterred

Deterring hacker j means inserting T_{j} = 0 into (A1) and solving

$$\sqrt{\frac{({t}_{j}+\gamma s){V}_{j}}{{C}_{j}-({\mathsf{\Omega}}_{j}+2{\mathrm{\Lambda}}_{j}{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}{\mathrm{\Gamma}}_{j}){S}_{j}}}-\frac{2{\mathrm{\Gamma}}_{i}{T}_{i}{S}_{i}}{1+\alpha}-{t}_{j}-\gamma s=0$$

with respect to t_{j}. When hacker j is deterred, hacker i gains nothing by sharing information, causing S_{i} = 0. Accordingly we assume that hacker j does not share information either, S_{j} = 0. Inserting into (A19) yields t_{j} = V_{j}/C_{j} − γs.

## References

- Kampanakis, P. Security automation and threat information-sharing options. IEEE Secur. Priv. **2014**, 12, 42–51.
- Novshek, W.; Sonnenschein, H. Fulfilled expectations Cournot duopoly with information acquisition and release. Bell J. Econ. **1982**, 13, 214–218.
- Gal-Or, E. Information sharing in oligopoly. Econometrica **1985**, 53, 329–343.
- Shapiro, C. Exchange of cost information in oligopoly. Rev. Econ. Stud. **1986**, 53, 433–446.
- Kirby, A.J. Trade associations as information exchange mechanisms. RAND J. Econ. **1988**, 19, 138–146.
- Vives, X. Trade association disclosure rules, incentives to share information, and welfare. RAND J. Econ. **1990**, 21, 409–430.
- Cremonini, M.; Nizovtsev, D. Risks and benefits of signaling information system characteristics to strategic attackers. J. Manag. Inf. Syst. **2009**, 26, 241–274.
- Fultz, N.; Grossklags, J. Blue versus red: Towards a model of distributed security attacks. In Proceedings of the Thirteenth International Conference Financial Cryptography and Data Security, Accra Beach, Barbados, 23–26 February 2009; Springer: Christ Church, Barbados, 2009; pp. 167–183.
- Herley, C. Small world: Collisions among attackers in a finite population. In Proceedings of the 12th Workshop on the Economics of Information Security (WEIS), Washington, DC, USA, 11–12 June 2013.
- Lin, Y. The institutionalization of hacking practices. Ubiquity **2003**, 2003.
- Sarvari, H.; Abozinadah, E.; Mbaziira, A.; Mccoy, D. Constructing and analyzing criminal networks. In Proceedings of the IEEE Security and Privacy Workshops (SPW), San Jose, CA, USA, 17–18 May 2014; pp. 84–91.
- August, T.; Niculescu, M.F.; Shin, H. Cloud implications on software network structure and security risks. Inf. Syst. Res. **2014**, 25, 489–510.
- Dey, D.; Lahiri, A.; Zhang, G. Quality competition and market segmentation in the security software market. MIS Q. **2014**, 38, 589–606.
- Dey, D.; Lahiri, A.; Zhang, G. Hacker behavior, network effects, and the security software market. J. Manag. Inf. Syst. **2012**, 29, 77–108.
- Galbreth, M.; Shor, M. The impact of malicious agents on the enterprise software industry. MIS Q. **2010**, 34, 595–612.
- Chul Ho, L.; Xianjun, G.; Raghunathan, S. Contracting information security in the presence of double moral hazard. Inf. Syst. Res. **2013**, 24, 295–311.
- Ransbotham, S.; Mitra, S. Choice and chance: A conceptual model of paths to information security compromise. Inf. Syst. Res. **2009**, 20, 121–139.
- Gordon, L.A.; Loeb, M.P.; Lucyshyn, W. Sharing information on computer systems security: An economic analysis. J. Account. Public Policy **2003**, 22, 461–485.
- Gal-Or, E.; Ghose, A. The economic incentives for sharing security information. Inf. Syst. Res. **2005**, 16, 186–208.
- Hausken, K. Security investment and information sharing for defenders and attackers of information assets and networks. In Information Assurance, Security and Privacy Services, Handbooks in Information Systems; Rao, H.R., Upadhyaya, S.J., Eds.; Emerald Group Pub Ltd.: Bingley, UK, 2009; Volume 4, pp. 503–534.
- Hausken, K. Information sharing among firms and cyber attacks. J. Account. Public Policy **2007**, 26, 639–688.
- Gao, X.; Zhong, W.; Mei, S. A game-theoretic analysis of information sharing and security investment for complementary firms. J. Oper. Res. Soc. **2014**, 65, 1682–1691.
- Liu, D.; Ji, Y.; Mookerjee, V. Knowledge sharing and investment decisions in information security. Decis. Support Syst. **2011**, 52, 95–107.
- Mallinder, J.; Drabwell, P. Cyber security: A critical examination of information sharing versus data sensitivity issues for organisations at risk of cyber attack. J. Bus. Contin. Emerg. Plan. **2013**, 7, 103–111.
- Choras, M. Comprehensive approach to information sharing for increased network security and survivability. Cybern. Syst. **2013**, 44, 550–568.
- Tamjidyamcholo, A.; Bin Baba, M.S.; Tamjid, H.; Gholipour, R. Information security—Professional perceptions of knowledge-sharing intention under self-efficacy, trust, reciprocity, and shared-language. Comput. Educ. **2013**, 68, 223–232.
- Rocha Flores, W.; Antonsen, E.; Ekstedt, M. Information security knowledge sharing in organizations: Investigating the effect of behavioral information security governance and national culture. Comput. Secur. **2014**, 43, 90–110.
- Tamjidyamcholo, A.; Bin Baba, M.S.; Shuib, N.L.M.; Rohani, V.A. Evaluation model for knowledge sharing in information security professional virtual community. Comput. Secur. **2014**, 43, 19–34.
- Png, I.P.L.; Wang, Q.-H. Information security: Facilitating user precautions vis-à-vis enforcement against attackers. J. Manag. Inf. Syst. **2009**, 26, 97–121.
- Choi, J.P.; Fershtman, C.; Gandal, N. Network security: Vulnerabilities and disclosure policy. J. Ind. Econ. **2010**, 58, 868–894.
- Nizovtsev, D.; Thursby, M. To disclose or not? An analysis of software user behavior. Inf. Econ. Policy **2007**, 19, 43–64.
- Arora, A.; Krishnan, R.; Telang, R.; Yang, Y. An empirical analysis of software vendors’ patch release behavior: Impact of vulnerability disclosure. Inf. Syst. Res. **2010**, 21, 115–132.
- Temizkan, O.; Kumar, R.L.; Park, S.; Subramaniam, C. Patch release behaviors of software vendors in response to vulnerabilities: An empirical analysis. J. Manag. Inf. Syst. **2012**, 28, 305–338.
- Cavusoglu, H.; Mishra, B.; Raghunathan, S. The value of intrusion detection systems in information technology security architecture. Inf. Syst. Res. **2005**, 16, 28–46.
- Moore, T.; Clayton, R.; Anderson, R. The economics of online crime. J. Econ. Perspect. **2009**, 23, 3–20.
- Skopik, F.; Settanni, G.; Fiedler, R. A problem shared is a problem halved: A survey on the dimensions of collective cyber defense through security information sharing. Comput. Secur.
**2016**, 60, 154–176. [Google Scholar] [CrossRef] - Hausken, K. A strategic analysis of information sharing among cyber attackers. J. Inf. Syst. Technol. Manag.
**2015**, 12, 245–270. [Google Scholar] [CrossRef] - Hausken, K. Information sharing among cyber hackers in successive attacks. Int. Game Theory Rev.
**2017**, 19. [Google Scholar] [CrossRef] - Raymond, E.S. The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary; O’Reilly Media: Sebastopol, CA, USA, 2008. [Google Scholar]
- Ritchie, C. A Look at the Security of the Open Source Development Model; Technical Report; Oregon State University: Corvallis, OR, USA, 2000. [Google Scholar]
- Brunker, M. Hackers: Knights-Errant or Knaves? NBCNews. 1998. Available online: http://msnbc.msn.com/id/3078783 (accessed on 24 May 2017).
- Simon, H. The Sciences of the Artificial; MIT Press: Cambridge, MA, USA, 1969. [Google Scholar]
- Hirshleifer, J. Anarchy and its breakdown. J. Political Econ.
**1995**, 103, 26–52. [Google Scholar] [CrossRef] - Tullock, G. The welfare costs of tariffs, monopolies, and theft. West. Econ. J.
**1967**, 5, 224–232. [Google Scholar] [CrossRef] - Salop, S.C.; Scheffman, D.T. Raising rivals’ costs. Am. Econ. Rev.
**1983**, 73, 267–271. [Google Scholar] - Hausken, K. Production and conflict models versus rent-seeking models. Public Choice
**2005**, 123, 59–93. [Google Scholar] [CrossRef] - Tullock, G. Efficient rent-seeking. In Toward a Theory of the Rent-Seeking Society; Buchanan, J.M., Tollison, R.D., Tullock, G., Eds.; Texas A. & M. University Press: College Station, TX, USA, 1980; pp. 97–112. [Google Scholar]
- Kunreuther, H.; Heal, G. Interdependent security. J. Risk Uncertain.
**2003**, 26, 231–249. [Google Scholar] [CrossRef] - Hausken, K. Income, interdependence, and substitution effects affecting incentives for security investment. J. Account. Public Policy
**2006**, 25, 629–665. [Google Scholar] [CrossRef] - Levins, R. The strategy of model building in population biology. Am. Sci.
**1966**, 54, 421–431. [Google Scholar] - Levins, R.; Lewontin, R. The Dialectical Biologist; Harvard University Press: Cambridge, MA, USA, 1985. [Google Scholar]
| Nomenclature | Explanation | Type |
|---|---|---|
| t_{Qi} | Firm Q’s defense against hacker i in period 1, Q = A,B | iv |
| t_{Qj} | Firm Q’s defense against hacker j in period 3, Q = A,B | iv |
| s_{Q} | Firm Q’s information sharing with the other firm in period 3, Q = A,B | iv |
| T_{Qi} | Hacker i’s attack against firm Q in period 2, Q = A,B | iv |
| T_{Qj} | Hacker j’s attack against firm Q in period 4, Q = A,B | iv |
| S_{i} | Hacker i’s information sharing with hacker j in period 2 | iv |
| u_{Q} | Firm Q’s expected utility, Q = A,B | dv |
| U_{k} | Hacker k’s expected utility, k = i,j | dv |
| S_{j} | Hacker j’s information sharing with hacker i in period 4 | p |
| v_{k} | Each firm’s asset value before hacker k’s attack, k = i,j | p |
| V_{k} | Hacker k’s valuation of each firm before its attack, k = i,j | p |
| c_{k} | Each firm’s unit defense cost before hacker k’s attack, k = i,j | p |
| C_{k} | Hacker k’s unit attack cost, k = i,j | p |
| α | Interdependence between the firms | p |
| γ | Information sharing effectiveness between firms | p |
| ϕ_{1} | Each firm’s unit cost (inefficiency) of own information leakage | p |
| ϕ_{2} | Each firm’s unit benefit (efficiency) of the other firm’s information leakage | p |
| ϕ_{3} | Each firm’s unit benefit (efficiency) of joint information leakage | p |
| Г_{k} | Hacker k’s information sharing effectiveness with the other hacker, k = i,j | p |
| Ʌ_{k} | Hacker k’s utilization of joint information sharing, k = i,j | p |
| Ω_{k} | Hacker k’s reputation gain parameter, k = i,j | p |

iv = independent variable (free choice variable); dv = dependent variable; p = parameter.

| | t_{i} | T_{i} | S_{i} | t_{j} | s | T_{j} | U_{i} | U_{j} | U_{i} + U_{j} | u |
|---|---|---|---|---|---|---|---|---|---|---|
| C_{i} = C_{j} = 1 | 0.25 | 0.25 | 0.25 | 0.208 | 0.2 | 0.125 | 0.625 | 0.832 | 1.457 | 0.523 |
| C_{i} = 1, C_{j} = 3/2 | 0.25 | 0.25 | 0.125 | 0.354 | 0.2 | 0.0625 | 0.531 | 0.349 | 0.881 | 0.603 |
| C_{i} = 3/2, C_{j} = 1 | 0.375 | 0.125 | 0.5 | 0.208 | 0.2 | 0.125 | 0.25 | 0.832 | 1.082 | 0.648 |

© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).