# An Adversarial-Risk-Analysis Approach to Counterterrorist Online Surveillance


## Abstract


## 1. Introduction

- We analyze the suitability of decision-making models based on standard game theory and ARA to tackle the problem of online surveillance. Our analysis considers the case of sequential defense–attack models and examines the fulfillment of certain requirements on the defender's and the attacker's sides.
- We propose an ARA-based model to investigate the problem of online surveillance and analyze the rationality conditions of an automatic threat detection system. Our analysis constitutes a preliminary step towards the systematic application of ARA, in that it aims to establish a point of departure and connection between the analytical framework provided by ARA, a young field within risk analysis, and the problem of online surveillance.
- We conduct an experimental evaluation of the proposed decision-making model and illustrate the typical problem-solving approach used in a real case. Our evaluation methodology may thus serve as a template for real problems, which would mainly add modeling and computational complexity. Furthermore, we carry out a sensitivity analysis and provide a thorough comparison with a standard game-theoretic approach under assumptions of common knowledge. Our experiments show that our ARA-based model outperforms the standard game-theoretic approach, albeit at a higher computational cost.
- The connection between ARA models and online counterterrorism sheds new light on the suitability of such decision-making models for the online surveillance problem. We also hope to illustrate the riveting intersection between the fields of ARA and threat intelligence, in an attempt to bridge the gap between the respective communities.

## 2. Background and Assumptions

#### 2.1. Background in Online Third-Party Tracking

#### 2.2. Assumptions

## 3. The Problem of Online Surveillance

## 4. Analysis of Decision-Making Models

#### 4.1. Model Requirements and Notation

- Both opponents (intelligent, rational) want to maximize their utility.
- There is uncertainty about the attacker’s actions due to uncertainty about their utilities and probabilities.
- The information on the evaluation of the objectives between opponents is incomplete, with the possibility of obtaining it partially through different sources that we will call intelligence (experts, historical data and/or statistical distributions).
- Both simultaneous and non-simultaneous (sequential) decisions can be modeled.

#### 4.2. Sequential Defense–Attack Model

**Example 1** (Counterterrorism scenario).

#### 4.3. Analysis Based on Standard Game Theory

#### 4.4. Analysis Based on ARA

## 5. An ARA Model for the Online Surveillance Problem

#### 5.1. The Defender’s Decision Problem

- decide whether to use the technology, assigning a value ${d}_{1}\in \{0,1\}$ in node ${D}_{1}$;
- face the possible existence of a threat, $a\in \{0,1\}$, in node A;
- observe, if the system is used, the result of the automatic detection system, ${s}_{1}\in \{0,1\}$, in node ${S}_{1}$;
- establish the proportion of profiles investigated manually based on the available resources, assigning a value ${d}_{2}\in \{0,1\}$ in node ${D}_{2}$;
- observe the final result of the surveillance, ${s}_{2}\in \{0,1\}$, in node ${S}_{2}$; and
- add their costs and evaluate the results with their utility function ${u}_{D}$.

- First, for each relevant scenario $({d}_{2},{s}_{2})$, add the consequences and obtain the utility ${u}_{D}({d}_{2},{s}_{2})$.
- In node ${S}_{2}$, calculate the expected utilities:$${\psi}_{D}({d}_{1},a,{s}_{1},{d}_{2})=\sum_{{s}_{2}}{p}_{D}({s}_{2}|{d}_{2},a)\,{u}_{D}({d}_{2},{s}_{2}).$$
- In node ${D}_{2}$, calculate the expected utilities:$${\psi}_{D}({d}_{1},a,{s}_{1})=\sum_{{d}_{2}}{p}_{D}({d}_{2}|{d}_{1},{s}_{1})\,{\psi}_{D}({d}_{1},a,{s}_{1},{d}_{2}).$$
- In node ${S}_{1}$, calculate the expected utilities:$${\psi}_{D}({d}_{1},a)=\sum_{{s}_{1}}{p}_{D}({s}_{1}|a,{d}_{1})\,{\psi}_{D}({d}_{1},a,{s}_{1}).$$
- In node A, calculate the expected utilities:$${\psi}_{D}({d}_{1})=\sum_{a}{p}_{D}(a|{d}_{1})\,{\psi}_{D}({d}_{1},a).$$
- Finally, in decision node ${D}_{1}$, maximize the expected utility and store the corresponding optimal initial decision:$${d}_{1}^{\ast}=\underset{{d}_{1}}{arg\;max}\,{\psi}_{D}({d}_{1}).$$
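By way of illustration, these backward-induction steps can be sketched in Python. The probability values loosely follow the base case of Section 6, while $\pi$, the costs, and the exponential form of ${u}_{D}$ are assumptions of this sketch, not the paper's actual evaluations:

```python
import math

# Illustrative values only; pi, the costs and the utility form are assumptions.
alpha, beta = 0.92, 0.01            # ASC true/false-positive rates (d1 = 1)
rho, rho1, rho0 = 0.48, 0.89, 0.73  # manual-investigation proportions
pi = 0.5                            # p_D(A = 1 | d1), assumed constant here
c, d = 0.05, 1.0                    # investigation cost, undetected damage
c_D = 0.10                          # defender's risk-aversion coefficient

def p_s1(a, d1):
    """p_D(s1 = 1 | a, d1): the ASC can only fire when deployed."""
    return (alpha if a else beta) if d1 else 0.0

def p_d2(d1, s1):
    """p_D(d2 = 1 | d1, s1): proportion of profiles investigated manually."""
    return (rho1 if s1 else rho0) if d1 else rho

def p_s2(d2, a):
    """p_D(s2 = 1 | d2, a): manual investigation assumed 100% effective."""
    return 1.0 if (d2 and a) else 0.0

# Defender's costs v_D(d2, s2) and an exponential (risk-averse) disutility;
# the exact functional form is an assumption of this sketch.
v_D = {(1, 1): c, (1, 0): c + d, (0, 1): 0.0, (0, 0): d}

def u_D(d2, s2):
    return -math.exp(c_D * v_D[(d2, s2)])

def psi_D(d1):
    """Backward induction over nodes S2, D2, S1 and A for a fixed d1."""
    psi = 0.0
    for a in (0, 1):
        psi_a = 0.0
        for s1 in (0, 1):
            psi_s1 = 0.0
            for d2 in (0, 1):
                # Node S2: expectation of u_D over s2.
                q = p_s2(d2, a)
                psi_s2 = q * u_D(d2, 1) + (1 - q) * u_D(d2, 0)
                # Node D2: expectation over d2.
                r = p_d2(d1, s1)
                psi_s1 += (r if d2 else 1 - r) * psi_s2
            # Node S1: expectation over s1.
            s = p_s1(a, d1)
            psi_a += (s if s1 else 1 - s) * psi_s1
        # Node A: expectation over a.
        psi += (pi if a else 1 - pi) * psi_a
    return psi

# Node D1: choose the alternative with maximum expected utility.
d1_star = max((0, 1), key=psi_D)
```

With these point-valued inputs the tree can be enumerated exactly; a real instance would replace them with the Monte Carlo draws described in Section 6.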

#### 5.2. The Adversary’s Decision Problem

- observe the initial decision of D, ${d}_{1}\in \{0,1\}$;
- decide on their presence in the set of monitored sites, $a\in \{0,1\}$, with impact over time if they are not detected;
- observe their success, ${s}_{2}\in \{0,1\}$, after the defender makes their allocations ${d}_{2}\in \{0,1\}$ on the manual investigation of the profiles; and
- add their costs and obtain the corresponding utility ${u}_{A}$.

- Add the consequences and obtain the random utility ${U}_{A}(a,{s}_{2})$ for each pair $(a,{s}_{2})$.
- In node ${S}_{2}$, calculate the expected random utilities:$${\Psi}_{A}({d}_{1},a,{s}_{1},{d}_{2})=\sum_{{s}_{2}}{P}_{A}({s}_{2}|{d}_{2},a)\,{U}_{A}(a,{s}_{2}).$$
- In node ${D}_{2}$, calculate the expected random utilities:$${\Psi}_{A}({d}_{1},a,{s}_{1})=\sum_{{d}_{2}}{P}_{A}({d}_{2}|{d}_{1},{s}_{1})\,{\Psi}_{A}({d}_{1},a,{s}_{1},{d}_{2}).$$
- In node ${S}_{1}$, calculate the expected random utilities:$${\Psi}_{A}({d}_{1},a)=\sum_{{s}_{1}}{P}_{A}({s}_{1}|a,{d}_{1})\,{\Psi}_{A}({d}_{1},a,{s}_{1}).$$
- In node A, calculate the (random) optimal decision in response to each value of ${d}_{1}$:$${A}^{\ast}({d}_{1})={arg\;max}_{a}\,{\Psi}_{A}({d}_{1},a).$$
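A minimal Monte Carlo sketch of this procedure: the adversary's probabilities and utility are drawn at random, their optimal decision ${A}^{\ast}({d}_{1})$ is computed for each draw, and the fraction of draws with ${A}^{\ast}=1$ estimates ${p}_{D}(A=1|{d}_{1})$. The function name and all ranges below are illustrative stand-ins, not the paper's Tables 7–10:

```python
import math
import random

random.seed(7)

def estimate_p_attack(d1, n=2000):
    """Monte Carlo estimate of p_D(A = 1 | d1): the fraction of draws in
    which the simulated adversary's optimal decision is a = 1.
    All ranges here are illustrative assumptions."""
    attacks = 0
    for _ in range(n):
        # One draw of the adversary's random probabilities P_A.
        if d1:
            p_alarm = random.uniform(0.60, 0.99)  # P_A(s1=1 | a=1, d1=1)
            p_inv1 = random.uniform(0.5, 1.0)     # P_A(d2=1 | d1=1, s1=1)
            p_inv0 = random.uniform(0.0, 0.5)     # P_A(d2=1 | d1=1, s1=0)
            p_det = p_alarm * p_inv1 + (1 - p_alarm) * p_inv0
        else:
            p_det = random.uniform(0.0, 1.0)      # P_A(d2=1 | d1=0)
        # One draw of the random utility U_A (risk-seeking exponential).
        b = 100.0
        l = (1 + random.uniform(0.0, 1.0)) * b    # l = (1 + lambda) * b
        c_A = random.uniform(0.0, 0.025)
        u_A = lambda v: math.exp(c_A * v)
        # Manual investigation being 100% effective collapses the nodes
        # S1/D2/S2 into a single detection probability p_det.
        psi_attack = p_det * u_A(b - l) + (1 - p_det) * u_A(b)
        psi_refrain = u_A(0.0)
        if psi_attack > psi_refrain:
            attacks += 1
    return attacks / n

p_attack_with_asc = estimate_p_attack(1)
p_attack_without_asc = estimate_p_attack(0)
```

These estimated probabilities are what the defender then plugs into node A of their own decision problem.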

Algorithm 1: Overall attacker–defender approach.

#### 5.3. Overall Approach

## 6. Experimental Evaluation

#### 6.1. Structure of the Problem

#### 6.2. The Defender’s Evaluations

- Evaluating ${p}_{D}({S}_{1}|a,{d}_{1})$. This probability represents the chance that the automatic threat detection system generates an alarm, whether there is suspicion or not. Obviously, if the system is not used, it cannot generate an alarm. We established a range of values for this probability, although the defender operates with the base values. These considerations are reflected in Table 3.
- Evaluating ${p}_{D}({D}_{2}|{d}_{1},{s}_{1})$. This probability represents the proportion of profiles investigated manually, both when the automatic system is used and when it is not. We also established a range of values that includes the base value for the defender's problem, as shown in Table 4.
- Evaluating ${p}_{D}({S}_{2}|{d}_{2},a)$. ${S}_{2}$ represents the final success or failure of the surveillance. As described in Section 3, manual investigation was considered 100% effective in confirming or ruling out a threat, so we did not use a range for the values of this probability. Table 5 shows these considerations.
- Evaluating ${u}_{D}({d}_{2},{s}_{2})$. Finally, the utility ${u}_{D}({d}_{2},{s}_{2})$ serves as a measure of the quality of the outcomes. We opted for an exponential utility function that orders the costs ${v}_{D}$ of the defender while assuming their (constant) risk aversion. Accordingly, we define ${u}_{D}({d}_{2},{s}_{2})$ as an exponential function of these costs, with ${c}_{D}\sim U(0,3)$, and consider the parameters shown in Table 6.

#### 6.3. The Defender’s Evaluations about the Adversary

- Evaluating ${P}_{A}({S}_{1}|a,{d}_{1})$. We assumed that ${p}_{A}({S}_{1}=1|{d}_{1},a)$ is similar to ${p}_{D}({S}_{1}=1|{d}_{1},a)$. To model our lack of knowledge about the probabilities used by the adversary in their decision problem, we added some uncertainty: except in the cases where ${p}_{D}({S}_{1}=1|{d}_{1},a)$ is 0 or 1, the defender's beliefs ${P}_{A}$ about ${p}_{A}({S}_{1}=1|{d}_{1},a)$ are uniform within the ranges $[{p}_{A}^{\mathrm{min}},{p}_{A}^{\mathrm{max}}]$ of Table 7, evaluated by the defender. Thus, each realization of ${P}_{A}({S}_{1}|a,{d}_{1})$ was generated through the expression$${p}_{A}={p}_{A}^{\mathrm{min}}+\omega ({p}_{A}^{\mathrm{max}}-{p}_{A}^{\mathrm{min}}),$$with $\omega \sim U(0,1)$.
- Evaluating ${P}_{A}({D}_{2}|{d}_{1},{s}_{1})$. We adopted the same approach as before, now based on Table 8.
- Evaluating ${P}_{A}({S}_{2}|{d}_{2},a)$. We adopted the same approach as before, now based on Table 9.
- Evaluating ${U}_{A}(a,{s}_{2})$. Finally, for the utility ${u}_{A}(a,{s}_{2})$, we also opted for an exponential utility function that orders the adversary's costs ${v}_{A}$, while assuming their (constant) risk proclivity in relation to their benefits. Thus, we defined ${u}_{A}(a,{s}_{2})$ as an exponential function of these costs, with ${c}_{A}\sim U(0,0.025)$, and consider the parameters shown in Table 10.
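The sampling rule above can be written as a one-line helper; the function name and the example range are illustrative:

```python
import random

def sample_p_A(p_min, p_max):
    """One realization of the adversary's probability:
    p_A = p_min + omega * (p_max - p_min), with omega ~ U(0, 1)."""
    omega = random.random()
    return p_min + omega * (p_max - p_min)

# e.g., one draw of P_A(s1 = 1 | a = 1, d1 = 1) within an assumed range
p = sample_p_A(0.60, 0.99)
```

Equivalently, this is just a $U({p}_{A}^{\mathrm{min}},{p}_{A}^{\mathrm{max}})$ draw, repeated once per Monte Carlo iteration.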

#### 6.4. Results

All experiments were run on an Intel® Core™ i3-2370 CPU at 2.4 GHz with 4 GB of RAM, on a Windows 10 64-bit operating system. In our example, the computation time was acceptable (15–20 s per problem on average) and, therefore, we did not consider the implementation and its performance as the object of the analysis. In any case, it should be noted that the resolution of the problem implied a Monte Carlo simulation for each value of ${d}_{1}$ and that, in each simulation, uncertainty must be propagated at different levels, which becomes a strong computational challenge for larger problems.

We fitted the logit models with the `bestglm` package of R (available online: https://cran.r-project.org/web/packages/bestglm/index.html (accessed on 8 May 2012)), so as to avoid losing information and overfitting the logit model. Table 14 shows the results of the fits, where, between the null model and the complete model, the best model obtained is "ARA08.06" (a logit model with six of the eight available variables, highlighted in bold). Thus, the model indicates that, a priori, we could do without the parameters ${\beta}^{\mathrm{base}}$ and $\lambda$ to explain the optimal decision ${d}_{1}^{\ast}=1$ of the defender, while the parameter ${\rho}^{\mathrm{base}}$ (proportion of profiles investigated manually when the system is not used), with an odds ratio of 16.56, is shown to be highly influential.
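As a quick sanity check, the odds ratio reported for ${\rho}^{\mathrm{base}}$ follows from exponentiating its logit coefficient in Table 14:

```python
import math

# The odds ratio associated with a logit coefficient is exp(beta).
beta_rho_base = 2.81                 # coefficient of rho^base in model ARA08.06
odds_ratio = math.exp(beta_rho_base)
print(round(odds_ratio, 2))          # prints 16.61; consistent with the
                                     # reported 16.56 up to coefficient rounding
```

The small discrepancy with 16.56 is due to the coefficient being shown to two decimals.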

#### Comparison with Game Theory

## 7. Discussion

## 8. Conclusions and Future Work

## Author Contributions

## Funding

## Conflicts of Interest

## References


**Figure 1.**Third-party tracking requires that publishers include a link to the ad platform(s) they want to partner with (1). When a user visits pages partnering with this/these ad platform(s) (2), the browser is instructed to load the URLs provided by the ad platform(s). Through the use of third-party cookies and other tracking mechanisms, the ad platform(s) can track all these visits and build a browsing profile (3). Finally, the information collected by the ad platform(s) is shared with the security agency, provided that they have an agreement (4).

**Figure 8.** Results of ARA. (**a**) No. of cases (over 1000) and (**b**) ratio ${\psi}_{D}({d}_{1}^{\ast ARA})/{\psi}_{D}({d}_{1}^{\neg \ast})$ (average ± deviation) for ${d}_{1}^{\ast}$, both depending on the risk aversion level ${c}_{D}$ of the defender (abscissa axis).

**Figure 9.** Values of ${\widehat{p}}_{D}(A|{d}_{1})$: on the left, for ${d}_{1}=1$; on the right, for ${d}_{1}=0$.

**Figure 10.** ARA results (grey) vs. game theory (black). (**a**) No. of cases (over 1000) and (**b**) ratio ${\psi}_{D}({d}_{1}^{\ast GT})/{\psi}_{D}({d}_{1}^{\neg \ast})$ (average ± deviation) for ${d}_{1}^{\ast}=1$, both depending on the level ${c}_{D}$ of risk aversion of the defender.

| Symbol | Description |
|---|---|
| $\alpha$ | Probability of ASC alarm due to suspicion (true positive) |
| $\beta$ | Probability of ASC alarm without suspicion (false positive) |
| $\pi$ | Probability of presence of a suspicious user |
| $\rho$ | Probability of manual investigation without using the ASC |
| $\rho_1$ | Probability of manual investigation when the ASC generates an alarm |
| $\rho_0$ | Probability of manual investigation when the ASC does not generate an alarm |
| $c$ | Cost of manual investigation; $c \leqslant \varphi d$, $\varphi \leqslant 1$ |
| $d$ | Damage derived from an undetected suspect |
| $\varphi$ | Cost/damage coefficient of the system |
| $b$ | Benefit for undetected suspects; $l \geqslant (1+\lambda) b$, $\lambda \leqslant 1$ |
| $l$ | Loss for detected suspects |
| $\lambda$ | Benefit/loss coefficient of the suspect |

| Requirements | Standard Game Theory | ARA |
|---|---|---|
| Opponents aim to maximize their utility | ✓ | ✓ |
| Uncertainty about the attacker's actions | ✗ | ✓ |
| Incomplete information about the evaluation of the objectives between opponents | ✗ | ✓ |
| Simultaneous and sequential decisions | ✓ | ✓ |

| | $a=1$ | $a=0$ |
|---|---|---|
| $d_1=1$ | $\alpha^{\mathrm{base}}$ | $\beta^{\mathrm{base}}$ |
| $d_1=0$ | 0 | 0 |

| | $s_1=1$ | $s_1=0$ |
|---|---|---|
| $d_1=1$ | $\rho_1^{\mathrm{base}}$ | $\rho_0^{\mathrm{base}}$ |
| $d_1=0$ | $\rho^{\mathrm{base}}$ | $\rho^{\mathrm{base}}$ |

| | $a=1$ | $a=0$ |
|---|---|---|
| $d_2=1$ | 1 | 0 |
| $d_2=0$ | 0 | 0 |

| | $s_2=1$ | $s_2=0$ |
|---|---|---|
| $d_2=1$ | $c$ | $c+d$ |
| $d_2=0$ | 0 | $d$ |

| | $a=1$ | $a=0$ |
|---|---|---|
| $d_1=1$ | $[\alpha^{\mathrm{min}},\alpha^{\mathrm{max}}]$ | $[\beta^{\mathrm{min}},\beta^{\mathrm{max}}]$ |
| $d_1=0$ | 0 | 0 |

| | $s_1=1$ | $s_1=0$ |
|---|---|---|
| $d_1=1$ | $[\rho_1^{\mathrm{min}},\rho_1^{\mathrm{max}}]$ | $[\rho_0^{\mathrm{min}},\rho_0^{\mathrm{max}}]$ |
| $d_1=0$ | $[\rho^{\mathrm{min}},\rho^{\mathrm{max}}]$ | $[\rho^{\mathrm{min}},\rho^{\mathrm{max}}]$ |

| | $a=1$ | $a=0$ |
|---|---|---|
| $d_2=1$ | 1 | 0 |
| $d_2=0$ | 0 | 0 |

| | $s_2=1$ | $s_2=0$ |
|---|---|---|
| $a=1$ | $b-l$ | $b$ |
| $a=0$ | 0 | 0 |

| Sensitivity and Specificity | | Proportion of Manual Investigations | Costs and Coefficients | |
|---|---|---|---|---|
| $\alpha^{\mathrm{base}}$ | $\beta^{\mathrm{base}}$ | $\rho^{\mathrm{base}}$, $\rho_1^{\mathrm{base}}$, $\rho_0^{\mathrm{base}}$ | $\varphi$ | $\lambda$ |
| $U(0.60, 0.99)$ | $U(0, 0.1)$ | $U(0, 1)$ | $U(0, 1)$ | $U(0, 1)$ |
| $\alpha^{\mathrm{min}}$ | $\beta^{\mathrm{min}}$ | $\rho^{\mathrm{min}}$, $\rho_1^{\mathrm{min}}$, $\rho_0^{\mathrm{min}}$ | $c$ | $b$ |
| $U(0.60, \alpha^{\mathrm{base}})$ | $U(0, \beta^{\mathrm{base}})$ | $U(0, \rho^{\mathrm{base}})$, ditto for $\rho_1^{\mathrm{min}}$ and $\rho_0^{\mathrm{min}}$ | $\varphi d$, with $\varphi \sim U(0,1)$ | 100 |
| $\alpha^{\mathrm{max}}$ | $\beta^{\mathrm{max}}$ | $\rho^{\mathrm{max}}$, $\rho_1^{\mathrm{max}}$, $\rho_0^{\mathrm{max}}$ | $d$ | $l$ |
| $U(\alpha^{\mathrm{base}}, 0.99)$ | $U(\beta^{\mathrm{base}}, 0.1)$ | $U(\rho^{\mathrm{base}}, 1)$, ditto for $\rho_1^{\mathrm{max}}$ and $\rho_0^{\mathrm{max}}$ | 100 | $l=(1+\lambda)b$, with $\lambda \sim U(0,1)$ |

| $d_1^{\ast}$ | $c_D$ | $\psi_D(d_1^{\ast})/\psi_D(d_1^{\neg\ast})$ | $\alpha^{\mathrm{base}}$ | $\beta^{\mathrm{base}}$ | $\rho^{\mathrm{base}}$ | $\rho_1^{\mathrm{base}}$ | $\rho_0^{\mathrm{base}}$ | $\varphi$ | $\lambda$ | $\widehat{p}_D(A=1\mid d_1=1)$ | $\widehat{p}_D(A=1\mid d_1=0)$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.01 | 0.73 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 | 0.91 | 0.99 |
| 1 | 0.10 | 0.53 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 | 0.91 | 0.99 |
| 0 | 0.50 | 0.21 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 | 0.91 | 0.99 |
| 0 | 1.00 | 0.08 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 | 0.91 | 0.99 |
| 0 | 3.00 | 0.08 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 | 0.91 | 0.99 |

| | $a=1$ | $a=0$ |
|---|---|---|
| $d_1=1$ | 0.91 | 0.09 |
| $d_1=0$ | 0.99 | 0.01 |

| Model | Logit constant | $\alpha^{\mathbf{base}}$ | $\beta^{\mathbf{base}}$ | $\rho^{\mathbf{base}}$ | $\rho_1^{\mathbf{base}}$ | $\rho_0^{\mathbf{base}}$ | $\varphi$ | $\lambda$ | $c_D$ | AIC |
|---|---|---|---|---|---|---|---|---|---|---|
| ARA00.00 | −0.75 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 6283.70 |
| **ARA08.06** | 0.64 | −1.49 | 0.00 | 2.81 | −0.46 | −2.37 | −0.58 | 0.00 | −0.08 | 5281.03 |
| ARA08.08 | 0.59 | −1.46 | −1.08 | 2.81 | −0.47 | −2.37 | −0.58 | 0.16 | −0.08 | 5277.06 |

| | | Pred. $d_1^{\ast}=0$ | Pred. $d_1^{\ast}=1$ |
|---|---|---|---|
| Obs. | $d_1^{\ast}=0$ | 2753 | 639 |
| Obs. | $d_1^{\ast}=1$ | 655 | 953 |

| $c_D$ | $d_1^{\ast GT}$ | $\psi_D(d_1^{\ast GT})/\psi_D(d_1^{\neg\ast})$ | $d_1^{\ast ARA}$ | $\psi_D(d_1^{\ast ARA})/\psi_D(d_1^{\neg\ast})$ | $\alpha^{\mathrm{base}}$ | $\beta^{\mathrm{base}}$ | $\rho^{\mathrm{base}}$ | $\rho_1^{\mathrm{base}}$ | $\rho_0^{\mathrm{base}}$ | $\varphi$ | $\lambda$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.01 | 1 | 0.77 | 1 | 0.73 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 |
| 0.10 | 1 | 0.04 | 1 | 0.53 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 |
| 0.50 | 1 | 0.04 | 0 | 0.21 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 |
| 1.00 | 1 | 0.04 | 0 | 0.08 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 |
| 3.00 | 1 | 0.04 | 0 | 0.08 | 0.92 | 0.01 | 0.48 | 0.89 | 0.73 | 0.08 | 0.05 |

| Model | Logit constant | $\alpha^{\mathbf{base}}$ | $\beta^{\mathbf{base}}$ | $\rho^{\mathbf{base}}$ | $\rho_1^{\mathbf{base}}$ | $\rho_0^{\mathbf{base}}$ | $\varphi$ | $\lambda$ | $c_D$ | AIC |
|---|---|---|---|---|---|---|---|---|---|---|
| GT00.00 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 6927.13 |
| GT08.05 | 0.34 | 0.00 | 3.40 | 1.34 | −1.25 | −0.74 | 0.00 | 0.00 | −0.09 | 6549.20 |
| GT08.08 | −0.13 | 0.50 | 3.28 | 1.35 | −1.24 | −0.74 | 0.05 | 0.10 | −0.09 | 6550.38 |
| ARA08.06 | 0.64 | −1.49 | 0.00 | 2.81 | −0.46 | −2.37 | −0.58 | 0.00 | −0.08 | 5281.03 |

| | | Pred. $d_1^{\ast}=0$ | Pred. $d_1^{\ast}=1$ |
|---|---|---|---|
| Obs. | $d_1^{\ast}=0$ | 1595 | 816 |
| Obs. | $d_1^{\ast}=1$ | 1277 | 1312 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Gil, C.; Parra-Arnau, J.
An Adversarial-Risk-Analysis Approach to Counterterrorist Online Surveillance. *Sensors* **2019**, *19*, 480.
https://doi.org/10.3390/s19030480
