# Argumentation-Based Query Answering under Uncertainty with Application to Cybersecurity


## Abstract


## 1. Introduction

**Contributions.** We contribute to the area of intelligent systems applied to cybersecurity in the following ways:

- A use case for the application of a structured probabilistic argumentation model (DeLP3E) [18] based on publicly available cybersecurity datasets.
- Design of the P-DAQAP framework, an extension of DAQAP [19], to work with DeLP3E, and the proposal of different classes of queries in the context of applications related to CTA.
- A preliminary empirical evaluation of an approximation algorithm for probabilistic query answering in P-DAQAP, showing the potential for the system to scale to nontrivial problem sizes, arriving at solutions efficiently and effectively.

## 2. Preliminaries

#### 2.1. Defeasible Logic Programming (DeLP)

#### 2.2. Probabilistic DeLP: DeLP3E Framework

#### A Simple Illustrative Example

- Analytical Model:
  - $\theta_1: L_1$
  - $\theta_2: L_2$
  - $\theta_3: \neg L_1$
- Annotation Function:
  - $\mathit{af}(\theta_1): a \wedge \neg b$
  - $\mathit{af}(\theta_2): b$
  - $\mathit{af}(\theta_3): b$
- Environmental Model:

| World | $a$ | $b$ | $P_r(\lambda_i)$ |
|---|---|---|---|
| $\lambda_1$ | T | T | 0.25 |
| $\lambda_2$ | T | F | 0.20 |
| $\lambda_3$ | F | T | 0.05 |
| $\lambda_4$ | F | F | 0.50 |

**Subprograms induced in each possible world:**

- $P_{\mathit{AM}}(\lambda_1) = \{L_2, \neg L_1\}$
- $P_{\mathit{AM}}(\lambda_2) = \{L_1\}$
- $P_{\mathit{AM}}(\lambda_3) = \{L_2, \neg L_1\}$
- $P_{\mathit{AM}}(\lambda_4) = \emptyset$

Query $L_1$ is thus clearly warranted only in world $\lambda_2$, while its complement ($\neg L_1$) is warranted in $\lambda_1$ and $\lambda_3$.

**Probability interval calculation:**

$$P_r(L_1) \in \left[\ \sum_{\lambda_i:\ L_1 \text{ is warranted}} P_r(\lambda_i),\quad 1 - \sum_{\lambda_i:\ \neg L_1 \text{ is warranted}} P_r(\lambda_i)\ \right] = [0.20,\ 1 - (0.25 + 0.05)]$$

**Result:** $0.20 \le P_r(L_1) \le 0.70$
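To make the interval computation concrete, the following is a minimal sketch: the world probabilities and warrant statuses are transcribed from the example above, while the function and variable names are ours, not part of any implementation.

```python
# Worlds from the example: probability, and which literal (if any)
# is warranted in the subprogram induced by that world.
worlds = {
    "lambda1": {"prob": 0.25, "warrants": "~L1"},  # a=T, b=T
    "lambda2": {"prob": 0.20, "warrants": "L1"},   # a=T, b=F
    "lambda3": {"prob": 0.05, "warrants": "~L1"},  # a=F, b=T
    "lambda4": {"prob": 0.50, "warrants": None},   # a=F, b=F
}

def probability_interval(worlds, literal, complement):
    """Lower bound: total mass of worlds warranting the literal.
    Upper bound: 1 minus the mass of worlds warranting its complement."""
    lower = sum(w["prob"] for w in worlds.values() if w["warrants"] == literal)
    upper = 1 - sum(w["prob"] for w in worlds.values() if w["warrants"] == complement)
    return lower, upper

lo, hi = probability_interval(worlds, "L1", "~L1")
print(f"{lo:.2f} <= Pr(L1) <= {hi:.2f}")  # prints "0.20 <= Pr(L1) <= 0.70"
```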

## 3. Cyberthreat Analysis with DeLP3E

- (i) Tactics, denoting short-term tactical adversary goals during an attack.
- (ii) Techniques, describing the means by which adversaries achieve tactical goals.
- (iii) Subtechniques, describing more specific means, at a lower level than that of techniques, by which adversaries achieve tactical goals.
- (iv) Documented adversary usage of techniques, their procedures, and other metadata.

$\langle \mathcal{A}_1,\ \mathit{tech\_in\_use}(\mathit{account\_discovery})\rangle$, with

$\mathcal{A}_1 = \{\delta_3,\ \theta_1(\mathit{adv\_group}(\mathit{apt29}))\}$;

$\langle \mathcal{A}_2,\ {\sim}\mathit{impl\_techsub}(\mathit{os\_credential\_dumping})\rangle$, with

$\mathcal{A}_2 = \{\delta_6,\ \delta_1(\mathit{prev\_techsub}(\mathit{os\_credential\_dumping})),\ \varphi_1(\mathit{mitigation}(\mathit{credential\_access\_protection}))\}$.

**Listing 1.** DeLP program that comprises the AM, together with the annotation function.

- Θ (facts):
  - $\theta_1: \mathit{adv\_group}(G)$
  - $\theta_2: \mathit{software}(S)$
  - $\theta_3: \mathit{platform\_available}(P)$
  - $\theta_4: \mathit{tech\_subtech}(T\_ST)$
- Ω (strict rules):
  - $\omega_1: \mathit{accomp\_tactic}(\mathit{Tactic}) \leftarrow \mathit{tech\_subtech}(T\_ST)$
  - $\omega_2: \mathit{op\_in\_platform}(\mathit{Platform}) \leftarrow \mathit{tech\_subtech}(T\_ST)$
  - $\omega_3: \mathit{impl\_techsub}(T\_ST) \leftarrow \mathit{software}(S)$
  - $\omega_4: \mathit{capec\_rel\_weaknesses}(\mathit{CWE\_List}) \leftarrow \mathit{capec\_id}(T\_ST)$
  - $\omega_5: \mathit{cwe\_observed}(\mathit{CVE\_List}) \leftarrow \mathit{capec\_rel\_weaknesses}(\mathit{CWE\_List})$
  - $\omega_6: \mathit{nvd\_cve}(\mathit{Vuln\_info}) \leftarrow \mathit{cwe\_observed}(\mathit{CVE\_List})$
  - $\omega_7: \mathit{known\_techst}(T\_ST) \leftarrow \mathit{accomp\_tactic}(T)$
  - $\omega_8: \mathit{known\_techst}(T\_ST) \leftarrow \mathit{platform\_available}(P)$
- Φ (presumptions):
  - $\varphi_1: \mathit{mitigation}(M) \,{-\!\!<}\,$
  - $\varphi_2: \mathit{likelihoodAttack}(\mathit{CAPEC\_ID}, \mathit{Value}) \,{-\!\!<}\,$
- Δ (defeasible rules):
  - $\delta_1: \mathit{prev\_techsub}(T\_ST) \,{-\!\!<}\, \mathit{mitigation}(M)$
  - $\delta_2: \mathit{known\_mit}(M) \,{-\!\!<}\, \mathit{tech\_subtech}(T\_ST)$
  - $\delta_3: \mathit{tech\_in\_use}(T\_ST) \,{-\!\!<}\, \mathit{adv\_group}(G)$
  - $\delta_4: \mathit{soft\_in\_use}(S) \,{-\!\!<}\, \mathit{adv\_group}(G)$
  - $\delta_5: \mathit{pos\_threat}(T\_ST, S) \,{-\!\!<}\, \mathit{tech\_in\_use}(T\_ST), \mathit{soft\_in\_use}(S)$
  - $\delta_6: {\sim}\mathit{impl\_techsub}(T\_ST) \,{-\!\!<}\, \mathit{prev\_techsub}(T\_ST)$
  - $\delta_7: \mathit{intensify\_mit}(M) \,{-\!\!<}\, \mathit{known\_mit}(M), \mathit{tech\_in\_use}(T\_ST), \mathit{likelihoodAttack}(T\_ST, \mathit{high})$
- Annotation function:
  - $\mathit{af}(\varphi_1)=e_1$, $\mathit{af}(\varphi_2)=e_2$, $\mathit{af}(\delta_1)=e_3$, $\mathit{af}(\delta_2)=e_4$, $\mathit{af}(\delta_3)=e_5$, $\mathit{af}(\delta_4)=e_6$, $\mathit{af}(\delta_5)=e_7$, $\mathit{af}(\delta_6)=e_8$, $\mathit{af}(\delta_7)=e_9$

**Queries.** Lastly, we present two queries that we revisit in the next section:

- $\mathit{pos\_threat}(T1134, SO344)$: What is the probability that access token manipulation (technique T1134) is being used, leveraging the Azorult malware (software ID SO344), to attack our systems?
- $\mathit{intensify\_mit}(M1026)$: What is the probability that privileged account management (mitigation strategy M1026) should be deployed? M1026 mitigates T1134.

## 4. P-DAQAP Platform

#### 4.1. Architecture and Workflow

#### 4.2. P-DAQAP Functionalities

#### 4.2.1. Current State: Registered Queries

#### 4.2.2. “What-If” Scenarios

#### 4.2.3. Next Steps: Explainability

**Most Probable Scenarios.**As a combination of the previous two functionalities, the system can compute a set of the k most probable scenarios given the current set of observations. In the current implementation, which uses Bayesian networks to specify the probability distribution in the EM, this set can be computed by the probabilistic model module by returning the most probable explanations (MPEs) of the BN given the current evidence in the EM. Then, the result of this first step can be combined with the counterfactual analysis described above and each scenario can be explored taking into account its probability of occurrence and its consequences.
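As an illustration of the MPE step, the following is a minimal brute-force sketch over a toy two-variable BN; the variables, probabilities, and function names are ours and do not come from the P-DAQAP implementation, which relies on dedicated BN inference algorithms rather than enumeration.

```python
from itertools import product

# Toy BN over binary variables a -> b: Pr(a) and Pr(b | a).
p_a = {True: 0.45, False: 0.55}
p_b_given_a = {True: {True: 0.6, False: 0.4}, False: {True: 0.1, False: 0.9}}

def joint(a, b):
    """Joint probability of a full assignment under the toy BN."""
    return p_a[a] * p_b_given_a[a][b]

def k_most_probable(evidence, k):
    """Enumerate full assignments consistent with the evidence and
    return the k most probable ones with their probabilities."""
    scored = []
    for a, b in product([True, False], repeat=2):
        world = {"a": a, "b": b}
        if all(world[v] == val for v, val in evidence.items()):
            scored.append((joint(a, b), world))
    scored.sort(key=lambda t: -t[0])  # most probable first
    return scored[:k]

# The two most probable scenarios given the observation a = True.
top = k_most_probable({"a": True}, k=2)
```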

**Rule-based Explanations.**Another possibility is to show the arguments that support the query in the subprogram generated by a particular scenario or set of scenarios. This provides the analyst with the set of rules and facts involved in the derivation, and precisely what role they played, which may highlight the need to revise one or more of these components (for example, facts coming from an outdated data source); an approach in this direction was recently reported in [29]. Another benefit of rule-based approaches is that they can be rendered more interpretable by, for instance, using templates to translate rules into natural language, as proposed in [30]. Lastly, it is also possible to show the user minimal sets of EM elements (BN variables or worlds) that allow for the generation of supporting arguments for the query, thus pointing to the uncertain elements that play a role in the logical derivations of interest.

## 5. Empirical Evaluation

#### 5.1. Experimental Setup

- The basic components on which more complex structures are built, namely facts and assumptions, are generated first.
- Arguments are then organized in levels, where each level indicates the maximal number of rules used in a derivation chain before a basic element is reached.
- Dialectical trees are generated only for top-level arguments because, having more elements in their bodies, they offer a greater number of possible points of attack.
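The layered scheme above can be sketched as follows; this is a hypothetical illustration of the idea, with all names and parameters ours rather than the actual generator used in the experiments.

```python
import random

def generate_kb(num_facts, num_levels, rules_per_level, rng=random.Random(1)):
    """Sketch of a layered KB generator: level 0 holds basic elements
    (facts), and each subsequent level derives new literals from
    elements of lower levels, so a level-k literal is supported by a
    chain of at most k rules before bottoming out in a basic element."""
    levels = [[f"f{i}" for i in range(num_facts)]]  # level 0: facts
    rules = []
    counter = 0
    for _ in range(num_levels):
        pool = [lit for lvl in levels for lit in lvl]  # lower-level literals
        new_level = []
        for _ in range(rules_per_level):
            head = f"p{counter}"
            counter += 1
            body = rng.sample(pool, k=2)
            rules.append((head, body))  # defeasible rule: head -< body
            new_level.append(head)
        levels.append(new_level)
    return levels, rules

levels, rules = generate_kb(num_facts=3, num_levels=2, rules_per_level=2)
```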

`networkx` library (https://networkx.github.io, accessed on 21 August 2022). To control the entropy of the encoded distribution, we took each node probability table entry and randomly chose between true and false; then, we randomly assigned a probability to that outcome in the interval $[\alpha, 1]$, where $\alpha$ is a parameter varied in $\{0.7, 0.9\}$.
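The entry-generation scheme just described can be sketched as follows; `random_cpt_entry` and its signature are our illustrative names, not part of the experimental code.

```python
import random

def random_cpt_entry(alpha, rng=random.Random(0)):
    """Pick true or false at random, then assign that outcome a
    probability drawn uniformly from [alpha, 1]. Higher alpha yields
    more extreme entries, i.e., a lower-entropy distribution.
    Returns Pr(node = true | parent assignment)."""
    outcome = rng.choice([True, False])
    p = rng.uniform(alpha, 1.0)
    return p if outcome else 1.0 - p

# With alpha = 0.9, every entry lands in [0.9, 1] or [0, 0.1].
entries = [random_cpt_entry(0.9) for _ in range(5)]
```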

**Quality Metric.** Given a probability interval $i_1 = [a, b]$, we used the following metric to gauge the quality of a sound approximation $i_2 = [c, d]$ (that is, $[a, b] \subseteq [c, d]$ always holds):

#### 5.2. Results

- First, sampling larger sets of worlds leads to higher quality approximations. Though this is expected, there are two interesting details:
- For the 20 EM variable case, the difference in quality between 5000 and 10,000 samples was not statistically significant (two-tailed two-sample unequal variance Student’s t-tests yielded p-values greater than 0.08 for $\alpha = 0.7$ and greater than 0.16 for $\alpha = 0.9$), which means that 5000 samples sufficed to obtain a good approximation.
- The proportion of repeated samples (i.e., wasted effort) was quite high for both entropy levels; for $\alpha =0.7$ (higher entropy) on average 52% of samples were repeated, while for $\alpha =0.9$ (lower entropy), an average of 87% were not unique. For the 20 EM variable case, the quality levels were achieved with only 2293 and 469 unique samples, respectively. Larger sample sizes also lead to lower variation in quality (shorter error bars).

- Next, entropy noticeably impacted solution quality (except for 10 EM variables, the smallest setting). Since our approximation algorithm samples worlds directly from the BN’s distribution, it is natural to observe better effectiveness with lower (less spread out) entropy distributions. A smaller number of worlds represents a larger portion of the probability mass.
- Lastly, even for higher values of entropy, we observed adequate quality levels for modest numbers of samples compared to the size of the full sample space.
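The entropy effect can be illustrated with a small sketch. For simplicity, we assume independent binary EM variables rather than a general BN; the point carries over, since in a low-entropy distribution a handful of the most probable worlds already covers most of the probability mass.

```python
from itertools import product

def world_probs(entry_probs):
    """Joint distribution over worlds of independent binary variables,
    where entry_probs[i] = Pr(var_i = true), sorted most probable first.
    (Independence is an illustrative simplification of a BN.)"""
    probs = []
    for world in product([True, False], repeat=len(entry_probs)):
        p = 1.0
        for value, q in zip(world, entry_probs):
            p *= q if value else 1 - q
        probs.append(p)
    return sorted(probs, reverse=True)

def mass_of_top(probs, k):
    """Probability mass covered by the k most probable worlds."""
    return sum(probs[:k])

low_entropy = world_probs([0.95] * 10)   # alpha close to 1: peaked
high_entropy = world_probs([0.6] * 10)   # flatter distribution
# The single most probable world covers far more mass in the
# low-entropy case than in the high-entropy one.
```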

#### 5.3. Results in the Context of Practical Applications

## 6. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Abbreviation | Expansion |
|---|---|
| AM | Analytical Model |
| CAPEC | Common Attack Pattern Enumeration and Classification |
| CPE | Common Platform Enumeration |
| CTA | Cyberthreat Analysis |
| CVE | Common Vulnerabilities and Exposures |
| CWE | Common Weakness Enumeration |
| DeLP | Defeasible Logic Programming |
| DeLP3E | Defeasible Logic Programming with Presumptions and Probabilistic Environments |
| EM | Environmental Model |
| KB | Knowledge Base |
| NVD | National Vulnerability Database |
| P-DAQAP | Probabilistic Defeasible Argumentation Query Answering Platform |
| XAI | Explainable Artificial Intelligence |

## References

1. Mumford, E. The story of socio-technical design: Reflections on its successes, failures and potential. Inf. Syst. J. **2006**, 16, 317–342.
2. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. **2019**, 267, 1–38.
3. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion **2020**, 58, 82–115.
4. Gunning, D. Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA). 2017. Available online: https://nsarchive.gwu.edu/sites/default/files/documents/5794867/National-Security-Archive-David-Gunning-DARPA.pdf (accessed on 21 August 2022).
5. Viganò, L.; Magazzeni, D. Explainable security. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Genoa, Italy, 7–11 September 2020; pp. 293–300.
6. Castelvecchi, D. Can we open the black box of AI? Nat. News **2016**, 538, 20.
7. Mahdavifar, S.; Ghorbani, A.A. DeNNeS: Deep embedded neural network expert system for detecting cyber attacks. Neural Comput. Appl. **2020**, 32, 14753–14780.
8. Kuppa, A.; Le-Khac, N.A. Black Box Attacks on Explainable Artificial Intelligence (XAI) methods in Cyber Security. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
9. Szczepański, M.; Choraś, M.; Pawlicki, M.; Kozik, R. Achieving explainability of intrusion detection system by hybrid oracle-explainer approach. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
10. Malatji, M.; Sune, V.S.; Marnewick, A. Socio-technical systems cybersecurity framework. Inf. Comput. Secur. **2019**, 27, 233–272.
11. Alsmadi, I. The NICE Cyber Security Framework: Cyber Security Management; Springer Nature: Cham, Switzerland, 2020.
12. Leiva, M.A.; Simari, G.I.; Simari, G.R.; Shakarian, P. Cyber threat analysis with structured probabilistic argumentation. In Proceedings of the AI3, CEUR-WS, Rende, Italy, 19–22 November 2019; Volume 2528, pp. 50–64.
13. Shakarian, P.; Simari, G.I.; Moores, G.; Parsons, S.; Falappa, M.A. An Argumentation-based Framework to Address the Attribution Problem in Cyber-Warfare. In Proceedings of the CyberSecurity, ASE, Stanford, CA, USA, 27–31 May 2014.
14. Kuppa, A.; Le-Khac, N.A. Adversarial XAI methods in cybersecurity. IEEE Trans. Inf. Forensics Secur. **2021**, 16, 4924–4938.
15. Liu, H.; Zhong, C.; Alnusair, A.; Islam, S.R. FAIXID: A framework for enhancing AI explainability of intrusion detection results using data cleaning techniques. J. Netw. Syst. Manag. **2021**, 29, 1–30.
16. Srivastava, G.; Jhaveri, R.H.; Bhattacharya, S.; Pandya, S.; Rajeswari; Maddikunta, P.K.R.; Yenduri, G.; Hall, J.G.; Alazab, M.; Gadekallu, T.R. XAI for Cybersecurity: State of the Art, Challenges, Open Issues and Future Directions. arXiv **2022**, arXiv:2206.03585.
17. Hariharan, S.; Velicheti, A.; Anagha, A.; Thomas, C.; Balakrishnan, N. Explainable Artificial Intelligence in Cybersecurity: A Brief Review. In Proceedings of the 2021 4th International Conference on Security and Privacy (ISEA-ISAP), Dhanbad, India, 27–30 October 2021; pp. 1–12.
18. Shakarian, P.; Simari, G.I.; Moores, G.; Paulo, D.; Parsons, S.; Falappa, M.A.; Aleali, A. Belief revision in structured probabilistic argumentation. AMAI **2016**, 78, 259–301.
19. Leiva, M.A.; Simari, G.I.; Gottifredi, S.; García, A.J.; Simari, G.R. DAQAP: Defeasible Argumentation Query Answering Platform. In Proceedings of the FQAS 2019, Amantea, Italy, 2–5 July 2019; pp. 126–138.
20. Simari, G.R.; Loui, R.P. A mathematical treatment of defeasible reasoning and its implementation. Artif. Intell. **1992**, 53, 125–157.
21. Toni, F. A tutorial on assumption-based argumentation. Argum. Comput. **2014**, 5, 89–117.
22. Modgil, S.; Prakken, H. The ASPIC+ framework for structured argumentation: A tutorial. Argum. Comput. **2014**, 5, 31–62.
23. García, A.J.; Simari, G.R. Defeasible logic programming: DeLP-servers, contextual queries, and explanations for answers. Argum. Comput. **2014**, 5, 63–88.
24. Besnard, P.; Garcia, A.; Hunter, A.; Modgil, S.; Prakken, H.; Simari, G.; Toni, F. Introduction to structured argumentation. Argum. Comput. **2014**, 5, 1–4.
25. Martinez, M.V.; García, A.J.; Simari, G.R. On the Use of Presumptions in Structured Defeasible Reasoning. In COMMA; Verheij, B., Szeider, S., Woltran, S., Eds.; IOS Press: Amsterdam, The Netherlands, 2012; Volume 245, pp. 185–196.
26. Suciu, D.; Olteanu, D.; Ré, C.; Koch, C. Probabilistic databases. Synth. Lect. Data Manag. **2011**, 3, 1–180.
27. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: San Francisco, CA, USA, 1988.
28. Paredes, J.; Teze, J.C.; Simari, G.I.; Martinez, M.V. On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report). arXiv **2021**, arXiv:2108.02006.
29. Buron Brarda, M.E.; Tamargo, L.H.; García, A.J. Using Argumentation to Obtain and Explain Results in a Decision Support System. IEEE Intell. Syst. **2021**, 36, 36–42.
30. Grover, S.; Pulice, C.; Simari, G.I.; Subrahmanian, V.S. BEEF: Balanced English Explanations of Forecasts. IEEE Trans. Comput. Soc. Syst. **2019**, 6, 350–364.
31. Alfano, G.; Greco, S.; Parisi, F.; Simari, G.I.; Simari, G.R. Incremental computation for structured argumentation over dynamic DeLP knowledge bases. Artif. Intell. **2021**, 300, 103553.
32. Al-Shaer, R.; Spring, J.M.; Christou, E. Learning the Associations of MITRE ATT&CK Adversarial Techniques. In Proceedings of the 2020 IEEE Conference on Communications and Network Security (CNS), Avignon, France, 29 June–1 July 2020; pp. 1–9.
33. Kuppa, A.; Aouad, L.; Le-Khac, N.A. Linking CVE’s to MITRE ATT&CK Techniques. In Proceedings of the 16th International Conference on Availability, Reliability and Security, Vienna, Austria, 17–20 August 2021; pp. 1–12.
34. Hong, S.; Kim, K.; Kim, T. The Design and Implementation of Simulated Threat Generator based on MITRE ATT&CK for Cyber Warfare Training. J. Korea Inst. Mil. Sci. Technol. **2019**, 22, 797–805.
35. Choi, S.; Yun, J.H.; Min, B.G. Probabilistic attack sequence generation and execution based on MITRE ATT&CK for ICS datasets. In Proceedings of the Cyber Security Experimentation and Test Workshop, Virtual, CA, USA, 9 August 2021; pp. 41–48.

**Figure 2.** Designing a DeLP3E KB for cyberthreat analysis from a variety of publicly available cybersecurity datasets.

**Figure 3.** P-DAQAP platform architecture, including a mock-up of a dashboard for displaying query-answering results related to our use case.

**Figure 4.** (**a**) Average running times per world sampled (n = 100 runs). For each case, we estimate the running time (in hours) required to run the exact (brute force) algorithm. (**b**) Average solution quality varying #EM variables (log of #worlds), #samples, and the parameter that controls the entropy (H) of the probability distribution. For 30 EM variables (1B worlds, bottom right), quality is approximated on the basis of a sample of 250,000 worlds. Error bars correspond to standard deviation (n > 50 for the top charts, n > 15 for the bottom charts).

| Stage | Action |
|---|---|
| **Front-end** | |
| Step 1 | The client loads a DeLP3E knowledge base and specifies a task. |
| **Back-end** | |
| Step A | The web server sends the job to be executed by the Probabilistic Argumentation module. |
| Step B | The module generates data structures and executes the job; when results become available, it returns the output data in JSON format to the web server. |
| **Front-end** | |
| Step 2 | The client receives the response, and the data are presented to the user. |


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Leiva, M.A.; García, A.J.; Shakarian, P.; Simari, G.I.
Argumentation-Based Query Answering under Uncertainty with Application to Cybersecurity. *Big Data Cogn. Comput.* **2022**, *6*, 91.
https://doi.org/10.3390/bdcc6030091
