Robust Financial Fraud Alerting System Based in the Cloud Environment
Abstract
1. Introduction
2. Related Work
Ref | Risk/Resilience Assessment | AI/ML | Formal Models Verification | Cyberattacks | Potential Contribution
---|---|---|---|---|---
[12] | X | | | | Application to the fintech and fraud detection domain.
[14] | X | | | X | Additional risk factors can be incorporated into the framework.
[18] | | | | X | Application to the fintech and fraud detection domain and consideration of additional cyberattacks.
[19] | X | | | X | User scoring systems can be implemented in a cloud environment and use the official support.
[21] | X | | X | X | Cyberattacks against the cloud can be modelled in attack graphs.
[22] | X | | X | X | User scoring systems can be used for vulnerability-based assessment.
[32] | X | | X | X | Real-world cloud system concept that can be used in analysis and implemented.
[23] | | | X | X | Additional cloud-related security issues in the form of a greater challenge.
[24] | | | X | | System models to generate and execute model-based tests.
[25] | | X | | X | AI solutions can be applied to scoring systems.
3. Proposed Solution
- Fraud detection robustness—ensured through the proposed parallel processing anomaly detectors;
- Alerting robustness—ensured through a novel scoring system based on the analysis of users’ historical behaviour;
- An extensive cyber resilience analysis—the system’s robustness during different cyberattacks targeting the cloud infrastructure, analysed using formal methods.
3.1. Methodology
- Part 1: Investigation and testing of different ML-based anomaly detection techniques for fraud detection in the fintech environment. This part was proposed and extensively reported in our previous paper [33], which included the following aspects:
- Extensive SoA analysis covering automated fraud detection in the credit card, financial transactions and blockchain fintech domains;
- Analysis and case studies’ definitions based on publicly available datasets in this domain, including credit card fraud detection (CreditCard dataset), financial transactions fraud detection (PaySim dataset) and bank transactions fraud detection (BankSim dataset);
- Investigation and testing of suitable preprocessing techniques, including statistical analysis, feature engineering and feature selection based on information value;
- Analysis and testing of different applicable ML techniques, including outlier detection methods (local outlier factor, isolation forest and elliptic envelope) and ensemble approaches (random forest, adaptive boosting and extreme gradient boosting);
- Reliability analysis of anomaly detection algorithms based on layer-wise relevance propagation.
- Part 2: Resilience analysis of the system’s implementation possibilities in the cloud environment. This part, the main focus of this paper, builds on the findings and partial results of Part 1 and performs a resilience analysis of implementation options for the fraud detection service in the cloud environment. We also propose an additional scoring-based alerting logic that uses the historical decisions of the ML models. It includes the following aspects:
- SoA analysis covering risk and resilience assessment, formal verification and cybersecurity in cloud environments;
- Case study selection based on case studies in Part 1—credit card fraud detection (CreditCard dataset) based on outlier detection methods (local outlier factor, isolation forest and elliptic envelope);
- The fraud detection robustness is improved through the proposed implementation that includes parallel processing anomaly detectors;
- Alerting robustness is improved by proposing a novel ML-based scoring system based on the analysis of users’ historical behaviour;
- The system’s resilience is analysed during different cyberattacks targeting the cloud infrastructure by using formal methods.
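As an illustration of the case-study setup above, the three outlier detection methods (local outlier factor, isolation forest and elliptic envelope) can be sketched with scikit-learn. The synthetic features and parameter values below are stand-ins for illustration only, not the preprocessing pipeline from [33]:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
# Synthetic stand-in for preprocessed transaction features; the real
# pipeline in [33] applies feature engineering to the CreditCard dataset
X_train = rng.normal(size=(500, 4))
X_test = np.vstack([rng.normal(size=(95, 4)),
                    rng.normal(loc=8.0, size=(5, 4))])  # 5 injected anomalies

detectors = {
    "local_outlier_factor": LocalOutlierFactor(novelty=True, contamination=0.05),
    "isolation_forest": IsolationForest(contamination=0.05, random_state=42),
    "elliptic_envelope": EllipticEnvelope(contamination=0.05, random_state=42),
}

predictions = {}
for name, det in detectors.items():
    det.fit(X_train)                         # fit on (mostly) regular traffic
    predictions[name] = det.predict(X_test)  # +1 = regular, -1 = anomaly
```

Each detector is fitted on regular traffic only and then queried on unseen transactions, which matches the unsupervised setting of the CreditCard case study.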
3.2. Financial Fraud Alerting System Based on Parallel Anomaly Detectors
3.2.1. Anomaly Detection
- Formal checking of the influence of additional parallel detectors, with regard to true positive rate (TPR) and true negative rate (TNR), during system operation uninterrupted by cyberattacks;
- Formal checking of the system behaviour and the influence of different cyberattacks on the system’s performance, reflected in missed alerts (false negatives) and false alerts (false positives).
- CreditCard: This credit card fraud detection dataset contains transactions made with credit cards of European cardholders in September 2013 (encoded as PCA components).
- PaySim: Synthetic financial datasets for fraud detection; the authors of this dataset used aggregated data from a private dataset to generate a synthetic one. The synthetic dataset resembles the common operation of transactions, but contains injected malicious behaviour to be able to evaluate the performance of fraud detection methods.
- BankSim dataset: Synthetic data from a financial payment system. In order to generate this dataset, its authors used an agent-based simulator of bank payments. This was based on a sample of aggregated transactional data that was provided by a bank in Spain.
- True positive rate (TPR) represents the probability that fraudulent activity (transaction) will be correctly detected;
- True negative rate (TNR) represents the probability that a regular (non-fraudulent) transaction will be correctly classified.
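These two metrics, together with a combination rule for the parallel detectors, can be sketched as follows. The majority vote shown here is one plausible combination scheme for illustration, not necessarily the exact decision logic of the proposed system:

```python
def tpr_tnr(y_true, y_pred):
    """TPR = detected frauds / all frauds; TNR = correctly passed regular
    transactions / all regular transactions (labels: 1 = fraud, 0 = regular)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def majority_vote(*detector_outputs):
    """Flag a transaction when more than half of the parallel detectors flag it."""
    return [1 if sum(votes) > len(votes) / 2 else 0
            for votes in zip(*detector_outputs)]

# Toy example: three parallel detectors judging five transactions
y_true = [1, 1, 0, 0, 0]
d1 = [1, 0, 0, 1, 0]
d2 = [1, 1, 0, 0, 0]
d3 = [0, 1, 0, 0, 1]
combined = majority_vote(d1, d2, d3)
tpr, tnr = tpr_tnr(y_true, combined)
```

In the toy example, each individual detector makes one error, yet the combined decision is correct on all five transactions, which is the intuition behind the robustness gain of parallel detectors.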
3.2.2. Decision Making
3.3. System Extension—Machine-Learning-Based Scoring Module
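As a minimal sketch of such a scoring module, a user’s score can be maintained as an exponentially weighted history of the anomaly detectors’ decisions. The decay factor, threshold and update formula below are illustrative assumptions, not the paper’s exact scoring logic:

```python
from collections import defaultdict

class UserScore:
    """Illustrative per-user score built from historical detector decisions."""

    def __init__(self, decay=0.8, alert_threshold=0.5):
        self.decay = decay
        self.alert_threshold = alert_threshold
        self.scores = defaultdict(float)  # user_id -> score in [0, 1]

    def update(self, user_id, flagged: bool) -> bool:
        # Blend the newest detector decision into the user's historical score
        s = self.decay * self.scores[user_id] + (1 - self.decay) * float(flagged)
        self.scores[user_id] = s
        return s >= self.alert_threshold  # raise an alert?

scorer = UserScore()
decisions = [True, True, True, True]  # repeatedly flagged behaviour
alerts = [scorer.update("u1", d) for d in decisions]
```

A single flagged transaction does not trigger an alert; only repeated suspicious behaviour pushes the user’s score over the threshold, which is what makes alerting based on historical behaviour more robust to isolated detector errors.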
4. Assessment of Cyber Resilience Using Formal Methods
4.1. Resilience Analysis Work-Flow
- System specification: Definition of the system’s architecture, its components and communication channels.
- Cyber threat identification: In order to identify cyber threats in the proposed systems, their technological components need to be considered separately. Threats are identified with respect to the system’s architecture—those typical for each component. For this reason, personal expertise and available information sources (e.g., [20]) were consulted to infer a list of threats for both proposed systems.
- Threat assessment: Based on the obtained list of threats and the system’s architecture, possible entry points for an attacker are identified. Cyber vulnerabilities are then attributed to these entry points and result in different types of exploits. Subsequently, corresponding attack scenarios are defined that depict the behaviour of an attacker in the given environment. For every vulnerability, an exploitation probability is calculated according to several metrics, each of which represents the likelihood of occurrence for a given attack.
- Model: The formal model was created in the PRISM modelling language, based on the proposed architecture, the identified vulnerabilities with their exploitation probabilities, and the modelled attacks. The non-deterministic model was developed as a Markov decision process (MDP). Formal attack properties were then identified and expressed in probabilistic computation tree logic (PCTL), which is supported by the PRISM model checker.
- Model checker (PRISM): Model checking was performed against the identified properties using the PRISM model checker. The required input elements for formal verification were the system model and the identified attack properties.
- Model checking results: This process resulted in the maximum likelihoods of successful attack attempts—risk exposure scores.
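The quantity PRISM computes for a PCTL reachability query such as Pmax=? [ F "attack_success" ] can be illustrated with value iteration on a toy MDP. The states and transition probabilities below are invented for illustration and do not come from the paper’s model:

```python
# Toy Markov decision process: the attacker picks actions to maximise the
# probability of eventually reaching the "success" state, which is exactly
# what a Pmax reachability query asks a probabilistic model checker.
STATES = ["start", "foothold", "success", "blocked"]
ACTIONS = {  # state -> available actions, each a distribution over successors
    "start":    [{"foothold": 0.6, "blocked": 0.4},
                 {"success": 0.2, "blocked": 0.8}],
    "foothold": [{"success": 0.5, "blocked": 0.5}],
    "success":  [{"success": 1.0}],
    "blocked":  [{"blocked": 1.0}],
}

def pmax_reach(target, iters=100):
    """Value iteration for the maximum probability of reaching `target`."""
    v = {s: (1.0 if s == target else 0.0) for s in STATES}
    for _ in range(iters):
        for s in STATES:
            if s == target:
                continue
            # Maximise over the attacker's non-deterministic choices
            v[s] = max(sum(p * v[t] for t, p in dist.items())
                       for dist in ACTIONS[s])
    return v

risk_exposure = pmax_reach("success")
```

The resulting value per state plays the role of the risk exposure score described above: here the best attacker strategy from "start" is the two-step path via "foothold" (0.6 × 0.5 = 0.3), which beats the direct attempt (0.2).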
4.2. Overview of Formal Methods
5. Risk and Resilience Assessment for Fraud Alerting Systems
5.1. Basic and Extended Fraud Alerting Systems
5.2. Attack Scenarios in Fraud Alerting Systems
- SQL injection [58]: This attack targets the database behind an application and comes in the form of a SQL query. The aim of the attack is to gain access to a database and conduct unauthorized operations on its data entries. In the context of a fraud alerting system, this means that client data can be compromised or modified by an attacker. As a consequence, a fraudulent user or activity may be covered up, or a benign user may become a suspect erroneously.
- Denial of service (DoS) attack [59]: The goal of this type of attack is to interrupt the functionality of a system or to trigger access-control restrictions. In this way, the service becomes unavailable for legitimate users or some system component. Typically, successful attacks cause buffer overflows by flooding temporary storage with large amounts of data [60]. In addition, this type of attack can take the form of flooding the target system with network traffic until it is disabled altogether [61].
- Adversarial attacks on AI [62]: In the domain of AI, adversarial AI addresses vulnerabilities of ML algorithms that can be exploited by an attacker. Cyberattacks against ML can disrupt statistical classifiers by injecting malicious data into the algorithm. In this way, malicious data are classified as legitimate during the training phase, whereas legitimate training data are rejected. Typical adversarial attacks include data poisoning, which degrades the performance of the target ML model [28]. In the context of a fraud alerting system, such attacks cause a misclassification of clients with regard to their fraud levels.
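These attack types are later weighted by exploitation probabilities derived from CVSS metrics (Section 5.3). A minimal sketch, assuming a simple normalisation of the CVSS v3.1 base score to [0, 1] (the paper derives its probabilities from CVSS metrics, but this exact mapping is an assumption); the severity bands follow the CVSS v3.1 specification:

```python
# CVSS v3.1 qualitative severity bands (Critical >= 9.0, High >= 7.0,
# Medium >= 4.0, Low >= 0.1) and a simple score-to-probability mapping.
SEVERITY_BANDS = [(9.0, "Critical"), (7.0, "High"), (4.0, "Medium"), (0.1, "Low")]

def severity(base_score: float) -> str:
    for threshold, label in SEVERITY_BANDS:
        if base_score >= threshold:
            return label
    return "None"

def exploitation_probability(base_score: float) -> float:
    # Illustrative assumption: normalise the 0-10 base score to [0, 1]
    return round(base_score / 10.0, 2)

# Base scores from the threat assessment tables in Section 5
attacks = {
    "Denial-of-Service (buffer)": 8.6,
    "Adversarial attack (poisoning)": 6.1,
    "SQL injection": 6.5,
}
probabilities = {name: exploitation_probability(s) for name, s in attacks.items()}
```

With these bands, the base scores from the tables map to High (8.6) and Medium (6.1, 6.5), matching the ratings listed there.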
- Attack scenario 1
- Attack scenario 2
- Attack scenario 3
- Attack scenario 4
- Attack scenario 5
5.3. Exploitation Probability Assessment
5.4. Modelled Attack Outcomes
- Missed alert—the transaction was fraudulent, but the system did not detect it;
- False alert—the transaction was regular, but the system created an alert.
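Both outcomes are the off-diagonal cells of the system’s confusion matrix and can be expressed as a simple predicate over the ground truth and the alerting decision:

```python
def outcome(fraudulent: bool, alert_raised: bool) -> str:
    """Classify one transaction against the system's alerting decision."""
    if fraudulent and not alert_raised:
        return "missed alert"  # fraud slipped through undetected
    if not fraudulent and alert_raised:
        return "false alert"   # regular transaction wrongly flagged
    return "correct"

labels = [outcome(f, a) for f, a in [(True, False), (False, True), (True, True)]]
```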
6. Results and Discussion
6.1. Parallel Detectors’ Influence on the Detection Performance
6.2. Cyber-Attacks’ Influence on the System’s Performance
7. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Bettinger, A. FINTECH: A Series of 40 Time Shared Models Used at Manufacturers Hanover Trust Company. Interfaces 1972, 2, 62–63. [Google Scholar]
- Thakor, A.V. Fintech and banking: What do we know? J. Financ. Intermediation 2020, 41, 100833. [Google Scholar] [CrossRef]
- Lynn, T.; Mooney, J.G.; Rosati, P.; Cummins, M. Disrupting finance: FinTech and strategy in the 21st century. In Proceedings of the International Conference on Artificial Intelligence and Computer Vision (AICV2020), Advances in Intelligent Systems and Computing, Cairo, Egypt, 8–10 April 2020. [Google Scholar]
- Vivek, D.; Rakesh, S.; Walimbe, R.S.; Mohanty, A. The Role of CLOUD in FinTech and RegTech. Ann. Dunarea Jos Univ. Galati-Fascicle Econ. Appl. Inform. 2020, 26, 5–13. [Google Scholar] [CrossRef]
- Microsoft Azure: Cloud Computing Services. Available online: https://azure.microsoft.com (accessed on 10 August 2022).
- Kott, A.; Linkov, I. Cyber Resilience of Systems and Networks; Springer: Berlin, Germany, 2019. [Google Scholar] [CrossRef]
- Dal Pozzolo, A.; Boracchi, G.; Caelen, O.; Alippi, C.; Bontempi, G. Credit card fraud detection: A realistic modeling and a novel learning strategy. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3784–3797. [Google Scholar] [PubMed]
- Kaur, G.; Habibi Lashkari, Z.; Habibi Lashkari, A. Cybersecurity Threats in FinTech. Underst. Cybersecur. Manag. Fintech. Future Bus. Financ. 2021. [Google Scholar] [CrossRef]
- Martins, N.; Magalhães Cruz, J.; Cruz, T.; Abreu, P.H. Adversarial Machine Learning Applied to Intrusion and Malware Scenarios: A Systematic Review. IEEE Access 2020, 8, 35403–35419. [Google Scholar] [CrossRef]
- Imerman, M.; Patel, R.; Kim, Y.D. Cloud finance: A review and synthesis of cloud computing and cloud security in financial services. J. Financ. Transform. Capco Inst. 2022, 55, 18–25. [Google Scholar]
- Kettani, H.; Cannistra, R.M. On Cyber Threats to Smart Digital Environments. In Proceedings of the 2nd International Conference on Smart Digital Environment (ICSDE’18), Rabat, Morocco, 18–20 October 2018. [Google Scholar]
- Tsaregorodtsev, A.V.; Kravets, O.J.; Choporov, O.N.; Zelenina, A.N. Information Security Risk Estimation for Cloud Infrastructure. Int. J. Inf. Technol. Secur. 2018, 4, 67–76. [Google Scholar]
- Common Vulnerability Scoring System SIG. Available online: https://www.first.org/cvss (accessed on 4 August 2022).
- Sun, X.; Liu, P.; Singhal, A. Toward Cyberresiliency in the Context of Cloud Computing. IEEE Secur. Priv. 2018, 16, 71–75. [Google Scholar] [CrossRef]
- Furfaro, A.; Piccolo, A.; Parise, A.; Argento, L.; Saccà, D. A Cloud-based platform for the emulation of complex cybersecurity scenarios. Future Gener. Comput. Syst. 2018, 89, 791–803. [Google Scholar] [CrossRef]
- Singh Sohal, A.; Sandhu, R.; Sood, S.K.; Chang, V. A cybersecurity framework to identify malicious edge device in fog computing and cloud-of-things environments. Comput. Secur. 2018, 74, 340–354. [Google Scholar] [CrossRef]
- Hawasli, A. AzureLang: A Probabilistic Modeling and Simulation Language for Cyber Attacks in Microsoft Azure Cloud Infrastructure. Master’s Thesis, KTH, School of Electrical Engineering and Computer Science (EECS), Stockholm, Sweden, 2018. [Google Scholar]
- Sontowski, S.; Gupta, M.; Chukkapalli, S.S.L.; Abdelsalam, M.; Mittal, S.; Joshi, A.; Sandhu, R. Cyber Attacks on Smart Farming Infrastructure. In Proceedings of the International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), Shanghai, China, 16–18 October 2020. [Google Scholar]
- Jauhiainen, H. Designing End User Area Cybersecurity for Cloud-Based Organization. Master’s Thesis, Metropolia University of Applied Sciences, Helsinki, Finland, 2018. [Google Scholar]
- MITRE ATT&CK®. Available online: https://attack.mitre.org (accessed on 16 November 2022).
- Sabur, A.; Chowdhary, A.; Huang, D.; Alshamrani, A. Toward scalable graph-based security analysis for cloud networks. Comput. Netw. 2022, 206, 108795. [Google Scholar] [CrossRef]
- George, G.; Thampi, S.M. Vulnerability-based risk assessment and mitigation strategies for edge devices in the Internet of Things. Pervasive Mob. Comput. 2019, 59, 101068. [Google Scholar] [CrossRef]
- Souaf, S.; Berthomé, P.; Loulergue, F. A Cloud Brokerage Solution: Formal Methods Meet Security in Cloud Federations. In Proceedings of the 2018 International Conference on High Performance Computing & Simulation (HPCS), Orleans, France, 16–20 July 2018. [Google Scholar]
- Gomes Valadares, D.; Alvares de Carvalho César Sobrinho, A.; Perkusich, A.; Costa Gorgonio, K. Formal Verification of a Trusted Execution Environment-Based Architecture for IoT Applications. IEEE Internet Things J. 2021, 8, 17199–17210. [Google Scholar] [CrossRef]
- Waqas, M.; Tu, S.; Halim, Z.; Ur Rehman, S.; Abbas, G.; Haq Abbas, Z. The role of artificial intelligence and machine learning in wireless networks security: Principle, practice and challenges. In Artificial Intelligence Review; Springer: Berlin/Heidelberg, Germany, 2022; pp. 5215–5261. [Google Scholar]
- Al Nafea, R.; Almaiah, M.A. Cyber Security Threats in Cloud: Literature Review. In Proceedings of the International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021. [Google Scholar]
- Ahmad, W.; Rasool, A.; Javed, A.R.; Baker, T.; Jalil, Z. Cyber Security in IoT-Based Cloud Computing: A Comprehensive Survey. Electronics 2022, 11, 16. [Google Scholar] [CrossRef]
- Duddu, V. A Survey of Adversarial Machine Learning in Cyber Warfare. Def. Sci. J. 2018, 68, 356. [Google Scholar] [CrossRef]
- Alt, F. Pervasive Security and Privacy—A Brief Reflection on Challenges and Opportunities. IEEE Pervasive Comput. 2021, 55, 82–86. [Google Scholar] [CrossRef]
- Kulik, T.; Dongol, B.; Larsen, P.G.; Macedo, H.D.; Schneider, S.; Tran-Jorgensen, P.W.V.; Woodcock, J. A Survey of Practical Formal Methods for Security. Form. Asp. Comput. 2022, 34, 1–39. [Google Scholar] [CrossRef]
- Tissir, N.; El Kafhali, S.; Aboutabit, N. Cybersecurity management in cloud computing: Semantic literature review and conceptual framework proposal. J. Reliab. Intell. Environ. 2021, 7, 69–84. [Google Scholar] [CrossRef]
- Vallant, H.; Stojanović, B.; Božić, J.; Hofer-Schmitz, K. Threat Modelling and Beyond-Novel Approaches to Cyber Secure the Smart Energy System. Appl. Sci. 2021, 11, 5149. [Google Scholar] [CrossRef]
- Stojanović, B.; Božić, J.; Hofer-Schmitz, K.; Nahrgang, K.; Weber, A.; Badii, A.; Sundaram, M.; Jordan, E.; Runevic, J. Follow the trail: Machine learning for fraud detection in Fintech applications. Sensors 2021, 21, 1594. [Google Scholar] [CrossRef] [PubMed]
- PRISM—Probabilistic Symbolic Model Checker. Available online: https://www.prismmodelchecker.org (accessed on 1 August 2022).
- Keerthi, K.; Roy, I.; Hazra, A.; Rebeiro, C. Formal verification for security in IoT devices. Secur. Fault Toler. Internet Things 2019, 179–200. [Google Scholar] [CrossRef]
- Basin, D.; Cremers, C.; Meadows, C. Model checking security protocols. In Handbook of Model Checking; Springer: Berlin/Heidelberg, Germany, 2018; pp. 727–762. [Google Scholar]
- Hahn, E.M.; Hartmanns, A.; Hensel, C.; Klauck, M.; Klein, J.; Křetínskỳ, J.; Parker, D.; Quatmann, T.; Ruijters, E.; Steinmetz, M. The 2019 comparison of tools for the analysis of quantitative formal models. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems; Springer: Berlin/Heidelberg, Germany, 2019; pp. 69–92. [Google Scholar]
- Hofer-Schmitz, K.; Stojanović, B. Towards formal verification of IoT protocols: A Review. Comput. Netw. 2020, 174, 107233. [Google Scholar] [CrossRef]
- Katoen, J.P. The probabilistic model checking landscape. In Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, New York, NY, USA, 5–8 July 2016; pp. 31–45. [Google Scholar]
- Bartels, F.; Sokolova, A.; de Vink, E. A hierarchy of probabilistic system types. Theor. Comput. Sci. 2004, 327, 3–22. [Google Scholar] [CrossRef]
- Hartmanns, A.; Hermanns, H. In the quantitative automata zoo. Sci. Comput. Program. 2015, 112, 3–23. [Google Scholar] [CrossRef]
- Bengtsson, J.; Larsen, K.; Larsson, F.; Pettersson, P.; Yi, W. UPPAAL—A tool suite for automatic verification of real-time systems. In International Hybrid Systems Workshop; Springer: Berlin/Heidelberg, Germany, 1995; pp. 232–243. [Google Scholar]
- Behrmann, G.; David, A.; Larsen, K.G. A Tutorial on Uppaal 4.0.; Department of Computer Science, Aalborg University: Aalborg, Denmark, 2006. [Google Scholar]
- Hinton, A.; Kwiatkowska, M.; Norman, G.; Parker, D. PRISM: A tool for automatic verification of probabilistic systems. In Proceedings of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Vienna, Austria, 25 March–2 April 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 441–444. [Google Scholar]
- Kwiatkowska, M.; Norman, G.; Parker, D. PRISM 4.0: Verification of probabilistic real-time systems. In Proceedings of the International Conference on Computer Aided Verification, Snowbird, UT, USA, 5 July 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 585–591. [Google Scholar]
- Dehnert, C.; Junges, S.; Katoen, J.P.; Volk, M. A storm is coming: A modern probabilistic model checker. In Proceedings of the International Conference on Computer Aided Verification, Heidelberg, Germany, 24–28 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 592–600. [Google Scholar]
- Hensel, C.; Junges, S.; Katoen, J.P.; Quatmann, T.; Volk, M. The probabilistic model checker Storm. Int. J. Softw. Tools Technol. Transf. 2022, 24, 589–610. [Google Scholar] [CrossRef]
- Naeem, A.; Azam, F.; Amjad, A.; Anwar, M.W. Comparison of model checking tools using timed automata-PRISM and UPPAAL. In Proceedings of the 2018 IEEE International Conference on Computer and Communication Engineering Technology (CCET), Beijing, China, 18–20 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 248–253. [Google Scholar]
- Guide for Conducting Risk Assessments. 2012, p. 86. Available online: https://www.proquest.com/openview/18c4c4b072ef4af28d2bf91db8e278b8/1?pq-origsite=gscholar&cbl=41798 (accessed on 29 November 2022).
- Tripathy, D.; Gohil, R.; Halabi, T. Detecting SQL Injection Attacks in Cloud SaaS using Machine Learning. In Proceedings of the International Conference on Big Data Security on Cloud (BigDataSecurity), High Performance and Smart Computing (HPSC) and Intelligent Data and Security (IDS), Baltimore, MD, USA, 25–27 May 2020. [Google Scholar]
- Xiao, F.; Zhijian, W.; Meiling, W.; Ning, C.; Yue, Z.; Lei, Z.; Pei, W.; Xiaoning, C. An old risk in the new era: SQL injection in cloud environment. Int. J. Grid Util. Comput. 2021, 12, 43–54. [Google Scholar] [CrossRef]
- Gupta, B.B.; Badve, O.P. Taxonomy of DoS and DDoS attacks and desirable defense mechanism in a Cloud computing environment. Neural Comput. Appl. 2017, 28, 3655–3682. [Google Scholar] [CrossRef]
- Somani, G.; Singh Gaur, M.; Sanghi, D.; Conti, M.; Buyya, R. DDoS attacks in cloud computing: Issues, taxonomy, and future directions. Comput. Commun. 2017, 107, 30–48. [Google Scholar] [CrossRef]
- Logesswari, S.; Jayanthi, S.; KalaiSelvi, D.; Muthusundari, S.; Aswin, V. A study on cloud computing challenges and its mitigations. Mater. Today Proc. 2020. [Google Scholar] [CrossRef]
- Santoso, L.W. Cloud Technology: Opportunities for Cybercriminals and Security Challenges. In Proceedings of the Twelfth International Conference on Ubi-Media Computing (Ubi-Media), Bali, Indonesia, 6–9 August 2019. [Google Scholar]
- Chen, Y.; Gong, X.; Wang, Q.; Di, X.; Huang, H. Backdoor Attacks and Defenses for Deep Neural Networks in Outsourced Cloud Environments. IEEE Netw. 2020, 34, 141–147. [Google Scholar] [CrossRef]
- Ma, Z.; Ma, J.; Miao, Y.; Liu, X.; Choo, K.K.R.; Deng, R.H. Pocket Diagnosis: Secure Federated Learning against Poisoning Attack in the Cloud. IEEE Trans. Serv. Comput. 2021. [Google Scholar] [CrossRef]
- SQL Injection. Available online: https://owasp.org/www-community/attacks/SQL_Injection (accessed on 1 August 2022).
- Denial of Service. Available online: https://owasp.org/www-community/attacks/Denial_of_Service (accessed on 3 August 2022).
- Buffer Overflow Attack. Available online: https://owasp.org/www-community/attacks/Buffer_overflow_attack (accessed on 3 August 2022).
- Understanding Denial-of-Service Attacks. Available online: https://www.cisa.gov/uscert/ncas/tips/ST04-015 (accessed on 3 August 2022).
- Vorobeychik, Y.; Kantarcioglu, M. Adversarial Machine Learning; Springer: Cham, Switzerland, 2018. [Google Scholar]
- Common Vulnerability Scoring System Version 3.1 Calculator. Available online: https://www.first.org/cvss/calculator/3.1 (accessed on 4 August 2022).
- National Vulnerability Database. Available online: https://nvd.nist.gov (accessed on 4 August 2022).
Outlier Detection Method | TPR | TNR
---|---|---
Local Outlier Factor | 0.8824 | 0.8960
Isolation Forest | 0.9265 | 0.8992
Elliptic Envelope | 0.8824 | 0.9003
# | Type of Attack | Affected Component (Domain) | Base Score
---|---|---|---
1 | Denial-of-Service | Buffer (Anomaly detection and alerting module) | 8.6 (High)
2 | Adversarial attack (poisoning) | Feature Extraction module (Anomaly detection and alerting module) | 6.1 (Medium)
3 | SQL injection | Alerts database | 6.5 (Medium)
# | Type of Attack | Affected Component (Domain) | Base Score
---|---|---|---
1 | Denial-of-Service | Buffer (Anomaly detection and alerting module) | 8.6 (High)
2 | Adversarial attack (poisoning) | Feature Extraction module (Anomaly detection and alerting module) | 6.1 (Medium)
3 | Denial-of-Service | Buffer (Blacklisting module) | 8.6 (High)
4 | SQL injection | Alerts database | 6.5 (Medium)
5 | SQL injection | Scores database | 6.5 (Medium)
Outcome | Description | Initial State | Final State (Basic System) | Final State (Extended System)
---|---|---|---|---
Missed alert (no attacks) | Transaction was fraudulent but the system did not detect it during uninterrupted system operation | | |
Missed alert (active attacks) | Transaction was fraudulent but the system did not detect it during system operation potentially interrupted by cyber-attacks | | |
False alert (no attacks) | Transaction was regular, but the system created an alert during uninterrupted system operation | | |
False alert (active attacks) | Transaction was regular, but the system created an alert during system operation potentially interrupted by cyber-attacks | | |
Outlier Detection Method | TPR | TNR
---|---|---
Proposed parallel processing | 0.9709 | 0.9712
Local Outlier Factor | 0.8824 | 0.8960
Isolation Forest | 0.9265 | 0.8992
Elliptic Envelope | 0.8824 | 0.9003
Outcome | Basic System | Extended System | Extended System
---|---|---|---
Missed alert (no attacks) | 0.0291 | 0.0291 | 0.0281
Missed alert (active attacks) | 0.9524 | 0.8236 | 0.7519
False alert (no attacks) | 0.0288 | 0.0007 | 0.0143
False alert (active attacks) | 0.8612 | 0.5669 | 0.6315
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Stojanović, B.; Božić, J. Robust Financial Fraud Alerting System Based in the Cloud Environment. Sensors 2022, 22, 9461. https://doi.org/10.3390/s22239461