AI-Powered Cybersecurity Mesh for Financial Transactions: A Generative-Intelligence Paradigm for Payment Security †
Abstract
1. Introduction
1.1. Context: The Dual-Edged Sword of Modern Payments
1.2. The ISO 20022 Paradox
1.3. The Inadequacy of Monolithic Security
1.4. Thesis: A Generative-Intelligence Cybersecurity Mesh (GI-CSM)
1.5. Summary of Contributions
- The design and implementation of a novel microservice architecture that integrates GenAI agents, a formal reinforcement-learning framework, and privacy-preserving federated-learning protocols, all secured by a blockchain-based trust fabric.
- Empirical validation through a thirty-day, dual-site pilot on production-scale datasets, evidencing substantial improvements in threat detection accuracy, decision latency, and operational efficiency over established baseline systems.
1.6. Structure of the Paper
2. Related Work and Theoretical Foundations
2.1. Cybersecurity Mesh Architectures (CSMA)
2.2. AI-Based Payment-Fraud Mitigation
2.3. Blockchain for AI Auditability and Provenance
3. The Generative-Intelligence Cybersecurity Mesh Architecture (GI-CSM)
3.1. Architectural Layers
- Telemetry Ingestion Layer: This layer serves as the data gateway for the entire system. It utilizes Apache Kafka to ingest high-velocity data streams, including ISO 20022 payment payloads, device fingerprints, and behavioral biometrics, at a rate exceeding 25,000 events per second. Apache Flink is employed for stream processing, assembling raw events into 300-millisecond micro-batches for efficient feature derivation downstream. A minimal ingestion sketch follows this list.
- Threat-Intelligence Gateway: This layer enriches incoming data with external context. It consumes curated threat intelligence feeds from sources like the Financial Services Information Sharing and Analysis Center (FS-ISAC) and PhishLabs. An LLM performs named entity recognition on these feeds to extract indicators of compromise (IoCs), which are then converged with adaptive IP and domain blocklists to create a real-time threat landscape view.
- GenAI Agent Fabric: This is the cognitive core of the architecture. It consists of a distributed fabric of lightweight, containerized agents deployed on edge gateways and Kubernetes services. Each agent, with a memory footprint of approximately 200 MB, embeds a distilled 7-billion-parameter LLM and a Gradient-Boosted Decision Tree (GBDT) classifier, enabling sub-100 millisecond inference times crucial for real-time payment processing.
- Federated-Learning Orchestrator: This layer manages the continuous, privacy-preserving training of the agent collective. It uses the PySyft v0.8 framework to coordinate model aggregation rounds every four hours. To preserve privacy, it combines additive secret sharing for secure aggregation with calibrated differential-privacy noise, ensuring that individual transaction data never leaves its local environment and that the central server receives only aggregated, anonymized model updates.
- Blockchain Trust Layer: This layer provides the system’s foundation of trust and auditability. It is implemented as a permissioned Hyperledger Fabric cluster operating with Raft consensus. This distributed ledger immutably records agent identities and SHA-256 digests of their policy model checkpoints, creating a verifiable lineage for every decision and model update, which is critical for regulatory scrutiny.
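To make the Telemetry Ingestion Layer concrete, the following minimal sketch shows 300 ms micro-batching over a Kafka stream. It uses the kafka-python client as a stand-in for the production Kafka/Flink pipeline; the topic name, broker address, and feature-derivation hook are illustrative assumptions rather than the deployed configuration.

```python
import time
from kafka import KafkaConsumer  # pip install kafka-python

BATCH_WINDOW_S = 0.300  # 300-millisecond micro-batches, as described in Section 3.1

def derive_features(events):
    """Placeholder for the downstream feature-derivation step (illustrative only)."""
    return len(events)

consumer = KafkaConsumer(
    "iso20022-payments",             # assumed topic name
    bootstrap_servers="kafka:9092",  # assumed broker address
    value_deserializer=lambda b: b.decode("utf-8"),
    auto_offset_reset="latest",
)

batch, window_start = [], time.monotonic()
while True:
    # Poll in small increments so the 300 ms window boundary is respected.
    for records in consumer.poll(timeout_ms=50).values():
        batch.extend(record.value for record in records)
    if time.monotonic() - window_start >= BATCH_WINDOW_S:
        if batch:
            derive_features(batch)   # hand the completed micro-batch downstream
        batch, window_start = [], time.monotonic()
```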
3.2. Data Flow and Secure Communication
3.3. Security-by-Design Principles
- Resiliency: To prevent single points of failure, Kafka partitions are replicated across multiple availability zones (Replication Factor = 3), and the Hyperledger Fabric cluster’s Raft consensus mechanism uses leader election to tolerate regional failures.
- Explainability: An XAI Explainer module provides regulatory-compliant transparency. It uses techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to generate visual, feature-importance heatmaps for any given decision. These explanations can be surfaced in analyst dashboards or issuer dispute portals, satisfying demands for model clarity. Such explainability mechanisms align with established taxonomies and best practices for explainable AI in cybersecurity and high-risk decision systems [12].
- Confidentiality: A multi-layered confidential computing strategy protects data and models at rest, in transit, and in use. HashiCorp Vault manages a robust key management service, performing envelope encryption with Data Encryption Keys (DEKs) rotated every 24 h. Inference is executed within gVisor sandboxes to isolate processes, model artifacts are encrypted with AES-256-GCM, and sensitive runtime secrets are secured within hardware-backed Intel SGX enclaves on edge hosts.
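As a minimal illustration of the envelope-encryption pattern in the Confidentiality bullet, the sketch below wraps a per-artifact data-encryption key (DEK) under a key-encryption key (KEK) using AES-256-GCM from the Python cryptography package. In the deployed system the KEK would remain inside HashiCorp Vault (or an SGX enclave) and DEK rotation would be driven by the key-management service; keeping the KEK in process memory here is purely for demonstration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Key-encryption key (KEK): in production this never leaves Vault / the enclave.
kek = AESGCM.generate_key(bit_length=256)

def encrypt_model_artifact(artifact: bytes) -> dict:
    """Envelope-encrypt a model artifact with a fresh data-encryption key (DEK)."""
    dek = AESGCM.generate_key(bit_length=256)            # per-artifact DEK, rotated every 24 h
    dek_nonce, data_nonce = os.urandom(12), os.urandom(12)
    wrapped_dek = AESGCM(kek).encrypt(dek_nonce, dek, b"dek-wrap")
    ciphertext = AESGCM(dek).encrypt(data_nonce, artifact, b"model-artifact")
    return {"wrapped_dek": wrapped_dek, "dek_nonce": dek_nonce,
            "data_nonce": data_nonce, "ciphertext": ciphertext}

def decrypt_model_artifact(env: dict) -> bytes:
    dek = AESGCM(kek).decrypt(env["dek_nonce"], env["wrapped_dek"], b"dek-wrap")
    return AESGCM(dek).decrypt(env["data_nonce"], env["ciphertext"], b"model-artifact")

assert decrypt_model_artifact(encrypt_model_artifact(b"checkpoint-bytes")) == b"checkpoint-bytes"
```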
4. GenAI Agent Cognition and Federated Learning
4.1. Hybrid Model Design
- Semantic Embedding Generation: A distilled LLM, named Mistral-7B-Pay [13], first processes the raw transaction data. This model was created by fine-tuning a base 7-billion-parameter model on an 18 GB domain-specific corpus comprising PCI DSS reports, anonymized fraud dispute narratives, chargeback memos, and merchant descriptors. Knowledge distillation was performed using Low-Rank Adaptation (LoRA) [14] adapters (rank = 8, alpha = 16), producing a model that is 2.1 times faster at inference with a negligible drop in quality (a BLEU score decrease of 0.8). The LLM outputs a dense 256-dimensional vector embedding that captures the semantic risk profile of the transaction. The architectural foundations of such language models build on transformer-based attention mechanisms and advances in representation learning.
- Fraud Probability Classification: A highly optimized CatBoost classifier then acts as the fast scoring stage. It takes as input a concatenated vector containing 64 traditionally engineered features (e.g., transaction velocity, amount deviation) and the 256-dimensional semantic embedding from the LLM. This GBDT model outputs a final, pre-threshold fraud probability, enabling rapid decision-making.
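A minimal sketch of this hybrid scoring path is shown below, with random placeholders standing in for the 64 engineered features and the 256-dimensional Mistral-7B-Pay embedding; the depth (9) and learning rate (0.05) mirror the hyperparameters reported later in the paper, while the iteration count and synthetic data are illustrative.

```python
import numpy as np
from catboost import CatBoostClassifier  # pip install catboost

rng = np.random.default_rng(0)
n = 5000

# Placeholders: 64 engineered features (velocity, amount deviation, ...) plus
# the 256-dimensional semantic embedding emitted by the distilled LLM.
engineered = rng.normal(size=(n, 64))
embedding = rng.normal(size=(n, 256))
X = np.hstack([engineered, embedding])     # 320-dimensional input, as in Section 4.2
y = rng.binomial(1, 0.002, size=n)         # ~0.2% fraud prevalence, mirroring MW-2M

clf = CatBoostClassifier(depth=9, learning_rate=0.05, iterations=200, verbose=False)
clf.fit(X, y)

fraud_probability = clf.predict_proba(X[:1])[0, 1]  # pre-threshold score fed to the RL policy
```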
4.2. Federated Reinforcement Learning Formalism
- State Space (S): The state is represented by a 320-dimensional vector, which is the concatenation of the 64 engineered features and the 256-dimensional LLM embedding. This rich state representation provides the agent with both syntactic and semantic information about the transaction.
- Action Space (A): The agent can choose from a discrete set of four actions: A = {approve, step-up, deny, hold}. ‘Step-up’ initiates a request for stronger authentication, while ‘hold’ flags the transaction for manual review.
- Reward Function (R): The agent’s learning is guided by a carefully designed reward function that aligns its objectives with business goals. The function is defined as R = 2·TP − 1·FP − 0.01·latency − 0.5·friction, where TP is a true positive, FP is a false positive, latency is the decision time in milliseconds, and friction represents the negative customer impact of actions like step-up or deny. This function explicitly incentivizes maximizing correct detections while penalizing false positives, decision latency, and negative customer experiences.
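The reward shaping above translates directly into code; the sketch below assumes a binary friction term for step-up and deny actions, which is an illustrative simplification rather than the authors' exact formulation.

```python
def reward(is_true_positive: bool, is_false_positive: bool,
           latency_ms: float, action: str) -> float:
    """R = 2·TP − 1·FP − 0.01·latency − 0.5·friction (Section 4.2)."""
    friction = 1.0 if action in ("step-up", "deny") else 0.0  # assumed binary friction term
    return (2.0 * is_true_positive
            - 1.0 * is_false_positive
            - 0.01 * latency_ms
            - 0.5 * friction)

# Example: a correctly denied fraudulent payment decided in 80 ms.
print(reward(True, False, latency_ms=80, action="deny"))  # 2 - 0.8 - 0.5 = 0.7
```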
4.3. Optimization and Training Strategy
4.4. Privacy and Integrity
4.5. Privacy-Amplified Aggregation (PATE-Style)
5. Blockchain-Anchored Provenance and Post-Quantum Readiness
5.1. Smart Contract Design for Immutable Attestation
- AgentRegistry: This contract manages the lifecycle of each GenAI agent. It records a unique identifier for each agent, its public key, and its operational status, providing a foundational layer of identity attestation.
- ModelLedger: This contract is responsible for creating an immutable record of the system’s intellectual state. Following each successful federated learning round, the orchestrator invokes a PutCheckpoint transaction. This transaction stores a record containing the agentID of the participating agents, a SHA-256 hash of the newly aggregated model, a timestamp, and the roundID. This creates an unbroken, verifiable chain linking every version of the AI model to a specific identity and point in time, delivering the traceability required to comply with audit standards like SOC 2 and GDPR Article 30. The ledger’s data footprint is manageable, growing at an average of 1.2 MB per day, and the Raft consensus mechanism finalizes blocks in under 800 ms, satisfying the sub-second decisioning requirements of the payment pipeline. For PQC overheads and signature-size implications on commit latency and storage, see Appendix A.
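A minimal client-side sketch of preparing a PutCheckpoint record is given below. The field names follow the description above; the helper function and the final submission step via a Hyperledger Fabric gateway are hypothetical stand-ins, since the chaincode and SDK wiring are not reproduced here.

```python
import hashlib
import json
import time

def build_checkpoint_record(agent_ids, model_bytes: bytes, round_id: int) -> dict:
    """Assemble the payload anchored on-chain by the ModelLedger contract."""
    return {
        "agentIDs": sorted(agent_ids),
        "modelHash": hashlib.sha256(model_bytes).hexdigest(),  # SHA-256 digest of the aggregate
        "timestamp": int(time.time()),
        "roundID": round_id,
    }

record = build_checkpoint_record({"agent-007", "agent-042"}, b"aggregated-weights", round_id=96)
payload = json.dumps(record, sort_keys=True).encode()
# In the deployed system this payload would be signed and submitted through a
# Hyperledger Fabric gateway as a PutCheckpoint transaction; that call is elided here.
```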
5.2. The Impending Threat of Quantum Computing
5.3. Architectural Integration of Post-Quantum Signatures
- Chosen Algorithm: CRYSTALS-Dilithium: The selected algorithm for upgrading digital signatures is CRYSTALS-Dilithium, which has been standardized by the U.S. National Institute of Standards and Technology (NIST) as the primary PQC signature scheme (ML-DSA, FIPS 204) [16]. Dilithium is a lattice-based algorithm designed to be secure against attacks from both classical and quantum computers [17], offering a strong foundation for next-generation digital trust. Its security is based on the hardness of lattice problems over module lattices, and it is designed to be strongly unforgeable under chosen-message attacks (SUF-CMA). The cryptographic foundations and security properties of lattice-based signature schemes have been rigorously analyzed in prior research, establishing their suitability for long-term post-quantum security [18,19].
- Integration Points within GI-CSM:
- Blockchain Trust Layer: The ModelLedger smart contract will be upgraded to support Dilithium. The PutCheckpoint transaction will require the model hash to be signed with an agent’s Dilithium private key. The smart contract will, in turn, use the corresponding public key from the AgentRegistry to verify this PQC signature before committing the transaction to the ledger. This ensures that the on-chain provenance of the AI models remains secure and non-repudiable in a post-quantum world.
- Inter-Agent Communication: The gRPC service mesh’s security will be hardened using a hybrid cryptographic approach. The mTLS handshakes will be augmented to use a PQC Key Encapsulation Mechanism (KEM), such as the NIST-standardized CRYSTALS-Kyber, alongside a classical key exchange. This ensures that session keys are protected against “harvest now, decrypt later” attacks. Furthermore, the identities issued to workloads by SPIFFE will be backed by Dilithium key pairs, ensuring that authentication is quantum-resistant.
- Performance and Security Trade-offs: The transition to PQC involves trade-offs. Dilithium signatures and public keys are significantly larger than their ECC counterparts, which can impact network latency and increase the storage footprint of the blockchain. For example, a Dilithium3 public key is 1952 bytes, and its signature is 3293 bytes, compared to the much smaller sizes of ECC. To mitigate this, the implementation will leverage the most efficient standardized parameter sets (e.g., Dilithium2) and may apply PQC selectively to the most critical operations, balancing security with performance during the transition period.
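The size trade-offs quoted above can be inspected directly where the liboqs-python bindings are available; the sketch below assumes the oqs module exposes a "Dilithium3" mechanism (newer liboqs builds may list it as ML-DSA-65), so both the import and the mechanism name are environment-dependent assumptions.

```python
import oqs  # assumes liboqs-python is installed on top of a liboqs build

message = b"sha256-digest-of-an-aggregated-model-checkpoint"

with oqs.Signature("Dilithium3") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Expected sizes for Dilithium3: a 1952-byte public key and a 3293-byte signature,
# versus roughly 64-byte signatures for ECDSA P-256.
print(len(public_key), len(signature))

with oqs.Signature("Dilithium3") as verifier:
    assert verifier.verify(message, signature, public_key)
```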
6. Experimental Methodology
6.1. Datasets
- MW-2M: A dataset from a mobile wallet provider, containing 61 million transactions processed over a 30-day period. The organic fraud prevalence in this dataset was 0.21%.
- CB-150k: A dataset from a cross-border remittance platform, containing 4.5 million transactions over the same period, with a higher fraud prevalence of 0.35%. All records were anonymized in accordance with ISO/TS 12812-3:2017 (Core banking—Mobile financial services—Part 3: Financial application lifecycle management) [20], and all payment card numbers were tokenized as per PCI DSS v4.0 [21] to ensure data privacy.
6.2. Adversarial Traffic Simulation
6.3. Baseline Systems
- Static-Rule Engine (SRE): Representative of incumbent detection stacks curated by risk analysts.
- XGBoost + Sliding Window (XGB): A strong tabular baseline widely deployed in production for low-latency scoring.
- Centralised LSTM (C-LSTM): A sequential deep baseline capturing temporal spend patterns.
- GraphSAGE (GNN): A modern entity-graph approach for collusive rings and mule networks, included to reflect state-of-the-art graph modeling while acknowledging higher inference overhead [22].
6.4. Environment and Metrics
- Area Under the Receiver Operating Characteristic Curve (AUC-ROC): Measures the model’s overall discriminatory power.
- F1 Score: The harmonic mean of precision and recall, providing a balanced measure of accuracy.
- Matthews Correlation Coefficient (MCC): A robust statistical measure that accounts for true and false positives and negatives, providing a balanced score even on imbalanced datasets.
- Median Latency: The median time taken from transaction ingestion to decision output, measured in milliseconds (ms).
- Compliance-Violation Count (CVC): A count of decisions that violated predefined regulatory or business rules. The optimal classification threshold for each model was determined using Youden’s J statistic to maximize the difference between the true positive rate and the false positive rate.
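For reproducibility, the threshold selection and headline metrics can be computed with scikit-learn as sketched below; the labels and scores are synthetic placeholders, and only the use of Youden's J for thresholding follows the text above.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, f1_score, matthews_corrcoef

rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.002, size=100_000)                  # imbalanced labels
y_score = np.clip(rng.normal(0.2 + 0.6 * y_true, 0.2), 0, 1)   # placeholder model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best_threshold = thresholds[np.argmax(tpr - fpr)]   # Youden's J = TPR - FPR
y_pred = (y_score >= best_threshold).astype(int)

print("AUC-ROC:", roc_auc_score(y_true, y_score))
print("F1:     ", f1_score(y_true, y_pred))
print("MCC:    ", matthews_corrcoef(y_true, y_pred))
```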
6.5. Adversarial Test Suite
6.6. Federated Participation and Resilience
7. Empirical Results and Analysis
7.1. Comparative Performance Analysis
7.2. Error Analysis and Model Discrimination
7.3. Ablation Study and Feature Importance
7.4. Latency and Sustainability Profile
7.5. Scalability Under Stress
8. Discussion
8.1. Comparative Analysis with State-of-the-Art
- Advancing CSMA from Static to Adaptive: Previous CSMA implementations are primarily static, enforcing policies based on predefined rules. The GI-CSM transforms this concept by creating a self-optimizing policy fabric. Its use of federated reinforcement learning allows the agent collective to continuously learn from the environment and autonomously adapt its policies in response to emerging threat patterns. This represents a qualitative leap from the manually curated, reactive posture of traditional CSMAs.
- Solving the AI Latency-Interpretability Dilemma: The field of AI-driven fraud detection has long faced a trade-off between the accuracy of complex models and the speed required for real-time decisioning. The GI-CSM’s hybrid agent design—combining a deep, semantic LLM embedding with a fast GBDT classifier—offers a pragmatic and effective solution. As shown in Table 2, this architecture outperforms pure GBDT/LSTM models in accuracy while simultaneously achieving lower latency than centralized deep learning models, effectively resolving this dilemma.
- Active vs. Passive Provenance: The GI-CSM’s blockchain layer provides a fundamentally more robust form of auditability than prior systems. While approaches like MIT’s OpenCBDC passively archive static policies, the GI-CSM implements active, on-chain attestation of the model’s entire lifecycle. By immutably recording every model update, it creates a dynamic and cryptographically verifiable audit trail that is essential for establishing trust in an adaptive AI system.
8.2. Implications for Regulatory Compliance and Ethical AI
- The XAI Explainer subsystem directly addresses the “right to explanation” stipulated in regulations like the EU’s GDPR Article 22, enabling financial institutions to provide clear reasons for automated decisions.
- The federated learning approach, which keeps raw data decentralized, inherently supports compliance with data localization and cross-border transfer laws, such as Brazil’s LGPD and the India DPDP Act 2023.
- The immutable, verifiable records provided by the blockchain trust layer are designed to meet the stringent documentation and transparency requirements that forthcoming regulations, like the EU AI Act, will impose on systems classified as “high-risk,” a category that includes AI-driven fraud detection.
8.3. Limitations and Mitigation
8.4. Deployment Trade-Offs and Integration
8.5. Consortium Interoperability
9. Conclusions and Future Directions
9.1. Summary of Findings
9.2. Technically Grounded Research Roadmap
- Quantum-Accelerated Hyperparameter Optimization: The performance of the RL agents is highly dependent on a large set of hyperparameters (e.g., learning rate, network architecture, PPO coefficients). Tuning these parameters is a complex, high-dimensional optimization problem where classical methods like grid search are computationally infeasible. Future research will explore the use of a quantum annealer, such as the D-Wave Advantage, to address this challenge. This involves reformulating the hyperparameter search as a Quadratic Unconstrained Binary Optimization (QUBO) problem, which is the native input format for quantum annealers. The discrete hyperparameter space will be encoded into binary variables, and the objective function to be minimized by the annealer will be the negative of the agent’s reward function. By leveraging quantum effects like tunneling, this approach can explore the vast solution space more effectively than classical methods, potentially discovering more optimal hyperparameter configurations to further enhance agent performance. Related studies have demonstrated the applicability of quantum annealing techniques to reinforcement learning, neural network training, and policy optimization problems [23,24,25,26]. A minimal sketch of such a QUBO encoding follows this list.
- Production Pilots on Next-Generation Payment Rails: The next phase of research involves deploying the GI-CSM in live pilots on ISO 20022-native payment systems, specifically Request-to-Pay (RTP) rails and Real-Time Gross Settlement (RTGS) back-ends. These environments present unique security challenges, such as Authorized Push Payment (APP) fraud in RTP schemes, where social engineering tricks a victim into sending an instant payment. APP fraud includes various scams like invoice redirection, romance scams, and impersonation scams, where the victim willingly authorizes a payment to a fraudster’s account, making it difficult to detect with traditional methods. The GI-CSM’s ability to perform semantic analysis on the rich, structured data within ISO 20022 RTP messages (e.g., pain.013) provides a powerful tool for detecting contextual anomalies indicative of such fraud. The pilot architecture will involve integrating the Telemetry Ingestion layer with RTP message flows and feeding the Policy Engine’s real-time decisions back to the payment gateway before settlement finality occurs in the RTGS system, requiring a robust and low-latency feedback loop.
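Returning to the quantum-accelerated optimization item above, the sketch below illustrates the intended QUBO reformulation on a toy grid of PPO hyperparameters: each configuration is one-hot encoded into a binary variable, a quadratic penalty enforces that exactly one configuration is selected, and the diagonal carries a placeholder negative-reward estimate. A D-Wave sampler would replace the brute-force minimization used here, and all numeric values are illustrative.

```python
import itertools
import numpy as np

# Toy discrete hyperparameter grid, one configuration per binary variable x_i.
learning_rates = [1e-4, 3e-4, 1e-3]
clip_values = [0.1, 0.2, 0.3]
choices = [(lr, clip) for lr in learning_rates for clip in clip_values]
n = len(choices)

# Placeholder objective: negative estimated agent reward for each configuration.
est_neg_reward = [-np.random.default_rng(i).uniform(0.5, 1.0) for i in range(n)]

PENALTY = 10.0                                   # weight of the one-hot constraint (sum_i x_i - 1)^2
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = est_neg_reward[i] - PENALTY        # objective plus the -P linear penalty term
for i, j in itertools.combinations(range(n), 2):
    Q[i, j] = 2 * PENALTY                        # +2P pairwise term discourages multiple picks

def qubo_energy(x: np.ndarray) -> float:
    return float(x @ Q @ x) + PENALTY            # +P is the constant from expanding the penalty

# Brute-force minimization stands in for the quantum annealer on this toy-sized problem.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)), key=qubo_energy)
print("Selected configuration:", choices[int(np.argmax(best))])
```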
9.3. Long-Term Vision: “Security-as-Liquidity” Network
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| APP | Authorized Push Payment |
| CVC | Compliance-Violation Count |
| DP | Differential Privacy |
| FL | Federated Learning |
| GNN | Graph Neural Network |
| OPA | Open Policy Agent |
| PATE | Private Aggregation of Teacher Ensembles |
| PQC | Post-Quantum Cryptography |
| RACI | Responsible–Accountable–Consulted–Informed |
| SPIFFE | Secure Production Identity Framework For Everyone |
Appendix A
PQC Overhead Benchmarks
References
- ISO 20022-1:2013; Financial Services—Universal Financial Industry Message Scheme. International Organization for Standardization (ISO): Geneva, Switzerland, 2013.
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794.
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014); Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672–2680.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 5998–6008.
- European Parliament and Council of the European Union. Directive (EU) 2015/2366 of 25 November 2015 on payment services in the internal market (PSD2). Off. J. Eur. Union 2015, 337, 35–127.
- Federal Reserve Financial Services. FedNow Service Readiness Guide; FRB Services: Chicago, IL, USA, 2023.
- Gartner. Cybersecurity Mesh Architecture (CSMA); Gartner Research Note; Gartner: Stamford, CT, USA, 2022.
- Hoover, J.; Shoard, P.; Gaehtgens, F. How to Start Building a Cybersecurity Mesh Architecture; Gartner: Stamford, CT, USA, 2022.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
- The Nilson Report. Global Card Fraud Projections; Issue #1234; The Nilson Report: Carpinteria, CA, USA, 2025.
- Zhang, Y.; Li, P.; Jin, X. Explainable AI for Cybersecurity: A Taxonomy, Survey, and New Perspectives. ACM Comput. Surv. 2023, 55, 1–38.
- Jiang, A.Q.; Sablayrolles, A.; Mensch, A.; Bamford, C.; Chaplot, D.S.; Casas, D.d.l.; Bressand, F.; Lengyel, G.; Lample, G.; Saulnier, L.; et al. Mistral 7B. arXiv 2023, arXiv:2310.06825.
- Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021, arXiv:2106.09685.
- Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347.
- National Institute of Standards and Technology (NIST). FIPS PUB 204: Module-Lattice-Based Digital Signature Standard; NIST: Gaithersburg, MD, USA, 2024.
- Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schwabe, P.; Seiler, G.; Stehlé, D. CRYSTALS-Dilithium: A Lattice-Based Digital Signature Scheme. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2018, 2018, 238–268.
- Bai, S.; Galbraith, S.D. Lattice-based signatures and bimodal Gaussians. In Proceedings of the 13th International Conference on Cryptology and Network Security (CANS 2014); Springer: Cham, Switzerland, 2014; pp. 34–52.
- Lyubashevsky, V. Lattice signatures without trapdoors. In Proceedings of the 31st Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT 2012); Springer: Berlin/Heidelberg, Germany, 2012; pp. 738–755.
- ISO/TS 12812-3:2017; Core Banking—Mobile Financial Services—Part 3: Financial Application Lifecycle Management. ISO: Geneva, Switzerland, 2017.
- PCI Security Standards Council (PCI SSC). Payment Card Industry Data Security Standard (PCI DSS), Version 4.0; PCI SSC: Wakefield, MA, USA, 2022.
- Hamilton, W.; Ying, Z.; Leskovec, J. Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 1024–1034.
- Crawford, S.W.; Valluri, S.R. A multi-agent reinforcement learning approach to grid-world traversal using quantum annealing. Quantum Inf. Process. 2020, 19, 227.
- D-Wave Systems. D-Wave Advantage System Documentation; D-Wave Systems: Burnaby, BC, Canada, 2023.
- Adachi, S.H.; Henderson, M.P. Application of quantum annealing to training of deep neural networks. arXiv 2015, arXiv:1510.06356.
- Jerbi, S.; Gyurik, C.; Marshall, S.C.; Briegel, H.J.; Dunjko, V. Parametrized quantum policies for reinforcement learning. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021); Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2021; pp. 28362–28375.

| Component | Parameter | Value |
|---|---|---|
| Distilled LLM (Mistral-7B-Pay) | Base Model | Mistral-7B |
| | Distillation Method | LoRA |
| | LoRA Rank (r) | 8 |
| | LoRA Alpha (α) | 16 |
| | Corpora Size | 18 GB |
| CatBoost Classifier | Input Features | 64 (engineered) + 256 (embedding) |
| | Max Depth | 9 |
| | Learning Rate (η) | 0.05 |
| PPO Algorithm | Network Architecture | MLP (256-128-64) |
| | PPO Clip (ϵ) | 0.2 |
| | Value Loss Coefficient | 0.5 |
| | Entropy Bonus | 0.01 |
| | Minibatch Size | 4000 |
| | Learning Rate | 3 × 10⁻⁴ |
| Federated Learning | Aggregation Frequency | 4 h |
| | DP Noise (σ) | 1.3 |
| | Privacy Budget (ϵ, δ) | (1.5, 10⁻⁵) |

| Metric | SRE | XGB | C-LSTM | GNN (GraphSAGE) | GenAI Mesh |
|---|---|---|---|---|---|
| AUC-ROC | 0.79 | 0.86 | 0.88 | 0.92 | 0.97 |
| F1 Score | 0.56 | 0.71 | 0.75 | 0.83 | 0.92 |
| MCC | 0.41 | 0.58 | 0.61 | 0.74 | 0.88 |
| Median Latency (ms) | 180 | 320 | 410 | 355 | 270 |

| | Predicted Fraud | Predicted Legit | Total |
|---|---|---|---|
| Actual Fraud | 12,211 (TP) | 632 (FN) | 12,843 |
| Actual Legit | 3014 (FP) | 5,892,143 (TN) | 5,895,157 |
| Total | 15,225 | 5,892,775 | 5,908,000 |

| Metric | Value |
|---|---|
| True Positive Rate (Recall) | 95.1% |
| False Positive Rate | 0.05% |
| Precision | 80.3% |

| Dimension | GI-CSM | Centralised AI (C-LSTM) | Static CSMA (e.g., IBM) | Traditional SIEM/Rules |
|---|---|---|---|---|
| Adaptability | Real-time, autonomous learning | Batch retraining, slow | Static, manual updates | Static, brittle rules |
| Auditability | Dynamic, on-chain, verifiable | Centralized logs, mutable | Policy logs, limited | Disparate, mutable logs |
| Explainability | High (SHAP/LIME on features) | Low (black-box) | Medium (rule-based) | High (simple rules) |
| Scalability | Decentralized, federated | Monolithic, bottleneck-prone | Distributed, but static | Centralized, limited |
| Privacy | High (Federated Learning) | Low (data centralization) | N/A | Low (data centralization) |
| Future-Proofing | PQC-ready by design | Legacy cryptography | Legacy cryptography | Legacy cryptography |