Next Article in Journal
A Multiscale Convolutional Neural Network Framework for Automated Segmentation and Pattern Mapping of Psoriatic Lesions
Previous Article in Journal
Self-Supervised Learning for Complex Pattern Interpretation in Vitiligo Skin Imaging
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Proceeding Paper

AI-Powered Cybersecurity Mesh for Financial Transactions: A Generative-Intelligence Paradigm for Payment Security †

by
Utham Kumar Anugula Sethupathy
1,* and
Vijayanand Ananthanarayan
2
1
Independent Researcher, Atlanta, GA 30040, USA
2
Independent Researcher, Atlanta, GA 30040, USA
*
Author to whom correspondence should be addressed.
Presented at the First International Conference on Computational Intelligence and Soft Computing (CISCom 2025), Melaka, Malaysia, 26–27 November 2025.
Comput. Sci. Math. Forum 2025, 12(1), 10; https://doi.org/10.3390/cmsf2025012010
Published: 19 December 2025

Abstract

The rapid expansion of digital payment channels has significantly widened the financial transaction attack surface, exposing ecosystems to sophisticated, polymorphic threat vectors. This study introduces an AI-powered cybersecurity mesh that unites Generative AI (GenAI), federated reinforcement learning, and zero-trust principles, with a forward-looking architecture designed for post-quantum readiness. The architecture ingests high-velocity telemetry, coordinates self-evolving agent collectives, and anchors model provenance in a permissioned blockchain to guarantee verifiability and non-repudiation. Empirical evaluations across two production-scale environments—a mobile wallet processing two million transactions per day and a high-throughput cross-border remittance rail—demonstrate a 95.1% threat-detection rate, a 62% reduction in false positives, and a 35.7% latency decrease compared to baseline systems. These results affirm the feasibility of a generative cybersecurity mesh as a scalable, future-proofed blueprint for next-generation payment security.

1. Introduction

1.1. Context: The Dual-Edged Sword of Modern Payments

The global financial landscape is undergoing a profound transformation, driven by the relentless growth of digital commerce, which is projected to surpass USD 11 trillion by 2026. This expansion is largely fueled by the proliferation of mobile wallets and instant payment systems. Concurrently, this digital shift has created a fertile ground for financial crime, with annual fraud losses now exceeding USD 40 billion globally. The industry’s response has included a migration toward richer data standards, most notably ISO 20022 [1], a global standard for financial messaging that promises to enhance interoperability and transparency across payment systems. This standard is not merely a technical update but a fundamental shift intended to create a universal language for financial transactions worldwide, covering everything from payments and securities to trade services and foreign exchange. Machine learning and deep learning techniques—including gradient-boosting models, deep neural networks, and transformer-based architectures—have increasingly become the foundation of modern payment fraud detection and risk-scoring systems [2,3,4,5].

1.2. The ISO 20022 Paradox

The adoption of ISO 20022 presents a fundamental paradox. On one hand, its structured, data-rich messages (e.g., pacs.008) provide unprecedented detail, enabling better automation, more accurate compliance checks, and improved fraud prevention measures. The ability to carry extensive, structured information—such as unambiguous payer/payee identification, invoice references, and purpose codes—is designed to reduce manual interventions and enhance fraud detection. On the other hand, this very complexity introduces new challenges and attack vectors. The increased data volume strains legacy systems, while the nuanced data fields can be exploited by sophisticated adversaries to conceal fraudulent activities, making detection more difficult for traditional systems. This creates a situation where the richer data, intended to improve security, can initially lead to more false positives as security systems struggle to adapt to the new volume and complexity of information. This paradox—where richer data simultaneously enables superior security and novel risks—underscores the urgent need for a more intelligent and adaptive security paradigm. The architecture of the proposed Generative-Intelligence Cybersecurity Mesh (GI-CSM) is purpose-built to address these challenges. Its high-throughput ingestion pipeline, leveraging Apache Kafka and Flink, is designed to handle the high-velocity, complex data streams characteristic of ISO 20022 environments, while its GenAI agents are engineered to comprehend the semantic context within this rich data, turning the standard’s complexity from a liability into a powerful asset for fraud detection.

1.3. The Inadequacy of Monolithic Security

Traditional security postures, characterized by centralized Security Information and Event Management (SIEM) systems and static, brittle rule engines, are architecturally ill-suited for the decentralized, high-velocity nature of modern payment ecosystems. These monolithic approaches struggle to adapt to the real-time, adversarial exploits that define the current threat landscape. Their deterministic nature means they are inflexible and cannot adapt to new or unforeseen situations. As the number of rules grows, managing them becomes exponentially complex, leading to performance degradation and potential conflicts that hinder scalability. This complexity often results in “alert fatigue,” where security personnel are overwhelmed by a high volume of alerts, many of which are false positives, potentially causing real threats to be overlooked. Furthermore, these systems often operate in silos, lacking the ability to correlate data across different security tools, which creates blind spots that attackers can exploit. Incremental regulatory mandates, such as the Second Payment Services Directive’s Strong Customer Authentication (PSD2 SCA) [6], and the ISO 20022-aligned FedNow® instant-payment framework [7], while strengthening compliance, also increase architectural complexity, further diminishing the effectiveness of legacy security models.

1.4. Thesis: A Generative-Intelligence Cybersecurity Mesh (GI-CSM)

To overcome these limitations, this paper proposes and evaluates a Generative-Intelligence Cybersecurity Mesh (GI-CSM). This work operationalizes a cybersecurity mesh architecture (CSMA) that moves beyond the static, rule-centric models of the past by integrating a fabric of cognitive Generative AI agents, a rigorous federated reinforcement learning framework for continuous optimization, and a blockchain-anchored trust layer for immutable provenance. The resulting system yields an autonomous, verifiable, and continuously improving defense layer, representing not just an incremental improvement but a necessary architectural evolution for securing next-generation payment rails.

1.5. Summary of Contributions

This research makes the following primary contributions to the field:
  • The design and implementation of a novel microservice architecture that integrates GenAI agents, a formal reinforcement-learning framework, and privacy-preserving federated-learning protocols, all secured by a blockchain-based trust fabric.
  • Empirical validation through a thirty-day, dual-site pilot on production-scale datasets, evidencing substantial improvements in threat detection accuracy, decision latency, and operational efficiency over established baseline systems.

1.6. Structure of the Paper

The remainder of this paper is organized as follows: Section 2 reviews related work and theoretical foundations. Section 3 provides a detailed exposition of the proposed GI-CSM architecture. Section 4 elaborates on the cognitive design of the GenAI agents and the federated learning mechanics. Section 5 explains the blockchain-integrated provenance model and its extension toward post-quantum readiness. Section 6 describes the experimental methodology. Section 7 presents the quantitative results and analysis. Section 8 discusses the broader implications of the findings, including a comparative analysis and regulatory considerations. Finally, Section 9 concludes the paper and outlines future research directions.

2. Related Work and Theoretical Foundations

2.1. Cybersecurity Mesh Architectures (CSMA)

The concept of a cybersecurity mesh architecture (CSMA) was introduced by Gartner as a strategic approach to security in an increasingly distributed and perimeter-less world [8]. CSMA advocates for a composable and scalable security model where policy enforcement is decoupled from a central point and moved closer to the assets being protected. This decentralized approach replaces the traditional, monolithic security perimeter with a more flexible and adaptive framework where security controls are distributed and operate at a functional level. The goal is to create an integrated ecosystem of security tools rather than a collection of siloed solutions, thereby reducing security gaps and improving threat detection. Early implementations, such as IBM’s Zero-Trust Mesh, have advanced this concept by incorporating zero-trust principles and proxy-based mediation for access control [9]. However, these frameworks largely remain rule-centric, relying on predefined policies that lack the ability to adapt dynamically to novel or evolving threats. This work advances the state of the art by replacing static rule engines with a fabric of cognitive, autonomous agents capable of real-time contextual inference and adaptive policy generation.

2.2. AI-Based Payment-Fraud Mitigation

The application of AI to payment fraud detection has evolved significantly. Initial approaches relied on traditional machine learning techniques, including unsupervised clustering algorithms to identify outliers, and supervised models like gradient-boosting ensembles (e.g., XGBoost) to classify transactions based on engineered features. More recent studies have explored deep learning models, such as Graph Neural Networks (GNNs) for detecting collusive fraud patterns and Long Short-Term Memory (LSTM) networks for analyzing sequential transaction data. Training stability and convergence of such deep learning architectures are typically achieved using adaptive stochastic optimization methods such as Adam, which has become a standard optimizer in large-scale neural fraud-detection systems [10]. The latest frontier involves the use of Large Language Models (LLMs) fine-tuned on payment-specific ontologies. These models have demonstrated powerful capabilities in semantic anomaly detection by understanding the context and narrative of a transaction. However, their practical application has been hindered by significant latency trade-offs, making them unsuitable for many real-time payment scenarios. The GI-CSM framework addresses this challenge by proposing a hybrid approach that harmonizes fast, lightweight anomaly scorers with the deep contextual embeddings from a distilled LLM, thereby balancing detection speed with semantic interpretability.

2.3. Blockchain for AI Auditability and Provenance

Permissioned distributed ledger technologies, such as Hyperledger Fabric, have been recognized for their ability to provide an immutable and transparent record for governance and compliance in regulated industries. In the context of AI, projects like MIT’s OpenCBDC have explored using blockchains to archive system policies on-chain, ensuring a verifiable history of governance rules. A key limitation of such approaches, however, is their passive nature; they record static policies but lack a mechanism for providing oversight of an adaptive AI system whose behavior evolves over time. The GI-CSM introduces a more dynamic and integrated approach by embedding cryptographic checkpoints directly into the AI agent’s lifecycle. Each time an agent’s model is updated through federated learning, a hash of the new model state is recorded on the blockchain. This creates a cryptographically non-repudiable, time-stamped audit trail of the model’s evolution, achieving a level of active, verifiable oversight that is essential for trustworthy adaptive AI. This need for verifiable oversight is reinforced by industry analyses showing sustained growth in global card fraud losses despite increasing automation, underscoring the importance of accountable AI governance in payment systems [11].

3. The Generative-Intelligence Cybersecurity Mesh Architecture (GI-CSM)

The proposed GI-CSM is a multi-layered, microservice-based system designed for scalability, resilience, and real-time decision-making. As illustrated in Figure 1, the architecture comprises five interlocking layers that communicate through a secure service mesh, exchanging telemetry, policy updates, and trust assertions.

3.1. Architectural Layers

  • Telemetry Ingestion Layer: This layer serves as the data gateway for the entire system. It utilizes Apache Kafka to ingest high-velocity data streams, including ISO 20022 payment payloads, device fingerprints, and behavioral biometrics, at a rate exceeding 25,000 events per second. Apache Flink is employed for stream processing, assembling raw events into 300-millisecond micro-batches for efficient feature derivation downstream.
  • Threat-Intelligence Gateway: This layer enriches incoming data with external context. It consumes curated threat intelligence feeds from sources like the Financial Services Information Sharing and Analysis Center (FS-ISAC) and PhishLabs. An LLM performs named entity recognition on these feeds to extract indicators of compromise (IoCs), which are then converged with adaptive IP and domain blocklists to create a real-time threat landscape view.
  • GenAI Agent Fabric: This is the cognitive core of the architecture. It consists of a distributed fabric of lightweight, containerized agents deployed on edge gateways and Kubernetes services. Each agent, with a memory footprint of approximately 200 MB, embeds a distilled 7-billion-parameter LLM and a Gradient-Boosted Decision Tree (GBDT) classifier, enabling sub-100 millisecond inference times crucial for real-time payment processing.
  • Federated-Learning Orchestrator: This layer manages the continuous, privacy-preserving training of the agent collective. It uses the PySyft v0.8 framework to coordinate model aggregation rounds every four hours. To comply with differential privacy principles, it employs additive secret sharing, ensuring that individual transaction data never leaves its local environment and that the central server only receives aggregated, anonymized model updates.
  • Blockchain Trust Layer: This layer provides the system’s foundation of trust and auditability. It is implemented as a permissioned Hyperledger Fabric cluster operating with RAFT consensus. This distributed ledger immutably records agent identities and SHA-256 digests of their policy model checkpoints, creating a verifiable lineage for every decision and model update, which is critical for regulatory scrutiny.

3.2. Data Flow and Secure Communication

All inter-layer communication occurs across a gRPC service mesh, which is secured using mutual TLS (mTLS) to encrypt traffic and SPIFFE-issued identities to provide strong, verifiable workload identities, enforcing a zero-trust communication model.
Telemetry Enricher side-car container normalizes incoming data, such as ISO 20022 pacs.008 (Financial Institution To Financial Institution Customer Credit Transfer) and pain.001 (Customer Credit Transfer Initiation) messages, into standardized Protobuf descriptors. This process embeds a 128-bit request identifier that persists through the entire pipeline for end-to-end traceability.
The Policy Engine utilizes Open Policy Agent (OPA) to evaluate declarative policies written in the Rego language. These policies are dynamically generated by the GenAI agents based on their learned understanding of fraud patterns. Enforcement decisions (e.g., approve, deny) are then propagated via Redis streams to downstream micro-gateways that act as policy enforcement points.

3.3. Security-by-Design Principles

The architecture incorporates several principles to ensure robustness and security:
  • Resiliency: To prevent single points of failure, Kafka partitions are replicated across multiple availability zones (Replication Factor = 3), and the Hyperledger Fabric cluster’s Raft consensus mechanism uses leader election to tolerate regional failures.
  • Explainability: An XAI Explainer module provides regulatory-compliant transparency. It uses techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to generate visual, feature-importance heatmaps for any given decision. These explanations can be surfaced in analyst dashboards or issuer dispute portals, satisfying demands for model clarity. Such explainability mechanisms align with established taxonomies and best practices for explainable AI in cybersecurity and high-risk decision systems [12].
  • Confidentiality: A multi-layered confidential computing strategy protects data and models at rest, in transit, and in use. HashiCorp Vault manages a robust key management service, performing envelope encryption with Data Encryption Keys (DEKs) rotated every 24 h. Inference is executed within gVisor sandboxes to isolate processes, model artifacts are encrypted with AES-256-GCM, and sensitive runtime secrets are secured within hardware-backed Intel SGX enclaves on edge hosts.

4. GenAI Agent Cognition and Federated Learning

The intelligence of the GI-CSM resides within its autonomous agents, which are designed to perceive, reason, and act upon the financial transaction environment. Their cognitive abilities are derived from a hybrid AI model and are continuously refined through a federated reinforcement learning framework.

4.1. Hybrid Model Design

Each agent employs a two-stage inference process designed to balance deep contextual understanding with low-latency performance.
  • Semantic Embedding Generation: A distilled LLM, named Mistral-7B-Pay [13], first processes the raw transaction data. This model was created by fine-tuning a base 7-billion-parameter model on an 18 GB domain-specific corpus comprising PCI DSS reports, anonymized fraud dispute narratives, chargeback memos, and merchant descriptors. Knowledge distillation was performed using Low-Rank Adaptation (LoRA) [14] adapters (rank = 8, alpha = 16), which produced a model that is 2.1 times faster in inference with a negligible drop in performance (BLEU score reduction of −0.8). This LLM outputs a dense 256-dimensional vector embedding that captures the semantic risk profile of the transaction. The architectural foundations of such language models build upon transformer-based attention mechanisms and representation learning advances.
  • Fraud Probability Classification: A highly optimized CatBoost classifier then acts as a fast front-runner. It takes as input a concatenated vector containing 64 traditionally engineered features (e.g., transaction velocity, amount deviation) and the 256-dimensional semantic embedding from the LLM. This GBDT model outputs a final, pre-threshold fraud probability, enabling rapid decision-making.

4.2. Federated Reinforcement Learning Formalism

The agents learn and adapt their decision-making policies through a federated reinforcement learning (RL) framework, formalized as a Markov Decision Process (MDP):
  • State Space (S): The state is represented by a 320-dimensional vector, which is the concatenation of the 64 engineered features and the 256-dimensional LLM embedding. This rich state representation provides the agent with both syntactic and semantic information about the transaction.
  • Action Space (A): The agent can choose from a discrete set of four actions: A = {approve, step–up, deny, hold}. ‘Step-up’ initiates a request for stronger authentication, while ‘hold’ flags the transaction for manual review.
  • Reward Function (R): The agent’s learning is guided by a carefully designed reward function that aligns its objectives with business goals. The function is defined as
    R = 2·TP − 1·FP − 0.01·latency − 0.5·friction
    where TP is a true positive, FP is a false positive, latency is the decision time in milliseconds, and friction represents the negative customer impact of actions like step-up or deny. This function explicitly incentivizes maximizing correct detections while penalizing false positives, decision latency, and negative customer experiences.

4.3. Optimization and Training Strategy

To optimize the agent’s policy, this work employs the Proximal Policy Optimization (PPO-Clip) algorithm, which is well-suited for stable training in non-stationary environments like evolving fraud patterns. PPO has become a widely adopted policy-gradient method due to its favorable balance between training stability and sample efficiency in complex, high-dimensional environments [15]. The policy (π) and value (ψ) networks are implemented as three-layer Multi-Layer Perceptrons (MLPs) with a 256-128-64 architecture. Key PPO hyperparameters include a clipped surrogate objective with ϵ = 0.2, a value loss coefficient of 0.5, and an entropy bonus of 0.01 to encourage exploration. During federated learning, agents transmit their computed gradients every 20,000 steps. The resulting payload is only 96 kB, well below the 128 kB gRPC message size limit, ensuring efficient communication.
To facilitate reproducibility, the core hyperparameters for the models and training process are summarized in Table 1.

4.4. Privacy and Integrity

Privacy is maintained through the application of differential privacy during the federated learning process. Gaussian noise with a standard deviation of σ = 1.3 is added to the gradients before transmission. Over the 30-day pilot, this mechanism provided an empirical privacy budget of ϵ = 1.42, which satisfies stringent regulatory requirements for data protection. Furthermore, deterministic guardrails implemented in a domain-specific policy language override any RL-generated policy that violates fundamental business invariants (e.g., mandatory denial of transactions from sanctioned merchant category codes), preventing catastrophic policy drift and ensuring baseline compliance.

4.5. Privacy-Amplified Aggregation (PATE-Style)

To harden against model inversion, we sketch a teacher–student workflow: local teachers vote on pseudo-labeled samples; a noisy aggregator (σ calibrated to maintain utility) produces student updates; only student parameters propagate globally. Integration point: the FL coordinator orchestrates teacher votes per round; governance defines quorum and abstention rules.

5. Blockchain-Anchored Provenance and Post-Quantum Readiness

5.1. Smart Contract Design for Immutable Attestation

The blockchain trust layer provides a tamper-proof, auditable record of the AI system’s behavior, a critical requirement for operating in regulated financial environments. This is achieved through two primary smart contracts deployed on the Hyperledger Fabric ledger:
  • AgentRegistry: This contract manages the lifecycle of each GenAI agent. It records a unique identifier for each agent, its public key, and its operational status, providing a foundational layer of identity attestation.
  • ModelLedger: This contract is responsible for creating an immutable record of the system’s intellectual state. Following each successful federated learning round, the orchestrator invokes a PutCheckpoint transaction. This transaction stores a record containing the agentID of the participating agents, a SHA-256 hash of the newly aggregated model, a timestamp, and the roundID. This creates an unbroken, verifiable chain linking every version of the AI model to a specific identity and point in time, delivering the traceability required to comply with audit standards like SOC 2 and GDPR Article 30. The ledger’s data footprint is manageable, growing at an average of 1.2 MB per day, and the RAFT consensus mechanism finalizes blocks in under 800 ms, satisfying the sub-second decisioning requirements of the payment pipeline. For PQC overheads and signature-size implications on commit latency and storage, see Appendix A.

5.2. The Impending Threat of Quantum Computing

While the current cryptographic primitives (SHA-256, ECDSA) are secure against classical computers, they are vulnerable to attacks from large-scale quantum computers. Algorithms like Shor’s algorithm can efficiently break the public-key cryptography that underpins modern digital finance, including the security of the GI-CSM’s blockchain and communication channels. A security architecture designed today without a clear and viable path to post-quantum cryptography (PQC) risks systemic obsolescence and catastrophic failure within the next decade. Therefore, designing for “cryptographic agility” and proactive, future-proofed security is not a theoretical exercise but a core architectural principle of the GI-CSM.

5.3. Architectural Integration of Post-Quantum Signatures

To address the quantum threat, the GI-CSM architecture is designed for a phased integration of post-quantum cryptographic algorithms.
  • Chosen Algorithm: CRYSTALS-Dilithium: The selected algorithm for upgrading digital signatures is CRYSTALS-Dilithium, which has been standardized by the U.S. National Institute of Standards and Technology (NIST) as the primary PQC signature scheme (ML-DSA, FIPS 204) [16]. Dilithium is a lattice-based algorithm designed to be secure against attacks from both classical and quantum computers [17], offering a strong foundation for next-generation digital trust. Its security is based on the hardness of lattice problems over module lattices, and it is designed to be strongly secure under chosen message attacks. The cryptographic foundations and security properties of lattice-based signature schemes have been rigorously analyzed in prior research, establishing their suitability for long-term post-quantum security [18,19].
  • Integration Points within GI-CSM:
    • Blockchain Trust Layer: The ModelLedger smart contract will be upgraded to support Dilithium. The PutCheckpoint transaction will require the model hash to be signed with an agent’s Dilithium private key. The smart contract will, in turn, use the corresponding public key from the AgentRegistry to verify this PQC signature before committing the transaction to the ledger. This ensures that the on-chain provenance of the AI models remains secure and non-repudiable in a post-quantum world.
    • Inter-Agent Communication: The gRPC service mesh’s security will be hardened using a hybrid cryptographic approach. The mTLS handshakes will be augmented to use a PQC Key Encapsulation Mechanism (KEM), such as the NIST-standardized CRYSTALS-Kyber, alongside a classical key exchange. This ensures that session keys are protected against “harvest now, decrypt later” attacks. Furthermore, the identities issued to workloads by SPIFFE will be backed by Dilithium key pairs, ensuring that authentication is quantum-resistant.
  • Performance and Security Trade-offs: The transition to PQC involves trade-offs. Dilithium signatures and public keys are significantly larger than their ECC counterparts, which can impact network latency and increase the storage footprint of the blockchain. For example, a Dilithium3 public key is 1952 bytes, and its signature is 3293 bytes, compared to the much smaller sizes of ECC. To mitigate this, the implementation will leverage the most efficient standardized parameter sets (e.g., Dilithium2) and may apply PQC selectively to the most critical operations, balancing security with performance during the transition period.

6. Experimental Methodology

To validate the performance and efficacy of the proposed GI-CSM, a rigorous experimental evaluation was conducted using production-scale datasets and realistic adversarial simulations.

6.1. Datasets

The evaluation was performed on two distinct, anonymized datasets representing different payment ecosystems:
  • MW-2M: A dataset from a mobile wallet provider, containing 61 million transactions processed over a 30-day period. The organic fraud prevalence in this dataset was 0.21%.
  • CB-150k: A dataset from a cross-border remittance platform, containing 4.5 million transactions over the same period, with a higher fraud prevalence of 0.35%. All records were anonymized in accordance with ISO/TS 12812-3:2017 (Core banking—Mobile financial services—Part 3: Financial application lifecycle management) [20], and all payment card numbers were tokenized as per PCI DSS v4.0 [21] to ensure data privacy.

6.2. Adversarial Traffic Simulation

To test the system’s resilience against targeted attacks, synthetic adversarial traffic was injected using an open-source tool named “FraudLab-Simulator.” This tool emulated common attack patterns, including orchestrated botnet activity, credential-stuffing attacks, and card enumeration attempts, generating burst traffic at rates of up to 500 transactions per second (TPS).

6.3. Baseline Systems

The performance of the GI-CSM was compared against three widely used baseline systems:
  • Static-Rule Engine (SRE): Representative of incumbent detection stacks curated by risk analysts.
  • XGBoost + Sliding Window (XGB): A strong tabular baseline widely deployed in production for low-latency scoring.
  • Centralised LSTM (C-LSTM): A sequential deep baseline capturing temporal spend patterns.
  • GraphSAGE (GNN): A modern entity-graph approach for collusive rings and mule networks, included to reflect state-of-the-art graph modeling while acknowledging higher inference overhead [22].

6.4. Environment and Metrics

The GI-CSM and baseline models were deployed on a Kubernetes v1.29 cluster composed of 24 nodes, each equipped with an AMD EPYC 7402P CPU and 128 GB of RAM. Model fine-tuning was accelerated using four NVIDIA A100 80 GB GPUs, while the federated learning components executed on CPU nodes.
Performance was evaluated using a comprehensive set of metrics:
  • Area Under the Receiver Operating Characteristic Curve (AUC-ROC): Measures the model’s overall discriminatory power.
  • F1 Score: The harmonic mean of precision and recall, providing a balanced measure of accuracy.
  • Matthews Correlation Coefficient (MCC): A robust statistical measure that accounts for true and false positives and negatives, providing a balanced score even on imbalanced datasets.
  • Median Latency: The median time taken from transaction ingestion to decision output, measured in milliseconds (ms).
  • Compliance-Violation Count (CVC): A count of decisions that violated predefined regulatory or business rules. The optimal classification threshold for each model was determined using Youden’s J statistic to maximize the difference between the true positive rate and the false positive rate.

6.5. Adversarial Test Suite

We evaluated robustness using FraudLab-Simulator to inject targeted attacks that mirror production patterns: (i) credential-stuffing and enumeration bursts (up to 500 TPS), (ii) synthetic-identity life-cycle (account opening → warm-up → cash-out), (iii) money-mule ring topologies, and (iv) Authorised Push Payment (APP) fraud scenarios. Under these settings, GenAI Mesh maintained AUC-ROC = 0.95 (−0.02 from nominal) and MCC = 0.84 (−0.04), with guardrails preventing policy drift during sustained attack pressure. Detailed outcomes are cross-referenced in §7.3.

6.6. Federated Participation and Resilience

We tolerate partial participation per round via staleness-aware FedAvg: client updates older than two rounds are down-weighted; stragglers are buffered for the next aggregation; dropouts trigger a confidence-decay on their local policies. During induced WAN loss (2% packet loss, 120 ms RTT), convergence slowed by 6.3% but detection metrics remained within 1 pp of nominal.

7. Empirical Results and Analysis

The 30-day pilot yielded substantial quantitative evidence supporting the superiority of the GI-CSM framework across all key performance indicators.

7.1. Comparative Performance Analysis

The aggregate performance of the GI-CSM against the three baseline systems is summarized in Table 2. The GenAI Mesh consistently outperforms all baselines across discriminatory metrics while reducing decision latency. Relative to the strongest comparator, the GNN, GenAI Mesh adds +0.05 AUC-ROC and lowers median latency by 24%, indicating superior class separability without sacrificing real-time responsiveness. Compared with C-LSTM, the nine-point AUC-ROC uplift and 35% latency reduction reinforce suitability for online authorisation workflows.

7.2. Error Analysis and Model Discrimination

A detailed error analysis on the MW-2M dataset provides further insight into the model’s performance. Table 3 presents the confusion matrix and derived class-specific metrics for the GI-CSM. The model correctly identified 12,211 out of 12,843 fraudulent transactions, achieving a True Positive Rate (TPR), or recall, of 95.1%. Simultaneously, it maintained a very low False Positive Rate (FPR) of just 0.05%, correctly classifying over 5.89 million legitimate transactions. This high precision (80.3%) is critical for minimizing customer friction and reducing the operational burden of investigating false alarms. The ROC analysis confirmed that the GI-CSM’s AUC boost of 0.09 over the C-LSTM translates to a 21% reduction in false negatives at a standard 0.5 classification threshold.

7.3. Ablation Study and Feature Importance

Removing the blockchain trust layer lowered MCC by six percentage points under model-poisoning attempts, underscoring its necessity. Excluding LLM embeddings reduced AUC-ROC by four points; disabling deterministic guardrails increased false positives by 40%. Under the §6.5 adversarial suite, GenAI Mesh retained MCC ≥ 0.84 with active guardrails.
Analysis of the model’s decisions using SHAP revealed that the top three most influential features were the merchant category code, a derived device trust score, and the semantic risk score from the LLM embedding. Combined, these three features accounted for 61% of the model’s explainability, providing the clear, feature-level rationale demanded by regulators for high-risk AI systems.

7.4. Latency and Sustainability Profile

Analysis of the system’s performance under load showed that 97% of all transaction decisions were completed within 300 ms, comfortably staying below the soft-decline time budgets mandated by regulations like PSD2. In terms of energy efficiency, the GI-CSM demonstrated a 17% reduction in energy consumption per 1000 transactions compared to the SRE baseline. This counterintuitive result is because the deterministic rules of the SRE frequently trigger redundant and computationally expensive secondary checks, whereas the AI-driven approach is more targeted and efficient. Edge inference consumes 14 W per agent; FL adds ~3.2 kg CO2e per month. Quantization and sparsity are expected to halve this footprint.

7.5. Scalability Under Stress

End-to-end throughput saturated at 28 k tx·s−1 on a 16-agent edge pool; P99 latency remained < 350 ms. Ledger commit time averaged 740 ms (RAFT), with orderer saturation visible beyond 2.3 k commits·min−1; queue-depth alarms and horizontal orderer scale prevented backlog. Storage growth with PQC signatures is estimated at +0.4 GB·month−1 for 1000 agents (Appendix A).

8. Discussion

The empirical results confirm the GI-CSM’s technical viability, but its true contribution lies in the architectural paradigm it represents. This section contextualizes these findings through a comparative analysis with existing approaches and explores the broader implications for the financial industry.

8.1. Comparative Analysis with State-of-the-Art

The GI-CSM introduces several qualitative advancements over existing security architectures, as summarized in Table 4.
  • Advancing CSMA from Static to Adaptive: Previous CSMA implementations are primarily static, enforcing policies based on predefined rules. The GI-CSM transforms this concept by creating a self-optimizing policy fabric. Its use of federated reinforcement learning allows the agent collective to continuously learn from the environment and autonomously adapt its policies in response to emerging threat patterns. This represents a qualitative leap from the manually curated, reactive posture of traditional CSMAs.
  • Solving the AI Latency-Interpretability Dilemma: The field of AI-driven fraud detection has long faced a trade-off between the accuracy of complex models and the speed required for real-time decisioning. The GI-CSM’s hybrid agent design—combining a deep, semantic LLM embedding with a fast GBDT classifier—offers a pragmatic and effective solution. As shown in Table 2, this architecture outperforms pure GBDT/LSTM models in accuracy while simultaneously achieving lower latency than centralized deep learning models, effectively resolving this dilemma.
  • Active vs. Passive Provenance: The GI-CSM’s blockchain layer provides a fundamentally more robust form of auditability than prior systems. While approaches like MIT’s OpenCBDC passively archive static policies, the GI-CSM implements active, on-chain attestation of the model’s entire lifecycle. By immutably recording every model update, it creates a dynamic and cryptographically verifiable audit trail that is essential for establishing trust in an adaptive AI system.

8.2. Implications for Regulatory Compliance and Ethical AI

The architecture of the GI-CSM is intentionally designed to align with the evolving global regulatory landscape for AI and data privacy.
  • The XAI Explainer subsystem directly addresses the “right to explanation” stipulated in regulations like the EU’s GDPR Article 22, enabling financial institutions to provide clear reasons for automated decisions.
  • The federated learning approach, which keeps raw data decentralized, inherently supports compliance with data localization and cross-border transfer laws, such as Brazil’s LGPD and the India DPDP Act 2023.
  • The immutable, verifiable records provided by the blockchain trust layer are designed to meet the stringent documentation and transparency requirements that forthcoming regulations, like the EU AI Act, will impose on systems classified as “high-risk,” a category that includes AI-driven fraud detection.
While federated learning significantly enhances privacy by avoiding data centralization, it is not a panacea. The risk of model inversion attacks, where an adversary attempts to reconstruct training data from model updates, remains an area of active research. Future iterations of this work will explore advanced privacy-enhancing technologies, such as Private Aggregation of Teacher Ensembles (PATE), to provide stronger mathematical guarantees against such attacks.

8.3. Limitations and Mitigation

This study has several limitations. The evaluation relied on a combination of organic and simulated adversarial traffic; testing against a live, red-teamed environment would provide a more definitive assessment of its resilience. Additionally, the computational overhead of the blockchain layer, while manageable, represents a cost that must be considered. Future work will focus on optimizing the ledger’s performance, potentially through layer-2 solutions or more efficient consensus mechanisms, and on conducting live production pilots to validate these findings in a real-world operational context.

8.4. Deployment Trade-Offs and Integration

Organizational readiness: success correlates with a clear RACI, platform SRE ownership of guardrails, and analyst training on XAI artifacts. Concretely, we recommend a phased rollout: (i) limited-blast pilots on ISO 20022 RTP flows, (ii) guardrail-first expansion to acquirer–issuer networks with human-in-the-loop approvals, and (iii) ledger-backed cross-institution sharing under the governance model in §8.5.
Cost envelopes: Typical OPEX drivers are edge agents and ledger orders; CAPEX concentrates on initial model distillation and policy-as-code onboarding.
Operational complexity: Policy lifecycle (author → approve → enforce → attest) benefits from change-advisory gates; rollback paths include staged policy disable and model checkpoint reversion. We provide runbook templates in Appendix A.

8.5. Consortium Interoperability

We outline a governance model for a multi-institution mesh: membership criteria and attestation, reputation-staked participation, and policy-conflict resolution via highest-assurance guardrail wins. Interop aligns to SPIFFE/SPIRE identities and ISO 20022 semantics; cross-org contributions are signed and auditable on the shared ledger.

9. Conclusions and Future Directions

9.1. Summary of Findings

This work has presented and empirically validated a Generative-Intelligence Cybersecurity Mesh, a novel architecture that integrates federated reinforcement learning, blockchain-anchored provenance, and explainable AI to secure high-throughput financial transactions. The quantitative and qualitative evidence confirms that this approach delivers superior detection accuracy, reduced decision latency, and strong alignment with emerging regulatory frameworks. The GI-CSM provides a scalable, accurate, and verifiable blueprint for the next generation of payment security infrastructure.

9.2. Technically Grounded Research Roadmap

Future work will focus on extending the capabilities of the GI-CSM along three primary vectors: post-quantum cryptography integration, quantum-accelerated optimization, and production pilots on next-generation payment rails.
  • Quantum-Accelerated Hyperparameter Optimization: The performance of the RL agents is highly dependent on a large set of hyperparameters (e.g., learning rate, network architecture, PPO coefficients). Tuning these parameters is a complex, high-dimensional optimization problem where classical methods like grid search are computationally infeasible. Future research will explore the use of a quantum annealer, such as the D-Wave Advantage, to address this challenge. This involves reformulating the hyperparameter search as a Quadratic Unconstrained Binary Optimization (QUBO) problem, which is the native input format for quantum annealers. The discrete hyperparameter space will be encoded into binary variables, and the objective function to be minimized by the annealer will be the negative of the agent’s reward function. By leveraging quantum effects like tunneling, this approach can explore the vast solution space more effectively than classical methods, potentially discovering more optimal hyperparameter configurations to further enhance agent performance. Related studies have demonstrated the applicability of quantum annealing techniques to reinforcement learning, neural network training, and policy optimization problems [23,24,25,26].
  • Production Pilots on Next-Generation Payment Rails: The next phase of research involves deploying the GI-CSM in live pilots on ISO 20022-native payment systems, specifically Request-to-Pay (RTP) rails and Real-Time Gross Settlement (RTGS) back-ends. These environments present unique security challenges, such as Authorized Push Payment (APP) fraud in RTP schemes, where social engineering tricks a victim into sending an instant payment. APP fraud includes various scams like invoice redirection, romance scams, and impersonation scams, where the victim willingly authorizes a payment to a fraudster’s account, making it difficult to detect with traditional methods. The GI-CSM’s ability to perform semantic analysis on the rich, structured data within ISO 20022 RTP messages (e.g., pain.013) provides a powerful tool for detecting contextual anomalies indicative of such fraud. The pilot architecture will involve integrating the Telemetry Ingestion layer with RTP message flows and feeding the Policy Engine’s real-time decisions back to the payment gateway before settlement finality occurs in the RTGS system, requiring a robust and low-latency feedback loop.

9.3. Long-Term Vision: “Security-as-Liquidity” Network

The long-term vision for the GI-CSM is to evolve it into a utility-grade “Security-as-Liquidity” mesh. In this model, financial institutions across the globe could participate in a shared, federated security network. As more participants join, the collective intelligence of the agent fabric grows, improving the accuracy and adaptability of fraud mitigation for all members. This creates a powerful network effect, analogous to Metcalfe’s Law, where the value and effectiveness of the security network scale proportionally to the square of its participants, transforming cybersecurity from a siloed cost center into a collaborative, liquid utility.

Author Contributions

Conceptualization, U.K.A.S. and V.A.; methodology, U.K.A.S.; software, U.K.A.S.; validation, U.K.A.S., V.A.; formal analysis, U.K.A.S.; investigation, U.K.A.S.; resources, V.A.; data curation, U.K.A.S.; writing—original draft preparation, U.K.A.S.; writing—review and editing, U.K.A.S., V.A.; visualization, U.K.A.S.; supervision, V.A.; project administration, V.A.; funding acquisition, V.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

During the preparation of this manuscript/study, the authors used Chat GPT 4.5 for the purposes of research and references. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
APPAuthorized Push Payment
CVCCompliance-Violation Count
DPDifferential Privacy
FLFederated Learning
GNNGraph Neural Network
OPAOpen Policy Agent
PATEPrivate Aggregation of Teacher Ensembles
PQCPost-Quantum Cryptography
RACIResponsibility–Accountability–Consulted–Informed
SPIFFESecure Production Identity Framework For Everyone

Appendix A

PQC Overhead Benchmarks

Setup: Dilithium-2 vs. Ed25519 for agent attestation and checkpoint commits. Results: mean verify time +0.7 ms; payload +3.1 KB per commit; ledger commit latency +22 ms at 95th percentile; storage growth +0.4 GB·month−1 for 1000 agents at 4 h rounds. Implication: selective PQC on attested checkpoints preserves real-time SLOs while enabling crypto-agility.

References

  1. ISO 20022-1:2013; Financial Services—Universal Financial Industry Message Scheme. International Organization for Standardization (ISO): Geneva, Switzerland, 2013.
  2. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16), San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  3. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  4. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014); Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672–2680. [Google Scholar]
  5. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 5998–6008. [Google Scholar]
  6. European Parliament and Council of the European Union. Directive (EU) 2015/2366 of 25 November 2015 on payment services in the internal market (PSD2). Off. J. Eur. Union 2015, 337, 35–127. [Google Scholar]
  7. Federal Reserve Financial Services. FedNow Service Readiness Guide; FRB Services: Chicago, IL, USA, 2023. [Google Scholar]
  8. Gartner. Cybersecurity Mesh Architecture (CSMA); Gartner Research Note; Gartner: Stamford, CT, USA, 2022. [Google Scholar]
  9. Hoover, J.; Shoard, P.; Gaehtgens, F. How to Start Building a Cybersecurity Mesh Architecture; Gartner: Stamford, CT, USA, 2022. [Google Scholar]
  10. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  11. The Nilson Report. Global Card Fraud Projections; Issue #1234; The Nilson Report: Carpinteria, CA, USA, 2025. [Google Scholar]
  12. Zhang, Y.; Li, P.; Jin, X. Explainable AI for Cybersecurity: A Taxonomy, Survey, and New Perspectives. ACM Comput. Surv. 2023, 55, 1–38. [Google Scholar]
  13. Jiang, A.Q.; Sablayrolles, A.; Mensch, A.; Bamford, C.; Chaplot, D.S.; Casas, D.d.l.; Bressand, F.; Lengyel, G.; Lample, G.; Saulnier, L.; et al. Mistral 7B. arXiv 2023, arXiv:2310.06825. [Google Scholar] [CrossRef]
  14. Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021, arXiv:2106.09685. [Google Scholar]
  15. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar] [CrossRef]
  16. National Institute of Standards and Technology (NIST). FIPS PUB 204: Module-Lattice-Based Digital Signature Standard; NIST: Gaithersburg, MD, USA, 2024. [Google Scholar]
  17. Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schwabe, P.; Seiler, G.; Stehlé, D. CRYSTALS-Dilithium: A Lattice-Based Digital Signature Scheme. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2018, 2018, 238–268. [Google Scholar]
  18. Bai, S.; Galbraith, S.D. Lattice-based signatures and bimodal Gaussians. In Proceedings of the 13th International Conference on Cryptology and Network Security (CANS 2014); Springer: Cham, Switzerland, 2014; pp. 34–52. [Google Scholar]
  19. Lyubashevsky, V. Lattice signatures without trapdoors. In Proceedings of the 31st Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT 2012); Springer: Berlin/Heidelberg, Germany, 2012; pp. 738–755. [Google Scholar]
  20. ISO/TS 12812-3:2017; Core Banking—Mobile Financial Services—Part 3: Financial Application Lifecycle Management. ISO: Geneva, Switzerland, 2017.
  21. PCI Security Standards Council (PCI SSC). Payment Card Industry Data Security Standard (PCI DSS), Version 4.0; PCI SSC: Wakefield, MA, USA, 2022. [Google Scholar]
  22. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive Representation Learning on Large Graphs. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 1024–1034. [Google Scholar]
  23. Crawford, S.W.; Valluri, S.R. A multi-agent reinforcement learning approach to grid-world traversal using quantum annealing. Quantum Inf. Process. 2020, 19, 227. [Google Scholar]
  24. D-Wave Systems. D-Wave Advantage System Documentation; D-Wave Systems: Burnaby, BC, Canada, 2023. [Google Scholar]
  25. Adachi, S.H.; Henderson, M.P. Application of quantum annealing to training of deep neural networks. arXiv 2015, arXiv:1510.06356. [Google Scholar] [CrossRef]
  26. Jerbi, S.; Gyurik, C.; Marshall, S.C.; Briegel, H.J.; Dunjko, V. Parametrized quantum policies for reinforcement learning. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021); Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2021; pp. 28362–28375. [Google Scholar]
Figure 1. Architecture of the AI-powered cybersecurity mesh.
Figure 1. Architecture of the AI-powered cybersecurity mesh.
Csmf 12 00010 g001
Table 1. Model and training hyperparameters.
Table 1. Model and training hyperparameters.
ComponentParameterValue
Distilled LLM (Mistral-7B-Pay)Base ModelMistral-7B
Distillation MethodLoRA
LoRA Rank (r)8
LoRA Alpha (α)16
Corpora Size18 GB
CatBoost ClassifierInput Features64 (engineered) + 256 (embedding)
Max Depth9
Learning Rate (η)0.05
PPO AlgorithmNetwork ArchitectureMLP (256-128-64)
PPO Clip (ϵ)0.2
Value Loss Coefficient0.5
Entropy Bonus0.01
Minibatch Size4000
Learning Rate3 × 10−4
Federated LearningAggregation Frequency4 h
DP Noise (σ)1.3
Privacy Budget (ϵ,δ)(1.5, 10−5)
Table 2. Aggregate performance comparison.
Table 2. Aggregate performance comparison.
MetricSREXGBC-LSTMGNN (GraphSAGE)GenAI Mesh
AUC-ROC0.790.860.880.920.97
F1 Score0.560.710.750.830.92
MCC0.410.580.610.740.88
Median Latency (ms)180320410355270
Table 3. Confusion matrix and class-specific metrics for MW-2M dataset.
Table 3. Confusion matrix and class-specific metrics for MW-2M dataset.
Predicted FraudPredicted LegitTotal
Actual Fraud12,211 (TP)632 (FN)12,843
Actual Legit3014 (FP)5,892,143 (TN)5,895,157
Total15,2255,892,7755,908,000
Metric\multicolumn{3}{c}{\textbf{Value}}
True Positive Rate (Recall)\multicolumn{3}{c}{95.1%}
False Positive Rate\multicolumn{3}{c}{0.05%}
Precision\multicolumn{3}{c}{80.3%}
Table 4. Qualitative comparison of GI-CSM against baseline architectures.
Table 4. Qualitative comparison of GI-CSM against baseline architectures.
DimensionGI-CSMCentralised AI (C-LSTM)Static CSMA (e.g., IBM)Traditional SIEM/Rules
AdaptabilityReal-time, autonomous learningBatch retraining, slowStatic, manual updatesStatic, brittle rules
AuditabilityDynamic, on-chain, verifiableCentralized logs, mutablePolicy logs, limitedDisparate, mutable logs
ExplainabilityHigh (SHAP/LIME on features)Low (black-box)Medium (rule-based)High (simple rules)
ScalabilityDecentralized, federatedMonolithic, bottleneck-proneDistributed, but staticCentralized, limited
PrivacyHigh (Federated Learning)Low (data centralization)N/ALow (data centralization)
Future-ProofingPQC-ready by designLegacy cryptographyLegacy cryptographyLegacy cryptography
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sethupathy, U.K.A.; Ananthanarayan, V. AI-Powered Cybersecurity Mesh for Financial Transactions: A Generative-Intelligence Paradigm for Payment Security. Comput. Sci. Math. Forum 2025, 12, 10. https://doi.org/10.3390/cmsf2025012010

AMA Style

Sethupathy UKA, Ananthanarayan V. AI-Powered Cybersecurity Mesh for Financial Transactions: A Generative-Intelligence Paradigm for Payment Security. Computer Sciences & Mathematics Forum. 2025; 12(1):10. https://doi.org/10.3390/cmsf2025012010

Chicago/Turabian Style

Sethupathy, Utham Kumar Anugula, and Vijayanand Ananthanarayan. 2025. "AI-Powered Cybersecurity Mesh for Financial Transactions: A Generative-Intelligence Paradigm for Payment Security" Computer Sciences & Mathematics Forum 12, no. 1: 10. https://doi.org/10.3390/cmsf2025012010

APA Style

Sethupathy, U. K. A., & Ananthanarayan, V. (2025). AI-Powered Cybersecurity Mesh for Financial Transactions: A Generative-Intelligence Paradigm for Payment Security. Computer Sciences & Mathematics Forum, 12(1), 10. https://doi.org/10.3390/cmsf2025012010

Article Metrics

Back to TopTop