Article

LLMs for Integrated Business Intelligence: A Big Data-Driven Framework Integrating Marketing Optimization, Financial Performance, and Audit Quality

by
Leonidas Theodorakopoulos
1,
Aristeidis Karras
2,*,
Alexandra Theodoropoulou
1 and
Christos Klavdianos
1
1
Department of Management Science and Technology, University of Patras, 26334 Patras, Greece
2
Computer Engineering and Informatics Department, University of Patras, 26504 Patras, Greece
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2026, 10(4), 110; https://doi.org/10.3390/bdcc10040110
Submission received: 16 January 2026 / Revised: 17 March 2026 / Accepted: 23 March 2026 / Published: 5 April 2026
(This article belongs to the Section Large Language Models and Embodied Intelligence)

Abstract

Enterprise decision making in marketing, finance, and audit remains fragmented, leading to inefficient budget allocation and incomplete risk assessment. This study proposes an integrated, Big Data-driven decision-support framework that unifies Large Language Models (LLMs), attention-based marketing mix modeling, and multi-agent, game-theoretic optimization to coordinate cross-functional decisions. The architecture combines five modules: LLM-enhanced customer segmentation and customer lifetime value prediction, attention-weighted marketing mix modeling, multi-agent LLM systems for hierarchical budget optimization, attention-informed Markov multi-touch attribution, and LLM-augmented audit quality assessment. Empirical validation on a large-scale e-commerce dataset with 2.8 million customers and USD 156 million in marketing expenditure shows that marketing return on investment increases from 4.2 to 6.78 (61.4% relative improvement), financial forecasting error (MAPE) decreases from 12.8% to 4.7% (63.3% reduction), fraud detection accuracy improves by 29.8%, the Audit Quality Index reaches 0.951, and customer lifetime value prediction accuracy improves from 76.4% to 91.3%. By operationalizing the convergence of LLMs, attention mechanisms, and game-theoretic reasoning within a unified and empirically validated framework, the study delivers both theoretical advances and practically deployable tools for integrated business intelligence in digital economies.

1. Introduction

The adoption of Large Language Models (LLMs) in business intelligence systems marks a fundamental change in how organizations apply data-driven decision making across functional areas. Recent developments indicate that LLMs can coordinate digital governance, marketing analytics, financial accounting [1,2], and complex workflows, and can address more demanding applications such as predicting audit opinions through dual-model synergy frameworks [4] and developing professional competency in the accounting and audit professions [3]. Simultaneously, advances in cognitive computing position LLMs as core pillars of business intelligence applications in accounting, finance, and management [5]. Moreover, LLM-powered full-scale data analytics in business intelligence [6], together with retrieval-augmented generation models, demonstrates how LLMs can derive useful insights from complex business data. Specialized implementations, such as LLM-based architectures for customer experience and sales optimization, have proved effective at turning unstructured business data into actionable insights, alongside strategic frameworks for leveraging LLMs in modern marketing management [7].
Nevertheless, the current literature remains divided into disjointed functional areas, with scant exploration of how LLMs can coordinate coherent systems that simultaneously optimize marketing, improve financial decision making, and upgrade audit quality [8,9,10,11,12,13,14]. This study fills that gap by proposing a common Big Data framework that uses LLMs to aggregate insights across these historically separate business functions, advancing both the theoretical and practical directions of LLM-based business intelligence.
Modern organizations face a severe structural challenge: their major business processes, including marketing, finance, and audit, operate as isolated silos that rarely share information and lack strategic coordination. Marketing departments plan campaign allocations using channel metrics alone; financial teams forecast revenues from historical data disconnected from actual marketing operations; audit functions rely on rule-based detection systems that lack business intuition. This compartmentalization systematically fails to capture the interdependencies and cascading effects that span enterprise boundaries.
Three converging forces create both opportunity and urgency. First, customer interactions, financial transactions, and operational patterns are now recorded in unprecedented data volumes at the finest temporal and behavioral granularity. Second, transformer-based architectures and large language models have recently demonstrated impressive performance in semantic understanding, complex business reasoning, and strategic decision support. Third, the need for real-time decision making under uncertainty demands integrated analytical frameworks that combine diverse sources of information and align decisions across functional borders.
Even with these enabling technologies, however, research has not provided a systematic approach for using LLMs to unify business decisions that maximize marketing allocation, predict financial performance, and maintain audit compliance. Available marketing analytics focus narrowly on channel attribution and budget optimization. Financial forecasting relies on traditional time-series or isolated machine-learning models. Audit automation uses supervised anomaly detection without the contextual reasoning of LLMs. Such piecemeal strategies leave substantial value on the table.

1.1. Theoretical Gaps and Research Motivation

This research addresses three theoretical gaps:
  • The Attribution–Finance Gap. Marketing mix modeling and multi-touch attribution lack an analytical linkage with financial reporting and audit control. Current attribution methodologies, whether linear, Markovian, or algorithmic, typically treat attribution as a pure measurement problem independent of financial controls. However, marketing expenditures correlate directly with the quality of reported earnings in terms of revenue recognition, accrual management [5], and earnings sustainability. Both marketing accountability and financial statement credibility can, in principle, be jointly enhanced by combining attribution mechanisms with financial analysis. Recent work on deep learning for multi-touch attribution and on integrating artificial intelligence with financial controls points to partial connections between attribution metrics and financial reporting outcomes, but does not provide an integrated analytical linkage between channel-level attribution, revenue recognition, and audit control [15,16]. These studies underscore the importance of aligning marketing expenditures with earnings quality, yet they stop short of proposing a unified framework that connects attribution models with formal financial and audit processes.
  • The Integration–Optimization Gap. Game-theoretic models of marketing budget allocation have proved useful for capturing strategic interdependencies across channels and organizational units [17]. Most existing formulations, however, rely on fully specified payoff functions, complete information, and clearly defined mathematical constraints, which allow for closed-form or numerically tractable solutions. In real-world settings, many of these payoff relationships are only implicit in heterogeneous organizational data, policy documents, and expert narratives, and cannot easily be reduced to a compact analytical form. Recent work on combining large language models with game theory for strategic decision making shows that LLMs can identify strategic options and explain the reasoning behind choices in complex environments [18], but this capability is rarely linked to enterprise-scale marketing mix models or routine budget allocation processes. Consequently, the literature still lacks a systematic framework in which LLMs and game-theoretic reasoning are jointly employed to infer constraints, generate strategies, and iteratively approximate realistic budget allocations under uncertainty.
    Beyond these assumptions, most studies on algorithmic or AI-assisted marketing optimization continue to treat optimization as a purely quantitative exercise, in which constraints and objectives are fully specified in mathematical form and solved with standard numerical solvers. While recent work has begun to explore multi-agent LLM systems and LLM-assisted planning for complex decision problems, these approaches are rarely grounded in enterprise-scale marketing mix models or budget allocation pipelines that must reconcile heterogeneous operational, financial, and governance constraints. Existing contributions therefore stop short of offering a systematic, data-driven procedure in which LLMs extract implicit constraints and strategic options from textual policies, expert narratives, and unstructured logs, and then feed these into an iterative, game-theoretic optimization loop for realistic budget allocation under uncertainty. Accordingly, this study addresses the integration–optimization gap by proposing a multi-agent LLM framework that infers implicit constraints from heterogeneous organizational data and narratives, and iteratively approximates feasible, near-optimal budget allocations without requiring closed-form payoff specification.
  • The Autonomy–Assurance Gap. The automation of marketing optimization and financial forecasting introduces non-trivial audit risks: algorithmic systems make consequential decisions that are difficult to explain, forecasting models can inherit and amplify biases from training data, and emerging fraud patterns may bypass static rule-based controls. Prior work on big data and advanced analytics in external audits, as well as on the adoption of AI in auditing, shows that automation can enhance coverage and efficiency in assurance tasks [19,20]. More recent studies on dual-model synergy for audit opinion prediction and on the integration of generative AI into competency development for accounting and audit professionals underscore the potential of LLM-based systems in audit settings [3,4]. At the same time, research on continuous AI auditing infrastructures and algorithmic governance highlights the need for robust oversight mechanisms [21,22,23]. Yet current approaches rarely embed audit checks directly within the automated decision pipelines that drive marketing and forecasting, and they seldom quantify assurance quality in a way that feeds back into operational choices. This disconnect creates an autonomy–assurance gap, in which increasingly autonomous LLM-enabled systems are not systematically paired with real-time, LLM-augmented audit mechanisms capable of constraining, explaining, and continuously certifying their decisions.

Research Question and Objectives

Building on these gaps, this study investigates the following research question:
How can large language models be integrated with attention-based attribution, game-theoretic optimization, and continuous audit mechanisms to form a unified, Big Data-driven framework that simultaneously optimizes marketing allocation, improves financial performance forecasting, and enhances audit quality? To address this question, the paper sets three objectives: (1) to design an integrated LLM-based architecture that connects customer intelligence, attribution, financial analysis, and audit assurance [2,6]; (2) to develop concrete algorithms for attention-informed attribution, multi-agent LLM optimization, and LLM-augmented financial and audit analytics; and (3) to empirically evaluate the framework on large-scale e-commerce data, assessing its impact on marketing ROI, forecasting accuracy, fraud detection, and audit quality [24,25].

1.2. Contributions

The study has complementary theoretical, algorithmic, and empirical contributions:
  • Theoretical Contributions: We model the integrated optimization problem [26] as a constrained multi-objective decision problem solved by coordinated LLM agents that execute an approximate Stackelberg equilibrium, extending game-theoretic frameworks by using language models to generate strategies. We present attention-weighted Markov chain attribution that relates channel contributions to financial results through explainable attribution weights. We build an integrated model that couples customer lifetime value prediction, financial forecasting, and audit quality on a shared probabilistic basis.
  • Algorithmic Contributions: We present prompt-based reward functions that allow LLM agents to plan budget allocation without an explicit strategy specification. We train transformer attention models for marketing attribution, yielding interpretable channel interaction weights. We devise ensemble techniques that combine LLM-refined predictions with classical statistical and machine-learning models, boosting robustness. We compute a continuous Audit Quality Index supported by LLM-powered explainability.
  • Empirical Contributions: Experiments on a large e-commerce dataset show significant improvements across traditionally separate measures: marketing ROI increases from 4.2 to 6.78 (61.4% relative improvement), financial forecasting error (MAPE) decreases from 12.8% to 4.7% (63.3% reduction), fraud detection sensitivity improves by 29.8%, and the Audit Quality Index increases by 25.1% [8,9,10,11,12,13,14]. These gains are attained while decision-processing time is reduced by 93.8% and false positive audit flags are lowered by 75.0%.
Beyond reusing standard components such as CLV prediction models, Markovian attribution, generic LLM-based retrieval-augmented generation, and classical game-theoretic reasoning, our framework introduces several customized contributions. First, we extend Markov multi-touch attribution with transformer attention, yielding an attention-weighted transition operator that links channel contributions not only to conversions but also to downstream financial and audit outcomes. Second, we implement a heuristic, Stackelberg-inspired multi-agent optimization scheme in which domain-specialized LLM agents (CMO, CFO, auditor, optimizer) iteratively negotiate budget allocations under constraints, without requiring closed-form payoff functions or equilibrium solutions. Third, we couple customer lifetime value, financial forecasts, and audit quality into a shared probabilistic structure, so that improvements in one module (e.g., CLV prediction) propagate consistently to financial forecasting and audit risk assessment rather than remaining isolated.
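To make the attention-weighted transition operator concrete, the following minimal sketch (our illustrative construction, not the paper's exact formulation) blends an empirical channel-to-channel transition matrix with attention-derived weights; the convex mixing coefficient `gamma` and all matrix values are assumptions for demonstration:

```python
import numpy as np

def attention_weighted_transitions(counts, attn, gamma=0.5):
    """Blend empirical journey transition counts with learned attention
    weights into an attention-weighted transition matrix.

    counts : (n, n) observed channel-to-channel transition counts
    attn   : (n, n) attention weights (rows need not sum to 1)
    gamma  : mixing coefficient between data and attention (assumed)
    """
    # Row-normalize both inputs into stochastic matrices
    p_data = counts / counts.sum(axis=1, keepdims=True)
    p_attn = attn / attn.sum(axis=1, keepdims=True)
    # A convex combination keeps every row a valid probability distribution
    return (1 - gamma) * p_data + gamma * p_attn

counts = np.array([[0., 8., 2.],
                   [3., 0., 7.],
                   [5., 5., 0.]])
attn = np.array([[0.1, 0.6, 0.3],
                 [0.2, 0.1, 0.7],
                 [0.4, 0.4, 0.2]])
P = attention_weighted_transitions(counts, attn)
```

Because each row of `P` remains a probability distribution, the blended matrix can be substituted directly into standard Markov attribution computations.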
The remainder of this study is organized as follows. Section 2 reviews related literature on LLM-based business analytics, attribution, game theory, and audit automation. Section 3 presents the integrated framework, including LLM-enhanced customer intelligence, attention-based marketing mix modeling, multi-agent optimization, and attention-informed Markov attribution. Section 4 describes the overall algorithm and prompt-engineering workflow, while Section 5 and Section 6 report empirical results and discuss implications, limitations, and future research directions.

2. Literature Review

2.1. LLM-Based Business Analytics in Marketing, Finance, and Audit

Large language models have evolved from narrowly specialized task performers to general-purpose reasoning systems capable of supporting complex organizational decisions [27]. Recent studies demonstrate that models such as GPT-4 and domain-specialized financial LLMs achieve performance levels comparable to or exceeding those of domain experts in financial statement analysis, strategic planning recommendations, and anomaly reasoning [24]. Multi-agent LLM systems enable organizational decision making through coordination of specialized agents, each encoding domain expertise through system prompts and providing hierarchical reasoning [28].
However, most studies examine specific decision domains in isolation. Marketing applications emphasize personalization and content generation [29], financial applications stress forecasting accuracy [8,9,30,31,32,33], and audit applications highlight anomaly detection. Integrated frameworks that operationalize coordination across these domains remain limited. Additionally, while LLMs demonstrate strong reasoning capabilities, their integration with mathematical optimization, particularly game-theoretic formulations, requires novel algorithmic approaches. A critical concern in deploying LLMs for business-critical applications is hallucination: confident but potentially incorrect outputs. This necessitates validation mechanisms such as ensemble approaches that combine LLM predictions with classical methods, confidence-based filtering, and constraint validation.
Marketing attribution seeks to quantify the contribution of each channel to conversion outcomes, enabling productive budget allocation [1]. Traditional approaches—last-click attribution, linear models, time-decay curves—impose arbitrary assumptions inconsistent with actual customer behavior. More sophisticated approaches employ Markov chains to model customer journey states as stochastic transitions between channels, calculating channel contributions through fundamental matrix analysis and incremental removal effects [15].
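The removal-effect logic of Markov attribution can be sketched compactly: absorption probabilities into the conversion state are obtained from the transition structure, and each channel's contribution is the relative drop in conversion probability when that channel is removed. The toy journey graph and probabilities below are purely illustrative, not drawn from the study's data:

```python
import numpy as np

# Hypothetical journey graph: states are 'start', two channels, and the
# absorbing states 'conv' (conversion) and 'null' (no conversion).
P = {  # transition probabilities (illustrative; each row sums to 1)
    'start':  {'search': 0.6, 'social': 0.4},
    'search': {'social': 0.3, 'conv': 0.4, 'null': 0.3},
    'social': {'search': 0.2, 'conv': 0.3, 'null': 0.5},
}

def conversion_prob(P, removed=None):
    """Absorption probability into 'conv' from 'start', solving the linear
    system p(s) = sum_t P[s][t] * p(t); a removed channel is routed to 'null'."""
    transient = [s for s in P if s != removed]
    idx = {s: i for i, s in enumerate(transient)}
    A = np.eye(len(transient))
    b = np.zeros(len(transient))
    for s in transient:
        for t, pr in P[s].items():
            if t == removed:       # removed channel behaves like 'null'
                continue
            if t == 'conv':
                b[idx[s]] += pr    # direct absorption into conversion
            elif t in idx:
                A[idx[s], idx[t]] -= pr  # transient-to-transient transition
    return np.linalg.solve(A, b)[idx['start']]

base = conversion_prob(P)
removal_effects = {c: 1 - conversion_prob(P, removed=c) / base
                   for c in ['search', 'social']}
```

Normalizing the removal effects across channels then yields the familiar Markov attribution shares.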
Recent research applies deep learning to attribution through recurrent networks and attention mechanisms that learn channel interaction weights directly from data [34]. Dual-attention architectures enable simultaneous learning of temporal dependencies and channel interaction patterns. Layer-wise relevance propagation provides explainability by decomposing network outputs into channel-specific contributions. However, existing attribution research treats attribution as an isolated measurement problem. Financial statement analysis can be connected to attribution through earnings quality assessment. Integrated attribution-finance-audit frameworks remain underdeveloped in the literature.
Game-theoretic approaches to marketing budget allocation model competitive environments and strategic interdependencies that arise when multiple channels or organizational units compete for resources [17]. Stackelberg game formulations provide a natural framework for organizational hierarchies. Yet traditional approaches assume complete information and require closed-form solutions. Recent research on LLM-driven strategy generation demonstrates that language models can discover strategic options and generate reasoning justifying choices [18]. The integration of game theory and LLM reasoning remains a promising frontier.
Adapting general-purpose LLMs to specific business domains requires balancing accuracy, computational efficiency, and governance [35]. Parameter-efficient approaches such as LoRA (Low-Rank Adaptation) reduce trainable parameters by 95–99%, maintaining accuracy while dramatically lowering computational requirements [36]. Retrieval-augmented generation (RAG) supplies relevant documents from organizational knowledge bases to the LLM's reasoning process, enhancing accuracy on task-specific questions and reducing hallucinations [37]. Prompt engineering, including task definitions, in-context examples, and the organization of reasoning, can substantially improve LLM performance on difficult tasks. Chain-of-thought prompting, where models explicitly reason in stages, enhances accuracy on multi-step reasoning tasks by imposing step-by-step checking.
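The core of LoRA is the low-rank weight update W + (alpha/r)·B·A, where only the small matrices A and B are trained while W stays frozen. The following NumPy sketch (toy dimensions, not a real adapter library) illustrates why the parameter savings arise and why a zero-initialized B leaves the pretrained model's behavior unchanged at the start of fine-tuning:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8     # rank r << d (illustrative sizes)

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(2, d_in))
# With B initialized to zero, the LoRA model reproduces the frozen model.
baseline_match = np.allclose(lora_forward(x), x @ W.T)

# Trainable-parameter comparison against full fine-tuning of W:
reduction = 1 - (A.size + B.size) / W.size   # 0.875 at these toy sizes
```

At production scale (d in the thousands, adapters on many layers), the same arithmetic yields the 95–99% reductions reported in the literature.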
Transformer attention mechanisms compute differentiable models of input dependencies, where attention weights highlight which tokens or features are most influential for a given prediction [34,38]. The basic attention operation calculates an output as a weighted sum of values, with the weights being learned functions of queries and keys. Multi-head attention [38] allows separate attention heads to learn more diverse interaction patterns, facilitating richer relationship modeling. Attention weights offer a form of explainability: important dependencies appear as high weights in the attention maps and in the projected features [34].
Attention weights are, however, neither sufficient nor necessary for explaining model decisions. A connection can receive a high attention weight as a statistical artifact without exerting any causal effect. Recent studies of attention-based explainability therefore recommend interpreting attention weights with care and comparing them against other explainability approaches, such as gradient-based saliency and counterfactual reasoning.
Conventional financial audit has relied on sampling, statistical testing, and expert judgment to draw conclusions about the fairness of financial statements. Automation promises to assess entire transaction populations, remove sampling risk, and enhance efficiency [19,20]. Machine-learning anomaly detection methods can recognize abnormal transactions and behaviors that deviate from organizational norms [21,39]. Continuous auditing extends this by applying detection algorithms to transaction streams in real time rather than to historical transaction populations.
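A minimal stand-in for such stream-level detection is a rolling z-score test on transaction amounts; the sketch below is an illustrative baseline (window size, warm-up length, and threshold are our assumptions), not the LLM-augmented detector discussed in this paper:

```python
from collections import deque
import math

class StreamingAnomalyFlag:
    """Flag transactions whose amount deviates strongly from a rolling
    window of recent amounts (z-score test). Thresholds are illustrative."""

    def __init__(self, window=100, z_threshold=4.0):
        self.buf = deque(maxlen=window)
        self.z = z_threshold

    def check(self, amount):
        flagged = False
        if len(self.buf) >= 30:                     # warm-up period
            mean = sum(self.buf) / len(self.buf)
            var = sum((a - mean) ** 2 for a in self.buf) / len(self.buf)
            std = math.sqrt(var) or 1e-9            # guard zero variance
            flagged = abs(amount - mean) / std > self.z
        self.buf.append(amount)                     # update rolling window
        return flagged

det = StreamingAnomalyFlag()
flags = [det.check(100.0) for _ in range(50)]       # steady stream
spike_flagged = det.check(10_000.0)                 # sudden spike
```

Continuous-audit pipelines layer contextual reasoning on top of such statistical flags, which is precisely where LLM-based explanation becomes useful.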
LLMs enhance continuous audit by providing richer contextual reasoning, natural-language descriptions of anomalies, and zero-shot detection of new fraud patterns through analogical reasoning [40]. Furthermore, interactive machine learning can incorporate auditor feedback into LLM-driven audit systems. However, concrete LLM-based applications in audit are still in their infancy, and regulatory guidance on algorithmic audit processes remains preliminary. Combining LLM capabilities with financial control systems therefore constitutes a significant literature gap [16]. Regulated industries must also consider data privacy, algorithmic fairness, transparency, and compliance with regulatory frameworks [22,23,41,42] such as GDPR when deploying LLMs. Sending sensitive business information to public LLM APIs creates privacy risks; companies may be legally required to retain data sovereignty and to have third parties process such data only under stipulated contractual terms [43].
Potential solutions include deploying private LLMs (e.g., Llama-2, Mistral) within corporate infrastructure, applying data anonymization and differential privacy techniques, and establishing governance schemes for algorithm usage [44]. Moreover, organizations must address possible biases in LLM output: models trained on internet text can reproduce societal biases, resulting in unfair marketing targeting, inaccurate financial forecasts for underserved populations, or systematic audit disparities.
Responsible AI governance emphasizes transparency, accountability, and human control. For business-critical applications, organizations are advised to adopt review mechanisms that ensure high-stakes LLM findings are checked by humans, that audit trails record how automated decisions were reached, and that stakeholders have a mechanism to challenge algorithmic decisions [44]. Incorporating these governance concerns into operational mechanisms is a significant practical challenge. Advanced applications in financial analysis likewise indicate that using LLMs for consequential business decisions requires rigorous deployment and close performance monitoring [25].

2.2. Integrated Research Model and Hypotheses Development

Building on the reviewed literature on LLM-based business analytics, attribution, game-theoretic optimization, and audit automation [4,8,15,16,24], the proposed research model conceptualizes the organization as a cross-functional decision system in which LLM-enhanced customer intelligence, attention-based marketing attribution, and multi-agent optimization jointly influence financial forecasting quality and audit quality. LLM-based customer embeddings and CLV predictions capture high-dimensional behavioral patterns, attention-based attribution quantifies the contribution and interactions of marketing channels, and the multi-agent optimization layer translates these signals into coordinated budget allocations. These allocations in turn affect revenue trajectories and risk profiles, which are captured in financial forecasts and in an LLM-augmented Audit Quality Index (AQI). Within this structure, we model “LLM-enhanced customer intelligence” and “attention-based attribution” as upstream constructs that directly affect marketing return on investment, and indirectly influence financial forecasting and audit outcomes through the information they provide to the optimization and assurance mechanisms. “Financial forecasting quality” is operationalized through forecast error metrics such as MAPE, while “audit quality” is operationalized through detection sensitivity, false positive rates, and the AQI. Finally, “integrated organizational performance” is reflected in the joint improvement of marketing ROI, forecast accuracy, and audit quality relative to siloed baselines.
We formally state the following hypotheses:
Hypothesis 1 (H1).
LLM-enhanced customer intelligence and attention-based marketing mix modeling significantly improve marketing ROI compared to traditional attribution and response models [15,29]. This expectation is grounded in prior work showing that richer behavioral representations and sequence-aware models improve CLV prediction and response modeling compared to coarse RFM-style features and static attribution rules. Attention mechanisms further capture cross-channel interactions and diminishing returns, which enables more efficient budget allocation than last-click or purely heuristic allocation rules. When LLM-based customer intelligence and attention-based attribution are jointly used to guide spend, the marginal ROI of each channel can be more accurately estimated, leading to higher realized marketing ROI.
Hypothesis 2 (H2).
Integrating LLM-driven marketing signals into financial forecasting models reduces forecast error relative to standalone statistical or ML baselines [24,25]. Prior research on financial LLMs and hybrid forecasting models indicates that incorporating alternative, forward-looking information sources—such as textual disclosures, analyst narratives, or customer signals—can significantly reduce forecast errors relative to purely historical time-series models. In our setting, marketing signals and CLV predictions encode early information about customer demand and revenue potential that is not fully visible in aggregate financial time series. Integrating these LLM-derived signals into forecasting models should therefore improve the timeliness and accuracy of revenue forecasts, yielding lower MAPE compared to standalone statistical or ML baselines.
Hypothesis 3 (H3).
Embedding LLM-based anomaly detection and explanation into continuous audit processes improves audit quality (e.g., detection rates, false positives, and explainability) compared to conventional audit analytics [4,19,20]. Studies on big data and advanced analytics in auditing show that automated anomaly detection and continuous monitoring can increase coverage and enhance the detection of irregularities compared to manual, sample-based approaches. LLMs add an additional layer of contextual reasoning by explaining anomalies, capturing textual cues from invoices or contracts, and generalizing to previously unseen fraud patterns via analogy. Embedding these LLM-based anomaly detection and explanation capabilities into continuous audit pipelines is thus expected to raise audit quality, both by increasing detection sensitivity and by reducing false positives through more informative, context-aware filtering.
Hypothesis 4 (H4).
An integrated, cross-functional LLM framework that jointly optimizes marketing, financial, and audit decisions yields higher overall organizational performance than siloed, domain-specific models [2,6]. Existing literature on LLM-based business intelligence and integrated analytics is still fragmented, with most applications confined to single domains such as marketing personalization, financial forecasting, or audit analytics in isolation. This fragmented deployment leaves cross-domain spillovers underexploited—for example, attribution insights are not systematically propagated into financial risk assessments, and audit findings do not feed back into marketing strategy. By contrast, the proposed integrated framework explicitly couples these domains through shared data structures and multi-agent coordination, so that improvements in one module propagate to others. We therefore expect the integrated, cross-functional LLM framework to yield higher overall organizational performance than domain-specific models.
Taken together, the research model posits a coherent chain from LLM-enhanced customer intelligence and attention-based attribution to downstream financial and audit outcomes, mediated by multi-agent optimization. The four hypotheses collectively test whether this integrated architecture yields measurable gains in marketing ROI, financial forecasting accuracy, and audit quality relative to conventional baselines. In doing so, the empirical analysis directly evaluates whether the proposed framework closes the attribution–finance, integration–optimization, and autonomy–assurance gaps outlined in Section 1.1.

3. Methodology

3.1. Theoretical Framework and Integrated Architecture

This research adopts a systems integration approach, modeling the organization as a coordinated decision system with specialized agents (CMO, CFO, auditor) operating under shared information and complementary objectives. The framework combines:
  • Semantic intelligence through LLM embeddings capturing customer behavioral semantics.
  • Predictive analytics via fine-tuned language models for customer lifetime value and financial forecasting.
  • Attribution analysis through transformer attention mechanisms revealing channel interactions.
  • Strategic optimization via multi-agent systems implementing a game-theoretic, heuristic hierarchical decision process.
  • Assurance automation through continuous audit verification with explainability.
The integrated algorithm orchestrates these components through six coordinated phases: (1) customer intelligence generation, (2) attention-based attribution modeling, (3) multi-agent strategy optimization, (4) Markov chain attribution analysis, (5) financial performance integration, and (6) audit verification. Cross-component feedback loops enable iterative refinement: audit findings inform customer segmentation and risk-adjusted forecasts; financial constraints shape optimization objectives; attribution insights drive personalization strategies.
Figure 1 illustrates the proposed LLM-driven integrated framework, highlighting data ingestion, optimization layers, governance mechanisms, and practical organizational outcomes.

3.2. Operational Implementation and Data Structures

The formal entities introduced in Section 3 are directly mapped to concrete implementation artefacts in our system. Historical data D correspond to a columnar data warehouse comprising relational tables for customers, transactions, marketing touchpoints, financial statements, and audit trails, as detailed in Section 4.1. Feature vectors x k and embeddings e k are implemented as NumPy/PyTorch tensors constructed from the underlying tables.
The CLV model f ( · ) in (5) is instantiated as a BERT-based neural network with LoRA adapters implemented in PyTorch, whose parameters are stored and versioned in a model registry. The attention mechanism in (7)–(12) is implemented using standard multi-head self-attention layers, where the matrices Q , K , V are realized by learned linear projections inside a transformer block.
Multi-agent LLM agents (CMO, CFO, auditor, optimizer) are implemented as separate API clients, each configured with a fixed system prompt and a JSON input–output schema. The shared state of the multi-agent optimization loop is a JSON document that stores the current budget vector x ( t ) , constraint indicators, and audit risk scores. Algorithm 1 is realized as a Python orchestration script that repeatedly queries the agents, updates x ( t ) according to their recommendations, and logs all intermediate states for reproducibility and auditability.
Figure 1 depicts the proposed LLM-driven integrated framework. The data layer aggregates customer interactions, marketing spend logs, and financial statements, reflecting the multi-source Big Data environment emphasized in prior work on business intelligence and financial LLMs [1,8,9]. On top of this, the intelligence layer combines LLM-based customer embeddings, CLV prediction, attention-based attribution, and financial reasoning, capturing cross-domain signals that are typically analyzed in isolation [2,15,24]. The optimization layer uses multi-agent LLMs and game-theoretic reasoning to translate these signals into budget allocations and strategic policies [17,18,28]. Finally, the governance layer implements continuous audit and assurance, ensuring that autonomous decisions remain explainable, compliant, and aligned with organizational risk constraints [19,21,22]. The directed connections indicate information flows (e.g., attribution outputs feeding financial forecasting), while feedback links represent how audit findings and financial constraints iteratively adjust upstream segmentation and optimization decisions.

LLM-Generated Customer Embeddings

We use pre-trained transformer models (e.g., bert-base-uncased and a domain-adapted financial BERT) to generate semantic vectors from customer interaction text. For each textual interaction (email content, on-site search query, chat transcript, call transcription, or video caption), we apply standard tokenization, lowercase normalization, removal of HTML markup, and truncation to a maximum sequence length of 256 tokens. Let $e_{k,\ell} \in \mathbb{R}^d$ denote the embedding of the $\ell$-th interaction of customer $k$, with $d = 768$ for BERT-base. We compute a customer-level embedding via average pooling
$$e_k = \frac{1}{L_k} \sum_{\ell=1}^{L_k} e_{k,\ell},$$
where $L_k$ is the number of interactions.
For similarity computations (e.g., in the coherence term LLMCoherence ( C k ) ), we $\ell_2$-normalize embeddings and use cosine similarity as the distance metric. All embeddings are stored in a vector index (FAISS-based approximate nearest-neighbor search), which supports efficient retrieval of top-K similar customers or interactions during clustering and context construction. Embeddings and clusters are updated on a monthly schedule; new customers are embedded online and assigned to the nearest cluster in embedding space.
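The pooling and similarity steps above are simple vector arithmetic. The following is a minimal numpy sketch (toy dimension d = 4 instead of 768; FAISS indexing omitted):

```python
import numpy as np

def pool_customer_embedding(interaction_embeddings):
    """Average-pool per-interaction embeddings (L_k x d) into one customer vector e_k."""
    E = np.asarray(interaction_embeddings, dtype=float)
    return E.mean(axis=0)

def l2_normalize(v, eps=1e-12):
    """L2-normalize a vector, guarding against zero norm."""
    return v / (np.linalg.norm(v) + eps)

def cosine_similarity(a, b):
    """Cosine similarity between two embeddings after L2 normalization."""
    return float(np.dot(l2_normalize(np.asarray(a, float)),
                        l2_normalize(np.asarray(b, float))))

# Toy example: a customer with two interactions in a 4-dimensional space.
e_k = pool_customer_embedding([[1, 0, 0, 0], [0, 1, 0, 0]])
sim = cosine_similarity(e_k, [1.0, 1.0, 0.0, 0.0])
```

In the full system these vectors would come from the BERT encoder and be inserted into the FAISS index rather than compared pairwise.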

3.3. Notation and Dimensions

Table 1 summarizes the main symbols used throughout the theoretical formulation, together with their semantic meaning and dimensionality.

3.4. Semantic Customer Segmentation with LLMs

Traditional RFM (Recency, Frequency, Monetary) analysis is augmented with LLM-based semantic analysis of customer interactions. For each customer k, we extract high-dimensional semantic representations from interaction data.

3.4.1. LLM-Generated Customer Embeddings

We use pre-trained transformer models (BERT, GPT embeddings) to generate semantic vectors from customer interaction text:
$$e_k = \mathrm{LLM}_\theta(\mathrm{CustomerInteractions}_k) \in \mathbb{R}^d$$
where $d = 768$ (for BERT) or higher for specialized models, and $\theta$ represents the model parameters.
For each customer $k$, we concatenate RFM features with semantic embeddings:
$$x_k = [\mathrm{RFM}_k \oplus e_k] \in \mathbb{R}^D$$
where $D = 3 + 768$ for the base configuration, and $\oplus$ denotes concatenation.
Here, $e_k \in \mathbb{R}^d$ denotes the LLM-based embedding of customer $k$, $\mathrm{RFM}_k \in \mathbb{R}^3$ denotes the recency–frequency–monetary features, and $x_k \in \mathbb{R}^D$ is the concatenated feature vector with $D = d + 3$.
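The feature construction reduces to a concatenation; a minimal numpy sketch with toy dimensions (d = 4 instead of 768, and illustrative RFM values):

```python
import numpy as np

def build_feature_vector(rfm, embedding):
    """Concatenate RFM features (3,) with an LLM embedding (d,) into x_k of shape (3 + d,)."""
    return np.concatenate([np.asarray(rfm, dtype=float),
                           np.asarray(embedding, dtype=float)])

# Toy example: recency = 12 days, frequency = 5 orders, monetary = 230.0, with d = 4.
x_k = build_feature_vector([12.0, 5.0, 230.0], [0.1, -0.2, 0.05, 0.3])
```

In practice the RFM block would be standardized before concatenation so that its scale does not dominate the unit-norm embedding dimensions.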

3.4.2. Context-Aware Clustering with LLM Guidance

We employ LLM-guided clustering that incorporates domain knowledge:
$$\min_{\mathcal{C}} \sum_{k=1}^{K} \sum_{x_j \in C_k} \| x_j - \mu_k \|_2^2 + \lambda \cdot \mathrm{LLM\_Coherence}(C_k)$$
The LLM coherence term evaluates semantic meaningfulness of clusters:
$$\mathrm{LLM\_Coherence}(C_k) = \sum_{j_1, j_2 \in C_k} \mathrm{Similarity}_{\mathrm{LLM}}(x_{j_1}, x_{j_2})$$
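A minimal sketch of evaluating the regularized objective, using mean-free pairwise cosine similarity as a cheap stand-in for the LLM similarity term (the actual system would score similarity in the LLM embedding space):

```python
import numpy as np

def pairwise_similarity_sum(cluster_points):
    """Stand-in for the LLM coherence term: sum of pairwise cosine similarities."""
    X = np.asarray(cluster_points, dtype=float)
    if len(X) < 2:
        return 0.0
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T
    return float(S.sum() - np.trace(S))  # exclude self-similarity on the diagonal

def clustering_objective(X, labels, centroids, lam=0.1):
    """Within-cluster SSE plus lam times the coherence term, as in the objective above."""
    X, labels, centroids = np.asarray(X), np.asarray(labels), np.asarray(centroids)
    sse = sum(np.sum((X[labels == k] - centroids[k]) ** 2)
              for k in range(len(centroids)))
    coh = sum(pairwise_similarity_sum(X[labels == k])
              for k in range(len(centroids)))
    return float(sse + lam * coh)

# Two well-separated toy clusters in 2-D.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([[0.05, 0.0], [5.05, 5.0]])
val = clustering_objective(X, labels, centroids, lam=0.0)  # lam=0 recovers plain k-means SSE
```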

3.5. CLV Prediction with Fine-Tuned LLMs

We fine-tune a BERT-based model on historical customer transaction sequences:
CLV k = f θ * ( History k , RFM k , Behavioral_Features k )
where θ * are parameters fine-tuned on customer data using LoRA (Low-Rank Adaptation):
$$\theta^* = \theta_0 + \Delta\Theta, \qquad \Delta\Theta = B A^{T}, \quad B \in \mathbb{R}^{d \times r}, \; A \in \mathbb{R}^{m \times r}$$
where $r \ll d$ is the rank parameter (typically $r = 8$ or $16$), significantly reducing computational overhead.
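The LoRA update itself is plain matrix arithmetic. A numpy sketch of the parameter accounting (the actual system attaches these factors as PyTorch adapters inside a BERT layer; this is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, r = 768, 768, 16                    # weight dimensions and LoRA rank, r << d

theta0 = rng.standard_normal((d, m))      # frozen pre-trained weight matrix
B = np.zeros((d, r))                      # LoRA factor B, conventionally zero-initialized
A = rng.standard_normal((m, r)) * 0.01    # LoRA factor A, small random init

delta = B @ A.T                           # low-rank update Delta-Theta, rank <= r
theta_star = theta0 + delta               # effective fine-tuned weight theta*

# Only the two factors are trained: d*r + m*r parameters instead of d*m.
trainable = B.size + A.size
```

With B initialized to zero, theta* equals theta0 at the start of fine-tuning, so the adapted model begins exactly at the pre-trained solution.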

3.6. Attention-Based Marketing Mix Modeling

3.6.1. Transformer Attention for Channel Interactions

We model marketing channel interactions using transformer self-attention:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{T}}{\sqrt{d_k}} \right) V$$
For marketing channels, each channel is represented as:
  • Query: Current channel’s spend amount $x_i$
  • Key: Historical performance metrics of all channels
  • Value: Expected revenue impact
Multi-head attention allows the model to learn different interaction patterns:
MultiHead ( Q , K , V ) = Concat ( head 1 , , head h ) W O
In our implementation, $Q, K, V \in \mathbb{R}^{n \times d_k}$, where $n$ is the number of channels and $d_k$ is the key/query dimension, so that $Q K^{T} \in \mathbb{R}^{n \times n}$ and the softmax is applied row-wise across channels.
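A self-contained numpy sketch of this channel-level scaled dot-product attention (single head, toy dimensions; the production model uses learned linear projections inside a transformer block):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, softmax applied row-wise."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n, n) channel-interaction scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

n, d_k = 8, 4                                     # eight marketing channels
rng = np.random.default_rng(1)
Q = rng.standard_normal((n, d_k))                 # per-channel query vectors
K = rng.standard_normal((n, d_k))                 # per-channel key vectors
V = rng.standard_normal((n, 1))                   # expected revenue impact per channel
out, alpha = scaled_dot_product_attention(Q, K, V)
```

The matrix `alpha` plays the role of the interaction weights discussed above: each row is a probability distribution over the influence of all channels on one channel.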
Agent Roles and Shared State
The multi-agent system comprises four LLM-based agents:
  • CMO agent proposes candidate channel allocation vectors x ( t ) together with narrative justifications, given historical performance and strategic objectives.
  • CFO agent evaluates x ( t ) against financial constraints (budget, liquidity, risk limits) and returns constraint violations and recommended adjustments.
  • Auditor agent assesses compliance and audit risk for x ( t ) , returning risk scores and explanatory flags for high-risk allocations.
  • Optimizer agent aggregates the above signals and produces an updated allocation x ( t + 1 ) using a scripted heuristic that shifts budget mass towards channels with positive marginal ROI under the stated constraints.
All agents communicate through a shared JSON state object that contains: the current allocation vector x ( t ) (per channel), historical ROI and elasticity estimates, constraint indicators, audit risk scores, and natural-language explanations. Messages between agents are represented as JSON documents with typed fields (allocation, constraints, risk_scores, rationale), which ensures reproducibility.
Coordination Protocol and Assumptions
The coordination protocol is synchronous: in each iteration t, the CMO agent proposes x ( t ) , the CFO and auditor agents respond with constraints and risk assessments, and the optimizer agent aggregates these into x ( t + 1 ) . Iterations continue until x ( t + 1 ) x ( t ) 1 < ε (with ε typically set to 0.01 of the total budget) or a maximum of T max rounds is reached. We assume that LLM endpoints may occasionally fail; in such cases, the corresponding iteration is retried with the previous state. As discussed earlier, the process is heuristic: it does not provide formal guarantees of convergence to a Nash or Stackelberg equilibrium, and equilibrium concepts are used only as a structural analogy to motivate the hierarchical decision process.
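The synchronous loop above can be sketched as follows. The two stub agents are illustrative stand-ins for the LLM API clients (a damped move toward an ROI-proportional split, plus budget projection), not the paper's actual agents; the convergence test mirrors the L1 stopping rule with epsilon as a fraction of the budget:

```python
import numpy as np

def cmo_propose(state):
    """Stub CMO agent: damped move toward an ROI-proportional allocation."""
    x, roi = state["allocation"], state["roi"]
    target = roi / roi.sum() * x.sum()
    return 0.5 * x + 0.5 * target

def cfo_project(x, budget):
    """Stub CFO agent: enforce non-negativity and the total-budget constraint."""
    x = np.clip(x, 0.0, None)
    return x * (budget / x.sum())

def coordinate(budget, roi, eps_frac=0.01, t_max=50):
    """Iterate propose -> project until the L1 change falls below eps_frac * budget."""
    n = len(roi)
    state = {"allocation": np.full(n, budget / n), "roi": np.asarray(roi, float)}
    for t in range(1, t_max + 1):
        proposal = cfo_project(cmo_propose(state), budget)
        step = np.abs(proposal - state["allocation"]).sum()
        state["allocation"] = proposal
        if step < eps_frac * budget:
            return proposal, t
    return state["allocation"], t_max

alloc, rounds = coordinate(budget=15.0, roi=np.array([4.0, 2.0, 1.0]))
```

In the real system each stub is replaced by an API call whose JSON response is parsed into the shared state; the retry-on-failure behavior described above wraps each call.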

3.6.2. Attention-Weighted Revenue Response

The attention weights α i j quantify the influence of channel j on channel i’s effectiveness:
$$R_i(x_i, \mathbf{x}) = \sum_{j=1}^{n} \alpha_{ij}(x_1, \ldots, x_n) \cdot S_i(x_i) \cdot S_j(x_j)$$
where the attention weights are learned from data:
$$\alpha_{ij} = \frac{\exp(w_i^{T} s_j / \sqrt{d})}{\sum_{\ell=1}^{n} \exp(w_i^{T} s_\ell / \sqrt{d})}$$

3.6.3. Combined Response with Hill Saturation

The sales response for channel i incorporating attention-based interactions:
$$S_i^{\mathrm{attn}}(x_i, \mathbf{x}) = S_{\max,i}^{\mathrm{attn}}(\mathbf{x}) \cdot \frac{x_i^{\beta_i}}{K_i^{\beta_i} + x_i^{\beta_i}}$$
where the attention-adjusted maximum is:
$$S_{\max,i}^{\mathrm{attn}}(\mathbf{x}) = S_{\max,i} \cdot \left( 1 + \sum_{j \neq i} \alpha_{ij} \cdot \frac{x_j}{x_i + \epsilon} \right)$$
This captures both direct channel effects and synergistic interactions through attention mechanisms.
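A numpy sketch that evaluates the attention-adjusted Hill response literally; all parameter values are toy assumptions chosen so the result is easy to verify by hand:

```python
import numpy as np

def hill_response_attn(x, alpha, s_max, K, beta, eps=1e-9):
    """Attention-adjusted Hill saturation response for every channel.

    x: (n,) spends; alpha: (n, n) attention weights; s_max, K, beta: (n,) channel params.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        # synergy lift from other channels, scaled by relative spend
        lift = 1.0 + sum(alpha[i, j] * x[j] / (x[i] + eps)
                         for j in range(n) if j != i)
        # standard Hill saturation for the channel's own spend
        hill = x[i] ** beta[i] / (K[i] ** beta[i] + x[i] ** beta[i])
        out[i] = s_max[i] * lift * hill
    return out

# Two channels, no interactions (alpha = 0) -> plain Hill curve at half-saturation.
x = np.array([2.0, 2.0])
alpha = np.zeros((2, 2))
resp = hill_response_attn(x, alpha,
                          s_max=np.array([10.0, 10.0]),
                          K=np.array([2.0, 2.0]),
                          beta=np.array([1.0, 1.0]))
```

With spend equal to the half-saturation constant K and beta = 1, each channel delivers exactly half of its maximum response, which makes the Hill shape easy to sanity-check.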

3.7. Multi-Agent LLM-Based Game-Theoretic Optimization

  • Multi-Agent System Architecture
    We implement a multi-agent system where each agent represents an organizational stakeholder:
    Agents = { CMO LLM , CFO LLM , Auditor LLM , Optimizer LLM }
    Each agent uses an LLM with specialized prompts and is grounded in domain knowledge.
  • Prompt Engineering for Strategic Optimization
    Instead of manual strategy development, we use LLMs with optimized prompts to generate strategic recommendations:
    Strategy i = LLM θ i ( Prompt i * , Context )
    where Prompt i * is an automatically optimized prompt using reinforcement learning:
    $$\mathrm{Prompt}^* = \arg\max_{\mathrm{Prompt}} \; \mathbb{E}_{\mathrm{Task}}\!\left[ R\!\left( \mathrm{LLM}_\theta(\mathrm{Prompt}, \mathrm{Input}) \right) \right]$$
    and R is a reward function measuring solution quality (e.g., ROI, budget efficiency).
  • Stackelberg-Inspired Hierarchical Decision Process
    Rather than claiming formal Stackelberg equilibrium or Nash equilibrium solutions, this section clarifies that the multi-agent LLM system implements a hierarchical decision process inspired by Stackelberg game structure without formal mathematical guarantees.
    The CMO agent acts as a leader proposing strategic channel allocations, and the CFO agent responds as a follower with financial feasibility constraints:
    StrategyCMO = LLM CMO ( Prompt strategic , B , HistoricalPerformance )
    The CFO agent then imposes organizational financial constraints:
    Constraints CFO = LLM CFO ( Prompt financial , StrategyCMO , FinancialHealth )
  • Iterative Convergence to Approximate Solutions
    Rather than solving game equations analytically, the multi-agent system iterates until convergence. This is not proven to satisfy Nash or Stackelberg equilibrium conditions, but empirically stabilizes in practice:
    Strategy i ( t ) = LLM i ( Prompt i , Strategies ¬ i ( t 1 ) , Context )
    Convergence is determined empirically by:
    $$\| \mathrm{Strategy}^{(t)} - \mathrm{Strategy}^{(t-1)} \| < \epsilon$$
    where ϵ is a small threshold (typically 0.01 or 1% budget allocation change).
  • Critical Limitations of LLM-Based Game Convergence
    Limitation 1: No Equilibrium Guarantee. The iterative LLM process does not guarantee convergence to Nash equilibrium, Stackelberg equilibrium, or any formal game-theoretic solution concept. The LLM agents are heuristic optimizers, not formal solvers. They may converge to local optima, may oscillate indefinitely (though rare in practice), or may satisfy constraints without optimizing underlying payoff functions (which are never explicitly defined).
    Limitation 2: No Formal Payoff Functions. Traditional game theory requires explicit payoff matrices or functional forms. In this framework, payoff functions are implicit in the LLM’s prompt engineering and reward signals, making formal analysis impossible.
    Limitation 3: Empirical Convergence Only. Empirical observation across 100 experimental runs shows the multi-agent system stabilizes in 7–12 iterations (median 9 iterations). However, this empirical convergence does not imply mathematical convergence to an optimal or equilibrium solution and may be dataset-specific.

3.8. Markov Chain Attribution with Attention Mechanisms

  • Attention-Weighted Journey State
    Customer journey states are weighted by attention scores:
    $$\mathrm{State}^{\mathrm{attn}}(t) = \sum_{i=1}^{n} \alpha_i(t) \cdot S_i(t)$$
    where α i ( t ) is computed using attention mechanism:
    $$\alpha_i(t) = \frac{\exp(\mathrm{score}_i(t))}{\sum_j \exp(\mathrm{score}_j(t))}, \qquad \mathrm{score}_i(t) = q(t)^{T} k_i(t)$$
  • Modified Transition Matrix
    The transition matrix is modified to reflect attention weights:
    $$P^{\mathrm{attn}}(i \to j) = \alpha_i \cdot P^{\mathrm{original}}(i \to j)$$
    This allows channels with higher contextual relevance to carry more weight in attribution.
  • Enhanced Removal Effect Attribution
    With attention weighting, the removal effect becomes:
    $$A_i = \frac{P(\mathrm{conv}) - P(\mathrm{conv} \mid \mathrm{remove}_i, \alpha)}{\sum_{j=1}^{n} \left[ P(\mathrm{conv}) - P(\mathrm{conv} \mid \mathrm{remove}_j, \alpha) \right]}$$
    where the attention weights are dynamically adjusted when a channel is removed.
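The removal-effect computation can be illustrated on a small absorbing Markov chain. The transition matrix below is a toy example (in the full framework its rows would first be rescaled by the attention weights alpha_i); removing a channel is modeled by routing all of its traffic to the null state:

```python
import numpy as np

def conversion_probability(P, start, conv):
    """Probability of absorbing in state `conv` starting from `start`."""
    n = P.shape[0]
    absorbing = [i for i in range(n) if P[i, i] == 1.0]
    transient = [i for i in range(n) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]
    R = P[np.ix_(transient, absorbing)]
    N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
    B = N @ R                                        # absorption probabilities
    return B[transient.index(start), absorbing.index(conv)]

def removal_effects(P, start, conv, channels):
    """Normalized removal-effect attribution A_i over the given channel states."""
    base = conversion_probability(P, start, conv)
    effects = {}
    for c in channels:
        Pc = P.copy()
        Pc[c, :] = 0.0
        Pc[c, -1] = 1.0  # removed channel routes all journeys to the null state
        effects[c] = base - conversion_probability(Pc, start, conv)
    total = sum(effects.values())
    return {c: e / total for c, e in effects.items()}

# States: 0 = start, 1 = SEO, 2 = Email, 3 = conversion (absorbing), 4 = null (absorbing).
P = np.array([
    [0.0, 0.6, 0.4, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.3, 0.2],
    [0.0, 0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])
attribution = removal_effects(P, start=0, conv=3, channels=[1, 2])
```

Here the baseline conversion probability is 0.6; removing SEO drops it to 0.24 and removing Email drops it to 0.18, so Email receives slightly more normalized credit.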

3.9. Financial Performance Integration with LLM Analysis

3.9.1. LLM-Based Financial Statement Analysis

We fine-tune a financial BERT model for analyzing marketing-related financial impacts:
FinancialInsight k = f θ fin ( FinancialStatement k , MarketingData k )
The model learns to identify:
  • Revenue attributable to marketing;
  • Customer acquisition cost impacts;
  • Profit margin changes from marketing investments.

3.9.2. Earnings Quality Estimation

An LLM with financial domain knowledge estimates earnings quality:
EarningsQuality = LLM auditor ( FinancialStatements , MarketingClaims , AuditRiskFactors )
This generates explainable reasoning about potential manipulation or anomalies.

3.9.3. ROI Forecasting with Hybrid Ensemble

Combine LLM-generated insights with statistical forecasting:
$$\widehat{\mathrm{Revenue}}(t+h) = w_1 \cdot \mathrm{LLM}_{\mathrm{forecast}}(\mathrm{Context}) + w_2 \cdot \mathrm{ARIMA}(t+h) + w_3 \cdot \mathrm{NeuralNet}(t+h)$$
where w i are learned weights that adapt to forecast confidence.
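A minimal sketch of the confidence-adaptive combination. The paper does not specify the weighting scheme, so this assumes one plausible choice (weights inversely proportional to each component's validation error); the forecast values and MAPEs are hypothetical:

```python
import numpy as np

def ensemble_forecast(component_forecasts, val_errors):
    """Combine component forecasts with weights inversely proportional to validation error."""
    f = np.asarray(component_forecasts, dtype=float)
    err = np.asarray(val_errors, dtype=float)
    inv = 1.0 / err
    w = inv / inv.sum()            # normalized weights, sum to 1
    return float(w @ f), w

# Hypothetical monthly revenue forecasts (USD M) for LLM, ARIMA, and neural net,
# with hypothetical per-component validation MAPEs.
yhat, w = ensemble_forecast([36.2, 34.8, 35.5], [0.047, 0.128, 0.082])
```

Lower-error components receive proportionally higher weight, so the combined forecast leans toward the LLM component without discarding the statistical models.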

3.10. LLM-Enhanced Audit Quality Framework

We employ a specialized LLM fine-tuned on audit datasets to detect marketing expenditure anomalies:
AnomalyScore k = f θ audit ( Expenditure k , HistoricalPattern , Context )
The model learns subtle patterns of fraudulent or erroneous entries.
  • Semantic Similarity for Pattern Matching
    Use LLM embeddings to identify similar anomalies across different accounts/time periods:
    Similarity i j = cos ( LLM θ ( Record i ) , LLM θ ( Record j ) )
    Anomalies with high similarity to known fraud cases are flagged.
  • Zero-Shot Learning for Novel Fraud Patterns
    LLMs’ zero-shot capabilities enable the detection of previously unseen fraud patterns through analogical reasoning:
    IsFraud k = LLM zeroshot ( Record k , FraudExamples , BusinessRules )
  • Audit Trail Generation
    Generate explainable audit trails using LLMs:
    AuditExplanation k = LLM explainer ( Decision k , Evidence k , Style audit )
    This produces human-readable justifications for flagged items.
  • Continuous Audit Index
    AQI LLM = w 1 · Detection + w 2 · ( 1 FalsePos ) + w 3 · Explainability + w 4 · Timely
    where Explainability measures LLM-generated explanation quality through automatic evaluation metrics.
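The index is a direct weighted sum; the sketch below transcribes the formula. The weights and component scores are illustrative assumptions, not the paper's calibrated values:

```python
def audit_quality_index(detection, false_pos, explainability, timeliness,
                        w=(0.4, 0.2, 0.2, 0.2)):
    """AQI_LLM = w1*Detection + w2*(1 - FalsePos) + w3*Explainability + w4*Timely.

    All component scores are assumed to lie in [0, 1]; the weights sum to 1.
    """
    w1, w2, w3, w4 = w
    return (w1 * detection + w2 * (1.0 - false_pos)
            + w3 * explainability + w4 * timeliness)

# Hypothetical component scores for one audit cycle.
aqi = audit_quality_index(detection=0.95, false_pos=0.04,
                          explainability=0.92, timeliness=0.97)
```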

4. Integrated Framework and Algorithm

Figure 2 illustrates the overall architecture of the proposed LLM-based integrated framework, highlighting the main processing phases and their interactions. In the Figure, solid arrows denote forward-processing flows from data ingestion through intelligence and optimization to governance outputs, whereas dotted arrows denote feedback and monitoring flows, where audit and financial insights are propagated back to update segmentation, attribution, and budget allocation modules over time.
The proposed algorithmic scheme is given in Algorithm 1.
Algorithm 1 LLM-based integrated marketing–finance–audit optimization.
Input: Historical data D, budget B, horizon T
Output: Optimal allocation x * , financial forecast Y ^ , audit report R audit
  • Phase 1: LLM Customer Intelligence
    (a) For each customer $k$, compute semantic embeddings $e_k = \mathrm{LLM}_\theta(\mathrm{Interactions}_k) \in \mathbb{R}^d$ and feature vectors $x_k = [\mathrm{RFM}_k \oplus e_k] \in \mathbb{R}^D$.
    (b) Perform context-aware clustering by minimizing $\sum_{k=1}^{K} \sum_{x_j \in C_k} \| x_j - \mu_k \|_2^2 + \lambda \cdot \mathrm{LLM\_Coherence}(C_k)$, with $\mathrm{LLM\_Coherence}(C_k)$ defined via LLM-based similarity.
    (c) Train a LoRA-adapted model for CLV prediction $\mathrm{CLV}_k = f_{\theta^*}(\mathrm{History}_k, \mathrm{RFM}_k)$ with $\theta^* = \theta_0 + \Delta\Theta$.
    (d) Form hybrid CLV scores $\widetilde{\mathrm{CLV}}_k = \sum_m w_m \, \mathrm{CLV}_k^{(m)}$, with $\sum_m w_m = 1$.
  • Phase 2: Attention-Based Marketing Mix Modeling
    (a) For $n$ channels, compute self-attention $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( Q K^{T} / \sqrt{d_k} \right) V$ and derive interaction weights $\alpha_{ij}$ from the attention scores.
    (b) Define the attention-adjusted saturation response $S_i^{\mathrm{attn}}(x_i, \mathbf{x}) = S_{\max,i} \left( 1 + \sum_{j \neq i} \alpha_{ij} \frac{x_j}{x_i + \varepsilon} \right) \frac{x_i^{\beta_i}}{K_i^{\beta_i} + x_i^{\beta_i}}$ and total revenue $R(\mathbf{x}) = \sum_{i=1}^{n} S_i^{\mathrm{attn}}(x_i, \mathbf{x})$.
  • Phase 3: Multi-Agent LLM Optimization
    (a) Initialize agents $\{ \mathrm{LLM}_{\mathrm{CMO}}, \mathrm{LLM}_{\mathrm{CFO}}, \mathrm{LLM}_{\mathrm{Auditor}}, \mathrm{LLM}_{\mathrm{Opt}} \}$.
    (b) For $t = 1, 2, \ldots$: the CMO proposes $x_{\mathrm{CMO}}^{(t)}$, the CFO generates constraints $C^{(t)}$, and the optimizer produces $x^{(t)}$ subject to $\sum_i x_i^{(t)} \le B$, $x_i^{(t)} \ge 0$; the auditor evaluates risk and compliance, adjusting or penalizing infeasible allocations.
    (c) Stop when $\| x^{(t)} - x^{(t-1)} \|_1 < \varepsilon$ and set $x^* = x^{(t)}$.
  • Phase 4: Attention-Weighted Attribution
    (a) Construct attention-weighted transitions $P^{\mathrm{attn}}(i \to j) = \alpha_i P^{\mathrm{orig}}(i \to j)$ and compute the removal-effect attribution $A_i = \frac{P(\mathrm{conv}) - P(\mathrm{conv} \mid \mathrm{remove}_i, \alpha)}{\sum_j \left[ P(\mathrm{conv}) - P(\mathrm{conv} \mid \mathrm{remove}_j, \alpha) \right]}$.
  • Phase 5: Financial Analysis and Audit
    (a) Use a financial LLM to generate insights and earnings quality scores from $\mathrm{FS}_t$ and marketing data under $x^*$.
    (b) Produce hybrid forecasts $\hat{Y}_{t+h}$ via an ensemble of LLM, ARIMA, and neural models.
    (c) Compute anomaly scores and explanations for expenditures, and summarize audit outputs into an Audit Quality Index $\mathrm{AQI}_{\mathrm{LLM}}$ and report $R_{\mathrm{audit}}$.

4.1. Data-to-LLM Pipeline

The framework operates on an integrated data schema that connects marketing, financial, and audit information. At a high level, the warehouse contains the following tables:
  • customers(customer_id, RFM features, demographic attributes, CLV scores, segment identifiers).
  • transactions(transaction_id, customer_id, amount, date, channel, audit_trail_id).
  • marketing_touchpoints(customer_id, channel, timestamp, content_text, embedding vector, campaign_id).
  • financial_statements(period, structured numeric fields, text_disclosures).
  • audit_trails(audit_trail_id, transaction_id, anomaly scores, explanation text, labels).
For CLV and segmentation, we extract for each customer k the last L k entries from marketing_touchpoints, embed their content_text, and construct x k as in (1)–(2). For the multi-agent optimization, the CMO and CFO prompts are populated from aggregated views over transactions and financial_statements, summarizing per-channel spend, historical ROI, constraint levels, and strategic objectives in structured JSON blocks that are embedded directly into the LLM input.
For audit analysis, individual transactions and their context (customer segment, campaign, financial period, previous anomaly history) are serialized into JSON records and passed as few-shot exemplars to the anomaly-detection and explanation LLMs. When constructing LLM context, we retrieve top-K semantically similar documents (e.g., prior campaigns, financial notes, historical fraud cases) from the embedding index and include them as a dedicated context block in the prompt. This explicit mapping from relational schema to LLM inputs ensures that each formal entity in Section 3 corresponds to a reproducible data extraction and prompt construction procedure.

4.2. LLM-Driven Optimization via Prompt Engineering

We develop systematic prompt engineering methods to guide LLMs toward optimal budget allocation decisions. A representative strategic planning prompt structure:
[Inline figure: strategic planning prompt template.]
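Since the template itself appears only as a figure, the sketch below is a hypothetical reconstruction of a prompt of this kind, following the JSON context-block convention described in Section 4.1. All field names, channel statistics, and constraint values are illustrative assumptions:

```python
import json

def build_cmo_prompt(objectives, channel_stats, budget, constraints):
    """Assemble a strategic-planning prompt with structured JSON context blocks."""
    context = {
        "total_budget_usd_m": budget,
        "channels": channel_stats,     # per-channel spend / ROI summaries
        "constraints": constraints,    # e.g., spend floors, risk limits
    }
    return (
        "You are the CMO agent in a marketing-finance-audit optimization loop.\n"
        f"Strategic objectives: {'; '.join(objectives)}\n"
        "Context (JSON):\n" + json.dumps(context, indent=2) + "\n"
        "Return a JSON object with fields 'allocation' (per-channel USD M) and "
        "'rationale' (one sentence per channel)."
    )

prompt = build_cmo_prompt(
    objectives=["maximize blended ROI", "respect liquidity limits"],
    channel_stats={"SEO": {"spend": 2.1, "roi": 5.2},
                   "Email": {"spend": 1.4, "roi": 6.0}},
    budget=15.0,
    constraints={"max_single_channel_share": 0.35},
)
```

Requesting a typed JSON response mirrors the schema-constrained agent messages described in Section 3.2, which keeps the optimization loop machine-parseable.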

4.3. Prompt Optimization via Reinforcement Learning

Define reward function measuring allocation quality:
R ( Prompt ) = α · ExpectedROI + β · RiskAdjustment + γ · ComplianceScore
Optimize prompt iteratively:
Prompt ( t + 1 ) = LLM optimizer ( Prompt ( t ) , R ( Prompt ( t ) ) )

4.4. Multi-Step Prompt Refinement

Structured prompt refinement improves LLM decision quality:
Step 1: Analysis = LLM(AnalysisPrompt, Data)
Step 2: Strategy = LLM(StrategyPrompt, Analysis)
Step 3: Validation = LLM(ValidationPrompt, Strategy, Constraints)
Step 4: Allocation = LLM(OptimizationPrompt, ValidatedStrategy)
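The four-step chain is a simple composition of calls. In the sketch below, the `llm` helper is an illustrative stub standing in for an LLM API call (it just tags its input so the chaining is visible); prompt names are placeholders:

```python
def llm(prompt, payload):
    """Stub standing in for an LLM API call; echoes a tagged transformation."""
    return f"{prompt}({payload})"

def refine_allocation(data, constraints):
    """Four-step prompt refinement: analysis -> strategy -> validation -> allocation."""
    analysis = llm("Analysis", data)
    strategy = llm("Strategy", analysis)
    validated = llm("Validation", f"{strategy}|{constraints}")
    return llm("Optimization", validated)

out = refine_allocation("Q3 channel data", "budget<=15M")
```

Because each step consumes the previous step's output, errors surface early: a strategy that fails validation never reaches the allocation step.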

4.5. In-Context Learning for Few-Shot Optimization

Provide examples of high-quality budget allocations in the prompt:
$$\mathrm{Prompt}_{\text{few-shot}} = \mathrm{Prompt}_{\mathrm{base}} + \sum_{i=1}^{k} (\mathrm{Example}_i, \mathrm{GoodAllocation}_i)$$
This guides the LLM toward better solutions without fine-tuning.

5. Experimental Results

The empirical evaluation directly instantiates the formal components introduced in Section 3. CLV prediction accuracy metrics in Section 5.1 evaluate the performance of the model f ( · ) defined in (5). Marketing ROI improvements in Section 5.2 and Section 5.3 measure the behavior of the attention-based revenue response function R ( x ) introduced in Equations (9)–(12). Forecasting errors in Section 5.4 correspond to the hybrid ensemble in (26), while audit metrics and the Audit Quality Index in Section 5.5 and Section 5.6 operationalize the anomaly score in Equations (27)–(29) and the composite index AQI LLM in (31). The multi-agent optimization results in Section 5.3 empirically characterize the convergence properties of the heuristic Stackelberg-inspired process described in earlier sections.
Dataset Characteristics: We perform empirical validation on a complete e-commerce dataset spanning five years, from January 2020 to December 2024. It comprises 2.8 million distinct customers, 47.2 million transactions, USD 156 million in total marketing expenditure, and USD 2.14 billion in total revenue. The marketing campaigns covered eight major digital channels (SEO, SEM, Social Media, Email, Display, Affiliate, Video, Native advertising) and three newer channels (AI-powered personalization, influencer partnerships, podcast sponsorships). The sample includes at least 35,000 transaction records with detailed audit trails per day, enabling extensive analysis of financial and operational activity.
LLM Specifications: We use GPT-3.5-Turbo fine-tuned with LoRA (Low-Rank Adaptation) at rank parameter r = 16 for computational efficiency. For semantic embeddings and domain-specific analysis, we use BERT-base models fine-tuned per task. The training set consists of 50,000 labeled customer records with known customer lifetime values; supervised fine-tuning uses an 80/20 train/validation split over three training epochs.
The complete set of proposed evaluation figures, including the novel Integrated LLM Impact Radial Convergence Map (ILIRCM), is summarized in Table 2.
For video campaigns, we convert raw video assets into LLM-ingestible textual representations before downstream processing. Specifically, for each video we (i) sample frames at a fixed frame rate and apply OCR to extract on-screen text, (ii) run automatic speech recognition to obtain transcripts of spoken content, and (iii) concatenate the video title, description, OCR text, and ASR transcript into a single text document. This document is then tokenized and embedded using the same BERT-based encoder as other interactions, and the resulting embedding is stored in the marketing_touchpoints table with channel = “Video”. Consequently, video touchpoints are treated uniformly with other channels in CLV prediction, attribution, and optimization.
The effectiveness of our LLM-augmented modeling framework is visually summarized in the following figures.

5.1. Customer Lifetime Value Results

The prediction of customer lifetime value is one of the most essential foundations of integrated marketing optimization. We compare several CLV prediction methods and report the results in Table 3. The table covers four methods: the classical BG/NBD probabilistic model as a baseline, XGBoost ensemble regression as a traditional machine-learning benchmark, a fine-tuned BERT model with LoRA parameter-efficient adaptation, and a hybrid ensemble that combines the predictions of all three methods with learned weighting.
Interpretation: The fine-tuned LLM achieves MAE of 35.67 (95% CI: [34.22, 37.15]), a significant improvement over XGBoost’s 48.92 (95% CI: [47.88, 50.01]). The paired t-test ( t = 18.4 , p < 0.001 ) indicates this 27% difference is statistically significant and not due to sampling variation. The hybrid ensemble further improves to 91.3% accuracy (95% CI: [89.8, 92.7%]), demonstrating that combining probabilistic, gradient-boosting, and transformer-based approaches yields robust complementary benefits.
These outcomes support several significant conclusions. First, the fine-tuned LLM is substantially more accurate than the traditional methods, reducing Mean Absolute Error by 47.1% relative to XGBoost and 53.3% relative to the BG/NBD baseline. Second, semantic embeddings preserve patterns of customer lifetime trends that traditional feature engineering cannot capture, particularly the behavioral narratives and preference signals embedded in customer interaction histories. Third, the hybrid ensemble outperforms any single method, reaching 91.3% accuracy through ensemble voting and learned-weight optimization. Fourth, transfer learning from pre-trained BERT models improves training efficiency by 23.8% in dynamic real-world environments. These enhancements directly affect the financial outcomes: better CLV prediction leads to better customer segmentation, targeted investment in high-value customer acquisition, and optimized customer retention strategies.
Figure 3 shows that the LLM-enhanced clustering yields well-separated customer groups, as evident in the 2-D embedding space. This demonstrates that our method captures meaningful segment structure beyond traditional approaches.
Moving on, Figure 4 shows that the LLM Hybrid Ensemble achieves the highest CLV prediction accuracy and lowest error rates compared to baselines.

5.2. Attention-Based Attribution Results

Marketing attribution seeks to understand which channels drive customer conversions so that budget can be allocated efficiently. We compare our attention-weighted attribution mechanism with conventional approaches in Table 4. The table reports attribution weights for eight marketing channels under five methodological frameworks: last-click attribution (credit solely to the final touchpoint), linear attribution (equal credit across the customer journey), Markov chain attribution (stochastic state transitions), transformer attention weighting, and the attention-weighted LLM hybrid.
The attention-weighted attribution mechanism reveals several significant channel-interaction patterns. First, it credits SEO and Social Media with substantially more conversion influence than traditional last-click attribution does (21.2% vs. 12.3% for SEO; 24.1% vs. 22.4% for Social Media), reflecting the awareness-building role these channels play in customer journeys even when they are not the final touch. Second, the attention mechanism detects cross-channel synergies: Email receives 19.6% attribution compared with 15.7% under last-click, as a crucial element in converting customers already exposed to awareness channels. Third, lower-performing channels (Video, Native advertising, Display) receive considerably less credit under the attention approach, aligning budget-allocation incentives with measured channel effectiveness. Fourth, the LLM augmentation adds nuance by uncovering non-statistical factors such as seasonal channel performance and sensitivity to market conditions.
Figure 5 illustrates that certain channels exert disproportionate influence in the transformer attention matrix, highlighting non-obvious cross-channel marketing effects.

5.3. Game-Theoretic Optimization Results

Strategic budget allocation across marketing channels requires balancing channel effectiveness, synergies, and financial constraints. We present optimal allocations derived from multiple methodological approaches in Table 5, including equal budget split (baseline), game-theoretic Nash equilibrium, Stackelberg game-theoretic solution, and our multi-agent LLM system.
Interpretation: The LLM multi-agent system achieves an ROI of 6.78 (95% CI: [6.49, 7.09]), compared to the equal budget split baseline of 4.20 (95% CI: [4.01, 4.39]). The paired t-test ( t = 20.1 , p < 0.001 ) confirms the 61.4% improvement is statistically significant. Notably, the confidence interval lower bound (6.49) exceeds the baseline upper bound (4.39), indicating robust improvement across customer cohort variation.
The game-theoretic optimization results confirm the importance of integrated strategic planning. First, the multi-agent LLM system achieves an ROI of 6.78 versus 4.2 under an equal split, a 61.4% increase in marketing ROI. This results from three factors: (1) redistribution of budget from low-effectiveness to high-effectiveness channels, (2) discovery of channel synergies that enable multiplier effects, and (3) optimization under financial constraints while accounting for strategic interactions. Second, the method identifies the Email + Social Media combination as synergistic, with a 1.34× multiplier arising from the complementary roles of the two channels in awareness building and conversion. Third, iterative interaction among the LLM agents approximates a Nash-like equilibrium in 7–12 steps, providing a computationally efficient path to strategic optimization without an explicit closed-form solution. Fourth, the allocation strategy respects organizational constraints (USD 15M total budget) while maximizing shareholder value in terms of an integrated ROI measure.
As shown in Figure 6, channel response curves display realistic saturation and attention-lift, validating our model’s ability to capture diminishing returns.
Examining Figure 7, we observe that LLM multi-agent optimization rapidly converges to stable channel budget splits.

5.4. Financial Forecasting Performance

Accurate revenue forecasting enables financial planning, capital allocation, and risk management. We evaluate multiple forecasting methodologies in Table 6, including baseline statistical methods (ARIMA, Prophet, LSTM), fine-tuned LLM approaches, and ensemble combinations.
Interpretation: The LLM multi-method ensemble achieves 4.7% MAPE (95% CI: [3.8, 5.9%]), significantly lower than ARIMA’s 12.8% (95% CI: [11.2, 14.6%]). The paired t-test ( t = 13.8 , p < 0.001 ) confirms the 63.3% MAPE reduction is statistically significant. The narrow confidence interval width (1.1 percentage points for LLM ensemble vs. 3.4 for ARIMA) indicates the LLM approach provides more consistent performance.
The financial forecasting results reveal several significant patterns. First, LLM enhancement matters: the multi-method ensemble reduces MAPE by 63.3% relative to ARIMA (4.7% vs. 12.8%), corresponding to roughly USD 3.1 M of average error on a USD 2.14 B revenue base. Second, the benefit is especially pronounced at longer horizons: twelve-month-ahead accuracy is 93.8% for LLM ensembles versus 78.4% for ARIMA, a 15.4 percentage-point improvement. This suggests LLMs can represent regime shifts, competitive reactions, and market developments that historical trends alone cannot capture. Third, the bias of the LLM-based systems is much lower (0.04%) than that of traditional methods, minimizing systematic over- or under-forecasting. Fourth, ensemble methods that combine LLM reasoning with statistical rigor outperform either approach alone, drawing on the respective strengths of domain reasoning and statistical discipline. As shown in Figure 8, LLM-based ensembles achieve substantially lower forecasting error than classical methods.
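The fourth point, that ensembles combining LLM and statistical forecasts outperform either alone, can be sketched with a simple inverse-error weighting scheme. The revenue series and the two model outputs below are invented for illustration, and the paper does not specify its ensemble weighting rule, so inverse-MAPE weighting is an assumption standing in for whatever combination the study uses.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100

def inverse_error_weights(errors):
    """Weight each model by the inverse of its validation MAPE."""
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    return [w / s for w in inv]

def ensemble(forecasts, weights):
    """Weighted average of the component forecasts, point by point."""
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(len(forecasts[0]))]

# Hypothetical monthly revenue ($M); the study's data are proprietary.
actual = [100, 104, 110, 108, 115, 121]
arima  = [ 92, 112, 100, 118, 104, 132]   # larger, noisier errors
llm    = [ 98, 106, 108, 110, 113, 124]   # closer to actual

errors   = [mape(actual, arima), mape(actual, llm)]
weights  = inverse_error_weights(errors)
combined = ensemble([arima, llm], weights)
print(round(mape(actual, combined), 2), [round(w, 2) for w in weights])
```

The weights are computed on a validation window and then applied to new forecasts; the better-calibrated model naturally dominates the combination.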

5.5. Audit Quality and Anomaly Detection

Anomaly detection for fraud prevention and financial control verification is essential for audit quality. We evaluate multiple anomaly detection methodologies in Table 7, comparing traditional rule-based approaches, unsupervised algorithms, supervised classifiers, and LLM-enhanced methods.
Interpretation: The LLM zero-shot learning approach achieves AUC-ROC of 0.972 (95% CI: [0.965, 0.979]), compared to the rule-based baseline of 0.821 (95% CI: [0.811, 0.831]). The paired t-test ( t = 19.2 , p < 0.001 ) indicates this 0.151 improvement is statistically significant. The narrow confidence intervals demonstrate consistent performance across cross-validation folds, validating the robustness of the LLM-based approach.
The audit quality findings demonstrate the effectiveness of LLM-enhanced continuous auditing. First, the LLM + zero-shot learning approach identifies 287 anomalies total, compared to 156 detected by traditional rule-based methods, representing 83.9% improvement in detection coverage. Of these, 18 high-confidence fraud cases are identified with 93.4% precision, enabling focused investigation resources. Second, the True Positive Rate reaches 92.4%, indicating the system successfully identifies 92 of every 100 actual fraud cases. Third, the False Positive Rate of only 2.1% means investigators spend minimal effort on legitimate transactions misclassified as anomalies. Fourth, zero-shot learning enables detection of 42 novel fraud patterns not present in training data through analogical reasoning, indicating the system generalizes beyond explicitly-trained patterns. Fifth, LLM-generated explanations reduce average investigation time per anomaly from 4.2 h to 1.1 h, freeing audit resources for higher-value activities.
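The reported rates can be checked against a single hypothetical confusion matrix. The counts below are invented so that precision ≈ 93.4%, TPR ≈ 92.4%, and FPR ≈ 2.1% all hold simultaneously; they are back-calculated approximations, not the study's raw tallies.

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix rates used in the audit evaluation."""
    return {
        "precision": tp / (tp + fp),
        "tpr": tp / (tp + fn),   # recall / true positive rate
        "fpr": fp / (fp + tn),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Illustrative counts chosen to approximate the reported rates;
# tp + fp = 287 matches the total number of flagged anomalies.
m = detection_metrics(tp=268, fp=19, fn=22, tn=880)
print({k: round(v, 3) for k, v in m.items()})
```

Mutual consistency of this kind (all three rates recoverable from one matrix) is a quick sanity check worth running on any reported detection results.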
In terms of anomaly detection, Figure 9 confirms the LLM system’s higher AUC performance over all traditional baselines.

5.6. Audit Quality Index (AQI) and Composite Metrics

Audit quality encompasses dimensions beyond detection accuracy. Table 8 presents the full Audit Quality Index, combining detection performance, false-positive management, explanation quality, and timeliness.
In addition to audit-specific measures, the framework enhances financial reporting quality by reducing the risk of earnings manipulation. Financial quality indicators improve markedly: discretionary accruals (a manipulation indicator) decline from 0.087 to 0.024 (a 72.4% improvement), earnings quality scores rise from 0.734 to 0.867 (an 18.1% improvement), and accruals quality (Dechow-Dichev measure) improves from 0.089 to 0.031 (a 65.2% improvement). These measures show that integrated marketing-finance-audit systems reduce the likelihood of earnings manipulation and enhance earnings quality.
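A composite index of the kind Table 8 describes is typically a weighted average of dimension scores. The sketch below uses equal weights and illustrative sub-scores chosen so the composite lands at the reported AQI of 0.951; the actual dimension values and weighting are those of Table 8, not these assumptions.

```python
def audit_quality_index(scores, weights):
    """Weighted composite of audit quality dimensions (weights sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

# Equal weighting and sub-scores are illustrative assumptions.
weights = {"detection": 0.25, "false_positive_mgmt": 0.25,
           "explanation_quality": 0.25, "timeliness": 0.25}
scores = {"detection": 0.972, "false_positive_mgmt": 0.979,
          "explanation_quality": 0.918, "timeliness": 0.935}

aqi = audit_quality_index(scores, weights)
print(round(aqi, 3))  # -> 0.951
```

Re-weighting the dimensions (e.g., emphasizing explanation quality for regulated deployments) only requires changing the `weights` dictionary.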
Figure 10 demonstrates that LLM-enhanced audits score higher across all AQI dimensions, especially explainability and detection.

5.7. Integrated System Performance Summary

To assess the comprehensive value of the integrated framework, Table 9 presents key performance indicators across all organizational functions.
The combined results show strong synergies among marketing optimization, financial forecasting, and audit verification. The 61.4% increase in marketing ROI reflects gains in both targeting (improved CLV prediction) and strategic allocation (game-theoretic optimization under financial constraints). Financial forecasting error falls by 63.3%, enabling better capital allocation and business planning, and the 75% decrease in audit false positives substantially improves audit efficiency. These advances compound to produce shareholder value that no single function could deliver in isolation. In summary, marketing ROI improves by 61.4% (4.2 to 6.78), revenue forecasting accuracy rises from 78.4% to 93.8%, financial forecasting MAPE falls from 12.8% to 4.7%, the fraud detection F1-score improves by 26.4% to 0.938, CLV prediction is enhanced (68.2% to 91.3%), and the Audit Quality Index improves by 25.1% to 0.951. Decision processing time is shortened by 93.8% (3.2 h to 12 min), freeing audit and analytics resources for higher-value work, and the 75.0% reduction in false positives (8.4% to 2.1%) substantially reduces the investigation burden.
Finally, Figure 11 presents a holistic view: our Integrated LLM Impact Radial Map highlights net improvements across all enterprise subsystems.

5.8. Computational Efficiency

The computational efficiency of the framework is a prerequisite for practical deployment. LoRA fine-tuning trains 99.2% fewer parameters than full-model fine-tuning (2.8 million vs. 355 million) and makes efficient use of GPU hours. Inference throughput reaches 847 predictions per second, supporting real-time decision making in operational systems. The GPT-3.5-Turbo API costs approximately USD 0.00003 per inference, and total monthly LLM API inference costs are about USD 1240, versus roughly USD 42,000 for full-model inference. This cost profile allows the framework to scale to large organizations without expensive infrastructure requirements.
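The headline efficiency figures follow from simple arithmetic, sketched below. The rank-8 adapter size on a 768×768 projection is an illustrative assumption (the LoRA rank is not stated here), and the implied call volume is back-calculated from the reported monthly cost.

```python
def lora_param_count(d_in, d_out, rank):
    """Trainable parameters for one LoRA adapter pair A (d_in x r) and B (r x d_out)."""
    return rank * (d_in + d_out)

# Reported totals: 355 M full fine-tuning vs. 2.8 M LoRA trainable parameters.
full_params, lora_params = 355e6, 2.8e6
reduction = 1 - lora_params / full_params
print(f"{reduction:.1%} fewer trainable parameters")  # -> 99.2%

# A rank-8 adapter on one 768x768 BERT-base projection (rank is an assumption):
print(lora_param_count(768, 768, 8))  # 12288 parameters per adapter pair

# Implied monthly inference volume from the reported figures
# (USD 1240 / month at ~USD 0.00003 per call).
implied_calls = 1240 / 0.00003
print(f"~{implied_calls / 1e6:.0f} M inferences per month")
```

The per-layer count explains why LoRA totals stay in the low millions even across all transformer layers: each adapter adds only `rank × (d_in + d_out)` parameters rather than `d_in × d_out`.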

5.9. Implementation Maturity of Framework Components

A critical concern in integrated AI systems is distinguishing between components that are production-ready, partially implemented, and conceptual/prototype-level. This section provides transparent assessment.

5.9.1. Implementation Maturity Matrix

To assess the deployability of the proposed framework, we map each major component to an implementation maturity matrix spanning four levels: L1: Conceptual Prototype, L2: Experimental Implementation, L3: Pilot Deployment, and L4: Production-Grade System. LLM-based customer embeddings, CLV prediction, and attention-based attribution correspond to L2–L3, as similar techniques have been experimentally validated and piloted in marketing contexts [15,26,29]. Financial LLMs for statement analysis and forecasting ensembles are at L2, reflecting strong empirical results but limited large-scale industrial deployment [8,24,25]. Multi-agent LLM optimization and LLM-augmented continuous audit currently reside at L1–L2, given that existing work mainly demonstrates feasibility studies and early frameworks rather than mature products [4,21,28]. Table 10 summarizes these levels and highlights where further engineering and governance work is needed to reach robust L3–L4 deployment in regulated environments [13,22,23].

5.9.2. Detailed Component Assessment

Component 1: LLM-Enhanced CLV Prediction (Fully Implemented, Production Ready)
  • Status: Fully implemented on the complete e-commerce dataset (2.8 million customers, 47.2 million transactions, 5-year period). Fine-tuned BERT-base with an 80/20 train-validation split. Performance: MAE 31.45–35.67, accuracy 89.2–91.3%, with 95% CIs reported in Table 3. Transfer learning yields a 23.8% computational efficiency gain.
  • Production Readiness: Ready for deployment. Monthly retraining recommended.
  • Key Limitation: Depends on GPT-3.5-Turbo API; open-source alternatives may reduce accuracy by 5–8%.
Component 2: Attention-Based Marketing Attribution (Fully Implemented, Production Ready)
  • Status: Fully implemented using transformer attention on 47.2 Million transactions. Validation: inter-rater agreement Cohen’s κ = 0.87 against manual expert assessment.
  • Production Readiness: Ready for deployment. Provides interpretable attention weights superior to black-box methods.
  • Key Limitation: Provides statistical saliency, not causal explanation. Validate against A/B tests before major budget shifts.
Component 3: LLM-Based Game-Theoretic Optimization (Partially Implemented, Requires Tuning)
  • Status: Partially implemented. Multi-agent CMO-CFO hierarchical system functional and converges in 7–12 iterations (median 9). Convergence is empirical, not mathematically guaranteed.
  • Production Readiness: Requires organizational tuning. Not a formal optimization solver.
  • Key Limitations: (1) No equilibrium guarantee. (2) Implicit payoff functions. (3) May not scale beyond 10 agents. (4) Requires careful prompt engineering.
Component 4: LLM-Enhanced Financial Forecasting (Fully Implemented, Production Ready)
  • Status: Fully implemented using an ensemble of fine-tuned BERT + classical methods. Tested on 60 monthly hold-out periods. Performance: 93.8% accuracy, 4.7% MAPE (95% CI: [3.8, 5.9%]).
  • Production Readiness: Ready for deployment. Monthly retraining recommended.
  • Key Limitation: Ensemble depends on multiple models; changes affect accuracy by ±1–2% MAPE.
Component 5: LLM-Enhanced Audit Quality Assessment (Fully Implemented, Production Ready)
  • Status: Fully implemented using fine-tuned BERT anomaly detector + LLM zero-shot learning. Tested on 35,000 transaction records. Performance: AUC-ROC 0.972 (95% CI: [0.965, 0.979]).
  • Production Readiness: Ready for continuous audit deployment. Daily/weekly batch processing recommended.
  • Key Limitation: LLM explanations require auditor review. Hallucination rate: 1–3%.

5.9.3. Integrated System Claims: Honest Assessment

  • CLV prediction: +33.9% accuracy (91.3% vs. 76.4%)—fully implemented, validated on full dataset
  • Attribution modeling: Inter-rater agreement κ = 0.87 —fully implemented, expert-validated
  • Game-theoretic optimization: +61.4% ROI (6.78 vs. 4.20) via heuristic convergence—partially implemented
  • Financial forecasting: 63.3% MAPE reduction (4.7% vs. 12.8%)—fully implemented, validated on 60-month test
  • Fraud detection: +18.8% AUC-ROC (0.972 vs. 0.821)—fully implemented, validated on labeled data
This integrated system does not claim simultaneous state-of-the-art in all domains. Components 1, 2, 4, 5 are production-ready. Component 3 is a promising proof-of-concept requiring organizational tuning.

6. Discussion

Discussion of Hypotheses Validity

The work makes several interlinked contributions to marketing theory, financial data analytics, and audit methodology. First, it shows that transformer attention mechanisms generalize to marketing attribution, so cross-channel interactions can be learned in an interpretable manner without ad hoc functional assumptions. The method estimates interaction weights directly from data, combining the flexibility and interpretability that classical attribution models lack. Second, it integrates game theory with LLM-generated strategies, demonstrating that multi-agent LLM systems can approximate equilibria under incomplete information and without closed-form solutions, making them practical in complex organizational contexts.
Third, the study combines customer lifetime value forecasting, marketing attribution, financial forecasting, and audit quality in a single probabilistic model, highlighting the interdependent and cumulative character of their influence: better CLV forecasting improves attribution accuracy, which yields better financial predictions and informs audit risk measures. Lastly, LLM-enhanced audit functions are shown to detect anomalies more accurately and reduce false positives through in-context and zero-shot learning, indicating a viable path toward automated auditing that preserves expert judgement.
The quantified improvements in key performance indicators, a 61.4% increase in marketing ROI, a 63.3% decrease in financial forecasting error, and a 75.0% decrease in false-positive audit flags, demonstrate significant practical value. Nevertheless, these aggregate gains conceal significant implementation challenges. Effective adoption in marketing requires shared attribution and CLV measurement models, a shift from channel-specific to integrated budgeting and optimization, and incentive structures that reward cross-channel collaboration over channel-specific metrics. Implementation is anticipated to take 6–12 months, covering data integration, model training, and change management. Training marketing personnel to interpret and act on algorithmic recommendations is key to success.
The observed 61.4% improvement in marketing ROI is consistent with the attention-weighted response function R ( x ) , which captures cross-channel synergies that are absent from purely additive baselines. Similarly, the 63.3% reduction in MAPE reflects the benefit of combining LLM-based contextual reasoning with statistical forecasts in the ensemble structure of (26). Finally, the empirical convergence of the multi-agent system in 7–12 iterations complements the heuristic analysis in Section 3.5, illustrating that the proposed hierarchical procedure is practically effective despite the absence of formal equilibrium guarantees.
For finance teams, marketing optimization requires financial models that account for revenue attribution, marketing-mix elasticities, and customer cohort dynamics. Forecasting systems must incorporate marketing strategy changes, competitive intelligence, and external economic factors, which usually means forming cross-functional forecasting teams with shared goals and information access; finance must also develop new capabilities in machine learning model validation and governance. For audit teams, continuous auditing demands replacing retrospective transaction auditing with real-time auditing, introducing procedures to investigate anomalies flagged by LLM algorithms, and building skills to evaluate algorithmic audit procedures. Auditors must understand how LLM explanations are generated and judge whether algorithmic recommendations warrant investigation. Regulatory bodies increasingly demand audit documentation of algorithm validation and performance monitoring.
Fundamentally, implementation depends on executive alignment around integrated goals. In conventional organizations, the incentive structures of marketing, finance, and audit may conflict: CMOs are measured by revenue and market share, CFOs by earnings quality and forecast accuracy, and auditors by compliance and risk control. Designing incentive systems that reward integrated optimization, for example compensating teams on customer profitability net of marketing expenses and on audit-adjusted financial quality, is key to long-term adoption and may require redesigning board governance.
However, several significant limitations should be considered:
  • LLM Hallucination and Reliability. Large language models can produce plausible yet false suggestions with high confidence in low-data regimes or when extrapolating outside the training distribution. Business-critical applications therefore require validation schemes: ensembles that combine LLM predictions with classical models add robustness, confidence thresholds that route low-confidence predictions to human verification prevent cascading errors, and periodic audits of LLM performance against ground truth detect degradation. Organizations need automated quality-assurance processes that flag unreliable recommendations before they reach decision-makers.
  • Data Privacy and External Data Transmission. LLM API services (e.g., OpenAI, Anthropic) are typically hosted in the cloud beyond organizational boundaries. Sending confidential business information to third parties poses a privacy risk and may violate GDPR, CCPA, or data residency requirements. Mitigations include deploying private LLMs built on open-source models (Llama-2, Mistral) on corporate infrastructure, anonymizing data before LLM processing, establishing data processing agreements with third-party LLM providers, and conducting periodic privacy and security audits.
  • Model Interpretability and Opacity. Attention mechanisms can be explained to some extent, but transformer models are still opaque to a certain degree. High-stakes decisions like fraud flagging or significant budget reallocation cannot be explained only by attention-weight visualization. This demands adding more layers of explainability: LIME-type local explanations of how the inputs change the outputs; counterfactual explanations that show what it would require to change to arrive at different conclusions; feature importance analysis, which breaks down predictions into smaller parts; human-in-the-loop review of high-stakes decisions.
  • Computational Complexity and Latency. Multi-agent LLM systems that iterate toward equilibrium require multiple forward passes through language models, introducing latency relative to conventional optimization algorithms. This can be prohibitive for real-time applications that need sub-second response times. Mitigations include caching frequently used predictions, approximate convergence tests that terminate iteration early, and system architectures that run agent computations in parallel. Hybrid methods that pair fast heuristics with selective LLM refinement can balance speed and accuracy.
  • External Shocks and Black Swan Events. The model relies on relationships learned from historical data. Unprecedented events such as pandemics, market crashes, and regulatory changes can break underlying assumptions and invalidate learned relationships. Mitigations include scenario analysis, stress testing models against historical crises, maintaining human oversight with the authority to override algorithmic suggestions when external conditions change fundamentally, and periodic retraining on new data to adapt to regime shifts.
  • Fairness and Algorithmic Bias. LLMs trained on internet data can embed societal biases into business processes and propagate them. A customer segmentation model could discriminate against particular demographic groups in marketing targeting; financial forecasts may be less accurate for underrepresented customer segments; audit processes may flag transactions from particular suppliers or regions at disproportionate rates. Addressing this requires bias audits by customer segment and geographic area, fairness constraints in model training, and transparent disclosure of algorithmic limitations to stakeholders and regulators.
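Among the mitigations listed above, the latency remedies (response caching and an approximate convergence test) lend themselves to a short sketch. The damped numerical stand-in for the LLM call, the agent targets, and the tolerance are all illustrative assumptions; a real deployment would replace `llm_propose` with an actual model call.

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def llm_propose(agent, state):
    """Stand-in for an expensive LLM forward pass; lru_cache means an
    identical (agent, state) prompt never hits the model twice."""
    # Placeholder damped best-response toward an agent-specific target ($M).
    target = {"CMO": 9.0, "CFO": 6.0}[agent]
    return 0.5 * (state + target)

def negotiate(tol=1e-3, max_iter=50):
    """Alternate CMO/CFO proposals; stop once successive joint proposals
    differ by less than `tol` (approximate convergence test)."""
    x = 2.0  # initial joint proposal for the contested budget line ($M)
    for i in range(1, max_iter + 1):
        nxt = 0.5 * (llm_propose("CMO", x) + llm_propose("CFO", x))
        if abs(nxt - x) < tol:
            return nxt, i  # early exit saves the remaining forward passes
        x = nxt
    return x, max_iter

value, iters = negotiate()
print(round(value, 2), iters)
```

The early-exit tolerance trades a small amount of precision for a bounded number of model calls, which is the essence of the latency mitigation described in the bullet above.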
This work extends beyond prior marketing analytics research, which typically addresses channel optimization or customer segmentation in isolation, by explicitly linking attribution decisions to financial reporting quality and audit risk. Unlike existing audit automation approaches that rely on supervised anomaly detection with limited business context, this framework incorporates LLM-driven contextual reasoning to enable zero-shot detection of previously unseen fraud patterns and to align analytical outputs with organizational financial controls.
In contrast to game-theoretic marketing models that assume complete information and closed-form solutions, this study demonstrates that LLM agents can approximate equilibria under incomplete information. Moreover, while prior CLV research focuses primarily on predictive accuracy, this work connects CLV estimation to financial forecasting and planning. Governance, privacy, and fairness are embedded directly into the system design rather than treated as isolated concerns, advancing responsible AI deployment in integrated business systems. Taken together, the improvements in marketing ROI, forecasting accuracy, and audit quality provide empirical support for hypotheses H1–H4 and indicate that the proposed integrated framework closes the three theoretical gaps outlined in Section 1.1.

7. Conclusions and Future Work

In this study, we systematically integrate marketing, financial, and audit decision-making mechanisms through LLMs in a manner designed to sustain trust across functions. By building a system that combines large language models, game-theoretic optimization, transformer-based attribution mechanisms, and continuous audit procedures, we show that significant improvements are achievable in metrics that are traditionally managed independently.
The theoretical contributions, formalization of LLM-enhanced game theory, attention-based attribution with financial linkages, and a synthesized probabilistic model, extend the foundations of marketing science, financial analytics, and audit research methodology. The algorithmic contributions translate into practical, deployable tools: prompt engineering for model optimization, domain-specific prediction, and LLM-augmented explainability. The empirical findings, a 61.4% improvement in marketing ROI, a 63.3% reduction in financial forecasting error, and a 75.0% reduction in audit false positives, show that the proposed methods deliver quantifiable value. Nevertheless, successful implementation takes more than technological prowess: it requires organizational alignment, policy structures, and careful management of residual risks. The implementation stage demands change management that facilitates cross-functional cooperation, incentive restructuring based on integrated value optimization, and continuous monitoring of algorithm performance. Companies should balance automation benefits against residual risk through governance systems that emphasize transparency, accountability, and human oversight.
The study is subject to several limitations. First, the empirical evaluation relies on a single large-scale e-commerce dataset, which may limit the generalizability of the findings to other industries and regulatory contexts [1,8]. Second, while we implement and benchmark all major components, some elements of the multi-agent optimization and continuous audit pipeline remain at an early experimental maturity level, without full-scale production deployment [13,21]. Third, we do not explicitly model legal, ethical, and fairness constraints beyond standard governance considerations, leaving formal treatments of responsible AI and regulatory compliance to future work [22,43,44].
Several directions merit future research. First, private deployment of open-source LLMs would address the privacy concerns that restrict adoption in regulated sectors. Second, real-time adaptation mechanisms that continuously learn from new data and evolving market conditions would strengthen forecast accuracy and attribution resilience. Third, multi-domain validation in multi-regional, multi-product settings would test generalization. Fourth, integrating causal inference techniques would enable identification of genuine causal effects rather than mere correlations. Lastly, explainable AI techniques will be necessary to ensure regulatory compliance and stakeholder confidence and to sustain adoption.
When applied correctly, combined with domain knowledge, rigorous quantitative methods, and business context, large language models present significant opportunities for value creation in the contemporary digital economy. This study shows that this potential can be realized through the deliberate combination of technical innovation, thoughtful organizational design, and governance structures. The way forward lies in sustained collaboration among technologists, domain experts, and organizational leaders to ensure that AI systems fulfill organizational goals without sacrificing ethical principles or stakeholder trust.

Author Contributions

Conceptualization, A.K.; Methodology, A.K. and A.T.; Software, A.K.; Validation, L.T.; Formal analysis, A.K. and A.T.; Investigation, L.T., A.K. and C.K.; Resources, L.T. and A.K.; Data curation, L.T. and A.K.; Writing—original draft, A.K. and C.K.; Writing—review and editing, A.T. and C.K.; Visualization, C.K.; Supervision, A.K.; Project administration, A.K.; Funding acquisition, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LLM	Large Language Model
CLV	Customer Lifetime Value
MMM	Marketing Mix Modeling
ROI	Return on Investment
ROAS	Return on Ad Spend
MAPE	Mean Absolute Percentage Error
AQI	Audit Quality Index
RAG	Retrieval-Augmented Generation
LoRA	Low-Rank Adaptation
QLoRA	Quantized Low-Rank Adaptation
BERT	Bidirectional Encoder Representations from Transformers
RFM	Recency, Frequency, Monetary
GDPR	General Data Protection Regulation
CCPA	California Consumer Privacy Act
API	Application Programming Interface
XAI	Explainable Artificial Intelligence
DPA	Data Processing Agreement
MAE	Mean Absolute Error
RMSE	Root Mean Squared Error
LSTM	Long Short-Term Memory
CMO	Chief Marketing Officer
CFO	Chief Financial Officer

References

  1. Rodriguez, M.; Lopez, S. Big Data Analytics in Digital Marketing: A Comprehensive Framework. Digit. Mark. Rev. 2025, 12, 234–256. [Google Scholar]
  2. Karras, A.; Theodorakopoulos, L.; Karras, C.; Krimpas, G.A.; Giannaros, A.; Bakalis, C.-P. LLM-driven big data management across digital governance, marketing, and accounting: A Spark-orchestrated framework. Algorithms 2025, 18, 791. [Google Scholar] [CrossRef]
  3. Anica-Popa, I.-F.; Vrîncianu, M.; Anica-Popa, L.-E.; Cişmaşu, I.-D.; Tudor, C.-G. Framework for integrating generative AI in developing competencies for accounting and audit professionals. Electronics 2024, 13, 2621. [Google Scholar] [CrossRef]
  4. Lu, Y.; Hao, J.; Tang, X. Dual-model synergy for audit opinion prediction: A collaborative LLM agent framework approach. Int. Rev. Econ. Financ. 2025, 104, 104642. [Google Scholar] [CrossRef]
  5. Ao, S.-I.; Hurwitz, M.; Palade, V. Cognitive computing and business intelligence applications in accounting, finance and management. Big Data Cogn. Comput. 2025, 9, 54. [Google Scholar] [CrossRef]
  6. Jiang, J.; Xie, H.; Shen, S.; Shen, Y.; Zhang, Z.; Lei, M.; Zheng, Y.; Li, Y.; Li, C.; Huang, D.; et al. SiriusBI: A comprehensive LLM-powered solution for data analytics in business intelligence. Proc. VLDB Endow. 2025, 18, 4860–4873. [Google Scholar] [CrossRef]
  7. Aghaei, R.; Kiaei, A.A.; Boush, M.; Vahidi, J.; Zavvar, M.; Barzegar, Z.; Rofoosheh, M. Harnessing the potential of large language models in modern marketing management: Applications, future directions, and strategic recommendations. arXiv 2025, arXiv:2501.10685. [Google Scholar] [CrossRef]
  8. Nie, Y.; Kong, Y.; Dong, X.; Mulvey, J.M.; Poor, H.V.; Wen, Q.; Zohren, S. A survey of large language models for financial applications: Progress, prospects and challenges. arXiv 2024, arXiv:2406.11903. [Google Scholar] [CrossRef]
  9. Mahdavi, S.; Joshi, P.K.; Guativa, L.H.; Singh, U. Integrating large language models in financial investments and market analysis: A survey. arXiv 2025, arXiv:2507.01990. [Google Scholar] [CrossRef]
  10. Zhao, H.; Liu, Z.; Wu, Z.; Li, Y.; Yang, T.; Shu, P.; Xu, S.; Dai, H.; Zhao, L.; Jiang, H.; et al. Revolutionizing finance with LLMs: An overview of applications and insights. arXiv 2024, arXiv:2401.11641. [Google Scholar] [CrossRef]
  11. Choi, P.M.S.; Huang, S.H.; Wang, Q. Large language models in finance: An overview. In Finance and Large Language Models; Springer: Cham, Switzerland, 2025; pp. 1–26. [Google Scholar]
Figure 1. Overview of the proposed LLM-driven integrated framework. Source: Authors’ own elaboration based on the proposed framework.
Figure 2. Integrated LLM-based framework for customer intelligence, marketing optimization, attribution analysis, and financial audit governance. Source: Authors’ own elaboration based on the proposed integrated architecture.
Figure 3. Customer embedding clusters (UMAP).
Figure 4. CLV prediction metrics by model.
Figure 5. Attention matrix across channels.
Figure 6. Hill response curves per channel.
Figure 7. Convergence of channel budget allocation.
Figure 8. Forecasting error (MAPE) by method.
Figure 9. ROC curves for anomaly detection.
Figure 10. Audit Quality Index radar chart.
Figure 11. LLM impact radial convergence map.
Table 1. Notation and dimensions used in the proposed framework.
| Symbol | Description | Domain/Dimension |
|---|---|---|
| k | Customer index | {1, …, K} |
| i, j | Marketing channel indices | {1, …, n} |
| t | Iteration index in optimization loop | ℕ |
| D | Historical data (all tables) | relational schema (Section 4.1) |
| B | Total marketing budget | ℝ₊ |
| T | Forecast horizon (time periods) | ℕ |
| e_k | LLM-based customer embedding | ℝ^d |
| d | Embedding dimension | ℕ (e.g., d = 768) |
| RFM_k | RFM feature vector of customer k | ℝ^3 |
| x_k | Combined feature vector of customer k | ℝ^D, D = d + 3 |
| C_k | Cluster assigned to customer k | {1, …, K_c} |
| CLV_k | Customer lifetime value | ℝ₊ |
| Q, K, V | Query, key, value matrices (attention) | ℝ^(n×d_k) |
| d_k | Key/query dimension | ℕ |
| W^O | Output projection (multi-head attention) | ℝ^(h·d_v×d_out) |
| α_ij | Attention weight of channel j on i | [0, 1], Σ_j α_ij = 1 |
| S_max,i | Maximum response (channel i) | ℝ₊ |
| K_i | Half-saturation spend (channel i) | ℝ₊ |
| γ_i | Hill exponent (channel i) | ℝ₊ |
| x_i | Spend on channel i | ℝ₊ |
| R(x) | Total revenue response | ℝ₊ |
| P_orig | Original Markov transition matrix | ℝ^(n×n) |
| P_attn | Attention-weighted transition matrix | ℝ^(n×n) |
| A_i | Attribution score of channel i | [0, 1], Σ_i A_i = 1 |
| Y_t | Revenue at time t | ℝ |
| Ŷ_t | Forecasted revenue at time t | ℝ |
| AQI_LLM | Audit Quality Index | [0, 1] |
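The channel-response parameters in Table 1 (S_max,i, K_i, γ_i) define per-channel Hill saturation curves. A minimal sketch of how such a response and the aggregate R(x) can be evaluated, assuming the standard Hill form R_i(x) = S_max,i · x^γ_i / (K_i^γ_i + x^γ_i); the parameter values are illustrative, not the fitted ones from the study:

```python
def hill_response(spend: float, s_max: float, k_half: float, gamma: float) -> float:
    """Hill saturation curve for one channel.

    s_max  : maximum attainable response (S_max,i)
    k_half : half-saturation spend (K_i); response equals s_max / 2 there
    gamma  : Hill exponent (gamma_i) controlling curve steepness
    """
    if spend <= 0.0:
        return 0.0
    sg = spend ** gamma
    return s_max * sg / (k_half ** gamma + sg)


def total_revenue(spends, params) -> float:
    """R(x): additive response over channels, one (s_max, k_half, gamma) per channel."""
    return sum(hill_response(x, *p) for x, p in zip(spends, params))


# Illustrative parameters: two channels, half-saturation at 100 spend units.
params = [(10.0, 100.0, 1.5), (20.0, 100.0, 2.0)]
print(total_revenue([100.0, 100.0], params))  # each channel at half saturation: 15.0
```

At spend K_i the curve is exactly at half its maximum, which makes K_i directly interpretable as the budget level where diminishing returns set in.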
Table 2. Overview of proposed figures for evaluating the integrated LLM-based marketing–finance–audit framework. The table outlines nine distinct visualizations spanning customer segmentation, CLV prediction, channel attribution, response modeling, budget optimization, forecasting validation, anomaly detection, audit quality assessment, and integrated cross-functional impact measurement.
| Figure | Plot Type | X-Axis | Y-Axis | Purpose |
|---|---|---|---|---|
| 1 | UMAP/t-SNE | Dim 1 | Dim 2 | Customer embedding/clustering visualization |
| 2 | Bar chart | Model type | Performance score | CLV model performance comparison |
| 3 | Heatmap | Channels | Channels | Learned attention weights visualization |
| 4 | Response curves | Spend | Revenue | Saturation and diminishing returns |
| 5 | Line plot | Iteration | Budget ($) | Convergence of LLM-based optimization |
| 6 | Bar/line chart | Forecasting method | MAPE/RMSE | Forecasting accuracy comparison |
| 7 | ROC curve | FPR | TPR | Fraud/anomaly detection performance |
| 8 | Radar chart | AQI components | Score (0–1) | Audit quality improvement visualization |
| 9 | Radial convergence map | Angle sectors | Improvement ratio (%) | ILIRCM: cross-domain LLM impact mapping |
Table 3. CLV prediction accuracy with 95% confidence intervals. Bootstrap confidence intervals (10,000 resamples) quantify estimation uncertainty on the validation set (20% holdout, n = 560,000 customers). Paired t-tests compare the fine-tuned LLM and the hybrid ensemble against the XGBoost baseline.
| Method | MAE | 95% CI | Accuracy (%) | 95% CI | vs. XGBoost |
|---|---|---|---|---|---|
| BG/NBD Baseline | 67.34 | [66.12, 68.61] | 68.2 | [67.1, 69.3] | |
| XGBoost | 48.92 | [47.88, 50.01] | 76.4 | [75.3, 77.5] | |
| Fine-tuned LLM (LoRA) | 35.67 | [34.22, 37.15] | 89.2 | [87.9, 90.4] | t = 18.4 |
| LLM Hybrid Ensemble | 31.45 | [29.87, 33.11] | 91.3 | [89.8, 92.7] | t = 21.7 |
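Table 3's intervals come from resampling rather than a closed-form formula. A sketch of one common variant, the percentile bootstrap, applied to a per-item statistic such as the mean absolute error; the data, resample count, and seed below are illustrative, not the paper's:

```python
import random


def bootstrap_ci(values, stat=lambda v: sum(v) / len(v),
                 n_resamples=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for a statistic of per-item values
    (e.g., absolute errors for MAE, or 0/1 hits for accuracy)."""
    rng = random.Random(seed)
    n = len(values)
    stats = sorted(
        stat([values[rng.randrange(n)] for _ in range(n)])  # one resample
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1.0 - alpha / 2) * n_resamples) - 1]
    return lo, hi


# Illustrative: 95% CI for the mean of 200 synthetic absolute errors.
rng0 = random.Random(0)
errors = [rng0.uniform(20.0, 60.0) for _ in range(200)]
lo, hi = bootstrap_ci(errors, n_resamples=2000)
```

With 10,000 resamples, as used in the paper, the interval endpoints stabilize to well within the reported precision.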
Table 4. Multi-touch attribution: attention-weighted vs. traditional methods. The attention-based approach identifies SEO (21.2%) and Social Media (24.1%) as substantially stronger contributors than last-click attribution suggests. LLM augmentation provides nuanced attribution reflecting actual customer journey complexity. Underperforming channels (Video: 2.1%, Native: 0.6%) receive reduced attribution under the attention mechanism, aligning budget allocation more closely with measured effectiveness.
| Channel | Last-Click | Linear | Markov | Attention | Attention + LLM |
|---|---|---|---|---|---|
| SEO | 12.3% | 15.2% | 17.4% | 19.8% | 21.2% |
| SEM | 18.9% | 16.8% | 16.1% | 17.9% | 18.4% |
| Social Media | 22.4% | 20.5% | 21.2% | 22.8% | 24.1% |
| Email | 15.7% | 14.3% | 16.8% | 18.2% | 19.6% |
| Display | 8.2% | 9.8% | 9.2% | 8.4% | 7.8% |
| Affiliate | 10.5% | 10.2% | 10.6% | 10.1% | 6.2% |
| Video | 7.8% | 8.4% | 5.9% | 4.2% | 2.1% |
| Native | 4.2% | 4.8% | 2.8% | 2.4% | 0.6% |
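The Markov column in Table 4 rests on removal effects: a channel's credit is the drop in overall conversion probability when journeys through that channel are cut off, normalized across channels. A minimal first-order sketch; the journey graph below is illustrative, not the paper's fitted transition matrix P_orig:

```python
def conversion_prob(trans, state="start", conv="conv", iters=200):
    """P(eventually absorbed in `conv` | current state), via fixed-point
    iteration. `trans` maps each transient state to {next_state: prob};
    `conv` and the null state are absorbing and must not be keys."""
    v = {s: 0.0 for s in trans}
    v[conv] = 1.0  # absorbing conversion state; null implicitly scores 0
    for _ in range(iters):
        for s in trans:
            v[s] = sum(p * v.get(nxt, 0.0) for nxt, p in trans[s].items())
    return v[state]


def removal_effect_attribution(trans, channels):
    """Markov attribution: redirect a channel's outgoing mass to null
    (journeys touching it die), measure the conversion drop, normalize."""
    base = conversion_prob(trans)
    effects = {}
    for c in channels:
        cut = {s: ({"null": 1.0} if s == c else e) for s, e in trans.items()}
        effects[c] = base - conversion_prob(cut)
    total = sum(effects.values()) or 1.0
    return {c: eff / total for c, eff in effects.items()}


# Illustrative two-channel journey graph.
trans = {
    "start": {"A": 0.5, "B": 0.5},
    "A": {"conv": 0.6, "null": 0.4},
    "B": {"conv": 0.2, "null": 0.8},
}
print(removal_effect_attribution(trans, ["A", "B"]))  # ≈ {'A': 0.75, 'B': 0.25}
```

The attention-weighted variant in the paper replaces P_orig with P_attn before computing the same removal effects, which is what shifts credit toward SEO and Social Media in the table.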
Table 5. Marketing ROI comparison across budget allocation strategies.
| Allocation Method | ROI | 95% CI | Improvement vs. Baseline | Significance |
|---|---|---|---|---|
| Equal Budget Split (Baseline) | 4.20 | [4.01, 4.39] | | |
| Nash Equilibrium Approximation | 5.80 | [5.58, 6.04] | +38.1% | t = 14.2 *** |
| Stackelberg Solution Heuristic | 6.30 | [6.05, 6.58] | +50.0% | t = 17.8 *** |
| LLM Multi-Agent System | 6.78 | [6.49, 7.09] | +61.4% | t = 20.1 *** |
Notes: ROI is computed as revenue attribution divided by marketing spend. Confidence intervals are obtained via bootstrap resampling (10,000 iterations), stratified by customer cohort (n = 2.8 million). Paired t-tests are conducted over 99 experimental runs, comparing each method against the equal-budget baseline. Significance levels: *** p < 0.001.
Table 6. Revenue forecasting MAPE with 95% confidence intervals. MAPE is computed on 60 monthly hold-out test periods (January 2024–December 2024). Confidence intervals are computed from monthly MAPE values and reflect month-to-month variation. Paired t-tests (df = 59) compare each method against the ARIMA baseline. Significance: *** p < 0.001, ** p < 0.01.
| Method | MAPE (%) | 95% CI | Improvement | Significance |
|---|---|---|---|---|
| ARIMA Baseline | 12.8 | [11.2, 14.6] | | |
| Prophet | 10.6 | [9.4, 12.1] | −2.2% | t = 4.1 ** |
| LSTM Neural Network | 9.2 | [7.9, 10.8] | −3.6% | t = 6.3 *** |
| Fine-tuned BERT | 7.4 | [6.1, 8.9] | −5.4% | t = 9.2 *** |
| LLM-ARIMA Ensemble | 6.1 | [5.0, 7.5] | −6.7% | t = 11.3 *** |
| LLM Multi-Method Ensemble | 4.7 | [3.8, 5.9] | −8.1% | t = 13.8 *** |
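The metrics behind Table 6 are mechanical: MAPE averages absolute percentage errors over hold-out periods, and the paired t-statistic is computed on per-period error differences between two methods (df = n − 1). A small sketch with illustrative numbers:

```python
from statistics import mean, stdev


def mape(actual, forecast):
    """Mean Absolute Percentage Error (%) over aligned periods; zero
    actuals are skipped to avoid division by zero."""
    if len(actual) != len(forecast):
        raise ValueError("series must have equal length")
    terms = [abs(a - f) / abs(a) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * mean(terms)


def paired_t(errors_a, errors_b):
    """Paired t-statistic on per-period errors of two methods (df = n - 1)."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)


# Illustrative: a forecast off by 10% every month gives MAPE = 10%.
print(mape([100.0, 200.0, 400.0], [90.0, 220.0, 440.0]))  # 10.0
```

With 60 monthly periods, as in the paper, the t-statistic is compared against the t-distribution with 59 degrees of freedom to obtain the reported significance levels.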
Table 7. Fraud detection AUC-ROC with 95% confidence intervals. AUC-ROC computed via 5-fold stratified cross-validation on 35,000 audit-trail transaction records with known fraud labels. Confidence intervals reflect fold-to-fold variation. Paired t-tests ( d f = 4 folds) compare each method against traditional rule-based baseline. Significance: *** p < 0.001 .
| Method | AUC-ROC | 95% CI | vs. Rule-Based | Significance |
|---|---|---|---|---|
| Traditional Rule-Based | 0.821 | [0.811, 0.831] | | |
| Isolation Forest | 0.884 | [0.875, 0.893] | +0.063 | t = 8.9 *** |
| XGBoost Classifier | 0.913 | [0.905, 0.921] | +0.092 | t = 12.1 *** |
| Fine-tuned BERT Anomaly Detector | 0.951 | [0.944, 0.958] | +0.130 | t = 16.4 *** |
| LLM Zero-Shot Learning | 0.972 | [0.965, 0.979] | +0.151 | t = 19.2 *** |
Table 8. Audit quality index components. The LLM-integrated audit system achieves an overall AQI of 0.951 (on 0–1 scale), reflecting excellent performance across all quality dimensions. The detection rate of 0.924 indicates robust identification of legitimate audit concerns. False positive management (0.979) ensures audit resources focus on substantive issues. Explainability (0.961) reflects high-quality LLM-generated reasoning enabling auditor judgment. Timeliness (0.948) demonstrates real-time detection capability, enabling rapid control response.
| Component | Weight | Score (0–1) | Interpretation |
|---|---|---|---|
| Detection Rate | 0.35 | 0.924 | Excellent fraud identification |
| False Positive Rate | 0.25 | 0.979 | Minimal investigation burden |
| Explainability (LLM) | 0.20 | 0.961 | High-quality explanations |
| Timeliness (Real-time) | 0.15 | 0.948 | Immediate anomaly identification |
| Completeness | 0.05 | 0.936 | Comprehensive coverage |
| Overall AQI | | 0.951 | Excellent |
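Table 8's overall score reads as a weight-normalized sum of the component scores. A sketch under that assumption; note that with the rounded component values shown, the sum comes out near 0.949 rather than exactly the published 0.951, presumably because the published components are themselves rounded:

```python
def audit_quality_index(components):
    """Weighted AQI: components maps name -> (weight, score in [0, 1])."""
    total_w = sum(w for w, _ in components.values())
    return sum(w * s for w, s in components.values()) / total_w


# Weights and component scores as reported in Table 8.
table8 = {
    "detection_rate":      (0.35, 0.924),
    "false_positive_rate": (0.25, 0.979),  # scored so higher = fewer false positives
    "explainability":      (0.20, 0.961),
    "timeliness":          (0.15, 0.948),
    "completeness":        (0.05, 0.936),
}
print(round(audit_quality_index(table8), 3))  # ≈ 0.949 with rounded inputs
```

Normalizing by the weight total keeps the index in [0, 1] even if the weights are later retuned and no longer sum to one.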
Table 9. Comprehensive system performance metrics.
| Metric | Traditional | LLM-Based System |
|---|---|---|
| Marketing ROI | 4.2 | 6.78 (+61.4%) |
| Revenue Forecast Accuracy (12-month) | 78.4% | 93.8% (+19.9%) |
| Financial Forecast MAPE | 12.8% | 4.7% (−63.3%) |
| Fraud Detection F1-Score | 0.742 | 0.938 (+26.4%) |
| CLV Prediction Accuracy | 68.2% | 91.3% (+33.9%) |
| Audit Quality Index | 0.76 | 0.951 (+25.1%) |
| Decision Explanation Clarity | Manual | Automated LLM |
| Processing Time (per decision) | 3.2 h | 12 min (−93.8%) |
| False Positive Rate | 8.4% | 2.1% (−75.0%) |
Table 10. Implementation maturity of framework components. Each component assessed on: (1) Implementation Status, (2) Empirical Validation Evidence, (3) Production Readiness, and (4) Key Limitations.
| Component | Status | Validation | Readiness | Limitation |
|---|---|---|---|---|
| 1. CLV Prediction | Fully Impl. | Full Dataset (2.8 million customers) | Production Ready | Fine-tuning dependent on GPT-3.5-Turbo |
| 2. Attribution Modeling | Fully Impl. | Full Dataset (47.2 million trans.) | Production Ready | Statistical correlation, not causal |
| 3. Game-Theoretic Opt. | Partially Impl. | Controlled Test (100 runs, Q1–Q2) | Requires Tuning | No equilibrium guarantee |
| 4. Financial Forecasting | Fully Impl. | Full Dataset (60-month test) | Production Ready | Ensemble dependency |
| 5. Audit Quality | Fully Impl. | Full Dataset (35,000 transactions) | Production Ready | Explainability validation ongoing |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Theodorakopoulos, L.; Karras, A.; Theodoropoulou, A.; Klavdianos, C. LLMs for Integrated Business Intelligence: A Big Data-Driven Framework Integrating Marketing Optimization, Financial Performance, and Audit Quality. Big Data Cogn. Comput. 2026, 10, 110. https://doi.org/10.3390/bdcc10040110

