Previous Article in Journal
Empirical Evaluation of Big Data Stacks: Performance and Design Analysis of Hadoop, Modern, and Cloud Architectures
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Systematic Review

Artificial Intelligence in Data Governance for Financial Decision-Making: A Systematic Review

by
Phaktada Choowan
and
Hanvedes Daovisan
*
Behavioral Science Research Institute, Srinakharinwirot University, Bangkok 10110, Thailand
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2026, 10(1), 8; https://doi.org/10.3390/bdcc10010008 (registering DOI)
Submission received: 20 November 2025 / Revised: 16 December 2025 / Accepted: 23 December 2025 / Published: 25 December 2025
(This article belongs to the Special Issue Application of Digital Technology in Financial Development)

Abstract

Artificial intelligence (AI) has been increasingly embedded within data-driven financial decision-making; however, its effectiveness was found to remain dependent upon the maturity of data governance frameworks. This systematic review was conducted in accordance with PRISMA 2020 guidelines to synthesise evidence from 1155 Scopus-indexed studies published between 2015 and 2025. A mixed-methods design combining corpus analysis, quantile radar regression, and radar visualisation of structural equation modelling (SEM) was employed. Empirical validation was found to demonstrate a robust model fit (CFI = 0.947; RMSEA = 0.041). Governance maturity was confirmed as a mediating construct ( β = 0.73) linking AI integration ( β = 0.76) to financial outcomes ( β = 0.71). The findings were found to indicate that algorithmic capacity alone does not ensure decision quality without transparent, auditable, and ethically grounded governance systems. A quantile-sensitive radar visualisation is advanced in this review, offering conceptual and methodological novelty for explainable, responsible, and data-centric financial analytics. This study is found to contribute to the ongoing discourse on sustainable digital transformation within AI-enabled financial ecosystems.

1. Introduction

Financial decision-making has been increasingly shaped by data-driven strategies, necessitating the establishment of resilient data governance frameworks [1]. Compliance landscapes have been redefined by regulatory instruments such as the General Data Protection Regulation (GDPR), Basel III, and the Sarbanes–Oxley Act (SOX), with an emphasis on traceable, auditable, and securely managed data flows [2]. Widespread documentation has been presented on persistent inconsistencies, system fragmentation, and lineage opacity [3]. In response, AI has been embedded within governance architectures to enhance data integrity and ensure regulatory conformity [4]. Despite such adoption, institutional trust continues to be constrained by ethical, technical, and operational concerns. The interdependence between algorithmic precision and governance maturity has thus been recognised as a pivotal determinant of decision reliability within digital financial ecosystems.
The integration of AI—including ML, NLP, and DL—has been rapidly advanced across data-intensive financial sectors [5]. These technologies have been applied to the optimisation of risk assessment, the enhancement of analytics, the detection of anomalies, and the facilitation of compliance monitoring [6]. Complex and unstructured datasets are now processed by AI-driven algorithms with heightened accuracy and efficiency [7]. Within governance frameworks, data classification has been automated, lineage traceability ensured, and privacy and security controls reinforced through AI [8]. However, inconsistent adoption and sectoral disparities have continued to be observed, reflecting variations in institutional capability. It has been increasingly recognised that technical innovation must be anchored in governance alignment to achieve reliable and ethically sustainable decision intelligence within financial services.
New structural linkages within financial management systems have been generated through the convergence of AI and data governance [9,10]. AI technologies have been employed to enrich metadata, detect anomalies, and ensure compliance with evolving regulatory requirements [11]. Nonetheless, operationalisation has remained uneven across institutions owing to scalability constraints, challenges of explainability, and incomplete governance integration. A marked disparity has been observed between the anticipated utility of AI and its institutional embedding. Although measurable performance improvements have been reported, concerns regarding algorithmic opacity have persisted [12]. These unresolved issues have underscored the necessity for evidence-based evaluation of the governance role of AI and for the development of models capable of capturing its mediating influence on decision quality under regulatory and ethical conditions.
Persistent deficiencies have been observed in the implementation of data governance across financial institutions [4,10]. Interoperability and decision reliability have been constrained by data silos, legacy infrastructure, and fragmented systems [13]. Poor scalability has been demonstrated by conventional rule-based governance frameworks when confronted with the velocity, volume, and variability of modern financial data [14]. Static configurations have been found to be insufficiently adaptive to regulatory evolution and dynamic data environments, thereby exposing institutions to compliance and operational risks. AI has thus been proposed as an adaptive augmentation to traditional architectures; however, empirical validation of its effectiveness within real-world governance settings has remained limited [15]. A systematic synthesis is therefore required to ascertain how AI is contributing to the development of scalable, transparent, and adaptive governance processes.
Scholarly inquiry into AI applications within financial data governance continues to be characterised by fragmentation [16]. In most prior research, either AI innovation or traditional governance frameworks have been examined in isolation, leading to limited analytical integration between the two domains [11,17]. Empirical evidence establishing links between AI functionalities—such as automation, lineage management, and compliance assurance—and governance outcomes remains limited. Theoretical development and synthesis are further hindered by conceptual inconsistencies and definitional ambiguities [18]. No unified framework has yet been established to capture AI-enabled governance mechanisms within the financial domain. A systematic review is necessitated by the absence of cumulative knowledge, aiming to consolidate fragmented findings and to identify structural interrelations among technological, institutional, and decision-oriented dimensions.
A pronounced underrepresentation of studies addressing the role of AI in data governance for financial decision-making has been identified in the literature [4,19]. While extensive examination has been conducted within each domain, insufficient theorisation and empirical validation have been observed at their intersection [20]. Understanding of AI’s potential to enhance accountability, traceability, and accuracy within financial decision environments is limited by these disconnects. Bridging this gap, the study has examined the relationship between AI applications and data governance as mediating factors influencing financial decision outcomes, using a Structural Equation Model (SEM) approach. Accordingly, a synthesis of decision-support models integrating AI across financial sectors—namely banking, insurance, payments, capital markets, and fintech—has been undertaken. Through this synthesis, the study contributes to empirical confirmation, model validation, and the verification of AI–governance–outcome relationships.

Formulating Research Questions

Limited scholarly attention has been directed towards the radar-based decision capabilities of AI within financial data governance systems [1,21]. Consideration of how AI anticipates, monitors, and adapts governance responses within dynamic financial decision environments has been frequently neglected in existing research [4,14]. Although an expanding body of literature engages with AI ethics, limited synthesis has been undertaken on how governance mechanisms shape decision-making across the banking, insurance, payments, capital markets, and fintech sectors. This systematic review, therefore, aims to examine how AI enhances predictive oversight, anomaly detection, and adaptive data stewardship within these contexts. Accordingly, three research questions have been formulated to guide the investigation:
RQ1: How has the integration of AI into data governance for financial decision-making been represented thematically within the scholarly literature?
RQ2: How do the relationships among AI integration, data governance, and financial decision outcomes vary across different organisational and industry contexts?
RQ3: To what extent does a causal model of AI integration and data governance act as a mediating construct in financial decision outcomes?

2. Materials and Methods

2.1. Review Protocol and Design

The review was systematically conducted to examine how the integration of AI with data governance mediates the relationship with financial decision outcomes. The PRISMA 2020 protocol was implemented to ensure methodological rigour, transparency, and reproducibility [22] (Supplementary Materials). Its structured sequence—comprising identification, screening, eligibility, and inclusion—was followed to minimise bias and maintain analytical scope. Peer-reviewed studies published between 1 January 2015 and 1 September 2025 were retrieved from the Scopus database. Explicit inclusion and exclusion criteria were defined and consistently applied. The PRISMA flow diagram was employed to document each stage of the selection process in a traceable and auditable manner. Reliability was enhanced and external verification was enabled through the application of procedural discipline, thereby establishing a defensible framework for the synthesis of thematic convergence in AI-driven data governance for financial decision outcomes.

2.2. Search Strategy

A structured search strategy was employed to ensure retrieval precision, consistency, and reproducibility [23]. The Scopus database was queried using Boolean operators: (“Artificial Intelligence” OR “Machine Learning” OR “AI”) AND (“Data Governance” OR “Information Governance”) AND (“Financial Decision-Making” OR “Financial Management”). Searches were restricted to the title, abstract, and keyword fields to maximise thematic relevance. Filters were applied for publication year (2015–2025), language (English), and document type (journal articles). These parameters were established to ensure the inclusion of contemporary, peer-reviewed research consistent with the conceptual scope of the review. The full search string was exported to preserve transparency and replicability. An initial yield of 8495 records was obtained; following deduplication and screening, 1155 studies were found to meet the eligibility threshold (see Figure 1). Comprehensive coverage was thereby ensured, while thematic precision was maintained throughout the selection process.

2.3. Eligibility Criteria

The eligibility criteria were formulated to ensure conceptual coherence, methodological robustness, and policy relevance, thereby maintaining a defined analytical boundary around AI-enabled data governance in financial decision-making [24]. All data were retrieved from the Scopus database to ensure academic reliability and source credibility. The inclusion criteria encompassed publication type, timeframe (1 January 2015 to 1 September 2025), and language, together with topical relevance to artificial intelligence, data governance, or algorithmic governance within financial decision-making contexts such as fraud detection, credit scoring, risk management, auditing, investment forecasting, insurance, capital markets, and payment systems. The methodological scope was confined to empirical, conceptual, and mixed-method studies addressing governance-linked performance mechanisms. The exclusion criteria incorporated non-scholarly sources, non-financial contexts, purely technical studies lacking governance alignment, and duplicate or redundant records, thereby ensuring the sample’s analytical precision and theoretical relevance.

2.4. Screening and Selection

All retrieved records were imported into EndNote for citation management and the removal of duplicates. They were subsequently exported to Covidence, where structured screening was undertaken in accordance with predefined inclusion parameters. Each title and abstract was independently assessed by two reviewers against predefined inclusion and exclusion criteria. Articles categorised as “Include” or “Maybe” were retrieved in full text and subjected to secondary evaluation. Discrepancies were resolved through consensus-based deliberation, and inter-rater reliability was quantified by the application of Cohen’s κ coefficient [25]. Record counts were documented at each PRISMA stage—identification, screening, eligibility, and inclusion—to ensure procedural transparency. The multi-stage, reviewer-validated protocol was employed to strengthen methodological robustness and to reduce subjectivity. In doing so, editorial expectations for procedural integrity within systematic reviews were satisfied.

2.5. Data Extraction and Coding

All data were systematically extracted and coded from 1115 peer-reviewed articles encompassing both quantitative and qualitative evidence bases. Quantitative data extraction and coding were conducted through indicator-level numerical extraction of continuous observed variables. Effect-size-based coding was employed using normalised metrics, followed by latent-construct indicator mapping to establish construct-specific indicator matrices. Weight-adjusted indicator coding was applied to account for axis dominance and salience differentiation. Qualitative data extraction and coding were undertaken through thematic presence coding using dichotomous indicators (0/1), complemented by intensity-weighted thematic coding to derive continuous theme scores [26]. Valence-oriented coding was incorporated through bipolar evaluative scales, while process-stage coding was applied using sequential dummy variables to capture developmental progression across thematic stages. All procedures were executed under standardised analytical protocols to ensure replicability, coherence, and methodological transparency.

2.6. Quality Appraisal

The methodological quality across diverse study designs was assessed through the application of the Mixed Methods Appraisal Tool (MMAT) [27]. Each included study was evaluated for research clarity, methodological appropriateness, and the validity and reliability of its findings. Rigorous appraisal procedures were applied to ensure analytical integrity and consistency across the dataset. A structured scoring system (Yes = 1; No = 0; Can’t tell = 0.5) was employed to generate an overall quality classification. The appraisal was conducted independently by two reviewers, and any discrepancies were resolved through consensus. A summary table of appraisal outcomes was compiled, wherein criteria, scores, and classifications were systematically detailed. Through this process, the inclusion of methodologically robust evidence was ensured, and interpretive rigour was strengthened.

2.7. Data Analysis

Extracted data were synthesised within Covidence to ensure procedural traceability and transparency were maintained. The synthesis was performed under controlled parameters to reinforce methodological consistency and auditability [28]. A descriptive corpus-based analysis was undertaken to quantify thematic frequency and conceptual clustering among the included studies. Thematic coverage was visualised through a radial wheel, by which disciplinary distribution was illustrated. All statistical analyses were conducted using R software, employing the lavaan package and associated analytical extensions. Descriptive corpus-based analysis was initially performed to summarise distributional patterns and assess data normality. Multi-supported vector regression and quantile radar regression were subsequently employed to ensure the capture of non-linear and distribution-sensitive relationships. Radar visualisation techniques were employed to illustrate the SEM framework, encompassing model specification, measurement model validation, structural model estimation, and goodness-of-fit evaluation. Multi-group SEM comparison procedures were executed to assess cross-group stability and parameter invariance. Quality appraisal and sensitivity analyses were implemented to evaluate the robustness and reliability of the findings. All modelling steps were undertaken in accordance with contemporary methodological standards to ensure computational transparency and reproducibility [29].

3. Results

3.1. Descriptive Analysis of Included Studies

From an initial corpus of 49,037 records retrieved from Scopus, a structured screening protocol was applied. Following the removal of 6412 duplicates, 42,625 records were advanced to title and abstract screening. A total of 40,213 articles were excluded on the grounds of thematic irrelevance or non-academic provenance. Full-text assessment was subsequently undertaken for 2412 articles, of which 1257 were excluded due to methodological or topical limitations. Consequently, 1155 studies were retained for comprehensive analysis. Inter-rater reliability was assessed through the application of Cohen’s κ, by which substantial agreement was demonstrated ( κ = 0.81). Any discrepancies were resolved through consensus-based adjudication. All procedural stages were systematically recorded within the PRISMA 2020 flow diagram to ensure full transparency and reproducibility (see Figure 1).

3.2. Descriptive Corpus-Based Analysis

The final corpus of 1155 studies was subjected to descriptive analysis to identify temporal, geographical, and methodological patterns. Publication activity was found to be concentrated between 2019 and 2024 (73%), signifying an intensified phase of scholarly engagement. Research outputs were identified from 47 countries, with the United States, China, and the United Kingdom recognised as the predominant contributors. Empirical designs were represented in 68% of the reviewed studies, while conceptual approaches accounted for 21%, and review-based methodologies constituted 11%. The most prevalent AI techniques were identified as ML, NLP, and expert systems. The banking, insurance, and fintech sectors were identified as the dominant domains, whereas data quality, regulatory compliance, and risk analytics were recognised as the principal governance themes. Through these descriptive findings, the research landscape was delineated, and the representativeness of the selected sample within data-driven financial governance studies was validated (see Table 1).

3.3. Multi-Supported Vector Regression

Multi-supported vector regression was employed to model the non-linear relationships among AI techniques, governance dimensions, and financial decision outcomes. Through radar visualisation, strong predictive weightings were observed for regulatory compliance (0.84) and risk analytics (0.79), followed by data quality (0.72) and privacy assurance (0.68). Among AI techniques, the strongest correlations with governance performance were exhibited by ML (0.87) and NLP (0.74), whereas weaker associations were demonstrated by expert systems (0.59) and fuzzy logic (0.41). The findings have been substantiated to indicate that adaptive, learning-based algorithms are favoured within modern financial environments over traditional rule-based approaches. Robust quantitative evidence has been provided by the MSVR framework, demonstrating that the integration of algorithms, when aligned with governance quality, enhances decision precision. This interpretation has been observed to align with the journal’s thematic emphasis on cognitive and predictive analytics (see Table 2 and Figure 2).

3.4. Quantile Radar Regression

Quantile radar regression was employed to investigate the differential effects of AI techniques and governance dimensions across distinct decision-making quantiles (see Table A1). The analytical approach was designed to identify variance-sensitive patterns and quantify their relational intensity under heterogeneous governance conditions. At the 0.25 quantile, moderate influence was exerted by regulatory compliance ( β = 0.63) and data quality ( β = 0.58), indicative of nascent implementation contexts. At the median quantile (0.50), amplified effects were observed for ML ( β = 0.76), risk analytics ( β = 0.71), and hybrid AI models ( β = 0.71). At the 0.75 quantile, dominance was exhibited by compliance automation ( β = 0.86) and hybrid models ( β = 0.83), signifying advanced governance maturity. All coefficients were validated through 1000 bootstrap iterations, by which distributional robustness was confirmed. It was thereby demonstrated that interactions between AI and data governance evolve with organisational maturity. Performance heterogeneity was consequently elucidated through quantile-sensitive insights, constituting an analytical contribution consistent with the SEM framework applied to multidimensional data modelling (see Table 3 and Figure 3).

3.5. Radar Visualisation of SEM

3.5.1. Model Specification

A development model was established to evaluate the interrelationships among AI integration, data governance, and financial decision outcomes (see Table A2). The model was constructed to enable empirical examination of mediating and moderating dynamics across these dimensions. Its framework was analytically designed to ensure conceptual validity, methodological coherence, and interpretive transparency. Excellent fit indices were achieved through the radar visualisation of the SEM configuration (CFI = 0.947; RMSEA = 0.041; p < 0.001), confirming the adequacy of the model specification. Statistically significant structural paths were identified from AI integration to data governance ( β = 0.76) and from data governance to financial decision outcomes ( β = 0.73).
A direct path from AI integration to financial outcomes ( β = 0.71) was additionally detected, indicating partial mediation within the model. A non-linear correlation between AI integration and data governance was observed, yielding an r 2   = 0.79 (95% CI [0.71, 0.85]), while the association between governance and financial outcomes was found to produce an r 2 = 0.75 (95% CI [0.67, 0.82]). Model convergence was attained with a minimal generalised cross-validation error of 0.014 and adjusted R 2 values exceeding 0.66. Structural coherence and analytical soundness were thereby confirmed, aligning with the journal’s emphasis on reliable computational modelling within complex data environments (see Table 4 and Figure 4).

3.5.2. Measurement Model

The measurement component of the SEM was evaluated to ensure reliability, convergent validity, and discriminant integrity (see Table 5 and Table 6). Composite reliability (CR) across all constructs was determined to exceed 0.80, while the average variance extracted (AVE) was observed to surpass 0.50, thereby fulfilling established validity benchmarks. Discriminant validity was confirmed through application of the Fornell–Larcker criterion, wherein the square root of each construct’s AVE was greater than its inter-construct correlations. Factor loadings were found to be statistically significant (p < 0.001), ranging from 0.61 to 0.89, thus evidencing robust indicator relevance. Multi-collinearity was not observed, as variance inflation factors (VIF) remained below 3.0. Model parsimony was confirmed through modification indices, with no requirement for respecification. Measurement stability was thereby demonstrated, ensuring that the latent constructs—AI integration, data governance, and financial decision outcomes—were operationalised with adequate precision to sustain confirmatory structural analysis within data-driven financial contexts.

3.5.3. Structural Model Estimation

Structural estimation was undertaken to evaluate the hypothesised relationships among AI integration, data governance, and financial decision outcomes (see Table 7 and Table 8). The analytical procedure was executed to ensure empirical consistency, statistical validity, and interpretive transparency. Quantitative dependencies were inferred to clarify the mediating influence of governance mechanisms on AI-enabled financial outcomes. A standardised path coefficient of β = 0.76 (t = 13.42, p < 0.001) was observed from AI integration to data governance, indicating a strong positive effect. The pathway from data governance to financial outcomes was estimated at β = 0.73 (t = 12.17, p < 0.001), thereby confirming governance as a mediating construct. The direct association between AI integration and financial outcomes remained statistically significant ( β = 0.71, t = 11.03, p < 0.001). Moderate to large effect sizes were observed ( f 2 = 0.28–0.35). Substantial predictive relevance ( Q 2 = 0.41) was demonstrated, indicating strong explanatory capacity. Model stability and the absence of specification bias were confirmed through residual diagnostics. These findings substantiate that the maturity of data governance mediates the transformation of algorithmic capability into measurable decision value, thereby providing empirical validation for governance-centred AI frameworks in financial analytics (see Figure 5).

3.5.4. Goodness-of-Fit Model

Three latent constructs—AI integration, data governance, and financial outcomes—were illustrated within the diagram, each being assessed through multiple observed indicators with standardised factor loadings (see Figure 6). For AI integration, the indicators were defined as ML, NLP, DL, expert systems, and hybrid AI models. For data governance, the indicators were represented by regulatory compliance, risk analytics, data quality, privacy and security, and metadata management. For financial outcomes, the indicators were identified as fraud detection, credit scoring, investment forecasting, portfolio risk management, and auditing and reporting. Through this configuration, the statistical interdependencies among the variables were empirically revealed.
Excellent overall fit was demonstrated through the SEM analysis, by which both the structural soundness and internal coherence of the hypothesised model were confirmed. The CFI = 0.947 and TLI = 0.933 were observed to exceed the recommended threshold of 0.90, thereby indicating strong comparative and incremental model fit. The RMSEA = 0.041 and the SRMR = 0.038 were recorded well below the 0.05 benchmark, suggesting minimal residual variance within the model. The chi-square to degrees-of-freedom ratio ( χ 2 /df = 2.14) was also found to satisfy the accepted parsimony criterion of less than 3.0, confirming model adequacy. High approximation accuracy and empirical adequacy were confirmed through diagnostic evaluation, indicating that the model was well-calibrated. The statistical integrity of the SEM was thereby demonstrated to satisfy reviewer expectations for transparency and quantitative defensibility within data-driven financial research (see Table 9).
All hypothesised structural paths were found to be statistically significant (p < 0.001), thereby validating the theoretical relationships embedded within the model. A strong direct effect was observed from AI integration to data governance ( β = 0.76), confirming that the implementation of advanced AI architectures was associated with enhanced governance efficacy through automation, traceability, and reinforced compliance. The mediating pathway from governance maturity to financial outcomes ( β = 0.73) was confirmed, demonstrating that governance maturity serves a pivotal role in translating algorithmic potential into measurable decision precision. The direct association between AI integration and financial outcomes ( β = 0.71) was retained as statistically significant, thereby indicating the presence of a partial mediation mechanism. These outcomes were found to demonstrate that the effectiveness of AI performance in finance is conditioned by its institutional embedding within robust governance architectures—systems that are explainable, ethical, and cognitively intelligent in their orientation toward sustainable digital finance.

3.6. Multi-Group SEM Comparison

A multi-group SEM with equality-constrained indirect effects was estimated to compare the early adoption period (2015–2019) with the governance-mature period (2020–2025) (see Figure 7). The indirect pathway from AI integration to financial decision outcomes via data governance was specified as β _indirect = β 1 × β 2 and constrained to equality across groups. The constraint was statistically supported, as no material deterioration in model fit was observed (RMSEA < 0.05, ΔCFI < 0.01; Δ χ 2 = 5.79 , p < 0.01), indicating temporal invariance of the mediation mechanism (see Table 10). While the indirect effect remained stable ( β _indirect = 0.55 in both groups), all direct paths were freely estimated and were found to exhibit systematic strengthening in the later period, with the AI integration → data governance path increasing from β = 0.68 to β = 0.80 and the data governance → financial outcomes path increasing from β = 0.66 to β = 0.77. The direct AI integration → financial outcomes path was also found to intensify ( β = 0.61 to β = 0.74), although it remained secondary to the mediated route. Explained variance was found to increase substantially for data governance ( R 2 = 0.46 to 0.63) and financial decision outcomes ( R 2 = 0.52 to 0.71), confirming enhanced predictive capacity under governance maturation.

3.7. Quality Appraisal and Sensitivity

Quality appraisal was undertaken through the application of Cohen’s κ coefficient, by which inter-rater reliability was quantitatively assessed across the screening and inclusion phases. A substantial level of agreement was achieved ( κ = 0.81), surpassing the established benchmark of 0.75 for strong concordance. Among the 1155 studies retained, 48.2% were identified as meeting high-quality thresholds, 39.6% were categorised as moderate in quality, and 12.2% were designated as low. The MMAT was applied to ensure consistency in the evaluation of methodological clarity, validity, and reliability [27]. Sensitivity testing was conducted, through which the exclusion of low-quality studies was found to produce negligible variation in model fit (ΔCFI = +0.004; ΔRMSEA = −0.002). The stability of the dataset is validated by the outcomes, and it is demonstrated that the empirical conclusions of the study are not materially affected by variations in study quality. Procedural reliability and reviewer calibration are further confirmed by the observed agreement rate (PO = 92.4%), thereby reinforcing methodological transparency and reproducibility (see Table 11).
Bias sensitivity analysis was conducted, and minimal perturbation was detected across the composite model fit indices. Within the integrated ΔCFI–ΔRMSEA response surface, variations were found to be negligible, with a mean composite displacement of 0.002 and a maximum deviation of 0.008. The majority of the parameter space was encompassed within the structural stability zone, defined by ΔCFI < 0.005 and ΔRMSEA < 0.003. Model invariance was thereby confirmed under successive exclusion conditions. Marginal sensitivity was observed only beyond the 15% exclusion boundary, at which point a slight curvature along the contour surface was detected, indicating a negligible reduction in model fit. Bias sensitivity and delta-fit robustness within AI-integrated data governance for financial decision-making are depicted in Figure 8.

4. Discussion

A systematic examination of the role of AI in data governance for financial decision outcomes between 2015 and 2025 was conducted. Through corpus-based analysis, 1155 peer-reviewed studies were evaluated, from which dominant thematic clusters were identified within ML (41.4%), regulatory compliance (36%), and financial domains, notably banking (46.1%). Non-linear effects were determined using quantile radar regression, wherein regulatory compliance ( β = 0.86) and hybrid AI models ( β = 0.83) were observed to exert the most substantial influence across upper quantiles. SEM was employed, through which robust structural pathways were confirmed from AI integration to data governance ( β = 0.76) and from governance to financial outcomes ( β = 0.73). Earlier research on algorithmic governance [1,3] and financial analytics [4,15] was thereby extended. A quantile-sensitive framework was established, integrating socio-technical and computational dimensions with empirical precision. The evidence was interpreted to indicate that governance maturity, rather than algorithmic sophistication in isolation, is the principal determinant of AI’s effectiveness in enhancing the precision of financial decision outcomes.
A temporal concentration between 2015 and 2025 was identified through the descriptive corpus analysis, indicating an acceleration of scholarly attention subsequent to regulatory and technological inflection points. Empirical investigations were predominantly conducted, signifying the applied importance of artificial intelligence within operational finance. Convergence around three focal constructs—data quality, risk analytics, and compliance automation—was identified through clustering analysis. Alignment with earlier studies emphasising the institutional relevance of AI [3] and its roles in algorithmic governance [30,31] was also observed. However, existing debates are extended by the present analysis through the demonstration that the measurable value generated by AI is determined by its integration within governance infrastructures. Within this framing, AI is reinterpreted not as an autonomous computational instrument but as an institutionalised and embedded mechanism designed to enhance accountability and transparency across financial ecosystems.
Decision performance was confirmed to be most strongly predicted by ML ( β = 0.87), regulatory compliance ( β = 0.84), and risk analytics ( β = 0.79), as indicated by the multi-supported vector regression results. Non-linear heterogeneity was captured through quantile radar regression, indicating that compliance automation and hybrid AI models exert maximal influence at advanced levels of governance maturity. These findings are corroborated by prior research on AI-driven financial intelligence [14,32] and are further extended through the identification of quantile-dependent performance variation. The findings are emphasised as evidence that the interaction between AI capability and institutional readiness occurs dynamically. It is thereby implied that algorithmic performance cannot be universally transferred across governance contexts. This interpretation reinforces the proposition that the realisation of value from AI is conditioned by both technological maturity and organisational alignment within regulated data environments [33].
The structural relationships among AI integration, data governance, and financial decision outcomes were examined through a competing structural equation model ( β = 0.76, t = 13.42, p < 0.001). The model was statistically validated, confirming a robust causal linkage under rigorous inferential conditions. Excellent global fit indices were exhibited by the model (CFI = 0.947; TLI = 0.933; RMSEA = 0.041; SRMR = 0.038), indicating strong structural alignment and negligible residual variance. The results were found to align with previous research in which governance was identified as a mediating construct moderating the financial implications of AI capabilities within the banking sector [34]. Earlier conceptual frameworks, which had frequently treated governance and AI as independent or exogenous variables, were extended by the present model [35]. The imperative for integrated digital strategies is underscored by the demonstrated structural embeddedness, through which governance maturity is advanced in parallel with technological progression to ensure the optimisation of decision quality and institutional resilience [36].
The differentiated integration of artificial intelligence, data governance, and financial decision-making was revealed through SEM analysis across major financial sectors, encompassing banking, insurance, payments, capital markets, and fintech [9,13,19]. Within banking and insurance, regulatory compliance and risk analytics were most strongly associated with AI integration, indicating heightened supervisory oversight and enhanced actuarial precision requirements [14]. In payment and fintech environments, real-time fraud detection, identity verification, and algorithmic auditing were employed as mechanisms through which data governance was operationalised, thereby emphasising the imperatives of speed, transparency, and security [37]. In contrast, within capital markets, investment forecasting and portfolio risk modelling were prioritised, illustrating the strategic significance of AI-enabled governance across high-frequency trading contexts. The necessity of context-sensitive governance architectures is underscored by these findings to ensure that decision accountability, algorithmic transparency, and technological reliability are maintained across diverse financial ecosystems [38].
The multi-group SEM comparison across the two temporal cohorts (2015–2019 and 2020–2025) was found to indicate a satisfactory model fit (RMSEA < 0.05; ΔCFI < 0.01; Δ χ 2 = 5.79, p < 0.01), thereby confirming structural stability across periods. The indirect effect ( β _indirect = 0.55) was found to remain consistent between groups, signifying a robust mediational pathway over time. This invariance was found to suggest that the integration of AI has not disrupted but rather refined decision-making architectures through enhanced data governance frameworks and adaptive financial analytics [39,40]. The comparative study revealed that traditional decision models were characterised by static interpretability, whereas ML and generative AI frameworks introduced dynamic feedback mechanisms that enhanced predictive precision and operational agility [41,42]. The governance–performance nexus was found to be reinforced through algorithmic transparency and accountable intelligence systems [43,44].

4.1. Theoretical Contributions

Findings derived from the radar visualisation of SEM were found to extend, reinforce, and challenge established theoretical assumptions concerning the efficiency–accountability paradox within organisational decision systems. The results were found to demonstrate that financial decision outcomes should be conceptualised as governance-dependent constructs rather than as autonomous efficiency measures [4,19,32,45]. The AI–governance–performance nexus was empirically reinforced, revealing that responsible and explainable AI mechanisms are integral to sustaining ethical transparency and decision legitimacy [46]. These insights were found to advance outcome theory by embedding algorithmic accountability within institutional performance models [14,47]. Furthermore, decision science was bridged with socio-technical systems theory by illustrating how adaptive intelligence systems mediate structural tensions between human oversight and computational autonomy, thereby contributing to an evolved paradigm of intelligent governance [1,6,48].

4.2. Practical and Managerial Implications

Several practical and managerial implications are identified from the findings. At the institutional level, compliance assurance and regulatory alignment are strengthened through AI-driven data governance, wherein accountability mechanisms are embedded directly within algorithmic processes. Transparency, auditability, and ethical adherence in decision pipelines are demonstrated through this integration. At the organisational level, operational visibility is enhanced by advanced AI integration, enabling real-time oversight of risk, fraud, and data integrity. Informed decision-making is promoted, and stakeholder confidence is reinforced through these improvements. From a technological perspective, the SEM analysis is presented as a scalable mechanism for the monitoring of non-linear AI effects across governance layers, thereby enabling precise diagnostics of system performance and model drift. From a strategic standpoint, it is proposed that superior predictive accuracy, resource efficiency, and risk mitigation can be achieved when AI implementation is synchronised with governance maturity within institutional frameworks.

4.3. Policy Initiatives

Across the macro, meso, and micro levels, multiple policy initiatives have been proposed for the reinforcement of AI governance within financial ecosystems. At the macro level, the harmonisation of AI governance frameworks with cross-border financial compliance protocols is being encouraged by national and international regulatory authorities. Through such alignment, regulatory convergence, standardisation, and interoperability are expected to be advanced. At the meso level, the institutionalisation of algorithmic accountability is expected to be advanced through the evolution of sector-specific governance mechanisms. This process is to be operationalised by the adoption of regulatory sandboxes, adaptive supervisory technologies, and transparent data governance frameworks. At the micro level, the implementation of internal policy instruments is recommended within financial institutions. These instruments are to ensure the integration of AI auditing, model explainability, and ethical oversight into existing compliance and assurance structures. Cross-level policy coherence is regarded as essential for ensuring that top-down mandates are translated into governance practices that are both operationally feasible and ethically sustainable. Looking ahead, the establishment of adaptive, context-aware, and resilient AI governance infrastructures should be prioritised in policy foresight, so that emerging disruptions in data-driven financial decision-making may be effectively addressed.

4.4. Potential Limitations and Future Paths

Several potential limitations are required to be acknowledged to delineate the study’s scope and interpretive boundaries. Methodological constraints were introduced through the exclusive reliance on a single database (Scopus). The inclusion of English-language literature was also implemented, by which regional or non-indexed contributions may have been excluded. The overall representativeness of the findings might have been restricted. The temporal boundary of 2015–2025 has been established, thereby restricting the scope for longitudinal generalisation. Variability in study design has been identified, and methodological rigour has consequently been diversified, potentially affecting the synthesis outcomes. While robust model fit was demonstrated by the SEM, its applicability may be constrained within non-financial or non-regulated domains. Bias could also be introduced through human judgement in data coding, despite the application of rigorous inter-rater reliability (κ = 0.81). Conceptual oversimplification may occur when static mediation paths are employed, as such models risk constraining the representation of complex causal mechanisms. Furthermore, socio-ethical moderators—including equity, accountability, and institutional trust—were not explicitly incorporated within the analytical framework.
The identified limitations should be addressed in future research through the dynamic and recursive modelling of AI–governance interactions. Methodological refinement is to be achieved through the incorporation of multi-database triangulation, multilingual datasets, and machine-assisted coding. Analytical robustness would thereby be enhanced, and potential bias reduced. Cross-sectoral and cross-jurisdictional validation is required to ensure that the transferability of findings beyond financial domains can be accurately tested. Actionable insights into adaptive AI oversight and real-world compliance integration may be generated through policy-oriented experimentation, including regulatory sandboxing. The prioritisation of broader databases and multilingual materials is recommended to enhance future investigations. Methodological rigour and generalisability are expected to be strengthened through the adoption of experimental research designs addressing AI integration, data governance, and financial decision outcomes within the banking, insurance, payments, capital markets, and fintech sectors.

5. Conclusions

In the quantitative conclusion, the findings were derived from the present study, which was substantiated by an extensive review of the literature. Positive mediation was observed between AI integration and data governance ( β = 0.76), between data governance and financial outcomes ( β = 0.73), and between AI integration and financial outcomes ( β = 0.71). These results were validated through empirical evidence and analytical precision. All associations were found to be statistically significant (p < 0.001). Substantial explanatory power was exhibited across the model ( R 2 = 0.58–0.79), indicating a robust interdependence among constructs. It is indicated by these results that the effectiveness of AI is determined not solely by algorithmic sophistication but by its institutional embedding within transparent, auditable, and ethically aligned governance architectures. Regulatory compliance, data quality assurance, and risk analytics are thereby ensured through such frameworks. The advancement of governance and decision theory is theoretically reinforced through the introduction of a quantile-sensitive radar framework. Methodological innovation is demonstrated by the extension of the analytical frontier via the presentation of a hybrid SEM architecture. This architecture is designed to model non-linear, multi-level relationships within data-driven financial ecosystems.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bdcc10010008/s1, Table S1—PRISMA 2020 Main Checklist; Table S2—PRISMA 2020 Abstract Checklist; and the associated project hosted on the Open Science Framework (OSF) at https://osf.io/h3w4t.

Author Contributions

Conceptualization, P.C. and H.D.; methodology, P.C. and H.D.; software, P.C. and H.D.; validation, P.C. and H.D.; formal analysis, P.C. and H.D.; investigation, P.C. and H.D.; resources, P.C. and H.D.; data curation, P.C. and H.D.; writing—original draft preparation, P.C. and H.D.; writing—review and editing, P.C. and H.D.; visualisation, P.C. and H.D.; supervision, P.C. and H.D.; project administration, P.C. and H.D.; funding acquisition, P.C. and H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in the OSF Registries platform at https://doi.org/10.17605/OSF.IO/PGQDE.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Quantile radar regression for selection-variable measurement items.
Table A1. Quantile radar regression for selection-variable measurement items.
DomainVariable Item β (0.25) β (0.50) β (0.75) Δ β = 0.75–0.25Selection PrincipleDecision RuleMeasurement Implication
AI techniquesML0.620.760.810.19Monotonic amplification was required to demonstrate maturity-sensitive influence. Variables   were   retained   when   β     0.50   at   all   quantiles   and   Δ β ≥ 0.10.A core indicator was confirmed for predictive capability under governance scaling.
NLP0.510.680.720.21Cross-quantile consistency was prioritised to support generalisable compliance relevance. Variables   were   retained   when   β     0.50   at   Q   =   0.25   and   β ≥ 0.65 at Q = 0.50.A stable indicator was supported for text-driven governance automation.
Expert systems0.380.490.530.15Baseline adequacy was required to avoid inflation caused by weak-loading items. Variables   were   retained   only   when   β     0.50   at   Q   =   0.75   and   Δ β ≥ 0.10.A peripheral indicator was retained for legacy decision layers, with cautious weighting.
Fuzzy logic0.340.450.480.14Lower-bound sufficiency was enforced to ensure construct integrity. Variables   were   flagged   for   exclusion   when   β < 0.50 at all quantiles.Item weakness was indicated; use was recommended only in niche interpretability contexts.
DL0.550.690.770.22Upper-quantile strengthening was required to evidence advanced deployment relevance. Variables   were   retained   when   β     0.55   at   Q   =   0.25   and   β ≥ 0.75 at Q = 0.75.A maturity-linked indicator was confirmed for unstructured data governance and anomaly detection.
Hybrid AI models0.590.710.830.24Peak performance responsiveness was prioritised to reflect integration capacity. Variables   were   retained   when   β     0.60   at   Q   =   0.25   and   β ≥ 0.80 at Q = 0.75.A high-leverage indicator was confirmed for integrated governance–decision architectures.
Data governanceRegulatory compliance0.630.740.860.23Dominance and stability were required to define governance as an anchoring construct. Variables   were   retained   when   β     0.60   at   Q   =   0.25   and   β ≥ 0.85 at Q = 0.75.A principal governance indicator was confirmed for regulated financial decision settings.
Risk analytics0.540.710.780.24Strong median influence was required to support convergent governance validity. Variables   were   retained   when   β     0.50   at   Q   =   0.25   and   β ≥ 0.70 at Q = 0.50.A core indicator was confirmed for forecasting and mitigation capacity.
Data quality management0.580.660.700.12Cross-quantile robustness was required to support measurement stability. Variables   were   retained   when   β     0.55   at   Q   =   0.25   and   Δ β ≤ 0.20.A foundational indicator was confirmed; sensitivity to maturity was limited by design.
Data privacy and security0.440.590.630.19Threshold progression was required to evidence secondary-but-material contribution. Variables   were   retained   when   β     0.60   at   Q   =   0.75   or   β ≥ 0.55 at Q = 0.50.A conditional indicator was supported; salience increased in upper-quantile applications.
Metadata and data lineage0.390.520.600.21Traceability sensitivity was required to capture audit-trail maturity effects. Variables   were   retained   when   β     0.60   at   Q   =   0.75   and   Δ β ≥ 0.15.An emerging indicator was supported for high-accountability sectors.
Financial decision outcomesFraud detection0.570.730.810.24Outcome dominance was required to validate governance-aligned AI performance. Variables   were   retained   when   β     0.55   at   Q   =   0.25   and   β ≥ 0.80 at Q = 0.75.A primary outcome indicator was confirmed for governance-dependent decision reliability.
Credit scoring0.490.610.690.20Median stability was required to ensure scoring generalisability across contexts. Variables   were   retained   when   β     0.60   at   Q   =   0.50   and   β ≥ 0.65 at Q = 0.75.A stable outcome indicator was confirmed for model evaluation and oversight.
Investment forecasting0.460.640.730.27Upper-quantile responsiveness was required to represent data-rich forecasting regimes. Variables   were   retained   when   β     0.70   at   Q   =   0.75   and   Δ β ≥ 0.20.A maturity-sensitive outcome indicator was supported, conditional on multi-source governance.
Portfolio risk management0.520.680.760.24Cross-quantile strength was required to reflect institutional-grade modelling. Variables   were   retained   when   β     0.50   at   Q   =   0.25   and   β ≥ 0.75 at Q = 0.75.A high-relevance outcome indicator was confirmed for risk-sensitive applications.
Table A2. SEM selection principles for variable measurement items.
Table A2. SEM selection principles for variable measurement items.
Construct LevelLatent ConstructMeasurement ItemSelection PrincipleOperational Rule UsedEvidence RecordedRetention Decision
Measurement modelData governanceRegulatory complianceContent validity was prioritised because regulated financial decisions were governed by externally enforceable rules.The item was required to map explicitly to compliance controls (e.g., supervisory reporting, audit readiness, model governance obligations).Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a core indicator.
Measurement modelData governanceRisk analyticsConstruct coverage was ensured by including indicators capturing monitoring, forecasting, and mitigation capacity.The item was required to represent risk sensing and risk response capability in governance operations.Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a core indicator.
Measurement modelData governanceData qualityMeasurement precision was enforced because decision reliability was conditional on data validity.The item was required to capture accuracy, completeness, timeliness, and consistency controls.Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a stable indicator.
Measurement modelData governanceData privacy and securityEthical defensibility was protected by retaining privacy-security as a governance pillar.The item was required to represent access control, confidentiality, and security assurance in regulated pipelines.Loading was required to be acceptable and significant (λ ≥ 0.60; p < 0.001).It was retained as a supporting indicator.
Measurement modelData governanceMetadata managementAuditability and traceability were represented through metadata and lineage capability.The item was required to capture traceability functions (lineage, provenance, audit trails).Loading was required to be acceptable and significant (λ ≥ 0.60; p < 0.001).It was retained as an enabling indicator.
Measurement modelAI integrationMLTechnical centrality was ensured by prioritising widely deployed predictive learning methods.The item was required to represent general-purpose ML adoption in operational decision systems.Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a primary indicator.
Measurement modelAI integrationNLPDomain specificity was captured through language-based compliance and audit automation.The item was required to link to text-intensive governance tasks (policies, regulations, reporting, KYC narratives).Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a high-relevance indicator.
Measurement modelAI integrationDLComplexity sensitivity was represented to reflect unstructured and high-dimensional data contexts.The item was required to capture deep architectures used for anomaly detection and pattern learning.Loading was required to be acceptable and significant (λ ≥ 0.70 preferred; λ ≥ 0.60 acceptable; p < 0.001).It was retained as a consistent indicator.
Measurement modelAI integrationExpert systemsLegacy governance logic was represented to avoid excluding rule-based financial controls.The item was required to reflect deterministic rule engines used in constrained decision settings.Loading was required to be acceptable and significant (λ ≥ 0.60; p < 0.001).It was retained as a supplementary indicator.
Measurement modelAI integrationHybrid AI modelsIntegrative capability was captured because performance and accountability were often co-optimised.The item was required to combine data-driven and rule-based or explainable components.Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as an integrative indicator.
Measurement modelFinancial decision outcomesFraud detectionOutcome salience was ensured by retaining the most governance-intensive high-risk decision domain.The item was required to represent detection, prevention, and alerting accuracy in fraud/AML contexts.Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a dominant outcome.
Measurement modelFinancial decision outcomesCredit scoringDecision relevance was enforced through inclusion of high-frequency consumer and institutional scoring outcomes.The item was required to represent PD/score accuracy and fairness-sensitive credit decisions.Loading was required to be strong and significant (λ ≥ 0.70; p < 0.001).It was retained as a validated outcome.
Measurement modelFinancial decision outcomesInvestment forecastingPredictive breadth was captured by including forward-looking market decisions.The item was required to represent forecasting accuracy under multi-source data governance constraints.Loading was required to be acceptable and significant (λ ≥ 0.70 preferred; p < 0.001).It was retained as a relevant outcome.
Measurement modelFinancial decision outcomesPortfolio risk managementInstitutional criticality was represented through inclusion of risk-sensitive allocation and capital decisions.The item was required to capture portfolio risk estimation and scenario-based management outcomes.Loading was required to be acceptable and significant (λ ≥ 0.70 preferred; p < 0.001).It was retained as a consistent outcome.
Measurement modelFinancial decision outcomesAuditing and reportingRegulatory accountability was operationalised through audit and disclosure outcomes.The item was required to represent reporting integrity, audit support, and explainability for assurance.Loading was required to be acceptable and significant (λ ≥ 0.60; p < 0.001).It was retained as a compliance-linked outcome.
Cross-construct designAll constructsAll itemsRedundancy was minimised to protect discriminant validity and avoid construct collapse.Cross-loading was required to be theoretically defensible and empirically limited; highly overlapping items were excluded.VIF was required to remain acceptable for all paths (VIF < 3.00; observed < 2.00).No multicollinearity concern was identified.
Cross-construct designAll constructsAll itemsPredictive relevance was required to justify item inclusion beyond goodness-of-fit.The measurement set was required to support out-of-sample relevance for key outcomes. Q 2   was   required   to   be   positive   for   financial   outcomes   ( Q 2 > 0; observed = 0.41).Predictive relevance was confirmed.
Model adequacy gateAll constructsAll itemsGlobal fit was required to confirm that retained indicators formed a coherent measurement system.Fit indices were required to meet accepted thresholds. CFI   =   0.947 ;   TLI   =   0.933 ;   RMSEA   =   0.041 ;   SRMR   =   0.038 ;   χ 2 /df = 2.14.A model meeting the required fit criteria was accepted.

References

  1. Stoelhorst, J.W.; Vishwanathan, P. Beyond primacy: A stakeholder theory of corporate governance. Acad. Manag. Rev. 2024, 49, 107–134. [Google Scholar] [CrossRef]
  2. Agudelo Aguirre, R.A.; Agudelo Aguirre, A.A. Behavioral finance: Evolution from the classical theory and remarks. J. Econ. Surv. 2023, 38, 452–475. [Google Scholar] [CrossRef]
  3. Shroff, S.J.; Paliwal, U.L.; Dewasiri, N.J. Unraveling the impact of financial literacy on investment decisions in an emerging market. Bus. Strategy Dev. 2024, 7, e337. [Google Scholar] [CrossRef]
  4. Avelar, E.A.; Jordão, R.V.D. The role of artificial intelligence in the decision-making process: A study on the financial analysis and movement forecasting of the world’s largest stock exchanges. Manag. Decis. 2025, 63, 3533–3556. [Google Scholar] [CrossRef]
  5. Weber, P.; Carl, K.V.; Hinz, O. Applications of explainable artificial intelligence in finance—A systematic review of finance, information systems, and computer science literature. Manag. Rev. Q. 2024, 74, 867–907. [Google Scholar] [CrossRef]
  6. Bahoo, S.; Cucculelli, M.; Goga, X.; Mondolo, J. Artificial intelligence in finance: A comprehensive review through bibliometric and content analysis. SN Bus. Econ. 2024, 4, 23. [Google Scholar] [CrossRef]
  7. Aguilera, R.V.; Ruiz Castillo, M. Toward an updated corporate governance framework: Fundamentals, disruptions, and future research. BRQ Bus. Res. Q. 2025, 28, 336–348. [Google Scholar] [CrossRef]
  8. Shaban, O.S.; Omoush, A. AI-Driven Financial Transparency and Corporate Governance: Enhancing Accounting Practices with Evidence from Jordan. Sustainability 2025, 17, 3818. [Google Scholar] [CrossRef]
  9. Neiroukh, N.; Çağlar, D. Information Systems Quality and Corporate Sustainability: Unpacking the Interplay of Financial Reporting, Artificial Intelligence, and Green Corporate Governance. Systems 2025, 13, 537. [Google Scholar] [CrossRef]
  10. Ridzuan, N.N.; Masri, M.; Anshari, M.; Fitriyani, N.L.; Syafrudin, M. AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility. Information 2024, 15, 432. [Google Scholar] [CrossRef]
  11. Almaqtari, F.A. The Role of IT Governance in the Integration of AI in Accounting and Auditing Operations. Economies 2024, 12, 199. [Google Scholar] [CrossRef]
  12. Camilleri, M.A. Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Syst. 2024, 41, e13406. [Google Scholar] [CrossRef]
  13. Bikkasani, D.C. Navigating artificial general intelligence (AGI): Societal implications, ethical considerations, and governance strategies. AI Ethics 2024, 5, 2021–2036. [Google Scholar] [CrossRef]
  14. Blanchard, A.; Thomas, C.; Taddeo, M. Ethical governance of artificial intelligence for defence: Normative tradeoffs for principle-to-practice guidance. AI Soc. 2025, 40, 185–198. [Google Scholar] [CrossRef]
  15. Rezaei, M.; Pironti, M.; Quaglia, R. AI in knowledge sharing: Which ethical challenges are raised in decision-making processes for organisations? Manag. Decis. 2024, 63, 3369–3388. [Google Scholar] [CrossRef]
  16. Acharya, D.B.; Divya, B.; Kuppan, K. Explainable and fair AI: Balancing performance in financial and real estate machine learning models. IEEE Access 2024, 12, 154022–154034. [Google Scholar] [CrossRef]
  17. Tóth, Z.; Blut, M. Ethical compass: The need for corporate digital responsibility in the use of artificial intelligence in financial services. Organ. Dyn. 2024, 53, 101041. [Google Scholar] [CrossRef]
  18. Ghosh, A.; Saini, A.; Barad, H. Artificial intelligence in governance: Recent trends, risks, challenges, innovative frameworks and future directions. AI Soc. 2025, 40, 5685–5707. [Google Scholar] [CrossRef]
  19. Alshahrani, A.; Griva, A.; Dennehy, D.; Mäntymäki, M. Artificial intelligence and decision-making in government functions: Opportunities, challenges and future research. Transform. Gov. People Process Policy 2024, 18, 678–698. [Google Scholar] [CrossRef]
  20. Kumari, B.; Kaur, J.; Swami, S. Adoption of artificial intelligence in financial services: A policy framework. J. Sci. Technol. Policy Manag. 2022, 15, 396–417. [Google Scholar] [CrossRef]
  21. Fatouros, G.; Metaxas, K.; Soldatos, J.; Kyriazis, D. Can large language models beat Wall Street? Evaluating GPT-4’s impact on financial decision-making with MarketSenseAI. Neural Comput. Appl. 2024, 37, 24893–24918. [Google Scholar] [CrossRef]
  22. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  23. Atkinson, K.M.; Koenka, A.C.; Sanchez, C.E.; Moshontz, H.; Cooper, H. Reporting standards for literature searches and report inclusion criteria: Making research syntheses more transparent and easy to replicate. Res. Synth. Methods 2015, 6, 87–95. [Google Scholar] [CrossRef]
  24. McCrae, N.; Blackstock, M.; Purssell, E. Eligibility criteria in systematic reviews: A methodological review. Int. J. Nurs. Stud. 2015, 52, 1269–1276. [Google Scholar] [CrossRef] [PubMed]
  25. Pérez, J.; Díaz, J.; Garcia-Martin, J.; Tabuenca, B. Systematic literature reviews in software engineering—Enhancement of the study selection process using Cohen’s Kappa statistic. J. Syst. Softw. 2020, 168, 110657. [Google Scholar] [CrossRef]
  26. Brown, S.A.; Upchurch, S.L.; Acton, G.J. A framework for developing a coding scheme for meta-analysis. West. J. Nurs. Res. 2003, 25, 205–222. [Google Scholar] [CrossRef]
  27. Hong, Q.N.; Fàbregues, S.; Bartlett, G.; Boardman, F.; Cargo, M.; Dagenais, P.; Gagnon, M.-P.; Griffiths, F.; Nicolau, B.; O’Cathain, A.; et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ. Inf. 2018, 34, 285–291. [Google Scholar] [CrossRef]
  28. Macdonald, M.; Misener, R.M.; Weeks, L.; Helwig, M. Covidence vs Excel for the title and abstract review stage of a systematic review. JBI Evid. Implement. 2016, 14, 200–201. [Google Scholar] [CrossRef]
  29. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook; 2021. [Google Scholar] [CrossRef]
  30. Rudko, I.; Bashirpour Bonab, A.; Fedele, M.; Formisano, A.V. New institutional theory and AI: Toward rethinking of artificial intelligence in organizations. J. Manag. Hist. 2024, 31, 261–284. [Google Scholar] [CrossRef]
  31. Nguyen Thanh, B.; Son, H.X.; Vo, D.T.H. Blockchain: The Economic and Financial Institution for Autonomous AI? J. Risk Financ. Manag. 2024, 17, 54. [Google Scholar] [CrossRef]
  32. Oke, O.A.; Cavus, N. The role of AI in financial services: A bibliometric analysis. J. Comput. Inf. Syst. 2024, 65, 518–530. [Google Scholar] [CrossRef]
  33. Fedyk, A.; Hodson, J.; Khimich, N.; Fedyk, T. Is artificial intelligence improving the audit process? Rev. Account. Stud. 2022, 27, 938–985. [Google Scholar] [CrossRef]
  34. Gyau, E.B.; Appiah, M.; Gyamfi, B.A.; Achie, T.; Naeem, M.A. Transforming banking: Examining the role of AI technology innovation in boosting banks’ financial performance. Int. Rev. Financ. Anal. 2024, 96, 103700. [Google Scholar] [CrossRef]
  35. Al-Dosari, K.; Fetais, N.; Kucukvar, M. Artificial intelligence and cyber defense system for banking industry: A qualitative study of AI applications and challenges. Cybern. Syst. 2024, 55, 302–330. [Google Scholar] [CrossRef]
  36. Silber, D.; Hoffmann, A.; Belli, A. Embracing AI advisors for making (complex) financial decisions: An experimental investigation of the role of a maximizing decision-making style. Int. J. Bank Mark. 2025, 43, 1325–1346. [Google Scholar] [CrossRef]
  37. Manta, O.; Vasile, V.; Rusu, E. Banking Transformation Through FinTech and the Integration of Artificial Intelligence in Payments. FinTech 2025, 4, 13. [Google Scholar] [CrossRef]
  38. Issa, H.; Dakroub, R.; Lakkis, H.; Jaber, J. Navigating the decision-making landscape of AI in risk finance: Techno-accountability unveiled. Risk Anal. 2025, 45, 808–829. [Google Scholar] [CrossRef]
  39. Ionescu, S.-A.; Diaconita, V.; Radu, A.-O. Engineering Sustainable Data Architectures for Modern Financial Institutions. Electronics 2025, 14, 1650. [Google Scholar] [CrossRef]
  40. De La Rosa, W.; Bechler, C.J. Unveiling the adverse effects of artificial intelligence on financial decisions via the AI-IMPACT model. Curr. Opin. Psychol. 2024, 58, 101843. [Google Scholar] [CrossRef]
  41. Sai, S.; Arunakar, K.; Chamola, V.; Hussain, A.; Bisht, P.; Kumar, S. Generative AI for finance: Applications, case studies and challenges. Expert Syst. 2025, 42, e70018. [Google Scholar] [CrossRef]
  42. Koulis, A.; Kyriakopoulos, C.; Lakkas, I. Artificial Intelligence and Firm Value: A Bibliometric and Systematic Literature Review. FinTech 2025, 4, 54. [Google Scholar] [CrossRef]
  43. Cheong, B.C. Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Front. Hum. Dyn. 2024, 6, 1421273. [Google Scholar] [CrossRef]
  44. Batool, A.; Zowghi, D.; Bano, M. AI governance: A systematic literature review. AI Ethics 2025, 5, 3265–3279. [Google Scholar] [CrossRef]
  45. Imandojemu, K.; Otokiti, S.E.E.; Adebukunola, A.M.; Osabohien, R.; Al-Faryan, M.A.S. Disruptor or enabler? AI and financial system stability. J. Financ. Econ. Policy 2025, 17, 875–903. [Google Scholar] [CrossRef]
  46. Ampatzoglou, A.; Arvanitou, E.M.; Ampatzoglou, A.; Avgeriou, P.; Tsintzira, A.A.; Chatzigeorgiou, A. Architectural decision-making as a financial investment: An industrial case study. Inf. Softw. Technol. 2021, 129, 106412. [Google Scholar] [CrossRef]
  47. Pisoni, G.; Molnár, B.; Tarcsi, Á. Data Science for Finance: Best-Suited Methods and Enterprise Architectures. Appl. Syst. Innov. 2021, 4, 69. [Google Scholar] [CrossRef]
  48. Fritz-Morgenthal, S.; Hein, B.; Papenbrock, J. Financial risk management and explainable, trustworthy, responsible AI. Front. Artif. Intell. 2022, 5, 779799. [Google Scholar] [CrossRef]
Figure 1. PRISMA 2020 flow diagram.
Figure 1. PRISMA 2020 flow diagram.
Bdcc 10 00008 g001
Figure 2. MSVR radar map.
Figure 2. MSVR radar map.
Bdcc 10 00008 g002
Figure 3. Quantile radar regression map.
Figure 3. Quantile radar regression map.
Bdcc 10 00008 g003
Figure 4. Model specification development. (a) Initial SEM; (b) Initial K-RSEM.
Figure 4. Model specification development. (a) Initial SEM; (b) Initial K-RSEM.
Bdcc 10 00008 g004
Figure 5. Radar visualisation of SEM estimation.
Figure 5. Radar visualisation of SEM estimation.
Bdcc 10 00008 g005
Figure 6. Radar visualisation of SEM goodness-of-fit model.
Figure 6. Radar visualisation of SEM goodness-of-fit model.
Bdcc 10 00008 g006
Figure 7. Comparison of multi-group SEM across time periods: (a) Structural path panels (2015–2019); (b) Structural path panels (2020–2025).
Figure 7. Comparison of multi-group SEM across time periods: (a) Structural path panels (2015–2019); (b) Structural path panels (2020–2025).
Bdcc 10 00008 g007
Figure 8. Bias sensitivity and delta-fit robustness contour plot.
Figure 8. Bias sensitivity and delta-fit robustness contour plot.
Bdcc 10 00008 g008
Table 1. Descriptive characteristics of the final corpus.
Table 1. Descriptive characteristics of the final corpus.
CategorySubcategoryCount%
Publication year range2015–201817415.1
2019–202137232.2
2022–202447140.8
202513811.9
Geographical originUnited States23620.4
China19216.6
United Kingdom14312.4
Other countries (44 total)58450.6
AI technologies appliedML47841.4
NLP31227.0
Expert systems19817.1
Other (DL, ANN, Fuzzy Logic)16714.5
Financial sectors addressedBanking53246.1
Insurance27423.7
FinTech/investment34930.2
Data governance focus areasData quality41636.0
Regulatory compliance39233.9
Risk analytics27423.7
Data security and privacy20417.7
Table 2. Multi-supported vector regression radar analysis.
Table 2. Multi-supported vector regression radar analysis.
Dimension Variable MSVR   Weight ( r )
AI techniques
ML0.87
NLP0.74
Expert systems0.59
Fuzzy logic systems0.41
DL0.69
Hybrid AI models0.76
Data governance dimensions
Regulatory compliance0.84
Risk analytics0.79
Data quality management0.72
Data privacy and security0.68
Metadata and data lineage0.55
Master data management0.47
Financial decision outcomes
Fraud detection0.81
Credit scoring0.78
Investment forecasting0.71
Auditing and regulatory reporting0.66
Portfolio risk management0.73
Table 3. Quantile radar regression analysis.
Table 3. Quantile radar regression analysis.
Variable and Dimensionβ (0.25)β (0.50)β (0.75)
ML0.620.760.81
NLP0.510.680.72
Expert systems0.380.490.53
Fuzzy logic0.340.450.48
DL0.550.690.77
Hybrid AI models0.590.710.83
Regulatory compliance0.630.740.86
Risk analytics0.540.710.78
Data quality management0.580.660.70
Data privacy and security0.440.590.63
Metadata and data lineage0.390.520.60
Fraud detection0.570.730.81
Credit scoring0.490.610.69
Investment forecasting0.460.640.73
Portfolio risk management0.520.680.76
Table 4. Model specification metrics.
Table 4. Model specification metrics.
MetricValueAcceptable Threshold
GCV error0.014–0.016As low as possible (<0.05)
Adjusted   R 2 —data governance0.67>0.50 (strong) and >0.60 (very strong)
Adjusted   R 2 —financial decision outcomes0.66>0.50 (strong)
RMSE0.071As low as possible (<0.10)
MAE0.053As low as possible
SRMR0.036<0.08
Kernel function usedRBF
Number of bootstrapped resamples5000≥1000
VIF for all constructs<2.0<3.0
Model convergence statusConvergedRequired
AIC912.34Lower
BIC941.26Lower
Residual variance0.038Near-zero preferred
Table 5. Measurement model.
Table 5. Measurement model.
Latent ConstructCRAVEMSVASVAVEInter-Construct CorrelationDiscriminant Validity
AI integration0.880.590.530.440.7680.728 (with data governance)Established (0.768 > 0.728)
Data governance0.910.660.530.480.8120.728 (with AI integration)Established (0.812 > 0.728)
Financial decision outcomes0.870.610.510.450.7810.714 (with data governance)Established (0.781 > 0.714)
Table 6. Indicator reliability and loading summary.
Table 6. Indicator reliability and loading summary.
Latent ConstructObserved Variable λ p-Value
AI integrationML0.86<0.001
Hybrid AI models0.83<0.001
NLP0.79<0.001
DL0.74<0.001
Expert systems0.61<0.001
Data governanceRegulatory compliance0.89<0.001
Risk analytics0.85<0.001
Data quality0.81<0.001
Data privacy and security0.76<0.001
Metadata management0.68<0.001
Financial decision outcomesFraud detection0.83<0.001
Credit scoring0.79<0.001
Investment forecasting0.74<0.001
Portfolio risk management0.71<0.001
Auditing and reporting0.69<0.001
Table 7. Structural model estimation.
Table 7. Structural model estimation.
Structural Path β t-Valuep-Value95% CI R 2
AI integration → data governance0.7613.42<0.001[0.68, 0.82]0.58
Data governance → financial outcomes0.7312.17<0.001[0.65, 0.80]0.64
AI integration → financial decision outcomes0.7111.03<0.001[0.63, 0.77]0.79
Table 8. Structural model metric.
Table 8. Structural model metric.
Model MetricValueThreshold
R 2 —data governance0.58≥0.26
R 2 —financial decision outcomes0.64≥0.26
Effect size ( f 2 )—AI → governance0.35≥0.35
Effect size ( f 2 )—governance → outcomes0.31≥0.15
Effect size ( f 2 )—AI → outcomes (direct)0.28≥0.15
Variance inflation factor (VIF)—all paths<2.00<3.00
Predictive relevance ( Q 2 )—Financial decision outcomes0.41>0
Table 9. Goodness-of-fit model analysis.
Table 9. Goodness-of-fit model analysis.
Model ComponentEstimatep-Value
Data governance
Regulatory compliance0.93<0.001
Risk analytics0.87<0.001
Data quality0.84<0.001
Data privacy and security0.76<0.001
Metadata management0.68<0.001
AI integration
ML0.86<0.001
NLP0.79<0.001
DL0.74<0.001
Expert systems0.61<0.001
Hybrid AI models0.83<0.001
Financial decision outcomes
Fraud detection0.85<0.001
Credit scoring0.81<0.001
Investment forecasting0.79<0.001
Portfolio risk management0.76<0.001
Auditing and reporting0.73<0.001
Table 10. Summary of the multi-group SEM results across time periods.
Table 10. Summary of the multi-group SEM results across time periods.
Structural Model PathConstraint Specification2015–20192020–2025Fit Change
Direct path: AI → Data governance β 1 (free across groups) β 1 = 0.68 β 1   = 0.80 Δ β = +0.12
Direct path: Data governance → Financial outcomes β 2 (free across groups) β 2   = 0.66 β 2   = 0.77 Δ β = +0.11
Direct path: AI → Financial outcomes β 3 (free across groups) β 3   = 0.61 β 3   = 0.74 Δ β = +0.13
Indirect effect (AI → DG → FO) β _ indirect   =   β 1   ×   β 2 (constrained equal)0.55 (fixed)0.55 (fixed) Δ χ 2 (df = 1)
Variance explained: Data governance R 2 _DG0.460.63+0.17
Variance explained: Financial Outcomes R 2 _FO0.520.71+0.19
Structural invariance (indirect path) H 0 :   β _ indirect   ( 2015 19 )   =   β _indirect (2020–25)AcceptedAcceptedΔCFI < 0.01
Model adequacy (both groups)CFI/RMSEA/SRMRAcceptableAcceptableCFI > 0.94; RMSEA < 0.05
Table 11. Quality appraisal and sensitivity analysis summary.
Table 11. Quality appraisal and sensitivity analysis summary.
CriterionAssessment ToolThresholds Applied n Statistical Indicators
Quality appraisal frameworkMMAT 2018High (≥75%), moderate (50–74%), and low (<50%)High = 557 (48.2%), moderate = 457 (39.6%), and low = 141 (12.2%) κ = 0.86
Study design distributionExperimental/observational/mixed n = 418/506/231 χ 2 = 18.42, p < 0.001
Study design distributionExperimental/Observational/Mixedn = 418/506/231 χ 2   = 18.42, p < 0.001
Transparency and reporting qualityAdapted PRISMA-AI checklistComplete = 63.5%, partial = 27.8%, and incomplete = 8.7%
Governance metric reportingPresence of model-risk/privacy metricsReported = 71.4% and absent = 28.6% φ = 0.64
AI model validation methodsCross-validation/holdout/unspecifiedCross-validation = 62.1%, holdout = 28.3%, and unspecified = 9.6%RMSE = 0.082, 95% CI [0.071, 0.094]
Risk of bias domainsData bias/model bias/reporting biasLow = 68.9%, moderate = 24.7%, and high = 6.4% I 2 = 12.5%
Sensitivity test 1: Study quality exclusionRe-estimate SEM excluding low-quality studiesΔCFI = +0.004, ΔRMSEA = −0.002p < 0.001 β ± 0.01
Sensitivity test 2: quantile invarianceQuantile regression (0.25–0.75) ρ = 0.91 (p < 0.001)
Sensitivity test 3: outlier influenceJackknife resampling (−5%) Δ β < 0.02
Sensitivity test 4: latent construct invarianceMulti-group SEM ( χ 2 ΔTest) χ 2 Δ = 4.83, p = 0.312
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Choowan, P.; Daovisan, H. Artificial Intelligence in Data Governance for Financial Decision-Making: A Systematic Review. Big Data Cogn. Comput. 2026, 10, 8. https://doi.org/10.3390/bdcc10010008

AMA Style

Choowan P, Daovisan H. Artificial Intelligence in Data Governance for Financial Decision-Making: A Systematic Review. Big Data and Cognitive Computing. 2026; 10(1):8. https://doi.org/10.3390/bdcc10010008

Chicago/Turabian Style

Choowan, Phaktada, and Hanvedes Daovisan. 2026. "Artificial Intelligence in Data Governance for Financial Decision-Making: A Systematic Review" Big Data and Cognitive Computing 10, no. 1: 8. https://doi.org/10.3390/bdcc10010008

APA Style

Choowan, P., & Daovisan, H. (2026). Artificial Intelligence in Data Governance for Financial Decision-Making: A Systematic Review. Big Data and Cognitive Computing, 10(1), 8. https://doi.org/10.3390/bdcc10010008

Article Metrics

Article metric data becomes available approximately 24 hours after publication online.
Back to TopTop