Article

Algorithmic Fairness and Digital Financial Stress: Evidence from AI-Driven E-Commerce Platforms in OECD Economies

1 College of Business Administration, Henan Finance University, Zhengzhou 451464, China
2 Department of Chinese Trade and Commerce, Sejong University, Seoul 05006, Republic of Korea
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 213; https://doi.org/10.3390/jtaer20030213
Submission received: 21 June 2025 / Revised: 8 August 2025 / Accepted: 12 August 2025 / Published: 14 August 2025

Abstract

This study examines the role of algorithmic fairness in alleviating digital financial stress among consumers across OECD countries, utilizing panel data spanning from 2010 to 2023. By introducing a digital financial stress index—constructed from indicators such as household credit dependence, digital debt penetration, digital default rates, and financial complaint frequencies—the research quantitatively captures consumer financial anxieties within AI-driven e-commerce platforms. Employing two-way fixed-effects regression and system-GMM methods to address endogeneity and dynamic panel biases, findings robustly indicate that increased algorithmic fairness significantly reduces digital financial stress. Furthermore, the moderating analysis highlights digital literacy as a critical factor amplifying fairness effectiveness, revealing that digitally proficient societies derive greater psychological and economic benefits from equitable algorithmic practices. These results contribute to existing scholarship by extending discussions of algorithmic ethics from individual-level analyses to a macroeconomic perspective. Ultimately, this research underscores algorithmic fairness as a crucial policy lever for promoting consumer welfare, calling for integrated national strategies encompassing ethical algorithm governance alongside enhanced digital education initiatives within OECD contexts.

1. Introduction

The accelerated integration of artificial intelligence across global digital commerce ecosystems has not only transformed consumer engagement and financial decision-making processes but also introduced complex ethical and regulatory dilemmas. Within this landscape, AI-driven e-commerce platforms have evolved into pivotal economic infrastructures, deploying advanced algorithms to enable hyper-personalized marketing, dynamic pricing, and seamless transactional experiences. However, alongside these efficiencies, increasing attention is being directed toward the ethical dimensions of algorithmic governance, particularly the imperative of ensuring algorithmic fairness, which refers to the elimination of discriminatory or biased outcomes toward specific consumer groups (Mittelstadt et al. [1]; Martin [2]). Recent controversies—including the exposure of algorithmic biases in credit scoring, price personalization, and recommendation systems—have intensified scrutiny of how algorithmic decisions may disproportionately burden vulnerable segments of the population. These concerns are especially salient in OECD economies, where high levels of digital adoption intersect with complex financial ecosystems and evolving data governance structures. Despite a growing body of research that explores consumer-level perceptions of algorithmic fairness in relation to trust, loyalty, and brand engagement (Habibi et al. [3]; Frasquet et al. [4]; Coelho et al. [5]), a critical knowledge gap persists: the broader macroeconomic consequences of algorithmic fairness, particularly its capacity to alleviate systemic financial distress, remain underexplored. This study is situated precisely within this underdeveloped research domain. It addresses the pressing need for a rigorous, cross-national assessment of how fairness-oriented algorithmic design influences economic outcomes beyond the firm or individual level. Specifically, it investigates whether fair algorithmic practices can mitigate digital financial stress as a multidimensional construct encompassing consumer anxiety driven by digital debt accumulation, payment defaults, and perceptions of financial insecurity in algorithm-mediated environments. While prior studies have made valuable contributions at the micro level, they often rely on single-country analyses and cross-sectional data, limiting causal inference and external validity. Moreover, few have engaged with the technical and structural challenges of quantifying algorithmic fairness impacts across national economies.
To bridge these critical research gaps, this study explicitly examines whether and how algorithmic fairness embedded within AI-driven e-commerce platforms influences digital financial stress across OECD nations. This study introduces a digital financial stress index, constructed from multiple indicators reflecting household credit reliance, digital debt adoption, online transaction defaults, and consumer complaints related to digital financial services. Methodologically, this research utilizes robust panel econometric techniques, including two-way fixed-effects regression and dynamic system–GMM estimation, explicitly designed to mitigate common endogeneity issues and temporal dependencies inherent in cross-national panel datasets. Moreover, to offer insights beyond direct effects, the study innovatively investigates digital literacy as a critical moderating factor influencing algorithmic fairness efficacy. In doing so, it not only systematically quantifies the direct impact of algorithmic governance but also uniquely explores conditional relationships, highlighting the circumstances under which fairness interventions can most effectively reduce consumer financial stress. This comprehensive analytical approach thus substantially advances existing theoretical frameworks and empirical methodologies, setting this investigation apart from prior research focused exclusively on isolated consumer-level analyses.
This study offers several distinct contributions to the scholarly discourse on algorithmic ethics and digital economic wellbeing. First, by providing robust cross-national evidence that algorithmic fairness significantly reduces macro-level digital financial stress, it extends prevailing conceptualizations of fairness beyond consumer-level trust or satisfaction, positioning it instead as a structural determinant of economic stability. Second, it advances empirical methodology through the construction and application of a multidimensional digital financial stress index, thereby filling a key measurement gap in the literature on digital economy resilience. Third, it integrates digital literacy into the analysis as a moderating factor, demonstrating that the benefits of fairness are amplified in digitally proficient societies—an aspect largely overlooked in prior research. Together, these elements not only expand the theoretical framing of algorithmic fairness but also provide a macroeconomic perspective capable of bridging digital ethics and economic policy debates across OECD contexts. The findings also generate policy-relevant insights for governments, regulators, and industry stakeholders. They indicate that fairness-oriented algorithmic governance should be embedded as a proactive mechanism for mitigating systemic economic vulnerabilities. In OECD economies, where high digital penetration intersects with complex financial ecosystems, targeted regulatory frameworks can institutionalize fairness audits and bias-mitigation protocols for AI-driven platforms. Moreover, national digital literacy initiatives—particularly those focused on older populations and economically vulnerable groups—can enhance the efficacy of fairness reforms, ensuring equitable participation in digital markets. Finally, structured collaborations between public agencies and digital commerce platforms to monitor, evaluate, and report algorithmic fairness can foster transparency and trust, contributing to the development of resilient, inclusive, and sustainable digital economies.

2. Literature Review

Algorithmic fairness and digital financial stress have shifted from peripheral topics to central themes in economics and information systems, as AI-driven e-commerce platforms now mediate credit offers, prices, and advice in everyday transactions across OECD markets. Fairness—initially formalized within computer science as equitable algorithmic treatment—entered mainstream economic inquiry once it became evident that model design, data provenance, and deployment choices shape household financial behavior and market outcomes (Kleinberg et al. [6]; Cowgill et al. [7]; Heinrichs [8]). Seminal contributions document how fairness relates to discrimination control within recommendation and dynamic pricing systems and how perceived justice in automated decisions conditions trust and allocative efficiency (Binns et al. [9]; Obermeyer et al. [10]). In parallel, research on digital financial stress expanded with the diffusion of platform-based credit and payment tools, conceptualizing a form of consumer anxiety rooted in online debt accumulation, heightened financial vulnerability, and transactional uncertainty (French and McKillop [11]; Agarwal et al. [12]). More recent theoretical work links these strands by embedding fairness within behavioral economics and algorithmic governance frameworks, clarifying how design principles propagate to financial choices and well-being (Adomavicius and Yang [13], Yang and Lee [14], and Brüggen et al. [15]). A critical insight running through this literature is that fairness is multidimensional and contested. Choices among metrics—such as parity of outcomes, parity of error rates, or calibration—can yield different winners and losers, particularly when base rates or data quality differ across groups. As a result, platforms face accuracy–fairness dilemmas: constraints that reduce disparate errors may raise aggregate loss or alter ranking quality, while purely accuracy-seeking models can entrench inequities. These tensions are mirrored at the user level. Group-based approaches seek distributional parity across protected attributes, whereas individual-based notions emphasize similar treatment for similarly situated consumers; yet, these orientations are not always simultaneously attainable. The policy relevance is immediate. In retail credit scoring, insurance pricing, and targeted promotions, the chosen fairness criterion determines not only how risk is allocated but also how stress is distributed across the population. When fairness increases transparency and contestability, it can improve trust and reduce perceived precarity; when it degrades predictive performance or constrains personalization, it may shift costs in ways that are not neutral for welfare. Consequently, rigorous empirical studies must move beyond binary claims that “fairness helps” or “hurts,” and instead quantify which fairness definitions operate, the performance costs they impose, and the conditions under which they moderate digital financial stress.
Methodologically, the evidence base has diversified from early qualitative work to designs that better support inference in digital settings. Initial studies used interviews, case studies, and single-country surveys to elicit perceptions of algorithmic mediation and rights in automated decisions, yielding rich constructs but limited external validity (Lee and Baykal [16]; Veale and Edwards [17]). Subsequent research adopted experimental and econometric strategies—ranging from field and lab experiments to panel designs—that link fairness interventions to measurable changes in trust, engagement, and financial behavior (Raghavan et al. [18]; Su et al. [19]). Within economics, credible identification has relied on theory-guided experimentation and quasi-experimental variation, alongside panel estimators that address unobserved heterogeneity and endogeneity (Card et al. [20]; Liu and Zhang [21]). Yet, important gaps persist. Many studies still operationalize a single fairness notion and report average effects, masking heterogeneity across groups and contexts. Few designs quantify the cost of constraints in terms of predictive loss, treatment accuracy, or revenue, making it difficult to evaluate welfare trade-offs. Trials often measure short-run sentiment or clicks rather than downstream financial outcomes such as delinquency, default, or complaints. External validity is strained by model drift and by shifts in platform incentives after constraints are imposed. Across empirical approaches, the choice between group-based and individual-based fairness is rarely treated as a testable design dimension. As a result, estimates of the “effect of fairness” can conflate disparate mechanisms: statistical disparity reduction, transparency and contestability, or perceived procedural justice. For platforms and regulators, credible evidence requires designs that map fairness definitions to outcomes, report accuracy–fairness frontiers, and separate perceptual from behavioral effects. Such designs should also model equilibrium responses of consumers and firms—who may retarget, reprice, or adjust risk thresholds when constraints are introduced—so that estimated effects reflect the policy-relevant counterfactual and remain replicable across jurisdictions under consistent measurement protocols.
Notwithstanding real progress, unresolved theoretical and empirical challenges continue to constrain knowledge and policy guidance. The literature remains heavily micro-focused, emphasizing individual perceptions and firm-level experiments while overlooking system-level consequences. In markets where AI intensifies intermediation, platform choices can propagate to macroeconomic volatility, liquidity, and household balance sheets, yet comparisons across countries are scarce (Yadava [22]). Moreover, the institutional context of outsourcing, governance capability, and accountability norms shapes how fairness constraints are implemented and enforced, but these meso-level determinants are seldom modeled with consumer outcomes (Sharma et al. [23]). Measurement presents a second bottleneck. Studies employ disparate indicators for both fairness and stress, hindering synthesis. Standardized, validated, multidimensional measures—anchored in resilience, vulnerability, and exposure—are still developing, limiting policy relevance (Salignac et al. [24]; Rahman and Sadik [25]). This heterogeneity complicates the assessment of trade-offs. When one study targets error-rate parity and another calibration, estimated effects on stress are not commensurable; even the direction of welfare change can differ with metric choice. A third limitation concerns time. Cross-sections and short panels remain common, which blunts inference on dynamics, adaptation, and spillovers. Yet, constraints likely trigger equilibrium responses: credit reallocation, repricing, alternative targeting, or substitution to other channels. Without longitudinal designs that capture these adjustments, estimated effects may be transient or biased. Finally, the role of moderators has not been fully theorized. Digital literacy is central to how people parse explanations, contest decisions, and act on information, but heterogeneity in literacy is rarely tied to the fairness construct. Group-based and individual-based approaches may demand different levels of user comprehension and forms of disclosure; the same constraint could reduce stress for one population while increasing it for another. The field lacks a framework that connects fairness definitions to accuracy costs, behavior, and outcomes in a way that is measurable and relevant across OECD contexts.
This study is designed to address these gaps with an explicitly macroeconomic lens and transparent measurement. A cross-national panel for OECD economies supports two-way fixed effects and system–GMM estimation, enabling control of unobserved heterogeneity and dynamics. A digital financial stress index aggregates validated indicators to ensure comparability across countries and time. Fairness is operationalized in line with platform practice, and empirical tests report accuracy–fairness frontiers to make cost–benefit trade-offs explicit. Digital literacy is modeled as a moderator to reveal conditions under which fairness reduces stress most, advancing a framework that links definitions, performance costs, and welfare outcomes and policy.

3. Variables and Method

3.1. Variables

Dependent Variable: The digital financial stress index (DFSI) is conceptualized in this study as a theoretically grounded, multidimensional composite metric that captures the psychological and financial burdens experienced by consumers within algorithmically mediated digital economies. Anchored in recent advances in behavioral economics and digital finance, the DFSI reflects structural vulnerabilities specific to AI-driven commerce environments. It synthesizes four empirically validated indicators, each representing a distinct yet interrelated dimension of consumer-level stress arising from digital financial interactions. The first component—the share of household consumption financed through credit—serves as a proxy for behavioral dependency on debt-fueled expenditure, indicating latent financial fragility in OECD households in 2023. The second component—the national penetration of Buy-Now-Pay-Later (BNPL) services—captures the diffusion of digitally embedded short-term credit mechanisms, which prior research links to impulsive decision-making and heightened financial exposure (World Bank Findex in 2023). The third indicator—the digital default rate—sourced from IMF records, reflects materialized financial distress specifically attributable to e-commerce credit activities. Lastly, the volume of consumer financial complaints associated with online transactions, reported per 10,000 inhabitants, captures dissatisfaction, informational asymmetry, and the psychological strain stemming from adverse algorithmic or service encounters (OECD in 2023). To ensure empirical rigor and construct validity, all variables undergo min–max normalization and are subsequently aggregated using principal component analysis. This approach enables the objective calibration of indicator weights, minimizing subjectivity in composite index construction. The resulting index aligns with prevailing methodological standards for multidimensional measurement in digital economy research and reflects recent calls for integrative metrics to capture emerging financial stressors in online environments (Deng et al. [26]; Pan et al. [27]; Bruno et al. [28]). By integrating both behavioral and structural dimensions of digital financial strain, the DFSI provides a conceptually coherent and empirically robust instrument capable of capturing the latent economic and psychological pressures consumers face in highly digitized economies. It thus offers a significant advance over unidimensional or perception-based proxies prevalent in earlier literature.
To reinforce its conceptual integrity and empirical validity, the construction of the DFSI was subjected to principal component analysis (PCA), applied to standardized panel data from 32 OECD countries spanning 2010 to 2023. The PCA results confirm the presence of a dominant latent factor, with the first principal component explaining 71.3% of the total variance. Factor loadings for the four constituent indicators—credit-financed consumption, BNPL penetration, digital default rate, and e-commerce complaint frequency—were all balanced and positively signed, ranging between 0.487 and 0.511, indicating that each variable contributes substantively and proportionally to the underlying construct of digital financial stress. These results support the theoretical presumption that financial anxiety in digital contexts emerges from the joint dynamics of behavioral indebtedness, payment vulnerability, and psychological distress induced by algorithmic financial services. To further confirm the appropriateness of aggregation, the Kaiser–Meyer–Olkin (KMO) measure stood at 0.768, while Bartlett’s test of sphericity was significant at the 1% level, affirming the adequacy of the data for dimensionality reduction. Table 1 below summarizes the core statistics. Collectively, these diagnostic tests provide robust evidence that the DFSI captures a single, cohesive latent dimension of digital financial strain and can be justifiably interpreted as an integrated, macro-behavioral indicator.
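To make the construction concrete, the sketch below illustrates the normalize-then-aggregate pipeline described above in Python. It is a minimal illustration, not the authors' released code: the file name and indicator column names (e.g., oecd_panel.csv, credit_share, bnpl_penetration) are hypothetical placeholders, and the diagnostics assume the scikit-learn and factor_analyzer packages.

```python
import pandas as pd
from sklearn.decomposition import PCA
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical tidy panel: one row per country-year with the four DFSI indicators.
panel = pd.read_csv("oecd_panel.csv")
indicators = ["credit_share", "bnpl_penetration", "digital_default", "complaints_per_10k"]

# Min-max normalization of each indicator to [0, 1].
X = panel[indicators]
X_norm = (X - X.min()) / (X.max() - X.min())

# Adequacy diagnostics reported in the text: Bartlett's sphericity and KMO.
chi_sq, p_value = calculate_bartlett_sphericity(X_norm)
_, kmo_overall = calculate_kmo(X_norm)
print(f"Bartlett chi2 = {chi_sq:.2f} (p = {p_value:.4f}), KMO = {kmo_overall:.3f}")

# First principal component serves as the composite; loadings act as indicator weights.
pca = PCA(n_components=1)
scores = pca.fit_transform(X_norm)[:, 0]
print("Variance explained by PC1:", pca.explained_variance_ratio_[0])
print("Loadings:", dict(zip(indicators, pca.components_[0])))

# Rescale the component score to [0, 1] so that higher values mean more digital financial stress.
panel["dfsi"] = (scores - scores.min()) / (scores.max() - scores.min())
```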
Independent Variable: In exploring the determinants of consumer financial and psychological outcomes in digital commerce environments, two key independent variables are meticulously defined: AI transparency score and platform algorithm fairness index. These variables directly engage with contemporary discourse on ethical governance and fairness in digital markets, highlighting concerns increasingly emphasized by recent scholarship. Specifically, the AI transparency score, derived from the AI Governance Subcomponent of the Oxford Insights AI Readiness Index in 2023, quantitatively captures the extent to which national e-commerce platforms disclose the logic underlying their algorithmic recommendation processes. This measure also reflects consumer empowerment through platform functionalities, such as the ability to view and adjust personal preference settings, thus embodying transparency principles critical to enhancing user autonomy and trust (Felzmann et al. [29]; Mirghaderi et al. [30]). Scores are standardized on a 0–100 scale, with higher values indicating greater algorithmic transparency and better safeguarding of consumer informational rights. Complementing this dimension, the platform algorithm fairness index, sourced from the Fairness Subindex within the ITU Digital Inclusion Index in 2023, provides an authoritative evaluation of the extent to which platforms systematically avoid discriminatory pricing or biased recommendations based on age, income, gender, or other demographic factors. Higher fairness scores signal more equitable and inclusive algorithmic practices across consumer segments, aligning with recent empirical evidence demonstrating that algorithmic biases significantly exacerbate economic disparities and consumer distrust (Akter et al. [31]; Fabris et al. [32]; Hacker et al. [33]). Taken together, these variables comprehensively operationalize algorithmic ethics, reflecting theoretical foundations in behavioral economics and digital governance and offering robust empirical proxies capable of illuminating how transparency and fairness in AI-driven platforms shape consumer wellbeing at a macroeconomic scale.
Control Variables: In order to isolate and precisely estimate the relationship between algorithmic fairness, transparency, and digital financial stress, this study systematically integrates a robust set of control variables that represent pivotal macroeconomic and sociodemographic characteristics, as emphasized by the current academic discourse. First, GDP per capita, measured in constant US dollars from the World Bank in 2023, is incorporated to reflect national economic prosperity and purchasing power, factors inherently linked to consumer behavior and financial security (Nanda and Banerjee [34]). Second, digital literacy rate—the proportion of individuals proficient in basic digital skills according to OECD statistics in 2023—is introduced to control for differences in national capacity to critically engage with digital financial services and AI-driven commerce, considering recent evidence demonstrating digital literacy’s profound influence on online economic participation (Chetty et al. [35]; Tirado-Morueta et al. [36]; Chen et al. [37]). Third, internet penetration, operationalized as the number of internet users per 100 inhabitants from ITU in 2023, accounts for the infrastructural accessibility underpinning digital market engagement, a foundational determinant of e-commerce adoption extensively analyzed in contemporary studies (Ariansyah et al. [38]; Hendricks and Mwapwele [39]). Additionally, credit card ownership, captured by the percentage of the adult population holding active credit cards sourced from the World Bank Findex in 2023, addresses variations in consumer financial sophistication and credit accessibility, as widespread credit availability can significantly influence financial behaviors and stress dynamics in digital transactions (Carlsson et al. [40]; Koskelainen et al. [41]; Charfeddine et al. [42]). Further, the unemployment rate, provided by OECD in 2023, is incorporated to represent economic uncertainty and labor market distress, since higher unemployment typically corresponds with increased consumer financial anxiety and vulnerability in digital credit scenarios (Ghosh [43]; Yagil and Cohen [44]; Ahamed and Limbu [45]). Lastly, ageing population ratio, indicated by the percentage of the population aged 65 and above by the statistics in 2023, accounts for demographic shifts profoundly influencing financial risk tolerance, consumption patterns, and the overall interaction with digital platforms; aging demographics have consistently been associated with more conservative financial behaviors and heightened digital transaction hesitancy in the recent empirical literature (Seldal and Nyhus [46]; Wang and Mao [47]). Collectively, these meticulously selected control variables ensure that the observed relationships among algorithmic fairness, transparency, and consumer financial stress are contextualized within the broader socioeconomic and demographic realities of OECD nations, thus enhancing the validity and robustness of the study’s empirical findings.

3.2. Method

In alignment with this study’s overarching aim, this analysis is designed to address three core research questions: (i) Does greater algorithmic fairness on AI-driven e-commerce platforms significantly reduce digital financial stress among consumers in OECD countries?; (ii) To what extent does digital literacy moderate this relationship?; and (iii) Are the observed effects robust across alternative measures of algorithmic governance and estimation techniques? Correspondingly, the study proposes the following hypotheses: H1—Higher levels of algorithmic fairness are associated with lower levels of digital financial stress; H2—This negative relationship is amplified in countries with higher digital literacy; and H3—The fairness–stress nexus remains robust when alternative governance indicators and econometric specifications are employed. These questions and hypotheses directly operationalize the main research objective: to provide a rigorous, cross-national assessment of the role of algorithmic fairness as a structural determinant of consumer financial well-being within OECD digital economies. To assess the impact of algorithmic fairness on digital financial stress across OECD economies, it is crucial to select an appropriate econometric approach capable of controlling for unobserved heterogeneity at both the national and temporal levels. The recent econometric literature emphasizes the appropriateness of fixed-effects panel regression methodologies for analyzing datasets characterized by repeated observations over multiple countries and years, as they efficiently address omitted variable bias arising from country-specific characteristics and common global shocks (Hazlett and Wainstein [48]; Rüttenauer and Ludwig [49]; Breuer and DeHaan [50]). Moreover, the Hausman specification test conducted for this study yielded a statistically significant chi-square value ($\chi^2 = 38.562$, $p < 0.001$), unequivocally confirming the superiority of the fixed-effects model over a random-effects specification. Thus, this empirical analysis proceeds by employing a two-way fixed-effects panel regression model with country and year fixed effects, as formally illustrated in Equation (1):
$$\mathit{dfsi}_{i,t} = a_0 + a_1\,\mathit{fairness}_{i,t} + a_2\,\mathit{gdpp}_{i,t} + a_3\,\mathit{digital}_{i,t} + a_4\,\mathit{internet}_{i,t} + a_5\,\mathit{credit}_{i,t} + a_6\,\mathit{unemployment}_{i,t} + a_7\,\mathit{ageing}_{i,t} + \eta_i + \mu_t + \epsilon_{i,t} \quad (1)$$
In Equation (1), the constant term is denoted by $a_0$, while coefficients $a_1$ through $a_7$ represent the parameters to be estimated, capturing the effects of key explanatory and control variables on the digital financial stress index ($\mathit{dfsi}$). Specifically, the platform algorithm fairness index ($\mathit{fairness}$) is the central independent variable, whereas GDP per capita ($\mathit{gdpp}$, constant 2015 USD), digital literacy rate ($\mathit{digital}$), internet penetration ($\mathit{internet}$), credit card ownership ($\mathit{credit}$), unemployment rate ($\mathit{unemployment}$), and ageing population ratio ($\mathit{ageing}$) serve as essential controls. The term $\eta_i$ indicates unobservable country-specific fixed effects, and $\mu_t$ represents year-specific fixed effects, thereby systematically controlling for invariant national characteristics and common temporal shocks. The residual term, denoted by $\epsilon_{i,t}$, represents the stochastic error component assumed to have standard properties. Particular emphasis is placed on the coefficient $a_1$, as it quantifies the impact of algorithmic fairness on digital financial stress. A statistically significant positive estimate for $a_1$ would indicate that enhanced fairness in algorithmic practices paradoxically exacerbates digital financial stress, possibly due to increased cognitive complexity or decision fatigue among consumers. Conversely, a significantly negative $a_1$ would suggest that greater algorithmic fairness effectively mitigates consumer financial anxiety and related distress, aligning with behavioral economic theory emphasizing trust and perceived equity in reducing consumer stress. A non-significant result would imply neutrality in the relationship between algorithmic fairness and consumer financial stress.
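As an illustration of how Equation (1) can be estimated, the sketch below fits a two-way fixed-effects model with the linearmodels package. The data file and column names are hypothetical stand-ins for the variables defined above, not the authors' actual code or dataset.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel with one row per country-year; columns mirror Equation (1).
panel = pd.read_csv("oecd_panel.csv")
data = panel.set_index(["country", "year"])  # MultiIndex (entity, time) required by linearmodels

regressors = ["fairness", "gdpp", "digital", "internet", "credit", "unemployment", "ageing"]

# Two-way fixed effects (country and year); the intercept is absorbed by the effects.
# Standard errors are clustered at the country level.
model = PanelOLS(data["dfsi"], data[regressors], entity_effects=True, time_effects=True)
results = model.fit(cov_type="clustered", cluster_entity=True)
print(results.summary)

# The coefficient on `fairness` corresponds to a1 in Equation (1): a negative and
# significant estimate is read as fairness reducing digital financial stress.
print("a1 (fairness):", results.params["fairness"], "p =", results.pvalues["fairness"])
```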
To reinforce the robustness of the estimated relationship, two complementary empirical strategies were pursued. Firstly, the original independent variable—algorithm fairness—was replaced by AI transparency in a parallel regression analysis, assessing whether the results remain consistent across similar but distinct dimensions of algorithmic ethics. Secondly, the model was re-estimated employing a system generalized method of moments (system–GMM) approach, as shown explicitly in Equation (2). This technique addresses potential endogeneity concerns and ensures the reliability and consistency of the empirical findings, following recent methodological recommendations in the panel econometrics literature (Han and Phillips [51]; Lee and Yu [52]; Breitung et al. [53]):
$$\mathit{dfsi}_{i,t} = b_0 + b_1\,\mathit{dfsi}_{i,t-1} + b_2\,\mathit{fairness}_{i,t} + b_3\,\mathit{gdpp}_{i,t} + b_4\,\mathit{digital}_{i,t} + b_5\,\mathit{internet}_{i,t} + b_6\,\mathit{credit}_{i,t} + b_7\,\mathit{unemployment}_{i,t} + b_8\,\mathit{ageing}_{i,t} + \mu_t + \epsilon_{i,t} \quad (2)$$
In Equation (2), $b_0$ denotes the intercept term, capturing baseline effects independent of explanatory variables, whereas the parameters $b_1$ through $b_8$ represent the coefficients subject to empirical estimation. These coefficients quantify the individual contributions and directional relationships of each independent variable in the model, providing insights into their relative influence on the dependent variable. Following established econometric conventions (Arellano and Bond [54]; Roodman [55]), these estimations offer robust statistical foundations essential for interpretation and informed policy implications within the analytical framework adopted herein.
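Full Blundell–Bond system-GMM estimation of Equation (2) is usually carried out with dedicated dynamic-panel software. As a simplified, clearly hypothetical illustration of the underlying logic only, the sketch below first-differences the equation and instruments the lagged differenced DFSI with its second lag in levels (an Anderson–Hsiao-style estimator) using linearmodels. File and column names are assumptions, and time dummies as well as the full system-GMM instrument set are omitted for brevity.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical panel sorted within country by year.
panel = pd.read_csv("oecd_panel.csv").sort_values(["country", "year"])

# Lags of the dependent variable, computed within each country.
panel["dfsi_l1"] = panel.groupby("country")["dfsi"].shift(1)
panel["dfsi_l2"] = panel.groupby("country")["dfsi"].shift(2)

# First differences of the dependent variable, its lag, and the regressors.
diff_cols = ["dfsi", "dfsi_l1", "fairness", "gdpp", "digital", "internet",
             "credit", "unemployment", "ageing"]
panel[["d_" + c for c in diff_cols]] = panel.groupby("country")[diff_cols].diff()
est = panel.dropna()

# Differenced Equation (2): the endogenous lagged difference d_dfsi_l1 is
# instrumented by the second lag of dfsi in levels (Anderson-Hsiao logic).
exog_cols = ["d_fairness", "d_gdpp", "d_digital", "d_internet",
             "d_credit", "d_unemployment", "d_ageing"]
iv = IV2SLS(dependent=est["d_dfsi"], exog=est[exog_cols],
            endog=est["d_dfsi_l1"], instruments=est["dfsi_l2"])
results = iv.fit(cov_type="clustered", clusters=est["country"])
print(results.summary)
```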
To comprehensively examine the relationship between algorithmic governance and consumer financial wellbeing within AI-driven e-commerce environments, it is essential to consider the moderating role of digital literacy across OECD countries. The existing literature underscores digital literacy as a critical determinant influencing consumer behavior and decision-making processes in digital contexts, potentially shaping responses to transparency and fairness in algorithmic systems (Shin et al. [56]; Shin et al. [57]). Consequently, it is plausible that nations with higher digital literacy may empower consumers to better interpret and leverage algorithmic transparency and fairness mechanisms, thereby effectively alleviating digital financial stress. To test this interaction, the current study incorporates multiplicative interaction terms between digital literacy rate and each of the core explanatory variables—AI transparency and platform algorithm fairness—into the econometric specification. By doing so, Equation (3) explicitly captures whether the mitigating effects of transparency and fairness on consumer digital financial stress are amplified in countries characterized by higher digital literacy levels. Formally, this moderated relationship is represented as follows in Equation (3):
$$\mathit{dfsi}_{i,t} = c_0 + c_1\,\mathit{fairness}_{i,t} + c_2\,\mathit{gdpp}_{i,t} + c_3\,\mathit{digital}_{i,t} + c_4\,(\mathit{fairness}_{i,t} \cdot \mathit{digital}_{i,t}) + c_5\,\mathit{internet}_{i,t} + c_6\,\mathit{credit}_{i,t} + c_7\,\mathit{unemployment}_{i,t} + c_8\,\mathit{ageing}_{i,t} + \eta_i + \mu_t + \epsilon_{i,t} \quad (3)$$
In Equation (3), $c_0$ denotes the intercept, while the coefficients $c_1$ through $c_8$ quantify the direct and interactive effects of the independent and control variables. A significant negative interaction term ($c_4 < 0$) would indicate that the beneficial impacts of algorithmic fairness on alleviating digital financial stress are indeed contingent upon higher levels of digital literacy, thus contributing valuable insights to contemporary policy discussions and theoretical frameworks within digital economy research (Ekpo et al. [58]; Kordzadeh and Ghasemaghaei [59]; Chugh and Jain [60]).
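The moderation specification in Equation (3) can be sketched in the same framework, again with hypothetical file and column names; the interaction is formed as the raw product of the fairness and digital literacy series, matching the multiplicative term in the equation.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

panel = pd.read_csv("oecd_panel.csv")

# Multiplicative interaction between fairness and digital literacy, as in Equation (3).
panel["fair_x_digital"] = panel["fairness"] * panel["digital"]

data = panel.set_index(["country", "year"])
cols = ["fairness", "digital", "fair_x_digital", "gdpp", "internet",
        "credit", "unemployment", "ageing"]

model = PanelOLS(data["dfsi"], data[cols], entity_effects=True, time_effects=True)
results = model.fit(cov_type="clustered", cluster_entity=True)

# c4 in Equation (3): a negative, significant interaction implies fairness lowers
# stress more strongly in countries with higher digital literacy.
print(results.params["fair_x_digital"], results.pvalues["fair_x_digital"])
```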

4. Results and Discussion

4.1. Basic Statistical Analysis

To provide a solid empirical foundation for subsequent econometric analyses, it is essential to first present a detailed descriptive statistical overview of all variables employed in this study. Such initial statistical insights offer critical contextual understanding by illustrating the distributional characteristics and variability inherent in the dataset. Specifically, key statistical indicators—including means, standard deviations, minima, and maxima—are systematically reported for each variable. This descriptive summary not only helps to detect potential outliers or extreme values but also provides preliminary insights into cross-country heterogeneity and temporal variations among OECD economies. Consequently, Table 2 succinctly presents these descriptive statistics.
The descriptive statistics presented in Table 2 offer essential preliminary insights into the dataset’s characteristics, facilitating a deeper understanding of the variability and central tendencies among the variables across OECD economies. The digital financial stress index, the primary dependent variable, exhibits an average value of 0.423 with moderate variability (standard deviation of 0.128), suggesting notable but manageable cross-country heterogeneity in digital-related consumer financial anxiety. Algorithm fairness and AI transparency, the two key independent variables, indicate relatively high average levels (74.612 and 69.235, respectively), coupled with moderate dispersion, reflecting diverse but generally favorable algorithmic governance practices within OECD nations. The control variables also present informative patterns: logged GDP per capita shows a mean of 10.346 with limited variation, underscoring a relatively homogeneous economic status among OECD countries. Digital literacy (mean = 79.452%) and internet penetration rates (mean = 87.651%) demonstrate widespread digital adoption with moderate national differences, while credit card ownership averages at 68.723%, signifying extensive financial infrastructure accessibility. Unemployment rates and ageing population ratios exhibit wider dispersion, reflecting varied socioeconomic conditions and demographic structures. These preliminary statistical insights serve as an essential foundation for further analytical procedures. To assess potential multicollinearity among these variables and ensure robust econometric estimations, a correlation analysis is subsequently presented in Table 3.
The correlation analysis results summarized in Table 3 illuminate critical preliminary relationships among key variables central to this study. As expected, the platform algorithm fairness index demonstrates a strongly negative and statistically significant correlation with the digital financial stress index, indicating that enhanced fairness in algorithmic practices is systematically associated with reduced consumer financial stress across OECD economies. Among control variables, logged GDP per capita, digital literacy rate, Internet penetration, and credit card ownership all exhibit significant negative correlations with the DFSI, aligning with theoretical expectations that higher economic prosperity, digital proficiency, internet accessibility, and financial sophistication mitigate digital financial stress. Conversely, the unemployment rate and ageing population ratio display significant positive correlations with the digital financial stress index, consistent with the existing literature linking labor market instability and demographic ageing to elevated financial anxiety. Importantly, correlations among independent and control variables remain relatively modest (absolute values below 0.4), suggesting minimal risk of multicollinearity and thus ensuring the robustness and interpretability of subsequent regression analyses. These findings provide a robust empirical rationale for deeper econometric exploration.
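This descriptive and correlation screening can be reproduced with a few lines of pandas. The variance inflation factors at the end of the sketch are a complementary collinearity check not reported in the paper, which relies on pairwise correlations; file and column names remain hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

panel = pd.read_csv("oecd_panel.csv")
cols = ["dfsi", "fairness", "gdpp", "digital", "internet", "credit",
        "unemployment", "ageing"]

# Descriptive statistics (Table 2 analogue) and pairwise correlations (Table 3 analogue).
print(panel[cols].describe().T[["mean", "std", "min", "max"]].round(3))
print(panel[cols].corr().round(3))

# Variance inflation factors for the regressors; values well below 10 would support
# the claim that multicollinearity is not a practical concern.
X = sm.add_constant(panel[cols[1:]])
vif = {name: variance_inflation_factor(X.values, i)
       for i, name in enumerate(X.columns) if name != "const"}
print(pd.Series(vif).round(2))
```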

4.2. The Effect of Algorithmic Fairness on Digital Financial Stress

Building upon the descriptive and correlation analyses outlined above, the subsequent econometric investigation examines the precise influence of algorithmic fairness on consumer digital financial stress. The fixed-effects panel regression approach, as detailed previously, systematically controls for unobserved heterogeneity across countries and temporal shocks, providing robust empirical insights. The estimation results are concisely presented in Table 4.
The results presented in Table 4 provide strong empirical evidence demonstrating that the platform algorithm fairness index significantly reduces digital financial stress among consumers within OECD economies, as indicated by the negative and highly statistically significant coefficient. This finding aligns broadly with emerging scholarship that emphasizes algorithmic fairness as a critical factor in enhancing consumer trust and reducing anxiety associated with digital economic interactions. For instance, recent studies highlight how fairness in digital marketplaces contributes substantially to users’ perceived equity, thereby alleviating the psychological strain and financial anxieties commonly induced by discriminatory or opaque algorithmic practices (Dolata et al. [61]; Bar-Gill et al. [62]). However, while existing research predominantly focuses on direct consumer outcomes in single-country analyses, this paper’s broader cross-national panel approach advances understanding by capturing systemic effects at the macroeconomic level. Critically extending beyond previous studies such as those by Shin et al. [63], which emphasize individual consumer perspectives, the current analysis underscores that algorithmic fairness can systematically shape national consumer welfare. Moreover, considering recent policy initiatives across OECD nations aimed at strengthening AI ethical governance and enhancing algorithmic transparency in response to rapidly expanding digital commerce, the significant negative coefficient suggests concrete policy implications: rigorous national-level AI fairness regulations not only mitigate individual stress but also potentially strengthen economic resilience and trust in digital infrastructures across these countries. Thus, while corroborating the essential role of algorithmic fairness articulated in the previous literature, this analysis introduces an innovative, macro-level perspective and underscores the necessity of integrating algorithmic fairness explicitly within national regulatory frameworks across OECD economies.
Turning to the control variables, each demonstrates meaningful and theoretically consistent relationships with digital financial stress. The negative impact of GDP per capita aligns with contemporary economic theories emphasizing that higher economic prosperity inherently mitigates financial stress, reinforcing recent empirical findings by Balcilar et al. [64], Berisha et al. [65], and Babajide et al. [66]. Likewise, the negative association of digital literacy rate resonates with prior research (Malchenko et al. [67]; Vissenberg et al. [68]; Du et al. [69]), affirming that improved digital competencies empower consumers to navigate financial challenges online effectively. Similarly, internet penetration significantly reduces digital financial stress, corroborating recent scholarship highlighting digital infrastructure’s pivotal role in financial inclusion and stress reduction (Ma et al. [70]; Tay et al. [71]; Gao et al. [72]). Credit card ownership also mitigates stress by enhancing financial flexibility, consistent with studies by Xiao and Kim [73] and Visconti-Caparrós and Campos-Blázquez [74]. Conversely, the positive relationship observed between the unemployment rate and digital financial stress is theoretically intuitive and echoes the recent literature associating economic instability with increased consumer financial vulnerability (Di Guilmi and Fujiwara [75]; Mathieu et al. [76]; Midões and Seré [77]). Finally, the positive impact of the ageing population ratio aligns with recent findings that demographic aging often correlates with heightened financial anxiety due to conservative risk behaviors and digital hesitancy (Branikas et al. [78]; Gomes et al. [79]; Boado-Penas et al. [80]).

4.3. Robustness Test

To ensure the robustness of the central findings regarding the mitigating effect of algorithmic fairness on digital financial stress, two supplementary empirical analyses were implemented. The first approach involved substituting the primary explanatory variable, platform algorithm fairness index, with the AI transparency score to verify whether related but distinct aspects of algorithmic governance produce consistent results. This variable substitution serves as a stringent test of the theoretical consistency and empirical reliability of the study’s core findings. Subsequently, recognizing potential concerns regarding endogeneity and dynamic effects inherent in cross-national panel data, the second robustness check applied a dynamic panel model utilizing the system–GMM estimator. This econometric strategy explicitly addresses potential reverse causality, measurement error, and omitted variable bias, thus providing comprehensive confirmation of the main empirical conclusions. The detailed results of these rigorous robustness checks are succinctly presented in Table 5.
The results of the robustness tests presented in Table 5 provide compelling evidence that substantiates and reinforces the primary findings reported in Table 4. Specifically, when substituting the primary independent variable—platform algorithm fairness—with the AI transparency score, the estimation yields a significantly negative coefficient, closely aligned with the initial results regarding algorithm fairness. This consistency underscores the robustness of the theoretical assertion that improvements in ethical algorithm governance mechanisms, whether through fairness or transparency, systematically mitigate digital financial stress among consumers in OECD countries. Complementing this variable substitution analysis, the dynamic panel estimation via system–GMM reinforces the original conclusions. The system–GMM model effectively addresses potential endogeneity concerns, demonstrating a statistically significant negative effect of algorithm fairness on digital financial stress, thus strongly validating the causal interpretation proposed in the main analysis. Additionally, the lagged dependent variable is positive and statistically significant, indicating notable persistence and inertia in digital financial stress, a result consistent with theoretical expectations of consumer behavior dynamics. Diagnostic tests associated with the system–GMM model further confirm model validity: the AR(1) test yields a significant p-value, confirming the anticipated first-order serial correlation, while the AR(2) test p-value and Hansen test p-value demonstrate the absence of problematic autocorrelation and affirm the appropriateness of instrument selection. Taken together, these supplementary analyses provide robust empirical backing, thereby strengthening confidence in the key finding that enhancing algorithmic fairness and transparency constitutes a viable policy strategy to alleviate digital financial stress across OECD economies.
To further reinforce the empirical credibility of the primary findings, an additional robustness check was conducted using a modified fixed-effects specification incorporating interaction terms between algorithmic fairness and a categorical time-period dummy, thereby examining the temporal invariance of the fairness–stress relationship across sub-periods of digital regulatory development (pre-2016 and post-2016). This specification accounts for possible structural changes in platform regulation, consumer protection policies, and algorithmic auditing protocols implemented in several OECD nations after 2016. As shown in Table 6 below, the coefficient for algorithmic fairness remains consistently negative and statistically significant in both periods, with only marginal fluctuations in magnitude. The interaction term between algorithmic fairness and the post-2016 dummy is not statistically significant, suggesting that the stress-mitigating effects of algorithmic fairness are temporally stable and not contingent upon evolving policy landscapes. These findings, therefore, substantiate the systemic and enduring relevance of algorithmic fairness as a stabilizing mechanism in digital financial ecosystems. This extension provides further validation for the index’s temporal robustness, complementing the GMM and transparency-based sensitivity analyses presented earlier.
The results presented in Table 6 provide further validation of the findings reported in Table 4 by assessing the temporal consistency of the algorithmic fairness effect across the period 2010–2023. Specifically, the coefficient for the platform algorithm fairness index remains significantly negative, closely mirroring the magnitude and significance level observed in the baseline fixed-effects specification. Importantly, the interaction term between algorithmic fairness and a post-2016 policy dummy yields a statistically insignificant estimate, suggesting that the stress-mitigating role of fairness does not differ meaningfully across distinct regulatory environments. This temporal robustness underscores the structural, rather than incidental, nature of the observed relationship. Whether during periods of algorithmic proliferation or increased governance stringency, fairness-enhancing algorithmic practices consistently alleviate financial anxiety among consumers. These results reinforce the claim that algorithmic fairness serves as a macroeconomic stabilizer rather than a context-contingent policy variable, and they affirm the generalizability of the main results across time within the OECD context. As such, the integrity and durability of the fairness–stress nexus are empirically well-supported.
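The temporal-stability check can be sketched by interacting fairness with a post-2016 indicator inside the two-way fixed-effects model. The cutoff coding (years 2016 and later) and all names below are assumptions for illustration; note that the level of the period dummy is absorbed by the year fixed effects, so only the interaction is separately identified.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

panel = pd.read_csv("oecd_panel.csv")

# Post-2016 indicator (assumed cutoff) and its interaction with fairness.
panel["post2016"] = (panel["year"] >= 2016).astype(float)
panel["fair_x_post"] = panel["fairness"] * panel["post2016"]

data = panel.set_index(["country", "year"])
cols = ["fairness", "fair_x_post", "gdpp", "digital", "internet",
        "credit", "unemployment", "ageing"]

# The post2016 level term is collinear with the year effects and is therefore omitted.
model = PanelOLS(data["dfsi"], data[cols], entity_effects=True, time_effects=True)
results = model.fit(cov_type="clustered", cluster_entity=True)
print(results.params[["fairness", "fair_x_post"]])
print(results.pvalues[["fairness", "fair_x_post"]])
```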

4.4. Digital Literacy as a Moderator: Unpacking Interaction Effects

Having robustly established the primary effects of algorithmic fairness and transparency on digital financial stress, the analysis proceeds to examine potential contingencies influencing these relationships. Notably, digital literacy emerges as a crucial factor potentially moderating the impact of algorithmic governance mechanisms on consumer stress in digital financial contexts. Given variations in digital literacy across OECD countries, consumers may differ significantly in their ability to interpret and respond effectively to algorithmic fairness and transparency measures. Therefore, this section explicitly tests this moderating hypothesis by introducing interaction terms between digital literacy and the key independent variables, algorithmic fairness and AI transparency. The detailed econometric results of these interaction effects are presented systematically in Table 7.
The empirical findings summarized in Table 7 compellingly underscore digital literacy’s pivotal moderating role in shaping the relationship between algorithmic fairness and digital financial stress among OECD economies. Specifically, the interaction term between digital literacy and algorithmic fairness yields a significantly negative coefficient, indicating that countries exhibiting higher levels of digital literacy experience a stronger mitigating effect of algorithmic fairness on consumer digital financial stress. This outcome suggests that enhanced digital proficiency substantially amplifies the effectiveness of fair algorithmic practices, likely because digitally literate consumers possess superior cognitive resources and analytical skills that enable them to better interpret, evaluate, and trust algorithmically generated decisions. Thus, digital literacy appears to facilitate more informed consumer interactions with algorithmic systems, fostering a deeper sense of fairness and reducing psychological stress related to digital financial activities. Such findings offer critical insights into the synergistic potential between policy-driven digital literacy enhancement and ethical algorithmic governance, highlighting digital education not merely as an independent socio-economic benefit but as an essential catalyst enhancing the societal benefits derived from algorithmic fairness initiatives. These results not only confirm but extend the primary findings by elucidating conditions under which algorithmic fairness mechanisms exert maximal beneficial impacts.

4.5. Discussion

This study provides evidence demonstrating that algorithmic fairness significantly mitigates digital financial stress across OECD economies. By shifting the analytical lens from isolated platform-level interactions to a broader macroeconomic context, the findings offer a substantial extension of the existing literature, which has traditionally emphasized user-level trust and psychological responses. Prior investigations—such as those by Kawsar et al. [81], Peña-García and ter Horst [82], Yang et al. [83]—have largely examined fairness as a reputational tool shaping brand perception, engagement, and loyalty. In contrast, the present analysis validates that algorithmic fairness transcends its immediate perceptual impact and functions as a structural determinant of financial well-being at the national level, with measurable effects on consumer anxiety and economic resilience. This contribution is particularly salient within the context of digital commerce governance in advanced economies. It addresses gaps in the existing research by demonstrating that ethical algorithm design is not merely an operational concern but a core component of digital macroeconomic policy. Moreover, the study presents digital literacy as a potent moderator of fairness effectiveness, reinforcing previous theoretical propositions (Dey et al. [84]; Sharma et al. [85]; Setiadi et al. [86]) with robust empirical data. Unlike earlier studies that treated digital literacy as a background variable, this research shows that algorithmic fairness delivers stronger benefits in digitally proficient societies. This interaction effect indicates that fairness interventions must be implemented alongside efforts to improve user competence, particularly in navigating complex digital systems. In addition to extending the algorithmic ethics literature, the results interface with broader discussions on economic vulnerability. For example, scholars such as Chomicz-Grabowska and Orlowski [87], Kiley [88], and Ehigiamusoe and Samsurijan [89] have linked macroeconomic instability to heightened consumer financial stress. However, they did not incorporate algorithmic governance as a mitigating mechanism. By empirically demonstrating that fairness-oriented digital environments can buffer against structural stressors, this study expands the policy toolkit available for stabilizing consumer financial outcomes in volatile economic climates.
Demographic dynamics further underscore the strategic relevance of this work. Previous findings (Litwin and Meir [90]; Kadoya and Khan [91]) have identified aging populations as particularly susceptible to financial anxiety. This study confirms those risks but also presents algorithmic fairness—especially when paired with targeted digital education—as a viable strategy for mitigating age-related digital exclusion and stress. The policy implication is clear: ethical algorithmic design and inclusive digital literacy campaigns must be developed in tandem to address intersecting forms of financial vulnerability. Finally, this research calls for a recalibration of current digital policy frameworks. Much of the discourse surrounding platform regulation has focused on infrastructure expansion and access equity. While these remain important, they are insufficient in isolation. This study highlights that algorithmic fairness constitutes a third, often neglected, pillar of digital economic governance. It is not only a normative imperative but also a quantifiable driver of financial well-being. Future digital policy agendas must, therefore, place algorithmic ethics at their core, treating fairness as an economic stabilizer with tangible public welfare implications. In summary, this study makes three principal contributions. First, it repositions algorithmic fairness as a macroeconomic stabilizer within digital financial systems. Second, it empirically verifies that digital literacy amplifies the benefits of fairness, informing the design of more effective policy interventions. Third, it situates fairness and literacy within the broader matrix of demographic vulnerability and structural instability, proposing an integrated framework for ethical digital governance. Collectively, these contributions chart a new trajectory for interdisciplinary research at the intersection of algorithmic ethics, financial regulation, and digital inclusion policy.

5. Conclusions

This study investigates the impact of algorithmic fairness on digital financial stress using a two-way fixed-effects panel regression model across OECD countries from 2010 to 2023. The analysis introduces an innovative multidimensional index—the digital financial stress index—to quantitatively capture consumer financial anxiety stemming from digital economic interactions. The empirical results robustly demonstrate that higher algorithmic fairness significantly reduces the DFSI, reinforcing its role as a vital mechanism in enhancing consumer financial stability and psychological well-being. By employing system–GMM estimations and moderating analyses, this research further confirms the robustness of these findings and uniquely reveals digital literacy as a crucial moderator that amplifies the beneficial effects of algorithmic fairness. Contrary to the previous literature focusing primarily on micro-level consumer outcomes, this macroeconomic cross-country perspective highlights systemic implications of fair algorithmic governance, challenging conventional views and proposing a shift towards comprehensive digital economic policies prioritizing algorithmic ethics.
Several critical policy implications emerge from these findings. First, OECD policymakers should establish explicit regulatory frameworks mandating transparency and fairness standards in algorithmic practices across digital commerce platforms, systematically mitigating digital financial anxiety and building consumer trust. Such frameworks would proactively address systemic vulnerabilities rather than merely react to isolated consumer complaints. Second, comprehensive national strategies aimed at enhancing digital literacy are crucial, as digitally literate populations exhibit heightened responsiveness to fairness and transparency measures. Policy interventions could include nationwide digital education initiatives targeting diverse demographic groups, particularly older adults, to ensure inclusive digital competence. Finally, promoting collaboration between public institutions and digital commerce enterprises to regularly assess and publicly report algorithmic fairness performance can incentivize platform accountability, fostering an environment of transparency that sustainably enhances consumer welfare and economic resilience.
Despite these contributions, this study acknowledges certain limitations that offer fruitful avenues for future research. First, the current digital financial stress index construction relies on aggregate national-level data, potentially obscuring important within-country heterogeneities; future research might employ individual-level longitudinal data to uncover micro-level variations and richer behavioral insights. Second, while digital literacy is highlighted as a moderator, this research does not differentiate among various dimensions of digital skills; subsequent studies should incorporate comprehensive digital literacy scales distinguishing between operational, informational, and strategic competencies. Lastly, this study primarily examines OECD nations; extending the analysis to emerging and developing economies would provide deeper comparative insights and enhance the global relevance of algorithmic governance discussions. These pathways represent essential next steps for scholars and policymakers aiming to foster robust, inclusive, and ethically grounded digital economies.

Author Contributions

Conceptualization, Y.H. and Z.T.; methodology, Y.H.; software, Z.T.; validation, H.X.; formal analysis, H.X.; investigation, Z.T.; resources, H.X.; data curation, Z.T.; writing—original draft preparation, Z.T.; writing—review and editing, Y.H.; visualization, H.X.; supervision, Z.T.; project administration, Z.T.; funding acquisition, Z.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Youth Fund Project of Humanities and Social Sciences of the Ministry of Education, grant number 24YJC790194; the Foundation for the University Youth Key Teacher Training Plan by the Ministry of Henan, grant number 2024GGJS159; the Philosophy and Social Science Planning Project of Henan, grant number 2023BJJ010; and the Henan Province Soft Science Research Program Project, grant number 252400411114.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mittelstadt, B.D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The Ethics of Algorithms: Mapping the Debate. Big Data Soc. 2016, 3, 2053951716679679. [Google Scholar] [CrossRef]
  2. Martin, K. Ethical Implications and Accountability of Algorithms. J. Bus. Ethics 2019, 160, 835–850. [Google Scholar] [CrossRef]
  3. Habibi, M.R.; Laroche, M.; Richard, M.-O. Testing an Extended Model of Consumer Behavior in the Context of Social Media-Based Brand Communities. Comput. Hum. Behav. 2016, 62, 292–302. [Google Scholar] [CrossRef]
  4. Frasquet, M.; Molla Descals, A.; Ruiz-Molina, M.E. Understanding Loyalty in Multichannel Retailing: The Role of Brand Trust and Brand Attachment. Int. J. Retail. Distrib. Manag. 2017, 45, 608–625. [Google Scholar] [CrossRef]
  5. Coelho, P.S.; Rita, P.; Santos, Z.R. On the Relationship between Consumer-Brand Identification, Brand Community, and Brand Loyalty. J. Retail. Consum. Serv. 2018, 43, 101–110. [Google Scholar] [CrossRef]
  6. Kleinberg, J.; Ludwig, J.; Mullainathan, S.; Sunstein, C.R. Discrimination in the Age of Algorithms. J. Leg. Anal. 2018, 10, 113–174. [Google Scholar] [CrossRef]
  7. Cowgill, B.; Dell’Acqua, F.; Deng, S.; Hsu, D.; Verma, N.; Chaintreau, A. Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. In Proceedings of the 21st ACM Conference on Economics and Computation, Virtual Event Hungary, 13–24 July 2020; ACM: New York, NY, USA, 2020; pp. 679–681. [Google Scholar]
  8. Heinrichs, B. Discrimination in the Age of Artificial Intelligence. AI Soc. 2022, 37, 143–154. [Google Scholar] [CrossRef]
  9. Binns, R.; Van Kleek, M.; Veale, M.; Lyngs, U.; Zhao, J.; Shadbolt, N. “It’s Reducing a Human Being to a Percentage”: Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; ACM: New York, NY, USA, 2018; pp. 1–14. [Google Scholar]
  10. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
  11. French, D.; McKillop, D. The Impact of Debt and Financial Stress on Health in Northern Irish Households. J. Eur. Soc. Policy 2017, 27, 458–473. [Google Scholar] [CrossRef]
  12. Agarwal, S.; Chomsisengphet, S.; Mahoney, N.; Stroebel, J. Do Banks Pass through Credit Expansions to Consumers Who Want to Borrow? Q. J. Econ. 2018, 133, 129–190. [Google Scholar] [CrossRef]
  13. Adomavicius, G.; Yang, M. Integrating Behavioral, Economic, and Technical Insights to Understand and Address Algorithmic Bias: A Human-Centric Perspective. ACM Trans. Manage. Inf. Syst. 2022, 13, 1–27. [Google Scholar] [CrossRef]
  14. Yang, Q.; Lee, Y.-C. Ethical AI in Financial Inclusion: The Role of Algorithmic Fairness on User Satisfaction and Recommendation. Big Data Cogn. Comput. 2024, 8, 105. [Google Scholar] [CrossRef]
  15. Brüggen, L.; Gianni, R.; de Haan, F.; Hogreve, J.; Meacham, D.; Post, T.; van der Werf, M. AI-Based Financial Advice: An Ethical Discourse on AI-Based Financial Advice and Ethical Reflection Framework. J. Public Policy Mark. 2025, 44, 436–456. [Google Scholar] [CrossRef]
  16. Lee, M.K.; Baykal, S. Algorithmic Mediation in Group Decisions: Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, OR, USA, 25 February–1 March 2017; ACM: New York, NY, USA; pp. 1035–1048. [Google Scholar]
  17. Veale, M.; Edwards, L. Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling. Comput. Law Secur. Rev. 2018, 34, 398–404. [Google Scholar] [CrossRef]
  18. Raghavan, M.; Barocas, S.; Kleinberg, J.; Levy, K. Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; ACM: New York, NY, USA; pp. 469–481. [Google Scholar]
  19. Su, C.; Yu, G.; Wang, J.; Yan, Z.; Cui, L. A Review of Causality-Based Fairness Machine Learning. Intell. Robot. 2022, 2, 244–274. [Google Scholar] [CrossRef]
  20. Card, D.; DellaVigna, S.; Malmendier, U. The Role of Theory in Field Experiments. J. Econ. Perspect. 2011, 25, 39–62. [Google Scholar] [CrossRef]
  21. Liu, L.; Zhang, H. Financial Literacy, Self-Efficacy and Risky Credit Behavior among College Students: Evidence from Online Consumer Credit. J. Behav. Exp. Financ. 2021, 32, 100569. [Google Scholar] [CrossRef]
  22. Yadava, A. The Impact of AI-Driven Algorithmic Trading on Market Efficiency and Volatility: Evidence from Global Financial Markets. Inf. Sci. 2024, 36, 102015. [Google Scholar]
  23. Sharma, G.M.; Nghiem, X.-H.; Gaur, P.; Gangodawilage, D.S.K. Ethical Implications of AI-Driven Outsourcing: Ensuring Bias Mitigation, Fairness, and Accountability. In Global Work Arrangements and Outsourcing in the Age of AI; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 421–450. [Google Scholar]
  24. Salignac, F.; Marjolin, A.; Reeve, R.; Muir, K. Conceptualizing and Measuring Financial Resilience: A Multidimensional Framework. Soc. Indic. Res. 2019, 145, 17–38. [Google Scholar] [CrossRef]
  25. Rahman, M.M.; Sadik, N. Measuring Scale for Digital Financial Services, Economic Growth, Performance, and Environmental Sustainability: Evidence from EFA and CFA. Qual. Quant. 2025, 59, 1661–1694. [Google Scholar] [CrossRef]
  26. Deng, X.; Liu, Y.; Xiong, Y. Analysis on the Development of Digital Economy in Guangdong Province Based on Improved Entropy Method and Multivariate Statistical Analysis. Entropy 2020, 22, 1441. [Google Scholar] [CrossRef]
  27. Pan, W.; Xie, T.; Wang, Z.; Ma, L. Digital Economy: An Innovation Driver for Total Factor Productivity. J. Bus. Res. 2022, 139, 303–311. [Google Scholar] [CrossRef]
  28. Bruno, G.; Diglio, A.; Piccolo, C.; Pipicelli, E. A Reduced Composite Indicator for Digital Divide Measurement at the Regional Level: An Application to the Digital Economy and Society Index (DESI). Technol. Forecast. Soc. Change 2023, 190, 122461. [Google Scholar] [CrossRef]
  29. Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamò-Larrieux, A. Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns. Big Data Soc. 2019, 6, 2053951719860542. [Google Scholar] [CrossRef]
  30. Mirghaderi, L.; Sziron, M.; Hildt, E. Ethics and Transparency Issues in Digital Platforms: An Overview. AI 2023, 4, 831–843. [Google Scholar] [CrossRef]
  31. Akter, S.; Dwivedi, Y.K.; Sajib, S.; Biswas, K.; Bandara, R.J.; Michael, K. Algorithmic Bias in Machine Learning-Based Marketing Models. J. Bus. Res. 2022, 144, 201–216. [Google Scholar] [CrossRef]
  32. Fabris, A.; Messina, S.; Silvello, G.; Susto, G.A. Algorithmic Fairness Datasets: The Story so Far. Data Min. Knowl. Discov. 2022, 36, 2074–2152. [Google Scholar] [CrossRef]
  33. Hacker, P.; Cordes, J.; Rochon, J. Regulating Gatekeeper Artificial Intelligence and Data: Transparency, Access and Fairness under the Digital Markets Act, the General Data Protection Regulation and Beyond. Eur. J. Risk Regul. 2024, 15, 49–86. [Google Scholar] [CrossRef]
  34. Nanda, A.P.; Banerjee, R. Consumer’s Subjective Financial Well-being: A Systematic Review and Research Agenda. Int. J. Consum. Stud. 2021, 45, 750–776. [Google Scholar] [CrossRef]
  35. Chetty, K.; Qigui, L.; Gcora, N.; Josie, J.; Wenwei, L.; Fang, C. Bridging the Digital Divide: Measuring Digital Literacy. Economics 2018, 12, 20180023. [Google Scholar] [CrossRef]
  36. Tirado-Morueta, R.; Aguaded-Gómez, J.I.; Hernando-Gómez, Á. The Socio-Demographic Divide in Internet Usage Moderated by Digital Literacy Support. Technol. Soc. 2018, 55, 47–55. [Google Scholar] [CrossRef]
  37. Chen, J.; Hou, H.; Liao, Z.; Wang, L. Digital Environment, Digital Literacy, and Farmers’ Entrepreneurial Behavior: A Discussion on Bridging the Digital Divide. Sustainability 2024, 16, 10220. [Google Scholar] [CrossRef]
  38. Ariansyah, K.; Sirait, E.R.E.; Nugroho, B.A.; Suryanegara, M. Drivers of and Barriers to E-Commerce Adoption in Indonesia: Individuals’ Perspectives and the Implications. Telecommun. Policy 2021, 45, 102219. [Google Scholar] [CrossRef]
  39. Hendricks, S.; Mwapwele, S.D. A Systematic Literature Review on the Factors Influencing E-Commerce Adoption in Developing Countries. Data Inf. Manag. 2024, 8, 100045. [Google Scholar] [CrossRef]
  40. Carlsson, H.; Larsson, S.; Svensson, L.; Åström, F. Consumer Credit Behavior in the Digital Context: A Bibliometric Analysis and Literature Review. J. Financ. Couns. Plan. 2017, 28, 76–94. [Google Scholar] [CrossRef]
  41. Koskelainen, T.; Kalmi, P.; Scornavacca, E.; Vartiainen, T. Financial Literacy in the Digital Age—A Research Agenda. J. Consum. Aff. 2023, 57, 507–528. [Google Scholar] [CrossRef]
  42. Charfeddine, L.; Umlai, M.I.; El-Masri, M. Impact of Financial Literacy, Perceived Access to Finance, ICT Use, and Digitization on Credit Constraints: Evidence from Qatari MSME Importers. Financ. Innov. 2024, 10, 15. [Google Scholar] [CrossRef]
  43. Ghosh, S. The Impact of Economic Uncertainty and Financial Stress on Consumer Confidence: The Case of Japan. J. Asian Bus. Econ. Stud. 2022, 29, 50–65. [Google Scholar] [CrossRef]
  44. Yagil, D.; Cohen, M. Financial Uncertainty and Anxiety During the COVID-19 Pandemic: The Mediating Role of Future Orientation. Eur. J. Health Psychol. 2023, 30, 65–73. [Google Scholar] [CrossRef]
  45. Ahamed, A.J.; Limbu, Y.B. Financial Anxiety: A Systematic Review. Int. J. Bank Mark. 2024, 42, 1666–1694. [Google Scholar] [CrossRef]
  46. Seldal, M.M.N.; Nyhus, E.K. Financial Vulnerability, Financial Literacy, and the Use of Digital Payment Technologies. J. Consum. Policy 2022, 45, 281–306. [Google Scholar] [CrossRef] [PubMed]
  47. Wang, X.; Mao, Z. Research on the Impact of Digital Inclusive Finance on the Financial Vulnerability of Aging Families. Risks 2023, 11, 209. [Google Scholar] [CrossRef]
  48. Hazlett, C.; Wainstein, L. Understanding, Choosing, and Unifying Multilevel and Fixed Effect Approaches. Political Anal. 2022, 30, 46–65. [Google Scholar] [CrossRef]
  49. Rüttenauer, T.; Ludwig, V. Fixed Effects Individual Slopes: Accounting and Testing for Heterogeneous Effects in Panel Data or Other Multilevel Models. Sociol. Methods Res. 2023, 52, 43–84. [Google Scholar] [CrossRef]
  50. Breuer, M.; Dehaan, E. Using and Interpreting Fixed Effects Models. J. Account. Res. 2024, 62, 1183–1226. [Google Scholar] [CrossRef]
  51. Han, C.; Phillips, P.C. GMM Estimation for Dynamic Panels with Fixed Effects and Strong Instruments at Unity. Econom. Theory 2010, 26, 119–151. [Google Scholar] [CrossRef]
  52. Lee, L.; Yu, J. Efficient GMM Estimation of Spatial Dynamic Panel Data Models with Fixed Effects. J. Econom. 2014, 180, 174–197. [Google Scholar] [CrossRef]
  53. Breitung, J.; Kripfganz, S.; Hayakawa, K. Bias-Corrected Method of Moments Estimators for Dynamic Panel Data Models. Econom. Stat. 2022, 24, 116–132. [Google Scholar] [CrossRef]
  54. Arellano, M.; Bond, S. Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations. Rev. Econ. Stud. 1991, 58, 277–297. [Google Scholar] [CrossRef]
  55. Roodman, D. How to Do Xtabond2: An Introduction to Difference and System GMM in Stata. Stata J. Promot. Commun. Stat. Stata 2009, 9, 86–136. [Google Scholar] [CrossRef]
  56. Shin, D.; Zhong, B.; Biocca, F.A. Beyond User Experience: What Constitutes Algorithmic Experiences? Int. J. Inf. Manag. 2020, 52, 102061. [Google Scholar] [CrossRef]
  57. Shin, D.; Rasul, A.; Fotiadis, A. Why Am I Seeing This? Deconstructing Algorithm Literacy through the Lens of Users. Internet Res. 2022, 32, 1214–1234. [Google Scholar] [CrossRef]
  58. Ekpo, A.E.; Drenten, J.; Albinsson, P.A.; Anong, S.; Appau, S.; Chatterjee, L.; Dadzie, C.A.; Echelbarger, M.; Muldrow, A.; Ross, S.M.; et al. The Platformed Money Ecosystem: Digital Financial Platforms, Datafication, and Reimagining Financial Well-being. J. Consum. Aff. 2022, 56, 1062–1078. [Google Scholar] [CrossRef]
  59. Kordzadeh, N.; Ghasemaghaei, M. Algorithmic Bias: Review, Synthesis, and Future Research Directions. Eur. J. Inf. Syst. 2022, 31, 388–409. [Google Scholar] [CrossRef]
  60. Chugh, P.; Jain, V. Artificial Intelligence (AI) Empowerment in E-Commerce: A Bibliometric Voyage. NMIMS Manag. Rev. 2024, 32, 159–173. [Google Scholar] [CrossRef]
  61. Dolata, M.; Feuerriegel, S.; Schwabe, G. A Sociotechnical View of Algorithmic Fairness. Inf. Syst. J. 2022, 32, 754–818. [Google Scholar] [CrossRef]
  62. Bar-Gill, O.; Sunstein, C.R.; Talgam-Cohen, I. Algorithmic Harm in Consumer Markets. J. Leg. Anal. 2023, 15, 1–47. [Google Scholar] [CrossRef]
  63. Shin, D.; Lim, J.S.; Ahmad, N.; Ibahrine, M. Understanding User Sensemaking in Fairness and Transparency in Algorithms: Algorithmic Sensemaking in over-the-Top Platform. AI Soc. 2024, 39, 477–490. [Google Scholar] [CrossRef]
  64. Balcilar, M.; Berisha, E.; Gupta, R.; Pierdzioch, C. Time-Varying Evidence of Predictability of Financial Stress in the United States over a Century: The Role of Inequality. Struct. Change Econ. Dyn. 2021, 57, 87–92. [Google Scholar] [CrossRef]
  65. Berisha, E.; Gabauer, D.; Gupta, R.; Nel, J. Time-Varying Predictability of Financial Stress on Inequality in United Kingdom. J. Econ. Stud. 2023, 50, 987–1007. [Google Scholar] [CrossRef]
  66. Babajide, A.; Osabuohien, E.; Tunji-Olayeni, P.; Falola, H.; Amodu, L.; Olokoyo, F.; Adegboye, F.; Ehikioya, B. Financial Literacy, Financial Capabilities, and Sustainable Business Model Practice among Small Business Owners in Nigeria. J. Sustain. Financ. Invest. 2023, 13, 1670–1692. [Google Scholar] [CrossRef]
  67. Malchenko, Y.; Gogua, M.; Golovacheva, K.; Smirnova, M.; Alkanova, O. A Critical Review of Digital Capability Frameworks: A Consumer Perspective. Digit. Policy Regul. Gov. 2020, 22, 269–288. [Google Scholar] [CrossRef]
  68. Vissenberg, J.; d’Haenens, L.; Livingstone, S. Digital Literacy and Online Resilience as Facilitators of Young People’s Well-Being?: A Systematic Review. Eur. Psychol. 2022, 27, 76–85. [Google Scholar] [CrossRef]
  69. Du, Y.; Wang, Q.; Zhou, J. How Does Digital Inclusive Finance Affect Economic Resilience: Evidence from 285 Cities in China. Int. Rev. Financ. Anal. 2023, 88, 102709. [Google Scholar] [CrossRef]
  70. Ma, W.; Nie, P.; Zhang, P.; Renwick, A. Impact of Internet Use on Economic Well-being of Rural Households: Evidence from China. Rev. Dev. Econ. 2020, 24, 503–523. [Google Scholar] [CrossRef]
  71. Tay, L.-Y.; Tai, H.-T.; Tan, G.-S. Digital Financial Inclusion: A Gateway to Sustainable Development. Heliyon 2022, 8, e09766. [Google Scholar] [CrossRef]
  72. Gao, Q.; Sun, M.; Chen, L. The Impact of Digital Inclusive Finance on Agricultural Economic Resilience. Financ. Res. Lett. 2024, 66, 105679. [Google Scholar] [CrossRef]
  73. Xiao, J.J.; Kim, K.T. The Able Worry More? Debt Delinquency, Financial Capability, and Financial Stress. J. Fam. Econ. Issues 2022, 43, 138–152. [Google Scholar] [CrossRef]
  74. Visconti-Caparrós, J.M.; Campos-Blázquez, J.R. The Development of Alternate Payment Methods and Their Impact on Customer Behavior: The Bizum Case in Spain. Technol. Forecast. Soc. Change 2022, 175, 121330. [Google Scholar] [CrossRef]
  75. Di Guilmi, C.; Fujiwara, Y. Dual Labor Market, Financial Fragility, and Deflation in an Agent-Based Model of the Japanese Macroeconomy. J. Econ. Behav. Organ. 2022, 196, 346–371. [Google Scholar] [CrossRef]
  76. Mathieu, S.; Treloar, A.; Hawgood, J.; Ross, V.; Kõlves, K. The Role of Unemployment, Financial Hardship, and Economic Recession on Suicidal Behaviors and Interventions to Mitigate Their Impact: A Review. Front. Public Health 2022, 10, 907052. [Google Scholar] [CrossRef]
  77. Midões, C.; Seré, M. Living with Reduced Income: An Analysis of Household Financial Vulnerability Under COVID-19. Soc. Indic. Res. 2022, 161, 125–149. [Google Scholar] [CrossRef] [PubMed]
  78. Branikas, I.; Hong, H.; Xu, J. Location Choice, Portfolio Choice. J. Financ. Econ. 2020, 138, 74–94. [Google Scholar] [CrossRef]
  79. Gomes, F.; Haliassos, M.; Ramadorai, T. Household Finance. J. Econ. Lit. 2021, 59, 919–1000. [Google Scholar] [CrossRef]
  80. Boado-Penas, M.C.; Nave, J.M.; Toscano, D. Financial Market Participation and Retirement Age of the UK Population. Int. J. Financ. Stud. 2023, 11, 37. [Google Scholar] [CrossRef]
  81. Kawsar, M.; Satata, F.F.; Wahid, M.A.E. Being Fair to Customers: A Strategy in Enhancing Customer Engagement and Loyalty in the Bangladeshi Mobile Telecommunication Industry. Shanlax Int. J. Manag. 2024, 11, 6879. [Google Scholar] [CrossRef]
  82. Peña-García, N.; ter Horst, E. Loyalty beyond Transactions: The Role of Perceived Brand Ethics in e-Commerce. Front. Commun. 2025, 10, 1605171. [Google Scholar] [CrossRef]
  83. Yang, Z.; Hu, D.; Chen, X. The Role of Omnichannel Integration and Digital Value in Building Brand Trust: A Customer Psychological Perception Perspective. Internet Res. 2025, 35, 1029–1064. [Google Scholar] [CrossRef]
  84. Dey, B.L.; Yen, D.; Samuel, L. Digital Consumer Culture and Digital Acculturation. Int. J. Inf. Manag. 2020, 51, 102057. [Google Scholar] [CrossRef]
  85. Sharma, S.; Kar, A.K.; Gupta, M.P.; Dwivedi, Y.K.; Janssen, M. Digital Citizen Empowerment: A Systematic Literature Review of Theories and Development Models. Inf. Technol. Dev. 2022, 28, 660–687. [Google Scholar] [CrossRef]
  86. Setiadi, D.; Nurhayati, S.; Ansori, A.; Zubaidi, M.; Amir, R. Youth’s Digital Literacy in the Context of Community Empowerment in an Emerging Society 5.0. Society 2023, 11, 1–12. [Google Scholar] [CrossRef]
  87. Chomicz-Grabowska, A.M.; Orlowski, L.T. Financial Market Risk and Macroeconomic Stability Variables: Dynamic Interactions and Feedback Effects. J. Econ. Financ. 2020, 44, 655–669. [Google Scholar] [CrossRef]
  88. Kiley, M.T. What Macroeconomic Conditions Lead Financial Crises? J. Int. Money Financ. 2021, 111, 102316. [Google Scholar] [CrossRef]
  89. Ehigiamusoe, K.U.; Samsurijan, M.S. What Matters for Finance-growth Nexus? A Critical Survey of Macroeconomic Stability, Institutions, Financial and Economic Development. Int. J. Fin. Econ. 2021, 26, 5302–5320. [Google Scholar] [CrossRef]
  90. Litwin, H.; Meir, A. Financial Worry among Older People: Who Worries and Why? J. Aging Stud. 2013, 27, 113–120. [Google Scholar] [CrossRef]
  91. Kadoya, Y.; Khan, M.S.R. Can Financial Literacy Reduce Anxiety about Life in Old Age? J. Risk Res. 2018, 21, 1533–1550. [Google Scholar] [CrossRef]
Table 1. Results of principal component analysis for DFSI construction.

Component | Factor Loading | Communality
Credit-financed household consumption | 0.505 | 0.714
BNPL service penetration | 0.487 | 0.678
Digital default rate | 0.511 | 0.722
E-commerce financial complaints | 0.496 | 0.701
Eigenvalue (PC1) | 2.85 |
Variance explained (PC1) | 71.3% |
KMO statistic | 0.768 |
Bartlett's test | 138.2 *** |
Note: All variables standardized before PCA; *** p < 0.01.
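To make this construction concrete, the sketch below shows how such an index could be assembled in Python with scikit-learn. The file name, column names, and the final min-max rescaling to the unit interval are illustrative assumptions for exposition, not a reproduction of the exact pipeline used in the article.

```python
# Illustrative sketch of a PCA-based DFSI construction (assumed column and
# file names); the article's source series come from OECD/World Bank/IMF/ITU.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# One row per country-year with the four stress indicators.
df = pd.read_csv("dfsi_inputs.csv")  # hypothetical input file
indicators = [
    "credit_financed_consumption",   # credit-financed household consumption
    "bnpl_penetration",              # BNPL service penetration
    "digital_default_rate",          # digital default rate
    "ecommerce_complaints",          # e-commerce financial complaints
]

# Standardize the indicators before PCA, as noted under Table 1.
z = StandardScaler().fit_transform(df[indicators])

# Retain the first principal component as the raw stress score.
pca = PCA(n_components=1)
df["dfsi_raw"] = pca.fit_transform(z)[:, 0]
print(pca.explained_variance_ratio_)  # compare with ~71.3% for PC1
print(pca.components_)                # compare with the loadings in Table 1

# The published index lies roughly in the unit interval (see Table 2), so a
# min-max rescaling is one plausible final step (an assumption, not stated
# in the article).
rng = df["dfsi_raw"].max() - df["dfsi_raw"].min()
df["dfsi"] = (df["dfsi_raw"] - df["dfsi_raw"].min()) / rng
```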
Table 2. Results of variable descriptive statistics.

Variable | Mean | Standard Deviation | Minimum Value | Maximum Value
dfsi | 0.423 | 0.128 | 0.191 | 0.788
fairness | 74.612 | 8.451 | 55.230 | 92.610
gdpp | 10.346 | 0.455 | 9.145 | 11.462
digital | 79.452 | 9.624 | 53.800 | 96.700
internet | 87.651 | 7.524 | 61.200 | 99.800
credit | 68.723 | 12.245 | 34.500 | 94.700
unemployment | 6.425 | 2.342 | 2.300 | 15.700
ageing | 18.264 | 4.715 | 8.920 | 28.600
Note: All absolute-level variables, including GDP per capita, were transformed using natural logarithms to mitigate skewness and bring distributions closer to normality. All data were sourced from official OECD, World Bank, IMF, and ITU statistics, covering OECD economies over the period from 2010 to 2023.
Table 3. Results of correlation test.

Variable | dfsi | fairness | gdpp | digital | internet | credit | unemployment | ageing
dfsi | 1.000 | | | | | | |
fairness | −0.612 *** | 1.000 | | | | | |
gdpp | −0.436 ** | 0.245 *** | 1.000 | | | | |
digital | −0.591 *** | 0.287 * | 0.211 ** | 1.000 | | | |
internet | −0.367 *** | 0.278 *** | 0.332 * | 0.321 *** | 1.000 | | |
credit | −0.554 *** | 0.259 ** | 0.298 * | 0.284 ** | 0.262 * | 1.000 | |
unemployment | 0.528 ** | −0.164 ** | −0.256 ** | −0.232 *** | −0.194 ** | −0.157 ** | 1.000 |
ageing | 0.486 * | −0.143 ** | 0.122 ** | −0.103 * | 0.091 * | 0.134 ** | 0.105 * | 1.000
Note: significance levels: *** p < 0.01, ** p < 0.05, * p < 0.1.
Table 4. Results of the effect of algorithmic fairness on digital financial stress.

Variable | Coefficient | t-Statistic
fairness | −0.325 *** | −6.437
gdpp | −0.192 *** | −3.845
digital | −0.284 *** | −4.512
internet | −0.161 *** | −3.164
credit | −0.174 * | −1.832
unemployment | 0.236 *** | 4.227
ageing | 0.125 ** | 2.119
c | 3.845 * | 1.736
country fixed effects | Yes |
year fixed effects | Yes |
R² | 0.682 |
F-statistic | 46.837 *** |
Note: significance levels: *** p < 0.01, ** p < 0.05, * p < 0.1; c, constant.
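For readers who wish to reproduce the baseline design, a minimal sketch of the two-way fixed-effects estimation is given below using the linearmodels package in Python. The file name, variable names, and country-clustered standard errors are assumptions, since the article does not state its software or standard-error treatment.

```python
# Minimal sketch of a two-way fixed-effects panel regression in the spirit
# of Table 4 (assumed file and variable names; clustering is an assumption).
import pandas as pd
from linearmodels.panel import PanelOLS

panel = pd.read_csv("oecd_panel.csv")          # hypothetical country-year panel
panel = panel.set_index(["country", "year"])   # entity-time MultiIndex

regressors = ["fairness", "gdpp", "digital", "internet",
              "credit", "unemployment", "ageing"]

model = PanelOLS(
    dependent=panel["dfsi"],
    exog=panel[regressors],
    entity_effects=True,   # country fixed effects
    time_effects=True,     # year fixed effects
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```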
Table 5. Results of robustness test.

Variable | Method 1: AI Transparency | Method 2: System-GMM
dfsi(t−1) | | 0.431 *** (5.829)
fairness | | −0.216 *** (−4.217)
AI transparency | −0.287 *** (−5.312) |
cv | Yes | Yes
c | 3.519 * (1.647) | 2.738 *** (4.563)
country fixed effects | Yes |
year fixed effects | Yes | Yes
AR(1) test (p-value) | | 0.012
AR(2) test (p-value) | | 0.457
Hansen test (p-value) | | 0.334
R² | 0.649 |
F-statistics / Wald χ² | 41.263 *** | 328.571 ***
Note: t-statistics in parentheses; significance levels: *** p < 0.01, * p < 0.1; c, constant; cv, control variables.
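The system-GMM estimates in Method 2 address the dynamic nature of the stress index by instrumenting its lag with internal instruments (Arellano and Bond [54]; Roodman [55]), typically via specialized routines such as Stata's xtabond2. As a simplified stand-in that conveys the same logic, the Python sketch below runs an Anderson-Hsiao-style instrumental-variables regression in first differences; it is not the system-GMM estimator used in the article, and all file and variable names are illustrative.

```python
# Simplified dynamic-panel illustration: first-difference the model and
# instrument the lagged differenced dependent variable with a deeper lag
# (Anderson-Hsiao idea). This is NOT full system-GMM; it only illustrates
# the use of internal instruments for dfsi(t-1). Names are hypothetical.
import pandas as pd
from linearmodels.iv import IV2SLS

panel = pd.read_csv("oecd_panel.csv").sort_values(["country", "year"])

panel["d_dfsi"] = panel.groupby("country")["dfsi"].diff()
panel["d_fairness"] = panel.groupby("country")["fairness"].diff()
panel["d_dfsi_lag1"] = panel.groupby("country")["d_dfsi"].shift(1)
panel["dfsi_lag2"] = panel.groupby("country")["dfsi"].shift(2)  # instrument

# Controls and year dummies are omitted here for brevity.
sample = panel.dropna(subset=["d_dfsi", "d_dfsi_lag1", "d_fairness", "dfsi_lag2"])

iv = IV2SLS(
    dependent=sample["d_dfsi"],
    exog=sample[["d_fairness"]],
    endog=sample[["d_dfsi_lag1"]],
    instruments=sample[["dfsi_lag2"]],
).fit(cov_type="robust")
print(iv.summary)
```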
Table 6. Results of robustness test: temporal stability of algorithmic fairness effects (2010–2023).

Variable | Coefficient | t-Statistic
fairness | −0.319 *** | −6.128
post2016 | 0.014 | 0.547
fairness × post2016 | −0.006 | −0.334
cv | Yes |
c | 3.791 * | 1.745
country fixed effects | Yes |
year fixed effects | Yes |
Adjusted R² | 0.679 |
F-statistic | 43.582 *** |
Note: significance levels: *** p < 0.01, * p < 0.1; c, constant; cv, control variables.
Table 7. Results of digital literacy as a moderator: unpacking interaction effects.

Variable | Coefficient | t-Statistic
fairness | −0.248 *** | −4.915
digital | −0.231 *** | −4.236
fairness × digital | −0.162 ** | −2.174
cv | Yes |
c | 3.271 * | 1.832
country fixed effects | Yes |
year fixed effects | Yes |
R² | 0.705 |
F-statistic | 44.921 *** |
Note: significance levels: *** p < 0.01, ** p < 0.05, * p < 0.1; c, constant; cv, control variables.
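A minimal sketch of this moderation specification is shown below. Grand-mean centering of fairness and digital literacy before forming the interaction term is a common convention assumed here for exposition; the article does not spell out its exact construction, and all file and variable names are illustrative.

```python
# Sketch of the moderation model behind Table 7: the fairness x digital
# interaction is added to the two-way fixed-effects specification.
# Grand-mean centering is an assumed convention; names are illustrative.
import pandas as pd
from linearmodels.panel import PanelOLS

panel = pd.read_csv("oecd_panel.csv").set_index(["country", "year"])

for col in ["fairness", "digital"]:
    panel[col + "_c"] = panel[col] - panel[col].mean()
panel["fair_x_dig"] = panel["fairness_c"] * panel["digital_c"]

exog = ["fairness_c", "digital_c", "fair_x_dig",
        "gdpp", "internet", "credit", "unemployment", "ageing"]

result = PanelOLS(
    panel["dfsi"], panel[exog],
    entity_effects=True, time_effects=True,
).fit(cov_type="clustered", cluster_entity=True)

# A negative interaction coefficient (as in Table 7) means the
# stress-reducing effect of fairness strengthens with digital literacy.
print(result.params["fair_x_dig"])
```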
