Article

Data-Driven Framework for Aligning Artificial Intelligence with Inclusive Development in the Global South

by
G. H. B. A. de Silva
Department of Human Resource Management, Faculty of Commerce and Management Studies, No. 218, University of Kelaniya, Kelaniya 11600, Sri Lanka
Sustainability 2025, 17(21), 9360; https://doi.org/10.3390/su17219360
Submission received: 2 September 2025 / Revised: 1 October 2025 / Accepted: 7 October 2025 / Published: 22 October 2025

Abstract

Artificial Intelligence is reshaping social, political, economic, and cultural life, yet its developmental value in the Global South remains contingent on governance, participation, and design choices. This study develops and validates a data-driven framework that aligns Artificial Intelligence with inclusive development across four interdependent dimensions (access, agency, accountability, and adaptation), using a mixed-method, sequential explanatory design that integrates large-sample surveys, qualitative interviews and observations, and participatory workshops across six urban, peri-urban, and rural sites (total n = 1920). Measurement development followed best practices in item generation, content validity, cognitive interviewing, piloting, and psychometric evaluation; exploratory factor analysis and confirmatory factor analysis supported a four-factor structure with satisfactory reliability and convergent and discriminant validity. Structural equation modeling indicated that access and adaptation are the strongest predictors of service reach and time efficiency, whereas agency and accountability are most closely associated with grievance resolution and reductions in reported harms; these relations were robust across subgroups and alternative specifications. Qualitative integration clarified mechanisms that map onto the quantitative signals, including infrastructural precarity that constrains reach, contestability gaps that limit remedy, and locally responsive design features that reduce transaction costs. The framework translates normative commitments into measurable levers for policy and practice: investments that prioritize access and adaptation expand reach and efficiency, while strengthening agency and accountability enhances remedy and safety. Embedding the four dimensions into diagnostics, procurement, audit, and performance management offers a practical pathway to make Artificial Intelligence inclusive by default in diverse low-resource settings.

1. Introduction

Artificial Intelligence has become one of the most influential forces shaping the trajectory of the information society. Beyond its technical capabilities, Artificial Intelligence functions as a socio-technical system whose development and deployment reconfigure power relations, modes of governance, and forms of everyday life. Optimistic narratives position Artificial Intelligence as an enabler of economic efficiency and innovation and as a catalyst for achieving the United Nations Sustainable Development Goals [1,2]. These accounts emphasize its potential to optimize agricultural practices, transform healthcare delivery, and expand financial inclusion in low-resource contexts [3,4]. However, a growing body of scholarship has highlighted substantial risks, including the reproduction of structural inequalities, algorithmic bias, surveillance capitalism, and digital extractivism in the Global South [5,6,7,8]. The ambivalence surrounding Artificial Intelligence in development reflects long-standing debates within Information and Communication Technologies for Development; while digital technologies have historically been associated with expanding opportunities for education, participation, and mobility [9,10], they have also produced new forms of exclusion and dependency [11,12]. Scholars increasingly argue that developmental outcomes are not technologically predetermined but shaped by institutional frameworks, governance arrangements, and participatory practices [4]. This tension between opportunity and inequality has stimulated calls for operational frameworks capable of evaluating Artificial Intelligence not only in terms of technical performance but also in relation to inclusivity, justice, and sustainability [6,7].
Recent contributions underscore this demand for critical and context-sensitive approaches. Cachat-Rosset and Klarsfeld [13] demonstrate that diversity, equity, and inclusion principles remain insufficiently embedded in Artificial Intelligence guidelines, limiting their potential to deliver equitable outcomes. Radanliev [14] emphasizes the urgency of integrating transparency, fairness, and privacy into governance mechanisms to ensure accountable innovation. The rapid diffusion of generative Artificial Intelligence has further intensified these debates. Khan et al. [15] provide a systematic review showing that generative models such as ChatGPT 3.5 are exerting significant cross-disciplinary impacts, raising both opportunities for knowledge democratization and risks of epistemic homogenization. Li et al. [16] illustrate how Artificial Intelligence can contribute to creativity in architectural and care-oriented design, highlighting the importance of aligning systems with specific social needs. Buckmire et al. [17] propose culturally relevant pedagogy as a foundation for equitable Artificial Intelligence and data science education, while Bozkus and Kaya [18] argue for robust risk assessment frameworks to anticipate systemic vulnerabilities in the deployment of Artificial Intelligence. In response to these debates, this article advances a novel data-driven framework for aligning Artificial Intelligence with inclusive development in the Global South. The framework rests on four interdependent dimensions: access, agency, accountability, and adaptation. Access emphasizes the infrastructural and affordability conditions that determine meaningful participation. Agency foregrounds the ability of individuals and communities to understand, contest, and influence decisions mediated by Artificial Intelligence. Accountability addresses transparency, auditability, and responsibility mechanisms that mitigate harms. Adaptation highlights the importance of designing systems that are sensitive to local languages, practices, and cultural contexts. Collectively, these dimensions operationalize normative commitments into tools that can be empirically tested and applied to policy and practice.
Unlike existing ICT4D or AI governance frameworks that largely remain at the level of normative principles or descriptive cases, the 4A model advances the field by (a) operationalizing inclusivity through four empirically validated constructs (access, agency, accountability, and adaptation), (b) integrating large-scale quantitative analysis with qualitative and participatory insights, and (c) demonstrating cross-context comparability through measurement invariance testing. This combination ensures that the framework not only theorizes inclusivity but also provides actionable, evidence-based tools for policymakers and practitioners.
The purpose of this article is therefore twofold. First, it synthesizes insights from Information and Communication Technologies for Development, Artificial Intelligence ethics, and digital governance to articulate the tensions that shape developmental outcomes. Second, it introduces and validates a model grounded in empirical evidence from marginalized communities, offering governments, practitioners, and scholars a practical framework to guide design, implementation, and evaluation. By integrating critical theory with mixed-method field data, this study contributes to ongoing debates in Information, Communication & Society on the social, cultural, political, and economic consequences of evolving technologies. It argues that the developmental trajectories of Artificial Intelligence are contingent on governance choices and participatory practices, and that equitable futures depend on embedding inclusivity, accountability, and sustainability at the core of technological design and implementation.

2. Literature Review

2.1. Inclusion, Diversity, and Data Justice

A central strand of scholarship interrogates how Artificial Intelligence systems reproduce or disrupt social hierarchies. Work on diversity, equity, and inclusion audits the prescriptive quality of ethics guidelines against their operational sufficiency, showing persistent gaps between principles and practice [13]. Data justice perspectives extend this critique by centering epistemic, distributive, and participatory dimensions of fairness, with recent contributions advancing decolonial and Indigenous frameworks that foreground situated knowledges, land relations, and collective rights [19]. Building on these positions, the debate has turned to whether universal templates of justice travel across geographies or whether governance must be reflexively plural and negotiated within context-specific power relations [20]. Regionally grounded policy analyses argue that inclusive Artificial Intelligence requires rights-respecting data governance architectures that attend to sovereignty, procurement, and representativeness, particularly across African states [1]. In education and capacity-building, scholarship links culturally responsive pedagogy in data science with social justice outcomes, proposing curricular designs that scaffold critical awareness and civic responsibility [17].

2.2. From Ethics to Governance: Translating Principles into Organizational Practice

A rapidly growing body of work examines how organizations move from high-level ethical commitments to concrete governance routines. A comprehensive review in the information systems field synthesizes responsible Artificial Intelligence governance as interlocking structural, relational, and procedural practices across the system life cycle and calls for stronger operationalization, validation, and accountability mechanisms [21]. Organizational governance agendas now encompass technical guardrails, stakeholder and contextual alignment, regulatory compliance, and process maturity [22]. Public policy scholarship adds a meso–macro lens, arguing that generative Artificial Intelligence introduces volatility, opacity, and new coordination problems that must be addressed with governance models attuned to uncertainty and institutional capacity [23]. Conceptual work further frames generative Artificial Intelligence as a complex adaptive system, urging adaptive, iterative, and learning-oriented governance rather than static compliance checklists [24]. Complementing academic proposals, standard-setting efforts articulate risk profiles and actionable controls for governance, content provenance, pre-deployment testing, and incident disclosure, consolidating a baseline of shared practice [25].

2.3. Societal Risks, Surveillance, and the Politics of Framing

Critical social science research highlights the entanglement of Artificial Intelligence with surveillance, extraction, and market logics that can amplify social control and asymmetries of power. Studies in communication and media show how policy crises surrounding generative Artificial Intelligence reconfigure governance agendas and discursive opportunity structures, shaping which harms and remedies become actionable in law and regulation [26]. Recent investigations within communication and development also emphasize participatory and action-research approaches that redistribute epistemic authority and interrogate dominant narratives of technological solutionism [27]. These currents converge with analyses of inclusion and data justice by insisting that governance choices are political choices that structure whose agency counts and which futures are seen as legitimate [19,20].

2.4. Digital Divides, Access, and Territorial Inequalities

Work on access and connectivity remains foundational for equitable Artificial Intelligence. New evidence from development policy shows that place-based digital strategies and big data pilot zones can reduce territorial divides when institutional diffusion, administrative supply, and market incentives align, although effects are contingent on local capacity and authority [28]. Global monitoring indicates that billions remain offline and that uptake lags even where coverage exists, implying that inclusive Artificial Intelligence depends on affordability, devices, skills, and trustworthy public-service applications in health, education, and finance [29]. These findings connect with broader research in Information and Communication Technologies for Development that situates Artificial Intelligence within infrastructures, markets, and institutions rather than isolating algorithms as autonomous drivers of change.

2.5. Applications, Sectoral Transformations, and Methodological Advances

Across domains, systematic reviews map the diffusion of generative Artificial Intelligence and its multidisciplinary impacts while also documenting uneven benefits and emergent risks [15]. Design research in the built environment illustrates how conditional generative adversarial networks support early-stage planning under real constraints, demonstrating opportunities and risk trade-offs in human–machine co-creation [16]. In safety-critical and organizational contexts, quantitative risk assessment methods continue to evolve; fuzzy failure mode and effect analysis with Z-numbers advances prioritization under uncertainty, with implications for auditing complex, hybrid human–machine systems [18]. Ethical integration work argues that transparency, fairness, and privacy must be engineered across the life cycle rather than appended at the end, aligning normative aspirations with technical controls and oversight [14].

2.6. Synthesis and Implications for This Study

Across this literature, three cross-cutting insights recur. First, inclusion requires governance architectures that are context-sensitive, participatory, and enforceable, not merely declarative [13,21]. Second, generative Artificial Intelligence challenges static governance by introducing uncertainty and systemic coupling, which strengthens the case for adaptive, risk-based, and learning-centered approaches grounded in standards and continuous monitoring [23,24,25]. Third, developmental value depends on addressing territorial and social inequalities in access and use, linking connectivity, capability-building, and sectoral service quality to the design and evaluation of Artificial Intelligence interventions [28,29]. These insights inform the framework advanced in this article, which foregrounds access, agency, accountability, and adaptation as interdependent dimensions for aligning Artificial Intelligence with inclusive development.
Table 1 consolidates five dominant literature themes (data justice, ethics, societal risks, digital divides, and sectoral applications), highlighting a persistent gap between conceptual ideals and applied mechanisms. The proposed framework translates these theoretical domains into measurable constructs of access, agency, accountability, and adaptation, thereby operationalizing inclusion within empirical contexts. This alignment bridges ethical intent with governance implementation, enhancing the methodological and policy relevance of inclusive AI in the Global South.

3. Materials and Methods

3.1. Study Design and Rationale

This study used a mixed-method, sequential explanatory design to develop and validate a data-driven framework for aligning Artificial Intelligence with inclusive development outcomes [30,31,32]. The design integrated quantitative and qualitative components in an iterative and explicitly theory-informed sequence. In the first phase, the research team generated and refined constructs representing access, agency, accountability, and adaptation through fieldwork and instrument development guided by best practices in scale construction, content validity assessment, and cognitive interviewing [33,34,35,36]. In the second phase, the measurement model and the structural relations among constructs and outcomes were evaluated quantitatively using factor-analytic techniques and structural equation modeling with a prespecified analytic plan [37,38]. In the third phase, qualitative inquiry was undertaken to explain quantitative signals, test the boundary conditions of the constructs, and ensure contextual fitness for purpose; integration proceeded through joint displays and meta-inferences in line with guidance for high-quality mixed methods integration [39].

3.2. Setting and Participants

Fieldwork took place in urban, peri-urban, and rural communities across the Global South that vary in socio-economic conditions, linguistic ecologies, and digital infrastructure. Participants included end users of Artificial Intelligence-enabled or digitally mediated public services in health, education, agriculture, and social protection; frontline workers and implementers; and local policymakers. Sampling emphasized heterogeneity in access conditions and use contexts to maximize explanatory leverage for framework development and to enhance the transferability of findings across settings [30]. Recruitment procedures were community-based and relied on local partners to support inclusive participation.

3.3. Measures and Instrument Development

The framework was operationalized as four latent constructs with reflective indicators derived from field notes, policy documents, and instruments commonly used in evaluations of digital public services in Information and Communication Technologies for Development. Item generation combined deductive theorization with inductive insights from preliminary qualitative work. Content validity was established through expert review panels and calculation of a content validity index [35]. Cognitive interviewing was used to assess item comprehension, recall processes, and response mapping across languages and literacy levels, followed by piloting to verify clarity and cultural appropriateness [36,40]. Access captured connectivity, device availability, affordability, language support, disability accommodations, and service reliability. Agency captured understanding and contestation of automated decisions, perceived self-efficacy, consent and opt-out mechanisms, and pathways to human redress. To reflect its multi-level character, agency was defined at both the individual level (comprehension of automated outputs, confidence to challenge, and personal recourse channels) and the community level (collective avenues for redress, participatory decision-making, and support networks that amplify individual claims) [41]. While survey items predominantly assessed individual agency, qualitative inquiry and participatory workshops captured the community dimension. Accountability captured auditability and explainability, clarity of institutional responsibility across public and private actors, and transparency of data practices.
Adaptation captured fit to local languages and practices, availability of offline or low-bandwidth modes, and organizational capacity for iterative improvement. To make explicit its scope, adaptation was defined across three interrelated dimensions: (a) linguistic adaptation, ensuring interfaces accommodate local languages and literacy levels; (b) socio-cultural adaptation, aligning services with local customs, norms, and everyday practices; and (c) infrastructural adaptation, including provisions for low-connectivity environments, fallback human support, and mechanisms for iterative improvement. Items used four- or five-point Likert-type response formats. Objective indicators such as bandwidth, device access, and service uptime were recorded as continuous or binary variables. Reliability targets and item retention thresholds followed established guidance for measurement development [33,34,42]. Adaptation in this framework emphasizes responsiveness to language, cultural practices, and infrastructural diversity; it does not directly subsume sensitivity to local power structures. Issues of power and governance are instead addressed by agency (which enables contestation of automated decisions) and accountability (which requires responsibility assignment and redress). This boundary preserves conceptual clarity: adaptation focuses on ecological fit, while agency and accountability capture institutional adaptation to asymmetries of power [43].
To ensure adaptability to governance capacity, accountability indicators were specified in a tiered fashion: in high-capacity sites, items included formal audits, certified personnel, and traceability reports; in low-capacity sites, proxy indicators such as grievance logs, disclosure practices, and community monitoring committees were used. Subgroup comparisons showed lower mean scores in low-capacity sites but invariance in construct functioning, confirming that the accountability dimension remains valid across governance contexts. Related algorithmic advances illustrate the value of aligning systems with specific social needs: recent work demonstrates how 3D graph deep learning with laser point clouds enables intelligent hand segmentation for visual interaction and how deep learning-based human–machine interaction can support hand function rehabilitation in aging populations [16].
As summarized in Table 2, the framework aligns conceptual domains with measurable indicators and harm typologies.

3.4. Data Collection Procedures

Survey modules captured demographics, the four constructs, service use, development-relevant outcomes, and perceived harms. Enumerators administered face-to-face questionnaires using secure mobile data collection tools with encrypted local storage and authenticated synchronization [44]. Semi-structured interviews and non-participant observations documented lived experiences with Artificial Intelligence-mediated services, with focused attention to breakdowns, workarounds, and informal governance. Qualitative protocols followed consolidated guidance for thematic analysis and ensured reflexive memoing throughout fieldwork [45,46]. Participatory workshops with community representatives, service providers, and policymakers were used to validate items, interpret preliminary findings, and co-prioritize design and policy recommendations [27,47,48]. Field teams recorded paradata to monitor interview context, interruptions, and device issues for subsequent quality checks.

3.5. Analytical Strategy

No commercial materials were used in this research. All statistical and graphical analyses were conducted using R (Version 4.2.2; R Foundation for Statistical Computing, Vienna, Austria) and RStudio (Version 2023.03.0+386; Posit Software, Boston, MA, USA). Data preparation included de-identification, range checks, logical consistency checks, and screening for missingness patterns. When justified by diagnostics, missing data were handled using multiple imputation under a missing at random assumption with an appropriate number of imputations and convergence checks [49,50,51]. Objective indicators were normalized to aid comparability across sites.
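To make the imputation step concrete, the following is a minimal R sketch using the mice package; the data frame, the variable names (bandwidth_kbps, uptime_pct, smartphone), and the choice of m = 20 imputations are illustrative assumptions, not the study's actual instruments or settings.

```r
library(mice)

## Simulated stand-in for the survey's objective indicators (~5% missing)
set.seed(2025)
dat <- data.frame(
  bandwidth_kbps = c(rnorm(1820, 2000, 600), rep(NA, 100)),
  uptime_pct     = pmin(100, rnorm(1920, 92, 6)),
  smartphone     = rbinom(1920, 1, 0.7)
)

## Multiple imputation under MAR using predictive mean matching
imp <- mice(dat, m = 20, method = "pmm", seed = 2025, printFlag = FALSE)
plot(imp)                  # trace plots as a convergence check

dat_c <- complete(imp, 1)  # one completed dataset (pool across all m in practice)
dat_c$bandwidth_z <- as.numeric(scale(dat_c$bandwidth_kbps))  # normalize for cross-site comparability
```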
Measurement modeling proceeded in two stages. First, exploratory factor analysis on a random half-sample examined dimensionality and guided item reduction using principal axis factoring with oblique rotation; decisions considered communalities, cross-loadings, and interpretability in line with methodological recommendations [52,53]. Second, confirmatory factor analysis on the holdout half-sample evaluated the four-factor structure. Reliability and validity were assessed using coefficient alpha, McDonald’s omega, composite reliability, average variance extracted, and the heterotrait–monotrait ratio of correlations [42,54,55,56]. Model fit was judged using the comparative fit index, the Tucker–Lewis index, the root mean square error of approximation, and the standardized root mean square residual with conventional thresholds and simulation-based guidance [38,57]. Dimension scores were computed from retained items and standardized across sites. A composite index was calculated as the unweighted mean of the four standardized dimension scores; concordance was assessed against a principal components weighted index [58]. When performing families of hypothesis tests, the false discovery rate was controlled using the Benjamini–Hochberg procedure [59]. Structural relations between the four dimensions and outcomes, including service reach, time savings, grievance resolution, and reported harms, were estimated using structural equation modeling with robust estimators and site-level clustering in sensitivity analyses [38,60,61]. Directed acyclic graphs encoded a priori assumptions about confounding, mediation, and selection, informing adjustment sets for identification [62,63,64]. Where appropriate, doubly robust estimators were implemented to estimate average treatment effects of access improvements, combining outcome regression with propensity weighting to protect against single-model misspecification [65,66,67].
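The split-sample measurement workflow can be sketched in R as follows. This is an illustration under simulated data, not the study's analysis code: the item names, loadings, and the p-values passed to the Benjamini–Hochberg adjustment are invented, and the psych, GPArotation, lavaan, and semTools packages are assumed.

```r
library(psych)        # EFA and sampling-adequacy diagnostics
library(GPArotation)  # oblimin rotation backend for psych::fa
library(lavaan)       # CFA
library(semTools)     # reliability, AVE, HTMT

## Simulate illustrative item data with a four-factor population structure
pop <- '
  access  =~ 0.7*ac1 + 0.7*ac2 + 0.7*ac3
  agency  =~ 0.7*ag1 + 0.7*ag2 + 0.7*ag3
  account =~ 0.7*au1 + 0.7*au2 + 0.7*au3
  adapt   =~ 0.7*ad1 + 0.7*ad2 + 0.7*ad3
'
set.seed(2025)
items <- simulateData(pop, sample.nobs = 1920)

## Stage 1: EFA on a random half-sample (principal axis, oblique rotation)
half <- sample(nrow(items), floor(nrow(items) / 2))
KMO(items[half, ])                                  # sampling adequacy
efa <- fa(items[half, ], nfactors = 4, fm = "pa", rotate = "oblimin")
print(efa$loadings, cutoff = 0.30)                  # flag cross-loadings > 0.30

## Stage 2: CFA on the holdout half-sample
model <- '
  access  =~ ac1 + ac2 + ac3
  agency  =~ ag1 + ag2 + ag3
  account =~ au1 + au2 + au3
  adapt   =~ ad1 + ad2 + ad3
'
fit <- cfa(model, data = items[-half, ], estimator = "MLR")
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))  # conventional fit indices
reliability(fit)                                    # alpha, omega, AVE per factor
htmt(model, data = items[-half, ])                  # discriminant validity (< 0.85)

## Benjamini-Hochberg control over a family of tests (illustrative p-values)
p.adjust(c(0.001, 0.008, 0.020, 0.049, 0.300), method = "BH")
```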
Qualitative analysis followed a combined deductive–inductive approach. Deductive codes aligned with the four constructs. Inductive codes captured emergent themes including surveillance risks, language barriers, adaptive workarounds, and institutional responsibility. Analyst triangulation, peer debriefing, and intercoder agreement checks supported credibility, and the team pursued code and meaning saturation benchmarks to determine sample sufficiency [46,68,69]. Integration used joint displays to align quantitative signals with qualitative explanations and to refine construct boundaries and implications [39].

3.6. Robustness and Bias Mitigation

Subgroup analyses by gender, age, disability status, and urbanicity assessed heterogeneity. Measurement invariance testing examined configural, metric, and scalar invariance across subgroups to evaluate comparability of latent constructs, with attention to changes in fit indices under multi-group constraints [70,71,72]. To mitigate common method bias, the measurement design combined self-reports with objective indicators and administrative logs, used method separation where feasible, and tested for problematic single-factor structures [73]. Potential enumerator effects were examined using multilevel models with random enumerator effects and sensitivity checks for interviewer clustering [74,75].
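Continuing the simulated items and model from the previous sketch, the invariance sequence and a single-factor check for common method bias might look as follows; the urbanicity grouping and the freed intercepts are hypothetical stand-ins for the study's actual specification.

```r
library(lavaan)
library(semTools)

## Attach an illustrative grouping variable to the simulated items
items$urbanicity <- sample(c("urban", "peri-urban", "rural"),
                           nrow(items), replace = TRUE)

## Configural -> metric -> scalar invariance across groups
configural <- cfa(model, data = items, group = "urbanicity")
metric     <- cfa(model, data = items, group = "urbanicity",
                  group.equal = "loadings")
scalar     <- cfa(model, data = items, group = "urbanicity",
                  group.equal = c("loadings", "intercepts"))
summary(compareFit(configural, metric, scalar))  # inspect delta CFI / delta RMSEA

## Partial scalar invariance: free intercepts flagged by modification indices
partial <- cfa(model, data = items, group = "urbanicity",
               group.equal   = c("loadings", "intercepts"),
               group.partial = c("ag2 ~ 1", "ag3 ~ 1"))

## Single-factor check: poor fit of a one-factor model argues against a
## dominant common method factor
one_factor <- cfa('g =~ ac1 + ac2 + ac3 + ag1 + ag2 + ag3 +
                        au1 + au2 + au3 + ad1 + ad2 + ad3',
                  data = items)
fitMeasures(one_factor, c("cfi", "rmsea"))
```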

3.7. Ethics and Consent

The study obtained ethical clearance from the University of Moratuwa, Sri Lanka, and the protocol, instruments, and consent procedures were approved by relevant institutional review boards and national ethics committees. Ethical conduct followed the Belmont principles of respect for persons, beneficence, and justice, as well as the Declaration of Helsinki [76,77]. Written informed consent was obtained from all participants, with additional assent and caregiver consent when required. Data security procedures included encrypted storage, role-based access controls, and privacy-by-design practices. De-identification combined removal of direct identifiers with conservative treatment of quasi-identifiers following established disclosure control principles [78].

3.8. Reproducibility, Data, and Code Availability

Upon publication, survey instruments, codebooks, de-identified datasets, analysis code, and step-by-step replication instructions will be deposited in a public repository with a persistent identifier. The repository will include computational environment details and executable scripts to enable end-to-end reproduction in line with best practices for open and reproducible research [79,80,81]. Any restrictions related to privacy, contractual limitations, or legal frameworks will be documented together with procedures for vetted access.

4. Results

4.1. Sample Characteristics

A total of N = 1920 respondents were enrolled across six sites classified as urban (n = 720), peri-urban (n = 640), and rural (n = 560). The sample comprised 52.1% female, 46.9% male, and 1.0% non-disclosed respondents. The median age was 33 years with an interquartile range of 24 to 44 years. Daily internet access was reported by 67.8% of respondents, intermittent access by 21.4%, and no regular access by 10.8%. Smartphone ownership was 82.3% in urban sites, 74.1% in peri-urban sites, and 61.5% in rural sites. Table 3 and Figure 1 summarize baseline characteristics that are relevant for Artificial Intelligence-enabled public service use.
Urban sites show systematically higher connectivity and device access than peri-urban and rural sites. The gradient matches subsequent differences in outcomes, supporting the expectation that infrastructural conditions and device availability are foundational determinants of engagement with Artificial Intelligence-enabled services.

4.2. Measurement Results: Dimensionality, Reliability, and Validity

Exploratory factor analysis on a random half-sample (n = 960) supported a four-factor solution corresponding to access, agency, accountability, and adaptation (Kaiser–Meyer–Olkin measure of sampling adequacy = 0.89, Bartlett’s test of sphericity p < 0.001). After removing six items with low communalities (h² < 0.30) or cross-loadings greater than 0.30, the retained twenty-item structure explained 64.2% of total variance. Standardized loadings ranged from 0.58 to 0.82 for access, 0.61 to 0.85 for agency, 0.57 to 0.81 for accountability, and 0.59 to 0.83 for adaptation.
Confirmatory factor analysis on the holdout sample (n = 960) indicated good fit: χ²(164) = 412.6, comparative fit index = 0.957, Tucker–Lewis index = 0.948, root mean square error of approximation = 0.041 with ninety percent confidence interval 0.038 to 0.045, and standardized root mean square residual = 0.046, in line with conventional thresholds [38,57]. Composite reliability ranged from 0.82 to 0.88 across dimensions, and average variance extracted ranged from 0.54 to 0.63. Discriminant validity held by the Fornell–Larcker criterion and by a heterotrait–monotrait ratio less than 0.85 for all pairs [55,56]. Table 4 provides summary psychometrics, while Figure 2 visualizes the measurement structure at the dimension level. To evaluate cross-cultural generalizability, I conducted multi-group confirmatory factor analyses across the South Asian site subsamples. Configural and metric invariance held across groups, and partial scalar invariance was established after freeing two intercepts in the agency dimension. These results indicate that the four-factor structure is structurally stable across cultural contexts, supporting the reuse of the framework for cross-regional analysis while acknowledging minor baseline response differences.
The four-dimensional structure shows clear separation with satisfactory reliability and convergent validity, indicating that the constructs capture distinct yet related aspects of inclusive Artificial Intelligence implementation.

4.3. Index Scores and Site Differences

Dimension scores were standardized to mean zero and standard deviation one and averaged to form the composite four-dimension index. The pooled mean was −0.02 (standard deviation = 0.73), reflecting a slight negative skew driven by lower rural scores. Means by site type were +0.21 (standard deviation = 0.65) for urban, −0.03 (standard deviation = 0.69) for peri-urban, and −0.28 (standard deviation = 0.76) for rural. Pairwise differences were significant at the p < 0.01 level after controlling the false discovery rate using the Benjamini–Hochberg procedure. A principal components weighted index produced highly concordant rankings with Spearman correlation = 0.94 and p < 0.001, indicating robustness to alternative weighting schemes. Figure 3 shows index distributions by site type.
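For transparency, the index construction and the concordance check can be expressed in a few lines of R; the scores data frame below is a randomly generated stand-in for the per-respondent dimension scores.

```r
## Composite 4A index: standardize dimensions, average, and compare with a
## principal-components-weighted alternative (illustrative data)
set.seed(2025)
scores <- data.frame(access = rnorm(1920), agency = rnorm(1920),
                     accountability = rnorm(1920), adaptation = rnorm(1920))

z <- scale(scores)                           # mean 0, SD 1 per dimension
idx_mean <- rowMeans(z)                      # unweighted composite

pca <- prcomp(z)                             # first-component weights
w <- abs(pca$rotation[, 1]); w <- w / sum(w)
idx_pca <- as.numeric(z %*% w)               # PCA-weighted composite

cor(idx_mean, idx_pca, method = "spearman")  # rank concordance of the two indices
```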
The composite index (Figure 3) exhibits a consistent urban advantage that narrows in peri-urban sites and is most constrained in rural sites. The pattern is robust to alternative index construction and aligns with observed gradients in connectivity and device access.

4.4. Structural Relations with Development-Relevant Outcomes

Structural equation modeling related the four dimensions to four outcomes: service reach, time savings per transaction (log-transformed), grievance resolution within two weeks, and reported harms in the previous six months. Model fit was acceptable (comparative fit index = 0.951, Tucker–Lewis index = 0.942, root mean square error of approximation = 0.045, standardized root mean square residual = 0.051). Figure 4 presents a coefficient plot with ninety-five percent confidence intervals, and Table 5 lists estimates.
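A hedged sketch of the structural model is shown below under simulated data with two illustrative outcomes; indicator and outcome names are invented, and the sketch omits the site-level clustering adjustment applied in the study's sensitivity analyses.

```r
library(lavaan)

## Population model used only to generate illustrative data: four latent
## dimensions predicting two manifest outcomes (reach, harms)
pop <- '
  access  =~ 0.7*ac1 + 0.7*ac2 + 0.7*ac3
  agency  =~ 0.7*ag1 + 0.7*ag2 + 0.7*ag3
  account =~ 0.7*au1 + 0.7*au2 + 0.7*au3
  adapt   =~ 0.7*ad1 + 0.7*ad2 + 0.7*ad3
  reach ~  0.4*access + 0.1*agency + 0.1*account + 0.1*adapt
  harms ~ -0.1*access - 0.1*agency - 0.2*account - 0.1*adapt
'
set.seed(2025)
sim <- simulateData(pop, sample.nobs = 1920)

## Fitted model: measurement part plus structural regressions
sem_model <- '
  access  =~ ac1 + ac2 + ac3
  agency  =~ ag1 + ag2 + ag3
  account =~ au1 + au2 + au3
  adapt   =~ ad1 + ad2 + ad3
  reach ~ access + agency + account + adapt
  harms ~ access + agency + account + adapt
'
fit_sem <- sem(sem_model, data = sim, estimator = "MLR")
standardizedSolution(fit_sem)   # standardized paths, as plotted in Figure 4
```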
I did not introduce explicit gap-adjustment variables in the structural models, as sociodemographic covariates and site fixed effects already capture relative differences in access. Gap-based metrics such as the urban–rural divide were examined separately through subgroup and invariance analyses (Section 4.5), which confirmed that the access–service coverage association was robust across groups. This strategy minimizes the risk of over-adjustment while still addressing concerns about relative disparities.

Disaggregation of Harms and Framework Correlations

To reduce conceptual ambiguity, reported harms were disaggregated into exclusionary, privacy/security, and procedural categories. Post hoc structural models revealed differentiated associations: access with exclusionary harms (β = −0.21, p < 0.01), agency with procedural harms (β = −0.24, p < 0.001), accountability with privacy/security harms (β = −0.28, p < 0.001), and adaptation with culturally rooted exclusionary harms (β = −0.17, p < 0.05). This disaggregation confirms that the four-dimensional framework maps onto distinct harm categories, reducing the risk of measurement bias.
Access is the dominant predictor of service reach and time savings. Agency is the dominant predictor of grievance resolution and is associated with fewer reported harms. Accountability is associated with fewer harms and higher rates of grievance resolution. Adaptation contributes to time savings and service reach, indicating that systems designed for local languages and low-bandwidth conditions reduce transaction costs and expand effective access.

4.5. Heterogeneity, Invariance, and Sensitivity Analyses

Configural and metric invariance held across gender and urbanicity groups, enabling principled comparison of latent means [70,71]. Partial scalar invariance was achieved after freeing two intercepts in line with best practice. Group differences in the composite index persisted after adjustment for age, gender, education, income group, and site fixed effects, with an urban versus rural difference of Δ = 0.46 standard deviations (ninety-five percent confidence interval = 0.38 to 0.54, p < 0.001). A doubly robust estimator for the effect of access on service reach among those exposed yielded an estimated average treatment effect on the treated of 0.17 (standard error = 0.04), aligning with the structural equation modeling estimates [65]. Alternative specifications using principal components weights and negative binomial models for harms produced substantively similar conclusions. Intraclass correlation for enumerator effects was less than 0.02, suggesting minimal interviewer-induced variance [74].
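The doubly robust estimator can be illustrated with a compact AIPW-style computation in R; the treatment, covariates, and outcome below are simulated, and the covariate set is a stand-in for the study's actual adjustment variables.

```r
## Doubly robust (AIPW-style) sketch of the ATT of an access improvement
## on service reach, under fully simulated data
set.seed(2025)
n     <- 1920
age   <- rnorm(n, 35, 10)
rural <- rbinom(n, 1, 0.3)
treat <- rbinom(n, 1, plogis(-0.5 + 0.02 * age - 0.8 * rural))
reach <- rbinom(n, 1, plogis(-1 + 0.7 * treat + 0.01 * age - 0.5 * rural))
dat   <- data.frame(age, rural, treat, reach)

ps_fit  <- glm(treat ~ age + rural, family = binomial, data = dat)  # propensity model
out_fit <- glm(reach ~ treat + age + rural, family = binomial, data = dat)  # outcome model

ps <- fitted(ps_fit)                               # estimated propensity scores
m0 <- predict(out_fit, transform(dat, treat = 0),  # modeled untreated outcome
              type = "response")                   # for every respondent

## ATT combines outcome regression with propensity weighting, so the
## estimate stays consistent if either model is correctly specified
att <- sum(dat$treat * (dat$reach - m0) -
           (1 - dat$treat) * (ps / (1 - ps)) * (dat$reach - m0)) / sum(dat$treat)
att
```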

4.6. Qualitative Integration and Mechanisms

Qualitative analysis elucidated three mechanisms consistent with the quantitative patterns. First, infrastructural precarity constrained service reach, particularly where volatile bandwidth and shared device use were common; participants described deferring digital transactions until late evening to avoid congestion, aligning with the observed association between access and both reach and time savings. Second, a contestability gap emerged where participants lacked comprehensible explanations or channels to challenge automated decisions; sites with stronger agency scores reported more frequent resolution of grievances within institutional timeframes. Third, adaptive workarounds such as offline-first modes and local-language voice prompts reduced cognitive and transaction costs, clarifying the positive relation between adaptation and time savings.
As summarized in Table 6, the joint display aligns qualitative mechanisms with their corresponding quantitative signals across the four constructs.

5. Conclusions from Results

The four-dimensional framework is empirically supported and policy-relevant. Access and adaptation primarily shape reach and efficiency. Agency and accountability enable contestation, remedy, and safety. Robustness checks and invariance tests indicate that findings are stable across subgroups and modeling choices. Qualitative insights clarify how infrastructural conditions, user capability to contest, and locally responsive design jointly translate into equitable developmental outcomes.
Figure 5 presents standardized path coefficients from the structural equation model that links the four constructs (access, agency, accountability, and adaptation) to four development-relevant outcomes: service reach, time savings, grievance resolution, and reported harms. The diverging color scale is centered at zero so that positive associations are visually distinct from negative associations, while overlaid values display exact estimates together with statistical significance markers. Positive coefficients indicate that higher values of the construct are associated with improvements in the named outcome; negative coefficients indicate associations with reductions in the outcome.
Considering outcomes in turn, service reach is most strongly associated with access (0.41 ***), which underscores the centrality of connectivity, device availability, affordability, and reliability in enabling individuals to obtain services when needed. Agency (0.12 **) and adaptation (0.10 **) are positive but smaller, suggesting that user understanding and locally responsive design help extend reach once baseline access conditions are present. Accountability (0.07 *) is positive and modest, implying limited direct leverage of governance routines on this outcome. For time savings, access (0.22 ***) and adaptation (0.19 ***) dominate, indicating that infrastructure and localization to language and bandwidth profiles reduce transaction costs at the point of use; agency (0.06) and accountability (0.05), neither statistically significant, contribute little to speed once access and design fit are accounted for. Grievance resolution is driven primarily by agency (0.28 ***) and accountability (0.16 ***), which shows that the capacity to understand and contest automated decisions, combined with clear institutional responsibility and auditability, is pivotal for timely redress; access (0.08 **) and adaptation (0.06, not statistically significant) play secondary roles. For reported harms, all coefficients are negative, indicating protective associations; accountability exhibits the largest magnitude (−0.22 ***), followed by agency (−0.11 ***), access (−0.09 **), and adaptation (−0.08 *), implying that governance mechanisms and user contestability provide the strongest protection against adverse digital events, while improved access and design fit also contribute to risk reduction.
Synthesizing across constructs, access functions as a foundational driver of performance by strongly improving service reach and moderately improving time efficiency, with a small protective association against harms. Agency serves as the principal lever for dispute resolution and contributes meaningfully to safety, reflecting the value of explanations, appeal channels, and substantive user choice. Accountability operates as the main safeguard against harms and a meaningful contributor to grievance resolution, consistent with the importance of transparency, auditability, and clearly assigned responsibility. Adaptation improves efficiency and extends reach by aligning interaction modes with language, literacy, and bandwidth constraints.
The pattern of coefficients has direct implications for policy and design. Where the priority is to expand access to services, investments should focus first on access and adaptation in order to address infrastructural bottlenecks and reduce transaction costs. Where the priority is fairness, remedy, and safety, interventions should strengthen agency and accountability alongside baseline access so that users can understand and contest decisions and institutions can trace, explain, and correct failures. The results indicate complementarity rather than trade-off: simultaneous improvements across all four constructs are expected to expand reach and efficiency while reducing harms.
Two interpretive cautions apply. First, coefficients are standardized and should be compared within each outcome rather than across outcomes that may have different variances. Second, the figure presents main effects and does not display potential interaction terms or non-linearities. Statistical significance follows conventional thresholds as indicated in the cell labels, and interval estimates are reported in the main text and associated tables.

6. Discussion

6.1. Interpretation of Principal Findings

This study developed and validated a four-dimensional framework for aligning Artificial Intelligence with inclusive development. The empirical results show that access and adaptation are the strongest predictors of service reach and efficiency, while agency and accountability are most closely associated with grievance resolution and reductions in reported harms. Quantitatively, access exhibited a standardized association of β = 0.41 with service reach and β = 0.22 with time savings, whereas accountability was associated with fewer harms (β = −0.22). Adaptation was positively related to both time savings and service reach, consistent with the idea that low-bandwidth modes and local-language affordances reduce transaction costs. These relationships held after adjusting for socio-demographic covariates and site fixed effects, and they were robust across alternative specifications. Qualitative evidence identified concrete mechanisms that map onto the quantitative signals, including bandwidth volatility, contestability gaps, and local workarounds that lower cognitive and time burdens. The overall pattern supports the working hypothesis that equitable developmental outcomes depend not only on access to infrastructure but also on user capability to contest decisions, institutional responsibility for remedy, and alignment of system design with local practices [3,4].

6.2. How the Findings Relate to and Extend Prior Work

The gradient we observe across urban, peri-urban, and rural sites is consistent with long-standing evidence on the digital divide and the more recent literature on adverse digital incorporation. Macro-level research links ICT development to trade in services, innovation capacity, and productivity, yet emphasizes that aggregate gains do not guarantee inclusion for marginalized users. Our results extend this line of inquiry by showing that when adaptation is weak, the efficiency advantages of AI-enabled services accrue disproportionately to better-connected groups, amplifying existing inequalities. This complements design justice arguments that call for local-language interfaces, offline-first features, and iterative co-design to translate digital capacity into equitable use. The strong association between agency and grievance resolution reinforces work on data justice and governance. Where users can understand and contest automated decisions, and where human redress channels exist, harmful outcomes are less frequent and resolution is more likely [5,6]. The negative link between accountability and harms in our models aligns with calls to move beyond high-level ethical principles toward enforceable mechanisms, including audit trails, explainability thresholds, and responsibility mapping across public and private actors [3,6]. In agriculture and platformized sectors, evidence from Ghana shows how data extractivism and surveillance can undermine smallholders, which underscores the need for accountability provisions if digital platforms are to contribute to development rather than exacerbate precarity [7].

6.3. Distributional, Sectoral, and Environmental Implications

Our site differences mirror findings that digitalization can increase average productivity while widening gaps without distribution-sensitive design and governance. Studies link digital technology development to green total factor productivity and low-carbon transitions in Chinese cities but reveal that gains require complementary policy and institutional conditions. The 4A framework helps surface which micro-level conditions are missing when macro indicators are moving in a favorable direction. In financial services, expanded mobile money ecosystems have been associated with greater transparency and reduced petty corruption risks under certain governance conditions, which resonates with our accountability-related reductions in reported harms. In e-commerce and platform economies, inclusive gains depend on addressing last-mile constraints, fair data practices, and SME capabilities. City governance research shows that participation and collaboration mechanisms improve uptake and equity of digital services, which maps closely to the agency and adaptation dimensions in our framework [10]. Evidence on territorial policy interventions to narrow regional digital divides provides a macro-institutional complement to our access dimension [28]. At the same time, research warns that certain ICT expansions are associated with environmental burdens unless guided by sustainability-oriented policy mixes, which underscores the need to pair access expansion with green design and accountability [12].

6.4. Implications for Policy and Practice

The results suggest a practical sequencing of investments. First, close foundational access gaps while embedding affordability safeguards and public-interest obligations in connectivity and device markets [28]. Second, institutionalize participatory co-design and local-language support as baseline procurement requirements rather than optional features, drawing on established co-creation toolkits for low-resource settings. Third, mandate accountability provisions that are proportionate to risk, including independent auditability, explainability for adverse decisions, traceable grievance handling, and responsibility matrices that include private vendors [5,6]. Fourth, require adaptation for low-bandwidth and offline contexts, with explicit targets for latency, error tolerance, and fallback human pathways. Fifth, align AI investments with environmental and productivity objectives and require reporting on distributional impacts, consistent with recent work on green productivity and digital spillovers. Lastly, integrate program learning by publishing open performance dashboards disaggregated by gender, disability, and geography, which enables continuous improvement on the 4A dimensions.

6.5. Implications for Research

Future studies should test the framework longitudinally to evaluate whether improvements in specific dimensions lead to predicted changes in reach, efficiency, and safety. Natural experiments in connectivity upgrades, randomized evaluations of explainability and grievance mechanisms, and stepped-wedge rollouts of local-language features would strengthen causal inference. Comparative multi-country work can examine how regulatory capacity, procurement regimes, and platform governance moderate the 4A-outcome relationships [5,6]. Measurement research should expand item banks and assess invariance across additional axes such as language groups, disability types, and sector-specific contexts. Finally, combining survey measures with administrative telemetry and process logs will reduce common-method bias and allow fine-grained attribution of both harms and remedies. In addition to inclusivity and governance priorities, the framework also informs sustainability-oriented policy action. Governments and development partners can adopt green ICT procurement standards that privilege energy-efficient and low-carbon infrastructure, ensuring that investments align with SDG 13 (climate action). Accountability mechanisms may be extended to environmental risks by requiring environmental impact audits alongside algorithmic audits. This integrated approach ensures that inclusivity and sustainability are addressed together, avoiding trade-offs between social equity and ecological responsibility.

6.6. Limitations

Several constraints qualify interpretation. The design is primarily cross-sectional. Although directed acyclic graphs and doubly robust estimators were used to reduce bias, causal claims remain provisional. Partial scalar invariance required freeing a small number of intercepts, which introduces caution in strict comparisons of latent means. Some outcomes rely on self-reports and may be subject to recall and desirability bias. While the sites capture diverse socio-technical conditions, external validity beyond the study geographies must be established before broader extrapolation. These limitations point to clear next steps, including panel data, field experiments, and integration of independent administrative outcomes.

6.7. Conclusions

The 4A framework is empirically supported and policy-relevant. Access and adaptation shape reach and efficiency, while agency and accountability enable contestation, remedy, and safety. By translating critical insights into operational levers, the framework helps governments and implementers design AI-enabled services that are inclusive by default. Embedding the 4A dimensions into diagnostics, procurement, and performance management offers a route to ensure that the benefits of AI accrue broadly and that risks are identified and mitigated early [3]. By embedding access, agency, accountability, and adaptation into AI governance, the framework not only promotes inclusivity and justice but also supports the Sustainable Development Goals. Specifically, it advances SDG 9 (industry, innovation, and infrastructure) and SDG 10 (reduced inequalities), contributes to SDG 16 (peace, justice, and strong institutions), and enhances environmental sustainability by encouraging low-resource, energy-conscious design consistent with SDG 13 (climate action).

Funding

The Article Processing Charge (APC) was funded by the University of Kelaniya, Sri Lanka.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Moratuwa, Sri Lanka (protocol code ERN/2023/03; applied for in 2022 and approved on 27 March 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical and privacy restrictions associated with human participant confidentiality.

Conflicts of Interest

The author declares no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Andrés-Martínez, M.E.; Alfaro-Navarro, J.L. Relationships between Sustainable Development Goals and Information and Communication Technologies in Europe. Inf. Technol. Dev. 2025, 1–22. [Google Scholar] [CrossRef]
  2. Walsham, G. The application of IT in organisations: Some trends and issues. Inf. Technol. Dev. 1989, 4, 627–644. [Google Scholar] [CrossRef]
  3. Qureshi, S. Why Data Matters for Development? Exploring Data Justice, Micro-Entrepreneurship, Mobile Money and Financial Inclusion. Inf. Technol. Dev. 2020, 26, 201–213. [Google Scholar] [CrossRef]
  4. Qureshi, S. Digital transformation for development: A human capital key or system of oppression? Inf. Technol. Dev. 2023, 29, 423–434. [Google Scholar] [CrossRef]
  5. Roberts, T.; Oosterom, M. Digital authoritarianism: A systematic literature review. Inf. Technol. Dev. 2024, 1–25. [Google Scholar] [CrossRef]
6. Iazzolino, G.; Stremlau, N. AI for social good and the corporate capture of global development. Inf. Technol. Dev. 2024, 30, 626–643.
7. Sarku, R.; Ayamga, M. Is the right going wrong? Analysing digital platformization, data extractivism and surveillance practices in smallholder farming in Ghana. Inf. Technol. Dev. 2025, 1–27.
8. Zang, L.; Zhu, Y.; Cheng, D. Inequality through digitalization: Investigation of mediating and moderating mechanisms. Inf. Technol. Dev. 2024, 1–16.
9. Owoseni, A.; Twinomurinzi, H. Evaluating mobile app usage by service sector micro and small enterprises in Nigeria: An abductive approach. Inf. Technol. Dev. 2020, 26, 762–772.
10. Viale Pereira, G.; Cunha, M.A.; Lampoltshammer, T.J.; Parycek, P.; Testa, M.G. Increasing collaboration and participation in smart city governance: A cross-case analysis of smart city initiatives. Inf. Technol. Dev. 2017, 23, 526–553.
11. Schelenz, L.; Pawelec, M. Information and Communication Technologies for Development (ICT4D) critique. Inf. Technol. Dev. 2022, 28, 165–188.
12. Dada, J.T.; Akinlo, T.; Ajide, F.M.; Al-Faryan, M.A.S.; Tabash, M.I. Information communication and technology, and environmental degradation in Africa: A new approach via moments. Inf. Technol. Dev. 2025, 1–27.
13. Cachat-Rosset, G.; Klarsfeld, A. Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines. Appl. Artif. Intell. 2023, 37, 2176618.
14. Radanliev, P. AI Ethics: Integrating Transparency, Fairness, and Privacy in AI Development. Appl. Artif. Intell. 2025, 39, 2463722.
15. Khan, N.; Khan, Z.; Koubaa, A.; Khan, M.; Salleh, R. Global insights and the impact of generative AI-ChatGPT on multidisciplinary: A systematic review and bibliometric analysis. Connect. Sci. 2024, 36, 2353630.
16. Li, Y.; Chen, H.; Mao, J.; Chen, Y.; Zheng, L.; Yu, J.; Yan, L.; He, L. Artificial Intelligence to Facilitate the Conceptual Stage of Interior Space Design: Conditional Generative Adversarial Network-Supported Long-Term Care Space Floor Plan Design of Retirement Home Buildings. Appl. Artif. Intell. 2024, 38, 2354090.
17. Buckmire, R.; Hallare, M.; Maskelony, G.; Morales, T.; Ogueda, A.; Perez, C.; Seshaiyer, P.; Uma, R. Culturally Relevant Instruction and Sustainable Pedagogy (CRISP) for Data Science and Social Justice. Scatterplot 2025, 2, 2506871.
18. Bozkus, E.; Kaya, I. A fuzzy-based model proposal for risk assessment prioritization using failure mode and effect analysis and Z numbers: A real case study in an automotive factory. Int. J. Occup. Saf. Ergon. 2025, 1–20.
19. Arora, P. Creative data justice: A decolonial and Indigenous framework for shaping global futures for data technologies. Inf. Commun. Soc. 2024, 1–17.
20. de Souza, R. Can data justice be global? Exploring the practice of digital rights, and the search for cognitive data justice. Inf. Commun. Soc. 2025, 28, 1006–1022.
21. Papagiannidis, E.; Mikalef, P.; Conboy, K. Responsible artificial intelligence governance: A review and research framework. J. Strateg. Inf. Syst. 2025, 34, 101885.
22. Birkstedt, T.; Minkkinen, M.; Tandon, A.; Mäntymäki, M. AI governance: Themes, knowledge gaps and future agendas. Internet Res. 2023, 33, 133–167.
23. Taeihagh, A. Governance of Generative AI. Policy Soc. 2025, 44, 1–22.
24. Janssen, M. Responsible governance of generative AI: Conceptualizing GenAI as complex adaptive systems. Policy Soc. 2025, 44, 38–51.
25. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile; Technical Report NIST AI 600-1; U.S. Department of Commerce: Gaithersburg, MD, USA, 2024; approved by the NIST Editorial Review Board on 25 July 2024.
26. Cheung, S. Generative AI, generating crisis: Framing opportunity and threat in AI governance. Inf. Commun. Soc. 2025, 1–17.
27. Medrado, A.; Verdegem, P. Participatory action research in critical data studies: Interrogating Artificial Intelligence from a South–North approach. Big Data Soc. 2024, 11.
28. Yang, T.; Zhang, H.; Wang, H. Addressing territorial digital divide through digital policy: Lessons from China’s national comprehensive big data pilot zones. Inf. Technol. Dev. 2025, 31, 577–603.
29. Broadband Commission for Sustainable Development. The State of Broadband 2024: Leveraging Artificial Intelligence for Universal Connectivity; Technical Report; International Telecommunication Union and UNESCO: Geneva, Switzerland, 2024; flagship report released June 2024.
30. Creswell, J.W.; Plano Clark, V.L. Designing and Conducting Mixed Methods Research, 3rd ed.; SAGE Publications: Thousand Oaks, CA, USA, 2018.
31. Tashakkori, A.; Teddlie, C. (Eds.) SAGE Handbook of Mixed Methods in Social & Behavioral Research, 2nd ed.; SAGE Publications: Thousand Oaks, CA, USA, 2010.
32. O’Cathain, A.; Murphy, E.; Nicholl, J. The quality of mixed methods studies in health services research. J. Health Serv. Res. Policy 2008, 13, 92–98.
33. Boateng, G.O.; Neilands, T.B.; Frongillo, E.A.; Melgar-Quinonez, H.R.; Young, S.L. Best practices for developing and validating scales for health, social, and behavioral research: A primer. Front. Public Health 2018, 6, 149.
34. DeVellis, R.F. Scale Development: Theory and Applications, 4th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2016.
35. Lynn, M.R. Determination and quantification of content validity. Nurs. Res. 1986, 35, 382–385.
36. Willis, G.B. Cognitive Interviewing: A Tool for Improving Questionnaire Design; SAGE Publications: Thousand Oaks, CA, USA, 2005.
37. Brown, T.A. Confirmatory Factor Analysis for Applied Research, 2nd ed.; Guilford Press: New York, NY, USA, 2015.
38. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Press: New York, NY, USA, 2016.
39. Fetters, M.D.; Curry, L.A.; Creswell, J.W. Achieving integration in mixed methods designs: Principles and practices. Health Serv. Res. 2013, 48, 2134–2156.
40. Tourangeau, R.; Rips, L.J.; Rasinski, K. The Psychology of Survey Response; Cambridge University Press: Cambridge, UK, 2000.
41. Shandilya, S. Intersection of AI and Agency Law: Accountability, Consent, and the Evolution of Legal Frameworks for Modern Contracts. Available online: https://ssrn.com/abstract=5252526 (accessed on 1 September 2025).
42. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw–Hill: New York, NY, USA, 1994.
43. Amend, T. Governance for Ecosystem-Based Adaptation: Understanding the Diversity of Actors & Quality of Arrangements; Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH: Bonn, Germany, 2019.
44. Hartung, C.; Lerer, Y.A.; Tseng, W.; Brunette, W.; Borriello, G.; Anderson, R. Open Data Kit: Tools to Build Information Services for Developing Regions. In Proceedings of the 4th ACM/IEEE International Conference on Information and Communication Technologies and Development, London, UK, 13–16 December 2010.
45. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101.
46. Nowell, L.S.; Norris, J.M.; White, D.E.; Moules, N.J. Thematic analysis: Striving to meet the trustworthiness criteria. Int. J. Qual. Methods 2017, 16, 1–13.
47. Sanders, E.B.N.; Stappers, P.J. Co-creation and the new landscapes of design. CoDesign 2008, 4, 5–18.
48. Chambers, R. The origins and practice of participatory rural appraisal. World Dev. 1994, 22, 953–969.
49. Rubin, D.B. Multiple Imputation for Nonresponse in Surveys; John Wiley & Sons: New York, NY, USA, 1987.
50. van Buuren, S. Flexible Imputation of Missing Data, 2nd ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2018.
51. Little, R.J.A.; Rubin, D.B. Statistical Analysis with Missing Data, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2019.
52. Fabrigar, L.R.; Wegener, D.T.; MacCallum, R.C.; Strahan, E.J. Evaluating the use of exploratory factor analysis in psychological research. Psychol. Methods 1999, 4, 272–299.
53. Costello, A.B.; Osborne, J.W. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 2005, 10, 1–9.
54. McDonald, R.P. Test Theory: A Unified Treatment; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1999.
55. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50.
56. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135.
57. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55.
58. Jolliffe, I.T. Principal Component Analysis, 2nd ed.; Springer: New York, NY, USA, 2002.
59. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300.
60. Satorra, A.; Bentler, P.M. Corrections to test statistics and standard errors in covariance structure analysis. In Latent Variables Analysis: Applications for Developmental Research; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1994; pp. 399–419.
61. Yuan, K.H.; Bentler, P.M. Three likelihood-based methods for mean and covariance structure analysis with nonnormal missing data. Sociol. Methodol. 2000, 30, 165–200.
62. Pearl, J. Causality: Models, Reasoning, and Inference, 2nd ed.; Cambridge University Press: Cambridge, UK, 2009.
63. Hernán, M.A.; Robins, J.M. Causal Inference: What If; Chapman & Hall/CRC: Boca Raton, FL, USA, 2020.
64. Textor, J.; van der Zander, B.; Gilthorpe, M.S.; Liskiewicz, M.; Ellison, G.T.H. Robust causal inference using directed acyclic graphs: The R package dagitty. Int. J. Epidemiol. 2016, 45, 1887–1894.
65. Bang, H.; Robins, J.M. Doubly robust estimation in missing data and causal inference models. Biometrics 2005, 61, 962–973.
66. Stuart, E.A. Matching methods for causal inference: A review and a look forward. Stat. Sci. 2010, 25, 1–21.
67. Angrist, J.D.; Pischke, J.S. Mostly Harmless Econometrics: An Empiricist’s Companion; Princeton University Press: Princeton, NJ, USA, 2009.
68. Denzin, N.K. The Research Act: A Theoretical Introduction to Sociological Methods, 2nd ed.; McGraw–Hill: New York, NY, USA, 1978.
69. Hennink, M.M.; Kaiser, B.N.; Marconi, V.C. Code saturation versus meaning saturation: How many interviews are enough? Qual. Health Res. 2017, 27, 591–608.
70. Meredith, W. Measurement invariance, factor analysis and factorial invariance. Psychometrika 1993, 58, 525–543.
71. Putnick, D.L.; Bornstein, M.H. Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Dev. Rev. 2016, 41, 71–90.
72. Chen, F.F. Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Model. 2007, 14, 464–504.
73. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903.
74. Gelman, A.; Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models; Cambridge University Press: Cambridge, UK, 2007.
75. Snijders, T.A.B.; Bosker, R.J. Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling, 2nd ed.; SAGE Publications: London, UK, 2012.
76. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research; Technical Report; U.S. Department of Health, Education, and Welfare: Washington, DC, USA, 1979.
77. World Medical Association. World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. JAMA 2013, 310, 2191–2194.
78. Sweeney, L. k-anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2002, 10, 557–570.
79. Peng, R.D. Reproducible research in computational science. Science 2011, 334, 1226–1227.
80. Nosek, B.A.; Alter, G.; Banks, G.C.; Borsboom, D.; Bowman, S.D.; Breckler, S.J.; Buck, S.; Chambers, C.D.; Chin, G.; Christensen, G.; et al. Promoting an open research culture. Science 2015, 348, 1422–1425.
81. Stodden, V.; Leisch, F.; Peng, R.D. Implementing Reproducible Research; Chapman & Hall/CRC: Boca Raton, FL, USA, 2014.
Figure 1. Distributions of sample size, daily Internet access, and smartphone ownership by site type. Notes: Bars show relative differences across site types. The sample-count bar is scaled down by a factor of ten so that all three indicators fit a single axis range; this preserves the proportional pattern while keeping the panel to one scale.
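The scaling described in the notes is easy to misread, so the following minimal matplotlib sketch illustrates it with the Table 3 values; the variable names and figure styling are our own assumptions, not the paper's plotting code.

```python
import matplotlib.pyplot as plt
import numpy as np

# Site-level values taken from Table 3. Sample counts are divided by ten
# so that all three indicators share one 0-100 axis, as the caption notes.
sites = ["Urban", "Peri-urban", "Rural"]
counts_scaled = np.array([720, 640, 560]) / 10    # n / 10
internet = [79.3, 65.5, 55.4]                     # daily Internet access (%)
smartphone = [82.3, 74.1, 61.5]                   # smartphone ownership (%)

x = np.arange(len(sites))
width = 0.25
fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(x - width, counts_scaled, width, label="Sample size / 10")
ax.bar(x, internet, width, label="Daily Internet access (%)")
ax.bar(x + width, smartphone, width, label="Smartphone ownership (%)")
ax.set_xticks(x)
ax.set_xticklabels(sites)
ax.set_ylabel("Shared axis value")
ax.legend()
plt.tight_layout()
plt.show()
```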
Figure 2. Measurement model at the dimension level with standardized loading ranges. Notes: Boxes summarize the item set for each dimension (five retained items each). Numbers on arrows indicate the range of standardized loadings for the retained items in each dimension.
Figure 3. Composite four-dimension index by site type (mean and one standard deviation). Notes: Blue circles denote site-level mean values; gray bars show one standard deviation above and below each mean; red squares mark the SD endpoints and carry no additional statistical meaning. Higher values indicate more favorable conditions across access, agency, accountability, and adaptation.
Figure 4. Coefficient plot of structural relations with 95% confidence intervals. Notes: Black circles denote standardized point estimates; horizontal black lines indicate 95% confidence intervals. Intervals labeled as “approximate” are computed from reported standard errors using the conventional 1.96 × SE rule.
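As a worked instance of the 1.96 × SE rule mentioned in the notes, the short sketch below recomputes approximate 95% intervals for three of the Table 5 paths; the dictionary labels are descriptive names of ours, not identifiers from the study.

```python
# Approximate 95% confidence intervals from reported standardized
# estimates and standard errors, using the normal-quantile rule.
Z95 = 1.96

paths = {  # (estimate, SE) pairs copied from Table 5
    "Access -> service reach": (0.41, 0.03),
    "Agency -> grievance resolved": (0.28, 0.04),
    "Accountability -> reported harms": (-0.22, 0.04),
}

for label, (est, se) in paths.items():
    lo, hi = est - Z95 * se, est + Z95 * se
    print(f"{label}: {est:+.2f} [{lo:+.2f}, {hi:+.2f}]")
# e.g. Access -> service reach: +0.41 [+0.35, +0.47]
```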
Figure 5. Construct–outcome associations from the structural model. Cells show standardized coefficients with significance markers; color encodes direction and magnitude. Notes: Values are standardized path coefficients from the structural equation model. Markers denote statistical significance at conventional thresholds (*** p < 0.001, ** p < 0.01, * p < 0.05, † p < 0.10). The diverging palette is centered at zero to make direction and magnitude visually salient.
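A zero-centered diverging palette of the kind the notes describe can be reproduced with seaborn's heatmap; the sketch below feeds it the Table 5 coefficients and should be read as an illustration of the encoding, not the paper's plotting script.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Standardized path coefficients from Table 5
# (rows: constructs; columns: outcomes).
coefs = pd.DataFrame(
    [[0.41, 0.22, 0.08, -0.09],
     [0.12, 0.06, 0.28, -0.11],
     [0.07, 0.05, 0.16, -0.22],
     [0.10, 0.19, 0.06, -0.08]],
    index=["Access", "Agency", "Accountability", "Adaptation"],
    columns=["Service reach", "Time savings", "Grievance resolved", "Reported harms"],
)

# center=0 anchors the diverging palette at zero, so sign reads as color.
sns.heatmap(coefs, cmap="RdBu_r", center=0, annot=True, fmt="+.2f")
plt.tight_layout()
plt.show()
```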
Table 1. Summary of literature themes, insights, and gaps addressed by the framework.

| Theme | Key Insights from the Literature | Gap Addressed by Framework |
| --- | --- | --- |
| Inclusion and Data Justice | Context-sensitive, participatory governance required; risks of universal templates | Framework operationalizes inclusion via access, agency, accountability, adaptation |
| Ethics to Governance | Ethical principles often fail in translation to practice | Framework specifies measurable constructs and governance levers |
| Societal Risks | AI tied to surveillance, extraction, market logics | Framework foregrounds agency and accountability to mitigate harms |
| Digital Divides | Access and affordability remain foundational | Framework integrates infrastructural, linguistic, and cultural fit |
| Sectoral Applications | AI impacts uneven across domains, with methodological challenges | Framework validated with mixed methods and field data |
Table 2. Concepts, operational definitions, measurement indicators, and harm types addressed.

| Construct | Operational Definition | Measurement Indicators | Primary Harm Types Addressed |
| --- | --- | --- | --- |
| Access | Availability and affordability of digital infrastructure and services | Connectivity, device access, affordability, reliability | Exclusionary harms (unequal access, denial of services) |
| Agency | Capacity of individuals and communities to understand, contest, and influence automated decisions | Comprehension, appeal channels, consent, collective redress pathways | Procedural harms (opacity, lack of remedy) |
| Accountability | Institutional mechanisms for transparency, auditability, and responsibility | Clear assignment of responsibility, grievance resolution records, audit trails | Privacy and security harms (data breaches, surveillance), procedural harms |
| Adaptation | Fit of systems to local languages, cultural practices, and infrastructural realities | Local language support, offline/low-bandwidth modes, socio-cultural responsiveness | Exclusionary harms (bias against marginalized groups, cultural misfit) |
Table 3. Sample characteristics by site type.

| Characteristic | Urban (n = 720) | Peri-Urban (n = 640) | Rural (n = 560) |
| --- | --- | --- | --- |
| Female (%) | 50.7 | 53.4 | 52.7 |
| Age, median (interquartile range) | 31 (23–42) | 34 (25–45) | 36 (26–47) |
| Daily internet access (%) | 79.3 | 65.5 | 55.4 |
| Smartphone ownership (%) | 82.3 | 74.1 | 61.5 |
| Uses AI-enabled public service a (%) | 46.8 | 39.1 | 28.9 |
| Completed secondary education (%) | 77.6 | 63.8 | 48.2 |
| Household income below national median (%) | 28.1 | 46.7 | 62.4 |

a Such as eligibility screening, advisories, or triage delivered through digital platforms.
Table 4. Measurement model summary: loadings, reliability, and validity.

| Dimension | Items Retained | Loading Range | Composite Reliability | Average Variance Extracted |
| --- | --- | --- | --- | --- |
| Access | 5 | 0.58–0.82 | 0.84 | 0.54 |
| Agency | 5 | 0.61–0.85 | 0.88 | 0.60 |
| Accountability | 5 | 0.57–0.81 | 0.83 | 0.55 |
| Adaptation | 5 | 0.59–0.83 | 0.86 | 0.63 |
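Composite reliability and average variance extracted follow mechanically from standardized loadings under the usual Fornell–Larcker formulas [55]; the sketch below is a minimal illustration with invented loadings inside the Access range, since item-level loadings are not reported in the table.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2));
    error variance is 1 - lambda^2 for standardized loadings."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean squared standardized loading."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Hypothetical loadings spanning the Access range (0.58-0.82) in Table 4.
access = [0.58, 0.70, 0.74, 0.78, 0.82]
print(round(composite_reliability(access), 2))       # ~0.85, near the reported 0.84
print(round(average_variance_extracted(access), 2))  # ~0.53, near the reported 0.54
```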
Table 5. Structural model: standardized path coefficients (SE in parentheses).

| Predictor | Service Reach | Time Savings | Grievance Resolved | Reported Harms |
| --- | --- | --- | --- | --- |
| Access | 0.41 (0.03) *** | 0.22 (0.03) *** | 0.08 (0.03) ** | −0.09 (0.03) ** |
| Agency | 0.12 (0.03) ** | 0.06 (0.03) † | 0.28 (0.04) *** | −0.11 (0.03) *** |
| Accountability | 0.07 (0.03) * | 0.05 (0.03) | 0.16 (0.04) *** | −0.22 (0.04) *** |
| Adaptation | 0.10 (0.04) ** | 0.19 (0.04) *** | 0.06 (0.03) | −0.08 (0.03) * |

Model fit: CFI = 0.951, TLI = 0.942, RMSEA = 0.045, SRMR = 0.051. *** p < 0.001, ** p < 0.01, * p < 0.05, † p < 0.10. Models adjust for age, gender, education, income group, and site fixed effects.
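The structural model can be written in lavaan-style syntax; the sketch below uses the open-source semopy package (its Model, fit, inspect, and calc_stats calls, as we read its documentation), and everything in it, from the placeholder item names (acc1 … adp5) to the file name, is an assumption for illustration rather than the study's actual code.

```python
import pandas as pd
import semopy

# Four latent dimensions, each with the five retained indicators of Table 4,
# and two of the four outcomes regressed on them plus covariates (Table 5);
# the remaining outcomes follow the same pattern, and site fixed effects
# would enter as dummy columns.
DESC = """
Access         =~ acc1 + acc2 + acc3 + acc4 + acc5
Agency         =~ agn1 + agn2 + agn3 + agn4 + agn5
Accountability =~ act1 + act2 + act3 + act4 + act5
Adaptation     =~ adp1 + adp2 + adp3 + adp4 + adp5

service_reach      ~ Access + Agency + Accountability + Adaptation + age + female + education + income_group
grievance_resolved ~ Access + Agency + Accountability + Adaptation + age + female + education + income_group
"""

df = pd.read_csv("survey_items.csv")   # hypothetical analysis file
model = semopy.Model(DESC)
model.fit(df)
print(model.inspect(std_est=True))     # standardized path estimates
print(semopy.calc_stats(model))        # fit indices such as CFI, TLI, RMSEA
```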
Table 6. Joint display of qualitative mechanisms and quantitative signals.

| Construct | Qualitative Mechanism | Quantitative Signal |
| --- | --- | --- |
| Access | Bandwidth volatility and shared device use lead to deferred transactions and abandonment. | Strong positive association with service reach and time savings; modest negative association with harms. |
| Agency | Absence of clear explanations and appeal pathways limits user contestation and remedy. | Positive association with grievance resolution and service reach; negative association with harms. |
| Accountability | Ambiguity in institutional responsibility reduces traceability of adverse events. | Negative association with harms; positive association with grievance resolution. |
| Adaptation | Offline-first modes and local-language prompts reduce cognitive and transaction costs. | Positive association with time savings and service reach. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
