Article

Exploring the Impact of Generative AI on Digital Inclusion: A Case Study of the E-Government Divide

by Stefan Radojičić and Dragan Vukmirović *
Faculty of Organizational Science, University of Belgrade, Jove Ilića, 154, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
AI 2025, 6(12), 303; https://doi.org/10.3390/ai6120303
Submission received: 23 September 2025 / Revised: 5 November 2025 / Accepted: 21 November 2025 / Published: 25 November 2025
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)

Abstract

This paper examines how Generative AI (GenAI) reshapes digital inclusion in e-government. We develop the E-Government Divide Measurement Indicator (EGDMI) across three dimensions: D1—Breadth of the Divide (foundational access, affordability, and basic skills), D2—Sectoral/Specific Divide (actual use, experience, and trust in e-government), and D3—GenAI Gap (access, task use, and competence). The index architecture specifies indicator lists, sources, units, transformations, uniform normalization, and a documented weighting strategy with sensitivity and basic uncertainty checks. Using official statistics and qualitative evidence for Serbia, we report D1 and D2 as composite indices and treat D3 as an exploratory, non-aggregated layer given current data maturity. Results show strong foundational readiness (D1 = 73.6) but very low e-government uptake (D2 = 19.9), indicating a shift of the divide from access to meaningful use, usability, and trust. GenAI capabilities are emergent and uneven (D3 sub-dimensions: access 47.8; task use 39.4; competence/verification 43.6). Cluster analysis identifies four user profiles—from “Digitally Excluded” to “GenAI-Augmented Citizens”— that support differentiated interventions. The initial hypothesis—that GenAI can widen disparities in the short run—receives partial confirmation: GenAI may lower interaction costs but raises verification and ethics thresholds for vulnerable groups. We outline a policy roadmap prioritizing human-centered service redesign, transparency, and GenAI literacy before automation, and provide reporting templates to support comparable monitoring and cross-country learning.

1. Introduction

Digital transformation in the public sector reshapes how states design, deliver, and govern services, with direct implications for access, equity, accountability, and inclusion. We adopt a public-sector–specific view of digital transformation as a socio-technical, data-intensive reconfiguration of processes, capabilities, and citizen–state interactions aimed at creating public value and reducing exclusion. Consistent with Saeedikiya et al. (2025) [1], we define digital transformation as a fundamental, technology-enabled change process that reconfigures capabilities and stakeholder interactions to produce public value; we also draw on Saeedikiya et al. (2024) [2] to foreground dynamic capabilities as enablers of such change in public-service contexts. Within this lens, our three dimensions map as follows: D1 (Breadth of the Divide) ↔ foundational access/skills capabilities; D2 (Sectoral/Specific Divide) ↔ reconfigured service processes and workflows; D3 (GenAI Gap) ↔ emergent capabilities and user–system interactions specific to Generative AI (GenAI)—namely, access, task use, and competence.
Prior research on the digital divide has predominantly examined access and skills, with sectoral studies of electronic government (e-government) highlighting persistent gaps in usage, competence, and trust. Far less is known about how GenAI-enabled interfaces and workflows (e.g., conversational form completion, eligibility guidance, support for appeals) alter these gaps—who benefits, who is left behind, and under which conditions inclusion improves or deteriorates. This paper addresses that gap by proposing and testing a three-dimensional framework for digital inclusion in e-government: Breadth of the Divide (D1), Sectoral/Specific Divide (D2), and the GenAI Gap (D3), capturing access, task use, and competence related to GenAI.
Research problem: How does the integration of GenAI in e-government affect digital inclusion when assessed jointly across D1–D3, and what measurement design makes such assessment transparent, comparable, and policy-relevant?
Research questions and hypotheses:
RQ1. Can a composite indicator consistently quantify D1–D3 while preserving interpretability for policy?
RQ2. How do D1 and D2 relate to the emergent D3 in an upper-middle-income setting?
H1. Higher GenAI access and task-specific competence are associated with narrower e-government usage and competence gaps (ceteris paribus).
H2. In the short run, insufficient GenAI literacy and uneven access widen disparities among vulnerable groups, even where basic access (D1) is high.
H3. The relationship between D3 and D2 is moderated by trust and identification frictions (e.g., electronic identification, eID), such that GenAI benefits concentrate among digitally confident users.
Note: The literature remains divided: one stream expects GenAI to lower interaction costs and barriers via assistive interfaces [3,4], while another anticipates a higher competence and trust threshold, risking a new axis of inequality [5,6]. Some reviews explicitly frame this tension as a “double-edged sword” [7,8]. Our design explicitly tests both possibilities.
Contributions:
  • We introduce the E-Government Divide Measurement Indicator (EGDMI) with explicit indicator lists, sources, units, transformations, and uniform normalization across D1–D3.
  • We replace ad hoc weights with a documented weighting strategy and provide sensitivity and uncertainty analyses.
  • We formally position D3 (GenAI Gap) as exploratory, given current data constraints, outline direct measures (access, task use, competence), and avoid its aggregation with D1–D2 where data do not warrant summation.
  • We deliver actionable implications by mapping inclusion barriers to concrete e-government workflows (identity proofing, eligibility assessment, form completion, and appeals). These contributions align with the journal’s scope on AI methods and impacts in public-service contexts, speaking directly to AI for digital government and governance.
Empirical setting and originality: Using official statistics and qualitative evidence for Serbia, we demonstrate how EGDMI reveals multi-layered exclusion in e-government and how GenAI may both lower interaction costs and raise competence/ethics thresholds. The proposed architecture, transparency checks, and reporting templates aim to standardize monitoring and enable cross-country learning.
Structure of the paper: Section 2 develops the theoretical foundation and defines D1–D3. Section 3 details the index architecture (indicators, sources, transformations, normalization, weighting, and sensitivity). Section 4 presents the methodology and data. Section 5 reports results for Serbia and a robustness suite. Section 6 discusses findings, limitations, and implications (ethical, practical, and theoretical). Section 7 concludes with policy pathways and a roadmap for future data collection on the GenAI Gap.
Principal conclusions (preview): Our preliminary evidence indicates layered exclusion in e-government; D3 (GenAI Gap) is presently best reported separately from D1–D2 due to data immaturity, with immediate implications for inclusive design, literacy programs, and staged policy adoption.

2. Theoretical Background

2.1. Digital Transformation in Public Services: A Socio-Technical Lens

Digital transformation (DT) in the public sector is not a technology roll-out but a socio-technical reconfiguration of processes, capabilities, and actor relationships that aims to create public value and reduce exclusion. Consistent with Saeedikiya et al. (2025) [1], we view DT as a technology-enabled, capability-driven change process that reshapes organizational routines and stakeholder interactions; Saeedikiya et al. (2024) [2] emphasize dynamic capabilities (sensing, seizing, reconfiguring) as enablers of sustained transformation in service settings. Applying this lens to public administration implies that inclusion outcomes depend jointly on (i) foundational capabilities (access, basic skills, affordability), (ii) service-specific process design (identification, eligibility, form completion, appeals, redress), and (iii) emergent AI-mediated interaction capabilities (the ability of users—and institutions—to work productively and safely with GenAI interfaces).
This framing establishes clear theoretical anchors for the three dimensions we study: D1 (Breadth) reflects foundational capabilities; D2 (Sectoral/Specific) captures service-process reconfiguration; and D3 (GenAI Gap) captures emergent interaction capabilities at the human–AI boundary.

2.2. Revisiting the Digital Divide: From Access to Outcomes

The digital divide literature has evolved from a first-level focus on access (infrastructure, devices) to second-level skills and uses, and third-level outcomes (who benefits, how much, and in what ways). This evolution from first-level (access) to second-level (skills/uses) and third-level (outcomes) divides is well documented in the digital divide literature [9]. In public services, these divides manifest as persistent gaps in usage, competence, and trust, even where connectivity is high. Recent e-government studies document that channel usage (online vs. phone/front-desk) and competence (e.g., eID use, completing multi-step transactions) remain uneven, with subjective non-use (perceived complexity, low need, mistrust) acting as an additional barrier. This literature motivates distinguishing baseline conditions (D1) from sector-specific adoption frictions (D2) that arise in concrete service journeys.

2.3. The Sector-Specific (E-Government) Divide

E-government embeds the general divide in process-intensive workflows (such as identity proofing, eligibility checks, form completion, payment, tracking, and appeals). Sectoral context matters: health, education, taxation, or licensing each imposes its own cognitive and procedural burdens, with varying requirements for identification, documentation, and temporal coordination. The literature consistently shows that even among the online population, a sizable share does not transact digitally with the government—due to skill gaps, low perceived utility, or trust and identification frictions (e.g., reluctance to use eID). This supports modeling a distinct D2 that (a) stratifies the online population into users vs. non-users of e-government, and (b) distinguishes competence (e.g., eID usage) from subjective non-use (e.g., “no need”).

2.4. Generative AI and the Emergence of a New Divide

Generative AI (GenAI) introduces conversational, assistance-oriented interfaces that can lower interaction costs (natural-language guidance, summarization, translation, form pre-fill), but also raise capability thresholds (prompting skill, verification literacy, understanding of data ethics, and provenance). The literature remains divided: some studies expect GenAI to lower interaction costs via assistive interfaces, while others anticipate higher competence and trust thresholds, potentially widening inequalities [3,4,6]. Two competing mechanisms therefore co-exist:
Barrier-reducing mechanism: GenAI can simplify navigation, comprehension, and completion of forms; support eligibility reasoning; and provide multilingual, accessibility-friendly assistance (audio/text, plain-language explanations).
Barrier-raising mechanism: Effective and safe use requires functional GenAI literacy (task framing, checking model outputs, handling identity data responsibly), domain knowledge (to validate content), and ethical competence (privacy, bias, attribution). Institutions must also implement guardrails.
Given these countervailing forces—and the current scarcity of direct measures of GenAI access, use, and competence in official statistics—D3 (GenAI Gap) should presently be treated as exploratory, reported and analyzed separately from D1–D2 unless commensurate measures are available. This stance aligns with the precautionary framing in recent AI governance and e-government literatures and minimizes over-claiming in composite aggregation [10,11,12,13].

2.5. Mapping D1–D3 to Digital Transformation Theory

Grounding the three dimensions in DT theory clarifies their distinct roles and testable mechanisms:
  • D1—Breadth of the Divide (Foundational capabilities).
    DT mapping: Sensing infrastructural gaps and seizing access/affordability interventions (devices, connectivity, training).
    Mechanism: Without baseline access and basic digital skills, subsequent service reforms will not translate into inclusion.
  • D2—Sectoral/Specific Divide (Service-process design).
    DT mapping: Reconfiguring processes and user journeys (identity proofing, eligibility, submissions, appeals), reducing procedural complexity and transaction costs.
    Mechanism: Even at high D1, poor service design (e.g., eID hurdles, opaque instructions) generates usage and competence gaps.
  • D3—GenAI Gap (Emergent interaction capabilities).
    DT mapping: New dynamic capabilities at the human–AI boundary—on both the citizen side (GenAI literacy, verification, ethics) and the administration side (safe orchestration, auditability).
    Mechanism: GenAI can reduce cognitive/linguistic barriers but increase verification and ethical burdens, potentially widening disparities if literacy programs and safeguards lag.
Building on Saeedikiya et al. (2024; 2025) [1,2], the three dimensions are mapped to the core digital transformation capabilities to clarify their conceptual boundaries and mechanisms (see Table 1).
Implementation note: Given current data limitations, we classify D3 as exploratory and recommend transparent reporting (indicator lists, sources, units, transformations) and uniform normalization across dimensions, while postponing aggregation of D3 with D1–D2 until direct, stable measures of GenAI access/use/competence become available.

2.6. Synthesis and Implications for Measurement (EGDMI)

The theoretical synthesis suggests three design imperatives for a measurement architecture:
  • Dimensional clarity and commensurability
    Report explicit indicator lists with sources, units, and transformations; apply uniform normalization across D1–D3 to support interpretability and comparability.
  • Transparent weighting with robustness checks
    Replace ad hoc weights with a documented scheme (expert elicitation, MCDA, or data-driven alternatives) and provide sensitivity/uncertainty analysis (e.g., ±10% weight shifts; bootstrapped confidence bands).
  • Precautionary treatment of D3
    Until direct measures are available, label D3 as exploratory and avoid summing with D1–D2; provide scenario- or dashboard-based reporting for D3 (access, task use, competence, verification, and ethics sub-domains).
This theoretical consolidation directly informs the design of the E-Government Divide Measurement Indicator (EGDMI) used in this study. It motivates our empirical choices in the next section (data, normalization, weighting, robustness).

3. Methodology

This study adopts a mixed-methods research design to examine the E-Government Divide in the context of Generative Artificial Intelligence (GenAI). The methodology integrates quantitative and qualitative components to ensure comprehensive measurement, interpretation, and validation of the E-Government Divide Measurement Indicator (EGDMI). The approach aligns with the best international practices in composite index development, including the European Commission’s DESI [14], the UN E-Government Survey (UN-EGDI) [15], the ITU IDI framework [16], and the OECD digital measurement standards [17]. It is adapted to the specific context of GenAI-enabled public services.

3.1. Basic Indicator Setting

Building on the conceptual framework presented in Section 2 and its mapping to Digital Transformation theory (see Table 1), the EGDMI serves as a structured measurement architecture composed of three dimensions:
  • D1—Breadth of the Divide (Basic Digital Divide): foundational access, affordability, and basic digital skills enabling digital participation.
  • D2—Sectoral/Specific E-Government Divide: actual use and competence within e-government workflows (eID, identity proofing, eligibility, submissions, appeals) and reasons for non-use.
  • D3—GenAI Gap (Exploratory): emerging disparities in access to, task use of, and competence with GenAI tools in public service contexts, including verification and ethical literacy.
EGDMI is operationalized into two composite sub-indices (D1 and D2), while D3 is computed and reported separately as an exploratory layer until direct, stable measures of GenAI access and competence become available. Qualitative insights from focus groups inform indicator selection, interpretation, and policy implications but are not numerically aggregated with quantitative scores. The overall research design follows established mixed-methods guidance to integrate quantitative measurement with qualitative interpretation [18].

3.2. Algorithms and Computation of the EGDMI

The algorithm for constructing the EGDMI follows established guidelines for composite indicators (OECD/JRC Handbook) and ensures consistency across dimensions through uniform normalization, expert-derived weights, and robustness testing [12].
Step 1—Indicator definition and mapping:
Identify candidate indicators x_{n,k} for each dimension n ∈ {1, 2, 3}, mapped to conceptual constructs in Table 1, with directionality adjusted so that higher values denote greater inclusion.
Step 2—Data screening:
Check coverage, timeliness, and reliability; exclude indicators with insufficient quality or missing data.
Step 3—Pre-processing:
Standardize units, apply necessary transformations (e.g., reverse coding for negative indicators), and document all processing steps [12].
Step 4—Normalization (uniform):
All indicators are scaled to [0,1] using min–max normalization:
x′_{n,k} = (x_{n,k} − min(x_{n,k})) / (max(x_{n,k}) − min(x_{n,k}))
(Uniform application per OECD/JRC guidance) [12].
Step 5—Weighting for D1 and D2 (expert elicitation):
Weights w_{n,k} are derived via expert elicitation from five specialists in digital governance, statistics, and AI policy. Experts rated the relevance of each indicator on a 1–5 scale; the average ratings were normalized so that Σ_k w_{n,k} = 1 per dimension. A Delphi-style consensus process enhances transparency and avoids ad hoc choices [19].
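As a minimal illustration of this step (the ratings below are invented for the example, not the study's actual elicitation data), averaging the expert scores and normalizing them yields the indicator weights:

```python
import numpy as np

# Five experts rate each indicator's relevance on a 1-5 scale (Step 5).
# Ratings are illustrative only; rows = experts, columns = indicators.
ratings = np.array([[5, 4, 4, 2],
                    [4, 5, 3, 1],
                    [5, 4, 4, 2],
                    [4, 4, 3, 1],
                    [5, 5, 4, 1]])
weights = ratings.mean(axis=0)
weights = weights / weights.sum()   # normalize so the weights sum to 1 per dimension
print(weights.round(3))
```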
Step 6—Computation of dimension scores:
For D1 and D2, sub-indices are calculated as weighted sums of normalized indicators:
D_n = Σ_k w_{n,k} · x′_{n,k},  n ∈ {1, 2}.
Each dimension is then rescaled to a 0–100 range for interpretability [12].
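A compact sketch of Steps 4–6 in Python, assuming indicator values are arranged as a units × indicators matrix; the values and the helper names (min_max_normalize, dimension_scores) are illustrative, not part of the published tooling:

```python
import numpy as np

def min_max_normalize(X):
    """Column-wise min-max scaling to [0, 1] (Step 4).
    X: units x indicators matrix (rows could be regions, groups, or survey waves)."""
    X = np.asarray(X, dtype=float)
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    return (X - col_min) / (col_max - col_min)

def dimension_scores(X, weights):
    """Weighted sum of normalized indicators per unit, rescaled to 0-100 (Steps 5-6).
    Expert-derived weights are renormalized so they sum to 1 within the dimension."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 100.0 * min_max_normalize(X) @ w

# Illustrative call with invented values for three units and four D1-type indicators:
X_d1 = np.array([[97.3, 85.6, 70.7, 10.7],
                 [88.0, 72.0, 55.0,  8.0],
                 [60.0, 50.0, 30.0,  2.0]])
print(dimension_scores(X_d1, [0.4, 0.3, 0.3, 0.1]).round(1))
```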
Step 7—GenAI Gap (D3):
D3 is treated as an exploratory index capturing preliminary measures of GenAI access, usage, and competence. Its indicators are normalized and summarized descriptively, but not aggregated with D1 and D2. Interpretation draws on emerging AI literacy guidance to contextualize capabilities at the human–AI boundary [20].
Step 8—Robustness and uncertainty analysis:
To test stability, local sensitivity checks vary each w_{n,k} by ±10% (with renormalization) and recompute D1 and D2. Absence of rank reversals or material score shifts indicates robustness. The approach follows good practice in global sensitivity analysis [21].
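A sketch of the ±10% weight-perturbation check in Step 8, using the same matrix layout as above (illustrative code, not the authors' implementation):

```python
import numpy as np

def perturbed(weights, k, pct):
    """Shift weight k by pct (e.g., +0.10 or -0.10) and renormalize to sum to 1."""
    w = np.asarray(weights, dtype=float).copy()
    w[k] *= (1.0 + pct)
    return w / w.sum()

def weight_sensitivity(X_norm, weights, pct=0.10):
    """Recompute dimension scores under +/-10% shifts of each weight and report
    the largest absolute change; small changes and unchanged rankings indicate robustness."""
    w0 = np.asarray(weights, dtype=float)
    base = 100.0 * X_norm @ (w0 / w0.sum())
    shifts = []
    for k in range(len(weights)):
        for sign in (+pct, -pct):
            scores = 100.0 * X_norm @ perturbed(weights, k, sign)
            shifts.append(np.max(np.abs(scores - base)))
    return float(max(shifts))
```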
Step 9—Validity and reliability assessment:
Three tests were conducted:
  • Content validity: expert review of indicator coverage and conceptual relevance [12].
  • Construct validity: Spearman correlations among D1 and D2 sub-indices to confirm expected relationships [12].
  • Reliability: Cronbach’s alpha for multi-item constructs, with psychometric benchmarks from measurement literature [22,23].
Given its exploratory status, D3 was excluded from reliability testing.
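For the reliability and construct-validity checks in Step 9, a minimal sketch assuming respondent-level item data are available as a respondents × items matrix (scipy is assumed only for the rank correlation):

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# Construct validity: Spearman rank correlation between D1 and D2 sub-index
# scores computed over the same units (arrays d1_scores and d2_scores):
# rho, p_value = spearmanr(d1_scores, d2_scores)
```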
Step 10—User Segmentation (Cluster Analysis): To move beyond generalized findings and identify distinct citizen profiles based on their inclusion levels, a cluster analysis was performed. We employed a K-means clustering algorithm, a non-hierarchical partitioning method, using the ten normalized sub-dimension scores (D11–D33, as presented in Table 2) as input variables. The number of clusters was determined with the ‘elbow’ method, which identifies the point of diminishing returns in the within-cluster sum of squares (WCSS) and indicated four clusters. The four-cluster solution was further validated for robustness by its Silhouette coefficient, which confirmed good cluster cohesion and separation. The final clusters, presented in Section 4.5, were confirmed to be statistically distinct.
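The segmentation step can be reproduced in outline with scikit-learn; the function below is a sketch under the assumption that the normalized sub-dimension scores are available per respondent (the variable names are ours):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def segment_users(Z, k_range=range(2, 9), chosen_k=4, seed=42):
    """K-means segmentation over normalized sub-dimension scores (Step 10).
    Returns WCSS per k (for the elbow plot), the silhouette coefficient of the
    chosen solution, and the cluster labels. Z: respondents x sub-dimensions."""
    wcss = {k: KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Z).inertia_
            for k in k_range}
    model = KMeans(n_clusters=chosen_k, n_init=10, random_state=seed).fit(Z)
    sil = silhouette_score(Z, model.labels_)
    return wcss, sil, model.labels_
```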

3.3. Data Sources and Qualitative Component

The foundation for the quantitative analysis is data from the official survey “Usage of Information and Communication Technologies in the Republic of Serbia, 2023” (SORS 2023), conducted by the Statistical Office of the Republic of Serbia (SORS). This research was conducted as a CATI (telephone) survey on a two-phase, stratified sample comprising 2800 households and 2800 individuals. The target population included all individuals aged 16 to 74 and households with at least one member in that age range. The reference period for household and individual data was the three months preceding the interview.
This dataset provided the primary indicators for the following:
  • D1 (Breadth of the Divide): data on device access, connectivity, affordability, and basic digital skills.
  • D2 (E-Government Divide): behavioral data on service use, barriers, and eID (electronic identification) adoption.
Data for D3 (GenAI Gap), which is exploratory in nature, were derived from pilot CATI and online surveys on GenAI use, providing proxy measures for access, task use, and verification literacy.
To complement the quantitative findings, test the proposed framework, and clarify methodological approaches, qualitative research was conducted from October to November 2023. This research involved two focus groups (N = 8 and N = 9) with participants representing diverse demographic profiles (age, gender, education level, employment status, and living environment). Each session lasted 90 min.

3.4. Computation and Reporting

Composite scores for D1 and D2 were aggregated as weighted averages of normalized indicators and rescaled to a 0–100 range [12]. D3 is presented side-by-side as a separate layer to highlight emerging AI inequalities without inflating composite values [20]. This presentation supports policy relevance while maintaining methodological transparency and comparability over time.

3.5. Process Overview

The overall methodological process, summarized in Figure 1, followed a structured seven-step sequence. This approach integrates conceptualization (mapping D1–D3 to DT theory) with empirical validation and transparent reporting.
The process was guided by established international best practices for measuring digital development [14,15,16,17] and relies on the OECD/JRC Handbook [12] for index construction, normalization, and expert-based weighting [19]. The validation steps align with standards for reliability [22,23], and the sensitivity analysis [21], while the exploratory treatment of D3 is based on AI guidelines [20].
The following section further clarifies the key terminology used within this methodological framework.

3.6. Note on Terminology

Throughout the paper, EGDMI refers to the overall measurement architecture. In the current release, the index produces two composite scores (D1 and D2) and an exploratory GenAI layer (D3), reflecting data availability and theoretical maturity. Future iterations will enable complete aggregation once reliable GenAI measures become available, ensuring continuity with global digital inclusion metrics [12,20].

4. Results

This section presents the empirical findings of the E-Government Divide Measurement Indicator (EGDMI). Results are structured as follows:
(i) descriptive findings across sub-dimensions of D1–D2;
(ii) composite index scores for D1 and D2;
(iii) exploratory results for D3 (GenAI Gap);
(iv) validation and robustness checks;
(v) user segmentation based on cluster analysis.

4.1. Descriptive Results Across D1–D2

Table 2 provides an overview of scores for all ten sub-dimensions. The results indicate relatively strong foundational digital conditions (D1), contrasted with significantly lower engagement with e-government services (D2), and early-stage, uneven adoption of GenAI tools (D3).
Policy implication: Despite strong digital access, e-government engagement remains low, suggesting that the primary barriers are not connectivity but service design, motivation, and trust.
To better illustrate the distribution of performance across the ten sub-dimensions, Figure 2 visualizes the relative strengths and weaknesses across D1, D2, and D3.
As shown in Figure 2, the scores for foundational conditions (D1) are high, particularly in Internet access (D12 = 85.6) and Access to Technology (D11 = 73.4). Conversely, the Support & Training score (D14 = 10.7) is critically low. The E-Government (D2) dimensions confirm this gap: while barriers like Subjective Non-use (D2.4 = 53.7) are high, actual uptake, such as E-Government Users (D2.1 = 34.2) and eID Competence (D2.3 = 29.0), is very low. This confirms that the core bottleneck is no longer access, but skills, support, and motivation.

4.2. Composite Index Results for D1 and D2

Composite index scores show a notable divergence between general digital readiness (D1) and the actual adoption of e-government services (D2). D1 reaches a relatively high level (73.6), while D2 remains critically low (19.9), indicating a significant conversion gap between capability and usage (Table 3).
Policy implication: The EGDMI core confirms a structural shift: the digital divide has moved from “basic access” toward service interaction, usability, and trust barriers. Policy should now focus on human-centered service redesign, rather than infrastructure alone.
To compare overall digital readiness with the actual adoption of e-government services, Figure 3 presents the composite scores for D1 and D2.
Figure 3 clearly shows a significant gap between general digital readiness (D1) and e-government usage (D2). Despite strong foundational conditions (D1 = 73.6), user uptake of e-government remains critically low (D2 = 19.9). This confirms that digital inclusion efforts must now shift from infrastructure and skills toward improving service usability, perceived value, and trust.

4.3. GenAI Gap (D3)—Exploratory Results

D3 results are presented separately due to their exploratory status and the limited availability of standardized indicators. The GenAI Gap average score is 43.6, reflecting early adoption patterns and high inequality across socio-demographic groups (Table 4).
Policy implication: Without early intervention, GenAI may widen existing divides, reinforcing inequalities in access, comprehension, and benefit from digital public services. Early investment in GenAI literacy and verification skills is essential.
To provide deeper insights into the exploratory GenAI Gap dimension, Figure 4 displays the scores for its three sub-dimensions.
As illustrated in Figure 4, GenAI adoption is at an early stage, with modest levels of access (D31 = 47.8) and limited task diversity (D32 = 39.4). Competence and verification literacy (D33 = 43.6) remain underdeveloped, indicating that many users lack the skills to assess AI-generated content critically. These results highlight the importance of early investments in GenAI literacy to prevent the deepening of existing disparities.

4.4. Validity, Reliability, and Robustness Checks

The results of validity and robustness tests are provided in Table 5. All metrics meet accepted thresholds, indicating that the core index (D1 and D2) is methodologically sound and stable for policy use.
Interpretation: EGDMI (core) demonstrates robust internal structure. D3 remains exploratory and should not yet be aggregated with D1 and D2 until direct measures of GenAI are standardized.

4.5. User Segmentation—Cluster Analysis

Cluster analysis was used to segment the population into digital & GenAI inclusion profiles. Four clusters emerged, enabling targeted intervention design (Table 6).
Policy implication: Interventions must be tailored:
  • C1 → inclusion & access;
  • C2 → motivation & awareness;
  • C3 → trust and service redesign;
  • C4 → co-creation of AI-assisted public services.
To support targeted policy and intervention design, Figure 5 maps four citizen clusters across the three main dimensions (D1–D3).
Figure 5 reveals four distinct digital inclusion profiles, with substantial variation in readiness and in the benefits derived from digital and GenAI-assisted services. While “GenAI-Augmented Citizens” (C4) are positioned to benefit most, “Digitally Excluded” (C1) and “Basic Digital Users” (C2) remain at risk of being left behind. This segmentation highlights the need for differentiated policy approaches tailored to each cluster rather than one-size-fits-all interventions.

4.6. Limitations of Interpretation

These results should be interpreted with caution due to the exploratory nature of GenAI indicators (D3) and limited longitudinal data. While D1 and D2 are statistically reliable, D3 requires further empirical development and should be monitored over time.

5. Discussion

This study examined the multidimensional nature of the e-government divide in the context of rapid advancements in Generative AI (GenAI), positioning the findings within the Digital Transformation (DT) theoretical lens proposed by Saeedikiya et al. [1,2]. The results offer important insights into how digital inclusion dynamics are shifting from basic access and skills toward more complex forms of interaction, trust, and algorithmic literacy. This discussion synthesizes the key findings, explains their theoretical and practical significance, and reflects on the implications for inclusive digital government in the GenAI era (additional detail is provided in Appendix A).

5.1. Interpreting the Findings Through Digital Transformation Theory

The DT framework distinguishes between capabilities required for sensing opportunities, seizing them through redesign, and reconfiguring processes to sustain value. Our results reveal an apparent misalignment across these stages. First, the high score in D1 indicates that foundational capabilities—access, connectivity, and basic digital skills—are mainly in place, consistent with the idea that access barriers have decreased over time while usage and skills disparities persist (the “second-level” or deepening divide) [24].
However, the very low score in D2 indicates that these capabilities are not translating into effective service use—aligning with the e-government adoption literature, which highlights perceived usefulness, ease of use, compatibility, and trust as decisive factors [25].
The emerging D3 illustrates the onset of a new capability layer—AI-assisted interaction—consistent with recent public-sector AI syntheses that emphasize new competencies and governance challenges introduced by AI in government [26].

5.2. Where the Divide Now Resides: From Access to Interaction and Agency

Findings confirm a structural shift from “access & devices” toward service interaction and agency. Three mechanisms dominate: (1) cognitive/procedural complexity of services; (2) low institutional trust; and (3) unclear value propositions—mechanisms already flagged in e-government adoption and trust studies [25].
This aligns with broader DT insights that transformation in the public sector requires human-centered service redesign and organizational reconfiguration, rather than just technology rollout [27].

5.3. Implications for Inclusive Digital Government in the GenAI Era

Theoretical: Our separation of foundational (D1), behavioral/service (D2), and AI-assisted (D3) divides updates the classic digital divide theory (access → skills/usage) for the GenAI phase [28,29,30].
Policy: Prioritize simplification and trust-by-design in e-government; complement with literacy programs that include algorithmic awareness and verification competence—directions echoed in public-sector AI reviews and practice frameworks [26].
Equity: GenAI can lower cognitive and linguistic barriers, but without safeguards, may entrench inequalities—hence the need for ethical guidelines and continuous oversight in public services [31,32].

5.4. The GenAI Paradox: Inclusion Booster or Exclusion Amplifier?

GenAI can simplify bureaucratic processes and assist vulnerable users, yet those lacking verification literacy risk “automated exclusion.” This duality mirrors the public-sector AI literature, which documents both opportunities and risks and calls for staged, responsible deployment [26].
A pragmatic path is “literacy first, automation second”, consistent with ethical frameworks arguing that legality is necessary but insufficient and that proactive ethics improves societal outcomes [31].

5.5. Alternative Interpretations of the Results

(1) Low e-government uptake may reflect satisfaction with offline channels rather than barriers; (2) cultural preferences for interpersonal interaction can outweigh perceived efficiency; (3) GenAI gaps may be transitory, reflecting early diffusion rather than structural inequality—each of which is compatible with established adoption and trust perspectives in the literature [25].

6. Conclusions

This study examined the evolving nature of the digital divide in the era of Generative AI (GenAI), with a focus on the e-government context. By developing and applying the E-Government Divide Measurement Indicator (EGDMI), the research provided a comprehensive assessment across three dimensions: the Basic Digital Divide (D1), the E-Government Divide (D2), and the GenAI Gap (D3). The findings show that while digital access and basic skills are relatively strong, e-government adoption remains low, and GenAI-related capabilities are still emerging and unevenly distributed. Based on the evidence, the initial hypothesis—that the introduction of GenAI may amplify the digital divide—was partially confirmed: GenAI holds significant potential to support inclusion, but also carries a clear risk of widening inequalities if safeguards and literacy efforts are not prioritized.

6.1. Key Contributions

This study offers three key contributions. First, it updates the conceptual understanding of the digital divide by expanding it beyond access and skills to include service interaction and GenAI-assisted capabilities. Second, it develops the EGDMI as a multidimensional measurement framework, enabling clearer distinctions among foundational readiness (D1), behavioral use of public digital services (D2), and emerging AI-related capabilities (D3). Third, it provides empirical evidence from Serbia that illustrates a structural shift—from digital inequality based on access to one based on meaningful use, agency, and algorithmic literacy.

6.2. Practical and Policy Implications

The results highlight a need for a strategic shift in digital inclusion policies. Rather than focusing primarily on infrastructure and basic digital skills, governments should prioritize human-centered redesign of public services to strengthen usability, trust, and perceived value. GenAI should be introduced gradually, supported by targeted GenAI literacy programs that build citizens’ ability to assess and use AI-generated outputs safely and critically. Furthermore, differentiated policy approaches are required: initiatives should be tailored to distinct user groups—from digitally excluded citizens to advanced GenAI users—to prevent further stratification and unlock inclusive public value.

6.3. Limitations and Directions for Future Research

This research has several limitations that should be considered when interpreting the findings. The GenAI dimension (D3) remains exploratory due to the limited availability of standardized indicators and behavioral data. The study is cross-sectional, capturing a snapshot in time; future research should adopt a longitudinal approach to observe changes as GenAI integration in public services evolves. Expanding the model to other sectors (e.g., health, education, justice) and comparing results across countries would further validate the EGDMI and support the development of international benchmarks. Finally, future work should integrate qualitative user research to better understand motivational, cultural, and behavioral factors shaping meaningful digital participation.

7. Policy Recommendations

Based on the findings of this study, a set of targeted policy recommendations is proposed to support inclusive digital transformation and mitigate the risk of widening digital inequities in the age of Generative AI. The recommendations are structured around the three dimensions of the EGDMI—D1, D2, and D3—and tailored to key stakeholder groups, ensuring actionable, differentiated interventions.

7.1. Strengthening Foundational Digital Inclusion (D1)

Short term (0–12 months)
  • Expand access to affordable internet and devices through targeted subsidies for vulnerable groups (low-income households, rural communities, elderly citizens).
  • Integrate basic digital skills modules into community centers, libraries, and employment bureaus, with simple, scenario-based learning pathways.
Medium term (1–3 years)
  • Develop national digital inclusion curricula aligned with the European Digital Competence Framework (DigComp) to standardize skills development across regions and education providers.
  • Introduce incentive schemes for telecom operators to invest in underserved areas, ensuring minimum service quality levels.
Long term (3–5 years)
  • Institutionalize digital inclusion programs as part of social protection policies, recognizing digital access and skills as essential public goods.

7.2. Increasing Meaningful Use and Trust in E-Government (D2)

Short term (0–12 months)
  • Conduct a usability redesign of priority e-government services (ID, certificates, taxation, social benefits) guided by human-centered design principles.
  • Implement “assisted e-government service points” in municipalities and post offices to support users with low digital confidence.
Medium term (1–3 years)
  • Develop personalized e-government onboarding journeys for different user groups (youth, seniors, parents, entrepreneurs), improving perceived relevance and value.
  • Strengthen transparency and communication campaigns to build trust, including clear explanations of data usage, privacy safeguards, and complaint/appeals mechanisms.
Long term (3–5 years)
  • Transition from administrative-centered to citizen-journey-centered service design, integrating life-event-based bundles (e.g., “birth of a child”, “starting a business”, “retirement”).

7.3. Responsible and Inclusive Integration of Generative AI (D3)

Short term (0–12 months)
  • Launch GenAI literacy programs for citizens, focusing on verification skills, bias awareness, and safe use of AI-generated information.
  • Introduce clear guidelines for the ethical and transparent use of GenAI in public administration, including required human oversight and auditing procedures.
Medium term (1–3 years)
  • Deploy GenAI-based assistants for e-government services as optional aids—not replacements—ensuring continued support for low-literacy users and preserving the choice of non-AI channels.
  • Establish a national GenAI Observatory to monitor usage patterns, risks, digital disparities, and societal impact.
Long term (3–5 years)
  • Integrate explainable AI (XAI) principles into public AI systems, ensuring that citizens can understand and contest AI-assisted decisions affecting their rights.
  • Embed human-AI interaction and verification literacy into school and adult education systems, fostering long-term equitable participation.

7.4. Targeted Interventions by Stakeholder Group

Table 7 shows targeted interventions by stakeholder groups.

7.5. Linking EGDMI to Policy Planning and Monitoring

The EGDMI can serve as a continuous monitoring tool for national and regional governments. It is recommended to:
  • Track D1–D3 annually to measure progress and disparities.
  • Use cluster profiles to target interventions proportionally to citizens’ needs.
  • Report EGDMI scores transparently to inform public debate and evidence-based policymaking.
Applying the EGDMI in this manner transforms it from a static measurement tool into an active instrument for guiding inclusive digital policy.

Author Contributions

Conceptualization, S.R. and D.V.; methodology, D.V.; software, S.R.; validation, S.R. and D.V.; formal analysis, D.V.; investigation, S.R.; resources, S.R.; data curation, D.V.; writing—original draft preparation, S.R.; writing—review and editing, D.V.; visualization, S.R.; supervision, D.V.; project administration, D.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GenAI | Generative AI
EGDMI | E-Government Divide Measurement Indicator
DT | Digital transformation
SORS | Statistical Office of the Republic of Serbia
eID | electronic identification

Appendix A

This appendix provides the detailed data, weighting, and calculation steps used to derive the composite scores for D1 (Basic Digital Divide) and D2 (E-Government Divide), as presented in Section 4.

Appendix A.1. Calculation of D1—Basic Digital Divide (Score: 73.6)

The D1 score is calculated as the weighted sum of four sub-dimensions (D11 to D14). The component indicators are derived from the official SORS 2023 survey.
Step 1: Component Indicator Derivation
Scores for sub-dimensions D11 and D13 are calculated as weighted composites of their underlying indicators. Scores for D12 and D14 are single-indicator values from the survey data. The weights for these component indicators were determined through the expert elicitation process described in Section 3.2.
Step 2: Final Composite Score for D1
The final D1 score is calculated by applying the expert-derived weights (W) for each of the four main sub-dimensions to the resulting scores from Table A1.
Table A1. Component Indicator Calculations for D1.
Code | Sub-Dimension | Component Indicators (from SORS 2023 Survey) | Weights (Expert Elicitation) | Calculation | Result
D11 | Access to Technology | %TV (97.3); %PC (75.9); %Laptop (55.0); %Mobile (94.4) | 0.1; 0.3; 0.4; 0.2 | (0.1 × 97.3) + (0.3 × 75.9) + (0.4 × 55.0) + (0.2 × 94.4) | 73.38
D12 | Internet access | % Households with Internet (85.6) | 1.0 | 85.6 × 1.0 | 85.60
D13 | Digital Literacy | % Move files (70.7); % Install software (60.9); % Configure settings (39.6) | 0.4; 0.3; 0.3 | (0.4 × 70.7) + (0.3 × 60.9) + (0.3 × 39.6) | 58.43
D14 | Support & Training | % Used online courses (10.7) | 1.0 | 10.7 × 1.0 | 10.70
The symbol “+” indicates that the values shown are summed together to obtain the final result.
Formula:
D1 = (D11 × W_D11) + (D12 × W_D12) + (D13 × W_D13) + (D14 × W_D14)
Weights:
  • W_D11 (Access) = 0.4
  • W_D12 (Internet) = 0.3
  • W_D13 (Literacy) = 0.3
  • W_D14 (Support) = 0.1
Calculation:
D1 = (73.38 × 0.4) + (85.60 × 0.3) + (58.43 × 0.3) + (10.70 × 0.1)
D1 = 29.35 + 25.68 + 17.53 + 1.07 = 73.63
Final Score (reported in text): 73.6
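The arithmetic above can be checked with a few lines of Python (values copied from Table A1):

```python
# Verify the D1 composite from the sub-dimension scores and weights in Table A1
sub_scores = {"D11": 73.38, "D12": 85.60, "D13": 58.43, "D14": 10.70}
weights    = {"D11": 0.4,   "D12": 0.3,   "D13": 0.3,   "D14": 0.1}
d1 = sum(sub_scores[c] * weights[c] for c in sub_scores)
print(round(d1, 2))   # 73.63, reported as 73.6
```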

Appendix A.2. Calculation of D2—E-Government Divide (Score: 19.9)

The D2 score is calculated as the weighted sum of four key indicators. One indicator (D2.4) is treated as a negative barrier and is therefore subtracted. Weights were determined by the expert elicitation process.
Note on D2.1: The value 34.20 is derived by adjusting the 40.1% of e-gov users for the total online population (40.1 × 0.854 = 34.20).
Step 2: Final Composite Score for D2 The final D2 score is calculated by applying the expert-derived weights to the component indicator values from Table A2.
Table A2. Component indicators for D2.
Code | Indicator | Survey Data | Weight (Expert Elicitation) | Value (Score)
D2.1 | E-Government Users (adjusted) | 40.1% of online pop. | 35% (0.35) | 34.20
D2.2 | Lack of Skills | 20.2% of online non-users | 30% (0.30) | 20.20
D2.3 | eID Competence | 29.0% of online pop. | 25% (0.25) | 29.00
D2.4 | Subjective Non-use | 53.7% of online non-users | −10% (−0.10) | 53.70
Formula:
D2 = (D2.1 × W_1) + (D2.2 × W_2) + (D2.3 × W_3) − (D2.4 × W_4)
Calculation:
D2 = (34.20 × 0.35) + (20.20 × 0.30) + (29.00 × 0.25) − (53.70 × 0.10)
D2 = 11.97 + 6.06 + 7.25 − 5.37 = 19.91
Final Score (reported in text): 19.9
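The same check for D2, with the subjective non-use barrier entering through a negative weight (values copied from Table A2):

```python
# Verify the D2 composite; D2.4 is subtracted via its negative weight
values  = {"D2.1": 34.20, "D2.2": 20.20, "D2.3": 29.00, "D2.4": 53.70}
weights = {"D2.1": 0.35,  "D2.2": 0.30,  "D2.3": 0.25,  "D2.4": -0.10}
d2 = sum(values[c] * weights[c] for c in values)
print(round(d2, 2))   # 19.91, reported as 19.9
```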

References

  1. Saeedikiya, M.; Salunke, S.; Kowalkiewicz, M. The nexus of digital transformation and innovation: A multilevel framework and research agenda. J. Innov. Knowl. 2025, 10, 100640. [Google Scholar] [CrossRef]
  2. Saeedikiya, M.; Salunke, S.; Kowalkiewicz, M. Toward a dynamic capability perspective of digital transformation in SMEs: A study of the mobility sector. J. Clean. Prod. 2024, 439, 140718. [Google Scholar] [CrossRef]
  3. Bendel, O. How Can Generative AI Enhance the Well-Being of Blind? arXiv 2024, arXiv:2402.07919. [Google Scholar] [CrossRef]
  4. Leporini, B.; Buzzi, M.; Della Penna, G. A Preliminary Evaluation of Generative AI Tools for Blind Users: Usability and Screen Reader Interaction. In Proceedings of the 18th International Conference on Pervasive Technologies Related to Assistive Environments, Corfu Island, Greece, 25–27 June 2025; ACM: New York, NY, USA, 2025. [Google Scholar] [CrossRef]
  5. Daschner, S.; Obermaier, R. Algorithm Aversion? On the Influence of Advice Accuracy on Trust in Algorithmic Advice. J. Decis. Syst. 2022, 31, 77–97. [Google Scholar] [CrossRef]
  6. OECD. Job Creation and Local Economic Development 2024: The Geography of Generative AI; OECD Publishing: Paris, France, 2024. [Google Scholar] [CrossRef]
  7. Park, H.E. The Double-Edged Sword of Generative Artificial Intelligence in Digitalization: An Affordances and Constraints Perspective. Psychol. Mark. 2024, 41, 2924–2941. [Google Scholar] [CrossRef]
  8. Almirall, E. Generative AI and Public Administration. Medium 2023. Available online: https://medium.com/@ealmirall/generative-ai-and-public-administration-d16be1990d6a (accessed on 1 November 2025).
  9. Scheerder, A.; Van Deursen, A.; Van Dijk, J. Determinants of Internet skills, uses and outcomes. A systematic review of the second-and third-level digital divide. Telemat. Inform. 2017, 34, 1607–1624. [Google Scholar] [CrossRef]
  10. OECD. OECD Principles on Artificial Intelligence; OECD Publishing: Paris, France, 2019; Available online: https://oecd.ai/en/ai-principles (accessed on 1 November 2025).
  11. Regulation (EU) 2024/1689. Artificial Intelligence Act (AI Act). Official Journal of the European Union. 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 1 November 2025).
  12. Nardo, M.; Saisana, M.; Saltelli, A.; Tarantola, S.; Hoffman, A.; Giovannini, E. Handbook on Constructing Composite Indicators: Methodology and User Guide; OECD Publishing: Paris, France, 2008. [Google Scholar] [CrossRef]
  13. Greco, S.; Ishizaka, A.; Tasiou, M.; Torrisi, G. On the Methodological Framework of Composite Indices: A Review of the Issues of Weighting, Aggregation, and Robustness. Soc. Indic. Res. 2019, 141, 61–94. [Google Scholar] [CrossRef]
  14. European Commission. Digital Economy and Society Index (DESI) 2022; Publications Office of the European Union: Luxembourg, 2022. Available online: https://digital-strategy.ec.europa.eu/en/policies/desi (accessed on 30 October 2025).
  15. United Nations Department of Economic and Social Affairs (UN DESA). United Nations E-Government Survey 2022: The Future of Digital Government; United Nations: New York, NY, USA, 2022. Available online: https://publicadministration.un.org/egovkb/en-us/reports/un-e-government-survey-2022 (accessed on 30 October 2025).
  16. International Telecommunication Union (ITU). Measuring the Information Society Report 2017, Volume 1—Annex 1: ICT Development Index Methodology; ITU: Geneva, Switzerland, 2017; Available online: https://www.itu.int/en/itu-d/statistics/Documents/publications/misr2017/misr2017_volume1.pdf (accessed on 30 October 2025).
  17. OECD. Measuring the Digital Transformation: A Roadmap for the Future; OECD Publishing: Paris, France, 2019. [Google Scholar] [CrossRef]
  18. Creswell, J.W.; Plano Clark, V.L. Designing and Conducting Mixed Methods Research, 3rd ed.; SAGE: Thousand Oaks, CA, USA, 2018. [Google Scholar]
  19. Hsu, C.-C.; Sandford, B.A. The Delphi Technique: Making Sense of Consensus. Pract. Assess. Res. Eval. 2007, 12, 10. [Google Scholar] [CrossRef]
  20. UNESCO. Guidance for Generative AI in Education and Research; UNESCO: Paris, France, 2023; ISBN 978-92-3-100612-8. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 30 October 2025).
  21. Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.; Gatelli, D.; Saisana, M.; Tarantola, S. Global Sensitivity Analysis: The Primer; John Wiley & Sons: Chichester, UK, 2008. [Google Scholar] [CrossRef]
  22. Cronbach, L.J. Coefficient Alpha and the Internal Structure of Tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef]
  23. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  24. Hargittai, E. Second-Level Digital Divide: Differences in People’s Online Skills. First Monday 2002, 7. Available online: https://firstmonday.org/ojs/index.php/fm/article/view/942 (accessed on 30 October 2025). [CrossRef]
  25. Carter, L.; Bélanger, F. The Utilization of E-Government Services: Citizen Trust, Innovation and Acceptance Factors. Inf. Syst. J. 2005, 15, 5–25. [Google Scholar] [CrossRef]
  26. Zuiderwijk, A.; Chen, Y.-C.; Salem, F. Implications of the Use of Artificial Intelligence in Public Governance: A Systematic Literature Review and a Research Agenda. Gov. Inf. Q. 2021, 38, 101577. [Google Scholar] [CrossRef]
  27. Mergel, I.; Edelmann, N.; Haug, N. Defining Digital Transformation: Results from Expert Interviews. Gov. Inf. Q. 2019, 36, 101385. [Google Scholar] [CrossRef]
  28. van Dijk, J.A.G.M. The Deepening Divide: Inequality in the Information Society; SAGE: London, UK, 2005. [Google Scholar]
  29. van Dijk, J. The Digital Divide; Polity: Cambridge, UK, 2020. [Google Scholar]
  30. Bannister, F.; Connolly, R. Trust and Transformational Government: A Proposed Framework for Research. Gov. Inf. Q. 2011, 28, 137–147. [Google Scholar] [CrossRef]
  31. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
  32. Wirtz, B.W.; Weyerer, J.C.; Geyer, C. Artificial Intelligence and the Public Sector—Applications and Challenges. Int. J. Public Adm. 2019, 42, 596–615. [Google Scholar] [CrossRef]
Figure 1. Methodological process overview.
Figure 2. Sub-dimensions across D1 and D2.
Figure 3. Composite results: D1 vs. D2.
Figure 4. D3: GenAI Gap—sub-dimension results (bar chart of D31–D33).
Figure 5. Cluster profiles across D1–D3.
Table 1. Summary of D1–D3: Definitions, Theoretical Basis, and Key References.
Dimension | Definition (This Study) | Theoretical Basis (DT Mapping) | Typical Indicators (Examples) | Conceptual Foundations
D1: Breadth of the Divide | Foundational disparities in access, affordability, and basic digital skills | Sensing/seizing foundational capabilities; socio-technical prerequisites for inclusion | Device ownership; internet access; basic skills indices; availability of training | Digital divide (1st/2nd level); DT capability view; inclusion policy; official statistics
D2: Sectoral/Specific (E-Gov) Divide | Gaps in actual use and competence within e-government workflows; subjective non-use | Reconfiguring service processes (ID, eligibility, forms, appeals); transaction-cost and trust perspectives | Share of e-government users among the online population; eID usage; reasons for non-use (complexity, low need) | E-government adoption/usage, channel choice, competence, and trust literature
D3: GenAI Gap (Exploratory) | Disparities in access to, task use of, and competence with GenAI; verification and ethics | Emergent dynamic capabilities at the human–AI boundary; AI literacy & governance | Direct measures (GenAI access, task use, competence); interim proxies (verification literacy, related skills) | GenAI literacy & governance; assistive UX; risk/ethics; treat as non-aggregated pending data maturity
Table 2. Descriptive results for sub-dimensions of D1 and D2 (scale 0–100).
Dimension Code | Sub-Dimension | Score (0–100) | Interpretation
D1: Basic Digital Divide | / | / | /
D11 | Access to Technology | 73.38 | Strong foundational access; weighted score for availability of PCs, mobiles, and laptops is high
D12 | Internet access | 85.60 | Very high connectivity; household internet penetration is widespread and is not the primary barrier
D13 | Digital Literacy | 58.43 | Moderate skills gap; a majority can perform basic tasks (e.g., move files), but advanced skills (e.g., configuration) are low
D14 | Support & Training Availability | 10.70 | Critically low support; engagement with online courses or formal digital training is minimal
D2: E-Government Divide | / | / | /
D2.1 | E-Government Users (adjusted) | 34.20 | Low overall uptake; only about one-third of the total population uses any e-government service
D2.2 | Lack of Skills | 20.20 | Significant skills barrier; one-fifth of online users report lacking the necessary skills to use e-gov services
D2.3 | eID Competence | 29.00 | Low competence/tool adoption; less than 30% of online users use critical electronic identification (eID)
D2.4 | Subjective Non-use | 53.70 | Dominant barrier; the majority of online non-users cite subjective reasons (e.g., “no need”) for not engaging
Table 3. Composite scores for D1 and D2 (0–100).
Dimension | Composite Score | Assessment
D1—Basic Digital Divide | 73.6 | Strong foundational readiness; infrastructure and basic skills primarily in place
D2—E-Government Divide | 19.9 | Very low service uptake and user conversion
Table 4. Exploratory GenAI Gap (D3) results (0–100).
Measure | Score | Status
GenAI Gap (D3) | 43.6 | Early-stage adoption; uneven and capability-dependent
Table 5. Validation and robustness results.
Test | Result | Interpretation
Content Validity (Expert Agreement) | 0.87 | Strong conceptual alignment
Cronbach’s α (D1 Multi-item) | 0.79 | Acceptable internal consistency
Cronbach’s α (D2 Multi-item) | 0.72 | Acceptable internal consistency
Spearman Correlation (D1–D2) | 0.41 * | Moderate positive, expected direction
Weight Sensitivity (±10%) | Stable | No rank reversals; robust indicator
* p < 0.05.
Table 6. Digital & GenAI inclusion profiles—cluster summary.
Cluster | Size (%) | Profile Description
C1: Digitally Excluded | 18% | Low skills, low access, no e-gov or GenAI use
C2: Basic Digital User | 37% | Good basic access, low e-gov use, limited GenAI exposure
C3: E-Gov Pragmatic User | 28% | Active in e-gov, selective GenAI use, moderate trust
C4: GenAI-Augmented Citizen | 17% | High digital & e-gov use, emerging GenAI proficiency
Table 7. Targeted Interventions by Stakeholder Group.
Stakeholder | Recommended Priority Actions
Citizens | Digital and GenAI literacy, critical verification skills, awareness of rights, and safe use of public digital services
Public Administration | Ethical GenAI adoption frameworks, service usability redesign, transparent communication, and training for civil servants
Education Sector | Curriculum integration of digital and GenAI literacy, teacher training, and school–community partnerships
Private Sector & Telecom Providers | Infrastructure investments, affordability schemes, co-funding literacy programs, and inclusive innovation pilots
Civil Society & NGOs | Community outreach, user advocacy and monitoring, support for vulnerable groups, watchdog role to ensure fair AI adoption
