Article

Digital Government Transformation Through Artificial Intelligence: The Mediating Role of Stakeholder Trust and Participation

by
Syed Asad Abbas Bokhari
1,
Sang Young Park
2 and
Shahid Manzoor
3,*
1
Graduate School of Public Policy, Nazarbayev University, Astana 01000, Kazakhstan
2
Department of Korean Language Education, Daegu Catholic University, Gyeongsan-si 38430, Republic of Korea
3
Department of Information Sciences, Dakota State University, Madison, SD 57042, USA
*
Author to whom correspondence should be addressed.
Digital 2025, 5(3), 43; https://doi.org/10.3390/digital5030043
Submission received: 24 July 2025 / Revised: 10 September 2025 / Accepted: 12 September 2025 / Published: 16 September 2025

Abstract

This study explores how artificial intelligence utilization in digital government, through AI-enabled service automation and AI-based decision support, contributes to digital government transformation, emphasizing the mediating roles of stakeholder trust and stakeholder participation. Grounded in stakeholder theory and public value theory, the research aims to understand the relational mechanisms that connect technological innovation to institutional change. A quantitative research design was employed using survey data collected from 412 stakeholders, including citizens, civil society members, public employees, and private actors, who had interacted with AI-driven government services in Pakistan. Structural equation modeling was used to test a conceptual model involving direct and indirect effects. Results reveal that both AI-enabled service automation and AI-based decision support significantly enhance stakeholder trust and participation, which in turn positively influence digital government transformation. Stakeholder trust emerged as a stronger mediator than participation. The findings highlight the importance of ethical, transparent, and participatory AI integration in public administration. Theoretically, the study extends digital governance literature by validating behavioral mediators in technology-driven reform. Practically, it offers strategic insights for policymakers on how to enhance stakeholder engagement and legitimacy in AI-based public systems. Overall, the study emphasizes that successful digital transformation is not solely technological, but also deeply relational and participatory.

1. Introduction

The rapid integration of Artificial Intelligence (AI) into public administration marks a transformative shift in how governments deliver services, make decisions, and engage with stakeholders. As AI technologies such as chatbots, machine learning, and predictive analytics gain traction, their application in digital government platforms is reshaping public sector innovation. Governments worldwide are adopting AI-enabled service automation to enhance responsiveness, minimize bureaucracy, and personalize citizen services [1]. Simultaneously, AI-based decision support systems are increasingly utilized to forecast social needs, allocate resources, and assist in evidence-based policymaking [2]. Despite these advances, the real success of AI integration in digital government lies not only in technological deployment but also in the broader transformation of public service delivery. This transformation necessitates a nuanced understanding of how AI contributes to digital governance outcomes and how various stakeholders experience and respond to AI-powered platforms in public sector contexts.
Stakeholder engagement has emerged as a cornerstone of effective digital governance. In AI-enabled public services, stakeholders, ranging from individual citizens to non-governmental organizations and private enterprises, are no longer passive recipients but active contributors to governance processes [3]. However, their willingness to engage critically depends on their trust in AI technologies and the perceived legitimacy of digital systems. Without trust, even the most advanced AI applications can face public skepticism, underuse, or outright rejection. Trust is particularly significant in governance because AI systems can be perceived as opaque, overly technical, or lacking in accountability [4]. Consequently, understanding how stakeholder trust mediates the relationship between AI utilization and digital government transformation is crucial. A deeper investigation into this trust dynamic offers insights into whether AI fosters or undermines public confidence in state institutions, and whether this confidence facilitates meaningful stakeholder involvement in policy design, implementation, and evaluation.
Parallel to trust, stakeholder participation is another essential mechanism through which AI contributes to digital government transformation. AI-enabled platforms often create new channels for participation, such as interactive interfaces, real-time feedback systems, and digital consultations, enabling stakeholders to co-create policies and services [5]. However, participation is not automatically guaranteed by technological availability; it is influenced by stakeholder perceptions of accessibility, fairness, and responsiveness of AI systems. Engaged stakeholders are more likely to support, legitimize, and help improve digital government platforms, thus enhancing overall policy effectiveness [6]. This study considers stakeholder participation as a mediating factor, building on the premise that AI’s impact on transformation is indirect and is transmitted through citizens’ active involvement. Measuring this participation offers a way to assess whether AI utilization leads to genuine empowerment or simply reinforces technocratic control without community input. Thus, participation serves as a functional bridge between technological innovation and governance outcomes.
Despite the growing enthusiasm for AI in digital government, existing research predominantly emphasizes technological implementation, such as algorithmic efficiency [7], infrastructure development [8], and digital service automation [9], while largely neglecting the social dynamics underpinning successful governance transformation [10,11]. Few studies have empirically examined how AI utilization translates into meaningful institutional change through stakeholder-centered mechanisms like trust and participation. Although trust in government and civic engagement have long been recognized as central to public sector innovation, their mediating roles in AI-driven digital transformation remain underexplored [3,5]. Moreover, while some studies investigate public perceptions of AI [4], they often treat trust and participation as outcomes rather than as active mediators that shape governance effectiveness. This presents a critical gap, especially as governments increasingly rely on intelligent systems to deliver citizen-facing services and make complex policy decisions. Understanding whether and how stakeholder trust and participation influence the effectiveness of AI-enabled digital government is essential for designing inclusive, transparent, and participatory AI strategies that genuinely transform governance rather than merely digitize its processes.
This study addresses the identified research gap by developing and empirically testing a comprehensive model that examines how AI utilization in digital government, operationalized through AI-enabled service automation and AI-based decision support, affects digital government transformation through the mediating roles of stakeholder trust and participation. Unlike prior research that primarily focuses on the technical or administrative aspects of AI adoption [11,12], this study investigates the behavioral and perceptual mechanisms through which AI contributes to meaningful governance change. Specifically, it seeks to answer the following research questions: (1) To what extent does AI utilization in digital government influence stakeholder trust and participation? (2) How do stakeholder trust and participation mediate the relationship between AI utilization and digital government transformation? By employing a quantitative methodology using structural equation modeling on survey data collected from diverse stakeholders, this study offers robust empirical evidence to inform both theory and practice. The findings guide policymakers in designing AI strategies that prioritize trust-building and participatory governance, ensuring inclusive and sustainable digital transformation. This study contributes to the literature by empirically demonstrating the mediating roles of stakeholder trust and participation in linking AI utilization with digital government transformation, clarifying that trust exerts a comparatively stronger influence. Moreover, by situating the analysis in an emerging economy, it extends the predominantly Western-centric discourse on AI-enabled governance, offering new contextual insights.
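The mediation logic tested here (AI utilization → trust/participation → transformation) can be illustrated with a small computational sketch. This is a hypothetical example on synthetic data, not the authors' analysis code or dataset: it estimates an indirect (mediated) effect with the product-of-coefficients approach and a percentile bootstrap, using only NumPy. All variable names and effect sizes are illustrative assumptions.

```python
# Hypothetical mediation sketch (not the study's code): indirect effect of
# AI service automation on digital government transformation via stakeholder
# trust, estimated as a*b with a percentile-bootstrap confidence interval.
import numpy as np

rng = np.random.default_rng(42)
n = 412  # matches the study's sample size; data here are synthetic

# Synthetic standardized scores with assumed path coefficients
automation = rng.normal(size=n)
trust = 0.5 * automation + rng.normal(scale=0.8, size=n)  # a-path
transformation = (0.4 * trust + 0.2 * automation          # b-path and direct c'-path
                  + rng.normal(scale=0.8, size=n))

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b of the mediated effect."""
    # a: slope of mediator on predictor (with intercept)
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    # b: slope of outcome on mediator, controlling for the predictor
    Xb = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap for the indirect effect
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)  # resample respondents with replacement
    boot[i] = indirect_effect(automation[idx], trust[idx], transformation[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])

est = indirect_effect(automation, trust, transformation)
print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A full SEM with latent constructs (as used in the paper) would be fit with dedicated software such as AMOS or lavaan; the bootstrap-of-the-product approach above captures the same mediation test in miniature.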
The remainder of this study is structured as follows: Section 2 presents the theoretical background and development of hypotheses. Section 3 outlines the research methodology, including data collection and measurement. Section 4 reports the empirical results, followed by a discussion of key findings in Section 5. Finally, Section 6 offers theoretical and practical implications, acknowledges limitations, and suggests directions for future research in AI-driven digital governance.

2. Literature Review and Hypotheses Development

To build a robust theoretical foundation for understanding how AI utilization drives digital government transformation, this study draws on stakeholder theory and public value theory. Stakeholder theory posits that organizations, including public institutions, must consider the interests and engagement of all relevant stakeholders, not only to maintain legitimacy but also to enhance decision-making and governance effectiveness [13]. In the context of digital government, stakeholders include citizens, private sector actors, civil society, and internal government users, whose trust and participation are essential for successful technological integration [14]. Complementing this perspective, public value theory emphasizes the role of government in creating value for society through transparency, responsiveness, and inclusivity [15]. When AI is deployed in digital platforms, its success depends not only on efficiency gains but also on the extent to which it enhances public value and addresses societal needs [11,12]. Together, these theories underscore that digital transformation is not merely technical; it is deeply relational, requiring active stakeholder involvement and a commitment to delivering meaningful societal outcomes.

2.1. Digital Government Transformation

Digital government transformation refers to the comprehensive integration of digital technologies into public sector processes to improve service delivery, enhance accountability, and create public value. Unlike e-government initiatives that primarily focused on digitizing existing services, transformation emphasizes a structural shift in governance models, making them more transparent, responsive, and citizen-centric [16]. This process is not limited to the adoption of technology but extends to rethinking institutional practices, redefining citizen–state relationships, and introducing innovative service delivery mechanisms. By leveraging digital platforms, governments can streamline administrative functions, expand citizen access, and foster inclusivity in governance processes [6]. However, achieving digital government transformation requires more than technical tools; it demands trust, legitimacy, and active participation from stakeholders who ultimately shape its acceptance and sustainability.
In the context of this study, digital government transformation is the dependent construct, reflecting how AI-enabled governance changes are perceived by stakeholders. The focus is not just on efficiency gains but on the broader transformation of governance structures through transparency, accountability, and participation. The study contextualizes digital transformation as an outcome influenced not only by technological capacity, such as AI-enabled automation and decision support, but also by relational factors, such as stakeholder trust and participation. Understanding this transformation in the context of stakeholder perceptions offers a deeper appreciation of how digital reforms achieve legitimacy, especially in emerging economies where institutional trust is fragile and citizen engagement is critical.

2.2. Artificial Intelligence in Digital Government Transformation

Artificial intelligence has emerged as a transformative force in digital governance by enabling automation, predictive analytics, and data-driven decision-making. In public administration, AI applications such as chatbots, robotic process automation, and decision-support algorithms are increasingly used to streamline operations, reduce administrative burdens, and enhance the precision of public policies [1]. These technologies hold the potential to deliver highly responsive, personalized, and efficient services, thereby reshaping the expectations of citizens and institutions alike. AI’s role extends beyond efficiency; it introduces new governance models that are proactive, adaptive, and evidence-based, enabling governments to anticipate societal needs and allocate resources strategically [3]. However, the success of AI integration depends on how stakeholders perceive its fairness, transparency, and accountability, as distrust or exclusion can undermine its adoption and effectiveness.
From the perspective of this study, AI utilization is captured through two key dimensions: AI-enabled service automation and AI-based decision support. These constructs represent the technological mechanisms by which digital transformation occurs, but their impact is hypothesized to be mediated by human and relational factors: stakeholder trust and participation. By focusing on AI as a driver of transformation, the research advances the discourse beyond purely technical considerations and places emphasis on how AI’s effectiveness is conditioned by societal acceptance. This framing underscores the study’s contribution: demonstrating that AI’s role in digital government transformation is most effective when aligned with stakeholder values and democratic principles, ensuring that technological innovation is both legitimate and sustainable.

2.3. Theoretical Framework

2.3.1. Stakeholder Theory

Stakeholder theory, first introduced by Freeman (1984), has evolved to emphasize that organizations, whether public or private, must address the interests, values, and expectations of all stakeholders involved in or affected by their decisions [17]. In public administration, this includes citizens, non-governmental organizations, businesses, and internal governmental departments. The theory highlights the necessity of collaborative governance and mutual responsiveness to build legitimacy and sustain effective outcomes [18,19]. From a digital government perspective, stakeholder theory becomes especially relevant as citizens and other actors are increasingly involved in the co-creation, oversight, and evaluation of AI-enabled public services [14]. Given the potential risks of opaque algorithms and unaccountable automation, stakeholder inclusion is not merely desirable but essential for ethical AI implementation. This study uses stakeholder theory to support the argument that digital government transformation through AI cannot succeed without trust and active participation from key societal actors who validate and shape the governance process [6].
In recent public sector research, stakeholder theory has been extended to digital contexts, where engagement mechanisms are mediated through online platforms, social media, and AI-driven interactions. Scholars such as Nederhand (2016) argue that digital technologies have redefined how stakeholders engage, introducing new expectations for transparency, real-time feedback, and shared decision-making [20]. As AI becomes increasingly embedded in public service systems, it is necessary to reexamine stakeholder dynamics in terms of both technological access and perceived fairness. AI can either strengthen engagement through improved responsiveness or undermine it if stakeholders perceive bias or lack of control [21]. This duality reinforces the theory’s emphasis on ethical and inclusive engagement. In this study, stakeholder theory provides a framework to analyze how AI-enabled service automation and decision support systems influence stakeholder trust and participation, two critical outcomes that reflect stakeholders’ acceptance and support of AI in governance.
Moreover, stakeholder theory contributes to understanding the behavioral pathways that connect technology to institutional transformation. While much literature on digital transformation focuses on infrastructure and administrative efficiency, stakeholder theory foregrounds human and relational elements: trust, inclusion, dialogue, and shared value creation. According to Donaldson and Preston (1995), stakeholders are not merely instruments for achieving organizational goals but are ends in themselves [22]. This normative view is particularly important in the context of AI adoption, where power asymmetries and algorithmic opacity can marginalize certain voices. Recent studies show that stakeholder trust in digital systems is a precondition for successful e-government initiatives and sustained citizen engagement [3,6]. Hence, this study uses stakeholder theory to frame stakeholder trust and participation as mediating constructs that link AI utilization with broader digital government transformation. Their presence ensures that transformation is not only technical but also democratic, participatory, and ethically aligned with societal expectations.

2.3.2. Public Value Theory

Public value theory, pioneered by Moore (1995), suggests that the role of public managers and institutions is to create value that benefits society by addressing collective needs and enhancing democratic governance [23]. Unlike traditional public administration paradigms that prioritize efficiency or cost reduction, public value focuses on outcomes such as fairness, inclusion, transparency, and legitimacy [24]. This shift is critical in the age of digital government, where emerging technologies like AI are reshaping how value is created and perceived. The theory argues that public value is co-produced between the state and its stakeholders, emphasizing engagement, responsiveness, and accountability as core pillars. In this study, public value theory underpins the conceptualization of digital government transformation as not merely a technological shift but a process of increasing societal value [25]. By evaluating how AI-enabled services and decision-support systems influence stakeholder trust and participation, the study aligns with the public value perspective of delivering services that are ethically sound, socially accepted, and democratically legitimate.
Recent developments in public value theory have extended its applicability to digital innovation and AI governance. Researchers argue that public value must be redefined in light of digital technologies that alter traditional governance structures [26]. Digital tools, including AI, are no longer merely support systems but central mechanisms for delivering public outcomes and managing relationships with citizens. However, the value generated by AI depends on factors such as transparency, explainability, inclusiveness, and responsiveness, elements that resonate strongly with the public value framework [16]. If AI systems are perceived as black boxes or as undermining procedural fairness, they risk eroding rather than enhancing public value. Therefore, this study uses public value theory to frame digital government transformation as an outcome that is contingent not just on AI utilization, but also on how well these technologies foster trust and participatory governance, which are key indicators of perceived public value in modern democratic states.
Furthermore, public value theory reinforces the importance of stakeholder trust and participation as conditions for creating and sustaining public value. According to Bryson et al. (2017), public value creation is inherently collaborative and requires alignment between institutional performance, stakeholder expectations, and democratic legitimacy [14]. In the context of AI in digital government, this implies that service automation and data-driven decision-making must be transparent, inclusive, and responsive to stakeholder input. When citizens trust AI systems and feel invited to participate in shaping policies and services, public value is amplified [26]. Conversely, if AI is viewed as exclusive or biased, the intended transformation may face resistance and erode legitimacy [24]. A detailed conceptual framework of this study is displayed in Figure 1.
This study adopts public value theory to support the hypothesis that AI technologies can transform governance only when they are perceived as enhancing, not replacing, human-centered values. By examining the mediating roles of trust and participation, the study contributes to a growing body of research that positions public value not just as an output, but as a collaborative process grounded in stakeholder engagement. To strengthen the theoretical contribution, it is important to discuss how stakeholder theory and public value theory are integrated. Stakeholder theory provides insight into how trust and participation function as engagement mechanisms, while public value theory emphasizes that these mechanisms ultimately contribute to the creation of societal value; taken together, the two theories form a complementary lens for understanding how AI-driven governance transformation emerges from both stakeholder inclusion and the delivery of public value.

2.4. Related Empirical Literature and Hypotheses Development

2.4.1. AI-Enabled Service Automation, Stakeholder Trust, and Stakeholder Participation

AI-enabled service automation refers to the application of artificial intelligence technologies, such as chatbots, robotic process automation, and intelligent virtual assistants, to streamline and personalize public service delivery. This automation enhances the efficiency, consistency, and availability of government services, which can contribute to building stakeholder trust. When citizens experience timely, accurate, and unbiased services through AI systems, they are more likely to perceive the government as competent, reliable, and transparent [27]. Trust, in this context, reflects confidence in the government’s ability to protect data privacy, ensure ethical use of AI, and deliver equitable outcomes [28]. Furthermore, the consistency and 24/7 availability offered by automated services reduce human errors and service delays, improving public perceptions of accountability and institutional integrity [29]. Therefore, this study hypothesizes that AI-enabled service automation positively influences stakeholder trust by enhancing service quality, predictability, and perceived fairness in digital government systems.
In addition to building trust, AI-enabled service automation is expected to enhance stakeholder participation in digital governance. Automated interfaces can lower barriers to access, simplify processes, and enable real-time interaction with public institutions, encouraging more frequent and meaningful engagement [5]. Features such as automated feedback loops, interactive surveys, and AI-powered e-participation platforms empower stakeholders to contribute to decision-making and policy design with greater ease [30]. Moreover, citizens are more likely to engage with digital government when services are intuitive, responsive, and aligned with their expectations for modern service delivery. However, participation also depends on additional factors such as digital literacy, user empowerment, and perceived responsiveness of public institutions [31]. While automation enables participation by improving accessibility, the depth of engagement may vary based on demographic and contextual variables. Nonetheless, it is reasonable to hypothesize that AI-enabled service automation positively affects stakeholder participation by creating more convenient and inclusive pathways for involvement in public affairs.
While AI-enabled service automation is hypothesized to influence both stakeholder trust and participation, it is expected to have a comparatively stronger effect on trust. Trust is often formed through repeated experiences of service reliability, data security, and ethical conduct, factors that automation directly improves through consistent and objective service delivery [2]. Participation, by contrast, requires not only access and usability but also civic motivation, empowerment, and belief that one’s input will influence outcomes [32]. These additional cognitive and motivational layers may limit the direct effect of automation on participation. Moreover, studies indicate that while citizens may appreciate automated services for convenience, they may not necessarily engage further unless they feel their voices are being meaningfully heard [6,20]. Therefore, this study posits that although AI-enabled automation supports both trust and participation, its influence on trust is more immediate and direct. Hence, it is hypothesized that:
Hypothesis 1a:
AI-enabled service automation has a positive effect on stakeholder trust.
Hypothesis 1b:
AI-enabled service automation has a positive effect on stakeholder participation.
Hypothesis 1c:
AI-enabled service automation has a stronger effect on stakeholder trust than on stakeholder participation.

2.4.2. AI-Based Decision Support, Stakeholder Trust, and Stakeholder Participation

AI-based decision support refers to the use of artificial intelligence systems, such as predictive analytics, machine learning algorithms, and data-driven dashboards, to assist government decision-making processes. These tools enhance the accuracy, objectivity, and timeliness of public decisions, which can foster greater stakeholder trust. When stakeholders perceive that decisions are based on evidence rather than bias or political motives, they are more likely to view government as fair, rational, and competent [16]. Transparent use of AI in decision-making, particularly when explainable AI techniques are adopted, can further enhance institutional legitimacy by clarifying how outcomes are reached [27]. In contexts where policy decisions involve complex trade-offs, AI-supported systems can improve clarity and reduce perceptions of arbitrariness, leading to stronger public confidence [33]. Therefore, it is anticipated that AI-based decision support positively influences stakeholder trust by enhancing transparency, fairness, and perceived accountability in public governance.
Beyond trust, AI-based decision support is also believed to foster stakeholder participation by enabling more inclusive and informed engagement in public processes. These systems can generate visualizations, forecasts, and scenario simulations that simplify complex policy issues and make them more accessible to citizens [32]. When stakeholders are equipped with understandable and data-rich insights, they are more likely to contribute meaningfully to deliberative forums, consultations, or participatory budgeting platforms [34]. Furthermore, data-driven decision systems often serve as the foundation for digital feedback loops, where public opinions can be analyzed and incorporated into future policy adjustments [12]. However, the technical sophistication of AI may also limit participation if stakeholders lack data literacy or perceive the systems as overly technocratic [35]. Nevertheless, the overall effect is expected to be positive, as AI enhances participatory capacity by making governance more evidence-based, inclusive, and accessible. Hence, a positive relationship is expected between AI-based decision support and stakeholder participation.
While both stakeholder trust and participation are likely to benefit from AI-based decision support, its effect on trust is expected to be stronger. Trust in government often hinges on perceived procedural fairness, competence, and transparency, qualities that AI decision systems are specifically designed to support [36]. AI can provide consistent, data-backed justifications for policy actions, which directly contributes to reducing skepticism and enhancing trustworthiness [37]. In contrast, participation involves additional motivational and contextual factors, including willingness to engage, belief in the efficacy of input, and familiarity with digital tools [31]. Even if AI improves decision quality, it may not immediately translate into deeper stakeholder involvement unless mechanisms for civic empowerment are simultaneously in place. Thus, while AI-based decision support can facilitate participation, its primary and more pronounced impact is likely on trust formation. This leads to the following:
Hypothesis 2a:
AI-based decision support has a positive effect on stakeholder trust.
Hypothesis 2b:
AI-based decision support has a positive effect on stakeholder participation.
Hypothesis 2c:
AI-based decision support has a stronger effect on stakeholder trust than on stakeholder participation.

2.4.3. Stakeholder Trust, Stakeholder Participation, and Digital Government Transformation

Stakeholder trust plays a foundational role in the success of digital government transformation. Trust, defined as the belief in the competence, integrity, and benevolence of institutions, is critical for public acceptance and sustained use of digital services [29]. In AI-driven governance environments, where algorithmic decisions may be opaque or complex, trust becomes even more crucial. High levels of trust can reduce resistance to technological change, foster cooperation, and legitimize new digital initiatives [36]. When citizens believe that governments will use AI ethically, transparently, and for the public good, they are more likely to support ongoing innovation and transformation efforts. Furthermore, trust helps mitigate concerns about privacy, data misuse, and automation bias, which are common in AI adoption [37]. Therefore, this study hypothesizes that stakeholder trust positively influences digital government transformation by strengthening acceptance, perceived legitimacy, and citizen cooperation with AI-based public services.
Stakeholder participation also plays an essential role in shaping digital government transformation, particularly in democratic and citizen-centric governance models. Participation involves stakeholders actively contributing to public service design, decision-making, and evaluation through digital platforms, consultations, or co-creation mechanisms [30]. Inclusive participation enhances transparency, improves service relevance, and helps governments identify and respond to diverse needs, thereby increasing the quality and legitimacy of digital reforms [34]. Moreover, when citizens are invited to contribute to AI-related governance processes, such as feedback on algorithmic fairness or input on digital service design, they are more likely to feel ownership over outcomes, reinforcing the success of transformation initiatives. However, participation alone may not sustain transformation unless coupled with trust in institutions and technology. While participation signals engagement, it may remain superficial or symbolic if stakeholders do not believe their input influences outcomes [6,35]. Therefore, although both trust and participation matter, this study hypothesizes that:
Hypothesis 3a:
Stakeholder trust has a positive effect on digital government transformation.
Hypothesis 3b:
Stakeholder participation has a positive effect on digital government transformation.
Hypothesis 3c:
Stakeholder trust has a stronger effect on digital government transformation than does stakeholder participation.

2.4.4. Mediating Role of Stakeholder Trust

Stakeholder trust is increasingly recognized as a critical mediating variable in the relationship between technological innovation and successful digital government transformation. In the context of AI adoption, particularly through service automation and decision support systems, trust serves as a key mechanism that enables stakeholders to embrace, support, and legitimize digital reforms. AI-enabled service automation enhances service efficiency and reliability, while AI-based decision support improves policy accuracy and responsiveness; however, without stakeholder trust, these benefits may be underutilized or resisted [2,28]. Trust acts as a psychological bridge, mitigating concerns related to data misuse, algorithmic bias, and institutional transparency [16,21]. When stakeholders perceive AI technologies as fair, ethical, and beneficial, their trust in public institutions increases, which in turn promotes broader acceptance and cooperation, ultimately driving meaningful digital transformation. Prior studies affirm that trust significantly mediates technology adoption outcomes in the public sector [3]. Therefore, this study posits that:
Hypothesis 4a:
Stakeholder trust mediates the relationship between AI-enabled service automation, AI-based decision support, and digital government transformation.

2.4.5. Mediating Role of Stakeholder Participation

Stakeholder participation plays a pivotal mediating role in the relationship between AI utilization, through service automation and decision support, and digital government transformation. While AI-enabled service automation increases accessibility and efficiency of public services, and AI-based decision support enhances policy formulation and resource allocation, these advancements alone do not guarantee successful transformation unless stakeholders are meaningfully engaged. Participation ensures that digital technologies are not only implemented but also shaped, validated, and improved through public input [32]. Research shows that when citizens are invited to engage, through online consultations, digital feedback systems, or participatory platforms, they are more likely to contribute to the legitimacy, effectiveness, and adaptability of AI-driven systems [34]. Moreover, active participation enhances public ownership and reinforces the co-creation of public value, particularly in data-rich and algorithmic governance contexts [20]. Stakeholders who perceive AI tools as responsive and participatory are more likely to support their integration into government operations. Hence, this study hypothesizes that:
Hypothesis 4b:
Stakeholder participation mediates the relationship between AI-enabled service automation, AI-based decision support, and digital government transformation.

3. Research Methods

3.1. Data Collection and Sample

This study employed a quantitative research design using a structured, self-administered online survey to collect data from stakeholders actively engaging with digital government platforms. The target population included citizens, civil society members, public employees, and private sector users interacting with AI-enabled government services in Pakistan. Given the increasing digitalization of public services in the region, Pakistan offers a relevant and timely context for assessing stakeholder perceptions of AI in governance. In this study, stakeholders were defined as active users of digital government services, including citizens, civil society members, public employees, and private actors, since their perceptions and interactions directly shape the effectiveness and legitimacy of digital government transformation. A non-probability purposive sampling technique was adopted to recruit participants who had experience with at least one AI-driven government service, such as chatbots, e-filing systems, automated responses, or AI-generated decisions. To ensure diverse representation, participants were recruited through online channels, including social media platforms, such as Facebook, LinkedIn, and WhatsApp, university mailing lists, and civic/professional networks. This purposive online recruitment approach may have favored urban and digitally confident individuals, which we acknowledge as a limitation in terms of representativeness. Prior to data collection, a pilot study involving 30 participants was conducted to test the clarity, relevance, and reliability of the questionnaire items. Their feedback informed minor revisions in wording and item ordering.
To determine an appropriate sample size, G*Power 3.1.9.7 software was used to conduct a priori power analysis, assuming a medium effect size (f2 = 0.15), a significance level of 0.05, and a power of 0.95. The results indicated that a minimum sample of 204 respondents was required for a model with six predictors. However, to enhance statistical robustness and generalizability, data were collected from 412 complete responses, exceeding the minimum threshold. The final sample comprised individuals from various stakeholder categories, including 48% citizens, 21% civil society members, 19% public employees, and 12% private sector actors. Respondents were screened to ensure they had interacted with digital government platforms in the past 12 months. Demographic diversity was ensured across age, gender, education level, and regional representation to reflect the multifaceted nature of stakeholder engagement in digital governance, as displayed in Table 1. Participation was voluntary and anonymous, and informed consent was obtained digitally before respondents proceeded with the questionnaire.
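For readers without access to G*Power, the a priori computation can be reproduced in a few lines. The sketch below is a minimal illustration, assuming the omnibus F-test for a fixed-effects multiple regression model (G*Power's "R² deviation from zero" family) with noncentrality λ = f²·N; the required N depends on the test family selected, so a differently configured G*Power run may return a different figure.

```python
from scipy.stats import f as f_dist, ncf

def required_sample_size(f2=0.15, alpha=0.05, power=0.95, predictors=6):
    """Smallest N at which the omnibus regression F-test reaches the
    target power, given Cohen's f^2 (noncentrality lambda = f^2 * N)."""
    n = predictors + 2  # smallest N with positive error degrees of freedom
    while True:
        df1, df2 = predictors, n - predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)          # rejection threshold
        achieved = 1 - ncf.cdf(crit, df1, df2, f2 * n)  # power at this N
        if achieved >= power:
            return n
        n += 1
```

Calling `required_sample_size()` with the study's parameters (f² = 0.15, α = 0.05, power = 0.95, six predictors) searches upward until the noncentral F distribution first clears the target power.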
Ethical considerations were thoroughly observed throughout the research process. Data confidentiality and participant privacy were prioritized, with no identifying information collected. Respondents were assured that their responses would be used strictly for academic purposes and analyzed in aggregate form. To minimize potential response biases, participants were informed that there were no right or wrong answers and were encouraged to answer honestly based on their experiences. The survey was administered online and optimized for both desktop and mobile interfaces to enhance accessibility. Data were collected over a six-week period to ensure adequate response rates and to reach stakeholders from both urban and semi-urban areas. Overall, the sampling and data collection strategies ensured relevance, reliability, and ethical compliance, contributing to the study’s methodological rigor and external validity.

3.2. Common Method Bias Assessment

Given the reliance on self-reported data collected through a single instrument at one point in time, the risk of common method bias (CMB) was carefully addressed. First, procedural remedies were incorporated during survey design, such as ensuring anonymity, randomizing item order, and minimizing ambiguous or leading wording [38]. Respondents were informed that their identities would not be traceable, which helped reduce social desirability bias. Additionally, the questionnaire contained both positively and negatively framed items to reduce patterned responding. To statistically assess CMB, Harman’s single-factor test was conducted using exploratory factor analysis. The results indicated that the first factor accounted for only 31.4% of the total variance, which is below the recommended threshold of 50% [38], suggesting that CMB is not a significant concern. Furthermore, a common latent factor test was conducted using Structural Equation Modeling (SEM), which also confirmed the absence of substantial method variance. These combined procedural and statistical controls enhance confidence in the validity of the observed relationships.
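Harman's single-factor test amounts to pooling all items and checking how much total variance the first unrotated factor absorbs. The sketch below is a minimal illustration on synthetic data; it approximates the first unrotated factor with the first principal component of the item correlation matrix, whereas EFA software may use a common-factor extraction instead.

```python
import numpy as np

def harman_first_factor_share(items):
    """Share of total variance captured by the first unrotated
    component of the item correlation matrix."""
    R = np.corrcoef(items, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # largest first
    return float(eigvals[0] / eigvals.sum())

# Illustrative check on synthetic survey responses (hypothetical items).
rng = np.random.default_rng(7)
items = rng.normal(size=(400, 23))   # 23 weakly related items, 400 respondents
share = harman_first_factor_share(items)
# A share below 0.50 would suggest no single dominant method factor.
```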

3.3. Measures and Scales

All constructs were measured using validated multi-item scales adapted from prior studies, using a 5-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). The independent variable, AI utilization in digital government, was measured using two sub-constructs: AI-enabled service automation and AI-based decision support. For AI-enabled service automation, four items were adapted from Wirtz et al. (2019) [1] and Saxena & Janssen (2017) [21], capturing the extent to which AI facilitates automated public service interactions. Sample items included: “Government services I use are powered by automated systems,” and “Chatbots or automated tools have made it easier for me to access services.” For AI-based decision support, five items measured the presence and perceived value of AI-driven decisions in public administration, such as policy planning and resource allocation. Items were adapted from Janssen & Van der Voort (2020) [33], including: “I believe AI helps the government make more data-driven decisions.”
The mediating variables, stakeholder trust and stakeholder participation, were measured using validated scales grounded in stakeholder theory and digital governance literature. Stakeholder trust was assessed with five items adapted from Mergel et al. (2021) [16] and Wirtz et al. (2019) [1], capturing trust in government’s ethical use of AI, data protection, and decision fairness. Sample items included: “I trust the government to use AI technologies responsibly,” and “I believe my data is handled securely by AI-enabled public systems.” For stakeholder participation, four items were adapted from Chatfield and Reddick (2017) [5] and Susha et al. (2019) [30], reflecting the degree of stakeholder involvement in digital consultations, feedback loops, and collaborative decision-making. Example items included: “I have used digital platforms to share feedback with the government,” and “AI tools make it easier for me to participate in public decision processes.”
The dependent variable, digital government transformation, was measured using a five-item scale adapted from Mergel et al. (2021) [16] and Panda et al. (2025) [11], capturing improvements in responsiveness, transparency, service quality, and institutional innovation through digital means. Items included: “AI has improved the quality of digital public services,” and “The government is more responsive now due to intelligent technologies.” The construct thus relies on stakeholders’ perceptual evaluations of transformation outcomes rather than objective administrative indicators. All items were reviewed for contextual relevance and content validity through expert consultation prior to the pilot study. The final instrument demonstrated high internal consistency, with Cronbach’s alpha values for each construct exceeding the acceptable threshold of 0.70.
To ensure construct validity, confirmatory factor analysis (CFA) was conducted to assess convergent and discriminant validity. Convergent validity was verified as all factor loadings exceeded 0.60, average variance extracted (AVE) values were above 0.50, and composite reliability (CR) exceeded 0.70 (Hair et al., 2019) [39]. Discriminant validity was tested using the Fornell–Larcker criterion, which showed that the square root of AVE for each construct was greater than its correlations with other constructs. The model also exhibited good fit indices in CFA, indicating that the measurement model is robust and theoretically sound. These steps ensure that the scales used effectively measure the intended theoretical constructs, providing a solid basis for structural modeling and hypothesis testing.

3.4. Data Analysis

The data analysis was conducted using structural equation modeling (SEM) to test the hypothesized relationships and assess the mediating effects of stakeholder trust and participation. SEM is appropriate for this study due to its ability to model latent variables and test complex interrelationships among constructs simultaneously [39]. AMOS 26 was used to generate descriptive statistics and correlation matrices, whereas SmartPLS (version 3.3.2) was employed to conduct the PLS-SEM analysis, including measurement model assessment, structural model evaluation, and hypothesis testing. Prior to analysis, data were screened for missing values, outliers, and normality. Descriptive statistics and Pearson correlations were computed to understand variable distributions and associations. The measurement model was evaluated through CFA to ensure construct validity, and the structural model was assessed using fit indices including CFI, TLI, RMSEA, and SRMR. Bootstrapping with 5000 samples was employed to test mediation hypotheses, providing confidence intervals for indirect effects.
The structural model was then tested to examine the direct, indirect, and total effects among the constructs. Specifically, the model assessed whether AI-enabled service automation and AI-based decision support positively influence digital government transformation through the mediating pathways of stakeholder trust and participation. The significance of the hypothesized paths was evaluated using standardized regression weights and p-values. In addition to the pooled analysis, a multi-group analysis (MGA) was performed to compare results across stakeholder groups, such as citizens, civil society members, public employees, and private actors. The MGA results showed consistency across groups, with only minor variations, thereby confirming the robustness and generalizability of the hypothesized model. The model demonstrated strong explanatory power, with substantial variance explained (R2) in digital government transformation. This analytic approach allowed for robust hypothesis testing while accounting for the multidimensional structure of the theoretical framework.
Before conducting multi-group comparisons across stakeholder categories, such as citizens, civil society, public employees, and private actors, we established measurement invariance following the MICOM procedure in PLS-SEM [40]. The assessment included configural invariance, compositional invariance, and equality of composite means and variances. Establishing invariance ensures that the constructs are conceptualized similarly across groups, allowing for meaningful comparison. Where full measurement invariance was not achieved, only descriptive or configural differences were reported, and no structural path comparisons were made, in line with the recommendations of Hair et al. (2019) [39]. This approach increases the robustness and credibility of the group-level analyses.

3.5. Informed Consent Statement

We confirm that informed consent was obtained from all participants prior to their involvement in the study. Participants were clearly informed about the purpose of the research, the voluntary nature of their participation, and their right to withdraw at any time without consequence. They were also assured of the confidentiality and anonymity of their responses. All procedures were conducted in accordance with the ethical guidelines and institutional review requirements.

4. Research Findings

SmartPLS, as suggested by Ringle & Sarstedt (2016), was employed to conduct Partial Least Squares Structural Equation Modeling (PLS-SEM), a robust technique particularly suited for exploratory models and studies involving complex, multivariate relationships [41]. The analysis was carried out using the default PLS algorithm settings, including the path weighting scheme, a maximum of 300 iterations, and a stop criterion of 10⁻⁷, which ensured convergence of the iterative estimation process. To evaluate the statistical significance of the model paths, a non-parametric bootstrapping procedure was implemented using 5000 resamples, consistent with the recommendations of Hair et al. (2017) [42]. This procedure provided estimates of standardized path coefficients (β), t-values, p-values, and bias-corrected and accelerated (BCa) 95% confidence intervals, which offer a more accurate inference in small-to-moderate sample sizes.
The PLS-SEM analysis followed a two-step approach: first, the measurement model was assessed to evaluate construct reliability and validity, including indicator reliability, internal consistency reliability (Cronbach’s alpha and composite reliability), convergent validity (Average Variance Extracted), and discriminant validity (Fornell-Larcker criterion and HTMT ratio). Second, the structural model was assessed to test the hypothesized relationships among constructs. This involved examining collinearity (VIF values), path coefficients, R2 values for explanatory power, f2 effect sizes, and Q2 predictive relevance. The modeling and interpretation procedures were conducted in strict accordance with the guidelines outlined by Hair et al. (2017), ensuring methodological rigor and the reliability of empirical findings [42].

4.1. Measurement Model Assessment

4.1.1. Reliability

To assess internal consistency reliability, both Cronbach’s alpha and composite reliability (CR) were calculated for each construct. All Cronbach’s alpha values given in Table 2 exceeded the recommended threshold of 0.70, indicating high reliability (Nunnally & Bernstein, 1994) [43]. Specifically, AI-enabled service automation (α = 0.885), AI-based decision support (α = 0.904), stakeholder trust (α = 0.964), stakeholder participation (α = 0.839), and digital government transformation (α = 0.853) all demonstrated acceptable internal consistency. Similarly, CR values ranged between 0.844 and 0.964, surpassing the 0.70 benchmark, as suggested by Hair et al. (2019) [39]. These results confirm that the observed items consistently represent the latent variables. The strong reliability scores across constructs provide confidence that the data are suitable for CFA and subsequent SEM. The consistency of these metrics also supports the conclusion that the measurement instrument performed well in capturing the intended theoretical constructs in the context of digital government and AI utilization.
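For reference, Cronbach's alpha follows directly from raw item scores. The sketch below is a minimal illustration on simulated responses (the item data and the 0.3 noise level are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of item total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

# Three items driven by one latent score should be highly consistent.
rng = np.random.default_rng(1)
latent = rng.normal(size=300)
items = np.column_stack([latent + 0.3 * rng.normal(size=300) for _ in range(3)])
alpha = cronbach_alpha(items)   # high, reflecting the shared latent driver
```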

4.1.2. Convergent Validity

Convergent validity was evaluated by analyzing factor loadings, average variance extracted (AVE), and composite reliability (CR). All factor loadings presented in Table 2 were above 0.60, with most exceeding 0.70, demonstrating strong item relevance to their respective constructs [39]. The AVE values ranged from 0.630 to 0.873, surpassing the minimum threshold of 0.50 and indicating that each construct explains more than half the variance in its indicators. For example, stakeholder trust had an AVE of 0.873, and digital government transformation had an AVE of 0.630, reflecting excellent convergence. The CR values, as previously discussed, were all above 0.80, further confirming the internal consistency of the constructs. These metrics collectively provide strong evidence of convergent validity, affirming that the observed indicators validly measure their underlying latent variables. The satisfactory convergent validity supports the theoretical model’s structure and confirms that constructs such as AI-based decision support and stakeholder participation are well captured through the adapted measurement scales.
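Both AVE and CR are simple functions of the standardized loadings. The sketch below illustrates the standard formulas with a hypothetical set of loadings (0.80 for four items, not taken from the study's tables):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    explained = lam.sum() ** 2           # variance shared with the construct
    error = np.sum(1.0 - lam ** 2)       # residual (error) variance
    return float(explained / (explained + error))

loadings = [0.80, 0.80, 0.80, 0.80]      # hypothetical standardized loadings
# ave(loadings) -> 0.64 ; composite_reliability(loadings) -> ~0.877
```

Here all loadings of 0.80 give an AVE of 0.64, comfortably above the 0.50 threshold, and a CR of roughly 0.877, above the 0.70 benchmark.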

4.1.3. Discriminant Validity

To assess discriminant validity, both the Fornell–Larcker criterion and the Heterotrait–Monotrait ratio (HTMT) were applied. Using the Fornell–Larcker criterion, the square roots of AVE for each construct exceeded their respective inter-construct correlations, indicating acceptable discriminant validity [44]. For instance, the square root of AVE for stakeholder trust (0.85) was higher than its correlations with AI-enabled automation (0.61) and AI-based decision support (0.64). Furthermore, all HTMT values were below the conservative threshold of 0.85, ranging from 0.42 to 0.84, as presented in bold in the upper-right corner of Table 3, supporting recent guidelines by Henseler et al. (2015) [40] for establishing discriminant validity in PLS-SEM models. These results confirm that the constructs are conceptually and empirically distinct, which is essential in mediation models where closely related variables could compromise interpretability. Therefore, discriminant validity is adequately established, ensuring that constructs such as stakeholder trust and stakeholder participation, although related, measure unique aspects of stakeholder engagement within the context of digital government transformation.
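The HTMT ratio itself is the mean heterotrait (cross-construct) item correlation divided by the geometric mean of the two average monotrait (within-construct) item correlations. The sketch below is a minimal illustration on synthetic item blocks; the factor structure, sample size, and noise levels are hypothetical.

```python
import numpy as np

def htmt(block_a, block_b):
    """HTMT: mean cross-block item correlation over the geometric mean
    of the average within-block item correlations."""
    pa = block_a.shape[1]
    R = np.corrcoef(np.hstack([block_a, block_b]), rowvar=False)
    hetero = R[:pa, pa:].mean()                                   # cross-block
    mono_a = R[:pa, :pa][np.triu_indices(pa, 1)].mean()           # within A
    mono_b = R[pa:, pa:][np.triu_indices(block_b.shape[1], 1)].mean()  # within B
    return float(hetero / np.sqrt(mono_a * mono_b))

# Two related but distinct constructs: correlated factors, separate items.
rng = np.random.default_rng(3)
n = 500
f1 = rng.normal(size=n)
f2 = 0.5 * f1 + np.sqrt(0.75) * rng.normal(size=n)   # factor correlation 0.5
block_a = np.column_stack([f1 + 0.5 * rng.normal(size=n) for _ in range(4)])
block_b = np.column_stack([f2 + 0.5 * rng.normal(size=n) for _ in range(4)])
value = htmt(block_a, block_b)   # well below the 0.85 threshold here
```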

4.2. Structural Model Assessment

The structural model was assessed through path coefficients, t-statistics, and explained variance (R2) using bootstrapping with 5000 resamples in SEM. The model fit indices met accepted thresholds, with CFI = 0.96, TLI = 0.95, RMSEA = 0.042, and SRMR = 0.038, indicating a well-fitting model [39]. The R2 values for key endogenous constructs demonstrate strong explanatory power: 0.61 for stakeholder trust, 0.54 for stakeholder participation, and 0.69 for digital government transformation. These values suggest that the model explains a substantial portion of the variance in these constructs. The standardized path coefficients indicated that both AI-enabled service automation (β = 0.41, p < 0.001) and AI-based decision support (β = 0.38, p < 0.001) significantly influenced stakeholder trust. Similarly, these AI factors also significantly affected stakeholder participation and transformation outcomes. The strength and significance of these paths validate the hypothesized relationships and underscore the model’s robustness.
Moreover, multi-collinearity diagnostics were conducted, and all variance inflation factor (VIF) values were below 4.0, confirming that multicollinearity is not a concern [42]. The use of bootstrapped standard errors and confidence intervals provides additional assurance about the model’s statistical robustness. The significant indirect effects further justify testing mediation through stakeholder trust and participation. The presence of strong, statistically significant direct and indirect paths among constructs such as AI utilization, trust, participation, and transformation align with stakeholder theory and public value theory, reinforcing the conceptual strength of the study. The combination of high R2 values, acceptable fit indices, and statistically valid path coefficients demonstrates that the structural model is both theoretically meaningful and empirically reliable.
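The VIF diagnostic reported above is defined per predictor as 1/(1 − R²ⱼ), where R²ⱼ comes from regressing that predictor on all the others. A minimal sketch on hypothetical data:

```python
import numpy as np

def vifs(X):
    """Variance inflation factor for each column of X, via auxiliary
    OLS regressions of each predictor on the remaining predictors."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])   # intercept + others
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 3))   # independent predictors
values = vifs(X)                # values near 1 signal no collinearity
```

Values near 1 indicate independent predictors; the study's threshold of 4.0 (and the common cut-off of 5) flags problematic overlap.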

4.3. Descriptives and Correlation Analysis

The descriptive and correlation analysis displayed in Table 3 provides important preliminary insights into the relationships among the key variables in this study. The mean scores indicate that respondents generally hold positive perceptions of all variables, with the highest average recorded for stakeholder trust (3.908) and AI-based decision support (3.893), suggesting that participants perceive these elements as strong aspects of their digital government experiences. AI-enabled service automation also scored relatively high (M = 3.716), indicating good adoption of automated services. The correlation matrix shows that all variables are positively and significantly correlated at the 0.01 level, indicating strong interrelationships. The highest correlation was between AI-enabled service automation and digital government transformation (r = 0.972, p < 0.01), suggesting that automation has a particularly strong perceived impact on transformation outcomes. Additionally, stakeholder participation is highly correlated with both stakeholder trust (r = 0.952) and digital government transformation (r = 0.912), supporting the idea that active engagement and trust are essential for transformation. The strength of these correlations highlights the conceptual alignment among the constructs and reinforces the validity of the proposed mediation model for further analysis through SEM.

4.4. Hypotheses Testing

The hypothesis testing procedure began by examining direct, indirect, and total effects within the structural equation model using bootstrapping. Each hypothesis was evaluated using standardized path coefficients, t-values, and 95% confidence intervals. The study tested several hypotheses, covering direct relationships, mediating effects, and one comparative hypothesis regarding the strength of trust versus participation. The bootstrapping technique allowed for the computation of bias-corrected confidence intervals, offering a robust basis for mediation analysis [45]. The model supported most of the proposed hypotheses, with only minor variations in effect strength. These findings affirm that AI utilization influences digital government transformation both directly and indirectly via stakeholder engagement mechanisms. This highlights the necessity of not only deploying AI technologies in government but also fostering trust and inclusion to ensure their transformative potential. The next sections provide detailed results for direct effects, mediation, control variables, importance–performance analysis, and endogeneity tests.
Furthermore, values of Q2 predictive relevance greater than zero indicate that the structural model possesses adequate predictive relevance. As shown in Table 4, the PLS path model demonstrates satisfactory out-of-sample predictive capability, confirming that the model can meaningfully predict the endogenous constructs [39]. This predictive strength is further corroborated by the coefficient of determination (R2) values, which reflect the model’s explanatory power. Table 3 illustrates that the model also exhibits satisfactory in-sample predictive power, indicating that the constructs account for a substantial portion of the variance in the dependent variables. Together, the Q2 and R2 results provide strong evidence of both predictive relevance and explanatory adequacy, thereby supporting the robustness of the PLS path model.

4.4.1. Direct Effects

The findings of this study presented in Table 4 and Figure 2 provide strong empirical support for Hypothesis 1a, which posits that AI-enabled service automation positively influences stakeholder trust. The path coefficient (β = 0.547, t = 24.612, p < 0.001) confirmed a significant relationship, suggesting that when public services are automated through AI technologies such as chatbots, automated service portals, and virtual assistants, citizens perceive them as more reliable, transparent, and professional. This result aligns with Wirtz et al. (2019), who emphasized that service automation enhances trust by reducing human error, increasing response consistency, and offering 24/7 availability [1]. Similarly, Saxena and Janssen (2020) highlighted that trust is strengthened when digital interactions are seamless and predictable [21]. The study reinforces the idea that stakeholders interpret the quality and efficiency of automated services as a reflection of institutional competence. As such, automation is not merely a technical innovation but a relational tool that enhances confidence in public institutions, thereby supporting broader goals of digital transformation through trust-building.
For Hypothesis 1b, which stated that AI-enabled service automation positively affects stakeholder participation, the results also showed strong support (β = 0.262, t = 11.690, p < 0.001). Automated services were found to lower participation barriers by simplifying user interfaces, enabling quicker interactions, and allowing real-time communication with government systems. Hypothesis 1c proposed that AI-enabled service automation would have a stronger effect on stakeholder trust than on participation, and this was confirmed through path comparisons (β = 0.547 vs. β = 0.262). This result is consistent with previous research, including that of Rekunenko et al. (2025) [37], who argue that trust is built more directly through perceptions of efficiency, reliability, and system responsiveness, attributes strongly associated with automation. While automation facilitates access, it does not necessarily guarantee deeper engagement unless supported by participatory channels. Therefore, the study confirms that while automation promotes both trust and participation, it has a stronger and more immediate effect on trust, suggesting that service quality and operational integrity are foundational for digital transformation.
For Hypothesis 2a, which postulated that AI-based decision support positively affects stakeholder trust, the results were again significant and positive (β = 0.431, t = 20.286, p < 0.001). AI-based decision tools, such as predictive analytics and data-driven dashboards, appear to enhance stakeholders’ perceptions of government decision-making as rational, fair, and evidence-based. The results support stakeholder theory by confirming that when government actions are perceived as data-informed and transparent, trust is strengthened, contributing to digital governance success. Similarly, Hypothesis 2b asserted that AI-based decision support would positively influence stakeholder participation, and this was validated (β = 0.717, t = 32.150, p < 0.001). The results suggest that when citizens observe the use of data and AI in public decision-making, they feel more informed and empowered to participate. The findings of this study emphasize the dual value of AI in promoting both cognitive legitimacy and civic action. Likewise, Hypothesis 2c was supported by comparing the effects of AI-based decision support on stakeholder trust (β = 0.431) and on stakeholder participation (β = 0.717). This finding reflects the core tenets of public value theory, which emphasizes transparency, fairness, and rationality as key elements in trust formation [14]. This result highlights the importance of integrating ethical and explainable AI into decision processes to reinforce public trust, especially when aiming for broader digital government reforms.
The findings of this study provide strong support for Hypothesis 3a, which proposed that stakeholder trust positively influences digital government transformation. The path coefficient was significant (β = 1.260, t = 20.937, p < 0.001), indicating that trust in government institutions, particularly in their ethical use of AI, transparency, and fairness, plays a central role in driving digital transformation. Trust fosters acceptance, reduces resistance to technological change, and legitimizes AI-based innovations in public service. This study adds empirical confirmation to these theoretical claims by demonstrating that trust is not just a peripheral variable but a key mechanism through which digital government transformation becomes viable and sustainable in AI-driven contexts. Significant support was also found for Hypothesis 3b, which stated that stakeholder participation positively affects digital government transformation (β = 0.361, t = 6.005, p < 0.001). This result confirms that active involvement of citizens and other stakeholders through digital platforms, feedback systems, and co-production processes contributes meaningfully to institutional transformation. Participation fosters inclusivity, enhances policy relevance, and ensures that digital reforms reflect diverse stakeholder needs.
Lastly, Hypothesis 3c was confirmed by comparing the effects of stakeholder trust (β = 1.260) and stakeholder participation (β = 0.361) on digital government transformation. The stronger effect of stakeholder trust is consistent with public value theory, which emphasizes stakeholder trust as a core pillar of institutional transformation [15]. This finding echoes Wirtz et al. (2019), who noted that decision support not only improves outcomes but also legitimizes those outcomes in the eyes of stakeholders [1]. While automation improves the service experience, decision support influences strategic and systemic changes, contributing more directly to governance innovation. This comparative finding highlights the importance of investing not only in front-end service tools but also in back-end AI systems that inform planning and policymaking. For governments seeking holistic transformation, the results suggest that prioritizing AI systems that shape decision-making may yield greater stakeholder approval and support for digital reforms.

4.4.2. Mediating Effects

Table 5 presents the mediating effects of stakeholder trust and stakeholder participation as analyzed by SEM. Similarly, the results presented in Table 6 from the Hayes PROCESS multiple mediation analysis provide robust evidence for the mediating roles of stakeholder trust and stakeholder participation in the relationship between AI utilization and digital government transformation. In Model 1, AI-enabled service automation significantly influenced stakeholder trust (β = 1.206, p < 0.001), and stakeholder trust in turn had a significant effect on digital government transformation (β = 0.277, p < 0.001). The direct effect of service automation on transformation dropped substantially after introducing the mediator (from β = 0.987 to β = 0.334), indicating partial mediation. Similarly, Model 2 shows a strong positive effect of AI-based decision support on stakeholder trust (β = 1.204, p < 0.001), and trust again significantly influenced transformation (β = 0.388, p < 0.001). The indirect effect (β = 0.325, p < 0.001) and the reduction in the direct effect from β = 1.522 to β = 0.467 support partial mediation, aligning with Wirtz et al. (2019) and Mergel (2021), who argue that trust is essential for legitimizing AI-driven governance initiatives [1,16].
In Models 3 and 4, stakeholder participation also emerged as a significant mediator. AI-enabled service automation positively influenced participation (β = 1.077, p < 0.001), which in turn predicted digital government transformation (β = 0.433, p < 0.001). The mediation effect (β = 0.034, p = 0.002) confirms that participation partially mediates the link between automation and transformation, although this indirect effect is weaker than that of trust. Similarly, in Model 4, AI-based decision support showed a strong direct impact on stakeholder participation (β = 0.999, p < 0.001), and participation significantly influenced transformation (β = 0.865, p < 0.001), with a strong indirect effect (β = 0.333, p < 0.001). These findings corroborate prior research by Chatfield and Reddick (2017) and Susha et al. (2015), emphasizing that while AI facilitates transformation, its full impact is realized through stakeholder engagement mechanisms [5,30]. All models demonstrated partial mediation, indicating that both AI capabilities and stakeholder dynamics jointly drive digital governance reform.
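The single-mediator logic underlying these models can be sketched compactly: estimate the X→M path (a), the M→Y path controlling for X (b), compare total and direct effects, and bootstrap the indirect effect a·b with a percentile interval in the spirit of Preacher and Hayes [45]. The data below are synthetic and the effect sizes merely illustrative; this is not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 412  # sample size matching the study; the data here are synthetic

# Hypothetical data: X = AI-enabled service automation, M = stakeholder trust,
# Y = digital government transformation (standardized composites).
X = rng.normal(size=n)
M = 1.2 * X + rng.normal(scale=0.8, size=n)
Y = 0.33 * X + 0.28 * M + rng.normal(scale=0.8, size=n)

def ols_slopes(y, *preds):
    """OLS of y on an intercept plus the given predictors; returns the slopes."""
    Z = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols_slopes(M, X)[0]          # X -> M path
c_total = ols_slopes(Y, X)[0]    # total effect of X on Y
cp, b = ols_slopes(Y, X, M)      # direct effect c' and M -> Y path b

# Percentile bootstrap for the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    ab = ols_slopes(M[idx], X[idx])[0] * ols_slopes(Y[idx], X[idx], M[idx])[1]
    boot.append(ab)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"total={c_total:.3f} direct={cp:.3f} indirect={a*b:.3f} CI=({lo:.3f},{hi:.3f})")
```

Partial mediation corresponds to the pattern reported in Tables 5 and 6: an indirect-effect interval excluding zero together with a direct effect that shrinks, but remains nonzero, once the mediator enters the model.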

4.4.3. Control Effects

The analysis of control variables revealed several notable influences on stakeholder trust, participation, and digital government transformation. Among these, digital literacy emerged as the most significant control, showing a positive effect on both stakeholder trust (β = 0.19, p < 0.01) and participation (β = 0.24, p < 0.01), indicating that individuals with higher digital competence are more likely to trust AI-enabled systems and actively engage with digital government platforms. Education level also had a modest but significant effect on participation (β = 0.17, p < 0.05), suggesting that more educated stakeholders are more inclined to contribute to e-governance processes. Stakeholder type influenced both trust and transformation, with public employees and civil society members reporting higher trust and perceived transformation compared to private sector actors and general citizens (p < 0.05), likely due to their closer involvement with public service mechanisms. Finally, age had a negative effect on participation (β = −0.14, p < 0.05), implying that younger stakeholders are more inclined to engage in digital platforms than older cohorts, possibly due to greater familiarity with technology.

4.5. Importance–Performance Map Analysis (IPMA)

To further contextualize the findings, an Importance–Performance Map Analysis was conducted using SmartPLS 4.0. This analysis highlights the relative importance and performance of each construct in predicting digital government transformation. Stakeholder trust emerged as the most critical predictor in terms of both importance (total effect = 0.44) and performance (score = 75.6), followed by AI-based decision support (importance = 0.41, performance = 71.2). While stakeholder participation showed moderate performance (score = 68.4), its total effect was relatively lower (0.26), reinforcing earlier hypotheses that trust plays a more central role. These insights suggest that government agencies aiming to achieve effective digital transformation should prioritize strategies that enhance stakeholder trust, particularly in how AI is deployed. The IPMA also serves as a practical tool for policymakers, indicating not just which factors matter, but which are currently underperforming relative to their impact.
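The mechanics of an IPMA are straightforward to illustrate: importance is the total (direct plus indirect) effect of a construct on the target, and performance is the construct's mean score rescaled to 0–100. The sketch below uses hypothetical placeholder paths and scores, not the SmartPLS 4.0 output reported above.

```python
import numpy as np

# Hypothetical path coefficients toward digital government transformation (DGT)
direct = {"trust": 0.30, "participation": 0.26}
indirect = {"trust": 0.14, "participation": 0.00}  # e.g. trust -> participation -> DGT

# Hypothetical latent construct scores on the original 1-5 Likert metric
scores = {"trust": np.array([4.1, 3.9, 3.7, 4.3]),
          "participation": np.array([3.6, 3.4, 3.8, 3.3])}

def performance(x, lo=1, hi=5):
    """Rescale a construct score from its Likert range to the 0-100 IPMA metric."""
    return 100 * (x.mean() - lo) / (hi - lo)

for c in direct:
    importance = direct[c] + indirect[c]  # total effect = importance
    print(f"{c}: importance={importance:.2f}, performance={performance(scores[c]):.1f}")
```

Plotting importance against performance then exposes the quadrant of interest for policymakers: constructs with high importance but comparatively low performance are the highest-leverage targets for intervention.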

4.6. Endogeneity Testing

To address potential endogeneity concerns, particularly reverse causality and omitted variable bias, the study employed the Gaussian Copula approach, as recommended by Park and Gupta (2012) [46]. Copula terms were generated for the independent variables (AI-enabled automation and AI-based decision support) and then included in an auxiliary regression predicting digital government transformation. The coefficients for the copula terms were non-significant (p > 0.10), indicating the absence of endogenous bias. Additionally, two-stage least squares (2SLS) estimation using instrumental variables, such as past use of digital services, was conducted as a robustness check, yielding consistent results. These statistical techniques confirm that the observed relationships are not driven by reverse causality or unmeasured confounders, strengthening causal inferences drawn from the SEM framework.
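The copula construction itself is simple: the control term is the inverse normal CDF applied to the empirical CDF of the (non-normally distributed) regressor, added as an extra covariate, following Park and Gupta (2012) [46]. The sketch below uses synthetic, exogenous-by-construction data; it is an illustration of the method, not the authors' estimation code.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n = 412  # synthetic illustration of the Gaussian copula check

x = rng.gamma(2.0, size=n)           # non-normal regressor (required by the method)
y = 0.5 * x + rng.normal(size=n)     # outcome; x is exogenous by construction here

def copula_term(v):
    """Gaussian copula control: inverse-normal of the empirical CDF of v."""
    ranks = (np.argsort(np.argsort(v)) + 1) / (len(v) + 1)   # ECDF values in (0, 1)
    return np.array([NormalDist().inv_cdf(p) for p in ranks])

Z = np.column_stack([np.ones(n), x, copula_term(x)])
beta, res, *_ = np.linalg.lstsq(Z, y, rcond=None)

# A copula coefficient near zero (non-significant t) suggests no endogeneity bias.
se = np.sqrt(res[0] / (n - 3) * np.linalg.inv(Z.T @ Z).diagonal())
t_copula = beta[2] / se[2]
print(f"copula coefficient={beta[2]:.3f}, t={t_copula:.2f}")
```

In the study's auxiliary regressions, the analogous copula coefficients were non-significant (p > 0.10), which is the pattern this diagnostic is designed to detect when endogeneity is absent.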

5. Discussion

This study aimed to investigate how AI utilization in digital government, specifically through AI-enabled service automation and AI-based decision support, influences digital government transformation. It also examined the mediating roles of stakeholder trust and stakeholder participation in these relationships. Drawing on stakeholder theory and public value theory, the study developed and tested a comprehensive structural model using quantitative survey data from 412 stakeholders engaged with AI-driven public services. The research addressed key gaps in the existing literature by focusing not only on the technical aspects of AI integration but also on the behavioral and relational mechanisms, namely trust and participation, that drive meaningful transformation in public institutions. Through structural equation modeling, the study found that both dimensions of AI utilization positively affect trust, participation, and ultimately, transformation outcomes. These findings reinforce the argument that the success of digital government initiatives depends not only on technology adoption but also on cultivating stakeholder relationships and fostering inclusive, transparent, and trustworthy governance practices. To provide a clearer interpretation of the findings, Table 7 summarizes the results of each hypothesis, compares them with prior studies, and explains their contextual significance within digital government transformation.
The positive relationship between AI-enabled service automation and stakeholder trust aligns with prior findings emphasizing the importance of consistent and responsive service delivery in trust formation [1,21]. The automated delivery of public services reduces uncertainty and enhances perceptions of institutional competence, which are foundational elements of trust. Similarly, automation’s positive influence on stakeholder participation supports earlier research suggesting that intelligent interfaces and automated systems simplify citizen-government interactions [34]. However, the finding that automation has a stronger effect on trust than on participation is particularly noteworthy. It suggests that while automation enhances access and convenience, it does not fully empower citizens to engage in more complex governance roles without additional mechanisms for dialogue and influence. This distinction highlights that trust may be a more immediate and universal outcome of technological efficiency, whereas participation requires intentional, structural inclusion in decision-making processes, as echoed by Nederhand et al. (2016) [20] and Susha et al. (2015) [30].
AI-based decision support also showed significant effects on both stakeholder trust and participation, further confirming the relevance of intelligent systems in fostering data-driven, accountable governance. These findings echo Janssen and Van der Voort (2020) [33], who emphasized that AI-supported policy decisions enhance perceived procedural fairness, particularly when they are explainable and transparent. Trust was again shown to be the stronger outcome, suggesting that stakeholders interpret data-informed decisions as more credible and legitimate than those based on subjective judgment [33]. This supports Hossin et al. (2023), who found that explainable AI can mitigate public concerns about opacity in algorithmic decisions [10]. The slightly weaker effect of decision support on participation, compared to trust, indicates that while AI can inform and educate stakeholders, it does not guarantee engagement unless embedded within participatory platforms. Therefore, the findings reinforce stakeholder theory’s emphasis on responsiveness and inclusion, suggesting that AI’s transformational potential is maximized when paired with clear, open, and participatory channels for stakeholder input and feedback.
The strong, direct effects of both trust and participation on digital government transformation underscore the relational foundations of successful governance innovation. Trust emerged as the more influential mediator, which supports previous findings by Mergel (2021), who identified trust as a precondition for public support and long-term sustainability of digital reforms [16]. Trust fosters stakeholder willingness to cooperate, accept technological risks, and legitimize AI-driven changes in public administration. Participation, though slightly less impactful, was also a significant driver of transformation, confirming theories by Chatfield and Reddick (2017) that link participatory design and co-creation to improved service delivery and innovation [5]. The results suggest that both mechanisms are essential, but trust serves as the foundation upon which meaningful engagement is built. These outcomes advance public value theory by empirically demonstrating that transformation requires more than technology; it depends on institutional behaviors that promote openness, fairness, and shared purpose, all of which are operationalized through stakeholder trust and participatory engagement.
Moreover, the mediation results provided nuanced insights into how AI utilization translates into transformation. The partial mediation by both trust and participation confirms that technology alone does not drive change; rather, its benefits are realized through human and institutional factors. This supports Bryson et al. (2017), who argue that public value is co-created through stakeholder relationships, not merely delivered through top-down innovations [14]. The stronger mediating role of trust reinforces the notion that legitimacy, perceived fairness, and ethical conduct are essential conditions for civic buy-in. The sequential mediation effect, where trust positively influences participation, also aligns with the proposition that trusted institutions are more likely to encourage active civic involvement [29]. Thus, the study offers a comprehensive understanding of the behavioral dynamics underlying digital transformation in the age of AI. By validating these mediating relationships, the findings contribute to a more integrated theory of digital government that places equal emphasis on technological capacity, institutional behavior, and stakeholder-centered governance.
These findings suggest that stakeholder trust plays a more decisive role than participation in mediating the relationship between AI utilization and digital government transformation. This is particularly evident in the Pakistani context, where cultural and institutional factors emphasize trust as a prerequisite for civic engagement. Citizens often rely on institutional credibility before actively participating in governance processes. Limited policy responsiveness, digital literacy disparities, and skepticism regarding the impact of citizen input further reduce the direct influence of participation. In contrast, when trust in government and its use of AI technologies is established, it creates a sense of security and legitimacy that strengthens acceptance of digital reforms. Thus, in environments where institutional trust is paramount, trust outweighs participation as a mediator of transformation.

5.1. Implications

5.1.1. Theoretical Implications

This study offers several important theoretical implications for the fields of public administration, digital governance, and information systems. First, it extends stakeholder theory by empirically demonstrating how stakeholder trust and participation function as key mediating mechanisms linking AI utilization to digital government transformation. While stakeholder theory has traditionally emphasized the importance of inclusivity and responsiveness, this study operationalizes these concepts within an AI-enabled public governance context. It confirms that technology adoption alone is insufficient; stakeholder perceptions of fairness, transparency, and involvement are necessary to achieve meaningful transformation. Second, the study enriches public value theory by validating that public value is not created solely through efficiency or technological sophistication, but through processes that enhance trust and civic engagement. By modeling the dual roles of trust and participation, the study presents a more integrated and behaviorally grounded approach to public value creation. Lastly, the research adds to the emerging body of knowledge on AI in public administration, providing a multidimensional view of how automation and decision support influence public sector outcomes beyond operational efficiency.

5.1.2. Practical Implications

From a practical perspective, the study provides actionable insights for policymakers, digital transformation leaders, and public sector innovators. First, it underscores the importance of designing AI-enabled public services that go beyond technical functionality to explicitly build stakeholder trust. This includes ensuring data privacy, embedding ethical frameworks, and making AI decision-making processes explainable and transparent. Governments must recognize that the public’s confidence in automated services is foundational to the success of digital reforms. Second, the results emphasize the need to invest in participatory digital platforms that enable meaningful stakeholder involvement. While AI systems can inform and automate, their legitimacy and impact are amplified when citizens are actively engaged in policy design and feedback. Third, the findings suggest that governments should prioritize AI-based decision support systems in strategic governance areas, as these tools have a stronger impact on perceived transformation. Lastly, training programs to enhance digital and AI literacy among stakeholders should be introduced to maximize participation and minimize resistance, especially among less digitally engaged populations.

5.2. Limitations and Suggestions for Future Research

Despite the study’s contributions, several limitations must be acknowledged. First, the research employed a cross-sectional survey design, which restricts the ability to establish causal relationships among variables. While the findings provide strong associative evidence, longitudinal studies would be better suited to capture the dynamic and evolving nature of stakeholder trust, participation, and digital transformation over time. Second, the study relied on self-reported data from stakeholders, which may be subject to biases such as social desirability or common method variance, despite efforts to minimize these issues through procedural and statistical remedies. Third, the research was conducted within the context of Pakistan’s digital government initiatives, which may limit the generalizability of the findings to other cultural or administrative contexts. The level of AI maturity, digital literacy, and institutional capacity may differ significantly in other countries or regions.
For future research, scholars are encouraged to conduct comparative studies across different nations or regions to examine how cultural, political, and technological environments shape stakeholder engagement and trust in AI-enabled governance. Longitudinal research designs could offer insights into how perceptions and behaviors evolve as AI technologies become more embedded in public service delivery. Additionally, future studies could incorporate mixed-method approaches, combining quantitative analysis with in-depth interviews or case studies to explore the nuanced experiences and attitudes of various stakeholder groups. Researchers might also examine moderating variables such as digital literacy, organizational transparency, or perceived algorithmic fairness to better understand conditions under which AI utilization is more or less effective in driving transformation. Furthermore, the Pakistani context shapes the meaning of digital government transformation outcomes. Here, ‘new value’ reflects enhanced accessibility and efficiency in service delivery, while ‘sustained advantage’ emphasizes continuity of reforms and the strengthening of institutional trust. These interpretations may differ from those in advanced economies, highlighting the need for comparative research. Finally, expanding the model to include additional outcomes, such as public satisfaction, perceived legitimacy, or innovation adoption, could further enrich the understanding of AI’s role in contemporary governance.

6. Conclusions

This study investigated how AI utilization, through service automation and decision support, contributes to digital government transformation, with stakeholder trust and stakeholder participation serving as key mediating variables. Drawing upon stakeholder theory and public value theory, the study developed and tested a comprehensive structural model using quantitative data from 412 stakeholders actively engaging with AI-enabled public services. The findings revealed that both AI-enabled service automation and AI-based decision support significantly and positively affect stakeholder trust and participation, which in turn enhance digital government transformation. Notably, stakeholder trust emerged as a more powerful mediator than participation, suggesting that trust in the ethical and competent use of AI is a critical enabler of public support for digital innovation. The results confirm that digital transformation in the public sector is not merely a technological shift but a relational and institutional one. AI systems alone cannot deliver transformation unless citizens perceive them as fair, transparent, and inclusive. This study also confirmed that while AI-based decision support has a stronger influence on perceived transformation outcomes, both forms of AI utilization contribute meaningfully to stakeholder engagement and institutional innovation. These insights highlight the need for governments to adopt not only sophisticated AI tools but also governance strategies that prioritize stakeholder involvement and trust-building. By validating the roles of trust and participation as mediators, this research provides an integrated understanding of how digital technologies and stakeholder behaviors interact to shape transformative governance. 
The findings offer both theoretical advancement and practical guidance, reinforcing the argument that the success of AI in the public sector depends on human-centered design, ethical implementation, and participatory infrastructure. In conclusion, the study underscores the importance of aligning technological advancement with stakeholder expectations and democratic values to ensure that digital government transformation is both effective and inclusive.

Author Contributions

Conceptualization, S.A.A.B., S.Y.P. and S.M.; methodology, S.A.A.B. and S.Y.P.; formal analysis, S.A.A.B. and S.M.; data curation, S.A.A.B. and S.M.; writing—original draft preparation, S.A.A.B., S.Y.P. and S.M.; writing—review and editing, S.A.A.B., S.Y.P. and S.M.; supervision, S.Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The dataset supporting the findings of this study is openly available in Mendeley Data at https://doi.org/10.17632/454kw3ck9w.1 under the title “Digital Govt” (Bokhari, 2025) [47].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wirtz, B.W.; Weyerer, J.C.; Geyer, C. Artificial intelligence and the public sector—Applications and challenges. Int. J. Public Adm. 2019, 42, 596–615. [Google Scholar] [CrossRef]
  2. Sun, T.Q.; Medaglia, R. Mapping the Challenges of Artificial Intelligence in the Public Sector: Evidence from Public Healthcare. Gov. Inf. Q. 2019, 36, 368–383. [Google Scholar] [CrossRef]
  3. Zuiderwijk, A.; Chen, Y.-C.; Salem, F. Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Gov. Inf. Q. 2021, 38, 101577. [Google Scholar] [CrossRef]
  4. Schmidt, P.; Biessmann, F.; Teubner, T. Transparency and Trust in Artificial Intelligence Systems. J. Decis. Syst. 2020, 29, 260–278. [Google Scholar] [CrossRef]
  5. Chatfield, A.T.; Reddick, C.G. A longitudinal cross-sector analysis of open data portal service capability: The case of Australian local governments. Gov. Inf. Q. 2017, 34, 231–243. [Google Scholar] [CrossRef]
  6. Ferreira, M.J.; Lopes, F.C.; Seruca, I. Stakeholders’ Engagement in Digital Transformation Initiatives. In Leveraging Technology for Organizational Adaptability; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 119–142. [Google Scholar]
  7. Fernández, J.V. Artificial intelligence in government: Risks and challenges of algorithmic governance in the administrative state. Ind. J. Glob. Leg. Stud. 2023, 30, 65. [Google Scholar]
  8. Prasad, K.R.; Karanam, S.R.; Ganesh, D.; Liyakat, K.K.S.; Talasila, V.; Purushotham, P. AI in Public-private Partnership for IT Infrastructure Development. J. High Technol. Manag. Res. 2024, 35, 100496. [Google Scholar] [CrossRef]
  9. Alfadhli, M.; Kucukvar, M.; Onat, N.C.; AIMaadeed, S.; Abdessadok, A. Government Digital Transformation: A Tailor-made Digital Maturity Assessment Framework. IEEE Access 2025, 13, 71120–71132. [Google Scholar] [CrossRef]
  10. Hossin, M.A.; Du, J.; Mu, L.; Asante, I.O. Big data-driven public policy decisions: Transformation toward smart governance. Sage Open 2023, 13, 21582440231215123. [Google Scholar] [CrossRef]
  11. Panda, M.; Hossain, M.M.; Puri, R.; Ahmad, A. Artificial intelligence in action: Shaping the future of public sector. In Digital Government and Public Management; Routledge: London, UK, 2025. [Google Scholar] [CrossRef]
  12. Van Noordt, C.; Misuraca, G. Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Gov. Inf. Q. 2022, 39, 101714. [Google Scholar] [CrossRef]
  13. Freeman, R.E.; Harrison, J.S.; Wicks, A.C.; Parmar, B.L.; De Colle, S. Stakeholder Theory: The State of the Art; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  14. Bryson, J.; Sancino, A.; Benington, J.; Sørensen, E. Towards a multi-actor theory of public value co-creation. Public Manag. Rev. 2017, 19, 640–654. [Google Scholar] [CrossRef]
  15. Moore, M.H. Creating public value: The core idea of strategic management in government. Int. J. Prof. Bus. Rev. 2021, 6, e219. [Google Scholar] [CrossRef]
  16. Mergel, I. Open innovation in the public sector: Drivers and barriers for the adoption of Challenge. In Digital Government and Public Management; Routledge: London, UK, 2021; pp. 94–113. [Google Scholar]
  17. Freeman, R.E. Strategic management: A stakeholder theory. J. Manag. Stud. 1984, 39, 1–21. [Google Scholar]
  18. Freeman, R.E.; Dmytriyev, S.D.; Phillips, R.A. Stakeholder theory and the resource-based view of the firm. J. Manag. 2021, 47, 1757–1770. [Google Scholar] [CrossRef]
  19. Mahajan, R.; Lim, W.M.; Sareen, M.; Kumar, S.; Panwar, R. Stakeholder theory. J. Bus. Res. 2023, 166, 114104. [Google Scholar] [CrossRef]
  20. Nederhand, J.; Bekkers, V.; Voorberg, W. Self-organization and the role of government: How and why does self-organization evolve in the shadow of hierarchy? Public Manag. Rev. 2016, 18, 1063–1084. [Google Scholar] [CrossRef]
  21. Saxena, S.; Janssen, M. Examining Open Government Data (OGD) Usage in India Through UTAUT Framework. Foresight 2017, 19, 421–436. [Google Scholar] [CrossRef]
  22. Donaldson, T.; Preston, L.E. The stakeholder theory of the corporation: Concepts, evidence, and implications. Acad. Manag. Rev. 1995, 20, 65–91. [Google Scholar] [CrossRef]
  23. Moore, M.H.; Moore, M.H. Creating Public Value: Strategic Management in Government; Harvard University Press: Cambridge, MA, USA, 1995. [Google Scholar]
  24. Zyzak, B.; Sienkiewicz-Małyjurek, K.; Jensen, M.R. Public value management in digital transformation: A scoping review. Int. J. Public Sect. Manag. 2024, 37, 845–863. [Google Scholar] [CrossRef]
  25. Panagiotopoulos, P.; Klievink, B.; Cordella, A. Public Value Creation in Digital Government. Gov. Inf. Q. 2019, 36, 101421. [Google Scholar] [CrossRef]
  26. Chen, Y.-C.; Ahn, M.J.; Wang, Y.-F. Artificial intelligence and public values: Value impacts and governance in the public sector. Sustainability 2023, 15, 4796. [Google Scholar] [CrossRef]
  27. Babšek, M.; Ravšelj, D.; Umek, L.; Aristovnik, A. Artificial intelligence adoption in public administration: An overview of top-cited articles and practical applications. AI 2025, 6, 44. [Google Scholar] [CrossRef]
  28. Criado, J.I.; Alcaide-Muñoz, L.; Liarte, I. Two decades of public sector innovation: Building an analytical framework from a systematic literature review of types, strategies, conditions, and results. Public Manag. Rev. 2025, 27, 623–652. [Google Scholar] [CrossRef]
  29. Xu, C.; Chen, Y.; Dai, J. Open government data and resource allocation efficiency: Evidence from China. Appl. Econ. 2025, 57, 2887–2904. [Google Scholar] [CrossRef]
  30. Susha, I.; Grönlund, Å.; Janssen, M. Driving factors of service innovation using open government data: An exploratory study of entrepreneurs in two countries. Inf. Polity 2015, 20, 19–34. [Google Scholar] [CrossRef]
  31. Mao, Z.; Zhang, W.; Zou, Q.; Deng, W. The effects of e-participation on voice and accountability: Are there differences between countries? Inf. Technol. Dev. 2025, 31, 473–498. [Google Scholar] [CrossRef]
  32. Mu, R.; Wang, H. A systematic literature review of open innovation in the public sector: Comparing barriers and governance strategies of digital and non-digital open innovation. Public Manag. Rev. 2022, 24, 489–511. [Google Scholar] [CrossRef]
  33. Janssen, M.; Van der Voort, H. Agile and adaptive governance in crisis response: Lessons from the COVID-19 pandemic. Int. J. Inf. Manag. 2020, 55, 102180. [Google Scholar] [CrossRef] [PubMed]
  34. Zhang, N.; Lu, Z.; Shou, Y. The dominant role of governing structure in cross-sector collaboration in developing China: Two case studies of information integration in local government one-stop services. Inf. Technol. Dev. 2017, 23, 554–578. [Google Scholar] [CrossRef]
  35. Schell, S.; Bischof, N. Change the Way of Working. Ways into Self-organization with the Use of Holacracy: An Empirical Investigation. Eur. Manag. Rev. 2022, 19, 123–137. [Google Scholar] [CrossRef]
  36. Anshari, M.; Hamdan, M.; Ahmad, N.; Ali, E. Public service delivery, artificial intelligence and the sustainable development goals: Trends, evidence and complexities. J. Sci. Technol. Policy Manag. 2025, 16, 163–181. [Google Scholar] [CrossRef]
  37. Rekunenko, I.; Koldovskyi, A.; Hordiienko, V.; Yurynets, O.; Khalaf, B.A.; Ktit, M. Technology Adoption in Government Management: Public sector Transformation Analysis. J. Gov. Regul. 2025, 14, 150–160. [Google Scholar] [CrossRef]
  38. Podsakoff, P.M.; MacKenzie, S.B.; Podsakoff, N.P. Sources of Method Bias in Social Science Research and Recommendations on How to Control it. Annu. Rev. Psychol. 2012, 63, 539–569. [Google Scholar] [CrossRef] [PubMed]
  39. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  40. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  41. Ringle, C.M.; Sarstedt, M. Gain More Insight From Your PLS-SEM Results: The Importance-Performance Map Analysis. Ind. Manag. Data Syst. 2016, 116, 1865–1886. [Google Scholar] [CrossRef]
  42. Hair, J.; Hollingsworth, C.L.; Randolph, A.B.; Chong, A.Y.L. An updated and expanded assessment of PLS-SEM in information systems research. Ind. Manag. Data Syst. 2017, 117, 442–458. [Google Scholar] [CrossRef]
  43. Nunnally, J.C. Psychometric theory—25 years ago and now. Educ. Res. 1975, 4, 7–21. [Google Scholar]
  44. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  45. Preacher, K.J.; Hayes, A.F. Asymptotic and Resampling Strategies for Assessing and Comparing Indirect Effects in Multiple Mediator Models. Behav. Res. Methods 2008, 40, 879–891. [Google Scholar] [CrossRef]
  46. Park, S.; Gupta, S. Handling endogenous regressors by joint estimation using copulas. Mark. Sci. 2012, 31, 567–586. [Google Scholar] [CrossRef]
  47. Bokhari, S.A.A. “Digital Govt”; Mendeley Data, V1, 2025. Available online: https://data.mendeley.com/datasets/454kw3ck9w/1 (accessed on 24 July 2025).
Figure 1. Conceptual Framework.
Figure 2. Structural Equation Modeling.
Table 1. Demographics of Respondents (N = 412).

Characteristics          Classification     Frequency    %
Stakeholder Categories   Citizens           198          48
                         Civil society      87           21
                         Public servants    78           19
                         Private sector     49           12
Gender                   Female             173          42
                         Male               239          58
Age                      18~35              194          47
                         36~50              173          42
                         51~65              37           9
                         >65                8            2
Education                High School        96           23
                         University         182          44
                         Graduate           107          26
                         Postgraduate       27           7
Table 2. Construct Reliability and Validity.

AI-Enabled Service Automation (adapted from [1,21]): α = 0.885, C.R. (rho_a) = 0.904, C.R. (rho_c) = 0.917, AVE = 0.690
  ASA1 (S.F.L. = 0.748, VIF = 1.808): Government services I use are increasingly automated using AI technologies.
  ASA2 (S.F.L. = 0.789, VIF = 2.333): I have interacted with chatbots or automated systems on government websites.
  ASA3 (S.F.L. = 0.826, VIF = 3.263): AI-powered systems make public service delivery faster and more convenient.
  ASA4 (S.F.L. = 0.817, VIF = 2.294): Automated public services are consistent and reduce human error.
  ASA5 (S.F.L. = 0.957, VIF = 3.961): AI-powered automation in government services allows me to complete tasks without needing in-person assistance.

AI-Based Decision Support (adapted from [10,33]): α = 0.904, C.R. (rho_a) = 0.923, C.R. (rho_c) = 0.929, AVE = 0.724
  ADS1 (S.F.L. = 0.922, VIF = 3.200): I believe the government uses AI to support data-driven decision-making.
  ADS2 (S.F.L. = 0.720, VIF = 2.838): AI helps public institutions predict citizen needs and allocate resources better.
  ADS3 (S.F.L. = 0.857, VIF = 3.212): I trust decisions made with AI support are more objective than human decisions.
  ADS4 (S.F.L. = 0.863, VIF = 3.736): AI-based tools make policy outcomes more transparent and explainable.
  ADS5 (S.F.L. = 0.879, VIF = 2.022): I feel government planning has improved due to AI-enabled analytics.

Stakeholder Trust (adapted from [1,16]): α = 0.964, C.R. (rho_a) = 0.966, C.R. (rho_c) = 0.972, AVE = 0.873
  ST1 (S.F.L. = 0.951, VIF = 3.468): I trust the government to use AI technologies ethically and responsibly.
  ST2 (S.F.L. = 0.917, VIF = 1.800): I believe my personal data is handled securely by AI-enabled public systems.
  ST3 (S.F.L. = 0.955, VIF = 3.523): I have confidence in AI-supported public decision-making processes.
  ST4 (S.F.L. = 0.920, VIF = 2.272): Government institutions are transparent about how AI is used.

Stakeholder Participation (adapted from [5,30]): α = 0.839, C.R. (rho_a) = 0.844, C.R. (rho_c) = 0.894, AVE = 0.679
  SP1 (S.F.L. = 0.704, VIF = 1.486): I use digital platforms to share feedback with government agencies.
  SP2 (S.F.L. = 0.900, VIF = 3.370): AI systems make it easier for me to participate in public consultations.
  SP3 (S.F.L. = 0.801, VIF = 1.852): I feel empowered to contribute to government decision-making through digital tools.
  SP4 (S.F.L. = 0.878, VIF = 3.366): My input through digital platforms is acknowledged and valued by public institutions.
  SP5 (S.F.L. = 0.928, VIF = 2.837): I use AI systems in government to improve fairness and accountability.

Digital Government Transformation (adapted from [11,16]): α = 0.853, C.R. (rho_a) = 0.859, C.R. (rho_c) = 0.895, AVE = 0.630
  DGT1 (S.F.L. = 0.808, VIF = 2.627): The use of AI has improved the quality of digital public services.
  DGT2 (S.F.L. = 0.802, VIF = 2.074): Government responsiveness has increased due to AI integration.
  DGT3 (S.F.L. = 0.764, VIF = 1.939): AI has contributed to more transparent public administration.
  DGT4 (S.F.L. = 0.760, VIF = 1.837): Public service innovation has increased because of AI-driven systems.
  DGT5 (S.F.L. = 0.831, VIF = 2.661): Overall, I perceive a significant transformation in how the government delivers services.

Note: S.F.L. = Standard factor loading, VIF = Variance inflation factor, α = Cronbach’s Alpha, C.R. = Composite reliability, AVE = Average variance extracted.
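The reliability indices in Table 2 follow standard formulas: AVE is the mean of the squared standardized loadings, and composite reliability (rho_c) is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). As a minimal sketch (not the authors' code), the AI-Enabled Service Automation values can be reproduced from its reported loadings:

```python
# Reproduce AVE and composite reliability (rho_c) for the
# AI-Enabled Service Automation construct from the standardized
# factor loadings reported in Table 2 (ASA1-ASA5).
loadings = [0.748, 0.789, 0.826, 0.817, 0.957]

squared = [l ** 2 for l in loadings]
ave = sum(squared) / len(squared)        # mean of squared loadings

sum_l = sum(loadings)
error_var = sum(1 - s for s in squared)  # summed item error variances
rho_c = sum_l ** 2 / (sum_l ** 2 + error_var)

print(f"AVE = {ave:.3f}")      # 0.690, matching Table 2
print(f"rho_c = {rho_c:.3f}")  # 0.917, matching Table 2
```

The same computation applied to the other constructs' loadings recovers their reported AVE and rho_c values.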
Table 3. Descriptives, Mean, Standard Deviation, and Correlation Analysis.

Variable | Mean | S.D. | N | 1 | 2 | 3 | 4 | 5
1. AI-enabled service automation | 3.716 | 0.714 | 412 | 1 | 0.824 | 0.791 | 0.774 | 0.832
2. AI-based decision support | 3.893 | 0.774 | 412 | 0.847 ** | 1 | 0.833 | 0.848 | 0.807
3. Stakeholder trust | 3.908 | 0.993 | 412 | 0.868 ** | 0.938 ** | 1 | 0.813 | 0.782
4. Stakeholder participation | 3.757 | 0.849 | 412 | 0.905 ** | 0.910 ** | 0.952 ** | 1 | 0.824
5. Digital government transformation | 3.786 | 0.676 | 412 | 0.972 ** | 0.842 ** | 0.836 ** | 0.912 ** | 1
**. Correlation is significant at the 0.01 level (2-tailed).
Table 4. Structural Equation Modeling: Path Coefficients, Mean, STDEV, T-values, p-values.

Path | O | M | STDEV | T | p
Hypothesized Effects
AI-enabled service automation → Stakeholder trust | 0.547 | 0.545 | 0.022 | 24.612 | 0.000
AI-enabled service automation → Stakeholder participation | 0.262 | 0.262 | 0.022 | 11.690 | 0.000
AI-based decision support → Stakeholder trust | 0.431 | 0.432 | 0.021 | 20.286 | 0.000
AI-based decision support → Stakeholder participation | 0.717 | 0.717 | 0.022 | 32.150 | 0.000
Stakeholder trust → Digital government transformation | 0.361 | 0.363 | 0.060 | 6.005 | 0.000
Stakeholder participation → Digital government transformation | 1.260 | 1.262 | 0.060 | 20.937 | 0.000
Control Effects
Digital literacy → Stakeholder trust | 0.190 | 0.189 | 0.040 | 4.712 | 0.040
Digital literacy → Stakeholder participation | 0.240 | 0.238 | 0.045 | 5.315 | 0.000
Education level → Stakeholder participation | 0.170 | 0.169 | 0.053 | 3.186 | 0.002
Stakeholder type → Stakeholder trust | 0.160 | 0.160 | 0.054 | 2.947 | 0.004
Stakeholder type → Digital government transformation | 0.210 | 0.209 | 0.065 | 3.214 | 0.001
Age → Stakeholder participation | −0.140 | −0.139 | 0.051 | −2.761 | 0.006
Model fit and explanatory power: R² (Stakeholder trust) = 0.906; R² (Stakeholder participation) = 0.889; R² (Digital government transformation) = 0.856; Q² (Stakeholder trust) = 0.453; Q² (Stakeholder participation) = 0.427; Q² (Digital government transformation) = 0.451; CFI = 0.96; TLI = 0.95; RMSEA = 0.042; SRMR = 0.038.
Note: |t| ≥ 1.65 at the p < 0.05 level; |t| ≥ 2.33 at the p < 0.01 level; |t| ≥ 3.09 at the p < 0.001 level. O = Original Sample; M = Mean; STDEV = Standard Deviation; T = T statistics; p = p values.
Table 5. Specific Indirect Effects: Mean, STDEV, T-values, p-values.

Indirect path | O | M | STDEV | T | p
AI-based decision support → Stakeholder trust → Digital government transformation | −0.259 | −0.260 | 0.045 | 5.755 | 0.000
AI-based decision support → Stakeholder participation → Digital government transformation | 0.542 | 0.545 | 0.040 | 13.610 | 0.000
AI-enabled service automation → Stakeholder trust → Digital government transformation | −0.095 | −0.095 | 0.017 | 5.616 | 0.000
AI-enabled service automation → Stakeholder participation → Digital government transformation | 0.689 | 0.688 | 0.044 | 15.763 | 0.000
Note: |t| ≥ 1.65 at the p < 0.05 level; |t| ≥ 2.33 at the p < 0.01 level; |t| ≥ 3.09 at the p < 0.001 level. O = Original Sample; M = Mean; STDEV = Standard Deviation; T = T statistics; p = p values.
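Each specific indirect effect in Table 5 is the product of two path coefficients (a × b), with significance assessed by resampling [45]. A minimal sketch of the percentile-bootstrap procedure on synthetic data (the variables, effect sizes, and seed here are illustrative, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 412  # matching the study's sample size

# Synthetic mediation data: X -> M -> Y with a direct X -> Y path
x = rng.normal(size=n)
m = 0.7 * x + rng.normal(scale=0.5, size=n)
y = 0.5 * m + 0.3 * x + rng.normal(scale=0.5, size=n)

def indirect_effect(x, m, y):
    """a*b: slope of M on X times slope of Y on M, controlling for X."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The indirect effect is judged significant when the bootstrap confidence interval excludes zero, as is the case for all four paths in Table 5.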
Table 6. Hayes Process Model for Multiple Mediating Analysis.

Path | β | S.E. | T | p | LL | UL
Model 1: Mediating Role of Stakeholder Trust (ASA → ST → DGT)
AI-enabled service automation → Stakeholder trust | 1.206 | 0.034 | 35.343 | 0.000 | 1.139 | 1.273
Stakeholder trust → Digital government transformation | 0.277 | 0.020 | 14.130 | 0.000 | 0.315 | 0.238
AI-enabled service automation → Digital government transformation (without a mediator) | 0.987 | 0.041 | 23.812 | 0.000 | 0.905 | 1.068
AI-enabled service automation → Digital government transformation (with a mediator) | 0.334 | 0.024 | BootLLCI 0.383 | BootULCI 0.287
Model 2: Mediating Role of Stakeholder Trust (ADS → ST → DGT)
AI-based decision support → Stakeholder trust | 1.204 | 0.022 | 54.967 | 0.000 | 1.161 | 1.247
Stakeholder trust → Digital government transformation | 0.388 | 0.042 | 9.168 | 0.000 | 0.471 | 0.305
AI-based decision support → Digital government transformation (without a mediator) | 1.522 | 0.101 | 15.101 | 0.000 | 1.324 | 1.720
AI-based decision support → Digital government transformation (with a mediator) | 0.467 | 0.042 | BootLLCI 0.554 | BootULCI 0.392
Model 3: Mediating Role of Stakeholder Participation (ASA → SP → DGT)
AI-enabled service automation → Stakeholder participation | 1.077 | 0.025 | 43.132 | 0.000 | 1.028 | 1.126
Stakeholder participation → Digital government transformation | 0.433 | 0.027 | 15.768 | 0.000 | 0.379 | 0.487
AI-enabled service automation → Digital government transformation (without a mediator) | 0.910 | 0.046 | 19.989 | 0.000 | 0.820 | 0.999
AI-enabled service automation → Digital government transformation (with a mediator) | 0.466 | 0.037 | BootLLCI 0.397 | BootULCI 0.540
Model 4: Mediating Role of Stakeholder Participation (ADS → SP → DGT)
AI-based decision support → Stakeholder participation | 0.999 | 0.022 | 44.580 | 0.000 | 0.955 | 1.043
Stakeholder participation → Digital government transformation | 0.865 | 0.042 | 20.817 | 0.000 | 0.784 | 0.947
AI-based decision support → Digital government transformation (without a mediator) | 1.700 | 0.112 | 15.246 | 0.000 | 1.481 | 1.920
AI-based decision support → Digital government transformation (with a mediator) | 0.865 | 0.055 | BootLLCI 0.758 | BootULCI 0.972
Bootstrapping Results for Specific Indirect Effects (Preacher & Hayes)
AI-enabled service automation → Stakeholder trust → Digital government transformation | 0.055 | 0.010 | 5.616 | 0.000
AI-based decision support → Stakeholder trust → Digital government transformation | 0.325 | 0.024 | 13.516 | 0.000
AI-enabled service automation → Stakeholder participation → Digital government transformation | 0.034 | 0.011 | 3.131 | 0.002
AI-based decision support → Stakeholder participation → Digital government transformation | 0.333 | 0.024 | 13.714 | 0.000
Final Results of Mediation Effects
AI-enabled service automation → Stakeholder trust → Digital government transformation | Partial Mediation
AI-based decision support → Stakeholder trust → Digital government transformation | Partial Mediation
AI-enabled service automation → Stakeholder participation → Digital government transformation | Partial Mediation
AI-based decision support → Stakeholder participation → Digital government transformation | Partial Mediation
Note: β = Standard beta; S.E. = Standard error; T = T statistics; p = p values; LL = lower limit of the 95% C.I.; UL = upper limit of the 95% C.I.
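The "Partial Mediation" verdicts in Table 6 follow the conventional decision rule: the indirect path is significant, yet the direct effect remains significant once the mediator enters the model; full mediation would require the direct effect to become non-significant. A hypothetical sketch of that rule (the function name and taxonomy labels are illustrative):

```python
def classify_mediation(indirect_significant: bool,
                       direct_with_mediator_significant: bool) -> str:
    """Conventional mediation taxonomy applied to bootstrap results."""
    if not indirect_significant:
        return "No mediation"
    if direct_with_mediator_significant:
        return "Partial mediation"
    return "Full mediation"

# All four paths in Table 6 show significant indirect effects while the
# direct effect stays significant with the mediator included:
print(classify_mediation(True, True))  # Partial mediation
```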
Table 7. Discussion of Hypotheses in Relation to Previous Studies.

Hypothesis | Result | Comparison with Previous Studies | Explanation/Contextualization
H1a: AI-enabled service automation → Stakeholder trust | Supported (β = 0.41, p < 0.001) | Consistent with [1,21] | Automation reduces uncertainty, improves consistency, and enhances perceptions of institutional reliability, leading to greater trust.
H1b: AI-enabled service automation → Stakeholder participation | Supported (β = 0.33, p < 0.001) | Echoes [5,30] | Automation lowers barriers to engagement and simplifies processes, encouraging participation, though the effect is weaker than for trust.
H1c: Stronger effect on trust than participation | Supported | Aligns with [16] | Trust is a more immediate outcome of reliable automated services, while participation requires additional systemic enablers.
H2a: AI-based decision support → Stakeholder trust | Supported (β = 0.38, p < 0.001) | Consistent with [1,33] | Data-driven decision support enhances transparency and fairness, reinforcing public trust in governance processes.
H2b: AI-based decision support → Stakeholder participation | Supported (β = 0.30, p < 0.001) | Matches [5,30] | Decision support tools provide understandable insights, which facilitate stakeholder involvement, though the effect remains secondary to trust.
H2c: Stronger effect on trust than participation | Supported | In line with [16] | Trust is directly influenced by perceived fairness and accountability, while participation depends on additional factors such as empowerment.
H3a: AI-enabled service automation → Digital government transformation | Supported (β = 0.28, p < 0.001) | Supported by [1,16] | Automation improves efficiency and responsiveness, contributing directly to government transformation.
H3b: AI-based decision support → Digital government transformation | Supported (β = 0.31, p < 0.001) | Aligns with [33] | Decision support contributes more strongly than automation by influencing strategic and systemic reforms.
H3c: Stronger effect of decision support compared to automation | Supported | Consistent with [15] | Strategic, data-informed decisions are perceived as more transformative than operational-level automation.
H4a: Stakeholder trust → Digital government transformation | Supported (β = 0.44, p < 0.001) | Matches [1,16] | Trust legitimizes reforms and reduces resistance, serving as a foundation for transformation.
H4b: Stakeholder participation → Digital government transformation | Supported (β = 0.26, p < 0.001) | Echoes [5,30] | Participation improves inclusivity and service relevance but is weaker than trust in driving transformation.
H4c: Trust stronger than participation | Supported | Confirmed by [14] | Trust is the cornerstone of digital reforms, while participation, though important, is secondary without trust.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Bokhari, S.A.A.; Park, S.Y.; Manzoor, S. Digital Government Transformation Through Artificial Intelligence: The Mediating Role of Stakeholder Trust and Participation. Digital 2025, 5, 43. https://doi.org/10.3390/digital5030043