Article

Towards Smart Public Administration: A TOE-Based Empirical Study of AI Chatbot Adoption in a Transitioning Government Context

by Mansur Samadovich Omonov 1,* and Yonghan Ahn 2,*
1 Department of Applied Artificial Intelligence, Hanyang University ERICA, Ansan 15588, Republic of Korea
2 Department of Architecture and Architectural Engineering, Hanyang University ERICA, Ansan 15588, Republic of Korea
* Authors to whom correspondence should be addressed.
Adm. Sci. 2025, 15(8), 324; https://doi.org/10.3390/admsci15080324
Submission received: 2 July 2025 / Revised: 5 August 2025 / Accepted: 13 August 2025 / Published: 16 August 2025
(This article belongs to the Special Issue Innovation Management of Organizations in the Digital Age)

Abstract

As governments pursue digital transformation to improve service delivery and administrative efficiency, AI chatbots have emerged as a promising innovation in smart public administration. However, their adoption remains limited, particularly in transitioning countries where institutional, organizational, and technological conditions are complex and evolving. This study empirically examines the key aspects, challenges, and strategic implications of AI chatbot adoption in the public administration of Uzbekistan, a transitioning government in Central Asia. The study offers a novel contribution by employing an extended technology–organization–environment (TOE) framework. Data were collected through a survey of 501 public employees, and partial least squares structural equation modeling was used to analyze the data. The results reveal that perceived usefulness, compatibility, organizational readiness, effective accountability, and ethical AI regulation are key enablers, while system complexity, traditional leadership, resistance to change, and concerns over data management and security pose major barriers. The findings contribute to the literature on effective innovation in public administration and provide practical insights for policymakers and public managers aiming to implement AI solutions effectively in complex governance settings.

1. Introduction

In recent years, the rise of artificial intelligence (AI) technologies has prompted a paradigm shift in the public sector, particularly in how governments engage with citizens, deliver services, and manage internal operations (Chong et al., 2021; Medaglia et al., 2021). Among these technologies, AI chatbots are emerging as strategic tools for enhancing administrative efficiency, improving responsiveness, and enabling real-time citizen interaction (Sousa & Rocha, 2019; Wirtz et al., 2021). Globally, AI integration in public administration is typically analyzed through a technology-centric lens, with a focus on infrastructure, innovation capability, and organizational readiness (Aoki, 2020; Sharma et al., 2022). However, such approaches frequently overlook the institutional, political, and environmental factors that shape how AI is adopted and governed in the public administration of transitioning governments and countries. Conditions such as a bureaucratic environment, traditional centralized administration, and low digital maturity hinder the efficiency of digital governance transformation. While advanced countries have made significant progress in integrating AI into public services, the implications, features, and challenges of AI adoption in developing transitional countries remain underexplored in empirical research (Gupta et al., 2020; Nicolescu & Tudorache, 2022). More recently, several studies have systematically and empirically examined the broader organizational and environmental dynamics associated with AI chatbot adoption in the public sector of developing countries (Chatterjee & Chaudhuri, 2022; Gulyamov et al., 2023; Kobilov et al., 2023). While these studies provide valuable insights, they neglect the broader institutional, organizational, environmental, and strategic factors that strongly influence AI adoption and decision-making in the public administration of a transitional centralized government.
Despite high-level efforts to modernize public administration with smart technologies in recent years, there is a lack of empirical research on the use of AI in public administration reforms, especially in transitional Central Asia and in Uzbekistan in particular (Kuldosheva, 2021). Indeed, despite the promising potential of these technologies (Henman, 2020), the adoption of AI chatbots in public administration within developing countries continues to face persistent challenges. These include limited technological infrastructure, gaps in digital literacy, privacy concerns, and organizational resistance—all of which often impede the effective integration of AI solutions in the public sector (Chatterjee & Chaudhuri, 2022; Folorunso et al., 2024; Henman, 2020; Radhakrishnan & Chattopadhyay, 2020; Zouridis et al., 2020). It is important to note that, while global adoption is steadily growing, the specific pathways, barriers, and enabling conditions for implementing AI chatbots in developing transitional countries remain underexplored, particularly from a multidimensional perspective. Compared to advanced countries, AI adoption in a developing country in transition such as Uzbekistan is still in its infancy (Aderibigbe et al., 2023; Edwards et al., 2024; Mannuru et al., 2023). The government of Uzbekistan launched the “Digital Uzbekistan 2030 Strategy” in 2020, followed by the “Strategy for the Development of AI Technologies until 2030” in 2024. These initiatives aim to accelerate digitalization and promote the adoption and effective use of AI technologies within public administration systems (Gulyamov et al., 2023; Kobilov et al., 2023). Nevertheless, empirical research on the real-world application of AI tools in public administration remains scarce, especially in developing transitional country-specific settings.
The purpose and direction of this research are timely and significant, as the use of AI chatbots in developing countries in transition is essential for achieving inclusive, effective, and sustainable smart public governance (Neumann et al., 2024; Nicolescu & Tudorache, 2022).
To address this gap, the present study empirically investigates the key aspects, challenges, and strategic implications of AI chatbot adoption in the public administration of Uzbekistan, a transitioning government in Central Asia, through the lens of an extended TOE framework. While the TOE model has been widely used to examine technology adoption in both private and public sectors, prior studies often exclude strategic governance-specific variables that are essential for understanding public sector dynamics (Dwivedi et al., 2021; Sharma et al., 2022; Vogl et al., 2020). This study therefore advances the TOE framework by incorporating key institutional and strategic factors (perceived usefulness, compatibility, organizational readiness, effective accountability, ethical AI regulation, system complexity, traditional top-down leadership, resistance to mindset change, and concerns over data management and security) that influence the intention to adopt AI chatbots in public administration settings. Furthermore, the country represents a compelling case of a transitioning governance system actively pursuing digital reforms while grappling with legacy bureaucratic structures. By focusing on this underrepresented context, the study provides empirical insights that challenge conventional assumptions based on developed-country models.
This study offers a novel contribution to the literature on smart public administration by applying an extended TOE framework to examine the adoption of AI chatbots in the public administration sector of a transitional developing country (Mergel et al., 2019; Zouridis et al., 2020). While most existing research on AI chatbot implementation has focused on technological factors or user acceptance models in developed-country contexts, this study incorporates organizational, environmental, and strategic dimensions that are specific to the governance structures of developing transitional countries. By leveraging empirical data from national digital transformation initiatives, the paper uncovers context-specific challenges—such as traditional leadership, centralized decision-making, limited institutional readiness, and resistance to change—that are largely underexplored in the current literature. This research not only broadens the geographical scope of AI adoption studies but also deepens theoretical understanding by integrating governance-related variables into the TOE framework. The study thus fills a gap in the literature by empirically investigating the strategic, institutional, and technological challenges of AI chatbot adoption in a transitioning public administration context. Beyond its theoretical contributions, the study offers actionable recommendations for policymakers and digital reform leaders: it assesses not only the technological and organizational enablers of AI chatbot adoption but also the critical barriers related to institutional trust, leadership rigidity, and data security concerns.
Finally, these findings hold significant value for smart governance in other emerging transitional countries seeking to design inclusive, ethical, and strategically aligned AI implementation frameworks. In sum, this study addresses three critical gaps in the current literature: (1) the lack of empirical research on AI adoption in transitional public administration contexts; (2) the limitations of traditional TOE models in capturing public sector complexities; and (3) the need for practical policy insights to support the responsible deployment of AI in digital government. These contributions position the paper within the growing body of knowledge on smart public administration, offering both academic and policy relevance.

2. Literature Review

AI chatbots are increasingly recognized as transformative tools in smart public administration, particularly in developing countries, aiming to enhance efficiency, transparency, and citizen engagement (Duan et al., 2019). Studies in developed contexts have shown that chatbots enhance efficiency and accessibility (Lindgren, 2021; Wirtz et al., 2019). However, empirical investigations in the public sector remain limited, especially in developing and transitioning countries, where infrastructural, cultural, and institutional barriers are substantial. There is no single classification of AI chatbot technologies in smart government, as scholars emphasize different perspectives—from technical innovation to governance reform. For this study, AI chatbots are considered within the broader lens of public administration innovation (Samoili et al., 2020). The first research stream highlights the functional benefits of AI chatbots in government, where technology plays a key role in driving digital transformation in the public sector (Henman, 2020). These technologies improve administrative efficiency, enable real-time communication, and enhance citizen satisfaction and trust (Dwivedi et al., 2021; Mergel et al., 2019). Existing studies have shown that AI applications such as chatbots support public employees by automating repetitive tasks and expanding service reach, especially in resource-constrained environments (Engin & Treleaven, 2019; Mergel, 2019).
A second research stream emphasizes the organizational dimensions of AI adoption. Scholars argue that successful integration of AI chatbots depends on internal organizational factors—such as leadership support, institutional readiness, and accountability—as well as external pressures from regulatory and political environments. The success of AI chatbots requires fundamental changes in government processes, employee performance, and leadership accountability (Mergel, 2019). Neumann et al. explored the adoption of AI applications in public organizations to enhance personalized services while also reducing administrative burdens (Neumann et al., 2024). Previous studies have shown that organizational form, institutional pressures, institutional elements, managerial support, and accountability significantly affect the effective use of AI technologies (Albu, 2023). In addition, a strategic approach, mindset change, interoperability, and improved employee digital skills are success factors for the effective integration of AI tools in digital government (Bannister & Connolly, 2020; Zuiderwijk et al., 2021).
A third stream examines socio-technical transformation, highlighting the interplay between technological innovation and the broader social context. Scholars argue that AI chatbot adoption is driven by both social and technological factors (Engin & Treleaven, 2019; Madan & Ashok, 2023; Zouridis et al., 2020). Other work has examined how the adaptability of government institutions is shaped not only by internal capacities but also by external forces such as technological change, legal frameworks, and public expectations (Mikhaylov et al., 2018). Research in this area suggests that both internal drivers (e.g., digital infrastructure) and external pressures (e.g., regulatory compliance, international standards) influence AI adoption routes (Taheri et al., 2020; Van Noordt & Misuraca, 2022).
These new systems are valued not only for cost savings and service speed, but also for their potential to support citizen-centric governance and improve public satisfaction (Madan & Ashok, 2023). Despite these benefits, most empirical studies on AI deployment in government are concentrated in high-income countries with advanced digital ecosystems and mature governance frameworks. This concentration has led to a skewed understanding of how AI functions in contexts marked by institutional fragility, bureaucratic rigidity, and constrained digital infrastructure. Transition economies, such as those in Central Asia, present distinct dynamics for AI integration—characterized by hybrid bureaucratic models, post-authoritarian legacies, and fast-tracked digital reform agendas. Yet countries like Uzbekistan remain largely underrepresented in global scholarship, despite their strategic efforts to digitize public services and modernize governance structures (Aoki, 2020). Since the launch of its e-government platform in 2010, Uzbekistan has adopted major national strategies such as Digital Uzbekistan 2030 and the AI Development Strategy to 2030, aimed at enhancing technological integration in public administration operations (Gulyamov et al., 2023; Kobilov et al., 2023; Kuldosheva, 2021). However, the successful implementation of AI chatbots depends not only on technical availability but also on broader organizational and environmental readiness (Shin et al., 2020). Despite increasing government efforts to implement AI technologies, the use of AI chatbots in government institutions remains limited.
Beyond this technical readiness, there are serious challenges related to ambiguity in organizational culture and regulation, a growing but poorly guided interest in AI tools, limited understanding of the key factors and issues that influence citizen adoption and use, and a significant lack of empirical research that comprehensively examines these dimensions in public administration, especially in developing countries (Gulyamov et al., 2024; Kuldosheva, 2021; Shin et al., 2020). Most existing research focuses on narrow technical aspects or is based on advanced economies (Moon et al., 2024). Therefore, according to the available evidence, this study is among the first empirical investigations to examine AI chatbot adoption in smart public administration within a transitioning government context, specifically through an extended TOE framework. While digital innovation in the public sector has received growing attention (Mergel et al., n.d.; Wirtz et al., 2019), and a number of studies have addressed AI use in government (Zuiderwijk et al., 2021), large-scale empirical research focused on chatbot-specific adoption in transitional public administrations remains limited. Recent systematic reviews (Sun & Medaglia, 2019) suggest that most studies on AI adoption in the public sector are concentrated in Western democracies. This study therefore contributes to filling both a geographical and a theoretical gap by offering new empirical insights from the understudied Central Asian region, particularly Uzbekistan.
This study adopts the technology–organization–environment (TOE) framework as the primary structural foundation to categorize determinants of AI chatbot adoption in public administration. TOE offers a comprehensive lens for analyzing technological innovation adoption at the organizational level by considering internal and external contextual factors. To reinforce the conceptual robustness of the model, complementary theoretical perspectives have been incorporated. Specifically, the technology acceptance model (TAM) (Davis, 1989) informs the inclusion of perceptual constructs such as perceived usefulness and system complexity (representing perceived ease of use). Despite TAM’s individual-level focus, its insights remain relevant for capturing user-oriented perceptions that influence organizational adoption decisions. Additionally, the diffusion of innovation (DOI) theory (Orr, 2003) supports the inclusion of constructs such as compatibility and perceived relative advantage, which reflect innovation characteristics crucial for assessing organizational readiness and contextual alignment in transitioning governments. By integrating TOE with TAM and DOI, this study develops a triangulated theoretical foundation that bridges macro-level organizational factors with micro-level behavioral attributes, thereby providing a comprehensive and context-sensitive model for AI adoption in public sector institutions. The limitations of each theory are acknowledged, and their combined application is justified to mitigate the conceptual gaps inherent in relying on a single framework alone.
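The structure of an extended TOE model of this kind can be illustrated with a simplified numerical sketch. The code below is not the paper's analysis: it uses synthetic data, invented path values, and ordinary least squares on standardized scores as a rough stand-in for PLS-SEM path estimation. The construct abbreviations (PU, COMP, CPLX, LEAD, RES, READY, ACC, ETH) and all coefficients are hypothetical, chosen only to mirror the enabler/barrier signs hypothesized in this study.

```python
# Illustrative sketch only: synthetic data plus standardized OLS as a crude
# stand-in for a PLS-SEM structural model. All path values are invented;
# construct names are shorthand for the study's TOE constructs.
import numpy as np

rng = np.random.default_rng(42)
n = 501  # matches the survey sample size reported in the paper

constructs = ["PU", "COMP", "CPLX", "LEAD", "RES", "READY", "ACC", "ETH"]
true_paths = {  # hypothesized signs: positive = enabler, negative = barrier
    "PU": 0.30, "COMP": 0.25, "CPLX": -0.20, "LEAD": -0.15,
    "RES": -0.15, "READY": 0.20, "ACC": 0.20, "ETH": 0.20,
}

X = rng.standard_normal((n, len(constructs)))      # construct scores
beta = np.array([true_paths[c] for c in constructs])
y = X @ beta + rng.normal(scale=0.5, size=n)       # adoption intention

# Standardize both sides and estimate path coefficients by least squares.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
est, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

for name, b in zip(constructs, est):
    print(f"{name:>5}: {b:+.2f}")
```

With a sample of this size, even this crude proxy recovers the hypothesized signs (enablers positive, barriers negative). A real analysis would instead measure each latent construct with multi-item survey scales and estimate the measurement and structural models jointly with PLS-SEM software.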

2.1. TOE Framework and Hypothesis Development

The technology–organization–environment (TOE) framework, introduced by Tornatzky and Fleischer, has been extensively applied in studies of technology adoption across sectors (see Figure 1 and Table 1). It categorizes adoption factors into three domains: (1) technological factors; (2) organizational factors; and (3) environmental factors (Arpaci et al., 2012). While TOE provides a solid foundation, its factors are often limited in their ability to capture public sector-specific drivers of innovation (Mannuru et al., 2023; Wirtz et al., 2019). In public administration, technology adoption is shaped by authority structures, accountability norms, and evolving public expectations. These distinctions necessitate a contextualized and expanded TOE model tailored to the modernization of public administration settings. Recent literature has begun to advocate enriched versions of TOE that account for public governance variables. A growing strand of knowledge recognizes that successful AI adoption in government hinges not only on technological infrastructure but also on strategic alignment between policy vision, institutional capabilities, and governance norms. While TOE has been widely applied in private sector innovation studies, its application to AI adoption in public administration settings is underexplored (Rana et al., 2017). To date, to the best of the authors’ knowledge, no empirical study has applied an extended TOE model to examine the adoption of AI chatbots in the public administration sector of a transitional developing country such as Uzbekistan.
This research contributes to the literature by introducing a governance-sensitive extension of the TOE model tailored for AI technologies in public administration; offering the first large-scale empirical study on AI adoption in Uzbekistan’s public sector; and providing practical, context-specific insights to inform the adoption of AI in public administration frameworks and digital transformation policies in transitional developing countries.

2.2. Technological Contexts

Technology-related factors have consistently been shown to influence adoption decisions. In this study, we focus on three core technological constructs: perceived usefulness, compatibility, and system complexity. Within the technological context of the TOE framework, perceived usefulness is a key driver influencing the organizational decision to adopt innovations such as AI chatbots.

2.2.1. Perceived Usefulness

Perceived usefulness (PU) refers to the degree to which a user believes that employing a specific technology will enhance performance or efficiency (Arpaci et al., 2012; Mikhaylov et al., 2018; Oliveira & Martins, 2011). In the public sector, AI chatbots are perceived to streamline service delivery and improve responsiveness to citizen inquiries (Wirtz et al., 2019). Governments are more likely to adopt technologies that demonstrate a clear and measurable impact on performance and citizen engagement (Ali et al., 2023). Moreover, usefulness often interacts with other technological constructs such as compatibility and system complexity, magnifying their influence on adoption readiness (Gangwar et al., 2015). When AI tools are seen as beneficial for increasing efficiency and productivity, users are more likely to support and adopt them (Davis, 1989). When users perceive that chatbots and similar AI tools facilitate better task management, decision-making, and citizen engagement, they are more inclined to form positive attitudes toward using these AI-based technologies (Gefen & Straub, 2000). Therefore, perceived usefulness in the TOE framework is not merely an individual-level belief but a strategic organizational consideration that can guide digital transformation in public services. We propose the following hypothesis:
Hypothesis 1 (H1). 
Perceived usefulness positively influences the intention to adopt AI chatbots in public administration.

2.2.2. Compatibility

Compatibility reflects how well a technology fits with existing values, workflows, and systems. Technologies that align with bureaucratic processes and digital systems are more likely to be accepted in public institutions (Rogers et al., 2005). In the context of AI chatbots, compatibility implies alignment with current digital government systems, operational processes, and the prevailing organizational culture (Oliveira & Martins, 2011). High compatibility often facilitates smoother integration by minimizing disruptions, reducing the learning curve, and lowering resistance among employees (Thong, 1999). This is especially vital in public institutions, where rigid bureaucratic structures and standardized procedures can present obstacles to technological innovation (P. Chen et al., 2021). When AI chatbots enhance existing services such as citizen-facing e-portals or internal digital workflows, they are more likely to be accepted by both public servants and end-users (Ali et al., 2023). Conversely, low compatibility may lead to confusion, duplication of effort, or even rejection, particularly in resource-constrained settings where the capacity to manage change is limited (Bwalya & Mutula, 2015). Therefore, ensuring that AI systems are tailored to existing institutional contexts not only improves functional alignment but also increases perceived usefulness and reduces perceived complexity (Venkatesh et al., 2003; Gangwar et al., 2015). Overall, compatibility plays a foundational role in shaping how AI chatbots are evaluated and integrated within digital government services.
Hypothesis 2 (H2). 
Compatibility positively influences the intention to adopt AI chatbots in public administration.

2.2.3. Complexity

Complexity refers to the degree to which a technological innovation is perceived as difficult to understand, implement, and use (Arpaci et al., 2012). Complex systems often face resistance, especially in low-digital-maturity organizations. Technologies that are perceived as complex often generate resistance among organizational staff due to increased training demands, uncertainty, and fear of workflow disruption (Lippert & Govindarajulu, 2006). This resistance may be heightened by rigid bureaucratic structures, limited ICT competencies, and the lack of agile implementation mechanisms. Complex AI systems, especially those involving natural language processing and adaptive machine learning, require not only technical expertise for configuration and integration but also a cultural shift in how public services are delivered (Wirtz et al., 2019). For instance, government employees may struggle with the interpretability of AI assistants’ decisions, fear being replaced, or find it difficult to align AI tools with existing service protocols (Oliveira & Martins, 2011). From the TOE perspective, such system complexity reduces the perceived ease of use, slows down integration efforts, and creates uncertainty about expected outcomes (Gangwar et al., 2015). In contrast to technologies that seamlessly plug into existing infrastructures, complex AI systems require significant organizational adaptation, which in turn raises perceived implementation costs and risks (Chatterjee & Chaudhuri, 2022). Therefore, high perceived system complexity negatively affects the intention to adopt AI chatbots in public administration services, especially when organizational digital maturity is low or when the benefits of AI adoption are not clearly communicated to stakeholders.
Hypothesis 3 (H3). 
System complexity negatively influences the intention to adopt AI chatbots in public administration.

2.3. Organizational Contexts

2.3.1. Traditional Leadership

Leadership plays a crucial role in public sector innovation by supporting reforms, securing resources, and reducing bureaucratic resistance. However, in many developing countries, leadership styles remain traditional and hierarchical, which can hinder rather than support innovation (Chuang & Shaw, 2005; Low et al., 2011). In such circumstances, top-down leadership can stifle open innovation, constrain bottom-up feedback, and limit the autonomy of public servants to experiment with AI tools (Chuang & Shaw, 2005; Trottier et al., 2008). In this context, leadership motivation reflects not only the strategic vision of top managers, but also the extent to which their style facilitates or constrains technological change (Alsuqayh et al., 2024). A lack of leadership flexibility, risk tolerance, or genuine commitment to AI integration can undermine a culture of innovation, delay resource allocation, or reduce employee motivation to engage with new tools (Silva, 2016). In centralized management systems, executive mandates can lead to symbolic or fragmented adoption, where AI chatbot implementations become superficial or performative rather than transformative (Dwivedi et al., 2021). Thus, traditional leadership approaches characterized by control, rigidity, and low empowerment can become significant barriers to the adoption of AI chatbots.
Hypothesis 4 (H4). 
Traditional top-down leadership negatively influences the intention to adopt AI chatbots in public administration.

2.3.2. Resistance to Mindset Change

Mindset change reflects the willingness and readiness of civil servants to embrace innovation and rethink traditional ways of working. Yet resistance to change is a persistent issue in public sector organizations (Arpaci et al., 2012). While mindset change is typically viewed as a facilitator of innovation, the reality in many public institutions is resistance to such change, rooted in entrenched bureaucratic cultures, risk aversion, and limited incentives for experimentation (Wirtz et al., 2019). Without a proactive attitude toward transformation, AI chatbot initiatives face skepticism or passive resistance from both decision-makers and frontline staff (Labadze et al., 2023; Peters, 2015). Additionally, citizens’ unfamiliarity or discomfort with AI tools can further discourage institutional change, especially in societies where digital literacy is low or public trust in government technology is weak (Murphy & Reeves, 2019). In these contexts, a lack of mindset readiness—rather than a positive orientation—becomes the dominant factor influencing technology adoption. Thus, organizational inertia and resistance to strategic mindset change hinder the successful integration of AI chatbots in public service delivery.
Hypothesis 5 (H5). 
Resistance to mindset change negatively influences the intention to adopt AI chatbots in public administration.

2.3.3. Organizational Readiness

Organizational readiness refers to the extent to which an institution possesses the necessary infrastructure, resources, human capital, and strategic orientation to adopt and implement new technologies (Arpaci et al., 2012). It includes the availability of resources, infrastructure, human capital, and support mechanisms for technology deployment (Oliveira & Martins, 2010). AI chatbot adoption requires not only technological infrastructure (e.g., servers, cloud computing, internet bandwidth) but also organizational competencies such as the availability of skilled IT and qualified personnel, digital literacy among civil servants, and an internal culture conducive to technological change (Zhu et al., 2006). For public sector organizations in developing countries, organizational readiness is particularly critical, as bureaucratic rigidity, fragmented data systems, and resource limitations can significantly delay or derail AI implementation efforts (Alraja et al., 2020). Moreover, organizational readiness also involves aligning internal processes and policies to accommodate AI integration, such as updating service protocols, ensuring inter-departmental data sharing, and establishing governance mechanisms for AI use (Venkatesh et al., 2012). Inadequate readiness can lead to implementation failures, misuse of AI tools, or underutilization of technological investments. Therefore, organizations that exhibit a high degree of readiness are more likely to adopt AI chatbots successfully and realize their intended benefits in terms of efficiency, responsiveness, and citizen satisfaction.
Hypothesis 6 (H6). 
Organizational readiness positively influences the intention to adopt AI chatbots in public administration.

2.4. Environmental Contexts

External environmental factors significantly shape technology adoption decisions in public sector settings. These include effective accountability mechanisms, ethical AI regulation, data management, and security.

2.4.1. Effective Accountability

Public sector institutions must ensure that AI adoption does not undermine transparency or citizen trust. Citizens and public administrators are more likely to support AI when accountability frameworks are in place, and clear accountability mechanisms can mitigate resistance and foster trust in AI systems (Arpaci et al., 2012). AI chatbots increase the automation of government operations, support the redesign of government processes, and free up staff resources, ultimately leading to higher productivity, enhanced citizen interaction, and more accountable public services (Dwivedi et al., 2021). In the context of AI chatbot adoption, accountability refers to the mechanisms through which public institutions ensure that AI systems operate in line with ethical standards, legal obligations, and performance expectations (Wirtz et al., 2019). This includes clearly defined roles and responsibilities, performance monitoring, and feedback loops that allow for public scrutiny and internal review (Bannister & Connolly, 2020). An accountable digital transformation strategy not only fosters citizen trust but also minimizes the risks associated with automation bias, algorithmic discrimination, and opaque decision-making processes (Zouridis et al., 2020). Additionally, by automating routine tasks, AI allows public officials to dedicate greater attention to complex, value-added activities, potentially elevating the overall quality of services provided to citizens. AI chatbot adoption must ensure that automated decisions are transparent and traceable, reinforcing trust in digital public services (Cameron, 2004; Hubbell, 2007). Effective accountability is thus a critical pillar that drives the successful integration of new technologies (Kulal et al., 2024), including AI chatbots (Cameron, 2004).
Higher levels of effective accountability contribute to more effective AI chatbot adoption by optimizing resource allocation, improving service delivery (Wirtz et al., 2021), and enabling a more responsive approach to evolving public demands (Zuiderwijk et al., 2021).
Hypothesis 7 (H7). 
Effective accountability positively influences the intention to adopt AI chatbots in public administration.

2.4.2. Ethical AI Regulation

Ethical concerns—such as fairness, transparency, and data protection—are particularly salient in public sector AI adoption. Institutions with clear ethical AI guidelines are more likely to implement such technologies responsibly. Ethical guidelines and governance frameworks are essential to ensuring that AI applications, including chatbots, are implemented transparently, equitably, and with accountability (Cowls et al., 2019; Jobin et al., 2019). AI chatbots in public administration services have the potential to significantly improve management processes, enhance decision-making, and support staff operations (Dasgupta & Wendler, 2019; Goralski & Tan, 2020). However, the successful and sustainable integration of AI chatbots requires adherence to ethical standards, regulatory frameworks, and privacy protections (Gordon, 2001). Responsible AI entails not only leveraging AI's technical capabilities, such as improving service delivery, but also ensuring that these technologies operate within boundaries that safeguard citizen rights and uphold democratic values (Androutsopoulou et al., 2019; Thierer et al., 2017). Without a strong ethical foundation, the risk of misuse or biased decision-making by AI systems can undermine citizen trust and delegitimize digital governance efforts (Bilginoğlu & Yozgat, 2023; Martin & Pear, 2019; Stahl et al., 2021). Establishing strong ethical and legal frameworks builds public trust, a key driver for widespread acceptance of AI in governance (Reis et al., 2019). In this context, ethical AI regulation not only ensures responsible usage but also plays a significant role in shaping public employees' and citizens' perceptions of how easy AI chatbots are to use. When regulations are fair and transparent, users view AI systems, particularly AI chatbots, as more accessible and trustworthy.
In this light, clearly defined and enforced ethical standards are essential for fostering an environment conducive to sustainable AI adoption in smart public administration services.
Hypothesis 8 (H8). 
Ethical AI regulations positively influence the intention to adopt AI chatbots in public administration.

2.4.3. Concerns over Data Management and Security

Data availability, quality, and security are critical for AI chatbot performance. However, weak data governance, concerns about data privacy, cybersecurity vulnerabilities, and weak institutional protections can be barriers to the adoption of AI chatbots in government (Brown & Baker, 2012; P. Chen et al., 2021). In many developing countries, public institutions lack comprehensive data governance frameworks, raising fears over data misuse, cyber threats, and non-compliance with legal standards (Bannister & Connolly, 2020; Bertino, 2016). This lack of trust is amplified when AI chatbots process sensitive information or interact with vulnerable populations. In such settings, even well-intentioned initiatives face strong institutional and public resistance, particularly if existing digital infrastructure is perceived as insecure or misaligned with ethical standards (Chatterjee & Chaudhuri, 2022; Mannuru et al., 2023; Zuiderwijk et al., 2021). These concerns are not merely technical—they reflect deeper issues of governance, accountability, and digital trust. Therefore, perceived data security risks, rather than strong data management practices, become the dominant factor influencing AI chatbot adoption.
Hypothesis 9 (H9). 
Concerns over data management and security negatively influence the intention to adopt AI chatbots in public administration.

3. Materials and Methods

Figure 2 presents the research outline used in this study to identify the determinants of AI chatbot adoption in digital government (Davis, 1989). The study begins by analyzing the issues surrounding AI chatbot adoption through a literature review, on the basis of which the theoretical foundation was selected. Based on the research problem, a questionnaire was prepared and presented to experts for review, and the data were then collected and analyzed.

3.1. Research Design and Measurement Scale

This study employs a quantitative, structured survey design to empirically examine the key factors, challenges, and strategic implications of AI chatbot adoption in smart public administration. The research is grounded in an extended TOE framework, enriched with governance-relevant variables such as traditional leadership, mindset change, accountability, ethical AI regulation, and data governance. The framework is tailored to capture the institutional and organizational complexities present in transitioning public sectors, with a specific focus on Uzbekistan, a Central Asian transition country undergoing rapid public administration reforms. The questionnaire was developed based on a comprehensive review of the existing literature on public administration and aligned with international frameworks such as the OECD AI Principles and UNESCO's Recommendation on the Ethics of AI (Gordon, 2001). Two government officials—experts in digitalization and artificial intelligence—reviewed the draft survey instrument; based on their feedback, three items were removed, resulting in a final set of 20 items, each measured on a seven-point Likert scale ranging from 1 ("strongly disagree") to 7 ("strongly agree") (Scheuren, 2004).
To ensure the validity and reliability of the constructs and to test the hypotheses, the study employed PLS-SEM, a robust technique suited to complex models involving multiple constructs and small to medium sample sizes. All constructs were measured using validated scales adapted from the TOE framework and tailored to the context of AI chatbot adoption in public administration. Each measure was drawn from well-established previous research (Ali et al., 2023; Arpaci et al., 2012; Oliveira & Martins, 2011; Sas et al., 2021; Scheuren, 2004), with minor adjustments in wording to suit the public administration and AI chatbot context. We employed a purposive sampling strategy, selecting public sector organizations involved in ongoing or planned AI chatbot initiatives and digital service delivery reforms. This ensured that participants possessed relevant knowledge and experience related to the topic. The questionnaire was formulated to measure the TOE factors listed in Appendix A (Table A1).
Regarding ethical considerations, the study involved an anonymous and voluntary online survey. No personal or sensitive data were collected, and the research posed minimal risk to participants. In accordance with national ethical guidelines and institutional policies, such survey-based research did not require formal approval from an Institutional Review Board. Participants were informed about the purpose of the study, and informed consent was obtained before data collection.
Participants were recruited using a purposive sampling strategy targeting public sector employees involved in digital transformation initiatives across various government agencies in Uzbekistan. Initial contact was made online through official channels, including institutional emails and professional networks. Inclusion criteria required participants to have at least one year of experience in public administration and familiarity with digital service tools or AI-assisted systems. A total of 501 valid responses were collected through the online survey from those who voluntarily agreed to participate after being informed of the study's objectives and confidentiality protocols. Demographic details of the respondents are shown in Table 2. By gender, 69.9% of respondents were male and 30.1% female. By age, 23.8% were 18–25 years old, 49.9% were 26–45 years old, and 26.3% were over 46. Regarding education, 30.1% of respondents held an undergraduate degree, 58.3% a graduate (master's) degree, and 11.6% a Ph.D. or other degree. In terms of work experience, 24.4% had 0–5 years, 47.3% had 6–10 years, and 28.3% had more than 11 years.

3.2. Data Collection and Sampling

Data were collected through a structured survey designed to assess participants' perceptions of AI chatbot adoption in Uzbekistan's public administration services (Gulyamov et al., 2023; Oliveira & Martins, 2011). The questionnaire was distributed by email, and the target population included employees, IT professionals, digital transformation officers, and mid-to-senior level managers working within various government agencies, ministries, and public service institutions in Uzbekistan that are engaged in digital service delivery and/or AI integration projects. The questionnaire included items measuring the TOE constructs, including perceived usefulness, compatibility, and system complexity; traditional top-down leadership, resistance to mindset change, and organizational readiness; and effective accountability mechanisms, ethical AI regulation, and concerns over data management and security, as well as demographic variables. Survey items were grounded in theoretical understandings from related studies (Dasgupta & Wendler, 2019). Of the 510 responses received, 501 usable responses were retained for the final data analysis. Participants were required to be government service users, professionals, or specialists with awareness of AI technologies.

3.3. Data Analysis Techniques

In this study, PLS-SEM was employed using SmartPLS 4 software to analyze the relationships between the variables and test the hypotheses (Hair et al., 2019). The analysis proceeded in two stages. (1) Measurement model assessment: reliability was evaluated with Cronbach's alpha and composite reliability (CR); convergent validity with the average variance extracted (AVE); and discriminant validity with the Fornell–Larcker criterion (Fornell & Larcker, 1981) and the cross-loadings. Following the established guidelines of Hair et al. (2019), a full collinearity assessment based on variance inflation factors (VIFs) was conducted first (Kock, 2015); Cronbach's alpha, CR, and AVE values (Fornell & Larcker, 1981) and the outer loadings of each item were then examined to assess internal consistency. (2) Structural model evaluation: path coefficients and their significance (bootstrapping with 5000 samples), R2 values to measure explained variance, and effect sizes (f2). This approach allows simultaneous examination of latent constructs and hypothesis testing with high precision and reliability, and it provides robust insights into the determinants shaping AI chatbot adoption in the specific context of Uzbekistan's digital government services.

4. Results

4.1. Collinearity Test Results

Before assessing the measurement model, a full collinearity assessment based on variance inflation factors (VIFs) was performed for each item (Kock, 2015). As a general rule, VIF values should be less than or equal to 10, and Table 3 shows that all constructs meet this standard, demonstrating no collinearity problem in the model. Under the stricter criterion, a model is considered free from common method bias when the full collinearity VIF values are equal to or less than 3.3; the VIF values in this study were also below 3.3, indicating that the model is free from such concerns.
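As an illustration only (using synthetic indicator data, not the SmartPLS routine), the full collinearity test can be sketched by regressing each indicator on all remaining indicators and computing VIF_j = 1/(1 − R_j²); values at or below 3.3 then indicate freedom from common method bias:

```python
import numpy as np

def vif(X):
    """Full collinearity check: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
    from regressing indicator j on the remaining indicators plus an intercept.
    X is an (n_respondents x k_indicators) matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = np.empty(k)
    for j in range(k):
        y = X[:, j]
        # Design matrix: intercept plus all indicators except column j.
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        # var(y)/var(resid) equals 1/(1 - R^2) when an intercept is included.
        vifs[j] = y.var() / resid.var()
    return vifs
```

Orthogonal indicators yield VIFs near 1, while a column that is nearly a linear combination of the others produces a VIF well above the 3.3 (or 10) cut-off.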

4.2. Measurement Model

PLS-SEM was used to confirm the consistency and accuracy of the constructs, examine the relationships between variables within the structural model, and test the hypotheses using SmartPLS 4 software. This step is important for diagnosing the health of the data before further analysis. Prior to analyzing the structural relationships, the measurement model's validity and reliability were assessed to ensure the robustness of the results. The algorithm was run for 500 iterations with 5000 bootstrap subsamples, using a two-tailed test at the 2.5 percent significance level (97.5 percent confidence interval), as suggested by Hair et al. (2019). In this study, we used two major tests, reliability and validity, for a comprehensive examination.
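The bootstrapping logic can be sketched as follows. This is a simplified percentile bootstrap on an ordinary regression slope standing in for a PLS path coefficient; SmartPLS resamples the full PLS estimation and also offers bias-corrected intervals, so the sketch is illustrative rather than a reimplementation, and the variables x and y below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_path(x, y, n_boot=5000):
    """Percentile bootstrap for a simple regression slope: resample
    respondents with replacement, re-estimate the slope, and read off the
    2.5th and 97.5th percentiles for a two-tailed test."""
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        xs, ys = x[idx], y[idx]
        slopes[b] = np.cov(xs, ys, ddof=1)[0, 1] / xs.var(ddof=1)
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    return slopes.mean(), slopes.std(ddof=1), (lo, hi)
```

A path is deemed significant at the 5 percent level (two-tailed) when the resulting interval excludes zero.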
(1) For item reliability, all outer loadings should be close to or greater than 0.7. As can be seen from Table 4, all items have loading values above this cut-off. For internal consistency, Cronbach's alpha (CA) and composite reliability (CR) were used; when the outer loadings are significant, cut-off values of 0.7, or 0.6 in exploratory research, are accepted (Ursachi et al., 2015). Accordingly, Table 4 shows that every construct has Cronbach's alpha and composite reliability values above the 0.6–0.7 threshold (Hair et al., 2019, 2012). This indicates that the reliability requirements are fully met and the data are suitable for further analysis.
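The two internal consistency statistics above follow standard formulas: Cronbach's alpha is computed from the item score matrix, and composite reliability from the standardized outer loadings. A minimal sketch (the inputs below are hypothetical, not the values in Table 4):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents x k) matrix of Likert scores for one construct.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def composite_reliability(loadings):
    """CR from standardized outer loadings lambda:
    (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum()
    return s ** 2 / (s ** 2 + (1.0 - lam ** 2).sum())
```

For instance, three items all loading at 0.8 give CR ≈ 0.84, comfortably above the 0.7 cut-off.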
(2) For item validity, the study assessed convergent and discriminant validity (Boomsma, 2003). Convergent validity was evaluated with the average variance extracted (AVE), which should exceed 0.5 for each construct. Table 4 shows that all constructs have AVE values above this cut-off threshold (Fornell & Larcker, 1981; Shi et al., 2019), confirming convergent validity. Discriminant validity was examined using the Fornell–Larcker criterion and the cross-loadings. In Table 5, the bolded diagonal values (the square roots of the AVEs) are greater than the corresponding inter-construct correlations in their rows and columns; discriminant validity is considered acceptable when the square root of each construct's AVE exceeds its correlations with all other constructs (Rönkkö & Cho, 2022). For the cross-loadings, each indicator should load more highly on its own construct than on any other. The results show that the indicators measuring each construct load most strongly on their intended construct, so the cross-loading criterion is satisfied in Table 6. Taken together, this measurement analysis offers solid evidence for the convergent and discriminant validity, and the reliability, of the model developed in this research.
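These two checks reduce to simple computations: the AVE is the mean squared standardized loading, and the Fornell–Larcker criterion compares each construct's √AVE against its correlations with the other constructs. A minimal sketch with hypothetical values (not those of Tables 4 and 5):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def fornell_larcker_ok(ave_values, construct_corr):
    """Discriminant validity holds when each construct's sqrt(AVE)
    (the bolded diagonal in a Fornell-Larcker table) exceeds its
    correlations with every other construct."""
    root = np.sqrt(np.asarray(ave_values, dtype=float))
    corr = np.abs(np.asarray(construct_corr, dtype=float))
    np.fill_diagonal(corr, 0.0)           # ignore the diagonal itself
    return bool((root > corr.max(axis=1)).all())
```

With AVEs of 0.60 and 0.70 the criterion passes for an inter-construct correlation of 0.5 (√0.60 ≈ 0.77 > 0.5) but fails for one of 0.9.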
As shown in Table 7, model fit was assessed using multiple global fit indices. The standardized root mean square residual (SRMR) was 0.085, indicating an acceptable fit below the conventional 0.10 threshold. The model is theoretically acceptable despite minor fit limitations. The normed fit index (NFI) value of 0.771, although slightly below the commonly referenced threshold of 0.80, still indicates a marginally acceptable fit, particularly given the complexity of the model and the exploratory nature of the study. Additionally, the d_ULS (1.506) and d_G (1.126) values fell within acceptable ranges, further supporting overall model adequacy. The chi-square statistic (4332.929) is provided for reference only, as it is not directly interpretable within the PLS-SEM framework due to its sensitivity to sample size and model complexity.
Multicollinearity was not a concern, as all full collinearity VIF values were below the threshold of 3.3. The R2 value for the intention to adopt AI chatbots (IAAC) was 0.73, indicating a substantial level of explained variance. Although exact f2 values could not be extracted due to software limitations, the path coefficients indicate large effect sizes for perceived usefulness and compatibility; medium effects for effective accountability, traditional leadership, resistance to mindset change, and data management concerns; and smaller contributions from ethical AI regulation and system complexity. Taken together, these global fit indices and structural parameters confirm that the model demonstrates a reasonable and theoretically acceptable fit for explanatory purposes in the context of this exploratory study.
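Although exact f² values were not extracted, they follow directly from the standard formula f² = (R²_included − R²_excluded) / (1 − R²_included), with Cohen's benchmarks of 0.02, 0.15, and 0.35 for small, medium, and large effects. In the sketch below, the R² of 0.73 is the value reported for IAAC, while the reduced R² of 0.63 is purely hypothetical:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's effect size for a structural path:
    f2 = (R2_included - R2_excluded) / (1 - R2_included).
    Benchmarks: 0.02 small, 0.15 medium, 0.35 large."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)
```

Under that hypothetical reduction, a predictor whose removal drops R² from 0.73 to 0.63 would have f² ≈ 0.37, a large effect by these benchmarks.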

4.3. Structural Model Assessment

The structural model explains the inter-construct relationships of the research model. Figure 3 presents the estimated structural model, showing the regression path coefficients (β) for the hypothesized relationships and the outer loadings of the indicator items, together with their significance levels. Based on this examination, Table 8 summarizes the path coefficients (β) and p-values for the hypothesis tests. According to the path analysis, five hypotheses (perceived usefulness, compatibility, organizational readiness, effective accountability, and ethical AI regulation) were supported as positive influences on adoption. Conversely, system complexity, traditional top-down leadership, resistance to mindset change, and concerns over data management and security negatively influenced the intention to adopt AI chatbots (IAAC) in public administration. Specifically:
H1—Perceived usefulness had a positive influence on IAAC (β = 0.427, t = 7.114, p < 0.001), suggesting that when public employees perceive chatbots as beneficial for improving efficiency and service delivery, they are more likely to support their implementation.
H2—Compatibility also showed a positive impact on IAAC (β = 0.482, t = 7.422, p < 0.001), indicating that alignment of chatbot technologies with existing organizational systems and workflows facilitates smoother adoption.
H3—System complexity was found to have a negative influence on IAAC (β = −0.155, t = 1.962, p < 0.05). This implies that perceived system difficulty or technical sophistication can hinder the willingness of employees to engage with chatbot technologies.
H4—Traditional top-down leadership demonstrated a negative influence on IAAC (β = −0.313, t = 4.417, p < 0.001). This finding reflects the fact that traditional top-down leadership styles or authoritarian decision-making approaches in public institutions resist new innovations, whether due to limited employee participation in decision-making or fear of job displacement by automation.
H5—Resistance to mindset change also negatively influenced IAAC (β = −0.272, t = 3.639, p < 0.001). This suggests that an organization's inability, or unwillingness, to adapt its strategic thinking can be a barrier to effective AI integration.
H6—Organizational readiness had a positive impact on IAAC (β = 0.292, t = 4.193, p < 0.001), highlighting the importance of internal preparedness, infrastructure, and digital capabilities in facilitating chatbot implementation.
H7—Effective accountability positively influenced IAAC (β = 0.421, t = 5.505, p < 0.001). This result underscores the role of transparent oversight and responsible governance in encouraging technology adoption in the public sector.
H8—Ethical AI regulation showed a positive influence on IAAC (β = 0.328, t = 3.049, p < 0.001), indicating that a clear ethical and regulatory framework enhances trust and supports broader organizational adoption.
H9—Concerns over data management and security had a negative impact on IAAC (β = −0.317, t = 3.942, p < 0.001), revealing that concerns over cybersecurity and data protection remain key barriers to adoption, especially in public institutions handling sensitive information.
The evaluation of each hypothesis was guided by both the statistical significance of the tested relationships and their theoretical relevance to the transitioning public administration context of Uzbekistan. The positively framed hypotheses received strong empirical support consistent with prior literature and contextual dynamics, such as the influence of perceived usefulness, compatibility, organizational readiness, and effective accountability. The hypothesized barriers (system complexity, traditional top-down leadership, resistance to mindset change, and concerns over data management and security) were likewise confirmed as significant negative influences, which may be attributed to contextual factors in the transitioning government setting. These findings highlight the nuanced role of the TOE and extended variables in shaping AI adoption in a transitioning government context. In summary, the results assessed the impact of the key enabling and barrier factors for the adoption of AI chatbots in the public sector. This confirms the scientific basis and robustness of the study and provides a solid foundation for future research and policy development on smart public administration and digital government in developing and transition countries.

4.4. Results of Direct, Indirect, and Total Effects

Table 9 presents the direct, indirect, and total effects estimated with PLS. The results confirm that compatibility had a significant positive direct effect on IAAC (β = 0.482, p < 0.001); system complexity had a significant negative direct effect (β = −0.155, p < 0.05); CDMS had a significant negative direct effect (β = −0.317, p < 0.001); EA had a significant positive direct effect (β = 0.421, p < 0.001); EAR had a significant positive direct effect (β = 0.328, p < 0.001); TL had a significant negative direct effect (β = −0.313, p < 0.001); OR had a significant positive direct effect (β = 0.292, p < 0.001); PU had a significant positive direct effect (β = 0.427, p < 0.001); and RMC had a significant negative direct effect (β = −0.272, p < 0.001). All variables showed significant direct effects, and no indirect effects on the intention to adopt AI chatbots in public administration were observed. These results suggest that developing countries in transition can achieve effective implementation by taking these factors into account when adopting AI chatbots in the integration of administrative services for smart public governance.

5. Discussion

5.1. Interpretation of Main Findings

This study provides an empirical examination of the key determinants, challenges, and strategic implications of AI chatbot adoption in the public administration of developing and transitioning countries such as Uzbekistan. In doing so, it makes a valuable contribution to the field of AI chatbot integration in public sector administrative services (Aderibigbe et al., 2023; Ali et al., 2023; Arpaci et al., 2012; Henman, 2020; Kulal et al., 2024; Neumann et al., 2024; Oliveira & Martins, 2011; Stahl et al., 2021; Zuiderwijk et al., 2021). The results align with those of Setayesh and Daryaei (2017), which highlight how good governance and innovation jointly contribute to improved economic outcomes, thereby underscoring the importance of governance capacity in enabling AI-driven public sector transformation in transitioning contexts like Uzbekistan. While this study focuses on Uzbekistan, similar institutional challenges, such as bureaucratic inertia, fragmented infrastructure, and inconsistent digitalization policy enforcement, are prevalent in many transitioning governments. These common barriers highlight the need for comparative cross-country research to better understand how such institutional factors influence AI adoption in public administration.
Previous scholars have demonstrated the value of AI-based technologies and their implementation, impacts, and efficiency across several components of the public sector, digital governance, and digital public services in developed and developing country contexts (Chong et al., 2021; Madan & Ashok, 2023). This study, in turn, reveals the significance of AI chatbot adoption in the smart public administration of a transitioning government context, along with the impacts and opportunities of AI adoption in this field. The findings supported all nine hypotheses proposed in this investigation. As existing works indicate (Adam et al., 2021; Aderibigbe et al., 2023), the adoption of AI technologies in digital governance and algorithmic government is important for supporting public servants and providing services to citizens. AI adoption has a significant influence on public administration services (Bannister & Connolly, 2020; Engin & Treleaven, 2019; Nicolescu & Tudorache, 2022). Other studies have examined the impact and opportunities of AI technologies in improving public services and the efficiency of public service delivery (Aderibigbe et al., 2023; Chatterjee & Chaudhuri, 2022), whereas our study investigated a specific AI tool in the context of a transitioning country, Uzbekistan. Our findings corroborate prior research employing the TOE framework in public sector digital innovation studies (Alateyah et al., 2013; Jais et al., 2024), particularly regarding the significance of technological readiness and organizational support. However, distinct from studies conducted in more developed or stable environments, our results emphasize the amplified role of perceived usefulness, compatibility, system complexity, organizational readiness, leadership commitment, strong accountability, ethical AI regulation, concerns over data management and security, and resistance to digital mindset change within a transitioning government context.
These findings reinforce the context-dependent nature of AI adoption drivers and suggest the necessity of adaptive, context-aware policy frameworks for emerging digital governments.
Second, the study integrated a TOE-based framework with the factors that influence users' acceptance and usage intentions of AI tools in public administration. The findings confirm that the majority of hypotheses were supported, indicating that the selected variables significantly affect adoption intentions. Remarkably, while factors such as perceived usefulness (H1), compatibility (H2), organizational readiness (H6), effective accountability (H7), and ethical AI regulation (H8) showed positive influences, system complexity (H3) had a negative impact, suggesting that the more complicated the chatbot system is perceived to be, the less likely public sector organizations are to adopt it. This aligns with earlier research on innovation diffusion and technology acceptance models, which consistently shows that high perceived system complexity can discourage adoption (Ali et al., 2023; Chong et al., 2021). Traditional top-down leadership (H4) and resistance to change (H5) also negatively affected AI chatbot adoption. These findings may appear counterintuitive at first glance; however, they might reflect a scenario where rigid leadership frameworks or poorly managed change initiatives create resistance, confusion, or fear among employees, ultimately reducing willingness to adopt new technologies (Rawassizadeh et al., 2019; Thierer et al., 2017). Lastly, concerns over data management and security (H9) also had a negative effect on adopting AI chatbots in the public administration sector. This indicates that heightened concerns about data protection and cybersecurity may deter public organizations from embracing AI chatbots, perhaps due to compliance pressure or a lack of clear regulations and guidelines. These results underscore the importance of not only promoting positive enablers but also mitigating the barriers that hinder AI integration in the public administration of transitioning developing countries.
Policymakers and technology leaders should address these negative influences through training, inclusive leadership, simplified system design, and stronger data governance frameworks. Moreover, the institutional features of Uzbekistan, particularly its transitional centralized bureaucratic structure and state-driven digital governance policy, significantly shape the adoption of AI chatbots. For instance, the emphasis on top-down digitalization strategies, such as the "Digital Uzbekistan—2030" initiative (Ministry for the Development of Information Technologies and Communications of the Republic of Uzbekistan, 2020), may explain the relatively high influence of leadership and strategic alignment variables in our findings. Furthermore, the centralization of public administration could reinforce the salience of organizational readiness and accountability structures, as local agencies often rely on directives from the central government when adopting new technologies. These contextual factors highlight the importance of tailoring the TOE framework to transitional governance environments like Uzbekistan.

5.2. Theoretical Implications

This study contributes to the academic literature on AI tool adoption in several important ways. It extends the TOE framework by incorporating external governance-related dimensions, including perceived usefulness, compatibility, system complexity, traditional leadership, resistance to change, organizational readiness, effective accountability, ethical AI regulation, and concerns over data management and security. The model's effectiveness in capturing the key determinants suggests that the foundational constructs of technology, organization, and environment remain relevant beyond developed settings, and that their environmental influences are especially salient in a transitioning government context. Unlike traditional models such as TAM and UTAUT, which are largely individual- or technology-focused, this framework captures the institutional, organizational, and strategic complexity of AI adoption in public administration (Bilginoğlu & Yozgat, 2023; Folorunso et al., 2024; Martin & Pear, 2019; Medaglia et al., 2021). Moreover, the study highlights the value of extending the TOE framework to better understand technology acceptance in complex public administration settings, broadening the model's applicability and theoretical richness in developing, transitioning government contexts. Finally, by applying the extended TOE framework to identify the determinants of user acceptance and usage intentions of AI tools, the study offers policymakers a basis for achieving positive outcomes when adopting AI chatbots in smart public administration services.
Theoretical contributions. This study contributes to the theoretical advancement of the TOE framework by incorporating modern public governance-specific constructs such as leadership commitment, strong accountability, mindset change, strategic alignment, and ethical AI regulation into the “organizational” and “environmental” dimensions. Unlike prior applications of the TOE model that predominantly focus on private sector or generic institutional contexts, this research tailors the framework to address the distinct features of transitional governments. It responds to recent calls for contextualizing technology adoption models in the public sector by accounting for administrative hierarchies, institutional inertia, and normative governance expectations. By doing so, it extends the TOE’s explanatory power and offers a more robust framework for analyzing AI integration in digitally transforming public administrations, especially in developing country contexts.

5.3. Practical Implications and Policy Recommendations

The results of this study are also important for digital government providers in assessing their readiness to implement AI chatbots. Practitioners aiming to foster AI chatbot adoption should focus on enhancing perceived ease of use by designing user-friendly interfaces, supporting mindset change, and providing targeted training. Emphasizing the usefulness of AI chatbots, for example by demonstrating improved organizational leadership approaches and data management privacy, can significantly strengthen users’ intentions to use AI tools. Adopting a strategic approach and promoting ethical AI regulation are crucial for building trust and addressing ethical concerns, which can further facilitate acceptance. Integrating technological, organizational, and ethical considerations into strategic planning is essential for successful AI integration into smart public administration in a transitioning government context, and these factors should be taken into account for the effective application of AI chatbots in public administration operations. In the social sciences more broadly, the findings confirm the importance of integrating new technological initiatives in both the public and private sectors, and they suggest that AI chatbot adoption in public administration services has theoretical and practical significance for other developing countries as well. Finally, all these factors are valuable considerations for the effective adoption and integration of AI tools towards smart public administration.
Policy Recommendations: Based on the study’s findings, the following recommendations are proposed to support the effective adoption and integration of AI chatbots in smart public administrative services in Uzbekistan and similar transitioning developing countries:
Strengthen leadership and strategic vision: Institutionalize leadership training to raise awareness of digital administration by developing dedicated programs that train public sector leaders on emerging technologies, change management, and digital innovation strategies. Foster strategic mindset change by encouraging cultural transformation through incentives, awareness campaigns, and recognition programs that reward innovation and risk-taking in public administration.
Build technical and organizational capacity: Invest in infrastructure and interoperability by expanding broadband access, modernizing legacy IT systems, and ensuring system interoperability. Enhance workforce digital competency by implementing continuous training programs for civil servants to improve digital literacy, user experience design, and chatbot management capabilities.
Promote ethical and accountable AI governance: Establish clear AI ethics and accountability frameworks by developing and enforcing national guidelines for ethical AI use in government, ensuring transparency, fairness, and human oversight. Ensure robust data governance by strengthening laws and regulations for data protection, consent, and cybersecurity, especially in sectors handling sensitive citizen data.
Foster regional and international cooperation: Leverage regional knowledge-sharing platforms by engaging in partnerships with regional organizations, international donors, and knowledge networks to exchange best practices and co-develop policy tools. Pilot and scale incrementally, starting with small-scale pilot programs in key service areas (e.g., social services, licensing) and scaling based on iterative learning and user feedback.
While the proposed policy recommendations offer a strategic roadmap for AI chatbot adoption, their implementation must be tailored to the contexts of resource-constrained transitioning countries. In transitioning governments, persistent limitations in funding, infrastructure, and human capital often constrain the implementation of large-scale reforms. A phased approach, such as deploying regulatory sandboxes, adapting audit protocols to institutional capacity, and introducing modular civil servant training programs, helps ensure that adoption efforts remain both feasible and sustainable. Aligning policy ambitions with existing practical capabilities is essential for fostering accountable, ethical, and practical AI integration in public administration.

5.4. Limitations and Directions for Future Research

This study has several limitations that should be acknowledged. First, reliance on self-reported survey data may introduce response biases, such as social desirability or overestimation of support for AI adoption. Second, the cross-sectional design limits the ability to capture changes in perceptions over time or to assess the long-term impacts of adoption. Third, the study focuses entirely on the case of Uzbekistan; while this provides valuable insights into a transitioning government context, its specific bureaucratic structure, digital maturity, and governance culture may differ from those of other transitioning states, where institutional trust, infrastructure limitations, or resistance to change may present distinct challenges. To strengthen robustness and generalizability, future research should replicate and extend this model in diverse country contexts, adopt mixed-methods or longitudinal designs, and examine additional contextual moderators such as digital literacy, institutional trust, and regulatory environments. Comparative studies across transitioning and emerging economies would also be instrumental in refining theoretical insights and expanding policy relevance. In addition, while this study applies PLS-SEM in an exploratory context appropriate for early-stage theory development, future work could enhance predictive power and generalizability by conducting robustness checks and incorporating comparative case studies from other transitioning nations; clearer sampling protocols and triangulation with qualitative data would further improve validity and contextual depth. Finally, given the study’s quantitative design, we acknowledge that the absence of qualitative data limits our ability to capture deeper organizational nuances.
Future studies could complement these findings through interviews or case studies that explore factors such as leadership resistance or institutional barriers in greater depth.

6. Conclusions

This study investigated the key determinants, challenges, and strategic implications of AI chatbot adoption in the smart public administration of a transitioning government context using an extended TOE framework. Based on original empirical data from Uzbekistan, a transitioning digital government with ambitious reform goals, the research provides new theoretical and practical insights into the complex interplay of technological readiness, organizational capability, and environmental enablers in driving AI chatbot adoption. The findings confirm that the majority of hypotheses were supported: five factors (perceived usefulness, compatibility, organizational readiness, effective accountability, and ethical AI regulation) positively influence the adoption of AI chatbots in public administration, while four factors (system complexity, traditional top-down leadership, resistance to mindset change, and concerns over data management and security) have a negative impact. These results offer new insights for fostering an innovation-friendly environment, building public trust, and ensuring the sustainable integration of AI solutions in transitioning governments. The extended TOE-governance model developed in this study demonstrates the value of this approach in capturing the multifaceted nature of technological adoption in the public sector. By contextualizing the TOE framework to the realities of a Central Asian transitioning country with a legacy of centralized governance, this study contributes a novel perspective to global discussions on smart public governance and AI-driven public service reform in a rapidly digitalizing era.
It fills an important empirical gap in Uzbekistan and Central Asia and offers a scalable model for studying the factors important for effective digital transformation in other context-specific developing and transition countries.
Second, this study makes significant contributions to the understanding of AI chatbot adoption in the public administration context of a single transitioning country. By empirically validating the applicability of the TOE framework, the results highlight the critical roles of technological, organizational, and environmental perspectives in shaping user acceptance and usage intentions. Furthermore, the study highlights the importance of addressing broader organizational and ethical factors, such as perceived usefulness, compatibility, system complexity, effective accountability, organizational readiness, traditional top-down leadership, resistance to mindset change, ethical AI regulation, and concerns over data management and security, that directly shape perceptions and facilitate effective adoption processes in smart public administration. Although indirect effects were not observed, this finding underscores the multifaceted nature of successful AI integration.
Third, the findings carry broader relevance for other developing countries seeking to implement AI chatbots in effective digital governance: many governments face similar institutional, infrastructural, and sociopolitical constraints, and the study offers a replicable and adaptable tool for identifying enablers and barriers in such settings. Developing countries can leapfrog traditional digital service development by combining visionary leadership, strong policy commitment, and responsible AI governance. In conclusion, the research highlights that in an era of rapid technological change, public administration systems must continuously evolve. To advance digital public services, public administration institutions should implement permanent monitoring procedures, integrate the identified key determinants into their AI chatbot adoption frameworks, ensure the ethical use of AI, and engage citizens to build trust in digital government. The study also emphasizes the importance of increasing the quality and quantity of professionals in this area and establishing continuous professional development programs for both public servants and citizens to enhance their knowledge of and skills in AI technologies. These insights have practical policy implications: public sector organizations should focus not only on technological investment but also on governance reform, organizational leadership approaches, mindset change, and digital capacity-building. These concepts are important and promising for efforts to use AI chatbots effectively in developing and transitioning governments worldwide.

Author Contributions

Conceptualization, M.S.O.; methodology, M.S.O.; software, M.S.O.; validation, M.S.O.; formal analysis, M.S.O.; investigation, M.S.O.; resources, M.S.O.; data curation, M.S.O.; writing—original draft preparation, M.S.O.; writing—review and editing, M.S.O. and Y.A.; visualization, M.S.O.; supervision, Y.A.; project administration, Y.A.; funding acquisition, Y.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C2007623).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset is available in open-access resources.

Acknowledgments

Sincere thanks to Quang Hoai Le, Kwon Gi Heon, and other esteemed experts for their valuable insights and constructive recommendations, which have significantly contributed to the development of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Measurement items.
Constructs and Items | References
Perceived Usefulness (PU) | (Arpaci et al., 2012; Gefen & Straub, 2000; Mikhaylov et al., 2018; Oliveira & Martins, 2010; Wirtz et al., 2021)
PU1: AI chatbots enhance the efficiency of government administrative services.
PU2: I believe AI-powered chatbots help my engagement.
Compatibility (Comp) | (P. Chen et al., 2021; Gangwar et al., 2015; Thong, 1999; Venkatesh et al., 2012)
Comp1: AI chatbots fit with current work processes and workflows.
Comp2: AI chatbots are compatible with the technologies currently used in my organization.
System Complexity (SCom) | (Chatterjee & Chaudhuri, 2022; Oliveira & Martins, 2011; Rogers et al., 2005; Wirtz et al., 2021)
SCom1: I believe that the AI chatbot system is too complex for our organization’s employees to use effectively.
SCom2: Using the AI chatbot requires specialized skills or training that most staff members do not have.
Traditional Top-Down Leadership (TL) | (Chuang & Shaw, 2005; Ifinedo, 2012; Low et al., 2011; McLeod, 2007; Peters, 2015; Silva, 2016)
TL1: In my organization, decisions about adopting AI tools are made only by senior management.
TL2: My organization’s leadership style is traditional with respect to new digital tools.
Resistance to Mindset Change (RMC) | (Bwalya & Mutula, 2015; Chong et al., 2021; Labadze et al., 2023; Murphy & Reeves, 2019; Rawassizadeh et al., 2019; Thierer et al., 2017)
RMC1: My organization’s employees are reluctant to change their traditional ways of working even when new technologies are introduced.
RMC2: There is a reluctance among employees and management to adopt AI technologies such as chatbots.
Organizational Readiness (OR) | (Chatterjee & Chaudhuri, 2022; Edwards et al., 2024; Madan & Ashok, 2023; Mergel et al., 2019; Venkatesh & Davis, 2000; Zhai et al., 2021; Zhu et al., 2006)
OR1: The organization has sufficient technical infrastructure to support AI chatbot implementation.
OR2: Skilled personnel are available to manage and maintain AI chatbot systems.
Effective Accountability (EA) | (Arpaci et al., 2012; Cameron, 2004; Dwivedi et al., 2021; Hubbell, 2007; Wirtz et al., 2019)
EA1: AI chatbots help improve compliance with rules and regulations.
EA2: The use of AI chatbots supports accountable decision-making by tracking actions and outcomes.
Ethical AI Regulation (EAR) | (Androutsopoulou et al., 2019; Gordon, 2001; Reis et al., 2019; Thierer et al., 2017; Wirtz et al., 2021)
EAR1: My organization follows national guidelines and international AI ethics standards (e.g., OECD, UNESCO) in the use of AI chatbots.
EAR2: Ethical concerns about AI chatbot adoption are actively addressed by top management.
Concerns over Data Management and Security (CDMS) | (Bertino, 2016; Chatterjee & Chaudhuri, 2022; P. Chen et al., 2021; Chong et al., 2021; Dwivedi et al., 2021; Mannuru et al., 2023; Medaglia et al., 2021; Wirtz et al., 2019)
CDMS1: I am always concerned that the use of AI chatbots in administrative services could compromise the security of sensitive data.
CDMS2: My organization does not yet have clear policies or safeguards in place to manage data processed by AI chatbots.

References

  1. Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427–445. [Google Scholar] [CrossRef]
  2. Aderibigbe, A. O., Ohenhen, P. E., Nwaobia, N. K., Gidiagba, J. O., & Ani, E. C. (2023). Artificial intelligence in developing countries: Bridging the gap between potential and implementation. Computer Science & IT Research Journal, 4(3), 185–199. [Google Scholar] [CrossRef]
  3. Alateyah, S., Crowder, R. M., & Wills, G. B. (2013). An exploratory study of proposed factors to adopt e-government services. International Journal of Advanced Computer Science and Applications, 4(11), 57–66. [Google Scholar] [CrossRef]
  4. Albu, D. (2023). Report “Progress on the sustainable development goals: The gender snapshot 2023” (UN Women, UN DESA, 2023). Drepturile Omului, 94. [Google Scholar]
  5. Ali, O., Abdelbaki, W., Shrestha, A., Elbasi, E., Alryalat, M. A. A., & Dwivedi, Y. K. (2023). A systematic literature review of artificial intelligence in the healthcare sector: Benefits, challenges, methodologies, and functionalities. Journal of Innovation & Knowledge, 8(1), 100333. [Google Scholar] [CrossRef]
  6. Alraja, M. N., Khan, S. F., Khashab, B., & Aldaas, R. (2020). Does Facebook commerce enhance SMEs performance? A structural equation analysis of Omani SMEs. Sage Open, 10(1), 2158244019900186. [Google Scholar] [CrossRef]
  7. Alsuqayh, N., Mirza, A., & Alhogail, A. (2024, December 2). A phishing website detection system based on hybrid feature engineering with SHAP explainable artificial intelligence technique. International Conference on Web Information Systems Engineering, Doha, Qatar. [Google Scholar]
  8. Androutsopoulou, A., Karacapilidis, N., Loukis, E., & Charalabidis, Y. (2019). Transforming the communication between citizens and government through AI-guided chatbots. Government Information Quarterly, 36(2), 358–367. [Google Scholar] [CrossRef]
  9. Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37(4), 101490. [Google Scholar] [CrossRef]
  10. Arpaci, I., Yardimci, Y. C., Ozkan, S., & Turetken, O. (2012). Organizational adoption of information technologies: A literature review. International Journal of Ebusiness and Egovernment Studies, 4(2), 37–50. [Google Scholar]
  11. Bannister, F., & Connolly, R. (2020). Administration by algorithm: A risk management framework. Information Polity, 25(4), 471–490. [Google Scholar] [CrossRef]
  12. Bertino, E. (2016, June 10–14). Data security and privacy: Concepts, approaches, and research directions. 2016 IEEE 40th Annual computer Software and Applications conference (cOMPSAc), Atlanta, GA, USA. [Google Scholar]
  13. Bilginoğlu, E., & Yozgat, U. (2023). The validity and reliability of the measure for digital leadership: Turkish form. In Multidimensional and strategic outlook in digital business transformation: Human resource and management recommendations for performance improvement (pp. 53–67). Springer. [Google Scholar]
  14. Boomsma, A. (2003). Structural equation modeling: Foundations and extensions (Advanced quantitative techniques in the social sciences series, vol. 10.) by David Kaplan. Structural Equation Modeling, 10(2), 323–331. [Google Scholar] [CrossRef]
  15. Brown, B. J., & Baker, S. (2012). Responsible citizens: Individuals, health, and policy under neoliberalism. Anthem Press. [Google Scholar]
  16. Bwalya, K. J., & Mutula, S. M. (2015). Digital solutions for contemporary democracy and government. IGI Global. [Google Scholar]
  17. Cameron, W. (2004). Public accountability: Effectiveness, equity, ethics. Australian Journal of Public Administration, 63(4), 59–67. [Google Scholar] [CrossRef]
  18. Chatterjee, S., & Chaudhuri, R. (2022). Adoption of artificial intelligence integrated customer relationship management in organizations for sustainability. In Business under crisis, volume III: Avenues for innovation, entrepreneurship and sustainability (pp. 137–156). Palgrave Macmillan. [Google Scholar]
  19. Chen, P., Jamet, C., Zhang, Z., He, Y., Mao, Z., Pan, D., Wang, T., Liu, D., & Yuan, D. (2021). Vertical distribution of subsurface phytoplankton layer in South China Sea using airborne lidar. Remote Sensing of Environment, 263, 112567. [Google Scholar] [CrossRef]
  20. Chong, T., Yu, T., Keeling, D. I., & de Ruyter, K. (2021). AI-chatbots on the services frontline addressing the challenges and opportunities of agency. Journal of Retailing and Consumer Services, 63, 102735. [Google Scholar] [CrossRef]
  21. Chuang, M.-L., & Shaw, W. H. (2005). A roadmap for e-business implementation. Engineering Management Journal, 17(2), 3–13. [Google Scholar] [CrossRef]
  22. Cowls, J., King, T., Taddeo, M., & Floridi, L. (2019). Designing AI for social good: Seven essential factors. SSRN. [Google Scholar] [CrossRef]
  23. Dasgupta, A., & Wendler, S. (2019). AI adoption strategies. Centre for Technology and Global Affairs, University of Oxford. Available online: https://www.politics.ox.ac.uk/sites/default/files/2022-03/201903-CTGA-Dasgupta%20A-Wendler%20S-aiadoptionstrategies.pdf (accessed on 18 September 2022).
  24. Davis, F. D. (1989). Technology acceptance model: TAM. Al-Suqri, MN, Al-Aufi, AS: Information Seeking Behavior and Technology Adoption, 205(219), 5. [Google Scholar]
  25. Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data–evolution, challenges and research agenda. International Journal of Information Management, 48, 63–71. [Google Scholar] [CrossRef]
  26. Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., & Eirug, A. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. [Google Scholar] [CrossRef]
  27. Edwards, D. B., Jr., Caravaca, A., Rappeport, A., & Sperduti, V. R. (2024). World Bank influence on policy formation in education: A systematic review of the literature. Review of Educational Research, 94(4), 584–622. [Google Scholar] [CrossRef]
  28. Engin, Z., & Treleaven, P. (2019). Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal, 62(3), 448–460. [Google Scholar] [CrossRef]
  29. Folorunso, A., Olanipekun, K., Adewumi, T., & Samuel, B. (2024). A policy framework on AI usage in developing countries and its impact. Global Journal of Engineering and Technology Advances, 21(01), 154–166. [Google Scholar] [CrossRef]
  30. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
  31. Gangwar, H., Date, H., & Ramaswamy, R. (2015). Developing a cloud-computing adoption framework. Global Business Review, 16(4), 632–651. [Google Scholar] [CrossRef]
  32. Gefen, D., & Straub, D. W. (2000). The relative importance of perceived ease of use in IS adoption: A study of e-commerce adoption. Journal of the Association for Information Systems, 1(1), 8. [Google Scholar] [CrossRef]
  33. Goralski, M. A., & Tan, T. K. (2020). Artificial intelligence and sustainable development. The International Journal of Management Education, 18(1), 100330. [Google Scholar] [CrossRef]
  34. Gordon, K. (2001). The OECD guidelines and other corporate responsibility instruments: A comparison. Organisation for Economic Co-operation and Development. [Google Scholar]
  35. Gulyamov, S., Akhmedov, A., Bazarov, S., Ubaydullaeva, A., Musaev, S., Rodionov, A., & Odilkhujaev, I. (2024, November 13–15). Using digital twins for modeling and testing cybersecurity scenarios in smart cities. 2024 6th International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA), Lipetsk, Russia. [Google Scholar]
  36. Gulyamov, S., Karieva, G., & Rasulova, M. (2023). Experience of development of digital technologies in Uzbekistan. E3S Web of Conferences, 389, 03040. [Google Scholar] [CrossRef]
  37. Gupta, A., Hathwar, D., & Vijayakumar, A. (2020). Introduction to AI chatbots. International Journal of Engineering Research and Technology, 9(7), 255–258. [Google Scholar]
  38. Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. [Google Scholar] [CrossRef]
  39. Hair, J. F., Sarstedt, M., Pieper, T. M., & Ringle, C. M. (2012). The use of partial least squares structural equation modeling in strategic management research: A review of past practices and recommendations for future applications. Long Range Planning, 45(5–6), 320–340. [Google Scholar] [CrossRef]
  40. Henman, P. (2020). Improving public services using artificial intelligence: Possibilities, pitfalls, governance. Asia Pacific Journal of Public Administration, 42(4), 209–221. [Google Scholar] [CrossRef]
  41. Hubbell, L. L. (2007). Quality, efficiency, and accountability: Definitions and applications. New Directions for Higher Education, 2007(140), 5–13. [Google Scholar] [CrossRef]
  42. Ifinedo, P. (2012). Drivers of e-government maturity in two developing regions: Focus on Latin America and Sub-Saharan Africa. JISTEM-Journal of Information Systems and Technology Management, 9, 5–22. [Google Scholar] [CrossRef]
  43. Jais, R., Ngah, A. H., Rahi, S., Rashid, A., Ahmad, S. Z., & Mokhlis, S. (2024). Chatbots adoption intention in public sector in Malaysia from the perspective of TOE framework. The moderated and mediation model. Journal of Science and Technology Policy Management. [Google Scholar] [CrossRef]
  44. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. [Google Scholar] [CrossRef]
  45. Kobilov, A., Bozorov, J., Rajabov, S. B., Abdulakhatov, M., & Sapaev, I. (2023). Development of the digital economy in the Republic of Uzbekistan. E3S Web of Conferences, 402, 08038. [Google Scholar] [CrossRef]
  46. Kock, N. (2015). Common method bias in PLS-SEM: A full collinearity assessment approach. International Journal of e-Collaboration (IJEC), 11(4), 1–10. [Google Scholar] [CrossRef]
  47. Kulal, A., Rahiman, H. U., Suvarna, H., Abhishek, N., & Dinesh, S. (2024). Enhancing public service delivery efficiency: Exploring the impact of AI. Journal of Open Innovation: Technology, Market, and Complexity, 10(3), 100329. [Google Scholar] [CrossRef]
  48. Kuldosheva, G. (2021). Challenges and opportunities of digital transformation in the public sector in transition economies: Examination of the case of Uzbekistan. Asian Development Bank Institute. [Google Scholar]
  49. Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20(1), 56. [Google Scholar] [CrossRef]
  50. Lindgren, S. (2021). Digital media and society. SAGE Publications Ltd. [Google Scholar]
  51. Lippert, S. K., & Govindarajulu, C. (2006). Technological, organizational, and environmental antecedents to web services adoption. Communications of the IIMA, 6(1), 14. [Google Scholar] [CrossRef]
  52. Low, C., Chen, Y., & Wu, M. (2011). Understanding the determinants of cloud computing adoption. Industrial Management & Data Systems, 111(7), 1006–1023. [Google Scholar]
  53. Madan, R., & Ashok, M. (2023). AI adoption and diffusion in public administration: A systematic literature review and future research agenda. Government Information Quarterly, 40(1), 101774. [Google Scholar] [CrossRef]
  54. Mannuru, N. R., Shahriar, S., Teel, Z. A., Wang, T., Lund, B. D., Tijani, S., Pohboon, C. O., Agbaji, D., Alhassan, J., & Galley, J. (2023). Artificial intelligence in developing countries: The impact of generative artificial intelligence (AI) technologies for development. Information Development, 41, 1036–1054. [Google Scholar] [CrossRef]
  55. Martin, G., & Pear, J. J. (2019). Behavior modification: What it is and how to do it. Routledge. [Google Scholar]
  56. McLeod, S. (2007). Maslow’s hierarchy of needs. Simply Psychology, 1, 1–18. [Google Scholar]
  57. Medaglia, R., Misuraca, G., & Aquaro, V. (2021, June 9–11). Digital government and the united nations’ sustainable development goals: Towards an analytical framework. 22nd Annual International Conference on Digital Government Research, Omaha, NE, USA. [Google Scholar]
  58. Mergel, I. (2019). Digital service teams in government. Government Information Quarterly, 36(4), 101389. [Google Scholar] [CrossRef]
  59. Mergel, I., Edelmann, N., & Haug, N. (2019). Defining digital transformation: Results from expert interviews. Government Information Quarterly, 36(4), 101385. [Google Scholar] [CrossRef]
  60. Mergel, I., Edelmann, N., & Haug, N. (n.d.). Outcomes of value co-creation and co-destruction in the digital transformation of public services. Digital Government: Research and Practice.
  61. Mikhaylov, S. J., Esteve, M., & Campion, A. (2018). Artificial intelligence for the public sector: Opportunities and challenges of cross-sector collaboration. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170357. [Google Scholar] [CrossRef]
  62. Ministry for the Development of Information Technologies and Communications of the Republic of Uzbekistan. (2020). Digital Uzbekistan—2030 strategy. Government of Uzbekistan. Available online: https://lex.uz/docs/7008256 (accessed on 5 August 2025).
  63. Moon, J., Choi, J. W., & Kim, K. H. (2024). Regional disparities in safe and clean environments in Uzbekistan: Analysis of 2021–2022 Uzbekistan multiple indicator cluster survey data. Sustainability, 16(4), 1580. [Google Scholar] [CrossRef]
  64. Murphy, M. C., & Reeves, S. L. (2019). Personal and organizational mindsets at work. Research in Organizational Behavior, 39, 100121. [Google Scholar] [CrossRef]
  65. Neumann, O., Guirguis, K., & Steiner, R. (2024). Exploring artificial intelligence adoption in public organizations: A comparative case study. Public Management Review, 26(1), 114–141. [Google Scholar] [CrossRef]
  66. Nicolescu, L., & Tudorache, M. T. (2022). Human-computer interaction in customer service: The experience with AI chatbots—A systematic literature review. Electronics, 11(10), 1579. [Google Scholar] [CrossRef]
  67. Oliveira, T., & Martins, M. F. (2010). Understanding e-business adoption across industries in European countries. Industrial Management & Data Systems, 110(9), 1337–1354. [Google Scholar]
  68. Oliveira, T., & Martins, M. F. (2011). Literature review of information technology adoption models at firm level. Electronic Journal of Information Systems Evaluation, 14(1), 110–121. [Google Scholar]
  69. Orr, G. (2003). Diffusion of innovations, by Everett Rogers (1995). Free Press. Available online: https://web.stanford.edu/class/symbsys205/Diffusion%20of%20Innovations (accessed on 21 January 2005).
  70. Peters, R. S. (2015). The concept of motivation. Routledge. [Google Scholar]
  71. Radhakrishnan, J., & Chattopadhyay, M. (2020, December 18–19). Determinants and barriers of artificial intelligence adoption—A literature review. International Working Conference on Transfer and Diffusion of IT. [Google Scholar]
  72. Rana, N. P., Dwivedi, Y. K., Lal, B., Williams, M. D., & Clement, M. (2017). Citizens’ adoption of an electronic government system: Towards a unified view. Information Systems Frontiers, 19, 549–568. [Google Scholar] [CrossRef]
  73. Rawassizadeh, R., Sen, T., Kim, S. J., Meurisch, C., Keshavarz, H., Mühlhäuser, M., & Pazzani, M. (2019). Manifestation of virtual assistants and robots into daily life: Vision and challenges. CCF Transactions on Pervasive Computing and Interaction, 1, 163–174. [Google Scholar] [CrossRef]
  74. Reis, J., Santo, P. E., & Melão, N. (2019). Artificial intelligence in government services: A systematic literature review. In New knowledge in information systems and technologies: Volume 1 (pp. 241–252). Springer. [Google Scholar]
  75. Rogers, E. M., Medina, U. E., Rivera, M. A., & Wiley, C. J. (2005). Complex adaptive systems and the diffusion of innovations. The Innovation Journal: The Public Sector Innovation Journal, 10(3), 1–26. [Google Scholar]
  76. Rönkkö, M., & Cho, E. (2022). An updated guideline for assessing discriminant validity. Organizational Research Methods, 25(1), 6–14. [Google Scholar] [CrossRef]
  77. Samoili, S., Lopez Cobo, M., Gomez Gutierrez, E., de Prato, G., Martinez-Plumed, F., & Delipetrev, B. (2020). AI WATCH. Defining Artificial Intelligence (No. 1). European Comission. [Google Scholar]
  78. Sas, M., Hardyns, W., Van Nunen, K., Reniers, G., & Ponnet, K. (2021). Measuring the security culture in organizations: A systematic overview of existing tools. Security Journal, 34, 340–357. [Google Scholar] [CrossRef]
  79. Scheuren, F. (2004). What is a survey? American Statistical Association. [Google Scholar]
  80. Setayesh, M. H., & Daryaei, A. A. (2017). Good governance, innovation, economic growth and the stock market turnover rate. The Journal of International Trade & Economic Development, 26(7), 829–850. [Google Scholar] [CrossRef]
  81. Sharma, A., Bahl, S., Bagha, A. K., Javaid, M., Shukla, D. K., & Haleem, A. (2022). Blockchain technology and its applications to combat COVID-19 pandemic. Research on Biomedical Engineering, 38, 173–180. [Google Scholar] [CrossRef]
  82. Shi, D., Lee, T., & Maydeu-Olivares, A. (2019). Understanding the model size effect on SEM fit indices. Educational and Psychological Measurement, 79(2), 310–334. [Google Scholar] [CrossRef]
  83. Shin, S.-C., Ho, J.-W., & Pak, V. Y. (2020, February 16–19). Digital transformation through e-Government innovation in Uzbekistan. 22nd International Conference on Advanced Communication Technology (ICACT), Phoenix Park, Republic of Korea. [Google Scholar]
  84. Silva, A. (2016). What is leadership? Journal of Business Studies Quarterly, 8(1), 1. [Google Scholar]
  85. Sousa, M. J., & Rocha, Á. (2019). Digital learning: Developing skills for digital transformation of organizations. Future Generation Computer Systems, 91, 327–334. [Google Scholar] [CrossRef]
  86. Stahl, B. C., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Shaelou, S. L., Patel, A., Ryan, M., & Wright, D. (2021). Artificial intelligence for human flourishing–Beyond principles for machine learning. Journal of Business Research, 124, 374–388. [Google Scholar] [CrossRef]
  87. Sun, T. Q., & Medaglia, R. (2019). Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Government Information Quarterly, 36(2), 368–383. [Google Scholar] [CrossRef]
  88. Taheri, S., Zaghloul, H., Chagoury, O., Elhadad, S., Ahmed, S. H., El Khatib, N., Abou Amona, R., El Nahas, K., Suleiman, N., & Alnaama, A. (2020). Effect of intensive lifestyle intervention on bodyweight and glycaemia in early type 2 diabetes (DIADEM-I): An open-label, parallel-group, randomised controlled trial. The Lancet Diabetes & Endocrinology, 8(6), 477–489. [Google Scholar]
  89. Thierer, A. D., Castillo O’Sullivan, A., & Russell, R. (2017). Artificial intelligence and public policy. Mercatus Research Paper. Mercatus Center at George Mason University. [Google Scholar]
  90. Thong, J. Y. (1999). An integrated model of information systems adoption in small businesses. Journal of Management Information Systems, 15(4), 187–214. [Google Scholar] [CrossRef]
  91. Trottier, T., Van Wart, M., & Wang, X. (2008). Examining the nature and significance of leadership in government organizations. Public Administration Review, 68(2), 319–333. [Google Scholar] [CrossRef]
  92. Ursachi, G., Horodnic, I. A., & Zait, A. (2015). How reliable are measurement scales? External factors with indirect influence on reliability estimators. Procedia Economics and Finance, 20, 679–686. [Google Scholar] [CrossRef]
  93. Van Noordt, C., & Misuraca, G. (2022). Artificial intelligence for the public sector: Results of landscaping the use of AI in government across the European Union. Government Information Quarterly, 39(3), 101714. [Google Scholar] [CrossRef]
  94. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. [Google Scholar] [CrossRef]
  95. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425–478. [Google Scholar] [CrossRef]
  96. Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36, 157–178. [Google Scholar] [CrossRef]
  97. Vogl, T. M., Seidelin, C., Ganesh, B., & Bright, J. (2020). Smart technology and the emergence of algorithmic bureaucracy: Artificial intelligence in UK local authorities. Public Administration Review, 80(6), 946–961. [Google Scholar] [CrossRef]
  98. Wirtz, B. W., Langer, P. F., & Fenner, C. (2021). Artificial intelligence in the public sector—A research agenda. International Journal of Public Administration, 44(13), 1103–1128. [Google Scholar] [CrossRef]
  99. Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—Applications and challenges. International Journal of Public Administration, 42(7), 596–615. [Google Scholar] [CrossRef]
  100. Zhai, F., Guan, Y., Zhu, B., Chen, S., & He, R. (2021). Intraparticle and interparticle transferable DNA walker supported by DNA micelles for rapid detection of microRNA. Analytical Chemistry, 93(36), 12346–12352. [Google Scholar] [CrossRef]
  101. Zhu, K., Dong, S., Xu, S. X., & Kraemer, K. L. (2006). Innovation diffusion in global contexts: Determinants of post-adoption digital transformation of European companies. European Journal of Information Systems, 15(6), 601–616. [Google Scholar] [CrossRef]
  102. Zouridis, S., van Eck, M., & Bovens, M. (2020). Automated discretion. In T. Evans, & P. Hupe (Eds.), Discretion and the quest for controlled freedom. Springer Nature. [Google Scholar]
  103. Zuiderwijk, A., Chen, Y.-C., & Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly, 38(3), 101577. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework of AI chatbot adoption.
Figure 2. Step-by-step research design.
Figure 3. Results of the structural model.
Table 1. TOE Model Constructs and Hypotheses.
TOE Dimension | Construct | Code | Hypothesis Statement
Technological | Perceived usefulness | H1 | Perceived usefulness positively influences the intention to adopt AI chatbots in public administration.
Technological | Compatibility | H2 | Compatibility positively influences the intention to adopt AI chatbots in public administration.
Technological | System complexity | H3 | System complexity negatively influences the intention to adopt AI chatbots in public administration.
Organizational | Traditional top-down leadership | H4 | Traditional top-down leadership negatively influences the intention to adopt AI chatbots in public administration.
Organizational | Resistance to mindset change | H5 | Resistance to mindset change negatively influences the intention to adopt AI chatbots in public administration.
Organizational | Organizational readiness | H6 | Organizational readiness positively influences the intention to adopt AI chatbots in public administration.
Environmental | Effective accountability | H7 | Effective accountability positively influences the intention to adopt AI chatbots in public administration.
Environmental | Ethical AI regulation | H8 | Ethical AI regulation positively influences the intention to adopt AI chatbots in public administration.
Environmental | Concerns over data management security | H9 | Concerns over data management security negatively influence the intention to adopt AI chatbots in public administration.
Table 2. Sample characteristics.
Demographic Variables | Frequency (n = 501) | Percentage (%)
Gender
Male | 350 | 69.90%
Female | 151 | 30.10%
Age
18–25 years | 123 | 23.80%
26–45 years | 240 | 49.90%
Over 46 years | 138 | 26.30%
Level of Education
Undergraduate | 153 | 30.10%
Graduate (master’s) | 290 | 58.30%
PhD and others | 58 | 11.60%
Years of Experience
0–5 | 124 | 24.40%
6–10 | 233 | 47.30%
Over 11 years | 144 | 28.30%
Table 3. Collinearity statistics (VIF).
Items | PU | Comp | SCom | TL | RMC | OR | EaA | EAR | CDMS | IAAC
VIF | 1.312 | 1.297 | 1.574 | 1.529 | 1.516 | 1.240 | 1.677 | 1.487 | 1.691 | 1.408
Table 4. Construct reliability and validity.
TOE Context | Construct | Factor Loadings | C.A | C.R | AVE
Technological Context | Perceived usefulness | PU1 = 0.805, PU2 = 0.910 | 0.655 | 0.849 | 0.738
Technological Context | Compatibility | Comp1 = 0.899, Comp2 = 0.814 | 0.647 | 0.848 | 0.736
Technological Context | System complexity | SCom1 = 0.886, SCom2 = 0.905 | 0.753 | 0.890 | 0.802
Organizational Context | Traditional top-down leadership | TL1 = 0.886, TL2 = 0.896 | 0.741 | 0.885 | 0.794
Organizational Context | Resistance to mindset change | RMC1 = 0.890, RMC2 = 0.890 | 0.737 | 0.884 | 0.792
Organizational Context | Organizational readiness | OR1 = 0.804, OR2 = 0.888 | 0.611 | 0.835 | 0.717
Environmental Context | Effective accountability | EA1 = 0.896, EA2 = 0.912 | 0.777 | 0.900 | 0.818
Environmental Context | Ethical AI regulation | EAR1 = 0.878, EAR2 = 0.895 | 0.728 | 0.880 | 0.786
Environmental Context | Concerns over data management and security | CDMS1 = 0.875, CDMS2 = 0.932 | 0.780 | 0.899 | 0.817
— | Intention to adopt AI chatbots | IAAC1 = 0.881, IAAC2 = 0.873 | 0.700 | 0.869 | 0.769
Cronbach’s alpha (C.A), composite reliability (C.R), average variance extracted (AVE).
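The C.R and AVE columns in Table 4 are consistent with the standard formulas for standardized indicator loadings. A minimal Python sketch (not the authors' code), using the perceived-usefulness loadings from the table:

```python
def composite_reliability(loadings):
    """C.R = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardized loadings."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

pu = [0.805, 0.910]  # PU1, PU2 loadings from Table 4
print(round(composite_reliability(pu), 3))       # 0.849, matching the C.R column
print(round(average_variance_extracted(pu), 3))  # 0.738, matching the AVE column
```

The same two formulas reproduce every C.R/AVE pair in the table from its two loadings.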
Table 5. Discriminant validity—Fornell–Larcker criterion.
Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
1. Compatibility | 0.858
2. SCom | 0.641 | 0.895
3. CDMS | 0.724 | 0.663 | 0.904
4. EA | 0.716 | 0.706 | 0.908 | 0.904
5. EAR | 0.666 | 0.869 | 0.733 | 0.792 | 0.887
6. IAAC | 0.772 | 0.619 | 0.657 | 0.711 | 0.629 | 0.877
7. TL | 0.792 | 0.719 | 0.759 | 0.779 | 0.858 | 0.616 | 0.891
8. OR | 0.692 | 0.735 | 0.832 | 0.766 | 0.716 | 0.685 | 0.737 | 0.847
9. PU | 0.756 | 0.724 | 0.706 | 0.749 | 0.743 | 0.775 | 0.732 | 0.723 | 0.859
10. RMC | 0.651 | 0.750 | 0.738 | 0.795 | 0.892 | 0.603 | 0.869 | 0.724 | 0.810 | 0.890
Note: CDMS—concerns over data management and security; EAR—ethical AI regulation; EA—effective accountability; TL—traditional top-down leadership; OR—organizational readiness; PU—perceived usefulness; RMC—resistance to mindset change; IAAC—intention to adopt AI chatbots. Diagonal entries are the square roots of each construct’s AVE.
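The Fornell–Larcker criterion behind Table 5 requires each diagonal entry (the square root of a construct's AVE) to strictly exceed every correlation involving that construct. A small illustrative check (my sketch, not part of the study), run on a three-construct subset of the table:

```python
def fornell_larcker_violations(lower):
    """lower: lower-triangular matrix where the diagonal holds sqrt(AVE)
    and off-diagonal cells hold inter-construct correlations.
    Returns (row, col) pairs where a correlation is not strictly below
    both diagonal entries it involves."""
    bad = []
    for i in range(len(lower)):
        for j in range(i):
            r = lower[i][j]
            if r >= lower[i][i] or r >= lower[j][j]:
                bad.append((i, j))
    return bad

# Subset of Table 5: Compatibility, CDMS, EA (sqrt-AVE on the diagonal)
subset = [
    [0.858],
    [0.724, 0.904],
    [0.716, 0.908, 0.904],
]
print(fornell_larcker_violations(subset))  # [(2, 1)]
```

On this subset the check flags the EA–CDMS correlation (0.908), which marginally exceeds EA's diagonal value (0.904) — a borderline pair worth inspecting with a complementary test such as HTMT.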
Table 6. Cross loading results.
Item | Compatibility | SCom | CDMS | EA | EAR | TL | OR | PU | RMC | IAAC
Comp1 | 0.899 | 0.476 | 0.564 | 0.577 | 0.512 | 0.523 | 0.533 | 0.640 | 0.485 | 0.745
Comp2 | 0.814 | 0.656 | 0.706 | 0.672 | 0.657 | 0.896 | 0.682 | 0.668 | 0.664 | 0.561
SCom1 | 0.559 | 0.886 | 0.527 | 0.571 | 0.650 | 0.643 | 0.678 | 0.613 | 0.642 | 0.530
SCom2 | 0.589 | 0.905 | 0.654 | 0.688 | 0.895 | 0.646 | 0.640 | 0.680 | 0.700 | 0.578
CDMS1 | 0.670 | 0.566 | 0.875 | 0.706 | 0.625 | 0.733 | 0.804 | 0.625 | 0.680 | 0.501
CDMS2 | 0.647 | 0.627 | 0.932 | 0.912 | 0.696 | 0.656 | 0.718 | 0.652 | 0.663 | 0.668
EaA1 | 0.648 | 0.651 | 0.702 | 0.896 | 0.738 | 0.757 | 0.665 | 0.705 | 0.781 | 0.616
EaA2 | 0.647 | 0.627 | 0.932 | 0.912 | 0.696 | 0.656 | 0.718 | 0.652 | 0.663 | 0.668
EAR1 | 0.593 | 0.626 | 0.646 | 0.717 | 0.878 | 0.886 | 0.630 | 0.636 | 0.890 | 0.537
EAR2 | 0.589 | 0.905 | 0.654 | 0.688 | 0.895 | 0.646 | 0.640 | 0.680 | 0.700 | 0.578
TL1 | 0.593 | 0.626 | 0.646 | 0.717 | 0.878 | 0.886 | 0.630 | 0.636 | 0.890 | 0.537
TL2 | 0.814 | 0.656 | 0.706 | 0.672 | 0.657 | 0.896 | 0.682 | 0.668 | 0.664 | 0.561
OR1 | 0.670 | 0.566 | 0.875 | 0.706 | 0.625 | 0.733 | 0.804 | 0.625 | 0.680 | 0.501
OR2 | 0.526 | 0.671 | 0.579 | 0.609 | 0.597 | 0.545 | 0.888 | 0.608 | 0.567 | 0.647
PU1 | 0.565 | 0.710 | 0.668 | 0.698 | 0.710 | 0.660 | 0.659 | 0.805 | 0.890 | 0.536
PU2 | 0.718 | 0.570 | 0.573 | 0.616 | 0.599 | 0.617 | 0.605 | 0.910 | 0.571 | 0.767
RMC1 | 0.565 | 0.710 | 0.668 | 0.698 | 0.710 | 0.660 | 0.659 | 0.805 | 0.890 | 0.536
RMC2 | 0.593 | 0.626 | 0.646 | 0.717 | 0.878 | 0.886 | 0.630 | 0.636 | 0.890 | 0.537
IAAC1 | 0.720 | 0.491 | 0.508 | 0.546 | 0.525 | 0.529 | 0.534 | 0.686 | 0.480 | 0.881
IAAC2 | 0.632 | 0.597 | 0.647 | 0.703 | 0.579 | 0.553 | 0.670 | 0.673 | 0.579 | 0.873
Note: each item’s loading on its own construct appears in the matching construct column (set in bold in the original).
Table 7. Summary of model fit indicators.
Fit Index | Value | Threshold | Interpretation
SRMR | 0.085 | <0.10 | Acceptable model fit
NFI | 0.771 | ≥0.80 | Slightly below threshold; marginal fit
d_ULS | 1.506 | – | Acceptable
d_G | 1.126 | – | Acceptable
Chi-square | 4332.9 | – | For reference only
Collinearity (VIF) | <3.3 for all constructs | <3.3 (Kock, 2015) | No multicollinearity
Note: The SRMR value (0.085) falls below the recommended 0.10 cutoff, indicating an acceptable level of model fit. NFI is slightly below 0.80, suggesting a marginal but tolerable fit; the model is therefore considered theoretically acceptable despite minor fit limitations. All VIF values are below 3.3, confirming that multicollinearity is not a concern.
Table 8. Structural model results—hypothesis testing.
Hypothesis | Relationship | β (Path Coefficient) | Standard Deviation | t-Value | p-Value | Supported
H1 | PU -> IAAC | 0.427 | 0.060 | 7.11 | 0.000 | Yes
H2 | Comp -> IAAC | 0.482 | 0.065 | 7.42 | 0.000 | Yes
H3 | SCom -> IAAC | −0.155 | 0.079 | 1.96 | 0.025 | No
H4 | TL -> IAAC | −0.313 | 0.071 | 4.41 | 0.000 | No
H5 | RMC -> IAAC | −0.272 | 0.075 | 3.63 | 0.000 | No
H6 | OR -> IAAC | 0.292 | 0.070 | 4.19 | 0.000 | Yes
H7 | EaA -> IAAC | 0.421 | 0.076 | 5.50 | 0.000 | Yes
H8 | EAR -> IAAC | 0.328 | 0.107 | 3.04 | 0.001 | Yes
H9 | CDMS -> IAAC | −0.317 | 0.080 | 3.94 | 0.000 | No
Notes: all path coefficients are significant at the p < 0.05 level; most at p < 0.001.
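The t- and p-values in Table 8 are consistent with the usual PLS-SEM bootstrap statistic t = |β| / SD and a one-tailed normal approximation (standard for directional hypotheses). A quick verification sketch for H3 (my code, assuming this convention):

```python
from math import erf, sqrt

def bootstrap_t_p(beta, sd):
    """t-statistic as reported by PLS-SEM bootstrapping (|path| / bootstrap SD)
    and a one-tailed p-value from the standard-normal approximation."""
    t = abs(beta) / sd
    p = 0.5 * (1 - erf(t / sqrt(2)))  # 1 - Phi(t), one-tailed
    return t, p

t, p = bootstrap_t_p(-0.155, 0.079)  # H3: SCom -> IAAC
print(round(t, 2), round(p, 3))      # 1.96 0.025, matching the table row
```

The other rows reproduce the same way; H3's p-value of 0.025 only matches under the one-tailed convention, which is appropriate here because every hypothesis specifies a direction.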
Table 9. Outcomes of estimation.
Path Coefficients | Direct Effects | Indirect Effects | Total Effects
Compatibility -> IAAC | 0.482 * | 0 | 0.482 *
SCom -> IAAC | −0.155 * | 0 | −0.155 *
CDMS -> IAAC | −0.317 * | 0 | −0.317 *
EA -> IAAC | 0.421 * | 0 | 0.421 *
EAR -> IAAC | 0.328 * | 0 | 0.328 *
TL -> IAAC | −0.313 * | 0 | −0.313 *
OR -> IAAC | 0.292 * | 0 | 0.292 *
PU -> IAAC | 0.427 * | 0 | 0.427 *
RMC -> IAAC | −0.272 * | 0 | −0.272 *
* Note: p < 0.001.
Omonov, M.S.; Ahn, Y. Towards Smart Public Administration: A TOE-Based Empirical Study of AI Chatbot Adoption in a Transitioning Government Context. Adm. Sci. 2025, 15, 324. https://doi.org/10.3390/admsci15080324
