Systematic Review

From E-Government to AI E-Government: A Systematic Review of Citizen Attitudes

Department of Management Science and Technology, University of Patras, 26334 Patras, Greece
* Author to whom correspondence should be addressed.
Informatics 2025, 12(3), 98; https://doi.org/10.3390/informatics12030098
Submission received: 5 July 2025 / Revised: 8 September 2025 / Accepted: 10 September 2025 / Published: 16 September 2025

Abstract

Governments increasingly integrate artificial intelligence (AI) into digital public services, and understanding how citizens perceive and respond to these technologies has become essential. This systematic review analyzes 30 empirical studies published from early January 2019 to mid-April 2025, following PRISMA guidelines, to map the current landscape of citizen attitudes toward AI-enabled e-government services. Guided by four research questions, the study examines: (1) the forms of AI implementation most commonly investigated, (2) the attitudinal variables used to assess user perception, (3) key factors influencing attitudes, and (4) concerns and challenges reported by users. The findings reveal that chatbots dominate current implementations, with behavioral intentions and satisfaction serving as the main outcome measures. Perceived usefulness, ease of use, trust, and perceived risk emerge as recurring determinants of positive attitudes. However, widespread concerns related to privacy and interface usability highlight persistent barriers. Overall, the review underscores the need for transparent, citizen-centered AI design and ethical safeguards to enhance acceptance and trust. It concludes that future research should address understudied applications, include vulnerable populations, and explore perceptions across diverse public sector domains.

1. Introduction

In recent years, digital technologies have changed the way governments communicate and provide services to citizens. E-government initiatives aim to improve the accessibility, efficiency, and transparency of public services. A key part of this development is the increasing use of artificial intelligence (AI) in the public sector. Governments are deploying AI tools such as virtual assistants and chatbots that guide citizens through online procedures, document processing systems for handling applications and licenses, fraud detection in tax and welfare services, predictive models for public health and city planning, and recommender systems that suggest services based on user needs, directing citizens to relevant forms, services, or regulations [1,2,3]. Collectively, these applications are transforming citizen–government interactions by enabling more responsive, data-driven, and cost-effective service delivery.
However, the success of these technologies depends not only on their technical capability but also on whether citizens are willing to use them. Research in the field of e-government has shown that digital services are more likely to succeed when citizens trust them and find them useful and easy to use [4,5,6]. More recently, studies confirm that citizen trust in AI-enabled government systems is influenced by ethical robustness and context-based trust transfer mechanisms [7,8]. The Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) both highlight the importance of these factors [9]. Other studies also stress that the clarity of information and the transparency of how systems work are important for building trust and encouraging citizens to adopt these services [10]. In the case of AI-based tools, this is even more important. Users frequently express apprehensions regarding fairness, data privacy, and the use of complex “black-box” algorithms [11]. For this reason, citizen attitudes—shaped by trust, perceived usefulness, risk, digital skills, and beliefs about fairness—have become a key topic in discussions about responsible and inclusive use of AI in government [12].
However, most current research still focuses more on technical or organizational aspects of AI in the public sector than on how citizens evaluate or respond to these technologies. While some literature reviews examine public attitudes, many either analyze AI applications broadly without focusing on digital delivery channels or concentrate on specific tools, such as chatbots. Furthermore, limited attention has been paid to identifying the full range of attitudinal variables and psychological constructs shaping public responses to AI in government.
This paper aims to fill that gap by conducting a systematic literature review of recent empirical studies on citizen attitudes toward AI-based digital government services. Using PRISMA guidelines, we analyzed 30 studies published between 2019 and early 2025. The review is guided by four core research questions:
(RQ1): What implementation forms of AI-enabled services are studied in the current literature?
(RQ2): What are the dependent attitudinal variables examined?
(RQ3): What are the key determinants of citizen attitudes toward AI-enabled government services?
(RQ4): What are the most commonly reported citizen concerns and challenges related to AI-enabled services?
The remainder of this article is structured as follows: Section 2 reviews the existing literature on AI in e-government with a focus on citizen perspectives. Section 3 outlines the methodology used for this systematic review, covering the search strategy and inclusion criteria. Section 4 presents the findings organized around the four key research questions. Section 5 discusses these findings in relation to existing theoretical models and highlights implications for research and practice. Finally, Section 6 concludes the paper by summarizing the key points and proposing directions for future investigation.

2. Related Work

While many studies have explored individuals’ attitudes toward e-government services, the relationship between attitudes, public e-services, and AI technology remains understudied. This is due in large part to the novel nature of AI technology. This observation is reinforced by the recent publication dates of the existing literature reviews. Moreover, to date, the literature has predominantly focused on the supply side rather than the demand side. For example, prior studies have emphasized enablers [13], challenges [14,15,16,17,18,19,20,21], and benefits [14,16,17,19,21,22] of embedding AI systems into the public sector. In the same vein, past research endeavors have stressed the facilitators, risks, opportunities, and benefits of chatbot adoption in public services [23,24]. Both studies demonstrated that chatbots can enhance efficiency, accessibility, and engagement in public services, but successful adoption depends on factors like technical integration, institutional readiness, and ethical safeguards. So far, [23,24] seem to be the only reviews focusing on a distinct area of AI applications for e-government services.
From a user perspective, we have located just one literature review following a broad approach to AI implementation in public services. Its scope includes both digitally and non-digitally delivered services [25]. The study sought to understand the key factors impacting citizens’ perceptions of government AI adoption. The authors defined the concept of perception as encompassing individuals’ attitudes, beliefs, and their overall evaluation of AI-based public services. The identified factors were grouped into six categories: perceived benefits, perceived concerns, individuals’ characteristics, services’ characteristics, trust, and external factors. Regarding public service domains, order and safety emerged as the most frequently investigated areas. User perspectives have also been explored in the context of chatbot use by local governments. The results indicated that individuals’ perceptions are primarily affected by adoption benefits and risks [24].
To the best of our knowledge, the present study represents the first systematic literature review delving into citizens’ attitudes toward AI-enabled digital government services. Unlike the two studies discussed above, which either addressed AI-enabled services broadly or narrowed their scope to a single form of AI implementation (i.e., chatbots), our review adopts a different perspective. It examines citizen–government interactions through all forms of digitally rendered AI services. Through this lens, we aim to gain a comprehensive understanding of how citizens’ attitudes vary across the distinct communication channels identified in the literature.

3. Research Method

The present study utilized a systematic review approach, adhering to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [26]. Specifically, our research involved four phases: (1) conducting a thorough literature review to identify relevant studies; (2) screening the identified studies; (3) establishing the eligibility criteria; and (4) determining the studies for inclusion.

3.1. Identification

We began our study with a systematic search in Scopus and Web of Science, two of the most widely used databases for literature reviews. The search query we used in both databases is outlined below: ((TS = (“eGovern*” OR “e-govern*” OR “electronic govern*” OR “digital govern*” OR “smart govern*” OR “electronic public services”)) AND TS = (“artificial intelligence” OR “AI” OR “A.I.” OR “chatbots” OR “machine learning” OR “deep learning” OR “ChatGPT”)) AND TS = (“citizen” AND “accep*” OR “behavioral intention” OR “intention” OR “usage continuance” OR “continuance of use” OR “using” OR “adopti*”). To ensure that our analysis reflected the most recent research, we focused exclusively on studies published between January 2019 and our final search date in mid-April 2025. A comprehensive search strategy was employed, with no restrictions to specific scientific domains, in a deliberate effort to maximize coverage of relevant studies. We set the language criterion to English. The search returned 123 studies from Scopus and 154 from Web of Science. At this point, we focused our evaluation on the titles and abstracts of the studies; if title/abstract relevance was unclear, we reviewed the full text.
To expand our search, we utilized two AI tools (Elicit and Scispace) to automate the literature discovery process. As noted in prior research, this method addresses key limitations of conventional keyword-based searches by reducing time-intensive manual effort [27]. The tools’ natural language processing capabilities also enable complex queries phrased as plain-text questions [28]. The initial inquiries we posed on both platforms pertained to citizens’ perceptions of AI utilization in government portals (i.e., “How do citizens perceive AI use in government portals?”) and their perspectives on the integration of AI technology into public services (i.e., “What are citizens’ views on the integration of AI technology into public services?”). The results obtained were consistent with the maximum results provided by the basic (free) version of each search engine, i.e., 500 and 100 per query on Elicit and Scispace, respectively. As chatbots emerged as a dominant theme in initial results, we ran a third query regarding citizens’ attitudes about the utilization of such technology in public services (i.e., “What are citizens’ attitudes toward the use of chatbots in public services?”). In a manner consistent with the approach previously adopted for Scopus and Web of Science, the titles and abstracts were screened. In cases where the title indicated that the result was not pertinent to our research, we relied on tool-generated summaries (Elicit’s abstract overviews, Scispace’s insights).
The entire search process was carried out from 10 December 2024 to 15 April 2025 and yielded a total of 2077 results for screening (123 from Scopus, 154 from Web of Science, 1500 from the three Elicit queries, and 300 from the three Scispace queries).
In the identification process, as delineated above, we omitted studies published before 2019, studies not directly relevant to the topic under investigation, and studies written in languages other than English. In addition, we did not consider studies related to the use of AI technology in the context of smart cities, as smart cities are outside the scope and objectives of this study. The application of these criteria narrowed the results down to 142.

3.2. Screening

The studies that advanced to the screening phase were subjected to a deduplication process. For this purpose, we used the Zotero software (version 7.0.24). A total of 31 duplicates were identified and subsequently removed, thereby reducing the number of records to 111.
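Zotero flags duplicates by comparing fields such as title, DOI, and year. A minimal sketch of the same idea, using hypothetical records and a deliberately simplified matching rule (exact DOI first, then a normalized title), might look like this:

```python
# Illustrative deduplication pass over hypothetical bibliographic records.
# This simplified matching rule is an assumption, not Zotero's actual algorithm.
import re

def normalize_title(title: str) -> str:
    """Lowercase and strip punctuation/extra whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for rec in records:
        # Prefer the DOI as a key; fall back to the normalized title.
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "AI in E-Government: A Review", "doi": "10.1000/x1"},
    {"title": "AI in e-government: a review.", "doi": "10.1000/x1"},  # duplicate by DOI
    {"title": "Chatbots and Citizens", "doi": None},
    {"title": "Chatbots and citizens", "doi": None},  # duplicate by normalized title
]
print(len(deduplicate(records)))  # 2 unique records remain
```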

3.3. Eligibility

Following the screening phase, we established the eligibility criteria as follows:
Inclusion Criteria
  • Studies should be published in a peer-reviewed journal or conference proceedings.
  • Studies should be empirical.
  • Studies should focus on examining citizens’ attitudes toward the application of AI technologies in the context of e-government.
  • Studies should address the use of AI in services that are solely digitally delivered by governments.
Exclusion Criteria
  • Non-open access studies.
  • Studies not published in the English language.
  • Studies that merely mention the use of AI in e-government, among other state-of-the-art technologies, rather than focusing on it.
  • Studies that do not clearly specify how government services are provided.
We retrieved the full text of all potentially qualifying studies and read them thoroughly. We acknowledge that this process may introduce a degree of subjectivity, although efforts such as independent review and predefined criteria were used to minimize bias. We excluded 80 articles for not meeting the eligibility criteria, and a further three articles were excluded due to inability to obtain access. The number of articles that remained was 28. Through the examination of their references, we located an additional relevant study, which increased the number of the selected documents to 29.

3.4. Inclusion

The concluding phase in our methodological approach involved a final full-text check of all 109 articles (from an initial 111, after excluding the three inaccessible studies and adding the one retrieved from reference tracking) that potentially fulfilled the inclusion criteria. This step was undertaken to verify that no qualifying studies had been inadvertently omitted and that no ineligible studies had been mistakenly retained. In the course of this process, we identified one erroneously excluded article. As a result, 30 records were ultimately included in this review (Figure 1).
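The selection funnel described above can be tallied as a quick arithmetic consistency check (all figures are taken from the text):

```python
# Consistency check of the selection funnel reported in the method section.
after_initial_criteria = 142                        # after date/relevance/language/smart-city filters
after_dedup = after_initial_criteria - 31           # 31 duplicates removed in Zotero
after_eligibility = after_dedup - 80 - 3            # 80 ineligible, 3 inaccessible
after_reference_tracking = after_eligibility + 1    # 1 study located via reference lists
final_included = after_reference_tracking + 1       # 1 erroneously excluded article restored

assert after_dedup == 111
assert after_eligibility == 28
assert final_included == 30
print(final_included)  # 30 records included in the review
```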

4. Results

4.1. Descriptive Analysis

4.1.1. Year of Publication

As illustrated in Figure 2, the number of publications has exhibited a consistent upward trend from 2020 to 2024. This trend is consistent with the growing worldwide focus on incorporating AI technology into government settings over the past few years [29]. Given the limited number of publications in 2019 (n = 2), the decrease from 2019 to 2020 may reflect the nascent stage of research into government AI adoption during that period. As our search covered only the first three and a half months of 2025, just one publication from that year was retrieved.

4.1.2. Authors

Figure 3 shows the number of studies by authors with more than one publication. The analysis reveals that Tambouris is the most prolific author, with three publications. The papers included in this review feature a total of 91 authors, representing affiliated institutes across 21 countries.

4.1.3. Country Context

The countries where the studies were conducted (n = 19) are highlighted in Figure 4. In order to present the distribution of papers per country, we set the total number of papers to 33—instead of 30—as one of the studies was cross-national, containing data from four countries (Greece, Singapore, Switzerland, and the USA). China stands out, representing 24% of the total, a position that may be attributed to its top global ranking in both the publication and citation of AI-related research articles [30]. Greece follows at approximately 12%. Indonesia, Morocco, Norway, and Switzerland each represent around 6%. The remaining 39% is distributed equally across 13 other countries.
Despite the low number of studies and their distribution per country, as depicted in Figure 4, some preliminary regional patterns can still be observed. Studies from Europe were the most numerous and tended to emphasize concerns around privacy, transparency, and data protection, reflecting the influence of regulatory frameworks such as the GDPR. Research from Asia, though smaller in volume, often focused on citizens’ trust in government and optimism toward technological innovation, consistent with the rapid adoption of digital services in the region. Evidence from developing regions such as Africa and Latin America was particularly scarce, but the few available studies pointed to digital divide issues, infrastructural barriers, and low baseline trust in government institutions as key challenges. These contrasts suggest that citizen attitudes toward AI in e-government are shaped not only by the technology itself but also by broader socio-political, cultural, and infrastructural contexts.
Below, we present the relationship between the number of publications per country (research location) and their corresponding 2024 E-Government Development Index (EGDI) scores. The EGDI is a comparative measure of e-government capacity development across United Nations member countries, encompassing three component indicators: scope and quality of online services (OSI), development status of telecommunication infrastructure (TII), and inherent human capital (HCI) [31,32].
Our analysis shows no apparent correlation, as illustrated in Figure 5.
Both countries that rank very high in the EGDI, like Singapore (0.9691) and South Korea (0.9679), and low-ranking ones, like India (0.6678) and Pakistan (0.5096), contributed one publication each. Meanwhile, Greece (0.8674), which ranks close to the average value (0.8594) of the countries in question, produced four. China (0.8718) has the highest number of publications (n = 8), despite its score being relatively close to the average, while Norway combines a notably high score (0.9315) with two publications. The absence of a clear positive association may stem from the small number of publications per country, which does not allow for generalizability of the findings. These data are presented for descriptive purposes only and are not intended for statistical inference. It is worth noting that these studies examine not just e-government, but its integration with AI technology. The AI dimension may thus explain the volume of research produced by China, Norway, and Switzerland, as technologically advanced countries where AI is extensively applied, although this is not the case for Indonesia and Morocco.

4.1.4. Affiliations

In Table 1, we present author affiliations. To compile this list, we manually reviewed all affiliations. In cases where multiple authors within a single publication belonged to the same institution, we recorded the affiliation only once. For authors with more than one affiliation, we documented each institution. As a whole, our review yielded 47 affiliations across 21 countries. Consistent with the analysis at a country level presented earlier, only the most productive countries are included in the table. China evidently holds a prominent position with seven represented universities, while Tsinghua University (Beijing) is the most frequently occurring institution among all publications (n = 4). Greece follows with two universities, occurring in three and two publications, respectively. Morocco also has two represented universities contributing four equally distributed publications. Norway’s representation includes three universities and one research institution, each appearing in four distinct publications. Finally, Indonesia and Switzerland follow, with three and two institutions, respectively, each appearing in a different publication.

4.1.5. Keyword Co-Occurrence

VOSviewer (Visualization of Similarities Viewer, version 1.6.20) is a freely available software tool widely used in academic research. Utilizing text-mining analysis, it generates and visualizes scientific network maps based on co-occurrence matrices [33]. Although VOSviewer is primarily designed for bibliometric analysis rather than systematic literature reviews, we employed it because it offers valuable insights into keyword co-occurrence by visualizing them in clustered networks (Figure 6).
The analysis showed that AI (n = 15), e-government (n = 11), and chatbots (n = 11) are the most frequently occurring keywords, grouped into four clusters: red reflects technology and government (e.g., chatbots, e-government); blue includes AI and research methods (e.g., conjoint experiments and online surveys); green relates to citizens’ views (e.g., intention to use, satisfaction); and light green covers behavioral research and public expectations.
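The co-occurrence counting that underlies such a map is straightforward: each paper contributes one count to every pair of keywords it lists. A minimal sketch with hypothetical keyword lists (not the actual review data):

```python
# Minimal sketch of keyword co-occurrence counting, the basis of a VOSviewer map.
# The keyword lists below are hypothetical examples, not the reviewed studies' data.
from collections import Counter
from itertools import combinations

papers = [
    ["artificial intelligence", "e-government", "chatbots"],
    ["artificial intelligence", "chatbots", "intention to use"],
    ["e-government", "satisfaction", "artificial intelligence"],
]

pairs = Counter()
for keywords in papers:
    # Sorting makes the pair key order-independent; set() drops repeated keywords.
    for a, b in combinations(sorted(set(keywords)), 2):
        pairs[(a, b)] += 1

print(pairs[("artificial intelligence", "chatbots")])  # co-occur in 2 papers
```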

4.2. Study Overview

An overview of the studies is presented in Table 2, detailing the study references, titles, publication types, journals or conference proceedings where each study was published, and research objectives. Approximately half of the studies were published in peer-reviewed journals, while the remaining half were presented at international conferences. The wide variety of journals and conferences underscores the multidisciplinary nature of AI research in the field of government services. It encompasses fields such as computer science, medical science, economics, and public administration.

4.2.1. Analysis Approach

This subsection analyzes three key aspects of the reviewed literature: theoretical frameworks, methodological approaches, and statistical techniques (Table 3). We first examine the theoretical foundations employed across studies, then assess methodological transparency through questionnaire availability, and, finally, identify the most prevalent statistical methods in the research.
The methods used were primarily quantitative (n = 25), followed by mixed methods (n = 4) and a single qualitative study. Most quantitative studies employed surveys (n = 21), with a smaller number using experimental designs (n = 4). Three of the mixed-methods studies used experimental approaches [35,36,37], while the fourth conducted a cross-platform digital discourse analysis [53]. In the qualitative study, data were collected through semi-structured interviews [47].
A variety of theoretical models were used; nevertheless, TAM was the most widely applied (n = 8). This is in line with our review’s focus on the attitude aspects of interactions between citizens and AI in e-government settings. The IS Success Theory was the second most common framework identified (n = 4), followed by UTAUT/UTAUT2 (n = 3). Frameworks were proposed by the authors in eight studies, whilst in four studies, the theoretical basis was not specified. More than one framework was utilized in three studies [39,58,59]. It is important to highlight that one study made a significant contribution to the existing literature by introducing a novel theory [57]. When it comes to data collection methods, questionnaires were fully documented in fewer than half of the studies (n = 13), most commonly in appendices, rather than in the methodology. One study incorporated only partial items within its main text [34]. Finally, PLS-SEM (n = 10) and descriptive statistics (n = 7) were the most frequently used statistical analysis approaches, followed by regression-based models (n = 6).

4.2.2. Study Quality Assessment

To ensure rigor and transparency, we assessed the quality of the included studies for methodological soundness and reporting clarity. We adopted a simplified checklist approach inspired by the Critical Appraisal Skills Programme (CASP) guidelines [64] and aligned with the PRISMA methodology [26]. Two authors independently assessed each study, and any disagreements were resolved through discussion and consensus. Five key quality criteria were considered:
  • Clear Research Objectives: whether the study explicitly states its aims or research questions. Transparent objectives are necessary to determine the focus and relevance of the study.
  • Methodology Transparency: the extent to which the study adequately describes its research design, data collection procedures, and analytical approach. Transparent reporting enables replication and enhances reliability.
  • Sampling Adequacy: The representativeness and appropriateness of the sampling strategy, including sample size justification. Appropriate sampling strengthens the validity and generalizability of findings.
  • Validity/Reliability/Trustworthiness of Measures: Whether the study reports reliability checks (e.g., Cronbach’s alpha) and/or validity measures (construct, content, or convergent validity). This criterion ensures that the instruments used to measure constructs produce consistent and accurate results. In the case of qualitative studies, the “Validity/Reliability” column was interpreted as “Trustworthiness” referring to whether the authors described steps ensuring credibility, dependability, transferability, or confirmability [65].
  • Relevance to the Review Scope: The degree to which the study directly examines citizen attitudes toward AI-enabled e-government services. It ensures inclusion of studies aligned with the objectives of this review.
Table 4 depicts the assessment of each included study against each predefined criterion. More specifically, each study was rated using ✔, △, or ✖ for each criterion, assigned a cumulative score, and classified as either high quality (meeting at least four out of five criteria), medium quality (meeting two to three criteria), or low quality (meeting fewer than two criteria). Out of the 30 studies, 11 (37%) were classified as high quality and 19 (63%) as medium quality, while no study was rated as low quality. The review retained all studies to ensure comprehensive coverage, but findings from medium-quality studies are interpreted with caution.
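The classification rule can be sketched as follows. Note that treating only a full “✔” as a met criterion is our assumption, since the handling of partial (“△”) ratings is not spelled out in the text:

```python
# Sketch of the study quality classification rule described above.
# Assumption: only a full "✔" counts as a met criterion; "△" and "✖" do not.
def classify(ratings: list[str]) -> str:
    met = ratings.count("✔")  # criteria fully met, out of the five assessed
    if met >= 4:
        return "high"
    if met >= 2:
        return "medium"
    return "low"

print(classify(["✔", "✔", "✔", "✔", "△"]))  # high
print(classify(["✔", "✔", "△", "✖", "△"]))  # medium
```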

4.3. Synthesis of Findings

4.3.1. AI Implementation Forms

For the purposes of this review, “AI implementation forms” refer to the various forms of AI that serve as theoretical foundations in the selected literature. These approaches are organized into three primary categories: (1) Chatbots and Virtual Assistants, (2) AI-Enabled E-Services, and (3) Other Forms (Table 5).
It is evident that chatbots are a prominent feature of the selected studies. They are presented as a means of providing personalized information [36,37,42], empowering citizens [44], enhancing public engagement [45,57,61], addressing the public sector resources shortage [46,47], expediting service delivery [54], enabling government digital transformation [55], and improving accessibility to public services [58]. Some papers propose innovations such as technological architectures [36,39], novel solutions [37,41], and technological integrations aiming at enhancing functionality [42]. Others base their research on the analysis of existing chatbot systems [40,44,45,46,47,49,59,61]. The remaining studies concerning conversational agents address them as a general concept rather than specific implementations [54,55,57,58,62]. Additionally, chatbots are examined at different levels of governance, including both national and local (municipal/provincial) levels [40,47,49,59], demonstrating their ability to adapt and be used in various settings. Overall, chatbots designed for various purposes and capable of performing diverse tasks are investigated. Examples include providing the cost of filing for divorce [36], assisting with passport applications [37,42], offering mental health advice [40,46], and delivering policy consultation services [57,58]. In contrast, virtual agents or virtual assistants are addressed in two studies, although the term is used interchangeably with chatbots [34,38]. Similarly, several other studies treat chatbots and virtual agents as synonymous terms [39,40,44,45,46,61]. Notably, one study distinguishes between two categories of chatbots: simpler ones that use preprogrammed rules and AI-powered ones that use Large Language Models (LLMs) with Natural Language Processing (NLP) to generate responses automatically [62].
An LLM is a type of artificial intelligence model based on deep neural networks that is trained on massive corpora of text data to learn statistical patterns of language. By leveraging billions of parameters, LLMs are able to generate coherent, contextually relevant text, perform reasoning tasks, and adapt to a wide range of NLP applications without task-specific training [66].
The second category pertains to AI-driven digital government services. The studies in this group do not focus on AI-specific tools, but rather on AI technology in its wider sense. This category comprises a significantly smaller number of publications (n = 7). Here, publications can be divided into those focusing on specific e-services and those discussing e-services in a broader manner. Targeted e-services are addressed in three out of the seven studies, covering areas such as digital voting [43], issuing national entry visas and parking licenses [48], and tax filing [52]. The remaining studies—[50,51,60,63]—do not focus on specific digital services. Instead, they discuss e-services more generally.
Other types of AI implementation were also identified. Although they involve distinct AI technologies, their limited representation in the reviewed literature led us to combine them into a single category. These include a proposed system combining human and AI-powered machine capabilities for integration into e-government portals to streamline completion of e-services [35]; the ChatGPT model examined for integration with government services [53]; and recommender systems aiming at providing tailored interactions with public services [56]. The limited attention these innovations have received may be attributed to two factors: their emerging nature and the multifaceted challenges associated with the adoption of such advanced technologies by the public sector [67,68,69,70].

4.3.2. Definition and Components of Attitude

In this subsection, we discuss the different aspects of citizens’ attitudes explored in the extant literature. The Cambridge Dictionary defines attitude as “a feeling or opinion about something or someone, or a way of behaving that is caused by this” [71]. The definition clearly shows that attitude constitutes a multidimensional concept encompassing feelings, opinions, and behaviors as inherent characteristics. This is further supported by the ABC model [72], which proposes that attitude consists of three components: affective, behavioral, and cognitive. The ABC Model of Attitudes has been extensively applied in consumer behavior research, primarily through its integration into the Theory of Reasoned Action (TRA). The affective component refers to individuals’ emotions, the behavioral component relates to their actions, and the cognitive component pertains to their thoughts and beliefs. With regard to the behavioral dimension, it is essential to point out that two of the reviewed studies examined actual behavioral outcomes [51,54], while a third assessed behavioral intentions even though usage behavior was its stated dependent variable [39]. This might be because measuring actual behavior necessitates non-probabilistic sampling, which may introduce research bias. After all, information systems adoption models (e.g., TAM, UTAUT, TRA, and TPB) postulate that intentions are strong predictors of subsequent behaviors [73]. Table 6 shows the distribution of the selected articles across the three aspects of attitude.
Our analysis reveals that the literature focuses predominantly on the behavioral component of attitude (12 exclusive references), followed by the cognitive (8 references) and affective dimensions (7 exclusive references). Three additional studies fall into both behavioral and affective categories [44,46,54], and are not included in the exclusive counts above.
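For readers who wish to reproduce such tallies, the distinction between exclusive and dual-coded counts described above can be sketched in a few lines of Python. The study identifiers below are hypothetical placeholders (not the actual reference numbers); only the totals mirror those reported in our analysis:

```python
from collections import Counter

# Hypothetical mapping of reviewed studies to ABC attitude components.
# Totals mirror the review: 12 exclusively behavioral, 8 exclusively
# cognitive, 7 exclusively affective, and 3 dual-coded studies.
study_components = {
    **{f"B{i}": {"behavioral"} for i in range(1, 13)},              # 12 behavioral-only
    **{f"C{i}": {"cognitive"} for i in range(1, 9)},                # 8 cognitive-only
    **{f"A{i}": {"affective"} for i in range(1, 8)},                # 7 affective-only
    **{f"D{i}": {"behavioral", "affective"} for i in range(1, 4)},  # 3 dual-coded
}

def exclusive_counts(mapping):
    """Count studies assigned to exactly one attitude component."""
    return Counter(
        next(iter(comps)) for comps in mapping.values() if len(comps) == 1
    )

def dual_count(mapping):
    """Count studies coded under more than one component."""
    return sum(1 for comps in mapping.values() if len(comps) > 1)

counts = exclusive_counts(study_components)
print(counts["behavioral"], counts["cognitive"], counts["affective"],
      dual_count(study_components))
# → 12 8 7 3
```

Keeping dual-coded studies out of the exclusive counts, as done here, avoids double-counting when the per-component totals are summed.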

4.3.3. Attitudinal Dependent Variables

Studies aiming at predicting individuals’ behavior examine the attitudinal factors of intentions [34,36,37,44,46,47,50], adoption [35], willingness to use [38,45], usage behavior [39,51,54], engagement [61], and the likelihood of citizens to initiate contact with the government [62]. Research on individuals’ thoughts and beliefs investigates usability [42], acceptance [43,48,53,60], preferences [49], perceptions [52], and favorability [58]. Exploration of citizens’ emotions includes examination of satisfaction [40,41,44,46,54,55,56,57,63], and experience [40,59] (Figure 7). Given the empirical nature of the articles, these constructs serve as dependent variables in their respective theoretical models. In studies where more than one response variable was examined, we selected only those pertaining to our research scope. Our examination revealed 14 distinct dependent variables, listed in Table A1 of Appendix A.
It is important to note that some of these concepts are nuanced and have been defined in different ways within the context of information systems. For example, acceptance has been defined as “the positive decision to use an innovation.” [74]. On the other hand, it has also been described “as the demonstrable willingness within a user group to employ information technology for the tasks it is designed to support” [75]. According to this rationale, willingness to use is not a distinct characteristic of attitude but inherent in acceptance. Moreover, adoption has been conceptualized as “the use or acceptance of a new technology, or new product.” [76]. This interpretation suggests that adoption encompasses acceptance. Furthermore, we observed that in some of the reviewed studies, concepts were used as synonyms. For instance, acceptance and intention to use were used interchangeably in [43], and this is also the case with acceptance and attitude in [53]. Nevertheless, we treated them as distinct variables to maintain consistency with the theoretical frameworks of their respective studies.

4.3.4. Key Determinants of Citizen Attitudes Towards AI-Based E-Government Services

As governments around the world increasingly employ AI technology, there is a growing need to understand citizens’ views. Scholars have examined the effect of multiple factors on attitudinal dependent variables. Below, we present these factors by dependent variable and AI implementation form.
Factors Influencing Citizens’ Behavior
Research shows that individuals are more prone to using virtual assistants when their autonomy and decision-making capacity are low [34]. Perceived ease of use (PEU) and perceived usefulness (PU) hold a prominent position among the determinants of citizen intention to use government chatbots. Proof-of-concept chatbots in experimental studies were overall positively evaluated by participants who found them effortless to use, useful in obtaining information, and time-saving [36,37]. Citizens’ intention to use chatbots is strengthened by the satisfaction gained from using them [44], as well as by their trust in both the technology itself and public institutions [47]. Citizens’ intention to continue using AI-enabled services is primarily driven by usefulness, service reliability, and security of technology [50]. Contrary to what might be expected, the study in [46] found that voice interaction—generally considered a positive attribute of chatbots—along with the feeling of enjoyment and the COVID-19 pandemic did not exert a significant impact on the continuance intention for mental health chatbot usage. Another interesting result of this study is that the ability of a chatbot to provide tailored information only weakly influences continuance intention. This contrasts with the results reported in [36,37] and with the previous literature. The authors attributed this inconsistency to the potential inappropriateness of the applied theoretical model for the objectives of their research.
Adoption was investigated in a single experimental study aiming to establish whether a human–machine collaborative system designed to be integrated into government portals could boost adoption of digital services. The research team targeted specific population groups, including older adults and individuals with disabilities. The results indicated that the system was positively rated by participants as it enhanced accessibility [35]. With regard to willingness to use voice assistants, interest is greater among citizens embracing emerging technologies. That said, individuals remain doubtful about using them for complex digital services [38]. Similar to the intention to use government chatbots, willingness to use AI-powered public services is largely influenced by PU [45]. Usage behavior in its literal sense is assessed in two studies [51,54], while the remaining study evaluates usage intention rather than actual use [39]. Actual usage has been studied in relation to smart government encompassing AI technology [51] and chatbots [54]. The findings suggest that the utilization of smart government technology is positively influenced by satisfaction, trust in government, and perceived cost [51]. Furthermore, behavioral intention was found to be a good predictor of chatbot usage [54]. Meanwhile, citizens’ intention to utilize chatbots to communicate with the government is contingent upon trust in technology and perceived risk [39]. Next, citizen engagement in decision-making can be enhanced through the use of chatbots [61]. Notably, empirical evidence indicates that citizens are more likely to initiate contact with the government when responses are human rather than AI-generated. This likelihood further decreases with rule-driven AI systems, declining even more when learning-driven AI is utilized [62]. The authors connected this behavior to citizens’ reluctance to use AI technology when direct communication with civil servants remains an option.
Factors Influencing Citizens’ Cognitive Beliefs
Acceptance is the predominant variable explored in this category (n = 4). A cross-cultural survey conducted in four countries (Greece, Singapore, Switzerland, and the USA) highlighted the significance of PU and PEU in relation to the acceptance of an AI-based voting system [43]. Accuracy and reduced cost also proved to be pivotal factors in the acceptance of AI-powered decision-making in e-government [48]. The degree of acceptance of ChatGPT’s integration into digital governance was assessed through user-generated data collected from platforms such as YouTube and Zhihu. The analysis indicates that trust in both government and technology, along with perceived risk, are determinants of citizen acceptance [53]. Social influence is also recognized as a key factor in the acceptance of AI use in the public sector [60]. Similar to the approach followed in [36,37], usability was experimentally tested through a prototype chatbot integrating knowledge graphs. Results from TAM- and SUS-based questionnaires validated the chatbot’s ease of use, effectiveness, and usefulness [42]. Regarding preferences, users favor interacting with chatbots over other e-government services when social characteristics like emotional intelligence and proactivity (e.g., ability to propose subsequent inquiries) are integrated into chatbots. The same study posits that informal and friendly chatbot language is more appealing to citizens [49]. Public perceptions of AI-powered income tax filing systems are positively influenced by the systems’ accuracy, reduced error risk, and time savings [52]. Finally, cognitive beliefs are also explored through the factor of favorability. For e-service delivery, citizens favor chatbots with characteristics like appealing interfaces, ease of use, usefulness, and security. Attributes like natural-feeling dialogue and high system quality do not significantly impact their favorability. Conversely, for policy consultation purposes, system quality and quality of information are considered of great importance [58].
Factors Influencing Citizens’ Affective Responses
Among the various factors shaping citizens’ attitudes regarding government AI use, satisfaction has received the most attention (n = 9). Higher satisfaction levels are associated with citizens opting for automated solutions [41], their sense of autonomy, and their perception of chatbots’ benefits [44]. In contrast, personalization and user eagerness to acquire knowledge through engagement demonstrate only a modest positive correlation with satisfaction [46]. Intention to use chatbots and actual usage behavior [54], as well as frequent use of chatbots and recommender systems, also lead to greater satisfaction. Although trust in these systems plays a positive role in raising satisfaction levels, the most significant aspect is its moderating effect on the relationship between usage behavior and satisfaction [55,56]. Citizens’ expectations from the government (e.g., accountability, transparency, interactivity) and positive emotional perceptions further enhance the satisfaction experienced by chatbot users [57]. Compared to transparency and trust in AI services, service accuracy exerts the strongest influence on perceived service value, thereby contributing to higher satisfaction levels. Furthermore, human–AI collaboration proves particularly significant in services where transparency and trust are more crucial than service accuracy [63]. Experience was assessed in relation to the use of mental health chatbots during the coronavirus pandemic. The findings revealed that a chatbot’s ability to provide tailored services, users’ desire to gain knowledge through interaction, and the pandemic’s unusual circumstances conjointly contributed to a positive user experience [40]. Beyond these factors, perceived friendliness and competence of chatbots are also identified as drivers of experience enhancement [59].
The analysis reveals that PU and PEU dominate among the key factors shaping citizens’ attitudes toward AI-driven government services. This aligns with the frequent adoption of TAM and UTAUT as theoretical frameworks in both the reviewed studies and the wider literature on citizens’ attitudes toward information systems. By the same token, trust—particularly trust in government and technology—is identified as a pivotal factor. Trust, as a construct, is widely employed in empirical studies exploring attitudes in relation to nascent technologies. It is considered an essential precondition, particularly for potential users, to embrace the technology in question and rely on its provider. The literature also focuses on the significance of perceived risk, the provision of personalized services, and the usage frequency of AI-driven services. Security, accuracy, cost reduction, time savings, and chatbots’ human attributes are also commonly investigated, albeit usually less influential. Lastly, socio-demographic factors appear to be among the least impactful determinants.
The research variables (independent, control, mediating, and moderating) deployed in the reviewed literature, along with their definitions/explanations, are documented in Table A2 of Appendix A.

4.3.5. Citizens’ Concerns and Challenges in AI-Enabled Services

Inexperience is often closely associated with hesitance stemming from doubt and uncertainty. Thus, it is common for individuals to question the trustworthiness and reliability of novel technologies, especially during initial interactions. Several factors inducing skepticism were identified. To begin with, ethical concerns, particularly regarding data privacy and the absence of privacy protection measures, often discourage citizens from using AI in government interactions [38,45,48,52]. Concerns intensify when AI systems operate with high levels of autonomy, as this can generate a perceived loss of control [34]. This feeling becomes more intense when AI technology is utilized for decision-making processes [34,48] or when users are required to share higher amounts of private data [45]. Interestingly, though, citizens are sometimes willing to disclose personal information at the expense of their privacy protection, as long as they perceive AI technology as useful, validating the so-called “privacy paradox” [45]. At the same time, the sense of unfamiliarity with digital assistants constitutes another source of concern [30].
Apart from issues evoking skepticism, we also discovered challenges linked to the practical use of AI systems. These challenges largely derive from interface designs and limitations of chatbots. They were primarily detected in experimental studies where participants described their experiences after testing the systems. These comments provide valuable insights into potential downsides of chatbots, even if reported by a minority of participants. For example, 7 out of 19 evaluators of a chatbot designed to provide tailored services reported that, overall, they were disappointed by its performance, while 2 found it ineffective [36]. After interacting with a chatbot facilitating the acquisition of a passport, 17 out of 53 participants encountered difficulties conveying their inquiries to the chatbot due to its inability to comprehend input texts. At the same time, 12 users complained about extensive replies and the absence of an option to connect with a human representative [37]. In a trial assessing citizens’ intention to use a municipality chatbot, some respondents perceived the use of the chatbot as unnecessary, a result also seen in [36], whilst most reported that the chatbot failed to address complex inquiries, consistent with [38]. Users also raised a point about chatbot visibility: placing it at the bottom right corner of the screen rather than the top would discourage them from engaging with it. Meanwhile, a few participants noted the difficulty of formulating a prompt that would be understood by the chatbot [47]. Beyond usability, barriers related to interaction were also found. An online survey exploring citizens’ willingness to use voice assistants revealed that impersonal interaction and the lack of features facilitating accessibility for people with disabilities were reasons for negative evaluations [38].
Regarding studies on AI-enabled services, we identified only one challenge: respondents faced difficulty in understanding AI algorithms when using AI-powered tax-filing systems [52]. Finally, in the category of other forms of AI implementation, we located challenges in a single study assessing the adoption of a hybrid intelligence system integrated into a portal mirroring an existing e-government platform. Specifically, most participants complained about the excessive length and complexity of the e-services presented during the experiment. They also criticized the large volume of information provided and reported difficulty in understanding the website’s phrasing [35]. While these challenges may not reflect the views of the majority of participants or respondents, they should still be considered when designing AI-powered systems for e-government services.
Taken together, these challenges and concerns represent significant barriers to the acceptance of AI technology in the public sector. Building citizen trust requires more than technical improvements; it demands robust regulatory frameworks and governance mechanisms. Section 5.2 elaborates on how effective AI governance can alleviate these obstacles.

4.3.6. Limitations and Future Recommendations of the Reviewed Studies

Table 7 presents the limitations and suggested future research directions identified in the literature.
The limitations reported in the reviewed studies are predominantly methodological, primarily involving sampling issues [34,40,42,43,44,46,51,52,55,56,58,62,63] and data collection and evaluation problems [34,43,45,50,53,57,58,59,63]. The most common theoretical limitations concern potential insufficiency of the model(s) employed [39,46,59], and non-consideration of important constructs [49,50,54]. No practical limitations were found; however, three studies did not explicitly report any constraints [38,41,60].
In addition to the limitations cited by the authors, our review yielded general conclusions about weaknesses in the literature. Sample-wise, respondents and experimental subjects were mostly young (18–30 years old) and moderately to highly educated, both suggesting a high likelihood of digital literacy. Technological familiarity is closely associated with adoption willingness [77]; hence, tech-savvy individuals are more inclined to engage with novel technologies. Moreover, the widespread use of web-based data collection methods, primarily online questionnaires followed by online experiments, leads to the underrepresentation of certain population segments. These include individuals with no or limited internet access, older adults, people with disabilities, and often non-English speakers. Research has shown that web surveys entail a high risk of coverage error [78] and that online and offline households significantly differ in terms of demographic characteristics [79]. All the same, we must acknowledge that both AI and digital government are heavily contingent on web-based systems, making online sampling and data sourcing methods an almost unavoidable constraint. In terms of methodology, only a limited number of studies explored mediating and moderating variables [45,48,50,55,56,58,59,63]. Incorporating moderating and mediating effects into academic research enables a more profound understanding of construct relationships [80].
Another shortcoming we found is that there are several attitude-forming factors that prior studies did not touch upon. Technology, despite its benefits, also has a harmful facet, primarily owing to improper human use. In this context, apart from data privacy and security issues, concerns are raised about the adverse societal implications of AI. Frequently expressed apprehensions include job displacement risks and potential exacerbation of discrimination and inequality. As these concerns hold equally true for AI use in digital government, one would expect them to be discussed in the reviewed studies, yet they received little attention. We must also point out that personality traits (e.g., agreeableness, conscientiousness, extraversion) are overlooked despite substantial evidence of their effect on technology acceptance [81,82,83,84]. Lastly, the role of government as a watchdog for responsible and ethical use of AI in the public sector remains underexplored. Although trust in government has received scholarly attention [47,51,53], the impact of government regulations and legal frameworks—their presence or absence—as an attitude-forming variable, is poorly addressed. This gap has significant implications for understanding citizen trust and, by extension, attitude.
Regarding future recommendations, we paid particular attention to proposals offering specific directions, avoiding as much as possible reporting those simply stating the opposite of limitations. Notably, [34] acknowledges the need to investigate civil servants’ perceptions about AI implementations in government. Building on this recommendation, we argue that a comparative study mapping similarities and differences between citizens’ and public officials’ attitudes, challenges, and concerns could substantially enrich the literature. For the recommendation to grant chatbots access to users’ private data to deliver more efficient, tailored services [37], we suggest combining it with increased transparency and robust legal frameworks to mitigate privacy concerns. Expanding on [56], which calls for evaluating recommender systems’ impartiality and fairness, we argue that integrating user feedback is fundamental. User feedback should serve as a guide for both AI systems developers and governments in designing and deploying recommender systems and bots in web portals. Since government recommender systems, chatbots, and especially generative AI systems have access to sensitive information, post-adoption evaluation for potential biases and discrimination is a critical prerequisite for building public trust.
We further recommend investigating citizen attitudes toward understudied AI services, such as ChatGPT, voice-to-form systems, document verification tools, user identification applications, etc. Future work should also address ethical dimensions by examining, for instance, how regulatory frameworks (e.g., EU AI Act) shape citizen trust, and consequently attitudes. Finally, it would be worthwhile to assess potential relationships between attitudes toward e-government services and AI-based e-government services.

5. Discussion

5.1. Summary of Core Findings

Our study highlights key aspects of the relationship between citizen attitudes and government AI services. Based on our research questions, our findings are summarized as follows: Regarding RQ1 (What implementation forms of AI-enabled services are studied in the current literature?), our research revealed that chatbots are the main focus. Considering that the private sector has long adopted chatbots through web portals, it can be argued that the public is more familiar with them compared to other AI tools. This assumption is further supported by [85], which shows that chatbots are among the most popular types of AI used in governments globally. Between 2018 and 2022, the number of UN member countries providing chatbot functionality in their national portals experienced significant growth, rising by approximately 146% [86]. Unlike chatbots, advanced systems such as generative AI have become popular only recently and thus remain understudied.
To address RQ2 (What are the dependent attitudinal variables examined?), we first classified them into three categories—behavioral, cognitive, and affective—based on the ABC model. Our findings show that overall, attitudes are mostly measured through behavioral and affective outcome variables, particularly through behavioral intentions (n = 9) and satisfaction (n = 8), respectively. This focus on intentions aligns with theoretical frameworks often used in attitude research, such as the TPB, TRA, TAM, and UTAUT, where intention is viewed as a direct antecedent of behavior. By the same token, the prominence of satisfaction underscores its crucial role as an attitude-shaping factor. Drawing from the IS Success Model, satisfaction emerges as a key factor connecting the quality of a system to positive citizen attitudes and intentions to use the system [87].
Regarding RQ3 (What are the key determinants of citizen attitudes toward AI-enabled government services?), PU and PEU stand out as the most prominent factors fostering positive attitudes. The predictive power of PU and PEU has been widely established in both the context of digital government and AI. For example, [88] found that PU and PEU explained more than half of the variance in users’ continuance intention of e-government services. Likewise, studies on consumers’ intention to use autonomous vehicles [89] and to shop at AI-powered automated retail stores [90] identified PU and PEU, respectively, as the most influential factors. Another critical finding is that trust in both the technology and the institutions deploying it operates as an amplifying factor of the PU and PEU effect [23]. This suggests that while citizens are primarily motivated by how useful and easy to use a service is, trust mitigates perceived risk [91].
Regarding RQ4 (What are the most commonly reported citizen concerns and challenges related to AI-enabled services?), our results reveal two main dimensions. First, data privacy is the most frequently cited issue among users, contributing to negative attitudes. The importance of protecting data and privacy in AI applications has been widely acknowledged not only in scholarly research but also by international organizations and the EU. An example is the OECD Recommendation of the Council on Artificial Intelligence [92], which stresses the need for AI stakeholders to respect human privacy. Another example is the EU Ethics Guidelines for Trustworthy AI [93], which call, among other things, for processing data in a lawful manner, protecting data subjects’ privacy. Second, practical challenges result mainly from chatbots’ lack of competence and ineffective interface design. For instance, experimental participants reported that the tested chatbots had limited capacity to understand prompts, generated overly long responses, failed to handle complex tasks, and lacked accessibility features. Prior studies have also identified chatbots’ limited prompt comprehension as one of the most common reasons for failure, leading to user frustration and frequent abandonment of interactions [94,95,96,97].

5.2. AI Governance in the Public Sector

Citizen concerns and challenges [35,36,37,38,45,48,52] underscore the importance of governance frameworks. Effective AI governance encompasses a set of regulations, procedures, and technological mechanisms designed to ensure that the development and deployment of AI technologies align with democratic values, the rule of law, and citizens’ well-being [98].
Transparency and accountability are central to this governance. For example, chatbots’ limited ability to interpret prompts [37] and users’ difficulty in understanding “black box” algorithms [52] can raise trust issues [99]. Citizens may question the reliability of the technology and public institutions, which may discourage them from using AI-enabled services [99]. Transparency is about governments providing clear information on the purpose of these systems, how they operate, and the data they rely on [100]. Importantly, transparency also means guaranteeing that users know whenever they are engaging with AI [101]. Accountability strengthens transparency. In fact, one cannot exist without the other [101]. Citizen preference for human interaction over AI [37] highlights the need for accountability. Accountability ensures that mistakes and biases are addressed efficiently. Knowing that a human being will be held responsible for an erroneous AI-generated decision or misjudgment contributes to building citizen trust. However, accountability is not limited to decision-making; it spans the entire life cycle of AI systems [99]: design, development, and deployment [102]. Governments should establish due diligence mechanisms, implement audit systems, and employ human oversight to ensure accountability [99]. Explainability is also a key component of AI governance. A lack of explainability prevents citizens from having confidence in AI systems [103], especially when decisions affect their welfare, such as determining eligibility for social benefits. Beyond understanding the rationale of an AI system’s decision, citizens also have the right to know why AI is employed in a given instance. To this end, governments are obliged to provide interpretable explanations as a necessary precondition for fairness and non-discrimination [104]. 
Furthermore, principles such as privacy, security, and social benefit are fundamental to fostering public trust as a key determinant of positive citizen attitudes [105]. For these principles to be applied lawfully and ethically, legal frameworks have been established, and initiatives have been taken around the world. The AI Act [105] constitutes the first global attempt at introducing a horizontal regulatory framework for AI. Its objective is to ensure that AI technologies are safe, transparent, and aligned with EU values. Complementary to this, the EU Ethics Guidelines for Trustworthy AI [93] provide a framework for promoting lawful, ethical, and robust development and deployment of AI technologies. Globally, the OECD Recommendation of the Council on Artificial Intelligence [92] is the first international standard for AI governance. Some of its core principles are transparency, explainability, accountability, security, and safety. Similarly, the UNESCO Recommendation on the Ethics of Artificial Intelligence [99] provides guidelines for countries to create their own legal frameworks. Finally, initiatives like the Global Partnership on Artificial Intelligence (GPAI) [106] work to ensure that AI systems are designed responsibly and for the benefit of humanity. In summary, fostering positive attitudes toward AI-enabled services necessitates a foundation of ethical governance. Standards like transparency, accountability, and explainability are essential to demonstrate that AI operates fairly and reliably. When governments apply these values, they enhance public trust. This means that AI does not just work properly, but citizens also perceive it as a dependable and useful tool.

5.3. Theoretical Integration

As discussed in Section 4.3 (Synthesis of Findings), several of the studies employed established theoretical frameworks such as the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), and the Information Systems Success Model (IS Success Model). The present review indicated that these models provide useful but partial explanations for the observed variations in results.
Findings linked to TAM consistently emphasized the importance of perceived usefulness and ease of use, which appeared as common predictors of citizen attitudes across diverse contexts. UTAUT added explanatory depth by highlighting the role of social influence and facilitating conditions, factors that appeared particularly salient in developing regions where infrastructural barriers are more pronounced. Meanwhile, the IS Success Model was useful in accounting for variations in system quality, information quality, and service quality, which were shown to affect trust and satisfaction with AI-enabled services.
At the same time, existing models only partially capture the unique features of AI in e-government. Critical factors such as algorithmic transparency, fairness, explainability, and accountability are not fully addressed within traditional acceptance frameworks. This points to the need for theoretical extension and integration, combining established technology acceptance and IS models with concepts from AI ethics and governance. Such integration would enable a more comprehensive understanding of citizen attitudes and provide a conceptual basis for future empirical research.

5.4. Limitations and Future Recommendations

This review is subject to several limitations that should be considered when interpreting the findings. First, it focuses exclusively on digitally delivered AI-based government services, particularly those available through the web and mobile platforms. Future studies could broaden the scope of research, encompassing, for example, hybrid services and self-service terminals. Second, our study does not examine back-end administrative AI applications. To expand our understanding of citizen attitudes, future research should examine AI-enabled services that are not directly seen by citizens, like automated decision-making and fraud detection. In a similar manner, research including smart cities, where both digital and non-digital AI-based services are provided, would produce valuable insights. Third, our research does not focus on specific public sector domains. It would be interesting if subsequent studies explored citizen attitudes in relation to the use of AI in education, health, law enforcement, other governmental functions, etc. Fourth, this review may be affected by publication bias. By limiting our search to two databases (Scopus and Web of Science), non-indexed studies might have been missed. Although we extended our search using Elicit and SciSpace, the risk of bias could not be fully eliminated. Furthermore, while all studies were peer-reviewed, not all selected studies were published in top-tier journals or presented at highly ranked conferences. Finally, the exclusion of non-English studies and gray literature may have also introduced publication bias.
While this review aimed to provide a comprehensive and systematic synthesis, several potential threats to validity should be acknowledged. Regarding consistency, we applied predefined inclusion and exclusion criteria, a transparent search strategy, and double coding for study selection to ensure reliability. Nevertheless, the heterogeneity of study designs and national contexts inevitably introduces variability, which may affect the comparability of results. In terms of replicability, the search process, databases, and quality assessment checklist were documented in detail, yet the reliance on Scopus and Web of Science and the temporal cutoff of the search mean that some relevant studies may not have been captured. The generalizability of the findings remains limited: the overall evidence base is modest in size, with most countries contributing only one or two studies, and entire regions, such as Africa, being underrepresented. Another limitation concerns the potential for subjectivity in the full-text screening process. Although inclusion and exclusion criteria were applied systematically, assessing fit to the review's scope inevitably involved researcher judgment. This potential bias was mitigated through transparent criteria and cross-checking, but it cannot be fully eliminated. For these reasons, our conclusions should be interpreted as indicative, and further research across diverse contexts is required to strengthen the evidence base.

6. Conclusions

Artificial intelligence holds great potential to transform e-government services. While the introduction of traditional e-government services marked the beginning of a new era, AI is poised to revolutionize how these services are delivered. The responsibility for driving this transformation lies with developers and governments: creating citizen-centered AI systems is critical to cultivating public trust and engagement. Ultimately, however, its success will be determined by citizens. Understanding their perceptions, preferences, intentions, and behaviors therefore lays the foundation for harnessing the merits of AI.
Beyond the recognized importance of designing and embedding easy-to-use, reliable, and transparent AI services into digital government, our research has uncovered equally important yet less apparent implications. For policymakers, raising citizens' awareness of AI's benefits can strengthen its perceived utility, while informing the public about potential risks, and about ways to mitigate them, can help citizens feel less intimidated. In the same vein, the gradual introduction of tools other than chatbots may foster greater familiarity. For developers, co-creating AI services with citizens can provide a deeper understanding of citizen needs and thereby boost confidence in the resulting systems. Finally, public institutions should ensure that humans remain in the loop and actively promote civil servants' digital literacy.

Author Contributions

Conceptualization, M.R. and S.B.; methodology, M.R., S.B. and I.S.; formal analysis, I.S.; investigation, I.S. and M.R.; data curation, I.S.; writing—original draft preparation, I.S. and M.R.; writing—review and editing, I.S. and M.R.; visualization, I.S.; supervision, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The publication fees of this manuscript have been financed by the Research Council of the University of Patras.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TAM: Technology Acceptance Model
IS Success Model: Information Systems Success Model
UTAUT: Unified Theory of Acceptance and Use of Technology
TRA: Theory of Reasoned Action
TPB: Theory of Planned Behavior
PEU: Perceived Ease of Use
PU: Perceived Usefulness

Appendix A

Table A1. Dependent Variables—AI Implementation Forms—Attitude Components.

| Dependent Variable | Chatbots & VAs | AI-Enabled E-Services | Other Forms | Attitude Component |
|---|---|---|---|---|
| Intention to Use | [34,36,37,44,47] | | | Behavioral |
| Continuance Intention | [46] | [50] | | |
| Adoption | | [35] | | |
| Willingness to Use | [38,45] | | | |
| Usage Behavior | [39,54] | [51] | | |
| Engagement | [61] | | | |
| Citizen-Initiated Contact | [62] | | | |
| Usability | [42] | | | Cognitive |
| Acceptance | | [43,48,60] | [53] | |
| Preferences | [49] | | | |
| Perceptions | | [52] | | |
| Favorability | [58] | | | |
| Experience | [40,59] | | | Affective |
| Satisfaction | [40,41,44,46,54,55,57,63] | | [56] | |
Table A2. Independent (IV), moderating (MoV), mediating (MeV), and control (CV) variables identified in the reviewed studies.

| Variable | Role | Definition/Explanation | Study Reference |
|---|---|---|---|
| Prior Experience | IV | Not defined. | Abbas et al., 2023 [47] |
| | MoV | The degree of previous use of technology. | Horvath et al., 2023 [48] |
| Time Reduction | IV | The difference in the time spent to complete an e-service with and without using the proposed model. | Zabaleta et al., 2019 [35] |
| Autonomy | IV | The number of participants that could complete the e-services autonomously with and without using the proposed model. | Zabaleta et al., 2019 [35] |
| Perceived Ease of Use (PEU) or Effort Expectancy (EE) | IV | Not defined. | Stamatis et al., 2020 [36]; Antoniadis & Tambouris, 2021 [37]; Patsoulis et al., 2022 [42]; Suter et al., 2022 [43]; Abbas et al., 2023 [47]; Kim et al., 2023 [50] |
| | IV | It is easy to use without effort. | Yang & Wang, 2023 [53] |
| | IV | The level of ease associated with the use of a system. | Abed, 2024 [54] |
| Perceived Usefulness (PU) or Performance Expectancy (PE) | IV | Not defined. | Stamatis et al., 2020 [36]; Antoniadis & Tambouris, 2021 [37]; Alhalabi et al., 2022 [41]; Patsoulis et al., 2022 [42]; Suter et al., 2022 [43]; Willems et al., 2022 [45]; Abbas et al., 2023 [47]; Kim et al., 2023 [50]; Moreira & Naranjo-Zolotov, 2024 [60] |
| | IV | The degree to which an individual believes that using the system will help him or her to attain gains in job performance. | Abed, 2024 [54] |
| Cognitive Communication | IV | Cognitive-based virtual agents' interaction between government and citizens for e-government services. | Chohan et al., 2021 [39] |
| Trust | IV | Not defined. | Chohan et al., 2021 [39]; Abbas et al., 2023 [47] |
| | IV, MoV | Trust in chatbots. | El Gharbaoui, El Boukhari, et al., 2024 [55] |
| | IV, MoV | Trust in recommender systems. | El Gharbaoui, El Boukhari, et al., 2024 [56] |
| Perceived Risk | IV | The subjective belief that there is a probability of suffering a loss in pursuit of a desired outcome [51]. | Chohan et al., 2021 [39]; Pribadi et al., 2023 [51] |
| | IV | Public perception of possible adverse consequences for themselves. | Yang & Wang, 2023 [53] |
| Personalization | IV | The provision of personally relevant products and services according to the user's unique characteristics and demands. | Zhu et al., 2021 [40]; Zhu et al., 2022 [46] |
| Voice Interaction | IV | A function that allows robots to make human-like communication with users. | Zhu et al., 2021 [40]; Zhu et al., 2022 [46] |
| Enjoyment | IV | The extent to which using an IS product or service is perceived as enjoyable, fun, and pleasurable. | Zhu et al., 2021 [40]; Zhu et al., 2022 [46] |
| Learning | IV | A means of satisfying users' desire to acquire new knowledge. | Zhu et al., 2021 [40]; Zhu et al., 2022 [46] |
| Condition | IV | The perceived utility received from using chatbots to meet the mental health demands of the current condition a person faces. | Zhu et al., 2021 [40]; Zhu et al., 2022 [46] |
| Information Quality | IV | The extent of how accurate, relevant, precise and complete the information provided by the IS is and how it fits users' needs [44]. | Alhalabi et al., 2022 [41]; Tisland et al., 2022 [44] |
| Usage Characteristics | IV | Not defined. | Alhalabi et al., 2022 [41] |
| Perceived Satisfaction | IV | Not defined. | Alhalabi et al., 2022 [41] |
| Importance | IV | Not defined. | Alhalabi et al., 2022 [41] |
| Acceptance of Automation | IV | Not defined. | Alhalabi et al., 2022 [41] |
| Socio-Demographic Factors | IV | Age. | Suter et al., 2022 [43]; Srikanth & Dwarakesh, 2023 [52]; Pislaru et al., 2024 [61] |
| | CV | Age. | Kim et al., 2023 [50] |
| | IV | Gender. | Suter et al., 2022 [43]; Srikanth & Dwarakesh, 2023 [52] |
| | CV | Gender. | Kim et al., 2023 [50] |
| | IV | Education. | Suter et al., 2022 [43]; Pislaru et al., 2024 [61] |
| | IV | Monthly income. | Suter et al., 2022 [43]; Pislaru et al., 2024 [61] |
| | IV | Residence (rural or urban). | Suter et al., 2022 [43]; Pislaru et al., 2024 [61] |
| | IV | Employment status. | Pislaru et al., 2024 [61] |
| Political Support | IV | An attitude by which individuals situate themselves, either favorably or unfavorably, vis-à-vis the political community, the political regime, and the political authorities. | Suter et al., 2022 [43] |
| Trust in Technology | IV | The belief that a technology has the attributes necessary to perform as expected. | Suter et al., 2022 [43] |
| | IV | A person's belief that the operation of a technology can be trusted to obtain online information. | Pribadi et al., 2023 [51] |
| | IV | Not defined. | Yang & Wang, 2023 [53] |
| System Quality | IV | The extent of how consistent, easy to use and responsive an IS is, and to what degree it fits the users' needs. | Tisland et al., 2022 [44] |
| Service Quality | IV | The extent of the reliability, responsiveness, assurance and empathy of an IS. | Tisland et al., 2022 [44] |
| | IV | Services that provide up-to-date, accurate, and well-structured information. | Kim et al., 2023 [50] |
| Human Empowerment | IV | A personally meaningful increase in power that a person obtains through his or her own efforts. | Tisland et al., 2022 [44] |
| Trusting Beliefs | IV | The confident truster perception that the trustee has attributes that are beneficial to the truster. | Tisland et al., 2022 [44] |
| Competence of User | CV | An individual's belief in his or her capability to use the system in tasks with relevant knowledge, skills and confidence. | Tisland et al., 2022 [44] |
| Impact of System Usage | CV | The degree to which an individual can influence task outcomes based on the use of the system. | Tisland et al., 2022 [44] |
| Meaning of System Usage | CV | The importance an individual attaches to system usage in relation to his or her own ideals or standards. | Tisland et al., 2022 [44] |
| Self-Determination | CV | An individual's sense of having choices (i.e., authority to make his or her own decisions) about system usage. | Tisland et al., 2022 [44] |
| Data Privacy | IV, MoV | Individuals' control over the release of personal information including its collection, use, access, and correction of errors [45]. | Willems et al., 2022 [45] |
| Personal Information | IV, MoV | The amount of personal information a user is required to share. | Willems et al., 2022 [45] |
| Anthropomorphic Design | IV, MoV | Human-likeness design. | Willems et al., 2022 [45] |
| Hedonic Motivation | IV | Users' perceptions of the engagement and experiential aspects of technology. | Abbas et al., 2023 [47] |
| Habit | IV | Not defined. | Abbas et al., 2023 [47] |
| Social Influence | IV | Users' perceptions of attitudes and priorities of significant others. | Abbas et al., 2023 [47] |
| | IV | The extent to which an individual feels that significant others think he or she should use the new system. | Abed, 2024 [54] |
| | IV | Not defined. | Moreira & Naranjo-Zolotov, 2024 [60] |
| Facilitating Conditions | IV | Technology availability or needed infrastructure to benefit from the technology. | Abbas et al., 2023 [47] |
| | IV | The extent to which a person believes that an organizational and technical infrastructure exists to support the use of the system. | Abed, 2024 [54] |
| Human Involvement | IV | The degree of human involvement in the process of decision making in public services. | Horvath et al., 2023 [48] |
| AI Literacy | MoV | A user's ability to recognize instances of AI and distinguish between technological artefacts that use and do not use AI. | Horvath et al., 2023 [48] |
| Proactivity | IV | The capability of a chatbot to autonomously act on behalf of users. | Ju et al., 2023 [49] |
| | IV | The capability of a chatbot to provide additional, useful information to keep the conversation alive. | Li & Wang, 2024 [59] |
| Conscientiousness | IV | The capacity of a chatbot to demonstrate attentiveness to the conversation at hand. | Ju et al., 2023 [49] |
| | IV | The capability of a chatbot to exhibit focused engagement in the ongoing conversation. | Li & Wang, 2024 [59] |
| Communicability | IV | The capacity of a chatbot to convey its underlying features and interactive principles to users. | Ju et al., 2023 [49] |
| Emotional Intelligence | IV | The capability of a chatbot to appraise and express feelings, regulate effective reactions, and harness emotions to solve problems. | Ju et al., 2023 [49] |
| Identity Consistency | IV | The capability of a chatbot to present itself as a particular social actor. | Ju et al., 2023 [49] |
| Service Reliability | IV | AI public services with suitable functionality and quick resolution of errors. | Kim et al., 2023 [50] |
| Responsiveness | IV | AI public services that provide timely updates and feedback on users' inquiries, and immediate action when problems arise. | Kim et al., 2023 [50] |
| Security of Technology | IV | Not defined. | Kim et al., 2023 [50] |
| Satisfaction | IV | Not defined. | Pribadi et al., 2023 [51]; Srikanth & Dwarakesh, 2023 [52] |
| | MoV | Not defined. | Kim et al., 2023 [50] |
| Communication Channels | CV | AI services within Korea's public sector. | Kim et al., 2023 [50] |
| Government Accountability | IV | A government is considered accountable when all its activities are in accordance with the needs and interests of the wider community. | Pribadi et al., 2023 [51] |
| Community Culture | IV | Includes symbols, perceptions, behavior, and creation of works. | Pribadi et al., 2023 [51] |
| Trust in Government | IV | The public's assessment of government based on their perceptions of political authorities', agencies' and institutions' integrity and ability to provide services according to the expectations of citizens. | Pribadi et al., 2023 [51] |
| | IV | Not defined. | Yang & Wang, 2023 [53] |
| Perceived Costs | IV | Not defined. | Pribadi et al., 2023 [51] |
| Occupation | IV | – | Srikanth & Dwarakesh, 2023 [52] |
| Method of Filing Income Tax Returns | IV | Citizens e-file their income taxes on their own; citizens consult tax professionals for e-filing their income taxes. | Srikanth & Dwarakesh, 2023 [52] |
| Replacement of Human Tax Advisors by AI | IV | Individuals' opinion on whether AI can replace human tax advisors in the future. | Srikanth & Dwarakesh, 2023 [52] |
| Data Capacity | IV | The size of the underlying data supporting government services. | Yang & Wang, 2023 [53] |
| Data Quality | IV | The quality of the underlying data that underpin government services. | Yang & Wang, 2023 [53] |
| Own Technology | IV | Key technologies are developed domestically, not imported from abroad. | Yang & Wang, 2023 [53] |
| Technology Maturity | IV | The technology has been researched, developed and verified to the extent that it can be applied in practice. | Yang & Wang, 2023 [53] |
| Technology Ethics | IV | In the process of using technology, ethics and rationality should be considered to achieve the purpose of technology. | Yang & Wang, 2023 [53] |
| Information Literacy | IV | The ability of data users to effectively utilize information awareness and information knowledge to acquire, process, create and exchange information. | Yang & Wang, 2023 [53] |
| Meeting Demands | IV | Government services can meet the diverse needs of the public. | Yang & Wang, 2023 [53] |
| Relative Advantage | IV | Advantages of the ChatGPT model compared with traditional government services. | Yang & Wang, 2023 [53] |
| Old and New in Parallel | IV | In the transitional period, the government adopts new forms of government affairs while retaining the traditional forms. | Yang & Wang, 2023 [53] |
| Oversight and Accountability | IV | Organizations (governments, enterprises, research and development institutions, etc.) and leading cadres who fail in their duties will be investigated for their principal, supervisory and leadership responsibilities. | Yang & Wang, 2023 [53] |
| Security Guarantee | IV | Clarification of the response to security issues, such as private security, data security, etc. | Yang & Wang, 2023 [53] |
| Public Officials' Literacy | IV | The proficiency of public officials in artificial intelligence knowledge and technology. | Yang & Wang, 2023 [53] |
| Attitude | IV | An individual's positive or negative feelings about performing the target behavior. | Abed, 2024 [54] |
| Behavioral Intention | IV | Not defined. | Abed, 2024 [54] |
| Use | IV | Use of chatbots. | El Gharbaoui, El Boukhari, et al., 2024 [55] |
| | IV | Use of recommender systems. | El Gharbaoui, El Boukhari, et al., 2024 [56] |
| Public Expectation | IV | Not defined. | Guo & Dong, 2024 [57]; Guo & Dong, 2024 [58] |
| Emotion Perception | IV | Not defined. | Guo & Dong, 2024 [57]; Guo & Dong, 2024 [58] |
| System Perception | IV | Users' perceptions of a chatbot's utility, its intuitiveness, and its overall responsiveness. | Guo & Dong, 2024 [58] |
| Social Support | IV | The impact of other users, experts, and institutions, understanding how their shared wisdom, critiques, or general stances modulate the public's contentment with chatbot interactions. | Guo & Dong, 2024 [58] |
| Behavioral Quality | IV, MeV | Tangible and observed user behaviors, mapping out their current engagement trajectories and future interaction blueprints with the chatbot. | Guo & Dong, 2024 [58] |
| Warmth Perception | MeV | A user's perception of a chatbot's friendliness, kindness, and warmth. | Li & Wang, 2024 [59] |
| Competence Perception | MeV | A user's perception of a chatbot's competency, intelligence, and skillfulness. | Li & Wang, 2024 [59] |
| Manners | IV | The capability of a chatbot to manifest polite behavior and conversational habits. | Li & Wang, 2024 [59] |
| Task-Oriented Language Style | IV | The capability of a chatbot to provide task-centered conversation guidance and goal-oriented verbal cues. | Li & Wang, 2024 [59] |
| Fairness | IV | The capability of a chatbot to provide the same level of service to all user groups. | Li & Wang, 2024 [59] |
| Professionalization | IV | The capability of a chatbot to solve problems with clear-cut solutions in a specific task. | Li & Wang, 2024 [59] |
| AI Awareness | IV | Not defined. | Moreira & Naranjo-Zolotov, 2024 [60] |
| Privacy Concerns | IV | Not defined. | Moreira & Naranjo-Zolotov, 2024 [60] |
| Online Self-Efficacy | IV | Belief in one's ability to protect one's privacy online. | Moreira & Naranjo-Zolotov, 2024 [60] |
| Trust in AI | IV | The degree to which the AI is trusted to fulfil a user's purpose of usage. | Moreira & Naranjo-Zolotov, 2024 [60] |
| Political Interest | IV | The degree of citizens' interest in political affairs and their familiarity with the decisions made by the government. | Moreira & Naranjo-Zolotov, 2024 [60] |
| Government Response Method | IV | Human response; human-predefined automated response; intelligently generated automated response. | Wang et al., 2024 [62] |
| Image of Government Respondent | IV | No gender; male; female. | Wang et al., 2024 [62] |
| Channel for Initiating Contact | IV | In-person; telephone; Internet. | Wang et al., 2024 [62] |
| Purpose of Initiating Contact | IV | Consultation; offer advice; complaint report. | Wang et al., 2024 [62] |
| Complexity of Matter | IV | Simple; complex. | Wang et al., 2024 [62] |
| Urgency of Matter | IV | Not urgent; urgent. | Wang et al., 2024 [62] |
| Service Accuracy | IV | How reliably and precisely AI-integrated public services meet diverse citizen needs. | Rulandari & Silalahi [63] |
| Transparency | IV | The extent to which AI-driven public services disseminate reliable, timely and accessible information. | Rulandari & Silalahi [63] |
| Trust in AI Services | IV | The confidence individuals place in government-provided AI services to act effectively and in their best interests. | Rulandari & Silalahi [63] |
| Perceived Service Value | IV | The tangible and intangible benefits citizens perceive from AI integration. | Rulandari & Silalahi [63] |
| Human–AI Collaboration | MoV | The synergistic interaction between human decision-makers (e.g., civil servants) and AI systems to optimize task efficiency and service quality. | Rulandari & Silalahi [63] |

References

  1. Wirtz, B.W.; Müller, W.M. An integrated artificial intelligence framework for public management. Public Manag. Rev. 2019, 21, 1076–1100. [Google Scholar] [CrossRef]
  2. Bannister, F.; Connolly, R. ICT, public values and transformative government: A framework and programme for research. Gov. Inf. Q. 2014, 31, 119–128. [Google Scholar] [CrossRef]
  3. Meijer, A.; Bolívar, M.P.R. Governing the smart city: A review of the literature on smart urban governance. Int. Rev. Adm. Sci. 2016, 82, 392–408. [Google Scholar] [CrossRef]
  4. Carter, L.; Bélanger, F. The utilization of e-government services: Citizen trust, innovation and acceptance factors. Inf. Syst. J. 2005, 15, 5–25. [Google Scholar] [CrossRef]
  5. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425. [Google Scholar] [CrossRef]
  6. Shareef, M.A.; Kumar, V.; Kumar, U.; Dwivedi, Y.K. e-Government Adoption Model (GAM): Differing service maturity levels. Gov. Inf. Q. 2011, 28, 17–35. [Google Scholar] [CrossRef]
  7. Kleizen, B.; Van Dooren, W.; Verhoest, K.; Tan, E. Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Gov. Inf. Q. 2023, 40, 101834. [Google Scholar] [CrossRef]
  8. Wang, Y.-F.; Chen, Y.-C.; Chien, S.-Y.; Wang, P.-J. Citizens’ trust in AI-enabled government systems. Inf. Polity 2024, 29, 293–312. [Google Scholar] [CrossRef]
  9. Alshehri, M.; Drew, S.; Alhussain, T.; Alghamdi, R. The Effects of Website Quality on Adoption of E-Government Service: An Empirical Study Applying UTAUT Model Using SEM. arXiv 2012, arXiv:1211.2410. [Google Scholar] [CrossRef]
  10. AlAwadhi, S.; Morris, A. The Use of the UTAUT Model in the Adoption of E-Government Services in Kuwait. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008), Waikoloa, HI, USA, 7–10 January 2008; p. 219. [Google Scholar] [CrossRef]
  11. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 2022, 54, 1–35. [Google Scholar] [CrossRef]
  12. Straub, V.J.; Morgan, D.; Bright, J.; Margetts, H. Artificial intelligence in government: Concepts, standards, and a unified framework. Gov. Inf. Q. 2023, 40, 101881. [Google Scholar] [CrossRef]
  13. Tomaževič, N.; Murko, E.; Aristovnik, A. Organisational Enablers of Artificial Intelligence Adoption in Public Institutions: A Systematic Literature Review. Cent. Eur. Public Adm. Rev. 2024, 22, 109–138. [Google Scholar] [CrossRef]
  14. Zuiderwijk, A.; Chen, Y.-C.; Salem, F. Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Gov. Inf. Q. 2021, 38, 101577. [Google Scholar] [CrossRef]
  15. Al-Besher, A.; Kumar, K. Use of artificial intelligence to enhance e-government services. Meas. Sens. 2022, 24, 100484. [Google Scholar] [CrossRef]
  16. Hakimi, M.; Salem Hamidi, M.; Samim Miskinyar, M.; Sazish, B. Integrating Artificial Intelligence into E-Government: Navigating Challenges, Opportunities, and Policy Implications. Int. J. Acad. Pract. Res. 2023, 2, 11–21. [Google Scholar] [CrossRef]
  17. Alshahrani, A.; Griva, A.; Dennehy, D.; Mäntymäki, M. Artificial intelligence and decision-making in government functions: Opportunities, challenges and future research. Transform. Gov. People Process Policy 2024, 18, 678–698. [Google Scholar] [CrossRef]
  18. Caiza, G.; Sanguña, V.; Tusa, N.; Masaquiza, V.; Ortiz, A.; Garcia, M.V. Navigating Governmental Choices: A Comprehensive Review of Artificial Intelligence’s Impact on Decision-Making. Informatics 2024, 11, 64. [Google Scholar] [CrossRef]
  19. Ivic, A.; Milicevic, A.; Krstic, D.; Kozma, N.; Havzi, S. The Challenges and Opportunities in Adopting AI, IoT and Blockchain Technology in E-Government: A Systematic Literature Review. In Proceedings of the 2022 International Conference on Communications, Information, Electronic and Energy Systems (CIEES), Veliko Tarnovo, Bulgaria, 24–26 November 2022; pp. 1–6. [Google Scholar] [CrossRef]
  20. Totonchi, A. Artificial Intelligence in E-Government: Identifying and Addressing Key Challenges. Malays. J. Inf. Commun. Technol. 2025, 10, 10–24. [Google Scholar] [CrossRef]
  21. Tveita, L.J.; Hustad, E. Benefits and Challenges of Artificial Intelligence in Public sector: A Literature Review. Procedia Comput. Sci. 2025, 256, 222–229. [Google Scholar] [CrossRef]
  22. Pini, B.; Dolci, V.; Gianatti, E.; Petroni, A.; Bigliardi, B.; Barani, A. Artificial Intelligence as a Facilitator for Public Administration Procedures: A Literature Review. Procedia Comput. Sci. 2025, 253, 2537–2546. [Google Scholar] [CrossRef]
  23. Arends, M.; Mawela, T. Chatbot Adoption in Public Service Delivery. In Proceedings of the 2024 International Conference on Computer and Applications (ICCA), Cairo, Egypt, 22–24 December 2024; pp. 1–7. [Google Scholar] [CrossRef]
  24. Senadheera, S.; Yigitcanlar, T.; Desouza, K.C.; Mossberger, K.; Corchado, J.; Mehmood, R.; Li, R.Y.M.; Cheong, P.H. Understanding Chatbot Adoption in Local Governments: A Review and Framework. J. Urban Technol. 2024, 32, 35–69. [Google Scholar] [CrossRef]
  25. Azzahro, F.; Hidayanto, A.N.; Shihab, M.R. Examining factors shaping citizens’ perception of artificial intelligence in government: A systematic literature review. Soc. Sci. Humanit. Open 2025, 11, 101518. [Google Scholar] [CrossRef]
  26. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  27. Wagner, G.; Lukyanenko, R.; Paré, G. Artificial intelligence and the conduct of literature reviews. J. Inf. Technol. 2022, 37, 209–226. [Google Scholar] [CrossRef]
  28. Bolaños, F.; Salatino, A.; Osborne, F.; Motta, E. Artificial intelligence for literature reviews: Opportunities and challenges. Artif. Intell. Rev. 2024, 57, 259. [Google Scholar] [CrossRef]
  29. United Nations Department of Economic and Social Affairs, Addendum on AI and Digital Government: E-Government Survey 2024, United Nations. 2024. Available online: https://desapublications.un.org/sites/default/files/publications/2024-10/Addendum%20on%20AI%20and%20Digital%20Government%20%20E-Government%20Survey%202024.pdf (accessed on 13 March 2025).
  30. Khanal, S.; Zhang, H.; Taeihagh, A. Development of New Generation of Artificial Intelligence in China: When Beijing’s Global Ambitions Meet Local Realities. J. Contemp. China 2025, 34, 19–42. [Google Scholar] [CrossRef]
  31. United Nations Department of Economic and Social Affairs. UN E-Government Knowledgebase–Data Center. Available online: https://publicadministration.un.org/egovkb/en-us/Data-Center (accessed on 3 April 2025).
  32. United Nations Department of Economic and Social Affairs. United Nations E-Government Survey 2024: Accelerating Digital Transformation for Sustainable Development–With the Addendum on Artificial Intelligence; United Nations: New York, NY, USA, 2024.
  33. Van Eck, N.J.; Waltman, L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 2010, 84, 523–538. [Google Scholar] [CrossRef]
  34. Akkaya, C.; Krcmar, H. Potential Use of Digital Assistants by Governments for Citizen Services: The Case of Germany. In Proceedings of the 20th Annual International Conference on Digital Government Research, Dubai, United Arab Emirates, 18–20 June 2019; pp. 81–90. [Google Scholar] [CrossRef]
  35. Zabaleta, K.; Lago, A.B.; Lopez-De-Ipina, D.; Di Modica, G.; Santos De La Camara, R.; Pistore, M. Combining Human and Machine Intelligence to foster wider adoption of e-services. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; pp. 1854–1859. [Google Scholar] [CrossRef]
  36. Stamatis, A.; Gerontas, A.; Dasyras, A.; Tambouris, E. Using chatbots and life events to provide public service information. In Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, Athens, Greece, 23–25 September 2020; ACM: New York, NY, USA, 2020; pp. 54–61. [Google Scholar] [CrossRef]
  37. Antoniadis, P.; Tambouris, E. PassBot: A chatbot for providing information on Getting a Greek Passport. In Proceedings of the 14th International Conference on Theory and Practice of Electronic Governance, Athens, Greece, 6–8 October 2021; ACM: New York, NY, USA, 2021; pp. 292–297. [Google Scholar] [CrossRef]
  38. Baldauf, M.; Zimmermann, H.-D.; Pedron, C. Exploring Citizens’ Attitudes Towards Voice-Based Government Services in Switzerland. In International Conference on Human-Computer Interaction; Kurosu, M., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 229–238. [Google Scholar] [CrossRef]
  39. Chohan, S.R.; Hu, G.; Khan, A.U.; Pasha, A.T.; Sheikh, M.A. Design and behavior science in government-to-citizens cognitive-communication: A study towards an inclusive framework. Transform. Gov. People Process Policy 2021, 15, 532–549. [Google Scholar] [CrossRef]
  40. Zhu, Y.; Janssen, M.; Wang, R.; Liu, Y. It Is Me, Chatbot Working to Address the COVID-19 Outbreak-Related Mental Health Issues in China. User Experience, Satisfaction, and Influencing Factors. Int. J. Hum.–Comput. Interact. 2021, 38, 1182–1194. [Google Scholar] [CrossRef]
  41. Alhalabi, M.; Al Dhaghari, B.; Hussein, N.; Ehtesham, H.; Laconsay, J.; Khelifi, A.; Salem, A.; Ghazal, M. M-Government Smart Service using AI Chatbots: Evidence from the UAE. In Proceedings of the 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), New York, NY, USA, 8–9 May 2022; pp. 325–330. [Google Scholar] [CrossRef]
  42. Patsoulis, G.; Promikyridis, R.; Tambouris, E. Integration of chatbots with Knowledge Graphs in eGovernment: The case of Getting a Passport. In Proceedings of the 25th Pan-Hellenic Conference on Informatics, in PCI ’21, New York, NY, USA, 26–28 November 2021; Association for Computing Machinery: New York, NY, USA, 2022; pp. 425–429. [Google Scholar] [CrossRef]
  43. Suter, V.; Meckel, M.; Shahrezaye, M.; Steinacker, L. AI Suffrage: A four-country survey on the acceptance of an automated voting system. In Proceedings of the 55th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2022. [Google Scholar] [CrossRef]
  44. Tisland, I.; Sodefjed, M.L.; Vassilakopoulou, P.; Pappas, I.O. The Role of Quality, Trust, and Empowerment in Explaining Satisfaction and Use of Chatbots in e-government. In The Role of Digital Technologies in Shaping the Post-Pandemic World, Conference on e-Business, e-Services and e-Society, Newcastle upon Tyne, UK, 13–14 September 2022; Papagiannidis, S., Alamanos, E., Gupta, S., Dwivedi, Y.K., Mäntymäki, M., Pappas, I.O., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 279–291. [Google Scholar] [CrossRef]
  45. Willems, J.; Schmid, M.J.; Vanderelst, D.; Vogel, D.; Ebinger, F. AI-driven public services and the privacy paradox: Do citizens really care about their privacy? Public Manag. Rev. 2022, 25, 2116–2134. [Google Scholar] [CrossRef]
  46. Zhu, Y.; Wang, R.; Pu, C. “I am chatbot, your virtual mental health adviser.” What drives citizens’ satisfaction and continuance intention toward mental health chatbots during the COVID-19 pandemic? An empirical study in China. Digit. Health 2022, 8, 20552076221090031. [Google Scholar] [CrossRef]
  47. Abbas, N.; Følstad, A.; Bjørkli, C.A. Chatbots as Part of Digital Government Service Provision—A User Perspective. In Chatbot Research and Design; Følstad, A., Araujo, T., Papadopoulos, S., Law, E.L.-C., Luger, E., Goodwin, M., Brandtzaeg, P.B., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 66–82. [Google Scholar] [CrossRef]
  48. Horvath, L.; James, O.; Banducci, S.; Beduschi, A. Citizens’ acceptance of artificial intelligence in public services: Evidence from a conjoint experiment about processing permit applications. Gov. Inf. Q. 2023, 40, 101876. [Google Scholar] [CrossRef]
  49. Ju, J.; Meng, Q.; Sun, F.; Liu, L.; Singh, S. Citizen preferences and government chatbot social characteristics: Evidence from a discrete choice experiment. Gov. Inf. Q. 2023, 40, 101785. [Google Scholar] [CrossRef]
  50. Kim, Y.; Myeong, S.; Ahn, M.J. Living Labs for AI-Enabled Public Services: Functional Determinants, User Satisfaction, and Continued Use. Sustainability 2023, 15, 8672. [Google Scholar] [CrossRef]
  51. Pribadi, U.; Juhari; Ibrahim, M.A.; Kurniawan, C. Pivotal Factors Affecting Citizens in Using Smart Government Services in Indonesia. In Proceedings of the Eighth International Congress on Information and Communication Technology, London, UK, 20–23 February 2023; Springer: Cham, Switzerland, 2023; pp. 1087–1099. [Google Scholar] [CrossRef]
  52. Srikanth, K.; Dwarakesh, B.S. A Study on Assessees Perception Towards AIPowered Income Tax Filing in Chennai City. J. Dev. Econ. Manag. Res. Stud. 2024, 11, 103–112. [Google Scholar] [CrossRef]
  53. Yang, L.; Wang, J. Factors influencing initial public acceptance of integrating the ChatGPT-type model with government services. Kybernetes 2023, 53, 4948–4975. [Google Scholar] [CrossRef]
  54. Abed, S.S.S. Understanding the Determinants of Using Government AI-Chatbots by Citizens in Saudi Arabia. Int. J. Electron. Gov. Res. 2024, 20, 1–20. [Google Scholar] [CrossRef]
55. El Gharbaoui, O.; El Boukhari, H.; Salmi, A. Chatbots and Citizen Satisfaction: Examining the Role of Trust in AI-Chatbots as a Moderating Variable. TEM J. 2024, 13, 1825–1836. [Google Scholar] [CrossRef]
  56. El Gharbaoui, O.; El Boukhari, H.; Salmi, A. The transformative power of recommender systems in enhancing citizens’ satisfaction: Evidence from the Moroccan public sector. Innov. Mark. 2024, 20, 224–236. [Google Scholar] [CrossRef]
57. Guo, Y.; Dong, P. Emotion Perception, Public Expectations, and Public Satisfaction: A Behaviour Experimental Study on Government Chatbots in Government Service Scenarios. In Proceedings of the 25th Annual International Conference on Digital Government Research (dg.o ’24), Taipei, Taiwan, 11–14 June 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 63–69. [Google Scholar] [CrossRef]
  58. Guo, Y.; Dong, P. Factors Influencing User Favorability of Government Chatbots on Digital Government Interaction Platforms across Different Scenarios. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 818–845. [Google Scholar] [CrossRef]
  59. Li, X.; Wang, J. Should government chatbots behave like civil servants? The effect of chatbot identity characteristics on citizen experience. Gov. Inf. Q. 2024, 41, 101957. [Google Scholar] [CrossRef]
  60. Moreira, J.; Naranjo-Zolotov, M. Exploring Potential Drivers of Citizen’s Acceptance of Artificial Intelligence Use in e-Government. In World Conference on Information Systems and Technologies; Springer: Cham, Switzerland, 2024; pp. 336–345. [Google Scholar] [CrossRef]
  61. Pislaru, M.; Ciprian, V.; Ivascu, L.; Mircea, I. Citizen-Centric Governance: Enhancing Citizen Engagement through Artificial Intelligence Tools. Sustainability 2024, 16, 2686. [Google Scholar] [CrossRef]
  62. Wang, S.; Min, C.; Liang, Z.; Zhang, Y.; Gao, Q. The decision-making by citizens: Evaluating the effects of rule-driven and learning-driven automated responders on citizen-initiated contact. Comput. Hum. Behav. 2024, 161, 108413. [Google Scholar] [CrossRef]
  63. Rulandari, N.; Silalahi, A.D.K. Achieving effectiveness of public service in AI-enabled service from public value theory: Does human–AI collaboration matters? Transform. Gov. People Process Policy 2025, 19, 428–452. [Google Scholar] [CrossRef]
  64. Critical Appraisal Skills Programme (CASP). CASP Qualitative Checklist. n.d. Available online: https://casp-uk.net/casp-tools-checklists/systematic-review-checklist/ (accessed on 1 September 2025).
  65. Lincoln, Y.S.; Guba, E.G. Naturalistic Inquiry; SAGE: Newbury Park, CA, USA, 1985. [Google Scholar]
  66. Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2021, arXiv:2108.07258. [Google Scholar] [CrossRef]
  67. Salah, M.; Abdelfattah, F.; Al Halbusi, H. Generative Artificial Intelligence (ChatGPT & Bard) in Public Administration Research: A Double-Edged Sword for Street-Level Bureaucracy Studies. Int. J. Public Adm. 2023, 1–7. [Google Scholar] [CrossRef]
  68. Goher, G.N. Navigating the integration of ChatGPT in UAE’s government sector: Challenges and opportunities. Digit. Transform. Soc. 2025, 4, 57–72. [Google Scholar] [CrossRef]
  69. Cantens, T. How will the state think with ChatGPT? The challenges of generative artificial intelligence for public administrations. AI Soc. 2025, 40, 133–144. [Google Scholar] [CrossRef]
70. Dreyling, R.; Koppel, T.; Tammet, T.; Pappel, I. Challenges of Generative AI Chatbots in Public Services—An Integrative Review. SSRN 2024, preprint 4850714. [Google Scholar] [CrossRef]
  71. Cambridge Dictionary. Attitude. Available online: https://dictionary.cambridge.org/dictionary/english/attitude (accessed on 19 April 2025).
  72. Fishbein, M.; Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research; Addison-Wesley Series in Social Psychology; Addison-Wesley: Reading, MA, USA, 1975. [Google Scholar]
  73. Al-Mamary, Y.H.; Al-nashmi, M.; Hassan, Y.A.G.; Shamsuddin, A. A Critical Review of Models and Theories in Field of Individual Acceptance of Technology. Int. J. Hybrid Inf. Technol. 2016, 9, 143–158. [Google Scholar] [CrossRef]
  74. Simon, B. Wissensmedien im Bildungssektor. Eine Akzeptanzuntersuchung an Hochschulen; WU Vienna: Wien, Austria, 2001. [Google Scholar]
  75. Dillon, A.; Morris, M.G. User acceptance of information technology: Theories and models. Annu. Rev. Inf. Sci. Technol. 1996, 30, 3–32. [Google Scholar]
  76. Robert, W.Z. Individual Acceptance of Information Technologies. In Framing the Domains of IT Management: Projecting the Future Through the Past; Pinnaflex Educational Resources: Cincinnati, OH, USA, 2000; pp. 85–104. [Google Scholar]
  77. Nikou, S.; De Reuver, M.; Mahboob Kanafi, M. Workplace literacy skills—How information and digital literacy affect adoption of digital technology. J. Doc. 2022, 78, 371–391. [Google Scholar] [CrossRef]
  78. Schonlau, M.; Van Soest, A.; Kapteyn, A.; Couper, M. Selection Bias in Web Surveys and the Use of Propensity Scores. Sociol. Methods Res. 2009, 37, 291–318. [Google Scholar] [CrossRef]
  79. Eckman, S. Does the Inclusion of Non-Internet Households in a Web Panel Reduce Coverage Bias? Soc. Sci. Comput. Rev. 2016, 34, 41–58. [Google Scholar] [CrossRef]
  80. Kouam, F.; William, A. Assimilating Mediating and Moderating Variables in Academic Research: Role and Significance. Soc. Sci. Res. Netw. 2024, 143. [Google Scholar] [CrossRef]
  81. Özbek, V.; Alnıaçık, Ü.; Koc, F.; Akkılıç, M.E.; Kaş, E. The Impact of Personality on Technology Acceptance: A Study on Smart Phone Users. Procedia–Soc. Behav. Sci. 2014, 150, 541–551. [Google Scholar] [CrossRef]
  82. Barnett, T.; Pearson, A.W.; Pearson, R.; Kellermanns, F.W. Five-factor model personality traits as predictors of perceived and actual usage of technology. Eur. J. Inf. Syst. 2015, 24, 374–390. [Google Scholar] [CrossRef]
  83. Svendsen, G.B.; Johnsen, J.-A.K.; Almås-Sørensen, L.; Vittersø, J. Personality and technology acceptance: The influence of personality factors on the core constructs of the Technology Acceptance Model. Behav. Inf. Technol. 2013, 32, 323–334. [Google Scholar] [CrossRef]
  84. Behrenbruch, K.; Söllner, M.; Leimeister, J.M.; Schmidt, L. Understanding Diversity–The Impact of Personality on Technology Acceptance. In Human-Computer Interaction–INTERACT 2013; Kotzé, P., Marsden, G., Lindgaard, G., Wesson, J., Winckler, M., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8120, pp. 306–313. [Google Scholar] [CrossRef]
  85. Organisation for Economic Co-operation and Development (OECD). Hello, World: Artificial Intelligence and Its Use in the Public Sector. In OECD Working Papers on Public Governance; OECD: Paris, France, 2019; p. 36. [Google Scholar] [CrossRef]
  86. Department of Economic and Social Affairs. United Nations E-Government Survey 2022: The Future of Digital Government, 1st ed.; United Nations e-Government Survey Series; United Nations Publications: Bloomfield, NY, USA, 2022. [Google Scholar]
  87. Delone, W.; McLean, E. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar] [CrossRef]
  88. Hamid, A.A.; Razak, F.Z.A.; Bakar, A.A.; Abdullah, W.S.W. The Effects of Perceived Usefulness and Perceived Ease of Use on Continuance Intention to Use E-Government. Procedia Econ. Financ. 2016, 35, 644–649. [Google Scholar] [CrossRef]
  89. Panagiotopoulos, I.; Dimitrakopoulos, G. An empirical investigation on consumers’ intentions towards autonomous driving. Transp. Res. Part C Emerg. Technol. 2018, 95, 773–784. [Google Scholar] [CrossRef]
  90. Pillai, R.; Sivathanu, B.; Dwivedi, Y.K. Shopping intention at AI-powered automated retail stores (AIPARS). J. Retail. Consum. Serv. 2020, 57, 102207. [Google Scholar] [CrossRef]
  91. Pavlou, P.A. Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model. Int. J. Electron. Commer. 2003, 7, 101–134. [Google Scholar] [CrossRef]
  92. Organisation for Economic Co-Operation and Development. OECD Recommendations on Artificial Intelligence. 2019. Available online: https://wecglobal.org/uploads/2019/07/2019_OECD_Recommendations-AI.pdf (accessed on 10 June 2025).
  93. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 10 June 2025).
  94. Janssen, A.; Grützner, L.; Breitner, M.H. Why do Chatbots fail? A Critical Success Factors Analysis. In Proceedings of the ICIS 2021 Proceedings, Austin, TX, USA, 12–15 December 2021; p. 6. Available online: https://aisel.aisnet.org/icis2021/hci_robot/hci_robot/6 (accessed on 17 June 2025).
  95. Helal, M.; Holthaus, P.; Lakatos, G.; Amirabdollahian, F. Chat Failures and Troubles: Reasons and Solutions. arXiv 2024, arXiv:2309.03708. [Google Scholar] [CrossRef]
  96. Li, C.-H.; Yeh, S.-F.; Chang, T.-J.; Tsai, M.-H.; Chen, K.; Chang, Y.-J. A Conversation Analysis of Non-Progress and Coping Strategies with a Banking Task-Oriented Chatbot. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–12. [Google Scholar] [CrossRef]
97. Diederich, S.; Lembcke, T.-B.; Brendel, A.B.; Kolbe, L.M. Understanding the Impact that Response Failure has on How Users Perceive Anthropomorphic Conversational Service Agents: Insights from an Online Experiment. AIS Trans. Hum.-Comput. Interact. 2021, 13, 82–103. [Google Scholar] [CrossRef]
  98. Mišić, J.; Van Est, R.; Kool, L. Good governance of public sector AI: A combined value framework for good order and a good society. AI Ethics 2025, 1–15. [Google Scholar] [CrossRef]
  99. UNESCO. Recommendation on the Ethics of Artificial Intelligence. UNESCO, Recommendation Adopted by the General Conference of UNESCO. 2022. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 24 August 2025).
  100. Dubber, M.D.; Pasquale, F.; Das, S. (Eds.) The Oxford Handbook of Ethics of AI; Oxford University Press: New York, NY, USA, 2020. [Google Scholar]
  101. Bannister, F.; Connolly, R. The Trouble with Transparency: A Critical Review of Openness in e-Government. Policy Internet 2011, 3, 1–30. [Google Scholar] [CrossRef]
  102. De Silva, D.; Alahakoon, D. An Artificial Intelligence Life Cycle: From Conception to Production. Patterns 2022, 3, 100489. [Google Scholar] [CrossRef]
  103. Nikiforova, A.; Lnenicka, M.; Milić, P.; Luterek, M.; Rodríguez Bolívar, M.P. From the Evolution of Public Data Ecosystems to the Evolving Horizons of the Forward-Looking Intelligent Public Data Ecosystem Empowered by Emerging Technologies. In Electronic Government; Janssen, M., Crompvoets, J., Gil-Garcia, J.R., Lee, H., Lindgren, I., Nikiforova, A., Viale Pereira, G., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2024; Volume 14841, pp. 402–418. [Google Scholar] [CrossRef]
104. Vericir. Ethics & AI in Public Administration: Ensuring Trust, Transparency, and Fairness; Vericir: 2025. Available online: https://archit3ct.io/wp-content/uploads/2025/03/Ethics-AI-in-Public-Administration_-Ensuring-Trust-Transparency-and-Fairness.pdf (accessed on 25 August 2025).
  105. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). 2024. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 28 August 2025).
  106. Global Partnership on Artificial Intelligence. Global Partnership on Artificial Intelligence. n.d. Available online: https://gpai.ai (accessed on 29 August 2025).
Figure 1. PRISMA flow diagram of our review methodology.
Figure 2. Distribution of research papers per year.
Figure 3. Distribution of research papers per author.
Figure 4. Distribution of research papers per country.
Figure 5. EGDI 2024 score and number of papers per research paper country.
Figure 6. Keyword co-occurrence network generated by VOSviewer (Version 1.6.20).
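The network in Figure 6 is produced by VOSviewer from author keywords. For readers unfamiliar with the method, the minimal sketch below illustrates how co-occurrence edge weights are counted; the keyword lists are hypothetical, and this is not VOSviewer's implementation or the review's actual data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one per paper (illustrative only).
papers = [
    ["e-government", "chatbot", "trust"],
    ["chatbot", "citizen satisfaction", "trust"],
    ["e-government", "artificial intelligence", "chatbot"],
]

# Each unordered keyword pair appearing in the same paper adds 1 to the
# weight of the corresponding edge of the co-occurrence network.
edges = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        edges[(a, b)] += 1

# Strongest edges first (these become the thickest links in the map).
for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: {weight}")
```

VOSviewer additionally normalizes these raw weights and computes the node layout and clustering; the counting above is only the first stage.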
Figure 7. Components of attitude.
Table 1. Author affiliations.
Author Affiliation | Country | Number of Publications
Tsinghua University, Beijing | China | 4
Harbin Institute of Technology, Harbin | China | 3
Chongqing University, Chongqing | China | 2
Huazhong University of Science and Technology, Wuhan | China | 2
Nanjing University, Nanjing | China | 1
Ramakrishna Mission Vivekananda College, Chennai | India | 1
Zhejiang University, Hangzhou | China | 1
University of Macedonia, Thessaloniki | Greece | 3
Hellenic Open University | Greece | 2
Mohamed V University, Rabat | Morocco | 2
Sidi Mohamed ben Abdellah University, Fez | Morocco | 2
Norwegian University of Science and Technology, Trondheim | Norway | 1
SINTEF Research Foundation, Oslo | Norway | 1
University of Agder, Kristiansand | Norway | 1
University of Oslo | Norway | 1
Bina Nusantara University, Jakarta | Indonesia | 1
Universitas Muhammadiyah Palangkaraya, Palangkaraya | Indonesia | 1
Universitas Muhammadiyah Yogyakarta, Yogyakarta | Indonesia | 1
Eastern Switzerland University of Applied Sciences, St. Gallen | Switzerland | 1
University of St. Gallen | Switzerland | 1
Table 2. Overview of the research papers.
Study Reference | Title | Item Type | Journal/Conference | Objective(s)
Akkaya & Krcmar, 2019 [34] | Potential Use of Digital Assistants by Governments for Citizen Services: The Case of Germany | Conference Paper | Proceedings of the 20th Annual International Conference on Digital Government Research | To shed light on citizens’ views on the German government using digital assistants for citizen services.
Zabaleta et al., 2019 [35] | Combining Human and Machine Intelligence to foster wider adoption of e-services | Conference Paper | SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI | To understand how hybrid intelligence can improve on AI’s current limitations by making it easier for citizens to complete e-government services.
Stamatis et al., 2020 [36] | Using chatbots and life events to provide public service information | Conference Paper | Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance | To integrate chatbot technology with life events (LE) to improve public service information provision, and to implement a chatbot that uses LE information from a Greek e-government portal.
Antoniadis & Tambouris, 2021 [37] | PassBot: A chatbot for providing information on Getting a Greek Passport | Conference Paper | 14th International Conference on Theory and Practice of Electronic Governance | To investigate the use of chatbots in public services for the provision of personalized information.
Baldauf et al., 2021 [38] | Exploring Citizens’ Attitudes Towards Voice-Based Government Services in Switzerland | Conference Paper | International Conference on Human-Computer Interaction | To study how citizens view voice-based e-government services.
Chohan et al., 2021 [39] | Design and behavior science in government-to-citizens cognitive-communication: a study towards an inclusive framework | Journal Article | Transforming Government: People, Process and Policy | To present a modern AI cognitive-communication medium for G2C interactions, to capture G2C cognitive-communication information, and to investigate citizens’ intentions to use e-government channels.
Zhu et al., 2021 [40] | It Is Me, Chatbot: Working to Address the COVID-19 Outbreak-Related Mental Health Issues in China. User Experience, Satisfaction, and Influencing Factors | Journal Article | International Journal of Human–Computer Interaction | To explore the factors that drive users’ experience with mental health chatbots and to identify factors that significantly influence user satisfaction.
Alhalabi et al., 2022 [41] | M-Government Smart Service using AI Chatbots: Evidence from the UAE | Conference Paper | 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC) | To propose a novel AI-chatbot-based mobile application and to test its usefulness and citizens’ satisfaction.
Patsoulis et al., 2022 [42] | Integration of chatbots with Knowledge Graphs in eGovernment: The case of Getting a Passport | Conference Paper | Proceedings of the 25th Pan-Hellenic Conference on Informatics | To investigate the integration of chatbots with knowledge graphs for providing personalized information on public services.
Suter et al., 2022 [43] | AI Suffrage: A four-country survey on the acceptance of an automated voting system | Conference Paper | Proceedings of the 55th Hawaii International Conference on System Sciences | To explore how technology improves democratic governance and public opinion of these changes.
Tisland et al., 2022 [44] | The Role of Quality, Trust, and Empowerment in Explaining Satisfaction and Use of Chatbots in e-government | Conference Paper | Conference on e-Business, e-Services and e-Society | To examine the factors that influence citizens’ satisfaction with and use of government chatbots from a citizen perspective.
Willems et al., 2022 [45] | AI-driven public services and the privacy paradox: do citizens really care about their privacy? | Journal Article | Public Management Review | To test how citizens balance their privacy concerns against the perceived usefulness of AI applications in public services.
Zhu et al., 2022 [46] | “I am chatbot, your virtual mental health adviser.” What drives citizens’ satisfaction and continuance intention toward mental health chatbots during the COVID-19 pandemic? An empirical study in China | Journal Article | Digital Health | To assess whether the Theory of Consumption Values (TCV) can identify the determinants of user satisfaction and continuance intention, and to examine the relationship between them during the COVID-19 pandemic.
Abbas et al., 2023 [47] | Chatbots as Part of Digital Government Service Provision—A User Perspective | Conference Paper | International Workshop on Chatbot Research and Design | To gain insight into users’ perceptions of, and intentions to use, a chatbot for municipality information services.
Horvath et al., 2023 [48] | Citizens’ acceptance of artificial intelligence in public services: Evidence from a conjoint experiment about processing permit applications | Journal Article | Government Information Quarterly | To investigate how citizens perceive and accept the integration of AI in public service processes, particularly in the context of permit applications.
Ju et al., 2023 [49] | Citizen preferences and government chatbot social characteristics: Evidence from a discrete choice experiment | Journal Article | Government Information Quarterly | To explore the pivotal social characteristics of provincial government chatbots and how these affect citizens’ preferences for interactively engaging with them.
Kim et al., 2023 [50] | Living Labs for AI-Enabled Public Services: Functional Determinants, User Satisfaction, and Continued Use | Journal Article | Sustainability | To examine the impact of different functional factors on the continued use of AI-enabled public services.
Pribadi et al., 2023 [51] | Pivotal Factors Affecting Citizens in Using Smart Government Services in Indonesia | Conference Paper | Proceedings of the Eighth International Congress on Information and Communication Technology | To identify and analyze the key factors that influence citizens’ adoption and use of smart government services in Indonesia.
Srikanth & Dwarakesh, 2024 [52] | A Study on Assessees Perception Towards AIPowered Income Tax Filing in Chennai City | Journal Article | Journal of Development Economics and Management Research Studies | To understand taxpayers’ views on AI-based tax filing software, to assess whether it increases tax compliance, to determine whether it can replace human tax advisors or services, and to identify taxpayers’ main concerns about using it.
Yang & Wang, 2023 [53] | Factors influencing initial public acceptance of integrating the ChatGPT-type model with government services | Journal Article | Kybernetes | To identify the key factors that most influence public acceptance of a ChatGPT-type model being used in government services.
Abed, 2024 [54] | Understanding the Determinants of Using Government AI-Chatbots by Citizens in Saudi Arabia | Journal Article | International Journal of Electronic Government Research | To investigate user acceptance of e-government chatbots in Saudi Arabia.
El Gharbaoui et al., 2024 [55] | Chatbots and Citizen Satisfaction: Examining the Role of Trust in AI-Chatbots as a Moderating Variable | Journal Article | TEM Journal | To study how incorporating AI chatbots into public services affects citizens’ perceptions and satisfaction, and how trust in these technologies moderates that impact.
El Gharbaoui et al., 2024 [56] | The transformative power of recommender systems in enhancing citizens’ satisfaction: Evidence from the Moroccan public sector | Journal Article | Innovative Marketing | To evaluate the potential impact of implementing AI-powered recommender systems on citizen satisfaction within Moroccan public services.
Guo & Dong, 2024 [57] | Emotion Perception, Public Expectations, and Public Satisfaction: A Behaviour Experimental Study on Government Chatbots in Government Service Scenarios | Conference Paper | Proceedings of the 25th Annual International Conference on Digital Government Research | To analyze the dynamics of emotion perception and public expectations, and how government chatbots can be optimized to enhance citizens’ satisfaction.
Guo & Dong, 2024 [58] | Factors Influencing User Favorability of Government Chatbots on Digital Government Interaction Platforms across Different Scenarios | Journal Article | Journal of Theoretical and Applied Electronic Commerce Research | To determine the fundamental factors that influence users’ perceptions of government chatbots and to examine how these factors vary across diverse government services.
Li & Wang, 2024 [59] | Should government chatbots behave like civil servants? The effect of chatbot identity characteristics on citizen experience | Journal Article | Government Information Quarterly | To explore how distinct identity traits of government chatbots affect the way citizens perceive and interact with them.
Moreira & Naranjo-Zolotov, 2024 [60] | Exploring Potential Drivers of Citizen’s Acceptance of Artificial Intelligence Use in e-Government | Conference Paper | World Conference on Information Systems and Technologies | To evaluate the degree to which certain technology features and personal characteristics shape acceptance of AI use in e-government.
Pislaru et al., 2024 [61] | Citizen-Centric Governance: Enhancing Citizen Engagement through Artificial Intelligence Tools | Journal Article | Sustainability | To identify the potential for AI to make citizens’ communication with public administration more effective.
Wang et al., 2024 [62] | The decision-making by citizens: Evaluating the effects of rule-driven and learning-driven automated responders on citizen-initiated contact | Journal Article | Computers in Human Behavior | To study the impact of different categories of automated responders on citizens’ decisions to initiate contact with the government.
Rulandari & Silalahi, 2025 [63] | Achieving effectiveness of public service in AI-enabled service from public value theory: does human–AI collaboration matters? | Journal Article | Transforming Government: People, Process and Policy | To evaluate how AI enhances service accuracy, transparency, and trust, and how human–AI synergy can maximize public value.
Table 3. Study methodologies overview.
Study Reference | Theoretical Framework(s) | Sample Size (N) | Availability of Questionnaire | Statistical Analysis
Akkaya & Krcmar, 2019 [34] | Not Specified | 1077 | Partially available | Descriptive Statistics
Zabaleta et al., 2019 [35] | Not Specified | 278 | No | Descriptive Statistics
Stamatis et al., 2020 [36] | Technology Acceptance Model (TAM) | 19 | No | Descriptive Statistics
Antoniadis & Tambouris, 2021 [37] | Technology Acceptance Model (TAM) | 53 | No | Descriptive Statistics
Baldauf et al., 2021 [38] | Not Specified | 397 | No | Descriptive Statistics
Chohan et al., 2021 [39] | Information Richness Theory (IRT); Cognitive Theory of Trust; Theory of Perceived Risk and Attractiveness; IS Success Model | 266 | No | PLS-SEM
Zhu et al., 2021 [40] | Theory of Consumption Values (TCV) | 295 | Yes | PLS-SEM
Alhalabi et al., 2022 [41] | IS Success Model | 200 | Yes | SPSS AMOS
Patsoulis et al., 2022 [42] | Technology Acceptance Model (TAM) | 65 | No | Descriptive Statistics
Suter et al., 2022 [43] | Technology Acceptance Model (TAM) | 6043 | Yes | Proportional Odds Logistic Regression
Tisland et al., 2022 [44] | IS Success Model | 105 | Yes | PLS-SEM
Willems et al., 2022 [45] | Experiment / Privacy Calculus Theory (PCT) | 1048 | No | Binomial Logistic Regression
Zhu et al., 2022 [46] | Theory of Consumption Values (TCV) | 371 | Yes | PLS-SEM
Abbas et al., 2023 [47] | Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) | 15 | No | Thematic Analysis (Semi-Structured Interviews)
Horvath et al., 2023 [48] | Proposed by the authors | 2143 | Yes | Average Marginal Component Effects (AMCE); Average Component Interaction Effects (ACIE)
Ju et al., 2023 [49] | Proposed by the authors | 371 | No | Multinomial Logit Model
Kim et al., 2023 [50] | Technology Acceptance Model (TAM) | 350 | No | Reliability and Validity Testing; Regression Models
Pribadi et al., 2023 [51] | Technology Acceptance Model (TAM) | 300 | No | PLS-SEM
Srikanth & Dwarakesh, 2024 [52] | Proposed by the authors | 50 | No | Regression-Based Models
Yang & Wang, 2023 [53] | Total Adversarial Interpretive Structure Model (TAISM) | 11,502 | No | Non-Parametric Hierarchical Bayesian Model
Abed, 2024 [54] | Unified Theory of Acceptance and Use of Technology (UTAUT) | 490 | Yes | PLS-SEM
El Gharbaoui et al., 2024 [55] | Proposed by the authors | 157 | Yes | PLS-SEM
El Gharbaoui et al., 2024 [56] | Expectation Confirmation Theory (ECT) | 157 | Yes | PLS-SEM
Guo & Dong, 2024 [57] | Emotional Governance Theory | 194 | No | Moderation Analysis; Simple Slope Analysis
Guo & Dong, 2024 [58] | Expectation Confirmation Model (ECM); IS Success Model; Technology Acceptance Model (TAM) | 194 | Yes | Linear Regression Models; Mediation Analysis
Li & Wang, 2024 [59] | Computers Are Social Actors (CASA) Theory; Stereotype Content Model (SCM) | 735 | Yes | SPSS AMOS
Moreira & Naranjo-Zolotov, 2024 [60] | Proposed by the authors (based on TAM and UTAUT) | 208 | No | PLS-SEM
Pislaru et al., 2024 [61] | Proposed by the authors | 507 | No | Descriptive Statistics; Reliability and Validity Testing; Regression Models
Wang et al., 2024 [62] | Not Specified | 763 | Yes | Regression-Based Models; Average Marginal Component Effect (AMCE); Causal Forest Technique
Rulandari & Silalahi, 2025 [63] | Public Value Theory (PVT) | 591 | Yes | PLS-SEM; Necessary Condition Analysis (NCA)
Table 4. Study quality assessment.
Criteria assessed per study: Clear Objectives; Methodology Transparency; Sampling Adequacy; Validity/Reliability/Trustworthiness; Relevance.
Study Reference | Overall Quality
Akkaya & Krcmar (2019) [34] | Medium
Zabaleta et al. (2019) [35] | Medium
Stamatis et al. (2020) [36] | Medium
Antoniadis & Tambouris (2021) [37] | Medium
Baldauf et al. (2021) [38] | Medium
Chohan et al. (2021) [39] | High
Zhu et al. (2021) [40] | High
Alhalabi et al. (2022) [41] | Medium
Patsoulis et al. (2022) [42] | Medium
Suter et al. (2022) [43] | High
Tisland et al. (2022) [44] | Medium
Willems et al. (2022) [45] | High
Zhu et al. (2022) [46] | High
Abbas et al. (2023) [47] | Medium
Horvath et al. (2023) [48] | High
Ju et al. (2023) [49] | High
Kim et al. (2023) [50] | High
Pribadi et al. (2023) [51] | Medium
Srikanth & Dwarakesh (2024) [52] | Medium
Yang & Wang (2023) [53] | High
Abed (2024) [54] | High
El Gharbaoui et al. (2024) [55] | Medium
El Gharbaoui et al. (2024) [56] | Medium
Guo & Dong (2024) [57] | Medium
Guo & Dong (2024) [58] | Medium
Li & Wang (2024) [59] | High
Moreira & Naranjo-Zolotov (2024) [60] | Medium
Pislaru et al. (2024) [61] | High
Wang et al. (2024) [62] | High
Rulandari & Silalahi (2025) [63] | Medium
Table 5. Categories of AI implementation forms.
AI Implementation Form | Study Reference
Chatbots and Virtual Assistants | [34,36,37,38,39,40,41,42,44,45,46,47,49,54,55,57,58,59,61,62]
AI-Enabled E-Services | [43,48,50,51,52,60,63]
Other Forms | [35,53,56]
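Since the three categories in Table 5 partition the 30 reviewed studies, a quick tally over the study indices exactly as listed in the table confirms the counts and coverage:

```python
# Study reference numbers per AI implementation form, copied from Table 5.
forms = {
    "Chatbots and Virtual Assistants": [34, 36, 37, 38, 39, 40, 41, 42, 44, 45,
                                        46, 47, 49, 54, 55, 57, 58, 59, 61, 62],
    "AI-Enabled E-Services": [43, 48, 50, 51, 52, 60, 63],
    "Other Forms": [35, 53, 56],
}

# Papers per category, and the pooled list of all study indices.
counts = {form: len(refs) for form, refs in forms.items()}
all_refs = sorted(ref for refs in forms.values() for ref in refs)

# The categories are disjoint and jointly cover studies [34] through [63].
assert len(all_refs) == len(set(all_refs)) == 30
print(counts)
```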
Table 6. Components of Attitude investigated in the reviewed literature.
Attitude Component | Study Reference
Behavioral | [34,35,36,37,38,39,44,45,46,47,50,51,54,61,62]
Cognitive | [42,43,48,49,52,53,58,60]
Affective | [40,41,44,46,54,55,56,57,59,63]
Table 7. Limitations and future recommendations of the reviewed studies.
Table 7. Limitations and future recommendations of the reviewed studies.
Study Reference | Limitation(s) | Future Recommendation(s)
Akkaya & Krcmar, 2019 [34] | Offline population excluded; online questionnaire limited depth of understanding; only virtual assistants were investigated. | Extension of data collection approaches; examination of public servants’ stances; analysis of case studies from alternative AI applications.
Zabaleta et al., 2019 [35] | Usability issues; extensive and complex e-services. | Additional research is required to improve the proposed model.
Stamatis et al., 2020 [36] | Technical limitations of the selected chatbot engine; CPSV-AP is not user friendly; some LE are very broad. | Implementation of chatbot technology on a larger scale to identify unresolved technical challenges.
Antoniadis & Tampouris, 2021 [37] | Limited compatibility of the CPSV-AP with chatbots; Greek language not supported by most chatbot development platforms. | The chatbot’s access to user-related personal data could be utilized for the provision of personalized services.
Baldauf et al., 2021 [38] | Not mentioned. | Testing experimental voice chatbots in collaboration with citizens could reveal valuable insights.
Chohan et al., 2021 [39] | Limited expert validation; generic research model. | The artifact should be validated in government agencies; differences in AI adoption between countries at different levels of development should be considered.
Zhu et al., 2021 [40] | Geographically limited sampling; only one mental health chatbot was examined. | More variables should be explored; demographic factors need to be examined.
Alhalabi et al., 2022 [41] | Not mentioned. | Not mentioned.
Patsoulis et al., 2022 [42] | Sampling bias. | Connecting the chatbot to external APIs that users are familiar with would be useful.
Suter et al., 2022 [43] | Non-probability sampling; offline population excluded; potential bias from vignette phrasing. | Not mentioned.
Tisland et al., 2022 [44] | Convenience sampling. | Broader success of e-government chatbots requires investigation.
Willems et al., 2022 [45] | Private data sharing in AI conversations not considered; one-time decision analysis. | Inclusion of factors such as national and cultural background; investigation of actual behavior.
Zhu et al., 2022 [46] | Geographically limited sampling; ineffective theoretical model; moderating effects not studied. | Extension of research to various chatbot categories.
Abbas et al., 2023 [47] | Study limited to a single chatbot; longitudinal effects not explored. | Exploration of the relation between perceptions and actual behavior.
Horvath et al., 2023 [48] | Focused solely on permits; unrealistic availability of extensive information. | Exploring more personal attributes and linking specific demographics to preferences.
Ju et al., 2023 [49] | Limited characteristics tested; merely explorative study. | The effect of each attribute should be investigated; various categories of scenarios may be deployed.
Kim et al., 2023 [50] | Other equally important variables not examined; potential common method bias; potential bias from self-reported data. | Trust and privacy concerns are worth researching; different AI platforms could be explored; mixed methods may be applied.
Pribadi et al., 2023 [51] | Geographically limited sampling; small sample size. | Different local authorities and a larger sample size would provide more insight.
Srikanth & Dwarakesh, 2023 [52] | Geographically limited sampling. | Privacy and security issues should be addressed.
Yang & Wang, 2023 [53] | Subjective method; limited data. | More objective methods should be explored; increasing the amount of data will allow more indicators to be extracted.
Abed, 2024 [54] | Limited to UTAUT model constructs; cultural particularities not considered; ethical issues not addressed. | More cultural aspects should be explored; longitudinal research could provide more insights.
El Gharbaoui, El Boukhari, et al., 2024 [55] | Probability sampling; geographically limited study. | Quantitative assessment of quality attributes; investigation of additional constructs and multiple facets of trust.
El Gharbaoui, El Boukhari, et al., 2024 [56] | Small sample size; non-probability sampling. | Measurement of different trust indicators; impartiality and fairness of recommender systems require evaluation.
Guo & Dong, 2024 [57] | Data collection limited to specific time frames and regions; subjective data interpretation. | Evaluation of additional factors; long-term studies could offer deeper understanding.
Guo & Dong, 2024 [58] | Potential bias from self-reported data; geographically limited study. | Integration of AI techniques for data analysis; use of qualitative research methods; longitudinal research.
Li & Wang, 2024 [59] | Subjective data interpretation; possible insufficient theoretical support; early-stage findings. | Analysis of moderating variables; study of diverse cultures and comparative analysis of the results; actual behavior should be investigated.
Moreira & Naranjo-Zolotov, 2024 [60] | Not mentioned. | Not mentioned.
Pislaru et al., 2024 [61] | Variation in results across different samples not explored. | Larger sample size; rural population needs to be studied; comparative analysis of occupational status and levels of engagement.
Wang et al., 2024 [62] | Sample bias; geographically limited study. | Conducting cross-cultural studies; incorporation of visual elements or interactive components into experimental designs; use of real-life scenarios.
Rulandari & Silalahi, 2024 [63] | Non-probability sampling; offline population excluded; potential bias from self-reported data; geographically limited study. | Longitudinal research; comparative studies across different areas of administration; analysis of trust and transparency interactions with other variables.
Savveli, I.; Rigou, M.; Balaskas, S. From E-Government to AI E-Government: A Systematic Review of Citizen Attitudes. Informatics 2025, 12, 98. https://doi.org/10.3390/informatics12030098