Article

The Influence of Public Expectations on Simulated Emotional Perceptions of AI-Driven Government Chatbots: A Moderated Study

School of Public Policy & Management, Tsinghua University, Beijing 100084, China
*
Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(1), 50; https://doi.org/10.3390/jtaer20010050
Submission received: 28 August 2024 / Revised: 19 October 2024 / Accepted: 12 November 2024 / Published: 11 March 2025

Abstract

This study focuses on the impact of technological changes, particularly the development of generative artificial intelligence, on government–citizen interactions in the context of government services. Drawing on technological governance theory and emotional contagion theory from a psychological perspective, it examines public perceptions of the simulated emotions of governmental chatbots and investigates the moderating role of age. Data were collected through a multi-stage stratified purposive sampling method, yielding 194 valid responses from an original distribution of 300 experimental questionnaires between 24 September and 13 October 2023. The findings reveal that public expectations significantly enhance the simulated emotional perception of chatbots, with this effect being stronger among older individuals. Age shows significant main and interaction effects, indicating that different age groups perceive the simulated emotional capabilities of chatbots differently. This study highlights the transformative impact of generative artificial intelligence on government–citizen interactions and the importance of integrating AI technology into government services. It calls for governments to attend to public perceptions of the simulated emotions of governmental chatbots in order to enhance the public’s experience.

1. Introduction

As artificial intelligence technology advances, the increasing implementation of chatbots in government agencies is transforming service delivery and enhancing citizen engagement [1,2]. This growth has spurred a surge in research, particularly focusing on the anthropomorphic features of governmental chatbots. Anthropomorphic features, such as simulated emotions, verbal expressiveness, and non-verbal cues, are designed to make chatbots appear more human-like, enhancing their relatability and perceived intelligence [3,4]. The role of anthropomorphic features is particularly significant in public service contexts, where the user’s perception of authenticity and empathy can directly influence their engagement and trust in governmental institutions. Simulated emotions, which refer to the artificial expressions of feelings such as humor, happiness, or empathy exhibited by government chatbots, have been shown to shape user expectations and responses, contributing to a more satisfying and efficient interaction experience [3,5,6]. Consequently, understanding how these anthropomorphic features affect public perception is vital to effectively integrating AI-driven chatbots into government services and ensuring they fulfill their role in improving citizen satisfaction and trust [7,8].
Current research on chatbots largely overlooks their application within the government sector, particularly the use of anthropomorphic features in governmental chatbots. Existing research primarily focuses on the impact of the anthropomorphic features of chatbots on various aspects such as information search behavior [9], user purchase intentions [10], user acceptance [11], subjective well-being [12,13], and alleviating user dissatisfaction [14]. While there is a growing body of literature on the role of simulated emotions in enhancing user interaction and satisfaction, few studies address how these emotional simulations operate specifically within the context of government services. However, this gap in the research is significant, as simulated emotions in governmental chatbots hold considerable potential for improving public engagement and service efficiency. Understanding the impact of emotional simulations in this unique setting could provide valuable insights for public sector innovation, helping to foster trust and satisfaction in government–citizen interactions. Despite these potential benefits, the role of simulated emotions in governmental chatbots remains underexplored, underscoring the need for further research in this area.
Given this context, this paper posits several research questions to enhance the understanding of user engagement with governmental chatbots. First, it seeks to explore how public expectations influence the perception of simulated emotions in interactions with government chatbots. Second, it aims to examine the ways in which age moderates the relationship between public expectations and the perception of simulated emotions when interacting with government chatbots. By addressing these questions through a questionnaire experiment, the study aims to provide insights that can improve the design and implementation of AI-driven public services.

2. Literature Review

This section reviews the foundational theories and research on government chatbots and emotion perception. Section 2.1 discusses the theories of technological governance and emotional contagion, framing the study’s focus on public expectations, simulated emotions, and age. Section 2.2 outlines the current state of government chatbot research, highlighting their application and limitations. Section 2.3 examines the literature on emotion perception in chatbots, emphasizing simulated emotions’ impact on user engagement. Lastly, Section 2.4 introduces the research framework and hypotheses, defining key variable relationships.

2.1. Theories of Technological Governance and Emotional Contagion

Technological governance theory highlights the transformative impact of technology on public governance and its influence on public behavior and attitudes. This theory suggests that integrating advanced technologies into public service delivery can enhance transparency, efficiency, and responsiveness [15]. For instance, when technologies like chatbots are embedded into government services, they can improve citizen interactions by making services more accessible, user-friendly, and personalized to individual needs [16]. A key aspect of technological governance is understanding how the public perceives these innovations. Public perception is critical to the acceptance and successful use of new technologies [17]. When government services are designed to meet or exceed user expectations, citizens are more likely to engage positively, thereby improving the overall effectiveness of the technology. In the case of government chatbots, simulating human-like interactions can shape how citizens perceive and engage with these services [18]. Chatbots that effectively mimic emotional responses can create a sense of trust and connection with users, leading to more positive experiences and greater satisfaction. This underscores the importance of user-centered design and ongoing evaluation of public expectations and perceptions to fully realize the potential of technology in public governance.
Emotional contagion theory posits that emotions can be transferred from one individual to another, thereby influencing their emotional states and behaviors [19]. This phenomenon occurs through automatic mimicry—where individuals unconsciously mirror the emotions and behaviors of others—and the synchronization of expressions, vocalizations, postures, and movements. Recent studies have expanded on this concept, exploring its application in human–computer interactions, particularly in the design of AI-driven chatbots. Automatic mimicry refers to the unconscious reflection of users’ emotions and behaviors during interactions. For example, if a user expresses happiness or humor, a chatbot can “mimic” these emotions by using similar language styles or emotional vocabulary [20]. This automatic mimicry has been shown to enhance user engagement and resonance, improving users’ satisfaction with the services provided [21]. In a study by Bilquise et al. [22], chatbots capable of mimicking users’ emotional tones were found to foster a stronger sense of connection, which increased user willingness to continue interacting with the technology. Similarly, synchronization of expressions involves the coordination of emotional expressions between the chatbot and the user. If a user displays anxiety while inquiring about a service, for example, the chatbot can adjust its tone and wording to convey understanding and concern [23]. This synchronization enhances the user experience by making users feel their emotions are acknowledged, which increases trust and satisfaction with the services provided [24]. Studies show that emotional alignment between users and chatbots strengthens emotional engagement, leading to better outcomes in customer service contexts [25]. When applied to chatbot design, emotional contagion theory suggests that simulated emotional expressions can have a profound impact on user interactions.
The relevance of emotional contagion to chatbots lies in the potential of these technologies to evoke genuine emotional responses from users. Research by Beattie and Ellis [26] found that when chatbots convincingly simulate human-like emotions, they can elicit similar emotions in users, creating a sense of trust and relatability. This emotional connection fosters user engagement and satisfaction, making the interaction feel more personal and human-like. Picard [27] previously noted that empathy in AI systems could significantly enhance user satisfaction, as users feel more understood and valued. Moreover, studies on empathetic chatbot design demonstrate that emotional intelligence in chatbots increases the likelihood of users trusting and relying on the technology. For instance, chatbots that successfully simulate empathy in response to user queries make users feel understood, which leads to higher satisfaction and trust in the service [28]. Emotional contagion, therefore, plays a vital role in maximizing the benefits of chatbot technologies in public service delivery by making interactions feel more natural and human-centered. In conclusion, understanding and implementing emotional contagion in chatbot design is essential for maximizing their effectiveness. As chatbots become increasingly integrated into government services, creating emotionally intelligent chatbots that interact with users in an empathetic manner will be crucial for improving the overall user experience.
In the context of government chatbots, simulated emotions refer to the responses generated by AI systems that mimic human emotions to enhance user interaction. These simulated emotions are not actual emotional awareness but rather algorithmically crafted emotional expressions, such as showing sympathy, concern, or joy, in responses. Perception of simulated emotions refers to how users receive and interpret these simulated emotion-like responses generated by government chatbots. It examines whether users trust these emotional simulations, consider them appropriate, and whether these emotions enhance the satisfaction and trust in the interaction. In government services, the use of simulated emotions aims to increase public recognition and satisfaction with government services, making the interaction more humanized and effective. This can improve public acceptance of government chatbots and increase their willingness to continue using the service.
With the development of chatbots, an increasing number of scholars are focusing on the anthropomorphic features of these systems. Anthropomorphism refers to the tendency to attribute human-like characteristics, motives, and other qualities to imagined or real non-human entities [29]. It is used to assess the degree of similarity between robots and humans in terms of behavior or appearance [30]. Therefore, anthropomorphism can be categorized into appearance anthropomorphism [31] and behavioral anthropomorphism [32]. This paper, based on the theories of technological governance and emotional contagion, primarily explores the relationships between variables such as public expectations, perception of simulated emotions, and age.

2.2. The Current Research Status of Government Chatbots

In the domain of public administration [1,33], chatbots find applications primarily in areas like government consultation, policy promotion, service guidance, public affairs processing, emergency response, and decision support. Such applications not only amplify the efficiency of government sectors and mitigate operational costs but also fortify the interactive experience between governments and the public, leading to elevated public satisfaction. Through chatbots, the public can access policy information, execute tasks, and voice concerns with greater ease, while governments harness the capacity to swiftly collate public opinion, address issues, and introduce targeted improvements. Government chatbots, representing a specific adaptation of chatbots in public administration, inherit the technological attributes of intelligent interaction and natural language processing. Furthermore, these bots undergo bespoke customizations and optimizations to cater to the unique demands of governmental services. Such chatbots prioritize the dissemination and communication of information pertaining to policy interpretation, legal consultation, and administrative processes, ensuring the public benefits from more accessible, accurate, and efficient governmental services. By integrating government chatbots, governmental departments can realize round-the-clock online services, magnify the transparency and accessibility of public services, fortify trust and communication between the government and its populace, and consequently enhance overall satisfaction with public services [2].
Specifically, research related to government chatbots encompasses several areas: analysis of the role and impact of government chatbots in elevating the level of government services, enhancing the image of the government, and fostering communication between the government and citizens; exploration of the application and value of government chatbots in domains such as emergency management, public safety, and environmental regulation; investigation into the auxiliary role of government chatbots in the governmental decision-making process, emphasizing their contributions in data analysis, forecasting, and recommendation; scrutiny of challenges posed by government chatbots in safeguarding citizen privacy and data security, along with the government’s corresponding management and regulatory measures; and assessment of the degree of governmental policy support for the development of government chatbots, coupled with potential issues in the policy implementation process and their respective solutions.
According to Senadheera et al. [34], the adoption of chatbots by local governments primarily serves purposes such as information provisioning, consultation, transaction facilitation, and complaint handling, highlighting their role in enhancing operational efficiency and citizen service delivery. The study reveals that chatbots can significantly increase outreach and engagement between local governments and citizens, offering a channel for accessible and instant communication. However, it also identifies critical ethical concerns related to accuracy, accountability, and the potential for exclusionary practices due to algorithmic biases. The perceived humanness of chatbots significantly influences user adoption, suggesting that more human-like interactions could improve engagement and satisfaction. Folstad et al. [35] point out that while some initial anthropomorphizing has occurred, the dialogue analysis reveals that citizen interactions remain largely utilitarian, showing limited humanization of the chatbot.
Jais and Ngah [36] investigated chatbot adoption intentions in Malaysian government agencies using the technology–organization–environment (TOE) framework. They found that technology readiness, organizational readiness, and citizen demand positively influence adoption intentions, with government support acting as a crucial moderating factor. This research underscores the TOE framework’s applicability in the public sector and highlights the vital role of government support in adopting AI technologies like chatbots among Malaysian government agencies. Ju et al. [37] investigate citizen preferences regarding the social characteristics of government chatbots through a discrete choice experiment conducted among Chinese users. The study identifies emotional intelligence, proactivity, identity consistency, and conscientiousness as significant determinants of user preferences, with identity consistency negatively affecting preferences. The findings reveal how individual characteristics, including age, gender, and prior chatbot experience, modulate these preferences.
Tsai et al. [38] proposed the EUSM framework for managing disaster emergency data using chatbots in Taiwanese government agencies. The framework enhances workflows and efficiency while addressing integration challenges, demonstrating the valuable role of chatbots in disaster response efforts. Keyner et al. [39] explore how chatbots combined with Open Data can enhance government transparency and citizen engagement. Their research shows that chatbots, as dialogue-based interfaces, make it easier for non-experts to access and navigate open government datasets. This integration aims to simplify complex data interactions, reducing barriers to citizen participation in public affairs. By merging chatbots with Open Data, they propose a model that promotes efficient human–computer interaction and fosters transparent, participatory governance, advancing public access to government information and enhancing democratic processes. Aoki [40] examines public trust in AI chatbots within the public sector, highlighting the need for empirical research on societal attitudes toward AI in public services. The study finds that initial trust in chatbots is influenced by the service domain and the communication of their intended purpose. This research provides critical insights for developing effective chatbot systems in public administration to improve citizen acceptance of AI technologies.
Abbas et al. [41] conducted an exploratory qualitative interview study with 15 users of a municipal chatbot to understand user perceptions and their impact on the intention to use digital government services. The study highlights the importance of performance expectations, effort expectations, and trust in user acceptance and engagement. Users value chatbots for their 24/7 availability and assistance in navigating municipal services, yet they consistently compare these benefits to other digital government channels. Hasan et al. [42] propose an AI-enabled chatbot framework utilizing semantic modeling for natural language processing, aimed at improving citizen–government interactions. Leveraging Google Dialogflow, the framework efficiently addresses citizen inquiries, achieving around 95% accuracy in domain-specific responses. Georgios et al. [43] investigate the integration of chatbots with Knowledge Graphs (KG) for eGovernment, focusing on the “Getting a Passport” service. They develop a proof-of-concept combining a chatbot with a new KG schema, demonstrating technical feasibility and potential citizen benefits. Evaluated through TAM and SUS-based questionnaires, the prototype receives positive feedback on ease of use, usefulness, and usability.
Wang et al. [44] explore AI adoption in Chinese local governments, focusing on AI-guided chatbots for enhanced service delivery during the COVID-19 pandemic. Their two-phase study first uses a survival model to identify factors driving chatbot adoption, revealing that external pressures are key in the initial phase. In the post-adoption phase, chatbot performance is influenced by local governments’ readiness, including technological, organizational, and strategic preparedness. This study highlights the complexity of AI adoption, emphasizing that the motivation to adopt AI must be matched by readiness to fully utilize its benefits. Oliveira et al. [45] examine how anthropomorphism in government chatbots affects user engagement and satisfaction across different countries. Evaluating ten chatbots from Brazil and internationally, they analyze five dimensions of anthropomorphism: appearance, language, behavior, identity, and personality. Their findings suggest that human-like design enhances user experience, offering a foundational categorization for future research and development in government digital services to create more personalized and engaging interactions with citizens. Phan et al. [46] investigate chatbots’ potential to support Vietnamese migrant workers, who face unique challenges abroad, especially under new free trade agreements. Through qualitative interviews and social media discussions, they find that while chatbots can provide essential information and support, migrant workers have complex trust and expectation patterns towards these technologies. The study highlights the need for careful design and implementation to address specific concerns, suggesting that user-centric chatbot approaches can significantly enhance support for vulnerable groups like migrant workers.
Khadija et al. [47] address communication challenges in Indonesia’s national health insurance program, Jaminan Kesehatan Nasional-Kartu Indonesia Sehat (JKN-KIS), by developing a deep learning-based generative chatbot for the Indonesian language. Using a Sequence-to-Sequence (Seq2Seq) model with LSTM Multiplicative Attention, trained on 20,000 Q&A pairs from the JKN-KIS guidebook, the chatbot achieved a loss value of 0.23 and BLEU scores of 0.86 (unigram) and 0.85 (bigram). This model significantly enhances public interaction with JKN-KIS, demonstrating the potential for advanced AI solutions in government services. de Andrade et al. [48] developed EvaTalk, an advanced chatbot for Brazil’s Escola Virtual de Governo (EV.G). Designed to handle inquiries about Institutional Membership and general chat, EvaTalk’s architecture includes modules for user interaction, AI message processing, knowledge-base management, and message analysis. Initially hampered by insufficient training data, iterative improvements based on user feedback enhanced the chatbot’s response accuracy. This study highlights the importance of continuous data analysis and user feedback in optimizing chatbot performance for public services. Boden et al. [49] examine the “CitizenTalk” project at FH Erfurt, which uses chatbots to enhance e-democracy by engaging citizens in public planning processes. The study highlights the opportunities and limitations of chatbots, introducing an authoring tool and a novel scheme for improving communication control.
Contemporary research underscores the efficacy of government chatbots in streamlining administrative workflows [1,37] and democratizing policy engagement through automated information delivery and interactive citizen consultation mechanisms [50]. These systems optimize resource allocation by managing high-volume public inquiries while enhancing service accessibility through multilingual cognitive processing capabilities. Within public health ecosystems, conversational agents demonstrate strategic value as scalable triage platforms during health emergencies, delivering diagnostic guidance, preventive care protocols, and therapeutic support. Their integration with healthcare infrastructure enables continuous patient monitoring and reduces clinical system overload through intelligent symptom prioritization algorithms.
The studies most directly informing this investigation are summarized in Table 1.

2.3. Relevant Literature Review on Emotion Perception in Government Chatbots

In public administration research, scholars are increasingly focused on the behavioral anthropomorphism of chatbots. Peng [9] used social response theory and anthropomorphism theory to analyze how consumer business information search behavior is influenced by factors such as anthropomorphism, perceived social value, and gender differences. The study’s findings indicate that anthropomorphizing chatbots can enhance the perceived social value gained by consumers during interactions and increase their willingness to access business information. Furthermore, compared to males, female consumers are more strongly influenced by perceived social value in their willingness to obtain business information through human–computer interaction. Bai [10] investigated the effects of anthropomorphism in hotel service robots and the type of online reviews on customer purchase intentions, exploring the mediating role of perceived warmth and the moderating role of task execution attributes in this process. The empirical results reveal that anthropomorphism in hotel service robots and the type of online reviews interactively influence customer purchase intentions. Perceived warmth mediates the interaction between robot anthropomorphism and online review types on customer purchase intentions, while task execution attributes moderate this interaction. Wang and Su [14] noted that in scenarios of service failure, anthropomorphism in service robots negatively impacts customer dissatisfaction. Additionally, responsibility attribution mediates the effect of robot anthropomorphism on customer dissatisfaction. Furthermore, the type of service failure moderates the influence of robot anthropomorphism on dissatisfaction and also adjusts the strength of the mediation by responsibility attribution.
Most research on the anthropomorphic features of chatbots focuses on their ability to simulate emotions and how these simulated emotions are perceived by the public, as well as the impact of this perception on public willingness and behavior. Previous studies suggest that anthropomorphic features influence attributions, with service robots designed with human-like elements (such as human contours and faces) eliciting emotional cognition and behavior in individuals. Individuals tend to exhibit favorable attributions towards anthropomorphized (as opposed to non-anthropomorphized) non-human agents (such as robots and computers), manifesting as positive attitudes, evaluations, and behaviors. Additionally, research indicates that during human–robot interactions, individuals perceive the actions of non-human agents through the dimensions of warmth and competence. Human-like (compared to non-human-like) service robots implicitly possess more warmth attributes, such as friendliness, sociability, cuteness, and kindness, yet there is no difference in perceived competence. When interacting with highly anthropomorphized robots, consumers are more willing to engage and accept them, as these robots are often perceived as warm and lifelike, which readily triggers emotional cognition in humans.

2.4. Research Framework and Hypotheses

The rapidly expanding integration of chatbots with simulated emotional capabilities in governmental communications necessitates an understanding of public perception of these technologies. This perception is heavily influenced by psychological expectations and their interaction with emotional design. Research indicates that what individuals anticipate from technology significantly impacts their satisfaction and perception, as outlined in the expectancy disconfirmation theory [49]. This theory suggests that satisfaction increases when technology exceeds expectations and decreases when it does not meet them, highlighting the importance of managing public expectations to enhance technological effectiveness. Anthropomorphism also plays a pivotal role in how users interact with technology. Studies by Nass and Moon [51] demonstrate that when computers exhibit human-like behaviors, users apply social rules and expectations to them. This is particularly true for emotional expressions in chatbots, where anthropomorphic features can forge stronger emotional connections. Furthermore, emotional design involves creating systems that can simulate human-like emotional interactions, significantly affecting user trust and the perceived intelligence of the system [27]. The emotional contagion theory [19] supports this by suggesting that humans can feel and respond to emotions displayed by those around them, including those simulated by chatbots. Thus, when governmental chatbots display emotional behaviors, they can induce similar emotional states in users, enhancing the perception of the chatbot’s emotional capabilities. Expectations not only set the stage for technology acceptance but also enhance the perceptual sensitivity of users to these emotional expressions. When users expect high emotional intelligence from chatbots, they become more attuned to and better at recognizing subtle emotional cues, which enhances their perception of the chatbot’s emotional authenticity. 
This significant influence of user expectations on the perception of emotional authenticity in technology, combined with the growing sophistication of emotional design in governmental chatbots, underscores the crucial need for managing and understanding these expectations. Therefore, this paper proposes the following hypothesis:
H1. 
The higher the public’s expectations, the greater their perception of simulated emotions in governmental chatbots.
Building on the understanding that public expectations significantly influence their perception of simulated emotions in governmental chatbots, it is crucial to examine the moderating role of demographic factors such as age. Research has shown that age affects how individuals adapt to and perceive new technologies. Older adults, for instance, often display different usage patterns, perceptions, and acceptance levels compared to younger users due to factors like cognitive changes and varying technological familiarity [51]. Additionally, age influences simulated emotional perception and processing; older adults may employ different emotional regulation strategies and prioritize emotional information differently than younger adults [52]. These differences suggest that age could moderate the relationship between public expectations and their perception of emotional simulations in chatbots. Younger individuals, who are generally more tech-savvy and accustomed to advanced digital interfaces, might have high expectations that correlate strongly with a heightened perception of a chatbot’s emotional capabilities. Conversely, older adults may have lower expectations and might interpret and value the emotional outputs differently due to their unique emotional processing tendencies. Therefore, taking into account these age-related variations, the hypothesis can be formulated that age acts as a moderator in the relationship between public expectations and their perception of simulated emotions in governmental chatbots.
H2. 
Age acts as a moderator in the relationship between public expectations and their perception of simulated emotions in governmental chatbots.
The theoretical model of this article is shown in Figure 1.

3. Method

This study adopts a quantitative research approach, collecting data through a survey experiment. This method allows for the systematic collection and analysis of public perceptions of government chatbots. Specifically, public expectation serves as the independent variable, age functions as the moderating variable, and the public’s simulated emotional perception of government chatbots is the dependent variable. The survey is designed to capture the relationships among these variables, ensuring the reliability and validity of the data collected. By focusing on quantifiable metrics, this research aims to gain a deeper understanding of how these variables interact within the realm of public administration.
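The moderation design described above is conventionally tested with a regression that includes an expectation × age interaction term. The sketch below illustrates that structure on simulated data; it is not the authors’ analysis code, and the effect sizes are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 194  # matches the study's number of valid responses

# Hypothetical mean-centered scores for the independent variable
# (public expectation) and the moderator (age)
expectation = rng.normal(0.0, 1.0, n)
age = rng.normal(0.0, 1.0, n)

# Simulated outcome: a positive main effect of expectation (H1)
# plus a positive expectation x age interaction (H2); coefficients
# 0.5, 0.3, and 0.2 are illustrative, not estimates from the study
y = 0.5 * expectation + 0.3 * age + 0.2 * expectation * age + rng.normal(0.0, 0.1, n)

# Design matrix: intercept, both main effects, and the interaction term
X = np.column_stack([np.ones(n), expectation, age, expectation * age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # recovered coefficients, close to [0, 0.5, 0.3, 0.2]
```

A significant positive coefficient on the interaction column is what would indicate that the expectation–perception link strengthens with age, as hypothesized in H2.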

3.1. Data Source

After a preliminary small-sample survey, the questionnaire was revised based on the outcomes, removing items with insufficient differentiation or suboptimal reliability and validity. The refined questionnaire was then used in the larger, formal survey. Participant selection was guided by specific criteria aimed at ensuring a diverse and representative sample to enrich insights into user experiences with government chatbots. Demographic factors, including age, familiarity with government services, and prior chatbot experience, were prioritized. Participants were stratified by age to examine how perceptions of and interactions with chatbot technology vary across age groups. A baseline familiarity with government services was established through a preliminary survey, enabling participants to provide informed and relevant feedback. Between 24 September and 13 October 2023, 300 questionnaires were distributed and 275 were returned, a response rate of 91.7%. Returned questionnaires were then screened against rigorous exclusion criteria: incomplete answers; careless completion, identified through errors on trap questions such as “If answering sincerely, please select ‘Somewhat Disagree’”; and identical answers across all or long runs of items, indicative of random completion. After screening, 194 valid questionnaires remained.
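The three screening rules above can be expressed as a small data-cleaning step. The sketch below is purely illustrative: the column names, trap-item coding, and data are hypothetical, not the study’s actual instrument or dataset.

```python
import pandas as pd

# Hypothetical raw questionnaire data; columns and coding are illustrative only.
df = pd.DataFrame({
    "trap_item": [2, 3, 2, 2],   # 2 = "Somewhat Disagree" (the required answer)
    "q1": [4, 4, 5, 4],
    "q2": [5, 4, 5, None],
    "q3": [3, 5, 5, 3],
})
item_cols = ["q1", "q2", "q3"]

complete = df[item_cols].notna().all(axis=1)      # rule 1: no missing answers
passed_trap = df["trap_item"] == 2                # rule 2: trap question answered correctly
varied = df[item_cols].nunique(axis=1) > 1        # rule 3: not all-identical ("straight-lined")

valid = df[complete & passed_trap & varied]       # questionnaires retained for analysis
```

In this toy example only the first respondent survives all three screens; the other three are dropped for a failed trap item, straight-lining, and an incomplete answer, respectively.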

3.2. Sample Information

The specific details of the sample are provided in Table 2 below.
The presented data pertain to a survey conducted to assess public satisfaction with the use of government chatbots in the context of government services and policy consultation. The dataset encompasses information on respondents’ gender, age, occupation, and education, and its analysis can offer valuable insights into the factors influencing public satisfaction in these two scenarios. In the gender distribution, females constitute 56.70% of the sample, while males account for 43.30%. Regarding age groups, the largest proportion falls within the 21–30 years old category (52.06%), followed by 31–40 year olds (40.72%); the influence of age on satisfaction levels could be further explored in subsequent analyses. By occupation, students make up the largest subgroup at 38.66%, followed by teachers, researchers, and educational professionals at 22.16%. In terms of education, the majority of respondents hold a master’s degree (42.78%), followed by bachelor’s degree holders (30.93%) and Ph.D. holders (22.68%). The level of education might affect individuals’ expectations of, and interactions with, government chatbots.

3.3. Descriptive Statistics

The descriptive statistics results are presented in Table 3.

3.4. Reliability and Validity Testing

To test reliability and validity, a preliminary survey was conducted prior to the formal investigation. The pre-survey covered 60 respondents and yielded 51 valid samples. As shown in Table 4, the reliability of the “Public Expectations” dimension is 0.754, and that of the “Simulated Emotional Information Perception” dimension is 0.795, indicating that the questionnaire has good reliability and meets the standards for data analysis. The validity indicators likewise met the accepted thresholds.
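Dimension reliabilities of this kind are conventionally Cronbach’s alpha. Assuming that is the statistic reported in Table 4 (the paper does not name it explicitly), a minimal sketch of the computation is:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

For perfectly parallel items the function returns 1.0; values above roughly 0.7, such as the 0.754 and 0.795 reported here, are usually read as acceptable internal consistency.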

4. Results and Discussion

The detailed results of the analysis on the moderating effects are presented in Table 5.
Table 5 delineates the moderating effects across three distinct models. Model 1 incorporates the independent variable (public expectations) and three control variables: gender, occupation, and educational attainment. Building upon this, Model 2 introduces the moderating variable (age), and Model 3 further includes an interaction term, the product of the independent and moderating variables. The objective of Model 1 is to assess the influence of public expectations on the dependent variable (simulated emotional perception) without interference from the moderating variable. The results indicate a significant effect of public expectations on simulated emotional perception (p < 0.05), confirming a substantial impact. Moderating effects can be assessed through two approaches: first, by examining the significance of the change in the F-value from Model 2 to Model 3; and second, by testing the significance of the interaction term in Model 3. This analysis employs the latter approach.
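The interaction-term test can be illustrated on synthetic data. The sketch below fits Model 3 (main effects plus the product term) by ordinary least squares using NumPy only; the variables are made-up standardized draws, not the study’s dataset, and only the sample size (n = 194) is taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 194  # matches the study's valid sample; the data themselves are synthetic
expect = rng.normal(size=n)   # public expectations (X), standardized
age = rng.normal(size=n)      # mean-centered age (moderator W)
# Simulated outcome with a built-in interaction effect:
y = 0.66 * expect + 0.20 * age + 0.27 * expect * age + rng.normal(size=n)

# Model 3: y = b0 + b1*X + b2*W + b3*(X*W); moderation is tested via b3.
X = np.column_stack([np.ones(n), expect, age, expect * age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])            # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]                      # t-statistic for the interaction
```

A t-statistic for `beta[3]` well above 2 in absolute value corresponds to the significant interaction term the paper reports in Model 3.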
Table 6 reveals significant findings regarding the impact of the moderating variable on the relationship between the independent and dependent variables. At the mean level of age, the regression coefficient is 0.660 (S.E. = 0.062, t = 10.689, p = 0.000), with a 95% confidence interval ranging from 0.539 to 0.781. This indicates that, on average, higher public expectations are strongly associated with increased simulated emotional perceptions of government services. For individuals at a high level of age (+1 SD), the regression coefficient increases to 0.932 (S.E. = 0.098, t = 9.460, p = 0.000), with a 95% confidence interval from 0.739 to 1.125. This significant increase suggests that older individuals perceive a stronger positive impact of public expectations on their emotional engagement with government services. Conversely, at a low level of age (−1 SD), the regression coefficient decreases to 0.388 (S.E. = 0.081, t = 4.806, p = 0.000), with a 95% confidence interval between 0.230 and 0.547. Although still significant, the smaller coefficient indicates that younger individuals experience a weaker association between public expectations and simulated emotional perceptions. These findings underscore the moderating role of age in shaping how public expectations influence simulated emotional perceptions of government services. The stronger effect among older individuals suggests that age-related factors, such as accumulated experiences and potentially greater reliance on government services, might heighten sensitivity to public expectations. In contrast, younger individuals, who may have different expectations of and experiences with government services, show a less pronounced effect.
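These point estimates follow the standard simple-slopes form slope(age) = b1 + b3·age, where b1 is the slope at mean age (since age is mean-centered) and b3 is the interaction coefficient. The sketch below reproduces Table 6’s estimates, treating b3 = 0.272 as implied by the ±1 SD slopes rather than as a coefficient the paper reports directly.

```python
# b1: reported slope of expectations at mean age; b3: interaction coefficient
# implied by (0.932 - 0.660) = (0.660 - 0.388) = 0.272 at +-1 SD of age.
b1, b3 = 0.660, 0.272

def simple_slope(age_in_sd: float) -> float:
    """Effect of public expectations at a given age level (in SD units)."""
    return b1 + b3 * age_in_sd

for label, w in [("-1 SD", -1.0), ("mean", 0.0), ("+1 SD", 1.0)]:
    print(f"slope at age {label}: {simple_slope(w):.3f}")
```

This recovers the three reported slopes: 0.388 at −1 SD, 0.660 at the mean, and 0.932 at +1 SD.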
The simple slopes plot is shown in Figure 2.
The analysis of the moderating effects (Figure 2) reveals how the relationship between public expectations and simulated emotional perceptions of government services varies across levels of the moderating variable, age, confirming that age plays a significant moderating role. Previous studies [53,54,55], including those by Harrison et al. [54] and Smith & Jones [55], have explored the effects of public expectations; however, they did not specifically address how age may influence this relationship. The current findings indicate that the impact of public expectations on emotional perceptions varies significantly with age, highlighting a nuanced understanding of how demographic factors can shape public engagement with government services.

5. Conclusions

The conclusion of this study reveals that public expectations significantly positively impact the simulated emotional perception of governmental chatbots, and this relationship is moderated by age. Specifically, the research finds that in all models, public expectations significantly affect simulated emotional perception, indicating that the extent to which the public perceives the emotional capabilities of these chatbots is largely determined by their expectations. Furthermore, age, as a moderating variable, shows significant main and interaction effects in the models, suggesting that different age groups respond differently to the simulated emotional perception of governmental chatbots. Further analysis through simple slopes reveals that at higher age levels, the impact of public expectations on simulated emotional perception is stronger, while at lower age levels, this impact is weaker. This implies that older individuals are more sensitive to changes in the emotional capabilities of governmental chatbots. These findings underscore the importance of considering demographic factors, such as age, in the design and implementation of emotionally intelligent governmental chatbots to enhance user engagement and satisfaction. This study provides valuable insights into the impact of technological changes on government–citizen interactions and government services, highlighting the need for adaptive and user-centered design approaches that accommodate diverse demographic needs. By addressing these considerations, governments can improve the effectiveness and responsiveness of their digital services, fostering greater trust and satisfaction among the public.

5.1. Theoretical Insights

Based on the results of this study, a number of theoretical insights can be drawn.
First, public expectations have a significant positive impact on the simulated emotional perception of governmental chatbots. This highlights a key aspect of technological governance theory, which is the importance of managing public expectations. When governmental chatbots are designed to meet or exceed public expectations, the perception of their emotional capabilities significantly improves. Therefore, it is essential for government agencies to actively understand and manage public expectations in order to maximize the acceptance and effectiveness of these technologies.
Second, age acts as a significant moderator in the relationship between public expectations and simulated emotional perception. Different age groups respond differently to the simulated emotional perception of governmental chatbots, indicating the need to consider age factors in the design of emotionally intelligent technologies. This finding supports the application of emotional contagion theory, which suggests that emotional transmission and perception are influenced by individual characteristics. Future research and design should optimize emotional simulation strategies for different age groups to ensure the universality and effectiveness of the technology.
Third, the effectiveness of emotional simulation in enhancing user simulated emotional perception has been confirmed. When governmental chatbots can effectively simulate human emotions, users’ emotional responses and satisfaction significantly increase. This result further validates emotional contagion theory, emphasizing the importance of incorporating emotional simulation elements in artificial intelligence design to enhance user trust and engagement.
Fourth, technological governance theory emphasizes that technology design should align with public expectations, a viewpoint supported by the study’s findings. Managing public expectations not only directly affects the acceptance of technology but also enhances overall user experience by improving simulated emotional perception. Therefore, in implementing technological governance, continuous collection and analysis of public feedback should guide the adjustment and optimization of technology design to achieve more efficient and responsive public services.
Fifth, the study indicates that different age groups have varying sensitivities to the emotional capabilities of governmental chatbots, revealing the importance of personalized service needs. To meet the needs of different user groups, personalized interaction and service strategies should be provided in technological applications. This finding offers direction for future research on how to optimize technology design and user interaction strategies based on different demographic characteristics to ensure broad applicability and user satisfaction.

5.2. Practical Insights

Based on the results of this study, a number of practical insights can be provided.
First, government agencies should prioritize and manage public expectations regarding governmental chatbots to enhance user simulated emotional perception and satisfaction. The study shows that public expectations have a significant positive impact on simulated emotional perception. Therefore, it is crucial to actively understand and manage these expectations during the design and deployment of governmental chatbots. Government agencies can gather public expectations through surveys and focus groups to understand the specific functionalities and emotional expressions that the public desires from chatbots. Incorporating these expectations into the chatbot design ensures that the technology meets public needs. For example, the public may expect chatbots to have higher emotional intelligence, capable of understanding and responding to users’ emotional expressions. To meet this expectation, the government can employ advanced natural language processing and affective computing technologies to enhance the chatbot’s ability to accurately recognize and respond to users’ emotional states. Additionally, continually updating and optimizing the chatbot’s emotional response mechanisms can help it adapt to the diverse emotional needs of different users. Moreover, the government should utilize transparent and effective communication channels to convey information about the chatbot’s functionalities and expected performance to the public. This can include regularly releasing updates and improvements, organizing public engagement activities, and providing detailed usage guides and FAQs. These measures not only help the public better understand and use governmental chatbots but also effectively manage public expectations, reducing user dissatisfaction stemming from the gap between expectations and actual experiences.
Second, the design of governmental chatbots should fully consider the needs of different age groups. The study results indicate that age moderates the relationship between public expectations and simulated emotional perception. To cater to the diverse needs of users across different age brackets, governments should develop flexible and adaptive chatbot designs, ensuring that users of all ages can benefit. Different age groups exhibit significant variations in technology acceptance and usage habits. For instance, younger users are generally more adept at using digital technologies and may have higher expectations regarding the functionality and emotional expression of chatbots. They may expect chatbots to respond quickly, possess advanced interactive features, and handle complex emotional expressions. To meet the needs of this demographic, governments can integrate more sophisticated natural language processing and machine learning algorithms into chatbots, enhancing their intelligence and emotional understanding capabilities. Conversely, older users may have lower technology acceptance and more basic functional needs from chatbots, focusing more on usability and reliability. This group might prefer chatbots to provide clear and concise answers and have a more intuitive user interface. To address these preferences, governments can simplify the operation process, offer detailed user guidance, and ensure system stability and reliability in the chatbot design. Furthermore, governments should consider the differences in simulated emotional perception among various age groups. Research suggests that older users might process and express emotions differently compared to younger users. Therefore, governments can customize emotional response strategies for different age groups. For example, chatbots can employ a more gentle and caring language style for older users, while using a more lively and interactive tone for younger users.
Third, the effective application of emotional simulation technology is crucial for enhancing the user experience of governmental chatbots. The research indicates that when chatbots can effectively simulate human emotions, users’ emotional responses and satisfaction significantly improve. Therefore, governments should prioritize enhancing the emotional simulation capabilities of chatbots to foster a sense of trust and relatability in interactions. Effective emotional simulation not only humanizes chatbots but also establishes a deeper emotional connection between users and the technology. When users feel that chatbots can understand and respond to their emotions, they are more likely to trust and rely on these technologies. Thus, improving emotional simulation is a key factor in enhancing the overall user experience. To achieve this goal, governments can employ various technological approaches to bolster the emotional simulation abilities of chatbots. First, advanced natural language processing (NLP) techniques can be utilized to enable chatbots to accurately identify and interpret users’ emotional expressions. By analyzing users’ tone, word choice, and context, chatbots can better understand users’ emotional states and respond accordingly. Moreover, leveraging machine learning and deep learning algorithms can continuously optimize and enhance the emotional models of chatbots. Through extensive data training and iterative improvement, chatbots can learn and adapt to different users’ emotional patterns, thereby providing more personalized and precise emotional responses. This not only increases user satisfaction but also enhances the chatbot’s ability to handle complex emotional interactions. Furthermore, governments can enhance chatbots’ simulated emotional perception capabilities through multimodal emotion recognition technology. 
For example, combining voice, text, and facial expression recognition technologies allows chatbots to more comprehensively perceive users’ emotional states and provide more accurate and empathetic responses. This multimodal emotion recognition technology can significantly improve user experience by making interactions more natural and emotionally rich. By implementing these measures, governments can ensure that governmental chatbots achieve a high level of emotional simulation, thereby fostering user trust and creating more engaging interactions. This not only improves the quality and efficiency of public services but also promotes public acceptance and recognition of government technology applications.
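As a deliberately simplified illustration of the text channel of such a pipeline (not a production affective-computing system), a chatbot might map user wording to an emotional state and select a matching response tone. Everything below, including the lexicon and tone mapping, is hypothetical; real systems would use trained NLP models rather than a word list.

```python
# Toy lexicon-based emotion tagger -- hypothetical, for illustration only.
EMOTION_LEXICON = {
    "frustrated": "negative", "angry": "negative", "confused": "negative",
    "thanks": "positive", "great": "positive", "helpful": "positive",
}

def detect_emotion(text: str) -> str:
    """Tag a message as negative/positive/neutral by majority lexicon hit."""
    hits = [EMOTION_LEXICON[w] for w in text.lower().split() if w in EMOTION_LEXICON]
    if not hits:
        return "neutral"
    return max(set(hits), key=hits.count)

def choose_tone(emotion: str) -> str:
    """Pick a response style for the detected emotional state."""
    return {"negative": "empathetic", "positive": "upbeat", "neutral": "informative"}[emotion]

print(choose_tone(detect_emotion("I am frustrated with this form")))  # empathetic
```

The same detect-then-adapt structure generalizes to the multimodal case described above, with voice and facial-expression classifiers feeding additional evidence into the emotion estimate.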
Fourth, continuously collecting and analyzing user feedback is crucial for optimizing the design of governmental chatbots. By regularly evaluating public perceptions and expectations regarding the emotional capabilities of chatbots, governments can dynamically adjust and improve the technology to ensure efficient and responsive public services. This feedback loop not only helps to enhance the applicability of the technology but also promptly identifies and addresses potential issues, preventing user dissatisfaction. For example, governments can use online surveys, user experience testing, and social media monitoring to gather opinions and suggestions about the emotional simulation and functionality of chatbots. By thoroughly analyzing this feedback, governments can pinpoint specific areas for improvement, such as the accuracy of emotional recognition, the naturalness of interaction, and response speed. Subsequently, targeted optimizations can be made to the chatbot’s algorithms and interface design, increasing user satisfaction and the technology’s applicability.
Finally, training and education are essential for the successful implementation of governmental chatbots. Governments should conduct training programs to help the public understand and adapt to new chatbot technologies and increase awareness and acceptance through promotional and educational activities. Specifically, governments can organize training sessions tailored to different age groups and backgrounds, explaining the basic functions and usage of chatbots, addressing common questions, and providing hands-on practice opportunities. Furthermore, governments can utilize various channels for promotion and education, such as official websites, social media, and community events, to communicate the benefits and applications of chatbot technology, enhancing public confidence and willingness to use these tools. This will help reduce barriers to technology use, improve overall user experience, and ensure the widespread adoption and recognition of governmental chatbots.

5.3. Research Limitations

The study has several limitations that should be acknowledged. First, it primarily focuses on a specific demographic, which may limit the generalizability of the findings. Although diverse age groups are included, the research is conducted within a particular cultural and geographical context, potentially affecting the transferability of the results to different populations or regions. Future studies could enhance external validity by expanding the sample size and including participants from various backgrounds. Second, the reliance on self-reported data through questionnaires may introduce bias, as participants’ responses could be influenced by their subjective perceptions or social desirability. Incorporating mixed methods, such as qualitative interviews or observational studies, could provide deeper insights and validate the quantitative findings by capturing richer, more nuanced data. Third, the potential risks of targeted emotional influence by chatbots on the public are recognized as an important concern. Although this was not the primary focus of the present study, it is acknowledged as a limitation. Future research should explore how chatbots may intentionally or unintentionally influence public emotions, as well as the broader implications of such influence. Lastly, while the study emphasizes the moderating role of age, it does not explore other demographic factors such as education level, socioeconomic status, or digital literacy, which may also influence user interactions with government chatbots. Addressing these additional variables in future research could yield a more comprehensive understanding of the factors affecting public perceptions and interactions with AI technologies in the public sector. Expanding the demographic scope to involve diverse populations from various cultural, geographic, and socioeconomic backgrounds would further enhance generalizability. 
Employing mixed methods would allow a broader range of experiences and perceptions to be captured, ultimately leading to a better understanding of user engagement. Investigating other moderating factors beyond age, such as prior experience with technology or specific user needs, could also contribute to a more comprehensive framework for understanding user interactions with government chatbots. This approach would facilitate improved design and implementation strategies for these technologies.

Author Contributions

Conceptualization, Y.G.; methodology, Y.G. and B.L.; software, Y.G. and B.L.; validation, Y.G., P.D. and B.L.; formal analysis, P.D.; investigation, P.D.; resources, Y.G. and B.L.; data curation, Y.G.; writing—original draft preparation, Y.G.; writing—review and editing, P.D. and B.L.; visualization, Y.G.; supervision, P.D. and B.L.; project administration, P.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data will be made available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Guo, Y.; Dong, P. Factors Influencing User Favorability of Government Chatbots on Digital Government Interaction Platforms across Different Scenarios. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 818–845. [Google Scholar] [CrossRef]
  2. Guo, Y.; Zhang, T.; Deng, Y. The past, present, and future of digital government: Insights from Chinese practices. Public Adm. Issues 2024, 5, 6–24. (In English) [Google Scholar] [CrossRef]
  3. Patil, V.; Hadawale, O.; Pawar, V.; Gijre, M. Emotion-linked AIoT-based cognitive home automation system with sensovisual method. In Proceedings of the IEEE Pune Section International Conference (PuneCon), Pune, India, 16–19 December 2021; pp. 1–7. [Google Scholar]
  4. Hoffmann, C.; Linden, P.; Vidal, M. Creating and capturing artificial emotions in autonomous robots and software agents. J. Web Eng. 2021, 20, 993–1030. [Google Scholar] [CrossRef]
  5. Mowafey, S.; Gardner, S. A novel adaptive approach for home care ambient intelligent environments with an emotion-aware system. In Proceedings of the UKACC International Conference on Control, Cardiff, UK, 3–5 September 2012. [Google Scholar]
  6. Hwang, T.; Lee, H.; Kim, D. The impact of chatbot emotional intelligence on user engagement and satisfaction. J. Serv. Res. 2023, 26, 23–37. [Google Scholar]
  7. Guerrero-Vásquez, L.; Chasi-Pesantez, P.; Castro-Serrano, R.; Robles-Bykbaev, V.; Bravo-Torres, J.; López-Nores, M. AVATAR: Implementation of a human-computer interface based on an intelligent virtual agent. In Proceedings of the IEEE Colombian Conference on Communications and Computing (COLCOM), Barranquilla, Colombia, 5–7 June 2019; pp. 1–5. [Google Scholar]
  8. Saxena, A.; Khanna, A.; Gupta, D. Emotion recognition and detection methods: A comprehensive survey. J. Artif. Intell. Syst. 2020, 2, 53–79. [Google Scholar] [CrossRef]
  9. Peng, B. Research on the impact of chatbot anthropomorphism on consumer information search. Ind. Sci. Technol. Innov. 2023, 42–44. [Google Scholar]
  10. Bai, N. The influence of hotel service robot anthropomorphism and online review types on customer purchase intentions. Bus. Econ. Res. 2023, 12, 181–184. [Google Scholar]
  11. Liu, Y. The Mechanism of How Anthropomorphized Appearance and Interaction of Virtual Chatbots Affect User Acceptance; Anhui Polytechnic University: Wuhu, China, 2024. [Google Scholar]
  12. Zhu, M. The Impact of Cognitive Anthropomorphism in AI Service Robots on Consumer Subjective Well-Being; North China University of Water Resources and Electric Power: Zhengzhou, China, 2023. [Google Scholar]
  13. He, S. Study on the Impact of Empathic Abilities of AI Service Robots on Consumer Well-Being; North China University of Water Resources and Electric Power: Zhengzhou, China, 2023. [Google Scholar]
  14. Wang, X.; Su, C. Does anthropomorphism help alleviate customer dissatisfaction after service failure by robots? The mediating role of responsibility attribution. Financ. Trade Res. 2023, 34, 57–70. [Google Scholar]
  15. Bannister, F.; Connolly, R. ICT, public values and transformative government: A framework and programme for research. Gov. Inf. Q. 2014, 31, 119–128. [Google Scholar] [CrossRef]
  16. Linders, D. From e-government to we-government: Defining a typology for citizen coproduction in the age of social media. Gov. Inf. Q. 2012, 29, 446–454. [Google Scholar] [CrossRef]
  17. Gil-Garcia, J.R.; Helbig, N. Exploring e-government benefits and success factors. Gov. Inf. Q. 2006, 23, 187–216. [Google Scholar]
  18. Anthes, G. Chatbots in the enterprise. Commun. ACM 2017, 60, 13–15. [Google Scholar]
  19. Hatfield, E.; Cacioppo, J.T.; Rapson, R.L. Emotional Contagion; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  20. Deng, H.; Hu, P.; Li, Z. The Role of Emotional Mimicry in Emotional Contagion: Re-reading the Mechanism of Mimicry-feedback. Chin. J. Clin. Psychol. 2016, 24, 225–228. [Google Scholar]
  21. Zhang, J.; Wang, X.; Lu, J.; Liu, L.; Feng, Y. The impact of emotional expression by artificial intelligence recommendation chatbots on perceived humanness and social interactivity. Decis. Support Syst. 2024, 187, 114347. [Google Scholar] [CrossRef]
  22. Bilquise, G.; Ibrahim, S.; Shaalan, K. Emotionally Intelligent Chatbots: A Systematic Literature Review. Hum. Behav. Emerg. Technol. 2022, 2022, 9601630. [Google Scholar] [CrossRef]
  23. Rodríguez-Martínez, A.; Amezcua-Aguilar, T.; Cortés-Moreno, J.; Jiménez-Delgado, J.J. Qualitative Analysis of Conversational Chatbots to Alleviate Loneliness in Older Adults as a Strategy for Emotional Health. Healthcare 2024, 12, 62. [Google Scholar] [CrossRef]
  24. Gnewuch, U.; Morana, S.; Maedche, A. Designing chatbot conversations for user engagement: Insights from emotional contagion theory. Inf. Syst. J. 2022, 32, 156–175. [Google Scholar] [CrossRef]
  25. Liu-Thompkins, Y.; Okazaki, S.; Li, H. Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience. J. Acad. Mark. Sci. 2022, 50, 1198–1218. [Google Scholar] [CrossRef]
  26. Beattie, G.; Ellis, A.W. The role of emotional contagion in AI-mediated communication: How do we trust emotionally responsive machines? Psychol. Mark. 2021, 38, 1461–1472. [Google Scholar] [CrossRef]
  27. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  28. Zhao, X.; Liao, Y. The influence of empathy on AI-driven service interactions: A study of user satisfaction with emotionally responsive chatbots. J. Bus. Res. 2022, 137, 215–227. [Google Scholar] [CrossRef]
  29. Epley, N.; Waytz, A.; Cacioppo, J.T. On Seeing Human: A Three-factor Theory of Anthropomorphism. Psychol. Rev. 2007, 114, 864–886. [Google Scholar] [CrossRef] [PubMed]
  30. Troshani, I.; Hill, S.R.; Sherman, C. Do We Trust in AI: Role of Anthropomorphism and Intelligence. J. Comput. Inf. Syst. 2020, 61, 481–491. [Google Scholar] [CrossRef]
  31. Li, Y.; Wang, Q. The impact of robot appearance anthropomorphism on customer acceptance intentions—An empirical analysis based on different service scenarios. Jianghan Acad. 2024, 43, 16–28. [Google Scholar]
  32. Yu, F.; Xu, L. Anthropomorphism in Artificial Intelligence. J. Northwest Norm. Univ. (Soc. Sci. Ed.) 2020, 57, 52–60. [Google Scholar]
  33. Meng, Q.; Guo, Y.; Wu, J. Digital Social Governance: Conceptual Connotation, Key Areas and Innovative Direction. Soc. Gov. Rev. 2023, 22–31. [Google Scholar]
  34. Senadheera, S.; Yigitcanlar, T.; Desouza, K.C.; Mossberger, K.; Corchado, J.; Mehmood, R.; Li, R.Y.M.; Cheong, P.H. Understanding chatbot adoption in local governments: A review and framework. J. Urban Technol. 2024, 1–35. [Google Scholar] [CrossRef]
  35. Folstad, A.; Larsen, A.G.; Bjerkreim-Hanssen, N. The human likeness of government chatbots—An empirical study from Norwegian municipalities. In Electronic Government: 22nd IFIP WG 8.5 International Conference, EGOV 2023, Proceedings (Lecture Notes in Computer Science, 14130); Lindgren, I., Csaki, C., Kalampokis, E., Janssen, M., Viale Pereira, G., Virkar, S., Tambouris, E., Zuiderwijk, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar] [CrossRef]
  36. Jais, R.; Ngah, A.H. The moderating role of government support in chatbot adoption intentions among Malaysian government agencies. Transform. Gov. People Process Policy 2024, 18, 417–433. [Google Scholar] [CrossRef]
  37. Ju, J.; Meng, Q.; Sun, F.; Liu, L.; Singh, S. Citizen preferences and government chatbot social characteristics: Evidence from a discrete choice experiment. Gov. Inf. Q. 2023, 40, 101785. [Google Scholar] [CrossRef]
  38. Tsai, M.-H.; Yang, C.-H.; Chen, J.Y.; Kang, S.-C. Four-stage framework for implementing a chatbot system in disaster emergency operation data management: A flood disaster management case study. KSCE J. Civ. Eng. 2021, 25, 503–515. [Google Scholar] [CrossRef]
  39. Keyner, S.; Savenkov, V.; Vakulenko, S. Open data chatbot. In Semantic Web: ESWC 2019 Satellite Events; Hitzler, P., Kirrane, S., Hartig, O., DeBoer, V., Vidal, M.E., Maleshkova, M., Schlobach, S., Hammar, K., Lasierra, N., Stadtmuller, S., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11762, pp. 111–115. [Google Scholar] [CrossRef]
  40. Aoki, N. An experimental study of public trust in AI chatbots in the public sector. Gov. Inf. Q. 2020, 37, 101490. [Google Scholar] [CrossRef]
  41. Abbas, N.; Følstad, A.; Bjørkli, C.A. Chatbots as part of digital government service provision—A user perspective. In Chatbot Research and Design: 6th International Workshop, CONVERSATIONS 2022, Revised Selected Papers (Lecture Notes in Computer Science, 13815); Følstad, A., Araujo, T., Papadopoulos, S., Law, E.L.-C., Luger, E., Goodwin, M., Brandtzaeg, P.B., Eds.; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar] [CrossRef]
  42. Hasan, I.; Rizvi, S.; Jain, S.; Huria, S. The AI enabled chatbot framework for intelligent citizen-government interaction for delivery of services. In Proceedings of the 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 17–19 March 2021. [Google Scholar]
  43. Georgios, P.; Rafail, P.; Efthimios, T. Integration of chatbots with knowledge graphs in eGovernment: The case of getting a passport. In 25th Pan-Hellenic Conference on Informatics with International Participation (PCI2021); Vassilakopoulos, M.G., Karanikolas, N.N., Eds.; ACM: New York, NY, USA, 2021; pp. 425–429. [Google Scholar] [CrossRef]
  44. Wang, Y.; Zhang, N.; Zhao, X. Understanding the determinants in the different government AI adoption stages: Evidence of local government chatbots in China. Soc. Sci. Comput. Rev. 2022, 40, 534–554. [Google Scholar] [CrossRef]
  45. Oliveira da Silva Batista, G.; de Souza Monteiro, M.; Cardoso de Castro Salgado, L. How do ChatBots look like?: A comparative study on government chatbots profiles inside and outside Brazil. In Proceedings of the 21st Brazilian Symposium on Human Factors in Computing Systems (IHC ’22), Diamantina, Brazil, 17–21 October 2022; ACM: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  46. Phan, Q.N.; Tseng, C.-C.; Le, T.T.H.; Nguyen, T.B.N. The application of chatbot on Vietnamese migrant workers’ right protection in the implementation of new generation free trade agreements (FTAs). AI Soc. 2023, 38, 1771–1783. [Google Scholar] [CrossRef]
  47. Khadija, M.A.; Nurharjadmo, W. Deep learning generative Indonesian response model chatbot for JKN-KIS. In Proceedings of the 2022 1st International Conference on Smart Technology, Applied Informatics, and Engineering (APICS), Surakarta, Indonesia, 23–24 August 2022. [Google Scholar] [CrossRef]
  48. de Andrade, G.G.; Sousa Silva, G.R.; Molina Duarte Junior, F.C.; Santos, G.A.; Lopes de Mendonca, F.L.; de Sousa Junior, R.T. EvaTalk: A chatbot system for the Brazilian government virtual school. In Proceedings of the 22nd International Conference on Enterprise Information Systems (ICEIS), Prague, Czech Republic, 5–7 May 2020; Filipe, J., Smialek, M., Brodsky, A., Hammoudi, S., Eds.; SCITEPRESS: Setúbal, Portugal, 2020; Volume 1, pp. 556–562. [Google Scholar] [CrossRef]
  49. Boden, C.; Fischer, J.; Herbig, K.; Spierling, U. CitizenTalk: Application of chatbot infotainment to E-democracy. In Technologies for Interactive Digital Storytelling and Entertainment, Proceedings; Göbel, S., Malkewitz, R., Iurgel, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4326, pp. 370–381. [Google Scholar]
  50. Guo, Y. Digital Government and Public Interaction: Platforms, Chatbots, and Public Satisfaction; IGI Global: Hershey, PA, USA, 2025. [Google Scholar]
  51. Nass, C.; Moon, Y. Machines and mindlessness: Social responses to computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  52. Czaja, S.J.; Charness, N.; Fisk, A.D.; Hertzog, C.; Nair, S.N.; Rogers, W.A.; Sharit, J. Factors predicting the use of technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychol. Aging 2006, 21, 333–352. [Google Scholar] [CrossRef] [PubMed]
  53. Carstensen, L.L.; Fung, H.H.; Charles, S.T. Socioemotional selectivity theory and the regulation of emotion in the second half of life. Motiv. Emot. 2003, 27, 103–123. [Google Scholar] [CrossRef]
  54. Harrison, P.; Smith, A.; Jones, R. Public Expectations and Emotional Engagement in Government Services. J. Public Adm. 2018, 25, 345–362. [Google Scholar]
  55. Smith, J.; Jones, L. The Role of Age in Perceptions of Government Services. Int. J. Public Policy 2020, 14, 220–237. [Google Scholar]
Figure 1. Theoretical model.
Figure 2. Moderating effects of age on the relationship between public expectations and simulated emotional perception of government chatbots.
Table 1. The articles that are directly relevant to this study.
Columns: Authors; Year; Purpose; Methods; Key Findings.

Guo & Dong [1] (2024)
Purpose: The study investigates the factors influencing user favorability towards government chatbots on digital government interaction platforms across various scenarios.
Methods: Questionnaire experiment; comparative research method.
Key Findings: In examining user favorability in government services and policy consultations, four distinct mediating pathways emerge. (1) Social Support → Behavioral Quality → User Favorability: social support enhances user favorability through improved behavioral quality, with slightly greater effects in policy consultations than in government services. (2) Emotional Perception → Behavioral Quality → User Favorability: positive emotional perceptions lead to higher favorability, with stronger effects in government services. (3) Perceived System → Behavioral Quality → User Favorability: the perceived system’s influence diverges; in government services it appears to affect favorability directly, while in policy consultations it impacts user behavior and thus favorability. (4) Public Expectation → Behavioral Quality → User Favorability: high public expectations fully mediate favorability in government services, whereas this mediation is only partial in policy consultations, suggesting that additional factors influence favorability.

Ju et al. [37] (2023)
Purpose: This study investigates citizen preferences regarding the social characteristics of government chatbots through a discrete choice experiment conducted among Chinese users.
Methods: Discrete choice experiment.
Key Findings: The findings reveal how individual characteristics, including age, gender, and prior chatbot experience, modulate these preferences.
Table 2. Sample information.
Columns: Frequency; Percentage (%); Cumulative Percentage (%).

Gender
  Male: 84; 43.30; 43.30
  Female: 110; 56.70; 100.00
Age
  Under 20 years old: 2; 1.03; 1.03
  21–30 years old: 101; 52.06; 53.09
  31–40 years old: 79; 40.72; 93.81
  41–50 years old: 8; 4.12; 97.94
  51–60 years old: 4; 2.06; 100.00
Occupation
  Student: 75; 38.66; 38.66
  Teachers and researchers, including educational professionals: 43; 22.16; 60.82
  Policy makers: 2; 1.03; 61.86
  Computer industry professionals: 18; 9.28; 71.13
  Civil servant: 15; 7.73; 78.87
  Medical personnel: 8; 4.12; 82.99
  Unemployed: 6; 3.09; 86.08
  Other: 27; 13.92; 100.00
Education
  Primary school and below: 1; 0.52; 0.52
  General high school/secondary vocational school/technical school/vocational high school: 2; 1.03; 1.55
  Junior college: 4; 2.06; 3.61
  Bachelor’s: 60; 30.93; 34.54
  Master’s: 83; 42.78; 77.32
  Ph.D.: 44; 22.68; 100.00
Total: 194; 100.0; 100.0
Table 3. Descriptive statistics.
Columns per item: Mean ± Standard Deviation; Variance; S.E.; Mean 95% CI [LL, UL]; IQR; Kurtosis; Skewness; Coefficient of Variation (CV).

Q1_1: 4.046 ± 0.923; 0.853; 0.066; [3.916, 4.176]; 1.000; 1.188; −1.050; 22.822%
Q1_2: 4.021 ± 0.905; 0.818; 0.065; [3.893, 4.148]; 1.000; 1.276; −1.017; 22.498%
Q1_3: 4.031 ± 0.927; 0.859; 0.067; [3.900, 4.161]; 1.000; 0.708; −0.930; 22.995%
Q3_1: 3.747 ± 0.967; 0.936; 0.069; [3.611, 3.884]; 1.000; 0.193; −0.619; 25.815%
Q3_2: 3.402 ± 1.098; 1.205; 0.079; [3.248, 3.557]; 1.000; −0.552; −0.258; 32.272%
Q3_3: 3.356 ± 1.130; 1.277; 0.081; [3.197, 3.515]; 1.000; −0.699; −0.213; 33.676%
Q3_4: 3.474 ± 1.024; 1.049; 0.074; [3.330, 3.618]; 1.000; −0.328; −0.237; 29.474%
Table 4. Reliability analysis.
The scenario of government service use. Columns per item: CITC; Alpha coefficient if item deleted.

Public expectations (Cronbach α = 0.754; standardized Cronbach α = 0.754)
  Q1_1: CITC 0.532; α if item deleted 0.728
  Q1_2: CITC 0.551; α if item deleted 0.710
  Q1_3: CITC 0.675; α if item deleted 0.560
Simulated emotional information perception (Cronbach α = 0.786; standardized Cronbach α = 0.795)
  Q2_1: CITC 0.551; α if item deleted 0.759
  Q2_2: CITC 0.666; α if item deleted 0.698
  Q2_3: CITC 0.548; α if item deleted 0.760
  Q2_4: CITC 0.657; α if item deleted 0.708
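The internal-consistency statistics in Table 4 rest on the standard Cronbach’s α formula, α = k/(k − 1) · (1 − Σ σ²_item / σ²_total). A minimal sketch of that computation, assuming NumPy is available; the toy three-item data below are illustrative and are not the study’s dataset:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale sum
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: three correlated 5-point Likert items driven by one latent trait
rng = np.random.default_rng(1)
latent = rng.normal(0.0, 1.0, 300)
items = np.column_stack(
    [np.clip(np.round(3 + latent + rng.normal(0.0, 0.8, 300)), 1, 5)
     for _ in range(3)]
)
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```

Item-total statistics such as CITC follow the same pattern: correlate each item with the sum of the remaining items, then recompute α with that item excluded.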
Table 5. Analysis results of moderating effects.
Columns per predictor: B; S.E.; t; p; β.

Model 1
  Constant: 3.901; 0.418; 9.335; 0.000 **
  Gender: 0.211; 0.108; 1.945; 0.053; β = 0.111
  Occupation: 0.044; 0.022; 2.040; 0.043 *; β = 0.120
  Education background: −0.151; 0.061; −2.456; 0.015 *; β = −0.144
  Public expectation: 0.612; 0.065; 9.411; 0.000 **; β = 0.541
  R² = 0.410; adjusted R² = 0.398; F(4,189) = 32.884, p = 0.000; ΔR² = 0.410; ΔF(4,189) = 32.884, p = 0.000

Model 2
  Constant: 3.994; 0.406; 9.828; 0.000 **
  Gender: 0.218; 0.105; 2.071; 0.040 *; β = 0.115
  Occupation: 0.026; 0.022; 1.212; 0.227; β = 0.071
  Education background: −0.159; 0.060; −2.664; 0.008 **; β = −0.152
  Public expectation: 0.610; 0.063; 9.666; 0.000 **; β = 0.539
  Age: 0.271; 0.076; 3.557; 0.000 **; β = 0.198
  R² = 0.448; adjusted R² = 0.433; F(5,188) = 30.461, p = 0.000; ΔR² = 0.037; ΔF(1,188) = 12.655, p = 0.000

Model 3
  Constant: 4.105; 0.391; 10.502; 0.000 **
  Gender: 0.202; 0.101; 1.998; 0.047 *; β = 0.106
  Occupation: 0.029; 0.021; 1.380; 0.169; β = 0.078
  Education background: −0.176; 0.057; −3.067; 0.002 **; β = −0.168
  Public expectation: 0.660; 0.062; 10.689; 0.000 **; β = 0.583
  Age: 0.236; 0.073; 3.219; 0.002 **; β = 0.173
  Public expectation × Age: 0.393; 0.095; 4.142; 0.000 **; β = 0.222
  R² = 0.494; adjusted R² = 0.478; F(6,187) = 30.426, p = 0.000; ΔR² = 0.046; ΔF(1,187) = 17.159, p = 0.000

Dependent variable: simulated emotional perception. * p < 0.05, ** p < 0.01.
Table 6. Effects of moderating variable levels on the outcome.
Columns per level of the moderating variable: Regression Coefficient; Standard Error; t-Value; p-Value; 95% Confidence Interval (CI).

Mean level: 0.660; 0.062; 10.689; 0.000; [0.539, 0.781]
High level (+1 SD): 0.932; 0.098; 9.460; 0.000; [0.739, 1.125]
Low level (−1 SD): 0.388; 0.081; 4.806; 0.000; [0.230, 0.547]
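A simple-slopes analysis of this kind fits one regression with a product term and then evaluates the slope of the predictor at chosen moderator levels (slope = b₁ + b₃·m). The sketch below illustrates the mechanics on synthetic, mean-centered data; the coefficients and variable names are illustrative only and are not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
expectation = rng.normal(0.0, 1.0, n)   # mean-centered predictor
age = rng.normal(0.0, 1.0, n)           # mean-centered moderator

# Generate an outcome with a positive interaction (coefficients illustrative)
y = (4.0 + 0.66 * expectation + 0.24 * age
     + 0.39 * expectation * age + rng.normal(0.0, 0.5, n))

# Design matrix: intercept, predictor, moderator, product term
X = np.column_stack([np.ones(n), expectation, age, expectation * age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta

def simple_slope(age_level: float) -> float:
    """Slope of expectation on y at a given (centered) moderator level."""
    return b1 + b3 * age_level

sd_age = age.std()
print(f"slope at mean age     : {simple_slope(0.0):.3f}")
print(f"slope at +1 SD of age : {simple_slope(+sd_age):.3f}")
print(f"slope at -1 SD of age : {simple_slope(-sd_age):.3f}")
```

With a positive interaction coefficient, the slope at +1 SD exceeds the slope at the mean, which exceeds the slope at −1 SD, matching the ordering reported in Table 6.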
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Guo, Y.; Dong, P.; Lu, B. The Influence of Public Expectations on Simulated Emotional Perceptions of AI-Driven Government Chatbots: A Moderated Study. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 50. https://doi.org/10.3390/jtaer20010050
