Article

Exploring Factors Influencing Patients’ Intention to Adopt Generative AI on Online Healthcare Platforms

1 Business School, Nanjing University, Nanjing 210093, China
2 International Education College, Nanjing University of Chinese Medicine, Nanjing 210023, China
3 School of Economics & Trade, Hunan University, Changsha 410006, China
* Authors to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 287; https://doi.org/10.3390/jtaer20040287
Submission received: 10 June 2025 / Revised: 17 August 2025 / Accepted: 19 August 2025 / Published: 15 October 2025

Abstract

The development of generative AI has disrupted various fields, and the field of online healthcare is no exception. However, there is a lack of research on patients’ intention to adopt generative AI on online healthcare platforms. Therefore, the aim of this study is to investigate the factors influencing patients’ intention to adopt generative AI. Employing a questionnaire-based survey, we explore the factors influencing patients’ intention to adopt generative AI through the UTAUT2 model, considering the moderating effects of construal level, health literacy, and AI literacy. We find that performance expectancy, social influence, and facilitating conditions are positively associated with patients’ intention. Surprisingly, effort expectancy and hedonic motivation do not have a significant impact on patients’ intention. Construal level positively moderates the relationship between performance expectancy and patients’ intention; health literacy negatively moderates the relationship between social influence and patients’ intention. AI literacy positively moderates the relationship between effort expectancy and patients’ intention but negatively moderates the relationship between social influence and patients’ intention. This study enriches UTAUT2 theory and provides practical insights for the development and promotion of generative AI on online healthcare platforms.

1. Introduction

Generative artificial intelligence (AI) is reshaping and disrupting various industries [1,2], and online healthcare is no exception. Embedded within online healthcare platforms, generative AI trains language models using real consultation data from these platforms, enabling rapid responses and answers to patient inquiries through advanced learning and reasoning capabilities. Generative AI holds significant value on online healthcare platforms [3]. Unlike automated response systems that leverage AI technologies to assist doctors in making quick diagnoses and responses [3,4], the generative AI developed for online healthcare platforms functions as an intelligent medical assistant for patients. By learning from extensive medical data and knowledge, it can answer health-related questions, provide medical consultations, and explain disease information in a patient-friendly manner [5]. Moreover, it can assist patients in making appointments, selecting doctors, and querying information, thus serving as a convenient, efficient, and friendly tool for online healthcare services that is accessible anytime and anywhere. Additionally, generative AI can enhance patient engagement by personalizing interactions based on individual health needs and preferences, ultimately improving the overall health awareness and health literacy of patients [6,7].
Despite the many benefits of generative AI for online healthcare services, its widespread adoption by online healthcare platforms and patients remains limited, and there is scant literature exploring patients’ intention to adopt generative AI on online healthcare platforms. Previous research on generative AI in the field of healthcare has primarily focused on both its benefits and challenges. Existing research has extensively explored the role of generative AI in healthcare, including its support for diagnostic decision-making by physicians [8], assistance in clinical practice and research [9], enhancement of healthcare quality and efficiency [10], optimization of operations and management [5], and reinforcement of patient education and physician training [11]. There are also concerns regarding generative AI, such as the potential for generating inaccurate content, making inappropriate diagnoses [12], issues related to privacy and security [13,14], ethical dilemmas regarding patient understanding [15], and deficiencies in legal regulation. However, research on generative AI in online healthcare remains limited, and it is still unclear which factors influence patients’ adoption of generative AI on such platforms. Undoubtedly, the value that generative AI brings to healthcare cannot be denied; however, this value can only be realized if patients adopt generative AI. Therefore, exploring the factors influencing patients’ intention to adopt generative AI on online healthcare platforms is essential for promoting its broader application and maximizing its value. UTAUT2 is a well-established model for understanding technology acceptance and usage, and it has been widely applied in research on user adoption of new technologies [16,17]. In this study, generative AI is an emerging technology, patients constitute a new user group, and online healthcare represents a novel usage context. Therefore, this study employs the UTAUT2 model to investigate the factors influencing patients’ intention to adopt generative AI on online healthcare platforms.
Venkatesh et al. [18] point out that moderators are critical for meaningful extensions of theories of technology acceptance. For example, individual differences in age, gender, and experience moderate the effects of UTAUT2 on behavioral intention [18]. Therefore, examining boundary conditions in patients’ intention to adopt generative AI on online healthcare platforms is crucial, as it contributes to the extension of technology adoption theories and deepens the understanding of this specific context. Performance expectancy and effort expectancy correspond to attributes situated at different construal levels [19]. Performance expectancy, as an outcome-oriented variable, aligns with a higher construal level, in which individuals adopt an abstract mode of thinking that emphasizes causes and consequences. In contrast, effort expectancy, as a process-oriented variable, is associated with a lower construal level, wherein individuals engage in a concrete mode of thinking that highlights execution and processes [20]. Accordingly, the influence of performance expectancy and effort expectancy on the adoption of generative AI may differ across individuals with different construal levels. Social influence and facilitating conditions represent the key social and technological factors through which individuals access and use health-related information. Health literacy determines both the motivation and the ability to acquire and utilize such information [21,22]. Therefore, the impact of social influence and facilitating conditions on the adoption of generative AI for health-related information is likely to vary across individuals with different levels of health literacy. Additionally, individuals with different levels of AI literacy demonstrate distinct AI-related behaviors. Specifically, variations in AI literacy shape individuals’ trust in and ability to use AI technologies [23]. Consequently, differences in AI literacy may lead to divergent perspectives on the adoption of generative AI. Taken together, this study aims to enhance the understanding of patients’ intention to adopt generative AI on online healthcare platforms by examining the moderating roles of construal level, health literacy, and AI literacy from the patient’s perspective.
In summary, this study aims to examine the factors influencing patients’ intention to adopt generative AI on online healthcare platforms and the moderating effect of construal level, health literacy, and AI literacy. We collect data through a questionnaire via online channels. The analysis reveals that performance expectancy, social influence, and facilitating conditions are positively associated with patients’ intention to adopt generative AI on online healthcare platforms, whereas effort expectancy and hedonic motivation do not exhibit a significant effect. Furthermore, the results show that the construal level positively moderates the relationship between performance expectancy and patients’ intention. Health literacy negatively moderates the relationship between social influence and patients’ intention. AI literacy positively moderates the relationship between effort expectancy and patients’ intention but negatively moderates the relationship between social influence and patients’ intention. These findings contribute to a more nuanced understanding of patients’ intention to adopt generative AI and offer practical implications for the effective design, implementation, and dissemination of such technologies in online healthcare contexts.

2. Literature Review

2.1. UTAUT2

The adoption of technology is a well-researched topic in the field of information systems. Numerous theories and models have been developed to identify the factors that influence technology adoption, such as the theory of reasoned action, the technology acceptance model, the motivational model, the theory of planned behavior, the model of PC utilization, the innovation diffusion theory, and the social cognitive theory. Venkatesh et al. [24] synthesized these theories and models to create a unified model, known as the unified theory of acceptance and use of technology (UTAUT), which encompasses four primary determinants: performance expectancy, effort expectancy, social influence, and facilitating conditions. Later, Venkatesh et al. [18] extended the UTAUT model to consumer behavior research by incorporating three additional constructs (hedonic motivation, price value, and habit) and proposed the unified theory of acceptance and use of technology 2 (UTAUT2). In general, the factors that influence technology adoption include technology-related variables, user characteristics, and the usage environment, which together constitute the information system scenario [19,25]. These aspects are also critical for the wide-ranging application of UTAUT2 to new technologies (e.g., generative AI), new users (e.g., patients), and new usage contexts (e.g., healthcare settings). Additionally, the incorporation of new elements and structures in the UTAUT2 model can enhance its explanatory power and expand its theoretical boundaries.
Using generative AI represents a unique form of consumption behavior and falls within the UTAUT2 research scenario. Given that generative AI involves new technologies, patients, and usage environments, we aim to explore the factors that influence patients’ adoption through the UTAUT2 model. Because generative AI is not yet in widespread use, patients have not yet developed usage habits, and generative AI is provided as a free feature; therefore, habit and price value are not considered in the model. Additionally, this paper examines the impact of patient characteristics, such as construal level, health literacy, and AI literacy, on the adoption of generative AI to explore the boundary conditions of this technology adoption model.

2.2. Construal Level

Construal level theory posits that individuals represent information at different levels of abstraction and embodiment [26]. Individuals with a high construal level tend to focus on abstract, superordinate, and decontextualized features that convey the “why” aspects of things and their desirability. Conversely, individuals with a low construal level tend to focus on concrete, subordinate, and contextualized features that convey the “how” aspects of things and their feasibility [26]. In other words, individuals with a high construal level are more concerned with causes and consequences, whereas those with a low construal level are more concerned with execution and processes [27].
Construal level theory holds that individuals represent things at varying levels of embodiment or abstraction, which is influenced by both situational and individual factors [28]. On the one hand, construal level is related to psychological distance, which includes temporal distance, spatial distance, social distance, and hypothetical distance [29,30,31]. When psychological distance is greater, people tend to focus more on abstract features, whereas when it is smaller, people tend to focus more on concrete features [20]. Several studies have demonstrated that situational factors such as psychological distance significantly affect the construal level. On the other hand, Vallacher and Wegner [32] proposed the action identification theory, which suggests that individuals differ in their action identification levels: lower-level identifications focus on how actions are performed and involve more concrete representations, whereas higher-level identifications focus on the causes or consequences of actions and involve more abstract representations. Vallacher and Wegner developed the Behavior Identification Form (BIF) to measure an individual’s action identification level, which many studies have used for this purpose [28,32]. The notion of construal level aligns with the concept of action identification level and is a personal attribute that varies across individuals, although construal level has a broader scope and stronger explanatory power [30].
Based on construal level theory, performance expectancy and effort expectancy are characterized by attributes at different construal levels. Furthermore, for individuals with different construal levels, performance expectancy and effort expectancy may exert different effects on their intention to accept a given product or service. Therefore, the present study incorporates the patient’s construal level as a moderating variable to investigate the boundary conditions of the relationships between performance expectancy, effort expectancy, and patients’ intention to use generative AI.

2.3. Health Literacy

Health literacy is a multifaceted concept that refers to an individual’s or group’s competencies in maintaining and promoting their health [33]. The World Health Organization (WHO) definition of health literacy is widely recognized and asserts that it comprises an individual’s cognitive and social skills that determine their motivation and capacity to access, comprehend, and use health-related information and to enhance and maintain their well-being [21]. Individuals with high levels of health literacy exhibit rational judgment and decision-making abilities as well as cognitive and social skills that enable them to solve problems and enhance their health status [34]. On the contrary, low-health-literate individuals tend to exhibit poorer health status and higher healthcare costs due to their lack of these skills [35], resulting in increased hospitalizations, emergency room visits, and mortality risks [36].
In the field of healthcare, there is evidence that individuals with varying levels of health literacy exhibit different health-related behaviors, and the moderating effect of health literacy has been explored. For example, Virlée et al. [22] found that online health community members with low health literacy do not effectively integrate and utilize online health resources compared to those with high health literacy. Similarly, Chen et al. [37] noted that health literacy can moderate the social and knowledge relationship between doctors and patients in online physician choices. Mackert et al. [21] also reported that individuals with high health literacy tend to have higher perceived usefulness and perceived ease of use of health information technology as well as lower privacy perceptions and higher trust in such technology. Therefore, in the investigation of patient behavioral intentions, it is pertinent to take into account the individual factor of health literacy, which can serve as a significant boundary condition.

2.4. AI Literacy

AI literacy denotes an individual’s proficiency in AI, encompassing a grasp of its fundamental concepts, technical intricacies, effective methodologies for its creation and deployment, and awareness of the governing legal frameworks [23]. An increasing number of scholars recognize the significance of AI literacy [38]. In the education sector, AI literacy enhances learners’ ability to utilize generative AI for problem-solving, underscoring the importance of fostering AI literacy [39]. Similarly, in the healthcare industry, improving AI literacy facilitates the effective application of AI by medical professionals [40]. Numerous researchers have investigated how to measure AI literacy. Celik [41] argues that the digital divide, cognitive absorption, and computational thinking significantly impact AI literacy. Some scholars measure AI literacy based on subjective understanding of AI technologies [42], while others assess it through objective knowledge of AI [23]. Existing studies indicate that AI literacy positively influences perceptions of ease of use and usefulness, thereby enhancing the overall acceptance of AI technologies. However, findings diverge: some research suggests that patients with higher AI literacy exhibit greater acceptance of AI [43], while other work proposes that patients with lower AI literacy demonstrate higher acceptance [23]. Nevertheless, there is a notable gap in research exploring the moderating role of AI literacy, particularly in relation to AI acceptance. Therefore, this study aims to investigate the moderating effect of AI literacy to address this research gap.

3. Research Hypothesis

3.1. Influencing Factors

In the realm of healthcare services, performance expectancy refers to the degree to which patients perceive that utilizing a particular healthcare system will help them achieve their overall health goals [24,44]. Generative AI can explain medical conditions to patients, answer health-related questions, assist in health management, and provide medical consultations, thereby supporting patients in achieving their health goals [14]. Effort expectancy denotes the ease with which patients can employ healthcare technologies [44]. Generative AI is designed to be patient-friendly and easy to operate. It possesses communication and interaction capabilities similar to those of a physician, allowing patients to easily engage with it in a conversational manner to address their concerns [45]. Social influence pertains to the extent to which important individuals in one’s social network encourage the use of technology to stay healthy [24]. In particular, family members and friends who are concerned about an individual’s well-being may recommend and propose the use of generative AI [46,47]. Additionally, patient groups on social media platforms exert informational and normative effects on individuals’ adoption of generative AI [48]. Facilitating conditions, which relate to patients’ perceptions of the availability of resources and support for technology use [24], are bolstered by the popularity of mobile Internet and Internet applications, which ensure access to generative AI. Finally, hedonic motivation refers to the pleasure or enjoyment derived from using generative AI [18]. By using generative AI, patients have an opportunity to attain more social support by obtaining timely and free-of-charge online healthcare services [49]. Additionally, the novelty of generative AI could enhance patient engagement and enjoyment. The above discussion leads to the following hypotheses:
H1: 
Performance expectancy is positively associated with patients’ intention to adopt generative AI.
H2: 
Effort expectancy is positively associated with patients’ intention to adopt generative AI.
H3: 
Social influence is positively associated with patients’ intention to adopt generative AI.
H4: 
Facilitating conditions are positively associated with patients’ intention to adopt generative AI.
H5: 
Hedonic motivation is positively associated with patients’ intention to adopt generative AI.

3.2. Moderation Effects

According to construal level theory, different individuals can represent an object or event at different construal levels [50]. Individuals with a higher construal level tend to adopt an abstract cognitive style that emphasizes outcomes and desirability, whereas those with a lower construal level rely on a concrete cognitive style that focuses on processes and feasibility [28]. When patients consider adopting generative AI, performance expectancy refers to the extent to which using such technology is perceived to provide beneficial outcomes, an outcome-focused evaluation aligned with purpose and desirability. This type of evaluation is consistent with the cognition of individuals with a higher construal level [30,51]. Therefore, performance expectancy is expected to have a stronger impact on adoption intention for individuals with a higher construal level compared with those at a lower construal level [20]. In contrast, effort expectancy pertains to the perceived ease of use of generative AI and reflects a feasibility-oriented evaluation centered on task execution, which aligns with the cognition of individuals with a lower construal level [18,24]. Accordingly, for individuals with a higher construal level, effort expectancy is expected to have a weaker impact on adoption intention compared with those with a lower construal level. Based on these theoretical insights, the following hypotheses are proposed:
H6a: 
The construal level strengthens the relationship between performance expectancy and patients’ intention so that the relationship is stronger when the construal level is higher.
H6b: 
The construal level weakens the relationship between effort expectancy and patients’ intention so that the relationship is weaker when the construal level is higher.
According to the definition of health literacy provided by the World Health Organization (WHO) [21], individuals with low health literacy have a reduced ability to access, understand, and apply basic health information and services and to make informed decisions related to their health [33]; they therefore rely heavily on medical professionals with specialized knowledge and training [37]. The literature has established a significant association between low health literacy and unfavorable consequences, including but not limited to suboptimal health status, escalated hospital admissions, and elevated healthcare expenditure, arising from the inability to proficiently utilize health-related information [22,35]. Conversely, individuals with high health literacy exhibit a comprehensive understanding of health-related knowledge, possess the requisite skills, and demonstrate the necessary awareness to effectively manage their health and overall well-being [22,52]. Therefore, individuals with lower health literacy may be more susceptible to social influence when considering generative AI, owing to their limited healthcare knowledge and self-awareness compared with those with higher health literacy. Additionally, individuals with lower health literacy may have a decreased ability to access, comprehend, and integrate healthcare-related resources and technologies compared with those with higher health literacy, which weakens the role of facilitating conditions in promoting the use of generative AI. Therefore, the following hypotheses are proposed:
H7a: 
Health literacy weakens the positive relationship between social influence and patients’ intention so that the relationship is weaker when health literacy is higher.
H7b: 
Health literacy strengthens the positive relationship between facilitating conditions and patients’ intention so that the relationship is stronger when health literacy is higher.
AI literacy refers to an individual’s comprehensive understanding of the knowledge and technology of AI, the methods of its development and application, and the rules that need to be adhered to [23]. Consequently, compared to patients with lower AI literacy, those with higher AI literacy possess a deeper understanding of AI technology, a greater awareness of its value, and stronger operational capabilities. Therefore, patients with high AI literacy have a better grasp of the capabilities of generative AI, thereby enhancing the effect of performance expectancy on their intention to adopt it; they are also more familiar with its operation, enabling them to use it with greater ease, which in turn boosts the impact of effort expectancy on their intention to use it. AI literacy allows patients to better utilize technical support resources, thereby strengthening the influence of facilitating conditions on their adoption intention. Patients with higher AI literacy may also enjoy the interaction with generative AI more, deriving pleasure from the exploration process and thus enhancing the effect of hedonic motivation on their adoption intention. Patients with higher levels of AI literacy are more capable of independently evaluating the utility and reliability of generative AI. As such, they are less likely to be influenced by doctors, friends, and social media [53]. This suggests that social influence exerts a weaker effect on adoption intention among patients with higher AI literacy. Based on this, we propose the following hypotheses:
H8a: 
AI literacy positively moderates the relationship between performance expectancy and patients’ intention.
H8b: 
AI literacy positively moderates the relationship between effort expectancy and patients’ intention.
H8c: 
AI literacy positively moderates the relationship between facilitating conditions and patients’ intention.
H8d: 
AI literacy positively moderates the relationship between hedonic motivation and patients’ intention.
H8e: 
AI literacy negatively moderates the relationship between social influence and patients’ intention.
The conceptual framework in this study is presented in Figure 1.

4. Research Methodology

4.1. Research Sample and Data Collection

To test the research hypotheses, this study employed a questionnaire as the primary instrument for data collection, administered through online channels via the Credamo platform. Given that generative AI is predominantly accessed and utilized via the Internet, the use of online distribution was considered appropriate. To obtain a random sample, the questionnaire was randomly distributed through the data marketplace of the Credamo platform, which allowed access to a broad and diverse pool of potential respondents. Therefore, data collected through these channels are regarded as broadly representative and reasonably random for the context of this study. The data collection was conducted in China during September 2023.
The research questionnaire encompassed three components. Firstly, the introduction section explained the purpose of the questionnaire and provided informed consent information for participants, as shown in Appendix A. It also included a filter question, “Do you know generative AI on online healthcare platforms?”, to exclude respondents who were unaware of this feature. Secondly, a survey was conducted to examine the influencing factors, patients’ intention, and boundary conditions. The questions employed a 7-point Likert scale, ranging from “strongly disagree” to “strongly agree,” and were adapted from established scales in the literature. Lastly, demographic variables, including gender, age, education, and income, were measured and used as control variables. Details are provided in Appendix A.12. Moreover, we included attention checks (Appendix A.6 and Appendix A.9). A total of 462 questionnaires were collected online. After excluding invalid responses, 305 valid questionnaires remained, resulting in a valid response rate of 66%. To ensure a random sample and control for the influence of demographic characteristics on the results, demographic information was measured. The demographic characteristics of the sample are presented in Table 1.

4.2. Measures

(1) UTAUT2. This study utilized the measurement scale developed by Venkatesh et al. to assess the variables of performance expectancy, effort expectancy, social influence, facilitating conditions, and hedonic motivation within the UTAUT2 model as well as patients’ intention to adopt generative AI [18]. To ensure the applicability of the scale to the current context, it was appropriately adapted and translated. For example, performance expectancy was measured using items such as “I find generative AI on online healthcare platforms useful in my daily life”, “Using generative AI on online healthcare platforms helps me handle health tasks more quickly”, and “Using generative AI on online healthcare platforms helps me manage my health and medical consultations more efficiently.” Participants rated their answers on a 7-point Likert scale, where 1 indicates strongly disagree and 7 indicates strongly agree. Details are provided in Appendix A.1, Appendix A.2, Appendix A.3, Appendix A.4, Appendix A.5 and Appendix A.11.
(2) Health literacy. To examine health literacy, this study employed the validated scale developed by Heijmans et al., which comprises nine items describing various scenarios, such as “When reading health-related instructions or brochures from hospitals, pharmacies, or community centers, you find words or phrases that you don’t understand” [54]. Participants rated their agreement with the items on a 7-point Likert scale, where 1 indicates strongly disagree and 7 indicates strongly agree. Details are provided in Appendix A.7.
(3) Construal level. To evaluate the construal level, this study utilized the Behavior Identification Form (BIF) developed by Vallacher and Wegner [32]. The BIF consists of 25 items, each presenting two interpretations of a behavior: one abstract interpretation representing the outcome of the behavior and one concrete interpretation representing the process of the behavior [32], as shown in Appendix A.8. For instance, one interpretation of “joining the army” is “helping the Nation’s defense”, while the other is “signing up”. The scale employs a scoring system wherein “1” signifies an abstract interpretation, indicating a high construal level, and “2” signifies a concrete interpretation, indicating a low construal level. Following Rosen et al., the scores of the 25 items were averaged for each participant to derive their construal level [55]; a higher average score indicated a lower construal level.
(4) AI literacy. To measure AI literacy, this study employed the scale developed by Tully et al., which includes a comprehensive list of 17 multiple-choice questions, such as recognizing AI (“Which of the following is NOT powered by AI?”), the interdisciplinarity of AI (“Which of the following fields contributes to the development of AI?”), and understanding intelligence (“Which form of intelligence involves emotional understanding and social skills?”); the order of answers within each question was randomized [23]. Details are provided in Appendix A.10. Each question was worth 1 point, and participants’ AI literacy was measured by their total score.
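To make these scoring procedures concrete, the following minimal sketch shows how the construal level and AI literacy scores described in (3) and (4) could be computed from a survey export. The column names and the partial answer key are illustrative assumptions, not the study’s actual data or the published key.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent.
# bif_q29 ... bif_q53 hold 1 (abstract interpretation) or 2 (concrete interpretation);
# ai_q55 ... ai_q68 hold the selected option letter for each AI literacy question.
df = pd.read_csv("survey_responses.csv")

# Construal level (BIF): average the 25 items; following Rosen et al.,
# a higher mean indicates a lower construal level.
bif_cols = [f"bif_q{i}" for i in range(29, 54)]
df["construal_level"] = df[bif_cols].mean(axis=1)

# AI literacy: 1 point per correct multiple-choice answer, summed across items.
# The answer key below is purely illustrative and does not reproduce the scale's key.
answer_key = {"ai_q55": "c", "ai_q56": "b", "ai_q57": "d"}  # ... remaining 14 items
correct = df[list(answer_key)].eq(pd.Series(answer_key))
df["ai_literacy"] = correct.sum(axis=1)
```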

5. Results

5.1. Reliability and Validity

This study utilized SmartPLS 4.0 to assess the reliability and validity of the constructs. The results show that factor loadings for each construct exceed 0.7, Cronbach’s alpha values range from 0.708 to 0.924, composite reliability (CR) values fall between 0.726 and 0.997, and average variance extracted (AVE) values surpass the recommended threshold of 0.5. These results collectively indicate strong internal reliability and convergent validity of the scale, as shown in Table 2. Furthermore, as presented in Table 3 and Table 4, the square root of the AVE for each construct exceeds the corresponding inter-construct correlations, and the HTMT ratios remain below the recommended threshold of 0.85, demonstrating good discriminant validity [56]. Consequently, the measurement model is deemed credible and valid [57,58].
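For reference, composite reliability and AVE follow standard formulas based on standardized factor loadings; the short sketch below, using made-up loadings rather than the estimates in Table 2, illustrates the computation behind the thresholds reported above.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each error variance is 1 minus the squared standardized loading.
    l = np.asarray(loadings)
    return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    l = np.asarray(loadings)
    return (l ** 2).mean()

# Illustrative loadings for a three-item construct (not the published values).
loadings = [0.82, 0.79, 0.85]
print(composite_reliability(loadings))       # acceptable if above 0.7
print(average_variance_extracted(loadings))  # acceptable if above 0.5
```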

5.2. Common Method Bias

As all data in this study are self-reported, the potential for common method bias (CMB) is carefully considered. To assess this, an unrotated exploratory factor analysis was performed, and seven factors with eigenvalues greater than 1 were extracted [59]. The maximum variance explained by a single factor was 29.851%, which is below the 40% threshold typically considered indicative of serious common method bias [60]. The results suggest that common method bias is not a significant concern in this study.
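As an illustration, this kind of single-factor check can be approximated with an eigen-decomposition of the item correlation matrix; the sketch below assumes hypothetical column-name prefixes for the self-reported items.

```python
import numpy as np
import pandas as pd

# Hypothetical export of all self-reported Likert items (column prefixes are assumed).
items = pd.read_csv("survey_responses.csv").filter(regex=r"^(pe|ee|si|fc|hm|pi)_")

# Unrotated extraction approximated via the eigen-decomposition of the correlation matrix.
corr = np.corrcoef(items.values, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

n_factors = int((eigenvalues > 1).sum())                  # factors with eigenvalue > 1
first_factor_share = eigenvalues[0] / eigenvalues.sum()   # variance explained by the first factor

print(n_factors, round(first_factor_share * 100, 2))
# Common method bias is usually not considered serious when the first factor explains < 40%.
```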

5.3. Hypothesis Testing

The hypotheses in this study were tested using SmartPLS 4.0. The results of the main effects, as shown in Table 5, indicate that performance expectancy is positively associated with patients’ intention (β = 0.307, p < 0.001), thus supporting hypothesis 1. However, effort expectancy does not have a significant effect on patients’ intention (β = 0.036, p > 0.05), thus not supporting hypothesis 2. Social influence is found to be positively associated with patients’ intention (β = 0.194, p < 0.01), supporting hypothesis 3. Additionally, facilitating conditions are positively associated with patients’ intention (β = 0.170, p < 0.05), providing support for hypothesis 4. Finally, the impact of hedonic motivation on patients’ intention is not significant (β = 0.140, p > 0.1), indicating that hypothesis 5 is not supported.
Table 6 presents the results of the moderation effects. The findings show that the construal level positively moderates the relationship between performance expectancy and patients’ intention, strengthening its positive effect (β = −0.159, p < 0.05; recall that a higher BIF score denotes a lower construal level), thus supporting hypothesis 6a. However, the moderating effect of the construal level on the relationship between effort expectancy and patients’ intention is not significant (β = 0.058, p > 0.05), failing to support hypothesis 6b. Health literacy, on the one hand, negatively moderates the relationship between social influence and patients’ intention (β = −0.127, p < 0.05), supporting hypothesis 7a. On the other hand, the moderating effect of health literacy on the relationship between facilitating conditions and patients’ intention is not significant (β = 0.022, p > 0.05), failing to support hypothesis 7b. The moderating effects of AI literacy on the relationships between performance expectancy, facilitating conditions, and hedonic motivation and patients’ intention are not significant (β = −0.025, p > 0.05; β = −0.116, p > 0.05; β = −0.051, p > 0.05), thus failing to support hypotheses 8a, 8c, and 8d. However, AI literacy significantly and positively moderates the relationship between effort expectancy and patients’ intention (β = 0.224, p < 0.01) and negatively moderates the relationship between social influence and patients’ intention (β = −0.131, p < 0.05), supporting hypotheses 8b and 8e, respectively.
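For readers unfamiliar with how such interaction (moderation) terms are specified, the sketch below shows an ordinary-least-squares approximation with mean-centered product terms using statsmodels. It is illustrative only: the study’s estimates were obtained with SmartPLS 4.0, and the file and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical composite scores per respondent (e.g., item means for each construct):
# pi = patients' intention; pe, ee, si, fc, hm = UTAUT2 factors;
# cl = construal level, hl = health literacy, ail = AI literacy.
df = pd.read_csv("construct_scores.csv")

# Mean-center predictors and moderators before forming the interaction terms.
for col in ["pe", "ee", "si", "fc", "hm", "cl", "hl", "ail"]:
    df[col] = df[col] - df[col].mean()

# OLS approximation of the hypothesized moderation model (H6a-H8e).
model = smf.ols(
    "pi ~ pe + ee + si + fc + hm + cl + hl + ail"
    " + pe:cl + ee:cl + si:hl + fc:hl"
    " + pe:ail + ee:ail + si:ail + fc:ail + hm:ail",
    data=df,
).fit()
print(model.summary())
```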

6. Discussion

6.1. Principal Findings

This study aims to investigate the factors influencing patients’ intention to adopt generative AI on online healthcare platforms. First, the findings suggest that performance expectancy exerts a positive influence on patients’ intention to adopt generative AI. This implies that generative AI is a promising and effective tool that can enhance health management efficiency and healthcare utility on online healthcare platforms. Second, effort expectancy does not exhibit a significant effect on patients’ intention, suggesting the difficulty that patients may face in trusting and operating the unique features and functionalities of generative AI. Third, social influence is identified as a positive determinant of patients’ intention, as individuals’ choices and inclinations are inevitably shaped by significant others, who not only impact patients’ decisions to adopt generative AI but also care about their physical well-being. Fourth, the results reveal that facilitating conditions have a significant effect on patients’ intention to adopt generative AI. This finding can be attributed to the widespread accessibility of information and knowledge and the presence of appropriate hardware and software infrastructure, which support the utilization of generative AI. Fifth, notably, while generative AI is a novel technology, its primary function is health management and healthcare service. Therefore, hedonic motivation is not the primary motivator for adoption of such technology.
Additionally, this study examines three moderators that influence the relationship between the influencing factors and patients’ intention. Firstly, the results demonstrate that the construal level plays a positive moderating role in the relationship between performance expectancy and patients’ intention. This effect is attributed to the tendency of individuals with a higher construal level to prioritize outcome-related factors that embody desirability more than those with a lower construal level do. However, no significant moderating effect of the construal level is found on the relationship between effort expectancy and patients’ intention, possibly because of evolving societal values and contemporary changes, wherein even individuals with a lower construal level may not pay adequate attention to the process aspects that could shape their assessment of objects and behaviors.
Secondly, the results of this study suggest that health literacy plays a negative moderating role in the relationship between social influence and patients’ intention while having no significant effect on the relationship between facilitating conditions and patients’ intention. This indicates that individuals with lower health literacy are more vulnerable to the influence of significant others on their inclination to adopt generative AI. Nevertheless, the pervasiveness of the Internet, the widespread use of hardware and software, and the abundance of knowledge and information online have made the impact of facilitating conditions on intention independent of patients’ health literacy.
Thirdly, the study findings reveal that AI literacy exerts a significant and positive influence on the relationship between effort expectancy and patients’ intention. This is because individuals with higher levels of AI literacy, compared with those with lower AI literacy, are better able to understand the functions and operational processes of AI, thereby strengthening the influence of perceived ease of use on their adoption intention. Additionally, AI literacy negatively affects the relationship between social influence and patients’ intention. This is attributed to patients with higher AI literacy possessing a more profound understanding of AI technologies, enabling them to independently assess the value of AI and making them less susceptible to external influences. However, no significant effect of AI literacy is detected on the relationship between performance expectancy and patients’ intention, which may be attributed to the powerful functionalities of generative AI; as a result, the influence of performance expectancy on patients’ intention to adopt generative AI remains stable regardless of their level of AI literacy. Similarly, AI literacy does not significantly moderate the relationship between facilitating conditions and patients’ intention. This may be because the facilitating conditions for adopting generative AI are sufficiently consistent across individuals with varying levels of AI literacy. Furthermore, no significant moderating effect of AI literacy is found on the relationship between hedonic motivation and patients’ intention, suggesting that patients’ perceived enjoyment in adopting generative AI is not contingent upon their level of AI literacy [61].

6.2. Theoretical Implications

This study offers three primary contributions to research on generative AI and the UTAUT2 model. Firstly, while prior research has examined factors such as social influence, novelty value, and anthropomorphism in relation to the acceptance of ChatGPT-3.5 [62], limited attention has been given to the adoption of generative AI by patients within online healthcare settings. This study addresses this gap by investigating the factors that influence patients’ intention to adopt generative AI on online healthcare platforms, thereby advancing theoretical understanding and offering insights into the key determinants shaping patient behavior in this emerging context. Secondly, this study extends the UTAUT2 model to the adoption of generative AI. Our analysis reveals that effort expectancy and hedonic motivation do not significantly affect the intention to use generative AI. This finding highlights the need for a tailored approach to understanding patients’ intention regarding generative AI on online healthcare platforms, as traditional models may not fully fit new research scenarios. Thirdly, this study extends the theoretical boundaries and explanatory power of the UTAUT2 model in healthcare by introducing moderators such as the construal level, health literacy, and AI literacy. These variables moderate the relationships between the influencing factors and patients’ intention. By integrating these moderating variables into the UTAUT2 model, we offer a more comprehensive understanding of the factors influencing patients’ intention to adopt generative AI in online healthcare scenarios.

6.3. Implications for Practice

This study has practical implications for the development and promotion of generative AI on online healthcare platforms. First, performance expectancy is positively associated with patients’ intention to adopt generative AI, but effort expectancy has no significant influence on patients’ intention. Thus, platform managers should invest in effective educational and promotional efforts to enhance patients’ understanding of the benefits of generative AI and their ability to use it. This is further supported by the finding that AI literacy positively moderates the relationship between effort expectancy and patients’ intention. Second, the findings indicate that social influence is positively associated with patients’ intention, especially among individuals with lower health literacy and AI literacy [63]. Platform managers should encourage patients to recommend the use of generative AI to others by providing rewards and should particularly encourage young patients to recommend it to older adults and to those living in remote areas with low health literacy and AI literacy [64]. Third, facilitating conditions are positively associated with patients’ intention to adopt generative AI, whereas hedonic motivation has no significant influence on patients’ intention. Therefore, platforms should enhance the compatibility of generative AI to facilitate patient use, while investment in entertainment functionalities may not be necessary. These strategies are likely to attract more patients and expand the value of generative AI on online healthcare platforms.

6.4. Limitations and Future Research

This study has several limitations that warrant consideration and provide directions for future research. First, patients may engage with generative AI on online healthcare platforms for varying purposes, such as seeking diagnostic support, managing chronic conditions, or accessing general health information. These differentiated purposes reflect heterogeneous expectations and value perceptions [65], which may systematically influence the relationships between antecedent factors and patients’ intention to adopt generative AI. However, this study does not account for such heterogeneity. Future research should examine the moderating role of patients’ usage purposes, thereby unpacking how different user goals shape the strength or direction of influencing factors. Second, this study does not account for additional constructs that may influence adoption behavior, such as price value and usage habits [66]. As generative AI technologies become more prevalent in online healthcare contexts, future research could incorporate these variables to capture patients’ evolving behavior. Furthermore, exploring other potentially relevant factors, such as novelty value, perceived risk, and privacy concerns, may offer a more comprehensive understanding of the facilitators and inhibitors shaping patients’ intention to adopt generative AI in an online health context [14,67]. Lastly, the data are collected in China without regional distinction and within a specific time frame, which may limit the generalizability of the findings across diverse cultural contexts and temporal settings. Future studies should collect data across multiple regions and time points to improve external validity and capture potential cultural, regional (rural vs. urban), and temporal variations in patients’ perceptions of generative AI on online healthcare platforms [68].

7. Conclusions

This study employs an extended model of the unified theory of acceptance and use of technology 2 (UTAUT2) to investigate the factors that influence patients’ intention to adopt generative AI on online healthcare platforms as well as the moderating effects of the construal level, health literacy, and AI literacy. The questionnaire-based data analysis revealed that performance expectancy, social influence, and facilitating conditions are positively associated with patients’ intention, while effort expectancy and hedonic motivation do not significantly affect patients’ intention. Moreover, the inclusion of moderators reveals that the construal level strengthens the relationship between performance expectancy and patients’ intention, while health literacy weakens the relationship between social influence and patients’ intention. Additionally, AI literacy strengthens the relationship between effort expectancy and patients’ intention but weakens the relationship between social influence and patients’ intention. These findings contribute to a more nuanced understanding of the factors that drive patients’ intention to adopt generative AI on online healthcare platforms and highlight the importance of considering boundary conditions, which may strengthen or weaken the effects of these factors on patients’ intention.

Author Contributions

Conceptualization, Y.L.; methodology, Y.L.; software, S.Y.; validation, T.S.; formal analysis, Y.L. and S.Y.; investigation, Y.L.; resources, X.C.; data curation, T.S.; writing—original draft preparation, Y.L.; writing—review and editing, T.S. and X.C.; visualization, S.Y.; supervision, X.C.; project administration, T.S.; funding acquisition, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Social Science Foundation of China (Grant No. 21BGL223).

Institutional Review Board Statement

This study was approved by the Institutional Review Board of the Science and Technology Ethics Committee at Nanjing University (Approval No. NJUSOC202507002).

Informed Consent Statement

Informed consent was obtained from all participants.

Data Availability Statement

The data are not publicly available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
PE: Performance expectancy
EE: Effort expectancy
FC: Facilitating conditions
HM: Hedonic motivation
SI: Social influence
PI: Patients’ intention
CL: Construal level
HL: Health literacy
AIL: AI literacy

Appendix A. Patients’ Intention to Adopt Generative AI on Online Healthcare Platforms

This survey aims to investigate the factors influencing the adoption intention of generative AI on online healthcare platforms. All information you provide will be kept completely anonymous and used exclusively for scientific research and academic publication. If you agree to participate in this survey, please proceed to complete the following questionnaire. There are no right or wrong answers; please respond sincerely and answer all questions carefully and consecutively. Thank you very much for your support and cooperation.
  • Q1. Do you know generative AI on online healthcare platforms? [Single Choice]
  • ○ Yes
  • ○ No

Appendix A.1. Performance Expectancy

  • Q2. I find generative AI on online healthcare platforms useful in my daily life. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q3. Using generative AI on online healthcare platforms helps me handle health tasks more quickly. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q4. Using generative AI on online healthcare platforms helps me manage my health and medical consultations more efficiently. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.2. Effort Expectancy

  • Q5. Learning how to use generative AI on online healthcare platforms is easy for me. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q6. My interactions with generative AI on online healthcare platforms are clear and understandable. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q7. I find generative AI on healthcare platforms easy to use. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q8. It is easy for me to become skillful at using generative AI on online healthcare platforms. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.3. Social Influence

  • Q9. People who are important to me think that I should use generative AI on online healthcare platforms. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q10. People who influence my behavior think that I should use generative AI for healthcare. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q11. People whose opinions that I value prefer that I use generative AI on online healthcare platforms. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.4. Facilitating Condition

  • Q12. I have the resources necessary (such as internet access or devices) to use generative AI on online healthcare platforms. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q13. I have the knowledge necessary (such as AI or health knowledge) to use generative AI on online healthcare platforms. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q14. Generative AI on online healthcare platforms is compatible with other technologies I use (such as certain internet applications). [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q15. I can get help from others when I have difficulties using generative AI on online healthcare platforms. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.5. Hedonic Motivation

  • Q16. Using generative AI on online healthcare platforms is fun. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q17. Using generative AI on online healthcare platforms is enjoyable. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q18. Using generative AI on online healthcare platforms is very entertaining. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.6. Attention Check

  • Q19. Please choose 2. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.7. Health Literacy

  • Q20. When reading health-related instructions or brochures from hospitals, pharmacies, or community centers, you find words or phrases that you do not understand. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q21. When reading health-related instructions or brochures from hospitals, pharmacies, or community centers, you find the content to be too difficult. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q22. When reading health-related instructions or brochures from hospitals, pharmacies, or community centers, you find that it takes a long time for you to understand them. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q23. When reading health-related instructions or brochures from hospitals, pharmacies, or community centers, you find that you need someone to help you understand them. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q24. When you are sick or diagnosed, you find it difficult to gather information from various sources. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q25. When you are sick or diagnosed, you find it difficult to apply the information you receive to your daily life. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q26. When you are sick or diagnosed, you find it difficult to consider whether the information is relevant to your situation. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q27. When you are sick or diagnosed, you find it difficult to consider the effectiveness and reliability of the information. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q28. When you are sick or diagnosed, you find it difficult to verify the accuracy and reliability of the information. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.8. Construal Level

Please select the interpretation that best fits the question. [Single Choice]
Q29. Making a list: a. Getting organized; b. Writing things down
Q30. Reading: a. Following lines of print; b. Gaining knowledge
Q31. Joining the army: a. Helping the nation’s defense; b. Signing up
Q32. Washing clothes: a. Removing odors from clothes; b. Putting clothes into the machine
Q33. Picking an apple: a. Getting something to eat; b. Pulling an apple off a branch
Q34. Chopping down a tree: a. Wielding an axe; b. Getting firewood
Q35. Measuring a room for carpeting: a. Getting ready to remodel; b. Using a yardstick
Q36. Cleaning the house: a. Showing one’s cleanliness; b. Vacuuming the floor
Q37. Painting a room: a. Applying brush strokes; b. Making the room look fresh
Q38. Paying the rent: a. Maintaining a place to live; b. Writing a check
Q39. Caring for houseplants: a. Watering plants; b. Making the room look nice
Q40. Locking a door: a. Putting a key in the lock; b. Securing the house
Q41. Voting: a. Influencing the election; b. Marking a ballot
Q42. Climbing a tree: a. Getting a good view; b. Holding onto branches
Q43. Filling out a personality test: a. Answering questions; b. Revealing what you are like
Q44. Toothbrushing: a. Preventing tooth decay; b. Moving a brush around in one’s mouth
Q45. Taking a test: a. Answering questions; b. Showing one’s knowledge
Q46. Greeting someone: a. Saying hello; b. Showing friendliness
Q47. Resisting temptation: a. Saying “no”; b. Showing moral courage
Q48. Eating: a. Getting nutrition; b. Chewing and swallowing
Q49. Growing a garden: a. Planting seeds; b. Getting fresh vegetables
Q50. Traveling by car: a. Following a map; b. Seeing countryside
Q51. Having a cavity filled: a. Protecting your teeth; b. Going to the dentist
Q52. Talking to a child: a. Teaching a child something; b. Using simple words
Q53. Pushing a doorbell: a. Moving a finger; b. Seeing if someone is home
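Note on scoring: the items above are the Behavior Identification Form of Vallacher and Wegner [32], conventionally scored as the number of abstract ("why") identifications chosen out of 25, with higher totals indicating a higher construal level. The sketch below illustrates that convention only; the answer key shown is a hypothetical fragment, and the full key should be taken from the original instrument rather than from this example.

    # Illustrative scoring of the Behavior Identification Form (Q29-Q53).
    # ASSUMPTION: one point per abstract ("why") identification, following
    # Vallacher and Wegner [32]; the key below is a hypothetical fragment.
    ABSTRACT_OPTION = {
        29: "a",  # "Getting organized" rather than "Writing things down"
        30: "b",  # "Gaining knowledge" rather than "Following lines of print"
        # ...items 31-53 follow the published key...
    }

    def construal_level_score(choices, key=ABSTRACT_OPTION):
        """choices: dict mapping item number to the selected option, 'a' or 'b'."""
        return sum(1 for item, option in choices.items() if key.get(item) == option)

    print(construal_level_score({29: "a", 30: "a"}))  # -> 1 abstract identification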

Appendix A.9. Attention Check

  • Q54. Please choose 3. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.10. AI Literacy

  • Recognizing AI:
  • Q55. Which of the following is NOT powered by AI? [Single Choice]
  • (a) Self-driving cars
  • (b) Google’s search algorithm
  • (c) A basic calculator
  • (d) Chatbots
  • Understanding Intelligence:
  • Q56. Which form of intelligence involves emotional understanding and social skills? [Single Choice]
  • (a) Machine Intelligence
  • (b) Human Intelligence
  • (c) Animal Intelligence
  • (d) Artificial General Intelligence
  • Interdisciplinarity:
  • Q57. Which of the following fields contributes to the development of artificial intelligence? [Single Choice]
  • (a) Computer science
  • (b) Mathematics
  • (c) Psychology
  • (d) All of the above
  • General vs. Narrow AI:
  • Q58. What is the term for AI systems that can perform any intellectual task that a human can? [Single Choice]
  • (a) Narrow AI
  • (b) General AI
  • (c) Weak AI
  • (d) Strong AI
  • AI’s Strengths and Weaknesses:
  • Q59. In which area does AI typically excel? [Single Choice]
  • (a) Emotional understanding
  • (b) Pattern recognition
  • (c) Moral reasoning
  • (d) Creativity
  • Imagine Future AI:
  • Q60. Which of the following is NOT a likely future application of AI? [Single Choice]
  • (a) Personalized healthcare
  • (b) Emotional robots
  • (c) Time travel
  • (d) Sustainable energy management
  • Representations:
  • Q61. What is a common form of knowledge representation in AI? [Single Choice]
  • (a) Neural networks
  • (b) Waterfall model
  • (c) Agile methodology
  • (d) SWOT analysis
  • Decision-Making:
  • Q62. Which algorithmic approach is commonly used for decision-making in AI? [Single Choice]
  • (a) Dijkstra’s algorithm
  • (b) Depth-first search
  • (c) Decision trees
  • (d) Fourier Transform
  • Machine Learning Steps:
  • Q63. What is the first step in a typical machine learning process? [Single Choice]
  • (a) Data collection
  • (b) Model selection
  • (c) Prediction
  • (d) Model evaluation
  • Human Role in AI:
  • Q64. Who is primarily responsible for the ethical considerations of an AI system? [Single Choice]
  • (a) The AI system itself
  • (b) Data providers
  • (c) Human developers
  • (d) End-users
  • Data Literacy:
  • Q65. Which of the following is an example of metadata? [Single Choice]
  • (a) A spreadsheet of numbers
  • (b) Column headers in a table
  • (c) A chart visualization
  • (d) Raw sensor data
  • Learning from Data:
  • Q66. How do supervised machine learning algorithms learn? [Single Choice]
  • (a) From labeled data
  • (b) From rewards and punishments
  • (c) By observing human behavior
  • (d) From intrinsic motivation
  • Critically Interpreting Data:
  • Q67. Why can data not be taken at face value? [Single Choice]
  • (a) It is always inaccurate
  • (b) It requires interpretation
  • (c) It is self-explanatory
  • (d) It is always biased
  • Action and Reaction:
  • Q68. How can an AI system interact with the physical world? [Single Choice]
  • (a) By planning movements
  • (b) By reacting to sensor inputs
  • (c) By actuating motors
  • (d) All of the above
  • Sensors:
  • Q69. Which of the following sensors allow an AI system to perceive the world? [Single Choice]
  • (a) Cameras
  • (b) Microphones
  • (c) Thermometers
  • (d) All of the above
  • Ethics:
  • Q70. Which is a key ethical issue surrounding AI? [Single Choice]
  • (a) Algorithmic efficiency
  • (b) CPU usage
  • (c) Privacy
  • (d) Code readability
  • Programmability:
  • Q71. Which statement best describes the programmability of AI systems? [Single Choice]
  • (a) They cannot be programmed by humans
  • (b) They program themselves
  • (c) They are programmed using data
  • (d) They are programmed by computer code
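Note on scoring: the AI literacy items are objective single-choice questions, so the construct can be scored as the number of correct answers. The key in the sketch below is inferred from the item wording and should be treated as an assumption; two items with more than one defensible reading are left to the authors' own key.

    # Illustrative scoring of the AI literacy quiz (Q55-Q71) as a count of correct
    # answers. ASSUMPTION: the key is read off the item wording, not taken from
    # the authors' materials; Q58 and Q71 are omitted as ambiguous.
    ANSWER_KEY = {
        55: "c", 56: "b", 57: "d", 59: "b", 60: "c", 61: "a", 62: "c", 63: "a",
        64: "c", 65: "b", 66: "a", 67: "b", 68: "d", 69: "d", 70: "c",
    }

    def ai_literacy_score(responses, key=ANSWER_KEY):
        """responses: dict mapping question number to the chosen option letter."""
        return sum(1 for q, choice in responses.items() if key.get(q) == choice)

    print(ai_literacy_score({55: "c", 59: "a"}))  # -> 1 correct of the 2 answered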

Appendix A.11. Adoption Intention

  • Q72. I intend to use generative AI on online healthcare platforms when needed in the future. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q73. I expect that I will use generative AI on online healthcare platforms when needed in the future. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7
  • Q74. I plan to use generative AI on online healthcare platforms in the future when needed. [Rating Scale]
  • [1=Strongly Disagree, 7=Strongly Agree]
  • ○ 1
  • ○ 2
  • ○ 3
  • ○ 4
  • ○ 5
  • ○ 6
  • ○ 7

Appendix A.12. Demographic Information

  • Q75. Please select your gender. [Single Choice]
  • ○ Male
  • ○ Female
  • Q76. Please select your age group. [Single Choice]
  • ○ 18–25 years old
  • ○ 26–30 years old
  • ○ 31–40 years old
  • ○ 41–50 years old
  • ○ 51–60 years old
  • ○ 60 years and above
  • Q77. Please select your highest level of education. [Single Choice]
  • ○ Below bachelor’s degree
  • ○ Bachelor’s degree
  • ○ Postgraduate degree
  • Q78. Please select your annual income. [Single Choice]
  • ○ Less than USD 412
  • ○ USD 413–687
  • ○ USD 688–962
  • ○ USD 963–1237
  • ○ Above USD 1237

References

  1. Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef]
  2. Cotton, D.R.E.; Cotton, P.A.; Shipway, J.R. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 2024, 61, 228–239. [Google Scholar] [CrossRef]
  3. Hou, T.; Li, M.; Tan, Y.; Zhao, H. Physician Adoption of AI Assistant. MSOM-Manuf. Serv. Oper. Manag. 2024, 26, 1639–1655. [Google Scholar] [CrossRef]
  4. Jahanshahi, H.; Kazmi, S.; Cevik, M. Auto Response Generation in Online Medical Chat Services. J. Healthc. Inform. Res. 2022, 6, 344–374. [Google Scholar] [CrossRef] [PubMed]
  5. Meskó, B.; Topol, E.J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digit. Med. 2023, 6, 120–126. [Google Scholar] [CrossRef] [PubMed]
  6. Rouhi, A.D.; Ghanem, Y.K.; Yolchieva, L.; Saleh, Z.; Joshi, H.; Moccia, M.C.; Suarez-Pierre, A.; Han, J.J. Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study. Cardiol. Ther. 2024, 13, 137–147. [Google Scholar] [CrossRef]
  7. Almagazzachi, A.; Mustafa, A.; Eighaei Sedeh, A.; Vazquez Gonzalez, A.E.; Polianovskaia, A.; Abood, M.; Abdelrahman, A.; Muyolema Arce, V.; Acob, T.; Saleem, B. Generative Artificial Intelligence in Patient Education: ChatGPT Takes on Hypertension Questions. Curēus 2024, 16, e53441. [Google Scholar] [CrossRef]
  8. Hirosawa, T.; Shimizu, T. The potential, limitations, and future of diagnostics enhanced by generative artificial intelligence. Diagnosis 2024, 11, 446–449. [Google Scholar] [CrossRef]
  9. Sonmez, S.C.; Sevgi, M.; Antaki, F.; Huemer, J.; Keane, P.A. Generative artificial intelligence in ophthalmology: Current innovations, future applications and challenges. Br. J. Ophthalmol. 2024, 108, 1335–1340. [Google Scholar] [CrossRef]
  10. Patel, S.B.; Lam, K. ChatGPT: The future of discharge summaries? Lancet Digital Health 2023, 5, e107–e108. [Google Scholar] [CrossRef]
  11. Marey, A.; Saad, A.M.; Killeen, B.D.; Gomez, C.; Tregubova, M.; Unberath, M.; Umair, M. Generative Artificial Intelligence: Enhancing Patient Education in Cardiovascular Imaging. BJR Open 2024, 6, tzae018. [Google Scholar] [CrossRef]
  12. Daungsupawong, H.; Wiwanitkit, V. Role of a generative AI model in enhancing clinical decision-making in nursing. J. Adv. Nurs. 2024, 80, 4750–4751. [Google Scholar] [CrossRef]
  13. Albaroudi, E.; Mansouri, T.; Alameer, A. The Intersection of Generative AI and Healthcare: Addressing Challenges to Enhance Patient Care. In Proceedings of the 2024 Seventh International Women in Data Science Conference at Prince Sultan University (WiDS PSU), Riyadh, Saudi Arabia, 3–4 March 2024; pp. 134–140. [Google Scholar]
  14. Zhang, X.; Guo, X.; Guo, F.; Lai, K.-H. Nonlinearities in personalization-privacy paradox in mHealth adoption: The mediating role of perceived usefulness and attitude. Technol. Health Care 2014, 22, 515–529. [Google Scholar] [CrossRef]
  15. Cerasa, A.; Crowe, B. Generative artificial intelligence in neurology: Opportunities and risks. Eur. J. Neurol. 2024, 31, e16232. [Google Scholar] [CrossRef]
  16. Cao, Q.; Niu, X. Integrating context-awareness and UTAUT to explain Alipay user adoption. Int. J. Ind. Ergon. 2019, 69, 9–13. [Google Scholar] [CrossRef]
  17. Cimperman, M.; Makovec Brenčič, M.; Trkman, P. Analyzing older users’ home telehealth services acceptance behavior—Applying an Extended UTAUT model. Int. J. Med. Inform. 2016, 90, 22–31. [Google Scholar] [CrossRef] [PubMed]
  18. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  19. Ho, C.K.Y.; Ke, W.; Liu, H.; Chau, P.Y.K. Separate versus joint evaluation: The roles of evaluation mode and construal level in technology adoption. MIS Q. 2020, 44, 725–746. [Google Scholar] [CrossRef]
  20. Connors, S.; Khamitov, M.; Thomson, M.; Perkins, A. They’re Just Not That into You: How to Leverage Existing Consumer–Brand Relationships Through Social Psychological Distance. J. Mark. 2021, 85, 92–108. [Google Scholar] [CrossRef]
  21. Mackert, M.; Mabry-Flynn, A.; Champlin, S.; Donovan, E.E.; Pounders, K. Health literacy and health information technology adoption: The potential for a new digital divide. J. Med. Internet Res. 2016, 18, e264. [Google Scholar] [CrossRef]
  22. Virlée, J.; van Riel, A.C.R.; Hammedi, W. Health literacy and its effects on well-being: How vulnerable healthcare service users integrate online resources. J. Serv. Mark. 2020, 34, 697–715. [Google Scholar] [CrossRef]
  23. Tully, S.; Longoni, C.; Appel, G. EXPRESS: Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity. J. Mark. 2025, 89, 1–20. [Google Scholar] [CrossRef]
  24. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  25. Wang, L.; Wu, T.; Guo, X.; Zhang, X.; Li, Y.; Wang, W. Exploring mHealth monitoring service acceptance from a service characteristics perspective. Electron. Commer. Res. Appl. 2018, 30, 159–168. [Google Scholar] [CrossRef]
  26. Trope, Y.; Liberman, N. Temporal construal. Psychol. Rev. 2003, 110, 403–421. [Google Scholar] [CrossRef]
  27. Ho, C.K.Y.; Ke, W.L.; Liu, H.F. Choice decision of e-learning system: Implications from construal level theory. Inf. Manag. 2015, 52, 160–169. [Google Scholar] [CrossRef]
  28. Kim, H.; John, D.R. Consumer response to brand extensions: Construal level as a moderator of the importance of perceived fit. J. Consum. Psychol. 2008, 18, 116–126. [Google Scholar] [CrossRef]
  29. van Lent, L.G.G.; Sungur, H.; Kunneman, F.A.; van de Velde, B.; Das, E. Too Far to Care? Measuring Public Attention and Fear for Ebola Using Twitter. J. Med. Internet Res. 2017, 19, e193. [Google Scholar] [CrossRef]
  30. Liberman, N.; Trope, Y. The role of feasibility and desirability considerations in near and distant future decisions: A test of temporal construal theory. J. Personal. Soc. Psychol. 1998, 75, 5–18. [Google Scholar] [CrossRef]
  31. Huang, N.; Burtch, G.; Hong, Y.; Polman, E. Effects of multiple psychological distances on construal and consumer evaluation: A field study of online reviews. J. Consum. Psychol. 2016, 26, 474–482. [Google Scholar] [CrossRef]
  32. Vallacher, R.R.; Wegner, D.M. Levels Of Personal Agency—Individual Variation in Action Identification. J. Personal. Soc. Psychol. 1989, 57, 660–671. [Google Scholar] [CrossRef]
  33. Yang, K.; Hu, Y.; Qi, H. Digital Health Literacy: Bibliometric Analysis. J. Med. Internet Res. 2022, 24, e35816. [Google Scholar] [CrossRef] [PubMed]
  34. Zhou, J.J.; Fan, T.T. Understanding the Factors Influencing Patient E-Health Literacy in Online Health Communities (OHCs): A Social Cognitive Theory Perspective. Int. J. Environ. Res. Public Health 2019, 16, 2455. [Google Scholar] [CrossRef] [PubMed]
  35. Berkman, N.D.; Sheridan, S.L.; Donahue, K.E.; Halpern, D.J.; Crotty, K. Low Health Literacy and Health Outcomes: An Updated Systematic Review. Ann. Intern. Med. 2011, 155, 97–107. [Google Scholar] [CrossRef]
  36. Ngoh, L.N. Health literacy: A barrier to pharmacist–patient communication and medication adherence. J. Am. Pharm. Assoc. 2009, 49, e132. [Google Scholar] [CrossRef]
  37. Chen, S.; Guo, X.; Wu, T.; Ju, X. Exploring the influence of doctor–patient social ties and knowledge ties on patient selection. Internet Res. 2022, 32, 219–240. [Google Scholar] [CrossRef]
  38. Kong, S.-C.; Cheung, W.M.-Y.; Zhang, G. Evaluating an Artificial Intelligence Literacy Programme for Developing University Students’ Conceptual Understanding, Literacy, Empowerment and Ethical Awareness. Educ. Technol. Soc. 2023, 26, 16–30. [Google Scholar]
  39. Boscardin, C.K.; Gin, B.; Golde, P.B.; Hauer, K.E. ChatGPT and Generative Artificial Intelligence for Medical Education: Potential Impact and Opportunity. Acad. Med. 2024, 99, 22–27. [Google Scholar] [CrossRef]
  40. Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif. Intell. Med. 2024, 151, 102861. [Google Scholar] [CrossRef]
  41. Celik, I. Exploring the Determinants of Artificial Intelligence (AI) Literacy: Digital Divide, Computational Thinking, Cognitive Absorption. Telemat. Inform. 2023, 83, 102026. [Google Scholar] [CrossRef]
  42. Kabakus, A.K.; Bahcekapili, E.; Ayaz, A. The effect of digital literacy on technology acceptance: An evaluation on administrative staff in higher education. J. Inf. Sci. 2025, 51, 930–941. [Google Scholar] [CrossRef]
  43. Schiavo, G.; Businaro, S.; Zancanaro, M. Comprehension, apprehension, and acceptance: Understanding the influence of literacy and anxiety on acceptance of artificial Intelligence. Technol. Soc. 2024, 77, 102537. [Google Scholar] [CrossRef]
  44. Dwivedi, Y.K.; Shareef, M.A.; Simintiras, A.C.; Lal, B.; Weerakkody, V. A generalised adoption model for services: A cross-country comparison of mobile health (m-health). Gov. Inf. Q. 2016, 33, 174–187. [Google Scholar] [CrossRef]
  45. Zhang, X.; Han, X.; Dang, Y.; Meng, F.; Guo, X.; Lin, J. User acceptance of mobile health services from users’ perspectives: The role of self-efficacy and response-efficacy in technology acceptance. Inform. Health Soc. Care 2017, 42, 194–206. [Google Scholar] [CrossRef] [PubMed]
  46. Luo, P.; Ma, X.; Zhang, X.; Liu, J.; He, H. How to make money with credit information? Information processing on online accommodation-sharing platforms. Tour. Manag. 2021, 87, 104384. [Google Scholar] [CrossRef]
  47. Yang, Y.; Zhu, X.; Song, R.; Zhang, X.; Guo, F. Not just for the money? An examination of the motives behind physicians’ sharing of paid health information. J. Inf. Sci. 2023, 49, 145–163. [Google Scholar] [CrossRef]
  48. Kuan, K.K.Y.; Zhong, Y.; Chau, P.Y.K. Informational and Normative Social Influence in Group-Buying: Evidence from Self-Reported and EEG Data. J. Manag. Inf. Syst. 2014, 30, 151–178. [Google Scholar] [CrossRef]
  49. Tan, H.; Zhang, X.; Yang, Y. Satisfaction or gratitude? Exploring the disparate effects of physicians’ knowledge sharing on patients’ service evaluation in online medical consultations. Inf. Syst. J. 2023, 33, 1186–1211. [Google Scholar] [CrossRef]
  50. Trope, Y.; Liberman, N. Temporal construal and time-dependent changes in preference. J. Personal. Soc. Psychol. 2000, 79, 876–889. [Google Scholar] [CrossRef]
  51. Goodman, J.K.; Malkoc, S.A. Choosing Here and Now versus There and Later: The Moderating Role of Psychological Distance on Assortment Size Preferences. J. Consum. Res. 2012, 39, 751–768. [Google Scholar] [CrossRef]
  52. Guo, F.; Zhou, A.; Zhang, X.; Xu, X.; Liu, X. Fighting rumors to fight COVID-19: Investigating rumor belief and sharing on social media during the pandemic. Comput. Human Behav. 2023, 139, 107521. [Google Scholar] [CrossRef] [PubMed]
  53. Zhang, X.; Xie, K.; Gu, B.; Guo, X. Engaging physicians with introductory incentives: The role of online and offline references. MIS Q. 2025; Forthcoming. [Google Scholar] [CrossRef]
  54. Heijmans, M.; Waverijn, G.; Rademakers, J.; van der Vaart, R.; Rijken, M. Functional, communicative and critical health literacy of chronic disease patients and their importance for self-management. Patient Educ. Couns. 2014, 98, 41–48. [Google Scholar] [CrossRef] [PubMed]
  55. Rosen, C.C.; Koopman, J.; Gabriel, A.S.; Johnson, R.E. Who Strikes Back? A Daily Investigation of When and Why Incivility Begets Incivility. J. Appl. Psychol. 2016, 101, 1620–1634. [Google Scholar] [CrossRef]
  56. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  57. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  58. Nunnally, J.C. Psychometric Theory; McGraw-Hill: New York, NY, USA, 1967. [Google Scholar]
  59. Harman, H.H. Modern Factor Analysis; University of Chicago Press: Chicago, IL, USA, 1960. [Google Scholar]
  60. Chen, S.; Lai, K.-h.; Guo, X.; Zhang, X. The influence of digital health technology on the allocation of regional medical resources in China. Health Policy Technol. 2025, 14, 101013. [Google Scholar] [CrossRef]
  61. Zhang, X.; Yin, C.; Guo, X.; Lai, K.-H. The Role of Previous Affective Responses in the Continuance of mHealth Monitoring Services: A Longitudinal Investigation. IEEE Trans. Eng. Manag. 2024, 71, 6982–6994. [Google Scholar] [CrossRef]
  62. Ma, X.; Huo, Y. Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technol. Soc. 2023, 75, 102362. [Google Scholar] [CrossRef]
  63. Zhou, C.; Li, K.; Zhang, X. Why do I take deviant disclosure behavior on internet platforms? An explanation based on the neutralization theory. Inf. Process. Manag. 2022, 59, 102785. [Google Scholar] [CrossRef]
  64. Zhang, X.; Lai, K.-h.; Guo, X. Promoting China’s mHealth market: A policy perspective. Health Policy Technol. 2017, 6, 383–388. [Google Scholar] [CrossRef]
  65. Zhang, X.; Guo, X.; Lai, K.-h.; Yi, W. How does online interactional unfairness matter for patient-doctor relationship quality in online health consultation? The contingencies of professional seniority and disease severity. Eur. J. Inf. Syst. 2019, 28, 336–354. [Google Scholar] [CrossRef]
  66. Zhang, X.; Guo, X.; Lai, K.-h.; Yin, C.; Meng, F. From offline healthcare to online health services: The role of offline healthcare satisfaction and habits. J. Electron. Commer. Res. 2017, 18, 138–154. [Google Scholar]
  67. Zhang, X.; Guo, X.; Wu, Y.; Lai, K.-h.; Vogel, D. Exploring the inhibitors of online health service use intention: A status quo bias perspective. Inf. Manag. 2017, 54, 987–997. [Google Scholar] [CrossRef]
  68. Zhang, X.; Pu, J.; Lu, Y.; Guo, F. Team Makes You Better: Evidence from Online Medical Consultation Platforms. Inf. Syst. Res. 2025; ahead of print. [Google Scholar] [CrossRef]
Figure 1. Research model.
Table 1. Demographic characteristics.
Gender: Male, 122 (40%); Female, 183 (60%)
Age: 18–25, 104 (34.1%); 26–30, 124 (40.7%); 31–40, 57 (18.7%); 41–50, 14 (4.6%); 51–60, 6 (2%); above 60, 0 (0%)
Education: Less than undergraduate, 36 (11.8%); Undergraduate, 232 (76.1%); Postgraduate or above, 37 (12.1%)
Income: Less than USD 412, 83 (27.2%); USD 413–687, 46 (15.1%); USD 688–962, 51 (16.7%); USD 963–1237, 32 (10.5%); above USD 1237, 93 (30.5%)
Note: values are frequencies with percentages in parentheses.
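As a quick consistency check, each demographic breakdown sums to the same total (for example, 122 + 183 = 305 for gender and 83 + 46 + 51 + 32 + 93 = 305 for income), so the percentages are computed over the full sample of 305 respondents.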
Table 2. Reliability and validity.
Construct (Cronbach’s alpha, CR, AVE)            Item: outer loading
Effort expectancy (0.812, 0.818, 0.639)          EE1: 0.825; EE2: 0.783; EE3: 0.814; EE4: 0.774
Facilitating conditions (0.708, 0.726, 0.630)    FC1: 0.798; FC2: 0.735; FC3: 0.844
Health literacy (0.924, 0.997, 0.607)            HL1: 0.700; HL2: 0.884; HL3: 0.826; HL4: 0.787; HL5: 0.754; HL6: 0.748; HL7: 0.739; HL8: 0.773; HL9: 0.787
Hedonic motivation (0.841, 0.842, 0.759)         HM1: 0.870; HM2: 0.859; HM3: 0.884
Performance expectancy (0.722, 0.728, 0.641)     PE1: 0.828; PE2: 0.784; PE3: 0.790
Social influence (0.865, 0.866, 0.787)           SI1: 0.885; SI2: 0.876; SI3: 0.901
Patients’ intention (0.756, 0.775, 0.675)        PI1: 0.887; PI2: 0.720; PI3: 0.849
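For readers verifying Table 2, the AVE follows directly from the outer loadings as the mean squared loading, and composite reliability is conventionally computed as ρ_c (cf. Fornell and Larcker [57]); whether the CR column reports ρ_c or another composite-reliability estimator is not stated here, so only the AVE is checked below.

\[
\mathrm{AVE}=\frac{1}{n}\sum_{i=1}^{n}\lambda_i^{2},
\qquad
\rho_{c}=\frac{\left(\sum_{i}\lambda_i\right)^{2}}{\left(\sum_{i}\lambda_i\right)^{2}+\sum_{i}\left(1-\lambda_i^{2}\right)}.
\]

Worked check for effort expectancy: (0.825² + 0.783² + 0.814² + 0.774²) / 4 ≈ 2.555 / 4 ≈ 0.639, matching the reported AVE.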
Table 3. Correlation matrix.
        EE       FC       HL       HM       PE       SI       PI
EE      0.799
FC      0.528    0.794
HL      −0.244   −0.172   0.779
HM      0.553    0.397    −0.199   0.871
PE      0.586    0.509    −0.171   0.514    0.801
SI      0.475    0.320    −0.228   0.443    0.514    0.887
PI      0.506    0.486    −0.133   0.491    0.603    0.497    0.822
Note: The diagonal is the square root of the AVE. All abbreviations are defined in Table 2.
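The diagonal entries in Table 3 can be reproduced from Table 2: the Fornell–Larcker criterion [57] places the square root of each construct's AVE on the diagonal and requires it to exceed every correlation involving that construct. For example,

\[
\sqrt{\mathrm{AVE}_{\mathrm{EE}}}=\sqrt{0.639}\approx 0.799,
\qquad
\sqrt{\mathrm{AVE}_{\mathrm{PI}}}=\sqrt{0.675}\approx 0.822,
\]

and in Table 3 both values exceed the largest correlation in their rows and columns (0.586 for EE and 0.603 for PI).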
Table 4. Heterotrait–Monotrait ratio (HTMT).
        EE       FC       HL       HM       PE       SI       PI
EE
FC      0.699
HL      0.261    0.216
HM      0.667    0.509    0.214
PE      0.756    0.708    0.205    0.652
SI      0.562    0.418    0.246    0.519    0.645
PI      0.639    0.652    0.187    0.608    0.810    0.615
Note: All abbreviations are defined in Table 2.
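The values in Table 4 follow the HTMT definition of Henseler et al. [56]: for constructs j and k with K_j and K_k indicators, HTMT is the mean heterotrait–heteromethod correlation divided by the geometric mean of the average monotrait–heteromethod correlations,

\[
\mathrm{HTMT}_{jk}=
\frac{\dfrac{1}{K_j K_k}\sum_{g=1}^{K_j}\sum_{h=1}^{K_k} r_{jg,kh}}
{\sqrt{\dfrac{2}{K_j\left(K_j-1\right)}\sum_{g<h} r_{jg,jh}\cdot\dfrac{2}{K_k\left(K_k-1\right)}\sum_{g<h} r_{kg,kh}}},
\]

with 0.85 (conservative) or 0.90 as the usual cutoff. The largest value in Table 4 is 0.810 (PE–PI), below the conservative 0.85 threshold.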
Table 5. Results of main effects.
Path            Path Coefficient   Standard Deviation   T Statistic   Supported
EE → PI         0.036              0.079                0.455         No
FC → PI         0.170 *            0.071                2.393         Yes
HM → PI         0.140              0.077                1.815         No
PE → PI         0.307 ***          0.075                4.086         Yes
SI → PI         0.194 **           0.069                2.798         Yes
Age → PI        −0.069             0.051                1.347         No
Education → PI  0.041              0.044                0.925         No
Gender → PI     0.037              0.089                0.415         No
Income → PI     0.126 *            0.054                2.328         Yes
* p < 0.05, ** p < 0.01, *** p < 0.001.
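The t statistics in Table 5 are the bootstrap ratios of each path coefficient to its standard deviation, compared against the two-tailed normal cutoffs of roughly 1.96, 2.58, and 3.29 for p < 0.05, 0.01, and 0.001. A minimal check in Python (the table reports rounded coefficients and standard deviations, so the ratios match only approximately):

    # Recomputing two t statistics from Table 5 as coefficient / bootstrap SD.
    for path, coef, sd in [("FC -> PI", 0.170, 0.071), ("PE -> PI", 0.307, 0.075)]:
        print(path, round(coef / sd, 2))  # -> 2.39 and 4.09, close to 2.393 and 4.086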
Table 6. Results of moderating effects.
Path             Path Coefficient   Standard Deviation   T Statistic   Supported
CL × PE → PI     −0.159 **          0.079                2.012         Yes
CL × EE → PI     0.058              0.065                0.889         No
HL × FC → PI     0.022              0.061                0.362         No
HL × SI → PI     −0.127 **          0.062                2.059         Yes
AIL × EE → PI    0.224 ***          0.080                2.788         Yes
AIL × PE → PI    −0.025             0.080                0.309         No
AIL × SI → PI    −0.131 **          0.061                2.140         Yes
AIL × HM → PI    −0.051             0.084                0.609         No
AIL × FC → PI    −0.116             0.071                1.631         No
** p < 0.05, *** p < 0.01. CL = construal level; AIL = AI literacy.
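Interaction terms such as AIL × EE in Table 6 can be formed, for example, under a two-stage approach: obtain latent variable scores, standardize them, and multiply moderator and predictor before re-estimating the structural model. Whether the authors used the two-stage, orthogonalizing, or product-indicator option is not reported here, so the sketch below is purely illustrative.

    # Illustrative two-stage interaction term (e.g., AI literacy x effort
    # expectancy); the latent scores here are made-up numbers for demonstration.
    import statistics

    def standardize(xs):
        mean, sd = statistics.mean(xs), statistics.stdev(xs)
        return [(x - mean) / sd for x in xs]

    def interaction(moderator_scores, predictor_scores):
        return [m * p for m, p in zip(standardize(moderator_scores),
                                      standardize(predictor_scores))]

    print(interaction([3, 5, 6, 2], [4, 6, 5, 3]))  # element-wise products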
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
