Article

Model of Acceptance of Artificial Intelligence Devices in Higher Education

1 Postgraduate Program of the Faculty of Systems Engineering and Informatics, Universidad Nacional Mayor de San Marcos, Lima 15081, Peru
2 Mathematical Sciences Laboratory (LCMAT), State University of Norte Fluminense Darcy Ribeiro (UENF), Campos dos Goytacazes 28013-602, RJ, Brazil
* Author to whom correspondence should be addressed.
Computers 2026, 15(1), 46; https://doi.org/10.3390/computers15010046
Submission received: 26 December 2025 / Revised: 7 January 2026 / Accepted: 9 January 2026 / Published: 12 January 2026
(This article belongs to the Section Human–Computer Interactions)

Abstract

Artificial intelligence (AI) has become a highly relevant tool in higher education. However, its acceptance by university students depends not only on technical or functional characteristics, but also on cognitive, contextual, and emotional factors. This study proposes and validates a model of acceptance of the use of AI devices (MIDA) in the university context. The model considers contextual variables such as anthropomorphism (AN), perceived value (PV) and perceived risk (PR). It also considers cognitive variables such as performance expectancy (PEX) and perceived effort expectancy (PEE). In addition, it considers emotional variables such as anxiety (ANX), stress (ST) and trust (TR). For its validation, data were collected from 517 university students and analysed using covariance-based structural equation modelling (CB-SEM). The results indicate that perceived value, anthropomorphism and perceived risk influence the willingness to accept the use of AI devices indirectly through performance expectancy and perceived effort. Likewise, performance expectancy significantly reduces anxiety and stress and increases trust, while effort expectancy increases both anxiety and stress. Trust is the main predictor of willingness to accept the use of AI devices, while stress has a significant negative effect on this willingness. These findings contribute to the literature on the acceptance of AI devices by highlighting the mediating role of emotions and offer practical implications for the design of AI devices aimed at improving their acceptance in educational contexts.

1. Introduction

AI-based devices are improving the academic experience of university students, as they have the ability to personalise learning, provide immediate feedback, automate processes, and even provide emotional support [1,2,3]. Despite this, not all students adopt this technology in the same way, as there are various factors that influence their willingness to use it. While many students are willing to use it, they also have concerns about data privacy and the ability of AI devices to understand their emotions [4]. In addition, many students perceive that the use of AI devices causes dependence, reduces human interaction, and affects critical thinking and creativity [5,6].
Willingness to use AI devices is a complex phenomenon that depends not only on their technical or functional characteristics, but also on the characteristics of the students themselves, who are in the process of developing their autonomy in learning, building their professional identity, and constantly interacting with new technologies. Therefore, it is not enough to simply know students’ opinions about AI devices; it is also necessary to understand how contextual factors influence their perceptions, what their expectations are regarding their use, how they perceive them emotionally, and how they use them in their academic activities [7].
There are many theoretical models that have sought to explain the process of technology adoption. For example, the Technology Acceptance Model (TAM) proposed by Davis [8] has been widely used due to its simplicity, but it is inadequate for explaining the acceptance of AI devices in educational contexts as it does not include social, emotional, and cultural factors [9,10,11]. For its part, the UTAUT 2 model by Venkatesh et al. [12] integrates elements from previous theories to explain technology adoption but has limitations in educational settings, as it does not explicitly consider affective variables such as anxiety, stress and trust. These variables have gained relevance in recent research on intelligent technologies applied to education [13,14]. On the other hand, although the AIDUA model by Gursoy et al. [15] was developed to evaluate the adoption of AI devices, it does not explicitly incorporate key components for the university environment such as perceived value, perceived risk, or specific emotions such as anxiety, stress, and trust. This theoretical and practical gap is the main motivation for the present study.
The main objective of this study is to propose and validate a model of acceptance of the use of artificial intelligence devices (MIDA) among university students. To this end, the model incorporates contextual variables (perceived value, anthropomorphism, and perceived risk), cognitive variables (performance expectancy and perceived effort expectancy), and emotional variables (anxiety, stress, and trust) [16,17]. These variables are included because university students want to use technologies that meet their practical and emotional needs without adding extra cognitive load. For example, anthropomorphism improves the user experience by humanising interaction, which is key in digital learning processes [18]. Likewise, perceived risk and value are relevant to university students, since the more useful and less risky the use of an AI device is perceived to be, the higher the acceptance rate will be [19]. On the other hand, expectations of performance and effort are very important, since, if students believe that an AI device improves their academic performance without requiring a great deal of effort, their willingness to use it increases considerably [20,21]. Finally, anxiety, stress, and trust are relevant emotional variables, as in contexts where academic pressure is high, any technology that generates uncertainty or a feeling of lack of control may be rejected. Therefore, understanding how these emotions relate to technological acceptance will allow for the design of strategies tailored to university students [22,23].
The main contribution of this research is to articulate cognitive, contextual, and emotional variables in order to offer a more comprehensive and in-depth understanding of the processes that shape the acceptance of the use of artificial intelligence devices in higher education.

2. Literature Review

In higher education, the acceptance of digital technologies has been widely analysed from various lines of research, especially from theoretical frameworks such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), which highlight the influence of cognitive and contextual factors on technology adoption [8,12]. On this basis, the literature has broadened its scope to include the study of artificial intelligence-based systems, recognising both their potential to improve personalisation and academic performance and student concerns related to privacy, ethics, transparency, and academic integrity [1,4,5]. In this context, the acceptance of artificial intelligence technologies is not limited to their perceived usefulness, but is determined by various contextual variables such as anthropomorphism, perceived value, and perceived risk [18,19]. These factors significantly influence attitudes, emotional responses such as anxiety, stress, and trust, and students’ willingness to integrate these tools into their academic practices [15,17].

2.1. Artificial Intelligence (AI) Devices

AI devices have been designed to mimic cognitive functions such as learning and problem solving. They can also perform specific tasks based on human intelligence, such as voice recognition, language translation, visual perception, and decision-making [24]. According to van Doorn et al. [25], they can be classified into two types: socially oriented and task-oriented. Socially oriented devices have been designed to facilitate social interactions and mimic human-like conversations (e.g., ChatGPT (GPT-4, OpenAI), voice-based virtual assistants, digital influencers, socialbots, and conversational chatbots such as Replika), while task-oriented devices focus on performing specific tasks or functions efficiently (e.g., Siri, Alexa, Google Mini, and Cortana) [26,27,28,29].

2.2. Willingness to Accept the Use of AI Devices

Willingness to accept the use of AI devices refers to a user’s intention to use them in a specific context [15]. This acceptance varies according to the field in which they are used, such as medicine, transport, psychology or education, so it is necessary to take into account the specific conditions and characteristics of each context [19].

2.3. Objection to the Use of AI Devices

Objection to the use of AI devices refers to users’ low intention to use them in a specific context. When users engage with an AI device, their emotions are not always favourable and may include frustration, fear, uncertainty, anxiety, and concern, which can lead to objection to these technologies and avoidance of their future use [15,30,31,32].

2.4. Anthropomorphism

Anthropomorphism refers to the attribution of human characteristics to non-human agents, whether real or imaginary [33]. This enhances the sense of social presence, which increases performance expectancy and generates positive emotions towards AI devices [18,34].

2.5. Perceived Value

Perceived value arises from the interaction between the user and the product and is influenced by personal and contextual factors [35]. In the case of AI devices, when people recognise their usefulness and benefits in health, education or research, they are more likely to accept them and incorporate them into their daily activities [36].

2.6. Performance and Effort Expectancy

Performance expectancy refers to users’ perception that a particular technology will have a positive impact on the activities they perform. On the other hand, perceived effort expectancy refers to the level of effort required by an individual to use the technology [37].

2.7. Anxiety

Anxiety refers to an emotion characterised by a feeling of concern about possible threats that are unclear and vague [38].

2.8. Stress

Stress occurs when a person perceives an external stimulus as a threat that exceeds their coping capacity, altering their physical and/or psychological balance. It manifests itself through negative emotional arousal and a physiological response that includes neurohormonal changes, such as increased adrenaline and the release of corticosteroids [39,40].

2.9. Trust

Trust refers to a person’s attitude towards AI, expecting it to act correctly and beneficially for their objectives [41,42].

2.10. University Student Profile

University students believe that AI can improve their learning and help them improve their academic performance [4,43]. They also expect AI to help them organise their academic activities, prepare their assignments and have a more interactive learning experience [44]. In this regard, university students have high expectations regarding the use of technology, the quality of teaching, and the ability to manage their own learning [45]. However, they are concerned about the privacy of their data, the lack of understanding of their emotions, and the risk of plagiarism [4]. Emotions and motivation also influence the profile of university students, as many experience high levels of stress and anxiety, which impacts their academic performance and mental well-being [46,47]. Another relevant aspect that university students attribute to technology is its perceived value, as they tend to constantly evaluate the specific benefits they obtain in their learning process, their employability and their future career prospects, which affects their level of satisfaction and their willingness to continue using it [48,49,50].
On the other hand, when these tools feel more human and the interaction is closer, students value them more highly and their user experience improves [51,52]. Likewise, to encourage the adoption of these technologies, it is necessary to reduce the perceived effort so that their use in everyday academic activities is straightforward [53,54].
Overall, prior research highlights that the acceptance of artificial intelligence in higher education cannot be explained by isolated factors, but rather by the interplay between contextual evaluations, cognitive expectations, and emotional responses. Based on this theoretical foundation, the following section develops the research hypotheses of the proposed model.

3. Hypothesis Development

The acceptance or objection of the use of AI devices cannot be explained solely from a technical or functional perspective, as it is influenced by multiple factors such as users’ prior expectations, the emotions they experience when faced with these technologies, and the context in which they are used. Therefore, the interaction of cognitive, contextual, and emotional dimensions is key to explaining the human experience with artificial intelligence [55]. In this research, we propose the MIDA model (see Figure 1), which coherently articulates these dimensions.
The MIDA model is based on the premise that university students’ acceptance of the use of AI devices is not an immediate or automatic response, but rather the result of a progressive process of cognitive and emotional evaluation. This is supported by cognitive evaluation theory, which posits that behavioural responses to a technological stimulus arise from a sequence of cognitive evaluations that give rise to emotional responses, which determine the final decision [56,57]. Based on this approach, the process of AI acceptance can be understood as a staged mechanism in which students initially evaluate the relevance of the stimulus, then analyse its practical implications, and finally generate emotional responses that act as immediate antecedents to behaviour. This sequential approach has previously been successfully applied in the study of the acceptance of AI devices in the context of services, showing that users go through multiple stages of evaluation before deciding to accept or reject their use [15].
Therefore, the MIDA model proposes that, in a first stage of contextual evaluation, university students assess the relevance and alignment of the use of AI devices based on factors such as perceived value, anthropomorphism, and perceived risk. This initial evaluation allows them to determine whether the use of this technology is meaningful to them and, therefore, whether it warrants further evaluation. As cognitive evaluation theory suggests, when a stimulus is perceived as irrelevant, no additional emotional or behavioural processes are activated [57].
Once this first stage is complete, the student moves on to an evaluation of expectations, in which they analyse in more detail the benefits they expect to obtain from using these devices, as well as the effort they anticipate needing to achieve those benefits. These cognitive assessments allow the effort required and the expected benefits associated with the use of AI devices to be weighed up and constitute the main antecedent of the emotional responses generated during the process. This phase reflects a more conscious practical assessment, similar to that identified in previous studies on technology acceptance based on performance and effort expectancy [12,15].
Based on this cognitive evaluation, key emotions linked to the use of AI devices, such as anxiety, stress, and trust, are configured. According to cognitive evaluation theory, these emotions are not spontaneous reactions, but the direct result of previous evaluations made by the individual [56]. In this sense, emotions operate as the mediating mechanism that guides behaviour, directly influencing the student’s final willingness to accept or reject the use of AI devices.
Thus, the MIDA model explains how and why contextual factors influence the acceptance of AI devices indirectly, through a process of progressive evaluation that integrates cognitive assessments and emotional responses, rather than exerting direct and immediate effects on the final decision. This sequential approach captures the complex and progressive nature of the process of accepting AI devices in educational contexts, providing a deeper understanding of the central psychological mechanisms involved in the adoption of intelligent technologies.

3.1. Effect of Anthropomorphism on Performance Expectancy and Perceived Effort

Anthropomorphism increases the perception of social presence, which contributes to higher performance expectancy and fosters positive emotions towards AI devices [18]. In this regard, Wong et al. [58] indicated that anthropomorphism was positively associated with performance expectancy but negatively associated with perceived effort. Chi et al. [59] also found that anthropomorphism had a positive and significant influence on both performance expectancy and perceived effort expectancy. For their part, Gursoy et al. [15] concluded that anthropomorphism had a positive and significant impact on performance expectancy and reduced perceived effort. Similarly, Bai and Yang [60] found that anthropomorphism positively influenced performance expectancy and negatively influenced effort expectancy among professionals using generative AI in China. However, Espinoza-Bravo et al. [61] found that anthropomorphism had a dual effect, as it improved performance expectancy while increasing the perception of effort among university students in Ecuador.
Based on the above, the following hypotheses are proposed:
H1.1. 
Anthropomorphism positively influences performance expectancy.
H1.2. 
Anthropomorphism negatively influences perceived effort expectancy.

3.2. Effect of Perceived Risk on Performance Expectancy and Perceived Effort

Perceived risk plays an important role in the adoption and acceptance of AI in various fields [19]. In this regard, Li [62] concluded that perceived risk has a negative effect on students’ willingness to use AI technologies in design tools. Similarly, Roy et al. [63] found that perceived risk negatively impacted physicians’ willingness to adopt AI for diabetes diagnosis. In turn, Alzaabi and Shuhaiber [64] found that perceived risk negatively impacted AI adoption, even when people are familiar with these technologies. Furthermore, Zhang et al. [65] found that perceived risks significantly reduced teachers’ trust in AI in rural contexts. Kolar et al. [66] also found that risks negatively affected consumers’ willingness to use AI. Finally, Zheng et al. [67] found that perceived risk negatively influenced Chinese doctors’ performance expectancy regarding the use of an intelligent clinical decision support system.
In addition, Ajzen’s Theory of Planned Behaviour suggests that beliefs about possible negative outcomes influence the perception of difficulty in performing a behaviour [68]. In this sense, when university students perceive the use of AI devices as uncertain or risky, they will tend to anticipate that their use will require greater cognitive or learning effort.
Based on the above, the following hypotheses are proposed:
H2.1. 
Perceived risk negatively influences performance expectancy.
H2.2. 
Perceived risk positively influences perceived effort expectancy.

3.3. Effect of Perceived Value on Performance Expectancy and Perceived Effort

In the technological context, Osei and Rasoolimanesh [69] found that perceived value significantly influenced the intention to use Industry 4.0 technologies. In this regard, Testa et al. [70] found a positive and significant association between perceived value and the intention of Italian university students to adopt natural language processing (NLP) models. Similarly, Chan and Zhou [71] found a strong positive correlation between perceived value and the intention of Hong Kong university students to use generative AI in higher education. On the other hand, Rather [72] found that perceived value directly influenced consumers’ willingness to adopt AI in the Indian hotel industry. Similarly, Sattu et al. [73] found that perceived value played a decisive role in HR professionals’ decision to adopt AI. Furthermore, Akdim and Casaló [74] found that perceived value had a positive relationship with the willingness of US residents to use AI voice assistants.
According to Eccles and Wigfield’s situated expectancy-value theory [75], both the value a person attributes to a task and their expectations of success are constructed in relation to the context. In this sense, when students perceive artificial intelligence as a useful tool for their learning, they tend to engage more actively with it, which in turn strengthens their expectation of achieving good academic results through its use. Furthermore, if they consider that AI facilitates the learning process, the perceived difficulty decreases, thus reducing the expectation of the effort required to use it [76,77].
Based on the above, the following hypotheses are proposed:
H3.1. 
Perceived value positively influences performance expectancy.
H3.2. 
Perceived value negatively influences perceived effort expectancy.

3.4. Effect of Performance Expectancy and Perceived Effort on Emotions

Emotions play a very important role in users’ willingness to use AI devices. Several studies have shown that both performance expectancy and perceived effort expectancy influence users’ emotions, which in turn affect their decision to accept or reject the use of AI devices. For example, in the hotel sector in India, these emotions were decisive in the acceptance or objection of these technologies [63].
In this context, performance expectancy and perceived effort have a significant impact on emotions such as anxiety, stress, and trust. It has been found that when users perceive AI to be efficient and easy to use, they experience lower levels of anxiety and stress. This is due to the inverse relationship between these variables, as higher performance expectancy and lower effort expectancy decrease negative emotions, while lower performance expectancy and higher effort expectancy increase negative emotions [78].
One of the main factors contributing to anxiety is the expectation of effort. When users perceive that AI requires excessive effort or is too complex, anxiety increases, which in turn can increase levels of stress and mistrust towards this technology [79,80]. On the other hand, when AI exceeds users’ expectations, it generates positive emotions that reinforce satisfaction and encourage continued use [81]. In this sense, trust in AI is closely related to a favourable perception of its effectiveness and functionality [82,83]. However, if AI devices fail to meet user expectations, a cognitive mismatch occurs that generates negative emotions and reduces acceptance [84]. In particular, when users perceive that AI is less capable than expected or that its performance contradicts their expectations, they may experience frustration and mistrust. This highlights the importance of properly managing expectations to avoid disappointment and improve the user experience [81].
Based on the above, the following hypotheses are proposed:
H4.1. 
Performance expectancy has a negative influence on anxiety.
H4.2. 
Performance expectancy negatively influences stress.
H4.3. 
Performance expectancy positively influences trust.
H5.1. 
Perceived effort expectancy positively influences anxiety.
H5.2. 
Perceived effort expectancy positively influences stress.
H5.3. 
Perceived effort expectancy negatively influences trust.

3.5. Effect of Anxiety on Acceptance or Objection to the Use of AI Devices

Several studies have highlighted that anxiety towards AI significantly influences its acceptance [85,86,87]. In this regard, Song & Tan [88] found that anxiety negatively influenced the acceptance of generative AI. Similarly, Iyer & Bright [89], Cho & Seo [85], Schiavo et al. [90], and Chen et al. [79] concluded that anxiety was negatively related to the acceptance of big data and AI. Likewise, Kaya et al. [17] discovered that anxiety about AI configuration and learning significantly influenced negative attitudes towards its use. However, Zhang et al. [91] found no negative relationship between anxiety towards AI and perceived ease of use.
Based on the above, the following hypotheses are proposed:
H6.1. 
Anxiety negatively influences willingness to accept the use of AI devices.
H6.2. 
Anxiety positively influences objection to the use of AI devices.

3.6. Effect of Stress on Acceptance or Objection to the Use of AI Devices

Artificial intelligence can reduce or increase stress, as in some cases it facilitates the execution of certain tasks and in others it creates additional difficulties, especially for users who do not have the necessary training or support [92]. In this regard, Chen & Lee [93] found that perceived work stress did not influence nurses’ behavioural intention to use artificial intelligence. However, Jeong et al. [94] found that the adoption of AI was associated with an increase in work stress. Likewise, Loureiro et al. [95] suggested that the incorporation of AI in the workplace can generate stress and affect human well-being.
On the other hand, when university students experience stress when interacting with AI devices, these devices are often perceived as a cognitive burden or threat, generating a negative emotional response. According to Lazarus’ cognitive evaluation theory [56], these types of emotions reduce the willingness to engage with technology. Consequently, stress decreases acceptance of AI and can lead to attitudes of objection to its use. This is supported by Gursoy et al. [15], who found that negative emotions increase objection and reduce acceptance of the use of AI devices.
Based on the above, the following hypotheses are proposed:
H7.1. 
Stress negatively influences the willingness to accept the use of AI devices.
H7.2. 
Stress positively influences objection to the use of AI devices.

3.7. Effect of Trust on Acceptance or Objection to the Use of AI Devices

While trust is necessary for acceptance, incorrect levels of trust can lead to misuse, abuse, or disuse of this technology. For this reason, trust must be maintained at an appropriate level to encourage responsible use [96,97]. In this regard, Hameed et al. [98] and Kim et al. [16] found that trust had a positive influence on healthcare professionals’ intentions to use AI. Similarly, Stevens & Stetson [99] found that American doctors’ acceptance of AI is determined by their level of trust. For their part, Wang et al. [100] found that trust is vital for Chinese university students to adopt or reject AI. Also, Đerić et al. [101] found that trust was a strong predictor of the behavioural intention to adopt generative AI tools by university students in Croatia.
Based on the above, the following hypotheses are proposed:
H8.1. 
Trust positively influences willingness to accept the use of AI devices.
H8.2. 
Trust negatively influences objection to the use of AI devices.

4. Research Methodology

4.1. Sample and Data Collection

This study used surveys to collect empirical data from university students in Lima, Peru. Data were collected through convenience sampling between August and November 2025.
Validated questionnaires were used to test the hypothesised relationships between the variables under study. The items in the questionnaires were measured on a five-point Likert scale ranging from “strongly disagree = 1” to “strongly agree = 5”. Back-translation was used to confirm consistency between the English and Spanish versions of the questionnaires, and a pilot test with 104 participants was conducted to verify the adequacy of the instruments.
A total of 4812 university students in the city of Lima were contacted via email and LinkedIn. A total of 541 responses were received, of which 517 were valid, indicating a validity rate of 95.5%. The sample consisted of 42.7% males (221 students) and 57.3% females (296 students).
Table 1 shows the demographic profile of the participants in this study.

4.2. Measures

In this research, the constructs were measured using scales adapted from instruments previously validated in the literature. These scales were translated into Spanish and verified through a back-translation process to ensure semantic equivalence. The items were then adapted to the university context while maintaining consistency with the original meaning of the constructs. In this regard, anthropomorphism, performance expectancy, effort expectancy, willingness to accept the use of AI devices, and objection to the use of AI devices were measured using the scales proposed by Gursoy et al. [15]. Perceived risk was assessed using the scale by Kolar et al. [66], while perceived value was measured using the scale by Sattu et al. [73]. Anxiety associated with the use of AI devices was measured using the scale by Iyer and Bright [89]. Stress was assessed using the scales of Zhou et al. [102] and Wang et al. [103], and trust was measured using the scales of Zhou et al. [36] and Chowdhury et al. [104].

5. Data Analysis and Results

In this study, confirmatory factor analysis (CFA) was used to examine the validity and reliability of the constructs. The composite reliability index (CR), average variance extracted (AVE), and discriminant validity were calculated. A CR above 0.7 indicates good internal consistency, while values between 0.6 and 0.7 can also be considered acceptable in exploratory studies or applied research. An AVE > 0.5 indicates adequate convergent validity; when CR exceeds 0.6, convergent validity can still be considered acceptable even if the AVE falls slightly below this threshold [105,106,107]. If the square root of the AVE is greater than the correlations observed between constructs, the construct is considered to have adequate discriminant validity [107,108]. In order to evaluate the hypothesised relationships between the study variables, structural equation modelling (SEM) was performed using IBM SPSS AMOS version 26.
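To make these criteria concrete, the short Python sketch below computes CR, AVE, and the Fornell-Larcker comparison from standardised factor loadings. It is an illustrative sketch only: the loadings and the correlation are hypothetical placeholders, not values reported in this study.
```python
# Illustrative sketch only: composite reliability (CR), average variance
# extracted (AVE), and the Fornell-Larcker check computed from standardised
# factor loadings. The loadings and the correlation below are hypothetical
# placeholders, not values reported in this study.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    l = np.asarray(loadings, dtype=float)
    return l.sum() ** 2 / (l.sum() ** 2 + (1.0 - l ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    l = np.asarray(loadings, dtype=float)
    return (l ** 2).mean()

trust_loadings = [0.62, 0.68, 0.66]          # hypothetical loadings for TR
wa_loadings = [0.78, 0.81, 0.84, 0.79]       # hypothetical loadings for WA

for name, loadings in [("TR", trust_loadings), ("WA", wa_loadings)]:
    cr = composite_reliability(loadings)
    ave = average_variance_extracted(loadings)
    print(f"{name}: CR = {cr:.3f}, AVE = {ave:.3f}")  # CR > 0.70 good, > 0.60 acceptable; AVE > 0.50 preferred

# Fornell-Larcker criterion: the square root of a construct's AVE should exceed
# its correlations with the other constructs (one hypothetical correlation shown).
r_tr_wa = 0.55
print("Discriminant validity (WA):", np.sqrt(average_variance_extracted(wa_loadings)) > r_tr_wa)
```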
Finally, taking into account the presence of multivariate non-normality, the model was estimated using the Maximum Likelihood (ML) method, complemented by the bootstrapping procedure, in order to obtain robust estimates of the parameters and their significance values. The overall fit of the model was evaluated using the indicators provided by the AMOS software, including the CMIN/DF normalised discrepancy index, the root mean square error of approximation (RMSEA), and the incremental fit indices CFI, TLI, and IFI. According to the literature, CMIN/DF values below 3.0 and RMSEA values below 0.06 indicate a good fit of the model, while CFI, TLI, and IFI values above 0.85 reflect an acceptable fit, with slightly lower values being acceptable in complex models and applied studies. In addition, the bootstrapping procedure was applied with 5000 samples with replacement to evaluate the statistical significance of the structural relationships proposed in the study [106,109,110,111,112,113].
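As a complementary illustration, the hypothesised MIDA paths (H1–H8) can be written in lavaan-style syntax and estimated outside AMOS. The sketch below assumes the Python package semopy; the item names and the data file are hypothetical, only the path structure follows the hypotheses, and the bootstrap loop is a naive resampling illustration rather than the exact AMOS procedure.
```python
# Assumed tooling (semopy), not the authors' AMOS workflow. Item names
# (an1 ... ou3) and the CSV file are hypothetical; only the structural paths
# mirror hypotheses H1-H8 of the MIDA model.
import pandas as pd
import semopy

MIDA_SPEC = """
# Measurement model (illustrative item names)
AN  =~ an1 + an2 + an3
PR  =~ pr1 + pr2 + pr3
PV  =~ pv1 + pv2 + pv3
PEX =~ pex1 + pex2 + pex3
PEE =~ pee1 + pee2 + pee3
ANX =~ anx1 + anx2 + anx3
ST  =~ st1 + st2 + st3
TR  =~ tr1 + tr2 + tr3
WA  =~ wa1 + wa2 + wa3
OU  =~ ou1 + ou2 + ou3
# Structural model (H1-H8)
PEX ~ AN + PR + PV
PEE ~ AN + PR + PV
ANX ~ PEX + PEE
ST  ~ PEX + PEE
TR  ~ PEX + PEE
WA  ~ ANX + ST + TR
OU  ~ ANX + ST + TR
"""

data = pd.read_csv("mida_survey.csv")        # hypothetical file of Likert-item responses

model = semopy.Model(MIDA_SPEC)
model.fit(data)                              # maximum likelihood estimation
print(semopy.calc_stats(model).T)            # chi2/df, CFI, TLI, RMSEA, etc.

# Naive percentile bootstrap of one structural path (TR -> WA). The study used
# 5000 resamples; 500 are shown here to keep the example fast.
estimates = []
for _ in range(500):
    boot = data.sample(frac=1.0, replace=True)
    m = semopy.Model(MIDA_SPEC)
    m.fit(boot)
    est = m.inspect()
    row = est[(est["lval"] == "WA") & (est["op"] == "~") & (est["rval"] == "TR")]
    estimates.append(float(row["Estimate"].iloc[0]))
print(pd.Series(estimates).quantile([0.025, 0.975]))   # 95% percentile interval
```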

5.1. Reliability and Validity of the Measurement Scales

Structural equation modelling (SEM) analysis requires the evaluation of the reliability and validity of constructs through the calculation of composite reliability (CR), average variance extracted (AVE), Cronbach’s alpha coefficient, and discriminant validity. In this regard, Table 2 presents the descriptive statistics corresponding to the constructs included in the measurement model.
Table 3 shows the standardised factor loadings of the items associated with each of the latent constructs included in the measurement model. In general, the indicators showed statistically significant factor loadings (p < 0.001) and, for the most part, were above the recommended threshold of 0.60, which indicates an adequate contribution of the items to their respective constructs. However, some indicators, such as PEE1 and ANX1, had factor loadings slightly below this threshold, although above 0.50, a value that can be considered acceptable in applied research and theoretically grounded models. The composite reliability (CR) values ranged from 0.693 (Trust) to 0.872 (Willingness to accept the use of artificial intelligence devices), indicating acceptable and high levels of internal consistency. Although the CR value for the Trust (TR) construct was slightly below the conventional threshold of 0.70, it exceeded the minimum value of 0.60, which is considered acceptable in exploratory studies and applied research, especially when the factor loadings are adequate and the construct has theoretical support. In terms of convergent validity, most constructs achieved AVE values above 0.50, which supports an adequate proportion of variance explained by the indicators. However, the constructs Trust (AVE = 0.430) and Objection to the use of artificial intelligence devices (AVE = 0.485) had AVE values slightly below the recommended threshold. Even so, convergent validity can be considered acceptable in these cases, given that both constructs exhibited satisfactory levels of composite reliability (CR > 0.60) and mostly adequate standardised factor loadings. Finally, discriminant validity was confirmed for all constructs, as the square root of the AVE for each latent variable exceeded the correlations with the other constructs, indicating that each construct shares greater variance with its own indicators than with those of other factors in the model.
Table 4 shows the results of the discriminant validity analysis using the Heterotrait–Monotrait ratio (HTMT) criterion. According to Henseler et al. [114], discriminant validity is considered adequate when HTMT values are below 0.90, with 0.85 being a stricter criterion. The results show that the vast majority of HTMT values are clearly below both thresholds, which indicates adequate differentiation between the constructs of the model.
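For reference, the HTMT value for any pair of constructs can be obtained directly from the item correlation matrix as the average heterotrait correlation divided by the geometric mean of the average within-construct correlations. The sketch below illustrates this with hypothetical items and correlations, not the values underlying Table 4.
```python
# Illustrative sketch of the Heterotrait-Monotrait ratio (HTMT); the item
# names and correlations are hypothetical, not the values reported in Table 4.
import numpy as np
import pandas as pd

def htmt(corr: pd.DataFrame, items_a, items_b) -> float:
    """Average heterotrait correlation divided by the geometric mean of the
    average monotrait (within-construct) correlations."""
    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    def mono(items):
        block = corr.loc[items, items].to_numpy()
        upper = block[np.triu_indices_from(block, k=1)]   # off-diagonal, upper triangle
        return upper.mean()

    return hetero / np.sqrt(mono(items_a) * mono(items_b))

items = ["tr1", "tr2", "tr3", "wa1", "wa2", "wa3"]
R = pd.DataFrame(
    [[1.00, 0.48, 0.45, 0.40, 0.38, 0.36],
     [0.48, 1.00, 0.47, 0.39, 0.41, 0.37],
     [0.45, 0.47, 1.00, 0.35, 0.36, 0.34],
     [0.40, 0.39, 0.35, 1.00, 0.63, 0.61],
     [0.38, 0.41, 0.36, 0.63, 1.00, 0.64],
     [0.36, 0.37, 0.34, 0.61, 0.64, 1.00]],
    index=items, columns=items)

value = htmt(R, ["tr1", "tr2", "tr3"], ["wa1", "wa2", "wa3"])
print(f"HTMT(TR, WA) = {value:.3f}")   # below 0.90 (or the stricter 0.85) supports discriminant validity
```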
Table 5 presents the fit indices for the measurement model, which show an adequate overall level of fit between the proposed theoretical model and the observed empirical data. First, the normalised discrepancy index (CMIN/DF = 2.188) is below the recommended threshold of 3.00, indicating a good fit of the model. Likewise, the RMSEA = 0.048 is below the strict criterion of 0.06, suggesting an excellent fit, reinforced by a PCLOSE = 0.804, a value greater than 0.05, confirming that the RMSEA does not differ significantly from a close fit. With regard to the incremental indices, the CFI (0.925), TLI (0.913) and IFI (0.926) exceed the recommended threshold of 0.90, indicating a good comparative fit of the model against a null model. Therefore, the measurement model has adequate psychometric properties for the subsequent estimation of the structural model.

5.2. Structural Model

Univariate normality was assessed using skewness and kurtosis statistics. The values obtained were within acceptable ranges (|skewness| < 2; |kurtosis| < 7), suggesting an adequate distribution at the univariate level. However, the multivariate normality test using the Mardia coefficient showed a violation of this assumption (multivariate kurtosis = 290.97; C.R. = 66.92), so bootstrap procedures were used in the structural equation modelling (CB-SEM) analysis.
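The normality screening described above can be reproduced with standard tools: univariate skewness and kurtosis per item, followed by Mardia's multivariate kurtosis and its critical ratio. The sketch below is illustrative (the data file is hypothetical), and software packages normalise Mardia's statistic slightly differently, so values may not match AMOS exactly.
```python
# Illustrative normality screening; the data file is hypothetical and the exact
# normalisation of Mardia's statistic differs slightly across software packages.
import numpy as np
import pandas as pd
from scipy import stats

data = pd.read_csv("mida_survey.csv")              # hypothetical Likert-item responses
X = data.to_numpy(dtype=float)
n, p = X.shape

# Univariate screening: |skewness| < 2 and |kurtosis| < 7 as rough thresholds.
skew = stats.skew(X, axis=0)
kurt = stats.kurtosis(X, axis=0)                   # excess kurtosis
print("Univariate OK:", bool((np.abs(skew) < 2).all() and (np.abs(kurt) < 7).all()))

# Mardia's multivariate kurtosis: mean of squared Mahalanobis distances.
centered = X - X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
d2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)
b2p = (d2 ** 2).mean()
excess = b2p - p * (p + 2)                         # zero under multivariate normality
cr = excess / np.sqrt(8 * p * (p + 2) / n)         # critical ratio
print(f"Mardia excess kurtosis = {excess:.2f}, C.R. = {cr:.2f}")
```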
As shown in Table 6, the structural model presented lower fit indices than those observed in the measurement model. This difference is duly supported from a theoretical and methodological perspective, given that the structural model incorporates additional causal restrictions between latent constructs, which increases the complexity of the model and limits its overall flexibility. The literature indicates that a moderate decrease in fit indices when moving from the measurement model to the structural model is common in structural equation modelling and does not necessarily reflect an inadequate model specification, especially when the model is solidly grounded in theory and developed in applied research contexts [115].
The structural model presents an acceptable overall fit. The CMIN/DF (2.79) and RMSEA (0.059) indicate a good absolute fit, while the incremental indices (CFI = 0.881; TLI = 0.868; IFI = 0.882) show an acceptable fit considering the complexity of the model and the sample size.

5.3. Hypothesis Testing

Table 7 presents the results of the validation of the hypotheses proposed in the structural model. The validation was carried out using standardised regression coefficients (β) and their corresponding significance values, estimated using structural equation modelling (SEM) with bootstrap correction. The results show that anthropomorphism (AN) has a positive and significant influence on performance expectancy (PEX), therefore hypothesis H1.1 is accepted. However, no empirical evidence was found to support the negative influence of AN on perceived effort expectancy (PEE), so hypothesis H1.2 was rejected. With regard to perceived risk (PR), the results indicate that it does not have a significant influence on PEX, leading to the rejection of hypothesis H2.1. However, a positive and significant influence of PR on PEE was confirmed, and therefore hypothesis H2.2 is accepted. For its part, perceived value (PV) showed a positive and highly significant influence on PEX, accepting H3.1, while its negative effect on PEE was not confirmed, therefore rejecting H3.2. Likewise, performance expectancy (PEX) had a negative and significant influence on anxiety (ANX) and stress (ST), as well as a positive and significant influence on trust (TR), accepting hypotheses H4.1, H4.2 and H4.3, respectively. Perceived effort expectancy (PEE) had a positive influence on ANX and ST, accepting hypotheses H5.1 and H5.2, although no significant influence on TR was evident, leading to the rejection of H5.3. Anxiety (ANX) did not show significant effects on either willingness to accept the use of artificial intelligence devices (WA) or objection to the use of artificial intelligence devices (OU), so hypotheses H6.1 and H6.2 were rejected. Stress (ST) showed a negative and significant influence on willingness to accept the use of artificial intelligence devices, accepting hypothesis H7.1, while its effect on objection to the use of artificial intelligence devices was not confirmed, therefore hypothesis H7.2 was rejected. Finally, trust (TR) showed a positive and highly significant influence on the willingness to use AI devices, accepting hypothesis H8.1, while its negative influence on the objection to the use of AI devices was not statistically significant, therefore rejecting hypothesis H8.2.
Table 8 presents the indirect effects that were estimated using bootstrapping with 5000 resamples. The results show that performance expectancy (PEX) acts as a mediator between contextual variables and the emotional responses of university students. In this sense, anthropomorphism (AN) and perceived value (PV) have significant indirect effects on anxiety (ANX), stress (ST), and trust (TR) through performance expectancy (PEX). Therefore, higher performance expectancy (PEX) is associated with lower levels of negative emotions and higher levels of trust (TR). On the other hand, perceived effort expectancy (PEE) mediates the relationship between contextual variables and negative emotions. Both anthropomorphism (AN) and perceived risk (PR) have positive indirect effects on anxiety (ANX) and stress (ST) through PEE, indicating that a higher perception of effort intensifies these emotional responses.
In relation to behavioural outcomes, trust (TR) and stress (ST) significantly mediate the relationship between cognitive evaluations and willingness to accept the use of AI devices (WA). PEX positively influences WA through TR, while PEX negatively influences WA through ST and PEE does so through ST and to a lesser extent through ANX. No significant indirect effects were observed on objection to the use of AI devices (OU), suggesting that in the MIDA model, emotional mechanisms primarily explain the willingness to accept the use of AI devices.
The results of the structural model (see Figure 2) show a high level of explanation of the willingness to accept the use of artificial intelligence devices (WA), with a value of R2 = 0.688. Likewise, performance expectancy (PEX) presents a high level of explained variance (R2 = 0.746), while perceived effort expectancy (PEE) shows a low explanatory capacity (R2 = 0.157). For their part, the constructs of anxiety (ANX), stress (ST) and trust (TR) reach moderate to high levels of explained variance (R2 = 0.700, 0.683 and 0.662, respectively). However, objection to the use of AI devices (OU) shows very low explained variance (R2 = 0.024), suggesting that this construct could be influenced by additional factors not considered in the model.

6. Discussion

The results of this research confirm that the willingness to accept the use of AI devices in higher education occurs through a sequential process in which contextual factors influence the formation of cognitive expectations, which in turn trigger emotional responses with differentiated effects. In this sense, the results support the central premise of MIDA, which indicates that the willingness to accept the use of AI devices cannot be explained solely from a functional utility perspective, but rather requires consideration of the dynamic interaction between cognition and emotion. Unlike classic models of technology acceptance such as Davis’ Technology Acceptance Model (TAM) [8] and Venkatesh et al.’s Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [12], which explain the intention to use technology mainly on the basis of assessments of utility and effort, the results of the study show that the willingness to accept the use of AI devices in higher education is a more complex and dynamic process. Furthermore, while the AIDUA model by Gursoy et al. [15] considers emotion as a general affective response, the MIDA model allows us to identify how specific emotions, such as trust or stress, differentially influence the acceptance of AI devices. The findings show that trust favours willingness to accept the use of AI devices, while stress acts as an inhibiting factor, and anxiety has no direct effect, making it important to consider the emotional dimension in a disaggregated manner.
Firstly, contextual factors showed different effects on expectations. For example, anthropomorphism significantly increased performance expectancy. This would indicate that when AI devices incorporate human or social characteristics, students anticipate greater academic benefits. This finding coincides with previous studies indicating that the perception of social presence improves the expected usefulness of intelligent technologies in technology-assisted learning environments [15,116,117]. However, contrary to expectations, anthropomorphism increased the perceived effort expectancy, a finding similar to that found by Espinoza-Bravo et al. [61] in Ecuadorian university students. This could be because the humanisation of AI devices is associated with a greater perception of cognitive complexity [118]. Likewise, this result suggests that students perceive that they must interact in a more elaborate way with AI devices, for example, through precise instructions or critical evaluation of the responses generated [119].
Secondly, perceived risk did not reduce performance expectancy, but it did increase effort expectancy. This finding suggests that students value the academic potential of AI devices even when they perceive associated risks. This is consistent with previous studies indicating that higher education students may accept technologies perceived as useful even when they identify ethical or regulatory risks associated with their use [120]. Perceived value, on the other hand, had a positive and significant influence on performance expectancy and no influence on effort expectancy. This result aligns with Eccles & Wigfield’s [75] situated expectancy-value theory, which indicates that the value assigned to a task increases the expectation of success, although it does not necessarily reduce the anticipated effort when the task involves the development of new skills.
For its part, performance expectancy significantly reduced anxiety and stress and increased trust. In other words, when students anticipate that AI devices will improve their performance, they experience lower levels of emotional distress and develop greater trust in the technology [121,122]. However, perceived effort expectancy increased anxiety and stress without affecting trust. This finding indicates that perceived effort expectancy generates negative emotional responses, but this does not imply that students stop trusting AI devices. In this sense, students can trust AI devices while simultaneously experiencing anxiety and stress during their use [123].
On the other hand, trust had a positive and significant influence on students’ willingness to accept the use of AI devices, while stress significantly reduced this willingness. These findings indicate that acceptance of the use of AI devices is facilitated when students experience favourable emotions and do not feel emotionally overwhelmed [124,125]. Anxiety did not influence either acceptance or objection to the use of AI devices, suggesting that it does not play a decisive role in these decisions in the educational context. According to Lang’s three-dimensional theory of anxiety [126], this could be because anxiety is an emotional response that does not necessarily translate into a behavioural decision [127]. However, these results should be interpreted with caution. Although this result is valid in the context of the current sample, it should not be generalised as a universal finding. It is possible that the impact of anxiety on AI acceptance is not linear or depends on additional factors, such as previous experience with AI technologies, levels of digital literacy, or the evaluative context in which AI is used [128,129].
Finally, the indirect effects analyses confirm the internal consistency of the MIDA model by showing that contextual factors influence emotional responses and willingness to accept the use of AI devices through expectations. Performance expectancy acts as a central mediator and articulates the effects of anthropomorphism and perceived value on anxiety, stress, and trust, such that higher performance expectancy is associated with lower levels of negative emotions and increased trust. Furthermore, effort expectancy significantly mediates the effects of anthropomorphism and perceived risk on anxiety and stress, intensifying these negative emotional responses. Indirect effects also show that willingness to accept the use of AI devices is mainly explained by trust and stress. However, an important finding of this study is the very low explanatory power obtained for the construct objection to the use of AI devices. Rather than indicating a statistical weakness, this result suggests a theoretical distinction between acceptance and objection processes. The findings indicate that objection is not simply the inverse of willingness to use AI but may reflect a qualitatively different mechanism. While willingness to accept AI devices appears to be driven primarily by cognitive evaluations and emotional responses, objection may be shaped by normative, ethical, or institutional considerations that are not captured within the current conceptualisation of the MIDA model [130]. In this sense, the proposed model provides a stronger explanation of willingness to use AI technologies than of active resistance or objection to their use.

6.1. Theoretical Implications

This research makes relevant theoretical contributions to the acceptance of artificial intelligence devices in higher education. First, it expands on traditional models of technology acceptance by empirically validating a model that follows a sequential process integrating cognition and emotion. In this sense, the MIDA model demonstrates that cognitive evaluations (both contextual and expectation-based) influence the acceptance of AI devices in the education sector through emotional responses. Secondly, the results contribute to theory by differentiating the role of performance expectancy and perceived effort expectancy in emotional activation. In this sense, performance expectancy positively regulates emotions by reducing stress and anxiety and increasing trust, while effort expectancy increases emotional load by increasing stress and anxiety without affecting trust. This difference shows that expectations not only fulfil a cognitive function, but also activate different emotional responses, allowing for a better understanding of the psychological mechanisms involved in the acceptance of AI devices [124].
Thirdly, the study results provide empirical evidence that distinguishes between the factors that explain acceptance and those that explain objection to the use of AI devices. While trust and stress significantly explain the intention to use, none of the emotions described predict objection. This finding challenges the implicit assumption of some models that acceptance and objection to the use of technology are driven by the same psychological factors. Therefore, it is suggested that objection to the use of AI devices may be influenced by other factors that go beyond the scope of the present research.
Finally, by incorporating specific emotions such as anxiety, stress, and trust, rather than global affective measures, the MIDA model provides a more granular and theoretically sound approach to analysing the acceptance of AI devices in higher education. This contribution is important in educational contexts, where emotional responses play a key role in the interaction between students and emerging technologies [61].

6.2. Practical Implications

The results of the MIDA model offer relevant practical implications for the design, implementation, and management of AI devices in higher education. First, trust is consolidated as the main positive predictor of willingness to accept the use of AI devices, suggesting that developers and providers should prioritise strategies aimed at strengthening it. This involves ensuring transparency in the functioning of the systems, explaining the decision-making criteria of AI devices in an understandable way, and clearly communicating their academic benefits.
Secondly, the findings show that stress has a significant negative effect on the acceptance of AI devices, while anxiety has no direct impact. Consequently, educational institutions should focus their efforts on reducing the sources of stress associated with these technologies, such as cognitive overload, the complexity of interfaces, or the perception of additional academic demands associated with the use of AI devices. Strategies such as progressive training, academic support, and the availability of technical support can help mitigate these effects.
Likewise, performance expectancy reduces anxiety and stress while increasing trust, which underscores the importance of students clearly perceiving the functional and academic value of AI devices. In this regard, it is key to integrate these technologies in a manner consistent with learning objectives and demonstrate their usefulness in improving academic performance. On the other hand, the expectation of effort increases negative emotions, highlighting the need to design intuitive and easy-to-use AI devices with user-friendly interfaces and reduced learning curves.
Finally, given that contextual variables such as perceived value, anthropomorphism, and perceived risk indirectly influence acceptance, implementation strategies must adequately manage students’ initial perceptions by communicating benefits, limitations, and levels of human control in a balanced manner.

6.3. Limitations and Recommendations for Future Research

This study has some limitations that open up opportunities for future research. First, the term AI devices was used in a broad and practical sense to refer to intelligent systems that support academic activities in higher education, including both socially oriented applications (e.g., conversational or generative AI systems) and task-focused tools (e.g., decision-support or assistance systems). While this inclusive definition allows for an overall examination of students’ acceptance of AI technologies, it also implies a degree of heterogeneity across AI types. In practice, these systems may differ substantially in terms of perceived anthropomorphism, emotional engagement, perceived usefulness, and perceived risk. These differences were not explicitly modelled in the present study and therefore represent a limitation. Future research could address this issue by focusing on specific categories of AI systems or by examining the type of AI as a moderating variable within technology acceptance models.
Second, although the proposed model has theoretical and empirical support, the use of longitudinal or experimental studies would allow for analysis of how expectations and emotional responses evolve as university students gain more experience using AI devices.
Thirdly, the MIDA model took into account a specific number of contextual, cognitive and emotional variables in order to maintain a parsimonious structure; however, other relevant factors were not considered, particularly those that may help explain the objection to the use of AI devices. In this regard, the present model does not explicitly incorporate ethical or normative variables, which may play a central role in shaping opposition to the use of artificial intelligence in higher education. Academic and institutional discussions on academic integrity, the responsible use of generative AI, dual-use risks, and regulatory constraints suggest that ethical considerations may significantly influence students’ resistance or reluctance to adopt AI technologies. The absence of these variables limits the explanatory scope of the model, especially with regard to the outcomes of objection. Therefore, future research should integrate ethical, institutional, and normative dimensions to provide a more comprehensive understanding of resistance to the use of AI in educational contexts.
Fourth, the sample consisted exclusively of university students drawn from a single urban and cultural context, and the empirical data were collected using a cross-sectional design. While this approach is appropriate for the validation of the proposed model, it necessarily limits the generalisability of the findings. Caution is therefore required when extending the conclusions to other educational systems, cultural contexts, or institutional environments. Replication studies conducted across different countries, educational levels, and institutional settings would be valuable to assess the robustness and external validity of the MIDA model. Additionally, future research could examine the applicability of the model to other populations, such as teachers, professionals, or learners at different educational stages.
Finally, future research could explore possible moderating effects of variables such as previous experience with AI, level of digital literacy, or level of education, in order to determine whether the intensity of cognitive and emotional relationships varies among different user profiles. These studies would allow for refinement of the MIDA model and strengthen its applicability in various scenarios.

7. Conclusions

This research demonstrates that acceptance of the use of AI devices in higher education occurs through a process in which expectations and emotions play a key role. The results show that willingness to use AI devices does not depend solely on functional evaluations but is strongly conditioned by the trust generated by the technology and the level of stress associated with its use. Furthermore, the findings confirm that contextual evaluations indirectly influence the acceptance of AI devices by shaping performance and effort expectancy, which trigger specific emotional responses. In particular, performance expectancy favours positive emotional states, while effort expectancy increases emotional distress, affecting the intention to use in different ways.
The study also shows that emotions help explain the acceptance of AI device use but not objection to it. This result suggests that objection to the use of AI devices responds to factors other than the emotional ones considered here, highlighting the need to broaden theoretical approaches in order to fully understand this phenomenon in educational contexts.
Finally, the MIDA model offers a solid explanatory framework for understanding the acceptance of AI device use in higher education by integrating cognition and emotion into a sequential process. These findings contribute both to the theoretical advancement of research on the adoption of emerging technologies and to the design of institutional strategies that manage performance and effort expectancies, strengthen trust, and mitigate the stress associated with the use of AI devices, thereby favouring the appropriate and responsible integration of these technologies into the educational environment.

Author Contributions

L.S. was responsible for the literature review, conceptualization and definition of the proposed model, overall structure of the manuscript, data handling and analysis, interpretation of results, discussion, and formulation of conclusions. L.R. contributed to the methodological review and validation of the research design. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The article processing charge (APC) was personally covered by Luis Salazar.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors declare that generative artificial intelligence (AI) tools were used during the preparation of this manuscript solely to assist with language refinement, including improvements in writing clarity, grammar, and stylistic consistency. The use of AI was limited to editorial support and did not involve the generation of scientific content, data analysis, interpretation of results, methodological design, or the formulation of conclusions. All intellectual contributions, analyses, and interpretations presented in this manuscript are the sole responsibility of the authors, who reviewed and validated the final version of the text.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Farina, A.; Stevenson, C.N. Ethical Navigations: Adaptable Frameworks for Responsible AI Use in Higher Education. In Exploring the Ethical Implications of Generative AI; IGI Global Scientific Publishing: Hershey, PA, USA, 2024. [Google Scholar] [CrossRef]
  2. Helmiatin; Hidayat, A.; Kahar, M.R. Investigating the Adoption of AI in Higher Education: A Study of Public Universities in Indonesia. Cogent Educ. 2024, 11, 2380175. [Google Scholar] [CrossRef]
  3. Kok, C.L.; Ho, C.K.; Koh, Y.Y.; Wen Heng, J.B.; Teo, T.H. Psychological Aspects of AI Enhanced Learning Experiences. In Proceedings of the 2024 IEEE Region 10 Conference (TENCON 2024), Singapore, 1–4 December 2024; pp. 1302–1305. [Google Scholar] [CrossRef]
  4. Li, W.; Li, K.; Wu, D. AI in Higher Education: A Survey-Based Empirical Study on Applications and Student Expectations. In Proceedings of the 14th International Conference on Educational and Information Technology (ICEIT 2025), Guangzhou, China, 14–16 March 2025; pp. 501–505. [Google Scholar] [CrossRef]
  5. Khalkho, R.; Singh, S.; Gupta, N.; Srivastava, P. Impact of Educational AI on Students’ Studying Habits and Academic Performance. In Proceedings of the 2024 IEEE International Conference on Artificial Intelligence, Quantum Computing and Smart Agriculture (ICAIQSA), Nagpur, India, 20–21 December 2024; pp. 1–6. [Google Scholar] [CrossRef]
  6. Tran, T.T.; Le, T.V.; Le, N.H.; Dam, A.V.T.; Nguyen, T.T.; Nguyen, A.T.T.; Nguyen, H.T. Emotional Attachment to Artificial Intelligence and Perceived Social Isolation among University Students: An Application of Sternberg’s Triangular Theory of Love. Multidiscip. Sci. J. 2025, 7, 2025662. [Google Scholar] [CrossRef]
  7. Oukhouya, L.; Ouhader, H.; Derkaoui, G.; Essam, M.; Yanisse, S.; Sbai, A. Exploring Moroccan Medical Students’ Perception of Artificial Intelligence. In Lecture Notes in Networks and Systems; Springer Nature: New York, NY, USA, 2024; pp. 76–85. [Google Scholar] [CrossRef]
  8. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. Manag. Inf. Syst. 1989, 13, 319–339. [Google Scholar] [CrossRef]
  9. Bagozzi, R.P. The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm Shift. J. Assoc. Inf. Syst. 2007, 8, 12. [Google Scholar] [CrossRef]
  10. Benbasat, I.; Barki, H. Quo Vadis, TAM? J. Assoc. Inf. Syst. 2007, 8, 211–218. [Google Scholar] [CrossRef]
  11. Sohn, K.; Kwon, O. Technology Acceptance Theories and Factors Influencing Artificial Intelligence-Based Intelligent Products. Telemat. Inform. 2020, 47, 101324. [Google Scholar] [CrossRef]
  12. Venkatesh, V.; Thong, J.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  13. Ammenwerth, E. Technology Acceptance Models in Health Informatics: TAM and UTAUT. Stud. Health Technol. Inform. 2019, 263, 64–71. [Google Scholar] [CrossRef]
  14. Xu, X.; Dai, A.; Wei, S.; Tan, L. WIP: Examining Psychological Distance Perceptions towards Advanced Technologies among University Students. In Proceedings of the IEEE Frontiers in Education Conference (FIE), Washington, DC, USA, 13–16 October 2024; pp. 1–5. [Google Scholar] [CrossRef]
  15. Gursoy, D.; Chi, O.H.; Lu, L.; Nunkoo, R. Consumers’ Acceptance of Artificially Intelligent (AI) Device Use in Service Delivery. Int. J. Inf. Manag. 2019, 49, 157–169. [Google Scholar] [CrossRef]
  16. Kim, Y.J.; Choi, J.H.; Fotso, G.M.N. Medical Professionals’ Adoption of AI-Based Medical Devices: UTAUT Model with Trust Mediation. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100220. [Google Scholar] [CrossRef]
  17. Kaya, F.; Aydin, F.; Schepman, A.; Rodway, P.; Yetişensoy, O.; Demir Kaya, M. The Roles of Personality Traits, AI Anxiety, and Demographic Factors in Attitudes toward Artificial Intelligence. Int. J. Hum.-Comput. Interact. 2024, 40, 497–514. [Google Scholar] [CrossRef]
  18. Alsaad, A. The Dual Effect of Anthropomorphism on Customers’ Decisions to Use Artificial Intelligence Devices in Hotel Services. J. Hosp. Mark. Manag. 2023, 32, 1048–1076. [Google Scholar] [CrossRef]
  19. Schwesig, R.; Brich, I.; Buder, J.; Huff, M.; Said, N. Using Artificial Intelligence (AI)? Risk and Opportunity Perception of AI Predict People’s Willingness to Use AI. J. Risk Res. 2023, 26, 1053–1084. [Google Scholar] [CrossRef]
  20. Handoko, B.L.; Thomas, G.N.; Indriaty, L. Adoption and Utilisation of Artificial Intelligence to Enhance Student Learning Satisfaction. In Proceedings of the 2024 International Conference on ICT for Smart Society (ICISS), Bandung, Indonesia, 4–5 September 2024; pp. 1–6. [Google Scholar] [CrossRef]
  21. Duong, C.D.; Bui, D.T.; Pham, H.; Vu, A.T.; Nguyen, V.H. How Effort Expectancy and Performance Expectancy Interact to Trigger Higher Education Students’ Uses of ChatGPT for Learning. Interact. Technol. Smart Educ. 2024, 21, 356–380. [Google Scholar] [CrossRef]
  22. Li, J.; Huang, J.S. Dimensions of Artificial Intelligence Anxiety Based on the Integrated Fear Acquisition Theory. Technol. Soc. 2020, 63, 101410. [Google Scholar] [CrossRef]
  23. Cengiz, S.; Peker, A. Generative Artificial Intelligence Acceptance and Artificial Intelligence Anxiety among University Students: The Sequential Mediating Role of Attitudes toward Artificial Intelligence and Literacy. Curr. Psychol. 2025, 44, 7991–8000. [Google Scholar] [CrossRef]
  24. Menaga, D.; Saravanan, S. Application of Artificial Intelligence in the Perspective of Data Mining. In Artificial Intelligence in Data Mining: Theories and Applications; Elsevier: Amsterdam, The Netherlands, 2021; pp. 133–154. [Google Scholar] [CrossRef]
  25. van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo Arigato Mr. Roboto: Emergence of Automated Social Presence in Organizational Frontlines and Customers’ Service Experiences. J. Serv. Res. 2017, 20, 43–58. [Google Scholar] [CrossRef]
  26. Thakkar, M.; Pise, N. Survey of Available Datasets for Designing Task Oriented Dialogue Agents. In Proceedings of the 2019 International Conference on Mechatronics, Remote Sensing, Information Systems and Industrial Information Technologies (ICMRSISIIT), Accra, Ghana, 20–22 December 2020; pp. 1–10. [Google Scholar] [CrossRef]
  27. Kim, J.; Merrill, K.; Collins, C. AI as a Friend or Assistant: The Mediating Role of Perceived Usefulness in Social AI vs. Functional AI. Telemat. Inform. 2021, 64, 101694. [Google Scholar] [CrossRef]
  28. Singh, S.; Beniwal, H. A Survey on Near-Human Conversational Agents. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 8852–8866. [Google Scholar] [CrossRef]
  29. Leddy, M.; Mc Creanor, N. Exploring How Education Can Leverage Artificial Intelligence for Social Good. In Proceedings of the European Conference on Innovation and Entrepreneurship, Paris, France, 26–27 September 2024; pp. 1041–1048. [Google Scholar] [CrossRef]
  30. Chen, S.; Granitz, N. Adoption, Objection, or Convergence: Consumer Attitudes toward Book Digitisation. J. Bus. Res. 2012, 65, 1219–1225. [Google Scholar] [CrossRef]
  31. Park, E.H.; Werder, K.; Cao, L.; Ramesh, B. Why Do Family Members Reject AI in Health Care? Competing Effects of Emotions. J. Manag. Inf. Syst. 2022, 39, 765–792. [Google Scholar] [CrossRef]
  32. Rucker, D.D.; Petty, R.E. Emotion Specificity and Consumer Behaviour: Anger, Sadness, and Preference for Activity. Motiv. Emot. 2004, 28, 3–21. [Google Scholar] [CrossRef]
  33. Guthrie, S.E. Faces in the Clouds: A New Theory of Religion; Oxford University Press: Oxford, UK, 1993. [Google Scholar]
  34. Sah, Y.J. Anthropomorphism in Human-Centred AI: Determinants and Consequences of Applying Human Knowledge to AI Agents. In Human-Centred Artificial Intelligence: Research and Applications; Elsevier: Amsterdam, The Netherlands, 2022; pp. 103–116. [Google Scholar] [CrossRef]
  35. Sánchez-Fernández, R.; Iniesta-Bonillo, M.Á. The Concept of Perceived Value: A Systematic Review of the Research. Mark. Theory 2007, 7, 427–451. [Google Scholar] [CrossRef]
  36. Zhou, C.; Liu, X.; Yu, C.; Tao, Y.; Shao, Y.H. Trust in AI-Augmented Design: Applying Structural Equation Modelling to AI-Augmented Design Acceptance. Heliyon 2024, 10, e23305. [Google Scholar] [CrossRef]
  37. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. Manag. Inf. Syst. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  38. Öhman, A. Anxiety. In Encyclopedia of Stress; Academic Press: Cambridge, MA, USA, 2007; pp. 236–239. [Google Scholar] [CrossRef]
  39. McEwen, B.S. Stress, Definitions and Concepts of. In Encyclopedia of Stress; Academic Press: Cambridge, MA, USA, 2007; p. 653. [Google Scholar] [CrossRef]
  40. Ovsiannikova, Y.; Pokhilko, D.; Kerdyvar, V.; Krasnokutsky, M.; Kosolapov, O. Peculiarities of the Impact of Stress on Physical and Psychological Health. Multidiscip. Sci. J. 2024, 6, e2024ss0711. [Google Scholar] [CrossRef]
  41. Baratella, R.; Amaral, G.; Sales, T.P.; Guizzardi, R.; Guizzardi, G. The Many Facets of Trust. In Frontiers in Artificial Intelligence and Applications; IOS Press: Amsterdam, The Netherlands, 2023; pp. 17–31. [Google Scholar] [CrossRef]
  42. Kaplan, A.D.; Kessler, T.T.; Brill, J.C.; Hancock, P.A. Trust in Artificial Intelligence: Meta-Analytic Findings. Hum. Factors 2023, 65, 337–359. [Google Scholar] [CrossRef]
  43. Khalil, H.; Alazawi, N.; Khatiry, A.R. Exploring the Role of Artificial Intelligence in Education: A Survey-Based Study at University of Technology and Applied Science. In Studies in Systems, Decision and Control; Springer International Publishing AG: Cham, Switzerland, 2024; pp. 63–73. [Google Scholar] [CrossRef]
  44. Perez-Alvarez, R.; Chavarría Villalobos, C.R.; Dalorso Cruz, M.; Miranda Loría, J. Expectations of Higher Education Teachers Regarding the Use of AI in Education. In Communications in Computer and Information Science; Springer Nature: New York, NY, USA, 2024; pp. 208–213. [Google Scholar] [CrossRef]
  45. Tomlinson, A.; Simpson, A.; Killingback, C. Student Expectations of Teaching and Learning when Starting University: A Systematic Review. J. Furth. High. Educ. 2023, 47, 1054–1073. [Google Scholar] [CrossRef]
  46. Huamani-Calloapaza, T.C.; Mendoza-Zuñiga, M.; Guido, Y.; Yana-Salluca, M.; Yana-Salluca, N.; Perez-Argollo, K.; Mora-Estrada, O.; Pandia-Yañez, E.J. Depression, Anxiety, and Stress among Students at a Peruvian Public University: A Cross-Sectional Study. Health Sci. Technol. 2024, 4, 1070. [Google Scholar] [CrossRef]
  47. Awadalla, S.; Davies, E.B.; Glazebrook, C. The Impact of Depressive and Anxiety Symptoms on Academic Achievement among Undergraduate University Students: A Systematic Review. OBM Neurobiol. 2024, 8, 1–36. [Google Scholar] [CrossRef]
  48. Harzalla, M.; Omheni, N. The Influential Factors on E-Learning Adoption and Learning Continuance. In Lecture Notes in Computer Science; Springer Nature: New York, NY, USA, 2020; pp. 397–409. [Google Scholar] [CrossRef]
  49. Balloo, K.; Pauli, R.; Worrell, M. Undergraduates’ Personal Circumstances, Expectations and Reasons for Attending University. Stud. High. Educ. 2015, 42, 1373–1384. [Google Scholar] [CrossRef]
  50. Jawhar, M.; Bitar, Z.; Miller, J.R.; Jawhar, S. AI-Powered Customised University and Career Guidance. In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 157–161. [Google Scholar] [CrossRef]
  51. Li, X.; Sung, Y. Anthropomorphism Brings Us Closer: The Mediating Role of Psychological Distance in User–AI Assistant Interactions. Comput. Hum. Behav. 2021, 118, 106680. [Google Scholar] [CrossRef]
  52. Liu, Y.; Zhang, Z.; Wu, Y. What Drives Chinese University Students’ Long-Term Use of GenAI? Evidence from the Heuristic–Systematic Model. Educ. Inf. Technol. 2025, 30, 14967–15000. [Google Scholar] [CrossRef]
  53. Lu, W.; Lin, C. Meta-Analysis of Influencing Factors on the Use of Artificial Intelligence in Education. Asia-Pac. Educ. Res. 2025, 34, 617–627. [Google Scholar] [CrossRef]
  54. Guo, K.; Zhan, C.; Li, X. Factors Influencing Chinese College Students’ Intention to Use AIGC: A Study Based on the UTAUT Model. Int. J. Syst. Assur. Eng. Manag. 2025, 16, 1663–1677. [Google Scholar] [CrossRef]
  55. Salazar, L.; Rivera, L. A Systematic Review of Factors Influencing the Acceptance of Artificial Intelligence Devices. Adv. Artif. Intell. Mach. Learn. 2025, 5, 3954–3974. [Google Scholar] [CrossRef]
  56. Lazarus, R.S. Cognition and Emotion in the Appraisal Process. In Emotion and Adaptation; Oxford University Press: Oxford, UK, 1991; pp. 121–160. [Google Scholar]
  57. Lazarus, R.S. Cognition and Motivation in Emotion. Am. Psychol. 1991, 46, 352–367. [Google Scholar] [CrossRef]
  58. Wong, I.K.A.; Zhang, T.; Lin, Z.C.J.; Peng, Q. Hotel AI Service: Are Employees Still Needed? J. Hosp. Tour. Manag. 2023, 55, 416–424. [Google Scholar] [CrossRef]
  59. Chi, O.H.; Gursoy, D.; Chi, C.G. Tourists’ Attitudes toward the Use of Artificially Intelligent (AI) Devices in Tourism Service Delivery: Moderating Role of Service Value Seeking. J. Travel Res. 2022, 61, 170–185. [Google Scholar] [CrossRef]
  60. Bai, X.; Yang, L. Exploring the Determinants of AIGC Usage Intention Based on the Extended AIDUA Model: A Multi-Group Structural Equation Modeling Analysis. Front. Psychol. 2025, 16, 1589318. [Google Scholar] [CrossRef]
  61. Espinoza-Bravo, M.G.; Cabezas-Cabezas, R.; Perez-Cepeda, M.; Carvache-Franco, O.; Carvache-Franco, W. AIDUA (Artificial Intelligence Device Use and Acceptance) Model to Assess the Acceptance of AI in the Learning Practices of Higher Education Students. Cogent Educ. 2025, 12, 2560618. [Google Scholar] [CrossRef]
  62. Li, W. A Study on Factors Influencing Designers’ Behavioural Intention in Using AI-Generated Content for Assisted Design: Perceived Anxiety, Perceived Risk, and UTAUT. Int. J. Hum.-Comput. Interact. 2025, 41, 1064–1077. [Google Scholar] [CrossRef]
  63. Roy, P.; Ramaprasad, B.S.; Chakraborty, M.; Prabhu, N.; Rao, S. Customer Acceptance of Use of Artificial Intelligence in Hospitality Services: An Indian Hospitality Sector Perspective. Glob. Bus. Rev. 2024, 25, 832–851. [Google Scholar] [CrossRef]
  64. Alzaabi, M.; Shuhaiber, A. The Role of the AI Availability and Perceived Risks on AI Adoption and Organisational Values. In Proceedings of the 5th International Conference on Intelligent Human Systems Integration (IHSI 2022): Integrating People and Intelligent Systems, Venice, Italy, 22–24 February 2022. [Google Scholar] [CrossRef]
  65. Zhang, C.; Hu, M.; Wu, W.; Kamran, F.; Wang, X. Unpacking Perceived Risks and AI Trust Influences Pre-Service Teachers’ AI Acceptance: A Structural Equation Modelling-Based Multi-Group Analysis. Educ. Inf. Technol. 2025, 30, 2645–2672. [Google Scholar] [CrossRef]
  66. Kolar, N.; Milfelner, B.; Pisnik, A. Factors for Customers’ AI Use Readiness in Physical Retail Stores: The Interplay of Consumer Attitudes and Gender Differences. Information 2024, 15, 346. [Google Scholar] [CrossRef]
  67. Zheng, R.; Jiang, X.; Shen, L.; He, T.; Ji, M.; Li, X.; Yu, G. Investigating Clinicians’ Intentions and Influencing Factors for Using Intelligence-Enabled Diagnostic Clinical Decision Support System in Healthcare Systems: Cross-Sectional Survey. J. Med. Internet Res. 2025, 27, e62732. [Google Scholar] [CrossRef]
  68. Ajzen, I. The Theory of Planned Behaviour. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  69. Osei, B.A.; Rasoolimanesh, S.M. Does Value Matter? Predicting Hotel Employees’ Intentions towards the Adoption of Technologies 4.0. Technol. Anal. Strateg. Manag. 2025, 37, 873–889. [Google Scholar] [CrossRef]
  70. Testa, M.; Della Volpe, M.; D’Amato, A.; Apuzzo, A. Does Gender Impact the Relationship between Perceived Value and Intentions of Use of Natural Processing Models? Transform. Gov. People Process Policy 2024. Epub ahead of printing. [Google Scholar] [CrossRef]
  71. Chan, C.K.Y.; Zhou, W. An Expectancy Value Theory (EVT)-Based Instrument for Measuring Student Perceptions of Generative AI. Smart Learn. Environ. 2023, 10, 64. [Google Scholar] [CrossRef]
  72. Rather, R.A. Do Consumers Reveal Engagement Behaviours in Artificial Intelligence (AI)-Based Technologies? The Dynamics of Perceived Value and Self-Congruence. Int. J. Hosp. Manag. 2025, 126, 103989. [Google Scholar] [CrossRef]
  73. Sattu, R.; Das, S.; Jena, L.K. Should I Adopt AI during Talent Acquisition? Evidence from HR Professionals of Indian IT Organisations. J. Organ. Eff. 2024, 11, 1005–1022. [Google Scholar] [CrossRef]
  74. Akdim, K.; Casaló, L.V. Perceived Value of AI-Based Recommendations Service: The Case of Voice Assistants. Serv. Bus. 2023, 17, 81–112. [Google Scholar] [CrossRef]
  75. Eccles, J.S.; Wigfield, A. From Expectancy-Value Theory to Situated Expectancy-Value Theory: A Developmental, Social Cognitive, and Sociocultural Perspective on Motivation. Contemp. Educ. Psychol. 2020, 61, 101859. [Google Scholar] [CrossRef]
  76. Almukharreq, Z.; Sengupta, N. AI Adoption to Enhance Quality Education for Sustainable Education and Lifelong Learning. In Studies in Big Data; Springer Nature: New York, NY, USA, 2025; pp. 511–523. [Google Scholar] [CrossRef]
  77. Mat Yusoff, S.; Mohamad Marzaini, A.F.; Hao, L.; Zainuddin, Z.; Basal, M.H. Understanding the Role of AI in Malaysian Higher Education Curricula: An Analysis of Student Perceptions. Discov. Comput. 2025, 28, 62. [Google Scholar] [CrossRef]
  78. Wang, S.; Sun, Z.; Wang, H.; Yang, D.; Zhang, H. Enhancing Student Acceptance of Artificial Intelligence-Driven Hybrid Learning in Business Education: Interaction between Self-Efficacy, Playfulness, Emotional Engagement, and University Support. Int. J. Manag. Educ. 2025, 23, 101184. [Google Scholar] [CrossRef]
  79. Chen, D.; Liu, W.; Liu, X. What Drives College Students to Use AI for L2 Learning? Modelling the Roles of Self-Efficacy, Anxiety, and Attitude Based on an Extended Technology Acceptance Model. Acta Psychol. 2024, 249, 104442. [Google Scholar] [CrossRef]
  80. Lund, B.D.; Mannuru, N.R.; Agbaji, D. AI Anxiety and Fear: A Look at Perspectives of Information Science Students and Professionals towards Artificial Intelligence. J. Inf. Sci. 2024. Epub ahead of printing. [Google Scholar] [CrossRef]
  81. Shin, S.; Han, G. The Effects of Expectation Violation of AI Speakers on Expectation and Satisfaction: Through the Anthropomorphic Moderation Effect. J. Cogn. Sci. 2024, 25, 1. [Google Scholar] [CrossRef]
  82. Schepman, A.; Rodway, P. The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory Validation and Associations with Personality, Corporate Distrust, and General Trust. Int. J. Hum.-Comput. Interact. 2023, 39, 2724–2741. [Google Scholar] [CrossRef]
  83. Stracqualursi, L.; Agati, P. Twitter Users’ Perceptions of AI-Based E-Learning Technologies. Sci. Rep. 2024, 14, 5927. [Google Scholar] [CrossRef] [PubMed]
  84. Ebermann, C.; Selisky, M.; Weibelzahl, S. Explainable AI: The Effect of Contradictory Decisions and Explanations on Users’ Acceptance of AI Systems. Int. J. Hum.-Comput. Interact. 2023, 39, 1807–1826. [Google Scholar] [CrossRef]
  85. Cho, K.A.; Seo, Y.H. Dual Mediating Effects of Anxiety to Use and Acceptance Attitude of Artificial Intelligence Technology on the Relationship between Nursing Students’ Perception of and Intention to Use Them: A Descriptive Study. BMC Nurs. 2024, 23, 4–11. [Google Scholar] [CrossRef] [PubMed]
  86. Rodríguez, C.G. Anxiety in the Face of Artificial Intelligence: Between Pragmatic Fears and Uncanny Terrors. Commun. Soc. 2024, 45, 123–144. [Google Scholar] [CrossRef]
  87. Wen, F.; Li, Y.; Zhou, Y.; An, X.; Zou, Q. A Study on the Relationship between AI Anxiety and AI Behavioural Intention of Secondary School Students Learning English as a Foreign Language. J. Educ. Technol. Dev. Exch. 2024, 17, 130–154. [Google Scholar] [CrossRef]
  88. Song, Y.; Tan, H. When Generative AI Meets Abuse: What Are You Anxious About? J. Theor. Appl. Electron. Commer. Res. 2025, 20, 215. [Google Scholar] [CrossRef]
  89. Iyer, P.; Bright, L.F. Navigating a Paradigm Shift: Technology and User Acceptance of Big Data and Artificial Intelligence among Advertising and Marketing Practitioners. J. Bus. Res. 2024, 180, 114699. [Google Scholar] [CrossRef]
  90. Schiavo, G.; Businaro, S.; Zancanaro, M. Comprehension, Apprehension, and Acceptance: Understanding the Influence of Literacy and Anxiety on Acceptance of Artificial Intelligence. Technol. Soc. 2024, 77, 102537. [Google Scholar] [CrossRef]
  91. Zhang, C.; Schießl, J.; Plößl, L.; Hofmann, F.; Gläser-Zikuda, M. Acceptance of Artificial Intelligence among Pre-Service Teachers: A Multigroup Analysis. Int. J. Educ. Technol. High. Educ. 2023, 20, 49. [Google Scholar] [CrossRef]
  92. Alnawafleh, K.A. The Impact of AI on Nursing Workload and Stress Levels in Critical Care Settings. Pak. J. Life Soc. Sci. 2024, 22, 8529–8542. [Google Scholar] [CrossRef]
  93. Chen, C.H.; Lee, W.I. Exploring Nurses’ Behavioural Intention to Adopt AI Technology: The Perspectives of Social Influence, Perceived Job Stress and Human–Machine Trust. J. Adv. Nurs. 2025, 81, 3739–3752. [Google Scholar] [CrossRef] [PubMed]
  94. Jeong, J.; Kim, B.J.; Lee, J. Navigating AI Transitions: How Coaching Leadership Buffers against Job Stress and Protects Employee Physical Health. Front. Public Health 2024, 12, 1343932. [Google Scholar] [CrossRef] [PubMed]
  95. Loureiro, S.M.C.; Bilro, R.G.; Neto, D. Working with AI: Can Stress Bring Happiness? Serv. Bus. 2023, 17, 233–255. [Google Scholar] [CrossRef]
  96. Omrani, N.; Rivieccio, G.; Fiore, U.; Schiavone, F.; Agreda, S.G. To Trust or Not to Trust? An Assessment of Trust in AI-Based Systems: Concerns, Ethics and Contexts. Technol. Forecast. Soc. Change 2022, 181, 121763. [Google Scholar] [CrossRef]
  97. Lahusen, C.; Maggetti, M.; Slavkovik, M. Trust, Trustworthiness and AI Governance. Sci. Rep. 2024, 14, 20752. [Google Scholar] [CrossRef]
  98. Hameed, B.Z.; Naik, N.; Ibrahim, S.; Tatkar, N.S.; Shah, M.J.; Prasad, D.; Hegde, P.; Chlosta, P.; Rai, B.P.; Somani, B.K. Breaking Barriers: Unveiling Factors Influencing the Adoption of Artificial Intelligence by Healthcare Providers. Big Data Cogn. Comput. 2023, 7, 105. [Google Scholar] [CrossRef]
  99. Stevens, A.; Stetson, P.D. Theory of Trust and Acceptance of Artificial Intelligence Technology (TrAAIT): An Instrument to Assess Clinician Trust and Acceptance of Artificial Intelligence. J. Biomed. Inform. 2023, 148, 104550. [Google Scholar] [CrossRef]
  100. Wang, F.; Li, N.; Cheung, A.C.K.; Wong, G.K.W. In GenAI We Trust: An Investigation of University Students’ Reliance on and Resistance to Generative AI in Language Learning. Int. J. Educ. Technol. High. Educ. 2025, 22, 59. [Google Scholar] [CrossRef]
  101. Đerić, E.; Frank, D.; Milković, M. Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers. Information 2025, 16, 622. [Google Scholar] [CrossRef]
  102. Zhou, Q.; Chen, K.; Cheng, S. Bringing Employee Learning to AI Stress Research: A Moderated Mediation Model. Technol. Forecast. Soc. Change 2024, 209, 123773. [Google Scholar] [CrossRef]
  103. Wang, W.; Chen, L.; Xiong, M.; Wang, Y. Accelerating AI Adoption with Responsible AI Signals and Employee Engagement Mechanisms in Healthcare. Inf. Syst. Front. 2021, 25, 2239–2256. [Google Scholar] [CrossRef]
  104. Chowdhury, S.; Budhwar, P.; Dey, P.K.; Joel-Edgar, S.; Abadie, A. AI-Employee Collaboration and Business Performance: Integrating Knowledge-Based View, Socio-Technical Systems and Organisational Socialisation Framework. J. Bus. Res. 2022, 144, 31–49. [Google Scholar] [CrossRef]
  105. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson Education: Osaka, Japan, 2014. [Google Scholar]
  106. Schumacker, R.E.; Lomax, R.G. A Beginner’s Guide to Structural Equation Modelling, 3rd ed.; Routledge Taylor & Francis Group: Oxfordshire, UK, 2004. [Google Scholar]
  107. Fornell, C.; Larcker, D.F. Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics. J. Mark. Res. 1981, 18, 382–388. [Google Scholar] [CrossRef]
  108. Fornell, C.; Yi, Y. Assumptions of the Two-Step Approach to Latent Variable Modelling. Sociol. Methods Res. 1992, 20, 291–320. [Google Scholar] [CrossRef]
  109. Bentler, P.M.; Bonett, D.G. Significance Tests and Goodness of Fit in the Analysis of Covariance Structures. Psychol. Bull. 1980, 88, 588–606. [Google Scholar] [CrossRef]
  110. Bollen, K.A. Structural Equations with Latent Variables; John Wiley & Sons: Hoboken, NJ, USA, 1989. [Google Scholar] [CrossRef]
  111. Brown, T.A. Confirmatory Factor Analysis for Applied Research; The Guilford Press: New York, NY, USA, 2006. [Google Scholar]
  112. Kline, R.B. Principles and Practice of Structural Equation Modelling, 3rd ed.; The Guilford Press: New York, NY, USA, 2011. [Google Scholar]
  113. Hoyle, R.H. (Ed.) Handbook of Structural Equation Modelling; The Guilford Press: New York, NY, USA, 2012. [Google Scholar]
  114. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modelling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  115. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Press: New York, NY, USA, 2016. [Google Scholar]
  116. Noor, N.; Tong, A.; Zainol, Z. ChatGPT and Higher Education Student Well-Being: Role of Subjective Norm and Anthropomorphism with TAM. J. Inf. Commun. Ethics Soc. 2025. Epub ahead of printing. [Google Scholar] [CrossRef]
  117. Chin, C.-H.; Poh, W.; Cham, T.-H.; Thong, J.Z.; Ling, J.P.-W. Exploring the Usage Intention of AI-Powered Devices in Smart Homes among Millennials and Zillennials: The Moderating Role of Trust. Young Consum. 2023, 25, 1–27. [Google Scholar] [CrossRef]
  118. Gong, T. The Dark Side of Robot Anthropomorphism: Cognitive Load, Stress, and Dysfunctional Customer Behaviour. Serv. Ind. J. 2025, 1–29. [Google Scholar] [CrossRef]
  119. Koyuturk, C.; Theophilou, E.; Patania, S.; Donabauer, G.; Martinenghi, A.; Antico, C.; Telari, A.; Testa, A.; Buršić, S.; Garzotto, F.; et al. Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; pp. 364–377. [Google Scholar] [CrossRef]
  120. Alshamy, A.; Al-Harthi, A.S.A.; Abdullah, S. Perceptions of Generative AI Tools in Higher Education: Insights from Students and Academics at Sultan Qaboos University. Educ. Sci. 2025, 15, 501. [Google Scholar] [CrossRef]
  121. Tang, Z.; Liao, J. Unlocking Emotional Resilience: Exploring the Impact of AI-Enhanced Support Systems on EFL Teachers’ Burnout and EFL Students’ Well-Being in Modern Classrooms. Acta Psychol. 2025, 260, 105672. [Google Scholar] [CrossRef]
  122. Peng, Y.; Wang, X.; Zhang, J.; Sivaraman, S.K.; Song, P. Exploring the Impact of Artificial Intelligence on University Students’ Perception of Slow Employment: A Psychological and Behavioural Analysis. Int. J. Interact. Mob. Technol. 2025, 19, 92–105. [Google Scholar] [CrossRef]
  123. Zhou, Q.; Yang, L.; Tang, Y.; Yang, J.; Zhou, W.; Guan, W.; Yan, L.; Liu, Y. The Mediation of Trust on Artificial Intelligence Anxiety and Continuous Adoption of Artificial Intelligence Technology among Primacy Nurses: A Cross-Sectional Study. BMC Nurs. 2025, 24, 724. [Google Scholar] [CrossRef]
  124. Wang, L.; Li, W. The Impact of AI Usage on University Students’ Willingness for Autonomous Learning. Behav. Sci. 2024, 14, 956. [Google Scholar] [CrossRef]
  125. Cui, Y. What Influences College Students Using AI for Academic Writing? A Quantitative Analysis Based on HISAM and TRI Theory. Comput. Educ. Artif. Intell. 2025, 8, 100391. [Google Scholar] [CrossRef]
  126. Lang, P.J. Fear Reduction and Fear Behaviour: Problems in Treating a Construct. In Research in Psychotherapy; American Psychological Association: Washington, DC, USA, 1968; pp. 90–102. [Google Scholar] [CrossRef]
  127. Martínez-Monteagudo, M.C.; Inglés, C.J.; Cano-Vindel, A.; García-Fernández, J.M. Current Status of Research on Lang’s Three-Dimensional Theory of Anxiety. Anxiety Stress 2012, 18, 201–219. [Google Scholar]
  128. Bahçekapili, E.; Boztaş, G.D. Generative AI and digital literacy: Unravelling user intentions through PLS-SEM and machine learning. J. Inf. Sci. 2025. Epub ahead of printing. [Google Scholar] [CrossRef]
  129. Renuga, K.; Izhar, R.; Bhatti, S.N. Examining the Role of AI Experience in Shaping Consumer Attitudes and Adoption Behavior in Social Media Marketing. In Proceedings of the 2025 International Conference on Innovation in Artificial Intelligence and Internet of Things (AIIT), Jeddah, Saudi Arabia, 7–8 May 2025; pp. 1–7. [Google Scholar] [CrossRef]
  130. Taherdoost, H.; Madanchian, M.; Castanho, G. Balancing Innovation, Responsibility, and Ethical Consideration in AI Adoption. Procedia Comput. Sci. 2025, 258, 3284–3293. [Google Scholar] [CrossRef]
Figure 1. Proposed model (MIDA). Source: The authors.
Figure 2. Results of the structural equation model (MIDA). Source: The authors.
Table 1. Demographic Profile of Student Participants (n = 517).
Demographic | Category | n | %
Gender | Female | 296 | 57.3
 | Male | 221 | 42.7
Age | Mean (SD) | 23.9 (6.4) |
 | Minimum | 18 |
 | Maximum | 55 |
Semester | 1 | 1 | 0.19
 | 2 | 17 | 3.29
 | 3 | 4 | 0.77
 | 4 | 18 | 3.48
 | 5 | 10 | 1.93
 | 6 | 104 | 20.12
 | 7 | 42 | 8.12
 | 8 | 169 | 32.69
 | 9 | 69 | 13.35
 | 10 | 74 | 14.31
 | 11 | 2 | 0.39
 | 12 | 7 | 1.35
Career | Industrial Engineering | 66 | 12.76
 | Systems and Computer Engineering | 62 | 11.99
 | Business Administration | 55 | 10.64
 | Law | 42 | 8.12
 | Psychology | 42 | 8.12
 | Civil Engineering | 26 | 5.03
 | Business Management | 22 | 4.25
 | International Business | 21 | 4.06
 | Economics | 19 | 3.68
 | Information Systems Engineering | 15 | 2.90
 | Marketing | 14 | 2.71
 | Software Engineering | 13 | 2.51
 | Accounting | 12 | 2.32
 | Environmental Engineering | 10 | 1.93
 | Other careers a | 98 | 18.98
a The category “Other careers” includes academic programmes that, individually, have a lower participation rate within the sample analysed. This group includes, for reference purposes, careers such as Architecture, Graphic Design, Mechatronics Engineering, Mechanical Engineering, Education, Political Science, Human Medicine, Biology, Nutrition, Film, Theatre, Chemical Engineering, Mining Engineering and Agro-industrial Engineering, among others, which were considered jointly for the purpose of facilitating the description and understanding of the information.
Table 2. Descriptive Statistics.
Construct/Indicator | Mean (M) | Standard Deviation (SD)
Anthropomorphism (AN)
AN1 | 2.88 | 1.096
AN2 | 2.67 | 1.113
AN3 | 2.35 | 1.091
AN4 | 2.73 | 1.124
Perceived Risk (PR)
PR1 * | 3.42 | 0.967
PR2 | 3.78 | 0.995
PR3 | 3.90 | 0.820
PR4 | 4.11 | 0.893
Perceived Value (PV)
PV1 * | 3.80 | 0.858
PV2 | 3.91 | 0.768
PV3 | 4.17 | 0.769
PV4 | 4.28 | 0.779
Performance Expectancy (PEX)
PEX1 | 3.71 | 0.936
PEX2 | 3.72 | 0.901
PEX3 | 3.87 | 0.789
PEX4 * | 3.18 | 1.036
Perceived Effort Expectancy (PEE)
PEE1 | 3.19 | 1.009
PEE2 | 2.64 | 1.031
PEE3 | 2.79 | 1.105
PEE4 | 2.67 | 1.095
Anxiety (ANX)
ANX1 | 2.99 | 1.113
ANX2 * | 2.96 | 1.122
ANX3 | 2.54 | 1.083
ANX4 | 2.35 | 1.049
Stress (ST)
ST1 | 2.33 | 1.007
ST2 | 2.21 | 0.998
ST3 | 2.29 | 1.032
ST4 | 2.55 | 1.116
Trust (TR)
TR1 * | 2.82 | 0.964
TR2 | 3.38 | 0.833
TR3 | 3.25 | 0.910
TR4 * | 2.94 | 0.998
TR5 | 3.49 | 0.899
Willingness to Accept the Use of AI Devices (WA)
WA1 | 4.14 | 0.738
WA2 | 3.95 | 0.868
WA3 | 4.26 | 0.783
WA4 | 4.10 | 0.831
Objection to the Use of AI Devices (OU)
OU1 | 3.56 | 0.951
OU2 | 3.63 | 1.058
OU3 * | 3.52 | 1.010
OU4 | 4.00 | 0.977
Note. * Items were removed during the purification process of the measurement model.
Table 3. Model Validity Measures.
Construct/Item | Standardised Loading | CR | AVE | Cα | Discriminant Validity
Anthropomorphism (AN) |  | 0.821 | 0.536 | 0.817 | 0.732
AN1 | 0.655 ***
AN2 | 0.840 ***
AN3 | 0.739 ***
AN4 | 0.681 ***
Perceived Risk (PR) |  | 0.754 | 0.506 | 0.739 | 0.711
PR2 | 0.722 ***
PR3 | 0.684 ***
PR4 | 0.727 ***
Perceived Value (PV) |  | 0.767 | 0.524 | 0.777 | 0.724
PV2 | 0.764 ***
PV3 | 0.745 ***
PV4 | 0.657 ***
Performance Expectancy (PEX) |  | 0.811 | 0.588 | 0.798 | 0.767
PEX1 | 0.764 ***
PEX2 | 0.771 ***
PEX3 | 0.766 ***
Perceived Effort Expectancy (PEE) |  | 0.805 | 0.513 | 0.801 | 0.716
PEE1 | 0.576 ***
PEE2 | 0.743 ***
PEE3 | 0.691 ***
PEE4 | 0.831 ***
Anxiety (ANX) |  | 0.747 | 0.503 | 0.785 | 0.709
ANX1 | 0.545 ***
ANX3 | 0.748 ***
ANX4 | 0.807 ***
Stress (ST) |  | 0.852 | 0.591 | 0.846 | 0.769
ST1 | 0.776 ***
ST2 | 0.846 ***
ST3 | 0.761 ***
ST4 | 0.684 ***
Trust (TR) |  | 0.693 | 0.430 | 0.762 | 0.655
TR2 | 0.612 ***
TR3 | 0.670 ***
TR5 | 0.682 ***
Willingness to Accept the Use of AI Devices (WA) |  | 0.872 | 0.632 | 0.871 | 0.795
WA1 | 0.825 ***
WA2 | 0.767 ***
WA3 | 0.740 ***
WA4 | 0.842 ***
Objection to the Use of AI Devices (OU) |  | 0.739 | 0.485 | 0.758 | 0.697
OU1 | 0.730 ***
OU2 | 0.687 ***
OU4 | 0.672 ***
Notes: Cα ≥ 0.7; CR ≥ 0.7; AVE ≥ 0.5; *** Significant at p < 0.001.
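The CR, AVE, and discriminant-validity figures in Table 3 follow the usual formulas based on standardised factor loadings [107]. As an illustrative sketch only (the study's estimates were produced in AMOS), the following Python fragment reproduces the calculation for the anthropomorphism construct from the loadings listed above; the helper function cr_ave is not part of the original analysis.

```python
# Minimal sketch: composite reliability (CR), average variance extracted (AVE),
# and the discriminant-validity value (square root of AVE) from standardised
# loadings. The loadings below are the AN items reported in Table 3; the
# function itself is a generic illustration, not the authors' AMOS output.
import math

def cr_ave(loadings):
    """Return (CR, AVE) for a set of standardised factor loadings."""
    sum_l = sum(loadings)                              # sum of loadings
    sum_err = sum(1 - l**2 for l in loadings)          # sum of error variances (1 - lambda^2)
    cr = sum_l**2 / (sum_l**2 + sum_err)               # composite reliability
    ave = sum(l**2 for l in loadings) / len(loadings)  # average variance extracted
    return cr, ave

an_loadings = [0.655, 0.840, 0.739, 0.681]             # AN1-AN4 (Table 3)
cr, ave = cr_ave(an_loadings)
print(f"CR        = {cr:.3f}")                         # ~0.821, as in Table 3
print(f"AVE       = {ave:.3f}")                        # ~0.536, as in Table 3
print(f"sqrt(AVE) = {math.sqrt(ave):.3f}")             # ~0.732, discriminant-validity value
```

The same routine can be applied, construct by construct, to the remaining loadings in Table 3.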
Table 4. Heterotrait–Monotrait Ratio (HTMT) Analysis.
Construct | AN | PR | PV | PEX | PEE | ANX | ST | TR | WA | OU
AN
PR | 0.129
PV | 0.019 | 0.271
PEX | 0.186 | 0.111 | 0.772
PEE | 0.323 | 0.183 | 0.147 | 0.193
ANX | 0.257 | 0.178 | 0.368 | 0.160 | 0.675
ST | 0.286 | 0.079 | 0.411 | 0.173 | 0.664 | 0.851
TR | 0.216 | 0.010 | 0.547 | 0.537 | 0.103 | 0.231 | 0.115
WA | 0.023 | 0.174 | 0.803 | 0.667 | 0.070 | 0.415 | 0.411 | 0.555
OU | 0.059 | 0.394 | 0.127 | 0.041 | 0.025 | 0.159 | 0.085 | 0.012 | 0.069
Note: Diagonal elements are the square root of the AVE.
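For reference, the HTMT ratio reported in Table 4 is the mean correlation between items of two different constructs divided by the geometric mean of the average within-construct item correlations [114]. The sketch below illustrates this computation in Python; the item groupings and the correlation matrix are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the heterotrait-monotrait (HTMT) ratio for two constructs,
# computed from an item-level correlation matrix. The synthetic data and item
# groupings are illustrative only.
import numpy as np

def htmt(corr, idx_a, idx_b):
    """HTMT ratio for constructs A and B given an item correlation matrix."""
    # Mean heterotrait-heteromethod correlation: items of A with items of B.
    hetero = corr[np.ix_(idx_a, idx_b)].mean()

    # Mean monotrait-heteromethod correlation: off-diagonal correlations
    # among the items of a single construct.
    def mean_within(idx):
        block = corr[np.ix_(idx, idx)]
        return block[~np.eye(len(idx), dtype=bool)].mean()

    return hetero / np.sqrt(mean_within(idx_a) * mean_within(idx_b))

# Hypothetical 7-item example: items 0-3 form construct A, items 4-6 form construct B.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 7))
x[:, :4] += rng.normal(size=(200, 1))      # shared variance within construct A
x[:, 4:] += rng.normal(size=(200, 1))      # shared variance within construct B
corr = np.corrcoef(x, rowvar=False)
print(round(htmt(corr, [0, 1, 2, 3], [4, 5, 6]), 3))
```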
Table 5. Fit Indices of the Measurement Model.
Indicator | Estimate | Recommended Threshold | Interpretation
CMIN/DF | 2.188 | <3.00 | Adequate model fit
RMSEA | 0.048 | <0.06 | Very good model fit
PCLOSE | 0.804 | >0.05 | Adequate model fit
CFI | 0.925 | ≥0.90 | Adequate model fit
TLI | 0.913 | ≥0.90 | Adequate model fit
IFI | 0.926 | ≥0.90 | Adequate model fit
Table 6. Fit Indices of the Model.
Indicator | Estimate | Recommended Threshold | Interpretation
CMIN/DF | 2.794 | <3.0 | Good fit
RMSEA | 0.059 | <0.06 | Good fit
CFI | 0.881 | ≥0.85 | Acceptable fit
TLI | 0.868 | ≥0.85 | Acceptable fit
IFI | 0.882 | ≥0.85 | Acceptable fit
Note: In the CFI, TLI and IFI indices, a threshold of ≥0.85 was adopted as an acceptable fit, since in complex models with large samples, higher values are not always achieved, even when the model has an adequate fit.
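The indices in Tables 5 and 6 are standard functions of the model and baseline (independence) chi-square statistics [109,112]. The sketch below shows one common parameterisation of CMIN/DF, RMSEA, CFI, and TLI; the chi-square values and degrees of freedom used here are hypothetical placeholders, since only the resulting indices are reported in the article.

```python
# Minimal sketch of how the reported fit indices relate to the model and
# baseline chi-square statistics. The chi-square values and degrees of
# freedom below are hypothetical, not the study's actual AMOS output; only
# the formulas are standard.
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    cmin_df = chi2_m / df_m
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    return {"CMIN/DF": cmin_df, "RMSEA": rmsea, "CFI": cfi, "TLI": tli}

# Hypothetical chi-square values for a sample of n = 517 respondents.
example = fit_indices(chi2_m=1200.0, df_m=550, chi2_b=9500.0, df_b=630, n=517)
for name, value in example.items():
    print(f"{name}: {value:.3f}")
```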
Table 7. Hypothesis-Testing Results.
Hypotheses | Proposed Effect | Estimate | p-Value | Results
H1.1: AN → PEX | + | 0.169 | *** | Accepted
H1.2: AN → PEE | − | 0.351 | *** | Rejected
H2.1: PR → PEX | − | −0.090 | 0.100 | Rejected
H2.2: PR → PEE | + | 0.138 | 0.027 | Accepted
H3.1: PV → PEX | + | 0.862 | *** | Accepted
H3.2: PV → PEE | − | −0.100 | 0.109 | Rejected
H4.1: PEX → ANX | − | −0.303 | *** | Accepted
H4.2: PEX → ST | − | −0.307 | *** | Accepted
H4.3: PEX → TR | + | 0.812 | *** | Accepted
H5.1: PEE → ANX | + | 0.778 | *** | Accepted
H5.2: PEE → ST | + | 0.766 | *** | Accepted
H5.3: PEE → TR | − | 0.061 | 0.382 | Rejected
H6.1: ANX → WA | − | −0.088 | 0.326 | Rejected
H6.2: ANX → OU | + | 0.202 | 0.158 | Rejected
H7.1: ST → WA | − | −0.228 | 0.009 | Accepted
H7.2: ST → OU | + | −0.089 | 0.494 | Rejected
H8.1: TR → WA | + | 0.714 | *** | Accepted
H8.2: TR → OU | − | 0.038 | 0.687 | Rejected
Note. Standardised regression weights were obtained from structural equation modelling (SEM) using AMOS. *** p < 0.001.
Table 8. Indirect effects among constructs.
Path | Estimated Coefficient | p-Value | Result
AN → PEX → ANX | −0.303 | <0.001 | s
AN → PEX → ST | −0.307 | <0.001 | s
AN → PEX → TR | 0.812 | <0.001 | s
PV → PEX → ANX | −0.283 | <0.001 | s
PV → PEX → ST | −0.341 | <0.001 | s
PV → PEX → TR | 0.607 | <0.001 | s
AN → PEE → ANX | 0.778 | <0.001 | s
AN → PEE → ST | 0.766 | <0.001 | s
PR → PEE → ANX | 0.197 | <0.001 | s
PR → PEE → ST | 0.230 | <0.001 | s
PEX → TR → WA | 0.602 | <0.001 | s
PEX → ST → WA | −0.182 | <0.001 | s
PEE → ANX → WA | −0.084 | 0.007 | s
PEE → ST → WA | −0.182 | <0.001 | s
PEX → TR → RU | 0.038 | 0.687 | ns
PEX → ST → RU | −0.089 | 0.494 | ns
Notes: Estimates correspond to bootstrap indirect effects (bias-corrected). Significance is determined by two-tailed bootstrap p-values. s = significant indirect effect; ns = non-significant. AN = Anthropomorphism; PR = Perceived Risk; PV = Perceived Value; PEX = Performance Expectancy; PEE = Perceived Effort Expectancy; ANX = Anxiety; ST = Stress; TR = Trust; WA = Willingness to Accept the Use of AI Devices; RU = Objection to the Use of AI Devices.
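The indirect effects above are products of the corresponding path coefficients, evaluated with bias-corrected bootstrapping in AMOS. The sketch below illustrates the general bootstrap logic for a single indirect effect (a × b) on synthetic data, using a simple percentile interval; it is not the bias-corrected procedure or the full CB-SEM model used in the study, and all variable names are illustrative.

```python
# Minimal sketch of bootstrapping a mediated (indirect) effect a*b, in the
# spirit of the bootstrap estimates reported in Table 8. The synthetic data
# and the simple percentile interval are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 517
x = rng.normal(size=n)                 # e.g., an exogenous predictor such as PV
m = 0.6 * x + rng.normal(size=n)       # e.g., a mediator such as PEX
y = 0.5 * m + rng.normal(size=n)       # e.g., an outcome such as TR

def slope(u, v):
    """OLS slope of v on u (single predictor, mean-centred)."""
    u_c, v_c = u - u.mean(), v - v.mean()
    return (u_c @ v_c) / (u_c @ u_c)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)        # resample cases with replacement
    a = slope(x[idx], m[idx])          # path X -> M
    b = slope(m[idx], y[idx])          # path M -> Y (sketch omits the control for X)
    boot.append(a * b)

boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {boot.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```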
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
