1. Introduction
AI-based devices are improving the academic experience of university students, as they can personalise learning, provide immediate feedback, automate processes, and even offer emotional support [
1,
2,
3]. Despite this, not all students adopt this technology in the same way, as various factors shape their willingness to use it. Even students who are willing to use AI devices have concerns about data privacy and about the devices’ ability to understand their emotions [
4]. In addition, many students perceive that the use of AI devices causes dependence, reduces human interaction, and affects critical thinking and creativity [
5,
6].
Willingness to use AI devices is a complex phenomenon that depends not only on their technical or functional characteristics, but also on the characteristics of each student, who are in the process of developing their autonomy in learning, building their professional identity and constantly interacting with new technologies. Therefore, it is not enough to simply know students’ opinions about AI devices; it is also necessary to understand how contextual factors influence their perceptions, what their expectations are regarding their use, how they perceive them emotionally, and how they use them in their academic activities [
7].
Many theoretical models have sought to explain the process of technology adoption. For example, the Technology Acceptance Model (TAM) proposed by Davis [
8] has been widely used due to its simplicity, but it is inadequate for explaining the acceptance of AI devices in educational contexts as it does not include social, emotional, and cultural factors [
9,
10,
11]. For its part, the UTAUT 2 model by Venkatesh et al. [
12] integrates elements from previous theories to explain technology adoption but has limitations in educational settings, as it does not explicitly consider affective variables such as anxiety, stress and trust. These variables have gained relevance in recent research on intelligent technologies applied to education [
13,
14]. On the other hand, although the AIDUA model by Gursoy et al. [
15] was developed to evaluate the adoption of AI devices, it does not explicitly incorporate key components for the university environment such as perceived value, perceived risk, or specific emotions such as anxiety, stress, and trust. This theoretical and practical gap is the main motivation for the present study.
The main objective of this study is to propose and validate a model of acceptance of the use of artificial intelligence devices (MIDA) in university students. To this end, the model incorporates contextual variables (perceived value, anthropomorphism, and perceived risk), cognitive variables (performance expectancy, perceived effort expectancy), and emotional variables (anxiety, stress, and trust) [
16,
17]. These variables are included because university students want to use technologies that meet their practical and emotional needs without adding extra cognitive load. For example, anthropomorphism improves the user experience by humanising interaction, which is key in digital learning processes [
18]. Likewise, perceived risk and value are relevant to university students, since the more useful and less risky the use of an AI device is perceived to be, the higher the acceptance rate will be [
19]. On the other hand, performance and effort expectancies are also very important: if students believe that an AI device improves their academic performance without requiring a great deal of effort, their willingness to use it increases considerably [
20,
21]. Finally, anxiety, stress, and trust are relevant emotional variables, as in contexts where academic pressure is high, any technology that generates uncertainty or a feeling of lack of control may be rejected. Therefore, understanding how these emotions relate to technological acceptance will allow for the design of strategies tailored to university students [
22,
23].
The main contribution of this research is to articulate cognitive, contextual, and emotional variables in order to offer a more comprehensive and in-depth understanding of the processes that shape the acceptance of the use of artificial intelligence devices in higher education.
3. Hypothesis Development
The acceptance of, or objection to, the use of AI devices cannot be explained solely from a technical or functional perspective, as it is influenced by multiple factors such as users’ prior expectations, the emotions they experience when faced with these technologies, and the context in which they are used. Therefore, the interaction of cognitive, contextual, and emotional dimensions is key to explaining the human experience with artificial intelligence [
55]. In this research, we propose the MIDA model (see
Figure 1), which coherently articulates these dimensions.
The MIDA model is based on the premise that university students’ acceptance of the use of AI devices is not an immediate or automatic response, but rather the result of a progressive process of cognitive and emotional evaluation. This is supported by cognitive evaluation theory, which posits that behavioural responses to a technological stimulus arise from a sequence of cognitive evaluations that give rise to emotional responses, which determine the final decision [
56,
57]. Based on this approach, the process of AI acceptance can be understood as a staged mechanism in which students initially evaluate the relevance of the stimulus, then analyse its practical implications, and finally generate emotional responses that act as immediate antecedents to behaviour. This sequential approach has previously been successfully applied in the study of the acceptance of AI devices in the context of services, showing that users go through multiple stages of evaluation before deciding to accept or reject their use [
15].
Therefore, the MIDA model proposes that, in a first stage of contextual evaluation, university students assess the relevance and alignment of the use of AI devices based on factors such as perceived value, anthropomorphism, and perceived risk. This initial evaluation allows them to determine whether the use of this technology is meaningful to them and, therefore, whether it warrants further evaluation. As cognitive evaluation theory suggests, when a stimulus is perceived as irrelevant, no additional emotional or behavioural processes are activated [
57].
Once this first stage is complete, the student moves on to an evaluation of expectations, in which they analyse in more detail the benefits they expect to obtain from using these devices, as well as the effort they anticipate needing to achieve those benefits. These cognitive assessments allow the effort required and the expected benefits associated with the use of AI devices to be weighed up and constitute the main antecedent of the emotional responses generated during the process. This phase reflects a more conscious practical assessment, similar to that identified in previous studies on technology acceptance based on performance and effort expectancy [
12,
15].
Based on this cognitive evaluation, key emotions linked to the use of AI devices, such as anxiety, stress, and trust, are configured. According to cognitive evaluation theory, these emotions are not spontaneous reactions, but the direct result of previous evaluations made by the individual [
56]. In this sense, emotions operate as the mediating mechanism that guides behaviour, directly influencing the student’s final willingness to accept or reject the use of AI devices.
Thus, the MIDA model explains how and why contextual factors influence the acceptance of AI devices indirectly, through a process of progressive evaluation that integrates cognitive assessments and emotional responses, rather than exerting direct and immediate effects on the final decision. This sequential approach captures the complex and progressive nature of the process of accepting AI devices in educational contexts, providing a deeper understanding of the central psychological mechanisms involved in the adoption of intelligent technologies.
3.1. Effect of Anthropomorphism on Performance Expectancy and Perceived Effort
Anthropomorphism increases the perception of social presence, which contributes to higher performance expectancy and fosters positive emotions towards AI devices [
18]. In this regard, Wong et al. [
58] indicated that anthropomorphism was positively associated with performance expectancy but negatively associated with perceived effort. Chi et al. [
59] also found that anthropomorphism had a positive and significant influence on both performance expectancy and perceived effort expectancy. For their part, Gursoy et al. [
15] concluded that anthropomorphism had a positive and significant impact on performance expectancy and reduced perceived effort. Similarly, Bai and Yang [
60] found that anthropomorphism positively influenced performance expectancy and negatively influenced effort expectancy among professionals using generative AI in China. However, Espinoza-Bravo et al. [
61] found that anthropomorphism had a dual effect, as it improved performance expectancy while increasing the perception of effort among university students in Ecuador.
Based on the above, the following hypotheses are proposed:
H1.1. Anthropomorphism positively influences performance expectancy.
H1.2. Anthropomorphism negatively influences perceived effort expectancy.
3.2. Effect of Perceived Risk on Performance Expectancy and Perceived Effort
Perceived risk plays an important role in the adoption and acceptance of AI in various fields [
19]. In this regard, Li [
62] concluded that perceived risk has a negative effect on students’ willingness to use AI technologies in design tools. Similarly, Roy et al. [
63] found that perceived risk negatively impacted physicians’ willingness to adopt AI for diabetes diagnosis. In turn, Alzaabi and Shuhaiber [
64] found that perceived risk negatively impacted AI adoption, even when people are familiar with these technologies. Furthermore, Zhang et al. [
65] found that perceived risks significantly reduced teachers’ trust in AI in rural contexts. Kolar et al. [
66] also found that risks negatively affected consumers’ willingness to use AI. Finally, Zheng et al. [
67] found that perceived risk negatively influenced Chinese doctors’ performance expectancy regarding the use of an intelligent clinical decision support system.
In addition, Ajzen’s Theory of Planned Behaviour suggests that beliefs about possible negative outcomes influence the perception of difficulty in performing a behaviour [
68]. In this sense, when university students perceive the use of AI devices as uncertain or risky, they will tend to anticipate that their use will require greater cognitive or learning effort.
Based on the above, the following hypotheses are proposed:
H2.1. Perceived risk negatively influences performance expectancy.
H2.2. Perceived risk positively influences perceived effort expectancy.
3.3. Effect of Perceived Value on Performance Expectancy and Perceived Effort
In the technological context, Osei and Rasoolimanesh [
69] found that perceived value significantly influenced the intention to use 4.0 technologies. In this regard, Testa et al. [
70] found a positive and significant association between perceived value and the intention of Italian university students to adopt natural language processing (NLP) models. Similarly, Chan and Zhou [
71] found a strong positive correlation between perceived value and the intention of Hong Kong university students to use generative AI in higher education. On the other hand, Rather [
72] found that perceived value directly influenced consumers’ willingness to adopt AI in the Indian hotel industry. Similarly, Sattu et al. [
73] found that perceived value played a decisive role in HR professionals’ decision to adopt AI. Furthermore, Akdim and Casaló [
74] found that perceived value had a positive relationship with the willingness of US residents to use AI voice assistants.
According to Eccles and Wigfield’s Situated Value-Expectancy theory [
75], both the value a person attributes to a task and their expectations of success are constructed in relation to the context. In this sense, when students perceive artificial intelligence as a useful tool for their learning, they tend to engage more actively with it, which in turn strengthens their expectation of achieving good academic results through its use. Furthermore, if they consider that AI facilitates the learning process, the perceived difficulty decreases, thus reducing the expectation of the effort required to use it [
76,
77].
Based on the above, the following hypotheses are proposed:
H3.1. Perceived value positively influences performance expectancy.
H3.2. Perceived value negatively influences perceived effort expectancy.
3.4. Effect of Performance Expectancy and Perceived Effort on Emotions
Emotions play a very important role in users’ willingness to use AI devices. Several studies have shown that both performance expectancy and perceived effort expectancy influence users’ emotions, which in turn affect their decision to accept or reject the use of AI devices. For example, in the hotel sector in India, these emotions were decisive in the acceptance of or objection to these technologies [
63].
In this context, performance expectancy and perceived effort have a significant impact on emotions such as anxiety, stress, and trust. It has been found that when users perceive AI to be efficient and easy to use, they experience lower levels of anxiety and stress. This is due to the inverse relationship between these variables, as higher performance expectancy and lower effort expectancy decrease negative emotions, while lower performance expectancy and higher effort expectancy increase negative emotions [
78].
One of the main factors contributing to anxiety is the expectation of effort. When users perceive that AI requires excessive effort or is too complex, anxiety increases, which in turn can increase levels of stress and mistrust towards this technology [
79,
80]. On the other hand, when AI exceeds users’ expectations, it generates positive emotions that reinforce satisfaction and encourage continued use [
81]. In this sense, trust in AI is closely related to a favourable perception of its effectiveness and functionality [
82,
83]. However, if AI devices fail to meet user expectations, a cognitive mismatch occurs that generates negative emotions and reduces acceptance [
84]. In particular, when users perceive that AI is less capable than expected or that its performance contradicts their expectations, they may experience frustration and mistrust. This highlights the importance of properly managing expectations to avoid disappointment and improve the user experience [
81].
Based on the above, the following hypotheses are proposed:
H4.1. Performance expectancy has a negative influence on anxiety.
H4.2. Performance expectancy negatively influences stress.
H4.3. Performance expectancy positively influences trust.
H5.1. Perceived effort expectancy positively influences anxiety.
H5.2. Perceived effort expectancy positively influences stress.
H5.3. Perceived effort expectancy negatively influences trust.
3.5. Effect of Anxiety on Acceptance or Objection to the Use of AI Devices
Several studies have highlighted that anxiety towards AI significantly influences its acceptance [
85,
86,
87]. In this regard, Song & Tan [
88] found that anxiety negatively influenced the acceptance of generative AI. Similarly, Iyer & Bright [
89], Cho & Seo [
85], Schiavo et al. [
90], and Chen et al. [
79] concluded that anxiety was negatively related to the acceptance of big data and AI. Likewise, Kaya et al. [
17] discovered that anxiety about AI configuration and learning significantly influenced negative attitudes towards its use. However, Zhang et al. [
91] found no negative relationship between anxiety towards AI and perceived ease of use.
Based on the above, the following hypotheses are proposed:
H6.1. Anxiety negatively influences willingness to accept the use of AI devices.
H6.2. Anxiety positively influences objection to the use of AI devices.
3.6. Effect of Stress on Acceptance or Objection to the Use of AI Devices
Artificial intelligence can reduce or increase stress, as in some cases it facilitates the execution of certain tasks and in others it creates additional difficulties, especially for users who do not have the necessary training or support [
92]. In this regard, Chen & Lee [
93] found that perceived work stress did not influence nurses’ behavioural intention to use artificial intelligence. However, Jeong et al. [
94] found that the adoption of AI was associated with an increase in work stress. Likewise, Loureiro et al. [
95] suggested that the incorporation of AI in the workplace can generate stress and affect human well-being.
On the other hand, when university students experience stress when interacting with AI devices, these are often perceived as a cognitive burden or threat, generating a negative emotional response. According to Lazarus’ cognitive evaluation theory [
56], these types of emotions reduce the willingness to engage with technology. Consequently, stress decreases acceptance of AI and can lead to attitudes of rejection of, or objection to, its use. This is supported by Gursoy et al. [
15], who found that negative emotions increase objection and reduce acceptance of the use of AI devices.
Based on the above, the following hypotheses are proposed:
H7.1. Stress negatively influences the willingness to accept the use of AI devices.
H7.2. Stress positively influences objection to the use of AI devices.
3.7. Effect of Trust on Acceptance or Objection to the Use of AI Devices
While trust is necessary for acceptance, incorrect levels of trust can lead to misuse, abuse, or disuse of this technology. For this reason, trust must be maintained at an appropriate level to encourage responsible use [
96,
97]. In this regard, Hameed et al. [
98] and Kim et al. [
16] found that trust had a positive influence on healthcare professionals’ intentions to use AI. Similarly, Stevens & Stetson [
99] found that American doctors’ acceptance of AI is determined by their level of trust. For their part, Wang et al. [
100] found that trust is vital for Chinese university students to adopt or reject AI. Also, Đerić et al. [
101] found that trust was a strong predictor of the behavioural intention to adopt generative AI tools by university students in Croatia.
Based on the above, the following hypotheses are proposed:
H8.1. Trust positively influences willingness to accept the use of AI devices.
H8.2. Trust negatively influences objection to the use of AI devices.
5. Data Analysis and Results
In this study, confirmatory factor analysis (CFA) was used to examine the validity and reliability of the constructs; the composite reliability (CR), the average variance extracted (AVE), and discriminant validity were calculated. A CR above 0.7 indicates good internal consistency, while values between 0.6 and 0.7 can still be considered acceptable in exploratory studies or applied research. An AVE above 0.5 indicates adequate convergent validity; when CR exceeds 0.6, convergent validity can still be considered acceptable even if the AVE falls slightly below this threshold [
105,
106,
107]. If the square root of AVE is greater than the observed correlations, the construct is considered to have adequate discriminant validity [
107,
108]. The hypothesised relationships between the study variables were evaluated using structural equation modelling (SEM) in IBM SPSS AMOS version 26.
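For illustration, the sketch below shows how CR and AVE are conventionally derived from standardised factor loadings. It is a minimal example with hypothetical loading values; it is independent of the AMOS routines used in this study.

```python
# Illustrative sketch: composite reliability (CR) and average variance
# extracted (AVE) from standardised factor loadings.
# The loading values below are hypothetical, not taken from this study.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_l = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)  # error variance of each indicator
    return sum_l ** 2 / (sum_l ** 2 + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings for a four-item construct
loadings = [0.72, 0.68, 0.81, 0.64]
print(f"CR  = {composite_reliability(loadings):.3f} (>= 0.70 good; >= 0.60 acceptable)")
print(f"AVE = {average_variance_extracted(loadings):.3f} (>= 0.50 adequate convergent validity)")
```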
Finally, taking into account the presence of multivariate non-normality, the model was estimated using the Maximum Likelihood (ML) method, complemented by the bootstrapping procedure, in order to obtain robust estimates of the parameters and their significance values. The overall fit of the model was evaluated using the indicators provided by the AMOS software, including the CMIN/DF normalised discrepancy index, the root mean square error of approximation (RMSEA), and the incremental fit indices CFI, TLI, and IFI. According to the literature, CMIN/DF values below 3.0 and RMSEA values below 0.06 indicate a good fit of the model, while CFI, TLI, and IFI values above 0.85 reflect an acceptable fit, with slightly lower values being acceptable in complex models and applied studies. In addition, the bootstrapping procedure was applied with 5000 samples with replacement to evaluate the statistical significance of the structural relationships proposed in the study [
106,
109,
110,
111,
112,
113].
5.1. Reliability and Validity of the Measurement Scales
Structural equation modelling (SEM) analysis requires the evaluation of the reliability and validity of constructs through the calculation of composite reliability (CR), average variance extracted (AVE), Cronbach’s alpha coefficient, and discriminant validity. In this regard,
Table 2 presents the descriptive statistics corresponding to the constructs included in the measurement model.
Table 3 shows the standardised factor loadings of the items associated with each of the latent constructs included in the measurement model. In general, the indicators showed statistically significant factor loadings (
p < 0.001) and, for the most part, were above the recommended threshold of 0.60, which indicates an adequate contribution of the items to their respective constructs. However, some indicators, such as PEE1 and ANX1, had factor loadings slightly below this threshold, although above 0.50, a value that can be considered acceptable in applied research and theoretically grounded models. The composite reliability (CR) values ranged from 0.693 (Trust) to 0.872 (Willingness to accept the use of artificial intelligence devices), indicating acceptable to high levels of internal consistency. Although the CR value for the Trust (TR) construct was slightly below the conventional threshold of 0.70, it exceeded the minimum value of 0.60, which is considered acceptable in exploratory studies and applied research, especially when the factor loadings are adequate and the construct has theoretical support. In terms of convergent validity, most constructs achieved AVE values above 0.50, which supports an adequate proportion of variance explained by the indicators. However, the constructs Trust (AVE = 0.430) and Objection to the use of artificial intelligence devices (AVE = 0.485) had AVE values below the recommended threshold. Even so, convergent validity can be considered acceptable in these cases, given that both constructs exhibited satisfactory levels of composite reliability (CR > 0.60) and mostly adequate standardised factor loadings. Finally, discriminant validity was confirmed for all constructs, as the square root of the AVE for each latent variable exceeded the correlations with the other constructs, indicating that each construct shares greater variance with its own indicators than with those of other factors in the model.
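The Fornell–Larcker criterion applied here can be expressed compactly: a construct shows discriminant validity when the square root of its AVE exceeds its correlations with every other construct. A minimal sketch with hypothetical AVE and correlation values is shown below.

```python
import numpy as np

# Illustrative Fornell-Larcker check; AVE and correlation values are hypothetical.
constructs = ["PEX", "PEE", "TR"]
ave = np.array([0.62, 0.55, 0.43])                 # hypothetical AVE per construct
corr = np.array([[1.00, 0.35, 0.48],               # hypothetical latent correlations
                 [0.35, 1.00, 0.22],
                 [0.48, 0.22, 1.00]])

sqrt_ave = np.sqrt(ave)
for i, name in enumerate(constructs):
    others = np.delete(corr[i], i)                 # correlations with the other constructs
    ok = np.all(sqrt_ave[i] > np.abs(others))
    print(f"{name}: sqrt(AVE) = {sqrt_ave[i]:.3f}, "
          f"max |r| = {np.abs(others).max():.3f} -> {'adequate' if ok else 'check'}")
```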
Table 4 shows the results of the discriminant validity analysis using the Heterotrait–Monotrait ratio (HTMT) criterion. According to Henseler et al. [
114], discriminant validity is considered adequate when HTMT values are below 0.90, with 0.85 being a stricter criterion. The results show that the vast majority of HTMT values are clearly below both thresholds, which indicates adequate differentiation between the constructs of the model.
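For readers unfamiliar with the HTMT criterion, the sketch below computes it for a pair of constructs from an item-level correlation matrix: the mean heterotrait correlation is divided by the geometric mean of the average monotrait correlations. The correlation matrix is hypothetical and is not taken from this study.

```python
import numpy as np

# Illustrative HTMT (heterotrait-monotrait ratio) computation with a
# hypothetical item correlation matrix (placeholder values only).

def htmt(item_corr, idx_i, idx_j):
    """Mean heterotrait correlation / geometric mean of mean monotrait correlations."""
    hetero = item_corr[np.ix_(idx_i, idx_j)].mean()
    def mean_monotrait(idx):
        block = item_corr[np.ix_(idx, idx)]
        off_diag = block[~np.eye(len(idx), dtype=bool)]
        return off_diag.mean()
    return hetero / np.sqrt(mean_monotrait(idx_i) * mean_monotrait(idx_j))

# Hypothetical correlations for two constructs with two items each
R = np.array([[1.00, 0.60, 0.30, 0.28],
              [0.60, 1.00, 0.25, 0.32],
              [0.30, 0.25, 1.00, 0.55],
              [0.28, 0.32, 0.55, 1.00]])
value = htmt(R, [0, 1], [2, 3])
print(f"HTMT = {value:.3f} (< 0.90 acceptable; < 0.85 stricter criterion)")
```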
Table 5 presents the fit indices for the measurement model, which show an adequate overall level of fit between the proposed theoretical model and the observed empirical data. First, the normalised discrepancy index (CMIN/DF = 2.188) is below the recommended threshold of 3.00, indicating a good fit of the model. Likewise, the RMSEA = 0.048 is below the strict criterion of 0.06, suggesting an excellent fit, reinforced by a PCLOSE = 0.804, a value greater than 0.05, confirming that the RMSEA does not differ significantly from a close fit. With regard to the incremental indices, the CFI (0.925), TLI (0.913) and IFI (0.926) exceed the recommended threshold of 0.90, indicating a good comparative fit of the model against a null model. Therefore, the measurement model has adequate psychometric properties for the subsequent estimation of the structural model.
5.2. Structural Model
Univariate normality was assessed using skewness and kurtosis statistics. The values obtained were within acceptable ranges (|skewness| < 2; |kurtosis| < 7), suggesting an adequate distribution at the univariate level. However, the multivariate normality test using the Mardia coefficient showed a violation of this assumption (multivariate kurtosis = 290.97; C.R. = 66.92), so bootstrap procedures were used in the covariance-based structural equation modelling (CB-SEM) analysis.
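The univariate screen described above is straightforward to reproduce; the sketch below computes skewness and kurtosis per item on simulated data and flags values outside the |skewness| < 2 and |kurtosis| < 7 ranges. Note that scipy reports excess kurtosis, and whether the study's criterion refers to excess or raw kurtosis is not specified here, so this is purely illustrative. Mardia's multivariate test is available in dedicated SEM packages and is not re-implemented.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Illustrative univariate normality screen with simulated data (not study data).
rng = np.random.default_rng(42)
items = rng.normal(size=(300, 5))          # 300 hypothetical respondents, 5 items

for j in range(items.shape[1]):
    s = skew(items[:, j])
    k = kurtosis(items[:, j])              # excess kurtosis (0 under normality)
    ok = abs(s) < 2 and abs(k) < 7
    print(f"Item {j + 1}: skewness = {s:.2f}, kurtosis = {k:.2f} "
          f"-> {'acceptable' if ok else 'non-normal'}")
```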
As shown in
Table 6, the structural model presented lower fit indices than those observed in the measurement model. This difference is expected from a theoretical and methodological perspective, given that the structural model incorporates additional causal constraints between latent constructs, which increases the complexity of the model and limits its overall flexibility. The literature indicates that a moderate decrease in fit indices when moving from the measurement model to the structural model is common in structural equation modelling and does not necessarily reflect an inadequate model specification, especially when the model is solidly grounded in theory and developed in applied research contexts [
115].
The structural model presents an acceptable overall fit. The CMIN/DF (2.79) and RMSEA (0.059) indicate a good absolute fit, while the incremental indices (CFI = 0.881; TLI = 0.868; IFI = 0.882) show an acceptable fit considering the complexity of the model and the sample size.
5.3. Hypothesis Testing
Table 7 presents the results of the validation of the hypotheses proposed in the structural model. The validation was carried out using standardised regression coefficients (β) and their corresponding significance values, estimated using structural equation modelling (SEM) with bootstrap correction. The results show that anthropomorphism (AN) has a positive and significant influence on performance expectancy (PEX); therefore, hypothesis H1.1 is accepted. However, no empirical evidence was found to support the negative influence of AN on perceived effort expectancy (PEE), so hypothesis H1.2 was rejected. With regard to perceived risk (PR), the results indicate that it does not have a significant influence on PEX, leading to the rejection of hypothesis H2.1. However, a positive and significant influence of PR on PEE was confirmed, and therefore hypothesis H2.2 is accepted. For its part, perceived value (PV) showed a positive and highly significant influence on PEX, accepting H3.1, while its negative effect on PEE was not confirmed, therefore rejecting H3.2. Likewise, performance expectancy (PEX) had a negative and significant influence on anxiety (ANX) and stress (ST), as well as a positive and significant influence on trust (TR), accepting hypotheses H4.1, H4.2, and H4.3, respectively. Perceived effort expectancy (PEE) had a positive influence on ANX and ST, accepting hypotheses H5.1 and H5.2, although no significant influence on TR was evident, leading to the rejection of H5.3. Anxiety (ANX) did not show significant effects on either willingness to accept the use of artificial intelligence devices (WA) or objection to the use of artificial intelligence devices (OU), so hypotheses H6.1 and H6.2 were rejected. Stress (ST) showed a negative and significant influence on willingness to accept the use of artificial intelligence devices, accepting hypothesis H7.1, while its effect on objection to the use of artificial intelligence devices was not confirmed, and therefore hypothesis H7.2 was rejected. Finally, trust (TR) showed a positive and highly significant influence on the willingness to accept the use of AI devices, accepting hypothesis H8.1, while its negative influence on objection to the use of AI devices was not statistically significant, therefore rejecting hypothesis H8.2.
Table 8 presents the indirect effects that were estimated using bootstrapping with 5000 resamples. The results show that performance expectancy (PEX) acts as a mediator between contextual variables and the emotional responses of university students. In this sense, anthropomorphism (AN) and perceived value (PV) have significant indirect effects on anxiety (ANX), stress (ST), and trust (TR) through performance expectancy (PEX). Therefore, higher performance expectancy (PEX) is associated with lower levels of negative emotions and higher levels of trust (TR). On the other hand, perceived effort expectancy (PEE) mediates the relationship between contextual variables and negative emotions. Both anthropomorphism (AN) and perceived risk (PR) have positive indirect effects on anxiety (ANX) and stress (ST) through PEE, indicating that a higher perception of effort intensifies these emotional responses.
In relation to behavioural outcomes, trust (TR) and stress (ST) significantly mediate the relationship between cognitive evaluations and willingness to accept the use of AI devices (WA). PEX positively influences WA through TR, while PEX negatively influences WA through ST and PEE does so through ST and to a lesser extent through ANX. No significant indirect effects were observed on objection to the use of AI devices (OU), suggesting that in the MIDA model, emotional mechanisms primarily explain the willingness to accept the use of AI devices.
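To illustrate the logic of the bootstrapped indirect effects reported in Table 8, the sketch below estimates a simple mediated effect (the product of the a and b paths) with a percentile bootstrap on simulated data. It mirrors the 5000-resample procedure conceptually but is not the AMOS estimation used in this study, and the variable roles named in the comments are hypothetical.

```python
import numpy as np

# Illustrative percentile-bootstrap test of an indirect effect (a*b) on simulated data.
rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                              # e.g., anthropomorphism (simulated)
m = 0.5 * x + rng.normal(size=n)                    # e.g., performance expectancy (simulated)
y = 0.4 * m + rng.normal(size=n)                    # e.g., trust (simulated)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                      # a path: m regressed on x
    X = np.column_stack([m, x, np.ones_like(x)])    # y regressed on m controlling for x
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]     # b path: coefficient of m
    return a * b

boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)                     # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
print("Significant (CI excludes zero)" if lo * hi > 0 else "Not significant")
```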
The results of the structural model (see Figure 2) show a high level of explanation of the willingness to accept the use of artificial intelligence devices (WA), with a value of R² = 0.688. Likewise, performance expectancy (PEX) presents a high level of explained variance (R² = 0.746), while perceived effort expectancy (PEE) shows a low explanatory capacity (R² = 0.157). For their part, the constructs of anxiety (ANX), stress (ST), and trust (TR) reach moderate to high levels of explained variance (R² = 0.700, 0.683, and 0.662, respectively). However, objection to the use of AI devices (OU) shows very low explained variance (R² = 0.024), suggesting that this construct could be influenced by additional factors not considered in the model.
6. Discussion
The results of this research confirm that the willingness to accept the use of AI devices in higher education occurs through a sequential process in which contextual factors influence the formation of cognitive expectations, which in turn trigger emotional responses with differentiated effects. In this sense, the results support the central premise of MIDA, which indicates that the willingness to accept the use of AI devices cannot be explained solely from a functional utility perspective, but rather requires consideration of the dynamic interaction between cognition and emotion. Unlike classic models of technology acceptance such as Davis’ Technology Acceptance Model (TAM) [
8] and Venkatesh et al.’s Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [
12], which explain the intention to use technology mainly on the basis of assessments of utility and effort, the results of the study show that the willingness to accept the use of AI devices in higher education is a more complex and dynamic process. Furthermore, while the AIDUA model by Gursoy et al. [
15] considers emotion as a general affective response, the MIDA model allows us to identify how specific emotions, such as trust or stress, differentially influence the acceptance of AI devices. The findings show that trust favours willingness to accept the use of AI devices, while stress acts as an inhibiting factor, and anxiety has no direct effect, making it important to consider the emotional dimension in a disaggregated manner.
Firstly, contextual factors showed different effects on expectations. For example, anthropomorphism significantly increased performance expectancy. This would indicate that when AI devices incorporate human or social characteristics, students anticipate greater academic benefits. This finding coincides with previous studies indicating that the perception of social presence improves the expected usefulness of intelligent technologies in technology-assisted learning environments [
15,
116,
117]. However, contrary to expectations, anthropomorphism increased the perceived effort expectancy, a finding similar to that found by Espinoza-Bravo et al. [
61] in Ecuadorian university students. This could be because the humanisation of AI devices is associated with a greater perception of cognitive complexity [
118]. Likewise, this result suggests that students perceive that they must interact in a more elaborate way with AI devices, for example, through precise instructions or critical evaluation of the responses generated [
119].
Secondly, perceived risk did not reduce performance expectancy, but it did increase effort expectancy. This finding suggests that students value the academic potential of AI devices even when they perceive associated risks. This is consistent with previous studies indicating that higher education students may accept technologies perceived as useful even when they identify ethical or regulatory risks associated with their use [
120]. Perceived value, on the other hand, had a positive and significant influence on performance expectancy and no influence on effort expectancy. This result aligns with Eccles & Wigfield’s [
75] situated expectancy-value theory, which indicates that the value assigned to a task increases the expectation of success, although it does not necessarily reduce the anticipated effort when the task involves the development of new skills.
For its part, performance expectancy significantly reduced anxiety and stress and increased trust. In other words, when students anticipate that AI devices will improve their performance, they experience lower levels of emotional distress and develop greater trust in the technology [
121,
122]. However, perceived effort expectancy increased anxiety and stress without affecting trust. This finding indicates that perceived effort expectancy generates negative emotional responses, but this does not imply that students stop trusting AI devices. In this sense, students can trust and simultaneously experience anxiety and stress during their use [
123].
On the other hand, trust had a positive and significant influence on students’ willingness to accept the use of AI devices, while stress significantly reduced this willingness. These findings indicate that acceptance of the use of AI devices is facilitated when students experience favourable emotions and do not feel emotionally overwhelmed [
124,
125]. Anxiety did not influence either acceptance or objection to the use of AI devices, suggesting that it does not play a decisive role in these decisions in the educational context. According to Lang’s three-dimensional theory of anxiety [
126], this could be because anxiety is an emotional response that does not necessarily translate into a behavioural decision [
127]. However, these results should be interpreted with caution. Although this result is valid in the context of the current sample, it should not be generalised as a universal finding. It is possible that the impact of anxiety on AI acceptance is not linear or depends on additional factors, such as previous experience with AI technologies, levels of digital literacy, or the evaluative context in which AI is used [
128,
129].
Finally, the indirect effects analyses confirm the internal consistency of the MIDA model by showing that contextual factors influence emotional responses and willingness to accept the use of AI devices through expectations. Performance expectancy acts as a central mediator and articulates the effects of anthropomorphism and perceived value on anxiety, stress, and trust, such that higher performance expectancy is associated with lower levels of negative emotions and increased trust. Furthermore, effort expectancy significantly mediates the effects of anthropomorphism and perceived risk on anxiety and stress, intensifying these negative emotional responses. Indirect effects also show that willingness to accept the use of AI devices is mainly explained by trust and stress. However, an important finding of this study is the very low explanatory power obtained for the construct objection to the use of AI devices. Rather than indicating a statistical weakness, this result suggests a theoretical distinction between acceptance and objection processes. The findings indicate that objection is not simply the inverse of willingness to use AI but may reflect a qualitatively different mechanism. While willingness to accept AI devices appears to be driven primarily by cognitive evaluations and emotional responses, objection may be shaped by normative, ethical, or institutional considerations that are not captured within the current conceptualisation of the MIDA model [
130]. In this sense, the proposed model provides a stronger explanation of willingness to use AI technologies than of active resistance or objection to their use.
6.1. Theoretical Implications
This research makes relevant theoretical contributions to the acceptance of artificial intelligence devices in higher education. First, it expands on traditional models of technology acceptance by empirically validating a model that follows a sequential process integrating cognition and emotion. In this sense, the MIDA model demonstrates that cognitive evaluations (both contextual and expectation-based) influence the acceptance of AI devices in the education sector through emotional responses. Secondly, the results contribute to theory by differentiating the role of performance expectancy and perceived effort expectancy in emotional activation. In this sense, performance expectancy positively regulates emotions by reducing stress and anxiety and increasing trust, while effort expectancy increases emotional load by increasing stress and anxiety without affecting trust. This difference shows that expectations not only fulfil a cognitive function, but also activate different emotional responses, allowing for a better understanding of the psychological mechanisms involved in the acceptance of AI devices [
124].
Thirdly, the study results provide empirical evidence that distinguishes between the factors that explain acceptance and those that explain objection to the use of AI devices. While trust and stress significantly explain the intention to use, none of the emotions described predict objection. This finding challenges the implicit assumption of some models that acceptance and objection to the use of technology are driven by the same psychological factors. Therefore, it is suggested that objection to the use of AI devices may be influenced by other factors that go beyond the scope of the present research.
Finally, by incorporating specific emotions such as anxiety, stress, and trust, rather than global affective measures, the MIDA model provides a more granular and theoretically sound approach to analysing the acceptance of AI devices in higher education. This contribution is important in educational contexts, where emotional responses play a key role in the interaction between students and emerging technologies [
61].
6.2. Practical Implications
The results of the MIDA model offer relevant practical implications for the design, implementation, and management of AI devices in higher education. First, trust is consolidated as the main positive predictor of willingness to accept the use of AI devices, suggesting that developers and providers should prioritise strategies aimed at strengthening it. This involves ensuring transparency in the functioning of the systems, explaining the decision-making criteria of AI devices in an understandable way, and clearly communicating their academic benefits. Secondly, the findings show that stress has a significant negative effect on the acceptance of AI devices, while anxiety has no direct impact. Consequently, educational institutions should focus their efforts on reducing the sources of stress associated with these technologies, such as cognitive overload, the complexity of interfaces, or the perception of additional academic demands associated with the use of AI devices. Strategies such as progressive training, academic support, and the availability of technical support can help mitigate these effects. Likewise, performance expectancy reduces anxiety and stress while increasing trust, which underscores the importance of students clearly perceiving the functional and academic value of AI devices. In this regard, it is key to integrate these technologies in a manner consistent with learning objectives and demonstrate their usefulness in improving academic performance. On the other hand, the expectation of effort increases negative emotions, highlighting the need to design intuitive and easy-to-use AI devices with user-friendly interfaces and reduced learning curves. Finally, given that contextual variables such as perceived value, anthropomorphism, and perceived risk indirectly influence acceptance, implementation strategies must adequately manage students’ initial perceptions by communicating benefits, limitations, and levels of human control in a balanced manner.
6.3. Limitations and Recommendations for Future Research
This study has some limitations that open up opportunities for future research. First, the term AI devices was used in a broad and practical sense to refer to intelligent systems that support academic activities in higher education, including both socially oriented applications (e.g., conversational or generative AI systems) and task-focused tools (e.g., decision-support or assistance systems). While this inclusive definition allows for an overall examination of students’ acceptance of AI technologies, it also implies a degree of heterogeneity across AI types. In practice, these systems may differ substantially in terms of perceived anthropomorphism, emotional engagement, perceived usefulness, and perceived risk. These differences were not explicitly modelled in the present study and therefore represent a limitation. Future research could address this issue by focusing on specific categories of AI systems or by examining the type of AI as a moderating variable within technology acceptance models.
Second, although the proposed model has theoretical and empirical support, the use of longitudinal or experimental studies would allow for analysis of how expectations and emotional responses evolve as university students gain more experience using AI devices.
Thirdly, the MIDA model took into account a specific number of contextual, cognitive and emotional variables in order to maintain a parsimonious structure; however, other relevant factors were not considered, particularly those that may help explain the objection to the use of AI devices. In this regard, the present model does not explicitly incorporate ethical or normative variables, which may play a central role in shaping opposition to the use of artificial intelligence in higher education. Academic and institutional discussions on academic integrity, the responsible use of generative AI, dual-use risks, and regulatory constraints suggest that ethical considerations may significantly influence students’ resistance or reluctance to adopt AI technologies. The absence of these variables limits the explanatory scope of the model, especially with regard to the outcomes of objection. Therefore, future research should integrate ethical, institutional, and normative dimensions to provide a more comprehensive understanding of resistance to the use of AI in educational contexts.
Fourth, the sample consisted exclusively of university students drawn from a single urban and cultural context, and the empirical data were collected using a cross-sectional design. While this approach is appropriate for the validation of the proposed model, it necessarily limits the generalisability of the findings. Caution is therefore required when extending the conclusions to other educational systems, cultural contexts, or institutional environments. Replication studies conducted across different countries, educational levels, and institutional settings would be valuable to assess the robustness and external validity of the MIDA model. Additionally, future research could examine the applicability of the model to other populations, such as teachers, professionals, or learners at different educational stages.
Finally, future research could explore possible moderating effects of variables such as previous experience with AI, level of digital literacy, or level of education, in order to determine whether the intensity of cognitive and emotional relationships varies among different user profiles. These studies would allow for refinement of the MIDA model and strengthen its applicability in various scenarios.