Article

Artificial Intelligence in Higher Education: Predictive Analysis of Attitudes and Dependency Among Ecuadorian University Students

by Carla Mendoza Arce 1,*, Jaime Camacho Gavilanes 1, Edgar Mendoza Arce 1, Edgar Mendoza Haro 1 and Diego Bonilla-Jurado 2,*

1 Business Administration Program, Faculty of Social Sciences, Commercial Education and Law, Universidad Estatal de Milagro, Milagro 091050, Ecuador
2 Centro de Innovación y Transferencia Tecnologica, Instituto Superior Tecnológico España, Ambato 180103, Ecuador
* Authors to whom correspondence should be addressed.
Sustainability 2025, 17(17), 7741; https://doi.org/10.3390/su17177741
Submission received: 10 July 2025 / Revised: 12 August 2025 / Accepted: 22 August 2025 / Published: 28 August 2025
(This article belongs to the Special Issue Technology-Enhanced Education and Sustainable Development)

Abstract

This study examines the relationship between attitudes toward artificial intelligence (AI) and AI dependency among Ecuadorian university students. A cross-sectional design was used, applying two validated instruments: the Artificial Intelligence Dependence Scale (DAI) and the General Attitudes Toward Artificial Intelligence Scale (GAAIS), with a sample of 540 students. Structural equation modeling (SEM) assessed how both positive and negative attitudes predict dependency levels. Results indicate a moderate level of AI dependency and an ambivalent attitudinal profile. Both attitudinal dimensions significantly predicted dependency, suggesting dual-use behaviors shaped by perceived utility and ethical concerns. Urban students reported higher dependency and greater sensitivity to AI-related risks, highlighting digital inequalities. Although the SEM model showed adequate comparative fit (CFI = 0.976; TLI = 0.973), residual indicators (RMSEA = 0.075) suggest further refinement is needed. This study contributes to underexplored Latin American contexts and emphasizes the need for equity-driven digital literacy strategies in higher education. Findings support pedagogical frameworks promoting critical thinking, ethical reasoning, and responsible AI use. The study aligns with Sustainable Development Goals 4 (Quality Education) and 10 (Reduced Inequalities), reinforcing the importance of inclusive, learner-centered approaches to AI integration.

1. Introduction

Artificial intelligence (AI) has become one of the most transformative technologies of the 21st century, reshaping productive, social, and educational sectors [1]. From its origins in formal logic and computation [2] to current generative models, AI systems now emulate complex cognitive functions such as autonomous learning and decision-making [3]. In higher education, AI facilitates personalized instruction, adaptive feedback, and automated assessment through platforms like intelligent tutors and virtual assistants [4,5]. When properly integrated, these tools promote equity and inclusion [6], fostering the emergence of data-informed, learner-centered pedagogies [7].
To illustrate the conceptual landscape of the study, Figure 1 presents a semantic network generated using VOSviewer version 1.6.20, based on Scopus-indexed publications from 2020 to 2025. The map highlights the strongest co-occurrences among terms such as AI tools, attitudes, dependence, critical thinking, and higher education, which form tightly linked clusters. These relationships reveal a high degree of conceptual alignment and validate the focus of this study on attitudinal and behavioral dimensions of AI use in academic contexts.
While the global literature increasingly explores these themes, the network analysis also revealed a geographic gap: empirical studies focusing on Latin America, particularly Ecuador, remain scarce. This supports the relevance of investigating how students in digitally emerging regions engage with AI tools and the implications for equity-driven educational policies.
However, this integration raises critical concerns. The widespread use of tools like ChatGPT has improved student productivity in academic writing and organization [8,9], but it may also foster over-reliance, weakening reflection and independent reasoning [10,11]. Excessive dependence on AI can inhibit higher-order thinking skills [12,13] and compromise academic integrity [14,15]. Students express ambivalence: while many value AI’s efficiency, others raise ethical concerns, particularly about originality and self-reliance [16,17,18].
Theoretical models such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) have been widely used to explain users’ attitudes and behavioral intentions toward digital technologies [19,20]. These frameworks emphasize the role of perceived usefulness, ease of use, and social influence in shaping acceptance patterns, particularly in relation to perceived benefits, ethical concerns, and behavioral dependence on AI systems in academic settings [21,22].
Attitudes toward AI significantly shape its adoption and use [23], yet few studies explore these dynamics in Latin American contexts, where digital divides persist [24]. While digital tools may improve learning outcomes, their uncritical use may foster superficial engagement [25,26,27] and reduce cognitive autonomy [28,29]. This is especially problematic in under-resourced regions, where access and training are uneven [30,31].
Although prior research has examined AI in global settings, there is limited empirical evidence from Ecuador, a country undergoing rapid but unequal technological adoption [32,33,34]. Understanding how students in such contexts relate to AI is crucial to developing ethical, sustainable, and inclusive educational policies [35,36]. The present study addresses this gap by analyzing both the degree of AI dependency and attitudinal orientations among Ecuadorian university students, using validated psychometric instruments.
Anchored in frameworks of digital equity and Education for Sustainable Development (ESD), this study contributes to the literature by exploring how cognitive, ethical, and demographic variables shape the academic use of AI. Findings are discussed in relation to Sustainable Development Goals, particularly SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities), and support the design of learner-centered, context-sensitive strategies for responsible AI integration.
This study aims to examine the relationship between Ecuadorian university students’ attitudes toward AI and their perceived dependence on AI tools in academic contexts. Two validated instruments were used: the Artificial Intelligence Dependence Scale (DAI) and the General Attitudes Toward Artificial Intelligence Scale (GAAIS). These scales were selected for their strong psychometric properties and prior validation in Latin American university populations, which ensures cultural relevance and measurement reliability in this context. A more detailed explanation of their structure and psychometric validation is provided in Section 2.3.
The focus lies in identifying predictive and sociodemographic patterns that may inform pedagogical and policy strategies for responsible AI integration. Therefore, the following objectives are proposed: (a) to assess the degree of AI dependence among university students; (b) to identify students’ general attitudes, both positive and negative, toward the educational use of AI; (c) to examine differences in dependence and attitudes based on gender, age, academic major, and area of residence; and (d) to model the predictive relationship between attitudes and AI dependence, considering sociodemographic moderators.
From these objectives, it is hypothesized that the frequent use of AI tools is associated with higher levels of perceived dependence (H1); students in technological or applied science majors report more positive attitudes toward AI than those in traditional fields (H2); significant differences exist in AI dependence and attitudes based on gender, age, academic major, and residential area (H3); and both positive and negative attitudes significantly predict AI dependence, and this relationship is moderated by variables such as academic level and housing area (H4).

2. Materials and Methods

2.1. Research Design

This study employed a quantitative, non-experimental, cross-sectional, correlational-predictive design [37]. It aimed to examine the relationships between students’ attitudes toward AI and their perceived dependence on AI tools in academic settings. The approach allowed for the identification of attitudinal patterns and predictive variables relevant to understanding AI adoption in higher education, especially in emerging digital contexts such as Ecuador. The design is also aligned with ethical AI integration and Education for Sustainable Development (ESD), contributing to SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities).

2.2. Participants

The present study included a sample of 540 Ecuadorian university students (38.57% men, 61.42% women), aged 17 to 35 years (M = 20.68; SD = 3.02). In terms of ethnicity, 93.88% identified as mestizo, with the remaining 6.12% distributed among Indigenous, Afro-Ecuadorian, and white identities. A majority of participants resided in urban areas (67.40%), most were single (92.77%), and 99.81% stated that they were enrolled in face-to-face mode.
Participants were selected through purposive sampling with institutional mediation: formal institutional channels were used to recruit active university students from diverse academic programs, universities, and regions. This approach allowed for the inclusion of participants relevant to the research objectives while ensuring ethical access and demographic variability.
The following inclusion criteria were applied: (a) voluntary participation; (b) an educational level appropriate to age, to guarantee comprehension of the instruments. Exclusion criteria were: (a) presence of intellectual disability; (b) being under the effects of substances or drugs that impair consciousness; (c) lack of command of the Spanish language.

2.3. Instruments

Two validated scales were used for data collection:
General Attitudes Towards Artificial Intelligence Scale (GAAIS): developed by Schepman and Rodway [38] at Keele University, Staffordshire, UK. This scale assesses general attitudes toward AI in various contexts, including education. The GAAIS consists of 20 items grouped into two dimensions: positive and negative attitudes toward AI. Items are rated on a 5-point Likert scale, where 0 represents “Strongly disagree” and 4 “Strongly agree”. Regarding its psychometric properties, the GAAIS has shown high internal consistency, with Cronbach’s alpha coefficients of 0.85 for the positive-attitudes dimension and 0.82 for the negative-attitudes dimension [39].
Artificial Intelligence Dependence Scale (DAI): developed by Morales-Garcia et al. [40] at the Universidad Nacional Mayor de San Marcos, Lima, Peru. It measures the degree of students’ dependence on AI tools in the educational context and consists of 5 items rated on a 5-point Likert scale (1 = Strongly disagree, 5 = Strongly agree). Its psychometric properties are strong, with a Cronbach’s alpha of 0.87, indicating robust reliability.
Both instruments are considered suitable for assessing psychological constructs essential to understanding AI use within sustainable and ethically aware learning environments. The scales were also reviewed linguistically and conceptually to ensure cultural relevance to the Ecuadorian academic context.

2.4. Procedure

Data collection was conducted online through Google Forms (Google LLC, Mountain View, CA, USA). Participants were first presented with a detailed informed consent form outlining the study objectives, voluntary participation, and guarantees of anonymity and confidentiality. The survey link was distributed via institutional academic channels, including email lists and internal communication platforms. Upon completion, responses were securely stored in Google Sheets (Google LLC, Mountain View, CA, USA), facilitating structured data analysis.
The study received ethical approval from the institutional review board, and all procedures adhered to the ethical principles of the Declaration of Helsinki. Cultural sensitivity and inclusivity were prioritized throughout the data collection process, reflecting international standards for research in underrepresented contexts.

2.5. Data Analysis

The statistical analysis was structured in four interrelated methodological blocks, with the aim of robustly examining the relationships between dependence on AI and student attitudes in the context of higher education. In the first stage, a preliminary descriptive analysis of the variables was conducted, calculating measures of central tendency (arithmetic mean, M), dispersion (standard deviation, SD), and distribution shape (skewness, g1, and kurtosis, g2). Univariate normality was assessed by accepting g1 and g2 values within the range ±1.5 [41,42]. Multivariate normality was assessed with Mardia’s test [43]: non-significant multivariate g1 and g2 values (p > 0.05) indicate that the assumption is met.
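The analyses were carried out in R (see Section 2.5, final paragraph); as an illustration of this first screening step, the following is a hedged Python sketch. It uses population moment formulas for g1 and g2 (excess kurtosis, so a normal distribution gives 0), which can differ slightly from the sample-corrected versions reported by statistical software.

```python
import math

def describe(xs):
    """Mean (M), standard deviation (SD), skewness (g1), and excess
    kurtosis (g2), using population moment formulas."""
    n = len(xs)
    m = sum(xs) / n
    dev = [x - m for x in xs]
    var = sum(d * d for d in dev) / n
    sd = math.sqrt(var)
    g1 = (sum(d ** 3 for d in dev) / n) / sd ** 3
    g2 = (sum(d ** 4 for d in dev) / n) / sd ** 4 - 3
    return m, sd, g1, g2

def univariate_normal_ok(xs, bound=1.5):
    """Screening rule from the text: |g1| and |g2| within +/-1.5."""
    _, _, g1, g2 = describe(xs)
    return abs(g1) <= bound and abs(g2) <= bound
```

For example, a symmetric set of scores such as [1, 2, 3, 4, 5] yields g1 = 0 and |g2| < 1.5, so it passes the screening rule.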
Subsequently, the overall measurement model was examined. Internal consistency of the instruments was estimated with McDonald’s omega coefficient (ω), a more appropriate indicator than Cronbach’s alpha for scales with multidimensional structures [44,45]. Additionally, a confirmatory factor analysis (CFA) using structural equation modeling (SEM) was conducted to evaluate the relationships between the latent variables: positive attitudes, negative attitudes, and AI dependence. The SEM was estimated with the Diagonally Weighted Least Squares (DWLS) method, appropriate for ordinal data that do not meet the assumption of multivariate normality [46,47,48]. Model fit was evaluated with the chi-square statistic (χ2), normed chi-square (χ2/df), Comparative Fit Index (CFI), Tucker–Lewis Index (TLI), Standardized Root Mean Square Residual (SRMR), and Root Mean Square Error of Approximation (RMSEA). Fit was considered adequate when χ2 was non-significant (p > 0.05) or χ2/df was less than 4; CFI and TLI were greater than 0.95; and SRMR and RMSEA were less than 0.06 (with tolerances up to 0.08 acceptable) [49,50,51,52]. Finally, item factor loadings (λ) were considered adequate when they exceeded 0.40 [53].
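The omega coefficient reported in Section 3.2 can be derived directly from the standardized factor loadings of a congeneric (one-factor) measurement model. The sketch below is illustrative Python, not the authors’ R code (they relied on semTools/MBESS); it implements the standard formula ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)).

```python
def mcdonald_omega(loadings):
    """McDonald's omega from standardized factor loadings of a
    congeneric model: true-score variance over total variance."""
    s = sum(loadings)                       # sum of loadings
    unique = sum(1 - l * l for l in loadings)  # unique (error) variances
    return s * s / (s * s + unique)
```

For instance, five items loading uniformly at 0.70 give ω ≈ 0.83, in the same range as the subscale reliabilities reported here; as loadings approach 1, ω approaches 1.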
In a third stage, scale scores were compared according to sociodemographic variables (sex, area of residence) and academic variables (overall grade average). For dichotomous variables, the Wilcoxon rank-sum test (also known as the Mann–Whitney U test) was applied [54], and for comparisons among more than two groups, the Kruskal–Wallis test was used [55]. The level of statistical significance was set at p < 0.05 [49].
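The core of the Mann–Whitney U test is a simple pairwise count, sketched below in Python for illustration (the study itself used R’s implementation, which also supplies the p-value via exact or normal-approximation methods).

```python
def mann_whitney_u(x, y):
    """U statistic for two independent samples: the number of
    (x_i, y_j) pairs with x_i > y_j, counting ties as 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

A useful sanity check is that the two directional statistics always sum to the number of pairs: mann_whitney_u(x, y) + mann_whitney_u(y, x) == len(x) * len(y).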
Finally, a robust regression model with heteroskedasticity-corrected standard errors was estimated to identify significant predictors of dependence on AI [56]. The first model introduced the global score of the attitude scale (GAAIS); the second incorporated the positive- and negative-attitude subdimensions; and the third additionally included sociodemographic variables such as area of residence and academic level. In all cases, standardized coefficients (β), standard errors (SE), and t-values were reported, with p < 0.05 considered significant [57]. All predictors were treated as continuous variables, using total or subscale scores as independent variables.
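The "heteroskedasticity-corrected" idea behind these models is the sandwich estimator: the standard error is built from each observation's squared residual rather than a single pooled error variance. The Python sketch below illustrates this for a single predictor using the simplest (HC0) correction; the study's actual models were multivariate and fitted in R, and software typically offers refined variants (HC1–HC3).

```python
import math

def ols_hc0(x, y):
    """Simple OLS slope with a heteroskedasticity-consistent (HC0)
    'sandwich' standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    alpha = my - beta * mx
    resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    # HC0 "meat": squared residuals weighted by squared centered x,
    # instead of assuming a common error variance
    meat = sum(((xi - mx) ** 2) * (e ** 2) for xi, e in zip(x, resid))
    se = math.sqrt(meat) / sxx
    return beta, se
```

Dividing beta by se gives the t-value of the kind reported in Table 4; with a perfect linear fit the residuals, and hence the robust SE, are zero.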
All statistical analyses were performed in the R programming language, version 4.3.1 [58] (R Foundation for Statistical Computing, Vienna, Austria; https://www.r-project.org), using the packages lavaan, semTools, MBESS, and dplyr.

2.6. Use of Generative Artificial Intelligence (GenAI)

No generative AI tools were used in the research design, data collection, analysis, or interpretation. Limited AI-assisted tools were employed only for grammar correction and improving cohesion, without influencing scientific reasoning or argumentation.

3. Results

3.1. Descriptive Analysis

Table 1 shows the descriptive analysis of the General Attitudes Toward Artificial Intelligence Scale (GAAIS) and the Artificial Intelligence Dependency Scale (DAI), including means (M), standard deviations (SD), skewness (g1), and kurtosis (g2). The GAAIS yielded an overall mean of 67.72 (SD = 13.93), with subdimensions of positive (M = 33.56, SD = 7.32) and negative attitudes (M = 34.15, SD = 7.70). The DAI presented a total mean of 12.04 (SD = 5.42). All variables exhibited negative skewness and moderate kurtosis. Mardia’s multivariate tests revealed significant deviations from normality (p < 0.01), guiding the decision to apply robust, non-parametric and SEM methods adjusted for non-normality.

3.2. Internal Consistency and Measurement Model

Table 2 shows the internal consistency of the instruments, expressed through McDonald’s omega coefficient (ω) with 95% confidence intervals. The GAAIS is highly reliable, with a total omega coefficient of 0.93 [0.92–0.94]. Its subdimensions also show robust internal consistency, with ω values of 0.87 [0.84–0.89] for positive attitudes and 0.92 [0.90–0.93] for negative attitudes. The DAI likewise shows adequate reliability, with an omega coefficient of 0.85 [0.83–0.87]. These results indicate that both instruments have high internal coherence in the analyzed sample, supporting their use for measuring attitudes toward, and dependence on, AI.
Figure 2 displays the structural equation model (SEM) assessing the relationships between students’ attitudes toward AI (positive and negative) and their perceived dependency. The three latent constructs presented significant standardized factor loadings, mostly above 0.60, indicating good structural validity.
The model exhibited a chi-square value of χ2(129) = 1974.06, p < 0.001. Although significant (as expected with large samples), the Comparative Fit Index (CFI = 0.976) and Tucker–Lewis Index (TLI = 0.973) indicate excellent incremental fit. The RMSEA value of 0.075 (90% CI [0.0649, 0.0842]) and SRMR of 0.072 fall within acceptable thresholds, suggesting that the model achieves a satisfactory approximation of the observed data despite minor residual deviations.
Taken together, these indicators support the overall adequacy of the model for representing the attitudinal and behavioral constructs under investigation.

3.3. Group Comparisons

Table 3 reports the results of non-parametric comparisons across sociodemographic and academic variables. No significant differences were observed by sex or academic performance (p > 0.05). However, living area significantly affected DAI scores (p = 0.026) and negative attitudes (p = 0.0023), with urban students reporting higher values. This may reflect contextual or access-related variables affecting perceptions and use of AI.

3.4. Robust Regression Models

Table 4 summarizes the three robust regression models predicting AI dependence. Model 1 confirms a significant positive association between general attitudes and dependence (β = 0.151, t = 9.549), indicating that more favorable global perceptions of AI correspond to higher self-reported reliance. Model 2 distinguishes positive (β = 0.155, t = 3.573) and negative (β = 0.148, t = 3.584) attitudes as independent and significant predictors, suggesting that even critical perceptions may coexist with behavioral dependence, likely due to instrumental use despite ethical reservations.
Model 3 incorporates sociodemographic moderators. Although attitudes remain significant (especially negative attitudes, β = 0.176, t = 4.236), residential area and academic performance do not show statistically significant predictive effects. These findings confirm the dual attitudinal pathway: both enthusiasm and skepticism toward AI contribute to its academic use, reflecting the complex psychosocial dynamics of technology adoption in higher education.

4. Discussion

This study provides empirical evidence on the relationship between Ecuadorian university students and AI-based technologies in academic contexts. The findings indicate a moderate level of dependence, suggesting a phase of progressive but incomplete consolidation. This aligns with prior research highlighting a gradual incorporation of AI in higher education, where ethical concerns and perceived utility interactively shape acceptance [4,5].
Attitudes toward AI show marked duality, with comparable levels of positive and negative perceptions. This ambivalence is consistent with theoretical models from technology adoption, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), which explain user behavior as a balance between perceived usefulness, perceived ease of use, and contextual concerns [21,22]. From a psychological perspective, it also aligns with attitude theory, where simultaneous favorable and unfavorable views may reflect cognitive dissonance or instrumental adaptation in pressured academic environments [10,31].
These findings reinforce the notion that AI integration must go beyond technological deployment to include pedagogical frameworks that promote metacognitive reflection and student agency [7,30]. Particularly relevant is the observation that both positive and negative attitudes significantly predict AI dependence. This paradox reflects the functionalization of AI: it is viewed as indispensable regardless of critical views, given the academic pressures to adapt to digital environments [59,60].
Therefore, instructional designs must include scaffolding and formative assessments that foster ethical reasoning and reduce superficial learning risks. At the sociodemographic level, no significant differences were found by gender, contrary to studies noting digital gaps [61]. This may indicate progress in digital equity at the university level in Ecuador. However, significant differences due to residence area were observed: urban students reported greater dependence and more negative attitudes. This may stem from increased exposure and digital saturation, generating both intensive use and heightened ethical concerns [31].
These patterns point to the need for targeted institutional strategies that address digital divides through infrastructure investment and localized digital training programs. In contrast, academic performance showed no significant relationship with attitudes or dependence, highlighting the influence of contextual and cultural factors. This suggests that curricula should embed digital ethics, critical thinking, and self-regulation skills as transversal competencies [62,63].
Moreover, students’ concerns about plagiarism and loss of creativity emphasize a pressing pedagogical challenge: ensuring academic integrity in AI-rich environments. Authentic assessment models that privilege process over product are essential in this regard [15,35]. Taken together, these results support the view that AI integration in education must be pedagogically intentional, ethically grounded, and culturally responsive.
This is especially critical in Latin America, where digital transformation intersects with structural inequalities. In this context, the study aligns with the goals of Education for Sustainable Development (ESD), which emphasizes responsible innovation, social justice, and the development of digital and cognitive competencies for sustainability. Moreover, the findings contribute to the advancement of Sustainable Development Goals (SDGs), notably SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities), by highlighting the sociotechnical dynamics that condition educational access and equity [64].
From an applied perspective, the study identifies priority actions for higher education institutions. First, it is essential to implement data protection and algorithmic transparency policies, particularly regarding AI systems that process student data [31]. Second, sustained teacher training on AI integration is required to ensure meaningful, ethical use in diverse instructional contexts [59]. Third, equity-focused policies must be adopted to guarantee fair access to technological resources, especially in underserved areas.
In relation to the proposed hypotheses, the findings confirm H1, as frequent use of AI tools was positively associated with higher levels of perceived dependence. H2 was partially confirmed: although students in technological or applied science majors tended to show more positive attitudes toward AI, these differences were not statistically significant in all comparisons. H3 was also partially confirmed, with significant differences in AI dependence and negative attitudes according to the area of residence, but no differences by gender, age, or academic performance. Finally, H4 was confirmed, as both positive and negative attitudes significantly predicted AI dependence, with these relationships persisting even when sociodemographic variables were included as moderators.
Finally, future research should expand on these findings through longitudinal and mixed-methods approaches, which would allow deeper analysis of attitudinal shifts over time. It is also necessary to explore the perspectives of educators and institutional leadership, as these are key actors in scaling sustainable AI use across educational systems. The broader educational challenge is not simply to integrate AI, but to do so in ways that support inclusion, justice, and intellectual autonomy: the pillars of a truly sustainable digital education paradigm.

5. Conclusions

This study evidences a nuanced relationship between Ecuadorian university students and AI technologies, marked by moderate dependence and ambivalent attitudes. Both favorable and critical perceptions significantly predict AI reliance, indicating instrumental use beyond personal preference.
Sociodemographic comparisons revealed contextual disparities, particularly in urban settings, where higher dependence and ethical concerns were reported. These findings highlight the need for equity-oriented strategies that address digital exposure, access, and ethical guidance. For higher education systems, the results underscore the importance of pedagogical models that foster critical thinking, ethical awareness, and student autonomy, supported by teacher training and responsible institutional governance of AI tools.
Given the specific socio-cultural, infrastructural, and policy conditions of Ecuador’s higher education system, these results should be interpreted with caution when considering their applicability to other regions. While certain attitudinal and behavioral patterns toward AI may be comparable in contexts with similar digital divides, differences in technological infrastructure, cultural perceptions, and institutional governance may limit direct extrapolation of the findings.
Beyond offering empirical insight into an underexplored context, this research supports the design of inclusive, sustainable educational strategies tailored to the digital transformation of learning environments. Future studies should adopt longitudinal and institutional approaches to better understand the long-term impacts and readiness for ethical AI integration.

Author Contributions

Conceptualization, C.M.A. and J.C.G.; methodology, C.M.A.; software, E.M.A.; validation, C.M.A., J.C.G. and E.M.H.; formal analysis, C.M.A.; investigation, C.M.A., J.C.G. and D.B.J.; resources, E.M.A. and D.B.J.; data curation, C.M.A.; writing—original draft preparation, C.M.A.; writing—review and editing, J.C.G., E.M.H. and D.B.J.; visualization, E.M.A.; supervision, J.C.G.; project administration, C.M.A.; funding acquisition, D.B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the ethical principles of the Declaration of Helsinki and the Nuremberg Code. According to the regulations of the Universidad Estatal de Milagro (UNEMI), the study did not require prior ethical approval, as it was conducted entirely online through an anonymous survey platform. On the first page of the questionnaire, participants received a clear explanation of the study’s purpose, their rights, and the voluntary nature of their participation; voluntary completion was considered implicit informed consent to take part in the study. Nevertheless, the researchers adhered to the highest ethical standards. All data were treated as confidential and used exclusively for the purposes described in the study information provided to participants.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All responses were anonymous and managed in accordance with the Data Protection Law of Ecuador. The collected data were used exclusively for research purposes and stored securely to ensure confidentiality. The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the Universidad Estatal de Milagro (UNEMI) for its academic and administrative support.

Conflicts of Interest

The authors declare no conflicts of interest.

Correction Statement

This article has been republished with a minor correction to the correspondence contact information. This change does not affect the scientific content of the article.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
DAI: Artificial Intelligence Dependence Scale
GAAIS: General Attitudes Toward Artificial Intelligence Scale
HE: Higher Education
IRB: Institutional Review Board
SEM: Structural Equation Modeling
RMSEA: Root Mean Square Error of Approximation
SRMR: Standardized Root Mean Square Residual
CFI: Comparative Fit Index
TLI: Tucker–Lewis Index
SD: Standard Deviation
M: Mean

Figure 1. Relationships among key terms related to AI dependency in the educational context.
Figure 2. SEM model of the relationship between attitudes towards AI and perceived dependence. χ2(129) = 1974.06, p < 0.001; CFI = 0.976; TLI = 0.973; RMSEA = 0.075 [90% CI = 0.0649, 0.0842]; SRMR = 0.072.
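The approximate-fit indices reported in the figure caption can be illustrated with a short sketch. The functions below compute RMSEA and CFI from chi-square statistics using their standard definitions; the function and variable names are our own, and because the indices in this study come from robust (scaled) estimators, these plain formulas are illustrative rather than a reproduction of the reported values.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation from a chi-square fit statistic."""
    # Noncentrality (chi2 - df) per degree of freedom, scaled by sample size
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Comparative Fit Index: model noncentrality relative to the baseline model."""
    d_model = max(chi2_m - df_m, 0.0)
    d_base = max(chi2_b - df_b, d_model)
    return 1.0 - d_model / d_base
```

A model whose chi-square equals its degrees of freedom yields RMSEA = 0, and CFI approaches 1 as the model's noncentrality shrinks relative to the baseline's.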
Table 1. Preliminary analysis of the instruments.
| Factors/Dimensions | M | SD | g1 | g2 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|---|---|---|
| GAAIS | 67.72 | 13.93 | −0.824 | 1.87 | - | | | |
| Positive attitudes | 33.56 | 7.32 | −0.647 | 1.23 | 0.877 ** | - | | |
| Negative attitudes | 34.15 | 7.70 | −0.596 | 0.936 | 0.896 ** | 0.592 ** | - | |
| Mardia | | | 4375.376 ** | 70,437 * | | | | |
| DAI | 12.04 | 5.42 | −0.222 | −0.128 | 0.297 ** | 0.268 ** | 0.278 ** | - |
| Mardia | | | 105,476 ** | 21,063 * | | | | |
Note. ** p < 0.01; * p < 0.05; M = mean; SD = standard deviation; g1: skewness; g2: kurtosis; GAAIS: General Attitudes Toward Artificial Intelligence Scale; DAI: Artificial Intelligence Dependency Scale.
Table 2. Internal Consistency Analysis of Instruments.
| Instruments | ω (Total) |
|---|---|
| GAAIS | 0.93 [0.92–0.94] |
| Positive attitudes | 0.87 [0.84–0.89] |
| Negative attitudes | 0.92 [0.90–0.93] |
| DAI | 0.85 [0.83–0.87] |
Note. ω: McDonald’s omega coefficient. GAAIS: Scale of General Attitudes towards Artificial Intelligence; DAI: Artificial Intelligence Dependency Scale.
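McDonald's ω, reported in Table 2, summarizes how much of a scale's variance is attributable to the common factor. A minimal sketch of the point estimate from standardized factor loadings follows; the interval estimates in the table require bootstrapping, which is omitted here, and all names are illustrative rather than the authors' code.

```python
def mcdonald_omega(loadings, residual_variances):
    """McDonald's omega: squared sum of loadings over total modeled variance."""
    common = sum(loadings) ** 2
    return common / (common + sum(residual_variances))

# Four items loading 0.70 each, with standardized residual variances 1 - 0.70**2
items = [0.70, 0.70, 0.70, 0.70]
residuals = [1 - l ** 2 for l in items]
omega = mcdonald_omega(items, residuals)  # ≈ 0.79
```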
Table 3. Comparison of scales according to sociodemographic and academic variables.
| Independent Variable | Scale | W | χ2 | gl | p |
|---|---|---|---|---|---|
| Sex | DAI | 32,105 | | | 0.168 |
| | Positive attitudes | 31,800 | | | 0.122 |
| | Negative attitudes | 37,215 | | | 0.127 |
| | GAAIS | 34,523 | | | 0.998 |
| Housing Area | DAI | 28,258 | | | 0.026 |
| | Positive attitudes | 32,253 | | | 0.897 |
| | Negative attitudes | 37,208 | | | 0.0023 |
| | GAAIS | 3304 | | | 0.054 |
| Grade point average | DAI | | 4.05 | 2 | 0.132 |
| | Positive attitudes | | 2.47 | 2 | 0.291 |
| | Negative attitudes | | 0.56 | 2 | 0.755 |
| | GAAIS | | 1.51 | 2 | 0.470 |
Note. W: Wilcoxon rank-sum test; χ2: Kruskal–Wallis test; gl: degrees of freedom; GAAIS: General Attitudes Toward Artificial Intelligence Scale; DAI: Artificial Intelligence Dependency Scale.
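The W statistics in Table 3 come from Wilcoxon rank-sum comparisons of the pooled ranks across two groups. As a sketch of that machinery only (pure Python with midranks for ties; p-value computation is omitted, and this is not the authors' R code):

```python
def midranks(values):
    """Assign ranks 1..n, averaging ranks across tied values (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def rank_sum_w(x, y):
    """Rank sum of sample x within the pooled ranking (the W statistic)."""
    r = midranks(list(x) + list(y))
    return sum(r[:len(x)])
```

For completely separated samples the statistic hits its extremes, e.g. `rank_sum_w([1, 2, 3], [4, 5, 6])` gives the minimum possible sum for the first group.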
Table 4. Robust Regression Models for AI Dependency Prediction.
| Model | Variable | β | Standard Error | t |
|---|---|---|---|---|
| Model 1 | (Intercept) | 2.008 | 1.906 | 1.832 |
| | GAAIS | 0.151 | 0.016 | 9.549 |
| Model 2 | (Intercept) | 2.002 | 1.100 | 1.819 |
| | Positive attitudes | 0.155 | 0.043 | 3.573 |
| | Negative attitudes | 0.148 | 0.041 | 3.584 |
| Model 3 | (Intercept) | 0.384 | 1.218 | 0.315 |
| | Positive attitudes | 0.127 | 0.044 | 2.904 |
| | Negative attitudes | 0.176 | 0.042 | 4.236 |
| | Housing Area | 0.603 | 0.456 | 1.322 |
| | Average | 0.986 | 0.633 | 1.558 |
Note: β: coefficient; GAAIS: General Attitudes Toward Artificial Intelligence Scale; Vivienda_1 (Housing Area): urban; Promedio_2 (grade point average): "notable"; Promedio_3: "suficiente".
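Table 4's coefficients come from robust regression (fitted in R, per the methods). As a conceptual sketch only, the snippet below implements a Huber M-estimator for a single predictor via iteratively reweighted least squares; the tuning constant k = 1.345, the MAD-based scale, the names, and the fixed iteration count are our own illustrative choices, not the authors' procedure.

```python
def huber_weights(scaled_residuals, k=1.345):
    """Huber weights: 1 inside the tuning band, k/|r| outside it."""
    return [1.0 if abs(r) <= k else k / abs(r) for r in scaled_residuals]

def weighted_ols(x, y, w):
    """Weighted least squares for a single predictor; returns (intercept, slope)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def huber_regression(x, y, iters=50):
    """Huber M-estimation via iteratively reweighted least squares."""
    a, b = weighted_ols(x, y, [1.0] * len(x))  # start from ordinary OLS
    for _ in range(iters):
        res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # MAD-based scale estimate; falls back to 1.0 for a perfect fit
        s = sorted(abs(r) for r in res)[len(res) // 2] / 0.6745 or 1.0
        a, b = weighted_ols(x, y, huber_weights([r / s for r in res]))
    return a, b
```

On clean linear data the estimator reproduces OLS; with an outlier present, the downweighting pulls the slope back toward the trend of the bulk of the points, which is the property that motivates robust regression for self-report data like the DAI scores.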
