Article

Ethical Use of Generative Artificial Intelligence Among Ecuadorian University Students

by Jorge Buele 1,*, Ángel Ramón Sabando-García 2, Bosco Javier Sabando-García 3 and Hugo Yánez-Rueda 4

1 Centro de Investigación en Mecatrónica y Sistemas Interactivos (MIST), Facultad de Ingenierías, Universidad Tecnológica Indoamérica, Ambato 180103, Ecuador
2 Facultad de Ciencias Sociales, Económicas y Humanitarias, Pontificia Universidad Católica del Ecuador Sede Santo Domingo, Santo Domingo de los Tsáchilas, Santo Domingo 230150, Ecuador
3 Programa de Odontología, Enfermería y Medicina, Universidad San Gregorio de Portoviejo—USGP, Portoviejo 130105, Ecuador
4 Facultad de Jurisprudencia y Ciencias Políticas, Universidad Tecnológica Indoamérica, Ambato 180103, Ecuador
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(10), 4435; https://doi.org/10.3390/su17104435
Submission received: 13 April 2025 / Revised: 5 May 2025 / Accepted: 7 May 2025 / Published: 13 May 2025
(This article belongs to the Special Issue Innovative Learning Environments and Sustainable Development)

Abstract
Artificial intelligence has transformed educational environments by facilitating processes such as information retrieval, assisted writing, automated feedback, and personalized tutoring. Within university settings, the adoption of technologies capable of autonomously generating content has increased rapidly, becoming a common academic resource for students. However, this accelerated integration poses ethical challenges, particularly when such tools are used without a clear understanding of their implications. This study aimed to examine how students’ emotional attitudes (affective), understanding (cognitive), and practical use (behavioral) of AI relate to their ethical engagement with these technologies. A structured questionnaire was administered to 833 university students in Ecuador. The instrument showed excellent internal consistency (α = 0.992; Ω = 0.992), and the validity analyses confirmed that the dimensions measured distinct but related constructs. ChatGPT was reported as the most used tool (62.2%), followed by Gemini and Siri. The structural model indicated that emotional and cognitive dimensions substantially influenced ethical behavior (β = 0.413 and β = 0.567, respectively), whereas frequent use alone exhibited no significant effect (β = −0.128; p = 0.058). These results suggest that ethical engagement with AI is primarily driven by reflection and knowledge rather than habit. This study contributes to the literature by modeling how different learning dimensions shape ethical behavior in AI use and underscores the relevance of aligning academic practices with socially responsible uses of emerging technologies.

1. Introduction

In recent decades, artificial intelligence (AI) has evolved from an emerging technology to a central component of digital transformation across various fields [1,2]. It encompasses a wide range of techniques, including machine learning, neural networks, and big data analytics, which enable systems to process complex datasets, detect patterns, and support autonomous decision-making. Yang et al. [3] highlight the importance of multiple knowledge representation frameworks in enhancing AI’s ability to model and interpret diverse information contexts, especially in data-intensive environments. Nguyen et al. [4] further illustrate how this synergy between AI and big data is reshaping industries such as finance by enabling real-time decision-making and predictive modeling. Beyond its evident impact in diverse sectors such as healthcare, industry, and finance, the integration of AI has more recently become a key driver of innovation in education, contributing to substantial improvements in teaching and learning processes [5,6].
Its applications in academia range from virtual assistants and content recommendation systems to advanced data analysis tools and automated assessment systems. AI not only facilitates information retrieval and organization, but also functions as a real-time reference source, automatic translator, summary generator, and provider of personalized explanations [7]. Alharbi [8] examines the use of automated writing tools in language education, focusing on their contribution to learner autonomy and the development of writing skills. In the context of English as a Foreign Language (EFL) instruction, AI-driven platforms have been observed to offer adaptive feedback and customized content, thereby supporting more effective language acquisition [9]. The widespread deployment of generative artificial intelligence (Gen-AI), such as the large language models used for writing, summarizing, and creating academic content, has marked a new phase in the digitalization of learning environments. These tools contribute to the personalization of educational processes and enable new forms of interaction between students and digital systems. Although their pedagogical value is widely recognized, their expansion also invites reflection on the broader implications of scaling up AI infrastructure, including concerns related to energy demand and resource consumption [10].
Moreover, the integration of AI into intelligent tutoring systems and plagiarism detection tools has improved access to adaptive educational resources, offering support tailored to individual learning needs [11]. AI is transforming not only how students learn, but also how educators teach and evaluate [12]. Swiecki et al. propose a framework for AI-assisted assessment that blends automated feedback with human evaluation, aiming to preserve fairness and transparency in academic judgment [13]. Its predictive capabilities for identifying academic difficulties and delivering personalized feedback offer potential for more inclusive and adaptive education systems. Nevertheless, the indiscriminate use of these technologies, especially without a solid ethical foundation, may undermine academic integrity, reinforce algorithmic bias, and diminish critical thinking [14]. Ateeq et al. [15] emphasize that addressing these risks requires institutions to adopt holistic assessment models grounded in transparency and ethics. Promoting responsible and ethical engagement with AI aligns with the principles of sustainable development, particularly those linked to quality education (SDG 4), reduced inequalities (SDG 10), and resilient institutional innovation [16].
Recent research has explored how students perceive and use this technology in their academic training. Analyzing AI-assisted writing among university students, Malik et al. [17] found that while these tools enhance grammatical accuracy and plagiarism detection, they may also adversely affect creativity and critical thinking.
The discussion on ethical principles in educational AI has gained relevance in recent years. Nguyen et al. [18] analyzed international policies and proposed principles such as fairness, autonomy, and data protection for the application of AI in education. The review by Bond et al. [19] identified that, despite the growing body of research on AI in education, there remains a lack of empirical studies examining how students internalize ethical principles as part of their academic formation. Therefore, understanding how students ethically appropriate AI technologies is essential for developing sustainable, inclusive, and integrity-grounded educational practices.
However, despite the growing attention to ethical issues in educational AI, current research has predominantly focused on general principles or institutional perspectives, offering limited empirical evidence on how students themselves understand, interpret, and apply ethical norms in their academic work. This gap is particularly relevant in higher education, where students increasingly engage with AI tools autonomously. This study addresses that gap by examining how university students ethically perceive and appropriate generative AI technologies, aiming to contribute to the development of more ethically aligned and sustainable educational practices.

2. Materials and Methods

2.1. Study Design

A quantitative approach was adopted as it allows for the objective measurement of variables and the analysis of patterns through statistical techniques. A cross-sectional design was used to collect data at a single point in time, providing a snapshot of the current state of the variables under study. Finally, the study is correlational, as it examines the association between the dimensions of AI learning (affective, behavioral, and cognitive) and their impact on ethical formation, without establishing causal relationships.

2.2. Inclusion Criteria

To ensure the relevance of the data, the following inclusion criteria were established for participants: (i) university students enrolled in higher education institutions in Ecuador, (ii) participants who use AI tools in academic contexts, and (iii) students who provided informed consent to participate in the study. Incomplete or inconsistent surveys were not included in the analysis.

2.3. Participants

A non-probability convenience sampling method was used to select participants, enabling the efficient recruitment of a large sample of Ecuadorian university students. A total of 833 valid responses were collected, of which 310 (37.2%) were from men and 523 (62.8%) were from women. The average age of participants was 21.69 years (SD = 3.751). This age range is characteristic of university students in the intermediate or advanced stages of higher education, which facilitates the assessment of their exposure to the use of this technology in learning and their perceptions of ethics in this context.

2.4. Data Collection Instrument

The data collection instrument was based on the Artificial Intelligence Literacy Questionnaire developed by Ng et al. [20], which has been previously validated and applied in various educational contexts, ensuring its reliability and suitability for assessing AI literacy. A structured questionnaire was designed for data collection, consisting of 55 items grouped into four dimensions:
  • Affective learning (19 items): measures emotional attitudes and perceptions toward AI and its relationship with ethics in learning.
  • Behavioral learning (11 items): assesses the frequency and manner in which students use AI tools in their academic environment.
  • Cognitive learning (9 items): analyzes students’ level of knowledge and understanding of AI and its ethical implications.
  • Ethical learning (16 items): measures the internalization of ethical principles in the use of AI in the educational context.
Items were scored using a five-point Likert scale, ranging from 1 (lowest AI literacy) to 5 (highest AI literacy).
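As a purely illustrative sketch, the Python snippet below shows how such item-level Likert responses could be aggregated into the four dimension scores; the column prefixes follow the item labels later listed in Table 3, and the input file name is hypothetical.

```python
# Hypothetical sketch: aggregating 1-5 Likert item responses into the four
# dimension scores. Column prefixes (AA, AC, ACO, AE) follow the item labels
# used in Table 3; "ai_literacy_responses.csv" is an illustrative file name.
import pandas as pd

responses = pd.read_csv("ai_literacy_responses.csv")

dimensions = {
    "affective":  [f"AA{i}" for i in range(1, 20)],   # 19 items
    "behavioral": [f"AC{i}" for i in range(1, 12)],   # 11 items
    "cognitive":  [f"ACO{i}" for i in range(1, 10)],  # 9 items
    "ethical":    [f"AE{i}" for i in range(1, 17)],   # 16 items
}

# Mean item score per respondent and dimension, kept on the 1-5 Likert metric.
scores = pd.DataFrame({dim: responses[items].mean(axis=1)
                       for dim, items in dimensions.items()})
print(scores.describe())
```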

2.5. Procedure

The questionnaire was distributed digitally through email, social media, and academic platforms. Before launching the full study, a pilot test was conducted to observe the response process and identify the likelihood of inconsistent answers. The questionnaire opened with an explanation of the study’s objectives and a request for informed consent. To ensure data quality, incomplete responses and those with inconsistent response patterns were excluded.

2.6. Ethical Considerations

The study was conducted in accordance with the ethical principles established in the Declaration of Helsinki and its subsequent amendments. Participation in the study was entirely voluntary. All students were informed about the objectives of the research and their right to withdraw at any time without consequences, and informed consent was obtained prior to data collection. Regarding data protection, participant confidentiality and anonymity were guaranteed. Personal and academic data were handled in compliance with current data protection regulations and used exclusively for academic purposes. The information was stored in a secure database with restricted access limited to the research team.

2.7. Data Analysis

SPSS software (version 20.0) and IBM AMOS 24.0 were used for data processing. Descriptive analyses were conducted to characterize the sample, and normality tests were performed to evaluate the data distribution. Subsequently, a structural equation model was applied to assess the relationship between AI learning and ethical learning, along with goodness-of-fit tests to validate the theoretical model.
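As a hedged illustration of these pre-modeling steps, the sketch below mirrors the descriptive and normality checks in Python; the study itself performed them in SPSS 20.0, and the input file name here is hypothetical.

```python
# Illustrative Python equivalent of the pre-modeling checks described above
# (the study used SPSS 20.0 and AMOS 24.0); the input file is hypothetical.
import pandas as pd
from scipy import stats

responses = pd.read_csv("ai_literacy_responses.csv")

# Descriptive statistics for every item.
print(responses.describe().T[["mean", "std", "min", "max"]])

# D'Agostino-Pearson normality test per item; p < 0.05 indicates departure
# from normality, which is common for Likert items and informs estimator choice.
for col in responses.columns:
    k2, p = stats.normaltest(responses[col])
    print(f"{col}: K2 = {k2:.2f}, p = {p:.4f}")
```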

3. Results

3.1. Preferences in the Use of Artificial Intelligence Applications

Among the AI tools reported by students, ChatGPT [21] was by far the most frequently used in academic contexts, with over 60% of respondents selecting it. Gemini and Siri [22] followed with a markedly lower adoption rate. Meanwhile, a wide range of other tools showed limited adoption, and a small percentage of students (2.4%) indicated no use of AI applications in their academic activities (Table 1).

3.2. KMO Sampling Adequacy Test and Bartlett’s Test of Sphericity

Before proceeding with the factor analysis, the data adequacy was assessed using the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity [23]. These analyses verify whether the correlations between variables are strong enough to justify the application of exploratory or confirmatory factor analysis.
The results showed a KMO value of 0.987, indicating excellent sampling adequacy, as values above 0.90 are typically classified as optimal for factor analysis [24]. Bartlett’s test of sphericity was statistically significant (χ² = 75,301.593; df = 1485; p < 0.001), confirming that the correlations among items are not random and that an underlying structure exists that justifies dimensionality reduction.
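For readers who wish to reproduce these diagnostics outside SPSS, the following minimal NumPy/SciPy sketch implements both tests directly from their textbook definitions; it is an illustration, not the authors’ procedure.

```python
# Hedged sketch: Bartlett's test of sphericity and the overall KMO measure,
# computed from their standard definitions. `data` is respondents x items.
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test: H0 = the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    _, logdet = np.linalg.slogdet(R)  # slogdet avoids underflow for many items
    statistic = -(n - 1 - (2 * p + 5) / 6) * logdet
    df = p * (p - 1) / 2
    return statistic, df, chi2.sf(statistic, df)

def kmo(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial (anti-image) correlations from the scaled inverse matrix.
    d = np.sqrt(np.diag(inv_R))
    P = -inv_R / np.outer(d, d)
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(P, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())
```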

3.3. Instrument Reliability Analysis

To assess the internal consistency of the questionnaire used to measure learning and ethics in the use of artificial intelligence, Cronbach’s alpha (α) and McDonald’s omega (Ω) reliability coefficients were calculated for each of the instrument’s dimensions. The results are presented in Table 2.
All reliability coefficients presented in Table 2 exceed 0.95, well above the commonly accepted minimum of 0.70 in psychological and educational research. This denotes strong internal consistency across all dimensions, indicating that the items within each factor consistently measure the same underlying construct. The highest reliability was found in the ethical learning dimension, followed closely by the affective dimension, highlighting the coherence of participant responses in those areas.
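Both coefficients follow closed-form expressions, sketched below in NumPy as a point of reference. The example loadings are the affective-dimension values from Table 3, and the resulting omega of approximately 0.982 is consistent with Table 2 (the omega formula shown assumes a congeneric one-factor model).

```python
# Minimal sketch of the two reliability coefficients reported in Table 2.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for one dimension."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def mcdonald_omega(loadings: np.ndarray) -> float:
    """Omega from standardized loadings of a one-factor (congeneric) model."""
    uniqueness = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniqueness.sum())

# Affective-dimension loadings as listed in Table 3:
aff = np.array([0.804, 0.817, 0.759, 0.797, 0.863, 0.874, 0.865, 0.865, 0.862,
                0.873, 0.892, 0.903, 0.857, 0.874, 0.889, 0.906, 0.904, 0.899,
                0.902])
print(f"omega (affective) = {mcdonald_omega(aff):.3f}")  # ~0.982, as in Table 2
```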

3.4. Convergent Validity of the Instrument

To assess the convergent validity of the instrument, factor loadings for each item, as well as composite reliability (CR) and average variance extracted (AVE) values, were analyzed for each latent dimension. These indicators verify whether the items adequately group within their respective factors and whether they share sufficient common variance. The results are shown in Table 3.
The results indicate that each item within the four proposed dimensions (affective, behavioral, cognitive, and ethical learning) demonstrates strong alignment with its respective construct. Factor loadings above 0.70 suggest that all items make meaningful contributions to their underlying dimensions. The high values of composite reliability (CR > 0.97) confirm consistency in the responses across items within each factor. Additionally, AVE values above the 0.50 threshold reflect that each dimension captures a substantial portion of the variance shared by its items, rather than error variance. Together, these metrics confirm that the questionnaire items group cohesively and measure the constructs they were intended to assess.
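Both indicators are simple functions of the standardized loadings. The minimal sketch below reproduces the cognitive-dimension values reported in Table 3 (CR ≈ 0.973, AVE ≈ 0.80) from the listed loadings.

```python
# Sketch of the convergent-validity indicators in Table 3, computed from
# standardized factor loadings (lambda) of a single latent dimension.
import numpy as np

def composite_reliability(lam: np.ndarray) -> float:
    # CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(lam: np.ndarray) -> float:
    # AVE = mean of the squared loadings.
    return (lam ** 2).mean()

# Cognitive-dimension loadings from Table 3:
cog = np.array([0.856, 0.861, 0.889, 0.909, 0.902, 0.892, 0.913, 0.915, 0.913])
print(f"CR  = {composite_reliability(cog):.3f}")       # ~0.973
print(f"AVE = {average_variance_extracted(cog):.3f}")  # ~0.800
```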

3.5. Discriminant Validity Between Constructs

The discriminant validity of the instrument was assessed using the heterotrait–monotrait ratio (HTMT), a robust technique for determining whether the evaluated constructs are conceptually distinct from one another. HTMT values below 0.90 indicate good discriminant validity between factors [25]. The results are summarized in Table 4.
The HTMT values displayed in the table indicate that the four measured dimensions are largely distinct empirically. All values fall below or remain close to the 0.90 threshold, which is widely accepted as the upper limit for acceptable discriminant validity. The strongest association appears between the cognitive and behavioral dimensions (0.917), reflecting a natural conceptual proximity between knowledge and action in the use of AI. Although this value marginally exceeds the threshold, the excess is small and does not point to problematic overlap among the constructs.
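For clarity, the HTMT ratio compares the average correlation between items of two different constructs with the geometric mean of the average within-construct correlations. A minimal NumPy sketch, assuming a respondents-by-items data matrix, is shown below.

```python
# Hedged sketch of the HTMT ratio (Henseler et al. [25]) for two constructs,
# computed from the item-level correlation matrix.
import numpy as np

def htmt(data: np.ndarray, idx_a: list, idx_b: list) -> float:
    """data: respondents x items; idx_a/idx_b: column indices of two constructs."""
    R = np.corrcoef(data, rowvar=False)
    # Heterotrait-heteromethod: mean correlation between items of A and items of B.
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    # Monotrait-heteromethod: mean within-construct off-diagonal correlation.
    def monotrait(idx):
        sub = R[np.ix_(idx, idx)]
        mask = ~np.eye(len(idx), dtype=bool)
        return sub[mask].mean()
    return hetero / np.sqrt(monotrait(idx_a) * monotrait(idx_b))
```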

4. Structural Model Analysis

The structural model presented in this study illustrates the relationship between the affective, cognitive, and behavioral dimensions of learning, and their influence on the ethical dimension in the university context, specifically concerning the use of AI technologies. This structure was evaluated through a confirmatory structural equation model (SEM), which allowed for the analysis of latent relationships and the consistency of observable items (see Figure 1).
The figure shows that the affective, cognitive, and behavioral dimensions have positive standardized paths toward the ethical dimension. The strongest path coefficient is observed from behavioral learning to ethical learning (β = 0.91), followed by affective learning (β = 0.50) and cognitive learning (β = 0.41). These associations suggest varying levels of predictive contribution from each learning component.
In the affective dimension (P1–P19), the factor loadings range from 0.65 to 0.95, indicating a high level of internal consistency in how students emotionally relate to the use of AI. The behavioral dimension (P22–P30) also shows strong loadings (0.62 to 0.91), reflecting frequent or intentional actions consistent with ethical use. The cognitive dimension (P31–P39) shows uniformly high loadings (0.73 to 0.94), suggesting well-defined knowledge structures about AI and ethics. Lastly, the ethical dimension (P40–P56) displays loadings from 0.77 to 0.97.
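As an illustration of how such a model is specified, the sketch below expresses a shortened version of the measurement and structural equations in lavaan-style syntax using the open-source semopy package. The study itself fitted all 55 items in IBM AMOS 24.0; here only the first indicators of each factor are listed for brevity, the data file is hypothetical, and the semopy calls shown are assumed from its 2.x documentation.

```python
# Sketch of the four-factor SEM in lavaan-style syntax via the semopy package.
# Item lists are truncated for brevity; the study fitted all 55 items in AMOS.
import pandas as pd
import semopy

MODEL_DESC = """
affective  =~ AA1 + AA2 + AA3 + AA4 + AA5
behavioral =~ AC1 + AC2 + AC3 + AC4 + AC5
cognitive  =~ ACO1 + ACO2 + ACO3 + ACO4 + ACO5
ethical    =~ AE1 + AE2 + AE3 + AE4 + AE5
ethical ~ affective + behavioral + cognitive
"""

data = pd.read_csv("ai_literacy_responses.csv")  # hypothetical file
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())           # parameter estimates for loadings and paths
print(semopy.calc_stats(model))  # fit indices such as CFI, TLI, RMSEA, AIC
```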

4.1. Model Fit

The goodness-of-fit indices obtained indicate that the proposed model demonstrates an acceptable fit to the data, according to the standard values in structural equation modeling research. The results are as follows:
  • CMIN/DF = 4.294 (acceptable if <5),
  • NFI = 0.927,
  • RFI = 0.925,
  • IFI = 0.944,
  • TLI = 0.939,
  • CFI = 0.944,
  • RMSEA = 0.062,
  • AIC = 6506.289.
These indices indicate that the theoretical model has an adequate fit, with strength in the incremental fit indicators (CFI, TLI, and IFI > 0.90) and reasonable parsimony (RMSEA < 0.08), as recommended by Hair et al. [24] and Hu and Bentler [26]. This supports the validity of the proposed structural model in explaining the influence of learning dimensions on the ethical use of AI.
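For reference, the two cutoff-based indices cited above follow standard definitions from the SEM literature, where the subscript m refers to the fitted model, b to the baseline model, and n to the sample size:

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_m - df_m,\,0)}{df_m\,(n-1)}},
\qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_m - df_m,\,0)}{\max(\chi^2_b - df_b,\,0)}
```

As a consistency check, substituting CMIN/DF = 4.294 and n = 833 into the RMSEA expression reproduces the reported value of approximately 0.062.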

4.2. Structural Hypothesis Validation

The model proposed four hypotheses (H1 to H4) to evaluate the direct effects of the learning dimensions on the ethical component. The results of the hypothesis testing are presented in Table 5.
As shown in the table, three of the four proposed hypotheses were supported by the data. The strongest direct effect was observed for the cognitive dimension (β = 0.567), followed by the affective dimension (β = 0.413), both statistically significant at p < 0.001. The behavioral dimension exhibited a negative and non-significant coefficient (β = −0.128; p = 0.058), and thus, hypothesis H2 was not supported. The overall model revealed a robust total effect of the learning dimensions on ethical orientation (H1), with a standardized coefficient of β = 0.675.

5. Discussion

5.1. AI Tools

In Ecuador, the data reveal an even higher level of AI integration into academic life than the global figures discussed below, with 97% of surveyed university students reporting the use of some AI tool. The predominance of ChatGPT (62.2%) in this context suggests both high levels of digital adoption and a favorable perception of its utility for academic tasks. This trend is consistent with previous findings [27,28] that emphasize students’ appreciation for the personalized assistance and time-saving features of generative AI, despite ongoing concerns regarding accuracy, ethical implications, and potential overreliance on these systems. The popularity of ChatGPT can be traced to the rapid evolution of OpenAI’s language models, from the basic text generation of GPT-2 to the improved fluency and contextualization of GPT-3 and GPT-3.5. The launch of GPT-4 brought enhanced accuracy and more natural interactions tailored to educational tasks [29].
This finding aligns with global trends. The Global AI Student Survey by the Digital Education Council [30] reported that 86% of students worldwide already use AI tools, with ChatGPT leading (66%), followed by Grammarly and Microsoft Copilot (25%). Similarly, a study by Ravšelj et al. [31], involving over 23,000 students from 109 countries, corroborates the preference for ChatGPT, especially for brainstorming, summarizing complex content, and enhancing learning efficiency.

5.2. Impact of AI Learning on Academic Ethics

The findings of the structural model demonstrate that the affective and cognitive dimensions of AI learning are positively associated with students’ ethical positioning regarding the use of these technologies. This suggests that the way students emotionally perceive AI (affective learning) and the level of understanding they have regarding its functioning and implications (cognitive learning) are factors in the internalization of ethical principles [20,32]. This is reinforced by the strong internal consistency and convergent validity found in both dimensions, which confirmed the theoretical robustness of the instrument [33]. Additionally, these results align with Coeckelbergh’s view [30] that ethical engagement with AI requires more than procedural knowledge—it demands affective involvement and moral awareness.
In other words, students who develop a reflective attitude and a solid understanding of AI tend to adopt a more ethical approach in its use, an essential condition for ensuring fair, inclusive, and sustainable digital education practices [34]. In contrast, the behavioral dimension showed no significant relationship with ethics and a slightly negative beta coefficient (β = –0.128), suggesting that practical engagement with AI, although frequent, does not inherently foster ethical awareness. Despite the high internal reliability observed for this construct [24], its weak influence may reflect a gap between usage and reflective practice. This supports the notion that practice alone is not sufficient without a conceptual foundation and critical reflection. Even if students regularly interact with AI, if they do not understand its scope and limitations or analyze its implications, their use may lack a strong ethical orientation. This limitation echoes the findings of Selwyn et al. [35], who noted that the public adoption of AI technologies often precedes meaningful ethical reflection. This highlights a gap that could undermine efforts to promote long-term educational resilience.
Likewise, the low level of integration regarding behavioral learning in AI use indicates that students have not yet developed effective strategies to apply these tools in a structured manner within their academic training [36]. This is consistent with the findings of Chen et al. [37], who reported that most students use AI for basic tasks, such as idea generation, but lack formal training in its critical and ethical use. Developing such competencies is important not only for academic integrity but also for preparing students to engage in technology in a socially responsible way.
These findings align with the study by Holmes and Porayska-Pomsta [38], who argue that teaching AI ethics must go beyond merely conveying rules and should promote critical reflection on the social and economic impacts of these technologies. Similarly, Sullivan et al. [39] emphasize that although AI can optimize learning, its use without ethical guidance may lead to indiscriminate reliance on generative tools without adequate critical thinking. These results underscore the importance of educational programs that incorporate ethical AI literacy. Williams [40] argues that university curricula should include specific modules addressing the risks and responsibilities associated with AI use in academia, ensuring that students understand its ethical implications. This need is further supported by Venkatesh and Bala [41], who highlight the importance of intentional design in shaping user behavior and our acceptance of digital innovations within learning environments.

5.3. Sustainability Risks and Challenges

In educational settings, generative artificial intelligence tools are increasingly integrated into students’ daily academic routines, offering advantages such as greater efficiency, accessibility, and personalized support. However, this widespread adoption also raises concerns that extend beyond the classroom. The lack of regulatory oversight, the speed of technological consolidation, and the limited transparency in model development all contribute to growing social and environmental risks [42]. As highlighted in the MIT report The Climate and Sustainability Implications of Generative Artificial Intelligence by Bashir et al. [43], the current development trajectory often prioritizes short-term performance gains over long-term sustainability.
Recent assessments [44,45] estimate that the training of large-scale language models consumes thousands of megawatt-hours of electricity and emits hundreds of tons of CO₂, while ongoing inference processes, required to respond to millions of queries, maintain a significant operational footprint. These technologies also depend on the extraction of rare earth elements, contribute to increasing electronic waste, and reshape electricity demand, intensifying pressure on energy systems [46]. Such impacts are often overlooked in educational discourse, despite the need for institutions to evaluate not only the pedagogical value of these tools, but also the environmental costs of their integration.
Moreover, this technological shift is accompanied by complex challenges that go beyond environmental impact. Wach et al. [47] highlight how generative AI may exacerbate socio-economic inequalities, facilitate disinformation, weaken ethical boundaries, and increase technostress among users, especially in contexts where regulation and digital literacy lag adoption [48]. These risks are compounded by the lack of transparency in model training and content generation, raising concerns about bias, misinformation, and manipulation within educational discourse.
From a sustainability perspective, addressing these concerns requires more than just improving energy efficiency. As Nedungadi et al. [49] argue, it is necessary to adopt comprehensive benefit–cost evaluation frameworks that integrate environmental, ethical, and social dimensions, and to engage all stakeholders (educators, institutions, developers, and policymakers) in shaping responsible AI adoption. Educational institutions must therefore not only promote technical and academic literacy, but also cultivate students’ awareness of the material, ecological, and ethical implications of the tools they use [50]. This calls for institutional policies that guide the critical and conscious integration of AI in education, aligning technological innovation with long-term sustainability goals.

5.4. Regulations and Implementation Strategies

The increasing integration of Gen-AI into higher education also raises critical questions about governance, transparency, and institutional responsibility. At the international level, the UNESCO Recommendation on the Ethics of Artificial Intelligence [51] and the OECD Council Recommendation on AI [52] propose shared principles such as human rights, transparency, accountability, and inclusive development. In the European Union, Regulation (EU) 2024/1689 introduces a risk-based regulatory model and creates a supervisory AI Board [53]. While there is no unified regulatory framework in Latin America, several countries, including Mexico, Brazil [54], and Peru [55], are advancing national AI strategies and legislative proposals. In 2024, Núñez Ramos [56] introduced a legislative bill in Ecuador aimed at establishing a comprehensive regulatory framework for artificial intelligence, addressing key areas such as human rights protection, education, and digital transformation.
As noted in the review by Memarian and Doleck [57], there is still a lack of comprehensive regulatory frameworks that ensure Fairness, Accountability, Transparency, and Ethics (FATE) in academic contexts. The opacity of algorithmic processes and the potential for biased or inaccurate outputs underscore the urgent need to develop policies that guarantee the fair, reliable, and responsible use of AI in education.
One of the most pressing gaps in the literature and in institutional practice is the absence of clear guidelines to evaluate the reliability of AI-generated content. Without such standards, students may accept erroneous or biased responses as accurate, undermining academic integrity and perpetuating misinformation [58]. A closely related gap is the lack of clear institutional policies on the use of AI in higher education. The study by Qadhi et al. [59] confirms that although general ethical principles for AI use are being discussed, most universities still lack specific, enforceable regulations for its implementation in teaching, assessment, and the development of academic competencies.
In addition to legal and policy-related challenges, the widespread use of AI in higher education also requires a technical and pedagogical rethinking of educators’ roles and learning models. Without adequate institutional safeguards, there is a risk that automation will gradually replace the development of essential skills such as critical thinking, reflection, and originality. Williams [40] warns that the increasing dependence on chatbots and generative systems may compromise students’ learning autonomy, especially when content is consumed passively without analytical engagement. Beyond legal regulation, recent developments in international standardization provide a complementary framework for ensuring the ethical and reliable deployment of AI systems. For instance, ISO/IEC 42001:2023 [60] defines the requirements for an AI management system focused on transparency, safety, and accountability. Likewise, the AI Risk Management Framework issued by NIST [61] offers practical strategies for identifying and mitigating AI-related risks in organizational settings. In Europe, the standardization bodies CEN and CENELEC [62] are developing harmonized technical norms to support the application of the EU AI Act, especially for high-risk systems. These technical instruments reinforce the governance of educational AI systems by ensuring their operational integrity and alignment with human rights protections.
Moreover, several peer-reviewed studies have proposed conceptual frameworks for responsible AI governance in higher education. Wu et al. [63] analyzed governance guidelines from 14 U.S. universities, identifying multi-level and role-specific approaches that could be adapted to other contexts. Similarly, Mahajan [64] introduced the HD-AIHED framework, emphasizing human-centered AI governance aligned with UNESCO and OECD ethical principles. These contributions reinforce the need for adaptable institutional policies that integrate ethical and legal considerations into AI deployment in academic environments.
These concerns connect directly to the sustainability challenges discussed previously. Promoting the responsible use of generative AI in education requires not only individual ethical awareness but also a collective, systemic approach. Thus, several strategies are proposed to foster responsible, ethical, and sustainable AI adoption: (i) The development of clear institutional policies that regulate AI use in a way that promotes equity, transparency, and accountability; (ii) Continuous faculty training to empower educators in guiding students through the critical and responsible use of these technologies; and (iii) The integration of AI ethics and sustainability principles into academic curricula, with a focus on critical thinking, digital responsibility, and an awareness of social and environmental impacts.

5.5. Limitations and Future Directions

Although this study provides empirical evidence on the relationship between AI learning and academic ethics, it is important to acknowledge its limitations. First, the data were drawn from a sample of university students in Ecuador, which limits the generalizability of the findings to other educational contexts. In addition, the use of a non-probability convenience sampling method may introduce selection bias, even though the obtained sample size (n = 833) surpasses the estimated size required for a representative probabilistic sample (n < 400). Future studies could expand the analysis to different regions and compare results across diverse academic cultures, considering factors such as institutional readiness, regulatory maturity, and access to technological infrastructure.
Additionally, the research used a correlational design, which prevents the establishment of causal relationships between variables. Experimental or longitudinal studies could provide a deeper understanding of long-term impacts, particularly regarding how students develop ethical, social, and environmental awareness in their engagement with generative AI. Given the conceptual discussion on sustainability risks, future research could empirically examine whether students’ ethical learning is associated with their sensitivity to the ecological and social consequences of AI use, especially in academic settings. Future research could also explore the effectiveness of institutional policies and curricular interventions aimed at promoting responsible and sustainable AI practices in higher education, especially within underrepresented or low-resource academic environments.

6. Conclusions

This study provides evidence on the key factors influencing the ethical engagement of university students with Gen-AI in academic settings. The results confirm that the affective and cognitive learning dimensions significantly predict ethical awareness, while behavioral use alone does not ensure a reflective or responsible approach. These findings underscore the central role of emotional perception and critical understanding in internalizing ethical principles, particularly when using AI systems for academic purposes. This study’s original contribution lies in the validation of a multidimensional model that links learning experiences with ethical orientation, addressing a gap in the current literature on AI literacy and its responsible use in higher education.
From a regulatory and sustainability perspective, the findings highlight the urgent need for universities to implement institutional frameworks aligned with international guidelines such as the UNESCO Recommendation on AI Ethics (2021) and the OECD Council Recommendation on AI (2019). It is recommended that enforceable policies for AI integration be implemented in teaching and assessment, that algorithmic transparency be ensured, and that international technical standards be followed (e.g., ISO/IEC 42001:2023, NIST AI Risk Management Framework). These steps are important not only to safeguard academic integrity but also to align AI deployment with the broader goals of equity, accountability, and sustainability. Future research should replicate this model in diverse contexts, assess longitudinal changes in ethical behavior, and control for social desirability bias, which may influence self-reported attitudes. Beyond its methodological implications, this study highlights the relevance of integrating ethical and sustainability principles into AI adoption in education, reinforcing the need for institutional awareness of its broader ecological and social impact.

Author Contributions

Conceptualization, Á.R.S.-G. and J.B.; methodology, J.B. and H.Y.-R.; software, J.B. and Á.R.S.-G.; validation, J.B., Á.R.S.-G. and B.J.S.-G.; formal analysis, J.B. and Á.R.S.-G.; investigation, J.B. and B.J.S.-G.; resources, J.B.; data curation, Á.R.S.-G. and H.Y.-R.; writing—original draft preparation, J.B., Á.R.S.-G., B.J.S.-G. and H.Y.-R.; writing—review and editing, J.B. and Á.R.S.-G.; visualization, J.B. and Á.R.S.-G.; supervision, B.J.S.-G.; project administration, Á.R.S.-G.; funding acquisition, J.B.; validation support and technical revision, H.Y.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universidad Tecnológica Indoamérica, under the project “Innovación en la Educación Superior a través de las Tecnologías Emergentes”, Grant Number: IIDI-022-25.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Universidad Tecnológica Indoamérica (protocol code UTI-VI-039-2025; date of approval: 1 April 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Participation was voluntary, and an informed consent declaration was included at the beginning of the online questionnaire, which participants were required to accept to proceed.

Data Availability Statement

The data supporting the findings of this study are not publicly available due to privacy and ethical restrictions to protect participant confidentiality. However, the datasets are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to express their sincere gratitude to the university students who generously participated in this study. During the preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-4) for the purposes of improving the clarity and accuracy of English-language grammar and syntax. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Escotet, M.Á. The Optimistic Future of Artificial Intelligence in Higher Education. Prospects 2024, 54, 531–540. [Google Scholar] [CrossRef]
  2. Rahiman, H.U.; Kodikal, R. Revolutionizing Education: Artificial Intelligence Empowered Learning in Higher Education. Cogent Educ. 2024, 11, 2293431. [Google Scholar] [CrossRef]
  3. Yang, Y.; Zhuang, Y.; Pan, Y. Multiple Knowledge Representation for Big Data Artificial Intelligence: Framework, Applications, and Case Studies. Front. Inf. Technol. Electron. Eng. 2021, 22, 1551–1558. [Google Scholar] [CrossRef]
  4. Nguyen, D.K.; Sermpinis, G.; Stasinakis, C. Big Data, Artificial Intelligence and Machine Learning: A Transformative Symbiosis in Favour of Financial Technology. Eur. Financ. Manag. 2023, 29, 517–548. [Google Scholar] [CrossRef]
  5. Rashid, A.B.; Kausik, M.A.K. AI Revolutionizing Industries Worldwide: A Comprehensive Overview of Its Diverse Applications. Hybrid Adv. 2024, 7, 100277. [Google Scholar] [CrossRef]
  6. Bekbolatova, M.; Mayer, J.; Ong, C.W.; Toma, M. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives. Healthcare 2024, 12, 125. [Google Scholar] [CrossRef]
  7. Mohamed, Y.A.; Khanan, A.; Bashir, M.; Mohamed, A.H.H.M.; Adiel, M.A.E.; Elsadig, M.A. The Impact of Artificial Intelligence on Language Translation: A Review. IEEE Access 2024, 12, 25553–25579. [Google Scholar] [CrossRef]
  8. Alharbi, W. AI in the Foreign Language Classroom: A Pedagogical Overview of Automated Writing Assistance Tools. Educ. Res. Int. 2023, 2023, 4253331. [Google Scholar] [CrossRef]
  9. Jiang, R. How Does Artificial Intelligence Empower EFL Teaching and Learning Nowadays? A Review on Artificial Intelligence in the EFL Context. Front. Psychol. 2022, 13, 1049401. [Google Scholar] [CrossRef]
  10. Chen, Q.; Wang, J.; Lin, J. Generative AI Exacerbates the Climate Crisis. Science 2025, 387, 587. [Google Scholar] [CrossRef]
  11. Ibrahim, K. Using AI-Based Detectors to Control AI-Assisted Plagiarism in ESL Writing: “The Terminator Versus the Machines”. Lang Test Asia 2023, 13, 46. [Google Scholar] [CrossRef]
  12. Qadir, J. Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education. In Proceedings of the 2023 IEEE Global Engineering Education Conference (EDUCON), Kuwait, Kuwait, 1–4 May 2023; pp. 1–9. [Google Scholar]
  13. Swiecki, Z.; Khosravi, H.; Chen, G.; Martinez-Maldonado, R.; Lodge, J.M.; Milligan, S.; Selwyn, N.; Gašević, D. Assessment in the Age of Artificial Intelligence. Comput. Educ. Artif. Intell. 2022, 3, 100075. [Google Scholar] [CrossRef]
  14. Wang, Y. Artificial Intelligence in Educational Leadership: A Symbiotic Role of Human-Artificial Intelligence Decision-Making. J. Educ. Adm. 2021, 59, 256–270. [Google Scholar] [CrossRef]
  15. Ateeq, A.; Alzoraiki, M.; Milhem, M.; Ateeq, R.A. Artificial Intelligence in Education: Implications for Academic Integrity and the Shift toward Holistic Assessment. Front. Educ. 2024, 9, 1470979. [Google Scholar] [CrossRef]
  16. Mittal, S.; Vashist, S.; Chaudhary, K. Equitable Education and Sustainable Learning: A Literary Exploration of Integration of Artificial Intelligence in Education for SDGs Advancement. In Explainable AI for Education: Recent Trends and Challenges; Singh, T., Dutta, S., Vyas, S., Rocha, Á., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 101–118. ISBN 978-3-031-72410-7. [Google Scholar]
  17. Malik, A.R.; Pratiwi, Y.; Andajani, K.; Numertayasa, I.W.; Suharti, S.; Darwis, A. Marzuki Exploring Artificial Intelligence in Academic Essay: Higher Education Student’s Perspective. Int. J. Educ. Res. Open 2023, 5, 100296. [Google Scholar] [CrossRef]
  18. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.P.T. Ethical Principles for Artificial Intelligence in Education. Educ. Inf. Technol. 2023, 28, 4221–4241. [Google Scholar] [CrossRef]
  19. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration, and Rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4. [Google Scholar] [CrossRef]
  20. Ng, D.T.K.; Wu, W.; Leung, J.K.L.; Chiu, T.K.F.; Chu, S.K.W. Design and Validation of the AI Literacy Questionnaire: The Affective, Behavioural, Cognitive and Ethical Approach. Br. J. Educ. Technol. 2024, 55, 1082–1104. [Google Scholar] [CrossRef]
  21. OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report 2024. Available online: https://cdn.openai.com/papers/gpt-4.pdf (accessed on 6 May 2025).
  22. Hoy, M.B. Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Med. Ref. Serv. Q. 2018, 37, 81–88. [Google Scholar] [CrossRef]
  23. Kaiser, H.F. An Index of Factorial Simplicity. Psychometrika 1974, 39, 31–36. [Google Scholar] [CrossRef]
  24. Hair, J.F.; Gabriel, M.L.D.S.; Silva, D.d.; Braga, S. Development and Validation of Attitudes Measurement Scales: Fundamental and Practical Aspects. RAUSP Manag. J. 2019, 54, 490–507. [Google Scholar] [CrossRef]
  25. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  26. Hu, L.; Bentler, P.M. Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  27. Chan, C.K.Y.; Hu, W. Students’ Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education. Int. J. Educ. Technol. High Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  28. George Pallivathukal, R.; Kyaw Soe, H.H.; Donald, P.M.; Samson, R.S.; Hj Ismail, A.R. ChatGPT for Academic Purposes: Survey Among Undergraduate Healthcare Students in Malaysia. Cureus 2024, 16, e53032. [Google Scholar] [CrossRef]
  29. Ayala-Chauvin, M.; Avilés-Castillo, F. Optimizing Natural Language Processing: A Comparative Analysis of GPT-3.5, GPT-4, and GPT-4o. Data Metadata 2024, 3, 359. [Google Scholar] [CrossRef]
  30. Digital Education Council. Digital Education Council Global AI Student Survey 2024. 2024. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-student-survey-2024 (accessed on 6 May 2025).
  31. Ravšelj, D.; Keržič, D.; Tomaževič, N.; Umek, L.; Brezovar, N.; Iahad, N.A.; Abdulla, A.A.; Akopyan, A.; Segura, M.W.A.; AlHumaid, J.; et al. Higher Education Students’ Perceptions of ChatGPT: A Global Study of Early Reactions. PLoS ONE 2025, 20, e0315011. [Google Scholar] [CrossRef] [PubMed]
  32. Dai, C.-P.; Ke, F. Educational Applications of Artificial Intelligence in Simulation-Based Learning: A Systematic Mapping Review. Comput. Educ. Artif. Intell. 2022, 3, 100087. [Google Scholar] [CrossRef]
  33. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  34. Sun, J.C.-Y.; Tsai, H.-E.; Cheng, W.K.R. Effects of Integrating an Open Learner Model with AI-Enabled Visualization on Students’ Self-Regulation Strategies Usage and Behavioral Patterns in an Online Research Ethics Course. Comput. Educ. Artif. Intell. 2023, 4, 100120. [Google Scholar] [CrossRef]
  35. Selwyn, N.; Cordoba, B.G.; Andrejevic, M.; Campbell, L. AI for Social Good: Australian Public Attitudes Toward AI and Society; Monash University: Clayton, VIC, Australia, 2020. [Google Scholar] [CrossRef]
  36. Ouyang, F.; Wu, M.; Zheng, L.; Zhang, L.; Jiao, P. Integration of Artificial Intelligence Performance Prediction and Learning Analytics to Improve Student Learning in Online Engineering Course. Int. J. Educ. Technol. High Educ. 2023, 20, 4. [Google Scholar] [CrossRef]
  37. Chen, K.; Tallant, A.C.; Selig, I. Exploring Generative AI Literacy in Higher Education: Student Adoption, Interaction, Evaluation and Ethical Perceptions. Inf. Learn. Sci. 2024, 126, 132–148. [Google Scholar] [CrossRef]
  38. The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates; Holmes, W., Porayska-Pomsta, K., Eds.; Routledge, Taylor & Francis Group: New York, NY, USA, 2022. [Google Scholar]
  39. Sullivan, M.; Kelly, A.; McLaughlan, P. ChatGPT in Higher Education: Considerations for Academic Integrity and Student Learning. J. Appl. Learn. Teach. 2023, 6, 31–40. [Google Scholar] [CrossRef]
  40. Williams, R.T. The Ethical Implications of Using Generative Chatbots in Higher Education. Front. Educ. 2024, 8, 1331607. [Google Scholar] [CrossRef]
  41. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
  42. Xu, X.; Song, Y. Is There a Conflict between Automation and Environment? Implications of Artificial Intelligence for Carbon Emissions in China. Sustainability 2023, 15, 12437. [Google Scholar] [CrossRef]
  43. Bashir, N.; Donti, P.; Cuff, J.; Sroka, S.; Ilic, M.; Sze, V.; Delimitrou, C.; Olivetti, E. The Climate and Sustainability Implications of Generative AI. MIT Explor. Gener. AI 2024, 1–45. [Google Scholar] [CrossRef]
  44. Hosseini, M.; Gao, P.; Vivas-Valencia, C. A Social-Environmental Impact Perspective of Generative Artificial Intelligence. Environ. Sci. Ecotechnol. 2025, 23, 100520. [Google Scholar] [CrossRef]
  45. Berthelot, A.; Caron, E.; Jay, M.; Lefèvre, L. Estimating the Environmental Impact of Generative-AI Services Using an LCA-Based Methodology. Procedia CIRP 2024, 122, 707–712. [Google Scholar] [CrossRef]
  46. Hutter, R.; Hutter, M. Chances and Risks of Artificial Intelligence—A Concept of Developing and Exploiting Machine Intelligence for Future Societies. Appl. Syst. Innov. 2021, 4, 37. [Google Scholar] [CrossRef]
  47. Wach, K.; Công, D.D.; Ejdys, J.; Kazlauskaitė, R.; Korzyński, P.; Mazurek, G.; Paliszkiewicz, J.; Ziemba, E.W. The Dark Side of Generative Artificial Intelligence: A Critical Analysis of Controversies and Risks of ChatGPT. Entrep. Bus. Econ. Rev. 2023, 11, 7–30. [Google Scholar] [CrossRef]
  48. Capraro, V.; Lentsch, A.; Acemoglu, D.; Akgun, S.; Akhmedova, A.; Bilancini, E.; Bonnefon, J.-F.; Brañas-Garza, P.; Butera, L.; Douglas, K.M.; et al. The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making. Proc. Natl. Acad. Sci. USA Nexus 2024, 3, 191. [Google Scholar] [CrossRef]
  49. Nedungadi, P.; Tang, K.-Y.; Raman, R. The Transformative Power of Generative Artificial Intelligence for Achieving the Sustainable Development Goal of Quality Education. Sustainability 2024, 16, 9779. [Google Scholar] [CrossRef]
  50. Kong, S.-C.; Cheung, W.M.-Y.; Zhang, G. Evaluating an Artificial Intelligence Literacy Programme for Developing University Students’ Conceptual Understanding, Literacy, Empowerment and Ethical Awareness. Educ. Technol. Soc. 2023, 26, 16–30. [Google Scholar]
  51. UNESCO Recommendation on the Ethics of Artificial Intelligence. 2021, p. 21. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000380455 (accessed on 6 May 2025).
  52. OECD Recommendation of the Council on Artificial Intelligence. 2024, p. 12. Available online: https://legalinstruments.oecd.org/en/instruments/%20OECD-LEGAL-0449 (accessed on 6 May 2025).
  53. European Parliament and Council. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013 and (EU) 2018/858. Off. J. Eur. Union 2024, L 168, 1–157. Available online: http://data.europa.eu/eli/reg/2024/1689/oj (accessed on 3 May 2025).
  54. Comissão Temporária sobre Inteligência Artificial no Brasil–CTIA. Projeto de Lei nº 2338, de 2023. Dispõe sobre o uso da inteligência artificial no Brasil; Senado Federal: Brasília, Brazil, 2023; Available online: https://www25.senado.leg.br/web/atividade/materias/-/materia/162339 (accessed on 3 May 2025).
  55. Congreso de la República del Perú. Ley N.° 31814: Ley Que Promueve el Uso de la Inteligencia Artificial en Favor del Desarrollo Económico y Social del País. Lima, Peru. 2023. Available online: https://www.gob.pe/institucion/congreso-de-la-republica/normas-legales/4565760-31814 (accessed on 3 May 2025).
  56. Núñez Ramos, S. Proyecto de Ley Orgánica de Regulación y Promoción de la Inteligencia Artificial en Ecuador; Asamblea Nacional del Ecuador: Quito, Ecuador, 2024; Available online: https://www.asambleanacional.gob.ec/sites/default/files/private/asambleanacional/filesasambleanacionalnameuid-19130/2192.%20Proyecto%20de%20Ley%20Org%C3%A1nica%20de%20Regulaci%C3%B3n%20y%20Promoci%C3%B3n%20de%20la%20Inteligencia%20Artificial%20en%20Ecuador%20-pnu%C3%B1ez/pp%20-%20proyecto%20de%20ley%20450889-nu%C3%B1ez.pdf (accessed on 3 May 2025).
  57. Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and Higher Education: A Systematic Review. Comput. Educ. Artif. Intell. 2023, 5, 100152. [Google Scholar] [CrossRef]
  58. Pawlicki, M.; Pawlicka, A.; Uccello, F.; Szelest, S.; D’Antonio, S.; Kozik, R.; Choraś, M. Evaluating the Necessity of the Multiple Metrics for Assessing Explainable AI: A Critical Examination. Neurocomputing 2024, 602, 128282. [Google Scholar] [CrossRef]
  59. Qadhi, S.M.; Alduais, A.; Chaaban, Y.; Khraisheh, M. Generative AI, Research Ethics, and Higher Education Research: Insights from a Scientometric Analysis. Information 2024, 15, 325. [Google Scholar] [CrossRef]
  60. ISO/IEC 42001:2023; Information Technology—Artificial Intelligence—Management System. International Organization for Standardization: Geneva, Switzerland, 2023.
  61. NIST-AI-600-1; Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. National Institute of Standards and Technology: Gaithersburg, MD, USA, 2024.
  62. Soler Garrido, J.; De Nigris, S.; Bassani, E.; Sanchez, I.; Evas, T.; André, A.-A.; Boulangé, T. Harmonised Standards for the European AI Act; JRC Publications Repository: Brussels, Belgium, 2024. [Google Scholar]
  63. Wu, C.; Zhang, H.; Carroll, J.M. AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities. Future Internet 2024, 16, 354. [Google Scholar] [CrossRef]
  64. Mahajan, P. What Is Ethical: AIHED Driving Humans or Human-Driven AIHED? A Conceptual Framework Enabling the Ethos of AI-Driven Higher Education. 2025. Available online: https://www.researchgate.net/publication/389622934_What_is_Ethical_AIHED_Driving_Humans_or_Human-Driven_AIHED_A_Conceptual_Framework_enabling_the_‘Ethos’_of_AI-driven_Higher_Education (accessed on 12 April 2025).
Figure 1. Structural model of the relationships between learning dimensions and the ethical use of AI.
Table 1. Most used AI applications in academic activities.

AI Application      Frequency   Percentage (%)
ChatGPT             518         62.2
Gemini              131         15.7
Siri                70          8.4
Google Bard         41          4.9
Copilot             10          1.2
Midjourney          8           1.0
DALL-E              6           0.7
Perplexity          6           0.7
Fireflies           5           0.6
Claude              4           0.5
Deepseek            4           0.5
Monica              3           0.4
Others (<0.1%)      7           0.8
None                20          2.4
Total               833         100
Table 2. Reliability coefficients for each dimension of the AI learning and ethics questionnaire.

Factor                Cronbach’s α   McDonald’s Ω   Number of Items
Affective learning    0.982          0.982          19
Behavioral learning   0.970          0.970          11
Cognitive learning    0.973          0.972          9
Ethical learning      0.991          0.991          16
Total                 0.992          0.992          55
Table 3. Convergent indicators for the AI learning and ethics questionnaire.

Factor                Item    Factor Loading (λ)   Cronbach’s α   CR      AVE
Affective learning    AA1     0.804                0.982          0.982   0.747
                      AA2     0.817                0.982
                      AA3     0.759                0.982
                      AA4     0.797                0.982
                      AA5     0.863                0.981
                      AA6     0.874                0.981
                      AA7     0.865                0.981
                      AA8     0.865                0.981
                      AA9     0.862                0.981
                      AA10    0.873                0.981
                      AA11    0.892                0.981
                      AA12    0.903                0.981
                      AA13    0.857                0.982
                      AA14    0.874                0.981
                      AA15    0.889                0.981
                      AA16    0.906                0.981
                      AA17    0.904                0.981
                      AA18    0.899                0.981
                      AA19    0.902                0.981
Behavioral learning   AC1     0.842                0.969          0.971   0.754
                      AC2     0.869                0.968
                      AC3     0.875                0.968
                      AC4     0.905                0.967
                      AC5     0.897                0.967
                      AC6     0.883                0.968
                      AC7     0.887                0.967
                      AC8     0.885                0.967
                      AC9     0.905                0.967
                      AC10    0.807                0.969
                      AC11    0.790                0.970
Cognitive learning    ACO1    0.856                0.971          0.973   0.801
                      ACO2    0.861                0.971
                      ACO3    0.889                0.970
                      ACO4    0.909                0.969
                      ACO5    0.902                0.969
                      ACO6    0.892                0.970
                      ACO7    0.913                0.969
                      ACO8    0.915                0.969
                      ACO9    0.913                0.969
Ethical learning      AE1     0.875                0.991          0.991   0.869
                      AE2     0.897                0.990
                      AE3     0.868                0.991
                      AE4     0.932                0.990
                      AE5     0.936                0.990
                      AE6     0.934                0.990
                      AE7     0.944                0.990
                      AE8     0.963                0.990
                      AE9     0.959                0.990
                      AE10    0.967                0.990
                      AE11    0.952                0.990
                      AE12    0.939                0.990
                      AE13    0.952                0.990
                      AE14    0.922                0.990
                      AE15    0.926                0.990
                      AE16    0.945                0.990
Table 4. HTMT analysis of the AI learning and ethics questionnaire.

Factor         A-Affective   A-Behavioral   A-Cognitive   A-Ethical
A-Affective    -
A-Behavioral   0.884         -
A-Cognitive    0.847         0.917          -
A-Ethical      0.783         0.767          0.811         -
Table 5. Results of the hypothesis validation in the structural model.

Hypothesis   Relationship           β        p-Value   Result
H1           Total → Ethical        0.675    ***       Accepted
H2           Behavioral → Ethical   −0.128   0.058     Not accepted
H3           Cognitive → Ethical    0.567    ***       Accepted
H4           Affective → Ethical    0.413    ***       Accepted
Note: *** p < 0.001 (two-tailed).
