Article

Passing with ChatGPT? Ethical Evaluations of Generative AI Use in Higher Education

by Antonio Pérez-Portabella 1, Mario Arias-Oliva 2, Graciela Padilla-Castillo 3 and Jorge de Andrés-Sánchez 4,*
1 Estudis de Comunicació, Universitat Rovira i Virgili and Universidad Complutense de Madrid, Campus Catalunya, 43002 Tarragona, Spain
2 Marketing Department, Faculty of Business & Economy, University Complutense of Madrid, Campus de Somosaguas, 28223 Madrid, Spain
3 Journalism and New Media Department, Faculty of Information Sciences, University Complutense of Madrid, Avenida Complutense 3, 28040 Madrid, Spain
4 Social and Business Research Laboratory, University Rovira i Virgili, Campus de Bellissens, 43204 Reus, Spain
* Author to whom correspondence should be addressed.
Digital 2025, 5(3), 33; https://doi.org/10.3390/digital5030033
Submission received: 14 July 2025 / Revised: 31 July 2025 / Accepted: 4 August 2025 / Published: 6 August 2025

Abstract

The emergence of generative artificial intelligence (GenAI) in higher education offers new opportunities for academic support while also raising complex ethical concerns. This study explores how university students ethically evaluate the use of GenAI in three academic contexts: improving essay writing, preparing for exams, and generating complete essays without personal input. Drawing on the Multidimensional Ethics Scale (MES), the research assesses five philosophical frameworks—moral equity, relativism, egoism, utilitarianism, and deontology—based on a survey conducted among undergraduate social sciences students in Spain. The findings reveal that students generally view GenAI use as ethically acceptable when used to improve or prepare content, but express stronger ethical concerns when authorship is replaced by automation. Gender and full-time employment status also influence ethical evaluations: women respond differently than men in utilitarian dimensions, while working students tend to adopt a more relativist stance and are more tolerant of full automation. These results highlight the importance of context, individual characteristics, and philosophical orientation in shaping ethical judgments about GenAI use in academia.

1. Introduction

1.1. Ethical Challenges of Generative Artificial Intelligence in Higher Education

The emergence of generative artificial intelligence (GenAI) is profoundly and rapidly transforming the landscape of higher education. Tools such as ChatGPT, Copilot, or DALL·E are reshaping how students and educators access knowledge, generate content, and solve problems [1]. This disruption not only offers new opportunities for personalized learning and the development of cognitive skills but also raises ethical, pedagogical, and assessment-related challenges in the university context [2]. In this new scenario, higher education institutions are compelled to revise their teaching and learning models, promoting a critical and strategic integration of these technologies to prepare students not only to use them but also to understand their social, epistemological, and professional implications [3].
The relevance of ethical aspects in the use of GenAI tools in higher education has a dual dimension. On one hand, it relates to learning processes, where principles such as equal opportunities, inclusiveness, and adherence to deontological norms become particularly important [4]. In this context, the use of artificial intelligence should align with educational values that avoid unfair practices—such as plagiarism or improper delegation of tasks—and foster a fair, equitable, and responsible learning environment [5]. The second dimension goes beyond the educational realm, as it involves the need for broader ethical reflection on the use of GenAI in any context—academic, professional, or social. This entails considering aspects such as algorithmic transparency, authorship, data privacy, and the social implications of these systems, which affect all users regardless of the context in which they are employed [6].
The use of GenAI by students cannot be simplistically categorized as inherently good or bad; its ethical and pedagogical value depends on how it is used [7]. For instance, when students use these tools to support their study, clarify concepts, or generate initial ideas, such use may not only be acceptable but even desirable [8]. GenAI can also contribute to educational inclusion by offering a more accessible and personalized pathway to learning [9]. However, the scenario changes significantly when GenAI is used to write entire essays or answer exam questions without active student involvement. In such cases, the principle of authorship is violated, leading to dishonest behavior [10] and, depending on institutional policies, potentially even illegal conduct [11]. Thus, the problem does not lie in the tool itself but in how it is used: replacing intellectual effort with an automated solution breaks with the core values of higher education, such as responsibility, personal effort, and academic integrity [12].
The considerations outlined above motivate the present study, which conducts empirical research on university students’ views regarding the use of GenAI, approached from the perspective of moral philosophy. Given that the ethical judgment of GenAI use is entirely contextual [5], three scenarios are proposed, representing different levels of intensity of GenAI use:
  • Scenario 1: Improving or correcting essays before submission. The content of the paper remains unchanged. GenAI is simply used to rephrase sentences, fix syntactic errors, or make the text clearer. This is generally considered a fully acceptable use in academic research [13].
  • Scenario 2: Preparing for an exam with the help of GenAI. The tool is used to summarize topics, generate practice tests, help understand concepts better, and develop study material. This scenario has positive aspects—such as optimizing time and enhancing student productivity—but also negative ones, such as the potential for dependency or reliance on incorrect or biased information [10].
  • Scenario 3: Producing most of the content of an essay using GenAI because the student has little time left before the deadline and perceives the task as too complex. Little or no editing is conducted afterward. This scenario borders on academic fraud [12], yet it is difficult to penalize, as there are currently no reliable tools to detect GenAI-generated text [14].

1.2. Philosophical Framework

Various schools of moral philosophy offer differing perspectives on what constitutes right action, including in the field of technology use [15,16]. Deontology, represented by Kant, holds that morality is grounded in duty and adherence to universal principles and social contracts. Consequentialist approaches, such as moral egoism and utilitarianism, assess the morality of actions based on their outcomes. Meanwhile, virtue ethics, inspired by Aristotle, centers moral reflection on the development of character and personal virtues rather than on specific rules or consequences [17]. Therefore, it is overly simplistic to classify an action as ethical or unethical without situating such a judgment within a specific moral framework [18].
Consider a case in which a brilliant and responsible student fails to write an essay due to serious personal issues. From the perspective of academic deontology, extensive use of GenAI to write the essay would be ethically questionable, as it would violate principles of integrity and authorship inherent in academia. If such use were evidenced, the work should be invalidated according to institutional policies. However, from the standpoint of moral egoism, one could argue that the use of GenAI is justified. It allows the student to maintain a high academic average, which may be crucial for future opportunities. Moreover, if this is an exceptional case and the student does not habitually misuse the tool, the negative impact on their education would be minimal, further justifying its use from this ethical lens.
To assess students’ evaluations of the morality of using GenAI, this study employs the version of the Multidimensional Ethics Scale (MES) [18] adapted by Shawver and Sennetti [19]. This scale considers five ethical perspectives: moral equity (ME), relativism (RE), egoism (EG), utilitarianism (UT), and contractualism or deontological approach (DE).
This instrument has been applied in various contexts, including business and commercial decision-making [20,21,22], the use of certain sports technologies [23], the adoption of controversial technologies [24,25], and attitudes toward immunity passports during the COVID-19 pandemic [26]. It has also been widely used in academic settings to evaluate ethical judgments related to plagiarism, cheating on non-collaborative exams, and the falsification of recommendation letters [27,28,29,30,31,32,33].
This study does not assess whether the use of GenAI in the three proposed scenarios is legal or illegal, as such a judgment may depend on institutional regulations. It could even be desirable to encourage its use in the first scenario, while explicitly prohibiting it in the third [3]. The evaluation we refer to is interpretative, as students often develop their own understanding of what constitutes acceptable behavior, based on observational learning and the social cues provided by instructors [34]. These interpretative rules are shaped within informal classroom dynamics, influenced by social norms, values, and interactions that, while not always explicit, significantly affect students’ behavior and learning processes [35].
In the formation of ethical judgments regarding the three GenAI use scenarios, two potentially moderating variables are considered: gender and whether the student combines academic activity with full-time employment.
Sex differences in ethical perceptions have been highlighted in various studies. Women tend to show greater concern for the impact of their decisions on others and display higher levels of empathy than men [27]. For their part, men—though not less ethical—tend to adapt their decisions more to the context, with a stronger outcome-oriented focus [15].
Regarding students’ employment status, those who combine academic studies with full-time work often have diverse motivations, which may stem from a desire to acquire practical knowledge or from economic pressures [36]. Additionally, students balancing studies and work face a greater number of obligations, which may lead to conflicts between their professional and academic lives [37]. These factors may influence their perceptions of the ethical appropriateness of using GenAI in the three evaluated scenarios.
Specifically, this study aims to answer the following research questions:
RQ1: Does the scenario in which GenAI is used influence students’ ethical judgments?
RQ2: Does gender influence ethical judgments regarding the use of GenAI in each of the three proposed scenarios?
RQ3: Does working full-time influence ethical judgments regarding the use of GenAI in each of the three proposed scenarios?
This article is structured as follows. Section 2 discusses the five ethical frameworks used to assess GenAI use. Section 3 presents the methodology, including the sample, questionnaire, and analytical strategy. Section 4 reports the results for each research objective. Section 5 offers a general discussion of the findings and their implications. Finally, Section 6 outlines the study’s main conclusions, limitations, and future research directions.

2. Ethical Evaluation of the Use of Generative Artificial Intelligence from Various Moral Philosophy Perspectives and Personal Determinants

2.1. Structuring Judgments About the Use of GenAI Around Moral Philosophy Theories

Research on generative artificial intelligence (GenAI) in the field of education has grown extensively since 2023. A simple search in Web of Knowledge using the terms “ChatGPT” and “Education,” restricted to English-language articles published between 2023 and July 2025, yields 4129 documents (759 in 2023, 1998 in 2024, and 1372 in 2025). The combination of “Generative Artificial Intelligence” and “Education” under the same criteria produces 3371 results, distributed as follows: 315 in 2023, 1887 in 2024, and 1169 in 2025.
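The yearly figures above can be summed as a quick arithmetic check that they match the reported totals (a trivial sketch; the variable names are ours, and the counts are those cited in the text):

```python
# Yearly document counts reported for the two Web of Knowledge queries
# (English-language articles, 2023 to July 2025).
chatgpt_education = {2023: 759, 2024: 1998, 2025: 1372}
genai_education = {2023: 315, 2024: 1887, 2025: 1169}

# The per-year figures should add up to the totals cited in the text.
assert sum(chatgpt_education.values()) == 4129  # "ChatGPT" AND "Education"
assert sum(genai_education.values()) == 3371    # "Generative Artificial Intelligence" AND "Education"
```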
This vast body of work makes it difficult to develop a comprehensive taxonomy of studies, given the diversity of approaches and topics addressed. Nevertheless, in relation to the specific scope that informs our research, four main categories can be identified.
The first includes general reflections on how GenAI can be applied to improve education [1,2,5,6,9,10,13,14]. These studies propose cross-cutting applications of GenAI and discuss its advantages and disadvantages, including aspects related to inclusion, educational efficiency, and potential risks. Although ethical considerations may be mentioned, they are not the central focus of these contributions.
A second line of research consists of studies that specifically address ethical issues related to GenAI, either from a general perspective [3,7,38] or through more specific cases, such as essay writing [8,12,39], image generation [40], the explainability of GenAI outcomes [41], or its use strictly as a teaching tool [32].
A third category includes empirical studies focused on the use of GenAI in specific educational contexts, such as mathematics instruction [42], educational programming [43], language learning [44], or inclusive education [45].
Lastly, a relevant category comprises research that explores the perceptions and attitudes of potential GenAI users in education through surveys and interviews. This includes both qualitative studies [46,47,48,49] and quantitative ones. Among the latter, some follow a descriptive approach [50], while others are grounded in theoretical frameworks of technology acceptance. Notable examples include studies based on Self-Determination Theory [51], the Technology Acceptance Model [52], the Expectation Confirmation Model and the Information System Success Theory [53], and the Unified Theory of Acceptance and Use of Technology [54,55]. From a faculty-centered perspective, the application of MES has also been explored [32].
In the following section, we structure the approaches outlined in the previous paragraphs. These are organized around moral theories introduced earlier and briefly defined in Table 1. Accordingly, the study incorporates the ethical reflections developed by previous authors [38,39] which are grounded in the positive and negative consequences of GenAI use identified in the literature [1,10]. Furthermore, the context of application in this study is not limited to a specific academic discipline but rather focuses on use scenarios that are relevant across all university programs. The methodological approach is empirical—specifically, quantitative—and is based on the constructs of the MES [19], as opposed to the more commonly used technology acceptance models [51,53].
Subsequently, and as illustrated in Figure 1, after developing the theoretical framework that situates students’ arguments for and against the use of GenAI within moral philosophy theories—specifically those included in the MES—and considering personal factors such as gender and employment status, we apply a structured survey built around philosophical constructs. This allows us to examine the discriminative power of gender and work status in shaping students’ ethical judgments. The study follows the structure outlined previously and integrates the diverse perspectives discussed above.

2.2. Moral Equity

Moral equity (ME) is an ethical principle that refers to fair, impartial, and considerate treatment of all individuals, particularly in contexts where differences in power, resources, or abilities exist [56]. It involves recognizing that not everyone starts with the same conditions. Therefore, acting justly does not necessarily mean treating everyone the same, but rather providing each individual with what they need to have real equality of opportunity [57].
Unlike equality, which focuses on offering the same treatment to everyone, moral equity takes into account each person’s particular circumstances (such as socioeconomic background, abilities, or structural disadvantages) in order to compensate for inequalities and promote a fairer environment [58]. In the educational context, acting with moral equity means providing additional support to those who need it so they can reach the same level of development and participation as their peers [59].
From the perspective of moral equity, the use of GenAI tools like ChatGPT by university students presents a number of positive and negative aspects that deserve careful consideration.
Among the positive aspects, one of the most notable is the equality of opportunity these tools can provide. By offering immediate access to high-quality, and often marginal, knowledge, GenAI tools have the potential to reduce educational inequalities among students from different socioeconomic backgrounds, ensuring that all can access information and academic support regardless of their origin [10]. Similarly, GenAI can act as a compensatory mechanism for initial disadvantages, helping students with fewer academic resources or language difficulties to perform under more equitable conditions compared to those with stronger educational backgrounds [46].
Another key strength is the educational personalization made possible by these tools. By adapting to individual learning styles and paces, GenAI can provide more understandable explanations tailored to each student’s specific needs [51], contributing to a more just, inclusive, and less discriminatory learning environment. Moreover, they offer meaningful support to students with special needs, as their ability to provide clear answers, repeat concepts, and adapt in real time is especially valuable for those with cognitive or learning difficulties [45].
However, there are also negative aspects from the standpoint of moral equity. One of the main challenges is technological inequality, as access to these tools depends on having a stable internet connection, appropriate devices, and basic digital skills [60]. Furthermore, not all students are equally able to benefit from GenAI—due to a lack of technological competencies [46] or knowledge of how to interact effectively with the tool [50]. In addition, paid versions of GenAI engines like ChatGPT offer enhanced capabilities compared to free ones, which could exacerbate existing educational gaps. This disparity may also extend to institutions, with varying financial capacities to sustain the high cost of GenAI maintenance [46]. Similarly, linguistic coverage of GenAI tools remains unequal [46], potentially reinforcing the Matthew Effect in education, whereby GenAI, due to its high production and maintenance costs, could make the poor poorer and the rich richer [61].
Moreover, algorithmic bias and indirect discrimination represent significant ethical risks [6,38]. Since these models learn from existing data, they can replicate and amplify cultural, social, or economic biases, resulting in responses that may be unfair or even offensive to certain student groups [13].
Another ethical concern involves the potential for unfair competition. When some students use these tools inappropriately—for example, to copy answers or have AI write their essays—clear inequity arises in comparison to those who choose not to use them due to ethical convictions or a commitment to academic integrity [62].
Finally, there is the risk of dependency and reduced individual effort: excessive use of GenAI could lead to a loss of active study habits, undermining the value placed on personal merit and intellectual effort [10], which conflicts with the principles of moral justice in educational settings.

2.3. Relativism

Moral relativism (RE) holds that human actions cannot be judged as inherently right or wrong, but must be understood within the social, cultural, or individual context in which they occur [18]. Applied to the educational domain, this implies that the use of GenAI cannot be deemed intrinsically good or bad; its legitimacy depends on the context, the rules and recommendations of the institution, academic expectations, and the implicit or explicit agreements between students and instructors [11].
From this perspective, there are several positive aspects to the use of ChatGPT. One is its adaptability to different contexts, since relativism allows for flexible use of these tools depending on institutional, disciplinary, or social norms [7]. In fact, many North American universities have established guidelines that encourage the use of GenAI in some contexts, discourage it in others, and prohibit it in certain practices [11]. From a relativist viewpoint, the appropriateness of using technological tools depends on the specific context: using ChatGPT as a personalized tutor to help an international student overcome language barriers—something potentially advisable—is not the same as using it to generate an essay, which may be considered academic fraud [5]. Likewise, in learning environments where practical understanding is valued more than memorization, using large language models (LLMs) to better grasp course material is not only valid, but consistent with the educational ideals promoted by the institution [46].
Educational decisions are often strongly influenced by peer dynamics [63]. In this sense, when reference groups show interest and active engagement in the use of GenAI for learning, they can foster a collective perception of its benefits [14], which may encourage adoption. In contexts where the integration of AI-based tools is under evaluation, the opinions of individuals deemed important by the potential user play a key role in acceptance [64]. On the other hand, pressure from classmates and instructors can also generate concerns about academic integrity when using GenAI [61].
From a relativist standpoint, it is also possible to consider the use of language models to prepare for exams as immoral, depending on the cultural or institutional context. In universities where the use of such tools is explicitly banned or heavily restricted, their use would be considered dishonest [3].
Moreover, higher education institutions have increasingly emphasized the importance of promoting among students a sense of social responsibility and environmental awareness [65]. Considering that LLMs have a higher environmental impact compared to other alternatives, such as standard Google searches [6,66], it is reasonable to assume that in institutional contexts with a strong commitment to sustainability, a relativist perspective might judge the intensive use of these technologies negatively.

2.4. Moral Egoism

From the perspective of moral egoism (EG), a correct action is one that maximizes self-interest [67]. Within this theoretical framework, the rightness of decisions is assessed primarily based on the personal benefits they provide [68]. Therefore, the use of GenAI should be evaluated in terms of academic performance, efficiency, well-being, or competitive advantage. Whether its use is socially fair or ethically universalizable would be a secondary concern.
Among the positive aspects that moral egoism may identify in the use of GenAI, one of the most prominent is the maximization of individual academic benefit. It enables the design of customized learning pathways [9]. It also allows students to delegate repetitive or unengaging tasks, optimizing time and effort [69], thus allowing them to focus on activities that provide greater personal value, such as critical thinking or solving real-world problems [7,50]. This tool also offers strategic advantages in academic performance. Its use can result in higher grades, as it may act as a personal tutor by offering rapid access to content [69], or improve the readability and structure of submitted assignments [8], thereby enhancing student performance and increasing opportunities for scholarships, postgraduate admissions, or future employability.
Another benefit from this viewpoint is saving time and energy, since ChatGPT facilitates quick and organized access to information [69], which frees up resources for activities that enhance individual well-being. These may include both academic and personal pursuits such as rest, leisure, or social interaction—contributing to a higher quality of life and greater well-being, a variable positively associated with academic achievement [70]. Well-being or the absence of suffering is a hedonistic goal commonly aligned with moral egoism [68].
However, from the same moral egoist standpoint, risks and negative consequences can also be identified. A short-term concern is the possibility of receiving inaccurate, biased, or inconsistent information, or content affected by the so-called hallucination phenomenon [10]. In the long term, excessive dependence on these tools [6] could harm students’ future outcomes. Continuous use of AI may erode fundamental skills such as written expression [71], reducing intellectual autonomy and real preparedness for the professional world. In tasks related to visual design and creativity, GenAI also shows weaknesses, including a lack of interface intuitiveness, emotional appeal, and perceived innovation, as well as limited control and feedback during the creative process [72].
Additionally, there is a risk of personal reputational damage. If a student is found to have committed plagiarism or misused artificial intelligence, they may face academic sanctions or harm to their personal image—an outcome that directly contradicts their self-interest, given that reputation is part of one’s personal assets [73].

2.5. Utilitarianism

From the perspective of utilitarianism (UT), an action is considered morally correct if it promotes the greatest possible well-being for the greatest number of people [74]. Applied to the university context, the use of tools such as ChatGPT is assessed based on its ability to increase collective utility, that is, to generate more happiness, reduce suffering, and enhance the well-being of the academic community as a whole [16].
Among the positive aspects identified by this ethical theory is the maximization of academic efficiency. If the use of GenAI enables students to streamline their learning processes, improve study organization, and resolve doubts more quickly [69], this should translate into greater productivity and better outcomes for a large number of students. Additionally, the reduction in academic workload results in lower stress levels and improved mental well-being [75]. The study [50] reported that, following training in the use of ChatGPT, students experienced significant development in competencies such as analysis, synthesis, information management, problem solving, and learning to learn.
Another key benefit is the promotion of accessible and universal learning. GenAI can serve as a powerful tool for reducing educational barriers, especially for those facing limitations in resources or academic support [10]. By democratizing access to high-quality information and guidance, it supports the desirable goal of universal access to education [4]. Furthermore, it facilitates access to texts in languages unfamiliar to the student, thereby reducing linguistic injustice [8]. These contributions may positively impact the well-being of society as a whole by enabling a larger number of individuals to pursue higher education and gain access to broader learning materials and knowledge.
However, from a utilitarian standpoint, it is also essential to consider risks and potential negative effects that could diminish collective well-being. One major concern is the widespread loss of essential skills. While the use of GenAI offers immediate benefits, its excessive use could erode key competencies such as creativity, critical thinking, and intellectual autonomy [72,76], which would have long-term negative effects on professional preparedness and on society’s capacity to address complex problems. Added to this is inequitable technological access, which can create significant gaps among students of different socioeconomic levels [76] and among students in countries with varying economic capacities [61]. If access to and effective use of GenAI is not equitable, there is a real risk of widening educational inequality and reducing collective utility.
Another concern is the erosion of public trust and institutional ethics. The widespread and inappropriate use of these tools could undermine the credibility of educational institutions, negatively affecting public perceptions regarding the validity of academic degrees and the fairness of assessment processes. Universities currently face the challenge of distinguishing between genuine and AI-generated text [46]. Existing software for detecting AI-generated content often makes errors—either failing to identify actual AI-generated text or mistakenly attributing human-generated text to AI [5].
From a perspective of social responsibility and sustainability awareness [65], the massive use of large language models (LLMs) entails high consumption of energy and water, generating an environmental impact that could be avoided through the use of technologies with a lower ecological footprint [77].

2.6. Deontology or Contractualism

From the perspective of deontology (DE), also known as contractualism, actions are not judged by their consequences but by their consistency with preexisting principles, rules, or ethical agreements [78]. In the university context, this means that student actions—in this case, the use of GenAI—should be evaluated based on their adherence to the ethical and moral commitments established between students, educational institutions, and society. The goal is not to maximize benefits but to fulfill the duties and obligations arising from a social contract—explicit or implicit—that defines what is morally acceptable in the academic realm [18].
From this viewpoint, GenAI presents several positive aspects. First, it can support the enhancement of educational accessibility, contributing to the fulfillment of the social contract that obliges institutions to ensure fair and equitable education [4]. By providing equal access to knowledge, these tools help materialize the ethical commitment to reduce inequality, promoting a more inclusive system—both by adapting education to individuals with disabilities [45] and by customizing courses to individual needs [9]. Furthermore, its integration can reflect a commitment to innovation and educational progress, recognizing that both students and institutions have a duty to adapt to technological advances like AI when such advances can improve learning processes and benefit the academic community [79].
However, from a deontological perspective, the use of GenAI also poses serious ethical risks. One of the main concerns is the violation of the academic-ethical contract when the tool is used for dishonest purposes such as plagiarism or cheating during assessments [10]. In such cases, the student may breach the commitment assumed upon enrolling in university studies; the primary obligation of universities is not only to educate competent professionals but also to foster ethical behavior [80] and form citizens with social responsibility [65]. In fact, even unintentionally, students may commit plagiarism or infringe copyright when using GenAI, especially when it provides responses without citing sources [46].
Another problem that may arise from the use of GenAI is overreliance on artificial intelligence, leading students to neglect other educational resources, including more traditional ones [46]. By limiting their exposure to diverse perspectives on academic subjects [76], students may fail to optimize their learning process. In public universities in Spain, tuition is at least partially subsidized by public funds—and fully so for scholarship recipients—so students have an obligation to make appropriate use of the resources made available to them by society.
The current trend among universities is to incorporate greater social awareness and holistic student development into their objectives, focusing not only on academic and professional training but also on citizenship education [81]. In this context, it is essential for GenAI users to reflect on the significant carbon footprint associated with its use—far greater than that of other technologies such as Google searches [6]. Raising awareness of this environmental impact contributes to the exercise of responsibility in promoting environmental sustainability [46], which constitutes a relevant issue for educational authorities to consider [3].

2.7. Gender and Employment Status as Potential Factors Shaping Ethical Perceptions of GenAI Use

2.7.1. Potential Differences in the Perception of GenAI Use Between Women and Men

Ethical perception differences between men and women have been widely documented in the academic literature, revealing distinct patterns in how each gender approaches moral dilemmas, and these patterns extend to academic settings [27]. These differences are explained by biological, cultural, social, and contextual factors that shape both moral orientations and intentions to act ethically [15]. Pilcher and Smith [82] show that men tend to respond to moral dilemmas in a more utilitarian manner, whereas women tend to prefer deontological responses, guided by principles they consider ethically desirable rather than by the consequences of their actions.
The difference in ethical reasoning can also be understood through gender socialization theory, which argues that men and women are raised from an early age with different ethical values and priorities. Under similar conditions, ethical judgments may differ not due to reasoning capacity, but because of internalized moral priorities shaped by gender roles [83].
Beyond the cultural context of the study, egoistic or altruistic attitudes are also influenced by biological factors. It has been observed that women, across diverse cultural settings, tend to act more altruistically—even when this involves personal sacrifice—which may make them more inclined to act according to moral equity principles. In contrast, men tend to display behaviors more oriented toward self-assertion or genetic legacy [84].
The ethical perceptions of men and women may also differ in their emotional and rational foundations. Women tend to exhibit greater sensitivity to the impact of their actions on others and a stronger intention to act ethically, whereas men—though not necessarily less ethical—respond more variably depending on the context, with a more rationalistic and pragmatic approach [85].

2.7.2. Full-Time Employment Status as a Source of Differences in Ethical Perceptions of GenAI Use

Having a full-time or at least stable job while pursuing university studies is an increasingly common practice worldwide, driven by a combination of economic, educational, and personal motivations [86]. In Spain, approximately one-third of students work on a stable basis [87]. The reasons students choose to combine both activities vary and depend on factors such as socioeconomic background, working hours, the type of job held, and the student’s intrinsic motivations [87].
The need for income is undoubtedly one of the main reasons why students choose to work while studying. This trend has been observed in diverse cultural contexts such as Southeast Asia, Mexico, and Spain, where students report similar motivations [36,88,89,90,91]. However, employment is not always driven by urgent financial needs. Many students from middle- and upper-class backgrounds combine work and study to gain professional experience, achieve economic independence, and enhance their career prospects—beyond the immediate need for income [90].
It is also important to note that heavy workloads and the simultaneous demands of work and academic responsibilities can result in high levels of stress and fatigue, which may lead to a decline in academic performance [37,92]. When students work more than 20 h per week, their academic performance tends to suffer, and in some cases, this can even result in dropping out of university [91].
Moreover, specialized literature has pointed out the conflict between the roles of worker and student, whereby strong involvement in one domain can interfere with performance in the other [37]. The impact of work on academic outcomes varies depending on the student’s primary commitment [93], reinforcing the importance of considering employment status as a relevant individual circumstance that shapes how students navigate university life—and, consequently, how they judge the use of GenAI in their academic work.
The varying motivations, socioeconomic contexts, and levels of responsibility between students with and without jobs can lead to divergent perceptions regarding the legitimacy and appropriateness of GenAI, depending on the ethical perspective from which it is assessed.
From the moral equity perspective, the use of GenAI may be viewed as a resource that helps equalize academic performance and personal development opportunities between students who work and those who do not. Functioning as an adaptive tutor, GenAI has the potential to accommodate students’ individual characteristics, among which employment status plays a central role. In this way, GenAI could help reduce gaps resulting from limited time or unequal access to other educational support systems.
Within the framework of moral relativism, perceptions about the appropriateness and limitations of GenAI use may vary significantly between students with such distinct circumstances. The personal and motivational differences between working students and those who dedicate themselves exclusively to studying may lead to different criteria regarding when and how it is ethically valid to use this technology.
From the perspective of moral egoism, one could argue that a full-time working student—who makes an additional effort in terms of time and resources (e.g., funding their own tuition)—has a stronger justification for adopting tools such as GenAI, insofar as these tools facilitate academic success and help maximize the efficiency of their limited personal resources.
Utilitarianism also offers a favorable reading: GenAI use can improve the compatibility between study and work, benefitting those already in this dual role and encouraging others who had previously seen this option as unviable. This expanded accessibility to higher education could help train more skilled professionals and increase workforce participation, generating long-term collective benefits.
Finally, from a deontological perspective, students who work intensively face time constraints that may limit their ability to care for family or maintain social relationships. In this sense, fulfilling their academic responsibilities could be seen as part of a moral commitment to those close to them. The use of GenAI, by facilitating this fulfillment, may be understood as an ethically legitimate way of honoring that implicit contract.

3. Materials and Methods

3.1. Population and Sampling

The population targeted in this study consisted of undergraduate students in social science disciplines such as economics, business administration, and social work. These academic programs require not only competencies in written and oral communication, but also a humanistic understanding of human activities and analytical-mathematical reasoning skills [94]. Consequently, the curricula include both humanities courses—such as economic history and philosophy [94]—and quantitative subjects, such as statistics [95].
The sample was obtained through purposive sampling. The questionnaire was administered between April and June 2025 to undergraduate students primarily enrolled in social science programs—such as economics, business, or social work—at two Spanish universities. The first university is located in a large metropolitan area (Madrid), which has a population of approximately 7 million people. In contrast, the main campuses of the second university are located in the Camp de Tarragona area, in Catalonia, which has a population of around 600,000 inhabitants.
The survey was self-administered and accessed via a Google Forms link that allowed only one response per IP address. To ensure participant anonymity, the form required no login credentials and collected no email addresses or identifying metadata; tracking features were disabled, and responses were stored without IP addresses, device IDs, or any other identifier that could link them to individual participants.
Additionally, the questionnaire required participants to answer all questions related to the variables presented in Figure 1, while unrelated items—such as the university of origin—were not mandatory.
The voluntary and anonymous nature of participation helped gather responses from individuals with genuinely altruistic motivations and, therefore, potentially more thoughtful and honest answers. Moreover, this approach contributed to minimizing social desirability bias—the tendency to respond in a socially acceptable manner rather than expressing one’s true thoughts or behaviors.

3.2. Sample Profile

Table 2 presents the profile of the analyzed sample, which comprised a total of 151 valid responses. The gender distribution was 43 men (28.48%) and 108 women (71.52%). The overrepresentation of women in the sample can be explained by two complementary factors. First, in Spain, women constitute the majority of students enrolled in social science degree programs, which include disciplines such as sociology, education, and communication. Women account for roughly 60–65% of enrollment in the social sciences and humanities fields [96]. Second, prior research has shown that women are generally more likely to participate in surveys than men, particularly in voluntary, academic, and opinion-based studies. Likewise, women consistently exhibit higher survey response rates across different modalities, suggesting a gender-based difference in willingness to engage with research instruments [97].
The average age of participants was 21.89 years, with a standard deviation of 3.54 years. In terms of age ranges, 50 participants were between 18 and 20 years old, 33 participants reported being 21, 27 were 22, and 49 respondents indicated they were 23 years old or older. Additionally, 44 participants (29.14%) reported perceiving their academic performance as above average. Regarding employment status, 58 students (38.41%) stated that they had full-time jobs, while the remainder either worked occasionally or were not employed.

3.3. Questionnaire and Measurement Model

The questionnaire was written and distributed exclusively in Spanish. Although Spanish is not the primary language of instruction for undergraduate courses at one of the universities, it is a mandatory language and accounts for approximately 30% of the teaching in undergraduate subjects.
It presented three different scenarios of GenAI use, as described in the introduction: using GenAI to improve an essay without making substantial changes; using GenAI intensively to prepare for an exam; and, finally, using GenAI intensively to produce an essay. After introducing each scenario—and before moving on to the next—respondents were asked to answer 12 items assessing their ethical evaluation of GenAI use in the given situation.
The items used to measure ethical dimensions are detailed in Table 3 and are based on the Multidimensional Ethics Scale (MES) [19]. Items were rated on an 11-point Likert scale (ranging from 0 to 10), which measured the degree of agreement with each statement, from “strongly disagree” to “strongly agree.” This scale and its 11-point format have been employed in various contexts, such as the ethical evaluation of cyborg technologies [24,25], assessments of immunity passports [26], and the use of emerging sports technologies [23].
Several authors recommend the use of 11-point scales over the more common 4-, 5-, or 7-point scales due to multiple advantages. Most notably, this format captures a wider range of response nuances, overcoming the limitations of shorter scales. Additionally, an 11-point Likert scale provides greater sensitivity in measurement, approximating interval-level data and facilitating the assumption of normal data distribution. It is also intuitive for respondents, as the 0-to-10 range is widely recognized and easily understood [98].
Prior to final implementation, the questionnaire underwent a preliminary test with four faculty members and two students, who provided feedback on its clarity and readability. Based on this pilot phase, adjustments were made to improve the wording, although no substantial content changes were necessary.
Furthermore, both gender and employment status were modeled as binary variables. For gender, two categories were used: female and male. In the case of employment status, students were classified as either working in a stable capacity with permanent contracts, or not working/working sporadically, such as holding internships or short-term jobs only a few months a year, typically when classes were not in session.

3.4. Data Analysis

The data analysis was structured to sequentially address the research objectives RO1, RO2, and RO3. As a preliminary step, the reliability of the ethical scales used was assessed through analysis of their internal consistency, using Cronbach’s alpha (CA) and composite reliability (CR). Convergent validity was examined using the average variance extracted (AVE).
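For readers wishing to reproduce the reliability and convergent validity checks, the underlying formulas can be sketched in Python with NumPy. The function names and the example loadings below are illustrative only, not the study's actual estimates:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def composite_reliability(loadings):
    """CR from standardized loadings: (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    loadings = np.asarray(loadings, dtype=float)
    s = loadings.sum() ** 2
    e = (1.0 - loadings ** 2).sum()
    return s / (s + e)

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

# Illustrative loadings for a hypothetical three-item construct
lam = np.array([0.82, 0.78, 0.75])
cr_ok = composite_reliability(lam) > 0.7    # CR threshold used in the study
ave_ok = average_variance_extracted(lam) > 0.5  # AVE threshold used in the study
```

With these example loadings, both thresholds (CR > 0.7, AVE > 0.5) are met, mirroring the acceptance criteria applied to the ethical scales.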
To address RO1, a repeated measures ANOVA was conducted across the three evaluated scenarios, with adjustments for the effects of gender and full-time employment status. This analysis involved four groups formed by crossing participants’ gender with their employment/study status. In addition, pairwise comparisons between scenarios were carried out for each item, applying Tukey-adjusted p-values to the t-ratios.
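The core within-subject computation can be illustrated with a minimal NumPy sketch of a one-way repeated measures ANOVA for a single item rated under the three scenarios. This is a simplified analogue: it omits the gender and employment adjustments and the Tukey-corrected pairwise contrasts reported here, and the example ratings are hypothetical:

```python
import numpy as np

def rm_anova_oneway(scores):
    """One-way repeated measures ANOVA.

    scores: (n_subjects, k_conditions) array of ratings, e.g. one MES item
    rated under the three GenAI scenarios.
    Returns (F, df_effect, df_error, partial eta squared).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_cond = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # scenario effect
    ss_subj = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # subject effect
    ss_total = ((scores - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj                    # residual
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_error / df_error)
    eta2 = ss_cond / (ss_cond + ss_error)                      # partial eta squared
    return F, df_cond, df_error, eta2

# Hypothetical ratings of one item across the three scenarios
data = np.array([[8, 9, 4], [7, 8, 5], [9, 9, 3], [6, 7, 4]])
F, df1, df2, eta2 = rm_anova_oneway(data)
```

The partial eta squared returned here corresponds to the effect-size measure reported in the Results section.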
To address RO2 and RO3, separate MANOVA analyses were performed for each of the three scenarios. The dependent variables were grouped according to their corresponding ethical construct, resulting in five MANOVAs per scenario. These analyses examined the discriminant power of the two factors—gender and employment status—while also considering their interaction.
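The Pillai's trace statistic underlying these MANOVAs can likewise be sketched for the one-factor case (the study crosses two factors and their interaction; this shows only the single-factor core, with hypothetical group data):

```python
import numpy as np

def pillai_trace(groups):
    """Pillai's trace for a one-way MANOVA.

    groups: list of (n_g, p) arrays, one per group, where the p columns are
    the items of one ethical construct (e.g. split by employment status).
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_obs = np.vstack(groups)
    grand = all_obs.mean(axis=0)
    p = all_obs.shape[1]
    H = np.zeros((p, p))  # between-groups (hypothesis) SSCP matrix
    E = np.zeros((p, p))  # within-groups (error) SSCP matrix
    for g in groups:
        d = g.mean(axis=0) - grand
        H += len(g) * np.outer(d, d)
        c = g - g.mean(axis=0)
        E += c.T @ c
    return np.trace(H @ np.linalg.inv(H + E))

# Hypothetical two-item construct scores for two employment-status groups
g_full_time = np.array([[6.0, 7.0], [5.0, 6.5], [6.5, 7.5]])
g_not_working = np.array([[4.0, 5.0], [3.5, 4.5], [4.5, 5.5]])
pt = pillai_trace([g_full_time, g_not_working])
```

For two groups the statistic is bounded in [0, 1]; values near 0 indicate no mean separation between groups on the construct's items.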
Regarding the adequacy of the sample size relative to the number of responses obtained, it was deemed appropriate for the study’s objectives. The design aimed to detect medium-sized effects (η2 = 0.125), with a significance level of 5% and statistical power of 80%. For RO1, which involved a repeated measures ANOVA with three scenarios and four groups (derived from crossing gender [male/female] with employment status), and assuming moderate correlation between measures (0.5), the sample size calculation performed with G*Power 3.1.0 [99] indicated a minimum of 108 observations was required.
For RO2 and RO3, which involved a separate MANOVA for each of the five ethical dimensions per scenario—and considering that these dimensions consisted of two to three items, and the four groups defined by the combination of gender and employment status—the minimum sample size estimated with G*Power 3.1.0 was 128 observations. Thus, the 151 observations collected exceeded the required thresholds.
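The sample-size reasoning can be approximated in code. The sketch below uses the standard fixed-effects one-way ANOVA noncentrality formula (lambda = f2 * N, with f2 = eta2 / (1 - eta2), so eta2 = 0.125 gives f2 of about 0.143). It is an illustrative simplification, not a replication of G*Power's repeated-measures or MANOVA routines, so the minimum N it returns will differ from the 108 and 128 figures above:

```python
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, n_groups, f2, alpha=0.05):
    """Power of a fixed-effects one-way ANOVA F test."""
    df1 = n_groups - 1
    df2 = n_total - n_groups
    lam = f2 * n_total                          # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)    # critical F under H0
    return 1 - ncf.cdf(f_crit, df1, df2, lam)   # P(reject H0 | H1)

def min_sample_size(n_groups, f2, target=0.80, alpha=0.05):
    """Smallest total N reaching the target power."""
    n = n_groups + 1
    while anova_power(n, n_groups, f2, alpha) < target:
        n += 1
    return n

eta2 = 0.125                 # medium effect size, as in the study design
f2 = eta2 / (1 - eta2)
n_req = min_sample_size(n_groups=4, f2=f2)
```

The four groups correspond to crossing gender with employment status; the repeated-measures correlation of 0.5 used in G*Power reduces the required N relative to this between-subjects approximation.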

4. Results

4.1. Descriptive Statistics and Results for Research Objective 1

Table 3 presents the validity measures of the ethical scales and the descriptive statistics. All the scales demonstrate internal consistency, as both Cronbach’s alpha (CA) and composite reliability (CR) exceed 0.7, and convergent validity, since the average variance extracted (AVE) is greater than 0.5.
The table shows that item evaluations in the third scenario are systematically and clearly lower than those in the first two scenarios. Furthermore, the second scenario—which implies a more intensive use of GenAI than the first—typically receives higher ratings than the first. It is worth noting that even in the third scenario, the evaluation falls below the neutral value (5) in only six items rather than across the board: two of the moral equity items, ME1 (4.84) and ME3 (4.91); the third relativism item, RE3 (4.30); the second egoism item, EG2 (4.96); and the two deontology items, DE1 (4.42) and DE2 (4.31).
Table 4 reports the results of the repeated measures ANOVA with correction for gender and employment status, as well as pairwise mean comparisons, applying Tukey’s adjustment. The repeated measures ANOVA indicates that the null hypothesis of homogeneity in the evaluations across the three scenarios can be rejected for all items, as in every case the p-value (p) is less than 0.001. The effect size (η2) for this heterogeneity ranges from medium (e.g., in EG1, η2 = 0.088) to large (e.g., in ME1, η2 = 0.386).
In the pairwise mean differences (MD) comparisons, it is observed that, for all items, the evaluation of the third scenario is always lower than that of the first and second, and this difference is statistically significant. The differences between the first and third scenario range from MD = −2.82 in the case of ME1 to MD = −0.97 in ME2, both with p < 0.001. The differences between the second and third scenarios range from MD = −2.96 (p < 0.001) in ME1 to MD = −0.79 (p = 0.004) in EG1.
It is also noteworthy that the second scenario receives better evaluations than the first across all ethical items. However, this difference is only statistically significant in one utilitarianism item (UT1, MD = 0.61, p = 0.006) and in the second deontology item, DE2 (MD = 0.73, p = 0.014).

4.2. Results of Research Objectives 2 and 3

Table 5, Table 6 and Table 7 present the MANOVA results regarding the discriminatory power of gender and employment status in ethical perceptions of GenAI use across the three proposed scenarios.
In Table 5, corresponding to the first scenario—using GenAI to slightly improve an essay—only employment status shows discriminatory power in ethical perceptions. This effect is statistically significant in moral equity (Pillai’s trace [PT] = 0.065, p = 0.021), in relativism (PT = 0.120, p < 0.001), and in utilitarianism (PT = 0.046, p = 0.031). While in the cases of moral equity and utilitarianism, students without full-time jobs tend to evaluate GenAI use more positively, in the relativist dimension (especially in item RE3), students who work full-time express a more favorable view of GenAI use.
Table 6 presents the results for the second scenario. In this case, only the gender factor shows discriminatory power, and only within the utilitarianism dimension (PT = 0.043, p = 0.04). Item UT1 is rated more positively by women, while UT2 receives more favorable evaluations from men. This suggests that the evaluations are not necessarily more lenient or stricter depending on gender, but simply different in nature.
Table 7 shows the differences in ethical perceptions for the third scenario, where GenAI is used intensively to write an essay. Significant gender differences again emerge only within utilitarianism (PT = 0.047, p = 0.031). As before, women rate UT1 more favorably, while men give more positive evaluations to UT2.
Employment status appears as a key discriminating factor in ethical perceptions in the third scenario. Students who work full-time clearly evaluate GenAI use more positively than those who do not work, particularly in the dimensions of relativism (PT = 0.109, p < 0.001) and deontology (PT = 0.094, p < 0.001).

5. Discussion

5.1. Discussion of Findings

This study contributes to the understanding of university students’ ethical judgments regarding the use of generative artificial intelligence (GenAI) tools in academic contexts. From a theoretical perspective, it confirms that students’ ethical perceptions are highly contextual and sensitive to the specific type of technology use. The differentiation across scenarios shows that ethical judgment is not monolithic: using GenAI for minor stylistic improvements or as a study aid for exam preparation is viewed positively, while its use to produce full assignments without the student’s active involvement is seen as more problematic from some (but not all) moral philosophy perspectives.
These findings support and expand theoretical arguments regarding the need to distinguish between different uses of GenAI in university settings [7,61], demonstrating how ethical evaluations vary according to context. Additionally, the results reaffirm deontological assumptions, which emphasize the fulfillment of academic duties such as authorship and personal responsibility, especially in the third scenario, where GenAI use borders on academic dishonesty.
Another relevant contribution is the identification of gender differences in ethical perceptions, which are consistent with previous literature. We cannot assert that women are inherently stricter than men in their ethical evaluations; rather, the differences are subtle, aligning with the findings of [27], and only emerge in specific ethical assessments, as noted in [33]. In fact, across several scenarios and types of ethical evaluations, no significant differences are observed between men and women, which is also in line with [29]. However, we did find that women tend to assess intensive uses of GenAI differently than men, particularly within the utilitarianism dimension, aligning with prior research suggesting distinct ethical sensitivities among women [82,100]. These differences appear to stem from divergent evaluations of GenAI’s usefulness—rated more favorably by women—and its cost–benefit balance—viewed more positively by men. This suggests the importance of integrating a gender perspective into the design of university policies regarding the ethical use of artificial intelligence.
A particularly noteworthy finding arises in the first scenario, where employment status alone shows discriminatory power in ethical perceptions. This result can be interpreted through the lenses of moral equity, moral egoism, and utilitarianism. Students who do not work full-time tend to evaluate GenAI more positively from the perspectives of equity and utility, which may be linked to lower time pressure and greater availability to complete tasks independently. In contrast, working students tend to justify GenAI use more through moral relativism, especially with reference to the approval of academic peers, indicating a greater influence of social and professional context on their ethical judgments. In this scenario, we cannot claim that being employed leads to a better ethical evaluation of GenAI in absolute terms—it depends on the ethical dimension being considered.
Working students also evaluate GenAI more favorably than non-working students in the third scenario. This finding suggests that life context and external pressures influence ethical judgments about GenAI, even in contexts where there is broad consensus that its use is questionable. Due to increased time constraints [90], working students perceive GenAI as a tool to maximize academic performance without necessarily compromising their personal integrity, as it helps them minimize conflicts between study and work [37].

5.2. Theoretical Implications

The discussion on the ethical implications of generative artificial intelligence (GenAI) is necessarily multidimensional. Different ethical theories can be used to justify—or challenge—the use of GenAI depending on the specific context in which it is applied. In this study, we examined the field of higher education through three representative scenarios that are broadly relevant across academic disciplines. However, the same analytical approach can be extended to more specialized settings, such as laboratory work in science degrees, where the boundary between technical assistance and intellectual delegation may become blurred.
Beyond the academic sphere, the ethical evaluation framework employed here can be effectively applied to professional and workplace environments, where the use of GenAI tools is becoming increasingly common as a means of enhancing productivity. For instance, employees may use GenAI to improve the clarity and tone of corporate documents without altering the content; others may rely on it to produce full reports or presentations with minimal personal input; and some may even use these tools to respond to technical inquiries or make decisions outside their area of expertise. Each of these scenarios raises distinct ethical concerns related to authorship, professional responsibility, and integrity.
Applying the MES framework to such cases would allow for a nuanced analysis of how professionals justify or reject the use of GenAI depending on their ethical orientation—whether deontological, utilitarian, relativist, or otherwise. Additionally, personal and contextual factors such as organizational hierarchy, technical background, or perceived performance pressure may significantly influence these judgments, making this a fertile area for the empirical extension of the present study’s approach.
Moreover, personal circumstances (such as gender or employment status, in our case) should not be seen as direct predictors of more lenient or stricter ethical judgments, but rather as variables that may exert different influences depending on the ethical dimension under consideration. In other words, individuals with similar profiles may express comparable views in one ethical domain and divergent views in another. In workplace contexts, factors such as marital status, income level, or educational background may also shape how potential users ethically assess the use of GenAI.

5.3. Practical Implications

From a practical standpoint, these results have relevant implications for educational institutions. First, it is essential to differentiate between types of GenAI usage when establishing guidelines. A blanket prohibition does not seem appropriate; instead, institutions should clearly define which uses are compatible with educational values and which are not [3].
As suggested by Cunha et al. [30] and De Ruyter and Schinkel [80], universities should provide explicit training in ethics, which—under the current context—must include discussions around GenAI. This training should address the complexity of ethical judgments depending on the context and the moral principles involved. An approach based on ethical virtue development, rather than prohibitions, could foster more responsible and reflective use of technology.
Given that ethical judgment varies by gender and employment status, intervention strategies should be sensitive to these differences. This may include targeted actions for groups that are more prone to evaluating intensive GenAI use less critically.
Finally, the results emphasize the importance of designing assessments that minimize opportunities for dishonest use of GenAI. Oral exams, supervised multi-stage projects, or tasks requiring complex personal reasoning can make inappropriate use of these tools more difficult [10].
In addition to these measures, institutions should consider developing clear, context-sensitive policies in collaboration with both faculty and students. Engaging students in the co-creation of guidelines for the ethical use of GenAI can promote a sense of shared responsibility and increase compliance. Furthermore, the integration of real-case scenarios into classroom discussions can help students internalize ethical boundaries through applied reasoning rather than abstract norms. In other areas of questionable academic behavior—such as plagiarism—awareness-raising interventions have proven effective in reducing students’ approval of such practices, suggesting that similar strategies could be valuable in the context of GenAI use [33].
Technological tools that promote transparency—such as version control in document editing or GenAI-use disclosures—could be integrated into learning management systems [41]. These do not aim to penalize students but to cultivate accountability and reflection. Universities should establish spaces for ongoing dialog, where educators, students, and technologists can periodically review the ethical challenges emerging from GenAI developments and update institutional practices accordingly. In doing so, institutions can move from reactive regulation to proactive ethical leadership.
The intersection of moral philosophy with legal foundations also surfaces in discussions about the ethics of law. The concept of the “internal morality of law” reveals a connection between moral principles and legal standards, suggesting that potential rules governing the use of GenAI should reflect broader ethical understandings [101]. This perspective has sparked important debates within moral philosophy, particularly regarding the tension between legal positivism and natural law theories, and highlights the need to embed moral considerations within legal and institutional frameworks [101]. This tension, commonly observed in macro-level legal systems (such as those of the state), should likewise be taken into account when developing academic policies and regulations.

6. Conclusions

6.1. Principal Takeaways

This study confirms that university students’ ethical perceptions of GenAI are not homogeneous; rather, they are dependent on the context of use and students’ personal circumstances. Students tend to view GenAI positively when used to correct or improve the formal quality of their assignments, and especially when used as a source of information to prepare for exams. Conversely, they perceive its use more negatively when it involves generating substantial essay content without their own input.
Gender and employment status differences are significant. Women display ethical judgments—at least from the utilitarian perspective—that differ in certain respects from those of men. Additionally, working students appear more tolerant of controversial GenAI use, likely as a response to time-related demands they face. These findings underscore the importance of considering individual and social factors in any academic reflection or regulation concerning the ethical use of GenAI.

6.2. Study Limitations and Further Research

Among the study’s limitations is the sample size and profile: the participants were students from social science degree programs at two Spanish universities, which may limit the generalizability of the results to other academic disciplines or cultural contexts. Further research may therefore extend the analysis to students from other disciplines, such as engineering, law, or medicine, where GenAI use may carry different implications. Furthermore, the self-reported nature of the questionnaires may introduce social desirability bias, although efforts were made to minimize this through anonymous responses.
Another important limitation is that the study focuses on self-reported ethical perceptions, not on actual behaviors. There is always the possibility that expressed attitudes do not translate directly into actions. Future research should explore this gap between ethical perception and behavior through longitudinal studies or experimental designs.
The study relies on hypothetical scenarios to elicit ethical judgments, which, although grounded in realistic academic practices, may not fully capture the emotional and situational complexity involved in real-life decision-making. Further assessments may incorporate experimental designs or simulated academic environments to examine how students actually behave when faced with GenAI use under time pressure, peer influence, or evaluation anxiety.
The analysis centers exclusively on students’ ethical evaluations, without including the views of faculty members or institutional stakeholders, who play a key role in defining academic integrity policies. Further studies could adopt a multi-actor approach by comparing ethical perceptions across students, instructors, and academic administrators. This would provide a more holistic understanding of normative alignment—or potential mismatch—within higher education. Notably, the three scenarios proposed for student evaluation are also highly relevant to faculty practices, as they mirror common academic tasks such as writing research papers, preparing teaching materials, or studying and gathering documentation for lectures and presentations. Including faculty perspectives would therefore enrich the analysis by addressing how ethical standards around GenAI use are understood and applied across different roles within the university ecosystem.
Another promising line of research would be to analyze the impact of explicit institutional policies on the ethical assessments and actual behaviors of students concerning GenAI. This could provide a better understanding of the role of regulatory frameworks in shaping ethical judgments. It would also be relevant to examine how targeted ethical training on the use of artificial intelligence might shape students’ perceptions and behaviors.
The results of this study also suggest that future research may benefit from incorporating ethical constructs into generative AI acceptance studies. This integration could be pursued using a pure MES approach [19,24,28,29], or by complementing existing research grounded in technology acceptance frameworks, such as the Technology Acceptance Model [52], the Expectation Confirmation Model and Information System Success Theory [53], or the Unified Theory of Acceptance and Use of Technology [54,55].
Finally, it would be interesting to investigate how ethical perceptions evolve as GenAI technologies become more sophisticated, accessible, and pervasive, and to identify new ethical dimensions that may emerge—particularly those related to sustainability, privacy, or authorship.

Author Contributions

Conceptualization, A.P.-P. and M.A.-O.; methodology, A.P.-P. and G.P.-C.; software, J.d.A.-S.; validation, M.A.-O.; formal analysis, A.P.-P.; investigation, A.P.-P. and M.A.-O.; resources, G.P.-C.; data curation, J.d.A.-S.; writing—original draft preparation, A.P.-P. and J.d.A.-S.; writing—review and editing, G.P.-C.; visualization, J.d.A.-S.; supervision, G.P.-C.; project administration, M.A.-O. and G.P.-C.; funding acquisition, J.d.A.-S. and M.A.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Telefonica and the Telefonica Chair on Smart Cities of the Universitat Rovira i Virgili and Universitat de Barcelona (project number: 42. DB.00.18.00).

Institutional Review Board Statement

(1) All participants received detailed written information about the study and its procedure; (2) no data directly or indirectly related to the health of the subjects were collected, and therefore the Declaration of Helsinki was not mentioned when informing the subjects; (3) the anonymity of the collected data was ensured at all times; (4) the research received a favorable evaluation from the Ethics Committee of the researchers’ institution (CE_20250710_10_SOC).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANOVA: Analysis of Variance
AVE: Average Variance Extracted
CA: Cronbach’s Alpha
CR: Composite Reliability
DE: Deontology
EG: Moral Egoism
GenAI: Generative Artificial Intelligence
MANOVA: Multivariate Analysis of Variance
ME: Moral Equity
PT: Pillai’s Trace
RE: Moral Relativism
UT: Utilitarianism

References

  1. Lo, C.K. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  2. Yan, L.; Zhang, Y.; Yin, C.; Yang, S.; Xu, Y. Practical and ethical challenges of large language models in education: A systematic scoping review. Br. J. Educ. Technol. 2024, 55, 90–112. [Google Scholar] [CrossRef]
  3. Licht, K.F. Generative artificial intelligence in higher education: Why the ‘banning approach’ to student use is sometimes morally justified. Philos. Technol. 2024, 37, 113. [Google Scholar] [CrossRef]
  4. Culp, J. Ethics of education. In Encyclopedia of the Philosophy of Law and Social Philosophy; Springer: Dordrecht, The Netherlands, 2023; pp. 1–7. [Google Scholar] [CrossRef]
  5. Farrelly, T.; Baker, N. Generative artificial intelligence: Implications and considerations for higher education practice. Educ. Sci. 2023, 13, 1109. [Google Scholar] [CrossRef]
  6. Madsen, D.Ø.; Toston, D.M. ChatGPT and Digital Transformation: A Narrative Review of Its Role in Health, Education, and the Economy. Digital 2025, 5, 24. [Google Scholar] [CrossRef]
  7. Yu, H. Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front. Psychol. 2023, 14, 1181712. [Google Scholar] [CrossRef] [PubMed]
  8. Gorrieri, L. Should I use ChatGPT as an academic aid? A response to Aylsworth and Castro. Philos. Technol. 2025, 38, 8. [Google Scholar] [CrossRef]
  9. Davis, R.O.; Lee, Y.-J. Prompt: ChatGPT, create my course, please! Educ. Sci. 2023, 14, 24. [Google Scholar] [CrossRef]
  10. Alier, M.; García-Peñalvo, F.; Camba, J.D. Generative artificial intelligence in education: From deceptive to disruptive. Int. J. Interact. Multimed. Artif. Intell. 2024, 8, 5–14. [Google Scholar] [CrossRef]
  11. McDonald, N.; Johri, A.; Ali, A.; Collier, A.H. Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Comput. Hum. Behav. Artif. Hum. 2025, 100, 121. [Google Scholar] [CrossRef]
  12. Aylsworth, T.; Castro, C. Should I use ChatGPT to write my papers? Philos. Technol. 2024, 37, 117. [Google Scholar] [CrossRef]
  13. Dwivedi, Y.K.; Hughes, D.L.; Baabdullah, A.M.; Ribeiro-Navarrete, S.; Giannakis, M.; Makrides, A.; Davison, R.M.; Sharma, S.K.; Wood, S.L. Opinion paper: ‘So what if ChatGPT wrote it?’ Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  14. Mittal, U.; Sai, S.; Chamola, V. A comprehensive review on generative AI for education. IEEE Access 2024, 12, 142733–142759. [Google Scholar] [CrossRef]
  15. Rachels, J.; Rachels, S. The Elements of Moral Philosophy, 7th ed.; McGraw-Hill: New York, NY, USA, 2012. [Google Scholar]
  16. Hemmingsen, M. Act consequentialism and the gamer’s dilemma. Ethics Inf. Technol. 2025, 27, 19. [Google Scholar] [CrossRef]
  17. Driver, J. Moral Theory; Oxford University Press: Oxford, UK, 2022. [Google Scholar]
  18. Reidenbach, R.E.; Robin, D.P. Toward the development of a multidimensional scale for improving evaluations of business ethics. J. Bus. Ethics 1990, 9, 639–653. [Google Scholar] [CrossRef]
  19. Shawver, T.J.; Sennetti, J.T. Measuring ethical sensitivity and evaluation. J. Bus. Ethics 2009, 88, 663–678. [Google Scholar] [CrossRef]
  20. Cruz, C.A.; Shafer, W.E.; Strawser, J.R. A multidimensional analysis of tax practitioners’ ethical judgments. J. Bus. Ethics 2000, 24, 223–244. [Google Scholar] [CrossRef]
  21. Kujala, J. A multidimensional approach to Finnish managers’ moral decision-making. J. Bus. Ethics 2001, 34, 231–254. [Google Scholar] [CrossRef]
  22. Santalla-Banderali, Z.; Alvarado, J.M.; Malavé, J. Escala Multidimensional de Ética (MES-30): Versión Española y estructura factorial. Rev. Iberoam. Diagn. Eval. Psicol. 2024, 3, 133–150. [Google Scholar] [CrossRef]
  23. de Andrés-Sánchez, J.; de Torres-Burgos, F. Evaluación ética de atletas y triatletas españoles sobre uso de la tecnología Vaporfly. Rev. Iberoam. Cienc. Act. Fís. Deport. 2021, 10, 139–159. [Google Scholar] [CrossRef]
  24. Pelegrín-Borondo, J.; Arias-Oliva, M.; Murata, K.; Souto-Romero, M. Does ethical judgment determine the decision to become a cyborg? J. Bus. Ethics 2020, 161, 5–17. [Google Scholar] [CrossRef]
  25. Reinares-Lara, E.; Olarte-Pascual, C.; Pelegrín-Borondo, J. Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance. Comput. Hum. Behav. 2018, 85, 43–53. [Google Scholar] [CrossRef]
  26. Arias-Oliva, M.; Pelegrín-Borondo, J.; Almahameed, A.A.; de Andrés-Sánchez, J.D. Ethical attitudes toward COVID-19 passports: Evidences from Spain. Int. J. Environ. Res. Public Health 2021, 18, 13098. [Google Scholar] [CrossRef]
  27. Nguyen, N.T.; Basuray, M.T.; Smith, W.P.; Kopka, D.; McCulloh, D. Moral issues and gender differences in ethical judgment using Reidenbach and Robin’s (1990) multidimensional ethics scale: Implications in teaching of business ethics. J. Bus. Ethics 2008, 77, 417–430. [Google Scholar] [CrossRef]
  28. Jung, I. Ethical judgments and behaviors: Applying a multidimensional ethics scale to measuring ICT ethics of college students. Comput. Educ. 2009, 53, 940–949. [Google Scholar] [CrossRef]
  29. Yang, S.C. Ethical academic judgments and behaviors: Applying a multidimensional ethics scale to measure the ethical academic behavior of graduate students. Ethics Behav. 2012, 22, 281–296. [Google Scholar] [CrossRef]
  30. Cunha, M.; Figueiredo, J.; Breia, J.; Pina, J.; Almeida, S.; Oliveira, T. Morality and ethical acting in higher education students. In European Proceedings of Social and Behavioural Sciences, Proceedings of the ICEEPSY 2016: 7th International Conference on Education and Educational Psychology, Rhodes, Greece, 11–15 October 2016; Future Academy: Singapore, 2016; pp. 98–106. [Google Scholar]
  31. Leonard, L.N.; Riemenschneider, C.K.; Manly, T.S. Ethical behavioral intention in an academic setting: Models and predictors. J. Acad. Ethics 2017, 15, 141–166. [Google Scholar] [CrossRef]
  32. Pelegrín-Borondo, J.P.; Pascual, C.O.; Pascual, L.B.; Milon, A.G. Impact of ethical judgment on university professors encouraging students to use AI in academic tasks. In The Leading Role of Smart Ethics in the Digital World; Universidad de La Rioja: Logroño, Spain, 2024; pp. 53–61. [Google Scholar]
  33. Prashar, P.; Gupta, Y.; Dwivedi, Y.K. Plagiarism awareness efforts, students’ ethical judgment and behaviors: A longitudinal experiment study on ethical nuances of plagiarism in higher education. Stud. High. Educ. 2024, 49, 929–955. [Google Scholar] [CrossRef]
  34. Thornberg, R. A classmate in distress: Schoolchildren as bystanders and their reasons for how they act. Soc. Psychol. Educ. 2007, 10, 5–28. [Google Scholar] [CrossRef]
  35. Grühn, D.; Cheng, Y. A self-correcting approach to multiple-choice exams improves students’ learning. Teach. Psychol. 2014, 41, 335–339. [Google Scholar] [CrossRef]
  36. Cuevas, F.; de Ibarrola, M. ¿Estudias y trabajas? Los estudiantes trabajadores de la Universidad Autónoma Metropolitana, Unidad Azcapotzalco. Rev. Latinoam. Estud. Educ. 2009, 39, 121–149. [Google Scholar]
  37. Wyland, R.L.; Lester, S.W.; Mone, M.A.; Winkel, D.E. Work and school at the same time? A conflict perspective of the work–school interface. J. Leadersh. Organ. Stud. 2013, 20, 346–357. [Google Scholar] [CrossRef]
  38. Zhou, J.; Müller, H.; Holzinger, A.; Chen, F. Ethical ChatGPT: Concerns, challenges, and commandments. Electronics 2024, 13, 3417. [Google Scholar] [CrossRef]
  39. Shaw, D. The digital erosion of intellectual integrity: Why misuse of generative AI is worse than plagiarism. AI Soc. 2025. [Google Scholar] [CrossRef]
  40. Skulmowski, A.; Engel-Hermann, P. The ethics of erroneous AI-generated scientific figures. Ethics Inf. Technol. 2025, 27, 31. [Google Scholar] [CrossRef]
  41. Barman, K.G.; Wood, N.; Pawlowski, P. Beyond transparency and explainability: On the need for adequate and contextualized user guidelines for LLM use. Ethics Inf. Technol. 2024, 26, 47. [Google Scholar] [CrossRef]
  42. Şimşek, N. Integration of ChatGPT in mathematical story-focused 5E lesson planning: Teachers and pre-service teachers’ interactions with ChatGPT. Educ. Inf. Technol. 2025, 30, 11391–11462. [Google Scholar] [CrossRef]
  43. Suh, J.; Lee, K.; Lee, J. Programming education with ChatGPT: Outcomes for beginners and intermediate students. Educ. Inf. Technol. 2025. [Google Scholar] [CrossRef]
  44. Vaccino-Salvadore, S. Exploring the ethical dimensions of using ChatGPT in language learning and beyond. Languages 2023, 8, 191. [Google Scholar] [CrossRef]
  45. Gadekallu, T.R.; Khanna, A.; Maddikunta, P.K.R.; Pham, Q.V.; Dev, K.; Liyanage, M. The role of GPT in promoting inclusive higher education for people with various learning disabilities: A review. PeerJ Comput. Sci. 2025, 11, 2400. [Google Scholar] [CrossRef]
  46. Kasneci, E.; Sessler, K.; Kübler, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Winkler, T.; Sailer, M. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  47. Leite, H. Artificial intelligence in higher education: Research notes from a longitudinal study. Technol. Forecast. Soc. Change 2025, 215, 124115. [Google Scholar] [CrossRef]
  48. Tlais, S.; Alkhatib, A.; Hamdan, R.; HajjHussein, H.; Hallal, K.; El Malti, W.E. Artificial intelligence in higher education: Early perspectives from Lebanese STEM faculty. TechTrends 2025, 69, 598–606. [Google Scholar] [CrossRef]
  49. Monib, W.K.; Qazi, A.; Mahmud, M.M. Exploring learners’ experiences and perceptions of ChatGPT as a learning tool in higher education. Educ. Inf. Technol. 2025, 30, 917–939. [Google Scholar] [CrossRef]
  50. Cebrián Cifuentes, S.; Guerrero Valverde, E.; Checa Caballero, S. The Vision of University Students from the Educational Field in the Integration of ChatGPT. Digital 2024, 4, 648–659. [Google Scholar] [CrossRef]
  51. Shahzad, M.F.; Xu, S.; An, X.; Asif, M.; Javed, I. Do generative AI technologies play a double-edged sword role in education? Findings from hybrid approach using PLS-SEM and fsQCA. Educ. Inf. Technol. 2025, 1–30. [Google Scholar] [CrossRef]
  52. Al-Okaily, M. ChatGPT as an educational resource for accounting students: Expanding the classical TAM model. Educ. Inf. Technol. 2025. [Google Scholar] [CrossRef]
  53. Nan, D.; Sun, S.; Zhang, S.; Zhao, X.; Kim, J.H. Analyzing behavioral intentions toward generative artificial intelligence: The case of ChatGPT. Univ. Access Inf. Soc. 2025, 24, 885–895. [Google Scholar] [CrossRef]
  54. Hsu, W.-L.; Silalahi, A.D.K.; Tedjakusuma, A.P.; Riantama, D. How do ChatGPT’s benefit–risk paradoxes impact higher education in Taiwan and Indonesia? An integrative framework of UTAUT and PMT with SEM & fsQCA. Comput. Educ. Artif. Intell. 2025, 8, 100412. [Google Scholar] [CrossRef]
  55. Al Amin, M.; Kim, Y.S.; Noh, M. Unveiling the drivers of ChatGPT utilization in higher education sectors: The direct role of perceived knowledge and the mediating role of trust in ChatGPT. Educ. Inf. Technol. 2025, 30, 7265–7291. [Google Scholar] [CrossRef]
  56. Killen, M. The origins of morality: Social equality, fairness, and justice. Philos. Psychol. 2018, 31, 767–803. [Google Scholar] [CrossRef]
  57. Sandel, M.J. The Tyranny of Merit: What’s Become of the Common Good? Penguin UK: London, UK, 2020. [Google Scholar]
  58. Gosepath, S. Equality. In The Stanford Encyclopedia of Philosophy (Spring 2023 Edition); Zalta, E.N., Nodelman, U., Eds.; Stanford University: Stanford, CA, USA, 2023; Available online: https://plato.stanford.edu/archives/spr2023/entries/equality/ (accessed on 7 July 2025).
  59. Moriña, A. Inclusive education in higher education: Challenges and opportunities. Eur. J. Spec. Needs Educ. 2017, 32, 3–17. [Google Scholar] [CrossRef]
  60. Toh, Y.; Looi, C.K. Transcending the dualities in digital education: A case study of Singapore. Front. Digit. Educ. 2024, 1, 121–131. [Google Scholar] [CrossRef]
  61. Qadir, J. Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. In Proceedings of the 2023 IEEE Global Engineering Education Conference (EDUCON), Kuwait, Kuwait, 1–4 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–9. [Google Scholar] [CrossRef]
  62. Burgason, K.A.; Sefiha, O.; Briggs, L. Cheating is in the eye of the beholder: An evolving understanding of academic misconduct. Innov. High. Educ. 2019, 44, 203–218. [Google Scholar] [CrossRef]
  63. Andersen, S.; Hjortskov, M. The unnoticed influence of peers on educational preferences. Behav. Public Policy 2019, 6, 530–553. [Google Scholar] [CrossRef]
  64. Venkatesh, V. Adoption and use of AI tools: A research agenda grounded in UTAUT. Ann. Oper. Res. 2022, 308, 641–652. [Google Scholar] [CrossRef]
  65. Berei, E.B. The social responsibility among higher education students. Educ. Sci. 2020, 10, 66. [Google Scholar] [CrossRef]
  66. Vanderbauwhede, W. Estimating the increase in emissions caused by AI-augmented search. arXiv 2024, arXiv:2407.16894. [Google Scholar]
  67. Deigh, J. Egoism. In An Introduction to Ethics; Cambridge University Press: Cambridge, UK, 2010; pp. 25–55. [Google Scholar] [CrossRef]
  68. Shaver, R. Egoism. In The Stanford Encyclopedia of Philosophy (Spring 2023 Edition); Zalta, E.N., Nodelman, U., Eds.; Stanford University: Stanford, CA, USA, 2023; Available online: https://plato.stanford.edu/archives/spr2023/entries/egoism/ (accessed on 24 May 2025).
  69. Menon, D.; Shilpa, K. ‘Chatting with ChatGPT’: Analyzing the factors influencing users’ intention to use the OpenAI’s ChatGPT using the UTAUT model. Heliyon 2023, 9, e20142. [Google Scholar] [CrossRef]
  70. Bücker, S.; Nuraydin, S.; Simonsmeier, B.A.; Schneider, M.; Luhmann, M. Subjective well-being and academic achievement: A meta-analysis. J. Res. Pers. 2018, 74, 83–94. [Google Scholar] [CrossRef]
  71. Meyer, J.G.; Adkins, R.; Hossain, M.N.; Smith, T.R.; Gawryluk, J.R.; Clow, K.A. ChatGPT and large language models in academia: Opportunities and challenges. BioData Min. 2023, 16, 20. [Google Scholar] [CrossRef]
  72. Casteleiro-Pitrez, J. Generative Artificial Intelligence Image Tools among Future Designers: A Usability, User Experience, and Emotional Analysis. Digital 2024, 4, 316–332. [Google Scholar] [CrossRef]
  73. Tomova, L.; Andrews, J.L.; Blakemore, S.J. The importance of belonging and the avoidance of social risk taking in adolescence. Dev. Rev. 2021, 61, 100981. [Google Scholar] [CrossRef]
  74. Rae, S. Moral Choices: An Introduction to Ethics; Zondervan Academic: Grand Rapids, MI, USA, 2018. [Google Scholar]
  75. Slimmen, S.; Timmermans, O.; Mikolajczak-Degrauwe, K.; Oenema, A. How stress-related factors affect mental wellbeing of university students: A cross-sectional study to explore the associations between stressors, perceived stress, and mental wellbeing. PLoS ONE 2022, 17, e0275925. [Google Scholar] [CrossRef]
  76. Oates, A.; Johnson, D. ChatGPT in the classroom: Evaluating its role in fostering critical evaluation skills. Int. J. Artif. Intell. Educ. 2025, in press. [Google Scholar] [CrossRef]
  77. Bossert, L.N.; Loh, W. Why the carbon footprint of generative large language models alone will not help us assess their sustainability. Nat. Mach. Intell. 2025, in press. [Google Scholar] [CrossRef]
  78. Alexander, L.; Moore, M. Deontological ethics. In The Stanford Encyclopedia of Philosophy (Winter 2024 Edition); Zalta, E.N., Nodelman, U., Eds.; Stanford University: Stanford, CA, USA, 2024. [Google Scholar]
  79. UNESCO. Education 2030 Agenda. Available online: https://www.unesco.org/en/education2030 (accessed on 7 July 2025).
  80. De Ruyter, D.; Schinkel, A. Ethics education at the university: From teaching an ethics module to education for the good life. Bordón 2017, 69, 125–138. [Google Scholar] [CrossRef]
  81. Tuto, C. Ethical and deontological challenges in exercising the functions of higher education teachers: Perceptions of teachers at the School of Management, Science and Technology (ESGCT). Rev. Electrónica Investig. Desenvolv. 2020, 2, 10. [Google Scholar]
  82. Pilcher, J.J.; Smith, P.D. Social context during moral decision-making impacts males more than females. Front. Psychol. 2024, 15, 1397069. [Google Scholar] [CrossRef] [PubMed]
  83. Roxas, M.L.; Stoneback, J.Y. The importance of gender across cultures in ethical decision-making. J. Bus. Ethics 2004, 50, 149–165. [Google Scholar] [CrossRef]
  84. Peng, M. The sources and influencing factors of egoism and altruism. In Proceedings of the 2022 8th International Conference on Humanities and Social Science Research (ICHSSR 2022); Online, 22–24 April 2022, Advances in Social Science, Education and Humanities Research; Atlantis Press: Paris, France, 2022; Volume 664, pp. 2200–2211. [Google Scholar]
  85. Pierce, J.R. Sex & gender in ethical decision making: A critical review and recommendations for future research. In Academy of Management Proceedings; Academy of Management: Briarcliff Manor, NY, USA, 2014; p. 6. [Google Scholar] [CrossRef]
  86. Standley, N.; Fesmer, L. Most University Students Working Paid Jobs, Survey Shows. Available online: https://www.bbc.com/news/education-65964375 (accessed on 7 July 2025).
  87. Simón, H.; Casado Díaz, J.M.; Castejón Costa, J.L. Análisis de la actividad laboral de los estudiantes universitarios y de sus efectos sobre el rendimiento académico. Electron. J. Res. Educ. Psychol. 2017, 15, 281–306. [Google Scholar] [CrossRef]
  88. Soelistiyono, A.; Chen, F.C. Exploration of studying while working part-time simultaneously with 15 Indonesian students in Taiwan: A public university case study. Int. J. Prof. Bus. Rev. 2023, 8, 26. [Google Scholar] [CrossRef]
  89. Chantrea, B.; Chansophy, H.; Chantyta, H. Working and Studying at the Same Time; The University of Cambodia: Phnom Penh, Cambodia, 2023. [Google Scholar]
  90. Busso, M.; Pérez, P.E. Combinar trabajo y estudios superiores ¿Un privilegio de jóvenes de sectores de altos ingresos? Población Soc. 2015, 22, 5–29. [Google Scholar]
  91. Pacífico, A.; Gandolfo, P.H. Estudios superiores y trabajo: Implicancias de su simultaneidad. Ciencias Econ. 2016, 13, 153–160. [Google Scholar] [CrossRef]
  92. Brosnan, M.; Bennett, D.; Kercher, K.; Wilson, T.; Keogh, J.W.L. A multi-institution study of the impacts of concurrent work and study among university students in Australia. High. Educ. Res. Dev. 2023, 43, 775–791. [Google Scholar] [CrossRef]
  93. Parry, G.; Reynoldson, C. Creating an authentic learning environment in economics for MBA studies. In Cases on the Human Side of Information Technology; IGI Global: Hershey, PA, USA, 2006; pp. 76–87. [Google Scholar] [CrossRef]
  94. Correa, D.; López-Díez, J.; Campuzano-Hoyos, J. A humanistic manager? Integrating humanities into professional management education in Colombia: The case of EAFIT University, 1994–2007. J. Manag. Hist. 2024, 31, 154–173. [Google Scholar] [CrossRef]
  95. Asian-Chaves, R.; Buitrago, E.M.; Masero-Moreno, I.; Yñiguez, R. Advanced mathematics: An advantage for business and management administration students. Int. J. Manag. Educ. 2021, 19, 100498. [Google Scholar] [CrossRef]
  96. Leung, S.O. A comparison of psychometric properties and normality in 4-, 5-, 6-, and 11-point Likert scales. J. Soc. Serv. Res. 2011, 37, 412–421. [Google Scholar] [CrossRef]
  97. Faul, F.; Erdfelder, E.; Buchner, A.; Lang, A.-G. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav. Res. Methods 2009, 41, 1149–1160. [Google Scholar] [CrossRef]
  98. Ministerio de Ciencia, Innovación y Universidades. Datos y Cifras del Sistema Universitario Español. Curso 2023–2024. Secretaría General de Universidades. 2024. Available online: https://www.universidades.gob.es/wp-content/uploads/Datosycifras2023-2024.pdf (accessed on 7 July 2025).
  99. Becker, R. Gender and survey participation: An event history analysis of the gender effects of survey participation in a probability-based multi-wave panel study with a sequential mixed-mode design. Methods Data Anal. J. Quant. Methods Surv. Methodol. (Mda) 2022, 16, 3–32. [Google Scholar] [CrossRef]
  100. Valentine, S.R.; Rittenburg, T.L. The ethical decision making of men and women executives in international business situations. J. Bus. Ethics 2007, 71, 125–134. [Google Scholar] [CrossRef]
  101. Rundle, K. Fuller’s Internal Morality of Law. Philos. Compass 2016, 11, 499–506. [Google Scholar] [CrossRef]
Figure 1. Analytical framework used to evaluate ethical perception about the use of GenAI.
Table 1. Moral philosophies used in the ethical assessment of GenAI.
Moral Philosophy Theory | Definition
Moral Equity (ME) | Focuses on fairness, justice, and the equitable treatment of individuals, especially in the presence of social or structural disadvantages. Ethical actions should promote real equality of opportunity.
Relativism (RE) | Holds that moral judgments depend on the social, cultural, or institutional context. An action is not inherently right or wrong but must be evaluated relative to the norms and expectations of the environment.
Moral Egoism (EG) | Considers an action ethical if it maximizes personal benefit or self-interest. Moral worth is determined by how well an action serves the individual’s goals, well-being, or reputation.
Utilitarianism (UT) | Judges actions based on their outcomes for the majority. An act is ethical if it generates the greatest good or well-being for the greatest number of people.
Deontology (DE) | Emphasizes duty, rules, and moral obligations. An action is ethical if it aligns with established principles, agreements, or responsibilities, regardless of its consequences.
Source: Own elaboration from [18,19].
Table 2. Sociodemographic profile of the sample.
Item | Respondents | Proportion
Sex
  Male | 43 | 28.48%
  Female | 108 | 71.52%
Age
  ≤20 years | 50 | 33.11%
  21 | 33 | 21.85%
  22 | 27 | 17.88%
  ≥23 | 40 | 26.49%
  Not answered | 1 | 0.66%
Perceived academic performance
  On the average or below | 107 | 70.86%
  Above the average | 44 | 29.14%
Working status
  Full-time job | 58 | 38.41%
  Other labor situations | 93 | 61.59%
Table 3. Measures of scale validity and mean and standard deviation of ethical items in the three assessed scenarios.
Item | Scenario 1 Mean (SD) | Scenario 2 Mean (SD) | Scenario 3 Mean (SD)
Moral equity (ME): CA = 0.876, CR = 0.883, AVE = 80.1%
  ME1 = Fair | 7.59 (1.98) | 7.77 (2.37) | 4.84 (2.76)
  ME2 = Equal | 7.71 (2.16) | 7.85 (2.24) | 6.74 (2.69)
  ME3 = Right | 7.04 (2.01) | 7.50 (2.49) | 4.81 (2.92)
Relativism (RE): CA = 0.900, CR = 0.909, AVE = 84.1%
  RE1 = Acceptable to my peers | 7.83 (1.97) | 7.80 (2.35) | 6.12 (2.86)
  RE2 = Acceptable in my environment | 7.62 (1.95) | 7.43 (2.43) | 5.63 (2.89)
  RE3 = Acceptable to people whose opinion I respect | 6.33 (2.58) | 6.96 (2.72) | 4.30 (3.08)
Egoism (EG): CA = 0.799, CR = 0.810, AVE = 84.0%
  EG1 = It will provide relevance and prestige | 8.05 (1.97) | 7.74 (2.28) | 6.81 (2.71)
  EG2 = It is rewarding | 6.60 (2.62) | 6.89 (2.75) | 4.96 (3.18)
Utilitarianism (UT): CA = 0.855, CR = 0.855, AVE = 87.3%
  UT1 = It is useful | 8.40 (1.87) | 7.86 (2.29) | 6.81 (2.80)
  UT2 = Good cost–benefit balance | 7.29 (2.07) | 7.40 (2.43) | 5.99 (2.88)
Deontology (DE): CA = 0.918, CR = 0.919, AVE = 92.5%
  DE1 = It respects an implicit contract with my environment/society | 6.17 (2.45) | 6.74 (2.52) | 4.52 (3.10)
  DE2 = It aligns with what is expected of me as a student | 5.81 (2.73) | 6.65 (2.77) | 4.31 (3.16)
Note: CA = Cronbach’s alpha, CR = Composite reliability, and AVE = Average variance extracted.
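The validity measures reported above follow standard definitions: Cronbach’s alpha from item and total-score variances, and composite reliability and AVE from standardized factor loadings. A minimal sketch of these computations, using synthetic item scores and hypothetical loadings (not the study’s data), could look as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized loadings; error variance = 1 - loading^2."""
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean squared standardized loading."""
    return (loadings ** 2).mean()

# Synthetic data: three items driven by one common factor
rng = np.random.default_rng(0)
base = rng.normal(size=200)
items = np.column_stack([base + rng.normal(scale=0.5, size=200)
                         for _ in range(3)])
print(round(cronbach_alpha(items), 3))

loadings = np.array([0.90, 0.88, 0.91])  # hypothetical loadings
print(round(composite_reliability(loadings), 3))       # 0.925
print(round(average_variance_extracted(loadings), 3))  # 0.804, i.e., ~80%
```

Values of CA and CR above 0.7 and AVE above 50%, as in Table 3, are the usual thresholds for construct reliability and convergent validity.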
Table 4. ANOVA of repeated measures and pairwise comparison between scenarios.
Item | Snedecor’s F | p-Value | η2 | S2 vs. S1: MD (p-Value) | S3 vs. S1: MD (p-Value) | S3 vs. S2: MD (p-Value)
ME1 | 92.3 | <0.001 | 0.386 | 0.14 (0.779) | −2.82 (<0.001) | −2.96 (<0.001)
ME2 | 18.7 | <0.001 | 0.113 | 0.17 (0.649) | −0.97 (<0.001) | −1.14 (<0.001)
ME3 | 60.0 | <0.001 | 0.290 | 0.36 (0.230) | −2.26 (<0.001) | −2.62 (<0.001)
RE1 | 39.8 | <0.001 | 0.213 | 0.09 (0.858) | −1.70 (<0.001) | −1.60 (<0.001)
RE2 | 37.0 | <0.001 | 0.201 | 0.08 (0.910) | −1.76 (<0.001) | −1.69 (<0.001)
RE3 | 50.7 | <0.001 | 0.257 | 0.41 (0.228) | −2.13 (<0.001) | −2.54 (<0.001)
EG1 | 14.2 | <0.001 | 0.088 | 0.30 (0.224) | −1.09 (<0.001) | −0.79 (0.004)
EG2 | 27.1 | <0.001 | 0.156 | 0.14 (0.830) | −1.61 (<0.001) | −1.75 (<0.001)
UT1 | 22.2 | <0.001 | 0.131 | 0.61 (0.006) | −1.51 (<0.001) | −0.90 (0.003)
UT2 | 15.0 | <0.001 | 0.093 | 0.24 (0.523) | −1.00 (<0.001) | −1.23 (<0.001)
DE1 | 31.8 | <0.001 | 0.178 | 0.53 (0.104) | −1.55 (<0.001) | −2.07 (<0.001)
DE2 | 32.3 | <0.001 | 0.180 | 0.73 (0.014) | −1.33 (<0.001) | −2.05 (<0.001)
Note: η2 stands for the eta-squared effect size and MD for the mean difference between scenarios.
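The pairwise comparisons in Table 4 contrast each respondent’s ratings of the same item across scenarios, i.e., paired (within-subject) tests. A minimal sketch with `scipy.stats.ttest_rel` on hypothetical ratings (the sample size matches the study’s n = 151, but the scores are simulated, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 151  # sample size reported in the study
# Hypothetical 0-10 ratings of one ethical item in the three scenarios:
# scenario 3 (full automation) is shifted downward, as in Table 4
s1 = np.clip(rng.normal(7.6, 2.0, n), 0, 10)
s2 = np.clip(s1 + rng.normal(0.1, 1.5, n), 0, 10)
s3 = np.clip(s1 + rng.normal(-2.8, 2.0, n), 0, 10)

for label, a, b in [("S2 vs. S1", s2, s1),
                    ("S3 vs. S1", s3, s1),
                    ("S3 vs. S2", s3, s2)]:
    t, p = stats.ttest_rel(a, b)  # paired t-test on the same respondents
    print(f"{label}: MD = {(a - b).mean():+.2f}, p = {p:.3g}")
```

With a negative shift of this size, the S3 contrasts come out strongly significant, mirroring the pattern of large negative mean differences for scenario 3 in Table 4.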
Table 5. MANOVA of ethical evaluations in scenario 1 of GenAI use.
Item | Women M (SD) | Men M (SD) | No Full-Time Work M (SD) | Full-Time Work M (SD)
Moral equity. MANOVA Pillai’s trace (p-value): Sex 0.022 (0.360); Work condition 0.065 (0.021); Sex × Work condition 0.006 (0.823)
  ME1 | 7.44 (2.02) | 7.98 (1.82) | 7.61 (1.91) | 7.57 (2.10)
  ME2 | 7.55 (2.15) | 8.12 (2.14) | 8.01 (1.87) | 7.22 (2.49)
  ME3 | 6.86 (2.01) | 7.49 (1.98) | 7.03 (1.98) | 7.05 (2.09)
Relativism. MANOVA Pillai’s trace (p-value): Sex 0.037 (0.138); Work condition 0.120 (<0.001); Sex × Work condition 0.007 (0.808)
  RE1 | 7.79 (1.95) | 7.93 (2.03) | 7.78 (2.03) | 7.90 (1.87)
  RE2 | 7.68 (1.95) | 7.49 (1.97) | 7.67 (1.96) | 7.55 (1.94)
  RE3 | 6.17 (2.59) | 6.74 (2.54) | 5.82 (2.68) | 7.16 (2.19)
Egoism. MANOVA Pillai’s trace (p-value): Sex 0.007 (0.614); Work condition 0.020 (0.226); Sex × Work condition 0.008 (0.539)
  EG1 | 8.02 (1.98) | 8.12 (1.97) | 8.23 (1.85) | 7.76 (2.13)
  EG2 | 6.47 (2.62) | 6.91 (2.64) | 6.61 (2.57) | 6.57 (2.73)
Utilitarianism. MANOVA Pillai’s trace (p-value): Sex 0.002 (0.892); Work condition 0.046 (0.031); Sex × Work condition 0.000 (0.970)
  UT1 | 8.38 (1.86) | 8.47 (1.93) | 8.72 (1.45) | 7.90 (2.33)
  UT2 | 7.31 (2.05) | 7.26 (2.16) | 7.52 (2.01) | 6.93 (2.13)
Deontology. MANOVA Pillai’s trace (p-value): Sex 0.029 (0.115); Work condition 0.010 (0.488); Sex × Work condition 0.005 (0.706)
  DE1 | 5.93 (2.43) | 6.77 (2.41) | 6.03 (2.50) | 6.38 (2.36)
  DE2 | 5.69 (2.67) | 6.14 (2.88) | 5.60 (2.84) | 6.16 (2.53)
Note: M = mean, SD = standard deviation, and PT = Pillai’s trace.
Table 6. MANOVA of ethical evaluations in the scenario 2 of GenAI use.
Item | Women M (SD) | Men M (SD) | No Full-Time Work M (SD) | Full-Time Work M (SD)
Moral equity. MANOVA Pillai’s trace (p-value): Sex 0.010 (0.679); Work condition 0.012 (0.630); Sex × Work condition 0.007 (0.807)
  ME1 | 7.69 (2.48) | 7.98 (2.10) | 7.71 (2.53) | 7.88 (2.10)
  ME2 | 7.72 (2.38) | 8.19 (1.83) | 7.95 (2.32) | 7.71 (2.13)
  ME3 | 7.44 (2.50) | 7.65 (2.50) | 7.46 (2.54) | 7.55 (2.43)
Relativism. MANOVA Pillai’s trace (p-value): Sex 0.005 (0.877); Work condition 0.029 (0.230); Sex × Work condition 0.013 (0.608)
  RE1 | 7.81 (2.29) | 7.79 (2.51) | 7.87 (2.28) | 7.69 (2.47)
  RE2 | 7.38 (2.45) | 7.56 (2.38) | 7.33 (2.53) | 7.59 (2.27)
  RE3 | 6.93 (2.66) | 7.05 (2.89) | 6.83 (2.87) | 7.17 (2.45)
Egoism. MANOVA Pillai’s trace (p-value): Sex 0.001 (0.923); Work condition 0.014 (0.363); Sex × Work condition 0.006 (0.627)
  EG1 | 7.69 (2.29) | 7.86 (2.27) | 7.91 (2.15) | 7.47 (2.47)
  EG2 | 6.84 (2.63) | 7.00 (3.06) | 7.14 (2.70) | 6.48 (2.79)
Utilitarianism. MANOVA Pillai’s trace (p-value): Sex 0.043 (0.040); Work condition 0.016 (0.300); Sex × Work condition 0.033 (0.087)
  UT1 | 7.92 (2.16) | 7.72 (2.62) | 8.09 (2.19) | 7.50 (2.42)
  UT2 | 7.23 (2.41) | 7.84 (2.46) | 7.60 (2.36) | 7.09 (2.52)
Deontology. MANOVA Pillai’s trace (p-value): Sex 0.016 (0.316); Work condition 0.022 (0.195); Sex × Work condition 0.002 (0.847)
  DE1 | 6.56 (2.46) | 7.19 (2.65) | 6.66 (2.77) | 6.88 (2.09)
  DE2 | 6.55 (2.70) | 6.91 (2.96) | 6.75 (2.83) | 6.48 (2.69)
Note: M = mean, SD = standard deviation, and PT = Pillai’s trace.
Table 7. MANOVA of ethical evaluations in scenario 3 of GenAI use.
Item | Women M (SD) | Men M (SD) | No Full-Time Work M (SD) | Full-Time Work M (SD)
Moral equity. MANOVA Pillai’s trace (p-value): Sex 0.011 (0.653); Work condition 0.052 (0.052); Sex × Work condition 0.014 (0.553)
  ME1 | 4.90 (2.71) | 4.70 (2.92) | 4.49 (2.81) | 5.40 (2.62)
  ME2 | 6.66 (2.68) | 6.93 (2.76) | 6.80 (2.84) | 6.64 (2.46)
  ME3 | 4.79 (2.81) | 4.86 (3.20) | 4.42 (3.02) | 5.43 (2.66)
Relativism. MANOVA Pillai’s trace (p-value): Sex 0.005 (0.864); Work condition 0.109 (<0.001); Sex × Work condition 0.018 (0.453)
  RE1 | 6.19 (2.76) | 5.95 (3.12) | 5.77 (2.96) | 6.67 (2.63)
  RE2 | 5.63 (2.85) | 5.63 (3.01) | 5.02 (3.04) | 6.60 (2.34)
  RE3 | 4.32 (2.98) | 4.23 (3.37) | 3.55 (3.11) | 5.50 (2.65)
Egoism. MANOVA Pillai’s trace (p-value): Sex 0.006 (0.639); Work condition 0.022 (0.193); Sex × Work condition 0.008 (0.536)
  EG1 | 6.69 (2.70) | 7.09 (2.77) | 6.66 (2.86) | 7.05 (2.47)
  EG2 | 4.94 (3.05) | 5.00 (3.52) | 4.59 (3.34) | 5.55 (2.82)
Utilitarianism. MANOVA Pillai’s trace (p-value): Sex 0.047 (0.031); Work condition 0.022 (0.198); Sex × Work condition 0.005 (0.700)
  UT1 | 6.86 (2.83) | 6.67 (2.76) | 6.68 (2.98) | 7.02 (2.49)
  UT2 | 5.79 (2.93) | 6.49 (2.71) | 5.69 (3.06) | 6.47 (2.51)
Deontology. MANOVA Pillai’s trace (p-value): Sex 0.019 (0.244); Work condition 0.094 (<0.001); Sex × Work condition 0.003 (0.792)
  DE1 | 4.28 (2.97) | 5.14 (3.38) | 4.09 (3.18) | 5.22 (2.87)
  DE2 | 4.14 (3.00) | 4.74 (3.53) | 3.63 (3.10) | 5.40 (2.97)
Note: M = mean, SD = standard deviation, and PT = Pillai’s trace.

