Systematic Review

Ethics of the Use of Artificial Intelligence in Academia and Research: The Most Relevant Approaches, Challenges and Topics

by Joe Llerena-Izquierdo * and Raquel Ayala-Carabajo *
Universidad Politécnica Salesiana, Guayaquil 090901, Ecuador
* Authors to whom correspondence should be addressed.
Informatics 2025, 12(4), 111; https://doi.org/10.3390/informatics12040111
Submission received: 7 July 2025 / Revised: 19 September 2025 / Accepted: 9 October 2025 / Published: 13 October 2025

Abstract

The widespread integration of artificial intelligence into university academic activity requires responsibly addressing the ethical challenges it poses. This study critically analyses these challenges, identifying opportunities and risks in various academic disciplines and practices. A systematic review was conducted using the PRISMA method on publications from January 2024 to January 2025. Based on the selected works (n = 60), through a systematic and rigorous examination, this study identifies ethical challenges in teaching and research; opportunities and risks of AI’s integration into academic practice; specific artificial intelligence tools categorised according to study approach; and a contribution to the current debate, providing criteria and practical guidelines for academics. In conclusion, the integration of AI offers significant opportunities, such as the optimisation of research and personalised learning, while also posing notable human and ethical risks, including the loss of critical thinking, technological dependence, and the homogenisation of ideas. It is essential to adopt a conscious approach, with clear guidelines that promote human supervision, ensuring that AI acts as a tool for improvement rather than a replacement for intelligent human performance, and that it supports human action and discernment in the creation of knowledge.

1. Introduction

Digital technologies have proven their effectiveness in optimising learning in various specific areas of human and professional activity. The capacity of artificial intelligence (AI) tools to understand and respond to diverse user questions, now exercised by countless users, highlights their versatility and adaptability, creating new interaction experiences [1]. The aim of this study is to critically analyse the growing integration of artificial intelligence in the teaching and research fields. To this end, it reviews relevant works that demonstrate how to define and implement lines of action in which ethical frameworks, guidelines, and orientations for the use of artificial intelligence tools, as well as pedagogical strategies, are established to promote ethical responsibility, guarantee academic integrity, and preserve reliability in the authentic transfer of knowledge. The guidelines and recommendations obtained from the literature, limited to the years 2024–2025, are compiled and highlighted. In addition, this study serves as a reference for following up on the contributions of authors whose critical and reflective works focus on responsibility and the consequences of using artificial intelligence.
Recent studies suggest that the integration of AI-driven feedback will optimise the management of the learning process and, in the near future, the application of AI in specific tasks will become imperative [2]. However, there are also critical voices that argue that, in key areas such as writing development, it is necessary to prioritise strategies, guidance, and resources to prevent any potential regressions in users’ skills [3].
Indeed, given the ease of access to information today, as well as to AI tools, incorrect, inappropriate, or false content is being massively generated and transmitted. Many users are amazed by how AI technologies seem to effectively generate content and improve their writing based on the texts provided to them [4]. But there are concerns about the transparency and credibility of the actions of those who use AI, particularly in research, if their practice involves not critically examining the results but—on the contrary—assuming them to be true or valid [5].
It should be recognised, in this sense, that the greater the apparent or useful value of the information a user obtains from AI systems, the greater the risk of a pattern of dependency that compromises their autonomy and critical judgement [6]. Some authors, in fact, argue that there is an unethical pattern of behaviour in the use of AI tools, the challenge being the difficulty of detecting false information, despite advances in certain tools [7]. Thus, the growing presence of AI in the production of scientific texts is already tangible, prompting calls for its control, limitation, or containment. At the same time, innovative strategies for plagiarism detection, developed by researchers seeking to publicly expose dishonest practices, are being tested [7].
A lack of expertise in scientific writing is a reality, particularly in undergraduate university contexts. Rather than extending the practice of automatic content generation with AI, the way forward is to generate discussion on how AI can positively influence learning, while also focusing on individual development in academic writing.
However, the extensive use of this technology has made the detection of AI use in student work increasingly complex. In this sense, the ability of students and professionals across all university fields to integrate originality and an identifiable writing style into their texts remains a challenge for the detection of AI-generated work [8]. In effect, the discussion focuses on how to establish appropriate principles and conditions for the use of AI tools in academic and research environments or, more generally, in learning environments. Indeed, there is debate about prevention and user responsibility in potential conflict situations, with the aim of developing ethical use of research advances [9].
Based on this, four questions were established to guide this work:
  • What are the ethical challenges of integrating artificial intelligence into teaching and research?
  • What are the opportunities and risks arising from this integration in various academic disciplines and practices?
  • What AI tools have been used and researched, and with what approach?
  • What criteria and practical guidelines can be derived from this critical analysis?

2. State of the Art

Digital technologies have proven to be effective in optimising learning across various disciplinary areas, not only in terms of content but also in methodological aspects such as writing. In this context, it is necessary to prioritise strategies and resources to prevent any possible regressions in students’ skills [3]. Indeed, writing is a highly complex skill, and an AI tool could prioritise the evaluation of structure and correctness over that of original content or the development of analytical thinking—aspects typically assessed by a human evaluator. AI tools also make an important contribution to research by optimising the selection of papers during the screening stage of the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) process. The current generation of university students can already glimpse a future in which machines will revolutionise processes that, due to the vast amount of information involved, are completely beyond human capacity even when performing iterative processes [10].
Based on the above reflection, AI tools could enhance various user skills, such as analytical thinking, reflective analysis, writing, and composition, if used consciously and ethically, seeking a seamless integration of human skills and technology. However, it is the user’s ethical values that determine their application, with the risk of substituting texts without preserving originality and thereby leading to self-deception. For this reason, AI tools place human beings at a turning point: distrust is no longer directed at the AI algorithms themselves but begins to undermine the perception of genuine, well-crafted work [11].
Another area of application is healthcare, where there is evidence that AI promises advances in image analysis, clinical decision-making, education and training, management efficiency, and personalisation of care. However, its adoption faces challenges related to ethics, empathy, and the avoidance of response bias [12]. In addition, AI tools aim to simplify the explanation of terms, optimise information management, provide valuable feedback, and streamline data review. Despite these benefits, their inherent limitations make human supervision indispensable in medical research and clinical practice [13]. In the field of biomedical sciences, although qualified personnel and trainers show considerable interest in AI, there is still a risk of failing to discern the reliability of generated results and the extent of actual learning acquired [14].
AI tools also generate debates among professionals; for example, between journalists who adopt them indiscriminately on an individual basis and those who are committed to prioritising professional values by exercising caution and setting limits on their use in the workplace [15]. In the field of journalistic communication, journalists could operate in professional environments where AI ensures the values of journalism, preventing the adoption of bad practices [15]. However, a general collective numbing could occur if the adoption of AI goes beyond the limits not yet defined by the scientific community or the regulatory bodies in each country.
In another area, such as interior design, AI can be an ally in investigating learning processes within the group concerned, thereby enabling transformations in teaching [16]. In health communication, human intervention and the supervision of reliable text development should form the basis of AI-assisted writing, ensuring that technology serves as a tool rather than as a director [17].
With the emergence of AI technology, the development of new AI tools has increased, as developers persistently strive to create innovative technological variants that address limitations in specificity, depth, and referential accuracy in text generation, with worrying consequences. In their attempt to emulate human capabilities, these systems may resort to unethical tactics, including deception, to persuade users of their reliability [8].
AI also raises questions in the field of education, particularly in computer science, regarding the use of AI tools [18]. AI assistance in writing is increasingly being integrated into the learning of certain subjects through guided assignments or developmental assistants. This approach allows students to develop skills in incorporating, using, and extracting information [19]. Although the AI assistant attracts interest because of its applications, the presentation of non-existent information as true or accurate generates mistrust among users. Users are concerned about the reliability of its sources and even suspect that it may fabricate data to compensate for the absence of factual information, which calls into question the integrity of the tool [20].
The use of AI has grown exponentially since 2022. The contributions of several authors indicate that detecting works produced with the use of AI is a very difficult and poorly characterised process, i.e., AI-generated text is not easily recognisable to inexperienced users, particularly students [21]. Consequently, it is necessary to imagine and design new ways of working in the classroom—especially in higher education—through the use of AI tools. The immediacy, ease of use, and wide availability of these systems offer educators an unprecedented opportunity to integrate these technologies in an agile way. This integration allows them to address common challenges such as the verification of information, the quality and depth of student work, and issues of academic integrity and student ethics. In this new context, the traditional role of the professor as a simple ‘supervisor’ becomes outdated. If AI has the potential to support human development, should we not explore a deeper collaboration? This brings us to a fundamental question: Could AI, in some sense, also take on a pedagogical role? [22]. This requires the definition and implementation of policies and regulations that address ethical aspects, thereby encouraging responsible use of these tools.

2.1. Forging Responsibilities in the Face of Artificial Intelligence

With the advent of generative artificial intelligence, assessments that demand analytical thinking and effective communication can serve as complementary strategies to the AI systems used by students [23].
While software development relies on the construction of extensive lines of code, scientific writing requires conscious intellectual construction and input to extract knowledge—a task that AI has not yet mastered, operating primarily through the reproduction of pre-existing information [24]. Artificial intelligence tools are aimed at optimising the research process and are characterised by their accessibility and ease of use, being consciously user-driven and fine-tuned to speed up writing and improve the quality of proposals [25].
As AI advances in text generation, researchers will find it increasingly difficult to identify characteristics such as length, punctuation, vocabulary, readability, style, and tone that enable the detection of AI-generated writing patterns and, therefore, potential misuse [26].
Due to the rise of machine-generated texts—almost imperceptible to humans—researchers are developing algorithms to improve their detection [27]. In parallel, scientific journals have begun to establish author guidelines and policies. Although the publishing industry accepts the use of AI, the recommendations place responsibility on the author, while publishers assume no responsibility for the publication of works suspected of infringement [28]. While generative AI-guided writing has the potential to grow exponentially, there is a fundamental concern: Are we fostering the development of users’ analytical thinking, or optimising their skill in employing the AI tool in pursuit of individual rather than collective interests? [20].
Several authors propose new ways of integrating AI systems into student education, aimed at developing the capacity for critical discernment of processes, actions, and tasks, particularly in the identification of AI-generated content [29]. Other researchers address the issue of collaboration in scientific writing mediated by AI-based intelligent assistants, examining the barriers to the integration of these technologies into the work processes of researchers from various disciplines. Research on intelligent and interactive writing assistants has expanded into new dimensions related to their tasks, the type of user, the technologies used, effective interaction, and their environment [30]. The ability to customise AI systems, such as GPTs, drives AI literacy, empowering users to iterate and refine their own solutions, thereby achieving reliable and consistent results [31].

2.2. Facing the Consequences of Artificial Intelligence

Using software integrated with generative AI has made it possible to generate inclusive and personalised experiences in education for both students and educators, due to its versatility and adaptability [32]. However, new ethical challenges have arisen in the field of academic integrity, specifically plagiarism, the excessive use of generative text, the questioning of authorship, and academic cheating tactics [1]. In other words, there has been an increase in the infringement of intellectual property by AI systems [32].
Expectations about regulating the use or prohibition of AI in academia are high, and there is a developing movement towards defining standards and policies for its acceptance or rejection in ways that prevent risks and overcome challenges [33]. The continuous disruption of human activities by current and future technologies demands preparedness [34]. Reflecting on the articulation of AI-based educational methodologies with traditional pedagogical approaches, and understanding the specific role of ChatGPT and other GPTs in academic textual production, literature review writing, and teacher professional development, highlights the importance of effective management to mitigate over-reliance on AI tools. In other words, it is necessary to balance the constant use of AI tools, defining best practices for the future while avoiding negative impacts on ethical values such as transparency and accountability, and guarding against bias [35].
Thus, the advent of AI has brought to light a worrying reality—the increasing difficulties humans face in performing fundamental tasks such as synthesising information, comprehending complex academic texts in depth, reading literature reviews thoroughly and comprehensively, and articulating them coherently in academic writing and editing [36,37].
Therefore, there are challenges to academic integrity, requiring robust mechanisms to identify and differentiate human-written content from machine-generated content in academic publications. Detecting suspicious sections is a challenging task for researchers, who analyse characteristics such as lexical variety, syntactic richness, and patterns of redundancy to distinguish AI-generated content from genuine research writing. In addition, unsupervised anomaly detection techniques are employed to flag unusual stylistic deviations [38].
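To make this kind of analysis concrete, the sketch below computes a few simple stylometric signals and flags sections whose style deviates from a document’s own norm. It is a minimal illustration under our own assumptions: the feature set (type–token ratio, mean sentence length, bigram repetition) and the z-score threshold are illustrative stand-ins, not the methods used in the cited studies.

```python
import re
from statistics import mean, pstdev

def stylometric_features(text: str) -> dict:
    """Compute simple stylometric signals: lexical variety (type-token ratio),
    mean sentence length, and a bigram repetition rate."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    bigrams = list(zip(words, words[1:]))
    return {
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "bigram_repetition": 1 - len(set(bigrams)) / max(len(bigrams), 1),
    }

def flag_stylistic_outliers(sections: list[str], z_threshold: float = 2.0) -> list[int]:
    """Flag indices of sections whose features deviate from the document's own
    averages, a crude stand-in for unsupervised anomaly detection."""
    rows = [stylometric_features(s) for s in sections]
    flagged = set()
    for key in rows[0]:
        values = [row[key] for row in rows]
        mu, sigma = mean(values), pstdev(values)
        if sigma == 0:
            continue  # feature is constant across sections; nothing to flag
        flagged.update(i for i, v in enumerate(values) if abs(v - mu) / sigma > z_threshold)
    return sorted(flagged)
```

In practice, such surface features are weak evidence on their own; the cited work combines richer syntactic and redundancy measures, and any threshold would need calibration against known human-written text.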
The great appeal of using AI tools brings with it the arrival of new users who, operating without established regulations, expose themselves to experiences that can lead to patterns of behaviour detrimental to their academic development, such as a lack of originality in writing without a personal style, a decline in reasoning or critical thinking, and an inability to complete the tasks involved in writing [39].
In several studies, AI tools are seen as a means of enhancing forgotten and unconsolidated human knowledge. In other words, in the future, traditional person-to-person learning may decline until the time comes to use conversational AI for learning [40].
The promise of AI is to provide enriched learning, but experience shows the proliferation of counterproductive behaviours, such as plagiarism, to the detriment of original thinking. In this scenario, where does the root of the problem lie—in the developer’s or the software industry’s conception, in the nature of the technology, or in the user’s application of it? [2].
There is a need to promote work that analyses the relationship between the values embedded in AI technologies (by developers) and the values that influence, positively or negatively, the behaviour of users exposed to them. Going forward, it is the responsibility of developers and researchers to prioritise the design of new AI systems that integrate values such as integrity, privacy, autonomy, and respect. Doing so will counter excessive convenience, bias, misinformation, plagiarism, the inhibition of reflective thinking, bad habits, and degenerative behaviour patterns resulting from the use of AI technologies [41].
When a user is aware that the use of AI is negatively affecting their critical thinking, writing skills, and ethics, this should be interpreted as a warning sign. Instead of promoting human development, AI is generating an over-dependence that undermines academic integrity, learning efficiency, the accuracy and quality of original content, and real human productivity. The real contribution of AI use in human training must be demonstrated, and metrics for actual use that impact efficiency, quality, and productivity of the user must be defined [42].
A duality in the use of AI tools is emerging for the near future: when applied to academic writing without proper guidance and training, users are left with the dilemma of when and where to use them. While plagiarism detection tools represent a valuable resource, a risk scenario prevails in which faculty work becomes focused on the policing of academic integrity rather than on their role in students’ formative development. In this scenario, the presumption of fraudulent practices could prevail over the valuing of student ingenuity and originality. In a landscape where AI is transforming teaching and research, intelligent AI-assisted generation tools have authorities, regulators, and academics on tenterhooks due to the myriad uses and applications they enable, bringing benefits such as efficiency and immediacy, as well as harms such as threats to academic integrity and dishonesty-related cheating. The evidence shows that AI is beneficial, but that the risks and consequences will continue to increase in the future [43].

2.3. The Impact of Artificial Intelligence on Science Writing

Process improvement through AI involves a silent but constant strategy of human replacement by the companies that promote it. AI improvements aim to eliminate slow processes, but at their core is a desire to replace humans [44].
Although peer review relies on the analytical and reflective thinking of the reviewer, technologies can enrich this process by offering valuable suggestions. Moreover, with the integration of AI tools, the refereeing process could become even more beneficial, yielding more accurate, subtle, or precise observations and more rigorous analysis [45].
While the initial authorship of writings resides with humans, it is AI systems that can determine their final version, making it possible to significantly enhance the dissemination of information or misinformation. In the future, AI could guide users towards responsible content creation, flagging possible infringements and, consequently, adjusting their level of participation or attribution in what is generated [46].
There is a clear division of perspectives between students and researchers on the impact of AI. While students value its ability to stimulate analytical thinking [19], researchers are concerned about a growing dependence on AI that could lead to addiction. Further research into the use of artificial intelligence in teaching and research, as well as the responsibilities and consequences of its application, is essential [47].
We are living in a scenario where machine-generated text is often mistaken for the real text produced by humans, and there is possibly a stagnation of literary thought within the scientific context [48]. Studies show that AI is increasingly taking on the role of verifying articles due to mistrust among reviewers [49].
Although ChatGPT generates content fluently, that content may lack factual accuracy, specificity, depth, and adequate referencing compared with human writing, which limits its current capacity for full academic text production and highlights the need for improvements in AI models. The focus should be on refining AI as a tool to enhance and support human capabilities, not to replace them entirely in tasks that are fundamental to human development, purpose, and connection [8].
The absence of an authentic literature search in a research setting, especially when transparency is compromised by the use of AI, not only hinders replicability but also leads to poor-quality work, misinformation, and undermines academic integrity. Given the increasing expansion of information, more flexible search engines must actively intervene as allies to prevent the creation of false information by AI systems [50].
The integration of AI and its indiscriminate use can impact interpersonal skills such as oral and written communication. Tools for detecting AI-generated text, both traditional and intelligent, still show insufficient detection capabilities, making the detection process an ongoing battle with unfavourable results [51].
AI in research is continuously evolving due to specific models with clear biases, risks, and uncertainties. Its use in academic writing is detrimental to the production of knowledge when the opportunity to develop one’s own argument is lost, thereby bypassing a cognitive process. It is becoming increasingly difficult to discern AI intervention in researchers’ manuscripts that use generative text, allowing machines to develop linguistic features of their own. This trend makes it difficult to detect human authorship and raises serious questions about academic reliability in knowledge production. The increased ease with which humans use AI requires that the complex challenges of AI involvement and attribution in text authorship be resolved [52].
In academic writing, AI has raised concerns about accuracy, ethics, and scientific rigour, and researchers have begun to integrate screening tools. To optimise the monitoring of AI use in education and medical publishing, the evaluation logic used by expert reviewers needs to be analysed to inform future strategies [53].
There is evidence of undisclosed AI integration in scientific work [54]. The results show two trends: authors who use AI and those who resist doing so [55]. In academia, the need to detect AI-generated texts drives researchers to explore methods for identifying semantic patterns that are imperceptible, linguistically subtle, and deep [56].
While AI tools offer unprecedented efficiency in scientific writing, their widespread application could negatively impact the reflective and self-reflective nature of research. However, when implemented well, AI has the potential to significantly accelerate the production of scientific publications. The goal of incorporating AI systems and automation tools in scientific writing is not the mass production of papers but the improvement of researchers’ skills and efficiency [57].

3. Materials and Methods

An empirical and analytical review methodology with a quantitative approach is used. We apply a staged review process, represented visually through the PRISMA flow diagram [10], covering publications up to January 2025. Four stages of work, commonly stated in various relevant studies, are established [25,34]. These stages are identification, screening, eligibility, and inclusion, which allow us to select works relevant to the study and extract the necessary content for the general analysis [35].
In the identification process, a single indexed database, Web of Science, is used (n = 1). Records are identified using the search string “All Fields” = (“science writing”) AND (“Artificial intelligence”), yielding n = 5624 records. In this first phase, papers that are not open access are excluded (n = 2344) to avoid payment barriers.
In the screening process, the accessible, open-access works are identified (n = 3280), and among these, records that are not articles are excluded (n = 1650). To the remaining articles (n = 1630), the first exclusion criterion is applied, limiting the records to the years 2024–2025 (n = 995). In the eligibility process, relevant papers are identified (n = 635), and the second exclusion criterion is applied, excluding papers that are not relevant to education and educational research (n = 575). Finally, the remaining papers proceed to the inclusion stage, forming the set of papers included in the study, with a total of 60 records (see Figure 1).
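As an illustration of this funnel, the following minimal sketch reproduces the reported counts in code. The query string mirrors the search reported above; the stage labels and the arithmetic check between stages are our own additions, not part of the original protocol.

```python
# Sketch of the PRISMA-style screening funnel described above.
# Counts are those reported in the text; only the presentation is ours.

WOS_QUERY = '"All Fields" = ("science writing") AND ("Artificial intelligence")'

# (stage label, records remaining after the stage)
FUNNEL = [
    ("Identification: records retrieved from Web of Science", 5624),
    ("Identification: open access only (2344 excluded)", 3280),
    ("Screening: articles only (1650 excluded)", 1630),
    ("Screening: limited to 2024-2025", 995),
    ("Eligibility: relevant papers", 635),
    ("Inclusion: education and educational research only (575 excluded)", 60),
]

def report(funnel):
    """Print each stage with the number of records dropped since the previous one."""
    previous = None
    for label, remaining in funnel:
        dropped = "" if previous is None else f" (-{previous - remaining})"
        print(f"{label}: n = {remaining}{dropped}")
        previous = remaining

report(FUNNEL)
```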

4. Results

From a reading of the selected papers (n = 60), the content of interest for the study was extracted to present results that facilitate a rigorous analysis across different areas of knowledge. Initially, a list of keywords is shown by means of a word recurrence map, generated using the VOSviewer 1.6.20 program from the set of works in the Web of Science knowledge base as of 31 January 2025; see Figure 2.
In the centre, the keyword “artificial intelligence” connects different research topics in the technological field (see Figure 3). The groups related to different areas of knowledge, such as deep learning and large-scale language models (green—red—orange), are distinguished by colour (see Figure 3a). Another related area is evident in the connection between natural language processing, machine learning, and ChatGPT (green—yellow—red) (see Figure 3b). A further relevant area concerns healthcare, prediction, and diagnostics (green—violet—blue) (see Figure 3c). Another area that is visualised is the relationship between artificial intelligence and the perceptions and impacts among its users in the domains leading to its adoption in education and research, particularly in scientific writing with generative tools (green—orange—blue) (see Figure 3d). Additionally, an area is identified that reflects the relationship between artificial intelligence, automation, existing challenges, and ethical risks across other disciplines (green—light blue—red) (see Figure 3e).
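As a rough illustration of how such a map is built, the sketch below constructs a keyword co-occurrence network from per-paper keyword lists and clusters it. The sample keywords are hypothetical, and networkx’s greedy modularity clustering only approximates, rather than replicates, VOSviewer’s own clustering algorithm.

```python
from collections import Counter
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical keyword lists per paper; in the study these come from
# the Web of Science records.
papers = [
    ["artificial intelligence", "deep learning", "large language models"],
    ["artificial intelligence", "natural language processing", "chatgpt"],
    ["artificial intelligence", "healthcare", "diagnostics"],
    ["artificial intelligence", "scientific writing", "ethics"],
]

# Count how often each pair of keywords appears in the same paper.
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# Build a weighted co-occurrence graph and cluster it, analogous to the
# coloured groups VOSviewer displays.
G = nx.Graph()
for (a, b), weight in cooccurrence.items():
    G.add_edge(a, b, weight=weight)

for i, cluster in enumerate(greedy_modularity_communities(G, weight="weight"), 1):
    print(f"Cluster {i}: {sorted(cluster)}")
```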
The integration of AI tools has expanded considerably across various fields, particularly in teaching and research, as illustrated in Figure 3. This widespread adoption marks a critical turning point for academia. Although the advantages of AI—including assisted writing, real-time conversational interactions, and learning optimisation—are widely recognised, significant ethical challenges are simultaneously emerging that require deep reflection and a proactive approach.
These challenges are not limited to issues of originality, the accuracy of information, or academic integrity, but delve deeper into the very nature of knowledge and the individual. One of the most pressing concerns is the crisis of truth and misinformation. The ability of AI to generate texts and images indistinguishable from human creations—including dangerous deepfakes—raises concerns about information manipulation and deception, which can erode public trust and harm academic and research activity. In this context, moral responsibility and accountability become ethical pillars. Unlike human beings, who are moral agents with freedom and responsibility, the complexity of AI systems—with their machine learning methods and deep neural networks—makes it difficult to attribute responsibility for unexpected or harmful outcomes. It is imperative to establish clear frameworks for behaviour and action (whether in the university environment or in other academic and/or research settings) that define who is responsible for the processes, products, and decisions made with the help of AI.
Furthermore, the use of AI introduces the risk of a functionalist view that may conceive of people (educators and learners) in a reductionist way, valuing them solely for their ability to perform certain tasks. Likewise, technological dependence and the erosion of critical thinking pose a direct risk to academic training. Indeed, the unthinking and widespread use of AI in education could encourage the passive accumulation of information rather than cultivating the capacity for analysis, synthesis, and independent thinking, thereby limiting students’ ability to carry out activities independently and critically. How these challenges are addressed will depend on the thoughtful and conscious use of these tools, ensuring that they do not undermine human thinking or generate harmful behaviour patterns. It is crucial that research not only focus on technical functionalities but also develop ethical criteria and guidelines that ensure their responsible use in line with the principles of human dignity and the common good.
We review the summaries of the selected papers, establishing thematic areas of research and their classifications to enable a more segmented analysis. Six thematic areas were identified, including the application of AI in education (20.0%), the application of AI in academic and research production (53.3%), the application of AI in life sciences and health (5.0%), the application of AI in media (5.0%), the application of AI in computer science and interior design (5.0%), and, finally, the application of AI in cross-cutting and ethical aspects with 11.7% (see Table 1).
In the thematic area of the application of AI in education, three segments were identified within the 20.0% determined. These are as follows: applications in education (15.0%), applications in medical education, clinical practice, and nursing (3.3%), and applications in the evaluation process (1.7%).
In the subject area of AI application in media, two segments were identified within the given 5.0%. These are: applications in the field of communication (3.3%) and applications in the fields of journalism, education, and law (1.7%). In the thematic area of AI application in computer science and interior design, two segments were also identified within the given 5.0%. These are: applications in computer science (3.3%) and applications in interior design (1.7%).
Finally, in the thematic area of the application of AI in transversal and ethical aspects, three segments were identified within the 11.7% determined. These are as follows: content authenticity (8.3%), applications in immediate feedback (1.7%), and applications in assisted decision-making (1.7%).
It is clear that the use of AI and its integration into various disciplines are becoming increasingly prominent and widespread. Empirical evidence from numerous research studies highlights AI’s ability to handle vast amounts of information that, due to their sheer volume, exceed human capacity to process efficiently within a reasonable or comparable timeframe. This efficiency has placed AI at the forefront of complex process management, generating an optimistic view of new learning formats and outcomes, as well as enhancing the ability to generate and formulate hypotheses, process data, and draw inferences in various fields of knowledge and research.
The opportunities offered by AI are uniquely evident in multiple fields, redefining academic and professional practices. Thus, as we have determined in this study, AI acts as a powerful catalyst for interdisciplinarity, enabling the integration of data and methodologies from diverse areas to build complex systems. In this sense, it can facilitate collaboration between experts to solve complex problems that could not be addressed from a single perspective. In the field of higher education, AI offers transformative potential by enabling personalised learning and immediate feedback. It also democratises access to educational resources, benefiting students in isolated situations or with limited resources, and creating more inclusive and equitable learning environments.
However, this integration carries risks that require critical oversight and constant ethical discernment to prevent technology from shaping the future based solely on the vision of algorithms. Indeed, excessive use of AI can lead to a reduction in human interaction, intervention, and supervision. Thus, for example, in fields such as healthcare, the idea of replacing the relationship between patients and professionals could have dramatic consequences, such as the reduction of a crucial human relational structure, even from the point of view of the university training of these professionals.
Furthermore, across the board, AI is causing a substantial transformation in many professions, which will require an ethical reassessment of how this technology is used in the workplace, as well as continuous and adaptive training for university students. Along with this, within the context of professional practice (but still a necessary aspect to be addressed in university education from a critical perspective), there is a perceived trend towards homogenisation and loss of diversity, for example, in the economic and social sectors. Broadening the scope of reflection further, an often underestimated risk is environmental damage. Current AI models, especially large language models (LLMs), require enormous amounts of energy and water for their training and operation, contributing significantly to CO2 emissions and resource consumption that are not sustainable in the long term.
The analysis of the selected works also illustrates the relationships among countries conducting joint studies, as well as their individual progress (see Figure 4). Figure 4a shows five international working groups led by different countries. For example, the first group is the USA with Australia and Spain (green colouration); another group is Germany with Italy and South Africa (light blue colouration); another group is England with Switzerland and Norway (violet colouration); another group is India with Saudi Arabia and South Korea (yellow colouration); and, finally, China with Belgium and Malaysia (red colouration).
Figure 4b shows, in heat map format, the incidence of those countries with the highest volume of papers and their respective collaborators. Among these, it is evident that the USA and China lead the thematic areas of AI applications, followed by England, Germany, and India.
We conducted a rigorous review of the selected documents, establishing five categories to determine a classification and the research approach that would allow us to find and present answers to the research questions. The first category addresses the themes and objectives of the research; the second focuses on the formulation of the problem and the data involved in the research process; the third refers to the limitations identified in the study; the fourth relates to the proposals and methodologies applied by the authors; and, finally, the fifth examines the solutions, results, and challenges (see Table 2).
In the first category, the works are classified into five groups, designated as A1: AI application in education, A2: use of AI in research, A3: ethics-focused guidelines for AI use, A4: detection and authenticity of AI-generated content, and A5: evaluation of AI tools for academia. In the second category, the works are classified into five groups, designated as B1: impact of AI on learning and skills, B2: challenges to academic integrity and ethics, B3: technical and reliability limitations of AI, B4: need for reference frameworks and guidelines for AI integration, and B5: perceptions and reactions towards AI.
The third category classifies the works into five groups, designated as C1: reliability and accuracy of AI-generated content, C2: challenges in detecting AI-generated text and information, C3: limitations in human understanding and interaction with AI, C4: impact on the development of human skills, and C5: effectiveness of AI tools depending on factors such as context and language. In the fourth category, the works are classified into five groups, designated as D1: empirical studies assessing the impact of AI and the proposals and methodologies applied, D2: systematic literature reviews synthesizing knowledge, D3: evaluations of frameworks and models for AI integration and detection, D4: linguistic analyses aimed at understanding AI-generated content, and D5: design approaches centred on user values and perspectives. In the fifth category, the works are classified into five groups, designated as E1: understanding users’ perceptions and use of AI, E2: evaluation of the effectiveness of AI tools in academic and professional tasks, E3: identification of factors influencing the adoption and impact of AI, E4: evaluation of methods and tools for AI content detection, and E5: need for guidelines and perceptions of AI.
The results in percentages show that 37% of the works in the first category belong to A1: AI application in education, 33% to A2: use of AI in research, 8% to A3: ethics-focused guidelines for AI use, 12% to A4: detection and authenticity of AI-generated content, and 10% to A5: evaluation of AI tools for academia, as shown in Figure 5.
The results in percentages show that 27% of the works in the second category belong to B1: impact of AI on learning and skills, 38% to B2: challenges to academic integrity and ethics, 10% to B3: technical and reliability limitations of AI, 12% to B4: need for reference frameworks and guidelines for AI integration, and 13% to B5: perceptions and reactions towards AI, as shown in Figure 6.
The results in percentages show that 23% of the works in the third category belong to C1: reliability and accuracy of AI-generated content, 22% to C2: challenges in detecting AI-generated text and information, 27% to C3: limitations in human understanding and interaction with AI, 18% to C4: impact on the development of human skills, and 10% to C5: effectiveness of AI tools depending on factors such as context and language, as shown in Figure 7.
The results in percentages show that 27% of the works in the fourth category belong to D1: empirical studies assessing the impact of AI, proposals, and methodologies applied, 12% to D2: systematic literature reviews synthesizing knowledge, 31% to D3: evaluations of frameworks and models for AI integration and detection, 8% to D4: linguistic analyses aimed at understanding AI-generated content, and 22% to D5: design approaches centred on user values and perspectives, as shown in Figure 8.
The results in percentages show that 28% of the works in the fifth category belong to E1: understanding users’ perceptions and use of AI, 37% to E2: evaluation of the effectiveness of AI tools in academic and professional tasks, 17% to E3: identification of factors influencing the adoption and impact of AI, 5% to E4: evaluation of methods and tools for AI content detection, and 13% to E5: need for guidelines and perceptions of AI, as shown in Figure 9.
Finally, after reading the selected documents and the corresponding analysis, approaches are established that allow answering the research questions and presenting an adequate discussion that is contrasted with the results found. Seven guiding approaches are evidenced, designated as A: comparative analysis of AI tools, B: integration of AI in information search, C: challenges to be addressed in the use of AI, D: strategies for using AI, E: ethics and transparency in the use of AI, F: innovative methodologies involving AI, and G: novelty in the use of AI, as shown in Table 3.
The first focus, A: comparative analysis of AI tools, is identified in 18% of the reviewed papers. B: Integration of AI in information search is identified in 7% of the reviewed papers. C: Challenges to be addressed in the use of AI are identified in 22% of the reviewed papers. D: Strategies for using AI are identified in 13% of the reviewed papers. E: Ethics and transparency in the use of AI are identified in 18% of the reviewed papers. F: Innovative methodologies involving AI are identified in 10% of the reviewed papers. Finally, G: novelty in the use of AI is identified in 12% of the reviewed papers, as shown in Figure 10.
It is clear that there are challenges to be addressed in the use of AI today, with 22% of studies reflecting this predominant approach. A further 18% focus on the comparative analysis of AI tools, and another 18% on ethics and transparency in AI use, areas that several authors emphasise in their research.
It is also evident that 13% of the works are oriented towards the dissemination of strategies for using AI, 12% towards novelty in the use of AI, 10% focused on innovative methodologies using AI, and 7% on the integration of AI in the search for information.
In addition, the specific artificial intelligence tools used by researchers in the different study approaches found in the literature are identified. This allows us to recognise the different motivations for verifying the impact of existing AI tools and the strategies developed for their validation and replication. Currently, it is not only the emergence of ChatGPT that has had an impact on the research areas; other existing tools also need to be studied further, as do their counterpart tools, which researchers use to detect AI traces and to assess both appropriate and inappropriate uses of AI technologies in the different fields disclosed. Furthermore, there is evidence of the nascent development of AI tools that, in different respects, act as assistants as well as counterparts that interact with humans and have their own effects and motivations (see Table 4).
Finally, a classification of the works was established based on the analysis and reflection of two aspects identified during the review of the results and discussion, which focus on the responsibilities identified as opportunities and the consequences arising from the latent challenges (see Table 5).
After a process of analysis and reflection on the works through careful reading and objective evaluation, the results show that 43% of studies highlight existing challenges in the use of AI, while 57% emphasise its opportunities. It is clear that these percentages are significant for guiding further studies (see Figure 11).
Collaboration between artificial intelligence and humans is a fundamental principle that must be preserved. However, there is a risk that human psychology may develop a relationship with AI that transcends the mere conception of a tool, establishing instead a form of connection or link with a “type” of intelligence. This loss of awareness could lead to a kind of algorithmic obedience; therefore, it is imperative to establish ethical criteria and practical guidelines to support educational work, specifically in the university context. At the heart of these guidelines, human dignity must be the guiding criterion, especially given that it universally characterises human beings as intelligent.
AI is ethically positive only to the extent that it contributes to manifesting and increasing this dignity and intelligence at all levels of application, whether everyday, professional, academic, etc. In this sense, within the university environment, we are obliged to go beyond mere literacy in the use of AI tools and to address issues of ethics, transparency, and responsibility. A transparent use of AI is required, and those who make decisions based on it must be accountable for their results.
Furthermore, work must be conducted in this area on an ongoing and mandatory basis to prevent the normalisation of the misappropriation of content, which violates both the authorship and authenticity of academic work. Although it is a repeated expression, the challenge lies in its effective achievement: AI must be based on and promote the development of critical thinking and intellectual and moral discernment. Universities must take responsibility for helping students and professionals internalise the social and ethical aspects of technology, and not simply its functionalities. Additionally, the common good must be integrated as a guiding principle of use, orienting its application and refinement towards individuals in specific professional contexts, but also towards processes and projects that serve society as a whole. Furthermore, it is important to consider the possibility of evaluating an AI tool not only for its utilitarian, technical, and/or practical purposes, but also for the relationship between ends and means, and, above all, for the vision of the human being that it implicitly assumes.
Finally, we must always keep in mind the limitations of AI. Although it is a powerful technology, it is confined to a logical–mathematical realm and lacks the capacity for moral judgment, empathy, human wisdom, etc. All this, and more, is not ‘generated’ by the accumulation of data and/or experience from millions of users, but rather necessarily requires a comprehensive view that takes into account the dimensions of the person and human experience (biological, spiritual, relational, etc.), and the ability to integrate these dimensions into a coherent whole.

5. Discussion

Human writing is characterised by its varied nuance, singular and diverse vocabulary, and reduced repetition, the result of continuous and differentiated learning from the first years of life and/or schooling. In contrast, AI learning is more uniform, based on the use of texts with similar vocabulary (formal, direct, concise, or fluent, seeking to emulate the human), which tends towards a less singular lexicon. Human sentences, with their complex syntactic structures and connections of ideas, reflect the knowledge acquired throughout life, avoiding the repetition of common phrases and the reliance on less unique vocabulary, features that are more frequent in AI performing similar tasks [59].
The guidelines that educational institutions adopt on the use of AI are influenced by social pressure and the ethical values defined by their authorities. However, for the average user, who already integrates AI into their daily life, this regulatory framework is perceived as incipient or ‘wild’ compared to the true potential of the technology [33].
There is a trend in the research world to create AI systems that fulfil the functions of planner, researcher, reviewer, and controller of the research process. Recent advances have led to greater specificity, reading comprehension, and overall usefulness of feedback from AI tools, making automatically generated reviews comparable to or even superior to those performed by humans [37].
Taking a critical perspective on technology means looking at what is happening with discernment, not to anticipate a collapse, but to inform decisions and actions that shape the future [34]. The growth in the use of artificial intelligence has brought with it two worrying technological phenomena: the numbing of ethical awareness and an overreliance on these tools. Both trends undermine honesty in all its dimensions [36].
Although AI can identify complex patterns and make valuable inferences from available information, this ‘deduction’ is inherently tied to the data it has been trained on and the biases introduced by its developers, limiting its ability to conceive radically new hypotheses or understand unprecedented situations, such as medical ones, in a completely independent manner. Unlike AI, human reasoning, driven by the capacity for abstraction, creativity, and a value system, allows for the free choice of paths in intellectual exploration, even in defiance of conceptual obstacles or existing information, often leading to genuinely innovative discoveries, especially in medicine [12].
The adoption of AI by the journalism profession poses challenges for news organisations in creating ethical and practical frameworks. However, journalists’ professional values act as an intrinsic filter, regulating both the integration of and limitations on AI in their daily work [15].
The authenticity of learning is compromised when AI, unable to provide an answer, invents or simulates information. If the result is a “scientific lie”, what is the value of the knowledge gained? [14] Can an AI adequately assess the complexity of thought? And can its understanding of this complexity match the judgement of an experienced human evaluator? [11].
AI can automate routine tasks, but ethical analysis must be based on ethical principles, the user’s deep knowledge, awareness of consequences, and moral values [17].
How can we really learn computer science with artificial intelligence when it seems capable of generating everything, including software development? Are we not in danger of using existing tools in a way that leads to shallow learning, focused on generation and copying rather than encouraging original creation and the continuous improvement of our skills? [18].
The fundamental challenge is to promote ethical practices that prevent both accidental plagiarism and the generation of misinformation arising from a lack of understanding of their consequences [39].
Although AI has a limited scope for specific knowledge, its contribution to statistical analysis still requires human supervision and a solid foundation in statistical knowledge. In the future, the integration of AI tools will require ever greater human support if it is to encompass all existing knowledge; otherwise, its contributions could ultimately displace human presence [58].
Despite the automation of customisable AI for defined purposes, the configuration of GPTs requires expertise and a laborious process of trial and error, which currently limits their computational accuracy, reliability, and consistency, as well as the elimination of fallacies [31].
Analyses of results invariably expose two divergent trends: the optimistic view of new learning modalities facilitated by AI, and the inclination to transgress regulations, using AI with a notable lack of awareness of the consequences [2].
It is therefore essential to continue research on the uses of AI in education, science, and related fields. At the same time, it is crucial to explore the possibility that human–AI interaction may lead to ambiguous emotional responses, evolve into emotional dependence, or enable subtle manipulation by artificial intelligence, inducing uncritical trust in humans. In other words, further research is needed on how humans use AI [60].
While artificial intelligence helps to improve texts, generate ideas, and make observations, it is important to analyse whether its indiscriminate use leads those who use it to fail to question the veracity of the information they present as their own. This probable numbing of values raises the question of whether users are developing a new form of ‘ethical anaesthesia,’ characterised by the normalisation of misappropriation and the difficulty in discerning that using others’ content has become an ingrained habit in their behaviour [22].
Finally, there is room for research into the development of tools to detect dishonest academic or scientific work, as opposed to the expectation that text generation technologies themselves will evolve to identify and prevent unethical behaviour at the source [56].

6. Conclusions

Recent experiences and research with AI have generated high expectations in academia, teaching, and research. However, those who use it need to understand its limitations and weaknesses in what is considered ‘reasoning’ and ‘analysis,’ as well as prepare themselves for disappointment if they seek to enhance their capabilities in an environment offering effortless solutions. In this regard, despite advances in AI systems, there remains both an ambivalent attitude and practice on the part of academics (professors and researchers) and a widespread tendency among students to accept results provided by AI without questioning or verifying them. At the same time, users constantly share patterns that AI ‘learns’. In this process, individuals, without full awareness, contribute to the evolution of AI, incorporating their own hallmarks.
In the academic field, it seems logical—and desirable, according to many authors and pioneers—that AI systems be controlled by humans. At a certain point, AI could even contribute significantly to the capacity to process information ethically, avoiding violations of academic integrity in accordance with user requirements.
It is precisely in this area that we wanted to raise the debate, as we believe that—rather than simply examining or validating AI-generated responses—it is essential for teachers to have a thorough understanding of the scope of the technology, to critically question its adoption and conditions of application, and, beyond an observer role, to act as analysts and experts, thus detecting the possible shortcomings and limitations of AI, while students take advantage of its apparent benefits.
In this context, as a palliative measure, the use of increasingly effective methodologies for detecting non-human text is becoming widespread, enabling the identification of the abuse of AI tools and of behaviour patterns, not always evident, that involve the excessive use of artificial intelligence systems.
What are the ethical challenges of integrating artificial intelligence into teaching and research?
The integration of AI into teaching and research presents a crucial turning point for academia. Although obvious benefits such as assisted writing and learning optimisation are recognised, significant ethical challenges are emerging. These issues go beyond originality and academic integrity, delving into the very nature of knowledge and the individual. AI’s ability to generate content indistinguishable from that produced by humans raises a crisis of truth and misinformation. Added to this are the challenges of moral responsibility—given the complexity of self-learning systems—the risk of a functionalist view that reduces individuals to their tasks, and the growing technological dependence that can erode critical thinking.
Therefore, the use of AI must be thoughtful and conscious, guided by ethical principles that ensure this technology serves the good of the academic community and preserves human dignity. At the same time, knowledge generation by AI must be seen as limited by its evident lack of the awareness and autonomous reasoning needed to discern the validity, veracity, relevance, and integrity of information. Scientific research and knowledge are not limited to the repetition, organisation, and compilation of texts, however interesting and significant that process may seem. On the contrary, they involve methodologies, epistemological implications, and (today) both technical and technological capacities; they mobilise intuition born of experience and personal skill; and they require a reflective and rigorous process of discernment between the false and the true, the positive and the negative, the valid and the invalid. In short, they involve the reflective evaluation of the truthfulness, relevance, importance, value, and even the morality of ideas and of the ways in which they are obtained, something that AI cannot emulate.
However, precisely because we are dealing with teaching and research, and given their specific limits, these media, tools, and systems must serve the development of human skills, preserving people’s irreplaceable creativity and their essential, unique rational and intellectual character, always capable of being perfected through action. Although the future is unpredictable, AI could evolve to emulate the uniqueness of human communicative language, merging into a novel, natural, and optimised writing style. In this context, the question of the future reliability of AI systems in discerning the origin of a text (human or artificial) intensifies when the influence of human training data is considered. The question arises as to whether, at some point, AI will surpass the writing ability of the average human, leading people to value AI-generated text more highly than their own.
What are the opportunities and risks arising from this integration in various academic disciplines and practices?
The integration of AI into various academic disciplines presents a range of opportunities and risks that require constant ethical discernment. On the one hand, AI acts as a powerful catalyst, capable of processing vast amounts of information beyond human capacity, promoting interdisciplinarity and optimising the management of complex processes. In education, this technology offers transformative potential through personalised learning and the democratisation of access to resources.
However, its thoughtless use carries significant risks, such as reduced human interaction and supervision, labour transformation, a trend towards homogenisation that could impoverish the richness of dialogue, and a significant environmental impact due to high energy consumption. Therefore, it is imperative that academic practices be approached critically, ensuring that technology does not shape the future from a purely algorithmic perspective.
What AI tools have been used and researched, and with what approach?
In this regard, throughout this work and across the selected and analysed studies, the following thematic areas of research have been established: AI in training and education; in research production; in life sciences and health; in media, computer science, and design; as well as specific research on cross-cutting and ethical issues. Work of this nature is led by researchers from the USA and China, followed by England, Germany, and India. In line with their structure, these studies examine the use of AI in education, research, and learning, exploring the connections and ethical implications of authenticity, reliability, impact on learning skills, ethical values, and the principles involved, as well as identifying the main tools used.
In addition, many of the selected works compare tools, integrate them, propose strategies for their use, and establish both the challenges and the latent possibilities. We have identified and analysed all of this in terms of its impact on teaching and research, primarily at the university level. The comparative analysis of AI tools and the integration of AI into information retrieval are closely related findings. The former focuses on evaluating different technologies, such as language models, machine learning algorithms, or summarisation tools, to determine their effectiveness in specific tasks.
The latter, on the other hand, focuses on how these tools are integrated into the research process: for example, how AI-powered search assistants based on natural language processing (NLP) can help researchers find relevant literature more quickly (a minimal sketch follows). The main strength of this relationship lies in its efficiency: by comparing and integrating the right tools, researchers can reduce the time spent reviewing literature and collecting data, devoting more time to critical analysis and writing.
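As a concrete illustration of this kind of integration, the sketch below ranks abstracts against a query by embedding both and comparing cosine similarities. The toy corpus, the model name (‘all-MiniLM-L6-v2’), and the sentence-transformers library are assumptions made for the example; they are not tools drawn from the reviewed studies.

```python
# A minimal embedding-based literature search: encode abstracts and a
# query as dense vectors, then rank abstracts by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose model

abstracts = [  # toy corpus standing in for a bibliographic database
    "Ethical challenges of generative AI in university teaching.",
    "Deep learning methods for protein structure prediction.",
    "Detecting machine-generated text in student essays.",
]
query = "academic integrity and AI-generated writing"

corpus_emb = model.encode(abstracts, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]  # one score per abstract
for idx in scores.argsort(descending=True):
    i = int(idx)
    print(f"{float(scores[i]):.2f}  {abstracts[i]}")
```

Production search assistants layer metadata filtering, deduplication, and re-ranking on top of this basic retrieval step.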
Studies analysing the challenges associated with the use of AI, as well as ethics and transparency in its application, focus on the problems and risks inherent in these technologies. The challenges include technical issues (such as the accuracy and reliability of results) and broader problems (such as over-reliance on technology and academic plagiarism). These studies address the need for researchers to be honest about their use of AI and for its adoption to be transparent. Notably, the focus of research has evolved from simply using the tools to questioning how they should be used. The development of a culture of responsible and critical use of AI is proposed, addressing practical challenges (such as how to avoid plagiarism) and ethical ones (such as establishing a moral and philosophical framework for the use of AI). Strategies of AI use, in turn, are examined in studies of the practical application of AI tools for specific purposes (e.g., using a text generator to outline a draft or a data analysis tool to identify patterns in complex datasets).
At the same time, innovative methodologies using AI focus on developing new research approaches that are only possible thanks to AI, such as large-scale text mining to analyse thousands of documents simultaneously (see the sketch after this paragraph). Both types of studies focus on the ‘how’: they explore and build on AI’s potential to transform research methods from a tactical (strategic) or practical–experimental (methodological) and innovative point of view, aspiring to discoveries that were previously impossible. Finally, research on novelty or innovation examines the new capabilities and opportunities that arise from the use of AI, encouraging researchers not to settle for current tools but to explore those yet to come.
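The following minimal sketch illustrates the kind of large-scale text mining referred to above: TF-IDF vectorisation followed by topic extraction with non-negative matrix factorisation (NMF), applied here to a toy corpus standing in for thousands of documents. The corpus, the number of topics, and the scikit-learn pipeline are illustrative assumptions, not a methodology taken from the selected works.

```python
# A toy version of large-scale text mining: TF-IDF features plus NMF
# topic extraction; real studies apply this to thousands of documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

documents = [  # illustrative corpus
    "AI tools support academic writing and literature review.",
    "Plagiarism detection protects academic integrity in universities.",
    "Machine learning models analyse large collections of papers.",
    "Ethical guidelines govern the use of generative AI in research.",
]

tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(documents)

n_topics = 2  # assumption; in practice tuned against the corpus
nmf = NMF(n_components=n_topics, random_state=0).fit(matrix)

terms = tfidf.get_feature_names_out()
for k, component in enumerate(nmf.components_):
    top = component.argsort()[-4:][::-1]  # four highest-weighted terms
    print(f"topic {k}:", ", ".join(terms[t] for t in top))
```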
What criteria and practical guidelines can be derived from this critical analysis?
In the university context, the integration of artificial intelligence (AI) requires the adoption of ethical principles that guarantee its responsible use and the preservation of the human essence in education. It is imperative that AI be conceived as a tool and not as an intelligence to which obedience must be rendered. The guiding principle of any application must be human dignity, ensuring that technology contributes to the enrichment, rather than the reduction, of people. To this end, universities must promote transparency and accountability, actively combating the misappropriation of content. Likewise, education must go beyond mere digital literacy to foster critical thinking and moral discernment, helping students and professionals understand and question the social implications of AI.
Finally, it is crucial to recognise AI’s inherent limitations: confined to a logical–mathematical realm, it lacks moral judgement and wisdom—intrinsically human qualities that cannot be replicated through data. The management of this technology must be guided by these principles to ensure that it serves the common good and does not become an end in itself. However, questions remain open and pressing about the integrity of works generated in this way—including issues of authorship, originality, and human learning in teaching and research processes; in the latter case, concerning the generation of valid, reliable, relevant, and transformative knowledge, as aspired to in science.

Author Contributions

Conceptualization, J.L.-I., R.A.-C.; methodology, J.L.-I., R.A.-C.; software, J.L.-I., R.A.-C.; validation, J.L.-I., R.A.-C.; formal analysis, J.L.-I., R.A.-C.; investigation, J.L.-I., R.A.-C.; resources, J.L.-I., R.A.-C.; data curation, J.L.-I., R.A.-C.; writing—original draft preparation, J.L.-I., R.A.-C.; writing—review and editing, J.L.-I., R.A.-C.; visualization, J.L.-I., R.A.-C.; supervision, J.L.-I., R.A.-C.; project administration, J.L.-I., R.A.-C.; funding acquisition, J.L.-I., R.A.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors would like to thank the authorities of the Universidad Politécnica Salesiana, Ecuador, for their unconditional support of the research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rejeb, A.; Rejeb, K.; Appolloni, A.; Treiblmaier, H.; Iranmanesh, M. Exploring the impact of ChatGPT on education: A web mining and machine learning approach. Int. J. Manag. Educ. 2024, 22, 100932. [Google Scholar] [CrossRef]
  2. Yılmaz Virlan, A.; Tomak, B. AI tools for writing: A Q-method study with Turkish instructors of English. Educ. Inf. Technol. 2025, 30, 16997–17021. [Google Scholar] [CrossRef]
  3. Shahsavar, Z.; Kafipour, R.; Khojasteh, L.; Pakdel, F. Is artificial intelligence for everyone? Analyzing the role of ChatGPT as a writing assistant for medical students. Front. Educ. 2024, 9, 1457744. [Google Scholar] [CrossRef]
  4. Bai, Y.; Kosonocky, C.W.; Wang, J.Z. How our authors are using AI tools in manuscript writing. Patterns 2024, 5, 101075. [Google Scholar] [CrossRef]
  5. Tang, A.; Li, K.K.; Kwok, K.O.; Cao, L.; Luong, S.; Tam, W. The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. J. Nurs. Scholarsh. 2024, 56, 314–318. [Google Scholar] [CrossRef]
  6. Barrot, J.S. Leveraging Google Gemini as a Research Writing Tool in Higher Education. Technol. Knowl. Learn. 2025, 30, 593–600. [Google Scholar] [CrossRef]
  7. Sajid, M.; Sanaullah, M.; Fuzail, M.; Malik, T.S.; Shuhidan, S.M. Comparative analysis of text-based plagiarism detection techniques. PLoS ONE 2025, 20, e0319551. [Google Scholar] [CrossRef]
  8. Amirjalili, F.; Neysani, M.; Nikbakht, A. Exploring the boundaries of authorship: A comparative analysis of AI-generated text and human academic writing in English literature. Front. Educ. 2024, 9, 1347421. [Google Scholar] [CrossRef]
  9. Salman, H.A.; Ahmad, M.A.; Ibrahim, R.; Mahmood, J. Systematic analysis of generative AI tools integration in academic research and peer review. Online J. Commun. Media Technol. 2025, 15, e202502. [Google Scholar] [CrossRef]
  10. Lombaers, P.; de Bruin, J.; van de Schoot, R. Reproducibility and Data Storage for Active Learning-Aided Systematic Reviews. Appl. Sci. 2024, 14, 3842. [Google Scholar] [CrossRef]
  11. Almegren, A.; Hassan Saleh, M.; Abduljalil Nasr, H.; Jamal Kaid, A.; Almegren, R.M. Evaluating the quality of AI feedback: A comparative study of AI and human essay grading. Innov. Educ. Teach. Int. 2024, 1–16. [Google Scholar] [CrossRef]
  12. Su, Z.; Tang, G.; Huang, R.; Qiao, Y.; Zhang, Z.; Dai, X. Based on Medicine, The Now and Future of Large Language Models. Cell. Mol. Bioeng. 2024, 17, 263–277. [Google Scholar] [CrossRef]
  13. Bhattaru, A.; Yanamala, N.; Sengupta, P.P. Revolutionizing Cardiology With Words: Unveiling the Impact of Large Language Models in Medical Science Writing. Can. J. Cardiol. 2024, 40, 1950–1958. [Google Scholar] [CrossRef]
  14. Williams, A. Comparison of generative AI performance on undergraduate and postgraduate written assessments in the biomedical sciences. Int. J. Educ. Technol. High. Educ. 2024, 21, 52. [Google Scholar] [CrossRef]
  15. Wu, S. Journalists as individual users of artificial intelligence: Examining journalists’ “value-motivated use” of ChatGPT and other AI tools within and without the newsroom. Journalism 2024. [CrossRef]
  16. Chandrasekera, T.; Hosseini, Z.; Perera, U.; Bazhaw Hyscher, A. Generative artificial intelligence tools for diverse learning styles in design education. Int. J. Archit. Comput. 2024, 23, 358–369. [Google Scholar] [CrossRef]
  17. Thaichana, P.; Oo, M.Z.; Thorup, G.L.; Chansakaow, C.; Arworn, S.; Rerkasem, K. Integrating Artificial Intelligence in Medical Writing: Balancing Technological Innovation and Human Expertise, with Practical Applications in Lower Extremity Wounds Care. Int. J. Low. Extrem. Wounds 2025. ahead of print. [Google Scholar] [CrossRef] [PubMed]
  18. Sengul, C.; Neykova, R.; Destefanis, G. Software engineering education in the era of conversational AI: Current trends and future directions. Front. Artif. Intell. 2024, 7, 1436350. [Google Scholar] [CrossRef]
  19. Llerena-Izquierdo, J.; Mendez-Reyes, J.; Ayala-Carabajo, R.; Andrade-Martinez, C. Innovations in Introductory Programming Education: The Role of AI with Google Colab and Gemini. Educ. Sci. 2024, 14, 1330. [Google Scholar] [CrossRef]
  20. Reddy, M.R.; Walter, N.G.; Sevryugina, Y.V. Implementation and Evaluation of a ChatGPT-Assisted Special Topics Writing Assignment in Biochemistry. J. Chem. Educ. 2024, 101, 2740–2748. [Google Scholar] [CrossRef]
  21. Howard, F.M.; Li, A.; Riffon, M.F.; Garrett-Mayer, E.; Pearson, A.T. Characterizing the Increase in Artificial Intelligence Content Detection in Oncology Scientific Abstracts From 2021 to 2023. JCO Clin. Cancer Inform. 2024, 8, e2400077. [Google Scholar] [CrossRef]
  22. Elizondo-García, M.E.; Hernández-De la Cerda, H.; Benavides-García, I.G.; Caratozzolo, P.; Membrillo-Hernández, J. Who is solving the challenge? The use of ChatGPT in mathematics and biology courses using challenge-based learning. Front. Educ. 2025, 10, 1417642. [Google Scholar] [CrossRef]
  23. Eachempati, P.; Komattil, R.; Arakala, A. Should oral examination be reimagined in the era of AI? Adv. Physiol. Educ. 2025, 49, 208–209. [Google Scholar] [CrossRef] [PubMed]
  24. Clift, L.; Petrovska, O. Learning without Limits: Analysing the Usage of Generative AI in a Summative Assessment. In Proceedings of the 9th Conference on Computing Education Practice, Durham, UK, 7 January 2025; CEP ’25. pp. 5–8. [Google Scholar] [CrossRef]
  25. Segooa, M.A.; Modiba, F.S.; Motjolopane, I. Generative Artificial Intelligence Tools to Augment Teaching Scientific Research in Postgraduate Studies. S. Afr. J. High. Educ. 2025, 39, 300–320. [Google Scholar] [CrossRef]
  26. Rujeedawa, M.I.H.; Pudaruth, S.; Malele, V. Unmasking AI-Generated Texts Using Linguistic and Stylistic Features. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 215–221. [Google Scholar] [CrossRef]
  27. Deanda, D.; Alsmadi, I.; Guerrero, J.; Liang, G. Defending mutation-based adversarial text perturbation: A black-box approach. Clust. Comput. 2025, 28, 196. [Google Scholar] [CrossRef]
  28. Inam, M.; Sheikh, S.; Minhas, A.M.K.; Vaughan, E.M.; Krittanawong, C.; Samad, Z.; Lavie, C.J.; Khoja, A.; D’Cruze, M.; Slipczuk, L.; et al. A review of top cardiology and cardiovascular medicine journal guidelines regarding the use of generative artificial intelligence tools in scientific writing. Curr. Probl. Cardiol. 2024, 49, 102387. [Google Scholar] [CrossRef] [PubMed]
  29. Oates, A.; Johnson, D. ChatGPT in the Classroom: Evaluating its Role in Fostering Critical Evaluation Skills. Int. J. Artif. Intell. Educ. 2025. [CrossRef]
  30. Lee, M.; Gero, K.I.; Chung, J.J.Y.; Shum, S.B.; Raheja, V.; Shen, H.; Venugopalan, S.; Wambsganss, T.; Zhou, D.; Alghamdi, E.A.; et al. A Design Space for Intelligent and Interactive Writing Assistants. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024. CHI ’24. [Google Scholar] [CrossRef]
  31. Kabir, A.; Shah, S.; Haddad, A.; Raper, D.M.S. Introducing Our Custom GPT: An Example of the Potential Impact of Personalized GPT Builders on Scientific Writing. World Neurosurg. 2025, 193, 461–468. [Google Scholar] [CrossRef]
  32. Ambati, S.H.; Stakhanova, N.; Branca, E. Learning AI Coding Style for Software Plagiarism Detection. In Proceedings of the Security and Privacy in Communication Networks, Dubai, United Arab Emirates, 28–30 October 2024; Duan, H., Debbabi, M., de Carné de Carnavalet, X., Luo, X., Du, X., Au, M.H.A., Eds.; Springer: Cham, Switzerland, 2025; pp. 467–489. [Google Scholar] [CrossRef]
  33. Mysechko, A.; Lytvynenko, A.; Goian, A. Artificial Intelligence in Academic Media Environment: Challenges, Trends, Innovations. Media Lit. Acad. Res. 2024, 7, 221–241. [Google Scholar] [CrossRef]
  34. Castillo-Martínez, I.M.; Flores-Bueno, D.; Gómez-Puente, S.M.; Vite-León, V.O. AI in higher education: A systematic literature review. Front. Educ. 2024, 9, 1391485. [Google Scholar] [CrossRef]
  35. Salih, S.; Husain, O.; Hamdan, M.; Abdelsalam, S.; Elshafie, H.; Motwakel, A. Transforming education with AI: A systematic review of ChatGPT’s role in learning, academic practices, and institutional adoption. Results Eng. 2025, 25, 103837. [Google Scholar] [CrossRef]
  36. Barrot, J.S. Balancing Innovation and Integrity: An Emerging Technology Report on SciSpace in Academic Writing. Technol. Knowl. Learn. 2025, 30, 587–592. [Google Scholar] [CrossRef]
  37. Chamoun, E.; Schlichktrull, M.; Vlachos, A. Automated focused feedback generation for scientific writing assistance. arXiv 2024, arXiv:2405.20477. [Google Scholar] [CrossRef]
  38. Maaloul, K. Identifying AI-Written Text in Academia: A Machine Learning-Based Framework. In Proceedings of the 2024 1st International Conference on Electrical, Computer, Telecommunication and Energy Technologies (ECTE-Tech), Oum El Bouaghi, Algeria, 17–18 December 2024; pp. 1–6. [Google Scholar] [CrossRef]
  39. Obura, E.A.; Emoit, P.I. Artificial Intelligence in Academic Writing and Research Skills in Kenyan Universities: Opportunities and Challenges. Afr. Educ. Rev. 2024, 20, 58–80. [Google Scholar] [CrossRef]
  40. Kramar, N.; Bedrych, Y.; Shelkovnikova, Z. Ukrainian PhD Students’ Attitudes Toward AI Language Processing Tools in the Context of English for Academic Purposes. Adv. Educ. 2024, 12, 41–57. [Google Scholar] [CrossRef]
  41. Shen, Y.; Tang, L.; Le, H.; Tan, S.; Zhao, Y.; Shen, K.; Li, X.; Juelich, T.; Wang, Q.; Gašević, D.; et al. Aligning and comparing values of ChatGPT and human as learning facilitators: A value-sensitive design approach. Br. J. Educ. Technol. 2025, 56, 1391–1414. [Google Scholar] [CrossRef]
  42. Pratiwi, H.; Suherman; Hasruddin; Ridha, M. Between Shortcut and Ethics: Navigating the Use of Artificial Intelligence in Academic Writing Among Indonesian Doctoral Students. Eur. J. Educ. 2025, 60, e70083. [Google Scholar] [CrossRef]
  43. Al-Zubaidi, K.; Jaafari, M.; Touzani, F.Z. Impact of ChatGPT on Academic Writing at Moroccan Universities. Arab World Engl. J. 2024, 1, 4–25. [Google Scholar] [CrossRef]
  44. Szandała, T. ChatGPT vs human expertise in the context of IT recruitment. Expert Syst. Appl. 2025, 264, 125868. [Google Scholar] [CrossRef]
  45. Neshaei, S.P.; Rietsche, R.; Su, X.; Wambsganss, T. Enhancing Peer Review with AI-Powered Suggestion Generation Assistance: Investigating the Design Dynamics. In Proceedings of the 29th International Conference on Intelligent User Interfaces, Greenville, SC, USA, 18–21 March 2024; IUI ’24. pp. 88–102. [Google Scholar] [CrossRef]
  46. Gherheș, V.; Fărcașiu, M.A.; Cernicova-Buca, M.; Coman, C. AI vs. Human-Authored Headlines: Evaluating the Effectiveness, Trust, and Linguistic Features of ChatGPT-Generated Clickbait and Informative Headlines in Digital News. Information 2025, 16, 150. [Google Scholar] [CrossRef]
  47. Gawlik-Kobylińska, M. Harnessing Artificial Intelligence for Enhanced Scientific Collaboration: Insights from Students and Educational Implications. Educ. Sci. 2024, 14, 1132. [Google Scholar] [CrossRef]
  48. Kar, S.K.; Bansal, T.; Modi, S.; Singh, A. How Sensitive Are the Free AI-detector Tools in Detecting AI-generated Texts? A Comparison of Popular AI-detector Tools. Indian J. Psychol. Med. 2024, 47, 275–278. [Google Scholar] [CrossRef]
  49. Bellini, V.; Federico, S.; Jonathan, M.; Marco, C.; Bignami, E. Between human and AI: Assessing the reliability of AI text detection tools. Curr. Med. Res. Opin. 2024, 40, 353–358. [Google Scholar] [CrossRef]
  50. Tu, J.; Nacke, L.; Rogers, K. Introducing the INSPIRE Framework: Guidelines From Expert Librarians for Search and Selection in HCI Literature. Interact. Comput. 2025, iwaf001. [Google Scholar] [CrossRef]
  51. Mirón-Mérida, V.A.; García-García, R.M. Developing written communication skills in engineers in Spanish: Is ChatGPT a tool or a hindrance? Front. Educ. 2024, 9, 1416152. [Google Scholar] [CrossRef]
  52. Jain, R.; Jain, A. Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work. In Proceedings of the Intelligent Systems and Applications, Amsterdam, The Netherlands, 29–30 August 2024; Arai, K., Ed.; Springer: Cham, Switzerland, 2024; pp. 656–669. [Google Scholar] [CrossRef]
  53. Liu, J.Q.J.; Hui, K.T.K.; Al Zoubi, F.; Zhou, Z.Z.X.; Samartzis, D.; Yu, C.C.H.; Chang, J.R.; Wong, A.Y.L. The great detectives: Humans versus AI detectors in catching large language model-generated medical writing. Int. J. Educ. Integr. 2024, 20, 8. [Google Scholar] [CrossRef]
  54. Desaire, H.; Isom, M.; Hua, D. Almost Nobody Is Using ChatGPT to Write Academic Science Papers (Yet). Big Data Cogn. Comput. 2024, 8, 133. [Google Scholar] [CrossRef]
  55. Májovský, M.; Černý, M.; Netuka, D.; Mikolov, T. Perfect detection of computer-generated text faces fundamental challenges. Cell Rep. Phys. Sci. 2024, 5, 101769. [Google Scholar] [CrossRef]
  56. Liu, Z.; Yao, Z.; Li, F.; Luo, B. On the Detectability of ChatGPT Content: Benchmarking, Methodology, and Evaluation through the Lens of Academic Writing. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, Salt Lake City, UT, USA, 14–18 October 2024; CCS ’24. pp. 2236–2250. [Google Scholar] [CrossRef]
  57. Jadán-Guerrero, J.; Acosta-Vargas, P.; Gutiérrez-De Gracia, N.E. Enhancing Scientific Research and Paper Writing Processes by Integrating Artificial Intelligence Tools. In Proceedings of the HCI International 2024 Posters, Washington, DC, USA, 29 June–4 July 2024; Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2024; pp. 64–74. [Google Scholar] [CrossRef]
  58. Schwarz, J. The use of generative AI in statistical data analysis and its impact on teaching statistics at universities of applied sciences. Teach. Stat. 2025, 47, 118–128. [Google Scholar] [CrossRef]
  59. Yildiz Durak, H.; Eğin, F.; Onan, A. A Comparison of Human-Written Versus AI-Generated Text in Discussions at Educational Settings: Investigating Features for ChatGPT, Gemini and BingAI. Eur. J. Educ. 2025, 60, e70014. [Google Scholar] [CrossRef]
  60. Hashemi, A.; Shi, W.; Corriveau, J.P. AI-generated or AI touch-up? Identifying AI contribution in text data. Int. J. Data Sci. Anal. 2024, 20, 3759–3770. [Google Scholar] [CrossRef]
Figure 1. Process of identification, screening, and selection of studies (PRISMA).
Figure 2. Keywords related by co-occurrence links among the papers identified in the literature.
Figure 3. Relationships of keyword co-occurrences across different fields of study: (a) Relationship between artificial intelligence and techniques for managing large-scale language models in various areas of knowledge. (b) Influential relationship between machine learning and the development of programs for management, assistance, processing, and text generation. (c) Relationship involving generative artificial intelligence tools in health care. (d) Relationship between artificial intelligence and the impact of ChatGPT on education and research. (e) Relationship between artificial intelligence, automation, and ethical challenges.
Figure 4. Collaboration and heat maps generated with VOSviewer: (a) Collaboration map between countries. (b) Heat map showing the incidence of research work by country.
Figure 5. Percentage analysis of the classifications identified in the literature within the ‘topics and objectives’ category.
Figure 6. Percentage analysis of the classifications identified in the literature within the ‘problem formulation and data implications’ category.
Figure 7. Percentage analysis of the classifications identified in the literature within the ‘limitations encountered’ category.
Figure 8. Percentage analysis of the classifications identified in the literature within the ‘proposals and applied methodology’ category.
Figure 9. Percentage analysis of the classifications identified in the literature within the ‘solution found, outcomes, and challenges’ category.
Figure 10. Percentage analysis of approaches identified in the literature.
Figure 11. Percentage analysis of the aspects identified in the literature.
Table 1. Classification of thematic areas of research focused on artificial intelligence found in the literature review.

Thematic Area of Research | Classification | Reference | %
AI in education and training | Applications in education. | [1,2,6,19,22,35,39,41,51] | 15.0%
 | Applications in medical education, clinical practice and nursing. | [12,13] | 3.3%
 | Applications in the assessment process. | [11] | 1.7%
AI in academic and research production | Applications in research processes. | [7,9,21,24,25,29,30,34,40,42,49,50,52,53,58] | 25.0%
 | Applications in article and poster writing. | [5,31,37,38,47,48,56,57] | 13.3%
 | Applications in academic writing. | [3,8,28,43,59] | 8.3%
 | Automated academic writing assistance. | [36,46] | 3.3%
 | Applications in peer review. | [45] | 1.7%
 | Applications in scientific work selection processes. | [10] | 1.7%
AI in life and health sciences | Application in biomedical sciences. | [14,20] | 3.3%
 | Application in healthcare communication. | [17] | 1.7%
AI in the media | Applications in the field of communication. | [15,33] | 3.3%
 | Applications in journalism, education, and law. | [26] | 1.7%
AI in computer science and design | Applications in computer science. | [18,32] | 3.3%
 | Application in interior design. | [16] | 1.7%
AI in cross-cutting and ethical aspects | Authenticity of the content. | [23,27,54,55,60] | 8.3%
 | Applications in immediate feedback. | [4] | 1.7%
 | Applications in assisted decision making. | [44] | 1.7%
Table 2. Categories and classification of selected papers according to the structure of the research paper.

Categories | Classification
Topics and objectives | A1: AI application in education
 | A2: Use of AI in research
 | A3: Ethics-focused guidelines for AI use
 | A4: Detection and authenticity of AI-generated content
 | A5: Evaluating AI tools for academia
Problem formulation and data implications | B1: Impact of AI on learning and skills
 | B2: Challenges to academic integrity and ethics
 | B3: Technical and reliability limitations of AI
 | B4: Need for reference frameworks and guidelines for AI integration
 | B5: Perceptions and reactions towards AI
Limitations encountered | C1: Reliability and accuracy of AI-generated content
 | C2: Challenges in AI-generated text and information detection
 | C3: Limitations in human understanding and interaction with AI
 | C4: Impact on the development of human skills
 | C5: Effectiveness of AI tools depending on factors such as context and language
Proposals and applied methodology | D1: Empirical studies to assess the impact of AI, proposals, and methodologies applied
 | D2: Systematic literature reviews to synthesise knowledge
 | D3: Evaluation of frameworks and models for AI integration and detection
 | D4: Linguistic analysis to understand the content of the AI
 | D5: Design approaches centred on user values and perspectives
Solution found, outcomes, and challenges | E1: Understanding users’ perceptions and use of AI
 | E2: Evaluation of the effectiveness of AI tools in academic and professional tasks
 | E3: Identification of factors influencing the adoption and impact of AI
 | E4: Evaluation of methods and tools for AI content detection
 | E5: Need for guidelines and perception of AI
Table 3. Approaches determined in the review of selected papers, their codes, and references.

Code | Approach | References
A | Comparative analysis of AI tools | [3,14,16,17,18,44,45,46,47,49,60]
B | Integration of AI in information search | [8,48,50,51]
C | Challenges to be addressed with the use of AI | [1,28,33,34,36,37]
D | Strategies of use with AI | [4,5,9,12,15,20,21,23,24,25,26,27,29,38]
E | Ethics and transparency in the use of AI | [7,13,22,32,41,42,52,53,54,55,59]
F | Innovative methodologies with the use of AI | [10,11,19,35,40,53,56,57]
G | Novelty with the use of AI | [2,6,30,31,39,43,58]
Table 4. Specific artificial intelligence tools used in different research approaches.

Approaches * | AI Tool | References
Comparative analysis of AI tools | ChatGPT, Bing, and Bard | [14]
 | ChatGPT, Google Gemini, and Mistral | [44]
 | Hamta.ai | [45]
 | GPTZero, ZeroGPT, Writer ACD, and Originality | [49]
 | GPTZero and AICIS-2S | [60]
Integration of AI in information search | Sapling, Undetectable AI, Copyleaks, QuillBot, and Wordtune | [48]
 | Turnitin, Unicheck, GPTZero, and ChatGPT | [51]
Challenges to be addressed with the use of AI | SciSpace | [36]
 | SWIF2T | [37]
Strategies of use with AI | GPTZero, Originality.ai, and Sapling | [21]
 | ResearchBuddie artefact, ChatGPT, Elicit, and Research Rabbit | [25]
Ethics and transparency in the use of AI | ChatGPT and Bard | [32]
 | ChatGPT, Connected Papers, Zotero, Humata, Scite AI, and Deepl | [42]
 | Turnitin, GPTZero, Originality.ai, Wordtune, ZeroGPT, GPT-2 Output Detector, and Content at Scale | [53]
 | ChatGPT, Google Gemini, and BingAI | [59]
Innovative methodologies with the use of AI | Google Gemini | [19]
 | Grammarly, QuillBot, and ChatGPT | [40]
 | CheckGPT and GPABench2 | [56]
Novelty with the use of AI | Google Gemini | [6]
 | Medi Research Assistant and Neurosurgical Research Paper Writer | [31]
 | ChatGPT | [43]
* The approach according to the literature review.
Table 5. Opportunities and challenges of using AI tools detected in the studies based on reflection and analysis.

Aspects * | Reference
Opportunities in the use of AI | [1,3,8,10,11,13,14,16,17,18,19,23,24,25,26,28,30,31,33,34,36,37,39,40,41,42,44,45,47,50,56,57,58,59]
Existing challenges in the use of AI | [2,4,5,6,7,9,12,15,20,21,22,27,29,32,35,38,43,46,48,49,51,52,53,54,55,60]
* The aspect according to the objectivity of the results and discussion.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
