1. Introduction
In 1950, Alan Turing proposed the ‘Turing Test’ as a thought experiment that could circumvent the philosophical vagueness of the question, “Can a machine think?” (Turing, 1950), starting the dialogue that led to the foundation of the scientific field of artificial intelligence (AI), and which has, over time, been reinforced with ideas from other sciences, such as Philosophy, Mathematics, Economics, Neuroscience, Psychology, Control Theory and Cybernetics, Linguistics, and Computer Science (Russell & Norvig, 2021).
The term ‘artificial intelligence’ was first used at a 1956 workshop held at Dartmouth College to describe the “science and engineering of making intelligent machines, especially intelligent computer programs” (McCarthy et al., 2006, p. 2). Ever since, definitions of AI have multiplied and expanded, often becoming entangled with the philosophical questions of what constitutes ‘intelligence’ and whether machines can ever really be ‘intelligent’ (UNESCO, 2021). Pragmatically sidestepping this long-running debate, UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) describes AI as involving machines capable of imitating certain functionalities of human intelligence, including features such as perception, learning, reasoning, problem solving, language interaction, and even producing creative work (COMEST, 2019).
Over the past decades, AI has experienced periods of rapid progress, followed by phases of slower development known as “AI winters” (Russell & Norvig, 2021). Its goal has been to perform tasks that require intelligence, such as decision-making, judgment, and learning. Currently, we are experiencing an ‘AI renaissance’ or ‘age of implementation’, with an ever-increasing range of sectors adopting Machine Learning (ML) due to the exponential growth of digital data, increases in computing power, the sophisticated refinement of the corresponding algorithms, and their availability ‘as a service’ (UNESCO, 2021). Real-world applications of AI are becoming increasingly pervasive and disruptive, and the rapid evolution of applications based on Natural Language Processing (NLP) and Artificial Neural Network (ANN) algorithms (Ossiannilsson et al., 2024) has sparked an extensive public discourse (Bond et al., 2023), especially with the advent of text and image Generative AI (GenAI) applications, which are special types of ANNs.
An ANN is a form of ML that draws inspiration from the architecture and functionality of the human brain, such as the synaptic connections between neurons. In this context, a general-purpose transformer is a variant of ANN that has the capability to concentrate on various segments of data to ascertain their interrelations. When such a transformer is trained on extensive text datasets, it results in the development of a Large Language Model (LLM). The Generative Pre-Trained Transformer (GPT) represents a specific category of LLM that undergoes training on even more substantial data volumes, enabling the model to grasp language intricacies and produce coherent, contextually appropriate text (UNESCO, 2023a).
ChatGPT (OpenAI, 2022) stands out as the most prominent application of GenAI thus far. Released in late 2022, ChatGPT marked a milestone in chatbot development. This LLM, based on GPT technology and trained on a massive dataset (Wolfram, 2023) using ML techniques (Javaid et al., 2023), can engage in written conversations, using coherent natural language that simulates human communication (Skrabut, 2023). Its ability to perform NLP tasks, such as creating customized educational materials and exercises, providing immediate feedback on tasks like summarizing texts, proofreading, and generating original essays (Skrabut, 2023), highlights its potential as a valuable tool for both teachers and students. The model’s significant impact is mirrored by a surge in research activity, focusing on both the benefits and ethical concerns of integrating ChatGPT, primarily in higher education, as well as on the model’s performance across various disciplines.
GenAI systems like ChatGPT can automatically generate content in response to prompts presented within natural-language conversational interfaces. Rather than curating existing web content, GenAI attempts to create novel content across various symbolic forms by statistically examining data elements’ distributions, thereby identifying and reproducing prevalent patterns (UNESCO, 2023b). Since the end of 2022, there has been a tremendous increase in awareness, familiarity, and use of GenAI tools. The removal of technological barriers and the availability of simple language interfaces have made GenAI applications accessible and user-friendly, while the influence of technology-savvy student populations and the impact of social networks and globalization have also contributed to the readiness of individual users to explore GenAI applications (Kurtz et al., 2024).
Artificial intelligence in education (AIED) has been a prominent research field since the 1970s, with AI and education closely linked since the early days of AI (Doroudi, 2022). Combining theoretical knowledge with practical application, AIED focuses on developing foundational theories for integrating AI into education while simultaneously creating tools aimed at enhancing the learning experience and improving learning outcomes (Holmes & Porayska-Pomsta, 2022). Despite the growing realization of the potential for AIED, influenced by educational evidence-based policy (OECD, 2021), it has arguably only now transitioned from experiment to practice in educational settings. Researchers of AIED focus on one-to-one tutoring by machines, personalized learning, and AI tools supporting teaching, learning, assessment, and administration. In this context, AIED includes ‘learning with AI’, ‘learning about AI’, ‘preparing for AI’, and issues of pedagogy, organizational structures, access, ethics, equity, and sustainability (UNESCO, 2021). Moreover, since 2023, with the advent of GenAI, there has been an exponential surge in research interest in AIED (Yim & Su, 2024).
Ouyang and Jiao (2021) categorized AIED into three distinct paradigms: AI-directed, AI-supported, and AI-empowered. As Adiguzel et al. (2023) describe, the AI-directed approach views the learner as a recipient of AI interventions, largely influenced by behaviorist theories, where AI orchestrates and guides cognitive processes, with the learner passively receiving AI-driven instructions. In contrast, the AI-supported paradigm, grounded in cognitive and social constructivist theories, positions learners as collaborators, actively engaging with AI tools to aid their learning process. Lastly, the AI-empowered paradigm, inspired by connectivism theory, sees learners as leaders of their own learning journeys. Here, AI acts as an enabler, allowing learners to assume control over their educational experiences and fostering a synergistic collaboration among various entities, including the learner, educator, information, and technology, to enhance the learner’s intelligence.
AI applications have the potential to transform the way research and education are conducted by automating tedious and repetitive tasks, assisting in data analysis, and enabling new forms of learning and assessment (Kooli, 2023). As summarized by Chellappa and Luximon (2024), the integration of AI applications in education has made a substantial impact, as illustrated by enhancements in educational process efficiency, the facilitation of global learning, the customization of learning experiences, the development of more intelligent educational content, and the optimization of academic administration for improved effectiveness and efficiency.
In this context, AI is rapidly transforming the landscape of academia as well, where the use of AI systems and chatbots has gained significant attention in recent years. As Bond et al. (2023) summarize in their meta-systematic review of AI in higher education (HE), the evolution of AIED can be traced back several decades, exhibiting a rich history of intertwining educational theory and emergent technology. As the field matured through the 1990s and into the 2000s, research explored various facets of AIED, such as intelligent tutoring systems, adaptive learning environments, and support for collaborative learning. After the 2010s, the synergies between AI tools and educational practices further intensified. During this period, researchers explored chatbots for student engagement, automated grading and feedback, predictive analytics for student success, and various adaptive platforms for personalized learning, while facing various challenges and dilemmas, such as the ethical use of AI (Kostas et al., 2024). To gain further understanding of the applications of AI in higher education, and to provide guidance to the field, Zawacki-Richter et al. (2019) developed a typology classifying research into four broad areas: (a) profiling and prediction, (b) intelligent tutoring systems, (c) assessment and evaluation, and (d) adaptive systems and personalization.
Generative AI applications such as ChatGPT have the potential to serve as transformative catalysts, supporting higher education institutions in enhancing their relevance and sustainability (Baidoo-Anu & Owusu Ansah, 2023) while also introducing profound challenges to academic learning and teaching methods (Crawford et al., 2023). From personalized learning platforms to automated grading systems, AI applications are integrated into various aspects of HE, offering new possibilities for improving learning outcomes and enhancing the educational experience (Mambile & Mwogosi, 2025). According to a SWOT analysis (Farrokhnia et al., 2023), ChatGPT presents:
(a) Strengths: plausible, personalized, and real-time responses with self-improving capability;
(b) Weaknesses: limited understanding, undefined quality of responses, bias and discrimination effects, and a lack of higher-order thinking;
(c) Opportunities: accessibility of information, personalized learning, complex learning, and a decreased teaching workload;
(d) Threats: lack of contextual awareness, threats to academic integrity, plagiarism, a decline in cognitive skills, and the perpetuation of discrimination.
GenAI apps like ChatGPT have significant relevance for supporting HE student learning in multiple ways. These applications can be employed to analyze and process vast amounts of textual data, such as academic papers, textbooks, and other course materials; to provide students with personalized recommendations for further study based on their learning requirements and preferences; to develop chatbots and virtual assistants that offer on-demand support and guidance to students; to facilitate adaptive and personalized learning; to foster creative thinking; to act as an intelligent tutoring system; to influence positive learning outcomes; to develop competencies; and to act as a co-researcher (AlBadarin et al., 2023; Wu & Yu, 2023; Bond et al., 2023; Fuchs, 2023; Crompton & Burke, 2023; D. Ali et al., 2024; Ogunleye et al., 2024; Schei et al., 2024; Ansari et al., 2024).
Although there is a wide range of opportunities for GenAI applications in HE, there are also several challenges that should be addressed, such as inaccurate information, plagiarism, discrimination, privacy and security risks, overreliance on technology, a lack of ethical consideration, failure to develop important critical thinking skills, bias, misunderstandings and incorrect responses due to the nuances and complexities of human language, and limited reliability (Holmes et al., 2023; Kooli, 2023; Fuchs, 2023; Sullivan et al., 2023; Bond et al., 2023; Ogunleye et al., 2024; Ansari et al., 2024; Sok & Heng, 2024; Dimeli & Kostas, 2025).
Considering the intricate and diverse concerns surrounding the incorporation of GenAI in HE settings, coupled with the risk of oversimplifying educational processes, it becomes imperative to rigorously assess both its advantages and disadvantages to facilitate well-informed decision-making within an ethical and regulatory framework (Kim & Adlof, 2023). Addressing this multifaceted issue requires HE institutions to furnish explicit guidance to students by assessing their perceptions, attitudes, concerns, experiences, and practices, as these play a crucial role in shaping the adoption and integration of GenAI in academia (Deng et al., 2025). Gaining such knowledge holds critical value not only for policymakers, educational institutions, educators, and students but also for guiding future research. Exploring students’ interactions with GenAI is essential for shaping a research-informed approach to teaching and learning in modern HE.
In this context, cross-sectional studies, a type of observational study that analyzes data from a population, or a representative subset, at a specific point in time (Wang & Cheng, 2020), offer significant insights into the preliminary examination of applications like ChatGPT within HE. Predominant themes in such research encompass students’ perceptions and experiences, variations in these perceptions and experiences attributable to individual characteristics, and elements influencing students’ intentions and actual utilization of GenAI tools (J. K. M. Ali et al., 2023; Barrett & Pack, 2023; Chan & Hu, 2023; Hajam & Gahir, 2024; Balabdaoui et al., 2024; Serhan & Welcome, 2024; Sevnarayan, 2024; Karataş & Yüce, 2024; Capinding & Dumayas, 2024; Aldossary et al., 2024). By capturing a momentary depiction of students’ perceptions, behaviors, and performance at a specific juncture, cross-sectional studies are particularly advantageous due to their capacity to amass extensive data from diverse student cohorts rapidly. These studies serve as an effective initial approach to identifying patterns in learners’ perceptions and interactions with GenAI tools in HE (Deng et al., 2025).
From the indicative studies presented above, there is great interest at an international level in investigating the use of GenAI tools in HE, and a substantial body of research has already emerged in the short period since the release of ChatGPT in November 2022. However, our understanding of how and why students in HE adopt and engage with these tools needs more evidence-based research data. Moreover, a lack of relevant research is observed regarding the use of AI tools in HE in Greece. This fact reinforces the necessity and usefulness of the present cross-sectional study for investigating students’ opinions and practices regarding the use of ChatGPT and GenAI, focusing on issues such as the perceived benefits for learning, barriers and challenges, and the actions necessary for effective and ethical AIED in Greek HE institutions. This study aimed to answer the following research questions:
Q1: What is the experience and familiarity of students with the use of ChatGPT and AI?
Q2: What are the students’ perceived benefits of using ChatGPT and AI?
Q3: What are the students’ perceived challenges of using ChatGPT and AI?
Q4: What are the students’ perceived necessary actions for the proper use of AI?
The study employed a cross-sectional design with a quantitative research methodology, collecting data through a survey. The findings provide insights and recommendations related to the use of AI in HE in Greece. Additionally, this study intends to contribute to the ongoing discussion on the digital and ethical challenges of educational transformation using AI.
3. Results
3.1. Experience and Familiarity with AI
The first research question examines students’ experience with ChatGPT and their overall familiarity with AI. Their responses to the corresponding questionnaire items are presented in Table 3.
The responses indicate that more than one-third of the students (38.2%) reported being quite or very familiar with the concept of AI. This finding aligns with responses to Item 2, where most students indicated that they either do not use or use ChatGPT and AI tools to a limited extent for various tasks.
These include idea generation (57.3%), problem-solving (58.8%), correcting, editing, and improving written content (69.3%), text translation (58.5%), coding and programming support (79.6%), personal assistance (69.5%), and writing assignments (74.7%). Notably, only a small proportion of students reported high levels of familiarity and usage across all tasks in Item 2, except for information search and research (38.2%).
Sex
Regarding sex, results showed that men reported significantly higher levels of familiarity with AI than women (U = 20,649, p < 0.001). Conversely, women reported using ChatGPT and AI tools more frequently than men for information searching and research (U = 30,846, p = 0.009), while men reported higher usage of AI tools for coding (U = 24,172, p = 0.031). No statistically significant differences were observed for the remaining items.
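The two-group comparisons reported here rely on the Mann-Whitney U test, a rank-based test suited to ordinal Likert responses. The following is a minimal illustrative sketch using SciPy; the ratings below are hypothetical, invented for demonstration, and are not the study’s actual data.

```python
# Hedged illustration: Mann-Whitney U on hypothetical 5-point Likert
# ratings of AI familiarity for two groups (NOT the study's data).
from scipy.stats import mannwhitneyu

men = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]    # hypothetical ratings
women = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]  # hypothetical ratings

# Two-sided test on the ordinal responses (no normality assumption)
u_stat, p_value = mannwhitneyu(men, women, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```

In the study itself the samples number in the hundreds, which is why the reported U statistics reach the tens of thousands.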
Year of Study
No significant differences were found concerning the year of study.
Level of Study
Regarding the level of study, statistically significant differences were observed in AI familiarity (H(2) = 6.107, p = 0.047), the use of ChatGPT and AI tools for information searching and research (H(2) = 11.059, p = 0.004), their use as a personal assistant (H(2) = 9.032, p = 0.011), and their application for writing assignments (H(2) = 8.782, p = 0.012).
Specifically, doctoral students demonstrated higher familiarity with AI than postgraduate and undergraduate students, though no significant difference was observed between the latter two groups. Additionally, undergraduate students reported greater usage of AI tools for information searching and research than postgraduate and doctoral students.
Similarly, both undergraduate and postgraduate students reported higher use of AI tools as personal assistants compared to doctoral students. Furthermore, undergraduate students reported greater reliance on ChatGPT and AI tools for writing assignments than postgraduate and doctoral students.
ICT Skills
Regarding ICT skills, significant differences were found in familiarity with AI (H(4) = 77.158, p < 0.001) and the use of ChatGPT and AI tools for working with text (H(4) = 11.150, p = 0.025). Pairwise comparisons revealed that students with moderate to high ICT skills reported higher familiarity with AI and more frequent use of ChatGPT and AI tools for text-related tasks. No other significant differences were observed between the remaining groups.
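The multi-group comparisons throughout this section use the Kruskal-Wallis H test followed by pairwise comparisons. A minimal sketch of this procedure, again on hypothetical data (the group sizes and ratings below are invented for illustration):

```python
# Hedged illustration: Kruskal-Wallis H across three hypothetical
# level-of-study groups, with Bonferroni-corrected pairwise follow-ups.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {  # hypothetical 5-point Likert ratings (NOT the study's data)
    "undergraduate": [4, 4, 5, 3, 4, 5, 4],
    "postgraduate": [3, 4, 3, 4, 3, 3, 4],
    "doctoral": [2, 3, 2, 2, 3, 2, 3],
}

# Omnibus test; degrees of freedom = number of groups - 1 = 2
h_stat, p_value = kruskal(*groups.values())
print(f"H(2) = {h_stat:.3f}, p = {p_value:.3f}")

# Pairwise Mann-Whitney comparisons with a Bonferroni correction
alpha = 0.05 / 3
for a, b in combinations(groups, 2):
    _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    flag = "significant" if p < alpha else "n.s."
    print(f"{a} vs. {b}: p = {p:.4f} ({flag})")
```

A significant omnibus H only indicates that at least one group differs; the pairwise follow-ups identify which groups, which is how the group-level statements in the text are derived.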
3.2. Perceived Benefits of AI in Higher Education
The second research question examines students’ perceived benefits of using ChatGPT and AI tools. Their responses to the corresponding questionnaire items are presented in Table 4.
Most students acknowledged the benefits of ChatGPT and AI tools in higher education institutions (HEIs). Specifically, a significant proportion of students agreed or strongly agreed that these tools enhance search and research capabilities (77.2%), provide improved assistance and feedback (67.6%), enhance digital skills (65.8%), improve performance on assignments and exams (61.4%), and support lecture comprehension (61.0%). These were identified as the most notable perceived benefits.
Sex
Regarding sex, results indicated that women perceived a greater benefit from ChatGPT and AI tools in terms of improving support and feedback in education (U = 29,852, p = 0.040). No statistically significant differences were found for the other items.
Year of Study
Regarding the year of study, no statistically significant differences were observed across most items, except for the perceived benefit of improved performance in assignments and exams (H(5) = 11.101, p = 0.049). Specifically, third-year students reported a lower perceived benefit in this area compared to students in the fourth and fifth years.
Level of Study
Regarding the level of study, significant differences were found in the perceived benefits of ChatGPT and AI tools for improving lecture comprehension (H(2) = 6.122, p = 0.047), contributing to the digital transformation of HEIs (H(2) = 10.791, p = 0.005), and enhancing the teaching of theoretical subjects (H(2) = 8.246, p = 0.016).
More specifically, postgraduate and undergraduate students were more likely than doctoral students to agree that ChatGPT and AI tools facilitate lecture comprehension, support the digital transformation of HEIs, and enhance the teaching of theoretical subjects.
ICT Skills
Regarding ICT skills, statistically significant differences were found in the perceived benefits of ChatGPT and AI tools for personalized learning (H(4) = 11.390, p = 0.023) and time management (H(4) = 12.765, p = 0.012). Pairwise comparisons indicated a positive association between higher ICT skill levels and the perceived benefits of ChatGPT and AI tools in enhancing personalized learning and time management.
3.3. Perceived Challenges of AI in Higher Education
The third research question examines students’ perceived challenges regarding the use of AI in higher education. Their responses to the corresponding questionnaire items are presented in Table 5.
Most students acknowledged the challenges associated with ChatGPT and AI tools in higher education institutions (HEIs). Specifically, a substantial proportion of students agreed or strongly agreed that the most critical challenges include the limitation of critical thinking (68.0%), insufficient knowledge and skills in using AI tools (64.8%), limited human control over AI-generated content (63.8%), an increased risk of plagiarism (63.3%), inadequate training in AI tool usage (62.2%), an unclear ethical framework (61.2%), and dependence on technology companies (60.6%).
Sex
For sex, results indicated no statistically significant differences between men and women in their responses.
Year of Study
Regarding the year of study, the results indicate significant differences in perceptions of the ethical framework surrounding AI tool usage (H(5) = 16.792, p = 0.005). Specifically, first-year students were less likely than their peers in the second to fifth years to perceive the unclear ethical framework of AI tools as a challenge. Similarly, significant differences emerged in students’ perceptions of the reliability and validity of AI-generated content (H(5) = 15.018, p = 0.010) and concerns about a potential decline in critical thinking (H(5) = 20.373, p = 0.001).
Sixth-year students were more likely than first- to fourth-year students to consider the low reliability and validity of AI-generated content a challenge. Additionally, first-year students were less likely than third-year students to recognize this issue as a concern. Regarding critical thinking, sixth-year students expressed lower concern than second- and third-year students about its potential decline due to AI tool usage in higher education. Fifth-year students also perceived this issue as less significant compared to third-year students. Moreover, first-year students were less likely than second- and third-year students to consider it a challenge. In contrast, fourth-year students perceived this issue as a greater challenge than third-year students.
Level of Study
Further analysis of potential differences based on students’ level of education revealed statistically significant variations in their responses. Specifically, significant differences were observed in familiarity with prompting (H(2) = 11.595, p = 0.030), the perceived lack of training in AI tool usage (H(2) = 6.706, p = 0.035), the clarity of the ethical framework governing AI tools (H(2) = 6.199, p = 0.045), concerns regarding authorship rights of AI-generated content (H(2) = 6.703, p = 0.035), and perceptions of a decline in critical thinking (H(2) = 7.358, p = 0.025).
More specifically, doctoral students perceived familiarity with prompting as a challenge of using AI in HE to a lesser degree than postgraduate and undergraduate students, while postgraduate students perceived it as a challenge to a greater extent than undergraduate students. Moreover, postgraduate students also perceived the lack of training in the use of AI tools as a challenge to a greater extent than undergraduate students.
Also, undergraduate students perceived the unclear ethical framework on the use of AI tools as a challenge to a lesser extent than postgraduate students, a finding that is also reflected in their responses regarding the unclear authorship rights of content produced with AI tools. Lastly, regarding the decline in critical thinking, doctoral students perceived the use of ChatGPT and AI tools in HE as a challenge to a higher degree than undergraduate and postgraduate students.
ICT Skills
An analysis of differences in responses based on students’ ICT skills revealed statistically significant variations. Specifically, significant differences were observed in perceptions of the unclear ethical framework governing AI usage (H(4) = 15.955, p = 0.003), the undefined copyright of AI-generated content (H(4) = 12.677, p = 0.013), and concerns regarding the widening digital divide in higher education (H(4) = 18.624, p < 0.001).
More specifically, students with very low ICT skills were less likely than those with average, good, and very good ICT skills to perceive the unclear ethical framework as a challenge in the use of ChatGPT and other AI tools in higher education. A similar pattern was observed for students with low ICT skills, who also perceived this issue as less of a challenge compared to students with average, good, and very good ICT competence.
Additionally, students with very low and low ICT skills were less likely than those with very good ICT skills to consider the undefined copyright of AI-generated content a significant challenge. Finally, students with average ICT skills were more likely than those with good, very good, and very low skills to perceive the widening digital divide in higher education as a challenge associated with AI tool usage.
3.4. Perceived Necessary Actions for the Proper Use of AI in Higher Education
The fourth research question examined the actions that students consider necessary for the appropriate use of AI in higher education (HE). Their responses to the corresponding questionnaire items are presented in
Table 6.
The responses indicate that the majority of students agreed or strongly agreed with all statements in this section of the questionnaire, suggesting that they consider all the proposed actions necessary for the appropriate use of AI in higher education settings.
More specifically, a substantial proportion of students agreed or strongly agreed that the most important actions include ensuring free access to AI tools for the academic community (77.1%), providing technical support from HE institutions (76.9%), enhancing research and innovation in AI in education (AIED) (76.1%), training in the technical knowledge of AI tools (76.0%), in their pedagogical applications (75.7%), and in AI ethics (75.6%), and ensuring data privacy and security in AI applications (75.7%).
Notably, across all questionnaire items, a proportion of students, ranging from 9.5% to 31.5%, chose not to express a position on these issues.
Sex
Further analysis to identify differences in respondents’ answers based on sex revealed no statistically significant differences.
Year of Study
Regarding potential differences based on the year of study, statistically significant differences were observed only in students’ views on the necessity of training in the ethical use of AI (H(5) = 13.868, p = 0.016). Specifically, first-year students were less likely than their peers in the second, third, and fourth years to consider training in the ethical use of AI a necessary action for the appropriate integration of AI in higher education.
Level of Study
Further analysis of responses based on students’ level of study revealed statistically significant differences in several areas. Specifically, significant differences were observed regarding the necessity of public discourse on AI in education (H(2) = 24.022, p < 0.001), informing students and teachers about AI (H(2) = 13.581, p = 0.001), establishing a regulatory and legal framework for AI usage (H(2) = 25.522, p < 0.001), increasing research and innovation in AI for education (H(2) = 10.617, p = 0.005), ensuring data privacy and security in AI-based educational tools (H(2) = 17.627, p < 0.001), training in the ethical use of AI (H(2) = 23.060, p < 0.001), training in the technical knowledge of AI tools (H(2) = 21.435, p < 0.001), and training teachers on the educational use of AI (H(2) = 16.790, p < 0.001).
More specifically, postgraduate students were more likely than undergraduate students to consider public discourse as necessary for the appropriate use of AI in higher education (HE). Additionally, undergraduate students were less likely than postgraduate and doctoral students to recognize the necessity of informing students and teachers about AI, establishing a regulatory and legal framework for AI usage, ensuring data privacy and security, training in the ethical use of AI, and training teachers on the educational applications of AI.
Furthermore, postgraduate students were more likely than undergraduate students to perceive increasing research and innovation in AI for education as necessary. Lastly, postgraduate students also expressed a stronger agreement than undergraduate students regarding the necessity of training in the technical knowledge of AI tools.
ICT Skills
Finally, an analysis of differences in responses based on students’ level of ICT skills revealed statistically significant variations across all items in this section of the questionnaire. The results of the Kruskal-Wallis test are as follows: strengthening public dialogue (H(4) = 10.568, p = 0.032), raising awareness about AI in education (AIED) for teachers and students (H(4) = 16.241, p = 0.003), establishing a regulatory framework and legislation (H(4) = 15.873, p = 0.003), adapting higher education (HE) curricula (H(4) = 10.487, p = 0.033), ensuring free access to AI tools for the academic community (H(4) = 9.788, p = 0.044), providing technical support from universities (H(4) = 15.095, p = 0.005), strengthening research and innovation in AIED (H(4) = 12.529, p = 0.014), ensuring data privacy and security in AI applications (H(4) = 21.603, p < 0.001), training in AI ethics (H(4) = 22.509, p < 0.001), training in the technical knowledge of AI tools (H(4) = 12.761, p = 0.013), and training in the pedagogical applications of AI tools (H(4) = 15.904, p = 0.003).
Regarding the necessity of strengthening public dialogue on the use of ChatGPT and AI tools in higher education (HE), students with low ICT skills were less likely than those with good and very good ICT skills to consider it essential for the appropriate use of AI in HE. Similarly, students with low ICT skills were less likely than those with average, good, and very good ICT skills to perceive raising awareness about AI in education (AIED) for teachers and students as necessary. A similar pattern was observed among students with very low ICT skills, who were less likely than those with very good ICT skills to consider awareness-raising essential. Additionally, students with average ICT skills perceived this necessity to a lesser extent compared to those with very good ICT skills.
Regarding the necessity of establishing a regulatory framework or legislation, students with very low ICT skills were less likely than those with very good ICT skills to perceive it as necessary. Likewise, students with very good ICT skills were more likely than those with average, low, and good ICT skills to consider it essential.
Similarly, students with average ICT skills were less likely than those with good and very good ICT skills to believe that adapting HEIs’ curricula is necessary for the proper integration of AI in HE.
A comparable pattern was observed in responses regarding free access to AI tools for the academic community, where students with average ICT skills were less likely than those with good and very good ICT skills to consider it essential.
Regarding technical support from universities, students with average ICT skills perceived it as less necessary than those with good ICT skills.
Furthermore, students with low ICT skills were less likely than those with good ICT skills to perceive strengthening research and innovation in AIED as essential. Similarly, students with average ICT skills perceived it as less necessary than those with good and very good ICT skills.
In the case of ensuring data privacy and security in AI applications, students with low ICT skills were less likely than those with good and very good ICT skills to consider it necessary. This pattern was also observed among students with average ICT skills, who perceived it as less necessary compared to those with very good ICT skills, as well as among students with good ICT skills, who rated it as less necessary than those with very good ICT skills.
Additionally, students with very low and low ICT skills were less likely than those with good and very good ICT skills to perceive training in AI ethics as necessary for the appropriate use of AI in HE. This also applied to students with average ICT skills, who rated it as less necessary compared to those with good and very good ICT skills.
Regarding the necessity of training in technical knowledge of AI tools, students with very low and low ICT skills were less likely than those with very good ICT skills to consider it essential. This pattern was also observed among students with average ICT skills, who rated it as less necessary compared to those with good and very good ICT skills.
Lastly, students with very low, low, and average ICT skills were less likely than those with good and very good ICT skills to perceive training in the pedagogical applications of AI tools as necessary for their proper integration in HEIs.
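The pairwise group differences reported above follow from post-hoc comparisons after the significant omnibus tests. The study does not specify which post-hoc procedure was used; a common choice after a significant Kruskal-Wallis result is pairwise Mann-Whitney U tests with a Bonferroni correction, sketched below with hypothetical response data:

```python
# Sketch of post-hoc pairwise comparisons after a significant Kruskal-Wallis
# result, using Mann-Whitney U tests with Bonferroni correction. This is a
# common procedure, not necessarily the one used in the study, and the
# response data are hypothetical.
from itertools import combinations
from scipy.stats import mannwhitneyu

groups = {
    "low":       [2, 3, 2, 3, 3, 2],
    "average":   [3, 4, 3, 3, 4, 3],
    "good":      [4, 4, 5, 4, 3, 5],
    "very_good": [5, 4, 5, 5, 4, 5],
}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold

for a, b in pairs:
    u_stat, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    flag = "significant" if p < alpha else "n.s."
    print(f"{a} vs {b}: U = {u_stat:.1f}, p = {p:.4f} ({flag})")
```

The Bonferroni division keeps the family-wise error rate at 0.05 across all six pairwise comparisons.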
4. Discussion
This study explored Greek higher education (HE) students’ perceptions of ChatGPT and other AI tools, focusing on their potential benefits, ethical concerns, and challenges. The results revealed that students recognize AI’s capabilities in enhancing research, academic support, and efficiency, but they also express concerns about ethical implications, critical thinking limitations, and data privacy risks.
The major findings of this study are consistent with those reported in previous studies (
J. K. M. Ali et al., 2023;
Barrett & Pack, 2023;
Chan & Hu, 2023;
Hajam & Gahir, 2024;
Balabdaoui et al., 2024;
Serhan & Welcome, 2024;
Sevnarayan, 2024;
Karataş & Yüce, 2024;
Capinding & Dumayas, 2024;
Aldossary et al., 2024) and align closely with cross-sectional studies conducted in higher education institutions across various countries. These results reinforce existing evidence on students’ perceptions of AI in academia, highlighting both its potential benefits and associated challenges. The alignment with international studies further underscores the global relevance of AI literacy, ethical considerations, and digital competency in higher education, emphasizing the need for structured AI integration strategies across diverse academic contexts.
4.1. Experience and Familiarity with AI (Q1)
Two years after the public release of ChatGPT and amidst the rapid expansion of AI applications, less than 40% of students reported familiarity with the concept of AI. Consequently, over 60% indicated limited or no use of AI tools for academic purposes, with information retrieval and research being the most frequently cited applications.
Notably, while male students reported a higher familiarity with AI compared to female students, female students demonstrated a greater tendency to utilize AI tools for information searching and academic research. These findings underscore a gap between AI awareness and actual usage patterns, suggesting the need for targeted educational initiatives to enhance AI literacy and integration in higher education.
The findings also reveal variations in students’ perceptions based on other demographics. Notably, students’ years of study did not appear to significantly influence their use of ChatGPT and AI tools.
However, the level of education played a more decisive role, with doctoral students reporting greater familiarity with AI concepts but lower usage of AI tools for academic purposes compared to undergraduate and postgraduate students. This suggests that while doctoral students may possess a deeper conceptual understanding of AI, they may not integrate AI tools as actively into their academic workflow. These disparities highlight the need for targeted educational interventions to bridge knowledge gaps and ensure equitable access to AI-related competencies across diverse student demographics.
Furthermore, ICT skills were found to be strongly associated with AI familiarity. Students with higher ICT skills were more likely to report familiarity with AI concepts, yet this did not necessarily translate into greater engagement with AI tools in practice.
These findings emphasize the importance of not only improving AI awareness but also fostering practical AI literacy, ensuring that students can effectively integrate AI tools into their academic activities.
4.2. Perceived Benefits of AI in Higher Education (Q2)
Regarding the perceived benefits of using ChatGPT and AI tools, the findings indicate that most students believe these technologies contribute positively to various aspects of HE, with their most notable impact being in information retrieval and research activities. Overall, few significant differences were observed in students’ responses based on sex and year of study.
Undergraduate and postgraduate students perceived the benefits of AI tools more strongly than doctoral students, particularly in enhancing lecture comprehension, facilitating the digital transformation of HE institutions, and supporting the teaching of theoretical subjects. This suggests that students in earlier stages of their academic journey may find AI tools more relevant to their learning processes, as they rely more on structured instruction and guided learning. In contrast, doctoral students, who engage in more specialized and independent research, may perceive less direct applicability of such tools to their academic work, as their research processes often demand higher-order critical thinking and original contributions that AI may not adequately support.
Furthermore, ICT skills were positively correlated with students’ perceptions of AI benefits. Students with higher ICT skills were significantly more likely than their less ICT-skilled peers to recognize AI’s potential in personalized learning and time management, suggesting that technological literacy enhances students’ ability to leverage AI tools effectively. These findings reinforce the importance of digital literacy in higher education, emphasizing the need for structured AI literacy initiatives to ensure that all students, regardless of their ICT skill level, can fully access and benefit from AI-driven educational innovations.
4.3. Perceived Challenges of AI in Higher Education (Q3)
Regarding students’ perceptions of the challenges associated with integrating ChatGPT and AI tools in HE, the findings indicate that all examined factors were perceived as significant challenges by most students. Among these, the most pressing concerns included the potential limitation of critical thinking, insufficient knowledge and skills required for effective AI use, increased risk of plagiarism, inadequate training in AI tools, and the unclear ethical framework governing AI applications in academia.
Conversely, challenges related to the cost of AI tools, restricted access, and concerns over the potential degradation of the teacher’s role in HE were perceived as less critical. This suggests that while students acknowledge certain logistical barriers, their primary concerns center on pedagogical, ethical, and academic integrity-related implications of AI adoption.
No significant gender-based differences were observed, indicating that both male and female students perceive AI-related challenges in similar ways. However, students in the later years of their studies expressed greater concern regarding the ethical framework governing AI use, the potential decline in critical thinking, and the reliability and validity of AI-generated content, compared to those in their earlier years of study.
This trend may reflect greater academic maturity and increased awareness of AI’s implications for scholarly integrity, intellectual development, and research ethics as students progress in their academic journey. The heightened concerns among senior students suggest that as students engage in more complex academic tasks, they become more critically aware of AI’s limitations and ethical challenges, reinforcing the need for structured AI ethics education and critical digital literacy training within higher education curricula.
Furthermore, doctoral students were found to be more concerned than undergraduate and, to some extent, postgraduate students about the lack of AI training, the potential decline in critical thinking, the unclear ethical framework, and the ambiguity surrounding authorship rights of AI-generated content. This suggests that students engaged in advanced research and independent scholarship are more attuned to ethical and intellectual property challenges, which are particularly critical in academic publishing and research integrity. Given the increasing reliance on AI in scholarly writing and data analysis, these concerns highlight the necessity of comprehensive AI literacy programs, particularly at the doctoral level, to ensure that emerging researchers can navigate ethical complexities, maintain academic integrity, and critically evaluate AI-generated content in their work.
These findings underscore the critical need for comprehensive AI literacy programs in higher education (HE) institutions. Such initiatives should ensure that students, particularly those at advanced academic stages and with varying levels of ICT competence, receive adequate training in AI ethics, authorship rights, and critical engagement with AI-generated content. Developing structured educational frameworks will be essential to equip students with the necessary skills to navigate AI’s ethical complexities, assess the reliability of AI-generated outputs, and uphold academic integrity in an evolving digital landscape.
4.4. Perceived Necessary Actions for the Proper Use of AI in Higher Education (Q4)
The findings indicate that most participating students consider all proposed actions essential for the effective integration of AI tools in HE. Among these, the provision of free access to AI tools for the academic community, technical support from universities, the strengthening of research and innovation in AI in education, the assurance of data privacy and security in AI applications, and training in both the technical aspects of AI tools and AI ethics were identified as the most critical preconditions.
Overall, gender and year of study did not significantly influence students’ responses. However, differences emerged based on students’ level of education and ICT skills. Postgraduate and doctoral students assigned greater importance to nearly all investigated factors as prerequisites for the responsible and effective use of AI in HE compared to undergraduate students. This suggests that as students progress in their academic careers, they develop a more nuanced understanding of the structural and ethical challenges associated with AI adoption in educational settings.
Furthermore, an important finding was that students with higher levels of ICT skills were more likely than those with lower ICT skills to consider all proposed factors as essential for the successful integration of AI in HE. This underscores the role of digital literacy in shaping students’ perceptions of AI adoption, highlighting the need for targeted AI education initiatives that address both technical proficiency and ethical considerations to ensure equitable and informed use of AI tools in academia.
4.5. Implications for Practice and Policy
Based on these findings, several implications emerge for HE institutions. Universities should introduce AI literacy courses that cover technical knowledge, ethical considerations, and practical applications of AI tools and develop policies that define AI-generated content ownership, regulate plagiarism, and ensure academic integrity.
It is important that students be trained to develop the critical thinking skills needed to evaluate AI-generated content with a discerning eye. Equitable access to AI tools and structured ICT training should be implemented to ensure that students from diverse backgrounds can leverage AI responsibly.
AI-informed HE curricula should be designed, with AI ethics in mind, to support analytical reasoning rather than replace human cognition. Universities should consider assessing students’ digital literacy before integrating AI-enhanced coursework, and assignments, such as critical analyses of AI-generated texts, should be designed to encourage deep engagement rather than passive reliance on AI.
Universities should invest in AI research that examines its pedagogical applications, ethical implications, and impact on student learning outcomes, and in this context, more studies should be conducted on discipline-specific AI usage, spanning various scientific fields.
4.6. Limitations of This Study and Suggestions for Future Research
While this study offers valuable insights, certain limitations must be acknowledged. The reliance on self-reported data introduces the possibility of response biases, which may affect the accuracy of the reported perceptions.
Additionally, the study’s generalizability is limited, as it focuses on Greek higher education students and may not fully capture AI adoption trends in other regions or academic disciplines.
Furthermore, the rapidly evolving nature of AI means that students’ perceptions and university policies are likely to shift over time, necessitating longitudinal studies to track these changes and assess their long-term impact.
Building on these findings and limitations, future research should explore several key areas. Longitudinal studies on AI’s evolution in higher education would provide deeper insights into how students’ perceptions and engagement with AI tools develop over time, particularly regarding their effects on learning outcomes and academic integrity.
Investigating the relationship between AI and academic integrity is also crucial, particularly in terms of plagiarism detection, authorship verification, and originality assessment. Given the diversity of academic disciplines, comparative studies on AI adoption across different fields could help determine which areas of study benefit most from AI-driven tools and where additional support may be needed.
Additionally, research on digital literacy and AI awareness should assess the effectiveness of AI training programs in addressing concerns related to the digital divide and equitable access to AI tools.
Finally, exploring AI as a pedagogical tool could help identify strategies for effectively integrating AI into teaching methodologies to support personalized learning while maintaining a strong emphasis on critical thinking and academic rigor.
These future research directions would contribute to a more comprehensive understanding of AI’s evolving role in higher education, ensuring that its integration supports both educational effectiveness and ethical considerations in academia.
5. Conclusions
This cross-sectional study underscores the dual role of AI in HE, highlighting both its potential benefits, such as enhancing research efficiency, supporting academic tasks, and personalizing learning, and its associated challenges, including ethical concerns, content reliability, and the possible decline in critical thinking skills.
A key takeaway from the study is the variation in AI perceptions based on students’ academic level and ICT skills. While undergraduate and postgraduate students generally acknowledge AI’s benefits in enhancing lecture comprehension, doctoral students tend to be more skeptical, particularly regarding ethical issues, authorship rights, and academic integrity. Additionally, students with higher ICT skills recognize AI’s benefits, while those with lower ICT skills are less aware of its challenges, such as the unclear ethical framework and authorship rights.
These findings reinforce the need for comprehensive AI literacy programs, ethical guidelines, and institutional support to help students navigate AI’s evolving role in education. The study also highlights the importance of bridging the digital divide, ensuring that students across different competency levels can engage effectively with AI tools. Furthermore, the concerns expressed by doctoral students emphasize the need for targeted interventions in advanced research training to address AI-related ethical and integrity challenges.
Moreover, these findings suggest that ChatGPT and other AI tools effectively support the second and third categories of
Ouyang and Jiao’s (
2021) model of AI in higher education: the AI-supported paradigm, in which learners actively engage with AI, and the AI-empowered paradigm, which grants learners agency over their own learning paths.
In this context, AI could serve as an enabler, fostering learner autonomy and facilitating collaboration among students, educators, information, and technology. By augmenting human cognitive processes, AI enhances the overall learning experience and promotes deeper engagement with educational content.
In conclusion, AI presents both opportunities and challenges for higher education. To maximize its benefits while mitigating risks, universities must implement structured AI education, ethical regulations, and inclusive digital training programs. By fostering responsible AI use and critical digital literacy, HE institutions can ensure that AI serves as an enhancing rather than a disruptive technology in academia.
Future research and policy efforts should focus on balancing technological advancements with human-centered learning (
Shneiderman, 2022), ensuring that AI adoption supports academic integrity and educational innovation.