2. Themes
From the papers presented at the workshop, we identified five overarching themes:
Student Experiences of AI: Applications and Limitations;
Challenges and Solutions in Implementing AI in Academic Contexts;
Developing Ethical and Responsible AI Practices in Education;
Future Trends and Directions for AI and Education;
Student Concerns.
In the following sections, we discuss the core elements of these themes and identify respective sub-themes.
2.1. T1: Student Experiences of AI: Applications and Limitations
Applications of AI include embedding AI within the curriculum, increasing student creativity, and one-to-one digital mentorship. Embedding AI within the curriculum created effective learning outcomes for students. For example, during lectures, students engaged with teaching and discussions on the processes, risks, and benefits of AI usage. Processes involved being shown various types of AI models and how to develop effective prompts. Risks covered checking content and references, reducing over-reliance on the models to produce correct content, and identifying spurious content. Student-reported benefits included using AI to ‘get them started,’ structuring essays, and generating content whilst being aware of the associated risks. Class discussions enabled the student cohort to learn together and identify the strengths and weaknesses of AI usage. It was acknowledged that staff will need to be familiar and comfortable with using AI functionality before embedding it within the curriculum, and this was viewed as a potential barrier for some. Embedding AI into the curriculum also has implications for staff workload and the time needed to ‘teach’ AI processes.
Students described increased productivity and creativity, as AI output helped them to ‘get started’ with writing essays and reports, for example. Additionally, AI was used for literature searching, for generating ideas and hypotheses, for improving writing style and structure, and, to a lesser extent, for developing PowerPoint presentations. Further to these applications, AI was deemed beneficial for neurodivergent students as it reduced barriers to inclusion by explaining complex learning materials and delivering personalised learning. Overall, AI appeared to enhance global accessibility and aid the delivery of adaptive learning.
AI applications can be accessed at any time, and many students view them as a ‘one-to-one mentor.’ Frequent usage meant that students became more efficient with their prompts, which produced more personalized and tailored outputs—the model became ‘smarter’ and delivered more relevant content. Unlike AI applications, staff are not available 24 hours per day, and students do not always receive instant answers to their queries, which can become problematic, especially when work deadlines are approaching. Whether students were experienced with AI or not, these situations encouraged them to seek AI help. Student cohorts are diverse, and many students who would not be comfortable asking questions during a lecture or approaching staff find comfort in using AI as a form of mentorship.
Discussions on AI limitations largely reflected staff perceptions, covering spoiled written content, hallucinations, and reduced research synthesis. Staff conveyed that when students used AI content within their written work, it often spoiled the narrative and writing style; ChatGPT often produced ‘verbiage and flowery writing.’ There were concerns that AI usage would reduce students’ research synthesis skills. Students are encouraged to become independent learners and researchers through reading, note taking, critical thinking, and idea generation. Given AI’s capability of condensing large amounts of text quickly, creating essays without any research synthesis was seen as a significant worry and a ‘dumbing-down’ of student populations. Not all information produced by AI is correct, and hallucinations (misleading or incorrect AI-generated output) were deemed a limitation and a concern, especially if students did not fact-check their content. Additionally, it was discussed that within medical programs, AI models do not always follow UK standards for clinical practice, which is problematic when UK guidelines are required.
2.2. T2: Challenges and Solutions in Implementing AI in Academic Contexts
Challenges for implementing AI in academia were adaptation, staff workload, and staff–student relationships. At times, academia can be slow to adapt, and this is challenging as AI usage is increasing and will continue to do so. We know that many students have been using AI for years, and many more are currently being introduced to it. Students appear to be grasping the advantages of AI much faster than staff, and this gap needs to be narrowed to make AI usage equitable and fair for all. Within the context of AI, there are significant reasons why more agile decision-making is needed, such as the development of policies and guidelines for AI usage, and it was felt that the lack of these increases the risk of staff burden and increases their workloads. In the absence of unambiguous guidelines and policies, staff must make their own decisions regarding AI usage. For example, if a student has produced work using AI-generated output, then staff have to decide how to mark this work and consider the ethical implications, in comparison to those students who have spent time as independent learners synthesizing the research without AI output(s). Further, the lack of effective AI detection tools compounds this problem.
Staff workload will increase for those who are not familiar with AI models if these models become embedded within the curriculum. Discussions on AI and assessment detailed two distinct approaches: permit the use of AI (with specific terms) or prevent the use of AI in assessments. Permitting the use included suggestions such as allowing 10% of work to be AI-generated and correctly cited. Preventing the use, such as through exams in exam halls or oral examinations, has implications for increased staff workload when redesigning assessment approaches. Staff–student relationships were seen as a challenge, as staff expressed concerns that students would replace staff mentorship and advice with AI. As described under T1, there are student-reported benefits to AI one-to-one mentorship, but it was hoped that this would not replace staff–student contact time, trust, or relationships.
Solutions for incorporating AI into academia were staff and student development, AI detection tools, and citing AI content. Staff and students should be offered training to increase AI literacy at both institutional and school levels to ensure that all staff and students have fair and equitable access to AI training. Further, as described under T1, embedding AI training within the curriculum created effective learning outcomes for students (honesty regarding AI usage, its benefits, and awareness of its limitations). The establishment of effective detection tools would ease both staff and student concerns, as it negates the notion of ‘cheating’ when students use AI output but do not cite it.
Importantly, it was acknowledged that current discussions of AI usage seemed to imply that students were cheating; it must be remembered that some students will use AI appropriately and not as a short-cut for completing work. Some students use AI to get started with and structure their work, but know to fact-check the content and not to copy and paste chunks of text as if it were their own. Again, the development of clear policies and guidelines would be an effective remedy for this.
2.3. T3: Developing Ethical and Responsible AI Practices in Education
Ethical and responsible considerations for adopting AI in education were concerns with copyright, intellectual property and privacy, institutional policies, and gaining institutional approval for coursework and assessment changes. Developing and maintaining good ethical practices is essential to effectively implement AI in higher education. Both staff and students expressed concerns about copyright, intellectual property, and privacy, as they were unsure how AI models stored user data and information and whether the models interacted with each other. A question was asked about the future sale of AI companies and what would happen with user data. Questions were also posed on intellectual property, as no one was able to guarantee that the data they entered into an AI model would remain ‘theirs’—instead, it was felt that once data were entered into an AI model, they became the intellectual property of the model, which raised concerns and hindered usage. It was felt that institutions were responsible for ensuring that user data were protected and that AI models upheld strict data protection standards. Importantly, no one within the discussions was able to address these concerns.
Transparency in data ownership and usage would provide staff and students with control over their own data; this was seen as vital for encouraging trust and integrity in AI educational systems, allowing those who resist technological change (‘laggers’) to begin using AI models. Transparency in how data are processed would permit staff and students to make informed choices regarding AI use, which is necessary for maintaining ethical integrity and responsibility in AI practices in education. As stated above, institutional policies need to be developed to enable responsible AI usage, and it was stressed that these policies and guidelines should be clear and unambiguous. For those who seek to make assessment changes, consent needs to be sought from their institution and the purpose of these changes needs to be considered. For example, is the purpose to prevent or permit responsible AI use?
2.4. T4: Future Trends and Directions for AI and Education
Future trends for AI and education included embracing AI, assessment redesign, and concerns around bias and accessibility. In the context of AI, it was felt that there was a scale from the ‘laggers’ (who resisted changes with technology) to those who were curious and embraced these changes much earlier (‘early adopters’). It was felt that AI is here to stay and, as mentioned previously, its usage will grow. Therefore, we need to embrace AI, and work is needed to close the gap between early adopters and laggers. Drawing on the previous themes, this could be achieved via effective training to increase AI literacy, by establishing institutional policies and guidelines, and through the development of good AI detection tools.
As mentioned previously, assessment redesign may be needed depending on whether AI is to be permitted or prevented. Ideas for preventing AI use included the implementation of oral assessments and a return to traditional exams, although the latter was not a popular choice. Looking forward, many students want to incorporate AI into their assessments, and a 10% rule was suggested, permitting AI-generated content provided it is referenced.
Accessibility concerns were discussed. For example, digital literacy, defined as the skills needed to access, use, and understand AI, gives users a greater ability to understand AI algorithms and produce meaningful and personalized outputs; would all users have this level of digital literacy? If not, those who do not will be disadvantaged. Conversely, and to a lesser extent, some felt that AI has the unique prospect of being more inclusive of those who have been digitally excluded by increasing access to online processes. Therefore, work needs to be carried out to support those who lack these skills. Bias was another significant concern: females, people with disabilities, the LGBTQIA+ community, and other marginalized groups are often underrepresented in AI algorithms and output, and AI usage may unintentionally reinforce these biases.
2.5. T5: Student Concerns
Student concerns included information literacy, digital inequality, and academic honesty and integrity. Regarding information literacy, students were concerned with the accuracy of AI outputs, hallucinations, and referencing. For example, some were aware that AI models produced fake references and that content needed to be checked for accuracy and bias. In addition, some students discussed awareness of misinformation generated by AI and knew how to distinguish between accurate and inaccurate information. Responses here were mixed: some felt that they could critically evaluate AI outputs, while others would welcome their institution teaching these vital skills.
Digital inequality was a significant concern for students, as those without access to all AI models may experience decreased digital literacy and increased inequality, which could potentially widen the attainment gap. Discussions centered on the cost of some AI models and on those who could not afford to subscribe to them. This would clearly disadvantage students who could not ‘buy this information’, as financial barriers to accessing AI models would increase digital inequality for students living with disadvantage. Further, digital inequality was discussed for students without adequate internet connectivity, who would not have the same access to education as others. Overall, it was felt that institutions were responsible for ensuring fair access to AI for their student bodies and that they should develop their own suite of AI tools to safeguard fair access for all students.
Students felt that academic integrity and honesty could be undermined: some students may ‘cheat’ by using AI incorrectly, placing those who uphold their own ethical values when completing work at a disadvantage. Plagiarism was discussed as a significant concern, as some students could use AI tools to complete their university work (essays, reports, presentations, and image generation). AI could be used to complete full pieces of work, or parts of work, without its usage being correctly cited. It was felt that more proficient AI users could use ChatGPT, for example, to challenge traditional assessment approaches. ChatGPT has the potential to produce high-quality written work that would be difficult to identify as AI-produced, given the lack of effective AI detection tools. Students felt that it would be difficult to guarantee the integrity of academic work. Effective AI detection tools would ameliorate these concerns and permit students to use AI effectively within their own work without feeling that they were compromising their academic integrity. Students would like clear guidelines and policies on citing AI usage, as some feel that they cannot, or should not, incorporate AI output without correct and defined institutional guidelines.
2.6. Sub-Themes
In the previous sections, we discussed the overarching themes which featured in the workshop. These themes are summarised in Table 1, which includes the primary sub-themes discussed above.
In the next section, we consider the above themes across different HE contexts. A number of common issues arise across different subject areas, particularly in the contexts of student learning and autonomy, as well as ethical concerns.
3. Future Directions
The full impact of AI on HE is still unfolding, but clear themes are emerging. The initial rise and subsequent fall of the AI detection mirage, as the technology proved ineffectual and prone to false positives, has led to a deeper conversation on assessment methods and tools [4]. The need to uphold academic integrity has caused many to reconsider each assessment strategy’s merits, drawbacks, and suitability in context. Authentic and traditional high-stakes assessments are perceived as safer choices from an academic integrity point of view.
AI is becoming ubiquitous and embedded in everyday tools used by academics, students, businesses, and support services, as it has been proven to increase efficiency and productivity [5]. With more third-party service providers vying for a share of a nascent market, data governance and security are becoming more complex, not least because the rapid pace of innovation has outstripped the development of AI literacy in HE, increasing the risk that staff and students may engage with powerful new tools without fully understanding the extent of the data being shared [6].
There have been different responses to these issues. In the following sections, we relate these concerns to the above themes. To do so, we present case studies for three broader subject areas in HE, namely Science, Technology, Engineering, and Mathematics (STEM), Social Sciences, and Humanities, and focus on one undergraduate course in each area—Engineering, Linguistics, and English, respectively. For each area, we identify a fundamental element of the course which may be influenced by generative AI, and we then consider this element in the context of the themes discussed above.
3.1. STEM: Engineering
Engineering, a core component of STEM, involves applying scientific principles to model, design, and analyse systems. For example, electrical engineering is a branch of engineering that focuses on the study and application of electronics, computing, and information technology to develop and enhance systems and devices. Additional areas include computer science, robotics, and systems engineering.
Within undergraduate engineering courses, programming is a critical component that spans all specialties, including mechanical, electrical, civil, and computer engineering. Conventionally, programming education focuses on teaching students to write and debug code, understand algorithms, and apply these skills to solve engineering problems. The integration of generative AI into this element can profoundly enhance how programming is taught and applied. By leveraging generative AI, educators can introduce more advanced programming tools that utilise machine learning algorithms to analyse extensive code databases and suggest optimizations or error corrections beyond the capacity of traditional programming tools. For example, a generative AI system could be used to provide real-time feedback on code efficiency and security vulnerabilities, allowing students to see the immediate impacts of their coding decisions in simulated real-world environments. This could be particularly useful in projects involving embedded systems or software for real-time data processing in civil and mechanical engineering tasks, where performance optimization is crucial.
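To make this concrete, the following is a minimal sketch of what such a tutor-style feedback tool might look like, assuming access to the OpenAI Python SDK and an OpenAI-compatible chat model; the model name, prompts, and the student code snippet are illustrative placeholders rather than a prescribed implementation.

```python
# A minimal sketch of a tutor-style code-review tool. Assumes the
# OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model name, prompts, and the
# student code are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

STUDENT_CODE = '''
def read_sensor_log(path):
    data = []
    for line in open(path):   # file handle is never closed
        data.append(float(line.strip()))
    return data
'''

def review_code(code: str) -> str:
    """Ask the model for feedback on efficiency and common pitfalls,
    without rewriting the code, so the student still does the work."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": ("You are a programming tutor. Point out efficiency "
                         "issues, bugs, and security pitfalls, but do not "
                         "rewrite the code for the student.")},
            {"role": "user",
             "content": f"Please review this code:\n```python\n{code}\n```"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_code(STUDENT_CODE))
```

Constraining the system prompt so that the model critiques rather than rewrites is one possible way to address the over-reliance concern discussed next.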
The benefits of integrating generative AI into programming education are substantial. For example, AI can increase the accuracy and speed of learning coding principles, allow students to experiment with complex code without the risk of severe bugs or failures, and encourage a more iterative and innovative approach to developing software solutions. This real-time feedback mechanism can drastically reduce the learning curve and help students quickly understand the consequences of different programming approaches, enhancing their ability to apply theoretical knowledge practically. However, the application of generative AI in programming also requires careful consideration of its limitations. Over-reliance on AI for code correction and optimisation could potentially diminish students’ problem-solving skills, making it essential to balance the use of technology with foundational programming training. Students must also be trained to critically evaluate the suggestions made by AI systems, understanding the underlying algorithms and data upon which these suggestions are based to avoid introducing biases or errors into their work.
3.2. Social Sciences: Linguistics
Courses in the Social Sciences are centred around behaviour and interaction and include Psychology, Education, Sociology, and Economics. Linguistics, the scientific study of language, includes a range of subfields, from phonetics and phonology (the study of sounds) to syntax (structure) and semantics (meaning), which are standard modules on undergraduate linguistics courses. Additional areas include pragmatics, psycholinguistics, and clinical applications.
As on other courses, linguistics assessments often include formative problem sets and essays, as well as summative essays and written exams. Meanwhile, a common theme across the subfields of linguistics involves critical reflection on linguistic data and the relation of these data to linguistic theory as a model of language. For example, students learn to produce and elicit grammaticality judgements, i.e., explicit judgements about the well-formedness of a sentence, independent of its meaning. Crucially, these judgements can inform linguistic theory, and, in turn, linguistic theory can make predictions about which judgements will be produced. This process of linking data to theory is a core learning outcome, both in linguistics courses and more broadly across the sciences.
When it comes to generative AI, there may be benefits to taking an ‘embracing’ approach on linguistics modules if key limitations are also addressed. In particular, generative AI may be used to enhance activities which support students’ capacity to link linguistic data and theory. The focus on student learning is key; however, over-reliance on generative AI to make this link risks discouraging students from doing so themselves—a potential limitation. Furthermore, generative AI can draw incorrect conclusions from a given set of data. Therefore, students must develop the capacity to evaluate these conclusions and identify those which are incorrect. This issue arises for linguistics, but, more broadly, the capacity to think critically about a claim, rather than taking it at face value, is a crucial life skill.
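One way to make this evaluative step concrete in a class exercise is sketched below: students compare model-elicited grammaticality judgements against expert judgements and scrutinise the disagreements. The sentences, labels, and stand-in model here are invented placeholders; a real exercise would query a generative AI model for each judgement.

```python
# A minimal sketch of checking AI-elicited grammaticality judgements
# against expert judgements. Sentences, labels, and the stand-in model
# are illustrative placeholders, not a real dataset or model.

# Expert judgements: True = well-formed, False = ill-formed.
EXPERT = {
    "The cat seems to be asleep.": True,
    "The cat seems sleeping.": False,
    "Which book did you say Mary read?": True,
    "Who did you wonder whether left?": False,
}

def model_judgement(sentence: str) -> bool:
    """Placeholder for a generative AI query (e.g. 'Is this sentence
    grammatical? Answer yes or no.'). A crude length heuristic stands
    in here, to show that model output cannot be taken at face value."""
    return len(sentence.split()) > 5

def agreement(expert: dict[str, bool]) -> float:
    """Proportion of sentences on which the model matches the experts."""
    hits = sum(model_judgement(s) == label for s, label in expert.items())
    return hits / len(expert)

if __name__ == "__main__":
    print(f"Model-expert agreement: {agreement(EXPERT):.0%}")
    for sentence, label in EXPERT.items():
        if model_judgement(sentence) != label:
            print(f"Disagreement worth scrutinising: {sentence!r}")
```

In an exercise of this shape, the disagreements (here, the ill-formed wh-island sentence that the stand-in model accepts) become the prompts for exactly the data-to-theory discussion the module aims to teach.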
3.3. Humanities: English
Courses in English studies entail two key areas: 1) the exploration and analysis of literature and 2) the art of writing, which correspond to the fields of English Literature and Creative Writing, respectively. English Literature involves the study of prose, poetry, and plays from various historical periods and cultural contexts, teaching students to critically engage with texts, interpret thematic elements, and understand character development and narrative techniques. Creative Writing focuses on the craft of creating original content, encompassing genres like fiction, non-fiction, poetry, and screenwriting. Both programs often require students to produce analytical essays and creative works and perform literary critiques, as commonly seen in other humanities courses.
A recurring theme in both subfields of English studies is the deep analysis of text, both literary and self-created, to explore meaning, style, context, and intention. Key learning outcomes include the development of critical thinking skills, the ability to articulate complex ideas effectively, and the creative skills to produce compelling narratives. These skills are vital as they enable students to produce original work and evaluate literary quality, which are crucial for personal and academic growth. The introduction of generative AI in English studies presents both opportunities and challenges. In literary analysis, AI can offer new insights into texts, identifying patterns and themes that might not be evident in initial readings. In creative writing, AI tools can facilitate initial ideation, suggest plot developments or dialogue options that might inspire students, and even help them overcome writer’s block. However, there is a risk that reliance on AI could make students passive recipients of information, potentially undermining their ability to analyse literature or independently create original content. Integrating generative AI into English Literature and Creative Writing courses therefore presents a complex blend of benefits and challenges that higher education institutions will need to navigate carefully. The application of AI in these fields can profoundly enhance the educational experience by offering new tools for literary analysis and creative inspiration. In particular, AI algorithms are adept at dissecting large volumes of text and rapidly identifying underlying themes, character developments, and stylistic elements. This capability enriches students’ understanding of literature and stimulates innovative approaches to their creative writing.
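As a deliberately simple illustration of this kind of pattern-spotting, the sketch below surfaces recurring content words as candidate motifs for closer human analysis; the passage and stopword list are invented placeholders, and a real course tool would more likely query a generative AI model for richer thematic analysis.

```python
# A minimal sketch of surfacing recurring motifs in a text by word
# frequency, as a starting point for (human) literary analysis. The
# passage and stopword list are illustrative placeholders.
from collections import Counter
import re

PASSAGE = """
The sea was grey at dawn and grey again at dusk. She watched the sea
as one watches a door, waiting. The door of the house stayed shut;
the sea did not.
"""

STOPWORDS = {"the", "was", "at", "and", "again", "she", "as", "one",
             "a", "of", "did", "not", "stayed"}

def recurring_motifs(text: str, min_count: int = 2) -> list[tuple[str, int]]:
    """Return content words that recur, as candidate motifs for
    closer literary analysis."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [(w, c) for w, c in counts.most_common() if c >= min_count]

if __name__ == "__main__":
    # Prints [('sea', 3), ('grey', 2), ('door', 2)] for the passage above.
    print(recurring_motifs(PASSAGE))
```

Even this crude frequency count hints at the sea/door opposition in the placeholder passage; the interpretive work of relating such patterns to meaning and intention remains with the student, which is precisely the balance discussed below.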
Nevertheless, the incorporation of AI is not without its difficulties. A significant concern is the risk of students becoming overly reliant on technological insights, which might undermine the development of their critical thinking and creative skills—qualities that are fundamental to the humanities. The challenge lies in using AI as an augmentative tool rather than one that overshadows the traditional, human-centric methods of literary critique and creative expression. To counteract these issues, universities and English departments should cultivate a curriculum that emphasises the importance of critical engagement with both texts and AI-generated content. Students should be encouraged to scrutinise and challenge AI outputs, integrating them into their broader analytical and creative processes. This approach ensures that AI is a bridge to deeper understanding and originality rather than a crutch. As we look to the future, the role of AI in English studies is poised to expand, promising exciting developments in how literature is taught and understood.
3.4. Conclusions and Future Directions
The integration of generative AI into higher education has the potential to transform teaching and learning, offering enhanced tools for analysis, inspiration, and feedback. These applications were reflected in the workshop themes, along with the inherent challenges of adopting these technologies responsibly for HE contexts. To navigate these challenges, universities must develop clear policies that guide the responsible use of AI. Ethical concerns, such as the potential for plagiarism and the reduction in original thought, should also be addressed through stringent policies and transparent practices. Institutions should educate both students and faculty members on the capabilities and limits of generative AI, fostering an environment where AI is a complement to, rather than a replacement for, human analysis and creativity.
Across subject areas, institutions must cultivate curricula that emphasise critical engagement with both human- and AI-generated content, ensuring that students retain their ability to think independently and critique such content effectively. This common theme across the different subject areas highlights not only the wide-ranging potential benefits offered by generative AI, but also the delicate balance that must be struck to avoid its limitations. Students should therefore be encouraged to interrogate and challenge AI outputs while integrating these tools into their broader learning practices.
In the future, the role of generative AI in HE courses is likely to grow, offering new methods for enhancing learning while posing complex challenges. Future directions should involve the continual evaluation of AI’s impact on student learning, with an adaptive approach to curriculum development that keeps pace with technological advancements. By maintaining a balanced approach that respects both the benefits of AI and the integrity of traditional learning practices, HE institutions can prepare students for a future where technological literacy and human creativity coexist harmoniously.