Article

The Use of Generative Artificial Intelligence to Develop Student Research, Critical Thinking, and Problem-Solving Skills

Law School, University of Huddersfield, Huddersfield HD1 3DH, UK
Trends High. Educ. 2025, 4(3), 34; https://doi.org/10.3390/higheredu4030034
Submission received: 2 May 2025 / Revised: 27 June 2025 / Accepted: 4 July 2025 / Published: 13 July 2025

Abstract

This paper is a case study of supporting students in developing their Generative Artificial Intelligence (GAI) literacy as well as guiding them to use it ethically, appropriately, and responsibly in their studies. As part of the study, a law coursework assignment was designed utilising a four-step Problem, AI, Interaction, Reflection (PAIR) framework that included a problem-solving task requiring the students to use GAI tools. The students were asked to use one or two GAI tools of their choice early in their assessment preparation to conduct research and were given a structured questionnaire to reflect on their experience. They were instructed to apply Gibbs’ or Rolfe’s reflective cycles to write about their experience in the reflective part of the assessment. This study found that a GAI-enabled assessment reinforced students’ understanding of the importance of academic integrity, enhanced their research skills, and helped them understand complex legal issues and terminologies. It also found that the students did not rely on GAI outputs but evaluated and critiqued them for their accuracy and depth, referring to primary and secondary legal sources—a process that enhanced their critical thinking and problem-solving skills.

1. Introduction

Since the release of OpenAI’s GPT-3.5 in November 2022, the potential applications of Generative Artificial Intelligence (GAI) in educational settings have become increasingly apparent, prompting extensive scholarly debate on its benefits and limitations in teaching and learning [1,2]. In response, policymakers have begun to formulate and disseminate guidelines aimed at supporting the responsible and effective integration of such technologies within educational frameworks. In March 2023, the UK government produced a white paper proposing a “pro-innovation approach” [3] to Artificial Intelligence (AI) regulation, recognising the impact AI technology will have on future growth and efficiency. In September 2023, the Department for Education (DfE) published the GAI in Education Policy paper suggesting that the education sector should utilise the opportunities that AI provides [4]. The DfE’s view was that, used safely, GAI technology can support the delivery of an excellent education that prepares students to contribute to society and the future workplace. Following this guidance, alongside previously published reports from the Quality Assurance Agency for Higher Education [5] and Jisc [6] and a statement from the 24 Vice-Chancellors of the Russell Group on the use of GAI in Higher Education Institutions (HEIs) [7], many universities produced guidance and principles for GAI use. These guidelines and principles recommend training staff and supporting them to embed GAI into their teaching and learning pedagogy. They also encourage HEIs to support students in becoming GAI literate alongside developing digital literacy.
Whilst these sectoral guidelines recognise the potential of using GAI technology to enhance student learning, there are concerns about the level of training and practical guidance currently available to both the students and the teachers supporting and preparing them to use such technologies [8,9,10]. A study of 27 top-ranked universities revealed that while 22 institutions offer AI literacy resources, including policies and guidelines addressing academic integrity, only a limited number provide comprehensive training or practical support to students and academic staff for effectively integrating these tools into teaching and learning [11]. While some research has examined the use of GAI in teaching pedagogy and student support [12], limited attention has been given to how students are guided in understanding institutional AI policies and trained to apply them effectively [13]. Highlighting this issue, another study emphasised the importance of AI literacy training and awareness to address emerging ethical concerns in higher education [14]. Addressing these concerns, Chan and Colloton proposed three pedagogical strategies for AI integration in education policy: (i) training educators in the responsible use of AI for teaching and assessment, (ii) encouraging student engagement with AI technologies under faculty supervision, and (iii) designing assessments that incorporate AI tools [15].
Given the existing gap in research, this study adopts an experimental approach and provides an in-depth analysis of how a group of students was trained in their university’s AI policy and supported in its practical application within a learning context. The training was delivered as part of a module titled Legal and Academic Skills, which aimed to develop students’ understanding of fundamental legal concepts, academic competencies, and legal skills essential for pursuing a Master’s degree in International Human Rights and Justice. In addition to building conceptual knowledge, legal education emphasises the development of key skills necessary for the study of law, such as legal research, interpretation, critical analysis, and the application of legal rules and principles to resolve complex problems and make informed decisions [16]. Together with these skills, the Law QAA Benchmark recognises that evolving AI technologies will impact legal systems and processes. Hence, it recommends that providers explore any “emerging and potential ethical issues” that may arise from these changes and incorporate this knowledge into legal education via “digital skills and technology” training [16] (p. 5). Focusing on the required skills that a Master’s level law student is expected to develop in their studies, an assessment was designed which required the students to use GAI to enhance their research, critical thinking, and problem-solving skills. In preparing for the assessment, they participated in interactive workshops where they learnt about the institution’s boundaries on AI use as well as how to write prompts to engage with GAI tools.
This paper argues that a better understanding of GAI technology and the institution’s AI policy protects students from academic misconduct, develops their confidence in research, and empowers them to be independent learners. This paper is divided into five sections. Section 1 provides an introduction to this research, which includes a brief literature review. Section 2 provides the methods and methodology of the research. Section 3 explains the module design rationale and the steps that the Module Leader (ML) took to introduce GAI literacy to the students. This involves an analysis of the outcomes of a survey that the students participated in at the start of the module. This survey investigated the students’ understanding of GAI, their user level, and any concerns about this technology. Section 4 provides an account of how the PAIR framework was applied to design the assessment and the support that was provided to the students to prepare for the assessment. Section 5 analyses the students’ experience of using GAI at an early stage of drafting their assessment. This analysis is followed by a set of recommendations for teachers to consider when embedding GAI in teaching and assessment.

2. Methods and Methodology

This study adopts a mixed methods approach, combining case study and exploratory research designs to investigate (i) the role of GAI tools in enhancing students’ problem-solving and critical thinking skills and (ii) the impact of the workshop activities on the students’ understanding of their institution’s AI policy. Mixed methods research enables the integration of quantitative and qualitative data to provide a more comprehensive understanding of complex educational phenomena [17].
A case study method was used to gain in-depth insights into how students interact with GAI tools in an academic setting (Table 1). This approach allowed for the detailed observation of behavioural patterns, attitudes, and changes in student engagement with GAI before and after targeted instructional workshops. Case studies are particularly effective in educational research for providing rich, contextualised understandings of learner experiences [18].
To complement the depth of the case study, an exploratory research design was employed to examine broader trends and perceptions regarding GAI use among students. This approach is suitable for studying phenomena that are still emerging or not well understood, such as the pedagogical implications of GAI in higher education [19]. As part of this component, both surveys and qualitative content analysis of students’ reflections and assignment outputs were used to explore the perceived value, challenges, and ethical considerations associated with GAI use.
The combination of methods enables triangulation of data and enhances the validity of findings by capturing both measurable trends and nuanced, personal experiences [20]. This approach is particularly suited for examining the complex, multi-dimensional nature of digital literacy and critical thinking within AI-mediated learning environments.
The mixed methods approach directly informed the formulation of the following research questions, which together address both behavioural patterns and subjective experiences.
The case study component of this research focused on a specific group of postgraduate international students who lacked prior experience with UK higher education and, in many instances, had no formal background in legal education. This context-specific and narrowly defined sample limits the generalisability of the case study findings, a known limitation of case study research due to its emphasis on depth over breadth [18]. To address this limitation, data collected using survey methods were used not only to triangulate the findings but also to inform the design of targeted instructional workshops and the construction of the assessment questionnaire. This integration of survey and case study data supports the strengths of a mixed methods approach, allowing for both contextual depth and broader applicability [17]. The findings from the analysis also contributed to the further development and expansion of the PAIR Framework [21], enabling its application beyond the initial case and toward more generalised educational settings.
While exploratory studies are typically open-ended and not hypothesis-driven in a strict sense, the following hypotheses were developed to guide interpretation of the data:
H1. 
Students who engaged in the workshop activities and actively experimented with GAI tools demonstrated greater improvement in their ability to critically analyse and solve legal problems.
H2. 
Participation in GAI-focused workshops resulted in a positive shift in students’ perceptions of academic collaboration with GAI tools.
H3. 
Students who used GAI tools early in the research process produced written work that reflected stronger problem-solving strategies than those who did not.
H4. 
Training in ethical and responsible GAI use led to increased student awareness of academic integrity and responsible technology use.

3. Module Design to Support Student AI Literacy

3.1. Investigating Students’ Awareness of GAI

The module was delivered to a group of mature, professional, international students, with an average age of 34. It was a cohort of 25 students who were from the same ethnic background and country of origin, with an even gender distribution. Given the demographic homogeneity, this study does not examine the impact of ethnicity or cultural background on the survey outcomes. Future research could explore these demographic variables in more diverse cohorts to assess their potential influence on students’ engagement with GAI tools.
At the start of the module, students completed a structured survey to assess their baseline understanding and perceptions of GAI. Insights from this survey informed the design of module content, activities, and assessments. The survey questionnaire included: Q1: Have you used GAI tech before?; Q2: If yes, for what purpose, and how have you used it?; Q3: What was your experience with using GAI?; Q4: If you have not used GAI, what was the reason?
Based on the survey data and student feedback, participants were grouped into three categories according to their understanding and use of AI: (i) users of traditional AI; (ii) users of GAI; and (iii) non-users of GAI. While all students had heard of GAI and possessed some understanding of how it functions, their levels of comprehension and interpretations of the technology varied (Table 2).
In the group of students who reported that they used GAI tools, a few demonstrated a lack of understanding of how AI and GAI tools work. Some relied on traditional AI tools designed to perform specific tasks intelligently, without recognising the distinct capabilities of GAI. Unlike conventional AI, GAI refers to systems capable of generating new content based on input data. At least three of the ten students in this group used Large Language Models (LLMs) like ChatGPT for (i) paraphrasing/rewriting their writing to improve the quality of expression for work purposes; (ii) generating new ideas for work-related projects; and (iii) creating images and ideas for marketing materials.
The group of students who reported that they had not used GAI recognised the difference between AI and GAI. It is likely that they are users of AI, but not users of GAI. This group cited a lack of knowledge of how the technology works, together with concerns about data protection, privacy, and ethical issues, as the reasons for their non-use. They were particularly concerned about academic integrity issues if GAI were used in their writing and assessment. There were also concerns that an overreliance on these technologies may weaken their cognitive skills. Two-thirds of this group were interested in using GAI in the future if training on appropriate use were provided, whilst the remaining students in this group lacked the confidence to use technology in general.
All three groups demonstrated awareness of the limitations of GAI and issues around ethical and responsible use. The students who used GAI had mixed experiences, finding it useful at times and of limited use at others. They viewed that whilst using Grammarly to edit content helped them to develop their vocabulary and the quality of their written English, it could be time-consuming. The students who used ChatGPT and Bard/Gemini for generating ideas for new projects and for creating marketing materials found that these tools saved time and made them more efficient.

3.2. Addressing Student Concerns Following the Survey

The students were unsure about using GAI for their studies, as they had learned about academic integrity in their pre-sessional English course and were aware that any work submitted for assessment needs to be their own. It was evident from the student comments that they thought the only use of LLMs such as ChatGPT and Gemini was to write content for assignments, which they knew could lead to academic misconduct. Hence, as part of the AI workshop, the university’s policy guidance and principles on digital literacy and AI literacy were introduced, explained, and critically analysed with the students. Section 10.2 of the Regulations for the Taught Students states that the “use of AI tools could lead to several academic misconduct breaches”, including plagiarism, contract cheating, and fabrication [22]. Section 10.2 defines “plagiarism” as “relying on a source to complete” a “work that has not been identified or referenced”; “contract cheating” as “relying on another tool owned by another person to complete” an “assessment”; and “fabrication” as relying on artificial data generated by AI tools “to complete your assessment” (Section 10.2.5) [22]. These definitions confirm that the inappropriate use of AI tools in an assessment may lead to academic misconduct breaches. The Regulation does not prohibit students from using AI tools for their assessment; however, it requires that the work students submit for assessment be their own. Ideas, information, or content used to develop work must be referenced, and any use of AI must be acknowledged. Appropriate use of GAI technology can make a task easier and improve productivity; the university has therefore recommended developing student GAI literacy and using this technology in teaching and learning as appropriate for a discipline.

3.3. Making Students Aware of AI’s Limitations and Setting up Boundaries

A further AI workshop focused on the limitations of AI tools and what would constitute appropriate and ethical use for this module. At the point of writing this paper, ChatGPT 3.5 had a knowledge cut-off of September 2021, so any developments in law or facts after that date were unavailable. It can also hallucinate, generating fake references. GAI-generated outputs rely on patterns and data rather than authoritative sources [23]. Hence, it tends to produce generic answers with a basic structure that lack an in-depth and specialised understanding of a topic.
As GAI learns the patterns and structure of input training data and then generates new data with similar characteristics, it lacks critical analysis and an understanding of jurisdictional differences in law and legal systems [24]. It provides general information and guidance that is not suitable for professional legal advice. The way a law is interpreted and applied can vary depending on the jurisdiction and context in which it is applied. GAI tools may not fully understand the complexities of legal cases or provide tailored advice on a hypothetical scenario. Users who depend entirely on GAI tools may find themselves lacking the depth of expertise required to navigate complex legal topics effectively.
The students were alerted to intellectual property issues with GAI outputs. GAI content may involve hidden plagiarism if the training data are sourced from the work of others without the source being acknowledged by the GAI [25]. There may also be copyright issues with images and multimedia created and owned by others. The students were made aware that GAI tools learn from interaction, i.e., prompts and inputs. Thus, they need to be careful not to enter any personal or confidential information, as it is currently not clear how such data will be stored or processed.
To ensure fairness of access, students were instructed to use only free GAI tools. At the current time, there is no way of monitoring whether students are using free or paid subscription tools. Universities will need to consider how best to respond to a potential proliferation of paid subscription GAI tools. This is a matter that requires additional consideration, clear processes, and policy development. In the absence of established processes, guidance, and policy to address this issue, the students were instructed to limit their GAI use to the early stage of their research. One of the questions in the reflective assignment required the students to declare the GAI tools they used for their research and assignment. Writing tools like ChatGPT 3.5 can be used to present and format students’ work; however, they should not be used to rewrite any sections of the students’ own writing in a way that changes the meaning. The students were also warned that it is not acceptable to use any GAI tool to generate content that forms part of their assessment unless correctly referenced. In summary, the work a student submits must be their own, and any contribution from GAI must be acknowledged following the referencing style that the university provides.

4. The Assessment Design, the Workshops, and Support

The assessment was a reflective portfolio, one component of which required students to reflect on their use of selected GAI tools (the Assessment Brief is attached as Supplementary Materials). A structured questionnaire prompted them to focus on how GAI influenced their development of problem-solving and critical thinking skills. These reflections were analysed using qualitative content analysis to identify key themes related to skill development, ethical considerations, and tool effectiveness. Secondary sources, including institutional policies and current literature on AI in education, were also reviewed to contextualise findings.
The PAIR framework which was used in the context of an assignment consists of four steps [21]:
  • Problem formulation requires the students to define the problem or challenge that they want to solve.
  • AI tool selection allows the students to choose the best GAI tools to solve the problem by exploring, comparing, and evaluating different GAI tools and their features.
  • Interaction with GAI tools to solve the problem scenario, which involves experimenting with different inputs and outputs and reviewing how the chosen tool affects their problem-solving process and outcome.
  • Reflection on their experiences with GAI tools.
Students were asked to select one or two GAI tools that were introduced in the AI workshops and to use them at an early stage of researching and drafting one of their portfolio activities. The activity was a problem question that required the students to demonstrate a basic but accurate understanding of the sources of international law and its dispute settlement methods and means. The students were expected to apply their knowledge of the law to the hypothetical scenario to identify the problem issues. They were taught to ‘formulate’ the problem, i.e., to break it down into questions that would help them to develop an in-depth understanding of the topic. They were then asked to input these questions into their chosen GAI tool. Once outputs were generated, they were instructed to critique them for accuracy and relevance. They were also encouraged to locate the GAI-generated resources among the sources available via the University library. The students were asked to reflect on their experience in a separate activity, which was also part of the same portfolio. They were given a set of questions to guide this reflection, which specified that their reflection should focus on developing two core legal skills: problem-solving and critical thinking.
The reflective toolkit they used would guide them to examine their experiences repeatedly, reflecting on what worked and what did not. One of the reflective toolkits they were instructed to use was Gibbs’ Reflective Cycle, which covers six stages of a reflective experience: (i) description of the experience; (ii) feelings and thoughts about the experience; (iii) evaluation of the experience, both good and bad; (iv) analysis to make sense of the situation; (v) conclusion about what they had learned and what they could have done differently; and (vi) an action plan for how they would deal with similar situations in the future, or general changes they might find appropriate [26]. The students were given several helpful questions and examples for each stage. They were instructed that not all the questions needed to be answered, as the questions were prompts to assist their reflection. As the assignment required critical reflection, the students were instructed to write in-depth and detailed reflections on each stage of their experience. They were also instructed to reflect on their feelings and emotions as they used a new technology with human-like abilities, as required by the PAIR framework. The students were also taught Rolfe’s [27] reflective cycle, which has fewer steps and is thus often easier for students to use [28].
As this study aimed to develop student GAI literacy and apply it to enhance student research, critical thinking, and problem-solving skills, the application of the PAIR framework was not limited to the assessment. It was also applied in the workshop planning to encourage the students to experiment with GAI tools in the workshop activities. In addition, a fifth step, ‘1:1 support’, was added to this framework during the final stage of the students’ assessment preparation. In addition to the timetabled workshops, where the students were introduced to various GAI tools and experimented with them, they were asked to submit a 750-word formative assignment providing an outline of the problem question and their use of GAI to formulate the problem. Detailed feedback was given on formative submissions. This was followed by one-to-one drop-in sessions where students had the opportunity to clarify the feedback they had received and to ask questions related to the summative assessment [29]. These sessions identified the issues commonly raised by the students. As only half of the cohort took up the drop-in opportunity, a screencast addressing those common assessment-related questions was recorded and made available for the benefit of all students. The end-of-module feedback and the verbal comments received from the students demonstrated that this support helped them not only with their summative assessment preparation but also with assessment-related stress.

5. Key Findings from the Student Reflection on GAI Use

5.1. Benefits of Using GAI

(i)
Saved research time
The students commented that the GAI tools provided a structured outline for the problem question and helped them to identify the correct legal framework (for the assignment, this was several International Law agreements). Using GAI in the preparation of the early draft helped them to save time in researching sources of law, guided them through their research, and gave them confidence that they were on the right track. Students felt that it helped them to focus their research during the assignment preparation. They were able to identify a variety of primary and secondary legal sources using Bing, which helped them in structuring initial ideas, thoughts, and arguments for the assignment. As Gemini is connected to the internet, several students used it to obtain “real-time information from the web”. The students appreciated the note that appeared below the search results indicating that the information may be inaccurate and requires checking against reliable sources. This reminder was useful, as they recognised the need to verify the facts and information generated by GAI tools. Many commented that they used Gemini-generated sources as a starting point for their library search to support their research.
(ii)
Helped to learn legal terminologies and definitions
As many students were from non-law backgrounds, and all were non-native English speakers, it was challenging for them to understand the legal terms included in case decisions, judgments, and agreements. This group used ChatGPT, Bing, and Gemini as aids to develop their understanding of unfamiliar words, legal terminologies, and definitions. They used ChatGPT to define unfamiliar legal terminologies or to obtain summaries of legal concepts. Several students mentioned that they used these tools to find alternative and more appropriate words to improve their writing [30]. They also used ChatGPT to paraphrase their writing; once the output was received, they edited and paraphrased it again to fit their own writing style, boosting their confidence in writing in English.
(iii)
Enhanced research skills and quality
Whilst writing prompts for the GAI tools, the students had to prepare hypothetical questions about the research topics. This exercise allowed them to be experimental, creative, and analytical about their research topic. They reviewed their work rigorously to check for any inappropriate or false information, resulting in enhanced quality of their research. They viewed that this process ensured the accuracy, credibility, and integrity of their research. A few students used ChatGPT and Gemini to search for examples of cases similar to the given hypothetical problem question to develop a basic understanding of how to approach a problem question. The case examples generated by GAI were used to search for the original judgments on the official websites of the courts. For example, the students searched the International Court of Justice (ICJ) website for cases that were generated as examples of the application of international law sources in an international dispute between states. Students compared the information generated by ChatGPT and Gemini with the actual case summaries and decisions included on the ICJ website. They viewed that although the texts generated by ChatGPT and Gemini were correct, they lacked detail.
(iv)
Enhanced knowledge as well as critical thinking skills
The students developed a comprehensive grasp of the nuances of laws and legal frameworks by engaging with the outputs generated by the GAI tools. These tools helped students to investigate diverse perspectives on a problem, understand complicated legal concepts, and draw connections between legal concepts. When used in the early stage of research, they helped the students to identify the fundamentals of the research topic, enabling a step-by-step approach to developing core arguments on complex legal issues. Through this process, the students improved their critical thinking abilities and problem-solving skills. They further developed critical analysis skills by evaluating the accuracy, relevance, and content quality of GAI outputs.
Students used GAI tools as an aid when struggling to understand unfamiliar words and legal terminologies. These tools supported students in finding alternative words or terms for paraphrasing, helping them to expand their legal knowledge and analytical skills, which allowed them to address complex legal topics with confidence and accuracy. One student prompted ChatGPT for guidance on how to write a good introduction; it produced a three-step guide that helped the student write their assignment introduction, along with a list of primary and secondary sources and an explanation of their relevance to the research topic. The student was able to critically evaluate these sources to eliminate false and irrelevant ones. GAI tools assisted the students in critical thinking by presenting different viewpoints, drawing attention to inconsistencies or opposing positions, and identifying potential risks or weaknesses in legal arguments.
(v)
Boosted innovation and supported independent learning
ChatGPT’s rapid and immediate responses, combined with the ability to experiment with prompts, motivated students to deconstruct the hypothetical scenario into numerous investigative questions, facilitating a deeper understanding of the legal framework and applicable rules. For example, one student wrote the prompt, “Can a state denounce a bilateral treaty due to a deterioration in relationship with the other party involved?”. The output was generic and only included a treaty name (a primary source) without mentioning the relevant article number. The student found this output inadequate, lacking depth and substance, and lacking legal authorities to support the information or the arguments. However, the use of ChatGPT and Bing enabled students to explore diverse viewpoints on a debate and broaden their perspective. As Gemini is linked to the internet, sources generated by this tool helped the students to read widely, developing an in-depth understanding of a topic. At least four students used the Issue, Rule, Application, Conclusion (IRAC) method [31] to answer the problem question. Although this method was briefly introduced in class, it was not covered to a level at which the students could use it to solve a legal problem. The students who used IRAC mentioned that ChatGPT helped them to understand the process as well as its application. At least half of the cohort mentioned that they used ChatGPT to learn complex legal concepts that they thought were relevant to answering the problem question but were not covered in detail in class, for example, the ‘Optional Clause’, a legal term included in Article 36, Paragraphs (2)–(5) of the 1945 Statute of the International Court of Justice.

5.2. Challenges of GAI Use and How the Students Overcame Them

(i)
Accuracy, reliability, and suitability concerns
The students found it challenging to ensure the reliability and accuracy of GAI outputs relating to complex legal issues. Whilst ChatGPT and other GAI tools were helpful for starting their research, the students had to verify both the facts and the sources of the GAI-generated outputs. This process can be time-consuming, since it involves verifying the authenticity of the information provided, establishing the relevance of the outputs, and cross-referencing ChatGPT’s findings with appropriate academic and legal sources. For these reasons, some preferred to use the tools for tasks such as translating a word or defining a legal term, rather than relying on the legal principles, case law, other sources of law, and references generated by GAI. One of the students commented:
It’s important to note that while GAI tools are useful, they should not be solely relied on for completing tasks. It’s crucial to refer to the tool’s results in the referencing system in the assignments. In addition, dedicating time to independent reading can strengthen weaker areas of comprehension and enhance overall learning skills. Therefore, while GAI tools provide valuable support, a balanced approach that integrates personal academic growth is essential to academic success.
Another student wrote:
The GAI tools experience greatly enhanced searching and problem-solving skills overall. However, using the GAI tools should only be done with caution, using it only as a searching tool or to translate a word or two and simplify complex terms. In addition, it is important to learn the referencing system in order to properly cite the work of other authors and give them credit for their writing to avoid academic misconduct.
(ii)
Ethical concerns
Another challenge the students encountered was the ethical dimension of using GAI in academic work. The line between assistance and plagiarism can be thin; hence, they found it challenging to navigate this responsibly. Students appreciated the information and training they received in the workshops on how to use GAI at an early stage of their research and assignment drafting. However, they commented that continued training on these tools is needed, as the technology is advancing rapidly. Hands-on workshops and worked examples may help students understand what is, and is not, appropriate use of GAI in research and academic work. They also expressed concerns about the lack of a consistent approach across modules, as the current policy allows module leaders to decide on the level of GAI use in assessments.
In the reflective task, several students observed that one needs to be cautious when using these tools, as they have many limitations. They suggested that the appropriate and ethical approach is to use these tools as starting points for deeper investigation rather than as definitive sources to be wholly relied upon. Over-reliance on these tools can compromise one’s critical thinking, research skills, creativity, and confidence. A student commented on how GAI can act as a copilot or an assistant in their research: “I was mindful that employing these AI tools was just the starting point. My contribution was crucial for bringing the content depth, perspective, and originality. I critically evaluated the information provided by Bard, integrating it with my research to ensure accuracy and relevance. With ChatGPT’s drafts, I mixed my ideas, adapted the tone to fit my audience, and enriched the content with unique insights and analysis. This blend of AI efficiency and delicate touch was pivotal in creating content that was not only informative but also engaging and reflective of my style”.

6. Conclusions and Recommendations

In the reflective task, the students acknowledged their initial fear of using any GAI-generated outputs, as they were concerned about plagiarism and falling into academic misconduct. To overcome this concern, they adopted a cautious approach, cross-checking the information and references generated by GAI tools against scholarly articles and legal sources. They reviewed their work rigorously to check for inappropriate or false information, which enhanced the quality of their research. They viewed that this process ensured the accuracy, credibility, and integrity of their research. The students also viewed that whilst GAI supported their research, it could not be relied on as the sole source of information or understanding in their studies, as its outputs need critical evaluation against other academic sources.
The students who engaged with GAI tools at a very early stage of their research, and cross-checked GAI-generated outputs against academic sources, overcame the fear of committing an academic offence. They demonstrated a clear understanding and awareness of the university’s academic integrity and misconduct policy and its application. They learned that using GAI tools to produce an assignment is prohibited and that such use may be considered plagiarism, fabrication, or contract cheating. This reinforced the message that work submitted for assessment must be their own.
GAI tools supported personalised learning, developed student confidence in their research and work, and helped them become independent learners. The tools also allowed students to learn at their own pace, which was particularly beneficial for students who were non-native English speakers or came from non-legal backgrounds. By questioning and examining GAI-generated materials, students developed a critical understanding of those materials; by breaking down the problem question into prompts, they developed analytical skills. That said, it was clear from the students’ reflections that several viewed the use of GAI in the coursework as limiting their learning. As the technology was new to them, they spent too long experimenting with it without any meaningful outcome. They also expressed concerns over heavy reliance on technology instead of reading and analysing academic sources. It was also clear that students who indicated a lack of confidence in using technology found GAI challenging and less effective in their early research and drafting. This demonstrates that students who are confident with technology may adapt quickly and gain an advantage over those who do not feel ready to experiment.
Teachers considering embedding GAI into their teaching and assessment practices need to recognise the apprehension that some learners, particularly mature students, may feel towards adopting this technology. As highlighted in this case study, a starting point can be the institution’s policy, principles, and guidance on AI usage. Providing a clear and comprehensive explanation of these policies and principles fosters a shared understanding, enabling their consistent application by both teachers and learners. To effectively guide and support learners in experimenting with GAI tools, teachers need to familiarise themselves with the functionalities of the tools they recommend. Additionally, they must be aware of the extra support learners may require outside the classroom, especially when instructed to utilise GAI for assessment preparation. Given the rapid advancements in GAI technology, it is essential for teachers to continually update their knowledge of, and skills with, this technology.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/higheredu4030034/s1, ASSESSMENT: <COURSEWORK>.

Funding

This research received no external funding.

Institutional Review Board Statement

The School of Business, Education and Law (SBEL), University of Huddersfield, UK reviewed the researcher’s Ethical Review Application. The application was submitted together with the following: (i) research participant consent forms, (ii) the information sheet sent to participants, (iii) the research survey questionnaire, and (iv) the assessment brief and accompanying questionnaire. After reviewing the Ethical Review Application and supporting documents, the Committee approved the research and issued a certificate on 26 August 2024. The application identifier is BELETHICS2324 047.

Informed Consent Statement

Informed consent was obtained from all the participants involved in the study. Proof of this was submitted to the School of Business, Education and Law (SBEL), University of Huddersfield, UK as part of the institution’s ethical review application process.

Data Availability Statement

The information/data gained from the survey questionnaire and the reflection pieces do not include the participants’ names or any characteristics that could be used to identify them. The survey responses are kept on the researcher’s work computer, which is password-protected and accessible only by the researcher. The data will be kept for three years before being deleted. The reflective summative assessments were submitted to the University’s Brightspace site, which only the researcher and a few authorised persons (for example, the external examiner, the internal moderator, and the Faculty administrative support staff) can access. These data will not be released to other parties without the participants’ consent.

Conflicts of Interest

The author declares no conflicts of interest. This research is not funded; hence, the author declares that no funder had any role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The author also confirms that neither the manuscript nor any parts of its content are currently under consideration or published in another journal, including MDPI journals.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ChatGPT: Chatbot Generative Pre-trained Transformer
DfE: Department for Education
GAI: Generative Artificial Intelligence
HEIs: Higher Education Institutions
IRAC: Issue, Rule, Application, Conclusion
LLMs: Large Language Models
PAIR: Problem, AI, Interaction, Reflection framework
QAA: Quality Assurance Agency for Higher Education

References

  1. Gobert, J.D.; Sao Pedro, M.A.; Li, H.; Lott, C. Intelligent tutoring systems: A history and an example of an ITS for science. In International Encyclopedia of Education, 4th ed.; Elsevier: Oxford, UK, 2023; pp. 460–470. [Google Scholar] [CrossRef]
  2. Xiao, P.; Chen, Y.; Bao, W. Waiting, banning, and embracing: An empirical analysis of adapting policies for generative AI in Higher Education. SSRN Electron. J. 2023. [Google Scholar] [CrossRef]
  3. Department for Science, Innovation and Technology. A Pro-Innovation Approach to AI Regulation. 2023. Available online: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper (accessed on 2 February 2024).
  4. Department for Education. Generative Artificial Intelligence in Education. 2023. Available online: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education (accessed on 18 January 2024).
  5. Quality Assurance Agency for Higher Education. Generative Artificial Intelligence. 2024. Available online: https://www.qaa.ac.uk/membership/membership-areas-of-work/generative-artificial-intelligence (accessed on 18 January 2024).
  6. Jisc. Report on AI in Tertiary Education: A Summary of the Current State of Play (3rd ed.). 2023. Available online: https://www.jisc.ac.uk/reports/artificial-intelligence-in-tertiary-education (accessed on 2 February 2024).
  7. Russell Group. Russell Group Principles on the Use of GAI Tools in Education. 2023. Available online: https://russellgroup.ac.uk/news/new-principles-on-use-of-ai-in-education/ (accessed on 20 January 2024).
  8. García-Peñalvo, J.F. The perception of Artificial Intelligence in educational contexts after the launch of ChatGPT: Disruption or panic? Educ. Knowl. Soc. 2023, 24, e31279. [Google Scholar] [CrossRef]
  9. Liu, D.; Bridgeman, A.; Miller, B. As Uni Goes back, Here’s How Teachers and Students Can Use ChatGPT to Save Time and Improve Learning; The Conversation: Carlton, Australia, 2023; Available online: https://theconversation.com/as-uni-goes-back-heres-how-teachers-and-students-can-use-chatgpt-to-save-time-and-improve-learning-199884 (accessed on 30 June 2024).
  10. Codina, L. How to Use ChatGPT in the Classroom with an Ethical Perspective and Critical Thinking: A Proposition for Teachers and Educators. 2023. Available online: https://www.lluiscodina.com/chatgpt-educadores/ (accessed on 6 September 2024).
  11. Alqahtani, N.; Wafula, Z. Artificial intelligence integration: Pedagogical strategies and policies at leading universities. Innov. High. Educ. 2025, 50, 665–684. [Google Scholar] [CrossRef]
  12. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  13. Rahman, M.M.; Watanobe, Y. ChatGPT for education and research: Opportunities, threats, and strategies. Appl. Sci. 2023, 13, 5783. [Google Scholar] [CrossRef]
  14. Ahmed, A.R. Navigating the integration of generative artificial intelligence in higher education: Opportunities, challenges, and strategies for fostering ethical learning. Adv. Biomed. Health Sci. 2025, 4, 1–2. [Google Scholar] [CrossRef]
  15. Chan, C.K.Y.; Colloton, T. Chapter 5. In Generative AI in Higher Education: The ChatGPT Effect, 1st ed.; Routledge: London, UK, 2024; pp. 127–165. [Google Scholar]
  16. Quality Assurance Agency for Higher Education. Subject Benchmark Statement—Law (5th ed.). 2023. Available online: https://www.qaa.ac.uk/the-quality-code/subject-benchmark-statements/subject-benchmark-statement-law (accessed on 20 January 2024).
  17. Creswell, J.W.; Plano Clark, V.L. Designing and Conducting Mixed Methods Research, 3rd ed.; Sage Publications: Los Angeles, CA, USA, 2018. [Google Scholar]
  18. Yin, R.K. Case Study Research and Applications: Design and Methods, 6th ed.; Sage Publications: Los Angeles, CA, USA, 2018. [Google Scholar]
  19. Stebbins, R.A. Exploratory Research in the Social Sciences; Sage Publications: Los Angeles, CA, USA, 2001. [Google Scholar]
  20. Johnson, R.B.; Onwuegbuzie, A.J.; Turner, L.A. Toward a Definition of Mixed Methods Research. J. Mix. Methods Res. 2007, 1, 112–133. [Google Scholar] [CrossRef]
  21. Acar, O. Are Your Students Ready for AI? A 4-Step Framework to Prepare Learners for a ChatGPT World [Harvard Business Publishing Education]. 2023. Available online: https://hbsp.harvard.edu/inspiring-minds/are-your-students-ready-for-ai (accessed on 15 December 2023).
  22. University of Huddersfield. Section-10: Academic Misconduct Regulation. 2024. Available online: https://www.hud.ac.uk/policies/registry/regs-taught/section-10/ (accessed on 15 January 2024).
  23. OpenAI. Introducing ChatGPT. 2022. Available online: https://openai.com/index/chatgpt/ (accessed on 28 January 2024).
  24. O’Connor, S. Generative AI. Georget. Law Technol. Rev. 2024, 8, 394–404. [Google Scholar]
  25. Ajevski, M.; Barker, K.; Gilbert, A.; Hardie, L.; Ryan, F. ChatGPT and the future of legal education and practice. Law Teach. 2023, 57, 352–364. [Google Scholar] [CrossRef]
  26. Gibbs, G. Learning by Doing: A Guide to Teaching and Learning Methods; Oxford Polytechnic: Oxford, UK, 1988. [Google Scholar]
  27. Rolfe, G.; Freshwater, D.; Jasper, M. Critical Reflection for Nursing and the Helping Professions: A User’s Guide; Palgrave: Basingstoke, UK, 2001. [Google Scholar]
  28. University of Edinburgh. Reflection Toolkit. 2024. Available online: https://reflection.ed.ac.uk/reflectors-toolkit/reflecting-on-experience/what-so-what-now-what (accessed on 30 January 2024).
  29. Burke, K. Balanced Assessment: From Formative to Summative; Solution Tree: Bloomington, IN, USA, 2014. [Google Scholar]
  30. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  31. Slorach, J.S.; Embley, J.; Goodchild, P.; Shephard, C. Legal Systems & Skills, 5th ed.; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
Table 1. The research questions and the methodology.
Research Question | Methodology
1. Why and how did students use GAI tools before attending the module workshops? If they did not use them, what were the reasons for non-use? | Explored using survey data and initial reflections to identify patterns of engagement and barriers to adoption.
2. How did students’ perceptions of GAI use change after participating in the instructional workshops? | Investigated using the post-workshop assessment questionnaire. Reflective writing content was analysed to capture shifts in perception and awareness.
3. To what extent did students use GAI during the early stages of their research to analyse and solve a legal problem? What was the impact of this approach on improving their critical thinking skills? | Assessed using analysis of student work samples and feedback, both pre- and post-intervention.
4. What was the impact of training on ethical, appropriate, and responsible GAI use on students’ awareness of academic integrity? | Explored through student-written reflections on their understanding of responsible GAI use.
Table 2. GAI use survey data.
Group | Users’ Percentage | AI/GAI Tools Used | Purpose of Use or Reason for Non-Use
User of AI (includes users of GAI) | 44% | Service providers’ chat facilities, Google Maps, Google Translate, Siri or Alexa, Amazon or Netflix recommendation engines | Customer support from service providers, learning English as a second language, voice assistance for internet searches, and entertainment recommendations
User of GAI | 20% | ChatGPT 3.5, Google Bard/Gemini, Elicit, Grammarly | Writing support, editing content, generating new ideas, image creation, and creating marketing materials
Non-user of GAI | 56% | N/A * | Lack of confidence in using technology, concerns around data protection/privacy, concerns about impact on cognitive functions, and ethical concerns
* Did not report any use of AI or GAI.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Anwar, N. The Use of Generative Artificial Intelligence to Develop Student Research, Critical Thinking, and Problem-Solving Skills. Trends High. Educ. 2025, 4, 34. https://doi.org/10.3390/higheredu4030034

