Article

Application of ChatGPT in Information Literacy Instructional Design

University Library, University of Split, 21000 Split, Croatia
* Author to whom correspondence should be addressed.
Publications 2024, 12(2), 11; https://doi.org/10.3390/publications12020011
Submission received: 3 March 2024 / Revised: 4 April 2024 / Accepted: 11 April 2024 / Published: 15 April 2024

Abstract
Recent developments in generative artificial intelligence tools have prompted immediate reactions in the academic library community. While most studies focus on the potential impact on academic integrity, this work explored constructive applications of ChatGPT in the design of instructional materials for courses in academic information literacy. The starting point was the use of openly licensed information resources, or 'content infrastructure', as facilitators in the creation of educational materials. In the first phase, course teaching material was developed using a prompt engineering strategy, predefined standards, and a prompt script. As a second step, we experimented with designing a custom chatbot model connected to a predefined corpus of source documents. The results demonstrated that the final teaching material required careful revision and optimisation before use in an actual instructional programme. The experimental design of the custom chatbot was able to query specific user-defined documents. Taken together, these findings suggest that the strategic and well-planned use of ChatGPT technology in content creation can have substantial benefits in terms of time and cost efficiency. In the context of information literacy, the results provide a practical and innovative solution for integrating the new technology into instructional practices.

1. Introduction

Recent developments and the growing accessibility of generative artificial intelligence (AI) tools have had a significant impact on the higher education sector, opening remarkable opportunities for the use of new technology while also forcing the community to re-examine traditional approaches to issues such as academic integrity and transparent science [1,2]. The Chat Generative Pre-trained Transformer (ChatGPT, version 3.5 at the time of this study) is a statistical model that generates new data based on relationships between variables in a dataset. It is a specific application of a large language model (LLM), a neural network trained on a very large text corpus. Its natural language processing capabilities enable it to generate well-structured text, written in fluent language, in response to input prompts.
Essentially, ChatGPT is a chatbot capable of remarkable simulation of human conversation and work, utilising the vast amount of data on which it has been trained. The underlying algorithms enable it to provide coherent responses to questions by filling in data from predetermined datasets. This means that ChatGPT, as a form of generative AI, creates content by combining a variety of information to provide responses to user requests.
In the context of higher education, the technology initially raised concerns primarily regarding traditional assessment methods, such as written assignments and standardised tests [3]. At the formal level, recommendations have been developed for higher education institutions with basic guidelines on the use of ChatGPT in higher education and its application in teaching and learning, administration, and community involvement. Proposed solutions, in addition to providing clear guidance for students and instructors and reviewing and updating policies relating to academic integrity, also include updating higher education courses to include AI literacy and core AI competencies and skills. When choosing to use AI technology in higher education institutions, an important factor to consider is open-source access [4]. In terms of potential opportunities for educators, higher education faculty are encouraged to adapt their instructional strategies, using ChatGPT in flipped learning and student-centred approaches. Innovative assessment tasks should focus on students' creative and critical thinking abilities rather than on devising AI-proof solutions [3]. Students should be provided with clear guidance and expectations for using chatbots responsibly, including proper attribution and ethical considerations [5].
Considering the role of academic libraries in this rapidly changing information landscape, discussions have focused on understanding the future possibilities of AI text generators and the possible impact on library services. The limitations concerning training data, bias, and false information are emphasised as concepts crucial for information professionals [6]. The natural language processing abilities of ChatGPT could be used to improve search and discovery, reference and information services, cataloguing, metadata generation, and content creation [7]. Suggested applications of ChatGPT in academic libraries include the creation of virtual assistants, complementing reference services (such as a simulated reference librarian), improving user experience, the selective dissemination of information, optimising collection development, and copy cataloguing. ChatGPT technology offers possibilities for integration into library discovery tools and simplifying repetitive aspects of the research process. Academic librarians can use the content generation abilities of the tool in lesson planning and the creation of open educational resources (OER) [8,9].
A whole range of AI-powered commercial applications targeted at the higher education market appeared during the first half of 2023, ranging from literature review apps, research support tools, article summarisers, and content analysers to personalised reading assistants. Managing AI expectations is crucial for academic libraries in light of the overwhelming interest in these tools and products. Consequences faced by library professionals include reshaping their roles and becoming involved in quality assessment, algorithm literacy, the co-development of AI tools, and new search methods [10]. The persistent problem of the reliability of sources retrieved by ChatGPT can be addressed by developing frameworks such as CORE-GPT, which integrates LLMs with a large-scale corpus of open-access full-text scientific articles from CORE to provide a reliable, fact-based platform for answering questions [11].
In this context, it is important to note that mature AI products are costly and require large-scale training data. In addition, developers working in this field are highly trained professionals. Academic libraries, as traditionally underfunded institutions, face the problems of budget, data access, and human resources.
In summary, ChatGPT represents cutting-edge technology in natural language processing and artificial intelligence. By exploring its application in instructional design, this research expands the understanding of how emerging technologies can be integrated into educational contexts. Additionally, aligning with the principles of open educational resources, the research suggests innovative approaches to creating openly licensed resources for information literacy education.
The purpose of this paper is to investigate possible applications of ChatGPT in a small-scale environment and a specific field: instructional design for information literacy courses in academic libraries. The starting point for our research is the use of openly licensed information resources, i.e., 'content infrastructure', as facilitators in creating educational resources. It has already been widely recognised that LLMs, which use deep learning techniques to generate text based on prompts, contribute greatly to the speed of creating the information resources that comprise the content infrastructure [12].
Information literacy is a critical skill in the digital age, and academic libraries play a vital role in promoting it. This research addresses the challenge of effectively teaching information literacy by leveraging ChatGPT to create interactive and adaptive learning experiences tailored to individual student needs. Through practical implementation, this research can provide insights into the effectiveness of using ChatGPT in instructional design for information literacy courses. Additionally, this research contributes to the establishment of best practices and guidelines for integrating AI technologies into educational settings.
Our study focused on developing a strategy to produce an information literacy syllabus and additional resources using ChatGPT. Using free-of-charge platforms, our objective was to investigate the possibilities of creating content that would represent OER under the common definition [13]. In this study, we investigated the following questions:
  • How can the output be used in an actual classroom environment?
  • How can the chat model be adapted for a limited set of source documents?
The research questions were addressed in two phases. In the first phase of the study, the ChatGPT model was directed to produce a final text output according to predefined standards and a prompt script. In the second phase, we experimented with introducing a limited corpus of source documents and designing a small model that could respond to queries.
The prompt script used to train the model, the output generated by this script, and the custom model instructions are all available as Supplementary Materials accompanying this article.

2. Materials and Methods

2.1. Setting the Methodological Context

Academic libraries serve as central hubs for information dissemination, resource access, and scholarly support within educational institutions. Information literacy plays a central role in this context. Due to technological advancements, the traditional definition of information literacy has been supplemented by a new concept of AI literacy. AI literacy entails a basic understanding of how artificial intelligence and machine learning work: their underlying logic and their limitations [14]. By studying ChatGPT’s role in instructional practices, this research aims to bridge theory and practice, paving the way for the informed adoption of AI tools.
For the purpose of this study, ChatGPT was chosen over other generative AI tools for several reasons. Firstly, OpenAI's solution was unveiled in November 2022 and had reached an estimated 100 million users within a few months. It should be noted that the preliminary phase of this research began in March 2023. Although the generative AI market has exploded since then, statistics show that ChatGPT was still the most widely used text-generation AI tool in the world in 2023, with an approximate 20% share of users [15]. Secondly, the decision to use ChatGPT was based on the availability of support infrastructure and community knowledge. The growing community of users on the GitHub developer platform offered invaluable advice during the experimental setup.
In the first part of the investigation, we adapted the method used by Eager and Brunton [16]. The method is focused on prompt engineering, i.e., writing effective text-based inputs to guide AI models in generating teaching and learning content. External reviewers, selected based on professional profiles and experience in higher education, are involved in evaluating the generated content. The method of collecting textual responses from reviewers was selected for the purpose of obtaining qualitative insights.
In the second part of the research, our approach was mainly based on testing available tools for the customisation of the chat model. The preparation stage involved extensive research into available solutions for designing chatbots trained with user-defined content. The fundamental requirement was to find a low-code or no-code method of designing and developing custom applications. The low-code approach enables users to design and create applications using intuitive graphical tools, thus reducing traditional coding requirements. Users with basic coding skills can use low-code tools to develop and integrate custom applications. In comparison, no-code approaches eliminate the need for programming knowledge, allowing non-technical users to develop applications without writing any code. The proposed method allows a broader range of users to participate in building similar solutions.
In the experimental design of the custom chatbot model, we used a limited dataset with a focus on a specific topic. The main advantages of using a limited dataset are as follows:
  • Computational resources: smaller datasets require less computational power for training, which can be advantageous for resource-limited environments;
  • Storage and processing: limited datasets reduce storage and processing requirements.

2.2. Developing a Strategy to Produce an Information Literacy Syllabus Using ChatGPT

The model training strategy was developed for a specific task and with a specific goal in mind. The key concepts considered were the intended audience, purpose, and tone of the content generated by the model. The four steps of the strategy used in this study were as follows:
  • Assign the model the role of a subject-matter expert (SME);
  • Provide the model with context;
  • Instruct the model to use specific standards and input data;
  • Instruct the model to use a specific output format.
The task selected for this study was designing information literacy instruction material for a postgraduate course: ‘Academic/Scholarly Writing’, delivered at the University of Split Faculty of Economics, Business and Tourism, as part of the international postgraduate study programme in Economics and Management. The role, context, standards, and output format were developed following a detailed analysis of the course implementation plan.
Role assignment: The model was assigned a role by specifying a particular identity or perspective to ensure that the text generated by the model was relevant and appropriate for the intended audience and purpose. The prompt used for this purpose was ‘Act as an instructional designer’.
Context: Specifying the intended task provided additional context for the model. The prompt used for this purpose was as follows: ‘You are tasked with supporting an academic to design a syllabus for a 1st year PhD course in Academic/Scholarly Writing’.
Standards and input data: The parameters and conditions were introduced using input data from the course implementation plan (number of lecture and seminar hours, percentage of e-learning, course objectives, learning outcomes, course content, and assessment methods) and information literacy standards for higher education [17].
Output format: The model was instructed to organise the content in a specific format (e.g., modules, short summary, table, etc.).
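As an illustration only (not the authors' actual script), the four strategy steps above can be sketched as a function that assembles a message list for a chat-based LLM. The role and context prompts are quoted from this section; the standards and output-format strings below are hypothetical stand-ins for the course implementation plan data.

```python
# Illustrative sketch of the four-step strategy: role assignment, context,
# standards/input data, and output format, combined into chat messages.

def build_instructional_prompt(role, context, standards, output_format):
    """Combine the four strategy components into a chat message list.

    The role is sent as a system message; the remaining components are
    concatenated into the first user message.
    """
    user_parts = [
        context,
        "Apply the following standards and input data:\n" + standards,
        "Format the output as follows:\n" + output_format,
    ]
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": "\n\n".join(user_parts)},
    ]


# Role and context are quoted from the study; the remaining values are
# hypothetical examples, not the actual course implementation plan.
messages = build_instructional_prompt(
    role="Act as an instructional designer.",
    context=("You are tasked with supporting an academic to design a syllabus "
             "for a 1st year PhD course in Academic/Scholarly Writing."),
    standards=("30 lecture hours, 15 seminar hours, 20% e-learning; apply "
               "information literacy standards for higher education."),
    output_format="Organise the content into modules with a short summary table.",
)
```

In practice, this list would be submitted to the chat model, with subsequent script prompts appended to the same conversation.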

2.3. Model Training

Training the model was conducted using a prompt script (see the Supplementary Materials, Document S1). The prompt script contained a linear series of structured prompts and input queries. This systematic approach to communicating with the model is termed 'prompt engineering' and includes fine-tuning the questions in order to produce more relevant results and coherent output. The CLEAR framework for incorporating prompt engineering into information literacy instruction suggests the following method to improve AI-generated responses [18]:
  • Concise: brevity and clarity in prompts;
  • Logical: structured and coherent prompts;
  • Explicit: clear output specifications;
  • Adaptive: flexibility and customisation in prompts;
  • Reflective: continuous evaluation and improvement of prompts.
The basic prompts for the writing steps include defining specific outcomes and the format and type of content. After testing the initial prompt, the model output was reflected upon and evaluated. If the results failed to meet expectations, the prompt was adjusted and the process repeated. A well-written prompt includes the following components: verbal action, action outcome, task parameters, narrowed topic, alignment with goals, and any constraints or limitations, if applicable [16]. The prompt script used to train the model contained prompt examples aligned with learning goals, assessment criteria, critical thinking, problem-solving skills, and student engagement and interaction.
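The prompt components listed above can be illustrated with a small composition function. This is a sketch, not the study's actual script, and all example values are invented.

```python
# Illustrative sketch: composing a single well-formed prompt from the
# components named in the text (verbal action, action outcome, task
# parameters, narrowed topic, alignment with goals, constraints).

def compose_prompt(action, outcome, parameters, topic, goals, constraints=None):
    """Join the prompt components into one instruction string."""
    parts = [
        f"{action} {outcome} on the topic of {topic}.",
        f"Task parameters: {parameters}.",
        f"Align the content with the following goals: {goals}.",
    ]
    if constraints:
        parts.append(f"Constraints: {constraints}.")
    return " ".join(parts)


# All values below are hypothetical examples.
prompt = compose_prompt(
    action="Design",
    outcome="a 90-minute workshop plan",
    parameters="postgraduate level, group size of 15",
    topic="formulating research questions",
    goals="students can narrow a research topic into answerable questions",
    constraints="keep the plan under 600 words",
)
```

In the reflective step of the CLEAR cycle, a reviewer's comment would simply become a revised component value, and the composed prompt would be resubmitted.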

2.4. Conducting the Review Process

The evaluation of the final output of the ChatGPT model was carried out by two external reviewers with the following profiles: Academic Library Information Literacy Programme Coordinator and Assistant Professor of English Language. The reviewers received the raw output, without any interventions in the text. The only additions to the ChatGPT output included formatting into paragraphs and tables, for the purpose of facilitating the review process. The reviewers were informed of the purpose of the final document and the use of ChatGPT in the process of its preparation. Reviewers were asked to provide general comments, answering the following questions:
  • Does it fit the purpose?
  • Is it relevant to the context in which it is intended to be used?
  • Is it coherent and well-organised?

2.5. Developing a Custom Chat Model

The proposed custom AI chatbot template combines the OpenAI architecture with a limited set of source documents. The selected set of documents is a list of required full textbooks and articles and additional course materials for the course ‘Academic/Scholarly Writing’, in the format of .pdf files. This part of the research was conducted using a GitHub account and applications and services available at no cost through the free plans offered by service providers. For the purpose of creating a custom AI chatbot, the following platforms were used:
  • Flowise AI: a versatile low-code/no-code platform designed to simplify the creation of applications utilising LLMs. It offers a user-friendly drag-and-drop interface, enabling users to visualise and construct LLM apps with ease (https://flowiseai.com; accessed on 31 August 2023);
  • Render: a unified cloud to build and run all apps and websites with free TLS certificates, global CDN, private networks, and auto-deploys from Git (https://render.com; accessed on 31 August 2023);
  • Pinecone: a vector-based database that offers high-performance search and similarity matching (https://www.pinecone.io; accessed on 31 August 2023).
A detailed explanation of the steps is given in the Supplementary Materials, Document S2. The model design flow chart is shown in Figure 1.

3. Results

3.1. ChatGPT Output

3.1.1. Testing the Prompt Script

The conversation with the ChatGPT model using the prepared prompt script is available on the OpenAI website (https://chat.openai.com/share/6696bb02-c0a9-4693-8c7a-30f998fa5e1f, accessed on 29 January 2024). In a separate conversation (https://chat.openai.com/share/67c8ada7-5a4e-4aa7-9ebd-1d44f32988e6, accessed on 29 January 2024), supplementary questions and script tasks were used to test how the model responds to questions related to student participation and interaction and to critical thinking and problem-solving skills.
The results of the prompt script testing are presented in the format of a 15-page document (see the Supplementary Materials, Document S3) with the following subheadings:
  • Introduction;
  • Course Description (Course Objectives, Learning Outcomes, Course Contents, Course Schedule);
  • Workshop Plan: “From Research Topic to Research Questions” (including facilitator notes and participant handouts);
  • Assessment activities, criteria, and grading rubric;
  • Interactive activity lesson plan (including facilitator notes and participant handouts).
The text was not changed or amended before the evaluation stage. ChatGPT answers were copied and pasted without any revisions. Limited formatting was applied to improve readability, and some sections were organised into tables.

3.1.2. Evaluation

The questions given to the reviewers reflected the basic evaluation criteria for the usability, relevance, and quality of AI-generated content. The reviews were given in the format of descriptive text and are presented in Table 1.

3.2. Custom AI Chatbot

3.2.1. Model Architecture

The final version of the model is a Conversational Retrieval QA Chain, a chain for performing question-answering tasks with a retrieval component. Documents stored in a vector database serve as the source documents for retrieval. The chatbot generates responses from the user-defined dataset, returning the most appropriate answer. The final version of the custom chatbot model is shown in Figure 2.
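The retrieval step of such a chain can be sketched in a simplified, self-contained form: embed the query, find the most similar stored chunk, and pass that chunk to the LLM as context. The study itself used Flowise, Pinecone, and an OpenAI embedding model; the toy bag-of-words vectors and the three-chunk corpus below are hypothetical stand-ins.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy stand-in for an embedding model: a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical corpus chunks standing in for the course reading list.
corpus = [
    "Plagiarism means presenting another author's work as your own.",
    "A literature review surveys prior research on a topic.",
    "Citation styles include APA, MLA, and Chicago.",
]
vectors = [embed(chunk) for chunk in corpus]

def retrieve(query):
    """Return the corpus chunk most similar to the query."""
    q = embed(query)
    scores = [cosine(q, v) for v in vectors]
    return corpus[scores.index(max(scores))]

# The retrieved chunk would then be prepended to the LLM prompt, e.g.
# "Answer using only this context: <chunk>" followed by the user's question.
context = retrieve("What is a literature review?")
```

A production chain replaces `embed` with a learned embedding model and the list scan with a vector database lookup, but the embed-score-select flow is the same.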

3.2.2. Model Testing

The model was tested using simple queries about the corpus text to check its responsiveness. For the purpose of this paper, we limited the test to confirming that the chatbot is able to retrieve information from the pre-defined corpus of documents. Figure 3 shows an example of the chatbot's answer to a question formulated to request specific information from a source text. In this example, the default temperature setting of 0.9 was used, resulting in more creative and diverse output. Compared with the actual content of the book, the model's answer included all the points in the relevant chapter but also added one irrelevant point not directly related to the question.

4. Discussion

4.1. ChatGPT as an Instructional Design Tool

In the first part of the study, we investigated how the output of ChatGPT could be used in instructional design. The specific subject of instruction is information literacy in academic libraries. The instructional material produced by ChatGPT was evaluated by experts in the field, as presented in Section 3. Although earlier studies have explored the possible impacts of ChatGPT in academic libraries [6,7], they have not explicitly addressed the instructional potential of the tool.
The feedback from the reviewers highlights both strengths and areas for improvement in the proposed course content. The introduction and objectives are well defined, but some learning outcomes appear redundant. Additionally, clearer criteria for resource selection and improved lecture content are recommended. Overall, the reviewers concluded that the document required editing to match the required postgraduate level.
Our study suggests that instructors may benefit from using ChatGPT as an assistive tool in preparing course materials. Compared to the results of the Eager and Brunton case study [16], our results confirm that exploring the features and experimenting with the functionalities of ChatGPT is necessary to better understand its capabilities and limitations. In their case study, Eager and Brunton explored the transformative impact of AI-augmented teaching and learning in higher education, emphasising the need for educators to develop prompt engineering skills to effectively utilise AI tools such as ChatGPT. The authors advocate for the strategic integration of AI and present a case study on AI-assisted assessment design to illustrate practical applications.
The results of the same case study demonstrated a significant time-saving potential of implementing AI-augmented processes in instructional design. However, this point could be argued in terms of the extensive preparation time required for drafting a prompt script that can elicit consistent and contextualised responses from ChatGPT.
In summary, the reviewers concluded that the final output generated by ChatGPT could only be used in an actual classroom environment after careful review and optimisation. Most of the reviewers' positive comments refer to the structure of the sections (course objectives, learning outcomes, workshop plan, and assessment). This was expected, since ChatGPT is trained on a large set of teaching materials and can easily generate lesson plans in an acceptable format. Instructional prompts written in precise and unambiguous language and including fixed elements such as topic, learning goals, and type of content produced a satisfactory result. In-depth analysis of the text revealed issues concerning the vagueness of phrases generated by ChatGPT and a lack of clarity in defining certain concepts. Despite being instructed to operate as a subject-matter expert on both levels (general settings and role assignment), the model failed to maintain the required complexity level in its output. These shortfalls could be resolved by training the model further and adjusting the context, instructions, or constraints.
The results can also be interpreted in the context of open educational resources. In our research, we focused on the integration of ChatGPT into teaching practices as a supportive tool for creating syllabi and lesson plans, which are commonly listed among open educational resources. We can compare our findings with some of the expectations voiced in the open education community regarding the use of generative AI tools in the OER creation process. Lalonde [19] suggests that educators may shift towards becoming editors who validate and revise AI-generated content, ensuring its accuracy and credibility.
Wiley [12] anticipates that such tools will reduce the costs associated with drafting OERs, as they can produce reasonable first drafts much more quickly than humans. Although they can support the initial creation process, producing quality content would still require human expertise. Our results have confirmed these expectations. Reviewers’ comments can easily be translated into reflective prompts, guiding the model to produce content that is more appropriate for the context or task. Repeated testing of prompts and iterating the process can give more refined results.
Methods and strategies for prompt engineering presented in this study can be used to prepare personalized lesson plans, tailored specifically to students’ needs and interests. ChatGPT can be used in course design by suggesting optimal course structures, the sequencing of topics, and learning objectives. Instructors can use the presented strategies to create educational content such as quizzes, exercises, and study materials.
Overall, in the dynamic landscape of education, the integration of new technologies holds immense promise for instructional designers. Specifically, the adoption of AI tools introduces a twofold positive impact. First, it significantly enhances efficiency by automating routine tasks, allowing instructional designers to allocate more time to strategic planning and pedagogical considerations. ChatGPT can serve as a dynamic “sounding board”, enabling the rapid prototyping of ideas and immediate feedback. Second, content creation with the assistance of ChatGPT has implications for open educational resources (OERs). However, rigorous evaluation is essential. Designers must assess the accuracy, relevance, and alignment of ChatGPT-generated output with learning objectives. The “four-eyes principle”—collaborative review by multiple experts—ensures the integrity of instructional materials before publication. Looking ahead, empirical research should explore the impact of AI-powered instructional design on student satisfaction and academic self-efficacy. Understanding these links would inform future design practices and ensure that AI-enhanced learning experiences align with educational goals.

4.2. Custom Chatbot Model

Adetayo [8] mentions several cases of the deployment of chatbots in a library setting, listing examples of virtual reference service chatbots from as early as 2013. These examples fall under the category of traditional chatbots, which are task-specific and rule-based, i.e., producing predefined responses and lacking the ability to handle complex queries or unclear user input. In comparison, ChatGPT offers a variety of unique benefits, including contextual understanding, natural language processing, flexibility, and adaptability.
Testing of the custom chatbot model confirmed that ChatGPT can be adapted to retrieve answers from a pre-defined set of source documents. It was found that the chatbot is able to retrieve information from the corpus. Recently developed software development kits enable the embedding of tools for accessing predefined custom data. The process requires minimal coding knowledge and could be used as a cost-effective solution for building custom AI chatbots. The most important limitation is that we only used the free plans offered by software developers to build the custom chatbot, which impose restrictions on storage volume, available tokens, etc. However, the results provide a foundation for the in-house development of applications. The initial idea could be developed even further, depending on the text corpus fed into the model, allowing the user 'to have a conversation with the text'.
More research is needed to investigate how temperature settings and additional parameters influence the behaviour of the chatbot. There are several options available to tweak the configuration [20]. Temperature is the most common and determines the predictability of the words generated by the model. A temperature of 0 means that the model will always choose the words with the highest probability of being appropriate. This makes it more likely to produce the same response if the request is repeated, but the output will be less creative and may appear less natural. Increasing the temperature allows the LLM to consider progressively less probable subsequent words. At very high temperatures, the sampling distribution from which the model draws each word is so broad that it tends to produce erratic and even nonsensical results. At a temperature of 2, the current maximum for OpenAI's models, the result becomes disordered and unusable. Generally, a temperature value should be selected on the basis of the desired trade-off between coherence and creativity for a specific application.
The top probability parameter (top_p) is associated with the top-p sampling technique, or nucleus sampling. GPT models generate the next word by assigning probabilities to all possible next words in its vocabulary. With top_p sampling, instead of considering the entire vocabulary, the next word will be sampled from a smaller set of words that collectively have a cumulative probability above the top_p value. The top_p ranges from 0 to 1 (default), and a lower top_p means that the model samples from a narrower selection of words, making the output less random and diverse. Given that top_p impacts output randomness, OpenAI recommends adjusting either top_p or temperature, but not both.
Additional parameters available for a custom-built model are the frequency penalty and the presence penalty. Both parameters address the problem of repetition in generated text and can be used to suppress or increase word frequency, thus impacting the diversity of language use.
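The effect of these decoding parameters can be illustrated with a self-contained sketch over an invented toy vocabulary. Real models apply the same arithmetic over vocabularies of tens of thousands of tokens, and actual implementations differ in detail; this is an approximation for illustration only.

```python
import math

def next_word_distribution(logits, temperature=1.0, top_p=1.0,
                           counts=None, frequency_penalty=0.0,
                           presence_penalty=0.0):
    """Reshape a toy next-word distribution with the four decoding knobs."""
    counts = counts or {}  # how often each word appeared in prior output
    # Frequency/presence penalties subtract from logits of words already used.
    adjusted = {
        w: l - counts.get(w, 0) * frequency_penalty
             - (presence_penalty if w in counts else 0.0)
        for w, l in logits.items()
    }
    # Temperature rescales logits before the softmax; lower T sharpens.
    t = max(temperature, 1e-6)
    exp = {w: math.exp(l / t) for w, l in adjusted.items()}
    total = sum(exp.values())
    probs = {w: v / total for w, v in exp.items()}
    # Nucleus (top_p) sampling keeps the smallest set of words whose
    # cumulative probability reaches top_p, then renormalises.
    kept, cum = {}, 0.0
    for w, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[w] = p
        cum += p
        if cum >= top_p:
            break
    z = sum(kept.values())
    return {w: p / z for w, p in kept.items()}

# Invented logits for four candidate next words.
logits = {"the": 2.0, "a": 1.5, "this": 0.5, "banana": -1.0}
sharp = next_word_distribution(logits, temperature=0.2)  # near-greedy
flat = next_word_distribution(logits, temperature=2.0)   # more uniform
nucleus = next_word_distribution(logits, top_p=0.7)      # truncates the tail
```

Lowering the temperature concentrates mass on the most likely word, raising it flattens the distribution, and a `top_p` of 0.7 here drops the two least likely candidates entirely before sampling.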

4.3. Proper Attribution and the Responsible Use of Content Generated Using ChatGPT

The results of our study should be considered in the context of the proper attribution and responsible use of ChatGPT-generated content. Atlas [5] discusses best practices for the attribution of ChatGPT in various contexts. The standard requirement for any content used in a higher education setting is to disclose the use of an LLM in the acknowledgement section or its equivalent. This requirement is also in line with the OpenAI sharing and publication policy, which stipulates that the role of AI must be clearly disclosed for any content co-authored with the OpenAI API. Furthermore, the content must comply with the OpenAI content policy or terms of use.
At the end of April 2023, UNESCO published a quick-start guide [4] that addresses some of the main challenges and ethical implications of the use of generative artificial intelligence in higher education. The central theme of the document is academic integrity and the urgent need to adapt traditional assessment strategies, only marginally suggesting that teachers, researchers, and students need to be trained on how to improve the queries they pose to ChatGPT. In our study, our objective was to put a more positive spin on the dominant narrative and investigate how instructors can use the tool to improve teaching and learning practices.
In its 2023 position paper on artificial intelligence tools and their responsible use in higher education learning and teaching [21], the European University Association (EUA) highlights the need to review and reform teaching and assessment practices and adapt them in such a manner as to ensure that AI is used effectively and appropriately. While urging the academic community to consider academic integrity (e.g., the obligation to reference the use of AI in academic and student work) and recognising the many shortcomings associated with the use of AI, the EUA also acknowledges numerous potential benefits for academic work, including improved efficiency, personalised learning, and new ways of working. Our findings support this position, as they provide practical insight into how the model can be applied.
Finally, an interesting example of the implementation of such recommendations comes from an independent higher education institution [22], in the form of guidance on the use of ChatGPT for teaching and learning. Since ChatGPT is not accepted as a supported tool, instructors who choose to use it for their teaching assume responsibility for reviewing and vetting concerns regarding accessibility, privacy, and security. The possibility of privacy and copyright violations is highlighted. The guidelines also include suggestions for teaching strategies, focusing on the limitations of the dataset, concerns about academic dishonesty, and economic and environmental implications. This rather conservative approach does not include any recommendations for instructors on how to explore the benefits offered by ChatGPT. However, this could be explained by university policy and the fact that the university renegotiated a system-wide agreement with Microsoft to include Microsoft Azure OpenAI, a solution offering higher data security and compliance with regulatory requirements.

4.4. AI in Education: Opportunities, Challenges, and Ethical Considerations

If we consider the long-term impact on student learning outcomes, possible benefits for students may include enhanced user experiences, personalised learning, streamlined searching, and study assistance.
According to a survey conducted among undergraduate students in summer 2023 in the United States [23], 85% of undergraduate students confirmed that they would feel more comfortable using artificial intelligence (AI) tools if they were developed and vetted by trusted academic sources. Furthermore, 65% also agreed that AI will improve how students learn rather than have negative consequences on learning. In this broader context of AI in education, our results validate the need to explore the potential of developing trustworthy and verified content.
While students may embrace the time-saving potential of newly available tools, this could lead to over-reliance on generative AI technology. It is certain that old assessment schemes will have to be replaced, shifting the focus to a student’s process and approach. Instructors will be expected to use more inventive and varied assessment methods. In this context, it should be noted that tools advertised as being able to detect AI writing in student essays have been found to be very unreliable, especially when the authors are non-native English speakers [24].
The issue of academic integrity is closely associated with AI literacy. We believe that the focus and the main concepts of information literacy remain unchanged: the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning [14]. How to apply these concepts in the new reality, in which the boundaries related to the use of generative AI tools are continuously shifting, is a challenge that library professionals will have to face. As core institutions of universities, academic libraries are important stakeholders in the innovation activities related to the digital transition of European higher education [25], focused on the common task of preparing students for the workforce of the future.

5. Conclusions

In this paper, we have presented the results of two possible uses of ChatGPT in information literacy instruction in higher education. Firstly, we outlined a method for producing written teaching material with the assistance of ChatGPT, demonstrating how the model can be used effectively. Secondly, we developed and tested a small-scale custom AI chatbot that can be used as a supplementary teaching tool.
Our research has highlighted the importance of adopting a strategic approach to interacting with LLMs such as ChatGPT, i.e., setting clear goals and defining purposes, context, and constraints of how the model interacts with the user. Prompt engineering remains an exploratory activity that requires a number of iterations. The proposed custom chatbot model provides a starting point, which can be modified for different topics or applications in instructional design.
Some limitations should be considered. Firstly, the current study was not specifically designed to test the user experience or measure student satisfaction with AI-generated instructional material. The custom chatbot model proposed in this study is designed to provide students with immediate and relevant answers to questions about the course materials stored in the model database. This form of learning support can motivate students to become more active in the learning process. However, the actual impact on student motivation and academic performance can only be measured through in-class experiments.
Secondly, the custom chatbot model that we tested in our study is a basic configuration with a small workload. Such models can be tested using free plans; however, more serious deployment of any custom chatbot would require proprietary paid solutions. Small-scale models are appropriate for environments such as a single academic course or the testing of capabilities before considering purchasing an AI tool from a vendor. The effective evaluation of such tools requires some knowledge of how the tool was built and is necessary before any institution-wide decision is made.
The findings of this study can be helpful in understanding the collaborative potential of generative AI as a teaching assistant, tasked with planning activities, managing classroom time, and designing performance-based assessments. The method used to design a custom chatbot model can be applied in the instructional design of digital higher education courses and programmes. Considered in a more limited context, ChatGPT can allow academic libraries to bridge the financing gap caused by limited public funding, enabling the in-house development of applications adapted to the specific needs of a library.
Future studies could investigate whether the content produced in this way can meet the quality assurance standards for open educational resources (OERs). Copyright issues and the licencing of remixed content are additional areas that need to be explored further.
The integration of AI in education is approached with a healthy dose of scepticism, which can be summarised in one question: how can we trust the quality of information? The underlying concern is the transparency of the data on which the models are trained. The models are vulnerable to biases in the source material or in the selection process for training data, which affect model behaviour and cause misrepresentations or factual distortions. Additionally, learning and processing algorithms can be used to privilege one group of users or reinforce stereotypes. Privacy and data security may be compromised during interactions with generative AI models, as personal data shared by users may be used to improve the training of the models. All these concerns have been continuously voiced and addressed in the academic community since the launch of ChatGPT. As institutions traditionally focused on issues such as diversity, ethics, privacy, and fair use, academic libraries occupy a crucial role in developing university frameworks on the use of generative AI tools and establishing policy guidelines. In this process, libraries can serve as mediators and collaborators, while information professionals need to educate their communities on how to navigate the new information landscape.
In conclusion, all stakeholders in the higher education process should be aware that the arrival of generative AI will bring about a paradigm shift comparable to the one we saw with the arrival of the World Wide Web. Therefore, there is a significant need to constructively integrate these tools into higher education instructional practices.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/publications12020011/s1, Document S1: Prompt Script; Document S2: Custom Model Instructions; Document S3: Prompt Script Output.

Author Contributions

Conceptualization, J.M. and M.S.; methodology, J.M.; software, M.S.; validation, M.S.; investigation, J.M. and M.S.; resources, J.M. and M.S.; writing—original draft preparation, J.M.; writing—review and editing, J.M.; visualization, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author/s.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heaven, W.D. ChatGPT Is Going to Change Education, Not Destroy It. MIT Technology Review. Available online: https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai (accessed on 29 April 2023).
  2. Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use. Nature. Available online: https://www.nature.com/articles/d41586-023-00191-1 (accessed on 4 September 2023).
  3. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education? J. Appl. Learn. Teach. 2023, 6, 342–363.
  4. Sabzalieva, E.; Valentini, A. ChatGPT and Artificial Intelligence in Higher Education: Quick Start Guide. UNESCO International Institute for Higher Education in Latin America and the Caribbean. Available online: https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf (accessed on 3 September 2023).
  5. Atlas, S. ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI. Available online: https://digitalcommons.uri.edu/cba_facpubs/548/ (accessed on 8 September 2023).
  6. Fernandez, P. “Through the Looking Glass: Envisioning New Library Technologies” AI-Text Generators as Explained by ChatGPT. Libr. Hi Tech News 2023, 40, 11–14.
  7. Lund, B.; Ting, W. Chatting about ChatGPT: How May AI and GPT Impact Academia and Libraries? Libr. Hi Tech News 2023, 3, 26–29.
  8. Adetayo, A.J. Artificial Intelligence Chatbots in Academic Libraries: The Rise of ChatGPT. Libr. Hi Tech News 2023, 40, 18–21.
  9. Cox, C.; Tzoc, E. ChatGPT: Implications for Academic Libraries. Coll. Res. Libr. News 2023, 84, 99–102.
  10. Kim, B. AI, Generative AI, and Libraries. In Proceedings of the INCONECSS Community Meeting No. 6. Artificial Intelligence: Impact on Services, Online, 12 June 2023.
  11. Pride, D.; Cancellieri, M.; Knoth, P. CORE-GPT: Combining Open Access Research and Large Language Models for Credible, Trustworthy Question Answering. In Linking Theory and Practice of Digital Libraries; TPDL 2023. Lecture Notes in Computer Science; Alonso, O., Cousijn, H., Silvello, G., Marrero, M., Teixeira Lopes, C., Marchesin, S., Eds.; Springer: Cham, Switzerland, 2023; Volume 14241, pp. 146–159.
  12. Wiley, D. AI, Instructional Design, and OER. Improved Learning. Available online: https://opencontent.org/blog/archives/7129 (accessed on 29 April 2023).
  13. The 2019 UNESCO Recommendation on Open Educational Resources (OER): Supporting Universal Access to Information through Quality Open Learning Materials. UNESCO Digital Library. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000383205 (accessed on 25 April 2023).
  14. IFLA Statement on Libraries and Artificial Intelligence. International Federation of Library Associations and Institutions (IFLA). Available online: https://www.ifla.org/wp-content/uploads/2019/05/assets/faife/ifla_statement_on_libraries_and_artificial_intelligence.pdf (accessed on 20 March 2024).
  15. Statista. Leading Generative Artificial Intelligence (AI) Text Tools Market Share of Users Globally in 2023 [Graph]. In Statista. Available online: https://www.statista.com/forecasts/1423975/world-generative-ai-text-tool-market-share (accessed on 20 March 2024).
  16. Eager, B.; Brunton, R. Prompting Higher Education Towards AI-Augmented Teaching and Learning Practice. J. Univ. Teach. Learn. Pract. 2023, 20.
  17. Framework for Information Literacy for Higher Education. Association of College & Research Libraries. Available online: http://www.ala.org/acrl/files/issues/infolit/framework.pdf (accessed on 17 August 2023).
  18. Lo, L.S. The CLEAR Path: A Framework for Enhancing Information Literacy through Prompt Engineering. J. Acad. Librariansh. 2023, 49, 102720.
  19. Lalonde, C. ChatGPT and Open Education. Available online: https://bccampus.ca/2023/03/06/chatgpt-and-open-education (accessed on 20 March 2024).
  20. OpenAI API Documentation. Available online: https://platform.openai.com/docs/api-reference/chat (accessed on 28 February 2024).
  21. Artificial Intelligence Tools and Their Responsible Use in Higher Education Learning and Teaching. Available online: https://eua.eu/component/attachments/attachments.html?id=4048 (accessed on 28 February 2024).
  22. Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley. Available online: https://teaching.berkeley.edu/understanding-ai-writing-tools-and-their-uses-teaching-and-learning-uc-berkeley (accessed on 29 April 2023).
  23. McGraw-Hill. Share of Undergraduate Students Who Have Various Opinions on the Use of Artificial Intelligence (AI) Tools in Higher Education in the United States in 2023 [Graph]. In Statista. Available online: https://www.statista.com/statistics/1445975/us-undergraduate-students-opinions-on-the-use-of-ai-in-education (accessed on 20 March 2024).
  24. Liang, W.; Yuksekgonul, M.; Mao, Y.; Wu, E.; Zou, J. GPT detectors are biased against non-native English writers. Patterns 2023, 4, 100779.
  25. European Commission. 2020. Digital Education Action Plan (DEAP) Communication. Available online: https://education.ec.europa.eu/sites/default/files/document-library-docs/deap-communication-sept2020_en.pdf (accessed on 20 March 2024).
Figure 1. Model design flow chart.
Figure 2. Final version of the custom chatbot model.
Figure 3. Custom model answering a query.
Table 1. Summary of reviewer’s comments.
Document Section | Comment

Introduction
- The Introduction section is relevant for the context.

Course Description
- Course objectives and learning outcomes are well defined.

Learning Outcomes
- In the Learning Outcomes section, formatting and style guidance and plagiarism avoidance seem to be redundant elements at the academic level. One possible explanation could be that the tool cannot distinguish ‘lower-order’ from ‘higher-order’ skills in the Information Literacy Competency Standards for Higher Education.
- Originality in scientific writing and understanding plagiarism are topics relevant at all levels of academic study.

Course Schedule
- The structure of the Course Schedule is satisfactory.
- Module 1 in the Course Schedule section is unclearly formulated, as it seems unnecessary to spend as much as two weeks on the topic.
- A detailed analysis of the content of the other modules also indicates certain issues with clarity (e.g., the addressed learning outcomes include vague phrases such as ‘develop basic text analysis techniques’ and ‘establish coherence in scientific writing’, without specifying which techniques or how to establish coherence).

Workshop
- The workshop plan is well organised.
- In terms of time management, the set limit of 25 min for Activity 2: Advanced Literature Review and Refinement is definitely not enough for all planned activities.
- One suggestion for improving this section is to use a single research topic selected by the workshop facilitator; students can conduct literature reviews and engage in discussion. In this way, the students could practise together, and for homework each student could prepare their own research plan.
- The volume of information proposed for the workshop could be split into a combination of lectures and subsequent workshops, since acquiring advanced research skills takes a significant amount of time.
- The lecture should serve as an introduction to the set of instruments needed for formulating effective search strategies.
- It should be noted that students need to learn how to paraphrase, summarise, and condense a specific body of text without missing key information.

Workshop Notes (Advanced Resources Selection Criteria)
- The Key Concepts and Key Considerations sections contain excessive repetition, while failing to clearly distinguish between the two areas.
- The recommendation is to use only one set of clearly defined criteria for resource selection, instead of unsystematically listing various points that the students need to address.
- The Advanced Resources Selection Criteria should be written in a more accessible and clearer format.

Workshop Notes (Formulating a Research Question)
- The Research Question criteria are clear and practical. Including examples of both well-formulated and poorly formulated research questions is a useful element and a good basis for discussion before the practical part of the workshop.

Assessment
- The Assessment section seems to be well developed.
- The assessment criteria are applicable only after the mentioned skills have been acquired, which largely depends on the students’ learning pace.

Overall Impression
- The output document could be used only after an exhaustive editing process and certain interventions in the text, particularly in terms of adapting the complexity level to the required postgraduate level.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Madunić, J.; Sovulj, M. Application of ChatGPT in Information Literacy Instructional Design. Publications 2024, 12, 11. https://doi.org/10.3390/publications12020011
