1. Introduction
The emergence of Generative Artificial Intelligence (GenAI) [1,2,3] and AI Digital Assistants (AIDA) [4,5] in particular has introduced several new opportunities and challenges for education in the age of Industry 5.0 [6,7]. Publicly available AIDAs (p-AIDAs) such as ChatGPT have rapidly gained traction among students for academic support, offering real-time feedback, content explanations, and writing assistance [1,8,9,10,11]. For example, a recent survey by Freeman [12] found that 88% of UK students reported using GenAI for assessments. These p-AIDA tools exemplify the potential of AI to personalize learning, enhance engagement, and provide on-demand support at scale. In response, educational researchers and institutions have started exploring the pedagogical affordances and limitations of AIDAs, particularly in supporting learner autonomy, motivation, and academic performance [1,2,3,11,13,14,15]. Existing studies highlight the promise of GenAI in higher education while simultaneously raising critical concerns around data privacy, academic integrity, and the ethical use of AI-generated content, especially when students rely on non-institutional systems that lack alignment with course contexts and learning goals [3,5,11].
There remain important gaps in our understanding of how students engage with AIDA tools embedded within institutional ecosystems (i-AIDA). Given the concerns highlighted above, several institutions may want to provide a safe “walled garden” for their students to engage with GenAI within institutional boundaries [4,5,13]. However, there is limited empirical research on how learners perceive the usefulness and ease of use of i-AIDAs designed with pedagogical intent and integrated into formal course structures. Furthermore, few studies have explored how different design features (e.g., chat, quiz, and flashcard functions) affect student engagement. While several recent large-scale surveys [1,4,12] have captured broad student attitudes towards GenAI, in-depth, experience-based analyses of learner interactions with i-AIDAs in authentic educational settings are still lacking. Additionally, little is known about the nature of learner prompts to i-AIDAs, and what these prompts might reveal about learners’ goals, expectations, and support needs in the context of Industry 5.0.
To address these gaps, we conducted a beta-test of an institutionally developed AI Digital Assistant (i-AIDA) at the UK’s largest distance-learning university, co-designed using Design-Based Research (DBR) principles [16,17,18] and evaluated through the lens of the Technology Acceptance Model (TAM) [19,20]. Eighteen students with diverse attitudes towards GenAI, and i-AIDA in particular, participated in guided sessions, interacting with i-AIDA’s chat, quiz, and flashcard functions. Drawing on pre-post surveys, think-aloud protocols, screenshares, and prompt analysis, this study explored: (1) whether and how students’ perceptions of i-AIDA changed following direct engagement; (2) which design features were seen as valuable for supporting distance learning in line with Education 5.0 principles; and (3) what kinds of prompts students submitted, and what these reveal about learner needs and expectations. Through this application of Industry 5.0 principles, we aim to provide nuanced insights into the pedagogical potential, technical challenges, and user experiences associated with i-AIDA.
2. Literature Review: Industry 5.0 and Education 5.0
In this special issue, several examples are provided of how digitalization and integration of technologies within/across industries allow for the automation of complex human and machine tasks [21,22,23]. There are various definitions and concepts of Industry 4.0 [24] and Industry 5.0 [7,25], but according to Tusquellas, Santiago and Palau [21], Industry 4.0 “emphasizes the integration of digital technologies with manufacturing processes to boost productivity, efficiency, and economic growth worldwide”, while Industry 5.0 “represents a shift from an industry centered on automation and efficiency to a model that emphasizes human–machine collaboration, fosters innovation and promotes alignment with environmental sustainability”.
Similarly, in the context of education, substantial efforts have been made to integrate technologies and human interaction into learning and teaching in face-to-face, hybrid, and online contexts [2,26]. For example, Education 4.0 was introduced in 2008 as a concept focused on the use of digital technology, innovation, novelty, and connections with employment and industry [27,28,29,30,31,32]. Harkins [33] first introduced the concept of Education 4.0 to highlight a shift from traditional knowledge-based education to one focused on fostering innovation. This approach aligns with the ongoing evolution of Industry 4.0 [24], which is increasingly driven by automation, smart technologies, and the Internet of Things. For example, the World Economic Forum [31] identified a set of eight core competencies that define Education 4.0, including global citizenship, creativity and innovation, and lifelong learning. Additional scholars such as Fisk [34] and later Hussin [28] have expanded on these ideas by outlining specific pedagogical practices.
A recent systematic literature review by Rienties et al. [27] of 66 papers in Computer Science published in the period 2016–2020 focused on how educators introduced innovative pedagogical approaches into their courses and how they used some or all of the nine elements of Education 4.0 identified by Hussin [28]. Computer science was chosen as a discipline because it changes rapidly with advancements in technology, and one would expect educators to be more willing to engage with Education 4.0 concepts relative to disciplines whose core principles and methods change more slowly. Perhaps the most surprising finding was that none of the papers actually referred to the concept of Education 4.0, drawing into question how embedded these concepts are in practice. Seventeen coders subsequently coded the 66 innovative pedagogical approaches, and 54 out of 66 studies (80%) were regarded as (partial) examples of Education 4.0. Subsequent k-means cluster analysis [27] indicated three clusters of practice, whereby computer science educators implemented some or all of the elements described by Hussin [28], as illustrated in Figure 1: (1) Education 4.0 Light (n = 18); (2) Project-based/hands-on learning (n = 22); and (3) Full Education 4.0 (n = 26).
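To illustrate the type of analysis involved (not the original data or coding), a minimal Python sketch of k-means clustering on a hypothetical binary coding of studies against nine Education 4.0 elements might look as follows; the feature matrix, seed, and cluster summaries are assumptions for illustration only and do not reproduce the analysis in [27].

```python
# Minimal sketch (assumed data): k-means clustering of studies coded against
# nine Education 4.0 elements, loosely analogous to the analysis in [27].
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical binary coding: rows = 66 studies, columns = 9 Education 4.0
# elements (1 = element present in the study, 0 = absent).
coded_studies = rng.integers(0, 2, size=(66, 9))

# Three clusters, mirroring the three profiles of practice reported in the review.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(coded_studies)

for cluster_id in range(3):
    members = coded_studies[labels == cluster_id]
    print(f"Cluster {cluster_id}: n = {len(members)}, "
          f"mean elements implemented = {members.sum(axis=1).mean():.1f}")
```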
With the arrival of GenAI and publicly available AI digital assistants (p-AIDA) like ChatGPT in November 2022, several authors [6,7,21,35] have claimed that we are moving towards Education 5.0, which, in line with Industry 5.0, is more focused on the inclusive development of technology, people, and their environments. Indeed, Sedrakyan et al. [36] argued that Industry 5.0 “embraces a value-driven perspective [25], emphasizing the provision of value for end-users, stakeholders, and the broader socio-technical and environmental system within which actors operate”. For example, Ciolacu, Marghescu, Mihailescu, and Svasta [6] indicated that Education 5.0 would include “problem-based learning, scenario-based learning and non-traditional labs approaches, increased interaction with AR/VR and AI technology and Biofeedback for well-being and health”. However, many of these elements, with the probable exception of AR/VR/AI and biofeedback, are already included in Education 4.0, as illustrated in [27].
In line with the recommendations of Tusquellas, Santiago and Palau [21] and Sedrakyan, Borsci, van den Berg, van Hillegersberg and Veldkamp [36] to put humans at the center of the development of AI and Education 5.0, we co-created an institutional AI digital assistant (i-AIDA) together with students and staff at the Open University (OU) across a range of five Design-Based Research (DBR) studies in the period December 2023–December 2024 [4,5,13,37,38]. Across these five sequential studies, we gathered insights from 315 students and 20 staff members. As Wang [18] describes, DBR is “often referred to as a long-term research endeavor involving iterative observation, design, implementation, and redesign to come up with possible practical solutions to address educational problems.” Following the frameworks outlined by Easterday, Lewis, and Gerber [16] and Lyons, Lobczowski, Greene, Whitley, and McLaughlin [17], we applied DBR principles over a 13-month period to iteratively design, evaluate, and refine the i-AIDA system, as reported in detail in [37].
After an initial study with 10 students to explore what they would want from such an i-AIDA [5], two follow-up survey studies with a total of 305 students explored students’ expected services, concerns, and affordances of such an i-AIDA [4,38], as illustrated in Figure 2. In terms of expected services, students primarily wanted i-AIDA to offer personalization to individual needs and learning approaches, real-time assistance and query resolution, and support for academic tasks, while some also wanted emotional and social support, in line with previous findings [1,2,21,36].
In terms of main concerns, in line with previous literature, students were worried about academic integrity, data privacy and use, operational challenges, ethical and social implications, and the impact of AI on the future of education [1,2,11]. In a range of Design-Based Research studies [5,38], students were shown various versions of prototypes of i-AIDA. This helped the research team to develop an alpha version of i-AIDA, which was initially tested with 20 academics to explore the main functionality [13]. In this follow-up study, we report on the detailed qualitative findings from a beta-test of i-AIDA with 18 students in January 2025, for which we specifically sampled students with a mix of perspectives on i-AIDA and experience of GenAI in general, using the Technology Acceptance Model [19,20,39].
Technology Acceptance Model and AI Digital Assistant
As reported in Venkatesh and Bala [20], it has been suggested that the low adoption and use of digital technology by employees represents a major barrier to successful IT deployment. The Technology Acceptance Model (TAM) [19] and the follow-up Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. [40] have been extensively used and validated. For example, a recent meta-analysis by Blut, Chong, Tsiga, and Venkatesh [39], drawing on 1149 studies containing a total of 25,619 effect sizes from 737,112 users, found that UTAUT was a robust predictor of how people intend to use technology. In its most basic form, TAM indicates that learners’ intentions to use a technology (in this case: i-AIDA) are primarily determined by two key perceptions: (1) Perceived Usefulness (PU) of i-AIDA (e.g., allowing the learner to better understand key learning materials, perform better on assessments, and feel more motivated); and (2) Perceived Ease of Use (PEU) of i-AIDA (e.g., how easy the functionality of i-AIDA is to understand and engage with).
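As a minimal sketch of how these two constructs are commonly related to behavioural intention (the respondents, Likert scores, and coefficients below are simulated and are not data from this study), a simple regression illustrates the basic TAM logic:

```python
# Minimal sketch (assumed, simulated data): regressing intention to use i-AIDA
# on the two core TAM constructs, Perceived Usefulness (PU) and Perceived Ease
# of Use (PEU).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200  # hypothetical respondents

pu = rng.uniform(1, 7, n)   # mean of PU Likert items (1-7)
peu = rng.uniform(1, 7, n)  # mean of PEU Likert items (1-7)

# Simulated behavioural intention, with PU weighted more heavily than PEU to
# mirror the pattern commonly reported in the TAM literature.
intention = 0.6 * pu + 0.3 * peu + rng.normal(0, 0.5, n)

model = LinearRegression().fit(np.column_stack([pu, peu]), intention)
print("PU coefficient:", round(model.coef_[0], 2))
print("PEU coefficient:", round(model.coef_[1], 2))
```

In practice, TAM and UTAUT studies typically rely on validated multi-item scales and structural equation modelling rather than a simple regression on simulated scores.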
At the OU, we face a somewhat similar challenge: how to bring the benefits of Generative AI to our 200,000 students, many of whom come from disadvantaged backgrounds, have low self-esteem, and have limited digital technology skills. To this end, we have recently adopted TAM, as advocated in [20,39], to help us increase adoption rates. From previous work [41,42], we are aware that the use of AI-based digital technologies in teaching delivery can increase retention rates, which is especially important for our students.
Beyond examining how students engaged with the various design features of i-AIDA, how easy (or not) these were to use, and how useful they might be for their studies, we were particularly interested in the prompts that students used when interacting with the assistant. Prompts are expressions of thinking and often represent questions [15,43]. From classroom research, we know that analyzing the thoughts and questions of students can, for example, help to diagnose students’ understanding, stimulate further inquiry into the topic, and provoke critical reflections on educational practice [44]. Similarly, the analysis of learner prompts has the potential to provide insights into the thought processes of learners and to diagnose their understanding, interests, and needs. Furthermore, it may inform the development of assistants that specialize in responding to common themes expressed in the prompts. Therefore, our main research questions were:
RQ1: After engaging with a beta-version of an institutional AI digital assistant (i-AIDA), what are participants’ main perceptions, and do their perceptions change?
RQ2: What design features encourage engagement with i-AIDA, and which design features in particular are useful for supporting distance-learning students in the context of Education 5.0?
RQ3: What ideas did students express in their prompts when engaging with i-AIDA?
4. Results
A beta version of i-AIDA was developed and tested in January 2025 with 18 students in one-to-one sessions under the supervision of a facilitator. Before testing, 10 participants were positive about potentially using an i-AIDA for their studies, one was neutral, and seven were not positive. All participants successfully completed the four assigned tasks: watching an instructional video, engaging in a dialogue with the chatbot, completing a quiz, and using flashcards. The think-aloud sessions lasted an average of 44 min (SD = 6:23), ranging from 28 to 56 min.
4.1. RQ1 Change in Perceptions After Engagement with i-AIDA
Post-test results indicated a notable increase in positive perceptions, in particular among those learners who were initially less positive about i-AIDA, as previously reported in [37]. While two participants remained neutral, 16 (89%) agreed that i-AIDA would be beneficial for their studies. When asked to compare i-AIDA with other GenAI tools like ChatGPT (on a scale from 1 = very inferior to 10 = much more useful), the average rating was 7.58 (SD = 2.43). Three participants found i-AIDA inferior to ChatGPT, one rated it as similar, one as slightly more useful, and 14 as more useful. As indicated in Figure 5, students who were initially rather skeptical towards i-AIDA in particular became more positive after engaging with the tool for an hour.
4.2. RQ2 Design Features of i-AIDA That Encourage Engagement
The qualitative feedback from the transcripts of the interviews and screencasts on i-AIDA highlighted both strengths and areas for improvement, as shown in Table 1. Thematic analysis of the 18 screencasts revealed that 195 comments made by participants (56.9%) focused on i-AIDA’s features, with a mix of positive, neutral, and some negative feedback. The second most common theme (14%) related to how i-AIDA could enhance student engagement and motivation, with predominantly positive remarks. This links strongly with the more human-centered approach of Industry 5.0. The third key theme concerned i-AIDA’s technical performance, indicating areas where further refinements were needed.
Participant feedback was categorized into key themes aligned with the sentiment distribution outlined in Table 1. The analysis below combines these themes with illustrative quotations to capture the nuanced student experience during the beta testing of i-AIDA.
4.2.1. AIDA Features and TAM
The introductory video, which served as the first task, was generally perceived as informative and engaging. However, some participants raised concerns about the avatars, noting their unnatural appearance and robotic tone. One participant remarked that “they speak way too fast”, adding that a particular avatar was “too distracting… I was looking at her instead of listening to what she was saying” (ID19). The fast pace of the narration also made it difficult for some to follow the content.
Despite these issues, the video sparked curiosity and a strong interest in testing the tool, particularly the dynamic text feature. As one participant explained, “It was quite encouraging for engagement… I liked how it said it would be thought-provoking.” (ID4). For some, the video reinforced a positive first impression of i-AIDA as a course-specific assistant, clearly distinct from general AI tools like ChatGPT.
Participants generally had a positive experience interacting with the chat function. They described the responses as clear, detailed, well-structured, and supportive. Many valued its structured responses, friendly tone, and course-specific guidance, with one noting, “it really felt like talking to an instructor […] it’s quite useful” (ID6). Features such as personalized study schedules and source referencing were particularly appreciated. However, some criticized the tool for being overly wordy or slow and suggested improvements in design and response speed. As one participant remarked, “I’d rather it just did what ChatGPT did and just bang […] it’s a little bit slow” (ID10). Despite this, the tool’s relevance to students’ study contexts and ability to offer practical suggestions were seen as clear strengths.
The quiz feature was described as intuitive, accessible, and visually clear. One participant noted, “It was simple, it was easy to use, it’s user-friendly […] it was really interesting” (ID4). Gamified elements such as leaderboards and progress tracking were seen as motivating by some, particularly competitive learners. However, others called for feedback on incorrect answers and clearer guidance for improvement. The use of closed questions was questioned, especially for disciplines like arts and social sciences, where open-ended formats may better support critical thinking. As one participant observed, “I don’t know the purpose of the quiz, but it looks too naïve to me […] I am expecting more sophisticated guidance from AIDA” (ID3).
Several participants described using the flashcards as a smooth and engaging experience, praising their simplicity, responsiveness, and ease of use. One participant noted, “It’s simple, it’s easy to use, it’s given you a quick response” (ID4). The interactive “flip” design was seen as appealing, and the option to download flashcards was viewed as a positive possibility. Some appreciated the clear link between flashcard content and the course, suggesting potential for deeper engagement if expanded. However, not all found the tool useful, as one participant remarked: “These flashcards are very, very basic.” (ID5). Suggestions included making the content more advanced and offering reverse or student-generated formats to enhance flexibility and relevance.
4.2.2. Engagement & Motivation
Participants’ perceptions of engagement and motivation when using i-AIDA were varied but generally positive. Many described the tool as “interesting”, “curious”, and even “fun”, particularly when exploring interactive features such as dynamic text and flashcards. The ability to receive immediate feedback and personalized guidance was highlighted as encouraging continued interaction. Several participants appreciated that the assistant asked follow-up questions, prompting deeper engagement with the content. As one noted, it “felt like talking to an instructor” (ID6), reflecting how the tool’s tone and structure supported learner involvement.
However, some users found the system too slow or overly verbose, with a few describing the interface as “boring” or the pacing as frustrating compared to other AI tools. One participant remarked, “It was quite a long response. […] I don’t need the whole page again” (ID10), highlighting how excessive text could hinder engagement. These issues were seen as potential barriers to sustained interaction. Despite this, the assistant was largely viewed as a motivating and confidence-boosting tool, especially when learners received constructive feedback or successfully clarified confusing concepts.
4.2.3. Technical Performance
Participants shared mixed views about i-AIDA’s technical performance, especially regarding speed and responsiveness. Some found the tool smooth and stable—one participant said it was “doing better than I thought it would” (ID11). However, others found the pace frustrating. One person noted, “At this speed, it would at some point annoy me” (ID12), referring to how the text appeared slowly on the chat. Another added, “I would prefer just the whole text and the whole answer to appear immediately” (ID2), showing a preference for quicker replies. Among all themes, technical performance received the highest proportion of negative comments. While the system worked overall, many felt it could be faster and more responsive, with better control over how the content is displayed.
4.2.4. Content Delivery
i-AIDA’s content delivery was frequently described as clear and well-structured, particularly in how it presented key points and responded to user prompts. One participant noted that the responses were “very clear, very structured”, and that the assistant “followed all my instructions carefully” (ID9). Others appreciated the visual layout and organization of the information, with one remarking, “I find it quite easy […] I can glance at it quickly and see” (ID1), referring to the effective use of hierarchy within the chat interface.
Some participants, however, suggested areas for improvement. These included better signposting, more concise outputs, and clearer guidance on how to explore topics in more depth. One participant noted, “It could confuse you […] if that’s not the answer you were looking for or if the bits of information you’re looking for are not there” (ID14), reflecting how misaligned or incomplete responses could affect clarity. Despite these minor issues, most users found the content delivery effective for navigating course materials and supporting their learning.
4.2.5. User Experience
In terms of TAM perceived ease of use, many participants found i-AIDA’s interface intuitive and easy to navigate. Comments such as “It’s simple, it’s easy to use” (ID4) and “It’s very intuitive” (ID9) reflected a generally positive experience with the tool’s layout and usability. Some users appreciated the familiarity of the design, describing it as similar to other AI tools they had used: “It’s a sort of ChatGPT-esque experience […] anybody who’s used ChatGPT will be very comfortable with it” (ID1). Others highlighted smooth navigation and visual clarity, noting that the interface worked well as a guide to course content.
Nonetheless, a few participants suggested improvements to help users locate and access key functions more easily. One noted, “I didn’t see the white button […] probably need to come in the instructions because I didn’t notice it” (ID2), while another said, “I don’t know that AIDA is here […] it feels like a surprise” (ID1). These issues point to the need for clearer visual cues and more consistent guidance to ensure all users can engage confidently with the system.
4.2.6. AI Perception
Participants generally expressed positive perceptions of i-AIDA, often highlighting its relevance and alignment with the course learning. Several participants found that the assistant’s integration within the course platform enhanced its perceived usefulness, particularly in contrast with external tools. As one participant observed, it felt “slightly more useful […] because it can be integrated into what you’re actually doing” (ID8), while another noted the importance of institutional trust, stating, “I think it would be more useful […] it’s a learning model that is learning from the university, like it’s been seeded by the OU, so I know the sources and I trust that” (ID13). Beyond its contextual relevance, participants appreciated the potential for tailored guidance. One explained, “You can tailor that specific thing you’re missing […] something you can’t do unless you have a private tutor” (ID12), suggesting that the assistant’s responsiveness filled gaps in understanding in ways that felt both targeted and efficient.
However, not all reflections were uncritical. A participant cautioned that, similar to social media, AI can be “highly influential”, raising concerns that learners might accept responses uncritically (ID15). Notably, engaging with i-AIDA also prompted some to revise their previous attitudes toward AI. One participant who had been initially skeptical remarked, “I’ve always stayed away from AI […] but this has changed my mind” (ID4). Taken together, these reflections suggest that i-AIDA was generally perceived as a credible and valuable form of support, especially when positioned as a complement rather than a replacement for human interaction.
4.2.7. Additional Insights: Support, Autonomy, Evaluation, and Learning Outcomes
Although less frequently mentioned than other themes, participants offered insightful reflections on support, autonomy, assessment, and learning outcomes. The theme of Support & Guidance was reflected in participants’ appreciation for i-AIDA’s practical assistance with referencing and study management. One participant found the source feature “very, very helpful”, explaining, “if there was a specific source referenced, I’d like to see that here so I could click on it […] and it can show me what the source is and where it is in the source” (ID1). Another described it as “useful” because “sometimes in the course, you always have to quote where something came from” (ID2), and i-AIDA helped locate that material. Support was also linked to efficiency, with one participant noting, “I thought these programs would help me stop me making or spending so much time writing notes […] just click on and go” (ID17). These features were seen as helping students navigate and manage study tasks more effectively.
The theme of Autonomy & Control emerged through participants’ appreciation for being able to personalize their interaction with i-AIDA. This included control over interface elements and visibility of features. One participant noted that “enable quiz, enable activities gives you tabs”, allowing users to “just show and hide tabs”, which was seen as useful for tailoring the experience (ID16). Additionally, they explained they could “take off the tutorial”, and expected some items, like repeated videos, to “automatically hide” once viewed (ID16). Aesthetic customization was also valued. One participant expressed a preference for the dark theme, describing it as less glaring for night-time study and “less of a corporate” look (ID14). Similarly, another commented that changing the tool’s colours was “very useful”, affirming that “we can turn off and turn on these things” (ID12) to suit individual needs.
In relation to Assessment & Evaluation, participants expressed a desire for more detailed and formative feedback. While the quizzes were seen as useful for reinforcing learning, some wanted clearer insight into their performance. One participant noted, “maybe like if you do a try again, you might want some kind of points on it or you might want it to say these are the questions you got wrong […] a bit more feedback would be” (ID16). Another expected “a more refined analysis of my flaws” if the quiz was AI-supported (ID6). These responses reflect a preference for formative feedback that supports progress, rather than simply indicating correct or incorrect answers.
The theme of Learning Effectiveness emerged in participants’ reflections on how i-AIDA supported their understanding and learning progress. One participant noted that “the quiz is helping to work out what I know and what I don’t know. That makes sense” (ID2), while another felt that doing well reinforced confidence: “it sort of reinforces that you’ve taken things in […] if you do well, you then feel more confident” (ID7).
These findings provided a deeper understanding of how students interact with i-AIDA in real learning scenarios, highlighting both its strengths and areas for improvement. Beyond usability and design considerations, student engagement with i-AIDA also revealed patterns in the types of queries they posed to the system. To further explore how students utilized i-AIDA, a prompt analysis was conducted to examine the nature and focus of learner interactions. The following section presents the results of this analysis, offering insights into students’ primary concerns, learning strategies, and the broader role of AI assistance in their academic experience.
4.3. RQ3 Prompt Analysis
After iteratively coding all prompts, four major themes emerged: learning support (24 prompts), course content (10 prompts), course information (11 prompts), and off-topic (19 prompts). First, the learning support theme captured participants asking i-AIDA for tips and tricks to learn effectively and to structure their learning. They asked about effective time management strategies, ranging from general questions such as ‘What suggestions do you have for managing my time effectively?’ to queries asking the AI to produce study plans given specific constraints such as working hours. Other prompts within this theme were about finding further information, such as information about note-taking, finding a study buddy, or effective learning strategies, or asking the AI to summarize content.
The course content theme concerned queries about the written course content. Students used i-AIDA to get more information about course topics. For example, they asked, ‘what is trello?’ or ‘what is the difference between synchronous and asynchronous education’. They also asked for help identifying key information, such as ‘what would be the most important topic’, or ‘Please give me the key research names in this fields’.
The third theme, course information, captured general questions about the course. For example, students asked for more information about the assessment: ‘is the quiz relevant to my final grade?’ or ‘Will I be tested on the materials of week one’. They asked about the workload of the course, for example: ‘How much time would i need in the first week’, and ‘Does the 4 hrs per week study time take account of student’s differential learning speeds & capacity?’. Or they asked about the course’s study goals and website navigation.
The fourth and final theme captured off-topic prompts, that is, all prompts unrelated to the course, such as questions about ‘construction grammar’, ‘weather’, or ‘plastics in the environment’.
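As a toy illustration of how such prompt coding might be partially automated (the keyword lists and example prompts below are invented; the study itself relied on iterative human coding), a simple keyword-based tagger could assign prompts to the four themes:

```python
# Minimal sketch (assumed keywords): keyword-based tagging of learner prompts
# into the four themes identified in the study. The real analysis used
# iterative human coding; this is only an illustrative automation aid.
THEME_KEYWORDS = {
    "learning support": ["managing my time", "study plan", "note-taking", "summar", "study buddy"],
    "course content": ["what is", "difference between", "key research", "most important topic"],
    "course information": ["final grade", "tested on", "study time", "workload", "week one"],
}

def tag_prompt(prompt: str) -> str:
    """Return the first matching theme, or 'off-topic' if nothing matches."""
    text = prompt.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return theme
    return "off-topic"

examples = [
    "What suggestions do you have for managing my time effectively?",
    "What is the difference between synchronous and asynchronous education?",
    "Is the quiz relevant to my final grade?",
    "Tell me about plastics in the environment",
]
for prompt in examples:
    print(f"{tag_prompt(prompt):>18} | {prompt}")
```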
5. Discussion
The transition from Industry 4.0 to 5.0 brings a paradigm shift from automation-centric models to human-centric innovation and co-creation [6,21,25,35,36]. While several industry examples are provided in the wider sector and in this special issue [21,22,23], in this article we specifically focused on the development of an institutional AI digital assistant (i-AIDA) using principles of Design-Based Research [16,17,18] and the Technology Acceptance Model [20,39,40] at a large distance-learning provider, the Open University [4,5,38]. In line with the human-centric innovation and co-creation approach of Industry 5.0, this i-AIDA was (co-)created and (co-)developed together with students and staff over a period of about a year before this particular beta-test was conducted. The findings from the beta-testing of i-AIDA with 18 distance learning students demonstrated the potential and complexity of integrating i-AIDA into higher education settings within the broader context of Industry 5.0. In this sense, i-AIDA embodies this ethos, aiming to offer personalized, context-aware learning support for our distance learners while maintaining institutional oversight regarding data privacy, academic integrity, and pedagogical alignment.
In terms of RQ1, perhaps one of the most compelling outcomes of this study was the marked shift in student attitudes post-intervention. Learners who were initially skeptical of GenAI, and of i-AIDA in particular, demonstrated increased acceptance and even enthusiasm after engaging with i-AIDA. This shift indicates that hands-on, context-relevant exposure to i-AIDA may play a crucial role in overcoming some of the initial resistance towards GenAI, especially among distance learning students who value institutional trust and course alignment over the generalized capabilities of p-AIDA tools like ChatGPT. Nonetheless, two out of 18 participants remained “neutral” about i-AIDA and whether (or not) it would help with their studies. Furthermore, whether the positive user experience persisted for the other 16 students after the beta-test obviously needs further exploration.
In terms of RQ2, regarding the design features supporting distance learning students in the context of Education 5.0, our thematic analysis of the screencasts, logbooks, and learner feedback revealed that i-AIDA’s core features of chat, quiz, and flashcards were particularly valued. Participants appreciated the course-specific integration of i-AIDA in the students’ learning environment and how it provided tailored responses to their prompts. Furthermore, participants appreciated the explicit transparency in source referencing, indicating where more information about a particular answer could be found within the OU learning environment. These are core functionalities that are currently not available in p-AIDA systems like ChatGPT, and given the sensitive nature of learning data, it seems unlikely that these systems will gain such affordances [11]. These features address many of the concerns raised about public generative AI tools—namely, opacity, lack of context, and risks around misinformation or generic output. i-AIDA’s contextual awareness and grounding in institutional data were seen as its primary strength, supporting previous literature that calls for more ethically grounded, domain-specific AI systems in educational contexts [3,11].
At the same time, several limitations and tensions emerged with i-AIDA. While many praised i-AIDA’s design and structured responses, others criticized the tool for being overly verbose or slow, reflecting the need for a more streamlined UX and responsive design. Substantial efforts have been made by the research and development team since January 2025 to improve the speed and verbosity of i-AIDA, and preliminary findings from our own testing and the current experimental study (see below) suggest that the speed of its functionalities is now similar to that of p-AIDA providers. Furthermore, the feedback highlighted differences in learner expectations across disciplines: while the quiz and flashcard features were well-received by some, others felt these tools lacked the depth and adaptability required for more interpretive fields like the arts and humanities. Obviously, this exploratory beta-test is just the start of a wider exploration of the conditions under which, and for which types of students, particular functionalities are most useful for affect, behavior, and cognition.
In terms of RQ3, the prompt analysis provided a unique lens into how learners conceptualized and interacted with i-AIDA. Student prompts fell into four distinct categories: learning support, course content, course information, and off-topic queries. This distribution suggests that learners expect AIDA tools not only to provide immediate academic assistance but also to function as personalized study companions capable of offering time management advice, emotional reassurance, and strategic planning. While most prompts were on-topic, either about the course or about learning, it also became clear that off-topic use of generative AI is likely. From other research, for example on forum data, we know that not all conversations will be about the course and that students ask questions unrelated to the course or learning. Software that can detect such off-topic prompts may allow an AIDA to redirect learners’ efforts back to the course or provide an appropriate response, as sketched below.
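A minimal sketch of one possible approach to such off-topic detection, using TF-IDF similarity between a learner prompt and the course materials, is shown below; the example materials, threshold, and method are illustrative assumptions rather than a description of i-AIDA’s implementation.

```python
# Minimal sketch (assumed threshold and texts): flag prompts whose lexical
# similarity to the course materials falls below a cut-off, so an AIDA could
# redirect the learner back to the course instead of answering off-topic queries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_materials = [
    "Synchronous and asynchronous education in distance learning",
    "Time management and study planning for online students",
    "Using quizzes and flashcards to consolidate course concepts",
]

def is_off_topic(prompt: str, threshold: float = 0.1) -> bool:
    """Return True if the prompt is not sufficiently similar to any course text."""
    vectorizer = TfidfVectorizer(stop_words="english").fit(course_materials + [prompt])
    course_vectors = vectorizer.transform(course_materials)
    prompt_vector = vectorizer.transform([prompt])
    best_match = cosine_similarity(prompt_vector, course_vectors).max()
    return best_match < threshold

print(is_off_topic("What is the difference between synchronous and asynchronous education?"))  # expected: False
print(is_off_topic("Tell me about plastics in the environment"))  # expected: True
```

In a production system, semantic embeddings and a calibrated threshold would likely be more robust than this lexical baseline.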
The learning support theme showed that students are likely to seek guidance and support from i-AIDA regarding their learning. Assisting students in developing effective learning strategies is an important responsibility for educators, and several universities are equipped with customized student support systems. i-AIDA could offer supporting information or direct students to appropriate resources.
The course content theme captured students’ queries about the course’s actual content. For example, learners wanted more information about a concept, wanted to learn about connections between concepts, and wanted summaries of key information. This area plays to the strengths of generative AI in providing further information and summaries, while also requiring mechanisms that sense-check the generated responses.
The course information theme showed that students had general questions about the course. Often, these could be answered by reading the course guidelines. However, with the availability of i-AIDA, students might ask i-AIDA instead of searching for this type of information. Furthermore, the students expected i-AIDA to also answer specifically tailored questions about the course, such as the course workload for particular student groups. While workload information is often indicated on course websites, it may not be tailored to specific student groups. The role of institutional AIDA tools may need to expand beyond course-related FAQs to more holistic educational support systems, with built-in mechanisms for recognizing and adapting to student learning strategies, challenges, and progress. The thematic analysis of the learners’ prompts provided a unique lens into the thoughts and questions of students, as well as their learning needs and preferences, providing relevant information to inform the development of bespoke i-AIDA roles that cater to these requirements.
Finally, this study highlights the ethical implications of i-AIDA [2,3,5,26]. Students expressed trust in i-AIDA because it was seen as “seeded” by the Open University. This trust, however, comes with responsibility—institutions must ensure transparency, data protection, and accountability in the deployment and evolution of such tools [7,11,21]. The design of i-AIDA systems, therefore, should be inclusive, responsive, and continually shaped by learner feedback—a living system co-evolving with its users.
5.1. Limitations and Future Research
There are obvious limitations to this study. First and foremost, the sample size for the beta-test was relatively small (n = 18), and we specifically selected both mostly negative and mostly positive participants based upon the pre-test, which may limit the generalizability of the results to the broader population of distance learners at the OU. Second, while the study employed multiple data sources and analysis methods, the short-term nature of the engagement with i-AIDA (approximately 45 min) means that long-term effects on learning outcomes, sustained usage, and changes in study behavior remain unknown. Third, we did not explicitly use common psychometric instruments of TAM or UTAUT, primarily collecting rich qualitative data on actual engagement with the technology and participants’ lived experiences of perceived ease of use and perceived usefulness; future studies should also objectively measure these constructs. Fourth, the findings were based on a single institutional context within the UK, which may not reflect the challenges and affordances of i-AIDA implementations in other cultural, linguistic, or infrastructural settings; thus, cross-institutional and international validation studies are warranted.
At the time of writing (May/June 2025), we have just made two online courses available to more than 100 OU students to use i-AIDA in a randomized controlled experiment, whereby one group of students receives an updated version of i-AIDA based upon the findings from this study, and one control group has access to one of the two courses without i-AIDA. As students are able to interact freely with i-AIDA without any direct facilitation or moderation by humans, the research team is closely monitoring its use in order to explore learners’ attitudes, behaviors, and cognitions when engaging with i-AIDA. In particular, we are interested in whether the prompts, content moderation, and functionality (e.g., flashcards, chat, various roles of i-AIDA) meet students’ expectations in terms of perceived ease of use and perceived usefulness of i-AIDA.
5.2. Conclusions
This study provided a potential example of Industry 5.0 and Education 5.0 by examining how 18 learners interacted with a beta version of an institutionally developed AI Digital Assistant (i-AIDA). Findings from the beta-test suggest that students found i-AIDA’s role-based design, structured prompts, and multimodal functions (chat, quizzes, flashcards) beneficial for enhancing learning support and motivation. Learners appreciated the alignment of i-AIDA with course content and institutional values, and most expressed increased interest in using GenAI tools when provided in a trusted, pedagogically sound environment such as i-AIDA. The study also revealed the diversity of learner prompts and expectations, highlighting the importance of flexibility, scaffolding, and user agency in i-AIDA design. By adopting a mixed-methods approach and grounding the work in TAM and DBR principles, this research provides a nuanced understanding of the educational value and limitations of developing institutional AIDAs. Future work should explore long-term engagement and scale-up effects, as well as cross-institutional implementations, to further advance responsible and effective GenAI integration in higher education.