Article

Developing an Institutional AI Digital Assistant in an Age of Industry 5.0

by Bart Rienties 1,*, Thomas Ullmann 1, Felipe Tessarolo 1, Joseph Kwarteng 2, John Domingue 2, Tim Coughlan 1, Emily Coughlan 1 and Duygu Bektik 1

1 Institute of Educational Technology, The Open University, Milton Keynes MK7 6AA, UK
2 Knowledge Media Institute, The Open University, Milton Keynes MK7 6AA, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6640; https://doi.org/10.3390/app15126640
Submission received: 11 May 2025 / Revised: 1 June 2025 / Accepted: 6 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Advanced Technologies for Industry 4.0 and Industry 5.0)

Abstract

In Industry 5.0, it is essential that humans remain in the loop as technology is integrated into industrial processes. With the advancements of Generative Artificial Intelligence (GenAI), many new opportunities and challenges for learning and teaching have emerged. Many students already use publicly available AI Digital Assistants (p-AIDA) like ChatGPT for academic purposes. However, there are concerns around the use of such p-AIDA tools, particularly in terms of academic integrity, data privacy, intellectual property, and the impact on the quality of education. Furthermore, many higher education institutions hold substantial learning materials and data about students that they may not want to share with p-AIDA. Therefore, using the Technology Acceptance Model (TAM) and following a Design-Based Research (DBR) approach, we explored the perspectives and experiences of 18 UK students during a beta-test of an institutionally developed AIDA (i-AIDA), drawing on multiple methods and data sources (including a pre-post-test, interviews, think-aloud protocols, and prompt analysis). Our research underscores the potential benefits and limitations of an in-house i-AIDA in enhancing learning experiences without compromising academic integrity or privacy, and how higher education institutions can prepare themselves for Industry 5.0.

1. Introduction

The emergence of Generative Artificial Intelligence (GenAI) [1,2,3] and AI Digital Assistants (AIDA) [4,5] in particular has introduced several new opportunities and challenges for education in the age of Industry 5.0 [6,7]. Publicly available AIDAs (p-AIDAs) such as ChatGPT have rapidly gained traction among students for academic support, offering real-time feedback, content explanations, and writing assistance [1,8,9,10,11]. For example, a recent survey by Freeman [12] showed that 88% of UK students reported using GenAI for assessments. These p-AIDA tools exemplify the potential of AI to personalize learning, enhance engagement, and provide on-demand support at scale. In response, educational researchers and institutions have started exploring the pedagogical affordances and limitations of AIDAs, particularly in supporting learner autonomy, motivation, and academic performance [1,2,3,11,13,14,15]. Existing studies highlight the promise of GenAI in higher education while simultaneously raising critical concerns around data privacy, academic integrity, and the ethical use of AI-generated content, especially when students rely on non-institutional systems that lack alignment with course contexts and learning goals [3,5,11].
There remain important gaps in our understanding of how students might engage with AIDA tools embedded within institutional ecosystems (i-AIDA). Several institutions might want to provide a safe “walled garden” for their students to engage with GenAI within institutional boundaries, given the concerns highlighted above [4,5,13]. However, there is limited empirical research on how learners perceive the usefulness and ease of use of i-AIDAs designed with pedagogical intent and integrated into formal course structures. Furthermore, few studies have explored how different design features (e.g., chat, quiz, flashcard functions) impact student engagement. While several recent large-scale surveys [1,4,12] have captured broad student attitudes towards GenAI, in-depth, experience-based analyses of learner interactions with i-AIDAs in authentic educational settings are still lacking. Additionally, little is known about the nature of learner prompts to i-AIDAs, and what these prompts might reveal about learners’ goals, expectations, and support needs in the context of Industry 5.0.
Therefore, in order to address these gaps, we conducted a beta-test of an institutionally developed AI Digital Assistant (i-AIDA) at the UK’s largest distance-learning university, co-designed using Design-Based Research (DBR) principles [16,17,18] and evaluated through the lens of the Technology Acceptance Model (TAM) [19,20]. Eighteen students with diverse attitudes towards GenAI, and i-AIDA in particular, participated in guided sessions, interacting with i-AIDA’s chat, quiz, and flashcard functions. Drawing on pre-post surveys, think-aloud protocols, screenshares, and prompt analysis, this study explored: (1) whether and how students’ perceptions of i-AIDA changed following direct engagement; (2) which design features were seen as valuable for supporting distance learning in line with Education 5.0 principles; and (3) what kinds of prompts students submitted, and what these reveal about learner needs and expectations. Through this application of Industry 5.0 principles, we aim to provide nuanced insights into the pedagogical potential, technical challenges, and user experiences associated with i-AIDA.

2. Literature Review: Industry 5.0 and Education 5.0

In this special issue, several examples are provided of how digitalization and integration of technologies within/across industries allow for the automation of complex human and machine tasks [21,22,23]. There are various definitions and concepts of Industry 4.0 [24] and Industry 5.0 [7,25], but according to Tusquellas, Santiago and Palau [21] Industry 4.0 “emphasizes the integration of digital technologies with manufacturing processes to boost productivity, efficiency, and economic growth worldwide” while Industry 5.0 “represents a shift from an industry centered on automation and efficiency to a model that emphasizes human–machine collaboration, fosters innovation and promotes alignment with environmental sustainability”.
Similarly, in the context of education, substantial efforts have been made to integrate technologies and human interaction into learning and teaching in face-to-face, hybrid, and online contexts [2,26]. For example, Education 4.0 was introduced in 2008 as a concept focused on the use of digital technology, innovation, novelty, and connections with employment and industry [27,28,29,30,31,32]. Harkins [33] first introduced the concept of Education 4.0 to highlight a shift from traditional knowledge-based education to one focused on fostering innovation. This approach aligns with the ongoing evolution of Industry 4.0 [24], which is increasingly driven by automation, smart technologies, and the Internet of Things. For example, the World Economic Forum [31] identified a set of eight core competencies that define Education 4.0, including global citizenship, creativity and innovation, and lifelong learning. Scholars such as Fisk [34] and later Hussin [28] have expanded on these ideas by outlining specific pedagogical practices.
A recent systematic literature review by Rienties et al. [27] of 66 papers in Computer Science published in the period 2016–2020 focused on how educators introduced innovative pedagogical approaches into their courses and how they used some or all of the nine elements of Education 4.0 identified by [28]. Computer science was chosen as a discipline because it changes rapidly with advancements in technology, and one would expect educators in this field to be more willing to engage with Education 4.0 concepts relative to disciplines whose core principles and methods change more slowly. Perhaps the most surprising finding was that none of the papers actually referred to the concept of Education 4.0, drawing into question how embedded these concepts are in practice. Seventeen coders subsequently coded the 66 innovative pedagogical approaches, and 54 out of 66 studies (80%) were regarded as (partial) examples of Education 4.0. A subsequent k-means cluster analysis [27] indicated three clusters of practice, whereby computer science educators implemented some or all of the elements described by Hussin [28], as illustrated in Figure 1: (1) Education 4.0 Light (n = 18); (2) Project-based/hands-on learning (n = 22); and (3) Full Education 4.0 (n = 26).
With the arrival of GenAI and publicly available AI digital assistants (p-AIDA) like ChatGPT in October 2022, several authors [6,7,21,35] have claimed that we are moving to Education 5.0, which, in line with Industry 5.0, is more focused on the inclusive development of technology, people, and their environments. Indeed, Sedrakyan, et al. [36] argued that Industry 5.0 “embraces a value-driven perspective [25], emphasizing the provision of value for end-users, stakeholders, and the broader socio-technical and environmental system within which actors operate”. For example, Ciolacu, Marghescu, Mihailescu, and Svasta [6] indicated that Education 5.0 would include “problem-based learning, scenario-based learning and non-traditional labs approaches, increased interaction with AR/VR and AI technology and Biofeedback for well-being and health”. However, many of these elements, with the probable exception of AR/VR/AI and biofeedback, are already included in Education 4.0, as illustrated in [27].
In line with the recommendations of Tusquellas, Santiago and Palau [21] and Sedrakyan, Borsci, van den Berg, van Hillegersberg and Veldkamp [36] to put humans at the center of the development of AI and Education 5.0, we co-created an institutional AI digital assistant (i-AIDA) together with students and staff at the Open University (OU) across a series of five Design-Based Research (DBR) studies in the period December 2023–December 2024 [4,5,13,37,38]. Across these five sequential studies, we gathered insights from 315 students and 20 staff members. As Wang [18] describes, DBR is “often referred to as a long-term research endeavor involving iterative observation, design, implementation, and redesign to come up with possible practical solutions to address educational problems.” Following the frameworks outlined by Easterday, Lewis, and Gerber [16] and Lyons, Lobczowski, Greene, Whitley, and McLaughlin [17], we applied DBR principles over a 13-month period to iteratively design, evaluate, and refine the i-AIDA system, as reported in detail in [37].
After an initial study with 10 students to explore what they would want from such an i-AIDA [5], two follow-up survey studies with a total of 305 students explored students’ expected services, concerns, and affordances of such an i-AIDA [4,38], as illustrated in Figure 2. In terms of expected services, students primarily wanted i-AIDA to offer personalization to individual needs and learning approaches, real-time assistance and query resolution, and support for academic tasks, and some wanted emotional and social support, in line with previous findings [1,2,21,36].
In terms of main concerns, in line with previous literature, students were worried about academic integrity, data privacy and use, operational challenges, ethical and social implications, and the impact of AI on the future of education [1,2,11]. In a range of Design-Based Research studies [5,38], students were shown various versions of prototypes of i-AIDA. This helped the research team to develop an alpha version of i-AIDA, which was initially tested with 20 academics to explore the main functionality [13]. In this follow-up study, we report on the detailed qualitative findings from a beta-test of i-AIDA with 18 students in January 2025, whereby we specifically sampled students with a mix of perspectives on i-AIDA and experience of GenAI in general using the Technology Acceptance Model [19,20,39].

Technology Acceptance Model and AI Digital Assistant

As reported in Venkatesh and Bala [20], the low adoption and use of digital technology by employees is often suggested to represent a major barrier to successful IT deployment. The Technology Acceptance Model (TAM) [19] and the follow-up Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. [40] have been extensively used and validated. For example, a recent meta-analysis by Blut, Chong, Tsiga, and Venkatesh [39], which synthesized 1149 studies containing a total of 25,619 effect sizes from 737,112 users, found that UTAUT is a robust predictor of how people intend to use technology. In its most basic form, TAM indicates that learners’ intentions to use a technology (in this case: i-AIDA) are primarily determined by two key perceptions: (1) Perceived Usefulness (PU) of i-AIDA (e.g., allowing the learner to better understand key learning materials, perform better on assessments, and stimulate motivation); and (2) Perceived Ease of Use (PEU) of i-AIDA (e.g., how easy the functionality of i-AIDA is to understand and engage with).
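To make this basic form concrete, the relationship is often written as a simple linear structural model. The equations below are an illustrative, textbook-style formulation under that assumption, not the specific model estimated in this study:

% Illustrative (not study-specific) formulation of basic TAM:
% PEU is assumed to influence PU, and both shape behavioral intention (BI) to use i-AIDA.
\begin{align}
\mathrm{PU} &= \beta_{0} + \beta_{1}\,\mathrm{PEU} + \varepsilon_{1},\\
\mathrm{BI} &= \gamma_{0} + \gamma_{1}\,\mathrm{PU} + \gamma_{2}\,\mathrm{PEU} + \varepsilon_{2},
\end{align}

where BI denotes the behavioral intention to use i-AIDA, and the coefficients and error terms are estimated from learner responses.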
At the OU, we face a somewhat similar challenge: how can we bring the benefits of Generative AI to our 200 K students, many of whom come from disadvantaged backgrounds, have low self-esteem, and have limited digital technology skills? To this effect, we have recently adopted TAM as advocated in [20,39] to help us increase adoption rates. From previous work [41,42], we are aware that the use of AI-based digital technologies in teaching delivery can increase retention rates, which is especially important for our students.
Beyond how students engaged with the various design features of i-AIDA, how easy (or not) they were to use, and how useful they might be for their studies, we were particularly interested in the prompts that students used when interacting with the assistant. Prompts are expressions of thinking and often represent questions [15,43]. From classroom research, we know that analyzing the thoughts and questions of students can, for example, help to diagnose students’ understanding, stimulate further inquiry into the topic, and provoke critical reflections on educational practice [44]. Similarly, the analysis of learner prompts has the potential to provide insights into the thought processes of learners and to diagnose their understanding, interests, and needs. Furthermore, it may inform the development of assistants that specialize in responding to common themes expressed in the prompts. Therefore, our main questions were:
  • After engaging with a beta version of an institutional AI digital assistant (i-AIDA), what are the main perceptions of participants, and do their perceptions change?
  • What design features encourage engagement with i-AIDA, and which design features in particular are useful to support distance learning students in the context of Education 5.0?
  • What ideas did students express in their prompts when engaging with i-AIDA?

3. Materials and Methods

3.1. Setting and Participants

This study was conducted at the largest distance learning university in Europe, the Open University (OU). With around 200 K learners who study mostly at a distance, a core premise of the OU is that anyone can start studying at the OU, irrespective of any prior qualification or experience. An i-AIDA might be particularly helpful to support such diversity in students’ backgrounds and needs. The research reported here sits within a larger initiative in which we have applied Design-Based Research (DBR) principles [4,16,17,18] to explore in depth how an i-AIDA can support our students. According to Wang [18], DBR is an iterative approach incorporating a cycle of observation, design, and implementation. Following Easterday, Lewis and Gerber [16] and Lyons, Lobczowski, Greene, Whitley and McLaughlin [17], we have used DBR in combination with TAM as a conceptual framework to develop, study, and refine i-AIDA over a period of 13 months [4]. In particular, following [17], we set up a number of design assumptions which we then transformed into design principles, drawing insights from the studies we conducted. Our process builds on previous experience in applying DBR to create a number of web-based tools. This study is thus the fifth observation we have conducted and was followed by a period of redesign and implementation, as illustrated in Figure 2.
Therefore, a beta version of i-AIDA was shared under controlled conditions in a test environment with 18 students in January 2025 to explore the PEU and PU of i-AIDA and its specific functionalities. Following the findings from the five studies [4,5,13,38], this beta version of i-AIDA featured several updates, including a revised chunking strategy for chat and an improved retrieval strategy. These enhancements contributed to a more fluent and cohesive chat experience, with better signposting to relevant resources.
The chat allows learners to prompt i-AIDA, i.e., to instruct it or ask it anything of interest in relation to the course. i-AIDA generates responses (prompt completions) based on the learners’ prompts or queries. The responses generated by the LLM are grounded in the ingested course materials, limiting i-AIDA’s answers to a curated list of content that is appropriate, relevant, and safe for learners to use, thus mitigating potential hallucination. Besides the default chat, learners could also select from a range of bespoke pedagogical roles [8,14,15] that influenced how i-AIDA responded to their prompts.
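To illustrate the grounding approach described above, the sketch below shows a minimal retrieval-augmented pipeline in Python. The function names, similarity measure, and prompt wording are our own assumptions for illustration and do not reflect the actual i-AIDA implementation.

# Minimal sketch of retrieval-grounded answering (assumed names, not the i-AIDA codebase).
# Ingested course materials are chunked and embedded; the chunks most similar to the
# learner's prompt are passed to the LLM with an instruction to answer only from them.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Chunk:
    text: str
    source: str  # e.g., "Unit 1.1 Being an Online Learner"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve(query: str, chunks: List[Chunk], embed: Callable, top_k: int = 4) -> List[Chunk]:
    """Rank course chunks by similarity to the learner's prompt and keep the best few."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c.text)), reverse=True)[:top_k]

def answer(query: str, chunks: List[Chunk], embed: Callable, llm: Callable) -> str:
    """Build a grounded prompt from retrieved chunks and ask the LLM to answer only from them."""
    context = "\n\n".join(f"[{c.source}] {c.text}" for c in retrieve(query, chunks, embed))
    grounded_prompt = (
        "Answer the learner's question using ONLY the course extracts below. "
        "If the extracts do not contain the answer, say so and signpost the relevant unit.\n\n"
        f"Course extracts:\n{context}\n\nQuestion: {query}"
    )
    return llm(grounded_prompt)

Here, embed and llm stand in for whichever embedding and language models an institution chooses; constraining the context in this way is what keeps answers within the curated, course-specific material and enables signposting back to the cited sources.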
In order to explore how pedagogical approaches could shape learner interaction with i-AIDA, the beta version included two bespoke role-based assistants in addition to the default chat mode. These roles were co-designed with pedagogical intent to scaffold different learning needs and to enhance the learning experience through differentiated interaction styles. The Socratic assistant mimics the pedagogical style of Socratic questioning, using probing questions in a conversational style to encourage critical thinking and reflection. The personal assistant focuses on providing 24/7, tailored support to learners, with step-by-step explanations for structured support and conceptual clarity. These roles influence how i-AIDA responds to learner prompts, delivering a distinct pedagogical experience to the learners. i-AIDA also supports interactive flashcard generation, quiz creation, and assessment. Additionally, an introductory video was added to explain the key functionalities of i-AIDA (see Figure 3).
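One simple way to picture the role mechanism is as a role-specific system instruction prepended to the grounded prompt, with the flashcard and quiz functions expressed as additional generation instructions over the same curated material. The wording below is purely illustrative and is not i-AIDA’s actual role or flashcard prompts.

# Illustrative role and flashcard instructions (our own wording, not i-AIDA's actual prompts).
from typing import Dict, List

ROLES: Dict[str, str] = {
    "default": "You are a helpful course assistant. Answer clearly and cite the course unit.",
    "socratic": ("Do not give the answer directly. Respond with two or three probing questions "
                 "that guide the learner to reason towards the answer themselves."),
    "personal_assistant": ("Provide patient, step-by-step explanations tailored to the learner's "
                           "stated goals, and suggest a concrete next study action."),
}

FLASHCARD_INSTRUCTION = (
    "From the course extracts provided, generate five flashcards as JSON in the form "
    '[{"front": "...", "back": "..."}], using only information present in the extracts.'
)

def build_messages(role: str, grounded_prompt: str) -> List[dict]:
    """Combine the selected pedagogical role with the retrieval-grounded learner prompt."""
    return [
        {"role": "system", "content": ROLES[role]},
        {"role": "user", "content": grounded_prompt},
    ]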
We specifically sampled 40 participants from a previous study [4], who indicated that they would be willing to be interviewed, based upon their survey responses and a diversity of characteristics (e.g., level of study, discipline, sex, study success). We deliberately selected 20 participants with predominantly negative attitudes toward i-AIDA and relatively low levels of AI usage, alongside 20 participants who held mostly positive attitudes and reported moderate to high AI usage. This approach enabled us to maximize variability in prior expectations of i-AIDA. In total, 27 students expressed interest in participating, and we concluded the recruitment process once 20 participants had been enrolled (response rate: 50%). One participant dropped out at the last minute, and for one student, we had incomplete pre-test data, resulting in a total of 18 students. The majority of participants were female (n = 12, 66%). The average age was 50.05 (SD = 14.65), as distance learners are typically older than students attending university in person. Following the introduction to the study and the beta environment, the think-aloud protocols had an average duration of 44 min and 16 s (SD = 6 min and 23 s, range: 28:08–55:47).
Three core tasks were given to participants, as well as three optional tasks if sufficient time was left. First, participants watched a short introduction video of 90 s illustrating the various functionalities and tools of i-AIDA. These were introduced by an i-AIDA avatar as illustrated in Figure 3. After watching the video, participants were asked about their impressions of the video and the avatars in particular.
Second, after reading section ‘1.1 Being an Online Learner’ using the think-aloud protocol, participants were asked to share their impressions of the i-AIDA chat (see Figure 4). They were then prompted to submit at least two questions or prompts to engage with the chat. Throughout this process, the facilitator encouraged participants to verbalize their thoughts, particularly focusing on how (in)effective and (un)helpful they found the chat in responding to their inputs, and its perceived usefulness and perceived ease of use.
Third, the lived experiences of the updated Quiz functionality and Flashcards were explored as illustrated by the tabs in Figure 4. Both approaches allowed participants to quickly get an overview of the core concepts of a particular learning unit and test their knowledge and understanding.

3.2. Instruments

Online survey: Pre-test data for Studies 2 and 3 were obtained from an online i-AIDA survey. This survey was developed drawing on an instrument by Freeman [9] and our previous studies [4,38]. It included 25 Likert-scale questions (ranging from 1 = Totally disagree to 5 = Totally agree), 15 checkbox items related to GenAI usage, and five open-ended questions. Participants in these studies were initially asked about their general experiences with AI tools, with a particular focus on their use of GenAI tools in educational contexts. As a post-test, we explored participants’ experiences and expectations of i-AIDA, and its perceived usefulness and perceived ease of use in general and relative to other GenAI tools like ChatGPT.
Logbook of engagement: A facilitator maintained a digital logbook documenting how participants engaged with the four core tasks. This consisted of four steps per task: (1) Were participants able to complete the task? (2) What were the key narratives while the participant engaged with the task? (3) Were there any design suggestions to further improve i-AIDA? (4) Were there any bugs or issues when engaging with the task?
Conversation data and prompts: Participants were expected to post at least two prompts and to have a conversation with the i-AIDA. All the prompts created by the participants, the responses generated by i-AIDA, information about the role of the assistant and the session, as well as anonymized user IDs were retrieved from the system for analysis.

3.3. Procedure and Data Analysis

A team of 15 people at the Knowledge Media Institute and the Institute of Educational Technology at the OU designed and evaluated the various versions and iterations of i-AIDA. In total, three facilitators (BR, EC, FT) supported 18 students during the beta-test. Participants’ screenshares were recorded, and audio was automatically transcribed by MS Teams. These transcripts and recordings, along with a summary of the logbook, all the prompts, and the responses by i-AIDA, were made available to the research team and shared with the i-AIDA design team for discussion. Quantitative data from the pre- and post-tests were analyzed using SPSS 29.0. For the qualitative data from interviews and screencasts, we followed Morgan [10], initially conducting an emergent thematic analysis using ChatGPT-4 to generate preliminary themes, following our previous studies [4,5]. These themes were then reviewed for coherence and subsequently coded independently by at least three of the authors (BR, DB, EC, FT, TC, TU).
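As a rough sketch of what such an LLM-assisted first pass might look like (using the OpenAI Python client; the model name, prompt, and batching are our own assumptions rather than the exact pipeline used here, and any generated themes serve only as a starting point for independent human coding):

# Minimal sketch of LLM-assisted preliminary theme generation (assumed prompt and model,
# not the authors' exact pipeline). Output is reviewed and recoded by human coders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def preliminary_themes(transcript_excerpts: list[str], model: str = "gpt-4o") -> str:
    """Ask the model to propose candidate themes across a set of anonymized excerpts."""
    joined = "\n---\n".join(transcript_excerpts)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": ("You support qualitative researchers. Propose candidate themes, each "
                         "with a short label, a one-sentence description, and one illustrative "
                         "quote from the excerpts.")},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content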
The anonymized conversation data of AIDA were downloaded and filtered to include only the participants’ prompts but not the responses of the AI, resulting in 140 prompt messages. An inductive thematic analysis of the prompts was conducted. The coder was an experienced researcher (TU) with over 20 years of experience evaluating the student experience and more than 15 years of experience with natural language processing. The coding process followed the approach of Braun and Clarke [45].
This research received Human Ethics Research Approval (HREC/2024-0660-2), and participants received a £20 Amazon voucher for their participation.

4. Results

A beta version of i-AIDA was developed and tested in January 2025 with 18 students in one-to-one sessions under the supervision of a facilitator. Before testing, 10 participants were positive about potentially using an i-AIDA for their studies, one was neutral, and seven were not positive. All participants successfully completed the four assigned tasks: watching an instructional video, engaging in a dialogue with the chatbot, completing a quiz, and using flashcards. The think-aloud sessions lasted an average of 44 min (SD = 6:23), ranging from 28 to 56 min.

4.1. RQ1 Change in Perceptions After Engagement with i-AIDA

Post-test results indicated a notable increase in positive perceptions, in particular for those learners who were not so positive about i-AIDA initially, as previously reported in [37]. While two participants remained neutral, 16 (89%) agreed that i-AIDA would be beneficial for their studies. When asked to compare i-AIDA with other GenAI tools like ChatGPT (on a scale from 1 = very inferior to 10 = much more useful), the average rating was 7.58 (SD = 2.43). While three participants found i-AIDA inferior to ChatGPT, one rated it as similar, one as slightly more useful, and 14 as more useful. As indicated in Figure 5, in particular students who were initially rather skeptical towards i-AIDA became more positive after engaging with the tool for an hour.

4.2. RQ2 Design Features of i-AIDA That Encourage Engagement

The qualitative feedback from the transcripts of the interviews and screencasts on i-AIDA highlighted both strengths and areas for improvement, as shown in Table 1. Thematic analysis of the 18 screencasts revealed that 195 comments made by participants (56.9%) focused on i-AIDA’s features, with a mix of positive, neutral, and some negative feedback. The second most common theme (14%) related to how i-AIDA could enhance student engagement and motivation, with predominantly positive remarks. This links strongly with the more human-centered approach of Industry 5.0. The third key theme concerned i-AIDA’s technical performance, indicating areas where further refinements were needed.
Participant feedback was categorized into key themes aligned with the sentiment distribution outlined in Table 1. The analysis below combines these themes with illustrative quotations to capture the nuanced student experience during the beta testing of i-AIDA.

4.2.1. AIDA Features and TAM

The introductory video, which served as the first task, was generally perceived as informative and engaging. However, some participants raised concerns about the avatars, noting their unnatural appearance and robotic tone. One participant remarked that “they speak way too fast”, adding that a particular avatar was “too distracting… I was looking at her instead of listening to what she was saying” (ID19). The fast pace of the narration also made it difficult for some to follow the content.
Despite these issues, the video sparked curiosity and a strong interest in testing the tool, particularly the dynamic text feature. As one participant explained, “It was quite encouraging for engagement… I liked how it said it would be thought-provoking.” (ID4). For some, the video reinforced a positive first impression of i-AIDA as a course-specific assistant, clearly distinct from general AI tools like ChatGPT.
Participants generally had a positive experience interacting with the chat function. They described the responses as clear, detailed, well-structured, and supportive. Many valued its structured responses, friendly tone, and course-specific guidance, with one noting, “it really felt like talking to an instructor […] it’s quite useful” (ID6). Features such as personalized study schedules and source referencing were particularly appreciated. However, some criticized the tool for being overly wordy or slow and suggested improvements in design and response speed. As one participant remarked, “I’d rather it just did what ChatGPT did and just bang […] it’s a little bit slow” (ID10). Despite this, the tool’s relevance to students’ study contexts and ability to offer practical suggestions were seen as clear strengths.
The quiz feature was described as intuitive, accessible, and visually clear. One participant noted, “It was simple, it was easy to use, it’s user-friendly […] it was really interesting” (ID4). Gamified elements such as leaderboards and progress tracking were seen as motivating by some, particularly competitive learners. However, others called for feedback on incorrect answers and clearer guidance for improvement. The use of closed questions was questioned, especially for disciplines like arts and social sciences, where open-ended formats may better support critical thinking. As one participant observed, “I don’t know the purpose of the quiz, but it looks too naïve to me […] I am expecting more sophisticated guidance from AIDA” (ID3).
Several participants described using the flashcards as a smooth and engaging experience, praising their simplicity, responsiveness, and ease of use. One participant noted, “It’s simple, it’s easy to use, it’s given you a quick response” (ID4). The interactive “flip” design was seen as appealing, and the option to download flashcards was viewed positively. Some appreciated the clear link between flashcard content and the course, suggesting potential for deeper engagement if expanded. However, not all found the tool useful, as one participant remarked: “These flashcards are very, very basic.” (ID5). Suggestions included making the content more advanced and offering reverse or student-generated formats to enhance flexibility and relevance.

4.2.2. Engagement & Motivation

Participants’ perceptions of engagement and motivation when using i-AIDA were varied but generally positive. Many described the tool as “interesting”, “curious”, and even “fun”, particularly when exploring interactive features such as dynamic text and flashcards. The ability to receive immediate feedback and personalized guidance was highlighted as encouraging continued interaction. Several participants appreciated that the assistant asked follow-up questions, prompting deeper engagement with the content. As one noted, it “felt like talking to an instructor” (ID6), reflecting how the tool’s tone and structure supported learner involvement.
However, some users found the system too slow or overly verbose, with a few describing the interface as “boring” or the pacing as frustrating compared to other AI tools. One participant remarked, “It was quite a long response. […] I don’t need the whole page again” (ID10), highlighting how excessive text could hinder engagement. These issues were seen as potential barriers to sustained interaction. Despite this, the assistant was largely viewed as a motivating and confidence-boosting tool, especially when learners received constructive feedback or successfully clarified confusing concepts.

4.2.3. Technical Performance

Participants shared mixed views about i-AIDA’s technical performance, especially regarding speed and responsiveness. Some found the tool smooth and stable—one participant said it was “doing better than I thought it would” (ID11). However, others found the pace frustrating. One person noted, “At this speed, it would at some point annoy me” (ID12), referring to how the text appeared slowly on the chat. Another added, “I would prefer just the whole text and the whole answer to appear immediately” (ID2), showing a preference for quicker replies. Among all themes, technical performance received the highest proportion of negative comments. While the system worked overall, many felt it could be faster and more responsive, with better control over how the content is displayed.

4.2.4. Content Delivery

i-AIDA’s content delivery was frequently described as clear and well-structured, particularly in how it presented key points and responded to user prompts. One participant noted that the responses were “very clear, very structured”, and that the assistant “followed all my instructions carefully” (ID9). Others appreciated the visual layout and organization of the information, with one remarking, “I find it quite easy […] I can glance at it quickly and see” (ID1), referring to the effective use of hierarchy within the chat interface.
Some participants, however, suggested areas for improvement. These included better signposting, more concise outputs, and clearer guidance on how to explore topics in more depth. One participant noted, “It could confuse you […] if that’s not the answer you were looking for or if the bits of information you’re looking for are not there” (ID14), reflecting how misaligned or incomplete responses could affect clarity. Despite these minor issues, most users found the content delivery effective for navigating course materials and supporting their learning.

4.2.5. User Experience

In terms of TAM perceived ease of use, many participants found i-AIDA’s interface intuitive and easy to navigate. Comments such as “It’s simple, it’s easy to use” (ID4) and “It’s very intuitive” (ID9) reflected a generally positive experience with the tool’s layout and usability. Some users appreciated the familiarity of the design, describing it as similar to other AI tools they had used: “It’s a sort of ChatGPT-esque experience […] anybody who’s used ChatGPT will be very comfortable with it” (ID1). Others highlighted smooth navigation and visual clarity, noting that the interface worked well as a guide to course content.
Nonetheless, a few participants suggested improvements to help users locate and access key functions more easily. One noted, “I didn’t see the white button […] probably need to come in the instructions because I didn’t notice it” (ID2), while another said, “I don’t know that AIDA is here […] it feels like a surprise” (ID1). These issues point to the need for clearer visual cues and more consistent guidance to ensure all users can engage confidently with the system.

4.2.6. AI Perception

Participants generally expressed positive perceptions of i-AIDA, often highlighting its relevance and alignment with the course learning. Several participants found that the assistant’s integration within the course platform enhanced its perceived usefulness, particularly in contrast with external tools. As one participant observed, it felt “slightly more useful […] because it can be integrated into what you’re actually doing” (ID8), while another noted the importance of institutional trust, stating, “I think it would be more useful […] it’s a learning model that is learning from the university, like it’s been seeded by the OU, so I know the sources and I trust that” (ID13). Beyond its contextual relevance, participants appreciated the potential for tailored guidance. One explained, “You can tailor that specific thing you’re missing […] something you can’t do unless you have a private tutor” (ID12), suggesting that the assistant’s responsiveness filled gaps in understanding in ways that felt both targeted and efficient.
However, not all reflections were uncritical. A participant cautioned that, similar to social media, AI can be “highly influential”, raising concerns that learners might accept responses uncritically (ID15). Notably, engaging with i-AIDA also prompted some to revise their previous attitudes toward AI. One participant who had been initially skeptical remarked, “I’ve always stayed away from AI […] but this has changed my mind” (ID4). Taken together, these reflections suggest that i-AIDA was generally perceived as a credible and valuable form of support, especially when positioned as a complement rather than a replacement for human interaction.

4.2.7. Additional Insights: Support, Autonomy, Evaluation, and Learning Outcomes

Although less frequently mentioned than other themes, participants offered insightful reflections on support, autonomy, assessment, and learning outcomes. The theme of Support & Guidance was reflected in participants’ appreciation for i-AIDA’s practical assistance with referencing and study management. One participant found the source feature “very, very helpful”, explaining, “if there was a specific source referenced, I’d like to see that here so I could click on it […] and it can show me what the source is and where it is in the source” (ID1). Another described it as “useful” because “sometimes in the course, you always have to quote where something came from” (ID2), and i-AIDA helped locate that material. Support was also linked to efficiency, with one participant noting, “I thought these programs would help me stop me making or spending so much time writing notes […] just click on and go” (ID17). These features were seen as helping students navigate and manage study tasks more effectively.
The theme of Autonomy & Control emerged through participants’ appreciation for being able to personalize their interaction with i-AIDA. This included control over interface elements and visibility of features. One participant noted that “enable quiz, enable activities gives you tabs”, allowing users to “just show and hide tabs”, which was seen as useful for tailoring the experience (ID16). Additionally, they explained they could “take off the tutorial”, and expected some items, like repeated videos, to “automatically hide” once viewed (ID16). Aesthetic customization was also valued. One participant expressed a preference for the dark theme, describing it as less glaring for night-time study and “less of a corporate” look (ID14). Similarly, another commented that changing the tool’s colours was “very useful”, affirming that “we can turn off and turn on these things” (ID12) to suit individual needs.
In relation to Assessment & Evaluation, participants expressed a desire for more detailed and formative feedback. While the quizzes were seen as useful for reinforcing learning, some wanted clearer insight into their performance. One participant noted, “maybe like if you do a try again, you might want some kind of points on it or you might want it to say these are the questions you got wrong […] a bit more feedback would be” (ID16). Another expected “a more refined analysis of my flaws” if the quiz was AI-supported (ID6). These responses reflect a preference for formative feedback that supports progress, rather than simply indicating correct or incorrect answers.
The theme of Learning Effectiveness emerged in participants’ reflections on how i-AIDA supported their understanding and learning progress. One participant noted that “the quiz is helping to work out what I know and what I don’t know. That makes sense” (ID2), while another felt that doing well reinforced confidence: “it sort of reinforces that you’ve taken things in […] if you do well, you then feel more confident” (ID7).
These findings provided a deeper understanding of how students interact with i-AIDA in real learning scenarios, highlighting both its strengths and areas for improvement. Beyond usability and design considerations, student engagement with i-AIDA also revealed patterns in the types of queries they posed to the system. To further explore how students utilized i-AIDA, a prompt analysis was conducted to examine the nature and focus of learner interactions. The following section presents the results of this analysis, offering insights into students’ primary concerns, learning strategies, and the broader role of AI assistance in their academic experience.

4.3. RQ3: Prompt Analysis

After iteratively coding all prompts, four major themes emerged: learning support (24 prompts), course content (10 prompts), course information (11 prompts), and off-topic (19 prompts). First, the learning support theme summarized participants asking i-AIDA for tips and tricks to learn effectively and structure their learning. They wondered about effective time management strategies, ranging from general questions such as ‘What suggestions do you have for managing my time effectively?’ to queries asking the AI to produce study plans given specific constraints such as working hours. Other prompts within this theme were about finding further information, such as information about note-taking, finding a study buddy, effective learning strategies, or asking the AI to summarize content.
The course content theme concerned queries about the written course content. Students used i-AIDA to get more information about course topics. For example, they asked, ‘what is trello?’ or ‘what is the difference between synchronous and asynchronous education’. They also asked for help identifying key information, such as ‘what would be the most important topic’, or ‘Please give me the key research names in this fields’.
The third theme, course information, captured general questions about the course. For example, students asked for more information about the assessment: ‘is the quiz relevant to my final grade?’ or ‘Will I be tested on the materials of week one’. They asked about the workload of the course, for example: ‘How much time would i need in the first week’, and ‘Does the 4 hrs per week study time take account of student’s differential learning speeds & capacity?’. Or they asked about the course’s study goals and website navigation.
The fourth and final theme referred to off-topic prompts, which were about all prompts unrelated to the course, such as questions about ‘construction grammar’, ‘weather’, or ‘plastics in the environment’.

5. Discussion

The transition from Industry 4.0 to 5.0 brings a paradigm shift from automation-centric models to human-centric innovation and co-creation [6,21,25,35,36]. While in the wider sector and in this special issue several industry examples are provided [21,22,23], in this article we specifically focused on the development of an institutional AI digital assistant (i-AIDA) using principles of Design-Based Research [16,17,18] and Technology Acceptance Model [20,39,40] at a large distance learning provider, the Open University [4,5,38]. In line with the human-centric innovation and co-creation approach of Industry 5.0, this i-AIDA was (co-)created and (co-)developed together with students and staff over a period of about a year before this particular beta-test was conducted. The findings from the beta-testing of i-AIDA with 18 distance learning students demonstrated the potential and complexity of integrating i-AIDA into higher education settings within the broader context of Industry 5.0. In this sense, i-AIDA embodies this ethos, aiming to offer personalized, context-aware learning support for our distance learners while still maintaining institutional oversight regarding data privacy, academic integrity, and pedagogical alignment.
In terms of RQ1, perhaps one of the most compelling outcomes of this study was the marked shift in student attitudes post-intervention. Those learners who were initially skeptical of GenAI, and of i-AIDA in particular, demonstrated increased acceptance and even enthusiasm after engaging with i-AIDA. This shift indicates that hands-on, context-relevant exposure to i-AIDA may play a crucial role in overcoming some of the initial resistance towards GenAI, especially among distance learning students who value institutional trust and course alignment over the generalized capabilities of p-AIDA tools like ChatGPT. Nonetheless, two out of 18 participants remained “neutral” about i-AIDA and whether (or not) it would help with their studies. Furthermore, whether the positive user experience persisted for the other 16 students after the beta-test obviously needs further exploration.
In terms of RQ2 on the design features supporting distance learning students in the context of Education 5.0, our thematic analysis of the screencasts, logbooks, and learner feedback revealed that i-AIDA’s core features of chat, quiz, and flashcards were particularly valued. Furthermore, participants appreciated the course-specific integration of i-AIDA in the students’ learning environment and how it provided tailored responses to their prompts. They also appreciated the explicit transparency in source referencing, indicating where more information about a particular answer could be found within the OU learning environment. These are core functionalities that are currently not available in p-AIDA systems like ChatGPT, and given the sensitive nature of learning data, it seems unlikely that these systems will gain such affordances [11]. These features address many of the concerns raised about public generative AI tools—namely, opacity, lack of context, and risks around misinformation or generic output. i-AIDA’s contextual awareness and grounding in institutional data were seen as its primary strength, supporting previous literature that calls for more ethically grounded, domain-specific AI systems in an educational context [3,11].
At the same time, several limitations and tensions emerged with i-AIDA. While many praised i-AIDA’s design and structured responses, others criticized the tool for being overly verbose or slow, reflecting the need for more streamlined UX and responsive design. Since January 2025, the research and development team has put substantial effort into improving the speed and verbosity of i-AIDA, and preliminary findings from our own testing and the current experimental study (see below) seem to suggest that the speed of the functionalities is now similar to that of p-AIDA providers. Furthermore, the feedback highlighted differences in learner expectations across disciplines—while the quiz and flashcard features were well received by some, others felt these tools lacked the depth and adaptability required for more interpretive fields like the arts and humanities. Obviously, this explorative beta-test is just the start of a wider exploration of the conditions under which, and the types of students for whom, particular functionalities are most useful for affect, behavior, and cognition.
In terms of RQ3, the prompt analysis also provided a unique lens into how learners conceptualized and interacted with i-AIDA. Most prompts were on-topic, either about the course or about learning, but it also became clear that off-topic use of generative AI is likely. From other research, for example, on forum data, we know that not all conversations will be about the course and that students ask questions not related to the course or learning. Software that can detect such off-topic prompts may allow an AIDA to redirect the learners’ efforts back to the course or provide an appropriate response. Most student prompts fell into four distinct categories: learning support, course content, course information, and off-topic queries. This distribution suggests that learners expect AIDA tools not only to provide immediate academic assistance but also to function as personalized study companions capable of offering time management advice, emotional reassurance, and strategic planning.
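As a simple illustration of how the off-topic detection mentioned above could work, the sketch below flags prompts whose similarity to the course-material embeddings falls below a threshold so that the assistant can redirect the learner. The threshold, function names, and redirect wording are our own assumptions, not a feature of the current i-AIDA.

# Illustrative off-topic check (our own sketch, not part of i-AIDA): a prompt whose best
# similarity to any ingested course chunk is low is treated as off-topic and redirected.
import numpy as np

def is_off_topic(prompt_vec: np.ndarray, course_vecs: np.ndarray, threshold: float = 0.25) -> bool:
    """Return True when the prompt embedding is dissimilar to all course-material embeddings."""
    sims = course_vecs @ prompt_vec / (
        np.linalg.norm(course_vecs, axis=1) * np.linalg.norm(prompt_vec) + 1e-9
    )
    return float(sims.max()) < threshold

REDIRECT_MESSAGE = ("That looks outside the scope of this course. Would you like help with "
                    "this week's study material instead?")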
The learning support theme showed that students are likely to seek guidance and support from i-AIDA regarding their learning. Assisting students in developing effective learning strategies is an important responsibility for educators, and several universities are equipped with customized student support systems. i-AIDA could offer supporting information or direct students to appropriate resources.
The course content theme captured students’ queries about the course’s actual content. For example, learners wanted more information about a concept, wanted to learn about the connections between concepts, and wanted summaries of key information. This area plays to the strengths of generative AI in providing further information and summaries, while also requiring mechanisms that sense-check the generated responses.
The course information theme showed that students had general questions about the course. Often, these could be answered by reading the course guidelines. However, with the availability of i-AIDA, students might ask i-AIDA instead of searching for this type of information. Furthermore, the students expected i-AIDA to also answer specifically tailored questions about the course, such as the course workload for particular student groups. While workload information is often indicated on course websites, it may not be tailored to specific student groups. The role of institutional AIDA tools may need to expand beyond course-related FAQs to more holistic educational support systems, with built-in mechanisms for recognizing and adapting to student learning strategies, challenges, and progress. The thematic analysis of the learners’ prompts provided a unique lens into the thoughts and questions of students, as well as their learning needs and preferences, providing relevant information to inform the development of bespoke i-AIDA roles that cater to these requirements.
Finally, this study highlights the ethical implications of i-AIDA [2,3,5,26]. Students expressed trust in i-AIDA because it was seen as “seeded” by the Open University. This trust, however, comes with responsibility—institutions must ensure transparency, data protection, and accountability in the deployment and evolution of such tools [7,11,21]. The design of i-AIDA systems, therefore, should be inclusive, responsive, and continually shaped by learner feedback—a living system co-evolving with its users.

5.1. Limitations and Future Research

There are obvious limitations to this study. First and foremost, the sample size for the beta-test was relatively small (n = 18), and we specifically selected both mostly negative and mostly positive participants based upon the pre-test, which may limit the generalizability of the results to the broader population of distance learners at the OU. Second, while the study employed multiple data sources and analysis methods, the short-term nature of the engagement with i-AIDA (approximately 45 min) means that long-term effects on learning outcomes, sustained usage, and changes in study behavior remain unknown. Third, we did not explicitly use common TAM/UTAUT psychometric instruments and primarily collected rich qualitative data on actual engagement with the technology and participants’ lived experiences of perceived ease of use and perceived usefulness; future studies should also measure these constructs objectively. Fourth, the findings were based on a single institutional context within the UK, which may not reflect the challenges and affordances of i-AIDA implementations in other cultural, linguistic, or infrastructural settings; thus, cross-institutional and international validation studies are warranted.
At the moment of writing (May/June 2025), we have just made two online courses available for >100 OU students to use i-AIDA in a randomized controlled experiment, whereby one group of students receives an updated version of i-AIDA based upon the findings from this study, and one control group has access to one of the two courses without i-AIDA. As students are able to freely interact with i-AIDA without any direct facilitation or moderation by humans, the research team is closely monitoring its use in order to explore learners’ attitudes, behaviors, and cognitions when engaging with i-AIDA. In particular, we are interested in whether the prompts, content moderation, and functionality (e.g., flashcards, chat, various roles of i-AIDA) meet the students’ expectations in terms of perceived ease of use and perceived usefulness of i-AIDA.

5.2. Conclusions

This study provided a potential example of Industry 5.0 and Education 5.0 by examining how 18 learners interacted with a beta version of an institutionally developed AI Digital Assistant (i-AIDA). Findings from the beta-test suggest that students found i-AIDA’s role-based design, structured prompts, and multimodal functions (chat, quizzes, flashcards) beneficial for enhancing learning support and motivation. Learners appreciated the alignment of i-AIDA with course content and institutional values, and most expressed increased interest in using GenAI tools when provided in a trusted, pedagogically sound environment such as i-AIDA. The study also revealed the diversity of learner prompts and expectations, highlighting the importance of flexibility, scaffolding, and user agency in i-AIDA design. By adopting a mixed-methods approach and grounding the work in TAM and DBR principles, this research provides a nuanced understanding of the educational value and limitations of developing institutional AIDAs. Future work should explore long-term engagement and scale-up effects, as well as cross-institutional implementations, to further advance responsible and effective GenAI integration in higher education.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15126640/s1.

Author Contributions

Conceptualization, B.R., J.D., T.C., F.T. and T.U.; methodology, B.R., F.T. and T.U.; software, B.R.; validation, B.R., F.T. and E.C.; formal analysis, B.R., F.T. and T.U.; investigation, B.R.; resources, B.R.; data curation, B.R., F.T. and T.U.; writing—original draft preparation, all authors; writing—review and editing, all authors; visualization, B.R.; supervision, B.R.; project administration, B.R.; funding acquisition, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

i-AIDA has been financially supported by a number of OU initiatives over a 5-year period. A number of ‘Digital’ and ‘Tutor’ Assistants were developed within ‘Test and Learn’, an activity stream specifically designed to support the rapid development and evaluation of new technologies. Since August 2024, i-AIDA has been supported by the Innovation Foundry, an OU initiative to support strategically important innovation for OU business through the office of the Pro Vice Chancellor for Research and Innovation.

Institutional Review Board Statement

This research received Human Ethics Research Approval (HREC/2024-0660-2).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original survey data related to RQ1 are openly available at https://ordo.open.ac.uk/ (accessed on 5 June 2025). Given the sensitive nature of the comments related to RQ2 and RQ3, the qualitative data are only available upon request.

Acknowledgments

We are grateful for the support from all the students who participated in this research. We would like to thank as well our colleagues at KMI and IET who are currently working on implementing an i-AIDA.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
i-AIDA: Institutional AI Digital Assistant
p-AIDA: Public AI Digital Assistant such as ChatGPT
OU: The Open University UK
GenAI: Generative Artificial Intelligence

References

  1. Ravšelj, D.; Keržič, D.; Tomaževič, N.; Umek, L.; Brezovar, N.; Iahad, N.A.; Abdulla, A.A.; Akopyan, A.; Aldana Segura, M.W.; AlHumaid, J.; et al. Higher education students’ perceptions of ChatGPT: A global study of early reactions. PLoS ONE 2025, 20, e0315011. [Google Scholar] [CrossRef] [PubMed]
  2. Bond, M.; Khosravi, H.; De Laat, M.; Bergdahl, N.; Negrea, V.; Oxley, E.; Pham, P.; Chong, S.W.; Siemens, G. A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 2024, 21, 4. [Google Scholar] [CrossRef]
  3. Giannakos, M.; Azevedo, R.; Brusilovsky, P.; Cukurova, M.; Dimitriadis, Y.; Hernandez-Leo, D.; Järvelä, S.; Mavrikis, M.; Rienties, B. The promise and challenges of generative AI in education. Behav. Inf. Technol. 2024, 1–27. [Google Scholar] [CrossRef]
  4. Rienties, B.; Tessarolo, F.; Coughlan, E.; Coughlan, T.; Domingue, J. Students’ Perceptions of AI Digital Assistants (AIDAs): Should Institutions Invest in Their Own AIDAs? Appl. Sci. 2025, 15, 4279. [Google Scholar] [CrossRef]
  5. Rienties, B.; Domingue, J.; Duttaroy, S.; Herodotou, C.; Tessarolo, F.; Whitelock, D. What distance learning students want from an AI Digital Assistant. Distance Educ. 2024, 46, 173–189. [Google Scholar] [CrossRef]
  6. Ciolacu, M.I.; Marghescu, C.; Mihailescu, B.; Svasta, P. Does Industry 5.0 Need an Engineering Education 5.0? Exploring Potentials and Challenges in the Age of Generative AI. In Proceedings of the 2024 IEEE Global Engineering Education Conference (EDUCON), Kos Island, Greece, 8–11 May 2024; pp. 1–10. [Google Scholar]
  7. Directorate-General for Research and Innovation; Breque, M.; De Nul, L.; Petridis, A. Industry 5.0—Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Rue de Reims, Luxembourg, 2021. [Google Scholar]
  8. Bektik, D.; Ullmann, T.; Edwards, C.; Herodotou, C.; Whitelock, D. AI-Powered Curricula: Unpacking the Potential and Progress of Generative Technologies in Education. In Proceedings of the EDEN 2024 Annual Conference, Graz, Austria, 16–18 June 2024. [Google Scholar]
  9. Freeman, J. Provide or punish? Students’ Views on Generative AI in Higher Education; Higher Education Policy Institute: London, UK, 2024. [Google Scholar]
  10. Morgan, D.L. Exploring the Use of Artificial Intelligence for Qualitative Data Analysis: The Case of ChatGPT. Int. J. Qual. Methods 2023, 22, 16094069231211248. [Google Scholar] [CrossRef]
  11. Weidlich, J.; Gasevic, D.; Drachsler, H.; Kirschner, P. ChatGPT in education: An effect in search of a cause. PsyArXiv 2025. [Google Scholar] [CrossRef]
  12. Freeman, J. Student Generative AI Survey 2025; Higher Education Policy Institute: London, UK, 2025. [Google Scholar]
  13. Coughlan, T.; Rienties, B.; Edwards, C.; Whitelock, D.; Coughlan, E.; Tessarolo, F. Holding a lead, or the tail wagging the dog? Exploring educator influence over the behaviour of AI Digital Assistants. In Proceedings of the EDEN, Bologna, Italy, 15–17 June 2025. [Google Scholar]
  14. Sharples, M. Towards social generative AI for education: Theory, practices and ethics. Learn. Res. Pract. 2023, 9, 159–167. [Google Scholar] [CrossRef]
  15. Yin, S.X.; Liu, Z.; Goh, D.H.-L.; Quek, C.L.; Chen, N.F. Scaling Up Collaborative Dialogue Analysis: An AI-driven Approach to Understanding Dialogue Patterns in Computational Thinking Education. In Proceedings of the 15th International Learning Analytics and Knowledge Conference, Dublin, Ireland, 3–7 March 2025; pp. 47–57. [Google Scholar]
  16. Easterday, M.W.; Lewis, D.R.; Gerber, E.M. Design-based research process: Problems, phases, and applications. In Proceedings of the International Society of the Learning Sciences, Boulder, CO, USA, 23–27 June 2014. [Google Scholar]
  17. Lyons, K.M.; Lobczowski, N.G.; Greene, J.A.; Whitley, J.; McLaughlin, J.E. Using a design-based research approach to develop and study a web-based tool to support collaborative learning. Comput. Educ. 2021, 161, 104064. [Google Scholar] [CrossRef]
  18. Wang, Y.-H. Design-based research on integrating learning technology tools into higher education classes to achieve active learning. Comput. Educ. 2020, 156, 103935. [Google Scholar] [CrossRef]
  19. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  20. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
  21. Tusquellas, N.; Santiago, R.; Palau, R. Professional Development Analytics: A Smart Model for Industry 5.0. Appl. Sci. 2025, 15, 2057. [Google Scholar] [CrossRef]
  22. Dupláková, D.; Sloboda, P. The Maintenance Factor as a Necessary Parameter for Sustainable Artificial Lighting in Engineering Production—A Software Approach. Appl. Sci. 2024, 14, 8158. [Google Scholar] [CrossRef]
  23. Alves, J.; Lima, T.M.; Gaspar, P.D. Sociodemographic Data and Work-Related Musculoskeletal Symptoms in the Metal Polishing Industry: A Case Study in Central Portugal. Appl. Sci. 2024, 14, 7265. [Google Scholar] [CrossRef]
  24. Folgado, F.J.; Calderón, D.; González, I.; Calderón, A.J. Review of Industry 4.0 from the Perspective of Automation and Supervision Systems: Definitions, Architectures and Recent Trends. Electronics 2024, 13, 782. [Google Scholar] [CrossRef]
  25. Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0—Inception, conception and perception. J. Manuf. Syst. 2021, 61, 530–535. [Google Scholar] [CrossRef]
  26. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  27. Rienties, B.; Ferguson, R.; Gonda, D.; Hajdin, G.; Herodotou, C.; Iniesto, F.; Llorens-Garcia, A.; Muccini, H.; Sargent, J.; Virkus, S.; et al. Education 4.0 in higher education and Computer Science: A systematic review. Comput. Appl. Eng. Educ. 2023, 31, 1339–1357. [Google Scholar] [CrossRef]
  28. Hussin, A.A. Education 4.0 made simple: Ideas for teaching. Int. J. Educ. Lit. Stud. 2018, 6, 92–98. [Google Scholar]
  29. Jisc. Education 4.0—Transforming the Future of Education (Through Advanced Technology). Available online: https://www.youtube.com/watch?v=aVWHp8FsV1w (accessed on 20 January 2021).
  30. Salmon, G. May the Fourth Be with you: Creating Education 4.0. J. Learn. Dev. 2019, 6, 95–115. [Google Scholar] [CrossRef]
  31. World Economic Forum. Schools of the Future: Defining New Models of Education for the Fourth Industrial Revolution; World Economic Forum: Geneva, Switzerland, 2020; pp. 1–34. [Google Scholar]
  32. Kocdar, S.; Bozkurt, A.; Goru Dogan, T. Engineering through distance education in the time of the fourth industrial revolution: Reflections from three decades of peer reviewed studies. Comput. Appl. Eng. Educ. 2021, 29, 931–949. [Google Scholar] [CrossRef]
  33. Harkins, A.M. Leapfrog principles and practices: Core components of education 3.0 and 4.0. Futures Res. Q. 2008, 24, 19–31. [Google Scholar]
  34. Fisk, P. Education 4.0 … the Future of Learning Will Be Dramatically Different, in School and Throughout Life. Available online: https://www.thegeniusworks.com/2017/01/future-education-young-everyone-taught-together/ (accessed on 1 May 2025).
  35. Ahmad, S.; Umirzakova, S.; Mujtaba, G.; Amin, M.S.; Whangbo, T. Education 5.0: Requirements, enabling technologies, and future directions. arXiv 2023, arXiv:2307.15846. [Google Scholar]
  36. Sedrakyan, G.; Borsci, S.; van den Berg, S.M.; van Hillegersberg, J.; Veldkamp, B.P. Design Implications for Next Generation Chatbots with Education 5.0; Springer: Singapore, 2024; pp. 1–12. [Google Scholar]
  37. Rienties, B.; Bektik, D.; Coughlan, E.; Coughlan, T.; Domingue, J.; Edwards, C.; Ekuban, A.; Herodotou, C.; Kwarteng, J.; Sanders, C.; et al. A five-study Design-Based Research Approach to co-design an Institutional AI Digital Assistant. Submitted.
  38. Rienties, B.; Tessarolo, F.; Coughlan, T.; Herodotou, C.; Domingue, J.; Whitelock, D. A Design-Based Research Approach to what distance learners expect and value from an Institutional AI Digital Assistant. Eur. J. Open Distance E-Learn. 2025, in press. [Google Scholar]
  39. Blut, M.; Chong, A.; Tsiga, Z.; Venkatesh, V. Meta-analysis of the unified theory of acceptance and use of technology (UTAUT): Challenging its validity and charting A research agenda in the red ocean. J. Assoc. Inf. Syst. Forthcom. 2022, 23, 13–95. [Google Scholar] [CrossRef]
  40. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  41. Herodotou, C.; Maguire, C.; McDowell, N.; Hlosta, M.; Boroowa, A. The engagement of university teachers with predictive learning analytics. Comput. Educ. 2021, 173, 104285. [Google Scholar] [CrossRef]
  42. Herodotou, C.; Carr, J.; Shrestha, S.; Comfort, C.; Bayer, V.; Maguire, C.; Lee, J.; Mulholland, P.; Fernandez, M. Prescriptive analytics motivating distance learning students to take remedial action: A case study of a student-facing dashboard. In Proceedings of the 15th International Learning Analytics and Knowledge Conference, Dublin, Ireland, 3–7 March 2025; pp. 306–316. [Google Scholar]
  43. Jin, Y.; Yang, K.; Yan, L.; Echeverria, V.; Zhao, L.; Alfredo, R.; Milesi, M.; Fan, J.X.; Li, X.; Gasevic, D.; et al. Chatting with a Learning Analytics Dashboard: The Role of Generative AI Literacy on Learner Interaction with Conventional and Scaffolding Chatbots. In Proceedings of the 15th International Learning Analytics and Knowledge Conference, Dublin, Ireland, 3–7 March 2025; pp. 579–590. [Google Scholar]
  44. Chin, C.; Osborne, J. Students’ questions: A potential resource for teaching and learning science. Stud. Sci. Educ. 2008, 44, 1–39. [Google Scholar] [CrossRef]
  45. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
Figure 1. Use of Education 4.0 elements in 66 innovative pedagogical approaches in computer science (0 = not included, 1 = included). Source: [27].
Figure 2. Overview of studies of how i-AIDA was developed over time, and the beta study in January 2025.
Figure 3. Introductory video of i-AIDA featuring an avatar explaining the platform’s various functionalities.
Figure 4. i-AIDA online chat version 1.1.
Figure 5. Student perceptions about i-AIDA before and after engagement (n positive = 10, n not so positive = 7).
Table 1. Distribution of Participant Feedback on i-AIDA: Themes and Sentiment Analysis.
| Themes | Summary | Positive | Neutral | Negative | Total by Comment |
|---|---|---|---|---|---|
| AIDA Features | Refers to the core functions and perceived ease of use of the assistant, including video, chat, quizzes, and flashcards, and how these are designed to support learning (i.e., perceived usefulness). | 93 | 86 | 16 | 195 |
| Engagement & Motivation | Encompasses how users interact with the tool and the extent to which it stimulates interest, participation, and sustained learning behavior. | 26 | 19 | 3 | 48 |
| Technical Performance | Concerns the responsiveness, stability, and speed of the system, as well as its ability to handle user input effectively. | 4 | 8 | 14 | 26 |
| Content Delivery | Focuses on how information is presented, including clarity, structure, and organization of the assistant’s responses. | 16 | 4 | 3 | 23 |
| User Experience | Involves the overall (perceived) ease of use, interface design, and accessibility of the system from a learner perspective. | 11 | 9 | 1 | 21 |
| AI Perception | Relates to participants’ attitudes towards AI in education, particularly the perceived reliability, role, and influence of the assistant. | 8 | 3 | 2 | 13 |
| Support & Guidance | Refers to the tool’s capacity to assist with learning tasks such as referencing, note-taking, and navigating course content. | 5 | 1 | 0 | 6 |
| Autonomy & Control | Involves users’ ability to customize their experience, including interface preferences and control over content visibility. | 1 | 4 | 0 | 5 |
| Assessment & Evaluation | Covers the role of the assistant in supporting formative assessment through quizzes, feedback, and performance insights. | 3 | 1 | 0 | 4 |
| Learning Effectiveness | Addresses the extent to which the assistant supports comprehension, knowledge retention, and individual learning goals. | 2 | 0 | 0 | 2 |
| Total by comment | | 169 | 135 | 39 | 343 |
n = 18.
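For readers who wish to reproduce this kind of theme-by-sentiment summary from their own coded comment data, the sketch below shows one possible way to tally the counts and marginal totals in the layout of Table 1. It is a minimal illustration only; the column names (`theme`, `sentiment`) and the sample rows are hypothetical and do not reflect the actual coding sheet used in this study.

```python
import pandas as pd

# Each row represents one coded participant comment: the theme it was assigned
# to and its sentiment label. These rows are illustrative placeholders only.
coded_comments = pd.DataFrame(
    {
        "theme": [
            "AIDA Features",
            "AIDA Features",
            "Technical Performance",
            "User Experience",
        ],
        "sentiment": ["Positive", "Neutral", "Negative", "Positive"],
    }
)

# Cross-tabulate themes against sentiment and append row/column totals,
# mirroring the "Total by comment" margins reported in Table 1.
summary = pd.crosstab(
    coded_comments["theme"],
    coded_comments["sentiment"],
    margins=True,
    margins_name="Total by comment",
)
print(summary)
```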
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
