Article
Peer-Review Record

Developing an Institutional AI Digital Assistant in an Age of Industry 5.0

Appl. Sci. 2025, 15(12), 6640; https://doi.org/10.3390/app15126640
by Bart Rienties 1,*, Thomas Ullmann 1, Felipe Tessarolo 1, Joseph Kwarteng 2, John Domingue 2, Tim Coughlan 1, Emily Coughlan 1 and Duygu Bektik 1
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Submission received: 11 May 2025 / Revised: 1 June 2025 / Accepted: 6 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Advanced Technologies for Industry 4.0 and Industry 5.0)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The Introduction section does not explain why the problem analysis was undertaken. The research problem should be clearly formulated, and the reasons for addressing it should be situated in the context of the literature on the subject. Which theory does the article expand upon? The purpose of the article should be stated. An outline of the organization of the paper would be welcome. A "literature review" section is missing.

The methodological approach is presented in an unprofessional manner. There is no description of the research path and no detailed justification of why the chosen research method was selected. It would be advisable to show a road map of the research approach. The "Procedure and data analysis" part is too condensed: it neither describes the procedure nor indicates in detail (only in general terms) how the data are or will be analysed. Such an approach is insufficient. I have the impression that the authors confused the selection of the sample with the research approach and the description of the research procedure; what was presented is unclear. The conclusion section is missing: the authors do not indicate what they contribute to theory or how their study changes the current state of knowledge. There is also no description of the limitations of the research and no indication of how the research should be continued in the future.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper tackles a very current and important topic and I am glad to read it.

It presents an evaluation of an AI learning tool from a technological perspective – namely, the users' perceptions of aspects of the tool – which leads me to some issues.

It is not clear which conceptual framework was adopted to analyse the collected data. Technology acceptance models (e.g., Davis's TAM) and information systems success models (e.g., DeLone and McLean's ISSM) could have been useful because they use concepts similar to those discussed in the text, such as ease of use and information quality. Of course, models from other knowledge areas could also be used.

The text says that Industry 5.0 emphasizes human–machine collaboration (line 34) and human-centric innovation and co-creation (line 460). However, learning theories always place the student at the centre of the education process – learning is an internal process of the student and the result of a social process. Would this mean that the concepts of Industry 4.0 were not suitable for education?

The fact that the article does not use education theories to analyse the AI tool seems to be a significant flaw. For example, since the participants' average age is 50, the importance that they give to autonomy (line 401) could be analysed and explained from the perspective of andragogy.

The data analysis provides some quantitative data, such as those shown in Figure 4 on page 8, but no statistical inferences are made. Thus, as a quantitative study, the analysis is weak. The qualitative analyses are quite descriptive, and the text does not attempt to relate the elements found in the data collection, nor does it attempt to verify how and why the respondents have the perceptions presented. I believe that this is a reflection of the lack of a conceptual framework. This leads to superficial positions like “certain things are considered positive because they are positive, and those that are not positive were not considered positive”.

I suggest that the authors invest in a conceptual framework to improve their paper.


I think there is a typo in the first word of line 284 – “Thet”


Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The manuscript explores the development and beta-testing of an institutional AI digital assistant (i-AIDA) at The Open University (OU), a large distance learning institution in the UK. Situated within the context of Industry 5.0 and Education 5.0, which emphasize human-centric technology integration, the study investigates how i-AIDA can enhance learning experiences while addressing concerns related to academic integrity, data privacy, and pedagogical alignment. However, there are limitations to address in revision, as follows.

- The beta-test involved only 18 students, which limits the generalizability of findings. 
- The study notes that the majority of participants were female (66%) and older (average age 50.05), which may not fully represent the diversity of the OU’s 200,000 learners or other higher education contexts.
- The study focuses on a single beta-test session (average duration ~44 minutes), which provides a snapshot of user perceptions but does not explore sustained engagement or long-term impacts on learning outcomes. 
- The findings suggest that i-AIDA's quiz and flashcard features may be less effective for interpretive disciplines like the arts and humanities, where deeper, more nuanced responses are needed. 
- Participants reported concerns about i-AIDA’s speed and verbosity, which received the highest proportion of negative feedback (Table 1). 
- The use of ChatGPT-4 for preliminary thematic analysis, while innovative, raises questions about potential biases or inaccuracies in AI-generated themes. 

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The article has been revised appropriately. I have no comments.

Reviewer 2 Report

Comments and Suggestions for Authors

Thanks for reviewing the paper.
