Peer-Review Record

Digital Human Technology in E-Learning: Custom Content Solutions

Appl. Sci. 2025, 15(7), 3807; https://doi.org/10.3390/app15073807
by Sinan Chen 1,2,*, Liuyi Yang 3,*, Yue Zhang 2, Miao Zhang 4, Yangmei Xie 5, Zhiyi Zhu 3 and Jialong Li 6
Submission received: 18 February 2025 / Revised: 22 March 2025 / Accepted: 27 March 2025 / Published: 31 March 2025
(This article belongs to the Special Issue Applications of Digital Technology and AI in Educational Settings)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript is interesting, as it describes the use of AI to streamline the creation of instructional videos, an application of technology that may considerably reduce teachers' workload and thus facilitate the personalization of teaching.

The procedure is thoroughly described, ensuring traceability and transparency.

However, more information is needed about:

  • The metrics used to evaluate each variable. Please describe them in more detail, including the formulas or the criteria, or provide a relevant reference for each.
  • A more comprehensive analysis of how the proposed approach compares to other existing commercial alternatives.

Along these lines, the three approaches must be better characterized or, alternatively, their number reduced, since in the examples provided the differences among the three are minimal.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Good paper and an interesting, useful tool.

In the overall flowchart of the proposed method (Figure 1), I miss step 4, the evaluation loop. What input is given, and at which step of the proposed method is this possible?

The placement of Figure 2 should be closer to the associated text; it is currently too close to Figure 3, with no text between the two figures, which is somewhat confusing.

In the "Key points summary" ("with a general-to-specific structure"), why is "Contributed to optics & calculus" more general than "Influential in physics & mathematics"? I think optics is a part of physics.

The text in Figure 4 does not correspond to any of the three variants given in Figure 2. Why is this the case?

Comments on the Quality of English Language

To improve the quality of the presentation, please read the text carefully and correct typos such as "framwork". Avoid beginning consecutive sentences with the same word, e.g., "however".

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The article explores the development of a web service that leverages artificial intelligence and digital human technology to facilitate the creation of e-Learning videos.

The authors begin by discussing digital transformation in education, emphasizing how digital technologies enhance instructional efficiency and expand access to educational resources. They note the growing reliance on video-based instruction (e.g., e-Learning videos), which, despite its advantages, demands substantial resources to produce. This challenge motivates their development of an AI-powered web service designed to streamline e-Learning video creation. The paper provides a detailed overview of the tool’s workflow, testing processes, and evaluation criteria, including assessments of audio and lip sync quality. Additionally, the authors report findings from a user survey evaluating content consistency, expression style, audiovisual quality, engagement, and overall experience.

While I appreciate the authors' thorough description of the tool's development and refinement, I believe the paper requires significant revisions before publication. In particular, a deeper ethical analysis and a more rigorous evaluation of user experience and learning outcomes would strengthen the study. Below are specific areas for improvement:

  • Theoretical Framework: The paper would benefit from grounding its evaluation in an established learning theory, such as Mayer’s theory of multimedia learning. How do measures of audio and lip sync quality contribute to the overall e-Learning experience? A clearer connection between these technical features and their pedagogical impact would add depth to the study.
  • Video Length Limitation: Figure 1 suggests that the tool can only generate videos up to 30 seconds long. Is this a strict limitation? If so, what are the implications for learners? Longer instructional videos are often necessary for complex topics—how does this constraint affect usability?
  • Use of Facial Photographs and Voice Data: Line 168 mentions that the tool generates facial animations from a reference photograph. Whose photographs are used, and have these individuals provided consent? Similarly, whose voices are used for the audio? Addressing these ethical concerns is crucial.
  • Summary Styles: The tool offers three summary styles: formal, casual, and key points. Are these the only available options? If so, why? Additionally, Figure 2 does not clearly illustrate meaningful differences between these styles, raising questions about their practical value. Clarifying these distinctions would help assess the feature’s usefulness.
  • Case Study Scope: The study primarily demonstrates the tool using an example focused on Sir Isaac Newton. Given the broad applications of e-Learning, additional examples from diverse disciplines would better showcase the tool’s adaptability and effectiveness.
  • User Survey Methodology: The user survey section lacks sufficient methodological detail. What specific questions were asked? How was the questionnaire designed? Who participated, and how many responses were analyzed? Were qualitative responses included, or only quantitative data? Additionally, the study does not assess whether the tool enhances learning outcomes, which is a critical measure of its effectiveness. Lastly, the authors state that the user studies were exempt from ethical review—what was the rationale for this exemption?

Addressing these points would significantly enhance the paper’s contribution to the field and provide a more comprehensive evaluation of the tool’s impact.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Thank you to the authors for their thoughtful responses to my previous comments. I believe they have adequately addressed my suggestions. 
