Article
Peer-Review Record

Student-Created Screencasts: A Constructivist Response to the Challenges of Generative AI in Education

Educ. Sci. 2025, 15(12), 1701; https://doi.org/10.3390/educsci15121701
by Adam Wong 1, Ken Tsang 2, Shuyang Lin 2 and Lai Lam Chan 1,*
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 4 November 2025 / Revised: 5 December 2025 / Accepted: 14 December 2025 / Published: 17 December 2025
(This article belongs to the Section Higher Education)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors


  1. The manuscript provides a comprehensive and up-to-date theoretical context, drawing on both constructivist theory and the UTAUT model. The authors successfully connect screencast creation to constructivist learning principles and the challenges of assessment in the era of generative AI. However, while the background is thorough, it could be tightened for conciseness. The literature review occasionally repeats similar ideas (e.g., multiple paragraphs on AI’s influence on student assessment). A more synthesized discussion comparing previous empirical findings would improve clarity and flow.
  2. Research design is clearly identified. However, minor improvements could include a clearer statement of sampling strategy and ethical considerations (participant consent and institutional approval).
  3. The discussion logically follows the results, linking statistical findings to the theoretical framework. However, the discussion occasionally repeats quantitative results instead of integrating them deeply with pedagogical theory. Greater emphasis on constructivist implications and how screencasts foster reflective and authentic learning would strengthen the argument.
  4. Results are systematically organized with tables, model fit indices, and path coefficients clearly reported. However, one minor suggestion is that the author/s should integrate brief interpretive sentences after each table to avoid disjointed numerical presentation.
  5. The manuscript draws on a solid foundation of relevant literature; however, to enhance completeness, the authors could add more very recent empirical studies (2023–2025) exploring AI’s role in authentic assessment and student-generated content.
  6. The conclusion could go further by discussing practical implications for instructors and curriculum designers, rather than only summarizing findings. This would increase the paper’s applied value for readers.
  7. There are several issues with APA-style in-text citations and the reference list. The entire manuscript should therefore be carefully revised. For example:
    1. In line 92, “Bindu & Manikandan 2020, p. 4894” is missing a comma after the names.
    2. In line 84, “Peterson, 2007; Kawaf, 2019” is not ordered alphabetically.
    3. In line 104, “Venkatesh, Morris, Davis & Davis (2003)” should be written as Venkatesh et al. (2003).
    4. In line 119, “Negahban & Chung, 2014; Bagozzi, 2007” is not alphabetically ordered.

The referencing has several issues like these; the examples noted above are not exhaustive. Please check all citations and references carefully.

  8. The text after tables starts immediately. There should be space after each table before the text begins (e.g., page 11).
  9. Most entries in the reference list are missing DOI numbers. They should also be checked against APA style.
  10. On page 2, an image starts immediately after the text. There should be space.
  11. Authors should define abbreviations (e.g., SCS, UTAUT) at first use and use them consistently.
  12. p-values should be italicized everywhere (e.g., p < .05).
  13. All tables should be revised to conform to APA style.

Comments for author File: Comments.pdf

Author Response

Comment 1: The manuscript provides a comprehensive and up-to-date theoretical context, drawing on both constructivist theory and the UTAUT model. The authors successfully connect screencast creation to constructivist learning principles and the challenges of assessment in the era of generative AI. However, while the background is thorough, it could be tightened for conciseness. The literature review occasionally repeats similar ideas (e.g., multiple paragraphs on AI’s influence on student assessment). A more synthesized discussion comparing previous empirical findings would improve clarity and flow. 

 

Response 1: Thank you for the overall positive comments. We agree with your comments about the need for a more synthesized discussion of AI’s influence on student assessment and of previous studies on assessment design in the AI era. These are now incorporated as new Sections 2.1 and 2.2, titled “Connecting Constructivism, Authentic Assessment, and Self-Explanation Through Student-Created Content” and “Revisiting Assessment Redesign in the AI-Era”, respectively.

Comment 2: Research design is clearly identified. However, minor improvements could include a clearer statement of sampling strategy and ethical considerations (participant consent and institutional approval). 

Response 2: Thank you for pointing that out. We have now described the ethical considerations and the sampling in Section 4.1 on page 10, starting from line 348.

Comment 3: The discussion logically follows the results, linking statistical findings to the theoretical framework. However, the discussion occasionally repeats quantitative results instead of integrating them deeply with pedagogical theory. Greater emphasis on constructivist implications and how screencasts foster reflective and authentic learning would strengthen the argument. 

Response 3: We have revised the manuscript according to this valuable suggestion. We have enhanced many parts of Section 5 “Results and Discussions”; these changes are highlighted in yellow. Furthermore, we emphasized the findings that align with the principles of constructivism in Section 6 “Conclusion” on page 19, at line 617.

Comment 4: Results are systematically organized with tables, model fit indices, and path coefficients clearly reported. However, one minor suggestion is that the author/s should integrate brief interpretive sentences after each table to avoid disjointed numerical presentation. 

Response 4: We agree that some of the numerical presentation was disjointed. We have therefore added brief interpretive sentences after the tables, for example: page 11, Table 2, line 368; page 11, Table 3, line 381; page 12, Table 4, line 388; page 14, Table 8, line 450; and page 16, Table 9, line 517.

Comment 5: The manuscript draws on a solid foundation of relevant literature; however, to enhance completeness, the authors could add more very recent empirical studies (2023–2025) exploring AI’s role in authentic assessment and student-generated content.

Response 5: This is a good suggestion. We have added more recent empirical studies on authentic assessment and student-created content in the new Section 2.1. That section cites the seminal work of Vygotsky (1978), Lombardi (2008), and Chi et al. (1994), as well as recent empirical research by Van der Walt and Bosch (2025), Killam et al. (2024), and Cardace et al. (2024).

Comment 6: The conclusion could go further by discussing practical implications for instructors and curriculum designers, rather than only summarizing findings. This would increase the paper’s applied value for readers.

Response 6: Thank you for your suggestion. We have added a part to the “Discussion”: starting from line 450, we discuss some possible reasons indicated by the path coefficients, which underscores the importance of educators framing innovative assessment tools such as student-created screencasts. We have also added a new Section 5.6 Practical Implications for this purpose. Since the other reviewer was satisfied with the “Conclusions” in the previous submission, we think this arrangement can address the concerns of both reviewers.

Comment 7: There are several issues with APA-style in-text citations and the reference list. The entire manuscript should therefore be carefully revised. For example:

  1. In line 92, “Bindu & Manikandan 2020, p. 4894” is missing a comma after the names.
  2. In line 84, “Peterson, 2007; Kawaf, 2019” is not ordered alphabetically.
  3. In line 104, “Venkatesh, Morris, Davis & Davis (2003)” should be written as Venkatesh et al. (2003).
  4. In line 119, “Negahban & Chung, 2014; Bagozzi, 2007” is not alphabetically ordered.

 

Response 7: Thank you for your close observations. We have made the corresponding corrections accordingly:

  1. Added a comma to the citation “Bindu & Manikandan, 2020, p. 4894” in line 96.
  2. Re-ordered the citations alphabetically to “Kawaf, 2019; Peterson, 2007” in line 88.
  3. Modified the citation to “Venkatesh et al. (2003)” in line 109.
  4. Re-ordered the citations to “Bagozzi, 2007; Negahban & Chung, 2014” in line 124.

Comment 8: The text after tables starts immediately. There should be space after each table before the text begins (e.g., page 11).

Response 8: Thank you for pointing out these careless mistakes of ours. We agree and have made several changes based on the comments: a blank line or 12 pt line spacing was added before every paragraph that follows a table.

Comment 9: Most entries in the reference list are missing DOI numbers. They should also be checked against APA style.

Response 9: We have added the missing DOI numbers to the reference list. For the paper that does not have a DOI, we have included its webpage URL.

Comment 10: On page 2, an image starts immediately after the text. There should be space.

Response 10: A 12 pt line spacing was added before all figures.

Comment 11: Authors should define abbreviations (e.g., SCS, UTAUT) at first use and use them consistently.

Response 11: We have defined all abbreviations at first use and now use them consistently.

Comment 12: p-values should be italicized everywhere (e.g., p < .05).

Response 12: We have italicized every “p” representing a p-value.

Comment 13: All tables should be revised to conform to APA style.

Response 13: We have modified the font style, captions, and table design to match APA style.


Reviewer 2 Report

Comments and Suggestions for Authors

This is a well-structured and relevant study addressing an emerging issue in higher education — the integration of generative AI and the search for authentic, constructivist assessment alternatives. The authors propose and empirically test a modified UTAUT model to analyze students’ acceptance of student-created screencasts (SCSs) as a form of assessment. The topic is timely, theoretically grounded, and contributes to the literature on technology acceptance, AI ethics in education, and active learning.

The manuscript is clear and methodologically acceptable, but it would greatly benefit from stronger theoretical integration, linguistic refinement, and deeper interpretive discussion of implications beyond the surveyed population.

My major comments/concerns:

THEORETICAL FRAMING:

  1. The rationale for introducing Future Utility (FU) is well motivated, but FU could be better distinguished conceptually from Perceived Usefulness in TAM and Performance Expectancy in UTAUT.
  2. Consider enriching the conceptual section by connecting constructivism with authentic assessment and self-explanation theory (e.g., Chi et al., 1994; Lombardi, 2008).

  3. The AI context is introduced in the abstract and early introduction, but largely disappears in later sections. Strengthen the link between SCSs and resilience to GenAI misuse.

LITERATURE REVIEW:

  1. Table 1 is informative, yet it only lists teacher-created screencasts. To emphasize the gap, briefly include studies on learner-generated media or student-produced digital artifacts (e.g., Orús et al., 2016).

  2. The review would benefit from a brief subsection on AI-era assessment redesign, referencing recent scholarship on AI-resistant and human-centered evaluation (e.g., Chaka, 2023; Williamson et al., 2023).

METHODOLOGY

The methodology is clearly explained, and the PLS-SEM analysis is appropriate. However, please specify:

    • The ethical approval process or IRB exemption rationale.

    • The institutional and cultural context (e.g., Hong Kong University) for interpretive transparency.

    • Whether multi-group analysis was conducted for moderators (since gender, discipline, etc., are discussed).

RESULTS & INTERPRETATION 

The results section is statistically solid, but the discussion could be more interpretive and educationally grounded:

    • Discuss why Future Utility had the strongest path coefficient — link to student employability narratives or lifelong learning orientation.

    • Reflect on the modest influence of Effort Expectancy — perhaps due to students’ digital familiarity.

    • Add a visual summary of significant and non-significant paths for clarity.

DISCUSSION
The discussion currently reiterates statistical findings. Expand on pedagogical implications:

    • How can instructors redesign feedback or rubrics for SCSs?
    • How might SCSs be integrated into blended learning or flipped classrooms?

    • Could SCSs serve as a formative diagnostic for misconceptions or AI misuse?

    • Include a paragraph on institutional scalability and staff workload, which is briefly mentioned but deserves elaboration.

                                   __________________________

Minor concerns: 

  • The limitations section is honest and appropriate. You could enrich it by acknowledging technological inequality (e.g., access to quality microphones or devices) and language barriers in spoken explanations.

  • In “Future Research Directions,” suggest cross-cultural validation of the modified UTAUT and potential integration with AI literacy frameworks.

  • The manuscript is mostly readable but requires light copy-editing for grammar, consistency, and conciseness.
  • Replace informal or redundant expressions (e.g., “students must explain their work to their teachers”) with more academic phrasing.

  • Ensure consistent abbreviation usage (SCS vs. SCSs).

  • References should follow MDPI formatting.

  • Figures 1 and 2: ensure clarity and readable labels.


Author Response

THEORETICAL FRAMING: 

Comment 1: The rationale for introducing Future Utility (FU) is well motivated, but FU could be better distinguished conceptually from Perceived Usefulness in TAM and Performance Expectancy in UTAUT.

Response 1: We agree with this comment. We have added a new section, “1.3 Future Utility”, discussing the similarities and differences between Future Utility (FU) and both Perceived Usefulness in TAM and Performance Expectancy in UTAUT.
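For orientation only, and not as the authors’ exact specification: in a UTAUT-style model, the distinction matters because FU would enter the structural equation for behavioral intention (BI) as a separate predictor alongside the standard constructs. A plausible sketch, assuming BI is the endogenous construct and PE, EE, and SI are the usual UTAUT predictors (performance expectancy, effort expectancy, social influence):

```latex
\mathrm{BI} \;=\; \beta_{1}\,\mathrm{PE} \;+\; \beta_{2}\,\mathrm{EE} \;+\; \beta_{3}\,\mathrm{SI} \;+\; \beta_{4}\,\mathrm{FU} \;+\; \varepsilon
```

Under this framing, FU earns its place as a distinct construct only if \beta_{4} explains variance in BI over and above \beta_{1}, the present-performance construct; the actual construct set and paths are those reported in the manuscript.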

Comment 2: Consider enriching the conceptual section by connecting constructivism with authentic assessment and self-explanation theory (e.g., Chi et al., 1994; Lombardi, 2008). 

Response 2: Thank you for your suggestion. We have added a new section connecting constructivism with authentic assessment and self-explanation theory on page 6, Section 2.1, line 170. Section 2.1 is titled “Connecting Constructivism, Authentic Assessment, and Self-Explanation Through Student-Created Content”.

Comment 3: The AI context is introduced in the abstract and early introduction, but largely disappears in later sections. Strengthen the link between SCSs and resilience to GenAI misuse. 

Response 3: Thank you for your kind suggestion. We have explained the use of student-created screencasts through self-explanation theory in Section 2.1. Moreover, we have added a new Section 2.2 at line 201 to discuss the difficulty of ensuring academic integrity amid such rapid AI growth, and why it is important to design a new pedagogical approach that keeps curriculum and assessment decisions grounded in educational purposes rather than technological imperatives. Section 2.2 is titled “Revisiting Assessment Redesign in the AI-Era”.

LITERATURE REVIEW: 

Comment 4: Table 1 is informative, yet it only lists teacher-created screencasts. To emphasize the gap, briefly include studies on learner-generated media or student-produced digital artifacts (e.g., Orús et al., 2016). 

Response 4: We agree with this valuable insight. We have modified the design of Table 1 and added the corresponding study to emphasize the gap between teacher-created and student-created screencasts.

Comment 5: The review would benefit from a brief subsection on AI-era assessment redesign, referencing recent scholarship on AI-resistant and human-centered evaluation (e.g., Chaka, 2023; Williamson et al., 2023). 

Response 5: This is a good suggestion. We have added a new Section 2.2 at line 201, discussing assessment redesign in the AI era with reference to recent research.

METHODOLOGY: 

The methodology is clearly explained, and the PLS-SEM analysis is appropriate. However, please specify:   

Comment 6: The ethical approval process or IRB exemption rationale. 

Response 6: We understand this concern. We have described the ethical approval obtained from the university in Section 4.1 on page 10, line 353. We will attach the corresponding approval letter as a supplementary file.

Comment 7: The institutional and cultural context (e.g., Hong Kong University) for interpretive transparency. 

Response 7: Likewise, the background of the sample and the institution is now described in Section 4.1 on page 10, line 348.

Comment 8: Whether multi-group analysis was conducted for moderators (since gender, discipline, etc., are discussed). 

Response 8: We did not originally include a multi-group analysis (MGA), but we think it is a valuable idea to include one in our study. We have therefore created a new section presenting the MGA on page 16, line 525.
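As background on what such an MGA tests, a permutation-based comparison of a single path coefficient between two groups can be sketched as follows. This is an illustrative toy example with invented data, not the authors’ analysis; the group labels and constructs are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_coefficient(x, y):
    # Standardized simple-regression slope, which equals Pearson r.
    return np.corrcoef(x, y)[0, 1]

# Invented data: a predictor (e.g., FU scores) and an outcome
# (e.g., behavioral intention) for two hypothetical groups,
# such as full-time vs. part-time students.
n1, n2 = 120, 80
x1 = rng.normal(size=n1); y1 = 0.6 * x1 + rng.normal(size=n1)
x2 = rng.normal(size=n2); y2 = 0.5 * x2 + rng.normal(size=n2)

observed_diff = abs(path_coefficient(x1, y1) - path_coefficient(x2, y2))

# Permutation test: shuffle the group labels and recompute the
# between-group difference to build a null distribution.
x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
n_perm, count = 5000, 0
for _ in range(n_perm):
    idx = rng.permutation(len(x))
    g1, g2 = idx[:n1], idx[n1:]
    diff = abs(path_coefficient(x[g1], y[g1]) - path_coefficient(x[g2], y[g2]))
    count += diff >= observed_diff

# A large p-value indicates no evidence that the path differs by group,
# i.e., a non-significant moderating effect.
print(f"permutation p-value: {count / n_perm:.3f}")
```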

DISCUSSION 

Comment 9: How can instructors redesign feedback or rubrics for SCSs? 

Response 9: Thank you for this suggestion. We agree that SCSs would necessitate new feedback practices and rubrics. However, as the focus of this article is on students’ acceptance of SCSs, we prefer to mention these points in the last part of Section 5.6 Practical Implications. We are currently preparing another paper on these points.

Comment 10: How might SCSs be integrated into blended learning or flipped classrooms? 

Response 10: We think this comment is reasonable and important for educators. We have briefly discussed the difficulties faced by educators in the literature review. As stated in line 70, this study focuses on understanding students’ acceptance of SCSs as assignments, so we think it is better to include these approaches as a future research direction.

Comment 11: Could SCSs serve as a formative diagnostic for misconceptions or AI misuse? 

Response 11: Yes, and we explain this in the new Sections 2.1 and 2.2. Based on the theories discussed in Section 2.1, we think SCSs could transform assessment from measurement into a powerful learning experience that develops both cognitive understanding and metacognitive awareness.

Comment 12: Include a paragraph on institutional scalability and staff workload, which is briefly mentioned but deserves elaboration. 

Response 12: Thank you for your suggestion. We have now described more of the institutional background and the students’ workload starting from line 348. We have also elaborated on the workload and scalability issues at the end of Section 5.6 Practical Implications.

Comment 13: The limitations section is honest and appropriate. You could enrich it by acknowledging technological inequality (e.g., access to quality microphones or devices) and language barriers in spoken explanations. 

Response 13: We agree with this comment. Since this study focuses on students’ acceptance of SCSs, the constructs “Performance Expectancy (PE)” and “Effort Expectancy (EE)” help reflect their perceptions of technology-related concerns (line 475 and Figure 5); we have explained these constructs and their path coefficients starting from line 450. In addition, we added a point in the last part of Section 5.6 Practical Implications to address technological inequality and students who have special educational needs in terms of spoken language.

Comment 14: In “Future Research Directions,” suggest cross-cultural validation of the modified UTAUT and potential integration with AI literacy frameworks. 

Response 14: We agree with this comment. The corresponding sentence has been added to “Future Research Directions” at line 647.

Comment 15: The manuscript is mostly readable but requires light copy-editing for grammar, consistency, and conciseness. 

Comment 16: Replace informal or redundant expressions (e.g., “students must explain their work to their teachers”) with more academic phrasing.  

Response 15 & 16: Thank you for your in-depth observations. We have refined the paper and revised some expressions; the changes are highlighted in yellow in the manuscript.

Comment 17: Ensure consistent abbreviation usage (SCS vs. SCSs). 

Response 17: We agree that a consistent abbreviation should be used. We updated the abbreviation at line 680 and corrected all abbreviations to a consistent form.

Comment 18: References should follow MDPI formatting. 

Response 18: We have refined the reference list as required. 

Comment 19: Figures 1 and 2: ensure clarity and readable labels. 

Response 19: After reviewing these figures, we agree that the clarity and readability of the labels can be improved. Figure 1 is now separated into three figures (Figures 1 to 3), and we have improved the labels in Figure 4 (the original Figure 2).

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

The manuscript is now well structured and presents an interesting approach to authentic assessment in the context of generative AI. While the contribution is relevant, the theoretical framing and methodological justification could be strengthened. Therefore, I recommend revisions, primarily focused on improving methodological transparency and discussion.

  1. Consider adding a brief interpretation of the non-significant moderating effects (gender, discipline, study mode). Explaining why acceptance of SCSs appears consistent across student groups, perhaps due to universal features such as authenticity, ownership, and cognitive engagement, would strengthen the analytical depth of the discussion.

  2. In the Methods section, it would be helpful to briefly justify the choice of PLS-SEM (e.g., prediction orientation, model complexity, non-normal data) and to specify the criteria used to evaluate the measurement model (e.g., factor loadings, AVE, CR, discriminant validity). This clarification will enhance the study's methodological transparency.

Author Response

Comment 1: Consider adding a brief interpretation of the non-significant moderating effects (gender, discipline, study mode). Explaining why acceptance of SCSs appears consistent across student groups, perhaps due to universal features such as authenticity, ownership, and cognitive engagement, would strengthen the analytical depth of the discussion. 

Response 1: Thank you for the overall positive comments. We agree that a brief interpretation of the non-significant moderating effects would strengthen the analytical depth of the discussion. Indeed, the non-significant moderating effects of gender, mode of study, and discipline of study are due to the universal features of authenticity, ownership, and cognitive engagement. We have therefore added paragraphs starting from line 557, under Section 5.4 “Multi-Group Analysis (MGA)”, to explain why the acceptance of SCSs appears consistent across most of the groups: SCSs as student assessment incorporate authenticity, ownership, and cognitive engagement, as discussed in the Literature Review section.

Comment 2: In the Methods section, it would be helpful to briefly justify the choice of PLS-SEM (e.g., prediction orientation, model complexity, non-normal data) and to specify the criteria used to evaluate the measurement model (e.g., factor loadings, AVE, CR, discriminant validity). This clarification will enhance the study's methodological transparency. 

Response 2: Thank you for your suggestions. We agree that we should justify the choice of PLS-SEM, so we have added the justification at line 397, under Section 4.2. Moreover, we have included the definitions of CR (line 417), AVE (line 421), and discriminant validity (line 430), following Hair et al. (2017), under Section 5.1.
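For readers unfamiliar with these criteria, both CR and AVE are simple functions of the standardized factor loadings, and the Fornell-Larcker test of discriminant validity compares the square root of each construct’s AVE with its inter-construct correlations. The following minimal sketch uses invented loadings and an invented correlation purely for illustration; it is not the authors’ code or data.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # assuming standardized loadings, so each error variance is 1 - loading^2.
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for two constructs (values invented).
loadings = {"FU": [0.82, 0.79, 0.85], "PE": [0.76, 0.81, 0.74]}

for construct, lam in loadings.items():
    cr, ave = composite_reliability(lam), average_variance_extracted(lam)
    # Common rules of thumb (Hair et al., 2017): CR > 0.7 and AVE > 0.5.
    print(f"{construct}: CR = {cr:.3f}, AVE = {ave:.3f}")

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlation with every other construct (this correlation is invented).
r_fu_pe = 0.55
ok = all(np.sqrt(average_variance_extracted(l)) > r_fu_pe
         for l in loadings.values())
print("Fornell-Larcker satisfied:", ok)
```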
