Article

Student-Created Screencasts: A Constructivist Response to the Challenges of Generative AI in Education

1 School of Professional Education and Executive Development, The Hong Kong Polytechnic University, Hong Kong, China
2 College of Professional and Continuing Education, The Hong Kong Polytechnic University, Hong Kong, China
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(12), 1701; https://doi.org/10.3390/educsci15121701
Submission received: 4 November 2025 / Revised: 5 December 2025 / Accepted: 14 December 2025 / Published: 17 December 2025
(This article belongs to the Section Higher Education)

Abstract

Screencasts, which are screen-capture videos, have traditionally been created by teachers to deliver instruction or feedback, reflecting a teacher-centered model of learning. Based on constructivist principles, this study explores an innovative attempt to position students as screencast creators who must demonstrate their knowledge and explain their work in the screencast. This approach has the potential to promote authentic learning and reduce dependence on generative artificial intelligence (GenAI) tools for completing assignments. However, it is uncertain whether students will have positive attitudes towards this new form of assessment. From 2022 to 2025, the authors used screencasts as assessments in computer programming and English language subjects. Survey responses were obtained from 203 university students and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that students generally hold positive attitudes toward creating screencasts, with perceived usefulness for future applications exerting the strongest influence on acceptance, followed by perceived performance benefits and ease of use. Gender, discipline, and study mode did not significantly alter these relationships, although senior students perceived screencast production as more effortful. These findings suggest that student-created screencasts can serve as an effective, student-centered alternative to traditional written assessments and have the potential to help students develop their skills in an increasingly GenAI-pervasive academic environment.

1. Introduction

The emergence of software that combines artificial intelligence and large language models, such as ChatGPT v4.0 and DeepSeek v3.2, presents both prospects and hurdles for educators in terms of assessing students’ academic performance (Dehouche, 2021; Hosseini et al., 2023; Williamson et al., 2023). Such software can be a helpful assistant in many tasks, including the preparation of university assignments. Consequently, educators are tasked with guiding students toward the responsible and ethical use of these technologies to achieve the prescribed learning outcomes. For instance, students should be encouraged to leverage these tools not as generators of answers, but as sources of inspiration, feedback, or reference material. At the same time, there are substantial concerns about the efficacy of current plagiarism-detection software in identifying content generated by these applications. While specialized tools, such as Turnitin, can detect and highlight plagiarized sections in written work, students may circumvent these mechanisms using techniques like “back-translation” (Jones & Sheridan, 2015; Yankova, 2020). Furthermore, such software often provides only probabilistic assessments of whether specific text segments are likely to be AI-generated, without definitive proof. This ambiguity can lead to disputes between students and teachers in cases of suspected academic dishonesty. Ultimately, the main role of teachers should not be detecting plagiarism but cultivating critical thinkers.
Considering the rapid increase in AI applications in the careers of graduates, universities should perceive these tools as valuable rather than as software to be avoided. The new tools require a change in the way student assignments are designed and graded. One such change is to require students to articulate their work in their own words. This can be achieved by requiring students to submit screen-capture videos, also known as screencasts, in which they must explain their work. Because these screencasts are created by students, they are called student-created screencasts (SCSs). Optionally, teachers can also require students to turn on the camera while they record their spoken explanation as the audio of the video. Although the requirement for students to demonstrate their knowledge is consistent with the principles of constructivism, students may not be willing to submit screencasts as assignments, because, compared with traditional written assignments, they must acquire new skills and exert additional effort to create screencasts.
A typical SCS is shown in Figure 1, together with the assignment instructions in Figure 2 and an anonymized script under the pseudonym CHAN Tai Man in Figure 3. This assignment requires students to record their voice, but they do not have to show their face.
Educators cannot realize the potential benefits of SCSs unless it is feasible for students to create screencasts and they accept them as a form of assignment. Therefore, this research aims to understand students’ acceptance of SCSs as assignments in universities using a modified version of the Unified Theory of Acceptance and Use of Technology (UTAUT). The results will pave the way for future research to exploit the benefits of SCSs. The rest of the paper is structured as follows. First, we review the use of screencasts in education and the background and applications of the UTAUT. The Literature Review section then examines the gaps in screencast research and discusses the constructs of the UTAUT. The Methodology and Research Design section presents the hypotheses and describes the design of the survey questionnaire. The Results and Discussions section presents and discusses the findings. Finally, the Conclusion section summarizes the findings, states the limitations, and suggests future research directions.

1.1. The Potential of Screencasts as a Pedagogical Tool

A screencast is a screen-capture video that shows the contents of the screen and the user’s interactions with the computer, including mouse clicks, drags, and keyboard entries. The user’s voice can also be part of the video. A screencast created by a student is thus called a student-created screencast (SCS). Screencasts are useful tools in education. Previous research shows that teacher-created videos can increase student engagement and improve student performance compared with attending traditional lectures (Pereira et al., 2014; Orús et al., 2016). Students react positively when teacher-created videos are used in lessons (Kawaf, 2019; Peterson, 2007). However, Morris and Chikwa (2014) found that, despite an “overwhelmingly positive” perception of videos, there was only a “modest” effect of videos on undergraduates’ knowledge acquisition. The principles of constructivism imply that students will benefit even more when they make their own videos to explain certain concepts or demonstrate particular skills (Shieh, 2012, p. 206). Hence, we propose requiring students to create videos to explain their work to their teachers. Explaining to someone else is strong evidence that one has understood something. Albert Einstein summarized this with the statement: “If you cannot explain it simply, you don’t understand well enough” (Bindu & Manikandan, 2020, p. 4894). This study investigates student perceptions of submitting assignments as videos, called screencasts, in which they must explain their work.
It is reasonable to assume that students might resist SCSs as assignments because they involve different skills and more effort than the usual “search-edit-submit” method. However, students are likely to accept this new assignment format if they see the value of self-created screencasts for their future study and career. This is particularly relevant to undergraduates, whose next stage is to develop their careers or to study at the postgraduate level. Hence, we propose studying students’ acceptance of SCSs by modifying a proven framework—the Unified Theory of Acceptance and Use of Technology (UTAUT).

1.2. Unified Theory of Acceptance and Use of Technology (UTAUT)

This research project proposes using a modified version of UTAUT to examine students’ acceptance of creating screencasts as assignments. The UTAUT framework was created by Venkatesh et al. (2003). It states that a person accepts and uses a technology based on that person’s attitude towards the technology. The attitude towards technology is, in turn, influenced by four factors known as constructs. The constructs of the original UTAUT are listed below.
  • “Performance Expectancy (PE): the degree to which an individual believes that using the system will help them to attain gains in the job” (Venkatesh et al., 2003, p. 447).
  • “Effort Expectancy (EE): the degree of ease associated with using the system” (Venkatesh et al., 2003, p. 450).
  • “Facilitating Conditions (FC): the degree to which an individual believes that an organizational and technical infrastructure exists to support the use of the system” (Venkatesh et al., 2003, p. 453).
  • “Social Influence (SI): the degree to which an individual perceives whether others believe they should use the new system” (Venkatesh et al., 2003, p. 451).
Modifications to the constructs have been necessary when the UTAUT was applied in different fields of study and contexts (Bagozzi, 2007; Negahban & Chung, 2014). In this project, the constructs of FC and SI are not applicable. For FC, the teacher provided free online tools for creating the screencasts; the only requirement is a computer with an Internet browser, so no extra cost or software installation is involved. For SI, the students are required to submit screencasts as assignments, so their use of screencasts is not affected by social influence. However, students may have a positive attitude towards SCSs because they can use them to practice for online interviews or to make pre-recorded demonstrations for clients or colleagues. Therefore, we propose the new construct of Future Utility (FU).

1.3. Future Utility

We define Future Utility (FU) as the degree to which an individual believes that the technology can be used in a future situation to achieve the individual’s purpose. While Future Utility (FU) is a concept that is derived from constructs such as Perceived Usefulness (PU) in the Technology Acceptance Model (TAM) (Davis, 1989), and Performance Expectancy (PE) in the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003), it represents a distinct and forward-oriented dimension of technology acceptance. Both PU and PE describe an individual’s belief that using a particular technology will enhance performance in a current or ongoing task. In contrast, FU captures the anticipated value or potential benefit of a technology beyond the immediate context of use—its perceived applicability to future academic, professional, or personal situations.
Conceptually, PU in TAM emphasizes functional outcomes (“this tool helps me perform my task better”), while PE in UTAUT extends this view to expectations of gain within specific roles or objectives (“this technology improves my effectiveness in my job/study”). FU, however, situates the user’s perceptions within a temporal and developmental frame: the focus is not on the technology’s immediate enhancement of current tasks, but on its potential as the user performs tasks in different future situations (“this tool will help me demonstrate my work to my colleagues”). It therefore complements but does not duplicate the existing constructs.
In the context of Student-Created Screencasts (SCSs), FU refers to students’ belief that the skills acquired through creating screencasts—such as technical communication, digital fluency, and self-expression—can be beneficial in future endeavors beyond the course requirements. For instance, students may recognize that SCS production skills could enhance their ability to deliver online presentations, explain programming concepts to clients, or develop multimedia portfolios for future employment. Whereas PE focuses on the immediate academic benefit of creating screencasts for learning improvement, FU emphasizes the projected long-term value of mastering this technology.
In summary, FU extends the traditional TAM and UTAUT by adding a future-oriented dimension of perceived benefit. This is especially relevant to students, as FU highlights the idea that students’ intention to adopt a technology may depend not just on whether it is required by the teacher, but also on the extent to which they envision it supporting their future learning and career development.
This study surveyed students on their perceptions of Student-Created Screencasts (SCSs) using the constructs of PE, EE, and FU to form the modified UTAUT research model. The data were then analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to examine the influencing factors and the moderators.

2. Literature Review

2.1. Connecting Constructivism, Authentic Assessment, and Self-Explanation Through Student-Created Content

The use of student-created content as assessment represents a convergence of constructivism, authentic assessment, and self-explanation theory. Constructivist learning theory posits that learners actively construct knowledge through engagement with authentic tasks (Vygotsky, 1978), finding practical expression in authentic assessment approaches that mirror real-world complexities and require meaningful application of knowledge (Lombardi, 2008). When students create content as part of assessment, the evaluative moment transforms into an opportunity for deep learning.
Recent empirical work demonstrates this theoretical alignment. van der Walt and Bosch (2025) ground their study of co-created Open Educational Resources explicitly in social constructivism, showing that student-produced assessment artifacts develop ownership, intrinsic motivation, and self-directed learning. Killam et al. (2024) identify how learner–educator co-creation enhances authentic assessment elements—realistic context, cognitive complexity, evaluative judgment, and collaboration—thereby preparing students for professional practice through constructivist principles.
Self-explanation theory provides the cognitive mechanism linking content creation to learning. Chi et al. (1994) demonstrated that learners who actively explain material—generating inferences and articulating reasoning—achieve superior understanding. When students create assessment artifacts, they engage in extended self-explanation, making their thinking visible. Cardace et al. (2024) embedded self-explanation prompts in explanatory video assignments, finding significant learning and confidence gains through mixed-methods research. Similarly, Malkawi et al. (2023) showed that tenth-graders creating test questions demonstrated improved grammar performance, self-efficacy, and motivation, as question generation required analyzing structures and articulating understanding.
This integrated framework suggests effective student-created assessments should provide authentic contexts (Lombardi, 2008), incorporate explicit self-explanation prompts (Chi et al., 1994), and support co-creative processes that develop evaluative judgment (Killam et al., 2024). When thoughtfully designed, student-created content transforms assessment from measurement into a powerful learning experience, developing both cognitive understanding and metacognitive awareness.

2.2. Revisiting Assessment Redesign in the AI Era

The advent of generative AI technologies, particularly large language models such as ChatGPT, has precipitated urgent calls for fundamental reassessment of educational evaluation practices. Recent scholarship converges on the inadequacy of traditional assessment frameworks in an environment where AI can generate sophisticated academic outputs, demanding instead a shift toward AI-resistant design principles and human-centered evaluation approaches (Khlaif, 2024; Lye & Lim, 2024).
Emerging frameworks for AI-resistant assessment emphasize tasks that foreground originality, contextualized problem-solving, and evidence of student thinking processes rather than polished final products. Alkouk and Khlaif (2024) identify four core design principles: (1) embedding authentic, real-world contexts requiring domain-specific judgment; (2) requiring process documentation through drafts and annotated logs; (3) targeting higher-order cognitive skills such as analysis and synthesis; and (4) establishing explicit AI-use policies with graded levels of permitted tool engagement. These principles reflect a broader shift from attempting to detect AI use toward designing assessments that inherently require human cognitive engagement (Osamor, 2023). Indeed, Chaka (2023a) empirically demonstrated that AI-content detection tools show significant inconsistencies across languages and generators, concluding that current detection technologies cannot serve as dependable gatekeepers for academic integrity, thereby reinforcing the necessity of design-based approaches.
Critical scholarship warns against both uncritical adoption and outright prohibition of AI in educational contexts. Williamson et al. (2023) call for careful re-examination of AI, automation, and datafication in education, arguing that these technologies reshape pedagogical practices and institutional logics in ways that demand sustained critical scrutiny. Chaka (2023b) likewise urges educators to critically rethink assumptions about skills and automation embedded in fourth industrial revolution technologies, ensuring that curriculum and assessment decisions remain grounded in educational purposes rather than technological imperatives.

2.3. Gaps in Existing Research on Student-Created Screencasts

Identifying students who are having difficulty in a subject is crucial. However, the typical educational design in many university courses consists of only a few tasks and one critically important assessment at the end. Students who perform well on the assignments but fail the final examination are frequently observed. As Bernacki et al. (2020) pointed out, most of the “early warning” research is focused on “alternative, more data-rich contexts including massive open online courses (MOOCs)”. At the early stage of a course, SCSs may be a more reliable measure of students’ performance than regular assignments and could help determine which students need assistance. Lynch (2019) describes intervention in education as a set of steps a teacher takes to help students improve in their area of need. According to Wong et al. (2017) and Korkmaz and Öz (2021), since intervention can significantly enhance students’ reading and writing skills, it is important to identify those who are having difficulty learning English as a second language.
To the best of our knowledge, there is no prior research on using SCSs as an assessment. We examined research papers on screencasts in university education to obtain an overall view of the current state of research in this area and to identify any gaps in the literature. Many papers on screencasts in higher education do not explicitly investigate student-created screencasts, or provide insufficient details for a clear classification. We found several research articles that focused on using screencasts in higher education; these are listed in Table 1. An analysis of these articles revealed a significant bias toward teacher-created screencast research, a substantial gap that warrants attention from the educational technology research community. We classified the articles that focused on instructor-produced content into categories, as shown in Table 1: teacher-provided screencasts are used mainly as optional supplementary learning materials, for feedback delivery, as enhancements to lectures, and for specialized subject support such as statistics. The only student-created screencast research was performed by Orús et al. (2016), in which students could choose to create a video to explain a theoretical concept of marketing. If they chose this route, they did not need to explain the concept in the written report. Thus, the students may have perceived the video project as an additional workload without direct impact on their final grade.
The identified research gap is significant because current research only examines one side of the educational equation—content delivery—while ignoring content creation by students. This pedagogical imbalance may imply missed learning opportunities. The principle of constructivism indicates that creating content enhances learning more than consuming it, yet this principle remains unexplored in research related to screencasts. This research represents an innovative first step to explicitly study student-created screencasts.

2.4. The Modified Unified Theory of Acceptance and Use of Technology (UTAUT)

The Unified Theory of Acceptance and Use of Technology (UTAUT) framework was created by Venkatesh et al. (2003). It states that a person accepts and uses a technology based on that person’s attitude towards the technology. The attitude towards technology is, in turn, influenced by four factors known as constructs. As discussed in the Introduction section, we propose removing the constructs FC and SI, and adding FU. The constructs in our proposed modified UTAUT are shown in Figure 4 and are discussed below.

2.4.1. Effort Expectancy (EE)

EE refers to the degree of ease associated with using the technology (Venkatesh et al., 2003). According to Nguyen and Nguyen (2024), individuals’ behavioral intention and actual use of a particular technology are related to their belief that using it is free of effort. To understand students’ perceptions of SCSs, the ease of creating them must therefore be examined. Additionally, individuals’ EE has been shown to positively influence their attitude and intention towards accepting a technology (Moon & Hwang, 2016).

2.4.2. Performance Expectancy (PE)

PE refers to the degree to which an individual believes that using the system will help them to attain gains in the job (Venkatesh et al., 2003). According to Sari et al. (2024), Performance Expectancy positively influences the intention to adopt technologies, indicating that higher expectations of performance lead to more favorable attitudes towards technology adoption in individuals’ educational contexts. In this study, Performance Expectancy refers to students’ belief that creating SCSs helps improve their learning activities.

2.4.3. Future Utility (FU)

FU refers to the degree to which an individual believes that the technology can be used in a future situation to achieve the individual’s purpose. FU shares a similar concept with perceived usefulness in TAM (Davis, 1989), but focuses specifically on usefulness in the future. In this study, we examine students’ views on using SCSs in the future, which also reflect their attitude towards SCSs.

2.4.4. Attitude (ATT)

ATT refers to an individual’s favorable or unfavorable disposition towards a particular technology. According to Or (2023), ATT is influenced by Performance Expectancy and Effort Expectancy in the context of information technology adoption. According to Ernst et al. (2014), individuals’ attitude towards the usage of a technology is influenced by perceived usefulness, which we consider similar to Future Utility.

2.4.5. Behavioral Intention (BI)

BI is the most important variable in the UTAUT. In this study, it represents students’ intention to use SCSs. According to Venkatesh et al. (2003), individuals’ BI has a significant positive influence on their actual usage of the technology in question. We hypothesize that BI is jointly determined by the latent variables of Effort Expectancy, Performance Expectancy, Future Utility, and attitude. Moreover, Buabeng-Andoh and Baah (2020) suggested that BI is influenced by attitude. Therefore, we propose the following research model and hypotheses.

3. Research Model and Hypotheses

As noted by Negahban and Chung (2014) and Bagozzi (2007), when using the UTAUT in a specific research area, the researcher should make the necessary changes to the constructs to suit the project at hand. In this project, the constructs of FC and SI are not applicable: students are required to use videos, and the infrastructure needed to create screencasts is simply a computer with an Internet browser. Therefore, FC and SI are not included in this study. Since this research is on students in higher education, these students may have a positive attitude towards SCSs because they can use them to practice for online interviews and to make pre-recorded demonstrations for clients or colleagues. Therefore, we propose the new construct of Future Utility (FU). In summary, this study used a modified UTAUT (Venkatesh et al., 2003) that includes the constructs of Effort Expectancy (EE), Performance Expectancy (PE), Future Utility (FU), Attitude (ATT), and Behavioral Intention (BI). The research model and our hypotheses are shown in Figure 4.

3.1. Research Hypotheses

Under the modified UTAUT framework, we proposed the following hypotheses:
H1. 
Student attitude towards SCS is affected by the Effort Expectancy.
H2. 
Student attitude towards SCS is affected by the Performance Expectancy in their study.
H3. 
Student attitude towards SCS is affected by the perceived Future Utility.
H4. 
Student behavioral intention towards SCS is affected by their attitude.
H5. 
Students have a positive attitude towards SCS.

3.2. Moderators

The effects of the above constructs may be moderated by factors such as gender, year of study at the university, discipline of study, and mode of study. The years of study are represented by numbers corresponding to the stages of a 4-year university curriculum. The students in this study belong to different programs of study, such as language, engineering, and information technology. In our model, the discipline of study is classified as science or non-science. The mode of study is either full-time or part-time. Therefore, we have the following hypotheses.
H1a, H2a, H3a. 
Gender moderates the effects of EE, PE, and FU.
H1b, H2b, H3b. 
Year of Study moderates the effects of EE, PE, and FU.
H1c, H2c, H3c. 
Discipline of study moderates the effects of EE, PE, and FU.
H1d, H2d, H3d. 
Mode of study moderates the effects of EE, PE, and FU.

4. Methodology

4.1. Research Method and Participants

This research adopted a quantitative approach. The students in this study were undergraduates at a university in Hong Kong. The subject lecturers in this research agreed to adopt a similar screencast format as an individual assignment in their respective subjects. The subjects were chosen to cover the moderating variables in this study; therefore, the students were studying in junior and senior years, in computer-related and language subjects, and in full-time and part-time modes. Ethical approval was obtained from the university to conduct this research involving undergraduates in a survey. The online survey included a consent statement indicating that participation was entirely voluntary and that students could withdraw at any point during the survey.
These students had to complete an individual assignment in which they submitted a screencast explaining their work. All the screencasts were 3 to 4 min in length, and the students had to record their voice in English to explain their work; however, they did not have to show their face. After the subjects had finished and the grades had been released, the students completed an online survey. This timing assured the students that their responses would not affect their grades. Table 2 shows the subject codes, subject names, and the number of respondents. The survey included a consent statement, and the students could opt out. Eventually, there were 203 valid responses. The demographics of the students are shown in Table 3.
Table 2 shows that the data collection spanned three academic years, from 2022/2023 to 2024/2025. There were noticeably fewer students in the SEHS4678 AI course because it is an elective only for final-year undergraduates. There are more computer-related subjects because the authors wished to minimize any technical difficulties among the students, as they were required to produce screencasts.

4.2. Research Instrument

The survey was separated into two parts. In part one, the questions collected the students’ demographic information, including their gender, age, and previous qualifications before joining the university. The results of this part are shown in Table 3.
Table 3 shows that the ratio of male to female students is about 81/19, which is consistent with the gender distribution in the subjects in the survey. In this table, junior year means the students were studying in year 1 or year 2 of a four-year bachelor’s degree program; likewise, senior year means they were studying in year 3 or year 4.
In part two of the survey, the questions were adopted and adapted from Khechine et al. (2014), who researched the role of gender and age in the intention to use webinars, and Wu and Chen (2017), who researched the continuance intention to use massive open online courses (MOOCs). Their questions were modified to mention screencasts instead of webinars and MOOCs. Furthermore, two questions on Future Utility (FU) were created by the authors of this research. The questions and their sources are shown in Table 4. In this part, the students rated their answers on a 5-point Likert scale, coded as 5 for “Strongly Agree”, 4 for “Agree”, 3 for “Neutral”, 2 for “Disagree”, and 1 for “Strongly Disagree”.
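As a simple illustration of this coding step, the following Python sketch maps the Likert labels to their numeric codes; the item column names (EE1, FU1) are hypothetical, not the questionnaire’s actual identifiers.

```python
import pandas as pd

# Mapping from Likert labels to the numeric codes used in the analysis.
likert_codes = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

# Hypothetical raw survey export: one column per questionnaire item.
raw = pd.DataFrame({
    "EE1": ["Agree", "Neutral", "Strongly Agree"],
    "FU1": ["Strongly Agree", "Agree", "Agree"],
})

# Recode every item column to the 1-5 numeric scale.
coded = raw.replace(likert_codes)
print(coded)
```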
PLS-SEM was used because it is more compatible with the settings of this study. It is more flexible, since it can estimate relationships between constructs without imposing distributional assumptions on the data, and it does not have strict requirements on sample size (Hair et al., 2019). Comparing the two most used structural equation modeling methods, CB-SEM and PLS-SEM, PLS-SEM is better at efficiently estimating formative models (Kono & Sato, 2023).

5. Results and Discussions

In this study, SmartPLS v4.1.1.2 was used to implement the PLS-SEM approach. All estimates were obtained through the PLS-SEM algorithm and bootstrapping with 5000 samples at the 0.05 significance level. There are two components in PLS-SEM. The first is the measurement model, which evaluates the validity and reliability of the constructs. The second is the structural model, which specifies the interactions and influences between constructs. The results of these two components are presented and discussed below.
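To make the bootstrapping procedure concrete, the sketch below illustrates the logic on simulated data. It is not the authors’ SmartPLS code: the construct scores are simulated, and the single path is estimated with a standardized least-squares slope as a stand-in for a PLS path coefficient.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated composite scores for two constructs (stand-ins for FU and ATT),
# one row per respondent; real scores come from the PLS algorithm.
n = 203
fu = rng.normal(3.7, 0.9, n)
att = 0.5 * fu + rng.normal(0.0, 0.7, n)

def path_coef(x, y):
    """Standardized slope of y on x (stand-in for a PLS path coefficient)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.polyfit(x, y, 1)[0]

estimate = path_coef(fu, att)

# Bootstrap with 5000 resamples, matching the SmartPLS settings.
boot = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n, n)        # resample respondents with replacement
    boot[b] = path_coef(fu[idx], att[idx])

se = boot.std(ddof=1)
t_value = estimate / se
print(f"path = {estimate:.3f}, t = {t_value:.2f}, "
      f"significant at the 0.05 level: {abs(t_value) > 1.96}")
```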

5.1. Measurement Model Assessment

Table 5 shows the validity and reliability of the collected data. Reliability testing is important for assessing the consistency, stability, and dependability of the measurement data, and Hair et al. (2020) suggest acceptable ranges for the relevant statistics. Cronbach’s alpha and composite reliability (CR) were used to measure internal consistency; they should not be below 0.7 for a latent variable to be considered dependable (Hair et al., 2017). In this study, Cronbach’s alpha and CR ranged between 0.866 and 0.977, surpassing the 0.7 threshold. Moreover, the average variance extracted (AVE) was used to evaluate convergent validity; it should exceed 0.5 to indicate that the latent variables have ideal convergent validity (Hair et al., 2017). In this study, the AVE values ranged between 0.820 and 0.933, all surpassing the 0.5 threshold. These results indicate that the indicators for each construct strongly align with their intended meaning, providing strong evidence that the constructs were measured accurately and consistently.
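A minimal sketch of how these statistics are computed, using illustrative loadings and simulated item scores rather than the study’s data:

```python
import numpy as np

# Illustrative standardized outer loadings for one construct's three items
# (not the study's actual loadings).
loadings = np.array([0.91, 0.93, 0.90])
errors = 1.0 - loadings**2                 # indicator error variances

# Composite reliability (CR) for a reflective construct.
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

# Average variance extracted (AVE): mean of the squared loadings.
ave = (loadings**2).mean()

def cronbach_alpha(items):
    """Cronbach's alpha from an item-score matrix (rows = respondents)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Simulated item scores for 203 respondents answering 3 related items.
rng = np.random.default_rng(1)
base = rng.normal(0, 1, (203, 1))
items = base + rng.normal(0, 0.4, (203, 3))

print(f"CR = {cr:.3f} (threshold 0.7), AVE = {ave:.3f} (threshold 0.5), "
      f"alpha = {cronbach_alpha(items):.3f}")
```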
Discriminant validity implies that a construct is unique and is not represented by the other constructs in the model (Hair et al., 2017). To examine the discriminant validity of the constructs, the Fornell–Larcker criterion was applied, as shown in Table 6. According to Ab Hamid et al. (2017), the square root of a construct’s AVE should be greater than its correlations with the other constructs. As a result, all constructs were confirmed to have acceptable discriminant validity.
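The criterion itself reduces to a simple matrix comparison. The following sketch uses hypothetical AVE values and inter-construct correlations, not the figures from Table 6:

```python
import numpy as np

constructs = ["EE", "PE", "FU", "ATT", "BI"]
ave = np.array([0.85, 0.88, 0.82, 0.90, 0.93])   # hypothetical AVE values

# Hypothetical inter-construct correlation matrix.
corr = np.array([
    [1.00, 0.55, 0.50, 0.48, 0.45],
    [0.55, 1.00, 0.60, 0.58, 0.52],
    [0.50, 0.60, 1.00, 0.65, 0.60],
    [0.48, 0.58, 0.65, 1.00, 0.80],
    [0.45, 0.52, 0.60, 0.80, 1.00],
])

# Fornell-Larcker: sqrt(AVE) of each construct must exceed its largest
# absolute correlation with any other construct.
for i, name in enumerate(constructs):
    others = np.delete(corr[i], i)
    print(f"{name}: sqrt(AVE) = {np.sqrt(ave[i]):.3f}, "
          f"max |corr| = {np.abs(others).max():.2f}, "
          f"passes: {np.sqrt(ave[i]) > np.abs(others).max()}")
```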
To diagnose the presence of collinearity, the variance inflation factor (VIF) was computed. Hair et al. (2020) suggest that VIF values of 5 or above indicate critical collinearity issues. In Table 7, all VIF values are below the threshold of 5, meaning that the constructs do not overlap excessively in statistical terms. Thus, the measurement model assessment confirms the reliability and validity of the constructs, allowing further analysis.
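Collinearity checks of this kind can be reproduced with statsmodels; a sketch assuming simulated construct scores (the column names mirror the model’s predictors of ATT, but the data are not the study’s):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated predictor construct scores for the ATT equation; the study
# uses latent variable scores produced by SmartPLS instead.
rng = np.random.default_rng(7)
scores = pd.DataFrame(rng.normal(size=(203, 3)), columns=["EE", "PE", "FU"])

X = sm.add_constant(scores)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # values below 5 suggest no critical collinearity
```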
In summary, these analyses show that the measurement model is both reliable and valid, providing a sound foundation for interpreting the structural paths in the study. The next section examines the structural model assessment, in which the hypotheses were tested using the survey results.

5.2. Structural Model Assessment

Table 8 shows the structural model with the results from the survey. The path coefficient, t value, p value, R2, and f2 are used to test the hypotheses. The strength of the association between latent variables can be determined from the path coefficient. Huber et al. (2007) state that path coefficients are considered to account for a particular influence within the model when they exceed 0.100 at the 0.05 level. The t value measures the size of the effect relative to the variation in the sample data. The t value of a path should be greater than 1.96 to be deemed significant, as 1.96 is the traditional critical value of the t statistic for two-tailed tests at a significance level of 0.05 (Winship & Zhuo, 2020). Therefore, hypotheses H1, H2, H3, and H4 are accepted.
Figure 5 is a visual summary of the strengths of the path coefficients in the proposed model; the thickness of each arrow is proportional to the path coefficient. It shows that Future Utility (FU) had the strongest path coefficient (0.476) in influencing students’ attitudes toward SCSs. This reflects how university students perceive the value of learning technologies. Unlike traditional constructs such as Performance Expectancy and Effort Expectancy, which emphasize immediate learning benefits or ease of use, FU captures a forward-looking belief—that the technology will serve meaningful purposes beyond the current course. This future-oriented mindset aligns with the trend in higher education emphasizing employability, lifelong learning, and transferable skills development.
As digital transformation is taking place in many industries, university students are aware that digital competence and communication skills are vital for professional success in the modern workplace. Creating screencasts requires students to articulate technical and conceptual understanding clearly, manage digital recording tools, and present information in a clear and organized manner. These are skills that align closely with employability attributes such as critical thinking, self-presentation, and digital literacy. Hence, students may perceive screencast production not merely as an academic task but as practical training for the future workplace.
This finding underscores the importance of educators framing innovative assessment tools like SCSs not solely in terms of academic benefit but as opportunities for skill-building that contribute to students’ lifelong learning trajectories and employability in a rapidly evolving digital economy.
The relatively modest influence (0.157) of Effort Expectancy (EE) on students’ attitudes toward Student-Created Screencasts (SCSs) can likely be attributed to today’s students’ high levels of digital familiarity and technological confidence. EE, as defined in the Unified Theory of Acceptance and Use of Technology (UTAUT), refers to the perceived ease of learning and using a system. In the context of this study, it captures students’ perceptions of how easy it is to create and record screencasts. However, for most contemporary learners—particularly those in higher education—technological engagement is an everyday norm rather than an obstacle.
The current generation of students have grown up using multimedia tools, mobile devices, and cloud-based applications for both academic and personal tasks. They are accustomed to interacting with software that supports communication and content creation—such as video editors, screen recorders, and presentation platforms—which makes the process of generating screencasts relatively intuitive. Consequently, variations in perceived ease of use are less salient compared to other constructs like Future Utility or Performance Expectancy. In essence, because digital competence is already assumed, ease of use does not strongly differentiate attitudes toward SCSs among this population.
Another reason may be the high level of support provided in the courses studied. The students were given the choice to use a free, browser-based recording tool, with clear instructional guidance from the lecturers. When infrastructure, access, and support are consistent across participants, perceived effort ceases to be a meaningful determinant of acceptance. This aligns with previous technology adoption research suggesting that when usability barriers are low, the predictive power of EE decreases (Venkatesh et al., 2003).
Furthermore, the modest effect of EE may also reflect a shift in motivational focus among students. Rather than questioning how to use the technology, students are increasingly concerned with why they should use it; that is, with the technology’s long-term value and its impact on learning outcomes or employability. As the findings indicate, constructs such as Future Utility and Performance Expectancy better capture those motivational dimensions.
Overall, the relatively weak influence of EE suggests that for modern university students, perceived effort is no longer a major obstacle to adoption. Instead, acceptance depends more heavily on the perceived meaningfulness and future-oriented benefits of the technology. This implies that educators designing novel assessment formats such as SCSs should focus less on reducing technical difficulty and more on clearly communicating the tool’s pedagogical and professional relevance.

5.3. Moderating Effect

Table 9 shows the moderating variables and their effects in the structural model. Moderating effects with p values exceeding 0.05 are considered not statistically significant. Therefore, none of the moderators was significant except for H1b, which states that the students’ year of study moderates the influence of EE on ATT.
The only significant moderating effect was found for H1b, where Year of Study significantly moderated the relationship between Effort Expectancy (EE) and Attitude (ATT) (p = 0.020). The negative coefficient (−0.103) suggests that as students advance from junior to senior years, the influence of perceived ease of use on their attitude decreases. In other words, senior students are less affected by how easy the screencast task feels. To find the actual effect of the subgroups on the structural model, and whether the demographic groups exert other differences on the moderators, a Multi-Group Analysis (MGA) was conducted. The results are presented and discussed below.

5.4. Multi-Group Analysis (MGA)

To determine whether the constructs have the same meaning and relationships across groups, a Multi-Group Analysis (MGA) was conducted. In Table 10, the groups were compared using two-tailed bootstrapping with 5000 samples at the 0.05 level.
Table 10 shows the path coefficients of the structural model estimated separately on the data from each subgroup. Each row lists the path coefficients of the subgroups in that category, the difference in the path coefficients, and the one-tailed and two-tailed p-values. Overall, the results show very few meaningful differences between groups based on gender, discipline, or mode of study, as nearly all path differences were statistically non-significant. The only significant difference in path coefficients, from EE to ATT, was on Year of Study.
The path was significantly stronger for junior students (0.349) than for senior students (0.108), with both one-tailed (p = 0.021) and two-tailed (p = 0.042) values indicating significance. This suggests that ease of use matters more for juniors, while seniors are less influenced by perceived effort. Similarly, for the path coefficients from ATT to BI, the relationship was significantly stronger among junior students (0.916) compared to seniors (0.763), with p-values indicating strong significance (one-tailed p = 0.007; two-tailed p = 0.013). This means that juniors’ intentions to adopt screencasts are more strongly shaped by their attitude toward it.
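The group-comparison logic behind such an analysis can be sketched as follows. This is a simplified permutation-style illustration on simulated data, using a standardized slope as a stand-in for the PLS path estimate; it is not SmartPLS’s exact MGA procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope(x, y):
    """Standardized slope, a simplified stand-in for a PLS path coefficient."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.polyfit(x, y, 1)[0]

# Simulated EE and ATT composite scores for two subgroups.
ee_jr = rng.normal(3.5, 0.8, 100)
att_jr = 0.35 * ee_jr + rng.normal(0, 0.7, 100)   # juniors: stronger path
ee_sr = rng.normal(3.6, 0.8, 103)
att_sr = 0.10 * ee_sr + rng.normal(0, 0.7, 103)   # seniors: weaker path

observed = slope(ee_jr, att_jr) - slope(ee_sr, att_sr)

# Permutation test: shuffle group membership and recompute the difference.
ee_all = np.concatenate([ee_jr, ee_sr])
att_all = np.concatenate([att_jr, att_sr])
n_jr = len(ee_jr)
diffs = np.empty(5000)
for b in range(5000):
    perm = rng.permutation(len(ee_all))
    j, s = perm[:n_jr], perm[n_jr:]
    diffs[b] = slope(ee_all[j], att_all[j]) - slope(ee_all[s], att_all[s])

p_two_tailed = (np.abs(diffs) >= abs(observed)).mean()
print(f"path difference = {observed:.3f}, two-tailed p = {p_two_tailed:.3f}")
```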
The absence of significant differences in the structural paths across gender, discipline, and study mode suggests that students perceive SCS very similarly in these contexts. This similarity in perception becomes clearer when viewed through the themes established in the Literature Review, particularly the roles of authenticity, ownership, and cognitive engagement in student-created content. Because all students completed the SCS assignment under similar conditions—after receiving a demonstration video, with access to technical support, and following consistent assignment requirements—these pedagogical features may have neutralized potential demographic differences. As noted in the review of constructivist and authentic assessment literature, when learners engage in tasks that require them to explain their reasoning and make their thinking visible, this high degree of cognitive engagement is largely universal (Chi et al., 1994; Lombardi, 2008).
The SCS in this research involves short voice-narrated videos of students explaining their own solutions to real-world problems—either by responding to customers in writing or by coding a machine learning program. This consistency aligns with calls for authentic, process-focused assessment designs that prioritize real-world communication and demonstrable understanding over polished written products (Qureshi, 2024; Osamor, 2023). As discussed in the Literature Review section, such assessments are more resistant to misuse by generative AI because they foreground students’ reasoning processes and the situational application of knowledge. When students orally present their work, they naturally demonstrate ownership, reducing ambiguity about authorship and mitigating academic integrity concerns that have intensified in the AI era.
Since the SCS design inherently supports cognitive engagement, authenticity, and ownership for all learners, it is unsurprising that constructs in the modified UTAUT influenced attitude consistently across demographic groups. Therefore, the essential pedagogical elements in SCSs—authenticity, ownership, and cognitive engagement—operate similarly for students, regardless of gender, discipline, or study mode, resulting in negligible subgroup differences observed in the analysis.
To summarize, the moderation and Multi-Group Analysis revealed some interesting insights. Notably, gender, discipline of study, and mode of study did not significantly moderate the relationships between Effort Expectancy (EE), Performance Expectancy (PE), or Future Utility (FU) and Attitude (ATT). However, a notable exception was observed with the Year of Study, which moderated the relationship between EE and ATT.

5.5. Students’ Attitudes (ATT) Towards SCSs

Students’ general attitude (ATT) towards SCSs is crucial. The survey contained three questions regarding students’ attitudes towards SCSs: “I believe that creating Screencasts is a good idea in this subject.”, “I believe that creating Screencasts is good advice in this subject.”, and “I have a good impression about Screencasts.” As the survey used a 5-point Likert scale, in which “1” means “strongly disagree”, “3” means “neutral”, and “5” means “strongly agree”, it is crucial to determine whether there is a significant difference between “3” and the average response. To this end, a one-sample t-test was performed.
In this study, the average ATT across all samples is 3.714. To determine whether the students indeed have a positive attitude towards SCSs, we need to establish whether this average is statistically significantly higher than 3; for this, we adopted the critical t-value method (Ross & Willson, 2017). According to Ross and Willson (2017), the formula for the t-value in a one-sample t-test is shown below.
t = (X̄ − μ) / (s / √n)
where X̄ is the sample average (3.714), μ is the hypothesized average (3), s is the sample standard deviation (0.922), and n is the number of samples (203).
According to the formula above, the t-value is 11.034. The critical t-value is 1.972 and the p-value is less than 0.001. As the t-value of the sample is greater than the critical t-value and the p-value is smaller than 0.05, hypothesis H5 was accepted. This means that, overall, the students had a significantly positive attitude towards SCSs.
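These figures can be verified directly from the summary statistics reported above; a short sketch using scipy:

```python
import math
from scipy import stats

# Summary statistics reported in the text.
x_bar, mu, s, n = 3.714, 3.0, 0.922, 203

t_value = (x_bar - mu) / (s / math.sqrt(n))   # about 11.03
df = n - 1
critical = stats.t.ppf(0.975, df)             # about 1.972 (two-tailed, alpha = 0.05)
p_value = 2 * stats.t.sf(abs(t_value), df)    # effectively zero

print(f"t = {t_value:.3f}, critical t = {critical:.3f}, p = {p_value:.2e}")
```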

5.6. Practical Implications

The findings of this study highlight the potential of SCS as an innovative and effective assessment tool in higher education. To encourage students’ acceptance of SCS, educators should focus on making students aware of the long-term benefits of screencast creation, such as its applicability in professional settings like online interviews and client presentations. Communicating these future-oriented advantages can enhance students’ perceptions of the value of SCS, particularly through the lens of skill development and career preparation.
The lack of significant variations in student perceptions across gender, discipline of study, and mode of study implies that the acceptance of Student-Created Screencast (SCS) is relatively uniform across diverse student demographics, making this approach broadly applicable across different groups of learners.
Another critical implication is the need to address students’ concerns about the perceived effort involved in creating screencasts. Providing clear instructions, accessible tools, and technical support can help students overcome initial hurdles and develop confidence in their ability to produce high-quality screencasts. Additionally, embedding screencast training into the curriculum, particularly in the early years, can help normalize this form of assessment and reduce the resistance often associated with unfamiliar tasks. For students in later years, educators might consider integrating SCSs into projects that align with their existing expertise, or providing support to help students transition from traditional assessments to this more technology-driven and innovative form of assessment.
Finally, there are implications for teachers and higher education institutions too. Firstly, teachers need to develop new assessment rubrics and feedback practices suitable for SCSs. Secondly, teachers may resist this innovative form of assessment due to the extra workload required to view and mark the SCSs; higher education institutions may therefore consider using AI-assisted tools to help teachers mark SCSs without a disproportionate increase in effort. Institutions must also ensure that this new assessment form is scalable, with sufficient storage capacity for the student submissions, which are clearly much larger in size than traditional written assessments. Furthermore, institutions must ensure that no student is disadvantaged by lacking the necessary hardware, such as a multimedia computer with a microphone; this point is especially relevant in less developed regions. Likewise, students who have special educational needs in terms of spoken language should be given sufficient support to complete assessments using screencasts.

6. Conclusions

This research examined university students’ acceptance of student-created screencasts as assignments. Given the potential benefits of student-created screencasts according to the principles of constructivism, this study contributes to narrowing the imbalance between research on teacher-created and on student-created screencasts as pedagogical tools. Using the modified UTAUT, we identified key factors influencing students’ attitudes and behavioral intentions toward SCSs. Among these, Future Utility (FU) emerged as the most significant predictor of students’ attitudes, indicating that students are more likely to embrace SCSs when they see their relevance and applicability in future professional or academic contexts. Performance Expectancy (PE) and Effort Expectancy (EE) also played important roles, underscoring the need to balance the perceived benefits of SCSs with the effort required to create them. In conclusion, this study underscores the potential of student-created screencasts as a transformative educational tool. By addressing the identified limitations and pursuing new research directions, educators and researchers can further unlock the pedagogical value of SCSs, paving the way for their broader adoption and integration into higher education.
As hypotheses H1, H2, H3, H4, and H5 are accepted, the modified UTAUT model confirms that students’ attitudes (ATT) toward SCSs are significantly influenced by EE, PE, and FU. According to the results in Table 8, Future Utility (FU) exhibited the strongest influence on ATT. This implies that students are more likely to have a favorable perception of SCSs if they recognize the relevance of screencasts for future use. This finding aligns with the principles of constructivism, which emphasize the importance of practical and transferable skills in learning. Performance Expectancy (PE) also demonstrated a significant, strong positive effect on ATT, indicating that students perceive SCSs as a tool that can enhance the quality of their learning activities and confirming the potential of student-generated content, in the form of screencasts, to improve learning outcomes. However, the modest contribution of EE to ATT suggests that while ease of use is important, students’ acceptance of SCSs is primarily driven by perceived benefits rather than by the simplicity of the process. Finally, behavioral intention (BI) toward using SCSs was significantly influenced by ATT. This strong relationship between ATT and BI means it is important to induce favorable perceptions among students by providing helpful guidelines, student-friendly tools, and constructive feedback on screencast assignments.

6.1. Limitations

We acknowledge this study has several limitations. First, the research was conducted within a single institution, limiting the generalizability of the findings. Future research should replicate this study across multiple institutions and cultural contexts to confirm the results and explore any variations. Second, the reliance on self-reported survey data introduces the possibility of response bias. Incorporating qualitative methods, such as interviews or focus groups, could provide richer insights into students’ experiences and perceptions of SCS. Third, while this study focused on students’ acceptance of SCS, it did not directly measure the impact of SCS on learning outcomes. Future research could examine whether SCS lead to measurable improvements in academic performance, engagement, or skill development.

6.2. Future Research Directions

Looking ahead, there are many opportunities to build on the findings of this study. Cross-cultural validation of the modified UTAUT could be performed, and potential integration between SCSs and AI literacy frameworks could be explored. Researchers could examine the use of SCSs in different disciplines to assess their adaptability and effectiveness across diverse subject areas. Additionally, the role of instructor support, feedback, and training in influencing students’ acceptance and performance with SCSs warrants further investigation. Longitudinal studies could assess the long-term impact of using SCSs on students’ professional readiness and employability. Finally, future research could examine how SCSs might foster collaborative learning by incorporating peer feedback and group-based screencast assignments.

Author Contributions

Conceptualization, A.W.; methodology, A.W.; software, L.L.C.; validation, A.W.; formal analysis, A.W.; investigation, A.W., K.T. and S.L.; resources, A.W., K.T. and S.L.; data curation, A.W., K.T. and S.L.; writing—original draft preparation, A.W.; writing—review and editing, A.W. and L.L.C.; visualization, A.W. and L.L.C.; supervision, A.W.; project administration, A.W.; funding acquisition, A.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Ref: UGC/FDS24/H24/24).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Committee of College of Professional and Continuing Education, The Hong Kong Polytechnic University (RC Ref No: RC/ETH/H/401, approved on 13 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
SCS	Student-Created Screencast/Student-Created Screencasts
UTAUT	Unified Theory of Acceptance and Use of Technology
EE	Effort Expectancy
PE	Performance Expectancy
FU	Future Utility
ATT	Attitude
BI	Behavioral Intention
MGA	Multi-Group Analysis

References

  1. Ab Hamid, M. R., Sami, W., & Sidek, M. M. (2017). Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. Journal of Physics: Conference Series, 890(1), 012163. [Google Scholar] [CrossRef]
  2. Alkouk, W. A., & Khlaif, Z. N. (2024). AI-resistant assessments in higher education: Practical insights from faculty training workshops. Frontiers in Education, 9, 1499495. [Google Scholar] [CrossRef]
  3. Bagozzi, R. P. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8(4), 244–254. [Google Scholar] [CrossRef]
  4. Bernacki, M. L., Chavez, M. M., & Uesbeck, P. M. (2020). Predicting achievement and providing support before STEM majors begin to fail. Computers & Education, 158, 103999. [Google Scholar] [CrossRef]
  5. Bindu, M. R., & Manikandan, R. (2020). Can humans take medicines to become immortal? A review of Amish Tripathi’s shiva trilogy. European Journal of Molecular & Clinical Medicine, 7(3), 4894–4897. Available online: https://www.ejmcm.com/archives/volume-7/issue-3/7447 (accessed on 1 December 2025).
  6. Buabeng-Andoh, C., & Baah, C. (2020). Pre-service teachers’ intention to use learning management system: An integration of UTAUT and TAM. Interactive Technology and Smart Education, 17(4), 455–474. [Google Scholar] [CrossRef]
  7. Cardace, A., Hefferon, K., Levina, A., Linn, M., Salehi, S., & Sato, B. K. (2024). Versatile video assignment improves undergraduates’ learning and confidence. Active Learning in Higher Education. Advance online publication. [Google Scholar] [CrossRef]
  8. Chaka, C. (2023a). Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools. Journal of Applied Learning and Teaching, 6(2), 94–104. [Google Scholar] [CrossRef]
  9. Chaka, C. (2023b). Stylised-facts view of fourth industrial revolution technologies impacting digital learning and workplace environments: ChatGPT and critical reflections. Frontiers in Education, 8, 1150499. [Google Scholar] [CrossRef]
  10. Chi, M. T. H., De Leeuw, N., Chiu, M.-H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18(3), 439–477. [Google Scholar] [CrossRef]
  11. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  12. Dehouche, N. (2021). Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Ethics in Science and Environmental Politics, 21, 17–23. [Google Scholar] [CrossRef]
  13. Din Eak, P. N., & Annamalai, N. (2024). Enhancing online learning: A systematic literature review exploring the impact of screencast feedback on student learning outcomes. Asian Association of Open Universities Journal, 19(1), 45–62. [Google Scholar] [CrossRef]
  14. Dunn, P. K., McDonald, C., & Loch, B. (2015). StatsCasts: Screencasts for complementing lectures in statistics courses. International Journal of Mathematical Education in Science and Technology, 46(4), 521–532. [Google Scholar] [CrossRef]
  15. Ernst, C. P. H., Wedel, K., & Rothlauf, F. (2014, August 7–9). Students’ acceptance of e-learning technologies: Combining the technology acceptance model with the didactic circle. Twentieth Americas Conference on Information Systems, Savannah, GA, USA. Available online: https://www.researchgate.net/publication/288103752_Students’_acceptance_of_e-learning_technologies_Combining_the_technology_acceptance_model_with_the_didactic_circle (accessed on 3 November 2025).
  16. Ghilay, Y., & Ghilay, R. (2015). Computer courses in higher-education: Improving learning by screencast technology. i-manager’s Journal on Educational Technology, 11(4), 15–26. [Google Scholar] [CrossRef]
  17. Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). Sage Publications Inc. [Google Scholar]
  18. Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. [Google Scholar] [CrossRef]
  19. Hair, J. F., Jr., Howard, M. C., & Nitzl, C. (2020). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. Journal of Business Research, 109, 101–110. [Google Scholar] [CrossRef]
  20. Hosseini, M., Rasmussen, L. M., & Resnik, D. B. (2023). Using AI to write scholarly publications. Accountability in Research, 31, 715–723. [Google Scholar] [CrossRef] [PubMed]
  21. Huber, F., Herrmann, A., Meyer, F., Vogel, J., & Vollhardt, K. (2007). Kausalmodellierung mit partial least squares: Eine anwendungsorientierte einführung. Gabler. [Google Scholar]
  22. Jones, M., & Sheridan, L. (2015). Back translation: An emerging sophisticated cyber strategy to subvert advances in ‘digital age’ plagiarism detection and prevention. Assessment & Evaluation in Higher Education, 40(5), 712–724. [Google Scholar] [CrossRef]
  23. Kawaf, F. (2019). Capturing digital experience: The method of video videography. International Journal of Research in Marketing, 36(2), 169–184. [Google Scholar] [CrossRef]
  24. Khechine, H., Lakhal, S., Pascot, D., & Bytha, A. (2014). UTAUT model for blended learning: The role of gender and age in the intention to use webinars. Interdisciplinary Journal of E-Learning and Learning Objects, 10(1), 33–52. Available online: https://eric.ed.gov/?id=EJ1058362 (accessed on 3 November 2025).
  25. Khlaif, Z. N. (2024). Rethinking educational assessment in the age of artificial intelligence. In Advances in educational technologies and instructional design book series (pp. 89–115). IGI Global. [Google Scholar] [CrossRef]
  26. Killam, L. A., Montgomery, P., Luhanga, F. L., Adamic, S., & Carter, L. M. (2024). Co-creation as authentic assessment to support student learning and readiness for practice: A conceptual framework. Teaching and Learning in Nursing, 19(1), e276–e282. [Google Scholar] [CrossRef]
  27. Kono, S., & Sato, M. (2023). The potentials of partial least squares structural equation modeling (PLS-SEM) in leisure research. Journal of Leisure Research, 54(3), 309–329. [Google Scholar] [CrossRef]
  28. Korkmaz, S., & Öz, H. (2021). Using Kahoot to improve reading comprehension of English as a foreign language learner. International Online Journal of Education and Teaching (IOJET), 8(2), 1138–1150. Available online: https://eric.ed.gov/?id=EJ1294319 (accessed on 3 November 2025).
  29. Lombardi, M. M. (2008). Making the grade: The role of assessment in authentic learning. EDUCAUSE Learning Initiative. Available online: https://library.educause.edu/resources/2008/1/making-the-grade-the-role-of-assessment-in-authentic-learning (accessed on 3 November 2025).
  30. Lye, C. Y., & Lim, L. (2024). Generative artificial intelligence in tertiary education: Assessment redesign principles and considerations. Education Sciences, 14(6), 569. [Google Scholar] [CrossRef]
  31. Lynch, M. (2019, October 15). Types of classroom interventions. Available online: https://www.theedadvocate.org/types-of-classroom-interventions/ (accessed on 10 February 2022).
  32. Malkawi, N., Awajan, N. W., Alghazo, K. M., & Harafsheh, H. A. (2023). The effectiveness of using student-created questions for assessing their performance in English grammar/case study of “King Abdullah II schools for excellence”. World Journal of English Language, 13(5), 156–170. [Google Scholar] [CrossRef]
  33. Moon, Y. J., & Hwang, Y. H. (2016). A study of effects of UTAUT-based factors on acceptance of smart health care services. In Advanced multimedia and ubiquitous engineering: Future information technology volume 2 (pp. 317–324). Springer. [Google Scholar] [CrossRef]
  34. Morris, C., & Chikwa, G. (2014). Videos: How effective are they and how do students engage with them? Active Learning in Higher Education, 15(1), 25–37. Available online: https://journals.sagepub.com/doi/abs/10.1177/1469787413514654 (accessed on 3 November 2025).
  35. Mullamphy, D. F., Higgins, P. J., Belward, S. R., & Ward, L. M. (2010). To screencast or not to screencast. The ANZIAM Journal, 51, C446–C460. [Google Scholar] [CrossRef]
  36. Negahban, A., & Chung, C.-H. (2014). Discovering determinants of users perception of mobile device functionality fit. Computers in Human Behavior, 35, 75–84. [Google Scholar] [CrossRef]
  37. Nguyen, H., & Nguyen, V. A. (2024). An application of model unified theory of acceptance and use of technology (UTAUT): A use case for a system of personalized learning based on learning styles. International Journal of Information and Education Technology, 14(11), 1574–1582. [Google Scholar] [CrossRef]
  38. Or, C. (2023). The role of attitude in the unified theory of acceptance and use of technology: A meta-analytic structural equation modelling study. International Journal of Technology in Education and Science, 7(4), 552–570. [Google Scholar] [CrossRef]
  39. Orús, C., Barlés, M. J., Belanche, D., Casaló, L., Fraj, E., & Gurrea, R. (2016). The effects of learner-generated videos for YouTube on learning outcomes and satisfaction. Computers & Education, 95, 254–269. [Google Scholar] [CrossRef]
  40. Osamor, A. (2023). Rethinking online assessment strategies: Authenticity versus AI chatbot intervention. Journal of Applied Learning and Teaching, 6(2), 2. [Google Scholar] [CrossRef]
  41. Penn, M., & Brown, M. (2022). Is screencast feedback better than text feedback for student learning in higher education? A systematic review. Ubiquitous Learning: An International Journal, 15(2), 1–18. [Google Scholar] [CrossRef]
  42. Pereira, J., Echeazarra, L., Sanz-Santamaría, S., & Gutiérrez, J. (2014). Student-generated online videos to develop cross-curricular and curricular competencies in Nursing Studies. Computers in Human Behavior, 31, 580–590. [Google Scholar] [CrossRef]
  43. Peterson, E. (2007). Incorporating videos in online teaching. International Review of Research in Open and Distributed Learning, 8(3), 1–4. [Google Scholar] [CrossRef]
  44. Pinder-Grover, T., Green, K. R., & Millunchick, J. M. (2011). The efficacy of screencasts to address the diverse academic needs of students in a large lecture course. Advances in Engineering Education, 2(3), 1–13. Available online: https://eric.ed.gov/?id=EJ1076056 (accessed on 3 November 2025).
  45. Qureshi, F. Z. (2024). Redesigning assessments in business education: Addressing generative AI’s impact on academic integrity. International Journal of Science and Research, 13(10), 1659–1666. [Google Scholar] [CrossRef]
  46. Ross, A., & Willson, V. L. (2017). One-sample t-test. In Basic and advanced statistical tests. SensePublishers. [Google Scholar] [CrossRef]
  47. Sari, N. P. W. P., Duong, M. P. T., Li, D., Nguyen, M. H., & Vuong, Q. H. (2024). Rethinking the effects of performance expectancy and effort expectancy on new technology adoption: Evidence from Moroccan nursing students. Teaching and Learning in Nursing, 19(3), e557–e565. [Google Scholar] [CrossRef]
  48. Shieh, R. S. (2012). The impact of Technology-Enabled Active Learning (TEAL) implementation on student learning and teachers’ teaching in a high school context. Computers & Education, 59(2), 206–214. [Google Scholar] [CrossRef]
  49. van der Walt, L., & Bosch, C. (2025). Co-creating OERs in computer science education to foster intrinsic motivation. Education Sciences, 15(7), 785. [Google Scholar] [CrossRef]
  50. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
  51. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. [Google Scholar] [CrossRef]
  52. Williamson, B., Macgilchrist, F., & Potter, J. (2023). Re-examining AI, automation and datafication in education. Learning, Media and Technology, 48(1), 1–5. [Google Scholar] [CrossRef]
  53. Winship, C., & Zhuo, X. (2020). Interpreting t-statistics under publication bias: Rough rules of thumb. Journal of Quantitative Criminology, 36(2), 329–346. [Google Scholar] [CrossRef]
  54. Wong, C., Delante, N. L., & Wang, P. (2017). Using PELA to predict international business students’ English writing performance with contextualised English writing workshops as intervention program. Journal of University Teaching & Learning Practice, 14(1), 15. [Google Scholar] [CrossRef]
  55. Wu, B., & Chen, X. (2017). Continuance intention to use MOOCs: Integrating the technology acceptance model (TAM) and task technology fit (TTF) model. Computers in Human Behavior, 67, 221–232. [Google Scholar] [CrossRef]
  56. Yankova, D. (2020). On translated plagiarism in academic discourse. English Studies at NBU, 6(2), 189–200. [Google Scholar] [CrossRef]
Figure 1. Student-created screencast (SCS).
Figure 2. Example of assignment instructions.
Figure 3. Anonymized transcription.
Figure 4. Research model.
Figure 5. SmartPLS output showing the path coefficients of the model.
Table 1. Qualitative analysis: teacher-created screencast research.

| Creator | Focus | Author(s) | Main Research Task(s) |
|---|---|---|---|
| Teachers | Supplementary Learning Materials | Morris and Chikwa (2014) | Examined customized screencasts as optional additional learning resources. |
| Teachers | Supplementary Learning Materials | Mullamphy et al. (2010) | Documented mathematics lecturers creating screencasts for student support. |
| Teachers | Feedback Delivery | Din Eak and Annamalai (2024) | Reviewed screencast feedback in online higher education. |
| Teachers | Feedback Delivery | Penn and Brown (2022) | Conducted a systematic review comparing screencast feedback with text feedback. |
| Teachers | Lecture Enhancement and Recording | Pinder-Grover et al. (2011) | Documented instructor-developed screencasts posted to supplement lectures. |
| Teachers | Lecture Enhancement and Recording | Ghilay and Ghilay (2015) | Examined courses fully covered by instructor-produced screencast videos. |
| Teachers | Specialized Subject Support | Dunn et al. (2015) | Analyzed “StatsCasts” with lecturer-provided narration. |
| Teachers | Specialized Subject Support | Mullamphy et al. (2010) | Documented mathematics lecturers creating screencasts for student support. |
| Students | Optional Assessment Alternative | Orús et al. (2016) | Allowed students to create videos explaining marketing concepts in place of a written report. |
Table 2. Respondents from different subjects.

| Subject Code | Subject Name | Academic Year | Number of Respondents |
|---|---|---|---|
| SEHS4696 | Machine Learning for Data Mining | 24/25 | 49 |
| SEHS2307 | Computer Programming Concepts | 22/23 | 37 |
| SEHH2042 | Computer Programming | 24/25 | 33 |
| SEHS4517 | Web Application Development and Management | 22/23 | 29 |
| SEHS4678 | Artificial Intelligence | 22/23 | 26 |
| LCS3175 | Effective Professional Communication in English | 24/25 | 21 |
| SEHS4678 | Artificial Intelligence | 24/25 | 8 |
Table 3. Demographic questions (n = 203).

| Characteristic | Item | Number | Percentage (%) |
|---|---|---|---|
| Gender | Male | 165 | 81.28 |
| Gender | Female | 38 | 18.72 |
| Discipline of Study | Science | 162 | 79.80 |
| Discipline of Study | Non-Science | 40 | 19.70 |
| Discipline of Study | Not answered | 1 | |
| Mode of Study | Full-time | 169 | 83.25 |
| Mode of Study | Part-time | 33 | 16.26 |
| Mode of Study | Not answered | 1 | |
| Year of Study | Junior year | 33 | 16.26 |
| Year of Study | Senior year | 170 | 83.74 |
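As a brief arithmetic note (our inference from the reported figures, not stated explicitly in the table), the percentages appear to be computed against all 203 respondents even for the characteristics with one missing answer, for example:

$$ \frac{165}{203} \approx 81.28\%, \qquad \frac{162}{203} \approx 79.80\%, \qquad \frac{40}{203} \approx 19.70\% $$

This explains why the Discipline of Study and Mode of Study percentages sum to slightly less than 100%.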
Table 4. Survey questions on a 5-point Likert scale.

| Construct / Questionnaire Item (5-Point Likert Scale) | Source |
|---|---|
| Performance Expectancy (PE) | |
| Creating screencasts improves the quality of my learning activities. | Adapted from Khechine et al. (2014) |
| Creating screencasts makes my learning activities more effective. | |
| If I create screencasts, I will improve the skills I want to learn. | |
| Effort Expectancy (EE) | |
| It will be easy for me to create screencasts. | Adapted from Khechine et al. (2014) |
| The steps to create screencasts are clear to me. | |
| It’ll be easy for me to become skillful at creating screencasts. | |
| Future Utility (FU) | |
| I can use my screencasts to help me revise my learning tasks. | Authors of this research |
| I can use my screencasts to help me demonstrate my work to my colleagues at work. | |
| Attitude (ATT) | |
| I believe that creating screencasts is a good idea in this subject. | Adapted from Wu and Chen (2017) |
| I believe that creating screencasts is good advice in this subject. | |
| I have good impression about screencasts. | |
| Behavioral Intention (BI) | |
| I intend to create screencasts in future. | Adapted from Khechine et al. (2014) |
| I predict I will create screencasts in future. | |
| I plan to create screencasts in future. | |
Table 5. Construct reliability and validity (Cronbach’s alpha and CR ≥ 0.7 indicate good reliability).

| Latent Construct | Cronbach’s Alpha | Composite Reliability (CR) | AVE |
|---|---|---|---|
| Effort Expectancy | 0.891 | 0.932 | 0.820 |
| Performance Expectancy | 0.910 | 0.943 | 0.848 |
| Future Utility | 0.866 | 0.937 | 0.882 |
| Attitude | 0.954 | 0.970 | 0.915 |
| Behavioral Intention | 0.964 | 0.977 | 0.933 |
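For reference, composite reliability and average variance extracted are conventionally computed from the standardized outer loadings λᵢ of a construct’s k indicators (a standard PLS-SEM formulation, e.g., Hair et al., 2017; the values in Table 5 are the SmartPLS estimates):

$$ \mathrm{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k}\left(1 - \lambda_i^2\right)}, \qquad \mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k} \lambda_i^2 $$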
Table 6. Fornell–Larcker criterion (diagonal values are the square roots of the AVEs).

|  | EE | PE | FU | ATT | BI |
|---|---|---|---|---|---|
| EE | 0.906 | | | | |
| PE | 0.634 | 0.921 | | | |
| FU | 0.582 | 0.760 | 0.939 | | |
| ATT | 0.651 | 0.805 | 0.828 | 0.957 | |
| BI | 0.569 | 0.718 | 0.739 | 0.806 | 0.966 |
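The criterion holds for a construct when the square root of its AVE exceeds its correlations with every other construct. As a worked illustration (our arithmetic, consistent with Tables 5 and 6), for EE:

$$ \sqrt{\mathrm{AVE}_{\mathrm{EE}}} = \sqrt{0.820} \approx 0.906 > 0.651 = \max\{0.634,\ 0.582,\ 0.651,\ 0.569\} $$

so discriminant validity is supported for EE, and the same check succeeds for the remaining constructs.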
Table 7. VIF values for the inner model (VIF < 5 is generally taken to indicate that collinearity is not a problem).

| Path | VIF |
|---|---|
| EE → ATT | 1.740 |
| PE → ATT | 2.729 |
| FU → ATT | 2.467 |
| ATT → BI | 1.000 |
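For readers unfamiliar with the index, the VIF of predictor j is derived from the R² obtained when regressing that predictor on the other predictors of the same endogenous construct (standard definition; the figures above are SmartPLS output):

$$ \mathrm{VIF}_j = \frac{1}{1 - R_j^2} $$

For instance, VIF = 2.729 for PE → ATT implies R² ≈ 0.634 when PE is regressed on EE and FU.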
Table 8. Structural model results.

| H | Path | Coefficient | t Value | p Value | R² | f² | Confirmed |
|---|---|---|---|---|---|---|---|
| H1 | EE → ATT | 0.157 | 2.755 | 0.006 | 0.773 | 0.062 | Yes |
| H2 | PE → ATT | 0.344 | 5.092 | 0.000 | 0.773 | 0.190 | Yes |
| H3 | FU → ATT | 0.476 | 7.328 | 0.000 | 0.773 | 0.404 | Yes |
| H4 | ATT → BI | 0.806 | 24.563 | 0.000 | 0.650 | 1.859 | Yes |
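The effect size f² reported above follows the usual PLS-SEM definition (e.g., Hair et al., 2017), comparing the model’s R² with and without the predictor in question:

$$ f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}} $$

Consistent with this, since ATT is the sole predictor of BI, f² for ATT → BI reduces to R²/(1 − R²) = 0.650/0.350 ≈ 1.857, matching the reported 1.859 up to rounding.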
Table 9. Moderating effects in the structural model.

| H | Path | Coefficient | p Value | Moderating Effect (p < 0.05) |
|---|---|---|---|---|
| H1a | Gender × EE → ATT | −0.188 | 0.152 | No |
| H2a | Gender × PE → ATT | 0.074 | 0.639 | No |
| H3a | Gender × FU → ATT | 0.018 | 0.895 | No |
| H1b | Year of Study × EE → ATT | −0.103 | 0.020 | Yes |
| H2b | Year of Study × PE → ATT | 0.000 | 0.996 | No |
| H3b | Year of Study × FU → ATT | 0.045 | 0.637 | No |
| H1c | Discipline of Study × EE → ATT | −0.081 | 0.446 | No |
| H2c | Discipline of Study × PE → ATT | −0.033 | 0.835 | No |
| H3c | Discipline of Study × FU → ATT | 0.068 | 0.655 | No |
| H1d | Mode of Study × EE → ATT | 0.093 | 0.454 | No |
| H2d | Mode of Study × PE → ATT | −0.371 | 0.071 | No |
| H3d | Mode of Study × FU → ATT | 0.273 | 0.123 | No |
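For clarity, a moderation hypothesis such as H1b corresponds to adding a product term to the structural equation for ATT. A generic specification (our notation; the coding of the moderator Z, here Year of Study, in the SmartPLS setup may differ in detail) is:

$$ \mathrm{ATT} = \beta_1\,\mathrm{EE} + \beta_2\,\mathrm{PE} + \beta_3\,\mathrm{FU} + \beta_4\,Z + \beta_5\,(Z \times \mathrm{EE}) + \varepsilon $$

The negative and significant interaction coefficient for H1b indicates that the effect of EE on ATT is weaker for senior students, consistent with the MGA result in Table 10 (EE → ATT of 0.349 for juniors versus 0.108 for seniors).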
Table 10. Multi-Group Analysis (MGA).

Gender (Male vs. Female)

| Path | Male | Female | Path Diff. | 1-Tailed p Value | 2-Tailed p Value |
|---|---|---|---|---|---|
| EE → ATT | 0.127 | 0.285 | −0.168 | 0.899 | 0.203 |
| PE → ATT | 0.339 | 0.291 | 0.048 | 0.391 | 0.782 |
| FU → ATT | 0.497 | 0.445 | 0.053 | 0.345 | 0.690 |
| ATT → BI | 0.787 | 0.879 | −0.092 | 0.913 | 0.173 |

Discipline of Study (Science vs. Non-Science)

| Path | Science | Non-Science | Path Diff. | 1-Tailed p Value | 2-Tailed p Value |
|---|---|---|---|---|---|
| EE → ATT | 0.135 | 0.210 | −0.075 | 0.768 | 0.464 |
| PE → ATT | 0.342 | 0.427 | −0.085 | 0.747 | 0.506 |
| FU → ATT | 0.485 | 0.463 | 0.022 | 0.442 | 0.884 |
| ATT → BI | 0.828 | 0.780 | 0.048 | 0.270 | 0.539 |

Mode of Study (Full-time vs. Part-time)

| Path | Full-time | Part-time | Path Diff. | 1-Tailed p Value | 2-Tailed p Value |
|---|---|---|---|---|---|
| EE → ATT | 0.159 | 0.077 | 0.082 | 0.264 | 0.528 |
| PE → ATT | 0.314 | 0.642 | −0.327 | 0.967 | 0.065 |
| FU → ATT | 0.504 | 0.244 | 0.260 | 0.065 | 0.130 |
| ATT → BI | 0.826 | 0.764 | 0.063 | 0.282 | 0.564 |

Year of Study (Junior vs. Senior)

| Path | Junior year | Senior year | Path Diff. | 1-Tailed p Value | 2-Tailed p Value |
|---|---|---|---|---|---|
| EE → ATT | 0.349 | 0.108 | 0.241 | 0.021 | 0.042 |
| PE → ATT | 0.351 | 0.295 | 0.056 | 0.336 | 0.672 |
| FU → ATT | 0.322 | 0.549 | −0.227 | 0.964 | 0.072 |
| ATT → BI | 0.916 | 0.763 | 0.154 | 0.007 | 0.013 |
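As an arithmetic note on reading this table (our observation, not stated in the paper): for a bootstrap-based MGA, the two-tailed p value relates to the one-tailed value as

$$ p_{\text{two-tailed}} = 2 \cdot \min\left(p_{\text{one-tailed}},\ 1 - p_{\text{one-tailed}}\right) $$

For example, for Gender EE → ATT, 2 × min(0.899, 0.101) = 0.202, matching the reported 0.203 up to rounding, and for Year of Study EE → ATT, 2 × 0.021 = 0.042 exactly.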