1. Introduction
Short-form instructional videos have become a prevalent learning format in higher education, offering concise explanations that align with students’ everyday digital practices. As Xie et al. (2023) and Zhang et al. (2023) observe, university students increasingly rely on such resources when seeking clarification on mathematical concepts that are not fully resolved during formal instruction. This pattern is particularly visible in mathematics, where abstract ideas often prompt students to search for external explanations (Shoufan & Mohamed, 2022). However, as W. Ng (2012) argued, this reliance typically positions students as passive recipients of digital explanations, raising concerns about superficial understanding, distraction and uneven instructional quality. In this context, the challenge is not simply technological but pedagogical: how to transform students from passive viewers into active sense-makers of disciplinary knowledge. In first-year undergraduate mathematics specifically, this challenge often manifests as students applying integration techniques procedurally without being able to clearly articulate the reasoning behind each step. The difficulty is particularly evident in substitution methods, where students may follow algorithms successfully while struggling to explain why specific transformations are mathematically justified. Short-form peer-mediated explanations are now pervasive across higher education systems internationally, yet their pedagogical function is under-examined when students act as producers rather than recipients.
Research on video-based learning has shown that multimodal representations can foster motivation, support problem solving and deepen conceptual understanding in mathematics (Wilkinson et al., 2018). During the COVID-19 period, instructor-generated videos were widely adopted to sustain mathematical learning, and platforms such as YouTube have long served as informal support spaces. Nevertheless, students increasingly prefer the brevity and accessibility of short-form videos that better align with their everyday media habits (Trang et al., 2025). Importantly, prior empirical work has focused almost exclusively on students’ consumption of instructor- or expert-produced digital content (Lyu et al., 2025), leaving limited evidence on what happens when students themselves produce instructional videos. This distinction is pedagogically significant: as Huang (2020) notes, the act of producing an explanation requires learners to engage in self-explanation, reflective articulation and peer-oriented communication.
Empirical evidence regarding the impact of student-generated videos in mathematics courses remains limited. To address this gap, the present study implemented an intervention in a university mathematics course where students created short videos explaining the solution of integration exercises. The instructor provided formative feedback, and a subset of videos was subsequently peer-evaluated to identify the most helpful exemplar. The intervention combined the affordances of multimodality, digital literacy, peer evaluation and self-explanation to examine both learning outcomes and learning experiences. Such processes resonate with constructivist and social learning perspectives, positioning knowledge as co-constructed through dialogue and representation rather than transmission.
Accordingly, this study pursued two aims: (1) to examine how student performance across midterm and final assessments is expressed within the context of an instructional intervention involving student-generated videos; and (2) to explore students’ perceptions of the activity through qualitative interviews. The study contributes to current discussions on digital literacy and multimodality in higher education (Panaoura et al., 2025) by reframing short-form video platforms not as informal consumption spaces but as pedagogical tools for structured student-produced explanations. In doing so, the study is positioned within the Scholarship of Teaching and Learning (SoTL), understood as the systematic, theory-informed investigation of authentic teaching and learning practices in higher education, with the aim of producing knowledge that is transferable beyond the local classroom.
Guided by a Scholarship of Teaching and Learning perspective, this study addresses the following research questions:
RQ1: How do students experience the process of creating short instructional videos in an undergraduate mathematics course?
RQ2: What insights do qualitative data reveal about students’ reflections on explanatory clarity, peer-oriented explanation, and learning processes during the video-generation activity?
RQ3: How is students’ performance across coursework and final examinations expressed within the context of this exploratory instructional intervention?
While previous studies on student-generated videos have highlighted benefits related to engagement, communication skills, and conceptual understanding, most have focused on their use in blended or project-based contexts and have primarily examined students’ perceptions or overall achievement outcomes. The present study extends this emerging body of research by examining student-generated videos within an undergraduate mathematics course, with particular attention to how the process of explaining mathematical solutions for peers relates to students’ learning processes. By combining performance data with qualitative insights from interviews and classroom observations, the study offers a nuanced account of the pedagogical potential and limitations of student-generated instructional videos in higher mathematics education.
2. Theoretical Framework
The present study is grounded in four complementary strands of literature that together provide the foundation for understanding the educational value of student-generated videos in mathematics learning in higher education. First, research on multimodality in learning highlights the potential of combining multiple semiotic resources, such as language, symbols, gestures and visuals, to enhance comprehension and engagement, particularly in mathematics, where a wide range of learner approaches and representational preferences has been identified. Within this view, learning is conceived as a multimodal process of meaning-making, in which knowledge is represented and negotiated through diverse communicative forms. Second, the concept of digital literacy in higher education emphasises the need for students to critically navigate and produce digital content, moving beyond passive consumption of online resources toward active knowledge creation. Third, studies on peer learning and interaction underscore the importance of collaboration, dialogue and peer evaluation in fostering both academic achievement and social skills. Finally, the role of self-explanation and metacognition is central to deep learning, as articulating one’s reasoning not only consolidates understanding but also develops learners’ capacity for self-regulation. These perspectives align with the principles of participatory pedagogy and the Learning by Design framework (Kalantzis & Cope, 2010), which position learners as active knowledge designers. Taken together, the four perspectives inform the rationale for integrating student-produced short videos into a mathematics course, providing a lens through which to examine both their cognitive and social impact. In the present study, these strands are not treated as equally weighted analytic constructs. Rather, peer learning and self-explanation serve as the primary theoretical lenses guiding the interpretation of the qualitative data, while multimodality and digital literacy function as contextual frameworks informing the design of the instructional task.
The study is situated within the Scholarship of Teaching and Learning (SoTL), which emphasises the systematic and reflective investigation of teaching practices with the aim of improving student learning and contributing to the broader educational knowledge base. Within the SoTL tradition, teaching is treated as a form of scholarly inquiry, where pedagogical innovations are designed, implemented, and critically examined using appropriate evidence. In this study, the SoTL perspective provides the overarching scholarly positioning of the study, which, in turn, informs the design of the instructional intervention and frames the interpretation of the findings.
2.1. Multimodality in Learning
Learning is inherently multimodal, as students construct meaning by coordinating linguistic, symbolic, visual and embodied resources. In mathematics education, Wilkinson et al. (2018) argue that multimodal representations can make students’ reasoning visible by integrating verbal explanation, symbolic notation and visual inscriptions in ways that support conceptual clarity. Similarly, O. Ng (2015) showed that mathematics video tutorials reveal how learners mobilise gestures, diagrams and spoken language to externalise and refine their thinking. Recent work in digital education further suggests that multimodal learning environments can enhance not only cognitive engagement but also social attitudes. Bassachs et al. (2022), for instance, demonstrated that multimodal cooperative tasks promoted both conceptual understanding and positive peer interaction. This reinforces the SoTL principle that learning artefacts can serve as visible mediations of thinking, providing opportunities for reflection and dialogue.
Short-form videos constitute a paradigmatic case of multimodal learning because they compress explanation into a tight, coordinated mode where voice, inscription and visual framing co-operate to scaffold understanding.
Ding et al. (2023) reported that brief instructional videos improved performance in business statistics, particularly for students with stronger mathematical backgrounds, aligning with findings by Shoufan and Mohamed (2022) that multimodal digital explanations reduce cognitive load and support exam preparation. Taken together, these studies converge on the view that multimodality is not a peripheral property of video tools but an important theoretical perspective for understanding how mathematical explanations can be externally represented and communicated.
2.2. Digital Literacy in Higher Education
Digital literacy has become a central competence in higher education as students increasingly rely on digital resources for academic support. Gilster (1997) conceptualised digital literacy not merely as technical fluency but as the ability to critically access, evaluate and produce digital content. Subsequent research has shown that familiarity with digital platforms does not automatically translate into academically productive digital practices. W. Ng (2012) demonstrated that so-called “digital natives” often use online media for informational consumption without developing critical or generative skills, a finding echoed more recently by Alenazi et al. (2023), who highlight the need for structured integration of digital creation in higher education.
Video platforms such as YouTube have become pervasive informal learning environments in mathematics: Shoufan and Mohamed (2022) found that students increasingly rely on them for guidance on problem-solving strategies, while acknowledging variability in quality and fit to curricular expectations. Adelhardt and Eberle (2024) showed that short-form platforms such as TikTok can attract engagement but also risk superficiality and distraction, underscoring the importance of social media literacy. Crucially, Lyu et al. (2025) note that digital participation remains predominantly consumption-oriented even among university instructors, pointing to a broader asymmetry between production and consumption practices in academic contexts.
Within this landscape, Huang (2020) demonstrated that when students are required to produce instructional videos rather than merely watch them, they activate higher-order digital and cognitive processes. This shift from passive viewing to deliberate production is central to contemporary conceptions of digital literacy in higher education, positioning students not only as recipients of explanations but as designers of communicative artefacts for peers. From a SoTL perspective, this design orientation reflects a move from using technology as a delivery tool to employing it as a medium for knowledge construction and reflection.
2.3. Peer Learning and Interaction
Peer learning is grounded in socio-constructivist conceptions of learning, in which meaning is co-constructed through dialogue, explanation and evaluative interaction. Vygotsky (1978) highlighted the centrality of social mediation in cognitive development, and Topping (2005) later demonstrated that structured peer learning interventions in mathematics can improve achievement, reduce anxiety and foster positive attitudes. Leung (2019), in a meta-analysis of peer tutoring, showed significant gains when peer interaction is scaffolded, while Panadero et al. (2018) found that peer assessment promotes evaluative judgement and reflective thinking, two capacities that are central to sustainable learning in higher education.
Digital media further reshape peer interaction by making students’ reasoning processes visible to others. Stovner and Klette (2022) observed that student-produced mathematics videos prompt peer-to-peer negotiation of meaning, allowing learners to identify misconceptions, alternative strategies and criteria for clarity. Kuhlmann et al. (2024) similarly showed that active cognitive engagement with peers’ videos predicts stronger learning in STEM contexts. Importantly, Cho and Cho (2011) noted that when students evaluate peers’ work, through commentary or ranking, the process functions as calibration, aligning personal standards with shared criteria of quality. In this sense, peer learning embodies the SoTL principle of knowledge co-construction through shared evaluative dialogue.
In the present study, the production and subsequent peer-evaluation of student-generated videos is not treated as a peripheral add-on but as an instructional mechanism designed to externalise reasoning, expose it to peer scrutiny and elicit evaluative judgement. In this sense, peer learning is conceptualised as both a cognitive and a social process within the instructional design of the activity.
2.4. Self-Explanation and Metacognition
Self-explanation is widely recognised as a mechanism that supports deep learning and transfer. Chi (2000) showed that when learners articulate the rationale behind a solution, they make implicit reasoning explicit, detect inconsistencies and reorganise conceptual understanding. In mathematics specifically, Rittle-Johnson (2006) demonstrated that self-explanation enhances both problem-solving accuracy and long-term retention. Huang (2020) further argued that student-produced instructional videos constitute a natural medium for self-explanation because they require learners to restructure knowledge for an audience, thereby triggering metacognitive monitoring.
The metacognitive dimension of self-explanation aligns with Zimmerman’s (2002) account of self-regulated learning as an iterative cycle of planning, monitoring and evaluating one’s cognitive strategies. Video-based explanations often mobilise multiple semiotic resources, including gesture, verbalisation and symbolic inscription, making metacognition visible in action (Wilkinson et al., 2018). When coupled with peer evaluation, self-explanation is extended beyond the individual: Cho and Cho (2011) showed that giving feedback to peers requires learners to articulate explicit quality criteria, compare reasoning structures and recalibrate their own evaluative standards. Taken together, research on self-explanation and metacognition suggests that articulating reasoning is not merely a communicative act but a driver of conceptual restructuring and reflective awareness, particularly when the explanation is designed for peer uptake rather than private rehearsal. This view reinforces a SoTL stance that reflection-on-action is central to transforming individual insight into shared pedagogical understanding. In the present study, self-explanation and metacognition are not directly measured through dedicated instruments. Instead, these constructs are used as theoretical lenses to interpret students’ reflections and accounts in the qualitative data, consistent with the exploratory scope of the study. References to metacognition are therefore grounded in students’ reflective self-reports and observed shifts in explanatory awareness, rather than in direct measures of metacognitive regulation or strategic control; as such, the findings point to metacognitive awareness rather than to effortful metacognitive challenge or regulation.
Across these four strands (multimodality, digital literacy, peer learning and self-explanation) the literature converges on a common claim: productive learning in higher education is reinforced when students externalise reasoning in shareable forms that are exposed to peer scrutiny. What remains under-examined is the case in which these externalisations take the form of short student-generated instructional videos, rather than instructor-produced materials or informal consumption of online content. This gap is especially salient in mathematics, where explanatory clarity and evaluative judgement are central to disciplinary learning. The present study is therefore grounded in the proposition that producing and peer-evaluating short explanatory videos can serve as an instructional mechanism, not merely a digital trend, and it explores this proposition through a mixed-methods design. By situating these inquiries within a SoTL perspective, the study aims to contribute both theoretically and practically to current understandings of how technology-mediated peer learning can support reflective and participatory mathematics education.
In this study, multimodality and digital literacy are used as contextual perspectives to frame the learning environment, whereas peer learning and self-explanation function as the primary analytical lenses guiding data interpretation.
3. Methodology
3.1. Participants
This study was conducted with 31 undergraduate students enrolled in an engineering programme at a private university. All participants were attending a compulsory mathematics course focused on integral calculus. Although the class consisted of 32 students, one student did not complete the intervention, resulting in 31 students being included in the analysis. Participants represented a typical cohort of first-year engineering undergraduates, accustomed to traditional lecture-based instruction but increasingly familiar with digital learning environments.
3.2. Research Design
This study employed a mixed-methods design, combining quantitative and qualitative data to examine the impact of student-generated instructional videos on mathematical learning. Quantitative data consisted of students’ scores in a midterm examination (administered prior to the intervention) and a final examination (administered after the intervention). Qualitative data were collected through semi-structured interviews with a subset of participants. The rationale for this design reflects a SoTL-oriented approach (Creswell & Plano Clark, 2018), seeking to integrate objective evidence of performance with rich, interpretive accounts of students’ learning experiences. The study followed an exploratory convergent mixed-methods design, in which quantitative and qualitative data were collected within the same instructional period and analysed separately before being considered together for interpretive purposes. Integration occurred at the level of interpretation rather than through a sequential explanatory or experimental structure.
3.3. Procedure
Students were asked to create a short instructional video (1–2 min) demonstrating the steps of solving an integration exercise using the u-substitution method. Out of the 32 students, 25 submitted videos. The instructor reviewed all submissions and provided brief written feedback through the university’s e-learning platform. During a subsequent class session, six videos were randomly selected (using a simple random draw by the instructor from the pool of submitted videos) and viewed collectively. This was followed by a group discussion in which students reflected on the clarity and structure of the videos, as well as potential improvements. These discussions were conducted at the whole-class level and were intended to support shared reflection rather than structured peer negotiation or collaborative revision of the videos. The class then voted on the video they found most helpful, which was subsequently uploaded to the course webpage as an exemplar resource. This selection was intended as a reflective and motivational element, rather than as a competitive outcome, with the primary emphasis placed on the process of explanation and evaluation rather than the production of a ‘winning’ artefact.
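To make the mathematical focus of the task concrete, a representative u-substitution derivation of the kind students were asked to explain is sketched below. This is an illustrative example only; the actual course exercises are not reported here.

```latex
% Illustrative u-substitution example (not an actual course exercise):
\[
\int 2x\cos\!\left(x^{2}\right)\,dx
\;\xrightarrow{\;u \,=\, x^{2},\;\; du \,=\, 2x\,dx\;}\;
\int \cos u \, du
\;=\; \sin u + C
\;=\; \sin\!\left(x^{2}\right) + C .
\]
```

The explanatory demand of the task lies precisely in the middle step: students must justify why the substitution $u = x^{2}$ is appropriate, not merely execute it.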
To support the implementation of the activity, students were given one week to prepare and submit their videos outside scheduled class time. Videos were recorded individually by students using their own devices (e.g., smartphones or personal computers) and uploaded to the university’s e-learning platform. No specialised software or editing tools were required, in order to minimise technical barriers and ensure accessibility.
Students were instructed to record a continuous explanation of the solution process, focusing on clarity of reasoning rather than presentation quality. Most students completed the task within approximately 30–60 min, including planning, recording, and uploading the video. The activity was designed as a low-stakes coursework component to encourage participation while reducing performance pressure.
The classroom discussion revealed several student insights regarding the creation of effective instructional videos: (a) the importance of camera stability to ensure clarity; (b) the preference for maintaining a consistent view of the entire page of calculations; and (c) the need for more detailed explanations of intermediate steps. The instructor emphasised that students should present their solutions as though they were teaching a peer, rather than simply demonstrating their own mastery of the method. This instructional framing emphasised the importance of explaining mathematical reasoning for a peer audience, encouraging students to reflect on the clarity and organisation of their explanations. While the activity did not involve structured peer negotiation or repeated cycles of revision, it supported reflective discussion around what constitutes an effective mathematical explanation. The peer-learning component of the activity was intentionally designed as observational and evaluative rather than dialogic or iterative, focusing on shared reflection and criteria-based judgement rather than sustained peer-to-peer interaction.
3.4. Data Collection and Data Analysis
The quantitative data were derived from students’ performance in the course’s midterm and final examinations, which were part of the standard assessment structure of the mathematics course. Both examinations were designed and administered by the course instructor and aligned with the official course syllabus and learning objectives. The midterm examination assessed students’ understanding of integral calculus topics covered during the first part of the semester, while the final examination evaluated cumulative knowledge with greater emphasis on procedural fluency and problem solving.
The examinations consisted of open-ended problem-solving tasks, requiring students to show all solution steps rather than select predefined answers. As these assessments were routinely used within the programme for summative evaluation purposes, their content validity was ensured through alignment with curricular objectives and instructional coverage. Reliability was addressed through consistent grading criteria applied across all students and assessments, following established departmental practices.
The midterm and final examinations were not designed as parallel pre- and post-test instruments for the purposes of experimental comparison. Rather, they constituted routine summative assessments aligned with the course syllabus and instructional sequence. Consequently, the quantitative analysis was intended to provide an indicative overview of students’ performance across assessment points, rather than to establish causal effects attributable to the intervention. Students who did not submit a video were retained in the quantitative analysis, as the examinations formed part of the standard course assessment and the study did not aim to isolate causal effects of video production on performance.
Semi-structured interview protocols were used, allowing for guided discussion while providing flexibility to probe individual experiences. Sample interview prompts included questions such as: “How did creating the video influence the way you thought about explaining the solution?” and “What criteria did you use when evaluating peers’ videos?”
Qualitative data were gathered through semi-structured interviews with seven students and analysed using an inductive thematic analysis approach following Braun and Clarke (2006), focusing on their experiences with video creation, peer evaluation and perceived learning benefits. Interview participants were recruited on a voluntary basis following an open invitation to all students enrolled in the course. Coding was conducted at a semantic level, focusing on explicit meanings expressed in students’ accounts rather than on latent interpretive assumptions. Theme development followed a primarily inductive process, grounded in the data, while being sensitised by the study’s theoretical framework. In this sense, theory informed the interpretation of themes rather than their a priori construction.
The purpose of the interviews was to gain in-depth insights into students’ experiences rather than to obtain a statistically representative sample of the cohort. The interview data were transcribed and analysed thematically by the authors to identify recurring themes regarding students’ perceptions of the video-making process, peer learning and self-explanation. Transcripts were read repeatedly to achieve familiarisation with the data, after which initial codes were generated to capture recurring ideas related to students’ experiences of video creation, peer evaluation and learning processes. These codes were then examined for patterns and organised into broader themes through iterative comparison and refinement, guided by both the empirical data and the study’s theoretical framework. Initial codes, such as references to explanatory clarity, peer perspective-taking, and difficulties in articulating mathematical reasoning, were subsequently grouped into broader themes related to self-explanation, peer learning and metacognitive awareness. Throughout the analysis, themes were reviewed to ensure internal coherence and consistency with the data corpus. Trustworthiness was supported through iterative coding, constant comparison across transcripts, and regular analytic discussions between the authors to refine codes and themes.
Classroom observations and student comments during discussions were also incorporated to provide contextual insights. Following Braun and Clarke’s (2006) approach to thematic analysis, the data were iteratively coded to ensure trustworthiness and alignment between emergent themes and the study’s theoretical constructs. Quantitative and qualitative findings were considered alongside each other to provide complementary perspectives on students’ learning experiences within the mixed-methods design.
Ethical considerations were addressed in accordance with institutional guidelines. Participation in the study was voluntary, and all students were informed about the purpose of the research and the nature of the data collected. Written informed consent was obtained from all participants prior to data collection. Students were assured that their participation or non-participation would not affect their course grades, and all data were anonymised prior to analysis. According to institutional policy, formal ethical approval was not required for this type of classroom-based educational research.
4. Results
4.1. Quantitative Results
The quantitative analyses are included to contextualise students’ overall achievement within the course assessment framework rather than to establish causal effects attributable to the intervention. Descriptive statistics were calculated for coursework (the weighted average of the two midterm examinations), the final examination and the overall course grade. The mean coursework score was slightly higher than the mean score of the final examination, although variability was comparable across the two assessments. Descriptive statistics for coursework, final examination and overall course grade are presented in Table 1.
A paired-samples t-test indicated that the difference between coursework and final examination scores was not statistically significant, t(30) = 1.44, p = 0.16, suggesting that students’ performance remained relatively stable across assessment points. The absence of statistically significant differences should be interpreted in light of the scope of the intervention and the nature of the assessment instruments. The midterm and final examinations assessed a broader range of course content than the specific integration technique addressed in the video-generation activity. Consequently, the quantitative results do not provide a direct measure of learning gains attributable to the intervention but rather reflect overall performance within the course assessment framework. This pattern suggests that the intervention did not produce measurable performance gains within standard assessments, but that students’ performance remained broadly consistent across assessment points during the semester. From a pedagogical standpoint, such stability can indicate that the innovation did not disrupt established learning patterns, a positive outcome for exploratory interventions.
Pearson correlations were then computed to examine the relationships between coursework, final examination and overall course grade. Results revealed strong positive associations among all three measures. Coursework and final examination scores were strongly correlated (r = 0.82), indicating that students who performed well during the midterm assessments also tended to perform well on the final exam. Both coursework (r = 0.93) and final examination scores (r = 0.97) were very highly correlated with the overall course grade, with the final examination showing the strongest relationship. These findings suggest that while coursework contributed substantially to the overall grade, the final examination played the dominant role in determining final performance. These strong associations underscore the coherence of assessment practices within the course and suggest that students’ mathematical competencies were consistently expressed across different evaluative contexts. While the statistical stability limits claims of measurable performance change, it highlights the consistency of students’ achievement across the assessment contexts within which the pedagogical intervention was implemented.
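For transparency, the two analyses reported above (a paired-samples t-test and a Pearson correlation) can be reproduced from standard formulas. The sketch below uses synthetic placeholder scores, since the study’s raw data are not reported; it illustrates the computation only, not the study’s actual results.

```python
import math
import statistics

# Synthetic placeholder scores for 31 students (NOT the study's actual data).
coursework = [62, 74, 81, 55, 90, 68, 77, 83, 59, 71,
              66, 88, 73, 79, 61, 85, 70, 64, 92, 76,
              58, 82, 69, 75, 87, 63, 78, 80, 67, 72, 84]
final_exam = [60, 70, 79, 58, 86, 65, 74, 80, 62, 68,
              64, 85, 70, 77, 63, 82, 68, 66, 89, 73,
              60, 79, 66, 72, 84, 65, 75, 78, 64, 70, 81]

def paired_t(x, y):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n)), n - 1

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of standard deviations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

t_stat, df = paired_t(coursework, final_exam)
r = pearson_r(coursework, final_exam)
print(f"t({df}) = {t_stat:.2f}, r = {r:.2f}")
```

With n = 31 paired observations the test has 30 degrees of freedom, matching the t(30) reported in the study.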
4.2. Qualitative Results
The identified themes represent broad, integrative categories that capture recurring patterns across participants’ accounts. Within each theme, more specific dimensions and variations are illustrated through indicative codes and excerpts. Themes were developed through iterative coding and constant comparison across interview transcripts, with indicative codes grouped to capture recurring patterns in students’ accounts. The analysis of the student interviews revealed four major themes regarding students’ experiences with creating and evaluating short instructional videos: perceived difficulties, perceived benefits, criteria for evaluating peers’ work, and prior experiences with online platforms such as YouTube. These themes were grounded in repeated references across interview transcripts and are illustrated below through representative excerpts from students’ accounts. Together, they offer insight into how students engaged with the activity and how it influenced their learning processes, illuminating qualitative dimensions of learning that quantitative results alone could not capture and reflecting the SoTL emphasis on integrating multiple forms of evidence. An overview of the identified themes and indicative codes is provided in Table 2.
Several students reported challenges in producing their videos. The most commonly mentioned difficulty was the demand to articulate mathematical reasoning clearly, particularly in English, which some students felt hindered their ability to express themselves fluently. Others highlighted technical concerns, such as ensuring that the video recording was clear, the handwriting visible and the explanation paced appropriately. These difficulties reflect both linguistic and technical barriers that students had to overcome in order to transform their knowledge into a shareable digital resource. Such findings resonate with previous literature on cognitive load and multimodal production (
Shoufan & Mohamed, 2022), suggesting that learning through explanation also entails managing representational complexity.
Despite these challenges, students emphasised the learning benefits of creating and sharing instructional videos. They described the process of explaining mathematical procedures to peers as an opportunity to deepen their own understanding, noting that “when you explain, you understand better”. One student explained that “having to explain each step made me realise where my own thinking was unclear,” while another noted that “a good video is not about speed, but about explaining why each step is necessary.” Some students also reported increased motivation and engagement, even among those who initially expressed little interest in mathematics. This suggests that the act of producing videos fostered both cognitive gains and affective benefits, reinforcing the value of peer-oriented self-explanation. However, not all students experienced this process as straightforward. One participant noted that “explaining the solution made me aware of gaps in my understanding, but it was also frustrating because I was not always sure how to explain my reasoning clearly to others.” Another student remarked that while explaining helped consolidate understanding, “it was sometimes difficult to decide how detailed the explanation should be, and I worried that others might still find it confusing.” Although several students articulated familiar ideas about learning through explanation, the interview data also revealed variation in how confidently and productively students experienced this process, highlighting both perceived benefits and moments of uncertainty.

When discussing how they selected the most effective videos, students referred to clarity, organisation and the adequacy of explanations as their main criteria. They paid attention to whether all steps were presented in a logical sequence and whether the reasoning was communicated in a way that could be easily followed by peers.
Interestingly, students placed less emphasis on presentation aesthetics and more on the perceived usefulness of the explanation for their own understanding, indicating a pragmatic approach to peer evaluation.
Many students related the activity to their prior experiences of using YouTube as a learning tool. While they acknowledged that YouTube offers a vast array of resources, they also noted that not all content is easy to follow or directly relevant to their coursework. By contrast, student-generated videos created within their own class were perceived as more tailored and relatable. Several participants emphasised that the brevity of these videos made them easier to engage with, aligning more closely with their everyday media practices shaped by platforms such as TikTok. This finding points to the ecological relevance of the intervention, as it connects students’ informal digital practices with formal academic learning contexts, consistent with principles discussed in the Learning by Design literature.
Taken together, these findings indicate that while students encountered certain linguistic and technical difficulties, the activity was generally perceived as beneficial for learning. By creating and evaluating short peer-produced videos, students not only reinforced their own understanding but also developed evaluative skills and reflected critically on what constitutes a clear and effective mathematical explanation. Overall, the results suggest that student-generated videos can function as a low-stakes instructional activity that encourages students to reflect on explanatory clarity, evaluative criteria, and the communication of mathematical reasoning in undergraduate mathematics contexts.
In addition to the interview data, classroom observations and student comments during whole-class discussions were used to provide contextual insights into how students engaged with the video production activity. During the in-class viewing and discussion of selected videos, students frequently commented on the clarity of explanations, the organisation of solution steps, and practical aspects of recording (e.g., camera stability and visibility of written work). These observations were consistent with themes identified in the interviews, particularly regarding students’ increased awareness of explanatory clarity and the challenges associated with articulating mathematical reasoning for peers. Classroom observations were not treated as a separate data source but were used to support and contextualise the qualitative findings.
5. Conclusions and Discussion
The present study examined the integration of student-generated short instructional videos in a university mathematics course and explored both performance and perception outcomes. Quantitative analysis indicated no statistically reliable gains in achievement across assessment points, while strong positive associations between coursework, final examination and overall course performance suggested stability rather than deterioration in learning. Qualitative findings, however, pointed to a shift in learners’ engagement with explanation: students reported that producing a video for peers compelled them to articulate reasoning explicitly rather than wait for ready-made solutions, thereby encouraging reflection on their explanatory reasoning. This dual pattern, quantitative stability combined with qualitative indications of change in learners’ engagement with explanation, reflects a common SoTL finding that innovation may influence learning processes more than immediate outcomes.
These findings align with prior work arguing that the educational value of video lies not merely in its availability as a resource but in the cognitive work it induces (
Wilkinson et al., 2018). Whereas most studies have analysed instructor-generated videos or student consumption of digital explanations (
Shoufan & Mohamed, 2022;
Lyu et al., 2025), the present results suggest that producing short-form videos can encourage learners to move beyond passive observation towards more active engagement with mathematical explanation. This interpretation is consistent with research showing that self-explanation reorganises internal representations (
Chi, 2000;
Rittle-Johnson, 2006) and that peer-oriented articulation makes reasoning externally accountable (
Cho & Cho, 2011). From a participatory pedagogy standpoint, such shifts point to forms of cognitive engagement and emerging responsibility for explanation addressed to peers—the capacity of learners to author mathematical meaning for others.
Importantly, although performance improvements were not detected at the level of standard assessments, the qualitative evidence points to process-level learning that conventional exams do not capture—namely, the adoption of explanatory criteria, awareness of clarity as a communicative obligation and engagement in evaluative judgement. This pattern echoes
Panadero et al. (
2018) and
Huang (
2020), who note that learning gains in productive digital tasks may manifest in metacognitive dispositions and self-regulation rather than in short-horizon scores. The present study, therefore, does not treat the absence of short-term performance gain as a failure of the intervention but rather as an indication that different dependent variables are sensitive to different aspects of learning. This reinforces the SoTL argument that meaningful pedagogical effects often reside in the quality of reasoning and reflection rather than in surface-level achievement indicators.
Although the activity did not involve extended dialogic interaction among students, peer evaluation nevertheless functioned as a form of disciplinary literacy practice. By viewing and assessing peer-produced explanations, students were required to interpret mathematical reasoning, articulate criteria of clarity and adequacy, and compare alternative solution representations. In this sense, peer evaluation operated as an evaluative and reflective literacy medium, supporting meaning-making through exposure, comparison, and judgement rather than through sustained dialogue.
A secondary, yet informative, finding concerned students’ references to their prior use of platforms such as YouTube and TikTok for mathematical help. Consistent with
Trang et al. (
2025), students expressed a preference for brief, consumable explanations; however, they emphasised that classroom-generated videos were more relevant and intelligible than generic online content. This suggests that the preferred digital format (short-form video) can be retained while shifting students’ role from content recipients to content producers—a move that aligns with calls to reframe digital literacy as production rather than consumption (
W. Ng, 2012;
Huang, 2020). In doing so, the intervention bridges informal and formal learning spaces, aligning with Learning by Design principles that advocate the recontextualisation of everyday practices into structured academic inquiry.
Taken together, the findings indicate that student-generated short videos may serve as a productive pedagogical mechanism for prompting reflection on explanatory clarity, peer perspectives, and evaluative criteria in higher education mathematics: not to raise scores immediately, but to cultivate metacognitive awareness, explanatory discipline and peer-calibrated judgement. This aligns with
Panadero et al.’s (
2018) account of peer assessment as a driver of evaluative judgement rather than a grade-producing procedure. Rather than replicating the existing ecology of online help, the intervention re-purposes familiar media practices into structured academic work, extending debates on multimodality and peer learning within the scholarship of teaching and learning in higher education. As such, the study contributes to SoTL by identifying a transferable mechanism, student explanation for peers through media creation, as a means to promote disciplinary communication and reflective expertise in STEM fields.
Although the present study was implemented in a single institutional context, the mechanisms it activates are not locality-dependent. The processes of self-explanation, evaluative judgement and peer-oriented articulation are cognitive-social mechanisms that transcend geographical context and have been documented across higher education settings. What the present work adds is not a claim of generalisability by breadth of sampling, but a contribution to mechanism-level understanding: it shows that even a single, short, student-produced video task can trigger metacognitive articulation and criteria-driven evaluation among undergraduates in mathematically demanding courses. In this sense, the findings are transferable by analogy, because the underlying instructional mechanism is theoretically portable to comparable higher education environments. This “transferability by mechanism” perspective situates the study firmly within SoTL traditions that value theoretical depth and reflective replication over statistical generalisation. These interpretations should accordingly be read as theoretically informed propositions rather than as evidence of stable or generalisable learning effects across contexts or instructional configurations.
Taken together, these findings allow for a more precise articulation of the study’s contribution, which is best understood in terms of pedagogical mechanisms rather than direct learning outcomes. Rather than claiming broad contributions across multiple domains or demonstrating measurable performance gains, the study offers design-based insight into how student-generated short instructional videos can activate specific learning processes in undergraduate mathematics, including peer-oriented explanation, heightened attention to explanatory clarity, and the articulation of evaluative criteria. Its primary contribution therefore lies in explicating how these mechanisms operate, as evidenced by students’ qualitative accounts, and in showing how familiar digital practices can be pedagogically reconfigured to support reflective and evaluative engagement.
6. Limitations and Directions for Future Studies
The present study should be interpreted within the scope of a single cohort in one mathematics course, which limits the extent to which the findings can be generalised beyond comparable instructional settings in higher education. The intervention was implemented once rather than over multiple cycles, and learning was assessed through conventional examinations that capture only a subset of the processes activated during student-generated video production. Participation in video creation was not uniform across students, which may reflect variation in motivation or confidence rather than the pedagogical mechanism itself. Accordingly, we do not claim transferability by sampling scope but by mechanism: the observed processes are theoretically portable across higher education settings. Future studies could adopt iterative SoTL cycles of design, implementation and reflection to strengthen both internal validity and pedagogical insight.
Future studies could extend this work in several ways. Longitudinal designs that integrate multiple instances of student-generated video production across a semester or curriculum could shed light on whether metacognitive and communicative gains accumulate or stabilise over time. Comparative studies could examine differences between student-produced and instructor-produced videos, or between short-form and longer explanatory formats, to isolate the effects of production, duration and authorship. Finally, future research could explore alternative outcome measures that are sensitive to metacognitive development, peer-evaluative judgement or explanatory clarity, dimensions that may not register in conventional assessments yet emerged as meaningful in students’ reports. Exploring such measures would advance methodological diversity in mathematics education research and further align it with the reflective, evidence-based ethos of teaching and learning scholarship.