Systematic Review

Cameras On or Off? A Critical Analysis of Privacy, Equity, and Pedagogical Engagement in Online Education

by
Antonio Cedillo-Hernandez
1,
Lydia Velazquez-Garcia
2,
Maria Del Pilar Longar-Blanco
2,
Manuel Cedillo-Hernandez
3,* and
Maria G. C. Lopez-Gonzalez
4
1
Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Av. Eugenio Garza Sada 2501 Sur, Col. Tecnologico, Monterrey 64700, Nuevo León, Mexico
2
Instituto Politecnico Nacional, Centro de Investigaciones Económicas, Administrativas y Sociales, Lauro Aguirre 120, Agricultura, Ciudad de México 11360, Mexico
3
Instituto Politecnico Nacional, Escuela Superior de Ingeniería Mecánica y Eléctrica, Unidad Culhuacan, Av. Santa Ana 1000, Culhuacan, Coyoacán, Ciudad de México 04440, Mexico
4
Instituto Politecnico Nacional, Escuela Superior de Comercio y Administración, Unidad Santo Tomás, Manuel Carpio 471, Miguel Hidalgo, Ciudad de México 11350, Mexico
*
Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(2), 256; https://doi.org/10.3390/educsci16020256
Submission received: 9 January 2026 / Revised: 3 February 2026 / Accepted: 4 February 2026 / Published: 6 February 2026
(This article belongs to the Section Education and Psychology)

Abstract

The widespread adoption of online education has brought new debates regarding the mandatory use of webcams during synchronous classes. While keeping cameras on may foster student engagement, enhance non-verbal communication, and provide teachers with richer feedback, it also raises serious concerns about privacy, digital equity, and cybersecurity. This article presents a critical review of recent literature that examines the pedagogical, technological, and social implications of requiring students to turn on their cameras. The analysis is structured around three dimensions: (1) pedagogical engagement, focusing on participation, motivation, and attention; (2) privacy and cybersecurity, addressing risks related to data collection, surveillance, and regulatory compliance; and (3) equity and digital divide, exploring the costs of bandwidth, unequal access to devices, and exposure of personal environments. The discussion highlights the tensions between these dimensions and argues that mandatory webcam use cannot be considered a universal solution. Instead, institutions should adopt flexible policies, teachers should integrate cameras where pedagogically meaningful, and policymakers should enforce data protection standards. The article concludes with recommendations for institutions, educators, and platform developers, and proposes directions for future research on camera use in online education.

1. Introduction

The rapid expansion of online education, accelerated by the COVID-19 pandemic, has spurred important debates about the best practices for synchronous classes. One controversial practice is the requirement (or strong encouragement) that students keep their webcams on during live sessions. Proponents argue that visible presence improves student engagement, provides teachers with non-verbal feedback, and fosters a sense of classroom community (Hosszu et al., 2022; Kocur & Jach, 2024). Critics, however, point to serious concerns related to privacy, digital inequality, cost of data, and emotional well-being (Leroy & Kaufmann, 2024; Uygur & Erdogmus, 2025). Understanding the trade-offs is essential for institutions, teachers, and policymakers aiming to design online learning environments that are both effective and equitable.
A number of recent empirical studies underscore these tensions. For example, Leroy and Kaufmann (2024) report that webcam usage in online classes raises issues of privacy invasion, digital inequity, and adverse effects on well-being. Similarly, Uygur and Erdogmus (2025) investigate why students frequently turn off their cameras in live class meetings, finding motivations related to peer perception, anxiety, personal privacy, and bandwidth constraints. On the pedagogical side, Kocur and Jach (2024) examine the effect of webcam use on student engagement in higher education synchronous courses, showing correlations between camera-on norms and increased reported engagement, but also highlighting that these norms may exacerbate disparities among students of varying socioeconomic status. Moreover, Chandrasiri and Weerakoon (2022) explore how both students and academics initially adopt webcam use to build rapport but gradually reduce usage, possibly due to fatigue or privacy concerns.
In addition to issues of engagement and attention, digital equity is a persistent concern. Saha et al. (2021) found that online learning environments can worsen existing inequalities, given that students from lower-income backgrounds often face unstable internet, limited device capacity, or shared learning spaces, making mandatory camera policies more burdensome. Also, Shawish et al. (2025) highlight widespread concerns from students and educational staff about unauthorized data collection, surveillance, and unclear data retention practices.
Despite these insights, the literature remains critically fragmented. Pedagogical studies often advocate for visibility without fully accounting for data privacy risks, while cybersecurity research frequently addresses surveillance concerns in isolation from learning outcomes. Furthermore, the interplay between these dimensions and the digital equity gap is rarely examined in a unified framework. Addressing this fragmentation requires a structured synthesis rather than a single empirical study. Consequently, the present article undertakes a critical literature review to bridge these disciplinary silos. By integrating findings across peer-reviewed research indexed in Scopus, Web of Science, and IEEE Xplore, this study aims to map the trade-offs between engagement, privacy, and equity, constructing a cohesive conceptual model to guide future policy and practice.

1.1. Research Question and Objectives

This paper addresses the following research question: What are the benefits and drawbacks of mandatory webcam use in online synchronous classes, with respect to pedagogical engagement, privacy and security, and equity and digital access? The objectives of this study are:
  • To synthesize evidence on how webcams influence student engagement, attention, and interaction in online classes.
  • To examine the privacy and cybersecurity concerns associated with webcam use in educational settings.
  • To analyze how requiring webcams interacts with equity issues, including bandwidth, device access, socioeconomic status, and learning environment.
  • To propose a conceptual model and recommendations for policy and practice that balance these dimensions.

1.2. Contribution

By integrating findings from recent empirical and review studies, this article contributes a multidimensional perspective that helps clarify when and how webcam policies in online education can enhance learning without imposing disproportionate costs or risks. It offers a unified framework to navigate the tension between pedagogical benefits and ethical and equity concerns.

2. Background

This section provides a contextual review of prior empirical work and theory on student engagement in online learning, motivations and limitations of webcam use, privacy, security, and equity concerns, and relevant theoretical and regulatory frameworks. Rather than presenting definitive findings, it summarizes key debates in the existing literature that frame the present structured review. It concludes by identifying emerging gaps and tensions and deriving implications for the present study.

2.1. Student Engagement and Online Learning

Engagement is considered a cornerstone of successful online education, encompassing behavioral, cognitive, and affective dimensions. Several studies have explored how webcam use intersects with these dimensions, with mixed but informative results. Hosszu et al. (2022) examined synchronous university courses and found that webcam activation increases students’ reported sense of social presence and attentiveness. Students felt more connected when they could see their peers and instructors, suggesting that webcams may reduce the sense of transactional distance often observed in online learning environments. However, these prior studies also emphasize that webcam use alone does not guarantee improved cognitive outcomes; benefits appear more clearly when cameras are integrated into interactive pedagogies such as group discussions and collaborative activities.
Beyond higher education, research has extended to younger learners. Ferguson-Johnson et al. (2025) analyzed classrooms during extended remote learning and observed that when most students kept their webcams on, the collective engagement of the class improved, leading to higher levels of participation and even modest gains in academic achievement. Such findings, while valuable, remain contingent on contextual factors such as technological access and socio-economic background, reinforcing the need for systematic synthesis across diverse populations.
Other studies highlight the affective side of engagement. Deng and Yang (2025), in a systematic review of online education research, argued that webcams may contribute positively to affective engagement by fostering belonging and visibility, but only when students perceive the environment as supportive. Without trust, the affective benefits diminish, and students may feel observed rather than included. These tensions illustrate why existing evidence should be interpreted as provisional and context-dependent, providing a foundation for deeper review rather than conclusive claims.

2.2. Motivations, Benefits, and Limitations of Webcam Use

Understanding why students choose to activate or deactivate webcams is critical for interpreting the mixed evidence on engagement. Tobi et al. (2021) reported that students often turn off webcams due to concerns about personal appearance, fear of judgment, lack of private space, and limited bandwidth. Instructors, on the other hand, generally perceive webcam use as an indicator of attentiveness and accountability, leading to a disconnect between faculty expectations and student realities.
Research also documents benefits that webcams can provide. Williams and Pica-Smith (2022) found that webcams increase perceived visibility and accountability, motivating students to participate more actively in discussions. Similarly, Trust and Goodman (2023) noted that students feel a greater sense of connection when webcams are encouraged rather than mandated, showing that agency and flexibility can enhance the positive effects of webcam use.
However, several studies point to limitations. Roth and Gafni (2021) demonstrated that while webcam activation can improve affective engagement and positive emotions, it may simultaneously increase stress, particularly when students feel compelled to constantly monitor their appearance or surroundings. The phenomenon known as “Zoom fatigue” is exacerbated by prolonged webcam use, as constant visual self-monitoring elevates cognitive load and distracts from learning. This paradox, well documented in prior studies, highlights why webcam effects must be evaluated critically across settings rather than generalized.

2.3. Privacy, Security, and Equity Considerations

Webcam use raises significant privacy concerns. Rajab and Soheib (2021) found that medical students were uneasy with mandatory webcam policies because these exposed their personal environments and increased their sense of being surveilled. Many also worried about the fate of video recordings, particularly when institutional policies lacked clarity. Similarly, Almekhled and Petrie (2024) reported that higher education faculty and students in Saudi Arabia expressed distrust about data storage and access, noting that opaque policies undermined willingness to keep webcams on.
The issue extends beyond privacy into broader security and equity challenges. Hosszu et al. (2022) showed that students sometimes experience heightened anxiety when cameras are required, with the constant visibility undermining their well-being. Dennen et al. (2022) further showed that socio-economic differences strongly predict webcam use: students with stable internet and private spaces are more likely to comply, while those lacking these resources often disable cameras. Such findings reinforce the importance of systematically comparing evidence across contexts, since privacy and equity concerns are not distributed evenly worldwide.
Equity concerns have also been highlighted in large-scale reviews. Deng and Yang (2025) stressed that online education tends to exacerbate inequalities, particularly when policies assume that all students have access to stable broadband and modern devices. When webcams are required without accommodation, students from disadvantaged backgrounds face additional costs, not only in terms of data usage but also in increased stress over exposing their home environments. These results serve as important antecedents for the structured review conducted in this paper.

2.4. Theoretical and Regulatory Frameworks

Several theoretical perspectives provide insight into the complex effects of webcam use. Social Presence Theory suggests that visibility reduces psychological distance and fosters richer interaction, supporting the pedagogical rationale for encouraging cameras. However, Self-Determination Theory (Ryan & Deci, 2000) emphasizes autonomy, competence, and relatedness as key motivators. Mandatory camera policies may erode autonomy, leading to disengagement even when social presence is theoretically enhanced.
Privacy frameworks, such as Nissenbaum’s (2004) theory of contextual integrity, are also relevant. Students expect certain norms about who can see them, how long recordings are stored, and who has access. When these norms are violated, discomfort and distrust increase, which in turn undermines engagement.
Regulatory frameworks add another dimension. In the European Union, the General Data Protection Regulation (GDPR) treats video images as personal data, requiring informed consent, limited use, and secure handling. In the United States, FERPA (Family Educational Rights and Privacy Act) protects certain aspects of student data but provides less explicit guidance on video use in classrooms. Other regions have adopted data protection laws—such as Mexico’s Federal Law on the Protection of Personal Data Held by Private Parties (LFPDPPP)—that similarly mandate informed consent and data security. These frameworks establish important principles but have been implemented inconsistently, underscoring a gap between formal regulation and everyday practice in education.

2.5. Emerging Gaps and Tensions

Despite a growing body of work, important gaps persist. Much of the current evidence is based on cross-sectional surveys rather than longitudinal designs, limiting our understanding of how attitudes and behaviors toward webcams evolve over time. Moreover, many studies measure perceptions of engagement rather than concrete outcomes such as grades, persistence, or knowledge retention.
There are also tensions that cut across the literature. The first lies between engagement and privacy: while webcams can enhance social presence, they can simultaneously create a sense of surveillance. The second involves pedagogy and equity: webcams may facilitate interaction in discussion-based courses but may exclude students with limited resources in lecture-based settings. A third tension arises from institutional policy: some universities adopt strict mandates, while others allow optional use, leading to inconsistent experiences for students. These tensions, identified in prior research, justify the need for a structured synthesis to map how such trade-offs manifest across different contexts and populations.

2.6. Implications for the Present Study

The reviewed literature indicates that webcam use in online education cannot be reduced to a simple binary of beneficial or harmful. Instead, its effects depend on an interplay of pedagogical design, privacy safeguards, and equitable access to technology. This study positions these works as a foundation for the structured review presented in the following sections. The present study therefore adopts a multidimensional framework, analyzing webcam policies not only for their potential to enhance engagement but also for their implications for student autonomy, well-being, and equity.
This perspective has practical implications. Institutions must recognize that webcams can contribute positively when integrated into interactive pedagogy and supported by clear privacy policies, but they must also provide flexibility for students with limited resources or private space. Instructors should be aware that engagement is not synonymous with visibility, and that alternative strategies, such as polls, chat participation, or breakout discussions, may foster learning without requiring cameras. Policymakers and platform designers should ensure that data protection regulations are robust and applied consistently, balancing the benefits of visibility with the rights and needs of students.

3. Methodology

This study employed a critical documentary analysis approach to synthesize empirical and theoretical evidence concerning the use of webcams in online learning environments. The methodological design sought to integrate pedagogical, privacy, and equity perspectives into a unified analytical framework that could illuminate the multidimensional impact of mandatory camera policies. Following established guidelines for transparent and reproducible reviews in educational research (Kitchenham, 2004; Page et al., 2021), the process combined systematic database searches, explicit inclusion and exclusion criteria, and a structured screening procedure. The overarching goal was to identify, categorize, and interpret the most relevant and recent studies addressing webcam use in synchronous online education from 2020 to 2025.

3.1. Research Design

The research adopted a critical literature review design, combining the systematic rigor of evidence-based synthesis with the interpretive depth of documentary analysis. Unlike a purely systematic review, which aims to exhaustively aggregate empirical outcomes, the critical literature review emphasizes conceptual integration and the identification of gaps, contradictions, and emerging debates (Grant & Booth, 2009). This approach is particularly suitable for evolving topics such as webcam usage in digital education, where technological, ethical, and pedagogical dimensions intersect and are continually reshaped by social and institutional contexts. The analysis incorporated rigorous methodological appraisal tools, namely MMAT 2018 (Hong et al., 2018), CASP (Critical Appraisal Skills Programme, 2018), and ROBINS-I (Sterne et al., 2016), to assess the internal validity and risk of bias across the heterogeneous corpus.
The study was organized around three central analytical axes derived from both theoretical constructs and recurrent patterns in the literature: (A) pedagogical engagement, referring to cognitive, behavioral, and emotional dimensions of participation and interaction; (B) privacy and security, encompassing digital surveillance, data protection, and psychological safety concerns; and (C) equity and technological costs, addressing issues of accessibility, digital divide, and socioeconomic disparities. These axes guided the selection, classification, and interpretation of sources, ensuring a balanced representation of diverse perspectives. The framework was also informed by established theories such as Social Presence Theory (Short et al., 1976), which supports the pedagogical rationale for visibility and interaction, and Self-Determination Theory (Ryan & Deci, 2000), which identifies autonomy and intrinsic motivation as key drivers of engagement.
Given the interdisciplinary nature of the phenomenon, the review encompassed studies from education, psychology, communication, and information systems. Emphasis was placed on peer-reviewed sources published between 2020 and 2025, corresponding to the period in which online learning practices were profoundly influenced by the COVID-19 pandemic and its aftermath. The design thus reflects both a temporal and conceptual boundary, focusing on how webcam use evolved from an emergency practice to a contested element of post-pandemic digital pedagogy. This methodological stance allows the present work not only to summarize existing findings but also to critically assess how educational norms, institutional policies, and technological infrastructures mediate visibility, participation, and privacy in online learning.

3.2. Databases

To ensure methodological rigor and comprehensive coverage of the scholarly discourse, the literature search was conducted across three of the most authoritative academic databases: Scopus (Elsevier), Web of Science Core Collection (Clarivate Analytics), and IEEE Xplore Digital Library. These databases were selected based on their established reputation for indexing high-impact, peer-reviewed research in the fields of education, psychology, information systems, and computer science, all of which are directly relevant to the multidisciplinary nature of webcam use in online education. Together, they provide complementary coverage that reduces the likelihood of publication bias and ensures the inclusion of both pedagogical and technical perspectives on the phenomenon.
Each database was accessed directly through institutional subscriptions to guarantee full-text availability and metadata completeness. Searches were restricted to documents published between January 2020 and October 2025, corresponding to the period following the global transition to remote learning and the consolidation of hybrid education practices. Only peer-reviewed journal articles and conference proceedings were considered, excluding editorials, preprints, theses, and non-reviewed reports to preserve the scientific reliability of the dataset. All retrieved bibliographic data—including title, abstract, year, DOI, and source—were exported in CSV format and integrated into a unified dataset for further screening and deduplication.
The rationale for combining multiple databases was also methodological. In technology-enhanced learning research, reliance on a single source often limits visibility of studies from either educational or engineering domains (Booth et al., 2016). By using Scopus, Web of Science, and IEEE Xplore in conjunction, this review ensured coverage of diverse scholarly communities—ranging from education and psychology to human–computer interaction and cybersecurity—thus enabling a holistic interpretation of webcam practices across contexts. This triangulated database strategy follows best practices in information retrieval for systematic and critical reviews, emphasizing transparency, replicability, and inclusiveness in the construction of the evidence base (Page et al., 2021).

3.3. Search Strategy

The search strategy was designed to maximize relevance and coverage across the selected databases while maintaining methodological transparency and reproducibility. Searches were conducted between 20 October and 30 October 2025, using a combination of controlled descriptors and free-text keywords related to webcam use in online learning. Boolean operators and truncation were applied to account for linguistic variations and database-specific syntax. The core search string is presented in Table 1, which encompasses three conceptual clusters: (a) webcam-related terms, (b) online education contexts, and (c) conceptual dimensions reflecting engagement, privacy and security, or equity.
The strategy prioritized English-language publications but did not exclude works conducted in multilingual contexts when abstracts were available in English. To avoid redundancy and ensure focus, results were limited to articles and proceedings indexed as peer-reviewed and directly related to education or human–computer interaction. Once retrieved, the bibliographic records were exported in standardized CSV format and consolidated for preprocessing and duplicate removal.
This deliberate combination of conceptual precision and database diversity allowed the search to capture not only empirical studies on webcam engagement but also emerging discussions on privacy, surveillance, and access disparities in synchronous learning. The search strategy thus ensured alignment with the three analytical dimensions underpinning this study (pedagogical engagement, privacy and security, and equity and costs), providing a comprehensive yet thematically coherent foundation for the subsequent screening phase.

3.4. Inclusion and Exclusion Criteria

To guarantee the relevance and quality of the studies included in this review, explicit inclusion and exclusion criteria were established prior to the screening phase. These criteria ensured that only peer-reviewed, methodologically robust works directly addressing webcam use in educational contexts were retained for analysis. The inclusion parameters were defined to capture studies that examined at least one of the analytical dimensions: pedagogical engagement, privacy and security, or equity and technological costs.
Eligible publications met the following general conditions: (a) empirical or theoretical studies focusing on webcam use in online or hybrid learning environments; (b) works published between 2020 and 2025, reflecting the post-pandemic evolution of digital education practices; (c) availability of title and abstract in English, regardless of country of origin; and (d) publication in a peer-reviewed journal or international conference proceedings. Studies were included regardless of methodological orientation (quantitative, qualitative, or mixed) if they contributed substantive insights into the relationship between webcam visibility and educational interaction, data privacy, or digital access.
Conversely, exclusion criteria targeted materials that lacked scientific validation or contextual relevance. Non-reviewed content such as editorials, opinion pieces, institutional reports, theses, and preprints were omitted to maintain methodological rigor. Similarly, studies focusing exclusively on technical webcam engineering or image processing algorithms without any pedagogical or human-centered component were excluded, as were those centered on non-educational applications of video conferencing (e.g., telemedicine, corporate training, or surveillance systems). This selective exclusion ensured that the retained corpus addressed webcam use within formal or higher education settings, where visibility, participation, and privacy intersect most directly.
The resulting inclusion and exclusion framework reflects a balance between breadth and analytical depth, preventing the dilution of educationally relevant findings while preserving sufficient diversity for comparative synthesis. Such methodological transparency aligns with best practices in evidence-based educational research, reinforcing the credibility and reproducibility of the review (Grant & Booth, 2009; Petticrew & Roberts, 2006). In addition, this approach facilitates future meta-analytical extensions by clearly delineating the conceptual boundaries of the dataset.

3.5. Screening Process

The screening phase followed the methodological principles of the PRISMA 2020 framework (Page et al., 2021), ensuring transparency and reproducibility in the selection of studies. After executing the search strategy across Scopus, Web of Science, and IEEE Xplore, a total of 315 records were initially retrieved. All results were exported in CSV format and merged into a unified dataset for preprocessing. Duplicate entries, primarily overlapping records between Scopus and Web of Science, were identified and removed through automated DOI matching and title-similarity analysis using normalized string comparison (threshold ≥ 0.92). This deduplication process reduced the dataset to 214 unique studies.
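The deduplication pass described above can be sketched with the standard library alone. The snippet below is an illustrative reconstruction, not the authors' actual script: the record fields and function names are assumptions, while the ≥ 0.92 normalized title-similarity cutoff follows the threshold stated in the text.

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.92  # normalized title-similarity cutoff stated above


def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so comparison ignores formatting noise."""
    return " ".join(title.lower().split())


def deduplicate(records):
    """Drop records with a previously seen DOI, or a title near-identical
    (ratio >= 0.92) to an already-kept record."""
    kept, seen_dois = [], set()
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        if doi and doi in seen_dois:
            continue
        is_dup = any(
            SequenceMatcher(None, normalize(rec["title"]),
                            normalize(k["title"])).ratio() >= SIMILARITY_THRESHOLD
            for k in kept
        )
        if not is_dup:
            kept.append(rec)
            if doi:
                seen_dois.add(doi)
    return kept


# Hypothetical bibliographic records merged from two databases.
records = [
    {"title": "Cameras On or Off? A Critical Analysis", "doi": "10.1000/x1"},
    {"title": "Cameras on or off? A critical analysis", "doi": ""},          # near-duplicate title
    {"title": "Zoom Fatigue in Synchronous Classes",    "doi": "10.1000/x2"},
    {"title": "Zoom Fatigue in Synchronous Classes",    "doi": "10.1000/X2"},  # duplicate DOI
]
print(len(deduplicate(records)))  # 2 unique records remain
```

In practice such a pass would run over the merged CSV export; matching on DOI first is cheap and exact, with string similarity as a fallback for records lacking a DOI.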
The remaining records were subjected to a two-stage screening process, combining automated textual filtering and manual verification. The first stage applied keyword-based heuristics to titles and abstracts to identify studies explicitly referencing webcam use in online or hybrid educational settings. Articles that addressed general videoconferencing without educational context or that focused on technical hardware optimization were flagged for exclusion. The second stage involved a manual review of abstracts to confirm relevance according to the three analytical axes established in Section 3.1 (pedagogical engagement, privacy and security, and equity and technological costs). This refinement process yielded 72 studies that met all inclusion criteria and were retained for full-text analysis.
To support analytical consistency, each included article was assigned a unique identifier (INC001–INC072) and categorized by primary thematic axis. Approximately two-thirds of the studies addressed engagement and interaction dynamics, while a smaller subset explored privacy, surveillance, and access disparities. The complete list of included articles, with DOI and metadata, is available as Appendix A. The final corpus comprised 71 included studies (excluding one non-accessible article, INC062). The flow of study identification, screening, and inclusion is summarized in Figure 1.
This systematic yet flexible approach ensured that the final corpus was both representative and analytically coherent, reflecting the diversity of scholarly perspectives on webcam use in synchronous online learning. By combining automated filtering with researcher-driven validation, the process balanced efficiency with interpretive accuracy, aligning with best practices for critical reviews in education and information science (Petticrew & Roberts, 2006).

3.6. Data Extraction and Synthesis

The data extraction and synthesis phase was designed to transform the screened corpus of 71 studies into a structured body of evidence aligned with the study’s three analytical axes: pedagogical engagement, privacy and security, and equity and technological costs. The extraction process was guided by standardized procedures for educational research synthesis (Davies, 2000; Gough et al., 2017), emphasizing both transparency and interpretive depth. All selected publications were analyzed in full text to identify their methodological orientation, research context, population, key findings, and relevance to the three axes. This structured approach ensures comparability across studies and supports both thematic and conceptual integration.
Data extraction was carried out using a coding matrix developed in Microsoft Excel to capture both descriptive and analytical information. Descriptive variables included author(s), year of publication, country or region, type of publication (journal or conference), and research design (quantitative, qualitative, or mixed methods). Analytical variables focused on reported outcomes—such as effects of webcam use on attention, participation, motivation, or affective engagement; evidence of privacy violations or digital surveillance; and challenges related to bandwidth, device access, or socioeconomic inequality. Each study was coded independently by two researchers to enhance reliability, with discrepancies discussed until consensus was reached, following intercoder agreement principles (Miles et al., 2019).
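The intercoder agreement principle invoked here (Miles et al., 2019) is commonly quantified with Cohen's kappa before discrepancies are discussed. The paper does not report a kappa statistic; the sketch below only illustrates the computation, on hypothetical primary-axis codes for ten studies.

```python
from collections import Counter


def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    # Expected agreement by chance, from each coder's marginal label rates.
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)


# Hypothetical primary-axis codes (A = engagement, B = privacy/security,
# C = equity/costs); not the paper's actual coding data.
coder_1 = ["A", "A", "B", "C", "A", "B", "A", "C", "A", "B"]
coder_2 = ["A", "A", "B", "C", "A", "A", "A", "C", "A", "B"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # → 0.833
```

Values above roughly 0.8 are conventionally read as strong agreement; items the coders disagree on (here the sixth study) are the ones flagged for consensus discussion.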
After coding, data synthesis combined quantitative descriptive statistics with qualitative thematic analysis. Quantitatively, frequency distributions illustrated the proportion of studies addressing each axis and their methodological tendencies (e.g., prevalence of surveys, experiments, or interviews). Qualitatively, an interpretive synthesis identified cross-cutting patterns and tensions across the three axes, for instance, the balance between visibility and autonomy or between inclusion and surveillance. This dual synthesis approach allowed the review to capture not only how webcam use is studied but also how its implications are framed in different disciplinary and cultural contexts.
To enhance rigor, the synthesis followed an iterative validation process. Preliminary themes were compared with existing theoretical frameworks such as Social Presence Theory and Self-Determination Theory to ensure theoretical saturation and alignment with established constructs (Ryan & Deci, 2000; Short et al., 1976). When necessary, axial coding was applied to refine categories that overlapped or evolved during analysis. The goal of the synthesis was to construct a comprehensive model illustrating how webcam visibility mediates the relationship between engagement, privacy, and equity in online education. The synthesized findings formed the basis for the results and discussion sections, as well as for the evidence-informed recommendations for policy and practice presented in Section 6 of the paper.

4. Results

4.1. Corpus Characterization and Temporal Trends

The systematic search yielded 71 full-text articles included in the final synthesis, published between January 2020 and October 2025. As shown in Table 2, most of the literature focuses on Higher Education (HE) settings (77.5%, N = 55), with only 22.5% (N = 16) addressing K-12 contexts. Geographically, studies were concentrated in North America (36.6%, N = 26) and Asia and MENA (28.2%, N = 20). A sub-analysis revealed that 92.3% of the North American studies focused exclusively on HE settings, contrasting with a more mixed focus in other regions. The evidence base is predominantly quantitative, with 59.2% (N = 42) using surveys or experiments, followed by qualitative (25.4%, N = 18) and mixed methods (15.5%, N = 11). The most frequent research instrument was the self-report survey (70.4%, N = 50). Sample sizes were substantial across the corpus, with a median of 312 participants (interquartile range, IQR: 145–850).
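Descriptive figures of this kind (shares by educational level, median and IQR of sample sizes) follow directly from the coding matrix. A minimal sketch, assuming a hypothetical list of study records with `level` and `n` fields rather than the actual extraction sheet:

```python
from collections import Counter
from statistics import quantiles

def describe_corpus(studies):
    """Percentage share per educational level plus median/IQR of sample sizes."""
    total = len(studies)
    shares = {level: round(100 * count / total, 1)
              for level, count in Counter(s["level"] for s in studies).items()}
    # quantiles(n=4) returns the three quartile cut points (Q1, median, Q3)
    q1, median, q3 = quantiles(sorted(s["n"] for s in studies), n=4)
    return shares, median, (q1, q3)
```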
Analysis of publication trends (depicted in Figure 2) reveals a peak in research activity in 2021 (33.8%, N = 24), following the rapid global transition to emergency remote learning. Crucially, a logistic regression on the temporal distribution showed a statistically significant decreasing trend (β_year = −0.58, p < 0.01) in the proportion of studies examining mandatory camera policies from 2020 to 2025, indicating a scholarly shift toward evaluating optional or encouraged policies (Trust & Goodman, 2023).
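A trend test of this form amounts to a binomial logistic regression of policy type on publication year. The sketch below fits one by Newton-Raphson; the per-year counts are hypothetical placeholders, since the text does not report the year-by-year breakdown:

```python
import math

def fit_logistic_trend(years, successes, totals, iters=50):
    """Newton-Raphson fit of logit P(mandatory) = b0 + b1 * (year - first_year)."""
    x = [y - years[0] for y in years]
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, k, n in zip(x, successes, totals):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = n * p * (1.0 - p)           # binomial information weight
            g0 += k - n * p                 # score for the intercept
            g1 += xi * (k - n * p)          # score for the slope
            h00 += w; h01 += xi * w; h11 += xi * xi * w
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step: H^-1 * gradient
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

With counts in which mandatory-policy studies shrink as a share of each year's output, the fitted slope is negative, matching the direction of the reported trend.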
In terms of publication quality (Table 3), 48% of the corpus (34 studies) appeared in journals indexed in Scopus and WoS Q1 or Q2, confirming the high academic rigor of the included literature. The median journal quality was Q2, with only 13 articles published in conference proceedings, suggesting a solid, peer-reviewed evidence base for the synthesis. A sub-analysis of quality by thematic focus indicated that Axis B (Privacy and Security) studies achieved the highest median quality (Q2, IQR = 1), often due to the rigorous methodology associated with qualitative and mixed methods designs prevalent in this sub-field.
The geographic distribution (depicted in Figure 3) shows that most of the evidence on K-12 contexts, representing 87.5% of all K-12 studies (N = 14 out of 16), originates from outside North America (e.g., studies by Ferguson-Johnson et al. (2025) and Saha et al. (2021)). This finding suggests that the dynamics of equity in younger populations may be systematically underrepresented in the dominant HE corpus, particularly those stemming from North America, where 92.3% of local literature focuses solely on HE settings.

4.2. Methodological Quality and Risk of Bias

The quality appraisal (details presented in Table 4) revealed a median aggregate quality score of approximately 75% across all study types, with qualitative studies generally demonstrating higher reporting completeness (80.0%). However, the risk of bias was not uniform.
The Traffic-light Plot (Figure 4) presents the aggregate risk of bias summary, where the six assessed criteria are labeled according to the specific methodological tool used: Adequate Sample (MMAT/CASP), Appropriate Design (AD-MMAT), Non-response Bias (NRB-MMAT), Controlled for Confounders (CC-ROBINS-I), Measurement of Exposure (ME-ROBINS-I), and Relationship with Participants (CASP). The primary sources of concern, as summarized in the plot, were:
  • Selection Bias: Common in quantitative surveys (75.0% median score), where self-selection into webcam-use or non-use groups introduces confounding factors, such as socioeconomic status, which strongly predicts camera use (Dennen et al., 2022).
  • Reporting Completeness: Only 19.7% of studies reported data availability, and, critically, none were pre-registered (Table 5). The high prevalence of ad hoc measurement instruments (49.3%), combined with this lack of pre-registration, heightens the risk of reporting bias (e.g., selective reporting of positive engagement effects).
  • Confounding: In quasi-experimental designs, control of essential confounders such as students’ prior academic performance or home environment (which directly influences camera use) was often rated as uncertain (58%) or high risk (11%) on the ROBINS-I confounding item. These methodological limitations necessitated the sensitivity analysis detailed in Section 4.7.

4.3. Axis A—Pedagogical Engagement (Attention, Participation, Social Presence)

The synthesis of findings related to the pedagogical axis (summarized in Table 6) shows a complex relationship between camera use and learning outcomes. The total number of reports (N = 66) exceeds the number of studies (N = 45) because a single study may report findings across multiple domains.

4.3.1. Synthesis of Effects

Social Presence and Connection: The evidence base strongly supports the positive effect of visibility on perceived social presence and the sense of community in online settings (86.4% of 22 studies were favorable). Non-verbal cues fostered by webcams help mitigate transactional distance and build peer-to-peer connection.
Attention and Behavior: Two-thirds of the studies (66.7% of 15) reported a favorable effect on student self-reported attention, attributing the camera requirement to increased accountability and reduced multitasking. Conversely, ten studies reported a significant negative effect on wellbeing, citing increased anxiety and severe Zoom fatigue due to constant self-monitoring and performance pressure.
Active Participation: The effect on active participation (verbal contribution) was mixed and conditional, trending favorable overall. Webcams were most effective when combined with interactive pedagogical designs (e.g., small group work, structured discussion) rather than with passive lecture attendance.
The overall directionality and strength of these effects across the 45 studies are visually summarized in the Harvest Plot (Figure 5). This visualization confirms that most findings related to Social Presence and Attention align with the favorable hypothesis (green), while the Active Participation and Wellbeing variables show a mixed distribution of unfavorable (red), neutral, and conditional findings.

4.3.2. Meta-Analysis and Subgroup Findings

An exploratory meta-analysis of nine comparable studies that reported standardized effect sizes on an overall engagement score confirmed a statistically significant, small-to-moderate positive average effect: Hedges’ g = 0.32 (95% CI: 0.18 to 0.46). However, the high heterogeneity (I² = 71.5%) suggests the effect is strongly dependent on contextual factors.
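A pooled effect and I² of this kind are the standard outputs of a DerSimonian-Laird random-effects model. A minimal sketch applied to nine hypothetical effect sizes (the per-study values are not listed in the text):

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird pooling: returns (pooled g, 95% CI, I^2 in percent)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study with tau^2 added to its within-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```

An I² near 70%, as reported above, indicates that most of the observed dispersion reflects genuine between-study differences rather than sampling error, motivating the subgroup analysis that follows.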
The subgroup analysis (Figure 6) revealed a key moderator: the effect size was significantly larger in contexts with Optional and Encouraged camera policies (g = 0.41) compared to those with Mandatory policies (g = 0.21). This suggests that autonomy and agency are critical preconditions for the webcam to enhance engagement effectively, rather than compulsion.

4.4. Axis B—Privacy and Security (Surveillance and Data Protection)

The qualitative synthesis on the privacy axis (based on 14 primary studies, summarized in Table 7) established four critical thematic areas using CERQual methodology, with two themes reaching high confidence levels:
  • Perceived Surveillance (High Confidence): Students frequently reported feeling that mandatory camera use served an institutional function of monitoring rather than a pedagogical purpose (Shawish et al., 2025). This perception often resulted in anxiety, reticence, and a feeling that their privacy was being compromised (Tobi et al., 2021).
  • Exposure of the Personal Environment (Very High Confidence): This was the single most cited factor influencing the decision to keep the camera off. The forced exposure of homes, often shared, crowded, or unkempt, is seen as an invasion of the domestic sphere, forcing a negotiation between privacy needs and academic requirements.
  • Data Policy Gap (Moderate Confidence): The final two themes, Opaque Data Policies and Regulatory Compliance, both reached moderate confidence. Studies consistently highlighted a regulatory gap between formal data protection laws and institutional practice. Students often lack clear information regarding the retention period, access controls, and subsequent institutional use of recorded video footage, leading to high distrust.
The qualitative synthesis established a critical causal pathway linking institutional practices to negative behavioral outcomes. Opaque institutional policies regarding data retention and access, coupled with mandatory camera use, drive a strong feeling of Perceived Surveillance among students. This perception, in turn, triggers intense Anxiety and Distrust, leading to a self-protective behavior known as Self-Silencing (turning the camera off). The final consequence of this pathway, as confirmed by triangulation with Axis A findings, is the effective decrease in pedagogical participation. This demonstrates that security concerns directly undermine the very engagement benefits the policy intended to achieve.

4.5. Axis C—Equity and Technological Costs

Analysis of the equity axis (12 studies, Table 8) revealed tangible and critical barriers created by camera requirements.
  • Bandwidth Limitations: A median of 42% of students in the surveyed populations (primarily from low- to middle-income countries or low-SES groups) reported persistent difficulties with bandwidth stability, which is exacerbated by video streaming. This leads to technical exclusion, forcing students to disable their cameras or, worse, disconnect entirely.
  • Financial Burden (Data Costs): Studies consistently highlighted that the data consumption required for video streaming constitutes a high financial burden in low- and middle-income country contexts. This additional cost exacerbates existing inequalities, leading to a phenomenon known as Digital Divide 2.0.
  • Shared Spaces and Devices: A significant proportion of students (median: 35%) reported living in shared, noisy, or crowded environments, making it impossible to maintain academic focus or protect their home life from being exposed to the class. This barrier is strongly linked to socioeconomic status (SES), deepening existing disparities.
Figure 7 visualizes these disparities by plotting the prevalence of each barrier across regions. The Forest Plot confirms that the three major equity barriers are systematic, with median prevalences clustering at or above 40%: Bandwidth Limitations (median 42%) and Financial Burden (median 50%) affect nearly half of the surveyed student population. This confirms that the costs and technical limitations imposed by webcam requirements are not marginal issues but constitute a consistent and systematic barrier to equitable access to learning.
The narrative causal model (Figure 8) demonstrates how a Mandatory Camera Policy acts as a filter: it translates high technical and privacy requirements into a Financial and Social Cost, which disproportionately forces low-SES students into Partial Exclusion (camera off, muted) and, consequently, lower engagement.

4.6. Synthesis of Cross-Axis Tensions

The integration of findings across the three analytic axes reveals that the webcam debate centers on three fundamental and often irreconcilable trade-offs. Table 9 summarizes these core tensions, which prevent mandatory camera use from being a universal solution. The conflicts identified, spanning pedagogical theory, personal rights, and structural inequality, underscore that any official policy must explicitly manage these competing outcomes rather than assuming neutral effects.

4.7. Robustness, Sensitivity and Publication Bias

The analysis of robustness (Table 10) confirms the stability of the meta-analytic finding regarding pedagogical engagement. Specifically, the aggregated effect size (g = 0.32) proved resilient to threats from bias and methodological quality. When studies flagged as “High Risk of Bias” (MMAT and ROBINS-I score below 65%) were removed, the effect size remained significant, slightly reducing to g = 0.29 (p < 0.001). Furthermore, the effect size was slightly higher (g = 0.35) when considering only studies published in Q1 and Q2 journals, confirming that the positive association between webcam use and engagement is not an artifact of low-quality or low-impact studies, but is supported by the highest level of available evidence. The two subsequent figures assess potential undue influence and systematic publication bias.
Figure 9 presents the Influence Analysis (Leave-One-Out), showing the aggregated Hedges’ g effect size when each of the nine studies is sequentially omitted from the meta-analysis. The plot demonstrates high robustness: the resulting aggregated effect size remains tightly clustered around the overall mean (g = 0.32), with no single study pulling the overall estimate outside the 95% Confidence Interval of the primary result. This confirms that the small-to-moderate positive effect of webcams on engagement is not a result of undue influence by a single outlier or a massive sample.
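A leave-one-out analysis of this kind re-pools the corpus with each study omitted in turn and checks that no single omission moves the estimate materially. The sketch below uses simple inverse-variance (fixed-effect) weights for brevity, whereas the review's meta-analysis was random-effects, and the inputs are hypothetical:

```python
def pooled_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

def leave_one_out(effects, variances):
    """Pooled estimate recomputed with each study omitted in turn."""
    return [pooled_effect(effects[:i] + effects[i + 1:],
                          variances[:i] + variances[i + 1:])
            for i in range(len(effects))]
```

Robustness corresponds to all leave-one-out estimates falling inside the confidence interval of the full-corpus result.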
Figure 10 presents the Funnel Plot, used to visually assess potential publication bias: smaller studies (lower precision, top of the plot) would be missing if they reported null or negative findings and went unpublished. The visual distribution shows a minor asymmetry, with a slight overrepresentation of smaller studies on the positive side of the effect. Egger’s test quantified this observation, suggesting a low-to-moderate risk of asymmetry (p = 0.08). While this result is not statistically significant at the conventional α = 0.05 level, it warrants cautious interpretation: some small studies with negative or null findings on engagement may not have been published, contributing to the final positive aggregated effect.
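Egger's test regresses each standardized effect (g/SE) on its precision (1/SE); an intercept far from zero signals funnel asymmetry. A minimal sketch with hypothetical inputs, returning the intercept and its t-statistic (the final comparison against the t-distribution with n − 2 degrees of freedom is omitted for brevity):

```python
import math

def egger_test(effects, ses):
    """Egger's regression: intercept of (g/SE) on (1/SE) and its t-statistic."""
    z = [g / s for g, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / sxx
    intercept = mz - slope * mx
    # Ordinary least-squares standard error of the intercept
    resid = [zi - intercept - slope * xi for xi, zi in zip(x, z)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx))
    return intercept, intercept / se_int if se_int > 0 else float("inf")
```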

5. Discussion

The findings of this critical review of 71 studies confirm that the simple prescription of “cameras on” during synchronous online learning represents a complex pedagogical and ethical dilemma. The core argument of the literature is not whether webcams are useful, but whether they should ever be mandatory (Trust & Goodman, 2023).

5.1. Synthesis and Interpretation of Core Findings

The analysis successfully isolated the specific, conditional benefits and the systematic, unconditional harms of required camera use. First, the synthesis of Axis A (Engagement) robustly validates the function of the webcam as a tool for increasing perceived social presence and accountability. Our meta-analysis confirms a small-to-moderate effect (g = 0.32), but this finding is critically moderated by student agency: the effect is stronger when the choice is optional (g = 0.41 vs. g = 0.21). This reinforces Self-Determination Theory, positing that compulsory visibility damages internal motivation.
Second, findings from Axis B (Privacy) and Axis C (Equity) provide compelling counterevidence to mandatory policies. The qualitative synthesis established a high-confidence link between camera use and the dual harms of Perceived Surveillance and Exposure of the Personal Sphere. This exposure is not merely an inconvenience; it is a significant equity barrier for students with low bandwidth or inadequate home environments. The mandatory camera requirement effectively imposes a financial and psychological toll that is disproportionately borne by low-SES students, exacerbating the digital divide and potentially leading to a spiral of disengagement.

5.2. Addressing the Cross-Axis Tensions

The tensions identified in Table 9 are crucial for policy implications. The conflict between Inclusion (pedagogical benefit) and Surveillance (ethical harm) is the central crisis of the mandatory webcam policy. When students feel monitored rather than supported, they resort to self-protective behaviors like self-silencing, which paradoxically defeats the original purpose of increasing engagement. Our robustness check confirms that this dilemma is not a methodological artifact, but a fundamental trade-off supported by high-quality evidence. The principle of Contextual Integrity is fundamentally violated when the camera shifts its function from a tool for interaction (pedagogy) to an instrument for accountability (surveillance).

5.3. Implications for Policy and Practice

The current evidence base necessitates a comprehensive shift from mandatory policies to a “Cameras Optional with Guardrails” framework (Table 11).
For Practice: Educators should leverage the camera only when its function is pedagogically necessary (e.g., for non-verbal feedback during small-group discussions) and combine it with low-bandwidth alternatives for participation (e.g., text-based responses, structured polling). Furthermore, educators must be trained to identify and manage Zoom fatigue and the privacy and equity issues arising from background exposure.
For Policy: Institutional policies must address the regulatory and ethical opacity identified in Axis B. This includes:
  • Mandatory and Explicit Consent: Instituting clear, explicit informed consent for all recording and data retention practices, aligning with GDPR and local data protection laws.
  • Equity Mitigation: Subsidizing data costs or providing virtual background tools and access to private study spaces to mitigate the financial and social burdens on vulnerable students.

6. Conclusions and Future Work

6.1. Synthesis and Fulfillment of Objectives

This critical literature review undertook a multidimensional analysis of mandatory webcam use during synchronous online education, synthesizing evidence across 71 peer-reviewed studies published between 2020 and 2025. The analysis directly addressed the research question: What are the benefits and drawbacks of mandatory webcam use in online synchronous classes, with respect to pedagogical engagement, privacy and security, and equity and digital access?
The synthesis concludes that while webcams offer a small, statistically significant positive effect on engagement (g = 0.32), this benefit is highly conditional and frequently offset by significant ethical and structural costs. The study successfully fulfilled all four objectives:
  • Pedagogical Synthesis (Axis A): We confirmed the positive influence on social presence and accountability but established that this effect is maximized only when student autonomy is preserved (g = 0.41 for optional policies vs. g = 0.21 for mandatory).
  • Privacy and Security Analysis (Axis B): We established through CERQual analysis a high-confidence link between mandatory policies and perceived surveillance, leading to student anxiety and self-protective behaviors (self-silencing).
  • Equity Analysis (Axis C): We quantified the systemic nature of the digital divide, showing that bandwidth limitations and financial costs (data usage) impact nearly half of the student populations surveyed, disproportionately excluding low-SES learners.
  • Model and Recommendations: We concluded that mandatory webcam use cannot be considered a universal solution, necessitating flexible policies that balance these trade-offs.

6.2. Main Findings and Policy Implications

The most critical finding of this research is the irreconcilable tension between the desire for Social Presence (Axis A) and the imperative for Autonomy and Equity (Axes B and C). The visual evidence from the network analysis confirmed that the strongest tensions exist between Engagement and Equity, underscoring that the camera requirement acts as a systematic structural filter that compounds existing inequalities.
The central policy recommendation derived from this synthesis is a shift toward a “Cameras Optional with Guardrails” framework. This framework requires institutions to:
  • Prioritize Autonomy: Adopt optional or encouraged policies to leverage the engagement benefits without triggering surveillance anxiety.
  • Ensure Equity: Provide technical accommodation, subsidized data, and virtual background tools to mitigate the financial and social costs imposed on vulnerable students.
  • Enforce Transparency: Establish clear, explicit informed consent and data retention policies for all video recordings to address the high student distrust identified in the qualitative analysis.

6.3. Limitations and Future Research

This review acknowledges several methodological limitations inherent to the source literature. These include the high heterogeneity (I² = 71.5%) of the engagement studies and the heavy reliance of the corpus on self-report instruments, which introduces a risk of social desirability bias. Beyond these methodological constraints, the synthesis revealed significant contextual and thematic gaps.
The heatmap in Figure 11 highlights a systematic imbalance in the evidence base. Research is largely confined to Higher Education (HE) settings in the Global North, resulting in a critical under-representation of the K-12 population and the Global South. Crucially, the most severe gaps lie at the intersection of Privacy (Axis B) and Equity (Axis C) with non-Western legal and social contexts, limiting the external validity of policy recommendations for a global audience.
Future research must prioritize experimental designs that objectively measure learning outcomes rather than self-reported engagement. Contextually, there is a need to expand research into the K-12 setting and conduct studies focusing on the social and financial costs of webcams in regions of the Global South, directly addressing the highest-risk areas identified in the Heatmap.

Author Contributions

Conceptualization, A.C.-H. and L.V.-G.; methodology, A.C.-H. and L.V.-G.; software, A.C.-H. and M.G.C.L.-G.; validation, M.D.P.L.-B., M.G.C.L.-G. and L.V.-G.; formal analysis, L.V.-G. and M.D.P.L.-B.; investigation, A.C.-H. and L.V.-G.; resources, M.C.-H.; data curation, M.G.C.L.-G. and M.D.P.L.-B.; writing—original draft preparation, A.C.-H. and L.V.-G.; writing—review and editing, M.C.-H., A.C.-H. and L.V.-G.; visualization, A.C.-H. and M.G.C.L.-G.; supervision, M.C.-H.; project administration, M.C.-H. and A.C.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to acknowledge the financial support of Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in the production of this work. Additionally, we express our gratitude to the Secretaria de Ciencia, Humanidades, Tecnologia e Innovacion (SECIHTI), Tecnologico de Monterrey, and the Instituto Politecnico Nacional (IPN) for their support during this research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

List of Included Studies in the Synthesis (N = 71)

  • Abdelhadi, Z., Naseif, M., Alhejali, W., & Elhayek, A. (2025, January 15–16). Teacher eye: An AI-powered system for monitoring student engagement in online education. 2025 22nd International Learning and Technology Conference (pp. 25–30), Jeddah, Saudi Arabia. https://doi.org/10.1109/lt64002.2025.10940905.
  • Rojabi, A. R., Setiawan, S., Munir, A., Purwati, O., & Widyastuti. (2022). The camera-on or camera-off, is it a dilemma? Sparking engagement, motivation, and autonomy through Microsoft Teams videoconferencing. International Journal of Emerging Technologies in Learning (IJET), 17(11), 174–189. https://doi.org/10.3991/ijet.v17i11.29061.
  • Alim, S., Petsangsri, S., & Morris, J. (2022). Does an activated video camera and class involvement affect academic achievement? An investigation of distance learning students. Education and Information Technologies, 28(5), 5875–5892. https://doi.org/10.1007/s10639-022-11380-2.
  • Almekhled, B., & Petrie, H. (2023). Concerns of Saudi higher education students about security and privacy of online digital technologies during the coronavirus pandemic. In Lecture notes in computer science (pp. 481–490). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-42286-7_27.
  • Almekhled, B., & Petrie, H. (2023). Privacy and security in online teaching during the COVID-19 pandemic: Experiences and concerns of teachers in UK higher education. In Electronic workshops in computing. 36th international BCS human-computer interaction conference. BCS Learning & Development. https://doi.org/10.14236/ewic/bcshci2023.24.
  • Almekhled, B., & Petrie, H. (2023, April 21–23). UK Students’ concerns about security and privacy of online higher education digital technologies in the coronavirus pandemic. 15th International Conference on Computer Supported Education (pp. 483–492), Prague, Czech Republic. https://doi.org/10.5220/0011993500003470.
  • B, I. B., V, V., M, V., Jeyakumar, V., R., N., & Ramesh, R. (2025, May 23–25). Gaze based attention monitoring and assessment system. 2025 6th International Conference on Control, Communication and Computing (ICCC) (pp. 1–6), Thiruvanathapuram, India. https://doi.org/10.1109/iccc64910.2025.11077190.
  • Baarir, N. F., Bourekkache, S., & Aloui, A. (2022, December 7–8). The use of Deep Learning techniques in e-learning systems and MOOCs. 2022 International Symposium on Innovative Informatics of Biskra (ISNIB) (pp. 1–6), Biskra, Algeria. https://doi.org/10.1109/isnib57382.2022.10076002.
  • Bedenlier, S., Wunder, I., Gläser-Zikuda, M., Kammerl, R., Kopp, B., Ziegler, A., & Händel, M. (2021). Generation invisible? Higher education students’ (non)use of webcams in synchronous online learning. International Journal of Educational Research Open, 2, 100068. https://doi.org/10.1016/j.ijedro.2021.100068.
  • Byers, R. E., Carter, C. R., & Wang, Y. (2024). Student engagement in synchronous online learning: Effectiveness of camera and chat/vote engagement methods. Decision Sciences Journal of Innovative Education, 22(3), 138–157. https://doi.org/10.1111/dsji.12309.
  • Calvo-Ferrer, J. R. (2022). Disposición del alumnado universitario español a conectar su cámara durante la pandemia generada por la COVID-19. Íkala, Revista de Lenguaje y Cultura, 27(2), 292–311. https://doi.org/10.17533/udea.ikala.v27n2a01.
  • Chandran, P., Huang, Y., Munsell, J., Howatt, B., Wallace, B., Wilson, L., D’Mello, S., Hoai, M., Rebello, N. S., & Loschky, L. C. (2024, June 4–7). Characterizing learners’ complex attentional states during online multimedia learning using eye-tracking, egocentric camera, webcam, and retrospective recall. 2024 Symposium on Eye Tracking Research and Applications (pp. 1–7), Glasgow, UK. https://doi.org/10.1145/3649902.3653939.
  • Christian, D. D., Hendrickson, K. A., & Jadav, A. (2023). Cameras on or off during online synchronous courses? That is the question: an analysis of university faculty caring intelligence. Interactive Learning Environments, 32(10), 6552–6564. https://doi.org/10.1080/10494820.2023.2267596.
  • Chingapurathu, S. K., Jeganathan, L., & Janaki Meena, M. (2025). Enhancing online learning: A pose and emotion-based approach for student attention monitoring. In Lecture notes in electrical engineering (pp. 651–665). Springer Nature Singapore. https://doi.org/10.1007/978-981-97-4711-5_45.
  • Cliffe, M., Di Battista, E., & Bishop, S. (2020). Can you see me? Participant experience of accessing a weight management programme via group videoconference to overcome barriers to engagement. Health Expectations, 24(1), 66–76. https://doi.org/10.1111/hex.13148.
  • Costa-Feito, A., Saavedra, A., & Barta, S. (2025). Enhancing student memorization through teacher webcam usage and the interplay of social presence. International Journal of Educational Technology in Higher Education, 22(1), 58. https://doi.org/10.1186/s41239-025-00554-w.
  • Cupchik, G. C., Rebello, C. B., Albar, R., Cocunato, J., Cupchik, E., Ignacio, A., & Faubert, E. (2024). Negotiating visibility: Mediating presence through Zoom camera choices in post-secondary students during COVID-19. Societies, 14(7), 126. https://doi.org/10.3390/soc14070126.
  • Daksith, N., Wanaguru, S., Kolonne, U., Dolawatta, A., Abeywardhana, L., & Kasthurirathna, D. (2023, December 7–8). Multi model approach to evaluate and enhance student—Teacher interactions. 2023 5th International Conference on Advancements in Computing (ICAC) (pp. 555–560), Colombo, Sri Lanka. https://doi.org/10.1109/icac60630.2023.10417599.
  • Dennen, V. P., Yalcin, Y., Hur, J., & Screws, B. (2022). Student webcam behaviors and beliefs: Emergent norms, student performance, and cultural differences. Online Learning, 26(4), 168–192. https://doi.org/10.24059/olj.v26i4.3472.
  • Dey, A., Anand, A., Samanta, S., Sah, B. K., & Biswas, S. (2024). Attention-based AdaptSepCX network for effective student action recognition in online learning. Procedia Computer Science, 233, 164–174. https://doi.org/10.1016/j.procs.2024.03.206.
  • Dinsmore, B., & Jonas, A. (2025). Cameras on: How schools reify unequal relationships through decisions about online visibility. AERA Open, 11. https://doi.org/10.1177/23328584251375046.
  • Doan, T. M., Le, M. H., Nguyen, D. D., & Nguyen, M. S. (2022, November 21–23). Smart Desk in hybrid classroom: Research and implement a system to detect inattentive students. 2022 International Conference on Advanced Computing and Analytics (ACOMPA) (pp. 60–65), Ho Chi Minh City, Vietnam. https://doi.org/10.1109/acompa57018.2022.00016.
  • Dvorakova, K., Emmer, J., Janktová, R., & Klementová, K. (2023). The influence of remote learning environment and use of technology on university students’ behavioral engagement in contingency online learning. Tuning Journal for Higher Education, 10(2), 271–300. https://doi.org/10.18543/tjhe.2327.
  • Forkner, K. A., Wissman, A. W., Jimison, R. C., Nelson, K. B., Wuertz, R. E., Silvano, C. J., Barreto, E. F., Eckel Passow, J. E., Enders, F. T., & Staff, N. P. (2022). Lessons learned from clinical and translational science faculty and student survey as COVID-19 pandemic continues to shift education online. Journal of Medical Education and Curricular Development, 9, 23821205211073253. https://doi.org/10.1177/23821205211073253.
  • Gandhi, S., Fadia, A., Agrawal, R., Agrawal, S., & Kumar, P. (2023). MuOE: A multi-task ordinality aware approach towards engagement detection. In Lecture notes in computer science (pp. 70–79). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-45170-6_8.
  • Gopi, K., Bhavana, G., Kumar, K. R., & Bethe, S. V. (2025, June 11–13). Live monitoring of student behaviour during online classes. 2025 3rd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS) (pp. 1158–1163), Erode, India. https://doi.org/10.1109/icssas66150.2025.11081043.
  • Gudmundsen, J. (2025). Showing smartphones in a collaborative learning activity in video-mediated L2 interaction. Learning, Culture and Social Interaction, 54, 100941. https://doi.org/10.1016/j.lcsi.2025.100941.
  • Händel, M., Bedenlier, S., Kopp, B., Gläser-Zikuda, M., Kammerl, R., & Ziegler, A. (2022). The webcam and student engagement in synchronous online learning: Visually or verbally? Education and Information Technologies, 27(7), 10405–10428. https://doi.org/10.1007/s10639-022-11050-3.
  • Hanif, M., Sulistiyo, M. D., & Sthevanie, F. (2025, July 3–5). Real-time detection of student drowsiness in online learning environments using YOLO11. 2025 International Conference on Data Science and Its Applications (ICoDSA) (pp. 375–380), Jakarta, Indonesia. https://doi.org/10.1109/icodsa67155.2025.11157270.
  • Horvat, M., Doljanin, D., & Jagušt, T. (2022). Quantitative measures for classification of human upper body posture in video signals to improve online learning. AIP Conference Proceedings, 2638, 020005. https://doi.org/10.1063/5.0100044.
  • Hossen, M. K., & Uddin, M. S. (2025, February 13–15). A multimodal monitoring system with XGBoost classifier for optimizing student engagement in online learning. 2025 International Conference on Electrical, Computer and Communication Engineering (ECCE), IEEE (pp. 1–6), Chittagong, Bangladesh. https://doi.org/10.1109/ecce64574.2025.11012991.
  • Hossen, M. K., & Uddin, M. S. (2025). From data to insights: Using gradient boosting classifier to optimize student engagement in online classes with explainable AI. Education and Information Technologies, 30(13), 18089–18130. https://doi.org/10.1007/s10639-025-13500-0.
  • Ion, T.-C., & Popescu, E. (2025, June 18–20). Enhancing synchronous online education: Overcoming connectivity challenges with innovative video-conferencing solutions. 2025 34th Annual Conference of the European Association for Education in Electrical and Information Engineering (EAEEIE) (pp. 1–6), Cluj-Napoca, Romania. https://doi.org/10.1109/eaeeie65428.2025.11136619.
  • Irfan, M., Patel, P., & Hassan, B. (2025, April 22–25). EngageSense: A hybrid approach for real time engagement detection for virtual classrooms. 2025 IEEE Global Engineering Education Conference (EDUCON) (pp. 1–9), London, UK. https://doi.org/10.1109/educon62633.2025.11016600.
  • Kalsi, S. S., Forrin, N. D., Sana, F., MacLeod, C. M., & Kim, J. A. (2023). Attention contagion online: Attention spreads between students in a virtual classroom. Journal of Applied Research in Memory and Cognition, 12(1), 59–69. https://doi.org/10.1037/mac0000025.
  • Khairi, M. A. M., Faridah, I., Norsiah, H., & Zaki, M. A. A. (2021). Preliminary study on readiness to teach online due to COVID-19 pandemic among university academician in Malaysia. International Journal of Information and Education Technology, 11(5), 212–219. https://doi.org/10.18178/ijiet.2021.11.5.1514.
  • Khan, A. R., Khosravi, S., Hussain, S., Ghannam, R., Zoha, A., & Imran, M. A. (2022, March 28–31). EXECUTE: Exploring eye tracking to support e-learning. 2022 IEEE Global Engineering Education Conference (EDUCON) (pp. 670–676), Tunis, Tunisia. https://doi.org/10.1109/educon52537.2022.9766506.
  • Kim, S., Kim, J.-H., Hyung, W., Shin, S., Choi, M. J., Kim, D. H., & Im, C.-H. (2024). Characteristic behaviors of elementary students in a low attention state during online learning identified using electroencephalography. IEEE Transactions on Learning Technologies, 17(1), 619–628. https://doi.org/10.1109/tlt.2023.3289498.
  • Kushlev, K., & Epstein-Shuman, A. (2022). Lights, cameras (on), action! Camera usage during Zoom classes facilitates student engagement without increasing fatigue. Technology, Mind, and Behavior, 3(3), 357–363. https://doi.org/10.1037/tmb0000085.
  • Lamba, S., & Sharma, N. (2025). Advanced online proctoring: Facial emotion monitoring with attentive-net. Advances in Artificial Intelligence and Machine Learning, 5(2), 3646–3662. https://doi.org/10.54364/aaiml.2025.52207.
  • Lasekan, O. A., Pachava, V., Godoy Pena, M. T., Golla, S. K., & Raje, M. S. (2024). Investigating factors influencing students’ engagement in sustainable online education. Sustainability, 16(2), 689. https://doi.org/10.3390/su16020689.
  • LeRoy, L. S., & Kaufmann, R. (2022). Identifying student motivations for webcam use in online courses. Interactive Learning Environments, 32(4), 1204–1218. https://doi.org/10.1080/10494820.2022.2115516.
  • Li, N., Romera Rodriguez, G., Xu, Y., Bhatt, P., Nguyen, H. A., Serpi, A., Tsai, C., & Carroll, J. M. (2022, June 1–3). Picturing one’s self: Camera use in Zoom classes during the COVID-19 pandemic. L@S ’22: Ninth ACM Conference on Learning @ Scale (pp. 151–162), New York, NY, USA. https://doi.org/10.1145/3491140.3528284.
  • Lim, J., & Lee, M. (2024). The buffering effects of using avatars in synchronous video conference-based online learning on students’ concerns about interaction and negative emotions. Education and Information Technologies, 29, 16073–16096. https://doi.org/10.1007/s10639-024-12508-2.
  • Linson, A., Xu, Y., English, A. R., & Fisher, R. B. (2022). Identifying student struggle by analyzing facial movement during asynchronous video lecture viewing: Towards an automated tool to support instructors. In Lecture notes in computer science (pp. 53–65). Springer International Publishing. https://doi.org/10.1007/978-3-031-11644-5_5.
  • Maimaiti, G., Jia, C., & Hew, K. F. (2021). Student disengagement in web-based videoconferencing supported online learning: An activity theory perspective. Interactive Learning Environments, 31(8), 4883–4902. https://doi.org/10.1080/10494820.2021.1984949.
  • Meishar-Tal, H., & Forkosh-Baruch, A. (2024). Panopticon, synopticon, and omniopticon: A conceptual framework for understanding the utilization of cameras and video recordings in education. Educational Philosophy and Theory, 56(14), 1391–1402. https://doi.org/10.1080/00131857.2024.2395338.
  • Olaniyan, D., Adebiyi, M. O., Adebiyi, A. A., & Olaniyan, J. (2024, April 2–4). Enhancing engagement in virtual classrooms: A contactless multi-modal emotion detection model. 2024 International Conference on Science, Engineering and Business for Driving Sustainable Development Goals (SEB4SDG) (pp. 1–9), Omu-Aran, Nigeria. https://doi.org/10.1109/seb4sdg60871.2024.10630108.
  • Patil, S., Jani, A., Jondhale, D., Pathak, D., & Savyanavar, A. S. (2024, April 25–27). Student’s attention and alertness monitoring system using AI. 2024 MIT Art, Design and Technology School of Computing International Conference (MITADTSoCiCon) (pp. 1–6), Pune, India. https://doi.org/10.1109/mitadtsocicon60330.2024.10575441.
  • Pi, Z., Zhang, L., Zhao, X., & Li, X. (2024). Peers turning on cameras promotes learning in video conferencing. Computers & Education, 212, 104986. https://doi.org/10.1016/j.compedu.2023.104986.
  • Rajab, M. H., & Soheib, M. (2021). Privacy concerns over the use of webcams in online medical education during the COVID-19 pandemic. Cureus, 13, e13536. https://doi.org/10.7759/cureus.13536.
  • Saravanan, C., Dhanush, S., Kanimozhi, S., Kaviya, B., & Lakshana, R. (2024, December 6–7). E-class engagement and behaviour tracking using machine learning for virtual education. 2024 13th International Conference on System Modeling and Advancement in Research Trends (SMART) (pp. 668–671), Moradabad, India. https://doi.org/10.1109/smart63812.2024.10882503.
  • Mithun, B. S., Karmakar, S., Varghese, T., Jaiswal, D., Chatterjee, D., Gavas, R. D., Ramakrishnan, R. K., & Pal, A. (2023, October 1–4). Mind Indriya: A system for simultaneous assessment of cognitive load, anxiety and visual attention. 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 3234–3240), Honolulu, HI, USA. https://doi.org/10.1109/smc53992.2023.10394104.
  • Sauter, M., Hirzle, T., Wagner, T., Hummel, S., Rukzio, E., & Huckauf, A. (2022, June 8–11). Can eye movement synchronicity predict test performance with unreliably sampled data in an online learning context? ETRA ’22: 2022 Symposium on Eye Tracking Research and Applications (pp. 1–5), Seattle, WA, USA. https://doi.org/10.1145/3517031.3529239.
  • Schreder, K. A., Alonzo, J., & McClure, H. (2023). Building an inclusive pedagogy in synchronous online learning environments focused on social justice issues: A case study. International Journal of Technology in Education, 6(1), 1–18. https://doi.org/10.46328/ijte.322.
  • Šola, H. M., Qureshi, F. H., & Khawaja, S. (2024). Exploring the Untapped potential of neuromarketing in online learning: implications and challenges for the higher education sector in Europe. Behavioral Sciences, 14(2), 80. https://doi.org/10.3390/bs14020080.
  • Song, R., Kang, H., Wang, W., & Zhu, Q. (2024, July 28–31). The online audience’s attention evaluation based on gaze tracking. 2024 43rd Chinese Control Conference (CCC) (pp. 3559–3564), Kunming, China. https://doi.org/10.23919/ccc63176.2024.10662531.
  • Song, Y., Cao, J., Wu, K., Yu, P. L. H., & Lee, J. C.-K. (2023). Developing “learningverse”—A 3-D metaverse platform to support teaching, social, and cognitive presences. IEEE Transactions on Learning Technologies, 16(6), 1165–1178. https://doi.org/10.1109/tlt.2023.3276574.
  • Tendhar, C., Chen, C., Duffy, C., Metzger, K., & Koltz, E. (2024). Effects of active learning techniques on learners’ perceptions of engagement and effectiveness in pre-clinical courses. Education for Health, 37(1), 50–60. https://doi.org/10.62694/efh.2024.15.
  • Tosto, S. A., Alyahya, J., Espinoza, V., McCarthy, K., & Tcherni-Buzzeo, M. (2023). Online learning in the wake of the COVID-19 pandemic: Mixed methods analysis of student views by demographic group. Social Sciences & Humanities Open, 8(1), 100598. https://doi.org/10.1016/j.ssaho.2023.100598.
  • Van Brown, B., & Crocetto, J. (2025). COVID-19 and the online gaze: transferring trauma informed practices online. Discover Education, 4(1), 190. https://doi.org/10.1007/s44217-025-00623-2.
  • Waluyo, B., & Wangdi, T. (2024). Understanding the roles of video cameras in online English courses: A qualitative inquiry into students and foreign lecturers’ conceptions. E-Learning and Digital Media, 22(4), 336–349. https://doi.org/10.1177/20427530241239396.
  • Wong, J. M. S., Tang, W. K. W., & Li, K. C. (2025). Digital transformation in higher education: tertiary students’ perspectives on online learning and its implications for the future. International Journal of Innovation and Learning, 37(5), 1–18. https://doi.org/10.1504/ijil.2025.144600.
  • Jiang, X., Li, Y., Kuang, Z., & Yu, J. (2025). Does instructors’ and students’ on-camera presence enhance learning? Journal of Computer Assisted Learning, 41(1), e13122. https://doi.org/10.1111/jcal.13122.
  • Xie, N., Liu, Z., Li, Z., Pang, W., & Lu, B. (2023). Student engagement detection in online environment using computer vision and multi-dimensional feature fusion. Multimedia Systems, 29(6), 3559–3577. https://doi.org/10.1007/s00530-023-01153-3.
  • Yadav, S., Siddiqui, M. N., & Shukla, J. (2023). EngageMe: Assessing student engagement in online learning environment using neuropsychological tests. In Communications in computer and information science (pp. 148–154). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-36336-8_23.
  • Yeung, M. W. L., Yau, A. H. Y., & Lee, C. Y. P. (2022). How should webcams be used in online learning under COVID-19: A co-orientation analysis of teachers’ and students’ perceptions of student social presence on webcam. Journal of Computer Assisted Learning, 39(2), 399–416. https://doi.org/10.1111/jcal.12751.
  • You, J., Jung, M., Shin, Y., & Kim, K. (2025). Teacher-student inter-brain and behavioral synchronies in remote education. IEEE Access, 13, 87026–87035. https://doi.org/10.1109/access.2025.3569961.
  • Yulius, R., Sumpeno, S., Purnomo, M. H., & Winarto, R. (2025, June 12–14). Deep learning based student engagement detection: Comparison of model architectures and computational time efficiency. 2025 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA) (pp. 1–6), Piraeus, Greece. https://doi.org/10.1109/civemsa65862.2025.11084820.
  • Zhang, L., Park, J. Y., Menon, N., Ranade, N., Yu, B., & Tan, S. (2025). Are you with us? A real-time engagement analytics with machine learning in online learning environments. In Communications in computer and information science (pp. 444–453). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-94153-5_44.
  • Zhu, W., Leng, X., Mayer, R. E., & Wang, F. (2025). No need for webcams with synchronous online learning. Learning and Instruction, 98, 102131. https://doi.org/10.1016/j.learninstruc.2025.102131.

References

  1. Almekhled, B., & Petrie, H. (2024). Security and privacy in online teaching during the COVID-19 pandemic: Experiences and concerns of academics in Saudi higher education. Journal of Innovative Digital Transformation, 1(2), 85–100.
  2. Booth, A., Sutton, A., & Papaioannou, D. (2016). Systematic approaches to a successful literature review (2nd ed.). SAGE Publications.
  3. Chandrasiri, N. R., & Weerakoon, B. S. (2022). Online learning during the COVID-19 pandemic: Perceptions of allied health sciences undergraduates. Radiography, 28(2), 545–549.
  4. Critical Appraisal Skills Programme. (2018). CASP qualitative checklist. Available online: https://casp-uk.net/checklists-archive/casp-qualitative-studies-checklist.pdf (accessed on 8 January 2026).
  5. Davies, P. (2000). The relevance of systematic reviews to educational policy and practice. Oxford Review of Education, 26(3–4), 365–378.
  6. Deng, Z., & Yang, Z. (2025). Exploring the impact of online education on student engagement in higher education in post-COVID-19: What students want to get? Frontiers in Psychology, 16, 1574886.
  7. Ferguson-Johnson, S., Ryan, A. M., & Cortina, K. S. (2025). Webcam use and its role in children’s engagement and achievement during extended remote learning. Computers & Education, 241, 105443.
  8. Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews (2nd ed.). SAGE Publications.
  9. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108.
  10. Hong, Q. N., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., Dagenais, P., Gagnon, M.-P., Griffiths, F., Nicolau, B., O’Cathain, A., Rousseau, M.-C., & Vedel, I. (2018). The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Education for Information, 34(4), 285–291.
  11. Hosszu, A., Rughiniş, C., Rughiniş, R., & Rosner, D. (2022). Webcams and social interaction during online classes: Identity work, presentation of self, and well-being. Frontiers in Psychology, 12, 761427.
  12. Kitchenham, B. (2004). Procedures for performing systematic reviews (pp. 1–26). Joint Technical Report. Keele University.
  13. Kocur, D. J., & Jach, Ł. (2024). Turn on your self-compassion and turn on the webcam. Self-compassion, self-esteem, body esteem, gender, and discomfort related to using the camera affect students’ activity during synchronous online classes. Education and Information Technologies, 29(18), 25123–25141.
  14. Miles, M. B., Huberman, A. M., & Saldaña, J. (2019). Qualitative data analysis: A methods sourcebook (4th ed.). SAGE Publications.
  15. Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79, 119.
  16. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., & Chou, R. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71.
  17. Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Blackwell Publishing.
  18. Roth, I., & Gafni, R. (2021). Does web camera usage in synchronous lessons affect academic emotions? Issues in Information Systems, 22(1), 149–163.
  19. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68.
  20. Saha, A., Dutta, A., & Sifat, R. I. (2021). The mental impact of digital divide due to COVID-19 pandemic induced emergency online learning at undergraduate level: Evidence from undergraduate students from Dhaka City. Journal of Affective Disorders, 294, 170–179.
  21. Shawish, N. S., Abu Kamel, A. M., Al-Maghaireh, D. A., Albana, H. M., & Kawafha, M. (2025). Allied health specialties online learning: Educators and students issues and challenges. International Journal of Educational Reform, 10567879251345808.
  22. Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. Wiley.
  23. Sterne, J. A., Hernán, M. A., Reeves, B. C., Savović, J., Berkman, N. D., Viswanathan, M., Henry, D., Altman, D. G., Ansari, M. T., Boutron, I., Carpenter, J. R., Chan, A. W., Churchill, R., Deeks, J. J., Hróbjartsson, A., Kirkham, J., Jüni, P., Loke, Y. K., Pigott, T. D., … Higgins, J. P. (2016). ROBINS-I: A tool for assessing risk of bias in non-randomized studies of interventions. BMJ, 355, i4919.
  24. Tobi, B., Osman, W. H., Bakar, A. L. A., & Othman, I. W. (2021). A case study on students’ reasons for not switching on their cameras during online class sessions. Learning, 6(41), 216–224.
  25. Trust, T., & Goodman, L. (2023). Cameras optional? Examining student camera use from a learner-centered perspective. TechTrends, 1–13.
  26. Uygur, S. S., & Erdogmus, Y. K. (2025). (In) visible students: Investigating why students turn off their cameras during live lessons. International Journal of Educational Research, 132, 102638.
  27. Williams, C., & Pica-Smith, C. (2022). Camera use in the online classroom: Students’ and educators’ perspectives. European Journal of Teaching and Education, 4(2), 28–51.
Figure 1. PRISMA flow diagram of the study identification and screening process (adapted from Page et al., 2021).
Figure 2. Temporal trends of webcam policy research (2020–2025).
Figure 3. World Heatmap of Geographic Distribution of Included Studies (N = 71). Note: NA stands for North America.
Figure 4. Aggregate Risk of Bias Summary Plot. Note: The dashed line indicates the 50% threshold.
Figure 5. Harvest Plot of Engagement Effect Direction and Strength. Note: Vertical dashed lines separate the distinct analytical domains.
Figure 6. Subgroup Dotplot of Engagement Effects. Note: The red dashed vertical line indicates the null effect (0), and the grey dotted vertical line indicates the overall mean effect size. Blue dots represent optional policies, while red dots represent mandatory policies and the overall effect.
Figure 7. Forest Plot of Equity Barrier Prevalence. Note: The vertical dashed line indicates the 40% prevalence threshold.
Figure 8. Causal Narrative Model of Equity Pathway.
Figure 9. Influence Analysis (Leave-One-Out). Note: The vertical dashed line indicates the overall aggregated effect size (g = 0.32).
Figure 10. Funnel Plot of Publication Bias Assessment.
Figure 11. Heatmap of Thematic Gaps.
Table 1. Original search query.
Subject | Search String
Webcam-related terms | ("webcam" OR "camera-on" OR "video presence") AND
Online education contexts | ("online learning" OR "synchronous" OR "virtual classroom") AND
Conceptual dimensions | (engagement OR "social presence" OR attention OR privacy OR "data protection" OR security OR cybersecurity OR equity OR "digital divide" OR bandwidth)
Table 2. Corpus Characteristics (N = 71).
Variable | Category | Frequency (N = 71) | Proportion (%) | Median (IQR) / 98.5% CI (Wilson)
Year | 2020 | 10 | 14.1 | —
Year | 2021 | 24 | 33.8 | —
Year | 2022 | 19 | 26.8 | —
Year | 2023 | 11 | 15.5 | —
Year | 2024–2025 | 7 | 9.8 | —
Country or Region | North America | 26 | 36.6 | —
Country or Region | Asia and MENA | 20 | 28.2 | —
Country or Region | Europe | 18 | 25.4 | —
Country or Region | Global and Others | 7 | 9.8 | —
Educational Level | Higher Education (HE) | 55 | 77.5 | 98.5% CI [65.4–86.1%]
Educational Level | K-12 | 16 | 22.5 | 98.5% CI [13.4–34.6%]
Study Type | Quantitative | 42 | 59.2 | —
Study Type | Qualitative | 18 | 25.4 | —
Study Type | Mixed | 11 | 15.5 | —
Sample Size | Students | N/A 1 | N/A 1 | Median: 312 (IQR: 145–850)
Platform | Zoom | 48 | 67.6 | —
Platform | Others | 23 | 32.4 | —
Reported Policy | Optional | 44 | 62.0 | 98.5% CI [49.8–72.8%]
Reported Policy | Mandatory | 27 | 38.0 | 98.5% CI [25.9–49.9%]
1 N/A: Not Applicable.
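The Wilson confidence intervals reported in Table 2 can be reproduced with a short script. The following is a minimal sketch, not the authors' analysis code; it assumes the "98.5%" label corresponds to the 0.985 normal quantile (under that assumption the Higher Education row, 55 of 71 studies, yields bounds close to the tabulated 65.4–86.1%):

```python
from statistics import NormalDist

def wilson_ci(successes: int, n: int, level: float = 0.985) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion.

    `level` is interpreted as the normal quantile used for z
    (an assumption about how the table's '98.5% CI' was computed).
    """
    z = NormalDist().inv_cdf(level)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return center - half, center + half

# Higher Education subgroup: 55 of 71 included studies
lo, hi = wilson_ci(55, 71)   # approximately (0.65, 0.86)
```

If "98.5%" were instead the two-sided coverage (z ≈ 2.43), the interval would be slightly wider, so the exact bounds depend on which convention the analysis used.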
Table 3. Venue Quality Appraisal (N = 71).
Metric of Quality | Scopus/WoS Quartile (Median) | Top 10 Venues (Frequency) | Journals vs. Proceedings Ratio | Observations
Overall Quality | Q2 (Improving) | Computers and Education (3), Online Learning (2), Frontiers in Psychology (3) | 58 Journals / 13 Proceedings | 48% (34 studies) published in Q1 and Q2 journals.
Table 4. Methodological Quality Appraisal (Summary of MMAT, CASP, and ROBINS-I Scores).
Study Type | N | Median Quality Score (%) | Interquartile Range (IQR) | Key Observations on Bias
Quantitative (Non-experimental) | 42 | 75.0% | 66.7–83.3% | Primary risk: selection bias (self-selection in surveys)
Qualitative | 18 | 80.0% | 75.0–90.0% | Researcher positioning rarely declared
Mixed | 11 | 77.8% | 72.2–83.3% | Main weakness: integration of quantitative and qualitative components
By Thematic Axis
Axis A: Engagement | 45 | 73.8% | 69.2–83.3% | Highest risk of bias due to heavy reliance on self-report measures
Axis B: Privacy and Security | 14 | 80.0% | 75.0–87.5% | Generally higher-quality reporting in qualitative methodology
Axis C: Equity and Costs | 12 | 76.9% | 69.2–84.6% | Risk from indirect measurement of inequity (e.g., country income as proxy)
Table 5. Reporting Completeness (N = 71).
Reporting Criterion | N | Proportion | Key Implication
Explicit Ethical Statement | 59 | 83.1% | Generally well reported; essential for Axis B (Privacy)
Use of Validated Instruments (pre-existing) | 35 | 49.3% | Prevalence of ad hoc survey instruments, increasing the risk of measurement bias
Data (or instrument) availability | 14 | 19.7% | Low reporting, hindering replication of quantitative findings
Study Pre-registration | 0 | 0.0% | Total absence, indicating high risk of reporting bias (selective publication)
Table 6. Synthesis of Effects on Pedagogical Engagement (Axis A).
Dependent Variable | Aggregate Effect Direction | N | Vote-Counting (% Favorable) | Key Observations
Social Presence and Connection | Strongly Favorable | 22 | 86.4% | Visibility reduces transactional distance and fosters a sense of community.
Attention and Behavior | Favorable | 15 | 66.7% | Students report increased attentiveness and accountability when the camera is on.
Active Participation (Verbal) | Favorable | 19 | 57.9% | Benefits are realized only when combined with interactive pedagogy (group discussion), not in passive lectures.
Anxiety and Well-being | Mixed | 10 | 10.0% | Increases stress and Zoom fatigue due to constant self-monitoring.
Table 7. Thematic Matrix on Privacy and Security (Axis B).
Qualitative Theme | Operational Definition | N | CERQual Confidence Level | Synthesis or Key Finding
Perceived Surveillance | Feeling of being monitored by peers and the institution, leading to anxiety and reticence. | 12 | High | The perception of the camera as a monitoring tool erodes trust and discourages engagement.
Exposure of Personal Environment | Invasion of the home sphere, risk of judgment, and lack of adequate private space. | 14 | Very High | Most cited reason for turning the camera off; strongly linked to equity.
Opaque Data Policies | Lack of clarity on video retention, access (who views recordings), and institutional use. | 9 | Moderate | The distance between formal regulation (GDPR and FERPA) and everyday practice breeds distrust.
Regulatory Compliance | Explicit mention of GDPR, FERPA, or local laws. | 6 | Moderate | Video images are treated as personal data, requiring informed consent and secure handling.
Table 8. Evidence of Inequity and Technological Costs (Axis C).
Table 9. Cross-Axis Tensions.
Identified Tension | Crossed Axes | Synthesis/Key Finding
Social Presence vs. Autonomy | Engagement (Positive) vs. Privacy (Negative) | The gain in social presence through visibility is undermined if autonomy (Self-Determination Theory) is eroded by mandatory policy.
Pedagogical Interaction vs. Equity | Engagement (Positive) vs. Equity (Negative) | Practices that increase engagement (camera on) in resource-rich HE can exclude or overload students with limited resources and connectivity.
Visibility vs. Well-being | Engagement (Attention) vs. Security (Anxiety) | Constant visibility creates a significant cognitive and emotional cost (fatigue, stress) that distracts from learning, overriding the attention gains.
Table 10. Robustness Table (Sensitivity Analysis on Engagement Effect).
Sensitivity Analysis | Hedges’ g (95% CI) | p Value | Key Implication
Main Effect (N = 9) | 0.32 (0.18, 0.46) | p < 0.001 | Small-to-moderate positive effect.
Excluding “High Risk” Studies (N = 7) | 0.29 (0.14, 0.44) | p < 0.001 | Robust: effect remains significant, confirming the validity of the main finding.
Q1 and Q2 Studies Only | 0.35 (0.17, 0.53) | p < 0.001 | Effect is slightly higher in studies with higher methodological quality.
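The leave-one-out influence analysis (Figure 9) follows the standard recipe: re-pool the inverse-variance-weighted mean after omitting each study in turn and check that the aggregate stays near g = 0.32. A minimal fixed-effect sketch, using hypothetical per-study Hedges' g values and variances (the individual study estimates are not reported in these tables):

```python
from math import sqrt

def pooled_g(effects, variances):
    """Fixed-effect inverse-variance pooled Hedges' g with its 95% CI."""
    weights = [1.0 / v for v in variances]
    g = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = sqrt(1.0 / sum(weights))
    return g, (g - 1.96 * se, g + 1.96 * se)

def leave_one_out(effects, variances):
    """Pooled estimate recomputed with each study omitted in turn."""
    return [
        pooled_g(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])[0]
        for i in range(len(effects))
    ]

# Hypothetical values for nine studies, for illustration only
g_vals = [0.45, 0.28, 0.31, 0.20, 0.38, 0.33, 0.25, 0.41, 0.29]
g_vars = [0.02, 0.03, 0.01, 0.04, 0.02, 0.03, 0.05, 0.02, 0.03]
overall, ci = pooled_g(g_vals, g_vars)
loo_estimates = leave_one_out(g_vals, g_vars)
```

A random-effects model (e.g., DerSimonian–Laird) would add a between-study variance term to each weight; the leave-one-out loop itself is unchanged.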
Table 11. Policy Framework and Evidence-Based Recommendations.
Dimension | Evidence-Based Recommendation | Supporting Evidence (Axis and Confidence)
Institutional Policy | Adopt flexible policies (optional or encouraged) over mandatory ones, respecting student autonomy. | Axis A (Subgroup Meta-analysis), Axis B (CERQual High)
Pedagogical Design | Integrate the camera only when pedagogically necessary (small groups, discussions) and use low-bandwidth alternatives for participation. | Axis A (R5 Conditional), Axis A (Observation)
Equity and Technology | Subsidize or reduce data load (e.g., low-resolution settings) and provide virtual backgrounds to protect the home environment. | Axis C (R7, F8 Causal Model)
Privacy and Security | Establish clear data retention policies and obtain explicit informed consent for video recording and access. | Axis B (R6 Very High Confidence), Axis B (Regulation)