Article

Trajectory Patterns of Hygiene Training Effectiveness Across Three Instructional Modes

by Mark R. Limon 1,*, Shaira Vita Mae G. Adviento 2, Chariza Mae B. Basamot 2, Jacqueline B. Reyes 3, Karl Lorenze E. Gumsat 4, Athena Germynne D. Amano 5, Jessica Camille B. Ramirez 4, Christian Jay P. Pungtilan 4, Marie Dale R. Soriano 2, Louwelyn B. Baclagan 6, Shareen Kate A. Gamiao 6 and Shiella Mae G. Juan 2

1 Diamond Jenness Secondary School, 58 Woodland Dr., Hay River, NT X0E 0R8, Canada
2 Technical-Vocational and Livelihood Education Department, College of Teacher Education, Mariano Marcos State University, Laoag City 2900, Ilocos Norte, Philippines
3 Integrated University Laboratory Schools Unit, College of Teacher Education, Mariano Marcos State University, City of Batac 2906, Ilocos Norte, Philippines
4 Industrial Technology Department, College of Industrial Technology, Mariano Marcos State University, Laoag City 2900, Ilocos Norte, Philippines
5 Division of Laoag City, Department of Education, Laoag City 2900, Ilocos Norte, Philippines
6 Division of Ilocos Norte, Department of Education, Laoag City 2900, Ilocos Norte, Philippines
* Author to whom correspondence should be addressed.
Submission received: 11 December 2025 / Revised: 14 January 2026 / Accepted: 22 January 2026 / Published: 26 January 2026
(This article belongs to the Section Food Hygiene and Safety)

Abstract

Background: Hygiene and food-safety training is a critical public health strategy for preventing contamination and promoting safe food-handling practices in community settings. This study evaluated the long-term effectiveness of In-person, Online, and Hybrid instructional modes in enhancing hygiene and food-safety competencies among trainees in Ilocos Norte, Philippines. Methods: Using a longitudinal quasi-experimental design, performance was measured at 12, 24, and 36 months across four domains: Personal Health & Hygiene, Food Hazards, Cleaning and Sanitation, and Good Manufacturing Practices. A total of 384 students met all inclusion criteria and completed the full series of evaluations. Descriptive and inferential statistical analyses were employed. Results: Competency scores increased significantly over time in all instructional modes (p < 0.001). Hybrid learners demonstrated the highest early longitudinal gains at 12 months (mean score, M = 20.88), compared with In-person (M = 10.28) and Online (M = 10.57). At 36 months, Online learners achieved the highest performance (M = 19.50), indicating stronger long-term retention. Effect size analysis using eta squared (η2) showed large effects for Cleaning and Sanitation (η2 = 0.196), Good Manufacturing Practices (η2 = 0.115), and overall performance (η2 = 0.138). Standardized Mean Change (SMC) indicated substantial improvement across modes, with Hybrid showing the greatest early change (SMC = 41.76 at 12 months) and Online exhibiting the strongest long-term improvement (SMC = 38.80 at 36 months). Training Efficiency Index (TEI) identified In-person instruction as most efficient (TEI = 30.55), followed by Online (29.49) and Hybrid (19.56). Linear Mixed-Effects Regression confirmed significant main effects of Time (β = 4.82, p < 0.001) and Mode (β = 3.97, p < 0.001), as well as a significant Time × Mode interaction (β = −1.42, p < 0.01). 
Conclusions: The findings indicate that Hybrid instruction supports rapid early competency gains, while Online instruction yields superior long-term mastery of hygiene and food-safety competencies. These results provide evidence-based guidance for optimizing hygiene training programs in community and public health contexts.

1. Introduction

Hygiene and food-safety competencies among food handlers and food service workers are critical determinants of public health outcomes. Effective training improves knowledge, attitudes, and practices related to personal health hygiene (PHH), food hazards (FH) recognition, cleaning and sanitation (CS), and good manufacturing practices (GMP), which in turn reduces foodborne disease risk and contamination events. Recent systematic reviews indicate that online and blended food-safety education programs can improve knowledge and practices, though effect sizes and retention vary by delivery mode and outcome domain [1,2,3,4,5,6,7,8]. A meta-analysis of food-handler training by Insfran-Rivarola et al. [9] reported a very large effect size for knowledge improvement (Hedges’ g ≈ 1.24), modest gains in practice (g ≈ 0.65), and low-to-moderate attitude improvements (g ≈ 0.28). These findings confirm that hygiene training—whether traditional or digital—tends to be effective, but also underscore the variability in retention, especially for attitudes and practices [10,11,12,13,14].
Traditional face-to-face instruction has long been the standard for practical hygiene and food-safety training because it enables hands-on demonstration and immediate feedback. However, the increasing availability of digital platforms has expanded access and scalability, producing a shift toward online and hybrid (blended) training models. Studies in health and allied professions show blended learning often improves theoretical knowledge and procedural skills while offering flexibility and improved learner engagement. Evidence also points to online training’s strength in scalability and long-term reinforcement, though some reviews report modest to moderate overall effectiveness and particular weakness in attitude change [15,16,17,18,19,20,21].
Longitudinal evaluation is essential because initial post-training gains may differ from retained competence months or years later. Food-safety programs that show large immediate gains sometimes demonstrate attenuation or variability at later follow-ups; conversely, some online or repeated refresher approaches sustain or even increase retention over time. Historically, longitudinal food-safety evaluations (e.g., 12–24 months) have provided mixed evidence: some studies report stable retention, while others show decay without reinforcement [5,7,22]. This underscores the need for multi-timepoint assessment (e.g., 12, 24, 36 months) to capture trajectory patterns and to understand how instructional mode affects both short- and long-term outcomes [7,16,23,24,25].
Despite growing adoption of online and hybrid hygiene training, there remain three important gaps in the evidence base: (1) limited longitudinal comparisons spanning early (≈12 months) and longer (≥36 months) follow-up intervals; (2) few studies that jointly examine multiple competency domains (PHH, FH, CS, GMP) within the same cohort; and (3) limited use of modern longitudinal regression methods (mixed models, growth curve modeling, and multilevel mediation) to parse how instructional mode shapes both early longitudinal gains and retention trajectories. Addressing these gaps is critical for program design, policy decisions, and efficient allocation of training resources. Recent meta-analytic and program evaluations point to the potential for blended/hybrid models to accelerate short-term skill acquisition, while digital modalities may better sustain long-term retention when accompanied by reinforcement mechanisms—but these patterns need robust, longitudinal confirmation [3,18,26,27,28,29,30,31,32,33,34,35,36]. Evidence from other health education fields suggests that blended or hybrid learning often yields superior gains compared to strictly in-person methods [37,38,39,40,41,42,43].
This study aims to evaluate the comparative effectiveness of In-person, Online, and Hybrid instructional modes on hygiene and food-safety competency development at 12, 24, and 36 months. Specifically, it aimed to: compare mean competency scores (PHH, FH, CS, GMP, and overall composite) across modes at each follow-up; estimate effect sizes, training efficiency indices, and consistency variance across competencies and modes; explore mediation by learner engagement (or similar process measures) where data permit, and model longitudinal trajectories using linear mixed-effects regression to test Mode, Time, and Mode × Time effects on competency growth.
By combining multi-domain competency assessment, multi-timepoint follow-up, and robust longitudinal modeling, this study will provide actionable evidence on which instructional modalities best foster both early mastery and sustained hygiene competence. The findings will inform educators, public-health authorities, and industry stakeholders seeking to optimize training designs for maximal public-health impact and training efficiency. Moreover, by using modern mixed-effects modeling, the study will demonstrate analytic approaches that other program evaluators can adopt to evaluate training interventions in diverse settings.

2. Methods

2.1. Research Design

This study employed a longitudinal quasi-experimental research design to investigate the effects of In-person, Online, and Hybrid instructional modes on hygiene and food-safety competencies. Performance was assessed at three intervals—12, 24, and 36 months—to determine patterns of learning retention and the long-term impact of instructional delivery. Since the same participants were evaluated repeatedly across multiple time points, the design required an analytical approach suitable for dependent observations. An immediate post-training assessment was not conducted because the primary objective of the study was to evaluate long-term competency retention rather than short-term recall; the first assessment was therefore administered at 12 months following training completion. The study thus applied linear mixed-effects regression (LMER), which is methodologically appropriate for modeling individual growth trajectories and accounting for within-person correlation over time. This design enabled the researchers to assess both between-group differences (instructional modes) and within-group changes (time-dependent improvement).

2.2. Locale of the Study

The study was conducted across training sites in Ilocos Norte, Philippines, including barangay halls, community training centers, and institutional learning venues where food-safety training programs were regularly implemented with varying physical facilities, instructional resources, and learning environments. The location was selected because it hosts a continuous stream of food-handling trainees and community-based participants who undergo standardized hygiene and food-safety training under varying instructional modalities. All follow-up assessments were also completed within Ilocos Norte to ensure consistency of delivery, test administration, and participant accessibility.

2.3. Training Modalities

Three instructional modes were used in this study to enhance hygiene and food-safety competencies among the identified trainees: In-person, Online, and Hybrid. In-person training was delivered through face-to-face classroom instruction, including live lectures, instructor-led demonstrations, group discussions, and supervised hands-on activities. Online training consisted of self-paced digital modules delivered through a learning management system, including recorded video lectures, interactive quizzes, downloadable reading materials, and asynchronous discussion forums. Hybrid training combined face-to-face classroom sessions with online self-paced modules, allowing participants to receive instructor-led demonstrations while also engaging in independent online review and assessment.
Across all instructional modalities, the training followed the same standard hygiene and food-safety curriculum covering the core competency domains of PHH, FH, CS, and GMP. Content included hand hygiene, personal protective equipment, food contamination routes, temperature control, surface sanitation procedures, waste management, and basic facility hygiene standards. Instructional materials were standardized across modalities to ensure content equivalence. Differences between modalities were limited to the mode of delivery rather than curriculum coverage.
All assessments were administered in a controlled setting within the participating institutions. Participants completed the tests either in designated classrooms or computer laboratories under the supervision of trained proctors. Standardized instructions, time limits, and testing conditions were applied across all instructional modes to ensure consistency of assessment administration. Participants were not allowed to consult external materials during testing, and completed answer sheets were collected immediately after each session. The first standardized assessment was conducted at 12 months following training completion and served as the initial longitudinal reference point for this study. Although an immediate post-training or pre-training baseline was not available, the repeated assessments at 12, 24, and 36 months allowed examination of relative changes and learning trajectories over time.

2.4. Population and Sampling Procedure

The target population comprised Junior and Senior High school students who completed a formal hygiene and food-safety training program. A purposive sampling technique was employed to identify participants who met the following criteria: (a) completion of the full training cycle under only one instructional modality—In-person, Online, or Hybrid; (b) availability and willingness to participate in all scheduled evaluations; and (c) consistent attendance across all assessment periods. Participants were assigned to In-person, Online, or Hybrid instructional modes based on student choice, scheduling constraints, and learner access considerations rather than random allocation. To minimize selection bias, all instructional modes used the same curriculum, learning objectives, instructional duration, assessment instruments, and facilitator qualifications. In addition, linear mixed-effects regression modeling was employed to account for individual-level heterogeneity and unequal group sizes inherent in the non-randomized design.
To preserve the integrity of the longitudinal design, only respondents with complete data at 12, 24, and 36 months were retained for analysis. This procedure ensured that each instructional mode group possessed a comparable and reliable dataset for repeated-measures modeling. Of the 704 individuals who initially participated in the training, only 384 met all inclusion criteria, primarily due to inconsistencies in attendance; their participation spanned the full 36 months. Among the valid cases, 72 participants were classified under the In-person group, 128 under the Online group, and 184 under the Hybrid group. Analyses were restricted to participants with complete data across all follow-up periods to ensure valid longitudinal modeling. A formal attrition analysis was not feasible because complete baseline and covariate information was unavailable for participants who did not finish the full follow-up schedule. Post-training educational and occupational activities were not systematically tracked after program completion, and participants may have received additional hygiene-related exposure through school, work, or media during the 36-month follow-up period.
Four speakers or facilitators were invited to give lectures, simplify the FSLs for better understanding, and guide participants in training across all domains. They were purposively chosen for their qualifications as certified ServSafe food managers and their experience in programs and projects focused primarily on food safety.

2.5. Research Instrument

The study utilized a validated Hygiene and Food Safety Competency Assessment Tool (HFSCCA) consisting of 100 structured items divided into four core competency domains essential to hygiene and food-safety practice: PHH, FH, CS, and GMP. Each domain assessed knowledge, application, and understanding of hygiene principles through structured items. Each question provided four response options and was written in clear, context-appropriate language to ensure comprehension among participants. The difficulty level of the items was intentionally maintained across the 12-, 24-, and 36-month assessments to preserve comparability. Participants were allotted 100 min to complete the assessment. This uniform administration process ensured the reliability and comparability of scores across all time points, thereby supporting the validity of the longitudinal analyses. Parallel forms of the assessment were used across follow-up periods: although each form evaluated the same competency domains and maintained comparable difficulty levels, item wording and ordering were varied to minimize practice and recall effects.
The instrument also underwent expert validation and demonstrated strong internal consistency in previous studies, with Cronbach’s alpha values exceeding 0.96 [27], indicating high reliability. Parallel forms of the same instrument were administered at all three time points to ensure uniformity and comparability of scores across time and instructional modes.

2.6. Data Gathering Procedure

Data collection followed a three-phase structure aligned with the longitudinal design. During the first phase, participants completed the assessment tool at 12 months following their training under their assigned instructional mode. In-person trainees received face-to-face demonstrations and practice-based activities; Online trainees completed digital modules, video lessons, and virtual instruction; while Hybrid trainees engaged in an integrated blend of online modules complemented by scheduled in-person sessions. A true pre-training baseline assessment was not conducted; instead, the 12-month follow-up served as the initial reference point to emphasize long-term competency retention rather than short-term post-training recall.
In the second phase, the same assessment tool was administered at 24 months to measure intermediate knowledge retention and to capture changes in competency levels. The third phase occurred at 36 months, where long-term performance and retention were evaluated. Each assessment was administered under standardized testing conditions, with uniform instructions provided to all participants regardless of modality. Completed instruments were encoded, screened for completeness, and consolidated into a longitudinal dataset.

2.7. Data Analysis

A combination of descriptive, effect size, and inferential statistical treatments was used to analyze the data. Descriptive statistics—including means, standard deviations, and percentage gains—were computed to summarize learner performance across time points and across instructional modes. These measures provided an initial overview of central tendency and variability and facilitated comparison of competency levels across groups and time points. To quantify improvement within each instructional mode over time, the standardized mean change (SMC) was calculated using the formula:
SMC = (Mpost − Mpre) / SDpre
where Mpost = mean score at the later time period, Mpre = mean score at the earlier time period, and SDpre = standard deviation of the earlier score. SMC was calculated because it is appropriate for repeated-measures designs and allows comparison of change relative to baseline variability.
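As a brief illustration of the formula above, the SMC can be computed directly from two sets of scores; the values below are hypothetical and not drawn from the study’s dataset, and the sample standard deviation (ddof = 1) is assumed:

```python
import numpy as np

def standardized_mean_change(pre_scores, post_scores):
    """SMC = (M_post - M_pre) / SD_pre, using the sample SD of the earlier scores."""
    pre = np.asarray(pre_scores, dtype=float)
    post = np.asarray(post_scores, dtype=float)
    return (post.mean() - pre.mean()) / pre.std(ddof=1)

# Hypothetical scores at an earlier and a later follow-up point
pre = [10, 12, 11, 9, 13]
post = [15, 17, 16, 14, 18]
print(round(standardized_mean_change(pre, post), 2))  # → 3.16
```

Because the change is scaled by the earlier period’s variability, SMC values are comparable across groups whose raw score ranges differ.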
To determine how efficiently each instructional mode translated training into measurable gains relative to score dispersion, the training efficiency index (TEI) was computed as:
TEI = Mean Gain / Standard Deviation
where Mean Gain = difference between post-test and pre-test means, and Standard Deviation = variability within the group. The Training Efficiency Index (TEI) was employed to complement traditional effect size measures by evaluating instructional efficiency in terms of performance gains relative to score variability. Unlike cost-effectiveness indicators, which require detailed financial data, TEI is particularly suitable for educational and training research where learning outcomes are the primary focus. Higher TEI values indicate greater efficiency in translating instruction into consistent competency improvement, facilitating comparison across instructional modes.
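A minimal sketch of the TEI computation follows. The text does not specify which within-group standard deviation enters the denominator, so this sketch assumes the SD of the later (post) scores; the score values are hypothetical:

```python
import numpy as np

def training_efficiency_index(pre_scores, post_scores):
    """TEI = mean gain / within-group SD (assumed here: sample SD of post scores)."""
    pre = np.asarray(pre_scores, dtype=float)
    post = np.asarray(post_scores, dtype=float)
    mean_gain = post.mean() - pre.mean()
    return mean_gain / post.std(ddof=1)

# Hypothetical group scores at two time points
pre = [50, 55, 60, 45, 65]
post = [70, 72, 68, 66, 74]
print(round(training_efficiency_index(pre, post), 2))  # → 4.74
```

Two modes with the same mean gain can thus receive different TEI values if one produces more dispersed outcomes, which is the sense in which TEI rewards consistency.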
To evaluate the consistency of learner performance within competency domains, the Coefficient of Variation (CV) was calculated using:
CV = (SD / Mean) × 100
where SD = standard deviation of the domain scores, and Mean = mean score in that domain. CV was chosen because it standardizes variability relative to the mean, allowing comparison across domains with different score magnitudes.
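The CV formula above translates directly into code; the domain scores below are hypothetical and the sample standard deviation (ddof = 1) is assumed:

```python
import numpy as np

def coefficient_of_variation(scores):
    """CV = (sample SD / mean) * 100, expressed as a percentage."""
    s = np.asarray(scores, dtype=float)
    return s.std(ddof=1) / s.mean() * 100

# Hypothetical domain scores for one competency area
print(round(coefficient_of_variation([18, 20, 22, 19, 21]), 2))  # → 7.91
```

Because CV is unitless, a domain scored out of 25 and a domain scored out of 100 can be compared on the same footing.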
The magnitude of differences among instructional modes was examined using the eta squared (η2) effect size, computed as:
η2 = SSbetween / SStotal
where SSbetween = sum of squares between groups (instructional modes), and SStotal = total sum of squares. Effect size analysis was included to supplement statistical significance testing and provide practical interpretation of results. Interpretation followed conventional benchmarks for small, medium, and large effects.
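For a one-way design, η2 can be obtained from the standard sum-of-squares decomposition; the two groups below are hypothetical and serve only to check the arithmetic:

```python
import numpy as np

def eta_squared(groups):
    """η² = SS_between / SS_total for a one-way (between-groups) design."""
    all_scores = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_scores.mean()
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Two hypothetical instructional-mode groups
print(round(eta_squared([[1, 2, 3], [4, 5, 6]]), 2))  # → 0.77
```

Values near 0.01, 0.06, and 0.14 correspond to the conventional small, medium, and large benchmarks referenced in the text.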
Planned pairwise contrasts were conducted following the Linear Mixed-Effects Regression analysis. Estimated marginal means were compared across instructional modes and assessment periods (12, 24, and 36 months) to examine specific differences between In-person, Online, and Hybrid instruction. Multiple comparisons were adjusted using appropriate correction procedures to control for Type I error. Planned pairwise contrast was computed as:
L = Σi=1..k ci X̄i
where ci = contrast coefficients (which must sum to 0), X̄i = group means, and k = number of groups.
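The contrast value L is a weighted sum of group means. The sketch below applies it to the 12-month means reported in the Abstract (In-person 10.28, Online 10.57, Hybrid 20.88); the coefficient choice (−0.5, −0.5, 1), comparing Hybrid against the average of the other two modes, is an illustrative assumption rather than the authors’ stated contrast:

```python
import numpy as np

def planned_contrast(group_means, coefficients):
    """L = sum_i c_i * mean_i; the coefficients must sum to zero."""
    c = np.asarray(coefficients, dtype=float)
    m = np.asarray(group_means, dtype=float)
    assert abs(c.sum()) < 1e-9, "contrast coefficients must sum to 0"
    return float(c @ m)

# 12-month means from the Abstract: In-person, Online, Hybrid
means = [10.28, 10.57, 20.88]
L = planned_contrast(means, [-0.5, -0.5, 1.0])
print(round(L, 3))  # Hybrid vs. average of the other two modes
```

A significance test for L would additionally require the pooled standard error, which depends on group sizes and variances not reproduced here.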
The primary inferential analysis utilized was LMER, appropriate for repeated-measures data. Linear mixed-effects regression was used because it is robust to unequal group sizes and unbalanced longitudinal data, allowing reliable estimation of fixed effects while accounting for individual-level variability. Time was modeled as a continuous variable representing equally spaced 12-month intervals (12, 24, and 36 months) to estimate overall competency change trajectories across the follow-up period. The general model estimated was:
Yit = β0 + β1(Timet) + β2(Modei) + β3(Timet × Modei) + ui + ϵit
where Yit = performance score of participant i at time t; β0 = intercept or baseline performance; β1(Timet) = effect of time; β2(Modei) = effect of instructional mode; β3(Timet × Modei) = interaction of time and mode; ui = random effect for participant i (capturing individual differences); and ϵit = residual error term.
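A random-intercept model of this form can be fitted with statsmodels’ MixedLM. The sketch below uses simulated data, not the study’s dataset; only the group sizes (72, 128, 184) mirror Section 2.4, and the trajectory parameters are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Group sizes mirroring Section 2.4; all other values are simulated
n_per_mode = {"InPerson": 72, "Online": 128, "Hybrid": 184}

rows, pid = [], 0
for mode, n in n_per_mode.items():
    is_hybrid = mode == "Hybrid"
    for _ in range(n):
        u_i = rng.normal(0.0, 1.5)  # participant-level random intercept u_i
        for t in (12, 24, 36):
            # Overall time trend, a Hybrid head start, and a smaller
            # Hybrid slope (i.e., a negative Time x Mode interaction)
            score = (8.0 + 0.4 * t
                     + is_hybrid * (4.0 - 0.1 * t)
                     + u_i + rng.normal(0.0, 2.0))
            rows.append({"pid": pid, "time": t, "mode": mode, "score": score})
        pid += 1

df = pd.DataFrame(rows)

# Y_it = b0 + b1*Time + b2*Mode + b3*(Time x Mode) + u_i + e_it
model = smf.mixedlm("score ~ time * C(mode)", df, groups=df["pid"])
result = model.fit()
print(result.params.round(2))
```

Grouping by participant makes u_i a random intercept, which is what absorbs the within-person correlation across the three measurement occasions.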
Statistical significance was evaluated at α = 0.05, and contrast-specific p-values were reported to identify which group differences were statistically significant at each follow-up period.

2.8. Data Management and Quality Control

All collected data were encoded into a secure digital database, with double-entry verification implemented to minimize encoding errors. Incomplete or inconsistent responses were screened and excluded following predefined criteria. Only participants with full datasets across the 12-, 24-, and 36-month assessments were retained for analysis to preserve the integrity of repeated-measures modeling. Data confidentiality was strictly observed by anonymizing participant identifiers and restricting access only to the research team.

2.9. Ethical Considerations

The study adhered to ethical principles governing human research. Participation was voluntary, and informed consent was obtained from all individuals prior to data collection. Respondents were assured that their information would remain confidential and that only aggregated data would be reported. No identifying information was used in the analysis. The study protocol followed guidelines set by institutional research ethics standards applicable to non-invasive training evaluation studies.

3. Results

This section presents the findings on the effectiveness of the In-person, Online, and Hybrid instructional modes in improving the food-safety competencies of participants across three time intervals: 12 months, 24 months, and 36 months.

3.1. Compare Mean Competency Scores Across Modes

Results in Table 1 show clear differences in performance across instructional modes. At 12 months, the Hybrid group achieved the highest mean score (M = 20.88), nearly double that of the In-person and Online groups. This indicates that hybrid delivery produced rapid early longitudinal gains, possibly due to the combined benefits of face-to-face demonstration and digital reinforcement. At 24 months, Hybrid scores declined (M = 15.62), while both the In-person and Online groups exhibited steady improvement, suggesting that extended learning may be more stable under traditional or fully online systems. At 36 months, scores converged, with Online instruction slightly outperforming the others (M = 19.50). This implies that long-term mastery may be best supported by continuous online engagement. Performance increased significantly across time (p < 0.001), with instructional mode showing a large effect size (η2 = 0.138). Hybrid learning excels early, but the In-person and Online modes provide more consistent long-term growth.
Table 2 reveals consistent improvement across all domains—PHH, FH, CS, and GMP—for all modes and time intervals. Hybrid instruction produced the highest early longitudinal gains, especially in CS and GMP. In PHH, the Hybrid mode led early performance, but Online surpassed it by 36 months; in FH, Hybrid remained highest across all periods; in CS, Hybrid achieved the strongest performance at every time point; and in GMP, Online and Hybrid tied at 36 months. Differences across modes for all topics were statistically significant (p < 0.001), indicating that instructional delivery format meaningfully affects competency development.

3.2. Estimate Effect Sizes, Training Efficiency Indices, and Consistency Variation Across Competencies and Modes

Effect sizes were quantified using eta squared (η2), omega squared (ω2), and epsilon squared (ε2). Eta squared represents the proportion of variance explained by the factor of interest, omega squared provides a bias-corrected estimate of population effect size, and epsilon squared estimates explained variance for nonparametric comparisons. Effect size values (Table 3) were medium for PHH (η2 = 0.068), medium-to-large for FH (η2 = 0.107), and large for CS (η2 = 0.196), GMP (η2 = 0.115), and overall performance (η2 = 0.138). The influence of instructional mode is most notable in procedural competencies, especially CS and GMP, which rely heavily on demonstration and repeated practice. These large effect sizes confirm that the type of instructional delivery significantly shapes learning outcomes.
Table 4 presents SMC values for in-person, online, and hybrid instructional modes across 12-, 24-, and 36-month follow-up assessments. All instructional modes demonstrated statistically significant increases in SMC at each follow-up period (p < 0.001), indicating measurable improvement over time. At 12 months, the hybrid mode showed a higher SMC compared with the in-person and online modes. At 24 and 36 months, SMC values across the three modes were more similar, with only small numerical differences observed.
The TEI values for in-person (30.55) and online (29.49) modes were numerically close, while the hybrid mode showed a lower TEI (19.56). Although these values suggest differences in efficiency, the proximity of the TEI estimates for in-person and online modes indicates comparable efficiency levels. The hybrid mode exhibited a lower TEI, reflecting greater variability relative to gain. All TEI values reached statistical significance (p < 0.05 to p < 0.01).
Table 5 reports the coefficient of variation (CV) for hygiene and food safety competency domains. All CV values were within a narrow range (0.15–0.18), indicating generally consistent participant performance across domains. Food Hazards showed the lowest CV (0.15), while Good Manufacturing Practices had the highest CV (0.18). The remaining domains fell between these values, with small numerical differences separating categories. Statistical significance was observed at p < 0.01 and p < 0.05, indicating that variability differed across competency areas, although the magnitude of these differences was limited.

3.3. Model Longitudinal Trajectories Using Linear Mixed-Effects Regression

Table 6 presents the results of a linear mixed-effects regression model examining changes in hygiene competency scores over time, between instructional modes, and the interaction between time and instructional mode. The time effect (β = 4.82, p < 0.001) indicates a statistically significant increase in hygiene competency scores within participants across measurement occasions, regardless of instructional mode. This result reflects change between time points rather than differences between instructional modes.
The instructional mode effect (Hybrid vs. other modes; β = 3.97, p < 0.001) represents a statistically significant difference in baseline scores between instructional modes, averaged across time. Specifically, participants in the hybrid mode had higher scores than those in the other modes at the reference time point. The time × instructional mode interaction (β = −1.42, p = 0.008) indicates that the rate of change over time differed between instructional modes. The negative interaction coefficient shows that the increase in hygiene competency scores over time was smaller for the hybrid mode compared with the other instructional modes. This interaction identifies a significant contrast between modes in their time-related trends, rather than a difference at a single time point. The significant random intercept variance (ui = 2.14, p = 0.030) indicates meaningful variability in baseline hygiene competency scores across individuals, supporting the inclusion of random effects in the model.

4. Discussion

The present study provides a comprehensive longitudinal analysis of the effectiveness of In-person, Online, and Hybrid instructional modes in developing food safety competencies over a 36-month period. The findings reveal nuanced patterns in skill acquisition, retention, and consistency across instructional formats and competency domains, contributing significantly to the understanding of long-term learning trajectories in vocational and professional training contexts. The differential effects observed across PHH, FH, CS, and GMP may be explained by domain-specific learning processes supported by each instructional mode. Procedural competencies, such as cleaning and sanitation, benefit from demonstration, feedback, and guided practice, which are more readily facilitated in In-person and Hybrid formats. In contrast, conceptually oriented domains, including food hazards and good manufacturing practices, may be reinforced through self-paced repetition and review, features that are characteristic of Online instruction. These instructional affordances likely contributed to the observed variation in learning trajectories across domains.
The absence of a true pre-training or immediate post-training baseline limits direct estimation of initial training impact. Accordingly, the term “early longitudinal gains” in this study refers to relative differences observed at the first longitudinal assessment point (12 months) rather than immediate post-intervention effects. This distinction should be considered when interpreting early performance differences across instructional modes. Improvements observed at the 12-month follow-up reflect retained learning rather than immediate post-training gains. As such, references to ‘early longitudinal gains’ in this study denote performance one year after training and should not be interpreted as short-term learning effects. The observed rapid early longitudinal gains in the Hybrid instructional mode, particularly at 12 months, are consistent with prior research demonstrating the benefits of blended learning approaches that combine in-person demonstrations with digital reinforcement [26,37,38,44,45,46,47]. Hybrid learners achieved nearly double the initial mean scores of In-person and Online groups, suggesting that the synergistic integration of face-to-face guidance and digital learning can accelerate early mastery. This supports the cognitive theory of multimedia learning, which posits that dual channels of information presentation—visual and auditory—enhance comprehension and retention [24,30,48,49,50,51,52,53].
Interestingly, while Hybrid learning produced the strongest early performance, a decline was observed at 24 months, followed by convergence with other modes at 36 months. This pattern contrasts with some blended learning studies that report sustained benefits over time [33,40,54,55,56,57,58,59,60], suggesting that hybrid models may be highly effective for short-term acquisition but less stable for long-term retention. One possible explanation is the variability in learner engagement: Hybrid programs may require consistent self-directed effort to maintain early longitudinal gains, and differences in learners’ ability to navigate both digital and in-person components could amplify variability [61,62,63,64,65,66,67,68]. During the 36-month follow-up period, participants may have received additional hygiene-related exposure through educational, occupational, or media sources. These uncontrolled influences may have contributed to observed changes in competency beyond the original training intervention. The negative interaction observed for Hybrid instruction may reflect regression-to-the-mean or ceiling effects, particularly given the high competency scores observed at the earliest follow-up. When early performance is elevated, opportunities for further improvement are naturally constrained, which may contribute to diminishing relative effects over time.
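The role of the Time × Mode interaction discussed above can be made concrete with a minimal sketch. This is a plain least-squares fit on synthetic data, not the study's full linear mixed-effects model (which additionally includes per-learner random effects), and every coefficient and sample size below is invented for illustration only:

```python
import numpy as np

# Minimal fixed-effects sketch of a Time x Mode interaction. A negative
# interaction coefficient means the Hybrid group's score gains flatten
# over time relative to the comparison mode, as described in the text.
rng = np.random.default_rng(0)
n = 200                                    # synthetic learners per mode
time = np.tile([12.0, 24.0, 36.0], 2 * n)  # three follow-up points
hybrid = np.repeat([0.0, 1.0], 3 * n)      # 0 = comparison mode, 1 = Hybrid

# Simulated scores: Hybrid starts higher but gains less per month
score = (5.0 + 0.40 * time + 8.0 * hybrid - 0.25 * time * hybrid
         + rng.normal(0.0, 1.0, time.size))

# Design matrix: intercept, Time, Mode, Time x Mode
X = np.column_stack([np.ones_like(time), time, hybrid, time * hybrid])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"Time slope: {beta[1]:.2f}, Hybrid offset: {beta[2]:.2f}, "
      f"Time x Hybrid: {beta[3]:.2f}")  # interaction estimated near -0.25
```

Because the simulated Hybrid group begins well above the comparison group, the fitted negative interaction also illustrates how elevated early scores leave less headroom for further gains, the ceiling-effect pattern noted above.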
Online instruction, by contrast, demonstrated the most substantial long-term gains at 36 months, particularly in domains such as PHH and GMP. These results align with prior studies highlighting the efficacy of online, self-paced learning for promoting long-term retention, due to the ability of learners to revisit content and reinforce knowledge repeatedly [30,51,65,69,70,71,72,73,74]. The steady improvement of Online learners over time underscores the importance of repeated exposure, adaptive pacing, and sustained engagement in supporting durable competency development [54,65,75,76,77,78,79,80,81,82].
In-person instruction exhibited moderate, steady growth, yielding the most consistent outcomes as indicated by the TEI and CV. This is consistent with literature emphasizing that supervised hands-on training ensures reliable skill acquisition, especially in procedural competencies [83,84,85,86,87,88]. The combination of low variability and stable improvement suggests that traditional instructional formats remain highly valuable for standardizing performance and minimizing disparities in learner outcomes.
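The consistency and change indices cited here can be illustrated with a short sketch. The definitions below are the conventional ones (CV as SD relative to the mean; SMC as change from baseline in baseline-SD units); the study's exact formulas, and all numbers used here, are assumptions for illustration rather than the authors' computations:

```python
# Illustrative computation of two indices discussed in the text.
# Conventional definitions are assumed; the study's formulas may differ.

def coefficient_of_variation(mean, sd):
    """Relative variability as a percentage of the mean (CV)."""
    return sd / mean * 100.0

def standardized_mean_change(m_base, m_follow, sd_base):
    """Change from baseline expressed in baseline SD units (SMC)."""
    return (m_follow - m_base) / sd_base

# Hypothetical summary statistics for one instructional mode
m_12, sd_12 = 10.28, 2.1   # 12-month follow-up (SD is a made-up value)
m_36 = 19.50               # 36-month follow-up

cv = coefficient_of_variation(m_12, sd_12)
smc = standardized_mean_change(m_12, m_36, sd_12)
print(f"CV = {cv:.1f}%  SMC = {smc:.2f}")
```

Under these definitions, a lower CV indicates the more uniform learner outcomes attributed to In-person instruction, while a larger SMC indicates greater standardized improvement between assessment points.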
The study’s findings suggest that the effectiveness of instructional mode is not static but varies over time and across competency domains. Hybrid instruction appears particularly effective for rapid skill acquisition in procedural domains such as CS and GMP, likely because these competencies benefit from the immediate feedback provided in face-to-face settings while reinforcing knowledge through digital tools [33,37,80,88,89,90,91,92,93,94,95,96]. However, the larger variability observed in Hybrid learners—especially in complex procedural competencies—indicates that initial high scores may not reflect consistent mastery across participants.

5. Implications and Future Research Directions

The results open several avenues for future research. First, adaptive or sequenced instructional designs that transition learners from Hybrid to Online modes could maximize both short-term gains and long-term retention. Second, examining the impact of learner characteristics, such as motivation, prior experience, and digital proficiency, on responsiveness to instructional modes could inform personalized learning pathways. Third, future studies could investigate strategies to reduce variability in Hybrid learning outcomes, such as micro-practice sessions, real-time feedback, or augmented reality simulations for complex procedural tasks.
There is also potential to explore cost-effectiveness and scalability, particularly in resource-constrained training environments. Hybrid instruction, while effective in early stages, demonstrated lower efficiency due to greater variability. Online learning, in contrast, provides predictable long-term outcomes with minimal physical resource requirements. Longitudinal modeling of learning trajectories could inform the design of competency-based curricula by predicting the optimal balance of instructional modes based on competency type, learner profile, and training duration. Although sequencing Hybrid or Online instructional modes may enhance learning outcomes, implementation in practice may be limited by factors such as program cost, availability of technological infrastructure, and the need for instructor training. Institutions with limited resources may face challenges in adopting these approaches at scale, and implementation should therefore be tailored to local capacity. Although this study was conducted within a specific regional and community-based training context, the competencies assessed—personal hygiene, food hazards, cleaning and sanitation, and good manufacturing practices—are foundational to hygiene and food-safety training across diverse settings. As such, the observed learning trajectories may be informative for adult food handlers, vocational learners, and industry-based training programs. However, differences in participant characteristics and institutional contexts should be considered when generalizing these findings.
Finally, theoretical postulations could be tested, such as the hypothesis that Hybrid instruction primarily accelerates initial cognitive encoding and procedural familiarity, while Online instruction reinforces consolidation and retention through repeated exposure. Such models could integrate cognitive load theory, self-regulated learning frameworks, and spaced repetition principles to predict performance trajectories across diverse competency domains.

6. Limitations

Several limitations should be considered when interpreting these results. First, individual learner characteristics, such as prior knowledge, motivation, digital literacy, and cognitive styles, were not controlled, yet these factors likely influence responsiveness to instructional modes. Second, external influences over the 36-month follow-up, including workplace practice, peer interactions, and access to additional resources, were not systematically measured and could have contributed to differences in long-term performance. Post-training contamination could not be controlled in this longitudinal design, and the observed outcomes should therefore be interpreted as reflecting learning trajectories within an open, real-world environment rather than the isolated effect of the initial training. Third, while effect sizes for Hybrid instruction were large in early stages, the negative interaction between Time and Hybrid mode in the linear mixed-effects regression highlights that early advantages may diminish without continuous reinforcement. Sample sizes differed across instructional modes, which may have influenced the precision of effect size estimates, particularly for the smaller groups. Accordingly, between-mode comparisons should be interpreted with appropriate caution. Because Hybrid participants demonstrated higher early scores, ceiling effects and regression to the mean cannot be fully excluded as contributing factors to the observed attenuation of Hybrid effects across later follow-up periods. Fourth, since a formal attrition analysis could not be performed, potential systematic differences between retained and excluded participants cannot be fully ruled out. This limitation should be considered when interpreting the longitudinal findings. Finally, the study was conducted within a specific professional training context, which may limit generalizability to other vocational fields or populations.
Training location was not included as an analytical factor because standardized site-level data were unavailable. Differences in institutional context and learning environment may therefore have influenced the observed learning outcomes. While parallel test forms were employed, some degree of test familiarity cannot be entirely excluded as a contributor to observed score improvements.

7. Conclusions

In conclusion, this study demonstrates that instructional mode exerts a significant influence on both the rate and sustainability of food safety competency development. Hybrid learning is highly effective for early skill acquisition but exhibits greater variability and declining relative gains over time. Online instruction supports consistent long-term mastery, whereas In-person training provides stable and efficient outcomes. These findings underscore the importance of temporal considerations in instructional design, suggest strategic sequencing of modes to optimize learning, and highlight opportunities for targeted interventions in procedural competencies. Overall, the study advances the understanding of how instructional delivery formats shape competency trajectories and provides evidence-based guidance for designing effective, sustainable, and scalable training programs in food safety and other vocational domains.

Author Contributions

Conceptualization, Writing—original draft, review & editing, Visualization, Validation, Supervision, Project administration, Methodology, Formal analysis, Data curation, M.R.L., S.V.M.G.A., C.M.B.B., J.B.R., K.L.E.G., A.G.D.A., J.C.B.R., C.J.P.P., M.D.R.S., L.B.B. and S.K.A.G.; Project administration, Writing—review & editing, Formal analysis, Software, S.M.G.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Far Eastern University-Nicanor Reyes Medical Foundation Institutional Ethics Review Committee (reference number 2018-0012).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are unavailable due to privacy or ethical restrictions.

Acknowledgments

The authors thank the participants for their valuable contributions to the study and the anonymous reviewers for their helpful comments, which improved the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PHH: Personal Health & Hygiene
FH: Food Hazards
CS: Cleaning and Sanitation
GMP: Good Manufacturing Practices
SD: Standard Deviation
SMC: Standardized Mean Change
TEI: Training Efficiency Index
LMER: Linear Mixed-Effects Regression
HFSCCA: Hygiene and Food Safety Competency Assessment Tool
CV: Coefficient of Variation

References

  1. Atoloye, A.T.; Atoloye, I.A.; Olasoji, S.O.; Tanimonure, V.A.; Awoleye, M.O.; Atere, C.T.; Owoyemi, T.L.; Oladejo, A.S. Delivering youth nutrition interventions through school-based gardening of indigenous vegetables and fruits and WhatsApp nutrition education in Southwest Nigeria: Non-randomized study protocol. Front. Nutr. 2025, 12, 1539861. [Google Scholar] [CrossRef]
  2. Beary, M.A.; DiCaprio, E.; Chang, E.A.B.; Dunn, L.L.; Padilla-Zakour, O.I.; Snyder, A.B. Virtual food safety education programs reveal significant opportunities for accessible and effective distance learning. Food Prot. Trends 2025, 45, 8–18. [Google Scholar] [CrossRef]
  3. Berglund, Z.; Simsek, S.; Feng, Y. Effectiveness of Online Food-Safety Educational Programs: A Systematic Review, Random-Effects Meta-Analysis, and Thematic Synthesis. Foods 2024, 13, 794. [Google Scholar] [CrossRef] [PubMed]
  4. Castro, M.; Soares, K.; Ribeiro, C.; Esteves, A. Evaluation of the effects of food safety training on the microbiological load present in equipment, surfaces, utensils, and food manipulator’s hands in restaurants. Microorganisms 2024, 12, 825. [Google Scholar] [CrossRef]
  5. Gamiao, S.K.A.; Soriano, M.D.R.; Salvador, R.Q.; Catacutan, I.M.A.; Padua, H.J.A.; Adviento, S.V.M.G.; Sagucio, T.M.A.; Malapit, A.K.C.; Limon, M.R. Assessing the Self-Reported Level of Food Hygiene knowledge and Practices among cookery Teachers in Northern Philippines. Hygiene 2025, 5, 57. [Google Scholar] [CrossRef]
  6. Mullaney, K.; Mylotte, L.; McCloat, A. Examining the Efficacy of Post-Primary Nutritional Education Interventions as a Preventative Measure for Diet-Related Diseases: A Scoping Review. Appl. Sci. 2025, 15, 6901. [Google Scholar] [CrossRef]
  7. Limon, M.R. Assessing knowledge and skills retention of junior high school students on food safety lessons using modified Kirkpatrick’s model. Food Control 2022, 135, 108814. [Google Scholar] [CrossRef]
  8. Limon, M.R. Validation of a researcher-developed food safety curriculum guide for junior high school students using Delphi technique. Food Control 2021, 125, 108011. [Google Scholar] [CrossRef]
  9. Insfran-Rivarola, A.; Tlapa, D.; Limon-Romero, J.; Baez-Lopez, Y.; Miranda-Ackerman, M.; Arredondo-Soto, K.; Ontiveros, S. A Systematic Review and Meta-Analysis of the Effects of Food Safety and Hygiene Training on Food Handlers. Foods 2020, 9, 1169. [Google Scholar] [CrossRef]
  10. Eisenmann, M.; Rauschenberger, V.; Maschmann, J.; König, S.; Krone, M. Interactive hygiene training using free open source software. BMJ Open Qual. 2024, 13, e002861. [Google Scholar] [CrossRef]
  11. Fallea, A.; Costanza, C.; L’Episcopo, S.; Bartolone, M.; Rundo, F.; Smirni, D.; Roccella, M.; Elia, M.; Ferri, R.; Vetri, L. Virtual Reality-Based Versus Traditional Teaching Approaches in the Oral Hygiene Education of Children with Autism Spectrum Disorder. J. Clin. Med. 2025, 14, 5795. [Google Scholar] [CrossRef]
  12. Gourlay, L. Digital masks: Screens, selves and symbolic hygiene in online higher education. Learn. Media Technol. 2022, 47, 398–406. [Google Scholar] [CrossRef]
  13. Kühnel, C.; Salomo, S.; Pagiatakis, H.; Hübner, J.; Seifert, P.; Freesmeyer, M.; Gühne, F. Medical students’ and radiology technician trainees’ eHealth Literacy and Hygiene Awareness—Asynchronous and Synchronous Digital Hand Hygiene Training in a Single-Center trial. Healthcare 2023, 11, 1475. [Google Scholar] [CrossRef]
  14. Sat, M. The impact of AI integration in project preparation in education course on pre-service teachers’ innovativeness, AI anxiety, attitudes, and acceptance. BMC Psychol. 2025, 13, 1297. [Google Scholar] [CrossRef]
  15. Atwa, H.; Shehata, M.H.; Al-Ansari, A.; Kumar, A.; Jaradat, A.; Ahmed, J.; Deifalla, A. Online, Face-to-Face, or Blended Learning? Faculty and Medical Students’ Perceptions During the COVID-19 Pandemic: A Mixed-Method Study. Front. Med. 2022, 9, 791352. [Google Scholar] [CrossRef] [PubMed]
  16. Conner, M.; Wilding, S.; Van Harreveld, F.; Dalege, J. Cognitive-Affective Inconsistency and Ambivalence: Impact on the Overall Attitude–Behavior relationship. Pers. Soc. Psychol. Bull. 2020, 47, 673–687. [Google Scholar] [CrossRef] [PubMed]
  17. Gross, S.; Wunderlich, K.; Arpagaus, A.; Becker, C.; Gössi, F.; Bissmann, B.; Zumbrunn, S.K.; Wilde, M.; Hunziker, S. Effectiveness of blended learning to improve medical students’ communication skills: A randomized, controlled trial. BMC Med. Educ. 2025, 25, 383. [Google Scholar] [CrossRef]
  18. Ma, C.; Zhou, W. Effectiveness of blended learning in health assessment course among undergraduate nursing students: A quasi-experimental study. Teach. Learn. Nurs. 2024, 19, e715–e721. [Google Scholar] [CrossRef]
  19. Malta, K.; Glickman, C.; Hunter, K.; McBride, A. Comparing the impact of online and in-person active learning in preclinical medical education. BMC Med. Educ. 2025, 25, 329. [Google Scholar] [CrossRef]
  20. Li, Y.R.; Zhang, Z.H.; Li, W.; Wang, P.; Li, S.W.; Su, D.; Zhang, T. Effectiveness and learning experience from undergraduate nursing students in surgical nursing skills course: A quasi- experimental study about blended learning. BMC Nurs. 2023, 22, 396. [Google Scholar] [CrossRef]
  21. Schürmann, V.; Bodemer, D.; Marquardt, N. Exploring the use of regular reflections in student collaboration: A case study in higher education. Front. Educ. 2025, 10, 1526487. [Google Scholar] [CrossRef]
  22. Salvador, R.; Limon, M.; Borromeo, C.M.; Parinas, M.A.; Manrique, L.; De La Cruz, L.; Dalere, J.M. Exploring Technical-Vocational Education Teachers’ Challenges and Adaptation Strategies in Teaching Courses Outside Their Specializations. 2022. Available online: https://publisher.uthm.edu.my/ojs/index.php/JTET/article/view/11084 (accessed on 14 September 2025).
  23. Buttlar, B.; Pauer, S.; Van Harreveld, F. The model of ambivalent choice and dissonant commitment: An integration of dissonance and ambivalence frameworks. Eur. Rev. Soc. Psychol. 2024, 36, 195–237. [Google Scholar] [CrossRef]
  24. Majowicz, S.E.; Hammond, D.; Dubin, J.A.; Diplock, K.J.; Jones-Bitton, A.; Rebellato, S.; Leatherdale, S.T. A longitudinal evaluation of food safety knowledge and attitudes among Ontario high school students following a food handler training program. Food Control 2017, 76, 108–116. [Google Scholar] [CrossRef]
  25. McIntyre, L. Knowledge, Practices and Attitudes of Certified FOODSAFE Food Handlers–Is Retraining Needed; BC Centre for Disease Control: Vancouver, BC, Canada, 2011; Available online: https://www.bccdc.ca/resource-gallery/Documents/Statistics%20and%20Research/Statistics%20and%20Reports/EH/FPS/FOODSAFEKnowledgeRetentionProject_FinalSep2011.pdf (accessed on 2 February 2025).
  26. Limon, M.R.; Vallente, J.P.C.; Tarampi, C.J.G.; Coloma, M.L.V.; Ubaldo, E.M. Document analysis of foodborne diseases and intervention strategies in Philippine basic education for the last 17 years. Food Control 2022, 138, 108984. [Google Scholar] [CrossRef]
  27. Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef]
  28. Muradoglu, M.; Cimpian, J.R.; Cimpian, A. Mixed-Effects Models for Cognitive development researchers. J. Cogn. Dev. 2023, 24, 307–340. [Google Scholar] [CrossRef]
  29. Chen, B.; Huang, W.; Hu, C. The relationship between positive exercise experiences and mobile phone addiction tendencies in older adults: A cross-lagged study. Front. Public Health 2025, 13, 1710048. [Google Scholar] [CrossRef]
  30. Gkintoni, E.; Antonopoulou, H.; Sortwell, A.; Halkiopoulos, C. Challenging Cognitive Load Theory: The role of educational neuroscience and artificial intelligence in redefining learning efficacy. Brain Sci. 2025, 15, 203. [Google Scholar] [CrossRef]
  31. Javed, H.; El-Sappagh, S.; Abuhmed, T. Robustness in deep learning models for medical diagnostics: Security and adversarial challenges towards robust AI applications. Artif. Intell. Rev. 2024, 58, 12. [Google Scholar] [CrossRef]
  32. Löchner, J.; Carlbring, P.; Schuller, B.; Torous, J.; Sander, L.B. Digital interventions in mental health: An overview and future perspectives. Internet Interv. 2025, 40, 100824. [Google Scholar] [CrossRef]
  33. Limon, M.R.; Vallente, J.P.C.; Chua, C.T.; Rustia, A.S. Situating curriculum in context: Using Glatthorn’s Standards-Based Curriculum Development Model to contextualize food safety learning competencies. Food Control 2021, 132, 108538. [Google Scholar] [CrossRef]
  34. Nowels, M.A.; McDarby, M.; Brody, L.; Kleiman, E.; Henson, S.S.; Sweet, C.C.; Kozlov, E. Predictors of Engagement in Multiple Modalities of Digital Mental Health Treatments: Longitudinal study. J. Med. Internet Res. 2024, 26, e48696. [Google Scholar] [CrossRef]
  35. Sang, D.; Miao, L.; Wu, Q. A smart community interactive art therapy platform based on multimodal computer graphics and resilient artificial intelligence for home-based elderly care. Sci. Rep. 2025, 15, 43057. [Google Scholar] [CrossRef]
  36. Xue, W.; Duan, A.; Zhao, Y.; Zuo, H.; Isleem, H.F. AI-enabled multimodal monitoring for enhanced safety and functional recovery in elderly and post-stroke care: A mixed-methods study. Digit. Health 2025, 11, 20552076251384834. [Google Scholar] [CrossRef]
  37. Forde, C.; O’Brien, A.; Croitoru, O.; Molloy, N.; Amisano, C.; Brennan, I.; McInerney, A. Comparing Face-to-Face, blended and online teaching approaches for practical skill acquisition: A randomised controlled trial. Med. Sci. Educ. 2024, 34, 627–637. [Google Scholar] [CrossRef]
  38. Gudoniene, D.; Staneviciene, E.; Huet, I.; Dickel, J.; Dieng, D.; Degroote, J.; Rocio, V.; Butkiene, R.; Casanova, D. Hybrid Teaching and Learning in Higher Education: A Systematic Literature review. Sustainability 2025, 17, 756. [Google Scholar] [CrossRef]
  39. Lim, R.B.T.; Tan, C.G.L.; Voo, K.; Lee, Y.L.; Teng, C.W.C. Student perspectives on interdisciplinary learning in public health education: Insights from a mixed-methods study. Front. Public Health 2024, 12, 1516525. [Google Scholar] [CrossRef] [PubMed]
  40. Liu, K.; Liu, S.; Ma, Y.; Jiang, J.; Liu, Z.; Wan, Y. Comparison of blended learning and traditional lecture method on learning outcomes in the evidence-based medicine course: A comparative study. BMC Med. Educ. 2024, 24, 680. [Google Scholar] [CrossRef]
  41. McCleary-Gaddy, A.; Yu, E.T.; Spears, R.D. In-Person, Remote, or Hybrid instruction? A quality improvement assessment of a six week interprofessional education pathway program for undergraduate Pre-Health students. Healthcare 2022, 10, 2399. [Google Scholar] [CrossRef]
  42. Schmid, R.F.; Borokhovski, E.; Bernard, R.M.; Pickup, D.I.; Abrami, P.C. A meta-analysis of online learning, blended learning, the flipped classroom and classroom instruction for pre-service and in-service teachers. Comput. Educ. Open 2023, 5, 100142. [Google Scholar] [CrossRef]
  43. Vallée, A.; Blacher, J.; Cariou, A.; Sorbets, E. Blended Learning Compared to Traditional Learning in Medical Education: Systematic Review and Meta-Analysis. J. Med. Internet Res. 2019, 22, e16504. [Google Scholar] [CrossRef]
  44. He, Z.; Li, H.; Lu, L.; Wang, Q.; Wu, Q.; Lu, L. The application of blended teaching in medical practical course of clinical skills training. BMC Med. Educ. 2024, 24, 724. [Google Scholar] [CrossRef]
  45. Shlomo, A.; Rosenberg-Kima, R.B. F2F, zoom, or asynchronous learning? Higher education students’ preferences and perceived benefits and pitfalls. Int. J. Sci. Educ. 2024, 47, 1002–1027. [Google Scholar] [CrossRef]
  46. Tang, J.T.; Mo, D.; Lan, W.C. Exploring the integration of cooperative learning in blended teaching environments. Educ. Inf. Technol. 2025, 1–29. [Google Scholar] [CrossRef]
  47. Wang, X.; Liu, J.; Jia, S.; Hou, C.; Jiao, R.; Yan, Y.; Ma, T.; Zhang, Y.; Liu, Y.; Wen, H.; et al. Hybrid teaching after COVID-19: Advantages, challenges and optimization strategies. BMC Med. Educ. 2024, 24, 753. [Google Scholar] [CrossRef]
  48. Ahmed, S.; Bukhari, S.A.H.; Ahmad, A.; Rehman, O.; Ahmad, F.; Ahsan, K.; Liew, T.W. Inquiry based learning in software engineering education: Exploring students’ multiple inquiry levels in a programming course. Front. Educ. 2025, 10, 1503996. [Google Scholar] [CrossRef]
  49. Krumm, I.R.; Miles, M.C.; Clay, A.; Carlos, W.G., II; Adamson, R. Making effective educational videos for clinical teaching. CHEST J. 2021, 161, 764–772. [Google Scholar] [CrossRef]
  50. Mayer, R.E. The past, present, and future of the Cognitive Theory of Multimedia Learning. Educ. Psychol. Rev. 2024, 36, 8. [Google Scholar] [CrossRef]
  51. Limon, M.R. Food safety practices of food handlers at home engaged in online food businesses during COVID-19 pandemic in the Philippines. Curr. Res. Food Sci. 2021, 4, 63–73. [Google Scholar] [CrossRef]
  52. Vasilaki, E.; Mavrogianni, A. Extending Cognitive Load Theory: The CLAM framework for Biometric, Adaptive, and Ethical learning. Psychol. Int. 2025, 7, 40. [Google Scholar] [CrossRef]
  53. Teng, M.F. The effectiveness of multimedia input on vocabulary learning and retention. Innov. Lang. Learn. Teach. 2022, 17, 738–754. [Google Scholar] [CrossRef]
  54. Akpen, C.N.; Asaolu, S.; Atobatele, S.; Okagbue, H.; Sampson, S. Impact of online learning on student’s performance and engagement: A systematic review. Discov. Educ. 2024, 3, 205. [Google Scholar] [CrossRef]
  55. Kamalov, F.; Calonge, D.S.; Gurrib, I. New era of Artificial intelligence in Education: Towards a sustainable Multifaceted Revolution. Sustainability 2023, 15, 12451. [Google Scholar] [CrossRef]
  56. Lobos, K.; Cobo-Rendón, R.; Jofré, D.B.; Santana, J. New challenges for higher education: Self-regulated learning in blended learning contexts. Front. Educ. 2024, 9, 1457367. [Google Scholar] [CrossRef]
  57. Mohammadi, M.; Paasivara, M.; Kasurinen, J. Blended learning in higher education: Good practices in platforms and teachers support, enhancing students motivation. Educ. Inf. Technol. 2025, 1–24. [Google Scholar] [CrossRef]
  58. Nong, W.; Ye, J.-H.; Chen, P.; Lee, Y.-S. A study on the blended learning effects on students majoring in preschool education in the post-pandemic era: An example of a research-method course in a Chinese university. Front. Psychol. 2023, 13, 962707. [Google Scholar] [CrossRef]
  59. Song, S.; Lai, Y.C. Blended learning in vocational education: Benefits, challenges, and student engagement. Cogent Educ. 2025, 12, 2548348. [Google Scholar] [CrossRef]
  60. Tonbuloğlu, B.; Tonbuloğlu, İ. Trends and patterns in blended learning research (1965–2022). Educ. Inf. Technol. 2023, 28, 13987–14018. [Google Scholar] [CrossRef]
  61. Almomani, L.M.; Halalsheh, N.; Al-Dreabi, H.; Al-Hyari, L.; Al-Quraan, R. Self-directed learning skills and motivation during distance learning in the COVID-19 pandemic (case study: The university of Jordan). Heliyon 2023, 9, e20018. [Google Scholar] [CrossRef]
  62. Barr, T.; Luo, T. HyFlex course design: Outcomes, challenges, and supports for students and instructors. J. Comput. High. Educ. 2025, 1–29. [Google Scholar] [CrossRef]
  63. Elendu, C.; Amaechi, D.C.; Okatta, A.U.; Amaechi, E.C.; Elendu, T.C.; Ezeh, C.P.; Elendu, I.D. The impact of simulation-based training in medical education: A review. Medicine 2024, 103, e38813. [Google Scholar] [CrossRef] [PubMed]
  64. Limon, M.; Chua, C.T.; Gabriel, A.A. Place of food Safety education in the Philippine K to 12 curriculum. Philipp. J. Sci. 2021, 150, 979. Available online: https://philjournalsci.dost.gov.ph/wp-content/uploads/2021/06/place_of_food_safety_education_in_the_Phils_K_to_12_.pdf (accessed on 1 January 2026). [CrossRef]
  65. Najjar, N.; Rouphael, M.; Bitar, T.; Hleihel, W. The rise and drop of online learning: Adaptability and future prospects. Front. Educ. 2025, 10, 1522905. [Google Scholar] [CrossRef]
  66. Sato, S.N.; Moreno, E.C.; Rubio-Zarapuz, A.; Dalamitros, A.A.; Yañez-Sepulveda, R.; Tornero-Aguilera, J.F.; Clemente-Suárez, V.J. Navigating the new normal: Adapting online and distance learning in the Post-Pandemic era. Educ. Sci. 2023, 14, 19. [Google Scholar] [CrossRef]
  67. Naseer, F.; Tariq, R.; Alshahrani, H.M.; Alruwais, N.; Al-Wesabi, F.N. Project based learning framework integrating industry collaboration to enhance student future readiness in higher education. Sci. Rep. 2025, 15, 24985. [Google Scholar] [CrossRef]
  68. Yaseen, H.; Mohammad, A.S.; Ashal, N.; Abusaimeh, H.; Ali, A.; Sharabati, A.-A.A. The impact of adaptive learning technologies, personalized feedback, and interactive AI tools on student engagement: The moderating role of digital literacy. Sustainability 2025, 17, 1133. [Google Scholar] [CrossRef]
  69. Aryee, G.F.B.; Amoadu, M.; Obeng, P.; Sarkwah, H.N.; Malcalm, E.; Abraham, S.A.; Baah, J.A.; Agyare, D.F.; Banafo, N.E.; Ogaji, D. Effectiveness of eLearning programme for capacity building of healthcare professionals: A systematic review. Hum. Resour. Health 2024, 22, 60. [Google Scholar] [CrossRef]
  70. James, W.; Oates, G.; Schonfeldt, N. Improving retention while enhancing student engagement and learning outcomes using gamified mobile technology. Account. Educ. 2024, 34, 366–386. [Google Scholar] [CrossRef]
  71. Martin, F.; Sun, T.; Westine, C.D. A systematic review of research on online teaching and learning from 2009 to 2018. Comput. Educ. 2020, 159, 104009. [Google Scholar] [CrossRef]
  72. Qin, Y. Online vs. face-to-face: A long-term study on the effectiveness and essence of learning. Cogent Educ. 2025, 12, 2554314. [Google Scholar] [CrossRef]
  73. Rakha, A.H. Promoting online teaching through active learning strategies: Applications and innovations. Front. Educ. 2025, 10, 1546208. [Google Scholar] [CrossRef]
  74. Zhang, D.; Zhao, J.L.; Zhou, L.; Nunamaker, J.F. Can e-learning replace classroom learning? Commun. ACM 2004, 47, 75–79. [Google Scholar] [CrossRef]
  75. Amin, S.M.; Mahgoub, S.A.E.-F.; Tawfik, A.F.; Khalil, D.E.; El-Sayed, A.A.I.; Atta, M.H.R.; Albzia, A.; Mohamed, S.R.M. Nursing education in the digital era: The role of digital competence in enhancing academic motivation and lifelong learning among nursing students. BMC Nurs. 2025, 24, 571. [Google Scholar] [CrossRef] [PubMed]
  76. Castaño, C.; Caballero, R.; Noguera, J.C.; Austin, M.C.; Bernal, B.; Jaén-Ortega, A.A.; De Los Angeles Ortega-Del-Rosario, M. Developing sustainability competencies through active learning strategies across school and university settings. Sustainability 2025, 17, 8886. [Google Scholar] [CrossRef]
  77. Fraidan, A.A.; Alharthi, T. The impact of technology use on cognitive development and lexical attrition in L1 and L2 among younger students with ADHD: Opportunities and challenges. J. Educ. Health Promot. 2025, 14, 180. [Google Scholar] [CrossRef]
  78. Li, Y.; Chen, D.; Deng, X. The impact of digital educational games on student’s motivation for learning: The mediating effect of learning engagement and the moderating effect of the digital environment. PLoS ONE 2024, 19, e0294350. [Google Scholar] [CrossRef]
  79. Tekir, S. Strategies for Effective Classroom Management in Online Teaching: A Post-Pandemic Review of Empirical Studies. SAGE Open 2025, 15, 21582440251377321. [Google Scholar] [CrossRef]
  80. Zamiri, M.; Esmaeili, A. Strategies, Methods, and Supports for Developing Skills within Learning Communities: A Systematic Review of the Literature. Adm. Sci. 2024, 14, 231. [Google Scholar] [CrossRef]
Table 1. Mean overall hygiene and food safety competency scores of participants by instructional mode at 12-, 24-, and 36-month follow-up assessments.

| Mode of Instruction | 12 Months, Mean (SD) | 24 Months, Mean (SD) | 36 Months, Mean (SD) |
|---|---|---|---|
| In-person | 10.28 *** (0.58) | 13.94 *** (0.83) | 19.05 *** (0.92) |
| Online | 10.57 *** (0.59) | 13.91 *** (0.66) | 19.50 *** (0.97) |
| Hybrid | 20.88 *** (0.96) | 15.62 *** (1.06) | 20.82 *** (0.70) |

Notes: *** p < 0.001; SD = standard deviation.
Table 2. Mean competency scores across four hygiene domains by instructional mode and follow-up period.

| Competency Domain | Mode | 12 Months, Mean (SD) | 24 Months, Mean (SD) | 36 Months, Mean (SD) |
|---|---|---|---|---|
| Personal Health & Hygiene | In-person | 10.24 *** (0.84) | 13.67 *** (1.19) | 18.96 *** (1.83) |
| | Online | 10.09 *** (0.92) | 13.57 *** (1.12) | 19.52 *** (1.99) |
| | Hybrid | 12.50 *** (2.17) | 14.33 *** (1.86) | 20.43 *** (1.12) |
| Food Hazards | In-person | 9.97 *** (1.12) | 13.38 *** (1.24) | 19.35 *** (1.79) |
| | Online | 10.52 *** (1.06) | 13.83 *** (1.16) | 19.40 *** (2.04) |
| | Hybrid | 12.43 *** (2.15) | 16.11 *** (1.79) | 20.71 *** (1.19) |
| Cleaning & Sanitation | In-person | 10.45 *** (0.83) | 14.06 *** (1.34) | 19.18 *** (1.78) |
| | Online | 10.98 *** (1.24) | 13.91 *** (1.38) | 19.45 *** (1.87) |
| | Hybrid | 14.00 *** (2.09) | 16.51 *** (1.71) | 21.11 *** (1.00) |
| Good Manufacturing Practices | In-person | 10.65 *** (0.88) | 14.75 *** (1.36) | 19.01 *** (1.95) |
| | Online | 10.92 *** (1.12) | 14.42 *** (1.36) | 19.92 *** (1.56) |
| | Hybrid | 13.22 *** (1.86) | 15.72 *** (1.87) | 19.92 *** (1.20) |

Notes: *** p < 0.001; SD = standard deviation.
Table 3. Effect sizes for differences among instructional modes across hygiene competency domains.

| Competency Domain | η² | ω² | ε² |
|---|---|---|---|
| Personal Health & Hygiene | 0.068 ** | 0.066 ** | 0.067 ** |
| Food Hazards | 0.107 *** | 0.104 *** | 0.106 *** |
| Cleaning & Sanitation | 0.196 *** | 0.191 *** | 0.193 *** |
| Good Manufacturing Practices | 0.115 *** | 0.112 *** | 0.113 *** |
| Overall Performance | 0.138 * | 0.134 * | 0.136 * |

Notes: *** p < 0.001; ** p < 0.01; * p < 0.05; η² = eta squared (proportion of explained variance); ω² = omega squared (bias-corrected effect-size estimate); ε² = epsilon squared (effect size for nonparametric comparisons).
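To illustrate how the three effect-size estimates in Table 3 relate, the sketch below computes η², ω², and ε² from the conventional one-way sums-of-squares formulas (η² = SS_between/SS_total, with the standard bias corrections for ω² and ε²). The group data here are fabricated for illustration only; they are not the study's scores, and the paper's own computation may differ.

```python
import numpy as np

def effect_sizes(*groups):
    """Return (eta^2, omega^2, epsilon^2) for a one-way comparison.

    eta^2     = SS_between / SS_total
    epsilon^2 = (SS_between - df_b * MS_within) / SS_total
    omega^2   = (SS_between - df_b * MS_within) / (SS_total + MS_within)
    """
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    ss_total = ((all_obs - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = ss_total - ss_between
    df_b = len(groups) - 1                      # between-groups df
    df_w = len(all_obs) - len(groups)           # within-groups df
    ms_within = ss_within / df_w
    eta2 = ss_between / ss_total
    eps2 = (ss_between - df_b * ms_within) / ss_total
    omega2 = (ss_between - df_b * ms_within) / (ss_total + ms_within)
    return eta2, omega2, eps2

# Fabricated illustrative scores for three groups (NOT the study's data):
rng = np.random.default_rng(0)
eta2, omega2, eps2 = effect_sizes(
    rng.normal(19.0, 1.8, 128),
    rng.normal(19.5, 1.9, 128),
    rng.normal(21.1, 1.0, 128),
)
print(f"eta2={eta2:.3f}  omega2={omega2:.3f}  eps2={eps2:.3f}")
```

As the formulas make explicit, the bias-corrected estimates are always slightly smaller than η² (ω² ≤ ε² ≤ η²), which matches the ordering of every row in Table 3.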
Table 4. Standardized mean change and training efficiency index values by instructional mode across 12-, 24-, and 36-month follow-up assessments.

| Mode | SMC, 12 Months | SMC, 24 Months | SMC, 36 Months | TEI | Interpretation |
|---|---|---|---|---|---|
| In-person | 20.16 *** | 27.48 *** | 38.10 *** | 30.55 ** | Most efficient |
| Online | 20.34 *** | 27.62 *** | 38.80 *** | 29.49 ** | Highly efficient |
| Hybrid | 41.76 *** | 32.24 *** | 37.44 *** | 19.56 * | Least efficient |

Notes: *** p < 0.001; ** p < 0.01; * p < 0.05; SMC = standardized mean change; TEI = training efficiency index.
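The exact SMC and TEI formulas used in the study are not reproduced in this excerpt. For orientation only, one common SMC formulation scales the raw mean change by the baseline standard deviation; the sketch below assumes that definition and uses purely hypothetical numbers, so it should not be read as the authors' computation.

```python
def standardized_mean_change(mean_followup: float,
                             mean_baseline: float,
                             sd_baseline: float) -> float:
    """One common SMC formulation: raw mean change scaled by baseline SD.

    The study's exact formula and baseline values are not shown in this
    excerpt, so this definition and the inputs below are assumptions.
    """
    return (mean_followup - mean_baseline) / sd_baseline

# Hypothetical example: baseline mean 8.0 (SD 0.5), follow-up mean 19.05
print(standardized_mean_change(19.05, 8.0, 0.5))  # (19.05 - 8.0) / 0.5 = 22.1
```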
Table 5. Coefficient of variation indicating consistency of participant performance across hygiene and food safety competency domains.

| Competency | Coefficient of Variation | Interpretation |
|---|---|---|
| Food Hazards | 0.15 ** | Most consistent |
| Cleaning & Sanitation | 0.16 ** | Consistent |
| Personal Health and Hygiene | 0.17 * | Moderately consistent |
| Good Manufacturing Practices | 0.18 * | Least consistent |

Notes: ** p < 0.01; * p < 0.05; CV = coefficient of variation; CV ≤ 0.150: most consistent; 0.151–0.160: consistent; 0.161–0.170: moderately consistent; CV > 0.170: least consistent.
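The coefficient of variation is simply the standard deviation expressed relative to the mean (CV = SD/M), and the consistency labels follow the cut-offs stated in Table 5's notes. A minimal sketch of that classification:

```python
def coefficient_of_variation(sd: float, mean: float) -> float:
    """CV = SD / mean (relative dispersion; lower = more consistent)."""
    return sd / mean

def consistency_label(cv: float) -> str:
    """Map a CV value to the consistency bands given in Table 5's notes."""
    if cv <= 0.150:
        return "most consistent"
    if cv <= 0.160:
        return "consistent"
    if cv <= 0.170:
        return "moderately consistent"
    return "least consistent"

print(consistency_label(0.15))  # most consistent (Food Hazards)
print(consistency_label(0.18))  # least consistent (Good Manufacturing Practices)
```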
Table 6. Linear mixed-effects regression results examining the effects of time, instructional mode, and their interaction on hygiene competency scores.

| Predictor | Estimate (β) | SE | p-Value |
|---|---|---|---|
| Intercept (reference mode at baseline) | 10.36 | 0.51 | <0.001 *** |
| Time (per follow-up interval) | 4.82 | 0.20 | <0.001 *** |
| Mode (Hybrid vs. In-person/Online [reference]) | 3.97 | 0.43 | <0.001 *** |
| Time × Instructional Mode (difference in change over time) | −1.42 | 0.50 | 0.008 ** |
| Random intercept (u_i) | 2.14 | n/a | 0.030 * |

Notes: *** p < 0.001; ** p < 0.01; * p < 0.05; SE is not conventionally reported for random-intercept variance components.
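The fixed-effects part of Table 6 implies a simple predicted trajectory. The sketch below assumes Time is coded 0, 1, 2 for the successive follow-up intervals and Mode is a 0/1 Hybrid indicator (the coding is not stated in this excerpt), and sets the random intercept u_i to zero.

```python
def predicted_score(time: int, hybrid: int) -> float:
    """Fixed-effects prediction from the Table 6 estimates (u_i = 0):

        score = 10.36 + 4.82*time + 3.97*hybrid - 1.42*time*hybrid

    Assumed coding (not given in the excerpt): time = 0, 1, 2 for the
    successive follow-ups; hybrid = 1 for Hybrid, 0 for the reference mode.
    """
    return 10.36 + 4.82 * time + 3.97 * hybrid - 1.42 * time * hybrid

# Reference mode gains 4.82 per interval; Hybrid starts 3.97 higher but,
# given the negative interaction, gains only 4.82 - 1.42 = 3.40 per interval.
print(round(predicted_score(0, 0), 2))  # 10.36 (reference mode at baseline)
print(round(predicted_score(2, 1), 2))  # 10.36 + 9.64 + 3.97 - 2.84 = 21.13
```

Under this coding, the negative Time × Mode coefficient captures the pattern in Tables 1 and 4: Hybrid's advantage is largest early and narrows over time as the other modes catch up.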

Cite as:
Limon, M.R.; Adviento, S.V.M.G.; Basamot, C.M.B.; Reyes, J.B.; Gumsat, K.L.E.; Amano, A.G.D.; Ramirez, J.C.B.; Pungtilan, C.J.P.; Soriano, M.D.R.; Baclagan, L.B.; et al. Trajectory Patterns of Hygiene Training Effectiveness Across Three Instructional Modes. Hygiene 2026, 6, 5. https://doi.org/10.3390/hygiene6010005