
Sustainable Math Education of Female Students during a Pandemic: Online versus Face-to-Face Instruction

Hanadi Mohamed AbdelSalam, Maura A. E. Pilotti 1,* and Omar J. El-Moussa
1 Department of Sciences and Human Studies, Prince Mohammad Bin Fahd University, Dhahran 31952, Saudi Arabia
2 Department of Student Affairs, Prince Mohammad Bin Fahd University, Dhahran 31952, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(21), 12248;
Submission received: 23 September 2021 / Revised: 29 October 2021 / Accepted: 4 November 2021 / Published: 6 November 2021


The present study was driven by the assumption that a key feature of sustainable education is its ability to preserve standards of quality even amid unforeseen, potentially disruptive events. It asked whether students’ academic success in math general education courses differed between synchronous online instruction (during the COVID-19 pandemic) and face-to-face instruction (before the pandemic), under the ancillary assumption that computational competency, a pillar of sustainable education, shapes enduring success in a variety of professional fields. As the early identification of at-risk students and ensuing remedial interventions can bring about academic success, the study also investigated the predictive validity of students’ initial performance in online and face-to-face math courses. Two general education courses (introductory calculus and statistics), taught by the same instructor, were selected. Class grades did not differ between instructional modes, thereby providing no evidence for the widespread concern that the switch to the online mode had damaged learning. Yet, during the semester, test and homework performance were differentially sensitive to modes of instruction. Furthermore, both test and homework performance during the first half of the semester predicted class grades in online courses, whereas only test performance predicted class grades in face-to-face courses. These results suggest that sustainable math education in times of crisis is feasible and that educators’ consideration of the differential predictive value of test and homework performance may aid its attainment.

1. Introduction

The present study begins with the acknowledgment that a sustainable education rests to a great extent on computational thinking, a critical aspect of Science, Technology, Engineering, and Mathematics (STEM) education. Computational thinking is defined as a set of problem-solving competencies, including problem decomposition, algorithmic thinking, abstraction, and automation [1,2]. A minimal computational competency is required in everyday life, but a much more sophisticated level of competency is demanded by a successful career in a variety of professional fields, which require that one be able to analyze specialized information (i.e., data), identify problems, and propose and implement solutions to the significant problems/issues that human beings face in the environments in which they exist and operate.
Computational competency becomes particularly relevant in a society that is transitioning from a patriarchal system based on tribal networks to a meritocratic one that aspires to a larger, more inclusive, and lasting (i.e., sustainable) integration into the global economy. In such a society, a prototypical example of which is Saudi Arabia (SA), a gender equity drive takes priority in educational matters, as the implementation of meritocracy is doomed to fail without giving women opportunities to succeed professionally, thereby raising their contribution to the economic engine. Not surprisingly, the 2030 Vision of SA (i.e., a strategic plan for sustainable economic growth and development) recognizes that a sustainable education rests on ensuring that both women and men not only are given equitable professional options but also get a quality education that will allow them to thrive in the professions of their choice [3].
The term “quality education,” along with references to equity and sustainability, is embedded in Sustainable Development Goal (SDG) 4 of the United Nations, which recognizes the need to ensure an education that is inclusive, of quality, and capable of promoting lifelong learning [4]. In higher education, the quality of academic programs can be measured by three general parameters: goals (i.e., skills and knowledge to be acquired), conditions (i.e., resources possessed by learners and institutions), and processes that can achieve the desired goals (i.e., instruction offered and the improved cognition it fosters in learners) [5]. It is widely acknowledged that in both STEM and non-STEM professional fields, computational competency is a key aspect of the quality of higher education programs [6,7]. As with other competencies promoted at the undergraduate level, it is first fostered through a set of general education courses (serving as foundations) that vary in the breadth of coverage depending on whether students intend to pursue STEM or non-STEM majors.
In most of the Western world, the issue of gender inequity in STEM fields is well-known, along with broader concerns regarding female students’ shying away from or under-performing in courses involving computational knowledge and skills [8,9]. Countless studies have focused on uncovering the reasons for the unbalanced presence of women in STEM academic programs and professions and their under-performance concerning computational competency [10,11,12,13], in the hope that systematic interventions can be implemented and, with time, can rectify the status quo. The reasons uncovered [14] include external factors (e.g., institutional policies) and internal ones (e.g., attitudes), which may create the perfect storm responsible for women remaining under-represented in STEM fields as well as exhibiting, irrespective of the chosen field of study, lower computational competency than male students. Yet, consensus exists that an essential ingredient of women’s success in a broad variety of professional fields is a satisfactory computational competency acquired through formal education [15,16,17,18].
In SA, female students’ overall high-school performance (as indexed by their grade point average, GPA) and standardized test scores (e.g., as measured by the General Aptitude Test, GAT) tend to exceed those of male students [19], but null differences have also been reported [20]. Computational competency, assessed before admission to higher education, also favors women [21,22]. Yet, although women are currently more numerous in higher education [23], they are still under-represented in STEM fields [24], such as engineering and computer science, and over-represented in non-STEM fields, such as business [25]. This pattern is not surprising, even in the presence of substantial top-down interventions intended to promote young women’s educational opportunities and conditions. The removal of the impact of decades of patriarchal doctrine on the institutions that have embodied it is not an easy task to accomplish. In this context, it is critically important to preserve female students’ achievement gains, especially regarding computational competency, even in the face of sudden and unexpected environmental changes, such as those brought about by the COVID-19 pandemic. Thus, it is reasonable to ask whether the COVID-19 pandemic, which has required that courses traditionally offered face-to-face be suddenly moved online, as well as reminded women of old barriers by restricting mobility to the confines of one’s home, has affected female college students’ learning in the domain of computational competency.
The present study focuses on a selected set of math courses, which comprise the STEM component of the general education curriculum of a SA university, to determine the extent to which the widespread narrative that the pandemic has damaged students’ learning is realistic. To ensure a comparison uncontaminated by individual differences in instructional approach, courses were taught before and during the pandemic by the same instructor. Furthermore, the study specifically targets math courses that generally enroll female students who shy away from STEM majors. That is, it examines the performance of the very students who may be most sensitive to changes in the mode of math instruction, since math is unlikely to be viewed by them as a favored subject matter. We hypothesize that if learning has been disrupted by the move to the online mode, performance indices obtained by students while enrolled in a course (i.e., assignment and test grades), as well as final class grades, will be found to be lower than those obtained in the same course offered face-to-face. The study intends to offer a roadmap for instructors to assess whether the narrative that the pandemic has damaged learning applies to the courses they taught before and during the pandemic. Its goal is to encourage self-reflection and instructional change if the evidence demands it.
Gonzalez et al. [26] have argued that the COVID-19 pandemic changed students’ learning strategies from an intermittent approach to study materials and class activities to a continuous one. Evidence exists that practice distributed across learning sessions is superior to practice concentrated in a single session [27,28,29,30]. It follows that differences in performance indices between online and face-to-face instruction may also lie beneath the surface, specifically in the contribution of course activities to end-of-course performance. Thus, the ancillary aim of the present study is to determine whether female students’ initial performance predicts course grades differently in online and face-to-face courses. The reason for examining the predictability of initial performance is to maximize the effectiveness of targeted remedial interventions through an understanding of the extent to which the identification of at-risk students in the two modes of instruction may benefit from consideration of initial performance. We hypothesize that if online and face-to-face instruction differ in how learning is approached across time, as suggested by Gonzalez et al. [26], the contribution of initial course activities (i.e., tests and assignments) to course grades will be greater online than face-to-face. The reason is that distributed studying is likely to lead to initial performance outcomes that are more representative of the rest of a student’s outcomes in a course than intermittent and/or massed studying, which is more erratic and thus likely to produce less stable performance outcomes.

2. Method

2.1. Participants

The participants were 409 female undergraduate students enrolled in either a general education course covering introductory calculus (n = 139) or statistics (n = 270). They were freshmen or sophomores whose ages ranged from 18 to 25. Participants were pursuing a non-STEM major at a university located in the Eastern Region of Saudi Arabia, which offers secular degree programs, including engineering, computer science, business, law, and architecture, all accredited by the Ministry of Education and by foreign educational entities for quality assurance. General education courses, which serve as the foundations of each major, follow a U.S. curriculum developed by the Texas International Education Consortium (TIEC). At the selected university, English is the primary vehicle of communication for academic activities. As such, students are classified as Arabic-English bilingual speakers whose admission to general education courses requires that English competency be demonstrated through standardized tests.

2.2. Materials and Procedure

Each course was chosen for having been taught by one instructor both face-to-face (n = 190) before the pandemic and online (n = 219) during the pandemic for at least two semesters. Only students who completed all the activities of each course were included.
Before the pandemic, enrollment at the selected university was exclusively face-to-face. Students’ prior educational experiences were also, by and large, face-to-face. During the pandemic, courses were offered through the synchronous online mode. Although both face-to-face and online courses relied on Blackboard for the posting of class materials, assignment submission, and test administration, the online mode also relied on Blackboard Collaborate, a platform that offers a virtual classroom to students and educators so that they can interact in real time using voice, text, and video functions. Blackboard Collaborate entails a host of other functions to ensure that students’ engagement and interaction approximate those of a traditional classroom, such as an interactive whiteboard and desktop-sharing tools. It also has session-recording capabilities for students who are absent or need reiteration. Thus, online synchronous class meetings could be considered largely equivalent to face-to-face ones except for the physical distance that separated attendees from each other.
Courses, each of 3 credit hours, were chosen for being taught consistently by the same instructor, whose experience in teaching math spanned more than a decade. At the selected university, instruction is student-centered and active learning is promoted. Course evaluations and peer observations confirmed that this approach to teaching applied to the selected instructor. Pilot work and information from the Office of the Registrar indicated that introductory calculus and statistics enrolled students whose preference for other subjects over math was one of the reasons for their selecting non-STEM degree programs. Another reason for course selection was that formative assessment was consistently organized into four sets, each involving a homework assignment and a test, except for set 4, which involved only a test. Each set served as a formative assessment measure (i.e., a tool for students to monitor their learning and receive feedback to enhance it) [31]. Sets 1 and 2 were completed in the first half of the semester, whereas sets 3 and 4 were carried out during the second half of the semester. The administration of formative assessment measures was followed, at the end of the semester, by a final examination serving as summative assessment (i.e., a tool to measure learning after a specific instructional period). Sample problems and study guides preceded any form of assessment to familiarize students with the type of questions asked, reinforce math literacy skills, offer copious feedback, and ensure sufficient practice. Questions in tests and assignments were embedded in practical scenarios to increase the likelihood that students would deem questions meaningful and engaging.
In addition to an institution-wide code of honor to which students had to adhere, cheating-prevention measures included a custom browser that barred students from accessing other applications or visiting other websites during tests (LockDown Browser), plagiarism detection software (Turnitin), as well as the monitoring of students’ behavior during proctoring through Blackboard Collaborate’s video and microphone functions.
In the researchers’ master spreadsheet, before statistical analyses of aggregated data were initiated, each student’s performance indices and demographic information (i.e., gender, educational level, and age) were associated with a unique random number. To ensure the accuracy of data collection, the selected instructor remained unaware of the purpose of the present study until all performance data were obtained. Students’ participation complied with the guidelines of the Office for Human Research Protections of the U.S. Department of Health and Human Services and with the American Psychological Association’s ethical standards for the treatment of participants in educational research. The study was conducted under the purview of the Deanship of Research at the selected institution.

3. Results

Table 1 displays the descriptive statistics of students’ performance in formative assessment activities (tests and preceding homework assignments), organized according to the time of completion, as well as in the final test, serving as a summative assessment measure. Because each student’s class grade (%) was the cumulative contribution of all activities completed by the student in the course in which she was enrolled, the numerical contribution of each activity is reported in parentheses next to it.

3.1. Are There Differences between Online and Face-to-Face Instruction?

The test and homework assignment grades in the face-to-face and online modes were submitted to a one-way ANOVA with mode of instruction as the between-subjects factor. To uncover stable patterns beyond the idiosyncrasies of particular math courses, the variable type of course was not included. The results of inferential statistics reported below are considered significant at the 0.05 level. Performance on the first homework assignment and test was equivalent between modes of instruction, Fs < 1, ns. Performance was higher online for all other subsequent tests serving as formative assessment: test 2, F(1, 407) = 21.00, MSE = 174.93, p < 0.001, ηp2 = 0.049, test 3, F(1, 407) = 14.05, MSE = 183.63, p < 0.001, ηp2 = 0.033, and test 4, F(1, 407) = 26.33, MSE = 146.00, p < 0.001, ηp2 = 0.061. For homework assignment 2, performance was not different between the two modes of instruction, F < 1, ns, whereas for assignment 3, it was higher face-to-face, F(1, 407) = 56.84, MSE = 599.17, p < 0.001, ηp2 = 0.123.
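The between-subjects comparisons above can be sketched in a few lines of Python. The grade samples below are simulated for illustration only (the group sizes mirror the study’s n = 190 and n = 219, but the means and spreads are invented); for a one-way design, partial eta squared can be recovered directly from the F statistic and the degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical test grades (%) for the two modes of instruction
face_to_face = rng.normal(72, 13, size=190).clip(0, 100)
online = rng.normal(78, 13, size=219).clip(0, 100)

# One-way ANOVA with mode of instruction as the between-subjects factor
f_stat, p_value = stats.f_oneway(face_to_face, online)

# Partial eta squared: SS_between / (SS_between + SS_within),
# algebraically equal to F*df1 / (F*df1 + df2) in a one-way design
df_between = 1
df_within = len(face_to_face) + len(online) - 2
eta_p2 = (f_stat * df_between) / (f_stat * df_between + df_within)

print(f"F({df_between}, {df_within}) = {f_stat:.2f}, "
      f"p = {p_value:.4f}, eta_p2 = {eta_p2:.3f}")
```

Note that with two groups, a one-way ANOVA is equivalent to an independent-samples t-test (F = t²); the ANOVA framing simply matches the reporting convention used here.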
Although test grades were higher online, performance on the final examination was not different between modes of instruction, F < 1, ns. Class grades also did not differ, F < 1, ns. If the grades on the final examination and class grades are conceptualized as summative assessment indices, then it is reasonable to conclude that the two instructional modes led to largely equivalent outcomes.

3.2. Does Initial Performance Predict Class Grades in Online and Face-to-Face Courses?

Separate linear regression analyses were carried out for online and face-to-face courses with preliminary performance indices (i.e., the grades of test 1 and assignment 1) as the predictors and class grades serving as the outcome variable (see Table 2). The Pearson correlation coefficients in the last column of the table indicate the relationship of each predictor with the outcome variable. In parentheses, we report the coefficient of determination, which illustrates the percentage of the variance of the outcome variable that is accounted for by each predictor irrespective of the other.
Initial test performance predicted 26.83% of the variance in class grades face-to-face, but only 14.21% of the variance in class grades online. Yet, the syllabus specified that the numerical contribution of test 1 to class grades (range: 0–100) was 15%. Thus, students’ initial test performance could be considered much more revealing about future performance in the face-to-face course than in the online course. The opposite could be said for initial homework performance, which predicted 12.18% of the variance in class grades online, but whose predictive validity face-to-face was null. According to the syllabus, homework assignment 1 contributed a meager 3% to students’ class grades. Online, then, preliminary assignment performance was much more informative, exhibiting a greater contribution to final class grades than could be expected. At first, the differential predictability of homework assignments between the online and face-to-face modes may appear puzzling, since assignments were carried out outside the classroom irrespective of the mode of instruction. Yet, this finding is consistent with the argument put forth by Gonzalez et al. [26], who suggested that the COVID-19 pandemic changed students’ learning strategies, from a discontinuous approach to continuous habits, thereby enhancing the value of homework assignments as formative assessment tools.
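The predictor-by-predictor percentages reported in this section are coefficients of determination, i.e., squared Pearson correlations between a single grade and the class grade. A minimal sketch of how such a figure is computed, using simulated grades rather than the study’s data (the generating weights are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grades (%): test 1 and final class grade for one cohort
test1 = rng.normal(70, 15, size=200).clip(0, 100)
class_grade = (0.5 * test1 + rng.normal(35, 10, size=200)).clip(0, 100)

# Pearson r between predictor and outcome; r^2 is the share of
# class-grade variance accounted for by the single predictor
r = np.corrcoef(test1, class_grade)[0, 1]
r_squared = r ** 2

print(f"r = {r:.3f}, r^2 = {r_squared:.2%}")
```

With a single predictor, the regression R² and the squared Pearson correlation coincide, which is why the table can report either interchangeably.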
Additional regression analyses were carried out with performance in the first half of the semester as the predictors and class grades as the outcome variable (see Table 3) to determine whether additional performance-related information might improve predictions of class grades. Test performance in the first half of the semester comprised tests 1 and 2. Similarly, homework assignment performance comprised assignments 1 and 2.
According to the syllabus, the contribution of tests 1 and 2 to students’ final grades was 30%. In the face-to-face mode, relying on two test scores to define initial test performance did not improve its predictive power much above the expected level. In the online environment, by contrast, reliance on two test scores improved the predictive value of test performance, making it more informative and largely equivalent between modes of instruction. Homework grades from assignments 1 and 2 showed the same pattern obtained when only the grades of assignment 1 were used as predictors. Thus, adding one more homework assignment could not be considered an improvement over the predictive value of the first assignment.
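Extending the analysis from single grades to first-half averages amounts to an ordinary least-squares fit with two predictors. A hedged sketch with simulated data (group sizes, generating weights, and noise levels are invented, not the study’s values):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical first-half grades (%): two tests and two homework assignments
tests = rng.normal(70, 15, size=(n, 2)).clip(0, 100)
homework = rng.normal(85, 10, size=(n, 2)).clip(0, 100)
class_grade = (0.4 * tests.mean(axis=1) + 0.1 * homework.mean(axis=1)
               + rng.normal(30, 8, size=n)).clip(0, 100)

# Average each pair to form the two first-half predictors, then fit an
# ordinary least-squares model with an intercept column
X = np.column_stack([np.ones(n), tests.mean(axis=1), homework.mean(axis=1)])
coef, *_ = np.linalg.lstsq(X, class_grade, rcond=None)

# Model R^2: proportion of class-grade variance explained by both predictors
predicted = X @ coef
ss_res = np.sum((class_grade - predicted) ** 2)
ss_tot = np.sum((class_grade - class_grade.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"model R^2 = {r_squared:.2%}")
```

Comparing this model R² against the single-predictor r² values shows whether the added grades carry incremental predictive information or are largely redundant.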

4. Discussion

The results of the present study can be summarized in three points: First, students’ initial performance (as measured by test 1 and assignment 1) was equivalent between modes of instruction, suggesting that students started on an equal footing. Second, after these initial formative assessments were completed, test performance was overall higher online, whereas only later in the semester did homework performance differ between modes of instruction, favoring the face-to-face mode. Yet, the scores of the summative assessment (i.e., final test) and class grades were equivalent between modes of instruction, suggesting that the concerns of educators regarding the potentially devastating effects of the online mode on students’ learning were unsubstantiated. Third, initial performance, defined as the first test taken by students in class, was a much better predictor of class grades face-to-face than online. Instead, initial homework assignment performance was a modest predictor of class grades online, but not face-to-face. When test performance in the first half of the semester (the average of students’ grades on tests 1 and 2) was used as the predictor, then test performance was a modestly useful predictor both online and face-to-face. The predictive validity of homework performance was again modest in the online mode, and absent in the face-to-face mode.

4.1. How Do These Results Inform Plans for Sustainable STEM Education?

At its core, a key aspect of sustainable education is its ability to maintain quality and standards in the face of substantial challenges [32], thereby minimizing the impact of even unexpected events (e.g., a sudden pandemic requiring distance learning). The concerns of educators around the world about the damages that a sudden switch to online instruction might have inflicted on students who were accustomed to on-campus instruction have been highly publicized and debated [33,34]. The evidence collected in the present study suggests that damages might not be as widespread as previously thought, at least if the synchronous mode is used and instruction, delivered by an experienced educator, is student-centered, as well as capable of promoting active learning. These findings are consistent with those reported by other researchers whose examination of the organizational, personal, and instruction-related factors underlying the effectiveness of the online mode during the pandemic [35,36] has led them to dispute concerns regarding emergency remote instruction as depleting quality and standards [26,37]. Interestingly, the sources of their findings are still a matter of debate. For instance, Gonzalez et al. [26], who reported higher online performance in STEM-related courses, suggested that the COVID-19 pandemic changed students’ learning strategies, from discontinuous to continuous habits, thereby improving online performance. Iglesias-Pradas et al. [37], who also reported higher performance online in STEM-related courses, maintained that digital skills, as well as institutional readiness, are critical to effectiveness.
Our findings add to the extant literature that counters concerns regarding online STEM learning by demonstrating that the absence of damage (at least as measured by class grades) extends to students who are not pursuing STEM majors, and thus to those who may be considered most vulnerable to sudden instructional changes in a subject matter that is not among their main interests. Our findings also suggest that efforts at instructional consistency between online and face-to-face may be critical to ensure that students’ learning is maintained at desirable levels.

4.2. Implications

The COVID-19 pandemic is not entirely over. As a result, institutions of higher education, including the one selected for the present investigation, are forced to continue to rely to a certain extent on the online mode. Namely, a limited number of online courses may continue to serve as substitutes for face-to-face courses due to the demands for additional classroom space posed by the persisting requirements of social distancing. It follows that the measurement of students’ attainment before and during the pandemic can serve as a roadmap for individual educators to assess independently their instructional effectiveness during the pandemic. Most importantly, the findings of comparisons between online and face-to-face instruction, including ours, can be used for the development of training modules for faculty to ensure that students enrolled in online courses receive a quality education. At the selected institution, during the pandemic, modeling of class activities by effective educators has been considered a suitable form of training. As people differ in their responses to environmental changes, mentorship has also been practiced to ensure instructional effectiveness.
If the identification of at-risk students is a concern for faculty, it is important to note that the uniformity of end-of-semester outcomes (as measured by class grades) of the face-to-face and online modes may conceal differences. Consider that, at least for the math courses selected for the present investigation, the identification of at-risk students for remedial interventions may rely on the grades obtained on the first test in face-to-face classes, but the same information is unlikely to be as effective online. In the latter mode, performance on tests in the first half of the semester may be needed for suitable predictions. This finding is inconsistent with the hypothesis formulated at the start of our study that indices of initial course activities would be better overall predictors of course grades in the online mode than in the face-to-face mode. The hypothesis was based on the assumption, supported by evidence collected post-facto through interviews of 15 students from the same subject pool, that studying online would be more distributed across time. Distributed learning would make performance indices of initial activities in a course more likely to reflect the indices of all other activities that contribute to a student’s course grade. Contrary to this expectation, indices of initial performance were not better predictors of course grades online than face-to-face. The reason may be that, in students accustomed to the face-to-face classroom, online learning demands adjustments to the virtual classroom [38] that make work in initial course activities less likely to reflect work in later activities. Adjustments refer to students’ need to quickly, and often independently, learn to navigate the contents of the course in which they are enrolled, and, at the same time, identify the demands of its activities to ensure proper execution.
It is important to note though that the predictive validity of initial formative assessment measures, taken as a whole, was rather modest. When the predictive value of a formative assessment measure hovers around 30%, the possibility of a false alarm (i.e., the mistaken identification of a student as being at risk of failure) or a miss (i.e., the failure to identify an at-risk student) may be substantial, which suggests that in addition to grades, other behavioral measures may need to be collected by instructors.

4.3. Is Online Learning a Sustainable Option in the Future of Higher Education?

In a recent review of studies that compared online and face-to-face instructional modes over two decades, including both pre-pandemic and pandemic courses (2000–2020), Stevens et al. [39] found support for the argument that online learning is a sustainable option. They reported better learning outcomes for the online mode in 41% of the sampled studies (n = 91), better learning outcomes for face-to-face in 18%, and no significant differences in 41%. These findings focus educators’ attention on the obvious fact that the success of the online mode, indexed by learning outcomes that are either superior or equal to the face-to-face mode, is dependent on the quality of the education delivered.
Stevens et al. [39] noted that the setup of the learning environment, whether online or face-to-face, may be the key ingredient of optimal student performance, as measured by final course grades. During the pandemic, adjustments by instructors and students to new online formats, technical constraints, and institutional weaknesses may have created obstacles in the online delivery mode for some, but not all, users [40,41]. Our results, along with those of others [26,37,39], seem to support the idea that the mode of instructional delivery, per se, is not a significant factor in students’ learning outcomes if the setup of courses promotes high levels of cognitive engagement and technical difficulties are not a matter of concern [42]. Since before the pandemic, suitable platforms for online instruction have been available to assist students in their educational pursuits. Although students, such as those in our sample, may not have been accustomed to online instruction before the pandemic, they have long been adept consumers of technology (e.g., cell phones, computers, etc.), which might have fostered a favorable approach to the online environment by eliminating the burden of technical challenges.
Online education is here to stay. The pandemic has merely brought it to the forefront, along with the ongoing debate on its advantages (e.g., greater access and cost-effectiveness) and disadvantages (e.g., diminished human contact) relative to face-to-face instruction. Yet, each higher education institution and the content and format of the academic programs it offers exist in a context of societal and cultural forces that shape the views its recipients hold of the education that the institution imparts [43,44]. In a society, such as SA, in transition to a state of more equitable societal and economic conditions for women and men, online education may lure women with its flexibility and accessibility. At the same time, by making travel unnecessary and social interactions easily monitored, it may move women back to the confines of their homes where they had been relegated in the past, thereby rendering recently acquired rights to independence moot. Thus, the answer to the question of whether online education is a suitable choice for the future of higher education needs to take into consideration the unique features of the societal and cultural contexts of the collectives that particular higher education institutions purportedly serve. In each context, the instruction offered by a given institution and its faculty (including modes of delivery) may impact women’s education differently, depending on locally present cultural and societal traditions, habits, and expectations. The measurement of the impact of instruction on women’s education, which may encompass preferences, choices, and quality of attainments, is a matter to be investigated regularly to ensure equity, especially in the acquisition of computational competencies, which are key to attainment in a variety of professional domains.

4.4. Limitations

The present study has limitations that may constrain the generalizability of its results. As a case study, our findings may be specific to the selected institution and the instructional conditions experienced by its students. It is important to note, though, that the choice of the institution and subject matter (math courses), although deliberate, was based on convenience. We acknowledge that the confluence of organizational and personal factors (e.g., availability of technical equipment, the digital skills of the students and the instructor, and the instructor’s teaching experience and approach) may have contributed to instructional conditions that promoted students’ self-regulated learning in the selected courses. The combination of such factors might make our findings difficult to replicate at other institutions facing less favorable conditions. Thus, we do not claim that our findings generalize across the board. Rather, we argue that comparisons with other studies whose operational frameworks differ allow educators to gain a better understanding of the conditions that render the outcomes of online learning and those of face-to-face learning equivalent. Although the generalizability of large-scale studies, which include a larger number of courses, is undoubtedly greater, case studies that focus on particular instructional conditions and student populations, such as ours, are better equipped to illustrate the confluence of conditions conducive to effective teaching. Our study has shown that, for female students in non-STEM majors, math performance does not degrade when synchronous online instruction is imparted by an experienced, student-centered educator. In our view, this is a notable finding, since these women are among those who have only recently gained access to opportunities once reserved for male students. Of course, further research will have to unpack the key features of such instruction for potential use by other educators.
Undoubtedly, all studies conducted during the pandemic, including ours, reflect a temporary response from instructors and institutions to an unforeseen and sudden event. Emergency remote teaching and an institution’s regular use of online learning may be quite different. Nonetheless, the pandemic has shown that an unplanned change can become a seed for broader planned changes if lessons learned are translated into collegial professional development opportunities. More simply, an unplanned change can become a useful opportunity for a more systematic problem-solving approach to teaching [45], such as replacing a face-to-face class or office-hour meeting with a synchronous virtual meeting for students with transportation difficulties; offering a comprehensive digital copy of course materials to enhance students’ access to key information; periodically sending course materials to students to reinforce distributed studying and discourage cramming; and developing effective online tutoring to broaden the reach of support services [46].
Another limitation of our study is the absence of direct evidence on students’ views of and reactions to emergency remote teaching. To compensate for this limitation, we informally queried 15 students from the same subject pool after the fact. Their responses supported the argument made by Gonzalez et al. [26], who suggested that the COVID-19 pandemic changed students’ learning strategies. Students reported that not having to commute to the selected university gave them additional time to invest in educational pursuits. They also reported that the physical distance between them and their instructors, along with the absence of opportunities for informal encounters with other students, made them feel that they had to rely more heavily on themselves. Some students converted the feeling of distance and the availability of extra time into enhanced self-regulation, which included being more mindful of the flow of class activities (including instructions and deadlines), distributing activities (e.g., studying, reviewing, and completing homework assignments) more uniformly across each week of the semester, and keeping more copious records of the materials and topics covered in class meetings. Yet, all admitted that online classes initially demanded more effort and produced more trepidation than face-to-face classes, mostly because becoming familiar with and adapting to the environment offered by each instructor were seen as largely solitary pursuits. Furthermore, they acknowledged being eager to return to face-to-face classes, which were seen as satisfying the need for proximal human contact and conventional college life that the virtual mode was unable to fulfill.

Author Contributions

All authors contributed equally. Conceptualization, H.M.A., M.A.E.P., & O.J.E.-M.; methodology, H.M.A., M.A.E.P., & O.J.E.-M.; formal analysis, H.M.A., M.A.E.P., & O.J.E.-M.; data curation, H.M.A., M.A.E.P., & O.J.E.-M.; writing—original draft preparation, H.M.A., M.A.E.P., & O.J.E.-M.; writing—review and editing, H.M.A., M.A.E.P., & O.J.E.-M.; project administration, H.M.A., M.A.E.P., & O.J.E.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Acknowledgments

We thank PMU Administration for supporting open-access publishing.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tsai, M.J.; Liang, J.C.; Hsu, C.Y. The computational thinking scale for computer literacy education. J. Educ. Comput. Res. 2021, 59, 579–602.
2. Yadav, A.; Good, J.; Voogt, J.; Fisser, P. Computational thinking as an emerging competence domain. In Competence-Based Vocational and Professional Education; Zhang, M., Ed.; Springer: New York, NY, USA, 2017; pp. 1051–1067.
3. Alshuwaikhat, H.M.; Mohammed, I. Sustainability matters in national development visions—Evidence from Saudi Arabia’s Vision for 2030. Sustainability 2017, 9, 408.
4. Boeren, E. Understanding Sustainable Development Goal (SDG) 4 on “quality education” from micro, meso and macro perspectives. Int. Rev. Educ. 2019, 65, 277–294.
5. Polyakova, A.; Azizova, K.M. Research of the concept of quality education. In Proceedings of the International Scientific and Practical Conference of Specialized and Multidisciplinary Researches, Bilbao, Spain, 21–24 December 2020; Volume 4, pp. 59–60.
6. Drew, D.E. STEM the Tide: Reforming Science, Technology, Engineering, and Math Education in America; JHU Press: Baltimore, MD, USA, 2015; pp. 1–264.
7. Steen, L.A. Math education at risk. Issues Sci. Technol. 2003, 19, 79–81.
8. Hyde, J.S.; Mertz, J.E. Gender, culture, and mathematics performance. Proc. Natl. Acad. Sci. USA 2009, 106, 8801–8807.
9. Ellison, G.; Swanson, A. The gender gap in secondary school mathematics at high achievement levels: Evidence from the American Mathematics Competitions. J. Econ. Perspect. 2010, 24, 109–128.
10. Card, D.; Payne, A.A. High school choices and the gender gap in STEM. Econ. Inq. 2021, 59, 9–28.
11. Daker, R.J.; Gattas, S.U.; Sokolowski, H.M.; Green, A.E.; Lyons, I.M. First-year students’ math anxiety predicts STEM avoidance and underperformance throughout university, independently of math ability. Sci. Learn. 2021, 6, 17.
12. Dossi, G.; Figlio, D.; Giuliano, P.; Sapienza, P. Born in the family: Preferences for boys and the gender gap in math. J. Econ. Behav. Organ. 2021, 183, 175–188.
13. Raabe, I.J.; Boda, Z.; Stadtfeld, C. The social pipeline: How friend influence and peer exposure widen the STEM gender gap. Sociol. Educ. 2019, 92, 105–123.
14. Kanny, M.A.; Sax, L.J.; Riggers-Piehl, T.A. Investigating forty years of STEM research: How explanations for the gender gap have evolved over time. J. Women Minorities Sci. Eng. 2014, 20, 127–148.
15. Dämmrich, J.; Triventi, M. The dynamics of social inequalities in cognitive-related competencies along the early life course–A comparative study. Int. J. Educ. Res. 2018, 88, 73–84.
16. Maley, B.; Rafferty, M. Can math competency predict success in nursing school? Teach. Learn. Nurs. 2019, 14, 198–202.
17. Reyna, V.F.; Nelson, W.L.; Han, P.K.; Dieckmann, N.F. How numeracy influences risk comprehension and medical decision-making. Psychol. Bull. 2009, 135, 943–973.
18. Ritchie, S.J.; Bates, T.C. Enduring links from childhood mathematics and reading achievement to adult socioeconomic status. Psychol. Sci. 2013, 24, 1301–1308.
19. Alghamdi, A.K.H.; Al-Hattami, A.A. The accuracy of predicting university students’ academic success. J. Saudi Educ. Psychol. Assoc. 2014, 1, 1–8.
20. El-Moussa, O.J.; Al Ghazo, R.; Pilotti, M.A.E. Data-driven predictions of academic success among college students in Saudi Arabia. Crit. Stud. Teach. Learn. 2021, 9, 115–134.
21. Barry, A. Gender differences in academic achievement in Saudi Arabia: A wake-up call to educational leaders. Int. J. Educ. Policy Leadersh. 2019, 15, 1–17.
22. Ghasemi, E.; Burley, H.; Safadel, P. Gender differences in general achievement in mathematics: An international study. New Waves-Educ. Res. Dev. J. 2019, 22, 27–54.
23. Alhareth, Y.; Al Dighrir, I.; Al Alhareth, Y. Review of women’s higher education in Saudi Arabia. Am. J. Educ. Res. 2015, 3, 10–15.
24. Alwedinani, J. Gender and Subject Choice in Higher Education in Saudi Arabia; Lulu Press: Morrisville, NC, USA, 2016; pp. 1–234.
25. Pilotti, M.A.E. What lies beneath sustainable education? Predicting and tackling gender differences in STEM academic success. Sustainability 2021, 13, 1671.
26. Gonzalez, T.; de la Rubia, M.A.; Hincz, K.P.; Comas-Lopez, M.; Subirats, L.; Fort, S.; Sacha, G.M. Influence of COVID-19 confinement in students’ performance in higher education. PLoS ONE 2020, 15, e0239490.
27. Rohrer, D.; Taylor, K. The effects of overlearning and distributed practice on the retention of mathematics knowledge. Appl. Cogn. Psychol. 2006, 20, 1209–1224.
28. Rohrer, D.; Taylor, K. The shuffling of mathematics problems improves learning. Instr. Sci. 2007, 35, 481–498.
29. Nazari, K.B.; Ebersbach, M. Distributed practice in mathematics: Recommendable especially for students on a medium performance level? Trends Neurosci. Educ. 2019, 17, 100122.
30. Hartwig, M.K.; Malain, E.D. Do students space their course study? Those who do earn higher grades. Learn. Instr. 2021. Advance online publication.
31. Dixson, D.D.; Worrell, F.C. Formative and summative assessment in the classroom. Theory Into Pract. 2016, 55, 153–159.
32. Gamage, K.A.A.; Pradeep, R.G.G.R.; Najdanovic-Visak, V.; Gunawardhana, N. Academic standards and quality assurance: The impact of COVID-19 on university degree programs. Sustainability 2020, 12, 10032.
33. Bajaba, S.; Mandurah, K.; Yamin, M. A framework for pandemic compliant higher education national system. Int. J. Inf. Technol. 2021, 13, 407–414.
34. Mishra, L.; Gupta, T.; Shree, A. Online teaching-learning in higher education during lockdown period of COVID-19 pandemic. Int. J. Educ. Res. Open 2020, 1, 100012.
35. Alqahtani, A.Y.; Rajkhan, A.A. E-learning critical success factors during the COVID-19 pandemic: A comprehensive analysis of e-learning managerial perspectives. Educ. Sci. 2020, 10, 216.
36. Gautam, D.K.; Gautam, P.K. Transition to online higher education during COVID-19 pandemic: Turmoil and way forward to developing country of South Asia-Nepal. J. Res. Innov. Teach. Learn. 2021, 14, 93–111.
37. Iglesias-Pradas, S.; Hernández-García, Á.; Chaparro-Peláez, J.; Prieto, J.L. Emergency remote teaching and students’ academic performance in higher education during the COVID-19 pandemic: A case study. Comput. Hum. Behav. 2021, 119, 106713.
38. Toom, A. A study of students’ representations of their virtual classroom. J. Mod. Educ. Rev. 2016, 6, 507–519.
39. Stevens, G.J.; Bienz, T.; Wali, N.; Condie, J.; Schismenos, S. Online university education is the new normal: But is face-to-face better? Interact. Technol. Smart Educ. 2021, 18, 278–297.
40. Ghazi-Saidi, L.; Criffield, A.; Kracl, C.L.; McKelvey, M.; Obasi, S.N.; Vu, P. Moving from face-to-face to remote instruction in a higher education institution during a pandemic: Multiple case studies. Int. J. Technol. Educ. Sci. 2020, 4, 370–383.
41. Ali, W. Online and remote learning in higher education institutes: A necessity in light of COVID-19 pandemic. High. Educ. Stud. 2020, 10, 16–25.
42. Chisadza, C.; Clance, M.; Mthembu, T.; Nicholls, N.; Yitbarek, E. Online, and face-to-face learning: Evidence from students’ performance during the COVID-19 pandemic. Afr. Dev. Rev. 2020, 33, 114–125.
43. Sá, M.J.; Serpa, S. The COVID-19 pandemic as an opportunity to foster the sustainable development of teaching in higher education. Sustainability 2020, 12, 8525.
44. El Said, G.R. How did the COVID-19 pandemic affect higher education learning experience? An empirical investigation of learners’ academic performance at a university in a developing country. Adv. Hum. Comput. Interact. 2021, 2021, 6649524.
45. Musa, S.; Abdullahi, A.S.; Audu, H.J.; Babagana, U.U. COVID-19 pandemic: An outline of digital learning tools for creating teaching and learning contents. Int. J. Adv. Acad. Res. 2020, 6, 11–20.
46. Johns, C.; Mills, M. Online mathematics tutoring during the COVID-19 pandemic: Recommendations for best practices. Primus 2021, 31, 99–117.
Table 1. Descriptive statistics of students’ performance face-to-face (FtF) and online, including means (M) and standard errors of the mean (SEM).
Assessment Measure (weight) | Online M (SEM) | FtF M (SEM) | Difference
1st half of the semester
Assignment 1 (3%) | 84.32 (1.70) | 85.68 (1.38) | =
Test 1 (15%) | 84.02 (1.41) | 83.06 (1.11) | =
Assignment 2 (3%) | 76.33 (2.14) | 78.26 (1.48) | =
Test 2 (15%) | 89.77 (0.84) | 83.76 (1.02) | + online
2nd half of the semester
Assignment 3 (4%) | 65.12 (1.93) | 83.42 (1.35) | + FtF
Test 3 (15%) | 87.80 (0.85) | 82.77 (1.06) | + online
Test 4 (15%) | 91.02 (0.76) | 84.88 (0.94) | + online
Final exam (30%) | 74.32 (2.12) | 74.49 (0.97) | =
Class grade | 83.07 (0.93) | 82.80 (0.48) | =
Note: Significant differences between online and face-to-face are marked with a “+” sign next to the mode that yielded greater performance. The “=” sign indicates that the outcomes of the two modes could be considered equivalent.
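The differences flagged in Table 1 can be sanity-checked from the published summary statistics alone. The sketch below recovers an approximate Welch-style t statistic from the two group means and their SEMs; this is purely an illustration based on the values reported in Table 1, not a reproduction of the authors' actual inferential procedure.

```python
import math

def approx_t(m1, sem1, m2, sem2):
    """Approximate Welch t statistic computed from group means and SEMs."""
    return (m1 - m2) / math.sqrt(sem1 ** 2 + sem2 ** 2)

# Test 2, flagged "+ online" in Table 1: a large t suggests a reliable gap.
t_test2 = approx_t(89.77, 0.84, 83.76, 1.02)   # roughly 4.5

# Test 1, flagged "=": a small t is consistent with equivalence.
t_test1 = approx_t(84.02, 1.41, 83.06, 1.11)   # well below 1
```

With the sample sizes implied by SEMs this small, a |t| near 4.5 far exceeds conventional significance cutoffs, whereas a |t| below 1 does not, matching the "+" and "=" flags in the table.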
Table 2. Regression analyses with initial test and homework performance as predictors and class grades as the outcome variable.
Predictor Variables | B | SE | Beta | t | Sign. | Correlation
Online
Initial test grades | 0.221 | 0.040 | 0.333 | 5.51 | p < 0.001 | 0.377 (14.21%)
Initial homework grades | 0.164 | 0.033 | 0.300 | 4.97 | p < 0.001 | 0.349 (12.18%)
Face-to-Face
Initial test grades | 0.225 | 0.027 | 0.519 | 8.37 | p < 0.001 | 0.518 (26.83%)
Initial homework grades | 0.037 | 0.022 | 0.107 | 1.72 | ns | 0.103 (1.06%)
Note: Online: R = 0.480; Face-to-Face: R = 0.529.
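One detail of the Correlation column worth making explicit: the parenthesized percentage is the squared correlation expressed as variance explained. For the Table 2 values this identity holds exactly (for Table 3 it holds to rounding), as the short check below, using the published Table 2 numbers, confirms.

```python
# Each parenthesized percentage equals the squared correlation times 100,
# i.e., the share of variance in class grades accounted for by the predictor.
table2 = [(0.377, 14.21), (0.349, 12.18), (0.518, 26.83), (0.103, 1.06)]
for r, pct in table2:
    assert round(r * r * 100, 2) == pct   # e.g., 0.377^2 * 100 = 14.21
```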
Table 3. Regression analyses with test and homework performance in the first half of the semester as predictors and class grades as the outcome variable.
Predictor Variables | B | SE | Beta | t | Sign. | Correlation
Online
1st half of sem. test grades | 0.543 | 0.054 | 0.537 | 10.12 | p < 0.001 | 0.597 (35.64%)
1st half of sem. homework grades | 0.159 | 0.032 | 0.267 | 5.03 | p < 0.001 | 0.389 (15.13%)
Face-to-Face
1st half of sem. test grades | 0.283 | 0.032 | 0.551 | 8.81 | p < 0.001 | 0.577 (33.29%)
1st half of sem. homework grades | 0.044 | 0.033 | 0.084 | 1.35 | ns | 0.257 (6.61%)
Note: Online: R = 0.651; Face-to-Face: R = 0.583.
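Readers who wish to run this kind of analysis on their own course data can recover the quantities reported in Tables 2 and 3 (unstandardized B, its standard error, the standardized Beta, and t) from an ordinary least-squares fit. The sketch below uses entirely simulated, hypothetical data; the coefficients and sample size are arbitrary choices for illustration, not values from the present study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250  # hypothetical number of students

# Simulated data: class grade driven by test and homework performance plus noise.
test = rng.normal(85, 8, n)
homework = rng.normal(78, 10, n)
grade = 0.5 * test + 0.15 * homework + rng.normal(0, 5, n)

# Design matrix with an intercept column; ordinary least squares.
X = np.column_stack([np.ones(n), test, homework])
b, *_ = np.linalg.lstsq(X, grade, rcond=None)

# Standard errors of the coefficients from the residual variance.
resid = grade - X @ b
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# Standardized coefficient (Beta) rescales B by sd(predictor) / sd(outcome);
# t is the ratio of B to its standard error.
beta_test = b[1] * test.std(ddof=1) / grade.std(ddof=1)
t_test = b[1] / se[1]
```

A t value well above 2 for `t_test` here corresponds to the "p < 0.001" entries in the tables, while a Beta near zero with a small t (as for face-to-face homework grades) would mark a non-significant predictor.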
