Review

From Recall to Resilience: Reforming Assessment Practices in Saudi Theory-Based Higher Education to Advance Vision 2030

by
Mubarak S. Aldosari
Department of Special Education, Prince Sattam Bin Abdulaziz University, Al-Kharj 16273, Saudi Arabia
Sustainability 2025, 17(21), 9415; https://doi.org/10.3390/su17219415
Submission received: 28 August 2025 / Revised: 16 October 2025 / Accepted: 22 October 2025 / Published: 23 October 2025
(This article belongs to the Section Sustainable Education and Approaches)

Abstract

Assessment practices are central to higher education and particularly critical in theory-based programs, where they facilitate the development of conceptual understanding and higher-order cognitive skills. They also support Saudi Arabia’s Vision 2030 agenda, which aims to drive educational innovation. This narrative review examines assessment practices in theory-based programs at a Saudi public university, identifies discrepancies with learning objectives, and proposes potential solutions. The review synthesised peer-reviewed literature (2015–2025) from Scopus, Web of Science, ERIC, and Google Scholar, focusing on traditional and alternative assessments, barriers, progress, and comparisons with international standards. It found that traditional summative methods (quizzes, final exams) still dominate and emphasise memorisation, limiting the development of higher-order skills. Emerging techniques, such as projects, portfolios, oral presentations, and peer assessment, are gaining traction but face institutional constraints and faculty resistance. Digital adoption is growing: 63% of students are satisfied with learning management system tools, and 75% find online materials easy to understand; yet advanced analytics and AI-based assessments remain rare. A comparative analysis reveals that international standards favour formative feedback, adaptive technologies, and holistic competencies. The misalignment between current practices and Vision 2030 highlights the need to broaden assessment portfolios, integrate technology, and provide faculty training. Saudi theory-based programs must transition from memory-oriented evaluations to student-centred, evidence-based assessments that foster critical thinking and real-world application. The review recommends adopting diverse assessments (projects, portfolios, peer reviews), investing in digital analytics and adaptive learning, aligning assessments with learning outcomes and Vision 2030 competencies, and implementing ongoing faculty development.
The study offers practical pathways for reform and highlights strategic opportunities for achieving Saudi Arabia’s national learning outcomes.

1. Introduction

Assessment practices are a central, if not indispensable, part of higher education systems; they provide the means to evaluate student learning, guide instructional practices, and ensure that academic programs are aligned with institutional aims [1]. The role of assessments is especially vital in theory-based programs, where abstract concepts and cognitive skills take precedence over practical application. By ‘theory-based programs,’ we refer to academic programs that emphasise theoretical knowledge over hands-on practice (e.g., pure sciences and humanities), focusing on conceptual understanding rather than technical skill training. Assessments in these programs gauge students’ in-depth mastery of theoretical knowledge and can both reflect and shape their capacity for critical thinking, problem-solving, and applying knowledge in different contexts [2]. Effective assessment strategies therefore demand academic rigour that prepares students to face future challenges in both professional and educational settings. While assessment in higher education serves as a means of evaluation, it is much more than that: assessment is a process that transforms the teaching and learning experience [3]. Educators use assessments to determine the effectiveness of their instructional methods, identify areas where students are struggling, and adjust their approach to meet learners’ needs. Assessments serve as markers of students’ academic accomplishment, as prompts for reflection, and as motivators to delve deeply into course material [4].
Assessment practices have special importance in theory-based programs. Many of these programs focus on developing conceptual understanding, critical thinking, and analytical skills, qualities that are less outwardly observable than in applied fields [5]. Written exams and essays have traditionally been used to assess theoretical knowledge and understanding. However, they cannot always capture the entire arc of student learning, from lower-order skills through to higher-order cognitive skills (HOCS) such as synthesis and evaluation. Designing assessments that measure knowledge retention while also stimulating intellectual growth and creativity has become a challenge for educators in theory-based programs [6]. Moreover, students’ learning behaviours are shaped by assessment practice. Well-designed assessments encourage deep learning, which involves creating connections between ideas, critically evaluating information, and applying previously learned concepts to new situations. Flawed assessments, by contrast, can produce surface learners, students who memorise without truly understanding [7]. This underscores the need for creative and meaningful assessment strategies aligned with the theoretical learning objectives these programs provide.
Ensuring that Saudi Arabia’s educational landscape is adequately prepared for rapid transformation is a key component of the ambitious goals outlined in Vision 2030. Launched in 2016, Vision 2030 is a comprehensive national development plan designed to diversify the country’s economy, reduce its reliance on oil, and foster a knowledge-based society [8,9]. Education is a central pillar of this vision, with increased investment in improving the quality of teaching, learning, and research at all levels. Vision 2030 places higher education reform among its top objectives. To this end, the government has revised curricula, updated teaching methodologies, and adopted international best practices. These efforts aim to equip Saudi students with the skills and knowledge necessary for life in a technologically advanced and globally connected world [10]. As part of this reform agenda, assessment practices have become integral to the progress of academic programs.
Innovative and effective assessment practice is particularly needed in theory-based programs. Traditional methods of assessment, which rely on standardised tests and essay scores, are increasingly deemed inadequate for ever-rising educational requirements [11]. There is a growing understanding that assessments must evolve to accommodate the knowledge and skills of the twenty-first century. This means integrating formative assessments that provide continuous feedback, adopting technology-enhanced assessments, and designing tasks that simulate real-world application [12]. Like other institutions in Saudi Arabia, the institution under study is at the forefront of these changes. As a public university serving a diverse student population, it plays an essential role in supporting Vision 2030. This study examines assessment practices in its theory-based programs to contribute to the broader national effort to improve the quality of education and promote student success [13].
To ground the inquiry, this review is guided by contemporary assessment theory. Educational research emphasises that assessment is not only a tool for evaluation but also a driver of learning improvement. In other words, well-designed assessments can actively transform and enhance teaching and learning [3]. Guided by this “assessment-for-learning” perspective, the study examines the institution’s theory-based programs to contribute to the broader national effort to improve education quality and student success [13]. By aligning assessment practices with learning outcomes and student engagement, the review builds on the idea that assessments can catalyse deeper learning rather than merely measure it.
Ultimately, transforming assessment from rote memorisation toward resilience-building is imperative. This means moving beyond testing factual recall to cultivating students’ adaptability, perseverance, and lifelong learning skills, traits that enable them to thrive amid future challenges. By adopting assessments that foster critical thinking, creativity, and problem-solving, educators help develop resilient learners who can continuously learn and innovate in a rapidly changing environment. Such a shift directly supports Vision 2030’s goal of preparing graduates for a knowledge-based, globally competitive economy.
This narrative review examines assessment practices in theory-based programs at the institution under study to address challenges, analyse current practices, and identify remedial steps. It aims to identify the primary obstacles in current assessments, including misalignment with learning objectives and the failure to evaluate higher-order skills. The review comprehensively analyses existing practices to highlight their strengths and weaknesses and determine whether they effectively measure student learning. It then proposes evidence-based improvements drawn from international best practices that align with Saudi Arabia’s Vision 2030 goals. In doing so, the study contributes to enhancing the quality of assessments in theory-based programs and advocates for a student-centred education system that prepares graduates for global competition. This paper situates assessment reform at the heart of national transformation, recognising that without assessment innovation, pedagogical reform and workforce readiness will fall short of Vision 2030’s aspirations.

2. Assessment Practices in Saudi Universities

2.1. Overview of Commonly Used Assessment Methods in Saudi Higher Education

Assessment practices in Saudi higher education are essential for evaluating student learning, ensuring academic standards, and promoting student growth. Traditional methods, such as written exams, quizzes, and assignments, remain the dominant approach, especially in theory-based programs. These methods are designed to test students’ foundational knowledge by assessing their ability to recall facts and repeat learned material [13]. However, they fall short in measuring higher-order cognitive skills, such as critical thinking, problem-solving, and analytical abilities. Despite their widespread use, traditional assessments often promote surface learning, which focuses more on memorisation than on deep understanding.
Recently, some Saudi universities have begun to adopt emerging alternative assessment methods. Project-based assessments allow students to apply theoretical knowledge to real-world situations, encouraging creativity, critical thinking, and problem-solving skills [7]. Portfolios are another innovative method, enabling instructors to track students’ learning progress over time and assess both the final product and the development of their skills [14]. Additionally, oral presentations have become a standard method to evaluate communication skills, helping students develop confidence and the ability to present ideas effectively [15]. Peer reviews and group assessments also foster teamwork, accountability, and critical evaluation, preparing students for the professional world where collaboration and feedback are essential [16]. These alternative methods reflect a growing shift in Saudi universities towards more dynamic, participatory assessment practices.
Table 1 illustrates the shift from traditional assessments based on knowledge recollection and basic understanding towards newer approaches founded on critical thinking and practical application. These alternative approaches promote active student engagement with the task and foster higher-order cognitive skills such as analysis and evaluation.

2.2. Barriers and Progress in Assessment Practices in Saudi Higher Education

The transition to alternative assessment methods in Saudi universities has been slow, primarily due to institutional constraints, faculty resistance, and policy limitations. Many universities continue to rely on traditional assessments such as written exams, quizzes, and assignments because of long-standing practices and insufficient resources [17]. Although more innovative methods, such as project-based assessments, hold potential, their implementation requires additional resources, including faculty training and infrastructure, which may not always be readily available. Faculty resistance is another significant barrier, as many educators are more familiar with traditional assessment techniques and lack expertise in alternative strategies. Transitioning to these methods requires a fundamental shift in teaching practices, which can be daunting for instructors with established routines and limited experience with new approaches [18]. Additionally, students in theory-based programs, accustomed to assessments that prioritise rote memorisation, may find new methods unfamiliar and intimidating, and thus resist changes that focus on critical thinking and practical application [19].
Policy and curriculum constraints further exacerbate these challenges. Many institutions have rigid policies that limit experimentation with alternative methods. Moreover, curricula that emphasise theoretical knowledge often lack the integration of practical, skill-based assessments, which are essential for fostering more engaging and effective learning experiences [20]. These barriers collectively hinder the broader adoption of innovative assessment methods that could significantly enhance student engagement and learning outcomes.
Despite these challenges, progress is being made in Saudi higher education. Several institutions have begun integrating Learning Management Systems (LMS) and other digital technologies to diversify assessment methods. For instance, LMS platforms now incorporate online quizzes and discussion forums, facilitating formative assessments and providing instant feedback, which promotes continuous learning and engagement [11]. Furthermore, some universities have piloted experiential learning programs, such as capstone projects, internships, and service-learning, that allow students to apply theoretical knowledge in real-world settings. These programs naturally integrate alternative assessment methods, such as peer evaluations, reflective journals, and presentations, to assess higher-order cognitive skills [21]. Overcoming these entrenched barriers requires deliberate institutional support. For example, establishing a supportive “assessment culture” (shared values and policies that encourage innovation in evaluation) is vital to reducing resistance. Strong administrative leadership (as seen in some Western universities) and faculty development initiatives can empower educators to experiment with new assessment strategies. In fact, fostering such an assessment culture with clear incentives and training [1] helps drive the sustainable adoption of alternative methods despite initial scepticism.
These initiatives mark a significant departure from traditional exam-based assessments and reflect the growing recognition of the need for diverse and innovative evaluation approaches. Although faculty resistance and infrastructure limitations persist, these developments suggest that Saudi universities are making substantial progress toward modernising their assessment practices [20]. As these programs evolve, they have the potential to transform the educational landscape, preparing students for the demands of a rapidly changing workforce.
It is also essential to prepare students for the shift toward new assessment formats. Many students accustomed to memorisation-based exams may feel anxious or uncertain about projects, portfolios, and other unfamiliar methods [19]. To ease this transition, instructors can introduce alternative assessments gradually and provide orientation or examples that demonstrate their value. By actively coaching students on how to approach and benefit from these new assessment forms, universities can reduce student resistance and build confidence in more complex, real-world tasks. This supportive approach helps students understand the purpose of innovative assessments and encourages buy-in as they experience the benefits to their learning.

3. Methodology

Narrative literature reviews offer a flexible approach to synthesising research, as they do not adhere to a rigid, protocol-driven template. Nevertheless, clear documentation of the search and analytical procedures enhances transparency and scholarly credibility.

3.1. Search Strategy and Data Synthesis

This review consolidates peer-reviewed scholarship and authoritative reports concerning assessment practices in Saudi universities. It encompasses traditional methods (written examinations, quizzes, assignments), alternative approaches (projects, portfolios, oral presentations, peer and self-assessments), barriers to adoption, progress in digital and experiential learning, and comparative analyses with international standards.
To ensure methodological rigour, a structured review protocol was followed. Literature searches were conducted across Scopus, Web of Science, ERIC, and Google Scholar for the period 2015–2025 using keyword combinations such as ‘Saudi assessment,’ ‘formative learning,’ ‘portfolio evaluation,’ and ‘Vision 2030 education reform.’ Inclusion criteria focused on peer-reviewed articles, reports, and government strategy documents addressing higher education assessment. After an initial yield of 85 sources, titles and abstracts were screened by two independent reviewers. A total of 42 sources were retained after applying relevance filters and resolving any disagreements through discussion. This transparent process strengthens the reliability of the narrative synthesis.

3.2. Inclusion, Exclusion, and Screening

Sources were included if they (1) examined assessment practices in Saudi higher education or in comparable international settings; (2) addressed traditional and/or alternative assessment methods, barriers to implementation, progress, or alignment with international standards; (3) were published in English; and (4) comprised peer-reviewed research, government or policy reports, or scholarly analyses. Exclusions applied to materials unrelated to higher education assessment, non-scholarly outputs (e.g., blogs), and publications outside the 2015–2025 timeframe.
Titles and abstracts were initially screened for relevance, and duplicate records were removed. Full texts of potentially relevant documents were evaluated against the inclusion criteria. Consistent with narrative review conventions, a formal quality appraisal was not undertaken; however, emphasis was placed on peer-reviewed sources to enhance reliability. Disagreements regarding inclusion were resolved through discussion among the reviewers. Ultimately, approximately 60 sources met all inclusion criteria and were included in the narrative synthesis. This final corpus comprised peer-reviewed studies and relevant policy reports from both Saudi Arabia and international contexts, providing a broad basis for analysis.

3.3. Data Extraction and Synthesis

Data from each included source were extracted on the type of assessment method studied, reported benefits and challenges, contextual influences (e.g., institutional policies, cultural factors), and recommendations. The synthesis employed a thematic, narrative approach, grouping findings into categories that reflected standard practices (traditional and alternative), barriers, progress within Saudi universities, and comparisons with international standards. Narrative review methodology requires authors to articulate their analytical rationale; accordingly, the thematic framework was explicitly aligned with the major themes identified in the literature.
Although detailed search strategies are not obligatory in narrative reviews, the review maintained comprehensive records of search terms, databases, and screening decisions to enhance transparency and provide context for evaluating the completeness of the evidence base.
Although this is a narrative review, a basic quality appraisal was conducted by classifying sources according to study design (e.g., empirical studies, policy analyses, case reports). Preference was given to peer-reviewed empirical studies. Sources were read in full and evaluated for methodological soundness, relevance, and credibility. We acknowledge that this approach does not substitute for formal scoring but provides a structured basis for synthesis and helps reduce interpretive bias.

4. Comparative Analysis with Global Practices

4.1. Comparison of Saudi Practices with International Standards

Over the past few years, higher education institutions in Saudi Arabia have undergone significant changes in assessment practices, consistent with national efforts to improve educational quality. However, these practices still differ notably from international standards and strategies. This comparative analysis examines Saudi practices in assessment design, implementation, and outcomes, highlighting their alignment with global benchmarks and areas for improvement.
Traditional assessments in Saudi universities rely heavily on summative tools (final exams, quizzes, and standardised tests) to measure student learning at the end of a course. While such methods can provide a snapshot of student achievement, they often fail to foster critical thinking, problem-solving, or lifelong learning skills [22]. By contrast, formative assessments (regular feedback, peer assessment, and self-assessment) are increasingly integral to international practice, supporting continuous improvement. For instance, surveys of online learning indicate that 63% of Saudi students were satisfied with learning management system (LMS) tools, and 75% found course materials easy to understand with online support [23]. These findings suggest that well-designed digital assessments can promote deeper engagement when combined with formative feedback. Globally, the shift to online education has been dramatic: by 2020, 98% of universities worldwide had moved their classes online, and hybrid learning participation grew by 36% between 2012 and 2019, accelerating by a further 92% during the pandemic [24].
The use of technology in assessment also differs markedly between Saudi and international contexts. During the COVID-19 pandemic, Saudi universities adopted platforms such as Blackboard and Moodle to administer online exams and assignments. In one study, 86.5% of medical students preferred the Blackboard platform for e-learning; nearly half (48.5%) used iPads, and 37.6% used laptops [25]. More than half rated their satisfaction and intellectual environment as moderate, and 85% gave moderate scores for communication [25]. Nevertheless, advanced features such as adaptive testing, plagiarism detection, and AI-generated feedback remain limited [26]. Internationally, AI-based assessment tools are increasingly used to analyse student responses, identify learning gaps, and offer personalised learning pathways. In the United States, for example, 75% of undergraduate students were enrolled in at least one distance-education course in 2020, and 44% studied exclusively online, figures that underline the need for sophisticated digital assessment tools [24].
Saudi universities have attempted to align their assessments with the national qualifications framework, the SAQF, ensuring that they reflect labour market and societal competencies [27]. However, global frameworks often integrate broader competencies such as intercultural communication, teamwork, and digital literacy. Countries like Finland and Australia employ holistic assessment systems that blend academic knowledge with generic skills to prepare students for the global workforce. Differences between Saudi and international contexts also surface in faculty roles. In Saudi Arabia, faculty members typically design and administer assessments independently, with varying degrees of assessment literacy [28]. Participation in workshops and training programs is not always mandatory. By contrast, universities in the USA and UK provide structured professional development to enhance faculty skills in assessment design, implementation, and feedback [29]. Such programs ensure assessments align with institutional goals and cultivate a culture of continuous improvement.
Feedback practices further illustrate the gap between local and global standards. Saudi universities tend to provide summative feedback (grades and brief comments) after assessments. This approach informs students of their current standing but often fails to guide them on how to improve [30]. In one Saudi medical college, 55% of students rated their e-learning experience as fair or reasonable, and 61.6% felt their living arrangements were compatible with online learning. However, 69.9% believed that some disciplines were unsuitable for e-learning, underscoring the importance of constructive, formative feedback. International strategies emphasise timely, specific feedback that fosters self-reflection and ownership of learning [25].
Cultural and contextual factors also shape assessment practices. Saudi society’s collectivist ethos values conformity and respect for authority, which can discourage self-assessment and peer assessment activities [31]. Gender-segregated education presents unique challenges; fair assessment across male and female campuses requires careful planning [32]. These factors contrast with international practices in Western countries, which encourage individual accountability and critical thinking. Despite these challenges, progress is evident: 78.6% of Saudi medical students reported that the availability of sufficient technology was essential for e-learning, 62.9% appreciated access to a quiet study space, and 63.8% agreed that the university’s study platform was useful. These figures point to growing infrastructure and increasing acceptance of digital tools among students. At the same time, a survey of faculty at Saudi universities found that 63% still relied primarily on exams to measure achievement [31]. Similarly, a labour market study by Galil [24] reported that over 75% of surveyed employers ranked critical thinking and problem-solving as top competencies lacking among graduates of theoretical programs.
The adoption of authentic assessments (case studies, project-based learning, and internships) remains limited in Saudi universities compared with international counterparts. Countries such as Canada and Singapore incorporate authentic assessments into their curricula, enabling students to apply their knowledge in real-world contexts and develop transferable skills [32]. Saudi institutions have begun to introduce these practices, but at a slower pace. International standards also emphasise inclusive assessment approaches, such as Universal Design for Learning (UDL), to accommodate diverse learners. Although Saudi universities have taken steps to support students with special needs, further progress is needed to implement UDL principles fully [33]. In contrast, countries such as Sweden and Germany have robust policies in place to ensure equitable access to education and assessment.
To illustrate the key differences and similarities between Saudi and international assessment practices, Table 2 provides a comparative overview of the two.
Saudi higher education has made substantial progress in modernising assessment practices, particularly through the adoption of digital tools and alignment with national frameworks. However, the integration of formative feedback, comprehensive faculty development, and inclusive assessment remains uneven [23,25]. Quantitative data from recent studies reveal both promising trends, such as high student satisfaction with LMS tools and recognition of the importance of technological infrastructure, and persistent challenges, such as reliance on summative exams and limited uptake of authentic assessments. Leveraging these insights can help policymakers and educators refine assessment strategies to better support student learning and align with global best practices.

4.2. Alignment and Divergence of Local Practices with Global Best Practices

It is essential to understand what works in a given educational or professional system and to identify strengths, areas for improvement, and best practices, because the institution under study is evaluated on how closely its practices correspond with internationally recognised standards. Such evaluations serve as benchmarks for teaching, assessment, and overall academic delivery. Globally, learner-centred education, technology integration, inclusivity, and continuous improvement are emphasised. The pandemic underscored this shift: by 2020, 98% of universities worldwide had moved their classes online, and 75% of U.S. undergraduates had taken at least one distance-education course, with 44% studying exclusively online [24]. Such widespread adoption of digital tools is associated with improved engagement and learning outcomes [34]. Faculty surveys suggest that adapting to online teaching can enhance pedagogical approaches. Specifically, 77% of instructors who taught online reported that the experience improved their teaching, 75% noted that they thought more critically about engaging students, 65% utilised more multimedia content, and 69% incorporated more active learning techniques [35].

4.2.1. Alignment with Global Best Practices

The institution under study has aligned with several best practices through the integration of technology and collaborative learning. It has implemented LMSs, such as Blackboard, for online learning and e-assessment, mirroring a global trend in which North America holds 36% of the global LMS revenue and is projected to have 101.1 million online platform users by 2029, while Moodle commands 69% of Europe’s LMS market. Surveys of Saudi universities illustrate the impact of these tools: at one medical college, 86.5% of students preferred using Blackboard; nearly half accessed courses via iPads (48.5%) or laptops (37.6%); and broader surveys found that 63% of students were satisfied with LMS tools and 75% found course materials easier to understand with online support [25]. More than 78% emphasised the importance of technology for e-learning, and 63.8% agreed that the university’s study platform was useful. In response to the COVID-19 pandemic, the institution adopted a hybrid teaching approach and developed collaborative learning environments to support its students. Students now engage in group projects, peer assessments, and active-learning activities, reflecting international trends that value teamwork and communication.

4.2.2. Divergences and Areas for Improvement

Despite these strengths, notable divergences remain. Traditional assessments (rote memorisation and standardised tests) persist in some programmes, limiting the development of higher-order cognitive skills (critical thinking, problem-solving, and creativity). Experiential learning opportunities remain scarce. In the United States, fewer than one-quarter of undergraduates (22%) completed an internship in the 2021–2022 academic year; 41.3% of students reported being unable to secure an internship, and 67% of those who wanted an internship could not obtain one. Overall, only 30% of educational institutions offer experiential learning opportunities, while 39% of students never participate in any internship or experiential learning program. These figures underscore the importance of integrating project-based assessments, internships, and industry-led projects that enable students to apply their knowledge in authentic contexts.
Feedback practices also vary. International standards emphasise timely, detailed feedback, yet many institutions still provide only grades or brief comments. Faculty attitudes toward innovation are similarly mixed: according to a U.S. survey, 56% of faculty members agreed that their institution encourages experimentation with new teaching approaches, and 52% reported receiving adequate technical support. However, only 39% of faculty members fully support the increased use of educational technologies, 54% describe themselves as followers rather than early adopters, and approximately 10% are opposed. Furthermore, 36% of faculty members disagree that online courses can achieve learning outcomes equivalent to those of in-person courses. These attitudes underscore the need for more robust professional development.
Faculty digital competence is another area requiring attention. A survey of 30,407 academics found that while over 70% perceive themselves as being at intermediate or advanced levels of digital competence, 30.55% scored zero in at least one digital competence area, and 70.51% scored 0 or 1 in at least one area. Only 20.39% of academics were at advanced levels in teaching and learning, 19.63% in assessment, and 28.66% in empowering learners [36]. Top-level proficiency in accessibility and inclusion was achieved by 36.81%, and just 34.18% excelled in digital continuing professional development. At least 6% of academics remained at beginner levels (A1–A2) [36]. Such figures suggest the need for targeted faculty training, particularly in designing assessments that promote higher-order skills and in effectively leveraging digital tools.

4.2.3. Inclusivity, Accessibility, and Faculty Development

Inclusivity and digital accessibility are international priorities. A 2022 survey of more than 200 higher education professionals found that nearly 80% of institutions consider accessibility a high priority, and 80% have a diversity and inclusion strategy in place. However, only 34% include digital accessibility within that strategy. Lack of awareness (cited by 61%) and insufficient internal skills (57%) were identified as significant barriers. Although the institution under study offers services for students with disabilities, it still needs to upgrade its infrastructure, provide assistive technologies, and train its staff to address the diverse learning needs of its students.
Faculty development remains a critical gap. Leading universities require ongoing professional development in innovative teaching, assessment, and research techniques. However, participation at the institution under study is sporadic. Improving digital competence across the workforce is essential: while 69.19% of academics perceive themselves as being at intermediate levels, nearly 70% still score very low in at least one digital competence [36]. Institutional policies should therefore prioritise continuous professional development and embed digital accessibility within teaching and assessment frameworks.

4.2.4. Industry Partnerships and Experiential Learning

Industry partnerships and experiential learning programmes help students acquire practical skills and enhance employability. Surveys indicate that 81% of students believe schools should offer company-led projects, and 79% consider on-the-job learning to be essential. However, as noted above, experiential learning opportunities remain limited: only 30% of institutions offer them, and 39% of students never participate in internships. Employers recognise the importance of collaboration: 64% report skill gaps that reduce organisational efficiency, yet only 46% believe colleges adequately prepare students; consequently, 64% of employers already collaborate with educational institutions to align curricula with workforce needs. Participation in hands-on learning programmes yields tangible benefits: 68% of participants receive job offers after such experiences, and 92% of individuals agree that strategic workforce education programmes benefit organisations. The institution under study has established local and regional industry partnerships but could expand these collaborations to include international firms, thereby enhancing students’ global perspectives and job prospects.

4.3. Adaptable Innovative Approaches from Global Practices

The examination of diverse approaches to assessment practices internationally highlights several innovative strategies that can be adapted to enhance implementation at the institution under study. Learning analytics collects data from student interactions within learning environments to inform assessment practices, enabling proactive interventions [37]. At Purdue University, the Course Signals system uses predictive modelling to identify at-risk students and provide real-time feedback. A 2012 study reported that students taking at least one Course Signals course saw a 21% increase in retention. In a double-masked trial, 67% of students who received a yellow or red warning showed improvement in their effort and grades, and 78% of those with a red warning showed improvement in their performance. Early pilot data indicated that first-year retention rates were 97% for students in Course Signals classes, compared to 83% for those in traditional classes. These figures illustrate how learning analytics can facilitate continuous, formative assessment [38].
Adaptive learning technologies cater to individual learning paths by adjusting content difficulty based on students’ progress [39]. For instance, Smart Sparrow is used by over 700 institutions, and CogBooks reports a 90% reduction in student dropout with a 24% increase in success. Realise It claims its adaptive training modules reduce compliance training time by 40%, cut content management effort by 50%, and increase learner adoption by 40%. Squirrel AI operates in more than 60,000 public schools across 12,000 cities, offering personalised tutoring at scale. These examples demonstrate that adaptive platforms enhance learner engagement and provide tailored assessment opportunities.
Peer and self-assessment have been employed at the University of Edinburgh to develop metacognitive skills, encouraging students to evaluate their own work critically and that of their peers [40]. This process cultivates reflection, self-regulation, and deeper understanding, aligning assessments with learner-centred pedagogies. Game-based assessment is another trend: MIT’s Education Arcade uses gamified platforms to measure skills such as problem-solving, critical thinking, and creativity [41], allowing students to apply theoretical knowledge in simulated scenarios.
Digital portfolios (e-portfolios) provide comprehensive records of students’ learning journeys, enabling assessment of progress and reflection over time [42]. According to an Educause survey of undergraduate students in 55 countries, more than 50% of students use e-portfolios during their college careers, and 10% use them in nearly all courses. Over 80% of employers surveyed said that e-portfolios are useful in screening candidates, underscoring their value in both academic and professional contexts.
Competency-based assessment measures mastery of specific skills rather than time spent in courses [43]. Western Governors University (WGU) is a leading example, serving over 180,000 students and boasting more than 400,000 alumni. WGU accounts for 5.2% of all U.S. bachelor’s degrees in education and has trained 2% of the nation’s registered nurses. Students progress at their own pace and demonstrate competencies through projects, portfolios, and exams, making this model flexible and outcomes-focused.
VR/AR technologies enable experiential assessment [44]. A 2025 UK study reported that VR adoption in schools increased by 35% in 2024, and 93% of teachers believed that VR enhances teaching quality and student engagement. Simulation-based assessments gained support from 74% of teachers, and VR training was found to be four times faster and 52% cheaper than traditional training at scale. VR lesson plans increased student engagement by 30%, and 59% of teachers reported that combining VR with human instruction yields the best outcomes. Such data demonstrate VR’s potential to provide immersive, authentic assessment experiences.
Sustainable assessment emphasises lifelong learning and ethical development, encouraging students to evaluate their own work beyond formal education [45]. Cross-disciplinary assessment brings together knowledge from multiple fields to address complex problems, fostering innovation and creativity [46]. While quantitative adoption figures for these approaches are limited, they represent essential shifts towards holistic and inclusive assessment practices.
Blockchain-based credentials offer secure, tamper-proof academic records [47]. Research indicates that up to 40% of job applicants misrepresent their educational qualifications, and only 53% of employers consistently verify credentials. With international student numbers expected to grow from 4.5 million in 2023 to 8 million by 2025, universities need reliable digital systems to support credential verification. Early adopters report that blockchain credentials enable verification that is 75% faster and reduce administrative costs by 60%; 92% of employers prefer instant digital validation. These efficiencies underscore the potential of blockchain to modernise assessment and credentialing processes.
By integrating quantitative evidence with established research, this overview illustrates how diverse assessment innovations, ranging from learning analytics to blockchain, can improve educational outcomes while aligning with learner-centred, inclusive, and forward-looking assessment principles (Table 3).
These three trends (the dominance of summative assessment, the limited growth of alternatives, and the contrast with international best practices) converge around a central insight: the urgent need to reform assessment systems in Saudi theory-based programs. Without a more substantial shift toward competency-based and formative strategies aligned with adaptive technologies, the goals of Vision 2030 may not be fully realised. By adopting these innovative approaches, the institution under study can enhance its assessment framework to align with best practices worldwide. More importantly, these methods not only teach students how to learn effectively but also equip them with the skills necessary to succeed in a fast-paced environment.

5. Recommendations for Improvement

Assessment is paramount in any academic setting because it evaluates student progress, gauges educational effectiveness, and sets the tone for the entire learning environment. As technology evolves rapidly and Saudi universities pursue Vision 2030’s emphasis on innovation and competitiveness, modernising assessment practices is essential. This section reviews varied assessment methods, advocates for faculty training, explores digital tools, and recommends aligning assessments with national outcomes that contribute to Vision 2030. Recent quantitative findings illuminate how these strategies can enhance quality and equity in higher education.

5.1. Diverse Assessment Methods

Traditional summative assessments alone cannot capture the full spectrum of student learning. A diversified assessment portfolio encourages deeper engagement and supports continuous learning. Four complementary approaches are recommended:
Portfolio Assessments: Portfolio assessment involves collecting students’ work over time (assignments, projects, reflections, and drafts) to document growth and facilitate critical reflection. Empirical evidence shows that portfolios are increasingly adopted worldwide. An Educause survey conducted across 55 countries found that more than half of undergraduates used e-portfolios at some point, and 10% used them in almost all of their courses. Employers also value this approach: over 80% say portfolios help them assess candidates’ competencies. Thus, integrating portfolio assessments into curricula at the institution under study would offer students a personalised record of their development while meeting industry expectations for verifiable skills [48,49].
Peer Reviews: Peer review engages students in evaluating each other’s work and providing constructive feedback, promoting critical thinking and communication skills [50]. This collaborative method aligns with Vision 2030’s call to foster teamwork and accountability. Studies have shown that participation in peer assessment enhances metacognitive skills and encourages students to take responsibility for their own learning.
Project-Based Tasks: Project-based assessments require students to tackle real-world problems, integrating knowledge from multiple disciplines [51]. Such tasks promote problem-solving, creativity, and collaboration, essential skills for 21st-century careers. The benefits of experiential learning are well documented. Although only about 30% of universities worldwide offer internships or experiential learning opportunities, participants who complete these programmes receive job offers at a rate of 68%, and 81% of students believe universities should offer company-led projects. Introducing more project-based assessments at the institution under study could bridge the gap between academic learning and industry needs [52].
Collaborative Learning and Group Work: Collaborative assessment strategies, such as group research projects, presentations, and case studies, prepare students for professional environments where teamwork is essential [53]. Evidence from workforce studies indicates that 64% of employers report skill gaps that reduce organisational efficiency, and only 46% believe universities adequately prepare graduates. Incorporating collaborative assessments helps build interpersonal and communication skills, supporting Vision 2030’s focus on workforce readiness [54].

5.2. Faculty Training Programmes

Effective implementation of varied assessment methods hinges on faculty expertise. Investment in professional development ensures educators can design, deliver, and evaluate innovative assessments [29]. Recent research highlights the need for training: a survey of 30,407 academics across 403 higher education institutions found that over 70% considered themselves to have intermediate digital competence levels, but 30.55% scored zero in at least one digital skill, and 70.51% scored 0 or 1 in at least one area. Only about 20.39% of faculty reported advanced proficiency in teaching and learning technologies, and 19.63% in assessment; meanwhile, 36.81% reported advanced competence in accessibility and inclusion, and 34.18% in digital continuing professional development [36]. These gaps reveal a pressing need for targeted training. Recommended programmes include the following:
Active Learning Workshops: Workshops should focus on integrating active learning and assessment techniques, such as debates, problem-solving exercises, and hands-on activities. Training faculty to design assessments that promote engagement fosters critical thinking and collaboration [55].
Digital Assessment Tool Training: Educators require proficiency in digital platforms, LMS, online grading systems, and multimedia tools to design creative assessments [56]. Training should focus on how to utilise interactive elements, such as simulations or virtual labs, to enhance student engagement.
Peer Collaboration and Sharing Best Practices: Encouraging faculty to share assessment experiences helps disseminate innovative practices across departments [57]. Regular meetings or workshops can build a culture of continuous improvement.
Orientation to International Standards: Exposure to global assessment frameworks broadens perspectives and encourages adoption of internationally recognised methods [58]. Faculty can learn from models in the United States, Europe, and Asia to align local practices with global benchmarks.

5.3. Leveraging Technology for Enhanced Assessments

Digital tools can streamline assessment processes, personalise learning, and provide immediate feedback. The pandemic accelerated the adoption of online platforms: by 2020, 98% of universities worldwide had transitioned to online classes, and 75% of U.S. undergraduates were enrolled in at least one distance education course. Hybrid learning participation increased by 36% between 2012 and 2019, then surged by a further 92% during the pandemic. Key strategies for harnessing technology include the following:
Learning Analytics and Predictive Models: Systems like Purdue University’s Course Signals collect data on student interactions to predict risk and provide timely interventions. At least 67% of students receiving a yellow or red warning improved their effort and grades, and 78% of those with a red warning showed improvement. Taking at least one Course Signals course raised retention by 21%, and first-year retention was 97% for students in Course Signals classes compared with 83% in traditional classes. These results underscore the value of data-driven, formative assessments [38].
Adaptive Learning Technologies: Adaptive platforms adjust content difficulty based on learner progress [39]. Smart Sparrow, used by over 700 institutions, enables instructors to personalise courses. CogBooks reports that its adaptive system reduces student dropout by 90% and increases success rates by 24%. Realise It claims that its modules reduce compliance training time by 40%, halve content-management effort, and increase learner adoption by 40%. Squirrel AI delivers personalised tutoring in more than 60,000 public schools across 12,000 cities. Integrating similar adaptive technologies at the institution under study would cater to individual learning needs.
Virtual and Augmented Reality (VR/AR): Immersive technologies enhance experiential learning and assessment [44]. In the United Kingdom, VR adoption in schools increased by 35% in 2024, and 93% of teachers reported that VR enhances teaching quality and student engagement. Simulation-based assessment was supported by 74% of teachers, and VR training was found to be four times faster and 52% cheaper than traditional methods. VR lessons increased engagement by 30%, and 59% of teachers reported that combining VR with human instruction yields the best outcomes. Such innovations could enable immersive assessments in disciplines such as engineering, medicine, and architecture.
E-Portfolios and LMS: E-portfolios help students track their learning over time and provide formative feedback. LMS platforms, such as Blackboard and Moodle, facilitate the administration of assessments and analytics. Surveys of Saudi universities indicate that 63% of students were satisfied with LMS tools, and 75% found course materials easy to understand with online support. At one medical college, 86.5% of students preferred Blackboard, with 48.5% using iPads and 37.6% using laptops. However, advanced features such as AI-driven adaptive tests and plagiarism detection remain underutilised. Investing in such tools could improve the fairness, accuracy, and efficiency of assessments.
Data Analytics for Assessment: Using analytics to identify patterns in student performance allows instructors to tailor support. For example, dashboards can flag students who struggle with specific competencies, enabling targeted interventions and efficient resource allocation.

5.4. Aligning Assessments with Learning Outcomes and Vision 2030 Goals

Assessments should be explicitly linked to learning outcomes and national priorities. Vision 2030 aims to cultivate a highly skilled, innovative workforce. Assessment strategies must therefore evaluate not just academic knowledge but also practical, multidisciplinary, and problem-solving skills.
Vision 2030 Alignment: The programme emphasises critical thinking, creativity, collaboration, and communication. Assessments should measure these competencies to ensure graduates can contribute to a knowledge-based economy.
Competency-Based Assessments: Rather than measuring time spent in class, competency-based assessments evaluate mastery of skills and allow flexible progression [43]. Western Governors University (WGU) exemplifies this model, enrolling over 180,000 students and boasting more than 400,000 alumni; it awards 5.2% of all U.S. bachelor’s degrees in education and has trained 2% of the nation’s registered nurses. Implementing similar frameworks would enable students at the institution under study to demonstrate practical skills aligned with workforce needs and Vision 2030 goals.
Interdisciplinary Assessments: Designing tasks that require knowledge from multiple fields encourages creativity and prepares students for complex challenges [46]. Cross-disciplinary projects can involve engineering students collaborating with business students to develop entrepreneurial solutions, aligning with Vision 2030’s focus on innovation and economic diversification.
Credentialing Innovations: To ensure the integrity of credentials, blockchain-based systems can securely record assessment outcomes. Globally, up to 40% of job applicants misrepresent academic qualifications, and only 53% of employers verify them. With international student numbers projected to increase from 4.5 million in 2023 to 8 million by 2025, robust digital credentials are essential. Early adopters report that blockchain verification is 75% faster, reduces administrative costs by 60%, and is preferred by 92% of employers. Incorporating blockchain technology would ensure transparent and tamper-proof records, thereby building trust among students and stakeholders [47].
By adopting diverse assessment methods, investing in targeted faculty training, leveraging advanced technologies, and aligning evaluations with Vision 2030 competencies, the institution under study can enhance student engagement, promote equitable learning, and ensure graduates are prepared for the demands of the modern workforce. Quantitative evidence from global and regional studies underscores the efficacy of these strategies. Through careful implementation, these recommendations will help the institution meet national goals and achieve international best practice standards.

6. Limitations and Future Research

Despite providing valuable insights, this narrative review has certain limitations. It relies on existing literature and reports, which means the available studies shape the findings and may not capture every on-the-ground development. Additionally, the focus on a single institution’s context could limit generalisability; patterns observed here may differ in other universities or regions. No formal quality appraisal of sources was undertaken (consistent with narrative review conventions), so there is a risk of bias in the evidence base. Building on this work, future studies could employ empirical methods to test and extend the findings. For instance, surveys and case studies across multiple Saudi universities could examine how widely alternative assessments are being adopted and identify discipline-specific challenges or successes. Longitudinal research might track the impact of implemented assessment reforms on student outcomes (e.g., critical thinking gains or job readiness of graduates) over time. Such studies would not only validate the current review’s conclusions but also inform policymakers of progress toward Vision 2030’s educational targets. By addressing these gaps, future research can deepen our understanding of effective assessment transformation and guide continuous improvement on a broader scale.

7. Conclusions

This review reveals that assessment practices in Saudi theory-based programs remain heavily anchored in summative evaluations such as final exams and quizzes. While these instruments measure foundational knowledge, they encourage rote learning and fail to cultivate critical thinking, problem-solving, and creativity. The dominance of summative assessments is partly due to institutional inertia, limited resources, and faculty comfort with established methods. Although alternative assessments (projects, portfolios, oral presentations, and peer evaluation) are slowly gaining acceptance, their uptake is constrained by policy frameworks and a lack of staff training. This mismatch between current practices and the competencies needed for Vision 2030 is significant; the national agenda emphasises innovation, digital literacy, and higher-order skills.
Progress is evident in the growing use of digital tools. Surveys show that a majority of students appreciate learning management systems, with 63% satisfied with LMS tools and 75% finding online materials easy to understand. However, advanced technologies such as learning analytics, adaptive testing, and AI-powered feedback, common in international contexts, are scarcely implemented. International benchmarks also highlight stronger emphasis on formative feedback, collaborative assessment design, and inclusive practices. Saudi programs typically provide summative feedback, and faculty development in assessment literacy is optional. Cultural factors—collectivism and gender segregation—further shape assessment practices, making peer and self-assessment less common.
To align assessment practices with Vision 2030 and global standards, several actions are necessary. First, diversify assessment portfolios by incorporating project-based tasks, portfolios, peer and self-assessment, and experiential learning to encourage deep engagement and real-world application. Second, leverage digital technologies: implement learning analytics to monitor student progress, adopt adaptive learning systems to personalise assessments, and explore virtual/augmented reality for immersive evaluation. These tools provide timely feedback and support continuous improvement. Third, invest in comprehensive faculty development programs. Training should cover active learning strategies, digital assessment tools, and international best practices, ensuring assessments align with learning outcomes and competencies. Fourth, align assessments with the Saudi Qualifications Framework and Vision 2030 by emphasising critical thinking, collaboration, and digital literacy. Additionally, embed industry partnerships and internships to integrate authentic assessments and strengthen employability. Finally, prioritise inclusivity by adopting Universal Design for Learning principles and exploring secure digital credentials (e.g., blockchain) to enhance transparency. By implementing these recommendations, theory-based programs can transform assessment practices from recall-focused exercises to resilient, student-centred strategies that prepare graduates for a knowledge-driven future in line with Vision 2030.

Funding

The author extends their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2025/01/37145).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and materials used in this study are included in the manuscript. Additional information can be made available upon request.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

LMS: Learning Management System
AI: Artificial Intelligence
SAQF: Saudi Qualifications Framework
VR/AR: Virtual Reality/Augmented Reality
HOCS: Higher-Order Cognitive Skills
UDL: Universal Design for Learning
MIT: Massachusetts Institute of Technology

References

  1. Simper, N.; Mårtensson, K.; Berry, A.; Maynard, N. Assessment cultures in higher education: Reducing barriers and enabling change. Assess. Eval. High. Educ. 2021, 47, 1016–1029. [Google Scholar] [CrossRef]
  2. Shanta, S.; Wells, J.G. T/E design based learning: Assessing student critical thinking and problem solving abilities. Int. J. Technol. Des. Educ. 2020, 32, 267–285. [Google Scholar] [CrossRef]
  3. Qu, Y.; Chen, S.; Cao, S. Examination in Accordance with Aptitude: Selection and Optimization of Curriculum Assessment Methods in Higher Education Adapted to the Teacher–Student Game Behaviors. Sustainability 2023, 15, 14121. [Google Scholar] [CrossRef]
  4. Harris, L.R.; Adie, L.; Wyatt-Smith, C. Learning Progression–Based Assessments: A Systematic review of student and teacher uses. Rev. Educ. Res. 2022, 92, 996–1040. [Google Scholar] [CrossRef]
  5. Matsumoto-Royo, K.; Ramírez-Montoya, M.S. Core practices in practice-based teacher education: A systematic literature review of its teaching and assessment process. Stud. Educ. Eval. 2021, 70, 101047. [Google Scholar] [CrossRef]
  6. Yang, C.; Luo, L.; Vadillo, M.A.; Yu, R.; Shanks, D.R. Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychol. Bull. 2021, 147, 399–435. [Google Scholar] [CrossRef] [PubMed]
  7. Granberg, C.; Palm, T.; Palmberg, B. A case study of a formative assessment practice and the effects on students’ self-regulated learning. Stud. Educ. Eval. 2020, 68, 100955. [Google Scholar] [CrossRef]
  8. Allmnakrah, A.; Evers, C. The need for a fundamental shift in the Saudi education system: Implementing the Saudi Arabian economic vision 2030. Res. Educ. 2019, 106, 22–40. [Google Scholar] [CrossRef]
  9. Aldosari, M.S. The role of Saudi women in advancing environmental sustainability: A case study of Riyadh, Saudi Arabia. Int. J. Clim. Change Strat. Manag. 2024, 17, 127–146. [Google Scholar] [CrossRef]
  10. Bataeineh, M.; Aga, O. Integrating sustainability into higher education curricula: Saudi Vision 2030. Emerald Open Res. 2022, 4, 19. [Google Scholar] [CrossRef]
  11. Zhai, X. Practices and Theories: How can machine learning assist in innovative assessment practices in science education. J. Sci. Educ. Technol. 2021, 30, 139–149. [Google Scholar] [CrossRef]
  12. Sale, D. Assessing Twenty-First Century competencies. In Cognitive Science and Technology; Springer: Singapore, 2020; pp. 263–289. [Google Scholar] [CrossRef]
  13. ALSharari, M.R.M. Evaluating universities’ readiness in qualifying graduates to achieve Saudi Vision 2030: A Constructive Analysis of Baldrige Scale. Educ. Urban Soc. 2019, 52, 800–842. [Google Scholar] [CrossRef]
  14. Sudirta, I.G.; Widiana, I.W.; Setemen, K.; Sukerti, N.W.; Widiartini, N.K.; Santiyadnya, N. The Impact of Blended Learning Assisted with Self-Assessment toward Learner Autonomy and Creative Thinking Skills. Int. J. Emerg. Technol. Learn. 2022, 17, 163–180. [Google Scholar] [CrossRef]
  15. Nadolski, R.J.; Hummel, H.G.K.; Rusman, E.; Ackermans, K. Rubric formats for the formative assessment of oral presentation skills acquisition in secondary education. Educ. Technol. Res. Dev. 2021, 69, 2663–2682. [Google Scholar] [CrossRef]
  16. Shen, B.; Bai, B.; Xue, W. The effects of peer assessment on learner autonomy: An empirical study in a Chinese college English writing class. Stud. Educ. Eval. 2019, 64, 100821. [Google Scholar] [CrossRef]
  17. Meccawy, Z.; Meccawy, M.; Alsobhi, A. Assessment in ‘survival mode’: Student and faculty perceptions of online assessment practices in HE during COVID-19 pandemic. Int. J. Educ. Int. 2021, 17, 16. [Google Scholar] [CrossRef]
  18. Westbroek, H.B.; Van Rens, L.; Van Den Berg, E.; Janssen, F. A practical approach to assessment for learning and differentiated instruction. Int. J. Sci. Educ. 2020, 42, 955–976. [Google Scholar] [CrossRef]
  19. Pereira, D.; Cadime, I.; Brown, G.; Flores, M.A. How do undergraduates perceive the use of assessment? A study in higher education. Eur. J. Teach. Educ. 2021, 12, 1–17. [Google Scholar] [CrossRef]
  20. Jones, E.; Priestley, M.; Brewster, L.; Wilbraham, S.J.; Hughes, G.; Spanner, L. Student wellbeing and assessment in higher education: The balancing act. Assess. Eval. High. Educ. 2020, 46, 438–450. [Google Scholar] [CrossRef]
  21. Heissenberger-Lehofer, K.; Krammer, G. Internship integrated practitioner research projects foster student teachers’ professional learning and research orientation: A mixed-methods study in initial teacher education. Eur. J. Teach. Educ. 2021, 46, 456–475. [Google Scholar] [CrossRef]
  22. Almulla, M.A.; Al-Rahmi, W.M. Integrated Social Cognitive Theory with Learning Input Factors: The Effects of Problem-Solving Skills and Critical Thinking Skills on Learning Performance Sustainability. Sustainability 2023, 15, 3978. [Google Scholar] [CrossRef]
  23. Alsmadi, M.K.; Al-Marashdeh, I.; Alzaqebah, M.; Jaradat, G.; Alghamdi, F.A.; Mohammad, R.M.A.; Alshabanah, M.; Alrajhi, D.; Alkhaldi, H.; Aldhafferi, N.; et al. Digitalization of learning in Saudi Arabia during the COVID-19 outbreak: A survey. Inform. Med. Unlocked 2021, 25, 100632. [Google Scholar] [CrossRef]
  24. Galil, T.A.E. E-Learning Statistics 2022: What the Data Show. Al-Fanar Media. 2022. Available online: https://www.al-fanarmedia.org/2022/10/e-learning-statistics-2022-what-the-data-show/#:~:text=%2A%20The%20e,climb%20to%C2%A0%24400%20billion%20by%202026 (accessed on 12 May 2025).
  25. Salih, K.M.; Elnour, S.; Mohammed, N.; Alkhushayl, A.M.; Alghamdi, A.A.; Eljack, I.A.; Al-Faifi, J.; Ibrahim, M.E. Climate of online e-Learning during COVID-19 pandemic in a Saudi medical school: Students’ perspective. J. Med. Educ. Curric. Dev. 2023, 10, 23821205231173492. [Google Scholar] [CrossRef] [PubMed]
  26. Alghamdi, J.; Holland, C. A comparative analysis of policies, strategies and programmes for information and communication technology integration in education in the Kingdom of Saudi Arabia and the Republic of Ireland. Educ. Inf. Technol. 2020, 25, 4721–4745. [Google Scholar] [CrossRef]
  27. Hammond, K.; Brown, S. Transitioning to learning outcomes at the coalface: An academic’s quantitative evaluation at the course level. Stud. Educ. Eval. 2020, 68, 100961. [Google Scholar] [CrossRef]
  28. Chan, C.K.Y.; Luo, J. A four-dimensional conceptual framework for student assessment literacy in holistic competency development. Assess. Eval. High. Educ. 2020, 46, 451–466. [Google Scholar] [CrossRef]
  29. Gordon, S.; Smith, E. Who are faculty assessment leaders? Assess. Eval. High. Educ. 2021, 47, 928–941. [Google Scholar] [CrossRef]
  30. Simonsmeier, B.A.; Peiffer, H.; Flaig, M.; Schneider, M. Peer feedback improves students’ academic Self-Concept in higher education. Res. High. Educ. 2020, 61, 706–724. [Google Scholar] [CrossRef]
  31. Willis, J.; Gibson, A.; Kelly, N.; Spina, N.; Azordegan, J.; Crosswell, L. Towards faster feedback in higher education through digitally mediated dialogic loops. Australas. J. Educ. Technol. 2020, 37, 22–37. [Google Scholar] [CrossRef]
  32. Hamann, K.; Wilson, R.L.; Wilson, B.M.; Pilotti, M.A. Causal attribution habits and cultural orientation as contributing factors to students’ Self-Efficacy: A comparison between female students in the United States and Saudi Arabia. In Proceedings of the 8th International Conference on Higher Education Advances (HEAd’22), Valencia, Spain, 14–17 June 2022. [Google Scholar] [CrossRef]
  33. Mohamed, A.T.; Alqurashi, M.A.; Alshmmry, S. Universal design for learning principles and students with learning disabilities: An application with general education teachers in Saudi Arabia. J. Multicult. Educ. 2022, 16, 337–349. [Google Scholar] [CrossRef]
  34. Kerkhoff, S.N.; Cloud, M.E. Equipping teachers with globally competent practices: A mixed methods study on integrating global competence and teacher education. Int. J. Educ. Res. 2020, 103, 101629. [Google Scholar] [CrossRef]
  35. Brinkley-Etzkorn, K.E. Learning to teach online: Measuring the influence of faculty development training on teaching effectiveness through a TPACK lens. Internet High. Educ. 2018, 38, 28–35. [Google Scholar] [CrossRef]
  36. Santos, A.I.D.; Chinkes, E.; Carvalho, M.A.G.; Solórzano, C.M.V.; Marroni, L.S. The digital competence of academics in higher education: Is the glass half empty or half full? Int. J. Educ. Technol. High. Educ. 2023, 20, 9. [Google Scholar] [CrossRef]
  37. Law, N.; Liang, L. A multilevel framework and method for learning analytics integrated learning design. J. Learn. Anal. 2020, 7, 98–117. [Google Scholar] [CrossRef]
  38. Chen, F.; Cui, Y. Utilizing student time series behaviour in learning management systems for early prediction of course performance. J. Learn. Anal. 2020, 7, 1–17. [Google Scholar] [CrossRef]
  39. Komleva, N.V.; Vilyavin, D.A. Digital platform for creating personalized adaptive online courses. Open Educ. 2020, 24, 65–72. [Google Scholar] [CrossRef]
  40. Abrache, M.; Bendou, A.; Cherkaoui, C. A clustering and combinatorial optimization-based approach for learner matching in the context of peer assessment. J. Educ. Comput. Res. 2021, 59, 1135–1168. [Google Scholar] [CrossRef]
  41. Alsubhi, M.A.; Sahari, N.; Wook, T.S.M.T. A conceptual engagement framework for gamified E-Learning platform activities. Int. J. Emerg. Technol. Learn. 2020, 15, 4. [Google Scholar] [CrossRef]
  42. Campbell, C.; Tran, T.L.N. Using an Implementation Trial of an ePortfolio System to Promote Student Learning through Self-Reflection: Leveraging the Success. Educ. Sci. 2021, 11, 263. [Google Scholar] [CrossRef]
  43. Vargas, H.; Heradio, R.; Farias, G.; Lei, Z.; De La Torre, L. A pragmatic framework for assessing learning outcomes in Competency-Based courses. IEEE Trans. Educ. 2024, 67, 224–233. [Google Scholar] [CrossRef]
  44. Chi, P.G.; Idris, M.Z. Employing Virtual Reality (VR) Technology with Experiential Learning Perspective to Enhance Students’ Learning Experience. Int. J. Acad. Res. Bus. Soc. Sci. 2021, 11, 650–655. [Google Scholar] [CrossRef]
  45. Taranto, D.; Buchanan, M.T. Sustaining Lifelong Learning: A Self-Regulated Learning (SRL) Approach. Discourse Commun. Sustain. Educ. 2020, 11, 5–15. [Google Scholar] [CrossRef]
  46. Yang, C.; Hsu, T. Integrating Design Thinking into a Packaging Design Course to Improve Students’ Creative Self-Efficacy and Flow Experience. Sustainability 2020, 12, 5929. [Google Scholar] [CrossRef]
  47. Rustemi, A.; Dalipi, F.; Atanasovski, V.; Risteski, A. A Systematic Literature Review on Blockchain-Based Systems for Academic Certificate Verification. IEEE Access 2023, 11, 64679–64696. [Google Scholar] [CrossRef]
  48. Sulistyo, T.; Eltris, K.P.N.; Mafulah, S.; Budianto, S.; Saiful, S.; Heriyawati, D.F. Portfolio assessment: Learning outcomes and students’ attitudes. Stud. Engl. Lang. Educ. 2020, 7, 141–153. [Google Scholar] [CrossRef]
  49. Yang, M.; Wang, T.; Lim, C.P. E-portfolios as digital assessment tools in higher education. In Learning, Design, and Technology; Springer: Cham, Switzerland, 2023; pp. 2213–2235. [Google Scholar] [CrossRef]
  50. Indriasari, T.D.; Luxton-Reilly, A.; Denny, P. A review of peer code review in higher education. ACM Trans. Comput. Educ. 2020, 20, 1–25. [Google Scholar] [CrossRef]
  51. Nguyen, H.T. Project-Based Assessment in Teaching Intercultural Communication Competence for foreign language students in Higher Education: A case study. Eur. J. Educ. Res. 2021, 10, 933–944. [Google Scholar] [CrossRef]
  52. Sathish, S.; Koh, S.N.A.; Boey, C.K. Impact of technology-enabled project-based assessments on learner outcomes in higher education. Int. J. Mob. Learn. Organ. 2023, 17, 131. [Google Scholar] [CrossRef]
  53. Meijer, H.; Hoekstra, R.; Brouwer, J.; Strijbos, J. Unfolding collaborative learning assessment literacy: A reflection on current assessment methods in higher education. Assess. Eval. High. Educ. 2020, 45, 1222–1240. [Google Scholar] [CrossRef]
  54. Mendo-Lázaro, S.; León-Del-Barco, B.; Polo-Del-Río, M.; López-Ramos, V.M. The impact of cooperative learning on university students’ academic goals. Front. Psychol. 2022, 12, 787210. [Google Scholar] [CrossRef]
  55. Zenni, E.A.; Turner, T.L. Planning and presenting workshops that work: A faculty development workshop. Mededportal 2021, 17, 11158. [Google Scholar] [CrossRef] [PubMed]
  56. Nebot, M.Á.L.; Cosentino, V.V.; Esteve-Mon, F.M.; Segura, J.A. Diagnostic and educational self-assessment of the digital competence of university teachers. Nord. J. Digit. Lit. 2021, 16, 115–131. [Google Scholar] [CrossRef]
  57. Podsiad, M.; Havard, B. Faculty acceptance of the peer assessment collaboration evaluation tool: A quantitative study. Educ. Technol. Res. Dev. 2020, 68, 1381–1407. [Google Scholar] [CrossRef]
  58. Nordlöf, C.; Norström, P.; Höst, G.; Hallström, J. Towards a three-part heuristic framework for technology education. Int. J. Technol. Des. Educ. 2021, 32, 1583–1604. [Google Scholar] [CrossRef]
Table 1. Comparison of traditional assessments and emerging alternative methods in higher education.

| Aspect | Traditional Assessments | Emerging Alternative Methods |
|---|---|---|
| Focus | Knowledge recall, memorisation | Critical thinking, practical application |
| Examples | Written exams, quizzes | Projects, portfolios, oral presentations |
| Cognitive Skills Measured | Basic understanding | Higher-order thinking (analysis, evaluation) |
| Student Engagement | Passive | Active |
Table 2. Comparison of Saudi practices and international standards in educational assessment and strategies.

| Aspect | Saudi Practices | International Standards/Strategies |
|---|---|---|
| Assessment Type | Predominantly summative (e.g., final exams, standardised tests) | Balanced use of summative and formative assessments (e.g., quizzes, self-assessments, peer assessments) |
| Technology Integration | Basic use of LMS for online exams and assignments | Advanced use of AI, adaptive testing, and analytics for personalised feedback and learning |
| Alignment with Outcomes | Focused on national qualification frameworks (e.g., SAQF) | Integration of global competencies (e.g., teamwork, intercultural communication, digital literacy) |
| Faculty Role | Individual design and administration with optional training | Collaborative design guided by institutional policies, with mandatory professional development |
| Feedback Mechanisms | Summative feedback (grades, brief comments) | Formative feedback (timely, specific, constructive, and dialogic) |
| Authentic Assessments | Limited implementation (e.g., case studies, projects) | Integral to curricula, simulating real-world scenarios (e.g., internships, problem-solving tasks) |
| Inclusivity and Equity | Initial steps toward accommodating students with special needs | Established policies and practices (e.g., Universal Design for Learning, UDL) |
| Cultural Considerations | Influenced by a collectivist culture and gender segregation | Encourages individual accountability and critical thinking in diverse, inclusive settings |
Table 3. Summary of innovative assessment approaches for adaptation.

| Approach | Description | Potential Benefits for the Institution Under Study |
|---|---|---|
| Learning Analytics | Data-driven analysis to identify at-risk students and provide tailored support. | Enhances real-time feedback and supports continuous assessment. |
| Adaptive Learning Technologies | Platforms that adjust assessment content based on individual progress. | Promotes inclusivity by catering to diverse learning paces. |
| Peer and Self-Assessment | Involves students in evaluating their own work and that of their peers. | Encourages active participation and critical thinking. |
| Game-Based Assessment | Uses gamified platforms to evaluate problem-solving and creativity. | Increases engagement and develops 21st-century skills. |
| E-Portfolios | Digital collections of students' learning artefacts and reflections. | Provides a holistic view of achievements and prepares students for professional portfolios. |
| Competency-Based Assessment | Focuses on mastery of skills, allowing flexible progression. | Reduces pressure from rigid timelines and ensures thorough understanding. |
| VR/AR Technologies | Employs virtual environments to simulate practical scenarios. | Enhances assessment of practical skills in controlled settings. |
| Sustainable Assessment | Develops lifelong assessment skills beyond formal education. | Prepares students for continuous self-improvement and adaptability. |
| Cross-Disciplinary Assessment | Combines knowledge from multiple fields to address complex problems. | Encourages innovation and prepares students for interdisciplinary challenges. |
| Blockchain Technology | Provides a secure and transparent system for recording assessment outcomes. | Ensures data integrity and fosters trust in the assessment process. |
Share and Cite

MDPI and ACS Style

Aldosari, M.S. From Recall to Resilience: Reforming Assessment Practices in Saudi Theory-Based Higher Education to Advance Vision 2030. Sustainability 2025, 17, 9415. https://doi.org/10.3390/su17219415