Article

Impact of Artificial Intelligence-Assisted Assessment and Traditional Assessment on Web Design and Development in Computing Education

by
Christian Basil Omeh
Department of Science and Technology Education, University of South Africa, Pretoria 0003, South Africa
Educ. Sci. 2026, 16(4), 501; https://doi.org/10.3390/educsci16040501
Submission received: 19 November 2025 / Revised: 21 January 2026 / Accepted: 21 January 2026 / Published: 24 March 2026

Abstract

The educational process of developing web design competence remains a persistent challenge for many students and educators, particularly in developing countries, where conventional teaching methodologies and assessment models often fall short in promoting higher-order thinking and problem-solving. In this study, we respond to the call for innovative assessment approaches by examining the impact of assessment model on students' performance and cognitive load in a web design and development course, comparing the AI-assisted assessment model (AAAM) with the traditional assessment model (TAM). We employed a mixed-methods research approach, combining a quasi-experimental, non-equivalent pretest–posttest control group design with a qualitative component, involving 63 undergraduate students enrolled in CRE 625. The intervention lasted approximately 10 weeks and focused on web design and development across two universities in a developing country. Consistent with quasi-experimental principles, students were assigned to treatment groups based on pre-existing institutional class structures, so allocation followed institutional criteria rather than randomization. Two validated instruments were used to assess students' web design and development competence (WDDC) and cognitive load (CL), and the data were analyzed using ANCOVA to evaluate performance gains and the interaction effect with gender.

1. Introduction

Computing education continues to evolve to meet industry skill needs. Educators are ensuring that students not only acquire technical skills but also develop autonomy, resilience, and reflective learning abilities through effective instructional and assessment processes. According to Gümüş et al. (2024), digital literacy and the integration of Artificial Intelligence (AI) in classroom assessment practices are becoming increasingly prevalent. In higher education, assessment is typically conducted using traditional approaches, which have long been the dominant practice across different levels of education (Pereira et al., 2016). Traditional assessments, such as written exams and checklists, often rely on static tests and manual grading, which struggle to capture the iterative, creative, and problem-solving nature of tasks (Tahvili et al., 2025). This has prompted calls for the integration of AI into educational assessment, particularly in higher education. Khan et al. (2024) posit that AI-assisted assessment refers to the use of artificial intelligence technologies to support, automate, or enhance the evaluation of student learning, performance, and skill development. In computing education, especially in web design and development courses, Knipp and Winiwarter (2025) assert that AI-assisted assessment systems can analyze code, design artifacts, or learner interactions to provide real-time feedback, adaptive prompts, and personalized guidance during the assessment process (Basil, 2022). They further argue that AI-assisted assessment offers potential solutions to persistent challenges such as feedback latency, inconsistent grading, and limited formative support.
This study examines the comparative impact of AI-assisted assessment and traditional assessment methods on student performance and cognitive load in a web design and development course. Specifically, it explores how each assessment approach influences skill acquisition, feedback responsiveness, and learner autonomy in web design and development courses. Previous studies have examined the adoption of traditional assessments in fields such as computer networking, programming, and biology (Csapó et al., 2011; Ala-Mutka, 2005). Traditional assessment approaches are often criticized for being time-consuming, subjective, and poorly aligned with the iterative nature of web design and development. Wickramasinghe (2024) emphasizes that integrating AI technology into assessment, particularly in web design and development courses, requires creative, technical, and problem-solving skills to accomplish web design tasks. Trajkovski and Hayes (2025) further posit that AI-assisted assessments in higher education enhance students' ability to receive personalized feedback. However, there remains a paucity of research examining how AI-assisted assessment compares to traditional methods in terms of web design and development skill mastery and cognitive load in the learning process. Assessment in web design and development refers to the structured evaluation of learners' ability to design, build, and refine websites or web applications (Antonenko & Thompson, 2011). It encompasses both technical proficiency (e.g., HTML, CSS, JavaScript, responsive design) and creative competencies (e.g., user interface aesthetics, accessibility, and usability). Effective assessment must capture not only the final product but also the development process, including problem-solving strategies and debugging efficiency. In most cases, traditional assessment focuses on grading final projects or code submissions without providing real-time feedback. While traditional assessments are reliable for standardization, they often lack personalization, timeliness, and insight into the learner's creative process (Aloisi, 2023). Thus, this study is particularly relevant in the context of developing countries, especially in Africa.
Furthermore, this study is grounded in Cognitive Load Theory (CLT) and Constructivist Learning Theory, both of which highlight the importance of instructional design, assessment, and learner-centered environments. CLT emphasizes that learners have limited working memory; traditional assessments often overload this capacity by requiring students to recall syntax, design principles, and debugging strategies without timely feedback (Arab et al., 2025). In contrast, AI-assisted assessment can reduce extraneous cognitive load by offering real-time, targeted feedback, allowing learners to focus on meaningful problem-solving and creativity (Halkiopoulos & Gkintoni, 2024). This study addresses critical gaps in understanding how assessment methods influence skill acquisition, cognitive load, and feedback responsiveness in web development. Cognitive overload occurs when the mental demands of a learning task exceed a student's working memory capacity, making it difficult to process, retain, or apply new information (Sari et al., 2024). In web design and development education, this is especially relevant due to the complex, multi-layered nature of tasks, where students must simultaneously manage syntax, logic, design principles, debugging, and user experience considerations. This study contends that traditional assessment can increase extraneous cognitive load, as students must guess what went wrong or wait for clarification, which can hinder learning and reduce creativity. AI-assisted assessment, on the other hand, helps manage cognitive load by providing immediate, targeted feedback during the learning process. This reduces unnecessary mental strain and supports germane cognitive load—the effort devoted to meaningful learning and schema construction. The implications of this study extend to curriculum design, instructional strategies, and educational technology adoption, informing how educators can balance AI technology with pedagogical intent to foster deeper learning, reduce cognitive overload, and enhance employability in web design and development.

2. Review of Related Literature

2.1. Reframing the Education Process of Web Design and Development

The teaching and learning of Web Design and Development (WDD) has undergone substantial transformation over the past decades, reflecting its dynamic, interdisciplinary nature and increasing relevance in digital economies. WDD education integrates front-end technologies, user experience design, and coding logic, demanding both creative and technical competencies (Wuttikamonchai et al., 2024). Historically, instructional processes relied on lecture-based delivery and static tutorials, particularly in resource-constrained contexts where access to modern tools was limited (Powell et al., 2024; Abarghouie et al., 2020). While these methods introduced foundational concepts such as HTML, CSS, and JavaScript, they often failed to cultivate essential skills such as iterative design thinking, debugging, and responsive layout development (Chandrasekara et al., 2025). The adoption of hands-on platforms such as CodePen, GitHub, and Figma has enhanced practical engagement and skill acquisition. These benefits are further amplified through active learning strategies, especially Project-Based Learning (PBL), which promotes autonomy, creativity, and real-world application (Omelianenko & Artyukhova, 2024; Omeh et al., 2025). Reframing WDD education requires aligning pedagogy with evolving industry standards and learner-centered approaches.

2.2. Assessment and Emerging Technologies

Project-Based Learning (PBL) fosters iterative design, critical thinking, and creative problem-solving, positioning learners as active creators in web design and development education (Nwangwu et al., 2022; Garcia, 2025). Recent studies emphasize the value of integrating emerging technologies such as AI, automated code reviewers, gamified platforms, and collaborative design tools into PBL environments to enhance engagement and skill acquisition (Ruiz Viruel et al., 2025; Kong et al., 2024; Omeh & Ayanwale, 2025). Platforms like CodePen, GitHub Classroom, and Figma enable real-time collaboration and visual prototyping, while AI-driven assessment tools offer adaptive feedback and syntax validation during coding tasks (Kolade et al., 2024). These technologies help reduce cognitive overload, personalize learning pathways, and support formative assessment in complex, creative tasks. Moreover, AI-enhanced environments promote peer-to-peer review and shared responsibility in debugging and interface refinement. Despite these innovations, empirical research on AI-supported PBL in web design education—particularly within African higher education contexts—remains limited. This gap underscores the need for the present study to explore how emerging assessment technologies reshape learning outcomes, autonomy, and instructional equity in computing education.

2.3. Cognitive Load and Assessment

Cognitive load, comprising intrinsic, extraneous, and germane dimensions, is a critical factor in web design and development education, influencing learners' ability to process information, solve problems, and retain skills (Gkintoni et al., 2025). Prior studies suggest that cognitive load in web development is shaped by task complexity, interface design, and learners' prior coding experience (Faudzi et al., 2024). Novice students often experience overload when managing syntax, layout logic, and debugging simultaneously, which can hinder schema construction and reduce performance (Suryani et al., 2024). To ensure fair cognitive load measurement, researchers recommend stratifying learners by experience level or using pre-task diagnostics (Martínez-Molés et al., 2024; Puppala, 2025). Assessment in education refers to the systematic process of gathering, interpreting, and using evidence of student learning to improve teaching and learning. For example, Brown (2022) defines assessment as the set of practices used to evaluate and ensure the validity of student achievement, while Levy-Feldman (2025) emphasizes assessment as an integral component of the teaching–learning process that influences decisions affecting students, teachers, and policymakers. AI-assisted assessment tools can mitigate extraneous load by offering real-time feedback, adaptive scaffolding, and syntax validation (Khine, 2024). Project-Based Learning (PBL) further supports germane load by promoting meaningful engagement and iterative design, allowing learners to focus on core principles while gradually mastering web design tasks. H01: There is no significant difference in posttest cognitive load total scores between students who adopted AI-assisted assessment and those who adopted traditional assessment.

2.4. Academic Achievement in Web Design and Development

Academic achievement in web design and development encompasses mastery of front-end technologies, creative interface design, and the ability to apply coding skills in real-world contexts (Tezer & Çimşir, 2018). Empirical studies show that Project-Based Learning (PBL) significantly improves these outcomes by fostering active engagement, iterative problem-solving, and knowledge transfer (Chen & Yang, 2019). Learners exposed to PBL demonstrate stronger retention of HTML, CSS, and JavaScript concepts and greater proficiency in responsive design and debugging. The integration of AI-assisted assessment further amplifies these benefits by offering personalized scaffolding, real-time syntax validation, and adaptive feedback tailored to individual learning paths (López-Pimentel et al., 2021; Yilmaz & Yilmaz, 2023). These tools help reduce cognitive overload and accelerate skill acquisition. However, some studies report inconsistent effects on academic performance due to infrastructure gaps, limited access to intelligent platforms, and misalignment between AI tools and pedagogical goals (Omeh et al., 2024). Effective implementation requires balancing automation with human facilitation, ensuring that AI enhances rather than replaces learner autonomy and instructional intent. H02: There is no significant difference in web design and development academic achievement scores between students in AI-assisted assessment and those in traditional assessment.

2.5. Gender, Cognitive Load, and Web Design and Development

Studies examining cognitive load often highlight the moderating role of gender and group dynamics. Arisi (2011) found significant interaction effects between cognitive style and gender on posttest achievement, suggesting that gender differences may influence how learners process instructional material. Similarly, Gierczyk et al. (2025) reported that group assignment in STEM workshops shaped cognitive load outcomes, with social interaction amplifying differences across genders. These findings indicate that both group context and gender jointly affect posttest cognitive load scores, underscoring the importance of considering demographic and situational variables when designing instructional interventions. Cognitive load, defined as the mental effort required to process and learn new information, is a critical factor influencing performance and retention in computing education (Sweller, 2011). In web design and development, learners often face high intrinsic load due to complex syntax, interface logic, and debugging tasks. AI-assisted assessment environments can help manage this load by offering real-time feedback, syntax validation, and adaptive scaffolding, which reduce extraneous cognitive demands and support schema construction (Khine, 2024; Puppala, 2025). These systems enhance germane load by allowing learners to focus on meaningful design and problem-solving rather than error correction. However, concerns persist that excessive automation may inhibit deeper cognitive engagement or reduce learners’ ability to self-regulate (Gkintoni et al., 2025; Umakalu & Omeh, 2025). To address this, the present study embeds AI-assisted assessment within a constructivist, project-based learning framework, ensuring that technology supports rather than replaces cognitive effort. This approach aims to balance automation with autonomy, optimizing cognitive load for sustained learning in web development education. H03: There is no interaction effect between group and gender on posttest cognitive load total scores.

2.6. Theoretical Framework

Cognitive Load Theory (CLT), proposed by Sweller (2011), posits that instructional design should minimize extraneous cognitive load to optimize working memory and facilitate schema acquisition. Constructivist Learning Theory, advanced by Piaget (1936) and later expanded by Vygotsky and Cole (2018), emphasizes active, social, and contextual knowledge construction through learner engagement. In the context of AI-assisted assessment versus traditional assessment in web design and development education, CLT explains how AI tools reduce extraneous load by automating feedback and scaffolding complex tasks, thereby allowing learners to focus on essential design principles. However, CLT is limited in addressing the socio-emotional and collaborative dimensions of learning. Constructivist theory complements this by highlighting how AI-assisted environments foster peer interaction, self-regulation, and authentic problem-solving, which are central to web development. While CLT optimizes cognitive efficiency, constructivism ensures deeper engagement and knowledge transfer through meaningful, socially mediated experiences. Together, these perspectives demonstrate that AI-enhanced assessment is both cognitively and pedagogically robust, balancing efficiency with authentic learner-centered engagement.

2.7. Hypotheses

H01: 
There is no significant difference in posttest cognitive load total scores between students who adopted AI-assisted assessment and those who adopted traditional assessment.
H02: 
There is no significant difference in web design and development academic achievement scores between students in AI-assisted assessment and those in traditional assessment.
H03: 
There is no interaction effect between group and gender on posttest cognitive load total scores.

3. Methodology

3.1. Study Design and Population of the Study

This study employed a mixed-methods research design, integrating both quantitative and qualitative strategies (Creswell & Inoue, 2025). On the quantitative side, a non-equivalent pretest–posttest control group design was utilized to address the study objectives. This framework was well suited for analyzing the impact of AI-supported assessment compared with conventional assessment on students' cognitive load and academic achievement in a web design and development course. It enabled comparison between two cohorts: an experimental group exposed to the AI-assisted assessment model and a control group taught with traditional assessment methods, while statistically adjusting for prior experience as a covariate. For the qualitative component, participants responded to open-ended questions, which were subsequently analyzed to provide deeper insight into the quantitative findings.

Consistent with the non-equivalent group design, intact classes were assigned to either the treatment or control condition based on pre-existing institutional structures, without disrupting the schools' academic schedules. Ethical clearance was obtained from the Human and Ethics Committee of the University (Ref: UNNR/VTES/PG/2025/702, dated 2 January 2025). In addition, all participants signed informed consent forms and were assured of their right to withdraw from the study at any stage.

The sample consisted of 63 undergraduates enrolled in Computer and Robotics Education (CRE 625), a course on web design and development offered in Nigerian universities. University A contributed 31 students, while University B contributed 32. Of the total participants, 28 were male (44.4%) and 35 were female (55.6%), reflecting a slightly higher proportion of female students. Age distribution showed that most participants were between 21 and 23 years (62.2%), followed by those aged 24 and above (19.5%), and those aged 18–20 (18.3%), indicating that the majority were traditional-age undergraduates. In terms of prior web design experience, 30 students (47.6%) reported previous exposure, whereas 33 students (52.4%) had no prior experience, highlighting diversity in baseline skills.

Figure 1 outlines a structured learning pathway for web design and development, integrating both AI-assisted and traditional assessment methods. It begins with an orientation session introducing learners to the course. Two pretests follow: one measuring cognitive load and another assessing baseline web design and development achievement. Learners are then divided into two groups—AI-Assisted Assessment and Traditional Assessment—each progressing through identical learning stages. These stages include creating structured web pages using HTML, styling with CSS, adding interactivity via JavaScript, building a mini web application, deploying and testing the application, and reflecting on assessment methods to improve code quality. After completing these stages, learners undergo a revision and feedback phase to consolidate their understanding. The process concludes with a post-test to evaluate learning outcomes. This design allows for comparative analysis of how different assessment methods impact cognitive load and web design and development skill acquisition, with potential insights into the interaction effects of group type and gender on posttest performance.

3.2. System Design

The AI-Assisted Assessment Environment (AAAE) (see Figure 2) was conceived and implemented to provide students in web design courses with active, adaptive, and personalized assessment experiences. The system integrates intelligent assessment functions with pedagogical principles, enabling learners to receive real-time support while engaging in structured problem-solving activities and obtaining timely feedback in the web design and development course.

3.3. System Architecture

The AAAE platform was built on a modular architecture that separates frontend and backend components, ensuring scalability, maintainability, and seamless integration of AI services.
Frontend Layer: The user interface features a dual-pane layout. On the left, the "Ask ChatGPT 3.0" module, branded as "Jeff, your AI Learning Assistant," serves as the conversational interface for real-time support. On the right, course offerings are presented with instructor profiles and action buttons (e.g., Start Learning, Preview), allowing learners to navigate and engage with structured content.
Backend Layer: The backend supports secure authentication, session tracking, and real-time data synchronization. It manages user profiles, course progress, student assessments, and interaction logs, enabling personalized learning analytics.
AI Integration: The system embeds OpenAI’s conversational API to power Jeff, the AI assistant. This agent provides context-sensitive feedback, coding guidance, and conceptual explanations, simulating a responsive virtual tutor.
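Although the platform's source code is not published, the integration described above can be illustrated with a minimal sketch. The snippet below shows one plausible backend call to OpenAI's conversational API of the kind used to power Jeff; the model identifier, system prompt, and function names are assumptions for illustration, not the study's implementation.

```python
# Minimal sketch (not the AAAE code): a backend call to a conversational
# API of the kind described above. Model name and prompt are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Jeff, an AI learning assistant for a web design and "
    "development course. Give context-sensitive feedback on HTML, CSS, "
    "and JavaScript, and explain concepts rather than writing full solutions."
)

def ask_jeff(student_question: str, code_snippet: str = "") -> str:
    """Send a student's question (and optional code) to the assistant."""
    user_content = student_question
    if code_snippet:
        user_content += f"\n\nStudent code:\n{code_snippet}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content
```

In a deployed system, a handler of this kind would also write each exchange to the interaction logs that feed the learning analytics described in Section 3.4.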

3.4. Core Functionalities

The AAAE platform offers a suite of features designed to enhance learner autonomy, engagement, and performance:
AI-Powered Chat Interface: The left pane allows students to interact with Jeff, the AI assistant, by posing questions and receiving tailored responses. This supports self-directed learning and reduces cognitive overload by offering immediate clarification and feedback.
Course Navigation and Enrollment: The right pane displays a catalog of technical courses (e.g., Robotics I, Computer Programming, Web Design and Development, Hardware Maintenance) with instructor attribution. Learners can preview or begin modules, promoting structured progression through weekly topics.
Collaborative Assessment Structures: The system supports peer interaction through shared modules and AI-assisted feedback, fostering relatedness and collaborative learning.
Performance Monitoring: Engagement metrics such as query frequency, course completion, and feedback responsiveness are logged to inform adaptive recommendations and instructor oversight.

3.5. Design Rationale

The AAAE system is grounded in Constructivist Learning Theory (Bada & Olusegun, 2015), which emphasizes learner agency, contextual engagement, and knowledge construction through experience. It also draws on Cognitive Load Theory (CLT) (Kirschner, 2002) by minimizing extraneous load and optimizing germane load through AI-mediated scaffolding. By embedding AI into the assessment process, AAAE addresses key limitations of traditional methods—namely, delayed feedback, limited personalization, and static instructional delivery. The conversational agent Jeff enables competence-building through timely, adaptive support, while the structured course interface promotes self-regulated learning. Although the current deployment is desktop-based, future iterations will prioritize mobile accessibility to accommodate learners in resource-constrained environments and support flexible, on-the-go learning.
The AI-Assisted Assessment Environment (AAAE) employed OpenAI’s GPT-4 model through its conversational API to power Jeff, the AI Learning Assistant. GPT-4 was chosen because of its advanced contextual reasoning, ability to sustain coherent multi-turn dialogues, and superior accuracy compared to earlier models like GPT-3.5. In educational settings, these strengths are critical for reducing cognitive overload, providing adaptive scaffolding, and delivering timely, personalized feedback. GPT-4’s enhanced coding support and conceptual clarity align with Constructivist Learning Theory and Cognitive Load Theory, ensuring learners receive meaningful guidance. Its reliability and safety features further justify its selection for academic assessment environments.
Also, instructors were directed not to intervene or correct AI-assisted feedback during the intervention, except in cases where students explicitly sought clarification beyond the standardized plan. This ensured fidelity to the research design and minimized external influence. Observations indicated that differences in feedback frequency between AI-assisted and traditional groups primarily reflected immediacy rather than content accuracy. While AI feedback was more frequent, instructor-led feedback was more deliberate. We acknowledge that this imbalance could introduce bias, yet it also represents a core distinction between the two assessment models under investigation.
Summarily, the AI-Assisted Assessment Environment was designed to complement, not replace, instructor feedback by offering immediate, context-sensitive support. While GPT-based systems can occasionally generate errors or hallucinations, safeguards were implemented through standardized lesson plans, structured prompts and instructor oversight to mitigate inaccuracies. Observations indicated that feedback was generally accurate and aligned with instructional goals. Nonetheless, we acknowledge this limitation and recommend future studies include systematic evaluation of AI feedback accuracy across diverse contexts.

4. Instrument for Data Collection

To measure the core constructs of cognitive load and web design and development academic achievement, the study employed instruments that were adapted, validated, and tested for reliability. Cognitive load in the web design and development course was assessed using a 20-item questionnaire adapted from Leppink et al. (2013), Faudzi et al. (2024), and Klepsch et al. (2017). The instrument was contextualized to reflect both AI-assisted and traditional assessment environments. It captured three dimensions: intrinsic load (IL), relating to task complexity; extraneous load (EL), concerning instructional clarity and distractions; and germane load (GL), reflecting mental effort invested in learning and schema construction. Responses were recorded on a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree). Example items include: "Debugging errors in my code required intense focus" (IL), "The feedback I received was unclear or difficult to interpret" (EL), and "I actively tried to understand the logic behind my code" (GL) (see Appendix A).
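To make the scoring of this instrument concrete, the sketch below derives the three subscale scores from a 20-item response vector, with items grouped as in Appendix A (items 1–5 intrinsic, 6–10 extraneous, 11–15 germane); the function and variable names are illustrative assumptions, not the study's scoring script.

```python
# Sketch: deriving cognitive load subscale scores from the 20-item,
# 5-point Likert instrument (item groupings follow Appendix A).
from statistics import mean

SUBSCALES = {
    "intrinsic":  range(0, 5),    # items 1-5
    "extraneous": range(5, 10),   # items 6-10
    "germane":    range(10, 15),  # items 11-15
}

def subscale_scores(responses: list[int]) -> dict[str, float]:
    """responses: 20 Likert ratings (1-5), in item order."""
    assert len(responses) == 20 and all(1 <= r <= 5 for r in responses)
    return {name: mean(responses[i] for i in idx)
            for name, idx in SUBSCALES.items()}

# Example: a hypothetical student reporting moderate intrinsic load
print(subscale_scores([4, 4, 3, 4, 3, 2, 2, 1, 2, 2,
                       4, 5, 4, 4, 4, 3, 4, 2, 3, 4]))
```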
Academic achievement was measured using the Web Design and Development Achievement Test (WDDAT), a 50-item multiple-choice test covering foundational and applied concepts. Each item had four options (A–D), with one correct response scored at two points, yielding a maximum score of 100. The test was developed in line with Bloom's taxonomy, addressing knowledge, comprehension, and application domains (see Appendix B). Sample items include: "Which of the following is a valid CSS property for changing text color?" and "Which CSS property is used to change the text color of an element?" The WDDAT was reviewed by three computer science education experts to ensure content validity. In addition, continuous assessment tasks were graded using a 30-point rubric derived from a laboratory manual requiring students to complete practical web design exercises. Qualitative data were collected using an instrument designed for web design and development contexts (Appendix C). All instruments underwent expert review and pilot testing to establish validity and reliability. Cronbach's alpha analysis indicated satisfactory internal consistency, with coefficients of 0.875 for the cognitive load scale and 0.790 for the WDDAT.
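For readers who wish to verify reliability figures of this kind, Cronbach's alpha can be computed directly from an item-score matrix. The sketch below implements the standard formula; the data file named in the comment is hypothetical.

```python
# Sketch: Cronbach's alpha for a respondents-by-items score matrix,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# e.g., for the 20-item cognitive load scale answered by 63 respondents:
# cl = np.loadtxt("cognitive_load_items.csv", delimiter=",")  # hypothetical file
# print(cronbach_alpha(cl))  # the study reports 0.875
```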

4.1. Instrument for Measuring Student Feedback Responsiveness

A separate instrument was developed by the researcher and embedded within the AAAE platform to evaluate student feedback responsiveness. The Feedback Responsiveness Evaluation Framework (FREF) assessed how effectively learners interpreted, applied, and reflected on feedback/assessment to improve their work in HTML, CSS, JavaScript, and related frameworks. This framework provided a structured method for analyzing learning behaviors and revision outcomes, particularly when comparing AI-assisted assessment with traditional instructor-led feedback. Learning analytics were captured through an embedded code sheet that monitored the activities of 10 students during the AI-assisted assessment process. Each participant was assigned a unique identifier to track individual progress. Sample metrics included revision rates, time-to-revision, error recurrence, and improvement scores. The framework and its items were validated by experts, and refinements were made to ensure accurate capture of student responses.
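The paper does not publish the FREF code-sheet schema, but the metrics it names (revision rate, time-to-revision, error recurrence) can be represented with a simple record type. The sketch below is an illustration under assumed field names, using Obi's figures from Section 5 as the worked example.

```python
# Sketch: computing FREF metrics from embedded code-sheet logs.
# Field names are illustrative; the platform's actual schema is not published.
from dataclasses import dataclass

@dataclass
class CodeSheetEntry:
    student_id: str
    feedback_items: int          # total feedback items received
    items_revised: int           # revisions traced back to feedback
    minutes_to_revision: float   # mean time from feedback to revision
    recurring_errors: int        # errors reappearing after feedback

def revision_rate(entry: CodeSheetEntry) -> float:
    """Revision rate (%) as used in Section 5; may exceed 100 when a
    student revises beyond the flagged items (e.g., Ike at 140%)."""
    return 100 * entry.items_revised / entry.feedback_items

obi = CodeSheetEntry("Obi", feedback_items=120, items_revised=96,
                     minutes_to_revision=12.5, recurring_errors=3)
print(revision_rate(obi))  # 80.0
```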

4.2. Data Collection Procedure

Data collection followed a phased and structured approach, covering pre-intervention and post-intervention stages.
Phase 1: Orientation and Pre-Test (Week 1). At the start of the semester, baseline assessments were administered to both groups using Google Forms and the AAAE platform. Instruments included:
Cognitive Load Scale (20 items, 5-point Likert scale)
Web Design and Development Achievement Test (50 multiple-choice items, 2 marks each)
Qualitative instrument (10 items)
Demographic questionnaire (gender, age, prior web design experience)
The pre-test established baseline scores for later comparison and served as covariates in statistical analyses to ensure accurate estimation of treatment effects.
Phase 2: Intervention with AI-Assisted Assessment (Weeks 2–9). The instructional intervention lasted 10 weeks, with four hours of contact per week (two hours of instruction and two hours of hands-on practice, held on different days of the week), totaling 40 contact hours. Students were divided into two groups:
Experimental Group (AI-Assisted Assessment): Learners engaged with AAAE enhanced by AI tools integrated into all assessment activities.
Weeks 2–4: Introduction to web development fundamentals and assessment types; creation of structured web pages using HTML.
Activity Example: An AI-assisted coding simulator provided real-time syntax checks and CSS styling feedback for simple exercises (see the illustrative sketch following this phase description).
Weeks 5–7: Group problem-solving with AI-assisted support for intermediate JavaScript tasks.
Activity Example: “Build a mini web application” project, where the AI dashboard tracked team progress and suggested strategies for improvement (see Figure 3).
Weeks 8–9: Deployment and Testing of Web Applications During this phase, students engaged with advanced, real-world web development projects. The AI system provided predictive analytics to highlight potential coding errors, guided reflective assessment practices, and supported exercises aimed at improving code quality.
Week 10: Revision and Feedback Project In the final instructional week, students collaborated on the design and testing of web design systems. An AI-assisted peer assessment feature was introduced, enabling the platform to score submissions using preloaded rubrics and to flag anomalies for instructor review.
Conventional Group: Traditional Assessment Mode The conventional group undertook the same web design and development tasks but without AI support. Feedback and guidance were delivered exclusively by instructors and peers, replicating conventional assessment practices.
Note that AI feedback tends to be highly specific (see Table 1), directive, and scaffolded, reducing ambiguity and cognitive load, while teacher feedback is often broader and more reflective, encouraging learner autonomy and supporting deeper critical thinking. This comparison illustrates qualitative differences without assuming that more detail automatically leads to higher test scores.
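To make the Week 2–4 activity concrete, the sketch below shows the kind of naive real-time HTML syntax check a coding simulator might perform, flagging unclosed or mismatched tags. It is illustrative only and not the AAAE implementation.

```python
# Sketch: a naive real-time HTML syntax check of the kind a coding
# simulator might run on student submissions (not the AAAE code).
from html.parser import HTMLParser

VOID_TAGS = {"img", "br", "hr", "input", "meta", "link"}

class TagChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.issues = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.issues.append(f"Unexpected </{tag}>")

def check(html: str) -> list[str]:
    """Return a list of tag-nesting issues found in the snippet."""
    checker = TagChecker()
    checker.feed(html)
    return checker.issues + [f"Unclosed <{t}>" for t in checker.stack]

print(check("<div><p>Hello<img src='x.png'></div>"))
# ['Unexpected </div>', 'Unclosed <div>', 'Unclosed <p>']
```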
Phase 3: Post-Test Administration (Week 10) At the conclusion of the intervention, both groups completed the same cognitive load scale and the web design and development achievement test. A parallel form of the test was administered via Google Forms to ensure consistency. This procedure enabled the measurement of changes attributable to the instructional approach while safeguarding test validity and reliability.
To minimize potential experimenter bias and maintain instructional consistency, the study engaged existing computer education lecturers at the participating universities to deliver the assessment intervention. This choice enhanced ecological validity by embedding the intervention within established institutional structures. Two weeks prior to implementation, the principal investigator organized a standardized two-day training program for all instructors. The training emphasized adherence to research protocols, ethical compliance, and the assessment framework. Each instructor was provided with a unified set of lesson plans, facilitation scripts, and assessment materials to ensure consistency across groups. The conventional group followed traditional assessment practices, while the experimental group incorporated the AI-assisted assessment model. The only instructional difference between the two groups was the integration of the AI tool. Instructors assigned to the experimental group received training on the AAAE interface but were directed not to provide additional scaffolding beyond the standardized plan (see Table 2). This strategy reduced variability and reinforced fidelity of implementation. To monitor consistency across sites, structured observations and checklists were employed. Although this design promoted parity between groups, it is acknowledged that the lack of random assignment and the involvement of different instructors may have introduced potential bias.

5. Results

5.1. Assessment of ANCOVA Assumptions

This study evaluated key ANCOVA assumptions for the Posttest of Cognitive Load and the Posttest of Web Design and Development Academic Achievement using both visual and statistical diagnostics. Q–Q plots (Figure 4) indicated approximate normality of residuals, with data points closely following the diagonal line, suggesting no major deviations (Hair et al., 2021). Independence of observations was ensured through the study design, which assigned students from distinct classes with no intergroup interaction (Gravetter, 2017). Levene's test (Table 3) was applied to assess the homogeneity of error variances. For the Posttest of Cognitive Load, the result was non-significant, F(1, 94) = 9.42, p = 0.103, supporting the assumption of equal variances. However, for the Posttest of Web Design and Development Academic Achievement, Levene's test approached significance, F(1, 94) = 0.78, p = 0.078, suggesting a possible violation of this assumption. Such a breach may increase the risk of Type I error and undermine the reliability of F-tests (Pituch & Stevens, 2015). Accordingly, ANCOVA results for this outcome were interpreted with caution, and future studies are advised to consider robust alternatives such as Welch-adjusted ANCOVA (Maxwell et al., 2018). Regarding covariates, the Pretest of Cognitive Load was significant (p = 0.033), indicating a linear relationship with its posttest counterpart. In contrast, the Pretest of Web Design and Development Academic Achievement was not significant (p = 0.714), contributing minimally to explained variance (Pallant, 2020). Importantly, interaction terms in both models were non-significant, confirming the assumption of homogeneity of regression slopes (Howell-Moroney, 2024).
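Checks of this kind are straightforward to script. The sketch below illustrates Levene's test, the homogeneity-of-slopes check, and a residual Q–Q plot using scipy and statsmodels; the data file and column names are assumptions for illustration, not artifacts of the study.

```python
# Illustrative sketch (assumed file and column names): ANCOVA assumption checks.
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical data file

# Levene's test for homogeneity of error variances across the two groups
ai = df.loc[df["group"] == "AI", "post_cl"]
trad = df.loc[df["group"] == "Traditional", "post_cl"]
print(stats.levene(ai, trad))

# Homogeneity of regression slopes: the pretest-by-group interaction
# should be non-significant if the assumption holds
slopes = smf.ols("post_cl ~ pre_cl * C(group)", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=3))

# Normality of residuals: Q-Q plot against the 45-degree line
sm.qqplot(slopes.resid, line="45")
```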
To test Hypothesis 1, a one-way ANCOVA was conducted to examine the effect of instructional group (AI-assisted assessment vs. traditional assessment) on Posttest Cognitive Load Total scores, controlling for Pretest Cognitive Load Total and gender. The analysis revealed a statistically significant main effect of instructional group, F(1, 94) = 183.34, MS = 11,348.77, p < 0.001, partial η2 = 0.721, indicating a large effect size (see Table 4). The adjusted marginal mean for the experimental group (M = 3.65, SE = 0.11) was significantly higher than that of the control group (M = 1.50, SE = 0.10), yielding a mean difference of 2.15, t(94) = 13.60, p < 0.001 (see Table 5). Simple main effects analysis confirmed that this group difference was statistically significant for both female students, F = 2.00, p < 0.001, and male students, F = 5.81, p < 0.001 (see Table 6), demonstrating that the group difference was consistent across genders in terms of cognitive load. This pattern is visually depicted in Figure 5, where the raincloud plot illustrates a denser concentration of higher Posttest Cognitive Load scores in the experimental group. Collectively, these findings indicate a significant group difference and thus lead to the rejection of the first null hypothesis (H01).
To evaluate Hypothesis 2, a one-way ANCOVA was conducted with Posttest Web Design and Development Academic Achievement as the dependent variable, instructional group as the independent variable, and Pretest Academic Achievement and gender as covariates. The analysis revealed a statistically significant main effect of instructional group, F(1, 94) = 1833.00, p < 0.001, partial η2 = 0.721 (see Table 4), indicating a very strong effect size. The adjusted mean score for the AI-assisted assessment group (M = 76.30, SE = 1.31) was substantially higher than that of the traditional assessment group (M = 49.30, SE = 1.29), yielding a mean difference of 27.00, t(94) = 14.40, p < 0.001 (see Table 5). Simple main effects analysis confirmed that this instructional advantage was statistically significant for both female students, F = 64.10, p < 0.001, and male students, F = 72.80, p < 0.001 (see Table 6), demonstrating consistent benefits across gender. Figure 5 visually reinforces this pattern, with the raincloud plot illustrating a denser distribution of higher Posttest scores in the AI-assisted assessment group. Collectively, these results lead to the rejection of the second null hypothesis (H02).
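For transparency, the model underlying these analyses can be expressed compactly. The sketch below fits a plausible ANCOVA with the pretest covariate and the Group × Gender interaction tested under Hypothesis 3, then computes adjusted means at the covariate mean; all variable names and group labels are assumptions.

```python
# Illustrative sketch (assumed names): one-way ANCOVA with a pretest
# covariate and a Group x Gender interaction term.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical data file

model = smf.ols("post_cl ~ pre_cl + C(group) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))  # Type III sums of squares

# Adjusted (marginal) means, evaluated at the covariate mean per cell
grid = pd.DataFrame({
    "pre_cl": df["pre_cl"].mean(),
    "group": ["AI", "Traditional", "AI", "Traditional"],
    "gender": ["Female", "Female", "Male", "Male"],
})
grid["adjusted_mean"] = model.predict(grid)
print(grid)
```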
The code sheets of 10 students were analyzed using the Feedback Responsiveness Evaluation Framework. Their revision rates and total feedback items were as follows: Obi, revision rate 80% with 120 feedback items; Ebuka, 60% with 90; Ike, 140% with 110; Nne, 76% with 140; MMa, 40% with 140; Obu, 90% with 120; Oby, 70% with 70; Ore, 80% with 80; Ora, 80% with 110; and Ewu, 40% with 170.
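As an illustration only (this is not the study's analysis code), the tier classification applied in Sections 5.2–5.4 can be reproduced from these figures with a short script; the thresholds follow the section headings below.

```python
# Sketch: reproducing the responsiveness tiers of Sections 5.2-5.4
# from the code-sheet figures reported above (revision rate in %).
students = {
    "Obi": 80, "Ebuka": 60, "Ike": 140, "Nne": 76, "MMa": 40,
    "Obu": 90, "Oby": 70, "Ore": 80, "Ora": 80, "Ewu": 40,
}

def tier(rate: int) -> str:
    if rate >= 80:
        return "High (>=80%)"
    if rate >= 60:
        return "Moderate (60-79%)"
    return "Low (<50%)"  # no observed rates fell between 50 and 59%

for name, rate in sorted(students.items(), key=lambda kv: -kv[1]):
    print(f"{name:6s} {rate:4d}% -> {tier(rate)}")
```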

5.2. High Responsiveness (≥80%)

Obi (80%)—The Reflective Refiner: Revised 96 of 120 items. Obi’s responsiveness indicates a structured approach to feedback, likely supported by AI cues that clarified syntax and layout issues. His cognitive load was well-managed, enabling deeper revision in CSS specificity and semantic HTML.
Ore (80%)—The Strategic Debugger: Revised 64 of 80 items. Ore’s revision behavior suggests efficient feedback prioritization. AI likely helped him isolate logic errors and optimize styling, reducing intrinsic load and enhancing performance.
Ora (80%)—The Systematic Synthesizer: Revised 88 of 110 items. Ora’s engagement reflects strong working memory and feedback fluency. AI-assisted assessment likely supported her in chunking complex JavaScript and framework tasks into manageable units.
Ike (140%)—The Overcorrecting Improver: Revised 154 of 110 items. Ike’s revision rate exceeding 100% suggests proactive refinement beyond flagged issues. AI feedback may have triggered self-directed debugging and schema expansion, indicating high germane load and deep learning.
Obu (90%)—The Feedback Champion: Revised 108 of 120 items. Obu’s high responsiveness reflects mastery of core web development concepts. AI likely supported his ability to identify subtle logic errors and improve accessibility, reinforcing self-efficacy and achievement.

5.3. Moderate Responsiveness (60–79%)

Nne (76%)—The Balanced Reviser: Revised 106 of 140 items. Nne’s engagement suggests motivation tempered by cognitive overload. AI feedback likely helped her identify recurring patterns, but the volume of flagged issues may have taxed her working memory.
Oby (70%)—The Detail-Oriented Improver: Revised 49 of 70 items. Oby’s focused revisions imply a clean initial submission. AI likely supported her in polishing layout and accessibility features, with low cognitive load enabling reflective refinement.
Ebuka (60%)—The Pragmatic Adapter: Revised 54 of 90 items. Ebuka's selective engagement suggests prioritization of high-impact feedback. AI may have scaffolded his debugging process, though unfamiliar syntax or framework conventions may have elevated intrinsic load.

5.4. Low Responsiveness (<50%)

MMa (40%)—The Struggling Novice: Revised 56 of 140 items. MMa’s low engagement may reflect difficulty parsing AI feedback or foundational gaps. Her cognitive load was likely dominated by extraneous elements, impeding meaningful revision.
Ewu (40%)—The Overwhelmed Learner: Revised 68 of 170 items. Ewu’s high feedback volume and low revision rate suggest severe overload. AI feedback may have been too dense or technical, leading to disengagement. His case highlights the need for adaptive feedback systems calibrated to learner readiness.
In summary, the high-responsiveness students demonstrated strong engagement with feedback, suggesting effective cognitive regulation, low extraneous load, and high germane load. The moderately responsive students showed partial engagement with feedback, possibly due to selective revision strategies or moderate cognitive strain. The low-responsiveness students revised fewer than half of the feedback items, suggesting high extraneous load, limited schema development, or difficulty interpreting AI-generated feedback.

Several patterns emerge from these profiles. Clarity and precision: high performers benefited from targeted, actionable feedback that reduced ambiguity. Pattern recognition: AI helped students identify recurring issues (e.g., nesting errors, missing alt attributes), supporting schema development. Cognitive scaffolding: for moderate and low performers, AI served as a partial scaffold, but without human mediation some feedback exceeded their zone of proximal development. Self-efficacy: students who saw tangible improvement from AI-guided revisions (e.g., Obu, Ike) likely experienced increased self-efficacy and deeper engagement. Two design recommendations follow. Tiered feedback delivery: use AI to deliver feedback in layers, starting with critical errors, then stylistic or optimization suggestions. Cognitive load calibration: integrate AI with learner profiles to adjust feedback density and complexity.
Observations in this study revealed minimal variability across instructional delivery due to the standardized training, lesson plans, and facilitation scripts provided to all lecturers. Structured checklists confirmed that both conventional and experimental groups adhered closely to the research protocols, with the only notable difference being the integration of the AI-assisted assessment tool in the experimental group. Minor variations were observed in instructors' pacing and interaction styles, reflecting individual teaching habits rather than deviations from the framework. While these differences did not significantly affect fidelity, the absence of random assignment and reliance on multiple instructors introduced potential bias that warrants acknowledgment.
To test Hypothesis 3, a Group × Gender interaction term was incorporated into the ANCOVA model predicting Posttest Cognitive Load Total scores. The interaction effect was not statistically significant, F(1, 94) = 0.233, p = 0.630, partial η2 = 0.0001 (see Table 4), indicating that the impact of AI-assisted assessment on cognitive load did not vary by gender. Consequently, the null hypothesis (H03) is retained.

6. Discussion

This study examines the effectiveness of Artificial Intelligence (AI)-assisted assessment compared to traditional assessment in a web design and development course within computing education. Three hypotheses were tested: the first examined differences in cognitive load, the second focused on academic achievement gains, and the third explored whether gender moderated the effect of assessment type on cognitive load.
Efficacy of AI-Assisted Assessment and Traditional Assessment in Web Design and Development Courses
Using analysis of covariance (ANCOVA), the findings provide compelling evidence that AI-assisted assessment serves as a transformative approach in technology education, particularly within underrepresented African contexts. Consistent with the rejection of the first null hypothesis, students in the experimental group demonstrated significantly higher post-intervention academic achievement scores. This aligns with prior research by Gkintoni et al. (2025) and Fong et al. (2025), which emphasized the benefits of AI-driven assessment in enhancing STEM performance without increasing cognitive load. Hong and Guo (2025) similarly found that intelligent agents and code simulators promoted deeper understanding and improved test scores in programming. These outcomes reflect AI's capacity to personalize learning, reduce cognitive demands, and support formative assessment principles consistent with constructivist learning theory. By delivering immediate, individualized feedback and enabling low-risk trial-and-error, AI-assisted systems helped learners focus on mastering higher-order skills such as creativity and problem-solving in web design tasks. However, Feng (2025) cautioned that intelligent tutoring systems, while rich in feedback, may impose higher cognitive demands due to complex interfaces, especially in contexts lacking adequate training and infrastructure. In many African universities, where digital readiness and power supply are inconsistent, the benefits of AI-assisted assessment may not be universally accessible. Although this study was conducted in a relatively stable environment, it underscores the need for scalability and equity in implementation.

6.1. Academic Achievement in Web Design and Development

Further, consistent with the rejection of the second null hypothesis, students exposed to AI-assisted assessment showed significantly greater improvement in web design and development scores than those in the traditional instruction group, even after controlling for baseline performance. This finding reinforces the work of Muthazhagu and Surendiran (2024), who demonstrated that AI algorithms can automate layout and code generation, accelerating prototyping and deployment. Rana and Bhambri (2025) also documented improvements in code efficiency, interface responsiveness, and personalization through generative AI, outcomes that closely mirror the learning context of this study. These results validate Constructivist Learning Theory, particularly Vygotsky's Zone of Proximal Development, by illustrating how AI-supported environments foster peer interaction, self-regulation, and authentic problem-solving. Cognitive Load Theory complements this by showing how AI tools reduce extraneous load through automated feedback and task scaffolding, allowing learners to concentrate on essential design principles. Nonetheless, as Gambo et al. (2025) note, improvements in web design and development may not be solely attributable to AI assessment. Factors such as novelty effects, instructor quality, and group dynamics may also play a role. While this study controlled for pre-intervention performance, it cannot fully eliminate these influences. Future research should adopt longitudinal designs to disentangle these variables more effectively.

6.2. Interaction Effect Between Group and Gender on Posttest Cognitive Load Scores

Consistent with the third null hypothesis (H03), the study found no significant interaction between gender and assessment type on cognitive load. This contrasts with findings by Moss and Gunn (2007) and Katerina and Nicolaos (2018), who reported gender-based differences in engagement with web design tasks, often influenced by assessment design. The absence of such effects in this study may reflect a well-structured AI-assisted assessment environment that mitigated gender biases by providing consistent feedback and equitable access to materials. This supports Mahamuni and Tonpe's (2024) argument that inclusive AI-enhanced assessment design can reduce traditional gender disparities in technology education. Moreover, the lack of interaction suggests that both male and female students benefited similarly, especially when assessments emphasized collaboration and inquiry over competition. Still, as Strzelecki and ElArabawy (2024) observed, gendered learning preferences may not always be captured through quantitative measures. Girls may respond more to affective support, while boys may benefit from gamified experiences—subtleties that warrant further exploration through longitudinal studies examining student interactions with AI.

6.3. Theory, Practice, and Policy Contributions

The findings of this study carry important implications across theoretical, practical, and policy domains. Theoretically, the observed gains in academic achievement and cognitive load reduction among students in the AI-assisted group reinforce Vygotsky’s Social Constructivist Theory and Cognitive Load Theory. AI tools functioned as adaptive scaffolds, enabling learners to tackle complex web design tasks with real-time support, effectively acting as more knowledgeable others. Practically, the study highlights the innovative potential of integrating AI into assessment models. Educators can leverage intelligent tutoring systems, chatbots, and adaptive platforms to enhance engagement, personalize feedback, and improve assessment quality in web design and development. It also emphasizes the importance of designing assessments that align with AI capabilities to ensure equitable support for all learners. From a policy perspective, the results underscore the urgency of embedding AI-enhanced assessment frameworks into national STEM curricula, particularly in higher education across Africa. Policymakers must invest in infrastructure, teacher training, and curriculum reform to promote AI literacy and problem-solving competencies. Additionally, collaborative research should be supported to evaluate the long-term impact of AI-assisted assessment across diverse educational settings.

7. Conclusions

This study explored the impact of AI-assisted assessment on cognitive load and academic achievement in web design and development among computing education students. Employing a mixed-methods design, the findings revealed that students exposed to AI-assisted assessment significantly outperformed those in traditional assessment settings, even after accounting for pre-test scores and gender. Although overall performance was not moderated by gender, simple main effects analysis confirmed that both male and female students benefited from AI-assisted assessment.
These results reinforce the growing body of evidence supporting the educational value of AI assessment tools such as Gradescope and Socrative, particularly when integrated into collaborative assessment frameworks. The incorporation of Vygotskian principles—especially scaffolding and social mediation—through AI technologies proved instrumental in fostering learners’ acquisition of higher-order skills in web design. Importantly, this study contributes to the literature by focusing on a relatively underexplored African context, offering culturally relevant insights into AI-driven assessment innovation in STEM education. As educational systems increasingly adopt AI-mediated approaches, this research underscores the need for intentional, evidence-based assessment reform. It provides empirical support for the broader institutional integration of AI-assisted assessment models in higher education, particularly within web design and development curricula. Ultimately, the study bridges theoretical foundations with practical application, demonstrating how human-centered pedagogy and intelligent systems can jointly advance meaningful educational assessment.

Limitations and Future Studies

Despite its contributions, the study has several limitations. First, the sample was drawn from two institutions in Nigeria, which may limit the generalizability of the findings to other cultural or technological contexts. Second, although the study employed a rigorous pre-post ANCOVA design, it did not assess long-term retention or the transferability of web design skills beyond the intervention period. Additionally, the AI assessment tools used were not evaluated for usability or student interaction patterns, which may have influenced the observed outcomes. Future research should address these limitations by replicating the study across multiple institutions and incorporating longitudinal designs to capture delayed learning effects. Further investigation is also needed into the role of teachers’ AI literacy and instructional facilitation in shaping the effectiveness of AI-assisted assessment frameworks. Incorporating qualitative insights from control groups would provide balanced perspectives. Lastly, future studies should examine the scalability of these models in resource-constrained environments to ensure that technological innovation benefits all learners equitably.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Human and Ethics Committee of the University (Ref: UNNR/VTES/PG/2025/702, dated 2 January 2025).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Scale: 5-point Likert (1 = Strongly Disagree, 5 = Strongly Agree)
Intrinsic Load (IL): Task complexity, Extraneous Load (EL): Instructional clarity and distractions, Germane Load (GL): Mental effort for learning and schema building
1.
The web development tasks required deep concentration to complete.
2.
I found the coding challenges intellectually demanding.
3.
Understanding how HTML, CSS, and JavaScript work together was mentally taxing.
4.
Debugging errors in my code required intense focus.
5.
Designing responsive layouts involved complex decision-making.

Appendix A.1. Extraneous Load (Instructional Design & Distractions)

6.
I was confused by the instructions provided for the assignments.
7.
Switching between tools (e.g., IDE, browser, AI assistant) made the tasks harder.
8.
The feedback I received was unclear or difficult to interpret.
9.
I spent time trying to understand what the assessment was asking for.
10.
The interface of the coding platform distracted me from the task.

Appendix A.2. Germane Load (Effort Toward Learning)

11.
I actively tried to understand the logic behind my code.
12.
I reflected on how to improve my web design skills during the task.
13.
I used feedback to revise and deepen my understanding of web development.
14.
I mentally rehearsed how to apply coding concepts to future projects.
15.
I tried to connect new coding techniques with what I already knew.

Appendix A.4. AI vs. Traditional Assessment Comparison

16. AI feedback helped me focus on learning rather than guessing.
17. Instructor feedback helped me understand my mistakes better than AI.
18. I felt less mentally overloaded when using AI-assisted assessment.
19. Traditional assessment made me think more deeply about my code.
20. I prefer the type of feedback that reduces my mental effort while still helping me learn.
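For analysis, the three subscale scores implied by the item groupings above can be computed directly from the raw responses. The sketch below is a minimal JavaScript illustration, assuming each student’s 20 responses are stored as an array in item order; the constant and function names are introduced here for illustration and are not part of the instrument.

    // Minimal scoring sketch for the 20-item questionnaire above.
    // Assumption: responses is an array of 20 Likert values (1-5), index 0 = item 1.
    const SUBSCALES = {
      intrinsicLoad:   [1, 2, 3, 4, 5],       // IL items
      extraneousLoad:  [6, 7, 8, 9, 10],      // EL items
      germaneLoad:     [11, 12, 13, 14, 15],  // GL items
      aiVsTraditional: [16, 17, 18, 19, 20],  // comparison items
    };

    function subscaleMeans(responses) {
      const means = {};
      for (const [name, items] of Object.entries(SUBSCALES)) {
        const sum = items.reduce((total, item) => total + responses[item - 1], 0);
        means[name] = sum / items.length;  // mean score on the 1-5 scale
      }
      return means;
    }

    // Example: one (made-up) student's responses.
    console.log(subscaleMeans([4, 5, 3, 4, 4, 2, 1, 2, 3, 2, 5, 4, 5, 4, 4, 4, 3, 4, 2, 5]));
    // → { intrinsicLoad: 4, extraneousLoad: 2, germaneLoad: 4.4, aiVsTraditional: 3.6 }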

Appendix B

  • Web Design and Development Achievement Test (WDDAT)
  • Time Allowed: 1 h
  • REG. NO: ______________
  • GENDER: ______________ (Male, Female)
  • Questions
  • Q1. Which of the following is correct about HTML?
    A—HTML is a markup language used to structure content on the web.
    B—HTML is a programming language used to build logic.
    C—HTML is only used for styling web pages.
    D—HTML cannot include multimedia elements.
  • Q2. Which of the following is correct about CSS?
    A—CSS is used to define the structure of a webpage.
    B—CSS is used to style and format the appearance of a webpage.
    C—CSS is a server-side scripting language.
    D—CSS is used to store data in databases.
  • Q3. Which of the following is the correct HTML tag for inserting an image?
    A—<image>
    B—<src>
    C—<img>
    D—<picture>
  • Q4. Which of the following is a valid CSS property for changing text color?
    A—font-color
    B—text-color
    C—color
    D—text-style
  • Q5. Which of the following is true about JavaScript?
    A—JavaScript is a markup language.
    B—JavaScript is used to add interactivity to web pages.
    C—JavaScript cannot manipulate HTML elements.
    D—JavaScript is only used for styling.
  • Q6. Which of the following is the correct way to create a hyperlink in HTML?
    A—<a link="www.example.com">Example</a>
    B—<a href="www.example.com">Example</a>
    C—<link="www.example.com">Example</link>
    D—<url="www.example.com">Example</url>
  • Q7. Which of the following is a responsive design technique?
    A—Using fixed-width layouts only.
    B—Using media queries in CSS.
    C—Ignoring mobile device compatibility.
    D—Designing only for desktop screens.
  • Q8. Which of the following JavaScript functions displays a popup alert box?
    A—console.log()
    B—alert()
    C—prompt()
    D—document.write()
  • Q9. Which of the following is correct about semantic HTML?
    A—Semantic HTML uses tags that describe the meaning of content.
    B—Semantic HTML is only used for styling.
    C—Semantic HTML is not supported by modern browsers.
    D—Semantic HTML cannot be used with CSS.
  • Q10. Which of the following is the correct CSS property to control spacing outside an element?
    A—padding
    B—margin
    C—border
    D—spacing
  • Q1. Which HTML tag is used to define the main heading of a webpage?
    A—<header>
    B—<head>
    C—<title>
    D—<h1>
  • Q2. Which CSS property is used to change the text color of an element?
    A—text-style
    B—font-color
    C—text-color
    D—color
  • Q3. Which HTML tag is used to create a hyperlink?
    A—<link>
    B—<href>
    C—<url>
    D—<a>
  • Q4. Which JavaScript method is used to write content into the HTML document?
    A—window.print()
    B—document.write()
    C—console.log()
    D—alert()
  • Q5. Which CSS property controls the size of text?
    A—text-size
    B—font-style
    C—font-size
    D—size
  • Q6. Which HTML tag is used to insert an image?
    A—<pic>
    B—<image>
    C—<src>
    D—<img>
  • Q7. Which JavaScript keyword is used to declare a variable?
    A—var
    B—declare
    C—int
    D—define
  • Q8. Which CSS property sets the background color of an element?
    A—color
    B—background
    C—bgcolor
    D—background-color
  • Q9. Which HTML tag is used to create an unordered list?
    A—<list>
    B—<li>
    C—<ol>
    D—<ul>
  • Q10. Which JavaScript function displays a popup message?
    A—prompt()
    B—alert()
    C—confirm()
    D—popup()
  • Q11. Which HTML attribute specifies the destination of a link?
    A—href
    B—link
    C—src
    D—target
  • Q12. Which CSS property makes text bold?
    A—font-weight
    B—font-bold
    C—text-style
    D—bold
  • Q13. Which HTML tag defines a table row?
    A—<tr>
    B—<row>
    C—<th>
    D—<td>
  • Q14. Which JavaScript operator compares both value and type?
    A—!=
    B—===
    C—=
    D—==
  • Q15. Which CSS property controls spacing outside an element?
    A—padding
    B—spacing
    C—border
    D—margin
  • Q16. Which HTML5 element is used for navigation links?
    A—<nav>
    B—<menu>
    C—<section>
    D—<aside>
  • Q17. Which CSS property controls spacing inside an element?
    A—margin
    B—padding
    C—border
    D—spacing
  • Q18. Which HTML tag is used to embed a video?
    A—<media>
    B—<video>
    C—<movie>
    D—<embed>
  • Q19. Which JavaScript function is used to parse a string into an integer?
    A—parseInt()
    B—parseFloat()
    C—Number()
    D—toString()
  • Q20. Which CSS property is used to change the font of text?
    A—font-family
    B—font-style
    C—font-weight
    D—font-size
  • Q21. Which HTML tag is used to define a form?
    A—<form>
    B—<input>
    C—<fieldset>
    D—<label>
  • Q22. Which JavaScript method is used to select an element by ID?
    A—getElementByName()
    B—getElementById()
    C—querySelectorAll()
    D—getElementsByClassName()
  • Q23. Which CSS property is used to underline text?
    A—text-decoration
    B—font-style
    C—line-style
    D—underline
  • Q24. Which HTML attribute is used to specify an image source?
    A—src
    B—href
    C—alt
    D—link
  • Q25. Which JavaScript loop executes at least once regardless of condition?
    A—for
    B—while
    C—do…while
    D—foreach
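Because the items above test isolated facts, it may help to see several of the tested constructs working together. The page below is an editorial illustration only, not part of the WDDAT; it combines a number of the HTML tags, CSS properties, and JavaScript features that the items target in one self-contained file.

    <!DOCTYPE html>
    <html>
    <head>
      <title>WDDAT concepts in one page</title>
      <style>
        /* color, font-size, font-weight, margin, padding */
        h1 { color: navy; font-size: 2em; }
        .note { font-weight: bold; margin: 10px; padding: 5px; background-color: #eef; }
        /* media query: the responsive design technique tested above */
        @media (max-width: 600px) { h1 { font-size: 1.2em; } }
      </style>
    </head>
    <body>
      <h1>Main heading (h1)</h1>
      <nav><a href="https://www.example.com">A hyperlink (a with href)</a></nav>
      <img src="logo.png" alt="Site logo">
      <ul><li>An unordered list item</li></ul>
      <p id="out" class="note">JavaScript output appears here.</p>
      <script>
        var count = parseInt("3", 10);       // var declaration; parseInt()
        if (count === 3) {                   // === compares value and type
          document.getElementById("out").textContent = "count is exactly 3";
        }
        var runs = 0;
        do { runs++; } while (runs < count); // do...while executes at least once
        alert("Loop body ran " + runs + " times"); // alert() popup
      </script>
    </body>
    </html>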

Appendix C

Feedback Responsiveness Evaluation Framework (FREF)
Purpose: to evaluate students’ ability to interpret, apply, and reflect on feedback in order to improve their web development work (HTML, CSS, JavaScript, or frameworks).

Revision Performance Metrics (Quantitative)

Metric | Description | Scoring Method
Revision Rate | % of feedback items addressed in the revised submission | (Addressed items ÷ total feedback items) × 100
Improvement Score | Change in rubric score between original and revised submission | Post-score − Pre-score
Time-to-Revision | Time taken to submit revised work after receiving feedback | Measured in hours/days
Error Recurrence | % of previously flagged issues that reappear in revised code | (Repeated issues ÷ total issues) × 100
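The scoring formulas above translate directly into code. The following JavaScript sketch computes the four quantitative metrics for one revised submission; the record field names (totalFeedbackItems, preScore, and so on) are hypothetical, introduced only for this illustration.

    // Illustrative FREF scoring (field names are hypothetical).
    function frefMetrics(submission) {
      const {
        totalFeedbackItems, addressedItems, // for Revision Rate
        preScore, postScore,                // rubric scores before/after revision
        feedbackAt, revisedAt,              // timestamps in ms since the epoch
        flaggedIssues, repeatedIssues,      // for Error Recurrence
      } = submission;
      return {
        revisionRate: (addressedItems / totalFeedbackItems) * 100, // %
        improvementScore: postScore - preScore,                    // post − pre
        timeToRevisionHours: (revisedAt - feedbackAt) / 3.6e6,     // ms → hours
        errorRecurrence: (repeatedIssues / flaggedIssues) * 100,   // %
      };
    }

    // Example usage with made-up values:
    console.log(frefMetrics({
      totalFeedbackItems: 8, addressedItems: 6,
      preScore: 62, postScore: 78,
      feedbackAt: Date.parse("2025-03-01T09:00Z"),
      revisedAt: Date.parse("2025-03-02T15:00Z"),
      flaggedIssues: 10, repeatedIssues: 2,
    }));
    // → revisionRate: 75, improvementScore: 16, timeToRevisionHours: 30, errorRecurrence: 20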

References

  1. Abarghouie, M. H. G., Omid, A., & Ghadami, A. (2020). Effects of virtual and lecture-based instruction on learning, content retention, and satisfaction from these instruction methods among surgical technology students: A comparative study. Journal of Education and Health Promotion, 9(1), 296. [Google Scholar] [CrossRef]
  2. Ala-Mutka, K. M. (2005). A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2), 83–102. [Google Scholar] [CrossRef]
  3. Aloisi, C. (2023). The future of standardised assessment: Validity and trust in algorithms for assessment and scoring. European Journal of Education, 58(1), 98–110. [Google Scholar] [CrossRef]
  4. Antonenko, P. D., & Thompson, A. D. (2011). Preservice teachers’ perspectives on the definition and assessment of creativity and the role of web design in developing creative potential. Education and Information Technologies, 16(2), 203–224. [Google Scholar] [CrossRef]
  5. Arab, M., Liang, J. T., Hong, V., & LaToza, T. D. (2025). How developers choose debugging strategies for challenging web application defects. arXiv, arXiv:2501.11792. [Google Scholar] [CrossRef]
  6. Arisi, R. O. (2011). Social studies education as a panacea for national security in Nigeria. African Research Review, 5(2). [Google Scholar] [CrossRef]
  7. Bada, S. O., & Olusegun, S. (2015). Constructivism learning theory: A paradigm for teaching and learning. Journal of Research & Method in Education, 5(6), 66–70. [Google Scholar]
  8. Basil, O. C. (2022). Cloud computing and undergraduate researches in universities in Enugu State: Implication for skills demand. International Journal of Instructional Technology and Educational Studies, 3(2), 27–33. [Google Scholar] [CrossRef]
  9. Brown, G. T. (2022). Student conceptions of assessment: Regulatory responses to our practices. ECNU Review of Education, 5(1), 116–139. [Google Scholar] [CrossRef]
  10. Chandrasekara, S., Hewavitharana, D., Weerasinghe, M., Gayasri, B., Wijendra, D., & De Silva, D. (2025, April 3). Gamifying coding education for beginners: Empowering learners with HTML, CSS and JavaScript. 2025 IEEE International Research Conference on Smart Computing and Systems Engineering (SCSE) (pp. 1–7), Colombo, Sri Lanka. [Google Scholar]
  11. Chen, C. H., & Yang, Y. C. (2019). Revisiting the effects of project-based learning on students’ academic achievement: A meta-analysis investigating moderators. Educational Research Review, 26, 71–81. [Google Scholar] [CrossRef]
  12. Creswell, J. W., & Inoue, M. (2025). A process for conducting mixed methods data analysis. Journal of General and Family Medicine, 26(1), 4–11. [Google Scholar] [CrossRef]
  13. Csapó, B., Ainley, J., Bennett, R. E., Latour, T., & Law, N. (2011). Technological issues for computer-based assessment. In Assessment and teaching of 21st century skills (pp. 143–230). Springer. [Google Scholar]
  14. Faudzi, M. A., Cob, Z. C., Ghazali, M., Omar, R., & Sharudin, S. A. (2024). User interface design in mobile learning applications: Developing and evaluating a questionnaire for measuring learners’ extraneous cognitive load. Heliyon, 10(18), e37494. [Google Scholar] [CrossRef]
  15. Feng, L. (2025). Investigating the effects of artificial intelligence-assisted language learning strategies on cognitive load and learning outcomes: A comparative study. Journal of Educational Computing Research, 62(8), 1741–1774. [Google Scholar] [CrossRef]
  16. Fong, L., Wynne, K., & Verhoeven, B. (2025). Easing the cognitive load of general practitioners: AI design principles for future-ready healthcare. Technovation, 142, 103208. [Google Scholar] [CrossRef]
  17. Gambo, I., Abegunde, F. J., Gambo, O., Ogundokun, R. O., Babatunde, A. N., & Lee, C. C. (2025). GRAD-AI: An automated grading tool for code assessment and feedback in programming course. Education and Information Technologies, 30(7), 9859–9899. [Google Scholar] [CrossRef]
  18. Garcia, M. B. (2025). Self-coded digital portfolios as an authentic project-based learning assessment in computing education: Evidence from a web design and development course. Education Sciences, 15(9), 1150. [Google Scholar] [CrossRef]
  19. Gierczyk, M., Karwowski, M., Paas, F., & Tai, R. H. (2025). STEM workshop learning: Content load effects on cognitive, interaction, and emotional outcomes. The Journal of Experimental Education, 1–22. [Google Scholar] [CrossRef]
  20. Gkintoni, E., Antonopoulou, H., Sortwell, A., & Halkiopoulos, C. (2025). Challenging cognitive load theory: The role of educational neuroscience and artificial intelligence in redefining learning efficacy. Brain Sciences, 15(2), 203. [Google Scholar] [CrossRef] [PubMed]
  21. Gravetter, F. J. (2007). Study guide for Gravetter/Wallnau’s Essentials of statistics for behavioral science (6th ed.). Wadsworth Publishing. [Google Scholar]
  22. Gümüş, M. M., Kukul, V., & Korkmaz, Ö. (2024). Relationships between middle school students’ digital literacy skills, computer programming self-efficacy, and computational thinking self-efficacy. Informatics in Education, 23(3), 571–592. [Google Scholar] [CrossRef]
  23. Hair, J. F., Astrachan, C. B., Moisescu, O. I., Radomir, L., Sarstedt, M., Vaithilingam, S., & Ringle, C. M. (2021). Executing and interpreting applications of PLS-SEM: Updates for family business researchers. Journal of Family Business Strategy, 12(3), 100392. [Google Scholar] [CrossRef]
  24. Halkiopoulos, C., & Gkintoni, E. (2024). Leveraging AI in e-learning: Personalized learning and adaptive assessment through cognitive neuropsychology—A systematic analysis. Electronics, 13(18), 3762. [Google Scholar] [CrossRef]
  25. Hong, X., & Guo, L. (2025). Effects of AI-enhanced multi-display language teaching systems on learning motivation, cognitive load management, and learner autonomy. Education and Information Technologies, 30, 17155–17189. [Google Scholar] [CrossRef]
  26. Howell-Moroney, M. (2024). Inconvenient truths about logistic regression and the remedy of marginal effects. Public Administration Review, 84(6), 1218–1236. [Google Scholar] [CrossRef]
  27. Katerina, T., & Nicolaos, P. (2018). Examining gender issues in perception and acceptance in web-based end-user development activities. Education and Information Technologies, 23(3), 1175–1202. [Google Scholar] [CrossRef]
  28. Khan, M. A., Kurbonova, O., Abdullaev, D., Radie, A. H., & Basim, N. (2024). Is AI-assisted assessment liable to evaluate young learners? Parents support, teacher support, immunity, and resilience are in focus in testing vocabulary learning. Language Testing in Asia, 14(1), 48. [Google Scholar] [CrossRef]
  29. Khine, M. S. (2024). Using AI for adaptive learning and adaptive assessment. In Artificial intelligence in education: A machine-generated literature overview (pp. 341–466). Springer Nature. [Google Scholar]
  30. Kirschner, P. A. (2002). Cognitive load theory: Implications of cognitive load theory on the design of learning. Learning and Instruction, 12(1), 1–10. [Google Scholar] [CrossRef]
  31. Klepsch, M., Schmitz, F., & Seufert, T. (2017). Development and validation of two instruments measuring intrinsic, extraneous, and germane cognitive load. Frontiers in Psychology, 8, 1997. [Google Scholar] [CrossRef]
  32. Knipp, F., & Winiwarter, W. (2025, April 1–3). AI-AFACT: Designing AI-assisted formative assessment of coding tasks in web development education. 17th International Conference on Computer Supported Education (CSEDU 2025), Porto, Portugal. [Google Scholar]
  33. Kolade, O., Owoseni, A., & Egbetokun, A. (2024). Is AI changing learning and assessment as we know it? Evidence from a ChatGPT experiment and a conceptual framework. Heliyon, 10(4), e25953. [Google Scholar] [CrossRef]
  34. Kong, S. C., Cheung, M. Y. W., & Tsang, O. (2024). Developing an artificial intelligence literacy framework: Evaluation of a literacy course for senior secondary students using a project-based learning approach. Computers and Education: Artificial Intelligence, 6, 100214. [Google Scholar] [CrossRef]
  35. Leppink, J., Paas, F., Van der Vleuten, C. P., Van Gog, T., & Van Merriënboer, J. J. (2013). Development of an instrument for measuring different types of cognitive load. Behavior Research Methods, 45(4), 1058–1072. [Google Scholar] [CrossRef]
  36. Levy-Feldman, I. (2025). The role of assessment in improving education and promoting educational equity. Education Sciences, 15(2), 224. [Google Scholar] [CrossRef]
  37. López-Pimentel, J. C., Medina-Santiago, A., Alcaraz-Rivera, M., & Del-Valle-Soto, C. (2021). Sustainable project-based learning methodology adaptable to technological advances for web programming. Sustainability, 13(15), 8482. [Google Scholar] [CrossRef]
  38. Mahamuni, A. J., & Tonpe, S. S. (2024, April 18–19). Enhancing educational assessment with artificial intelligence: Challenges and opportunities. 2024 IEEE International Conference on Knowledge Engineering and Communication Systems (ICKECS) (Vol. 1, pp. 1–5), Chikkaballapur, India. [Google Scholar]
  39. Martínez-Molés, V., Pérez-Cabañero, C., & Cervera-Taulet, A. (2024). Examining presence in immersive virtual reality and website interfaces through the cognitive fit and cognitive load theories. International Journal of Contemporary Hospitality Management, 36(11), 3930–3949. [Google Scholar] [CrossRef]
  40. Maxwell, A. E., Warner, T. A., & Fang, F. (2018). Implementation of machine-learning classification in remote sensing: An applied review. International Journal of Remote Sensing, 39(9), 2784–2817. [Google Scholar] [CrossRef]
  41. Moss, G., & Gunn, R. (2007). Gender differences in website design: Implications for education. Journal of Systemics, Cybernetics and Informatics, 5(6), 38–43. [Google Scholar]
  42. Muthazhagu, V. H., & Surendiran, B. (2024, January 24–25). Exploring the role of AI in web design and development: A voyage through automated code generation. 2024 IEEE International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE) (pp. 1–8), Bangalore, India. [Google Scholar]
  43. Nwangwu, E. C., Omeh, C. B., & Okorie, C. C. (2022). Design and implementation of CERPS for examination officers in universities in Enugu State. Journal of CUDIMAC, 1(2), 45–56. [Google Scholar]
  44. Omeh, C. B., & Ayanwale, M. A. (2025). Artificial intelligence meets PBL: Transforming computer-robotics programming motivation and engagement. Frontiers in Education, 10, 1674320. [Google Scholar] [CrossRef]
  45. Omeh, C. B., Olelewe, C. J., & Hu, X. (2025). Application of artificial intelligence (AI) technology in TVET education: Ethical issues and policy implementation. Education and Information Technologies, 30(5), 5989–6018. [Google Scholar] [CrossRef]
  46. Omeh, C. B., Olelewe, C. J., & Nwangwu, E. C. (2024). Fostering computer programming and digital skills development: An experimental approach. Computer Applications in Engineering Education, 32(2), e22711. [Google Scholar] [CrossRef]
  47. Omelianenko, O., & Artyukhova, N. (2024). Project-based learning: Theoretical overview and practical implications for local innovation-based development. Economics and Education, 9(1), 35–41. [Google Scholar] [CrossRef]
  48. Pallant, J. (2020). SPSS survival manual: A step by step guide to data analysis using IBM SPSS. Routledge. [Google Scholar]
  49. Pereira, D., Flores, M. A., & Niklasson, L. (2016). Assessment revisited: A review of research in Assessment and Evaluation in Higher Education. Assessment & Evaluation in Higher Education, 41(7), 1008–1032. [Google Scholar]
  50. Piaget, J. (1936). O trabalho por equipes na escola. Tradução de Luiz G. Feiure. Revista de Educação–Diretoria do Ensino do Estado de São Paulo set/dez, 62(247), 317–358. [Google Scholar]
  51. Pituch, K. A., & Stevens, J. P. (2015). Applied multivariate statistics for the social sciences: Analyses with SAS and IBM’s SPSS. Routledge. [Google Scholar]
  52. Powell, J., D’Adamo, C. R., Wolf, J., & Feinman, M. (2024). Worksheet-based delivery system to improve participation and engagement in basic science. Journal of Surgical Education, 81(11), 1558–1564. [Google Scholar] [CrossRef]
  53. Puppala, A. (2025). Cognitive load analysis in AI-augmented BI dashboards: Understanding the impact of artificial intelligence on user comprehension, trust, and decision-making efficiency. Journal of Computer Science and Technology Studies, 7(6), 207–213. [Google Scholar]
  54. Rana, R., & Bhambri, P. (2025). Generative AI in web application development: Enhancing user experience and performance. In Generative AI for web engineering models (pp. 471–486). IGI Global. [Google Scholar]
  55. Ruiz Viruel, S., Sánchez Rivas, E., & Ruiz Palmero, J. (2025). The role of artificial intelligence in project-based learning: Teacher perceptions and pedagogical implications. Education Sciences, 15(2), 150. [Google Scholar] [CrossRef]
  56. Sari, R. C., Pranesti, A., Solikhatun, I., Nurbaiti, N., & Yuniarti, N. (2024). Cognitive overload in immersive virtual reality in education: More presence but less learnt? Education and Information Technologies, 29(10), 12887–12909. [Google Scholar] [CrossRef]
  57. Strzelecki, A., & ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. British Journal of Educational Technology, 55(3), 1209–1230. [Google Scholar] [CrossRef]
  58. Suryani, M., Sensuse, D. I., Santoso, H. B., Aji, R. F., Hadi, S., Suryono, R. R., & Kautsarina. (2024). An initial user model design for adaptive interface development in learning management system based on cognitive load. Cognition, Technology & Work, 26(4), 653–672. [Google Scholar] [CrossRef]
  59. Sweller, J. (2011). Cognitive load theory. In Psychology of learning and motivation (Vol. 55, pp. 37–76). Academic Press. [Google Scholar]
  60. Tahvili, S., Hatvani, L., Felderer, M., de Oliveira Neto, F. G., Afzal, W., & Feldt, R. (2025). Comparative analysis of text mining and clustering techniques for assessing functional dependency between manual test cases. Software Quality Journal, 33(2), 24. [Google Scholar] [CrossRef]
  61. Tezer, M., & Çimşir, B. T. (2018). The impact of using mobile-supported learning management systems in teaching web design on the academic success of students and their opinions on the course. Interactive Learning Environments, 26(3), 402–410. [Google Scholar] [CrossRef]
  62. Trajkovski, G., & Hayes, H. (2025). Implementing AI-assisted assessment in educational institutions. In AI-assisted assessment in education: Transforming assessment and measuring learning (pp. 193–243). Springer Nature. [Google Scholar]
  63. Umakalu, C. P. U., & Omeh, C. B. (2025). Impact of teaching computer robotics programming using hybrid learning in public universities in Enugu State, Nigeria. Vocational and Technical Education Journal, 5(1), 1–8. [Google Scholar]
  64. Vygotsky, L., & Cole, M. (2018). Lev Vygotsky: Learning and social constructivism. In Learning theories for early years practice (pp. 68–73). SAGE Publications Inc. [Google Scholar]
  65. Wickramasinghe, M. T. A. (2024, August 7). Challenges in assessing learning outcomes of undergraduates in web development due to AI-assisted solutions. 7th International Conference on Business Innovation (pp. 34–45, 78), Madrid, Spain. [Google Scholar]
  66. Wuttikamonchai, O., Pimdee, P., Ployduangrat, J., & Sukkamart, A. (2024). A needs assessment evaluation of information technology student mobile website design skills. Contemporary Educational Technology, 16(1), ep494. [Google Scholar] [CrossRef] [PubMed]
  67. Yilmaz, R., & Yilmaz, F. G. K. (2023). Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Computers in Human Behavior: Artificial Humans, 1(2), 100005. [Google Scholar] [CrossRef]
Figure 1. Experiment procedure.
Figure 2. Interface of the AI-Assisted Assessment Environment (AAAE).
Figure 3. Learning analytics view of the students’ dashboard.
Figure 4. Q-Q plots for the posttest of cognitive load and the posttest of web design academic achievement.
Figure 5. Raincloud plots for the posttest of cognitive load and the posttest of web design and development achievement.
Table 1. Comparative table: AI vs. teacher feedback on similar learning items.

Learning Item (Example) | AI Feedback | Teacher Feedback | Key Difference
HTML structure (missing attribute in image tag) | “Add an alt attribute to improve accessibility. This ensures compliance with accessibility standards.” | “Remember to include alt text for images.” | AI provides specific code-level guidance; the teacher emphasizes the principle but leaves implementation to the student.
CSS styling (overuse of inline styles) | “Consider moving inline styles into a CSS file. This reduces redundancy and improves maintainability. Example: style.css with .btn { color: blue; }.” | “Try to avoid inline styles; use external CSS instead.” | AI scaffolds with step-by-step correction; teacher feedback is concise but less detailed.
JavaScript logic (incorrect loop condition) | “Your loop reads past the end of the array because i <= array.length. Change the condition to i < array.length to prevent runtime errors.” | “Check your loop condition—it may be causing issues.” | AI feedback is precise and actionable; teacher feedback prompts self-discovery.
Web accessibility (missing ARIA labels) | “Add ARIA labels to improve screen reader support.” | “Think about accessibility features for users with disabilities.” | AI feedback is clear and directive; teacher feedback is broader, encouraging reflection.
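The inline code examples referenced in the AI feedback column did not survive extraction intact, so the sketch below reconstructs plausible before/after versions of the four items. It is an illustrative reconstruction, not the exact feedback text generated in the study.

    <!-- 1. HTML structure: add the missing alt attribute. -->
    <!-- Before: <img src="team.jpg"> -->
    <img src="team.jpg" alt="Photo of the project team">

    <!-- 2. CSS styling: replace the inline style with a class in style.css. -->
    <!-- Before: <button style="color: blue;">Save</button> -->
    <!-- style.css: .btn { color: blue; } -->
    <button class="btn">Save</button>

    <!-- 4. Accessibility: add an ARIA label for screen readers. -->
    <!-- Before: <button>Submit</button> -->
    <button aria-label="Submit form">Submit</button>

    <script>
      // 3. JavaScript logic: the flagged condition i <= array.length reads one
      // element past the end of the array (array[array.length] is undefined).
      var array = [10, 20, 30];
      for (var i = 0; i < array.length; i++) {  // corrected bound
        console.log(array[i]);
      }
    </script>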
Table 2. Lesson plan.

Week | Learning Objective | Topic | AI-Assisted Assessment | Traditional Assessment
1 | Orientation of the participants | | |
2 | Understand web development fundamentals and assessment types | Introduction to HTML, CSS, JS; overview of AI vs. traditional assessment; setup of the development environment (e.g., VS Code, GitHub) | Diagnostic quiz with adaptive feedback (e.g., CodeSignal, Replit) | Written quiz on web basics
3 | Create structured web pages using HTML | HTML tags, forms, semantic structure; build a personal homepage | Auto-graded HTML exercises with instant feedback | Manual review of HTML page structure
4 | Style web pages using CSS | Selectors, box model, layout (Flexbox/Grid); apply styles to homepage | AI tool evaluates CSS syntax and layout (e.g., CodeGrade) | Instructor feedback on design consistency
5 | Add interactivity with JavaScript | Variables, functions, events; DOM manipulation | AI-assisted debugging and code suggestions | Instructor-graded JS task (e.g., form validation; see the sketch after this table)
6 | Build a mini web application | Combine HTML, CSS, JS; project planning and wireframing | AI feedback during development (e.g., Copilot, GitHub Issues) | Rubric-based grading of project prototype
7 | Deploy and test web applications | Hosting (GitHub Pages, Netlify); testing and debugging | AI-generated deployment checklist and error detection | Manual evaluation of deployed site
8 | Reflect on assessment methods and improve code quality | Code refactoring; group discussion on AI vs. traditional feedback | AI-generated code quality report (e.g., readability, performance) | Reflection essay comparing assessment methods
9 | Complete final project and conduct meta-assessment | Final project development; presentation and peer review | Students choose AI or traditional feedback for final project | Instructor grading + peer review + self-assessment survey
10 | Revision, feedback, and posttest | | |
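To make the Week 5 traditional-assessment task concrete, the sketch referenced in the table is given below. It is a hypothetical example of an instructor-graded form-validation exercise; the element IDs and messages are invented for illustration.

    <form id="signup">
      <input id="email" type="text" placeholder="Email address">
      <button type="submit">Register</button>
      <p id="msg"></p>
    </form>
    <script>
      // Validate the form on submit and report the result via DOM manipulation,
      // covering the Week 5 topics: variables, functions, events, DOM manipulation.
      document.getElementById("signup").addEventListener("submit", function (event) {
        event.preventDefault();                             // keep the page from reloading
        var email = document.getElementById("email").value;
        var ok = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);  // simple format check
        document.getElementById("msg").textContent = ok
          ? "Registration received."
          : "Please enter a valid email address.";
      });
    </script>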
Table 3. Levene’s test for homogeneity of variance.

Variables | F | df1 | df2 | p
Web design achievement (posttest) | 9.4157 | 1 | 94 | 0.103
Cognitive load (posttest) | 0.0785 | 1 | 94 | 0.780
Table 4. One-way ANCOVA results for the posttests of web design and development academic achievement and cognitive load.

Dependent Variable | Effect | F | p | MS | η2 | η2p
Posttest of web design achievement | Group | 183.3427 | <0.001 | 11,348.77427 | 0.721 | 0.724
 | Gender | 0.0125 | 0.911 | 0.77216 | 0.000 | 0.000
 | Pretest of web design (covariate) | 0.0337 | | 57.73344 | | 0.013
 | Group × Gender | 0.9327 | 0.993 | 0.00528 | 0.004 | 0.000
Posttest of cognitive load | Group | 168.5423 | <0.001 | 73.42806 | 0.702 | 0.704
 | Gender | 0.0183 | 0.893 | 0.00798 | 0.000 | 0.000
 | Pretest of cognitive load (covariate) | 0.1356 | 0.714 | 0.05906 | 0.001 | 0.002
 | Group × Gender | 0.2338 | 0.630 | 0.10184 | 0.001 | 0.003
Table 5. Summary of estimated marginal means and post hoc group comparisons.

Dependent Variable | Group | Marginal Mean | SE | 95% CI Lower | 95% CI Upper | Mean Difference | SE (Diff.) | t | p-Value | Cohen’s d
Posttest cognitive load (total) | Experimental | 3.65 | 0.11 | 3.43 | 3.87 | | | | |
 | Control | 1.50 | 0.10 | 1.28 | 1.71 | 2.15 | 0.158 | 13.60 | <0.001 | 3.30
Posttest web design and development academic achievement | Experimental | 76.30 | 1.31 | 73.70 | 78.90 | | | | |
 | Control | 49.30 | 1.29 | 46.80 | 51.90 | 27.00 | 1.88 | 14.40 | <0.001 | 3.48
Table 6. Simple main effects by gender for the posttests of cognitive load and web design and development academic achievement.

Dependent Variable | Gender | Sum of Squares | df | Mean Square | F | p-Value | η2 | η2p
Posttest of cognitive load | Female | 2.94 | 1 | 2.94 | 2.00 | <0.001 | 0.025 | 0.027
 | Male | 8.54 | 1 | 8.54 | 5.81 | <0.001 | 0.072 | 0.074
Posttest of web academic achievement | Female | 64.1 | 1 | 64.1 | 1.07 | <0.001 | 0.004 | 0.015
 | Male | 72.8 | 1 | 12,393.8 | 206.19 | <0.001 | 0.738 | 0.741
