Trends in Higher Education
  • Article
  • Open Access

1 March 2026

A Phenomenological Inquiry into Lecturers’ Acceptance of Computer-Based Testing in Higher Education Through the Lens of the Technology Acceptance Model

1 Department of Mathematical Sciences, University of Agder, 4604 Kristiansand, Norway
2 Department of Science Education, Ahmadu Bello University, Zaria, 810106, Nigeria
3 TETFund Centre of Excellence in Pedagogy, Ahmadu Bello University, Zaria, 810106, Nigeria

Abstract

Integration of computer-based testing (CBT) in higher education has gained momentum globally, particularly in response to increasing demands for efficiency, scalability, and technological innovation in assessments. However, limited research explores how lecturers experience and make sense of CBT adoption, especially within resource-constrained educational systems. Grounded in the technology acceptance model (TAM), we employed a phenomenological approach to investigate lecturers’ perceptions of CBT. Eight lecturers from the largest university in Sub-Saharan Africa were purposively selected and individually interviewed. Thematic analysis, supported by human-AI collaboration, revealed diverse perspectives. The results show that lecturers perceived CBT as useful for improving efficiency, feedback speed, and assessment management, though concerns remained about infrastructure, authenticity, and equity. Ease of use strongly shaped these perceptions, with digitally skilled lecturers reporting a more positive experience. Attitudes toward CBT varied with discipline and pedagogical beliefs and, in turn, shaped lecturers’ intention to adopt it. Thus, lecturers showed cautious but positive behavioural intention, particularly where CBT aligned with assessment needs and institutional support was adequate. The study contributes theoretically by extending the applicability of TAM to qualitative inquiry and practically by informing institutional strategies for improving CBT implementation.

1. Introduction

Assessment in higher education constitutes a systemic and continuous process that employs empirical data on student learning to enhance academic programmes and improve educational outcomes. This critical academic function has evolved significantly in response to shifting pedagogical paradigms, technological advancements, and institutional demands for efficiency and accountability [1]. At its core, assessment encompasses the establishment of measurable learning outcomes, the provision of sufficient learning opportunities, the systematic collection and analysis of evidence, and the application of findings to improve educational practice [2,3]. These processes operate within an iterative cycle of continuous improvement, targeting various levels of the educational hierarchy—from individual learners to courses, academic programmes, and entire institutions.
Assessment serves two primary purposes in higher education: facilitating student learning (formative assessment) and certifying student achievement (summative assessment) [4]. When effectively implemented, these two functions intertwine to foster deeper student engagement while ensuring robust measures of competency and achievement. Beyond their evaluative role, contemporary assessment practices shape student identity formation, the process through which students come to know themselves as future professionals, engaging in processes of “being and becoming” that characterise the higher education journey [5]. Further, higher education leverages assessment not only to measure knowledge retention and shape identity formation but also to cultivate critical thinking, problem-solving, and adaptability [6]. These skills are essential for professional success in an increasingly technology-driven world, especially after the COVID-19 pandemic.
The digital transformation of higher education has introduced new possibilities for assessment, including computer-based testing (CBT) and other technology-enhanced assessment methods [7]. This shift reflects higher education’s responsiveness to technological innovation and pedagogical demands. While traditional paper-based examinations remain widely used, CBT has emerged as a viable alternative due to its efficiency, adaptability, and alignment with modern educational paradigms [8]. Paper-based assessments, rooted in long-standing academic traditions, offer accessibility in low-resource settings and provide familiarity for students and educators. However, they pose significant logistical challenges, including manual grading inefficiencies, delayed feedback, and vulnerabilities to academic malpractice (e.g., answer leakage and impersonation) [9,10].
Conversely, CBT presents numerous advantages, such as automated grading, immediate results dissemination, and adaptive testing capabilities that adjust question difficulty based on real-time student performance [11,12]. These affordances of CBT proved especially valuable during the COVID-19 pandemic, as many educational institutions resorted to online learning platforms for teaching and assessment. Empirical studies indicate that CBT reduces grading workloads by approximately 60% while enhancing feedback quality and instructional responsiveness [8]. The integration of multimedia elements, such as interactive simulations, video prompts, and case-based scenarios, into CBT further supports the assessment of higher-order cognitive skills, including critical thinking and problem-solving [13,14]. However, CBT is not without its challenges. Its reliance on stable technological infrastructure exacerbates digital inequities, disproportionately affecting students from low-income backgrounds and under-resourced institutions [15,16]. Additionally, concerns regarding data security, algorithmic biases in automated grading systems, and the potential for technical malfunctions necessitate careful consideration of CBT’s implementation in diverse educational contexts [17,18].
Despite these shortcomings of CBT, its increasing adoption in higher education is driven by its scalability and alignment with institutional priorities for cost-effective, high-volume testing [19]. For instance, Nigeria’s national transition to mandatory CBT by 2027 exemplifies its potential to standardise assessment practices, mitigate examination malpractice, and improve efficiency [20]. Notwithstanding this, scholarly inquiry into the implementation of CBT, particularly in higher education institutions, remains limited and methodologically constrained. Existing research (e.g., [21,22,23]) predominantly employs quantitative survey-based designs that capture surface-level perceptions without deeply exploring contextual challenges through qualitative or mixed-methods approaches. Moreover, many studies (e.g., [23,24]) analysing CBT acceptance either lack a theoretical basis or rely on theoretical models that, while insightful, often fail to account for technology-specific constructs such as perceived usefulness, perceived ease of use, and user attitudes toward technology that are central to the technology acceptance model (TAM) [25].
Therefore, this study employs a phenomenological approach to investigate university lecturers’ acceptance of CBT in higher education through the lens of TAM. By centring the lived experiences of educators, the study aims to provide a diverse understanding of the factors influencing their willingness, or reluctance, to integrate CBT into their pedagogical practices. In specific terms, this study aims to address the following research questions:
  • What is the perceived usefulness of CBT among university lecturers?
  • What is the perceived ease of use of CBT from the perspective of university lecturers?
  • How do university lecturers express their attitudes toward the use of CBT?
  • What factors shape university lecturers’ behavioural intentions to adopt CBT in their assessment practices?
The findings of this study will contribute to the growing body of scholarship on technology adoption in higher education by extending the application of the TAM through a phenomenological lens. By foregrounding university lecturers’ lived experience, we move beyond variable-based explanations of acceptance to illuminate how perceived usefulness, perceived ease of use, attitudes, and behavioural intentions are constructed, negotiated, and enacted in real assessment contexts. Methodologically, this study demonstrates the value of qualitative inquiry in deepening understanding of technology acceptance, particularly in settings where structural constraints, institutional cultures, and individual pedagogical beliefs intersect. Practically, the insights generated from this study may inform evidence-based policy formulation, targeted professional development for lecturers, and context-sensitive institutional strategies for sustainable CBT implementation. In resource-constrained higher education environments, the findings may guide decision-makers in aligning technological investments with lecturers’ needs, capacities, and assessment practices, thereby enhancing both adoption and long-term effectiveness of CBT initiatives.

2. The Technology Acceptance Model

The TAM, originally proposed by Davis [26], is one of the most influential theoretical models used to understand and predict users’ acceptance and use of technology. Grounded in the theory of reasoned action developed by Fishbein and Ajzen [27], TAM was specifically tailored to the context of computer technology. Davis [26] introduced two core constructs, perceived usefulness (PU) and perceived ease of use (PEOU), as the primary determinants of an individual’s intention to use a given system. Since its inception, TAM has undergone several refinements and extensions. Venkatesh and Davis [28] introduced TAM2, which incorporated social influence processes and cognitive instrumental processes. Later, Venkatesh et al. [29] proposed the unified theory of acceptance and use of technology (UTAUT), integrating eight models of technology acceptance, including TAM. Despite the emergence of these models, the original TAM has remained foundational due to its simplicity, robustness, and applicability across diverse technological domains, including education [30].
The PU refers to the degree to which a user believes that employing a particular technology will enhance their job performance, whereas PEOU refers to the extent to which the user believes that interacting with the technology will be effort-free [26]. These two constructs influence an individual’s attitude toward using (AtU) the technology, which in turn affects their behavioural intention (BI) to use it. BI is understood as a key predictor of actual system usage. Specifically, PEOU is theorised to positively impact PU, as technologies that are easier to use are more likely to be perceived as useful. Additionally, both PU and PEOU shape users’ attitudes toward using the system, which subsequently informs their BI to adopt the technology [25]. Although TAM was originally conceived within a quantitative paradigm to test causal relationships among constructs, its theoretical architecture has increasingly been adapted in qualitative inquiries. In such contexts, PU, PEOU, AtU, and BI serve as analytical lenses through which users’ subjective experience with technology can be interpreted and understood. This adaptation allows researchers to explore not only whether individuals are likely to adopt a technology, but also why and how these perceptions are formed in specific cultural and institutional settings [31,32].
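As a point of reference for readers who know TAM from survey research, the hypothesised paths described above are conventionally written as a set of linear structural equations. The schematic below is our illustrative rendering of those relations; the path coefficients and the external-variables term EXT are notational shorthand, and no estimation of this kind is performed in the present qualitative study:

```latex
% Canonical TAM path structure (illustrative shorthand; not estimated in this study)
\begin{aligned}
\mathrm{PU}  &= \beta_{1}\,\mathrm{PEOU} + \gamma\,\mathrm{EXT} + \varepsilon_{1}, \\ % ease of use and external variables (e.g., infrastructure) shape usefulness
\mathrm{AtU} &= \beta_{2}\,\mathrm{PU} + \beta_{3}\,\mathrm{PEOU} + \varepsilon_{2}, \\ % both perceptions shape attitude toward use
\mathrm{BI}  &= \beta_{4}\,\mathrm{AtU} + \varepsilon_{3}. % attitude informs behavioural intention, the antecedent of actual use
\end{aligned}
```

In the present study, these paths serve only as interpretive lenses for reading lecturers’ accounts, not as hypotheses to be tested.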
The adoption of TAM in this study is justified on both theoretical and empirical grounds. As a widely accepted model, TAM offers a parsimonious yet powerful lens to understand the cognitive and affective dimensions underlying lecturers’ acceptance of CBT. Although TAM has often been employed in quantitative frameworks, recent studies have increasingly adopted its constructs in qualitative inquiries to understand the lived experiences of educators and students. For instance, Wingo et al. [33] used a TAM-informed qualitative approach to examine faculty adoption of online learning technologies. Furthermore, TAM’s core constructs—PU, PEOU, AtU, and BI—align closely with the themes that emerge from lecturers’ subjective experiences with CBT. Lecturers often question the usefulness of CBT in accurately assessing learning, evaluate the ease of integrating it into existing pedagogical frameworks, and weigh their attitudes and intentions accordingly [30,34]. By adopting TAM within a phenomenological approach, this study seeks not to test the model’s causal pathways statistically but to interpret how these constructs are experienced, expressed, and negotiated by lecturers in real educational contexts.
Despite its widespread use, TAM has been subjected to several criticisms. Scholars argue that the model is overly deterministic, assuming that technology acceptance is primarily driven by rational evaluations of usefulness and ease [35]. It has also been critiqued for its limited consideration of social, cultural, and contextual factors, which are often crucial in educational settings [36]. Moreover, TAM has been described as technocentric, neglecting the complex human, institutional, and ethical dimensions involved in adopting educational technologies. Having acknowledged these criticisms, we argue that TAM remains a valuable conceptual lens, especially when contextualised within interpretive methodologies such as phenomenology. By embedding TAM within a phenomenological design, this study addresses some of the criticisms associated with the model’s reductionism. It highlights the diverse, contextual, and subjective realities of lecturers’ engagement with CBT, an approach that enhances the explanatory power of TAM beyond its conventional application. TAM provides a structured yet flexible framework to explore lecturers’ experiences with CBT. It helps to articulate how lecturers perceive the utility and usability of CBT, how these perceptions shape their attitudes, and how these attitudes inform their intentions toward adoption.

3. Methods

3.1. Research Design

This study adopted a phenomenological approach situated within the qualitative research paradigm [37] to explore university lecturers’ experience with CBT. As noted by Creswell and Creswell [37], phenomenology aims to uncover and describe the lived experience of individuals as they engage with a specific phenomenon. It prioritises participants’ subjective meanings and interpretations, seeking to understand the essence of their experiences without imposing predetermined theoretical assumptions. In line with core phenomenological principles, we engaged in epoche and bracketing by consciously identifying and setting aside our prior assumptions, beliefs, and professional experience related to CBT throughout the research process. This was operationalised through reflexive journaling and analytic memos maintained before and during data collection and analysis.

3.2. Sample of the Study

We used a purposive sampling technique to select eight university lecturers (seven males and one female) who had prior experience with the use and implementation of CBT at the largest university in Sub-Saharan Africa. This non-probability sampling method was appropriate given the study’s aim to explore the lived experience of lecturers with CBT through the theoretical lens of the TAM. The participants were drawn from a range of academic disciplines across the university, providing a diverse array of perspectives on the adoption and integration of CBT in higher education. Table 1 presents pseudonyms, gender, academic departments, and designations of the participating lecturers.
Table 1. Lecturers’ pseudonyms and other biodata.
Table 1 presents the demographic information of the eight university lecturers, all from the same higher education institution, who participated in this study. The participants span a range of academic departments, including computer science, biochemistry, animal science, history, electrical engineering, mass communication, and science education, and hold ranks ranging from Assistant Lecturer and Lecturer II (the most common rank) through Senior Lecturer to Reader. Notably, only one female lecturer participated in the study, while the remaining seven participants were male. This diversity in disciplinary backgrounds and academic ranks offers a multidimensional perspective on lecturers’ experiences with CBT. We contend that this variation enriched the study’s exploration of PU, PEOU, AtU, and BI regarding the adoption of CBT in higher education contexts.

3.3. Data Collection

Data for this study were collected through individual semi-structured interviews with university lecturers, guided by a researcher-developed interview protocol, from July to August 2025. The interview guide was informed by the constructs of the TAM, incorporating insights from existing literature and established best practices in qualitative inquiry [37]. The protocol included open-ended questions organised around four thematic areas: PU (e.g., do you believe that CBT reduces the workload for lecturers in terms of grading and exam administration? Can you elaborate on this?), PEOU (e.g., how would you describe the ease or difficulty of setting questions using the CBT platform?), AtU (e.g., has the use of CBT changed how you teach or prepare students for exams? If yes, in what ways?), and BI (e.g., based on your experience, do you see yourself continuing to set and conduct exams through CBT in the future? Why or why not?) regarding the implementation of CBT. Each interview lasted between 15 and 20 min and was conducted in a quiet, private location convenient for the participants to ensure comfort and clarity during the discussion. With the informed consent of all participants, the interviews were audio-recorded and subsequently transcribed verbatim for analysis. Before participation, lecturers signed consent forms indicating their voluntary involvement and approval of the recording and use of their responses for academic research. To ensure impartiality and reduce interviewer bias, the interviews were conducted by trained research assistants who were not part of the academic staff and held no supervisory or evaluative authority over the participants. They were the six top postgraduate students in an advanced qualitative research course for PhD students in the Department of Science Education. Their role was to facilitate open, reflective conversations while ensuring the lecturers’ perspectives were accurately captured. This approach helped establish a neutral and respectful environment that allowed for candid expression of participants’ experiences.

3.4. Data Analysis

The data analysis followed a structured, multi-phase process in line with qualitative research standards outlined by Creswell and Creswell [37]. First, the interview transcripts were carefully organised by consolidating lecturer responses thematically, aligning them with the four core constructs of TAM. This organisation facilitated a holistic understanding of the data and enabled us to trace patterns across individual narratives. Following the AI-assisted qualitative analytical framework developed by Zakariya et al. [38], we adopted a two-stage, researcher-supervised coding process in which ChatGPT-4o was used as a collaborative analytic tool rather than an autonomous coder. The integration of AI-assisted coding, instead of the traditional use of NVivo or ATLAS.ti, in this study was guided by increasing recognition of large language models as useful analytic supports in qualitative research [38]. Tools such as ChatGPT-4o can rapidly surface preliminary patterns in extensive textual data and assist with generating initial codes [38]. Here, AI functioned strictly as a supplementary aid, producing theory-driven coding suggestions, while all interpretive judgments, refinements, and theme development remained the responsibility of the research team. This approach improved efficiency, supported reflexivity, and preserved methodological rigour through careful oversight and cross-validation.
For each research question, structured prompts aligned with the relevant TAM constructs were developed to guide the initial open coding process. A sample prompt reads: “Here is a full transcript of a qualitative research dataset aimed at investigating lecturers’ acceptance of computer-based testing in higher institutions. Now, using the first dimension of the technology acceptance model (Perceived Usefulness of CBT: General Experience with CBT, Effectiveness of CBT, Time and Efficiency) as an analytical framework, conduct open coding of the dataset grounded in the dataset while addressing the research question: What is the perceived usefulness of CBT among university lecturers? You should be as explicit as possible in your coding process.” These prompts instructed the system to identify meaningful units of data, generate discrete codes grounded in participants’ verbatim accounts, and retain illustrative quotations. The AI-generated open codes were subjected to iterative human review, during which the research team evaluated their accuracy, relevance, and alignment with the raw transcripts. Where codes were overly broad or ambiguous, prompts were refined and reissued to produce more precise and contextually grounded outputs.
In the second stage, axial coding was conducted using a separate set of prompts that directed the system to cluster the refined open codes into coherent categories and propose emerging themes consistent with the study’s objectives and TAM framework. These categories and themes were then rigorously examined through independent cross-validation by two researchers, who compared the AI-assisted outputs against the original transcripts, resolved discrepancies through discussion, and refined the thematic structure where necessary. This iterative, multi-stage analytic procedure enhanced transparency and trustworthiness while ensuring that the final themes remained firmly grounded in lecturers’ lived experiences.
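To make the two-stage workflow concrete, the sketch below illustrates how such researcher-supervised prompting could be scripted against a chat-completion API. This is a minimal illustration under our own assumptions: the study worked with ChatGPT-4o interactively, so the client library, model name, file name, and abbreviated prompt wording here are illustrative, not the study’s actual tooling.

```python
# Minimal sketch of the two-stage, researcher-supervised coding workflow.
# Assumptions (ours, not the study's): the `openai` Python client is installed,
# an API key is set in the environment, and transcripts live in transcripts.txt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stage 1 prompt, abbreviated from the sample prompt quoted above.
OPEN_CODING_PROMPT = (
    "Here is a full transcript of a qualitative research dataset aimed at "
    "investigating lecturers' acceptance of computer-based testing in higher "
    "institutions. Using the first dimension of the technology acceptance "
    "model (Perceived Usefulness of CBT) as an analytical framework, conduct "
    "open coding grounded in the dataset while addressing the research "
    "question: What is the perceived usefulness of CBT among university "
    "lecturers? For each code, give a label and the verbatim supporting "
    "quotation, and be as explicit as possible in your coding process."
)

# Stage 2 prompt: cluster refined open codes into categories and themes.
AXIAL_CODING_PROMPT = (
    "Cluster the refined open codes below into coherent categories and "
    "propose emerging themes consistent with the technology acceptance "
    "model, retaining one illustrative quotation per category."
)


def run_stage(instruction: str, material: str) -> str:
    """Send one coding prompt plus its material; the output is a suggestion
    to be reviewed against the raw transcripts, never accepted as-is."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You assist with qualitative open and axial coding."},
            {"role": "user", "content": f"{instruction}\n\n{material}"},
        ],
    )
    return response.choices[0].message.content


with open("transcripts.txt", encoding="utf-8") as f:
    transcript = f.read()

# Stage 1: AI-suggested open codes; researchers review each code against the
# transcript and reissue refined prompts where codes are broad or ambiguous.
open_codes = run_stage(OPEN_CODING_PROMPT, transcript)

# Stage 2: axial coding over the researcher-refined codes, followed by
# independent cross-validation of categories and themes by two researchers.
themes = run_stage(AXIAL_CODING_PROMPT, open_codes)
print(themes)
```

As in the study itself, the outputs of both stages would be treated as suggestions only; every code and theme is still checked against the raw transcripts before acceptance.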
Given that all lecturers were actively engaged in different academic departments and held varied academic ranks, further analysis was conducted to explore thematic differences across disciplinary contexts. Although rank and gender were not primary variables of interest, any patterns associated with institutional roles were noted. Figure 1 illustrates the iterative process of thematic analysis, enriched with ChatGPT.
Figure 1. Thematic analytical process enriched with ChatGPT.
The final themes generated through this rigorous and iterative process are presented in the subsequent sections. These themes offer a diverse understanding of how university lecturers perceive, engage with, and intend to adopt CBT in their assessment practices.

4. Results

4.1. Perceived Usefulness of CBT Among University Lecturers

The thematic analysis of the generated data for this study revealed five themes related to the PU of CBT among the university lecturers. We present the themes in subsequent paragraphs.

4.1.1. Theme 1: Adoption Challenges

One of the most recurrent themes from the dataset is the initial difficulty lecturers faced while adopting CBT. Many respondents mentioned that the transition from traditional paper-based testing to CBT was challenging, with requirements on the number of questions to be set and a learning curve affecting their early experiences.
Mr. Usman from the faculty of physical sciences shared his experience:
“Initially, we only used multiple-choice questions (MCQs) for first-year students. For second-year students, we gradually introduced essay questions alongside MCQs. The process becomes easier with familiarity. If you are new to CBT, it may seem difficult at first, but once you get used to the system, it is quite straightforward.”
His statement reflects the learning curve associated with adopting new technology. In line with the TAM, PEOU impacts PU, meaning that if a system is initially difficult to use, it may negatively influence the perception of its benefits.
On some restrictions on the number of questions for CBT examinations, Dr. Aliyu recounted his experience:
“Well, the experience has been mixed. The process requires you to be very fast and follow specific guidelines. For instance, if you want students to answer 60 questions, you are expected to set at least three times that number, meaning you must prepare 180 questions. This can be quite challenging. Additionally, you have to write to the CBT centre to request availability of the exam venue, and sometimes you need to follow up to ensure your request has been approved.”
According to TAM, external variables such as technical infrastructure can affect PU, meaning that a poorly implemented CBT system may lead to resistance among lecturers. However, as indicated in the dataset, many lecturers overcame the initial resistance and eventually recognised the benefits of CBT. This aligns with TAM’s prediction that users who initially struggle with a technology may later find it useful once they become familiar with it.

4.1.2. Theme 2: Perceived Usefulness in Managing Large Classes

A significant advantage of CBT noted by many lecturers is its ability to handle large student populations efficiently. Unlike traditional paper-based testing, where grading and result compilation take significant time, CBT automates grading and provides immediate feedback, making it more convenient for lecturers managing large courses.
For example, Mr. Mutiu from the faculty of engineering explained:
“CBT significantly increases the speed of grading compared to traditional methods. In the traditional system, marking and compiling results could take two to three weeks, and some lecturers even delay submissions. However, with CBT, students can receive their results instantly or soon after the exam. If additional compilation is needed, the process is still much faster and easier for exam officers and lecturers.”
Mr. Usman added:
“If I have a class of 100 students, grading paper-based exams manually could take me hours. With CBT, the grading process is automated and can be completed in just a few minutes. I only need to review the results, eliminating the need for repetitive marking.”
The TAM suggests that when a system demonstrates a clear functional advantage, its PU increases. For lecturers handling large classes, CBT reduces the burden of manually grading hundreds of scripts, making it a valuable and efficient tool.
Mr. Abiodun, a lecturer in the faculty of Agriculture, echoed this sentiment:
“With CBT, grading is automated, making it easier for students to receive their scores promptly.”
Mrs. Oke, a computer science lecturer in the faculty of physical science, further explained:
“With CBT, it is easier to assess a large number of students efficiently. The system allows for instant feedback and automated result compilation, saving time and resources.”
These statements highlight another key benefit: elimination of grading bias. Traditional paper-based grading may involve subjective judgment, fatigue-induced errors, or unconscious biases. By using an automated system, lecturers believe CBT ensures fairness in student assessments, further enhancing its PU.

4.1.3. Theme 3: Time Efficiency and Reduction in Workload

Another major theme that emerged from the dataset is the time efficiency of CBT, particularly in comparison to traditional assessment methods. Many lecturers expressed that while preparing CBT exams might take some time, the overall process is significantly more efficient.
For instance, Mr. Abiodun explained that:
“With CBT, there is no need for marking scripts manually. The only task is setting the questions, and the system handles the rest. There is no need for printing, marking, or manually compiling results. The system automates these processes, making the workload significantly lighter for lecturers.”
Dr. Aliyu, a senior lecturer from the faculty of life sciences, provided additional perspective when he said:
“It allows academic staff to allocate time to other important academic activities instead of spending long hours marking papers. This improves efficiency and productivity.”
Dr. Abdullah, from the faculty of education, remarked:
“This saves time and effort compared to paper-based exams.”
These statements encapsulate a key reason why lecturers find CBT useful. Traditional exams involve printing, distributing, collecting, and manually grading scripts, all of which take considerable time. CBT eliminates most of these steps, allowing lecturers to focus on other academic responsibilities.

4.1.4. Theme 4: Concerns About Question Types and Assessment Quality

Despite its advantages, some lecturers raised concerns about CBT’s effectiveness in assessing higher-order thinking skills. Unlike traditional essay-based exams, CBT often relies on multiple-choice questions (MCQs), true/false questions, and automated responses, which may not adequately assess students’ critical thinking abilities.
Dr. Aliyu expressed his skepticism about CBT when he said:
“CBT exams are often too simple, primarily consisting of multiple-choice questions, where the answers are already provided. In our department, we don’t even encourage multiple-choice questions, whether in paper-based or computer-based formats. We believe other forms of assessment are more effective.”
In addition, Dr. Abdullah explained:
“But simulations require more than just CBT. Some types of assessment, like practical-based tasks, are better suited for other methods.”

4.1.5. Theme 5: Security and Cheating Prevention

A crucial concern among lecturers is examination security and cheating prevention. While CBT offers several security features, such as randomised questions, restricted access, and proctoring software, lecturers remain cautious about potential loopholes.
Dr. Aliyu contends that he had mixed feelings about CBT when he said:
“On the one hand, CBT is beneficial, especially in this digital era. However, the way it is implemented at our university is not entirely effective. Students often sit too close to each other, which increases the chances of communication and cheating. Some students even write exams on behalf of others due to lax identity verification. There is still much room for improvement.”
Mr. Mutiu added:
“In many cases, the level of cheating in CBT is very high.”
This concern is significant because, under the TAM, users are less likely to accept technology if they perceive security risks that compromise fairness and reliability.

4.2. Perceived Ease of Use of CBT Among University Lecturers

The thematic analysis of the generated data for this study revealed two themes related to the PEOU of CBT among the university lecturers. We present these themes in subsequent paragraphs.

4.2.1. Theme 6: Usability of CBT Varies Based on Digital Literacy

The user-friendliness of the CBT system emerged as an important factor influencing PEOU. Some lecturers find the interface simple and intuitive, while others struggle with navigation.
Mrs. Oke praised the simplicity of the CBT system when she said:
“The platform is user-friendly. Once you select an answer, you can easily move to the next question. There is no confusion, as the system is straightforward.”
Likewise, Mr. Usman compared the system to an ATM, emphasising its step-by-step guidance:
“It is very easy, especially if you are computer literate. The system guides you through each step, similar to an ATM, with clear prompts and instructions.”
However, some lecturers, particularly those less familiar with computers, found the system less intuitive. Mr. Mutiu stated:
“The platform is somewhat flexible, especially for someone who is computer-literate and familiar with online systems. However, for those who are not very conversant with computers, some training is needed before they can use them effectively.”
Furthermore, Dr. Aliyu pointed out the lack of flexibility in managing the exam process:
“Honestly, as lecturers, we don’t interact much with the platform. Most administrative functions, like uploading questions and managing the exams, are handled by the CBT administrators. I believe it would be better if academic staff were given more control over the process, including setting up the exams and monitoring attendance.”
According to TAM, a system is perceived as easy to use when it has an intuitive design. The findings suggest that digital literacy influences ease of use—lecturers comfortable with technology navigate the system effortlessly, while others require training. Moreover, limited control over exam settings makes some lecturers feel restricted in their role as examiners.

4.2.2. Theme 7: Persistent Technical Barriers Affecting CBT Implementation

A major challenge affecting lecturers’ ease of using CBT is technical difficulties, particularly related to system downtime, network failures, and power outages. These issues disrupt the smooth administration of exams and make lecturers less confident in the system’s reliability.
Mrs. Oke recounted a frustrating experience:
“Technical issues do arise, especially when the server is down. If there is an internet failure, it can disrupt the entire process, which is a significant disadvantage.”
Dr. Aliyu provided a further explanation on these technical issues when he said:
“Technical issues do occur occasionally. When they happen, we report them to the technical staff at the CBT centre. These staff members have a computer science background and are responsible for resolving such problems. However, technical issues can sometimes cause unnecessary difficulties for students.”
Similarly, Mr. Mutiu expressed his concern over examination delays due to system failures and their impacts on students and the conduct of the examination:
“For example, there was a time we were conducting a CBT exam, and the network went completely down. We had to wait for a long period, about four to five hours, for it to be restored. Some students were already exhausted by then, making it difficult to conduct the exam effectively. Eventually, we had to postpone the exam to allow students to prepare again.”
System downtime creates a stressful experience, as lecturers have to wait for IT support or alternative solutions to be arranged. In contrast, traditional paper-based exams do not face such disruptions.
Additionally, power outages were identified as a serious obstacle to the ease of using CBT. Mr. Sadiq explained:
“The only technical issue I have experienced is related to the electricity supply. Sometimes, there are power outages when we are about to start the exam.”
Technical failures not only affect ease of use but also raise concerns about fairness and exam integrity. Mr. Usman added:
“Network issues are the main challenge. Sometimes, the LMS may go down, which can delay exams. System shutdowns and connectivity problems can also affect the assessment process.”
TAM predicts that users will perceive a technology as hard to use if they encounter frequent technical difficulties. The findings indicate that technical barriers reduce lecturers’ trust in CBT, making them reluctant to rely on it. To enhance ease of use, institutions must invest in stable infrastructure, backup power sources, and reliable network connectivity.

4.3. Lecturers’ Attitudes Towards CBT

The thematic analysis of the generated data revealed one theme related to lecturers’ attitudes toward CBT. We present this theme in the subsequent paragraphs.

4.3.1. Theme 8: Positive Attitudes Toward CBT

Several lecturers acknowledged the advantages of CBT, particularly in terms of reducing workload, enhancing efficiency, ensuring broader curriculum coverage, and aligning with modern education trends. These benefits foster their positive attitudes towards CBT. Traditional assessment methods involve printing, distributing, collecting, marking, and storing exam papers, all of which are resource-intensive. CBT automates these processes, thereby saving time and reducing stress.
Mrs. Oke emphasised this benefit when she said:
“Shifting to CBT reduces paperwork, which is a major advantage. Traditional methods involve extensive writing, printing, and manual marking, which are all stressful. CBT is easier, more efficient, and provides instant feedback.”
Mr. Usman also highlighted this:
“It is beneficial in reducing paperwork and streamlining assessments.”
Some lecturers noted that CBT forces them to cover a broader range of topics, since question banks must be diverse enough to prevent predictability.
Mr. Abiodun explained:
“Since CBT requires a comprehensive approach, lecturers must cover the entire syllabus rather than focusing on just a few areas. This makes lecturers more serious about ensuring that students receive a well-rounded education.”
Mrs. Oke added:
“It has encouraged me to cover the curriculum more comprehensively. Since I need a wide range of questions, I ensure that students are exposed to all necessary topics.”
Some lecturers see CBT as a natural progression in education due to global technological advancements. Dr. Aliyu emphasised this by saying:
“It is beneficial because it aligns with the digital era and helps students develop ICT skills.”
Mr. Usman echoed this:
“There is a strong relationship because both teaching and assessments are moving toward digital formats. The transition aligns well with modern learning approaches.”

4.4. Lecturers’ Behavioural Intention to Use CBT

The thematic analysis of the generated data revealed two themes related to lecturers’ BI to use CBT. We present these themes in subsequent paragraphs.

4.4.1. Theme 9: Strong Behavioural Intention to Use CBT in the Future

Lecturers consistently expressed a clear intention to continue using CBT, often citing its efficiency and alignment with institutional goals.
Dr. Aliyu affirmed his commitment to CBT:
“Yes, I will continue using CBT, especially for lower-level courses. However, improvements need to be made, particularly in areas like seating arrangements and exam security.”
Mr. Abiodun was equally decisive when he said:
“Yes, of course. It makes the process easier and more efficient for both students and lecturers.”
Dr. Abdullah explained his preference in terms of operational necessity:
“Definitely! With the increasing number of students, manual marking is not feasible. CBT makes setting, marking, and recording results much easier.”
Furthermore, Mr. Salami, a lecturer from the mass communication department, echoed this sentiment by arguing as follows:
“The simplicity and efficiency of CBT make it worthwhile. It saves time and allows me to focus on other academic responsibilities.”
These views indicate a high BI to use CBT, driven by both personal experience and institutional dynamics. Even when limitations are acknowledged, lecturers see CBT as the future of assessments, especially for large classes and multiple-choice evaluations. The recurring expression of willingness to adopt and continue using CBT is a direct reflection of BI, which TAM posits as the immediate antecedent of actual system use. The confidence in CBT, despite its flaws, reflects a robust inclination toward its long-term adoption.

4.4.2. Theme 10: Preference for Hybrid Assessment Models and Some Recommendations

Although most lecturers supported CBT, many advised a hybrid approach, especially for courses requiring subjective responses. For instance, Dr. Abdullah explained:
“Yes, but it should be combined with paper-based exams, depending on the subject. While CBT is efficient, some courses require written responses.”
In addition, Mr. Sadiq echoed this sentiment when he said:
“CBT is ideal for large courses and multiple-choice assessments. However, for essay-based exams, traditional paper methods are preferable.”
These remarks reflect contextual thinking, where lecturers balance technological advantages with pedagogical appropriateness. They advocate CBT for its speed and efficiency, but prefer traditional methods for tasks requiring nuanced expression or analytical writing. Furthermore, some lecturers proposed various strategies to overcome barriers and improve the CBT experience. For instance, Mr. Salami proposed a centralised CBT facility as a form of infrastructural development in the university:
“We need a large CBT centre with at least 1000 well-equipped computer stations. This would not only accommodate students comfortably but could also serve as a revenue-generating facility.”
In addition, Mr. Abiodun stressed the need for continuous learning to enhance the digital literacy of lecturers when he said:
“Training should be provided to all lecturers to ensure that they can effectively use the system. Even those who are already familiar with CBT need continuous training because technology is constantly evolving.”
Further, Dr. Abdullah suggested a technical enhancement when he said:
“The system should be regularly updated for better compatibility. Additionally, more training should be provided for lecturers.”
These recommendations highlight the desire for institutional commitment to supporting CBT, not only technologically but also in terms of human resource development. Such recommendations aim to enhance both the PEOU and PU. When users are supported through infrastructure, training, and responsive systems, their experience with the technology improves, thereby increasing the likelihood of adoption and a positive attitude. The results in this section are summarised in Table 2.
Table 2. Thematic map of the results of the study.
Table 2 presents a thematic map summarising lecturers’ experience with CBT in relation to the core constructs of the TAM. The table organises the ten identified themes under PU, PEOU, AtU, and BI. It highlights both enabling factors, such as efficiency, scalability, and workload reduction, and constraining factors, including technical barriers, assessment quality concerns, and security risks. This thematic map illustrates how lecturers’ acceptance of CBT is shaped by the interaction between functional benefits, contextual challenges, and pedagogical considerations. Thus, it provides a concise synthesis of the results, which forms the basis for subsequent discussion in the next section.

5. Discussion

This study explored university lecturers’ perceptions of the usefulness of CBT in higher education using a phenomenological approach grounded in the TAM [25]. The findings reveal a multifaceted understanding of CBT’s perceived utility, shaped by institutional context, prior experience with educational technologies, discipline-specific assessment practices, and pedagogical philosophies. Broadly, lecturers reported that CBT offers substantial benefits, including administrative efficiency, timely feedback, and scalability. However, concerns were raised about its alignment with authentic assessment practices, infrastructure challenges, and implications for equity. A key finding is that PU of CBT was consistently associated with positive experiences of automated grading, streamlined administration, and reduced workload. This aligns with earlier studies indicating that CBT reduces grading workload by approximately 60% and improves feedback delivery [8,12]. In this study, lecturers frequently cited the utility of immediate score reporting and integrated analytics in supporting instructional adjustments, echoing similar observations made by Agostini et al. [7] regarding how digital assessments enhance instructional responsiveness.
Beyond confirming prior efficiency-based findings, our results extend the literature by demonstrating that lecturers’ perceptions of usefulness are dynamic rather than fixed. In contrast to many quantitative TAM studies conducted in more developed countries, such as those in the United Kingdom, Australia, and the United States, where robust infrastructure and institutional support are often assumed [39,40], participants in this study described usefulness as contingent upon contextual stability. In these contexts, PU was not merely a perception of performance enhancement but a reflection of whether the institutional ecosystem could reliably sustain CBT implementation. This finding aligns with recent work (e.g., [41]) emphasising the socio-material and infrastructural dimensions of digital transformation in higher education. It suggests that in developing higher education systems, usefulness is infrastructural as much as functional.
However, lecturers’ perceptions of usefulness were not purely functional; they were also deeply pedagogical. Many expressed reservations regarding CBT’s capability to assess higher-order thinking and discipline-specific learning outcomes, particularly in humanities and qualitative fields. This tension reflects a structural, rather than merely technical, misalignment between efficiency-driven assessment systems and pedagogical aims. While CBT enhances efficiency through automation and scalability, its design often privileges convergent responses, constraining the assessment of interpretive, dialogic, and reflective learning outcomes central to humanities disciplines. These concerns resonate with the literature questioning whether CBT, in its current implementations, adequately supports critical thinking and reflective learning [6,13]. While CBT systems increasingly incorporate multimedia and scenario-based items [14], these innovations remain underutilised or inaccessible in many contexts, limiting the alignment between CBT and constructivist pedagogical ideals. Consistent with the TAM framework, PEOU also shaped PU. Lecturers who found CBT platforms intuitive and well-supported by IT infrastructure were more likely to perceive them as useful. This relationship is central to TAM, which posits that ease of use positively affects perceptions of usefulness and user attitudes [25]. In this study, lecturers with higher digital literacy or prior experience with e-learning systems expressed greater confidence in the value of CBT. Conversely, those in under-resourced institutions or those lacking institutional training support often viewed CBT as burdensome or unreliable, reinforcing concerns about digital inequity raised in prior studies [15,16].
Another major theme in the findings relates to AtU and BI to adopt CBT. While most participants acknowledged CBT’s potential, adoption decisions were moderated by disciplinary norms, personal teaching philosophies, and perceptions of student preparedness. For example, lecturers teaching in STEM disciplines reported a stronger BI to adopt CBT, citing compatibility with quantitative and objective assessment formats. This supports findings by Sembey et al. [8] and Yeboah [11], who argue that CBT aligns well with the evaluative needs of STEM education. Importantly, the phenomenological design of this study enabled a diverse interpretation of these TAM constructs beyond their traditional quantitative applications. Instead of statistically testing causal paths, we explored how PU, PEOU, AtU, and BI were experienced and narrated by lecturers within their institutional and cultural contexts. This approach addresses a critical gap in the literature, where TAM applications in education have been critiqued for reductionism and failure to incorporate contextual realities [35,36]. Our findings suggest that lecturers’ decisions to adopt CBT are not solely based on rational evaluations of utility and effort, but also on ethical, pedagogical, and socio-cultural considerations.
In doing so, this study contributes to ongoing debates about the contextual limitations of TAM. Recent scholarship argues for integrating socio-cultural and institutional perspectives into technology acceptance research [42,43]. In highly resourced systems, adoption may be driven primarily by performance expectancy and ease. In contrast, in developing contexts, structural reliability, institutional trust, and perceptions of procedural fairness emerge as equally influential dimensions. The lived experiences described by participants indicate that TAM constructs are interpreted through socio-material realities, thereby supporting calls to embed acceptance models within broader institutional ecologies [38].
Furthermore, this study confirms the practical importance of aligning CBT implementation with lecturers’ pedagogical values. Lecturers often questioned the usefulness of CBT in assessing learning holistically and authentically, particularly when restricted to multiple-choice formats. This echoes the argument by Nieminen and Yang [5] that assessment shapes not only knowledge acquisition but also student identity and professional formation. CBT, therefore, must evolve to support the diverse assessment of “being and becoming” as students develop disciplinary identities. Additionally, issues related to technological reliability and data integrity emerged frequently. Several lecturers voiced concerns about system crashes, data loss, and the vulnerability of CBT systems to security breaches, concerns corroborated by Perry et al. [17] and Zakariya et al. [18]. These anxieties suggest that while CBT may offer theoretical efficiency and scalability, practical implementation challenges can significantly erode its PU and deter adoption.
From a policy perspective, this indicates that infrastructure investment must precede or accompany mandates for digital assessment adoption. Reliable connectivity, secure authentication systems, redundancy protocols, and cybersecurity safeguards are foundational determinants of perceived trust and usefulness. Without these safeguards, lecturers may engage in surface-level compliance rather than deep pedagogical integration. Future research should extend this inquiry in several directions. Comparative cross-national qualitative studies could illuminate how infrastructural maturity shapes the experiential meaning of TAM constructs. Longitudinal research may determine whether adoption challenges diminish as digital familiarity increases. Mixed-method studies integrating phenomenological insights with structural modelling could refine the understanding of how contextual variables mediate TAM pathways. Finally, incorporating student perspectives would generate a more ecosystemic model of CBT acceptance in higher education.

6. Conclusions

This study contributes to the growing literature on digital assessment in higher education by offering a contextualised, theory-driven understanding of how university lecturers perceive the usefulness of CBT. The findings reveal that while many lecturers acknowledge the administrative efficiency and immediate feedback advantages of CBT, concerns remain regarding its alignment with educational values, assessment validity, and digital equity. Lecturers’ willingness to adopt CBT is contingent upon their perceptions of its capacity to support authentic assessment, the usability of available platforms, and the presence of institutional infrastructure and training mechanisms.
Theoretically, this study affirms the explanatory utility of TAM in higher education technology adoption contexts, while also advancing its application beyond positivist paradigms. By adopting a qualitative lens, the study responds to critiques that TAM is overly deterministic [35] and insufficiently attentive to contextual variation [36]. Our adaptation of TAM as an interpretive framework showcases how PU, PEOU, AtU, and BI are dynamically negotiated in response to the lived realities of teaching and assessment in diverse academic environments. Practically, the findings have significant implications for policymakers, institutional leaders, and educational technologists. To foster meaningful adoption of CBT, institutions must invest not only in technological infrastructure but also in sustained professional development programs, especially for low-tech departments, that address lecturers’ pedagogical concerns and disciplinary needs. Efforts to integrate more authentic assessment formats within CBT platforms, such as case-based simulations, reflective prompts, and project-based tasks, may help bridge the perceived gap between CBT’s functionality and its pedagogical usefulness.

7. Limitations of the Study

We acknowledge that the focus of this study on a single higher education institution in Nigeria may introduce social desirability bias. Also, while the phenomenological design offers depth, it limits the generalizability of findings. Future research could employ mixed-methods approaches to triangulate qualitative insights with large-scale quantitative data, explore longitudinal shifts in perceptions, and investigate student perspectives to create a more holistic understanding of CBT’s impact. Another limitation of this study is the gender imbalance and uneven departmental representation among participants. Although the sample yielded sufficient depth for thematic saturation, these characteristics may constrain the breadth of perspectives captured and limit the transferability of the findings across disciplines and gender contexts. Moreover, the duration of 15–20 min for each interview may not have been sufficient to reach the depth expected in phenomenological research. Future research could extend interview durations to explore lecturers’ experiences with CBT in greater depth. We acknowledge that using ChatGPT as a collaborative tool is novel and has enhanced our productivity. However, caution is warranted, particularly in data analysis. The model occasionally misattributed quotes or altered them inappropriately. Researchers should remain vigilant about these limitations and thoroughly validate all AI-generated content before reporting, to ensure accuracy and preserve the integrity of participants’ voices.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Ahmadu Bello University, Education Complex (protocol code ABU-EDU-2024-2132 and date of approval: 16 September 2024).

Data Availability Statement

The data used for this study are available upon request from the author.

Acknowledgments

The author acknowledges the University of Agder library for paying the APC for publishing this article. The author declares that the generative pre-trained transformer (ChatGPT-4o) was used in writing and data analysis while preparing the manuscript for publication under strict supervision and output validation by human researchers.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Baartman, L.K.J.; Quinlan, K.M. Assessment and feedback in higher education reimagined: Using programmatic assessment to transform higher education. Perspect. Policy Pract. High. Educ. 2023, 28, 57–67. [Google Scholar] [CrossRef]
  2. Syahbrudin, J.; Istiyono, E.; Khairudin, M.; Anggraini, A.; Wusqo, I.U.; Mariam, M.; Muflihan, Y. Computer-based assessment research trends and future directions: A bibliometric analysis. Contemp. Educ. Technol. 2025, 17, ep554. [Google Scholar] [CrossRef]
  3. Boud, D.; Ajjawi, R.; Dawson, P.; Tai, J. Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work; Routledge: London, UK, 2018. [Google Scholar]
  4. Rawlusyk, P.E. Assessment in higher education and student learning. J. Instr. Pedagog. 2018, 21, 34. [Google Scholar]
  5. Nieminen, J.H.; Yang, L. Assessment as a matter of being and becoming: Theorising student formation in assessment. Stud. High. Educ. 2024, 49, 1028–1041. [Google Scholar] [CrossRef]
  6. Twist, L. Changing times, changing assessments: International perspectives. Educ. Res. 2021, 63, 1–8. [Google Scholar] [CrossRef]
  7. Agostini, D.; Lazareva, A.; Picasso, F. Advancements in technology-enhanced assessment in tertiary education. Australas. J. Educ. Technol. 2024, 40, 1–7. [Google Scholar] [CrossRef]
  8. Sembey, R.; Hoda, R.; Grundy, J. Emerging technologies in higher education assessment and feedback practices: A systematic literature review. J. Syst. Softw. 2024, 211, 111988. [Google Scholar] [CrossRef]
  9. Truscan, D.; Ahmad, T.; Tran, C.H. Applying Test-Driven Development for Improved Feedback and Automation of Grading in Academic Courses on Software Development. In Frontiers in Software Engineering Education; Bruel, J., Capozucca, A., Mazzara, M., Meyer, B., Naumchev, A., Sadovykh, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; pp. 310–323. [Google Scholar]
  10. Pengelley, J.; Whipp, P.R.; Malpique, A. A testing load: A review of cognitive load in computer and paper-based learning and assessment. Technol. Pedagog. Educ. 2025, 34, 1–17. [Google Scholar] [CrossRef]
  11. Yeboah, D. Undergraduate students’ preference between online test and paper-based test in Sub-Saharan Africa. Cogent Educ. 2023, 10, 2281190. [Google Scholar] [CrossRef]
  12. Weirich, S.; Sachse, K.A.; Henschel, S.; Schnitzler, C. Comparing test-taking effort between paper-based and computer-based tests. Appl. Psychol. Meas. 2024, 48, 3–17. [Google Scholar] [CrossRef] [PubMed]
  13. Bennett, R.E.; Goodman, M.; Hessinger, J.; Kahn, H.; Ligget, J.; Marshall, G.; Zack, J. Using multimedia in large-scale computer-based testing programs. Comput. Hum. Behav. 1999, 15, 283–294. [Google Scholar] [CrossRef]
  14. Chai, H.; Hu, T.; Wu, L. Computer-based assessment of collaborative problem solving skills: A systematic review of empirical research. Educ. Res. Rev. 2024, 43, 100591. [Google Scholar] [CrossRef]
  15. Adubika, T.O.; Agashi, P.P. A comparative study of the effect of computer based examination on text anxiety. Int. J. Adv. Res. 2021, 9, 354–358. [Google Scholar] [CrossRef]
  16. Abba, A.; Abubakar, A.A. Challenges of computer based test among senior secondary school students in Zaria local government area of Kaduna state. Afr. Sch. J. Pure Appl. Sci. 2020, 18, 90–104. [Google Scholar]
  17. Perry, K.; Meissel, K.; Hill, M.F. Rebooting assessment. Exploring the challenges and benefits of shifting from pen-and-paper to computer in summative assessment. Educ. Res. Rev. 2022, 36, 100451. [Google Scholar] [CrossRef]
  18. Zakariya, Y.F.; Danlami, K.B.; Shogbesan, Y.O. Affordances and constraints of a blended learning course: Experience of pre-service teachers in an African context. Humanit. Soc. Sci. Commun. 2024, 11, 1596. [Google Scholar] [CrossRef]
  19. Nurpeisova, A.; Shaushenova, A.; Mutalova, Z.; Ongarbayeva, M.; Niyazbekova, S.; Bekenova, A.; Zhumaliyeva, L.; Zhumasseitova, S. Research on the development of a proctoring system for conducting online exams in Kazakhstan. Computation 2023, 11, 120. [Google Scholar] [CrossRef]
  20. Tyohemba, H. Federal govement targets 100% computer-based exams by 2027. In LEADERSHIP Newspaper; Leadership Newspaper Group: Abuja, Nigeria, 2025. [Google Scholar]
  21. Azor, R.O.O.; Edna, N. Computer-based test (CBT), innovative assessment of learning: Prospects and constraints among undergraduates in University of Nigeria, Nsukka. In Implementation of Educational Technology in the 21st Century Secondary Schools in Delta; Oklahoma State University: Stillwater, OK, USA, 2019. [Google Scholar]
  22. Shobayo, M.A.; Binuyo, A.O.; Ogunmakin, R.; Olosunde, G.R. Perceived effectiveness of Computer–Based Test (CBT) mode of examination among undergraduate students in South-Western Nigeria. Int. J. Educ. Libr. Inf. Commun. Technol. 2023, 1, 1–12. [Google Scholar]
  23. Usman, K.O.; Olaleye, S.B. Effect of computer based test (CBT) examination on learning outcome of colleges of education ntudents in Nigeria. Math. Comput. Sci. 2022, 7, 53–58. [Google Scholar]
  24. Ukwueze, C.A.; Uzoagba, O.N. ICT Literacy and Readiness for Computer Based Test Among Public Secondary School Students in Anambra State. N. Media Mass Commun. 2021, 97, 1–14. [Google Scholar] [CrossRef]
  25. Davis, F.D.; Granić, A. The Technology Acceptance Model: 30 Years of TAM; Springer Nature: Cham, Switzerland, 2024. [Google Scholar]
  26. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  27. Fishbein, M.; Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research; Addison-Wesley: Reading, MA, USA, 1975. [Google Scholar]
  28. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef]
  29. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  30. Teo, T. Factors influencing teachers’ intention to use technology: Model development and test. Comput. Educ. 2011, 57, 2432–2440. [Google Scholar] [CrossRef]
  31. King, W.R.; He, J. A meta-analysis of the technology acceptance model. Inf. Manag. 2006, 43, 740–755. [Google Scholar] [CrossRef]
  32. Park, S.Y. An analysis of the technology acceptance model in understanding university students’ behavioral intention to use e-learning. Educ. Technol. Soc. 2009, 12, 150–162. [Google Scholar]
  33. Wingo, N.P.; Ivankova, N.V.; Moss, J.A. Faculty perceptions about teaching online: Exploring the literature using the technology acceptance model as an organizing framework. Online Learn. 2017, 21, 15–35. [Google Scholar] [CrossRef]
  34. Alshehri, A.; Rutter, M.; Smith, S. Assessing the relative importance of an e-learning system’s usability design characteristics based on students’ preferences. Eur. J. Educ. Res. 2019, 8, 839–855. [Google Scholar] [CrossRef]
  35. Bagozzi, R. The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm Shift. J. Assoc. Inf. Syst. 2007, 8, 244–254. [Google Scholar] [CrossRef]
  36. Benbasat, I.; Barki, H. Quo vadis TAM? J. Assoc. Inf. Syst. 2007, 8, 211–218. [Google Scholar] [CrossRef]
  37. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th ed.; Sage: Newbury Park, CA, USA, 2017. [Google Scholar]
  38. Zakariya, Y.F.; Alotaibi, S.B.; Alrashood, J.S.; Alrosaa, T.M. Computer-based testing in higher education: A phenomenology investigation into undergraduate students’ perspectives through the technology acceptance model. Front. Psychol. 2026, 17, 1602964. [Google Scholar] [PubMed]
  39. Bond, M.; Marín, V.I.; Dolch, C.; Bedenlier, S.; Zawacki-Richter, O. Digital transformation in German higher education: Student and teacher perceptions and usage of digital media. Int. J. Educ. Technol. High. Educ. 2018, 15, 48. [Google Scholar] [CrossRef]
  40. Henderson, M.; Selwyn, N.; Aston, R. What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Stud. High. Educ. 2017, 42, 1567–1579. [Google Scholar] [CrossRef]
  41. Castañeda, L.; Selwyn, N. More than tools? Making sense of the ongoing digitizations of higher education. Int. J. Educ. Technol. High. Educ. 2018, 15, 22. [Google Scholar] [CrossRef]
  42. Naveed, Q.N.; Qureshi, M.R.N.; Tairan, N.; Mohammad, A.; Shaikh, A.; Alsayed, A.O.; Shah, A.; Alotaibi, F.M. Evaluating critical success factors in implementing E-learning system using multi-criteria decision-making. PLoS ONE 2020, 15, e0231465. [Google Scholar] [CrossRef] [PubMed]
  43. Alhabeeb, A.; Rowley, J. E-learning critical success factors: Comparing perspectives from academic staff and students. Comput. Educ. 2018, 127, 1–12. [Google Scholar] [CrossRef]
