
E-Learning Readiness Assessment Using Machine Learning Methods

LEPPESE Laboratory, Institute of the Economics and Management Sciences, University Centre of Maghnia, PB 600-13300 Al-Zawiya Road, Al-Shuhada District, Maghnia 13300, Algeria
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
Computer Science Department, University of Science and Technology of Oran-Mohamed Boudiaf (USTO-MB), El Mnaouar, BP 1505, Oran 31000, Algeria
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sustainability 2023, 15(11), 8924;
Submission received: 28 April 2023 / Revised: 26 May 2023 / Accepted: 30 May 2023 / Published: 1 June 2023


Assessing e-learning readiness is crucial for educational institutions to identify areas in their e-learning systems needing improvement and to develop strategies to enhance students’ readiness. This paper presents an effective approach for assessing e-learning readiness by combining the ADKAR model and machine learning-based feature importance identification methods. The motivation behind using machine learning approaches lies in their ability to capture nonlinearity in data and flexibility as data-driven models. This study surveyed faculty members and students in the Economics faculty at Tlemcen University, Algeria, to gather data based on the ADKAR model’s five dimensions: awareness, desire, knowledge, ability, and reinforcement. Correlation analysis revealed a significant relationship between all dimensions. Specifically, the pairwise correlation coefficients between readiness and awareness, desire, knowledge, ability, and reinforcement are 0.5233, 0.5983, 0.6374, 0.6645, and 0.3693, respectively. Two machine learning algorithms, random forest (RF) and decision tree (DT), were used to identify the most important ADKAR factors influencing e-learning readiness. In the results, ability and knowledge were consistently identified as the most significant factors, with scores of ability (0.565, 0.514) and knowledge (0.170, 0.251) using RF and DT algorithms, respectively. Additionally, SHapley Additive exPlanations (SHAP) values were used to explore further the impact of each variable on the final prediction, highlighting ability as the most influential factor. These findings suggest that universities should focus on enhancing students’ abilities and providing them with the necessary knowledge to increase their readiness for e-learning. This study provides valuable insights into the factors influencing university students’ e-learning readiness.

1. Introduction

The COVID-19 pandemic has brought about an unprecedented shift in teaching, learning, and assessment, leading to the rapid development of e-learning [1,2]. Digital transformation has become necessary in many fields, including education, owing to the widespread adoption of information and communication technology [3]. Compared to other industries, however, universities have been slower to adopt integrated digital transformation models for assessing their level of maturity. This highlights the importance of e-learning readiness (ELR) in universities and the need to introduce administrative and educational adjustments to goals, plans, programs, practices, and means in order to improve digital readiness [4]. Implementing e-learning in universities has also become increasingly important to national economies, as it leverages electronic mechanisms that offer flexibility, accessibility, and interactivity.
Over the last decade, ELR has become an essential factor in the success of universities in achieving their educational goals [5,6]. ELR refers to the degree to which an institution is prepared to implement e-learning as a viable mode of education [7,8]. This readiness is based on various factors, such as organizational culture, technology infrastructure, faculty, and student readiness. The widespread utilization of the Internet and other digital communication systems has led to the emergence of e-learning, which provides numerous benefits such as accessibility, adaptability, and consistency [9]. Unlike face-to-face classes, virtual programs offer a more optimal learning environment and redundancy for those who can obtain their materials online [10]. The COVID-19 outbreak has underscored the significance of e-learning and digital preparedness in higher education. Educational technologies such as Desire to Learn (D2L), Massive Open Online Courses (MOOCs), Moodle, and Blackboard have been introduced to facilitate the educational process [11]. According to a systematic review conducted in [12], advanced programs have been developed to support student learning by providing performance monitoring and feedback. The aim is to create protocols and resources that enhance the appeal and accessibility of learning.
E-learning has become a crucial alternative as institutions must continuously adapt to rapid change to reach their goals. However, implementing online education poses challenges, including limited infrastructure, policies, financing, and awareness among human resources [13]. Accepting and implementing the required change in thinking and behavior is crucial [14], as resistance to change can affect users' psychological state and performance. Managing change requires gradually and positively shifting users' attitudes, training and persuading them to build digital literacy, and integrating technologies into higher education institutions. Change management is therefore central to implementing change processes in educational institutions, starting with building awareness of the change and creating the desire to ensure the necessary capabilities and their continuity [15].
Assessing ELR in universities in developing countries is crucial for successful implementation and can be achieved through various means such as surveys, interviews, and focus groups. ELR assessments can identify potential barriers to e-learning adoption, such as lack of infrastructure, technological resources, and instructor expertise [16]. Additionally, readiness assessments help identify opportunities for e-learning, such as reaching more students and offering flexible learning options. Ultimately, conducting ELR assessments can help ensure that universities in developing countries have the necessary resources and support to successfully implement e-learning programs, which can enhance access to education and improve student outcomes. Recently, there has been a growing number of studies evaluating universities’ readiness to implement e-learning in developing countries [17,18,19,20,21,22]. These studies seek to identify the factors that may contribute to the successful adoption of e-learning, such as the availability of technological infrastructure, the capacity of faculty and staff to deliver online courses, the level of support from university administration, and the perceptions of students and instructors toward e-learning. For instance, the study in [23] investigated the readiness of academic staff at the University of Ibadan in Nigeria to adopt e-learning. To this end, data were collected from 240 lecturers who expressed uncertainty about the readiness of students to engage in e-learning and their ability to integrate e-learning into their existing workload. However, the lecturers expressed confidence in the capacity of IT staff at their institutions. The results showed that several factors, including societal and public readiness, financial preparedness, training preparedness, ICT equipment preparedness, and availability of e-learning materials and contents, influenced Nigerian universities’ readiness toward e-learning adoption. 
The study conducted in [17] utilized a quantitative approach and an online questionnaire to assess the ELR of African engineering and IT students. Most participants were within the age range of 18 to 24, corresponding to the average age of first-year students who enroll in universities in South Africa. The findings indicated that the students exhibited proficiency in fundamental e-learning skills; however, they required further academic assistance, particularly in the areas of time management and problem-solving skills. In [24], Keramati et al. investigated the role of readiness factors in the relationship between e-learning factors and outcomes and proposed a conceptual model that categorizes readiness factors into technical, organizational, and social groups. The study was conducted with 96 high school teachers in Tehran using technology-based education. The data collected from a questionnaire were analyzed using hierarchical regression analysis, latent moderated structuring (LMS) technique, and MPLUS3 software. This revealed that readiness factors have a moderating role in the relationship between e-learning factors and outcomes, and organizational readiness factors significantly impact e-learning outcomes. The study in [25] examined the implementation of e-learning during the COVID-19 lockdown. To this end, questionnaires and interviews were conducted with students, teachers, management board members, and families. The findings emphasize the need for adaptable e-learning strategies that combine online and traditional teaching methods. Moreover, training is recommended to effectively integrate information and communication technologies (ICT) into classrooms. This analysis provided valuable insights for educational institutions and policymakers seeking to implement e-learning in similar circumstances. In [26], an assessment of remote learning in higher education in Poland was carried out considering the students’ perspective during the COVID-19 pandemic. 
The evaluation identified four dimensions: socio-emotional, developmental, time–financial, and negative attitude. Findings from a survey of 999 students highlight the benefits of remote learning, such as time-saving, location flexibility, work–study balance, and cost reduction. Conversely, disadvantages include the loss of social connections, technology fatigue, and increased distractions. The shape and duration of remote learning play a role in shaping students’ evaluations.
This paper introduces an innovative approach to assessing ELR by leveraging the strengths of the ADKAR model and machine learning methods. This study employs the ADKAR model as a change management framework to evaluate organizational readiness for change owing to its well-established efficacy. While previous studies have used statistical methods to examine the relationship between ADKAR factors and ELR, these methods may not capture the nonlinearity present in the data. In contrast, this paper employs powerful machine learning-driven variable importance identification approaches to identify the most important ADKAR factors that impact ELR. This strategy has the advantage of capturing nonlinear relationships in the data and providing flexibility in handling large and complex datasets. Moreover, we surveyed faculty members and students in the Economics faculty at Tlemcen University in Algeria using the ADKAR model to identify the key dimensions of awareness, desire, knowledge, ability, and reinforcement that impact ELR. Specifically, we employed machine learning techniques, including random forest and decision tree models, to identify the most important ADKAR factors affecting ELR. In addition, SHapley Additive exPlanations (SHAP) analysis was applied to explore further the impact of each ADKAR factor on the final prediction, highlighting ability and knowledge as the most influential factors. The combination of the ADKAR model and machine learning models provides a more comprehensive understanding of the factors affecting ELR, which can be used to inform policy and decision-making in the institution.
The subsequent sections of this paper are structured as follows. Section 2 provides a succinct overview of recent endeavors in ELR assessment within academic institutions. In Section 3, the preliminary materials utilized in this study, comprising the ADKAR model, the feature selection techniques adopted, and the proposed framework, are briefly explained. Section 4 presents a comprehensive discussion of the case study outcomes based on data collected from Tlemcen University in Algeria. Ultimately, we conclude this study by presenting our findings and recommendations.

2. Related Works

Recently, many studies have been conducted to evaluate the current status of ELR in universities and identify the factors that affect it [27,28]. These studies often use surveys and questionnaires to collect data from students, faculty, and staff and employ various analytical techniques, such as statistical analysis and machine learning, to identify the key factors that impact ELR. For instance, in [29], Laksitowening et al. present a methodology for measuring ELR at Telkom University in Indonesia using a multidimensional model. The assessment results were used to identify priority factors and areas for improvement. The evaluation was performed using surveys and data analysis, and the employed tool consists of a series of indicators for assessing the readiness for e-learning in five domains and thirteen factors. According to the evaluation outcomes, Telkom University has areas that require enhancement in terms of e-learning implementation. The top three priority factors are hardware, learning methods, and learning content. In [30], the aim was to assess the readiness for e-learning in two technological universities located in Yangon and Mandalay, Myanmar. This was achieved by investigating various characteristics, facilities, environments, and management practices pertaining to the universities and the students. To this end, the study surveyed 1024 students and found that while the universities were ready in characteristics and environmental dimensions, they were not ready in their facilities and management. The study suggested possible solutions for the universities to improve their readiness in these areas, such as having enough computers and a fast internet connection, better IT infrastructure, sufficient budgets and IT technicians, training, and tutorials, and preparatory knowledge sharing with national/international e-learning experts. In [31], Saekow et al. conducted a comparative analysis of higher-education ELR between Thailand and the USA. 
By assessing success factors across five dimensions, they concluded that institutional support from high-level administrators is a critical aspect of implementing successful online programs. Their findings also emphasized the significance of providing adequate resources for online programs, establishing clear project plans, launching initial program offerings, and conducting teacher training sessions. In [32], Irene et al. assessed ELR among educators and learners in selected Gauteng, South Africa schools. Utilizing a 29-item questionnaire, the study found that the expectations for ELR of educators and learners in the designated schools were met, but some areas required improvements, such as technology, content, and personnel. The STOPE (Strategy, Technology, Organization, People, and Environment) model was utilized to evaluate the level of readiness. The results showed a rating of 3.86 on the Likert scale, indicating that while the institution is generally considered “ready”, some improvements are still necessary.
Another study in [33] investigated critical factors that influence students’ preparedness to use e-learning systems in higher education, focusing on Wayamba University in Sri Lanka. The study results indicated that learners’ willingness to participate in e-learning activities significantly affects their readiness to use the system. Interestingly, the study revealed that the most crucial factors for adopting e-learning systems were e-learning confidence and training, rather than ELR. The study in [34] aimed to evaluate the ELR perspectives of medical students at the University of Fallujah in Iraq. A semi-structured self-administered questionnaire was utilized to collect data. The findings revealed that most medical students in the university were not adequately prepared for e-learning and lacked sufficient experience with ICT. Importantly, this study suggests that the university must invest in technology and provide formal training to students to facilitate e-learning as a viable learning approach. In [35], Samaneh et al. conducted a study to evaluate the level of ELR among Iranian students learning English as a foreign language and examine its correlation with their English language proficiency. The research involved 217 EFL students from Shiraz Azad University, who completed a self-assessment questionnaire on their ELR and the Test of English as a Foreign Language (TOEFL). The findings showed that the students exhibited a high level of preparedness for e-learning and demonstrated a positive relationship between their English language proficiency and readiness for e-learning. Ref. [36] considered the analysis of students’ readiness and facilitators’ perception toward e-learning in India based on machine learning algorithms. The survey results show that students have low satisfaction levels with e-learning systems and face data security and plagiarism issues. 
Teachers also had low satisfaction levels with e-learning and felt there was a lack of e-learning training and pedagogical models. In [37], Alotaibi et al. conducted a study to evaluate the readiness level among students at Shaqra University in Saudi Arabia to utilize an e-learning platform. The study utilized a questionnaire to measure five factors: study skills, technology skills, technology access, time management skills, and motivation. The findings revealed that the students enrolled at Shaqra University demonstrated an acceptable level of readiness for using the e-learning platform and possessed specific competencies that would aid in efficiently utilizing the platform. Furthermore, the study did not find any significant difference in readiness between male and female students, although female students exhibited higher levels of readiness in terms of technology and time management skills. In [38], Hung et al. construct and validate a multidimensional tool named the Online Learning Readiness Scale (OLRS) to assess college students’ readiness for online learning through confirmatory factor analysis. The OLRS encompasses five dimensions: learner-controlled and directed learning, computer/internet self-efficacy, online communication self-efficacy, motivation to learn, and online learning anxiety. In the study, data were gathered from 1051 undergraduate students in Taiwan. The findings showed that students had high readiness levels in computer/internet self-efficacy, motivation to learn, and online communication self-efficacy but low readiness levels in learner-controlled and directed learning. Furthermore, the study revealed that gender did not significantly influence the OLRS dimensions, while upper-grade students exhibited higher levels of readiness.

3. Materials and Methods

This section briefly presents the basic concept of the ADKAR model and describes the proposed ADKAR-based variable importance identification via random forest and decision tree algorithms.

3.1. ADKAR Model

The ADKAR model was used in this study to identify the factors that affect ELR at an Algerian university. The model was selected because it is a well-established change management framework that provides a structured approach to assessing an organization's readiness for change, which is essential for e-learning adoption. E-learning represents a significant change for universities, and the ADKAR model offers a structured way to understand the key factors that shape its adoption and implementation. Using this model, the study identified the critical factors that affect ELR: awareness, desire, knowledge, ability, and reinforcement.
The ADKAR model is suitable for assessing ELR in Algerian universities as it focuses on the individual and organizational level changes required to successfully implement e-learning. The five key elements of ADKAR align well with the key factors that impact ELR, such as understanding the need for e-learning, having the desire to adopt e-learning, having knowledge of e-learning tools and platforms, having the ability to use e-learning tools, and having reinforcement to continue using e-learning [39]. However, it is important to note that ADKAR is a model for managing change and not specifically for assessing ELR. Advantages of using ADKAR for e-learning readiness assessment include [40]:
  • It considers the individual-level changes needed, such as the students’ awareness, desire, and ability to use e-learning platforms.
  • It also addresses the organizational level changes required, such as the availability of resources and support for e-learning implementation.
  • It can be used to identify readiness gaps and develop a plan to address them.
By assessing readiness at each stage, universities can identify areas where additional support and training may be needed to successfully implement e-learning.
Several recent studies have explored using the ADKAR model for assessing ELR in different sectors. For example, in a recent study by Ali et al. (2021), the researchers investigated the impact of the ADKAR change model on technology acceptance in the banking sector based on the Technology Acceptance Model (TAM) and Hofstede’s dimensions of national culture [41]. The study was conducted by distributing 500 questionnaires to employees in five major banks across five cities in Pakistan. The results showed a significant discrepancy between the different dimensions of the studied models. They revealed that the ADKAR change model was the most effective in addressing resistance to change and promoting technology acceptance. The study in [42] applied the ADKAR model in the Tourism Authority, examining how change strategies were handled and their resulting impact on performance. The study found the proposed ADKAR model to be an effective way to broaden the base of change across the areas affecting the authority’s performance. The study in [43] surveyed 288 female secondary school teachers to evaluate the level of change management practices in government secondary schools in the city of Dammam based on the five elements of the ADKAR model. The results revealed that the overall degree of change management practice was high, with awareness being the most prevalent element, followed by ability, desire, knowledge, and reinforcement. The research also identified several obstacles to managing change, including the lack of financial resources necessary for change, exaggerated satisfaction with the current school situation, and weak administrative support for change initiatives at the school level. Ref. [44] employed the ADKAR change management model to guide the development, design, and implementation of an Excel-based management tool for assessing and planning pediatric healthcare programs.
The study demonstrated the efficacy and high efficiency of the ADKAR model in facilitating the successful implementation of the tool. Ref. [45] investigated obstacles to change management in the public sector for educational institutions in Gulf Cooperation Council countries using the ADKAR model. The results from five public educational schools showed weaknesses in the knowledge component due to the lack of predetermined guidelines for the system’s implementation, inability to have authority and responsibility for decision-making, lack of regular performance enhancement, and an ineffective reward system that is not linked to performance appraisal. A strength was found in the desire for change.

3.2. ADKAR-Based Feature Selection via Random Forest and Decision Tree

In this study, the ADKAR factors are used to assess ELR. The ADKAR model is based on five factors that must be present for a successful change management initiative: awareness, desire, knowledge, ability, and reinforcement. To assess ELR, we use these five factors as feature variables, whose values are obtained through a questionnaire administered to participants. Responses are recorded on a five-point Likert scale, where one represents “strongly disagree” and five represents “strongly agree”. After collecting the responses, we apply correlation analysis together with machine learning techniques, namely decision tree and random forest, to identify the importance of each ADKAR factor in predicting ELR. This enables us to determine which factors are the most important in predicting ELR and thus prioritize interventions to improve ELR in the university.
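As an illustration, the correlation step can be sketched in a few lines of Python. The data below are synthetic Likert responses generated around a shared latent factor; the dimension names follow ADKAR, but the values and noise levels are purely illustrative assumptions, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 320  # sample size matching the survey scale

# Synthetic five-point Likert scores: each dimension is a noisy view
# of a shared latent readiness factor (illustrative data only).
latent = rng.normal(size=n)

def likert(signal, noise):
    """Map a continuous signal to a 1-5 Likert response."""
    return np.clip(np.round(3 + signal + rng.normal(scale=noise, size=n)), 1, 5)

awareness = likert(latent, 1.2)
desire = likert(latent, 1.0)
knowledge = likert(latent, 0.9)
ability = likert(latent, 0.8)
reinforcement = likert(latent, 1.6)
readiness = likert(latent, 0.7)

# Pairwise Pearson correlation of each ADKAR dimension with readiness.
for name, x in [("awareness", awareness), ("desire", desire),
                ("knowledge", knowledge), ("ability", ability),
                ("reinforcement", reinforcement)]:
    r = np.corrcoef(x, readiness)[0, 1]
    print(f"{name:>13}: r = {r:.4f}")
```

With real questionnaire data, the same `np.corrcoef` call yields the pairwise coefficients reported in the study.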
Random forest (RF) and decision tree (DT) algorithms are commonly used for feature selection and variable importance identification [46,47,48]. These algorithms can be applied to identify the most significant ADKAR factors that contribute to ELR.

3.2.1. Decision Tree-Based Variable Importance Identification

A decision tree is a non-parametric supervised learning method for classification and regression tasks [49]. It partitions the input space into a set of rectangles, with each partition having a different prediction. The decision tree algorithm can identify important features most predictive of the target variable. One of the methods to identify the variable importance using decision trees is the mean decrease impurity (MDI). It measures the feature’s importance as the total reduction of the impurity criteria caused by the feature in the tree. The impurity criterion depends on the task: the Gini index for classification and the mean squared error for regression. The importance score for a feature X i using MDI can be computed as follows [49]:
$$\mathrm{MDI}(X_i) = \sum_{t=1}^{T} \Delta \mathrm{imp}(t, X_i)\,\frac{n_t}{n},$$
where $T$ is the total number of decision nodes in the tree, $n_t$ is the number of samples that reach node $t$, $n$ is the total number of samples in the training set, and $\Delta \mathrm{imp}(t, X_i)$ is the amount by which the impurity of node $t$ decreases when feature $X_i$ is used for splitting, weighted by the fraction of samples reaching that node. The MDI method provides a relative ranking of the importance of each feature, where a higher score indicates a more important feature. In the context of the ADKAR model, this method can identify the most important factors contributing to the readiness for e-learning.
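A minimal sketch of MDI-based importance uses scikit-learn's `DecisionTreeRegressor`, whose `feature_importances_` attribute implements the (normalized) MDI score. The data-generating weights below are illustrative assumptions, not the study's results:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 300
# Synthetic ADKAR-style Likert features (illustrative only).
X = rng.integers(1, 6, size=(n, 5)).astype(float)
# Assume readiness is driven mainly by ability (col 3) and knowledge (col 2).
y = 0.55 * X[:, 3] + 0.25 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(scale=0.3, size=n)

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# feature_importances_ is the impurity decrease per feature, summed over
# the nodes that split on it and normalized to sum to one (MDI).
names = ["awareness", "desire", "knowledge", "ability", "reinforcement"]
for name, score in sorted(zip(names, tree.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name:>13}: {score:.3f}")
```

Because the synthetic target weights ability most heavily, the tree splits on it first and MDI ranks it highest.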

3.2.2. Identification of ADKAR Factors’ Importance Using RF

Random forest is a popular ensemble learning method for classification and regression tasks [50]. It is a combination of decision trees, where multiple decision trees are created and combined to improve the overall performance and reduce the variance of a single decision tree. One of the key features of random forest is the ability to identify feature importance, which is a measure of how much each feature contributes to the prediction or classification of a model [51,52]. This can be useful for identifying the most important features in a dataset and understanding the relationships between features and the target variable. Random forest calculates feature importance by measuring the decrease in impurity (e.g., Gini impurity) when a feature is used to split the data at each node. The feature with the highest decrease in impurity is considered the most important feature [50]. This process is repeated for each tree in the forest, and the feature importance scores are averaged across all trees. The mean decrease impurity (MDI) is commonly used to calculate feature importance in the random forest. The MDI calculates feature importance by measuring the average decrease in impurity across all trees in the forest when a feature is used to split the data at each node. More detailed information about random forest-based variable importance identification can be found in [46,47,53].
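The averaging across trees can be sketched as follows; again, the data-generating weights are illustrative assumptions rather than survey results:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 300
names = ["awareness", "desire", "knowledge", "ability", "reinforcement"]
X = rng.integers(1, 6, size=(n, 5)).astype(float)  # synthetic Likert scores
y = 0.55 * X[:, 3] + 0.25 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(scale=0.3, size=n)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ is the MDI score averaged over all trees in the
# forest; the spread across individual trees gives a rough stability check.
per_tree = np.array([t.feature_importances_ for t in forest.estimators_])
for i in np.argsort(forest.feature_importances_)[::-1]:
    print(f"{names[i]:>13}: {forest.feature_importances_[i]:.3f} "
          f"(spread {per_tree[:, i].std():.3f})")
```

Averaging over many bootstrapped trees makes the ranking less sensitive to any single tree's split choices, which is why the forest-based scores are generally preferred over a lone decision tree.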

3.3. SHapley Additive exPlanations (SHAP) for Identifying the Importance of ADKAR Factors

SHAP (SHapley Additive exPlanations) is a popular model-agnostic method for interpreting the predictions of machine learning models [54]. It provides a way to explain the contribution of each feature to the prediction for a specific sample [55]. To identify the importance of ADKAR factors, SHAP values can be used to quantify the contribution of each factor to the prediction of ELR. Specifically, for each observation in the dataset, the SHAP values measure the impact of each ADKAR factor on the prediction. SHAP values are computed using Shapley values from cooperative game theory, in which a value is assigned to each feature based on its contribution to the prediction across the possible coalitions with other features. By aggregating these Shapley values over all possible coalitions, the SHAP values provide a measure of the contribution of each feature to the prediction. Using SHAP values, one can identify which ADKAR factors have the highest impact on the prediction of ELR. This information can be used to prioritize factors for improvement and intervention. Additionally, SHAP values can be visualized using a SHAP summary plot, which provides an overview of feature importance across the entire dataset.
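The coalition-based computation can be illustrated exactly for a small model. The sketch below enumerates all feature coalitions for a hypothetical linear readiness model (the weights, respondent scores, and baseline are invented for illustration); for a linear model, each Shapley value reduces to w_i multiplied by (x_i minus its baseline):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline, n_features):
    """Exact Shapley values for the prediction f(x): enumerate every
    coalition of the other features; features outside the coalition
    are set to their baseline value."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n_features)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n_features)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear readiness model over
# (awareness, desire, knowledge, ability, reinforcement).
weights = [0.10, 0.15, 0.25, 0.55, 0.05]
f = lambda z: sum(w * v for w, v in zip(weights, z))

x = [4, 5, 4, 5, 3]         # one respondent's Likert scores (invented)
baseline = [3, 3, 3, 3, 3]  # mid-scale reference point
phi = shapley_values(f, x, baseline, 5)
print([round(p, 3) for p in phi])
```

The values satisfy the efficiency property: they sum exactly to f(x) minus f(baseline). For tree ensembles, the `shap` library computes the same quantities efficiently via TreeSHAP rather than brute-force enumeration.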
Overall, the main steps for ELR assessment in Algerian universities using the ADKAR model and variable importance identification are summarized next.
  • Develop a questionnaire based on the ADKAR model and other relevant ELR frameworks.
  • Identify a representative sample of academic staff and students in the university.
  • Administer the questionnaire to the identified participants in person or through online platforms.
  • Collect and compile the responses from the participants.
  • Use quantitative methods such as descriptive statistics and regression analysis to analyze the data.
  • Apply random forest and decision tree algorithms to the ADKAR factors to identify the most important factors affecting ELR.
  • Develop recommendations based on the findings of the study, which can be used to enhance ELR in Algerian universities, specifically at Tlemcen University in Algeria.
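The analytical steps above can be sketched end to end. The questionnaire export format, column names, and response values here are synthetic stand-ins for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical questionnaire export: one row per respondent, one column
# per ADKAR dimension plus the readiness score (all 1-5 Likert values).
rng = np.random.default_rng(3)
cols = ["awareness", "desire", "knowledge", "ability", "reinforcement"]
df = pd.DataFrame(rng.integers(1, 6, size=(320, 5)), columns=cols)
df["readiness"] = np.clip(np.round(
    0.2 * df["knowledge"] + 0.5 * df["ability"]
    + rng.normal(scale=0.5, size=320) + 1), 1, 5)

# Fit both models and rank the ADKAR factors by importance.
X, y = df[cols], df["readiness"]
ranking = pd.DataFrame({
    "RF": RandomForestRegressor(random_state=0).fit(X, y).feature_importances_,
    "DT": DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y).feature_importances_,
}, index=cols).sort_values("RF", ascending=False)
print(ranking)
```

The resulting table gives the factor ranking on which the recommendations in the final step are based.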

4. Results and Discussions

4.1. Study Design

A comprehensive data collection approach was taken to assess the state of digital and online education in Algerian universities. Figure 1 illustrates the general framework adopted in this research design process. First, we conducted a literature review, an essential preliminary step in any research design, involving a comprehensive analysis of previous studies, research papers, and other relevant literature related to the topic under investigation. The goal was to gain a deeper understanding of the research topic, identify gaps, and develop research questions and hypotheses. Afterward, semi-open interviews were conducted with a random sample of 15 individuals from the university community. In addition, seven experts designed and reviewed a draft questionnaire to ensure its content validity, readability, and lack of ambiguity. Among them, three engineers specialized in information and knowledge systems, who are also administrators of digital education platforms, contributed their expertise in digitization. Additionally, the study benefited from the valuable contributions of two professors with expertise in econometrics and two professors experienced in scientific research methodology. The questionnaire was divided into two sections: the first collected demographic information, while the second included 25 questions related to the study variables. To ensure response accuracy, 320 respondents, including professors and students from Tlemcen University in Algeria, received a copy of the questionnaire in paper and electronic form. All responses were scaled on a five-point Likert scale ranging from strongly disagree to strongly agree. Data analysis was conducted using the SPSS program. Additionally, machine learning methods were utilized to identify the most important features based on the ADKAR factors.
Finally, based on the ranking of the ADKAR factors, recommendations were made for effective management and change management strategies.

4.2. Study Model and Research Hypotheses

Based on the ADKAR model’s theoretical constructs and the conceptual framework from previous research, six hypotheses were formulated to assess the ELR at Tlemcen University in Algeria.
Hypothesis 1 (H1).
Education level has a positive and statistically significant effect on ELR.
Hypothesis 2 (H2).
Change awareness positively and significantly impacts ELR.
Hypothesis 3 (H3).
Change desire has a positive and statistically significant effect on ELR.
Hypothesis 4 (H4).
Change knowledge positively and significantly influences ELR.
Hypothesis 5 (H5).
The ability to change has a positive and statistically significant effect on ELR.
Hypothesis 6 (H6).
Change reinforcement positively and significantly affects ELR.
Figure 2 depicts a graphical representation of the conceptual model adopted in this study. The model demonstrates the relationship between the ADKAR factors and the investigated hypotheses.

4.3. Data Collection

The questionnaire was divided into two parts: the first section is devoted to the respondents’ demographic information, including gender and education level, while the second section is devoted to the study variables and includes 25 questions. To avoid the problem of missing values, all answers were mandatory, and respondents received an Arabic copy of the questionnaire to ensure the accuracy of the responses. All items were scaled on a five-point Likert scale, ranging from (1) strongly disagree to (5) strongly agree, to provide more variance and give respondents a wider range of choices. The data of this study were collected using both paper and electronic questionnaires from an estimated sample of 320 respondents, including professors and students from the Economics faculty at Tlemcen University in Algeria, as shown in Figure 1.

4.4. Sampling and Sample Size

The research was conducted with the Economics faculty at Tlemcen University in Algeria, with the faculty’s professors and students as the target population. To determine the minimum sample size required for a population of 3000 at the desired level of accuracy, we used the Cochran formula, which is often applied in survey research. The minimum sample size, n, is computed using the Cochran formula as follows [56]:
n = \frac{\dfrac{Z^2 p q}{d^2}}{1 + \dfrac{1}{N}\left(\dfrac{Z^2 p q}{d^2} - 1\right)},
where Z represents the z-score associated with the confidence level, with a value of 1.96 for 95% confidence, N is the population size, p is the estimated proportion of the population with the attribute of interest, and q = 1 − p is its complement. A value of p = 0.5 is used to maximize the required sample size for a given margin of error. Finally, d represents the margin of error, which in this case is 0.03.
Thus, the minimum sample size requirement for a population of 3000 with a 95% confidence level and a 3% margin of error is 268; the 365 valid responses collected are therefore deemed adequate in this context.
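As an illustrative sketch, the Cochran sample-size computation above can be written in Python. The function name and parameter defaults are ours, not part of the study's toolchain, and the resulting figure is sensitive to the choice of Z, p, d, and the rounding convention applied:

```python
import math

def cochran_sample_size(N, z=1.96, p=0.5, d=0.03):
    """Minimum sample size for a finite population of size N (Cochran formula)."""
    q = 1 - p
    n0 = (z ** 2) * p * q / d ** 2               # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

# Study parameters: population of 3000, 95% confidence, 3% margin of error
n_min = cochran_sample_size(N=3000)
```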

4.5. Validity and Reliability

The assessment of the measurement tool’s validity and reliability is a crucial statistical procedure that ensures the accuracy of the content and structure of the phrases as well as their level of readability and lack of ambiguity. In this study, the validity of the survey instrument is established by its basis on the well-established ADKAR change management model. The ADKAR model has been widely used in the literature to assess change readiness, including ELR. Furthermore, seven experts in methodology and statistical and econometric methods reviewed the content to ensure the survey instrument’s content validity. Afterward, a pilot study was conducted with a preliminary questionnaire administered to 49 respondents from the University Center of Maghnia to evaluate the initial reliability and validity of the instrument. As a final step, the validity and reliability of each scale in the questionnaire were evaluated using Cronbach’s alpha test to ensure the questionnaire’s effectiveness in collecting data and achieving the study objectives by checking the internal consistency (reliability) of the measurement tool statements.
Cronbach’s alpha, introduced by Lee Cronbach in 1951 to assess a test’s internal consistency, is used to evaluate the internal consistency of a questionnaire or scale. It is a coefficient that ranges from 0 to 1. Generally speaking, the scale is considered to indicate good internal consistency if the value of Cronbach’s alpha is high, typically above 0.7. A high coefficient indicates that the questionnaire or scale items are highly correlated and measure the same underlying construct. The alpha coefficient can be calculated using the following formula:
α = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \bar{r}_{i.}\,(1 - \bar{r}_{i.})}{\bar{v}\,(1 - \bar{v})}\right),
where r ¯ i . is the average correlation between the i-th item and all other items, and v ¯ is the average variance of the items. To use Cronbach’s alpha to evaluate a questionnaire, the following steps are typically followed:
  • Step 1: Compute the score for each question
    Add up the responses for each question across all participants.
  • Step 2: Compute the total score for each participant
    Sum up the responses for each participant across all questions.
  • Step 3: Calculate the mean and standard deviation of the total scores across all participants. To do so, we can use the following formulas:
    \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n},
    s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}},
    where x ¯ is the sample mean, s is the sample standard deviation, n is the sample size, and x i is the total score for participant i.
  • Step 4: Calculate the total variance of the responses for all items in the survey.
  • Step 5: Calculate the alpha coefficient using Equation (3).
Table 1 lists the computed Cronbach’s alpha values for each dimension. The Cronbach’s alpha values for each variable (awareness, desire, knowledge, ability, and reinforcement) ranged from 0.68 to 0.835, indicating good internal consistency reliability (Table 1). Moreover, the total alpha value of 0.886 indicates a high level of internal consistency reliability for the entire survey, which is desirable when conducting research using survey data. Therefore, these results suggest that the survey questions used to measure the ADKAR model dimensions are reliable and consistent in measuring the intended constructs.
In this study, the validity of the measurement scales was evaluated using convergent and discriminant validity. These validity assessments are crucial for establishing the reliability and accuracy of the measurement scales used. They provide evidence that the measures accurately capture the intended concepts and do not measure unrelated factors. Convergent validity was assessed using the average variance extracted (AVE), with a recommended threshold of at least 0.50 according to Fornell and Larcker [57]. The results presented in Table 1 indicate that all constructs achieved an AVE exceeding 0.5, with their composite reliability (CR) values surpassing the threshold of 0.5. These findings suggest that the measurement scales were reliable and converged as expected without overlapping measures.
Discriminant validity was evaluated using the Fornell and Larcker (1981) criterion [57,58]. This criterion involves comparing the inter-scale correlations between concepts with the square root of the AVE achieved by each concept. Discriminant validity is established if the square root of AVE is greater than the maximum construct correlation. The study’s findings confirm that all measures demonstrated discriminant validity, affirming their distinct measurement of underlying concepts and establishing construct validity [59].
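Assuming standardized factor loadings are available for each construct, the AVE and composite reliability checks described above can be sketched as follows (the loadings below are hypothetical values for illustration, not the study's estimates):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, float)
    return float((lam ** 2).mean())

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, float)
    num = lam.sum() ** 2
    return float(num / (num + (1 - lam ** 2).sum()))

ability_loadings = [0.82, 0.78, 0.75, 0.80]   # hypothetical loadings
print(round(ave(ability_loadings), 3),         # 0.621 > 0.50 threshold
      round(composite_reliability(ability_loadings), 3))
```

Discriminant validity can then be checked by comparing the square root of each construct's AVE against its correlations with the other constructs.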
Overall, the results suggest that the measurement scales used in the study were valid and reliable for assessing the investigated concepts, with clear differentiation between them.

Participants’ Characteristics

Table 2 presents the demographic characteristics of the participants involved in this study. Of the total participants, the majority (64.4%) were males, while females accounted for 35.6%. Regarding educational qualifications, the highest proportion of respondents (50.3%) held a Ph.D., while only a small percentage (0.9%) had a master’s degree. Among the remaining participants, 4.7% were students in their first year of higher education, and 44.1% were students in their second year. It is noteworthy that the participants were drawn from a single Algerian university, and the results may not be generalizable to other universities or populations. However, the large sample size and diverse educational backgrounds of the participants increase the validity and reliability of the findings.
Table 3 summarizes the questionnaire responses on e-learning readiness, an important measure for the participants in this study, based on a five-point Likert scale. The results indicate that most participants (73.4%) have a high level of readiness for e-learning, with an additional 15.9% having an even higher level of readiness. However, it is concerning that a significant minority of participants (10.6%) have a low to moderate level of readiness (2.8% at level 2 and 7.8% at level 3). This highlights the need for interventions to increase the readiness of these participants and bring them up to the same level as most participants. The interventions could include providing training on the technical aspects of e-learning and addressing any concerns or fears participants may have about e-learning. Additionally, it may be useful to identify the reasons for the lower readiness levels and specifically address them. Overall, the findings suggest that while most participants are ready for e-learning, there is still room for improvement to ensure that all participants are fully prepared and able to effectively engage with e-learning.
Table 4 shows the pairwise correlation coefficients between the readiness and the gender and education level. The correlation coefficient is a statistical measure quantifying the degree to which two variables are linearly related. The correlation coefficient between two variables X and Y can be calculated using the following formula:
r_{XY} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}},
where n is the number of observations, x i and y i are the values of the variables for the ith observation, x ¯ and y ¯ are the means of the variables, and r X Y is the correlation coefficient between X and Y.
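The coefficient can be computed directly from this definition; the sketch below uses hypothetical scores and agrees with numpy's built-in np.corrcoef:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from its definition."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return float((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))

x = [1, 2, 3, 4, 5]          # hypothetical scores
y = [2, 1, 4, 3, 5]
print(pearson_r(x, y))       # agrees with np.corrcoef(x, y)[0, 1]
```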
From Table 4, we observe a relatively strong positive correlation, with a coefficient of 0.841, between readiness and gender. This means there is a tendency for higher readiness levels to be associated with a particular gender in the sample, which could be related to differences in learning styles and preferences between genders. Research has shown that men and women often have different preferences for learning methods and styles, with women tending to prefer more collaborative and interactive approaches, while men may prefer more competitive or individualistic approaches. It is possible that these differences in learning styles could impact readiness for e-learning, with women showing a greater readiness due to a preference for collaborative and interactive e-learning formats. Another possible explanation relates to cultural and social factors. In many societies, men and women may have different expectations and opportunities regarding education and career advancement. The gender-related factors influencing readiness could include differences in access to technology, cultural norms regarding education, and gender-based expectations of success. Overall, it is important to note that correlation does not imply causation, and further investigation is needed to determine the underlying factors contributing to this relationship. Moreover, further research is needed to identify potential strategies for promoting ELR among all students, regardless of gender.
From Table 4, the correlation coefficient of 0.452 between readiness and education level indicates a moderate positive correlation between the two variables. This suggests that as the participants’ education level increases, their readiness toward e-learning also tends to increase. This result aligns with previous research that has found a positive correlation between education level and ELR. Based on this finding, it may be beneficial for the university to provide targeted support and resources to students with lower levels of education to increase their readiness for e-learning. This could include additional training or resources to help them become more comfortable with technology and online learning platforms. Additionally, the university may consider incorporating more interactive and engaging learning materials to help increase the motivation and engagement of students with lower levels of education.
However, it is important to note that correlation does not necessarily imply causation. It is possible that other factors such as socioeconomic status, age, or experience with technology may also influence the relationship between education level and ELR. Therefore, it is recommended that these factors be explored in future research to gain a more comprehensive understanding of the factors that influence ELR.

4.6. Data Analysis

Figure 3 depicts the pairwise correlation coefficients between readiness and ability, knowledge, desire, awareness, and reinforcement.
Essentially, the correlation coefficients provide important information about the relationship between readiness and the ADKAR factors and can help inform strategies to improve readiness for change. For example, if a particular factor is found to have a weaker correlation with readiness, efforts could be focused on strengthening that factor to increase readiness for change. Several conclusions can be drawn from analyzing Figure 3.
  • Figure 3 indicates the presence of a relatively high positive correlation of 0.6645 between readiness and ability, which means that students with a higher level of ability are more likely to be ready for e-learning. This makes sense because having the necessary skills and knowledge is a prerequisite for effective learning through online platforms. In other words, this suggests that individuals who feel more capable of making the change are more likely to feel ready to make the change. This is consistent with previous research that has shown that learners with a high level of technology literacy are more likely to succeed in e-learning environments.
  • The second-highest correlation was found between readiness and knowledge, 0.6282, which suggests that having sufficient knowledge about e-learning technologies and the content being taught is also an essential factor in determining readiness. This highlights the importance of ensuring learners access appropriate training and support to help them acquire the necessary knowledge and skills.
  • The correlations between readiness and desire, awareness, and reinforcement were also significant but lower than for the other factors (Figure 3). This suggests that learners’ motivation to engage in e-learning, their awareness of the benefits of e-learning, and the support they receive from their peers and instructors also play a role in determining their readiness.
Analyzing the correlations among the ADKAR factors themselves is also of interest.
  • From Figure 3, we observe a positive correlation between ability and desire, 0.4731. This suggests that students who have the necessary skills and knowledge for e-learning are more likely to be motivated to learn through online platforms. One possible explanation for this correlation is that as individuals gain more confidence in using e-learning platforms, they may become more interested and motivated to use them to enhance their learning experiences. Similarly, individuals who strongly desire to use e-learning platforms may be more likely to invest the time and effort needed to develop the necessary skills and abilities to use them effectively.
  • The 0.3737 correlation between ability and awareness suggests a moderate positive relationship between these two ADKAR factors (Figure 3). This relationship could be explained by the fact that as individuals become more proficient in using e-learning platforms, they become more aware of their benefits and potential applications in their learning and work environments.

Feature Importance Identification

In this study, decision tree and random forest algorithms were used to identify the ADKAR factors important for ELR. Figure 4 displays the variable importance score of each technique (i.e., RF and DT) for the ADKAR factors. It shows that the ability factor has the largest contribution, indicating that ability is the most important factor in determining ELR. This suggests that the university should focus on providing adequate resources and support to enhance the participants’ abilities to effectively use e-learning tools and platforms.
The knowledge factor also has a relatively high score, which implies that participants’ level of knowledge about e-learning tools and platforms is also an essential determinant of their readiness (Figure 4). Therefore, the university should consider providing training and educational resources to improve participants’ knowledge about e-learning. On the other hand, the awareness and reinforcement factors had the lowest scores, suggesting that they are not significant determinants of ELR for the participants. However, it is important to note that even though these factors had low scores, they should not be completely ignored. The university should still consider strategies to increase awareness of e-learning among the participants and provide appropriate reinforcement to encourage their engagement with e-learning platforms.
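The feature-importance step can be sketched with scikit-learn's impurity-based importances. The data below are synthetic placeholders, not the survey responses, and the weights in the hypothetical readiness score are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
factors = ["awareness", "desire", "knowledge", "ability", "reinforcement"]

# Synthetic 5-point Likert scores for 365 respondents (placeholder data)
X = rng.integers(1, 6, size=(365, 5)).astype(float)
# Hypothetical readiness score in which ability carries the largest weight
y = X @ np.array([0.10, 0.20, 0.25, 0.40, 0.05]) + rng.normal(0, 0.2, 365)

for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              DecisionTreeRegressor(random_state=0)):
    model.fit(X, y)
    ranking = sorted(zip(factors, model.feature_importances_),
                     key=lambda t: -t[1])
    print(type(model).__name__, [name for name, _ in ranking])
```

Regressors are used here for simplicity; the same `feature_importances_` attribute is available on the classifier variants.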
This study employed the SHapley Additive exPlanations (SHAP) technique to identify important variables. Figure 5 depicts the SHAP values associated with each ADKAR factor across participants. The ADKAR factors are listed on the y-axis, ranked from least important (at the bottom) to most important (at the top). The x-axis denotes the SHAP value for each variable and participant within the training dataset, where each case is represented by a plotted point based on the variable’s influence on the prediction of that case. The color of the plotted point indicates whether the value of the variable was high (red) or low (blue).
The results obtained through the SHAP technique complemented the findings of the random forest and decision tree algorithms. It was observed that the “ability” factor was the most important variable, followed by “knowledge” and “desire”. The “awareness” and “reinforcement” factors were found to be less important. These results provide valuable insights for educational institutions to effectively enhance ELR and allocate resources.
Although the importance scores may differ slightly, the order of the identified scores is the same for both random forest and decision tree. Both algorithms identify “ability” as the most important factor for ELR, followed by “knowledge”, “desire”, “reinforcement”, and “awareness”. This consistency in ranking the important factors indicates that these factors are robust and reliable predictors of ELR in the context of the university under study.
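SHAP values are the Shapley values of cooperative game theory applied to a model's prediction; in practice they are computed with the shap library (e.g., its TreeExplainer for tree ensembles). As a dependency-free illustration of the underlying idea, the sketch below computes exact Shapley values by enumerating feature coalitions for a small hypothetical linear readiness model, replacing absent features with a background (mean) response; all numbers are illustrative assumptions:

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley values for one instance x (1-D array), by coalition enumeration."""
    n = len(x)
    phi = np.zeros(n)
    idx = set(range(n))
    for i in range(n):
        for r in range(n):
            for S in combinations(idx - {i}, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                x_with = background.copy()
                x_with[list(S) + [i]] = x[list(S) + [i]]      # coalition S plus feature i
                x_without = background.copy()
                x_without[list(S)] = x[list(S)]               # coalition S only
                phi[i] += weight * (predict(x_with) - predict(x_without))
    return phi

# Hypothetical linear "readiness" model over the five ADKAR factors
w = np.array([0.10, 0.20, 0.25, 0.40, 0.05])   # awareness .. reinforcement
predict = lambda v: float(w @ v)
x = np.array([3.0, 4.0, 5.0, 5.0, 2.0])        # one respondent's scores
background = np.full(5, 3.0)                    # mean Likert response
print(shapley_values(predict, x, background))
```

For a linear model these values reduce to w_i(x_i − background_i), and they sum to the difference between the prediction for x and the prediction for the background, which is the additivity property SHAP plots rely on.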

4.7. Discussion

Implementing e-learning during the COVID-19 lockdown has posed significant challenges for educational institutions worldwide. This study evaluated students’ readiness for e-learning at Tlemcen University and explored the factors influencing their readiness. The findings of this study have important implications for educational institutions seeking to enhance e-learning readiness and effectively allocate resources [60].
One noteworthy finding of this study is the revelation of the robust positive correlation between readiness and gender. The observation that higher levels of readiness are associated with a specific gender suggests the presence of gender-related disparities in e-learning readiness. This finding is aligned with previous research conducted by [61,62,63] at other academic institutions. However, it is important to note that Tang et al. reported contrasting results, highlighting the need for further investigation and potential factors influencing this relationship [64]. These differences may stem from the different learning styles and preferences between men and women. It has been well documented that men and women tend to have distinct learning preferences, with women often favoring collaborative and interactive approaches, while men may prefer more competitive or individualistic learning methods. Consequently, women in the study may have demonstrated higher readiness levels due to their inclination toward collaborative and interactive e-learning formats. Cultural and social factors could also contribute to this gender disparity, as societal expectations and opportunities regarding education and career advancement can differ between genders. Factors such as access to technology, cultural norms, and gender-based expectations of success may influence readiness levels. These findings exhibit consistency with the results obtained in previous studies reported in [65,66,67,68]. However, it is essential to recognize that correlation does not establish causation, and further research is needed to unravel the underlying factors contributing to this relationship. Future studies should also explore strategies to promote e-learning readiness among all students, irrespective of gender.
The presence of a moderate positive correlation between readiness and education level emphasizes the significance of educational attainment in the context of e-learning readiness. It becomes evident that as participants’ level of education increases, their readiness for e-learning also increases. This finding is aligned with previous studies highlighting the positive relationship between education level and e-learning readiness [64,69].
To capitalize on this correlation, educational institutions should provide targeted support and resources to students with lower education levels. Additional training and resources can help these students become more comfortable with technology and online learning platforms. Moreover, incorporating interactive and engaging learning materials may enhance the motivation and engagement of students with lower education levels, facilitating their readiness for e-learning.
Although the correlation analyses provide valuable insights into the relationship between e-learning readiness and the ADKAR factors, it is important to note that correlation does not imply causation. Other factors, such as socioeconomic status, age, and experience with technology, may also influence the relationship between education level and e-learning readiness. Future research should delve deeper into these factors to comprehensively understand the multifaceted influences on e-learning readiness.
The analysis of the ADKAR factors identified “ability” as the most critical variable for e-learning readiness, followed by “knowledge” and “desire”. This finding highlights the essential role of skills, knowledge, and motivation in determining readiness for e-learning. To enhance ELR, educational institutions should focus on providing comprehensive training and support to develop the necessary skills and knowledge among students. Creating a positive and motivating learning environment is also crucial for fostering students’ readiness for e-learning. By designing instructional strategies that promote skill development, knowledge acquisition, and motivation, institutions can effectively enhance students’ readiness for e-learning.
The ADKAR model has been substantiated as a potent tool for assessing readiness to adopt and leverage e-learning platforms [41]. This model offers a comprehensive framework that goes beyond technological proficiency, encompassing various aspects of change. It evaluates an individual’s awareness regarding the benefits and significance of e-learning, their willingness to engage in online learning, their proficiency in utilizing digital platforms effectively, their capability to apply e-learning tools, and the presence of reinforcement and support mechanisms. This holistic approach acknowledges that successful implementation of e-learning necessitates a multifaceted strategy [70,71]. Ali et al. [41] compared the ADKAR model with the Technology Acceptance Model (TAM) and Hofstede dimensions to investigate the power of change models in innovative technology acceptance in Pakistan. They found that the ADKAR model was the most effective in addressing resistance to change, with the arrangement of its dimensions varying based on the sample characteristics. Al-Alawi et al. identified strengths in the desire for change but weaknesses in knowledge, ability, and performance enhancement [45]. Vázquez suggested the suitability of the ADKAR model for creating an effective change roadmap and enhancing university web accessibility [72]. Yasid and Laela demonstrated the efficacy of the ADKAR model, emphasizing the significance of awareness, desire, knowledge, ability, and reinforcement in facilitating successful transformation [73]. Positive awareness of the need for change is crucial for sustainable change in education, as confirmed in [74]. Students’, teachers’, and administrators’ knowledge and experience of necessary programs play a significant role in educational strategies during the COVID-19 pandemic. The strong desire for digital education is driven by the participants’ ability to effectively use digital learning platforms.
The success of adopting desired changes depends on bridging the knowledge gap, and institutional support facilitates their engagement. These findings are aligned with the observations of Ferri et al. that technological challenges hinder online teaching even in technologically advanced countries such as Estonia [75].
The consistency in the order of importance scores between the random forest and decision tree algorithms lends credibility to the identified factors. The resilience of these factors as predictors of e-learning readiness within the university context underscores their significance for educational institutions. By prioritizing the critical factors, such as “ability”, “knowledge”, and “desire”, institutions can tailor their interventions and initiatives to address specific areas and enhance overall readiness for e-learning. This targeted approach enables institutions to effectively allocate resources and maximize their impact on students’ readiness.
Overall, this study provides valuable insights into implementing e-learning during the COVID-19 lockdown at Tlemcen University. The findings highlight the role of gender, education level, and ADKAR factors in influencing e-learning readiness. By understanding these factors, educational institutions can develop strategies to enhance e-learning readiness, promote inclusivity, and ensure effective utilization of resources. Future research should explore the complex interplay of various factors and identify additional strategies for promoting e-learning readiness among diverse student populations.

5. Conclusions

This study explored the factors contributing to university students’ ELR using an innovative approach that combines the ADKAR model and machine learning methods. Data were collected by surveying faculty members and students in the Economics faculty at Tlemcen University in Algeria, with a survey instrument based on the ADKAR model. The correlation analysis indicated a significant positive correlation between all factors and ELR. Furthermore, machine learning algorithms, including random forest and decision trees, were used to identify the most important factors. These algorithms identified ability and knowledge as the most important factors contributing to ELR, with desire and awareness also playing a significant role. Moreover, SHAP values were used to investigate the impact of each variable on the final prediction. This analysis revealed that ability is the most important factor for predicting ELR, followed by knowledge, desire, reinforcement, and awareness. These findings suggest that universities should focus on enhancing students’ abilities and providing them with the necessary knowledge to increase their readiness for e-learning. Overall, this study provides valuable insights into the factors that influence ELR among university students. The results can be used to guide the development of effective e-learning programs and strategies that can improve students’ readiness for e-learning.
Based on the results of this study, some recommendations can be made to enhance ELR in universities in Algeria:
  • There is a need to increase awareness and knowledge of e-learning among faculty members and students through training and workshops.
  • Universities should provide adequate resources and technical support to facilitate the adoption of e-learning.
  • Universities should work on enhancing the design and development of e-learning courses to meet the needs and preferences of students.
  • It is recommended that students be involved in the decision-making process and that their feedback on e-learning experiences be sought.
  • Further research is needed to explore the impact of cultural and contextual factors on ELR in different regions and universities in Algeria.
Despite its contributions and the promising results of ELR assessment, the proposed approach has some limitations that need to be considered. The first is the relatively small sample size used in the study, which may affect the generalizability of the findings. Moreover, this study was conducted in the Economics faculty at Tlemcen University in Algeria; similar studies could be conducted in different faculties or universities in Algeria or other countries to generalize the results to other contexts. In this study, the focus was on the ADKAR model factors as predictors for ELR assessment. However, other variables such as socioeconomic status, internet connectivity, and digital literacy could be integrated with the ADKAR model to develop more accurate models. Although random forest and decision tree models effectively identify important factors, other alternative machine learning techniques could be explored [76,77]. A promising future research direction is to explore deep learning models that can effectively handle large datasets [78,79]. By incorporating such models, the assessment of e-learning readiness can be extended to multiple universities simultaneously, thereby broadening the scope and applicability of the study.
To further improve accuracy, researchers could also investigate integrating additional data sources or refine the existing features used for assessment. Furthermore, efforts can be directed toward developing interpretable machine learning models that clearly explain the assessment results, ensuring transparency and understanding [77]. In terms of fairness, researchers could explore the potential biases in the assessment models and develop approaches to mitigate them. This includes examining the impact of demographic factors on the assessment outcomes and working toward fairer and more equitable assessment practices. Additionally, user acceptance is a critical aspect of any assessment method. Future work could involve gathering feedback from end-users, such as students, teachers, and administrators, to understand their perspectives and preferences. This valuable input can inform the design and refinement of assessment methods that are user-friendly, intuitive, and effective in supporting e-learning readiness.

Author Contributions

M.Z., conceptualization, formal analysis, investigation, methodology, software, writing—original draft, and writing—review and editing. F.H., conceptualization, formal analysis, investigation, methodology, software, supervision, writing—original draft, and writing—review and editing. M.T., conceptualization, formal analysis, investigation, writing—original draft, and writing—review and editing. M.B., conceptualization, formal analysis, investigation, writing—original draft, and writing—review and editing. A.D., conceptualization, methodology, software, writing—original draft, and writing—review and editing. Y.S., investigation, methodology, conceptualization, supervision, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding


This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments


The authors (Fouzi Harrou and Ying Sun) would like to acknowledge the support of the King Abdullah University of Science and Technology (KAUST) in conducting this research. The authors (Mohamed Zine, Mohammed Terbeche, and Mohammed Bellahcene) acknowledge that this research was carried out under the aegis of the Directorate General of Scientific Research and Technological Development of the Algerian Ministry of Higher Education and Scientific Research.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Wagiran, W.; Suharjana, S.; Nurtanto, M.; Mutohhari, F. Determining the e-learning readiness of higher education students: A study during the COVID-19 pandemic. Heliyon 2022, 8, e11160. [Google Scholar] [CrossRef]
  2. Jaoua, F.; Almurad, H.M.; Elshaer, I.A.; Mohamed, E.S. E-learning success model in the context of COVID-19 pandemic in higher educational institutions. Int. J. Environ. Res. Public Health 2022, 19, 2865. [Google Scholar] [CrossRef]
  3. Aslam, S.M.; Jilani, A.K.; Sultana, J.; Almutairi, L. Feature evaluation of emerging e-learning systems using machine learning: An extensive survey. IEEE Access 2021, 9, 69573–69587. [Google Scholar] [CrossRef]
  4. Elezi, E.; Bamber, C. Factors Affecting Successful Adoption and Adaption of E-Learning Strategies. In Enhancing Academic Research and Higher Education with Knowledge Management Principles; IGI Global: Hershey, PA, USA, 2021; pp. 19–35. [Google Scholar]
  5. Rodrigues, H.; Almeida, F.; Figueiredo, V.; Lopes, S.L. Tracking e-learning through published papers: A systematic review. Comput. Educ. 2019, 136, 87–98. [Google Scholar] [CrossRef]
  6. Rohayani, A.H. A literature review: Readiness factors to measuring e-learning readiness in higher education. Procedia Comput. Sci. 2015, 59, 230–234. [Google Scholar] [CrossRef]
  7. Moubayed, A.; Injadat, M.; Nassif, A.B.; Lutfiyya, H.; Shami, A. E-learning: Challenges and research opportunities using machine learning & data analytics. IEEE Access 2018, 6, 39117–39138. [Google Scholar]
  8. Darab, B.; Montazer, G.A. An eclectic model for assessing e-learning readiness in the Iranian universities. Comput. Educ. 2011, 56, 900–910. [Google Scholar] [CrossRef]
  9. Nguyen, T.D.; Nguyen, D.T.; Cao, T.H. Acceptance and use of information system: E-learning based on cloud computing in Vietnam. In Proceedings of the Information and Communication Technology: Second IFIP TC5/8 International Conference, ICT-EurAsia 2014, Bali, Indonesia, 14–17 April 2014; Proceedings 2. Springer: Berlin/Heidelberg, Germany, 2014; pp. 139–149. [Google Scholar]
  10. Papadakis, S.; Kalogiannakis, M.; Sifaki, E.; Vidakis, N. Evaluating moodle use via smart mobile phones. A case study in a Greek university. EAI Endorsed Trans. Creat. Technol. 2018, 5, e1. [Google Scholar] [CrossRef]
  11. Hashem, I.A.T.; Yaqoob, I.; Anuar, N.B.; Mokhtar, S.; Gani, A.; Khan, S.U. The rise of “big data” on cloud computing: Review and open research issues. Inf. Syst. 2015, 47, 98–115. [Google Scholar] [CrossRef]
  12. Khanal, S.S.; Prasad, P.; Alsadoon, A.; Maag, A. A systematic review: Machine learning based recommendation systems for e-learning. Educ. Inf. Technol. 2020, 25, 2635–2664. [Google Scholar] [CrossRef]
  13. Parlakkiliç, A. Change management in transition to e-learning system. Qual. Quant. Methods Libr. 2017, 3, 637–651. [Google Scholar]
  14. Parlakkilic, A. E-learning change management: Challenges and opportunities. Turk. Online J. Distance Educ. 2013, 14, 54–68. [Google Scholar]
  15. Baig, M.I.; Shuib, L.; Yadegaridehkordi, E. E-learning adoption in higher education: A review. Inf. Dev. 2022, 38, 570–588. [Google Scholar] [CrossRef]
  16. Rafiee, M.; Abbasian-Naghneh, S. E-learning: Development of a model to assess the acceptance and readiness of technology among language learners. Comput. Assist. Lang. Learn. 2021, 34, 730–750. [Google Scholar] [CrossRef]
  17. Mafunda, B.; Swart, A.J. Determining African students' e-learning readiness to improve their e-learning experience. Glob. J. Eng. Educ. 2020, 22, 216–221. [Google Scholar]
  18. Shakeel, S.I.; Haolader, M.F.A.; Sultana, M.S. Exploring dimensions of blended learning readiness: Validation of scale and assessing blended learning readiness in the context of TVET Bangladesh. Heliyon 2023, 9, e12766. [Google Scholar] [CrossRef]
  19. Bubou, G.M.; Job, G.C. Individual innovativeness, self-efficacy and e-learning readiness of students of Yenagoa study centre, National Open University of Nigeria. J. Res. Innov. Teach. Learn. 2022, 15, 2–22. [Google Scholar] [CrossRef]
  20. Zandi, G.; Rhoma, A.A.; Ruhoma, A. Assessment of E-Learning Readiness in the Primary Education Sector in Libya: A Case of Yefren. J. Inf. Technol. Manag. 2023, 15, 153–165. [Google Scholar]
  21. Lucero, H.R.; Victoriano, J.M.; Carpio, J.T.; Fernando, P.G., Jr. Assessment of e-learning readiness of faculty members and students in the government and private higher education institutions in the Philippines. arXiv 2022, arXiv:2202.06069. [Google Scholar] [CrossRef]
  22. Mushi, R.M.; Lashayo, D.M. E-Learning Readiness Assessment on Distance Learning: A Case of Tanzanian Higher Education Institutions. Int. J. Inf. Commun. Technol. Hum. Dev. (IJICTHD) 2022, 14, 1–10. [Google Scholar] [CrossRef]
  23. Nwagwu, W.E. E-learning readiness of universities in Nigeria-what are the opinions of the academic staff of Nigeria’s premier university? Educ. Inf. Technol. 2020, 25, 1343–1370. [Google Scholar] [CrossRef]
  24. Keramati, A.; Afshari-Mofrad, M.; Kamrani, A. The role of readiness factors in E-learning outcomes: An empirical study. Comput. Educ. 2011, 57, 1919–1929. [Google Scholar] [CrossRef]
  25. Peñarrubia-Lozano, C.; Segura-Berges, M.; Lizalde-Gil, M.; Bustamante, J.C. A qualitative analysis of implementing e-learning during the COVID-19 lockdown. Sustainability 2021, 13, 3317. [Google Scholar] [CrossRef]
  26. Ober, J.; Kochmańska, A. Remote Learning in Higher Education: Evidence from Poland. Int. J. Environ. Res. Public Health 2022, 19, 14479. [Google Scholar] [CrossRef] [PubMed]
  27. Septama, H.D.; Komarudin, M.; Yulianti, T. E-learning Readiness in University of Lampung during COVID-19 Pandemic. In Proceedings of the 2021 2nd SEA-STEM International Conference (SEA-STEM), Hat Yai, Thailand, 24–25 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 43–47. [Google Scholar]
  28. Sardjono, W.; Sudrajat, J.; Retnowardhani, A. Readiness Model of e-Learning Implementation from Teacher and Student Side at School in The Pramuka Island of Seribu Islands–DKI Jakarta. In Proceedings of the 2020 International Conference on Information Management and Technology (ICIMTech), Bandung, Indonesia, 13–14 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 649–653. [Google Scholar]
  29. Laksitowening, K.A.; Wibowo, Y.F.A.; Hidayati, H. An assessment of E-Learning readiness using multi-dimensional model. In Proceedings of the 2016 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), Kota Kinabalu, Malaysia, 17–19 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 128–132. [Google Scholar]
  30. The, M.M.; Usagawa, T. Evaluation on e-learning readiness of Yangon and Mandalay technological universities, Myanmar. In Proceedings of the TENCON 2017–2017 IEEE Region 10 Conference, Penang, Malaysia, 5–8 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2072–2076. [Google Scholar]
  31. Saekow, A.; Samson, D. A study of e-learning readiness of Thailand’s higher education comparing to the United States of America (USA)’s case. In Proceedings of the 2011 3rd International Conference on Computer Research and Development, Shanghai, China, 11–13 March 2011; IEEE: Piscataway, NJ, USA, 2011; Volume 2, pp. 287–291. [Google Scholar]
  32. Irene, K.; Zuva, T. Assessment of e-learning readiness in South African Schools. In Proceedings of the 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD), Durban, South Africa, 6–7 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar]
  33. Herath, C.P.; Thelijjagoda, S.; Gunarathne, W. Stakeholders’ psychological factors affecting E-learning readiness in higher education community in Sri Lanka. In Proceedings of the 2015 8th International Conference on Ubi-Media Computing (UMEDIA), Colombo, Sri Lanka, 24–26 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 168–173. [Google Scholar]
  34. Mohammed, Y.A. E-learning readiness assessment of medical students in university of fallujah. In Proceedings of the 2018 1st Annual International Conference on Information and Sciences (AiCIS), Fallujah, Iraq, 20–21 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 201–207. [Google Scholar]
  35. Samaneh, Y.; Marziyeh, I.; Mehrak, R. An assessment of the e-learning readiness among EFL university students and its relationship with their English proficiency. In Proceedings of the 4th International Conference on E-Learning and E-Teaching (ICELET 2013), Shiraz, Iran, 13–14 February 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 81–85. [Google Scholar]
  36. Tiwari, S.; Srivastava, S.K.; Upadhyay, S. An Analysis of Students’ Readiness and Facilitators’ Perception Towards E-learning Using Machine Learning Algorithms. In Proceedings of the 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 28–30 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 504–509. [Google Scholar]
  37. Alotaibi, R.S. Measuring Students' Readiness to Use E-learning Platforms: Shaqra University as a Case Study. In Proceedings of the 2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Port Louis, Mauritius, 7–8 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–9. [Google Scholar]
  38. Hung, M.L.; Chou, C.; Chen, C.H.; Own, Z.Y. Learner readiness for online learning: Scale development and student perceptions. Comput. Educ. 2010, 55, 1080–1090. [Google Scholar] [CrossRef]
  39. Hiatt, J. ADKAR: A Model for Change in Business, Government, and Our Community; Prosci: Fort Collins, CO, USA, 2006. [Google Scholar]
  40. Hiatt, J.M. The Essence of ADKAR: A Model for Individual Change Management; Prosci: Fort Collins, CO, USA, 2006. [Google Scholar]
  41. Ali, M.A.; Zafar, U.; Mahmood, A.; Nazim, M. The power of ADKAR change model in innovative technology acceptance under the moderating effect of culture and open innovation. LogForum 2021, 17, 485–502. [Google Scholar]
  42. Al-Shahrbali, I.A.T.; Abdullah, A.Y.K. Change models and applications of the ADKAR model in the strategies of change and its fields in the performance of the Tourism Authority. J. Adm. Econ. 2021, 130, 227–242. [Google Scholar]
  43. Bahamdan, M.A.; Al-Subaie, O.A. Change Management and Its Obstacles in Light of “ADKAR Model” Dimensions from Female Teachers Perspective in Secondary Schools in Dammam in Saudi Arabia. Ilkogr. Online 2021, 20, 2475. [Google Scholar]
  44. Glegg, S.M.; Ryce, A.; Brownlee, K. A visual management tool for program planning, project management and evaluation in paediatric health care. Eval. Program Plan. 2019, 72, 16–23. [Google Scholar] [CrossRef] [PubMed]
  45. Al-Alawi, A.I.; Abdulmohsen, M.; Al-Malki, F.M.; Mehrotra, A. Investigating the barriers to change management in public sector educational institutions. Int. J. Educ. Manag. 2019, 33, 112–148. [Google Scholar] [CrossRef]
  46. Rogers, J.; Gunn, S. Identifying feature relevance using a random forest. In Proceedings of the Subspace, Latent Structure and Feature Selection: Statistical and Optimization Perspectives Workshop, SLSFS 2005, Bohinj, Slovenia, 23–25 February 2005; Revised Selected Papers. Springer: Berlin/Heidelberg, Germany, 2006; pp. 173–184. [Google Scholar]
  47. Grömping, U. Variable importance assessment in regression: Linear regression versus random forest. Am. Stat. 2009, 63, 308–319. [Google Scholar] [CrossRef]
  48. Wei, P.; Lu, Z.; Song, J. Variable importance analysis: A comprehensive review. Reliab. Eng. Syst. Saf. 2015, 142, 399–432. [Google Scholar] [CrossRef]
  49. Breiman, L.; Friedman, J.; Olshen, R.; Stone, C. Classification and Regression Trees; Wadsworth & Brooks/Cole Statistics/Probability Series; Wadsworth & Brooks/Cole: Monterey, CA, USA, 1984. [Google Scholar]
  50. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  51. Ishwaran, H. Variable importance in binary regression trees and forests. Electron. J. Statist. 2007, 1, 519–537. [Google Scholar] [CrossRef]
  52. Breiman, L. Classification and Regression Trees; Routledge: London, UK, 2017. [Google Scholar]
  53. Archer, K.J.; Kimes, R.V. Empirical characterization of random forest variable importance measures. Comput. Stat. Data Anal. 2008, 52, 2249–2260. [Google Scholar] [CrossRef]
  54. Nohara, Y.; Matsumoto, K.; Soejima, H.; Nakashima, N. Explanation of machine learning models using shapley additive explanation and application for real data in hospital. Comput. Methods Programs Biomed. 2022, 214, 106584. [Google Scholar] [CrossRef]
  55. Roth, A.E. The Shapley Value: Essays in Honor of Lloyd S. Shapley; Cambridge University Press: Cambridge, UK, 1988. [Google Scholar]
  56. Cochran, W.G. Sampling Techniques; John Wiley & Sons: Hoboken, NJ, USA, 1977. [Google Scholar]
  57. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  58. Ab Hamid, M.; Sami, W.; Sidek, M.M. Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. Proc. J. Phys. Conf. Ser. 2017, 890, 012163. [Google Scholar]
  59. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2021. [Google Scholar]
  60. da Silva, L.M.; Dias, L.P.S.; Rigo, S.; Barbosa, J.L.V.; Leithardt, D.R.; Leithardt, V.R.Q. A literature review on intelligent services applied to distance learning. Educ. Sci. 2021, 11, 666. [Google Scholar] [CrossRef]
  61. Mutambik, I.; Lee, J.; Almuqrin, A. Role of gender and social context in readiness for e-learning in Saudi high schools. Distance Educ. 2020, 41, 515–539. [Google Scholar] [CrossRef]
  62. Reyes, J.R.S.; Grajo, J.D.; Comia, L.N.; Talento, M.S.D.; Ebal, L.P.A.; Mendoza, J.J.O. Assessment of Filipino higher education students’ readiness for e-learning during a pandemic: A Rasch Technique application. Philipp. J. Sci. 2021, 150, 1007–1018. [Google Scholar] [CrossRef]
  63. Adams, D.; Chuah, K.M.; Sumintono, B.; Mohamed, A. Students’ readiness for e-learning during the COVID-19 pandemic in a South-East Asian university: A Rasch analysis. Asian Educ. Dev. Stud. 2022, 11, 324–339. [Google Scholar] [CrossRef]
  64. Tang, Y.M.; Chen, P.C.; Law, K.M.; Wu, C.H.; Lau, Y.Y.; Guan, J.; He, D.; Ho, G.T. Comparative analysis of Student’s live online learning readiness during the coronavirus (COVID-19) pandemic in the higher education sector. Comput. Educ. 2021, 168, 104211. [Google Scholar] [CrossRef] [PubMed]
  65. Mirabolghasemi, M.; Choshaly, S.H.; Iahad, N.A. Using the HOT-fit model to predict the determinants of E-learning readiness in higher education: A developing Country’s perspective. Educ. Inf. Technol. 2019, 24, 3555–3576. [Google Scholar] [CrossRef]
  66. Hasani, L.M.; Adnan, H.R.; Sensuse, D.I.; Suryono, R.R. Factors affecting student’s perceived readiness on abrupt distance learning adoption: Indonesian Higher-Education Perspectives. In Proceedings of the 2020 3rd International Conference on Computer and Informatics Engineering (IC2IE), Yogyakarta, Indonesia, 15–16 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 286–292. [Google Scholar]
  67. Shahzad, A.; Hassan, R.; Aremu, A.Y.; Hussain, A.; Lodhi, R.N. Effects of COVID-19 in E-learning on higher education institution students: The group comparison between male and female. Qual. Quant. 2021, 55, 805–826. [Google Scholar] [CrossRef]
  68. Negm, E. Intention to use Internet of Things (IoT) in higher education online learning–the effect of technology readiness. High. Educ. Skills Work-Based Learn. 2023, 13, 53–65. [Google Scholar] [CrossRef]
  69. Daniels, M.; Sarte, E.; Cruz, J.D. Students’ perception on e-learning: A basis for the development of e-learning framework in higher education institutions. Proc. IOP Conf. Ser. Mater. Sci. Eng. 2019, 482, 012008. [Google Scholar] [CrossRef]
  70. Sulistiyani, E.; Budiarti, R.P.N. The Current Conditions of Online Learning in Universitas Nahdlatul Ulama Surabaya. Kresna Soc. Sci. Humanit. Res. 2020, 1, 1–4. [Google Scholar] [CrossRef]
  71. Jaaron, A.A.; Hijazi, I.H.; Musleh, K.I.Y. A conceptual model for adoption of BIM in construction projects: ADKAR as an integrative model of change management. Technol. Anal. Strateg. Manag. 2022, 34, 655–667. [Google Scholar] [CrossRef]
  72. Vázquez, S.R. The use of ADKAR to instil change in the accessibility of university websites. In Proceedings of the 19th International Web for All Conference, Virtual, 25–26 April 2022; pp. 1–5. [Google Scholar]
  73. Yasid, M.; Laela, S.F. The Success Factors of Conversion Process Using ADKAR Model: A Case Study of Islamic Savings Loan and Financing Cooperative. Afkaruna Indones. Interdiscip. J. Islam. Stud. 2022, 18, 234–259. [Google Scholar] [CrossRef]
  74. Chipamaunga, S.; Nyoni, C.N.; Kagawa, M.N.; Wessels, Q.; Kafumukache, E.; Gwini, R.; Kandawasvika, G.; Katowa-Mukwato, P.; Masanganise, R.; Nyamakura, R.; et al. Response to the impact of COVID-19 by health professions education institutions in Africa: A case study on preparedness for remote learning and teaching. Smart Learn. Environ. 2023, 10, 31. [Google Scholar] [CrossRef]
  75. Ferri, F.; Grifoni, P.; Guzzo, T. Online learning and emergency remote teaching: Opportunities and challenges in emergency situations. Societies 2020, 10, 86. [Google Scholar] [CrossRef]
  76. Harrou, F.; Sun, Y.; Hering, A.S.; Madakyaru, M. Statistical Process Monitoring Using Advanced Data-Driven and Deep Learning Approaches: Theory and Practical Applications; Elsevier: Amsterdam, The Netherlands, 2020. [Google Scholar]
  77. Salem, H.; El-Hasnony, I.M.; Kabeel, A.; El-Said, E.M.; Elzeki, O.M. Deep Learning model and Classification Explainability of Renewable energy-driven Membrane Desalination System using Evaporative Cooler. Alex. Eng. J. 2022, 61, 10007–10024. [Google Scholar] [CrossRef]
  78. Torkey, H.; Atlam, M.; El-Fishawy, N.; Salem, H. Machine Learning Model for Cancer Diagnosis based on RNAseq Microarray. Menoufia J. Electron. Eng. Res. 2021, 30, 65–75. [Google Scholar] [CrossRef]
  79. Harrou, F.; Sun, Y.; Hering, A.S.; Madakyaru, M.; Dairi, A. Unsupervised Deep Learning-Based Process Monitoring Methods; Elsevier BV: Amsterdam, The Netherlands, 2021. [Google Scholar]
Figure 1. Illustration of the adopted research design.
Figure 2. Schematic representation of the proposed model.
Figure 3. Matrix of correlation between readiness and the ADKAR factors.
Figure 4. Identification of ADKAR factors’ importance using RF and DT algorithms.
Figure 5. Identification of ADKAR factors’ importance using (a) RF and (b) DT algorithms based on SHAP technique.
Table 1. Validity and reliability coefficients.
Table 2. Demographic data of the 320 respondents.
Table 3. ELR Ratings on a Five-Point Likert Scale for E-Learning Participants.
Table 4. Correlation coefficients between the readiness and the gender and education level.

Share and Cite

MDPI and ACS Style

Zine, M.; Harrou, F.; Terbeche, M.; Bellahcene, M.; Dairi, A.; Sun, Y. E-Learning Readiness Assessment Using Machine Learning Methods. Sustainability 2023, 15, 8924.
