Proceeding Paper

Evaluation of E-Learning Websites Using Additive Rank Probability Method: Case Study of C Programming Website †

by
Muhamad Rasydan Mokhtar
School of Energy and Chemical Engineering, Xiamen University Malaysia, Bandar Sunsuria, Sepang 43900, Selangor, Malaysia
Presented at the 8th Eurasian Conference on Educational Innovation 2025, Bali, Indonesia, 7–9 February 2025.
Eng. Proc. 2025, 103(1), 7; https://doi.org/10.3390/engproc2025103007
Published: 7 August 2025

Abstract

E-learning uses electronic media and resources for learning activities. Recently, e-learning platforms have received more attention than traditional learning methods. Advances in information and communication systems have made e-learning websites interactive and flexible. However, the rapid increase in the number of e-learning websites creates a problem of website evaluation and selection. In this study, e-learning websites were evaluated by adopting multi-criteria decision-making (MCDM). A new MCDM method, namely additive rank probability (ARP), was developed in this study to select the best e-learning website. To verify the effectiveness of the ARP method, the best C programming website was selected from five alternatives evaluated against ten criteria. The ranking of the C programming websites matched those derived by other MCDM methods, with only a minor difference from the ranking produced by the weighted Euclidean distance-based approximation (WEDBA) method. ARP proved to be a simple and efficient method for identifying the best e-learning website for an effective learning process.

1. Introduction

The rapid advancement of information and communication technologies has enabled a sustainable approach to modern education, including electronic learning (e-learning) for delivering and sharing educational information [1]. With the advent of innovative technologies, e-learning websites and platforms provide access to a variety of courses in diverse areas, including business, languages, law, healthcare, science, and technology. Hence, it has become important for students to look for educational resources that align with their interests.
It is undeniable that e-learning provides significant advantages for acquiring knowledge and education. First, it allows students to access educational materials at any time, which makes learning flexible and accessible for busy people [2]. On e-learning platforms, high-quality courses are often accessible free of charge. E-learning also lets students progress at their own pace, which helps when certain courses or subjects require extra time to master. For learning efficiency, the platforms provide ratings, scores, or reports, allowing users to track their improvement.
E-learning websites and platforms have proliferated with the growth of Internet technology. However, without guidance, it is difficult to determine the quality of e-learning websites [3]. Students may have difficulty evaluating e-learning websites and choosing appropriate ones. Therefore, e-learning website selection methods have been developed based on multi-criteria decision-making (MCDM) [4]. However, the variety of criteria complicates the process of choosing a website.
The need to incorporate various criteria requires a scientific and effective method to reduce the complexity of selecting the e-learning website with the best performance for online education. Hence, a new MCDM method was developed in this study to rank websites under reasonable criteria. The developed MCDM method offers the following advantages: (1) it is straightforward to implement; (2) it does not require data normalization; (3) it does not require a pairwise comparison process; and (4) it does not require any parameters or thresholds to be set.
The remainder of this article is organized as follows. Section 2 reviews the related literature on MCDM methods applied to e-learning websites and platforms. Section 3 presents the proposed MCDM-based methodology. Section 4 illustrates the case study of evaluating the best e-learning website. Section 5 presents the results and discussion of using the proposed MCDM method. Lastly, Section 6 concludes the article.

2. Literature Review

MCDM is used to identify the best alternative based on multiple and conflicting criteria. This method assists in formulating acceptable solutions. Over the years, various MCDM methods have been applied to e-learning website evaluation and selection. The analytic hierarchy process (AHP) has been identified as the most popular MCDM method for evaluating e-learning [5]. One of the advantages of the AHP method is its ability to structure decision-making problems hierarchically using the pairwise comparison technique [6].
Alice et al. [7] used the AHP method to assess suitable e-learning websites with multiple criteria to assist students in preparing for interviews. The method was useful for retrieving appropriate information from the selected websites. Colace et al. [8] utilized the AHP method to evaluate the potential of online learning platforms from the technological and pedagogical aspects. The results were encouraging and effective. In addition, the problem of ranking e-learning options was solved by Begičević et al. [9] using the AHP method. The results revealed that online learning had a higher priority than face-to-face learning. Karagöz et al. [10] chose a learning management system for organizations using the AHP method. The best choice was determined based on the different weight ratios.
However, Liu et al. [11] noted that the traditional AHP method has a drawback: it treats subjective judgments as precise values. They proposed fuzzy AHP to evaluate three different e-learning platforms. Each platform was assessed based on learning, organizing, and knowledge. Their proposed method was shown to be correct and feasible for the scientific evaluation of e-learning platforms. Adem et al. [12] proposed fuzzy AHP to select the most appropriate distance education platform for teaching and learning activities. To verify the consistency of the results, sensitivity analysis was applied to produce robust results. However, fuzzy AHP suffers from additional complexity due to the numerous pairwise comparisons.
Büyüközkan et al. [13] utilized fuzzy VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) to effectively assess e-learning website quality in a fuzzy environment. Their method identified acceptable compromises in many website and e-service evaluation problems. Büyüközkan et al. [14] proposed an axiomatic design (AD)-based approach for fuzzy group decision-making to evaluate the quality of Turkish e-learning websites. For the verification of the proposed method, the fuzzy technique for order of preference by similarity to the ideal solution (TOPSIS) was applied to check the reliability of the rankings. Also, fuzzy VIKOR was applied by Ayouni et al. [15] to identify framework criteria and select alternatives from three learning management systems. Moodle was identified as an appropriate system to meet higher institutions’ standards in Saudi Arabian universities.
Gong et al. [4] proposed an extended TODIM (a Portuguese acronym for interactive and multi-criteria decision-making) based on a linguistic hesitant fuzzy approach to evaluate five different e-learning websites. A comparative analysis with fuzzy VIKOR and fuzzy AD was conducted to show the superiority of their proposed approach. Hwang et al. [3] integrated fuzzy theory, grey systems, and group decision methods to evaluate educational websites. They also developed a computer-assisted website evaluation system to achieve greater accuracy when evaluating the results. Toan et al. [2] integrated grey theory with the TOPSIS method to prioritize six e-learning websites in Vietnam. The proposed method has the advantage of processing the uncertain evaluations manifested by grey numbers to generate a robust ranking for the alternatives. Despite the usefulness of fuzzy set and grey theories, practical implementation is difficult due to their rules and operations.
Each MCDM method applied to e-learning website evaluation and selection differs in features such as ease of use and the complexity of its mathematical formulation. Ease of use [16] is an important aspect in the selection of an MCDM method [17], as a simple method saves computation time and effort without sacrificing quality [18]. Therefore, it is essential to develop a simple, easily implemented MCDM method to evaluate e-learning websites.

3. Methodology

In this study, a new MCDM method, additive rank probability (ARP), was developed to evaluate a set of alternatives with conflicting criteria. The first step of the ARP method involved sorting the alternatives according to preference: for beneficial criteria, the alternatives were sorted in descending order of performance (and in ascending order for non-beneficial criteria). There were two conditions for sorting the alternatives. Indifference between alternatives was expressed by placing them in the same rank, while the sequential arrangement of alternatives placed preferred alternatives in the upper ranks. For example, the first condition in Table 1 concerns three alternatives, A1, A2, and A3, each of which has a different performance value against criterion a, where A1 is preferred to A2 and A3, and A2 is preferred to A3. Therefore, A1 is assigned to rank 1, A2 to rank 2, and A3 to rank 3. For the second condition, all three alternatives have the same performance value against criterion b; therefore, each alternative is assigned to ranks 1, 2, and 3 simultaneously.
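The two sorting conditions can be sketched in Python (an illustrative sketch; the function name rank_groups and the sample values are mine, not from the paper):

```python
from collections import defaultdict

def rank_groups(values, benefit=True):
    """Group alternative indices into rank blocks.

    Alternatives with distinct values occupy consecutive single ranks
    (first condition); alternatives with equal values share a block of
    consecutive ranks (second condition), as in Table 1.
    """
    groups = defaultdict(list)
    for i, v in enumerate(values):
        groups[v].append(i)
    # Descending order for beneficial criteria, ascending otherwise
    return [groups[v] for v in sorted(groups, reverse=benefit)]

# Criterion a: distinct values, one alternative per rank
print(rank_groups([9.0, 7.0, 5.0]))   # [[0], [1], [2]]
# Criterion b: equal values, all three share ranks 1-3
print(rank_groups([6.0, 6.0, 6.0]))   # [[0, 1, 2]]
```

A group of t tied alternatives occupies a block of t consecutive ranks, which is exactly the structure the weight-distribution step below relies on.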
In the second step, weights were distributed over the ranks. If a single alternative i occupies rank r under criterion a, its probability of being assigned to rank r is yir = wa. Meanwhile, if m alternatives are tied over a block of ranks that includes rank r under criterion b, the probability of each tied alternative being assigned to rank r is yir = wb/m. For example, recalling the data in Table 1, Table 2 shows the probabilities of A1, A2, and A3 being assigned to ranks 1, 2, and 3, respectively. Since the developed method is based on weight distribution, the total probability of each alternative over all ranks must equal the total weight (wa + wb).
Finally, the total score was determined by summing the product of each alternative's rank probability yir and the rank probability index (RPI) of rank r, RPIr. Equation (1) describes a linear scoring function for the alternatives, where the total number of ranks r equals the total number of alternatives. The RPI is determined by rank position: the best rank carries the highest points, while the worst rank carries 1 point. For instance, each alternative receives 1 point for the last rank, 2 points for the next-to-last rank, and up to m points for the first rank. Each alternative thus receives a numerical desirability score, and the best alternative is the one with the highest total score.
Si = Σr (yir × RPIr)    (1)
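Putting the three steps together, Equation (1) can be sketched as follows (a minimal Python illustration; the function name arp_scores is mine, and the toy weights wa = 0.6 and wb = 0.4 for the Table 1/Table 2 example are assumed, not taken from the paper):

```python
def arp_scores(matrix, weights, benefit_flags):
    """Additive rank probability (ARP) scores.

    matrix[i][j] is the performance of alternative i on criterion j.
    Each criterion's weight w is placed on the rank an alternative
    occupies; t tied alternatives each receive w/t in every rank of
    the block they share (Table 2).  The final score is
    S_i = sum_r y_ir * RPI_r with RPI_r = m - r + 1 for m alternatives.
    """
    m = len(matrix)
    y = [[0.0] * m for _ in range(m)]              # y[i][r], 0-based rank r
    for j, (w, benefit) in enumerate(zip(weights, benefit_flags)):
        col = [row[j] for row in matrix]
        r = 0
        for v in sorted(set(col), reverse=benefit):
            tied = [i for i, x in enumerate(col) if x == v]
            for i in tied:
                for k in range(r, r + len(tied)):  # every rank in the block
                    y[i][k] += w / len(tied)
            r += len(tied)
    # RPI: m points for rank 1 down to 1 point for the last rank
    return [sum(y[i][r] * (m - r) for r in range(m)) for i in range(m)]

# Table 1/2 toy example: criterion a separates A1 > A2 > A3,
# criterion b ties all three; wa = 0.6 and wb = 0.4 are assumed.
scores = arp_scores([[9, 5], [7, 5], [3, 5]], [0.6, 0.4], [True, True])
print([round(s, 1) for s in scores])   # [2.6, 2.0, 1.4]
```

Note that no normalization, pairwise comparison, or tuning parameter appears anywhere in the sketch, which is the simplicity the method claims.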

4. Case Study

In this study, C programming websites (CPWs) [19] were evaluated with five alternatives and ten criteria. The alternatives were cprogramming.com (CPW-1), howstuffworks.com (CPW-2), programiz.com (CPW-3), geeksforgeeks.org (CPW-4), and tutorialspoint.com (CPW-5). The criteria were functionality (C1), maintainability (C2), portability (C3), reliability (C4), usability (C5), efficiency (C6), ease of learning community (C7), personalization (C8), system content (C9), and general factors (C10). The criteria were divided into beneficial and non-beneficial criteria: six beneficial criteria (C1–C6), where higher values were preferable, and four non-beneficial criteria (C7–C10), where lower values were desirable. The criteria weights used to investigate the applicability of the developed method were wC1 = 0.29, wC2 = 0.18, wC3 = 0.13, wC4 = 0.04, wC5 = 0.04, wC6 = 0.07, wC7 = 0.12, wC8 = 0.07, wC9 = 0.03, and wC10 = 0.03, and the performance values of each CPW are given in Table 3.

5. Results and Discussion

For each criterion, a rank for each CPW was assigned, as shown in Table 4, by ordering the CPWs according to preference based on the performance values in Table 3. For example, in criteria C1, C2, C4, and C5, CPW-5 was ranked number 1 due to having the highest value, followed by CPW-1, CPW-3, CPW-4, and CPW-2, respectively. For criterion C3, the order differed slightly: CPW-5 was ranked number 1, followed by CPW-3, CPW-4, CPW-1, and CPW-2, respectively.
For criterion C6, CPW-5 had the highest value, so this alternative was ranked number 1. Meanwhile, CPW-1 and CPW-3 had the same value, so these two alternatives shared ranks 2 and 3. In addition, CPW-4 was preferred to CPW-2 under criterion C6, so CPW-4 was ranked number 4 and CPW-2 number 5. For criteria C7 and C9, the lowest value was desirable, so the CPWs were placed in ascending order: CPW-2 was ranked number 1, followed by CPW-4, CPW-1, CPW-3, and CPW-5, respectively. For criterion C8, CPW-2 again had the lowest value, so it was ranked number 1, followed by CPW-4, CPW-1, CPW-5, and CPW-3, respectively. For criterion C10, CPW-4 was preferable due to its lowest value; as a result, CPW-4 was ranked number 1, followed by CPW-2, CPW-3, CPW-5, and CPW-1, respectively.
Next, the probability of each CPW being assigned to different ranks over the criteria was calculated. For example, the probability of CPW-1 assigned to different ranks was calculated as follows.
  • The probability of CPW-1 being ranked number 1 was 0.000, since this alternative was not assigned to rank 1 under any criterion.
  • The probability of CPW-1 being ranked number 2 was wC1 + wC2 + wC4 + wC5 + wC6/2, which equals 0.585.
  • The probability of CPW-1 being ranked number 3 was wC6/2 + wC7 + wC8 + wC9, which equals 0.255.
  • The probability of CPW-1 being ranked number 4 was wC3, which equals 0.130.
  • The probability of CPW-1 being ranked number 5 was wC10, which equals 0.030.
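Using the criteria weights listed in Section 4, the arithmetic in the bullets above can be checked directly (a small Python check; the dictionary layout is mine):

```python
# Criteria weights from Section 4
w = {"C1": 0.29, "C2": 0.18, "C3": 0.13, "C4": 0.04, "C5": 0.04,
     "C6": 0.07, "C7": 0.12, "C8": 0.07, "C9": 0.03, "C10": 0.03}

# Rank probabilities of CPW-1 for ranks 1..5, term by term as in the bullets
p = [0.000,                                               # rank 1
     w["C1"] + w["C2"] + w["C4"] + w["C5"] + w["C6"]/2,   # rank 2
     w["C6"]/2 + w["C7"] + w["C8"] + w["C9"],             # rank 3
     w["C3"],                                             # rank 4
     w["C10"]]                                            # rank 5
print([round(x, 3) for x in p])   # [0.0, 0.585, 0.255, 0.13, 0.03]
# The probabilities sum to the total weight of 1.0
print(round(sum(p), 3))           # 1.0
```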
The probabilities of the other CPWs being assigned to different ranks were calculated using the same procedure. Table 5 summarizes the probability of each CPW being assigned to different ranks. Table 6 shows the total score and final ranking for each CPW, obtained by summing the products of each CPW's rank probabilities and the corresponding RPI values.
In this case study, the RPI for any alternative assigned to rank number 1 was 5 (total number of alternatives = 5), while that for any alternative assigned to rank number 2 was 4 (total number of alternatives − 1 = 4). Likewise, the RPI for any alternative assigned to rank number 3 was 3 (total number of alternatives − 2 = 3), that assigned to rank number 4 was 2 (total number of alternatives − 3 = 2), and that assigned to rank number 5 was 1 (total number of alternatives − 4 = 1).
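Combining the rank probabilities of Table 5 with these RPI values reproduces the total scores of Table 6 via Equation (1) (a short Python check; variable names are mine):

```python
# Rank probabilities from Table 5 (rows CPW-1..CPW-5, ranks 1..5)
y = [
    [0.000, 0.585, 0.255, 0.130, 0.030],   # CPW-1
    [0.220, 0.030, 0.000, 0.000, 0.750],   # CPW-2
    [0.000, 0.165, 0.615, 0.150, 0.070],   # CPW-3
    [0.030, 0.220, 0.130, 0.620, 0.000],   # CPW-4
    [0.750, 0.000, 0.000, 0.100, 0.150],   # CPW-5
]
rpi = [5, 4, 3, 2, 1]                      # RPI for ranks 1..5 (m = 5)
scores = [round(sum(p * r for p, r in zip(row, rpi)), 3) for row in y]
print(scores)   # [3.395, 1.97, 2.875, 2.66, 4.1] -> Table 6
```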
CPW-5 was ranked the highest, followed by CPW-1, CPW-3, CPW-4, and CPW-2, respectively. To verify the effectiveness of the developed method, the results of the ARP method were compared with those of the AHP, complex proportional assessment (COPRAS), proximity indexed value (PIV), and WEDBA methods (Table 7) [19].
The ranking order of the CPWs given by the proposed ARP method was the same as that of the AHP, COPRAS, and PIV methods. However, there was a small difference between the rankings of the ARP and WEDBA methods: the ARP method ranked CPW-1 second and CPW-3 third, whereas the WEDBA method ranked CPW-3 second and CPW-1 third. In all methods, CPW-5 (tutorialspoint.com) was identified as the best e-learning website. Hence, there was good agreement between the ARP method and the other four MCDM methods, demonstrating the accuracy of the proposed method.

6. Conclusions

A new MCDM method, namely ARP, was developed in this study to evaluate CPWs for e-learning. The method offers the following advantages: (1) it is straightforward to implement; (2) it does not require data normalization; (3) it does not require a pairwise comparison process; and (4) it does not require any parameters or thresholds to be set. The ARP method provides a detailed, step-by-step procedure for evaluating CPWs and effectively selected the best website by ranking and comparing the alternatives. The results obtained using the ARP method matched those of the AHP, COPRAS, and PIV methods; only the WEDBA method produced a slightly different ranking. The developed ARP method proved efficient in evaluating CPWs for e-learning. The ARP method can also be used for group decision-making to obtain a comprehensive decision.

Funding

This research was funded by Xiamen University Malaysia through the Xiamen University Malaysia Research Fund [XMUMRF/2023-C12/IENG/0062].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author is grateful to the reviewers for their comments and suggestions, which have substantially improved the quality of the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jung, I. The Dimensions of e-Learning Quality: From the Learner’s Perspective. Educ. Technol. Res. Develop. 2011, 59, 445–464.
  2. Toan, P.N.; Dang, T.-T.; Hong, L.T.T. E-Learning Platform Assessment and Selection using Two-Stage Multi-Criteria Decision-Making Approach with Grey Theory: A Case Study in Vietnam. Mathematics 2021, 9, 3136.
  3. Hwang, G.-J.; Huang, T.C.K.; Tseng, J.C.R. A group-decision approach for evaluating educational web sites. Comput. Educ. 2004, 42, 65–86.
  4. Gong, J.W.; Liu, H.C.; You, X.Y.; Yin, L. An integrated multi-criteria decision making approach with linguistic hesitant fuzzy sets for e-learning website evaluation and selection. Appl. Soft Comput. 2021, 102, 107118.
  5. Zare, M.; Pahl, C.; Rahnama, H.; Nilashi, M.; Mardani, A.; Ibrahim, O.; Ahmadi, H. Multi-criteria decision making approach in E-learning: A systematic review and classification. Appl. Soft Comput. 2016, 45, 108–128.
  6. Mokhtar, M.R.; Abdullah, M.P.; Hassan, M.Y.; Hussin, F. Application of Promethee Method for Demand Side Management (DSM) options ranking. J. Teknol. Sci. Eng. 2016, 78, 23–28.
  7. Alice, P.S.; Abirami, A.M.; Askarunisa, A. A semantic based approach to organize eLearning through efficient information retrieval for interview preparation. In Proceedings of the International Conference on Recent Trends in Information Technology, Chennai, India, 19–21 April 2012; pp. 151–156.
  8. Colace, F.; De Santo, M.; Pietrosanto, A. Evaluation models for e-learning platform: An AHP approach. In Proceedings of the 36th Annual Frontiers in Education Conference, San Diego, CA, USA, 27–31 October 2006; pp. 1–6.
  9. Begičević, N.; Divjak, B.; Hunjak, T. Prioritization of e-learning forms: A multicriteria methodology. Central Eur. J. Oper. Res. 2007, 15, 405–419.
  10. Karagöz, E.; Oral, L.Ö.; Kaya, O.H.; Tecim, V. LMS selection process for effective distance education system in organizations. KnE Soc. Sci. 2017, 1, 343–356.
  11. Liu, Q.; Peng, R.; Chen, A.; Xie, J. E-learning platform evaluation using fuzzy AHP. In Proceedings of the International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 11–13 December 2009; pp. 1–4.
  12. Adem, A.; Çakıt, E.; Dağdeviren, M. Selection of suitable distance education platforms based on human–computer interaction criteria under fuzzy environment. Neural Comput. Appl. 2022, 34, 7919–7931.
  13. Büyüközkan, G.; Ruan, D.; Feyzioğlu, O. Evaluating e-learning web site quality in a fuzzy environment. Int. J. Intell. Syst. 2007, 22, 567–586.
  14. Büyüközkan, G.; Arsenyan, J.; Ertek, G. Evaluation of e-learning web sites using fuzzy axiomatic design based approach. Int. J. Comput. Intell. Syst. 2010, 3, 28–42.
  15. Ayouni, S.; Menzli, L.J.; Hajjej, F.; Madeh, M.; Al-Otaibi, S. Fuzzy Vikor Application for Learning Management Systems Evaluation in Higher Education. Int. J. Inf. Commun. Technol. Educ. 2021, 17, 17–35.
  16. Hajkowicz, S. A comparison of multiple criteria analysis and unaided approaches to environmental decision making. Environ. Sci. Policy 2007, 10, 177–184.
  17. Cinelli, M.; Stuart, R.; Coles, K.K. Analysis of the potentials of multi criteria decision analysis methods to conduct sustainability assessment. Ecol. Indic. 2014, 46, 138–148.
  18. Zamani-Sabzi, H.; King, J.P.; Gard, C.C.; Abudu, S. Statistical and analytical comparison of multi-criteria decision-making techniques under fuzzy environment. Oper. Res. Perspect. 2016, 3, 92–117.
  19. Khan, N.Z.; Ansari, T.S.A.; Siddiquee, A.N.; Khan, Z.A. Selection of E-learning websites using a novel Proximity Indexed Value (PIV) MCDM method. J. Comput. Educ. 2019, 6, 241–256.
Table 1. Ranking of alternatives in each criterion.

Rank     Criterion a   Criterion b
First    A1            A1, A2, A3
Second   A2            A1, A2, A3
Third    A3            A1, A2, A3
Table 2. Probability of alternatives being assigned to different ranks.

Alternatives   Rank 1      Rank 2      Rank 3
A1             wa + wb/3   wb/3        wb/3
A2             wb/3        wa + wb/3   wb/3
A3             wb/3        wb/3        wa + wb/3
Table 3. Performance value of each CPW over the criteria [19].

Criteria   CPW-1   CPW-2   CPW-3   CPW-4   CPW-5
C1         8.20    4.26    7.60    5.00    8.73
C2         8.20    4.06    7.80    6.20    8.93
C3         4.40    4.26    7.80    6.20    8.87
C4         8.20    4.06    7.20    5.40    8.40
C5         8.40    3.20    7.40    5.80    8.87
C6         7.80    3.20    7.80    6.00    8.60
C7         7.40    4.26    8.20    5.20    8.87
C8         6.80    4.06    8.40    4.20    7.80
C9         7.40    4.06    8.13    4.40    8.20
C10        8.53    4.26    7.60    4.20    8.40
Table 4. Ranking of each CPW according to criteria.

Criteria   First Rank   Second Rank    Third Rank     Fourth Rank   Fifth Rank
C1         CPW-5        CPW-1          CPW-3          CPW-4         CPW-2
C2         CPW-5        CPW-1          CPW-3          CPW-4         CPW-2
C3         CPW-5        CPW-3          CPW-4          CPW-1         CPW-2
C4         CPW-5        CPW-1          CPW-3          CPW-4         CPW-2
C5         CPW-5        CPW-1          CPW-3          CPW-4         CPW-2
C6         CPW-5        CPW-1, CPW-3   CPW-1, CPW-3   CPW-4         CPW-2
C7         CPW-2        CPW-4          CPW-1          CPW-3         CPW-5
C8         CPW-2        CPW-4          CPW-1          CPW-5         CPW-3
C9         CPW-2        CPW-4          CPW-1          CPW-3         CPW-5
C10        CPW-4        CPW-2          CPW-3          CPW-5         CPW-1
Table 5. Probability of each CPW being assigned to different ranks.

Alternatives   First Rank   Second Rank   Third Rank   Fourth Rank   Fifth Rank
CPW-1          0.000        0.585         0.255        0.130         0.030
CPW-2          0.220        0.030         0.000        0.000         0.750
CPW-3          0.000        0.165         0.615        0.150         0.070
CPW-4          0.030        0.220         0.130        0.620         0.000
CPW-5          0.750        0.000         0.000        0.100         0.150
Table 6. Total score and final ranking for each CPW.

Alternatives   Total Score   Rank
CPW-1          3.395         2
CPW-2          1.970         5
CPW-3          2.875         3
CPW-4          2.660         4
CPW-5          4.100         1
Table 7. Ranking of each CPW using different MCDM methods.

Alternatives   ARP   AHP   COPRAS   PIV   WEDBA
CPW-1          2     2     2        2     3
CPW-2          5     5     5        5     5
CPW-3          3     3     3        3     2
CPW-4          4     4     4        4     4
CPW-5          1     1     1        1     1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
