Article

Sustainable Decision-Making in Higher Education: An AHP-NWA Framework for Evaluating Learning Management Systems

1 Faculty of Applied Management, Economics and Finance, University Business Academy in Novi Sad, 11000 Belgrade, Serbia
2 Faculty of Technical Sciences in Čačak, University of Kragujevac, 32000 Čačak, Serbia
3 Faculty of Mathematics and Computer Sciences, University Alfa BK, 11000 Belgrade, Serbia
4 Department of Mathematics, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai 602105, Tamil Nadu, India
5 College of Global Business, Korea University, Sejong 30019, Republic of Korea
* Authors to whom correspondence should be addressed.
Sustainability 2025, 17(22), 10130; https://doi.org/10.3390/su172210130
Submission received: 23 October 2025 / Revised: 4 November 2025 / Accepted: 9 November 2025 / Published: 12 November 2025

Abstract

This paper applies a hybrid multi-criteria decision-making (MCDM) model that integrates the Analytic Hierarchy Process (AHP) for structured weighting of evaluation criteria with the Net Worth Analysis (NWA) method for value-based aggregation of scores. The proposed framework was employed to evaluate Learning Management Systems (LMS) in higher education, involving two independent expert panels representing management and IT perspectives. Results of the AHP analysis show that cost (28%), security (22%), and usability (17%) are the most influential criteria in the decision-making process, reflecting institutional priorities for financial efficiency, safety and ease of use. Based on the combined AHP-NWA model, Moodle 4.3 emerged as the most sustainable choice (0.586), followed by ATutor 2.2.1 (0.541) and Blackboard (SaaS edition) (0.490). The inclusion of sensitivity and scenario analyses confirmed the robustness of the model, demonstrating that the ranking of alternatives remains stable under variations in weighting factors and different strategic priorities. By framing LMS evaluation within the context of sustainable digital transformation, the study emphasizes how transparent and systematic decision-making supports long-term institutional resilience and aligns with the principles of Education for Sustainable Development (ESD). In addition, the framework contributes to the achievement of Sustainable Development Goal 4 (Quality Education), by guiding higher education institutions toward inclusive, resilient and cost-effective digital solutions.

1. Introduction

The digital transformation of the education sector has brought significant changes to the organization and delivery of the teaching process [1,2,3]. The emergence of the Internet, the growing availability of information and communication technologies (ICT) and the need for flexible access to educational content have contributed to the rapid development of e-learning and the widespread adoption of Learning Management Systems (LMS) [4]. In contemporary higher education, LMS platforms have become a key infrastructural component that ensures the organization, distribution, and monitoring of teaching activities, while simultaneously supporting both online and hybrid teaching models [5,6]. The global COVID-19 pandemic further accelerated this process, forcing many educational institutions to implement digital learning environments in a very short period, often without prior systemic preparation [7,8,9,10]. Under such circumstances, selecting an appropriate LMS platform has become a strategic challenge that goes beyond the technical characteristics of the software, encompassing pedagogical, organizational, security and financial aspects [11,12,13]. A poor choice of LMS platform may result in reduced student engagement, increased operational costs and more complex technical maintenance.
The market offers a large number of commercial and open-source LMS solutions, differing in functionalities, scalability, security, costs and ease of use [14]. Decision-makers in higher education often face the challenge of objectively, transparently and systematically comparing available options and selecting the platform that best suits the specific needs of their institution [15]. In this context, multi-criteria decision-making (MCDM) methods represent a reliable methodological basis for analyzing and ranking alternatives based on clearly defined criteria and weighted priorities [16].
The Analytic Hierarchy Process (AHP) is one of the most frequently used MCDM methods in the evaluation of information systems, as it enables hierarchical structuring of the problem, quantification of subjective assessments and consistency checking of decisions [17]. However, while the AHP method primarily provides the weights of the criteria, additional methods that enable normalization and value-based data processing can be useful for the aggregation of alternative ratings [18]. This paper proposes a hybrid AHP-NWA model, in which AHP is used to calculate criterion weights, while NWA is applied for the final value-based aggregation of results, since NWA performs normalization to ensure comparability and fairness among evaluated alternatives.
A key contribution of this approach lies in methodological independence: one expert panel (management) defines and weights the criteria, while the other (IT specialists) evaluates the alternatives, thus reducing the risk of cognitive bias and increasing the reliability of findings. The application of the proposed model is demonstrated through the evaluation of three LMS platforms: Moodle, ATutor and Blackboard, assessed according to seven criteria: performance, cost, implementation, security, usability, support and documentation [19]. The combination of AHP weights and NWA value aggregation provides results that enable transparent ranking of alternatives and offer a flexible tool for optimizing LMS selection in higher education.
The integration of sustainable decision-making in higher education is essential, as universities increasingly face challenges of balancing financial efficiency, technological advancement, and social inclusion. Learning Management Systems (LMS) play a central role in digital transformation, and their evaluation must consider not only technical and financial aspects, but also their contribution to long-term institutional sustainability. This aligns with the principles of Education for Sustainable Development (ESD), where technology adoption is assessed through its capacity to support inclusive, adaptable, and resilient learning environments.

2. Theoretical Framework and Problem Conceptualization

In contemporary higher education, technology has become a key mediator in knowledge transfer, communication between instructors and students and the evaluation of the teaching process [20]. The digitalization of education entails not only the use of information and communication technologies (ICT) but also the redefinition of pedagogical approaches, evaluation methods and learning organization [21,22]. E-learning and distance education are often used interchangeably, although they differ in the degree of technological dependence and the temporal dimension of communication [23]. E-learning is defined as any form of education that takes place with the assistance of ICT, including the distribution of teaching materials, communication, testing and monitoring student progress [24].
Learning Management Systems (LMS) serve as the central software framework that integrates all aspects of the educational process within a digital environment [25,26,27]. Their core functionalities include course management, assignment distribution, student assessment, activity tracking, discussion forums, video lectures and integration with other tools [28]. By using LMS, institutions achieve greater flexibility in teaching, improved organization, and increased student engagement [29]. However, the diversity of available solutions and varying levels of adaptability to specific user needs make the selection of the optimal platform a complex problem.
MCDM methods offer a structured framework for solving such problems, as they enable the simultaneous analysis of multiple criteria that are often in conflict [30]. Among them, the Analytic Hierarchy Process (AHP), developed by Thomas Saaty [31], is widely accepted due to its ability to hierarchically structure problems, quantify subjective judgments, and introduce consistency checks into the decision-making process [32]. The AHP method is particularly suitable for defining the relative importance of criteria, which serves as the starting point for evaluating complex systems such as LMS platforms [33,34].
However, the AHP method alone does not provide the most suitable mechanism for aggregating alternative scores when it is necessary to combine multiple data sources or bring ratings to a comparable scale [35]. Therefore, in this study, AHP is complemented by the Net Worth Analysis (NWA) method. This method enables the transformation of raw scores into normalized values, where the total utility value for each alternative is calculated based on the importance of the criteria and relative performance. This approach reduces the risk of result distortion due to different rating ranges and ensures transparent value-based aggregation.
A particular methodological advantage of the proposed AHP-NWA approach lies in the use of two independent expert panels. The first panel, consisting of members of the educational institution’s management, determines the weights of the criteria using the AHP method, while the second panel, composed of IT specialists experienced in LMS implementation and maintenance, evaluates each alternative against the defined criteria. This task division eliminates the potential influence of the same group on both phases of decision-making, reducing cognitive bias and increasing the validity of the results. By combining the strengths of AHP and NWA, an integrated methodological framework is obtained, applicable not only to the selection of LMS platforms but also to other areas of digital transformation in education, including the evaluation of online examination tools, video conferencing platforms and learning analytics systems.

3. Methodology and Materials

The evaluation of e-learning software solutions requires a systematic approach that enables an objective and accurate assessment of various platforms. Given the complexity of this issue, this study applies a hybrid decision-making model that combines AHP and NWA. This combination allows for a detailed and comprehensive analysis of e-learning platforms.

3.1. Simplified Method for Assessing the Relative Importance of Criteria (AHP-NWA Hybrid Model)

The AHP-NWA hybrid model consists of seven main steps that provide a quantitative assessment of criteria and the ranking of alternatives.

3.1.1. Analytic Hierarchy Process (AHP) Method

The Analytic Hierarchy Process (AHP) is a structured decision-making method developed to determine the relative importance of criteria and evaluate alternatives based on defined objectives [36]. The core principle of the method is based on pairwise comparisons, where each criterion is compared with every other in terms of importance, and the same procedure can be applied to evaluate alternatives against each individual criterion.
One of the key advantages of AHP is the ability to check the consistency of the assessments through the consistency ratio, thereby ensuring the reliability of the results [37]. The method is particularly suitable for solving complex decision-making problems involving multiple criteria, and due to its flexibility and transparency, it has found wide application in various fields, including education, management, engineering and information technology.
  • Step 1. Defining the Problem and Forming the Hierarchical Structure
The decision-making problem is decomposed into a hierarchy consisting of:
  • Goal: The overall objective of the decision-making process.
  • Criteria: Key factors influencing the decision, denoted as $C_1, C_2, \ldots, C_n$.
  • Alternatives: The possible options being evaluated.
  • Step 2. Pairwise Comparison of Criteria
The relative importance of the criteria is assessed through pairwise comparisons. Decision-makers assign numerical values to express the importance of one criterion over another, using Saaty’s scale (values from 1 to 9, where 1 indicates equal importance and 9 indicates extreme importance) [38]. The pairwise comparison matrix A is constructed as follows:
$$A = \begin{bmatrix}
1 & a_{12} & a_{13} & \cdots & a_{1n} \\
1/a_{12} & 1 & a_{23} & \cdots & a_{2n} \\
1/a_{13} & 1/a_{23} & 1 & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1/a_{1n} & 1/a_{2n} & 1/a_{3n} & \cdots & 1
\end{bmatrix}$$
  • Step 3. Normalization of the Pairwise Comparison Matrix
Each element of the pairwise comparison matrix A is divided by the sum of its respective column to obtain the normalized matrix A norm :
$$A_{\mathrm{norm}} = \begin{bmatrix}
\frac{a_{11}}{\sum_{i=1}^{n} a_{i1}} & \frac{a_{12}}{\sum_{i=1}^{n} a_{i2}} & \cdots & \frac{a_{1n}}{\sum_{i=1}^{n} a_{in}} \\
\frac{a_{21}}{\sum_{i=1}^{n} a_{i1}} & \frac{a_{22}}{\sum_{i=1}^{n} a_{i2}} & \cdots & \frac{a_{2n}}{\sum_{i=1}^{n} a_{in}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{a_{n1}}{\sum_{i=1}^{n} a_{i1}} & \frac{a_{n2}}{\sum_{i=1}^{n} a_{i2}} & \cdots & \frac{a_{nn}}{\sum_{i=1}^{n} a_{in}}
\end{bmatrix}$$
  • Step 4. Calculation of Priority Weights
The relative weight w i of each criterion is obtained by computing the average value of each row in the normalized matrix:
$$w_i = \frac{1}{n} \sum_{j=1}^{n} A_{\mathrm{norm},\,ij}, \qquad i = 1, 2, \ldots, n$$
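Steps 2 through 4 can be sketched in a few lines of Python. The 3×3 comparison matrix below is hypothetical and purely illustrative; it does not reproduce the study's actual judgments.

```python
# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale
# (illustrative values only, not the study's actual judgments).
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
n = len(A)

# Step 3: divide each element by the sum of its column.
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
A_norm = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Step 4: the priority weight of each criterion is the mean of its row.
w = [sum(row) / n for row in A_norm]
print(w)  # weights sum to 1
```

Note that the reciprocal entries below the diagonal are what make the row averages of the normalized matrix a meaningful priority vector.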
  • Step 5. Consistency Check
Calculation of the consistency vector $W$, obtained by multiplying the original comparison matrix by the weight vector:
$$W = A \cdot w$$
Calculation of the Consistency Index (CI):
$$CI = \frac{\lambda_{\max} - n}{n - 1}, \qquad \lambda_{\max} = \frac{1}{n} \sum_{i=1}^{n} \frac{W_i}{w_i}$$
where $\lambda_{\max}$ is the principal eigenvalue and $n$ is the number of criteria.
Calculation of the Consistency Ratio (CR):
$$CR = \frac{CI}{RI}$$
Here, RI is the Random Index based on the size of the matrix n. A CR value below 0.1 is considered acceptable.
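The consistency check of Step 5 can be sketched as follows. The 3×3 matrix is hypothetical (illustrative values only); the Random Index values are Saaty's standard table for matrix sizes 1 through 10.

```python
# Hypothetical 3x3 comparison matrix (illustrative values only).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
n = len(A)

# Derive weights as in Steps 3-4 (column normalization, row averages).
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

# Step 5: consistency vector W = A . w, then lambda_max, CI and CR.
W = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(W[i] / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)

# Saaty's Random Index for matrix sizes 1..10.
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]
CR = CI / RI[n - 1]
print(f"lambda_max={lam_max:.4f}, CI={CI:.4f}, CR={CR:.4f}")
```

For a perfectly consistent matrix $\lambda_{\max} = n$ and $CR = 0$; any $CR$ below 0.10 is accepted.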
  • Step 6. Evaluation of Alternatives
The alternatives are evaluated using the same pairwise comparison and weight calculation procedure within each criterion. The final ranking is determined by aggregating the weighted scores of the alternatives. This systematic approach ensures that the decision-making process is rational and consistent, enabling the selection of optimal solutions.

3.1.2. NWA (Net Worth Analysis)

NWA is a multi-criteria decision-making approach based on the value-oriented aggregation of normalized criterion scores [39]. Unlike distance-based techniques, which rank alternatives by their proximity to an ideal solution, NWA transforms raw assessments into a common scale and combines them through criterion weights, yielding a single net worth value for each alternative.
This approach introduces transparency into the decision-making process, as every expert score can be traced through normalization and weighting to its contribution in the final ranking. NWA thus allows for a fair evaluation of alternatives, particularly in contexts where criteria differ in direction, scale and distribution [39].
  • Step 7. Final Evaluation of Software
The final value of each alternative, considering the criterion significance factors, is calculated according to the following equation:
$$N_i = \sum_{j=1}^{7} g_j \cdot n_{ij}$$
where $g_j$ is the AHP-derived weight of criterion $j$ and $n_{ij}$ is the normalized score of alternative $i$ on criterion $j$ (seven criteria are used in this study).
The Net Worth Analysis (NWA) method was applied in a step-by-step manner to transform expert scores into normalized results. Each expert from Panel B assigned scores to LMS alternatives on a 1–5 scale for each criterion. These raw scores were first normalized using the NWA normalization function to bring all values to a comparable scale between 0 and 1. Then, the normalized scores were multiplied by the AHP-derived criterion weights to calculate the weighted net worth of each alternative. Finally, the sum of weighted values across all criteria produced the overall NWA score for each LMS platform. This process ensures transparency, like the AHP procedure, and allows tracing how expert assessments contribute to the final ranking.
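The normalize-weight-sum procedure described above can be sketched as follows. The exact NWA normalization function is not reproduced in the text, so a min-max normalization to [0, 1] (inverted for cost-type criteria) is assumed here; the scores and weights are illustrative stand-ins, not the study's data.

```python
# Hypothetical Panel B scores (rows = alternatives, columns = criteria)
# on a 1-5 scale; the weights stand in for the AHP-derived values.
scores = [[4, 2, 4],
          [3, 4, 5],
          [5, 3, 3]]
weights = [0.40, 0.35, 0.25]        # must sum to 1
is_cost = [False, True, False]      # True where lower values are better

n_alt, n_crit = len(scores), len(weights)
lo = [min(s[j] for s in scores) for j in range(n_crit)]
hi = [max(s[j] for s in scores) for j in range(n_crit)]

def normalize(x, j):
    """Min-max normalization to [0, 1]; cost criteria are inverted."""
    span = (hi[j] - lo[j]) or 1
    v = (x - lo[j]) / span
    return 1 - v if is_cost[j] else v

# Weighted net worth: N_i = sum over j of g_j * n_ij
net_worth = [sum(weights[j] * normalize(scores[i][j], j)
                 for j in range(n_crit)) for i in range(n_alt)]
print(net_worth)
```

Because each criterion is normalized before weighting, an alternative's total reflects its relative standing per criterion rather than the raw magnitudes of the 1-5 ratings.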
The application of the NWA method in this study is justified by its ability to normalize heterogeneous benefit and cost-type criteria into a comparable scale while preserving proportional differences in expert judgments. Unlike distance-based methods such as TOPSIS or VIKOR, which depend on the identification of an ideal and anti-ideal solution, NWA performs a value-based aggregation that transparently links expert scores and criterion weights through a direct normalization function. This property is particularly advantageous in the context of LMS evaluation, where criteria such as cost, usability, and security differ in direction, scale, and distribution. Moreover, NWA is less computationally intensive than network-based approaches such as ANP, which require a larger number of pairwise comparisons and are less suitable for small expert panels.
By integrating AHP and NWA, the proposed hybrid model combines the strengths of hierarchical weighting and value-based aggregation. AHP ensures the consistency of expert judgments and reliable derivation of weights, while NWA provides a robust mechanism for aggregating normalized scores across multiple criteria. The combination of these two methods has been validated in several technology evaluation domains, such as e-learning system assessment and sustainability-oriented engineering selection [33,40]. These studies emphasize that hybrid MCDM frameworks increase both methodological transparency and decision reproducibility, features that are critical when evaluating complex socio-technical systems like LMS platforms.

3.2. Decision Structure and Expert Selection

The evaluation process of LMS platforms was conducted with the involvement of two methodologically independent expert teams to ensure objectivity and reduce the risk of cognitive bias.
  • Team A (Decision-Makers)—Consisted of three university professors and members of the higher education institution’s management. Their task was to define the relevant evaluation criteria in accordance with the institution’s strategic and pedagogical objectives and to determine their weighting factors using the AHP method. The Delphi method, a structured communication technique aimed at reaching expert consensus, was applied for criteria definition [40]. This iterative process enabled team members, through multiple rounds of anonymous scoring and opinion exchange, to reach a consensus on which criteria best reflected the institution’s needs and priorities. The use of the Delphi method helped avoid the dominance of individual opinions and ensured a balanced set of criteria resulting from collective reasoning. The selection of three experts for Panel A was guided by their strategic roles within the institution: all three are senior academic managers directly involved in technology procurement and quality assurance processes. Their inclusion ensures that the weighting of criteria reflects key institutional decision-making perspectives, while maintaining a focused and experienced expert group consistent with similar MCDM studies in higher education contexts.
  • Team B (Technical Experts)—Comprised five experienced IT specialists with many years of practice in implementing, administering, and maintaining LMS solutions. Their role was to evaluate the specific platforms based on the criteria defined by Team A. For score aggregation, the NWA method was employed, enabling a transparent, value-based assessment of alternatives across heterogeneous criteria.
This division of work ensured methodological independence between the criterion definition phase (Team A) and the alternative evaluation phase (Team B). As a result, the risk of bias was reduced, and the reliability of the outcomes increased, since the same experts did not simultaneously influence both the selection of criteria and the evaluation of alternatives.
To ensure interpretive consistency, each point on the 1–5 scale used by Panel B was accompanied by descriptive anchors tailored to the specific criterion. Assessors were provided with a short rubric handbook before scoring and participated in a brief calibration session using two pilot examples to align interpretations. The anchors were criterion-specific, rather than uniform, to avoid scale heterogeneity. This calibration step ensured that assessors interpreted the 1–5 scale consistently across diverse criteria and that the final ratings accurately reflected both quantitative and qualitative performance differences among LMS platforms.
Within the Usability criterion, special attention was given to accessibility, multilingual support, assistive technology compatibility and mobile/offline capabilities. These subdimensions were evaluated using publicly available vendor conformance statements and targeted functional testing. Accessibility was assessed according to WCAG 2.1 AA guidelines, considering screen-reader support (NVDA, JAWS), keyboard navigation, and color-contrast compliance. Multilingual and localization features were verified through the presence of interface language packs and right-to-left (RTL) layout compatibility. Assistive technology compatibility included integration with captioning tools, transcript generation and alternative text rendering, while mobile and offline usability were evaluated based on progressive web application (PWA) features and caching behavior. These components collectively contributed approximately 25% of the total usability score, reflecting their strategic importance for inclusive digital transformation in higher education. Supporting evidence from LMS documentation, such as the Moodle Accessibility Statement, Blackboard Ally reports and ATutor WCAG conformance summaries, was used to validate expert assessments.

3.3. Methodological Contribution to Sustainability

The application of AHP-NWA in this context contributes to sustainable decision-making by offering a structured framework that integrates diverse evaluation criteria and supports transparent prioritization consistent with institutional sustainability goals. By framing the evaluation of LMS platforms through a sustainability lens, the method ensures that economic, social, and technical aspects are jointly considered, thereby supporting digital transformation pathways that are financially viable, socially inclusive, and adaptable to long-term institutional needs.
Recent advances in sustainable decision-making have demonstrated that integrating AHP with complementary multi-criteria methods enhances both analytical transparency and practical applicability across diverse domains. For instance, a fuzzy AHP-based MCDM framework was applied to optimize process parameters in the context of laser-cutting polyethylene, balancing performance, energy efficiency and material sustainability [41]. Similarly, an AHP-based MCDM approach was utilized to evaluate and monitor sustainability-oriented healthcare management practices, demonstrating how hierarchical decision models can support balanced resource allocation and resilience in complex systems [42].
These studies reaffirm that AHP and its hybrid extensions remain robust tools for addressing multi-dimensional sustainability challenges. By applying a comparable logic within the higher education sector, the present AHP-NWA framework operationalizes sustainable digital management by combining structured expert judgment, transparent weighting and adaptable evaluation mechanisms.

3.4. Evaluation Criteria

Based on the strategic needs of the educational institution, the requirements of the teaching process, and consultations with IT experts, seven key criteria were defined, covering the technical, economic and operational aspects of LMS platform performance. These criteria form the foundation for a systematic evaluation of alternatives and enable a balanced consideration of both functional and cost-related, as well as organizational, dimensions of LMS implementation.
  • Performance—This criterion encompasses the system’s functional capabilities, including its capacity to manage and distribute learning content, the availability of interaction tools, system scalability in relation to the number of users and operational stability and reliability under real-world conditions. High performance levels are essential for maintaining continuity of teaching and ensuring a high-quality user experience.
  • Cost—Includes all expenses incurred during the lifecycle of the LMS platform, such as licensing fees, initial implementation costs, periodic maintenance, upgrades and potential additional modules. Budget constraints in educational institutions often make this criterion a decisive factor in the selection process.
  • Implementation—Refers to the complexity of the installation process and the time required for full integration of the platform into the institution’s existing information ecosystem. It also covers compatibility with current hardware and software resources, as well as the need for adaptation or migration of existing learning materials.
  • Security—The assessment of security features includes data protection, integration of security protocols, authentication and authorization mechanisms, communication encryption and system resilience to cyber threats. In the context of handling sensitive student and faculty data, this criterion holds particular importance.
  • Usability—Covers the intuitiveness of the user interface, ease of navigation, speed of access to features and system accessibility for users with varying levels of technical expertise. This criterion also includes accessibility for users with disabilities, in accordance with universal design standards.
  • Support—Involves the availability of technical assistance, including help desk services, contact with the vendor or supplier, availability of online user communities and forums, as well as the speed and quality of responses to reported issues. Effective technical support can significantly reduce downtime and enhance system reliability.
  • Documentation—Pertains to the quality, scope and up-to-dateness of user and system documentation, including manuals, guides and tutorials. Good documentation facilitates the training of teaching and administrative staff, speeds up the implementation process and reduces the need for external support.
The defined criteria enable a comprehensive evaluation of LMS platforms from the perspective of end users, technical staff, and management, ensuring that the final decision aligns with the institution’s strategic and operational objectives.

4. Research Results

In accordance with the previously defined methodology, a quantitative evaluation was conducted for three of the most widely used LMS platforms in higher education: Moodle, ATutor and Blackboard. The evaluation was conducted using the latest stable releases available at the time of analysis: Moodle 4.3, Blackboard Learn Ultra (SaaS edition) and ATutor 2.2.1. All platforms were deployed in controlled institutional test environments with equivalent baseline configurations to ensure comparability. Moodle and ATutor were hosted on identical Linux-based servers (Ubuntu 22.04, Apache 2.4, PHP 8.1, MySQL 8.0), while Blackboard Learn was accessed via its SaaS deployment with default institutional settings. Each LMS included a standard set of LTI-compliant integrations: video conferencing (BigBlueButton for Moodle and ATutor, Blackboard Collaborate Ultra for Blackboard), plagiarism detection (Turnitin, Oakland, CA, USA) and gradebook synchronization via IMS LTI 1.3. Accessibility plugins (e.g., Brickfield Toolkit in Moodle and Blackboard Ally) were enabled to assess compliance with WCAG 2.1 AA guidelines. These controlled configurations ensured that performance, usability and security scores reflected the inherent characteristics of each LMS rather than hosting or configuration bias.
The study was carried out in line with the principles of multi-criteria decision-making, applying a hybrid approach that combines the AHP to determine the weighting factors of the criteria and NWA for the value-based aggregation of alternative ratings.
The analysis process included the following key steps:
  • Determination of criterion weighting factors,
  • Consistency check of the pairwise comparison matrix,
  • Evaluation of alternatives and aggregation of ratings.
By combining the results of the AHP and NWA methods, a comprehensive insight was obtained into the advantages and limitations of the analyzed LMS solutions. In this way, the research enabled decision-making based on objective indicators as well as expert assessments, thereby increasing the validity and applicability of the findings in real-world conditions.

4.1. Criterion Weighting Factors (AHP)

The expert team (Team A) assessed the relative importance of the seven criteria using Saaty’s scale. Based on these evaluations, a priority matrix and the corresponding weights were calculated, as shown in Table 1. The highest importance was assigned to cost (28%), followed by security (22%) and usability (17%). The remaining criteria have the following values: performance (12%), implementation (7%), support (7%), and documentation (3%).

4.2. Consistency Check

The consistency check was performed by calculating the consistency ratio (CR) using the maximum eigenvalue (λmax) and the consistency index (CI). The obtained values are presented in Table 2, with CR = 0.07 indicating a high level of consistency, as it is below the maximum acceptable threshold of 0.10. This confirms the validity of the evaluations, and no further adjustments were necessary.

4.3. Evaluation of Software Alternatives (Expert Assessment)

The second expert team (Team B), composed of IT specialists, evaluated each platform against each criterion using a scale from 1 to 5. The average ratings are presented in Table 3, while Figure 1 shows a radar chart of the LMS platforms.

5. Discussion

The results of the AHP analysis (Table 1) show that cost (28%), security (22%), and usability (17%) are the three most important criteria in the decision-making process. This clearly reflects the higher education institution management’s priority to select a solution that is financially sustainable, secure and easy to use. Lower weights were assigned to criteria such as performance (12%), implementation (7%), support (7%) and documentation (3%), indicating that, while relevant, these aspects are considered secondary compared to cost efficiency, security and user experience. When these weights are applied to the NWA method for value-based aggregation of scores from Panel B’s evaluation, the following final scores are obtained:
  • Moodle—0.586
  • ATutor—0.541
  • Blackboard—0.490
These results clearly demonstrate Moodle’s advantage over the other two platforms. Criterion-by-criterion analysis reveals that Moodle gains the most significant advantage from its high usability score (weighted contribution 0.867) and ease of implementation (0.362), while its greatest contribution to the overall score comes from its low cost (1.421), cost being the highest-weighted criterion in the model.
ATutor ranks second with a score of 0.541, owing to a balance of solid security (weighted contribution 0.897) and relatively low cost, although it falls slightly behind Moodle in terms of usability and support. Blackboard ranks third (0.490), excelling in performance and security; however, its position is significantly lowered by its high cost, the only cost-type criterion, which the NWA normalization penalizes in the overall score.
In this study, the criteria were categorized into two types: benefit criteria (where higher values are preferred, such as performance, usability, and security) and cost-type criteria (where lower values are preferable, such as total cost). This distinction ensures that criteria with opposite directions of preference are properly normalized during the NWA aggregation process, avoiding distortion in the final ranking and ensuring fair comparison across all evaluated LMS platforms.
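The inversion of cost-type criteria can be illustrated in isolation. The license figures below are hypothetical, not taken from the study; they only show how a min-max normalization is flipped so that the cheapest alternative scores highest.

```python
# Hypothetical annual license costs (lower is better); illustrative
# values only, not drawn from the study's data.
cost = [0.0, 12000.0, 45000.0]
lo, hi = min(cost), max(cost)

# For a cost-type criterion the min-max normalization is inverted,
# so the cheapest alternative receives the highest normalized value.
norm_cost = [(hi - c) / (hi - lo) for c in cost]
print(norm_cost)  # cheapest -> 1.0, most expensive -> 0.0
```

Benefit criteria use the un-inverted form $(x - \min)/(\max - \min)$, so after normalization both types point in the same direction and can be summed under one weight vector.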
The obtained ranking highlights a trade-off between cost efficiency and technical superiority: while Blackboard offers top-tier performance and security, its price makes it less attractive for an institution operating under budget constraints. Conversely, Moodle, as an open-source solution, successfully combines good performance across all criteria without significant weaknesses.
Compared to previous studies where MCDM methods were applied to LMS selection, these results confirm findings that open-source solutions often gain preference due to their flexibility, low cost and active user communities, making them more sustainable long-term options for educational institutions. The integration of AHP and NWA in this study further enhances decision-making transparency, enabling a clear interpretation of how criterion weights influence the final values of alternatives. The analysis also shows that the methodological independence of the two expert panels helps guard against potential sources of bias, while NWA ensures that scores from different ranges and types of criteria (benefit and cost) are brought onto a common scale, increasing the comparability and reliability of the results.
In line with the early work of Colace and De Santo, who applied AHP to evaluate e-learning platforms and demonstrated its ability to structure complex criteria into a consistent framework [33], the findings of this study reaffirm that AHP remains effective in highlighting critical dimensions such as cost, usability and security. Similarly to the fuzzy AHP-TOPSIS hybrid used by Turker et al. [43], which stressed the value of combining subjective expert judgments with robust aggregation, our hybrid AHP-NWA model confirms Moodle as the most suitable solution whenever cost-effectiveness and user adaptability are prioritized. The strength of this approach also resonates with the MACONT technique proposed by Wen, Liao and Zavadskas [18], who showed that normalization procedures are essential to avoid distortions when criteria are measured on different scales. More recent analyses of LMS platforms through MCDM frameworks reached the same conclusion, again ranking Moodle highest for its balance of low cost and adaptability, while proprietary systems performed better in security but were disadvantaged by higher costs [44]. Looking ahead, new approaches such as the integration of large language models into MCDM tasks, as proposed by Wang et al. [45], suggest that frameworks like AHP-NWA could be further strengthened by AI-driven expert systems, offering greater flexibility and reduced bias in future evaluations.
Taken together, these comparisons demonstrate that the results of this study are not only internally consistent but also externally validated by similar methodological approaches in both earlier and more recent literature. Moodle’s consistent advantage across different methodological frameworks reinforces its status as the most sustainable choice for higher education institutions seeking an open-source, user-friendly and cost-effective LMS solution.
From a sustainability perspective, the results demonstrate that Moodle, as an open-source platform, provides long-term economic and social benefits by reducing licensing costs, encouraging community-driven development, and supporting inclusivity. Such findings reinforce the role of LMS evaluation not only as a technical exercise, but also as a strategic decision that directly contributes to the sustainability of higher education institutions in the digital era.

5.1. Limitations of the Study and Recommendations for Future Research

Although the proposed hybrid AHP-NWA model provides a clear, transparent and replicable framework for evaluating LMS platforms, certain methodological and practical limitations should be taken into account. The main limitations, together with recommendations for future research, are the following:
  • Limited number of alternatives—The analysis covered only three LMS platforms (Moodle, ATutor and Blackboard), which restricts the breadth of comparison. Future studies should include a larger number of LMS solutions, such as Canvas, Google Classroom, Sakai, or other specialized platforms, to provide a more comprehensive market analysis.
  • Small sample size of experts in Panel A—The first expert panel consisted of only three members, which may affect the representativeness of the findings. Expanding the expert group to include more participants from different types of educational institutions and geographic regions would increase the reliability of results and reduce the risk of individual biases.
  • Reliance on subjective expert assessments—The evaluation of criteria and alternatives was based on subjective expert judgments without verification through objective performance metrics in real-world LMS operations. Future research should combine expert assessments with empirical data, such as system response time, operational stability, number of reported security incidents, and end-user satisfaction.
  • Static nature of the model—The model reflects the state of LMS platforms at the time of the study. Given that LMS solutions are continually evolving, prices change and new security standards emerge, periodic re-evaluation is recommended to ensure the results remain relevant and up to date.

5.2. Practical Implications

The results of this study have clear and direct implications for the decision-making process in higher education institutions that are planning to introduce or replace their LMS platform. The application of the hybrid AHP-NWA model enables management to make decisions based on a combination of quantitative and qualitative criteria, taking into account both technical and economic aspects.
  • Identification of key priorities—Recognizing cost, security and usability as the most influential criteria provides institutions with a clear indication of the priorities to consider when selecting a platform. In practice, this means that an LMS solution combining low costs with high levels of security and ease of use can ensure an optimal balance between value and performance, especially in budget-constrained environments.
  • Guidance through alternative ranking—The obtained ranking, with Moodle in first place, ATutor in second and Blackboard in third, can serve as a guideline for institutions in similar circumstances, particularly for those considering a transition to open-source solutions. These results suggest that Moodle, due to its combination of low cost, good usability and ease of implementation, may be an attractive option for most institutions.
  • Adaptability of the methodological framework—The decision-making framework used in this study can be easily adapted to other decision contexts in education. The transparency of the decision-making process and the ability to present results through clear tables and charts facilitate communication of findings to various target groups, including academic staff, IT departments and institutional leadership. This not only improves the quality of decisions but also increases their acceptance among key stakeholders in the implementation process.

5.3. Potential for Adapting the Model to Other Contexts

The proposed hybrid AHP-NWA model demonstrates a high degree of flexibility and adaptability, making it applicable beyond the scope of LMS platform evaluation. Its structure allows for the straightforward redefinition of goals, criteria and alternatives depending on the specific needs of an organization or sector.
  • Application in other areas of educational technology—The model can be applied to other segments of educational technology, such as the selection of online examination systems, content management systems (CMS), video conferencing tools, learning analytics software or virtual and augmented reality systems in teaching. In these cases, the criteria could include parameters such as compatibility with existing infrastructure, integration capabilities with other tools, scalability and user experience.
  • Use beyond the education sector—The AHP-NWA model can also be employed in decision-making across various fields, including public administration, healthcare, industry and the IT sector.
  • Integration with other MCDM techniques—The model is compatible with other multi-criteria decision-making methods and can be extended into hybrid approaches by incorporating techniques such as TOPSIS, VIKOR or PIPRECIA, resulting in more robust and detailed evaluations. Furthermore, integrating fuzzy logic could help reduce uncertainty and increase result accuracy, especially when the criteria are subjective or difficult to measure.
  • Utility for both internal and public analyses—Due to its transparency and the ability to present results in tables and charts, the model can serve not only as a tool for internal evaluations but also for the development of publicly available analyses that support strategic decision-making at the sectoral or community level.

5.4. Sensitivity Analysis

To evaluate the robustness of the obtained results, a sensitivity analysis was performed to examine how variations in the weighting factors influence the final ranking of LMS platforms (Figure 2). This approach is particularly relevant in MCDM studies, as decision-makers often face uncertainty regarding the exact importance of criteria.
For the analysis, the weights of the three most influential criteria (cost, 28%; security, 22%; usability, 17%) were systematically varied by ±10%, while the remaining weights were proportionally adjusted so that the total remained 100%. The recalculated NWA scores under these conditions confirmed the stability of the ranking.
  • When the weight of cost was increased to 31%, Moodle’s total score increased from 0.586 to 0.601, while ATutor and Blackboard showed marginal decreases, preserving the ranking order (Moodle > ATutor > Blackboard).
  • Increasing the weight of security to 24% favored Blackboard (from 0.490 to 0.505) due to its high security rating, but not sufficiently to surpass ATutor.
  • Increasing the weight of usability to 19% further strengthened Moodle’s leading position (0.594), as it had the highest usability score among the alternatives.
Overall, no significant rank reversals occurred under the tested variations, confirming the robustness of the hybrid AHP-NWA model and the stability of Moodle as the preferred solution.
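The weight-perturbation step described above (scale one criterion by ±10%, renormalize the rest proportionally) can be implemented compactly. The weight values below are illustrative approximations of the AHP weights, adjusted slightly so they sum to exactly 1.0, since the published rounded percentages do not.

```python
def perturb(weights, target, factor):
    """Scale one criterion's weight by `factor` and renormalize the other
    weights proportionally so that all weights still sum to 1."""
    new = dict(weights)
    new[target] = weights[target] * factor
    rest = sum(v for k, v in weights.items() if k != target)
    scale = (1.0 - new[target]) / rest
    for k in new:
        if k != target:
            new[k] = weights[k] * scale
    return new

# Approximate AHP weights (illustrative; adjusted to sum to exactly 1.0).
w = {"cost": 0.28, "security": 0.22, "usability": 0.18, "performance": 0.12,
     "support": 0.08, "implementation": 0.07, "documentation": 0.05}

w_up = perturb(w, "cost", 1.10)   # cost: 0.28 -> 0.308, others scaled down
print(round(w_up["cost"], 3), round(sum(w_up.values()), 6))
```

The same function covers the downward case (`factor=0.90`), so a full sensitivity sweep reduces to recomputing the NWA scores once per perturbed weight vector.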

5.5. Scenario Analysis

The scenario analysis simulates real-world challenges that higher education institutions typically encounter when choosing and managing digital platforms (Figure 3). The scenarios represent distinct strategic contexts, such as operating under a limited budget, prioritizing cybersecurity or emphasizing user experience, and show how institutional priorities may influence the final ranking of LMS platforms, thereby further testing the robustness and adaptability of the proposed AHP-NWA framework. Three alternative decision-making contexts were examined, each emphasizing a particular strategic orientation of higher education institutions:
  • Scenario 1: Budget-Constrained Environment
    In institutions with limited financial resources, the cost criterion was given dominant importance (40%), with reduced weights for performance and support. Under this scenario, Moodle’s advantage became even more pronounced, achieving a total score of 0.622, while Blackboard dropped further due to its high costs.
  • Scenario 2: Security-Critical Environment
    For institutions dealing with sensitive data, the weight of security was increased to 35%, with usability and implementation reduced. In this case, Blackboard’s score rose to 0.528, reducing the gap with ATutor (0.547), but Moodle remained the top-ranked option (0.593).
  • Scenario 3: User-Centric Environment
    In contexts prioritizing user experience and accessibility, usability was increased to 30%, while cost and documentation were reduced. Moodle consolidated its leading position (0.612) due to its intuitive interface, while ATutor remained second (0.539).
To enhance the generalizability of the findings, a complementary scenario library was introduced to represent three common institutional archetypes:
  • Small Private Colleges,
  • Large Research Universities,
  • Low-Bandwidth or Resource-Constrained Regions.
Each archetype was modeled by adjusting the original AHP-derived weights to reflect distinct strategic priorities.
Although the ranking of alternatives remained largely stable across archetypes, the relative gaps between platforms shifted, confirming that contextual factors can alter institutional preferences without undermining the robustness of the AHP-NWA framework. The inclusion of this scenario library provides a transferable reference that other institutions can adapt by recalibrating weights according to their operational conditions. The scenario analysis highlights the flexibility of the proposed AHP-NWA model, showing that while rankings may shift slightly depending on institutional priorities, Moodle consistently remains the most suitable option across diverse decision contexts.
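The scenario library described above can be expressed as a small, reusable structure: one weight set per strategic context, ranked with the same aggregation rule. All numbers below are hypothetical pre-normalized scores (0 to 1) and illustrative weight sets chosen for demonstration, not the study’s actual values.

```python
# Illustrative scenario library: hypothetical pre-normalized scores per
# criterion and per platform, plus one weight set per institutional context.
scores = {
    "Moodle":     {"cost": 1.00, "security": 0.70, "usability": 1.00},
    "ATutor":     {"cost": 1.00, "security": 0.80, "usability": 0.60},
    "Blackboard": {"cost": 0.35, "security": 1.00, "usability": 0.60},
}
scenarios = {
    "budget-constrained": {"cost": 0.60, "security": 0.20, "usability": 0.20},
    "security-critical":  {"cost": 0.20, "security": 0.60, "usability": 0.20},
    "user-centric":       {"cost": 0.20, "security": 0.20, "usability": 0.60},
}

for name, w in scenarios.items():
    # Rank platforms by weighted sum under this scenario's priorities.
    ranked = sorted(scores, key=lambda a: -sum(w[c] * scores[a][c] for c in w))
    print(f"{name}: {' > '.join(ranked)}")
```

Other institutions can adapt the library by replacing the weight sets with their own AHP-derived priorities; the ranking logic stays unchanged.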

6. Conclusions

This study developed and applied a hybrid multi-criteria decision-making model that integrates the AHP for determining criterion weights and the NWA method for value-based aggregation of alternative scores. The model was applied to the selection of optimal LMS platforms in the context of higher education, with the participation of two methodologically independent expert panels. The results highlight cost, security, and usability as the most critical factors guiding decision-making, while Moodle consistently emerged as the most sustainable option, followed by ATutor and Blackboard.
The inclusion of sensitivity and scenario analyses confirmed the robustness of the model, demonstrating that the ranking of alternatives remains stable under different institutional priorities and variations in weighting factors. The combination of AHP and NWA proved effective in ensuring transparency, comparability and reduced bias, thereby supporting higher education institutions in making informed and reliable decisions regarding LMS adoption.
By linking the evaluation of LMS platforms with sustainability principles, the study contributes to a broader understanding of how digital transformation in higher education can be managed in an economically viable, socially inclusive and technologically adaptable way. In this way, the proposed AHP-NWA framework provides not only methodological robustness but also a pathway for universities to adopt sustainable and future-oriented digital solutions in line with the Education for Sustainable Development agenda. The framework also contributes to the achievement of Sustainable Development Goal 4 (Quality Education), by supporting higher education institutions in adopting inclusive, resilient and cost-effective digital solutions.

Author Contributions

Conceptualization, A.V. and D.V.; Methodology, A.V. and D.V.; Validation, L.I., A.Š. and M.P.; Formal analysis, D.K.; Investigation, L.I.; Resources, D.K.; Data curation, M.P.; Writing—original draft, A.V., D.V., L.I. and D.K.; Writing—review & editing, A.Š. and M.P.; Supervision, D.K.; Project administration, A.Š. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hashim, M.A.M.; Tlemsani, I.; Matthews, R. Correction: Higher education strategy in digital transformation. Educ. Inf. Technol. 2022, 27, 7379.
  2. Oliveira, K.K.S.; De Souza, R.A.C. Digital transformation towards education 4.0. Inform. Educ. 2022, 21, 283–309.
  3. Demartini, C.G.; Benussi, L.; Gatteschi, V.; Renga, F. Education and digital transformation: The “riconnessioni” project. IEEE Access 2020, 8, 186233–186256.
  4. Sharifov, M.; Safikhanova, S.; Mustafa, A. Review of prevailing trends, barriers and future perspectives of learning management systems (LMSs) in higher education institutions. Int. J. Educ. Dev. Using Inf. Commun. Technol. 2021, 17, 207–216.
  5. Munna, M.S.H.; Hossain, M.R.; Saylo, K.R. Digital education revolution: Evaluating LMS-based learning and traditional approaches. J. Innov. Technol. Converg. 2024, 6, 2.
  6. Castro, R. Blended learning in higher education: Trends and capabilities. Educ. Inf. Technol. 2019, 24, 2523–2546.
  7. Babbar, M.; Gupta, T. Response of educational institutions to COVID-19 pandemic: An inter-country comparison. Policy Futures Educ. 2022, 20, 469–491.
  8. Adedoyin, O.B.; Soykan, E. COVID-19 pandemic and online learning: The challenges and opportunities. Interact. Learn. Environ. 2023, 31, 863–875.
  9. Turnbull, D.; Chugh, R.; Luck, J. Transitioning to e-learning during the COVID-19 pandemic: How have Higher Education Institutions responded to the challenge? Educ. Inf. Technol. 2021, 26, 6401–6419.
  10. Cone, L.; Brøgger, K.; Berghmans, M.; Decuypere, M.; Förschler, A.; Grimaldi, E.; Hartong, S.; Hillman, T.; Ideland, M.; Landri, P.; et al. Pandemic acceleration: COVID-19 and the emergency digitalization of European education. Eur. Educ. Res. J. 2022, 21, 845–868.
  11. Berking, P.; Gallagher, S. Choosing a learning management system. Adv. Distrib. Learn. (ADL) Co-Lab. 2013, 14, 40–62.
  12. Ali, M.; Wood-Harper, T.; Wood, B. Understanding the technical and social paradoxes of learning management systems usage in higher education: A sociotechnical perspective. Syst. Res. Behav. Sci. 2024, 41, 134–152.
  13. Wright, C.R.; Lopes, V.; Montgomerie, C.; Reju, S.; Schmoller, S. Selecting a learning management system: Advice from an academic perspective. Educ. Rev. 2014.
  14. Alshomrani, S. Evaluation of technical factors in distance learning with respect to open source LMS. Asian Trans. Comput. 2012, 2, 1117.
  15. Galvis, Á.H. Supporting decision-making processes on blended learning in higher education: Literature and good practices review. Int. J. Educ. Technol. High. Educ. 2018, 15, 25.
  16. Sahoo, S.K.; Goswami, S.S. A comprehensive review of multiple criteria decision-making (MCDM) methods: Advancements, applications, and future directions. Decis. Mak. Adv. 2023, 1, 25–48.
  17. Asadabadi, M.R.; Chang, E.; Saberi, M. Are MCDM methods useful? A critical review of analytic hierarchy process (AHP) and analytic network process (ANP). Cogent Eng. 2019, 6, 1623153.
  18. Wen, Z.; Liao, H.; Zavadskas, E.K. MACONT: Mixed aggregation by comprehensive normalization technique for multi-criteria analysis. Informatica 2020, 31, 857–880.
  19. Oussous, A.; Menyani, I.; Srifi, M.; Lahcen, A.A.; Kheraz, S.; Benjelloun, F.-Z. An evaluation of open source adaptive learning solutions. Information 2023, 14, 57.
  20. Altınay-Gazi, Z.; Altınay-Aksal, F. Technology as mediation tool for improving teaching profession in higher education practices. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 803–813.
  21. Pettersson, F. Understanding digitalization and educational change in school by means of activity theory and the levels of learning concept. Educ. Inf. Technol. 2021, 26, 187–204.
  22. Ricard, M.; Zachariou, A.; Burgos, D. Digital education, information and communication technology, and education for sustainable development. In Radical Solutions and eLearning: Practical Innovations and Online Educational Technology; Springer: Singapore, 2020; pp. 27–39.
  23. Guri-Rosenblit, S. ‘Distance education’ and ‘e-learning’: Not the same thing. High. Educ. 2005, 49, 467–493.
  24. Yucel, A.S. E-learning approach in teacher training. Turk. Online J. Distance Educ. 2006, 7, 123–131.
  25. Oliveira, P.C.; Cunha, C.J.C.A.; Nakayama, M.K. Learning management systems (LMS) and e-learning management: An integrative review and research agenda. JISTEM-J. Inf. Syst. Technol. Manag. 2016, 13, 157–180.
  26. Dias, B.D.; Diniz, J.A.; Hadjileontiadis, L.J. Towards an intelligent learning management system under blended learning. In Trends, Profiles and Modeling Perspectives; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2014; Volume 59.
  27. Bradley, V.M. Learning management system (LMS) use with online instruction. Int. J. Technol. Educ. 2021, 4, 68–92.
  28. Rößling, G.; Joy, M.; Moreno, A.; Radenski, A.; Malmi, L.; Kerren, A.; Naps, T.; Ross, R.J.; Clancy, M.; Korhonen, A.; et al. Enhancing learning management systems to better support computer science education. ACM SIGCSE Bull. 2008, 40, 142–166.
  29. Veluvali, P.; Surisetti, J. Learning management system for greater learner engagement in higher education—A review. High. Educ. Future 2022, 9, 107–121.
  30. Massam, B.H. Multi-criteria decision making (MCDM) techniques in planning. Prog. Plan. 1988, 30, 1–84.
  31. Saaty, T.L. The Analytic Hierarchy Process; McGraw-Hill: New York, NY, USA, 1980.
  32. Saaty, T.L. Analytic hierarchy process. In Encyclopedia of Operations Research and Management Science; Springer: Boston, MA, USA, 2013; pp. 52–64.
  33. Colace, F.; De Santo, M. Evaluation models for e-learning platforms and the AHP approach: A case study. IPSI BGD Trans. Internet Res. 2011, 7, 31–43.
  34. Yang, J. Fuzzy comprehensive evaluation system and decision support system for learning management of higher education online courses. Sci. Rep. 2025, 15, 18113.
  35. Hafizan, C.; Noor, Z.Z.; Abba, A.H.; Hussein, N. An alternative aggregation method for a life cycle impact assessment using an analytical hierarchy process. J. Clean. Prod. 2016, 112, 3244–3255.
  36. Albayrak, E.; Erensal, Y.C. Using analytic hierarchy process (AHP) to improve human performance: An application of multiple criteria decision-making problem. J. Intell. Manuf. 2004, 15, 491–503.
  37. Martinez, M.; de Andres, D.; Ruiz, J.-C.; Friginal, J. From measures to conclusions using analytic hierarchy process in dependability benchmarking. IEEE Trans. Instrum. Meas. 2014, 63, 2548–2556.
  38. Beynon, M. An analysis of distributions of priority values from alternative comparison scales within AHP. Eur. J. Oper. Res. 2002, 140, 104–117.
  39. Marinović, M.; Viduka, D.; Lavrnić, I.; Stojčetović, B.; Skulić, A.; Bašić, A.; Balaban, P.; Rastovac, D. An intelligent multi-criteria decision approach for selecting the optimal operating system for educational environments. Electronics 2025, 14, 514.
  40. Linstone, H.A.; Turoff, M. (Eds.) The Delphi Method: Techniques and Applications; Addison-Wesley: Reading, MA, USA, 2002; Available online: https://is.njit.edu/pubs/delphibook/ (accessed on 8 November 2025).
  41. Basar, G.; Der, O. Multi-objective optimization of process parameters for laser cutting polyethylene using fuzzy AHP-based MCDM methods. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2025, 239, 1225–1241.
  42. Pant, S.; Garg, P.; Kumar, A.; Ram, M.; Kumar, A.; Sharma, H.K.; Klochkov, Y. AHP-based multi-criteria decision-making approach for monitoring health management practices in smart healthcare system. Int. J. Syst. Assur. Eng. Manag. 2024, 15, 1444–1455.
  43. Turker, Y.A.; Baynal, K.; Turker, T. The evaluation of learning management systems by using fuzzy AHP, fuzzy TOPSIS and an integrated method: A case study. Turk. Online J. Distance Educ. 2019, 20, 195–218.
  44. Pucar, Đ.; Popović, G.; Milovanović, G. MCDM methods-based assessment of learning management systems. Teme 2023, 47, 939–956.
  45. Wang, H.; Zhang, F.; Mu, C. One for all: A general framework of LLMs-based multi-criteria decision making on human expert level. arXiv 2025, arXiv:2502.15778.
Figure 1. Radar Chart of LMS Platforms by Evaluation Criteria.
Figure 2. Sensitivity Analysis of LMS Scores under Weight Variations.
Figure 3. Heatmap of Scenario Analysis Results.
Table 1. Normalized weighting factors of criteria using the AHP method. For each criterion, the first row gives the pairwise comparison values and the second row the corresponding column-normalized values.

| | Performance | Security | Usability | Support | Cost | Implementation | Documentation | Significance Factor (%) |
|---|---|---|---|---|---|---|---|---|
| Performance | 1 | 1/3 | 1/3 | 3 | 1/3 | 2 | 5 | 12% |
| | 0.091 | 0.058 | 0.054 | 0.196 | 0.098 | 0.150 | 0.185 | |
| Security | 3 | 1 | 3 | 2 | 1/3 | 3 | 5 | 22% |
| | 0.272 | 0.175 | 0.484 | 0.130 | 0.098 | 0.225 | 0.185 | |
| Usability | 3 | 1/3 | 1 | 3 | 1 | 3 | 5 | 17% |
| | 0.272 | 0.058 | 0.161 | 0.196 | 0.294 | 0.225 | 0.007 | |
| Support | 1/3 | 1/2 | 1/3 | 1 | 1/5 | 1 | 3 | 7% |
| | 0.030 | 0.088 | 0.054 | 0.065 | 0.059 | 0.075 | 0.111 | |
| Cost | 3 | 3 | 1 | 5 | 1 | 3 | 5 | 28% |
| | 0.272 | 0.526 | 0.161 | 0.326 | 0.294 | 0.225 | 0.185 | |
| Implementation | 1/2 | 1/3 | 1/3 | 1 | 1/3 | 1 | 3 | 7% |
| | 0.045 | 0.058 | 0.054 | 0.065 | 0.098 | 0.075 | 0.111 | |
| Documentation | 1/5 | 1/5 | 1/5 | 1/3 | 1/5 | 1/3 | 1 | 3% |
| | 0.018 | 0.035 | 0.032 | 0.022 | 0.059 | 0.025 | 0.037 | |
| TOTAL | 11.033 | 5.700 | 6.200 | 15.333 | 3.400 | 13.333 | 27.000 | 100% |
| | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | |
Table 2. Consistency check of the obtained results using the AHP method. For each criterion, the first row gives the pairwise comparison values and the second row the weight-scaled matrix entries; the final three columns give the weighted row sum, the resulting row estimate of λmax, and its ratio to n = 7.

| | Performance | Security | Usability | Support | Cost | Implementation | Documentation | Weighted Sum | λmax (Row) | λmax/n |
|---|---|---|---|---|---|---|---|---|---|---|
| Performance | 1 | 1/3 | 1/3 | 3 | 1/3 | 2 | 5 | 0.86 | 7.24 | 1.03 |
| | 0.119 | 0.075 | 0.058 | 0.207 | 0.095 | 0.145 | 0.163 | | | |
| Security | 3 | 1 | 3 | 2 | 1/3 | 3 | 5 | 1.71 | 7.64 | 1.09 |
| | 0.356 | 0.224 | 0.520 | 0.138 | 0.095 | 0.217 | 0.163 | | | |
| Usability | 3 | 1/3 | 1 | 3 | 1 | 3 | 5 | 1.48 | 8.51 | 1.22 |
| | 0.356 | 0.075 | 0.173 | 0.207 | 0.284 | 0.217 | 0.163 | | | |
| Support | 1/3 | 1/2 | 1/3 | 1 | 1/5 | 1 | 3 | 0.51 | 7.34 | 1.05 |
| | 0.040 | 0.112 | 0.058 | 0.069 | 0.057 | 0.072 | 0.098 | | | |
| Cost | 3 | 3 | 1 | 5 | 1 | 3 | 5 | 2.21 | 7.78 | 1.11 |
| | 0.356 | 0.673 | 0.173 | 0.344 | 0.284 | 0.217 | 0.163 | | | |
| Implementation | 1/2 | 1/3 | 1/3 | 1 | 1/3 | 1 | 3 | 0.50 | 6.88 | 0.98 |
| | 0.059 | 0.075 | 0.030 | 0.069 | 0.095 | 0.072 | 0.098 | | | |
| Documentation | 1/5 | 1/5 | 1/5 | 1/3 | 1/5 | 1/3 | 1 | 0.24 | 7.36 | 1.05 |
| | 0.024 | 0.045 | 0.035 | 0.023 | 0.057 | 0.024 | 0.033 | | | |
| TOTAL | | | | | | | | | 52.75 | 7.54 |
| CR | | | | | | | | | | 0.07 |
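The weight derivation in Table 1 and the consistency check in Table 2 can be reproduced from the pairwise comparison matrix. The sketch below uses the standard column-normalization/row-average approximation of the principal eigenvector and Saaty’s random index RI = 1.32 for n = 7; it is a sketch of the textbook procedure, not the authors’ exact calculation, so the printed values may differ slightly from the published rounded figures.

```python
# Reproduce AHP weights and the consistency ratio from the pairwise matrix
# in Table 1, using exact fractions to avoid rounding noise.
from fractions import Fraction as F

criteria = ["performance", "security", "usability", "support",
            "cost", "implementation", "documentation"]
A = [  # pairwise comparison matrix (Saaty 1-9 scale), rows/cols as above
    [F(1),   F(1, 3), F(1, 3), F(3),    F(1, 3), F(2),    F(5)],
    [F(3),   F(1),    F(3),    F(2),    F(1, 3), F(3),    F(5)],
    [F(3),   F(1, 3), F(1),    F(3),    F(1),    F(3),    F(5)],
    [F(1, 3), F(1, 2), F(1, 3), F(1),   F(1, 5), F(1),    F(3)],
    [F(3),   F(3),    F(1),    F(5),    F(1),    F(3),    F(5)],
    [F(1, 2), F(1, 3), F(1, 3), F(1),   F(1, 3), F(1),    F(3)],
    [F(1, 5), F(1, 5), F(1, 5), F(1, 3), F(1, 5), F(1, 3), F(1)],
]
n = len(A)

# Column-normalize, then average each row to approximate the priority vector.
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
weights = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

# lambda_max via the weighted-row-sum estimate, then CI and CR (RI = 1.32, n = 7).
lam = sum(sum(A[i][j] * weights[j] for j in range(n)) / weights[i]
          for i in range(n)) / n
CI = (lam - n) / (n - 1)
CR = CI / 1.32

for name, w in zip(criteria, weights):
    print(f"{name:14s} {float(w):.3f}")
print(f"lambda_max = {float(lam):.2f}, CI = {float(CI):.3f}, CR = {float(CR):.2f}")
```

A CR below the conventional 0.10 threshold indicates that the expert judgments are acceptably consistent, which matches the CR of 0.07 reported in Table 2.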
Table 3. Ratings of LMS platforms using the NWA method. For each platform, “Evaluation” is the expert rating and “Points” is the resulting weighted score.

| Criterion | Significance Factor (%) | Moodle Evaluation | Moodle Points | ATutor Evaluation | ATutor Points | Blackboard Evaluation | Blackboard Points |
|---|---|---|---|---|---|---|---|
| Performance | 12% | 4 | 0.475 | 3 | 0.356 | 5 | 0.594 |
| Security | 22% | 3 | 0.673 | 4 | 0.897 | 5 | 1.121 |
| Usability | 17% | 5 | 0.867 | 3 | 0.520 | 3 | 0.520 |
| Support | 7% | 2 | 0.138 | 2 | 0.138 | 3 | 0.207 |
| Cost | 28% | 5 | 1.421 | 5 | 1.421 | 2 | 0.569 |
| Implementation | 7% | 5 | 0.362 | 4 | 0.290 | 4 | 0.290 |
| Documentation | 3% | 5 | 0.163 | 5 | 0.163 | 4 | 0.130 |
| Total | 100% | 4 | 0.586 | 4 | 0.541 | 4 | 0.490 |
| Rank | | 1 | | 2 | | 3 | |