Article

A Two-Layer Model for Complex Multi-Criteria Decision-Making and Its Application in Institutional Research

Office for Planning and Promotion, The University of Aizu, Aizuwakamatsu 965-8580, Fukushima, Japan
*
Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2025, 8(5), 148; https://doi.org/10.3390/asi8050148
Submission received: 16 July 2025 / Revised: 19 September 2025 / Accepted: 26 September 2025 / Published: 7 October 2025

Abstract

Complex decision-making often involves numerous alternatives and diverse criteria, making it difficult to set clear priorities under resource constraints. This study proposes a two-layer hierarchical decision model that structures the process into sequential stages: the first layer narrows the alternatives according to strategic considerations, and the second layer re-evaluates the shortlisted options based on feasibility. This layered design clarifies the decision path and enhances interpretability compared to single-layer approaches. To demonstrate its practical value, the model is applied to an institutional research case in higher education, implemented with the entropy weight method (EWM) for weighting and TOPSIS for ranking. The results show that the model supports transparent and resource-aware planning for performance improvement, while being scalable to a multi-layer structure to accommodate diverse organizational needs and varying levels of complexity.

1. Introduction

In simple terms, decision-making is the process of making choices by recognizing the problem, gathering information, and finalizing the best alternative among feasible solutions [1]. Simple decisions are often made quickly based on personal preferences or intuition, whereas complex judgments typically require a systematic approach through logical reasoning. In organizational settings, decisions are especially likely to be made collectively rather than individually.
Multi-criteria decision-making (MCDM) is a methodology used to evaluate alternatives based on multiple criteria to support decision-making. It has been widely applied in various fields, including industry, business, government, health care, and education [2,3,4,5,6,7]. In higher education institutions, MCDM has been used to improve the management of higher education through institutional research (IR). IR involves analyzing and interpreting information about an institution and its activities, such as students, staff, programs, management, and operations, to assist policymakers in planning and decision-making [8,9]. Numerous studies have explored the application of MCDM-based approaches in the context of IR. Chairungrang, S., et al. applied MCDM techniques to sequence vocational education in Thailand [10]. R. Garg developed an MCDM-based framework to address the cloud deployment model selection problem for an academic institute [11]. Youssef A. E., et al. proposed a hybrid MCDM approach to evaluate and rank Web-based E-Learning Platforms [12]. Alaa M., et al. used Fuzzy Delphi and TOPSIS methods to evaluate the English skills of pre-service teachers [13]. Abdelaal R. M. S., et al. proposed a hybrid MCDM approach to prioritize the objectives and projects in higher education strategic planning [14]. Z. Chen et al. proposed an integrated MCDM method to evaluate teaching quality in educational institutions [15]. K. K. Myint et al. applied MCDM techniques to evaluate 87 high schools in Myanmar [16]. P. Koltharkar et al. adopted fuzzy TOPSIS on a questionnaire-based survey to prioritize student requirements [17]. Y. Xu et al. combined two MCDM techniques to select subjects in the college entrance examination [18]. Kang et al. developed an MCDM-based decision-making model to support college admissions by evaluating multiple criteria relevant to applicant selection [19].
MCDM techniques can support institutions in improving management practices and facilitate development by analyzing educational and research environments [20,21]. However, current studies usually treat the decision-making process as a single-layer procedure, in which all criteria and alternatives are evaluated simultaneously. Such approaches often become impractical when the number of decision targets increases, or when decision-makers need to balance long-term strategic importance against short-term feasibility. Specifically, universities often face the dilemma of having too many aspects that require improvement but insufficient resources to address all of them. For example, the Times Higher Education World University Rankings use 18 indicators to evaluate universities, and institutional planners need to decide which ones to prioritize for improvement under resource constraints.
This study aims to assist decision-makers in making clear and transparent choices in complex multi-criteria settings, while accounting for both objective realities and subjective priorities. Its novelty lies in the two-layer hierarchical framework, which structures the process into strategic prioritization and feasibility evaluation. In this structure, the first layer identifies a smaller set of promising alternatives based on strategic considerations, while the second layer evaluates these filtered alternatives from the perspective of practical feasibility. This layered design makes the decision path explicit, thereby enhancing the transparency and credibility of the outcome. Moreover, it provides a scalable framework that improves interpretability and better reflects the sequential logic of real-world institutional decision-making.

2. Related Works

2.1. Multi-Criteria Decision Aiding/Analysis (MCDA)

In the literature, the research area is often termed multi-criteria decision aiding/analysis (MCDA), which broadly includes both methodological development and practical applications [22,23]. Multi-criteria decision-making (MCDM) usually denotes the decision process itself, whereas MCDA highlights the analytical and theoretical foundations behind it [24]. In this section, we use MCDA to align with the theoretical perspective, while recognizing that the two terms are closely related and often used interchangeably.
MCDA has a long and rich history [25,26], with foundational contributions made by Bernard Roy, who distinguished four problematics according to decision tasks [23]. α-problematic (choice) selects the best alternative(s) from a given set, β-problematic (sorting) assigns alternatives to predefined categories, γ-problematic (ranking) establishes a complete or partial order among alternatives, and δ-problematic (description) structures and understands the decision problem itself. Our proposed two-layer model is situated primarily within the ranking problematic, as it is designed to order university performance indicators under both strategic and feasibility considerations. At the same time, it also involves an element of the choice problematic, as the second layer narrows the ordered set to identify the most feasible target for institutional improvement.
Method-oriented classifications of MCDA group techniques according to how preferences are modeled and criteria are aggregated [27]. Value- or utility-based methods, such as MAUT, TOPSIS, and VIKOR, rely on constructing a compensatory utility function that integrates multiple criteria into a single score [28,29,30]. Pairwise comparison methods, most notably AHP and ANP, obtain relative weights through structured comparisons among criteria or alternatives, offering interpretability but also raising concerns about consistency [31,32]. Outranking methods, including the ELECTRE and PROMETHEE families, avoid strict aggregation by establishing whether one alternative sufficiently outperforms another on most criteria, thereby handling incommensurable or conflicting dimensions. These approaches were developed to address the incommensurability of criteria and the importance of structuring decision problems rather than reducing them to a single aggregated score [24]. The ELECTRE family has evolved from ELECTRE I and II for basic choice and ranking problems to ELECTRE III, IV, IS, and TRI for more complex ranking and sorting tasks [33,34,35]. Stepwise and additive ratio approaches, such as SWARA and ARAS, provide simplified weighting or evaluation procedures that decompose the decision process into sequential steps [36,37,38]. Fuzzy and gray-based methods extend these paradigms to address uncertainty and vagueness in judgments, while hybrid and extended approaches combine classical MCDA with optimization, statistical, or machine learning tools to enhance robustness and adaptability [39,40,41]. Despite their methodological diversity, these approaches share the common objective of supporting decision-makers in structuring complex problems, balancing objective measures with subjective preferences, and achieving transparent and traceable outcomes in multi-criteria environments.

2.2. Multi-Stage and Hierarchical Extensions of MCDM

While many studies have applied MCDM techniques and hybrid methods across education, management, and policy domains, most of these approaches adopt a flat, single-layer structure in which all criteria and alternatives are evaluated simultaneously. This often leads to difficulties in interpretation and scalability when the number of criteria increases or when decision-makers must balance strategic objectives with practical resource constraints.
More recently, researchers have begun to recognize the value of multi-stage or hierarchical MCDM frameworks. Karsak et al. combined the Analytic Network Process (ANP) with goal programming in product planning for quality function deployment [42], while M. Kabak et al. applied an ANP–BOCR model to prioritize renewable energy sources [43]. In both cases, all criteria are processed simultaneously within an ANP structure, and the multi-stage aspect arises from linking ANP with a subsequent optimization or evaluation step. Mardani et al. provided a comprehensive review of fuzzy MCDM techniques, emphasizing that many models adopt hierarchical or multi-stage evaluation structures to address uncertainty in expert-driven contexts [44]. Hendiani et al. (2020) proposed a multi-stage hierarchical fuzzy index-based approach for sustainable supplier selection, in which different levels of criteria are evaluated in a staged manner under fuzzy uncertainty, with higher-level assessments building on the results of lower-level evaluations [45]. This progressive structure allows the decision process to gradually integrate different groups of criteria and manage uncertainty step by step. Kersulienė et al. introduced the SWARA method as a stepwise weighting technique, where criteria weights are determined sequentially rather than simultaneously [36]. Similarly, Jovanović et al. developed a two-phase fuzzy MCDM model combining IMF D-SWARA for weight determination with Fuzzy ARAS-Z for alternative ranking, demonstrated in a paver selection case [46].
These studies illustrate valuable extensions of MCDM into multi-stage or fuzzy-layered frameworks, but they are primarily focused on managing uncertainty or weight elicitation, and, thus, do not fully meet the requirements of our research.

2.3. Research Gaps and Motivation for a Two-Layer Model

Despite these advances, several critical requirements remain unaddressed in the context of institutional research. In particular, decision problems often involve a large number of targets, the need to balance objective indicators with subjective preferences, and practical constraints such as limited resources. These considerations highlight the necessity for a more transparent and tailored framework, which motivates the development of our proposed two-layer decision model.
In this study, several constraints are considered: (1) the number of decision targets may be large, making methods that rely heavily on subjective judgments or fuzzy linguistic assessments difficult to apply; (2) decision-making requires a balance between objective indicators and the subjective preferences of decision-makers; (3) not all alternatives should be treated equally, as practical factors such as resource limitations must be incorporated, possibly in an adjustable manner; and (4) the process must remain transparent, enabling decision-makers to clearly understand which factors drive the final outcome. These requirements call for a different type of decision framework tailored to the complexity of institutional research. To address these challenges, this study proposes a two-layer decision model designed specifically for complex institutional research problems.

3. A Two-Layer Decision Model

The model introduces a sequential structure in which different sets of alternatives are assigned to distinct stages of the evaluation process. In the first layer, strategic importance is emphasized by considering objective criteria and decision-makers’ preferences. The second layer then focuses on feasibility by incorporating practical constraints.
The proposed two-layer model is developed under several assumptions. First, the decision context involves a relatively large and complex set of alternatives, where evaluating all criteria simultaneously often makes the decision process difficult to trace and interpret clearly. Second, decision-making needs to balance both objective indicators (e.g., published data) and subjective considerations (e.g., stakeholder preferences). Third, feasibility factors such as resource constraints should be incorporated in a transparent and adjustable manner. These assumptions reflect typical conditions in institutional research and provide the rationale for structuring the evaluation into two sequential layers, as shown in Figure 1.
The framework is also methodologically flexible. Different MCDM techniques can be applied in each layer depending on the decision context and the preferences of decision-makers. For instance, PROMETHEE or fuzzy-based approaches could be used to capture specific requirements or to handle uncertainty. In this study, we use the entropy weight method (EWM) for weighting and TOPSIS for ranking, which together provide an objective and widely recognized approach to validate the proposed model.
In the structured framework, the first layer is designed to identify key alternatives from a complex set and forward the evaluation results to the next layer. The criteria used in this layer consist of both objective measures and a subjective parameter α assigned by decision-makers. The second layer is designed to rank the alternatives identified in the first layer. The criteria used here differ from those in the first layer, so that the alternatives are evaluated from complementary perspectives.
Let $A = \{A_1, A_2, \ldots, A_n\}$ be the set of all alternatives.
  • First layer: Identifying alternatives
Let $C^{(1)}$ be the criteria set of the first layer:

$$C^{(1)} = \{C_1^{(1)}, C_2^{(1)}, \ldots, C_m^{(1)}, \alpha\}$$
where $C_j^{(1)}\ (j = 1, \ldots, m)$ are objective criteria and $\alpha$ is a subjective preference parameter reflecting decision-makers' emphasis on each alternative $A_i$. Each $\alpha_i$ may be assigned on a bounded scale (e.g., [1, 10]). The absolute values of $\alpha_i$ are immaterial, since they are normalized together with the other criterion values; only relative differences matter. If all alternatives are assigned the same $\alpha_i$, the preference parameter has no impact; if some alternatives are given higher $\alpha_i$ values than others, they gain additional weight in the ranking.
Given the set of alternatives $A$, the evaluation score of each alternative $A_i \in A$ in the first layer is computed as

$$S_1(A_i) = \sum_{j=1}^{m} w_j^{(1)} x_{ij} + w_\alpha^{(1)} x_{i\alpha}$$

where $w_j^{(1)}$ represents the weight assigned to criterion $C_j^{(1)}$, and $w_\alpha^{(1)}$ is the weight of $\alpha$; $x_{ij}$ and $x_{i\alpha}$ are the normalized performance values of $A_i$ with respect to criterion $C_j^{(1)}$ and $\alpha$, respectively. The weights satisfy

$$\sum_{j=1}^{m} w_j^{(1)} + w_\alpha^{(1)} = 1$$
The alternatives are ranked according to $S_1(A_i)$. The top-ranked alternatives form the subset

$$A' = \{A_l, A_p, \ldots, A_t \mid A_i \text{ ranks among the top under } C^{(1)}\}$$

which serves as the input for the second-layer evaluation.
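The first-layer scoring and shortlisting can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the matrix values, the min-max normalization, and the shortlist size are all hypothetical assumptions.

```python
import numpy as np

def first_layer_scores(X, alpha, w_obj, w_alpha):
    """Compute first-layer scores S_1(A_i) from objective criteria and alpha.

    X       : (n, m) matrix of objective criterion values (benefit direction)
    alpha   : length-n vector of subjective preference parameters alpha_i
    w_obj   : length-m objective criterion weights
    w_alpha : weight of the preference parameter (all weights sum to 1)
    """
    # Min-max normalize each column; a constant column maps to all ones
    def norm(v):
        v = v.astype(float)
        return (v - v.min()) / (v.max() - v.min()) if v.max() > v.min() else np.ones_like(v)

    Xn = np.column_stack([norm(X[:, j]) for j in range(X.shape[1])])
    an = norm(np.asarray(alpha))
    return Xn @ np.asarray(w_obj) + w_alpha * an

# Toy example: 4 alternatives, 2 objective criteria, one preferred alternative
X = np.array([[0.9, 10.0], [0.5, 40.0], [0.7, 25.0], [0.2, 60.0]])
alpha = np.array([1, 5, 1, 1])                       # alternative 1 is emphasized
scores = first_layer_scores(X, alpha, w_obj=[0.4, 0.4], w_alpha=0.2)
top2 = np.argsort(scores)[::-1][:2]                  # shortlist A' for the second layer
```

Because alpha is normalized with the other criteria, only its relative differences matter: here the emphasized alternative gains exactly $w_\alpha^{(1)}$ over the others.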
  • Second layer: Selecting the final alternative
Let $C^{(2)}$ be the criteria set of the second layer

$$C^{(2)} = \{C_1^{(2)}, C_2^{(2)}, \ldots, C_k^{(2)}\}$$

where the criteria in $C^{(2)}$ differ from those in the first layer.
The alternative set in the second layer is $A' = \{A_l, A_p, \ldots, A_t\}$, derived from the first layer.
Let $w_i^{(2)}$ denote the weight assigned to criterion $C_i^{(2)}$ in the second layer, subject to the following condition:

$$\sum_{i=1}^{k} w_i^{(2)} = 1$$
We then obtain the evaluation score of each alternative $A_i \in A'$ under $C^{(2)}$:

$$S_2(A_i) = \sum_{j=1}^{k} w_j^{(2)} y_{ij}$$

where $y_{ij}$ is the normalized evaluation value of $A_i$ with respect to criterion $C_j^{(2)}$.
Based on the two-layer model, the final selection is the alternative with the highest second-layer score:

$$A^{*} = \underset{A_i \in A'}{\arg\max}\ S_2(A_i)$$
In contrast to the two-layer design, a conventional one-layer approach evaluates all alternatives in $A$ simultaneously under the full set of criteria $C^{(1)} \cup C^{(2)}$:

$$S_{\text{one}}(A_i) = \sum_{j=1}^{m+k} w_j z_{ij}, \qquad i = 1, \ldots, n$$

where $z_{ij}$ is the normalized performance value of alternative $A_i$ with respect to criterion $j$, and $w_j$ is the weight assigned to criterion $j$, subject to $\sum_{j=1}^{m+k} w_j = 1$.
The proposed formulas are built from normalized performance values and weighted sums, which are widely adopted in standard MCDM methods. By construction, the two-layer structure ensures that only alternatives ranked highly under the first-layer priorities advance to the second-layer evaluation. This prevents alternatives that are weak under $C^{(1)}$ but highly efficient under $C^{(2)}$ from dominating the final ranking, thereby improving the interpretability and traceability of the decision path.
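The contrast with the one-layer evaluation can be made concrete with a short sketch; the score vectors, second-layer matrix, equal weights, and shortlist size below are toy assumptions for illustration only.

```python
import numpy as np

def two_layer_select(S1, Y2, w2, shortlist_size):
    """Filter alternatives by first-layer scores, then rank the shortlist under C^(2).

    S1 : length-n vector of first-layer scores S_1(A_i)
    Y2 : (n, k) matrix of normalized second-layer criterion values y_ij
    w2 : length-k second-layer weights summing to 1
    """
    shortlist = np.argsort(S1)[::-1][:shortlist_size]   # A': top alternatives under C^(1)
    S2 = Y2[shortlist] @ np.asarray(w2)                 # second-layer scores on A' only
    return shortlist[int(np.argmax(S2))]                # A* = argmax over A'

# Toy data: alternative 1 excels on the second-layer criteria but ranks last
# in the first layer, so it is filtered out and cannot dominate the outcome.
S1 = np.array([0.9, 0.2, 0.7, 0.5])
Y2 = np.array([[0.3, 0.4], [0.9, 0.9], [0.8, 0.6], [0.1, 0.2]])
best = two_layer_select(S1, Y2, w2=[0.5, 0.5], shortlist_size=2)   # selects alternative 2
```

A one-layer weighted sum over the same data could instead reward alternative 1 for its second-layer efficiency, which is exactly the behavior the sequential filter is designed to prevent.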

4. Implementation of Two-Layer Model for IR

Paul stated that IR involves collecting data or making studies that are useful or necessary in (a) understanding and interpreting the institution; (b) making intelligent decisions about current operations or plans for the future; and (c) improving the efficiency and effectiveness of the institution [47]. Patrick conceived IR as organizational intelligence in three tiers: technical/analytical intelligence, issues intelligence, and contextual intelligence [9]. These definitions collectively highlight two essential components: data and methodology.

4.1. Data Used for Model Implementation

In this study, we utilized data from the Times Higher Education World University Rankings (THE Rankings) to evaluate university performance. THE Rankings assess research-intensive universities based on 18 indicators grouped under five pillars: Teaching (TE), Research Environment (RE), Research Quality (RQ), International Outlook (IO), and Industry (IN), as summarized in Table 1. In our previous study [48], we explained these indicators in detail. The weights of the indicators are taken directly from the official methodology of THE Rankings. These published weights represent the relative importance assigned by THE to each of the 18 indicators across the five performance pillars.
Our goal is to improve our university’s performance by increasing the scores of the indicators in Table 1. However, due to resource constraints, it is not feasible to address all indicators simultaneously. If we select targets solely based on their assigned weights, we will naturally focus on the top three high-weight indicators: Teaching Reputation, Research Reputation, and Citations in pillars TE, RE, and RQ. Nevertheless, both Teaching Reputation and Research Reputation are determined by the number of votes received from experts worldwide who are limited to selecting a maximum of 15 top universities in their respective fields. This presents a significant challenge for most institutions, including ours, in achieving noticeable improvements in these scores within a short timeframe. Furthermore, weight alone should not be the sole criterion for target selection. It is essential to prioritize improvement targets by taking into account our university’s specific context and constraints. To this end, the proposed two-layer decision model is applied as follows.

4.2. Implementation of First Layer

Let $A = \{A_1, A_2, \ldots, A_{18}\}$ be the alternative set comprising the 18 THE indicators. The criteria set $C^{(1)} = \{\text{indicator weight}, \text{evaluation score}, \alpha\}$ includes each THE indicator's weight, the evaluation score of each indicator, and the preference parameter α.
The weights for each indicator are shown in Table 1. The evaluation scores are determined by THE Rankings. However, THE Rankings publishes scores only at the pillar level rather than for each indicator. Table 2 shows our 2025 scores for each pillar; the overall score is calculated by the following equation from the THE Rankings methodology:

$$\text{overall score} = 29.5\% \times TE_{score} + 29\% \times RE_{score} + 30\% \times RQ_{score} + 4\% \times IN_{score} + 7.5\% \times IO_{score}$$

These five pillar weights sum to 100%.
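The pillar aggregation can be reproduced in a few lines using the official THE pillar weights; the example pillar scores below are hypothetical values for illustration, not the university's published results.

```python
# Official THE pillar weights: TE 29.5%, RE 29%, RQ 30%, IN 4%, IO 7.5%
WEIGHTS = {"TE": 0.295, "RE": 0.29, "RQ": 0.30, "IN": 0.04, "IO": 0.075}

def overall_score(pillar_scores):
    """Weighted sum of the five pillar scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[p] * pillar_scores[p] for p in WEIGHTS)

# Hypothetical pillar scores for illustration only
example = {"TE": 30.0, "RE": 20.0, "RQ": 50.0, "IN": 45.0, "IO": 70.0}
score = overall_score(example)   # 8.85 + 5.8 + 15.0 + 1.8 + 5.25 = 36.7
```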
Under the constraints of official pillar-level scores, the indicator scores were simulated, as shown in Table 3. This ensures that the simulated values remain consistent with the actual published outcomes and reflect realistic distributions across the 18 indicators.
The criterion α is determined by institutional stakeholders based on their specific considerations regarding the alternatives. The preference parameter α is assigned to each alternative on a bounded scale such as [1,10], consistent with common practice in MCDM studies (e.g., the 9-point scale in AHP) to ensure that subjective preference remains interpretable and comparable. As the absolute values of α are normalized along with other criteria, their effect depends solely on relative differences among alternatives. Alternatives assigned higher α values relative to others consequently receive additional weight in the ranking process.
We adopt the entropy weight method (EWM) to determine the weights of criteria and use TOPSIS to rank the indicators. The combined method has been applied in different fields [49,50,51]. EWM is an objective approach for calculating weights by measuring information entropy. Criteria that exhibit greater variability are considered more informative and are therefore assigned higher weights. Using these weights, TOPSIS then ranks alternatives based on their relative closeness to the ideal solution. It assumes that the optimal option should have the shortest distance from the positive ideal solution and the greatest distance from the negative ideal solution. Alternatives are ranked according to these weighted distances, with higher closeness values indicating better performance. While entropy can indeed be used for further analysis [52], in our framework it is specifically used to calculate objective weights, and TOPSIS is then applied for ranking alternatives to ensure both objectivity in weighting and clarity in evaluation.
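A compact sketch of the EWM–TOPSIS pipeline follows. The decision matrix is a toy example (all criteria treated as benefit-type; cost-type criteria such as Budget would be inverted first), so this illustrates the mechanics rather than the paper's actual data.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: criteria with more dispersion get larger weights.

    X : (n, m) decision matrix of positive benefit-type values.
    """
    P = X / X.sum(axis=0)                              # proportion per criterion
    logP = np.where(P > 0, np.log(P), 0.0)             # convention: 0 * log(0) = 0
    E = -(P * logP).sum(axis=0) / np.log(X.shape[0])   # entropy per criterion, in [0, 1]
    d = 1.0 - E                                        # degree of divergence
    return d / d.sum()

def topsis(X, w):
    """Closeness coefficient of each alternative to the ideal solution."""
    V = X / np.linalg.norm(X, axis=0) * w              # weighted vector-normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)         # positive / negative ideal solutions
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                # higher is better

# Toy decision matrix: 4 alternatives x 3 benefit criteria
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 6.0]])
w = entropy_weights(X)
closeness = topsis(X, w)
ranking = np.argsort(closeness)[::-1]                  # best alternative first
```

Note that EWM is purely data-driven: a criterion on which all alternatives score identically receives zero weight, regardless of its importance to decision-makers, which is one reason the preference parameter α is carried as a separate criterion.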
Table 4 summarizes the top five rankings under three different scenarios.
Case 1: All indicators α = 1 (equal importance across indicators).
Case 2: Four indicators, Student–Staff Ratio (SSR), Doctorate–Bachelor Ratio (DBR), Research Strength (RS), and International Students (IS), are set to α = 5, others α = 1 for strategic consideration on improving TE and RE performance and maintaining an advance in IO.
Case 3: The above four indicators are set to α = 10, others α = 1.
Results confirm that increasing α on selected indicators raises their positions, as expected, while the main high-ranking indicators remain concentrated at the top, demonstrating the stability of the overall ranking pattern.

4.3. Implementation of Second Layer

In the second layer, we take the top five indicators from Case 2 as the alternative set and resource parameters as the criteria set. These two sets are defined as follows:
A′ = {Citation Impact, Research Reputation, Research Strength, International Students, Teaching Reputation}
C^(2) = {Budget, Manpower, Period}
The values of criteria in C ( 2 ) are simulated in Table 5. They were derived from institutional reports to approximate typical ranges of resource allocation within the university. These assumptions provide a reasonable basis for demonstrating the model’s feasibility assessment function, while acknowledging that precise values may vary across institutions.
Similar to the first layer, EWM and TOPSIS are employed to obtain the final rankings. To examine the sensitivity of the resource-related criteria, we adjusted the criteria values by +10% and +50%. Such variations correspond to typical year-to-year fluctuations in institutional resource allocation. We tested the results shown in Table 6 under the following four scenarios.
  • Baseline values in Table 5;
  • Budget +10% (more budget) for all alternatives;
  • Period +10% (slightly more time) for Teaching Reputation;
  • Period +50% (substantially more time) for Teaching Reputation.
Table 6 reports the rankings under different perturbations of the second-layer criteria. When budget values were uniformly increased by 10% across all alternatives, or when the period for Teaching Reputation was extended by 10%, the rankings remained unchanged compared with the baseline, indicating robustness to moderate variations. Only when the period for Teaching Reputation was increased substantially (+50%) did a change occur, with Research Reputation moving ahead of Teaching Reputation. This suggests that the model is stable under small to moderate perturbations, while large changes in resource assumptions can alter the ordering of lower-ranked alternatives.
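The perturbation test can be sketched as follows. The Budget/Manpower/Period values, the equal weights (the paper uses EWM-derived weights), and the row/column indices are all hypothetical assumptions chosen to show the mechanics of re-ranking under a perturbed cost criterion.

```python
import numpy as np

def rank_with_perturbation(M, cost_cols, perturb=None):
    """Re-rank alternatives after multiplying selected entries of the matrix.

    M         : (n, k) raw criterion values (rows: alternatives)
    cost_cols : indices of cost-type criteria (smaller is better), inverted first
    perturb   : optional dict {(row, col): factor}, e.g. {(4, 2): 1.5} for Period +50%
    """
    X = M.astype(float).copy()
    for (i, j), factor in (perturb or {}).items():
        X[i, j] *= factor
    X[:, cost_cols] = 1.0 / X[:, cost_cols]        # convert cost criteria to benefit direction
    w = np.full(X.shape[1], 1.0 / X.shape[1])      # equal weights for this sketch only
    V = X / np.linalg.norm(X, axis=0) * w          # TOPSIS on the perturbed matrix
    d_best = np.linalg.norm(V - V.max(axis=0), axis=1)
    d_worst = np.linalg.norm(V - V.min(axis=0), axis=1)
    return np.argsort(d_worst / (d_best + d_worst))[::-1]

# Hypothetical Budget / Manpower / Period values for five shortlisted indicators
M = np.array([[50, 5, 12], [80, 8, 36], [40, 6, 18], [30, 4, 12], [90, 9, 48]])
baseline = rank_with_perturbation(M, cost_cols=[0, 1, 2])
stressed = rank_with_perturbation(M, cost_cols=[0, 1, 2], perturb={(4, 2): 1.5})
```

Comparing `baseline` and `stressed` orderings across several perturbation factors reproduces the kind of robustness check reported in Table 6.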
In summary, the rationale for the two-layer frame's narrowing of the alternative set from 18 to 5 is that decision-makers cannot simultaneously pursue improvements across all indicators. By first applying the strategic criteria (weight, score, α), we filter the most promising alternatives into the subset A′. These shortlisted alternatives then serve as the input to the second layer, where feasibility constraints such as budget, manpower, and time are introduced. The framework is flexible regarding the number and type of criteria: additional or different criteria can be incorporated depending on the decision context. This sequential narrowing reflects the actual logic of university management: prioritize a smaller set of strategically important indicators before testing their feasibility under institutional resource constraints.

5. Discussion

5.1. Comparison with Current MCDA Approaches

From a problem-oriented perspective, the proposed two-layer decision model is primarily situated within the γ-problematic (ranking), as it establishes an ordering of university performance indicators under multiple criteria. At the same time, the second layer also involves elements of the α-problematic (choice), as it selects the most feasible target from the shortlisted set. This positioning clarifies that the two-layer framework is mainly designed to structure ranking and choice tasks in a sequential manner that reflects the logic of institutional planning.
From a method-oriented perspective, the proposed two-layer framework is methodologically flexible. In principle, any MCDA method (utility-based, pairwise, outranking, fuzzy, or hybrid) could be embedded in either layer depending on the decision context. The novelty of our contribution lies not in the choice of a particular algorithm, but in structuring the evaluation into sequential layers that separate strategic prioritization from feasibility assessment. Table 7 summarizes six major families of MCDA methods, their main features, and why they were not directly adopted in this study.
As summarized in Table 7, different families of MCDA methods address different aspects of complex decision problems, and none is universally applicable. Outranking methods such as ELECTRE III and IV, for instance, are particularly valuable when criteria are incommensurable or qualitative. However, their reliance on multiple threshold parameters and their production of partial preference relations make them less suitable for our case, where the indicators of THE Rankings are quantitative, commensurable, and explicitly weighted. For this reason, outranking methods were not adopted in this study. Instead, the two-layer framework was introduced to handle the large number of alternatives and to separate strategic prioritization from feasibility assessment in a transparent manner.

5.2. Analysis on the Two-Layer Structure

The one-layer method attempts to maximize performance across all alternatives simultaneously under both strategic and feasibility criteria, which often produces results that are difficult to interpret when strategically weak alternatives appear at the top due to low resource requirements. By contrast, the two-layer frame guarantees that only the subset of alternatives that perform strongly on strategic criteria advances to the second layer. Within this reduced set, feasibility criteria are then applied to identify the final choice. This decomposition ensures that the final decision respects strategic dominance while remaining feasible in practice, thereby providing a clearer and more transparent decision path than a single-layer evaluation.
We compared the ranking results of the two-layer model with those of the one-layer method. As shown in Table 8, the two methods produced different ranking outcomes. The two-layer results derive from the baseline in Table 6, which narrows the 18 indicators to 5 using three criteria and then ranks them by three further criteria. In the one-layer method, all relevant factors are considered simultaneously as criteria, including THE scores, THE indicator weights, Budget, Manpower, Period, and Preferred Level. The preferred levels are set in the same way as in the two-layer method: four indicators, Student–Staff Ratio, Doctorate–Bachelor Ratio, Research Strength, and International Students, are assigned 5, while all others are set to 1.
The difference between the two methods lies primarily in their underlying decision logic, despite being based on the same evaluation criteria. In essence, the one-layer method seeks a global optimum across all dimensions, making it more suitable when the objective is to identify indicators based on an overall evaluation. In contrast, the two-layer model adopts a strategy-first, resource-second approach, which is more appropriate in scenarios where resource constraints or strategic prioritization are critical. Additionally, the preferred level serves as a mechanism to incorporate decision-makers’ subjective considerations regarding the alternatives, based on the institution’s specific circumstances. This factor can be applied in either the first layer or the second layer, depending on which aspect, strategic importance or practical feasibility, is prioritized in the decision-making process. This sequential approach allows for conditional decision-making that prioritizes strategic importance in the first layer, followed by a focused analysis of practical feasibility in the second layer.
This difference highlights that the study contributes to integrating MCDM tools into a layered architecture that more closely matches institutional decision-making processes. By separating strategic prioritization (Layer 1) from resource-based feasibility assessment (Layer 2), the proposed framework improves both the interpretability of the results and the flexibility for decision-makers to incorporate subjective preferences.

5.3. Extension to a Multi-Layer Structure

The two-layer structure introduces a hierarchical approach that enables targeted decision-making. This framework can be further extended into a multi-layer structure to address more complex decision-making scenarios.
Let A be the set of all alternatives, and let the criteria set C be partitioned into L disjoint groups:
$$C = C^{(1)} \cup C^{(2)} \cup \cdots \cup C^{(L)}, \qquad C^{(i)} \cap C^{(j)} = \emptyset \quad (i \neq j)$$
At layer $l$ ($1 \le l \le L$), the set of alternatives from the previous layer, denoted $A^{(l-1)}$, is ranked under the criteria subset $C^{(l)}$. A reduced subset
$$A^{(l)} = f\!\left(A^{(l-1)}, C^{(l)}\right)$$
is then selected and passed to the next layer.
The process continues recursively until the final set is obtained for making the final decision.
$$A^{(L)} \subseteq A$$
This recursive formulation shows that the two-layer model is the special case $L = 2$. Thus, the framework scales to any number of layers by sequentially partitioning the criteria into logically distinct groups. In a multi-layer model, each layer can represent a distinct category of criteria, such as strategic considerations, stakeholder preferences, resource constraints, or long-term sustainability, as illustrated in Figure 2.
Figure 2 illustrates how the framework can be extended into a multi-layer model. In this example, Layer 1 applies strategic criteria to reduce the alternative set. Layer 2 then evaluates these alternatives under stakeholder preferences, narrowing them to a smaller set. Layer 3 applies resource constraints, and Layer 4 incorporates long-term sustainability to select the optimal alternative.
This layered approach enables progressive filtering and prioritization, allowing decision-makers to focus on one dimension of concern at a time. Moreover, by decomposing the decision process into multiple logically sequenced stages, a multi-layer structure enhances transparency and flexibility, particularly in large-scale, policy-driven environments. Such a framework is especially suitable when criteria or alternatives play distinct roles at different stages, or when decisions must be made collectively by a group of stakeholders. The multi-layer model therefore generalizes the two-layer logic, offering a scalable and adaptable architecture for large-scale decision-making in complex institutional or organizational settings.
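The recursive filtering $A^{(l)} = f(A^{(l-1)}, C^{(l)})$ described above can be sketched in code. The following Python fragment is an illustrative sketch under our own assumptions (TOPSIS with externally supplied weights at every layer, benefit-type criteria, and a fixed number of survivors per layer); the actual selection rule $f$ and the ranking method used at each layer remain design choices.

```python
import numpy as np

def topsis_scores(X, w):
    # Compact TOPSIS: closeness to the ideal solution (benefit criteria assumed).
    V = (X / np.linalg.norm(X, axis=0)) * w
    d_pos = np.linalg.norm(V - V.max(axis=0), axis=1)
    d_neg = np.linalg.norm(V - V.min(axis=0), axis=1)
    return d_neg / (d_pos + d_neg)

def multilayer_select(alternatives, layers, keep):
    """Recursively filter alternatives through L criteria groups.
    layers: one (X, w) pair per criteria group C^(l), where row i of X
            scores alternative i under that group's criteria;
    keep:   number of alternatives surviving each layer (keep[-1] = 1
            yields a single final choice)."""
    surviving = list(range(len(alternatives)))
    for (X, w), k in zip(layers, keep):
        scores = topsis_scores(X[surviving], w)   # rank only the survivors
        best_first = np.argsort(-scores)
        surviving = [surviving[i] for i in best_first[:k]]
    return [alternatives[i] for i in surviving]
```

With two layers and keep = [5, 1], this mirrors the two-layer flow of the case study (18 indicators, then 5, then a final choice); appending further (X, w) pairs gives the multi-layer extension.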

5.4. Limitations and Future Research

Although the proposed two-layer model demonstrates promising results, several limitations should be acknowledged. First, the case study relied on simulated indicator-level data constrained by publicly available pillar-level results from the Times Higher Education World University Rankings. While this ensures consistency with real-world benchmarks, it does not capture the variability of authentic institutional datasets. Second, the resource-related criteria (budget, manpower, and period) were set based on typical institutional ranges but do not directly reflect the precise constraints of a specific university. These simplifications limit the empirical generalizability of the findings.
Future research will aim to address these limitations. Specifically, we plan to (a) involve institutional stakeholders, such as university planners and administrators, to validate the parameter assignment process and strengthen the model’s practical relevance; (b) test the framework using actual institutional datasets and real decision scenarios to further assess empirical validity; and (c) extend the model into a multi-layer structure that incorporates additional dimensions such as policy constraints, long-term sustainability, or stakeholder consensus. These extensions will enhance both the methodological robustness and the practical applicability of the proposed decision-making framework.

6. Conclusions

This study proposes a two-layer decision model for complex multi-criteria problems. The first layer performs strategic prioritization by combining objective indicators with a subjective preference parameter, enabling decision-makers to reflect context-specific priorities. Sensitivity analysis shows that increasing the values of the preference parameter for selected alternatives raises their rankings as intended, while the overall ordering of alternatives remains largely stable under reasonable variations. The second layer then re-evaluates the shortlisted alternatives under feasibility considerations. Applied to an institutional research setting using university performance indicators, the model produced transparent, traceable rankings that align strategic intent with practical feasibility. By combining EWM for weighting and TOPSIS for ranking, the framework ensures both objectivity and clarity. Moreover, its recursive formulation makes it naturally extensible beyond two layers, supporting broader applicability to real-world scenarios.

Author Contributions

Conceptualization and methodology, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University of Aizu Competitive Research Funds.

Data Availability Statement

The data presented in this study were derived from the Times Higher Education World University Rankings 2024, available at https://www.timeshighereducation.com/world-university-rankings/2024/world-ranking (accessed on 10 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Simplilearn. The Best Guide to Understanding What Decision Making Is. Available online: https://www.simplilearn.com/tutorials/mba-preparation-tutorial/what-is-decision-making-explained (accessed on 20 June 2025).
  2. Zavadskas, E.K.; Govindan, K.; Antucheviciene, J.; Turskis, Z. Hybrid multiple criteria decision-making methods: A review of applications for sustainability issues. Econ. Res. Ekon. Istraž. 2016, 29, 857–887. [Google Scholar] [CrossRef]
  3. Frazão, T.D.C.; Camilo, D.G.G.; Cabral, E.L.S.; Souza, R.P. Multicriteria decision analysis (MCDA) in health care: A systematic review of the main characteristics and methodological steps. BMC Med. Inform. Decis. Mak. 2018, 18, 90. [Google Scholar] [CrossRef]
  4. Gupta, S.; Soni, U.; Kumar, G. Green supplier selection using multi-criterion decision making under fuzzy environment: A case study in automotive industry. Comput. Ind. Eng. 2019, 136, 663–680. [Google Scholar] [CrossRef]
  5. Behzadian, M.; Otaghsara, S.K.; Yazdani, M.; Ignatius, J. A state-of the-art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069. [Google Scholar] [CrossRef]
  6. Thokala, P.; Devlin, N.; Marsh, K.; Baltussen, R.; Boysen, M.; Kalo, Z.; Longrenn, T.; Mussen, F.; Peacock, S.; Watkins, J.; et al. Multiple criteria decision analysis for health care decision making—An introduction: Report 1 of the ISPOR MCDA Emerging Good Practices Task Force. Value Health 2016, 19, 1–13. [Google Scholar] [CrossRef]
  7. Govindan, K.; Rajendran, S.; Sarkis, J.; Murugesan, P. Multi criteria decision making approaches for green supplier evaluation and selection: A literature review. J. Clean. Prod. 2015, 98, 66–83. [Google Scholar] [CrossRef]
  8. Zimmer, B. Achieving Quality Within Funding Constraints: The Potential Contribution of Institutional Research. J. Institutional Res. 1995, 4, 74. [Google Scholar]
  9. Terenzini, P.T. On the nature of institutional research and the knowledge and skills it requires. Res. High. Educ. 1993, 34, 1–10. [Google Scholar] [CrossRef]
  10. Chairungruang, S.; Piriyasurawong, P.; Nilsook, P. Multi Criteria Decision Making for Ranking Digital Vocational Education. In Proceedings of the 2023 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 24–25 August 2023; pp. 59–65. [Google Scholar]
  11. Garg, R.; Join, C.; Sira-Ramirez, H. MCDM-Based Parametric Selection of Cloud Deployment Models for an Academic Organization. IEEE Trans. Cloud Comput. 2022, 10, 863–871. [Google Scholar] [CrossRef]
  12. Youssef, A.E.; Saleem, K. A Hybrid MCDM Approach for Evaluating Web-Based E-Learning Platforms. IEEE Access 2023, 11, 72436–72447. [Google Scholar] [CrossRef]
  13. Alaa, M.; Albakri, I.S.M.A.; Singh, C.K.S.; Hammed, H.; Zaidan, A.A.; Zaidan, B.B.; Albahri, O.S.; Alsalem, M.A.; Salih, M.M.; Almahdi, E.M.; et al. Assessment and Ranking Framework for the English Skills of Pre-Service Teachers Based on Fuzzy Delphi and TOPSIS Methods. IEEE Access 2019, 7, 126201–126223. [Google Scholar] [CrossRef]
  14. Abdelaal, R.M.S.; Makki, A.A.; Al-Madi, E.M.; Qhadi, A.M. Prioritizing Strategic Objectives and Projects in Higher Education Institutions: A New Hybrid Fuzzy MEREC-G-TOPSIS Approach. IEEE Access 2024, 12, 89735–89753. [Google Scholar] [CrossRef]
  15. Chen, Z.; Liang, W.; Luo, S. A Novel Integrated Picture Fuzzy MACONT Method and Its Application in Teaching Quality Evaluation in Higher Education. IEEE Access 2024, 12, 88345–88356. [Google Scholar] [CrossRef]
  16. Myint, K.K.; Thein, N. Implementation of MCDM Techniques for Estimating Regional Education Development in Myanmar. In Proceedings of the 2020 IEEE 9th Global Conference on Consumer Electronics (GCCE), Kobe, Japan, 13–16 October 2020; pp. 76–77. [Google Scholar]
  17. Koltharkar, P.; Eldhose, K.K.; Sridharan, R. Application of fuzzy TOPSIS for the prioritization of students’ requirements in higher education institutions: A case study: A multi-criteria decision-making approach. In Proceedings of the 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 3–4 July 2020; pp. 1–7. [Google Scholar]
  18. Xu, Y.; Tian, Y. Comprehensive Evaluation Model of Elective Subjects’ Performance in the College Entrance Examination Based on Entropy Weight TOPSIS. In Proceedings of the 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 9–11 April 2021; pp. 208–212. [Google Scholar]
  19. Kang, H.; Lee, S. Decision-Making Model Using a Multi-Criteria Decision-Making Method for College Admissions. J. Korea Soc. Comput. Inf. 2018, 23, 65–72. [Google Scholar]
  20. Mustafa, A.; Goh, M. Multi-criterion models for higher education administration. Omega 1996, 24, 167–178. [Google Scholar] [CrossRef]
  21. Garg, R.; Kumar, R.; Garg, S. MADM-Based Parametric Selection and Ranking of E-Learning Websites Using Fuzzy COPRAS. IEEE Trans. Educ. 2019, 62, 11–18. [Google Scholar] [CrossRef]
  22. Köksalan, M.; Wallenius, J.; Zionts, S. Multiple Criteria Decision Making: From Early History to the 21st Century; World Scientific: Singapore, 2011. [Google Scholar]
  23. Roy, B. Paradigms and Challenges. In Multiple Criteria Decision Analysis: State of the Art Surveys, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 19–39. [Google Scholar]
  24. Roy, B. Multicriteria Methodology for Decision Aiding; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996. [Google Scholar]
  25. Greco, S.; Słowiński, R.; Wallenius, J. Fifty years of multiple criteria decision analysis: From classical methods to robust ordinal regression. Eur. J. Oper. Res. 2025, 323, 351–377. [Google Scholar] [CrossRef]
  26. Roy, B.; Vanderpooten, D. The European school of MCDA: Emergence, basic features and current works. J. Multi-Criteria Decis. Anal. 1996, 5, 22–38. [Google Scholar] [CrossRef]
  27. Belton, V.; Stewart, T.J. Multiple Criteria Decision Analysis: An Integrated Approach; Kluwer Academic Publishers: Boston, MA, USA, 2002. [Google Scholar]
  28. Keeney, R.L.; Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs; Wiley: New York, NY, USA, 1976. [Google Scholar]
  29. Tzeng, G.H.; Huang, J.J. Multiple Attribute Decision Making: Methods and Applications; Chapman and Hall: London, UK; CRC: New York, NY, USA, 2011. [Google Scholar]
  30. Opricovic, S.; Tzeng, G.H. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 2004, 156, 445–455. [Google Scholar] [CrossRef]
  31. Saaty, T.L.; Vargas, L.G. Models, Methods, Concepts & Applications of the Analytic Hierarchy Process; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  32. Saaty, T.L.; Vargas, L.G. Decision Making with the Analytic Network Process: Economic, Political, Social and Technological Applications with Benefits, Opportunities, Costs and Risks; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  33. Roy, B. The outranking approach and the foundations of ELECTRE methods. Theory Decis. 1991, 31, 49–73. [Google Scholar] [CrossRef]
  34. Govindan, K.; Jepsen, M.B. ELECTRE: A comprehensive literature review on methodologies and applications. Eur. J. Oper. Res. 2016, 250, 1–29. [Google Scholar] [CrossRef]
  35. Shmelev, S.; Brook, H.R. Macro Sustainability across Countries: Key Sector Environmentally Extended Input-Output Analysis. Sustainability 2021, 13, 11657. [Google Scholar] [CrossRef]
  36. Keršulienė, V.; Zavadskas, E.K.; Turskis, Z. Selection of rational dispute resolution method by applying new stepwise weight assessment ratio analysis (SWARA). J. Bus. Econ. Manag. 2010, 11, 243–258. [Google Scholar] [CrossRef]
  37. Zavadskas, E.K.; Turskis, Z. A new additive ratio assessment (ARAS) method in multicriteria decision-making. Technol. Econ. Dev. Econ. 2010, 16, 159–172. [Google Scholar] [CrossRef]
  38. Hu, L.; Yu, Q.; Jana, C.; Simic, V.; Bin-Mohsin, B. An Intuitionistic Fuzzy SWARA-AROMAN Decision-Making Framework for Sports Event Management. IEEE Access 2024, 12, 57711–57726. [Google Scholar] [CrossRef]
  39. Chen, C.T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst. 2000, 114, 1–9. [Google Scholar] [CrossRef]
  40. Maghrabie, H.F.; Beauregard, Y.; Schiffauerova, A. Grey-based Multi-Criteria Decision Analysis approach: Addressing uncertainty at complex decision problems. Technol. Forecast. Soc. Change 2019, 146, 366–379. [Google Scholar] [CrossRef]
  41. Yaakob, A.M.; Serguieva, A.; Gegov, A. FN-TOPSIS: Fuzzy Networks for Ranking Traded Equities. IEEE Trans. Fuzzy Syst. 2017, 25, 315–332. [Google Scholar] [CrossRef]
  42. Karsak, E.E.; Sozer, S.; Alptekin, S.E. Product planning in quality function deployment using a combined analytic network process and goal programming approach. Comput. Ind. Eng. 2003, 44, 171–190. [Google Scholar] [CrossRef]
  43. Kabak, M.; Dağdeviren, M. Prioritization of renewable energy sources for Turkey by using a hybrid MCDM methodology. Energy Convers. Manag. 2014, 79, 25–33. [Google Scholar] [CrossRef]
  44. Mardani, A.; Jusoh, A.; Nor, K.M.; Khalifah, Z.; Zakwan, N.; Valipour, A. Multiple criteria decision-making techniques and their applications—A review of the literature from 2000 to 2014. Econ. Res. Ekon. Istraž. 2015, 28, 516–571. [Google Scholar] [CrossRef]
  45. Hendiani, S.; Mahmoudi, A.; Liao, H. A Multi-Stage Multi-Criteria Hierarchical Decision-Making Approach for Sustainable Supplier Selection. Appl. Soft Comput. 2020, 94, 106456. [Google Scholar] [CrossRef]
  46. Jovanović, S.; Zavadskas, E.K.; Stević, Ž.; Marinković, M.; Alrasheedi, A.F.; Badi, I. An Intelligent Fuzzy MCDM Model Based on D and Z Numbers for Paver Selection: IMF D-SWARA—Fuzzy ARAS-Z Model. Axioms 2023, 12, 573. [Google Scholar] [CrossRef]
  47. Dressel, P.L. The Nature of Institutional Research; Michigan State University: East Lansing, MI, USA, 1966. [Google Scholar]
  48. Zhou, Y.; Abe, Y.; Asano, A. Times Higher Education Rankings Analysis for Enhancing University Performance Using Multi-Criteria Decision Making. In Proceedings of the 2024 IEEE 17th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC), Kuala Lumpur, Malaysia, 16–19 December 2024; pp. 117–121. [Google Scholar]
  49. Huang, J. Combining entropy weight and TOPSIS method for information system selection. In Proceedings of the 2008 IEEE Conference on Cybernetics and Intelligent Systems, Chengdu, China, 1–3 September 2008; pp. 1281–1284. [Google Scholar]
  50. Dehdasht, G.; Ferwati, M.S.; Zin, R.M.; Abidin, N.Z. A hybrid approach using entropy and TOPSIS to select key drivers for a successful and sustainable lean construction implementation. PLoS ONE 2020, 15, e0228746. [Google Scholar] [CrossRef]
  51. Li, X.; Wang, K.; Liu, L.; Xin, J.; Yang, H.; Gao, C. Application of the Entropy Weight and TOPSIS Method in Safety Evaluation of Coal Mines. Procedia Eng. 2011, 26, 2085–2091. [Google Scholar] [CrossRef]
  52. Siwiec, D.; Pacana, A. Materials and products development based on a novelty approach to quality and life cycle assessment (QLCA). Materials 2024, 17, 3859. [Google Scholar] [CrossRef]
Figure 1. Model of two-layer decision-making.
Figure 2. A multi-layer decision-making model.
Table 1. Indicators of THE Rankings and weights.
Pillars | Indicators | Weights | Total Weights
Teaching (TE) | Teaching Reputation | 15.0 | 29.5
  | Student–Staff Ratio | 4.5 |
  | Doctorate–Bachelor Ratio | 2.0 |
  | Doctorate–Staff Ratio | 5.5 |
  | Institutional Income | 2.5 |
Research Environment (RE) | Research Reputation | 18.0 | 29.0
  | Research Income | 5.5 |
  | Research Productivity | 5.5 |
Research Quality (RQ) | Citation Impact | 15.0 | 30.0
  | Research Strength | 5.0 |
  | Research Excellence | 5.0 |
  | Research Influence | 5.0 |
Industry (IN) | Industry Income | 2.0 | 4.0
  | Patents | 2.0 |
International Outlook (IO) | International Students | 2.5 | 7.5
  | International Staff | 2.5 |
  | International Co-authorship | 2.5 |
  | Studying Abroad | 0 |
Table 2. 2025 THE scores of the University of Aizu.
Rank | Overall Score | TE | RE | RQ | IN | IO
601–800 | 38.2–43.2 | 29.8 | 13.4 | 68.1 | 57.0 | 82.8
Table 3. Simulated scores of indicators under the official pillar-level scores.
Pillar | Indicator 1 | Indicator 2 | Indicator 3 | Indicator 4 | Indicator 5 | Total Score
TE | 7.16 | 7.95 | 1.12 | 8.02 | 5.55 | 29.80
RE | 1.42 | 4.04 | 7.94 | — | — | 13.40
RQ | 21.37 | 21.54 | 3.52 | 21.67 | — | 68.10
IN | 37.82 | 19.18 | — | — | — | 57.00
IO | 29.07 | 5.15 | 15.32 | 33.26 | — | 82.80
Table 4. Output of the first layer under different α assignment.
Scenario | α Value | Rank 1 | Rank 2 | Rank 3 | Rank 4 | Rank 5
Case 1 | All α = 1 | Citation impact | Research reputation | Teaching reputation | Industry income | International students
Case 2 | 4 indicators α = 5, others α = 1 | Citation impact | Research reputation | Research strength | International students | Teaching reputation
Case 3 | 4 indicators α = 10, others α = 1 | Research strength | International students | Student–staff ratio | Doctorate–bachelor ratio | Citation impact
Table 5. Values of criteria of the second layer.
Alternatives | Budget (Units) | Manpower (People) | Period (Months)
Citation Impact | 120 | 5 | 18
Research Reputation | 150 | 6 | 24
Research Strength | 100 | 4 | 15
International Students | 80 | 3 | 12
Teaching Reputation | 130 | 5 | 20
Table 6. Output of the second layer under different scenarios.
Scenario | Rank 1 | Rank 2 | Rank 3 | Rank 4 | Rank 5
Baseline | International students | Research strength | Citation impact | Teaching reputation | Research reputation
Budget +10% for all alternatives | International students | Research strength | Citation impact | Teaching reputation | Research reputation
Period +10% for alternative 5 | International students | Research strength | Citation impact | Teaching reputation | Research reputation
Period +50% for alternative 5 | International students | Research strength | Citation impact | Research reputation | Teaching reputation
Table 7. Comparison of major MCDA approaches and the proposed two-layer model.
Method Family | Examples | Main Features | Why Not Used in This Study
Value/utility-based | MAUT, TOPSIS, VIKOR | Weighted aggregation into a single utility score | Flat aggregation reduces interpretability when many alternatives are evaluated simultaneously.
Pairwise comparison | AHP, ANP | Pairwise judgments to derive weights | Requires extensive subjective comparisons, which is impractical with many alternatives.
Outranking | ELECTRE III/IV, PROMETHEE | Establishes outranking relations with thresholds | Requires multiple threshold parameters and yields results that are less intuitive for non-experts; not suitable for our case, which requires a transparent process to show the final choice among the commensurable and weighted indicators in THE Rankings.
Stepwise/additive ratio | SWARA, ARAS, ARAS-F | Sequential assignment of weights or scores | Simplifies weighting and evaluation, but still treats all factors in a single stage, without distinguishing different dimensions of evaluation.
Fuzzy/gray-based | Fuzzy AHP, Gray ARAS, Fuzzy PROMETHEE | Captures vagueness with fuzzy/grey numbers | Valuable for linguistic judgments, but THE data are quantitative and precise. Transparency was prioritized over handling fuzziness.
Hybrid/extended | ANP–BOCR, fuzzy AHP–TOPSIS, PCA-based weighting | Combines MCDA with optimization/statistics | Increases robustness but usually aggregates all criteria in one stage. Our problem requires a transparent process to show how the final choice is made.
Table 8. A comparison between the two-layer and one-layer methods.
Rank | Two-Layer Model | One-Layer Method
1 | International students | Citation impact
2 | Research strength | International students
3 | Citation impact | Research reputation
4 | Teaching reputation | Teaching reputation
5 | Research reputation | Research influence
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhou, Y.; Asano, A. A Two-Layer Model for Complex Multi-Criteria Decision-Making and Its Application in Institutional Research. Appl. Syst. Innov. 2025, 8, 148. https://doi.org/10.3390/asi8050148
