Article

An Integrated Fuzzy Structured Methodology for Performance Evaluation of High Schools in a Group Decision-Making Problem

1 Graduate School, University of Perpetual Help System DALTA Las Piñas, Manila 1740, Philippines
2 Department of Mathematics, Ayandegan Institute of Higher Education, Tonekabon 46818-53617, Iran
3 Department of Management, Ayandegan Institute of Higher Education, Tonekabon 46818-53617, Iran
4 Liberal Arts Department, American University of the Middle East, Egaila 54200, Kuwait
5 Department of Mathematics, Faculty of Arts and Science, Yildiz Technical University, Esenler, 34220 Istanbul, Turkey
* Author to whom correspondence should be addressed.
Systems 2023, 11(3), 159; https://doi.org/10.3390/systems11030159
Submission received: 13 February 2023 / Revised: 15 March 2023 / Accepted: 16 March 2023 / Published: 20 March 2023

Abstract
Evaluating and ranking schools is important both for students’ parents and for upstream institutions (in Iran, the Ministry of Education). In this process, quantitative criteria, including educational activities, human resources, space and equipment, and administrative-financial indicators, are commonly investigated. The process is carried out only by the upstream institutions, the perspective of the other key stakeholder, namely the students’ parents, is ignored, and qualitative-judgmental indicators do not affect the school evaluation results. Consequently, in this study, we used the opinions of five parents of students and five experienced school administrators to capture the perspectives of both key system stakeholders. In addition, to perform a more comprehensive analysis, we added three qualitative criteria that have received less attention in this problem (social environment, health, and students), along with their sub-criteria, to the criteria obtained from the research background. We eliminated the less influential sub-criteria using the Delphi technique and continued the study with 10 criteria and 53 sub-criteria. Then, using two widely used methods in this field, AHP and TOPSIS, we determined the weights of the sub-criteria and the ranking based on the experts’ views. In addition, to deal with the ambiguity in experts’ judgments, we transformed the crisp data into fuzzy data. We applied the proposed methodology to rank 15 schools in Tehran, Iran. The results showed that the proposed criteria significantly impact the school rankings. In addition, according to the sensitivity analysis results, ignoring the views of the other stakeholder of the system can distort the results. Finally, directions for future research were suggested based on the current research limitations.

1. Introduction

Education is critical in determining success in various fields and professions, so the education sector is essential to any country’s development. Paying attention to the education system as a national investment is vital: education is one of the most valuable forms of capital embodied in human beings and contributes to production [1]. The educational system of any nation reflects the nation’s inner and intellectual prowess, as well as its governing philosophy. On the one hand, it fosters the internal skills and abilities of society’s members, and on the other, it can pave the way for the nation’s independence, advancement, and development [2]. Quality education ensures a nation’s future and accelerates its progress and development; therefore, evaluating performance and increasing efficiency in schools, as the primary educational institutions, is of utmost importance [3].
Regardless of their mission, objectives, and vision, institutions and executive bodies must ultimately operate in the national or international arena and answer to consumers, clients, and beneficiaries; high schools in the education system are no exception to this rule [4,5]. Moreover, evaluating the performance of schools and ranking them to choose the best alternative is essential because, after the family, schools play a large part in raising children and directing them onto the proper route in life. Because educational institutions play a crucial role in advancing societies’ economic and cultural goals, both qualitatively and quantitatively, evaluating their performance and quality to identify points that can be improved is one of society’s most fundamental challenges [6].
Regarding the topic under discussion, two significant challenges will be addressed in this research: (1) Normally, in this field, the four axes of educational activities, human resources, space and equipment, and administrative-financial indicators are used to evaluate schools’ activities [7]. Despite their importance and effectiveness, these four indicators are not comprehensive and do not include effective indicators such as social environment, students, health, etc. (2) Specifically, in Iran (and most likely in some other countries), the evaluation of schools is carried out quantitatively and only by the upstream institutions. In contrast, the views of other stakeholders of this system, namely the students’ parents, are ignored in this process and, perhaps more importantly, qualitative-judgmental indicators do not affect the school evaluation results.
Considering the previously mentioned challenges, in this study, we first use the research background and the views of the experts to identify effective indicators for evaluating and ranking schools. Our two main goals at this stage are (1) to obtain a set of practical and critical indicators that is more comprehensive than in previous studies and (2) to incorporate the views of both key stakeholders of this system (experts and parents of students) into the evaluation. In the decision-making literature, the next issue after identifying the indicators is determining their weights. As expected, in a problem of this size, the number of leading indicators and sub-indices is large. Recently, it has been suggested that, when facing big problems, simplifying the problem is a suitable solution [8]. Based on this, we remove the less essential sub-indices using the Delphi technique and then determine the weights using one of the most well-known weighting tools, i.e., the AHP. The next matter concerns how to handle the ambiguities in the verbal judgments of experts. When the evaluation is vague or ambiguous in certain aspects, fuzzy multi-criteria decision-making methods are the most appropriate answer and have recently attracted the attention of many researchers [9,10,11,12,13,14,15,16,17,18]. Based on this, we also use the fuzzy AHP in this research. Finally, we use another widely applied method in this field to rank the alternatives, i.e., the fuzzy TOPSIS. It should be noted that combining these two methods for evaluating and ranking options is the most frequent combination researchers use to solve decision-making problems compared to other possible combinations [19,20,21,22].
This being the case, the proposed approach is a simple and well-known hybrid in the field of decision making and school prioritization, which uses more comprehensive indicators than previous studies and considers the views of the two primary stakeholders of the system. To introduce the proposed approach more precisely, we investigated high schools in Tehran, Iran. The remainder of the article is organized as follows: Section 2 reviews the related literature; Section 3 describes the proposed methodology, including the Delphi, fuzzy AHP, and fuzzy TOPSIS techniques; Section 4 presents the case study, data analysis, results, and sensitivity analysis; and Section 5 provides the conclusions and directions for future research.

2. Literature Review

In this section, some studies related to the subject are briefly examined. The first part of the background is dedicated to the studies from which the relevant indicators for ranking were adopted. The other part of this section refers to the review of the methods used for ranking in some previous investigations. To generate knowledge, identify gaps, and then point out potential future contributions to the area of expertise, Valmorbida et al. [23] analyzed the characteristics of international scientific publications that address the literature fragment of evaluating the performance of university rankings. For more details, see also [24,25,26,27,28,29,30,31,32,33].
Moore et al. [34] examined the connection between college students’ impressions of teachers’ accessibility and their evaluations of those teachers’ classes. They administered instruments to 266 students to gauge the frequency of teachers’ verbal and nonverbal immediacy behaviors and to collect students’ assessments of the quality of instruction. It was revealed that student evaluations of education were significantly positively correlated with the degree to which the material was immediately applicable. Yeşil [35] investigated the relationship between the communication abilities of prospective teachers and their attitudes regarding teaching performance in the Turkish Republic of Northern Cyprus. Tunçeli [36] investigated how prospective preschool teachers’ communication ability and overall outlook on the field were connected. Findings showed no significant differences in communication skills and attitudes among future teachers based on gender or grade level. It was also revealed that the value sub-dimension of the scale correlates positively and significantly with the communication skills of prospective instructors. Certa et al. [37] presented a systematic approach to assess a graduate-level training program’s efficacy. The evaluation process determined the program’s overall effectiveness by contrasting the course’s broad goals with the outcomes most valued by the students. The evaluation process was streamlined through linguistic variable modeling applied to the responses. Brusca et al. [38] analyzed the intellectual capital (IC) information disclosed on universities’ websites in three European countries to evaluate how universities communicate IC to their stakeholders and to discover potential patterns and trends. The correlation between university rankings, as a surrogate for performance, and the amount and nature of IC Web disclosure is also investigated. Nicolò et al. [39] investigated academic success and open disclosure of intellectual capital in the setting of Italian public universities. The content analysis results demonstrate that Italian public universities highly value the disclosure of human capital information.
Multivariate analysis results corroborate that better-performing Italian public universities tend to provide more detail about IC and its constituent parts. To assess the impact of the COVID-19 pandemic on educational outcomes, Gardas and Navimipour [6] presented empirical research to determine the constructs (latent variables) affected primarily by the pandemic. The findings indicate that “compatibility with online mode” and “new opportunities” substantially impact students’ academic success. Wanke et al. [40] looked into the performance and effectiveness of Brazil’s Federal Institute of Education, which comprises schools nationwide and caters to students of various ages and stages. They built and examined a covariance matrix that included efficiency measurements and performance indicators employed by the Brazilian Ministry of Education. Using the TOPSIS, they maximized the values in the covariance matrix. Researchers found no correlation between official performance indicators and the study’s preferred metrics of optimal solution effectiveness.
Wu et al. [41] studied performance evaluation indices for higher education based on the official performance evaluation structure developed for higher education assessment in Taiwan. As a case study, they ranked 12 private universities listed by the Ministry of Education. They adopted the analytic hierarchy process (AHP) and the VIKOR model to help universities optimize their performance efficiently. Das et al. [42] presented an evaluation system for technical institutes in Indian states. Their research focuses on how a systematic process can be used in a MADM setting to evaluate and rank a group of engineering schools. Their work’s innovative aspect is combining the fuzzy AHP approach and MOOSRA into a single framework for assessing the effectiveness of India’s technical institutions. Musani and Jemain [43] developed a methodology for objectively evaluating educational institutions based on linguistic data. They considered five possible linguistic performance measures (excellent, honors, mediocre, pass, and fail), and the level of academic achievement was approximated by a fuzzy number based on these linguistic features. The outcomes proved that fuzzy set theory can deal with the data uncertainty problems that hinder the usefulness of MADM. Ranjan and Chakraborty [44] evaluated twenty Indian National Institutes of Technology (NITs) using a hybrid of the PROMETHEE and GAIA techniques. They employed faculty strength, teacher-student ratio, number of conferences held in the last five years, number of papers published in journals in the previous five years, research grants, campus area, placement of students, number of books and online journals available in the library, and the course fees to rank the alternatives. Al Qubaisi et al. [45] developed an AHP model to assess the quality of educational institutions and conduct school inspections. The group worked to identify the school’s weighting criteria and create a performance system using the AHP model as a foundation. The school administration can use the proposed structure to address its competitive advantages concerning competing institutions on various fronts. Adhikari et al. [46] suggested an integrated MADM regression-based methodology to evaluate the schools’ input-level performance and explore the influence of this performance combined with contextual factors. West Bengal, India, is home to 82,930 primary and secondary schools, all evaluated using the proposed technique to measure performance at the input level. Their results implied that all these factors significantly affect boys’ passing rate, but girls’ passing rate is affected only by input-level performance and school location. Gul and Yucesan [47] created university rankings in Turkey using metrics based on institutional effectiveness. The Bayesian best-worst method is used to assign relative importance to the thirty-four criteria that fall within the five overarching criteria. Then, the TOPSIS approach determines how the 189 mentioned public and private universities should be rated.

3. Proposed Methodology

In this section, the three main techniques employed in this research, namely, Delphi, AHP, and TOPSIS, are introduced in turn. In this research, the Delphi technique has been used to create an agreement among experts about the final indicators and sub-indicators and to eliminate the less critical ones; the AHP technique has been applied to determine the importance of each of the indicators and sub-indicators in the hierarchy of the problem. Finally, the TOPSIS technique has been applied to rank the alternatives. The steps of the proposed methodology are shown in Figure 1.

3.1. Delphi Technique

The Delphi method is a structured communication technique developed for systematic and interactive forecasting based on the deliberation of experts. In addition, the Delphi technique can identify and screen the most important decision-making indicators. This technique is widely used in the process of reaching a consensus among expert opinions [48]. With the Delphi method, experts from the required disciplines, mostly consisting of five to twenty members [49], are first identified and asked to participate in the inquiry. The questions are refined by the researchers and pursued through several sequential questionnaires. In the first questionnaire, participants are asked to provide their judgment. The analysis identifies the range of opinions about the problem. In a second questionnaire, this range is presented to the group, and persons holding views at the extremes of the spectrum are asked to reassess their opinion in light of the group’s responses [50]. Finally, the process is stopped after a predefined stopping criterion (e.g., number of rounds, achievement of consensus, stability of results). The mean or median scores of the final rounds determine the results [51].
According to [52], a Delphi survey includes three significant steps: (1) clarifying the topic and preparing questions to send to the experts; (2) selecting the expert panel; and (3) organizing and running the survey, which involves two or more rounds. The first step can be designed in at least two ways. One way is to ask experts which criteria and sub-criteria are the most important. The other way is to prepare a list of elements and then ask the experts to evaluate them. One of the most critical aspects of the Delphi survey is the selection of qualified experts in the second step. There are no specified rules regarding the appointment or number of participants. In practice, using people with at least five years of experience in the field under investigation is expected [48]. In step 3, experts’ views can be collected using paper and pencil responses or via the internet. In general, the survey is accomplished in two or three rounds. In the first round, a questionnaire is sent to the participants. The experts’ views are collected, and, depending on the topic, questions are reformulated, new questions are added, or a list of items is updated and adapted. In the second round, participants rank their agreement with statements or weigh the relevance of information. For example, participants may get an updated list in the second round and rate each criterion on a scale that ranges from 0 (completely unimportant) to 10 (very important) [52]. Whether a third round is necessary depends on the topic and design of the survey and the desired result.
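To make the screening step concrete, the following is a minimal Python sketch (with hypothetical ratings on the 0–10 scale described above and a retention threshold of 7, the threshold adopted later in the case study) that keeps only the sub-criteria whose mean score reaches the threshold:

```python
import statistics

# Hypothetical round-1 ratings (0-10) from ten panel members for three sub-criteria.
ratings = {
    "S101": [8, 9, 7, 8, 9, 7, 8, 9, 8, 7],
    "S102": [5, 6, 4, 6, 5, 6, 5, 4, 6, 5],
    "S103": [9, 8, 9, 9, 8, 9, 8, 9, 9, 8],
}

THRESHOLD = 7  # sub-criteria whose mean score falls below this value are dropped

means = {c: statistics.mean(v) for c, v in ratings.items()}
retained = {c: m for c, m in means.items() if m >= THRESHOLD}
dropped = [c for c in means if c not in retained]

print("Retained:", retained)  # e.g. {'S101': 8.0, 'S103': 8.6}
print("Dropped:", dropped)    # e.g. ['S102']
```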

3.2. AHP Technique

To make decisions involving qualitative and quantitative factors, Saaty proposed the AHP, a method that can represent the human decision-making process and aid in reaching better decisions by structuring the problem as a hierarchy and selecting the best alternative from a limited number of options. Saaty also argued that ten experts are sufficient for pairwise comparison-based investigations [53].
The Delphi method is utilized in this study to determine the final sub-criteria; it consisted of electronically sending a description of the problem, together with the collected background knowledge, to a panel of experts and stakeholders. The AHP is then used to determine the final weights of the criteria. First, the main criteria are compared in pairs with respect to the objective. Pairwise comparison is very simple: each cluster’s elements must be compared pair by pair, so if there are $n$ elements in a cluster, $n(n-1)/2$ comparisons will be made.
The ambiguity in experts’ verbal judgments is one of the noteworthy matters researchers have grappled with in using decision-making approaches [54]. When the experts qualitatively examine the indicators of the problem, it is necessary to remove the ambiguity in the judgments by using fuzzy sets (or fuzzy set extensions) [55]. Based on this, in this research, before performing the AHP steps, the problem data are converted into fuzzy numbers based on Table 1.
In the following, the experts’ opinions are quantified using a fuzzy scale. First, the experts’ judgments are collected with Saaty’s scale. Then, the experts’ judgments are fuzzified. The geometric mean method is used to aggregate the views of the experts in the fuzzy AHP method. The pairwise comparison matrix can then be presented according to the results of summarizing the experts’ opinions. It should be noted that every triangular fuzzy number is represented as A = (L, M, U).
After forming the pairwise comparison matrix, each row’s fuzzy sum is calculated. Based on this, the fuzzy synthetic extent of the preferences of each main criterion is calculated. The sum of the elements of the preference column of the main criteria will be as follows:
$$\sum_{i=1}^{10}\sum_{j=1}^{10}\left(L_{gj},\; M_{gj},\; U_{gj}\right) \tag{1}$$
For normalization, the sum of the values of each criterion should be divided by Equation (1). Because the values are fuzzy numbers, the fuzzy sum of each row is multiplied by the inverse of Equation (1). In the final step, the obtained values are de-fuzzified. There are several methods for de-fuzzification. In this study, we have used the following procedure for it:
$$\mathrm{Defuzzy} = \max\left\{x_{\max 1},\; x_{\max 2},\; x_{\max 3}\right\} \tag{2}$$

where

$$x_{\max 1} = \frac{l+m+u}{3}, \qquad x_{\max 2} = \frac{l+4m+u}{6}, \qquad x_{\max 3} = \frac{l+2m+u}{4}.$$
In the second step of the AHP technique, the sub-criteria of each criterion are compared in pairs; the calculations are similar to those described above.
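To illustrate the computations in this subsection (geometric-mean aggregation of expert judgments, the fuzzy sums behind Equation (1), and the de-fuzzification of Equation (2)), the following Python sketch works through a small hypothetical 3 × 3 aggregated comparison matrix. It is a simplified illustration under these assumptions, not the exact implementation used in the study:

```python
import numpy as np

def fuzzy_mult(a, b):
    # Element-wise product of two triangular fuzzy numbers (l, m, u).
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def geometric_mean(tfns):
    # Aggregate several experts' fuzzy judgments for one matrix cell.
    n = len(tfns)
    ls, ms, us = zip(*tfns)
    return (np.prod(ls) ** (1 / n), np.prod(ms) ** (1 / n), np.prod(us) ** (1 / n))

def defuzzify(t):
    # Equation (2): maximum of the three candidate crisp values.
    l, m, u = t
    return max((l + m + u) / 3, (l + 4 * m + u) / 6, (l + 2 * m + u) / 4)

def fuzzy_ahp_weights(matrix):
    """matrix: n x n list of triangular fuzzy numbers (aggregated pairwise comparisons)."""
    row_sums = [(sum(t[0] for t in row), sum(t[1] for t in row), sum(t[2] for t in row))
                for row in matrix]
    total = tuple(sum(r[k] for r in row_sums) for k in range(3))   # Equation (1)
    inv_total = (1 / total[2], 1 / total[1], 1 / total[0])         # inverting a TFN swaps the bounds
    synthetic = [fuzzy_mult(r, inv_total) for r in row_sums]       # fuzzy synthetic extent per criterion
    crisp = [defuzzify(s) for s in synthetic]
    return [c / sum(crisp) for c in crisp]                         # normalized crisp weights

# Geometric-mean aggregation of two experts' judgments for one cell (illustrative).
agg_cell = geometric_mean([(2, 3, 4), (3, 4, 5)])

# Hypothetical aggregated comparison matrix for three criteria.
M = [
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]
print(fuzzy_ahp_weights(M))  # approximately [0.63, 0.24, 0.13]
```

The resulting crisp weights sum to one and can be read as the relative importance of the three hypothetical criteria.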

3.3. TOPSIS Technique

The TOPSIS method is the most widely applied MADM method for choosing the best option [57]. In this technique, the best alternative must have the greatest distance from the negative ideal solution and the smallest distance from the positive ideal solution. In the first step of this method, it is necessary to form the decision matrix X with elements x_ij. As before, we use fuzzy numbers to settle the ambiguity in experts’ judgments. A seven-point scale for scoring the options based on each criterion is shown in Table 2.
When the fuzzy approach with triangular fuzzy numbers is used, the decision matrix is denoted $\tilde{X} = [\tilde{x}_{ij}]_{m \times n}$, where each element of the decision matrix is a triangular fuzzy number $\tilde{x}_{ij} = (l_{ij}, m_{ij}, u_{ij})$.
In the next step, the fuzzy normalized matrix is denoted by the symbol $\tilde{N}$, with elements $\tilde{n}_{ij}$. The following relationships are used for normalization:
For positive attributes:
$$\tilde{n}_{ij} = \left(\frac{l_{ij}}{u_j^{*}},\; \frac{m_{ij}}{u_j^{*}},\; \frac{u_{ij}}{u_j^{*}}\right), \qquad u_j^{*} = \max_i u_{ij}. \tag{3}$$
For negative attributes:
$$\tilde{n}_{ij} = \left(\frac{l_j^{-}}{u_{ij}},\; \frac{l_j^{-}}{m_{ij}},\; \frac{l_j^{-}}{l_{ij}}\right), \qquad l_j^{-} = \min_i l_{ij}. \tag{4}$$
In the third step, the fuzzy weighted normalized matrix should be formed. This step converts the normalized matrix $\tilde{N}$ into the weighted normalized matrix, represented by the symbol $\tilde{V}$. The weight of each index has already been calculated by the fuzzy AHP method, and these weights are normalized here. With the weights of the indicators represented by the vector $\tilde{W}$, we have the following:
$$\tilde{V} = [\tilde{v}_{ij}]_{m \times n}, \qquad \tilde{v}_{ij} = \tilde{n}_{ij} \cdot \tilde{w}_j, \qquad i = 1, 2, \ldots, m,\; j = 1, 2, \ldots, n, \qquad \tilde{W} = (\tilde{w}_1, \tilde{w}_2, \ldots, \tilde{w}_n). \tag{5}$$
In the next step, the fuzzy positive ideal solution (FPIS) and the fuzzy negative ideal solution (FNIS) should be calculated:
$$A^{+} = (\tilde{v}_1^{*}, \tilde{v}_2^{*}, \ldots, \tilde{v}_n^{*}), \qquad A^{-} = (\tilde{v}_1^{-}, \tilde{v}_2^{-}, \ldots, \tilde{v}_n^{-}), \qquad \tilde{v}_j^{*} = \max_i \tilde{v}_{ij}, \quad \tilde{v}_j^{-} = \min_i \tilde{v}_{ij}. \tag{6}$$
In this step, the sum of the distances of the options from the FPIS (d+) and the FNIS (d−) should be calculated. If F1 and F2 are two triangular fuzzy numbers, then the distance between these two numbers is calculated with the following formula:
$$D(F_1, F_2) = \sqrt{\frac{1}{3}\left[(l_1 - l_2)^2 + (m_1 - m_2)^2 + (u_1 - u_2)^2\right]}, \qquad F_1 = (l_1, m_1, u_1), \quad F_2 = (l_2, m_2, u_2). \tag{7}$$
In the next step, the relative closeness of each option to the ideal solution is calculated. For this, we use the following formula:
$$CL_i^{*} = \frac{d_i^{-}}{d_i^{-} + d_i^{+}}. \tag{8}$$
The value of CL lies between zero and one; the closer this value is to one, the closer the option is to the ideal solution and the better it is.
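To make the chain from Equation (3) to Equation (8) concrete, the following Python sketch runs the fuzzy TOPSIS steps on a toy example with hypothetical scores (two alternatives, two benefit criteria). The fuzzy positive and negative ideal solutions are taken element-wise per column, which is a common simplification and not necessarily the exact variant used in the study:

```python
import numpy as np

def dist(a, b):
    # Equation (7): vertex distance between two triangular fuzzy numbers.
    return np.sqrt(((a[0] - b[0])**2 + (a[1] - b[1])**2 + (a[2] - b[2])**2) / 3)

def fuzzy_topsis(X, weights, benefit):
    """X: m x n matrix of triangular fuzzy numbers, weights: n crisp weights,
    benefit: n booleans (True for positive attributes, False for negative ones)."""
    m, n = len(X), len(X[0])
    # Step 1: normalize (Equations (3) and (4)).
    N = [[None] * n for _ in range(m)]
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        if benefit[j]:
            u_star = max(t[2] for t in col)
            for i in range(m):
                l, mm, u = X[i][j]
                N[i][j] = (l / u_star, mm / u_star, u / u_star)
        else:
            l_min = min(t[0] for t in col)
            for i in range(m):
                l, mm, u = X[i][j]
                N[i][j] = (l_min / u, l_min / mm, l_min / l)
    # Step 2: weight the normalized matrix (Equation (5)).
    V = [[(t[0] * weights[j], t[1] * weights[j], t[2] * weights[j])
          for j, t in enumerate(row)] for row in N]
    # Step 3: FPIS and FNIS, taken element-wise per column (Equation (6), simplified).
    fpis = [tuple(max(V[i][j][k] for i in range(m)) for k in range(3)) for j in range(n)]
    fnis = [tuple(min(V[i][j][k] for i in range(m)) for k in range(3)) for j in range(n)]
    # Step 4: distances and closeness coefficients (Equations (7) and (8)).
    cl = []
    for i in range(m):
        d_plus = sum(dist(V[i][j], fpis[j]) for j in range(n))
        d_minus = sum(dist(V[i][j], fnis[j]) for j in range(n))
        cl.append(d_minus / (d_plus + d_minus))
    return cl

# Hypothetical data: 2 alternatives x 2 benefit criteria, linguistic scores as TFNs.
X = [[(7, 9, 10), (3, 5, 7)],
     [(5, 7, 9),  (7, 9, 10)]]
print(fuzzy_topsis(X, weights=[0.6, 0.4], benefit=[True, True]))
```

The alternative with the largest closeness coefficient is ranked first, mirroring the way the school ranking is read off Table 21 in the case study.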

4. Case Study

This research used 15 Iranian secondary high schools in Tehran to explain the proposed methodology.
In the first step of the proposed methodology, it was necessary to prepare a list of the problem criteria and sub-criteria. Criteria, including management staff, credits and costs, educational equipment, library, educational leadership, teaching and learning process, and administrative affairs, were acquired from the background of the research. Because these criteria were not comprehensive, we proposed three criteria, social environment, health, and students (along with their sub-criteria), to perform a more detailed analysis (see Table 3).
Furthermore, a total of 108 sub-criteria have been identified for the main criteria as follows:
For the management staff (C1), we have the following 14 sub-criteria:
  • S11: The proportion of teachers with a bachelor’s degree or higher to the total number of teachers;
  • S12: The proportion of teachers with a field of study related to the subject;
  • S13: The ratio of teaching to all teachers;
  • S14: The average age of teachers;
  • S15: The average service history of teachers;
  • S16: The average teaching hours of teachers per week;
  • S17: The training courses completed by teachers;
  • S18: The manager’s degree;
  • S19: The manager’s field;
  • S110: The amount of service history of the manager in management or deputy positions;
  • S111: The training courses (specialized) completed by the managers and deputies;
  • S112: The degree of deputies;
  • S113: The field of study of deputies;
  • S114: The average service history of deputies.
For the credits and costs (C2), we have the following 9 sub-criteria:
  • S21: The per capita student;
  • S22: The ratio of income from extracurricular and public assistance, etc., to the student;
  • S23: The ratio of incurred expenses to approved expenses;
  • S24: The ratio of costs incurred to motivate teachers to the total budget;
  • S25: The ratio of educational expenses to total costs;
  • S26: The ratio of upbringing (extracurricular) expenses to total expenses;
  • S27: Cost per student;
  • S28: The proportion of expenditures with the approved budget;
  • S29: The quality of the supporting documents of costs.
For the educational equipment (C3), we have the following 12 sub-criteria:
  • S31: Per capita student space;
  • S32: The ratio of the number of students to the toilets;
  • S33: The ratio of upbringing (extracurricular) space to the total area;
  • S34: The sports space per capita;
  • S35: The ratio of the number of students to the classroom space;
  • S36: The ratio of printers and photocopying machines to the needs of the school;
  • S37: The ratio of the number of computers to students;
  • S38: Suitability of educational tools and materials to students’ needs;
  • S39: The degree of up-to-date educational equipment and materials;
  • S310: Quality of teaching materials and tools;
  • S311: Variety of educational materials and tools;
  • S312: Suitability of tables, benches, and chairs to the needs of students.
For the library (C4), we have the following 5 sub-criteria:
  • S41: The ratio of available books to students;
  • S42: The ratio of the number of CDs, educational videos, and tapes to students;
  • S43: The ratio of books, journals, and teaching guides to teachers;
  • S44: The average of teachers who use up-to-date books and publications;
  • S45: The average number of students using the updated library.
For educational leadership (C5), we have the following 25 sub-criteria:
  • S51: Number of training programs held for teachers;
  • S52: The ratio of the number of encouraged teachers to the total number of teachers;
  • S53: The ratio of the number of encouraged students to the total number of students;
  • S54: The quality of setting annual school programs;
  • S55: How to implement annual programs;
  • S56: The quality of compiling quarterly reports and sending them to the regional management;
  • S57: The reopening of the school on time and the preparation of teachers and students;
  • S58: Formation of school councils on time;
  • S59: The quality of council meetings;
  • S510: Registering and maintaining records and minutes of council meetings;
  • S511: How to implement council approvals;
  • S512: The quality of actions performed in national and religious celebrations;
  • S513: How to perform the morning ceremony;
  • S514: The quality of congregational prayers;
  • S515: How to use leisure time;
  • S516: Actions taken to identify the strengths and weaknesses of teachers;
  • S517: The number of training programs held for teachers;
  • S518: The ability of the manager to evaluate the performance of teachers;
  • S519: How to inform broadcast programs and instructions;
  • S520: The level of familiarity of the manager with the description of the duties of the employees;
  • S521: The manager’s familiarity with the principles and skills of educational management;
  • S522: The extent of the manager’s familiarity with the principles and philosophy of education;
  • S523: The manager’s familiarity with the principles of psychology;
  • S524: The degree of the manager’s familiarity with the laws and regulations of education;
  • S525: The quality of transportation service for students.
For health (C6), we have the following 3 sub-criteria:
  • S61: The number of students examined in terms of health and treatment;
  • S62: The quality of the buffet (canteen) food;
  • S63: Health quality of the school environment.
For the students (C7), we have the following 16 sub-criteria:
  • S71: Average GPA of incoming students;
  • S72: The ratio of students to teachers;
  • S73: The ratio of students to classes;
  • S74: The ratio of students participating in camps and scientific trips to all students;
  • S75: The ratio of students participating in scientific, laboratory, Olympiads, artistic, and sports competitions to the total number of students;
  • S76: The ratio of students participating in extracurricular classes to total students;
  • S77: The number of exhibitions held of students’ scientific, cultural, and artistic activities;
  • S78: The rate of students who have completed their education within the official period;
  • S79: The average annual grade point average of students;
  • S710: The middle passing grade of each semester;
  • S711: Pass percentage of each semester;
  • S712: Annual acceptance rate;
  • S713: The amount of students’ participation in class management;
  • S714: The amount of student participation in group work;
  • S715: The level of student participation in decision making and planning;
  • S716: The condition of students’ appearance.
For the teaching and learning process (C8), we have the following 14 sub-criteria:
  • S81: The amount of use of educational technology in the teaching process;
  • S82: Status of planning to improve educational quality;
  • S83: Analysis of the results of academic progress;
  • S84: Providing feedback on the results of academic achievement tests;
  • S85: The number of teachers using the plan;
  • S86: The amount of teachers’ use of educational tools and materials;
  • S87: The amount of teachers’ use of various teaching strategies;
  • S88: The extent to which teachers use multiple methods of evaluating academic progress;
  • S89: The level of familiarity of teachers with the goals and content of lessons;
  • S810: The level of collaboration and exchange of teachers’ experiences with each other;
  • S811: The level of teachers’ familiarity with educational goals, regulations, and guidelines;
  • S812: The level of teachers’ participation in decision making and planning;
  • S813: How to schedule teachers’ meetings with parents;
  • S814: The amount of teachers’ use of laboratories and workshops.
For administrative affairs (C9), we have the following 6 sub-criteria:
  • S91: How to register students;
  • S92: The quality of students’ academic records;
  • S93: The quality of personnel and job files of employees;
  • S94: The quality of office property;
  • S95: The quality of the examination book;
  • S96: The quality of the statistical office.
For the social environment (C10), we have the following 4 sub-criteria:
  • S101: Cultural status of parents of students;
  • S102: Economic status of parents of students;
  • S103: Educational level of students’ parents;
  • S104: Parents’ satisfaction with the school.
In the next step, we created a panel of research experts. Since, in addition to the views of organizational experts, we wanted to include in this research the opinions of the other primary beneficiary of this system, i.e., the parents of the students, we selected five people from each group. Among the organizational experts, five school principals with at least five years of continuous school management were selected. Among the students’ parents, those who had continuously held membership in the Parents-Teachers Association for at least three years were selected.
In the following step, each group member was first given a questionnaire including the sub-criteria. We asked the experts to rate each sub-criterion’s importance on a scale of 0 (completely unimportant) to 10 (completely important). In the initial screening, the points assigned by the experts for all sub-criteria were between 3 and 10. For example, the first-round results for the social environment (C10) are shown in Table 4 and Table 5.
After reviewing the answers provided by the experts in the first round, to remove the less important sub-criteria, we suggested to the experts that the sub-criteria scoring less than seven be eliminated. With the acceptance of this proposal by all the research experts, these sub-criteria were excluded from further investigation, as shown in Table 5, and 53 sub-criteria were considered for further study in the second round. It should be noted that all the research experts participated in both survey rounds and provided complete ratings.
In the second round, based on the same 0-to-10 scale, the experts determined the importance of each sub-criterion, an example of which is shown in Table 6.
It should be noted that Kendall’s coefficient of concordance [58] was used to calculate the agreement of experts’ views, and the results are shown in Table 7.
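For reference, Kendall’s W can be computed as in the following minimal Python sketch, which assumes hypothetical untied ratings and omits tie corrections; it is illustrative only and does not reproduce the study’s data:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    """scores: m x n array-like, m raters scoring n items on a common scale."""
    ranks = np.array([rankdata(row) for row in scores])  # rank the items within each rater
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                        # total rank per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()      # spread of the rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))              # W in [0, 1]; 1 = perfect agreement

# Hypothetical second-round scores from four experts on five sub-criteria.
scores = [[9, 7, 8, 6, 5],
          [8, 7, 9, 5, 6],
          [9, 6, 8, 7, 5],
          [8, 7, 9, 6, 5]]
print(round(kendalls_w(scores), 3))
```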
In the second phase of the proposed methodology, the AHP technique was used to determine the weights of the indicators. First, the experts’ judgments were collected with Saaty’s scale. Then, the experts’ judgments were fuzzified according to Table 1. The geometric mean method was used to aggregate the views of the experts in the fuzzy AHP method. According to the results of summarizing the experts’ opinions, the pairwise comparison matrix is presented in Table 8.
For normalization, the sum of the values of each criterion should be divided by Equation (1). Because the values are fuzzy numbers, the fuzzy sum of each row is multiplied by the inverse of Equation (1). Applying Equation (2) in the next step, the obtained values are de-fuzzified. The normal weights of the main criteria are shown in Table 9.
Based on Table 9, the priority of the main criteria is shown in Figure 2.
The inconsistency rate of the comparisons was found to be 0.094, which is smaller than 0.1; therefore, the comparisons can be trusted.
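The consistency check itself follows Saaty’s standard procedure on the crisp (de-fuzzified) comparison matrix. The sketch below, using a hypothetical 3 × 3 matrix and the usual random index values, illustrates the computation and is not the authors’ code:

```python
import numpy as np

# Saaty's random index values for matrix orders 1-10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """A: crisp (de-fuzzified) pairwise comparison matrix as a square NumPy array."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)  # principal eigenvalue
    ci = (lam_max - n) / (n - 1)                 # consistency index
    return ci / RI[n]                            # consistency ratio

# Hypothetical 3 x 3 crisp comparison matrix.
A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
print(round(consistency_ratio(A), 3))  # values below 0.1 are conventionally acceptable
```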
In the following step, the sub-criteria of each criterion are compared in pairs. The calculations mentioned above are similar to the previous ones. The de-fuzzified and normal values of the sub-criteria weights are presented in Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18 and Table 19.
The inconsistency rate of the comparisons made for the sub-criteria of the management staff was found to be 0.011.
Table 11. The calculated weights of the sub-criteria for C2.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S23        0.275    0.275    0.273    0.275      0.271
S25        0.492    0.492    0.487    0.492      0.484
S26        0.249    0.249    0.247    0.249      0.245
The inconsistency rate of the comparisons made for the sub-criteria of C2 was found to be 0.049.
Table 12. The calculated weights of the sub-criteria for C3.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S31        0.228    0.225    0.223    0.228      0.218
S32        0.357    0.354    0.351    0.357      0.342
S34        0.226    0.223    0.221    0.226      0.216
S36        0.183    0.181    0.178    0.183      0.175
S38        0.023    0.023    0.023    0.023      0.022
S310       0.028    0.028    0.028    0.028      0.027
The inconsistency rate of the comparisons made for the sub-criteria of C3 was found to be 0.081.
Table 13. The calculated weights of the sub-criteria for C4.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S41        0.499    0.496    0.493    0.499      0.487
S44        0.399    0.396    0.394    0.399      0.389
S45        0.127    0.127    0.126    0.127      0.124
The inconsistency rate of the comparisons made for the sub-criteria of C4 was found to be 0.077.
Table 14. The calculated weights of the sub-criteria for C5.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S51        0.102    0.101    0.100    0.102      0.097
S55        0.067    0.066    0.065    0.067      0.064
S56        0.089    0.088    0.087    0.089      0.085
S58        0.074    0.073    0.072    0.074      0.070
S59        0.086    0.085    0.084    0.086      0.082
S514       0.068    0.067    0.066    0.068      0.065
S515       0.101    0.100    0.099    0.101      0.097
S517       0.105    0.104    0.103    0.105      0.101
S518       0.096    0.095    0.094    0.096      0.091
S519       0.097    0.096    0.095    0.097      0.093
S523       0.078    0.077    0.076    0.078      0.074
S524       0.085    0.085    0.085    0.085      0.081
The inconsistency rate of the comparisons made for the sub-criteria of C5 was found to be 0.037.
Table 15. The calculated weights of the sub-criteria for C6.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S61        0.357    0.348    0.339    0.357      0.350
S62        0.349    0.341    0.332    0.349      0.342
S63        0.296    0.313    0.330    0.313      0.307
The inconsistency rate of the comparisons made for the sub-criteria of C6 was found to be 0.076.
Table 16. The calculated weights of the sub-criteria for C7.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S72        0.148    0.147    0.145    0.148      0.142
S76        0.217    0.214    0.212    0.217      0.208
S77        0.160    0.158    0.156    0.160      0.153
S79        0.182    0.180    0.178    0.182      0.174
S712       0.172    0.170    0.169    0.172      0.165
S715       0.166    0.165    0.163    0.166      0.159
The inconsistency rate of the comparisons made for the sub-criteria of C7 was found to be 0.021.
Table 17. The calculated weights of the sub-criteria for C8.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S81        0.149    0.148    0.146    0.149      0.143
S82        0.212    0.210    0.208    0.212      0.203
S83        0.159    0.157    0.155    0.159      0.152
S86        0.180    0.178    0.177    0.180      0.172
S89        0.198    0.196    0.194    0.198      0.190
S814       0.146    0.145    0.143    0.146      0.140
The inconsistency rate of the comparisons made for the sub-criteria of C8 was found to be 0.097.
Table 18. The calculated weights of the sub-criteria for C9.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S92        0.187    0.186    0.184    0.187      0.181
S93        0.274    0.271    0.269    0.274      0.265
S94        0.284    0.282    0.280    0.284      0.275
S95        0.150    0.148    0.146    0.150      0.145
S96        0.139    0.138    0.137    0.139      0.134
The inconsistency rate of the comparisons made for the sub-criteria of C9 was found to be 0.075.
Table 19. The calculated weights of the sub-criteria for C10.

Criteria   x_max1   x_max2   x_max3   De-Fuzzy   Normal W
S102       0.290    0.288    0.287    0.290      0.282
S103       0.737    0.732    0.723    0.737      0.718
The inconsistency rate of the comparisons made for the sub-criteria of C10 was found to be 0.084.
In the last step of phase 2, the final priority of the criteria is calculated. To determine the final weights, it is enough to multiply the weight of each sub-criterion (W2) by the weight of its main criterion (W1). For example, for C1 and the related sub-criteria, the final weights are given in Table 20.
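As a small illustration of this multiplication, the sketch below combines the local sub-criterion weights of C10 from Table 19 with a hypothetical main-criterion weight (the 0.05 value is assumed for illustration only):

```python
# Hypothetical weight of the main criterion C10 (for illustration only).
c10_weight = 0.05

# Local sub-criterion weights within C10, taken from Table 19.
local_weights = {"S102": 0.282, "S103": 0.718}

# Global weight = main-criterion weight (W1) x local sub-criterion weight (W2).
global_weights = {s: round(c10_weight * w, 4) for s, w in local_weights.items()}
print(global_weights)  # {'S102': 0.0141, 'S103': 0.0359}
```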
Figure 3 and Figure 4 show the final weights of all sub-criteria.
In phase three of the proposed methodology, we used FTOPSIS to select the best option. Considering 53 sub-criteria and their final weights, 15 options were prioritized by applying Equations (3)–(8). Due to the table length, the scores obtained from the decision matrix for this problem are not presented here. Therefore, the fuzzy TOPSIS algorithm’s output for ranking the mentioned high schools is given in Table 21.
According to Table 21, it can be concluded that School 2 receives the first rank.

Sensitivity Analysis

In sensitivity analysis, one common method is to change the criteria weights and review their effects on the final outputs. Considering that in this study we used two key stakeholders of the problem in the panel of experts, we used the split-half method for sensitivity analysis. Based on this, we calculate the main criteria weights separately based on the views of the parents and school administrators and compare them with the weights obtained by summing up the two views (see Table 9). The main criteria weights according to the opinions of parents (P), administrators (A), and their combination (T) are shown in Figure 5 and Table 22.
The sensitivity analysis carried out in this study yields significant results. As mentioned in the introduction, in Iran, schools are evaluated based on regulations compiled by the Ministry of Education, with inadequate and unweighted criteria. The first point is that the criteria added in this study were all considered important by the research experts; overall, health is the third, students the fifth, and social environment the ninth most influential criterion in the school prioritization process.
The second point, and certainly the more noteworthy one, is that not paying attention to the views of other system stakeholders can distort the results and undermine their reliability by favoring a particular stakeholder. As shown in Figure 5 and Table 22, the criterion of management staff (C1) ranks first in the combined view (T) and in the view of the administrators (A); in contrast, from the parents’ point of view (P), this criterion ranks fifth in importance. The three most important criteria from the principals’ point of view are management staff (C1), administrative affairs (C9), and library (C4), respectively; this ranking shows that they consider most of the system’s internal factors, especially those directly related to themselves, to be important. On the other hand, the three most important criteria from the point of view of the parents are educational equipment (C3), social environment (C10), and credits and costs (C2), respectively. Considering that the fourth most important criterion from the parents’ point of view is health (C6), it is clear that they consider a combination of internal and external factors of the system to be important in their analysis.
Regarding the final ranking of the schools, it should be noted that although School 2 is the best option based on the combined views and the principals’ views, School 11 is the best option according to the parents’ opinion. Finally, it should be kept in mind that the bias of the principals in determining the weights of the criteria (see the weight of the first criterion in Table 22) greatly impacted the choice of School 2 as the best option.

5. Conclusions

In light of schools’ crucial role in advancing society’s goals, both qualitatively and quantitatively, evaluating their performance and quality is of fundamental importance to society, especially to parents and policymakers. Schools in Iran, for example, are assessed by upstream institutions quantitatively, on inadequate criteria, and without regard to the views of another stakeholder, namely, students’ parents. Consequently, the purpose of this study was to address the challenges within the school evaluation and ranking process by establishing a three-phase methodology. Based on this, a case study was conducted to rank 15 schools in one of the districts of Tehran, Iran. By employing the Delphi, fuzzy AHP, and fuzzy TOPSIS techniques, ten criteria, including three new ones proposed in this study, and 53 sub-criteria were weighted by experts, and the priority of the schools was determined. Performing a sensitivity analysis of the problem data showed that ignoring the viewpoints of other stakeholders of the problem can distort the results. In this research, along with the quantitative criteria, three qualitative criteria less noticed in the literature, namely the social environment, health, and students, were considered in evaluating schools. The results of this research indicated that considering both qualitative and quantitative criteria has a decisive role in evaluating schools and probably other educational systems. In addition, it was shown that in the evaluation of schools, it is better to consider the perspectives of other stakeholders of this system because, without them, the analysis results will be one-sided.
Even though more comprehensive quantitative and qualitative criteria were considered in the current research compared to previous studies, and, in addition to maintaining methodological simplicity, the views of both main stakeholders of the system were obtained, there are weaknesses that other researchers can address in the future. The first weakness relates to how qualitative criteria (verbal judgments) are handled. The literature review shows that different approaches can be used for this case. For example, intuitionistic fuzzy sets (IFSs) [59], Pythagorean fuzzy sets (PFSs) [60], and neutrosophic sets (NSs) [61], or the full consistency method (FUCOM) and its combination with the rough SAW method [62], and interval type-2 fuzzy sets (IT2FSs) in a combination of DEMATEL-AHP-TOPSIS [63], may have brought more reliable results. Therefore, one of the future directions for researchers can be to use fuzzy set extensions and compare their results with the results of the present study.
Another limitation of the proposed approach is how the criteria are weighted. While we used the well-known AHP approach to maintain the simplicity of the methodology, other developed methods may yield more accurate results. For example, fuzzy pivot pairwise relative criteria importance assessment (FPIPRECIA) [64], intuitionistic fuzzy decision-making trial and evaluation laboratory (IFDEMATEL) [65], the criteria importance through intercriteria correlation (CRITIC) method [66], the new level-based weight assessment (LBWA) model [67], and the best-worst method (BWM) are some approaches recently recommended by researchers to determine the weights (importance) of problem criteria. In the same way, researchers have recommended the use of VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) [68], evaluation based on distance from average solution (EDAS) [69], data envelopment analysis (DEA) [70], and multi-attributive border approximation area comparison (MABAC) [71] methods in the phase of ranking the alternatives as an alternative to the TOPSIS method. Accordingly, another research direction can be analyzing the problem data with the above approaches and comparing them with the current research results.

Author Contributions

P.L. and S.A.E. planned the scheme, initiated the project, and suggested the simulation; A.S. and S.Y. conducted the numerical simulation and analyzed the results; N.K. developed the simulation result and modeling and examined the theory validation. The manuscript was written through the contribution of all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to the anonymous referees for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Javanbakht, T.; Chakravorty, S. Prediction of Human Behavior with TOPSIS. J. Fuzzy Ext. Appl. 2022, 3, 109–125. [Google Scholar] [CrossRef]
  2. Nodira, T. Innovative Management in the Development of the Higher Education System. J. Acad. Res. Trends Educ. Sci. 2022, 1, 346–351. [Google Scholar]
  3. Zhang, G.; Wu, J.; Zhu, Q. Performance Evaluation and Enrollment Quota Allocation for Higher Education Institutions in China. Eval. Program Plann. 2020, 81, 101821. [Google Scholar] [CrossRef]
  4. Lai, W.-T. Performance Evaluation and Forecasting for High School Admission Through School-Based Assessment in Taiwan. Int. J. Intell. Technol. Appl. Stat. 2022, 15, 33–45. [Google Scholar]
  5. Pereira, D.; Flores, M.A.; Niklasson, L. Assessment Revisited: A Review of Research in Assessment and Evaluation in Higher Education. Assess. Eval. High. Educ. 2016, 41, 1008–1032. [Google Scholar] [CrossRef] [Green Version]
  6. Gardas, B.B.; Navimipour, N.J. Performance Evaluation of Higher Education System amid COVID-19: A Threat or an Opportunity? Kybernetes 2022, 51, 2508–2528. [Google Scholar] [CrossRef]
  7. Taheri, A.; Taghipour zahir, A.; Jafari, P. Identify Main Components for Performance Assessment of Schools in Favorable Situation. Iran. J. Educ. Sociol. 2019, 2, 1–9. [Google Scholar] [CrossRef]
  8. Qiu, P.; Sorourkhah, A.; Kausar, N.; Cagin, T.; Edalatpanah, S.A. Simplifying the Complexity in the Problem of Choosing the Best Private-Sector Partner. Systems 2023, 11, 80. [Google Scholar] [CrossRef]
  9. Cubukcu, C.; Cantekin, C. Using a Combined Fuzzy-AHP and Topsis Decision Model for Selecting the Best Firewall Alternative. J. Fuzzy Ext. Appl. 2022, 3, 192–200. [Google Scholar] [CrossRef]
  10. Imeni, M. Fuzzy Logic in Accounting and Auditing. J. Fuzzy Ext. Appl. 2020, 1, 66–72. [Google Scholar] [CrossRef]
  11. Akram, M.; Naz, S.; Edalatpanah, S.A.; Mehreen, R. Group Decision-Making Framework under Linguistic q-Rung Orthopair Fuzzy Einstein Models. Soft Comput. 2021, 25, 10309–10334. [Google Scholar] [CrossRef]
  12. Sorourkhah, A.; Babaie-Kafaki, S.; Azar, A.; Shafiei Nikabadi, M. A Fuzzy-Weighted Approach to the Problem of Selecting the Right Strategy Using the Robustness Analysis (Case Study: Iran Automotive Industry). Fuzzy Inf. Eng. 2019, 11, 39–53. [Google Scholar] [CrossRef]
  13. Loganathan, K.; Najafi, M.; Kaushal, V.; Agyemang, P. Evaluation of Public Private Partnership in Infrastructure Projects. In Proceedings of the Pipelines 2021: Planning (Proceedings of Sessions of the Pipelines 2021 Conference), Online. 3–6 August 2021; pp. 151–159. [Google Scholar]
  14. Naeini, M.A.; Zandieh, M.; Najafi, S.E.; Sajadi, S.M. Analyzing the Development of the Third-Generation Biodiesel Production from Microalgae by a Novel Hybrid Decision-Making Method: The Case of Iran. Energy 2020, 195, 116895. [Google Scholar] [CrossRef]
  15. Saberhoseini, S.F.; Edalatpanah, S.A.; Sorourkhah, A. Choosing the Best Private-Sector Partner According to the Risk Factors in Neutrosophic Environment. Big Data Comput. Visions 2022, 2, 61–68. [Google Scholar] [CrossRef]
  16. Jafar, M.N.; Khan, F.; Naveed, A. Prediction of Pakistan Super League-2020 Using TOPSIS and Fuzzy TOPSIS Methods. J. Fuzzy Ext. Appl. 2020, 1, 98–107. [Google Scholar] [CrossRef]
  17. Zhang, K.; Xie, Y.; Noorkhah, S.A.; Imeni, M.; Das, S.K. Neutrosophic Management Evaluation of Insurance Companies by a Hybrid TODIM-BSC Method: A Case Study in Private Insurance Companies. Manag. Decis. 2022; ahead-of-print. [Google Scholar] [CrossRef]
  18. Sorourkhah, A.; Edalatpanah, S.A. Using a Combination of Matrix Approach to Robustness Analysis (MARA) and Fuzzy DEMATEL-Based ANP (FDANP) to Choose the Best Decision. Int. J. Math. Eng. Manag. Sci. 2022, 7, 68–80. [Google Scholar] [CrossRef]
  19. Vijayakumar, S.R.; Suresh, P.; Sasikumar, K.; Pasupathi, K.; Yuvaraj, T.; Velmurugan, D. Evaluation and Selection of Projects Using Hybrid MCDM Technique under Fuzzy Environment Based on Financial Factors. Mater. Today Proc. 2022, 60, 1347–1352. [Google Scholar] [CrossRef]
  20. Vásquez, J.A.; Escobar, J.W.; Manotas, D.F. AHP–TOPSIS Methodology for Stock Portfolio Investments. Risks 2022, 10, 4. [Google Scholar] [CrossRef]
  21. Solangi, Y.A.; Tan, Q.; Mirjat, N.H.; Ali, S. Evaluating the Strategies for Sustainable Energy Planning in Pakistan: An Integrated SWOT-AHP and Fuzzy-TOPSIS Approach. J. Clean. Prod. 2019, 236, 117655. [Google Scholar] [CrossRef]
  22. Bekesiene, S.; Vasiliauskas, A.V.; Hošková-mayerová, Š.; Vasilienė-vasiliauskienė, V. Comprehensive Assessment of Distance Learning Modules by Fuzzy AHP-TOPSIS Method. Mathematics 2021, 9, 409. [Google Scholar] [CrossRef]
  23. Valmorbida, S.M.I.; Ensslin, S.R. Performance Evaluation of University Rankings: Literature Review and Guidelines for Future Research. Int. J. Bus. Innov. Res. 2017, 14, 479–501. [Google Scholar] [CrossRef]
  24. Kunsch, P.L.; Ishizaka, A. Multiple-Criteria Performance Ranking Based on Profile Distributions: An Application to University Research Evaluations. Math. Comput. Simul. 2018, 154, 48–64. [Google Scholar] [CrossRef] [Green Version]
  25. Samanlioglu, F.; Ayaǧ, Z. A Fuzzy AHP-VIKOR Approach for Evaluation of Educational Use Simulation Software Packages. J. Intell. Fuzzy Syst. 2019, 37, 7699–7710. [Google Scholar] [CrossRef]
  26. Fonseca, R.A.; Thomé, A.M.T.; Milanez, B. Decision-Making Process on Sustainability: A Systematic Literature Review BT. In Industrial Engineering and Operations Management; Tavares Thomé, A.M., Barbastefano, R.G., Scavarda, L.F., Gonçalves dos Reis, J.C., Amorim, M.P.C., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 225–236. [Google Scholar]
  27. Pino-Mejías, J.-L.; Luque-Calvo, P.-L. Survey of Methods for Ranking and Benchmarking Higher Education Institutions BT. In Handbook of Operations Research and Management Science in Higher Education; Sinuany-Stern, Z., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 159–211. ISBN 978-3-030-74051-1. [Google Scholar]
  28. Zanellato, G.; Tiron-Tudor, A. Toward a Sustainable University: Babes-Bolyai University Goes Green. Adm. Sci. 2021, 11, 133. [Google Scholar] [CrossRef]
  29. Muniz, R.d.F.; Andriola, W.B.; Muniz, S.M.; Thomaz, A.C.F. The Use of Data Envelopment Analysis to Estimate the Educational Efficiency of Brazilian Schools. J. Appl. Res. Ind. Eng. 2022, 9, 374–383. [Google Scholar] [CrossRef]
  30. Duran, V.; Topal, S.; Smarandache, F. An Application of Neutrosophic Logic in the Confirmatory Data Analysis of the Satisfaction with Life Scale. J. Fuzzy Ext. Appl. 2021, 2, 262–282. [Google Scholar] [CrossRef]
  31. Chansamut, A. Information System Model for Educational Management in Supply Chain for Thai Higher Education Institutions. Int. J. Res. Ind. Eng. 2021, 10, 87–94. [Google Scholar] [CrossRef]
  32. Tavana, M.; Khalili Nasr, A.; Mina, H.; Michnik, J. A Private Sustainable Partner Selection Model for Green Public-Private Partnerships and Regional Economic Development. Socioecon. Plann. Sci. 2022, 83, 101189. [Google Scholar] [CrossRef]
  33. Jing, D.; Imeni, M.; Edalatpanah, S.A.; Alburaikan, A.; Khalifa, H.A. Optimal Selection of Stock Portfolios Using Multi-Criteria Decision-Making Methods. Mathematics 2023, 11, 415. [Google Scholar] [CrossRef]
  34. Moore, A.; Masterson, J.T.; Christophel, D.M.; Shea, K.A. College Teacher Immediacy and Student Ratings of Instruction. Commun. Educ. 1996, 45, 29–39. [Google Scholar] [CrossRef]
  35. Yeşil, H. The Relationship between Candidate Teachers’ Communication Skills and Their Attitudes towards Teaching Profession. Procedia-Soc. Behav. Sci. 2010, 9, 919–922. [Google Scholar] [CrossRef] [Green Version]
  36. Tunçeli, H.İ. The Relationship between Candidate Teachers’ Communication Skills and Their Attitudes towards Teaching Profession (Sakarya University Sample). Pegem J. Educ. Instr. 2013, 3, 51–58. [Google Scholar] [CrossRef] [Green Version]
  37. Certa, A.; Enea, M.; Hopps, F. A Multi-Criteria Approach for the Group Assessment of an Academic Course: A Case Study. Stud. Educ. Eval. 2015, 44, 16–22. [Google Scholar] [CrossRef]
  38. Brusca, I.; Cohen, S.; Manes-Rossi, F.; Nicolò, G. Intellectual Capital Disclosure and Academic Rankings in European Universities. Meditari Account. Res. 2019, 28, 51–71. [Google Scholar] [CrossRef]
  39. Nicolò, G.; Raimo, N.; Polcini, P.T.; Vitolla, F. Unveiling the Link between Performance and Intellectual Capital Disclosure in the Context of Italian Public Universities. Eval. Program Plann. 2021, 88, 101969. [Google Scholar] [CrossRef]
  40. Wanke, P.F.; Antunes, J.J.J.; Miano, V.Y.; do Couto, C.L.P.; Mixon, F.G. Measuring Higher Education Performance in Brazil: Government Indicators of Performance vs Efficiency Measures. Int. J. Product. Perform. Manag. 2022, 71, 2479–2495. [Google Scholar] [CrossRef]
  41. Wu, H.-Y.; Chen, J.-K.; Chen, I.-S.; Zhuo, H.-H. Ranking Universities Based on Performance Evaluation by a Hybrid MCDM Model. Measurement 2012, 45, 856–880. [Google Scholar] [CrossRef]
  42. Das, M.C.; Sarkar, B.; Ray, S. A Performance Evaluation Framework for Technical Institutions in One of the States of India. Benchmarking 2015, 22, 773–790. [Google Scholar] [CrossRef]
43. Musani, S.; Jemain, A.A. Ranking Schools’ Academic Performance Using a Fuzzy VIKOR. J. Phys. Conf. Ser. 2015, 622, 12036. [Google Scholar] [CrossRef]
  44. Rajeev, R.; Chakraborty, S. Performance Evaluation of Indian Technical Institutions Using PROMETHEE-GAIA Approach. Informatics Educ. 2015, 14, 103–125. [Google Scholar]
  45. Al Qubaisi, A.; Badri, M.; Mohaidat, J.; Al Dhaheri, H.; Yang, G.; Al Rashedi, A.; Greer, K. An Analytic Hierarchy Process for School Quality and Inspection. Int. J. Educ. Manag. 2016, 30, 437–459. [Google Scholar] [CrossRef]
  46. Adhikari, A.; Bhattacharyya, S.; Basu, S.; Bhattacharya, R. Evaluating the Performance of Primary Schools in India: Evidence from West Bengal. Int. J. Product. Perform. Manag. 2022, 71, 2630–2658. [Google Scholar] [CrossRef]
  47. Gul, M.; Yucesan, M. Performance Evaluation of Turkish Universities by an Integrated Bayesian BWM-TOPSIS Model. Socioecon. Plann. Sci. 2022, 80, 101173. [Google Scholar] [CrossRef]
  48. Chowdhury, N.; Katsikas, S.; Gkioulos, V. Modeling Effective Cybersecurity Training Frameworks: A Delphi Method-Based Study. Comput. Secur. 2022, 113, 102551. [Google Scholar] [CrossRef]
  49. Ludwig, L.; Starr, S. Library as Place: Results of a Delphi Study. J. Med. Libr. Assoc. 2005, 93, 315–326. [Google Scholar]
  50. Fish, L.S.; Dean, M.B. The Delphi Method. In Research Methods in Family Therapy; Sprenkle, D.H., Piercy, F.P., Eds.; Guilford Press: New York, NY, USA, 1996; pp. 238–253. [Google Scholar]
  51. Rowe, G.; Wright, G. The Delphi Technique as a Forecasting Tool: Issues and Analysis. Int. J. Forecast. 1999, 15, 353–375. [Google Scholar] [CrossRef]
52. Steurer, J. The Delphi Method: An Efficient Procedure to Generate Knowledge. Skeletal Radiol. 2011, 40, 959–961. [Google Scholar] [CrossRef]
  53. Saaty, T.L. A Scaling Method for Priorities in Hierarchical Structures. J. Math. Psychol. 1977, 15, 234–281. [Google Scholar] [CrossRef]
  54. Karahan, M.; Lacinkaya, F.; Erdonmez, K.; Eminagaoglu, E.D.; Kasnakoglu, C. Age and Gender Classification from Facial Features and Object Detection with Machine Learning. J. Fuzzy Ext. Appl. 2022, 3, 219–230. [Google Scholar] [CrossRef]
  55. Alam, N.M.F.H.N.B.; Ku Khalif, K.M.N.; Jaini, N.I.; Abu Bakar, A.S.; Abdullah, L. Intuitive Multiple Centroid Defuzzification of Intuitionistic Z- Numbers. J. Fuzzy Ext. Appl. 2022, 3, 126–139. [Google Scholar] [CrossRef]
  56. Ahmed, F.; Kilic, K. Fuzzy Analytic Hierarchy Process: A Performance Analysis of Various Algorithms. Fuzzy Sets Syst. 2019, 362, 110–128. [Google Scholar] [CrossRef]
  57. Sorourkhah, A. Coping Uncertainty in the Supplier Selection Problem Using a Scenario-Based Approach and Distance Measure on Type-2 Intuitionistic Fuzzy Sets. Fuzzy Optim. Model. J. 2022, 3, 64–71. [Google Scholar] [CrossRef]
  58. Field, A.P. Kendall’s Coefficient of Concordance. In Encyclopedia of Statistics in Behavioral Science; John Wiley & Sons, Ltd.: Chichester, UK, 2005; ISBN 9780470013199. [Google Scholar]
  59. Das, A.K.; Granados, C. FP-Intuitionistic Multi Fuzzy N-Soft Set and Its Induced FP-Hesitant N Soft Set in Decision-Making. Decis. Mak. Appl. Manag. Eng. 2022, 5, 67–89. [Google Scholar] [CrossRef]
  60. Arora, H.D.; Naithani, A. Significance of TOPSIS Approach to MADM in Computing Exponential Divergence Measures for Pythagorean Fuzzy Sets. Decis. Mak. Appl. Manag. Eng. 2022, 5, 246–263. [Google Scholar] [CrossRef]
  61. Donbosco, J.S.M.; Ganesan, D. The Energy of Rough Neutrosophic Matrix and Its Application to MCDM Problem for Selecting the Best Building Construction Site. Decis. Mak. Appl. Manag. Eng. 2022, 5, 30–45. [Google Scholar] [CrossRef]
  62. Durmić, E.; Stević, Ž.; Chatterjee, P.; Vasiljević, M.; Tomašević, M. Sustainable Supplier Selection Using Combined FUCOM – Rough SAW Model. Reports Mech. Eng. 2020, 1, 34–43. [Google Scholar] [CrossRef]
  63. Petrovic, I.; Kankaras, M. A Hybridized IT2FS-DEMATEL-AHP-TOPSIS Multicriteria Decision Making Approach: Case Study of Selection and Evaluation of Criteria for Determination of Air Traffic Control Radar Position. Decis. Mak. Appl. Manag. Eng. 2020, 3, 146–164. [Google Scholar] [CrossRef]
  64. Đalić, I.; Stević, Ž.; Karamasa, C.; Puška, A. A Novel Integrated Fuzzy PIPRECIA – Interval Rough SAW Model: Green Supplier Selection. Decis. Mak. Appl. Manag. Eng. 2020, 3, 126–145. [Google Scholar] [CrossRef]
  65. Gergin, R.E.; Peker, İ.; Gök Kısa, A.C. Supplier Selection by Integrated IFDEMATEL-IFTOPSIS Method: A Case Study of Automotive Supply Industry. Decis. Mak. Appl. Manag. Eng. 2022, 5, 169–193. [Google Scholar] [CrossRef]
  66. Pamucar, D.; Žižović, M.; Đuričić, D. Modification of the CRITIC Method Using Fuzzy Rough Numbers. Decis. Mak. Appl. Manag. Eng. 2022, 5, 362–371. [Google Scholar] [CrossRef]
  67. Žižović, M.; Pamucar, D. New Model for Determining Criteria Weights: Level Based Weight Assessment (LBWA) Model. Decis. Mak. Appl. Manag. Eng. 2019, 2, 126–137. [Google Scholar] [CrossRef]
  68. Yildirim, B.F.; Kuzu Yıldırım, S. Evaluating the Satisfaction Level of Citizens in Municipality Services by Using Picture Fuzzy VIKOR Method: 2014-2019 Period Analysis. Decis. Mak. Appl. Manag. Eng. 2022, 5, 50–66. [Google Scholar] [CrossRef]
  69. Paul, V.K.; Chakraborty, S.; Chakraborty, S. An Integrated IRN-BWM-EDAS Method for Supplier Selection in a Textile Industry. Decis. Mak. Appl. Manag. Eng. 2022, 5, 219–240. [Google Scholar] [CrossRef]
  70. Rasoulzadeh, M.; Edalatpanah, S.A.; Fallah, M.; Najafi, S.E. A Multi-Objective Approach Based on Markowitz and DEA Cross-Efficiency Models for the Intuitionistic Fuzzy Portfolio Selection Problem. Decis. Mak. Appl. Manag. Eng. 2022, 5, 241–259. [Google Scholar] [CrossRef]
  71. Vesković, S.; Stević, Ž.; Stojić, G.; Vasiljević, M.; Milinković, S. Evaluation of the Railway Management Model by Using a New Integrated Model DELPHI-SWARA-MABAC. Decis. Mak. Appl. Manag. Eng. 2018, 1, 34–50. [Google Scholar] [CrossRef]
Figure 1. The proposed methodology steps.
Figure 2. Weights of the main criteria.
Figure 3. Final weights for sub-criteria of C1–C5.
Figure 4. Final weights for sub-criteria of C6–C10.
Figure 5. The comparison of weights.
Table 1. Changing the LVs into fuzzy numbers [56].
Linguistic Variables | Fuzzy Numbers (FNs) | Inverse of FNs
Equally preferred | (1/2, 1, 3/2) | (2/3, 1, 2)
Moderately preferred | (1, 3/2, 2) | (1/2, 2/3, 1)
Strongly preferred | (3/2, 2, 5/2) | (2/5, 1/2, 2/3)
Very strongly preferred | (2, 5/2, 3) | (1/3, 2/5, 1/2)
Extremely preferred | (5/2, 3, 7/2) | (2/7, 1/3, 2/5)
The main diagonal of the matrix | (1, 1, 1) | (1, 1, 1)
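For readers who want to script the scale in Table 1, the sketch below (Python; not taken from the authors' implementation) encodes the linguistic variables as triangular fuzzy numbers and derives the "Inverse of FNs" column from the standard reciprocal rule 1/(l, m, u) = (1/u, 1/m, 1/l).

```python
# A minimal sketch of the Table 1 scale as triangular fuzzy numbers (TFNs)
# and of the reciprocal rule that produces the "Inverse of FNs" column.
from fractions import Fraction as F

SCALE = {
    "Equally preferred":       (F(1, 2), F(1), F(3, 2)),
    "Moderately preferred":    (F(1), F(3, 2), F(2)),
    "Strongly preferred":      (F(3, 2), F(2), F(5, 2)),
    "Very strongly preferred": (F(2), F(5, 2), F(3)),
    "Extremely preferred":     (F(5, 2), F(3), F(7, 2)),
}

def tfn_inverse(tfn):
    """Reciprocal of a triangular fuzzy number (l, m, u) -> (1/u, 1/m, 1/l)."""
    l, m, u = tfn
    return (1 / u, 1 / m, 1 / l)

if __name__ == "__main__":
    for label, tfn in SCALE.items():
        inv = tfn_inverse(tfn)
        print(f"{label:25s} {tuple(map(str, tfn))} -> {tuple(map(str, inv))}")
```

Running the sketch reproduces the pairs shown in Table 1, e.g. (1/2, 1, 3/2) inverts to (2/3, 1, 2).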
Table 2. Triangular fuzzy numbers.
Linguistic Variables | Fuzzy Numbers
Very poor | (0, 0, 1)
Poor | (0, 1, 3)
Medium poor | (1, 3, 5)
Fair | (3, 5, 7)
Medium good | (5, 7, 9)
Good | (7, 9, 10)
Very good | (9, 10, 10)
Table 3. The main criteria.
Criteria | Symbol | Source
Management staff | C1 | Literature
Credits and costs | C2 | Literature
Educational equipment | C3 | Literature
Library | C4 | Literature
Educational leadership | C5 | Literature
Health | C6 | Recommended
Students | C7 | Recommended
Teaching and learning process | C8 | Literature
Administrative affairs | C9 | Literature
Social environment | C10 | Recommended
Table 4. Summary of the first Delphi round results for C10.
Criteria | Sub-C | Expert 1 | Expert 2 | Expert 3 | Expert 4 | Expert 5 | Expert 6 | Expert 7 | Expert 8 | Expert 9 | Expert 10 | Average
C10 | S101 | 7 | 5 | 7 | 9 | 8 | 9 | 7 | 7 | 7 | 8 | 7.4
 | S102 | 7 | 9 | 9 | 6 | 4 | 3 | 5 | 7 | 6 | 7 | 6.3
 | S103 | 5 | 6 | 8 | 6 | 9 | 9 | 6 | 7 | 9 | 8 | 7.3
 | S104 | 5 | 6 | 4 | 3 | 4 | 3 | 6 | 7 | 5 | 4 | 4.7
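The screening logic behind Tables 4–6 can be illustrated with a small sketch: average the ten expert scores per sub-criterion and keep only those at or above a retention cut-off. The 7.0 threshold used below is an assumption inferred from which C10 sub-criteria survive to the second round; the paper may apply a different consensus rule.

```python
# Illustrative Delphi screening sketch for the C10 sub-criteria of Table 4.
# CUTOFF is an assumed retention threshold, not a value stated in the paper.
from statistics import mean

first_round_c10 = {                      # expert scores from Table 4
    "S101": [7, 5, 7, 9, 8, 9, 7, 7, 7, 8],
    "S102": [7, 9, 9, 6, 4, 3, 5, 7, 6, 7],
    "S103": [5, 6, 8, 6, 9, 9, 6, 7, 9, 8],
    "S104": [5, 6, 4, 3, 4, 3, 6, 7, 5, 4],
}

CUTOFF = 7.0                             # assumed retention threshold

for sub, scores in first_round_c10.items():
    avg = mean(scores)
    status = "kept" if avg >= CUTOFF else "removed"
    print(f"{sub}: average = {avg:.1f} -> {status}")
```

With this assumed cut-off, S101 (7.4) and S103 (7.3) are retained while S102 (6.3) and S104 (4.7) are dropped, matching Tables 5 and 6.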
Table 5. The excluded sub-criteria.
Criteria | Sub-Criteria
C1 | 3, 4, 8, 9, 12, 13, 14
C2 | 1, 2, 4, 7, 8, 9
C3 | 3, 5, 7, 9, 11, 12
C4 | 2, 3
C5 | 2, 3, 4, 7, 10, 11, 12, 13, 16, 20, 21, 22, 25
C6 | -
C7 | 1, 3, 4, 5, 8, 10, 11, 13, 14, 16
C8 | 4, 5, 7, 8, 10, 11, 12, 13
C9 | 1
C10 | 2, 4
Table 6. Summary of the second Delphi round results for C10.
Criteria | Sub-C | Expert 1 | Expert 2 | Expert 3 | Expert 4 | Expert 5 | Expert 6 | Expert 7 | Expert 8 | Expert 9 | Expert 10 | Average
C10 | S101 | 8 | 8 | 9 | 9 | 8 | 8 | 8 | 8 | 8 | 8 | 8.2
 | Removed | - | - | - | - | - | - | - | - | - | - | -
 | S103 | 7 | 8 | 7 | 7 | 9 | 7 | 9 | 9 | 7 | 9 | 7.9
 | Removed | - | - | - | - | - | - | - | - | - | - | -
Table 7. The results of the agreement of experts’ views.
Rounds | Items | Experts | Kendall’s C | D.F. | Sig.
The first | 108 | 10 | 0.333 | 107 | 0.000
The second | 53 | 10 | 0.402 | 52 | 0.003
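Table 7 reports Kendall's coefficient of concordance as the agreement statistic for each Delphi round. A minimal sketch of the computation is given below, applied only to the four C10 sub-criteria from Table 4 as a toy example; the paper's figures cover all 108 and 53 items per round, and tie corrections are omitted here for brevity.

```python
# Kendall's coefficient of concordance (W) on the C10 scores of Table 4.
# Rows = experts, columns = sub-criteria (S101..S104); tie correction omitted.
import numpy as np
from scipy.stats import rankdata, chi2

scores = np.array([
    [7, 7, 5, 5], [5, 9, 6, 6], [7, 9, 8, 4], [9, 6, 6, 3], [8, 4, 9, 4],
    [9, 3, 9, 3], [7, 5, 6, 6], [7, 7, 7, 7], [7, 6, 9, 5], [8, 7, 8, 4],
])

m, n = scores.shape                                 # raters, items
ranks = np.apply_along_axis(rankdata, 1, scores)    # rank items within each expert
rank_sums = ranks.sum(axis=0)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (m ** 2 * (n ** 3 - n))
chi_sq = m * (n - 1) * w                            # approximate test, df = n - 1
p_value = chi2.sf(chi_sq, df=n - 1)
print(f"Kendall's W = {w:.3f}, chi2 = {chi_sq:.2f}, df = {n - 1}, p = {p_value:.3f}")
```

The degrees of freedom in Table 7 (107 and 52) follow the same rule, df = items − 1, applied to the full item sets of each round.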
Table 8. Pairwise comparison matrix of main criteria.
Criteria | U/M/L | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10
C1 | U | 1 | 0.58 | 0.41 | 0.44 | 0.45 | 0.46 | 0.68 | 0.54 | 0.49 | 1.7
 | M | 1 | 0.45 | 0.32 | 0.32 | 0.36 | 0.36 | 0.54 | 0.44 | 0.39 | 1.35
 | L | 1 | 0.37 | 0.26 | 0.25 | 0.35 | 0.31 | 0.45 | 0.38 | 0.33 | 1.03
C2 | U | 2.67 | 1 | 1.00 | 1.20 | 1.88 | 1.35 | 2.01 | 2.11 | 1.31 | 1.1
 | M | 2.22 | 1 | 0.78 | 0.96 | 1.52 | 1.13 | 1.71 | 1.75 | 1.11 | 0.94
 | L | 1.71 | 1 | 0.62 | 0.79 | 1.28 | 0.96 | 1.41 | 1.36 | 0.95 | 0.82
C3 | U | 3.79 | 1.61 | 1 | 1.34 | 1.29 | 1.30 | 1.81 | 0.55 | 1.00 | 1.40
 | M | 3.15 | 1.28 | 1 | 1.05 | 1.09 | 1.00 | 1.54 | 0.42 | 0.78 | 1.02
 | L | 2.41 | 1.00 | 1 | 0.82 | 0.89 | 0.82 | 1.28 | 0.35 | 0.55 | 0.78
C4 | U | 3.97 | 1.26 | 1.22 | 1 | 0.77 | 1.64 | 1.71 | 1.52 | 1.90 | 1.56
 | M | 3.17 | 1.04 | 0.95 | 1 | 0.62 | 1.26 | 1.32 | 1.22 | 1.59 | 1.31
 | L | 2.29 | 0.84 | 0.75 | 1 | 0.52 | 0.98 | 1.08 | 0.98 | 1.30 | 1.04
C5 | U | 2.82 | 0.78 | 1.12 | 1.94 | 1 | 3.07 | 3.82 | 0.68 | 1.04 | 0.51
 | M | 2.74 | 0.66 | 0.92 | 1.62 | 1 | 2.49 | 3.16 | 0.54 | 0.87 | 0.43
 | L | 2.24 | 0.53 | 0.78 | 1.30 | 1 | 1.94 | 2.59 | 0.46 | 0.72 | 0.37
C6 | U | 3.23 | 1.05 | 1.22 | 1.02 | 0.52 | 1 | 0.40 | 1.52 | 1.90 | 1.56
 | M | 2.74 | 0.89 | 1 | 0.79 | 0.40 | 1 | 0.32 | 1.22 | 1.59 | 1.31
 | L | 2.16 | 0.74 | 0.77 | 0.61 | 0.33 | 1 | 0.26 | 0.98 | 1.30 | 1.04
C7 | U | 2.23 | 0.71 | 0.78 | 0.93 | 0.39 | 3.79 | 1 | 3.07 | 3.82 | 0.68
 | M | 1.84 | 0.59 | 0.65 | 0.76 | 0.32 | 3.15 | 1 | 2.49 | 3.16 | 0.54
 | L | 1.47 | 0.50 | 0.55 | 0.58 | 0.26 | 2.50 | 1 | 1.94 | 2.59 | 0.46
C8 | U | 2.64 | 0.74 | 2.86 | 1.02 | 2.19 | 1.02 | 0.52 | 1 | 1.04 | 0.51
 | M | 2.28 | 0.57 | 2.38 | 0.82 | 1.85 | 0.82 | 0.40 | 1 | 0.87 | 0.43
 | L | 1.86 | 0.47 | 1.83 | 0.66 | 1.48 | 0.66 | 0.33 | 1 | 0.72 | 0.37
C9 | U | 3.05 | 1.05 | 1.28 | 0.77 | 1.39 | 0.77 | 0.39 | 1.39 | 1 | 0.40
 | M | 2.58 | 0.90 | 1.00 | 0.63 | 1.15 | 0.63 | 0.32 | 1.15 | 1 | 0.32
 | L | 2.06 | 0.76 | 0.79 | 0.53 | 0.96 | 0.53 | 0.26 | 0.96 | 1 | 0.26
C10 | U | 0.97 | 1.22 | 1.32 | 0.96 | 2.67 | 0.96 | 2.19 | 2.67 | 3.79 | 1
 | M | 0.74 | 1.06 | 0.99 | 0.76 | 2.32 | 0.76 | 1.85 | 2.32 | 3.15 | 1
 | L | 0.59 | 0.91 | 0.72 | 0.64 | 1.98 | 0.64 | 1.48 | 1.98 | 2.50 | 1
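One common way to turn a triangular-fuzzy pairwise matrix such as Table 8 into fuzzy criterion weights is Buckley's geometric-mean method; the sketch below illustrates the mechanics on a 2 × 2 cut of the matrix (the C1–C2 entries of Table 8, written as (l, m, u) triples). It is not the paper's exact fuzzy AHP algorithm, so its output is not expected to match Table 9.

```python
# A hedged sketch of Buckley's fuzzy geometric-mean method on a 2x2 cut of
# Table 8; for illustration only, not the authors' exact procedure.
import numpy as np

# matrix[i][j] = (l, m, u) comparing criterion i with criterion j
matrix = [
    [(1.00, 1.00, 1.00), (0.37, 0.45, 0.58)],   # C1 vs C1, C1 vs C2
    [(1.71, 2.22, 2.67), (1.00, 1.00, 1.00)],   # C2 vs C1, C2 vs C2
]

def fuzzy_geometric_mean(row):
    """Component-wise geometric mean of a row of TFNs."""
    arr = np.array(row)                          # shape (n, 3)
    return arr.prod(axis=0) ** (1.0 / len(row))

r = [fuzzy_geometric_mean(row) for row in matrix]   # fuzzy synthetic values
total = np.sum(r, axis=0)                           # (sum_l, sum_m, sum_u)
# fuzzy weight: r_i * (sum of r)^(-1); the reciprocal reverses the (l, m, u) order
weights = [(ri[0] / total[2], ri[1] / total[1], ri[2] / total[0]) for ri in r]
for name, w in zip(["C1", "C2"], weights):
    print(name, tuple(round(x, 3) for x in w))
```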
Table 9. De-fuzzification of the calculated weights of the main criteria.
Criteria | x_max1 | x_max2 | x_max3 | De-Fuzzy | Normal W
C1 | 0.192 | 0.190 | 0.189 | 0.192 | 0.184
C2 | 0.073 | 0.073 | 0.072 | 0.073 | 0.070
C3 | 0.087 | 0.086 | 0.085 | 0.087 | 0.083
C4 | 0.076 | 0.075 | 0.074 | 0.076 | 0.073
C5 | 0.092 | 0.091 | 0.090 | 0.092 | 0.088
C6 | 0.110 | 0.109 | 0.107 | 0.110 | 0.105
C7 | 0.106 | 0.105 | 0.103 | 0.106 | 0.101
C8 | 0.109 | 0.108 | 0.107 | 0.109 | 0.104
C9 | 0.126 | 0.124 | 0.123 | 0.126 | 0.120
C10 | 0.075 | 0.074 | 0.074 | 0.075 | 0.072
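Table 9 reduces each fuzzy weight to a crisp, normalised value. The paper's x_max-based procedure is not reproduced here; the sketch below shows the two generic steps (defuzzification, here via the centroid of a triangular number, followed by normalisation) on placeholder fuzzy weights rather than the study's data.

```python
# A minimal sketch of defuzzification and normalisation; the input TFNs are
# hypothetical placeholders, and the centroid rule stands in for the paper's
# x_max-based procedure.
def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

fuzzy_weights = {                 # hypothetical fuzzy weights for illustration
    "C1": (0.17, 0.19, 0.21),
    "C2": (0.06, 0.07, 0.09),
    "C3": (0.07, 0.09, 0.10),
}

crisp = {c: centroid(w) for c, w in fuzzy_weights.items()}
total = sum(crisp.values())
normalised = {c: v / total for c, v in crisp.items()}
for c in fuzzy_weights:
    print(f"{c}: de-fuzzified = {crisp[c]:.3f}, normalised = {normalised[c]:.3f}")
```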
Table 10. The calculated weights of the sub-criteria for C1.
Criteria | x_max1 | x_max2 | x_max3 | De-Fuzzy | Normal W
S11 | 0.135 | 0.133 | 0.132 | 0.135 | 0.129
S12 | 0.170 | 0.168 | 0.166 | 0.170 | 0.163
S15 | 0.111 | 0.110 | 0.109 | 0.111 | 0.106
S16 | 0.149 | 0.147 | 0.146 | 0.149 | 0.143
S17 | 0.161 | 0.160 | 0.158 | 0.161 | 0.154
S110 | 0.149 | 0.147 | 0.146 | 0.149 | 0.143
S111 | 0.168 | 0.167 | 0.165 | 0.168 | 0.161
Table 20. Final weights for sub-criteria of C1.
Criteria | Weight | Sub-C | Weight | Final W
C1 | 0.197 | S11 | 0.129 | 0.025
 |  | S12 | 0.163 | 0.032
 |  | S15 | 0.106 | 0.021
 |  | S16 | 0.143 | 0.028
 |  | S17 | 0.154 | 0.030
 |  | S110 | 0.143 | 0.028
 |  | S111 | 0.161 | 0.032
Table 21. Final ranks.
School | d+ | d− | CL | Rank
1 | 0.130 | 0.145 | 0.527 | 11
2 | 0.092 | 0.165 | 0.641 | 1
3 | 0.124 | 0.148 | 0.544 | 9
4 | 0.129 | 0.127 | 0.497 | 13
5 | 0.112 | 0.152 | 0.575 | 6
6 | 0.106 | 0.163 | 0.607 | 4
7 | 0.119 | 0.134 | 0.529 | 10
8 | 0.161 | 0.097 | 0.376 | 14
9 | 0.095 | 0.166 | 0.635 | 2
10 | 0.122 | 0.156 | 0.562 | 7
11 | 0.097 | 0.150 | 0.606 | 5
12 | 0.102 | 0.158 | 0.607 | 3
13 | 0.120 | 0.147 | 0.551 | 8
14 | 0.120 | 0.127 | 0.504 | 12
15 | 0.158 | 0.095 | 0.375 | 15
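The last two columns of Table 21 follow from the standard TOPSIS closeness coefficient CL = d− / (d+ + d−), with schools ranked by descending CL. The sketch below recomputes both from the published distance columns; small discrepancies are due to rounding in the reported d+ and d− values.

```python
# Recomputing the TOPSIS closeness coefficients and ranks of Table 21 from
# the reported distances to the positive (d+) and negative (d-) ideal solutions.
d_plus = [0.130, 0.092, 0.124, 0.129, 0.112, 0.106, 0.119, 0.161,
          0.095, 0.122, 0.097, 0.102, 0.120, 0.120, 0.158]
d_minus = [0.145, 0.165, 0.148, 0.127, 0.152, 0.163, 0.134, 0.097,
           0.166, 0.156, 0.150, 0.158, 0.147, 0.127, 0.095]

cl = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
order = sorted(range(len(cl)), key=lambda i: cl[i], reverse=True)
ranks = {school: rank for rank, school in enumerate(order, start=1)}
for i, value in enumerate(cl):
    print(f"School {i + 1}: CL = {value:.3f}, rank = {ranks[i]}")
```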
Table 22. Different ranks of criteria.
Criteria | W (P) | Rank | W (A) | Rank | W (T) | Rank
C1 | 0.110 | 5 | 0.258 | 1 | 0.184 | 1
C2 | 0.125 | 3 | 0.015 | 9 | 0.070 | 10
C3 | 0.143 | 1 | 0.023 | 8 | 0.083 | 7
C4 | 0.016 | 10 | 0.130 | 3 | 0.073 | 8
C5 | 0.077 | 8 | 0.099 | 5 | 0.088 | 6
C6 | 0.122 | 4 | 0.088 | 7 | 0.105 | 3
C7 | 0.099 | 7 | 0.103 | 4 | 0.101 | 5
C8 | 0.110 | 5 | 0.098 | 6 | 0.104 | 4
C9 | 0.062 | 9 | 0.178 | 2 | 0.120 | 2
C10 | 0.135 | 2 | 0.009 | 10 | 0.072 | 9
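As an illustrative follow-up to Table 22 (not an analysis reported in the paper), the agreement between the alternative weighting schemes can be quantified by correlating their criterion rankings, for example with Spearman's rho. The column labels W (P), W (A) and W (T) are taken from the table; their interpretation follows the paper.

```python
# Spearman rank correlations between the three criterion rankings of Table 22;
# an illustrative extension, not a result from the paper.
from scipy.stats import spearmanr

rank_p = [5, 3, 1, 10, 8, 4, 7, 5, 9, 2]     # ranks under W (P)
rank_a = [1, 9, 8, 3, 5, 7, 4, 6, 2, 10]     # ranks under W (A)
rank_t = [1, 10, 7, 8, 6, 3, 5, 4, 2, 9]     # ranks under W (T)

for name, other in [("W(P) vs W(T)", rank_p), ("W(A) vs W(T)", rank_a)]:
    rho, p = spearmanr(other, rank_t)
    print(f"{name}: rho = {rho:.2f} (p = {p:.3f})")
```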