Article

A Comparative Analysis between Global University Rankings and Environmental Sustainability of Universities

by Manuel Muñoz-Suárez 1, Natividad Guadalajara 2,* and José M. Osca 3

1 Faculty of Business, Technical University of Machala, Panamericana Avenue Km. 5 1/2 way to Pasaje, Machala 070222, Ecuador
2 Center of Economic Engineering, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
3 Department of Plant Production, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(14), 5759; https://doi.org/10.3390/su12145759
Submission received: 27 April 2020 / Revised: 11 June 2020 / Accepted: 15 July 2020 / Published: 17 July 2020
(This article belongs to the Special Issue Promoting Sustainability in Higher Education)

Abstract

Global University Rankings (GURs) aim to measure the performance of universities worldwide. Other rankings have recently appeared that evaluate the creation of environmental policies in universities, e.g., the Universitas Indonesia (UI) GreenMetric. This work analyzes the interaction between the Top 500 of these rankings by considering the geographical location of universities and their typologies. A descriptive analysis and a logistic regression analysis were carried out. The former demonstrated that European and North American universities predominated in the Top 500 of GURs, while Asian universities did so in the Top 500 of the UI GreenMetric ranking, followed by European universities. Older universities predominated in the Top 500 of GURs, while younger ones did so in the Top 500 of the UI GreenMetric ranking. The regression analysis demonstrated that although Latin American universities were barely present in the Top 500 of GURs, they were 5-fold more likely to appear in the Top 500 of the UI GreenMetric ranking. We conclude that a low association exists between universities' academic performance and their institutional commitment to the natural environment. It would be advisable for GURs to include environmental indicators to promote sustainability at universities and to contribute to mitigating climate change.

1. Introduction

Given the demand for information about universities and the desire to compare them, university ranking systems were developed in many countries of the world (e.g., Australia, Canada, the UK, the US) [1]. They were soon followed by Global University Rankings (GURs), which have become a key means of gauging a university's performance, productivity, and quality according to its position in them. GURs emerged after 2003 to measure university performance. They evaluate universities' management by using indicators associated with intangible assets, like organizational reputation or institutional prestige, and they employ different methodologies to assess Intellectual Capital (IC) [2]. The most prominent performance-based ranking schemes currently available are the Academic Ranking of World Universities (ARWU), better known as the Shanghai Ranking (since 2003), the Quacquarelli Symonds (QS) World University Rankings (since 2004), and the Times Higher Education (THE) World University Rankings (since 2010). These GURs use a variety of criteria, including productivity, citations, awards, and reputation, while others, such as the Ranking Web of Universities or Webometrics (since 2004), Leiden, and Scimago, employ only bibliometric indicators [3].
Generally speaking, the parameters or indicators that these GURs apply relate to university research activity (citations, number of publications, industry income, number of highly cited researchers on staff, etc.) and training activity (number of alumni receiving Nobel Prizes (NP) or Fields Medals (FM), academic reputation, faculty/student ratio, etc.). Proposals have been put forward to modify GURs according to their websites [4]. However, each GUR uses a different series of parameters to produce its ranking and to determine each university's position [5]. For instance, in the ARWU system, three indicators (faculty members who have won NP and FM, papers published in Nature and Science, and papers indexed in Science Citation Index and Social Science Citation Index journals) largely predict the ranking of universities. In the QS and THE systems, the most powerful contributors to the ranking of universities are expert-based reputation indicators [6]. The distinct selection of indicators, their institutional coverage, rating methods, and normalization all influence not only the ranking positions of given institutions [2,7,8] but also which universities are included in each GUR [9].
Moreover, corporate social responsibility (CSR) or sustainability [10] in the business world is understood in terms of three areas of action or dimensions: economic, social, and environmental. A study [11] argues that practitioners require scholars to reduce the ambiguity between IC and its expected results, which would open the door to a potentially productive way of understanding IC and the complexity of economic, social, and environmental values. Some measures of the CSR or corporate sustainability of companies and countries have been prepared by private companies, and other open-access rankings have been developed, but these CSR rankings only include companies, not universities.
Universities, as knowledge acquisition centers, are key to implementing sustainability policies [12]. They must also play a decisive role in developing and promoting sustainability in environmental, economic, and social terms, thereby covering all three CSR dimensions, because integrating sustainability into policies, curricula, and academic activities has a positive impact on institutional performance [13,14,15,16,17]. Universities drive the achievement of a sustainability culture in society because they serve as models of sustainable development [18]. The sustainability roles of universities must be extended so that their integral social impact covers a wide scope [19]. To this end, innovation is a relevant and influential factor that promotes sustainable development in such a rapidly growing field as environmental sustainability at universities [20]. Sustainability reporting by universities has been considered a useful tool for both accountability and improving socio-environmental performance. However, the literature shows that such reporting is still in its early stages, and relatively few institutions currently publish sustainability reports [21]. The use of Information and Communication Technologies (ICT) is important for improving education quality through good teaching practices while generating environmental awareness and creating sustainable spaces [22].
Universities have to contribute to overcoming the main twenty-first-century challenges, such as mounting environmental and socio-economic crises, income inequality among countries, and political instability. To do so, they must integrate the sustainable development concept into future organizations, research, and education by training professionals in the knowledge, competences, and skills needed to solve ecological, social, and economic problems in societies as a whole [23]. However, structuring environmental sustainability efforts in accordance with the environmental management performance concept reveals a major weakness in the environmental sustainability management of higher education institutions [24].
For all these reasons, universities play a key role in adopting policies for sustainable development-focused education, which must be set up across different dimensions through a holistic and integrative approach applied to all the fields that sustainability covers [25]. It is important to consider both universities' core activities as providers of research and education in sustainability and their activities as organizations. Systematic engagement activities have been suggested to benefit both a university's ability to manage internal processes (by learning from its peers) and its ability to produce the right graduates and knowledge [26]. One study shows that most analyzed universities do not yet seem to fully capitalize on the relation between economic effectiveness and socio-environmental efficiency, a situation that could slow down the diffusion of sustainability principles in university governance [27].
Implementing green policies on campuses tends to be the first step that universities take toward the natural environment. They are concerned mainly with energy efficiency indicators, but importance must also be attached to other aspects like waste management, preserving and saving water, transport, and research to achieve all-round long-term sustainability [14]. The green spaces on campuses can provide important benefits for their users, but not enough attention has been paid to them [28]. Universities have significant impacts on greenhouse gas emissions because, to a great extent, they generate transport on campuses, use water and energy, produce waste, etc. Furthermore, under the United Nations Development Programme's 17 Sustainable Development Goals (UNDP 17 SDGs), universities are expected to meet the SDG targets by 2030 [29].
Consequently, just as GURs were developed to evaluate the academic reputation of universities by considering their research and training activity, other indices or rankings have been developed to quantify their environmental impact and their contribution to cushioning adverse effects. Some examples include the Green League 2007, the Environmental and Social Responsibility Index 2009, and the Universitas Indonesia (UI) GreenMetric [30]. Universitas Indonesia launched the latter in 2010 as an online world university ranking that offers a portrait of current conditions and policies related to green campuses and the sustainability of universities worldwide [29]. The UI GreenMetric adopts the environmental sustainability concept, which contains three elements: environmental (natural resource use, environmental management, and pollution prevention), economic (cost saving), and social (education and social involvement) [31].
The main objective of the present research was to study whether the universities better positioned in GURs are also more involved in creating environmental policies in their own institutions according to the UI GreenMetric ranking. The second objective was to study whether any geographical differences appear in the interrelations between GURs and campus sustainability, and whether the age of universities, their being public or private, and their being technology or non-technology universities could impact these interrelations.
In order to fulfill these objectives, we first carried out a descriptive analysis, followed by a logistic regression analysis. The main conclusions are that European and North American universities predominate in the Top 500 of GURs, while Asian ones predominate in the Top 500 of the UI GreenMetric ranking, followed by European ones. Older, public, and non-technology universities also predominate in the Top 500 of GURs, whereas younger universities predominate in the UI GreenMetric ranking. Although Latin American universities are barely present in GURs, the probability of them appearing in the Top 500 of the UI GreenMetric ranking is 5-fold higher than for universities from other continents, and 1.5-fold higher for European universities.
This study represents a novel contribution to the literature by comparing two university rankings: on the one hand, those evaluating academic/research activity and, on the other hand, those assessing their environmental activity on university campuses.
This article is arranged as follows: Section 2 offers the background needed to understand the selected sample and the objective of this research. Section 3 describes the methodology, the sample, and the data employed herein. Section 4 presents the results, which Section 5 discusses. Finally, Section 6 concludes with a few final remarks.

2. Background

2.1. Global University Rankings

We now describe the four GURs employed herein. The three most studied ones were selected, namely the ARWU, QS, and THE rankings, which rely on large commercial bibliometric databases (Web of Science and Scopus). The fourth is the Webometrics ranking, which uses data from Google Scholar Citations.

2.1.1. Academic Ranking of World Universities

The ARWU ranking is published by the Shanghai Ranking Consultancy, a fully independent organization dedicated to higher education intelligence and consultation [32]. The ARWU ranking uses six objective indicators, each with a different weight, to classify universities worldwide: the number of former students (10%) and staff members (20%) winning Nobel Prizes (NP) or Fields Medals (FM); the number of highly cited researchers, as selected by Clarivate Analytics (20%); the number of articles published in Nature and Science (20%); the number of articles indexed in the Science Citation Index-Expanded (SCIE) and the Social Science Citation Index (SSCI) (20%); and the university's per capita performance (10%). This ranking has been published since 2003 and is the oldest of the four studied GURs. Its data sources are Thomson Reuters' Web of Science database and the resources of national agencies.
The ARWU ranking classifies 1000 universities from 63 countries. Universities are individually ordered up to position 100; thereafter, 18 ranges apply, each with 50 institutions listed in alphabetical order. The university that comes first has a total score of 100, and the total score of each university is published up to position 100; the scores of the remaining positions up to 1000 are not published. This total score is obtained by summing the scores of the six above-cited indicators weighted accordingly.
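To make this aggregation concrete, the sketch below computes such a weighted total score in Python. It is a minimal illustration: only the six weights come from the description above, while the indicator scores and the example university are invented, and ARWU's internal normalization procedure is not reproduced.

```python
# Illustrative ARWU-style composite score (indicator scores are invented;
# only the weights come from the methodology described above).
WEIGHTS = {
    "alumni": 0.10,  # former students winning NP/FM
    "award":  0.20,  # staff members winning NP/FM
    "hici":   0.20,  # highly cited researchers
    "ns":     0.20,  # papers in Nature and Science
    "pub":    0.20,  # papers indexed in SCIE/SSCI
    "pcp":    0.10,  # per capita performance
}

def total_score(indicators: dict) -> float:
    """Weighted sum of the six normalized indicator scores (0-100)."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# Hypothetical university with already-normalized indicator scores:
example = {"alumni": 40.0, "award": 55.0, "hici": 60.0,
           "ns": 70.0, "pub": 80.0, "pcp": 50.0}
print(total_score(example))  # 62.0
```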

2.1.2. QS World University Ranking

The QS ranking was developed by the company Quacquarelli Symonds and has been published yearly since 2004. QS is based on a methodological framework that employs six parameters to represent university performance [33]. These parameters and their weights are: Academic Reputation (40%); Employer Reputation (10%); Faculty/Student Ratio (20%); Citations per Faculty (20%); International Faculty Ratio (5%); and International Student Ratio (5%). Its data sources are the Scopus database and the University Portfolio Survey.
The QS ranking analyzes 1000 universities from 84 countries. It classifies them individually up to position 500, then in 10 alphabetically ordered groups of 10 universities each up to position 600, then in groups of 50 universities up to position 800; the final group contains 200 alphabetically ordered universities up to position 1000. None of the scores are published, only the positions.

2.1.3. THE World University Ranking

The THE ranking has analyzed universities since 2010 using 13 indicators in five areas: teaching (the learning environment); research (volume, income, and reputation); citations (research influence); international outlook (staff, students, and research); and industry income (knowledge transfer). All five areas are scored from 0 to 100 (from the worst to the best), and the highest total score (closest to 100) is given to the best university [34]. Its data sources are Thomson Reuters' Web of Science database and the University Portfolio Survey.
The THE ranking classifies some 1300 universities from almost 90 countries, individually up to position 200. Thereafter, it groups them in sets of 50 up to position 400, then in two groups of 100 universities up to position 600, and finally in two groups of 200 universities up to position 1000. Universities appear in alphabetical order within each group. From position 1000 onwards, all the remaining analyzed universities are listed alphabetically. Apart from each university's position, the total score and the score of each area are published for all the studied universities, along with the number of full-time equivalent students, the number of students per staff member, the proportion of international students, and the female/male ratio.

2.1.4. Ranking Web of Universities or Webometrics Ranking of World Universities

The Webometrics ranking has been issued annually since 2004 by the Spanish Cybermetrics Lab, which forms part of the Spanish National Research Council (CSIC) [35]. Classification is based on public website data, drawing on link-index databases such as Google, Majestic, Ahrefs, Google Scholar Citations, and Scimago. The rating methodology is based on an analysis of university representation in a global information space: it analyzes links to assess quality rather than analyzing citations or conducting global surveys. The parameters and weights that Webometrics uses are: presence (5%), the size (number of webpages) of the institution's main web domain; visibility (50%), the number of external networks originating backlinks to the institution's websites; transparency (10%), the number of citations from top authors according to the source; and excellence, or scholar (35%), the number of papers among the top 10% most cited in 26 disciplines. Each parameter is scored numerically, with a value of 1 for the best score, and the weighted sum of the four parameter scores gives the ranking positions. The Webometrics ranking individually classifies some 12,000 universities from more than 120 countries worldwide.
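As an illustration of this rank-based scoring, the sketch below aggregates invented parameter values for three hypothetical universities with the stated weights. It mimics only the general mechanism described above (rank 1 for the best value per parameter, then a weighted sum of ranks, lower being better); it is not the Cybermetrics Lab's actual computation.

```python
# Webometrics-style weighted rank aggregation (all data invented).
import numpy as np
from scipy.stats import rankdata

weights = {"presence": 0.05, "visibility": 0.50,
           "transparency": 0.10, "excellence": 0.35}

# Raw parameter values for three hypothetical universities (higher = better).
raw = {
    "presence":     np.array([1200, 900, 400]),
    "visibility":   np.array([50000, 80000, 20000]),
    "transparency": np.array([300, 450, 100]),
    "excellence":   np.array([150, 90, 30]),
}

# Rank within each parameter (1 = best), then combine with the weights.
combined = sum(w * rankdata(-raw[p]) for p, w in weights.items())
print(combined)              # lower combined value = better position
print(np.argsort(combined))  # final ordering of the three universities
```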
Many studies have been conducted on these rankings, especially the first three GURs (ARWU, QS, and THE), which have been compared to one another. Some works have studied their correlation [36] according to six social science criteria and two behavioral criteria [37]. One study [38] concluded that, owing to the predominantly quantitative orientation of GURs (on the internationalization of higher education), their results should not be generalized or understood as a means to improve the quality (internationalization) of higher education. Another study [39] concluded from its comparison that citation analysis, in terms of journals, individual scholars, and papers, is central to rankings and will remain so. Another paper [40] concluded that while a degree of agreement exists across these GURs about the ranking of two 'elite' institutions, the level of agreement across league table and ranking systems rapidly breaks down when considering institutions further down the ranking order.
Using the data from two of these rankings (ARWU and THE) between 2010 and 2018 and applying ordinal regressions, a study [41] provides evidence that both rankings are mutually influential, generating intra- and reciprocal reputational effects over time. Other works have analyzed ARWU to see how a country's economy affects the positions of its universities [42], to study the stability and confidence of the ARWU ranking [43], and, in a longitudinal study of four Australian universities over a 15-year period, to trace how international rankings are articulated in university strategy [44].
The QS ranking has been employed to compare the Masters of Business Administration programs offered by top European and Asian B-schools with CSR and sustainability orientation as per their websites [45].
Finally, the Webometrics ranking has been analyzed alongside the three other GURs in two works. One work [3] studied the correlation among the four GURs, and with two others, namely Leiden and Scimago. As expected, the highest correlations appear among ARWU, THE, and QS, and among Webometrics, Leiden, and Scimago. One of the criticisms faced by bibliometric evaluations of universities is that they prioritize research over teaching. The Webometrics methodology has an advantage in that it processes only a very small fraction of the full data and is, therefore, economical in terms of data handling, time, and cost. The other work [46] also compared the four rankings studied herein and concludes that no ranking tool should be considered perfect and that continuous improvement should be called for: a university that is poorly ranked may be excellent in teaching or in other qualities that contribute to nation-building compared to universities with higher rankings.
GURs are tools employed to evaluate universities' performance. As a result, they are used in universities' marketing activities and to track the positions that universities occupy [6]. Thus, GURs influence the perceptions of both students and public opinion, and impact universities' general reputation among international audiences [5,7,37]. For these reasons, universities worldwide make efforts to enter and improve their position in rankings [38].
Yet, for many reasons, GURs tend to be the object of much criticism. One reason is that it is impossible to measure "a university's quality" by employing only a few relevant data, methodologies, and indicators [47]. As previously seen, GURs use indicators [4,5,6,7,36,38,48,49] that center mainly on research performance [3,50] and tend to rely on internationally accessible bibliometric databases and reputation surveys [50]. Other measures must also be borne in mind, such as qualitative (pairwise) evaluations made by experts in a given field [47].
Partiality toward research-intensive universities and the lack of indicators related to teaching are other criticisms made of GURs [3,49]. Teaching and research are traditionally perceived as two functions that go hand in hand and are mutually supportive [7]. Universities' interest in their placement in GURs has promoted a generalized dynamic of investing more in research [37] and, accordingly, many universities neglect other important duties [47]. There is a danger of some universities concentrating more on appearing in or improving their position in GURs than on progressing in other relevant areas, like education and knowledge transfer [49]. We must bear in mind that many other factors can influence universities' education and research outcomes; some may be related to a university's mission or history in the national context [51]. Moreover, by prioritizing research and leaving teaching somewhat to one side, GURs also confer on universities the "world-class" privilege, whose key characteristics are a high concentration of talent, plenty of resources to conduct advanced research, and favorable governance [38].
Another reason why GURs are criticized is that they do not account for the size of universities, which tends to be the main determinant of research production, while systematic differences in quality appear in relation to a country, research intensity, and the level of resources. Bigger institutions produce more research and score higher on research quality indicators [51].
The data sources that the QS, ARWU, and THE rankings employ are another reason for criticism. These GURs favor institutions grounded in science and the English language. Bibliometric data, which consist of the number of university members' publications and citations, tend to dominate the weightings of rankings. This has led to GURs being dominated mainly by English-speaking universities, as the rankings take their bibliometric data from the most important English-language publishing houses. Moreover, the surveys conducted by the THE and QS rankings are based on small samples and an insufficient number of countries, which means that their results are biased toward a limited number of countries, especially universities in the USA, the UK, and Australia [46].
Countries have different histories, traditions, and cultures, and distinct perspectives on the university system. This casts doubt on comparisons of GURs owing to the diversity of socio-cultural and politico-economic influences [46]. The local realities of institutions are not taken into account [49]. Indeed, one study [52] reveals that the ARWU, THE, and QS rankings produce different geographies of global higher education. Tension is felt between the universities of Europe and the USA and emerging universities in the Asia Pacific. More specifically, the ARWU ranking is more oriented to North America and western Europe, while the QS and THE rankings look more to Anglo-Saxon countries, because Great Britain, Canada, and Australia feature prominently in both [8].
Another issue to consider is GURs' partiality toward universities strong in the hard sciences [49]. The ARWU ranking in particular only includes publications in Nature and Science, which suggests that it does not contemplate other fields like the Humanities and Arts. The ARWU ranking is also the only tool that currently scores universities on faculty members and former students receiving a Nobel Prize. This is its main defect, as Nobel Prizes and Fields Medals reward Science and Mathematics, respectively [46]. Moreover, a combined effect of field and location also appears: the universities in the top 200 for Life Sciences are more frequently located in the EU, those reaching the top 200 in Social Sciences are based in the USA, while those among the top 200 in Physics or Biology are located elsewhere in the world [9].

2.2. UI GreenMetric World University Ranking

In the last decade, different campus sustainability assessments (CSAs) have been proposed on national and regional scales around the globe. The present study employed the UI GreenMetric ranking [53], which was developed at Universitas Indonesia in 2010 and is the most widely used CSA, having been the object of several studies. The oldest study [54] introduces the development and improvement of the UI GreenMetric ranking and evaluates the implementation and results of its 2011 edition, which show that the main prevailing criteria met by many universities are energy and climate change (with a score of about 2500 out of a maximum of 2800); it was followed by a critical review [55]. Another study [14] introduces a critical perspective on university sustainability frameworks through (i) a review of current CSAs and (ii) performing and comparing the results obtained from applying two internationally recognized CSAs (namely, UI GreenMetric and ISCN). One study [56] constructively analyzed the UI GreenMetric ranking to improve and strengthen its method. One work [57] reviews applications of the UI GreenMetric index by focusing on its characteristics and its capability to assess how campus urban morphology can affect universities' sustainability issues. More recently, other works have used the UI GreenMetric ranking to assess sustainability in seven universities from Brazil [58] and in universities from India [59], whereas other authors have proposed a composite indicator using data envelopment analysis (DEA) and the UI GreenMetric [30].
The UI GreenMetric ranking is based on quantitative metrics rather than country-specific sustainability reporting tools. Moreover, it reports sustainability indicators according to a fixed set of criteria, which allows universities' performances (communicated via self-compiled questionnaires and retrieved public data) to be compared within the same ranking [14].
The UI GreenMetric ranking classifies 779 universities from 83 countries and adopts the environmental sustainability concept with its three elements: environmental, economic, and social. These elements are measured by the following six indicators and weights: Setting and Infrastructure (15%), Energy and Climate Change (21%), Waste (18%), Water (10%), Transportation (18%), and Education and Research (18%).

3. Materials and Methods

3.1. Sources of Information

For our study, we used the information that appears on the websites of the four GURs under study [32,33,34,35] and on the UI GreenMetric ranking website [53].
For all five rankings, the universities in each 2018 Top 500 were selected, and a database with 2500 observations was created. For each observation, the following information was obtained: the university's name, its ranking position, and its geographical location; each university's website was consulted for its founding year, its typology (public or private), and whether it was a technology or non-technology university. The geographical coordinates (longitude and latitude) of each university were also obtained [60].

3.2. Methods

First of all, a descriptive analysis was carried out of the positions that universities occupy in the Top 500 of each ranking to obtain their distribution by quartiles. The following were also considered: distribution by region (North America, Europe, Asia, Oceania, Latin America, and Africa); age, distinguishing between universities older and younger than 100 years; and whether universities were public or private and technology or non-technology.
The geographical distribution of universities by quartiles was visualized by locating universities on maps produced with the GIS software QGIS, version 2.18.15 [61].
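As a rough sketch of this descriptive step, the example below assigns invented Top 500 positions to quartiles with pandas and draws a simple longitude/latitude scatter plot as a lightweight stand-in for the QGIS maps; all universities and coordinates are hypothetical.

```python
# Quartile assignment and a simple map-like scatter plot (toy data).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "university": ["U1", "U2", "U3", "U4"],
    "position":   [12, 140, 310, 480],        # position in the Top 500
    "lon":        [-0.13, 4.90, 116.39, -79.00],
    "lat":        [51.51, 52.37, 39.91, -2.90],
})

# Quartiles of the Top 500 by ranking position (Q1 = positions 1-125, ...).
df["quartile"] = pd.cut(df["position"], bins=[0, 125, 250, 375, 500],
                        labels=["Q1", "Q2", "Q3", "Q4"])

for q, g in df.groupby("quartile", observed=True):
    plt.scatter(g["lon"], g["lat"], label=q)
plt.xlabel("Longitude"); plt.ylabel("Latitude")
plt.legend(); plt.title("Top 500 universities by quartile (toy data)")
plt.show()
```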
For each GUR, a crosstab was obtained classifying universities according to whether or not they were included in the UI GreenMetric ranking, along with the region where they were located. Crosstabs were also obtained for universities' age and for their being public and non-technology.
The odds ratio (OR) [9] was used to measure the influence that the geographical location of a university in the Top 500 of a GUR could have on its being in the Top 500 of the UI GreenMetric ranking; that is, the association between belonging to the Top 500 of the UI GreenMetric ranking and a university's geographical location. An OR value of 1 indicates no association between being in the Top 500 of the UI GreenMetric ranking and location. The further this value is from 1, the closer the relationship between the two factors; values less than 1 indicate a negative association between a given geographical location and being in the Top 500 of the UI GreenMetric ranking.
In the same way, OR values were calculated by considering the age of universities (distinguishing between more and less than 100 years old), their being public or private, and their being non-technology or technology.
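The sketch below shows how one such OR can be computed from a 2×2 crosstab, together with an approximate 95% confidence interval based on the standard error of log(OR). The counts are invented for illustration and deliberately chosen to yield an OR of 5; they do not reproduce Table 4.

```python
# Odds ratio from a 2x2 crosstab (invented counts):
# rows    = located in Latin America (yes / no)
# columns = in the Top 500 of the UI GreenMetric ranking (yes / no)
import math

a, b = 6, 6      # Latin America: in GreenMetric Top 500 / not in it
c, d = 88, 440   # other regions: in GreenMetric Top 500 / not in it

or_ = (a * d) / (b * c)                # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR) (Woolf method)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # OR = 5.00
# OR > 1: positive association with being in the GreenMetric Top 500;
# OR = 1: no association; OR < 1: negative association.
```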
The OR values were then cross-checked with a binary logistic regression model. In general, this model quantifies the influence of the explanatory variable (independent variable, regressor, or covariate), which is considered predictive, on the likelihood of being in the Top 500 of the UI GreenMetric ranking (dependent or explained variable). In the present work, only one explanatory variable was used in each binomial logistic regression, whose mathematical expression is as follows:
ln{ P(GM = 1 | x) / P(GM = 0 | x) } = α + β X + ε
where:
GM: takes a value of 1 if a university in the Top 500 of a GUR is also in the Top 500 of the UI GreenMetric ranking, and 0 otherwise.
α: Constant term.
β: Coefficient of the dummy explanatory variable X, which, depending on the model, takes a value of 1 if a university is located in a given region (North America, Europe, Asia, Oceania, or Latin America), is more than 100 years old, is public, or is non-technology, and 0 otherwise.
ε: Random disturbance term.
The OR of the independent variable X is e^β. The β coefficients were estimated by maximum likelihood, and Wald (χ²) contrasts and the associated p-values were obtained for each one. The error levels considered were 1%, 5%, and 10%. The overall model was contrasted with the log-likelihood ratio test.
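A minimal sketch of this univariate logit is given below, assuming simulated data and Python's statsmodels package rather than SPSS. It shows how exp(β) recovers the OR of the dummy regressor and where the Wald p-values and the likelihood ratio test of the overall model come from.

```python
# Univariate binomial logit: y = 1 if the university is also in the
# UI GreenMetric Top 500; x = 1 for the region/typology being tested.
# The data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=500)                   # dummy explanatory variable
y = rng.binomial(1, np.where(x == 1, 0.35, 0.10))  # GreenMetric membership

X = sm.add_constant(x)                   # model: alpha + beta * X
res = sm.Logit(y, X).fit(disp=False)     # maximum likelihood estimation

print(res.params)               # [alpha, beta]
print(np.exp(res.params[1]))    # OR of X = e**beta
print(res.pvalues)              # Wald test p-values
print(res.llr, res.llr_pvalue)  # likelihood ratio test of the overall model
```

For a university in the tested group, exp(β) gives how many times higher its odds of appearing in the UI GreenMetric Top 500 are compared with the reference group, which is exactly the kind of OR reported in Table 4.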
The statistical analyses were performed using the SPSS statistical package.

4. Results

4.1. Descriptive Analysis

The geographical distribution of the universities in the Top 500 of each ranking (Table 1) showed that, for all the GURs, most universities were located in Europe: 254 (THE), 212 (QS), 192 (ARWU), and 192 (Webometrics), followed by North America: 183 (Webometrics), 156 (ARWU), 142 (THE), and 112 (QS). Nevertheless, Asia outperformed North America in QS with 117 universities. The picture differed considerably for the universities in the Top 500 of the UI GreenMetric ranking, where most universities were located in Asia (200), followed by Europe (154) and Latin America (71), all ahead of North America (67).
Across rankings, the largest differences among regions were found for the THE ranking, in which 50.6% of universities were located in Europe, followed by North America with 28.6%, Asia with 12.2%, and Oceania with 7.2%. Conversely, the smallest geographical differences appeared in the ARWU ranking, with 38.4% of universities in Europe, followed by North America with 31.2%, Asia with 21.6%, and Oceania with 5.8%.
By region, North American universities were most present in the Top 500 of the Webometrics ranking (36.6%) but least represented in the Top 500 of the UI GreenMetric ranking (13.4%). European universities were most present in the Top 500 of the THE ranking (50.6%) but less so in the Top 500 of the UI GreenMetric ranking (30.8%). Asia had 40.0% of the universities in the Top 500 of the UI GreenMetric ranking, but only 12.2% in the Top 500 of the THE ranking. The presence of universities from Oceania ranged from 7.2% in the THE ranking to 1% in the UI GreenMetric ranking, while the presence of Latin American universities ranged from 14.2% in the UI GreenMetric ranking to 0.4% in the THE ranking.
Nonetheless, this geographic distribution changed when only the composition of the first quartile (Q1) of the Top 500 of the five rankings was studied. In the GURs, most Q1 universities came from North America according to Webometrics (72), ARWU (64), and THE (53); Europe led only in the QS ranking (47). The composition also changed in the Top 500 of the UI GreenMetric ranking, where most Q1 universities were located in Europe (56), followed by Asia (37), North America (20), and Latin America (10).
When studying how the number of universities evolved across quartiles, this number clearly tended to fall from Q1 to Q4 for North America in the Top 500 of all five rankings, and for Europe in the Top 500 of the UI GreenMetric ranking. The opposite was observed for Asia, whose number of universities tended to rise from Q1 to Q4, most markedly in the UI GreenMetric ranking. This means that although Europe predominated in the Top 500 of GURs, the first positions went to universities from North America. The same applied to Asia in the UI GreenMetric ranking: despite having more sustainable universities, the first positions went to European universities.
The maps of universities per quartile according to the ARWU ranking (Figure 1a) and the UI GreenMetric ranking (Figure 1b) confirm that the universities in ARWU formed four regional clusters in the core of the world economy: central Europe, North America, eastern Asia, and Australia. Conversely, in the Top 500 of the UI GreenMetric ranking, this geographic concentration was less marked and universities were more scattered worldwide, with more universities appearing throughout Asia and Latin America.
As Europe is the region with the greatest equilibrium between both ranking types, Figure 2 details the distribution of universities per country in the Top 500 of the ARWU ranking (Figure 2a) and of the UI GreenMetric ranking (Figure 2b). The best positions occupied by universities in the ARWU ranking concentrate basically in the UK, Germany, and the Netherlands. In the UI GreenMetric ranking, this composition changes in the UK and virtually disappears in Germany and the Netherlands. Conversely, universities in Spain and northern Italy are practically absent from the ARWU ranking but noticeably increase in the UI GreenMetric ranking, where they occupy the first quartile.
As for the distribution of universities by typology (Table 2), in the Top 500 of the four GURs, around 80% of universities were public (78.6–85.2%). Moreover, most were non-technology (81.8–87.8%). Most of the universities in the Top 500 of the four GURs were more than 100 years old, with more universities of this age in ARWU (65.4%) than in THE (60.2%) and QS (59.0%).
In the Top 500 of the UI GreenMetric ranking, most universities were likewise non-technology (83.8%) and public (76.4%). In contrast to the GURs, however, only 29.6% of its universities were more than 100 years old.
The number of universities in the Top 500 of the four GURs that were also included in the Top 500 of the UI GreenMetric ranking was similar across GURs: between 64 (12.8%) in Webometrics and 57 (11.6%) in ARWU (Table 3). These percentages were somewhat higher for European universities, ranging from 16.7% in Webometrics to 12.3% in QS, which could be favored by their greater presence in the Top 500 of GURs. Asia also exceeded the mean, but only in QS, with 13.7%. Latin America stood out most in the UI GreenMetric ranking: the percentage of its universities also in the Top 500 of the UI GreenMetric ranking ranged from 25.0% in QS to 50.0% in THE.
By typology, the presence of GUR Top 500 universities in the Top 500 of the UI GreenMetric ranking was slightly higher among universities younger than 100 years (13.1% in THE to 16.0% in Webometrics) than among those older than 100 years (10.2% in ARWU to 11.1% in Webometrics), and among public universities (12.7% in THE to 13.8% in Webometrics) versus private ones (5.4% in THE to 7.7% in QS). The same can be stated of non-technology universities (11.5–13.1%) versus technology ones (6.7–13.1%).

4.2. Statistical Analysis

The statistical analysis allowed us to determine whether these differences in appearing in the Top 500 of the UI GreenMetric ranking, per region and university typology across the four GURs, were statistically significant (Table 4). The Latin American universities in the Top 500 of any GUR were more likely to be in the Top 500 of the UI GreenMetric ranking than those from other geographical locations, and this association was statistically significant at the 99% and 90% levels in all cases except THE. Indeed, the Latin American universities in the Top 500 of both ARWU and Webometrics were 5-fold more likely to be in the Top 500 of the UI GreenMetric ranking than those from other regions, with an error of 1%. They were followed, far behind, by European universities, which were 1.7-fold more likely to belong to the Top 500 of the UI GreenMetric ranking than those from the remaining regions in the ARWU, Webometrics, and THE rankings, with an error of between 1% and 5%. Conversely, we found a negative association between North American universities and belonging to the Top 500 of the UI GreenMetric ranking, as their OR was always below 1, although this was only significant at 5% in the Webometrics ranking. For the remaining continents, Asia and Oceania, no significant association appeared between being in the Top 500 of any GUR and being in the Top 500 of the UI GreenMetric ranking.
Although the ORs by age were below 1, indicating that universities older than 100 years were less likely to appear in the Top 500 of the UI GreenMetric ranking than younger universities, these differences were not significant. Conversely, the public universities in the Top 500 of GURs were about twice as likely to be in the Top 500 of the UI GreenMetric ranking as private universities, which was significant at 90% in both ARWU and THE. Finally, no significant differences appeared between technology and non-technology universities, except in QS, where non-technology universities were twice as likely to be in the Top 500 of the UI GreenMetric ranking as technology ones, with an error of 10%.

5. Discussion

Measuring the quality and scientific performance of universities is a very difficult task, which means that creating excellent and completely objective rankings is practically impossible [47]. This may be because science is abstract and much more difficult to measure than any physical product, which has led to studies comparing the positions that universities occupy in rankings. Some such studies have found a high correlation among the positions occupied by universities in ARWU, QS, and THE [2,3,7,8], and a low correlation of these three GURs with Webometrics [3]. However, this work analyzed the geographical distribution of the universities in the Top 500 of the four studied GURs and evidences a different occupancy by region, whether per quartile or in total. This is because, although the four GURs aim to measure universities' academic performance, they measure it differently, as seen in the Background section, which confuses society about what GURs actually measure. It is also hard to measure universities' quality using only the few figures that GURs are based on. Former studies have questioned their reliability and confirm that GURs' indicators are based mainly on research, leaving teaching, community service, and environmental commitments to one side, which has brought about inconsistencies in classification systems. To all this, we must add that universities today are too concerned about improving the positions they occupy in GURs, and many are pressured to increase their number of publications to improve their standing [7,46,49], possibly neglecting other important objectives, like serving society and being committed to sustainability. A conflict arises given the need to deal simultaneously with national interests and international publications [62]. This can have the reverse effect, with universities serving rankings and neglecting their other missions. Those in charge of university policies, and academicians, look outward to meet external demands rather than maintaining local values through education [63]. Rankings are not a purpose per se [49], and it is important to reflect on the profound impact that GURs may have as a form of governance over academicians [62].
Yet despite the geographical differences found among the four GURs, these geographical outcomes corroborate those of other works [7,8,9,52], in which most universities were found in Europe and North America, in this order [2,7]. The rankings in which European universities predominate over North American ones are THE and QS. This may be because the ARWU ranking is Chinese, which might make it less biased in favor of Anglo-American culture, whereas the THE and QS rankings are English. Initially, THE and QS came about as a single ranking, as a response from Europe to ARWU; in 2009, they split into the two independent rankings that continue today [49]. The THE and QS rankings are thus similar to one another because they use expert surveys as well as databases [6], unlike the ARWU ranking, which employs only databases. Like ARWU, Webometrics strikes a better balance between North America and Europe, possibly because of the open-access databases it uses and because North American universities care more about their presence on public websites than universities in other regions.
In any case, a hegemony appears: the world regions with good economic development generally house the universities with large research budgets whose staff know English better, specifically central Europe, North America, eastern Asia, and Australia. This finding coincides with another work [56], while South America and Africa do not appear. Prestigious universities with long histories tend to be classified more highly. Thus, English-speaking and western universities occupy better positions than Asian and African universities, which reflects the possibility that those surveyed tend to think that using English is always better than using other languages [46].
Nonetheless, it would be interesting to analyze the evolution of this geographical distribution over time, as a previous work did [36]. Its authors found that, in 2018 compared to 2010, the number of universities located in the USA, the UK, and France positioned in the ARWU, THE, and QS rankings fell, while the number of universities from Singapore and China rose.
Although regional differences were verified among the four GURs, the map completely changes when we analyze the geographical distribution of the universities in the Top 500 of the UI GreenMetric ranking, which is a novelty of the present work. Basically, the most sustainable universities are scattered around the world and appear in new, less economically developed places like Asia and Latin America. The present academic Anglo-American hegemony in GURs can be considered threatened by a potential shift toward eastern Asia and a proliferation of universities worldwide if sustainability is taken into account. Only about 60 universities of the Top 500 of the four GURs are found in the Top 500 of the UI GreenMetric ranking. In other words, the number of sustainable universities from North America and Oceania drops drastically, as does the number of sustainable European universities, though to a much lesser extent. All this favors Latin American and Asian universities, whose presence substantially increases in the UI GreenMetric ranking. Of all the world regions, Europe thus strikes the best balance between the two classification types, GURs and the UI GreenMetric ranking; that is, between academic performance and sustainability. Nonetheless, the country-level detail on the map of Europe shows that the predominant universities in the Top 500 of ARWU are from the UK, Germany, and the Netherlands, as opposed to more southern countries, a pattern that completely changes in the UI GreenMetric ranking. Once again, differences appear between northern and southern Europe owing to the economic power, history, and culture of their nations. This evidences GURs' limitations in measuring university performance, as they completely neglect the role of environmental sustainability. These indices should be adapted to the world's current requirements.
Besides geographical location, we also observed that universities more than 100 years old predominated in GURs, especially in ARWU and Webometrics, and slightly less in THE and QS. This is in keeping with previous works [46,52], as these last two rankings had more emerging universities because the four GURs employ different methodologies. However, in the UI GreenMetric ranking, younger universities set up less than 100 years ago predominate. This could be related to younger universities being located in Asia, as these universities are also the majority in the UI GreenMetric ranking according to Table 1, and to universities' age being closely related to Anglo-American academic hegemony. Perhaps older universities have more rigid structures that make it hard for their campuses to adapt to sustainability, whereas emerging universities have been more committed to the natural environment and have adapted to society's new requirements. This was corroborated when we analyzed, by age, the number of universities in each GUR that are also found in the UI GreenMetric ranking: more universities younger than 100 years appeared than older ones. This could be because, among the universities that performed better academically, the younger ones were more sustainable, having adapted their premises to environmental requirements.
For public versus private universities, we observed a certain effect on sustainability. Almost twice as many public universities present in the four GURs were also found in the UI GreenMetric ranking compared with private universities, a finding that was significant in the ARWU and THE rankings. The public universities that perform better academically may have more means to face climate change than private ones, or are perhaps not as concerned about their institution's economic profitability as some private universities may be. This is very important because it indicates the relevant role that a country's government may play in tackling climate change via its universities.
Regarding specialties, as non-technology universities predominated in the four GURs, the specialties of universities could bias the composition of these GURs [9]. However, this was also the case in the UI GreenMetric ranking. When the GURs were crossed with the UI GreenMetric ranking, more sustainable non-technology universities appeared in QS, which could be due to the different spatial structures of these two university typologies. Thus, using "clusters of different university typologies" could be a recommendation for preparing environmental sustainability rankings [14].
Notwithstanding, we stress that the present work analyzed environmental sustainability in universities via the UI GreenMetric ranking, which considers six quantitative indicators. It would also be relevant to assess the university's relationship with the rest of society in sustainability terms and how sustainability is integrated across all university community members and strata [64].

6. Conclusions

GURs intend to measure the performance of universities worldwide. As in other previous works, we confirm a different composition of the universities in the Top 500 of each GUR, which derives from their distinct indicators and methodologies, as their different geographic distributions demonstrate. Although universities from Europe and North America are the most frequent in the four GURs, European universities predominate in the overall Top 500 of all four, while North American ones predominate in the first quartiles of ARWU, THE, and Webometrics.
Former studies have doubted GURs' reliability and confirmed that their indicators are based fundamentally on research, leaving teaching, community service, and commitment to the natural environment to one side, which has led to inconsistencies in classification systems. A university poorly classified in GURs can be excellent in education or in other qualities that contribute to society compared with other universities included in GURs. Nowadays, climate change is a world threat, and universities must be committed to the climate and promote actions that help to protect the natural environment; they thus have to include this objective in their campus actions.
This study is the first to relate GURs to the UI GreenMetric ranking to examine sustainability on the campuses of those universities that perform better academically. The results reveal that the universities included in GURs are not always the best in sustainability matters: when sustainability was evaluated on campuses, those located in Asia appeared most often in the Top 500 of the UI GreenMetric ranking, followed by European ones. This indicates that a low association exists between universities' academic performance and their institutional commitment to the natural environment.
Only about 12% of the universities in the Top 500 of the four GURs are also in the Top 500 of the UI GreenMetric ranking. Europe is the region that best combines top academic performance with sustainable campuses, with a 1.5-fold higher probability of its GUR Top 500 universities appearing in the Top 500 of the UI GreenMetric ranking than the other continents. Although very few Latin American universities are among the best-performing ones in academic terms, proportionally more of them have sustainable campuses than in other regions, with a 5-fold higher probability of being in the Top 500 of the UI GreenMetric ranking.
By typology, differences in performance also appear in each GUR in relation to its interaction with the UI GreenMetric ranking. Although younger universities predominate in the Top 500 of the UI GreenMetric ranking, universities' age in GURs does not emerge as a statistically significant factor for being more or less sustainable. Conversely, being a public university positively and significantly discriminates being in the Top 500 of the UI GreenMetric ranking versus private universities, but only in two GURs: ARWU and THE. Only in the QS ranking is the probability of the non-technology universities in its Top 500 being in the Top 500 of the UI GreenMetric ranking double that of technology ones.
For all these reasons, GURs should be studied and updated in light of the new challenges that society faces. To this end, we propose including other indicators in GURs, apart from those already established, or even creating other integral measurement rankings related to environmental sustainability and social inclusion, in the same way as companies measure CSR across the economic, social, and environmental areas. Universities would thus bear in mind that their objectives should include not only research and education but also socio-environmental improvement. This would contribute to advances not only in research and education but also in social equality and an improved environment, while the governments of different countries would be able to play a key role in this purpose through their universities.
In this way, those universities that perform well academically and increase their commitment to cushioning climate change can set an example for other universities to do the same. One way would be to improve educational quality through good teaching practices while generating environmental awareness and creating sustainable spaces. Universities would thus drive a sustainability culture in society by becoming models of sustainable development, implementing green policies on their campuses, and prioritizing indicators of energy efficiency, waste management, preserving/saving water, transport, and research to achieve integral long-term sustainability.

Author Contributions

This article was originally conceived and designed by M.M.-S., N.G., and J.M.O. Conceptualization, N.G.; Data Curation, M.M.-S.; Formal Analysis, M.M.-S.; Investigation, M.M.-S.; Methodology, N.G.; Resources, M.M.-S.; Software, M.M.-S. and J.M.O.; Supervision, N.G.; Writing—Original Draft Preparation, N.G.; Writing—Review & Editing, N.G. and J.M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Geographical distribution per quartiles of the Top 500 universities in the Academic Ranking of World Universities (ARWU) ranking (a) and the Universitas Indonesia (UI) GreenMetric ranking (b).
Figure 2. Geographical distribution per quartiles of the Top 500 European universities in the ARWU ranking (a) and the UI GreenMetric ranking (b).
Table 1. Geographic distribution of universities per quartile in the Top 500 of each ranking in 2018.

| Ranking | Quartile | North America | Europe | Asia | Oceania | Latin America | Africa | Total |
|---|---|---|---|---|---|---|---|---|
| ARWU | Q1 | 64 | 57 | 21 | 8 | 0 | 0 | 150 |
| | Q2 | 42 | 63 | 32 | 9 | 3 | 1 | 150 |
| | Q3 | 28 | 35 | 24 | 8 | 4 | 1 | 100 |
| | Q4 | 22 | 37 | 31 | 4 | 3 | 3 | 100 |
| | Total | 156 (31.2%) | 192 (38.4%) | 108 (21.6%) | 29 (5.8%) | 10 (2%) | 5 (1%) | 500 |
| QS | Q1 | 37 | 47 | 29 | 9 | 3 | 0 | 125 |
| | Q2 | 30 | 60 | 20 | 10 | 4 | 1 | 125 |
| | Q3 | 17 | 58 | 32 | 10 | 6 | 2 | 125 |
| | Q4 | 28 | 47 | 36 | 5 | 7 | 2 | 125 |
| | Total | 112 (22.4%) | 212 (42.4%) | 117 (23.4%) | 34 (6.8%) | 20 (4%) | 5 (1%) | 500 |
| Webometrics | Q1 | 72 | 32 | 13 | 6 | 2 | 0 | 125 |
| | Q2 | 45 | 54 | 20 | 4 | 2 | 0 | 125 |
| | Q3 | 31 | 58 | 23 | 9 | 3 | 1 | 125 |
| | Q4 | 35 | 48 | 27 | 7 | 5 | 3 | 125 |
| | Total | 183 (36.6%) | 192 (38.4%) | 83 (16.6%) | 26 (5.2%) | 12 (2.4%) | 4 (0.8%) | 500 |
| THE | Q1 | 53 | 51 | 14 | 7 | 0 | 0 | 125 |
| | Q2 | 26 | 77 | 14 | 7 | 0 | 1 | 125 |
| | Q3 | 46 | 72 | 16 | 14 | 1 | 1 | 150 |
| | Q4 | 17 | 54 | 17 | 8 | 1 | 3 | 100 |
| | Total | 142 (28.6%) | 254 (50.6%) | 61 (12.2%) | 36 (7.2%) | 2 (0.4%) | 5 (1%) | 500 |
| UI GreenMetric | Q1 | 20 | 56 | 37 | 2 | 10 | 0 | 125 |
| | Q2 | 22 | 34 | 48 | 1 | 17 | 3 | 125 |
| | Q3 | 10 | 36 | 51 | 1 | 27 | 0 | 125 |
| | Q4 | 15 | 28 | 64 | 1 | 17 | 0 | 125 |
| | Total | 67 (13.4%) | 154 (30.8%) | 200 (40.0%) | 5 (1%) | 71 (14.2%) | 3 (0.6%) | 500 |

Note: Q1 (0–25%); Q2 (25–50%); Q3 (50–75%); Q4 (75–100%).
Table 2. Distribution of universities per quartile in the Top 500 of each ranking in 2018, according to their age, public status, and non-technology status.

| Ranking | Quartile | >100 years | Public | Non-Technology | Total |
|---|---|---|---|---|---|
| ARWU | Q1 | 121 | 118 | 116 | 150 |
| | Q2 | 93 | 126 | 117 | 150 |
| | Q3 | 60 | 67 | 92 | 100 |
| | Q4 | 53 | 82 | 84 | 100 |
| | Total | 327 (65.4%) | 393 (78.6%) | 409 (81.8%) | 500 |
| QS | Q1 | 101 | 100 | 105 | 125 |
| | Q2 | 79 | 107 | 110 | 125 |
| | Q3 | 54 | 112 | 105 | 125 |
| | Q4 | 61 | 103 | 91 | 125 |
| | Total | 295 (59.0%) | 422 (84.4%) | 411 (82.2%) | 500 |
| Webometrics | Q1 | 108 | 96 | 116 | 125 |
| | Q2 | 85 | 107 | 110 | 125 |
| | Q3 | 72 | 114 | 108 | 125 |
| | Q4 | 60 | 102 | 105 | 125 |
| | Total | 325 (65.0%) | 419 (83.8%) | 439 (87.8%) | 500 |
| THE | Q1 | 103 | 94 | 113 | 125 |
| | Q2 | 71 | 112 | 112 | 125 |
| | Q3 | 82 | 134 | 96 | 150 |
| | Q4 | 45 | 86 | 113 | 100 |
| | Total | 301 (60.2%) | 426 (85.2%) | 434 (86.8%) | 500 |
| UI GreenMetric | Q1 | 49 | 101 | 107 | 125 |
| | Q2 | 38 | 86 | 104 | 125 |
| | Q3 | 40 | 97 | 101 | 125 |
| | Q4 | 21 | 98 | 107 | 125 |
| | Total | 148 (29.6%) | 382 (76.4%) | 419 (83.8%) | 500 |

Note: Q1 (0–25%); Q2 (25–50%); Q3 (50–75%); Q4 (75–100%).
Table 3. Number of universities in the Top 500 of each Global University Ranking (GUR) according to their presence in the Top 500 of the UI GreenMetric ranking, by region, age, public status, and technology status, in 2018.

| | ARWU: Yes | ARWU: No | QS: Yes | QS: No | Webometrics: Yes | Webometrics: No | THE: Yes | THE: No |
|---|---|---|---|---|---|---|---|---|
| North America | 14 (9.0%) | 142 | 12 (10.7%) | 100 | 16 (8.7%) | 167 | 13 (9.2%) | 129 |
| Europe | 28 (14.6%) | 164 | 26 (12.3%) | 186 | 32 (16.7%) | 160 | 36 (14.2%) | 218 |
| Asia | 10 (9.3%) | 98 | 16 (13.7%) | 101 | 10 (12.0%) | 73 | 7 (11.5%) | 54 |
| Oceania | 1 (3.4%) | 28 | 1 (2.9%) | 33 | 1 (3.8%) | 25 | 1 (2.8%) | 35 |
| Latin America | 4 (40.0%) | 6 | 5 (25.0%) | 15 | 5 (41.7%) | 7 | 1 (50.0%) | 1 |
| Africa | 0 | 5 | 0 | 5 | 0 | 4 | 0 | 5 |
| >100 years | 33 (10.2%) | 291 | 32 (10.8%) | 263 | 36 (11.1%) | 289 | 32 (10.6%) | 269 |
| <100 years | 24 (13.6%) | 152 | 28 (13.7%) | 177 | 28 (16.0%) | 147 | 26 (13.1%) | 173 |
| Public | 50 (12.7%) | 343 | 54 (12.8%) | 368 | 58 (13.8%) | 361 | 54 (12.7%) | 372 |
| Private | 7 (6.5%) | 100 | 6 (7.7%) | 72 | 6 (7.4%) | 75 | 4 (5.4%) | 70 |
| Non-technology | 47 (11.5%) | 362 | 54 (13.1%) | 357 | 56 (12.8%) | 383 | 51 (11.8%) | 383 |
| Technology | 10 (11.0%) | 81 | 6 (6.7%) | 83 | 8 (13.1%) | 53 | 7 (10.6%) | 59 |
| Total | 57 (11.4%) | 443 | 60 (12.0%) | 440 | 64 (12.8%) | 436 | 58 (11.6%) | 442 |

Note: Yes/No indicates presence/absence in the Top 500 of the UI GreenMetric ranking; percentages are the share of each group that also appears in the UI GreenMetric Top 500.
Table 4. Odds ratios (ORs) for being in the Top 500 of the UI GreenMetric ranking for universities in each GUR, by region (relative to the rest of the world) and by characteristic (age, public, non-technology), in 2018.

| OR ($e^{\beta_i}$) | ARWU | QS | Webometrics | THE |
|---|---|---|---|---|
| North America | 0.690 | 0.850 | 0.537 ** | 0.701 |
| Europe | 1.643 * | 1.044 | 1.725 ** | 1.681 * |
| Asia | 0.749 | 1.221 | 0.921 | 0.986 |
| Oceania | 0.265 | 0.209 | 0.261 | 0.204 |
| Latin America | 5.497 *** | 2.576 * | 5.194 *** | 7.737 |
| Older | 0.718 | 0.769 | 0.654 | 0.787 |
| Public | 2.082 * | 1.761 | 2.008 | 2.540 * |
| Non-technology | 1.052 | 2.092 * | 0.969 | 0.963 |

Note: ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively.
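The ORs in Table 4 can be recovered directly from the Yes/No counts in Table 3: for a logistic regression with a single binary predictor (e.g., being Latin American versus the rest of the world), the estimated OR $e^{\beta_i}$ equals the cross-product ratio of the corresponding 2×2 table. A minimal sketch (the function name is ours, for illustration):

```python
# Odds ratio from a 2x2 table: with one binary predictor, the
# logistic-regression OR e^beta equals this cross-product ratio exactly.
def odds_ratio(yes_group: int, no_group: int, yes_rest: int, no_rest: int) -> float:
    """OR of being in the UI GreenMetric Top 500 for a group vs. the rest."""
    return (yes_group / no_group) / (yes_rest / no_rest)

# Latin American universities in the ARWU Top 500 (counts from Table 3):
# 4 are in the UI GreenMetric Top 500 and 6 are not, versus 53 and 437
# for the remaining ARWU universities (57 - 4 and 443 - 6).
print(round(odds_ratio(4, 6, 57 - 4, 443 - 6), 3))  # -> 5.497, as in Table 4
```

The same computation reproduces, for example, the Europe/ARWU entry: (28/164)/(29/279) ≈ 1.643.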
