1. Introduction
Recently, college rankings have received significant attention from both lay people and experts. They started in the early 20th century as benign academic exercises [1]. Today, they fuel regional, national, and global competition for better students, better faculty, and greater wealth. For supporters, rankings make the concern for excellence in higher education publicly visible and active. Supporters argue that rankings encourage universities and academic programs to perform better and help combat the institutional stagnation that could develop in their absence [2,3]. They also argue that rankings meet a widespread demand for publicly accessible comparative information about academic institutions and programs. Students and families consult them when making college decisions. Sponsors of faculty research consider ranking-based reputations in funding decisions and research partnerships. Policymakers consult them when allocating resources among educational institutions. Academic leaders consult them for administrative decision-making. An increasingly consumerist attitude among students, parents, administrators, and employers toward higher education in a knowledge-based economy might be the reason for the popular rise of college rankings, argues Hazelkorn [1]. Highly ranked institutions and programs do not hesitate to use their rankings to market themselves to prospective students and parents, employers, policymakers, and funders.
However, college rankings can be misleading. The literature is replete with criticisms arguing that rankings use diverse and weakly defined methods and indicators, which are often weighted differently in an ad hoc manner [4,5]. Rankings have the advantage of extreme simplicity but also the disadvantage of revealing very little, because ranks do not disclose real differences in quality. In fact, all rankings exaggerate quality differences because of their association with winning [4]. Normally, what matters in a contest is who came in first, not how much better the winner was than the loser.
To make matters worse, by placing all institutions on the same scale, rankings give readers the false impression that these institutions are trying to win the same competition with the same goals. Most academic institutions provide many services, from teaching and preparing students for the market to generating knowledge through research and scholarship. However well they are done, rankings undermine the multiplicity of services academic institutions provide for the public good. Merely including numerous factors in a ranking does not capture what it takes to run a successful academic institution. College rankings have been described as normalizing “one kind of higher education institution with one set of institutional qualities and purposes, and in doing so strengthening its authority at the expense of all other kinds of institutions and all other qualities and purposes” [6]. Indeed, higher education is too complex, and consumers’ individual preferences are too diverse, to be fairly judged by a singular ranking scale [7].
In the absence of a consensus on the exact definition of “academic quality” [7,8], a ranking system can indirectly and wrongly influence students and parents by emphasizing some factors over others in the way it defines quality. Factors left out of a ranking may consequently appear less important. Some important factors, such as teaching quality and learning engagement and experience, are hard to define and quantify. Other factors that can affect prospective students and their parents, such as geography, safety, comfort, and convenience, are especially subjective. Still other factors, such as scholarships and cocurricular and extracurricular activities, are quantifiable but idiosyncratic to students and programs. Yet the reassuring appearance of objectivity that rankings project often leads students and parents away from the factors they care about and toward factors less central to their goals when making college decisions.
In response to the controversies they raise, ranking systems frequently change their methods and indicators, producing significantly different results from one year to the next that reflect methodological adjustments rather than any true change in quality at the institutions. While changes in ranking methodologies show a responsiveness to criticism, they also show a lack of intellectual rigor in the process and the product. For example, one study revealed that, by adjusting the weighting of different criteria, the same data can produce different rankings for programs. The authors of the study argued that the data used in rankings are useful for both institutions and students, but the weightings of the data reflect nothing more than rankers’ preferences [9]. In general, college ranking systems are arbitrary and reflective of elitism, because the same wealthy schools always seem to get the top spots. To make matters worse, there have been many reports over the years of schools deliberately “fudging” their data or taking non-quality-related steps to increase their ranks [10].
Studies of college rankings are more common at the institutional level. They focus on the U.S. News & World Report’s (USNWR’s) college rankings, the Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University, the Quacquarelli Symonds (QS) World University Rankings, and the Times Higher Education Supplement (THES) international ranking of universities. In contrast, studies on the rankings of professional programs such as architecture are rare. With only one published study [11], the rankings of professional architecture programs are seriously understudied. Therefore, among many other things, we do not know much about the kind and quality of data these rankings use, the methods they use to collect the data, the ways they use the data to determine rankings, and the impact these rankings have on programs and public opinion.
To fill in the gaps, this paper focuses on the rankings of professional architecture programs produced by DesignIntelligence (DI) of the Design Futures Council. Section 2 explains the process of DI rankings. Section 3 presents the apparent advantages and limitations of these rankings. Section 4 reports analytic studies on DI rankings, exploring some common issues of rankings. Section 5 explains the impacts of DI rankings on professional architecture programs and on public opinion about these programs. Section 6 makes some suggestions to improve the process of ranking professional architecture programs. Finally, Section 7 presents the conclusions.
2. DI Rankings
DI produces the most prominent rankings of professional architecture programs. Only a few of the programs ranked by DI are located outside the USA, perhaps because of their membership in the Association of Collegiate Schools of Architecture (ACSA). In recent years, DI has shifted away from listing “America’s Best Architecture and Design Schools” toward “America’s Most Hired From” and “Most Admired Design Schools”. DI’s webpage notes that the shift recognizes that a BEST school does not exist, because one’s needs are unique and one’s view of “what is best” is subjective (https://www.di-ratings.com/, accessed on 3 June 2022).
DI ranks undergraduate and graduate professional architecture degree programs separately. For ranking/rating purposes, DI uses web-based questionnaires to collect inputs and opinions on architecture programs from hiring professionals, program administrators, and students. The topics emphasized in these surveys fall into five separate categories: Output of Institution, Outcomes from Alumni, Learning Environments, Relevance, and Distinctions. The details of the three DI survey questionnaires are given below.
2.1. Professionals’ Survey
For this survey, hiring/supervising professionals within firms are invited to participate. The survey asks professionals to rate the importance of several attributes for a new architecture graduate entering the workplace. These attributes include students’ ability to collaborate effectively; ability to positively influence others; adaptability/flexibility; comfort when interfacing with outside parties (client, engineer, constructor, and consultant); committed work ethic; effective interpersonal skills; emotional intelligence; and empathy. They are asked to use 1 for not important, 2 for slightly important, 3 for moderately important, 4 for important, and 5 for very important.
Professionals are also asked to rate the importance of several factors considered in hiring decisions for a new architecture graduate entering the workplace. These factors include GPA, design excellence, research skills, school attended, study abroad experience, constructability focus, adequate understanding of the professional services business structure, knowledge of sustainable design, technology adoption, design for health, and previous work experience. They are asked to use the same ratings as above.
Additionally, professionals are asked to indicate if their firm has an active educational placement/work study program with any design programs. If they do, they are asked to rank the top three programs.
Furthermore, they are asked to identify the top three “most hired from” programs, starting with the first. For each program, the DI surveys ask two sets of questions to further identify student outcomes of professional significance. The first set asks respondents to name the programs from which the greatest number of students were hired in the last 5 years and to rate the quality of the student portfolios and the professional readiness of recent hires from these programs. For the last two questions of the set, raters are asked to use 1 for extremely dissatisfactory, 2 for less than satisfactory, 3 for satisfactory, 4 for highly satisfactory, and 5 for extraordinary.
The second set of questions asks respondents to rate the quality of the program from which the greatest number of students are hired in terms of how well it prepares students in communication and presentation skills; community involvement; construction materials, means, and methods; design technologies; design theory and practice; engineering fundamentals; global issues/international practice; interdisciplinary studies; planning/project methodologies; practice management; research methodologies; sustainability/healthy design; and understanding the impact of urbanization on design. For this set, they are asked to use 1 for poor, 2 for fair, 3 for good, 4 for very good, and 5 for excellent.
To end the survey, professionals are asked to list up to five programs and to describe the unique distinctions of each program in 25 words or fewer. Here, the survey lists some 141 programs, with the option to add “other”.
2.2. Deans’ Survey
Only one response per program is allowed for this survey, which can be completed by the dean or a delegate of the dean. The survey starts with questions asking whether a school’s undergraduate program, graduate program, or both are accredited by the National Architectural Accrediting Board (NAAB). It then asks the administrators to provide the average number of graduates from each accredited undergraduate or graduate architecture program during the last three years.
These initial questions are followed by questions that ask administrators to list peer-reviewed papers and books published by a program’s faculty, recognition awards received by the faculty and the program, and product innovations from the program in the past 12 months. They are also asked to list research projects by faculty and students in the past 24 months. Additionally, they are asked to provide the average percentages of program alumni who have become entrepreneurs (self-employed, created a start-up, etc.); work in leadership/management roles in design-related and non-design-related organizations; and were employed or went on to further their education upon graduation in the last 10 years. Moreover, they are asked to provide the top three additional accomplishments of graduates, with statistics when possible.
Then, the survey includes a few binary (yes/no) questions on learning environments. These questions ask whether a program offers access to a fab-lab and software; studio facilities with 24-hr access; dedicated studio space for each student; immersion location opportunities (abroad, rural, and urban); and hybrid learning methods (physical, virtual, and hybrid). They also ask whether a program has an active educational placement/work study program with outside firms/organizations, has dedicated research faculty/programming, and includes courses focused on sustainable/resilient design in the program’s core requirements. Additionally, they ask whether a program emphasizes sustainable resilient design, includes socially focused design courses (e.g., design for equity) in the core requirements, emphasizes socially focused design (design for equity), and promotes interdisciplinary learning (core classes from other disciplines involving the built environment) and transdisciplinary learning (core classes from other disciplines NOT involving the built environment).
Finally, deans are asked to note the top five significant changes made to architecture courses in the past three years out of 13 options provided, and to identify five distinguishing features of their programs in 25 words or fewer each.
2.3. Students’ Survey
For this survey, current undergraduate and graduate students and alumni who graduated within the last three years are invited to participate. The survey starts with questions asking for the college or university where the student is currently enrolled or from which the student recently graduated. The survey also asks about the student’s plan after graduation. The options include pursuing an advanced degree in architecture, working in a private practice, working for a corporation, working in the government, working in academia, self-employment, working for a nonprofit, working in a field other than architecture, or other. Additionally, the survey asks the student to describe the three most important attributes of an employer in 25 words or fewer. Moreover, the survey asks whether the student believes that they will be well-prepared for working in the profession upon graduation. Here, the student is asked to use 1 for unprepared, 2 for unsure, 3 for hopefully prepared, 4 for prepared, and 5 for well-prepared.
After the initial questions, students are asked to rate the quality of their program in terms of how well it is preparing them or has prepared them in communication and presentation skills; community involvement; construction materials, means, and methods; design technologies; design theory and practice; engineering fundamentals; global issues/international practice; interdisciplinary studies; planning/project methodologies; practice management; research methodologies; sustainability/healthy design; and understanding the impact of urbanization on design. They are asked to use 1 for poor, 2 for fair, 3 for good, 4 for very good, and 5 for excellent.
Then, students are asked the same binary (yes/no) questions on learning environments that the deans were asked to answer for undergraduate and graduate programs. As in the deans’ survey, these questions ask whether a program offers access to a fab-lab and software; studio facilities with 24-hr access; dedicated studio space for each student; immersion location opportunities (abroad, rural, and urban); and hybrid learning methods (physical, virtual, and hybrid). They also ask whether a program has an active educational placement/work study program with outside firms/organizations, has dedicated research faculty/programming, and includes courses focused on sustainable/resilient design in the program’s core requirements. Additionally, they ask whether a program emphasizes sustainable resilient design, includes socially focused design courses (e.g., design for equity) in the core requirements, emphasizes socially focused design (design for equity), and promotes interdisciplinary learning (core classes from other disciplines involving the built environment) and transdisciplinary learning (core classes from other disciplines NOT involving the built environment).
Finally, students are asked to identify up to five distinguishing features of their programs in 25 words or fewer each.
2.4. Reporting
Using the data collected through the surveys, DI determines (1) the 40 “most admired” schools; (2) the 20 “most hired from” schools in each of the following categories, defined by the number of graduates: fewer than 20, 20–49, 50–69, 70–99, and 100+; and (3) rankings in the following 12 focus areas: (1) communications and presentation skills, (2) construction materials and methods, (3) design technologies, (4) design theory and practice, (5) engineering fundamentals, (6) healthy built environments, (7) interdisciplinary studies (awareness of and collaboration with multiple disciplines impacting the built environment), (8) practice management, (9) project planning and management, (10) research, (11) sustainable built environments/adaptive design/resilient design, and (12) transdisciplinary collaboration across Architecture/Engineering/Construction. DI also provides program insights using the data collected from professionals, deans, and students. Helpful as they are, these insights will not be discussed in this paper.
3. Some Immediate Observations
The descriptions provided above suggest some positive and many negative aspects of the DI ranking system, which are discussed below.
3.1. DI Collects Data from Multiple Sources
College rankings generally have used three different types of data: reputational data, input data, and output data. Reputational data are generally collected from university presidents, academic deans, department heads, or employers, that is, from those who supposedly know the most about academic quality. Input data include, among other things, faculty publications and citations, library size, student–faculty ratio, incoming students’ test scores, retention and graduation rates, and educational expenditures. Output data pertain to the graduates of a program and include data on such factors as skillsets, employability, average salaries, awards and recognitions, and roles and responsibilities. Since none of these data sets can describe academic quality sufficiently [3,12,13,14], rankings using a combination of reputational, input, and output data may be preferable.
Interestingly, DI uses all three types of data to rank architecture programs. The input criteria used by DI are included in its administrators’ and students’ surveys. The output criteria are included in its employers’ survey. Finally, the reputational criteria are included in its administrators’ and employers’ surveys. As a result, for supporters, DI data provide a comprehensive comparative assessment of professional architecture programs. Conversely, for critics, DI data are spread too thin and unable to take any one of the three perspectives of rankings into account sufficiently. For them, conflicts among these perspectives may be difficult to resolve. As a result, the quality of a program, as defined by DI, becomes questionable and confusing. For example, DI does not explain how it determines the ranking of a program when administrators and students hold contradictory views of the program. It also does not explain how it monitors, verifies, and validates the data it collects from its three different sources.
3.2. DI Collects Profession- and Program-Focused Data
DI’s surveys collect generic data, such as the numbers of students, academic products, publications, citations, recognitions received, different types of jobs held by alumni, and hires from a program. These generic data, however, provide very little information about the specific nature of professional architecture programs. To complement these generic data, DI’s surveys also collect profession- and program-specific data. These data include the attributes of new architecture graduates entering the workplace, the factors affecting the hiring of new architecture graduates, and the quality of the learning environment of a program. Therefore, unlike rankings that use generic data only, DI rankings are likely to be more relevant to students and parents looking for an architecture program and to employers hiring new architecture graduates.
Yet, DI’s survey data raise significant concerns. Since professionals may not have direct access to a program, it seems reasonable for DI to seek out their opinions of a program based on the students they hire from the program. However, it does not seem reasonable for DI to seek out students’ and administrators’ opinions concerning the quality of their own programs. It should not surprise anyone if administrators or students simply rate their own programs as excellent. Indeed, if the benefit of an answer to a question is already evident to those who answer it, and in the absence of any disincentives, why should they answer in a manner that could harm the rankings of their programs?
3.3. DI Cannot Define the Quality of a Program in an Objective Manner
Hazelkorn [1] identified eight academic indicators often considered by ranking systems: beginning characteristics, learning outputs, faculty, learning environment, final outcomes, resources, research, and reputation. Beginning characteristics are represented by data such as student admission scores and the percentage of international students. Learning outputs are typically defined by a proxy of retention and graduation rates in most ranking systems. Faculty indicators include faculty-to-student ratios and research output. Learning environment reflects student engagement and satisfaction with the learning conditions in a program. Final outcomes include the employability and average salaries of graduates as proxies for the quality of education. Resources account for the budgetary and physical assets of the program. Research is represented in rankings by the number of faculty publications and citations, as well as by the level of funding for faculty work. Finally, reputation is generally established by peer review, which is often subject to reviewer bias, to a “halo effect” in which perceptions of one academic unit extend to others in the same institution, and to a tendency to restrict judgments to known institutions.
Only some of the academic indicators identified by Hazelkorn [1] are considered thoroughly and methodically by the DI ranking system. Therefore, even though DI rankings use general as well as program- and profession-focused data from three different sources, they cannot provide a comprehensive assessment of the academic quality of architecture programs. Nor can they provide an objective assessment of programs, because they rely primarily on opinion surveys. The objectivity of DI rankings is further reduced when the same opinion surveys are repeated year after year. Once an opinion survey has been conducted and the results have been published, it is hard to imagine that the survey will continue to provide unbiased data and results in the following years. By employing many small and large strategies to influence the surveys, professional architecture programs can try to improve their ranks from one year to the next, even though substantive changes in academic quality may be hard to achieve in a short time. Since it is impossible to prove the veracity of participants’ opinions, the DI rankings may not show the true value of architecture programs.
3.4. DI Does Not Have a Process in Place to Verify the Data It Collects
Besides the fact that opinion-based survey data are generally hard to verify, the problem of the verifiability of DI rankings arises at multiple levels. For example, it is not clear how DI verifies whether a survey participant is really the person they claim to be. It is also not clear how it stops a participant from taking the survey more than once. Additionally, it is not clear whether DI can independently verify the information provided by participants in the few cases where they are asked to provide specific data instead of opinions. For example, does it check whether an administrator has inadvertently provided wrong data? In fact, stories of mistakes are quite common in ranking systems [10]. It is worth noting here that, while it is common to ask administrators about the quality of their academic programs or to ask them to compare their programs with other similar programs, research in the social sciences has found that this practice tends to produce outdated results that favor programs at institutions with strong reputations, irrespective of a program’s achievements [15,16,17]. More importantly, while the opinions of professionals, deans/administrators, and students about architecture programs are valuable, how can they, having experienced or known only a few architecture programs, weigh a program objectively against the many other programs that exist? Individuals cannot become experts on all the other programs in the country by hiring a few graduates as employers, earning a degree or two from one or more programs as students, or running one or more programs as administrators.
Since answers to several DI survey questions are not verifiable, there is no disincentive against answering DI’s questions inaccurately. There is also no validity test reported in the literature for the questions in DI’s surveys. Do they really measure what they want to measure? Having a “fab-lab” and software does not mean that students have used them or that the quality of these resources is similar in different programs. Likewise, having a socially focused design course does not ensure that students know how to design for equity or that the content and delivery of these courses are the same in different programs. Similarly, having research faculty/courses does not mean that students have access to them. In many schools, students in professional architecture programs may not need to interact with research faculty regularly. Therefore, they may be unsure whether the program has research faculty or not. It is also a fact that many stellar research faculty may not have time to teach or may not be good teachers. Simply put, it is not possible to rank programs fairly based on a set of binary questions on learning environments, as DI does. To do so, it is necessary to know the quality, quantity, and extent of the things a program has. For example, a program may have an active and strong educational placement/work study program without having dedicated research faculty or programming. A program does not have to have everything included in DI’s questions on learning environments to become an exceptional learning community.
Responses to many of DI’s survey questions are also unverifiable because the questions can be interpreted in many ways. For example, in the absence of any clear definition, participants may interpret “sustainable resilient design” differently. Therefore, no answer to whether a program offers a “sustainable resilient design” course can be wrong. Many of DI’s questions also seek descriptive, as opposed to numerical, responses. It is not clear how these responses are considered in determining program quality and, hence, rank. In qualitative studies, it is common practice to have the results examined by experts, as well as by informants familiar with the subject matter, setting, or methodology of the research, before the results are reported. One wonders whether the results of the analysis of DI’s qualitative data are examined in a similar manner. It is possible that DI does not use any qualitative data for ranking purposes, but we do not know that. Nevertheless, more transparency concerning its methodology may help improve the usefulness of DI rankings.
3.5. It Is Not Clear How DI Uses Data to Rank Programs
As noted earlier, at least three sets of responses are considered, sometimes on the same set of issues, to determine the DI rankings. In the professionals’ survey, for example, employers are asked to indicate whether architecture programs are preparing newly hired architects well in different professional skills and areas. In the administrators’ survey, administrators are asked to indicate whether their programs emphasize these skills and areas. Finally, in the students’ survey, students are asked to indicate whether their programs are preparing them well in, again, the same set of professional skills and areas. It is not clear how these three sets of responses are mapped onto each other to evaluate architecture programs and to determine their rankings.
In a more specific example, DI uses professionals’ survey data to determine the rankings for the “most admired” and “most hired from” programs. However, it is not clear whether DI also uses administrators’ and students’ survey data to do the same. It is also not clear how a mismatch between professionals’ expectations and administrators’ and students’ descriptions of their schools affects DI’s rankings. What happens to the ranking of a program when recent graduates of the program exhibit poor research skills in the office where they have just been hired, but both the administrators’ and students’ surveys indicate that their program offers research skills courses?
Additionally, a lack of clarity exists concerning how DI determines how many programs to rank. DI does not explain why it includes only the 40 “most admired” programs in its ranking list. To be ranked 41st out of some 141 schools in the “most admired” category is no small feat. In contrast, DI includes 100 schools (20 schools for each of the five categories defined by the number of graduates) out of some 141 schools in the “most hired from” category. To put it simply, most schools are ranked in the “most hired from” category, which probably is not a very useful distinction. Additionally, a rank in the “most hired from” category says very little about the differences between a program graduating 100+ students and a program graduating fewer than 20 students. In any given year, ensuring job placement for every student is more difficult for a program graduating 100+ students than for a program graduating fewer than 20 students.
More interestingly, DI does not explain why the numbers of ranked programs in different focus areas vary. In 2021, some focus areas included 15 programs in the ranking lists, some 10, and still others 9. One wonders whether DI ranks only those programs for which it has data. If that is indeed the case, then with only a handful of programs, the usefulness of the focus area rankings is questionable. Further, in 2021, DI did not rank programs in the following focus areas: community involvement, global issues/international practice, and understanding the impact of urbanization on design. Again, DI might not have had enough schools to rank in these categories. A lack of a sufficient number of schools to rank should not be surprising, because when professionals are asked to choose only the five programs most familiar to them out of some 141 programs, they are forced to exclude many other programs that could potentially be good in some of the many focus areas.
3.6. It Is Not Clear How DI Rankings Are Related to Program Accreditation
Architecture programs are accredited by NAAB. NAAB’s program criteria are used to evaluate how a program supports career pathways, design, ecological literacy and competency, history and theory, research and innovation, leadership and collaboration, learning and teaching culture, and social equity and inclusion. NAAB’s student criteria are related to student learning objectives and outcomes. They include health, safety, and welfare in the built environment, professional practice, regulatory context, technical knowledge, design synthesis, and building integration.
Even though all accredited architecture programs must remain in compliance with NAAB’s program and student criteria, NAAB does not rank the accredited programs, but DI does. Interestingly, a brief survey by this author of the NAAB evaluation reports of accredited architecture programs around the country shows that many programs that fail to meet several NAAB criteria are still ranked highly by DI. This is important because, according to NAAB, “Accreditation is evidence that a collegiate architecture program has met standards essential to produce graduates who have a solid educational foundation and are capable of leading the way in innovation, emerging technologies, and in anticipating the health, safety, and welfare needs of the public” ([18], page ii).
Of course, accreditation is a voluntary process, and not all architecture programs seek NAAB accreditation. Similarly, not all jurisdictions require a degree from a NAAB-accredited program for licensure to practice. This raises an interesting question: Should a ranking system rank architecture programs without considering whether a program “has met standards essential to produce graduates who have a solid educational foundation and are capable of leading the way in innovation, emerging technologies, and in anticipating the health, safety, and welfare needs of the public”? If a ranking system of architecture programs should not do that, then who should take the responsibility when incoming students choose a program based on its ranking instead of its accreditation status?
4. Analytic Findings
The literature reports numerous analytic studies on rankings, noting that rankings may be deficient in several respects. Some of these studies explore ranking processes [19,20,21,22,23], differences and similarities [24,25,26,27,28], and correlations [29]. Other studies explore how lessons learned from some countries that use higher education ranking systems might influence similar practices in other countries [23]. Still others look at theories, data, and methodologies, as well as the weightings and formulas used in rankings [5,19,30,31,32,33,34,35,36,37,38]. One of these studies found that USNWR rankings were no more than a reflection of the institutional outcomes and financial resources gathered from the Integrated Postsecondary Education Data System (IPEDS) of the National Center for Education Statistics (NCES) [39]. Another study found several problems with the THES rankings: the sampling procedure was not explained and was very probably seriously biased; the weightings of the various components were not justified; inappropriate measures of teaching quality were used; the assessment of research achievements was biased against the humanities and social sciences; the classification of institutions was inconsistent; there were striking and implausible changes in the rankings between 2004 and 2005; the rankings were based on regional rather than international comparisons; and so on [40].
Many studies also explore how college rankings affect consumers and the higher education establishment [41,42]. The specific topics explored in these studies include the effects of rankings from one year to the next, also known as feedback effects [12,14], and the effects of rankings on public policies [32], peer assessments of reputations [43,44], and admission outcomes and pricing decisions [45]. One study on admission outcomes at selective private institutions found that a less favorable rank led an institution to accept a greater percentage of its applicants, lowering the quality of the entering class [45]. Another study found that, while a change in rank did not affect the (all private) schools’ “sticker price”, less visible discounts were associated with a decrease in rank: a less favorable rank led to lower thresholds of expected self-help and more generous financial aid grants, with a drop of ten places resulting in a four percent reduction in aid-adjusted tuition and an overall decrease in net tuition [45]. After controlling for student characteristics at fourteen public research universities, yet another study found no statistically significant relation between the USNWR measures and the NSSE (National Survey of Student Engagement) benchmarks, except for one. This finding led the author to conclude that the quality of a student’s education seems to have little to do with rankings [46]. Still other studies examine the impact of institutional and program-area rankings on students’ access to and choice in higher education and discuss the impact of rankings on student opportunities after graduation in terms of placement success and earnings [23,47]. There are also studies suggesting new methodologies for better rankings [14,38,48,49,50,51], focusing on, among other things, relative internationalization [52]; research, educational, and environmental performance [53]; and the effect of a university’s strategy on its rank [51].
This study on the DI rankings of professional architecture programs cannot cover all that has been presented in the literature on college rankings. While similar studies on DI rankings are necessary, this study will explore only a few of the most common issues in relation to DI rankings. These issues include the halo effects of college rankings on the DI rankings of architecture programs, the feedback mechanism affecting the year-to-year DI rankings, the relevance of architecture programs as demonstrated by the DI rankings, and, finally, the relationships between academic expenses and the DI rankings of architecture programs. It should be noted here that all statistical analyses required for the study were performed using IBM SPSS Statistics (Version 27).
4.1. Halo Effects
The “halo effect” is a ranking phenomenon in which less prestigious programs are rated more highly based on the overall reputation of their institutions. In one extreme example, Princeton’s undergraduate business program was ranked among the top ten, even though Princeton did not have such a program [3]. The “halo effect” demonstrates that at least some reputational survey participants rate programs and institutions without the requisite knowledge to make an informed judgment of quality.
Two analyses were conducted to find out whether halo effects exist in DI rankings. First, the correlations between the USNWR national rankings of universities and the “most admired” DI rankings of their undergraduate architecture programs were studied for the years 2019 and 2020. Graduate architecture programs were not considered in this analysis, because the USNWR national rankings of colleges at the institutional level cover undergraduate programs only. As indicated by the correlational analysis, the associations between these two rankings are strong and statistically significant (Table 1). Since it is unlikely that architecture programs affect the USNWR national rankings of universities, these associations may indicate the existence of halo effects of the USNWR rankings of universities on the “most admired” DI rankings of their undergraduate architecture programs.
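For readers who wish to reproduce this kind of analysis, a minimal sketch of a rank-correlation computation is shown below. The input file and column names are hypothetical, the reported analyses were run in IBM SPSS Statistics, and Spearman’s rank correlation is used here only because both variables are ranks; the paper does not specify the correlation measure.

```python
# Illustrative sketch only: correlating USNWR institutional ranks with DI
# "most admired" undergraduate ranks for one year. The file and column names
# are hypothetical; the study's analyses were performed in IBM SPSS Statistics.
import pandas as pd
from scipy.stats import spearmanr

# One row per school: its USNWR national rank and its DI "most admired" rank.
ranks = pd.read_csv("ranks_2019.csv")  # columns: school, usnwr_rank, di_admired_rank

rho, p_value = spearmanr(ranks["usnwr_rank"], ranks["di_admired_rank"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```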
Second, the halo effects were studied at the school level by exploring the following question: Does having both “most admired” undergraduate and graduate programs affect how frequently undergraduate programs are ranked? According to Table 2, between 2010 and 2019, 23 schools had “most admired” undergraduate architecture programs only. On average, these programs were ranked 5.57 times as “most admired” programs. In contrast, during the same period, 17 schools had both “most admired” undergraduate and graduate architecture programs. On average, the undergraduate programs of these schools were ranked 12.24 times as “most admired” programs, indicating that having a “most admired” graduate program in the school might improve an undergraduate program’s chance of being ranked as a “most admired” program.
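The school-level comparison amounts to a group-wise average of how often each school’s undergraduate program was ranked. The sketch below uses a hypothetical data layout; the significance test at the end is an added illustration and is not reported in the paper.

```python
# Illustrative sketch only: comparing how often "most admired" undergraduate
# programs were ranked (2010-2019) for schools with and without a "most admired"
# graduate program. The file and column names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

schools = pd.read_csv("admired_2010_2019.csv")
# columns: school, times_ranked_ug (count), has_admired_grad (True/False)

# Group averages, analogous to the 5.57 vs. 12.24 figures reported in Table 2.
print(schools.groupby("has_admired_grad")["times_ranked_ug"].mean())

# A nonparametric test of the difference (an assumption; not reported in the paper).
with_grad = schools.loc[schools["has_admired_grad"], "times_ranked_ug"]
without_grad = schools.loc[~schools["has_admired_grad"], "times_ranked_ug"]
u_stat, p_value = mannwhitneyu(with_grad, without_grad)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```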
4.2. Feedback Mechanism
Another serious concern related to participants’ lack of information about the institutions they are ranking is that reputational rankings, particularly annual ones, may be creating a feedback mechanism for future rankings. Put another way, having little familiarity with other institutions, academic administrators and professionals are likely to allow a school’s previous rank to affect their current assessment of its academic reputation. Even if only indirectly, the sheer ubiquity of rankings on the internet may be a reason for this [14]. As a result, not only may we observe strong correlations between an institution’s year-to-year rankings, but these correlations may also increase and stabilize over time, so that little overall change can be expected in the rankings as the years go by. As shown in Table 3, such a feedback mechanism seems to exist in the DI rankings of the “most admired” architecture programs. According to the table, with only a few minor exceptions, the year-to-year correlations have been clearly strengthening for the “most admired” undergraduate programs since 2008–2009. Even though the “most admired” graduate programs do not show a similar pattern, their year-to-year correlations have remained strong over the years, suggesting a strong association between a program’s previous rank and its current rank.
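The year-to-year analysis can be sketched as a series of lag-1 rank correlations over a wide table of ranks, one column per year. As before, the file layout and column names are hypothetical, and the reported computations were performed in SPSS.

```python
# Illustrative sketch only: lag-1 Spearman correlations of DI "most admired"
# undergraduate ranks, year over year. The file layout is hypothetical.
import pandas as pd
from scipy.stats import spearmanr

ranks = pd.read_csv("di_admired_ug.csv", index_col="school")
# columns: one per year, e.g., "2008", "2009", ..., "2019"; values are ranks

years = sorted(ranks.columns, key=int)
for prev_year, curr_year in zip(years, years[1:]):
    pair = ranks[[prev_year, curr_year]].dropna()  # schools ranked in both years
    rho, p = spearmanr(pair[prev_year], pair[curr_year])
    print(f"{prev_year}-{curr_year}: rho = {rho:.2f} (n = {len(pair)}, p = {p:.4f})")
```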
4.3. Relevance
The purpose of any academic ranking is to assess not only the quality but also the relevance of academic programs. People may be less interested in good-quality programs if those programs lack relevance. One way to gauge the relevance of a program is to see how employable its graduates are. If the education one receives from DI’s “most admired” programs is relevant, then one would expect the ranks of the “most admired” programs to be associated with the ranks of DI’s “most hired from” programs. To find this out, the study considered whether DI’s “most admired” programs were also included in DI’s “most hired from” program categories. The findings for the years 2018 and 2019 are presented in Table 4. Interestingly, at least nine programs of different sizes were included in the “most hired from” categories but not in the “most admired” categories of the two years. Only a few of the 23 schools with “most admired” undergraduate programs were included in the “most hired from” categories of these years. The same is true for the schools with “most admired” graduate programs: only a few of these programs were included in the “most hired from” categories of these years. Notably, many schools with “most admired” undergraduate and graduate programs were included only in the category of “most hired from” schools with 100+ graduates in both years. According to these findings, only some of the “most admired” schools may be providing relevant education, as indicated by their rankings in the “most hired from” categories.
Another way to gauge the relevance of an architecture program is to see how well its graduates do in the Architect Registration Examination (ARE). The ARE is designed to measure various areas of professional knowledge that are widely regarded as an important outcome of a successful professional architectural education. It therefore potentially reveals how much students have learned while in the program. The areas included in the exam are construction & evaluation, practice management, programming & analysis, project development & documentation, project management, and project planning & design. This study explored the correlations between the ARE pass rates of professional architecture programs and the “most admired” and “most hired from” DI ranks of these programs for the years 2018 and 2019. Of course, graduates do not take their registration exams in any predetermined year. Rather, they often take these exams over an undefined number of years. Since the reputation of a program is unlikely to change significantly from year to year, it is expected that graduates from a “most admired” or a “most hired from” program would generally do well in their registration exams, regardless of when they take them.
The results of the correlational analysis, as presented in Table 5, do not support the above expectation consistently. Out of the 24 correlations observed between the ranks of the “most admired” undergraduate programs and the pass rates of their graduates in the registration exams, only 12 were statistically significant, indicating that ARE results may have some association with the “most admired” undergraduate rankings. In contrast, out of the 24 correlations observed between the ranks of the “most admired” graduate programs and the pass rates of their graduates in the registration exams, only three were statistically significant, indicating that ARE results may have very little association with the “most admired” graduate rankings.
As also shown in Table 5, the correlations between ARE pass rates and the rankings in the “most hired from” categories are inconsistent at best. The programs in the 2018 “most hired from” categories show only one statistically significant correlation with ARE pass rates, whereas the programs in the 2019 “most hired from” categories show several. These inconsistent results raise questions about the ability of DI rankings to capture the relevance of professional architecture programs.
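The ARE analysis is the same kind of rank correlation, repeated across the six exam divisions. A minimal sketch follows, again under a hypothetical data layout; the reported analysis was run in SPSS.

```python
# Illustrative sketch only: correlating DI ranks with ARE pass rates across the
# six exam divisions. The file and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("are_vs_di_2019.csv")  # one row per ranked program
divisions = [
    "construction_evaluation", "practice_management", "programming_analysis",
    "project_development_documentation", "project_management", "project_planning_design",
]
for div in divisions:
    pair = df[["di_admired_rank", div]].dropna()
    rho, p = spearmanr(pair["di_admired_rank"], pair[div])
    print(f"{div}: rho = {rho:.2f}, p = {p:.4f}")
```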
4.4. Expenses
In an ideal world, an academic program would be ranked higher if it provided the same quality of education as other programs but at a lower cost. In the real world, however, the opposite happens: a program is generally punished with a poor rank for providing the same quality of education at a lower cost. Considering this, Ehrenberg notes that “no administrator in his or her right mind would take actions to cut costs unless he or she had to” [54].
DI does not include any questions on academic expenses, but its questions on learning environments for deans/administrators and students refer to academic expenses indirectly. More facilities and resources always require more money but provide no assurance of better academic quality. Studies have shown that, when expenses are used as a criterion to determine rankings, they directly encourage the inefficient use of resources by rewarding schools for spending more money, regardless of whether these expenditures contribute to academic quality [14].
To determine the relationship between DI rankings and academic expenditures, the average undergraduate tuition and fees of the following categories of “most hired from” schools were compared: (1) “most hired from” schools with no “most admired” programs, (2) “most hired from” schools with “most admired” undergraduate programs, (3) “most hired from” schools with “most admired” graduate programs, and (4) “most hired from” schools with both “most admired” undergraduate and graduate programs. The findings presented in Table 6 indicate that the schools of category 4 are the most expensive, with average tuition and fees of USD 38,968, followed by the schools of category 3 (USD 26,786), the schools of category 2 (USD 20,920), and, finally, the schools of category 1 (USD 15,447). Simply put, DI ranks are generally better for more expensive programs.
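The comparison in Table 6 amounts to a group-wise average of tuition and fees. A minimal sketch, assuming a hypothetical data layout, is given below.

```python
# Illustrative sketch only: average undergraduate tuition and fees by the four
# "most hired from" categories summarized in Table 6. The file is hypothetical.
import pandas as pd

schools = pd.read_csv("most_hired_tuition.csv")
# columns: school, category (1-4 as defined above), tuition_and_fees (USD)

avg_by_category = (
    schools.groupby("category")["tuition_and_fees"]
    .mean()
    .sort_values(ascending=False)
)
print(avg_by_category.round(0))
```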
5. Impacts of DI Rankings
Due to the limitations discussed above, DI rankings continue to have significant negative impacts on professional architecture programs and on public opinion about these programs, as discussed next.
5.1. Impacts of DI Rankings on Academic Programs
DI does not consider the factors related to admissions decisions in the ranking process. Without this information, it is not possible to determine how much students have learned by the time they graduate from a program. If a program admits students from underprivileged backgrounds and makes them highly employable, should it not be given more credit than a program that admits only students from privileged backgrounds and makes them equally employable? The likely consequence of not considering admission criteria in DI rankings is that the rankings lead programs away from properly balancing the many factors that go into determining which applicants will improve program quality and make better architects.
A lack of focus on admission criteria in the DI rankings can have worrisome secondary effects on architecture programs. When admitting students, programs may place emphasis on things with negative social consequences. A program wishing to attract good design students may pay more attention to portfolios than to GPAs for many middle-range students, even though students with good GPAs are generally well-rounded, hardworking, and persistent. For example, a program may select a 3.0 GPA student with a good portfolio over a 3.25 GPA student with an average portfolio, without taking into account that not all high schools provide the opportunities that go into making a good portfolio. While a program should be free to choose its admission criteria, a ranking system can send the wrong message to the program if those criteria are not considered when the program is ranked.
DI’s lack of focus on admission criteria can have other secondary effects on architecture programs as well. Since the employability of graduates is an important factor in ranking, a program may want to improve its ranking by accepting applicants who appear to have better employment prospects. It is not difficult to identify such applicants: they generally come from wealthy families and suburban high schools and have contacts among the privileged of society. Favoring privileged students over underprivileged students may harm a program and the profession by reducing diversity.
Another impact of DI rankings is that they create an incentive for professional architecture programs to reduce class sizes, which affects programs in private and public institutions differently. With smaller classes, it is easier for programs to improve some of the data DI uses. To maintain the quality of learning and teaching, however, a program with smaller class sizes needs to increase tuition and fees. This can be done easily in private schools but not in public schools. Monitored by state boards and legislatures, public schools cannot increase their tuition and fees at will. On many occasions, despite significant budget cuts, public schools are asked to keep their tuition and fees flat. Therefore, the only way they can continue to provide an education at the same level despite budget cuts is to increase class sizes, which is the opposite of what is needed to improve quality. Over time, this phenomenon may translate into better rankings for private schools and worse rankings for public schools.
Yet another impact of DI rankings is that they affect professional architecture programs through their effects on tuition and fees. Since having more financial resources matters to DI rankings, it is in the best interest of a program to raise tuition and fees. If such increases are not essential, the program may choose to give much of the revenue back to students in the form of scholarships. This will probably raise the rank of the program without significant changes elsewhere. The unwelcome side effect is that, as tuition rises each year, access to education for students of limited financial means decreases, scholarship or not. Not knowing whether they will receive scholarships, these students may choose not to apply to expensive programs in the first place.
DI rankings also affect professional architecture programs through their effects on resource allocation within a program. When programs shift resources to improve some DI ranking indicators, they take resources away from other areas not included in the ranking. This raises the question of whether such changes in resource allocation improve academic quality. Since the single largest factor in DI rankings is a program’s reputation among professionals, programs seeking to raise their reputations may start spending substantial sums on their media presence through glossy publications, high-quality videos, and flashy awards. However, it is doubtful that such attempts to increase the visibility of a professional architecture program would improve its academic quality or help make better architects.
Finally, it should be remembered that it is not always bad that the DI ranking system leaves out several things included in other ranking systems. For example, pass rates in registration exams have been a factor in many ranking systems of professional programs [5,55]. It is not clear, however, whether DI already considers registration exams an important factor in its rankings. Any assumption that DI considers registration exams in its rankings may have negative effects on a program seeking to improve its rankings. If instructors are told to improve pass rates for their subjects in registration exams, they may very well teach to the basics for the weakest students, because registration exams are not designed to assess the highest level of competency in a subject matter. While programs must do all they can to improve registration exam outcomes, teaching a course primarily to improve those outcomes may not be a good pedagogical strategy.
5.2. Impacts of DI Rankings on Public Opinion
The consumer of a product, such as a car, a computer, a fridge, or a washing machine, can evaluate the product for themselves. Most buyers know the criteria they need most in a product. If they do not like a product, they can replace it. However, choosing an academic program is not buying a product. One degree cannot be exchanged for another. A high school student can take a tour, sit in a class, or live in a college dormitory for a day or two, but none of this can predict the college experience. Yet the college experience one purports to buy based on the rankings outweighs almost any other purchase in life. The market value of a good education rarely diminishes. Rather, it continues to increase, and students expect to sell their degrees in the job market many times throughout their careers. Therefore, like many other systems of academic ranking, the DI rankings of architecture programs seem to have tremendous power to shape public opinion, which cannot be ignored by the programs being ranked.
DI rankings are an easy and cheap way for architecture programs to tell the public how good they are. These programs do not feel responsible for the fact that DI rankings use individual opinions, however educated these opinions may be; that these opinions can only be surrogates for the quality of the learning experience in a program; and that there is no way to verify whether the data collected through DI’s online surveys truly help evaluate the quality of the learning experience in a program. These limitations do not stop highly ranked programs from sharing their DI ranks with the public, because these ranks fulfill a public need for a third-party evaluation of architecture programs.
In essence, most architecture programs assume that DI rankings can be used as proxies for the academic quality the public should care about. By accepting this assumption, the public makes yet another assumption: that if a program is good on some criteria, as assessed by DI, then it must be good on all the other unstated and unmeasured criteria the public is interested in. Both of these assumptions are wrong. No one knows whether DI’s survey findings correlate with the expectations of the public, such as the expectation that a highly ranked program will provide a high-quality education and better professional and personal outcomes. So far, no one has studied how and whether DI’s survey data can help measure the quality of architectural education and its impact on life after graduation. This study has at least made it clear that both halo effects and feedback mechanisms may be present in DI rankings, indicating that the current DI ranking of a program can be affected by the ranking of the college within which the program exists and by the program’s own past rankings.
Like most rankings, DI rankings have created a prisoner’s dilemma for the country’s architecture programs: for whatever reason, when one program improves its ranking, another program must fall in the rankings. Overall, it is not certain how these constant fluctuations in the rankings might benefit society. A graduate from a highly ranked architecture program may easily get a good job. However, for the same reason, a graduate from a poorly ranked architecture program may have difficulty getting a good job. Even though both graduates may be equally competent to serve the profession and society, as judged by the accreditation body, DI rankings create an artificial situation in which society is deprived of what these two graduates could give back to society and the profession.
When hiring our next architect, should we take the DI rankings of architecture programs seriously and choose a graduate from a highly ranked program who has excellent design skills but lacks social responsibility, or a graduate from a lowly ranked program who has excellent design skills and takes social responsibility seriously? The public interest is not well served when our future architects have good design skills but lack social responsibility. Society is weaker if our architecture programs support poorly done rankings for immediate gains. The costs of admitting the wrong students into our programs or hiring the wrong architects into our offices are not limited to the profession of architecture. The work of architects can have profound effects on society and the environment, so if architecture programs are admitting the wrong students, the effects may be broadly felt. Put simply, it is possible that DI rankings are negatively affecting architecture programs, the public, and the society they claim to serve.
6. Suggestions for Improvement
Are the rankings of architecture programs necessary? Supporters argue that DI rankings allow a person to find a range of architecture programs they might be interested in. However, that function can easily be performed using the information presented on the websites of programs, schools, ACSA (Association of Collegiate Schools of Architecture), and the NAAB. These websites contain more in-depth information on architecture programs than the rankings provide. While research on these programs may take some time, it may be time well-spent. However, the rankings will always remain useful for those who may not want to do the necessary research or who may not be able to make a decision based on all the publicly available information about the programs they find interesting.
For those who need rankings like DI’s rankings of architecture programs, can the rankings be done differently to serve them better? In a data-driven world, the organizations and individuals involved in rankings have access to a tremendous amount of data for their purposes. Yet a single ranking system that uses these data sensibly and satisfies every user is highly unlikely. It is equally unlikely that the market will fix the problems of the current rankings without the active engagement of the students and parents looking for a program, the academic programs being ranked, and the organizations doing the rankings. Some suggestions to promote such active engagement are provided below.
6.1. Consider Alternative Processes to Improve Ranking Systems
For many, the DI rankings of architecture programs eliminate the need for individual research comparing the programs. For them, these rankings provide a clear basis for decision-making and give answers instead of arguments. Perhaps the best hope for change, then, is to develop many rankings, each providing a simple answer to a different question about architecture programs. It is better if such rankings are not limited to those made from a supposedly neutral and general perspective. They should emphasize special interests for improved diversity and utility. They should allow consumers to evaluate programs using subjective criteria that most closely reflect their own preferences, or to take the average of a program’s rank across many different qualities and specialty areas. As a result, they may mitigate the perverse incentives created by a hegemonic ranking system. If rankings give architecture programs opportunities to highlight a wider range of specialty fields, these programs may be better able to stand out in a sea of similarities and to position themselves well on campus and in front of prospective students and their parents.
In the US, urban planning programs have attempted, with some controversy, to create a set of academic performance indicators, ranging from student diversity to faculty projects, without integrating them into an overall ranking system [56,57]. This allows schools to monitor and advertise their performance on the subset of indicators that reflects their values, for example, student professional registration, community engagement activities, or research publications. Systems of this kind produce not a single overall ranking but many separate comparisons, and they offer a way of valuing schools with different missions.
To further improve how architecture programs are compared, experts in educational evaluation could be consulted. These experts may suggest criteria that DI or other organizations ranking architecture programs would not otherwise consider. They may also point to the need to observe architecture programs closely in order to learn about them and to validate any numerical indicators used in rankings. However, no ranking organization appears interested in the extensive and expensive site visits that assessing architecture programs for ranking purposes would require.
In lieu of ranking lists, many suggest listing schools in broader “quality bands”, in which schools within the same band would be considered of largely equal quality [58]. One example of listing universities and colleges in quality bands is provided by the Teaching Excellence Framework, or the Teaching Excellence and Student Outcomes Framework (TEF), of the UK Department for Education. The aim of this exercise, carried out by the Office for Students (OfS), has been to examine what UK universities and colleges are doing to ensure excellence in teaching, learning, and student outcomes, in addition to meeting the national quality requirements for UK higher education. The OfS carried out TEF assessments in 2016–2017, 2017–2018, and 2018–2019, according to the Department for Education’s specifications. The awards were judged by an independent panel of students, academics, and other experts using a range of official data, combined with a detailed statement from each higher education provider. Universities and colleges that participated in TEF assessments received a Gold, Silver, Bronze, or provisional award (the last meaning that the provider met the rigorous national quality requirements and entered the TEF but did not have the opportunity to be fully assessed) (https://www.officeforstudents.org.uk/advice-and-guidance/teaching/about-the-tef/, accessed on 20 August 2022).
Of course, quality bands, such as the Gold, Silver, Bronze, and provisional awards defined by the TEF, may not be as appealing as ranking lists. These bands also raise the question of granularity. Greater granularity provides more information about excellence and better reflects the complexity of educational provision, but it also requires more resources. In the case of the TEF, for example, a review report recommended that higher education providers be awarded both an institutional rating and a rating for each of four areas: teaching and learning environment, student satisfaction, educational gains, and graduate outcomes [59]. Do these areas represent the right level of granularity? Do they cover everything that needs to be considered when ranking academic institutions? Rigorous answers to these questions do not yet exist.
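To make the banding idea concrete, the following is a minimal sketch, in Python, of how programs might be grouped into quality bands from a composite score. The programs, scores, and cutoffs are hypothetical, and the real TEF awards are decided by an expert panel rather than by simple numeric thresholds; the sketch only illustrates how banding avoids exaggerating small differences the way a strict rank order does.

```python
# A minimal, hypothetical sketch of grouping programs into quality bands
# instead of producing a strict ordinal ranking. The programs, composite
# scores, and cutoffs are invented for illustration; actual TEF awards are
# decided by an expert panel, not by simple numeric thresholds.

BANDS = [("Gold", 80), ("Silver", 60), ("Bronze", 0)]  # illustrative cutoffs


def assign_bands(composite_scores):
    """Map each program to the first band whose cutoff its score meets."""
    return {
        program: next(name for name, cutoff in BANDS if score >= cutoff)
        for program, score in composite_scores.items()
    }


# Programs whose composite scores differ only slightly land in the same band,
# so small quality differences are not exaggerated the way a rank order
# exaggerates them.
print(assign_bands({"Program A": 83, "Program B": 81, "Program C": 64}))
# -> {'Program A': 'Gold', 'Program B': 'Gold', 'Program C': 'Silver'}
```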
6.2. Consider the “Third Mission” to Improve Ranking Systems
Even when there are reasons to use rankings, a challenge for any ranking system of academic programs has always been to find robust, objective, outcome-based metrics that are easy to collect and analyze, that reflect the needs of a wide variety of users, and that balance quantitative measures with wider, more qualitative assessments of student learning and societal impact. In architecture, little consensus exists on how to do so. An active dialogue among all those involved in the education, production, and use of architecture is necessary to develop meaningful approaches to defining student learning and societal impact. Ideally, future rankings should reward architecture programs that provide, for example, the best training for graduates who take on influential roles in different organizations and who often provide services to society free of cost.
Often termed the “third mission” [60], the free-of-cost services of academic programs can be enormously diverse and can involve different funding and human resources. Continuing education and professional development courses, workshops, and seminars are the most common examples of extending the services of an academic program to the public sector. Technology transfer, student “startup” programs, and internationalization are also part of the “third mission”. With the enlargement of the target population and the diversification of curricula, the “third mission” is a natural evolution of academic programs toward nontraditional relations with industry and with national and international institutions. The “third mission” is also related to the ideas of lifelong learning and regional development and may include projects directed at economic development, the integration of minorities, the acquisition of basic skills, and environmental questions and issues of public and population health [60].
Similar to most rankings, the DI rankings of architecture programs lack a comprehensive consideration of the “third mission”. To address this, future rankings of architecture programs could consider, for example, the extent to which sustainability is embedded in a program’s projects and courses, or they could give credit for papers that faculty publish in journals focused on the sustainable development goals (SDGs) or that make multiple references to the SDGs. However, merely including sustainability themes in a course does not make it effective in transforming the thinking or future actions of its participants. For courses, projects, and publications to be impactful, they must genuinely advance thinking, frame policies, and deliver insights that can be implemented. How to measure the societal impact of an architectural education for a ranking system remains an open question. In this regard, enhancing quantitative metrics with qualitative assessments can help. Ranking organizations could highlight innovations more qualitatively, using individual examples of high-impact research; social impact projects; and efforts to introduce sustainability, empathy, and emotional intelligence into the architecture curriculum.
6.3. Consider Student Experiences before and after Graduation to Improve Ranking Systems
Like most current rankings, DI rankings have an obvious deficiency: they use very little information on the quality of teaching and learning experiences. These are a central concern for students and should receive more attention. One reason they are neglected is that teaching quality has many different aspects. To improve the ranking methods, one could look at the degree of student involvement in learning activities, including cocurricular and extracurricular activities. However, the learning activities affecting student experiences vary widely from program to program. Some programs may offer a few expensive activities of very high quality, while others may offer many activities of lesser quality at a lower cost. Capturing and presenting some of that variety in the rankings should help applicants find a program well-tailored to their interests and aspirations. Currently, DI surveys students to evaluate their learning experiences, but it is not clear whether students have enough information about the things they are being asked. The National Survey of Student Engagement (NSSE) could instead serve as a model (https://nsse.indiana.edu/, accessed on 3 June 2022). NSSE (pronounced “Nessie”) seeks to determine how successful colleges are at promoting the experiences that lead directly to student learning.
Another suggestion for rankers to consider is that, instead of surveying hiring professionals about the quality of a program’s graduates or asking administrators about the success of recent graduates, it may be more useful to survey alumni to find out whether they got the jobs they wanted, whether they enjoy their work, and whether the work they do improves the health of the profession and society. Since graduates of architecture programs do not always seek jobs in architectural offices, such measures of individual success, if collected for many schools, could add information that is genuinely useful to prospective students.
6.4. Encourage Personalization of Ranking
In response to the problems of rankings by private organizations, government organizations have stepped into the ranking business in recent years. The European Union created U-Multirank, the Organization for Economic Co-operation and Development (OECD) launched its Assessment of Higher Education Learning Outcomes (AHELO), and the US government developed the College Scorecard [8]. These rankings emphasize personalization, allowing users to examine multiple dimensions without any single, holistic ranking and to compare institutions with similar missions across several factors. Supporters argue that user-driven comparisons of multiple factors within groups of schools with similar missions reveal the relevance of the indicators used in rankings better than the aggregation of data into holistic rankings does.
Rather than “prepackaged” rankings, where researchers collect data and then decide on each factor’s weight in the overall ranking, user-driven rankings capitalize on web technology to let prospective users assign their own weight to each factor and produce a ranking that best reflects their individual preferences. For example, after choosing a subject, the Center for Higher Education Development (CHE) ranking in Germany allows users to select five criteria, in order of importance, from an overall list of twenty-five. The resulting table lists the German universities in order of their performance on these five criteria and shows the relative position of each school on each criterion (https://www.che.de/en/ranking-germany/, accessed on 3 June 2022). DI’s rankings of architecture programs by area of focus are somewhat similar, except that the focus areas are not defined by the users.
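To illustrate the mechanics of such user-driven rankings, the following is a minimal sketch in Python. The programs, criteria, scores, and weights are entirely hypothetical and are not DI’s or CHE’s actual data or methodology; the sketch only shows how the same underlying scores can produce different orderings once users supply their own weights.

```python
# A minimal, hypothetical sketch of a user-driven ranking. The programs,
# criteria, scores, and weights below are invented for illustration and are
# not DI's or CHE's actual data or methodology.

def personalized_ranking(scores, weights):
    """Rank programs by the weighted average of the criteria a user selects.

    scores  : {program: {criterion: score on a 0-100 scale}}
    weights : {criterion: user-assigned importance}; criteria the user omits
              are ignored entirely.
    """
    total_weight = sum(weights.values())
    composite = {
        program: sum(weights[c] * criteria.get(c, 0) for c in weights) / total_weight
        for program, criteria in scores.items()
    }
    # Highest composite score first.
    return sorted(composite.items(), key=lambda item: item[1], reverse=True)


scores = {
    "Program A": {"design": 90, "sustainability": 60, "affordability": 40},
    "Program B": {"design": 70, "sustainability": 85, "affordability": 80},
    "Program C": {"design": 80, "sustainability": 75, "affordability": 60},
}

# The same underlying scores produce different orderings once users supply
# their own weights.
print(personalized_ranking(scores, {"design": 5, "sustainability": 1, "affordability": 1}))
print(personalized_ranking(scores, {"sustainability": 4, "affordability": 3}))
```

Under these invented numbers, a user who weights design heavily sees Program A first, while a user who cares mainly about sustainability and affordability sees Program B first, which is the essential point of personalization.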
Any standard, one-size-fits-all commercial ranking has two important weaknesses. First, it represents only the judgments of its producers. Second, no single ranking system can adequately reflect what all consumers are looking for in a program. Customizable rankings overcome these drawbacks by giving consumers information that is easily understood and accessible and that reflects their own preferences and definition of quality. In the future, with advances in computer technology, customizable rankings of architecture programs may help promote true academic quality and fulfill their most important social role, namely providing useful information to the consumers of academic programs. It should be noted, however, that even when users can choose their own components and weights, any ranking is only as good as the data it uses. It is therefore important that the data used in rankings, whether prepackaged or user-driven, more directly and accurately reflect the quality of professional architecture programs.
7. Conclusions
Academic rankings are a necessary but difficult exercise, whether done at the institutional or the program level. According to this study, the DI rankings of professional architecture programs show some of the same difficulties as the rankings of colleges and universities. As in most academic rankings, the data DI uses, the methods it uses to collect those data, and the way it uses them for ranking purposes lack rigor and clarity.
The DI rankings of professional architecture programs are determined based on opinion surveys. No opinion survey for rankings can continue to provide unbiased data and results year after year. Left unchecked, programs may try to game the system for a better rank, even though any substantive improvement in academic quality requires time.
DI collects data from multiple sources, but it is unable to monitor, verify, and validate its data collection processes. It has not shown how it resolves conflicting opinions about architecture programs presented by different sources, or how it uses descriptive qualitative and quantitative data in the way it ranks these programs. Because the terms and phrases used in its surveys are not properly defined, DI has no way to determine whether the answers a participant provides are wrong; for the same reason, it offers no disincentive for wrong answers.
DI surveys allow participants to rank only a handful of programs in any focus area. As a result, many programs that do excellent work in many focus areas are left out of the rankings. This is significant, because DI’s focus area rankings could give users options for comparing programs in their areas of interest that its “most hired” and “most admired” rankings cannot provide.
Halo effects observed in other ranking systems were identified in DI rankings. Having a “most admired” professional graduate architecture program in a school could improve the chance that its undergraduate professional program is also ranked as a “most admired” program.
Feedback effects observed in other ranking systems were also identified in DI rankings. As in many other ranking systems, a program’s previous rank could affect its current rank in the DI ranking system. As a result, an early mistaken ranking of a program could affect its rankings in later years.
The study raised doubts about the relevance of DI rankings to the education provided by professional architecture programs. Highly ranked professional architecture programs are not always highly evaluated by their accreditors. If meeting the NAAB accreditation standards is essential to ensuring a solid educational foundation for graduates and to ensuring that graduates can lead the way in innovation and emerging technologies and in anticipating the health, safety, and welfare needs of the public, then it is quite possible that DI rankings are misleading their users.
Many other factors also raised doubts about the relevance of DI rankings. In one example, these rankings did not show consistent associations with the pass rates of programs’ graduates on the architecture registration exams. In another, only some of the “most admired” programs were included in DI’s “most hired” category, and several “most hired” programs were not included in its “most admired” category. In yet another, just as in many other rankings, more expensive programs performed better than less expensive programs in the DI rankings.
According to the study, the likely negative impacts of DI rankings on professional architecture programs are many. Since DI rankings do not consider admission criteria, they may lead a program away from properly balancing the many factors that determine which applicants will improve program quality and make better architects. Since a program’s expenses tend to affect DI rankings, the rankings may create an incentive to reduce class size and increase tuition and fees. DI rankings may also encourage programs to spend substantial sums on improving their media presence for better visibility and reputation without changing the quality of education. Most importantly, DI rankings may create an artificial situation in which the graduates of a poorly ranked program have difficulty finding the jobs they deserve, even though substantive differences in the quality of education may not exist among programs holding different ranks.
To avoid some of these problems, it is suggested that ranking organizations allow consumers to evaluate professional architecture programs according to their own preferences. It is also suggested that DI and other ranking organizations seek expert help in assessing and evaluating professional architecture programs to improve the objectivity and relevance of their rankings. Before anything else, however, an active dialogue among all those involved in the education, production, and use of architecture is necessary to determine how to measure the quality of professional architecture programs for better rankings.