Article

Who Sends Scores to GRE-Optional Graduate Programs? A Case Study Investigating the Association between Latent Profiles of Applicants’ Undergraduate Institutional Characteristics and Propensity to Submit GRE Scores

by
Sugene Cho-Baker
* and
Harrison J. Kell
*
Educational Testing Service, Princeton, NJ 08541, USA
*
Authors to whom correspondence should be addressed.
Educ. Sci. 2022, 12(8), 529; https://doi.org/10.3390/educsci12080529
Submission received: 11 July 2022 / Revised: 25 July 2022 / Accepted: 26 July 2022 / Published: 4 August 2022
(This article belongs to the Section Higher Education)

Abstract

Many programs have made the submission of GRE scores optional. Little research examines differences in propensity to submit scores according to applicants’ characteristics, however, including the type of undergraduate institution they attended. This study’s purpose was to examine the degree to which the type of undergraduate institution applicants attended predicted score submission to GRE-optional programs, including when controlling for covariates (demographics, program degree and discipline, undergraduate grades). We used data provided by a doctoral degree–granting university to answer our research question. We indexed differences in GRE score submission using odds ratios. Both individually (1.93) and after controlling for covariates (2.00), we found that applicants from small, bachelor’s degree–granting schools were more likely to submit scores than applicants from large, doctoral degree–granting schools. Men were more likely to submit scores than women (1.55). Larger effects were observed for program characteristics: Ph.D. versus master’s (2.94), humanities versus social sciences (3.23), and fine arts versus social sciences (0.16). Our findings suggest that there may be differences in propensity to submit GRE scores to test-optional programs and that some of these differences may be associated with variables (undergraduate school, program type) that have not been widely discussed in the literature.

1. Introduction

Many graduate programs have made the submission of GRE scores optional for applicants [1,2]. Reasons for the spread of test-optional policies (TOPs) at the graduate level are multifaceted [3] and include concerns about the implications of overreliance on the GRE for diversity [4], skepticism about the predictive value of the test [5,6], and the ongoing hardship and disruption induced by the COVID-19 pandemic [7]. Despite the spread of TOPs at the graduate level, there is a scarcity of research on them. The little research that has been published focuses on faculty perceptions of the reasons behind and potential implications of adopting TOPs among graduate programs (e.g., [8]). This investigation aims to contribute to the burgeoning GRE-optional literature by exploring a topic that has been given little attention: differences in GRE score submission according to the type of undergraduate institution applicants attended.
The discussion around correlates of GRE scores—and, by extension, their submission—has focused on the demographic characteristics of graduate school applicants (e.g., ethnicity/race, gender, socioeconomic status (SES)) [9]. Absent from the discussion has been consideration of the implications of TOPs for the composition of graduate student bodies in terms of the undergraduate institutions those students attended. This gap is present despite the fact that U.S. universities have become increasingly stratified by ethnicity/race and SES since the Financial Crisis [10]. Moreover, the nearly 3000 four-year degree–granting institutions in the U.S. [11] vary along many dimensions, including selectivity, size, and public versus private control, all of which may contribute to the students attending them having a wide variety of experiences. Consequently, we propose that consideration of the implications of graduate-level TOPs for the backgrounds of students be extended to include the undergraduate institutions they graduated from. Accordingly, in the current investigation we used latent class analysis to develop a taxonomy of the undergraduate institutions attended by applicants to GRE-optional programs at a research-intensive university. We then predicted test score submission using applicants’ latent class memberships, controlling for relevant covariates (e.g., demographics).

1.1. The Graduate Record Examination

The Graduate Record Examination Project originated with the Carnegie Foundation for the Advancement of Teaching in 1936 [12], cosponsored by the graduate schools of Columbia, Harvard, Princeton, and Yale Universities [13]. The goal of the Project was to develop an objective measure of the scholastic knowledge that students had acquired during their undergraduate years, the idea being that the best predictor of individuals’ acquisition of future knowledge—such as during graduate study—is the extent of knowledge they have already acquired [13]. An impetus for the creation of the Project was the expectation that graduate programs would attract a substantially larger number of applicants as the Great Depression receded and that such an examination could aid in making admissions and/or fellowship decisions [14].
The Graduate Record Examination Project originally developed eight separate tests (biology, chemistry, fine arts, literature, mathematics, physics, social studies, verbal), which were first administered to first-year graduate students at Columbia, Harvard, Princeton, and Yale Universities in October of 1937 [15]. The tests were subsequently revised several times, and in 1946 the first iteration of the GRE (GRE General Test) was administered, producing two scores: quantitative and verbal. When Educational Testing Service (ETS) was founded in 1948, formed by the combination of the testing arms of the American Council on Education, Carnegie Foundation for the Advancement of Teaching, and College Entrance Examination Board [16,17], it took over development and administration of the GRE assessments. The GRE became a part of the general administration of the examination in 1949 [15].
Currently, the GRE consists of three sections, each of which provides test-takers and institutions with a score: Verbal Reasoning (GRE-V), Quantitative Reasoning (GRE-Q), and Analytical Writing (GRE-AW) [18]. GRE-V assesses individuals’ skills in analyzing and evaluating written material, synthesizing information obtained from that material, analyzing relationships among the components of sentences, and recognizing relationships among words and concepts. GRE-Q assesses individuals’ basic mathematical skills, including understanding of elementary mathematical concepts and modeling and solving problems using quantitative methods. GRE-AW assesses individuals’ skills in expressing and supporting complex ideas, developing and evaluating arguments, and sustaining a focused and coherent narrative. Scores on GRE-V and GRE-Q are reported on a scale of 130 to 170, in 1-point increments. Scores for GRE-AW range from 0 to 6, in half-point increments.

1.2. GRE-Optional Policies

1.2.1. Background and Prior Research

Graduate-level TOPs have grown concomitantly with the undergraduate test-optional movement, although the graduate-level movement has lagged the undergraduate-level movement by approximately 10 years. Some institutions have not required undergraduate applicants to submit test scores since the 1970s (e.g., Bowdoin College, Hampshire College [19]), but the movement did not gather strong momentum until 2001, when University of California system president Richard Atkinson called for a transition from the SAT to tests more closely aligned with high school curricula [20]. Six years later, the number of test-optional undergraduate institutions was described as “growing” [21], and reports on the spread of test-optional policies in prominent popular sources were frequent by the latter half of the decade (e.g., [22,23,24]).
The GRE has been criticized intermittently since its inception (e.g., [25,26,27]), but sustained objections to the test did not emerge until the mid-2010s (e.g., [4,28]). Some elements of these critiques were aimed less at the test itself and more at the misinterpretation and misuse of its scores in graduate admissions decision-making—specifically, the imposition of cut-scores. Criticism of the GRE increased throughout the remainder of the decade, in both academic (e.g., [29]) and popular (e.g., [30]) outlets, with the accompanying spread of graduate-level TOPs. While the movement toward GRE-optional policies had gathered significant momentum by the beginning of 2020 [2], the COVID-19 pandemic spurred widespread concerns about access to testing centers and about the disproportionate burden the pandemic placed on the resources of underrepresented students, with implications for their ability to prepare for the examination [7].
Critiques of the GRE are typically two-fold [31]. First, its ability to forecast success in graduate school has been doubted: its most consistently demonstrated ability is to predict grades in graduate school, which many doctoral faculty members do not consider truly indicative of success. This criticism was partially driven by a series of studies that found evidence that GRE scores were a relatively poor predictor of criteria that many graduate faculty members found to be most important, such as research productivity and completion (e.g., [5,6,32,33]). The conclusions of some of these studies have been questioned on methodological grounds, however [34,35,36]. Other studies have found GRE scores to be predictive of completion [37,38], time to degree [39], research productivity [37,40], and quality of initial job placement [41].
Second, GRE scores evince mean differences among some demographic groups; women, members of often underrepresented ethnic/racial groups (e.g., African-American/Black, Hispanic/Latino), and people from lower SES backgrounds tend to score lower on the GRE, on average, relative to men, Whites, and people from socioeconomically advantaged backgrounds [42,43]. These group differences in average scores are sometimes interpreted as evidence that the GRE is biased against members of these lower-scoring groups (e.g., [2]). It is important to note, however, that while extensive effort should be expended to investigate the sources of group differences in standardized assessment scores when they are observed [44], the existence of such differences is not intrinsic evidence of bias. The Standards for Educational and Psychological Testing reject equating group differences in mean test scores alone with test bias: “the Standards’ measurement perspective explicitly excludes one common view of fairness in public discourse: fairness as the equality of testing outcomes for relevant test-taker subgroups … group differences in [testing] outcomes do not in themselves indicate that a testing application is biased or unfair” [44] (p. 54). Standardized tests are typically designed to capture scholastic knowledge and skills (e.g., verbal reasoning)—constructs that are shaped by differential access to learning opportunities due to broad societal forces (e.g., systemic racism [45])—which can lead to group differences in test scores. When evaluating the potential bias of a standardized test—or any assessment—it is important to consider whether it predicts equally well within different subgroups and whether there are differences in the criterion across subgroups. Although a great deal of evidence supports the fairness of standardized tests in terms of prediction among undergraduates (see evidence reviewed in [3,45]), less research has examined whether this is the case for the GRE.
Nonetheless, Burton and Wang [46] found evidence that the criterion-related validity of the GRE is comparable across subgroups, and general trends in doctoral degree completion resemble trends in group differences in GRE scores [47], although in both cases there is variability across disciplines. Simulations suggest that differentially emphasizing GRE scores when making graduate admissions decisions has implications both for the diversity of admitted students and for their academic performance in graduate school [45].
Although average demographic group differences in GRE scores do not constitute de facto evidence of the GRE’s unfairness, they can be problematic for promoting diversity in graduate programs—particularly when cut-scores based on GRE performance are applied during the graduate admissions process, a practice in direct opposition to guidelines and recommendations that have been in place for decades [4,13,16,18], including as starkly set out in a report prepared for the Office of Civil Rights and U.S. Department of Education [48]: “Like all measures of academic potential, however, test scores are less than perfect, and should never be the sole criterion upon which an admissions decision is made, for they can never measure the totality of factors that are related to an individual’s ability to achieve in an academic program” (underline in original). When hard cut-scores are employed, a greater percentage of applicants from underrepresented groups—due to the average differences in their scores in aggregate—will not meet the required threshold compared to applicants from majority groups, resulting in their automatic rejection and setting the stage for the continued underrepresentation of people from those backgrounds in some graduate disciplines.
Despite the rapid rise of GRE-optional policies, there is little published research on them. We are unaware of any investigations, for example, examining differences in admission rates between GRE score submitters and nonsubmitters, or differences in graduate school success between score submitters and nonsubmitters. Although diversity is a major reason many programs cite for adopting TOPs, there are apparently no peer-reviewed publications linking increases (or decreases) in diversity to GRE-optional policies, although several popular reports have linked graduate-level TOPs to more diverse applicant pools [49,50].
We are aware of only three scholarly studies of GRE-optional policies and their potential implications. Cahn [8] interviewed faculty members and others directly involved in graduate admissions at 30 health programs that did not require the submission of GRE scores, asking them to indicate how successful their efforts to enroll diverse classes had been; 20% of interviewees responded “successful”, 13% responded “somewhat successful”, and 44% “not successful” (23% did not answer). Cooper and Knotts [51] surveyed Master of Public Administration programs and examined relations among program prestige, program diversity, and program size, and whether or not the programs required submission of a test score; none of the associations were significant. Dang et al. [31] examined whether including or excluding GRE scores benefited or harmed underrepresented applicants relative to majority applicants in two population health programs. They randomly assigned reviewers to four experimental conditions: GRE scores redacted versus unredacted crossed by underrepresented versus majority status. The investigators concluded that inclusion or exclusion of GRE scores had little impact on evaluations of applicants, for both underrepresented and majority application conditions.

1.2.2. Predicting GRE Score Submission

As little published research has investigated GRE-optional policies, there is correspondingly little research—empirical or theoretical—on possible predictors of GRE score submission and the decisions underlying score submission. What data are available allow for multiple, viable predictions about score submission patterns. For example, based on the fact that women tend to score lower than men on the GRE, on average, it could be predicted that women are less likely than men to submit test scores—yet if the findings of Owens and colleagues’ [52] qualitative study of graduate students in physics and astronomy programs with test-optional policies for the GRE Physics Subject Test can be generalized to the GRE itself, the opposite prediction can be made. Owens et al. [52] found that men were less likely to be concerned about not submitting their scores and more likely to take advantage of test-optional policies when they received a poor score, whereas women felt more pressure to send their scores, even when they perceived them to be low. Similarly, given the time and expense involved in preparing for and taking the GRE (although fee waivers are available), it could be predicted that individuals of lower socioeconomic status are less likely to submit scores, simply because they are less likely to take the test. Yet there is anecdotal evidence that some undergraduate applicants from less well-resourced high schools (e.g., [53]) submit test scores in order to “make up” for the fact that they did not attend well-known, well-connected high schools. If the same phenomenon occurs at the graduate level, individuals from lower income backgrounds—who are more likely to attend less prestigious undergraduate institutions [10]—could be predicted to be more likely to submit GRE scores.
By the same token, Posselt [28,54] found evidence for “elite homophily” among graduate faculty making admissions decisions: implicit and explicit preferences for various types of undergraduate institutions, often manifesting in favoritism toward applicants who graduated from schools they themselves were alumni of, from prestigious institutions overall, and/or from institutions with well-regarded discipline-relevant graduate programs. If graduate program applicants are aware of elite homophily, individuals from less well-known or highly ranked undergraduate institutions might be predicted to be more likely to submit GRE scores, in order to compensate for having less access to informal networks, high-profile letter writers, or in-depth research experiences (cf. [3,55]).

1.3. Current Investigation

The current research examines demographic correlates of submission of GRE scores to graduate programs with TOPs—but it utilizes them as covariates, rather than as focal predictors. Instead, we focus our investigation on whether patterns of GRE score submission differ according to the type of undergraduate institution an applicant attended. The interactions that students have with faculty and peers in the undergraduate educational environment constitute powerful socialization processes that can shape their knowledge, skills, and dispositions [56,57]—and the varieties of faculty members and peers differ greatly across institutions. For example, the outlooks and approaches to education and student guidance of faculty members at small, teaching-oriented liberal arts colleges may differ substantially from those at large, research-intensive universities. As graduate programs seek intellectual and experiential diversity among their students (e.g., [58,59,60,61,62]), one partial means to achieving this diversity may be to admit students from a wide variety of undergraduate backgrounds. Accordingly, we studied differences in test score submission to a public, research-intensive university’s GRE-optional programs as a function of the type of undergraduate institution those applicants attended. We found that applicants who attended relatively small, bachelor’s degree–granting, relatively unselective, private institutions were more likely to submit GRE scores than applicants from relatively large, doctoral degree–granting, highly selective, public institutions. We also found that men were more likely to submit scores than women. We observed the largest effects for several program-level characteristics (i.e., terminal degree, discipline).

2. Materials and Methods

2.1. Data

Administrators from a research-intensive, doctoral degree–granting university provided us with information about applicants to their institution’s GRE-optional graduate programs from the 2015 to 2019 academic years. We selected programs that accepted only the GRE as a standardized test submission option. In other words, we removed data from programs with different criteria, such as the GRE being one of multiple standardized test options (e.g., the Graduate Management Admission Test (GMAT)) or the GRE being waived only for a subset of applicants with other qualifications (e.g., a minimum undergraduate grade point average (UGPA)). We also excluded data from programs that required submission of GRE Subject Tests. Test-optional status was identified at the level of individual graduate programs, in terms of both academic discipline and degree granted. The institution was not financially compensated for providing its data. The data contained information about applicants’ demographic characteristics, GRE score submission status and scores, UGPA, and the name of the undergraduate institution they attended.
To acquire applicants’ undergraduate institutional characteristics, we used institutional profile data from the National Center for Education Statistics’ Integrated Postsecondary Education Data System (IPEDS) [63], a national public database of U.S. educational institutions. We took the names of applicants’ undergraduate institutions provided by the university and linked these records with institution-level characteristics from IPEDS.
The initial sample consisted of 6252 graduate program applicants. Of these, 1044 were international applicants; we removed them, as most attended undergraduate institutions outside the U.S. and lacked IPEDS data. An additional 1215 applicants had missing data on all seven institutional measures and were likewise removed, leaving a final sample of 3993 applicants. These applicants originated from 644 institutions across the U.S.

2.2. Measures

2.2.1. Admissions Data

Measures for our applicant data consisted of two application-related measures (GRE score submission status, UGPA), two demographic characteristics (gender, ethnicity/race), and two graduate program–related measures (degree type, program discipline). GRE score submission status was a dichotomous measure which served as our main outcome variable.
The remaining measures were used as covariates. The categories for gender were female and male (the institution did not include an option for selecting other gender categories). The categories for ethnicity/race were African-American/Black, Asian-American, Hispanic/Latino, and White. We also included an “Other” category, consisting of individuals from the following groups, only a small number of whom were present in the sample: American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander, and multiracial. Degree type was categorized as master’s or Ph.D. based on the program that applicants applied to. Lastly, based on the data provided and the disciplinary categories used in [64], we classified the academic disciplines of the programs individuals applied to into five groups: fine arts, humanities, nursing, STEM (science, technology, engineering, and mathematics), and social sciences.

2.2.2. Undergraduate Institutional Characteristics

Of the seven undergraduate institutional characteristics extracted from IPEDS, one was a dichotomous variable indicating institutional type (1 = public 4-year versus private 4-year). The Carnegie undergraduate profile was a 15-point scale based on five institutional characteristics: (a) 2-year versus 4-year institution, (b) the proportion of part-time versus full-time students, (c) entrance examination (SAT or ACT) scores of entering first-year students, (d) the proportion of applicants who were offered admission, and (e) the proportion of entering students who transferred in from another institution. We considered this the main indicator of institutional selectivity. The Carnegie undergraduate profile values were: 1 = two-year, higher part-time; 2 = two-year, mixed part/full-time; 3 = two-year, medium full-time; 4 = two-year, higher full-time; 5 = four-year, higher part-time; 6 = four-year, medium full-time, inclusive, lower transfer-in; 7 = four-year, medium full-time, inclusive, higher transfer-in; 8 = four-year, medium full-time, selective, lower transfer-in; 9 = four-year, medium full-time, selective, higher transfer-in; 10 = four-year, full-time, inclusive, lower transfer-in; 11 = four-year, full-time, inclusive, higher transfer-in; 12 = four-year, full-time, selective, lower transfer-in; 13 = four-year, full-time, selective, higher transfer-in; 14 = four-year, full-time, more selective, lower transfer-in; 15 = four-year, full-time, more selective, higher transfer-in. As an additional indicator of undergraduate selectivity, we used the level of the highest degree offered by each institution, coded on a 7-point scale: 1 = associate’s degree; 2 = at least 2, but less than 4, academic years; 3 = bachelor’s degree; 4 = postbaccalaureate certificate; 5 = master’s degree; 6 = post-master’s certificate; 7 = doctoral degree. The remaining four indicators were continuous variables related to school size and ethnic/racial composition.
These were total enrollment (in thousands) and the percentage of White, African-American/Black, and Hispanic/Latino enrollees at each institution. Although IPEDS provided percentages for other groups (Asian-American, American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander, two or more races), we did not include them, as they comprised only a small percentage of the enrollee pool.

2.3. Analyses

In order to identify the undergraduate institutional latent profiles, we used latent class analysis (LCA), a person-centered mixture modeling technique that identifies latent subpopulations based on patterns of responses to observed variables [65]. Specifically, LCA assigns individuals to latent groups comprising those with similar response patterns across multiple indicators of interest [66]. An advantage of LCA is that, by extracting the key patterns, it reduces the many intersections of multiple observed characteristics to a small number of unique and representative “types” of individuals [67]. For example, seven binary grouping variables alone would yield 2^7 = 128 possible response patterns. We used LCA to reduce this space to a small number of subgroups representing the key patterns of responses across the seven institutional characteristics, allowing a more direct interpretation of the pairwise comparisons. This allowed us to classify individuals according to their undergraduate institutional memberships, based on the combined information from the institutional characteristic measures. This approach was appropriate given that our regression model controls for individual-level demographic characteristics in predicting individuals’ likelihood of submitting GRE scores.
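The class-reduction idea above can be sketched with synthetic data. The study fit its models in Mplus; as a rough stand-in, the sketch below uses scikit-learn’s GaussianMixture (a latent-profile-style mixture model for continuous indicators) on fabricated values for seven standardized institutional characteristics, so every number and name here is hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Seven binary indicators alone admit 2 ** 7 = 128 distinct response patterns.
n_patterns = 2 ** 7

# Fabricated stand-in for seven standardized institutional characteristics,
# drawn from three loosely separated clusters of "institutions".
centers = rng.normal(0.0, 1.5, size=(3, 7))
X = np.vstack([rng.normal(c, 0.5, size=(500, 7)) for c in centers])

# Fit a three-class mixture and assign each applicant a most likely class,
# collapsing the large pattern space into a few representative types.
gm = GaussianMixture(n_components=3, random_state=0).fit(X)
classes = gm.predict(X)
print(n_patterns, np.unique(classes).size)
```

In the study itself, the analogous output is each applicant’s most likely latent class, which then enters the logistic regression as a predictor.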
We used all seven undergraduate institutional characteristics as indicators to configure the latent profiles using Mplus version 8 [66]. The final number of latent classes and model fit were assessed based on the Bayesian Information Criterion (BIC) and the Lo–Mendell–Rubin likelihood ratio test, which compares the fit improvement of alternative models [68].
To examine the association between undergraduate latent profiles and GRE submission status, we used the most likely latent class assigned to each individual based on the measurement model. Then, we conducted logistic regression predicting GRE submission status from the latent classes, both separately and controlling for applicants’ demographic characteristics, UGPAs, and characteristics of the programs they applied to. Although we report the results of significance tests, we recognize that statistical significance is heavily influenced by sample size and is not necessarily indicative of practical importance [69]. Thus, we also report effect sizes, in the form of odds ratios (ORs).

3. Results

3.1. Sample Description

Four-hundred thirty-five applicants (10.89%) submitted GRE scores. Descriptive statistics for applicants are shown in Table 1 and Table 2. The majority of applicants were women (70.17%) and White (71.51%) and applied to master’s programs (90.98%). The largest number of applicants applied to social sciences programs (65.36%), followed by nursing (11.34%) and STEM (10.49%). The makeup of the applicant pool in terms of gender and ethnicity/race was comparable to that of the U.S. population holding bachelor’s degrees [70], with two exceptions: Women were overrepresented (70.17% in the current sample compared to 53.27% in the U.S. population of four-year degree holders), and Asian-Americans were underrepresented (4.81% of the current sample compared to 9.28% in the U.S. population of four-year degree holders). Applicants to master’s degree programs were overrepresented relative to the general population of first-time graduate school enrollees (91% compared to 82%) [64]. It is important to keep in mind that our data only included applicants to graduate programs that adopted TOPs and thus may systematically differ in multiple ways from the general population of U.S. graduate program applicants.
Descriptive statistics for applicants’ GRE scores are presented in Table 3. Patterns in group means of GRE scores largely mirrored those observed in the general population of GRE test-takers [42]. On average, men scored higher than women on GRE-V and GRE-Q, although the differences were smaller than in the general GRE test-taker population; women scored slightly higher on GRE-AW. The general trend across ethnicity/race categories was for average scores to be highest among Asian-Americans, followed by Whites, Other, Hispanic/Latinos, and African-Americans/Blacks. This trend also largely mirrored what is observed in the general GRE test-taker pool, although there was a tendency for average scores for all groups to be higher in the current sample than among the population of GRE test-takers.
In-depth information about the institutions that applicants attended, including descriptive statistics, is shown in Table A1. The majority of applicants in the sample attended public undergraduate institutions (68.46%) and institutions that were four-year, full-time, selective (mean Carnegie profile = 13), and offered a doctoral degree (80.39%; highest degree offered = 5.6). The mean total enrollment size of applicants’ undergraduate institutions was 12,172, although there was substantial variation (SD = 8938). Most applicants attended majority White institutions (58.57%).

3.2. Latent Profile Analyses

The final number of latent classes and model fit were assessed based on the Akaike Information Criterion (AIC), BIC, entropy [68], the Lo–Mendell–Rubin likelihood ratio test (LMR) [68], and the parametric bootstrapped likelihood ratio test (BLRT) [71]. Table 4 shows the model fit statistics for the different latent class models examined. The AIC and BIC are relative model fit indices, in which lower values indicate better fit [72]; in our results, these values continued to decrease as more classes were added to the model. Entropy indicates how distinct the classes are from each other based on the latent indicators, with higher values reflecting cleaner classification; a common rule of thumb for good classification is values above 0.80 [68]. Entropy was above 0.80 for all models. The LMR and BLRT evaluate the relative fit enhancement of a model compared to a model with one fewer class, and the results showed a significant improvement in relative fit for the three-class model compared to the two-class model. However, relative model fit was not enhanced beyond the three-class model. On the whole, the three-class model was considered the best-fitting model.
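The enumeration routine described above can be sketched on synthetic data (the study itself used Mplus, and also ran the LMR and bootstrapped likelihood ratio tests, which scikit-learn does not provide). The normalized entropy formula below is the standard mixture-modeling criterion; the three-profile data are fabricated for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Fabricated data: three well-separated seven-indicator profiles.
X = np.vstack([rng.normal(m, 0.4, size=(400, 7)) for m in (-2.0, 0.0, 2.0)])

results = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    ent = float("nan")  # entropy is undefined for a one-class model
    if k > 1:
        post = gm.predict_proba(X)  # posterior class probabilities
        # Normalized entropy: 1.0 means perfectly crisp classification.
        ent = 1.0 - (-np.sum(post * np.log(post + 1e-12))) / (len(X) * np.log(k))
    results[k] = (gm.aic(X), gm.bic(X), ent)
    print(k, [round(v, 2) for v in results[k]])
```

On data like these, AIC and BIC drop sharply up to the true number of profiles and the entropy criterion stays near 1.0, mirroring the pattern the authors describe for Table 4.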
Figure 1 shows the standardized means of the institutional characteristics for each of the three profiles. There were distinct differences in the mean undergraduate institutional characteristics across the latent profiles, and we used this information to label each profile. The first profile comprised about 80% of the overall sample and was labeled “Relatively large, doctoral degree-granting, highly selective, public”, given that these applicants were most likely to have attended a public four-year institution and their institutions had the highest mean total enrollment, Carnegie undergraduate profile, and highest degree offered. The second profile (15%) was labeled “Relatively small, master’s degree-granting, moderately selective”. The third profile (5%) was labeled “Relatively small, bachelor’s degree-granting, relatively unselective, private”, given that these applicants were least likely to have attended a public four-year institution and their institutions had the smallest mean total enrollment and a relatively low Carnegie undergraduate profile and highest degree offered.

3.3. Predicting GRE Score Submission

Table 5 presents the percentage of test score submitters by latent class. The percentage of GRE score submitters was 9.72% among Profile 1 applicants (Relatively large, doctoral degree-granting, highly selective, public), whereas it was higher among applicants from Profile 2 (11.11%) and Profile 3 (17.17%) schools. Put another way, among applicants who did not submit a test score, eight percent were from small, less selective institutions, whereas among applicants who did submit test scores, the percentage from small, less selective schools was nearly double (15%).
We also estimated two logistic regression models predicting submission of GRE scores (Table 6). In the first model, we predicted score submission solely from the institutional latent classes. Across the three comparisons, two results were significant: applicants who attended Profile 3 schools (relatively small, bachelor’s degree–granting, relatively unselective, private) were more likely to submit scores (OR = 1.93, p < 0.01) than applicants who attended Profile 1 schools (relatively large, doctoral degree–granting, highly selective, public), and applicants who attended Profile 3 schools were more likely to submit scores (OR = 1.66, p < 0.05) than those who attended Profile 2 schools (relatively small, master’s degree–granting, moderately selective).
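Because the first model contains only the latent classes, its unadjusted odds ratios can be recovered directly from the submission percentages in Table 5: an odds ratio is simply the ratio of two groups' odds, p/(1 − p). A quick arithmetic check:

```python
def odds(p: float) -> float:
    # convert a probability/proportion to odds
    return p / (1 - p)

def odds_ratio(p_a: float, p_b: float) -> float:
    # odds of group A relative to odds of group B
    return odds(p_a) / odds(p_b)

# GRE score submission rates by latent profile (Table 5)
p1, p2, p3 = 0.0972, 0.1111, 0.1717

print(round(odds_ratio(p3, p1), 2))  # 1.93 -- matches the reported Profile 3 vs. 1 OR
print(round(odds_ratio(p3, p2), 2))  # 1.66 -- matches the reported Profile 3 vs. 2 OR
```

The covariate-adjusted odds ratios in the second model cannot be reproduced this way, since they come from a multivariable logistic regression rather than raw group rates.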
The second model we evaluated included five covariates: degree pursued, discipline of the program applied to, ethnicity/race, gender, and UGPA. Inclusion of these covariates did not substantively alter the difference in submission propensity between applicants from Profile 3 versus Profile 1 schools (OR = 2.00, p < 0.01) but did render the Profile 3 versus Profile 2 comparison statistically nonsignificant.
Several associations between covariates and GRE score submission propensity were significant. Men were more likely to submit scores than women (OR = 1.55, p < 0.001), and applicants to Ph.D. programs were more likely to submit scores than applicants to master’s programs (OR = 2.94, p < 0.001). Compared to applicants to social sciences programs, applicants to humanities (OR = 3.23, p < 0.001) and STEM (OR = 1.92, p < 0.001) programs were more likely to submit scores, while applicants to fine arts (OR = 0.16, p < 0.001) and nursing (OR = 0.45, p < 0.01) programs were less likely to submit scores.
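Odds ratios act multiplicatively on the odds scale, not the probability scale, so their practical meaning depends on the baseline rate. The sketch below translates two of the reported ORs into implied probabilities under a purely hypothetical 10% baseline submission rate; the baseline is illustrative, not a figure from the data:

```python
def apply_odds_ratio(baseline_p: float, odds_ratio: float) -> float:
    # Multiply the baseline group's odds by the OR,
    # then convert the resulting odds back to a probability
    odds = baseline_p / (1 - baseline_p) * odds_ratio
    return odds / (1 + odds)

# If 10% of master's applicants submitted scores (hypothetical baseline),
# the reported Ph.D. vs. master's OR of 2.94 would imply roughly a 24.6%
# submission rate for Ph.D. applicants:
print(round(apply_odds_ratio(0.10, 2.94), 3))  # 0.246

# An OR below 1 shrinks the odds: the fine arts vs. social sciences OR of
# 0.16 would imply about a 1.7% rate under the same hypothetical baseline:
print(round(apply_odds_ratio(0.10, 0.16), 3))  # 0.017
```

This conversion is why the same OR can correspond to very different absolute differences in submission rates depending on how common submission is in the reference group.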
Average UGPAs did not differ significantly across the three latent classes, but mean GRE scores did differ across the three institutional profiles (p < 0.001 for GRE-Q, p < 0.05 for GRE-V). Specifically, applicants from Profile 3 institutions had significantly higher GRE-V and GRE-Q scores than applicants from both Profile 1 and Profile 2 institutions. See Table 3 for complete details.

4. Discussion

Our primary finding was that applicants to GRE-optional graduate programs who attended relatively small, bachelor’s degree–granting, relatively unselective, private undergraduate institutions (Profile 3) were more likely to submit GRE scores than attendees of relatively large, doctoral degree–granting, highly selective, public institutions (Profile 1). This finding held even after controlling for a variety of covariates. Notably, while average UGPAs did not differ between applicants from Profile 1 and Profile 3 schools, average GRE scores were higher among Profile 3 applicants than those from Profile 1 institutions.
Although not the primary focus of our investigation, we also examined patterns of GRE score submission across our covariates. We found that men were more likely to submit GRE scores than women but found no significant differences in submission across ethnic/racial groups. Applicants to Ph.D. programs were more likely to submit scores than applicants to master’s programs. Relative to applicants to social science programs, applicants to humanities and STEM programs were more likely to submit scores, while applicants to fine arts and nursing programs were less likely to do so.
Strictly interpreting effect sizes according to statistical benchmarks can sometimes be misleading [73,74]. Nonetheless, to contextualize our findings, we note that the majority of our effects—statistically significant and nonsignificant—would be considered small according to Chen et al.’s [75] guidelines for interpreting odds ratios. The difference in submission rates of GRE scores between Profile 3 and Profile 1 schools would be classified as between small and medium, as would the difference between applicants to social sciences and STEM programs and the difference between social sciences and nursing programs. Differences in submission propensity between Ph.D. and master’s programs and social sciences versus humanities programs were medium-sized. For demographic variables, the difference in submission propensity between women and men was small, as were differences between all ethnic/racial groups compared to Whites. By far the greatest effect was the difference in submission propensity between applicants to social sciences and fine arts programs, which would be classified as large according to Chen et al.’s [75] benchmarks.

Limitations and Future Research Directions

Our findings are purely empirical, and we cannot make inferences about why relatively more applicants from Profile 3 schools than from Profile 1 schools submitted GRE scores. This result is compatible with multiple explanations [76], any of which could hold for some applicants to the programs with TOPs in our sample and not for others. For example, Profile 3 and Profile 1 applicants may differ systematically in how truthful they believe claims about programs being GRE-optional “really” are; Profile 3 applicants, compared to Profile 1 applicants, could be more skeptical about TOPs and believe that not submitting scores will actually harm their admission chances. If some applicants hold this belief, it could have multiple origins, ranging from an intuitive feeling to advice from faculty members.
An alternative explanation is that because Profile 1 applicants attended more selective institutions (and thus, on average, institutions that are highly ranked and well known, with strong graduate programs), they felt less pressure to submit GRE scores, believing that the name value of their institution, paired with their academic record and other application materials, would give them a good chance of admission. Conversely, applicants from smaller, less selective schools (Profile 3) may be more likely to submit GRE scores to provide additional evidence of their readiness for graduate work, believing that their undergraduate institutions are not particularly well known or well regarded. This interpretation is consistent with evidence that some individuals from less privileged groups pursue a similar strategy when applying to four-year colleges [53]. If future research repeatedly confirms this pattern, allowing applicants to choose which application materials admissions decision makers should consider (and emphasizing that choice) may lift barriers for individuals who lack the time and resources to fully prepare for standardized tests; this consideration could also extend to other components of their applications (e.g., opportunities to obtain quality undergraduate research experience and access to high-profile letter writers are not equally distributed across institutions [28]). It also implies, however, that adopting test-blind policies (and, more broadly, any admissions approach that does not consider applicants’ qualifications in a highly multidimensional way) may eliminate an avenue through which some applicants believe they can demonstrate their commitment to, and readiness for, graduate school and compensate for aspects of their applications they consider relatively weak.
Indeed, the mere fact that individuals from small, less selective schools comprised only 5% of the applicant pool to the test-optional programs in our sample speaks to the challenges students coming from less privileged backgrounds may face when choosing to attend, and applying to, graduate school. Ultimately, to tease apart competing explanations for test score submission, and to identify novel ones, future research must directly query graduate school applicants as to why (or why not) they submitted GRE scores.
More fundamentally, all of our findings must be replicated for their implications to be fully discerned. First, latent class modeling results often do not replicate in new samples, given the method’s sensitivity to different degrees of variance across datasets [77], and it will be critical to establish whether the latent profiles we derived also describe the schools attended by applicants to test-optional programs at other institutions. To examine whether this pattern is unique to test-optional programs, it will also be fruitful to explore whether the characteristics and prevalence of the latent profiles are similar at institutions that have adopted different admissions test policies. Second, a wide sampling of institutions will be important for determining the generality of our results, as the populations of students applying to graduate programs with TOPs, and their score submission patterns, may differ substantially across types of universities (e.g., large and doctoral degree–granting vs. medium-sized and master’s degree–granting). Third, other undergraduate institutional characteristics could shape individuals’ decisions to take the GRE and apply to graduate school. For example, whether applicants’ undergraduate institutions used flexible test policies, or how many opportunities their programs offered for conducting research, may influence their beliefs about the value of taking standardized tests and submitting test scores. Similarly, individuals’ socioeconomic status [78,79] and educational aspirations [80,81], which have been shown to be associated with graduate school pursuits (enrollment), may also predict their likelihood of taking the GRE and submitting GRE scores. Last, our data were collected prior to the COVID-19 pandemic, which has caused massive changes in society in general and in the educational system specifically. Most directly, it caused a large number of institutions to go GRE-optional [7], which may have altered score submission patterns among applicants. Replication and extension of this study must therefore rely on recent data from graduate programs with TOPs.
Future research should move beyond looking only at differences in GRE score submission across undergraduate institutional profiles and demographic backgrounds; it should examine the relationship between score submission and outcomes such as acceptance and enrollment, and whether score submission status interacts with these predictors in predicting such outcomes. Dang et al. [31] found that inclusion or exclusion of GRE scores did not affect faculty evaluations of graduate applications, implying that the presence or absence of scores would also have no impact on admissions outcomes. However, as with the current investigation, they relied on prepandemic data gathered from a single institution, and they did not possess actual admissions and enrollment data. As GRE-optional policies spread, it is important that their implications for admissions and enrollment be carefully studied, both to ascertain the degree to which they are meeting stakeholders’ goals and to detect any unanticipated consequences they may have. Finally, although the current investigation focused on differences in GRE score submission according to the undergraduate institutions applicants attended, and much of the debate surrounding the GRE has concerned demographic variables [2,4,9], the largest effects we observed were related to program-level characteristics (i.e., discipline, terminal degree offered). Subsequent investigations should seek to replicate and explicate these findings, and to identify other variables linked to GRE score submission that heretofore have not featured prominently in discussions of graduate-level TOPs.

5. Conclusions

The GRE has been the most widely used standardized test in graduate admissions decision-making for decades but has increasingly been made optional due to concerns about diversity and its predictive value. Despite the spread of GRE-optional policies, however, they have been the subject of little empirical investigation. We found a small difference in propensity to submit GRE scores to graduate programs with TOPs across the types of undergraduate institution applicants attended, as well as between men and women. Larger differences in propensity to submit scores were associated with graduate program characteristics. Notably, the possible influence that the type of undergraduate institution attended, and the characteristics of the graduate programs themselves, have on applicants’ decisions to submit GRE scores has not featured prominently in the GRE-optional literature. These findings point to the importance of not only studying demographic correlates of GRE score submission but also expanding the conversation to include many other types of variables.
Our results, combined with those of the few other studies of GRE-optional policies (e.g., [52]), underscore the need for university personnel to think carefully through their graduate admissions policies in terms of test scores and, if they have adopted a GRE-optional policy, to study who is and who is not submitting scores, as the empirical realities of such policies may prove counterintuitive in some regards. For example, we found no major demographic differences in applicants’ tendency to submit GRE scores, even though such differences might be expected to be a central finding of this kind of research. Conversely, our core finding that applicants from less selective undergraduate institutions were the most likely to submit GRE scores is surely contrary to many readers’ expectations, as may be the fact that humanities applicants were more likely to submit scores than social sciences applicants. When making changes to their admissions policies, graduate program decision makers must think carefully through their goals in enacting such changes and consider the likely effectiveness of these changes in light of the possibility of unanticipated results and unintended consequences. This is critical given the immense variability in the needs, priorities, and characteristics of the thousands of graduate programs in the U.S.
As the first direct empirical investigation of predictors of GRE score submission to GRE-optional programs, we hope this study spurs additional research on this important topic in all its facets.

Author Contributions

Conceptualization, S.C.-B. and H.J.K.; methodology, S.C.-B. and H.J.K.; formal analysis, S.C.-B.; writing—original draft preparation, H.J.K.; writing—review and editing, S.C.-B. and H.J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because the data provided were deidentified and analyzed at a level making it impossible to link results back to individual records, and because of the anonymity of the institution that provided the data.

Informed Consent Statement

Informed consent was waived because the data were archival and deidentified and the institution providing the data is anonymized in the current research.

Data Availability Statement

We cannot make the data publicly available due to the privacy concerns of the university that shared them with us. However, interested parties can contact the authors of this paper, who will relay the request for data to our institutional partner.

Acknowledgments

We thank Brent Bridgeman, Heather Buzick, Laura Hamilton, and Margarita Olivera-Aguilar for providing insightful feedback on our work. We thank our university partner for providing us with the data.

Conflicts of Interest

The authors are employed by the company that developed and administers the GRE. However, they work in ETS’s research and development division, and the GRE Program played no role in dictating or influencing the content of this study.

Appendix A

Table A1. Characteristics of undergraduate institutions attended by graduate program applicants.
Characteristic | M (SD) | %
Public (vs. Private) | -- | 68.46
Carnegie Undergraduate Profile | 12.98 (2.05) | --
  1. two-year, higher part-time | -- | 0.53
  2. two-year, mixed part/full-time | -- | 0.15
  3. two-year, medium full-time | -- | 0.30
  4. two-year, higher full-time | -- | 0.00
  5. four-year, higher part-time | -- | 1.36
  6. four-year, medium full-time, inclusive, lower transfer-in | -- | 0.13
  7. four-year, medium full-time, inclusive, higher transfer-in | -- | 0.85
  8. four-year, medium full-time, selective, lower transfer-in | -- | 0.03
  9. four-year, medium full-time, selective, higher transfer-in | -- | 2.56
  10. four-year, full-time, inclusive, lower transfer-in | -- | 1.98
  11. four-year, full-time, inclusive, higher transfer-in | -- | 5.87
  12. four-year, full-time, selective, lower transfer-in | -- | 10.82
  13. four-year, full-time, selective, higher transfer-in | -- | 18.14
  14. four-year, full-time, more selective, lower transfer-in | -- | 49.99
  15. four-year, full-time, more selective, higher transfer-in | -- | 7.30
Highest Degree Offered | 5.55 (1.05) | --
  1. associate’s degree | -- | 0.98
  2. bachelor’s degree | -- | 3.98
  3. postbaccalaureate certificate | -- | 0.00
  4. master’s degree | -- | 9.27
  5. post-master’s certificate | -- | 5.38
  6. doctoral degree | -- | 80.39
Total Enrollment | 12,171.93 (8936.63) | --
% of White Enrollees | 58.57 (13.76) | --
% of African-American/Black Enrollees | 8.35 (8.25) | --
% of Hispanic/Latino Enrollees | 11.96 (7.52) | --

References

  1. Gewin, V. US geoscience programmes drop controversial admissions test. Nature 2020, 584, 157. [Google Scholar] [CrossRef] [PubMed]
  2. Langin, K. Ph.D. programs drop standardized exam. Science 2019, 364, 816. [Google Scholar] [CrossRef] [PubMed]
  3. Woo, S.E.; LeBreton, J.; Keith, M.; Tay, L. Bias, fairness, and validity in graduate-school admissions: A psychometric perspective. Perspect. Psychol. Sci. 2022. [Google Scholar] [CrossRef]
  4. Miller, C.W.; Stassun, K. A test that fails. Nature 2014, 510, 303–304. [Google Scholar] [CrossRef]
  5. Hall, J.D.; O’Connell, A.B.; Cook, J.G. Predictors of student productivity in biomedical graduate school applications. PLoS ONE 2017, 12, e0169121. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Petersen, S.L.; Erenrich, E.S.; Levine, D.L.; Vigoreaux, J.; Gile, K. Multi-institutional study of GRE scores as predictors of STEM PhD degree completion: GRE gets a low mark. PLoS ONE 2018, 13, e0206570. [Google Scholar] [CrossRef]
  7. Reuters. True Claim: Some Graduate Schools Are Waiving GRE Test Requirements Because of COVID-19. 2020. Available online: https://www.reuters.com/article/uk-factcheck-coronavirus-gre-waive/true-claim-some-graduate-schools-are-waiving-gre-test-requirements-because-of-covid-19-idUSKCN21Q2Z6 (accessed on 1 July 2022).
  8. Cahn, P.S. Do health professions graduate programs increase diversity by not requiring the graduate record examination for admission? J. Allied Health 2015, 44, 51–56. [Google Scholar]
  9. Millar, J.A. The GRE in public health admissions: Barriers, waivers, and moving forward. Front. Public Health 2020, 8, 796–799. [Google Scholar] [CrossRef]
  10. Carpentier, V. Expansion and Differentiation in Higher Education: The Historical Trajectories of the UK, the USA and France (Working Paper 33). Centre for Global Higher Education. 2018. Available online: https://www.researchcghe.org/publications/working-paper/expansion-and-differentiation-in-higher-education-the-historical-trajectories-of-the-uk-the-usa-and-france/ (accessed on 1 July 2022).
  11. U.S. Department of Education. Fast facts: Educational institutions. National Center for Education Statistics. 2021. Available online: https://nces.ed.gov/fastfacts/display.asp?id=84 (accessed on 1 July 2022).
  12. Vaughn, K.W. The Graduate Record Examinations. Educ. Psychol. Meas. 1947, 7, 745–756. [Google Scholar] [CrossRef]
  13. Carnegie Foundation for the Advancement of Teaching. The Graduate Record Examination; Carnegie Foundation for the Advancement of Teaching: Stanford, CA, USA, 1941. [Google Scholar]
  14. Savage, H.J. Fruit of an impulse: Forty-five years of the Carnegie Foundation (1905–1950); Harcourt, Brace and Company: San Diego, CA, USA, 1953. [Google Scholar]
  15. Conrad, L.; Trisman, D.; Miller, R. (Eds.) GRE Graduate Record Examinations Technical Manual; Educational Testing Service: Princeton, NJ, USA, 1977. [Google Scholar]
  16. Briel, J.B.; O’Neill, K.A.; Scheuneman, J.D. (Eds.) GRE Technical Manual: Test Development, Score Interpretation, and Research for the Graduate Record Examinations Program; Educational Testing Service: Princeton, NJ, USA, 1993. [Google Scholar]
  17. Lemann, N. The Big Test: The Secret History of the American Meritocracy; Farrar, Straus and Giroux: New York, NY, USA, 1999. [Google Scholar]
  18. Educational Testing Service. GRE. Graduate Record Examinations. Guide to the Use of Scores. 2021–2022; ETS: Princeton, NJ, USA, 2021. [Google Scholar]
  19. Barshay, J. Proof points: Test-Optional Policies Didn’t Do Much to Diversify College Student Populations. The Hechinger Report. 2021. Available online: https://hechingerreport.org/proof-points-test-optional-policies-didnt-do-much-to-diversify-college-student-populations/ (accessed on 1 July 2022).
  20. Lucido, J.A. Understanding the test-optional movement. In Measuring Success: Testing, Grades, and the Future of College Admissions; Buckley, J., Letukas, L., Wildavsky, B., Eds.; Johns Hopkins University Press: Baltimore, MD, USA, 2018; pp. 145–170. [Google Scholar]
  21. Zwick, R. College Admission Testing; National Association for College Admission Counseling: Arlington, VA, USA, 2007. [Google Scholar]
  22. Jaschik, S. SAT Skepticism in New Form. Inside Higher Ed. 2009. Available online: https://www.insidehighered.com/news/2009/04/21/sat-skepticism-new-form (accessed on 1 July 2022).
  23. McDermott, A. Surviving without the SAT. Chronicle of Higher Education. 2008. Available online: https://www.chronicle.com/article/surviving-without-the-sat-18874/?cid2=gen_login_refresh&cid=gen_sign_inN (accessed on 1 July 2022).
  24. O’Shaughnessy, L. The Other Side of ‘Test Optional’. The New York Times. 2009. Available online: https://www.nytimes.com/2009/07/26/education/edlife/26guidance-t.html (accessed on 1 July 2022).
  25. Marston, A.R. It is time to reconsider the Graduate Record Examination. Am. Psychol. 1971, 26, 653–655. [Google Scholar] [CrossRef]
  26. Milner, M.; McNeil, J.S.; King, S.W. The GRE: A question of validity in predicting performance in professional schools of social work. Educ. Psychol. Meas. 1984, 44, 945–950. [Google Scholar] [CrossRef]
  27. Sternberg, R.J.; Williams, W.M. Does the Graduate Record Examination predict meaningful success in the graduate training of psychology? A case study. Am. Psychol. 1997, 52, 630–641. [Google Scholar] [CrossRef] [PubMed]
  28. Posselt, J.R. Inside Graduate Admissions: Merit, Diversity, and Faculty Gatekeeping; Harvard University Press: Cambridge, MA, USA, 2016. [Google Scholar]
  29. Roberts, M.C.; Ostreko, A. GREs, public posting, and holistic admissions for diversity in professional psychology: Commentary on Callahan et al. Train. Educ. Prof. Psychol. 2018, 12, 286–290. [Google Scholar]
  30. Clayton, V. The Problem with the GRE. The Atlantic. 2016. Available online: https://www.theatlantic.com/education/archive/2016/03/the-problem-with-the-gre/471633/ (accessed on 1 July 2022).
  31. Dang, K.V.; Rerolle, F.; Ackley, S.F.; Irish, A.M.; Mehta, K.M.; Bailey, I.; Fair, E.; Miller, C.; Bibbins-Domingo, K.; Wong-Moy, E.; et al. A randomized study to assess the effect of including the Graduate Record Examinations results on reviewer scores for underrepresented minorities. Am. J. Epidemiol. 2021, 190, 1744–1750. [Google Scholar] [CrossRef]
  32. Miller, C.W.; Zwickl, B.M.; Posselt, J.R.; Silvestrini, R.T.; Hodapp, T. Typical physics Ph. D. admissions criteria limit access to underrepresented groups but fail to predict doctoral completion. Sci. Adv. 2019, 5, eaat7550. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Moneta-Koehler, L.; Brown, A.M.; Petrie, K.A.; Evans, B.J.; Chalkley, R. The limitations of the GRE in predicting success in biomedical graduate school. PLoS ONE 2017, 12, e0166742. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Small, A. Range Restriction, Admissions Criteria, and Correlation Studies of Standardized Tests. 2017. Available online: https://arxiv.org/abs/1709.02895 (accessed on 1 July 2022).
  35. Weissman, M.B. Do GRE scores help predict getting a physics Ph. D.? A comment on a paper by Miller et al. Sci. Adv. 2020, 6, eaax378. [Google Scholar] [CrossRef] [PubMed]
  36. Yeates, T.O. A Critical Analysis of Recent Studies of the GRE and Other Metrics of Graduate Student Preparation. 2018. Available online: https://docplayer.net/208238816-A-critical-analysis-of-recent-studies-of-the-gre-and-other-metrics-of-graduate-student-preparation.html (accessed on 1 July 2022).
  37. Grove, W.A.; Wu, S. The search for economics talent: Doctoral completion and research productivity. Am. Econ. Rev. 2007, 97, 506–511. [Google Scholar] [CrossRef] [Green Version]
  38. Ma, T.; Wood, K.E.; Xu, D.; Guidotti, P.; Pantano, A.; Komarova, N.L. Admission Predictors for Success in a Mathematics Graduate Program. 2018. Available online: https://arxiv.org/pdf/1803.00595.pdf (accessed on 1 July 2022).
  39. Mendoza-Sanchez, I.; de Gruyter, J.N.; Savage, N.T.; Polymenis, M. Undergraduate GPA predicts biochemistry PhD completion and is associated with time to degree. CBE—Life Sci. Educ. 2022, 21, ar19. [Google Scholar] [CrossRef]
  40. Bernstein, B.O.; Lubinski, D.; Benbow, C.P. Psychological constellations assessed at age 13 predict distinct forms of eminence 35 years later. Psychol. Sci. 2019, 30, 444–454. [Google Scholar] [CrossRef]
  41. Krueger, A.B.; Wu, S. Forecasting job placements of economics graduate students. J. Econ. Educ. 2000, 31, 81–94. [Google Scholar] [CrossRef]
  42. Educational Testing Service. A Snapshot of the Individuals Who Took the GRE General Test: July 2016–June 2021; ETS: Princeton, NJ, USA, 2022. [Google Scholar]
  43. Schwager, I.T.; Hülsheger, U.R.; Bridgeman, B.; Lang, J.W. Graduate student selection: Graduate Record Examination, socioeconomic status, and undergraduate grade point average as predictors of study success in a western European university. Int. J. Sel. Assess. 2015, 23, 71–79. [Google Scholar] [CrossRef]
  44. American Educational Research Association; American Psychological Association; National Council on Measurement in Education. Standards for Educational and Psychological Testing; American Educational Research Association: Washington, DC, USA, 2014. [Google Scholar]
  45. Newman, D.A.; Tang, C.; Song, Q.C.; Wee, S. Dropping the GRE, keeping the GRE, or GRE-optional admissions? Considering tradeoffs and fairness. Int. J. Test. 2022, 22, 43–71. [Google Scholar] [CrossRef]
  46. Burton, N.W.; Wang, M.M. Predicting long-term success in graduate school: A collaborative validity study. ETS Res. Rep. Ser. 2005, 2005, i-61. [Google Scholar] [CrossRef]
  47. Council of Graduate Schools. Ph.D. Completion and Attrition: Analysis of Baseline Demographic Data from the Ph.D. Completion Project; Council of Graduate Schools: Washington, DC, USA, 2008. [Google Scholar]
  48. Boone, Young & Associates, Inc.; Educational Testing Service. Minority Enrollment in Graduate and Professional Schools: Recruitment, Admissions, Financial Assistance. A technical Assistance Handbook; U.S. Government Printing Office: Washington, DC, USA, 1984.
  49. Haverlah, A.K. No GRE, No Problem: Texas’ Graduate Schools See Increase in Enrollment and Diversity. Reporting Texas. 2021. Available online: https://reportingtexas.com/no-gre-no-problem-texas-graduate-schools-see-increase-in-enrollment-and-diversity/ (accessed on 1 July 2022).
  50. Spike, C. Grad School Diversity Increases with Test POLICY, New Programs. Princeton Alumni Weekly. 2021. Available online: https://paw.princeton.edu/article/grad-school-diversity-increases-test-policy-new-programs (accessed on 1 July 2022).
  51. Cooper, C.A.; Knotts, H.G. Do I have to take the GRE? Standardized testing in MPA admissions. PS Political Sci. Politics 2019, 52, 470–475. [Google Scholar] [CrossRef] [Green Version]
  52. Owens, L.M.; Zwickl, B.M.; Franklin, S.V.; Miller, C.W. Physics GRE Requirements Create Uneven Playing Field for Graduate Applicants. In Proceedings of the Physics Education Research Conference 2020. Available online: https://par.nsf.gov/biblio/10310048 (accessed on 1 July 2022).
  53. Lozano, I. I’m a Working-Class Mexican American Student. The SAT Doesn’t Hurt Me—It Helps. The Washington Post, 5 October 2020. Available online: https://www.washingtonpost.com/outlook/2020/10/05/sat-working-class-student-university-california/ (accessed on 1 July 2022).
  54. Posselt, J.R. Trust networks: A new perspective on pedigree and the ambiguities of admissions. Rev. High. Educ. 2018, 41, 497–521.
  55. Nye, C.D.; Ryan, A.M. Improving graduate-school admissions by expanding rather than eliminating predictors. Perspect. Psychol. Sci. 2022.
  56. Padgett, R.D.; Goodman, K.M.; Johnson, M.P.; Saichaie, K.; Umbach, P.D.; Pascarella, E.T. The impact of college student socialization, social class, and race on need for cognition. New Dir. Inst. Res. 2010, 2010, 99–111.
  57. Weidman, J.C. Undergraduate socialization: A conceptual approach. In Foundations of American Higher Education, 2nd ed.; Bess, J.L., Webster, D.S., Eds.; Simon & Schuster: New York, NY, USA, 1989; pp. 114–135.
  58. Brown University, Graduate School. Diversity and Inclusion. 2022. Available online: https://www.brown.edu/academics/gradschool/diversity-0 (accessed on 1 July 2022).
  59. North Carolina State University, College of Engineering. Diversity and Inclusion. 2022. Available online: https://www.engr.ncsu.edu/faculty-staff/efa/diversity-and-inclusion/ (accessed on 1 July 2022).
  60. The Ohio State University. Campus Conversation on Graduate Education: Diversity and Inclusion. 2022. Available online: https://gradsch.osu.edu/sites/default/files/resources/pdfs/CC_Onesheet_DiversityAndInclusion.pdf (accessed on 1 July 2022).
  61. The State University of New York. What Is Diversity at SUNY? 2022. Available online: https://www.suny.edu/diversity/about/ (accessed on 1 July 2022).
  62. The Trustees of Princeton University. Access, Diversity, and Inclusion. The Graduate School, Princeton University. 2022. Available online: https://graddiversity.princeton.edu/ (accessed on 1 July 2022).
  63. U.S. Department of Education. 2019 Institutional Characteristics Data. National Center for Education Statistics, Integrated Postsecondary Education Data System. 2019. Available online: https://nces.ed.gov/ipeds/use-the-data (accessed on 26 February 2020).
  64. Council of Graduate Schools. Graduate Enrollment and Degrees: 2009–2019; Council of Graduate Schools: Washington, DC, USA, 2021.
  65. Weller, B.E.; Bowen, N.K.; Faubert, S.J. Latent class analysis: A guide to best practice. J. Black Psychol. 2020, 46, 287–311.
  66. Muthén, L.K.; Muthén, B.O. Mplus User’s Guide, 8th ed.; 1998–2017. Available online: https://www.statmodel.com/download/usersguide/MplusUserGuideVer_8.pdf (accessed on 1 July 2022).
  67. Lanza, S.T.; Rhoades, B.L. Latent class analysis: An alternative perspective on subgroup analysis in prevention and treatment. Prev. Sci. 2013, 14, 157–168.
  68. Lo, Y.; Mendell, N.; Rubin, D.B. Testing the number of components in a normal mixture. Biometrika 2001, 88, 767–778.
  69. Wilkinson, L.; The Task Force on Statistical Inference. Statistical methods in psychology journals. Am. Psychol. 1999, 54, 594–604.
  70. U.S. Census Bureau. Educational Attainment Tables. 2021. Available online: https://www.census.gov/topics/education/educational-attainment/data/tables.html (accessed on 1 July 2022).
  71. McLachlan, G.; Peel, D. Finite Mixture Models; Wiley: Hoboken, NJ, USA, 2000.
  72. Logan, J.A.; Pentimonti, J.M. Introduction to latent class analysis for reading fluency research. In The Fluency Construct; Cummings, K.D., Petscher, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 309–332.
  73. Funder, D.C.; Ozer, D.J. Evaluating effect size in psychological research: Sense and nonsense. Adv. Methods Pract. Psychol. Sci. 2019, 2, 156–168.
  74. Götz, F.M.; Gosling, S.D.; Rentfrow, P.J. Small effects: The indispensable foundation for a cumulative psychological science. Perspect. Psychol. Sci. 2022, 17, 205–215.
  75. Chen, H.; Cohen, P.; Chen, S. How big is a big odds ratio? Interpreting the magnitudes of odds ratios in epidemiological studies. Commun. Stat. 2010, 39, 860–864.
  76. Stanford, K. Underdetermination of Scientific Theory. Stanford Encyclopedia of Philosophy. 2017. Available online: https://plato.stanford.edu/entries/scientific-underdetermination (accessed on 1 July 2022).
  77. Green, M.J. Latent class analysis was accurate but sensitive in data simulations. J. Clin. Epidemiol. 2014, 67, 1157–1162.
  78. Nevill, S.C.; Carroll, C.D. The Path through Graduate School: A Longitudinal Examination 10 Years after Bachelor’s Degree; Postsecondary Education Descriptive Analysis Report, NCES 2007-162; U.S. Department of Education: Washington, DC, USA, 2007.
  79. U.S. Department of Education. Postbaccalaureate Enrollment. Condition of Education; National Center for Education Statistics. 2022. Available online: https://nces.ed.gov/programs/coe/indicator/chb (accessed on 1 July 2022).
  80. Ekstrom, R. Undergraduate Debt and Participation in Graduate Education: The Relationship between Educational Debt and Graduate School Aspirations, Applications, and Attendance among Students with a Pattern of Full-Time, Continuous Postsecondary Education; ETS Research Report No. 89–45; 1991. Available online: https://eric.ed.gov/?id=ED392374 (accessed on 1 July 2022).
  81. Walpole, M. Emerging from the pipeline: African American students, socioeconomic status, and college experiences and outcomes. Res. High. Educ. 2008, 49, 237–255.
Figure 1. Standardized mean values of undergraduate institution characteristic indicators by latent classes based on the three-class model (N = 3993). Note: For the dichotomous category “Public 4-year vs. Private 4-year” we display the probability value of belonging to the “Public 4-year” group within each latent class.
Table 1. Sample composition and comparison to U.S. population of four-year degree–holders: Demographics.

| | N | Percentage | % of U.S. BA-Holders |
|---|---|---|---|
| Men | 1191 | 29.83 | 46.73 |
| Women | 2802 | 70.17 | 53.27 |
| African-American/Black | 412 | 10.32 | 9.20 |
| Asian-American | 192 | 4.81 | 9.28 |
| Hispanic/Latino | 392 | 9.81 | 10.02 |
| Other | 142 | 3.55 | -- |
| White | 2855 | 71.51 | 70.09 |

Note: Percentage of U.S. four-year degree–holders derived from U.S. Census Bureau (2021).
Table 2. Sample composition: Academic program applied to.

| | N | % |
|---|---|---|
| Ph.D. | 360 | 9.02 |
| Master’s | 3633 | 90.98 |
| Fine Arts | 314 | 7.86 |
| Humanities | 197 | 4.93 |
| Nursing | 453 | 11.34 |
| Social Sciences | 2610 | 65.36 |
| STEM | 419 | 10.49 |
Table 3. Mean GRE scores among score submitters.

| | GRE-Q (n = 411) M (SD) | GRE-V (n = 411) M (SD) | GRE-AW (n = 347) M (SD) |
|---|---|---|---|
| Men | 151.78 (7.45) | 155.67 (6.32) | 4.17 (0.72) |
| Women | 149.60 (6.69) | 154.94 (7.13) | 4.27 (0.68) |
| African-American/Black | 145.86 (7.92) | 150.61 (7.69) | 3.54 (0.72) |
| Asian-American | 152.71 (6.25) | 156.48 (6.87) | 4.53 (0.65) |
| Hispanic/Latino | 145.76 (6.98) | 152.58 (6.44) | 3.95 (0.59) |
| Other | 148.24 (7.43) | 153.24 (7.56) | 4.09 (0.67) |
| White | 151.56 (6.59) | 155.93 (6.46) | 4.30 (0.67) |
| Ph.D. Program Applicants | 148.59 (7.11) | 156.06 (6.51) | 4.37 (0.70) |
| Master’s Program Applicants | 151.02 (6.96) | 154.97 (6.91) | 4.19 (0.69) |
| Fine Arts Program Applicants | 148.67 (4.63) | 158.33 (5.28) | 5.00 (0.58) |
| Humanities Program Applicants | 149.72 (8.21) | 158.50 (7.35) | 4.56 (0.64) |
| Nursing Program Applicants | 149.21 (5.32) | 154.83 (5.16) | 4.08 (0.77) |
| Social Sciences Program Applicants | 149.74 (6.98) | 154.42 (6.76) | 4.17 (0.69) |
| STEM Program Applicants | 153.59 (6.13) | 154.64 (6.44) | 4.13 (0.63) |
| Applicants from Profile 1 Schools | 150.70 (7.01) | 154.94 (6.76) | 4.19 (0.70) |
| Applicants from Profile 2 Schools | 147.66 (6.31) | 154.97 (6.85) | 4.33 (0.64) |
| Applicants from Profile 3 Schools | 153.38 (7.43) | 158.32 (6.79) | 4.41 (0.72) |
Table 4. Goodness of fit statistics for latent classes.

| Latent Classes | Free Parameters | AIC | BIC | Adjusted BIC | Entropy | LMR p-Value | BLRT p-Value |
|---|---|---|---|---|---|---|---|
| 2 | 21 | 145,469.15 | 145,601.28 | 145,534.56 | 0.995 | <0.001 | <0.001 |
| 3 | 29 | 139,920.01 | 140,102.49 | 140,010.34 | 0.999 | <0.001 | <0.001 |
| 4 | 37 | 133,974.23 | 134,207.05 | 134,089.48 | 1.000 | 0.159 | 0.162 |
| 5 | 45 | 54,840.09 | 55,123.24 | 54,980.25 | 0.935 | 0.494 | 0.497 |
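The information criteria in Table 4 follow their standard definitions, so the BIC and sample-size-adjusted BIC can be recovered from each model's AIC, number of free parameters k, and the sample size N = 3993: BIC = AIC - 2k + k·ln(N), and the adjusted BIC substitutes (N + 2)/24 for N. A minimal Python sketch checking this for the selected three-class model (the helper functions are ours for illustration; the authors obtained these indices from Mplus):

```python
import math

def bic_from_aic(aic: float, k: int, n: int) -> float:
    # BIC = -2LL + k*ln(n); since AIC = -2LL + 2k, BIC = AIC - 2k + k*ln(n).
    return aic - 2 * k + k * math.log(n)

def adj_bic_from_aic(aic: float, k: int, n: int) -> float:
    # The sample-size-adjusted BIC replaces n with n* = (n + 2) / 24.
    return aic - 2 * k + k * math.log((n + 2) / 24)

N = 3993
# Three-class model from Table 4: 29 free parameters, AIC = 139,920.01.
print(round(bic_from_aic(139920.01, 29, N), 2))      # 140102.49, as in Table 4
print(round(adj_bic_from_aic(139920.01, 29, N), 2))  # 140010.34, as in Table 4
```

Applying the same identities to the other rows of Table 4 reproduces the reported BIC and adjusted BIC values to within rounding, a useful sanity check when transcribing fit statistics.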
Table 5. Cross-tabulation of most likely undergraduate institution latent class and GRE submission status.

| Undergraduate Institution Latent Class | GRE Submitted, N | GRE Submitted, % | GRE Not Submitted, N | GRE Not Submitted, % |
|---|---|---|---|---|
| Relatively large, doctoral degree–granting, highly selective, public | 312 | 9.72 | 2898 | 90.28 |
| Relatively small, master’s degree–granting, moderately selective | 65 | 11.11 | 520 | 88.89 |
| Relatively small, bachelor’s degree–granting, relatively unselective, private | 34 | 17.17 | 164 | 82.83 |
Table 6. Logistic regression models predicting GRE score submission from most likely undergraduate institution latent class and covariates (N = 3993).

| | Model 1 OR (SE) | Model 2 OR (SE) |
|---|---|---|
| Undergraduate Institution Latent Class | | |
| Profile 3 a vs. Profile 1 b | 1.93 (0.38) ** | 2.00 (0.48) ** |
| Profile 2 c vs. Profile 1 | 1.16 (0.17) | 1.23 (0.21) |
| Profile 3 vs. Profile 2 | 1.66 (0.14) * | 1.63 (0.45) |
| Undergraduate GPA | | 1.09 (0.17) |
| Gender (Male = 1) | | 1.55 (0.19) *** |
| Ethnicity/race (Ref. = White) | | |
| Black | | 0.72 (0.16) |
| Hispanic | | 1.05 (0.21) |
| Asian | | 1.32 (0.34) |
| Other | | 1.65 (0.47) |
| Degree applied (Ph.D. = 1) | | 2.94 (0.55) *** |
| Program (Ref. = Social sciences) | | |
| Fine arts | | 0.16 (0.07) *** |
| Humanities | | 3.23 (0.71) *** |
| Nursing | | 0.45 (0.11) ** |
| STEM | | 1.92 (0.31) *** |

Note: OR = Odds ratio. SE = Standard error. Ref. = Reference group. Profile 1 = Relatively large, doctoral degree–granting, highly selective, public. Profile 2 = Relatively small, master’s degree–granting, moderately selective. Profile 3 = Relatively small, bachelor’s degree–granting, relatively unselective, private. a n = 198. b n = 3210. c n = 585. * p < 0.01. ** p < 0.001. *** p < 0.0001.
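The headline odds ratio for Profile 3 versus Profile 1 can be reproduced from the raw counts in Table 5, and an odds ratio is easier to interpret once converted back to probabilities. A short Python illustration (counts taken from Table 5; this is an arithmetic sketch, not the authors' estimation code, which fit logistic regression models to model-based class assignments):

```python
# Counts from Table 5 (most likely latent class x GRE submission status).
profile1 = {"submitted": 312, "not_submitted": 2898}  # large, doctoral, selective, public
profile3 = {"submitted": 34, "not_submitted": 164}    # small, bachelor's, unselective, private

odds1 = profile1["submitted"] / profile1["not_submitted"]
odds3 = profile3["submitted"] / profile3["not_submitted"]
odds_ratio = odds3 / odds1
print(round(odds_ratio, 2))  # 1.93

# Converting an odds ratio back to a probability requires a baseline rate:
p1 = profile1["submitted"] / (profile1["submitted"] + profile1["not_submitted"])
implied_odds3 = odds_ratio * (p1 / (1 - p1))
implied_p3 = implied_odds3 / (1 + implied_odds3)
print(round(implied_p3, 3))  # 0.172
```

The cross-tabulated value matches Model 1's estimate of 1.93 because Model 1 contains only the latent class predictor, and the implied probability of about 17.2% recovers the Profile 3 submission rate shown in Table 5.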
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
