The Impact of College Athletic Success on Donations and Applicant Quality

For the 65 colleges and universities that participate in the Power Five athletic conferences (Pac 12, Big 10, SEC, ACC, and Big 12), the football and men’s basketball teams are highly visible. While these programs generate tens of millions of dollars in revenue annually, very few of them turn an operating “profit.” Their existence is thus justified by the claim that athletic success leads to ancillary benefits for the academic institution, in terms of both quantity (e.g., more applications, donations, and state funding) and quality (e.g., stronger applicants, lower acceptance rates, higher yields). Previous studies provide only weak support for some of these claims. Using data from the 2005-2006 through 2015-2016 academic years and a multiple regression model with corrections for multiple testing, we find that while a successful football program is associated with more applicants, there is no effect on the composition of the student body or (with a few caveats) funding for the school through donations or state appropriations.


Introduction
Intercollegiate athletics is in a turbulent period. Recruiting and academic scandals, along with antitrust litigation, are erupting with unprecedented frequency. The Rice Commission on reforming college basketball called for a panoply of structural reforms.
Meanwhile, the financial outcomes of college athletic departments remain bleak for the vast majority of institutions. NCAA reports consistently show that in recent years only about 20 out of roughly 130 athletic departments in FBS (the Football Bowl Subdivision of Division I, the most commercial subdivision in the NCAA) run an operating surplus. FBS itself is subdivided into the Power Five or Autonomous Conferences (Pac 12, Big 10, SEC, ACC and Big 12) with 65 schools and the five Non-Autonomous Conferences with 64 schools. The Power Five command the large television contracts, control the football championship playoff and enjoy much larger attendance at their games.
During the 2015-2016 school year, the median reported operating deficit at all FBS athletic programs was $14.4 million. At the Power Five schools, the median reported operating deficit was $3.6 million. There were 24 Power Five schools during 2015-2016 that experienced an operating surplus; the median surplus for these schools was $10 million (NCAA 2017).
Because of accounting irregularities, however, it appears that most FBS athletic departments do not include significant shares of their capital costs in their reports to the NCAA. According to one NCAA study, capital costs, properly reckoned, exceed $20 million annually at the average FBS school (Orszag and Orszag 2005). If these and other indirect costs were included, most estimates suggest that no more than a half dozen programs would have a true surplus. In this paper, we update the evidence on whether athletic success brings indirect benefits to the institution: more applications, a higher quality of the student body, more donations and greater state support. This updating is important because the organization and rules affecting college sports have been in flux in the 21st century. We consider evidence from 2005-2006 through 2015-2016 for the 65 Power Five schools in the FBS. These schools are athletically the most successful and best known by a considerable margin and, therefore, the most likely to garner the positive publicity effects from athletic success.
In what follows we review the existing scholarship, describe our data sources and our data profile, discuss our models and results and draw conclusions. The evolution of the literature is not linear and the current state of knowledge remains ambiguous.

Athletic Success and the Quantity and Quality of Applications
One of the first and most frequently cited studies on the impact of athletic success on admissions is McCormick and Tinsley (1987). They gather data on 150 schools for 1971. On the basis of a multiple regression test with several control variables, they estimate that a school with a "big-time" athletics program had 3% higher SAT scores than schools without such a program. They identified 63 of the schools in their sample as having a big-time program. A difficulty with this cross-sectional analysis is that school characteristics not captured by the control variables lead to an incomplete model specification. In an attempt to rectify this deficiency, McCormick and Tinsley test a second model with data from 1981 to 1984, focusing exclusively on schools with big-time programs. They explore the link between changes in SAT scores and changes in athletic performance over this period. None of the estimated coefficients are statistically different from zero at standard confidence levels. Bremmer and Kesselring (1993) essayed a retest of the McCormick and Tinsley hypotheses, using data from 1989 for 132 schools and from 1981-1989 for 53 schools. They find no evidence that basketball or football success led to increased SAT scores of matriculated students. Tucker and Amato (1993) adapt the McCormick and Tinsley model by using a new metric of athletic success: whether the school was in the Associated Press top twenty ranking in football or basketball. They find no relationship between basketball success and SAT scores, but that a football program ranked in the top twenty for ten consecutive years (1980-1989) would attract a freshman class with 3% higher average SAT scores than a program which never ranked in the top twenty. Murphy and Trandel (1994) construct a ten-year panel, 1978-1987, and use team win percentage as the measure of athletic success.
They use school fixed effects and, thereby, control more effectively for unobserved differences among institutions. Murphy and Trandel find that a 50% increase in a team's win percentage results in a rather small increase of only 1.3% in the number of its applicants. Mixon employs a different measure of basketball success: the number of rounds through which the school's team advanced in March Madness in the spring before the applications were filed the next fall.1 Mixon's estimate was positive and statistically significant, suggesting that the average SAT score in the entering class increased by 1.7 points for each additional round the school's team was in the tournament. Toma and Cross (1998) examine the records for the thirteen different universities that won the FBS football championship between 1979 and 1992 and the eleven different universities that won the NCAA men's basketball tournament over the same period. They track the quantity and quality of undergraduate applications for five years preceding and succeeding their championship for each of these schools. They find a clear correlation between winning a championship and the number of applicants, but are unable to identify any measurable impact of a championship on the quality of admitted or entering students. 1 Mixon also co-authored two cross-sectional studies based on data from 1990 and 1993 that found a relationship between athletic success or prominence and the attraction of a school to out-of-state residents (Mixon and Hsing 1994; Mixon and Ressler 1995). Zimbalist (2001) uses data from 86 FBS colleges from 1980 through 1995 and performs a variety of fixed effect multiple regressions, using different measures of athletic success. The tests reveal that, while there is some tendency for athletic success to increase applications, there is no significant relationship between athletic success and average SAT scores.
In a study under commission from the NCAA, Litan, Orszag, and Orszag (Litan et al. 2003) use a fixed-effects model for 1993-2001 and are unable to find a statistically significant relationship between football winning percentage and SAT scores of the incoming class. 2 Tucker (2005) considers evidence from 1990, 1996, 2000, 2001 and 2002 from 78 Division I schools. He finds that after 1996 football success has a positive impact on the average SAT scores of the incoming class. He argues that the perfection of the Bowl Championship Series played a central role in bringing increased attention and, hence, advertising exposure to the top football programs. Smith (2008), in contrast, considers data from Division I schools during 1994-2005 and finds little evidence to support a link between different measures of men's basketball success and four measurements of student quality. The one exception was that schools that had a "breakout" year (lagged two years) experienced an 8.86 point average increase in the SAT scores of the 75th percentile of the entering class. Breakout was defined in various ways, but basically denoted that a school went from a perennial losing record to a strong winning record. Pope and Pope (2009), using a data set from 330 Division I schools during 1983-2002, find that certain types of athletic success appear to increase interest in a school from applicants with high, medium and low SAT scores, but that the increase in enrollments from the students with SAT scores above 600 in English and in Math is weaker and less reliable. Indeed, for some of the athletic performance variables the relationship between athletic success and the log of enrollment is significantly negative. Further, any impact tends to be in the next year with no significant effect after two or three years. In the end, Pope and Pope conclude that "the summary data . . . 
would suggest that athletically successful schools actually saw slightly lower long-run growth in applications and enrollments." One important caveat in interpreting these results is that Pope and Pope test fixed effect multiple regressions with control variables and thirty-two athletic performance variables. At a 0.10 level of significance, one would anticipate that 3.2 variables would achieve statistical significance randomly. The authors should have, but did not, control for the multiplicity problem.3 Multicollinearity among the performance variables presents another challenge in interpreting the coefficients.4 Pope and Pope (2014) use a Division I data set of 332 schools during 1994-2001 to test the impact of men's basketball and football on the propensity of high school applicants to send their SAT scores to a school. They find that a school with stellar results in either sport receives on average up to 10% more SAT scores. They also find that the relationship is stronger for some demographic subgroups, such as males, people of color, out-of-state students and high school athletes. They do not test for actual applications to the school or for eventual enrollments. Pope and Pope model one to three year lags and find that the statistical significance of sports success "decays very quickly across time." In sum, the various studies lend some support to the notion that robust athletic success can lead to an increase in applications to a school. The correlate of this proposition is that poor athletic performance can lead to falling applications. There is only weak support, if any, for the claim that sport success leads to an increase in the quality of students. The increase in applications in some cases, however, may assist a school in filling empty beds in its dormitories. These conclusions are supported anecdotally by a self-study done at the University of Massachusetts following its ascent to basketball fame in the mid-1990s under John Calipari.
2 For a summary of the literature through 2004, see Frank (2004). 3 See, for instance, Benjamini and Hochberg (1995). 4 Castle and Kostelnik (2011) examine 14 Division II schools in Pennsylvania during 1995-2004 and find weak evidence that some measures of athletic success were correlated with an increase in applications and the SAT scores of the entering class. Three studies found that football success was associated with lower student grades; see Clotfelter (2011); Lindo, Swensen, and Waddell (Lindo et al. 2012); and Hernández-Julián and Rotthoff (2014).
The period from fall 1988 to fall 1990 did not include outstanding basketball years. In 1991, UMass was a semi-finalist in the NIT and in the four years since has been featured consistently on national television, has been ranked consistently in the top twenty and has gone to the NCAA tournament . . . It is clear that after double digit declines in out-of-state applications from fall 1988 to fall 1991, we experienced two years of double digit increases in fall 1993 and 1994. It has been suggested that this bump in applications might be related to, among other things, the greater awareness of the university beyond Massachusetts, at least partially as a result of the success in basketball.
It has been reported that the University of Connecticut experienced a similar application increase after their very successful Elite Eight season in 1991, with a 26% increase in out-of-state and a 6% increase in in-state applications. Despite the growth of applications correlated with UConn basketball success, the conclusion was that there was no impact on yield (enrollment divided by admittances). With the numbers of applications up, it would also be expected that the quality of students enrolled might increase because of a larger pool on which to draw. The Connecticut experience indicates no changes in the quality of students. In the UMass figures there was a decrease in the SAT scores of applicants and enrolled students for both in- and out-of-state students. In fact, this (1995-1996) was the first year that the SAT scores of out-of-state students fell below in-state. None of this suggests that team success carries beyond the application stage. In fact, in the year following the "Dream Season," UConn applications dropped back to earlier numbers. Their conclusion was that there was no lasting impact on the admission numbers. (Massachusetts Football Task Force 1996)

Athletic Success and Alumni Giving
Studies on alumni or other giving are less numerous and less complete. The primary reason for this is that the availability of data on alumni giving is spotty. Sigelman and Carter (1979) assemble data from the Council for Financial Aid to Education (CFAE) from 1966-1967 to 1975-1976 and test the relationship between the yearly change in total giving to the annual fund and athletic success, measured by win percentages in football and men's basketball and a dummy variable indicating whether the team participated in the post-season. Sigelman and Carter do not find any statistically significant relationships between giving and athletic success, and even note that some of the coefficients were negative. Brooker and Klastorin (1981) critique the Sigelman and Carter study on the grounds that it does not control for institutional heterogeneity. They adjust for this by using institutional fixed effects and find some positive and some negative relationships between athletic success and giving. Together, they run tests on 1740 coefficients and find only 1.7% of them to be significant at the 0.10 level, which is fewer than the number that would be expected by chance. The authors do not report the magnitude of the effects. Sigelman and Bookheimer (1983) introduce a fixed effects model and break down alumni contributions into two components, restricted gifts to the athletics department and unrestricted gifts to the annual fund. They find that the two types of giving are uncorrelated with each other and that only gifts to the athletics department are correlated with sport success (football, not basketball). More precisely, they estimate that a 10% increase in football win percentage over the previous four years leads to a $125,000 increase in donations to the athletics department (measured in 1983 dollars). Grimes and Chressanthis (1994) focus on one school, Mississippi State, over a 30-year period between 1962 and 1991. 
They consider the success of the football, basketball and baseball teams. Winning in football has a negative coefficient that is not statistically significant. Basketball's coefficient is significant at the 0.05 level and positive, but extremely small. Baade and Sundberg (1996) construct a data set from Division I schools during 1973-1979. They employ both winning percentage and bowl or March Madness appearances as measures of sport success. They find no impact on giving from increased winning percentages, but a modest effect for postseason appearances. They do not distinguish between athletic and academic giving. Rhoads and Gerking (2000) follow the modelling of Baade and Sundberg with data for 1986-1987 to 1995-1996.5 They run their tests first without institutional fixed effects and find impacts very similar to those of Baade and Sundberg. They run the tests again with fixed effects and none of the athletic success variables are statistically significant predictors of giving. Rhoads and Gerking also estimate that being placed on NCAA probation for a basketball violation reduces total giving by $1.6 million (measured in 1987 dollars).
Turner, Meserve, and Bowen (Turner et al. 2001) consider a data set of 15 private schools during 1988-1989 to 1997-1998. Using a fixed effects model, they find that football win percentage has no significant effect on the rate of giving among alumni in FBS programs, but that it has a highly significant (at 0.01) and negative effect on the giving amount among alumni. Specifically, an increase of 12 wins is associated with a decrease in giving by $270 for an average person.
Humphreys and Mondello (2007), based on 320 Division I schools from 1976 to 1996, find that postseason play was positively correlated with giving to athletics, but not with giving to academics. Stinson and Howard (2008) investigate 208 institutions from Divisions I-AA and I-AAA and find no correlation between giving to athletics and giving to academics. 6

Athletic Success and State Budgetary Support
Three scholarly articles have explored the impact of university athletic success on legislative appropriations. Humphreys (2006) estimates a reduced form model with data from 1975 to 1996 that controls for state and institution specific characteristics. 7 He finds that state appropriations are 8% higher for institutions in FBS (formerly Division IA), other things equal. The associated increase in state support, $2.6 million on average (in 1982 dollars), may do little more than offset the additional net costs of fielding an FBS football team. Humphreys, however, does not find that either appearance in a bowl game or achieving a national ranking in the top 25 had a statistically significant impact on state appropriations. Alexander and Kern (2010) consider 117 schools from FBS, FCS and Division II for the period 1983-1984 to 2006-2007. They do not explain why their sample is more heavily weighted to FBS or why certain schools were omitted. They find that increases in basketball and football win percentage for schools in FBS do produce a statistically significant increase in state appropriations, but appearances in major bowl games or in the NCAA Final Four do not. The three models that they test yield R-squares of 0.05, 0.05 and 0.08, suggesting that they are underspecified and calling into question the reliability of their coefficient estimates. 5 Goff (2000) also considers the impact of sport success on the endowment. Goff, however, looks at endowment data for only two schools, Georgia Tech and Northwestern. While he finds no statistically significant relationship between sport success and donations at Georgia Tech, Goff does find one at Northwestern. However, Goff notes that the finding for Northwestern may have been affected by an accounting change at the school (moving a substantial amount of cash into long-term equity during the period studied).
In any event, the data base is much too thin to assign much importance to these results. 6 Koo and Dittmore (2014), based on an unexplained sample of 155 schools from Divisions I, II and III during 2002-2003 to 2011-2012, purport to find a positive correlation between athletic giving lagged one year and current academic giving. They conclude that athletic giving does not crowd out academic giving. They do not offer an explanation of why they lag athletic giving; any crowding out would presumably happen in the same year. It is also not clear what their full model is and whether, for instance, they detrended their data or used time fixed effects. Walker (2015) uses a sample of between 954 and 1052 schools during 2002-2011 and finds that appearances in the Final Four are significantly associated with increases in private donations, but, again, he does not break out athletic and academic donations and does not provide a full description of his model. 7 There was a prior study to Humphreys, but it was based on only one year (1980-1981) and 52 DIA schools (Coughlin and Erekson 1986). Jones (2015) uses a difference-in-difference model that focused on six universities that transitioned from FCS to FBS between 2000 and 2010. Jones finds that when the six schools were compared to all other FCS schools, there is no significant correlation between FBS affiliation and state appropriations. When the six schools are compared only to other FCS schools in the same region, there is still no significant correlation. Only when the comparison is made between FCS schools in the same region and within the same propensity score range 8 did the move to FBS yield a significant relationship with state appropriations. Jones concludes that this result provides some support for the hypothesis of a positive relationship between NCAA subdivision and state support.

Anderson on Athletic Benefits
Anderson (2017) deploys the most sophisticated econometric treatment of the relationships between athletic success and various school outcome variables in the extant literature, uses the most recent data, and has been cited numerous times as representing the state of current knowledge on these matters; hence, this paper warrants a separate and more detailed discussion. He considers a data set for FBS schools from 1986 to 2009 and employs a propensity score model to estimate these relationships. Anderson's motivation for adopting this framework is the difficulty of unraveling causality from observational data, especially given that selection bias (e.g., recruiting skill of coaches and administrators), reverse causality (e.g., athletic success begets donations, which are in turn spent to achieve greater athletic success), and confounding variables are likely present. 9 Anderson's basic approach is to use bookmaker spreads for individual football games to establish (via a fifth-order polynomial logistic model) a probability of winning (the propensity score). Actual wins are then conditioned on the propensity score and used as the independent variable (or treatment). This method depends on the assumption that gambling markets are efficient (the bookmaker spread represents full and rationally processed information about the likelihood of the contest's outcome; put differently, all the relevant variables that impact a game's result are subsumed into the point spread).10 Anderson then essentially regresses school applications, SAT scores and donations on the difference between the actual and the expected wins during a season. Thus, Anderson is estimating the effect of unexpected wins (losses) on his school outcome variables.
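The mechanics of the unexpected-wins construction can be sketched in a few lines. The logistic mapping below is a hypothetical single-coefficient stand-in for Anderson's fifth-order polynomial fit, and the spreads and outcomes are invented for illustration:

```python
import math

def win_prob(spread, k=0.15):
    # Hypothetical logistic mapping from the bookmaker point spread
    # (positive = favored) to a win probability; k is an invented slope.
    return 1.0 / (1.0 + math.exp(-k * spread))

def unexpected_wins(spreads, outcomes):
    # Expected wins for the season: the sum of per-game propensity scores.
    expected = sum(win_prob(s) for s in spreads)
    # Unexpected wins: actual wins minus bookmaker-implied expected wins.
    return sum(outcomes) - expected

# A hypothetical four-game season: point spreads and 1/0 win outcomes.
spreads = [7.0, -3.0, 14.0, 1.0]
outcomes = [1, 1, 1, 0]
print(round(unexpected_wins(spreads, outcomes), 2))
```

Season totals of this quantity, rather than raw wins, serve as the treatment in Anderson's regressions.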
While it is interesting to know what the effect of unexpected wins is, it is likely a different effect than wins or general athletic success. Few would question, for instance, that a football or men's basketball team that rises from sport oblivion to prominence in one year will experience an uptick in applications, donations or state appropriations (see previous discussion of UMass/UConn.) 11 This is a different matter than a team perennially appearing in March's Elite Eight or Final Four or in the Football Championship Playoff.
Anderson does indeed find that unexpected wins are associated with increased applications, higher SAT scores and increased donations to the athletics department (but not to the general fund). 8 Jones' propensity score includes a number of school characteristics, including size, percentage of full-time students, percent of graduate students, degree of institutional urbanization, freshmen retention rate, and total education expenditures. 9 Reverse causality would not appear to be a significant issue when considering applications or SAT scores and athletic success. That is, while it may be logical to expect athletic success to increase applications or SAT scores, it does not seem plausible that more applications or higher SAT scores would engender greater athletic success. Top football and basketball players in FBS are recruited and, by all accounts, base their decisions on factors related to the athletics program. The effects of confounding variables, such as the managerial talent of a school president or provost, along with other unquantifiable attributes, could be accounted for by team fixed effects. Reverse causality may be an issue with athletic success and donations. In such a case, however, one would expect that the presence of reverse causality would strengthen the estimated correlation between the variables. Since Anderson's use of a propensity score is intended to mitigate the impact of reverse causality, other things equal, one would expect his model to imply a weaker correlation between athletic success and donations, contrary to his findings. 10 For this crucial assumption to be statistically valid, the R-squared from this equation must be very high, approaching 1. Anderson, however, does not mention what the R-squared is in this test. It is nonetheless true that the literature on sports betting markets indicates that they operate efficiently. See, for example, the discussion in Lopez, Matthews, and Baumer (Lopez et al. 2018).
11 Such a finding would be consistent with the empirical work of Smith (2008), op. cit., and his "breakout" variable.
Notably, Anderson's strongest result is for donations to the athletics department, but this finding relies on a very incomplete data set on donations (495 observations for BCS 12 athletic donations versus 1560 observations possible for 65 schools over 24 years). He also runs tests separately for the Power Five conferences within FBS and for the remaining FBS (or "Group of Five") conferences and finds that his relationships only hold for the Power Five schools.
Three other points from Anderson are worth noting. First, he finds little evidence that the positive effect of unexpected wins lasts more than one year. Second, his positive results appear to be considerably weaker than he claims. Thus, after correcting for multiple testing, none of his treatment variables are significant at the 0.05 level. His Alumni Athletic Operating Donations variable comes closest with significance at the 0.053 level, but this variable is missing more than two-thirds of its possible observations, creating a possible selection bias. 13 Third, in his conclusion, Anderson misrepresents his own results when he states: "Consider a school that improves its season wins by three games . . . . This school may expect alumni athletic donations to increase by $409,000 (17%), applications to increase by 406 (3%) . . . ." But Anderson is not looking at the net benefit of three wins; he is looking at the net benefit of three wins over expectation. That is, if a team wins three more games but the betting markets expected the team to win three more games, there would be no net benefit.

Summary of the Existing Literature
Overall, the literature on the impact of college sport success on the quantity and quality of applications and enrollments, donations to the athletics department and general fund, and state appropriations is mixed and somewhat inconclusive. When significant results have been found, they have tended to be small in practical magnitude. Much of the existing scholarship is limited by methodological issues and data availability, and most studies have not considered evidence from the 2000s. Each new study appears to use a new set of schools, conferences or divisions, a different set of athletic success variables, distinct issues with missing data, and different modelling of the relationship between the treatment and outcome variables. Given the restructuring of the NCAA subdivisions and conferences, the emergence of conference-owned regional sports networks (RSNs) and the attendant spurt in television revenues, the increased autonomy of the Power Five conferences and the enhanced role of athletics in university finances and governance since 2000, it makes sense to examine the stability of the relationships between success in sports and possible indirect benefits for the school with more recent data. In what follows, we attempt to overcome some of the methodological issues of previous work and to construct a more up-to-date data set.

Our Data
Anderson (2017) provides a natural point of comparison for our data. Our school benefit response variables-summarized in Table 1-include the number of applications (Applied, measured in thousands), the average 75th percentile SAT score across three portions of the exam (SAT75p), the admissions rate (Admit) and yield (Yield), donations to athletics by alumni (Athletics, in millions of dollars), total donations by alumni (Alumni, in millions of dollars), total non-athletic donations by alumni (NonAthletics) and total state appropriations to the school (State, in millions of dollars), also computed per student (StatePC, in thousands). We note that for variables derived from IPEDS (acceptance rate, number of applicants, yield, and 75th percentile SAT score), we have nearly complete coverage, with only one school (Maryland) failing to report in multiple years. Notably, as is the case with Anderson's data set, the extent of the missing data for donations is much larger. We have athletic donations data for only 356 school-years and total donations data for 605 school-years out of a total possible 715 observations. The pattern of missing data in donations suggests a possible bias, as certain schools (mostly private, notably Notre Dame, Boston College, Miami (FL), Syracuse, Wake Forest) simply did not report donation data (the data is certainly not missing at random). Hence, our findings for the athletic donations response variable should be interpreted with caution. Of course, private schools do not receive state funding, but this does not fully account for the missing data for that variable-state-related schools like Penn State and Pittsburgh also did not report state funding. Many of these variables are strongly right-skewed, and in these cases we have fit the model to their logarithm.
Our athletic success treatment variables-summarized in Table 1-include measures of being good, great, and the best at both basketball and football. Specifically, for both sports we record the cumulative winning percentage over the previous three seasons (BBWpct, FBWpct), as well as any Final Four appearances 15 and national championships in any of the three preceding seasons. Thus, we consider one, two, and three year lags on the Final Four (BBFF1, BBFF2, BBFF3, and FBFF1, FBFF2, FBFF3, respectively) and championship variables (BBChamps1, BBChamps2, BBChamps3, and FBChamps1, FBChamps2, FBChamps3, respectively). The choice to include cumulative winning percentage over three years-as opposed to say, winning percentage in each of the previous three years-was designed to smooth out year-to-year noise in winning percentage, while still reflecting relevant recent history.
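As a concrete illustration, treatment variables of this form can be built from season-level records along the following lines. The school, win totals, and Final Four flags below are invented, and the cumulative winning percentage is taken over the three most recent seasons:

```python
import pandas as pd

# Hypothetical season-level records for one school (all values invented).
df = pd.DataFrame({
    "School": ["A"] * 5,
    "Year": [2006, 2007, 2008, 2009, 2010],
    "Wins": [8, 10, 12, 7, 9],
    "Games": [12, 13, 14, 12, 13],
    "FinalFour": [0, 1, 0, 0, 1],
})
df = df.sort_values(["School", "Year"])
g = df.groupby("School")

# Cumulative winning percentage over a three-season window: total wins
# divided by total games, which smooths year-to-year noise.
wins3 = g["Wins"].transform(lambda s: s.rolling(3).sum())
games3 = g["Games"].transform(lambda s: s.rolling(3).sum())
df["Wpct3"] = wins3 / games3

# One-, two-, and three-year lags of a Final Four appearance (FF1-FF3).
for k in (1, 2, 3):
    df[f"FF{k}"] = g["FinalFour"].shift(k)

print(df[["Year", "Wpct3", "FF1", "FF2", "FF3"]])
```

Grouping by school before rolling and shifting ensures that one school's history never bleeds into another's.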
Our control variables include median income per capita in the state (Income, in thousands of dollars), number of high school diplomas issued in the state in the previous year (HSDiplomas, in thousands), and state funding, both overall and per student (State and StatePC, respectively). These variables help account for variation in the income and population of each state, in addition to the support from state government (only relevant for public schools). While it may be the case that state household income is less relevant for private schools, such schools often draw regional interest and represent only about one-fifth of the schools considered. Further, to control for the unobserved heterogeneity of the 65 institutions we tested both school (School) and year (Year) fixed effects, with both variables treated as categorical. 16 The fixed effect for School should capture the effect of the long-term branding value of the school's athletic program (i.e., Duke basketball), as well as many other attributes. The fixed effect for Year should capture changes due to inflation and other economic conditions.
The data in Table 2 come exclusively from Sports-Reference and the US Census Bureau, and there is no missing data. As the number of high school diplomas awarded is right-skewed, we use the logarithm of this variable in our models.

Our Models
We tested a wide variety of models, including semi-log specifications, different lag patterns, and interaction effects; we focus here on the most important results (please see our data Appendix A for additional information). Unless otherwise indicated, our standard errors are robust (using the HC1 sandwich variance-covariance matrix (Zeileis and Hothorn 2002)) and our p-values are corrected for multiplicity via the Benjamini-Hochberg method (Benjamini and Hochberg 1995), which controls the false discovery rate. All computations were performed in R version 3.4 (R Core Team 2018). Our general regression model for each response variable y_j is

y_j = α_j · Year + γ_j · School + β_j1 · log(HSDiplomas) + β_j2 · Income + θ_j · BB + λ_j · FB + ε_j,

where each α_j is a vector of length 10 containing the fixed effects associated with each academic year (relative to 2005-2006), each γ_j is a vector of length 64 containing the fixed effects associated with each school (relative to the University of Alabama, which comes first alphabetically), β_j1 and β_j2 are the coefficients of the control variables for the number of high school diplomas granted in the previous year and the per capita income in the corresponding state, the θ's are the coefficients associated with the school's recent success in basketball (BBWpct, BBFF1-3, BBChamps1-3), the λ's are the coefficients associated with the school's recent success in football (FBWpct, FBFF1-3, FBChamps1-3), and the error terms ε_j ~ N(0, σ_j²) for some fixed value of σ_j. For readability, we omit the j subscripts in what follows.
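The multiplicity correction above can be sketched in a few lines. This is an illustrative Python reimplementation of the Benjamini-Hochberg step-up adjustment (equivalent in spirit to R's `p.adjust(p, method = "BH")`), not the authors' code:

```python
# Benjamini-Hochberg adjusted p-values: sort the raw p-values, scale the
# p-value at rank k by m/k, then enforce monotonicity by taking running
# minima from the largest rank downward.

def bh_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):        # walk from largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
adj = bh_adjust(raw)
# adj ~ [0.008, 0.032, 0.0672, 0.0672, 0.0672, 0.08, 0.0846, 0.205]
```

A hypothesis is then rejected at false discovery rate q whenever its adjusted p-value is at most q, which is why several borderline raw p-values can share one adjusted value.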

Our Results
In each of the models we fit, School and Year (both treated as categorical variables) were statistically significant. For clarity of presentation we relegate these results to our data Appendix A, but these effects capture much of the variability in the data. There are obvious broad trends, such as a general increase in the number of applications over time, and obvious school-specific effects for which these variables control. Multicollinearity among the explanatory variables of interest, as measured by generalized variance inflation factors (Fox and Monette 1992), does not appear to be problematic in these models.
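For intuition, the variance inflation factor for a single predictor is 1/(1 − R²), where R² comes from regressing that predictor on the remaining predictors; the generalized version cited above extends this to multi-column categorical terms. A minimal two-predictor Python sketch with toy data (not the paper's computation, which used R):

```python
# Toy VIF illustration: two highly correlated predictors produce a large
# variance inflation factor, 1 / (1 - R^2).

def r_squared(y, x):
    """R^2 of a simple one-regressor OLS fit of y on x (with intercept)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def vif(x1, x2):
    return 1.0 / (1.0 - r_squared(x1, x2))

x1 = [1, 2, 3, 4, 5]
x2 = [1.1, 2.0, 2.9, 4.2, 5.0]   # nearly a linear function of x1
v = vif(x1, x2)                   # large: severe collinearity
```

This is why the large factors reported for the two state-level controls (diplomas and income) are unsurprising: those variables move together within a state, while the athletic treatment variables remain well separated.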

Athletic Success and the Quantity and Quality of Applications
Our first model tests the effect of athletic success on the number of applications:

y_1 = log(Applied) = β_1 · X + ε_1

Results from this model are shown in Table 3. Three remarks on interpretation are in order. First, because the University of Alabama serves as the reference school, the success of its football team affects our interpretation of the football-related variables; however, using Alabama as the reference group has no impact on any term's statistical significance, since we consider only the significance of the School variable as a whole. Second, our models do not directly account for the possibility that a school is simultaneously investing in competitive athletics and in other areas of school achievement or marketing. That is, if a school hired a new football coach for $10 million, leading to an appearance in the national football playoffs, at the same time that it hired two Nobel Prize-winning professors and began a multimillion-dollar marketing campaign, then attributing an increase in applications, entering students' SAT scores, or alumni donations to football success would be spurious. There are two caveats to this endogeneity concern: we employ institutional fixed effects that may attenuate or eliminate such a problem, and we know of no evidence that such behavior occurred. In any event, we are not attributing causality to the treatment variables in our models; rather, we note the presence or absence of statistically significant relationships and their magnitudes. Third, we computed generalized variance inflation factors raised to the 1/(2 · df) power, as recommended by Fox and Monette (1992), for all explanatory variables in all models. These factors exceeded 2 only for the control variables: number of high school diplomas (ranging from 20.8 to 23.9) and per capita income (8.9 to 10.0).

Table 3. Models for Applicants.

Out of the 14 athletic performance variables, only two are statistically significant at the 0.10 level: BBChamps2 and FBWpct. BBChamps2 is significant at the 0.05 level, and its coefficient (θ_3 = 0.095) indicates that a national championship two years earlier is associated with a 10% increase in applications (exp(0.095) ≈ 1.10). FBWpct is significant at the 0.001 level, and its coefficient (λ_1 = 0.144) indicates that one additional win per year on average over the previous three seasons is associated with a 1.1% increase in applications (exp(0.144/13) ≈ 1.011, dividing by an average of 13 regular-season games). These results are consistent with the bulk of the previous literature: big-time football and men's basketball success does provide an advertising (and/or anticipated quality of life) effect that boosts applications to a university.

Two additional observations are in order. First, the effect of increasing regular-season winning percentage does not apply to basketball (after controlling for football), and for football the magnitude of the effect is rather modest. Thus, direct advertising of a school's academic programs and campus life might be more effective in generating applications than investing in its basketball or football success. Second, the other observed impact occurs only for the rarest of accomplishments (1 out of 65 schools in each year): a national championship. Investing in creating a national championship basketball or football team entails very high risk for rather ordinary returns. Further, the returns in the form of increased applications appear to be delayed by one year and then dissipate by year three.

Table 3 also shows the results for a model of the admissions rate. These results are highly consistent with the previous ones, due to the obvious functional dependence between the rate of admission and the number of applicants (the number of entering students is largely fixed at most schools).
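The back-transformations behind these effect sizes can be verified directly. This is an illustrative sketch using the coefficient values reported in the text, not output from the fitted models:

```python
import math

# In a log-response model, a coefficient beta on a regressor implies a
# multiplicative effect exp(beta) on the response per one-unit change.

def pct_change_from_log_coef(beta):
    """Percent change in the response for a one-unit change in the regressor
    when the response is log-transformed."""
    return (math.exp(beta) - 1) * 100

# BBChamps2 is binary, so its coefficient applies directly:
champ_effect = pct_change_from_log_coef(0.095)           # ~10% more applications

# FBWpct is a winning percentage; one extra win in a ~13-game regular
# season raises it by 1/13, so scale the coefficient accordingly:
extra_win_effect = pct_change_from_log_coef(0.144 / 13)  # ~1.1% more applications
```

For small coefficients the percent change is close to 100·β (here 9.5 vs. 10.0), but the exact exp(β) − 1 conversion is the safe habit for larger effects.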
The negative coefficient for football winning percentage-the only variable to achieve statistical significance at the 0.10 level-implies that greater success in football is associated with higher selectivity.
Albeit tenuous, the existence of a statistically significant link between athletic success and applications leads to the next question: whether athletic success improves a school's admission yield (the percent of admitted students who enroll). We fit the following model to test this relationship:

y_3 = Yield = β_3 · X + ε_3
Results for this model are shown in Table 4. Of the 14 athletic performance variables, only one is statistically significant at the 0.10 level: FBWpct, significant at the 0.05 level, with an extra regular-season win on average over the previous three years associated with an increased yield of 0.476 percentage points; that is, a yield of 30% would increase to 30.476%. While football's impact is statistically significant but small, the performance of the men's basketball team has no statistically significant impact on yield.
If athletic success leads to an uptick in applications, then even with a steady yield the school may become more selective. Greater selectivity, in turn, may lead to an improvement in the quality of the student body, often represented by a school's SAT scores.
In the following model we test the impact of athletic success on the 75th percentile of SAT scores across the three sections of the exam (Math, English, Writing) for the 65 Power Five schools in our sample:

y_4 = SAT75p = β_4 · X + ε_4
The results from this model are also shown in Table 4. Only one of the 14 athletic performance variables achieves statistical significance at the 0.10 level: BBWpct. Its coefficient (θ_1) implies that every regular-season win on average over three years is associated with a 0.6 point increase in the average across the English, Writing, and Math portions of the SAT for the 75th percentile of the school's student body: a statistically significant effect with no practical impact.

Note: * p < 0.1; ** p < 0.05; *** p < 0.01.

Athletic Success and Alumni Giving
Our next test explores the relationship between alumni donations and athletic success. As was the case for Anderson (2017), who also relied on data from the annual survey of the Council for Aid to Education, there is a substantial problem with missing data: many of the respondent schools simply do not answer the survey questions on alumni donations, or do not do so completely or on a regular annual basis. In Anderson's data set approximately two-thirds of the donation data are missing; in ours, approximately 15% of overall donations and about half of athletic donations (359 out of 715 school-years) are missing. Accordingly, and because the donations data do not appear to be missing at random (as discussed above), our results must be interpreted with caution. The following model examines the relationship between alumni donations to athletics and athletic performance, with the standard control variables; its results are shown in Table 5.
y_5 = log(Athletics) = β_5 · X + ε_5

None of the 14 athletic performance variables is statistically significant at the 0.10 level. The following model examines the relationship between donations to the school's general fund minus athletic donations (non-athletic donations) and athletic performance:

y_6 = log(NonAthletics) = β_6 · X + ε_6
Only one of the 14 athletic performance variables is statistically significant at the 0.10 level: FBWpct. However, the effect size, an expected increase of 1.94% in non-athletic donations for each additional victory per year, is small. Nine of the 14 coefficients had negative signs.

Athletic Success and State Budgetary Support
Our final model examines the relationship between athletic performance and state funding for public universities. The results are shown in Table 6.
Three of the 14 athletic performance variables were significant at the 0.10 level: BBFF1, FBFF1, and FBFF2.
The nominal interpretation of the first coefficient (θ_5) is that a Final Four appearance in the previous basketball season is associated with a 5.3% increase in state funding. However, this is nearly offset by a 3.9% decrease associated with actually winning the championship, although the latter effect is not statistically significant. It is hard to rationalize the idea that teams that reach the Final Four tend to receive greater increases in state funding than teams that win the championship. Further muddying the waters, the effects on the football side show the same offsetting pattern, but in the opposite direction (negative for reaching the Final Four, positive for winning the championship).
When we consider the state funding per capita (as in Table 6), the statistical significance of all variables save for FBFF1 disappears, and the direction of the effect remains negative.

Conclusions
Previous literature on the effects of athletic success on applications, quality of the student body, donations, and state funding has been inconclusive. Researchers have employed different methodologies and models, and most have been limited by incomplete data. We develop a recent data set for the Power Five conferences covering the eleven years from 2006 through 2016 and use fixed effects linear regression models to retest these relationships, reporting robust standard errors and controlling for multiple testing. We find that certain measures of football success have a modest, positive, and short-lived impact on student applications, but no clear impact on admission yield or on the quality of the student body. Although hampered by incomplete data, we also find that athletic success does not have a statistically significant effect on donations. Final Four appearances in both basketball and football show some statistically significant associations with state funding, but the direction and robustness of these findings are unclear.
Author Contributions: The authors contributed equally to this article.

Acknowledgments:
The authors would like to thank Junzhou Liu, our data collection assistant, as well as Michael Lopez, Roger Noll, Brad Humphreys, Ann Kaplan, Nick Horton, Dennis Coates, the reviewers and the associate editor for helpful comments and suggestions.

Appendix A
ANOVA Tables

Below we present the ANOVA tables for all models discussed in the paper. Note the persistent statistical significance of the Year and School terms.