Economies
  • Article
  • Open Access

9 May 2022

Committees or Markets? An Exploratory Analysis of Best Paper Awards in Economics

Franklin G. Mixon, Jr. 1, Benno Torgler 2 and Kamal P. Upadhyaya 3
1 Center for Economic Education, Columbus State University, Columbus, GA 31907, USA
2 School of Economics and Finance, Queensland University of Technology, Brisbane, QLD 4000, Australia
3 Department of Economics, University of New Haven, West Haven, CT 06516, USA
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Sociology of Economics

Abstract

Despite the general usefulness of citations as a market test of the value of one’s work in the marketplace of ideas, journals and publishers tend to use alternative bases of judgment, namely committees, in selecting candidates for the conferral of journals’ best paper awards. Given that recognition in academe—sometimes in the form of compensation and on other occasions in the form of awards—is geared toward incentivizing the production of impactful research and not some less desirable goal or outcome, it is important to understand how sensitive the outcomes of best paper award selection processes are to the type of process used. To that end, this study compares the committee-based selection of best paper awards for journals affiliated with several of the world’s top economic associations to a counterfactual process based on citations to published studies. Our statistical exploration indicates that in most cases, and for most awards, the most cited paper was not chosen. This result invites further discussion of the core characteristics that quantitatively represent the highest impact.

1. Introduction

A recent study by () asserts that there are at least two bases for the importance of citations to academic research. First, the research carried out by members of the academy must be assessed in order to make administrative decisions related to tenure, salary, and hiring. Second, and perhaps more importantly, a relatively large number of citations to a scholar’s work reflects a professional career that has produced a noteworthy contribution to society (). ()’s focus on citations is based on the weight of opinion from the early academic literature (e.g., ; ; ; ), which has tended to favor citations-based analysis as a “market test” of research productivity/impact.1 The studies that followed those early-generation investigations (e.g., ; ; ) maintained support for the use of citations as a measure of impact both within and beyond the research community.2
Despite the general usefulness of citations as a market test of the value of one’s work in the marketplace of ideas, journals and publishers tend to use alternative bases of judgment, namely committees, in selecting candidates for the conferral of journals’ best paper awards. Given that recognition in academe—sometimes in the form of compensation and on other occasions in the form of awards—is geared toward incentivizing the production of impactful research and not some less desirable goal or outcome (e.g., see ), it is important to understand how sensitive the outcomes of best paper award selection processes are to the type of process used. To that end, this study explores the efficacy of current best paper award processes by comparing the committee-based selection of best paper awards for journals affiliated with several of the world’s top economic associations (e.g., American Economic Association, Econometric Society) to a counterfactual process based on citations to published studies. Our statistical exploration indicates that in most cases, and for most awards, the most cited paper was not chosen. This result invites further discussion of the core characteristics that quantitatively represent the highest impact.
Before turning to a discussion of our empirical exploration, we first review the economics literature on related areas of interest. This review is followed by a vignette concerning the Hicks-Tinbergen Award, which is conferred biennially on the authors of the best paper published in the official journal of the European Economic Association. A section describing the data and statistical analysis comes next, followed by a discussion of the study’s limitations and recommendations for future research. We then offer some concluding remarks.

3. A Vignette: The Hicks-Tinbergen Award

Our exploratory examination of the efficacy of committee selection of best paper awards begins with a vignette focusing on the Hicks-Tinbergen Award, which was created by the European Economic Association (EEA) in 1991 and is awarded once every two years (i.e., in even-numbered years) to the author(s) of the best paper published in the Association’s official journal during the two preceding years.12 Through the presentation of the 2002 Hicks-Tinbergen Award, the EEA’s official journal was the European Economic Review (EER). Beginning with the presentation of the 2004 Award, the group’s official publication was, and remains, the Journal of the European Economic Association (JEEA).13 The selection committee for the Hicks-Tinbergen Award is composed of the editor of the JEEA, the EEA’s past president, and the EEA’s vice-president. This committee begins by seeking award nominations from the EEA’s membership.14 The selection committee then discusses the nominations put forward by EEA members and produces a shortlist. The JEEA editors who handled the short-listed papers are often called upon to evaluate the nominations. In the spring of each year in which the award is given, the committee informs the EEA’s Executive Committee of its decision.15 The winner is announced immediately after the meeting, and a statement is posted on the EEA’s website.
All past Hicks-Tinbergen Award winners are listed in Table 1 along with their respective institutional affiliations, while the titles of the award-winning studies are listed in Table A1 in Appendix A. The 15 prior awards have been shared by 34 economists, for an average of about 2.3 winners (authors) per award. On only two occasions—2008 and 2012—has the Award been conferred upon a single winner: Botond Kőszegi of the University of California—Berkeley and Guido Tabellini of Bocconi University, respectively. Of the remaining 13 awards, seven have gone to teams of two, while the remaining six were received by teams of three or more. Table 1 also provides the number of Google Scholar citations per year for each of the 15 Award-winning articles. Garnering almost 286 citations per year, the 2004 awardees—Frank Smets of the European Central Bank and Raf Wouters of the National Bank of Belgium—produced the most impactful winning study to date. It is followed by the 2018 award-winning study, authored by Luigi Guiso of the Einaudi Institute for Economics and Finance, Paola Sapienza of Northwestern University, and Luigi Zingales of the University of Chicago, and published in the JEEA, which has garnered about 180 citations per year. The least impactful award-winning study is the 1994 study by Robert Innes of the University of Arizona and Richard Sexton of the University of California—Davis. This study, published in the EER, has garnered only about 2.5 citations per year, yet it is separated by only about 1.8 citations per year from the next least-cited award-winning paper, by Juan Carrillo of Free University Brussels and Thomas Mariotti of the London School of Economics. The latter study was published in the EER in 2002 and has garnered 4.3 citations per year.
Table 1. Hicks-Tinbergen Award, 1992–2020.
In Table 1 are also listed the top-cited articles for each of the evaluation periods covering the 15 Hicks-Tinbergen Awards. First among these is the top-cited study published during the 2012 Award evaluation period, authored by Thomas Dohmen of Maastricht University, Armin Falk of the University of Bonn, David Huffman of Swarthmore College, Uwe Sunde of the University of St. Gallen, Jürgen Schupp of Free University Berlin, and Gert Wagner of the Max Planck Institute. The study, published in the JEEA, has received 329.5 citations per year since publication. This piece is closely followed by the 2004 top-cited study by Jean-Charles Rochet and Jean Tirole, both of Toulouse 1 University. Their study has picked up almost 321.5 citations per year since publication by the JEEA. Lastly, the least impactful top-cited study is the 2016 paper by Wolfgang Dauth of the Institute for Employment Research, Sebastian Findeisen of the University of Mannheim, and Jens Suedekum of both Heinrich-Heine-University Düsseldorf and the Düsseldorf Institute for Competition Economics. At more than 66 citations per year since publication by the JEEA, this study has still been more impactful than 12 of the 15 award-winning selections shown on the left-hand side of Table 1.
Next, Table 1 also includes the “Cites Ratio” for each of the 15 Hicks-Tinbergen Award periods. This ratio represents the proportion of the top-cited study’s citations per year garnered by the award-winning study. As shown in Table 1, the ratio ranges from 0.0161 to one, meaning that the award-winning articles have garnered anywhere from only about 1.6 percent of the citations earned by the top-cited article to as many citations as the top-cited article. The former case occurred in 2002, with the award-winning paper by Carrillo and Mariotti and the top-cited paper by Jeffrey Sachs and Andrew Warner of Harvard University. The former has accrued 4.3 citations per year since publication in the EER, while the latter has garnered about 267 citations per year. The latter case occurred in 2018, when the award-winning paper by Guiso, Sapienza, and Zingales was also the top-cited paper of the same selection period.
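To make the “Cites Ratio” concrete, the following Python sketch computes citations per year and the ratio for one winner/top-cited pair. All inputs (citation totals, years, and names) are illustrative assumptions, not figures drawn from Table 1.

```python
# A minimal sketch of the "Cites Ratio" calculation; all inputs are
# hypothetical and do not reproduce Table 1's data.

def cites_per_year(total_cites: float, pub_year: int, as_of: int) -> float:
    """Average Google Scholar citations per year since publication."""
    return total_cites / max(as_of - pub_year, 1)

# Hypothetical winner and top-cited paper from the same evaluation period.
winner_rate = cites_per_year(total_cites=850, pub_year=2003, as_of=2021)      # ~47.2/yr
top_cited_rate = cites_per_year(total_cites=2900, pub_year=2003, as_of=2021)  # ~161.1/yr

# Share of the top-cited paper's per-year citations garnered by the winner.
cites_ratio = winner_rate / top_cited_rate
print(f"Cites Ratio = {cites_ratio:.4f}")  # ~0.2931
```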
Lastly, some summary statistics are presented along the bottom of Table 1, including the means and standard deviations of citations per year for each category of papers: the Hicks-Tinbergen Award winners and the top-cited studies from each two-year evaluation cycle. The mean number of citations per year across the 15 award-winning papers is 63.42, while that for the top-cited papers stands at 184.39. The difference between these two means, 120.97 citations per year, is treated stochastically: a means-difference test yields a t-ratio of 3.57, indicating that the difference is greater than zero at the 0.001 level of statistical significance. Also presented at the bottom of Table 1 is the mean “Cites Ratio” of 0.2972, indicating that, at the mean, the award-winning papers have garnered just under 30 percent of the citations per year garnered by the top-cited papers.
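Under the usual paired-design assumptions, the reported t-ratio can be recovered from summary statistics alone. In the sketch below, the mean difference and sample size come from the text, while the standard deviation of the paired differences is an assumed value, chosen so that the example reproduces a t-ratio near the reported 3.57.

```python
import math

from scipy import stats

d_bar = 184.39 - 63.42  # mean difference in citations per year (from the text)
n = 15                  # number of award cycles (from the text)
s_d = 131.2             # ASSUMED standard deviation of the paired differences

t_stat = d_bar / (s_d / math.sqrt(n))       # one-sample t on the differences
p_one_sided = stats.t.sf(t_stat, df=n - 1)  # H1: difference > 0
print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
```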

4. Data and Statistical Analysis

A broader exploratory analysis involves examination of the best paper awards at other economics journals. This study examines the best paper awards at four journals in the American Economic Association journal portfolio. These fall under the American Economic Journal titles, with specific entries subtitled Applied Economics (AEJAE), Economic Policy (AEJEP), Macroeconomics (AEJMa), and Microeconomics (AEJMi). In addition to these, we also examine best paper awards conferred by Quantitative Economics (QE) and Theoretical Economics (TE), two journals in the Econometric Society’s portfolio. Combined with analysis of the EEA’s EER and JEEA, our study includes data from three of the top economic associations in the world. In addition to these, we examine best paper awards from the Journal of the Association of Environmental and Resource Economists (JAERE) and the Journal of Environmental Economics and Management (JEEM), two journals that have been part of the Association of Environmental and Resource Economists’ portfolio.16 Next, our analysis includes best paper awards from the Economic Record (ER), an official journal of the Economic Society of Australia, Environmental and Resource Economics (ERE), which is affiliated with the European Association of Environmental and Resource Economists, and the International Journal of the Economics of Business (IJEB), which maintains an association with the Society of Business Economists.
The best paper awards listed above are administered using processes similar to that for the Hicks-Tinbergen Award.17 For example, the Econometric Society established its best paper prizes for QE and TE in 2015 in order to highlight the best paper published in each journal in the areas of quantitative economics and economic theory, respectively. Prior to 2019, the journals’ editors and co-editors selected a list of nominees, from which the associate editors elected the winning paper. In 2019, the process was revised so that the prize is awarded by an external committee that alternates annually between QE and TE. Currently, each of these awards is presented to a paper published in the journal during the two calendar years immediately preceding the year in which the award is made. Similarly, to select the best paper published in the Economic Society of Australia’s ER in a given year, a selection panel reviews the published papers and evaluates them according to their relevance and importance, originality in the use of data and theory, elegance of method and exposition, and strength of policy conclusions. The panel then selects the best paper published that year.
Table 2 lists information on the best paper awards in economics examined in this study, including details on the Hicks-Tinbergen Award, conferred originally for the best paper in the EER and later for the best paper in the JEEA. The table also includes the mean citations per year for the top-cited paper during each evaluation period for the best paper awards under study, as well as the mean citations per year accrued by the winning papers. In each of the 15 cases presented in Table 2, the top-cited paper’s mean exceeds that of the winning paper. In fact, the difference ranges from just over five citations per year to almost 167 citations per year. As with the Hicks-Tinbergen Award vignette presented in the prior section, these differences are treated stochastically by testing them against a null hypothesis of 0. As shown in Table 2, each of the 15 differences is greater than 0 at the 0.10 level of significance, while 13 (11) of the 15 are significant at the 0.05 (0.01) level.
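In practice, these per-award tests can be run directly on the paired series of citations per year. A minimal sketch using scipy, with hypothetical values standing in for ten award cycles of a single journal:

```python
import numpy as np
from scipy import stats

# Hypothetical citations per year across ten award cycles of one journal.
top_cited = np.array([120.5, 95.0, 210.3, 45.8, 160.2, 80.1, 132.7, 55.4, 99.9, 143.0])
winners = np.array([30.2, 60.5, 44.1, 20.7, 88.3, 15.9, 70.2, 41.8, 25.6, 52.4])

# One-sided alternative: top-cited papers accrue more citations per year.
result = stats.ttest_rel(top_cited, winners, alternative="greater")
print(f"t = {result.statistic:.2f}, one-sided p = {result.pvalue:.4f}")
```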
Table 2. Sample Means and Difference-in-Means Tests.
Next, Table 3 presents the concordance ratios for each of the 15 best paper award entries in Table 2. For each best paper award, the concordance ratio represents the proportion of cases in which the best paper award was conferred upon the top-cited paper. As indicated in Table 3, the mode of this ratio is 0, with seven of the 15 entries representing award processes wherein the top-cited paper was never selected for the best paper award. The largest concordance ratio of 0.30, belonging to both the AEJAE and the AEJEP, indicates that 30 percent of the best paper award selection processes involving these two journals resulted in the top-cited paper receiving the best paper award. The concordance ratios for these two journals only marginally exceed the ratio of 0.29 for the IJEB, while the remaining concordance ratios are relatively small, ranging from 0.05 to 0.11.
Table 3. Concordance Ratios and Significance Tests.
As before, these ratios are treated stochastically by testing them against a null hypothesis of 0. As shown in Table 3, none of the concordance ratios reaches the 0.01 level of significance, while only those greater than 0.29 (0.28) achieve the 0.05 (0.10) level of significance. Of the 15 entries in Table 3, only two (three) achieve the 0.05 (0.10) level of significance. These results indicate that committees are not picking the most successful articles in terms of citations. Lastly, and relatedly, the exploratory analysis discussed above provides a rationale for award committees to consider alternative processes for recognizing the authors of meritorious research. One avenue for consideration is to delay the conferral of awards in order to allow for a market-based determination of the merits of each paper published in the volume(s) of a journal. Although any delay further increases the time between the production of research and the conferral of awards, perhaps as little as a five-year delay would be sufficient to provide award committees with an indication of the relative merits of a set of publications.
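As to the concordance-ratio tests above, the calculation can be illustrated in code. Whether the paper tests each ratio via a one-sample t-test on the cycle-level concordance indicator, as sketched here, is our assumption; the indicator series itself is hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical record: 1 marks a cycle in which the committee's pick was also
# the top-cited paper; here, 3 concordant picks in 10 cycles (ratio = 0.30).
concordant = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 0])

ratio = concordant.mean()  # concordance ratio
t_stat, p_two_sided = stats.ttest_1samp(concordant, popmean=0.0)
p_one_sided = p_two_sided / 2  # H1: concordance ratio > 0
print(f"ratio = {ratio:.2f}, t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
```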

5. Limitations and Recommendations for Future Research

The analysis presented in this study is not without limitations. As has been discussed in the scientometrics literature, citations can, in some cases, be “arbitrary,” which makes it difficult to achieve the level of precision proposed in this study. For example, the abundance of literature on a subject often obliges researchers to decide, sometimes arbitrarily, which references to cite. Similarly, a researcher may not be able to identify all of the literature in his or her field, especially given the ongoing proliferation of journals, and may therefore omit relevant publications from the list of bibliographical references.
Added to these issues are problems related to network effects and to citations accruing to authors by virtue of their fame. The Matthew effect, for example, implies the existence of a visibility threshold that must be reached before citations accumulate more readily (see ; ; ; ). These problems relate to the historical debate over the use of citations versus other measurements, such as publication counts (e.g., see ; ; ; ; , , ; ; ; ; ; , , , ), as well as to recent declarations by several initiatives (e.g., DORA, the Leiden Manifesto) cautioning the university community against using citations for the individual evaluation of researchers (e.g., see ; ). These issues suggest the use of citation classes instead of a single bibliometric index. Future work in this area might examine whether prize-winning papers are, for example, among the top one percent (five percent) most cited on the topic. If so, one might conclude that there is concordance between the objective measurement (i.e., citations) and the subjective measurement (i.e., committees).
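The citation-class check suggested here is straightforward to operationalize. A minimal sketch, assuming a hypothetical topic-level distribution of citation counts:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
# Placeholder for a topic's citation counts: a skewed, citation-like distribution.
topic_cites = rng.negative_binomial(n=1, p=0.02, size=5000)

def in_top_class(paper_cites: int, population: np.ndarray, pct: float) -> bool:
    """True if paper_cites reaches the top `pct` percent of the population."""
    return paper_cites >= np.percentile(population, 100 - pct)

winner_cites = 320  # hypothetical prize-winning paper
print(in_top_class(winner_cites, topic_cites, pct=1.0))  # among the top 1%?
print(in_top_class(winner_cites, topic_cites, pct=5.0))  # among the top 5%?
```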
Next, our analysis does not control for article type or topic. For example, review articles are generally highly cited even though they only synthesize a collection of existing studies; as such, they are unlikely to be selected for best paper awards. Although this issue may not be prevalent in economics, wherein review articles are less common than in other fields and a dedicated outlet, the Journal of Economic Surveys, exists for such work, it would be an important consideration if our approach were applied to other fields, which is an obvious avenue for future research. Similar issues may also exist with regard to the type of article (e.g., theoretical vs. empirical), the access status of the article (i.e., open access vs. subscription), the number of authors and the countries represented, and the interdisciplinary nature of the study. Future research might control for these elements in the analysis.

6. Conclusions

What constitutes a high-quality paper is not an easy question. Committees and scientific organizations face a difficult assessment task when deciding how to recognize the best papers. One may argue that the value of a publication is not defined by its citations alone, but then the question arises as to what other objective criteria should play a role in the selection process. Selecting a single paper over many others is a complex task, and opinions within the committee may vary considerably given the large number of potential candidate papers. Recognizing the best papers can be a reflection of how well an academic society or its committee members are able to identify scientific impact or significant achievement, and can itself be seen as a catalyst for, or sign of, a successful association or journal. The results of the exploratory analysis discussed in this paper indicate that in most cases, and for most awards, the most cited paper was not chosen. This requires a discussion as to the core characteristics that quantitatively represent the highest impact. How best to identify and recognize the best papers is of practical importance in science.

Author Contributions

Conceptualization, F.G.M.J., B.T. and K.P.U.; methodology, F.G.M.J. and B.T.; writing—original draft preparation, F.G.M.J. and K.P.U.; writing—review and editing, F.G.M.J. and B.T.; project administration, F.G.M.J. and B.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank two anonymous reviewers for helpful comments on a prior version. The usual caveat applies.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Hicks-Tinbergen Award, 1992–2020.
Year | Winning Paper Title | Top-Cited Paper Title
1992 | “Price Formation for Fish: An Application of an Inverse Demand Function” | “Trade, Knowledge Spillovers, and Growth”
1994 | “Customer Coalitions, Monopoly Price Discrimination and Generic Entry Deterrence” | “Financial Markets and Growth: An Overview”
1996 | “Job Matching and Job Competition: Are Lower Educated Workers at the Back of Job Queues?” | “International R&D Spillovers”
1998 | “Wages, Profits and the International Portfolio Puzzle” | “Income Distribution, Political Instability, and Investment”
2000 | “Gift Exchange and Reciprocity in Competitive Experimental Markets” | “Monetary Policy Rules in Practice: Some International Evidence”
2002 | “Electoral Competition and Politician Turnover” | “The Curse of Natural Resources”
2004 | “An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area” | “Platform Competition in Two-Sided Markets”
2006 | “Capital, Labor, and the Firm: A Study of German Codetermination” | “Liquidity Risk and Contagion”
2008 | “Ego Utility, Overconfidence, and Task Choice” | “Distance to Frontier, Selection, and Economic Growth”
2010 | “Youth Unemployment and Crime in France” | “Why is Fiscal Policy Often Procyclical?”
2012 | “Culture and Institutions: Economic Development in the Regions of Europe” | “Individual Risk Attitudes: Measurement, Determinants, and Behavioral Consequences”
2014 | “What Good is Wealth without Health? The Effect of Health on the Marginal Utility of Consumption” | “Rethinking the Effect of Immigration on Wages”
2016 | “The Balanced U.S. Press” | “The Rise of the East and the Far East: German Labor Markets and Trade Integration”
2018 | “Long-Term Persistence” | “Long-Term Persistence”
2020 | “Competition and Welfare Gains from Transportation Infrastructure: Evidence from the Golden Quadrilateral of India” | “What Works? A Meta Analysis of Recent Active Labor Market Program Evaluations”

Notes

1. () makes reference to a corpus of literature in economics suggesting that citations may be superior to other measures of research productivity, such as publication counts and journal impact factors.
2. Obviously, treating all citations equally is concerning, as citations serve different functions (; ). Citations may be linked to fashionable topics that prove to be less promising at a later stage. Papers may also be cited because they are considered to be wrong rather than valuable contributions to science or knowledge (). Citations are also used strategically. For example, hat-tipping citations are aimed at pleasing authors who might be potential referees, in the hope that the cited authors will reciprocate (). Some articles are cited but not read; (), for example, estimate that only around 20 percent of cited papers are actually read. Influence would mean that a paper shaped the creation of new ideas, methods, problems, or solutions in society or academia. Improper use of references can also have serious implications in important settings such as health care (). Errors and issues may increase as the quantity of published material continues to grow substantially ().
3. These findings are supported by more recent studies (; ; ).
4. A relatively new branch of the economics literature indicates that the receipt of prestigious medals and awards is a useful way of ranking economics faculties and departments (e.g., see ; , ; , ).
5. Such an examination addresses whether John Bates Clark Medal bestowal simply reflects the past behavior of the most talented economists, or whether the awards actually raise subsequent productivity (). For more information on the John Bates Clark Medal, see ().
6. () also note that the number of citations received increased by 50 percent compared to the counterfactual.
7. A separate stream of the academic literature explores the relationships between academic accomplishment and access to various job-related perquisites, as well as their productivity implications (e.g., ; ; ; , , ; ; ).
8. A study by () finds some evidence of shorter times to acceptance for editorial board members’ articles.
9. The author–editor connections examined by () categorize the author and editor as connected if they have ever worked in the same institution at the same time, if they received their Ph.D. from the same university in the same year, if the editor was one of the Ph.D. advisors of the author, or if the author has ever coauthored a paper with the editor.
10. () control for author, article, and journal-specific characteristics that might influence an article’s citations.
11. The data led () to conclude that economics is more a monocentric cultural pyramid than a polycentric market.
12. See https://www.eeassoc.org/awards (accessed on 10 March 2022). The inaugural presentation of the Hicks-Tinbergen Award occurred in 1992. Records indicate that this Award covered publications over the period from 1989 through 1991. Lastly, according to the EEA, the Award is named Hicks-Tinbergen to make it clear that the EEA supports both theoretical and empirical work in economics in Europe.
13. The JEEA is published by Oxford University Press on behalf of the EEA. Its predecessor, the EER, is an independent journal published by Elsevier.
14. The selection process described here can be found on the organization’s website.
15. The selection committee explains to the EEA’s Executive Committee how the decision was reached, and provides it with a list of any other candidates who were considered as potential winners of the award during the last stages of the process.
16. Until launching its new flagship journal, the JAERE, the Association of Environmental and Resource Economists presented a best paper award—the Ralph C. d’Arge and Allen V. Kneese Award—to a paper selected from the JEEM. This award was presented annually from 2009 to 2013, after which it was replaced by an unnamed award for the best paper in the JAERE.
17. Descriptions of the selection processes for best paper awards affiliated with the Econometric Society and the Economic Society of Australia can be found on these organizations’/journals’ websites.

References

  1. Berger, Mark C., and Frank A. Scott. 1990. Changes in U.S. and southern economics departments rankings over time. Growth and Change 21: 21–31. [Google Scholar] [CrossRef]
  2. Bladek, Marta. 2014. DORA: San Francisco Declaration on Research Assessment (May 2013). College & Research Libraries News 75: 191–96. [Google Scholar]
  3. Brogaard, Jonathan, Joseph Engelberg, and Christopher A. Parsons. 2014. Network position and productivity: Evidence from journal editor rotations. Journal of Financial Economics 111: 251–70. [Google Scholar] [CrossRef]
  4. Chan, Ho Fai, Bruno S. Frey, Jana Gallus, and Benno Torgler. 2014a. Academic honors and performance. Labour Economics 31: 188–204. [Google Scholar] [CrossRef]
  5. Chan, Ho Fai, Laura Gleeson, and Benno Torgler. 2014b. Awards before and after the Nobel Prize: A Matthew effect and/or a ticket to one’s own funeral. Research Evaluation 23: 210–20. [Google Scholar] [CrossRef][Green Version]
  6. Chan, Ho Fai, Franklin G. Mixon, Jr., and Benno Torgler. 2018. Relation of early career performance and recognition to the probability of winning the Nobel Prize in economics. Scientometrics 114: 1069–86. [Google Scholar] [CrossRef]
  7. Chan, Ho Fai, Ali Sina Önder, and Benno Torgler. 2015. Do Nobel laureates change their patterns of collaboration following prize reception? Scientometrics 105: 2215–35. [Google Scholar] [CrossRef]
  8. Chan, Ho Fai, Ali Sina Önder, and Benno Torgler. 2016. The first cut is the deepest: Repeated interactions of coauthorship and academic productivity in Nobel laureate teams. Scientometrics 106: 509–24. [Google Scholar] [CrossRef]
  9. Chan, Ho Fai, and Benno Torgler. 2012. Econometric fellows and Nobel laureates in economics. Economics Bulletin 32: 3365–77. [Google Scholar]
  10. Chan, Ho Fai, and Benno Torgler. 2015. The implications of educational and methodological background for the career success of Nobel laureates: An investigation of major awards. Scientometrics 102: 847–63. [Google Scholar] [CrossRef]
  11. Coelho, Philip R. P., James E. McClure, and Peter J. Reilly. 2014. An investigation of editorial favoritism in the AER. Eastern Economic Journal 40: 274–81. [Google Scholar]
  12. Colussi, Tommaso. 2018. Social ties in academia: A friend is a treasure. Review of Economics and Statistics 100: 45–50. [Google Scholar] [CrossRef]
  13. Combes, Pierre-Philippe, Laurent Linnemer, and Michael Visser. 2008. Publish or peer-rich? The role of skills and networks in hiring economics professors. Labour Economics 15: 423–41. [Google Scholar] [CrossRef]
  14. Conley, John P., and Ali Sina Önder. 2014. The research productivity of new PhDs in economics: The surprisingly high non-success of the successful. Journal of Economic Perspectives 28: 205–16. [Google Scholar] [CrossRef]
  15. Conroy, Michael E., Richard Dusansky, David Drukker, and Arne Kildegaard. 1995. The productivity of economics departments in the U.S.: Publications in the core journals. Journal of Economic Literature 33: 1966–71. [Google Scholar]
  16. Coupé, Tom. 2003. Revealed performances: Worldwide rankings of economists and economics departments, 1990–2000. Journal of the European Economic Association 1: 1309–45. [Google Scholar] [CrossRef]
  17. Coupé, Tom. 2004. What do we know about ourselves? On the economics of economics. Kyklos 57: 197–216. [Google Scholar] [CrossRef]
  18. Davis, Paul, and Gustav F. Papanek. 1984. Faculty ratings of major economics departments by citations. American Economic Review 74: 225–30. [Google Scholar]
  19. Faria, João Ricardo, Franklin G. Mixon, Jr., and Kamal P. Upadhyaya. 2016. Human capital, collegiality, and stardom in economics: Empirical analysis. Scientometrics 106: 917–43. [Google Scholar] [CrossRef]
  20. Faria, João Ricardo, Franklin G. Mixon, Jr., and Kamal P. Upadhyaya. 2017. Human capital and collegiality in academic beehives: Theory and analysis of European economics faculties. Theoretical and Applied Economics 24: 147–62. [Google Scholar]
  21. Faria, João R., Franklin G. Mixon, Jr., and William C. Sawyer. 2021. Human capital, networks and clubs in academe. Unpublished Manuscript. [Google Scholar]
  22. Feldman, Maryann P., and Maryellen R. Kelley. 2003. Leveraging research and development: Assessing the impact of U.S. advanced technology program. Small Business Economics 20: 153–65. [Google Scholar] [CrossRef]
  23. Fiala, Dalibor, Cecilia Havrilová, Martin Dostal, and Ján Paralič. 2016. Editorial board membership, time to accept, and the effect on the citation counts of journal articles. Publications 4: 21. [Google Scholar] [CrossRef]
  24. Frey, Bruno S. 2010. Withering academia? Analyse & Kritik 32: 285–96. [Google Scholar]
  25. Frey, Bruno S., and Jana Gallus. 2014. The power of awards. Economists’ Voice 11: 1–5. [Google Scholar] [CrossRef]
  26. Frey, Bruno S., and Jana Gallus. 2017. Honours versus Money: The Economics of Awards. Oxford: Oxford University Press. [Google Scholar]
  27. Frey, Bruno S., and Susanne Neckermann. 2008. Awards in Economics: Towards a New Field of Inquiry. Zurich: Institute for Empirical Research in Economics, University of Zurich. [Google Scholar]
  28. Frey, Bruno S., and Susanne Neckermann. 2009. Abundant but neglected: Awards as incentives. Economists’ Voice 6: 1–4. [Google Scholar] [CrossRef]
  29. Gerrity, Dennis M., and Richard B. McKenzie. 1978. The ranking of Southern economics departments: New criterion and further evidence. Southern Economic Journal 45: 608–14. [Google Scholar] [CrossRef]
  30. Gibbons, Jean D., and Mary Fish. 1991. Rankings of economic faculties and representation on editorial boards of top journals. Journal of Economic Education 22: 361–72. [Google Scholar] [CrossRef]
  31. Gómez-Mejia, Luis R., Len J. Treviño, and Franklin G. Mixon, Jr. 2009. Winning the tournament for named professorships in management. International Journal of Human Resource Management 20: 1843–63. [Google Scholar] [CrossRef]
  32. Graves, Philip E., James R. Marchand, and Randall Thompson. 1982. Economics departmental rankings: Research incentives, constraints and efficiency. American Economic Review 72: 1131–41. [Google Scholar]
  33. Hamermesh, Daniel S. 2018. Citations in economics: Measurement, uses, and impacts. Journal of Economic Literature 56: 115–56. [Google Scholar] [CrossRef]
  34. Hicks, Diana, Paul Wouters, Ludo Waltman, Sarah de Rijcke, and Ismael Rafols. 2015. Bibliometrics: The Leiden Manifesto for research metrics. Nature 520: 429–31. [Google Scholar] [CrossRef] [PubMed]
  35. Hilmer, Michael J., and Christiana E. Hilmer. 2011. Do editors favor their students’ work? A test of undue favoritism in top economics journals. Economics Bulletin 31: 53–65. [Google Scholar]
  36. Kalaitzidakis, Pantelis, Theofanis P. Mamuneas, and Thanasis Stengos. 2003. Rankings of academic journals and institutions in economics. Journal of the European Economic Association 1: 1346–66. [Google Scholar] [CrossRef]
  37. Klein, Daniel B. 2005. The Ph.D. circle in academic economics. Econ Journal Watch 2: 133–48. [Google Scholar]
  38. Kerr, Steven. 1975. On the folly of rewarding A, while hoping for B. Academy of Management Journal 18: 769–83. [Google Scholar] [PubMed]
  39. Kocher, Martin G., and Matthias Sutter. 2001. The institutional concentration of authors in top journals of economics during the last two decades. Economic Journal 111: 405–21. [Google Scholar] [CrossRef]
  40. Kosfeld, Michael, and Susanne Neckermann. 2011. Getting more work for nothing? Symbolic awards and worker performance. American Economic Journal: Microeconomics 3: 86–99. [Google Scholar] [CrossRef]
  41. Kosfeld, Michael, Susanne Neckermann, and Xiaolan Yang. 2016. The effects of financial and recognition incentives across work contexts: The role of meaning. Economic Inquiry 55: 237–47. [Google Scholar] [CrossRef]
  42. Laband, David N. 1985a. Publishing favoritism: A critique of department rankings based on quantitative publishing performance. Southern Economic Journal 52: 510–15. [Google Scholar] [CrossRef]
  43. Laband, David N. 1985b. A ranking of the top Canadian economics departments by research productivity of graduates. Canadian Journal of Economics 18: 904–7. [Google Scholar] [CrossRef]
  44. Laband, David N. 1985c. An evaluation of 50 ‘ranked’ economics departments—By quantity and quality of faculty publications and graduate student placement and research success. Southern Economic Journal 52: 216–40. [Google Scholar] [CrossRef]
  45. Laband, David N., and Michael J. Piette. 1994. Favoritism versus search for good papers: Empirical evidence regarding the behavior of journal editors. Journal of Political Economy 102: 194–203. [Google Scholar] [CrossRef]
  46. Laband, David N., and Robert D. Tollison. 2000. On secondhandism and scientific appraisal. Quarterly Journal of Austrian Economics 3: 43–48. [Google Scholar] [CrossRef]
  47. Larsson, K. S. 1995. The dissemination of false data through inadequate citation. Journal of Internal Medicine 238: 445–50. [Google Scholar] [CrossRef] [PubMed]
  48. Liebowitz, Stan J., and John P. Palmer. 1984. Assessing the relative impacts of economics journals. Journal of Economic Literature 22: 77–88. [Google Scholar]
  49. Levitt, Steven D., and Susanne Neckermann. 2014. What field experiments have and have not taught us about managing workers. Oxford Review of Economic Policy 30: 639–57. [Google Scholar] [CrossRef]
  50. Mayer, Thomas. 2004. Comment on ‘Dry holes in economic research’. Kyklos 57: 621–25. [Google Scholar] [CrossRef]
  51. Mazloumian, Amin, Young-Ho Eom, Dirk Helbing, Sergi Lozano, and Santo Fortunato. 2011. How citation boosts promote scientific paradigm shifts and Nobel prizes. PLoS ONE 6: e18975. [Google Scholar] [CrossRef]
  52. Medoff, Marshall H. 2003. Editorial favoritism in economics? Southern Economic Journal 70: 425–34. [Google Scholar]
  53. Mixon, Franklin G., Jr. 1998. Favoritism or showcasing high-impact papers? Modeling editorial placement of journal articles in economics. International Review of Economics 45: 327–40. [Google Scholar]
  54. Mixon, Franklin G., Jr., Benno Torgler, and Kamal P. Upadhyaya. 2017. Scholarly impact and the timing of major awards in economics. Scientometrics 112: 1837–52. [Google Scholar] [CrossRef]
  55. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2001. Ranking economics departments in the US South. Applied Economics Letters 8: 115–19. [Google Scholar] [CrossRef]
  56. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2008. A citations-based appraisal of new journals in economics education. International Review of Economics Education 7: 36–46. [Google Scholar] [CrossRef]
  57. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2011. From London to the continent: Ranking European economics departments on the basis of prestigious medals and awards. Ekonomia 14: 119–26. [Google Scholar]
  58. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2012. The economics Olympics: Ranking U.S. economics departments based on prizes, medals, and other awards. Southern Economic Journal 79: 90–96. [Google Scholar] [CrossRef]
  59. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2014. Eyes on the prize: Human capital and demographic elements of economics’ Nobel Prize and John Bates Clark Medal. Briefing Notes in Economics 24: 1–18. [Google Scholar]
  60. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2016a. Ranking economics departments in the US South: An update. Applied Economics Letters 23: 1224–28. [Google Scholar] [CrossRef]
  61. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2016b. Out of big brother’s shadow: Ranking economics faculties and regional universities in the US South. Economics Bulletin 36: 1609–15. [Google Scholar]
  62. Mixon, Franklin G., Jr., and Kamal P. Upadhyaya. 2019. Research productivity and the ranking of junior economics faculty: An appraisal of alternative metrics. Advances in Management and Applied Economics 9: 9–17. [Google Scholar]
  63. Neckermann, Susanne, Reto Cueni, and Bruno S. Frey. 2014. Awards at work. Labour Economics 31: 205–17. [Google Scholar] [CrossRef]
  64. Neckermann, Susanne, and Bruno S. Frey. 2013. And the winner is? The motivating power of employee awards. Journal of Socio-Economics 46: 66–77. [Google Scholar] [CrossRef]
  65. Schneider, Jesper W. 2013. Caveats for using statistical significance tests in research assessments. Journal of Informetrics 7: 50–62. [Google Scholar] [CrossRef]
  66. Scott, Loren C., and Peter M. Mitias. 1996. Trends in rankings of economics departments in the U.S.: An update. Economic Inquiry 34: 378–400. [Google Scholar] [CrossRef]
  67. Segall, Eric J., and Adam Feldman. 2019. The elite teaching the elite: Who gets hired by the top law schools? Journal of Legal Education 68: 614–22. [Google Scholar] [CrossRef]
  68. Simkin, Mikhail V., and Vwani P. Roychowdhury. 2003. Read before you cite! Complex Systems 14: 269–74. [Google Scholar]
  69. Steel, C. M. 1996. Read before you cite. The Lancet 348: 144. [Google Scholar] [CrossRef]
  70. Thelwall, Mike. 2017. Confidence intervals for normalized citation counts: Can they delimit underlying research capability? Journal of Informetrics 11: 1069–79. [Google Scholar] [CrossRef]
  71. Thelwall, Mike, and Ruth Fairclough. 2017. The accuracy of confidence intervals for field normalized indicators. Journal of Informetrics 11: 530–40. [Google Scholar] [CrossRef]
  72. Torgler, Benno, and Marco Piatti. 2013. A Century of American Economic Review: Insights on Critical Factors in Journal Publishing. Berlin: Springer. [Google Scholar]
  73. Waltman, Ludo. 2016. Conceptual difficulties in the use of statistical inference in citation analysis. Journal of Informetrics 10: 1249–52. [Google Scholar] [CrossRef][Green Version]
  74. Yuret, Tolga. 2018. Path to success: An analysis of US educated elite academics in the United States. Scientometrics 117: 105–21. [Google Scholar] [CrossRef]
  75. Zhu, Xiaodan, Peter Turney, Daniel Lemire, and Andre Vellino. 2015. Measuring academic influence: Not all citations are equal. Journal of the Association for Information Science and Technology 66: 408–27. [Google Scholar] [CrossRef]
  76. Zott, Christoph, and Quy N. Huy. 2007. How entrepreneurs use symbolic management to acquire resources. Administrative Science Quarterly 52: 70–105. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
