Predicting Seasonal Performance in Professional Sport: A 30-Year Analysis of Sports Illustrated Predictions

In 2017, Sports Illustrated (SI) made headlines when their remarkable 2014 prediction that the Houston Astros (a team in one of the lowest Major League Baseball divisional rankings) would win the World Series came true. The less-publicised story was that in 2017, SI predicted the Los Angeles Dodgers to win the Major League Baseball (MLB) title. Assessing the forecasting accuracy of experts is critical as it explores the difficulty and limitations of forecasts and can help illuminate how predictions may shape sociocultural notions of sport in society. To thoroughly investigate SI's forecasting record, predictions were collected from the four major North American sporting leagues (the National Football League, National Basketball Association, Major League Baseball, and National Hockey League) over the last 30 years (1988–2018). Kruskal–Wallis H tests and Mann–Whitney U tests were used to evaluate the absolute and relative accuracy of predictions. Results indicated that SI had the greatest predictive accuracy in the National Basketball Association and was significantly more likely to correctly predict divisional winners than conference and league champions. Future work in this area may seek to examine multiple media outlets to gain a more comprehensive perspective on forecasting accuracy in sport.


Introduction
In 2014, Sports Illustrated (SI) made the lofty prediction that the Houston Astros (a team in one of the lowest Major League Baseball (MLB) divisional rankings) would win the 2017 World Series. Three years later, to the public's surprise, this prediction came true. The media marvelled at this occurrence, giving credit to SI's predictive accuracy [1,2]. The less-publicised story, however, was that the team SI predicted in 2017 to win the 2017 Championship was the Los Angeles Dodgers. Surprisingly, SI was praised for the accuracy of their 2014 prediction rather than criticised for changing that prediction to one that proved incorrect in the same year. The original prediction was especially odd since the Astros were at the bottom of the league standings when it was made. It is possible that the bizarreness of this prediction (moving from last to first place) may have influenced how it was perceived and evaluated [3][4][5][6]. However, these differing forecasts highlight the complexities of talent identification and prediction decisions in sport. In this study, we examine the accuracy of SI's pre-season predictions across the four major North American professional sports leagues (MLB, the National Football League (NFL), the National Basketball Association (NBA), and the National Hockey League (NHL)) over the last 30 years.
Since the turn of the century, the sports industry has seen a rise in the use of analytics [7]. A large catalyst for this shift was the MLB team Oakland Athletics' 'Moneyball' movement, in which the team experienced tremendous success despite financial restrictions by using objective measures and statistics [8]. This phenomenon raised awareness of the value of using quantitative approaches to minimise the biases of more subjective approaches to decision-making. Currently, computer technology and statistical acumen occupy a growing role in the development of accurate forecasting models, and although expert opinions have been evaluated in fields like meteorology [9] and finance [10], there has been less attention to the accuracy of experts in predicting sport outcomes [11,12]. Recent years, however, have brought new attention to evaluating expert and non-expert (also referred to as 'crowd') forecasts in sport. Specifically, newly published research includes (i) evaluating expert forecasters over an extended period of time [13,14], (ii) evaluating crowds of tipsters (or semi-expert forecasters) [15,16], (iii) evaluating the informational content of sports experts through their contributions on social media [17], and (iv) evaluating semi-expert judgments regarding sporting predictions to improve forecast accuracy [18]. Assessing the forecasting accuracy of 'experts' is critical as it explores the difficulty and limitations of forecasts and challenges us to re-think notions of athlete development and the path to elite performance.
Evidence comparing the accuracy of expert opinions and computer-generated models is mixed. On the one hand, there is a substantial body of literature showing the benefits of removing the 'human element' from the decision-making process [8,19,20], while on the other hand, there is evidence highlighting the importance of the human 'eye' in judgements [21,22]. For example, it was discovered that, on average, computer-based models made more correct forecasts at the 2003 Rugby Union World Cup than experts [23]. However, certain human experts were more accurate than the computer-based models, with one individual predicting 46 of 48 matches correctly. Perhaps the greater average accuracy of these computer-generated models can be attributed to the greater agreement of predictions across the different systems, as such agreement is associated with more accurate predictions [24][25][26]. The ability of computerised models to consistently and efficiently analyse data is certainly a strength of these systems [24].
Human judgements are often prone to biases, which can result in overconfidence by experts [27]. Such overconfidence, coupled with the public's tendency to overlook the numerous inaccurate forecasts and celebrate the rare accurate ones, form the grounds of the bizarreness effect [3][4][5][6] and inflate claims of the accuracy of expert forecasts in sport [28]. This explains why bold predictions capture the public's attention and disrupt memory of other events [6] as exemplified by SI's Houston Astros prediction and the resulting impact on the public's perception of SI's predictive accuracy [1,2].
With greater access to performance statistics and other information than ever before, a relevant question is whether this surplus of data helps or hinders human forecasts [10,29-31]. Coincident with this increased access to data, there has been a greater focus on the value of 'simple heuristics' for improving the accuracy of decision-making and forecasting [32]. For instance, several studies have explored the availability heuristic, a phenomenon that postulates a well-recognised competitor (i.e., one which is most readily 'available' in our brains) is more likely to win compared to a less recognised one [10,31,32]. This work suggests having less information may be more beneficial when making decisions under uncertainty, although results in this area have been mixed [11,29,30,33]. The sociocultural implications and the way sport is constructed in society can thus influence individuals when making forecasts.
Despite the evidence presented above that may question the validity of experts, individuals continue to rely on expert opinions [30], presumably, at least in part, due to their access to useful (and often proprietary) information. However, it is important to critically examine the forecasting records of 'experts' to evaluate whether the public should continue to deem these predictions valuable. Furthermore, given that their ideas are released on behalf of large, seemingly reputable organisations, sports media outlets such as SI have the propensity to shape overall public opinion about the value and accuracy of sports forecasting. Given these implications, the predictive accuracy of popular media outlets is an area ripe for further exploration. SI is among the most distinguished sports magazines worldwide. Their inaugural issue in 1954 featured "Scouting Reports", and seasonal performance forecasts have continued to grow and evolve ever since (i.e., in frequency, variables included, and time to prediction). A focused examination of SI's predictive accuracy provides an opportunity for a unique case study of predictive accuracy in professional sports over a longer time frame than is possible with events such as the World Cup of Soccer [34], World Cup of Rugby [10,23], and Wimbledon Tennis Championship [31]. One of the few studies to examine the predictive accuracy of season-long MLB forecasts found that SI and the New York Times made more accurate predictions than random guesses, except for the National League Conference winners [35]. Although informative, this work deserves reassessment as MLB has expanded and re-aligned since that 1994 study. Therefore, the objective of this study was to assess the accuracy of SI's pre-season forecasts of the divisional standings, conference standings, and championship winners of the four main North American sport leagues (the NFL, NBA, MLB, and NHL) over the past 30 years.

Procedures
Quantitative content analysis of the popular magazine SI was utilised as the primary research method. Content analysis refers to the objective, systematic, and quantitative analysis of communications content [36]. Pre-season predictions made by SI between 1988 and 2018 were collected from the magazine's online repository. The repository consists of digitised versions of the paper magazines, from SI's inaugural 1954 issue to the latest SI magazines from 2021. This archive can be freely accessed at https://vault.si.com/vault/archives (accessed on 14 August 2021). All four major North American sporting leagues (the NFL, NBA, MLB, and NHL) were examined, and predictions included the championship winner, runner-up, and score (for the NFL only), as well as divisional standings and conference winners where available. In years where SI did not explicitly state their prediction, inferences were made for each division based on their rankings of the conference (i.e., the team in each division with the highest predicted conference ranking was inferred to be the predicted divisional winner). This method was utilised for 8.1% of the data. Predictions were then compared to official results for the respective leagues. Throughout the process, the number of teams within each league, conference, and division was noted. Some data (13.7%) were missing from the dataset for reasons such as league lockouts, missing pages in the magazines, or unclear predictions.
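The inference rule described above (taking the team in each division with the best predicted conference ranking as the inferred divisional winner) can be sketched as follows; the function name, team labels, and division layout are illustrative assumptions, not SI data.

```python
# Hypothetical sketch of the divisional-winner inference rule: when SI
# published only a predicted conference ranking, the team in each division
# with the best (lowest-numbered) predicted conference rank is taken as
# the predicted divisional winner.

def infer_division_winners(conference_ranking, divisions):
    """conference_ranking: list of teams, best first.
    divisions: dict mapping division name -> set of member teams."""
    rank = {team: i for i, team in enumerate(conference_ranking)}
    return {
        div: min(teams, key=lambda t: rank[t])
        for div, teams in divisions.items()
    }

# Illustrative six-team conference split into two divisions:
ranking = ["A", "B", "C", "D", "E", "F"]
divisions = {"East": {"B", "D", "F"}, "West": {"A", "C", "E"}}
print(infer_division_winners(ranking, divisions))
# {'East': 'B', 'West': 'A'}
```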

Data Analysis
Several, largely descriptive, approaches were taken in the analyses. Data were analysed in the Statistical Package for the Social Sciences (SPSS). Two areas of interest were highlighted: the absolute accuracy of SI's predictions and the relative accuracy of their predictions (i.e., exactly how far the experts were off when they were incorrect). Initially, a simple binary score was calculated to determine the level of overall agreement for the predictions (i.e., was the prediction correct? Yes/No). The total number of correct forecasts within each division, conference, and league was collected, and following this, the average percentage of correct predictions was calculated for each domain to yield a comparable value (i.e., this number considered the total predictions made and controlled for elements such as the changing structural compositions of each league, lockouts, missing data, and unclear predictions). Subsequently, comparisons between predicted vs. actual results for winner and runner-up provided information on relative accuracy. Predicted vs. actual results were calculated via discrepancy scores, where the team's predicted standing was subtracted from their actual standing. Thus, smaller discrepancy scores were associated with greater relative accuracy (i.e., the team's actual finish was close to where SI predicted they would finish). Since the playoff rounds yielded ambiguous final standings (i.e., at the conclusion of each round, numerous teams are eliminated at once), a relative scale was created to clarify the actual results for the winner and runner-up predictions. The scale was as follows: 1 = Won Championship, 2 = Lost in Championship, 3 = Lost in Third Round, 4 = Lost in Second Round, 5 = Lost in First Round, 6 = Did Not Qualify for Playoffs. The average discrepancy scores for each division, conference, and league were then calculated, permitting comparisons between these categories.
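As a rough sketch, the discrepancy-score calculation and the 1-6 playoff scale above might be implemented as follows; the function and key names are hypothetical (the study itself used SPSS), and the example values are illustrative.

```python
# The paper's relative scale for ambiguous playoff finishes:
PLAYOFF_SCALE = {
    "won_championship": 1,
    "lost_championship": 2,
    "lost_third_round": 3,
    "lost_second_round": 4,
    "lost_first_round": 5,
    "missed_playoffs": 6,
}

def discrepancy(predicted_standing, actual_result):
    """Discrepancy score: actual standing minus predicted standing.
    Values nearer zero indicate greater relative accuracy."""
    return PLAYOFF_SCALE[actual_result] - predicted_standing

# A team predicted to win (standing 1) that lost in the first round:
print(discrepancy(1, "lost_first_round"))  # 4
# A correct championship prediction:
print(discrepancy(1, "won_championship"))  # 0
```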
As expected, the data violated the assumption of normality required for parametric statistics. Therefore, non-parametric Mann-Whitney U and Kruskal-Wallis H tests were used to compare groups. The exact significance level is reported where applicable; a Monte Carlo approximation was used as a substitute where computational limits did not allow the exact significance to be calculated. For Kruskal-Wallis H tests that met the significance level of 0.05, Mann-Whitney U tests were utilised as 'post hoc' tests to further investigate the finding.
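For illustration only, the two test statistics named above can be computed in pure Python as sketched below. The study used SPSS; this simplified version handles ties only crudely and omits significance levels and the Monte Carlo machinery.

```python
from itertools import chain

def mann_whitney_u(x, y):
    """Smaller of the two Mann-Whitney U statistics for samples x and y,
    by direct pair counting (ties counted as half)."""
    u = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    return min(u, len(x) * len(y) - u)

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic with average ranks for ties
    (no tie correction to the variance)."""
    pooled = sorted(chain.from_iterable(groups))
    # average rank for each distinct value
    ranks = {}
    for v in set(pooled):
        positions = [i + 1 for i, p in enumerate(pooled) if p == v]
        ranks[v] = sum(positions) / len(positions)
    n = len(pooled)
    rank_term = sum(sum(ranks[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * rank_term - 3 * (n + 1)

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))                # 0.0
print(round(kruskal_wallis_h([1, 2], [3, 4], [5, 6]), 3))  # 4.571
```

In practice these statistics would then be referred to their exact or approximate null distributions to obtain the significance levels reported in the paper.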
Due to the exploratory nature of this study, we used two-tailed tests for all comparisons. Furthermore, a Bonferroni correction was applied to our alpha level to reduce the likelihood of Type I errors. However, given the small sample sizes, we focused on measures of effect size over indicators of statistical significance. Effect sizes are reported as Pearson's correlation coefficient (r) for Mann-Whitney U tests and as epsilon-squared (ε²) for Kruskal-Wallis tests [37]. For the purposes of this study, effect sizes below 0.2 were categorised as small, 0.2-0.5 as moderate, and 0.5 or greater as large.
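The effect-size conversions and the study's size bands might be sketched as follows. The formulas (r = Z/sqrt(N) for Mann-Whitney and ε² = H/((n² - 1)/(n + 1)) for Kruskal-Wallis) are standard conventions for these tests rather than details given in the paper, and the numbers are illustrative.

```python
import math

def mann_whitney_r(z, n):
    """Pearson's r from a Mann-Whitney Z statistic and total sample size n."""
    return z / math.sqrt(n)

def epsilon_squared(h, n):
    """Epsilon-squared from a Kruskal-Wallis H statistic and total n."""
    return h / ((n ** 2 - 1) / (n + 1))

def size_label(effect):
    """The study's bands: below 0.2 small, 0.2-0.5 moderate, 0.5+ large."""
    e = abs(effect)
    if e < 0.2:
        return "small"
    if e < 0.5:
        return "moderate"
    return "large"

print(size_label(mann_whitney_r(2.1, 30)))   # moderate (r ~ 0.38)
print(size_label(epsilon_squared(9.0, 60)))  # small (~ 0.15)
```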
The variables (number of correct predictions, average percentage of correct predictions, and average discrepancy score) were also totalled for the divisions and conferences within the league to provide a comparison between leagues (i.e., to determine whether SI made the most correct forecasts in the NFL, NBA, MLB, or NHL). For the winner and runner-up predictions for each league, we used Kruskal-Wallis H tests to compare across three time brackets (with approximately the same number of years in each bracket) to determine changes in the percentage of correct predictions (hereinafter referred to as "percent correct") and discrepancy score. Due to changes in the sports leagues over the past 30 years, we attempted to standardise the accuracy values relative to the number of teams in the division/conference/league. The average number of teams per division/conference/league was calculated over the 30-year time frame, and these values were taken into account when examining the number of correct predictions and the average percentage of correct predictions in each respective category.

Results
Results from the Kruskal-Wallis and Mann-Whitney U tests are presented below and in Table 1. Descriptive statistics for each league are presented below (as well as in Table 2), followed by analyses comparing leagues (Tables 3 and 4) and trends over time (Tables 5-7).

NFL
NFL league-wide data (i.e., winner vs. runner-up) revealed medium-sized effects but only the discrepancy scores displayed significant differences (see Table 1). Comparing the American Football Conference (AFC) to the National Football Conference (NFC) yielded medium effect sizes, with significantly higher discrepancy scores for the NFC compared to the AFC. Interestingly, the percent correct and discrepancy scores for the NFC did not significantly differ from the league winner category, despite the NFC having half as many teams as the entire NFL. Divisionally, non-significant small effect sizes were found when examining the AFC and NFC divisions, as well as when comparing the AFC to the NFC. Furthermore, as the average number of teams increased, percent correct decreased and discrepancy scores increased (i.e., SI was less accurate with larger groups). Divisional predictions showed significantly higher percent correct and lower discrepancy scores than conference and league-wide data. Moderate and small-to-moderate effect sizes were found when examining divisional vs. league-wide and conference-wide predictions, respectively.
Examining league winner forecasts over the last 30 years revealed small effect sizes for both percent correct and discrepancy scores, although none of these effects met the criteria for statistical significance (see Table 5). Additionally, the Jonckheere-Terpstra Test of Trend revealed small effect sizes and no discernible trends (see Table 7).

NBA
The NBA data (see Table 1) revealed non-significant small effects between the league winner and runner-up predictions. Small effect sizes were also seen when comparing the predictions for the Eastern Conference and Western Conference Champions. Non-significant small effects were also found when examining the NBA divisions, indicating that the division being predicted had little relation to SI's resulting accuracy. SI was, however, significantly more likely to correctly predict divisional winners compared to league and conference champions. Specifically, moderate and small-to-moderate effect sizes were found when comparing SI's accuracy in forecasting divisional champions to league and conference champions, respectively.
A small-to-moderate effect size was found when examining the discrepancy scores of NBA Champion predictions over the last 30 years. Significant differences and a large effect size were found between the 1988-1997 and 1998-2007 time brackets (see Table 5), where the latter had a larger median discrepancy score (see Table 6). A moderate effect size was found when comparing 1988-1997 vs. 2008-2018 and when comparing 1998-2007 and 2008-2018. Despite this, the Jonckheere-Terpstra Test indicated there were no significant trends (see Table 7).

MLB
Discrepancy scores of the MLB league winner and runner-up were significantly different with a large effect size (Table 1). At the conference level, non-significant small effects were noted between the American League (AL) and the National League (NL) winners. Divisionally, non-significant small effects were found, denoting that SI had no more success in one division than another. SI had significantly more success (i.e., higher percent correct and lower discrepancy scores) in predicting MLB divisional winners compared to league and conference champions, with moderate-to-large and small-to-moderate effects, respectively. Non-significant small effects were found for the forecasts of league winners compared to conference winners, indicating similar margins of error, despite conferences having half as many teams as the league.
Examining World Series Champion predictions via the Jonckheere-Terpstra test revealed a significant trend over time with a moderate-to-large effect size (see Table 7). Specifically, 1996-2003 had significantly smaller discrepancy scores than 2004-2011 with a large effect size between these two time periods. A non-significant small effect size was found between 2004-2011 and 2012-2018, suggesting an initial decrease in predictive accuracy (beginning in 2004) that has since remained consistent. Although no significant difference was found between the discrepancy scores of 1996-2003 and 2012-2018, there was a large effect size indicating SI's predictive accuracy decreased over time.

NHL
Comparing the forecasted winner vs. runner-up in the NHL revealed no significant differences (see Table 1), and effect sizes were small-to-moderate for percent correct and small for discrepancy score. The Eastern conference had significantly larger discrepancy scores than the Western conference with a moderate-sized effect. Divisionally, non-significant small effects suggested the division being forecasted had little correlation with SI's predictive accuracy. Furthermore, as the average number of teams decreased, predictive accuracy increased. For example, percent correct was significantly higher and discrepancy scores significantly lower for divisions compared to league-wide predictions with moderate effect sizes. Comparisons between divisional and conference forecasts indicated a significant moderate effect size for discrepancy score but non-significant small effects for percent correct.
Examining SI's predictions for the Stanley Cup winner over time (see Table 5) yielded small effects for percent correct and discrepancy scores, implying little correlation between time and SI's predictive accuracy. Overall, no significant differences were found between time brackets, and the Jonckheere-Terpstra test confirmed there were no distinct trends over time (see Table 7).

Comparing across Leagues
An analysis combining the league-wide predictions (i.e., NFL championship winner and runner-up vs. NBA championship winner and runner-up, etc.) revealed the NBA had the largest percent correct (see Table 3). Moderate effect sizes and significant differences were revealed between the NBA and NFL, as well as between the NBA and MLB. A non-significant small effect was found when comparing the NBA to the NHL. SI also experienced the greatest relative accuracy in the NBA, with significantly lower discrepancy scores than in the other three leagues and moderate effect sizes. Non-significant small effects were found when comparing the other leagues on both percent correct and discrepancy scores (see Table 3).
The percent correct and discrepancy scores for the divisions and conferences within each league were combined to yield statistics that could be used for comparison across leagues. Generally, at the conference level, small effects and no significant differences were found for percent correct (see Table 3). SI experienced the greatest relative accuracy with the NBA at the conference level. However, the only significant difference was found between the NBA and MLB and this comparison yielded a moderate effect size.
Divisionally, there were significant differences in percent correct and discrepancy scores (see Table 3) although effect sizes were small. Again, SI's divisional NBA predictions had significantly higher percent correct than divisional predictions for the NFL and NHL. SI also experienced the greatest relative accuracy with their divisional NBA predictions, which had significantly lower discrepancy scores than other leagues. An analysis of all predictions found SI's overall percent correct was 32.8%.

Discussion
Technology and the internet have allowed for greater examination of forecasts, but only in recent years. Analyses of SI's predictive accuracy over the last 30 years revealed several intriguing findings. These results help to critically examine the forecasting accuracy of 'experts' and how their predictions shape sociocultural notions of sport in society.
First, there was a clear, and unsurprising, trend showing increased forecasting accuracy when the average number of teams was the lowest (i.e., divisions had significantly higher percent correct and lower discrepancy scores than conference and league-wide predictions). However, non-significant small effect sizes were found between divisions, indicating that the division being forecasted had little relation to SI's resulting predictive accuracy. This was somewhat surprising as certain divisions (e.g., the AFC East in football, the AL East in baseball) are known for 'powerhouse' organisations who have historically been consistent contenders for divisional championships. Perhaps the emergence of previously weak teams in these divisions, such as the Tampa Bay Rays (who finished in last place every year from 1998-2007, yet since 2008 have had a winning record in eight seasons), has led to more uncertainty in these perceived 'easy-to-predict' divisions. Alternatively, if there is no clear divisional favourite, the small pool of teams to choose from may increase the likelihood of correct predictions. Finally, each league has implemented numerous structural changes over the last 30 years (i.e., divisional adjustments, expansion teams). This may also explain why SI had no more success in predicting the winners of one division compared to another, as continual divisional re-alignments may hinder any emerging trends.

SI Accuracy Is Greatest in the NBA
At the divisional, conference, and league levels, SI had the greatest predictive success in the NBA. This may reflect how basketball is structured in relation to other sports. For example, with more scoring attempts per game (i.e., due to a 'shot clock' that requires teams to shoot the ball every 24 seconds), the likelihood of random outcomes decreases as the variance of points regresses towards a mean value determined by the 'talent' on a given team [38]. Moreover, the NBA playoffs are structured as four rounds of best-of-seven series, and lengthening a playoff series reduces the likelihood of a weaker team 'upsetting' the stronger team [39]. In comparison, playoff formats of other leagues rely on single-elimination games (e.g., the NFL) or shorter series (e.g., MLB). Similarly, NBA rosters possess the fewest athletes, which further reduces uncertainty by limiting the potential for variance (i.e., fewer players to project).
It is also likely that greater prediction accuracy in the NBA is associated with the league's financial structure, which differs from other North American leagues in that it operates under a 'soft' salary cap (i.e., franchise payrolls are limited yet some exceptions are permitted). The salary cap was implemented in 1984 with the hopes of increasing competitive balance. However, it has been largely ineffective, as the lack of parity in the NBA has remained consistent over the last 30 years [38,40], seemingly making the outcomes easier to predict than in other leagues. It is argued the large exemptions to the cap have been detrimental to league parity [41]. For example, the NBA is renowned for dynasties such as the Boston Celtics, Chicago Bulls, and more recently the Golden State Warriors. While other leagues also possess dominating organisations, the NBA differs as its "Larry Bird Exemption" is a clause under the 'soft' salary cap that allows teams to re-sign one of their own players without regard to the salary cap [41,42]. These 'loopholes' in the salary cap damage the competitive balance, as large-market franchises can match outside offers and keep their star players together for successive seasons [41]. In comparison, the NHL's revised Collective Bargaining Agreement (CBA) implemented in 2005 has decreased an organisation's capacity to "hold together" championship teams, whereby no NHL team has won the Stanley Cup more than two years in a row [42]. Even prior to the new CBA, a team had won the Stanley Cup more than two consecutive years only twice in the post-expansion era (1967 to present). Similarly, the NFL illustrates greater competitive balance, as a team has yet to win three consecutive Super Bowl titles [43]. While the primary mechanisms explaining these effects are unknown, the existence of these trends suggests a range of additional factors that may explain SI's greater predictive accuracy in the NBA.

No Changes in Prediction Accuracy over Time
In general, results indicated no significant differences or meaningful effects in SI's predictive accuracy over time. This was somewhat surprising considering the increased attention given to the use of predictive analytics in short- and long-term sport forecasting. However, despite advancements in analytics over the last 30 years, emphasis has also been placed on increasing competitive balance within leagues in order to attract fans, prevent bankruptcy, and discourage the creation of rival leagues [43][44][45][46]. Evidence suggests that parity has generally improved for most leagues since the 1990-1999 decade [38]; as a consequence, accurate forecasting may have become more difficult (i.e., a more 'balanced' league with numerous contenders is theoretically more difficult to predict).
In an effort to achieve competitive balance, leagues have introduced various redistribution mechanisms (e.g., revenue-sharing, reverse-order drafts, free-agency, and salary caps). Ultimately, these measures must be addressed as their proposed benefits to competitive balance [38,41,42,46] may influence forecasting ability. However, their effectiveness is largely unknown as recent increases in competitive balance may not be a direct result of these redistribution mechanisms (i.e., causation cannot be inferred). For example, North American sports also use unbalanced schedules (i.e., teams do not play opponents the same number of times), a format that traditionally produces more uncertainty [47], which also may have contributed to greater league parity. Traditional redistribution mechanisms include revenue-sharing and numerous studies have explored the effects of revenue-sharing with mixed results [48][49][50][51][52][53][54]. Additionally, the effectiveness of the 'reverse-order' draft is also largely unknown as it is argued that adding only one player may not be impactful [55]. Free agency is another redistribution mechanism implemented by all four leagues, yet it is uncertain whether it helps or hinders predictive accuracy. This clause allows for player mobility yet may also permit the formation of 'super-teams'. Finally, salary caps (which are designed to limit large-market teams from signing all the elite athletes) [56] are another redistribution mechanism that may impact predictive accuracy. While their effects are also ambiguous [38,41,46] the NHL and NFL are argued to have greater parity due to their 'hard' salary caps [38] compared to the NBA's 'soft' salary cap and the MLB's luxury tax (i.e., a surcharge for teams that exceed a pre-determined payroll amount) [41]. Overall, it is unclear how these redistribution mechanisms may have affected the predictive accuracy in our analyses. 
The possible direct and interactive effects of these mechanisms provide a host of intriguing areas for further work.
While SI's forecasting accuracy remained largely consistent over time, there was a trend of decreasing accuracy in the MLB. Specifically, 1996-2003 had significantly lower discrepancy scores than 2004-2011. Although this suggests that SI's accuracy in predicting World Series Champions has decreased over time, the dominance of the New York Yankees, who won four league titles during 1996-2003, needs to be considered. Nonetheless, as previously noted, parity in MLB has generally grown over time [48,57], creating greater difficulty in forecasting the World Series Champions. Other factors may also be especially relevant for MLB forecasts (e.g., performance-enhancing drugs; MLB's 162-game regular season) but are largely unaccounted for in SI's pre-season predictions. Last, although SI's remarkable three-year-out Houston Astros prediction sparked this study, recent evidence revealed unfair play on the Astros' behalf [58], and this must be acknowledged as a potential confounding factor in SI's and our analyses.

Limitations and Future Research
This study is one of few to examine long-term forecasting accuracy, using data from a popular American media outlet. Although interesting results were discovered, the study had some notable limitations. First, 30 years of data, although impressive as a longitudinal data source, provided a relatively small number of data points, some of which were missing due to reasons such as league lockouts, missing pages in the magazine, or years in which SI had not made a prediction. As a result of this small sample size, the statistical tests conducted were generally crude and conservative, increasing the possibility of Type II error. While the purpose of this research project was to present an initial exploration into SI's forecasting accuracy, future studies may seek to use more rigorous statistical tests as data become more readily available (i.e., as the forecasting pool grows when SI makes pre-season predictions in subsequent years). In an effort to increase sample size, future studies may also seek to compare the forecasting accuracy of multiple news outlets (i.e., to determine if certain sport media outlets are more successful than others) or compare the predictive accuracy of media outlets to betting odds or other forecasting forums. Finally, SI's "team of editors" who made the prediction each year was inconsistent (i.e., the team changed over time), which confounded our ability to discern predictive accuracy over time. Future work could overcome this limitation by examining a single forecaster, for example, a notable sports commentator, over time.
Additionally, the present study could be enhanced by comparing SI's pre-season predictions to other 'futures betting markets' or sporting prediction magazines. Positioning SI's projections against different measures of across-season competitive balance could help to provide a benchmark for SI's forecasting and highlight the strengths and weaknesses of each source's respective forecasting abilities. Using the futures betting market could be an especially fruitful comparison, as returns could be calculated to see whether following SI's predictions would have made money over this time frame.
It is also worth considering that forecasting accuracy is a small segment in the entirety of sports culture and mass media communications. The changing dynamics within the sociocultural vehicle of sports and how SI represents those values must be considered when examining their predictive accuracy. Several other studies have analysed periodicals over time and noted how their content is impacted by the time period in which they are published [59,60].
Last, the authors were unable to determine how SI predictions were/are made. For this reason, future work could benefit from understanding whether SI forecasts were the result of one person's gut instinct (i.e., a tipster or pundit), an aggregation of several people's judgments, some sort of statistical model used by SI, or some combination of the above.

Conclusions
This study presented an initial exploration into SI's pre-season forecasting accuracy for the four major North American sport championships over the last 30 years. While results varied across leagues, SI was generally more successful at predicting divisional winners compared to conference and league champions. Interestingly, in all leagues, it was found that SI was not significantly better at predicting certain divisions compared to others. A direct comparison of leagues revealed SI experienced the greatest predictive accuracy in the NBA. Furthermore, an examination of 30 years of predictions revealed no distinct trends over time, yet there is some evidence to suggest that forecasting accuracy for the MLB may have decreased. This study suggests that despite technological advancements, expert opinions still carry some value in modern-day sport forecasting. Findings from this study highlight that expert opinions should be viewed through the sociocultural lens of sport, as they help to shape the way society views sport as a whole. Although greater research is required to fully optimise predictive success in North American sporting leagues, this study presents a valuable baseline analysis of SI's forecasting accuracy.