# Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle


## Abstract


## 1. Introduction

## 2. Background

#### 2.1. Maximum Entropy Model

#### 2.2. K-Means Clustering

- Let $\{{x}_{1},{x}_{2},\cdots ,{x}_{n}\}$ be the set of data points and $V=\{{v}_{1},{v}_{2},\cdots ,{v}_{c}\}$ be the set of cluster centers;
- Randomly select $c$ cluster centers and calculate the distance between each data point and every cluster center;
- Assign each data point to its nearest cluster center;
- Recalculate each cluster center using ${v}_{i}=(1/{c}_{i}){\sum}_{j=1}^{{c}_{i}}{x}_{j}$, where ${c}_{i}$ is the number of data points in the $i$th cluster;
- Recalculate the distance between each data point and the newly obtained cluster centers;
- If no data point was reassigned, stop; otherwise, repeat from step 3.
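The steps above can be sketched in Python (a minimal Lloyd-style illustration; the function name and toy data are ours, not the paper's):

```python
import numpy as np

def kmeans(X, c, max_iter=100, seed=0):
    """Plain k-means following the steps listed above."""
    rng = np.random.default_rng(seed)
    # Step 2: randomly select c data points as the initial cluster centers.
    centers = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # Steps 2/5: distance from every point to every center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        # Step 3: assign each point to its nearest center.
        new_labels = dists.argmin(axis=1)
        # Step 6: stop when no point changes cluster.
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Step 4: v_i = (1 / c_i) * sum_{j=1}^{c_i} x_j for each cluster i.
        for i in range(c):
            members = X[labels == i]
            if len(members):
                centers[i] = members.mean(axis=0)
    return centers, labels
```

On well-separated data the loop typically terminates after a few passes, well before `max_iter`.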

## 3. Materials and Methods

#### 3.1. Basic Technical Features

#### 3.2. NBAME Model Overview

- $p(y|x)\ge 0$ for all $x,y$;
- ${\sum}_{y}p(y|x)=1$ for all $x$;
- ${\sum}_{(x,y)}\tilde{p}(x,y){f}_{k}(x,y)={\sum}_{(x,y)}\tilde{p}(x)p(y|x){f}_{k}(x,y)$ for $k\in \{1,2,\dots ,K\}$.
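Under these constraints, the maximum entropy solution has the log-linear form $p(y|x)=\frac{1}{Z(x)}\exp\left({\sum}_{k}{\lambda}_{k}{f}_{k}(x,y)\right)$, and the weights ${\lambda}_{k}$ can be fitted with Darroch and Ratcliff's Generalized Iterative Scaling. A minimal sketch (the toy data, feature definitions, and function names here are illustrative, not the NBAME implementation):

```python
import numpy as np

def gis_fit(f, xs, ys, data, n_iter=500):
    """Fit lambda for p(y|x) proportional to exp(sum_k lambda_k * f_k(x, y))
    with Generalized Iterative Scaling. GIS requires a constant feature sum,
    so a slack feature is appended to make every sum equal C."""
    K = len(f(xs[0], ys[0]))
    C = max(f(x, y).sum() for x in xs for y in ys) + 1.0

    def feats(x, y):
        v = np.asarray(f(x, y), dtype=float)
        return np.append(v, C - v.sum())  # slack feature -> constant sum C

    # Empirical expectations E~[f_k] and empirical marginal p~(x).
    emp = np.mean([feats(x, y) for x, y in data], axis=0)
    px = {x: sum(1 for xi, _ in data if xi == x) / len(data) for x in xs}

    lam = np.zeros(K + 1)
    for _ in range(n_iter):
        model = np.zeros(K + 1)  # model expectations E_p[f_k]
        for x in xs:
            scores = np.array([lam @ feats(x, y) for y in ys])
            p = np.exp(scores - scores.max())
            p /= p.sum()
            for y, py in zip(ys, p):
                model += px[x] * py * feats(x, y)
        # GIS update: lambda_k += (1/C) * log(E~[f_k] / E_p[f_k]).
        lam += np.log(np.maximum(emp, 1e-12) / np.maximum(model, 1e-12)) / C
    return lam, feats

# Invented toy data: x = 1 if the home team had the better record,
# y = 1 if the home team won.
data = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0)]
f = lambda x, y: np.array([float(x == 1 and y == 1), float(x == 0 and y == 0)])
lam, feats = gis_fit(f, xs=[0, 1], ys=[0, 1], data=data)
```

At convergence the fitted $p(y|x)$ reproduces the empirical conditional frequencies, which is exactly the third constraint above.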

## 4. Results

#### 4.1. Data Collection and Preprocessing

#### 4.2. The Results of the NBAME Model for Predicting the NBA Playoffs

#### 4.3. Comparison of NBAME Model with Some Selected Existing Machine Learning Algorithms

## 5. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References


**Figure 3.** The number and accuracy of predictions with different confidence by the NBAME model from the 2007–08 season to the 2014–15 season playoffs.

**Figure 4.** ROC curves and AUC values of prediction using the NBAME model from the 2007–08 season to the 2014–15 season playoffs.

Feature | Abbreviation | Feature | Abbreviation
---|---|---|---
Field Goal Made | FGM | Field Goal Attempt | FGA
Three Point Made | 3PM | Three Point Attempt | 3PA
Free Throw Made | FTM | Free Throw Attempt | FTA
Offensive Rebounds | Oreb | Defensive Rebounds | Dreb
Assists | Ast | Steals | Stl
Blocks | Blk | Turnover | TO
Personal Fouls | PF | Points | PTS

**Table 2.** Sample features' raw values obtained from the http://www.stat-nba.com/ website.

Home team's last six games | FGM | FGA | 3PM | 3PA | FTM | FTA | Oreb | Dreb | Ast | Stl | Blk | TO | PF | PTS
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Game 1 | 32 | 79 | 6 | 24 | 18 | 24 | 8 | 28 | 17 | 10 | 2 | 18 | 15 | 88
Game 2 | 45 | 87 | 9 | 24 | 8 | 11 | 5 | 32 | 32 | 8 | 3 | 14 | 23 | 107
Game 3 | 33 | 85 | 7 | 23 | 22 | 29 | 9 | 36 | 22 | 10 | 4 | 12 | 21 | 95
Game 4 | 33 | 83 | 6 | 23 | 12 | 15 | 14 | 28 | 22 | 6 | 4 | 15 | 18 | 84
Game 5 | 48 | 85 | 8 | 23 | 10 | 14 | 12 | 31 | 29 | 9 | 6 | 13 | 20 | 114
Game 6 | 44 | 80 | 7 | 19 | 14 | 18 | 7 | 35 | 25 | 9 | 8 | 14 | 16 | 109
Average | 39.17 | 83.17 | 7.17 | 22.67 | 14.00 | 18.50 | 9.17 | 31.67 | 24.50 | 8.67 | 4.50 | 14.33 | 18.83 | 99.50
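The Average row is simply the per-feature mean over the six games; a quick check in Python (array literals copied from the table above):

```python
import numpy as np

# Raw feature values of the home team's last six games (rows = games),
# columns ordered as in Table 2: FGM, FGA, 3PM, 3PA, FTM, FTA, Oreb,
# Dreb, Ast, Stl, Blk, TO, PF, PTS.
last_six = np.array([
    [32, 79, 6, 24, 18, 24,  8, 28, 17, 10, 2, 18, 15,  88],
    [45, 87, 9, 24,  8, 11,  5, 32, 32,  8, 3, 14, 23, 107],
    [33, 85, 7, 23, 22, 29,  9, 36, 22, 10, 4, 12, 21,  95],
    [33, 83, 6, 23, 12, 15, 14, 28, 22,  6, 4, 15, 18,  84],
    [48, 85, 8, 23, 10, 14, 12, 31, 29,  9, 6, 13, 20, 114],
    [44, 80, 7, 19, 14, 18,  7, 35, 25,  9, 8, 14, 16, 109],
])
average = last_six.mean(axis=0).round(2)  # matches Table 2's Average row
```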

**Table 3.** Sample records of the experimental dataset obtained by getting averages of the previous six games.

**Home teams' features**

$FG{M}_{h}$ | $FG{A}_{h}$ | $3P{M}_{h}$ | $3P{A}_{h}$ | $FT{M}_{h}$ | $FT{A}_{h}$ | $Ore{b}_{h}$ | $Dre{b}_{h}$ | $As{t}_{h}$ | $St{l}_{h}$ | $Bl{k}_{h}$ | $T{O}_{h}$ | $P{F}_{h}$ | $PT{S}_{h}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
39.17 | 83.17 | 7.17 | 22.67 | 14.00 | 18.50 | 9.17 | 31.67 | 24.50 | 8.67 | 4.50 | 14.33 | 18.83 | 99.50
38.33 | 83.67 | 6.83 | 18.00 | 12.83 | 18.33 | 7.83 | 35.00 | 23.67 | 7.00 | 6.17 | 12.83 | 22.00 | 96.33
37.50 | 84.67 | 10.67 | 26.50 | 18.83 | 25.33 | 10.83 | 32.83 | 24.33 | 8.50 | 6.17 | 12.17 | 19.67 | 104.50
37.17 | 79.67 | 8.50 | 25.67 | 17.17 | 23.00 | 9.33 | 29.50 | 23.50 | 7.50 | 4.50 | 15.00 | 18.33 | 100.00
37.83 | 85.50 | 11.50 | 33.33 | 16.67 | 23.83 | 11.83 | 31.83 | 22.00 | 9.67 | 3.33 | 16.50 | 22.00 | 103.83
39.00 | 78.50 | 7.50 | 20.83 | 16.33 | 20.67 | 7.67 | 31.17 | 24.00 | 7.17 | 3.83 | 16.33 | 21.33 | 101.83
40.67 | 88.17 | 6.83 | 19.33 | 17.33 | 24.67 | 13.83 | 36.00 | 20.33 | 7.17 | 6.00 | 12.83 | 21.17 | 105.50

**Away teams' features and outcome**

$FG{M}_{a}$ | $FG{A}_{a}$ | $3P{M}_{a}$ | $3P{A}_{a}$ | $FT{M}_{a}$ | $FT{A}_{a}$ | $Ore{b}_{a}$ | $Dre{b}_{a}$ | $As{t}_{a}$ | $St{l}_{a}$ | $Bl{k}_{a}$ | $T{O}_{a}$ | $P{F}_{a}$ | $PT{S}_{a}$ | Home Team Win
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
41.00 | 82.33 | 7.50 | 18.33 | 21.00 | 27.83 | 10.33 | 31.17 | 22.17 | 6.67 | 4.00 | 16.33 | 22.17 | 110.50 | 1
36.67 | 75.33 | 7.17 | 20.17 | 17.33 | 23.50 | 8.33 | 28.83 | 19.33 | 8.83 | 3.67 | 13.50 | 21.17 | 97.83 | 1
38.00 | 87.00 | 5.83 | 19.00 | 17.17 | 21.17 | 12.67 | 28.33 | 21.50 | 7.50 | 5.00 | 12.33 | 20.67 | 99.00 | 1
38.33 | 80.33 | 9.17 | 23.00 | 15.33 | 20.00 | 7.50 | 32.83 | 24.67 | 8.00 | 5.83 | 17.83 | 23.17 | 101.17 | 0
35.83 | 85.33 | 7.50 | 22.50 | 18.33 | 24.33 | 10.83 | 35.33 | 19.17 | 7.17 | 5.50 | 11.83 | 19.17 | 97.50 | 1
37.33 | 85.17 | 5.33 | 17.33 | 16.67 | 21.00 | 11.50 | 32.33 | 21.00 | 7.17 | 5.33 | 12.17 | 16.33 | 96.67 | 1
41.67 | 86.67 | 10.17 | 25.17 | 17.17 | 22.50 | 12.17 | 31.33 | 20.67 | 8.33 | 6.00 | 12.50 | 19.17 | 110.67 | 1

**Home teams' features**

$FG{M}_{h}$ | $FG{A}_{h}$ | $3P{M}_{h}$ | $3P{A}_{h}$ | $FT{M}_{h}$ | $FT{A}_{h}$ | $Ore{b}_{h}$ | $Dre{b}_{h}$ | $As{t}_{h}$ | $St{l}_{h}$ | $Bl{k}_{h}$ | $T{O}_{h}$ | $P{F}_{h}$ | $PT{S}_{h}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
37.63 | 83.31 | 7.85 | 22.66 | 14.35 | 19.02 | 9.50 | 31.68 | 24.26 | 8.82 | 4.18 | 14.49 | 18.65 | 98.60
37.63 | 83.31 | 7.85 | 18.94 | 14.35 | 19.02 | 7.17 | 35.01 | 24.26 | 6.94 | 6.50 | 12.42 | 22.07 | 94.48
37.63 | 84.75 | 10.74 | 26.11 | 17.80 | 25.44 | 11.27 | 32.59 | 24.26 | 8.82 | 6.50 | 12.42 | 19.87 | 106.59
37.63 | 80.20 | 7.85 | 26.11 | 17.80 | 22.35 | 9.50 | 29.60 | 22.97 | 7.90 | 4.18 | 15.14 | 18.65 | 98.60
37.63 | 86.12 | 10.74 | 34.05 | 17.80 | 23.80 | 12.30 | 31.68 | 21.97 | 9.64 | 3.36 | 16.09 | 22.07 | 102.39
37.63 | 77.95 | 7.85 | 20.87 | 17.80 | 20.76 | 7.17 | 30.77 | 24.26 | 6.94 | 4.18 | 16.09 | 20.98 | 102.39
40.85 | 87.78 | 7.85 | 18.94 | 17.80 | 25.44 | 13.39 | 36.94 | 19.96 | 6.94 | 5.66 | 12.42 | 20.98 | 106.59

**Away teams' features and outcome**

$FG{M}_{a}$ | $FG{A}_{a}$ | $3P{M}_{a}$ | $3P{A}_{a}$ | $FT{M}_{a}$ | $FT{A}_{a}$ | $Ore{b}_{a}$ | $Dre{b}_{a}$ | $As{t}_{a}$ | $St{l}_{a}$ | $Bl{k}_{a}$ | $T{O}_{a}$ | $P{F}_{a}$ | $PT{S}_{a}$ | Home Team Win
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
40.36 | 82.60 | 7.77 | 18.40 | 20.61 | 26.54 | 10.34 | 32.16 | 22.40 | 6.59 | 4.08 | 16.19 | 21.65 | 108.43 | 1
36.76 | 73.84 | 7.77 | 21.63 | 16.85 | 24.42 | 8.26 | 28.68 | 19.34 | 9.00 | 3.76 | 13.17 | 21.65 | 100.06 | 1
37.73 | 86.81 | 5.39 | 18.40 | 16.85 | 20.34 | 12.48 | 28.68 | 21.66 | 7.76 | 5.15 | 12.42 | 20.45 | 100.06 | 1
38.88 | 79.50 | 10.49 | 21.63 | 15.58 | 20.34 | 7.59 | 32.16 | 24.13 | 7.76 | 5.71 | 17.36 | 22.79 | 100.06 | 0
35.66 | 85.13 | 7.77 | 21.63 | 17.90 | 24.42 | 10.83 | 35.50 | 19.34 | 7.18 | 5.71 | 11.74 | 19.09 | 100.06 | 1
37.73 | 85.13 | 5.39 | 18.40 | 16.85 | 20.34 | 11.48 | 32.16 | 20.70 | 7.18 | 5.15 | 12.42 | 16.32 | 100.06 | 1
42.48 | 86.81 | 10.49 | 24.73 | 16.85 | 22.41 | 12.48 | 32.16 | 20.70 | 8.40 | 5.71 | 12.42 | 19.09 | 108.43 | 1

Threshold | 2007–08 | 2008–09 | 2009–10 | 2010–11 | 2011–12 | 2012–13 | 2013–14 | 2014–15
---|---|---|---|---|---|---|---|---
0.5 | 74.4 | 68.2 | 68.3 | 66.7 | 69.0 | 67.1 | 65.2 | 62.5
0.6 | 77.1 | 74.5 | 75.0 | 69.8 | 73.0 | 71.4 | 66.7 | 70.4
0.7 | 100.0 | 80.0 | 100.0 | 100.0 | 100.0 | 75.0 | 100.0 | 100.0

Threshold | 2007–08 | 2008–09 | 2009–10 | 2010–11 | 2011–12 | 2012–13 | 2013–14 | 2014–15
---|---|---|---|---|---|---|---|---
0.5 | 86 | 85 | 82 | 81 | 84 | 85 | 89 | 80
0.6 | 48 | 55 | 44 | 53 | 26 | 42 | 36 | 27
0.7 | 3 | 5 | 2 | 0 | 1 | 4 | 1 | 6
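Tables of this shape slice the predictions by the model's confidence. A generic sketch of that bookkeeping (the exact thresholding rule, predicting a home win whenever the predicted probability reaches the threshold, is an assumption here, and the probabilities below are invented, not the paper's data):

```python
import numpy as np

def accuracy_by_confidence(probs, outcomes, thresholds=(0.5, 0.6, 0.7)):
    """For each threshold t, keep the games whose predicted home-win
    probability is at least t, predict a home win for them, and report
    (number of games kept, prediction accuracy in percent)."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    out = {}
    for t in thresholds:
        keep = probs >= t
        n = int(keep.sum())
        acc = float((outcomes[keep] == 1).mean() * 100) if n else float("nan")
        out[t] = (n, round(acc, 1))
    return out

# Invented example: 5 games, predicted home-win probabilities and outcomes.
stats = accuracy_by_confidence([0.55, 0.72, 0.61, 0.48, 0.66], [1, 1, 0, 0, 1])
```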

**Table 7.** Prediction accuracy (in percentages) of selected algorithms for NBA playoffs for seasons between 2007 and 2015.

Algorithm | 2007–08 | 2008–09 | 2009–10 | 2010–11 | 2011–12 | 2012–13 | 2013–14 | 2014–15
---|---|---|---|---|---|---|---|---
Naive Bayes | 54.7 | 61.5 | 56.1 | 59.3 | 53.6 | 58.8 | 59.3 | 55.0
Logistic Regression | 61.6 | 57.1 | 61.0 | 61.7 | 60.7 | 64.7 | 62.6 | 60.0
BP Neural Networks | 59.3 | 60.4 | 52.4 | 67.9 | 56.0 | 63.5 | 57.1 | 57.5
Random Forest | 64.0 | 60.4 | 64.6 | 64.2 | 58.3 | 70.6 | 62.6 | 56.3
NBAME model | 74.4 | 68.2 | 68.3 | 66.7 | 69.0 | 67.1 | 65.2 | 62.5

**Table 8.** AUC (in percentages) values of selected algorithms for NBA playoffs for seasons between 2007 and 2015.

Algorithm | 2007–08 | 2008–09 | 2009–10 | 2010–11 | 2011–12 | 2012–13 | 2013–14 | 2014–15
---|---|---|---|---|---|---|---|---
Naive Bayes | 50.0 | 61.6 | 51.9 | 55.6 | 51.6 | 61.2 | 59.4 | 54.7
Logistic Regression | 51.8 | 61.7 | 53.2 | 56.4 | 51.9 | 63.1 | 58.7 | 59.6
BP Neural Networks | 50.6 | 56.0 | 52.8 | 61.1 | 51.2 | 66.0 | 58.5 | 54.6
Random Forest | 51.8 | 58.3 | 50.5 | 50.8 | 52.4 | 66.7 | 59.0 | 58.3
NBAME model | 57.2 | 62.3 | 54.1 | 61.7 | 52.9 | 61.7 | 57.9 | 60.4

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Cheng, G.; Zhang, Z.; Kyebambe, M.N.; Kimbugwe, N. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle. *Entropy* **2016**, *18*, 450. https://doi.org/10.3390/e18120450