A Bayesian Approach to Predict the Number of Goals in Hockey

by
Abdolnasser Sadeghkhani
* and
Seyed Ejaz Ahmed
Department of Mathematics, Brock University, St. Catharines, ON L2S 3A1, Canada
*
Author to whom correspondence should be addressed.
Stats 2019, 2(2), 228-238; https://doi.org/10.3390/stats2020017
Submission received: 27 March 2019 / Revised: 11 April 2019 / Accepted: 16 April 2019 / Published: 21 April 2019

Abstract: In this paper, we use a Bayesian methodology to analyze the outcome of a hockey game using different sources of information, such as points in previous games, home advantage, and specialists' opinions. Two models for predicting the number of goals are considered, taking into account that goals in hockey are infrequent and rarely exceed six per team per game. A Bayesian predictive density for the number of goals under each model is used to predict the likely winner of the game, and the corresponding prediction error of each model is addressed.

1. Introduction

Finding ways to predict the outcome of different sports games from past and current data is an attractive problem for many people, ranging from teams' coaches to gambling agencies to fans. There is no doubt that statistical tools are needed to construct an effective and accurate model for predicting the outcome of sporting events. In recent decades, predicting match results has attracted much attention from researchers using new methods in statistics, data mining, and machine learning, especially in popular sports such as football, basketball, baseball, and soccer.
Ice hockey is a popular sport, especially in the US and Canada, for which a rich dataset is available at www.nhl.com. Several publications in recent years have documented different statistical analyses of hockey data. Gramacy et al. [1] studied individual contributions of team members using regularized logistic regression. Sadeghkhani and Ahmed [2] estimated the density of scoring time in a hockey game using prior information such as a team's ranking in the previous season and experts' opinions. Suzuki et al. [3] proposed a Bayesian approach to predicting the match outcomes of the 2006 World Cup.
However, to the authors' best knowledge, very few publications currently available in the literature address the issue of Bayesian density estimation of the number of goals. This paper proposes a method to estimate the density of the number of goals. We also incorporate the points earned in previous games, away and home factors, and specialists' opinions to improve our predictions.
The remainder of the paper is organized as follows: In Section 2, we provide definitions and preliminary remarks about two assumptions to model the number of goals in a hockey game. Section 3 discusses how to find Bayesian predictive density estimators for each model. Section 4 addresses how one can enter other factors such as home bonus, away malus, experts’ opinions in detail. In Section 5, we study an application of the proposed methods in predicting the number of goals as well as the result of the game. Finally, we make some concluding remarks in Section 6.

2. Problem Set-Up and Different Models

The Poisson distribution has long been used for count data, but the fact that its dispersion index (the ratio of variance to mean) always equals one is a serious limitation in practice. Conway and Maxwell [4] introduced the Conway–Maxwell–Poisson (COM-P) distribution, which, like the Poisson distribution, belongs to the exponential family of distributions; therefore, thanks to the existence of a conjugate prior, the Bayesian analysis of the number of events remains computationally tractable.
Kadane et al. [5] studied a necessary and sufficient condition on the hyperparameters of the conjugate family for the prior to be proper and discussed methods of sampling from the conjugate distribution.
The COM-P distribution possesses an extra parameter and was originally developed to handle a queuing system with state-dependent arrivals. It has been used widely for models that exhibit over-dispersion or under-dispersion, i.e., where the mean is smaller or larger than the variance, respectively. For more information, see Shmueli et al. [6].
A random variable (rv for short) has the COM-P$(\lambda, r)$ distribution if it has the probability mass function (pmf)

$$P(X = x \mid \lambda, r) = \frac{1}{Z(\lambda, r)}\,\frac{\lambda^x}{(x!)^r}, \qquad x = 0, 1, \ldots, \quad r > 0,\ \lambda > 0, \tag{1}$$

where $Z(\lambda, r) = \sum_{j=0}^{\infty} \frac{\lambda^j}{(j!)^r}$ is the normalizing constant. The Poisson distribution, Po$(\lambda)$, is obtained when $r = 1$. Furthermore, the density in (1) belongs to the exponential family, and values of $r > 1$ and $r < 1$ correspond to under- and over-dispersion, respectively. When $\lambda \in (0, 1)$, letting $r \to \infty$ gives a Bernoulli distribution with parameter $\lambda/(1 + \lambda)$, while the geometric distribution corresponds to the limit $r \to 0$, with pmf $P(X = x \mid \lambda) = \lambda^x (1 - \lambda)$ for $x = 0, 1, \ldots$.
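As a quick numerical illustration (ours, not from the paper), the pmf in (1) can be evaluated with a truncated normalizing constant; the $r = 1$ and $r \to 0$ special cases mentioned above serve as sanity checks:

```python
import math

def com_poisson_pmf(x, lam, r, terms=200):
    """pmf of COM-P(lam, r); the normalizing constant Z(lam, r) is
    truncated at `terms` terms, ample for moderate lam."""
    log_term = lambda j: j * math.log(lam) - r * math.lgamma(j + 1)
    z = sum(math.exp(log_term(j)) for j in range(terms))
    return math.exp(log_term(x)) / z
```

Working on the log scale via `lgamma` avoids the overflow that `factorial(j) ** r` would cause for large `j`.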
Imoto [7] generalized the COM-P to a distribution that has a heavier tail, embraces the negative binomial distribution, and is applicable to excess-zeros models as well.
An rv $X$ is said to have a generalized COM-P (GCOM-P) distribution with three parameters $\lambda > 0$, $r > 0$, and $\nu > 0$ if

$$P(X = x \mid \lambda, r, \nu) = \frac{(\Gamma(x + \nu))^r\, \lambda^x}{x!\; C(\lambda, r, \nu)}, \qquad x = 0, 1, \ldots, \tag{2}$$

where $C(\lambda, r, \nu) = \sum_{j=0}^{\infty} \frac{(\Gamma(j + \nu))^r \lambda^j}{j!}$ is the normalizing constant. $C(\lambda, r, \nu)$ converges when $r < 1$, $\nu > 0$, and $\lambda > 0$, or when $r = 1$, $\nu > 0$, and $\lambda \in (0, 1)$. This distribution reduces to a COM-P distribution with parameters $\lambda$ and $1 - r$ when $\nu = 1$, and to a negative binomial when $r = 1$.
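A similar sketch (again ours; the truncation length is an assumption) evaluates the GCOM-P pmf in (2); at $r = 1$ it reproduces the negative binomial weights noted above:

```python
import math

def gcom_poisson_pmf(x, lam, r, nu, terms=300):
    """pmf of the GCOM-P(lam, r, nu) of Imoto [7]; the series C(lam, r, nu)
    is truncated (valid region: r < 1, or r = 1 with lam in (0, 1))."""
    log_w = lambda j: (r * math.lgamma(j + nu) + j * math.log(lam)
                       - math.lgamma(j + 1))
    c = sum(math.exp(log_w(j)) for j in range(terms))
    return math.exp(log_w(x)) / c
```

For example, with $r = 1$, $\nu = 3$, $\lambda = 0.5$ the pmf at $x = 2$ equals the NB value $\binom{4}{2}(0.5)^3(0.5)^2 = 0.1875$.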

3. Bayesian Prior and Posterior Predictive Density Estimations

As is well known in the literature, the gamma distribution is the conjugate prior for a Poisson distribution. Suppose $X \sim \text{Po}(\lambda)$; choosing $\lambda \mid \alpha, \beta \sim \text{Gam}(\alpha, \beta)$ with pdf $\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\lambda^{\alpha - 1}e^{-\lambda/\beta}$ results in the posterior density $\lambda \mid x, \alpha, \beta \sim \text{Gam}\big(x + \alpha, \frac{1}{1 + 1/\beta}\big)$. The marginal distribution of $X$, known as the prior predictive density, and the posterior predictive density are both negative binomial: an rv $X \sim \text{NB}(r, p)$ has the pmf $P(X = k) = \binom{k + r - 1}{k}(1 - p)^r p^k$, $k = 0, 1, \ldots$.
Lemma 1.
If $X \sim \text{Po}(\lambda)$ and $\lambda \mid \alpha, \beta \sim \text{Gam}(\alpha, \beta)$, the prior predictive density $P(X = x \mid \alpha, \beta)$ is given by

$$\text{NB}\left(\alpha,\ \frac{1}{1 + 1/\beta}\right), \tag{3}$$

while the posterior predictive density of a future rv $Y$, $P(Y = y \mid X_1 = x_1, \ldots, X_n = x_n, \alpha, \beta)$, is given by

$$\text{NB}\left(\sum_{i=1}^{n} x_i + \alpha,\ \frac{1}{1 + n + 1/\beta}\right). \tag{4}$$
Proof. 
The prior predictive in (3) is the marginal distribution of $X$ and can be found as follows:

$$P(X = x \mid \alpha, \beta) = \int_0^{\infty} P(X = x \mid \lambda)\,\pi(\lambda \mid \alpha, \beta)\, d\lambda = \frac{1}{x!\,\Gamma(\alpha)\beta^{\alpha}} \int_0^{\infty} \lambda^{\alpha + x - 1} e^{-\lambda(1 + 1/\beta)}\, d\lambda = \frac{\Gamma(\alpha + x)}{\Gamma(x + 1)\,\Gamma(\alpha)\,\beta^{\alpha}} \left(\frac{\beta}{1 + \beta}\right)^{\alpha + x} = \binom{\alpha + x - 1}{x} \left(\frac{1}{1 + \beta}\right)^{\alpha} \left(\frac{1}{1 + 1/\beta}\right)^{x}.$$
Equation (4) can be obtained similarly from

$$p(y \mid \mathbf{x}) = \int_0^{\infty} P(y \mid \theta)\, \pi(\theta \mid \mathbf{x})\, d\theta,$$

using the fact that the posterior density of $\lambda$ based on $\mathbf{x} = (x_1, \ldots, x_n)$ is $\text{Gam}\big(\sum_{i=1}^{n} x_i + \alpha, \frac{1}{n + 1/\beta}\big)$. □
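Lemma 1 can be checked by simulation: drawing $\lambda$ from the gamma prior and then $X$ from Po$(\lambda)$ should reproduce the stated negative binomial. A minimal sketch (the hyperparameter values are illustrative; note that SciPy's `nbinom` takes $1 - p$ relative to the $p^k$ convention of (3)):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, beta = 0.5, 8.0          # Gam(shape, scale) prior hyperparameters

# Monte Carlo prior predictive: lambda ~ Gam(alpha, beta), then X | lambda ~ Po(lambda)
lam = rng.gamma(alpha, beta, size=200_000)
x = rng.poisson(lam)

# Lemma 1: X ~ NB(alpha, p) with p = 1/(1 + 1/beta); SciPy's second
# argument is 1 - p in this convention
p = 1.0 / (1.0 + 1.0 / beta)
nb = stats.nbinom(alpha, 1.0 - p)
```

The empirical frequencies of `x` agree with `nb.pmf` to within Monte Carlo error, and the predictive mean is $\alpha\beta = 4$.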
In the COM-P model, Equation (1), Kadane et al. [5] used the extended bivariate gamma distribution, denoted by EBG$(a, b, c)$ and given by

$$\pi(\lambda, r \mid a, b, c) = \kappa(a, b, c)\, \lambda^{a - 1} e^{-rb} Z^{-c}(\lambda, r), \tag{5}$$

where the normalizing constant in (5) is given by

$$\kappa^{-1}(a, b, c) = \int_0^{\infty}\!\!\int_0^{\infty} \lambda^{a - 1} e^{-rb} Z^{-c}(\lambda, r)\, d\lambda\, dr,$$

and $a > 0$, $b > 0$, and $c > 0$ need to satisfy the following condition so that $\kappa^{-1}(a, b, c)$ is finite:

$$\frac{b}{c} > \log\left(\left\lfloor \frac{a}{c} \right\rfloor !\right) + \left(\frac{a}{c} - \left\lfloor \frac{a}{c} \right\rfloor\right) \log\left(\left\lfloor \frac{a}{c} \right\rfloor + 1\right).$$
The next lemma, similar to Lemma 1, provides the predictive distributions.
Lemma 2.
If $X_i \sim \text{COM-P}(\lambda, r)$, $i = 1, \ldots, n$, as in (1), and $\pi(\lambda, r \mid a, b, c)$ is as presented in (5), then:
  • the posterior has the same distribution as (5), with $a^* = a + \sum_{i=1}^{n} x_i$, $b^* = b + \sum_{i=1}^{n} \log(x_i!)$, and $c^* = c + n$;
  • the prior predictive density (marginal density of $X$) is given by
    $$P(X = x \mid a, b, c) = \frac{\kappa(a, b, c)}{\kappa(a + x,\ b + \log(x!),\ c + 1)};$$
  • the posterior predictive density of a future rv $Y$, $P(Y = y \mid X_1 = x_1, \ldots, X_n = x_n, a, b, c)$, for $y = 0, 1, \ldots$, is given by
    $$\frac{\kappa(a^*, b^*, c^*)}{\kappa(a^* + y,\ b^* + \log(y!),\ c^* + 1)}.$$
Proof. 
The proof is straightforward and analogous to the proof of Lemma 1 and has therefore been omitted. □
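The conjugacy claim in the first bullet of Lemma 2 can be verified numerically: the prior kernel times the COM-P likelihood must equal the EBG$(a^*, b^*, c^*)$ kernel up to an additive constant on the log scale. A sketch with made-up data (the counts and evaluation points are ours):

```python
import math

def log_Z(lam, r, terms=150):
    # log of the COM-P normalizing constant, truncated series
    return math.log(sum(math.exp(j * math.log(lam) - r * math.lgamma(j + 1))
                        for j in range(terms)))

def log_ebg_kernel(lam, r, a, b, c):
    # unnormalized log density of EBG(a, b, c)
    return (a - 1) * math.log(lam) - r * b - c * log_Z(lam, r)

def log_comp_lik(xs, lam, r):
    # COM-P log likelihood of the sample xs
    return sum(x * math.log(lam) - r * math.lgamma(x + 1) - log_Z(lam, r)
               for x in xs)

a, b, c = 1.0, 1.0, 1.0
xs = [3, 1, 4, 2]                               # hypothetical observed counts
a_star = a + sum(xs)
b_star = b + sum(math.lgamma(x + 1) for x in xs)
c_star = c + len(xs)

# prior x likelihood and EBG(a*, b*, c*) should differ by a constant in (lam, r)
pts = [(1.5, 0.8), (3.0, 1.2)]
diffs = [log_ebg_kernel(l, r, a, b, c) + log_comp_lik(xs, l, r)
         - log_ebg_kernel(l, r, a_star, b_star, c_star) for l, r in pts]
```

The two entries of `diffs` coincide, confirming the update rule $a^* = a + \sum x_i$, $b^* = b + \sum \log(x_i!)$, $c^* = c + n$.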

4. Modelling the Number of Goals Using a Prior Elicitation Method

In this section, we make two different assumptions: (I) the number of goals scored by each team is a Poisson rv, and (II) it follows a COM-P distribution. In addition, suppose that team A plays at home and team B plays away; hence, $X^{AB}$ is the number of goals scored by team A against team B, and $X^{BA}$ vice versa.

4.1. Assumption I: Po Distribution for Modelling the Number of Goals

Assume that $X^{AB}$ and $X^{BA}$ are independently distributed as follows:

$$X^{AB} \mid \lambda_{AB} \sim \text{Po}(\lambda_{AB}), \qquad X^{BA} \mid \lambda_{BA} \sim \text{Po}(\lambda_{BA}), \tag{8}$$

where $\lambda_{AB}$ can be interpreted as the mean number of goals team A scores against team B, and $\lambda_{BA}$ as the mean number of goals team B scores against team A, in a future game. As discussed earlier, one can use the conjugate prior $\pi(\lambda_{AB})$ (for home), namely $\text{Gam}(\alpha, \beta)$, of which the Jeffreys non-informative prior $\pi_0(\lambda_{AB}) \propto \lambda_{AB}^{-1/2}$ is a special case. Here, we are interested in employing experts' opinions about the upcoming match's score. This is called prior elicitation, and the elicited prior $\pi_e(\cdot)$ is determined by

$$\pi_e(\lambda_{AB}) \propto \pi(\lambda_{AB}) \prod_{i=1}^{s} \left(e^{-\lambda_{AB}}\, \lambda_{AB}^{x_i^{AB}}\right)^{d} \sim \text{Gam}\left(\alpha + d \sum_{i=1}^{s} x_i^{AB},\ \frac{1}{1/\beta + sd}\right), \tag{9}$$

where $x_i^{AB}$, $i = 1, 2, \ldots, s$, is the $i$-th expert's opinion about the number of goals that home team A will score against away team B in the future game. Choosing $d = 0$ returns $\pi(\cdot)$ as the prior and ignores the specialists' opinions. The prior of $\lambda_{BA}$ can be set analogously.
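A minimal sketch of the elicitation update in (9) (the function name is ours; the gamma is in the shape/scale parameterization of Section 3, and the five opinions below are the ones elicited in Section 5):

```python
def elicit_gamma(alpha, beta, expert_goals, d):
    """Updated Gam(shape, scale) of Equation (9): `expert_goals` are the
    s specialists' guesses, and d in [0, 1] weights how much they count."""
    s = len(expert_goals)
    shape = alpha + d * sum(expert_goals)
    scale = 1.0 / (1.0 / beta + s * d)
    return shape, scale
```

With `d = 0` the experts are ignored and the original Gam(0.5, 8) prior is returned; with `d = 1` the shape becomes 17.5, matching the NB(17.5, ·) prior predictive reported in Section 5.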
We have used the specialists' opinions to improve our beliefs about $\lambda_{AB}$ (or $\lambda_{BA}$), but we can also benefit from other sources of information, such as previous data. Since the number of goals is usually (though not necessarily) larger when a team plays at home and smaller when it plays away, we can include in the model a home-bonus factor $h$ and an away (visiting) malus factor $v$:
$$h = \frac{\text{mean goals scored by team A at home against team B in previous matches}}{\text{mean goals scored by all teams when hosting their opponents}}, \qquad v = \frac{\text{mean goals scored by team B away against team A in previous matches}}{\text{mean goals scored by all teams on the road against their opponents}}.$$
Alternatively, one can consider the teams' points as well. The mean number of goals team A scores against team B is directly related to $h$ and the points obtained by team A, namely $Q_A$ (in the previous games or last season), and is inversely related to $v$ and the points obtained by team B, $Q_B$. Consequently, we can update the prior density in (9) (for both $\lambda_{AB}$ and $\lambda_{BA}$) as follows:
$$\lambda_{AB} \sim \text{Gam}\left(\alpha + d \sum_{i=1}^{s} x_i^{AB},\ \frac{Q_A}{Q_B}\,\frac{h}{v}\,\frac{1}{1/\beta + sd}\right), \tag{10}$$

$$\lambda_{BA} \sim \text{Gam}\left(\alpha + d \sum_{i=1}^{s} x_i^{BA},\ \frac{Q_B}{Q_A}\,\frac{v}{h}\,\frac{1}{1/\beta + sd}\right). \tag{11}$$
Making use of Equations (10) and (11), along with Lemma 1, gives the prior and posterior predictive density estimators of the number of goals team A scores against team B, which are, respectively,
$$X^{AB} \sim \text{NB}\left(d \sum_{i=1}^{s} x_i^{AB} + \alpha,\ \frac{1}{1 + \left(\frac{Q_A}{Q_B}\frac{h}{v}\frac{1}{1/\beta + sd}\right)^{-1}}\right) \tag{12}$$

and

$$X^{AB} \sim \text{NB}\left(\sum_{i=1}^{n_{AB}} x_i^{A} + d \sum_{i=1}^{s} x_i^{AB} + \alpha,\ \frac{1}{1 + n + \left(\frac{Q_A}{\bar{Q}_E}\frac{h}{v}\frac{1}{1/\beta + sd}\right)^{-1}}\right), \tag{13}$$
where $x_i^{A}$ is the number of goals team A scored in the previous season (or prior to the upcoming game) when playing at home against its opponents, $\bar{Q}_E = \sum_{i=1}^{n_{AB}} Q_{E_i}/n_{AB}$, where the $Q_{E_i}$ are the points of (all) the opponents that faced team A in the previous season (or prior to the upcoming game), and $n_{AB}$ is the number of games in which team A hosted team B. Similarly, for the number of goals team B scores in team A's home, we have
$$X^{BA} \sim \text{NB}\left(d \sum_{i=1}^{s} x_i^{BA} + \alpha,\ \frac{1}{1 + \left(\frac{Q_B}{Q_A}\frac{v}{h}\frac{1}{1/\beta + sd}\right)^{-1}}\right), \tag{14}$$

$$X^{BA} \sim \text{NB}\left(\sum_{i=1}^{n_{BA}} x_i^{B} + d \sum_{i=1}^{s} x_i^{BA} + \alpha,\ \frac{1}{1 + n + \left(\frac{Q_B}{\bar{Q}_D}\frac{v}{h}\frac{1}{1/\beta + sd}\right)^{-1}}\right), \tag{15}$$
where $x_i^{B}$ is the number of goals team B scored in the previous season (or prior to the upcoming game) when playing away against its opponents, $\bar{Q}_D = \sum_{i=1}^{n_{BA}} Q_{D_i}/n_{BA}$, and the $Q_{D_i}$ are the points of (all) the opponents that hosted team B in the previous season (or prior to the upcoming match).
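Under Assumption I, the posterior predictive NB parameters of (13) can be assembled as below. This is a sketch, not the authors' code; the argument names (`q_ratio` for the points ratio, `hv_ratio` for $h/v$) are ours, and with unit factors and $d = 0$ it collapses to the plain Lemma 1 predictive:

```python
def posterior_predictive_nb(alpha, beta, d, expert_goals, past_goals,
                            q_ratio, hv_ratio):
    """NB(shape, p) of Equation (13), with p the probability attached to
    the p^k factor in the NB pmf of Section 3. `q_ratio` plays the role
    of Q_A / Q_bar_E and `hv_ratio` of h / v (illustrative names)."""
    s, n = len(expert_goals), len(past_goals)
    shape = sum(past_goals) + d * sum(expert_goals) + alpha
    scale = q_ratio * hv_ratio / (1.0 / beta + s * d)  # gamma scale of (10)
    p = 1.0 / (1.0 + n + 1.0 / scale)
    return shape, p
```

For instance, twelve home games of 3 goals each with no experts and unit factors give NB(36.5, 1/13.125), exactly Lemma 1's $\text{NB}(\sum x_i + \alpha,\ 1/(1 + n + 1/\beta))$.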

4.2. Assumption II: COM-P Model for the Number of Goals

A question that may arise is: "What if the distribution of the number of goals does not obey a Poisson distribution?" Under this assumption, we adopt the COM-P$(\lambda, r)$ distribution given in (1) as the distribution of the number of goals. Therefore, $X^{AB}$ and $X^{BA}$ are independently distributed as follows:
$$X^{AB} \mid \lambda_{AB}, r_{AB} \sim \text{COM-P}(\lambda_{AB}, r_{AB}), \qquad X^{BA} \mid \lambda_{BA}, r_{BA} \sim \text{COM-P}(\lambda_{BA}, r_{BA}).$$
We use the EBG$(a, b, c)$ in (5) as a conjugate prior. Note that, for instance, the conditional distribution $\pi(\lambda_{AB} \mid r_{AB} = 1)$ is $\text{Gam}(a, c)$ and $\pi(\lambda_{AB} \mid r_{AB} = 0)$ is $\text{Bet}(a, c + 1)$, where Bet denotes the beta distribution. Similar to Assumption (I), the corresponding elicited prior can be defined as follows:
$$\pi_e(\lambda_{AB}, r_{AB}) \propto \pi(\lambda_{AB}, r_{AB}) \prod_{i=1}^{s} \left(e^{-r_{AB} \log(x_i^{AB}!)}\, \lambda_{AB}^{x_i^{AB}}\, Z(\lambda_{AB}, r_{AB})^{-1}\right)^{d},$$
which is the EBG$(a^{AB}, b^{AB}, c^{AB})$ with

$$a^{AB} = d \sum_{i=1}^{s} x_i^{AB} + \sum_{i=1}^{n_{AB}} x_i^{AB} + a, \qquad b^{AB} = d \sum_{i=1}^{s} \log(x_i^{AB}!) + \sum_{i=1}^{n_{AB}} \log(x_i^{AB}!) + b, \qquad c^{AB} = c + n_{AB} + sd,$$
where $n_{AB}$ is the number of home games in which team A hosted teams whose road performance was similar to that of team B, and $x_i^{AB}$, $i = 1, \ldots, s$ (the experts' opinions), and $x_i^{AB}$, $i = 1, \ldots, n_{AB}$ (the observed goals), are as defined in Assumption (I). One can similarly obtain $\pi_e(\lambda_{BA}, r_{BA})$ with $a^{BA} = d \sum_{i=1}^{s} x_i^{BA} + \sum_{i=1}^{n_{BA}} x_i^{BA} + a$, $b^{BA} = d \sum_{i=1}^{s} \log(x_i^{BA}!) + \sum_{i=1}^{n_{BA}} \log(x_i^{BA}!) + b$, and $c^{BA} = c + n_{BA} + sd$.
Finally, we can incorporate the other additional information (home-bonus factor $h$, away-malus factor $v$, and points $Q_A$ and $Q_B$) into our prior, yielding the joint distributions of $(\lambda_{AB}, r_{AB})$ and $(\lambda_{BA}, r_{BA})$ obtained from
$$\text{EBG}\left(a^{AB},\ b^{AB},\ \frac{Q_A}{Q_B}\frac{h}{v}\, c^{AB}\right), \tag{18}$$

$$\text{EBG}\left(a^{BA},\ b^{BA},\ \frac{Q_B}{Q_A}\frac{v}{h}\, c^{BA}\right). \tag{19}$$
Using (18) and (19), along with Lemma 2, yields the prior predictive density estimator of the number of goals team A scores against team B:
$$\frac{\kappa\!\left(d \sum_{i=1}^{s} x_i^{AB} + a,\ d \sum_{i=1}^{s} \log(x_i^{AB}!) + b,\ \frac{Q_A}{Q_B}\frac{h}{v}(c + d)\right)}{\kappa\!\left(d \sum_{i=1}^{s} x_i^{AB} + x^{AB} + a,\ d \sum_{i=1}^{s} \log(x_i^{AB}!) + \log(x^{AB}!) + b,\ \frac{Q_A}{Q_B}\frac{h}{v}(c + d) + 1\right)}. \tag{20}$$
Furthermore, the posterior predictive density of the number of goals team A scores against team B is as follows:
$$\frac{\kappa\!\left(a^{AB},\ b^{AB},\ \frac{Q_A}{Q_B}\frac{h}{v}\, c^{AB}\right)}{\kappa\!\left(a^{AB} + x,\ b^{AB} + \log(x!),\ \frac{Q_A}{Q_B}\frac{h}{v}\, c^{AB} + 1\right)}. \tag{21}$$
Equations (20) and (21) hold for the number of goals team B scores against team A after replacing $a^{AB}$ and $b^{AB}$ with $a^{BA}$ and $b^{BA}$, respectively.

5. Example of Predicting the Scores and Results

This section presents prediction results based on the models developed in the previous section. For a given match in which team A hosts team B, the outcome (win, draw, or loss) can be predicted via the number of goals scored by the two teams. Let the probabilities of a win, draw, and loss for team A against team B under the predictive distributions be denoted by $\pi_w$, $\pi_d$, and $\pi_l$. Then

$$\pi_w = P(X^{AB} > X^{BA}) = \sum_{i=1}^{\infty} \sum_{j=0}^{i-1} P(X^{AB} = i)\, P(X^{BA} = j), \qquad \pi_d = P(X^{AB} = X^{BA}) = \sum_{i=0}^{\infty} P(X^{AB} = i)\, P(X^{BA} = i),$$

and $\pi_l = 1 - \pi_w - \pi_d$.
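The double sums above truncate naturally once the pmfs are tabulated up to a maximal score. A small helper (ours) that computes $\pi_w$, $\pi_d$, and $\pi_l$ from two tabulated pmfs:

```python
import numpy as np

def outcome_probs(pmf_home, pmf_away):
    """Win/draw/loss probabilities for the home side, given the two goal
    pmfs as arrays over 0..K goals (K chosen large enough that the
    truncated tail mass is negligible)."""
    P = np.outer(pmf_home, pmf_away)   # P[i, j] = P(X_AB = i) P(X_BA = j)
    pi_w = np.tril(P, -1).sum()        # entries with i > j: home wins
    pi_d = np.trace(P)                 # entries with i == j: draw
    return pi_w, pi_d, 1.0 - pi_w - pi_d
```

As a sanity check, two fair coin-like pmfs over {0, 1} give (0.25, 0.5, 0.25).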
Suppose we are interested in predicting the outcome of the match A: Edmonton Oilers (home) vs. B: Arizona Coyotes (away). Data for the 2017/18 season plus the current 2018/19 season, up to the date of writing this manuscript on 29 January 2019, available at nhl.com, have been used. In order to use experts' opinions, we asked $s = 5$ specialists for their opinions about the upcoming match result, $x_i^{AB}$ and $x_i^{BA}$, as follows:

  $x_i^{AB}$: 5, 1, 6, 1, 4
  $x_i^{BA}$: 2, 3, 3, 2, 1
Moreover, Table 1 reports the number of goals the Edmonton Oilers scored against teams whose performance as visitors was similar to that of the Arizona Coyotes, the corresponding points per game, and the number of goals Arizona scored against teams whose home performance was similar to Edmonton's (we asked the specialists to identify those teams).
Therefore, we have

$$\sum_{i=1}^{12} x_i^{AB} = 34, \quad \sum_{i=1}^{9} x_i^{BA} = 23, \quad \sum_{i=1}^{12} Q_i^{AB} = 29.11, \quad \sum_{i=1}^{9} Q_i^{BA} = 20.15, \quad Q_A = 0.98, \quad Q_B = 1.$$
Assumption I:
Gam(0.5, 8) is taken as the prior distribution (since, in practice, each team scores about four goals per game on average), along with $d = 1$ in (9).
(a) The prior predictive density estimator corresponds to the case where no matches have been played and the only available source of information is the experts' opinions about the upcoming game, Edmonton Oilers vs. Arizona Coyotes. Making use of Equations (12) and (14) yields
$$X^{AB} \sim \text{NB}(17.5, 0.806), \qquad X^{BA} \sim \text{NB}(11.5, 0.86),$$
which correspond to the probabilities below in Table 2.
(b) We now consider the posterior predictive densities, using Equations (13) and (15), respectively. We have
$$X^{AB} \sim \text{NB}(51.5, 0.924), \qquad X^{BA} \sim \text{NB}(34.5, 0.90). \tag{22}$$
In other words, making use of the data from the 2017/18 season up to the current date together with the specialists' opinions, we expect the Edmonton Oilers to score 3.259 goals, while the Arizona Coyotes score 4.695. Table 3 and Figure 1 illustrate the result.
The most probable result is 4–3, in favor of Edmonton. Without using the experts' opinions, i.e., $d = 0$, we have $X^{AB} \sim \text{NB}(34.5, 0.96)$ and $X^{BA} \sim \text{NB}(23.5, 0.9414)$, which corresponds to $\pi_w = 0.37$, $\pi_l = 0.35$, and $\pi_d = 0.25$.
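Reading the fitted NB(51.5, 0.924) and NB(34.5, 0.90) in SciPy's `nbinom` convention (mean $r(1-p)/p$, which yields realistic totals of about four goals per team; this interpretation of the second parameter is our assumption), the outcome probabilities of Table 3 can be recomputed:

```python
import numpy as np
from scipy import stats

# Posterior predictive distributions from part (b), SciPy's convention
ks = np.arange(60)                          # truncation; tail mass negligible
p_ab = stats.nbinom.pmf(ks, 51.5, 0.924)    # Edmonton's goals
p_ba = stats.nbinom.pmf(ks, 34.5, 0.90)     # Arizona's goals

P = np.outer(p_ab, p_ba)                    # joint pmf under independence
pi_w = np.tril(P, -1).sum()                 # Edmonton scores more
pi_d = np.trace(P)                          # equal scores
pi_l = 1.0 - pi_w - pi_d
```

Under this reading, the home win probability comes out near the 0.48 of Table 3 and exceeds the loss probability.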
Assumption II:
Let us take $a = b = c = 1$, $d = 1$, and $r = 0.9$. These choices make the expectation of $\lambda$ under the prior distribution in (5) equal to 3.88. Applying (21), the posterior predictive densities of the number of goals in the upcoming match, Edmonton versus Arizona, are given in Table 4, with the corresponding plot in Figure 2.
From Table 4, we expect 1.96 goals for Edmonton and 2.6 goals for Arizona. Table 5 shows the winning probabilities; based on Assumption II, one can predict that Arizona will win the upcoming match at Edmonton's home, the most probable result being 3–2.

5.1. Prediction Errors

The prediction errors (pe's) of our posterior predictive distributions under the two assumptions, when the specialists' opinions are taken into account (namely $\hat{q}_1(x^{AB})$ and $\hat{q}_2(x^{AB})$, respectively), are evaluated by the Kullback–Leibler distance

$$pe(\hat{q}_i, AB) = E_{X^{AB}}\!\left[\log \frac{q_{\lambda_{AB}}(x^{AB})}{\hat{q}_i(x^{AB})}\right], \qquad i = 1, 2,$$

where $q_{\lambda_{AB}}(x^{AB})$ is the Poisson distribution in (8), and $\hat{q}_i(x^{AB})$ for $i = 1, 2$ are given in (13) and (21), respectively. This can be repeated for $\hat{q}_1(x^{BA})$ and $\hat{q}_2(x^{BA})$ as well.
According to Table 1, under Assumption I one needs to calculate the distance between Po(5) and NB(51.5, 0.924) for the number of goals team A scores against team B, and the distance between Po(3) and NB(34.5, 0.90) for the number of goals team B scores against team A. We obtain $pe(\hat{q}_1, AB) = 0.026$ and $pe(\hat{q}_1, BA) = 0.033$. In contrast, if we follow Assumption II based on Table 4, the prediction errors become $pe(\hat{q}_2, AB) = 0.18$ and $pe(\hat{q}_2, BA) = 0.046$, respectively.
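The Kullback–Leibler prediction error can be approximated by truncating the expectation at a large goal count; a sketch (ours), again reading NB in SciPy's convention:

```python
import numpy as np
from scipy import stats

def kl_po_vs_nb(lam, r, p, K=80):
    """Kullback-Leibler distance between Po(lam) and NB(r, p) (SciPy's
    nbinom convention), with the sum truncated at K goals."""
    ks = np.arange(K)
    q = stats.poisson.pmf(ks, lam)        # reference Poisson model
    q_hat = stats.nbinom.pmf(ks, r, p)    # predictive density estimator
    return float(np.sum(q * np.log(q / q_hat)))
```

The distance is essentially zero when the NB is tuned to mimic the Poisson (large $r$ with matching mean), and small but positive for the fitted predictive above.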

5.2. Simulation Study

We consider a small simulation study based on a sample of size 1000, in order to investigate the proposed posterior density estimators of the number of goals team B scores against team A in Section 5. Figure 3 depicts the assumed underlying models, Po (Assumption I) and COM-P (Assumption II), along with their corresponding posterior predictive densities. It can be seen that Assumption I and its posterior predictive density estimator perform better for the number of goals.
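A sketch of such a simulation under Assumption I (the true scoring rate, seed, and prior are our choices; the paper's exact simulation settings are not restated here): draw 1000 Poisson counts and form the Lemma 1 posterior predictive, which should track the truth closely:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.poisson(3.8, size=1000)       # assumed true Po scoring model

# Poisson-gamma posterior predictive of Lemma 1 with a vague Gam(0.5, 8)
# prior, written in SciPy's nbinom convention (second argument is 1 - p)
alpha, beta = 0.5, 8.0
n, s = len(sample), sample.sum()
pred = stats.nbinom(s + alpha, (n + 1 / beta) / (n + 1 + 1 / beta))
```

With this much data the predictive mean essentially equals the sample mean, and the predictive pmf is close to the true Poisson pmf.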

6. Conclusions

In summary, we have proposed Bayesian predictive density estimators for the number of goals in a hockey match and, consequently, for predicting the winner of the game. We considered two different assumptions and, furthermore, incorporated the points earned in previous games, away and home factors, and specialists' opinions to improve the predictions. Assumption I posits that the underlying model, i.e., the number of goals in hockey, follows the Poisson distribution, while Assumption II adopts the COM-P. Based on the prediction errors, Assumption I is preferable. Finally, the predictors based on Assumptions I and II both indicate that Edmonton (the home team) will win the next match against Arizona (the away team) by a one-goal margin, with probabilities of 48 and 55 percent, respectively.

Author Contributions

Methodology, calculations and writing the draft, A.S.; adding critical and constructive comments as well as editorial advice, S.E.A.

Funding

This research received no external funding.

Acknowledgments

The authors thank the Stathletes company, especially Meghan Chayka, Jeff Goeree, and Terry Chayka, for providing the data and for their helpful comments on this manuscript. The Natural Sciences and Engineering Research Council of Canada and the Ontario Centre of Excellence supported the research of S. Ejaz Ahmed.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gramacy, R.B.; Jensen, S.T.; Taddy, M. Estimating player contribution in hockey with regularized logistic regression. J. Quant. Anal. Sport. 2013, 9, 97–111.
  2. Sadeghkhani, A.; Ahmed, S.E. Predicting the scoring time in hockey. arXiv 2019, arXiv:1903.10889.
  3. Suzuki, A.K.; Salasar, L.E.B.; Leite, J.G.; Louzada-Neto, F. A Bayesian approach for predicting match outcomes: The 2006 (Association) Football World Cup. J. Oper. Res. Soc. 2010, 61, 1530–1539.
  4. Conway, R.W.; Maxwell, W.L. A queuing model with state dependent service rates. J. Ind. Eng. 1962, 12, 132–136.
  5. Kadane, J.B.; Shmueli, G.; Minka, T.P.; Borle, S.; Boatwright, P. Conjugate analysis of the Conway–Maxwell–Poisson distribution. Bayesian Anal. 2006, 1, 363–374.
  6. Shmueli, G.; Minka, T.P.; Kadane, J.B.; Borle, S.; Boatwright, P. A useful distribution for fitting discrete data: Revival of the Conway–Maxwell–Poisson distribution. J. R. Stat. Soc. Ser. C (Appl. Stat.) 2005, 54, 127–142.
  7. Imoto, T. A generalized Conway–Maxwell–Poisson distribution which includes the negative binomial distribution. Appl. Math. Comput. 2014, 247, 824–834.
Figure 1. Pmf of the number of goals, Edmonton Oilers vs. Arizona Coyotes, from (22), based on the specialists' opinions and the 2017/18 and 2018/19 season data (up to 29 January).
Figure 2. Pmf of the number of goals, Edmonton Oilers vs. Arizona Coyotes, from Table 4, based on the specialists' opinions and the 2017/18 and 2018/19 season data (up to 29 January).
Figure 3. Simulation study based on a sample of size 1000, under model Assumption I (above) and Assumption II (below).
Table 1. (Left) Number of goals the Edmonton Oilers scored in the 2018/19 season (up to 29 January) against the Arizona Coyotes, as well as against teams with abilities similar to the Arizona Coyotes, playing away. (Right) Number of goals the Arizona Coyotes scored in the 2017/18 and 2018/19 seasons (up to 29 January) against the Edmonton Oilers, as well as against teams with abilities similar to the Edmonton Oilers, when playing at home.

Away           x_i^AB   Q_i^AB   |   Home           x_i^BA   Q_i^BA
Pittsburgh       5       1.81    |   Winnipeg         3       1.32
Montreal         6       1.2     |   Washington       4       1.2
Vegas            3       1.19    |   Detroit          1       0.88
Dallas           1       1.06    |   Los Angeles      2       0.88
Los Angeles      3       0.88    |   Buffalo          1       1.14
Vegas            2       1.19    |   San Jose         4       1.25
Calgary          1       1.39    |   Los Angeles      1       0.88
Philadelphia     4       0.96    |   Vancouver        4       1.02
Tampa Bay        3       1.55    |   Edmonton         3       1
Vancouver        2       1.02    |
Arizona          2       1       |
Calgary          2       1.39    |
Table 2. Winning probabilities for the upcoming match, Edmonton Oilers vs. Arizona Coyotes, based on the specialists' opinions.

π_w   Probability that the Edmonton Oilers win the next match   0.75
π_l   Probability that the Arizona Coyotes win the next match   0.13
π_d   Probability that the next match is a draw                 0.12
Table 3. Winning probabilities for the upcoming match, Edmonton Oilers vs. Arizona Coyotes, based on the specialists' opinions and the dataset for the 2017/18 season and the current season up to 29 January 2019.

π_w   Probability that the Edmonton Oilers win the next match   0.48
π_l   Probability that the Arizona Coyotes win the next match   0.37
π_d   Probability that the next match is a draw                 0.15
Table 4. Posterior predictive densities of the number of goals scored by Edmonton versus Arizona.

x               0      1      2      3      4      5      6      7      8
P(X^AB = x)    0.06   0.16   0.21   0.21   0.16   0.1    0.06   0.03   0.01
P(X^BA = x)    0.13   0.24   0.25   0.19   0.11   0.06   0.02   0      0
Table 5. Winning probabilities for the upcoming match, Edmonton Oilers vs. Arizona Coyotes, based on the specialists' opinions and the dataset for the 2017/18 season and the current season up to 29 January 2019.

π_w   Probability that the Edmonton Oilers win the next match   0.55
π_l   Probability that the Arizona Coyotes win the next match   0.29
π_d   Probability that the next match is a draw                 0.16
