# Inference with the Median of a Prior


## Abstract


## 1 Introduction

Let the statistical model be given by a cumulative distribution function (cdf) $F_{X|\nu,\theta}(\mathbf{x}|\nu, \theta)$ or its corresponding probability density function (pdf) $f_{X|\nu,\theta}(\mathbf{x}|\nu, \theta)$, where $\mathbf{X} = (X_1, \ldots, X_n)'$ and $\mathbf{x} = (x_1, \ldots, x_n)'$. $\mathcal{V}$ is a random parameter on which we have a priori information and $\theta$ is a fixed unknown parameter. This prior information can either be of the form of a prior cdf $F_{\mathcal{V}}(\nu)$ (or a pdf $f_{\mathcal{V}}(\nu)$) or, for example, only the knowledge of a finite number of its moments. In the first case, the marginal cdf of $\mathbf{X}$ is

$$F_{X|\theta}(\mathbf{x}|\theta) = \int F_{X|\nu,\theta}(\mathbf{x}|\nu, \theta)\, f_{\mathcal{V}}(\nu)\, d\nu, \qquad (1)$$

and $f_{X|\theta}(\mathbf{x}|\theta)$ is the pdf corresponding to the cdf $F_{X|\theta}(\mathbf{x}|\theta)$. In the second case, we can use the maximum entropy (ME) principle to assign a prior pdf $f_{\mathcal{V}}(\nu)$ and thus go back to the previous case, e.g. [1] page 90.

In place of $F_{X|\theta}(\mathbf{x}|\theta)$ in (1), we propose a new inference tool $\tilde{F}_{X|\theta}(\mathbf{x}|\theta)$ which can be used to infer on $\theta$ (we will show that $\tilde{F}_{X|\theta}(\mathbf{x}|\theta)$ is a cdf under a few conditions). For example, we can estimate $\theta$ by maximizing $\tilde{f}_{X|\theta}(\mathbf{x}|\theta)$, where $\tilde{f}_{X|\theta}(\mathbf{x}|\theta)$ is the pdf corresponding to the cdf $\tilde{F}_{X|\theta}(\mathbf{x}|\theta)$.

We can look at $F_{X|\theta}(\mathbf{x}|\theta)$ in (1) as the mean value of the random variable $T = T(\mathcal{V}; \mathbf{x}) = F_{X|\nu,\theta}(\mathbf{x}|\mathcal{V}, \theta)$. Now, if in place of the mean value we take the median, we obtain the new inference tool $\tilde{F}_{X|\theta}(\mathbf{x}|\theta)$, defined as

$$\tilde{F}_{X|\theta}(\mathbf{x}|\theta) = \operatorname{Med}(T).$$

In the next section we give a precise definition of $\tilde{F}_{X|\theta}(\mathbf{x}|\theta)$. Then we present some of its properties. For example, we show that, under some conditions, $\tilde{F}_{X|\theta}(\mathbf{x}|\theta)$ has all the properties of a cdf, that its calculation is very easy, and that it depends only on the median of the prior distribution. Then we give a few examples and, finally, we compare the relative performances of these two tools for the inference on $\theta$. Extensions and conclusions are given in the last two sections.

## 2 A New Inference Tool

Throughout this section we assume that the random variables $X_i$, $i = 1, \ldots, n$, and the random parameter $\mathcal{V}$ are continuous and real. We also use increasing and decreasing instead of non-decreasing and non-increasing, respectively.

**Definition 1** Let $\mathbf{X} = (X_1, \ldots, X_n)'$ have a cdf $F_{X|\nu}(\mathbf{x}|\nu)$ depending on a random parameter $\mathcal{V}$ with pdf $f_{\mathcal{V}}(\nu)$, and let the random variable $T = T(\mathcal{V}; \mathbf{x}) = F_{X|\nu}(\mathbf{x}|\mathcal{V})$ have a unique median for each fixed $\mathbf{x}$. The new inference tool, $\tilde{F}_X(\mathbf{x})$, is defined as the median of $T$:

$$\tilde{F}_X(\mathbf{x}) = \operatorname{Med}(T) = \operatorname{Med}\left(F_{X|\nu}(\mathbf{x}|\mathcal{V})\right).$$

**Example A** Let $F_{X|\mathcal{V}}(x|\nu) = 1 - e^{-\nu x}$, $x > 0$, be the cdf of an exponential random variable with scale parameter $\nu > 0$. We assume that the prior pdf of $\mathcal{V}$ is known and is also exponential with parameter 1, i.e. $f_{\mathcal{V}}(\nu) = e^{-\nu}$, $\nu > 0$. We define the random variable $T = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$ for any fixed value $x > 0$. The random variable $0 \le T \le 1$ has cdf $F_T(t) = 1 - (1-t)^{1/x}$ and pdf $f_T(t) = \frac{1}{x}(1-t)^{\frac{1}{x}-1}$, $0 \le t \le 1$. Now we can calculate the mean of the random variable $T$ as follows:

$$\mathrm{E}(T) = 1 - \mathrm{E}\left(e^{-\mathcal{V}x}\right) = 1 - \frac{1}{1+x} = \frac{x}{1+x} = F_X(x).$$

The marginal cdf $F_X(x)$ is well known, well defined and can also be calculated directly by (1). On the other hand, in this example it is obvious that $\operatorname{Med}(T)$ is also a cdf wrt $x$, namely the $\tilde{F}_X(x)$ of Definition 1; see Figure 1. However, we do not yet have a shortcut for calculating $\tilde{F}_X(x)$ analogous to (1) for $F_X(x)$.
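Both quantities are easy to check numerically. The sketch below (plain Python; the function names are ours) draws $\mathcal{V} \sim \operatorname{Exp}(1)$, forms $T = 1 - e^{-\mathcal{V}x}$, and compares Monte Carlo estimates with the closed forms $\mathrm{E}(T) = x/(1+x)$ and $\operatorname{Med}(T) = 1 - 2^{-x}$, the latter obtained by solving $F_T(m) = 1/2$:

```python
import math
import random

def sample_T(x, n=200_000, seed=0):
    """Draw T = 1 - exp(-V*x) for V ~ Exp(1), as in Example A."""
    rng = random.Random(seed)
    return [1.0 - math.exp(-rng.expovariate(1.0) * x) for _ in range(n)]

x = 1.5
ts = sorted(sample_T(x))
mc_median = ts[len(ts) // 2]          # Monte Carlo median of T
mc_mean = sum(ts) / len(ts)           # Monte Carlo mean of T

closed_median = 1.0 - 2.0 ** (-x)     # solves F_T(m) = 1 - (1-m)**(1/x) = 1/2
closed_mean = x / (1.0 + x)           # E(T) = F_X(x), the marginal cdf
print(mc_median, closed_median, mc_mean, closed_mean)
```

The closed-form median is exactly $F_{X|\mathcal{V}}(x|\operatorname{Med}(\mathcal{V}))$ with $\operatorname{Med}(\mathcal{V}) = \ln 2$, foreshadowing the shortcut of Theorem 2 below.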

In the following theorem we show that, under a few conditions, $\tilde{F}_X(\mathbf{x})$ has all the properties of a cdf. Then, in Theorem 2, we derive a simple expression for calculating $\tilde{F}_X(\mathbf{x})$ and show that, in many cases, $\tilde{F}_X(\mathbf{x})$ depends only on the median of the prior and can be calculated simply; see Remark 2. In Theorem 3 we state the separability property of $\tilde{F}_X(\mathbf{x})$ versus the exchangeability of $F_X(\mathbf{x})$.

**Theorem 1** Let $\mathbf{X}$ have a cdf $F_{X|\nu}(\mathbf{x}|\nu)$ depending on a random parameter $\mathcal{V}$ with pdf $f_{\mathcal{V}}(\nu)$, and let the real random variable $T = F_{X|\nu}(\mathbf{x}|\mathcal{V})$ have a unique median for each fixed $\mathbf{x}$. Then:

1. $\tilde{F}_X(\mathbf{x})$ is an increasing function in each of its arguments.
2. If $F_{X|\nu}(\mathbf{x}|\nu)$ and $F_{\mathcal{V}}(\nu)$ are continuous cdfs, then $\tilde{F}_X(\mathbf{x})$ is a continuous function in each of its arguments.
3. $0 \le \tilde{F}_X(\mathbf{x}) \le 1$.

**Proof**

1. Let $\mathbf{y} = (y_1, \ldots, y_n)'$ and $\mathbf{z} = (z_1, \ldots, z_n)'$ with $y_j < z_j$ for a fixed $j$ and $y_i = z_i$ for $i \ne j$, $1 \le i, j \le n$, and take $k_{\mathbf{y}} = \tilde{F}_X(\mathbf{y})$ and $k_{\mathbf{z}} = \tilde{F}_X(\mathbf{z})$. $F_{X|\mathcal{V}}$ is an increasing function in each of its arguments, so $F_{X|\mathcal{V}}(\mathbf{y}|\mathcal{V}) \le F_{X|\mathcal{V}}(\mathbf{z}|\mathcal{V})$ for every value of $\mathcal{V}$. Since $k_{\mathbf{y}}$ is the unique median of $F_{X|\mathcal{V}}(\mathbf{y}|\mathcal{V})$,

   $$P\left(F_{X|\mathcal{V}}(\mathbf{z}|\mathcal{V}) \ge k_{\mathbf{y}}\right) \ge P\left(F_{X|\mathcal{V}}(\mathbf{y}|\mathcal{V}) \ge k_{\mathbf{y}}\right) \ge \frac{1}{2},$$

   and so $k_{\mathbf{y}} \le k_{\mathbf{z}}$, or equivalently $\tilde{F}_X(\mathbf{x})$ is increasing in its $j$-th argument.
2. Fix $j$ and write $\mathbf{t} = (x_1, \ldots, x_{j-1}, t, x_{j+1}, \ldots, x_n)'$. By part 1, $\tilde{F}_X(\mathbf{t})$ is increasing in $t$, so its one-sided limits at any point $x_\cdot$ exist. Further, $F_{X|\mathcal{V}}(\mathbf{t}|\mathcal{V})$ is continuous wrt $t$ and $\tilde{F}_X(\mathbf{t})$ is its unique median; letting $t \uparrow x_\cdot$ and $t \downarrow x_\cdot$ and using the continuity of $F_{X|\nu}(\mathbf{x}|\nu)$ and $F_{\mathcal{V}}(\nu)$, both one-sided limits are medians of $F_{X|\mathcal{V}}(\mathbf{x}_\cdot|\mathcal{V})$ and hence, by uniqueness of the median, equal $\tilde{F}_X(\mathbf{x}_\cdot)$. Therefore $\tilde{F}_X(\mathbf{x})$ is continuous in its $j$-th argument.
3. $\tilde{F}_X(\mathbf{x})$ is the median of the random variable $T = F_{X|\mathcal{V}}(\mathbf{x}|\mathcal{V})$ with $0 \le T \le 1$, and so $0 \le \tilde{F}_X(\mathbf{x}) \le 1$. ☐

**Remark 1** Monotone bounded functions have finite limits, so $\lim_{x_j \uparrow +\infty} \tilde{F}_X(\mathbf{x})$ and $\lim_{x_j \downarrow -\infty} \tilde{F}_X(\mathbf{x})$ exist and are finite, [11]. Therefore $\tilde{F}_X(\mathbf{x})$ is a continuous cdf if the conditions of Theorem 1 hold and

1. $\lim_{x_j \downarrow -\infty} \tilde{F}_X(\mathbf{x}) = 0$ for any particular $j$,
2. $\lim_{x_1 \uparrow +\infty, \ldots, x_n \uparrow +\infty} \tilde{F}_X(\mathbf{x}) = 1$,
3. $\Delta_{b_1 a_1} \cdots \Delta_{b_n a_n} \tilde{F}_X(\mathbf{x}) \ge 0$, where $a_i \le b_i$, $i = 1, \ldots, n$, and $\Delta_{b_j a_j} \tilde{F}_X(\mathbf{x}) = \tilde{F}_X((x_1, \ldots, x_{j-1}, b_j, x_{j+1}, \ldots, x_n)') - \tilde{F}_X((x_1, \ldots, x_{j-1}, a_j, x_{j+1}, \ldots, x_n)')$.

In this case we may call $\tilde{F}_X(\mathbf{x})$ the marginal cdf of $\mathbf{X}$ based on the median. When $\tilde{F}_X(x)$ is a one dimensional cdf, the last condition follows from parts 1 and 3 of Theorem 1.

**Theorem 2** If $L(\nu) = F_{X|\nu}(\mathbf{x}|\nu)$ is a monotone function wrt $\nu$ and $\mathcal{V}$ has a unique median $F_{\mathcal{V}}^{-1}\left(\frac{1}{2}\right)$, then

$$\tilde{F}_X(\mathbf{x}) = L\left(F_{\mathcal{V}}^{-1}\left(\tfrac{1}{2}\right)\right).$$

**Remark 2** $\tilde{F}_X(\mathbf{x})$ belongs to the family of distributions $F_{X|\mathcal{V}}(\mathbf{x}|\mathcal{V})$, because $\tilde{F}_X(\mathbf{x}) = F_{X|\nu}\left(\mathbf{x}\,\middle|\,F_{\mathcal{V}}^{-1}\left(\frac{1}{2}\right)\right)$. Therefore $\tilde{F}_X(\mathbf{x})$ is a cdf and the conditions in Remark 1 hold.

**Remark 3** The expression of $\tilde{F}_X(\mathbf{x})$ depends only on the median of the prior distribution, $F_{\mathcal{V}}^{-1}\left(\frac{1}{2}\right)$, while the expression of $F_X(\mathbf{x})$ needs the perfect knowledge of $F_{\mathcal{V}}(\nu)$. Therefore, $\tilde{F}_X(\mathbf{x})$ is robust relative to prior distributions with the same median.

**Remark 4** The uniqueness of the median of $T$ in Definition 1 is needed. For an example (called Example B), let the prior cdf of $\mathcal{V}$ be the one shown in Figure 2-left (with its corresponding pdf); then the random variable $T = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$ has the cdf shown in Figure 2-right, whose median is not unique.

**Theorem 3** Let $F_{X|\nu}(\mathbf{x}|\nu)$ be the conditional cdf of $\mathbf{X} = (X_1, \ldots, X_n)'$ given $\mathcal{V} = \nu$ and let $L_{(k_1, \ldots, k_r)}(\nu) = F_{(X_{k_1}, \ldots, X_{k_r})|\mathcal{V}}(x_{k_1}, \ldots, x_{k_r}|\nu)$ be a monotone function of $\nu$ for each $\{k_1, \ldots, k_r\} \subseteq \{1, \ldots, n\}$. Let also $\mathcal{V}$ have a unique median $F_{\mathcal{V}}^{-1}\left(\frac{1}{2}\right)$. If $\mathbf{X} \mid \mathcal{V} = \nu$ has independent components, then, for each $\{k_1, \ldots, k_r\} \subseteq \{1, \ldots, n\}$,

$$\tilde{F}_{(X_{k_1}, \ldots, X_{k_r})}(x_{k_1}, \ldots, x_{k_r}) = \prod_{i=1}^{r} \tilde{F}_{X_{k_i}}(x_{k_i}).$$

**Remark 5** It is well known that if $\mathbf{X} \mid \mathcal{V} = \nu$ has independent components, then the marginal distribution of $\mathbf{X}$ cannot, in general, have independent components. For example, if $\mathbf{X} \mid \mathcal{V} = \nu$ has independent and identically distributed (iid) components, then the marginal distribution of $\mathbf{X}$ is exchangeable; see Example 1. We recall that, for identically distributed random variables, exchangeability is a weaker condition than independence.

In the following lemmas we give conditions under which the monotonicity required by Theorem 2 holds, so that the calculation of $\tilde{F}_X(\mathbf{x})$ is very easy.

**Lemma 1** Let $L(\nu) = F_{X|\mathcal{V}}(\mathbf{x}|\nu)$. If $\nu$ is a real location parameter then $L(\nu)$ is decreasing wrt $\nu$.

**Proof** Let $\nu_1 < \nu_2$ and $\nu$ be a location parameter, so that $F_{X|\mathcal{V}}(\mathbf{x}|\nu) = F_0(\mathbf{x} - \nu \mathbf{1})$, where $F_0$ is the cdf for $\nu = 0$ and $\mathbf{1} = (1, \ldots, 1)'$. Then

$$L(\nu_1) = F_0(\mathbf{x} - \nu_1 \mathbf{1}) \ge F_0(\mathbf{x} - \nu_2 \mathbf{1}) = L(\nu_2),$$

since $F_0$ is increasing in each of its arguments. ☐

**Lemma 2** Let $L(\nu) = F_{X|\mathcal{V}}(x|\nu)$. If $\nu$ is a scale parameter then $L(\nu)$ is monotone wrt $\nu$.

**Proof** Let $\nu_1 < \nu_2$. If $\nu$ is a scale parameter, $\nu > 0$, then $F_{X|\mathcal{V}}(x|\nu) = F_0(x/\nu)$, where $F_0$ is the cdf for $\nu = 1$. For a fixed $x > 0$ we have $x/\nu_1 > x/\nu_2$ and so $L(\nu_1) \ge L(\nu_2)$; for a fixed $x < 0$ the inequality is reversed. In both cases $L(\nu)$ is monotone wrt $\nu$. ☐

**Lemma 3** Let $X_1, \ldots, X_n$ given $\mathcal{V} = \nu$ be iid random variables and $\mathbf{X} = (X_1, \ldots, X_n)'$. If $L(\nu) = F_{X_1|\mathcal{V}}(x|\nu)$ is an increasing (a decreasing) function, then $L^*(\nu) = F_{X|\nu}(\mathbf{x}|\nu) = \prod_{i=1}^{n} F_{X_1|\mathcal{V}}(x_i|\nu)$ is an increasing (a decreasing) function of $\nu$.

Let $\mathbf{X} \mid \boldsymbol{\eta}$ be distributed according to an exponential family with pdf

$$f_{X|\boldsymbol{\eta}}(\mathbf{x}|\boldsymbol{\eta}) = h(\mathbf{x}) \exp\left\{ \sum_{i=1}^{n} \eta_i T_i(\mathbf{x}) - A(\boldsymbol{\eta}) \right\},$$

where $\boldsymbol{\eta} = (\eta_1, \ldots, \eta_n)'$ and $\mathbf{T} = (T_1, \ldots, T_n)'$. It can be shown that $L(\boldsymbol{\eta}) = F_{X|\boldsymbol{\eta}}(\mathbf{x}|\boldsymbol{\eta})$ is a monotone function wrt each of its arguments in many cases by the following method. Let $I_{\mathbf{y} \le \mathbf{x}} = 1$ if $y_1 \le x_1, \ldots, y_n \le x_n$ and $0$ elsewhere, and note that differentiation under the integral sign is valid for exponential families. Then, writing $L(\boldsymbol{\eta}) = \mathrm{E}(I_{\mathbf{Y} \le \mathbf{x}})$ with $\mathbf{Y} \sim f_{X|\boldsymbol{\eta}}$,

$$\frac{\partial}{\partial \eta_j} L(\boldsymbol{\eta}) = \operatorname{Cov}\left(I_{\mathbf{Y} \le \mathbf{x}},\, T_j(\mathbf{Y})\right),$$

so the sign of this covariance determines the monotonicity of $L$ wrt $\eta_j$.

A family of pdfs $f_{X|\mathcal{V}}(x|\nu)$, $\nu \in V$, is said to have the monotone likelihood ratio (MLR) property if for every $\nu_1 < \nu_2$ in $V$ the likelihood ratio $f_{X|\mathcal{V}}(x|\nu_2)/f_{X|\mathcal{V}}(x|\nu_1)$ is a monotone function of $x$.

**Lemma 4** If the family $f_{X|\mathcal{V}}(x|\nu)$ has the MLR property, then $L(\nu) = F_{X|\mathcal{V}}(x|\nu)$ is an increasing (or a decreasing) function of $\nu$ for all $x$.

A family of cdfs $F_{X|\mathcal{V}}(x|\nu)$ is called stochastically increasing (SI) if $\nu_1 < \nu_2$ implies $F_{X|\mathcal{V}}(x|\nu_1) \ge F_{X|\mathcal{V}}(x|\nu_2)$ for all $x$. For stochastically decreasing (SD) families the inequality is reversed. This property is weaker than the MLR property (by Lemma 4), but stronger than the monotonicity of $L(\nu) = F_{X|\mathcal{V}}(x|\nu)$, because the latter requires monotonicity only for each fixed $x$. Therefore we have

$$\text{MLR} \;\Rightarrow\; \text{SI or SD} \;\Rightarrow\; L(\nu) \text{ monotone for each fixed } x.$$

**Remark 6** Theorem 2 needs only the monotonicity of $L(\nu) = F_{X|\mathcal{V}}(x|\nu)$ wrt $\nu$, which, as noted above, is weaker than the MLR, SI and SD properties. As an example (called Example C), Figure 3-left shows the graphs of $L(\nu) = F_{X|\mathcal{V}}(x|\nu)$ for different $x$ in a model for which $L(\nu)$ is not monotone for some of the $x$ values. If we assume that the prior pdf of $\mathcal{V}$ is known and is also exponential with parameter 1, then the median of the random variable $T$ is still a cdf; see Figure 3-right.

## 3 Examples

A random vector $\mathbf{X} = (X_1, \ldots, X_n)'$ is said to have an exchangeable normal distribution, $\mathcal{E}\mathcal{N}(\mathbf{x}; \mu, \sigma^2, \rho)$, if its distribution is multivariate normal with mean vector and variance-covariance matrix

$$\mu \mathbf{1} \quad\text{and}\quad \sigma^2 \left[ (1 - \rho)\, \mathbf{I} + \rho\, \mathbf{1}\mathbf{1}' \right],$$

where $\mathbf{1} = (1, \ldots, 1)'$, $\mathbf{I}$ is the $n \times n$ identity matrix, $\sigma^2 > 0$ and $-\frac{1}{n-1} \le \rho \le 1$.

#### 3.1 Example 1

Let $X \mid \mathcal{V} = \nu, \theta \sim \mathcal{N}(x; \nu, \theta)$, let $X_1, \ldots, X_n$ be an iid copy of $X$ (i.e. of $X \mid \mathcal{V} = \nu, \theta$) and $\mathbf{X} = (X_1, \ldots, X_n)'$. Then:

- Prior pdf case $f_{\mathcal{V}}(\nu) = \mathcal{N}(\nu; \nu_0, \theta_0)$: then the marginal distribution of $\mathbf{X}$ is exchangeable normal, $F_{X|\theta}(\mathbf{x}|\theta) = \mathcal{E}\mathcal{N}\left(\mathbf{x}; \nu_0, \theta + \theta_0, \frac{\theta_0}{\theta + \theta_0}\right)$.
- Unique median knowledge case $\operatorname{Median}\{\mathcal{V}\} = \nu_0$: by Lemma 1, $F_{X|\mathcal{V},\theta}(\mathbf{x}|\nu, \theta)$ is a decreasing function wrt $\nu$, and therefore, by Theorem 2, $\tilde{F}_{X|\theta}(\mathbf{x}|\theta) = F_{X|\mathcal{V},\theta}(\mathbf{x}|\nu_0, \theta)$, with corresponding pdf $\tilde{f}_{X|\theta}(\mathbf{x}|\theta)$ given by (5). Moreover, if $f_{\mathcal{V}}(\nu) = \mathcal{N}(\nu; \nu_0, \theta_0)$ or $f_{\mathcal{V}}(\nu) = \mathcal{C}(\nu; \nu_0, \theta_0)$, then $\tilde{f}_{X|\theta}(\mathbf{x}|\theta)$ is still given by (5), because the medians of these two distributions are both equal to $\nu_0$ (see Remark 3).
- Moments knowledge case $\mathrm{E}(|\mathcal{V}|) = \nu_0$: then the ME pdf is the double exponential $\mathcal{D}\mathcal{E}(\nu; \nu_0)$. In this case we cannot obtain an analytical expression for the marginal cdf. Note that if we only know $\mathrm{E}(\mathcal{V}) = \nu_0$ or $\operatorname{Median}\{\mathcal{V}\} = \nu_0$ and the support of $\mathcal{V}$ is $\mathbb{R}$, the ME pdf does not exist.
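The robustness noted in Remark 3 can be illustrated numerically for $n = 1$ (a sketch with numbers of our own choosing): a normal prior and a Cauchy prior with the same median $\nu_0$ yield the same median-based tool $\Phi\left((x - \nu_0)/\sqrt{\theta}\right)$, even though the corresponding marginal distributions differ:

```python
import math
import random
from statistics import NormalDist

phi = NormalDist()

def mc_median_T(prior_draw, x, theta, n=100_001, seed=2):
    """Monte Carlo median of T = Phi((x - V)/sqrt(theta)) for V drawn from a prior."""
    rng = random.Random(seed)
    ts = sorted(phi.cdf((x - prior_draw(rng)) / math.sqrt(theta)) for _ in range(n))
    return ts[n // 2]

x, theta, nu0, theta0 = 1.0, 2.0, 0.5, 1.0
normal_prior = lambda rng: rng.gauss(nu0, math.sqrt(theta0))               # N(nu0, theta0)
cauchy_prior = lambda rng: nu0 + math.tan(math.pi * (rng.random() - 0.5))  # Cauchy(nu0, 1)

# F-tilde from Theorem 2: it uses the prior only through its median nu0.
tool = phi.cdf((x - nu0) / math.sqrt(theta))
print(mc_median_T(normal_prior, x, theta), mc_median_T(cauchy_prior, x, theta), tool)
```

Both Monte Carlo medians match `tool`, although the Cauchy prior has no mean at all, so neither a mean-based marginal nor a moment-matched ME prior is available for it.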

#### 3.2 Example 2

- Prior pdf case $f_{\mathcal{V}}(\nu) = \mathcal{I}\mathcal{G}(\nu; \alpha, \beta)$: then it is easy to show that the marginal cdf $F_{X|\theta}(\mathbf{x}|\theta)$ cannot be calculated analytically.
- Unique median knowledge case $\operatorname{Median}\{\mathcal{V}\} = \nu_0$: by Lemma 2 and Theorem 2 we have $\tilde{F}_{X|\theta}(\mathbf{x}|\theta) = F_{X|\mathcal{V},\theta}(\mathbf{x}|\nu_0, \theta)$; indeed $F_{X|\nu,\theta}(\mathbf{x}|\nu, \theta)$ is a monotone function wrt $\nu$ (checked by taking the derivative), and by Theorem 3 the separability property holds, $\tilde{F}_{X|\theta}(\mathbf{x}|\theta) = \prod_{i=1}^{n} \tilde{F}_{X_i|\theta}(x_i|\theta)$.
- Moments knowledge case $\mathrm{E}(1/\mathcal{V}) = 1/\nu_0$: then, knowing that the variance is a positive quantity, the ME pdf $f_{\mathcal{V}}(\nu)$ is an $\mathcal{I}\mathcal{G}(\nu; 1, \nu_0)$. In this case too, $F_{X|\theta}(\mathbf{x}|\theta)$ cannot be calculated analytically.

#### 3.3 Example 3

Let $\mathbf{X} \sim \mathcal{E}\mathcal{N}(\mathbf{x}; \nu, \sigma^2, \rho)$, where $\nu$ is a nuisance parameter. Noting that we can write $\mathcal{E}\mathcal{N}(\mathbf{x}; \nu, \sigma^2, \rho)$ in exponential family form, the covariance argument of Section 2 applies to the statistic $T_3$ and so $L(\nu)$ is a monotone function. Let $\theta = (\sigma^2, \rho)$ and the median of the prior pdf be $\nu_0$; then, by Theorem 2,

$$\tilde{F}_{X|\theta}(\mathbf{x}|\theta) = \mathcal{E}\mathcal{N}(\mathbf{x}; \nu_0, \sigma^2, \rho).$$

#### 3.4 Comparison of Estimators in Example 1

In this subsection we compare, by simulation, the estimators of $\theta$ in Example 1, taking $\theta_0 = 1$. The MLE of $\theta$ based on $\tilde{f}_{X|\theta}(\mathbf{x}|\theta)$ is denoted by $\tilde{\theta}$; for $n = 1$ it is equal to $\tilde{\theta} = (X - \nu_0)^2$. Recall that $f_{X|\theta}(\mathbf{x}|\theta)$ is the true pdf of the observations, obtained using the full prior knowledge on the nuisance parameter, while $\tilde{f}_{X|\theta}(\mathbf{x}|\theta)$ is a pseudo pdf which includes only the prior knowledge of the median of the nuisance parameter.

We denote by $\hat{\theta}$ the MLE of $\theta$ based on $f_{X|\theta}(\mathbf{x}|\theta)$, by $T$ the MLE of $\theta$ when $\nu$ is known exactly to be $\nu_0$, and by $T_{MaxEnt}$ the MLE of $\theta$ when the prior mean and variance are known. In Table 1 we classify these 4 estimators and the corresponding assumptions for $n = 1$. In Figure 4-left we see that $\hat{\theta}$ is better than $\tilde{\theta}$, especially for large sample size $n$, and that $T$ is the best.

In Figure 4-right the empirical MSEs are plotted wrt $\nu_0$, for $\theta = 2$. This is useful for checking the robustness of the estimators wrt false prior information. We see that $\hat{\theta}$ is more robust than $\tilde{\theta}$ relative to $\nu_0$, but both of them are dominated by $T$. In this case, the samples are generated from a normal distribution with a random normal mean (with median $\nu_0$) and $\theta = 2$; however, for the estimators we assume that $\nu$ has a standard normal prior distribution. Note that for $T_{MaxEnt}$ we use the prior mean and prior variance information, while for $\tilde{\theta}$ we use only the median value of the prior distribution.
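The closed-form MSE in the last row of Table 1 can be reproduced by simulation. A sketch for $n = 1$, $\nu_0 = 0$, $\theta_0 = 1$ (our own choice of numbers), where the data really come from $\mathcal{N}(0, \theta + 1)$ while $\tilde{\theta} = X^2$:

```python
import random

def empirical_mse(theta, n=400_000, seed=3):
    """Empirical MSE of theta-tilde = X**2 when X really ~ N(0, theta + 1),
    i.e. the prior median nu0 = 0 is used although V has prior variance 1."""
    rng = random.Random(seed)
    sd = (theta + 1.0) ** 0.5
    return sum((rng.gauss(0.0, sd) ** 2 - theta) ** 2 for _ in range(n)) / n

theta = 2.0
closed_form = 2.0 * (theta + 1.0) ** 2 + 1.0   # Table 1, last row: 2(theta+1)^2 + 1
print(empirical_mse(theta), closed_form)
```

The agreement also follows from normal moments: for $X \sim \mathcal{N}(0, \sigma^2)$, $\mathrm{E}(X^2 - \theta)^2 = 3\sigma^4 - 2\theta\sigma^2 + \theta^2$, which equals $2(\theta+1)^2 + 1$ at $\sigma^2 = \theta + 1$.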

## 4 Extensions

One possible extension is to use the mode of the prior in place of its median. That is, instead of using the result of Theorem 2, $\tilde{F}_{X|\theta}(\mathbf{x}|\theta) = F_{X|\nu,\theta}(\mathbf{x}|\operatorname{Med}(\mathcal{V}), \theta)$, we may use

$$\tilde{F}^{Mod}_{X|\theta}(\mathbf{x}|\theta) = F_{X|\nu,\theta}(\mathbf{x}|\operatorname{Mod}(\mathcal{V}), \theta).$$

This method was used for eliminating the nuisance parameter $\nu$. In this case, Theorem 3, i.e. the separability property of the pseudo marginal distribution, also holds for $\tilde{F}^{Mod}_{X|\theta}(\mathbf{x}|\theta)$. Note that the mode of the random variable $T$, defined in (7), is not equal to $\tilde{F}^{Mod}_{X|\theta}(\mathbf{x}|\theta)$ and may not be a cdf, similar to the above illustration (see Figure 5). However, it may be a cdf, as in the following example pointed out by the referee. In Example A, let $\mathcal{V} - 1$ have a binomial distribution $\mathcal{B}(2, \frac{3}{4})$, i.e. $\mathcal{V}$ is a discrete random variable with support $\{1, 2, 3\}$. Then

$$\mathrm{E}(T) = 1 - \frac{e^{-x} + 6e^{-2x} + 9e^{-3x}}{16} \quad\text{and}\quad \operatorname{Mod}(T) = 1 - e^{-3x}$$

are cdfs; see Figure 6.
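The referee's discrete example is easy to verify by direct enumeration (a short sketch; the probabilities follow from $\mathcal{V} - 1 \sim \mathcal{B}(2, \frac{3}{4})$):

```python
import math

# P(V=1) = (1/4)**2 = 1/16, P(V=2) = 2*(3/4)*(1/4) = 6/16, P(V=3) = (3/4)**2 = 9/16
prior = {1: 1 / 16, 2: 6 / 16, 3: 9 / 16}

def mean_T(x):
    """E(T) = sum_v P(V=v) * (1 - exp(-v*x)), enumerated over the support of V."""
    return sum(p * (1.0 - math.exp(-v * x)) for v, p in prior.items())

def mean_T_closed(x):
    """Closed form quoted in the text: 1 - (e^{-x} + 6e^{-2x} + 9e^{-3x})/16."""
    return 1.0 - (math.exp(-x) + 6 * math.exp(-2 * x) + 9 * math.exp(-3 * x)) / 16

def mode_T(x):
    """Mod(T) = 1 - e^{-3x}: the image of the most probable prior value V = 3."""
    return 1.0 - math.exp(-3 * x)

for x in (0.1, 0.8, 2.5):
    print(mean_T(x), mean_T_closed(x), mode_T(x))
```

Both `mean_T` and `mode_T` increase from 0 toward 1 in $x$, which is why both are cdfs here.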

Another extension is to use other quantiles of $T$, for example the first and third quartiles, which we denote by $\tilde{F}^{Q_1}_{X|\theta}(\mathbf{x})$ and $\tilde{F}^{Q_3}_{X|\theta}(\mathbf{x})$ respectively. In Example A we obtain $\tilde{F}^{Q_1}_{X}(x) = 1 - e^{x \ln 0.75}$ and $\tilde{F}^{Q_3}_{X}(x) = 1 - e^{x \ln 0.25}$. In Figure 7 we plot them.
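These quartile tools admit the same shortcut as the median: since $T = 1 - e^{-\mathcal{V}x}$ is increasing in $\mathcal{V}$, any quantile of $T$ is the image of the corresponding prior quantile, $F_{\mathcal{V}}^{-1}(q) = -\ln(1-q)$ for the $\operatorname{Exp}(1)$ prior. A small sketch (the function name is ours):

```python
import math

def quantile_tool(x, q):
    """q-th quantile of T = 1 - exp(-V*x) with V ~ Exp(1).
    Since T is increasing in V, it equals 1 - exp(-x * F_V^{-1}(q))."""
    v_q = -math.log(1.0 - q)          # Exp(1) quantile function
    return 1.0 - math.exp(-v_q * x)

x = 2.0
q1 = quantile_tool(x, 0.25)   # = 1 - e^{x ln 0.75} = 1 - 0.75**x
med = quantile_tool(x, 0.50)  # = 1 - 0.5**x, the median tool
q3 = quantile_tool(x, 0.75)   # = 1 - e^{x ln 0.25} = 1 - 0.25**x
print(q1, med, q3)
```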

## 5 Conclusion

## Acknowledgments

## References

- Berger, J. O. *Statistical Decision Theory: Foundations, Concepts, and Methods*; Springer: New York, 1980.
- Bernardo, J. M.; Smith, A. F. M. *Bayesian Theory*; Wiley: Chichester, UK, 1994.
- Hernández Bastida, A.; Martel Escobar, M. C.; Vázquez Polo, F. J. On maximum entropy priors and a most likely likelihood in auditing. *Qüestiió* **1998**, 22(2), 231–242.
- Jaynes, E. T. Information theory and statistical mechanics I, II. *Physical Review* **1957**, 106, 620–630 and 108, 171–190.
- Jaynes, E. T. Prior probabilities. *IEEE Transactions on Systems Science and Cybernetics* **1968**, SSC-4(3), 227–241.
- Lehmann, E. L.; Casella, G. *Theory of Point Estimation*, 2nd ed.; Springer: New York, 1998.
- Mohammad-Djafari, A.; Mohammadpour, A. On the estimation of a parameter with incomplete knowledge on a nuisance parameter. *AIP Conference Proceedings* **2004**, 735, 533–540.
- Mohammadpour, A.; Mohammad-Djafari, A. An alternative criterion to likelihood for parameter estimation accounting for prior information on nuisance parameter. In *Soft Methodology and Random Information Systems*; Springer: Berlin, 2004; pp. 575–580.
- Mohammadpour, A.; Mohammad-Djafari, A. An alternative inference tool to total probability formula and its applications. *AIP Conference Proceedings* **2004**, 735, 227–236.
- Robert, C. P.; Casella, G. *Monte Carlo Statistical Methods*, 2nd ed.; Springer: New York, 2004.
- Rohatgi, V. K. *An Introduction to Probability Theory and Mathematical Statistics*; Wiley: New York, 1976.
- Zacks, S. *Parametric Statistical Inference*; Pergamon: Oxford, 1981.

**Figure 1.** Top: pdf of the random variable $T = T(\mathcal{V}; x) = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$; Middle: cdf of the random variable $T$; Bottom: mean and median of the random variable $T$ in Example A.

**Figure 2.** Left: cdf of the random variable $\mathcal{V}$ in Example B and its corresponding pdf. Right: cdf of the random variable $T = T(\mathcal{V}; x) = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$ in Example B.

**Figure 3.** Left: the graphs of $L(\nu) = F_{X|\mathcal{V}}(x|\nu)$ for different $x$ in Example C. Right: the mean and median of the random variable $T$ in Example C.

**Figure 4.** The empirical MSEs of $\tilde{\theta}$, $T_{MaxEnt}$, $\hat{\theta}$, and $T$ wrt $\theta$ (left) and wrt $\nu_0$ (right, for $\theta = 2$) for different sample sizes $n$.

**Figure 5.** Mean, median and mode of the random variable $T = T(\mathcal{V}; x) = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$ wrt $x$.

**Figure 6.** Mean and mode of the random variable $T = T(\mathcal{V}; x) = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$ wrt $x$.

**Figure 7.** $Q_1$, median and $Q_3$ of the random variable $T = T(\mathcal{V}; x) = F_{X|\mathcal{V}}(x|\mathcal{V}) = 1 - e^{-\mathcal{V}x}$ wrt $x$.

**Table 1.** The four estimators of $\theta$ and the corresponding assumptions, for $n = 1$.

| Assumptions | pdf of $X\|\theta$ based on prior information; MLE of $\theta$ | Simulated data pdf; MSE$(\theta) = \mathrm{E}(\text{MLE} - \theta)^2$ |
|---|---|---|
| Known parameter $\nu = \nu_0$ | $\mathcal{N}(x; \nu_0, \theta)$; $T = (X - \nu_0)^2$ | $\mathcal{N}(x; 0, \theta)$; $2\theta^2$ |
| Known prior $f_{\mathcal{V}}(\nu) = \mathcal{N}(\nu; \nu_0, \theta_0)$ | $\mathcal{N}(x; \nu_0, \theta + \theta_0)$; $\hat{\theta} = \max\{(X - \nu_0)^2 - \theta_0, 0\}$ | $\mathcal{N}(x; 0, \theta + 1)$; $\mathrm{E}(\hat{\theta} - \theta)^2$ |
| Known moments $\mathrm{E}(\mathcal{V}) = \nu_0$, $\mathrm{V}(\mathcal{V}) = \frac{\theta_0}{2}$ | $\mathcal{N}(x; \nu_0, \theta + \frac{\theta_0}{2})$; $T_{MaxEnt} = \max\{(X - \nu_0)^2 - \frac{\theta_0}{2}, 0\}$ | $\mathcal{N}(x; 0, \theta + 1)$; $\mathrm{E}(T_{MaxEnt} - \theta)^2$ |
| Known unique median $\operatorname{Median}(\mathcal{V}) = \nu_0$ | $\mathcal{N}(x; \nu_0, \theta)$; $\tilde{\theta} = (X - \nu_0)^2$ | $\mathcal{N}(x; 0, \theta + 1)$; $2(\theta + 1)^2 + 1$ |

© 2006 by MDPI (http://www.mdpi.org). Reproduction for noncommercial purposes permitted.

## Share and Cite

**MDPI and ACS Style**

Mohammadpour, A.; Mohammad-Djafari, A.
Inference with the Median of a Prior. *Entropy* **2006**, *8*, 67-87.
https://doi.org/10.3390/e8020067
