# On the Folded Normal Distribution


## Abstract


## 1. Introduction

## 2. The Folded Normal

**Figure 1.** The black line is the density of the $N(\mu, \sigma^2)$ and the red line that of the $FN(\mu, \sigma^2)$. The parameters in the left figure (**a**) are $\mu = 2$ and $\sigma^2 = 3$ and in the right figure (**b**) $\mu = 2$ and $\sigma^2 = 4$.
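The density plotted in Figure 1 is straightforward to evaluate numerically. The sketch below is illustrative only (the paper's computations were carried out in R; these function names are ours):

```python
import math

def fn_pdf(x, mu, sigma2):
    """Folded normal FN(mu, sigma^2) density on x >= 0:
    f(x) = [exp(-(x-mu)^2/(2*s2)) + exp(-(x+mu)^2/(2*s2))] / sqrt(2*pi*s2)."""
    if x < 0:
        return 0.0
    c = 1.0 / math.sqrt(2.0 * math.pi * sigma2)
    return c * (math.exp(-(x - mu) ** 2 / (2.0 * sigma2))
                + math.exp(-(x + mu) ** 2 / (2.0 * sigma2)))

def trapezoid(f, a, b, n=20000):
    # simple trapezoidal rule, enough for a sanity check
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

mu, sigma2 = 2.0, 3.0  # the parameters of panel (a) in Figure 1
total = trapezoid(lambda x: fn_pdf(x, mu, sigma2), 0.0, mu + 10.0 * math.sqrt(sigma2))
print(round(total, 4))  # the density integrates to ~1
```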

#### 2.1. Relations to Other Distributions
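One relation is easy to check numerically: for $\mu = 0$ the folded normal reduces to the half normal, whose density is $\frac{2}{\sigma\sqrt{2\pi}}e^{-x^2/(2\sigma^2)}$. A quick sketch (our own code, not the paper's):

```python
import math

def fn_pdf(x, mu, sigma):
    # folded normal density; sigma is the standard deviation
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-((x - mu) / sigma) ** 2 / 2.0)
                + math.exp(-((x + mu) / sigma) ** 2 / 2.0))

def half_normal_pdf(x, sigma):
    # half normal density: 2/(sigma*sqrt(2*pi)) * exp(-x^2/(2*sigma^2))
    return 2.0 / (sigma * math.sqrt(2.0 * math.pi)) * math.exp(-x * x / (2.0 * sigma * sigma))

# At mu = 0 the two densities coincide pointwise.
sigma = 1.7
gap = max(abs(fn_pdf(x, 0.0, sigma) - half_normal_pdf(x, sigma))
          for x in (0.0, 0.5, 1.0, 2.0, 5.0))
print(gap)  # zero up to floating-point rounding
```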

#### 2.2. Mode of the Folded Normal Distribution
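The stationarity condition for the mode has no elementary closed-form solution in general, so the mode is conveniently located numerically. A crude grid search (illustrative; the names are ours):

```python
import math

def fn_pdf(x, mu, sigma):
    # folded normal density; sigma is the standard deviation
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-((x - mu) / sigma) ** 2 / 2.0)
                + math.exp(-((x + mu) / sigma) ** 2 / 2.0))

def fn_mode(mu, sigma, n=100000):
    # grid search for the maximiser of the density on [0, mu + 5*sigma]
    upper = mu + 5.0 * sigma
    return max((upper * i / n for i in range(n + 1)),
               key=lambda x: fn_pdf(x, mu, sigma))

mode = fn_mode(2.0, 1.0)
print(round(mode, 3))  # just below mu = 2: the reflected component pulls the mode left
```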

#### 2.3. Characteristic Function and Other Related Functions of the Folded Normal Distribution

- The moment generating function of Equation (2) exists and is equal to:$$M_{x}(t)=\varphi_{x}(-it)=e^{\frac{\sigma^{2}t^{2}}{2}+\mu t}\left[1-\Phi\left(-\frac{\mu}{\sigma}-\sigma t\right)\right]+e^{\frac{\sigma^{2}t^{2}}{2}-\mu t}\left[1-\Phi\left(\frac{\mu}{\sigma}-\sigma t\right)\right]$$
- The cumulant generating function is simply the logarithm of the moment generating function:$$K_{x}(t)=\log M_{x}(t)=\frac{\sigma^{2}t^{2}}{2}+\mu t+\log\left\{1-\Phi\left(-\frac{\mu}{\sigma}-\sigma t\right)+e^{-2\mu t}\left[1-\Phi\left(\frac{\mu}{\sigma}-\sigma t\right)\right]\right\}$$
- The Laplace transformation can easily be derived from the moment generating function and is equal to:$$E\left(e^{-tX}\right)=M_{x}(-t)=e^{\frac{\sigma^{2}t^{2}}{2}-\mu t}\left[1-\Phi\left(-\frac{\mu}{\sigma}+\sigma t\right)\right]+e^{\frac{\sigma^{2}t^{2}}{2}+\mu t}\left[1-\Phi\left(\frac{\mu}{\sigma}+\sigma t\right)\right]$$
- The Fourier transformation is:$$\widehat{f}(t)=\int_{-\infty}^{\infty}e^{-2\pi ixt}f(x)\,dx=E\left(e^{-2\pi iXt}\right)=\varphi_{x}(-2\pi t)=e^{-\frac{4\pi^{2}\sigma^{2}t^{2}}{2}-i2\pi\mu t}\left[1-\Phi\left(-\frac{\mu}{\sigma}-i2\pi\sigma t\right)\right]+e^{-\frac{4\pi^{2}\sigma^{2}t^{2}}{2}+i2\pi\mu t}\left[1-\Phi\left(\frac{\mu}{\sigma}-i2\pi\sigma t\right)\right]$$
- The mean residual life is given by:$$E\left(X-t \mid X>t\right)=E\left(X \mid X>t\right)-t, \quad \text{where} \quad E\left(X \mid X>t\right)=\int_{t}^{\infty}\frac{xf(x)}{P\left(X>t\right)}\,dx=\int_{t}^{\infty}\frac{xf(x)}{1-F(t)}\,dx$$The numerator integral splits into the two normal components:$$\int_{t}^{\infty}xf(x)\,dx=\int_{t}^{\infty}\frac{x}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}\,dx+\int_{t}^{\infty}\frac{x}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x+\mu)^{2}}{2\sigma^{2}}}\,dx$$$$=\frac{\sigma}{\sqrt{2\pi}}e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}}+\mu\left[1-\Phi\left(\frac{t-\mu}{\sigma}\right)\right]+\frac{\sigma}{\sqrt{2\pi}}e^{-\frac{(t+\mu)^{2}}{2\sigma^{2}}}-\mu\left[1-\Phi\left(\frac{t+\mu}{\sigma}\right)\right]$$$$=\frac{\sigma}{\sqrt{2\pi}}\left[e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}}+e^{-\frac{(t+\mu)^{2}}{2\sigma^{2}}}\right]+\mu\left[\Phi\left(\frac{t+\mu}{\sigma}\right)-\Phi\left(\frac{t-\mu}{\sigma}\right)\right]$$Since $1-F(t)=1-\frac{1}{2}\left[\mathrm{erf}\left(\frac{t-\mu}{\sqrt{2\sigma^{2}}}\right)+\mathrm{erf}\left(\frac{t+\mu}{\sqrt{2\sigma^{2}}}\right)\right]$, the mean residual life becomes:$$E\left(X-t \mid X>t\right)=\frac{\frac{\sigma}{\sqrt{2\pi}}\left[e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}}+e^{-\frac{(t+\mu)^{2}}{2\sigma^{2}}}\right]+\mu\left[\Phi\left(\frac{t+\mu}{\sigma}\right)-\Phi\left(\frac{t-\mu}{\sigma}\right)\right]}{1-\frac{1}{2}\left[\mathrm{erf}\left(\frac{t-\mu}{\sqrt{2\sigma^{2}}}\right)+\mathrm{erf}\left(\frac{t+\mu}{\sqrt{2\sigma^{2}}}\right)\right]}-t$$
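The closed-form moment generating function above can be cross-checked against direct numerical integration of $E(e^{tX})$. A Python sketch (ours, with $\Phi$ obtained from the error function):

```python
import math

def norm_cdf(x):
    # standard normal CDF via erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fn_mgf(t, mu, sigma):
    # closed-form MGF of FN(mu, sigma^2) from the bullet list above
    a = math.exp(sigma ** 2 * t ** 2 / 2.0 + mu * t) * (1.0 - norm_cdf(-mu / sigma - sigma * t))
    b = math.exp(sigma ** 2 * t ** 2 / 2.0 - mu * t) * (1.0 - norm_cdf(mu / sigma - sigma * t))
    return a + b

def fn_pdf(x, mu, sigma):
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-((x - mu) / sigma) ** 2 / 2.0)
                + math.exp(-((x + mu) / sigma) ** 2 / 2.0))

mu, sigma, t = 1.0, 1.0, 0.3
n = 200000
h = (mu + 12.0 * sigma) / n
numeric = h * sum(math.exp(t * (i + 0.5) * h) * fn_pdf((i + 0.5) * h, mu, sigma)
                  for i in range(n))  # midpoint rule for E(e^{tX})
print(fn_mgf(t, mu, sigma), numeric)  # the two values agree closely
```

A built-in sanity check is $M(0) = 1$, which the closed form satisfies identically.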

## 3. Entropy and Kullback–Leibler Divergence

#### 3.1. Entropy

**Figure 2.** Entropy values for a range of values of $\theta = \frac{\mu}{\sigma}$ with $\sigma = 1$ (**a**) and $\sigma = 5$ (**b**).
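The pattern in Figure 2 can be reproduced by quadrature. The sketch below (ours, not the paper's R code) approximates the differential entropy $-\int_0^\infty f \log f \, dx$; as $\theta = \mu/\sigma$ grows, the folding becomes negligible and the entropy approaches that of the normal, $\frac{1}{2}\log(2\pi e \sigma^2)$:

```python
import math

def fn_pdf(x, mu, sigma):
    # folded normal FN(mu, sigma^2) density on x >= 0
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-((x - mu) / sigma) ** 2 / 2.0)
                + math.exp(-((x + mu) / sigma) ** 2 / 2.0))

def fn_entropy(mu, sigma, n=200000):
    # differential entropy -int f log f via the midpoint rule on [0, mu + 12*sigma]
    upper = mu + 12.0 * sigma
    h = upper / n
    total = 0.0
    for i in range(n):
        f = fn_pdf((i + 0.5) * h, mu, sigma)
        if f > 0.0:
            total -= f * math.log(f) * h
    return total

normal_entropy = 0.5 * math.log(2.0 * math.pi * math.e)  # sigma = 1 case
print(fn_entropy(5.0, 1.0), normal_entropy)  # nearly equal for theta = 5
```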

#### 3.2. Kullback–Leibler Divergence from the Normal Distribution

**Figure 3.** Kullback–Leibler divergence from the normal for a range of values of $\theta = \frac{\mu}{\sigma}$ with $\sigma = 1$ (**a**) and $\sigma = 5$ (**b**).
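A numerical sketch of one orientation of this divergence, $D_{KL}\big(FN(\mu,\sigma^2)\,\|\,N(\mu,\sigma^2)\big)$ computed over the positive half-line, is below (our own code; the paper's exact expression may be arranged differently). As $\theta$ grows the folded normal converges to the normal and the divergence vanishes:

```python
import math

def fn_pdf(x, mu, sigma):
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return c * (math.exp(-((x - mu) / sigma) ** 2 / 2.0)
                + math.exp(-((x + mu) / sigma) ** 2 / 2.0))

def norm_pdf(x, mu, sigma):
    return math.exp(-((x - mu) / sigma) ** 2 / 2.0) / (sigma * math.sqrt(2.0 * math.pi))

def kl_fn_from_normal(mu, sigma, n=200000):
    # KL(FN || N) = int_0^inf f log(f/g) dx, approximated by the midpoint rule
    upper = mu + 12.0 * sigma
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        f = fn_pdf(x, mu, sigma)
        total += f * math.log(f / norm_pdf(x, mu, sigma)) * h
    return total

# The divergence shrinks quickly as theta = mu/sigma grows (cf. Figure 3).
print(kl_fn_from_normal(0.5, 1.0), kl_fn_from_normal(5.0, 1.0))
```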

#### 3.3. Kullback–Leibler Divergence from the Half Normal Distribution

**Figure 4.** Kullback–Leibler divergence from the half normal for a range of values of $\theta = \frac{\mu}{\sigma}$ with $\sigma = 1$ (**a**) and $\sigma = 5$ (**b**).

## 4. Parameter Estimation

#### 4.1. An Example with Simulated Data

**Figure 5.** The left graph (**a**) shows the three solutions of the log-likelihood. The right three-dimensional figure (**b**) shows the values of the log-likelihood for a range of mean and variance values.
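The log-likelihood surface of Figure 5 can be explored in a few lines. The sketch below simulates folded normal data and maximises the log-likelihood over a crude grid; this grid search is only a stand-in for a proper numerical optimiser such as Nelder–Mead, and the names are ours:

```python
import math
import random

def fn_loglik(data, mu, sigma2):
    # folded normal log-likelihood
    c = -0.5 * math.log(2.0 * math.pi * sigma2)
    total = 0.0
    for x in data:
        total += c + math.log(math.exp(-(x - mu) ** 2 / (2.0 * sigma2))
                              + math.exp(-(x + mu) ** 2 / (2.0 * sigma2)))
    return total

random.seed(7)
mu_true, sigma2_true = 2.0, 3.0
data = [abs(random.gauss(mu_true, math.sqrt(sigma2_true))) for _ in range(500)]

# Crude grid maximisation over mu in [0, 4] and sigma^2 in [0.4, 6].
mu_hat, sigma2_hat = max(((m / 10.0, s2 / 10.0)
                          for m in range(0, 41) for s2 in range(4, 62, 2)),
                         key=lambda p: fn_loglik(data, p[0], p[1]))
print(mu_hat, sigma2_hat)  # should land near the true (2, 3)
```

Restricting the grid to $\mu \ge 0$ reflects the fact that $\mu$ and $-\mu$ give the same folded normal distribution.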

#### 4.2. Simulation Studies
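A heavily simplified version of the bootstrap experiment can illustrate how coverage probabilities like those in Table 2 are estimated. The sketch below (ours) targets the simpler quantity $E(X)$ with the sample mean rather than the maximum likelihood estimate of $\mu$, so its numbers are not comparable with the tables; it only demonstrates the percentile method:

```python
import random

def percentile_ci(sample, B=200):
    # percentile bootstrap CI for the mean: resample, recompute, take quantiles
    n = len(sample)
    means = sorted(sum(random.choices(sample, k=n)) / n for _ in range(B))
    return means[4], means[195]  # roughly the 2.5% and 97.5% points of B = 200

random.seed(1)
# Target: the mean of |N(2, 1)|, estimated once from a large reference draw.
reference = [abs(random.gauss(2.0, 1.0)) for _ in range(200000)]
true_mean = sum(reference) / len(reference)

reps, covered = 200, 0
for _ in range(reps):
    sample = [abs(random.gauss(2.0, 1.0)) for _ in range(30)]
    lo, hi = percentile_ci(sample)
    if lo <= true_mean <= hi:
        covered += 1
print(covered / reps)  # an estimated coverage, compared against the nominal 0.95
```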

**Table 1.** Estimated coverage probability of the $95\%$ confidence intervals for the mean parameter, μ, using the observed information matrix.

| Sample size | θ = 0.5 | θ = 1 | θ = 1.5 | θ = 2 | θ = 2.5 | θ = 3 | θ = 3.5 | θ = 4 |
|---|---|---|---|---|---|---|---|---|
| 20 | 0.689 | 0.930 | 0.955 | 0.931 | 0.926 | 0.940 | 0.930 | 0.948 |
| 30 | 0.679 | 0.921 | 0.949 | 0.943 | 0.925 | 0.926 | 0.941 | 0.915 |
| 40 | 0.690 | 0.916 | 0.936 | 0.933 | 0.941 | 0.948 | 0.944 | 0.928 |
| 50 | 0.718 | 0.944 | 0.955 | 0.938 | 0.933 | 0.948 | 0.946 | 0.946 |
| 60 | 0.699 | 0.950 | 0.968 | 0.948 | 0.949 | 0.941 | 0.942 | 0.946 |
| 70 | 0.721 | 0.931 | 0.956 | 0.939 | 0.939 | 0.939 | 0.949 | 0.945 |
| 80 | 0.691 | 0.930 | 0.950 | 0.940 | 0.946 | 0.936 | 0.945 | 0.939 |
| 90 | 0.720 | 0.932 | 0.960 | 0.949 | 0.949 | 0.939 | 0.954 | 0.944 |
| 100 | 0.738 | 0.945 | 0.949 | 0.938 | 0.943 | 0.926 | 0.946 | 0.952 |

**Table 2.** Estimated coverage probability of the bootstrap $95\%$ confidence intervals for the mean parameter, μ, using the percentile method.

| Sample size | θ = 0.5 | θ = 1 | θ = 1.5 | θ = 2 | θ = 2.5 | θ = 3 | θ = 3.5 | θ = 4 |
|---|---|---|---|---|---|---|---|---|
| 20 | 0.890 | 0.925 | 0.939 | 0.921 | 0.918 | 0.940 | 0.929 | 0.942 |
| 30 | 0.894 | 0.931 | 0.933 | 0.943 | 0.926 | 0.922 | 0.942 | 0.910 |
| 40 | 0.910 | 0.925 | 0.927 | 0.933 | 0.941 | 0.947 | 0.946 | 0.928 |
| 50 | 0.914 | 0.943 | 0.942 | 0.934 | 0.934 | 0.945 | 0.946 | 0.943 |
| 60 | 0.904 | 0.949 | 0.953 | 0.950 | 0.941 | 0.938 | 0.943 | 0.944 |
| 70 | 0.893 | 0.934 | 0.943 | 0.936 | 0.937 | 0.938 | 0.949 | 0.939 |
| 80 | 0.918 | 0.940 | 0.939 | 0.939 | 0.944 | 0.935 | 0.946 | 0.938 |
| 90 | 0.920 | 0.934 | 0.952 | 0.948 | 0.946 | 0.939 | 0.951 | 0.947 |
| 100 | 0.918 | 0.940 | 0.936 | 0.932 | 0.946 | 0.925 | 0.945 | 0.949 |

**Table 3.** Estimated coverage probability of the $95\%$ confidence intervals for the variance parameter, ${\sigma}^{2}$, using the observed information matrix.

| Sample size | θ = 0.5 | θ = 1 | θ = 1.5 | θ = 2 | θ = 2.5 | θ = 3 | θ = 3.5 | θ = 4 |
|---|---|---|---|---|---|---|---|---|
| 20 | 0.649 | 0.765 | 0.854 | 0.853 | 0.876 | 0.870 | 0.862 | 0.885 |
| 30 | 0.697 | 0.794 | 0.870 | 0.898 | 0.892 | 0.898 | 0.894 | 0.896 |
| 40 | 0.723 | 0.849 | 0.893 | 0.914 | 0.919 | 0.913 | 0.909 | 0.902 |
| 50 | 0.751 | 0.867 | 0.916 | 0.907 | 0.911 | 0.924 | 0.899 | 0.912 |
| 60 | 0.745 | 0.865 | 0.911 | 0.913 | 0.916 | 0.906 | 0.920 | 0.933 |
| 70 | 0.769 | 0.874 | 0.928 | 0.928 | 0.912 | 0.930 | 0.926 | 0.935 |
| 80 | 0.776 | 0.883 | 0.927 | 0.919 | 0.934 | 0.936 | 0.916 | 0.924 |
| 90 | 0.795 | 0.901 | 0.931 | 0.932 | 0.925 | 0.930 | 0.940 | 0.941 |
| 100 | 0.824 | 0.904 | 0.927 | 0.933 | 0.925 | 0.936 | 0.932 | 0.942 |

**Table 4.** Estimated coverage probability of the bootstrap $95\%$ confidence intervals for the variance parameter, ${\sigma}^{2}$, using the percentile method.

| Sample size | θ = 0.5 | θ = 1 | θ = 1.5 | θ = 2 | θ = 2.5 | θ = 3 | θ = 3.5 | θ = 4 |
|---|---|---|---|---|---|---|---|---|
| 20 | 0.657 | 0.814 | 0.862 | 0.842 | 0.840 | 0.832 | 0.818 | 0.824 |
| 30 | 0.701 | 0.850 | 0.885 | 0.891 | 0.882 | 0.867 | 0.869 | 0.866 |
| 40 | 0.743 | 0.881 | 0.896 | 0.913 | 0.912 | 0.886 | 0.881 | 0.878 |
| 50 | 0.772 | 0.895 | 0.921 | 0.916 | 0.897 | 0.901 | 0.885 | 0.892 |
| 60 | 0.797 | 0.907 | 0.912 | 0.910 | 0.906 | 0.897 | 0.907 | 0.916 |
| 70 | 0.807 | 0.904 | 0.925 | 0.915 | 0.909 | 0.918 | 0.908 | 0.924 |
| 80 | 0.822 | 0.895 | 0.925 | 0.914 | 0.925 | 0.917 | 0.909 | 0.909 |
| 90 | 0.869 | 0.916 | 0.932 | 0.922 | 0.919 | 0.915 | 0.934 | 0.929 |
| 100 | 0.873 | 0.915 | 0.918 | 0.925 | 0.906 | 0.931 | 0.920 | 0.939 |

**Table 5.** Estimated correlations between the two parameters obtained from the observed information matrix.

| Sample size | θ = 0.5 | θ = 1 | θ = 1.5 | θ = 2 | θ = 2.5 | θ = 3 | θ = 3.5 | θ = 4 |
|---|---|---|---|---|---|---|---|---|
| 20 | −0.600 | −0.495 | −0.272 | −0.086 | −0.025 | −0.006 | −0.001 | 0.000 |
| 30 | −0.638 | −0.537 | −0.262 | −0.089 | −0.022 | −0.005 | −0.001 | 0.000 |
| 40 | −0.695 | −0.548 | −0.251 | −0.081 | −0.021 | −0.005 | −0.001 | 0.000 |
| 50 | −0.723 | −0.580 | −0.259 | −0.076 | −0.020 | −0.005 | −0.001 | 0.000 |
| 60 | −0.750 | −0.597 | −0.251 | −0.075 | −0.019 | −0.004 | −0.001 | 0.000 |
| 70 | −0.771 | −0.588 | −0.256 | −0.073 | −0.019 | −0.004 | −0.001 | 0.000 |
| 80 | −0.774 | −0.604 | −0.253 | −0.074 | −0.019 | −0.004 | −0.001 | 0.000 |
| 90 | −0.796 | −0.599 | −0.245 | −0.073 | −0.018 | −0.004 | −0.001 | 0.000 |
| 100 | −0.804 | −0.611 | −0.252 | −0.072 | −0.019 | −0.004 | −0.001 | 0.000 |

| θ | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 |
|---|---|---|---|---|---|---|---|---|
| Value | 0.309 | 0.159 | 0.067 | 0.023 | 0.006 | 0.001 | 0.000 | 0.000 |

## 5. Application to Body Mass Index Data

**Figure 6.** The histogram on the left shows the body mass indices of 700 New Zealand adults. The green line is the fitted folded normal and the blue line is the kernel density. The perspective plot on the right shows the log-likelihood of the body mass index data as a function of the mean and the variance.

## 6. Discussion

## Conflicts of Interest


© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Tsagris, M.; Beneki, C.; Hassani, H.
On the Folded Normal Distribution. *Mathematics* **2014**, *2*, 12-28.
https://doi.org/10.3390/math2010012
