# On a Robust MaxEnt Process Regression Model with Sample-Selection


## Abstract


## 1. Introduction

## 2. Robust Sample-Selection GPR Model

#### 2.1. MaxEnt Process Regression Model

**Lemma 1.**

**Corollary 1.**

#### 2.2. Proposed Model

**Lemma 2.**

#### 2.3. The Sample-Selection Bias

**Lemma 3.**

**Corollary 2.**

**Corollary 3.**

## 3. Bayesian Hierarchical Methodology

#### 3.1. Hierarchical Representation of the RSGPR Model

**Theorem 1.**

#### 3.2. Full Conditional Posteriors

- (1) The full conditional posterior distribution of $\boldsymbol{\eta}_{n_1}$ is multivariate normal:
  $$\left[\boldsymbol{\eta}_{n_1} \,|\, \Theta_{\setminus \boldsymbol{\eta}_{n_1}}, \mathcal{D}_n\right] \sim \mathcal{N}_{n_1}\left(\boldsymbol{\theta}_{\boldsymbol{\eta}_{n_1}}, \Sigma_{\boldsymbol{\eta}_{n_1}}\right).$$
- (2) The full conditional posterior distribution of $\tau^2$ is an inverse Gamma distribution:
  $$\left[\tau^2 \,|\, \Theta_{\setminus \tau^2}, \mathcal{D}_n\right] \sim \mathcal{IG}\left(c + \frac{n_1 + 1}{2},\; d + \frac{1}{2}\sum_{i=1}^{n_1} \frac{(y_i - \eta(\mathbf{x}_i) - \zeta z_{C_i})^2}{\delta(\omega_i)} + \frac{(\zeta - \theta_0)^2}{2\sigma_0}\right).$$
- (3) The full conditional posterior distribution of $\zeta$ is a normal distribution:
  $$\left[\zeta \,|\, \Theta_{\setminus \zeta}, \mathcal{D}_n\right] \sim \mathcal{N}(\theta_\zeta, \sigma_\zeta^2),$$
  where
  $$\theta_\zeta = \frac{\theta_0/\sigma_0 + \sum_{i=1}^{n_1} (y_i - \eta(\mathbf{x}_i)) z_{C_i}/\delta(\omega_i)}{1/\sigma_0 + \sum_{i=1}^{n_1} z_{C_i}^2/\delta(\omega_i)} \quad\text{and}\quad \sigma_\zeta^2 = \left(\frac{1}{\sigma_0 \tau^2} + \sum_{i=1}^{n_1} \frac{z_{C_i}^2}{\delta(\omega_i)\tau^2}\right)^{-1}.$$
- (4) The full conditional posterior distributions of the $z_i$'s are independent, and they are given by
  $$\left[z_i \,|\, \Theta_{\setminus z_i}, \mathcal{D}_n\right] \stackrel{ind}{\sim} \begin{cases} \mathcal{TN}_{(-\infty, 0)}\left(\mathbf{v}_i^\top \boldsymbol{\gamma}, \delta(\omega_i)\right) & \text{if } s_i = 0, \\ \mathcal{TN}_{(0, \infty)}\left(\theta_{z_i}, \sigma_{z_i}^2\right) & \text{if } s_i = 1, \end{cases}$$
  where
  $$\theta_{z_i} = \mathbf{v}_i^\top \boldsymbol{\gamma} + \frac{\zeta (y_i - \eta(\mathbf{x}_i))}{\zeta^2 + \tau^2} \quad\text{and}\quad \sigma_{z_i}^2 = \frac{\delta(\omega_i)\tau^2}{\zeta^2 + \tau^2}.$$
- (5) The full conditional posterior distribution of $\boldsymbol{\gamma}$, $\left[\boldsymbol{\gamma} \,|\, \Theta_{\setminus \boldsymbol{\gamma}}, \mathcal{D}_n\right]$, is multivariate normal.
- (6) The full conditional posterior densities of the $\omega_i$'s are independent, and they are given by
  $$\begin{aligned} p(\omega_i \,|\, \Theta_{\setminus \omega_i}, \mathcal{D}_n) \propto{}& \varphi\left(y_i; \eta(\mathbf{x}_i) + \zeta z_{C_i}, \delta(\omega_i)\tau^2\right) \varphi\left(z_i; \mathbf{v}_i^\top \boldsymbol{\gamma}, \delta(\omega_i)\right) g(\omega_i)\, \mathrm{I}(i \le n_1) \\ &+ \varphi\left(z_i; \mathbf{v}_i^\top \boldsymbol{\gamma}, \delta(\omega_i)\right) g(\omega_i)\, \mathrm{I}(i > n_1). \end{aligned}$$
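Because the conditionals in items (2) and (3) are standard distributions, the corresponding Gibbs updates are direct draws. The sketch below is a minimal illustration, not the authors' implementation; all variable names and the toy inputs are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_zeta(y, eta, z_c, delta_w, tau2, theta0, sigma0):
    """Draw zeta from its normal full conditional (Section 3.2, item (3))."""
    # posterior mean theta_zeta (tau^2 cancels between numerator and denominator)
    theta_zeta = (theta0 / sigma0 + np.sum((y - eta) * z_c / delta_w)) / \
                 (1.0 / sigma0 + np.sum(z_c**2 / delta_w))
    # posterior variance sigma_zeta^2
    sigma2_zeta = 1.0 / (1.0 / (sigma0 * tau2) + np.sum(z_c**2 / (delta_w * tau2)))
    return rng.normal(theta_zeta, np.sqrt(sigma2_zeta))

def draw_tau2(y, eta, z_c, delta_w, zeta, theta0, sigma0, c, d):
    """Draw tau^2 from its inverse-Gamma full conditional (Section 3.2, item (2))."""
    n1 = len(y)
    shape = c + (n1 + 1) / 2.0
    rate = (d + 0.5 * np.sum((y - eta - zeta * z_c)**2 / delta_w)
              + (zeta - theta0)**2 / (2.0 * sigma0))
    # inverse-Gamma draw via the reciprocal of a Gamma draw
    return 1.0 / rng.gamma(shape, 1.0 / rate)
```

Within a full Gibbs cycle, these draws would alternate with updates of $\boldsymbol{\eta}_{n_1}$, the $z_i$'s, $\boldsymbol{\gamma}$, and the $\omega_i$'s.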

#### 3.3. Markov Chain Monte Carlo Method

- (1)
- (2) Gibbs samples of $\rho$ and $\sigma^2$ can be obtained by using those of $\zeta = \rho\sigma$ and $\tau^2 = \sigma^2(1 - \rho^2)$.
- (3) If $\omega_i$ degenerates at $\delta(\omega_i) = 1$, the RSGPR model reduces to the SGPR${}_{N}$ model. In this case, the MCMC procedure excludes the Gibbs sampling of the $\omega_i$'s by using the posterior distribution (19).
- (4)
- (5) When $\omega_i \stackrel{iid}{\sim} \mathcal{G}(\nu/2, \nu/2)$ and $\delta(\omega_i) = 1/\omega_i$, the RSGPR model becomes the SGPR${}_{t_\nu}$ model. For generating the $\omega_i$'s, we may use the following posteriors:
  $$\omega_i \stackrel{ind}{\sim} \begin{cases} \mathcal{G}\left(\dfrac{\nu + 2}{2},\; \dfrac{\nu + z_{C_i}^2}{2} + \dfrac{(y_i - \eta(\mathbf{x}_i) - \zeta z_{C_i})^2}{2\xi^2}\right) & \text{for } i \le n_1, \\[2ex] \mathcal{G}\left(\dfrac{\nu + 1}{2},\; \dfrac{\nu + z_{C_i}^2}{2}\right) & \text{for } i > n_1. \end{cases}$$
- (6) When the squared exponential covariance function $K(\mathbf{x})$ in Equation (4) is chosen with unknown hyperparameters $u_0$ and $w_0$, we need to elicit priors on $u_0$ and $w_0$ for the full Bayes methods based on MCMC. The priors considered by [28] can be used for this assessment: a conjugate $u_0 \sim \mathcal{IG}(a, b)$ and $w_0 \sim \mathcal{HC}(c, d)$. Here $\mathcal{HC}(c, d)$ denotes the half-Cauchy distribution with p.d.f. $HC(w_0; c, d)$, location parameter $c$, and scale parameter $d$. See [28] for eliciting the prior information on $w_0$ through $w_0 \sim \mathcal{HC}(c, d)$.
- (7) The full conditional posterior distributions of $u_0$ and $w_0$ are
  $$[u_0 \,|\, \Theta, w_0, \mathcal{D}] \sim \mathcal{IG}(a^*, b^*) \quad\text{and}\quad p(w_0 \,|\, \Theta, u_0, \mathcal{D}) \propto \varphi_{n_1}\left(\boldsymbol{\eta}_{n_1}; m_1(\mathbf{x}), K_{11}(\mathbf{x})\right) HC(w_0; c, d).$$
- (8) After obtaining the Gibbs samples of $\Theta$, we can use them for Monte Carlo estimation of the regression function $\boldsymbol{\eta}_{n_2}$ and the missing observations $\mathbf{y}_{n_2}$. They can also be used for predicting regression functions and $y_i$'s evaluated at new predictors (see, e.g., [26]).
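The transformation in note (2) above inverts in closed form: since $\zeta^2 + \tau^2 = \rho^2\sigma^2 + \sigma^2(1 - \rho^2) = \sigma^2$, we have $\sigma = \sqrt{\zeta^2 + \tau^2}$ and $\rho = \zeta/\sigma$. A small helper (ours, not from the paper) applies this map to Gibbs output:

```python
import numpy as np

def recover_rho_sigma(zeta, tau2):
    """Invert zeta = rho*sigma and tau^2 = sigma^2 * (1 - rho^2),
    which give sigma^2 = zeta^2 + tau^2 and rho = zeta / sigma."""
    sigma = np.sqrt(zeta**2 + tau2)
    rho = zeta / sigma
    return rho, sigma

# sanity check at the true values used in Section 4: rho = 0.5, sigma = 3
zeta = 0.5 * 3.0                 # zeta = rho * sigma = 1.5
tau2 = 9.0 * (1.0 - 0.25)        # tau^2 = sigma^2 * (1 - rho^2) = 6.75
rho, sigma = recover_rho_sigma(np.array([zeta]), np.array([tau2]))
print(rho[0], sigma[0])          # 0.5 3.0
```

Applied element-wise to the Gibbs chains of $\zeta$ and $\tau^2$, this yields posterior samples of $\rho$ and $\sigma^2$ directly.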

#### 3.4. Prediction with Bias Corrected Regression Function

## 4. Numerical Illustrations

#### 4.1. Simulation Scheme

#### 4.2. Performance of the RSGPR Model

#### 4.2.1. Sample-Selection Data Generated from Model 1

#### 4.2.2. Data Generated from Model 2

#### 4.2.3. Data Generated from Model 3 with Normal Mixture Errors

## 5. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Appendix A

#### Appendix A.1. Proof of Lemma 1

**Proof.**

#### Appendix A.2. Proof of Lemma 2

**Proof.**

#### Appendix A.3. Proof of Lemma 3

**Proof.**

#### Appendix A.4. Proof of Corollary 2

**Proof.**

#### Appendix A.5. Proof of Corollary 3

**Proof.**

#### Appendix A.6. Proof of Theorem 1

**Proof.**

#### Appendix A.7. Derivation of Conditional Posterior Distributions

- (1) Full conditional distribution of $\boldsymbol{\eta}_{n_1}$: Equation (13) states that the full conditional density of $\boldsymbol{\eta}_{n_1}$ is
  $$\begin{aligned} p\left(\boldsymbol{\eta}_{n_1} \,|\, \Theta_{\setminus \boldsymbol{\eta}_{n_1}}, \mathcal{D}_n\right) &\propto \varphi_{n_1}\left(\mathbf{y}_{n_1}; \boldsymbol{\eta}_{n_1} + \zeta\mathbf{z}_C, \tau^2 D_1(\delta(\boldsymbol{\omega}))\right) \varphi_{n_1}\left(\boldsymbol{\eta}_{n_1}; m_1(\mathbf{x}), K_{11}(\mathbf{x})\right) \\ &\propto \exp\left\{-\frac{1}{2}\boldsymbol{\eta}_{n_1}^\top \Sigma_{\boldsymbol{\eta}_{n_1}}^{-1} \boldsymbol{\eta}_{n_1} + \boldsymbol{\theta}_{\boldsymbol{\eta}_{n_1}}^\top \Sigma_{\boldsymbol{\eta}_{n_1}}^{-1} \boldsymbol{\eta}_{n_1}\right\}, \end{aligned}$$
  which is the kernel of the $\mathcal{N}_{n_1}\left(\boldsymbol{\theta}_{\boldsymbol{\eta}_{n_1}}, \Sigma_{\boldsymbol{\eta}_{n_1}}\right)$ distribution.
- (2) Full conditional distribution of $\tau^2$: We see from Equation (13) that the full conditional posterior density is
  $$\begin{aligned} p(\tau^2 \mid \Theta_{\setminus \tau^2}, \mathcal{D}_n) &\propto \prod_{i=1}^{n_1} \varphi\left(y_i; \eta(\mathbf{x}_i) + \zeta z_{C_i}, \delta(\omega_i)\tau^2\right) IG(\tau^2; c, d)\, \varphi(\zeta; \theta_0, \sigma_0\tau^2) \\ &\propto \tau^{-(n_1 + 2c + 3)} \exp\left\{-\left(d + \frac{1}{2}\sum_{i=1}^{n_1} \frac{(y_i - \eta(\mathbf{x}_i) - \zeta z_{C_i})^2}{\delta(\omega_i)} + \frac{(\zeta - \theta_0)^2}{2\sigma_0}\right)\Big/\tau^2\right\}. \end{aligned}$$
  This is the kernel of the $\mathcal{IG}\left(c + \frac{n_1 + 1}{2},\; d + \frac{1}{2}\sum_{i=1}^{n_1} \frac{(y_i - \eta(\mathbf{x}_i) - \zeta z_{C_i})^2}{\delta(\omega_i)} + \frac{(\zeta - \theta_0)^2}{2\sigma_0}\right)$ distribution.
- (3) Full conditional distribution of $\zeta$: Equation (13) gives the full conditional density of $\zeta$:
  $$\begin{aligned} p(\zeta \mid \Theta_{\setminus \zeta}, \mathcal{D}_n) &\propto \prod_{i=1}^{n_1} \varphi\left(y_i; \eta(\mathbf{x}_i) + \zeta z_{C_i}, \delta(\omega_i)\tau^2\right) \varphi(\zeta; \theta_0, \sigma_0\tau^2) \\ &\propto \exp\left\{-\frac{\zeta^2 - 2\theta_\zeta \zeta}{2\sigma_\zeta^2}\right\} \propto \exp\left\{-\frac{(\zeta - \theta_\zeta)^2}{2\sigma_\zeta^2}\right\}, \end{aligned}$$
  which is the kernel of the $\mathcal{N}(\theta_\zeta, \sigma_\zeta^2)$ distribution.
- (4) Full conditional distributions of the $z_i$'s: Equation (13) indicates that the full conditional posterior densities of the $z_i$'s are mutually independent, and that for each $i$,
  $$p(z_i \mid \Theta_{\setminus z_i}, \mathcal{D}_n) \propto \left[\varphi\left(y_i; \eta(\mathbf{x}_i) + \zeta(z_i - \mathbf{v}_i^\top \boldsymbol{\gamma}), \delta(\omega_i)\tau^2\right) \varphi\left(z_i; \mathbf{v}_i^\top \boldsymbol{\gamma}, \delta(\omega_i)\right)\right]^{s_i} \left[\varphi\left(z_i; \mathbf{v}_i^\top \boldsymbol{\gamma}, \delta(\omega_i)\right)\right]^{1 - s_i}.$$
  Since the support of $z_i$ is $\{z_i; z_i \ge 0\}$ for $s_i = 1$, while it is $\{z_i; z_i < 0\}$ for $s_i = 0$, we have the truncated normal distributions.
- (5) Full conditional distribution of $\boldsymbol{\gamma}$: The full conditional posterior density of $\boldsymbol{\gamma}$ is given by
  $$p(\boldsymbol{\gamma} \mid \Theta_{\setminus \boldsymbol{\gamma}}, \mathcal{D}_n) \propto \prod_{i=1}^{n_1} \varphi\left(y_i; \eta(\mathbf{x}_i) + \zeta(z_i - \mathbf{v}_i^\top \boldsymbol{\gamma}), \delta(\omega_i)\tau^2\right) \prod_{i=1}^{n} \varphi\left(z_i; \mathbf{v}_i^\top \boldsymbol{\gamma}, \delta(\omega_i)\right),$$
  which is the kernel of a multivariate normal distribution in $\boldsymbol{\gamma}$.
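The completing-the-square step in item (4) can be verified numerically: on a fine grid, the moments of the unnormalised product $\varphi(y_i; \eta(\mathbf{x}_i) + \zeta(z_i - \mathbf{v}_i^\top\boldsymbol{\gamma}), \delta(\omega_i)\tau^2)\,\varphi(z_i; \mathbf{v}_i^\top\boldsymbol{\gamma}, \delta(\omega_i))$ should match $\theta_{z_i}$ and $\sigma_{z_i}^2$ from Section 3.2. The toy parameter values below are illustrative assumptions of ours, not from the paper:

```python
import numpy as np

# illustrative toy values: y_i, eta(x_i), v_i^T gamma, zeta, tau^2, delta(omega_i)
y, eta, vg = 1.2, 0.4, 0.3
zeta, tau2, delta = 0.8, 1.5, 1.0

# closed-form conditional moments from Section 3.2, item (4)
theta_z = vg + zeta * (y - eta) / (zeta**2 + tau2)
sigma2_z = delta * tau2 / (zeta**2 + tau2)

# grid moments of the unnormalised product density (before truncation)
z = np.linspace(-10.0, 10.0, 200001)
logp = (-(y - eta - zeta * (z - vg))**2 / (2 * delta * tau2)
        - (z - vg)**2 / (2 * delta))
w = np.exp(logp - logp.max())
w /= w.sum()
mean_grid = (w * z).sum()
var_grid = (w * (z - mean_grid)**2).sum()

assert abs(mean_grid - theta_z) < 1e-6
assert abs(var_grid - sigma2_z) < 1e-5
```

Truncating this normal kernel to $z_i \ge 0$ (or $z_i < 0$) then gives the stated truncated normal conditionals.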

## References

- Cox, G.; Kachergis, G.; Shiffrin, R. Gaussian process regression for trajectory analysis. In Proceedings of the Annual Meeting of the Cognitive Science Society, Sapporo, Japan, 1–4 August 2012; Volume 34.
- Rasmussen, C.E.; Nickisch, H. Gaussian processes for machine learning (GPML) toolbox. J. Mach. Learn. Res. **2010**, 11, 3011–3015.
- Liutkus, A.; Badeau, R.; Richard, G. Gaussian processes for underdetermined source separation. IEEE Trans. Signal Process. **2011**, 59, 3155–3167.
- Caywood, M.S.; Roberts, D.M.; Colombe, J.B.; Greenwald, H.S.; Weiland, M.Z. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks. Front. Hum. Neurosci. **2017**, 10, 1–19.
- Contreras-Reyes, J.E.; Arellano-Valle, R.B.; Canales, T.M. Comparing growth curves with asymmetric heavy-tailed errors: Application to the southern blue whiting (Micromesistius australis). Fish. Res. **2014**, 159, 88–94.
- Heckman, J.J. Sample selection bias as a specification error. Econometrica **1979**, 47, 153–161.
- Marchenko, Y.V.; Genton, M.G. A Heckman selection-t model. J. Am. Stat. Assoc. **2012**, 107, 304–317.
- Ding, P. Bayesian robust inference of sample selection using selection t-models. J. Multivar. Anal. **2014**, 124, 451–464.
- Hasselt, V.M. Bayesian inference in a sample selection model. J. Econom. **2011**, 165, 221–232.
- Arellano-Valle, R.B.; Contreras-Reyes, J.E.; Stehlík, M. Generalized skew-normal negentropy and its application to fish condition factor time series. Entropy **2017**, 19, 528.
- Kim, H.-J.; Kim, H.-M. Elliptical regression models for multivariate sample-selection bias correction. J. Korean Stat. Soc. **2016**, 45, 422–438.
- Kim, H.-J. Bayesian hierarchical robust factor analysis models for partially observed sample-selection data. J. Multivar. Anal. **2018**, 164, 65–82.
- Kim, H.-J. A class of weighted multivariate normal distributions and its properties. J. Multivar. Anal. **2008**, 99, 1758–1771.
- Lenk, P.J. Bayesian inference for semiparametric regression using a Fourier representation. J. R. Stat. Soc. Ser. B **1999**, 61, 863–879.
- Fahrmeir, L.; Kneib, T. Bayesian Smoothing and Regression for Longitudinal, Spatial and Event History Data; Oxford Statistical Science Series; Oxford University Press: Oxford, UK, 2011; Volume 36.
- Chakraborty, S.; Ghosh, M.; Mallick, B.K. Bayesian nonlinear regression for large p and small n problems. J. Multivar. Anal. **2012**, 108, 28–40.
- Leonard, T.; Hsu, J.S.J. Bayesian Methods: An Analysis for Statisticians and Interdisciplinary Researchers; Cambridge University Press: New York, NY, USA, 1999.
- Kim, H.-J. A two-stage maximum entropy prior of location parameter with a stochastic multivariate interval constraint and its properties. Entropy **2016**, 18, 188.
- Shi, J.; Choi, T. Gaussian Process Regression Analysis for Functional Data; Monographs on Statistics and Applied Probability; Chapman & Hall: London, UK, 2011.
- Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006.
- Andrews, D.F.; Mallows, C.L. Scale mixtures of normal distributions. J. R. Stat. Soc. Ser. B **1974**, 36, 99–102.
- Lachos, V.H.; Labra, F.V.; Bolfarine, H.; Ghosh, P. Multivariate measurement error models based on scale mixtures of the skew-normal distribution. Statistics **2010**, 44, 541–556.
- Arellano-Valle, R.B.; Branco, M.D.; Genton, M.G. A unified view on skewed distributions arising from selection. Can. J. Stat. **2006**, 34, 581–601.
- Kim, H.J.; Choi, T.; Lee, S. A hierarchical Bayesian regression model for the uncertain functional constraint using screened scale mixture of Gaussian distributions. Statistics **2016**, 50, 350–376.
- Rubin, D.B. Inference and missing data. Biometrika **1976**, 63, 581–592.
- Ntzoufras, I. Bayesian Modeling Using WinBUGS; Wiley: New York, NY, USA, 2009.
- Chib, S.; Greenberg, E. Understanding the Metropolis-Hastings algorithm. Am. Stat. **1995**, 49, 327–335.
- Gelman, A. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Anal. **2006**, 1, 515–534.
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017; ISBN 3-900051-07-0.
- Spiegelhalter, D.; Best, N.; Carlin, B.; van der Linde, A. Bayesian measures of model complexity and fit (with discussion). J. R. Stat. Soc. Ser. B **2002**, 64, 583–639.
- Johnson, N.L.; Kotz, S.; Balakrishnan, N. Distributions in Statistics: Continuous Univariate Distributions, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1994; Volume 1.

**Figure 1.** Graphs of the sample-selection bias and the difference in marginal effect of the k-th predictor.

**Figure 2.** Graphs of estimated regression functions (**left panel**) and predicted regression functions (**right panel**): (**i**) black lines are used for the true regression function; (**ii**) red dashed lines for the robust sample-selection Gaussian process regression (RSGPR) models; (**iii**) blue dashed lines for the Gaussian process regression (GPR) models.

**Figure 3.** Graphs of regression functions: estimated regression functions (**left panel**) and predicted regression functions (**right panel**).

**SGPR${}_{N}$ model (left block) vs. GPR model (right block):**

| True Value | Mean | s.d. | RMSE | MAB | MC Error | Mean | s.d. | RMSE | MAB | MC Error |
|---|---|---|---|---|---|---|---|---|---|---|
| $\sigma = 3$ | 2.831 | 0.308 | 0.351 | 0.426 | 0.018 | 2.094 | 0.104 | 0.912 | 0.800 | 0.002 |
| $\rho = 0.5$ | 0.380 | 0.376 | 0.563 | 0.287 | 0.064 | NA | NA | NA | NA | NA |

**SGPR${}_{t_{10}}$ model (left block) vs. GPR${}_{t_{10}}$ model (right block):**

| True Value | Mean | s.d. | RMSE | MAB | MC Error | Mean | s.d. | RMSE | MAB | MC Error |
|---|---|---|---|---|---|---|---|---|---|---|
| $\sigma = 3$ | 2.880 | 0.974 | 0.509 | 0.515 | 0.050 | 2.130 | 0.109 | 0.876 | 0.800 | 0.003 |
| $\rho = 0.5$ | 0.435 | 0.275 | 0.627 | 0.422 | 0.032 | NA | NA | NA | NA | NA |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kim, H.-J.; Bae, M.; Jin, D.
On a Robust MaxEnt Process Regression Model with Sample-Selection. *Entropy* **2018**, *20*, 262.
https://doi.org/10.3390/e20040262
