# Robust Chi-Square in Extreme and Boundary Conditions: Comments on Jak et al. (2021)


## Abstract


## 1. Introduction

## 2. Preliminaries

#### 2.1. Derivative May Not Be Zero

#### 2.2. Unstable Computation on the Border

#### 2.3. Asymptotic Results Failure Due to Border Conditions

#### 2.4. The Log-Likelihood Correction Factor

#### 2.5. The Log-Likelihood Correction Factor Dependence on the Convergence Criteria

#### 2.6. The Log-Likelihood Correction Factor for Poorly Identified or Unidentified Models

#### 2.7. The Log-Likelihood Correction Factor Dependence on Sample Size

## 3. Mplus Estimation of Two-Level Models via the EM/EMA Algorithm

**variance** option. The default for that value is ${10}^{-4}$, which provides a sufficiently acceptable approximation to the model with zero variances.

**mconvergence** option) is removed, and the optimization is at that point guided solely by the **logcriterion** option in Mplus. This option monitors the changes in the log-likelihood value that occur with each EM iteration. When the change in the log-likelihood becomes negligible, the optimization process is deemed to have converged. By changing the **logcriterion** option, the convergence criterion can be made stricter if desired. The change in the log-likelihood value after each EM iteration is guaranteed to be positive and tends to decrease more or less monotonically in the vicinity of the maximum-likelihood estimates. When the changes drop below a certain level, we can be sure that the distance between the current log-likelihood value and the maximum log-likelihood value is small. Note also that, theoretically, the EM algorithm is guaranteed to converge to the maximum likelihood after an infinite number of iterations. Lowering the **logcriterion** option to near zero essentially guarantees that the EM algorithm takes the numerical equivalent of an infinite number of iterations and thus reaches the maximum-likelihood estimates. This optimization construction, which is unique to Mplus, has been tested extensively, including in the Jak et al. [1] article, and the ML parameter estimates obtained with this algorithm are sufficiently accurate.
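The stopping rule described above can be sketched in plain Python. The toy example below is not Mplus's actual implementation: it runs EM for a two-component univariate Gaussian mixture, and the `tol` argument plays the role of the **logcriterion** option, halting the iterations once the per-iteration increase in the log-likelihood drops below the threshold. The monotone increase of the log-likelihood across iterations is the property the criterion relies on.

```python
import math
import random

def em_mixture(data, tol=1e-8, max_iter=10000):
    """EM for a two-component univariate Gaussian mixture.
    Stops when the per-iteration log-likelihood increase falls below `tol`
    (analogous to Mplus's logcriterion convergence option)."""
    mu = [min(data), max(data)]   # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def loglik():
        ll = 0.0
        for x in data:
            p = sum(pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2))
            ll += math.log(p)
        return ll

    old = loglik()
    history = [old]
    for _ in range(max_iter):
        # E-step: posterior responsibilities for each observation
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: weighted updates of mixing weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
        new = loglik()
        history.append(new)
        if new - old < tol:   # logcriterion-style stop
            break
        old = new
    return mu, var, pi, history
```

Tightening `tol` toward zero simply lets the loop run longer, moving the final iterate arbitrarily close to the maximum-likelihood solution.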

**logcriterion** (among others) as a convergence criterion. Convergence criteria, in general, are never perfect, including the derivative criteria. In extreme situations, the Mplus default convergence criteria may not be sufficiently strict and may need to be lowered (although that is generally quite rare). Jak et al. [1] appear to put a lot of weight on the derivative criteria. As we explained earlier, in boundary situations, the derivative criteria may not be satisfied by the maximum-likelihood estimates at all. Furthermore, the derivative criteria can be inadequate in some situations, which is why Mplus uses multiple convergence criteria to verify complete convergence. Consider the following simple example. In a linear regression of a dependent variable Y on a covariate X, the derivative of the log-likelihood with respect to the regression parameter is tightly related to the scale of X. If the covariate is divided by a factor of 10 or 1000, so is the derivative of the regression parameter; i.e., the derivative criterion can easily be manipulated into a false convergence. When such an example is estimated with the Mplus EM algorithm, the **logcriterion** will force the iterations to continue beyond the point of satisfying the derivative criterion and will reach the maximum likelihood, while optimization methods, such as QN, based solely on the derivative criterion will fail. This example illustrates the fact that convergence criteria, in general, are somewhat difficult to prescribe universally. In extreme situations, convergence criteria should be investigated.
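The scale dependence of the derivative criterion is easy to verify numerically. The sketch below (plain Python with illustrative names, not Mplus code) uses the regression y = βx + ε with σ fixed at 1: dividing the covariate by 1000 divides the log-likelihood derivative by the same factor, so a point 1% away from the maximum-likelihood estimate fails a fixed derivative threshold on the original scale but passes it after rescaling.

```python
def mle_slope(x, y):
    """Closed-form ML estimate of beta in y = beta*x + e, e ~ N(0, 1)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def grad_loglik(beta, x, y):
    """Derivative of the log-likelihood with respect to beta."""
    return sum(xi * (yi - beta * xi) for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
c = 1000.0                    # rescaling factor for the covariate
xs = [xi / c for xi in x]     # covariate divided by 1000

b, bs = mle_slope(x, y), mle_slope(xs, y)   # bs = c * b: same fit, new scale

# Evaluate the derivative 1% away from the MLE in both parameterizations.
g_orig = grad_loglik(1.01 * b, x, y)
g_resc = grad_loglik(1.01 * bs, xs, y)      # shrinks by the factor c

tol = 0.01                    # a fixed derivative convergence threshold
print(abs(g_orig) < tol)      # False: not converged at 1% off the MLE
print(abs(g_resc) < tol)      # True: the same 1% error passes the criterion
```

The log-likelihood change per iteration, by contrast, is invariant to the rescaling, which is why a criterion on the log-likelihood is not fooled here.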

## 4. The Log-Likelihood Correction Factor

#### The New Log-Likelihood Correction Factor in Extreme and Boundary Solutions

## 5. Simulation Study

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Appendix A

**Figure A1.** Mplus input file corresponding to the first row in Table 1.

## References

- Jak, S.; Jorgensen, T.D.; Rosseel, Y. Evaluating cluster-level factor models with lavaan and Mplus. *Psych* **2021**, *3*, 134–152.
- Muthén, B.O.; Satorra, A. Multilevel aspects of varying parameters in structural models. In *Multilevel Analysis of Educational Data*; Academic Press: Cambridge, MA, USA, 1989; pp. 87–99.
- Muthén, B.O. *Mean and Covariance Structure Analysis of Hierarchical Data*; UCLA Statistics Series #62; UCLA: Los Angeles, CA, USA, 1990.
- Muthén, L.K.; Muthén, B.O. *Mplus User’s Guide*, 8th ed.; Muthén & Muthén: Los Angeles, CA, USA, 2017.
- Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum Likelihood from Incomplete Data via the EM Algorithm. *J. R. Stat. Soc. Ser. B* **1977**, *39*, 1–38.
- Satorra, A.; Bentler, P.M. Scaling Corrections for Chi-Square Statistics in Covariance Structure Analysis. In Proceedings of the Business and Economic Statistics Section of the American Statistical Association, Atlanta, GA, USA, 24–28 August 1988; pp. 308–313.
- Yuan, K.; Bentler, P.M. Three Likelihood-Based Methods for Mean and Covariance Structure Analysis With Nonnormal Missing Data. *Sociol. Methodol.* **2000**, *30*, 167–202.
- Asparouhov, T.; Muthén, B. Multivariate Statistical Modeling with Survey Data. In Proceedings of the Federal Committee on Statistical Methodology (FCSM) Research Conference, Washington, DC, USA, 14 November 2005.
- Muthén, L.K.; Muthén, B.O. How to use a Monte Carlo study to decide on sample size and determine power. *Struct. Equ. Model.* **2002**, *9*, 599–620.

**Table 1.** Rejection rates of the robust chi-square test of fit corresponding to Table 6, rows 7–12 and 19–24 in Jak et al. [1], ${\theta}_{B}=0$ for the generating model.

| Generating Model | Estimated Model | Estimated Model ${\theta}_{B}$ | Mplus 8.6 EMA | Mplus 8.7 EMA | Mplus 8.6 QN | Mplus 8.7 QN |
|---|---|---|---|---|---|---|
| Config | Uncon | ${\theta}_{B}>0$ | 0.46 | 0.03 | - | - |
| Config | Uncon | ${\theta}_{B}=0$ | 0.17 | 0.03 | 0.17 | 0.03 |
| Config | Config | ${\theta}_{B}>0$ | 0.36 | 0.05 | - | - |
| Config | Config | ${\theta}_{B}=0$ | 0.10 | 0.03 | 0.10 | 0.03 |
| Config | Shared | ${\theta}_{B}>0$ | 0.45 | 0.06 | - | - |
| Config | Shared | ${\theta}_{B}=0$ | 0.17 | 0.07 | 0.11 | 0.05 |
| Shared | Uncon | ${\theta}_{B}>0$ | 0.46 | 0.03 | - | - |
| Shared | Uncon | ${\theta}_{B}=0$ | 0.17 | 0.03 | 0.17 | 0.03 |
| Shared | Config | ${\theta}_{B}>0$ | 0.35 | 0.05 | - | - |
| Shared | Config | ${\theta}_{B}=0$ | 0.09 | 0.03 | 0.09 | 0.03 |
| Shared | Shared | ${\theta}_{B}>0$ | 0.41 | 0.08 | - | - |
| Shared | Shared | ${\theta}_{B}=0$ | 0.19 | 0.05 | 0.10 | 0.03 |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Asparouhov, T.; Muthén, B. Robust Chi-Square in Extreme and Boundary Conditions: Comments on Jak et al. (2021). *Psych* **2021**, *3*, 542-551.
https://doi.org/10.3390/psych3030035
