Article
Comparative Mathematical Evaluation of Models in the Meta-Analysis of Proportions: Evidence from Neck, Shoulder, and Back Pain in the Population of Computer Vision Syndrome
- Vanja Dimitrijević,
- Bojan Rašković,
- Borislav Obradović et al.
Meta-analysis of proportions requires a carefully chosen transformation model due to the inherent mathematical constraints of proportional data (boundedness and non-constant variance). This study compared four estimation models for proportions (Untransformed, Freeman–Tukey double-arcsine [PFT], Logit, and Arcsine) to determine the most reliable and numerically stable estimator of pooled prevalence. The comparative evaluation used 35 empirical studies on the prevalence of Computer Vision Syndrome (CVS)-related musculoskeletal pain. The analysis employed frequentist methods, Monte Carlo simulations (10,000 iterations) to test confidence interval (CI) coverage, and Bayesian sensitivity analysis. Key findings were validated with the generalized linear mixed model (GLMM), representing the one-step methodological standard. Pooled prevalence estimates were highly consistent across models (0.467 to 0.483). Extreme heterogeneity (I² ≈ 98–99%) persisted in all models, with τ² values exceeding 1.0 in the Logit and GLMM frameworks. Mixed-effects meta-regression confirmed that this heterogeneity was independent of study size (p = 0.692 to 0.755), with the moderator explaining virtually none of the variance (0% to 0.2%). This confirms that the high variance is an inherent feature of the dataset rather than a statistical artifact. Simulations revealed a critical trade-off: while the Untransformed model showed minimal bias, its CI coverage collapsed in small-sample boundary scenarios (N = 50, p = 0.01; coverage: 39.36%). Under these conditions, the PFT transformation was most robust (98.51% coverage), while the Logit model also maintained high coverage (91.07%) despite its variance inflation. We conclude that model selection should be context-dependent: the Untransformed model is recommended for well-powered datasets, whereas the PFT transformation is essential for small samples to ensure valid inferential precision.
Mathematics, 3 February 2026
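
To make the small-sample boundary scenario in the abstract concrete, the sketch below simulates single-study confidence intervals for a true prevalence of 0.01 with N = 50, comparing an untransformed (Wald) interval with a Freeman–Tukey double-arcsine (PFT) interval. This is a minimal illustration under stated assumptions, not the article's simulation protocol: the function names, the single-study (rather than pooled random-effects) design, the Wald-type intervals, and the use of arcsin(√p) as the coverage target on the transformed scale are choices made here; only the transformation and its variance approximation 1/(4n + 2) follow the standard PFT formulas.

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed for reproducibility


def ft_double_arcsine(x, n):
    """Freeman-Tukey double-arcsine transform of x events out of n."""
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1))))


def coverage(p_true, n, n_iter=10_000, z=1.96):
    """Empirical 95% CI coverage for a single study of size n.

    Untransformed: Wald interval p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n).
    PFT: interval on the transformed scale with variance 1 / (4n + 2),
    checked against arcsin(sqrt(p_true)), the large-sample target.
    """
    x = rng.binomial(n, p_true, size=n_iter)

    # Untransformed (Wald) interval; degenerates to zero width when x = 0.
    p_hat = x / n
    se_wald = np.sqrt(p_hat * (1 - p_hat) / n)
    wald_cover = np.mean((p_hat - z * se_wald <= p_true) & (p_true <= p_hat + z * se_wald))

    # Freeman-Tukey double-arcsine interval on the transformed scale.
    t = ft_double_arcsine(x, n)
    se_ft = np.sqrt(1.0 / (4 * n + 2))
    target = np.arcsin(np.sqrt(p_true))
    ft_cover = np.mean((t - z * se_ft <= target) & (target <= t + z * se_ft))

    return wald_cover, ft_cover


if __name__ == "__main__":
    wald, ft = coverage(p_true=0.01, n=50)
    print(f"Untransformed (Wald) coverage: {wald:.1%}   PFT coverage: {ft:.1%}")
```

In this regime most simulated studies observe zero events, so the Wald interval has zero width and misses the true value, while the PFT interval stays wide enough on the transformed scale to retain coverage; this illustrates the kind of boundary failure the abstract reports, though the article's own frequentist and Bayesian protocol is considerably richer.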



