Abstract
This study addresses an interpretable supervised binary classification problem under constrained feature availability and class imbalance. The objective is to evaluate whether reliable predictive performance can be achieved using only pre-event administrative variables while preserving transparency and analytical traceability of model decisions. A comparative framework is developed using linear and ensemble-based classifiers, combined with resampling strategies and exhaustive hyperparameter optimization embedded within cross-validation. Model performance is evaluated using standard classification metrics, with particular emphasis on the Matthews correlation coefficient as a robust measure under imbalance. In addition to predictive accuracy, the analysis incorporates global, structural, and local interpretability mechanisms, including permutation feature importance, explicit decision paths derived from tree-based models, and additive local explanations. Experimental results show that optimized ensemble models achieve consistent performance gains over linear baselines while maintaining a balanced error structure across classes. Importantly, the most influential predictors exhibit stable rankings across models and explanation methods, indicating a concentrated and robust discriminative signal within the constrained feature space. The interpretability analysis demonstrates that complex classifiers can be decomposed into verifiable decision rules and locally coherent feature contributions. Overall, the findings confirm that interpretable supervised classification can be reliably conducted under administrative data constraints, providing a reproducible modeling framework that balances predictive performance, error analysis, and explainability in applied mathematical settings.
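The abstract's emphasis on the Matthews correlation coefficient as a robust metric under class imbalance can be illustrated with a minimal, self-contained sketch (the function name `mcc` and the toy labels below are illustrative, not taken from the study's data or code):

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).

    Returns 0.0 when any marginal count is zero (degenerate case),
    matching the common convention for an uninformative classifier.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Imbalanced toy set: 9 negatives, 1 positive.
y_true = [0] * 9 + [1]
majority = [0] * 10  # always predict the majority class
# Accuracy here is 0.9, yet MCC is 0.0 -- the prediction carries no signal,
# which is why the abstract favors MCC over accuracy under imbalance.
```

This shows the practical point: a majority-class predictor looks strong on accuracy but scores zero on MCC, whereas a perfect predictor scores 1.0.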