Search Results (300)

Search Parameters:
Keywords = CreditScore

15 pages, 3574 KB  
Article
A Credit Risk Identification Model Based on the Minimax Probability Machine with Generative Adversarial Networks
by Yutong Zhang, Xiaodong Zhao and Hailong Huang
Mathematics 2025, 13(20), 3345; https://doi.org/10.3390/math13203345 - 20 Oct 2025
Abstract
In the context of industrial transitions and tariff frictions, financial markets are experiencing frequent defaults, emphasizing the urgency of upgrading credit scoring methodologies. A novel credit risk identification model integrating generative adversarial networks (GAN) and the minimax probability machine (MPM) is proposed. The GAN generates realistic augmented samples to alleviate class imbalance in the credit scoring dataset, while the MPM optimizes the classification hyperplane by reformulating probability constraints into second-order cone problems via the multivariate Chebyshev inequality. Numerical experiments conducted on the South German Credit dataset, which represents individual (consumer) credit risk, demonstrate that the proposed GAN-MPM model achieves 76.13%, 60.93%, 71.78%, and 72.03% for accuracy, F1-score, sensitivity, and AUC, respectively, significantly outperforming support vector machines, random forests, and XGBoost. Furthermore, SHAP analysis reveals that the installment rate as a percentage of disposable income, housing type, loan duration in months, and status of the existing checking account are the most influential features. These findings demonstrate the effectiveness and interpretability of the GAN-MPM model, offering a more accurate and reliable tool for credit risk management.
(This article belongs to the Section E2: Control Theory and Mechanics)
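To make the classification step concrete: below is a minimal sketch of the MPM's second-order cone formulation implied by the multivariate Chebyshev inequality, written with cvxpy. It assumes the GAN augmentation has already produced the balanced class samples X_pos and X_neg; the function name, ridge term, and solver defaults are illustrative assumptions, not the authors' implementation.

```python
# Minimax probability machine (MPM) as a second-order cone program (sketch).
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

def fit_mpm(X_pos, X_neg, ridge=1e-6):
    """Fit a hyperplane sign(w @ x - b) from class means and covariances."""
    d = X_pos.shape[1]
    mu1, mu2 = X_pos.mean(axis=0), X_neg.mean(axis=0)
    S1 = np.cov(X_pos, rowvar=False) + ridge * np.eye(d)
    S2 = np.cov(X_neg, rowvar=False) + ridge * np.eye(d)
    R1, R2 = np.real(sqrtm(S1)), np.real(sqrtm(S2))
    # The Chebyshev bound turns Pr(correct) >= alpha into SOC constraints;
    # minimizing ||S1^(1/2) w|| + ||S2^(1/2) w|| s.t. w @ (mu1 - mu2) = 1
    # maximizes the worst-case correct-classification probability alpha.
    w = cp.Variable(d)
    prob = cp.Problem(cp.Minimize(cp.norm(R1 @ w) + cp.norm(R2 @ w)),
                      [w @ (mu1 - mu2) == 1])
    prob.solve()
    w_star = w.value
    kappa = 1.0 / prob.value          # kappa = sqrt(alpha / (1 - alpha))
    b = w_star @ mu1 - kappa * np.linalg.norm(R1 @ w_star)
    return w_star, b
```

A new point x is then assigned to the positive class when w_star @ x >= b.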

31 pages, 6615 KB  
Article
A Modular and Explainable Machine Learning Pipeline for Student Dropout Prediction in Higher Education
by Abdelkarim Bettahi, Fatima-Zahra Belouadha and Hamid Harroud
Algorithms 2025, 18(10), 662; https://doi.org/10.3390/a18100662 - 18 Oct 2025
Abstract
Student dropout remains a persistent challenge in higher education, with substantial personal, institutional, and societal costs. We developed a modular dropout prediction pipeline that couples data preprocessing with multi-model benchmarking and a governance-ready explainability layer. Using 17,883 undergraduate records from a Moroccan higher education institution, we evaluated nine algorithms: logistic regression (LR), decision tree (DT), random forest (RF), k-nearest neighbors (k-NN), support vector machine (SVM), gradient boosting, Extreme Gradient Boosting (XGBoost), Naïve Bayes (NB), and multilayer perceptron (MLP). On our test set, XGBoost attained an area under the receiver operating characteristic curve (AUC–ROC) of 0.993, an F1-score of 0.911, and a recall of 0.944. Subgroup reporting supported governance and fairness: across credit–load bins, recall remained high and stable (e.g., <9 credits: precision 0.85, recall 0.932; 9–12: 0.886/0.969; >12: 0.915/0.936), with full TP/FP/FN/TN counts provided. A Shapley additive explanations (SHAP)-based layer identified risk and protective factors (e.g., administrative deadlines, cumulative GPA, and passed-course counts), surfaced ambiguous and anomalous cases for human review, and offered case-level diagnostics. To assess generalization, we replicated our findings on a public dataset (UCI–Portugal; tables only): XGBoost remained the top-ranked model (F1-score 0.792, AUC–ROC 0.922). Overall, boosted ensembles combined with SHAP delivered high accuracy, transparent attribution, and governance-ready outputs, enabling responsible early-warning implementation for student retention.
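The boosted-ensemble-plus-SHAP pattern reported here follows a standard recipe. The sketch below uses synthetic stand-in data and hypothetical hyperparameters to show the train, evaluate, and case-level attribution steps; it is not the paper's pipeline.

```python
# XGBoost classifier with SHAP attributions (sketch with synthetic data).
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))                        # stand-in student features
y = (X[:, 0] + rng.normal(size=2000) > 1).astype(int)  # stand-in dropout label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
model = xgb.XGBClassifier(n_estimators=400, max_depth=6,
                          learning_rate=0.05, eval_metric="auc")
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("AUC-ROC:", roc_auc_score(y_te, proba))
print("F1:", f1_score(y_te, proba > 0.5))

# Explainability layer: one row of SHAP attributions per student enables
# case-level diagnostics and review of ambiguous predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```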

18 pages, 3037 KB  
Article
Stacked Ensemble Model with Enhanced TabNet for SME Supply Chain Financial Risk Prediction
by Wenjie Shan and Benhe Gao
Systems 2025, 13(10), 892; https://doi.org/10.3390/systems13100892 - 10 Oct 2025
Abstract
Small and medium-sized enterprises (SMEs) chronically face financing frictions. While supply chain finance (SCF) can help, reliable credit risk assessment in SCF is hindered by redundant features, heterogeneous data sources, small samples, and class imbalance. Using 360 A-share-listed SMEs from 2019–2023, we build a 77-indicator, multidimensional system covering SME and core-firm financials, supply chain stability, and macroeconomic conditions. To reduce dimensionality and remove low-contribution variables, feature selection is performed via a genetic-algorithm-enhanced LightGBM (GA-LightGBM). To mitigate class imbalance, we employ TabDDPM for data augmentation, yielding consistent improvements in downstream performance. For modeling, we propose a two-stage predictive framework that integrates TabNet-based feature engineering with a stacking ensemble (TabNet-Stacking). In our experiments, TabNet-Stacking outperforms strong machine-learning baselines in accuracy, recall, F1 score, and AUC.
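As a rough illustration of the second stage, a stacking ensemble can be assembled with scikit-learn as below; raw features stand in for the TabNet-derived ones, and the base-learner mix is an assumption, not the paper's configuration.

```python
# Stacking ensemble over probability outputs of base learners (sketch).
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300)),
                ("gb", GradientBoostingClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5, stack_method="predict_proba",
)
# stack.fit(F_train, y_train)  # F_train: TabNet-engineered feature matrix
```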

34 pages, 2489 KB  
Article
When Support Hides Progress: Insights from a Physics Tutorial on Solving Laplace’s Equation Using Separation of Variables in Cartesian Coordinates
by Jaya Shivangani Kashyap, Robert Devaty and Chandralekha Singh
Educ. Sci. 2025, 15(10), 1345; https://doi.org/10.3390/educsci15101345 - 10 Oct 2025
Abstract
The electrostatic potential in certain types of boundary value problems can be found by solving Laplace's Equation (LE). It is important for students to develop the ability to recognize the utility of LE and apply the method to solve physics problems. To develop students' skills in solving problems for which LE is effective in an upper-level electricity and magnetism course, we developed and validated a tutorial focused on finding the electrostatic potential in a Cartesian coordinate system. The tutorial was implemented in three instructors' classes, accompanied by a scaffolded pretest (after traditional lecture) and posttest (after the tutorial). We also conducted think-aloud interviews with advanced students using both unscaffolded and scaffolded versions of the pretest and posttest. Findings reveal common student difficulties, which guided the design of the tutorial to help address them. The improvement from the pretest after lecture to the posttest after the tutorial was similar on the scaffolded version of the tests (in which the problems posed were broken into sub-problems) for all three instructors' classes and for the interviewed students. Equally importantly, interviewed students demonstrated greater differences in scores between the pretest and posttest on the unscaffolded versions, in which the problems were not broken into sub-problems, suggesting that the scaffolded version of the tests may have obscured evidence of actual learning from the tutorial. While a scaffolded test is typically intended to guide students through complex reasoning by breaking a problem into sub-problems and offering structured support, it can limit opportunities to demonstrate independent problem-solving and evidence of learning from the tutorial. Additionally, one instructor's class underperformed relative to the others even on the pretest. This instructor had told students that the tests and tutorial were not relevant to the current course syllabus and offered a small amount of extra credit for attempting them to help education researchers, highlighting how this type of instructor framing of instructional tasks can negatively impact student engagement and performance. Overall, in addition to identifying student difficulties and demonstrating how the tutorial addresses them, this study reveals two unanticipated but critical insights: first, breaking problems into sub-parts can obscure evidence of students' ability to independently solve problems, and second, instructor framing can significantly influence student engagement and performance.
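For context on the method the tutorial targets, the standard separation-of-variables treatment of Laplace's equation in two-dimensional Cartesian coordinates takes the textbook form below; the boundary conditions shown are illustrative, not the tutorial's specific problem.

```latex
% Separable ansatz for Laplace's equation in 2D Cartesian coordinates:
\nabla^2 V = \frac{\partial^2 V}{\partial x^2} + \frac{\partial^2 V}{\partial y^2} = 0,
\qquad V(x,y) = X(x)\,Y(y)
\;\Longrightarrow\;
\frac{1}{X}\frac{d^2 X}{dx^2} = -\frac{1}{Y}\frac{d^2 Y}{dy^2} = k^2 .

% With V = 0 at y = 0 and y = a, and V \to 0 as x \to \infty:
V(x,y) = \sum_{n=1}^{\infty} C_n\, e^{-n\pi x/a} \sin\!\left(\frac{n\pi y}{a}\right),
\qquad
C_n = \frac{2}{a}\int_0^a V(0,y)\,\sin\!\left(\frac{n\pi y}{a}\right) dy .
```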

15 pages, 405 KB  
Article
Detecting Imbalanced Credit Card Fraud via Hybrid Graph Attention and Variational Autoencoder Ensembles
by Ibomoiye Domor Mienye, Ebenezer Esenogho and Cameron Modisane
AppliedMath 2025, 5(4), 131; https://doi.org/10.3390/appliedmath5040131 - 2 Oct 2025
Abstract
Credit card fraud detection remains a major challenge due to severe class imbalance and the constantly evolving nature of fraudulent behaviors. To address these challenges, this paper proposes a hybrid framework that integrates a Variational Autoencoder (VAE) for probabilistic anomaly detection, a Graph Attention Network (GAT) for capturing inter-transaction relationships, and a stacking ensemble with XGBoost for robust prediction. The joint use of VAE anomaly scores and GAT-derived node embeddings enables the model to capture both feature-level irregularities and relational fraud patterns. Experiments on the European Credit Card and IEEE-CIS Fraud Detection datasets show that the proposed approach outperforms baseline models by up to 15% in F1-score, achieving values above 0.980 with AUCs reaching 0.995. These results demonstrate the effectiveness of combining unsupervised anomaly detection with graph-based learning within an ensemble framework for highly imbalanced fraud detection problems.
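The fusion step described here, concatenating feature-level anomaly scores with relational embeddings before the ensemble stage, can be sketched as follows; the VAE reconstruction errors, GAT embeddings, and the imbalance weighting are hypothetical stand-ins.

```python
# Fusing VAE anomaly scores and GAT node embeddings for an XGBoost stacker.
import numpy as np
import xgboost as xgb

def fuse_features(X, vae_recon_error, gat_embeddings):
    """Stack raw features, per-transaction anomaly score, and embeddings."""
    return np.hstack([X, vae_recon_error.reshape(-1, 1), gat_embeddings])

# X_fused = fuse_features(X, recon_err, node_emb)
# n_neg, n_pos = (y == 0).sum(), (y == 1).sum()
# clf = xgb.XGBClassifier(scale_pos_weight=n_neg / n_pos)  # imbalance handling
# clf.fit(X_fused, y)
```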

10 pages, 625 KB  
Article
Performance of ChatGPT-4 as an Auxiliary Tool: Evaluation of Accuracy and Repeatability on Orthodontic Radiology Questions
by Mercedes Morales Morillo, Nerea Iturralde Fernández, Luis Daniel Pellicer Castillo, Ana Suarez, Yolanda Freire and Victor Diaz-Flores García
Bioengineering 2025, 12(10), 1031; https://doi.org/10.3390/bioengineering12101031 - 26 Sep 2025
Abstract
Background: Large language models (LLMs) are increasingly considered in dentistry, yet their accuracy in orthodontic radiology remains uncertain. This study evaluated the performance of ChatGPT-4 on questions aligned with current radiology guidelines. Methods: Fifty short, guideline-anchored questions were authored; thirty were pre-selected a priori for their diagnostic relevance. Using the ChatGPT-4 web interface in March 2025, we obtained 30 answers per item (900 in total) across two user accounts and three times of day, each in a new chat with a standardised prompt. Two blinded experts graded all responses on a 3-point scale (0 = incorrect, 1 = partially correct, 2 = correct); disagreements were adjudicated. The primary outcome was strict accuracy (the proportion of answers graded 2). Secondary outcomes were partial-credit performance (mean score on the 0–2 scale) and inter-rater agreement using multiple coefficients. Results: Strict accuracy was 34.1% (95% CI 31.0–37.2), with wide item-level variability (0–100%). The mean partial-credit score was 1.09/2.00 (median 1.02; IQR 0.53–1.83). Inter-rater agreement was high (percent agreement: 0.938, with coefficients indicating substantial to almost-perfect reliability). Conclusions: Under the conditions of this study, ChatGPT-4 demonstrated limited strict accuracy yet substantial reliability in expert grading when applied to orthodontic radiology questions. These findings underline its potential as a complementary educational and decision-support resource while also highlighting its present limitations. Its role should remain supportive and informative, never replacing the critical appraisal and professional judgement of the clinician.
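The primary outcome, strict accuracy with a 95% confidence interval, reduces to a short computation. The sketch below uses a synthetic grade array in place of the study's 900 graded responses and a normal-approximation interval, which may differ from the method the authors used.

```python
# Strict accuracy (share of grade-2 answers) with a normal-approximation CI.
import numpy as np

rng = np.random.default_rng(0)
grades = rng.choice([0, 1, 2], size=900, p=[0.35, 0.31, 0.34])  # stand-in data

p = np.mean(grades == 2)                    # strict accuracy
se = np.sqrt(p * (1 - p) / grades.size)
ci = (p - 1.96 * se, p + 1.96 * se)         # approximate 95% CI
partial = grades.mean()                     # mean partial-credit score (0-2)
print(f"strict accuracy {p:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), mean {partial:.2f}")
```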

16 pages, 2130 KB  
Article
Application of a Machine Learning Algorithm to Assess and Minimize Credit Risks
by Garnik Arakelyan and Armen Ghazaryan
J. Risk Financial Manag. 2025, 18(9), 520; https://doi.org/10.3390/jrfm18090520 - 17 Sep 2025
Abstract
The banking system, as the most important sector of every country's economy, routinely faces a number of risks. Financial institutions in this system operate in an unstable environment and, without complete information about it, may suffer significant losses. The main source of such losses is credit risk, and to manage it, various mathematical models are developed to support banks' lending decisions. Lately, machine learning (ML) classification algorithms have often been used for credit risk modeling. In this work, drawing on ideas from well-known ML algorithms, a new algorithm for the binary classification problem was developed. Using this algorithm, a classification model was built on real data, and its quality indicators (ROC AUC, PR AUC, precision, recall, and F1 score) were evaluated. By mapping the resulting probabilities onto a 300–850 score range, a scoring model was developed whose use can mitigate credit risk and protect financial organizations from major losses.
(This article belongs to the Special Issue Lending, Credit Risk and Financial Management)
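Mapping model probabilities onto a 300–850 score band is commonly done with a points-to-double-odds transformation; the sketch below illustrates that convention, and the base score, base odds, and PDO values are assumptions rather than the authors' calibration.

```python
# Probability-of-default to credit-score mapping (points-to-double-odds sketch).
import numpy as np

def prob_to_score(p_default, base_score=600, base_odds=50.0, pdo=20.0):
    """Higher score = lower risk; score rises by `pdo` each time odds double."""
    p = np.clip(p_default, 1e-9, 1 - 1e-9)
    odds = (1 - p) / p                       # good-to-bad odds
    factor = pdo / np.log(2)
    offset = base_score - factor * np.log(base_odds)
    return np.clip(offset + factor * np.log(odds), 300, 850)

# Example: under these illustrative settings a 2% default probability lands
# near the 600 base score (odds ~49:1), while 0.5% maps to roughly 640.
```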

15 pages, 748 KB  
Article
A Mixture Model for Survival Data with Both Latent and Non-Latent Cure Fractions
by Eduardo Yoshio Nakano, Frederico Machado Almeida and Marcílio Ramos Pereira Cardial
Stats 2025, 8(3), 82; https://doi.org/10.3390/stats8030082 - 13 Sep 2025
Abstract
One of the most popular cure rate models in the literature is the Berkson and Gage mixture model. A characteristic of this model is that it considers the cure to be a latent event. However, there are situations in which the cure is well known, and this information must be considered in the analysis. In this context, this paper proposes a mixture model that accommodates both latent and non-latent cure fractions. More specifically, the proposal is to extend the Berkson and Gage mixture model to include the knowledge of the cure. A simulation study was conducted to investigate the asymptotic properties of maximum likelihood estimators. Finally, the proposed model is illustrated through an application to credit risk modeling.
(This article belongs to the Section Survival Analysis)
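For reference, the Berkson and Gage mixture model that this paper extends writes the population survival function as a mixture of a cured fraction and a susceptible fraction; the extension to observed (non-latent) cures is developed in the article itself.

```latex
% Berkson and Gage mixture cure model: a fraction \pi is cured (never
% experiences the event), the rest follow the susceptibles' survival S_u(t):
S_{\mathrm{pop}}(t) = \pi + (1 - \pi)\, S_u(t),
\qquad 0 \le \pi \le 1,
\qquad \lim_{t \to \infty} S_{\mathrm{pop}}(t) = \pi .
```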

28 pages, 1433 KB  
Article
Class-Adaptive Weighted Broad Learning System with Hybrid Memory Retention for Online Imbalanced Classification
by Jintao Huang, Yu Wang and Mengxin Wang
Electronics 2025, 14(17), 3562; https://doi.org/10.3390/electronics14173562 - 8 Sep 2025
Abstract
Data stream classification is a critical challenge in data mining, where models must rapidly adapt to evolving data distributions and concept drift in real time. While extreme learning machines offer fast training and strong generalization, most existing methods struggle to jointly address multi-class imbalance, concept drift, and the high cost of label acquisition in streaming settings. In this paper, we present the Adaptive Broad Learning System for Online Imbalanced Classification (ABLS-OIC), which introduces three core innovations: (1) a Class-Adaptive Weight Matrix (CAWM) that dynamically adjusts sample weights according to class distribution, sample density, and difficulty; (2) a Hybrid Memory Retention Mechanism (HMRM) that selectively retains representative samples based on importance and diversity; and (3) a Multi-Objective Adaptive Optimization Framework (MAOF) that balances classification accuracy, class balance, and computational efficiency. Extensive experiments on ten benchmark datasets with varying imbalance ratios and drift patterns show that ABLS-OIC consistently outperforms state-of-the-art methods, with improvements of 5.9% in G-mean, 6.3% in F1-score, and 3.4% in AUC. Furthermore, a real-world credit fraud detection case study demonstrates the practical effectiveness of ABLS-OIC, highlighting its value for early detection of rare but critical events in dynamic, high-stakes applications.
(This article belongs to the Special Issue Advances in Data Mining and Its Applications)
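The class-frequency component of the CAWM idea fits in a few lines; the paper's full weight matrix also incorporates sample-density and difficulty terms, which are omitted in this sketch.

```python
# Inverse-frequency sample weights (only the class-distribution part of CAWM).
import numpy as np

def class_adaptive_weights(y):
    """Weight each sample by the inverse frequency of its class."""
    classes, counts = np.unique(y, return_counts=True)
    inv_freq = {c: len(y) / n for c, n in zip(classes, counts)}
    return np.array([inv_freq[label] for label in y])

# Example: y = [0, 0, 0, 1] gives weights [1.33, 1.33, 1.33, 4.0], so the
# rare class contributes as much total weight as the common one.
```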

23 pages, 3140 KB  
Article
Explainable Machine Learning Models for Credit Rating in Colombian Solidarity Sector Entities
by María Andrea Arias-Serna, Jhon Jair Quiza-Montealegre, Luis Fernando Móntes-Gómez, Leandro Uribe Clavijo and Andrés Felipe Orozco-Duque
J. Risk Financial Manag. 2025, 18(9), 489; https://doi.org/10.3390/jrfm18090489 - 2 Sep 2025
Abstract
This paper proposes a methodology for implementing a custom-developed explainability model for credit rating that uses behavioral data recorded over the borrowing lifecycle and can replicate the score given by the regulatory model for the solidarity economy in Colombia. The methodology integrates continuous behavioral and financial variables from over 17,000 real credit histories into predictive models based on ridge regression, decision trees, random forests, XGBoost, and LightGBM. The models were trained and evaluated using cross-validation and RMSE metrics. LightGBM emerged as the most accurate model, effectively capturing nonlinear credit behavior patterns. To ensure interpretability, SHAP was used to identify the contribution of each feature to the model predictions. The LightGBM model reproduced the credit risk assessment of the regulatory model used by the Colombian Superintendence of the Solidarity Economy, with a root-mean-square error of 0.272 and an R² score of 0.99. We propose an alternative framework using explainable machine learning models aligned with the internal ratings-based approach under Basel II. Our model integrates variables collected throughout the borrowing lifecycle, offering a more comprehensive perspective than the regulatory model. While the regulatory framework adjusts itself generically to national regulations, our approach explicitly accounts for borrower-specific dynamics.
(This article belongs to the Section Financial Technology and Innovation)
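Training a LightGBM regressor to replicate a regulatory score and judging it by cross-validated RMSE follows a standard pattern; the data and hyperparameters below are synthetic placeholders, not the study's 17,000 credit histories.

```python
# LightGBM regression scored by cross-validated RMSE (sketch).
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))              # stand-in behavioral features
y = 2 * X[:, 0] + rng.normal(size=1000)      # stand-in regulatory score

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
rmse = -cross_val_score(model, X, y, cv=5,
                        scoring="neg_root_mean_squared_error")
print("CV RMSE:", rmse.mean())
```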

20 pages, 1534 KB  
Article
Custom Score Function: Projection of Structural Attention in Stochastic Structures
by Mine Doğan and Mehmet Gürcan
Axioms 2025, 14(9), 664; https://doi.org/10.3390/axioms14090664 - 29 Aug 2025
Abstract
This study introduces a novel approach to correlation-based feature selection and dimensionality reduction in high-dimensional data structures. To this end, a customized scoring function is proposed, designed as a dual-objective structure that simultaneously maximizes the correlation with the target variable while penalizing redundant information among features. The method is built upon three main components: correlation-based preliminary assessment, feature selection via the tailored scoring function, and integration of the selection results into a t-SNE visualization guided by Rel/Red ratios. Initially, features are ranked according to their Pearson correlation with the target, and then redundancy is assessed through pairwise correlations among features. A priority scheme is defined using a scoring function composed of relevance and redundancy components. To enhance the selection process, an optimization framework based on stochastic differential equations (SDEs) is introduced. Throughout this process, feature weights are updated using both gradient information and diffusion dynamics, enabling the identification of subsets that maximize overall correlation. In the final stage, the t-SNE dimensionality reduction technique is applied with weights derived from the Rel/Red scores. In conclusion, this study redefines the feature selection process by integrating correlation-maximizing objectives with stochastic modeling. The proposed approach offers a more comprehensive and effective alternative to conventional methods, particularly in terms of explainability, interpretability, and generalizability. The method demonstrates strong potential for application in advanced machine learning systems, such as credit scoring, and in broader dimensionality reduction tasks.
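An illustrative form of the dual-objective score described here, in the spirit of relevance-minus-redundancy selection (the paper's exact weighting and SDE-driven update may differ), is:

```latex
% Score for a candidate feature f given the already-selected set S:
J(f) \;=\;
\underbrace{\lvert \rho(f, y) \rvert}_{\text{relevance to target}}
\;-\; \lambda\,
\underbrace{\frac{1}{\lvert S \rvert} \sum_{g \in S} \lvert \rho(f, g) \rvert}_{\text{redundancy penalty}},
\qquad
f^{\ast} = \arg\max_{f \notin S} J(f),
```
where \rho is the Pearson correlation and \lambda trades relevance off against redundancy; the Rel/Red ratio used to weight the t-SNE step can be read off the two bracketed terms.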

27 pages, 3001 KB  
Article
Effects of Civil Wars on the Financial Soundness of Banks: Evidence from Sudan Using Altman’s Models and Stress Testing
by Mudathir Abuelgasim and Said Toumi
J. Risk Financial Manag. 2025, 18(9), 476; https://doi.org/10.3390/jrfm18090476 - 26 Aug 2025
Abstract
This study assesses the financial soundness of Sudanese commercial banks during escalating civil conflict by integrating Altman's Z-score models with scenario-based stress testing. Using audited financial data from 2016 to 2022 (pre-war) and projections through 2028, the analysis evaluates resilience under low- and high-intensity conflict scenarios. Altman's Model 3 (for non-industrial firms) and Model 4 (for emerging markets) are applied to capture liquidity, retained earnings, profitability, and leverage dynamics. The findings reveal relative stability during 2017–2020 and in 2022, contrasted with significant vulnerability in 2016 and 2021 due to macroeconomic deterioration, sanctions, and political instability. Liquidity emerged as the most critical driver of Z-score performance, followed by earnings retention and profitability, while leverage showed a context-specific positive effect under Sudan's Islamic finance framework. Stress testing indicates that even under low-intensity conflict, rising liquidity risk, capital erosion, and credit risk threaten sectoral stability by 2025. High-intensity conflict projections suggest systemic collapse by 2028, characterized by unsustainable liquidity depletion, near-zero capital adequacy, and widespread defaults. The results demonstrate a direct relationship between conflict duration and systemic fragility, affirming the predictive value of Altman's models when combined with stress testing. Policy implications include the urgent need for enhanced risk-based supervision, Basel II/III implementation, crisis reserves, contingency planning, and coordinated regulatory interventions to safeguard the stability of the banking sector in fragile states.
(This article belongs to the Section Banking and Finance)
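The two Altman variants the study applies have standard published forms; the study's exact specification and thresholds are in the article.

```latex
% Model 3 (Z''-score for non-manufacturers / non-industrial firms):
Z'' = 6.56\,X_1 + 3.26\,X_2 + 6.72\,X_3 + 1.05\,X_4
% Model 4 (emerging-market version adds a constant):
Z''_{\mathrm{EM}} = 3.25 + 6.56\,X_1 + 3.26\,X_2 + 6.72\,X_3 + 1.05\,X_4
% X_1 = working capital / total assets            (liquidity)
% X_2 = retained earnings / total assets          (earnings retention)
% X_3 = EBIT / total assets                       (profitability)
% X_4 = book value of equity / total liabilities  (leverage)
```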

25 pages, 4100 KB  
Article
An Adaptive Unsupervised Learning Approach for Credit Card Fraud Detection
by John Adejoh, Nsikak Owoh, Moses Ashawa, Salaheddin Hosseinzadeh, Alireza Shahrabi and Salma Mohamed
Big Data Cogn. Comput. 2025, 9(9), 217; https://doi.org/10.3390/bdcc9090217 - 25 Aug 2025
Abstract
Credit card fraud remains a major cause of financial loss around the world. Traditional fraud detection methods that rely on supervised learning often struggle because fraudulent transactions are rare compared to legitimate ones, leading to imbalanced datasets. Additionally, the models must be retrained frequently, as fraud patterns change over time and require new labeled data for retraining. To address these challenges, this paper proposes an ensemble unsupervised learning approach for credit card fraud detection that combines Autoencoders (AEs), Self-Organizing Maps (SOMs), and Restricted Boltzmann Machines (RBMs), integrated with an Adaptive Reconstruction Threshold (ART) mechanism. The ART dynamically adjusts anomaly detection thresholds by leveraging the clustering properties of SOMs, effectively overcoming the limitations of static threshold approaches in machine learning and deep learning models. The proposed models, AE-ASOMs (Autoencoder—Adaptive Self-Organizing Maps) and RBM-ASOMs (Restricted Boltzmann Machines—Adaptive Self-Organizing Maps), were evaluated on the Kaggle Credit Card Fraud Detection and IEEE-CIS datasets. Our AE-ASOM model achieved an accuracy of 0.980 and an F1-score of 0.967, while the RBM-ASOM model achieved an accuracy of 0.975 and an F1-score of 0.955. Compared to models such as One-Class SVM and Isolation Forest, our approach demonstrates higher detection accuracy and significantly reduces false positive rates. In addition to its performance, the model offers considerable computational efficiency with a training time of 200.52 s and memory usage of 3.02 megabytes.
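The adaptive-threshold idea, setting the anomaly cutoff per SOM cluster rather than globally, can be sketched as below; the mean-plus-k-sigma rule and all variable names are assumptions, not the authors' exact mechanism.

```python
# Per-cluster anomaly thresholds over autoencoder reconstruction errors.
import numpy as np

def adaptive_flags(recon_error, cluster_id, k=3.0):
    """Flag a transaction when its error exceeds its own cluster's threshold."""
    flags = np.zeros(recon_error.shape, dtype=bool)
    for c in np.unique(cluster_id):
        idx = cluster_id == c
        thr = recon_error[idx].mean() + k * recon_error[idx].std()
        flags[idx] = recon_error[idx] > thr
    return flags

# recon_error: AE reconstruction error per transaction;
# cluster_id: SOM best-matching-unit index per transaction.
```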

15 pages, 496 KB  
Article
Predictors of Early College Success in the U.S.: An Initial Examination of Test-Optional Policies
by Kaylani Rae Othman, Rachel A. Vannatta and Audrey Conway Roberts
Educ. Sci. 2025, 15(9), 1089; https://doi.org/10.3390/educsci15091089 - 22 Aug 2025
Abstract
For decades, the U.S. college admissions process has utilized standardized exams as critical indicators of college readiness. With the onset of the COVID pandemic, the majority of 4-year universities implemented the Test-Optional policy to improve college access and enrollment. The Test-Optional policy allows high school students to apply to institutions that have implemented it without an SAT or ACT score. This study examined the use of the Test-Optional policy and its relationship with early college success. Forward multiple regression examined which of the variables High School GPA, Students of Color, First-Generation Status, Test-Optional, Pell Eligible, and Pre-College Credits best predicted undergraduate first-year GPA. The results generated a five-variable model that accounted for 31% of the variability in first-year college GPA. High School GPA was the strongest predictor, while Test-Optional was not entered into the model. Binary logistic regression examined predictors of first-year college completion; the resulting model included High School GPA, which tripled the odds of first-year completion. Again, Test-Optional was not included in the model. Although Students of Color and Pell-Eligible students utilized Test-Optional significantly more than their peers, Test-Optional was not a significant predictor of first-year college GPA or first-year completion.

24 pages, 748 KB  
Article
When Models Fail: Credit Scoring, Bank Management, and NPL Growth in the Greek Recession
by Vasileios Giannopoulos and Spyridon Kariofyllas
Int. J. Financial Stud. 2025, 13(3), 152; https://doi.org/10.3390/ijfs13030152 - 22 Aug 2025
Abstract
The significant increase in non-performing loans (NPLs) during the escalating recession of the Greek economy motivates us to study the predictive power of credit rating models in periods of economic shock. In parallel, we examine the responsibilities of bank management in the expansion of NPLs in this adverse environment. Certain studies connect bad loans with turbulent conditions. Our paper weighs the relative significance of both economic shock and management effectiveness using individual-level data, which constitutes the originality of our study. We use a unique dataset of small business loans granted during 2005 (expansion period) by a large commercial Greek bank, and we explore their performance between 2010 and 2012 (early recession period). In the context of a stepwise methodology, we compare the Bank's credit scoring model with three other prediction models (binomial logistic regression, decision tree, and multilayer perceptron neural network) to assess both the predictive ability of credit scoring models during recession and the effectiveness of bank management. The comparative analysis confirms management's share of responsibility for the NPLs, since the Bank's model exhibited the worst predictive performance. Additionally, we find that adverse external conditions lead to an increase in NPLs and reduce the predictive performance of all credit scoring models. The study offers a reliable methodological tool for lending management in economic downturns.
