Search Results (280)

Search Parameters:
Keywords = credit scoring

23 pages, 857 KiB  
Article
Study of the Impact of Agricultural Insurance on the Livelihood Resilience of Farmers: A Case Study of Comprehensive Natural Rubber Insurance
by Jialin Wang, Yanglin Wu, Jiyao Liu and Desheng Zhang
Agriculture 2025, 15(15), 1683; https://doi.org/10.3390/agriculture15151683 - 4 Aug 2025
Viewed by 219
Abstract
Against the backdrop of increasingly frequent extreme weather events and heightened market price volatility, investigating the relationship between agricultural insurance and farmers’ livelihood resilience is crucial for ensuring rural socioeconomic stability. This study utilizes field survey data from 1196 households across twelve county-level divisions (three cities and nine counties) in the natural rubber-producing regions of China’s Hainan and Yunnan provinces. Using propensity score matching (PSM), we empirically examine agricultural insurance’s impact on household livelihood resilience. The results demonstrate that agricultural insurance increased farmers’ livelihood resilience by 1%. This effect is particularly pronounced among recently poverty-alleviated households and large-scale farming operations. Furthermore, the analysis highlights the mediating roles of credit availability, adoption of agricultural production technologies, and production initiative in strengthening insurance’s positive impact. Therefore, policies should be refined and expanded, combining agricultural insurance with credit support and agricultural technology extension, to leverage their value and ensure the sustainable development of farm households. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
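
The core estimation step is standard enough to sketch. Below is a minimal propensity-score-matching outline in Python, assuming a household-level table with an insurance indicator, a resilience index, and a few covariates; the file name and all column names are hypothetical, not the authors' data or code.

```python
# Hypothetical PSM sketch: nearest-neighbor matching on estimated propensity
# scores, then the average treatment effect on the treated (ATT).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("rubber_households.csv")  # hypothetical survey file
covariates = ["age", "education", "land_area", "off_farm_income"]  # assumed

# 1. Estimate propensity scores: P(insured = 1 | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["insured"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. One-to-one nearest-neighbor matching (with replacement) on the score.
treated = df[df["insured"] == 1]
control = df[df["insured"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3. ATT on the livelihood resilience index.
att = treated["resilience_index"].mean() - matched["resilience_index"].mean()
print(f"ATT on livelihood resilience: {att:.4f}")
```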

18 pages, 778 KiB  
Article
The Effects of Handedness Consistency on the Identification of Own- and Cross-Race Faces
by Raymond P. Voss, Ryan Corser, Stephen Prunier and John D. Jasper
Brain Sci. 2025, 15(8), 828; https://doi.org/10.3390/brainsci15080828 - 31 Jul 2025
Viewed by 225
Abstract
Background/Objectives: People are better at recognizing the faces of racial in-group members than out-group members. This own-race bias draws on pattern recognition and memory processes that depend on hemispheric specialization. We hypothesized that handedness, a proxy for hemispheric specialization, would moderate own-race bias. Specifically, consistently handed individuals perform better on tasks that require the hemispheres to work independently, while inconsistently handed individuals perform better on tasks that require integration. This led to the hypothesis that inconsistently handed individuals would show less own-race bias, driven by an increase in accuracy. Methods: 281 participants completed the study in exchange for course credit. Of those, the sample was restricted to Caucasian (174) and African American (41) individuals. Participants were shown two target faces (one Caucasian and one African American), given several distractor tasks, and then asked to identify the target faces during two sequential line-ups, each terminating when participants made an identification judgment. Results: Continuous handedness score and the match between participant race and target face race were entered into a binary logistic regression predicting correct/incorrect identifications. The overall model was statistically significant, Χ2 (3, N = 430) = 11.036, p = 0.012, Nagelkerke R2 = 0.038, culminating in 76% correct classifications. Analyses of the parameter estimates showed that the racial match, b = 0.53, SE = 0.23, Wald Χ2 (1) = 5.217, p = 0.022, OR = 1.703, and the interaction between handedness and the racial match, b = 0.51, SE = 0.23, Wald test = 4.813, p = 0.028, OR = 1.671, significantly contributed to the model. The model indicated that the probability of identification was similar for own- and cross-race targets amongst inconsistently handed individuals. Consistently handed individuals, by contrast, showed an increase in accuracy for the own-race target and a decrease in accuracy for cross-race targets. Conclusions: Results partially supported the hypotheses. Inconsistently handed individuals did show less own-race bias. This finding, however, seemed to be driven by differences in accuracy amongst consistently handed individuals rather than the hypothesized increase in accuracy amongst inconsistently handed individuals. Underlying hemispheric specialization, as measured by proxy with handedness, may influence the own-race bias in facial recognition. Future research is required to investigate the mechanisms, however, as the direction of the differences was not as hypothesized. Full article
(This article belongs to the Special Issue Advances in Face Perception and How Disorders Affect Face Perception)
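
The reported analysis is a binary logistic regression with an interaction term, which is straightforward to reproduce in outline. The sketch below, using statsmodels, assumes a trial-level table with hypothetical column names (correct, handedness, race_match); it mirrors the form of the model, not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lineup_responses.csv")  # hypothetical trial-level file
# Assumed columns: correct (0/1), handedness (continuous), race_match (0/1).

model = smf.logit("correct ~ handedness * race_match", data=df).fit()
print(model.summary())       # per-parameter Wald tests, cf. the reported b and p
print(np.exp(model.params))  # odds ratios, cf. OR = 1.703 and OR = 1.671
```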

29 pages, 2168 KiB  
Article
Credit Sales and Risk Scoring: A FinTech Innovation
by Faten Ben Bouheni, Manish Tewari, Andrew Salamon, Payson Johnston and Kevin Hopkins
FinTech 2025, 4(3), 31; https://doi.org/10.3390/fintech4030031 - 18 Jul 2025
Viewed by 417
Abstract
This paper explores the effectiveness of an innovative FinTech risk-scoring model to predict the risk-appropriate return for short-term credit sales. The risk score serves to mitigate the information asymmetry between the seller of receivables (“Seller”) and the purchaser (“Funder”), at the same time providing an opportunity for the Funder to earn returns as well as to diversify its portfolio on a risk-appropriate basis. Selling receivables/credit to potential Funders at a risk-appropriate discount also helps Sellers to maintain their short-term financial liquidity and provide the necessary cash flow for operations and other immediate financial needs. We use 18,304 short-term credit-sale transactions between 23 April 2020 and 30 September 2022 from the private FinTech startup Crowdz and its Sustainability, Underwriting, Risk & Financial (SURF) risk-scoring system to analyze the risk/return relationship. The data includes risk scores for both Sellers of receivables (e.g., invoices) along with the Obligors (firms purchasing goods and services from the Seller) on those receivables and provides, as outputs, the mutual gains by the Sellers and the financial institutions or other investors funding the receivables (i.e., the Funders). Our analysis shows that the SURF Score is instrumental in mitigating the information asymmetry between the Sellers and the Funders and provides risk-appropriate periodic returns to the Funders across industries. A comparative analysis shows that the use of SURF technology generates higher risk-appropriate annualized internal rates of return (IRR) as compared to nonuse of the SURF Score risk-scoring system in these transactions. While Sellers and Funders enter into a win-win relationship (in the absence of a default), Sellers of credit instruments are not often scored based on the potential diversification by industry classification. Crowdz’s SURF technology does so and provides Funders with diversification opportunities through numerous invoices of differing amounts and SURF Scores in a wide range of industries. The analysis also shows that Sellers generally have lower financing stability as compared to the Obligors (payers on receivables), a fact captured in the SURF Scores. Full article
(This article belongs to the Special Issue Trends and New Developments in FinTech)
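
To make the return mechanics concrete: a Funder who buys a receivable below face value and collects at maturity earns a holding-period return that annualizes by compounding over the days outstanding. The sketch below shows that arithmetic only; the numbers are illustrative, and nothing here reflects Crowdz's proprietary SURF scoring.

```python
# Minimal return arithmetic for funding a single invoice at a discount.
def annualized_irr(face_value: float, purchase_price: float, days: int) -> float:
    """Annualize the holding-period return of one receivable."""
    period_return = face_value / purchase_price - 1.0
    return (1.0 + period_return) ** (365.0 / days) - 1.0

# e.g. a $10,000 invoice bought at a 1.5% discount and paid after 60 days
print(f"{annualized_irr(10_000, 9_850, 60):.2%}")  # ~9.63% annualized
```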

13 pages, 1018 KiB  
Article
Can the Accrual Anomaly Be Explained by Credit Risk?
by Foong Soon Cheong
Account. Audit. 2025, 1(2), 6; https://doi.org/10.3390/accountaudit1020006 - 14 Jul 2025
Viewed by 456
Abstract
Past studies have observed that the low (high) accrual portfolio in the accrual anomaly consists of firms with high (low) credit risk, and have suggested that the abnormal return in the accrual anomaly arises from buying (selling) stocks with high (low) credit risk. In this paper, I first investigate whether the low accrual portfolio is indeed dominated by firms with higher credit risk. I find that this claim is not necessarily true. Next, I regress the abnormal return on both the level of accrual and credit risk. The regression is repeated using both decile rankings and actual values. In both cases, I find that the level of accrual is always statistically significant and negative. Finally, I investigate the claim that the abnormal return in the accrual anomaly is due to taking a long (short) position in stocks with high (low) credit risk. In each year, to control for credit risk, I first rank all firms by both their level of accrual and credit risk; the rankings for accrual and credit risk are determined independently. I require that in each year, the long position (in the low accrual decile) and the short position (in the high accrual decile) are equally weighted within each credit risk decile. After controlling for credit risk, I find that the abnormal return from Sloan’s accrual trading strategy remains positive, statistically significant, and economically significant. I conclude that the accrual anomaly cannot be explained by credit risk. All findings in this paper are robust to whether credit risk is measured using Altman’s z-score or the Standard & Poor’s credit rating. Full article
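
The credit-risk control described above amounts to independent decile sorts with an equally weighted long-short spread inside each credit-risk decile. A minimal pandas sketch, with assumed column names (accruals, credit_risk, abnormal_return), might look like this:

```python
# Sketch of the double-sort control: independent decile ranks on accruals and
# credit risk, then a long-low/short-high accrual spread within each risk decile.
import pandas as pd

def accrual_spread_controlling_credit_risk(df: pd.DataFrame) -> float:
    df = df.copy()
    df["acc_dec"] = pd.qcut(df["accruals"], 10, labels=False)      # 0 = lowest
    df["risk_dec"] = pd.qcut(df["credit_risk"], 10, labels=False)  # independent sort
    spreads = []
    for _, grp in df.groupby("risk_dec"):
        long = grp.loc[grp["acc_dec"] == 0, "abnormal_return"].mean()
        short = grp.loc[grp["acc_dec"] == 9, "abnormal_return"].mean()
        if pd.notna(long) and pd.notna(short):
            spreads.append(long - short)
    return sum(spreads) / len(spreads)  # hedged spread after the control
```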

24 pages, 787 KiB  
Article
Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance
by Cagla Acun and Olfa Nasraoui
Appl. Sci. 2025, 15(13), 7544; https://doi.org/10.3390/app15137544 - 4 Jul 2025
Viewed by 230
Abstract
Post hoc explanations for black-box machine learning models have been criticized for potentially inaccurate surrogate models and computational burden at prediction time. We propose pre hoc and co hoc explainability frameworks that integrate interpretability directly into the training process through an inherently interpretable white-box model. Pre hoc uses the white-box model to regularize the black-box model, while co hoc jointly optimizes both models with a shared loss function. We extend these frameworks to generate instance-specific explanations using Jensen–Shannon divergence as a regularization term. Our two-phase approach first trains models for fidelity, then generates local explanations through neighborhood-based fine-tuning. Experiments on credit risk scoring and movie recommendation datasets demonstrate superior global and local fidelity compared to LIME, without compromising accuracy. The co hoc framework additionally enhances white-box model accuracy by up to 3%, making it valuable for regulated domains requiring interpretable models. Our approaches provide more faithful and consistent explanations at a lower computational cost than existing methods, offering a promising direction for making machine learning models more transparent and trustworthy while maintaining high prediction accuracy. Full article
(This article belongs to the Special Issue AI Horizons: Present Status and Visions for the Next Era)
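
The co hoc objective can be sketched in a few lines: alongside each model's own task loss, a Jensen–Shannon divergence term penalizes disagreement between the white-box and black-box predicted distributions. The numpy sketch below illustrates the shape of such a loss under stated assumptions; it is not the authors' implementation.

```python
# Illustrative co hoc loss: two task losses plus a JS-divergence fidelity term.
import numpy as np

def js_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Mean JS divergence between two batches of predicted class distributions."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=-1)
    return float(np.mean(0.5 * kl(p, m) + 0.5 * kl(q, m)))

def co_hoc_loss(ce_black: float, ce_white: float,
                p_black: np.ndarray, p_white: np.ndarray,
                lam: float = 0.5) -> float:
    """Shared objective: both models' cross-entropies plus the regularizer."""
    return ce_black + ce_white + lam * js_divergence(p_black, p_white)
```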

34 pages, 4399 KiB  
Article
A Unified Transformer–BDI Architecture for Financial Fraud Detection: Distributed Knowledge Transfer Across Diverse Datasets
by Parul Dubey, Pushkar Dubey and Pitshou N. Bokoro
Forecasting 2025, 7(2), 31; https://doi.org/10.3390/forecast7020031 - 19 Jun 2025
Viewed by 1095
Abstract
Financial fraud detection is a critical application area within the broader domains of cybersecurity and intelligent financial analytics. With the growing volume and complexity of digital transactions, the traditional rule-based and shallow learning models often fall short in detecting sophisticated fraud patterns. This study addresses the challenge of accurately identifying fraudulent financial activities, especially in highly imbalanced datasets where fraud instances are rare and often masked by legitimate behavior. The existing models also lack interpretability, limiting their utility in regulated financial environments. Experiments were conducted on three benchmark datasets: IEEE-CIS Fraud Detection, European Credit Card Transactions, and PaySim Mobile Money Simulation, each representing diverse transaction behaviors and data distributions. The proposed methodology integrates a transformer-based encoder, multi-teacher knowledge distillation, and a symbolic belief–desire–intention (BDI) reasoning layer to combine deep feature extraction with interpretable decision making. The novelty of this work lies in the incorporation of cognitive symbolic reasoning into a high-performance learning architecture for fraud detection. The performance was assessed using key metrics, including the F1-score, AUC, precision, recall, inference time, and model size. Results show that the proposed transformer–BDI model outperformed traditional and state-of-the-art baselines across all datasets, achieving improved fraud detection accuracy and interpretability while remaining computationally efficient for real-time deployment. Full article
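
Of the three ingredients, multi-teacher knowledge distillation is the most standard and can be sketched directly: the student fits the hard labels while matching the averaged softened outputs of several teachers. The PyTorch sketch below shows a conventional form of that loss; the temperature, weighting, and averaging scheme are assumptions, not details from the paper.

```python
# Conventional multi-teacher distillation loss (a sketch, not the paper's code).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          T: float = 2.0, alpha: float = 0.5):
    hard = F.cross_entropy(student_logits, labels)          # fit ground truth
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]).mean(0)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    teacher_probs, reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft                # blended objective
```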

23 pages, 562 KiB  
Article
Enhanced Credit Card Fraud Detection Using Deep Hybrid CLST Model
by Madiha Jabeen, Shabana Ramzan, Ali Raza, Norma Latif Fitriyani, Muhammad Syafrudin and Seung Won Lee
Mathematics 2025, 13(12), 1950; https://doi.org/10.3390/math13121950 - 12 Jun 2025
Viewed by 1449
Abstract
The existing financial payment system has inherent credit card fraud problems that must be solved with strong and effective solutions. In this research, a combined deep learning model that incorporates a convolutional neural network (CNN), long short-term memory (LSTM), and a fully connected output layer is proposed to enhance the accuracy of fraud detection, particularly in addressing the class imbalance problem. The CNN captures spatial features, the LSTM captures sequential information, and the fully connected output layer makes the final decision. Furthermore, SMOTE is used to balance the data, and hyperparameter tuning is utilized to achieve the best model performance. High accuracy metrics are obtained by the proposed CNN-LSTM (CLST) model, with a recall of 83%, precision of 70%, F1-score of 76% for fraudulent transactions, and ROC-AUC of 0.9733. Hyperparameter optimization further enhances the model’s performance to a recall of 99%, precision of 83%, F1-score of 91% for fraudulent cases, and ROC-AUC of 0.9995, representing almost perfect fraud detection along with a low false negative rate. These results demonstrate that optimization of hyperparameters and layers is an effective way to enhance the performance of hybrid deep learning models for financial fraud detection. While prior studies have investigated hybrid structures, this study is distinguished by its optimized integration of CNN and LSTM within a unified layer architecture. Full article
(This article belongs to the Special Issue Machine Learning and Finance)
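
A minimal version of the described pipeline is easy to outline: SMOTE rebalances the training set, then a Conv1D stage feeds an LSTM stage and a sigmoid output. The Keras sketch below uses synthetic stand-in data; layer sizes and epochs are illustrative assumptions, not the paper's tuned configuration.

```python
# Sketch of a CNN-LSTM fraud classifier with SMOTE rebalancing (assumptions only).
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras
from tensorflow.keras import layers

X_train = np.random.rand(1000, 30)                     # stand-in feature matrix
y_train = (np.random.rand(1000) < 0.05).astype(int)    # rare "fraud" class

X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
X_res = X_res.reshape(-1, X_res.shape[1], 1)           # (samples, timesteps, channels)

model = keras.Sequential([
    keras.Input(shape=(X_res.shape[1], 1)),
    layers.Conv1D(32, 3, activation="relu"),           # spatial features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                   # sequential information
    layers.Dense(1, activation="sigmoid"),             # final decision
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auc"), "accuracy"])
model.fit(X_res, y_res, epochs=10, batch_size=256, validation_split=0.1)
```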

23 pages, 1601 KiB  
Article
Level-Wise Feature-Guided Cascading Ensembles for Credit Scoring
by Yao Zou and Guanghua Cheng
Symmetry 2025, 17(6), 914; https://doi.org/10.3390/sym17060914 - 10 Jun 2025
Viewed by 378
Abstract
Accurate credit scoring models are essential for financial risk management, yet conventional approaches often fail to address the complexities of high-dimensional, heterogeneous credit data, particularly in capturing nonlinear relationships and hierarchical dependencies, ultimately compromising predictive performance. To overcome these limitations, this paper introduces the level-wise feature-guided cascading ensemble (LFGCE) model, a novel framework that integrates hierarchical feature selection with cascading ensemble learning to systematically uncover latent feature hierarchies. The LFGCE framework leverages symmetry principles in its cascading architecture, where each ensemble layer maintains structural symmetry in processing its assigned feature subset while asymmetrically contributing to the final prediction through hierarchical information fusion. The LFGCE model operates through two synergistic mechanisms: (1) a hierarchical feature selection strategy that quantifies feature importance and partitions the feature space into progressively predictive subsets, thereby reducing dimensionality while preserving discriminative information, and (2) a cascading ensemble architecture where each layer specializes in learning risk patterns from its assigned feature subset, while iteratively incorporating outputs from preceding layers to enable cross-level information fusion. This dual process of hierarchical feature refinement and layered ensemble learning allows the LFGCE to extract deep, robust representations of credit risk. Empirical validation on four public credit datasets (Australian Credit, German Credit, Japan Credit, and Taiwan Credit) demonstrates that the LFGCE achieves an average AUC improvement of 0.23% over XGBoost (Python 3.13) and 0.63% over deep neural networks, confirming its superior predictive accuracy. Full article
(This article belongs to the Special Issue Symmetric Studies of Distributions in Statistical Models)
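
As a rough reconstruction of the cascading idea, and under assumptions well beyond what the abstract states, one can rank features by importance, split them into tiers, and train one ensemble per tier with the previous tier's predictions appended as extra inputs:

```python
# Speculative sketch of a level-wise feature-guided cascade (not the LFGCE code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_cascade(X, y, n_levels=3):
    # Rank features by importance from a preliminary ensemble.
    order = np.argsort(RandomForestClassifier(random_state=0)
                       .fit(X, y).feature_importances_)[::-1]
    tiers = np.array_split(order, n_levels)    # most predictive tier first
    models, carry = [], np.empty((len(X), 0))
    for tier in tiers:
        # Each level sees its feature subset plus the previous levels' outputs.
        X_level = np.hstack([X[:, tier], carry])
        m = RandomForestClassifier(random_state=0).fit(X_level, y)
        carry = np.hstack([carry, m.predict_proba(X_level)[:, [1]]])
        models.append((tier, m))
    return models  # at inference, chain predictions in the same tier order
```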

28 pages, 2604 KiB  
Article
A Hybrid Approach to Credit Risk Assessment Using Bill Payment Habits Data and Explainable Artificial Intelligence
by Cem Bulut and Emel Arslan
Appl. Sci. 2025, 15(10), 5723; https://doi.org/10.3390/app15105723 - 20 May 2025
Viewed by 721
Abstract
Credit risk is one of the most important issues in the rapidly growing and developing finance sector. This study utilized a dataset containing real information about the bill payments of individuals who made transactions with a payment institution operating in Turkey. First, the transactions in the dataset were analyzed by bill type and individual, and features reflecting payment habits were extracted. For the target class, real credit scores generated by the Credit Registry Office for the individuals whose payment habits were extracted were used. The dataset is a multi-class, unbalanced, and alternative dataset. Therefore, the dataset was prepared for analysis using data cleaning, feature selection, and sampling techniques. Then, the dataset was classified using various classification and evaluation methods. The best results were obtained with a model consisting of the ANOVA F-test, SMOTE, and Extra Trees algorithms. With this model, 80.49% accuracy, 79.89% precision, and a 97.04% AUC were obtained. These results are quite efficient for an alternative dataset with 10 classes. This model was made explainable and interpretable using LIME and SHAP, which are XAI techniques. This study presents a new hybrid model for credit risk assessment based on a multi-class and imbalanced alternative dataset and machine learning. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
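
The winning combination maps naturally onto an imbalanced-learn pipeline. A plausible sketch on synthetic stand-in data, with k and the estimator settings assumed rather than taken from the paper:

```python
# ANOVA F-test feature selection + SMOTE + Extra Trees, as an imblearn pipeline.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline          # resampling-aware pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=40,
                           weights=[0.9], random_state=42)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=42)
pipe = Pipeline([
    ("anova", SelectKBest(f_classif, k=20)),    # k is an assumption
    ("smote", SMOTE(random_state=42)),          # applied to training folds only
    ("clf", ExtraTreesClassifier(n_estimators=300, random_state=42)),
])
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```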

26 pages, 2575 KiB  
Article
Comparing the Effectiveness of Machine Learning and Deep Learning Models in Student Credit Scoring: A Case Study in Vietnam
by Nguyen Thi Hong Thuy, Nguyen Thi Vinh Ha, Nguyen Nam Trung, Vu Thi Thanh Binh, Nguyen Thu Hang and Vu The Binh
Risks 2025, 13(5), 99; https://doi.org/10.3390/risks13050099 - 20 May 2025
Viewed by 1467
Abstract
In emerging markets like Vietnam, where student borrowers often lack traditional credit histories, accurately predicting loan eligibility remains a critical yet underexplored challenge. While machine learning and deep learning techniques have shown promise in credit scoring, their comparative performance in the context of student loans has not been thoroughly investigated. This study evaluates and compares the predictive effectiveness of four supervised learning models—Random Forest, Gradient Boosting, Support Vector Machine, and Deep Neural Network (implemented with PyTorch version 2.6.0)—in forecasting student credit eligibility. Primary data were collected from 1024 university students through structured surveys covering academic, financial, and personal variables. The models were trained and tested on the same dataset and evaluated using a comprehensive set of classification and regression metrics. The findings reveal that each model exhibits distinct strengths. The deep neural network achieved the highest classification accuracy (85.55%), while Random Forest demonstrated robust performance, particularly in providing balanced results across classification metrics. Gradient Boosting was effective in recall-oriented tasks, and the support vector machine demonstrated strong precision for the positive class, although its recall was lower than that of the other models. The study highlights the importance of aligning model selection with specific application goals, such as prioritizing accuracy, recall, or interpretability. It offers practical implications for financial institutions and universities in developing machine learning and deep learning tools for student loan eligibility prediction. Future research should consider longitudinal data, behavioral factors, and hybrid modeling approaches to further optimize predictive performance in educational finance. Full article
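
The comparison protocol is conventional and easy to sketch: fit each model family on the same split and report a shared set of classification metrics. In the sketch below, an sklearn MLP stands in for the paper's PyTorch network, and the data are synthetic stand-ins.

```python
# Side-by-side evaluation of the four model families on one shared split.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier   # stand-in for the PyTorch DNN
from sklearn.svm import SVC

X, y = make_classification(n_samples=1024, n_features=25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(),
    "DNN (stand-in)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}
for name, m in models.items():
    m.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, m.predict(X_test)))
```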

46 pages, 1999 KiB  
Systematic Review
Machine Learning and Metaheuristics Approach for Individual Credit Risk Assessment: A Systematic Literature Review
by Álex Paz, Broderick Crawford, Eric Monfroy, José Barrera-García, Álvaro Peña Fritz, Ricardo Soto, Felipe Cisternas-Caneo and Andrés Yáñez
Biomimetics 2025, 10(5), 326; https://doi.org/10.3390/biomimetics10050326 - 17 May 2025
Viewed by 739
Abstract
Credit risk assessment plays a critical role in financial risk management, focusing on predicting borrower default to minimize losses and ensure compliance. This study systematically reviews 23 empirical articles published between 2019 and 2023, highlighting the integration of machine learning and optimization techniques, particularly bio-inspired metaheuristics, for feature selection in individual credit risk assessment. These nature-inspired algorithms, derived from biological and ecological processes, align with bio-inspired principles by mimicking natural intelligence to solve complex problems in high-dimensional feature spaces. Unlike prior reviews that adopt broader scopes combining corporate, sovereign, and individual contexts, this work focuses exclusively on methodological strategies for individual credit risk. It categorizes the use of machine learning algorithms, feature selection methods, and metaheuristic optimization techniques, including genetic algorithms, particle swarm optimization, and biogeography-based optimization. To strengthen transparency and comparability, this review also synthesizes classification performance metrics—such as accuracy, AUC, F1-score, and recall—reported across benchmark datasets. Although no unified experimental comparison was conducted due to heterogeneity in study protocols, this structured summary reveals consistent trends in algorithm effectiveness and evaluation practices. The review concludes with practical recommendations and outlines future research directions to improve fairness, scalability, and real-time application in credit risk modeling. Full article
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)
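
A toy version of the wrapper-style metaheuristic feature selection these studies survey, here with a genetic algorithm: each chromosome is a feature bit-mask scored by cross-validated accuracy. Operators, population size, and the base classifier are illustrative assumptions.

```python
# Toy genetic-algorithm feature selection (a sketch of the surveyed approach).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=15, p_mut=0.05):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5            # random initial masks
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep top half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            child ^= rng.random(n) < p_mut              # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    best = max(population, key=lambda ind: fitness(ind, X, y))
    return np.flatnonzero(best)                         # selected feature indices
```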

37 pages, 4457 KiB  
Article
Enhancing Privacy in IoT-Enabled Digital Infrastructure: Evaluating Federated Learning for Intrusion and Fraud Detection
by Amogh Deshmukh, Peplluis Esteva de la Rosa, Raul Villamarin Rodriguez and Sandeep Dasari
Sensors 2025, 25(10), 3043; https://doi.org/10.3390/s25103043 - 12 May 2025
Viewed by 1272
Abstract
Challenges in implementing machine learning (ML) include expanding data resources within the finance sector. Banking data with significant financial implications are highly confidential. Combining user information from different institutions for banking purposes can result in diverse breaches and privacy violations. To address these issues, federated learning (FL) with the Flower framework is utilized to protect the privacy of individual organizations while still collaborating through separate models to create a unified global model. However, joint training on datasets with diverse distributions can lead to suboptimal learning and additional privacy concerns. To mitigate this, solutions using federated averaging (FedAvg), federated proximal (FedProx), and federated optimization (FedOpt) methods have been proposed. These methods work with data locality during training at local clients without exposing data, while maintaining global convergence to enhance the privacy of local models within the framework. In this analysis, the UNSW-NB15 and credit datasets were employed, utilizing precision, recall, accuracy, F1-score, ROC, and AUC as performance indicators to demonstrate the effectiveness of the proposed strategy using FedAvg, FedProx, and FedOpt. The proposed algorithms were subjected to an empirical study, which revealed significant performance benefits when using the Flower framework. Consequently, experiments conducted over 50 rounds on the UNSW-NB15 dataset achieved accuracies of 99.87% for both FedAvg and FedProx and 99.94% for FedOpt. Similarly, with the credit dataset under the same conditions, FedAvg and FedProx achieved accuracies of 99.95% and 99.94%, respectively. These results indicate that the proposed framework is highly effective and can be applied in real-world applications across various domains for secure and privacy-preserving collaborative machine learning. Full article
(This article belongs to the Section Internet of Things)
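
At the heart of FedAvg is a single aggregation rule: the server averages clients' model weights, weighted by local sample counts, without ever seeing the data. A minimal numpy sketch of that rule follows; Flower's built-in FedAvg strategy performs this aggregation behind its API.

```python
# Sample-size-weighted FedAvg aggregation of per-layer weight arrays.
import numpy as np

def fed_avg(client_weights: list[list[np.ndarray]],
            client_sizes: list[int]) -> list[np.ndarray]:
    """Average each layer across clients, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]
```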

10 pages, 1499 KiB  
Article
Evaluation of Content Quality of Online Health Information by Global Quality Score: A Case Study of Researchers Misnaming It and Citing Secondary Sources
by Andy Wai Kan Yeung
Publications 2025, 13(2), 23; https://doi.org/10.3390/publications13020023 - 1 May 2025
Viewed by 734
Abstract
The Global Quality Score (GQS) is one of the most frequently used tools to evaluate the content quality of online health information. To the author’s knowledge, it is frequently misnamed as the Global Quality Scale, and occasionally secondary sources are cited as the original source of the tool. This work aimed to document the current situation, particularly citation practices among published studies. Web of Science, Scopus, and PubMed were queried to identify papers that mentioned the use of the GQS. Among a total of 411 analyzed papers, 45.0% misnamed it as the Global Quality Scale, and 46.5% did not cite the primary source published in 2007 to credit it as the original source. Another 80 references were also cited from time to time as the source of the GQS, led by a secondary source published in 2012. There was a decreasing trend in citing the primary source when using the GQS. Among the 12 papers that claimed the GQS was validated, half cited the primary source to justify the claim, but the original publication did not in fact mention anything about validation. To conclude, future studies should name and cite the GQS properly to minimize confusion. Full article

30 pages, 4529 KiB  
Article
Credit Rating Model Based on Improved TabNet
by Shijie Wang and Xueyong Zhang
Mathematics 2025, 13(9), 1473; https://doi.org/10.3390/math13091473 - 30 Apr 2025
Viewed by 904
Abstract
Under the rapid evolution of financial technology, traditional credit risk management paradigms relying on expert experience and singular algorithmic architectures have proven inadequate in addressing complex decision-making demands arising from dynamically correlated multidimensional risk factors and heterogeneous data fusion. This manuscript proposes an enhanced credit rating model based on an improved TabNet framework. First, the Kaggle “Give Me Some Credit” dataset undergoes preprocessing, including data balancing and partitioning into training, testing, and validation sets. Subsequently, the model architecture is refined through the integration of a multi-head attention mechanism to extract both global and local feature representations. Bayesian optimization is then employed to accelerate hyperparameter selection and automate a parameter search for TabNet. To further enhance classification and predictive performance, a stacked ensemble learning approach is implemented: the improved TabNet serves as the feature extractor, while XGBoost (Extreme Gradient Boosting), LightGBM (Light Gradient Boosting Machine), CatBoost (Categorical Boosting), KNN (K-Nearest Neighbors), and SVM (Support Vector Machine) are selected as base learners in the first layer, with XGBoost acting as the meta-learner in the second layer. The experimental results demonstrate that the proposed TabNet-based credit rating model outperforms benchmark models across multiple metrics, including accuracy, precision, recall, F1-score, AUC (Area Under the Curve), and KS (Kolmogorov–Smirnov statistic). Full article
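
The second-stage stacking described above translates almost directly into scikit-learn. A hedged sketch follows, with every hyperparameter assumed and synthetic data standing in for the TabNet-extracted feature matrix.

```python
# Stacking: five base learners feed an XGBoost meta-learner via out-of-fold folds.
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

X_train, y_train = make_classification(n_samples=1000, n_features=32,
                                       random_state=0)  # stand-in features

stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier()),
        ("lgbm", LGBMClassifier()),
        ("cat", CatBoostClassifier(verbose=0)),
        ("knn", KNeighborsClassifier()),
        ("svm", SVC(probability=True)),
    ],
    final_estimator=XGBClassifier(),   # meta-learner in the second layer
    cv=5,                              # out-of-fold stacking
)
stack.fit(X_train, y_train)
```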

30 pages, 2587 KiB  
Systematic Review
Towards Fair AI: Mitigating Bias in Credit Decisions—A Systematic Literature Review
by José Rômulo de Castro Vieira, Flavio Barboza, Daniel Cajueiro and Herbert Kimura
J. Risk Financial Manag. 2025, 18(5), 228; https://doi.org/10.3390/jrfm18050228 - 24 Apr 2025
Viewed by 3011
Abstract
The increasing adoption of artificial intelligence algorithms is redefining decision-making across various industries. In the financial sector, where automated credit granting has undergone profound changes, this transformation raises concerns about biases perpetuated or introduced by AI systems. This study investigates the methods used to identify and mitigate biases in AI models applied to credit granting. We conducted a systematic literature review using the IEEE, Scopus, Web of Science, and Science Direct databases, covering the period from 1 January 2013 to 1 October 2024. From the 414 identified articles, 34 were selected for detailed analysis. Most studies are empirical and quantitative, focusing on fairness in outcomes and biases present in datasets. Preprocessing techniques dominated as the approach for bias mitigation, often relying on public academic datasets. Gender and race were the most studied sensitive attributes, with statistical parity being the most commonly used fairness metric. The findings reveal a maturing research landscape that prioritizes fairness in model outcomes and the mitigation of biases embedded in historical data. However, only a quarter of the papers report more than one fairness metric, limiting comparability across approaches. The literature remains largely focused on a narrow set of sensitive attributes, with little attention to intersectionality or alternative sources of bias. Furthermore, no study employed causal inference techniques to identify proxy discrimination. Despite some promising results—where fairness gains exceed 30% with minimal accuracy loss—significant methodological gaps persist, including the lack of standardized metrics, overreliance on legacy data, and insufficient transparency in model pipelines. Future work should prioritize developing advanced bias mitigation methods, exploring sensitive attributes, standardizing fairness metrics, improving model explainability, reducing computational complexity, enhancing synthetic data generation, and addressing the legal and ethical challenges of algorithms. Full article
(This article belongs to the Section Risk)
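
Since statistical parity is the most commonly used fairness metric in the reviewed papers, a concrete definition helps: the gap in approval rates across groups. A minimal pandas sketch with assumed column names:

```python
# Statistical parity difference: gap in positive-decision rates across groups.
import pandas as pd

def statistical_parity_difference(df: pd.DataFrame,
                                  group_col: str, pred_col: str) -> float:
    """Smallest minus largest group approval rate; 0 means parity."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.min() - rates.max())

# e.g. statistical_parity_difference(decisions, "gender", "approved")
```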
