Data Leakage and Deceptive Performance: A Critical Examination of Credit Card Fraud Detection Methodologies
Abstract
1. Introduction
- How were categorical, ordinal, or fuzzy variables handled?
- What strategy was used for data splitting (e.g., random, stratified, time-based)?
- When and how was normalization or standardization applied, and to which subsets?
- Were oversampling or undersampling techniques limited to the training set, or did they inadvertently affect the test data?
- Was feature selection or dimensionality reduction performed before or after data splitting?
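These questions matter because their answers determine whether the test data remain truly unseen. As a point of reference, the sketch below shows one leakage-free ordering of the main preprocessing steps; it assumes a generic feature matrix `X` and binary label vector `y` and is illustrative rather than a prescription tied to any particular study.

```python
# A minimal, leakage-free ordering: split first, then fit every
# preprocessing step on the training portion only (illustrative sketch).
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE

# X: feature matrix, y: binary fraud labels (assumed already loaded)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# 1. Scaling: fit on the training set only, then apply to both subsets.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# 2. Resampling: applied to the training set only; the test set keeps
#    its original, imbalanced class distribution.
X_train_res, y_train_res = SMOTE(random_state=42).fit_resample(X_train_s, y_train)
```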
2. Credit Card Fraud Detection
2.1. Dataset Description
2.2. Handling Imbalanced Data
2.2.1. Oversampling Methods
- Random Oversampling: Duplicates minority samples without introducing new information. This method is simple but can lead to overfitting due to repeated samples.
- SMOTE (Synthetic Minority Oversampling Technique) [4]: Generates synthetic samples between nearest neighbors of minority instances. This method helps to create a more balanced dataset and can improve model performance.
- ADASYN (Adaptive Synthetic Sampling) [5]: Focuses on generating synthetic samples for harder-to-classify minority samples through adaptive weighting. This method is particularly useful for improving the classification of difficult minority instances.
- Borderline-SMOTE [6]: Creates synthetic samples near class boundaries for better discrimination. This method helps to improve the classification of minority instances near the decision boundary.
2.2.2. Undersampling Methods
- Random Undersampling (RUS): Randomly removes majority class samples. This method is simple but can lead to loss of important information.
- NearMiss [7]: Selects majority samples based on proximity to minority instances. This method helps to retain informative majority samples.
- Tomek Links [8]: Removes borderline samples to clarify decision boundaries. This method helps to improve the classification of minority instances by removing ambiguous majority samples.
- Cluster Centroids [9]: Applies K-means clustering to condense the majority class. This method helps to reduce the number of majority samples while retaining the overall distribution.
2.2.3. Hybrid Methods
- SMOTE-Tomek, SMOTE-ENN [10]: Combine oversampling with data cleaning for improved balance. These methods help to generate synthetic minority samples and remove ambiguous majority samples.
- SMOTEBoost [11]: Integrates SMOTE with boosting to enhance weak classifiers. This method helps to improve the performance of weak classifiers by generating synthetic minority samples.
- SMOTE-SVM [12]: Uses SVM to guide synthetic sample generation. This method helps to generate synthetic minority samples based on the decision boundary of an SVM classifier.
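To make the distinction between these families concrete, the short sketch below invokes one representative method from each group using the imbalanced-learn package (whose documentation appears in the reference list). The variable names `X_train` and `y_train` are illustrative, and resampling is deliberately restricted to the training split.

```python
# One representative resampling method per family (imbalanced-learn);
# all resampling operates on the training split only.
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler, TomekLinks
from imblearn.combine import SMOTETomek

# Oversampling: synthesize minority samples up to the majority count.
X_sm, y_sm = SMOTE(k_neighbors=5, random_state=0).fit_resample(X_train, y_train)

# Undersampling: discard majority samples (fast, but loses information)
# or remove ambiguous majority samples forming Tomek links.
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X_train, y_train)
X_tl, y_tl = TomekLinks().fit_resample(X_train, y_train)

# Hybrid: oversample with SMOTE, then clean the borderline pairs.
X_st, y_st = SMOTETomek(random_state=0).fit_resample(X_train, y_train)
```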
2.3. Privacy Constraints and Feature Engineering
2.4. Performance Metrics
- True Positives (TP): Correctly predicted positive instances (e.g., fraudulent transactions correctly identified as fraud).
- True Negatives (TN): Correctly predicted negative instances (e.g., legitimate transactions correctly identified as legitimate).
- False Positives (FP): Incorrectly predicted positive instances (e.g., legitimate transactions flagged as fraud; Type I error).
- False Negatives (FN): Incorrectly predicted negative instances (e.g., fraudulent transactions missed by the model; Type II error).
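From these four counts, the metrics reported throughout the reviewed works follow directly:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{Precision} = \frac{TP}{TP + FP},$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.$$

Accuracy alone is uninformative on heavily imbalanced data, which is why precision, recall, and the F1 score are emphasized here.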
3. Literature Review
- Data Leakage in Preprocessing: Numerous studies perform critical preprocessing steps (normalization, SMOTE, etc.) before train-test splitting, artificially inflating performance metrics through information leakage.
- Intentional Vagueness in Methodology: Many works deliberately omit crucial implementation details, making replication difficult and raising questions about result validity. This includes unspecified parameter settings, ambiguous preprocessing sequences, silence about stratified sampling, and unexplained architectural choices.
- Inadequate Temporal Validation: Most approaches fail to account for the time-dependent nature of transaction data, neglecting temporal splitting, which is essential for real-world deployment.
- Unjustified Method Complexity: There’s a tendency to apply unnecessarily sophisticated techniques without first ensuring proper data preparation and validation, often obscuring fundamental methodological flaws.
- Overemphasis on Recall: Many works prioritize recall metrics at the expense of precision, leading to models with high false positive rates that would be impractical in production environments.
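As a minimal illustration of the temporal-validation point above, the sketch below orders transactions by time and trains on the earliest portion only. It assumes a pandas DataFrame `df` with a time stamp column (here called `Time`) and a `Class` label; it is not taken from any of the reviewed studies.

```python
import pandas as pd

# Illustrative time-based split: train on earlier transactions,
# evaluate on later ones (no shuffling across the temporal boundary).
df = df.sort_values("Time")
cutoff = int(len(df) * 0.8)               # earliest 80% of transactions
train, test = df.iloc[:cutoff], df.iloc[cutoff:]

X_train, y_train = train.drop(columns="Class"), train["Class"]
X_test, y_test = test.drop(columns="Class"), test["Class"]
```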
4. Our Flawed Methodology
4.1. Synthetic Minority Over-Sampling Technique (SMOTE)
SMOTE generates a synthetic sample by interpolating between a minority-class sample and one of its k nearest minority-class neighbors:

$$x_{\text{new}} = x_i + \lambda\,(x_{\text{nn}} - x_i), \qquad \lambda \in [0, 1],$$

where:
- $x_{\text{new}}$ is the new synthetic data point,
- $x_i$ is the original minority class data point,
- $x_{\text{nn}}$ is one of the k nearest neighbors of $x_i$, and
- $\lambda$ is an arbitrary value between zero and one.
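For intuition, this interpolation step can be written in a few lines of NumPy. The snippet is a didactic sketch of the formula only, not the full SMOTE algorithm (which also performs the k-nearest-neighbor search and balances the class counts).

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_point(x_i: np.ndarray, x_nn: np.ndarray) -> np.ndarray:
    """Interpolate one synthetic sample between a minority-class point
    and one of its k nearest minority-class neighbors."""
    lam = rng.uniform(0.0, 1.0)           # lambda drawn from [0, 1]
    return x_i + lam * (x_nn - x_i)       # x_new lies on the segment [x_i, x_nn]

# Example: two minority-class points in a 2-D feature space
x_i = np.array([1.0, 2.0])
x_nn = np.array([3.0, 1.0])
print(smote_point(x_i, x_nn))
```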
4.2. The Multilayer Perceptron (MLP) Module
- Input Layer: Receives raw data and distributes it to the subsequent layer. Each neuron in this layer corresponds to a feature or attribute of the input data.
- Hidden Layers: Perform computations and feature extraction, enabling the network to capture intricate patterns in the data.
- Output Layer: Produces the final prediction or classification result. The number of neurons in this layer is determined by the specific task (e.g., one neuron for binary classification, multiple neurons for multi-class classification).
For a network with a single hidden layer, the forward pass can be written as

$$h = \mathrm{ReLU}(W_1 x + b_1), \qquad \hat{y} = \sigma(W_2 h + b_2),$$

where:
- $x$ is the input feature vector,
- $W_1$ and $b_1$ are the weights and biases for the hidden layer,
- $\mathrm{ReLU}(\cdot)$ is the ReLU activation function,
- $W_2$ and $b_2$ are the weights and bias for the output layer, and
- $\sigma(\cdot)$ is the sigmoid activation function.
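A minimal Keras sketch of such a network is shown below, using the single 32-neuron hidden layer configuration evaluated in Section 5. The optimizer, batch size, and epoch count are illustrative assumptions rather than the exact training settings.

```python
# Minimal sketch of the MLP described above: one ReLU hidden layer and a
# single sigmoid output neuron for binary fraud classification.
from tensorflow import keras

n_features = X_train.shape[1]   # e.g., 30 for the ULB credit card dataset

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(32, activation="relu"),    # h = ReLU(W1 x + b1)
    keras.layers.Dense(1, activation="sigmoid"),  # y_hat = sigma(W2 h + b2)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
model.fit(X_train, y_train, epochs=10, batch_size=2048, validation_split=0.1)
```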
5. Results and Analysis
- A single output neuron with data leakage can outperform many sophisticated models
- The data leakage from improper SMOTE application provides more benefit than architectural complexity
- Such results are artificially inflated and would not generalize to real-world scenarios
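The flawed ordering behind this inflation can be summarized in a few lines; the snippet below is a schematic illustration with illustrative variable names, not the exact experimental script.

```python
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# FLAWED ordering: resample the full dataset, then split.
# The test set now contains synthetic fraud samples interpolated from
# points that end up in the training set, so the classifier is evaluated
# on near-duplicates of its training data and recall/F1 are inflated.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
X_train, X_test, y_train, y_test = train_test_split(
    X_res, y_res, test_size=0.2, stratify=y_res, random_state=0)
```

The correct ordering (split first, then resample only the training data) is the one sketched in the Introduction.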
MLP with 1 Hidden Layer of 32 Neurons

| No. | Stratified | Scaling | Validation | SMOTE | Accuracy | Precision | Recall | F1 | PRC/ROC |
|---|---|---|---|---|---|---|---|---|---|
| 1 | no | no | no | no | 0.999 | 0.867 | 0.637 | 0.735 | Figure 9 |
| 2 | yes | no | no | no | 0.999 | 0.911 | 0.415 | 0.570 | Figure 10 |
| 3 | yes | yes | no | no | 0.997 | 0.814 | 0.847 | 0.830 | Figure 11 |
| 4 | yes | yes | yes | yes | 0.998 | 0.491 | 0.867 | 0.627 | Figure 12 |
| 5 * | yes | yes | yes | yes | 0.998 | 0.522 | 0.837 | 0.643 | Figure 13 |
- Data leakage can overshadow architectural improvements.
- Gains from increasing neurons are marginal compared to leakage-induced boosts.
- Near-perfect metrics with SMOTE (e.g., a recall of 0.999 at N = 16) are statistically implausible.
- Methodology must precede model complexity.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| ADASYN | Adaptive Synthetic Sampling |
| ANN | Artificial Neural Network |
| API | Application Programming Interface |
| AUC | Area Under the Curve |
| BN | Batch Normalization |
| CAE | Convolutional Autoencoder |
| CNN | Convolutional Neural Network |
| DAE | Denoising Autoencoder |
| DL | Deep Learning |
| DT | Decision Tree |
| ET | Extra Trees |
| FFNN | Feed Forward Neural Network |
| FN | False Negative |
| FP | False Positive |
| GRU | Gated Recurrent Unit |
| HRSC | High-Resolution Ship Collections |
| LDA | Linear Discriminant Analysis |
| LSTM | Long Short-Term Memory |
| ML | Machine Learning |
| MLP | Multilayer Perceptron |
| NB | Naive Bayes |
| PCA | Principal Component Analysis |
| PRC | Precision-Recall Curve |
| RF | Random Forest |
| ROC | Receiver Operating Characteristic |
| RUS | Random Undersampling |
| SMOTE | Synthetic Minority Over-sampling Technique |
| SVM | Support Vector Machine |
| TN | True Negative |
| TP | True Positive |
| UMAP | Uniform Manifold Approximation and Projection |
| XGBoost | Extreme Gradient Boosting |
References
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Pozzolo, A.D.; Caelen, O.; Johnson, R.A.; Bontempi, G. Calibrating Probability with Undersampling for Unbalanced Classification. In Proceedings of the Symposium on Computational Intelligence and Data Mining (CIDM), Cape Town, South Africa, 7–10 December 2015. [Google Scholar]
- Nguyen, T.T.; Tahir, H.; Abdelrazek, M.; Babar, A. Deep learning methods for credit card fraud detection. arXiv 2020, arXiv:2012.03754. [Google Scholar] [CrossRef]
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
- He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1322–1328. [Google Scholar]
- Han, H.; Wang, W.Y.; Mao, B.H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In Proceedings of the International Conference on Intelligent Computing, Hefei, China, 23–26 August 2005; pp. 878–887. [Google Scholar]
- Mani, I.; Zhang, I. kNN approach to unbalanced data distributions: A case study involving information extraction. In Proceedings of the Workshop on Learning from Imbalanced Datasets, ICML, Washington, DC, USA, 21–24 August 2003; Volume 126, pp. 1–7. [Google Scholar]
- Tomek, I. An experiment with the edited nearest-neighbor rule. IEEE Trans. Syst. Man Cybern. 1976, 6, 448–452. [Google Scholar]
- Megha-Natarajan. Cluster Centroid. 2023. Available online: https://medium.com/@megha.natarajan/understanding-the-intuition-behind-cluster-centroids-smote-and-smoteen-techniques-for-dealing-058f3233abeb (accessed on 23 April 2024).
- Batista, G.E.; Prati, R.C.; Monard, M.C. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl. 2004, 6, 20–29. [Google Scholar] [CrossRef]
- Chawla, N.V.; Lazarevic, A.; Hall, L.O.; Bowyer, K.W. SMOTEBoost: Improving prediction of the minority class in boosting. In Proceedings of the Knowledge Discovery in Databases: PKDD 2003: 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Cavtat-Dubrovnik, Croatia, 22–26 September 2003; pp. 107–119. [Google Scholar]
- Nguyen, H.M.; Cooper, E.W.; Kamei, K. Borderline over-sampling for imbalanced data classification. Int. J. Knowl. Eng. Soft Data Paradig. 2011, 3, 4–21. [Google Scholar] [CrossRef]
- Fiore, U.; De Santis, A.; Perla, F.; Zanetti, P.; Palmieri, F. Using generative adversarial networks for improving classification effectiveness in credit card fraud detection. Inf. Sci. 2019, 479, 448–455. [Google Scholar] [CrossRef]
- Aditya-Mishra. METRICS. 2018. Available online: https://towardsdatascience.com/metrics-to-evaluate-your-machine-learning-algorithm-f10ba6e38234 (accessed on 10 September 2024).
- Cherif, A.; Badhib, A.; Ammar, H.; Alshehri, S.; Kalkatawi, M.; Imine, A. Credit card fraud detection in the era of disruptive technologies: A systematic review. J. King Saud Univ.-Inf. Sci. 2023, 35, 145–174. [Google Scholar] [CrossRef]
- Hafez, I.Y.; Hafez, A.Y.; Saleh, A.; Abd El-Mageed, A.A.; Abohany, A.A. A systematic review of AI-enhanced techniques in credit card fraud detection. J. Big Data 2025, 12, 6. [Google Scholar] [CrossRef]
- Gbadebo-Ogunmefun, S.; Oketola, A.; Gbadebo-Ogunmefun, T.; Agbeja, A. A Review of Credit Card Fraud Detection Using Machine Learning Algorithms. 2023. Available online: https://www.researchgate.net/publication/376516430_A_Review_of_Credit_Card_Fraud_Detection_using_Machine_Learning_Algorithms (accessed on 3 June 2025).
- Mienye, I.D.; Jere, N. Deep Learning for Credit Card Fraud Detection: A Review of Algorithms, Challenges, and Solutions. IEEE Access 2024, 12, 96893–96910. [Google Scholar] [CrossRef]
- Sharma, P.; Banerjee, S.; Tiwari, D.; Patni, J.C. Machine learning model for credit card fraud detection-a comparative analysis. Int. Arab J. Inf. Technol. 2021, 18, 789–796. [Google Scholar] [CrossRef]
- Benchaji, I.; Douzi, S.; El Ouahidi, B.; Jaafari, J. Enhanced credit card fraud detection based on attention mechanism and LSTM deep model. J. Big Data 2021, 8, 1–21. [Google Scholar] [CrossRef]
- Karthika, J.; Senthilselvi, A. Smart credit card fraud detection system based on dilated convolutional neural network with sampling technique. Multimed. Tools Appl. 2023, 82, 31691–31708. [Google Scholar] [CrossRef]
- Esenogho, E.; Mienye, I.D.; Swart, T.G.; Aruleba, K.; Obaido, G. A neural network ensemble with feature engineering for improved credit card fraud detection. IEEE Access 2022, 10, 16400–16407. [Google Scholar] [CrossRef]
- Sadgali, I.; Sael, N.; Benabbou, F. Bidirectional gated recurrent unit for improving classification in credit card fraud detection. Indones. J. Electr. Eng. Comput. Sci. (IJEECS) 2021, 21, 1704–1712. [Google Scholar] [CrossRef]
- Saad Rubaidi, Z.; Ben Ammar, B.; Ben Aouicha, M. Comparative Data Oversampling Techniques with Deep Learning Algorithms for Credit Card Fraud Detection. In Proceedings of the International Conference on Intelligent Systems Design and Applications, Seattle, WA, USA, 12–14 December 2022; pp. 286–296. [Google Scholar]
- Rtayli, N. An Efficient Deep Learning Classification Model for Predicting Credit Card Fraud on Skewed Data. J. Inf. Secur. Cybercrimes Res. 2022, 5, 61–75. [Google Scholar] [CrossRef]
- Salekshahrezaee, Z.; Leevy, J.L.; Khoshgoftaar, T.M. Feature extraction for class imbalance using a convolutional autoencoder and data sampling. In Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), Washington, DC, USA, 1–3 November 2021; pp. 217–223. [Google Scholar]
- Zou, J.; Zhang, J.; Jiang, P. Credit Card Fraud Detection Using Autoencoder Neural Network. arXiv 2019, arXiv:1908.11553. [Google Scholar] [CrossRef]
- Varmedja, D.; Karanovic, M.; Sladojevic, S.; Arsenovic, M.; Anderla, A. Credit Card Fraud Detection—Machine Learning methods. In Proceedings of the 2019 18th International Symposium INFOTEH-JAHORINA (INFOTEH), East Sarajevo, Bosnia and Herzegovina, 20–22 March 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Mizher, M.Z.; Nassif, A.B. Deep CNN approach for Unbalanced Credit Card Fraud Detection Data. In Proceedings of the 2023 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 20–23 February 2023; pp. 1–7. [Google Scholar] [CrossRef]
- Ajitha, E.; Sneha, S.; Makesh, S.; Jaspin, K. A Comparative Analysis of Credit Card Fraud Detection with Machine Learning Algorithms and Convolutional Neural Network. In Proceedings of the 2023 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), Chennai, India, 25–26 May 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Yousuf Ali, M.N.; Kabir, T.; Raka, N.L.; Siddikha Toma, S.; Rahman, M.L.; Ferdaus, J. SMOTE Based Credit Card Fraud Detection Using Convolutional Neural Network. In Proceedings of the 2022 25th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 17–19 December 2022; pp. 55–60. [Google Scholar] [CrossRef]
- Aurna, N.F.; Hossain, M.D.; Taenaka, Y.; Kadobayashi, Y. Federated Learning-Based Credit Card Fraud Detection: Performance Analysis with Sampling Methods and Deep Learning Algorithms. In Proceedings of the 2023 IEEE International Conference on Cyber Security and Resilience (CSR), Venice, Italy, 31 July 2023–2 August 2023; pp. 180–186. [Google Scholar] [CrossRef]
- Owolafe, O.; Ogunrinde, O.B.; Thompson, A.F.B. A Long Short Term Memory Model for Credit Card Fraud Detection. In Artificial Intelligence for Cyber Security: Methods, Issues and Possible Horizons or Opportunities; Misra, S., Kumar Tyagi, A., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 369–391. [Google Scholar] [CrossRef]
- Xie, Y.; Liu, G.; Yan, C.; Jiang, C.; Zhou, M. Time-Aware Attention-Based Gated Network for Credit Card Fraud Detection by Extracting Transactional Behaviors. IEEE Trans. Comput. Soc. Syst. 2023, 10, 1004–1016. [Google Scholar] [CrossRef]
- Ileberi, E.; Sun, Y.; Wang, Z. Performance evaluation of machine learning methods for credit card fraud detection using SMOTE and AdaBoost. IEEE Access 2021, 9, 165286–165294. [Google Scholar] [CrossRef]
- Sasank, J.S.; Sahith, G.R.; Abhinav, K.; Belwal, M. Credit card fraud detection using various classification and sampling techniques: A comparative study. In Proceedings of the 2019 International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 17–19 July 2019; pp. 1713–1718. [Google Scholar]
- Mahesh, K.P.; Afrouz, S.A.; Areeckal, A.S. Detection of fraudulent credit card transactions: A comparative analysis of data sampling and classification techniques. J. Phys. Conf. Ser. 2022, 2161, 012072. [Google Scholar] [CrossRef]
- Abdulghani, A.Q.; Uçan, O.N.; Alheeti, K.M.A. Credit card fraud detection using XGBoost algorithm. In Proceedings of the 2021 14th International Conference on Developments in eSystems Engineering (DeSE), Sharjah, United Arab Emirates, 7–10 December 2021; pp. 487–492. [Google Scholar]
- Khalid, A.; Owoh, N.; Uthmani, O.; Ashawa, M.; Osamor, J.; Adejoh, J. Enhancing Credit Card Fraud Detection: An Ensemble Machine Learning Approach. Big Data Cogn. Comput. 2024, 8, 6. [Google Scholar] [CrossRef]
- Smiti, S.; Soui, M. Bankruptcy prediction using deep learning approach based on borderline SMOTE. Inf. Syst. Front. 2020, 22, 1067–1083. [Google Scholar] [CrossRef]
- Forough, J.; Momtazi, S. Ensemble of deep sequential models for credit card fraud detection. Appl. Soft Comput. 2021, 99, 106883. [Google Scholar] [CrossRef]
- Fanai, H.; Abbasimehr, H. A novel combined approach based on deep Autoencoder and deep classifiers for credit card fraud detection. Expert Syst. Appl. 2023, 217, 119562. [Google Scholar] [CrossRef]
- Alarfaj, F.K.; Malik, I.; Khan, H.U.; Almusallam, N.; Ramzan, M.; Ahmed, M. Credit Card Fraud Detection Using State-of-the-Art Machine Learning and Deep Learning Algorithms. IEEE Access 2022, 10, 39700–39715. [Google Scholar] [CrossRef]
- Cartella, F.; Anunciacao, O.; Funabiki, Y.; Yamaguchi, D.; Akishita, T.; Elshocht, O. Adversarial attacks for tabular data: Application to fraud detection and imbalanced data. arXiv 2021, arXiv:2101.08030. [Google Scholar] [CrossRef]
- Arora, V.; Leekha, R.S.; Lee, K.; Kataria, A. Facilitating User Authorization from Imbalanced Data Logs of Credit Cards Using Artificial Intelligence. Mob. Inf. Syst. 2020, 2020, 8885269. [Google Scholar] [CrossRef]
- Imbalanced-Learn Developers. Imblearn.Over_Sampling.SMOTE—Imbalanced-Learn 0.11.0 Documentation. 2024. Available online: https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.SMOTE.html (accessed on 23 July 2025).
- Amir-Al. Artificial Neural Network (ANN) with Practical Implementation. 2019. Available online: https://medium.com/machine-learning-researcher/artificial-neural-network-ann-4481fa33d85a (accessed on 18 September 2024).
| No. | Method/Approach | Flaw Identified | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| 1 | SMOTE + ANN [19] | | 0.99 | 0.93 | 0.88 | 0.91 |
| 2 | UMAP + SMOTE + LSTM (with UMAP and other dimensionality reduction) [20] | | 0.967 | 0.988 | 0.919 | 0.952 |
| 3 | RUS + NMS + SMOTE + DCNN [21] | | 0.972 | 0.368 | 0.392 | 0.378 |
| 4 | SMOTE-ENN + boosted LSTM [22] | | - | - | 0.996 (specificity: 0.998) | - |
| 5 | SMOTE-Tomek + Bi-GRU [23] | | 0.972 | 0.959 | 0.978 | 0.968 |
| 6 | Borderline-SMOTE + LSTM [24] | | 99.9% | 80.3% | 92.1% | 85.8% |
| 7 | SMOTE-Tomek + BPNN (3 hidden layers: 28 + 28 + dropout + 28) [25] | | - | 0.855 | 1.00 | 0.922 |
| 8 | CAE + SMOTE [26] | | - | 0.920 | 0.890 | 0.905 |
| 9 | DAE + SMOTE + DNN (4 hidden layers: 22 + 15 + 10 + 5 + 2 output neurons) [27] | | - | - | | |
| 10 | SMOTE ahead of various ML methods; best performance by RF, followed by an MLP with 4 hidden layers (50 + 30 + 30 + 50 neurons) [28] | | (RF) (MLP) | (RF) (MLP) | | |
| 11 | CNN (Conv1D + Flatten + Dropout) [29] | | 0.93 | 0.93 | 0.93 | 0.93 |
| 12 | CNN (Conv1D + Dropout + Flatten) [30] | | | | | |
| 13 | (a) MLP: n inputs and n neurons in each hidden layer; (b) CNN: unspecified, but appears to use 2D kernels; (c) LSTM-RNN [31] | | | | | |
| 14 | (a) Random Oversampling (RO) + MLP: two dense hidden layers of 65 units each + 50% dropout; (b) RO + CNN: Conv1D (32, 2) + dropout (0.2) + BN + Conv1D (64, 2) + BN + flatten + dropout (0.2) + dense (64) + dropout (0.4) + dense (1); (c) RO + LSTM: LSTM (50) + dropout (0.5) + dense (65) + dropout (0.5) + dense (1) [32] | | | | | |
| 15 | LSTM-RNN (4 × 50 units) [33] | | | | | |
| 16 | Time-Aware Attention RNN [34] | | | | | |
| 17 | SMOTE + AdaBoost (RF/ET/XGB/DT/LR) [35] | | | | | |
| 18 | SMOTE + Various Classifiers [36] | | 0.970 | 0.999 | 0.970 | 0.984 |
| 19 | SMOTE-Tomek + RF [37] | | 0.99 | 0.92 | 0.94 | 0.93 |
| 20 | SMOTE + XGBoost [38] | | 0.999 | 0.999 | 1.00 | 0.999 |
Effect of the number of neurons N in the hidden layer

| No. | N | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|
| 1 | 0 | 0.958 | 0.976 | 0.939 | 0.958 |
| 2 | 1 | 0.959 | 0.985 | 0.932 | 0.957 |
| 3 | 2 | 0.967 | 0.976 | 0.958 | 0.967 |
| 4 | 4 | 0.982 | 0.980 | 0.983 | 0.982 |
| 5 | 6 | 0.982 | 0.985 | 0.979 | 0.982 |
| 6 | 8 | 0.986 | 0.988 | 0.985 | 0.986 |
| 7 | 10 | 0.992 | 0.989 | 0.994 | 0.992 |
| 8 | 12 | 0.992 | 0.991 | 0.992 | 0.992 |
| 9 | 16 | 0.996 | 0.992 | 0.999 | 0.996 |