Article

On-Device Privacy-Preserving Fraud Detection for Smart Consumer Environments Using Federated Learning

by Alexandros I. Bermperis, Vasileios A. Memos, Christos L. Stergiou, Andreas P. Plageras and Konstantinos E. Psannis *

Department of Applied Informatics, University of Macedonia, 54636 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(2), 835; https://doi.org/10.3390/app16020835
Submission received: 3 December 2025 / Revised: 11 January 2026 / Accepted: 12 January 2026 / Published: 14 January 2026

Abstract

This paper presents an on-device artificial intelligence (AI) solution for real-time, privacy-preserving fraud detection in smart financial environments. We propose a distributed, on-device fraud detection mechanism that uses federated learning (FL) to improve privacy while efficiently detecting fraudulent transactions across decentralized smart environments. In this work, we employed several models, including a reinforcement learning (RL) agent and a Random Forest classifier, and tested their performance using metrics such as accuracy, precision, recall, and F-score, ensuring their applicability to resource-constrained smart environments. The proposed mechanism also uses t-Distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) to reduce the dimensionality of the data, visualize the results, and evaluate how well transactions are classified as fraudulent or non-fraudulent. In our methodology, we performed data collection, data preprocessing, and cleaning, and we evaluated the metrics of the selected models to allocate resources effectively and support decision-making processes in edge-based fraud detection systems within smart environments.

1. Introduction

Nowadays, in the rapidly developing technological era, artificial intelligence (AI), especially in its on-device and federated learning forms, is becoming a crucial element of privacy-aware, adaptive smart environments and autonomous systems in the era of 6G connectivity [1]. Integrating AI into smart settings requires addressing data privacy, real-time decision-making, and efficient computation on resource-constrained edge devices [2].
Our methodology optimizes computational efficiency for real-time decision-making in smart environments while minimizing data transmission through federated learning and edge AI models. It is further supported by dimensionality reduction techniques such as PCA and t-SNE, which enhance real-time adaptability and model simplicity for on-device deployment [3]. In this paper, we follow an analytical approach to real-time anomaly detection in smart environments through efficient data collection, preprocessing, dimensionality reduction, and selection of an optimized machine learning model for on-device implementation [3,4]. A distinctive feature of our method is its use of standardization and normalization techniques to enhance the robustness of on-device AI models by handling missing values and incomplete data, while maintaining computational feasibility and low latency on resource-constrained edge devices.
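The imputation and standardization steps described above can be sketched as follows. This is a minimal illustration with an invented three-row feature matrix, not the actual creditcard.csv data:

```python
# Hypothetical sketch of the preprocessing stage: mean imputation for
# missing values followed by standardization. The data is illustrative.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

X = np.array([[100.0, 2.0],
              [np.nan, 4.0],
              [300.0, np.nan]])

# Replace NaN feature values with the column mean.
imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X)

# Normalize features to zero mean and unit variance.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_imputed)

print(X_scaled.mean(axis=0))  # each column mean is ~0 after scaling
```

In a real pipeline, both the imputer and the scaler would be fitted on the training split only and then applied to the validation and test splits, to avoid information leakage.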
The integration of these methods promises high accuracy and computational efficiency in detecting fraudulent activities within smart environments, optimizing AI-driven security solutions while maintaining low-latency performance [5]. Additionally, our implementation is consistent with the demand for adaptive, future-proof AI solutions designed for edge computing and on-device security applications [6]. Beyond the algorithm’s technical concept, this architecture underscores the importance of running AI models directly on edge devices for privacy-preserving data processing, real-time analytics, and decentralized decision-making in smart environments. As a result, it is highly significant for ensuring trust in AI-based security measures for smart environments, especially in privacy-sensitive applications such as healthcare monitoring, industrial automation, digital transactions, and smart city infrastructures [7]. Businesses can also decrease losses, reduce risks, boost customer satisfaction, and allocate resources more effectively when they make use of accurate and reliable fraud detection techniques [8].
We aim not only to provide practical insights and demonstrate the effectiveness of our technique on real-world problems, but also to make significant contributions to the field of privacy-preserving machine learning and edge AI for smart applications [9,10]. Our objective is to inspire innovations in AI-driven smart environments, enabling adaptive on-device intelligence for next-generation applications. Combining federated learning (FL) with reinforcement learning (RL) creates a strong framework that allows effective real-time decision-making while protecting data privacy and encouraging continuous learning and adaptability, future-proofing AI-driven security measures [11].
Compared to the aforementioned models, our proposed work additionally contains the following features to solve the open problems:
  • FL methods keep data decentralized for fraud detection, whereas traditional fraud detection systems required centralized data, which caused privacy and security issues.
  • Combination of machine learning and reinforcement learning provides a dual approach to fraud detection with high accuracy and an adaptive RL agent that learns easily.
  • Dimensionality reduction for visualization and insights with techniques such as PCA and t-SNE, highlighting the separation between fraudulent and non-fraudulent transactions.
  • Comprehensive model evaluation with multiple metrics (accuracy, precision, recall, F-score) with visualization of training and validation loss.
In other words, this paper addresses the challenge of detecting fraudulent activities in real time in decentralized AI-driven systems using novel, state-of-the-art techniques. We present a comprehensive methodology that integrates three AI techniques: machine learning, reinforcement learning, and FL, in order to handle a specific and highly imbalanced public dataset typical of distributed edge computing systems. We developed models such as a Random Forest classifier and an RL agent and evaluated their effectiveness using specific performance metrics. Furthermore, we integrated common dimensionality reduction techniques, namely t-SNE and PCA, for data visualization and analysis in real-time decision-making processes.
Our proposed mechanism includes steps for data acquisition, pre-processing, and model selection and evaluation, ensuring data privacy through FL. The experimental results demonstrate a high accuracy of the Random Forest model, highlighting its ability to effectively classify transactions, identify fraudulent transactions, and contribute to secure, efficient, and intelligent edge computing environments.
The rest of this paper is structured as follows: Section 2 includes the literature review with the state-of-the-art technologies involved in the proposed model; Section 3 presents our proposed approach and mechanism; Section 4 provides the experimental results in diagrams; Section 5 discusses the benefits, challenges, and limitations of the proposed approach; and finally, Section 6 concludes the paper giving some potential future directions.

2. Background

In the following, we describe five state-of-the-art technologies that are involved in our proposed approach.

2.1. Federated Learning (FL)

Since its founding in 2016, FL has redefined several types of intelligent Internet of Things (IoT) applications by providing novel distributed and privacy-enhancing artificial intelligence technologies [12]. FL is a machine learning strategy that reduces training time while maintaining data privacy by having a group of users collaboratively construct a shared deep learning model without disclosing their data [13]. This new distributed AI technology has the power to transform the complex intelligence embedded in modern IoT devices. By moving AI tasks, such as model training, to the network edge where the data is stored, FL is particularly essential when designing distributed IoT systems, in line with recent advancements in portable hardware and rising concerns about leaks of private data [14].
It should be noted that our proposed implementation reduces the risk of direct data loss by maintaining all transaction data on client devices, and only exchanging model updates during collaborative training. However, it is a fact that under some circumstances, based on inference attacks, model updates could continue to provide limited information. Differential privacy and secure aggregation strategies are not used in this study. Therefore, a key area for future research is the integration of the above sophisticated strategies for privacy in parallel with experimental evaluation against inference attacks.

2.2. Reinforcement Learning (RL)

RL is a specialized machine learning technique with rigorous mathematical foundations, in which agents interact with their dynamic environment of interconnected devices in a decentralized system. They collect data and perform experiments to maximize cumulative rewards and improve the efficiency and training of the model [2]. Sutton and Barto, in their initial research from the 1980s, offered a foundation for RL [15]. However, Markov Decision Processes (MDPs), which date from the 1950s, are its cornerstone and are characterized by a reward, a policy, a state space, an action space, and state transition probabilities [14,16]. The ability of RL models to preserve privacy has been improved by recent developments in FL, which makes them more suitable for sensitive applications like real-time consumer fraud detection [17].
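The MDP elements named above (states, actions, rewards, policy) can be made concrete with a minimal tabular Q-learning sketch. The two-state environment here is invented purely for illustration and has no connection to the paper's fraud detection task:

```python
# Minimal tabular Q-learning on a toy 2-state, 2-action MDP.
# Rewards: action 1 pays off in state 0, action 0 pays off in state 1.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # reward table R[state, action]
Q = np.zeros((n_states, n_actions))  # action-value estimates
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

state = 0
for _ in range(2000):
    # epsilon-greedy policy: mostly exploit, occasionally explore
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    reward = R[state, action]
    next_state = (state + 1) % n_states  # deterministic transition
    # Q-learning update rule
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)
print(policy)  # learned greedy policy: [1 0]
```

The learned policy picks the rewarded action in each state, illustrating how an agent converges to good behavior from reward feedback alone, without being told the correct actions in advance.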

2.3. Deep Reinforcement Learning (DRL)

Deep reinforcement learning (DRL) is an artificial intelligence methodology that merges RL with deep learning (DL) to deal with the problem of processing data and information from sensors in AIoT systems [18,19,20]. In contrast to typical machine learning approaches, such as supervised learning [21], in DRL the “agents” interact with the “environment” using decisions, measurements and rewards in order to learn the optimal actions for each situation, without being instructed on what actions to take in advance [2,22]. The main advantage of DRL is its power to deal with high-complexity problems in the real world, including generating multi-dimensional sensory data in order to describe the condition of the environment.

2.4. Federated Reinforcement Learning (FRL)

Federated reinforcement learning (FRL) has acquired a lot of popularity recently, extending RL to train models for handling sequential decision-making problems in resource allocation, content caching, and user access control, which are examples of edge computing, while preserving data privacy [23]. On the other hand, with model-based reinforcement learning an approximate dynamics model is acquired and the optimal policy is then extracted from the model [24]. FRL offers collaborative learning across multiple devices while improving the overall accuracy and robustness of fraud detection approaches [25].

2.5. Edge Computing (EC)

Edge computing (EC) is a technology that provides detailed and effective real-time data analysis services, offering more capabilities in cloud computing systems. It can also reduce the cost of cloud computing and optimize data processing in the most efficient way [26]. With edge computing, time delay is avoided as the stored data is on the edge and not on the central server, i.e., they are closer to the source, but they are part of the cloud-based system [27]. Edge computing offers increased security because of the low latency and high bandwidth in comparison with cloud computing. As a result, an attacker will have less time to complete an attack [26].

3. Proposed Approach

In this section, we present a sophisticated mechanism that is based on the aforementioned cutting-edge technologies and can detect fraud in credit card transactions (Algorithm 1), such as in an online transaction fraud detection system [28]. The proposed mechanism addresses credit card fraud detection by separating transactions with an accurate classifier that distinguishes fraudulent procedures from non-fraudulent ones. The mechanism comprises six steps: data acquisition, data preprocessing, data splitting, exploratory data analysis, model selection and training, and model testing, building on the AI methods presented above.
Specifically, the proposed AI mechanism uses a relevant dataset for anomaly detection in smart environments from Kaggle, called Credit Card Fraud Detection (creditcard.csv) [29], on which we apply data-cleaning methods such as handling missing values and outliers. The data is then divided into training, validation, and testing sets to optimize model learning for real-time anomaly detection on edge devices. Furthermore, Principal Component Analysis (PCA) is applied for dimensionality reduction, assisting in the visualization of the data, particularly in resource-constrained edge environments [3]. Specifically, we used PCA to reduce our dataset’s features to two dimensions, identify patterns, and analyze variance while optimizing real-time data processing in smart environments. By helping visualize data trends and relationships across transactions, this dimensionality reduction technique is essential for anomaly detection in smart environments, where identifying unusual patterns in sensor data, device activity, or network behavior can improve security and prevent potential system failures.
Furthermore, by visualizing high-dimensional data in a lower-dimensional space, t-Distributed Stochastic Neighbor Embedding (t-SNE) [30] was utilized to investigate the clustering of transactions in more detail. In contrast to PCA, which emphasizes preserving global structure, t-SNE excels at exposing local patterns and highlighting groups within decentralized, on-device AI systems, such as groups of related transactions. This is important for fraud detection, since fraudulent transactions tend to form tight, small clusters that might not be immediately apparent in higher dimensions. By identifying these hidden groups, we obtain a more detailed perspective of transaction behaviors, which facilitates the isolation of possible fraudulent activities.
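The two projections can be sketched with scikit-learn on a small synthetic imbalanced dataset (the study itself uses creditcard.csv; the data and parameters below are illustrative only):

```python
# Project a synthetic imbalanced dataset to 2D with PCA (linear,
# preserves global variance) and t-SNE (non-linear, preserves local
# neighborhoods), as in the exploratory analysis described above.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = make_classification(n_samples=300, n_features=10,
                           weights=[0.95, 0.05], random_state=42)

# PCA: linear projection onto the two directions of highest variance.
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: non-linear embedding emphasizing local cluster structure.
X_tsne = TSNE(n_components=2, perplexity=30,
              random_state=42).fit_transform(X)

print(X_pca.shape, X_tsne.shape)  # both embeddings are (300, 2)
```

The resulting 2D coordinates can then be scatter-plotted and colored by class, which is how the colored PCA and t-SNE figures discussed in Section 4 are typically produced.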
The main operation of this mechanism is to train a specific machine learning model, Random Forest, to identify fraudulent transactions based on features such as transaction amount, location, and frequency patterns, distinguishing them in dynamic, distributed AI environments [31]. It emphasizes evaluating different models on a validation set before choosing the best-performing one for final testing on unseen data. More details about this process are described in Algorithm 1. The proposed approach additionally includes an RL agent alongside the Random Forest classifier, so as to investigate adaptive decision-making behavior in dynamic transactions. In order to examine how sequential learning strategies react to changing transaction patterns under risk, the RL agent is trained and assessed separately from the Random Forest model. The RL agent offers additional insights into behavioral learning and flexibility, which are especially relevant for future intelligent systems operating in dynamic environments. Meanwhile, the Random Forest classifier operates as the core fraud detection mechanism due to its superior classification performance. A deeper integration of RL into the federated optimization process is a potential direction for future research.
Algorithm 1. Credit Card Fraud Detection Mechanism
1. Begin
2. Data acquisition
3. Install libraries: scikit-learn, pandas, tensorflow, tensorflow_federated.
4. Create and import a Kaggle API (extended) object for authentication with specific credentials.
5. Download a specific dataset from Kaggle (‘creditcard.csv’).
6. ! Data preprocessing
7. Read the csv file into a pandas data frame.
8. Define a specific target column from the csv file (‘Class’).
9. ! Data inspection
10. Print initial dataset statistics and unique target values.
11. ! Handle missing and inconsistent data
12. Create a SimpleImputer with strategy ‘mean’ to replace NaN values in features with the mean.
13. If the target is categorical (not numerical, 0 or 1), use appropriate encoding techniques.
14. Replace negative values in the target variable with 0 and ensure all values are either 0 or 1.
15. Replace non-0, 1 values with the mode if any exist.
16. ! Standardization and normalization
17. Create a StandardScaler object and normalize features to have zero mean and unit variance.
18. ! Data splitting
19. Use the train_test_split function from sklearn.model_selection to divide data into training, validation, and test sets.
20. ! Exploratory data analysis
21. Select a smaller sample of data for faster visualization.
22. Perform PCA and t-SNE, using the umap library for faster execution, to visualize data in a 2D space.
23. Use sns.kdeplot from seaborn to analyze distributions of features for fraudulent and non-fraudulent transactions.
24. ! Model selection and training
25. Create a list of candidate models: Random Forest and Logistic Regression.
26. Use a loop to iterate through each model in the list and fit it to the training data.
27. Train each model using the training data.
28. ! Model evaluation
29. Iterate through each trained model.
30. Predict on validation data and calculate accuracy, precision, recall, and F1-score using functions from sklearn.metrics.
31. Print the evaluation metrics for each model.
32. Compare the evaluation metrics for all models.
33. Choose the model with the best overall performance.
34. ! Model testing and evaluation
35. Test the chosen model on unseen test data and evaluate performance metrics.
36. Analyze the results using specific metrics and visualizations.
37. ! Client-side data preparation
38. Function preprocess: Define a function to prepare client data for federated learning.
39. The data is shuffled, batched, and formatted, ready for distributed model training on each client.
40. Function make_federated_data: Create federated datasets by distributing data across multiple clients (using ClientData).
41. Simulate decentralized data held by individual clients.
42. ! Federated Model Definition
43. Function create_keras_model: Create a standard Keras model (e.g., with dense layers and softmax activation for binary classification).
44. Function model_fn: Define a model setup function compatible with federated learning.
45. The create_keras_model is wrapped using tff.learning.models.from_keras_model, enabling it to train across distributed clients without centralizing individual client data.
46. ! Federated Training Process Setup
47. Iterative process definition: Use tff.learning.algorithms.build_weighted_fed_avg to define an iterative federated averaging (FedAvg) process.
48. Initialize a model training state.
49. Use a stochastic gradient descent (SGD) client optimizer for distributed updates.
50. ! Federated Training Execution
51. Iterate through rounds: Run multiple rounds of training (e.g., for round_num in range(1, 11)).
52. For each round: Update the model state by aggregating updates from each client.
53. For each round: Print metrics to monitor training progress.
54. End.
Our work implements FL in the final segment of the code, structuring all components to emphasize data privacy through decentralization, particularly suited for sensitive consumer transaction data. The model begins by preparing client-side data using two primary functions: preprocess and make_federated_data. The preprocess function organizes each client’s data by batching, shuffling, and formatting it to make it ready for model training which is particularly important for detecting patterns in fraud detection models without affecting data privacy.
The make_federated_data function then uses a ClientData object to generate federated datasets for every client. This decentralized data preparation step supports FL’s privacy-preserving objectives by ensuring that each client retains full control over its data, which is crucial in the context of consumer transaction security.
Furthermore, our work defines the federated model using model_fn, which calls create_keras_model to build a standard Keras model that initializes the FL model. This model is then wrapped in tff.learning.models.from_keras_model to enable its use within the FL framework, allowing clients to collaboratively enhance the model while protecting privacy and consumer data security.
We use an iterative federated averaging approach to optimize the training process, which is initialized using tff.learning.algorithms.build_weighted_fed_avg. This method sets up the training state and controls federated training rounds using an SGD optimizer and the model_fn function. Individual model updates from every client are averaged within each cycle to create a single global update, which minimizes privacy issues and removes the need to transfer raw data. Unlike traditional centralized methods, often seen on platforms like Kaggle, our FL approach preserves the on-device privacy essential in consumer transactions.
To carry out this federated training, a loop iterates over several rounds, aggregating client model updates, modifying the global model state, and recording metrics like accuracy to measure progress. This iterative technique allows for continuous model improvement while retaining the client data locally, future-proofing consumer data privacy by protecting sensitive transaction information according to the federated learning principles.
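The weighted federated averaging rule at the heart of this loop can be illustrated without the TensorFlow Federated dependency. The NumPy sketch below is a simplified stand-in for tff.learning.algorithms.build_weighted_fed_avg: three simulated clients each run local full-batch gradient descent on a toy linear model, and the server averages their weight vectors, weighted by local sample count, without ever seeing the raw data:

```python
# Simplified NumPy simulation of weighted federated averaging (FedAvg).
# Toy linear-regression clients; the real system uses TFF with Keras models.
import numpy as np

rng = np.random.default_rng(1)

def local_train(w, X, y, lr=0.1, epochs=5):
    """One client's local training: full-batch gradient descent on squared loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients with different data sizes; data never leaves the client.
w_true = np.array([2.0, -1.0])
clients = []
for n in (20, 50, 80):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(2)
for round_num in range(1, 21):          # federated training rounds
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_train(w_global, X, y))  # local update only
        sizes.append(len(y))
    # Server step: weighted average of client models (no raw data exchanged).
    w_global = np.average(updates, axis=0, weights=sizes)

print(np.round(w_global, 2))  # converges close to w_true = [2, -1]
```

After 20 rounds the aggregated global model recovers the underlying parameters, illustrating how clients can jointly train a shared model while only model weights, never transaction records, cross the network.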

4. Experiments

Our model was tested and evaluated in real time on the Colab Research platform provided by Google, using the Tesla T4 GPU to ensure reproducibility and stable evaluation conditions [32]. Although the Tesla T4 is optimized for low-latency inference, it provides higher computational resources than typical edge devices deployed in smart environments. The performance evaluation of our model was based on three cases using the known metrics of accuracy, precision, recall, and F-score.
Although the experiments were performed on a cloud-based platform to ensure reproducibility and to manage the evaluation settings, the proposed architecture is designed with on-device deployment in mind. Specifically, the inference phase of the Random Forest classifier requires little memory and lightweight processing, which makes it well suited for resource-constrained edge devices, such as mobile systems or IoT devices. The difference between real edge hardware and cloud-based acceleration is acknowledged, and a detailed assessment on physical edge devices, including latency, memory footprint, and energy consumption, is regarded as a crucial area for further research.
We first visualize the data using Principal Component Analysis (PCA) and t-SNE to provide insight into the patterns of the data (Figure 1, Figure 2, Figure 3 and Figure 4). In resource-constrained contexts, our model minimizes computational overhead while ensuring effective, real-time fraud detection by utilizing federated learning, dimensionality reduction, and on-device AI acceleration. The PCA plot, which captures variance along the two principal components, displays the distribution of transactions in two dimensions in Figure 1. As expected when reducing a high-dimensional dataset to two dimensions, PCA (Figure 1) captures the general structure of the data but struggles to distinguish between the clusters of fraudulent and non-fraudulent transactions. A more detailed perspective of the local correlations between transactions is then shown by the t-SNE plot (Figure 2). Finding groups of related transactions is made simpler by t-SNE’s remarkable ability to capture non-linear structures. It is easier to identify suspicious patterns in the dataset, like unusual frequencies or transaction amounts that deviate significantly from a typical spending pattern, because it highlights tighter clusters of fraudulent transactions than PCA (Figure 1).
In contrast, Figure 3 and Figure 4 display the same visualizations with color coding based on the transaction class. The colored PCA plot (Figure 3) highlights more distinct clustering, making the visual separation of fraudulent and non-fraudulent transactions clearer. Fraudulent transactions are frequently scattered at the cluster borders, which indicates possible anomalies or outliers. This visualization is very useful for detecting fraud behavior in consumer transactions.
Similarly, the colored t-SNE plot (Figure 4) improves upon the uncolored version by emphasizing the fine-grained structure of the data and the relationships between fraudulent and non-fraudulent transactions.
Here the two classes are clearly separated, with fraudulent transactions creating discrete groupings that correspond to consumer behaviors. Both colored versions help us to have a better understanding of fraudulent activity in various consumer base categories, such as sudden spikes in transaction frequency or unusual spending patterns at high-risk merchants that can indicate possible fraud. This is important because the predictive security measures for future-proof consumer applications can be improved.
It should be mentioned that this research only uses t-SNE and PCA for offline exploratory analysis and visualization. As these methods are not part of the on-device inference pipeline, they have no impact on classification accuracy or real-time fraud detection performance. PCA was selected to reduce complexity while preserving overall data variance, while t-SNE was utilized to highlight local transaction clusters. A quantitative comparison with alternative dimensionality reduction techniques, such as Uniform Manifold Approximation and Projection (UMAP) or Linear Discriminant Analysis (LDA), as well as the use of quantitative clustering metrics, is beyond the scope of this study and could be a potential area for future research.
At this point, we define the following terms: True Positives (TPs) are fraudulent transactions correctly detected as fraudulent; True Negatives (TNs) are non-fraudulent transactions correctly detected as non-fraudulent; False Positives (FPs) are non-fraudulent transactions falsely detected as fraudulent; and False Negatives (FNs) are fraudulent transactions that go undetected and are falsely labeled as non-fraudulent.
Thus, we can estimate the effectiveness of our proposed model using the following predictive performance metrics [33]:
1. Accuracy
Accuracy is the number of samples that a classifier correctly detects, divided by the total number of fraudulent and non-fraudulent transactions. It is defined as follows:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
2. Precision
Precision is the ratio of predicted fraudulent samples that are correctly labeled as fraudulent. It is defined as follows:
Precision = TP / (TP + FP)
3. Recall
Recall is the ratio of actual fraudulent samples that are correctly detected. It is defined as follows:
Recall = TP / (TP + FN)
4. F-score
The F-score is the harmonic mean of precision and recall; the closer the F-score is to 1, the better. It is defined as follows:
F-score = 2 · (Precision · Recall) / (Precision + Recall)
Table 1 shows the confusion matrix which represents the actual and the corresponding predicted values in terms of the four values TN, FN, FP, and TP.
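The four metrics can be verified on a toy confusion matrix. The label vectors below are invented for illustration (they are not the paper's experimental results) and give TP = 3, TN = 5, FP = 1, FN = 1:

```python
# Toy illustration of the four metric formulas above.
# 1 = fraudulent, 0 = non-fraudulent; counts are invented.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]  # one FN (index 3), one FP (index 9)

print(accuracy_score(y_true, y_pred))   # (TP+TN)/(all) = (3+5)/10 = 0.8
print(precision_score(y_true, y_pred))  # TP/(TP+FP) = 3/4 = 0.75
print(recall_score(y_true, y_pred))     # TP/(TP+FN) = 3/4 = 0.75
print(f1_score(y_true, y_pred))         # harmonic mean = 0.75
```

Note that accuracy alone is misleading on highly imbalanced data such as fraud transactions, which is why precision, recall, and F-score are reported alongside it in Section 4.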
In real-world fraud detection, FNs are common because the data is usually imbalanced. In our study, a few fraudulent transactions were wrongly labeled as ‘normal’. This happened because their behavior, such as the amount spent or the transaction frequency, was very similar to regular shopping habits. This illustrates the difficult balance between precision and recall, especially since we did not use special methods to correct the data imbalance. In the future, we may engineer better features and apply class-imbalance techniques to reduce these missed cases.
As shown in Figure 5, the accuracy of our proposed model is 0.9996 in all cases using 50, 100, and 200 estimators, respectively. The precision is in the range 0.9744 to 0.9870. The recall is in the range 0.7755 to 0.7959. The F-score is in the range 0.8636 to 0.8764.
It is obvious that the implemented Random Forest model achieved very high accuracy, consistently exceeding 0.999, indicating a strong ability to correctly classify transactions. However, a deeper look reveals a trade-off between precision and recall. Precision, hovering around 0.97–0.98, suggests that most flagged transactions are actual fraud (low false positives). However, the recall scores, ranging from 0.77 to 0.80, indicate that the models might miss a portion of actual fraudulent transactions (false negatives). This trade-off requires consideration based on the cost of each type of error. If missing fraudulent transactions is more critical (e.g., financial loss), focusing on improving recall might be necessary. In addition, in Figure 5 we observe that in case 2, using 100 estimators, the metrics are the best of the three cases. Thus, we tested and compared the Random Forest (RF) classifier using 100 estimators with the reinforcement learning agent (RL agent-RLA).
Figure 6 illustrates the performance of the RL agent trained on the CartPole environment from the OpenAI Gym toolkit over 100 episodes. At first, the performance is inconsistent, indicating a lack of stability in developing a reliable strategy. A significant increase around episode 30 shows a briefly successful strategy, and the further fluctuations around lower scores reveal the continuous process of exploration and exploitation by the RL agent. We should note that the CartPole environment was chosen as a standard benchmark for evaluating the stability and learning dynamics of the RL agent and does not simulate the fraud detection process.
Table 2 presents the experimental results for the aforementioned evaluation metrics that show low divergence between performance and computational efficiency. As is clearly presented in Table 2, the Random Forest classifier outperforms the reinforcement learning agent across all metrics. Specifically, although both classifiers show very high accuracy, Random Forest slightly outperforms the RL agent.
In addition, the Random Forest model has higher precision, meaning it produces fewer false positives than the RL agent, while it also has higher recall, indicating that it successfully identifies a greater number of actual fraudulent transactions. Last but not least, the Random Forest model has a higher F-score, providing a better trade-off between precision and recall than the RL agent.
The observed recall value of about 0.79 suggests that some fraudulent transactions remain undetected even though the recommended Random Forest classifier achieves almost perfect accuracy and precision. These FNs are critical in real-world situations because they result in financial harm and delayed reactions. Therefore, recall is an essential parameter for assessing fraud detection systems.
This trade-off reflects the trained classifier’s bias toward producing fewer false alarms rather than catching every fraudulent transaction. Such behavior may be acceptable in cases where a large number of false positives would degrade the user experience, whereas high-risk financial applications would favor higher recall.
By adjusting the classifier’s decision threshold, or by combining the Random Forest with adaptive decision processes such as post-classification rules based on reinforcement learning, recall could be increased without significantly reducing precision. We consider future research on these strategies essential, particularly for real-time fraud detection in smart consumer applications.
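Threshold adjustment can be sketched with no particular library: given per-transaction fraud scores (e.g., from `predict_proba` in scikit-learn), one sweeps candidate thresholds and selects the one that maximizes recall while keeping precision above a chosen floor. The scores and labels below are hypothetical.

```python
def pick_threshold(scores, labels, min_precision=0.9):
    """Among thresholds meeting the precision floor, return the one that
    maximizes recall (ties broken toward lower thresholds)."""
    best = None
    for t in sorted(set(scores)):
        pred = [1 if s >= t else 0 for s in scores]
        tp = sum(p and y for p, y in zip(pred, labels))
        fp = sum(p and not y for p, y in zip(pred, labels))
        fn = sum((not p) and y for p, y in zip(pred, labels))
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision >= min_precision:
            if best is None or recall > best[1]:
                best = (t, recall, precision)
    return best  # (threshold, recall, precision)

# Hypothetical fraud scores and ground-truth labels
scores = [0.05, 0.10, 0.55, 0.60, 0.80, 0.92, 0.95, 0.99]
labels = [0,    0,    0,    1,    1,    1,    1,    1]
print(pick_threshold(scores, labels, min_precision=0.9))
```

Lowering the threshold trades precision for recall; the constraint makes that trade explicit instead of leaving it at the default 0.5 cut-off.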
It should be noted that this study’s federated learning setup is an experiment in decentralized, privacy-preserving model training rather than a fully optimized federated deployment. During the federated aggregation process, no specific procedures were used to address class imbalance or non-independent and identically distributed (non-IID) data at the client level. In real smart environments, certain edge devices may observe highly skewed local transaction distributions or may lack fraudulent samples entirely. Future research should address these scenarios using client-aware weighting, dynamic aggregation methods, or data balancing techniques.

5. Discussion

By ensuring that raw data stay on the client device and are never sent to a central server, federated learning (FL) significantly improves privacy protection for sensitive consumer transaction data and eliminates the need for centralized storage. Only model updates, such as gradients or parameter changes, are shared, and a central server averages these updates to improve a global model. This decentralized method reduces the risks of data exposure in financial applications and is consistent with privacy-preserving AI frameworks.
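The server-side averaging step can be illustrated with federated averaging (FedAvg) [10], in which each client’s parameters are weighted by its local sample count. The sketch below uses plain Python lists as stand-ins for model parameter vectors; the client values are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: aggregate client parameter vectors weighted by local data size.
    Only these updates ever reach the server; raw transactions stay on-device."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Two hypothetical edge devices with different amounts of local data
client_a = [0.2, 0.4]   # parameters after local training on device A
client_b = [0.6, 0.0]   # parameters after local training on device B
print(fed_avg([client_a, client_b], client_sizes=[100, 300]))
```

Weighting by local data size lets devices with more transactions contribute proportionally more to the global model; as noted later in this section, non-IID local distributions may require more sophisticated, client-aware aggregation.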
Data decentralization is a major advantage of FL: each client keeps its data locally, which reduces the risk of exposure by preventing individual records from being stored on the central server. This localized processing increases users’ privacy because only aggregated model updates, rather than raw data, are available. Additionally, FL removes the possibility of a centralized breach, a weakness of classical centralized training that relies on a single server holding all the data.
Despite its benefits, FL is not completely resistant to privacy threats. Model updates are vulnerable to inference attacks, in which adversaries try to identify particular data points or reconstruct private information, even if they are less sensitive than raw data. FL frameworks can integrate privacy-preserving methods like encryption, secure aggregation, and differential privacy to reduce these concerns and make sure that model updates remain protected from threats.
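One such safeguard, differential-privacy-style noise addition, can be sketched as follows: each client clips its update to a maximum norm and adds Gaussian noise before sharing it, so that individual records cannot be reliably inferred from the update. The clipping norm and noise scale below are illustrative and are not calibrated to a formal (ε, δ) privacy guarantee.

```python
import math
import random

random.seed(1)

def privatize_update(update, clip_norm=1.0, noise_std=0.1):
    """Clip an update to a maximum L2 norm, then add Gaussian noise."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [u * scale for u in update]
    return [u + random.gauss(0.0, noise_std) for u in clipped]

raw_update = [3.0, 4.0]          # L2 norm 5.0 -> clipped down to norm 1.0
noisy = privatize_update(raw_update)
print("shared update:", noisy)
```

Clipping bounds any single client’s influence on the global model, and the added noise masks the exact local update; both are standard ingredients of differentially private FL.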
For privacy-sensitive use cases like fraud detection, our federated and on-device framework offers scalable, secure, and real-time adaptability in diverse sectors such as consumer finance, healthcare monitoring, mobile edge computing, and smart city infrastructures. The effectiveness of FL in preserving privacy heavily depends on additional privacy-enhancing technologies and infrastructure support to secure model updates [34].

6. Conclusions

In conclusion, a machine learning approach for credit card fraud detection was presented. The procedure was based on a relevant dataset and a thorough cleaning of the data to deal with missing values and outliers. The data were divided into training, validation, and test sets. Dimensionality reduction techniques provided valuable insights into the data’s underlying structure, while the core focus remained on training a robust model to distinguish fraudulent transactions.
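The PCA step used for dimensionality reduction can be sketched with plain NumPy: center the data, take the eigenvectors of the covariance matrix, and project onto the leading components. (The experiments above used library implementations of PCA and t-SNE; this is only an illustration of the PCA mechanics on synthetic data.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-feature data with two correlated features,
# standing in for transaction features
x = rng.normal(size=500)
data = np.column_stack([x,
                        0.9 * x + 0.1 * rng.normal(size=500),
                        rng.normal(size=500)])

def pca(X, n_components=2):
    """Project X onto its top principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort descending
    components = eigvecs[:, order[:n_components]]
    return Xc @ components, eigvals[order]

projected, variances = pca(data)
print("explained variances (descending):", variances)
```

The first component captures the shared variance of the two correlated features, which is exactly the structure the colored PCA plots (Figures 1 and 3) make visible for the transaction data.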
The implemented model was evaluated on a validation set to ensure optimal performance before the final testing on unseen data. The Random Forest model achieved very high accuracy, consistently exceeding 0.999, indicating a strong ability to correctly classify transactions. Precision, hovering around 0.97–0.98, indicates that most transactions the model flags as fraud are indeed fraudulent (few false positives). However, the recall scores, ranging from 0.77 to 0.80, indicate that the model might miss a portion of actual fraudulent transactions (false negatives). Furthermore, compared to the RL agent, the proposed RF classifier achieves better scores in terms of accuracy, precision, recall, and F-score.
For future work, several directions could improve our proposed model, especially with respect to false negatives: feature engineering, hyperparameter tuning, and Convolutional Neural Network (CNN) integration. Feature engineering can derive new features from existing ones; for example, in the dataset used here, each transaction amount could be divided by the average transaction amount to create a new feature. Hyperparameter tuning could further optimize the model’s performance. Finally, more advanced machine and deep learning models, such as CNNs, could be integrated into our approach for potentially higher accuracy, especially on data with more complex patterns.
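The feature-engineering idea mentioned above, dividing each transaction amount by the average amount, can be sketched in a few lines; the amounts below are hypothetical.

```python
amounts = [12.50, 99.90, 3.20, 450.00, 27.75]   # hypothetical transaction amounts

avg_amount = sum(amounts) / len(amounts)

# New feature: how large each transaction is relative to the average.
# Unusually high ratios may correlate with fraudulent activity.
amount_ratio = [round(a / avg_amount, 3) for a in amounts]

print("average amount:", avg_amount)
print("amount/avg feature:", amount_ratio)
```

In a deployed pipeline the average would be computed on the training split only (or maintained per-client in the federated setting) to avoid leaking test-set statistics into the feature.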

Author Contributions

Conceptualization, A.I.B., V.A.M., C.L.S., A.P.P. and K.E.P.; methodology A.I.B., V.A.M. and C.L.S.; software, A.I.B., V.A.M., C.L.S. and A.P.P.; validation, A.I.B., V.A.M. and C.L.S.; formal analysis, A.I.B., V.A.M. and C.L.S.; investigation, A.I.B., V.A.M., C.L.S. and A.P.P.; resources, A.I.B., V.A.M., C.L.S. and A.P.P.; data curation, A.I.B., V.A.M., C.L.S. and A.P.P.; writing—original draft preparation, A.I.B., V.A.M. and C.L.S.; writing—review and editing, A.I.B., V.A.M., C.L.S. and A.P.P.; visualization, A.I.B., V.A.M. and C.L.S.; supervision, K.E.P.; project administration, K.E.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and feedback, which were extremely helpful in improving the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Silva, P.; Vinagre, J.; Gama, J. Federated online learning for heavy hitter detection. In Proceedings of the ECAI 2024, Santiago de Compostela, Spain, 19–24 October 2024; pp. 4689–4695. [Google Scholar]
  2. Devanga EBadilla, D.; Dehghanimohammadabadi, M. Applied Reinforcement Learning for Decision Making in Industrial Simulation Environments. In Proceedings of the 2022 Winter Simulation Conference (WSC), Singapore, 11–14 December 2022; pp. 2819–2829. [Google Scholar]
  3. Pareek, J.; Jacob, J. Data compression and visualization using PCA and T-SNE. In Proceedings of the Advances in Information Communication Technology and Computing: Proceedings of AICTC 2019, Bikaner, India, 8–9 December 2019; Springer: Singapore, 2021; pp. 327–337. [Google Scholar]
  4. Silva, P.R.; Vinagre, J.; Gama, J. Federated anomaly detection over distributed data streams. arXiv 2022, arXiv:2205.07829. Available online: https://arxiv.org/abs/2205.07829 (accessed on 11 January 2025). [CrossRef]
  5. Masud, M.T.; Keshk, M.; Moustafa, N.; Linkov, I.; Emge, D.K. Explainable Artificial Intelligence for Resilient Security Applications in the Internet of Things. IEEE Open J. Commun. Soc. 2024, 6, 2877–2906. [Google Scholar] [CrossRef]
  6. Rai, K.; Dwivedi, R.K. Fraud detection in credit card data using unsupervised machine learning based scheme. In Proceedings of the 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2–4 July 2020; pp. 421–426. [Google Scholar]
  7. Wu, K.; Cheng, C.-T.; Uwate, Y.; Chen, G.; Mumtaz, S.; Tsang, K.F. State-of-the-Art and Research Opportunities for Next-Generation Consumer Electronics. IEEE Trans. Consum. Electron. 2022, 69, 937–948. [Google Scholar] [CrossRef]
  8. Markovic, T.; Leon, M.; Buffoni, D.; Punnekkat, S. Random forest based on federated learning for intrusion detection. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations, Hersonissos, Greece, 17–20 June 2022; pp. 132–144. [Google Scholar]
  9. Abdallah, A.; Maarof, M.A.; Zainal, A. Fraud detection system: A survey. J. Netw. Comput. Appl. 2016, 68, 90–113. [Google Scholar] [CrossRef]
  10. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  11. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–19. [Google Scholar] [CrossRef]
  12. Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Li, J.; Poor, H.V. Federated learning for internet of things: A comprehensive survey. IEEE Commun. Surv. Tutor. 2021, 23, 1622–1658. [Google Scholar] [CrossRef]
  13. De Vita, F.; Bruneo, D. Leveraging Stack4Things for Federated Learning in Intelligent Cyber Physical Systems. J. Sens. Actuator Netw. 2020, 9, 59. [Google Scholar] [CrossRef]
  14. Guo, Q.; Tang, F.; Kato, N. Federated reinforcement learning-based resource allocation for D2D-aided digital twin edge networks in 6G industrial IoT. IEEE Trans. Ind. Inform. 2022, 19, 7228–7236. [Google Scholar] [CrossRef]
  15. Matsuo, Y.; LeCun, Y.; Sahani, M.; Precup, D.; Silver, D.; Sugiyama, M.; Uchibe, E.; Morimoto, J. Deep learning, reinforcement learning, and world models. Neural Netw. 2022, 152, 267–275. [Google Scholar] [CrossRef]
  16. Singh, V.; Chen, S.-S.; Singhania, M.; Nanavati, B.; Kar, A.K.; Gupta, A. How are reinforcement learning and deep learning algorithms used for big data based decision making in financial industries–A review and research agenda. Int. J. Inf. Manag. Data Insights 2022, 2, 100094. [Google Scholar] [CrossRef]
  17. Sailusha, R.; Gnaneswar, V.; Ramesh, R.; Rao, G.R. Credit Card Fraud Detection Using Machine Learning. In Proceedings of the 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 13–15 May 2020; pp. 1264–1270. [Google Scholar] [CrossRef]
  18. Chen, W.; Qiu, X.; Cai, T.; Dai, H.-N.; Zheng, Z.; Zhang, Y. Deep reinforcement learning for Internet of Things: A comprehensive survey. IEEE Commun. Surv. Tutor. 2021, 23, 1659–1692. [Google Scholar] [CrossRef]
  19. Lei, L.; Tan, Y.; Zheng, K.; Liu, S.; Zhang, K.; Shen, X. Deep reinforcement learning for autonomous internet of things: Model, applications and challenges. IEEE Commun. Surv. Tutor. 2020, 22, 1722–1760. [Google Scholar] [CrossRef]
  20. Michael, A.; Raja, K.; Kaliyan, K.; Arul, R. A Novel Secure Data Processing Mechanism in IoT Using Deep Learning with Ontology. In Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2022; pp. 419–425. [Google Scholar] [CrossRef]
  21. Kong, X.; Zhang, W.; Wang, H.; Hou, M.; Chen, X.; Yan, X.; Das, S.K. Federated Graph Anomaly Detection via Contrastive Self-Supervised Learning. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 7931–7944. [Google Scholar] [CrossRef]
  22. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  23. Wang, J.; Hu, J.; Mills, J.; Min, G.; Xia, M.; Georgalas, N. Federated ensemble model-based reinforcement learning in edge computing. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 1848–1859. [Google Scholar] [CrossRef]
  24. Xue, Z.; Zhou, P.; Xu, Z.; Wang, X.; Xie, Y.; Ding, X.; Wen, S. A resource-constrained and privacy-preserving edge-computing-enabled clinical decision system: A federated reinforcement learning approach. IEEE Internet Things J. 2021, 8, 9122–9138. [Google Scholar] [CrossRef]
  25. Kou, Y.; Lu, C.-T.; Sirwongwattana, S.; Huang, Y.-P. Survey of fraud detection techniques. In Proceedings of the IEEE International Conference on Networking, Sensing and Control, Taipei, Taiwan, 21–23 March 2004; Volume 2, pp. 749–754. [Google Scholar] [CrossRef]
  26. Yu, W.; Liang, F.; He, X.; Hatcher, W.G.; Lu, C.; Lin, J.; Yang, X. A Survey on the Edge Computing for the Internet of Things. IEEE Access 2018, 6, 6900–6919. [Google Scholar] [CrossRef]
  27. Shi, W.; Sun, H.; Cao, J.; Zhang, Q.; Liu, W. Edge Computing—An Emerging Computing Model for the Internet of Everything Era. J. Comput. Res. Dev. 2017, 54, 907–924. [Google Scholar]
  28. Kanika; Singla, J.; Bashir, A.K.; Nam, Y.; Hasan, N.U.; Tariq, U. Handling Class Imbalance in Online Transaction Fraud Detection. Comput. Mater. Contin. 2022, 70, 2861–2877. [Google Scholar] [CrossRef]
  29. Credit Card Fraud Detection Dataset, Kaggle, 2021. Available online: https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud (accessed on 4 July 2024).
  30. Hamid, Y.; Sugumaran, M. A t-SNE based non linear dimension reduction for network intrusion detection. Int. J. Inf. Technol. 2020, 12, 125–134. [Google Scholar] [CrossRef]
  31. Kumar, M.S.; Soundarya, V.; Kavitha, S.; Keerthika, E.S.; Aswini, E. Credit card fraud detection using random forest algorithm. In Proceedings of the 2019 3rd International Conference on Computing and Communications Technologies (ICCCT), Chennai, India, 21–22 February 2019; IEEE: New York, NY, USA; pp. 149–153. [Google Scholar]
  32. Google Colab Platform. 2024. Available online: https://colab.research.google.com/github/d2l-ai/d2l-tvm-colab/blob/master/chapter_gpu_schedules/arch.ipynb (accessed on 4 July 2024).
  33. Memos, V.A.; Psannis, K.E. AI-Powered Honeypots for Enhanced IoT Botnet Detection. In Proceedings of the 2020 3rd World Symposium on Communication Engineering (WSCE), Thessaloniki, Greece, 9–11 October 2020; pp. 64–68. [Google Scholar] [CrossRef]
  34. Yazdinejad, A.; Dehghantanha, A.; Karimipour, H.; Srivastava, G.; Parizi, R.M. A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks. IEEE Trans. Inf. Forensics Secur. 2024, 19, 6693–6708. [Google Scholar] [CrossRef]
Figure 1. Uncolored PCA.
Figure 2. Uncolored t-SNE.
Figure 3. Colored PCA.
Figure 4. Colored t-SNE.
Figure 5. Performance evaluation of the proposed mechanism for the Random Forest classifier.
Figure 6. Performance of RL agent in CartPole environment.
Table 1. Confusion matrix.

                        Predicted 0 (Non-Fraud)   Predicted 1 (Fraud)
Actual 0 (Non-Fraud)    TN = 56,857               FP = 7
Actual 1 (Fraud)        FN = 34                   TP = 64
Table 2. Experimental results.

Metric      Random Forest   RL Agent
Accuracy    0.9996          0.9993
Precision   0.9750          0.9143
Recall      0.7959          0.6531
F-score     0.8764          0.7619