Proceeding Paper

A Federated Learning Approach for Privacy-Preserving Automated Signature Verification †

TelSiP Research Laboratory, Department of Electrical and Electronic Engineering, School of Engineering, University of West Attica, Ancient Olive Grove Campus, 250 Thivon Str., GR-12241 Athens, Greece
*
Author to whom correspondence should be addressed.
Presented at the 6th International Electronic Conference on Applied Sciences, 9–11 December 2025; Available online: https://sciforum.net/event/ASEC2025.
Eng. Proc. 2026, 124(1), 100; https://doi.org/10.3390/engproc2026124100
Published: 1 April 2026
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)

Abstract

The growing interconnectivity of digital systems has led to the massive collection and centralization of sensitive data, raising serious concerns about confidentiality and compliance with privacy regulations. Biometric authentication systems, such as offline signature verification, are particularly vulnerable. Federated learning (FL) provides a promising framework by enabling model training without exposing raw client data. However, keeping data strictly localized inherently creates severe data scarcity, which is a significant barrier to building robust deep learning (DL) models. This work investigates the feasibility of a privacy-preserving writer-dependent (WD) offline signature verification (OSV) system within an FL framework. To make local training viable under these constraints, we integrate complementary techniques into the federated pipeline: data augmentation is utilized to increase local sample diversity, while transfer learning provides robust pre-trained feature representations, drastically reducing the volume of data required for effective local fine-tuning. The proposed WD-OSV system was trained and evaluated on the popular CEDAR signature dataset, for which an average area under the curve (AUC) of 0.8893 and an average binary accuracy (ACC) of 80.12% are reported as preliminary results.

1. Introduction

The necessity of identity verification dates back to ancient civilizations, where fingerprint impressions were employed as an early authentication method [1]. In contemporary times, biometric systems automate identification and verification tasks. The fingerprint belongs to physiological biometrics, a category comprising distinct and immutable physical traits such as palm prints, iris patterns, facial features, retinal scans, vein patterns, and DNA [2]. Lips constitute an interesting exception, presenting both physiological and behavioral characteristics [3]. Behavioral biometrics instead rely on dynamic traits derived from human actions, including voice or speech, gait, handwritten signature, handwritten text, keystroke dynamics, and breath [2].
This work focuses on offline handwritten signature verification, which is widely used in banking, commerce, e-health, access control, and document authentication. Signatures enjoy high user acceptance owing to their familiarity, ease of production, and long-established social role. Their uniqueness arises from neuromotor processes that reflect individual behavioral patterns, making them suitable for biometric recognition [4]. At the same time, signature verification remains a challenging domain in pattern recognition and computer vision. Deep learning attracts increasing research interest in this field, providing promising results over traditional feature-engineering approaches, although it typically requires substantially larger datasets [5].
Biometric signature verification systems can be distinguished by data modality and acquisition. Online systems capture dynamic time-series data (e.g., x–y coordinates, pressure, velocity, and acceleration) using digital tablets or stylus-equipped devices. Conversely, offline verification systems collect static signature images acquired via scanners or cameras, typically extracting geometric and structural descriptors (e.g., shape, aspect ratio, pixel density, stroke characteristics, slant angle, and texture features). Owing to the absence of temporal information, offline verification remains the more challenging modality. Regarding training strategies, the distinction lies between writer-independent (WI) systems, which use a single classifier for all users, and WD systems, which train a separate classifier per signer, achieving superior accuracy at the cost of scalability [6]. WD models are trained exclusively on genuine signatures and random forgeries [7]; however, the lack of forgeries and the limited amount of genuine data pose significant obstacles for deep learning (DL) architectures. Consequently, achieving robust DL performance for automated feature extraction and verification under data scarcity remains an open research problem [8]. Promising directions to mitigate this limitation include data augmentation [9], generative adversarial networks (GANs) [10,11], few-shot learning [12], and transfer learning [13].
Centralizing handwritten signatures introduces a privacy risk to clients’ sensitive biometric information, which is protected by strict regulations such as the GDPR. To enhance data privacy and trust, we incorporate FL, introduced by McMahan et al. [14], into offline signature verification (OSV), thereby supporting compliance with privacy regulations. As shown in Figure 1, FL enables collaborative model training without transferring raw data to a central server or across clients, keeping the clients’ data distributed. In writer-dependent settings, FL offers a secondary benefit: new clients fine-tune a global model locally rather than training from scratch.
This work demonstrates the feasibility of a privacy-preserving WD baseline system by integrating established techniques into an FL framework using the federated averaging (FedAvg) algorithm. To overcome the fundamental constraint of local data scarcity inherent in FL, we employ a dual strategy: (a) data augmentation techniques are applied to directly increase local sample diversity, while (b) transfer learning—via a DenseNet-121 model pre-trained on ImageNet—is utilized to provide rich, pre-learned feature representations, thereby requiring significantly fewer local samples for effective fine-tuning. The system is trained and evaluated on the publicly available CEDAR signature database.

2. Review

Deep learning has gained substantial attention for complex pattern recognition tasks, such as handwritten signature verification. However, its effective integration faces two main challenges: meeting stringent privacy requirements and coping with data scarcity. To tackle privacy concerns, Xie et al. [15] proposed a writer-independent cross-lingual online signature verification system combining a BERT model with a federated learning framework. Similarly, Zhang et al. [16] introduced a lightweight 1D-CNN-based online verification system within an FL framework, optimized for low computational resources and data confidentiality. Alternatively, Xia et al. [17] developed a secure kNN algorithm resilient to ciphertext attacks, while Kaur and Kansal [18] introduced a Hadamard transform-based approach to safeguard signature templates.
For the challenge of limited data, several techniques have been explored, including transfer learning, data augmentation, meta-learning, few-shot learning, and knowledge distillation. Transfer learning and data augmentation are the most widely adopted. Chetry et al. [19] proposed an offline writer-dependent system using a pre-trained lightweight SqueezeNetV1.0 model, while Harika et al. [20] proposed a hybrid MobileNet-CNN leveraging transfer learning. Comparative studies have examined hybrid approaches combining deep learning models with machine learning verifiers [21,22,23,24]. Conventional data augmentation is extensively utilized to enlarge datasets, frequently alongside transfer learning [25]. Alternatively, AI-generated data or features, for example via generative adversarial networks (GANs), provide a supplementary data expansion methodology [10,26,27,28]. Furthermore, Badie and Sajedi [29] presented a graph neural network-based approach.
Knowledge distillation represents another transfer approach based on a student–teacher architecture, where a smaller student model learns from a pre-trained, larger teacher model [30,31]. The few-shot paradigm encompasses Siamese neural networks (SNNs), which have shown effectiveness under data scarcity, exemplified by SigNet [32]. Advanced SNN architectures include two-stage SNNs [33], hybrid Siamese–transformer networks with triplet loss [34], and recent transformer-based verification models [35,36]. The meta-learning paradigm, designed for data-scarce environments, has also been adapted, with Hafemann et al. [37] extending the MAML method by leveraging different loss functions during the adaptation and meta-learning phases.

3. Proposed System

This section presents a privacy-preserving offline handwritten signature verification system based on an FL framework. By integrating the DenseNet121 CNN architecture, the system enables collaborative model training while keeping raw client data distributed and secure. The methodology covers the local data pipeline (collection, preprocessing, and augmentation), the FL protocol, and the CNN verifier architecture.

3.1. Federated Learning Framework

FL is a collaborative model training paradigm introduced by McMahan et al. [14]. In this framework, clients train their local model on their private data and share only the trained model parameters (e.g., weights) with a central coordinator, rather than exposing their sensitive data to other entities. The coordinator is responsible for selecting clients for FL training participation, orchestrating local training rounds, and aggregating the received client model parameters using the federated averaging (FedAvg) aggregator of Algorithm 1 to update the global model. This privacy-by-design approach, which eliminates the need to expose sensitive raw data, makes FL a particularly suitable architecture for applications governed by strict data protection regulations such as the GDPR.
Algorithm 1: Federated Averaging (FedAvg) Algorithm.
In our proposed system, each FL training epoch proceeds as follows. The coordinator first defines the local training hyperparameters, setting the number of local rounds to r = 8 and the batch size to b = 10. It then randomly selects a subset of N participant clients, denoted C = {c_1, ..., c_N}, from all available clients A = {c_1, c_2, ..., c_K}, where C ⊆ A. Each selected client c_i gathers its local training dataset D_i = {s_1^(i), s_2^(i), ..., s_{n_i}^(i)} and trains its local model using the predefined hyperparameters. Upon completing local training, all clients synchronously upload their updated model parameters w_{t+1}^(i) to the server. The coordinator server aggregates these parameters using the FedAvg rule.
w_{t+1} = Σ_{i=1}^{N} (n_i / n_C) w_{t+1}^(i), where n_C = Σ_{j=1}^{N} n_j
Finally, the server updates the global model with the aggregated weights and distributes the new model back to the clients. This iterative process is repeated until the global model converges, or for a predefined maximum of e = 100 FL epochs if performance plateaus.
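The size-weighted aggregation rule above can be sketched in a few lines of Python. The function name fedavg and the list-of-arrays weight representation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: size-weighted average of client model parameters.

    client_weights: one list of numpy arrays (layer weights) per client.
    client_sizes:   local dataset sizes n_i, used as aggregation weights.
    """
    n_total = float(sum(client_sizes))  # n_C = sum_j n_j
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # w_{t+1} = sum_i (n_i / n_C) * w_{t+1}^(i), applied layer-wise
        layer_avg = sum(
            (n_i / n_total) * w[layer]
            for w, n_i in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Two clients with unequal local dataset sizes (60 vs. 40 samples):
w_a = [np.array([1.0, 1.0])]
w_b = [np.array([3.0, 5.0])]
global_w = fedavg([w_a, w_b], [60, 40])
print(global_w[0])  # 0.6*[1,1] + 0.4*[3,5] -> [1.8, 2.6]
```

The larger client contributes proportionally more to the global model, matching the n_i/n_C weighting of the equation above.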

3.2. Dataset and Pre-Processing

The proposed system is trained and evaluated on the CEDAR database, which contains 2640 grayscale handwritten signature images, comprising 1320 genuine signatures and 1320 skilled forgeries collected from 55 signers. Each signer contributes 24 genuine and 24 forged samples. To emulate a realistic FL environment characterized by decentralized client distributions, the dataset is partitioned across multiple clients. In WD scenarios, users typically do not possess skilled forgeries of their own signatures. Therefore, signatures from signers 31–50 are treated as random forgeries and uniformly assigned to signers 1–30 (clients 1–30). Each client is thus allocated 24 genuine signatures, 24 skilled forgeries, and 40 random forgeries. The local datasets are further split into training (80%), validation (10%), and testing (10%) subsets, with skilled forgeries reserved exclusively for testing to better reflect operational deployment constraints. A standardized pre-processing pipeline is applied to all signature images, including resizing to 224 × 224 pixels, binarization to enhance stroke fidelity, and normalization to the [0, 1] range. To mitigate data scarcity in WD verification, data augmentation is performed during training, consisting of random rotations within ±40 degrees and variations in brightness and contrast. This augmentation increases local sample diversity and supports improved model generalization under limited per-client data regimes. Furthermore, the built-in preprocessing function of the Keras DenseNet121 implementation is applied.
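The resize–binarize–normalize steps described above can be sketched as follows. This is a minimal, dependency-free illustration: the function name, the nearest-neighbour resizing, and the fixed threshold of 127 are assumptions, since the paper does not specify its exact binarization method (a real pipeline would typically use an image library such as PIL or OpenCV):

```python
import numpy as np

def preprocess_signature(img, size=224):
    """Sketch of the described pipeline: resize to 224x224, then
    binarize, which also yields values in the [0, 1] range.

    img: 2-D numpy array of grayscale pixel values in [0, 255].
    """
    h, w = img.shape
    # Nearest-neighbour resize via index sampling (illustrative only).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = img[np.ix_(rows, cols)]
    # Binarize at a fixed mid-grey threshold; output is {0.0, 1.0}.
    return (resized > 127).astype(np.float32)

sig = np.random.randint(0, 256, size=(300, 600))
out = preprocess_signature(sig)
print(out.shape)  # (224, 224)
```

In the actual system, this step would be followed by the augmentation transforms and the DenseNet121 preprocessing function before local training.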

3.3. Verification Model

A convolutional neural network (CNN) is a multilayer architecture composed of convolutional, pooling, and fully connected layers, primarily employed for tasks such as image classification and object detection. CNNs automatically learn hierarchical representations, eliminating the need for handcrafted descriptors and manual feature selection. By integrating feature extraction and classification into a unified framework, CNNs can operate as end-to-end classifiers or as feature encoders. However, their performance strongly depends on access to large training sets, which poses a major limitation in handwritten signature verification scenarios where localized data scarcity is intrinsic. To mitigate this without generating massive sample volumes, we employ transfer learning via the DenseNet121 architecture [38] pre-trained on ImageNet. By leveraging pre-learned hierarchical representations, the model does not need to learn fundamental visual features from scratch; this allows the unfrozen client-specific layers to converge effectively and extract highly distinctive signature features despite the limited number of local samples. The original classifier is removed, the final 40 layers are unfrozen for client-specific fine-tuning, and a lightweight fully connected layer is appended as the verifier. DenseNet architectures introduce dense blocks, in which the ℓ-th layer receives the concatenation of the feature maps of all preceding layers, [x_0, x_1, ..., x_{ℓ-1}], and produces x_ℓ = H_ℓ([x_0, x_1, ..., x_{ℓ-1}]), maintaining a feed-forward structure. This design maximizes information flow, facilitates gradient propagation, and promotes extensive feature reuse. Key advantages include mitigation of the vanishing gradient problem, a reduced parameter footprint, and improved regularization.
These properties are particularly beneficial for signature verification, as dense connectivity alleviates overfitting and supports effective adaptation under limited and heterogeneous client data environments.
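The dense connectivity rule x_ℓ = H_ℓ([x_0, ..., x_{ℓ-1}]) can be illustrated with a toy numerical sketch on 1-D feature vectors. The random linear map followed by ReLU stands in for the actual BN-ReLU-Conv composite H_ℓ, and all names and sizes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x0, num_layers=4, growth_rate=8):
    """Toy DenseNet block: each layer consumes the concatenation of
    ALL preceding feature maps and emits growth_rate new features."""
    features = [x0]
    for _ in range(num_layers):
        concat = np.concatenate(features)       # [x_0, ..., x_{l-1}]
        W = rng.standard_normal((growth_rate, concat.size))
        x_l = np.maximum(W @ concat, 0.0)       # H_l(.) as linear + ReLU
        features.append(x_l)                    # reused by later layers
    return np.concatenate(features)

out = dense_block(np.ones(16))
# Output width grows by growth_rate per layer: 16 + 4*8 = 48 features.
print(out.shape)  # (48,)
```

The explicit feature reuse is what keeps the parameter count low: each layer only has to produce growth_rate new channels rather than re-deriving everything from scratch.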

4. Results

The system was trained and evaluated on the CEDAR database. During training, validation used three genuine signatures and three random forgeries per client. Testing was performed under two settings: with and without skilled forgeries. In the first case, the test dataset included four skilled forgeries, two genuine signatures, and two random forgeries per client, while in the second case, it consisted of two genuine signatures and two random forgeries per client. Experiments were conducted using different random client selection strategies: three selected clients and five selected clients. The corresponding training plots are shown in Figure 2. The three-client strategy achieved the highest accuracy (78.83%) in the presence of skilled forgeries, whereas the five-client strategy achieved the highest accuracy (87.50%) without skilled forgeries, as summarized in Table 1. Performance was evaluated using the equal error rate (EER) and the area under the curve (AUC) during verification, while accuracy and F1-score were used as metrics in the test phase.
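As a sketch of how the EER metric mentioned above can be computed from verification scores, the following performs a simple threshold sweep and reports the point where the false accept and false reject rates are closest. This is an illustrative implementation, not the paper's evaluation code, and it assumes higher scores indicate genuine signatures:

```python
import numpy as np

def eer(genuine_scores, forgery_scores):
    """Equal error rate: operating point where the false reject rate
    (genuine rejected) equals the false accept rate (forgery accepted)."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    best_gap, best_eer = np.inf, 0.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine rejected at t
        far = np.mean(forgery_scores >= t)   # forgeries accepted at t
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Hypothetical verifier scores for one client:
gen = np.array([0.9, 0.8, 0.85, 0.6])
forg = np.array([0.2, 0.3, 0.55, 0.7])
print(eer(gen, forg))  # 0.25: FAR = FRR = 1/4 at threshold 0.7
```

With finite sample sets the two error rates rarely cross exactly, so the midpoint of the closest pair is returned, a common practical convention.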

5. Conclusions

This work demonstrates a collaborative training approach in which private signature data remain localized on client devices, ensuring confidentiality within a federated learning paradigm. Verifier models are deployed at the data source and acquire knowledge from a global model. However, this privacy-by-design setting inherently relies on strictly localized data, leading to severe data scarcity per client. To make FL viable under these constraints, complementary techniques were required: local data augmentation was applied to artificially expand sample diversity, while transfer learning (DenseNet-121) supplied a robust foundation of pre-trained visual features. Together, they enabled effective model convergence despite the significantly limited per-client sample size. Experimental results show that client participation has a measurable impact on global performance, with the three-client selection strategy achieving better accuracy in the presence of skilled forgeries. Nevertheless, the system remains less robust against skilled forgeries, indicating that additional improvements are required in this scenario. Future work may benefit from integrating a pre-trained writer-independent Siamese feature extractor, such as SigNet, into the writer-dependent verifier pipeline. Additionally, further exploration of data augmentation methods is expected to enhance performance in data-scarce and heterogeneous client environments.

Author Contributions

Conceptualization, H.V., F.Z., S.K., E.N.Z. and G.K.; methodology, H.V., F.Z., S.K., E.N.Z. and G.K.; software, H.V., F.Z., S.K., E.N.Z. and G.K.; validation, H.V., F.Z., S.K., E.N.Z. and G.K.; formal analysis, H.V., F.Z., S.K., E.N.Z. and G.K.; investigation, H.V., F.Z., S.K., E.N.Z. and G.K.; resources, H.V., F.Z., S.K., E.N.Z. and G.K.; data curation, H.V., F.Z., S.K., E.N.Z. and G.K.; writing—original draft preparation, H.V., F.Z., S.K., E.N.Z. and G.K.; writing—review and editing, H.V., F.Z., S.K., E.N.Z. and G.K.; visualization, H.V., F.Z., S.K., E.N.Z. and G.K.; supervision, E.N.Z. and G.K.; project administration, E.N.Z. and G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The dataset used in this research is publicly available at http://www.cedar.buffalo.edu/NIJ/data/signatures.rar (accessed on 25 March 2026).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chaudhari, R.D.; Pawar, A.A.; Deore, R.S. The historical development of biometric authentication techniques: A recent overview. Int. J. Eng. Res. Technol. 2013, 2, 1420–1427. [Google Scholar]
  2. Lien, C.W.; Vhaduri, S. Challenges and opportunities of biometric user authentication in the age of iot: A survey. ACM Comput. Surv. 2023, 56, 1–38. [Google Scholar] [CrossRef]
  3. Chowdhury, D.P.; Kumari, R.; Bakshi, S.; Sahoo, M.N.; Das, A. Lip as biometric and beyond: A survey. Multimed. Tools Appl. 2022, 81, 3831–3865. [Google Scholar] [CrossRef]
  4. Gornale, S.S.; Kumar, S.; Siddalingappa, R.; Hiremath, P.S. Survey on handwritten signature biometric data analysis for assessment of neurological disorder using machine learning techniques. Trans. Mach. Learn. Artif. Intell. 2022, 10, 27–60. [Google Scholar] [CrossRef]
  5. Pandey, G.K.; Raj, V.; Agarwal, A.; Dixit, M.; Chauhan, S.S.; Srivastava, S. Offline Signature Verification: An Extensive Survey of Deep Learning Methods. In Proceedings of the 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL), Virtual, 15–17 February 2025; pp. 892–898. [Google Scholar]
  6. Bhavani, S.D.; Bharathi, R.K. A multi-dimensional review on handwritten signature verification: Strengths and gaps. Multimed. Tools Appl. 2024, 83, 2853–2894. [Google Scholar] [CrossRef]
  7. Hafemann, L.G.; Sabourin, R.; Oliveira, L.S. Offline handwritten signature verification—Literature review. In Proceedings of the 7th International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, 28 November–1 December 2017; pp. 1–8. [Google Scholar]
  8. Alzubaidi, L.; Bai, J.; Al-Sabaawi, A.; Santamaría, J.; Albahri, A.S.; Al-dabbagh, B.S.N.; Gu, Y. A survey on deep learning tools dealing with data scarcity: Definitions, challenges, solutions, tips, and applications. J. Big Data 2023, 10, 46. [Google Scholar] [CrossRef]
  9. Kumar, T.; Brennan, R.; Mileo, A.; Bendechache, M. Image data augmentation approaches: A comprehensive survey and future directions. IEEE Access 2024, 12, 187536–187571. [Google Scholar] [CrossRef]
  10. Hameed, M.M.; Ahmad, R.; Kiah, L.M.; Murtaza, G.; Mazhar, N. OffSig-SinGAN: A deep learning-based image augmentation model for offline signature verification. Comput. Mater. Contin. 2023, 76, 1267–1289. [Google Scholar]
  11. Sukiman, S.A.; Husin, N.A.; Hamdan, H.; Murad, M.A.A. AI-Driven Forgery Detection in Offline Handwriting Signatures: Advances, Challenges, and the Role of Generative Adversarial Networks. J. Comput. Res. Innov. 2025, 10, 182–197. [Google Scholar]
  12. Chamakh, B.; Bounouh, O. A Unified ResNet18-Based Approach for Offline Signature Classification and Verification Across Multilingual Datasets. In Proceedings of the 2025 International Conference on Advanced Computing and Technologies, Chennai, India, 10–12 March 2025; pp. 4024–4033. [Google Scholar]
  13. Tsourounis, D.; Theodorakopoulos, I.; Zois, E.N.; Economou, G. From text to signatures: Knowledge transfer for efficient deep feature learning in offline signature verification. Expert Syst. Appl. 2022, 189, 116136. [Google Scholar] [CrossRef]
  14. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  15. Xie, L.; Wu, Z.; Zhang, X.; Li, Y. FBN: Federated Bert Network with client-server architecture for cross-lingual signature verification. Pattern Recognit. 2023, 142, 109681. [Google Scholar] [CrossRef]
  16. Zhang, L.; Guo, Y.; Ding, Y.; Sato, H. 1-D CNN-Based Online Signature Verification with Federated Learning. In Proceedings of the 2023 IEEE 22nd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Exeter, UK, 1–3 November 2023; IEEE: New York, NY, USA, 2023; pp. 2698–2705. [Google Scholar]
  17. Xia, Z.; Shi, T.; Xiong, N.N.; Sun, X.; Jeon, B. A privacy-preserving handwritten signature verification method using combinational features and secure kNN. IEEE Access 2018, 6, 46695–46705. [Google Scholar] [CrossRef]
  18. Kaur, H.; Kansal, E.R. Distance based online signature verification with enhanced security. Int. J. Eng. Dev. Res. 2017, 5, 1703–1710. [Google Scholar]
  19. Chetry, B.P.; Kar, B. Offline Signature Verification Using Pre-Trained Deep Convolution Neural Network: SqueezeNet. Int. J. Electron. Commun. Eng. 2025, 12, 315–326. [Google Scholar] [CrossRef]
  20. Harika, K.; Dhanalakshmi, P.; Reddy, S.S.K.; Shakeer, S.; Thanmayi, M.S.; Haripriya, V. Handwritten Signature Recognition using MobileCNN. In Proceedings of the 2024 International Conference on Intelligent Systems for Cybersecurity (ISCS), Tokyo, Japan, 15–17 May 2024; pp. 1–7. [Google Scholar]
  21. Ozyurt, F.; Majidpour, J.; Rashid, T.A.; Koç, C. Offline handwriting signature verification: A transfer learning and feature selection approach. arXiv 2024, arXiv:2401.09467. [Google Scholar] [CrossRef]
  22. Upadhyay, R.R.; Singh, K.K. Handwritten signature verification system using hybrid transfer learning approach. Evol. Syst. 2024, 15, 2313–2322. [Google Scholar] [CrossRef]
  23. Lanjewar, R.N.; Wande, R.W.; Gundewar, S. Handwritten Signature Verification using CNN. In Proceedings of the 2024 2nd DMIHER International Conference on Artificial Intelligence in Healthcare, Education and Industry (IDICAIEI), Nagpur, India, 15–17 November 2024; pp. 1–6. [Google Scholar]
  24. Alhadidi, F.; Hiary, H. Offline signature verification using lightweight deep learning. In Proceedings of the 2024 25th International Arab Conference on Information Technology (ACIT), Riyadh, Saudi Arabia, 10–12 December 2024; pp. 1–10. [Google Scholar]
  25. Bhirud, S.; Bijwe, S.; Chavan, T.; Bhonsle, A.; Rukhande, S. Deep Transfer Learning for Authenticating Handwritten Signatures. In Proceedings of the 2025 International Conference on Electronics, AI and Computing (EAIC), Bangalore, India, 10–12 June 2025; pp. 1–6. [Google Scholar]
  26. Al-Suhaibani, D.; Al-Shargabi, A.A. Cluster GAN-based model for signature generation and verification. In Proceedings of the 2023 3rd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi Arabia, 13–14 September 2023; pp. 249–254. [Google Scholar]
  27. Arab, N.; Nemmour, H.; Bouibed, M.L.; Chibani, Y. 1D-GAN for improving offline handwritten signature verification based on small sets of real samples. Multimed. Tools Appl. 2025, 84, 24541–24561. [Google Scholar] [CrossRef]
  28. Hong, D.J.; Chang, W.D.; Cha, E.Y. Handwritten signature generation using denoising diffusion probabilistic models with auxiliary classification processes. Appl. Sci. 2024, 14, 10233. [Google Scholar] [CrossRef]
  29. Badie, A.; Sajedi, H. Offline handwritten signature authentication using Graph Neural Network methods. Int. J. Inf. Technol. 2024, 16, 445–455. [Google Scholar] [CrossRef]
  30. Hao, Y.; Zheng, Z. Research on Detecting AI-Generated Forged Handwritten Signatures via Data-Efficient Image Transformers. IEEE Access 2025, 13, 7683–7690. [Google Scholar] [CrossRef]
  31. Tsourounis, D.; Theodorakopoulos, I.; Zois, E.N.; Economou, G. A feature-based knowledge distillation (FKD) for offline signature feature learning without signatures. Expert Syst. Appl. 2025, 258, 129158. [Google Scholar] [CrossRef]
  32. Dey, S.; Dutta, A.; Toledo, J.I.; Ghosh, S.K.; Lladós, J.; Pal, U. Signet: Convolutional siamese network for writer independent offline signature verification. arXiv 2017, arXiv:1707.02131. [Google Scholar] [CrossRef]
  33. Xiao, W.; Ding, Y. A two-stage siamese network model for offline handwritten signature verification. Symmetry 2022, 14, 1216. [Google Scholar] [CrossRef]
  34. Majumder, P.; Joaa, A.M.; Mehedi, M.H.K.; Rasel, A.A. Siamese-transformer network for offline handwritten signature verification using few-shot. In Proceedings of the 2023 26th International Conference on Computer and Information Technology (ICCIT), Chittagong, Bangladesh, 13–15 December 2023; pp. 1–6. [Google Scholar]
  35. Tehsin, S.; Hassan, A.; Riaz, F.; Nasir, I.M. DaGAM-Trans: Dual Graph Attention Module-based Transformer for Offline Signature Forgery Detection. Results Eng. 2025, 21, 106425. [Google Scholar] [CrossRef]
  36. Li, W.; Muhammat, M.; Xu, X.; Aysa, A.; Ubul, K. Multi-scale CNN-CrossViT network for offline handwritten signature recognition and verification. Complex Intell. Syst. 2025, 11, 400–415. [Google Scholar] [CrossRef]
  37. Hafemann, L.G.; Sabourin, R.; Oliveira, L.S. Meta-learning for fast classifier adaptation to new users of signature verification systems. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1735–1745. [Google Scholar] [CrossRef]
  38. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017; pp. 4700–4708. [Google Scholar]
Figure 1. Federated learning architecture.
Figure 2. AUC performance curves for federated signature verification: (a) training with three clients; (b) training with five clients.
Table 1. Results of test and train processes (C: clients, SF: skilled forgery, RF: random forgery).
                     3C (SF)   3C (RF)   5C (SF)   5C (RF)
Test Accuracy (%)     78.83     81.67     75.42     87.50
Test F1-Score         0.5938    0.7755    0.6040    0.8571
Train AUC               -       1.00        -       1.00
Evaluation AUC          -       0.9030      -       0.9362
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Veraros, H.; Zantalis, F.; Katsoulis, S.; Zois, E.N.; Koulouras, G. A Federated Learning Approach for Privacy-Preserving Automated Signature Verification. Eng. Proc. 2026, 124, 100. https://doi.org/10.3390/engproc2026124100
