Sensors · Article · Open Access · 30 May 2024

Online Signature Biometrics for Mobile Devices

Institute of Control and Computation Engineering, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Sensor-Based Behavioral Biometrics

Abstract

This paper addresses issues concerning biometric authentication based on handwritten signatures. Our research aimed to check whether a handwritten signature acquired with a mobile device can effectively verify a user’s identity. We present a novel online signature verification method that uses the coordinates of points and the pressure values at each point, collected with a mobile device. Convolutional neural networks are used for signature verification. Three neural network models are investigated in this paper: two custom lightweight models, SigNet and SigNetExt, and the VGG-16 model commonly used in image processing. The convolutional neural networks aim to determine whether the acquired signature sample matches the class declared by the signer; thus, a closed-set verification scenario is considered. The effectiveness of our method was tested on signatures acquired with mobile phones. We used a subset of the multimodal database MobiBits that was captured using a custom-made application and consists of samples acquired from 53 people of diverse ages. The experimental results on real data demonstrate that the developed deep neural network architectures can be successfully used for online handwritten signature verification. We achieved an equal error rate (EER) of 0.63% for random forgeries and 6.66% for skilled forgeries.

1. Introduction

A handwritten signature is probably the oldest and the most socially accepted method of biometric authentication of an individual [1]. With the development of technology, signature acquisition techniques improved, and a new branch emerged: online handwritten signatures. The image of a signature was replaced with a sequence of x-y coordinates in time, along with various associated attributes such as pressure, stylus pivot, and acceleration. Despite that, the handwritten signature’s most severe imperfection remains the same: the natural variability of a person’s handwriting causes large fluctuations between samples. As a result, samples from the same person can vary in duration, position, or shape, which makes identification and verification complex tasks. These problems are widely discussed in [2], which provides an overview of biometric methods. Fortunately, advances in devices’ computing power have made it possible to create new data processing methods and to run artificial neural networks on platforms ranging from cloud servers and personal computers to mobile phones [3]. However, a survey of the scientific literature shows that there is still relatively little work on verification methods that can be used on a mobile phone; most techniques require the use of specialized tablets.
In our research, we have focused on online handwritten signature verification on mobile phones, paying particular attention to methods based on machine learning techniques. To the best of our knowledge, this article is the first to describe the use of convolutional neural networks for signature recognition on a dataset collected with mobile devices that includes pressure values at each point. Exploiting mobile phones with pressure sensors is still quite a novelty in biometrics. As the number of mobile device models with so-called “Force Touch” screens keeps increasing, questions about the accuracy of their sensors have appeared. The results of a comprehensive study on the precision of pressure sensors are described in the report [4]. The authors conducted experiments with the pressure sensors of four mobile phones: Apple iPhone 6s, iPhone 7, Huawei Mate S Premium, and ZTE AXON Mini. The results confirmed that the measurement accuracy of the tested sensors is good enough for acquiring signature data enhanced with pressure characteristics. The report convinced us that including pressure characteristics in handwritten online signatures entered on a mobile device could affect their verification results.
To sum up, the main contributions of this paper are:
  • A method for preprocessing online signatures gathered on mobile phones into the valuable form for verification systems;
  • Two custom-made classifiers (SigNet and SigNetExt) using convolutional neural networks for signature recognition on mobile phones;
  • A comprehensive study comparing online signature recognition performance based on the commonly used pre-trained VGG-16 model for image recognition and our SigNet and SigNetExt models.
The main innovation is a novel lightweight convolutional neural network architecture that uses only the coordinates of points and the pressure values at each point, collected with a mobile device, for signature verification. Our solution does not require the additional measurements that specialized tablets collect for signature recognition.
The rest of the paper is structured as follows. Section 2 presents the survey of the online handwritten signature verification methods, the usage of mobile devices for signature acquisition, and mobile cross-device interoperability in signature recognition. Artificial neural network architectures for signature verification are overviewed in Section 3. Section 4 describes the training datasets preprocessing, evaluation metrics, and results of training and validation of proposed neural network models. The performance evaluation of three network models is presented and discussed in Section 5. Finally, Section 6 concludes the paper and highlights future research directions.

3. Convolutional Neural Networks for Handwritten Signature Verification

3.1. Convolutional Neural Network (CNN)

Convolutional neural networks allow the automatic detection of relevant information without human supervision [22]. They use convolution and pooling operations. In a CNN, the input data is a multidimensional array, whereas in other simple artificial neural networks it is one-dimensional. When a filter would produce an output smaller than the input, the input is padded with zeros so that the output keeps the desired size. A CNN comprises several convolutional layers with the Rectified Linear Unit (ReLU) activation function. Each layer is a collection of filters with parameters w × h × d, where w, h, and d denote width, height, and depth, respectively. The number of activation maps at the output equals the number of filters in a single layer. The output from the convolutional layers is downsampled using pooling layers. A pooling layer preserves the most relevant information while decreasing the input’s spatial size: in the aggregation process, a window of predefined size is shrunk to a single element (usually the average or maximum value in the window). The part of the network responsible for classification follows the convolutional layers; it comprises a few fully connected layers with the softmax function.
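As a worked example of how filter size, padding, and stride determine the output size (a standard relation, added here for clarity): for an input of width $W$, filter width $F$, zero padding $P$, and stride $S$, the output width is $W_{out} = (W - F + 2P)/S + 1$. For instance, a 224-pixel-wide input convolved with a 3 × 3 filter, padding 1, and stride 1 gives $(224 - 3 + 2)/1 + 1 = 224$, so the spatial size is preserved.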
Convolutional neural networks are commonly used to classify diverse data from various domains, especially image recognition. The CNN architecture has already been used with good scores for handwritten signature recognition and verification, as is presented in Section 2. Therefore, we have developed and evaluated a novel method employing CNN for signature biometrics on a mobile phone.
The problem being solved is verifying the authenticity of a customer’s signature against an established set of M users. Our method aims to classify each input signature as the signature of the m-th user (m = 1, …, M) or to determine that it is the signature of a person from outside the given set of users. Thus, the CNN model output for a given input signature is a vector of probabilities of membership in classes m = 1, …, M. Then, the similarity values to signatures from classes m = 1, …, M are compared with the acceptance threshold predefined in the method. The comparison result determines whether or not the acquired sample belongs to class m, i.e., whether it is probably the signature of the m-th user. We proposed and evaluated three multi-class classifiers to solve this problem. First, we incorporated and adjusted a CNN model utilizing the VGG-16 architecture described in [23], which is widely used in image recognition. Next, we designed and tested a novel custom-made CNN architecture called SigNet and trained two models implementing this architecture.

3.2. VGG-16 Neural Network Architecture

As the first classifier, we incorporated and adjusted a CNN model utilizing the VGG-16 architecture described in [23]. The aim was to estimate, for each input signature, the probabilities of it belonging to classes m = 1, …, M.
The VGG-16 network architecture was designed by the Visual Geometry Group at the University of Oxford [23]. The model, pre-trained on natural images, is widely used in image recognition. In our approach, signatures are processed into images, so it was natural to check how models dedicated to images handle their recognition. The results of numerous studies in the literature [24,25] confirm that the VGG-16 model can be successfully used to solve classification tasks in various domains after fine-tuning and appropriate adaptation.
The VGG-16 network comprises 13 convolutional layers with the ReLU function, grouped into five blocks; each block of two or three convolutional layers is followed by a max-pooling layer. Three fully connected layers with ReLU follow the convolutional section. The numbers of filters in consecutive blocks are as follows:
  • 64 filter kernels with a size of 3 × 3 (block 1);
  • 128 filter kernels with a size of 3 × 3 (block 2);
  • 256 filter kernels with a size of 3 × 3 (block 3);
  • 512 filter kernels with a size of 3 × 3 (blocks 4 and 5).
The size of the input array is 224 × 224. The Stochastic Gradient Descent (SGD) optimizer [26] with a momentum value of 0.9 was used for training. The initial learning rate was 0.0001. To prevent model overfitting, we added a dropout layer with a dropout rate of 50% to the original network.
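As an illustration of this adaptation, the following minimal MATLAB sketch (not the authors’ code) shows how a pre-trained VGG-16 could be re-headed for M = 53 signature classes with the Deep Learning Toolbox; the insertion point of the dropout layer and the variable trainImds are assumptions.

    % Illustrative sketch (not the authors' code): adapting a pre-trained
    % VGG-16 to M = 53 signature classes with MATLAB's Deep Learning Toolbox.
    net = vgg16;                        % requires the VGG-16 support package
    layers = net.Layers;

    % Replace the original 1000-class head with a 53-class head, preceded
    % by the 50% dropout layer mentioned above (insertion point assumed).
    layers = [
        layers(1:end-3)
        dropoutLayer(0.5, 'Name', 'drop_extra')
        fullyConnectedLayer(53, 'Name', 'fc_signatures')
        softmaxLayer('Name', 'softmax_signatures')
        classificationLayer('Name', 'output_signatures')
    ];

    % SGD with momentum 0.9 and an initial learning rate of 1e-4, as above.
    options = trainingOptions('sgdm', ...
        'Momentum', 0.9, ...
        'InitialLearnRate', 1e-4);

    % trainedNet = trainNetwork(trainImds, layers, options);  % 224x224 inputs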

3.3. SigNet Neural Network Architecture

VGG-16 is a complex network, and its training is time-consuming. Moreover, the input of our method is a set of images with little content: most of each image is background. In this case, it was natural to look for a simpler classifier that is no less effective than VGG-16 while requiring smaller training datasets and less training time. Therefore, we have developed a novel custom-made lightweight network model (SigNet) for verifying the authenticity of customer signatures from an established set of M users.
The SigNet network architecture (Figure 1) comprises four convolutional layers with the ReLU function, pooling and softmax layers, and fully connected layers arranged in the following order:
Figure 1. SigNet network architecture diagram.
  • 1st convolutional layer with ReLU;
  • Max pooling layer;
  • 2nd, 3rd, and 4th convolutional layers with ReLU, each followed by an average pooling layer;
  • One fully connected layer with ReLU;
  • Softmax layer.
The parameters of the layers are as follows:
  • 32 filter kernels with a size of 9 × 9;
  • 32 filter kernels with a size of 7 × 7;
  • 64 filter kernels with a size of 5 × 5;
  • 64 filter kernels with a size of 6 × 16.
The size of the input array is 77 × 190.
We implemented our convolutional neural network using Matlab. We used the SGD optimizer with a momentum value of 0.9. The initial learning rate of 0.05 was gradually decreased to 0.0005.
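For concreteness, the following minimal MATLAB sketch shows how the SigNet layer stack described above could be assembled with the Deep Learning Toolbox. The pooling window sizes, strides, and padding are assumptions (the paper does not specify them), and the output dimension of the fully connected layer is assumed to equal the number of classes M; the filter counts and sizes follow the text.

    % Illustrative MATLAB sketch of the SigNet layer stack described above.
    layers = [
        imageInputLayer([77 190 1])                    % pressure-image input

        convolution2dLayer([9 9], 32, 'Padding', 'same')
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)              % assumed 2x2 window

        convolution2dLayer([7 7], 32, 'Padding', 'same')
        reluLayer
        averagePooling2dLayer(2, 'Stride', 2)

        convolution2dLayer([5 5], 64, 'Padding', 'same')
        reluLayer
        averagePooling2dLayer(2, 'Stride', 2)

        convolution2dLayer([6 16], 64, 'Padding', 'same')
        reluLayer
        averagePooling2dLayer(2, 'Stride', 2)

        fullyConnectedLayer(53)                        % assumed: M = 53 classes
        reluLayer
        softmaxLayer
        classificationLayer
    ];

    % SGD with momentum 0.9; learning rate decayed from 0.05 towards 0.0005.
    options = trainingOptions('sgdm', ...
        'Momentum', 0.9, ...
        'InitialLearnRate', 0.05, ...
        'LearnRateSchedule', 'piecewise');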
Table 1 summarizes the parameters of both CNN architectures, i.e., SigNet and VGG-16.
Table 1. The summary of SigNet and VGG-16 network architectures.

4. Training and Validation

4.1. Database of Online Signatures

We trained and validated both classifiers on data from the multimodal database MobiBits [27], which collects samples from a group of volunteers. All signatures were acquired on a Huawei Mate S device using the Adonit Dash 2 stylus. This mobile phone is equipped with pressure sensors built into the touchscreen. Thanks to this, each point of a signature was described by its x and y coordinates (Figure 2) and the pressure applied to the touchscreen. A custom-made application was developed and used for data acquisition. The application asked users to enter a series of their original signature samples and saved the data locally on the device. Users could reset a sample attempt or start the session from the beginning.
Figure 2. An example of a genuine signature (top) and its skilled forgery (bottom).
Moreover, the application was used to capture skilled forgeries. Each user was asked to select a victim’s identification number. The application then displayed the genuine signature chosen for forging, and the user had time to practice the fake signature before entering it into the system.
The dataset (DataSet) used in our research consisted of signatures acquired from 53 volunteers of diverse ages. Data collection was divided into three sessions (20 genuine signatures in the first session and 10 in each of the second and third). Moreover, each user was asked to enter 20 skilled forgeries. Finally, all collected samples were merged into a dataset with 53 classes. The distribution of genuine signatures and skilled forgeries for each class is presented in Figure 3. It should be noted that we did not consider signature changes over time in our research.
Figure 3. Number of samples for each class: genuine signatures (left) and skilled forgeries (right).
Then, DataSet was randomly divided into three subsets (see the sketch below): TrainSet, the training set (50% of DataSet); ValSet, the validation set (25% of DataSet); and TestSet, the testing set (25% of DataSet).
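For illustration, a minimal MATLAB sketch of such a random 50/25/25 split, repeated independently in each cross-validation round; the variable labels (one class label per sample) is hypothetical.

    % Illustrative random 50/25/25 split of DataSet into TrainSet, ValSet,
    % and TestSet index sets.
    N   = numel(labels);
    idx = randperm(N);                 % random permutation of sample indices

    nTrain = round(0.50 * N);
    nVal   = round(0.25 * N);

    trainIdx = idx(1 : nTrain);
    valIdx   = idx(nTrain + 1 : nTrain + nVal);
    testIdx  = idx(nTrain + nVal + 1 : end);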

4.2. Handwritten Signatures Preprocessing

Convolutional neural network models are extremely efficient in image recognition: they successfully learn the values at specific points of an image and their arrangement in space. Thus, the input data are images. Signature samples collected with mobile devices are sequences of two coordinates, x and y, and pressure values p. Therefore, to use the potential of CNNs, we converted handwritten signatures into images whose pixel values correspond to pressure. Figure 4 depicts an example signature representation after conversion.
Figure 4. Handwritten signatures represented as images (pixels correspond to pressure values), scaled to the SigNet input data size.
Each signature image was resized using linear interpolation to fit the predefined input data sizes of both network architectures: in our research, the input data dimensions were 224 × 224 pixels for the VGG-16 network and 77 × 190 pixels for the SigNet network. In the VGG-16 model, the authors strictly defined the input size, which cannot be modified. In the case of SigNet, the input size was fitted to the sizes of the samples acquired from volunteers and collected in the MobiBits database; the signature with the largest dimensions determined the size of the SigNet input.
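A minimal MATLAB sketch of this conversion, assuming a signature given as coordinate vectors x, y and a pressure vector p; the rounding and normalization details are our assumptions, not the authors’ code.

    % Illustrative sketch: rasterize a signature (x, y, p) into a gray-scale
    % image whose pixels encode pressure, then resize to the network input.
    function img = signatureToImage(x, y, p, inputSize)
        col = round(x - min(x)) + 1;           % shift signature to pixel (1,1)
        row = round(y - min(y)) + 1;

        img = zeros(max(row), max(col));
        for i = 1:numel(col)
            img(row(i), col(i)) = p(i);        % pixel value = pressure
        end

        img = img / max(p);                    % normalize pressure to [0, 1]
        img = imresize(img, inputSize, 'bilinear');  % e.g., [77 190] for SigNet
    end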

4.3. Evaluation Metrics and Results

Three network models were trained:
  • SigNet: the SigNet network architecture trained on the dataset of 968 samples acquired using the custom-made application (the base training dataset).
  • SigNetExt: the SigNet network architecture trained on an augmented dataset of 9680 samples, obtained by multiplying the base training dataset. Each sample was transformed with 5 random shifts and 20 rotations (from −10° to 9°) for each shifted image (see the sketch after this list).
  • VGG-16: the VGG-16 network architecture trained on an augmented dataset of 968 samples. Each sample was transformed with shifts randomly selected from the range [−10 px, 10 px] in the x and y coordinates and rotations randomly selected from [−15°, 15°] for each shifted image. The network was trained on slightly different images in each epoch; thus, the size of the training dataset was not increased, but it was randomly modified.
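For illustration, a minimal MATLAB sketch of the two augmentation transforms (random shift and small rotation). How the transforms combine to produce exactly the reported dataset sizes is not fully specified in the text, so the sketch shows a single randomized application with assumed ranges.

    % Illustrative sketch of the shift and rotation transforms used for
    % augmentation (ranges assumed from the description above).
    function out = augmentSignature(img)
        shift = randi([-10 10], 1, 2);         % random x/y shift in pixels
        out   = imtranslate(img, shift);
        angle = randi([-10 9]);                % rotation angle in degrees
        out   = imrotate(out, angle, 'bilinear', 'crop');
    end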
To reduce variability and generalize the results, we performed multiple rounds of cross-validation using randomly selected subsets TrainSet, ValSet, and TestSet of data from DataSet. We averaged the validation and testing results over 20 rounds to estimate each classifier’s predictive performance. Hence, 20 triples (TrainSet, ValSet, TestSet) were randomly selected for the training, validation, and testing phases.
Commonly used criteria were applied to assess the quality of the classifiers in the validation and testing procedures.
  • EER—a statistic commonly used in the performance evaluation of verification and authorization systems. The equal error rate describes the operating point at which the false rejection rate (FRR) and the false acceptance rate (FAR) are equal; the lower the equal error rate, the higher the accuracy of the biometric system. FAR and FRR are defined as follows (a sketch of the EER computation follows this list):
    $$\mathrm{FAR} = \frac{FP}{FP + TN}, \qquad \mathrm{FRR} = \frac{FN}{FN + TP},$$
    where TP (True Positives) and FP (False Positives) denote the numbers of correct and incorrect owner-signature detections, and TN (True Negatives) and FN (False Negatives) the numbers of correct and incorrect forgery detections.
  • AUC—the area under the Receiver Operating Characteristic (ROC) curve. The ROC curve plots the true positive rate as a function of the false positive rate.
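A minimal MATLAB sketch of how the EER can be computed from similarity scores, following the FAR and FRR definitions above; the variable names genuineScores and forgeryScores are hypothetical, not the authors’ code.

    % Illustrative EER computation: genuineScores holds classifier outputs
    % for genuine claims, forgeryScores for forged ones. The EER lies where
    % FAR and FRR cross.
    function [eer, threshold] = computeEER(genuineScores, forgeryScores)
        thresholds = sort([genuineScores(:); forgeryScores(:)]);
        far = arrayfun(@(t) mean(forgeryScores >= t), thresholds);  % FP/(FP+TN)
        frr = arrayfun(@(t) mean(genuineScores  <  t), thresholds); % FN/(FN+TP)

        [~, i]    = min(abs(far - frr));       % closest crossing point
        eer       = (far(i) + frr(i)) / 2;
        threshold = thresholds(i);
    end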
The results of the validation of the three network models are presented in Figure 5 and Figure 6 and in Table 2. All performance measures are averaged over 20 rounds of cross-validation.
Figure 5. Validation of SigNet, SigNetExt and VGG-16 on the TrainSet dataset. Averaged ROC curves for 20 rounds of cross-validation.
Figure 6. Validation of SigNet, SigNetExt and VGG-16 on the ValSet dataset. Averaged ROC curves for 20 rounds of cross-validation.
Table 2. Mean Equal Error Rates (EER) for signature recognition and random and skilled forgeries. Validation of classifiers on training and validation datasets.
Figure 5 depicts the ROC curves calculated based on the recognition results of signatures from the training dataset.
Figure 6 depicts the ROC curves calculated based on the recognition results of signatures from the validation datasets. The goal was to classify all samples from a given ValSet, i.e., signatures that were not used in classifier training.
All experiments confirmed that the accuracy of signature recognition depends on the type of forgeries. Skilled forgeries are more challenging to detect. However, both SigNet architecture classifiers have proven to be quite effective in identifying skilled forgeries.
The SigNet model was the best at identifying random forgeries when the validation procedure was performed on the training datasets (TrainSet). However, the situation changed when the validation sets (ValSet) were used to check the accuracy of all models: in these experiments, SigNet gave the worst results. This could be caused by SigNet overfitting the training data. The overfitting problem was reduced by enlarging the training dataset through augmentation. SigNetExt generalizes better and performs better on the validation set for both scenarios.

5. Performance Evaluation of Signature Biometrics

5.1. Experiment Setup

Similar to training and validation, all classifiers were tested 20 times on subsets randomly selected from the MobiBits database to generalize the results. The scenario of each test was as follows. The user claimed that their signature was of class k, k ∈ {1, …, M}, where M is the number of users. The acquired sample was preprocessed and fed into the SigNet, SigNetExt, or VGG-16 classifier. The classifier then returned a vector of similarities to all M classes. The similarity value for the claimed class k was compared with the acceptance threshold predefined in the system, and the user was authorized when the similarity value exceeded the threshold. The testing procedure followed Algorithm 1.
Algorithm 1 Classifiers testing process.
1: m – user identifier, m = 1, …, M;
2: S_i = (sign_i, k) – a sample (sign_i – signature, k – claimed class);
3: P – vector of probabilities of sign_i belonging to the M classes;
4: T_a – acceptance threshold value;
5: for all samples from the testing set (i = 1, …, K) do
6:     Select S_i = (sign_i, k). Enter sign_i at the input of the classifier;
7:     Check the classifier output P;
8:     if P[k] ≥ T_a and P[k] = max_{m=1,…,M} P[m] then
9:         positive user verification;
10:    else
11:        forgery detection;
12:    end if
13: end for
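For illustration, a minimal MATLAB rendering of Algorithm 1; trainedNet, testSamples, and the threshold value Ta are hypothetical names, not the authors’ code.

    % Illustrative rendering of Algorithm 1: a sample is accepted only if
    % the probability of the claimed class k reaches the threshold and is
    % the maximum over all M classes.
    Ta = 0.5;                                  % example acceptance threshold
    for i = 1:numel(testSamples)
        sign_i = testSamples(i).image;         % preprocessed signature image
        k      = testSamples(i).claimedClass;  % claimed class

        P = predict(trainedNet, sign_i);       % M class probabilities

        if P(k) >= Ta && P(k) == max(P)
            result(i) = "positive user verification";
        else
            result(i) = "forgery detection";
        end
    end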

5.2. Results of Experiments

The metrics described in Section 4 were used for the performance evaluation of the SigNet, SigNetExt, and VGG-16 classifiers. Figure 7 presents the average ROC curves calculated over 20 rounds of experiments on the testing datasets (TestSet) randomly selected from DataSet. The average EER values and their standard deviations are collected in Table 3. The goal was to classify all samples from a given test set, i.e., signatures that were not used in classifier training and validation.
Figure 7. Testing of SigNet, SigNetExt, and VGG-16 on the TestSet dataset. Averaged ROC curves for 20 rounds of cross-validation.
Table 3. Mean Equal Error Rates (EER) for signature recognition and random and skilled forgeries. Evaluation of classifiers on the testing dataset (TestSet).
It can be seen that, similar to the results of the network model validation presented in Section 4, the equal error rate (EER) for random forgeries is small and similar for all classifier models. The differences are relatively small, with average EER values from 0.63% to 2.70%. The best result, i.e., the smallest EER, was obtained for the VGG-16 model and lay in the [0.34%, 0.92%] range. However, for the SigNet network architecture, the signature recognition accuracy increased after the training dataset’s augmentation (the SigNetExt model). Thus, it can be expected that for a larger training dataset, the results will be similar to those of the VGG-16 classifier. To verify this supposition, we plan to gather more signatures from potential users to extend our training dataset and train our model on the extended database.
It should be noted that in handwritten online signature biometrics, immunity to skilled forgeries determines the quality of the identity verification system. This immunity is crucial because the modality of a handwritten signature is easily observable and can be quickly replicated by a third party. Thus, the results for skilled forgeries are determinative: the average EER values from 6.66% to 12.9% are the main distinctive factor among the proposed classifiers. As can be observed, similarly to the results of model validation on TrainSet and ValSet collected in Table 2, the best results were achieved using the SigNetExt classifier. Several issues could explain the poorer performance of the VGG-16 architecture; in particular, transfer learning and the large number of parameters relative to the amount of training data might significantly influence the results.
Moreover, VGG-16 was initially trained to classify diverse images into 1000 different classes, whereas our problem is less complicated: the network was retrained to classify only gray-scale images of signatures into 53 classes. Utilizing this network for identity verification with online signatures may require modifications of the training datasets or additional image preprocessing steps.
In conclusion, the experimental results confirm the high efficiency of classifiers implementing CNN architectures. The SigNet, SigNetExt, and VGG-16 models outperformed widely used online handwritten signature recognition systems employing DTW. The paper [27] reports the results of verifying signatures from the MobiBits database using DTW: the best EER for skilled forgeries was 18.44%, while the worst results of the CNN models were 14.16% (VGG-16) and 7.31% (SigNetExt). Thus, the SigNet architecture trained on augmented datasets can be recommended for online signature verification on mobile devices.

6. Conclusions

With signature forgery being a significant issue in various fields, such as banking, education, and legal documentation, developing an efficient and accessible verification system is crucial. By providing users with a convenient and reliable means of verifying signatures, a mobile application can mitigate the risks associated with signature forgery and enhance security in various domains.
This paper presents a study on biometric recognition methods that verify a person’s identity by analyzing online handwritten signatures acquired from mobile devices. We proposed algorithms employing convolutional neural networks for signature verification. Our network architectures were trained, validated, and tested on genuine signatures acquired with mobile devices and collected in the MobiBits database. We compared the accuracy of three CNN models for two types of forgeries. This comparison shows that extending input datasets consisting of coordinates with pressure values may significantly impact the results. Another conclusion is that integrating CNN-based signature verification technology into a mobile application significantly advances document authentication.
This paper creates the base for many research studies and future experiments. In our future work, we plan to train and evaluate our network models on more diverse datasets and input data in different forms. Moreover, we plan to design and test other CNN architectures and perform a comprehensive comparison with other existing solutions for online handwritten signature verification on mobile phones.
The second research area under consideration is to create a dataset with generated forgeries and train neural network models on training datasets containing genuine and forged signatures.

Author Contributions

Conceptualization, K.R. and E.N.-S.; methodology, K.R.; software, K.R.; validation, K.R.; formal analysis, K.R.; investigation, K.R. and E.N.-S.; resources, K.R.; data curation, K.R.; writing—original draft preparation, K.R. and E.N.-S.; writing—review and editing, E.N.-S.; visualization, K.R.; supervision, E.N.-S.; project administration, E.N.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data supporting reported results are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EER     Equal Error Rate
DTW     Dynamic Time Warping
FAR     False Acceptance Rate
FRR     False Rejection Rate
PCA     Principal Component Analysis
CNN     Convolutional Neural Network
RNN     Recurrent Neural Network
TA-RNN  Time-Aligned Recurrent Neural Network

References

  1. Campisi, P.; Maiorana, E.; Neri, A. Signature Biometrics. In Encyclopedia of Cryptography and Security; van Tilborg, H.C.A., Jajodia, S., Eds.; Springer: Boston, MA, USA, 2011; pp. 1208–1210. [Google Scholar] [CrossRef]
  2. Sharif, M.; Raza, M.; Shah, J.H.; Yasmin, M.; Fernandes, S.L. An Overview of Biometrics Methods. In Handbook of Multimedia Information Security: Techniques and Applications; Singh, A.K., Mohan, A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 15–35. [Google Scholar] [CrossRef]
  3. Hashim, Z.; Ahmed, H.; Alkhayyat, A. A Comparative Study among Handwritten Signature Verification Methods Using Machine Learning Techniques. Sci. Program. 2022, 2022, 8170424. [Google Scholar] [CrossRef]
  4. Test Report Force Sensing Testing with OPTOFIDELITY™ GOLDFINGER. Available online: https://cdn2.hubspot.net/hubfs/6347010/docs/2016/OptoFidelity_GoldFinger_Force-Sensing_testreport.pdf?hsCtaTracking=31d5e724-5949-428b-8475-1edcbf6d7a7f%7C6cc0bc53-20b8-4a5a-8806-4f0a5654d775 (accessed on 28 October 2017).
  5. Putz-Leszczyńska, J. Online signature verification using dynamic time warping with positional coordinates. In Proceedings of the Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2006, Wilga, Poland, 29 May–4 June 2006; SPIE: Bellingham, WA, USA, 2006; Volume 6347, pp. 575–582. [Google Scholar]
  6. Sadak, M.S.; Kahraman, N.; Uludag, U. Handwritten Signature Verification System Using Sound as a Feature. In Proceedings of the 2020 43rd International Conference on Telecommunications and Signal Processing (TSP), Milan, Italy, 7–9 July 2020; pp. 365–368. [Google Scholar] [CrossRef]
  7. Martinez-Diaz, M.; Fierrez, J.; Freire, M.; Ortega-Garcia, J. On the effects of sampling rate and interpolation in HMM-based dynamic signature verification. In Proceedings of the Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), Curitiba, Brazil, 23–26 September 2007; Volume 2, pp. 1113–1117. [Google Scholar]
  8. Ortega-Garcia, J.; Fierrez-Aguilar, J.; Simon, D.; Gonzalez, J.; Faundez-Zanuy, M.; Espinosa, V.; Satue, A.; Hernaez, I.; Igarza, J.J.; Vivaracho, C.; et al. MCYT baseline corpus: A bimodal biometric database. IEE Proc.-Vision Image Signal Process. 2003, 150, 395–401. [Google Scholar] [CrossRef]
  9. Fallah, A.; Jamaati, M.; Soleamani, A. A new online signature verification system based on combining Mellin transform, MFCC and neural network. Digit. Signal Process. 2011, 21, 404–416. [Google Scholar] [CrossRef]
  10. Yeung, D.Y.; Chang, H.; Xiong, Y.; George, S.; Kashi, R.; Matsumoto, T.; Rigoll, G. SVC2004: First international signature verification competition. In Proceedings of the Biometric Authentication: First International Conference, ICBA 2004, Hong Kong, China, 15–17 July 2004; Proceedings; Springer: Berlin/Heidelberg, Germany, 2004; pp. 16–22. [Google Scholar]
  11. Drott, B.; Hassan-Reza, T. On-Line Handwritten Signature Verification Using Machine Learning Techniques with a Deep Learning Approach. Master’s Thesis, Lund University, Lund, Sweden, 2015. [Google Scholar]
  12. Anoto Pen Stylus. Available online: http://www.anoto.com (accessed on 22 August 2018).
  13. Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Ortega-Garcia, J. DeepSign: Deep on-line signature verification. IEEE Trans. Biom. Behav. Identity Sci. 2021, 3, 229–239. [Google Scholar] [CrossRef]
  14. Sae-Bae, N.; Memon, N. Online signature verification on mobile devices. IEEE Trans. Inf. Forensics Secur. 2014, 9, 933–947. [Google Scholar] [CrossRef]
  15. Kholmatov, A.; Yanikoglu, B. SUSIG: An on-line signature database, associated protocols and benchmark results. Pattern Anal. Appl. 2009, 12, 227–236. [Google Scholar] [CrossRef]
  16. Fakhiroh, L.A.; Fariza, A.; Basofi, A. Mobile Based Offline Handwritten Signature Forgery Identification using Convolutional Neural Network. In Proceedings of the International Electronics Symposium (IES), Surabaya, Indonesia, 29–30 September 2021; pp. 423–429. [Google Scholar] [CrossRef]
  17. Blanco-Gonzalo, R.; Sanchez-Reillo, R.; Miguel-Hurtado, O.; Liu-Jimenez, J. Performance evaluation of handwritten signature recognition in mobile environments. IET Biom. 2014, 3, 139–146. [Google Scholar] [CrossRef]
  18. Wacom Pen Tablets. Available online: https://www.wacom.com/en/products/pen-tablets (accessed on 22 August 2018).
  19. Nerini, M.; Favarelli, E.; Chiani, M. Augmented PIN Authentication through Behavioral Biometrics. Sensors 2022, 22, 15. [Google Scholar] [CrossRef] [PubMed]
  20. Lis, K.; Niewiadomska-Szynkiewicz, E.; Dziewulska, K. Siamese Neural Network for Keystroke Dynamics-Based Authentication on Partial Passwords. Sensors 2023, 23, 22. [Google Scholar] [CrossRef] [PubMed]
  21. Siddiqui, N.; Dave, R.; Vanamala, M.; Seliya, N. Machine and Deep Learning Applications to Mouse Dynamics for Continuous User Authentication. Mach. Learn. Knowl. Extr. 2022, 4, 502–518. [Google Scholar] [CrossRef]
  22. Convolutional Neural Networks for Visual Recognition. Available online: http://cs231n.github.io/convolutional-networks/ (accessed on 22 August 2018).
  23. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar] [CrossRef]
  24. Dimiccoli, M.; Cartas, A.; Radeva, P. Activity recognition from visual lifelogs: State of the art and future challenges. In Multimodal Behavior Analysis in the Wild; Alameda-Pineda, X., Ricci, E., Sebe, N., Eds.; Computer Vision and Pattern Recognition; Academic Press: Cambridge, MA, USA, 2019; pp. 121–134. [Google Scholar] [CrossRef]
  25. Naveen, P.; Diwan, B. Pre-trained VGG-16 with CNN Architecture to classify X-Rays images into Normal or Pneumonia. In Proceedings of the International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2021; pp. 102–105. [Google Scholar] [CrossRef]
  26. Montazeri, A.H.; Emami, S.K.; Zaghiyan, M.R.; Eslamian, S. Stochastic learning algorithms. In Handbook of Hydroinformatics; Eslamian, S., Eslamian, F., Eds.; Elsevier: Amsterdam, The Netherlands, 2023; pp. 385–410. [Google Scholar] [CrossRef]
  27. Bartuzi, E.; Roszczewska, K.; Trokielewicz, M.; Białobrzeski, R. MobiBits: Multimodal mobile biometric database. In Proceedings of the 2018 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 26–28 September 2018; pp. 1–5. [Google Scholar]
