Article

Handwritten Signature Verification Method Based on Improved Combined Features

1 School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
2 Key Laboratory of Fiber Optic Sensing Technology and Information Processing, Ministry of Education, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(13), 5867; https://doi.org/10.3390/app11135867
Submission received: 31 May 2021 / Revised: 19 June 2021 / Accepted: 22 June 2021 / Published: 24 June 2021
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Featured Application

This study proposes a handwritten signature verification method based on improved combined features, which combines dynamic and static features through score fusion that exploits the complementarity between classifiers. By extracting more comprehensive and representative signature features, the method verifies the authenticity of signatures and helps protect the safety of customer property.

Abstract

As a behavioral biometric, handwritten signatures are widely used in financial and administrative institutions, and forged signatures can cause great property losses to customers. This paper proposes a handwritten signature verification method based on improved combined features. With advanced smart pen technology, the offline image and online data of a signature can be obtained in real time while it is being written; this is the first time that the offline and online modalities have been combined in this way. We extract the static and dynamic features of the signature and verify them with a support vector machine (SVM) and dynamic time warping (DTW), respectively. Only a small number of samples is used during the training stage, which alleviates the problem of insufficient training data to a certain extent. In addition to the verification results, we obtain two decision scores. Finally, we propose a score fusion method based on accuracy (SF-A), which combines offline and online features through score fusion and effectively exploits the complementarity among classifiers. Experimental results on a local data set with different numbers of training samples show that the false acceptance rate (FAR) and false rejection rate (FRR) obtained are better than those of offline or online verification alone.

1. Introduction

Handwritten signatures are widely used in daily life. With the development of machine learning and artificial intelligence, research on handwritten signature verification is also deepening.
In terms of data collection, there are mainly two kinds of signatures: offline signature images and online signature data. An offline signature image is the name handwritten by the author on paper and then digitized by a scanning device; verification is performed on the image features. Online signature verification is based on the signature trajectory, such as the coordinates and pressure recorded during writing. In this paper, a smart pen equipped with a pressure sensor and a camera is used to write the name on paper covered with tiny dots. While the signature is being written, its offline image and online data are collected in real time.
Signature verification usually includes two stages: training and testing. In the training stage, different numbers of genuine signatures are preprocessed, their features are extracted, and the features are fed into a classifier to obtain a model. In the testing stage, test signatures are fed into the classifier for comparison, and the verification result is output.
In the feature extraction stage, offline signature image features are called static features and are mainly divided into local and global features: local features mainly comprise texture and gradient features, while global features are mainly geometric. Online signature data features are called dynamic features and are divided into parameter-based and function-based features. Parameter-based features mainly refer to quantities such as the signature duration and the number of pen lifts, whereas function-based features refer to the signature trajectory and pressure signals. Function-based dynamic features generally give better results.
There are two main verification approaches: model-based and distance-based. Model-based methods describe the data distribution with models such as the hidden Markov model (HMM), CNN, and SVM. Distance-based methods compare the test signature with reference signatures using distance measures such as DTW. This paper uses SVM to process offline signature images and DTW to process online signature data.
There are two main difficulties in signature verification. First, there is large intra-class and inter-class variability: an author's genuine signature changes with time, age and other factors, while a forger may imitate the signature after extensive practice, so more comprehensive and representative signature features must be extracted and selected. Second, in real-life scenarios only a small number of genuine signatures can be obtained for training, so insufficient data is also a problem that needs to be solved.
To address these challenges, this paper proposes an accuracy-weighted score fusion method that combines the static features of offline signature images with the dynamic features of online signature data. Specifically, the images and data are preprocessed and their features extracted separately, and they are then verified by SVM and DTW, respectively. The two classifiers provide the verification results and decision scores of the offline and online signatures. Because the classifiers may disagree on the same signature, there is a certain degree of complementarity between them. Finally, we use score fusion to combine the offline and online modalities and exploit this complementarity.
The remainder of the paper is organized as follows: Section 2 introduces related work of this study. Section 3 gives a detailed introduction to the proposed method. Section 4 is the experimental results and discussion. Section 5 presents the conclusion.

2. Related Works

A variety of systems and methods have been proposed for handwritten signature verification.

2.1. Feature Extraction in Signatures

Current feature extraction algorithms for signature verification mainly extract texture, geometric, and dynamic features. Batool et al. proposed an automatic recognition technique based on multi-level feature fusion and optimal feature selection [1]. They computed 22 Gray-Level Co-Occurrence Matrix (GLCM) features and 8 geometric features: the geometric features characterize the shape of the signature, such as its edges and area, while the GLCM features represent texture information such as local gray-level variation. A High-Priority Index Feature (HPFI) scheme was adopted to combine these features, and a feature selection method based on Skewness-Kurtosis Controlled PCA (SKCPCA) was proposed to select the optimal features for separating forged and genuine signatures. The system was evaluated on the MCYT, GPDS and CEDAR data sets and, compared with existing methods, significantly improved the FAR and FRR, with FARs of 2.66%, 9.17% and 3.34%, respectively; its disadvantage is that the features removed during selection may affect system performance and might have performed better on other data sets. Bhunia et al. proposed a writer-dependent signature verification method using two different types of texture features, discrete wavelet features and Local Quantized Pattern (LQP) features, both extracted from transforms of the signature image [2]. For each writer, two independent one-class Support Vector Machines (OC-SVMs) were built, corresponding to the LQP and wavelet features, and the two verification scores were averaged to obtain the final score. Tests on the GPDS, MCYT, and CEDAR data sets gave Equal Error Rates (EER) of 12.06%, 11.46%, and 7.59%, respectively, which verified the generality of the method. Ghanim et al. extracted different features and analyzed their impact on the recognition ability of the system; the computed features included the Histogram of Oriented Gradients (HOG) and geometric features such as length distribution, tilt distribution, and entropy. Using different machine learning classifiers (Bagging Tree, Random Forest (RF) and SVM), the system was tested on the UTSig data set, and the experimental results showed that SVM outperformed the other classifiers with an accuracy of 94% [3]. Hadjadj et al. extracted features with Local Ternary Pattern (LTP) and oriented Basic Image Feature (oBIF) texture descriptors and projected the signature image into the feature space [4]; given a test signature, its authenticity was judged by combining the decisions of two SVMs. The technique was tested on the Dutch and Chinese signature data sets of ICDAR 2011, achieving accuracies of 97.74% and 75.98%, respectively.
To address the existing challenges in signature verification and improve verification performance, Okawa proposed a feature extraction method that uses a combination strategy to fuse Fisher vectors with KAZE features detected from the foreground and background images [5]; tests on the MCYT-75 data set showed a lower error rate than existing signature verification methods. Alaei et al. proposed a handwritten signature verification method based on an interval symbolic representation and a fuzzy similarity measure, extracting local binary pattern (LBP) features from the signature image in the feature extraction stage [6]; the method was tested on the GPDS data set and achieved a better error rate when the number of training samples was 8 or more. Akbari et al. proposed a global method that treats the signature image as a waveform: the image is decomposed into a series of wavelet sub-bands at a specific level, the decomposed image is expanded to obtain the waveform, and the waveform is quantized to generate a feature vector [7]; good results were achieved on both the MCYT and CEDAR data sets. Gyimah et al. proposed an improved handwritten signature verification system by combining GLCM and image-area features [8]; testing the method on SVMs with different kernel functions gave a FAR of 2.50 and an FRR of 0.14. Based on the idea of optimal feature selection, Sharif et al. proposed a new feature selection method that uses a genetic algorithm to find a suitable feature vector, which is then fed to an SVM classifier [9]; in experiments on the CEDAR, MCYT, and GPDS data sets, the system obtained FARs of 8.9, 17.2, and 9.41, respectively. Parziale et al. introduced the concept of stability: the most stable, i.e., most similar, part of the signature is identified and incorporated into DTW, and verification is performed by distance measurement [10]. Zois et al. proposed a new grid-based template matching scheme whose core is to effectively encode the geometric structure of the signature through grid templates and an appropriate partition into subsets; its validity was demonstrated on four data sets [11].

2.2. Classification in Signatures

Current handwritten signature verification algorithms mainly rely on classifiers such as neural networks and SVMs. Hafemann et al. used a Convolutional Neural Network (CNN) to learn visual cues directly from the signature image, with significant results on the GPDS data set, where the EER was reduced from 6.97% to 1.72% [12]. In follow-up work, Hafemann et al. addressed signatures of different sizes by modifying the network structure and using spatial pyramid pooling to learn fixed-size features from variable-sized signatures [13]; better results were obtained on the GPDS data set, and the experiments showed that using higher-resolution signature images can improve performance. Lai et al. used a supervised CNN to learn discriminative feature hierarchies, divided into shallow and deep features, and additionally proposed a location-related twin neural network to help learn a more discriminative feature space; good results were achieved on various data sets [14]. Alik et al. proposed a new CNN structure named the Large-Scale Signature Network (LS2Net) [15], which handles large-scale training samples through batch normalization; LS2Net achieved high accuracy on the MCYT, CEDAR and GPDS data sets, and the experiments showed that batch normalization contributes significantly to performance. Zheng et al. applied the Rank Support Vector Machine (RankSVM) for the first time to the signature verification task [16]; the Receiver Operating Characteristic (ROC) curves show that RankSVM maximizes the Area Under the Curve (AUC), which alleviated the class imbalance problem to a certain extent. Maergner et al. proposed a method that combines a structural signature verification model based on graph edit distance with a statistical model based on a deep neural network. They argued that structural and statistical feature models are fundamentally different and have complementary advantages; on the MCYT and GPDS data sets, combining the two models was shown to significantly improve performance by benefiting from their complementary characteristics [17]. In view of the unavailability of skilled forgeries during training, Masoudnia et al. examined different loss functions for CNNs to meet the generalization requirements of handwritten signature verification [18]; by analyzing three loss functions (cross entropy, Cauchy-Schwarz divergence and hinge loss), they combined them into a dynamic multi-loss function and proposed a new ensemble framework for using them simultaneously in a CNN. Soleimani et al. proposed a Deep Multitask Metric Learning (DMML) classification method for handwritten signature verification; DMML mainly draws on the ideas of multi-task learning and transfer learning, and experiments showed that it performs well in verifying genuine signatures, skilled forgeries and random forgeries [19]. Okawa proposed a Local Stability-Weighted Dynamic Time Warping (LS-DTW) method to verify online signatures; experiments on the MCYT-100 and SVC2004 online signature data sets achieved good results, effectively improving the speed and accuracy of online signature systems [20].
A variety of signature features and classification methods have been used in the literature, so this paper focuses on combining and comparing features and on achieving complementarity between classifiers. To improve handwritten signature verification, we propose a score fusion method based on accuracy (SF-A) to realize the feature combination.

3. Proposed Work

3.1. Outline

Figure 1 shows the implementation process of the signature verification method. The first step is data acquisition: the offline image and online data of the signature are obtained simultaneously through the smart pen. The quality of the signature data is then improved through preprocessing and feature extraction to ensure the accuracy of the verification result. The offline images and online data are verified with SVM and DTW, respectively, yielding two scores, Score1 and Score2. Finally, the result of fusing the offline and online features is obtained through SF-A.

3.2. Data Acquisition

This paper uses a smart pen with a camera and pressure sensor to sign on paper covered with tiny dots. While the signature is being written, its image and trajectory data are obtained in real time. Signatures of 20 authors were collected, with 30 genuine and 30 forged signatures per author, for a total of 1200 signatures. To produce the forgeries, 2~3 experimenters were given the genuine signatures and practiced imitating them before writing the forgeries, so the forged signatures are reliable and realistic. The data set used in this article is composed entirely of Chinese signatures, but the experimental methods can be widely applied to other languages. Each collected signature includes both an offline image and online data; building such a data set is itself a bold attempt, and we hope to achieve more scientific and reliable signature verification by combining online data and offline images. The online data contains six columns, namely the X coordinate, the Y coordinate, whether the stroke starts, whether the stroke ends, and the pressure. Table 1 lists the signature data of two authors; each signature has a corresponding offline image and online data. The online curves in the table are plotted from the coordinate data of the signature: the abscissa represents time, and the ordinate represents the change of the X or Y coordinate over time during writing. The red curves represent genuine signatures and the blue curves represent forged signatures, and there is an obvious difference between them. The data are processed, verified and fused separately in the subsequent steps.

3.3. Pre-Processing

In the pre-processing stage, grayscale conversion, binarization and normalization are applied to the offline signature image, as shown in Figure 2. This paper uniformly resizes the images to 64 × 64 pixels. These steps reduce noise interference and improve the accuracy of feature extraction and verification. The online data is not pre-processed.
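As an illustration, the following sketch shows how such a pre-processing pipeline could be implemented with OpenCV; the Otsu thresholding used for binarization and the interpolation mode are assumptions, since the text does not specify them.

```python
import cv2

def preprocess_offline(image_path, size=64):
    """Grayscale, binarize and normalize an offline signature image (a sketch of the steps above)."""
    img = cv2.imread(image_path)                      # original color image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # gray processing
    # Binarization; Otsu's threshold is an assumption, the paper does not name a method.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Normalization: resize to a uniform 64 x 64 image as stated in the text.
    return cv2.resize(binary, (size, size), interpolation=cv2.INTER_AREA)
```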

3.4. Feature Extraction

The feature extraction stage includes the extraction of static and dynamic features of the signature.

3.4.1. Static Features

For offline signature images, the texture and geometric features of the signature are extracted. The texture features represent the local information of the image, and the geometric features represent its global information; the feature vector obtained by combining the two can represent the image content more fully and accurately. This paper uses GLCM and HOG to extract image texture features. The GLCM describes the joint distribution of the gray levels of two pixels with a certain spatial position relationship and is a second-order statistical texture descriptor. From the GLCM of the offline signature image, comprehensive information about direction, adjacent interval, and gray-level variation can be obtained, which is the basis for analyzing the local patterns and arrangement rules of the image.
Let f(x, y) denote a signature image, and let (x1, y1) and (x2, y2) be two points in the image separated by a distance d, with the line connecting them forming an angle θ with the horizontal axis, where f(x1, y1) = i and f(x2, y2) = j. In this way, a matrix P(i, j, d, θ) can be obtained for various distances and angles. This paper analyzes the gray-level spatial relationships in the horizontal, diagonal (bottom left to top right), vertical, and anti-diagonal (bottom right to top left) directions, namely θ = 0°, 45°, 90°, 135°. At the same time, four parameters are used to summarize the matrix, namely contrast, correlation, energy and homogeneity, defined as follows.
$$\mathrm{Con} = \sum_{i}\sum_{j}(i-j)^{2}\,P(i,j)$$
$$\mathrm{Corr} = \Big[\sum_{i}\sum_{j}(i \cdot j)\,P(i,j) - \mu_{x}\mu_{y}\Big] \Big/ (\sigma_{x}\sigma_{y})$$
$$\mathrm{Ene} = \sum_{i}\sum_{j}P(i,j)^{2}$$
$$\mathrm{Hom} = \sum_{i}\sum_{j}\frac{P(i,j)}{1+|i-j|}$$
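For illustration, the four GLCM properties above can be computed with scikit-image (graycomatrix/graycoprops in scikit-image 0.19 or later; older releases spell them greycomatrix/greycoprops). This is a minimal sketch, and the pixel-pair distance d = 1 and the use of 256 gray levels are assumptions not stated in the text.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image):
    """Contrast, correlation, energy and homogeneity at theta = 0, 45, 90, 135 degrees."""
    glcm = graycomatrix(gray_image,
                        distances=[1],                                  # pair distance d (assumed d = 1)
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    feats = []
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        feats.extend(graycoprops(glcm, prop).ravel())                   # 4 properties x 4 angles
    return np.asarray(feats)
```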
HOG computes orientation statistics of local image gradients. The main idea is to partition the signature image into cells, calculate the gradient magnitudes in each orientation within each cell, and accumulate them into a histogram. In this paper, the cell size is set to 20 × 20. The equations are as follows, where Gx(x,y), Gy(x,y) and H(x,y) denote the horizontal gradient, the vertical gradient and the pixel value at pixel (x,y), respectively.
$$G_{x}(x,y) = H(x+1,y) - H(x-1,y)$$
$$G_{y}(x,y) = H(x,y+1) - H(x,y-1)$$
$$G(x,y) = \sqrt{G_{x}(x,y)^{2} + G_{y}(x,y)^{2}}$$
$$\alpha(x,y) = \cot\!\left(\frac{G_{y}(x,y)}{G_{x}(x,y)}\right)$$
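A corresponding HOG extraction could look like the sketch below, using scikit-image's hog function; the 20 × 20 cell size follows the text, while the number of orientation bins and the block layout are assumptions.

```python
from skimage.feature import hog

def hog_features(norm_image):
    """HOG descriptor of the normalized 64 x 64 signature image."""
    return hog(norm_image,
               orientations=9,             # number of gradient orientation bins (assumed)
               pixels_per_cell=(20, 20),   # cell size 20 x 20 as stated in the text
               cells_per_block=(1, 1),     # block layout (assumed)
               feature_vector=True)
```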
In addition, 9 geometric features are extracted to represent the global information of the image. Their definitions are listed in Table 2, where d represents the distance from point x1 to x2. The effect of each static feature is evaluated in the subsequent experiments.
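Several of the geometric features in Table 2 (area, major and minor axis length, perimeter, extent, solidity) map directly onto region properties available in scikit-image. The sketch below is illustrative only: it covers six of the nine features, and treating the largest connected ink region as the signature is an assumption.

```python
from skimage.measure import label, regionprops

def geometric_features(binary_image):
    """Global geometric features of the binarized signature (ink pixels set to 1)."""
    regions = regionprops(label(binary_image))
    region = max(regions, key=lambda r: r.area)    # assumption: use the largest connected region
    return [region.area,
            region.major_axis_length,
            region.minor_axis_length,
            region.perimeter,
            region.extent,                         # area / bounding-box area
            region.solidity]                       # area / convex-hull area
```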

3.4.2. Dynamic Features

Dynamic features are characterized by timing, movement and angle variability; compared with static features, they usually carry a stronger personal style. Combining static features with dynamic features can effectively improve the accuracy and reliability of signature verification. This paper uses smart pen technology to obtain offline and online data at the same time, and finally combines static and dynamic features through SF-A.
In addition to the horizontal and vertical coordinates and the pressure data contained in the online data, four further dynamic features are derived in this paper, namely velocity, acceleration, angle and radius of curvature [21], as defined in Table 3.
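Assuming uniformly sampled pen-tip coordinates, these four quantities can be approximated with finite differences, as in the sketch below; the small epsilon added for numerical stability is not part of the paper.

```python
import numpy as np

def dynamic_features(x, y, eps=1e-8):
    """Velocity, acceleration, angle and (log) radius of curvature from coordinate sequences."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = np.gradient(x), np.gradient(y)          # approximate first derivatives
    v = np.sqrt(dx ** 2 + dy ** 2)                   # velocity
    theta = np.arctan2(dy, dx)                       # writing angle
    dv, dtheta = np.gradient(v), np.gradient(theta)
    a = np.sqrt(dv ** 2 + (v * dtheta) ** 2)         # acceleration
    rho = np.log(v / (np.abs(dtheta) + eps) + eps)   # log radius of curvature
    return np.column_stack([v, a, theta, rho])       # one feature row per sample point
```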

3.5. SVM

SVM is a class of generalized linear classifiers that classifies data in a supervised manner; its decision boundary is the maximum-margin hyperplane fitted to the training samples. In this paper, an SVM with an RBF kernel is used to classify the offline images. The positive samples in the training phase are different numbers of the writer's genuine signatures. Because forged signatures cannot be obtained for training in practice, this paper uses the same number of genuine signatures from other writers as negative samples, which offers a new way to deal with the small-sample problem. The test set consists of the writer's genuine and forged signatures. Besides the offline verification result, the SVM classifies a sample according to its signed distance from the hyperplane, which can be expressed as a score: when the score is less than 0, the sample is judged to be a genuine signature, and when it is greater than 0, a forged one. We record this score as Score1 to lay the foundation for the subsequent feature combination.
$$\mathrm{Dec}_1 = \begin{cases} G, & \mathrm{Score}_1(x_i) < 0 \\ F, & \mathrm{Score}_1(x_i) > 0 \end{cases} \qquad i = 1, \ldots, 1200$$
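A minimal training and scoring sketch with scikit-learn is given below; the feature vectors, the choices of C and gamma, and the labeling of the negative (other-writer) class are assumptions, while the sign of decision_function is arranged to match the rule above (Score1 < 0 for genuine).

```python
import numpy as np
from sklearn.svm import SVC

def train_and_score(X_genuine, X_others, X_test):
    """Train a writer-specific RBF-kernel SVM and return Score1 for the test signatures.

    X_genuine: feature vectors of the writer's genuine reference signatures (positive samples).
    X_others:  genuine signatures of other writers, used here as negative samples.
    """
    X_train = np.vstack([X_genuine, X_others])
    y_train = np.hstack([np.zeros(len(X_genuine)),   # 0 = genuine (this writer)
                         np.ones(len(X_others))])    # 1 = not this writer
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)    # RBF kernel as in the paper; C, gamma assumed
    clf.fit(X_train, y_train)
    # decision_function > 0 points toward class 1, so Score1 < 0 means genuine, > 0 means forged.
    return clf.decision_function(X_test)
```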

3.6. DTW

DTW is a typical optimization problem: it uses a time-warping function that satisfies certain conditions to describe the temporal correspondence between a test template and a reference template, and finds the warping path with the minimum cumulative distance when the two templates are matched. This paper uses DTW to classify the online signature data. Different numbers of genuine signatures are used for training, and their mean is used to obtain the Gaussian distribution of the reference template; the Gaussian distribution of the test template is then compared with it to obtain a similarity score, denoted Score2. Because the number of training samples differs, the threshold for judging the authenticity of a signature also differs. For the subsequent feature combination, Score2 must follow the same decision convention as Score1, so we normalize the online score by subtracting the threshold from it. Finally, when Score2 is less than 0, the signature is judged to be genuine, and when it is greater than 0, forged.
$$\mathrm{Dec}_2 = \begin{cases} G, & \mathrm{Score}_2(x_i) - thre < 0 \\ F, & \mathrm{Score}_2(x_i) - thre > 0 \end{cases} \qquad i = 1, \ldots, 1200$$
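For reference, the cumulative-distance computation that DTW solves is sketched below; this is the standard textbook recursion, not the authors' full Gaussian-template comparison or threshold normalization.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Minimum cumulative distance between two feature sequences (samples x features)."""
    seq_a = np.asarray(seq_a, dtype=float).reshape(len(seq_a), -1)
    seq_b = np.asarray(seq_b, dtype=float).reshape(len(seq_b), -1)
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local distance between frames
            D[i, j] = cost + min(D[i - 1, j],                    # insertion
                                 D[i, j - 1],                    # deletion
                                 D[i - 1, j - 1])                # match
    return D[n, m]
```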

3.7. SF-A

For the offline image and online data of the same signature, SVM and DTW may give different verification results. For example, for a genuine signature, the offline image may be judged genuine by SVM while the online data is judged forged by DTW; hence there is a certain complementarity between them, and weighted fusion is required. Commonly used fusion methods fall into three categories: fusion based on class labels, fusion based on class rankings, and fusion based on probability outputs. The third type is the most applicable to the situation in this paper. For two decision scores, a common approach is to use the sigmoid function of logistic regression to predict the output, finding the best weights for the two decision scores through gradient descent over an increasing number of iterations: when the result is greater than 0.5 the signature is judged genuine, and when it is less than 0.5 forged, so that the straight line that best separates the data is found to complete the classification. In the experimental stage, we use SF-L to denote this score fusion method based on logistic regression.
The SF-A proposed in this paper is based on the classification accuracies of SVM and DTW: whereas SF-L relies on logistic regression, SF-A uses an accuracy-weighted average to assign weights to Score1 and Score2. By fusing the decision scores, the static and dynamic features of the signature are combined. Suppose the accuracy of SVM is Accuracy1 and the weight of Score1 is w1, while the accuracy of DTW is Accuracy2 and the weight of Score2 is w2; the weights are then defined as follows. From the definition, the higher a classifier's accuracy, the larger its weight. xi denotes the i-th signature. The Final_Score obtained by SF-A is compared with 0: if it is less than 0, the signature is judged genuine, and if it is greater than 0, forged. By weighting each classifier according to its accuracy, which can also be regarded as its confidence, and then fusing the scores, the result is more reliable and a greater degree of complementarity between the classifiers is achieved.
$$\mathrm{sigmoid}(x) = \frac{1}{1+e^{-x}}$$
$$w_1 = \mathrm{Accuracy}_1 / (\mathrm{Accuracy}_1 + \mathrm{Accuracy}_2)$$
$$w_2 = \mathrm{Accuracy}_2 / (\mathrm{Accuracy}_1 + \mathrm{Accuracy}_2)$$
$$\mathrm{Final\_Score}(x_i) = w_1\,\mathrm{Score}_1(x_i) + w_2\,\mathrm{Score}_2(x_i) \qquad i = 1, \ldots, 1200$$
$$\mathrm{Final\_Dec} = \begin{cases} G, & \mathrm{Final\_Score}(x_i) < 0 \\ F, & \mathrm{Final\_Score}(x_i) > 0 \end{cases} \qquad i = 1, \ldots, 1200$$
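Putting the above together, SF-A reduces to a few lines. The sketch below assumes Score1 and Score2 are already on the same decision scale (threshold-normalized, negative = genuine) and that the two accuracies come from each classifier's own verification results.

```python
import numpy as np

def sf_a(score1, score2, accuracy1, accuracy2):
    """Accuracy-weighted score fusion: returns the fused scores and G/F decisions."""
    w1 = accuracy1 / (accuracy1 + accuracy2)         # weight of the SVM score
    w2 = accuracy2 / (accuracy1 + accuracy2)         # weight of the DTW score
    final_score = w1 * np.asarray(score1) + w2 * np.asarray(score2)
    decisions = np.where(final_score < 0, "G", "F")  # < 0 -> genuine, > 0 -> forged
    return final_score, decisions
```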

4. Experiment Results and Discussion

Since this paper aims to combine offline images and online data, it is difficult to obtain publicly available data sets that contain both offline images and the corresponding online data. Therefore, the collected offline images and online data of 1200 signatures are used for the experiments. We use FAR, FRR, AER and Accuracy as evaluation indicators: FAR is the percentage of tested forged signatures that are judged to be genuine; FRR is the percentage of tested genuine signatures that are judged to be forged; AER is the average of FAR and FRR; and Accuracy is the verification accuracy, i.e., the proportion of predictions consistent with the ground truth. Current deep learning methods can achieve good signature verification performance, but they require a large amount of data and are therefore less practical; for practical applications, we believe it is more important to handle the small-sample situation. In studies of small-sample problems, most researchers train with at most ten samples, and ten samples usually give the best results. For the 1200 signatures, this paper selected three, five, eight and ten genuine signature samples for training and conducted experiments and comparisons on the different features. For the offline training set, we randomly selected three, five, eight and ten genuine signatures of an author as positive samples for SVM training and randomly selected the same number of genuine signatures from other authors as negative samples, reflecting the fact that forged signatures cannot be obtained for training in practice. For the online training set, three, five, eight and ten genuine signatures were randomly selected as registration samples to obtain the first Gaussian distribution, and the remaining signatures were used as test samples to obtain the second Gaussian distribution; their similarity was then compared to determine authenticity. Each experiment was repeated ten times to ensure the reliability of the results.
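The four indicators can be computed as in the sketch below, where labels are assumed to be encoded as 0 for genuine and 1 for forged; this encoding is an illustrative choice, not taken from the paper.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """FAR, FRR, AER and Accuracy for binary labels (0 = genuine, 1 = forged)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    far = np.mean(y_pred[y_true == 1] == 0)   # forgeries accepted as genuine
    frr = np.mean(y_pred[y_true == 0] == 1)   # genuine signatures rejected as forged
    aer = (far + frr) / 2.0                   # average error rate
    acc = np.mean(y_pred == y_true)           # overall verification accuracy
    return far, frr, aer, acc
```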

4.1. Results

Table 4, Table 5, Table 6 and Table 7 show the experimental results of the two classifiers with different features on the local data set. With different numbers of training samples, the proposed SF-A, which combines dynamic and static features through score fusion, outperforms verification with a single feature type as well as fusion by logistic regression. Figure 3, Figure 4, Figure 5 and Figure 6 show the division of samples by SF-A and SF-L: the improved weighted line separates genuine and forged signatures better than the original method.

4.2. Comparative Analysis

Experimental results show that using SF-A to combine static and dynamic features is effective for handwritten signature verification. Looking at the results for each feature, for offline images, GLCM outperforms HOG when the number of training samples is 3 or 5, while HOG outperforms GLCM when it is 8 or 10. Overall, texture features perform better than geometric features, which may be related to the dimensionality of the feature vectors, since the geometric feature vector has fewer dimensions. However, combining geometric and texture features usually improves the verification accuracy and reduces both FAR and FRR, because the texture features describe the local texture of the offline image while the geometric features describe its global information, so the combined feature vector is more representative and reliable.
From the perspective of score fusion, i.e., combining static and dynamic features, the results of both SF-L and SF-A are better than those of static or dynamic features alone, indicating that both fusion methods exploit the complementarity between the two classifiers. Comparing SF-L and SF-A, the proposed SF-A achieves the best results under all numbers of training samples by weighting the classifiers according to their verification accuracy. In particular, the improvement in FAR is more significant, and some applications place especially high demands on FAR. We can therefore conclude that the proposed SF-A effectively improves the performance of handwritten signature verification.

5. Conclusions

This paper proposes a handwritten signature verification method based on improved combined features to meet the requirements of the current high-precision system. We use smart pens to obtain the offline image and online data of the signature at the same time, and perform preprocessing and feature extraction on them respectively. Then we use SVM and DTW for offline images and online data respectively to get verification results and decision scores, and propose the SF-A method to achieve the purpose of combining static and dynamic features by fusing decision scores.
To verify the effectiveness of the proposed method, we conducted experiments on a local data set, comparing different signature features with the two classifiers under different numbers of training samples. The experimental results show that fusing static and dynamic features with SF-A achieves the highest accuracy and the lowest FAR and FRR, and can effectively verify handwritten signatures.
This paper contributes to score fusion and feature combination. Through SF-A, the complementarity of the classifiers' results is exploited effectively and a higher accuracy is achieved. At the same time, through smart pen technology, this paper realizes the combination of offline images and online data for the first time. Finally, in the SVM training stage, the genuine signatures of other authors are used as negative samples, which provides a new idea for the small-sample training problem. Unlike deep learning methods, the proposed method is more interpretable and widely applicable.
In future work, the data set will be continuously expanded to verify the universality of the method, for example by collecting signatures in different languages and comparing the verification performance across languages. We will make the data sets publicly available so that more researchers and institutions can conduct experiments and comparisons. We will also continue to study signature features in depth, to find the most representative signature feature vector, and to investigate how to achieve the best verification performance with the smallest training set.

Author Contributions

Conceptualization, Y.Z. and J.Z.; methodology, Y.Z.; software, Y.Z.; validation, Y.Z., H.H. and Y.W.; formal analysis, Y.Z., H.H.; investigation, Y.Z.; resources, H.H., Y.W.; data curation, Y.W.; writing—original draft preparation, Y.Z.; writing—review and editing, H.H.; visualization, Y.Z.; supervision, H.H.; project administration, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Batool, F.E.; Khan, M.A.; Sharif, M.; Javed, K.; Nazir, M.; Abbasi, A.A.; Iqbal, Z.; Riaz, N. Offline signature verification system: A novel technique of fusion of GLCM and geometric features using SVM. Multimed. Tools Appl. 2020, 84, 312–332. [Google Scholar] [CrossRef]
  2. Bhunia, A.K.; Alaei, A.; Roy, P.P. Signature verification approach using fusion of hybrid texture features. Neural Comput. Appl. 2019, 31, 8737–8748. [Google Scholar] [CrossRef] [Green Version]
  3. Ghanim, T.M.; Nabil, A.M. Offline signature verification and forgery detection approach. In Proceedings of the 2018 13th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 18–19 December 2018; pp. 293–298. [Google Scholar]
  4. Hadjadj, I.; Gattal, A.; Djeddi, C.; Ayad, M.; Siddiqi, I.; Abass, F. Offline signature verification using textural descriptors. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Madrid, Spain, 1–4 July 2019; pp. 177–188. [Google Scholar]
  5. Okawa, M. Synergy of foreground-background images for feature extraction: Offline signature verification using Fisher vector with fused KAZE features. Pattern Recognit. 2018, 79, 480–489. [Google Scholar] [CrossRef]
  6. Alaei, A.; Pal, S.; Pal, U.; Blumenstein, M. An efficient signature verification method based on an interval symbolic representation and a fuzzy similarity measure. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2360–2372. [Google Scholar] [CrossRef] [Green Version]
  7. Akbari, Y.; Shariatmadari, S.; Emadi, S. Nonlinear dynamics tools for offline signature verification using one-class gaussian process. Int. J. Pattern Recognit. Artif. Intell. 2019, 34, 1–20. [Google Scholar]
  8. Gyimah, K.; Appati, K.; Darkwah, K.; Ansah, K. An improved Geo-Textural based feature extraction vector for offline signature verification. J. Adv. Math. Comput. Sci. 2019, 32, 1–14. [Google Scholar] [CrossRef] [Green Version]
  9. Sharif, M.; Khan, M.A.; Faisal, M.; Yasmin, M.; Fernandes, S.L. A framework for offline signature verification system: Best features selection approach. Pattern Recognit. Lett. 2018, 139, 50–59. [Google Scholar] [CrossRef]
  10. Parziale, A.; Diaz, M.; Ferrer, M.A.; Marcelli, A. SM-DTW: Stability modulated dynamic time warping for signature verification. Pattern Recognit. Lett. 2018, 121, 113–122. [Google Scholar] [CrossRef]
  11. Zois, E.N.; Alewijnse, L.; Economou, G. Offline signature verification and quality characterization using poset-oriented grid features. Pattern Recognit. 2016, 54, 162–177. [Google Scholar] [CrossRef]
  12. Hafemann, L.G.; Sabourin, R.; Oliveira, L.S. Learning features for offline handwritten signature verification using deep convolutional neural networks. Pattern Recognit. 2017, 70, 163–176. [Google Scholar] [CrossRef] [Green Version]
  13. Hafemann, L.G.; Oliveira, L.S.; Sabourin, R. Fixed-sized representation learning from offline handwritten signatures of different sizes. Int. J. Doc. Anal. Recognit. 2018, 21, 219–232. [Google Scholar] [CrossRef] [Green Version]
  14. Lai, S.; Jin, L. Learning discriminative feature hierarchies for off-line signature verification. In Proceedings of the 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, USA, 5–8 August 2018; pp. 175–180. [Google Scholar]
  15. Alik, N.; Kurban, O.C.; Yilmaz, A.R.; Yildirim, T.; Ata, L.D. Large-scale offline signature recognition via deep neural networks and feature embedding. Neurocomputing 2019, 359, 1–14. [Google Scholar]
  16. Zheng, Y.; Zheng, Y.; Ohyama, W.; Suehiro, D.; Uchida, S. RankSVM for offline signature verification. In Proceedings of the 2019 International Conference on Document Analysis and Recognition (ICDAR), Sydney, NSW, Australia, 20–25 September 2019; pp. 928–933. [Google Scholar]
  17. Maergner, P.; Pondenkandath, V.; Alberti, M.; Liwicki, M.; Riesen, K.; Ingold, R. Offline Signature Verification by Combining Graph Edit Distance and Triplet Networks; Lecture Notes in Computer Science: Berlin, Germany, 2018; Volume 110, pp. 470–480. [Google Scholar]
  18. Masoudnia, S.; Mersa, O.; Araabi, B.N.; Vahabie, A.H.; Sadeghi, M.A.; Ahmadabadi, M.N. Multi-representational learning for offline signature verification using Multi-Loss snapshot ensemble of CNNs. Expert Syst. Appl. 2019, 133, 317–330. [Google Scholar] [CrossRef] [Green Version]
  19. Soleimani, A.; Araabi, B.N.; Fouladi, K. Deep multitask metric learning for offline signature verification. Pattern Recognit. Lett. 2016, 80, 84–90. [Google Scholar] [CrossRef]
  20. Okawa, M. Time-series averaging and local stability-weighted dynamic time warping for online signature verification. Pattern Recognit. 2020, 121, 1–10. [Google Scholar] [CrossRef]
  21. Tang, L.; Kang, W.; Fang, Y. Information divergence-based matching strategy for online signature verification. IEEE Trans. Inf. Forensics Secur. 2017, 13, 861–873. [Google Scholar] [CrossRef]
Figure 1. System overview.
Figure 2. The pre-processing for offline image, from left to right are the original image, grayscale image, binary image and normalized image.
Figure 3. Sample distribution when using three training samples.
Figure 4. Sample distribution when using five training samples.
Figure 5. Sample distribution when using eight training samples.
Figure 6. Sample distribution when using ten training samples.
Table 1. Signature data (offline signature images and online coordinate curves of genuine and forged signatures for Writer 1 and Writer 2).
Table 2. Overview of geometric features.

Feature         | Formula
Area            | $\sum_{i=1}^{n}\sum_{j=1}^{m} A[i,j]$
MajorAxisLength | $x_1 + x_2$
MinorAxisLength | $\sqrt{(x_1 + x_2)^2 - d^2}$
Perimeter       | $2l + 2w$
Extent          | Area / BoundingBox
Solidity        | Area / ConvexArea
Table 3. Dynamic features.

Feature                 | Definition
Velocity (v)            | $v_i = \sqrt{\dot{x}_i^2 + \dot{y}_i^2}$
Acceleration (a)        | $a_i = \sqrt{\dot{v}_i^2 + (v_i \dot{\theta}_i)^2}$
Angle (θ)               | $\theta_i = \arctan(\dot{y}_i / \dot{x}_i)$
Radius of curvature (ρ) | $\rho_i = \log(v_i / \dot{\theta}_i)$
Table 4. The result of three training samples.

Method | Features            | FAR    | FRR    | AER    | Accuracy
SVM    | GLCM                | 16.83% | 20.83% | 18.83% | 81.17%
       | HOG                 | 24.83% | 21.83% | 23.33% | 76.67%
       | GLCM + HOG          | 12.17% | 17.50% | 14.83% | 85.17%
       | Geometric           | 37.83% | 28.00% | 32.92% | 67.08%
       | Texture + Geometric | 13.00% | 17.50% | 15.25% | 84.75%
DTW    |                     | 11.17% | 10.50% | 10.84% | 89.17%
SF-L   |                     | 7.17%  | 7.67%  | 7.42%  | 92.58%
SF-A   |                     | 6.67%  | 7.17%  | 6.92%  | 93.08%
Table 5. The result of five training samples.

Method | Features            | FAR    | FRR    | AER    | Accuracy
SVM    | GLCM                | 14.50% | 13.17% | 13.83% | 86.17%
       | HOG                 | 11.83% | 19.33% | 15.58% | 84.42%
       | GLCM + HOG          | 6.67%  | 12.33% | 9.50%  | 90.50%
       | Geometric           | 29.17% | 26.50% | 27.83% | 72.17%
       | Texture + Geometric | 6.33%  | 12.17% | 9.25%  | 90.75%
DTW    |                     | 9.83%  | 9.33%  | 9.58%  | 90.42%
SF-L   |                     | 5.33%  | 5.50%  | 5.42%  | 94.58%
SF-A   |                     | 5.33%  | 4.83%  | 5.08%  | 94.92%
Table 6. The result of eight training samples.

Method | Features            | FAR    | FRR    | AER    | Accuracy
SVM    | GLCM                | 10.83% | 10.67% | 10.75% | 89.25%
       | HOG                 | 8.17%  | 8.17%  | 8.17%  | 91.83%
       | GLCM + HOG          | 5.50%  | 5.83%  | 5.67%  | 94.33%
       | Geometric           | 20.00% | 27.33% | 23.67% | 76.33%
       | Texture + Geometric | 5.67%  | 5.83%  | 5.75%  | 94.25%
DTW    |                     | 8.00%  | 7.50%  | 7.75%  | 92.25%
SF-L   |                     | 3.67%  | 1.83%  | 2.75%  | 97.25%
SF-A   |                     | 2.17%  | 3.17%  | 2.67%  | 97.33%
Table 7. The result of ten training samples.

Method | Features            | FAR    | FRR    | AER    | Accuracy
SVM    | GLCM                | 8.83%  | 10.50% | 9.67%  | 90.33%
       | HOG                 | 7.00%  | 6.33%  | 6.67%  | 93.33%
       | Geometric           | 16.83% | 25.33% | 21.08% | 78.92%
       | GLCM + HOG          | 4.83%  | 5.83%  | 5.33%  | 94.67%
       | Texture + Geometric | 4.17%  | 5.50%  | 4.83%  | 95.17%
DTW    |                     | 7.67%  | 7.50%  | 7.59%  | 92.42%
SF-L   |                     | 2.50%  | 2.33%  | 2.42%  | 97.58%
SF-A   |                     | 1.00%  | 3.33%  | 2.17%  | 97.83%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

