Handwritten Signature Verification Method Based on Improved Combined Features

Featured Application: This study proposes a handwritten signature verification method based on improved combined features, which combines dynamic and static features by exploiting the complementarity between classifiers through score fusion. Its significance lies in extracting more comprehensive and representative signature features in order to verify the authenticity of a signature and protect the safety of customer property.

Abstract: As a behavioral biometric, handwritten signatures are widely used in financial and administrative institutions, and forged signatures can cause great property losses to customers. This paper proposes a handwritten signature verification method based on improved combined features. Using advanced smart-pen technology, the offline image and online data of a signature can be obtained in real time while it is being written; this is the first time the combination of offline and online signatures has been realized in this way. We extract the static and dynamic features of the signature and verify them with a support vector machine (SVM) and dynamic time warping (DTW), respectively. We use a small number of samples during the training stage, which alleviates the problem of insufficient samples to a certain extent. Each verification also yields a decision score, giving two scores per signature. Finally, we propose a score fusion method based on accuracy (SF-A), which combines offline and online features through score fusion and effectively utilizes the complementarity among classifiers. Experimental results on a local data set with different numbers of training samples show that the false acceptance rate (FAR) and false rejection rate (FRR) obtained are better than those of offline or online verification alone.


Introduction
Handwritten signatures are widely used in daily life. With the development of machine learning and artificial intelligence, research on handwritten signature verification continues to deepen.
In terms of signature data collection, there are mainly two kinds: offline signature images and online signature data. An offline signature image is the name handwritten by the author on paper, transmitted to the computer through a scanning device to form a signature image, which is then verified according to image features. Online signature verification is based on signature tracks, such as the coordinates and pressures recorded during writing. In this paper, a smart pen with a pressure sensor and a camera is used to write the name on paper covered with tiny dots. While the signature is being written, its offline image and online data are collected in real time.
Signature verification usually includes two stages: training and testing. In the training stage, different numbers of genuine signatures are preprocessed, their features are extracted, and they are put into the classifier to obtain a model. In the test stage, test signatures are put into the classifier for comparison and the verification result is output.
In the feature extraction stage, offline signature image features are called static features, which are mainly divided into local features and global features. Local features are mainly texture and gradient features, and global features are mainly geometric features. Online signature data features are called dynamic features, which are mainly divided into parameter-based and function-based features. Parameter-based features mainly refer to quantities such as the signature duration and the number of pen lifts, while function-based features mainly refer to signature trajectories and pressure data. Function-based dynamic features generally give better results.
There are two main verification approaches: model-based and distance-based. Model-based methods mainly describe the data distribution through models such as the hidden Markov model (HMM), convolutional neural networks (CNNs), and SVM. Distance-based methods mainly compare the test signature with reference signatures using distance measures such as DTW. This paper uses SVM to process offline signature images and DTW to process online signature data.
There are two main difficulties in signature verification. The first is large intra-class and inter-class variability: the author's genuine signature changes with time, age, and other factors, and a forger may imitate the signature after extensive practice, so it is necessary to extract and select more comprehensive and representative signature features. The second is that in real-life scenarios only a small number of genuine signatures can be obtained for training, so insufficient data is also a problem that needs to be solved.
To address these challenges, this paper proposes a score fusion method based on accuracy weighting, which combines the static features of offline signature images and the dynamic features of online signature data through score fusion. Specifically, the images and data are preprocessed and their features extracted, and then verified by SVM and DTW, respectively. From the two classifiers we obtain the verification results and decision scores of the offline and online signatures. Because the classifiers' verification results differ, there is a certain degree of complementarity between them. Finally, we use score fusion to combine offline and online features and exploit this complementarity.
The remainder of the paper is organized as follows: Section 2 introduces related work of this study. Section 3 gives a detailed introduction to the proposed method. Section 4 is the experimental results and discussion. Section 5 presents the conclusion.

Related Works
A variety of systems and methods have been proposed for handwritten signature verification.

Feature Extraction in Signatures
Current research on feature extraction for signature verification mainly extracts texture features, geometric features, and dynamic features. Faiza et al. proposed an automatic recognition technique based on multi-level feature fusion and optimal feature selection [1]. They calculated 22 Gray-Level Co-occurrence Matrix (GLCM) features and 8 geometric features: the geometric features characterized the shape of the signature, such as edges and area, while the GLCM features represented texture information such as variations within each signature. They then adopted a High-Priority Index Feature (HPFI) scheme to combine these features and proposed a feature selection method based on Skewness-Kurtosis Controlled PCA (SKCPCA), from which the optimal features were selected to classify signatures as forged or genuine. The proposed system was verified on the MCYT, GPDS, and CEDAR data sets and, compared with existing methods, significantly improved the FAR and FRR, with FARs of 2.66%, 9.17%, and 3.34%, respectively. A disadvantage was that features removed during feature selection might affect system performance, and the removed features might have performed better on other data sets. Bhunia et al. proposed a writer-dependent signature verification method using two different types of texture features, discrete wavelet features and Local Quantized Pattern (LQP) features, extracted from the signature image [2]. For each author, two independent signature models were built with one-class support vector machines (OC-SVMs), corresponding to the LQP and wavelet features, giving two different verification scores; the two OC-SVM scores were then averaged to obtain the final verification score.
In tests on the GPDS, MCYT, and CEDAR data sets, the Equal Error Rate (EER) was 12.06%, 11.46%, and 7.59%, respectively, which verified the generality of the method. Ghanim et al. extracted different features and analyzed their impact on the recognition ability of the system [3]. The calculated features included the Histogram of Oriented Gradients (HOG) and geometric features such as length distribution, tilt distribution, and entropy. Applying different machine learning classifiers, Bagging Tree, Random Forest (RF), and SVM, the system was tested on the UTSig data set, and the experimental results showed that SVM outperformed the other classifiers, with an accuracy of 94%. Hadjadj et al. used Local Ternary Pattern (LTP) and oriented Basic Image Feature (oBIF) texture descriptors to extract features and projected the signature image into the feature space [4]. A test signature was judged genuine or forged by combining the decisions of two SVMs. The technique was tested on the Dutch and Chinese signature data sets of ICDAR 2011, with accuracies of 97.74% and 75.98%, respectively.
To address the existing challenges in signature verification and improve verification ability, Okawa et al. proposed a new feature extraction method that used a combination strategy to merge Fisher vectors with KAZE features detected from the foreground and background images [5]; tests on the MCYT-75 data set showed a lower error rate than existing signature verification methods. Alaei et al. proposed a handwritten signature verification method based on an interval symbolic representation and a fuzzy similarity measure, extracting local binary pattern (LBP) features from the signature image [6]. Tested on the GPDS data set, the method achieved a better error rate when the number of training samples was 8 or more. Akbari et al. proposed a global method that treated the signature image as a waveform: the image was decomposed into a series of wavelet sub-bands at a specific level, the decomposed image was expanded to obtain the waveform, and the waveform was quantized to generate a feature vector [7]; good results were achieved on both the MCYT and CEDAR data sets. Gyimah et al. proposed an improved handwritten signature verification system combining GLCM and image-area features [8]; testing the method with different SVM kernel functions gave a FAR of 2.50 and an FRR of 0.14. Based on the idea of optimal feature selection, Sharif et al. proposed a new feature selection method that used a genetic algorithm to find a suitable feature vector, which was then input to an SVM classifier [9]; experiments on the CEDAR, MCYT, and GPDS data sets gave FARs of 8.9, 17.2, and 9.41, respectively. Parziale et al. introduced the concept of stability: the most stable part of the signature, being the most similar part, was identified and introduced into DTW, and verification was performed by distance measurement [10]. Zois et al. proposed a new grid-based template matching scheme whose core was to effectively encode the geometric structure of the signature through grid templates and appropriately divided subsets; its validity was demonstrated on four data sets [11].

Classification in Signatures
Current handwritten signature verification algorithms and classifiers mainly use neural networks, SVMs, and so on. Hafemann et al. used a Convolutional Neural Network (CNN) to learn visual cues directly from the signature image, with significant results on the GPDS data set: the EER was reduced from 6.97% to 1.72% [12]. Luiz et al. suggested modifying the network structure, using spatial pyramid pooling to learn fixed-size features from variable-size signatures [13]; better results were obtained on the GPDS data set, and the experiments showed that using higher-resolution signature images can improve performance. Lai et al. used a supervised CNN to learn discriminative features at two levels, shallow and deep, and also proposed a position-related twin (Siamese) neural network to help learn a more discriminative feature space; good results were achieved on various data sets [14]. Calik et al. proposed a new CNN structure named the Large-Scale Signature Network (LS2Net), which handled the large-scale training sample problem through batch normalization [15]. LS2Net achieved high accuracy on the MCYT, CEDAR, and GPDS data sets, and experiments showed that batch normalization contributed significantly to performance. Zheng et al. applied the Rank Support Vector Machine (RankSVM) for the first time to the signature verification task [16]; the Receiver Operating Characteristic (ROC) curves showed that RankSVM maximized the Area Under the Curve (AUC), which mitigated the class imbalance problem to a certain extent. Maergner et al. proposed a method that combined a structural signature verification model based on graph edit distance with a statistical model based on a deep neural network. They argued that structural and statistical feature models are quite different and have complementary advantages; on the MCYT and GPDS data sets, it was shown that combining structural and statistical models can significantly improve performance by benefiting from their complementary characteristics [17]. Since skilled forgeries cannot be obtained during training, Sm et al. addressed the problem by examining different CNN loss functions, meeting the substantive requirements for generalization in handwritten signature verification [18]. Through analysis of three loss functions, cross entropy, Cauchy-Schwarz divergence, and hinge loss, they combined them into a dynamic multi-loss function and proposed a new integration framework for using them in a CNN simultaneously. Soleimani et al. proposed a Deep Multitask Metric Learning (DMML) classification method for handwritten signature verification, mainly drawing on the ideas of multi-task learning and transfer learning; experiments showed that DMML performed better in verifying genuine signatures, skilled forgeries, and random forgeries [19]. Okawa proposed a Local Stability-Weighted Dynamic Time Warping (LS-DTW) method for online signature verification; experiments on the MCYT-100 and SVC2004 online signature data sets achieved good results, effectively improving the speed and accuracy of online signature systems [20].
A variety of signature features and classification methods appear in the literature, so this paper focuses on combining and comparing features and achieving complementarity between classifiers. To improve handwritten signature verification, we propose a score fusion method based on accuracy (SF-A) to achieve feature combination. Figure 1 shows the implementation process of the signature verification method. The first step is data acquisition: the offline image and online data of the signature are obtained simultaneously through the smart pen. The quality of the signature data is then improved through preprocessing and feature extraction to ensure the accuracy of the verification result. The offline images and online data are verified with SVM and DTW, respectively, yielding two scores, Score1 and Score2. Finally, the result of fusing the offline and online features is obtained through SF-A.

Data Acquisition
This paper uses a smart pen with a camera and pressure sensor to sign on paper filled with tiny dots. While the signature is being written, its image and trajectory data are obtained in real time. Signatures from a total of 20 authors were collected, with 30 genuine signatures and 30 forged signatures per author, for a total of 1200 signatures. To obtain the forged signatures, 2~3 experimenters were shown the genuine signatures and produced forgeries after pre-training, making the forged signatures reliable and practical. The data set used in this article is composed entirely of Chinese signatures, but the experimental methods can be widely applied to other languages. Each collected signature includes both an offline signature image and online signature data. The creation of this data set is itself a bold attempt; we hope to achieve more scientific and reliable signature verification by combining online data and offline images. The online data contains six columns: time, X coordinate, Y coordinate, a stroke-start flag, a stroke-end flag, and pressure. Table 1 lists the signature data of two authors; each signature has a corresponding offline image and online data. The online curves in the table are obtained from the coordinates of the signature data: the abscissa represents time, and the ordinate represents the change of the X or Y coordinate of the online signature over time during writing. Red curves represent genuine signatures and blue curves represent forged signatures; there is an obvious difference between the curves. The data are processed, verified, and fused separately in the subsequent steps.
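For illustration, one row of the online data can be represented as follows. This is a sketch: the column order, including a leading time stamp, is an assumption based on the description above, and the exact file format is not specified in the paper.

```python
from typing import NamedTuple

class PenSample(NamedTuple):
    """One row of online signature data, in the assumed column order:
    time, X coordinate, Y coordinate, stroke-start flag, stroke-end
    flag, pressure."""
    t: float
    x: float
    y: float
    stroke_start: bool
    stroke_end: bool
    pressure: float

def parse_row(line: str) -> PenSample:
    """Parse one whitespace-separated row into a PenSample."""
    t, x, y, s, e, p = line.split()
    return PenSample(float(t), float(x), float(y),
                     s == "1", e == "1", float(p))

sample = parse_row("0.01 120.5 88.2 1 0 0.73")
print(sample.pressure)  # 0.73
```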

Table 1. Offline images and online data of two writers (Writer1 and Writer2).

Pre-Processing
In the pre-processing stage, grayscale conversion, binarization, and normalization are carried out on the offline signature image, as shown in Figure 2. This paper uniformly sets the image size to 64 × 64. These operations aim to reduce noise interference and improve the accuracy of feature extraction and verification. The online data is not pre-processed.
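The three pre-processing steps can be sketched in pure Python on nested-list images. This is a minimal illustration; the paper does not state its binarization rule, so a fixed threshold is assumed here.

```python
def to_gray(rgb):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to
    grayscale using the standard luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def binarize(gray, threshold=128):
    """Fixed-threshold binarization: ink = 1, background = 0
    (the thresholding rule is an assumption)."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def resize_nn(img, size=64):
    """Nearest-neighbour normalization to a size x size image."""
    h, w = len(img), len(img[0])
    return [[img[i * h // size][j * w // size] for j in range(size)]
            for i in range(size)]

# Tiny demo: white/black checker image -> grayscale -> binary -> 64 x 64.
binary = binarize(to_gray([[(255, 255, 255), (0, 0, 0)],
                           [(0, 0, 0), (255, 255, 255)]]))
normalized = resize_nn(binary)
```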

Feature Extraction
The feature extraction stage includes the extraction of static and dynamic features of the signature.

Static Features
For offline signature images, the texture and geometric features of the signature are extracted. Texture features represent local information of the image, and geometric features represent global information; the feature vector obtained by combining the two can represent the image content more fully and accurately. This paper uses GLCM and HOG to extract image texture features. The GLCM describes the joint distribution of the gray levels of two pixels in a given spatial relationship and is a second-order statistical texture measure. From the GLCM of the offline signature image, comprehensive information about direction, adjacent interval, and gray-level variation can be obtained, which is the basis for analyzing the local patterns and arrangement rules of the image.
Let f(x, y) denote a signature image, and let (x1, y1) and (x2, y2) be two points in the image with distance d between them and angle θ between their connecting line and the horizontal axis, with f(x1, y1) = i and f(x2, y2) = j. In this way, a matrix P(i, j, d, θ) can be obtained for various distances and angles. This paper analyzes the gray-level spatial relationships in the horizontal, bottom-left to top-right, vertical, and bottom-right to top-left directions, namely θ = 0°, 45°, 90°, 135°. At the same time, four parameters are used to summarize the matrix: contrast, correlation, energy, and homogeneity.
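The matrix and its four statistics can be sketched in pure Python. This is a minimal illustration assuming the image is already quantized to a small number of gray levels; it uses the standard Haralick-style definitions, since the paper's exact equations are not reproduced here.

```python
import math

def glcm(img, d=1, levels=4):
    """Normalized GLCM P(i, j) for horizontal neighbours (theta = 0)
    at distance d, for an image quantized to `levels` gray levels."""
    P = [[0.0] * levels for _ in range(levels)]
    n = 0
    for row in img:
        for x in range(len(row) - d):
            P[row[x]][row[x + d]] += 1
            n += 1
    return [[v / n for v in row] for row in P]

def glcm_stats(P):
    """Contrast, correlation, energy, and homogeneity of a GLCM
    (standard Haralick-style definitions, assumed here)."""
    L = len(P)
    pairs = [(i, j) for i in range(L) for j in range(L)]
    mu_i = sum(i * P[i][j] for i, j in pairs)
    mu_j = sum(j * P[i][j] for i, j in pairs)
    s_i = math.sqrt(sum((i - mu_i) ** 2 * P[i][j] for i, j in pairs))
    s_j = math.sqrt(sum((j - mu_j) ** 2 * P[i][j] for i, j in pairs))
    contrast = sum((i - j) ** 2 * P[i][j] for i, j in pairs)
    energy = sum(P[i][j] ** 2 for i, j in pairs)
    homogeneity = sum(P[i][j] / (1 + abs(i - j)) for i, j in pairs)
    correlation = sum((i - mu_i) * (j - mu_j) * P[i][j]
                      for i, j in pairs) / ((s_i * s_j) or 1.0)
    return contrast, correlation, energy, homogeneity

P = glcm([[0, 1, 2, 3], [0, 1, 2, 3], [3, 2, 1, 0]])
contrast, correlation, energy, homogeneity = glcm_stats(P)
```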
HOG is used to compute statistics of local image gradient directions. The main idea is to segment the signature image into cells, compute the gradient in each direction within each cell, and accumulate the values into a histogram. In this paper, the cell size is set to 20 × 20. The equations are as follows, where Gx(x, y), Gy(x, y), and H(x, y) respectively denote the horizontal gradient, vertical gradient, and pixel value at pixel (x, y):

Gx(x, y) = H(x + 1, y) − H(x − 1, y)
Gy(x, y) = H(x, y + 1) − H(x, y − 1)
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2)
α(x, y) = arctan(Gy(x, y) / Gx(x, y))

In addition, 9 geometric features are extracted to represent the global information of the image. Their equations are given in Table 2, where d represents the distance from point x1 to x2. The effect of each static feature is evaluated in the subsequent experiments.
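Following the gradient equations above, a single cell's HOG histogram can be sketched in pure Python. The 9-bin unsigned orientation histogram is a conventional choice and an assumption here; the paper specifies only the 20 × 20 cell size.

```python
import math

def cell_hog(H, bins=9):
    """Orientation histogram of one cell, weighted by gradient
    magnitude, using the central-difference Gx/Gy equations above
    (unsigned orientation over 180 degrees, an assumed convention)."""
    h, w = len(H), len(H[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = H[y][x + 1] - H[y][x - 1]   # horizontal gradient
            gy = H[y + 1][x] - H[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return hist

# Demo: a vertical edge produces purely horizontal gradients (bin 0).
edge = [[0, 0, 0, 10, 10, 10] for _ in range(6)]
hist = cell_hog(edge)
```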

Dynamic Features
Dynamic features are characterized by temporal limitation, movement, and angle variability. Compared with static features, they usually carry a stronger personal style. Combining static features with dynamic features can effectively improve the accuracy and reliability of signature verification. This paper uses smart-pen technology to obtain offline and online data at the same time, and finally combines static and dynamic features through SF-A.
In addition to the horizontal and vertical coordinates and pressure contained in the online data, four further dynamic features are extracted, namely velocity, acceleration, angle, and radius of curvature [21], as shown in Table 3.

Table 3. Dynamic features.
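As an illustration, velocity, acceleration, and angle can be derived from the sampled trajectory by finite differences. This is a sketch under the assumption of a uniform sampling interval; the radius of curvature is omitted for brevity, and the paper's exact definitions are those listed in Table 3.

```python
import math

def dynamic_features(xs, ys, dt=1.0):
    """Per-sample speed, acceleration, and path angle of the pen
    trajectory, computed by finite differences."""
    vx = [(x1 - x0) / dt for x0, x1 in zip(xs, xs[1:])]
    vy = [(y1 - y0) / dt for y0, y1 in zip(ys, ys[1:])]
    speed = [math.hypot(a, b) for a, b in zip(vx, vy)]
    accel = [(s1 - s0) / dt for s0, s1 in zip(speed, speed[1:])]
    angle = [math.atan2(b, a) for a, b in zip(vx, vy)]
    return speed, accel, angle

# Demo: a straight stroke at constant velocity (3, 4) per sample.
speed, accel, angle = dynamic_features([0, 3, 6, 9], [0, 4, 8, 12])
```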

SVM
SVM is a class of generalized linear classifiers that classifies data in a supervised manner; its decision boundary is the maximum-margin hyperplane solved from the training samples. In this paper, an SVM with an RBF kernel is used to classify the offline images. The positive samples in the training phase are different numbers of genuine signatures. Because forged signatures cannot be obtained for training in practice, this paper uses the same number of genuine signatures from other authors as negative samples, which provides a new way to address the small-sample problem. The test set is composed of the author's genuine and forged signatures. While obtaining the offline verification result, we note that the SVM classifies a sample according to its distance from the hyperplane, expressed as a score: when the score is less than 0, the signature is judged genuine, and when it is greater than 0, it is judged forged. We record this score as Score1 to lay the foundation for the subsequent feature combination.
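The offline branch can be sketched with scikit-learn's `SVC` as a stand-in implementation (an assumption, since the paper does not name its SVM library; the feature vectors below are random placeholders for the GLCM, HOG, and geometric features):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder feature vectors: the target writer's genuine signatures
# (label 0) and other writers' genuine signatures used as negatives (label 1).
genuine = rng.normal(loc=0.0, scale=0.3, size=(10, 8))
others = rng.normal(loc=3.0, scale=0.3, size=(10, 8))
X = np.vstack([genuine, others])
y = np.array([0] * 10 + [1] * 10)

# RBF-kernel SVM, as used for the offline images in this paper.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Score1: signed distance to the hyperplane. With labels {0: genuine,
# 1: negative}, a score below 0 means "genuine", matching the rule above.
score1 = clf.decision_function(genuine[:1])
print(score1[0] < 0)  # expected: True for a well-separated genuine sample
```

Note that in scikit-learn the sign of `decision_function` follows the class labels (positive for the larger label), which is why the genuine class is labeled 0 here.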

DTW
DTW is a classical optimization technique. It uses a time-warping function satisfying certain conditions to describe the temporal correspondence between a test template and a reference template, and solves for the warping function that minimizes the cumulative distance when the two templates are matched. This paper uses DTW to classify online signature data. Different numbers of genuine signatures are used for training, and their mean yields the Gaussian distribution of the reference template; the Gaussian distribution of the test template is then compared with it to obtain a degree of similarity, denoted Score2. Because the number of training samples differs, the threshold for judging authenticity also differs. For the subsequent feature combination, Score2 must share the same decision standard as Score1, so we normalize the online threshold to zero by subtracting the threshold itself. Finally, when Score2 is less than 0 the signature is judged genuine, and when it is greater than 0 it is judged forged.
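The core DTW recurrence can be sketched as follows. The paper compares Gaussian distributions of templates, whereas this minimal version aligns raw 1-D sequences; the threshold subtraction producing Score2 is shown with a purely hypothetical threshold value:

```python
import numpy as np

def dtw_distance(a, b):
    """Minimum cumulative |a_i - b_j| cost over all monotone alignments
    of two 1-D sequences (classic dynamic time warping)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-stretched copy of the same curve aligns with zero cost.
d = dtw_distance([0, 1, 2], [0, 0, 1, 1, 2, 2])
# Score2 = distance - threshold, so that Score2 < 0 means genuine
# (1.5 is a hypothetical training-derived threshold).
score2 = d - 1.5
```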

SF-A
For the offline image and online data of the same signature, SVM and DTW may produce different verification results. For example, for a genuine signature, the offline image may be judged genuine by SVM while the online data is judged forged by DTW; there is therefore a certain complementarity between them, and weighted fusion is required. Commonly used fusion methods fall into three categories: fusion based on class labels, fusion based on class ranking, and fusion based on probability output. The third type best fits the situation in this paper. For two decision scores, a common method is to use the sigmoid function of logistic regression to predict the output, finding the best weights for the two scores through gradient descent over an increasing number of iterations. When the result is greater than 0.5 the signature is judged genuine, and when it is less than 0.5 it is judged forged; finally, the line that best divides the data completes the classification. In the experimental stage, we use SF-L to denote this score fusion method based on logistic regression.
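The SF-L baseline described above can be sketched as follows; the weights w and bias b would be fitted by gradient descent on training scores, and the values used here are purely illustrative (negative weights, because negative scores indicate genuine signatures):

```python
import math

def sf_l_predict(score1, score2, w, b):
    """Logistic-regression fusion of the two decision scores: a sigmoid
    of their weighted sum, judged genuine when the output exceeds 0.5."""
    z = w[0] * score1 + w[1] * score2 + b
    p = 1.0 / (1.0 + math.exp(-z))   # sigmoid output in (0, 1)
    return "genuine" if p > 0.5 else "forged"

# Both classifiers lean genuine (negative scores) -> output above 0.5.
label = sf_l_predict(-0.5, -0.8, w=(-1.0, -1.0), b=0.0)
```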
The SF-A proposed in this paper is based on the classification accuracies of SVM and DTW, using a weighted average to assign weights to Score1 and Score2. Through the fusion of decision scores, the static and dynamic features of the signature are combined. Suppose the accuracy of SVM is Accuracy1 and the weight of Score1 is w1, while the accuracy of DTW is Accuracy2 and the weight of Score2 is w2. The weights w1 and w2 are then derived from these accuracies; by this definition, the higher a classifier's accuracy, the larger its weight. Xi represents the i-th signature. The Final_Score obtained by SF-A is compared with 0: if it is less than 0 the signature is judged genuine, and if it is greater than 0 it is judged forged. By assigning each classifier a weight based on its accuracy, which can also be regarded as a confidence, and then fusing the scores, the result is more reliable and achieves a greater degree of complementarity between the classifiers.
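A minimal sketch of SF-A, under the assumption that each weight is the corresponding classifier's accuracy normalised by the sum of both accuracies (a plausible reading of "higher accuracy, bigger weight"; the paper's exact definition may differ):

```python
def sf_a(score1, score2, accuracy1, accuracy2):
    """Accuracy-weighted fusion of the SVM score (Score1) and DTW score
    (Score2); Final_Score < 0 is judged genuine, > 0 forged."""
    w1 = accuracy1 / (accuracy1 + accuracy2)   # weight of Score1
    w2 = accuracy2 / (accuracy1 + accuracy2)   # weight of Score2
    final_score = w1 * score1 + w2 * score2
    return "genuine" if final_score < 0 else "forged"

# SVM leans forged (+0.4) but the more accurate DTW leans genuine (-0.9):
label = sf_a(0.4, -0.9, accuracy1=0.85, accuracy2=0.95)
```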

Experiment Results and Discussion
Since this paper aims to combine offline images and online data, and publicly available data sets containing both offline images and the corresponding online data are still difficult to obtain, the experiments use the collected offline images and online data of 1200 signatures. We use FAR, FRR, AER, and Accuracy as evaluation indicators. FAR is the percentage of forged signatures used for testing that are judged genuine. FRR is the percentage of genuine signatures used for testing that are judged forged. AER is the average of FAR and FRR. Accuracy refers to verification accuracy, i.e., whether the predicted result matches the actual label. Current deep learning methods can achieve good signature verification performance, but they require large amounts of data and are therefore not very practical. Considering practical application, we think it is more important to solve the small-sample problem that arises in real situations. In studies of small-sample problems, most scholars choose no more than ten samples for training; normally, ten samples yield the best experimental results. For the 1200 signatures, this paper selected three, five, eight, and ten genuine signature samples for training and conducted experiments and comparisons on different features. For the offline signature training set, we randomly selected three, five, eight, and ten genuine signatures of an author as positive samples for SVM training, and randomly selected the same number of genuine signatures of other authors as negative samples, addressing the fact that forged signatures cannot be obtained for training in practice.
For the online signature training set, three, five, eight, and ten genuine signatures were randomly selected as registration samples to obtain the first Gaussian distribution; the remaining signatures were used as test samples to obtain the second Gaussian distribution, and the similarity between the two was compared to determine authenticity. Each experiment was repeated ten times to ensure the accuracy of the results.
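The evaluation indicators defined above can be computed directly from the decision scores; following the sign convention of Score1 and Score2, scores below the threshold are accepted as genuine:

```python
def evaluation_metrics(genuine_scores, forged_scores, threshold=0.0):
    """FAR: fraction of forged test signatures accepted as genuine;
    FRR: fraction of genuine test signatures rejected as forged;
    AER: the average of FAR and FRR."""
    far = sum(s < threshold for s in forged_scores) / len(forged_scores)
    frr = sum(s >= threshold for s in genuine_scores) / len(genuine_scores)
    aer = (far + frr) / 2
    return far, frr, aer

far, frr, aer = evaluation_metrics([-1.2, -0.3, 0.1], [0.8, -0.2])
```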

Results
Tables 4-7 show the experimental results of the two classifiers using different features on the local data set. Across different numbers of training samples, the proposed SF-A, which combines dynamic and static features through score fusion, gives better results than verification with a single feature or with logistic regression alone. Figures 3-6 show the division of samples by SF-A and SF-L; the improved weighted line divides genuine and forged signatures better than the original method.

Comparative Analysis
Experimental results show that using SF-A to combine static and dynamic features is effective for handwritten signature verification. Examining the results for each feature, we find that for offline images, GLCM outperforms HOG when the number of samples is 3 or 5, while HOG outperforms GLCM when the number is 8 or 10. Overall, texture features perform better than geometric features, which may be related to feature-vector dimensionality, as the geometric feature vector is smaller. However, combining geometric and texture features usually improves verification accuracy and reduces FAR and FRR, because texture features represent the local texture of the offline image while geometric features represent its global information; the combined feature vector should therefore be more representative and reliable.
From the perspective of score fusion, that is, combining static and dynamic features, the experimental results of both SF-L and SF-A are better than those of static or dynamic features alone, indicating that both fusion methods exploit the complementarity between the two classifiers. Between SF-L and SF-A, the SF-A proposed in this paper achieves the best results under different numbers of training samples by weighting with the classifiers' verification accuracies. In particular, the improvement in FAR is more significant, and some applications place higher requirements on FAR. We therefore conclude that the proposed SF-A can effectively improve the performance of handwritten signature verification.


Conclusions
This paper proposes a handwritten signature verification method based on improved combined features to meet the requirements of current high-precision systems. We use a smart pen to obtain the offline image and online data of a signature at the same time, and perform preprocessing and feature extraction on each. We then apply SVM to the offline images and DTW to the online data to obtain verification results and decision scores, and propose the SF-A method, which combines static and dynamic features by fusing the decision scores.
To verify the effectiveness of the proposed method, we conducted experiments on a local data set. Using different numbers of training samples, the two classifiers were compared across different signature features. Experimental results show that fusing static and dynamic features with SF-A achieves the highest accuracy and the lowest FAR and FRR, and can effectively verify handwritten signatures.
This paper contributes to score fusion and feature combination. Through SF-A, the complementarity of the classifiers' results is used effectively, and a higher accuracy is achieved. At the same time, through smart pen technology, this paper combines offline images and online data for the first time. Finally, in the SVM training stage, genuine signatures of other authors are used as negative samples, which provides a new idea for training with small samples. Unlike deep learning methods, the proposed method is more interpretable and more widely applicable.
In future work, the data set will be continuously expanded to verify the universality of the method, for example by collecting signatures in different languages to compare the verification effect across languages. We will make the data sets publicly available so that more researchers and institutions can conduct experiments and comparisons. We will also continue to study signature features in depth, seeking the most representative signature feature vector, and investigate how to achieve the best verification effect with the minimum training set.