Article

Novel Local Coding Algorithm for Finger Multimodal Feature Description and Recognition

1 Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
2 Shenzhen Polytechnic, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(9), 2213; https://doi.org/10.3390/s19092213
Submission received: 9 April 2019 / Revised: 7 May 2019 / Accepted: 9 May 2019 / Published: 13 May 2019
(This article belongs to the Special Issue Biometric Systems)

Abstract

Recently, finger-based biometrics, including fingerprint (FP), finger-vein (FV) and finger-knuckle-print (FKP), which offer high convenience and user friendliness, have attracted much attention for personal identification. Feature expressions that are insensitive to illumination and pose variation are beneficial for improving finger trimodal recognition performance. Therefore, exploring a suitable method of reliable feature description is of great significance for developing finger-based biometric recognition systems. In this paper, we first propose a correction approach for dealing with the pose inconsistency among the finger trimodal images, and then introduce a novel local coding-based feature expression method to further implement feature fusion of the FP, FV, and FKP traits. In the coding scheme, a bank of oriented Gabor filters is first used to enhance direction features in finger images. Then, a generalized symmetric local graph structure (GSLGS) is developed to fully express the position and orientation relationships among neighboring pixels. Experimental results on our own-built finger trimodal database show that the proposed coding-based approach achieves excellent performance in improving matching accuracy and recognition efficiency.

1. Introduction

With the arrival of the information age and the rapid development of computer technology, people have ever-higher requirements for the accuracy of biometric identification technology [1]. Compared with other common biometric traits, finger-based traits (e.g., fingerprint [2], finger-vein [3] and finger-knuckle-print [4]) have advantages in uniqueness, anti-counterfeiting, user acceptance, and security [5,6,7,8]. However, affected by the external environment and the inherent differences among individuals, relying only on a finger unimodal biometric for identity authentication still carries many security risks and can no longer meet users' high-performance requirements. Hence, fusing the three traits of a finger should be beneficial for addressing the person recognition problem [9,10].
However, the quality of the three modal finger images is usually degraded seriously by illumination variation on the skin surface, which hinders reliable feature representation [11,12,13]. In addition, the finger trimodal images vary with the pose rotation of the finger during imaging, which reduces the discriminability of the images and further decreases the recognition accuracy. Therefore, exploring a robust feature representation method is very favorable for improving finger-based recognition.
Recently, researchers have developed several coding-based feature expression methods, which are often considered capable of solving the above two problems [14,15,16,17,18,19,20,21]. Ojala et al. first proposed the classical local binary pattern (LBP) algorithm for facial recognition, which has strong rotation invariance and is insensitive to illumination variation [16]. In 2011, Rosdi et al. proposed a local line binary pattern (LLBP) algorithm to effectively exploit the position relationships among surrounding pixels in the horizontal and vertical orientations [17]. Meng et al. [19] proposed a local direction coding (LDC) algorithm, which utilized gradient relationships to express venous features for finger-vein recognition. In 2013, Peng et al. [20] combined the Gabor wavelet and LBP (GLBP) for feature extraction, which could effectively improve the representation of both local and global features.
Notably, several methods related to local graph structures have been presented in succession, and their variants have been successfully applied in many biometric fields [22,23,24,25,26,27,28,29]. Abusham et al. first proposed a local graph structure (LGS) algorithm to extract face features, which was insensitive to illumination [22]. However, the structure was asymmetric, which led to an imbalanced feature representation between the left and right neighborhoods. In order to balance the feature representation of neighboring pixels on both sides, Abdullah et al. [23] improved the original LGS operator into the SLGS operator by building a symmetric structure. In 2015, Dong et al. [24] presented a MOW-SLGS operator for the representation of vein networks and used an extreme learning machine (ELM) to accomplish finger-vein image classification. Building on this, in 2018, Yang et al. [25] put forward the Weber SLGS, which integrated differential direction features obtained via the Weber law with the local graph structure algorithm. However, these algorithms still have some limitations in representing finger multimodal features. On the one hand, the methods above describe only the relationships between the target pixel and its adjacent pixels in a fixed neighborhood, while neglecting the hidden relationships among surrounding pixels. On the other hand, assigning different weights to symmetric pixels on the left and right sides usually results in an imbalanced feature expression in images.
To effectively overcome these limitations, we propose a Gabor generalized symmetric local graph structure (Gabor-GSLGS) for finger multimodal fusion recognition, as shown in Figure 1. In the image capture part, a finger imaging device is first designed, and a pose correction algorithm is proposed to reduce pose variations; a robust finger region of interest (ROI) localization approach is then employed. Secondly, a bank of six-orientation, single-scale Gabor filters is utilized for finger ROI image enhancement. Thirdly, based on the proposed GSLGS operator, a local coding algorithm is developed for finger feature representation. The coded trimodal feature images of a finger are then divided evenly into non-overlapping blocks, so that, for each finger, we obtain a feature vector by concatenating the histograms of all blocks. Finally, by computing the similarities between the obtained vectors, the matching results can be reported statistically. Experimental results on our own-built database demonstrate that the proposed feature description approach performs better than other traditional approaches in finger multimodal fusion recognition.
The remainder of this paper is organized as follows: the finger trimodal imaging device and the proposed posture correction are introduced in Section 2. The enhancement methods used for finger ROI images are described in Section 3. Section 4 details the structure of the proposed local coding algorithm. Section 5 describes the feature matching scheme employed to implement finger multimodal fusion recognition. Section 6 outlines the extensive experiments conducted and presents the analysis of the experimental results in detail. Finally, Section 7 concludes the paper.

2. Finger Image Capture and Preprocessing

2.1. Image Acquisition

As shown in Figure 2a, we have developed a homemade image acquisition device to obtain finger trimodal images. The finger imaging device is designed to capture fingerprint (FP), finger-vein (FV), and finger-knuckle-print (FKP) images automatically. It is composed of a binocular camera with two optical filters, a fingerprint acquisition instrument, and an array of LEDs at a wavelength of 850 nm. In the imaging device, the FP images are obtained directly through the fingerprint instrument, which has a quick collection speed. Based on the imaging characteristics of a finger, the FV images are collected by using near-infrared (NIR) light to illuminate the palm side of the finger in a penetration manner [27]. For the FKP modality, visible light reflected from the finger surface is used for image acquisition.
For convenient image acquisition, a collection groove of fixed size is designed in the imaging device to constrain the position of the finger during imaging. It can, to a large extent, effectively avoid the image mismatch problem caused by rotation and translation of the finger. As shown in Figure 2b, the dimensions of our finger imaging device are 10.9 × 9.8 × 17.8 cm (length × width × height).
As shown in Figure 2c, the acquisition program runs on the Windows platform, and the software interface is built in C++. The top of the interface displays the finger trimodal images captured in real time. Considering the friendliness of human-computer interaction, a window on the right side of the system reminds users of any problems during system operation.
From Figure 2c, it can be clearly seen that the originally captured finger trimodal images still exhibit, to a small extent, some posture variation. To solve this problem during the acquisition process, we present a posture correction method for finger trimodal images.

2.2. Posture Correction

Although a collection groove is designed in the acquisition device to fix the position of the finger, the finger can still rotate in-plane within a small range. As the finger rotates in-plane, the edge lines of the finger change steadily. Hence, the rotation angle of the finger posture can be calculated and corrected based on the edge lines of the finger. Owing to the different illuminations, the edge lines of the finger in the finger-vein image are easier to detect and process than those in the finger-knuckle-print image. Therefore, the finger in the finger-vein imaging space is selected to calculate the rotation angle, and the three modalities are then rotated and corrected together. The calculation process of the finger posture angle is shown in Figure 3.
First, the captured finger image is filtered to remove noise, and the edge lines of the finger are detected. Then, the point coordinates of the two edge lines are extracted to calculate the midpoint coordinates. As shown in Figure 3b, {Ln} (n = 1, 2, …, N) denotes the coordinate set of the left edge line of the finger, {Rn} the coordinate set of the right edge line, and X and Y the row and column coordinates of the midpoints {Mn}. The midpoints are calculated as follows:

$$X_{M_n} = \frac{X_{L_n} + X_{R_n}}{2}, \qquad Y_{M_n} = Y_{L_n} = Y_{R_n} \quad (1)$$
Linear fitting of the midpoints {Mn} by the least squares method yields the direction line l = kx + b, where:

$$k = \frac{\sum_{n=1}^{N}\left(x_{M_n} - \bar{x}\right)\left(y_{M_n} - \bar{y}\right)}{\sum_{n=1}^{N}\left(x_{M_n} - \bar{x}\right)^2}, \qquad b = \bar{y} - k\bar{x} \quad (2)$$
Finally, according to k, the posture angle θ of the finger is calculated as follows:
$$\theta = \arctan\left(\frac{1}{k}\right) \quad (3)$$
Notably, the center of rotation should be the center of the finger direction line l, which reduces the amplitude of the posture swing of the finger and improves the stability of the correction. Hence, taking the midpoint M_{N/2} as the center of rotation and θ as the angle of rotation, the finger in the finger-vein imaging space is rotated and corrected. Some original and corrected images of the same finger are shown in Figure 4.
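To make the correction concrete, the sketch below implements Equations (1)–(3) in NumPy. It is a minimal illustration under our own assumptions about the array layout (one (X, Y) point per index n on each edge line); the paper itself provides no code.

```python
import numpy as np

def posture_angle(left_edge, right_edge):
    """Sketch of Eqs. (1)-(3): estimate the finger posture angle from the
    paired points of the two detected edge lines. left_edge and right_edge
    are (N, 2) arrays of (X, Y) coordinates; for each n the two points share
    the same Y, so averaging both coordinates realizes Eq. (1)."""
    mid = (left_edge + right_edge) / 2.0              # midpoints M_n, Eq. (1)
    x, y = mid[:, 0], mid[:, 1]
    xb, yb = x.mean(), y.mean()

    # Least-squares fit of the direction line l = kx + b, Eq. (2).
    k = np.sum((x - xb) * (y - yb)) / np.sum((x - xb) ** 2)
    b = yb - k * xb

    theta = np.arctan(1.0 / k)                        # posture angle, Eq. (3)
    center = tuple(mid[len(mid) // 2])                # rotation center M_{N/2}
    return theta, center, b
```

The finger-vein image (and, with the same transform, the other two modalities) could then be rotated about `center` by `theta`, for example with OpenCV's `cv2.getRotationMatrix2D` and `cv2.warpAffine`.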
From Figure 4, we can see that the proposed posture correction algorithm effectively solves the problem of random in-plane rotation of the finger. This shows that the hardware design and the posture correction algorithm together achieve good consistency of the finger posture.
However, the corrected finger images still contain some unnecessary background and uninformative parts. Hence, the captured finger images need to be processed further to localize the regions of interest (ROIs).

2.3. ROI Extraction

Since the imaging principles and acquisition approaches of the FP, FV, and FKP traits differ, different ROI extraction methods should be adopted accordingly [2]. In this paper, we apply the core point detection method to extract the FP ROI image [30], the convex direction coding method to extract the FKP ROI image [31], and the interphalangeal joint prior method to extract the FV ROI image [3]. The FP, FV, and FKP images are thus cropped to 152 × 152 pixels, 200 × 91 pixels, and 200 × 90 pixels, respectively. Some finger trimodal ROI images are shown in Figure 5.

3. Finger Image Enhancement

In recent decades, Gabor filters have been widely applied in many fields since they not only extract the texture information of an image in multiple directions, but also reduce the influence of illumination to some extent [32]. Given the abundant, strongly oriented texture of the FP, FV, and FKP traits, oriented Gabor filters are used here for image enhancement.
A Gabor filter consists of a real part and an imaginary part. The real part is also called the even-symmetric Gabor filter, which is suitable for ridge detection in an image [2]. Since all three modality images of a finger have their own particular ridge textures, the real part of the Gabor filter can be used to extract flexible feature information effectively [33]. It can be expressed as
$$G\left(x, y, \theta_k, f_k\right) = \frac{\gamma}{2\pi\sigma^2}\exp\left\{-\frac{1}{2}\,\frac{x_{\theta_k}^2 + \gamma^2 y_{\theta_k}^2}{\sigma^2}\right\}\cos\left(2\pi f_k x_{\theta_k}\right) \quad (4)$$
where $x_{\theta_k} = x\cos\theta_k + y\sin\theta_k$ and $y_{\theta_k} = y\cos\theta_k - x\sin\theta_k$; σ is the scale of the Gaussian envelope and γ its aspect ratio; k = 1, 2, …, K is the orientation index; θk denotes the orientation of the k-th Gabor filter; and fk is the central frequency of the sinusoidal plane wave. Assuming R(x, y) is an original ROI image, the k-th Gabor filtered image Ik(x, y) can be obtained by
$$I_k(x, y) = R(x, y) \ast G\left(x, y, \theta_k, f_k\right) \quad (5)$$

where the symbol “∗” denotes two-dimensional convolution.
First, the original ROI image is convolved with the K-channel Gabor filters. Then, the K Gabor filtered images are merged into a single image I(x, y) using the competitive coding method proposed in [15]. Some Gabor filtered images are shown in Figure 6. It can be clearly seen that the texture information of the finger images is effectively enhanced after multichannel Gabor filtering. On this basis, we apply the coding-based scheme to obtain more stable and reliable finger features.
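The enhancement step can be sketched as follows: the kernel implements Eq. (4) and the convolution Eq. (5). The kernel size and the values of f, σ, and γ are illustrative placeholders (the paper does not list them), and the per-pixel maximum over orientations is used here as a simple stand-in for the competitive coding fusion of [15].

```python
import numpy as np
from scipy.signal import convolve2d

def even_gabor_kernel(size, theta, f, sigma, gamma):
    """Real (even-symmetric) Gabor kernel of Eq. (4)."""
    half = size // 2
    coords = np.arange(-half, half + 1)
    x, y = np.meshgrid(coords, coords)            # x along columns, y along rows
    x_t = x * np.cos(theta) + y * np.sin(theta)   # x_{theta_k}
    y_t = y * np.cos(theta) - x * np.sin(theta)   # y_{theta_k}
    return (gamma / (2.0 * np.pi * sigma ** 2)
            * np.exp(-0.5 * (x_t ** 2 + gamma ** 2 * y_t ** 2) / sigma ** 2)
            * np.cos(2.0 * np.pi * f * x_t))

def gabor_enhance(roi, K=6, size=17, f=0.1, sigma=4.0, gamma=0.5):
    """Filter the ROI with K oriented kernels (Eq. (5)), with orientations
    evenly spaced over [0, 180), and keep the strongest response per pixel."""
    responses = [
        convolve2d(roi, even_gabor_kernel(size, k * np.pi / K, f, sigma, gamma),
                   mode='same', boundary='symm')
        for k in range(K)
    ]
    return np.max(np.stack(responses), axis=0)
```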

4. Feature Extraction Based on Local Coding Algorithm

To make full use of the local position and gradient features between adjacent pixels in the Gabor filtered images, a local coding algorithm based on a generalized symmetric LGS is proposed for feature representation. The specific steps are as follows:
Step 1: The finger trimodal images are enhanced by the K-channel even-symmetric Gabor filters of Section 3, yielding the Gabor filtered images.
Step 2: As shown in Figure 7, for each center pixel in the Gabor enhanced images, we select three pixels each in the left and right parts of its n × n neighborhood (the square area in Figure 7) to constitute the GSLGS operator in the horizontal orientation. In the weight distribution, symmetric pixels on the right and left sides are assigned equal weights. More details are shown in Figure 7.
Since the Gabor features of the finger trimodal images have diverse directions, the information of the surrounding pixels should be extracted in multiple orientations for efficient feature representation. Centered on the target pixel, rotating the GSLGS operator counterclockwise by θk (corresponding to Step 1) yields the structure of the GSLGS in an arbitrary orientation. For instance, when K = 4, the structures of the GSLGS are shown in Figure 8.
Step 3: The coding process of the GSLGS operator is shown in Figure 9. In the left and right neighborhoods, the gray values of the pixels are compared in succession, starting from the target pixel. If the value increases, the relationship between the two compared pixels is coded as 1; otherwise, it is coded as 0. The coding calculation is expressed as
$$F\left(\theta_k\right) = \sum_{r=1}^{6} p_r\left(g_r - f_r\right) 2^{6-r} + \sum_{l=1}^{6} q_l\left(g_l - f_l\right) 2^{6-l}, \qquad k = 1, 2, \ldots, K \quad (6)$$

$$p_r\left(g_r - f_r\right) = \begin{cases} 1, & g_r - f_r \ge 0 \\ 0, & g_r - f_r < 0 \end{cases} \quad (7)$$

$$q_l\left(g_l - f_l\right) = \begin{cases} 1, & g_l - f_l \ge 0 \\ 0, & g_l - f_l < 0 \end{cases} \quad (8)$$
where gr and fr (gl and fl) denote the values of two adjacent pixels in the right (left) neighborhood, and F(θk) represents the feature code in the θk orientation.
From Figure 9, the coded values of the target pixel at 0° and 45° can be obtained according to Equations (6)–(8), and the same calculation is performed at 90° and 135°. The coded values of the central pixel in these four directions are as follows:
  • F(θ1) = (010100)2 + (110110)2 = (0 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 0 × 2 + 0 × 1) + (1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 0 × 1) = 74.
  • F(θ2) = (100100)2 + (000100)2 = (1 × 32 + 0 × 16 + 0 × 8 + 1 × 4 + 0 × 2 + 0 × 1) + (0 × 32 + 0 × 16 + 0 × 8 + 1 × 4 + 0 × 2 + 0 × 1) = 40.
  • F(θ3) = (100110)2 + (010110)2 = (1 × 32 + 0 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 0 × 1) + (0 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 0 × 1) = 60.
  • F(θ4) = (010011)2 + (010101)2 = (0 × 32 + 1 × 16 + 0 × 8 + 0 × 4 + 1 × 2 + 1 × 1) + (0 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 0 × 2 + 1 × 1) = 40.
Step 4: Inspired by the optimal response of Gabor filters over multiple orientations, we choose the maximum among these coded values as the final code, defined as
$$F(x, y) = \mathop{\arg\max}_{\theta_k \in \left(0^\circ, 180^\circ\right)} F\left(\theta_k\right) \quad (9)$$
As mentioned above, the final code of each target pixel in the Gabor enhanced image can be obtained with the GSLGS operator. For instance, the coded value in Figure 7 is F(x, y) = argmax{F(θ1), F(θ2), F(θ3), F(θ4)} = F(θ1) = 74.
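A minimal sketch of the coding of Equations (6)–(9) is given below. The exact pixel pairs traversed by the graph are defined by Figures 7 and 8 and are therefore passed in by the caller; all function names and the pair-list representation are our own illustrative choices, not the paper's exact operator layout.

```python
import numpy as np

def side_code(patch, pairs):
    """One summand of Eq. (6): six successive comparisons (f, g) along the
    graph edges of one side; each bit is 1 if g - f >= 0 (Eqs. (7)-(8)) and
    is weighted 2^5 ... 2^0."""
    code = 0
    for i, ((fr, fc), (gr, gc)) in enumerate(pairs):
        bit = 1 if patch[gr, gc] - patch[fr, fc] >= 0 else 0
        code += bit << (5 - i)
    return code

def gslgs_code(patch, right_pairs, left_pairs):
    """Eq. (6): the two 6-bit side codes are added, so symmetric pixels on
    the left and right carry equal weights."""
    return side_code(patch, right_pairs) + side_code(patch, left_pairs)

def gslgs_final(patch, operators):
    """Eq. (9): keep the maximum code over the K rotated operators, each a
    (right_pairs, left_pairs) tuple of (row, col) indices into the patch."""
    return max(gslgs_code(patch, r, l) for r, l in operators)
```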
Considering the strong capability of Gabor filters to enhance texture features in any orientation, the GSLGS operator is extended to arbitrary orientations, giving it superior orientation selectivity. It can therefore effectively alleviate the image mismatch problem caused by finger pose variation. More importantly, the proposed local coding algorithm fully considers the relationships between each target pixel and its surrounding neighborhood. In addition, the weights assigned to symmetric pixels on the two sides are identical, so the local feature representation of the finger remains balanced in the GSLGS operator.

5. Feature Fusion and Matching

In this section, a gray histogram-based feature matching method is used for finger trimodal fusion recognition, as shown in Figure 10. First, the coded finger trimodal images are uniformly divided into M non-overlapping blocks. Then, the M local histograms corresponding to the sub-blocks are built. Assuming that Hi (i = 1, 2, …, M) denotes the histogram of the i-th block of a coded finger-vein image, the global histogram Hfv is defined as

$$H_{fv} = \left[H_1, H_2, \ldots, H_M\right] \quad (10)$$
Similarly, Hfp and Hfkp represent the global histograms of a coded fingerprint image and a coded finger-knuckle-print image, respectively. The final feature histogram H of the finger trimodal images can then be expressed as

$$H = \left[H_{fp}, H_{fv}, H_{fkp}\right] \quad (11)$$
After the above calculation, we obtain the feature histogram of each finger sample. Various classification algorithms, such as SVM, ELM, and k-NN, could be used here [34]. In this section, for convenience, the intersection coefficient between two feature vectors is calculated to determine the similarity of two individuals [29]. Assuming H1(i) and H2(i) denote the histograms of two samples to be matched, the similarity is computed by

$$\mathrm{sim}\left(H_1, H_2\right) = \frac{\sum_{i=1}^{L}\min\left[H_1(i), H_2(i)\right]}{\sum_{i=1}^{L} H_1(i)} \quad (12)$$
where L denotes the dimension of the feature vectors to be matched. In the matching process, if the intersection coefficient sim(·) is greater than the similarity decision threshold T, the two samples are considered similar and are matched; if sim(·) ≤ T, the two samples are not matched. Thus, two samples tend to be more similar as the intersection coefficient increases. The similarity decision threshold T corresponds to the threshold at which the false rejection rate (FRR) equals the false acceptance rate (FAR).
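Equations (10)–(12) translate directly into a short sketch; the function names are ours, and the 127-bin histogram range is inferred from the GSLGS code values (the sum of two 6-bit side codes lies in [0, 126]) rather than stated in the paper.

```python
import numpy as np

def block_histograms(coded_img, m=8):
    """Split a coded image into m x m non-overlapping blocks (M = 8 x 8 is
    the best choice found in Section 6.1.2) and concatenate their gray
    histograms, as in Eq. (10)."""
    h, w = coded_img.shape
    bh, bw = h // m, w // m
    feats = []
    for i in range(m):
        for j in range(m):
            block = coded_img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=127, range=(0, 127))
            feats.append(hist)
    return np.concatenate(feats)

def trimodal_feature(fp_coded, fv_coded, fkp_coded):
    """Eq. (11): H = [H_fp, H_fv, H_fkp]."""
    return np.concatenate([block_histograms(x)
                           for x in (fp_coded, fv_coded, fkp_coded)])

def intersection_similarity(h1, h2):
    """Eq. (12): histogram intersection coefficient (note the asymmetric
    normalization by the first histogram)."""
    return np.minimum(h1, h2).sum() / h1.sum()
```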

6. Experimental Results

In order to verify the proposed coding-based method, a finger trimodal database collected with our homemade image acquisition system is used in the experiments. The database contains a total of 17,550 images from 585 individual fingers (index, middle, and ring fingers of both hands), with 30 images per finger (10 images per modality). Here, we randomly select 3000 image samples from 100 different individuals, each of which contributes 10 images for each of the FP, FV, and FKP traits, as the experimental database.
The proposed Gabor-GSLGS algorithm is implemented in MATLAB R2014a on a standard desktop PC equipped with an Intel Core i5-7400 CPU at 3 GHz and 8 GB of RAM.
The detailed experiments are as follows: Section 6.1 analyzes the influence of different parameter selections on the recognition rate. Section 6.2 presents a detailed comparison of the performance of unimodal and multimodal recognition. The experimental results of different feature extraction methods are compared in Section 6.3.

6.1. Parameter Selection

6.1.1. Selection of k

As introduced in Section 3 and Section 4, the number of orientations in the local coding algorithm corresponds to the number of channels of the Gabor filter bank. Hence, different values of k produce different effects on finger multimodal recognition performance. In order to find the optimal k, we evaluate two recognition indicators: the equal error rate (EER) and the time cost of feature extraction. The EER listed in Table 1 is the error rate at which the FRR and FAR are equal; here, the FAR measures incorrect acceptances of individuals, while the FRR measures incorrect rejections. The ROC (receiver operating characteristic) curves for the intersection coefficient measure are plotted in Figure 11, where FAR and FRR are shown in the same plot at different thresholds.
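For reference, the EER can be obtained from genuine and impostor similarity scores as in this generic sketch (not the paper's evaluation code), using the decision rule of Section 5 (match if sim > T):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep the threshold T over all observed scores; FRR is the fraction
    of genuine pairs rejected (sim <= T), FAR the fraction of impostor
    pairs accepted (sim > T). The EER is where the two rates cross."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine <= t).mean() for t in thresholds])
    far = np.array([(impostor > t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0, thresholds[i]
```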
From Figure 11, we can see that the EER is lowest when k = 6. However, as k increases, the time cost of finger feature extraction also increases. Considering both recognition efficiency and accuracy, k = 6 is selected in the following experiments.

6.1.2. Selection of Neighborhood and Image Division

Apart from the parameter k, the size of the neighborhood n × n that constitutes the structure of the GSLGS operator and the number of image division blocks M are also critical factors for finger trimodal recognition. Since n and M greatly influence the recognition performance of the proposed algorithm, it is important to select suitable values. Here, we perform experiments with different neighborhoods and image block sizes. The EERs for different parameters are listed in Table 2, with the corresponding ROCs shown in Figure 12 and Figure 13.
From Figure 12, it can be clearly seen that the ROC curves vary with n (n = 3, 5, or 7). This shows that different neighborhoods affect finger trimodal recognition performance differently. Observing the curves under the same image division (e.g., M = 6), we find that the EER is lowest when the neighborhood size is 5 × 5 (n = 5). Similarly, when M = 7, 8, or 9, the pixels selected in a 5 × 5 neighborhood for constructing the GSLGS operator also yield the best accuracy. The reason is that a 3 × 3 neighborhood is more sensitive to noise, while a 7 × 7 neighborhood is relatively weak in feature expression capability; a 5 × 5 neighborhood best expresses the relationships among surrounding pixels while remaining insensitive to noise. Hence, n = 5 is the optimal parameter for constructing the proposed GSLGS operator.
The experimental results for different division blocks using the GSLGS with a 5 × 5 neighborhood are shown in Figure 13. We find that the proposed local coding algorithm achieves the best accuracy when the number of division blocks is 8 × 8 (M = 8). This shows that an appropriate image division scheme is beneficial for improving the recognition accuracy. Hence, M = 8 division blocks is the optimal choice for the proposed Gabor-GSLGS approach in finger trimodal recognition.

6.2. Comparison of Unimodal and Multimodal

The proposed local coding algorithm for the finger trimodal images can also be applied to finger single-modal recognition. Here, the finger unimodal and multimodal recognition experiments are performed with n = 5 and M = 8. The experimental results of different modal combinations are listed in Table 3.
From Figure 14, we can see that the EERs of the different modal combinations differ. Notably, the bimodal combinations (FV + FKP and FV + FP) achieve better accuracy than the single modalities, especially the FP and FKP traits.
From Table 3, we find that the trimodal combination has the best recognition accuracy, while the time cost increases with the number of modalities. It is noteworthy that, in single-modal recognition, the FV trait performs better than the FP and FKP traits, indicating that the FV trait is the most dominant of the three modalities.
Overall, these results show that multimodal fusion recognition performs better than single-modal recognition. The reason is that the multimodal combination can make full use of the discriminative power of the different modalities, which complement each other in fusion recognition. However, the computational efficiency of multimodal recognition can still be improved.

6.3. Comparison of Different Methods

In order to further evaluate the proposed local coding method, we compare it with some common feature extraction methods (LBP [16], GLBP [20], LLBP [17], SLGS [23], and MOW-LGS [24]). The ROCs are plotted in Figure 15, and the simulation results are listed in Table 4.
From Figure 15, we observe that the EER of the proposed approach is the lowest among these feature extraction methods. Hence, feature expression based on the proposed coding approach can effectively address the problems of illumination and finger posture variation in finger trimodal fusion recognition.
From Table 4, we can clearly see that the proposed local coding algorithm not only produces the best recognition accuracy, but also has the lowest feature extraction cost compared with the other methods. This shows that our method is more robust for finger feature representation.

7. Conclusions

In this paper, a posture correction approach was first designed to reduce finger pose variation. To address the sensitivity of feature expression methods to illumination variation and posture rotation, a novel local coding algorithm was proposed for finger trimodal fusion recognition. On the one hand, the Gabor filter can, to some extent, effectively reduce the influence of illumination and noise in an image. On the other hand, the posture correction method and the local coding method address the problem of finger posture variation. The proposed Gabor-GSLGS algorithm makes full use of the texture features among surrounding pixels in multiple orientations. Furthermore, the proposed method assigns the same weights to symmetric pixels, which improves the balance of the feature representation of finger images. The experimental results showed that our method can improve the accuracy and computational efficiency of finger trimodal fusion recognition.
As part of our future work, we will apply the proposed local coding algorithm to other public biometric databases. Moreover, we will focus on reducing the dimensionality of the feature vector and improving the efficiency of finger multimodal fusion recognition. At the same time, we will aim to develop a more robust and effective fusion method that integrates multiple modal features for personal identification.

Author Contributions

S.L. and H.Z. conceived and designed the experiments; S.L. performed the experiments and analyzed the data; S.L., H.Z., Y.S. and J.Y. wrote the paper.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61806208, No. 61502498) and the Fundamental Research Funds for the Central Universities (No. 3122017001).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jain, A.K.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 4–20. [Google Scholar] [CrossRef]
  2. Yang, J.F.; Zhang, X. Feature-level fusion of fingerprint and finger-vein for personal identification. Pattern Recognit. Lett. 2012, 33, 623–628. [Google Scholar] [CrossRef]
  3. Yang, J.F.; Zhong, Z.; Jia, G.M.; Li, Y.N. Spatial circular granulation method based on multimodal finger feature. Int. J. Electr. Comput. Eng. 2016, 2016, 1–7. [Google Scholar] [CrossRef]
  4. Zhang, L.; Zhang, L.; Zhang, D. Finger-knuckle-print: A new biometric identifier. In Proceedings of the International Conference on Image Processing (ICIP2009), Cairo, Egypt, 7–11 November 2009; pp. 1981–1984. [Google Scholar]
  5. Evangelin, L.N.; Fred, A.L. Feature level fusion approach for personal authentication in multimodal biometrics. In Proceedings of the IEEE 2017 3rd International Conference on Science Technology Engineering & Management (ICONSTEM), Chennai, India, 23–24 March 2017; pp. 148–151. [Google Scholar]
  6. Peng, J.J.; Li, Y.N.; Li, R.R.; Jia, G.M.; Yang, J.F. Multimodal finger feature fusion and recognition based on delaunay triangular granulation. Comput. Inf. Sci. Eng. 2014, 484, 303–310. [Google Scholar]
  7. Yang, J.F.; Shi, Y.H. Finger-vein ROI localization and vein ridge enhancement. Pattern Recognit. Lett. 2012, 33, 1569–1579. [Google Scholar] [CrossRef]
  8. Yang, J.F.; Wei, J.Z.; Shi, Y.H. Accurate ROI localization and hierarchical hyper-sphere model for finger-vein recognition. Neurocomputing 2018, 328, 171–181. [Google Scholar] [CrossRef]
  9. Xin, Y.; Kong, L.; Liu, Z.; Wang, C.; Xu, X. Multimodal feature-level fusion for biometrics identification System on IoMT platform. IEEE Access 2018, 6, 21418–21426. [Google Scholar] [CrossRef]
  10. Yang, W.; Song, W.; Hu, J.; Zhang, G.; Valli, C. A Fingerprint and Finger-vein Based Cancelable Multi-biometric System. Pattern Recognit. 2017, 78, 242–251. [Google Scholar] [CrossRef]
  11. Li, S.Y.; Zhang, H.G.; Jia, G.M.; Yang, J.F. Finger Vein Recognition Based on Weighted Graph Structural Feature Encoding. In Proceedings of the 13th Chinese Conference on Biometric Recognition, Xinjiang, China, 11–12 August 2018; pp. 29–37. [Google Scholar]
  12. Yang, J.F.; Zhang, B.; Shi, Y. Scattering Removal for Finger-Vein Image Restoration. Sensors 2012, 12, 3627–3640. [Google Scholar] [CrossRef] [PubMed]
  13. Yang, J.F.; Shi, Y.H. Finger-vein Network Enhancement and Segmentation. Pattern Anal. Appl. 2014, 17, 783–797. [Google Scholar] [CrossRef]
  14. Liu, H.; Ji, R.; Wu, Y.; Huang, F.; Zhang, B. Cross-Modality Binary Code Learning via Fusion Similarity Hashing. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6345–6353. [Google Scholar]
  15. Kong, W.K.; Zhang, D. Competitive coding scheme for palmprint verification. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR2004), Cambridge, UK, 23–26 August 2004; pp. 520–523. [Google Scholar]
  16. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Gray Scale and Rotation Invariant Texture Classification with Local Binary Patterns. In Proceedings of the 2000 6th European Conference on Computer Vision (ECCV 2000), Dublin, Ireland, 26 June–1 July 2000. [Google Scholar]
  17. Rosdi, B.A.; Shing, C.W.; Suandi, S.A. Finger Vein Recognition Using Local Line Binary Pattern. Sensors 2011, 11, 11357–11371. [Google Scholar] [CrossRef]
  18. Lu, Y.; Yoon, S.; Xie, S.; Yang, J.C.; Wang, Z.H.; Park, D.S. Finger Vein Recognition Using Generalized Local Line Binary Pattern. KSII Trans. Internet Inf. Syst. 2014, 8, 1766–1784. [Google Scholar]
  19. Meng, X.; Yang, G.; Yin, Y.; Xiao, R. Finger Vein Recognition Based on Local Directional Code. Sensors 2012, 12, 14937–14952. [Google Scholar] [CrossRef]
  20. Peng, J.; Li, Q.; Abd El-Latif, A.A.; Wang, N.; Niu, X.M. Finger Vein Recognition with Gabor Wavelets and Local Binary Patterns. IEICE Trans. Inf. Syst. 2013, E96.D, 1886–1889. [Google Scholar] [CrossRef]
  21. Han, W.Y.; Lee, J.C. Palm vein recognition using adaptive Gabor filter. Expert Syst. Appl. 2012, 39, 13225–13234. [Google Scholar] [CrossRef]
  22. Abusham, E.E.A.; Bashir, H.K. Face Recognition Using Local Graph Structure (LGS). In Proceedings of the 14th International Conference on Human-Computer Interaction. Interaction Techniques & Environments (HCI 2011), Orlando, FL, USA, 9–14 July 2011; pp. 169–175. [Google Scholar]
  23. Abdullah, M.F.A.; Sayeed, M.S.; Muthu, K.S. Face recognition with Symmetric Local Graph Structure (SLGS). Expert Syst. Appl. 2014, 41, 6131–6137. [Google Scholar] [CrossRef]
  24. Dong, S.; Yang, J.C.; Chen, Y.; Wang, C.; Zhang, X.Y.; Park, D.S. Finger Vein Recognition Based on Multi-Orientation Weighted Symmetric Local Graph Structure. KSII Trans. Internet Inf. Syst. 2015, 9, 4126–4142. [Google Scholar]
  25. Yang, J.; Zhang, L.; Wang, Y.; Sun, W.H.; Park, D.S. Face Recognition based on Weber Symmetrical Local Graph Structure. KSII Trans. Internet Inf. Syst. 2018, 12, 1748–1759. [Google Scholar]
  26. Yang, J.C.; Zhang, L.C.; Li, M.; Zhao, T.T.; Chen, Y.R.; Liu, J.Z.; Liu, N. Face Recognition with Facial Occlusion Based on Local Cycle Graph Structure Operator. IntechOpen 2018, 4, 597–609. [Google Scholar]
  27. Zhang, H.G.; Li, S.Y.; Shi, Y.H.; Yang, J.F. Graph Fusion for Finger Multimodal Biometrics. IEEE Access 2019, 7, 28607–28615. [Google Scholar] [CrossRef]
  28. Jia, W.; Hu, R.X.; Lei, Y.K.; Zhao, Y.; Gui, J. Histogram of Oriented Lines for Palmprint Recognition. IEEE Trans. Syst. Man Cybern. 2014, 44, 385–395. [Google Scholar] [CrossRef]
  29. Luo, Y.T.; Zhao, L.Y.; Zhang, B.B.; Jia, W.; Xue, F.; Lu, J.T.; Zhu, Y.H.; Xu, B.Q. Local line directional pattern for palmprint recognition. Pattern Recognit. 2016, 50, 26–44. [Google Scholar] [CrossRef]
  30. Kekre, H.B.; Bharadi, V.A. Fingerprint’s core point detection using orientation field. In Proceedings of the International Conference on Advances in Computing, Control and Telecommunication Technologies (ACT 2009), Kerala, India, 28–29 December 2009; pp. 150–152. [Google Scholar]
  31. Zhang, L.; Zhang, L.; Zhang, D.; Zhu, H.L. Online finger-knuckle-print verification for personal authentication. Pattern Recognit. 2010, 43, 2560–2571. [Google Scholar] [CrossRef]
  32. Yang, J.F.; Shi, Y.H.; Jia, G.M. Finger-vein Image Matching based on Adaptive Curve Transformation. Pattern Recognit. 2017, 66, 34–43. [Google Scholar] [CrossRef]
  33. Yang, J.F.; Yang, J.L. Multi-Channel Gabor Filter Design for Finger-Vein Image Enhancement. In Proceedings of the Fifth International Conference on Image and Graphics, Xi’an, China, 20–23 September 2009; pp. 87–91. [Google Scholar]
  34. Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition from Palm Depth Images for Gloved Hands. IEEE Trans. Hum. Mach. Syst. 2015, 45, 1–6. [Google Scholar] [CrossRef]
Figure 1. Finger multimodal recognition process based on the Gabor generalized symmetric local graph structure (Gabor-GSLGS).
Figure 2. A finger trimodal image acquisition system. (a) the imaging schematic diagram; (b) a homemade image capture device; (c) a system interface of image acquisition.
Figure 3. Computing finger posture angle. (a) the edge line of the finger; (b) the coordinate extraction of the finger edge line; (c) finger rotation direction extraction.
Figure 4. Some corrected image samples after rotation.
Figure 5. The finger trimodal region of interest (ROI) images of four fingers. (a) fingerprint (FP) ROI samples; (b) finger-vein (FV) ROI samples; (c) finger-knuckle-print (FKP) ROI samples.
Figure 6. The enhanced images of the three finger modalities.
Figure 7. The design of the GSLGS operator (0° direction, n = 3).
Figure 8. The GSLGS operator (k = 4).
Figure 9. The coding process of the GSLGS operator at 0° and 45°.
Figure 10. The fusion of finger trimodal features.
Figure 11. Receiver operating characteristic (ROC) curves for different k.
Figure 12. ROC curves of different neighborhoods for M = 6, 7, 8, 9.
Figure 13. ROC curves of different division blocks M with a 5 × 5 neighborhood.
Figure 14. Comparison results of different modal combinations. (a) ROC of unimodal recognition; (b) ROC of multimodal recognition.
Figure 15. Comparisons of different methods.
Table 1. Comparisons on equal error rate (EER) (%) and time cost (single individual).

k               4       6       8
EER (%)         0.042   0.029   0.038
Time cost (s)   0.012   0.017   0.031
Table 2. Comparisons on EER (%) for different parameters.

Neighborhood \ Blocks   6 × 6   7 × 7   8 × 8   9 × 9
3 × 3                   0.22    0.16    0.086   0.37
5 × 5                   0.08    0.029   0.022   0.024
7 × 7                   0.14    0.075   0.056   0.082
Table 3. Comparisons on EER (%) and time cost (single individual).

Modal           FP      FV      FKP     FV + FKP   FV + FP   FKP + FP   FP + FV + FKP
EER (%)         4.26    0.19    0.40    0.20       0.16      0.46       0.022
Time cost (s)   0.015   0.010   0.015   0.021      0.019     0.018      0.029
Table 4. Comparisons on EER (%) and time cost (single individual).

Methods         LBP     GLBP    LLBP    SLGS    MOW-LGS   Our Method
EER (%)         1.60    0.58    0.74    0.46    0.42      0.022
Time cost (s)   0.129   0.186   0.261   0.110   0.192     0.029
