Sensors 2011, 11(12), 11357-11371; doi:10.3390/s111211357

Article
Finger Vein Recognition Using Local Line Binary Pattern
Bakhtiar Affendi Rosdi, Chai Wuh Shing and Shahrel Azmin Suandi
Intelligent Biometric Group, School of Electrical & Electronic Engineering, USM Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang, Malaysia; E-Mails: seng177@gmail.com (C.W.S.); shahrel@eng.usm.my (S.A.S.)
Author to whom correspondence should be addressed; E-Mail: eebakhtiar@eng.usm.my.
Received: 1 November 2011; in revised form: 28 November 2011 / Accepted: 29 November 2011 /
Published: 30 November 2011

Abstract

In this paper, a personal verification method using finger veins is presented. Finger veins can be considered more secure than other hand-based biometric traits such as fingerprints and palm prints because the features are inside the human body. In the proposed method, a new texture descriptor called local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in local binary pattern (LBP), where it is a square. Experimental results show that the proposed method using LLBP performs better than the previous methods using LBP and local derivative pattern (LDP).
Keywords:
finger vein; local binary pattern; local line binary pattern; local derivative pattern; biometrics; hand-based biometrics

1. Introduction

Nowadays, personal verification based on biometric technology is used in many kinds of applications such as door access control, ATM transactions and border crossing controls. Biometrics is the technology of verifying people using human physiological or behavioral features such as the fingerprint, iris, face and voice [1]. Because a hand contains a lot of information and that information is easy to retrieve, hand-based biometrics such as fingerprint [2] and palm print [3] recognition are the most popular biometric technologies.

Fingerprint recognition is the most mature hand-based biometric method and has been used in many applications for years [2]. However, fingerprint-based biometric systems are vulnerable to forgery because fingerprints are easily exposed to others. In addition, the condition of the finger's surface, such as sweat and dryness, can prevent a clear fingerprint pattern from being obtained [4], which can degrade the system's performance. Finger knuckle print [5] and palm print [3] based biometric systems are likewise easy to deceive with replicas, since the features are external to the human body.

To overcome the limitations of current hand-based biometric systems, finger vein recognition has been researched [6]. The authors of [7] showed that each finger has a unique vein pattern, so it can be used for personal verification. A finger vein based biometric system has several benefits when compared with other hand-based biometric methods. First, the finger vein pattern is hard to replicate since it is an internal feature. In addition, the quality of the captured vein pattern is not easily influenced by skin conditions. Moreover, compared with a palm vein based verification system [8], the size of the device can be made much smaller. Lastly, finger vein recognition does not require contact between the finger and the sensor, which is desirable from a hygiene viewpoint.

Most of the currently available approaches to finger vein recognition [6,9,10] share a similar feature extraction method that uses features from the segmented blood vessel network for recognition. However, due to optical blurring and skin scattering, finger vein images are not always clear and can show irregular shading [11]. Therefore, segmentation errors can occur during feature extraction because of the low quality of the finger vein images. When the vessel networks are not segmented properly, recognition accuracy may be degraded.

To solve this problem, the authors of [12] proposed a method for finger vein recognition using local binary pattern (LBP) [13] and local derivative pattern (LDP) [14]. In their method, the captured finger vein images are enhanced by a modified Gaussian high-pass filter, and then LBP and LDP are applied to extract binary codes from the enhanced images. The similarity between the extracted and enrolled binary codes is measured by the Hamming distance. Although the recognition accuracy with LDP as the feature extraction method is good, its processing time is 2.5 times longer than that of LBP. Moreover, the memory needed to store the binary codes of LDP is four times larger than for LBP.

Besides LDP, a number of LBP variants have been proposed. One of them, called local ternary pattern (LTP), was proposed in [15]; LTP uses three-value encoding instead of the two-value encoding of the original LBP. Another variant, which uses five-value encoding and is called local quinary pattern (LQP), was proposed in [16]. Recently, Petpon and Srisuk [17] proposed a new variant of LBP called local line binary pattern (LLBP) and applied it to face recognition. They demonstrated that it produces higher recognition rates than LBP on two benchmark face databases.

The problems of LDP and the advantages of LLBP have motivated us to use LLBP as the feature extraction technique in finger vein recognition. The main difference between LLBP and LBP/LDP is that its neighbourhood is a straight line of length N pixels, unlike the square neighbourhood of LBP/LDP. We believe that the straight-line shape of LLBP is better suited to capturing the pattern inside a finger vein image. The finger vein pattern is clearer in the image processed by LLBP than in the image processed by LBP, as shown in Figure 1. It should be noted that, for viewing convenience, the binary codes computed for each pixel in Figure 1 (left) using LBP and LLBP are converted to decimal numbers, which are then normalized to values ranging from 0 to 255 to represent grayscale values. The resulting images are shown in Figure 1 (middle) and Figure 1 (right) for LBP and LLBP, respectively.

2. Proposed Method

Figure 2 shows the block diagram of the proposed method for finger vein recognition. The method consists of four main stages: image acquisition, preprocessing, feature extraction by Local Line Binary Pattern (LLBP) and the calculation of matching scores by Hamming distance.

2.1. Finger Vein Image Acquisition

To capture the finger vein images, a special imaging device was constructed, as shown in Figure 3 [18]. The device consists of a modified camera (Logitech V-UAV35) and an array of infra-red LEDs (OSRAM SFH485, wavelength = 880 nm). The camera is not an infra-red (IR) sensitive device because it contains an IR blocking filter. Hence, the IR blocking filter is replaced by a negative film, which acts as an IR pass filter. To reduce the finger alignment problem, especially finger rotation, an open window of fixed size (2.5 cm × 2.5 cm) is provided for the user to place their finger on during capture.

2.2. Preprocessing

There are four major steps in the preprocessing stage, which are ROI extraction, image resizing, image enhancement and translation alignment.

The original image is acquired with an unwanted black background. A simple algorithm is developed to extract the finger vein image from the background in three major steps. First, the captured image is binarized [19] using a threshold determined by Otsu's method [20]. Then, the center of the object, which is the finger, is obtained [21]. Finally, the image is cropped to 480 × 160 pixels based on the obtained center of the finger. Figure 4 (a–c) shows the captured, binarized, and cropped images for a finger at different capture times, respectively. As shown in Figure 4 (b), the centers of the objects for the same finger captured at different times are different. This is because our device can only reduce the finger rotation problem, not the horizontal and vertical displacement problems.
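The three steps above can be sketched in Python as follows. This is a minimal illustration, not the authors' MATLAB code (which used im2bw, graythresh and a File Exchange centroid routine); the function names and the clamping of the crop window are our own:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the gray level that maximizes the between-class
    variance of the foreground/background split (expects a uint8 image)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_total = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        mu0 = sum0 / w0                            # mean of the background class
        mu1 = (sum_total - sum0) / (total - w0)    # mean of the foreground class
        var_between = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def extract_roi(img, out_h=160, out_w=480):
    """Binarize, locate the finger centroid, and crop a fixed-size ROI."""
    mask = img > otsu_threshold(img)
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    # Clamp the window so the crop stays inside the image.
    y0 = min(max(cy - out_h // 2, 0), img.shape[0] - out_h)
    x0 = min(max(cx - out_w // 2, 0), img.shape[1] - out_w)
    return img[y0:y0 + out_h, x0:x0 + out_w]
```

The crop is centered on the binary mask's centroid, which is why horizontal and vertical displacement between captures is only reduced, not eliminated.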

To reduce the time complexity and suppress pixel noise, the cropped image is resized to a smaller size: the 480 × 160 pixel crop is reduced to 192 × 64 pixels, a resize ratio of 0.4.
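The resizing step can be sketched as follows; `scipy.ndimage.zoom` is an assumed stand-in for whatever resampling routine the authors' MATLAB pipeline used:

```python
import numpy as np
from scipy.ndimage import zoom

# The 480 x 160 pixel crop (stored as a rows x cols array) is scaled by 0.4.
cropped = np.random.default_rng(0).random((160, 480))
resized = zoom(cropped, 0.4)   # 64 x 192 array, i.e., a 192 x 64 pixel image
```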

In general, the finger vein image has low contrast due to the variation of the finger profile. As in [12], a symmetrical modified Gaussian high-pass filter is used to enhance the contrast of the finger vein image. The filter has the following formula:

H(x, y) = a(1 − exp(−D²(x, y) / (2D₀²))) + b    (1)
Here, D(x, y) is defined as follows:
D(x, y) = [(x − x₀)² + (y − y₀)²]^(1/2)    (2)
where D(x, y) is the distance between a relative position and the center, x and y are positions relative to the center (x₀, y₀) of the convolution mask, D₀ controls the width of the Gaussian, and a and b are adjustment variables that change the amplitude and DC level of the filtering mask. In this paper, an experiment is designed to determine the optimum size of the filtering mask and the values of the adjustment variables a and b for this particular finger vein recognition task. Examples of finger vein images and their convolved results are shown in Figure 5. In these examples, the size of the filtering mask is 9 and the variables a and b are 12.53 and −4, respectively.
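A sketch of how such a mask can be built and applied, assuming the text's example values (size 9, a = 12.53, b = −4); the default for D₀ is our own assumption, since the paper does not state it:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_highpass_mask(size=9, a=12.53, b=-4.0, d0=None):
    """Modified Gaussian high-pass mask, Equation (1):
    H(x, y) = a * (1 - exp(-D^2(x, y) / (2 * D0^2))) + b."""
    if d0 is None:
        d0 = size / 4.0            # assumed default; the paper does not give D0
    c = size // 2
    y, x = np.mgrid[-c:c + 1, -c:c + 1]   # positions relative to the mask centre
    d2 = x ** 2 + y ** 2                  # D^2(x, y) from Equation (2)
    return a * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2))) + b

# Enhancement is a 2-D convolution of the resized image with the mask.
mask = gaussian_highpass_mask()
img = np.random.default_rng(1).random((64, 192))
enhanced = convolve2d(img, mask, mode='same', boundary='symm')
```

At the mask center H equals b (the DC level), rising toward a + b at the edges, which is what makes the filter high-pass.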

Although our proposed ROI extraction algorithm can reduce the horizontal and vertical displacement between extracted images, the alignment problems cannot be totally eliminated. In contrast to [12] and [22], these displacement problems are solved by estimating the translation parameters between two enhanced images using the phase-only correlation (POC) function [4,5], and then cropping the common regions of the two images based on the estimated translation parameters. The translation parameters (tx, ty) between two enhanced images f and g are estimated from the peak location of their POC function. Then, f and g are aligned based on (tx, ty) and the common regions fc and gc are extracted. It should be noted that the two enhanced images are only aligned when tx and ty are less than 20 and 10, respectively. Generally, when tx and ty are larger than those values, the two images are most likely from two different fingers.
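A minimal sketch of the POC shift estimation (our own illustration of the standard POC technique, not the authors' implementation):

```python
import numpy as np

def poc_shift(f, g):
    """Estimate the translation (ty, tx) that maps image f onto image g from
    the peak of the phase-only correlation (POC) function."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    r = np.conj(F) * G
    r /= np.abs(r) + 1e-12                 # keep only the phase spectrum
    poc = np.real(np.fft.ifft2(r))         # POC surface; its peak marks the shift
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # Convert wrapped peak coordinates into signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, poc.shape))
```

Once (tx, ty) is known, the overlapping region of the two images can be cropped, and the alignment is applied only when the shifts fall below the thresholds stated above.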

2.3. Feature Extraction

In [12], LDP and LBP are used to extract binary codes from the enhanced images. Although LDP performs better than LBP, its computation time is about 2.5 times longer, and its code length is four times longer than that of LBP. Computation time and template size are two important factors in designing a biometric system. To overcome these problems, the binary codes in this work are extracted from the enhanced images using a new texture descriptor called Local Line Binary Pattern (LLBP) [17]. One of the benefits of the LLBP operator is that it emphasizes changes in image intensity such as vertices, edges and corners.

Motivated by LBP, Petpon and Srisuk [17] proposed the LLBP operator for face recognition. The operator consists of two components: a horizontal component and a vertical component. The magnitude of LLBP is obtained by combining the line binary codes of both components. The LLBP operator is illustrated in Figure 6, and its mathematical definitions are given in Equations (4)–(6). LLBPh, LLBPv and LLBPm are the LLBP in the horizontal direction, the LLBP in the vertical direction, and its magnitude, respectively. N is the length of the line in pixels, hn is the nth pixel along the horizontal line, vn is the nth pixel along the vertical line, c = ⌈N/2⌉ is the position of the center pixel hc on the horizontal line and vc on the vertical line, and s(·) is the thresholding function defined in Equation (3).

Employing Equations (3) and (4), the horizontal component of LLBP (LLBPh) extracts a binary code of N − 1 bits for each pixel. The same number of bits is extracted by the vertical component of LLBP (LLBPv) using Equations (3) and (5). Consequently, by concatenating the binary codes from LLBPh and LLBPv, the total binary code of LLBP for each pixel is 2(N − 1) bits. In Figure 6, the binary sequence for the horizontal (vertical) component reads from the left (top) as 010111001111(2) (101001011101(2)). Hence, the binary code for LLBP is 010111001111101001011101(2).

s(x) = { 1, x ≥ 0
       { 0, x < 0    (3)

LLBPh(N, c)(x, y) = Σ_{n=1}^{c−1} s(hn − hc) · 2^(c−n−1) + Σ_{n=c+1}^{N} s(hn − hc) · 2^(n−c−1)    (4)

LLBPv(N, c)(x, y) = Σ_{n=1}^{c−1} s(vn − vc) · 2^(c−n−1) + Σ_{n=c+1}^{N} s(vn − vc) · 2^(n−c−1)    (5)

LLBPm = sqrt(LLBPh² + LLBPv²)    (6)
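Equations (3)–(6) can be implemented directly. The per-pixel loop below is a minimal sketch kept deliberately simple for readability (a practical implementation would vectorize it with shifted array slices); the function name is our own:

```python
import numpy as np

def llbp_codes(img, n=21):
    """Compute LLBPh, LLBPv (Equations (4) and (5)) and the magnitude LLBPm
    (Equation (6)) for every pixel far enough from the image border.
    n is the (odd) line length N; the centre sits at position c = ceil(N/2)."""
    img = img.astype(np.int64)
    r = n // 2                         # c - 1 neighbours on each side of the centre
    height, width = img.shape
    out_h = np.zeros((height, width))
    out_v = np.zeros((height, width))
    s = lambda x: 1 if x >= 0 else 0   # thresholding function, Equation (3)
    for y in range(r, height - r):
        for x in range(r, width - r):
            hc = img[y, x]             # centre pixel (hc horizontally, vc vertically)
            code_h = code_v = 0
            for i, off in enumerate(range(-r, 0)):      # n = 1..c-1, weight 2^(c-n-1)
                code_h += s(img[y, x + off] - hc) << (r - 1 - i)
                code_v += s(img[y + off, x] - hc) << (r - 1 - i)
            for i, off in enumerate(range(1, r + 1)):   # n = c+1..N, weight 2^(n-c-1)
                code_h += s(img[y, x + off] - hc) << i
                code_v += s(img[y + off, x] - hc) << i
            out_h[y, x] = code_h
            out_v[y, x] = code_v
    return out_h, out_v, np.sqrt(out_h ** 2 + out_v ** 2)
```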

2.4. Matching

As in [12], the similarity between the extracted binary codes and the enrolled codes is measured using Hamming Distance (HD). The formula is given in Equation (7).

HD = ||codeA ⊗ codeB|| / CodeLength    (7)
where ⊗ is the Boolean exclusive-OR operator applied between corresponding pairs of bits, codeA and codeB are the extracted and enrolled binary codes, respectively, and CodeLength is the total number of bits of the enrolled codes. The HD value ranges from 0 to 1: it is close to 0 when the two codes are from the same finger and close to 1 when they are from two different fingers.
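Equation (7) amounts to counting disagreeing bits and normalizing, which a few lines of Python can illustrate:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two equal-length binary codes,
    Equation (7): XOR the codes, count the set bits, divide by the length."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    return np.count_nonzero(code_a ^ code_b) / code_a.size
```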

3. Experimental Results

Two major experiments were conducted using our own database [18]. First, an experiment was conducted to determine the optimum size of the filtering mask for the modified Gaussian high-pass filter and the optimum length of the line N for the LLBP and LLBPv operators. Then, the performance of the LLBP and LLBPv operators in personal verification was compared with the previously proposed methods [12] that utilize LBP and LDP. The number of sampling points and the radius for the LBP used in [12] are 8 and 1, respectively (denoted as LBP (8, 1)). In this paper, the performance of our proposed method is also compared to LBP with 8 sampling points and a radius of 2 (denoted as LBP (8, 2)). Moreover, we compare our method with two further variants of LBP, namely LDiP [23] and LTP [15]. Performance is evaluated using the Equal Error Rate (EER), as in [10,12,22]. The EER is defined as the error rate at which the False Acceptance Rate (FAR) and the False Rejection Rate (FRR) are equal. The FAR is the rate at which un-enrolled finger vein images are accepted as enrolled images, and the FRR is the rate at which enrolled finger vein images are rejected as un-enrolled images.
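The EER definition above can be made concrete with a small sketch (our own illustration, not the authors' evaluation code) that sweeps a decision threshold over Hamming-distance scores:

```python
import numpy as np

def compute_eer(genuine, imposter):
    """EER estimate from Hamming-distance scores (lower score = more similar).
    Sweep every observed score as a threshold; FAR is the fraction of imposter
    pairs accepted, FRR the fraction of genuine pairs rejected. Return the
    average of FAR and FRR at the threshold where they are closest."""
    genuine, imposter = np.asarray(genuine), np.asarray(imposter)
    best_gap, eer = np.inf, None
    for t in np.sort(np.concatenate([genuine, imposter])):
        far = np.mean(imposter <= t)
        frr = np.mean(genuine > t)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```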

3.1. Database of the Finger Vein Images

The finger vein images used in this study were collected using the capturing device described in Section 2.1. The images were collected from 51 male and female volunteers, who are staff and students of Universiti Sains Malaysia. The age of the subjects ranged from 21 to 56 years. Each subject provided 10 images of four fingers: the left index, left middle, right index and right middle fingers. Consequently, there are 51 sets of four fingers with 10 images for each finger, and the database contains 2,040 images from 204 different fingers in total. The spatial and depth resolutions of the captured finger vein images are 640 × 480 pixels and 256 gray levels, respectively. Figure 7 shows examples of finger vein images captured using our device.

3.2. Determination of Parameters

Two parameters affect the verification accuracy of the proposed method: the size of the filtering mask (S) for the modified Gaussian high-pass filter and the length of the line (N) for the LLBP and LLBPv operators. An experiment was conducted to determine the optimum values of these two parameters. A sub-dataset containing 200 finger vein images from 20 different fingers was used. The tuning criterion was that the parameters leading to the lowest EER would be chosen. For a fair comparison, the optimum size of the filtering mask for LBP (8, 1), LBP (8, 2), LDP, LDiP and LTP was determined in the same manner.
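The tuning procedure is an exhaustive grid search, which can be sketched as follows; `eer_fn` is a hypothetical placeholder for running the full verification experiment on the sub-dataset, and the toy surface below is invented purely to exercise the search:

```python
import itertools

def tune_parameters(s_values, n_values, eer_fn):
    """Evaluate every (S, N) pair and keep the one with the lowest EER."""
    return min(itertools.product(s_values, n_values), key=lambda p: eer_fn(*p))

# Toy illustration with a made-up EER surface whose minimum is at S = 15, N = 21.
best = tune_parameters([11, 13, 15, 17, 19], [17, 19, 21, 23, 25],
                       lambda s, n: abs(s - 15) + abs(n - 21))
```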

Tables 1 and 2 show the EER for various values of N and S for the LLBP and LLBPv operators, respectively. The optimal matching performance for LLBP is observed when the length of the line (N) is 21 and the size of the filtering mask (S) is 15. For LLBPv, the optimum values of N and S are 17 and 15, respectively. Figures 8–12 show the EER for various values of S for the LBP (8, 1), LBP (8, 2), LDP, LDiP and LTP operators, respectively. The optimum sizes of the filtering mask for LBP (8, 1), LBP (8, 2), LDP, LDiP and LTP are 11, 15, 19, 13 and 13, respectively.

3.3. Verification Results

To show the superiority of the local line binary pattern in personal verification, the proposed method was compared with the other variants of LBP [12,15,23]. In the experiment, the optimal parameter values obtained in the previous experiment were employed, and the experiment was conducted on all finger vein images: 204 fingers (51 × 4) with 10 images for each finger. As a result, the number of genuine tests is 9,180 (204 × C(10, 2)) and the number of imposter tests is 2,070,600 (C(2040, 2) − 9,180). The results in terms of EER are summarized in Figure 13, from which it is observed that the proposed method performs significantly better than the other LBP variants. The verification accuracy of LLBPv is slightly better than that of LLBP. As shown in Figure 14, the vein is much clearer in the image processed by LLBPv than in the images processed by the other LBP variants. We believe that the vertical component of the local line binary pattern can extract significant and important features by emphasizing the vein lines. It should be noted that, for viewing convenience, the binary codes computed for each pixel in Figure 14 (a) using the LBP variants are converted to decimal numbers, which are then normalized to values ranging from 0 to 255 to represent grayscale values. The resulting images are shown in Figure 14 (b–i) for LBP (8, 1), LBP (8, 2), LDP, LDiP, LTP, LLBP, LLBPh, and LLBPv, respectively.
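The genuine and imposter test counts follow directly from the pairing arithmetic, which a two-line check reproduces:

```python
from math import comb

# 204 fingers with 10 images each: genuine tests pair images of the same
# finger; imposter tests are all remaining image pairs.
genuine_tests = 204 * comb(10, 2)
imposter_tests = comb(204 * 10, 2) - genuine_tests
```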

3.4. Speed and Memory

All experiments were performed in MATLAB on an Intel Core i5 processor with 4 GB of RAM. The processing time and binary code length of the proposed method were compared with those of the other LBP variants. The results in Table 3 show that the processing time of LLBPv is significantly shorter than that of the other LBP variants. As for the binary code length, LLBPv produces a much shorter code than the previously proposed method using LDP.

4. Conclusions

Extraction of robust features from finger vein images is an important issue in a finger vein based biometric system. Instead of using LBP and LDP, we propose to use the local line binary pattern (LLBP) as the feature extraction technique. The straight-line shape of LLBP can extract robust features from images with unclear veins. Experimental results on images of 204 fingers captured with our own prototype device indicate that the equal error rate (EER) for LLBP is significantly lower than for LBP and LDP. Moreover, the feature extraction time of LLBP is shorter than that of the other LBP variants. We also found that the vertical component of LLBP is the most suitable for finger vein recognition. In the future, we plan to fuse the features from the finger vein images with the shape of the finger using various fusion techniques. We hope that the EER can be further reduced by the fusion of these two features.

Acknowledgments

This work is supported by Universiti Sains Malaysia Research University Grant No. 1001/PELECT/814116 and Short Term Grant No. 304/PELECT/6039021.

References and Notes

  1. Jain, A.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circ. Syst. Video Tech 2004, 14, 4–20.
  2. Jain, A.K.; Feng, J.; Nandakumar, K. Fingerprint matching. Computer 2010, 43, 36–44.
  3. Guo, Z.; Zhang, D.; Zhang, L.; Zuo, W. Palmprint verification using binary orientation co-occurrence vector. Patt. Recogn. Lett 2009, 30, 1219–1227, doi:10.1016/j.patrec.2009.05.010.
  4. Ito, K.; Nakajima, H.; Kobayashi, K.; Aoki, T.; Higuchi, T. A fingerprint matching algorithm using phase-only correlation. IEICE Trans. Fundament. Electron. Commun. Comput. Sci 2004, E87-A, 682–691.
  5. Zhang, L.; Zhang, L.; Zhang, D.; Zhu, H. Ensemble of local and global information for finger-knuckle-print recognition. Patt. Recogn 2011, 44, 1990–1998, doi:10.1016/j.patcog.2010.06.007.
  6. Miura, N.; Nagasaka, A.; Miyatake, T. Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vision Appl 2004, 15, 194–203, doi:10.1007/s00138-004-0149-2.
  7. Yanagawa, T.; Aoki, S.; Ohyama, T. Human finger vein images are diverse and its patterns are useful for personal identification. MHF Preprint Ser 2007, 12, 1–7.
  8. Zhang, Y.B.; Li, Q.; You, J.; Bhattacharya, P. Palm Vein Extraction and Matching for Personal Authentication. Proceedings of the 9th International Conference on Advances in Visual Information Systems, Shanghai, China, 28–29 June 2007; pp. 154–164.
  9. Yu, C.B.; Qin, H.F.; Zhang, L.; Cui, Y.Z. Finger-vein image recognition combining modified hausdorff distance with minutiae feature matching. J. Biomed. Sci. Eng 2009, 2, 261–272, doi:10.4236/jbise.2009.24040.
  10. Song, W.; Kim, T.; Kim, H.C.; Choi, J.H.; Kong, H.J.; Lee, S.R. A finger-vein verification system using mean curvature. Patt. Recogn. Lett 2011, 32, 1541–1547, doi:10.1016/j.patrec.2011.04.021.
  11. Lee, E.C.; Park, K.R. Image restoration of skin scattering and optical blurring for finger vein recognition. Opt. Lasers Eng 2011, 49, 816–828, doi:10.1016/j.optlaseng.2011.03.004.
  12. Lee, E.C.; Jung, H.; Kim, D. New finger biometric method using near infrared imaging. Sensors 2011, 11, 2319–2333, doi:10.3390/s110302319.
  13. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Patt. Anal. Mach. Intell 2002, 24, 971–987, doi:10.1109/TPAMI.2002.1017623.
  14. Zhang, B.; Gao, Y.; Zhao, S.; Liu, J. Local derivative pattern versus local binary pattern: Face recognition with high-order local pattern descriptor. IEEE Trans. Image Process 2010, 19, 533–544, doi:10.1109/TIP.2009.2035882.
  15. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process 2010, 19, 1635–1650, doi:10.1109/TIP.2010.2042645.
  16. Nanni, L.; Lumini, A.; Brahnam, S. Local binary patterns variants as texture descriptors for medical image analysis. Artif. Intell. Med 2010, 49, 117–125, doi:10.1016/j.artmed.2010.02.006.
  17. Petpon, A.; Srisuk, S. Face Recognition with Local Line Binary Pattern. Proceedings of the Fifth International Conference on Image and Graphics, Xi’an, China, 20–23 September 2009; pp. 533–539.
  18. Mahri, N.; Suandi, S.A.; Rosdi, B.A. Finger Vein Recognition Algorithm Using Phase Only Correlation. Proceedings of 2010 International Workshop on Emerging Techniques and Challenges for Hand-Based Biometrics, Istanbul, Turkey, 22 August 2010; pp. 1–6.
  19. This method is implemented using the Matlab 7.0 function im2bw.
  20. This method is implemented using the Matlab 7.0 function graythresh.
  21. The Matlab implementation is available at the Matlab Central. Available online: http://www.mathworks.com/matlabcentral/fileexchange/5457 (accessed on 4 November 2011).
  22. Lee, E.C.; Lee, H.C.; Park, K.R. Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction. Int. J. Imag. Syst. Tech 2009, 19, 179–186, doi:10.1002/ima.20193.
  23. Jabid, T.; Kabir, M.H.; Chae, O. Robust facial expression recognition based on local directional pattern. ETRI J 2010, 32, 784–794, doi:10.4218/etrij.10.1510.0132.
Figure 1. The finger vein image (left), the image after being processed by LBP (middle) and the image after being processed by LLBP (right).

Figure 2. Block diagram of the proposed method.

Figure 3. Finger vein image capturing device.

Figure 4. Example of (a) the captured images, (b) the binarized images with the centers of the objects and (c) the cropped images for a finger at intervals.

Figure 5. The resized images (top) and their enhanced images (bottom).

Figure 6. Example of the LLBP operator.

Figure 7. Examples of the captured finger vein images.

Figure 8. EERs (%) by varying S for LBP (8, 1) based on a sub-dataset of finger vein images.

Figure 9. EERs (%) by varying S for LBP (8, 2) based on a sub-dataset of finger vein images.

Figure 10. EERs (%) by varying S for LDP based on a sub-dataset of finger vein images.

Figure 11. EERs (%) by varying S for LDiP based on a sub-dataset of finger vein images.

Figure 12. EERs (%) by varying S for LTP based on a sub-dataset of finger vein images.

Figure 13. EERs (%) according to various operators based on the whole dataset of finger vein images.

Figure 14. Example of (a) the cropped images and the images after being processed by various texture descriptors ((b) LBP (8, 1), (c) LBP (8, 2), (d) LDP, (e) LDiP, (f) LTP, (g) LLBP, (h) LLBPh, and (i) LLBPv) for three different fingers.
Table 1. EERs (%) by varying N and S for LLBP based on a sub-dataset of finger vein images (N: length of the line; S: size of the filtering mask).

    N \ S     11      13      15      17      19
    17        3.33    2.11    2.11    2.78    2.67
    19        3.22    2.11    2.11    2.56    2.56
    21        3.22    2.11    1.89    2.56    2.56
    23        3.22    2.11    1.89    2.56    2.33
    25        3.22    2.00    1.89    2.44    2.33
Table 2. EERs (%) by varying N and S for LLBPv based on a sub-dataset of finger vein images (N: length of the line; S: size of the filtering mask).

    N \ S     11      13      15      17      19
    13        3.11    2.33    1.89    2.44    2.33
    15        3.11    2.03    1.89    2.33    2.22
    17        3.00    2.00    1.78    2.33    2.22
    19        3.00    2.00    1.78    2.22    2.17
    21        2.89    2.00    1.78    2.22    2.12
Table 3. Comparison of processing time and binary code length.

    Operator           Feature extraction (ms)   Matching (ms)   Total processing time (ms)   Code length (bits)
    LLBPv (N = 17)     29.6                      7.9             37.5                         135,168 (176 × 48 × 16)
    LLBP (N = 21)      59                        8.1             67.1                         302,720 (172 × 44 × 40)
    LBP (8, 1) [12]    65.1                      10.3            75.4                         94,240 (190 × 62 × 8)
    LBP (8, 2)         439.2                     10.3            449.5                        90,240 (188 × 60 × 8)
    LDP [12]           193.1                     11.7            204.8                        360,960 (188 × 60 × 32)
    LDiP [23]          713.3                     10.3            723.6                        94,240 (190 × 62 × 8)
    LTP [15]           82.6                      11              93.6                         188,480 (190 × 62 × 16)