
Double Quantification of Template and Network for Palmprint Recognition

1 Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063, China
2 Department of Computer Engineering, Sejong University, Seoul 05006, Republic of Korea
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(11), 2455; https://doi.org/10.3390/electronics12112455
Submission received: 1 May 2023 / Revised: 25 May 2023 / Accepted: 27 May 2023 / Published: 29 May 2023
(This article belongs to the Special Issue Recent Advances in Security and Privacy for Multimedia Systems)

Abstract

The outputs of a deep hash network (DHN) are binary codes, so a DHN offers high retrieval efficiency in the matching phase and can be used for high-speed palmprint recognition, a promising biometric modality. In this paper, both the templates and the network parameters are quantized for fast and light-weight palmprint recognition. The parameters of the DHN are binarized to compress the network and accelerate inference. To avoid the accuracy degradation caused by quantization, mutual information is leveraged to reduce the ambiguity in Hamming space and obtain a tri-valued hash code as the palmprint template. Kleene logic's tri-valued Hamming distance measures the dissimilarity between palmprint templates. Ablation experiments evaluate the binarization of the network parameters as well as the balancing and tri-valued quantization of the deep hash outputs. Extensive experiments conducted on several contact and contactless palmprint datasets confirm the multiple advantages of our method.

1. Introduction

Recently, biometric recognition has become a promising and widely used personal identification technology. As a biometric modality, the palmprint has many advantages, including rich features, high user acceptance and easy acquisition. Many palmprint recognition methods have been proposed and have achieved high recognition accuracy. However, with the large-scale application of biometrics, the sizes of palmprint databases are rapidly increasing. It is urgent to reduce the storage cost of recognition models and feature templates, and also to accelerate matching. Thus, it is necessary to develop fast and light-weight techniques for large-scale palmprint recognition and retrieval [1].
Palmprint recognition based on the deep hash network (DHN) has been widely used in many scenarios due to its low storage, low matching complexity and high recognition accuracy. However, existing DHNs neglect model compression, i.e., reducing the model size.
The binary deep hash network (B-DHN) has many advantages. A traditional DHN stores each weight in FP32 format, while the weights of a B-DHN are binary, so the parameter storage of a B-DHN is only about 1/32 of that of a traditional DHN. In addition, since a B-DHN has only "1" and "−1" parameters (weights), its convolution operations can be replaced with bitwise operations. To be specific, a B-DHN replaces the traditional multiplication operations with "Logic-AND-Gate" and "Logic-XOR-Gate" operations. In other words, the propagation of a B-DHN needs only additions and bit operations instead of multiplications, which is much faster than a traditional DHN [2].
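To illustrate this (a sketch of the general XOR/popcount trick, not the authors' implementation), the dot product of two {−1, +1} vectors reduces to bitwise operations once the values are packed into machine words; all names below are our own:

```python
# Illustrative sketch: a {-1, +1} dot product via XOR + popcount.
# Encoding: bit 1 stands for +1, bit 0 stands for -1.

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-dimensional {-1, +1} vectors packed as ints.

    Matching bit positions contribute +1 and differing ones contribute -1,
    so dot = (n - diff) - diff = n - 2 * popcount(a XOR b).
    """
    diff = bin(a_bits ^ b_bits).count("1")  # number of differing positions
    return n - 2 * diff

# a = [+1, -1, +1, +1] -> 0b1011, b = [+1, +1, -1, +1] -> 0b1101
assert binary_dot(0b1011, 0b1101, 4) == 0  # (+1) + (-1) + (-1) + (+1) = 0
```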
In this paper, the templates and network parameters are both quantized for fast and light-weight palmprint recognition. We propose a B-DHN with tri-valued hash codes. Specifically, the weights of the DHN are binarized to accelerate the network operations and reduce the model size. Since network compression commonly causes accuracy degradation, the outputs of the DHN are transformed into tri-valued codes [3] to reduce the ambiguity of the DHN output in Hamming space, which remarkably improves the recognition accuracy.
The contributions of this paper can be summarized as follows:
  • The proposed B-DHN has two advantages. On the one hand, the speed of the B-DHN is much faster than that of a traditional DHN. On the other hand, the B-DHN squeezes the network by binarizing its parameters to reduce the model storage. Thus, the B-DHN has advantages in terms of both speed and storage;
  • To improve the recognition accuracy of the B-DHN, the weights are balanced and standardized by maximizing the information entropy and minimizing the quantization error before binarization, which reduces the information loss caused by parameter binarization in forward propagation. Since the gradient of the binarization function cannot be calculated directly, an approximation function is used for the gradient to minimize the information loss, ensuring sufficient updates at the beginning of training and accurate gradients at the end of training;
  • To reduce the accuracy degradation caused by squeezing the DHN, the outputs of the DHN are quantized to tri-valued hash codes as the palmprint templates. Mutual information is used to dilute the ambiguity of the output binarization in Hamming space. Kleene logic's tri-valued Hamming distance measures the dissimilarity between palmprint templates, so the ambiguous interval receives a small weight, which improves the recognition accuracy.
The rest of this paper is organized as follows: Section 2 introduces some related works. Section 3 describes the proposed methodology, which includes the B-DHN and the tri-valued hash codes. Section 4 shows the experimental results and discussions. The conclusions and future works are given in Section 5.
The overall structure is shown in Figure 1. BI-CBP denotes binary convolution + BN + PReLU, and MI denotes the mutual-information transformation of the network outputs into tri-valued hash codes.

2. Related Works

2.1. Palmprint Recognition

Palmprint recognition typically includes four steps, namely, image acquisition, image preprocessing, feature extraction and matching. Palmprint recognition methods are reviewed as follows.

2.1.1. Local Texture Coding Methods

Filters are commonly used to extract texture features, such as the magnitude and phase of the filter response, and the features are then encoded by specific rules. Single-direction approaches preserve only one texture direction. Zhang et al. [4] proposed Palmprint Code (PalmCode), which processed the image with a 45° Gabor filter and binarized the response values. Kong et al. [5] filtered the images with six Gabor filters along different directions (0°, 30°, 60°, 90°, 120°, 150°), and coded the winner (dominant) direction index, i.e., the index of the filter with the minimum response at each pixel. Jia et al. [6] used the modified finite Radon transform filter to extract palmprint features and a "winner-take-all rule" to code each pixel; in the matching phase, a pixel-to-region template matching method further improved the accuracy.
In order to fully utilize the palmprint texture, multi-directional approaches were proposed, since fusion can improve the accuracy effectively [7,8]. Fei et al. [9] used multi-directional Gabor filters for feature extraction, and encoded the direction indexes corresponding to the top-2 response values. Guo et al. [10] encoded the responses of multi-directional Gabor filters and fused the matching scores at the score level; responses whose absolute values are close to 0 are regarded as unreliable, so the method removed the 8% worst response entries with a mask matrix and ignored them in the matching phase. Xu et al. [11] encoded the most relevant response together with the weighted responses of the two adjacent directions; they also processed the image with a Gaussian filter, which reduced the noise effect and further improved the performance.
Some downsampling strategies were proposed for coding methods. Leng et al. [12] allowed all pixels in each block to have equal voting rights to extract stable features. Yang et al. [13] selected the pixel with the best impact from the 16 candidates in each block. Yang et al. [14] conducted the downsampling in uniformly-spaced windows, which avoided the small distances and strong correlations between adjacent preserved pixels.

2.1.2. Deep Learning-Based Methods

Deep learning-based palmprint recognition methods typically use a specific model, such as a convolutional neural network (CNN), for feature extraction and classification. Svoboda et al. [15] trained AlexNet to expand the separability between the genuine and impostor distributions, but the training required supervision. Wen et al. [16] proposed a new loss function to improve the discriminability of deep features. Inspired by this, Zhong and Zhu [17] proposed a centralized large margin cosine loss, which improved intra-class tightness. Matkowski et al. [18] proposed a CNN framework for palmprint recognition in an uncontrolled environment; they used two sub-networks for segmenting the region of interest (ROI) and extracting the features, respectively. Chai et al. [19] pre-trained a network on the gender soft biometric, and then trained the network for palmprint classification. Xu et al. [20] used soft biometrics in multi-task pre-training for palmprint recognition. Du et al. [21] proposed a cross-domain recognition model based on regularized adversarial domain adaptive hashing. Liu et al. [22] developed a soft-shift triplet loss function for learning discriminative palmprint features with a fully convolutional network.

2.2. Deep Hash Network

DHNs combine the advantages of hashing algorithms and CNNs. On the one hand, DHNs have the same robust feature extraction ability as CNNs. On the other hand, DHNs combine CNNs with hash encoding, so they reduce the storage space and speed up matching/retrieval. The procedure of a DHN can be summarized as follows: a CNN is trained to extract low-dimensional features, and the features are then transformed into binary codes.
Chen et al. [23] proposed discriminant spectral hashing for compact palmprint representation. Cheng et al. [24] combined supervised hashing and deep convolutional features for palmprint recognition. Zhong et al. [25] extracted palmprint and dorsal hand vein features using a DHN and feature map matching for hand-based multi-biometric recognition. Zhong et al. [26] used a DHN for palm vein recognition with an equal error rate (EER) close to 0% on the near-infrared (NIR) spectral dataset. Li et al. [27] used a softmax classification loss and an improved ternary loss to learn hash codes that maintain consistency with the high-dimensional features. Liu et al. [28] used deep self-taught hashing to generate pseudo labels, and used a DHN to generate hash codes. Wu et al. [29] used a DHN to generate hash codes and applied feature selection to obtain more compact deep hash codes.
This paper proposes the double quantification of template and network for palmprint recognition. A DHN is compressed to a B-DHN to accelerate the network and reduce the model size. At the same time, the network outputs have low storage cost and fast matching/retrieval speed.

3. Methodology

This section describes the proposed B-DHN with tri-valued codes. Firstly, the B-DHN is specified, including the balanced and normalized quantization for parameter binarization and the approximation function for gradient calculation. Next, the quantization to tri-valued hash codes is described, where the ambiguous interval of the deep hash values is labeled by a dedicated state. The DHN configuration used in this paper is shown in Table 1.

3.1. Binary Deep Hash Networks

3.1.1. Binary Convolution and Approximation Function

  1. Binary convolution
To reduce the information loss in a B-DHN, the network parameters are balanced and normalized before binarization as:
$$W_{\mathrm{std}} = \frac{\hat{W}}{\sigma(\hat{W})}$$
where $\sigma(\cdot)$ is the standard deviation, and $\hat{W}$ is:
$$\hat{W} = W - \bar{W}$$
where $W$ is the initial weight value, and $\bar{W}$ is the mean value of $W$.
Finally, binary convolution can be formulated as:
$$Z_b = B_{co}\left(\mathrm{sign}(A), \mathrm{sign}(W_{\mathrm{std}})\right)$$
where $A$ denotes the input activation vector computed by the previous network layer, and $B_{co}(\cdot)$ denotes the binary convolution operation, which replaces the traditional multiplication operations with "Logic-XOR-Gate" operations. Contrary to traditional convolution, binary convolution is conducted with bitwise operations.
  2. Approximation function
Since the gradient of a binary network cannot be calculated directly, an approximation function is used in place of the binarization function in the backward pass. The approximation function of a parameter $x$ is:
$$g(x) = k \tanh(t x)$$
$$t = T_1 \times 10^{\frac{i}{N} \log_{10} \frac{T_2}{T_1}}$$
$$k = \max\left(\frac{1}{t}, 1\right)$$
where $i$ is the current epoch index, $N$ is the maximum number of epochs, $T_1 = 10^{-1}$ and $T_2 = 10^{1}$.
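For concreteness, the following PyTorch sketch combines the weight balancing, the sign binarization and the tanh-based surrogate gradient described above; it is our own minimal illustration (the names ApproxSign, ede_schedule and binarize_weights are not from the paper):

```python
import math
import torch

class ApproxSign(torch.autograd.Function):
    """sign() in the forward pass; the derivative of g(x) = k*tanh(t*x)
    as a surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, t, k):
        ctx.save_for_backward(x)
        ctx.t, ctx.k = t, k
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        t, k = ctx.t, ctx.k
        # d/dx [k * tanh(t * x)] = k * t * (1 - tanh^2(t * x))
        return grad_out * k * t * (1.0 - torch.tanh(t * x) ** 2), None, None

def ede_schedule(i: int, N: int, T1: float = 1e-1, T2: float = 1e1):
    """t = T1 * 10^{(i/N) * log10(T2/T1)} grows from T1 to T2 over training;
    k = max(1/t, 1) keeps the surrogate gradient from vanishing early on."""
    t = T1 * 10 ** ((i / N) * math.log10(T2 / T1))
    return t, max(1.0 / t, 1.0)

def binarize_weights(W: torch.Tensor, t: float, k: float) -> torch.Tensor:
    """Balance (zero mean), standardize, then binarize the weights."""
    W_hat = W - W.mean()
    W_std = W_hat / (W_hat.std() + 1e-12)  # divide by sigma(W_hat)
    return ApproxSign.apply(W_std, t, k)
```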

3.1.2. Loss Function of B-DHN

The loss function consists of two parts. The first part is the distance loss, which keeps the feature vectors of the same class close together and those of different classes far apart. The second part is the quantization loss, a regularization term applied to the real-valued outputs of the last layer.
  1. Distance loss function
In order to decrease the intra-class distance and enlarge the inter-class distance, the intra-class term is formulated as:
$$S(f_i, f_j) = \frac{1}{2} l_{ij} \, HD(f_i, f_j)$$
where $f_i$ and $f_j$ denote two binary codes (templates) of input images, and $HD(\cdot)$ denotes the Hamming distance. The inter-class term is:
$$S'(f_i, f_j) = \frac{1}{2} (1 - l_{ij}) \max\left(T - HD(f_i, f_j), 0\right)$$
If $f_i$ and $f_j$ belong to the identical class, $l_{ij} = 1$; otherwise, $l_{ij} = 0$. $T$ denotes the distance threshold; in the experiments, $T$ is set to 90 because each output is coded as 128 bits.
By combining the above two terms, the distance loss function is:
$$R_s(f_i, f_j, l_{ij}) = \sum_{i=1}^{M} \sum_{j=1}^{M} \left[ S(f_i, f_j) + S'(f_i, f_j) \right]$$
where $M$ is the number of palmprint images in the training set.
  2. Quantization loss function
To reduce the quantization error, each entry of the output should be close to "+1" or "−1", so the quantization loss is:
$$R_Q = \sum_{i=1}^{M} \frac{1}{2} \left\| \mathbf{1} - |f_i| \right\|_2$$
where $|f_i|$ denotes the element-wise absolute value of $f_i$, and $\|\cdot\|_2$ denotes the L2-norm of a vector.
Ultimately, the loss function is:
$$Loss = \alpha R_s + R_Q$$
where $\alpha$ is the combination factor.
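A relaxed, differentiable sketch of this loss for a single pair is given below; it is our simplification (not the authors' code), using the identity $HD(f_i, f_j) = (L - \langle f_i, f_j \rangle)/2$ for ±1 codes of length L, applied directly to the real-valued outputs:

```python
import torch

def pairwise_loss(fi, fj, lij, T=90.0, alpha=1.0):
    """Distance loss + quantization loss for one pair of 128-d outputs.

    fi, fj: real-valued network outputs; lij: 1 for a genuine pair, else 0.
    """
    L = fi.shape[-1]
    hd = 0.5 * (L - (fi * fj).sum(-1))                      # relaxed Hamming distance
    s_intra = 0.5 * lij * hd                                # pull same-class pairs together
    s_inter = 0.5 * (1 - lij) * torch.clamp(T - hd, min=0)  # push impostor pairs beyond T
    r_q = 0.5 * torch.norm(1.0 - fi.abs(), p=2, dim=-1)     # drive entries toward +/-1
    return alpha * (s_intra + s_inter) + r_q

# Toy usage with random outputs for a genuine pair:
loss = pairwise_loss(torch.randn(128), torch.randn(128), lij=1)
```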

3.2. Tri-Valued Hash Codes

It is worth noting that DHNs binarize their outputs for feature extraction, which is acceptable for classification and can still achieve relatively high accuracy even though binarization ignores the ambiguous interval in Hamming space. In this paper, the tri-valued quantification of the hash codes fully considers the ambiguous interval, so the accuracy is better than that of standard DHNs. To reduce the quantization error, the network outputs are balanced before quantization:
$$X_{\mathrm{std}} = \frac{\hat{X}}{\sigma(\hat{X})}$$
$$\hat{X} = X - \bar{X}$$
where $X$ is the network output and $\bar{X}$ is its mean.
Two thresholds need to be determined for coding the network outputs by finding the largest mutual information. The mutual information is:
$$I(A, B) = \sum P(A, B) \log \frac{P(A, B)}{P(A) P(B)}$$
where $A$ and $B$ are the distributions of two sets of network outputs, respectively, and the threshold selection is formulated as:
$$E(t_1, t_2) = I(A, B)$$
$$A = \{x \mid A(x)\},\ x \in (\beta, \delta) \quad \mathrm{and} \quad B = \{x \mid B(x)\},\ x \in (\beta, \delta),\ \beta, \delta \in (-4, 4)$$
$$(t_1, t_2) = \arg\max_{t_1, t_2} E(t_1, t_2)$$
where $\beta < \delta$ and both are integers, and $t_1$ and $t_2$ denote the two thresholds. After obtaining the two thresholds, each entry $x$ of the output is transformed into a tri-valued digit as:
$$T_{code}(x) = \begin{cases} -1, & x \le t_1 \\ 0, & t_1 < x < t_2 \\ 1, & x \ge t_2 \end{cases}$$
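The paper does not specify how the maximization is carried out; the sketch below shows one plausible realization (our assumption): a grid search over $t_1 < t_2$ with a histogram-based estimate of I(A, B) over the tri-valued codes:

```python
import numpy as np

def tcode(x, t1, t2):
    """Tri-valued quantization: -1 if x <= t1, 0 in the ambiguous
    interval (t1, t2), +1 if x >= t2."""
    return np.where(x <= t1, -1, np.where(x >= t2, 1, 0))

def mutual_info(a, b):
    """Histogram estimate of I(A, B) for two tri-valued code arrays."""
    joint = np.zeros((3, 3))
    for u, v in zip(a + 1, b + 1):  # shift {-1, 0, 1} -> {0, 1, 2}
        joint[u, v] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

def search_thresholds(A, B, grid=np.linspace(-4, 4, 81)):
    """Grid search for (t1, t2) maximizing I(A, B) over (-4, 4)."""
    best_mi, best = -np.inf, (None, None)
    for i, t1 in enumerate(grid):
        for t2 in grid[i + 1:]:
            mi = mutual_info(tcode(A, t1, t2), tcode(B, t1, t2))
            if mi > best_mi:
                best_mi, best = mi, (t1, t2)
    return best
```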
The tri-valued Hamming distance table of Kleene logic [30,31] is shown in Table 2, where "0" labels the ambiguous state. The Kleene logic table is simplified because only two operators, $\neg$ and $\leftrightarrow$, are used for the distance calculation. For two tri-valued digits $a, b \in \{-1, 0, 1\}$, the tri-valued Hamming distance is:
$$THD(a, b) = \gamma\left(\neg(a \leftrightarrow b) + 1\right)$$
where $\gamma$ is the weight factor, which is generally set to 0.5, i.e., the ambiguous state has a weight of 0.5.
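Since $a \leftrightarrow b$ reduces to the product $a \cdot b$ on {−1, 0, 1} and $\neg x$ to $-x$, the distance is easy to compute; a small illustration (ours, not the authors' code):

```python
def thd(code_a, code_b, gamma=0.5):
    """Tri-valued Hamming distance between two codes in {-1, 0, 1}^L.
    Each position contributes gamma * (-(a*b) + 1): 0 for a match,
    1 for a mismatch, gamma for any ambiguous ('0') position."""
    return sum(gamma * (-(a * b) + 1) for a, b in zip(code_a, code_b))

# One match (0), one mismatch (1), one ambiguous position (0.5):
assert thd([1, -1, 0], [1, 1, 1]) == 1.5
```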

4. Experimental Results and Discussions

This section introduces the palmprint databases and describes the experiments, including the ablation experiments on the balanced and normalized network parameters, the balanced network output, and the tri-valued quantization.

4.1. Dataset and Experimental Environment Setup

In this paper, four palmprint databases, including two contact databases and two contactless databases, are used for performance evaluation and comparison. The original images are pre-processed into ROIs of size 128 × 128 [32]. The details of the databases are described in Table 3. The multispectral database contains four spectral sub-datasets, namely, Blue, Red, Green and NIR.
  • PolyU database [33]. A total of 7752 images belong to 386 palms, each with around 20 images. The images are all acquired with a contact device. There are, in total, 30,042,876 matchings, including 74,068 genuine matchings and 29,968,808 imposter matchings;
  • Multispectral database [34]. The images are acquired with contact devices under different spectral environments. Each spectral sub-dataset contains 6000 palm images. There are, in total, 1,829,400 matchings, including 33,000 genuine matchings and 1,796,400 imposter matchings;
  • IITD database [35]. There are 2601 hand images captured with a contactless device from 230 individuals (460 palms). Each palm has around five images. Contactless acquisition usually introduces stronger noise;
  • Tongji-print database [36]. It consists of 12,000 images of 300 individuals (600 palms) acquired with a contactless device in two sessions. In each session, 10 images of each palm are acquired. There are, in total, 17,972,700 matchings, including 2700 genuine matchings and 17,970,000 imposter matchings.
The image samples of these databases are shown in Figure 2. The four databases are acquired in different ways; thus, the experiments on them can better reflect the advantages of our method. To ensure the stability of the experiments, the classes with incomplete samples are removed.
The system configuration is: AMD Ryzen 5 3600 6-core processor at 3.60 GHz, NVIDIA RTX 3060 Ti GPU, 16 GB RAM and Windows 10 OS. All experiments are conducted in PyTorch.

4.2. Ablation Experiments

  1. Balanced and normalized network parameters
Balancing and normalizing the network parameters maximizes the information entropy of the binary weights and binary activations by adjusting the distribution of the network weights. Since an explicit balance operation is applied before binarization, the binary weights of each layer have maximum information entropy; the binary activations, which are affected by the binary weights, also attain maximum information entropy.
  2. Balanced network output
Without balancing, the distribution of the network output is concentrated around "0", i.e., many values fall into the ambiguous interval, which is not conducive to the quantization operation. The balanced network outputs have maximum information entropy, which reduces the ambiguity of the network output.
  3. Tri-valued quantization
Traditional binarization ignores the ambiguity in Hamming space, and direct binarization with a threshold or a function can lead to misclassification; specifically, logical "true" feature values are misclassified as logical "false", and vice versa. This uncertainty is directly reflected in the distance matching process, where the intra-class distance becomes too large and the inter-class distance too small [37].
Table 4 shows the ablation results on the PolyU database. The balanced and normalized network parameters greatly improve the accuracy, and the balanced network output and the tri-valued quantization both contribute to further accuracy improvement.

4.3. Comparison Experiments

The comparison experimental results are shown in Table 5. The proposed method is compared with many SOTA methods, including local texture coding methods, such as PalmCode [4], ordinal code (OrdinalCode) [38], fusion code (FusionCode) [39], competitive code (CompCode) [5], robust line orientation code (RLOC) [6], half-orientation code (HOC) [40], double-orientation code (DOC) [9], discriminative competitive code (DCC) [11], discriminative and robust competitive code (DRCC) [11] and binary orientation co-occurrence vector (BOCV) [10]; deep learning-based methods, such as palmprint network (PalmNet) [41]; and deep hash-based methods, such as deep hash codes (DHC) [29] and deep tri-valued code (DTC) [3].
The lower the EER, the stronger the discrimination ability. Table 6 shows the EERs of different methods on the multispectral database. Figure 3 and Figure 4 show the receiver operating characteristic (ROC) curves. The proposed method obtains results close to the best ones, while offering more advantages, including the low storage cost of both template and model, and the low computational complexity of template generation and matching.
Table 7 compares the storage costs of the templates. The storage cost of the proposed method is the lowest, so the proposed method is highly efficient in the matching/retrieval phase.
Compared with the deep learning-based methods, the proposed method still achieves high accuracy. Compared with the deep hash-based methods, the accuracy of the proposed method is slightly worse than those of DHC and DTC; the same phenomenon can be observed in Figure 3 and Figure 4. However, because the proposed method is compressed, its computational complexity and model storage are much smaller, as shown in Table 8. The parameter volume (Params) measures the model storage, and the computational complexity is measured by multiply-accumulate operations (MACC) and floating-point operations (FLOPs); the bit-widths of the weights and activations (W/A) are also shown. The model storage of the proposed method is only 6.67 MB, about 1/32 of those of DTC and DHC. At the same time, the proposed method conducts the convolution in bitwise mode instead of float mode, which is far more computationally efficient than DTC and DHC.
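As a back-of-the-envelope check on Table 8 (our arithmetic, using the reported figures), the 1/32 ratio follows directly from replacing 32-bit weights with 1-bit weights:

```python
# Binarizing FP32 weights (32 bits -> 1 bit) shrinks storage by 32x.
fp32_model_mb = 213.49                 # DHN / DTC storage from Table 8
print(f"{fp32_model_mb / 32:.2f} MB")  # -> 6.67 MB, the reported size of our model
```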
On the NIR dataset, the accuracy of the proposed method degrades due to the harsh acquisition environment, and we will further improve the algorithm on this dataset in the future.

5. Conclusions and Future Works

In this paper, a B-DHN with tri-valued hash codes is proposed for the double quantification of template and network in fast and light-weight palmprint recognition. Balanced and normalized network parameters reduce the information loss caused by weight and activation binarization. The approximation function better represents the binarization function in the backward pass, ensuring smooth training. The model size of our method is only 1/32 of that of a DHN, which remarkably reduces the storage cost; at the same time, bitwise operations instead of float operations remarkably reduce the computational complexity. The tri-valued quantization of the hash codes improves the accuracy by down-weighting the ambiguous interval. In future work, we will try to overcome the sensitivity of the network to illumination, especially in the NIR spectrum.

Author Contributions

All authors discussed the details of the manuscript. Q.L. designed the study, implemented the proposed technique, conducted the experiments, collated the results and wrote the manuscript. C.K. and L.L. reviewed and revised the article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (61866028), Technology Innovation Guidance Program Project (Special Project of Technology Cooperation, Science and Technology Department of Jiangxi Province) (20212BDH81003), and Innovation Foundation for Postgraduate Student of Nanchang Hangkong University (YC2021-S710).

Data Availability Statement

The datasets used in this paper are publicly available and their links are provided in the reference section.

Acknowledgments

This research was supported by National Natural Science Foundation of China (61866028), Technology Innovation Guidance Program Project (Special Project of Technology Cooperation, Science and Technology Department of Jiangxi Province) (20212BDH81003), and Innovation Foundation for Postgraduate Student of Nanchang Hangkong University (YC2021-S710).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jia, W.; Huang, S.; Wang, B.; Fei, L.; Zhao, Y.; Min, H. Deep Multi-loss Hashing Network for Palmprint Retrieval and Recognition. In Proceedings of the 2021 IEEE International Joint Conference on Biometrics (IJCB), Shenzhen, China, 4–7 August 2021. [Google Scholar]
  2. Qin, H.; Gong, R.; Liu, X.; Shen, M.; Wei, Z.; Yu, F.; Song, J. Forward and Backward Information Retention for Accurate Binary Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  3. Lin, Q.; Leng, L.; Khan, M.K. Deep Ternary Hashing Code for Palmprint Retrieval and Recognition. In Proceedings of the 2022 The 6th International Conference on Advances in Artificial Intelligence, Birmingham, UK, 21–23 October 2022. [Google Scholar]
  4. Zhang, D.; Kong, W.K.; You, J.; Wong, M. Online Palmprint Identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1041–1050. [Google Scholar] [CrossRef]
  5. Kong, A.W.K.; Zhang, D. Competitive Coding Scheme for Palmprint Verification. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004. [Google Scholar]
  6. Jia, W.; Huang, D.-S.; Zhang, D. Palmprint verification based on robust line orientation code. Pattern Recognit. 2008, 41, 1504–1513. [Google Scholar] [CrossRef]
  7. Leng, L.; Zhang, J. Palmhash Code vs. Palmphasor Code. Neurocomputing 2013, 108, 1–12. [Google Scholar] [CrossRef]
  8. Leng, L.; Li, M.; Kim, C.; Bi, X. Dual-source discrimination power analysis for multi-instance contactless palmprint recognition. Multimedia Tools Appl. 2015, 76, 333–354. [Google Scholar] [CrossRef]
  9. Fei, L.; Xu, Y.; Tang, W.; Zhang, D. Double-orientation code and nonlinear matching scheme for palmprint recognition. Pattern Recognit. 2016, 49, 89–101. [Google Scholar] [CrossRef]
  10. Guo, Z.; Zhang, D.; Zhang, L.; Zuo, W. Palmprint verification using binary orientation co-occurrence vector. Pattern Recognit. Lett. 2009, 30, 1219–1227. [Google Scholar] [CrossRef]
  11. Xu, Y.; Fei, L.; Wen, J.; Zhang, D. Discriminative and Robust Competitive Code for Palmprint Recognition. IEEE Trans. Syst. Man, Cybern. Syst. 2016, 48, 232–241. [Google Scholar] [CrossRef]
  12. Leng, L.; Yang, Z.; Min, W. Democratic voting downsampling for coding-based palmprint recognition. IET Biom. 2020, 9, 290–296. [Google Scholar] [CrossRef]
  13. Yang, Z.; Leng, L.; Min, W. Extreme Downsampling and Joint Feature for Coding-Based Palmprint Recognition. IEEE Trans. Instrum. Meas. 2020, 70, 1–12. [Google Scholar] [CrossRef]
  14. Yang, Z.; Leng, L.; Min, W. Downsampling in uniformly-spaced windows for coding-based Palmprint recognition. Multimedia Tools Appl. 2023, 1–16. [Google Scholar] [CrossRef]
  15. Svoboda, J.; Masci, J.; Bronstein, M.M. Palmprint Recognition Via Discriminative Index Learning. In Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016. [Google Scholar]
  16. Wen, Y.; Zhang, K.; Li, Z.; Qiao, Y. A Discriminative Feature Learning Approach for Deep Face Recognition. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  17. Zhong, D.; Zhu, J. Centralized Large Margin Cosine Loss for Open-Set Deep Palmprint Recognition. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1559–1568. [Google Scholar] [CrossRef]
  18. Matkowski, W.M.; Chai, T.; Kong, A.W.K. Palmprint Recognition in Uncontrolled and Uncooperative Environment. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1601–1615. [Google Scholar] [CrossRef]
  19. Chai, T.; Prasad, S.; Wang, S. Boosting palmprint identification with gender information using DeepNet. Futur. Gener. Comput. Syst. 2019, 99, 41–53. [Google Scholar] [CrossRef]
  20. Xu, H.; Leng, L.; Yang, Z.; Teoh, A.B.J.; Jin, Z. Multi-task Pre-training with Soft Biometrics for Transfer-learning Palmprint Recognition. Neural Process. Lett. 2022, 1–18. [Google Scholar] [CrossRef]
  21. Du, X.; Zhong, D.; Shao, H. Cross-Domain Palmprint Recognition via Regularized Adversarial Domain Adaptive Hashing. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2372–2385. [Google Scholar] [CrossRef]
  22. Liu, Y.; Kumar, A. Contactless Palmprint Identification Using Deeply Learned Residual Features. IEEE Trans. Biom. Behav. Identity Sci. 2020, 2, 172–181. [Google Scholar] [CrossRef]
  23. Chen, Y.C.; Lim, M.H.; Yuen, P.C.; Lai, J. Discriminant Spectral Hashing for Compact Palmprint Representation. In Proceedings of the Biometric Recognition: 8th Chinese Conference, Jinan, China, 16–17 November 2013. [Google Scholar]
  24. Cheng, J.; Sun, Q.; Zhang, J.; Zhang, Q. Supervised Hashing with Deep Convolutional Features for Palmprint Recognition. In Proceedings of the Biometric Recognition: 12th Chinese Conference, Shenzhen, China, 28–29 October 2017. [Google Scholar]
  25. Zhong, D.; Shao, H.; Du, X. A Hand-Based Multi-Biometrics via Deep Hashing Network and Biometric Graph Matching. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3140–3150. [Google Scholar] [CrossRef]
  26. Zhong, D.; Liu, S.; Wang, W.; Du, X. Palm Vein Recognition with Deep Hashing Network. In Proceedings of the Pattern Recognition and Computer Vision: First Chinese Conference, Guangzhou, China, 23–26 November 2018. [Google Scholar]
  27. Li, D.; Gong, Y.; Cheng, D.; Shi, W.; Tao, X.; Chang, X. Consistency-Preserving deep hashing for fast person re-identification. Pattern Recognit. 2019, 94, 207–217. [Google Scholar] [CrossRef]
  28. Liu, Y.; Song, J.; Zhou, K.; Yan, L.; Liu, L.; Zou, F.; Shao, L. Deep Self-Taught Hashing for Image Retrieval. IEEE Trans. Cybern. 2018, 49, 2229–2241. [Google Scholar] [CrossRef]
  29. Wu, T.; Leng, L.; Khan, M.K.; Khan, F.A. Palmprint-Palmvein Fusion Recognition Based on Deep Hashing Network. IEEE Access 2021, 9, 135816–135827. [Google Scholar] [CrossRef]
  30. Liu, C.; Fan, L.; Ng, K.W.; Jin, Y.; Ju, C.; Zhang, T.; Chan, C.S.; Yang, Q. Ternary Hashing. arXiv 2021, arXiv:2103.09173. [Google Scholar]
  31. Fitting, M. Kleene’s Logic, Generalized. J. Log. Comput. 1991, 1, 797–810. [Google Scholar] [CrossRef]
  32. Leng, L.; Teoh, A.B.J. Alignment-free row-co-occurrence cancelable palmprint Fuzzy Vault. Pattern Recognit. 2015, 48, 2290–2303. [Google Scholar] [CrossRef]
  33. PolyU Palmprint Database. Available online: https://www.comp.polyu.edu.hk/~biometrics (accessed on 11 July 2021).
  34. Zhang, L.; Cheng, Z.; Shen, Y.; Wang, D. Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset. Symmetry 2018, 10, 78. [Google Scholar] [CrossRef]
  35. IITD Touchless Palmprint Database. Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm (accessed on 11 July 2021).
  36. Tongji Palmprint Image Database. Available online: https://cslinzhang.github.io/ContactlessPalm/ (accessed on 11 July 2021).
  37. Shao, H.; Zhong, D.; Du, X. Deep Distillation Hashing for Unconstrained Palmprint Recognition. IEEE Trans. Instrum. Meas. 2021, 70, 1–13. [Google Scholar] [CrossRef]
  38. Sun, Z.; Tan, T.; Wang, Y.; Li, S.Z. Ordinal Palmprint Representation for Personal Identification. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005. [Google Scholar]
  39. Kong, A.; Zhang, D.; Kamel, M. Palmprint identification using feature-level fusion. Pattern Recognit. 2006, 39, 478–487. [Google Scholar] [CrossRef]
  40. Fei, L.; Xu, Y.; Zhang, D. Half-orientation extraction of palmprint features. Pattern Recognit. Lett. 2016, 69, 35–41. [Google Scholar] [CrossRef]
  41. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174. [Google Scholar] [CrossRef]
Figure 1. Overall structure.
Figure 2. Samples in different palmprint databases.
Figure 3. ROC curves of various methods on various databases.
Figure 4. ROC curves of various methods on multispectral database.
Table 1. Network parameter configuration.

Layer      Configuration
Conv1      Filter 16 × 3 × 3, st. 4, pad 0, BN, PReLU
Max_pool   Filter 2 × 2, st. 1, pad 0
Conv2      Filter 32 × 5 × 5, st. 2, pad 2, BN, PReLU
Max_pool   Filter 2 × 2, st. 1, pad 0
Conv3      Filter 64 × 3 × 3, st. 1, pad 1, PReLU
Conv4      Filter 64 × 3 × 3, st. 1, pad 1, PReLU
Conv5      Filter 128 × 3 × 3, st. 1, pad 1, PReLU
Max_pool   Filter 2 × 2, st. 1, pad 0
Full6      Length 2048
Full7      Length 2048
Full8      Length 128
Table 2. Kleene logic table.

A ↔ B     B = −1   B = 0   B = 1   ¬A
A = −1      1        0      −1      1
A = 0       0        0       0      0
A = 1      −1        0       1     −1
Table 3. Details of the palmprint databases.

Databases                  PolyU   IITD        Multispectral   Tongji-Print
Collection                 Touch   Touchless   Touch           Touchless
Number of classes          378     460         500             600
Samples per class          20      5           12              20
Total number of samples    7560    2300        24,000          12,000
Table 4. Ablation experiments on PolyU.

Balanced and Normalized Params   Balanced Output   Tri-Valued Quantization   EER (%)
–                                –                 –                         3.0431
✓                                –                 –                         0.0913
–                                ✓                 ✓                         1.8691
✓                                ✓                 –                         0.0763
✓                                –                 ✓                         0.0850
✓                                ✓                 ✓                         0.0673
Table 5. EER (%) of various methods on various databases.

Method        PolyU    IITD     Tongji
PalmCode      0.3500   5.4500   0.1100
OrdinalCode   0.2300   5.5000   0.1600
FusionCode    0.2400   6.2000   0.0731
CompCode      0.1200   5.5000   0.1100
RLOC          0.1300   5.0000   0.0253
HOC           0.1600   6.5500   0.0954
DOC           0.1800   6.2000   0.0431
DCC           0.1500   5.4900   0.0506
DRCC          0.1800   5.4200   0.0308
BOCV          0.0856   4.5600   0.0056
DHPN          0.0456   3.7310   0.0694
PalmNet       0.1110   4.2040   0.0332
DHC           0.0513   3.1180   0.0001
DTC           0.0302   2.9270   0.0000
Ours          0.0673   3.7960   0.0075
Table 6. EER (%) of various methods on multispectral database.

Method        Blue     Green    Red      NIR
PalmCode      0.2800   0.2500   0.2300   0.2000
OrdinalCode   0.1600   0.1500   0.0720   0.1100
FusionCode    0.3100   0.1900   0.1200   0.1700
CompCode      0.0911   0.1100   0.0357   0.0579
RLOC          0.0799   0.0855   0.0443   0.0629
HOC           0.1800   0.1600   0.1000   0.0839
DOC           0.1300   0.1200   0.0584   0.0501
DCC           0.1100   0.0979   0.0450   0.0575
DRCC          0.1100   0.0927   0.0659   0.0563
BOCV          0.0358   0.0593   0.0241   0.0261
DHPN          0.0213   0.0352   0.0369   0.0020
PalmNet       0.0178   0.0087   0.0366   0.0871
DHC           0.0000   0.0000   0.0000   0.0000
DTC           0.0000   0.0000   0.0000   0.0000
Ours          0.0018   0.0003   0.0087   0.0159
Table 7. Storage cost (bit).

Method        Storage   Method   Storage
PalmCode      2048      HOC      2048
OrdinalCode   3072      DOC      2048
FusionCode    2048      DRCC     2048
CompCode      3072      BOCV     6144
RLOC          6144      Ours     128
Table 8. Computational complexity and model storage.

Method                            Bit-Width (W/A)   Operation Type   Params (MB)   FLOPs (M) *   MACC (M)
DHPN (Feature Extraction + PCA)   32/32             Float            527.76        30,816.89     13,621.10
DHN                               32/32             Float            213.49        176.58        88.29
DTC (Network + Quantification)    32/32             Float            213.49        176.59        88.29
Ours                              1/1               Bitwise          6.67          -             88.29

* Our method does not have floating-point operations.

