Balancing Heterogeneous Image Quality for Improved Cross-Spectral Face Recognition
Abstract
1. Introduction
- To the best of our knowledge, this is one of the first works to specifically address the issue of quality imbalance in cross-spectral face recognition.
- Quality balancing is achieved by upgrading the IR face imagery through a proposed cascaded structure of denoising and deblurring (a pipeline sketch is given after this list).
- For deblurring, we further propose an SVD-theory-inspired CNN model (SVDFace) that decomposes the inverse kernel function into stacks of 1D convolutional layers. The singular value decomposition (SVD) network offers compact parameters and good structural interpretability, and it does not require knowledge of the exact cause of image degradation.
- The proposed deblurring method (SVDFace) is shown to be advantageous over other deblurring methods, including state-of-the-art ones. The cascaded enhancement structure is also shown to be superior to the non-cascaded structure, and the upgrading approach to quality balancing outperforms the downgrading approach.
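The two-stage enhancement referred to above can be summarized by the minimal sketch below. It assumes the third-party `bm3d` Python package for the denoising stage and treats the deblurring network as an abstract callable (`deblur_net`), so it illustrates the cascaded idea rather than the exact implementation.

```python
# Minimal sketch of the cascaded enhancement idea: denoise the degraded IR face
# first, then deblur it, and only then pass it to the cross-spectral matcher.
# The BM3D call assumes the third-party `bm3d` package; `deblur_net` is a
# placeholder for the deblurring CNN (e.g., the proposed SVDFace model).
import numpy as np
import bm3d  # pip install bm3d (assumed third-party implementation)


def enhance_ir_face(ir_face: np.ndarray, deblur_net, sigma_psd: float = 0.1) -> np.ndarray:
    """Upgrade a low-quality IR face: BM3D denoising followed by CNN deblurring."""
    # Stage 1: suppress noise with BM3D (sigma_psd is an assumed noise level).
    denoised = bm3d.bm3d(ir_face.astype(np.float32), sigma_psd=sigma_psd)
    # Stage 2: remove residual blur with the deblurring network.
    deblurred = deblur_net(denoised)
    return np.clip(deblurred, 0.0, 1.0)
```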
2. Related Works
3. Proposed Methodology
3.1. Necessity of the Cascaded Structure
3.2. BM3D Denoising
3.3. Deep Neural-Network-Based Deblurring
3.3.1. SVD-Inspired Deblurring Network
3.3.2. Analysis of Structural Advantages
4. Experimental Results and Analysis
4.1. Dataset
4.2. Quality Balancing: Upgrading vs. Downgrading
4.3. Cascaded or Non-Cascaded
4.4. Comparison of Different Deblurring Methods
4.5. Analysis and Conclusion
- Infrared faces acquired at long standoffs suffer from quality degradation due to atmospheric and camera effects, which leads to a serious drop in cross-spectral recognition performance and raises the issue of heterogeneous image quality imbalance.
- For both SWIR and NIR at 50 m and 106 m, balancing image quality before face matching, whether by upgrading the low-quality imagery (cascaded enhancement) or by downgrading the high-quality imagery (Gaussian smoothing), yields a substantial improvement in recognition performance, with the upgrading approach outperforming the downgrading approach (a sketch of how the reported metrics can be computed follows this list).
- The proposed cascaded enhancement structure is both necessary and effective: a single denoising stage alone actually lowers recognition performance, whereas the subsequent deblurring stage dramatically improves it.
- In the context of cross-spectral face recognition, the newly developed deblurring network (SVDFace) outperforms traditional deblurring methods as well as state-of-the-art deep-learning-based deblurring models, for all IR bands and standoffs considered.
- As the degree of quality imbalance between the heterogeneous faces increases, such as when the standoff increases from 50 m to 106 m, the effect of quality balancing becomes more pronounced, especially for the SWIR band.
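For concreteness, the sketch below shows one common way the reported quantities could be computed from raw genuine/impostor match scores. The exact FAR operating points behind GAR1 and GAR2 are not stated in this summary, so the 1% and 0.1% values in the usage comment are illustrative assumptions only.

```python
# Hedged sketch of GAR/EER computation from genuine and impostor score arrays.
import numpy as np


def roc_points(genuine: np.ndarray, impostor: np.ndarray):
    """Sweep thresholds over all observed scores and return (FAR, GAR) arrays."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    gar = np.array([(genuine >= t).mean() for t in thresholds])
    return far, gar


def eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Equal error rate: the operating point where FAR equals FRR (= 1 - GAR)."""
    far, gar = roc_points(genuine, impostor)
    frr = 1.0 - gar
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2)


def gar_at_far(genuine: np.ndarray, impostor: np.ndarray, target_far: float) -> float:
    """GAR at the threshold whose FAR is closest to the requested operating point."""
    far, gar = roc_points(genuine, impostor)
    return float(gar[np.argmin(np.abs(far - target_far))])


# Illustrative usage (operating points are assumptions, not the paper's values):
# gar1 = gar_at_far(genuine_scores, impostor_scores, 0.01)   # e.g., FAR = 1%
# gar2 = gar_at_far(genuine_scores, impostor_scores, 0.001)  # e.g., FAR = 0.1%
# err  = eer(genuine_scores, impostor_scores)
```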
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Short Biography of Authors
Zhicheng Cao is an Assistant Professor at the School of Life Science and Technology of Xidian University, China. He holds a Ph.D. in Electrical Engineering from West Virginia University, USA, and received his B.S. and M.S. degrees in Biomedical Engineering from Xi’an Jiaotong University, China. His research interests include biometrics, pattern recognition, image processing, and deep learning, with a focus on multi- and cross-spectral face/periocular recognition. He is the author of two books and more than 20 papers in related areas, and holds four patents as the first inventor. He has been a reviewer for many international journals, such as Pattern Recognition, Neurocomputing, Machine Vision and Applications, IEEE Access, and IET Biometrics. He is a member of IEEE and SPIE.

Xi Cen is currently a Master’s student at the School of Life Science and Technology of Xidian University, China. She received her B.S. degree in Computer Science from Northwest A&F University, China, in 2018. Her research focuses on face recognition and biometrics.

Heng Zhao received his B.E. degree in Automatic Control from Xi’an Jiaotong University in 1996 and his Ph.D. degree in Circuits and Systems from Xidian University in 2005. From 1996 to 1999, he was an Engineer with the Xi’an Flight Automatic Control Research Institute. Since 2005, he has been a faculty member of the Biomedical Engineering Department, School of Life Science and Technology, Xidian University. His research interests include pattern recognition and image processing.

Liaojun Pang is a Full Professor at the School of Life Science and Technology of Xidian University, China. He received his B.S. and M.S. degrees in Computer Science and his Ph.D. in Cryptography, all from Xidian University. He was a visiting scholar at the Department of Computer Science of Wayne State University in the USA. His research interests include biometrics, biometric encryption, and information security. He has authored four books, over 30 patents, and more than 50 research papers. He has been a member of IEEE since 2009. He was a recipient of the National Award for Technological Invention of China for his contributions to biometric encryption.
References
- Buddharaju, P.; Pavlidis, I.T.; Tsiamyrtzis, P.; Bazakos, M. Physiology-based face recognition in the thermal infrared spectrum. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 613–626. [Google Scholar] [CrossRef]
- Nicolo, F.; Schmid, N.A. Long Range Cross-Spectral Face Recognition: Matching SWIR Against Visible Light Images. IEEE Trans. Inf. Forensics Secur. 2012, 7, 1717–1726. [Google Scholar] [CrossRef]
- Klare, B.F.; Jain, A.K. Heterogeneous Face Recognition Using Kernel Prototype Similarities. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1410–1422. [Google Scholar] [CrossRef] [Green Version]
- Juefei-Xu, F.; Pal, D.K.; Savvides, M. NIR-VIS Heterogeneous Face Recognition via Cross-Spectral Joint Dictionary Learning and Reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 141–150. [Google Scholar]
- Lezama, J.; Qiu, Q.; Sapiro, G. Not Afraid of the Dark: NIR-VIS Face Recognition via Cross-Spectral Hallucination and Low-Rank Embedding. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6807–6816. [Google Scholar]
- Hu, S.; Short, N.; Riggan, B.S.; Chasse, M.; Sarfraz, M.S. Heterogeneous Face Recognition: Recent Advances in Infrared-to-Visible Matching. In Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017), Washington, DC, USA, 30 May–3 June 2017; pp. 883–890. [Google Scholar]
- Cho, S.W.; Baek, N.R.; Kim, M.C.; Kim, J.H.; Park, K.R. Face Detection in Nighttime Images Using Visible-Light Camera Sensors with Two-Step Faster Region-Based Convolutional Neural Network. Sensors 2018, 18, 2995. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhang, H.; Riggan, B.S.; Hu, S.; Short, N.J.; Patel, V.M. Synthesis of High-Quality Visible Faces from Polarimetric Thermal Faces using Generative Adversarial Networks. Int. J. Comput. Vis. 2019, 127, 845–862. [Google Scholar] [CrossRef] [Green Version]
- Di, X.; Riggan, B.S.; Hu, S.; Short, N.J.; Patel, V.M. Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis. CoRR. 2019. Available online: https://arxiv.org/abs/1901.00889 (accessed on 25 March 2021).
- Le, H.A.; Kakadiaris, I.A. DBLFace: Domain-Based Labels for NIR-VIS Heterogeneous Face Recognition. In Proceedings of the 2020 IEEE International Joint Conference on Biometrics (IJCB), Seoul, Korea, 27–29 August 2020; pp. 1–10. [Google Scholar]
- Kirschner, J. SWIR for Target Detection, Recognition, and Identification. 2011. Available online: http://www.photonicsonline.com/doc.mvc/SWIR-For-Target-Detection-Recognition-And-0002 (accessed on 4 January 2015).
- Lemoff, B.E.; Martin, R.B.; Sluch, M.; Kafka, K.M.; Dolby, A.; Ice, R. Automated, Long-Range, Night/Day, Active-SWIR Face Recognition System. In Proceedings of the SPIE Conference on Infrared Technology and Applications XL, Baltimore, MD, USA, 5–9 May 2014; p. 90703. [Google Scholar]
- Gassenq, A.; Gencarelli, F.; Van Campenhout, J.; Shimura, Y.; Loo, R.; Narcy, G.; Vincent, B.; Roelkens, G. GeSn/Ge heterostructure short-wave infrared photodetectors on silicon. Opt. Express 2012, 20, 27297–27303. [Google Scholar] [CrossRef] [Green Version]
- Cao, Z.; Schmid, N.A.; Bourlai, T. Composite multilobe descriptors for cross-spectral recognition of full and partial face. Opt. Eng. 2016, 55, 083107. [Google Scholar] [CrossRef]
- Klare, B.; Jain, A.K. Heterogeneous Face Recognition: Matching NIR to Visible Light Images. In Proceedings of the International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 1513–1516. [Google Scholar]
- Cao, Z.; Schmid, N.A. Fusion of operators for heterogeneous periocular recognition at varying ranges. Pattern Recognit. Lett. 2016, 82, 170–180. [Google Scholar] [CrossRef]
- Sarfraz, M.S.; Stiefelhagen, R. Deep Perceptual Mapping for Cross-Modal Face Recognition. Int. J. Comput. Vis. 2017, 122, 426–438. [Google Scholar] [CrossRef] [Green Version]
- Oh, B.S.; Oh, K.; Teoh, A.B.J.; Lin, Z.; Toh, K.A. A Gabor-based Network for Heterogeneous Face Recognition. Neurocomputing 2017, 261, 253–265. [Google Scholar] [CrossRef]
- Iranmanesh, S.M.; Dabouei, A.; Kazemi, H.; Nasrabadi, N.M. Deep Cross Polarimetric Thermal-to-Visible Face Recognition. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, Australia, 20–23 February 2018; pp. 166–173. [Google Scholar]
- He, R.; Cao, J.; Song, L.; Sun, Z.; Tan, T. Adversarial Cross-Spectral Face Completion for NIR-VIS Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1025–1037. [Google Scholar] [CrossRef]
- Liu, D.; Gao, X.; Wang, N.; Li, J.; Peng, C. Coupled Attribute Learning for Heterogeneous Face Recognition. IEEE Trans. Neural Networks Learn. Syst. 2020, 31, 4699–4712. [Google Scholar] [CrossRef] [PubMed]
- Cao, Z.; Schmid, N.A.; Li, X. Image Disparity in Cross-Spectral Face Recognition: Mitigating Camera and Atmospheric Effects. In Proceedings of the SPIE Conference on Automatic Target Recognition XXVI, Baltimore, MD, USA, 18–19 April 2016; p. 98440Z. [Google Scholar]
- Burton, A.M.; Wilson, S.; Cowan, M.; Bruce, V. Face recognition in poor-quality video: Evidence from security surveillance. Psychol. Sci. 1999, 10, 243–248. [Google Scholar] [CrossRef]
- Sellahewa, H.; Jassim, S. Image-quality-based adaptive face recognition. IEEE Trans. Instrum. Meas. 2010, 59, 805–813. [Google Scholar] [CrossRef] [Green Version]
- Kang, D.; Han, H.; Jain, A.K.; Lee, S.W. Nighttime face recognition at large standoff: Cross-distance and cross-spectral matching. Pattern Recognit. 2014, 47, 3750–3766. [Google Scholar] [CrossRef]
- Chang, K.; Bowyer, K.W.; Sarkar, S.; Victor, B. Comparison and combination of ear and face images in appearance-based biometrics. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1160–1165. [Google Scholar] [CrossRef]
- Fronthaler, H.; Kollreider, K.; Bigun, J. Automatic image quality assessment with application in biometrics. In Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), New York, NY, USA, 17–22 June 2006; p. 30. [Google Scholar]
- Abaza, A.; Harrison, M.A.; Bourlai, T. Quality metrics for practical face recognition. In Proceedings of the 2012 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 3103–3107. [Google Scholar]
- Grother, P.; Tabassi, E. Performance of biometric quality measures. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 531–543. [Google Scholar] [CrossRef]
- Nandakumar, K.; Chen, Y.; Jain, A.K.; Dass, S.C. Quality-based score level fusion in multibiometric systems. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), Hong Kong, China, 20–24 August 2006; Volume 4, pp. 473–476. [Google Scholar]
- Nandakumar, K.; Chen, Y.; Dass, S.C.; Jain, A.K. Likelihood ratio-based biometric score fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 342–347. [Google Scholar] [CrossRef] [PubMed]
- Kryszczuk, K.; Drygajlo, A. Improving classification with class-independent quality measures: Q-stack in face verification. In Advances in Biometrics; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1124–1133. [Google Scholar]
- Kryszczuk, K.; Drygajlo, A. Improving biometric verification with class-independent quality information. IET Signal Process. 2009, 3, 310–321. [Google Scholar] [CrossRef] [Green Version]
- Richardson, W.H. Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. Am. 1972, 62, 55–59. [Google Scholar] [CrossRef]
- Whyte, O.; Sivic, J.; Zisserman, A. Deblurring Shaken and Partially Saturated Images. Int. J. Comput. Vis. 2014, 110, 185–201. [Google Scholar] [CrossRef]
- Kenig, T.; Kam, Z.; Feuer, A. Blind image deconvolution using machine learning for three-dimensional microscopy. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2191. [Google Scholar] [CrossRef] [PubMed]
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a Deep Convolutional Network for Image Super-Resolution. In Proceedings of the European Conference on Computer Vision (ECCV 2014), Lecture Notes in Computer Science, Zurich, Switzerland, 6–7 September 2014; Volume 8692, pp. 184–199. [Google Scholar]
- Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 769–777. [Google Scholar]
- Schuler, C.J.; Hirsch, M.; Harmeling, S.; Schölkopf, B. Learning to Deblur. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1439–1451. [Google Scholar] [CrossRef]
- Nah, S.; Kim, T.H.; Lee, K.M. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 257–265. [Google Scholar]
- Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8183–8192. [Google Scholar]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR. 2014. Available online: https://arxiv.org/abs/1409.1556v3 (accessed on 25 March 2021).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Johnson, P.A.; Lopez-Meyer, P.; Sazonova, N.; Hua, F.; Schuckers, S. Quality in face and iris research ensemble (Q-FIRE). In Proceedings of the 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), Washington, DC, USA, 27–29 September 2010; pp. 1–6. [Google Scholar]
- Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding Blind Deconvolution Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2354–2367. [Google Scholar] [CrossRef] [PubMed]
- Lee, T.B.; Jung, S.H.; Heo, Y.S. Progressive Semantic Face Deblurring. IEEE Access 2020, 8, 223548–223561. [Google Scholar] [CrossRef]
- Yasarla, R.; Perazzi, F.; Patel, V.M. Deblurring Face Images Using Uncertainty Guided Multi-Stream Semantic Networks. IEEE Trans. Image Process. 2020, 29, 6251–6263. [Google Scholar] [CrossRef]
- Martin, R.B.; Kafka, K.M.; Lemoff, B.E. Active-SWIR signatures for long-range night/day human detection and identification. In Proceedings of the SPIE Symposium on DSS, Baltimore, MD, USA, 29 April 2013; pp. 209–218. [Google Scholar]
- Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
- Guo, Y.; Xu, Z. Local Gabor phase difference pattern for face recognition. In Proceedings of the International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; pp. 1–4. [Google Scholar]
- Chen, J.; Shan, S.; He, C.; Zhao, G.; Pietikainen, M.; Chen, X.; Gao, W. WLD: A Robust Local Image Descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1705–1720. [Google Scholar] [CrossRef]
Index | Type | Patch Size | Remark | Output Size |
---|---|---|---|---|
1 | C1_1 | 5 × 1 | vertical | 116 × 112 × 16 |
2 | C1_2 | 5 × 1 | vertical | 112 × 112 × 16 |
3 | C1_3 | 5 × 1 | vertical | 108 × 112 × 16 |
4 | C1_4 | 5 × 1 | vertical | 104 × 112 × 16 |
5 | C1_5 | 5 × 1 | vertical | 100 × 112 × 16 |
6 | C1_6 | 5 × 1 | vertical | 96 × 112 × 16 |
7 | C1_7 | 5 × 1 | vertical | 92 × 112 × 16 |
8 | C1_8 | 5 × 1 | vertical | 88 × 112 × 16 |
9 | C1_9 | 5 × 1 | vertical | 84 × 112 × 16 |
10 | C1_10 | 5 × 1 | vertical | 80 × 112 × 16 |
11 | C1_11 | 5 × 1 | vertical | 76 × 112 × 16 |
12 | C2_1 | 1 × 5 | horizontal | 76 × 108 × 16 |
13 | C2_2 | 1 × 5 | horizontal | 76 × 104 × 16 |
14 | C2_3 | 1 × 5 | horizontal | 76 × 100 × 16 |
15 | C2_4 | 1 × 5 | horizontal | 76 × 96 × 16 |
16 | C2_5 | 1 × 5 | horizontal | 76 × 92 × 16 |
17 | C2_6 | 1 × 5 | horizontal | 76 × 88 × 16 |
18 | C2_7 | 1 × 5 | horizontal | 76 × 84 × 16 |
19 | C2_8 | 1 × 5 | horizontal | 76 × 80 × 16 |
20 | C2_9 | 1 × 5 | horizontal | 76 × 76 × 16 |
21 | C2_10 | 1 × 5 | horizontal | 76 × 72 × 16 |
22 | C3_1 | 5 × 5 | square | 72 × 68 × 32 |
23 | C3_2 | 5 × 5 | square | 68 × 64 × 32 |
24 | C3_3 | 5 × 5 | square | 64 × 60 × 32 |
25 | C3_4 | 5 × 5 | square | 60 × 56 × 32 |
26 | C4_1 | 1 × 1 | up-sampling | 120 × 112 × 128 |
27 | C4_2 | 5 × 5 | padding | 120 × 112 × 1 |
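A PyTorch sketch of the network implied by the layer table above is given below: 11 vertical 5 × 1 convolutions, 10 horizontal 1 × 5 convolutions, 4 square 5 × 5 convolutions, and a final 1 × 1 plus padded 5 × 5 reconstruction head. The 120 × 112 input size is inferred from the listed output sizes, and the ReLU activations and bilinear up-sampling are assumptions not fixed by the table.

```python
# PyTorch sketch mirroring the layer table: stacks of 1D convolutions followed
# by square convolutions and a reconstruction head. Activations and the
# up-sampling mode are assumptions; layer counts, kernel shapes, and channel
# widths follow the table.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SVDFaceSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # C1_1 .. C1_11: vertical 5x1 kernels, valid convolution, 16 channels.
        self.vertical = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else 16, 16, kernel_size=(5, 1)) for i in range(11)]
        )
        # C2_1 .. C2_10: horizontal 1x5 kernels, valid convolution, 16 channels.
        self.horizontal = nn.ModuleList(
            [nn.Conv2d(16, 16, kernel_size=(1, 5)) for _ in range(10)]
        )
        # C3_1 .. C3_4: square 5x5 kernels, valid convolution, 32 channels.
        self.square = nn.ModuleList(
            [nn.Conv2d(16 if i == 0 else 32, 32, kernel_size=5) for i in range(4)]
        )
        # C4_1: 1x1 convolution to 128 channels, applied after 2x up-sampling.
        self.expand = nn.Conv2d(32, 128, kernel_size=1)
        # C4_2: padded 5x5 convolution back to a single-channel image.
        self.reconstruct = nn.Conv2d(128, 1, kernel_size=5, padding=2)

    def forward(self, x):                # x: (N, 1, 120, 112)
        for conv in self.vertical:       # after the loop: (N, 16, 76, 112)
            x = F.relu(conv(x))
        for conv in self.horizontal:     # after the loop: (N, 16, 76, 72)
            x = F.relu(conv(x))
        for conv in self.square:         # after the loop: (N, 32, 60, 56)
            x = F.relu(conv(x))
        x = F.interpolate(x, size=(120, 112), mode="bilinear", align_corners=False)
        x = F.relu(self.expand(x))       # C4_1 -> (N, 128, 120, 112)
        return self.reconstruct(x)       # C4_2 -> (N, 1, 120, 112)
```

Each valid (unpadded) 5 × 1 or 1 × 5 layer shrinks one spatial dimension by 4, which reproduces the 120 → 76 and 112 → 72 progressions listed in the table.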
| Network Type | Dimensions | Cost | Number of Parameters | Training Data Needed | Interpretable |
|---|---|---|---|---|---|
| Vertical SVD Networks | 1D | Low | Small | | Yes |
| Horizontal SVD Networks | 1D | Low | Small | | Yes |
| Other Networks | 2D | High | Large | | No |
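The "compact parameters" advantage in the table above can be illustrated with a quick count. Assuming 16 input and 16 output channels and a kernel length of 5 (matching the layer table), a vertical plus horizontal 1D pair uses well under half the weights of a single full 2D layer; the numbers below hold only for this assumed configuration.

```python
# Rough weight count: a 5x1 layer followed by a 1x5 layer versus one full 5x5
# layer, each with 16 input and 16 output channels (biases ignored).
c_in, c_out, k = 16, 16, 5

stacked_1d = c_in * c_out * k + c_in * c_out * k  # vertical 5x1 + horizontal 1x5
full_2d = c_in * c_out * k * k                    # single square 5x5 layer

print(stacked_1d)  # 2560
print(full_2d)     # 6400
```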
| CASE | METHOD | GAR1 (%) | GAR2 (%) | EER (%) |
|---|---|---|---|---|
| SWIR 50 m | Original | 91.88 | 62.11 | 8.90 |
| | Downgrading (Gaussian Smoothing) | 92.93 | 67.09 | 7.92 |
| | Upgrading (proposed) | 96.43 | 69.26 | 5.53 |
| SWIR 106 m | Original | 82.50 | 44.79 | 14.17 |
| | Downgrading (Gaussian Smoothing) | 86.74 | 51.67 | 11.75 |
| | Upgrading (proposed) | 91.81 | 52.78 | 9.04 |
| CASE | METHOD | GAR1 (%) | GAR2 (%) | EER (%) |
|---|---|---|---|---|
| NIR 50 m | Original | 92.23 | 68.21 | 8.71 |
| | Downgrading (Gaussian Smoothing) | 93.42 | 70.24 | 7.63 |
| | Upgrading (proposed) | 96.08 | 70.45 | 5.95 |
| NIR 106 m | Original | 64.48 | 13.28 | 23.24 |
| | Downgrading (Gaussian Smoothing) | 66.38 | 15.96 | 21.73 |
| | Upgrading (proposed) | 73.80 | 17.79 | 18.53 |
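The downgrading baseline compared in the two tables above amounts to smoothing the high-quality (visible) gallery faces so that their sharpness better matches the degraded IR probes. A minimal sketch is given below; the kernel size and sigma are placeholders, since the actual settings are not specified in this summary.

```python
# Hedged sketch of the Gaussian-smoothing downgrading baseline.
import cv2


def downgrade_visible_face(vis_face, ksize=(5, 5), sigma=1.5):
    """Reduce the quality of a visible-light face by Gaussian smoothing.

    The kernel size and sigma are illustrative placeholders only.
    """
    return cv2.GaussianBlur(vis_face, ksize, sigma)
```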
| CASE | METHOD | GAR1 (%) | GAR2 (%) | EER (%) |
|---|---|---|---|---|
| SWIR 50 m | Original | 91.88 | 62.11 | 8.90 |
| | Non-cascaded (BM3D alone) | 88.23 | 53.92 | 11.21 |
| | Cascaded enhancement (BM3D + CNN deblur) | 96.43 | 69.26 | 5.53 |
| SWIR 106 m | Original | 82.50 | 44.79 | 14.17 |
| | Non-cascaded (BM3D alone) | 74.79 | 35.56 | 18.33 |
| | Cascaded enhancement (BM3D + CNN deblur) | 91.81 | 52.78 | 9.04 |
| CASE | METHOD | GAR1 (%) | GAR2 (%) | EER (%) |
|---|---|---|---|---|
| NIR 50 m | Original | 92.23 | 68.21 | 8.71 |
| | Non-cascaded (BM3D alone) | 91.04 | 65.97 | 9.38 |
| | Cascaded enhancement (BM3D + CNN deblur) | 96.08 | 70.45 | 5.95 |
| NIR 106 m | Original | 64.48 | 13.28 | 23.24 |
| | Non-cascaded (BM3D alone) | 60.95 | 8.97 | 23.37 |
| | Cascaded enhancement (BM3D + CNN deblur) | 73.80 | 17.79 | 18.53 |
| CASE | METHOD | GAR1 (%) | GAR2 (%) | EER (%) |
|---|---|---|---|---|
| SWIR 50 m | Original | 91.88 | 62.11 | 8.90 |
| | Laplacian Sharpening [22] | 94.33 | 67.44 | 7.29 |
| | Blind Deconvolution [47] | 92.99 | 65.90 | 8.04 |
| | DeblurGAN [42] | 93.56 | 63.10 | 7.59 |
| | Progressive Semantic Deblurring [48] | 93.28 | 62.54 | 7.85 |
| | UMSN Face Deblurring [49] | 94.05 | 61.55 | 7.28 |
| | SVDFace (proposed) | 96.43 | 69.26 | 5.53 |
| SWIR 106 m | Original | 82.50 | 44.79 | 14.17 |
| | Laplacian Sharpening [22] | 90.00 | 52.15 | 10.00 |
| | Blind Deconvolution [47] | 84.65 | 47.85 | 13.42 |
| | DeblurGAN [42] | 89.80 | 47.91 | 10.14 |
| | Progressive Semantic Deblurring [48] | 87.15 | 43.13 | 11.67 |
| | UMSN Face Deblurring [49] | 88.40 | 46.88 | 10.97 |
| | SVDFace (proposed) | 91.81 | 52.78 | 9.04 |
| CASE | METHOD | GAR1 (%) | GAR2 (%) | EER (%) |
|---|---|---|---|---|
| NIR 50 m | Original | 92.23 | 68.21 | 8.71 |
| | Laplacian Sharpening [22] | 94.12 | 70.87 | 7.12 |
| | Blind Deconvolution [47] | 92.23 | 69.26 | 8.47 |
| | DeblurGAN [42] | 93.28 | 65.97 | 8.19 |
| | Progressive Semantic Deblurring [48] | 92.44 | 64.71 | 8.53 |
| | UMSN Face Deblurring [49] | 93.27 | 65.90 | 7.99 |
| | SVDFace (proposed) | 96.08 | 70.45 | 5.95 |
| NIR 106 m | Original | 64.48 | 13.28 | 23.24 |
| | Laplacian Sharpening [22] | 66.81 | 16.38 | 20.39 |
| | Blind Deconvolution [47] | 63.91 | 17.58 | 21.29 |
| | DeblurGAN [42] | 65.75 | 15.40 | 21.32 |
| | Progressive Semantic Deblurring [48] | 64.62 | 15.25 | 21.45 |
| | UMSN Face Deblurring [49] | 66.60 | 15.54 | 20.67 |
| | SVDFace (proposed) | 73.80 | 17.79 | 18.53 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).