Face Beneath the Ink: Synthetic Data and Tattoo Removal with Application to Face Recognition
Abstract
1. Introduction
- A novel algorithm for synthetically adding facial tattoos to face images.
- An algorithm, referred to as TRNet, for removing tattoos from facial images; it is trained using only facial images with synthetically added tattoos.
- An experimental analysis of the quality of the tattoo removal.
- An experimental analysis of the effect of facial tattoo removal on the performance of a face recognition system, showcasing the applicability of tattoo removal to face recognition.
2. Related Work
2.1. Synthetic Data Generation for Face Analysis
2.2. Facial Alterations
2.3. Facial Completion
3. Facial Tattoo Generator
3.1. Placement of Tattoos
3.2. Blending
3.3. Generation Strategies
4. Synthetic Tattoo Database
5. Tattoo Removal
5.1. Models
- pix2pix is a supervised conditional GAN for image-to-image translation [50]. The generator uses a U-Net architecture, whereas the discriminator is based on a PatchGAN classifier, which divides the image into patches and discriminates between bona fide (i.e., real) and fake images.
- Tattoo Removal Net (TRNet) is a U-Net architecture [24,51] which utilizes spectral normalization and self-attention. The network was trained using only the synthetic data described in Section 4. An illustration of the used U-Net architecture is shown in Figure 12. The encoder of the network is based on ResNet34, and the decoder consists of four main blocks and utilizes PixelShuffling [52]. The loss function is a combination of feature loss (perceptual loss) from [53], Gram matrix style loss [54], and pixel (L1) loss. For the Gram matrix loss and the feature loss, blocks from a pre-trained VGG-16 model are used [51,55].
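The composite loss described above can be sketched as follows. This is a simplified NumPy illustration, not the authors' implementation: the VGG-16 feature maps are assumed to be precomputed elsewhere, and the loss weights `w_feat`, `w_style`, and `w_pix` are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map -> normalized (C, C) Gram matrix
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def combined_loss(pred_feats, target_feats, pred_img, target_img,
                  w_feat=1.0, w_style=1.0, w_pix=1.0):
    # feature (perceptual) loss: distance between VGG feature maps [53]
    feat_loss = sum(np.abs(p - t).mean()
                    for p, t in zip(pred_feats, target_feats))
    # style loss [54]: distance between Gram matrices of the same features
    style_loss = sum(np.abs(gram_matrix(p) - gram_matrix(t)).mean()
                     for p, t in zip(pred_feats, target_feats))
    # pixel-level L1 loss between output and ground-truth image
    pix_loss = np.abs(pred_img - target_img).mean()
    return w_feat * feat_loss + w_style * style_loss + w_pix * pix_loss
```

Matching Gram matrices encourages the output to reproduce the texture statistics of the tattoo-free target, while the pixel and feature terms keep it close in content.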
5.2. Quality Metrics
- Peak signal-to-noise ratio (PSNR) is a measurement of error between an input and an output image and is calculated as PSNR = 10 · log10(MAX^2 / MSE), where MSE is the mean squared error between the two images and MAX is the maximum possible pixel value (e.g., 255 for 8-bit images).
- Mean Structural Similarity Index (MSSIM), as given in [56], averages the structural similarity (SSIM) index over local image windows; for two corresponding windows x and y, SSIM(x, y) = ((2·μx·μy + C1)(2·σxy + C2)) / ((μx^2 + μy^2 + C1)(σx^2 + σy^2 + C2)), where μ, σ^2, and σxy denote the local means, variances, and covariance, and C1, C2 are small constants that stabilize the division.
- Visual Information Fidelity (VIF) is a full-reference image quality assessment measurement proposed by Sheikh and Bovik in [57]. VIF is derived from a statistical model for natural scenes as well as models for image distortion and the human visual system. VIF returns a value in the range of 0 to 1, where 1 indicates that the ground truth and inpainted images are identical. We use the pixel domain version as implemented in [58].
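The first two metrics can be sketched in a few lines of NumPy. This is a simplified illustration, not the evaluation code used in the paper: `mssim` here uses uniform non-overlapping windows instead of the 11×11 Gaussian-weighted sliding window of Wang et al. [56], and the window size `win` is an illustrative choice.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # peak signal-to-noise ratio in dB; higher is better
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def mssim(ref, test, max_val=255.0, win=8):
    # mean SSIM over non-overlapping win x win windows (simplified)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    scores = []
    for i in range(0, ref.shape[0] - win + 1, win):
        for j in range(0, ref.shape[1] - win + 1, win):
            x = ref[i:i + win, j:j + win]
            y = test[i:i + win, j:j + win]
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            scores.append(((2 * mx * my + c1) * (2 * cov + c2)) /
                          ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
    return float(np.mean(scores))
```

For identical images, PSNR diverges to infinity and MSSIM equals 1; both drop as the reconstructed (tattoo-removed) image deviates from the ground truth.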
5.3. Removal Quality Results
6. Application to Face Recognition
6.1. Experimental Setup
- Database: for the evaluation, we use the publicly available HDA Facial Tattoo and Painting Database (https://dasec.h-da.de/research/biometrics/hda-facial-tattoo-and-painting-database, accessed on 13 December 2022), which consists of 250 image pairs of individuals with and without real facial tattoos. The database was originally collected by Ibsen et al. in [5]. All images have been aligned using the RetinaFace facial detector [59]. Examples of original image pairs (before tattoo removal) are given in Figure 16. These pairs of images are used for evaluating the performance of a face recognition system. To evaluate the effect of tattoo removal, the models described in Section 5.1 are applied to the facial images containing tattoos, whereafter the resulting images are used during the evaluation.
- Face recognition system: to evaluate the applicability of tattoo removal for face recognition, we use the established ArcFace pre-trained model (LResNet100E-IR, ArcFace@ms1m-refine-v2) with the RetinaFace facial detector.
- Recognition performance metrics: the effect of removing facial tattoos is evaluated empirically [60]. Specifically, we measure the FNMR at operationally relevant thresholds corresponding to an FMR of 0.1% and 1%:
- False Match Rate (FMR): the proportion of the completed biometric non-mated comparison trials that result in a false match.
- False Non-Match Rate (FNMR): the proportion of the completed biometric mated comparison trials that result in a false non-match.
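Given sets of mated and non-mated comparison scores, these metrics can be computed as below. This is a sketch rather than the evaluation code of the paper; in particular, deriving the operational threshold as a quantile of the non-mated score distribution is one common convention, assumed here for illustration.

```python
import numpy as np

def fmr(nonmated, thr):
    # false match rate: fraction of non-mated scores at or above the threshold
    return float(np.mean(np.asarray(nonmated) >= thr))

def fnmr(mated, thr):
    # false non-match rate: fraction of mated scores below the threshold
    return float(np.mean(np.asarray(mated) < thr))

def fnmr_at_fmr(mated, nonmated, target_fmr):
    # operational threshold: the score above which at most target_fmr
    # of the non-mated comparisons fall (quantile of non-mated scores)
    thr = float(np.quantile(np.asarray(nonmated), 1.0 - target_fmr))
    return fnmr(mated, thr), thr
```

Calling `fnmr_at_fmr(mated, nonmated, 0.001)` and `fnmr_at_fmr(mated, nonmated, 0.01)` then yields the FNMR at the two operating points reported in the paper (FMR = 0.1% and 1%).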
6.2. Experimental Results
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zeng, D.; Veldhuis, R.; Spreeuwers, L. A survey of face recognition techniques under occlusion. IET Biom. 2021, 10, 581–606. [Google Scholar] [CrossRef]
- Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
- Kurutz, S. Face Tattoos Go Mainstream. 2018. Available online: https://www.nytimes.com/2018/08/04/style/face-tattoos.html (accessed on 3 December 2022).
- Abrams, M. Why Are Face Tattoos the Latest Celebrity Trend. 2020. Available online: https://www.standard.co.uk/insider/style/face-tattoos-celebrity-trend-justin-bieber-presley-gerber-a4360511.html (accessed on 3 December 2022).
- Ibsen, M.; Rathgeb, C.; Fink, T.; Drozdowski, P.; Busch, C. Impact of facial tattoos and paintings on face recognition systems. IET Biom. 2021, 10, 706–719. [Google Scholar] [CrossRef]
- Zhao, S.; Liu, W.; Liu, S.; Ge, J.; Liang, X. A hybrid-supervision learning algorithm for real-time un-completed face recognition. Comput. Electr. Eng. 2022, 101, 108090. [Google Scholar] [CrossRef]
- Mathai, J.; Masi, I.; AbdAlmageed, W. Does Generative Face Completion Help Face Recognition? In Proceedings of the International Conference on Biometrics (ICB), Crete, Greece, 4–7 June 2019; pp. 1–8. [Google Scholar]
- Bacchini, F.; Lorusso, L. A tattoo is not a face. Ethical aspects of tattoo-based biometrics. J. Inf. Commun. Ethics Soc. 2017, 16, 110–122. [Google Scholar] [CrossRef]
- Wood, E.; Baltrusaitis, T.; Hewitt, C.; Dziadzio, S.; Cashman, T.J.; Shotton, J. Fake It Till You Make It: Face Analysis in the Wild Using Synthetic Data Alone. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 3681–3691. [Google Scholar]
- Joshi, I.; Grimmer, M.; Rathgeb, C.; Busch, C.; Bremond, F.; Dantcheva, A. Synthetic Data in Human Analysis: A Survey. arXiv 2022, arXiv:2208.09191. [Google Scholar]
- Rathgeb, C.; Dantcheva, A.; Busch, C. Impact and Detection of Facial Beautification in Face Recognition: An Overview. IEEE Access 2019, 7, 152667–152678. [Google Scholar] [CrossRef]
- European Council. Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation). 2016. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 (accessed on 13 December 2022).
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.C.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Annual Conference on Neural Information Processing Systems 2014, Montreal, QC, Canada, 8–13 December 2014; Volume 27. [Google Scholar]
- Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 4396–4405. [Google Scholar]
- Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and Improving the Image Quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8107–8116. [Google Scholar]
- Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-Free Generative Adversarial Networks. In Proceedings of the NeurIPS, Virtual, 6–14 December 2021. [Google Scholar]
- Grimmer, M.; Raghavendra, R.; Christoph, C. Deep Face Age Progression: A Survey. IEEE Access 2021, 9, 83376–83393. [Google Scholar] [CrossRef]
- Cappelli, R.; Maio, D.; Maltoni, D. SFinGe: An Approach to Synthetic Fingerprint Generation. In Proceedings of the International Workshop on Biometric Technologies, Calgary, AB, Canada, 15 June 2004. [Google Scholar]
- Priesnitz, J.; Rathgeb, C.; Buchmann, N.; Busch, C. SynCoLFinGer: Synthetic Contactless Fingerprint Generator. arXiv 2021, arXiv:2110.09144. [Google Scholar] [CrossRef]
- Wyzykowski, A.B.V.; Segundo, M.P.; de Paula Lemes, R. Level Three Synthetic Fingerprint Generation. In Proceedings of the 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 9250–9257. [Google Scholar]
- Drozdowski, P.; Rathgeb, C.; Busch, C. SIC-Gen: A Synthetic Iris-Code Generator. In Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 20–22 September 2017; pp. 61–69. [Google Scholar]
- Dole, J. Synthetic Iris Generation, Manipulation, & ID Preservation. 2021. Available online: https://eab.org/cgi-bin/dl.pl?/upload/documents/2256/06-Dole-SyntheticIrisPresentation-210913.pdf (accessed on 3 December 2022).
- Xu, X.; Matkowski, W.M.; Kong, A.W.K. A portrait photo-to-tattoo transform based on digital tattooing. Multimed. Tools Appl. 2020, 79, 24367–24392. [Google Scholar] [CrossRef]
- Madhavan, V. SkinDeep. 2021. Available online: https://github.com/vijishmadhavan/SkinDeep (accessed on 3 December 2022).
- Singh, R.; Vatsa, M.; Bhatt, H.S.; Bharadwaj, S.; Noore, A.; Nooreyezdan, S.S. Plastic Surgery: A New Dimension to Face Recognition. IEEE Trans. Inf. Forensics Secur. 2010, 5, 441–448. [Google Scholar] [CrossRef]
- Rathgeb, C.; Dogan, D.; Stockhardt, F.; Marsico, M.D.; Busch, C. Plastic Surgery: An Obstacle for Deep Face Recognition? In Proceedings of the 15th IEEE Computer Society Workshop on Biometrics (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 3510–3517. [Google Scholar]
- International Civil Aviation Organization. Machine Readable Passports—Part 9—Deployment of Biometric Identification and Electronic Storage of Data in eMRTDs, 2021. Available online: https://www.icao.int/publications/documents/9303_p9_cons_en.pdf (accessed on 13 December 2022).
- Dantcheva, A.; Chen, C.; Ross, A. Can facial cosmetics affect the matching accuracy of face recognition systems? In Proceedings of the IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; pp. 391–398. [Google Scholar]
- Wang, T.Y.; Kumar, A. Recognizing human faces under disguise and makeup. In Proceedings of the IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Sendai, Japan, 29 February–2 March 2016; pp. 1–7. [Google Scholar]
- Chen, C.; Dantcheva, A.; Swearingen, T.; Ross, A. Spoofing faces using makeup: An investigative study. In Proceedings of the IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), New Delhi, India, 22–24 February 2017; pp. 1–8. [Google Scholar]
- Rathgeb, C.; Drozdowski, P.; Fischer, D.; Busch, C. Vulnerability Assessment and Detection of Makeup Presentation Attacks. In Proceedings of the International Workshop on Biometrics and Forensics (IWBF), Porto, Portugal, 29–30 April 2020; pp. 1–6. [Google Scholar]
- Singh, M.; Singh, R.; Vatsa, M.; Ratha, N.K.; Chellappa, R. Recognizing Disguised Faces in the Wild. Trans. Biom. Behav. Identity Sci. (TBIOM) 2019, 1, 97–108. [Google Scholar] [CrossRef]
- Ferrara, M.; Franco, A.; Maltoni, D. The magic passport. In Proceedings of the IEEE International Joint Conference on Biometrics, Clearwater, FL, USA, 29 September–2 October 2014; pp. 1–7. [Google Scholar]
- Scherhag, U.; Rathgeb, C.; Merkle, J.; Breithaupt, R.; Busch, C. Face Recognition Systems Under Morphing Attacks: A Survey. IEEE Access 2019, 7, 23012–23026. [Google Scholar] [CrossRef]
- Rathgeb, C.; Botaljov, A.; Stockhardt, F.; Isadskiy, S.; Debiasi, L.; Uhl, A.; Busch, C. PRNU-based Detection of Facial Retouching. IET Biom. 2020, 9, 154–164. [Google Scholar] [CrossRef]
- Hedberg, M.F. Effects of sample stretching in face recognition. In Proceedings of the 19th International Conference of the Biometrics Special Interest Group, online, 16–18 September 2020; pp. 1–4. [Google Scholar]
- Verdoliva, L. Media Forensics and DeepFakes: An Overview. IEEE J. Sel. Top. Signal Process. 2020, 14, 910–932. [Google Scholar] [CrossRef]
- Ferrer, C.C.; Pflaum, B.; Pan, J.; Dolhansky, B.; Bitton, J.; Lu, J. Deepfake Detection Challenge Results: An Open Initiative to Advance AI. 2020. Available online: https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/ (accessed on 3 December 2022).
- Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Morales, A.; Ortega-Garcia, J. Deepfakes and beyond: A Survey of face manipulation and fake detection. Inf. Fusion 2020, 64, 131–148. [Google Scholar] [CrossRef]
- Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Globally and Locally Consistent Image Completion. ACM Trans. Graph. 2017, 36, 107. [Google Scholar] [CrossRef]
- Li, Y.; Liu, S.; Yang, J.; Yang, M.H. Generative Face Completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5892–5900. [Google Scholar]
- Zhao, Y.; Chen, W.; Xing, J.; Li, X.; Bessinger, Z.; Liu, F.; Zuo, W.; Yang, R. Identity Preserving Face Completion for Large Ocular Region Occlusion. In Proceedings of the 29th British Machine Vision Conference (BMVC), Newcastle, UK, 3–6 September 2018. [Google Scholar]
- Song, L.; Cao, J.; Song, L.; Hu, Y.; He, R. Geometry-Aware Face Completion and Editing. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, Hawaii, USA, 27 January–1 February 2019; pp. 2506–2513. [Google Scholar]
- Din, N.U.; Javed, K.; Bae, S.; Yi, J. A Novel GAN-Based Network for Unmasking of Masked Face. IEEE Access 2020, 8, 44276–44287. [Google Scholar] [CrossRef]
- King, D. Dlib-ml: A Machine Learning Toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
- Feng, Y.; Wu, F.; Shao, X.; Wang, Y.; Zhou, X. Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Phillips, P.J.; Wechsler, H.; Huang, J.; Rauss, P.J. The FERET database and evaluation procedure for face-recognition algorithms. Image Vis. Comput. 1998, 16, 295–306. [Google Scholar] [CrossRef]
- Phillips, P.J.; Flynn, P.J.; Scruggs, W.T.; Bowyer, K.W.; Chang, J.; Hoffman, K.; Marques, J.; Min, J.; Worek, W.J. Overview of the Face Recognition Grand Challenge. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–26 June 2005; Volume 1, pp. 947–954. [Google Scholar]
- Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. In Proceedings of the IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976. [Google Scholar]
- Howard, J. fastai. 2018. Available online: https://github.com/fastai/fastai (accessed on 13 December 2022).
- Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
- Gatys, L.A.; Ecker, A.S.; Bethge, M. A Neural Algorithm of Artistic Style. arXiv 2015, arXiv:1508.06576. [Google Scholar] [CrossRef]
- Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training sample size. In Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 730–734. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
- Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444. [Google Scholar] [CrossRef] [PubMed]
- Khalel, A. Sewar. 2021. Available online: https://github.com/andrewekhalel/sewar (accessed on 3 December 2022).
- Deng, J.; Guo, J.; Ververas, E.; Kotsia, I.; Zafeiriou, S. RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- ISO/IEC JTC1 SC37 Biometrics; ISO/IEC 19795-1:2021; Information Technology—Biometric Performance Testing and Reporting—Part 1: Principles and Framework. International Organization for Standardization: Geneva, Switzerland, 2021.
| Database | Subjects | Bona Fide Images | Tattooed Images |
|---|---|---|---|
| FERET | 529 | 621 | 6743 |
| FRGCv2 | 533 | 1436 | 16,209 |
| CelebA | 6872 | 6872 | 6872 |
| Scenario | MSSIM (Portrait) | PSNR (Portrait) | VIF (Portrait) | MSSIM (Inner) | PSNR (Inner) | VIF (Inner) |
|---|---|---|---|---|---|---|
| Tattooed | | | | | | |
| pix2pix | | | | | | |
| TRNet | | | | | | |
| Type | EER% | FNMR% (FMR = 0.1%) | FNMR% (FMR = 1%) |
|---|---|---|---|
| Tattooed | | | |
| pix2pix | | | |
| TRNet | | | |
Share and Cite
Ibsen, M.; Rathgeb, C.; Drozdowski, P.; Busch, C. Face Beneath the Ink: Synthetic Data and Tattoo Removal with Application to Face Recognition. Appl. Sci. 2022, 12, 12969. https://doi.org/10.3390/app122412969