Facial Anonymization Model Evaluation Criteria: Development and Validation in Autonomous Vehicle Environments
Abstract
1. Introduction
2. Background & Related Work
2.1. Background
2.1.1. Computer Vision Algorithms
2.1.2. Deep Learning Algorithms
2.1.3. Facial Landmark Detection and Extraction Algorithms
2.2. Related Work
2.2.1. Technique Families and Typical Evaluation Patterns
2.2.2. Representative Studies and Metric Usage in Practice
3. Proposed Evaluation Metrics and Criteria
3.1. Generate Anonymized Image Datasets and Preprocessing
3.1.1. Generate Anonymized Image
3.1.2. Preliminary Experiments for Rounding Precision Setting
3.1.3. Dataset Preprocessing
- Original dataset: 50,000 original images extracted from CelebA
- 3D dataset: 50,000 images from 3D function application results
- 3D_round dataset: 50,000 images from 3D function + second decimal place rounding
- Depth dataset: 50,000 images from Depth function application results
- Depth_round dataset: 50,000 images from Depth function + second decimal place rounding
- SynergyNet dataset: 50,000 images from SynergyNet model application results
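The second-decimal-place rounding used to build the 3D_round and Depth_round datasets can be sketched as follows (a minimal illustration, interpreting "second decimal place rounding" as keeping two decimal digits; the function name is ours):

```python
import numpy as np

def round_function_values(values: np.ndarray) -> np.ndarray:
    """Round anonymization-function outputs to two decimal places,
    as done when building the 3D_round and Depth_round datasets."""
    return np.round(values, 2)

depth_values = np.array([0.12345, 0.6789, 0.5])
print(round_function_values(depth_values))  # [0.12 0.68 0.5 ]
```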
3.2. Establish Experimental Items
- Facial Landmark Reduction Rate: Facial anonymization may alter facial geometric structure in ways that reduce the number and reliability of landmarks detected in anonymized images relative to the originals. Wang et al. reported that Deepfake faces tend to exhibit fewer detected feature points than real faces, particularly in specific facial regions, because manipulation artifacts introduce “feature point defects” [29]. Motivated by this observation, we hypothesize that anonymized images yield fewer reliably detected facial landmarks than their corresponding original images.
- Facial Similarity: Effective facial anonymization should reduce how similar the anonymized face looks to the original face, because higher similarity can imply a higher risk of revealing identity-related information.
- Facial Re-identification Rate: Effective facial anonymization should make it difficult to match an anonymized face back to the person in the original image. The re-identification (re-ID) rate in this study is the proportion of anonymized facial images identified as the same person as in the corresponding original images.
- Algorithm-Specific Model Evaluation Metrics: Evaluation methods can be selected according to the characteristics of the algorithm used by each facial anonymization model. Computer vision algorithms can be evaluated using Detection Rate, False Positive Rate, and ROC curve [13,14], while deep learning algorithms can be evaluated using IS, FID, Precision/Recall, PPL, NND, Memorization Assessment, and AUC [17,25,26,27,28].
- Facial Landmark Region Anonymization Evaluation: A widely used approach in recent facial anonymization is to operate on facial landmarks. The five most characteristic regions of the face are the face contour, left eye, right eye, nose, and mouth. Since re-identification from virtual faces to real faces is possible through any of these regions, anonymized images should be checked to confirm that all five landmark regions are actually anonymized.
- Use of Facial Landmark Extraction Algorithms: To ensure the performance and reliability of facial landmark-based anonymization models, accurate landmark extraction must come first. It should therefore be evaluated whether the model detects landmarks accurately using reliable facial landmark extraction algorithms such as SIFT, SURF, ORB, or A-KAZE, since inaccurate landmark extraction can leave anonymization incomplete.
- Implementation of Overfitting Prevention Measures: Artificial intelligence models can experience overfitting problems where they become excessively optimized to training data, resulting in degraded performance on new data in real environments. To ensure stable performance and generalization capability of anonymization models, it must be reviewed whether overfitting prevention techniques such as Batch Normalization [38], Dropout [39], and L1/L2 Regularization [40] are appropriately applied.
- Training Dataset Diversity: Facial landmark regions (face contour, left eye, right eye, nose, mouth) can be occluded by hair, masks, sunglasses, or profile-facing behavior. Therefore, the training dataset should include sufficiently diverse situations, such as profile views, frontal views, and partial facial occlusions, so that the anonymization model can operate effectively even when key facial landmark regions are occluded in test images or videos.
- Generalization Capability Verification: To demonstrate the practical and generalization capability of anonymization models, evaluation on external datasets not used in the training process or real-environment data is essential. The process of confirming whether the model consistently performs effective anonymization not only on training data but also on images from various environments and conditions should be included.
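The first three experimental items reduce to simple ratios; a minimal sketch, assuming dataset-level averages and the 68-point dlib landmark model as the reference count (function names are ours):

```python
def landmark_reduction_rate(original_count: float, anonymized_count: float) -> float:
    """Percentage reduction in reliably detected landmarks after anonymization."""
    return (original_count - anonymized_count) / original_count * 100.0

def proportion_pct(positives: int, total: int) -> float:
    """Proportion (%) of images judged identical (facial similarity)
    or matched to the original person (re-ID rate)."""
    return positives / total * 100.0

# Depth_round figures from Section 3.3: 58.5732 of 68 landmarks remain on average.
print(round(landmark_reduction_rate(68, 58.5732), 2))  # 13.86
```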
3.3. Experimentation and Derive Evaluation Criteria
3.3.1. Statistical Evaluation Criteria Setting Methodology
3.3.2. Evaluation Criteria for Facial Landmark Reduction Rate
3.3.3. Evaluation Criteria for Facial Similarity
3.3.4. Evaluation Criteria for Facial Re-Identification Rate
3.3.5. Derive Evaluation Criteria
- Facial landmark reduction rate ≥ 13.86%
- Facial similarity ≤ 0.58%
- Re-identification rate (FaceNet) ≤ 21.75%
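Taken together, the derived thresholds act as a pass/fail screen; a sketch using the threshold values above (class and function names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AnonymizationScores:
    landmark_reduction_rate: float  # percent, higher is better
    facial_similarity: float        # percent judged "identical face", lower is better
    reid_rate: float                # percent, FaceNet re-identification, lower is better

def meets_criteria(s: AnonymizationScores) -> bool:
    """True only if all three criteria derived in Section 3.3.5 are satisfied."""
    return (s.landmark_reduction_rate >= 13.86
            and s.facial_similarity <= 0.58
            and s.reid_rate <= 21.75)

print(meets_criteria(AnonymizationScores(17.30, 0.10, 0.013)))  # True
print(meets_criteria(AnonymizationScores(3.30, 0.00, 0.000)))   # False
```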
4. Validation
4.1. Validation Experiments
4.1.1. Experimental Method
4.1.2. Validation Models
- RTFS Model: The RTFS model was implemented based on the real-time facial surface geometry estimation technology presented in the Google Research paper. The RTFS model used here generates a facial mesh composed of 468 3D vertices through MediaPipe FaceMesh and extracts depth information based on the z-coordinate values of each vertex [21]. It applies Delaunay triangulation to divide the facial surface into triangular patches and performs anonymization by applying gray-scale shading in the range 0–255, configured for the experiment, according to the depth value of each patch. This approach can effectively alter structural facial features through three-dimensional geometric transformation.
- EDIDMFV Model: The EDIDMFV model performs facial anonymization utilizing depth information, based on the research in [51]. It extracts 468 3D landmarks through MediaPipe FaceMesh and generates a depth map by scaling the z-coordinate information by a factor of 100. Using Delaunay triangulation, the model interpolates depth values for pixels inside each triangle with barycentric coordinates. It then applies a Gaussian blur with a 51 × 51 kernel and normalizes the result to the 0–1 range to produce the final depth map. Finally, it achieves anonymization by applying a gray-scale mask with gamma correction (γ = 0.7) to the facial region.
- LAFPAD Model: The LAFPAD model extracts 68 two-dimensional facial landmarks through the face-alignment library based on the research in [52,53]. This model generates a facial mask with an extended forehead region based on facial contours and performs limited anonymization, excluding eye and mouth regions. Anonymization is performed by applying gray-scale values in the range of 80–200, as specified for the experiment, according to the y-coordinate changes from forehead to chin, reflecting design characteristics for preserving action detection performance.
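The normalization and gamma-correction step shared by the depth-based models can be sketched without the MediaPipe dependency (a minimal illustration of the EDIDMFV mask step; the function name is ours, and a non-constant depth map is assumed):

```python
import numpy as np

def gamma_corrected_mask(depth_map: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Normalize a depth map to [0, 1] and apply gamma correction,
    mirroring the EDIDMFV model's final gray-scale mask (gamma = 0.7)."""
    d = depth_map.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min())  # min-max normalize to [0, 1]
    return np.power(d, gamma)                # gamma correction brightens mid-depths

depth = np.array([[0.0, 50.0], [100.0, 25.0]])  # stand-in for an interpolated depth map
mask = gamma_corrected_mask(depth)
print(mask.min(), mask.max())  # 0.0 1.0
```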
4.2. Validation Experiment Results
4.2.1. Verification of Landmark-Based Evaluation Metrics’ Model Discrimination Capability
4.2.2. Verification of Similarity and Re-Identification Evaluation Model Characteristic Reflection Capability
4.2.3. Demonstration of Evaluation Criteria’s Technical Approach Discrimination Capability
4.3. Comprehensive Evaluation of Each Model
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- European Union. General Data Protection Regulation (GDPR). 2016. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 4 February 2026).
- California Legislature. California Consumer Privacy Act of 2018 (CCPA) (AB 375). 2018. Available online: https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5 (accessed on 4 February 2026).
- National People’s Congress of China. Personal Information Protection Law of the People’s Republic of China (PIPL). 2021. Available online: http://www.npc.gov.cn/npc/c2/c30834/202108/t20210820_313088.htm (accessed on 4 February 2026).
- Japanese Government. Act on the Protection of Personal Information (APPI), as Amended. 2003. Available online: https://laws.e-gov.go.jp/law/415AC0000000057 (accessed on 4 February 2026).
- Brazilian Government. Lei Geral de Proteção de Dados Pessoais (LGPD) (Law No. 13,709/2018). 2018. Available online: https://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/L13709compilado.htm (accessed on 4 February 2026).
- Ministry of the Interior and Safety; R.o.K. Personal Information Protection Act (PIPA), as Amended. 2023. Available online: https://elaw.klri.re.kr/eng_service/lawView.do?hseq=62389&lang=ENG (accessed on 4 February 2026).
- ITU-T Study Group 17. Draft Recommendation ITU-T X.af-sec: Evaluation Methodologies for Anonymization Techniques Using Face Images in Autonomous Vehicles (Under Study). 2026. Available online: https://www.itu.int/Itu-t/workprog/wp_item.aspx?isn=21790 (accessed on 4 February 2026).
- ISO 21177:2024; Intelligent Transport Systems—ITS Station Security Services for Secure Session Establishment and Authentication Between Trusted Devices. ISO: Geneva, Switzerland, 2024. Available online: https://www.iso.org/standard/87225.html (accessed on 4 February 2026).
- IEEE Std 1609.2-2022; IEEE Standard for Wireless Access in Vehicular Environments (WAVE)–Security Services for Applications and Management Messages. IEEE Standards Association: Piscataway, NJ, USA, 2022. Available online: https://standards.ieee.org/ieee/1609.2/10258/ (accessed on 4 February 2026).
- United Nations Economic Commission for Europe. UN Regulation No. 155: Cyber Security and Cyber Security Management System Requirements. 2021. Available online: https://unece.org/sites/default/files/2023-02/R155e%20%282%29.pdf (accessed on 4 February 2026).
- United Nations Economic Commission for Europe. UN Regulation No. 156: Software Update and Software Update Management System. 2021. Available online: https://unece.org/sites/default/files/2024-03/R156e%20%282%29.pdf (accessed on 4 February 2026).
- European Automobile Manufacturers’ Association (ACEA). ACEA Principles of Data Protection in Relation to Connected Vehicles and Services. 2016. Available online: https://www.acea.auto/files/ACEA_Principles_of_Data_Protection.pdf (accessed on 4 February 2026).
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; IEEE: New York, NY, USA, 2005. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS 2014), Montréal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
- Choi, Y.; Choi, M.; Kim, M.; Ha, J.W.; Kim, S.; Choo, J. StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797. [Google Scholar]
- Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
- Wu, Y.; Ji, Q. Facial Landmark Detection: A Literature Survey. Int. J. Comput. Vis. 2019, 127, 115–142. [Google Scholar] [CrossRef]
- Chien, H.-J.; Chuang, C.-C.; Klette, R. When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry. In Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), Palmerston North, New Zealand, 21–22 November 2016. [Google Scholar]
- Kazemi, V.; Sullivan, J. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Kartynnik, Y.; Ablavatski, A.; Grishchenko, I.; Grundmann, M. Real-Time Facial Surface Geometry from Monocular Video on Mobile GPUs. arXiv 2019, arXiv:1907.06724. [Google Scholar] [CrossRef]
- Zhang, K.; Zhang, Z.; Li, Z.; Qiao, Y. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. IEEE Signal Process. Lett. 2016, 23, 1499–1503. [Google Scholar] [CrossRef]
- Zhang, S.; Zhu, X.; Lei, Z.; Shi, H.; Wang, X.; Li, S.Z. FaceBoxes: A CPU Real-Time Face Detector with High Accuracy. arXiv 2017, arXiv:1708.05234. [Google Scholar]
- Guo, J.; Zhu, X.; Yang, Y.; Yang, F.; Lei, Z.; Li, S.Z. Towards Fast, Accurate and Stable 3D Dense Face Alignment. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 152–168. [Google Scholar]
- Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. arXiv 2016, arXiv:1606.03498. [Google Scholar] [CrossRef]
- Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv 2017, arXiv:1706.08500. [Google Scholar]
- Kynkäänniemi, T.; Karras, T.; Laine, S.; Lehtinen, J.; Aila, T. Improved Precision and Recall Metric for Assessing Generative Models. arXiv 2019, arXiv:1904.06991. [Google Scholar] [CrossRef]
- Borji, A. Pros and Cons of GAN Evaluation Measures: New Developments. arXiv 2021, arXiv:2103.09396. [Google Scholar] [CrossRef]
- Wang, G.; Jiang, Q.; Jin, X.; Cui, X. FFR_FD: Effective and fast detection of DeepFakes via feature point defects. Information Sciences 2022, 596, 472–488. [Google Scholar] [CrossRef]
- Hellmann, F.; Mertes, S.; Benouis, M.; Hustinx, A.; Hsieh, T.-C.; Conati, C.; Krawitz, P.; André, E. GANonymization: A GAN-Based Face Anonymization Framework for Preserving Emotional Expressions. ACM Trans. Multimed. Comput. Commun. Appl. 2024, 21, 6. [Google Scholar] [CrossRef]
- Kuang, Z.; Yang, X.; Shen, Y.; Hu, C.; Yu, J. Facial Identity Anonymization via Intrinsic and Extrinsic Attention Distraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 19–21 June 2024; pp. 12406–12415. [Google Scholar]
- Kim, H.; Pang, Z.; Zhao, L.; Su, X.; Lee, J.S. Semantic-aware deidentification generative adversarial networks for identity anonymization. Multimed. Tools Appl. 2023, 82, 15535–15551. [Google Scholar] [CrossRef]
- Wen, Y.; Liu, B.; Ding, M.; Xie, R.; Song, L. IdentityDP: Differential Private Identification Protection for Face Images. arXiv 2021, arXiv:2103.01745. [Google Scholar] [CrossRef]
- Cao, J.; Liu, B.; Chen, X.; Ding, M.; Xie, R.; Song, L.; Li, Z.; Zhang, W.; Wu, Y. Face De-identification: State-of-the-art Methods and Comparative Studies. arXiv 2024, arXiv:2411.09863. [Google Scholar] [CrossRef]
- Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. arXiv 2014, arXiv:1411.7766. [Google Scholar]
- Liu, Z.; Luo, P.; Wang, X.; Tang, X. CelebA: Large-Scale CelebFaces Attributes Dataset. 2026. Available online: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html (accessed on 21 February 2026).
- Wu, C.-Y.; Xu, Q.; Neumann, U. Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry. arXiv 2021, arXiv:2110.09772. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), Lille, France, 6–11 July 2015; PMLR: Lille, France, 2015; pp. 448–456. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Ng, A.Y. Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004; ACM: New York, NY, USA, 2004; p. 78. [Google Scholar]
- Wilimitis, D.; Walsh, C.G. Practical Considerations and Applied Examples of Cross-Validation for Model Development and Evaluation in Health Care: Tutorial. JMIR AI 2023, 2, e49023. [Google Scholar] [CrossRef]
- Huber, P.J.; Ronchetti, E.M. Robust Statistics, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
- Miller, C.; Portlock, T.; Nyaga, D.M.; O’Sullivan, J.M. A review of model evaluation metrics for machine learning in genetics and genomics. Front. Bioinform. 2024, 4, 1457619. [Google Scholar] [CrossRef]
- Groeneveld, R.A.; Meeden, G. Measuring Skewness and Kurtosis. Statistician 1984, 33, 391–399. [Google Scholar] [CrossRef]
- Joanes, D.N.; Gill, C.A. Comparing Measures of Sample Skewness and Kurtosis. Statistician 1998, 47, 183–189. [Google Scholar] [CrossRef]
- Ageitgey. Face_Recognition. 2025. Available online: https://github.com/ageitgey/face_recognition (accessed on 11 October 2025).
- King, D.E. Dlib-ml: A Machine Learning Toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
- Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A Dataset for Recognising Faces across Pose and Age. arXiv 2017, arXiv:1710.08092. [Google Scholar]
- Karras, T.; Laine, S.; Aila, T. Flickr-Faces-HQ Dataset (FFHQ). 2026. Available online: https://github.com/NVlabs/ffhq-dataset (accessed on 21 February 2026).
- Wang, H.; Li, S.; He, J.; Qian, Z.; Zhang, X.; Fan, S. Exploring Depth Information for Detecting Manipulated Face Videos. arXiv 2024, arXiv:2411.18572. [Google Scholar] [CrossRef]
- Ren, Z.; Lee, Y.J.; Ryoo, M.S. Learning to Anonymize Faces for Privacy Preserving Action Detection. arXiv 2018, arXiv:1803.11556. [Google Scholar] [CrossRef]
- Bulat, A.; Tzimiropoulos, G. How Far are We from Solving the 2D & 3D Face Alignment Problem? (and a Dataset of 230,000 3D Facial Landmarks). In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]

| Anonymization Dataset Name | Description |
|---|---|
| 3D Dataset | Dataset anonymized by applying the original function values of the 3D model that performs anonymization based on three-dimensional structural information |
| 3D_round Dataset | Dataset anonymized by applying second decimal place rounding to the function values of the 3D model that performs anonymization based on three-dimensional structural information |
| Depth Dataset | Dataset anonymized by applying the original function values of the Depth model that performs anonymization based on depth information |
| Depth_round Dataset | Dataset anonymized by applying second decimal place rounding to the function values of the Depth model that performs anonymization based on depth information |
| SynergyNet Dataset | Dataset anonymized by applying the SynergyNet model [37] |
| Anonymization Dataset Name | Average Number of Detected Facial Landmarks |
|---|---|
| Original image Dataset | 68 |
| 3D Dataset | 66.1650 |
| 3D_round Dataset | 64.9685 |
| Depth Dataset | 57.1999 |
| Depth_round Dataset | 58.5732 |
| SynergyNet Dataset | 30.0968 |
| Anonymization Dataset Name | Facial Landmark Reduction Rate |
|---|---|
| Original image Dataset | 0% |
| 3D Dataset | 2.70% |
| 3D_round Dataset | 4.46% |
| Depth Dataset | 15.88% |
| Depth_round Dataset | 13.86% |
| SynergyNet Dataset | 55.74% |
| Anonymization Dataset Name | Identical Face | Not Identical |
|---|---|---|
| 3D Dataset | 0.43% | 99.57% |
| 3D_round Dataset | 0.56% | 99.44% |
| Depth Dataset | 0.90% | 99.10% |
| Depth_round Dataset | 1.00% | 99.00% |
| SynergyNet Dataset | 0.00% | 100.00% |
| Anonymization Dataset Name | Re-ID Rate |
|---|---|
| 3D Dataset | 23.73% |
| 3D_round Dataset | 23.65% |
| Depth Dataset | 22.12% |
| Depth_round Dataset | 22.46% |
| SynergyNet Dataset | 16.77% |
| Evaluation Metrics | RTFS | EDIDMFV | LAFPAD |
|---|---|---|---|
| Facial Landmark Count Change | 56 (56.236) | 55 (55.012) | 66 (65.756) |
| Facial Landmark Reduction Rate | 17.30% | 17.10% | 3.30% |
| Facial Similarity Evaluation | 0.10% | 0.00% | 0.00% |
| Re-ID Rate | 0.013% | 0.015% | 0.000% |
| Evaluation Metrics | Evaluation Criteria | RTFS Measured Value | RTFS Achievement | EDIDMFV Measured Value | EDIDMFV Achievement | LAFPAD Measured Value | LAFPAD Achievement |
|---|---|---|---|---|---|---|---|
| Facial Landmark Count Change | ≤59 | 56 (56.236) | Met | 55 (55.012) | Met | 66 (65.756) | Not Met |
| Facial Landmark Reduction Rate | ≥13.86% | 17.30% | Met | 17.10% | Met | 3.30% | Not Met |
| Facial Similarity Evaluation | ≤0.58% | 0.10% | Met | 0.00% | Met | 0.00% | Met |
| Re-ID Rate (FaceNet) | ≤21.75% | 0.013% | Met | 0.015% | Met | 0.000% | Met |
| Use of Facial Landmark Extraction Algorithms | Utilizing | MediaPipe-FaceMesh | Met | MediaPipe-FaceMesh-EDIDMFV | Met | face-alignment | Met |
| Facial Landmark Region Anonymization Evaluation | 5 regions (face contour, left eye, right eye, nose, mouth) | 5 regions (face contour, left eye, right eye, nose, mouth) | Met | 5 regions (face contour, left eye, right eye, nose, mouth) | Met | 2 regions (face contour, nose) | Not Met |
Share and Cite
Ko, C.; Jeon, D.; Song, Y.; Lee, Y. Facial Anonymization Model Evaluation Criteria: Development and Validation in Autonomous Vehicle Environments. Appl. Sci. 2026, 16, 2979. https://doi.org/10.3390/app16062979

