Proceeding Paper

Recent Facial Image Preprocessing Techniques: A Review †

by Rendra Soekarta 1,2,* and Ku Ruhana Ku-Mahamud 1,3

1 Faculty of Business Management and Information Technology, Universiti Muhammadiyah Malaysia (UMAM), Padang Besar 02100, Perlis, Malaysia
2 Faculty of Informatics Engineering, Universitas Muhammadiyah Sorong, Sorong City 98416, West Papua, Indonesia
3 School of Computing, Universiti Utara Malaysia (UUM), Bukit Kayu Hitam 06010, Kedah, Malaysia
* Author to whom correspondence should be addressed.
Presented at the 8th Mechanical Engineering, Science and Technology International Conference, Padang Besar, Perlis, Malaysia, 11–12 December 2024.
Eng. Proc. 2025, 84(1), 39; https://doi.org/10.3390/engproc2025084039
Published: 7 February 2025

Abstract

This review analyzes recent advancements in facial recognition and classification algorithms, emphasizing the critical role of preprocessing in enhancing their accuracy, reliability, and efficiency. Facial image preprocessing is a critical step in applications including facial recognition, emotion detection, and biometric authentication. Preprocessing methods such as normalization, noise reduction, illumination correction, alignment, resolution enhancement, data augmentation, and edge detection are essential for improving image quality and standardizing facial features. The review explores the strengths and weaknesses of these techniques across different facial datasets. The ongoing refinement of preprocessing techniques will be pivotal in advancing facial recognition, classification, and other image-based tasks. Finally, this paper provides insights into future research directions. As the demand for more robust, fair, and efficient systems grows, developing domain-specific preprocessing methods and adopting cutting-edge artificial intelligence technologies will be vital to meet the challenges of increasingly complex applications.

1. Introduction

Facial recognition technology has rapidly evolved over the past few decades, becoming a cornerstone of security, authentication, and social media applications. At the heart of its success are the preprocessing techniques that prepare raw facial images for subsequent analysis by machine learning algorithms. These methods are crucial for enhancing image quality, reducing noise, and standardizing facial features, thereby significantly improving the accuracy and reliability of facial recognition systems [1,2,3]. This article comprehensively reviews the critical preprocessing techniques employed on facial images, especially for facial recognition, and examines a broad spectrum of algorithms developed and refined over the years. The discussion covers the various domains where these techniques are applied, ranging from security and surveillance to mobile device authentication. By evaluating the advantages and limitations of each preprocessing method, this review aims to highlight the strengths and weaknesses that practitioners must consider when selecting or developing a facial recognition system. Furthermore, the limitations inherent to current preprocessing techniques will be explored, including challenges such as lighting variations [4], occlusions [5], and facial expressions [6], which can significantly impact recognition accuracy. Understanding these limitations is essential for guiding future research and development efforts.
Finally, this article explores emerging trends and future directions in image preprocessing for facial recognition from 2020 to 2024. As the demand for more robust and versatile facial recognition systems grows, advancements in deep learning, data augmentation, and real-time processing are expected to play a pivotal role in shaping the future of this technology. Through this review, researchers, developers, and practitioners will gain a clear understanding of the current landscape of preprocessing techniques in facial recognition, along with insights into its challenges and opportunities.

2. Facial Image Recognition and Classification Algorithm

Recognition and classification of facial images are two main tasks for analyzing and interpreting human faces to identify or categorize individuals. While related, they have distinct goals and applications. Facial recognition aims to identify or verify an individual's identity based on their facial features. Facial image classification, on the other hand, categorizes faces based on specific characteristics or attributes without identifying the person.

2.1. Facial Image Recognition

Recent advances in facial image preprocessing techniques have significantly impacted the effectiveness of facial recognition technologies. This review aims to dissect the evolution of these preprocessing methods, their applications in various domains, and their performance implications across datasets. By synthesizing the latest research and developments from 2020 to 2024, this article provides an in-depth look at the techniques that have paved the way for more accurate and robust facial recognition systems.
Table 1 provides an overview of some of the algorithms used in the image recognition domain, categorized based on their specific applications, performance metrics, and datasets. The table includes algorithms such as Convolutional Neural Networks (CNNs) [2,3,7], which are highly effective under diverse lighting conditions, and Multi-view Co-evolutionary Binary Optimization coupled with Deep Neural Networks (DNNs) [8], demonstrating strong performance on occluded and noisy images. Various feature extraction techniques, including Histogram of Oriented Gradients (HOGs), Scale-Invariant Feature Transform (SIFT), Gabor Filters, and Canny edge detection [9], are highlighted, with the combination of SIFT and CNNs achieving 100% accuracy on specific datasets. Advanced methods like the self-adaptive ant colony system (SAACS) [10] provide high accuracy in feature selection tasks, while Dual-Tree Complex Wavelet Transform (DTCWT) combined with Pseudo-Zernike Moments [11] offers robust resistance to geometric distortions and noise-based attacks.
Local Binary Pattern Histograms (LBPHs) paired with OpenFace DNNs [3,4] deliver superior accuracy compared to benchmark algorithms. Dual Linear Collaborative Discriminant Regression Classification (LCDRC) [12] achieves high accuracy rates of 98.86%, while DTCWT with Collaborative Representation Classifier [13] ensures robustness against noise and lighting variations. VGG-16, combined with Random Fourier Features [14], excels at recognizing masked faces across various datasets with diverse poses and lighting conditions. Techniques such as the Histogram of Enhanced Gradients (HEGs) integrated with HOGs [15] significantly reduce noise influence, achieving high accuracy even in noisy images. Few-shot learning [6] enhances the efficiency of facial expression recognition in data-scarce environments. Auto-encoder with Skip Connections [5] and Single-Shot Multibox Detector combined with Autoencoders [16] improve occluded feature recognition, with the latter effectively utilizing encoder-only autoencoders.
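The LBPH descriptors discussed above are built from per-pixel Local Binary Pattern codes. The sketch below is a minimal NumPy illustration of the basic 8-neighbour operator and its histogram; it is illustrative only, not the implementation used in the cited works, and the neighbour ordering shown is just one common convention.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour Local Binary Pattern for interior pixels.

    Each neighbour >= centre contributes one bit; the 8 bits form a
    code in [0, 255]. An LBPH descriptor is then the histogram of
    these codes over the image (or over local blocks).
    """
    c = gray[1:-1, 1:-1]  # centre pixels
    # neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(int) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalized histogram of LBP codes (the LBPH feature)."""
    codes = lbp_codes(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

A full LBPH pipeline typically splits the face into a grid of blocks and concatenates per-block histograms, which preserves spatial layout.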
Table 1. Algorithms for image recognition.

Ref. | Algorithm | Remark
[2] | CNN | Highly effective under diverse lighting
[8] | Multi-view Co-evolutionary Binary Optimization and DNN | Good performance on occluded/noisy images
[9] | HOG, SIFT, Gabor, Canny, and CNN | 100% accuracy with SIFT + CNN combination
[10] | SAACS | High accuracy
[11] | DTCWT and Pseudo-Zernike Moments | Strong resistance to geometric and noise-based attacks
[7] | SVM and CNN (VGG-16) | Quality and quantity of training data greatly affect the training process
[3] | CNN, LBPH, and HOG | The CNN outperformed other benchmark algorithms
[4] | LBPH and OpenFace DNN | Superior accuracy compared to benchmark algorithms
[6] | Few-Shot Learning | Enhances facial expression recognition efficiency
[5] | Auto-Encoder with Skip Connections | Improves occluded expression recognition
[16] | Single-Shot Multibox Detector and Autoencoder | Uses an autoencoder that only utilizes the encoder part
[12] | Dual LCDRC | High accuracy: 98.86%
[13] | DTCWT and Collaborative Representation Classifier | Robust to noise and invariant to lighting
[14] | VGG-16 and Random Fourier Features | Effective on masked faces; consistent performance across datasets with varied pose and lighting
[15] | Histogram of Enhanced Gradients and HOG | Adaptive denoising significantly reduces the influence of noise, giving high accuracy even on noisy images

2.2. Facial Image Classification

Classification algorithms include supervised and unsupervised learning methods. Recent studies have demonstrated the use of non-hybrid and hybrid algorithms in classification tasks. Non-hybrid algorithms are pure algorithms that are not combined with other algorithms or techniques. In contrast, hybrid algorithms combine two or more algorithms or methods to achieve a more effective model [17]. This article explores various facial classification techniques designed to improve accuracy and effectiveness under challenging conditions such as occlusion and noise, as well as classification by gender and ethnicity. One novel method in thermal facial recognition combines the Haar Wavelet Transform and Local Binary Pattern techniques to exploit the unique thermal signatures of facial blood vessels and muscles, making it less sensitive to lighting variations than visible-spectrum imaging [18]. This method has shown significant potential for reliable biometric identification under diverse lighting conditions. In addition, the SAACS algorithm has been proposed; it excels in feature selection for sky images and outperforms other bio-inspired algorithms with high classification accuracy [19].
The algorithm achieved an accuracy of 98.7%, demonstrating its effectiveness in structured data processing through robust feature extraction. Histogram Equalization and CNNs [20] combined preprocessing and deep learning to reach 100% accuracy, highlighting their suitability for diverse applications. Haar Cascade, HOGs, and CNNs [21] showcased high efficiency in student validation tasks during examinations by integrating detection and feature extraction techniques. The combination of Gabor Filter, Maximum Response, and Monte Carlo methods with Partial Least Squares Regression (PLSR) [22] effectively processed unstructured data with minimal computational cost. Local Mean Patch (LMP) and CNN-LCDRC [23] showed robustness against complex data scenarios, while Dual-Tree Complex Wavelet Transform (DTCWT) [24] provided robust feature extraction but faced limitations in asynchronous transmission. Haar Cascade paired with Deep Neural Networks (DNNs) [25] was effective for feature extraction and scalability on large datasets. CNN-HOG [26] offered high accuracy and adaptability for specific scripts. Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) [27] were sensitive to noise but efficient for feature selection tasks. Linear Discriminant Analysis [28] effectively reduced dimensionality but posed a risk of losing critical information. Principal Component Analysis (PCA) with Collaborative Representation-Based Classification [1] improved accuracy with fewer samples, while PCA with the Genetic Algorithm (GA) [29] achieved the smallest feature set for optimized performance. The Geodesic Path Algorithm, paired with PCA, K-nearest Neighbors, and SVM [18], enhanced classification accuracy with robust preprocessing.
Finally, the SAACS [19], an improved version of the Ant Colony System (ACS), boosted classification accuracy to 95.64%, making it highly efficient.
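Several of the pipelines above use PCA to compress face vectors before classification. The following eigenfaces-style sketch via SVD is a generic NumPy illustration of that step, not the cited authors' code.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on row-vector samples X of shape (n_samples, n_features).

    Returns the mean vector and the top principal axes, so new face
    vectors can be projected into a compact feature space before
    classification.
    """
    mean = X.mean(axis=0)
    # SVD of the centred data; the rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, axes):
    """Project samples onto the learned principal axes."""
    return (X - mean) @ axes.T
```

In a face pipeline, each image is flattened to one row of X; the projected coefficients then feed a classifier such as k-NN or collaborative representation.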
Table 2 shows the latest algorithms and techniques for image classification published between 2020 and 2024. All algorithms involve preprocessing and feature extraction stages, which are critical for classification accuracy. Sometimes, the feature selection stage can be omitted because the feature extraction process has already produced a compact and highly informative set of features. Furthermore, feature selection may not significantly improve model performance in some situations, especially when working with deep-learning models or small datasets. Moreover, omitting the feature selection stage can reduce computational complexity and time, making the process more efficient. In cases where the dataset is large or contains irrelevant or redundant features, however, including a feature selection step can be beneficial to avoid overfitting and improve model generalization.
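As a concrete example of the optional feature selection stage discussed above, a simple variance-threshold selector drops near-constant features before training. This is a minimal sketch; the cited works use more elaborate selectors such as the GA or SAACS.

```python
import numpy as np

def select_by_variance(X, threshold=0.0):
    """Keep feature columns of X whose variance exceeds `threshold`.

    A cheap feature-selection step: constant or near-constant
    features carry little discriminative information, so dropping
    them reduces dimensionality and the risk of overfitting.
    Returns the reduced matrix and the boolean keep-mask.
    """
    variances = X.var(axis=0)
    keep = variances > threshold
    return X[:, keep], keep
```

The keep-mask must be stored and re-applied to test samples so that training and inference use the same feature subset.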

3. Facial Image Preprocessing Technique

Image preprocessing is crucial across different domains as it significantly influences the performance and accuracy of algorithms in various applications. The choice of preprocessing technique is often tailored to each domain’s specific challenges and requirements, such as enhancing image clarity, improving contrast, reducing noise, and maintaining consistent lighting. For instance, normalization is a fundamental preprocessing step to standardize the intensity distribution, as observed in techniques utilizing CNNs or PCA for facial image recognition [20,30]. Preprocessing in facial image analysis is essential as it enhances the quality and consistency of the images, which is crucial for accurate recognition and analysis. By addressing issues such as feature extraction capability, computational complexity, data dependency, robustness to noise, and scalability, preprocessing ensures that the subsequent steps in facial image processing are more effective and reliable. For example, illumination correction, as seen in DTCWT and Histogram Equalization, addresses lighting inconsistencies [23,26]. This step is crucial in applications like facial recognition, where even minor inconsistencies can lead to significant errors. The effectiveness of image preprocessing techniques in classification tasks largely depends on the specific nature of the images and the classification problem. Alignment techniques, such as landmark detection and HOGs, ensure consistent face positioning, improving feature extraction reliability [31]. While preprocessing can significantly enhance model performance, it can also introduce challenges and limitations that must be carefully managed. For example, edge detection methods like Gabor Wavelets or Laplacian can highlight critical features but are sensitive to noise [21,32]. The key is to select and apply preprocessing techniques that best align with the data’s characteristics and the classification task’s goals.
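Illumination correction by histogram equalization, as discussed above, can be sketched in a few lines of NumPy. This is the standard global formulation; the cited works may use adaptive or block-wise variants.

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization for an 8-bit grayscale image.

    Maps intensities through the normalized cumulative histogram so
    the output spreads over the full 0-255 range, improving contrast
    under poor illumination (at the cost of possibly amplifying noise).
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first nonzero CDF value
    n = gray.size
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255),
                  0, 255)
    return lut.astype(np.uint8)[gray]    # apply the lookup table
```

Because the mapping is monotone, relative intensity ordering is preserved, which is why downstream feature extractors such as LBP are largely unaffected by it.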
Grayscale and color space conversion simplify data but may result in the loss of important color information, impacting feature extraction, as seen in preprocessing techniques involving grayscale conversion with CNN- and PCA-based methods [20,30]. Histogram equalization enhances contrast but can amplify noise, affecting robustness, as demonstrated in CNNs and histogram equalization approaches applied to FERET datasets [23]. Image scaling standardizes images but may blur details, reducing feature extraction quality, as noted in alignment-based techniques like HOGs and landmark detection [31]. Noise reduction, like Gaussian filtering, is adequate but risks smoothing key features, impacting robustness, which aligns with observations in DTCWT-based techniques for edge enhancement [26].
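Gaussian noise reduction, mentioned above as a trade-off between smoothing and detail loss, can be illustrated with a separable NumPy filter. This is a generic sketch, not any cited implementation.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(gray, sigma=1.0):
    """Separable Gaussian filtering: smooth rows, then columns.

    Reduces sensor noise before feature extraction; a larger sigma
    smooths more but also washes out fine facial detail.
    """
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    img = np.pad(gray.astype(float), pad, mode='edge')
    # convolve each row, then each column, with the 1-D kernel
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)
    return img[pad:-pad, pad:-pad]
```

The separable formulation costs O(k) per pixel per pass instead of O(k²) for a full 2-D kernel, which matters in real-time face pipelines.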
Image normalization and face alignment improve consistency but are sensitive to lighting and landmark accuracy, highlighting data dependency, as evidenced in methods leveraging DNNs and alignment on datasets such as CFPW and VGGFace2 [22]. Cropping and region of interest extraction isolate regions but may result in the loss of external details, affecting scalability, as noted in Haar Cascade and CNN techniques used on LFW datasets [7]. Edge detection and geometric transformations aid feature extraction but can introduce noise or distortions, impacting robustness and scalability, as demonstrated in approaches utilizing Gabor Wavelets and Canny edge detection [11,21]. Principal component analysis and Gabor Filters provide robust feature extraction but have high computational demands and potential information loss, as observed in PCA-Gabor techniques used on CK+ datasets [33]. Image augmentation enhances data diversity but risks overfitting, affecting scalability, as shown in data augmentation strategies in CNN-based models [34]. Laplacian of Gaussian aids edge detection but is noise-sensitive, impacting robustness, as highlighted in edge-focused segmentation techniques [32]. Overall, selecting techniques requires balancing these factors to optimize facial image processing.
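The noise sensitivity of the Laplacian of Gaussian noted above comes from its second-derivative nature. The minimal discrete Laplacian below illustrates the operator; in an LoG pipeline it would be applied after Gaussian smoothing. This is an illustrative sketch, not a cited implementation.

```python
import numpy as np

def laplacian(gray):
    """4-neighbour discrete Laplacian; large magnitudes mark edges.

    Being a second derivative, the operator strongly amplifies
    pixel-level noise, which is why it is normally preceded by
    Gaussian smoothing (the Laplacian-of-Gaussian combination).
    Border pixels are left at zero for simplicity.
    """
    g = gray.astype(float)
    out = np.zeros_like(g)
    out[1:-1, 1:-1] = (g[:-2, 1:-1] + g[2:, 1:-1]      # up + down
                       + g[1:-1, :-2] + g[1:-1, 2:]    # left + right
                       - 4 * g[1:-1, 1:-1])            # minus 4x centre
    return out
```

Zero crossings of the response localize edges; a flat region yields exactly zero everywhere.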
Eight preprocessing techniques—Normalization, Noise Reduction, Illumination Correction, Alignment, Resolution Enhancement, Data Augmentation, Image Segmentation, and Edge Detection—play a critical role in facial image analysis. Normalization ensures consistent intensity distribution, as seen in CNN- and PCA-based approaches used on FERET and UIVBFED datasets [20,30]. Noise reduction, such as Gaussian filtering and wavelet methods, minimizes disruptions but may affect details, as demonstrated in DTCWT-based edge detection [26]. Illumination correction, like histogram equalization, improves image quality under varying lighting conditions but may amplify noise, as seen in FERET studies [23]. Alignment techniques, including landmark detection and HOGs, enhance feature consistency on datasets like Radboud Faces and Sheffield [11,31]. Resolution enhancement, though less common, is critical in PCA-based methods requiring detailed analysis [30]. Data augmentation, such as flipping and cropping, increases diversity and robustness in CNN-based experiments [34]. Image segmentation isolates key regions, improving accuracy in facial expression analysis with edge detection and DTCWT [32]. Edge detection highlights features but risks adding noise, as in Gabor Wavelets and Laplacian techniques [21,32]. Computational complexity measures the resources a method requires, with lightweight approaches like Haar Cascade ideal for real-time systems [7]. Data dependence evaluates reliance on data quality or quantity, with methods like CNN-HOG offering flexibility across diverse datasets [28]. Resistance to disruptions assesses robustness against data distortions, as shown in PCA with the GA [30]. Scalability checks a method’s ability to handle data growth without performance loss, as demonstrated in the SAACS for large datasets like SWIMCAT and MGCD [10]. 
Techniques such as PSO combined with the Geodesic Path Algorithm [35] and DTCWT with Pseudo-Zernike Moments [29] showcase innovations in robustness and scalability, further extending the preprocessing applications in facial recognition tasks.
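The flip-and-crop data augmentation mentioned above can be sketched as a random horizontal flip plus a pad-and-crop translation. This is an illustrative recipe; the cited CNN experiments may use different augmentation settings.

```python
import numpy as np

def augment(gray, rng, pad=2):
    """One randomly perturbed copy of a grayscale face image.

    Applies an optional horizontal flip, then pads the image by
    `pad` pixels (edge replication) and crops back to the original
    size at a random offset, i.e. a small random translation. Each
    call yields a new training sample without collecting new faces.
    """
    img = gray[:, ::-1] if rng.random() < 0.5 else gray   # coin-flip mirror
    padded = np.pad(img, pad, mode='edge')
    dy = int(rng.integers(0, 2 * pad + 1))                # vertical shift
    dx = int(rng.integers(0, 2 * pad + 1))                # horizontal shift
    h, w = gray.shape
    return padded[dy:dy + h, dx:dx + w]
```

With `rng = np.random.default_rng(seed)`, calling `augment` repeatedly on each training image multiplies the effective dataset size while keeping labels unchanged.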
Table 3 summarizes preprocessing techniques for facial images. Image normalization is one of the most widely used preprocessing tasks, followed by noise reduction and illumination correction. Preprocessing optimizes input images to exploit the strengths of an algorithm while compensating for its limitations against domain-specific challenges. A well-designed combination of preprocessing techniques and algorithms, tailored to the image domain, improves performance and ensures reliability across applications.
The content in the table shows that normalization is a fundamental preprocessing step, ensuring consistent intensity distribution across a dataset, as used in CNN, DNN, and PCA methods [20,22,30]. Furthermore, image segmentation and edge detection consistently emerge as significant preprocessing activities, frequently used in methods such as HOGs, DTCWT, and Gabor Wavelet, underlining their importance in feature extraction and image analysis [21,26,32]. Noise reduction is another commonly used technique, particularly in applications requiring better robustness to perturbations, such as in PCA-based and Auto-Encoder methods [37,39]. Alignment is often applied to standardize image orientation, frequently seen in techniques utilizing HOGs, CNN-HOG, and landmark detection [28,31].

4. Future Direction

Preprocessing techniques were applied steadily across the years, indicating a stable focus on refining raw data before it is used in classification or recognition tasks. After reviewing these preprocessing techniques, it becomes evident that data preprocessing is indispensable for maintaining high accuracy in recognition and classification, and that there is an opportunity to explore more complex, hybrid approaches. Combining various preprocessing techniques could lead to more robust algorithms that perform well across a broader range of conditions and datasets.
Several future trends will emerge in facial image preprocessing as technology evolves. The aim is to improve the performance of facial image analysis while addressing image-related challenges such as robustness to variations in lighting, pose, occlusion, and environmental conditions. One such direction is deep learning with end-to-end networks, which will reduce the need for manual or heuristic preprocessing steps [40]. For data augmentation, Generative Adversarial Networks can be used for image enhancement; the CycleGAN technique could transform low-quality images into high-quality ones or generate images that simulate ideal lighting or facial expressions [41].
Image preprocessing techniques may become dynamic by adjusting facial image normalization processes based on facial expressions. Fusion of Red–Green–Blue (RGB), Depth, and Thermal Imaging may be an option to improve the robustness of facial image preprocessing [42,43]. This fusion of modalities could lead to more accurate and reliable preprocessing pipelines. In summary, facial image preprocessing techniques will likely center around leveraging deep learning, generative models, and cross-domain adaptation to overcome the challenges posed by the images.

5. Conclusions

This article emphasizes the significant role of image preprocessing in enhancing the performance and accuracy of facial recognition and classification tasks between 2020 and 2024. The consistent application of preprocessing and feature extraction techniques in recognition and classification research during this period highlights their foundational importance. Future research should consider designing more sophisticated preprocessing pipelines that integrate multiple techniques. For instance, combining preprocessing, feature extraction, and selective feature selection could address the limitations of each method, such as handling noisy data, reducing computational complexity, and improving scalability. Integrating advanced techniques such as deep learning-based preprocessing or domain adaptation could further enhance model performance, particularly in challenging real-world scenarios. By focusing on these areas, future research can continue to push the boundaries of what is possible in facial recognition and classification, ultimately leading to more accurate and reliable algorithms. Overall, these interconnected trends will shape the future of facial classification and recognition technologies, leading to more advanced, equitable, and secure systems.

Author Contributions

Conceptualization, R.S. and K.R.K.-M.; methodology, R.S. and K.R.K.-M.; investigation, R.S. and K.R.K.-M.; writing—original draft preparation, R.S. and K.R.K.-M.; writing—review and editing, R.S. and K.R.K.-M.; visualization, R.S. and K.R.K.-M.; supervision, R.S. and K.R.K.-M.; project administration, R.S. and K.R.K.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yin, H.-F.; Wu, X.-J.; Hu, C.; Song, X. Face Recognition via Compact Second-Order Image Gradient Orientations. Mathematics 2022, 10, 2587.
2. Shi, M.; Xu, L.; Chen, X. A Novel Facial Expression Intelligent Recognition Method Using Improved Convolutional Neural Network. IEEE Access 2020, 8, 57606–57614.
3. Tsalera, E.; Papadakis, A.; Samarakou, M.; Voyiatzis, I. Feature Extraction with Handcrafted Methods and Convolutional Neural Networks for Facial Emotion Recognition. Appl. Sci. 2022, 12, 8455.
4. Chen, Y.-C.; Liao, Y.-S.; Shen, H.-Y.; Syamsudin, M.; Shen, Y.-C. An Enhanced LBPH Approach to Ambient-Light-Affected Face Recognition Data in Sensor Network. Electronics 2022, 12, 166.
5. Poux, D.; Allaert, B.; Ihaddadene, N.; Bilasco, I.M.; Djeraba, C.; Bennamoun, M. Dynamic Facial Expression Recognition Under Partial Occlusion With Optical Flow Reconstruction. IEEE Trans. Image Process. 2022, 31, 446–457.
6. Kim, C.-L.; Kim, B.-G. Few-shot learning for facial expression recognition: A comprehensive survey. J. Real-Time Image Process. 2023, 20, 52.
7. Van Nguyen, T.; Chu, T.D. Comparative study on the performance of face recognition algorithms. EUREKA Phys. Eng. 2023, 4, 120–132.
8. Soni, N.; Sharma, E.K.; Kapoor, A. Hybrid meta-heuristic algorithm based deep neural network for face recognition. J. Comput. Sci. 2021, 51, 101352.
9. Benradi, H.; Chater, A.; Lasfar, A. A hybrid approach for face recognition using a convolutional neural network combined with feature extraction techniques. IAES Int. J. Artif. Intell. 2023, 12, 627.
10. Sagban, R.; Ku-Mahamud, K.R.; Bakar, M.S.A. ACOustic: A nature-inspired exploration indicator for ant colony optimization. Sci. World J. 2015, 2015, 392345.
11. Agilandeeswari, L.; Prabukumar, M.; Alenizi, F.A. A robust semi-fragile watermarking system using Pseudo-Zernike moments and dual tree complex wavelet transform for social media content authentication. Multimed. Tools Appl. 2023, 82, 43367–43419.
12. Hosgurmath, S.; Mallappa, V.V.; Patil, N.B.; Petli, V. Effective face recognition using dual linear collaborative discriminant regression classification algorithm. Multimed. Tools Appl. 2022, 81, 6899–6922.
13. Chen, G.Y.; Krzyżak, A.; Duda, P.; Cader, A. Noise Robust Illumination Invariant Face Recognition Via Bivariate Wavelet Shrinkage in Logarithm Domain. J. Artif. Intell. Soft Comput. Res. 2022, 12, 169–180.
14. Sikha, O.K.; Bharath, B. VGG16-random fourier hybrid model for masked face recognition. Soft Comput. 2022, 26, 12795–12810.
15. Berrimi, F.; Kara-Mohamed, C.; Hedli, R.; Hamdi-Cherif, A. The Histogram of Enhanced Gradients (HEG)—A Fast Descriptor for Noisy Face Recognition. Rev. d'Intelligence Artif. 2024, 38, 33–44.
16. Sandhya, S.; Balasundaram, A.; Shaik, A. Deep Learning Based Face Detection and Identification of Criminal Suspects. Comput. Mater. Contin. 2023, 74, 2331–2343.
17. Rashid, S.J.; Abdullah, A.I.; Shihab, M.A. Face Recognition System Based on Gabor Wavelets Transform, Principal Component Analysis and Support Vector Machine. Int. J. Adv. Sci. Eng. Inf. Technol. 2020, 10, 959–963.
18. Marzoog, Z.S.; Hasan, A.D.; Abbas, H.H. Gender and race classification using geodesic distance measurement. Indones. J. Electr. Eng. Comput. Sci. 2022, 27, 820.
19. Petwan, M.; Ku-Mahamud, K.R. Feature selection for sky image classification based on self adaptive ant colony system algorithm. Int. J. Electr. Comput. Eng. 2023, 13, 7037.
20. Sukumaran, A.; Brindha, T. Optimal feature selection with hybrid classification for automatic face shape classification using fitness sorted Grey wolf update. Multimed. Tools Appl. 2021, 80, 25689–25710.
21. Goud, K.M.; Hussain, S.J. Implementation of Invigilation System using Face Detection and Face Recognition Techniques. A Case Study. J. Eng. Sci. Technol. Rev. 2021, 14, 109–120.
22. See, Y.-C.; Liew, E.; Noor, N.M. Gabor and Maximum Response Filters with Random Forest Classifier for Face Recognition in the Wild. Int. Arab J. Inf. Technol. 2021, 18, 797–806.
23. Shi, D.; Tang, H. Face recognition algorithm based on self-adaptive blocking local binary pattern. Multimed. Tools Appl. 2021, 80, 23899–23921.
24. Chen, Z.; Wu, X.-J.; Yin, H.-F.; Kittler, J. Noise-robust dictionary learning with slack block-Diagonal structure for face recognition. Pattern Recognit. 2020, 100, 107118.
25. Muttaqin, R.; Fuada, S.; Mulyana, E. Attendance system using machine learning-based face detection for meeting room application. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 286–293.
26. Kadappa, R. Face Recognition Using Modified Histogram of Oriented Gradients and Convolutional Neural Networks. Int. J. Image Graph. Signal Process. 2023, 15, 60–76.
27. Niu, S.; Nie, Z.; Liu, J.; Chu, M. An Application Study of Improved Iris Image Localization Based on an Evolutionary Algorithm. Electronics 2023, 12, 4454.
28. Abusham, E.; Ibrahim, B.; Zia, K.; Rehman, M. Facial Image Encryption for Secure Face Recognition System. Electronics 2023, 12, 774.
29. Perez-Gomez, V.; Rios-Figueroa, H.V.; Rechy-Ramirez, E.J.; Mezura-Montes, E.; Marin-Hernandez, A. Feature Selection on 2D and 3D Geometric Features to Improve Facial Expression Recognition. Sensors 2020, 20, 4847.
30. Wazirali, R.; Ahmed, R. Hybrid Feature Extractions and CNN for Enhanced Periocular Identification During COVID-19. Comput. Syst. Sci. Eng. 2022, 41, 305–320.
31. Farook, T.H.; Saad, F.H.; Ahmed, S.; Dudley, J. Dental Loop FLT: Facial landmark tracking. SoftwareX 2023, 24, 101531.
32. Pei, X.; Zhao, Y.; Chen, L.; Guo, Q.; Duan, Z.; Pan, Y.; Hou, H. Robustness of machine learning to color, size change, normalization, and image enhancement on micrograph datasets with large sample differences. Mater. Des. 2023, 232, 112086.
33. Jayasimha, Y.; Reddy, R.V.S. A facial expression recognition model using hybrid feature selection and support vector machines. Int. J. Inf. Comput. Secur. 2021, 14, 79.
34. Harastani, M.; Benterkia, A.; Zadeh, F.M.; Nait-Ali, A. Methamphetamine drug abuse and addiction: Effects on face asymmetry. Comput. Biol. Med. 2020, 116, 103475.
35. Attivissimo, F.; D'Alessandro, V.I.; Di Nisio, A.; Scarcelli, G.; Schumacher, J.; Lanzolla, A.M.L. Performance evaluation of image processing algorithms for eye blinking detection. Measurement 2023, 223, 113767.
36. Maafiri, A.; Elharrouss, O.; Rfifi, S.; Al-Maadeed, S.A.; Chougdali, K. DeepWTPCA-L1: A New Deep Face Recognition Model Based on WTPCA-L1 Norm Features. IEEE Access 2021, 9, 65091–65100.
37. Al-Dosari, I.H.M.; Jasim, I.B. Image Denoising Improvement using Siny-Soft Wavelet Thresholding. In Proceedings of the Selected Papers of the Workshop on Emerging Technology Trends on the Smart Industry and the Internet of Things (TTSIIT 2022), Kyiv, Ukraine, 19–20 January 2022; Volume 3149, pp. 1–8.
38. Gong, H.; Luo, T.; Ni, L.; Li, J.; Guo, J.; Liu, T.; Feng, R.; Mu, Y.; Hu, T.; Sun, Y.; et al. Research on facial recognition of sika deer based on vision transformer. Ecol. Inform. 2023, 78, 102334.
39. Chen, H.; Lin, Y.; Li, B. Exposing Face Forgery Clues via Retinex-Based Image Enhancement; Springer: Berlin/Heidelberg, Germany, 2023; pp. 20–34.
40. Oiwa, K.; Suzuki, S.; Maeda, Y.; Jinnai, H. Applicability of deep learning for blood pressure estimation during hemodialysis based on facial images. Ren. Replace Ther. 2024, 10, 2.
41. Fang, Y.; Deng, W.; Du, J.; Hu, J. Identity-aware CycleGAN for face photo-sketch synthesis and recognition. Pattern Recognit. 2020, 102, 107249.
42. Wang, M.; Deng, W. Deep face recognition: A survey. Neurocomputing 2021, 429, 215–244.
43. Luo, Z.; Hu, J.; Deng, W.; Shen, H. Deep Unsupervised Domain Adaptation for Face Recognition. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi'an, China, 15–19 May 2018; pp. 453–457.
Table 2. Algorithms for image classification.

Ref. | Algorithm | Remark
[17] | Gabor Wavelets | Accuracy is acceptable, with a success rate above 98.7%
[20] | Histogram Equalization and CNN | 100% accuracy
[21] | Haar Cascade, HOGs, and CNN | Highly accurate and efficient for student validation during examinations
[22] | Gabor Filters, Maximum Response, Monte Carlo Uninformative Variable Elimination, and PLSR | Able to process unstructured data; low computation costs and time
[23] | LMP and CNN-LCDRC | Robust to complex data
[24] | DTCWT | Not suitable for asynchronous transmission
[25] | Haar Cascade and DNN | Good for feature extraction; scales to large datasets
[26] | CNN-HOG | High accuracy; potential for special scripts
[10] | PSO and ACO | Sensitive to noise; may produce false edges
[28] | Linear Discriminant Analysis | Important information can be lost if dimensions are reduced too much
[1] | PCA and Collaborative-Representation-Based Classification | Improves accuracy with fewer samples
[29] | PCA and GA | GA achieves the smallest feature set
[18] | Geodesic Path Algorithm, PCA, K-Nearest Neighbors, and SVM | The geodesic path improves classification accuracy
[19] | Self-Adaptive ACS and ACS | Classification accuracy of up to 95.64%
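Several of the pipelines in Table 2 end in a simple distance-based classifier, for example the K-Nearest Neighbors stage used alongside PCA and SVM in [18]. As an illustration only, a minimal, dependency-free sketch of that final step is shown below; the toy 2-D vectors and the `subject_a`/`subject_b` labels are hypothetical stand-ins for real extracted facial feature vectors.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors, using Euclidean distance."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "feature vectors" standing in for extracted facial features.
train = [(0.1, 0.2), (0.0, 0.1), (0.9, 0.8), (1.0, 0.9)]
labels = ["subject_a", "subject_a", "subject_b", "subject_b"]
print(knn_predict(train, labels, (0.05, 0.15), k=3))
```

In practice the vectors fed to such a classifier would come from a feature-extraction stage (PCA projections, Gabor responses, HOG descriptors), which is exactly why the preprocessing steps surveyed here matter: they determine how separable those vectors are.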
Table 3. Techniques for image preprocessing.

Ref. | Algorithm/Technique (Image Domain)
[20] | CNN (Digital images)
[21] | Gabor Wavelets (Facial image for real-time face detection)
[22] | Multi-view Co-evolutionary Binary Optimization and DNN (2D facial image)
[23] | Histogram Equalization and CNN (Grayscale facial image for robust recognition across different lighting conditions, poses, and facial expressions)
[24] | Haar Cascade, HOGs, and CNN (2D real-time facial images for recognition and authentication in smart door lock systems)
[25] | Gabor Filters, Maximum Response, Monte Carlo Uninformative Variable Elimination, and PLSR (2D facial images for real-world face detection and recognition)
[10] | SAACS (Ground-based sky images for cloud type classification)
[26] | DTCWT (2D facial image)
[11] | HOG, SIFT, Gabor, Canny, and CNN (Digital multimedia images: watermarked images, logos, fingerprints, and facial images)
[7] | Haar Cascade and DNN (Labelled wild facial images)
[27] | PSO (Eye images for biometric authentication)
[28] | CNN-HOG (Fruit, frontal face, face attribute, and labelled facial images)
[29] | DTCWT and Pseudo-Zernike Moments (Fingerprint for digital signature)
[34] | SVM and CNN (VGG-16) (2D facial image to evaluate facial asymmetry)
[36] | CNN-LSTM (2D facial image)
[33] | PCA + Gabor and PCA + LBP (Facial expression)
[37] | PCA and Collaborative-Representation-Based Classification (Face with sunglasses and scarf occlusion)
[30] | PCA and GA (3D facial expression and virtual facial expression)
[38] | CNN, LBPH, and HOGs (Radboud faces)
[32] | LBPH (Extended Yale Face B)
[31] | HOGs and landmark detection
[39] | Auto-Encoder with Skip Connections (Facial expression)
[35] | Geodesic Path Algorithm, PCA, K-Nearest Neighbors, and SVM (Facial recognition)
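Histogram equalization, used as an illumination-correction step in Table 3 (e.g. [23]), remaps each intensity level through the normalized cumulative distribution of the image so that occupied levels spread over the full range. A minimal pure-Python sketch of the standard remapping is given below, assuming an 8-bit grayscale image already flattened to a list of intensities; real pipelines would typically use a library routine on a 2-D array instead.

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization for a grayscale image given as a flat
    list of intensities: remap each level via the normalized CDF."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensity levels.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied level
    n = len(pixels)
    # Spread the occupied levels over [0, levels - 1]; a constant
    # image (n == cdf_min) maps to 0 to avoid division by zero.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for c in cdf]
    return [lut[p] for p in pixels]

# A dim image whose intensities occupy only [50, 53] is stretched
# across the full 0-255 range, improving contrast under poor lighting.
dim = [50, 50, 51, 52, 53, 53, 53, 50]
print(equalize_histogram(dim))
```

This global remapping is the simplest variant; adaptive schemes (e.g. tile-based equalization) apply the same idea locally to handle uneven illumination across a face.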
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

