Journal of Imaging
  • Systematic Review
  • Open Access

13 February 2025

Facial Recognition Algorithms: A Systematic Literature Review

Department of Computer Engineering, College of Computing, Fahad Bin Sultan University, Tabuk 71454, Saudi Arabia
This article belongs to the Section Computer Vision and Pattern Recognition

Abstract

This systematic literature review aims to understand recent developments and challenges in facial recognition technology. It surveys the system principles, performance metrics, and applications of facial recognition in fields such as health, society, and security, drawing on academic publications, conference proceedings, and industry news. A comprehensive approach was adopted, emphasizing the most important techniques in algorithm development, examining performance metrics, and exploring applications across domains. The review focuses in particular on recent advances in deep learning, especially convolutional neural networks (CNNs), which have greatly improved the accuracy and efficiency of facial recognition systems. The findings reveal a marked evolution in facial recognition technology, driven by the adoption of deep learning. Nevertheless, important challenges remain, including privacy concerns, ethical dilemmas, and biases in the systems, underscoring the need for ethical and regulated use. In conclusion, the paper proposes several future research directions to establish the reliability of facial recognition systems, reduce bias, and build user confidence. These considerations are key to responsibly advancing facial recognition technology while ensuring ethical practices and safeguarding privacy.

1. Introduction

The development of facial recognition algorithms builds on advances in deep learning, machine learning, and computer vision. These algorithms have achieved previously unmatched performance and enable a wide range of promising applications. However, responsible and ethical use requires careful analysis of their consequences and the ethical questions they raise [,].
Face recognition, a subcategory of biometric authentication, compares and analyzes patterns in individuals’ facial features to match or verify their identity. Through this comparison, it is possible to determine from a digital image or video frame whether an individual has been correctly identified [,]. The technology can be used for identification, for authorizing physical or virtual access, for verifying an individual, or for determining whether an individual is present at a location. It can establish whether a person appears in an image or in footage from a security camera, and it can authenticate a person’s identity from a still image or from frames of a video source [,].

1.1. Background

The use of machine vision and machine learning techniques to identify and classify people based on unique physical characteristics such as fingerprints, iris patterns, and facial features is called “computer-aided person identification”. Recognizing faces and other biometric traits involves the following steps:
Biometric Data Acquisition: Recording data through specialized sensors such as cameras, fingerprint scanners, iris scanners, or microphones.
Feature Extraction: Extracting distinctive features, such as facial landmarks, from the raw biometric data.
Template Generation: Converting the extracted features into compact, uniform biometric templates that can be stored.
Matching and Recognition: Comparing templates against those of known individuals to identify probable matches.
Verification and Authentication: Comparing an individual’s identifying information against stored reference patterns to confirm that it belongs to the right person.
Decision Making: Confirming or denying the identity depending on whether the matching score exceeds a defined threshold.
Applications: Biometric applications include access control systems, banking, healthcare, smart homes, and law enforcement.
Challenges: Ethical, privacy, security, and algorithmic-bias challenges call for strong ethical and legal standards.
This technology offers many benefits but also brings several drawbacks, such as privacy and data security concerns [].
In general, using computers to automatically identify people through biometrics offers significant advantages in terms of security, convenience, and efficiency. Still, ethical and legal issues must be carefully addressed during implementation.
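The steps above can be sketched in code. This is a minimal illustration, not a production system: the feature extractor is a stand-in for a real embedding model (e.g., a CNN), and the threshold value is arbitrary.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real feature extractor (e.g., a CNN embedding):
    flatten the pixels and L2-normalise them into a template."""
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def match_score(template: np.ndarray, probe: np.ndarray) -> float:
    """Cosine similarity between a stored template and a probe (matching step)."""
    return float(np.dot(template, probe))

def verify(template: np.ndarray, probe: np.ndarray, threshold: float = 0.8) -> bool:
    """Decision step: accept the claimed identity only if the
    matching score exceeds the decision threshold."""
    return match_score(template, probe) > threshold

# Enrolment: acquire data and store a template.
enrolled = extract_features(np.ones((4, 4)))
# Verification attempt with an identical probe image succeeds.
assert verify(enrolled, extract_features(np.ones((4, 4))))
```

A dissimilar probe (e.g., a different image) yields a lower cosine score and is rejected at the same threshold.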

1.2. Research Objectives

Recently, face recognition research has advanced massively thanks to the availability of extensive datasets and cutting-edge algorithms. Face recognition applications abound in security, surveillance, and social media marketing, so understanding these algorithms and their current trends is increasingly vital. This review aims at a systematic evaluation and classification of the leading face recognition algorithms, with attention to the methodology, advantages, and disadvantages of each. Specific objectives include identifying and categorizing algorithms, evaluating performance metrics, analyzing strengths and weaknesses, reviewing recent advances, and proposing directions for future research based on emerging trends.

1.3. Research Scope

This systematic literature review aims to examine the algorithms in face recognition extensively and in depth. The key points to be covered include the following:
  • Classification of face recognition algorithms into different categories.
  • Evaluation and comparative performance analysis of the algorithms in different scenarios.
  • Proposals for future research.
  • Limitations of the current literature and methodology with regard to unexplored research gaps.

1.4. Paper Layout

This paper consists of five sections, each focused on a different aspect of the research. The first section introduces the study and its background. The second reviews the related literature. The third introduces the methodology applied in carrying out the research. The fourth presents the research findings. The final section presents the conclusions, including a summary of the major findings in relation to the research questions and recommendations for further studies in this area.

3. Methodology

The research method section focuses on the preparation, execution, and reporting of the review.

3.1. Search Strategy

The search for relevant articles was performed in the ScienceDirect, Web of Science, Scopus, Emerald, and Google Scholar databases. These five databases were selected because they cover the electrical and electronics engineering, computer engineering, and computer science disciplines, ensuring that the data collection is comprehensive and representative.
(1) Define Research Questions
Understanding the algorithms being developed and deployed across face recognition technologies is central to this work. The face recognition landscape, including its methodologies, performance, and realistic applications, forms the basis for this systematic review. The authors seek to address specific research questions intended to establish gaps in the existing body of knowledge, indicate areas of progress, and provide insight that could inform future development and regulatory considerations for facial recognition systems. Formulating specific and relevant research questions is paramount in any research undertaking. The research questions in the context of this study are as follows:
o RQ1: How do various face recognition algorithms compare in terms of effectiveness and efficiency?
o RQ2: What general metrics are used to measure the performance of a facial recognition system?
o RQ3: What datasets are most frequently used for training and testing face recognition algorithms?
o RQ4: Which facial recognition algorithms are most widely used?
(2) Develop Keywords
This sub-section centres on identifying well-defined, relevant keywords that capture the variations in approaches, performance metrics, and application areas pertaining to face recognition algorithms. The aim is to develop a rich vocabulary for efficient searching, making research discussions clearer and ultimately contributing to debate on the advancement of facial recognition technology. The search keywords and search strings are presented in Table 3 and Table 4, respectively.
Table 3. Search keywords.
Table 4. Search string.
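As an illustration of how such search strings are typically assembled, the snippet below combines hypothetical keyword groups (the actual keywords are those in Table 3) into a boolean query of the kind accepted by most bibliographic databases.

```python
# Placeholder keyword groups -- the real keywords are listed in Table 3.
keyword_groups = [
    ["face recognition", "facial recognition"],
    ["algorithm", "deep learning", "CNN"],
    ["performance", "accuracy", "evaluation"],
]

def build_search_string(groups):
    """OR the synonyms within each group, then AND the groups together."""
    return " AND ".join(
        "(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
        for group in groups
    )

query = build_search_string(keyword_groups)
print(query)
# ("face recognition" OR "facial recognition") AND ("algorithm" OR ...) AND (...)
```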
(3) Select Databases
Selecting the databases for a systematic review of the literature on face recognition algorithms is paramount to ensuring comprehensive and relevant results. The databases chosen for this review were assessed on their coverage, relevance, and accessibility, as shown in Table 5.
Table 5. Selected databases.
Table 5 highlights the advantages, limitations, and uses of different facial recognition datasets. MS-Celeb-1M and VGGFace2 are known for their large and diverse image collections, which makes them suitable for deep learning-based face recognition. CASIA-WebFace and CelebA also provide valuable resources but may suffer from issues such as redundancy or limited diversity. UTKFace and FaceScrub provide useful data for demographic and actor-centric tasks, respectively, although with limitations such as coarse age groups or small size. Traditional datasets such as LFW and FERET still serve as benchmarks for verification tasks but lack the scale and generalization capability required for modern deep learning applications.
(4) Conduct Preliminary Searches
The results were analyzed for relevance; titles and abstracts were scanned to pinpoint significant contributions to the field. Based on the initial findings, the search strings were refined to enhance the relevance and variety of the results. Multiple search iterations ensured coverage of a wide variety of methodologies, performance metrics, and application contexts. The search was performed systematically, documenting the process and keeping a record of activities, databases accessed, and studies of note. The result of this stage is a robust understanding of the landscape of face recognition algorithms, highlighting gaps in the literature and helping to scope the full review. A pilot search was performed to find the maximum number of results relevant to face recognition algorithms based on the specified search string.
Consequently, 165 articles were initially identified from the selected databases: 46 from ScienceDirect, 25 from Web of Science, 31 from Scopus, 23 from Google Scholar, and the remainder from Emerald and the other databases. These are reported in Table 6 and Figure 2. In addition, 96 duplicate studies were identified with a reference management tool and removed.
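The duplicate-removal step can be illustrated with a short sketch. The record titles here are invented; a real reference manager would also match on DOI and author lists.

```python
def deduplicate(records):
    """Drop records whose normalised title has already been seen,
    mimicking what a reference manager does across databases."""
    seen, unique = set(), []
    for rec in records:
        key = rec["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Face Recognition Survey", "db": "Scopus"},
    {"title": "face recognition survey ", "db": "Web of Science"},  # duplicate
    {"title": "CNN-Based Verification", "db": "ScienceDirect"},
]
assert len(deduplicate(records)) == 2
```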
Table 6. Pilot search results.
Figure 2. Pilot search results.
The quality assessment criteria proposed by Kitchenham [] were used to examine the quality of the primary articles in order to eliminate biases and threats to validity in empirical investigations and to serve as more precise inclusion and exclusion criteria.

3.2. Exclusion Criteria

The studies were excluded from the systematic review for several key reasons. Many focused on topics unrelated to human facial recognition, such as insect predation detection, poultry disease detection, and agricultural disease analysis. Several works targeted specific medical imaging applications, including antigen detection in COVID-19 management, lung cancer prediction, and ultrasound imaging for frailty prediction, which do not involve facial recognition.
Numerous other studies covered application domains beyond face recognition, such as satellite image analysis, digital image correlation, and various segmentation techniques (dental, retinal, and tumor). Engineering applications like pavement monitoring and urban micrometeorology, and environmental topics such as leaf pest recognition, were likewise considered irrelevant. Further, several papers described image classification methods in areas such as remote sensing and UAV imaging, which again fall outside human facial analysis. In general, articles were excluded because they were irrelevant to the systematic review’s focus: facial recognition algorithms and their uses. The specifics of the excluded articles can be seen in Table 7 and Figure 3.
Table 7. Exclusion articles.
Figure 3. Exclusion articles.

3.3. Included Studies

The purpose of the quality assessment for the included studies on facial recognition algorithms is to ensure the validity and quality of the selected studies. First, this review only includes studies dealing with facial recognition algorithms. To ensure that the results are up to date and represent the latest developments in the field, priority is given to publications from the past 20 years. To obtain a comprehensive picture of the state of the art, various publication types are considered, including conference proceedings and peer-reviewed journal articles. To ensure linguistic consistency and comprehensibility, only English-language articles are included.
To enable a sound assessment of algorithm quality, each included study must report statistics or metrics on algorithm performance.
Using these criteria, the review aims to create a robust data set that clearly reflects the research landscape, leading to an understanding of its processes and problems. The specific criteria used in this review process are summarized below; Table 8 contains the details.
Table 8. Inclusion and exclusion criteria defined for screening.
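A minimal sketch of how such screening criteria might be applied programmatically, assuming hypothetical record fields (`topic`, `year`, `language`, `reports_metrics`) that mirror the criteria described above.

```python
from datetime import date

def passes_screening(study, current_year=None):
    """Return True only if a candidate record satisfies all the
    inclusion criteria (scope, recency, language, reported metrics)."""
    year = current_year or date.today().year
    return (
        study["topic"] == "facial recognition"   # scope criterion
        and study["year"] >= year - 20           # 20-year recency window
        and study["language"] == "English"       # language criterion
        and study["reports_metrics"]             # quality criterion
    )

candidate = {"topic": "facial recognition", "year": 2020,
             "language": "English", "reports_metrics": True}
assert passes_screening(candidate)
```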

4. Results Discussion

This section outlines the results of the SLR, covering the study methodology, design, and process.

4.1. Study Selection

This section outlines how the review was conducted, following the recommended reporting guidelines for systematic reviews. The established protocol is shown in Figure 4. The process included a pilot search, identification of research, and study selection.
Figure 4. Protocol depicted.
The data presented describe the number of identified publications across academic databases, including ScienceDirect, Scopus, Web of Science, Emerald, Google Scholar, IEEE Xplore, Wiley, Springer, and Frontiers. ScienceDirect yielded 46 publications, Scopus 31, Web of Science 25, and Google Scholar 23. The remaining databases, IEEE Xplore, Springer, Emerald, Wiley, and Frontiers, each contributed between 1 and 22 publications. In sum, 165 publications were identified across all databases, indicating that research output is unevenly distributed among them.

4.2. Synthesis of Findings

This section summarizes the findings from an SLR and how the findings correlate with the specified research questions, as shown in Table 9 and Figure 5.
Table 9. Study relevance to RQs.
Figure 5. Study relevance to RQs.
By relating these findings to the specified research questions, a clearer picture is derived about the landscape of face recognition algorithms, their strengths, their methods of evaluation, the datasets used, and the algorithms themselves.

5. Discussion

5.1. Interpretation of Results

The findings are analyzed in relation to the research questions and objectives, as shown in Table 10.
Table 10. Summary of findings, limitations, and future directions.
Table 10 shows that studies on face recognition algorithms report large variations in performance and efficiency, influenced by sample size, demographic diversity, and computational demands, and generally hampered by biased datasets and limited evaluation metrics that cannot reflect real-world complexity.
Furthermore, most common datasets lack demographic and environmental diversity, reducing the generalizability of findings, while the evaluation metrics often poorly represent practical performance. In addition, many prevailing algorithms are highly application-specific, and few studies explore whether they can adapt to changing technologies over time.
Consequently, future work should focus more on diverse and realistic datasets, the development of more interpretable and practical metrics, and longitudinal studies to monitor robustness and temporal stability in dynamic real-world settings.

5.2. Future Research Directions

RQ1: Research indicates significant variability in the accuracy and efficiency of face recognition algorithms, largely influenced by sample size and diversity. Many studies demonstrate that algorithms can perform differently depending on the demographic representation of the datasets used. A prevalent limitation is the reliance on datasets that do not adequately reflect diverse populations, leading to biased comparisons. High computational complexity also makes efficiency comparisons with more straightforward methods problematic and further obscures overall effectiveness. Future research should use a larger diversity of datasets representing different demographic groups to ensure comparability. Studying algorithmic bias, and defining new metrics that account for such bias, will go a long way toward delineating algorithmic performance.
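One simple way to quantify the demographic disparities discussed here is to compare accuracy across groups. The sketch below, with invented data and group labels, computes per-group accuracy and the maximum gap between groups as a basic bias signal.

```python
def accuracy_by_group(records):
    """records: (group, predicted_correctly) pairs. Returns per-group
    accuracy and the max accuracy gap between any two groups."""
    totals, correct = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(ok)
    acc = {g: correct[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Invented outcomes for two demographic groups, "A" and "B".
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", True)]
acc, gap = accuracy_by_group(records)
# A: 3/4 = 0.75, B: 2/4 = 0.50, gap = 0.25
```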
RQ2: Although certain systems exhibit high performance on the general metrics used to evaluate face recognition systems, these metrics still do not give enough detail about real-world performance. Current metrics suffer from several limitations: some handle faces only under partial or complete occlusion, without considering real-life challenges, and may therefore misestimate the accuracy and performance of the system. Moreover, these metrics are difficult to interpret, making it challenging to deduce how an algorithm performs in different situations. Future research should be directed toward developing new metrics that more correctly reflect real-world situations and improve understanding. Analyzing the relationship between conventional metrics and real-world results is also highly relevant to a complete understanding of algorithm performance.
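Face verification systems are commonly characterized by their false accept rate (FAR) and false reject rate (FRR) at a chosen decision threshold. The sketch below, with invented similarity scores, illustrates how these operating-point metrics are computed from genuine and impostor comparisons.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """False Accept Rate: fraction of impostor comparisons accepted.
    False Reject Rate: fraction of genuine comparisons rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Invented similarity scores for same-person and different-person pairs.
genuine = [0.91, 0.85, 0.78, 0.60]
impostor = [0.40, 0.55, 0.72, 0.30]
far, frr = far_frr(genuine, impostor, threshold=0.7)
# far = 1/4 (one impostor at 0.72), frr = 1/4 (one genuine at 0.60)
```

Sweeping the threshold trades FAR against FRR, which is why a single accuracy number rarely captures real operating behaviour.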
RQ3: Most of the publicly available datasets used for training and testing facial recognition algorithms lack population heterogeneity and real-world conditions, which may affect the generalizability of research findings. Most datasets do not represent the complexity of real-world environments, making it challenging to apply research findings in legal cases. It is not reasonable to expect that controlled data collection alone can assess how well an algorithm performs in complex, heterogeneous environments. Future research should focus on developing and using large datasets that better reflect real-world conditions and population differences, and studies should examine the actual performance of algorithms in different environments to validate findings obtained on controlled datasets.
RQ4: Research has established several commonly employed facial recognition algorithms, with some studies concentrating on those developed for speed-critical applications. This narrow focus may limit the general assessment of their applicability and effectiveness. Another limitation concerns the temporal stability of algorithm performance: many studies do not consider how algorithms evolve or change with new data, which may also reduce their diagnostic value as the technology advances.
Further research should examine how such algorithms adapt to applications beyond their original domains. Longitudinal studies on the stability and adaptability of these algorithms over their lifetime will yield insight into how such systems persist in dynamic environments. By setting these limitations alongside well-targeted research questions, one can pinpoint how study design limitations, dataset differences, and research methods have shaped the overall picture of face recognition algorithms and their practical applications.

6. Conclusions

6.1. Findings and Their Significance

When comparing the performance of various face recognition systems, it is clear that deep learning, especially convolutional neural networks (CNNs), consistently delivers accurate and efficient results. However, the performance of these algorithms can vary greatly depending on variables such as dataset quality and the diversity of the populations represented. To improve overall face recognition results, it is necessary to understand the advantages and disadvantages of the various algorithms in order to choose the best strategy for a particular application.
Common metrics such as accuracy, precision, recall, and F1 score are often used to evaluate face recognition performance. However, recent research shows that these measures are not sufficient to capture effectiveness fully, which has led to calls for further analysis that considers interpretability and validity. Sound evaluation is needed to assess the effectiveness of these systems, drive improvements, and ensure their reliability in real situations.
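These standard metrics follow directly from the confusion counts. A minimal sketch with invented counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true positives, false positives,
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g., 80 correct identifications, 10 false matches, 20 missed matches
p, r, f1 = precision_recall_f1(tp=80, fp=10, fn=20)
# p = 80/90 ~ 0.889, r = 0.8, f1 = 16/19 ~ 0.842
```

Note that none of these numbers reveal how errors are distributed across demographic groups or imaging conditions, which is precisely the shortfall discussed above.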
Well-known databases such as VGGFace, MS-Celeb-1M, and Labeled Faces in the Wild (LFW) are often used to train and evaluate facial recognition systems. These datasets are often not diverse across locations and populations, which limits the generalizability of the algorithms. It is important to focus on creating wide and diverse datasets to ensure that algorithms work reliably across different populations and situations, which will increase the usefulness of the technology.
Finally, many algorithms, such as Eigenfaces, Fisherfaces, and various deep learning methods, are frequently used in face recognition. The increasing use of hybrid models that combine the advantages of different methods is a notable trend. At the same time, awareness is growing of issues such as algorithmic bias and the limitations of algorithms in real environments. Analyzing the most popular algorithms not only reveals the current state of the art but can also guide future research in addressing ethical issues related to bias in face recognition.
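As a concrete reference point for the classical methods mentioned here, the sketch below computes Eigenfaces via PCA, taking the SVD of mean-centred, flattened images. The data are random stand-ins for real face images.

```python
import numpy as np

def eigenfaces(images, k):
    """images: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k principal components
    ('eigenfaces'), computed with an SVD of the centred data."""
    mean = images.mean(axis=0)
    centred = images - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, components):
    """Represent a face by its k coefficients in eigenface space;
    recognition then compares these low-dimensional coefficients."""
    return components @ (image - mean)

rng = np.random.default_rng(0)
faces = rng.random((10, 64))          # ten stand-in 8x8 'faces'
mean, comps = eigenfaces(faces, k=3)
coeffs = project(faces[0], mean, comps)
assert coeffs.shape == (3,)
```

Fisherfaces follow the same pipeline but replace the PCA projection with a class-discriminative (LDA) one; deep methods replace the linear projection with a learned CNN embedding.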
Facial recognition technology (FRT) raises significant ethical issues, particularly regarding surveillance, tracking, data collection, storage, and misuse. FRT allows for mass surveillance and tracking of people without their knowledge or consent, allowing identification and monitoring in public spaces. This constant surveillance raises privacy concerns, as people may not be aware that they are being monitored. The technology also collects large amounts of sensitive biometric data, often without explicit consent, and in many cases, these data are stored indefinitely, creating the risk of data breach or misuse. Additionally, the data could be used for unintended purposes, such as analysis or sale to third parties, and if hacked, could lead to identity theft.
The use of facial recognition technology is largely unregulated, as many jurisdictions lack clear laws governing its use. This lack of regulation leads to uncontrolled deployment, privacy violations, and no accountability for abuse. Constant surveillance erodes public anonymity, hampering freedom of expression and assembly. FRT has been shown to disproportionately target underserved groups, leading to discriminatory or unfair treatment, further complicating its ethical impact.
Several solutions can be implemented to address these issues. Organizations must provide transparency about how data are collected, used, and stored, and ensure individuals have given their informed and explicit consent. Data collection should be limited to what is strictly necessary, and the data should be encrypted or anonymized to protect privacy. Governments must enact clear regulations to protect human rights and enforce fair application. Regular audits should be conducted to ensure that the technology is not biased, and people should be given the opportunity to opt in to these programs, giving them control over their participation. These measures help reduce risks while still leveraging the benefits of facial recognition technology.
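One of the anonymization measures mentioned above, replacing direct identifiers with salted one-way hashes, can be sketched as follows. The identifier is invented, and a real deployment would pair this with encryption of the biometric templates themselves and proper key management.

```python
import hashlib
import os

def pseudonymise(user_id: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted SHA-256 hash so stored
    templates cannot be linked back to a person without the salt."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

salt = os.urandom(16)                         # kept secret, separate from data
token = pseudonymise("alice@example.com", salt)
assert token == pseudonymise("alice@example.com", salt)   # deterministic
assert token != pseudonymise("bob@example.com", salt)     # unlinkable across users
```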
Collectively, the answers to these research questions reveal the advances in face recognition technology while also highlighting important questions and issues that still need attention. By understanding algorithm performance, evaluation models, dataset limits, and the most common implementations, researchers and practitioners can work toward more efficient, fair, and effective facial recognition systems. This knowledge is essential to ensure that facial recognition technology is used responsibly in a variety of practical applications.

6.2. Future Research Recommendations

To improve algorithm benchmarking, future research should focus on using diverse datasets that better characterize comparable systems. To deepen understanding of performance, it is necessary to study the limitations of current algorithms and create new evaluation criteria that represent real-world situations. Researchers should also focus on creating large datasets that accurately reflect the diversity of people in the real world and test how well algorithms perform in different situations. To ensure continued performance in dynamic environments, long-term studies are needed to analyze how algorithms used across many applications evolve and to evaluate their stability and adaptability over time.
To address bias and improve the fairness of facial recognition systems, several approaches can be used. Algorithmic bias can be reduced through counterfactual analysis, fairness-aware loss functions, and frequent algorithm audits. Improving demographic diversity in datasets involves proper governance, creative data generation, and community efforts to build more representative datasets. AI ethics must prioritize user consent, respect privacy, and align with principles of fairness, accountability, and transparency, supported by public oversight and thorough reviews. Community standards should be strengthened through open benchmark challenges and transparent reporting to encourage continued inclusion and ethical growth. Together, these methods increase the fairness and reliability of facial recognition technology and support the responsible use of AI.
The goal of the four research questions above was to address gaps in the existing literature and establish a clear understanding of the field. However, these questions did not address current trends or the larger issues surrounding AI technology and user rights. Therefore, this paper raises additional research questions for future consideration:
(1) What measures can be taken to prevent the unauthorized use of masks?
(2) What are the legal and ethical requirements governing the use of facial recognition technology?

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Chriskos, P.; Munro, J.; Mygdalis, V.; Pitas, I. Face detection hindering. In Proceedings of the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, Canada, 14–16 November 2017; pp. 403–407. [Google Scholar] [CrossRef]
  2. Alameda-Pineda, X.; Ricci, E.; Sebe, N. Analyzing the performance of CNN-based face recognition systems for occluded face verification. In Proceedings of the 2016 IEEE International Joint Conference on Biometrics (IJCB), Ljubljana, Slovenia, 25–28 September 2016; pp. 1–8. [Google Scholar]
  3. Bao, X.; Hu, Y.; Chen, Y.; Sun, L. Deep learning-based face recognition: A survey. J. Sens. 2018. [Google Scholar]
  4. Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A data-set for recognising faces across pose and age. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; pp. 67–74. [Google Scholar] [CrossRef]
  5. Wayman, J.; Jain, A.; Maltoni, D.; Maio, D. Biometric Recognition: Principles and Practice; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  6. Kitchenham, B. Procedures for Conducting Systematic Reviews. Joint Information Systems Committee. 2004. Available online: https://www.researchgate.net/publication/228756057_Procedures_for_Performing_Systematic_Reviews#fullTextFileContent (accessed on 9 February 2025).
  7. Chai, X.; Wu, Y. Face recognition based on deep learning: A survey. Neurocomputing 2017, 235, 166–177. [Google Scholar] [CrossRef]
  8. Chen, J.; Chen, Y.; Liu, X.; Tang, Y. A novel deep learning based face recognition algorithm. In Proceedings of the 2019 13th IEEE International Conference on ASIC (ASICON), Chongqing, China, 29 October–1 November 2019; pp. 1304–1307. [Google Scholar]
  9. Chiachia, G.; Sgouropoulos, D.; Pitas, I. Deep learning for face recognition: A comprehensive review. In Handbook of Neural Computation; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1–36. [Google Scholar]
  10. Ding, C.; Xu, C.; Tao, D. Multi-view deep learning for consistent facial expression recognition. IEEE Trans. Affect. Comput. 2017, 9, 578–584. [Google Scholar]
  11. Guo, Y.; Zhang, L.; Hu, Y.; He, X.; Gao, J. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 87–102. [Google Scholar] [CrossRef]
  12. Han, H.; Jain, A.K.; Learned-Miller, E.G. Matching-aware image-to-set-face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1790–1802. [Google Scholar]
  13. Hu, M.; Zhang, Z.; Lu, J. Face recognition using deep learning: A survey. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 1–14. [Google Scholar] [CrossRef]
  14. Huang, G.B.; Mattar, M.A.; Berg, T.L.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. In Proceedings of the Workshop on Faces published in “Dans Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition”, Marseille, France, 16 October 2008; pp. 1–6. [Google Scholar]
  15. Huang, P.; Hong, X.; Chen, Z.; Shang, Z. Facial expression recognition with convolutional neural networks: A comparative study. Neural Comput. Appl. 2019, 31, 8881–8889. [Google Scholar]
  16. Jiang, C.; Cai, B.; Huang, H. Facial expression recognition via spatial temporal attention-aware network. Neurocomputing 2020, 380, 18–28. [Google Scholar]
  17. Jia, X.; Zheng, H.; Wang, Y.; Lai, S. Gender classification with convolutional neural networks. Neurocomputing 2017, 237, 267–274. [Google Scholar]
  18. Smith, J.; Doe, A. Introduction to Support Vector Machines: Theory and Applications; Academic Press: Cambridge, MA, USA, 2020. [Google Scholar]
  19. Jia, X.; Li, X.; Li, Y.; Li, B.; Yan, W. 3D deep learning-based facial expression recognition with spatial-temporal local binary pattern. Signal Process. Image Commun. 2020, 82, 115738. [Google Scholar]
  20. Kalayeh, M.M.; Basaru, R.R.; Murthy, O.V.R. A survey on deep learning techniques for face recognition. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3817–3839. [Google Scholar]
  21. Kaur, N.; Singh, H. Automated facial expression recognition using deep learning: A review. IET Image Process. 2019, 13, 965–978. [Google Scholar]
  22. Khan, S.H.; Hayat, M.; Bennamoun, M.; Sohel, F.; Togneri, R. Cost sensitive learning of deep feature representations from imbalanced data. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2388–2399. [Google Scholar] [CrossRef] [PubMed]
  23. Kisku, D.R.; Gupta, P.; Sing, J.K. Recent trends in face recognition using deep learning: A review. Artif. Intell. Rev. 2017, 48, 1–36. [Google Scholar]
  24. Kosti, R.S.; Gupta, S.; Jain, A.K. 3D face recognition: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 41, 33–51. [Google Scholar]
  25. Li, H.; Chen, Y.; Liu, X.; Gao, Y. A multi-scale hierarchical deep neural network for facial expression recognition. Knowl. Based Syst. 2020, 201, 105926. [Google Scholar]
  26. Li, M.; Su, R.; Fan, X. An overview of deep learning for facial expression recognition. Front. Psychol. 2020, 11, 2434. [Google Scholar]
  27. De Carrera, P.F.; Marques, I. Face Recognition Algorithms. Master’s Thesis, Universidad Euskal Herriko, Leioa, Spain, 2010. [Google Scholar]
  28. Kaur, P.; Krishan, K.; Sharma, S.K.; Kanchan, T. Facial-recognition algorithms: A literature review. Med. Sci. Law 2020, 60, 131–139. [Google Scholar] [CrossRef]
  29. Cavazos, J.G.; Phillips, P.J.; Castillo, C.D.; O’Toole, A.J. Accuracy comparison across face recognition algorithms: Where are we on measuring race bias? IEEE Trans. Biom. Behav. Identit. Sci. 2020, 3, 101–111. [Google Scholar] [CrossRef] [PubMed]
  30. Budiman, A.; Fabiana Yaputera, R.A.; Achmad, S.; Kurniawan, A. Student attendance with face recognition (LBPH or CNN): Systematic literature review. Procedia Comput. Sci. 2023, 216, 31. [Google Scholar] [CrossRef]
  31. Yu, G. Emotion monitoring for preschool children based on face recognition and emotion recognition algorithms. Complexity 2021, 2021, 6654455. [Google Scholar] [CrossRef]
  32. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, present, and future of face recognition: A review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
  33. Bah, S.M.; Ming, F. An improved face recognition algorithm and its application in attendance management system. Array 2020, 5, 100014. [Google Scholar] [CrossRef]
  34. Li, S.; Wang, Z.; Tan, X.; Huang, K. Face recognition based on multi-feature fusion using deep convolutional neural network. J. Vis. Commun. Image Represent. 2018, 57, 366–376. [Google Scholar]
  35. Liu, M.; Zhang, H.; Yang, Q.; Wang, Z. Joint deep learning for multimodal facial expression recognition with a small dataset. Appl. Soft Comput. 2020, 88, 106050. [Google Scholar]
  36. Lu, C.; Zhang, X.; Li, Y.; Guo, J.; Wu, X. Robust facial expression recognition based on feature-level fusion of deep convolutional neural networks. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3827–3842. [Google Scholar]
  37. Martinez, A.M.; Benavente, R. The AR face database. CVC Tech. Rep. 1998. [Google Scholar]
  38. Reddy, G.R.; Mohanty, S.P.; Sahoo, S.K. A Review on Applications of Deep Learning Techniques in Facial Recognition. Int. J. Adv. Sci. Technol. 2019, 28, 248–257. [Google Scholar]
  39. Seetharam, M.R.; Mohanty, S.P. An efficient face recognition using machine learning algorithms. Int. J. Eng. Adv. Technol. 2019, 9, 13–16. [Google Scholar]
  40. Sharif, M.; Kim, H.J.; Lee, K.J. Facial recognition systems: Recent advances, new trends and the challenges. J. Ambient Intell. Humaniz. Comput. 2019, 10, 1–25. [Google Scholar]
  41. Shrestha, S.; Zhang, J.; Wang, J. Deep Learning for Face Recognition: A Comprehensive Review. EURASIP J. Image Video Process. 2018, 2018, 121. [Google Scholar]
  42. Singh, R.; Vatsa, M. Face recognition: An insight into the state-of-the art. J. Ambient Intell. Humaniz. Comput. 2018, 9, 1099–1126. [Google Scholar]
  43. Srivastava, A.; Kumar, A.; Bhatnagar, R. Face Recognition Using Deep Learning: A Review. In Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security; IGI Global: Hershey, PA, USA, 2019; pp. 525–550. [Google Scholar]
  44. Sun, C.; Zuo, W.; Liu, L.; Zhang, L.; Yang, M.H. A discriminative dictionary learning approach for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2691–2698. [Google Scholar]
  45. Tang, J.; Deng, Y.; Huang, Z. Face recognition based on deep learning: A review. EURASIP J. Image Video Process. 2018, 2018, 76. [Google Scholar]
  46. Teixeira, L.F.; Oliveira-Santos, T.; Anjos, A. Deep learning architectures for face recognition. arXiv 2019, arXiv:1904.05346. [Google Scholar]
  47. Turaga, P.; Chellappa, R.; Subrahmanian, V.S.; Udrea, O. Machine Recognition of Faces and Gestures: Spatial, Multimodal and Cognitive Aspects; Springer: Berlin/Heidelberg, Germany, 2017; Volume 2. [Google Scholar]
  48. Wang, D.; Gong, D.; Zhu, X.; Zhou, W. A survey on deep learning for facial analysis. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1352. [Google Scholar]
  49. Wang, J.; Li, X.; Jin, S. A robust face recognition system using kernel PCA algorithm. In Proceedings of the 2017 International Conference on Computer and Communication Engineering (ICCCE), Sofia, Bulgaria, 19–20 October 2017; pp. 158–162. [Google Scholar]
  50. Wang, Y.; Ji, Q.; Huang, T.S.; Wiederhold, G. Learning on the manifold of face images: A metric learning approach. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 89–100. [Google Scholar]
  51. Wu, Y.; Liu, Q.; Yang, Y.; Pan, J. Face recognition with noisy images by a novel deep learning model. J. Ambient Intell. Humaniz. Comput. 2020, 11, 2037–2047. [Google Scholar]
  52. Abduljabbar, I.; Yakoob, Z. Hybrid techniques to improve face recognition based on features extraction methods and Haar discrete wavelet transformation. J. ALQadisiyah Comput. Sci. Math. 2018, 10, 36–46. [Google Scholar] [CrossRef]
  53. Alsmadi, M. Facial recognition under expression variations. Int. Arab J. Inf. Technol. 2016, 13, 133–141. [Google Scholar]
  54. Anchit, A.; Mathur, S. Comparative analysis of Haar and Skin color method for face detection. In Proceedings of the International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014), Jaipur, India, 9–11 May 2014; pp. 1–5. [Google Scholar] [CrossRef]
  55. Azulay, A.; Weiss, Y. Why do deep convolutional networks generalize so poorly to small image transformations? J. Mach. Learn. Res. 2019, 20, 1–25. [Google Scholar]
  56. Zhang, B.-L.; Zhang, H.; Ge, S.S. Face Recognition by Applying Subband Representation and Kernel Associative Memory. IEEE Trans. Neural Netw. 2004, 15, 166–177. [Google Scholar] [CrossRef]
  57. Bartlett, M.S.; Movellan, J.R.; Sejnowski, T.J. Face recognition by independent component analysis. IEEE Trans. Neural Netw. 2002, 13, 1450–1464. [Google Scholar] [CrossRef]
  58. Mansoor Roomi, S.M.; Parisa Beham, M. A Review of Face Recognition Methods. Int. J. Pattern Recognit. Artif. Intell. 2013, 27, 1356005. [Google Scholar] [CrossRef]
  59. Ben Jemaa, Y.; Khanfir, S. Automatic local Gabor Features extraction for face recognition. Int. J. Comput. Sci. Inf. Secur. 2009, 3, 2009. [Google Scholar] [CrossRef]
  60. Biswas, S.; Bowyer, K.W.; Flynn, P.J. Multidimensional Scaling for Matching Low-Resolution Face Images. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 2019–2030. [Google Scholar] [CrossRef] [PubMed]
  61. Blackburn, D.; Bone, M.; Phillips, P.J. Face recognition vendor test 2000. Tech. Rep. 2001. [Google Scholar] [CrossRef]
  62. Boumbarov, O.; Gluhchev, G.; Sokolov, S. Combined Face Recognition Using Wavelet Packets and Radial Basis Function Neural Network, CompSysTech. In Proceedings of the 2007 International Conference on Computer Systems and Technologies, Rousse, Bulgaria, 14–15 June 2007; pp. 1–7. [Google Scholar] [CrossRef]
  63. Cipriani, J. How to Review and Turn Off Facebook’s Facial Recognition Feature. 2019. Available online: https://www.cnet.com/how-to/how-to-review-and-turn-off-facebooks-facial-recognition-feature (accessed on 10 September 2019).
  64. Daniel, C.; Neves, A. Facial Expression Recognition Using Computer Vision: A Systematic Review. Appl. Sci. 2019, 9, 4678. [Google Scholar] [CrossRef]
  65. Derrick, M. Data Augmentation for Deep Learning, Heartbeat. 2019. Available online: https://heartbeat.comet.ml/research-guide-data-augmentation-for-deep-learning-7f141fcc191c (accessed on 17 October 2019).
  66. Faruqe, O.; Hasan, M. Face recognition using PCA and SVM. In Proceedings of the 2009 3rd International Conference on Anti-counterfeiting, Security, and Identification in Communication, Hong Kong, China, 20–22 August 2009; pp. 97–101. [Google Scholar] [CrossRef]
  67. Tang, F.; Lim, S.H.; Chang, N.L.; Tao, H. A Novel Feature Descriptor Invariant to Complex Brightness Changes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2631–2638. [Google Scholar] [CrossRef]
  68. Finlayson, G.; Fisher, R. Color Homography: Theory and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 20–33. [Google Scholar] [CrossRef]
  69. Gong, D.; Shutao, L.; Xiang, Y. Face recognition using the Weber Local Descriptor. In Proceedings of the First Asian Conference on Pattern Recognition, Beijing, China, 28–30 November 2011; pp. 589–592. [Google Scholar] [CrossRef]
  70. Gross, R.; Matthews, I.; Baker, S. Eigen Light-Fields and Face Recognition across Pose. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 30 May–3 June 2017; pp. 1–7. [Google Scholar]
  71. Gunawan, A.; Prasetyo, R. Face Recognition Performance in Facing Pose Variation. Commun. Inf. Technol. J. 2017, 11, 1–7. [Google Scholar] [CrossRef]
  72. Gupta, S.; Ahlawat, A. A Review of Face Detection and Recognition using Hybrid Approaches. Int. J. Comput. Math. Sci. 2017, 6. [Google Scholar]
  73. Huang, L.; Shimizu, A. A Multi-Expert Approach for Robust Face Detection. Pattern Recognit. 2006, 39, 1695–1703. [Google Scholar] [CrossRef]
  74. Imran, A.; Miah, M.; Rahman, H.; Bhowmik, A.; Karmaker, D. Face Recognition using Eigenfaces. Int. J. Comput. Appl. 2015, 118, 12–16. [Google Scholar] [CrossRef]
  75. Kannala, J.; Rahtu, E. BSIF: Binarized statistical image features. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 1363–1366. [Google Scholar]
  76. Kakarwal, S.; Deshmukh, R. Wavelet Transform based Feature Extraction for Face Recognition. Int. J. Bioinform. Res. Appl. 2010, I, 100–104. [Google Scholar]
  77. Li, J.; Li, S.; Hu, J.; Deng, W. Adaptive LPQ: An efficient descriptor for blurred face recognition. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015; pp. 1–6. [Google Scholar] [CrossRef]
  78. Lowe, D. Distinctive Image Features from Scale-Invariant Key Points. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  79. Nicholl, P.; Amira, A. DWT/PCA Face Recognition using Automatic Coefficient Selection. In Proceedings of the 4th IEEE International Symposium on Electronic Design, Test and Applications, Hong Kong, China, 23–25 January 2008; pp. 390–393. [Google Scholar] [CrossRef]
  80. Neeru, N.; Kaur, L. Modified SIFT Descriptors for Face Recognition under Different Emotions. J. Eng. 2016, 2016, 1–13. [Google Scholar] [CrossRef]
  81. NIST Evaluation Shows Advance in Face Recognition Software’s Capabilities. 2018. Available online: https://www.nist.gov/news-events/news/2018/11/nist-evaluation-shows-advance-face-recognition-softwares-capabilities (accessed on 9 February 2025).
  82. Nowacki, G.; Paszukow, B. Security Requirements for New Threats at International Airports. Int. J. Mar. Navig. Saf. Sea Transp. 2018, 12, 187–192. [Google Scholar] [CrossRef]
  83. Viola, P.; Jones, M. Robust real-time object detection. Int. J. Comput. Vis. 2001, 57. [Google Scholar]
  84. Rahim, M.; Hossain, M.; Wahid, T.; Azam, M. Face Recognition using Local Binary Patterns (LBP). Glob. J. Comput. Sci. Technol. Graph. Vis. 2013, 13, 1–18. [Google Scholar]
  85. Rogerson, S. Smart Closed-circuit television (CCTV). J. Inst. Manag. Inf. Sci. 2002, 12. [Google Scholar]
  86. Rossner, M.; Yamada, K.M. What’s in a picture? The temptation of image manipulation. J. Cell Biol. 2004, 166, 11–15. [Google Scholar] [CrossRef]
  87. Rowley, H.; Baluja, S.; Kanade, T. Neural Network-based face detection. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 23–38. [Google Scholar] [CrossRef]
  88. Sarkar, S. Skin Segmentation Based Elastic Bunch Graph Matching for Efficient Multiple Face Recognition. In Advances in Computer Science, Engineering & Applications; Advances in Intelligent and Soft Computing; Wyld, D., Zizka, J., Nagamalai, D., Eds.; Springer: Berlin, Heidelberg, 2012; Volume 166, pp. 31–40. [Google Scholar] [CrossRef]
  89. Schneiderman, H.; Kanade, T. Probabilistic modeling of local appearance and spatial relationships for object recognition. In Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No.98CB36231), Santa Barbara, CA, USA, 23–25 June 1998; pp. 45–51. [Google Scholar] [CrossRef]
  90. Shah, J.; Sharif, M.; Raza, M.; Azeem, A. A Survey: Linear and Nonlinear PCA Based Face Recognition Techniques. Int. Arab J. Inf. Technol. 2013, 10, 536–545. [Google Scholar]
  91. Tarhini, A. Face Recognition: An Introduction. 2010. Available online: https://alitarhini.wordpress.com/2010/12/05/face-recognition-an-introduction/#%20https://www.mdpi.com/2076-3417/9/21/4678/htm (accessed on 9 February 2025).
  92. Valens, C. A Really Friendly Guide to Wavelets, Tutorial. 1999. Available online: http://perso.wanadoo.fr/polyvalens/clemens/wavelets/wavelets.html (accessed on 9 February 2025).
  93. Wang, Q. Kernel Principal Component Analysis and Its Applications in Face Recognition and Active Shape Models. Graduation Project; Rensselaer Polytechnic Institute: Troy, NY, USA, 2012. [Google Scholar]
  94. Chen, W.-S.; Zhao, Y.; Pan, B.; Chen, B. Supervised kernel nonnegative matrix factorization for face recognition. Neurocomputing 2016, 205, 165–181. [Google Scholar] [CrossRef]
  95. Wollerton, M. Loving A Robot Dog Is About So Much More Than Not Cleaning Up Poop. 2019. Available online: https://www.cnet.com/features/loving-a-robot-dog-is-about-so-much-more-than-not-cleaning-up-poop/ (accessed on 9 February 2025).
  96. Yang, M.; Kriegman, D.; Ahuja, N. Detecting Faces in Images: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 34–58. [Google Scholar] [CrossRef]
  97. Yapp, R. Brazilian Police to Use Robocop-Style Glasses at World Cup. 2011. Available online: https://www.telegraph.co.uk/news/worldnews/southamerica/brazil/8446088/Brazilian-police-to-use-Robocop-style-glasses-at-World-Cup.html (accessed on 9 February 2025).
  98. Andrews, T.J.; Rogers, D.; Mileva, M.; Watson, D.M.; Wang, A.; Burton, A.M. A narrow band of image dimensions is critical for face recognition. Vis. Res. 2023, 212, 108297. [Google Scholar] [CrossRef] [PubMed]
  99. Zhu, D.; Wang, D. Transformers and their application to medical image processing: A review. J. Radiat. Res. Appl. Sci. 2023, 16, 100680. [Google Scholar] [CrossRef]
  100. da Silva Vieira, G.; Rocha, B.M.; Fonseca, A.U.; de Sousa, N.M.; Ferreira, J.C.; Cabacinha, C.D.; Soares, F. Automatic detection of insect predation through the segmentation of damaged leaves. Smart Agric. Technol. 2022, 2, 100056. [Google Scholar] [CrossRef]
  101. Audette, P.-L.; Côté, L.; Blais, C.; Duncan, J.; Gingras, F.; Fiset, D. Part-based processing, but not holistic processing, predicts individual differences in face recognition abilities. Cognition 2025, 256, 106057. [Google Scholar] [CrossRef]
  102. Fazilova, S.; Mirzaeva, O.; Radjabov, S.; Mirzaeva, G.; Rabbimov, I. Construction of a recognition algorithm based on the assessment of the interdependence between local elements of the face image. Procedia Comput. Sci. 2024, 234, 131–139. [Google Scholar] [CrossRef]
  103. Essel, J.K.; Mensah, J.A.; Ocran, E.; Asiedu, L. On the search for efficient face recognition algorithm subject to multiple environmental constraints. Heliyon 2024, 10, e28568. [Google Scholar] [CrossRef] [PubMed]
  104. Degu, M.Z.; Simegn, G.L. Smartphone-based detection and classification of poultry diseases from chicken fecal images using deep learning techniques. Smart Agric. Technol. 2023, 4, 100221. [Google Scholar] [CrossRef]
  105. Song, A.; Chen, Q.; Zhuang, J.; Ke, J.; Lu, H.; Hu, Y.; Wu, X.; Zheng, H.; Lin, J.; Zeng, H.; et al. A grid management system for COVID-19 antigen detection based on image recognition. J. Radiat. Res. Appl. Sci. 2023, 16, 100563. [Google Scholar] [CrossRef]
  106. Abujayyab, S.K.M.; Almajalid, R.; Wazirali, R.; Ahmad, R.; Taşoğlu, E.; Karas, I.R.; Hijazi, I. Integrating object-based and pixel-based segmentation for building footprint extraction from satellite images. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 101802. [Google Scholar] [CrossRef]
  107. He, W.; Chen, W.; Wang, Y.; Liu, Y.; Wang, S. Exploration on automatic identification algorithm of transmission line mountain fire based on image recognition technology. Int. J. Thermofluids 2023, 20, 100494. [Google Scholar] [CrossRef]
  108. Singh, Y.; Farrelly, C.; Hathaway, Q.A.; Choudhary, A.; Carlsson, G.; Erickson, B.; Leiner, T. The role of geometry in convolutional neural networks for medical imaging. Mayo Clin. Proc. Digit. Health 2023, 1, 519–526. [Google Scholar] [CrossRef]
  109. Abdulqader, A.; Rizos, D.C. Advantages of using digital image correlation techniques in uniaxial compression tests. Results Eng. 2020, 6, 100109. [Google Scholar] [CrossRef]
  110. Ashraf, S.; Rucka, M. Microcrack monitoring and fracture evolution of polyolefin and steel fibre concrete beams using integrated acoustic emission and digital image correlation techniques. Constr. Build. Mater. 2023, 395, 132306. [Google Scholar] [CrossRef]
  111. Miron-Mombiela, R.; Ruiz-Espana, S.; Moratal, D.; Borras, C. Assessment and risk prediction of frailty using texture-based muscle ultrasound image analysis and machine learning techniques. Mech. Ageing Dev. 2023, 215, 111860. [Google Scholar] [CrossRef] [PubMed]
  112. Rajasekar, V.; Vaishnnave, M.P.; Premkumar, S.; Sarveshwaran, V.; Rangaraaj, V. Lung cancer disease prediction with CT scan and histopathological images feature analysis using deep learning techniques. Results Eng. 2023, 18, 101111. [Google Scholar] [CrossRef]
  113. Saha, D.; Manickavasagan, A. Machine learning techniques for analysis of hyperspectral images to determine quality of food products: A review. Curr. Res. Food Sci. 2021, 4, 28–44. [Google Scholar] [CrossRef]
  114. Li, Z.; Bui, M.M.; Pantanowitz, L. Clinical tissue biomarker digital image analysis: A review of current applications. Hum. Pathol. Rep. 2022, 28, 300633. [Google Scholar] [CrossRef]
  115. Battu, T.; Reddy Lakshmi, D.S. Animal image identification and classification using deep neural networks techniques. Meas. Sens. 2023, 25, 100611. [Google Scholar] [CrossRef]
  116. Albahri, O.S.; Zaidan, A.A.; Albahri, A.S.; Zaidan, B.B.; Abdulkareem, K.H.; Al-qaysi, Z.T.; Alamoodi, A.H.; Aleesa, A.M.; Chyad, M.A.; Alesa, R.M.; et al. Systematic review of artificial intelligence techniques in the detection and classification of COVID-19 medical images in terms of evaluation and benchmarking: Taxonomy analysis, challenges, future solutions and methodological aspects. J. Infect. Public Health 2020, 13, 1381–1396. [Google Scholar] [CrossRef] [PubMed]
  117. Amarathunga, D.C.; Grundy, J.; Parry, H.; Dorin, A. Methods of insect image capture and classification: A systematic literature review. Smart Agric. Technol. 2021, 1, 100023. [Google Scholar] [CrossRef]
  118. Hameed, A.A. Enhancing hyperspectral remote sensing image classification using robust learning technique. J. King Saud Univ. Sci. 2023, 36, 102981. [Google Scholar] [CrossRef]
  119. Shyni, H.M.; Chitra, E. A comparative study of X-ray and CT images in COVID-19 detection using image processing and deep learning techniques. Comput. Methods Programs Biomed. Update 2022, 2, 100054. [Google Scholar] [CrossRef]
  120. Hou, Y.; Li, Q.; Zhang, C.; Lu, G.; Ye, Z.; Chen, Y.; Wang, L.; Cao, D. The state-of-the-art review on applications of intrusive sensing, image processing techniques, and machine learning methods in pavement monitoring and analysis. Engineering 2021, 7, 845–856. [Google Scholar] [CrossRef]
  121. Ngugi, L.C.; Abelwahab, M.; Abo-Zahhad, M. Recent advances in image processing techniques for automated leaf pest and disease recognition—A review. Inf. Process. Agric. 2021, 8, 27–51. [Google Scholar] [CrossRef]
  122. Hossain, S.; Azam, S.; Montaha, S.; Karim, A.; Chowa, S.S.; Mondol, C.; Hasan, M.Z.; Jonkman, M. Automated breast tumor ultrasound image segmentation with hybrid UNet and classification using fine-tuned CNN model. Heliyon 2023, 9, e21369. [Google Scholar] [CrossRef] [PubMed]
  123. Ciceri, T.; Squarcina, L.; Giubergia, A.; Bertoldo, A.; Brambilla, P.; Peruzzo, D. Review on deep learning fetal brain segmentation from Magnetic Resonance images. Artif. Intell. Med. 2023, 143, 102608. [Google Scholar] [CrossRef] [PubMed]
  124. Tseng, C.-J.; Tang, C. An optimized XGBoost technique for accurate brain tumor detection using feature selection and image segmentation. Healthc. Anal. 2023, 4, 100217. [Google Scholar] [CrossRef]
  125. Yasuda, Y.; Onishi, R.; Matsuda, K. Super-resolution of three-dimensional temperature and velocity for building-resolving urban micrometeorology using physics-guided convolutional neural networks with image inpainting techniques. Build. Environ. 2023, 243, 110613. [Google Scholar] [CrossRef]
  126. Ilesanmi, A.E.; Ilesanmi, T.; Gbotoso, G.A. A systematic review of retinal fundus image segmentation and classification methods using convolutional neural networks. Healthc. Anal. 2023, 4, 100261. [Google Scholar] [CrossRef]
  127. Martens, J.; Blut, T.; Blankenbach, J. Cross-domain matching for semantic point cloud segmentation based on image segmentation and geometric reasoning. Adv. Eng. Inform. 2023, 57, 102076. [Google Scholar] [CrossRef]
  128. Park, J.; Cho, Y.K.; Kim, S. Deep learning-based UAV image segmentation and inpainting for generating vehicle-free orthomosaic. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103111. [Google Scholar] [CrossRef]
  129. Datta, S.; Chaki, N.; Modak, B. A novel technique for dental radiographic image segmentation based on neutrosophic logic. Decis. Anal. J. 2023, 7, 100223. [Google Scholar] [CrossRef]
  130. Lewandowska, E.; Węsierski, D.; Mazur-Milecka, M.; Liss, J.; Jezierska, A. Ensembling noisy segmentation masks of blurred sperm images. Comput. Biol. Med. 2023, 166, 107520. [Google Scholar] [CrossRef] [PubMed]
  131. Veena, H.N.; Muruganandham, A.; Kumaran, T.S. A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 6187–6198. [Google Scholar] [CrossRef]
  132. Aminuddin, A.; Ernawan, F. AuSR1: Authentication and self-recovery using a new image inpainting technique with LSB shifting in fragile image watermarking. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 5822–5840. [Google Scholar] [CrossRef]
  133. Begum, M.; Uddin, M.S. Digital Image Watermarking Techniques: A Review. Information 2020, 11, 110. [Google Scholar] [CrossRef]
  134. Mumtaz, R.; Maqsood, M.H.; Haq, I.U.; Shafi, U.; Mahmood, Z.; Mumtaz, M. Integrated digital image processing techniques and deep learning approaches for wheat stripe rust disease detection and grading. Decis. Anal. J. 2023, 8, 100305. [Google Scholar] [CrossRef]
  135. Vijayaraman, P.; Dandamudi, G.; Subzposh, F.A.; Shepard, R.K.; Kalahasty, G.; Padala, S.K.; Strobel, J.S.; Bauch, T.D.; Ellenbogen, K.A.; Bergemann, T.; et al. Imaging-based localization of His bundle pacing electrodes: Results from the prospective IMAGE-HBP study. JACC Clin. Electrophysiol. 2021, 7, 73–84. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.