Article

The Role of Machine Learning in Advanced Biometric Systems

Milkias Ghilom and Shahram Latifi
Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, NV 89154, USA
* Author to whom correspondence should be addressed.
Electronics 2024, 13(13), 2667; https://doi.org/10.3390/electronics13132667
Submission received: 30 May 2024 / Revised: 3 July 2024 / Accepted: 5 July 2024 / Published: 7 July 2024
(This article belongs to the Special Issue Biometric Recognition: Latest Advances and Prospects)

Abstract

Today, the significance of biometrics is more pronounced than ever in accurately controlling access to valuable resources, from personal devices to highly sensitive buildings and classified information. Researchers are pushing toward devising robust biometric systems with higher accuracy, fewer false positives and false negatives, and better performance. Machine learning (ML), in turn, has been shown to play a key role in improving such systems: by constantly learning and adapting to users’ changing biometric patterns, ML algorithms can improve accuracy and performance over time. The integration of ML algorithms with biometrics, however, introduces vulnerabilities into such systems. This article investigates the new issues of concern that arise from the adoption of ML methods in biometric systems. Specifically, techniques to breach biometric systems, namely, data poisoning, model inversion, bias injection, and deepfakes, are discussed. The methodology consisted of conducting a detailed review of the literature in which ML techniques have been adopted in biometrics. We included all works that successfully applied ML and reported favorable results after this adoption; these articles not only reported improved numerical results but also provided sound technical justification for the improvement. Isolated works making unsupported and unjustified claims about the major advantages of ML techniques in improving security were excluded from this review. Encryption/decryption aspects are only briefly mentioned, and cybersecurity was accordingly excluded from the scope of this study. Finally, recommendations are made to build stronger and more secure systems that benefit from ML adoption while closing the door to adversarial attacks.

1. Introduction

The adoption of ML allows the field of biometrics to use authentication methods different from those currently in place. In one study, researchers used ML to classify different handwriting as an authentication method [1]. Here, the authors employed a multi-class SVM to perform the verification and identification of persons based on their handwriting of a given PIN. Although this study was conducted with a very small sample size (30 people), it showed that ML can be used to detect anomalies present in someone’s handwriting in order to detect an impostor. With more training and a larger dataset, this could become a very secure method of authentication, as users tend to have distinctive handwriting, especially in smaller details, such as how specific letters are written or how the ink trails when the pen is lifted in a particular direction. In [2], template protection using DL was addressed, while in [3], face and gait traits were captured by video cameras; here, the effect of ML on the fusion process was the subject of study. In another study [4], the authors discussed the application of classical and ML methods to achieve facial recognition and further proposed the development of a software tool for authentication. In [5], the authors checked the identification accuracy of the machine learning algorithm REPTree (a decision tree) on selected biometric datasets, deploying and evaluating it with the data mining tool WEKA; they reported an accuracy of 95% on the selected datasets. In another interesting work [6], the authors studied behavioral biometrics based on touch dynamics and phone movement. Using two publicly available datasets—BioIdent and Hand Movement Orientation and Grasp (H-MOG)—this study evaluated seven common machine learning algorithms, including Random Forest, Support Vector Machine, K-Nearest Neighbor, Naive Bayes, Logistic Regression, Multilayer Perceptron, and Long Short-Term Memory Recurrent Neural Networks, with accuracy rates reaching as high as 86%. In another paper [7], the authors studied the classification performance of biometrics using the ML methods Random Tree, the Multilayer Perceptron Neural Network (MPNN), and the C4.5 decision tree (DT). The Random Forest classifier exhibited greater performance than the other techniques, achieving 93.5% accuracy.
In addition, deep learning (DL)-based methods represent the current state of the art for solving pattern recognition tasks. This is especially important because, in DL, the features are not hand-crafted for classification; rather, they are learned by the DL system from the dataset. This means that deep learning-based biometric systems can achieve a lower FMR and, thus, a higher level of security [2]. Two main applications of ML in biometrics are described below.

1.1. CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart)

Real-time CAPTCHAs are a new technology that could improve the security of biometric techniques. By requiring the user to perform some action, like looking into the camera, they can verify that a human is indeed accessing the device. In addition, randomizing the requests prevents attackers from predicting what to expect, providing a stronger level of security. Moreover, utilizing ML to further enhance the security of CAPTCHAs can be game-changing. CAPTCHAs using ML can be trained to detect anomalies in the patterns of users. By determining what “feels” like human input and what “feels” like bot input, the device can learn to identify differences and eventually detect ML bots that act and solve CAPTCHAs somewhat similarly to humans. A minimal sketch of such an anomaly detector is shown below.
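For illustration only, the following sketch trains an anomaly detector on known-human sessions and flags interactions that do not look human. The session features and their values are hypothetical stand-ins, not taken from any deployed CAPTCHA.

```python
# A minimal sketch of behavioral bot detection under assumed features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per session: [mean inter-event time (s),
# std of cursor speed, variance of cursor-path curvature]
human_sessions = rng.normal(loc=[0.25, 0.80, 0.50], scale=0.1, size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(human_sessions)  # learn what human input looks like

# A scripted bot tends to be faster and far more regular than a human
bot_session = np.array([[0.02, 0.05, 0.01]])
print(detector.predict(bot_session))         # -1 -> anomalous, likely a bot
print(detector.predict(human_sessions[:1]))  # +1 -> consistent with humans
```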

1.2. Continuous Biometrics

Integrating ML with a biometric system also opens the possibility of utilizing “continuous authentication” to protect its users. Currently, most biometric systems employ a static approach, where the user is authenticated once and is logged in until they log out. This opens the door for attackers to gain access while the user is logged in. Researchers have pointed out the vulnerabilities of static authentication, where the genuine user logs into the system at the start of the session. If there is a change of user during the session, that change will remain undetectable for as long as the impostor is logged in [2].
ML can be utilized to detect subtle variations in biometrics to ensure that only authorized users are authenticated, while increasing accuracy and reliability to provide a better experience for those who use these devices regularly. Supervised ML, in particular, can be used to classify such data much more accurately.
Another study [8] analyzed the mouse movements of users as a method of continuous authentication, using data collection software that ran in the background. In this study, features such as click elapsed time, movement speed, movement acceleration, and the relative position of extreme speed are fed to a Support Vector Machine (SVM) to classify the behavior as belonging to the genuine user or an impostor.
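A minimal sketch of this idea follows, with synthetic data standing in for the real mouse-dynamics measurements; only the feature names follow the study.

```python
# Continuous authentication sketch: classify windows of mouse behavior.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Features per window (names from [8]): click elapsed time, movement speed,
# movement acceleration, relative position of extreme speed. Values synthetic.
genuine = rng.normal([0.15, 300.0, 900.0, 0.4], [0.03, 40.0, 150.0, 0.10], (400, 4))
impostor = rng.normal([0.22, 380.0, 1200.0, 0.6], [0.05, 60.0, 200.0, 0.15], (400, 4))

X = np.vstack([genuine, impostor])
y = np.array([1] * len(genuine) + [0] * len(impostor))  # 1 = genuine user
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("window accuracy:", clf.score(X_te, y_te))

# Continuous authentication: re-score every new window; lock the session
# if the genuine-user probability stays low across consecutive windows.
p_genuine = clf.predict_proba(X_te[:5])[:, 1]
print("recent windows classified genuine:", (p_genuine > 0.5).all())
```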
Despite all the advantages that result from ML adoption, there are some downsides too, which are the main focus of this paper. Attacks on data that introduce biases and backdoors may undermine the accuracy and integrity of ML models. The “black box” qualities of ML models are exploited in model inversion attacks to extract private biometric information. Deepfakes, created using deep learning algorithms, may fool speech and facial recognition systems and allow for illegal access and identity theft. The security of these systems is also threatened by adversarial attacks, such as forging realistic samples or tampering with genuine biometric data. Additionally, vulnerabilities are amplified by the transferability of attacks between related systems. Furthermore, biased training data may lead to misidentification and unjust outcomes by introducing prejudice and discrimination into biometric systems. To improve the security and fairness of biometric systems, this article emphasizes the necessity for strong solutions, such as enhanced deepfake detection tools and addressing biases in training data. In addition, the user’s privacy should be a main consideration when designing biometric systems. The following research questions are addressed in this paper:
Q1: We have a rich body of literature on the positive impacts of ML on biometrics. Is there a disadvantage to this adoption? And, if so, how do we address the problems?
Q2: In general, does the introduction of ML techniques make the system more vulnerable or less? If more, what are the remedies?

1.3. Inclusions and Exclusions

The consequences of using a database for training the ML module of a biometric system are included in this study. Pertinent aspects such as adversarial attacks, data poisoning, model inversion, and, more importantly, GANs (the basis of deepfakes) are also included. We did not consider the encryption/decryption of data, as this calls for a different treatment. Furthermore, deep learning, a popular subset of ML, is not the focus here; the negative impacts listed in Section 2 apply to deep learning as a subset of ML.
The rest of this paper is organized as follows. Section 2 examines the potential negative effects of ML adoption in biometric systems, focusing on model inversion attacks and data poisoning attacks that give ML models biases and back doors and examining the adverse effects of deepfake technologies on biometric systems. The transferability of attacks between comparable systems is also covered here. Recommendations to overcome the adverse effects are presented in Section 3. Section 4 offers a futuristic view of biometrics and other concluding remarks.

2. Negative Impact of Machine Learning on Biometrics and Methodology

In this section, we elaborate on some of the negative impacts of ML on biometrics. It is emphasized that the list is not exhaustive, and there may be other disadvantages that are not mentioned here due to space limitations. Furthermore, we explain the methodology at the end of the section.

2.1. Adversarial Attacks

Biometric systems depend on ML models to effectively categorize and verify people based on their distinctive biometric features. These models are, however, open to adversarial attacks that seek to alter or disrupt the input data in such a manner that the model erroneously classifies it. In the realm of biometrics, adversarial attacks may take many different forms. One typical method is fabricating bogus samples. An attacker can create fake biometric data replicating authorized people’s characteristics or new identities [9]. To trick the model and gain unauthorized access, the attacker injects these fake samples into the system. For instance, by presenting a synthetic but seemingly realistic visage that closely matches the face of an authorized user, the attacker may try to deceive a facial recognition system. Altering genuine biometric information is a different kind of adversarial attack. An attacker may try to get around the system’s authentication procedure by changing the properties of a person’s biometric attributes, such as fingerprints or speech patterns [10]. This manipulation might involve physical changes, like the application of synthetic fingerprints, or digital changes to the biometric information recorded in a database. The intention is to trick the system into erroneously accepting the altered biometric data, allowing unwanted access or facilitating identity theft.
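As an illustration of such input manipulation, the following sketch applies the well-known fast gradient sign method (FGSM) to a linear classifier standing in for a biometric matcher. All values are synthetic assumptions; a real attack would target a far richer model, but the mechanics are the same.

```python
# FGSM sketch: a small, targeted perturbation raises an impostor's match score.
import numpy as np

rng = np.random.default_rng(2)
d = 64                          # dimensionality of a feature vector (e.g., an embedding)

w = rng.normal(size=d)          # weights of an already-trained linear matcher (assumed)
b = -0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=d)          # an impostor's feature vector
y = 0                           # true label: not the authorized user
p = sigmoid(w @ x + b)

# Move the input along the sign of the loss gradient w.r.t. x, so the model's
# confidence for "authorized" rises while each feature changes by at most eps.
grad_x = (p - y) * w            # cross-entropy gradient for a logistic model
eps = 0.05
x_adv = x + eps * np.sign(grad_x)

print("match score before:", p)
print("match score after :", sigmoid(w @ x_adv + b))
print("max per-feature change:", np.abs(x_adv - x).max())  # bounded by eps
```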

2.2. Data Poisoning

ML algorithms use training data to find patterns and provide precise predictions. In biometric systems, the training data generally comprise individuals’ biometric samples. The performance of the biometric system may be harmed if an attacker can introduce biases or malicious behavior by manipulating the training data used to develop the model [11]. Attacks known as “data poisoning” involve adding erroneous or malicious samples to the training data. The attacker hopes to influence the model’s learning process in a way that produces inaccurate or skewed outputs. For instance, an attacker may provide many samples from a specific demographic group, causing the model to behave in a biased manner when authenticating members of that group. Backdoors may also be inserted into the model as part of data poisoning attacks. Attackers could introduce into the training data specific samples or patterns that function as triggers, leading the model to react incorrectly to those inputs. By using these covert backdoors, unauthorized users could get around the system’s authentication procedure. The security and integrity of the training data must be guaranteed to prevent data poisoning attacks. This entails putting into practice methods like data validation, anomaly detection, and data sanitization to find and eliminate potentially poisonous samples. The training data should be monitored and audited often to detect potentially harmful biases or strange trends.
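The sketch below illustrates a simple label-flipping poisoning attack and a crude sanitization check on synthetic data; the numbers and the nearest-mean audit are illustrative assumptions, not a complete defense.

```python
# Label-flipping poisoning sketch on synthetic "biometric" features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

genuine = rng.normal(1.0, 1.0, (300, 5))
impostor = rng.normal(-1.0, 1.0, (300, 5))
X = np.vstack([genuine, impostor])
y = np.array([1] * 300 + [0] * 300)

X_test = np.vstack([rng.normal(1.0, 1.0, (200, 5)), rng.normal(-1.0, 1.0, (200, 5))])
y_test = np.array([1] * 200 + [0] * 200)

clean = LogisticRegression().fit(X, y)
print("clean accuracy   :", clean.score(X_test, y_test))

# Poison: relabel 20% of the impostor samples as "genuine"
y_poisoned = y.copy()
impostor_idx = np.where(y == 0)[0]
y_poisoned[rng.choice(impostor_idx, size=60, replace=False)] = 1

poisoned = LogisticRegression().fit(X, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))

# One sanitization idea from the text: audit for samples whose label disagrees
# with their neighborhood (here, a crude check against the class means).
mu_g, mu_i = X[y_poisoned == 1].mean(0), X[y_poisoned == 0].mean(0)
suspect = [(np.linalg.norm(x - mu_i) < np.linalg.norm(x - mu_g)) and lbl == 1
           for x, lbl in zip(X, y_poisoned)]
print("samples flagged for review:", sum(suspect))
```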

2.3. Model Inversion

Model inversion attacks may be used to exploit ML models, especially those employed in biometric systems. These attacks involve a hostile actor querying the model to rebuild or obtain sensitive data. In biometrics, the possible repercussions of model inversion attacks are especially problematic, since they might endanger people’s privacy by enabling attackers to reproduce the original biometric data, such as a face image or fingerprint, from the model’s responses. A model learns to make predictions or categorize data by examining patterns and correlations during training. However, the underlying workings of the model are often opaque and hard to inspect. The “black box” characteristic of ML models refers to this opacity [12].
In a model inversion attack, an adversary tries to reverse-engineer the model by taking advantage of its “black box” characteristics. The attacker aims to retrieve private data that the model learned during its training phase by giving it carefully constructed inputs and observing how it responds. This extracted information may include personal information or particular traits related to the biometric data being utilized. Model inversion attacks pose a severe privacy risk for biometric systems [13]. For identification or verification, biometrics depends on distinctive physical or behavioral traits, such as fingerprints or facial features. The system’s security safeguards may be circumvented if an attacker can recreate the original biometric data by querying the model, allowing illegal access or impersonation.
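The following sketch illustrates the mechanics on a linear stand-in model. For brevity it uses white-box gradients, whereas the attacks described above operate from black-box queries and estimate the same information from the model’s responses; all quantities are synthetic assumptions.

```python
# Fredrikson-style model inversion sketch: starting from noise, ascend the
# model's confidence for a target identity to recover a class-representative input.
import numpy as np

rng = np.random.default_rng(4)
d = 64

w = rng.normal(size=d)              # parameters of the deployed matcher (assumed known)
b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(scale=0.01, size=d)  # attacker's starting guess
lr, lam = 0.1, 0.01                 # step size and L2 regularization

for _ in range(200):
    p = sigmoid(w @ x + b)
    grad = (1.0 - p) * w - lam * x  # ascend target-class confidence, keep x plausible
    x += lr * grad

print("matcher confidence for reconstruction:", sigmoid(w @ x + b))
# For a face model, x would approximate an average face of the target identity:
# the privacy leak described in the text.
```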

2.4. Deepfakes

Deepfake technology uses ML techniques, especially deep learning, to produce synthetic media that are very convincing and lifelike. This technology makes it possible to manipulate images, videos, and audio recordings, often in a manner that makes it impossible to tell the difference between the genuine and the fake content. Attackers may use deepfakes to trick voice- or face-recognition-based biometric systems, enabling them to pass as someone else. To produce deepfakes, deep learning models are trained on large amounts of data, including photographs or recordings of the intended subject. By comprehending and imitating the patterns and characteristics seen in the training data, these models learn to create very exact imitations. Deepfakes can alter material in various ways, such as by swapping out faces in videos or changing speech in audio recordings.
Deepfakes constitute a severe security issue in the context of biometric systems. For instance, voice recognition systems depend on distinctive vocal traits to verify or authenticate people. Attackers, however, may use deepfakes to duplicate someone’s voice with startling precision, making it difficult for the system to distinguish between the natural person and the artificial imitation. Deepfakes that effectively modify a person’s look or imitate another person’s facial characteristics may also deceive face recognition algorithms [14]. Unauthorized access, identity theft, or the fabrication of false identities might result from this. The creation of reliable detection techniques is necessary to stop deepfake assaults. Developing algorithms and approaches that can recognize deepfakes based on differences in the synthesized media is a current area of research for engineers and researchers. These detection methods examine numerous media features, such as visual artifacts, inconsistent facial expressions, or strange audio patterns, to distinguish between authentic and artificial information.
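One detection idea explored in this line of research is that generative up-sampling can leave abnormal high-frequency energy in synthesized images. The sketch below illustrates this with synthetic arrays standing in for camera and GAN-generated images; the checkerboard artifact is a crude stand-in, and a practical detector would need real data and far stronger features.

```python
# Spectral-artifact sketch for deepfake detection on synthetic stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

def highfreq_energy(img):
    """Fraction of spectral energy outside a central low-frequency band."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    low = spec[h // 2 - 8:h // 2 + 8, w // 2 - 8:w // 2 + 8].sum()
    return 1.0 - low / spec.sum()

def camera_like():
    # smooth random field standing in for a natural image
    return gaussian_filter(rng.normal(size=(64, 64)), sigma=3)

def gan_like():
    # crude stand-in for generator up-sampling artifacts: a faint checkerboard
    img = camera_like()
    img[::2, ::2] += 0.05
    return img

real_e = [highfreq_energy(camera_like()) for _ in range(100)]
fake_e = [highfreq_energy(gan_like()) for _ in range(100)]
print(f"mean high-frequency energy, camera-like: {np.mean(real_e):.4f}")
print(f"mean high-frequency energy, GAN-like   : {np.mean(fake_e):.4f}")
# A detector would threshold, or train a classifier on, such statistics.
```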

2.5. Transferability of Attacks

The transferability of attacks refers to the situation where an attack that takes advantage of flaws in one biometric system can be reused against other, similar systems. This idea is especially pertinent in the context of ML-based biometric systems. Suppose a hacker can effectively locate and exploit vulnerabilities in one of these systems; they may then be able to use the same attack method to penetrate other systems that rely on comparable ML models or algorithms. The portability of attacks across many systems amplifies the effect of vulnerabilities, potentially impacting various platforms. To identify and verify people based on distinctive biometric qualities like fingerprints, facial features, or speech patterns, ML-based biometric systems use algorithms and models trained on big datasets [15]. During the authentication process, these models apply the patterns and correlations learned from the training data to make decisions about people’s identities.
These models, however, are not flawless and may be exposed to numerous kinds of attacks. For instance, an attacker may try to trick the system by delivering a biometric sample that has been altered so that it seems genuine to the model but differs from the original biometric attribute. This can result in impersonation or illegal access. When an attack is successful against one system, it suggests that the ML models or algorithms being used contain deeper flaws. These deficiencies may result from shortcomings in the training set, design errors in the model, or restrictions in the overall system layout. By understanding these vulnerabilities, hackers may be able to exploit the same flaws in similar systems. The transferability of attacks severely hampers the security of biometric systems. Because attackers may use the same attack technique to compromise other systems with comparable properties, identifying and exploiting a vulnerability in one system can have far-reaching effects. This places on developers and researchers the responsibility of addressing vulnerabilities in their own systems while considering the broader consequences for other systems in the field.
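The sketch below illustrates transferability on synthetic data: an adversarial perturbation computed from a locally trained surrogate also fools a separately trained target model. The step size is exaggerated for this toy setting.

```python
# Transfer-attack sketch: craft on a surrogate, evaluate on the target.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(1, 1, (400, 10)), rng.normal(-1, 1, (400, 10))])
y = np.array([1] * 400 + [0] * 400)

surrogate = LogisticRegression().fit(X, y)          # attacker's local copy
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                       random_state=6).fit(X, y)    # the deployed system

# FGSM-style step using only the surrogate's weights
impostors = rng.normal(-1, 1, (100, 10))            # impostor samples (class 0)
w = surrogate.coef_[0]
adv = impostors + 1.5 * np.sign(w)                  # push toward "genuine"

print("clean impostors accepted by target      :", target.predict(impostors).mean())
print("transferred adversarial accepted by target:", target.predict(adv).mean())
```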

2.6. Bias and Discrimination

The bias and discrimination of ML algorithms are a growing source of worry, since they have the ability to reinforce and amplify preexisting societal biases and prejudices. When trained on skewed or underrepresented datasets, these algorithms may unintentionally learn and perpetuate biased behaviors, producing unjust results and harming people. Biometric systems are one area where prejudice and discrimination may have a significant impact. Biometric systems employ physiological or behavioral traits like fingerprints or facial features to identify and validate people. These algorithms may incorrectly identify or reject people based on their demographic characteristics, such as race or gender, if the training data used to construct them are biased or unrepresentative. For instance, a facial recognition system may struggle to correctly identify people with darker skin tones if it is trained mostly on light-skinned faces and lacks variety in its training data [16]. Certain demographic groups may experience increased rates of false positives, which might result in misidentification and unfair repercussions. Similarly, a gender bias may develop if a system is primarily trained on data from one gender, which can result in the incorrect identification or exclusion of people of other genders.
Such prejudices and discrimination in biometric systems might have detrimental effects, particularly when these systems are utilized for crucial tasks like access control or law enforcement. Misidentifications may result in unjustified arrests or access rejections, disproportionately harming certain groups and reinforcing systemic prejudices. Hackers may also use the possibility of prejudice and discrimination in biometric systems to their advantage [17]. To further their goals, adversaries may consciously influence or exploit the system’s fundamental biases. For example, if a particular ethnic group is more likely than others to be mistakenly identified by a face recognition system, an attacker may exploit this bias to get around the system.
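A simple audit that surfaces such disparities is to report error rates per demographic group rather than a single global figure. The sketch below does this on synthetic impostor scores; the group score distributions are assumptions for illustration.

```python
# Per-group false match rate audit on synthetic impostor scores.
import numpy as np

rng = np.random.default_rng(7)

# Impostor match scores for two groups; group B is modeled with a higher
# score distribution, mimicking under-representation in the training data.
scores_a = rng.normal(0.30, 0.10, 5000)
scores_b = rng.normal(0.42, 0.10, 5000)
threshold = 0.5  # system-wide accept threshold

fmr_a = (scores_a >= threshold).mean()
fmr_b = (scores_b >= threshold).mean()
print(f"false match rate, group A: {fmr_a:.4f}")
print(f"false match rate, group B: {fmr_b:.4f}")   # several times higher

# A single global FMR hides the disparity:
all_scores = np.concatenate([scores_a, scores_b])
print(f"global FMR: {(all_scores >= threshold).mean():.4f}")
```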

2.7. Scalability Issues

Scalability issues are the difficulties that ML models employed in biometric systems encounter when deployed in extensive applications. The amount of computing power and processing time necessary for authentication rises dramatically as the number of users and transactions grows. This can open security holes and create chances for attackers to take advantage of flaws in the system. The additional processing resources needed to accommodate many users are one of the significant issues. Biometric data are analyzed and matched using ML models, which often utilize sophisticated algorithms and extensive calculations. The system must process more data as the user base grows, which might strain the computing resources [18]. This may result in slower response times, more latency, and decreased system performance.
The duration of the authentication procedure is another problem. Biometric systems must compare the user-provided biometric data with the reference templates in the system’s database. The time it takes to complete this matching procedure can grow drastically as the number of users and transactions rises. The authentication procedure may take longer, harming the user experience and the system’s effectiveness. Scalability problems may also lead to security flaws. For instance, the system may struggle to verify each request in a timely way when it is overloaded with many users and transactions. Attackers may take advantage of this by conducting denial-of-service attacks or barraging the system with phony requests, overloading its capacity and perhaps circumventing security measures. Furthermore, scalability problems could allow attackers to take advantage of architectural flaws in the system. Additional components, interfaces, or integrations are likely to be added when the system is scaled up to serve a broader user base. If not adequately evaluated and guarded, these upgrades could become entry points for attackers to enter the system and undermine its security.
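The matching-time problem can be made concrete with a toy measurement: with a brute-force 1:N scan, identification time grows in direct proportion to the number of enrolled templates. The sketch below times such a scan on synthetic templates; production systems mitigate this growth with indexing, sharding, or approximate search.

```python
# Linear-scan 1:N identification: cost grows with the enrolled population.
import time
import numpy as np

rng = np.random.default_rng(8)
probe = rng.normal(size=64).astype(np.float32)      # query template

for n_users in (10_000, 100_000, 500_000):
    gallery = rng.normal(size=(n_users, 64)).astype(np.float32)  # enrolled templates
    t0 = time.perf_counter()
    scores = gallery @ probe                        # similarity against every template
    best = int(np.argmax(scores))
    dt = time.perf_counter() - t0
    print(f"{n_users:>7,} templates -> match found in {dt * 1e3:6.2f} ms")
```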

2.8. Methodology

The methodology used here consisted of conducting a detailed review of the literature in which ML techniques have been adopted in biometrics. We included all the works that successfully applied ML and reported favorable results after this adoption. These articles not only reported improved numerical results but also provided sound technical justification for the improvement. It is well understood that integrating ML with biometrics brings more robustness and discriminative power (for classification) to a biometric system. Nonetheless, the vulnerabilities and biases introduced as a result of ML adoption should not be ignored, and addressing them is the main objective of this review.

3. Recommendations to Prevent Flaws in ML-Based Biometric Systems

3.1. Strong Training Data

Robust training data are the cornerstone of every successful ML model. The training data are the collection of examples used to train the model and give it the ability to predict or categorize correctly. To guarantee the validity and efficacy of the final model, these data must be reliable and of high caliber. It is critical to develop trustworthy data-gathering techniques, which includes defining the criteria for picking data sources and guaranteeing their trustworthiness and authority [19]. Data may come from various sources, including public databases, surveys, user-generated content, and specialist data suppliers. It is critical to verify the data’s validity and correctness by cross-referencing numerous sources or using data validation procedures. Another characteristic of good training data is that they accurately portray real-world settings and include diverse samples. The data should contain the traits and patterns the model must learn to generate sound predictions. Biased or skewed training data may lead to biased or faulty models. As a result, it is critical to properly curate the training data to minimize any biases and guarantee that they are representative of the target population. It is also vital to safeguard the training data from modification or compromise. Unauthorized changes to the training data might introduce inaccuracy or purposefully mislead the model. Implementing robust data security measures, like encryption and access limits, may assist in protecting the integrity of the training data. Regular audits and monitoring can also help detect any suspicious activities or data breaches.
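One concrete safeguard against silent modification is a cryptographic manifest of the training files, verified before every training run. The sketch below is a minimal version of this idea; the file paths and manifest name are illustrative.

```python
# Training-data integrity sketch: hash every file, verify before training.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: str, out: str = "manifest.json") -> None:
    manifest = {str(p): sha256(p)
                for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    Path(out).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_file: str = "manifest.json") -> list[str]:
    manifest = json.loads(Path(manifest_file).read_text())
    return [p for p, digest in manifest.items()
            if not Path(p).is_file() or sha256(Path(p)) != digest]

# Typical use before training (paths hypothetical):
#   build_manifest("training_data/")   # once, at curation time
#   tampered = verify_manifest()       # every run; refuse to train if non-empty
```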

3.2. Adversarial Defensive Mechanisms

Adversarial attacks are purposeful efforts to trick or manipulate a machine learning model by exploiting its flaws. Adversarial defensive mechanisms are tactics and procedures created to recognize and thwart such attacks, strengthening models’ integrity and imperviousness to manipulation. Adversarial training is one strategy for adversarial defense. This method entails adding hostile samples to the training data. Adversarial examples are samples deliberately altered to fool the model while resembling the original samples. By including these malicious cases during training, the model learns to handle such perturbations, strengthening its defense against adversarial attacks. Robust feature engineering is another protective strategy. Rather than depending exclusively on raw input data, features may be constructed to strengthen the model against adversarial attacks. These features may capture higher-level semantic information that is more resilient to disturbances. For instance, in image classification tasks, models may be designed to emphasize vital elements like textures or shapes rather than raw pixel values.
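The following sketch shows adversarial training in its simplest form, on a logistic-regression stand-in with synthetic data: at each step, FGSM perturbations are crafted against the current model and included in the gradient update.

```python
# Adversarial training sketch: train on clean + FGSM-perturbed copies.
import numpy as np

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(1, 1, (400, 10)), rng.normal(-1, 1, (400, 10))])
y = np.array([1.0] * 400 + [0.0] * 400)

w, b, lr, eps = np.zeros(10), 0.0, 0.1, 0.3
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    # craft adversarial copies against the current model (FGSM step)
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)

    # one gradient step on clean and adversarial examples together
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * X_all.T @ (p_all - y_all) / len(y_all)
    b -= lr * (p_all - y_all).mean()

# evaluate robustness against a fresh FGSM attack on the final model
p = sigmoid(X @ w + b)
X_attack = X + eps * np.sign((p - y)[:, None] * w)
robust_acc = ((sigmoid(X_attack @ w + b) > 0.5) == (y > 0.5)).mean()
print("accuracy under FGSM after adversarial training:", robust_acc)
```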
Adversarial defense may also be improved via ensemble approaches. The overall model becomes more resistant to adversarial attacks when numerous models are trained independently and their predictions are combined, as sketched below. Adversarial attacks often target specific flaws in individual models, and combining a variety of predictions from an ensemble makes it more complicated for adversaries to design successful attacks. Continuous research and development are necessary to stay one step ahead of hostile threats. Researching new defensive tactics and upgrading current ones is essential, since attackers’ methods constantly evolve. This entails investigating adversarial attack detection techniques, strengthening adversarial training methodologies, and encouraging research community partnerships to exchange information and ideas.
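A minimal ensemble sketch follows, combining three differently structured classifiers by soft voting on synthetic data; a real deployment would tune and validate each member separately.

```python
# Ensemble defense sketch: diverse members, one combined decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(1, 1, (400, 10)), rng.normal(-1, 1, (400, 10))])
y = np.array([1] * 400 + [0] * 400)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(random_state=10))],
    voting="soft",  # average the members' predicted probabilities
).fit(X, y)
print("ensemble training accuracy:", ensemble.score(X, y))
```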

3.3. Regular Model Updates

Regular model upgrades are essential to maintaining the efficacy and security of biometric systems. As technology develops, new attack routes and flaws can emerge that unscrupulous actors might exploit. It is crucial to regularly update and enhance the ML models employed in biometric systems to stay ahead of these threats. Organizations can quickly detect and fix vulnerabilities in their designs by staying up to date on the most recent advancements in biometric security research and best practices [20]. Keeping up with new attack methodologies, biometric spoofing techniques, and adversarial ML developments is all part of this. Regular model upgrades make it possible to incorporate better algorithms and tactics to fend off these changing threats.
A problem called concept drift, which happens when the statistical characteristics of the data used to train the model change over time, may also be addressed using regular model updates. Aging, accidents, and environmental changes are just a few variables that might alter biometric data. The system may keep its accuracy and dependability by incorporating new data into the models and considering these changes. Regular upgrades also guarantee that the biometric system complies with changing regulatory standards [21]. Businesses must modify their biometric systems to comply with these evolving regulatory frameworks as privacy and data protection regulations continue to change. Regular updates make it possible to put privacy-enhancing strategies into effect, use secure data handling procedures, and adhere to data storage and retention policies.
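Concept drift can be monitored with simple distributional tests. The sketch below compares recent genuine match scores against a reference window using a two-sample Kolmogorov-Smirnov test; the score streams and the decision threshold are synthetic assumptions.

```python
# Concept-drift monitoring sketch on synthetic match-score streams.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(11)

reference = rng.normal(0.80, 0.05, 2000)  # genuine scores at deployment time
recent = rng.normal(0.72, 0.07, 2000)     # scores after aging / sensor change

stat, p_value = ks_2samp(reference, recent)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
if p_value < 0.01:
    print("drift detected -> schedule model update / template re-enrollment")
```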

3.4. Integrated Biometrics

Multi-modal biometrics is the process of authenticating a person using several biometric modalities. Rather than depending on only one biometric feature, such as a fingerprint or face, multi-modal biometrics integrates two or more modalities, such as voice, face, fingerprint, iris, or behavioral attributes, for identification purposes [22,23]. There are significant benefits to using multiple biometric modalities. First, it makes it harder for attackers to impersonate or alter several biometric traits simultaneously. For instance, it can be difficult for an attacker to duplicate both a person’s vocal rhythm and facial features perfectly. This multi-modal method increases the overall security of the authentication process by making it considerably more difficult for attackers to trick the system. By lowering the rates of erroneous acceptance and rejection, multi-modal biometrics also improves accuracy and dependability. Merging different biometric modalities increases confidence in identifying a person, and in situations where one modality is less accurate for various reasons, the system can fall back on another to provide proper authentication.
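A common realization of this idea is score-level fusion. The sketch below fuses per-modality match scores with a weighted sum and a quality-based fallback; the weights, thresholds, and quality measures are illustrative assumptions.

```python
# Score-level fusion sketch for multi-modal biometrics.
import numpy as np

def fuse(scores: dict[str, float], quality: dict[str, float],
         weights: dict[str, float], threshold: float = 0.6) -> bool:
    """Weighted score fusion that drops low-quality modalities (e.g., a wet
    fingerprint) and renormalizes the remaining weights before deciding."""
    usable = {m: s for m, s in scores.items() if quality[m] >= 0.5}
    total_w = sum(weights[m] for m in usable)
    fused = sum(weights[m] * usable[m] for m in usable) / total_w
    return fused >= threshold

scores = {"face": 0.82, "voice": 0.74, "fingerprint": 0.15}
quality = {"face": 0.9, "voice": 0.8, "fingerprint": 0.2}  # smudged sensor
weights = {"face": 0.5, "voice": 0.3, "fingerprint": 0.2}

print("accept:", fuse(scores, quality, weights))  # falls back to face + voice
```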
Additionally, multi-modal biometrics enables enhanced resilience against individual and environmental fluctuations. For instance, the system may use other accessible modalities, such as the person’s face or voice, for verification if their fingerprint is momentarily hidden by moisture or damage. However, it is crucial to consider the compromises brought about by multi-modal biometrics, such as a system’s increased complexity, cost, and user experience. Multiple biometric modalities may need more hardware, processing power, and computing resources to integrate and manage [18]. User acceptability and convenience should also be considered when deploying multi-modal biometric systems since they may need different enrollment processes and longer authentication times.

4. Conclusions

Biometric systems’ capabilities have unquestionably been improved by machine learning, but this technology also introduces vulnerabilities that need to be addressed. The security and fairness of biometric systems are seriously jeopardized by adversarial attacks, data poisoning attacks, model inversion attacks, deepfakes, the transferability of attacks, and biases and discrimination. Strong remedies are needed to reduce these hazards. Improved deepfake detection algorithms may strengthen the ability to discriminate between real and phony information, and several potential future paths may be investigated to achieve this goal. This is especially important as research on GANs has progressed tremendously since their inception in 2014. There is a constant need for research and improvement in deepfake detection methods; as deepfake technology evolves, so must the countermeasures to identify and distinguish between real and altered biometric data. This will be crucial in preserving the reliability and integrity of biometric systems. Efforts should also be concentrated on enhancing training data security. Data poisoning attacks make clear the necessity for robust data validation, anomaly detection, and data sanitization techniques. The reliability and accuracy of biometric models may be improved by routinely monitoring and reviewing training data to prevent biases and malicious models from entering the system. Transparency and comprehensibility should be given top emphasis in ML biometric models. Enhancing these models’ interpretability may give users insights into how they make decisions, allowing weaknesses to be seen and fixed. It is feasible to manage and reduce the dangers posed by model inversion attacks by understanding the basic principles of ML models. By resolving these flaws and encouraging fairness, biometric technologies may continue to develop as safe and dependable instruments for identification and authentication across a variety of areas.

Author Contributions

The background information, literature survey, and summarization of the state of the art were performed by M.G. The identification of positive and negative impacts of ML, potential promise and threats, and recommendations to researchers in the field were prepared and compiled by S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No specific dataset was used. The pertinent information is properly cited throughout the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

ML: Machine learning
DL: Deep learning
CAPTCHA: Completely Automated Public Turing Test to Tell Computers and Humans Apart
SVM: Support Vector Machine
PIN: Personal Identification Number
FMR: False Match Rate
GAN: Generative Adversarial Network

References

  1. Scheidat, T.; Leich, M.; Alexandar, M.; Vielhauer, C. Support Vector Machines for Dynamic Biometric Handwriting Classification, AIAI-2009, Workshops Proceedings. 2009. Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=90ee60f5df2d679dc851f13ca28165eb03519042 (accessed on 12 October 2023).
  2. Rathgeb, C.; Kolberg, J.; Uhl, A.; Busch, C. Deep Learning in the Field of Biometric Template Protection: An Overview. arXiv 2023, arXiv:2303.02715v1. [Google Scholar]
  3. Kumar, A.; Jain, S.; Kumar, M. Comparative Study of Multi-Biometrics Authentication Using Machine Learning Algorithms. In Proceedings of the 2024 11th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 14–15 March 2024; pp. 1–5. Available online: https://ieeexplore.ieee.org/abstract/document/10522125 (accessed on 8 December 2023).
  4. Saurova, K.E.; Hayitbaeva, D.K. Artificial Intelligence based Methods of Identification and Authentication by Face Image. Acad. Res. Educ. Sci. 2024, 5, 123–130. Available online: https://cyberleninka.ru/article/n/artificial-intelligence-based-methods-of-identification-and-authentication-by-face-image/viewer (accessed on 8 December 2023).
  5. Shakil, S.; Arora, D.; Zaidi, T. Feature Based Classification of Voice Based Biometric Data through Machine Learning Algorithm. Mater. Today Proc. 2022, 51, 240–247. [Google Scholar] [CrossRef]
  6. Pryor, L.; Mallet, J.; Dave, R.; Seliya, N.; Vanamala, M.; Boone, E. Evaluation of a User Authentication Schema Using Behavioral Biometrics and Machine Learning, Computer Science, Cryptography, Cornell University. arXiv 2022, arXiv:2205.08371. Available online: https://arxiv.org/abs/2205.08371 (accessed on 19 December 2023).
  7. Umasankari, N.; Muthukumar, B. Evaluation of Biometric Classification and Authentication Using Machine Learning Techniques. In Proceedings of the 2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF), Chennai, India, 5–7 January 2023; Available online: https://ieeexplore.ieee.org/abstract/document/10083610 (accessed on 19 December 2023).
  8. Mahadi, M.; Mohamad, M.; Kadir, M. A Survey of Machine Learning Techniques for Behavioral-Based Biometric User Authentication. Recent Adv. Cryptogr. Netw. Secur. 2018, 31, 43–59. Available online: https://www.intechopen.com/chapters/60937 (accessed on 19 December 2023).
  9. Siddiqui, N.; Dave, R.; Vanamala, M.; Seliya, N. Machine and Deep Learning Applications to Mouse Dynamics for Continuous User Authentication. Mach. Learn. Knowl. Extr. 2022, 4, 502–518. Available online: https://www.mdpi.com/2504-4990/4/2/23 (accessed on 18 January 2024). [CrossRef]
  10. Rosenberg, I.; Shabtai, A.; Elovici, Y. Adversarial ML Attacks and Defense Methods in the Cyber Security Domain. ACM Comput. Surv. 2021, 54, 1–36. [Google Scholar] [CrossRef]
  11. Sudar, K.M.; Deepalakshmi, P.; Ponmozhi, K.; Nagaraj, P. Analysis of Security Threats and Countermeasures for Various Biometric Techniques. In Proceedings of the 2019 IEEE International Conference on Clean Energy and Energy Efficient Electronics Circuit for Sustainable Development (INCCES), Krishnankoil, India, 18–20 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  12. Li, K.; Baird, C.; Lin, D. Defend Data Poisoning Attacks on Voice Authentication. In IEEE Transactions on Dependable and Secure Computing; IEEE: Piscataway, NJ, USA, 2023. [Google Scholar]
  13. Shafee, A.; Awaad, T.A. Privacy Attacks against Deep Learning Models and Their Countermeasures. J. Syst. Archit. 2020, 114, 101940. [Google Scholar] [CrossRef]
  14. Dionysiou, A.; Vassiliades, V.; Athanasopoulos, E. Exploring Model Inversion Attacks in the Black-Box Setting. Proc. Priv. Enhancing Technol. 2023, 2023, 190–206. [Google Scholar] [CrossRef]
  15. Jones, V.A. Artificial Intelligence Enabled Deepfake Technology: The Emergence of a New Threat. ProQuest, 2020. Available online: https://www.proquest.com/openview/60d6b06b94904dccf257c4ea7c297226/1?pq-origsite=gscholar&cbl=18750&diss=y (accessed on 12 January 2024).
  16. Minaee, S.; Abdolrashidi, A.; Su, H.; Bennamoun, M.; Zhang, D. Biometrics Recognition Using Deep Learning: A Survey. Artif. Intell. Rev. 2023, 56, 8647–8695. [Google Scholar] [CrossRef]
  17. Mittal, S.; Thakral, K.; Majumdar, P.; Vatsa, M.; Singh, R. Are Face Detection Models Biased? In Proceedings of the 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG), Waikoloa Beach, HI, USA, 5–8 January 2023. [Google Scholar] [CrossRef]
  18. Popescu, G. Biometric Technologies and the Automation of Identity and Space. In Handbook on Geographies of Technology; Edward Elgar Publishing: Cheltenham, UK, 2017; Available online: http://www.elgaronline.com/abstract/9781785361159.xml (accessed on 27 June 2023).
  19. Alhomayani, F.; Mahoor, M. Deep Learning Methods for Fingerprint-Based Indoor Positioning: A Review. J. Locat. Based Serv. 2020, 14, 129–200. [Google Scholar] [CrossRef]
  20. Hasan, M.K.; Ghazal, T.; Saeed, R.; Pandey, B.; Gohei, H.; Esmawi, A.; Abdel-Khalek, S.; Alkhassawneh, H. A Review on Security Threats, Vulnerabilities, and Counter Measures of 5G Enabled Internet-of-Medical-Things. IET Commun. 2021, 16, 421–432. [Google Scholar] [CrossRef]
  21. Zhang, C.; Costa-Perez, X.; Patras, P. Adversarial Attacks against Deep Learning-Based Network Intrusion Detection Systems and Defense Mechanisms. IEEE/ACM Trans. Netw. 2022, 30, 1294–1311. [Google Scholar] [CrossRef]
  22. Wang, J.; Pan, J.; AlQerm, I.; Liu, Y. Def-IDS: An Ensemble Defense Mechanism against Adversarial Attacks for Deep Learning-Based Network Intrusion Detection. In Proceedings of the 2021 International Conference on Computer Communications and Networks (ICCCN), Athens, Greece, 19–22 July 2021; pp. 1–9. [Google Scholar] [CrossRef]
  23. Akulwar, P.; Vijapur, N. Secured Multi Modal Biometric System: A Review. In Proceedings of the 2019 Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics, and Cloud) (I-SMAC), Palladam, India, 12–14 December 2019. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
