Article

Enhancing Multi-Factor Authentication with Templateless 2D/3D Biometrics and PUF Integration for Securing Smart Devices

1 School of Informatics, Computing and Cyber Systems, Northern Arizona University, Flagstaff, AZ 86001, USA
2 School of Computer Science and Advanced Technology, EPITA, 94270 Le Kremlin-Bicêtre, France
* Author to whom correspondence should be addressed.
Cryptography 2025, 9(4), 68; https://doi.org/10.3390/cryptography9040068
Submission received: 23 September 2025 / Revised: 18 October 2025 / Accepted: 22 October 2025 / Published: 27 October 2025
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)

Abstract

Secure authentication in smart device ecosystems remains a critical challenge, particularly due to the irrevocability of compromised biometric templates in server-based systems. This paper presents a post-quantum secure multi-factor authentication protocol that combines templateless 2D and 3D facial biometrics, liveness detection, and Physical Unclonable Functions (PUFs) to achieve robust identity assurance. The protocol exhibits zero-knowledge properties, preventing adversaries from determining whether an authentication failure is due to the biometric, password, PUF, or liveness factor. The proposed protocol utilizes facial landmark detection via dlib or mediapipe, capturing multi-angle facial data. By applying a double-masking technique and measuring distances between randomized points, stabilized facial landmarks are selected from multiple images captured during enrollment to ensure template stability. The protocol creates high-entropy cryptographic keys, securely erasing all raw biometric data and sensitive keys immediately after processing. All key cryptographic operations and challenge-response exchanges employ post-quantum algorithms, providing resistance to both classical and quantum adversaries. To further enhance reliability, error-correction methods mitigate noise in biometric and PUF responses, yielding false acceptance and false rejection rates (FAR and FRR) that meet industrial standards, as well as resilience against spoofing. Our experimental results demonstrate this protocol's suitability for smart device and IoT deployments requiring high-assurance, scalable, and quantum-resistant authentication.

1. Introduction

The integration of biometric authentication into smart devices has become pervasive, providing an appealing alternative to traditional credential-based methods. Modern secure devices commonly employ facial recognition [1,2], iris scanning [3,4], or fingerprint analysis [5] as primary authentication mechanisms, capitalizing on the inherent uniqueness and non-reproducibility of biometric traits. These identifiers, rooted in distinctive facial features or fingerprint patterns, not only reduce the risk of unauthorized access to sensitive data [6,7,8] but also eliminate the cognitive overhead associated with passwords and PINs. As a result, biometric systems streamline user authentication, promote seamless access to digital resources, and drive wider adoption of secure smart devices across diverse sectors [9].
Despite their advantages, the deployment of biometrics in authentication frameworks exposes several critical vulnerabilities. The centralized storage of biometric templates introduces a substantial risk: a successful breach may lead to irrevocable compromise of personal identifiers, as biometric information cannot be reset or revoked like conventional credentials [10]. Furthermore, biometric modalities are increasingly targeted by sophisticated spoofing strategies—including the use of high-resolution images, 3D masks, or adversarial inputs—that seek to deceive recognition systems [11]. The recent proliferation of AI-driven attacks, such as generative adversarial networks (GANs) producing hyper-realistic deepfakes, and template inversion techniques capable of reconstructing original biometric inputs from stored data, has underscored these security and privacy concerns.
To address these multifaceted threats, this work proposes a post-quantum, privacy-preserving authentication protocol for smart devices. Our approach eliminates the reliance on stored biometric templates by generating templateless, high-entropy cryptographic keys from stabilized biometric features, and integrates PUFs with zero-knowledge proof (ZKP) techniques. ZKP enables verification of a claim’s validity without disclosing any underlying secret or error source. By combining these innovations, the proposed protocol strengthens resilience against spoofing, inversion, and quantum attacks—offering a scalable, user-friendly security solution suitable for next-generation smart device ecosystems.

2. Background Information

Biometrics has significantly advanced Multi-Factor Authentication (MFA) systems by enhancing security and making it increasingly challenging for criminals to compromise these systems [6]. Among the various biometric methods, fingerprint scanning has been one of the most widely adopted. The effectiveness of MFA systems is closely tied to metrics like the False Rejection Rate (FRR) and False Acceptance Rate (FAR), which are critical in determining the accuracy and reliability of user identification [12]. By optimizing these metrics, the overall performance of MFA systems can be significantly improved, leading to more secure authentication processes [6,13]. Historically, various methods have been employed in MFA, starting with the use of Personal Identification Numbers (PINs) or passwords [6]. Over time, these were combined with tokens, such as cards, to provide an additional layer of security. The latest advancements in MFA technology have integrated biometric methods, particularly facial recognition, with traditional tokens or PINs to create more robust authentication systems. Research has also explored how factors like age, gender, and cognitive abilities impact the effectiveness of these authentication methods, highlighting the need for adaptable and inclusive security solutions. These developments underscore the evolving nature of MFA and the critical role that biometrics plays in its ongoing improvement [12,13,14].
A prominent example of biometric authentication in use today is facial recognition. This technology employs various techniques to identify individuals, including analysis of facial features and even the retina. However, recent advancements have also led to new security challenges, such as facial spoofing and deepfake attacks. In ref. [15], researchers combined facial recognition with a One-Time Password (OTP) system, where an OTP is sent via SMS to the user’s phone as a secondary authentication step. The system first scans the user’s face, and if a match is found in the database, it proceeds to send an OTP to the registered mobile number for verification [15]. Despite its potential, this approach has several limitations. One significant issue is that the system lacks the ability to confirm whether an OTP has already been sent, leading to repeated and unnecessary OTP transmissions. This flaw reduces the system’s reliability and user experience. Furthermore, the hardware used in this system, particularly the microcontrollers, limits its scalability. These components have limited processing power, which can impede the system’s deployment in large-scale environments. Another critical vulnerability involves SIMJacker attacks, where attackers send malicious messages containing vulnerable links or code to gain access to SIM cards. This exploitation can allow attackers to intercept OTPs intended for device verification, leading to significant security breaches. These challenges highlight the need for more robust and scalable biometric authentication systems that can better withstand emerging threats.
Another study explored a multi-factor authentication system that combines fingerprint recognition with a secret key (S_k) to enhance security [16]. In this system, the secret key S_k is used to generate a user-specific random matrix, which is stored on a smart card. During enrollment, both S_k and the user's fingerprint are used to create a unique vector, which is then stored in a central database [16]. For authentication, the user asserts their identity and inputs both their fingerprint and S_k. The system then generates a verification vector using the same process. If this verification vector matches the one stored in the database, the user is successfully authenticated. However, this system has some critical vulnerabilities. If the secret key S_k is compromised, it can lead to unauthorized access, as fraudsters could potentially generate the correct vector and gain access to the system. Additionally, the need to store sensitive information such as the vectors in a centralized database raises significant security concerns. If this database is breached, attackers could steal the stored vectors and use them to impersonate users, potentially gaining access to sensitive information such as financial records, medical records, and personal identification information. Theft of such data can lead to severe consequences, including financial loss, identity theft, and long-term damage to the user's personal and professional life.
Recent studies have highlighted the vulnerability of face recognition systems to template inversion (TI) attacks, where adversaries attempt to reconstruct face images from stored biometric templates. In particular, ref. [17] demonstrated that high resolution, realistic face images can be generated from facial templates using only synthetic training data. Their method employed StyleGAN to map biometric templates back into the generator’s latent space, enabling the recovery of detailed facial characteristics without relying on real datasets. The reliance on centralized storage also introduces risks of data corruption, where any alteration in the stored information could lead to failed authentications or unauthorized access. This highlights the potential security pitfalls of storing both the secret key and biometric data, emphasizing the need for more secure and decentralized approaches to biometric authentication.
Ref. [18] by Cambou et al. describes a method of using Challenge-Response Pairs (CRPs) to secure digital files. The subset protocol presented for securing digital files in distributed and zero-trust environments leverages a CRP mechanism that uniquely harnesses each digital file as a source of entropy for cryptographic operations. In this approach, the protection and verification of file authenticity occur both at distributed storage nodes and on terminal devices operating amid weak signals and obfuscating electromagnetic noise. By introducing nonces into the process, the message digests generated from hashed files become unique and unclonable, effectively enabling the files themselves to serve as PUFs within challenge-response protocols. During enrollment, randomized challenges elicit distinct subset responses used for secure cryptographic key generation and distribution, while subsequent verification cycles repeat the CRP process for file authentication and decryption. Building on the subset protocol of ref. [18] and its file-specific CRP mechanisms, we have extended this technology to the domain of biometric authentication.
By harnessing the inherent uniqueness of human facial data, our approach utilizes biometric landmarks and the computed distances between them as PUF, seamlessly integrating this with the optimized subset protocol for biometric data. This methodology significantly elevates the security framework of biometric authentication systems by eliminating the need for permanent storage of facial data or cryptographic keys—both are generated dynamically for each authentication instance. The objective of this solution is
  • To develop a novel, templateless biometric authentication protocol that dynamically generates ephemeral cryptographic keys from stable facial landmarks using a subset challenge-response mechanism, ensuring that neither biometric templates nor sensitive keys are ever stored and that all data remains irreversible and resistant to inversion attacks.
  • To enhance authentication security and resilience against next-generation threats, including deepfakes, spoofing, and quantum adversaries, by promoting expression-based authentication, injecting noise into the key generated in the biometric process, leveraging post-quantum cryptographic (PQC) algorithms for data protection, and integrating machine-learning-based liveness detection.
  • To implement robust MFA by binding the biometric-derived cryptographic key to a key generated from an SRAM PUF, further strengthening security against cloning, replay, and side-channel attacks in distributed, zero-trust environments.
This paper is organized to provide a detailed, end-to-end account of the proposed authentication protocol and its empirical evaluation. Section 2 offers a comprehensive review of prior work on MFA, PUF, and biometric security, outlining the challenges of template storage, emerging threats such as deepfakes, and recent developments in secure cryptographic protocols. Section 3 presents the core protocols in depth, encompassing both the 2D and 3D facial biometric key generation and recovery workflows, landmark stability analysis, challenge-response subset algorithms, noise injection, and the integration of MFA via SRAM PUF. Cryptographic key generation/recovery and error correction techniques are explicitly detailed, along with methods for liveness assurance and resilience against template inversion attacks. Section 4 delivers a thorough performance and security analysis, including entropy calculations, robustness under variable environmental and operational scenarios, detailed error rate characterization (FAR and FRR), and comparative benchmarking against existing methods. This structure culminates in a discussion synthesizing practical deployment insights, operational guidance for threshold and scenario selection, and directions for future research on privacy-preserving, post-quantum biometric authentication.

3. Templateless Biometrics

3.1. Enrollment of Face

The resources needed to utilize biometric data include a camera, which can be sourced from a variety of devices such as mobile phones or laptops, making biometric-based authentication convenient to implement on virtually any smart device. To ensure reliable performance and sufficient image quality during enrollment, the camera should have a minimum resolution of 720p and support a frame rate of at least 30 fps. The face enrollment procedure utilizes the "dlib" or "mediapipe" Python package, employing 68 or 468 landmarks, respectively, to pinpoint facial features such as the eyes, nose, chin, and ears. These landmarks serve as key identifiers for individuals. During the initial face enrollment, multiple frames of the face are captured, and landmarks are applied to each frame. The landmark positions are then compared across all images, and areas obscured by shadows or inadequate lighting are masked out of the authentication process. Additionally, landmarks are compared between frames, and those exhibiting the greatest variation are masked, as shown in Figure 1. These masked landmarks are excluded from authentication, effectively lowering both the FAR and the FRR.
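The masking of high-variance landmarks can be sketched as follows. This is a minimal illustration, assuming landmarks from several enrollment frames are available as a NumPy array and that the more stable (lower-variance) half is retained; the array shapes and threshold rule are assumptions of this sketch.

```python
import numpy as np

def build_landmark_mask(landmarks):
    """Keep the more stable half of the landmarks.

    landmarks: array of shape (n_frames, n_landmarks, 2) holding the (x, y)
    position of each landmark detected in each enrollment frame.
    Returns a boolean mask where True marks a landmark retained for
    authentication (1 = include, as for the landmark mask in the text).
    """
    # Total positional variance of each landmark across frames.
    var = landmarks.var(axis=0).sum(axis=1)   # shape: (n_landmarks,)
    threshold = np.median(var)                # keep the lower-variance half
    return var <= threshold

# Toy example: 5 frames, 6 landmarks; the last landmark jitters strongly.
rng = np.random.default_rng(0)
stable = rng.normal(loc=100.0, scale=0.2, size=(5, 5, 2))
jittery = rng.normal(loc=100.0, scale=8.0, size=(5, 1, 2))
frames = np.concatenate([stable, jittery], axis=1)

mask = build_landmark_mask(frames)
print(mask)   # the jittery landmark is excluded (False)
```

Only the retained landmarks then participate in the distance computations of the later key-generation steps.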
To ensure that the individual enrolling or authenticating is a living person and not an image or a replica, a liveness test is conducted. This protocol employs an open-source implementation that measures variation in eye blinking; if the variation falls below 2, the user is confirmed to be a real person and not a replica [19]. Additionally, machine-learning models integrated within the deepfake Python package analyze subtle facial movements and micro-expressions, enhancing the detection of AI-generated media and spoofing attacks. By combining physiological measurements with machine-learning-based behavioral analysis, the protocol delivers robust liveness detection, effectively mitigating both traditional and deepfake presentation threats.
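The blink-based check can be illustrated with the eye aspect ratio (EAR) commonly used in such open-source implementations. This is a minimal sketch: the thresholds, the dlib six-point eye ordering, and the function names are illustrative assumptions, not the protocol's exact code.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) points around one eye, in dlib's 68-point ordering."""
    d = math.dist
    # Two vertical eyelid distances over the horizontal eye width.
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

def blinked(ear_series, open_thresh=0.25, closed_thresh=0.15):
    """Detect a blink: the EAR dips below closed_thresh after an open state."""
    was_open = False
    for ear in ear_series:
        if ear > open_thresh:
            was_open = True
        elif was_open and ear < closed_thresh:
            return True
    return False

# Synthetic EAR trace: eyes open, a blink, eyes open again.
trace = [0.30, 0.31, 0.12, 0.09, 0.28, 0.30]
print(blinked(trace))   # True -> treated as a live subject
```

A static photograph produces a flat EAR trace with no dip, so the check fails for simple replay attempts.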

3.2. Key Generation and Recovery Using 2D Facial Data

As the name of our protocol implies, this protocol is “templateless” meaning that the stored data used for key recovery cannot be easily reverse-engineered into exploitable biometric information. This approach significantly enhances privacy and reduces the risks of identity theft and spoofing, as the databases do not store any retrievable data on the user’s biometric identities. Instead, they only hold response information required to recover a user’s key, which is useless without both the legitimate user’s face and their two-factor authentication password. From an attacker’s standpoint, such a database offers little incentive for compromise, especially since it would be encrypted following standard best practices for digital information security.

3.2.1. Initial Enrollment/Registration

The initial enrollment process involves the steps outlined in Section 3.1. Multiple frames of the user's face are captured, and landmarks are recorded. Figure 2 illustrates the enrollment/registration process: first, a random number (RN) is combined with the user's password (PWD) using an XOR operation to create a stream of bits, which is subsequently hashed with SHA3-512 and SHAKE256 to produce a message digest. This digest is then used to generate a set of n challenges, represented as facial landmark coordinates [C_0(x, y), C_1(x, y), C_2(x, y), ..., C_{n-1}(x, y)]. For each captured frame, facial landmarks are analyzed to identify those with high variance, and these unstable landmarks are excluded during key generation. Each generated challenge corresponds to a coordinate on a 256 × 256 frame, and the Euclidean distance between each challenge and the stable landmarks is calculated, with additional masking applied to exclude distances near predefined transition points.
Next, the response distances are encoded using Gray codes, which reduce the bit error rate between consecutive values; each challenge thus produces a binary response, resulting in a collection of subset responses R_0, R_1, R_2, ..., R_{n-1}. A cryptographically secure ephemeral key K is generated from these responses, and if K contains runs of consecutive zeros or ones exceeding a threshold, it is adjusted to break up these runs. The ephemeral key K is then used for cryptographic operations, such as encrypting files or other sensitive data. From the ephemeral key, a subset of landmark responses corresponding to the set bits (F_0, F_1, F_2, ..., F_p) is selected. The random number RN, the hash H(K), and the subset of responses (F_0, F_1, F_2, ..., F_p) are securely stored, and all other intermediate data are deleted. The detailed flow is shown in Algorithm 1.
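The challenge-derivation and response steps just described can be sketched as follows. The byte widths, the frame size, and the helper names are illustrative assumptions, not the protocol's exact parameters.

```python
import hashlib
import math

S = 256          # frame size: challenges live on a 256 x 256 grid
B_DIST = 9       # bits per quantized distance (illustrative)

def derive_challenges(pwd: bytes, rn: bytes, n: int):
    """XOR password with RN, hash, then expand into n (x, y) challenges."""
    seed = bytes(a ^ b for a, b in zip(pwd.ljust(len(rn), b"\0"), rn))
    digest = hashlib.sha3_512(seed).digest()
    stream = hashlib.shake_256(digest).digest(2 * n)   # 2 bytes per challenge
    return [(stream[2 * i] % S, stream[2 * i + 1] % S) for i in range(n)]

def gray(v: int) -> int:
    return v ^ (v >> 1)   # adjacent values differ in exactly one bit

def responses(challenges, landmarks):
    """Gray-coded Euclidean distances from each challenge to each landmark."""
    out = []
    for (cx, cy) in challenges:
        for (lx, ly) in landmarks:
            d = int(math.dist((cx, cy), (lx, ly)))
            out.append(format(gray(d), f"0{B_DIST}b"))
    return out

chals = derive_challenges(b"hunter2", b"\x13\x37\xca\xfe\xba\xbe\x00\x01", 4)
resp = responses(chals, [(120, 100), (140, 130), (128, 180)])
print(len(resp), resp[0])   # 4 challenges x 3 landmarks = 12 responses
```

Because the coordinates are derived deterministically from the password and RN, the same challenges can be regenerated at recovery without storing them.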
Algorithm 1 Templateless Enrollment.
Require: Frames E, password pw, config Θ, challenge-PRF C
Require: Constants unpacked as
 1: Θ → (b_dist, gray, C_n, S, K, E_max, T, Q, p_dead, D_dead, R_max, shape, s_gray, η)
Ensure: Published subset, masks (D_mask, L_mask, C_mask), verifier h
Stage A—Password-tied determinism (no images yet)
 2: shake ← SHAKE-256(pw)
 3: C ← DeriveChallenges(pw, C_n, S)        ▹ 2·C_n coords mod S, paired as (x_i, y_i)
 4: k ← EphemeralKey(pw, K); k ← BreakRuns(k, R_max)
 5: h ← SHA3-256(k)                         ▹ verifier bound to k; public
Stage B—Geometry acquisition (deterministic ordering)
 6: X ← FaceChips(E, S)                     ▹ one aligned S × S chip per frame
 7: L ← Landmarks(X, shape)                 ▹ 2D landmark coords in the chip frame
 8: L_mask ← BuildLandmarkMask(L)           ▹ keep the lower-variance half (stable landmarks)
Stage C—Stability gating (deadzones)
 9: R_raw ← RawResponses(L, C, L_mask)      ▹ per-challenge distances per frame
10: M_DZ ← PerChallengeDeadzones(R_raw[0], D_dead, S√2)
11: D_mask ← FrequentDeadzoneMask(M_DZ, p_dead)
12: R ← ApplyDeadzone(R_raw, D_mask)        ▹ drop unstable distance indices everywhere
Stage D—Bit representation & position selection
13: R_bin ← QuantizeAndEncode(R, b_dist, gray, s_gray)
14: C_mask ← CreateChallengeMask(R_bin, K)  ▹ select the least-varying positions
15: R′ ← MaskByChallenge(R_bin[0], C_mask)  ▹ the first frame's K selected bits
Stage E—Noise injection & publication
16: R̃ ← InjectNoiseGlobally(R′, η)
17: subset ← GenerateSubset(R̃, k)          ▹ publish indices/values where k[i] = 1
18: Erase transient buffers {X, L, R_raw, R, R_bin, R′}   ▹ no template retained
19: return (subset, D_mask, L_mask, C_mask, h)
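As a concrete illustration of one Stage A helper, BreakRuns can be sketched as follows. Which bit inside an all-zero window is flipped (here, the last one) is an assumption of this sketch.

```python
def break_runs(bits, r_max):
    """Flip one bit in every all-zero window of length r_max, so no zero
    run of length r_max survives in the key."""
    bits = list(bits)
    for start in range(len(bits) - r_max + 1):
        window = bits[start:start + r_max]
        if not any(window):                  # all zeros -> break the run
            bits[start + r_max - 1] = 1
    return bits

key = [1, 0, 0, 0, 0, 0, 0, 1, 0, 0]
print(break_runs(key, 4))   # no zero run of length 4 survives
```

Bounding zero runs matters because the set bits of k gate which responses are published, so long zero runs would leave large unprotected gaps in the ordering.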
To further strengthen the resilience of the authentication protocol against inversion and side-channel attacks, two additional cryptographic obfuscation mechanisms are employed:
  • After quantizing and gray-coding the calculated distances between facial landmarks, a random subset of bits is removed while the order of the remaining bits is preserved. Incorporating only these selected segments of the encoded biometric representation into the final key material enhances security and minimizes the risk of information leakage from the overall biometric dataset.
  • The second additional step is noise injection. After masking and encoding the per-challenge bitstrings for the selected enrollment frame (as depicted in Figure 2), and before gating by the ephemeral key K, noise is injected across the entire collection of encoded responses. The method is detailed in Algorithm 2. This obfuscation is applied only during the enrollment phase, ensuring that while additional uncertainty is injected into the stored representations, legitimate key recovery during authentication remains unaffected.
Together with rigorous liveness testing, Gray-code bit selection, double masking, and noise injection, these techniques significantly harden the system against AI-driven attacks, statistical analysis, and direct attacks on the biometric keying process.
The algorithm for key generation is as follows:
Inputs: password pw (bytes), frames E = {I_i} (enrollment) or R = {J_j} (recovery), and config
Θ = (b_dist, gray, C_n, S, K, E_max, T, Q, p_dead, D_dead, R_max, shape, sim, s_gray, η),
where b_dist = bits per distance, gray ∈ {0, 1}, C_n = number of challenges, S = face-chip size (px), K = key length (bits), E_max = uncertainty budget, T = matching tolerance, Q = threshold for entering error correction, p_dead = deadzone percentage, D_dead = deadzone density, R_max = maximum zero-run length in the key, shape ∈ {mediapipe, dlib}, sim ∈ {hamming}, s_gray = optional Gray bit-slicing seed, and η = noise-injection rate.
The password binds three invariants reused at recovery: (i) the challenge points (x_i, y_i); (ii) the landmark order (via a password-seeded shuffle); and (iii) the gating pattern of the one-time key k. The stability pipeline (landmark selection → deadzones → identical quantization) ensures comparable bits across sessions.
  • Storage discipline
    - Only (subset, D_mask, L_mask, C_mask, h) are retained/published. All transient image, geometry, and bit buffers are erased at the end of Stage E. No biometric template is stored.
  • Password-tied primitives and helpers
    - DeriveChallenges(pw, C_n, S): Expand pw with SHAKE-256 to obtain enough output for 2·C_n coordinates; read the values and reduce mod S to obtain the (x_i, y_i) coordinates. The same method is used in recovery.
    - BreakRuns(k, R_max): A sliding window of length R_max scans the sequence k; if the window contains only zeros, one bit is flipped to 1 to prevent extended zero runs and enforce distributed activations.
    - HashKey(k): The hash representation of k, using SHA-256 and SHAKE-256.
    - GenerateSubset({r_i}, k): Given an ordered list of responses, 50% of the responses are stored while the rest are deleted.
    - Gray-code bit selection (Pick3): After converting a distance to binary using Gray code, g bit indices are deterministically selected from {0, ..., b_dist − 1}. The remaining indices are returned in increasing order, preserving bit order.
  • Face detection and geometry
    - FaceChips({I}, S): Detect a single face per frame and align it to a square S × S chip.
    - Landmarks(X, shape): Extract 2D landmarks for each face chip using shape ∈ {mediapipe, dlib}. Coordinates are pixel positions in the chip reference frame.
    - BuildLandmarkMask(L): Landmarks whose locations vary strongly across frames are masked; the stable ones are used.
  • Challenge responses
    - RawResponses(L, C, L_mask): For each challenge (x, y) ∈ C, compute the Euclidean distances to the landmarks with L_mask[i] = 1 and collect them into a list. The output per frame is a list of lists (one list per challenge).
  • Deadzones (stability gating)
    - PerChallengeDeadzones(d, D_dead, S√2): Partition [0, S√2] into D_dead equal bins. For each challenge, identify indices that fall near transition boundaries and mark them as unstable; stable regions are left unmarked.
    - FrequentDeadzoneMask({dz_j}, p_dead): Aggregate these instability markings across all challenges and construct a global mask that flags the most frequently unstable positions for exclusion.
    - ApplyDeadzone(R_raw, D_mask): Apply D_mask to every frame/challenge, dropping distances where D_mask = 1 so that later quantization avoids unstable indices.
  • Global noise injection:
    Algorithm 2 Inject Noise Globally.
    Require: β ∈ {0, 1}^B (concatenated bit vector), noise rate η ∈ [0, 1]
    Ensure: β̃ ∈ {0, 1}^B with exactly ⌊ηB⌋ flips
     1: F ← ⌊ηB⌋
     2: if F = 0 then return β
     3: Initialize the flip index set I ← ∅
     4: Spread: reserve at most one flip per contiguous block, if possible, to avoid clustering
     5: Fill: sample the remaining (F − |I|) indices uniformly without replacement from [1..B] \ I
     6: β̃ ← β; flip the bits at the indices in I
     7: return β̃
  • Final step
    - GenerateSubset: After noise injection, publish only the responses whose indices are selected; this yields the stored subset.
    - MaskGeneration: The deadzone and challenge masks use 1 = exclude, 0 = include. The landmark mask uses 1 = include (selected landmarks).
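Algorithm 2 can be sketched in Python as follows. The contiguous-block size used in the Spread step and the floor rounding of ηB are assumptions of this sketch.

```python
import random

def inject_noise_globally(bits, eta, seed=None):
    """Flip exactly floor(eta * B) bits: spread flips across contiguous
    blocks first to avoid clustering, then fill the remainder uniformly
    without replacement."""
    rng = random.Random(seed)
    b = len(bits)
    flips = int(eta * b)
    if flips == 0:
        return list(bits)
    # Spread: at most one flip per contiguous block where possible.
    block = max(1, b // flips)
    chosen = set()
    for i in range(0, b, block):
        if len(chosen) == flips:
            break
        chosen.add(rng.randrange(i, min(i + block, b)))
    # Fill: remaining flips sampled uniformly from the untouched indices.
    remaining = [i for i in range(b) if i not in chosen]
    chosen.update(rng.sample(remaining, flips - len(chosen)))
    out = list(bits)
    for i in chosen:
        out[i] ^= 1
    return out

noisy = inject_noise_globally([0] * 40, eta=0.10, seed=1)
print(sum(noisy))   # exactly 4 bits flipped
```

Spreading the flips keeps the injected noise from concentrating in one region of the published responses, which would otherwise be statistically conspicuous.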

3.2.2. Key Recovery

When an authentication request is initiated, the system utilizes the stored subset of responses, the random number (RN), and the hash of the enrollment key H(K) for key recovery, as illustrated in Figure 3. During authentication, a single frame of the user's face is captured and facial landmarks are mapped. Consistent with the enrollment procedure, RN and the password (PWD) undergo an XOR operation to produce a bit stream, which is subsequently hashed with SHA3-512 and SHAKE256 to yield a message digest. This digest is then used to generate a set of n challenge coordinates on the face. The distances between these challenges and the detected landmarks are computed, producing a new set of responses (R_0, R_1, ..., R_{n-1}). Assuming an error-free scenario, each stored subset response (F_0, F_1, ..., F_p) is compared with the corresponding component in the authentication responses; if they match, the associated bit in the reconstructed ephemeral key K′ is set to 1, and if not, to 0. If the number of 1s in K′ exceeds a set threshold, its hash is compared to the stored H(K). A successful match confirms the identity and authorizes decryption using K′. In the event of discrepancies, error-correction mechanisms are employed to facilitate key recovery. The detailed process is shown in Algorithm 3.
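The error-free recovery path just described can be sketched as follows. The response encoding, the threshold parameter (here named min_ones), and the container types are illustrative assumptions.

```python
import hashlib

def recover_key(fresh, stored_subset, h_k, min_ones=2):
    """Error-free recovery: compare each stored subset response against the
    freshly computed response at the same position; matches set the key bit
    to 1, everything else stays 0; accept only if the hash matches H(K).

    fresh: list of responses computed at login, indexed by position.
    stored_subset: dict mapping position -> response published at enrollment.
    """
    k = [0] * len(fresh)
    for pos, val in stored_subset.items():
        if fresh[pos] == val:
            k[pos] = 1
    if sum(k) < min_ones:
        return None                         # too few matches: reject early
    k_bytes = bytes(k)
    if hashlib.sha3_256(k_bytes).digest() == h_k:
        return k_bytes                      # identity confirmed
    return None                             # fall through to error correction

# Enrollment side: responses at key positions 1, 3, and 4 were published.
enrolled = {1: "0101", 3: "1100", 4: "0011"}
true_key = bytes([0, 1, 0, 1, 1, 0])
h_k = hashlib.sha3_256(true_key).digest()

# Login side: the genuine user reproduces the same responses.
fresh = ["1111", "0101", "0000", "1100", "0011", "1010"]
print(recover_key(fresh, enrolled, h_k) == true_key)   # True
```

An impostor's responses fail to match the stored subset, so the reconstructed key hashes to a different value and recovery falls through to error correction or rejection.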
The algorithm for key recovery is as follows:
Algorithm 3 Templateless Recovery.
Require: Frames R, password pw, config Θ, challenge-PRF C, stored (subset, D_mask, L_mask, C_mask, h)
Ensure: Recovered key k or fail
 1: Unpack Θ → (b_dist, gray, C_n, S, K, E_max, T, Q, p_dead, D_dead, R_max, shape, s_gray)
Phase A—Re-create password-tied view
 2: Recompute (x_i, y_i) from SHAKE-256(pw) mod S;  C ← ApplyMask(C, C_mask)
 3: Y ← FaceChips(R, S);  L ← Landmarks(Y, shape)
Phase B—Recompute the same bit representation
 4: R_raw ← RawResponses(L, C, L_mask);  ApplyDeadzone(R_raw, D_mask)
 5: R_bin ← QuantizeAndEncode(R_raw, b_dist, gray, s_gray)
 6: Let F ← R_bin ordered over the K kept positions
Phase C—Ordered matching and validation
 7: u ← AttemptKeyRecovery(F, subset, N = 7, T, hamming)  ▹ map each published item to its position
 8: if #{i : u[i] = −1} > E_max then
 9:     return fail
10: if #{i : u[i] = −1} = 0 then
11:     k ← bitarray(u);  if SHA3-256(k) = h then return k
12: if #{i : u[i] = −1} > Q then
13:     for k ∈ EnumerateCompletions(u) do
14:         if SHA3-256(k) = h then
15:             return k
16: return fail

3.2.3. Error Correction

In the event that the initial key recovery does not produce the correct ephemeral key, error correction is used. The first step marks the binary stream positions where we are certain of a 0 or a 1; the unsure positions, which could be either 0 or 1, are set to "X". A window is defined to match the client-generated full responses against the subset responses, sized in accordance with the threshold number of consecutive 0s an ephemeral key can have. It is assumed that every window contains at least one match.
Let us consider w i n d o w _ s i z e = 4 , w i n d o w _ s t a r t _ l o c = 0 and m a t c h _ l o c = 1 The error correction handles three cases:
  • Case 1: One match found.
    Consider we have 8 responses as shown in Figure 4a. In this scenario, the search begins by finding a match for subset F 1 . The comparison starts with w i n d o w _ s t a r t _ l o c = 0 and considers the first 4 full subset responses generated by the client. It is observed that a match is found at R 1 , i.e., m a t c h _ l o c = 1 , so the binary stream at the R 1 location is set to “1”, and the other unmatched locations before R 1 are set to “0”. To continue the search, the w i n d o w _ s i z e remains unchanged, but w i n d o w _ s t a r t _ l o c = m a t c h _ l o c + 1 = 2 .
  • Case 2: No match found.
    Figure 4b illustrates the scenario when no match is found. Considering w i n d o w _ s t a r t _ l o c = 0 and m a t c h _ l o c = 1 , while attempting to find a match for subset F 1 , the first 4 full subset responses are examined for a match. Since no matches are found, the window is shifted by one, i.e., w i n d o w _ s t a r t _ l o c is set to 1, and R 0 is marked as “X”. The window size is expanded to w i n d o w _ s i z e = w i n d o w _ s i z e + ( w i n d o w _ s i z e w i n d o w _ s t a r t _ l o c ) = 7 . This expansion accounts for the possibility that F 2 might find a match in either R 1 , R 2 , or R 3 . Assuming that every window has one match, the search extends to the next 4 values of the full subset response to find a match for F 2 .
  • Case 3: More than one match found.
    Figure 4c depicts the scenario when more than one match is found. Considering window_start_loc = 0 and match_loc = 1, while attempting to find a match for subset F1, the first 4 full subset responses are examined. When multiple matches occur, it is termed a collision. In this example, matches were found at R2 and R3, indicating a potential presence of a 1 at either of these positions. Therefore, both positions are marked as "X", and the responses before the first match are set to 0. In this case, match_loc is set to 3, window_start_loc is set to match_loc + 1, and the window size is adjusted to window_size = window_size + (window_size − match_loc) = 5.
After generating the ternary key, the system generates an exhaustive set of candidate keys by setting each uncertain position to 0 and 1. The hashes of these keys are compared with H ( K ) to find a match. If no match is found, the system denies authentication and the entire recovery process is re-initiated. This error-correction method follows a deterministic search strategy that limits the number of uncertain positions explored during key reconstruction, providing a predictable correction capability and reliable recovery without the computational overhead of traditional fuzzy-extractor schemes. For a detailed description of the key recovery protocol and full analysis, readers are referred to [20].
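The three window-update rules can be sketched as a small helper. `advance_window` is a hypothetical name introduced here for illustration; the update formulas follow the worked examples above (window_size = 4, window_start_loc = 0).

```python
def advance_window(window_size, window_start_loc, match_locs):
    """Window-update rules for the three error-correction cases (sketch).

    match_locs: positions inside the current window where the subset
    response matched a full response. Returns the next
    (window_size, window_start_loc) pair.
    """
    if len(match_locs) == 1:
        # Case 1: unique match -> mark it 1, keep window size, advance start
        match_loc = match_locs[0]
        return window_size, match_loc + 1
    if len(match_locs) == 0:
        # Case 2: no match -> shift start by one and expand the window
        new_start = window_start_loc + 1
        return window_size + (window_size - new_start), new_start
    # Case 3: collision -> anchor on the last match and adjust the window
    match_loc = match_locs[-1]
    return window_size + (window_size - match_loc), match_loc + 1
```

With the paper's example values, Case 1 yields (4, 2), Case 2 yields (7, 1), and Case 3 with matches at R2 and R3 yields (5, 4), matching Figure 4a–c.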
  • Functions used in error correction for key recovery
    - AttemptKeyRecovery (F, subset, N, T, sim): Let F be the ordered list of enrollment-time bitstrings at the K kept positions (recomputed during recovery). Slide a window of size N over F and, for each published item in subset, look for matches within tolerance T (default Hamming). Output a keystate vector u ∈ {−1, 0, 1}^K, where 1 marks a confident match at a unique position, −1 marks uncertainty/collisions, and 0 marks not-yet-determined positions.
    - EnumerateCompletions (u): If u contains −1, enumerate 0/1 assignments for those positions (bounded by a threshold Q before attempting). For each candidate k, accept if SHA3-256(k) = h.
    - Verification and early exit: If u contains no −1, form k directly and accept on hash match. If the count of −1 exceeds the budget E_max, abort.
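A minimal sketch of the completion-enumeration step with the hash check is given below. One byte per key bit is used purely for illustration; the real protocol packs bits and bounds the enumeration by the budget Q.

```python
import hashlib
from itertools import product

def enumerate_completions(u, h, q_max=2**16):
    """Try all 0/1 assignments for uncertain (-1) positions in the
    keystate vector u; accept a candidate when its SHA3-256 digest
    equals the stored hash h. Returns the key or None."""
    unknown = [i for i, v in enumerate(u) if v == -1]
    if 2 ** len(unknown) > q_max:
        return None                      # exceeds the enumeration budget
    for bits in product([0, 1], repeat=len(unknown)):
        candidate = list(u)
        for pos, b in zip(unknown, bits):
            candidate[pos] = b
        key = bytes(candidate)           # one byte per bit, for illustration
        if hashlib.sha3_256(key).digest() == h:
            return key
    return None
```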

3.3. Key Generation and Recovery Using 3D Facial Data

While 2D facial recognition provides a valuable foundation for biometric authentication and remains widely adopted in many security systems, the integration of 3D facial recognition offers additional capabilities that further enhance accuracy, security, and robustness [21]. By incorporating depth and geometric data, 3D facial recognition enables natural resistance to common presentation attacks such as those involving photos or digital images, due to its ability to capture the structural intricacies of the human face [19]. This added dimensionality and adaptability make 3D technology exceptionally well-suited for a wide variety of applications, ranging from personal device security to mission-critical environments like law enforcement, border control, and high-security access management.
The core protocol for the proposed 3D key generation and recovery from biometric data remains consistent with the algorithm described in Section 3.2. The principal distinction is the increased number of frames and diversity of angles captured during enrollment, as illustrated in Figure 5. Multiple frames are systematically acquired from various angles to robustly characterize the subject’s facial geometry. For each captured frame, facial landmarks are meticulously extracted and recorded, forming the basis for generating robust subset responses.

3.3.1. Initial Enrollment

As depicted in Figure 6, the enrollment process involves capturing a comprehensive set of 60 frames from a broad range of facial angles. Landmarks are detected in each frame, and the templateless key generation method, as outlined in Section 3.2.1, is applied to yield unique subset responses per frame.
  • For each frame, the same challenge is formed by hashing a random nonce (RN) with the user's password (PWD).
  • For each challenge, a unique subset response is generated and used to derive an ephemeral key K, as illustrated in Figure 6. The key generation methodology follows the process detailed in Section 3.2.1.
  • As in the 2D case, the essential subset responses, RN, and the hash of the ephemeral key H ( K ) are securely stored. All other enrollment data, including raw landmarks, are promptly deleted to ensure privacy and security.
During enrollment, we process all 60 frames end-to-end to capture full 3D facial detail. The temporary memory overhead is approximately 300 KB. After processing, only the essential data are retained: the subset responses and the handshake. Everything else, including raw frames, dense landmarks, and intermediate 3D data, is deleted. The resulting stored record per user is approximately 100 KB. For IoT-constrained devices, the same protocol is used with resource-aware adjustments: we reduce the number of frames captured, drop redundant frames early, and immediately discard intermediate 3D data. These measures keep memory and power usage low while preserving accuracy and security.
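The per-frame challenge formation can be sketched as follows. `derive_subset_response` is a placeholder for the Section 3.2.1 pipeline, and the exact hash chaining (SHA3-512 over RN concatenated with PWD) is an assumption based on the MFA description later in the paper.

```python
import hashlib
import os

def form_challenge(rn: bytes, pwd: str) -> bytes:
    """Hash the random nonce with the password; the same challenge is
    reused for every captured frame (sketch, hashing order assumed)."""
    return hashlib.sha3_512(rn + pwd.encode("utf-8")).digest()

def enroll(frames, pwd, derive_subset_response):
    """Hypothetical enrollment loop: derive one subset response per frame,
    then retain only RN and the subset responses (plus H(K), not shown).
    Raw frames and landmarks would be discarded immediately after."""
    rn = os.urandom(32)
    challenge = form_challenge(rn, pwd)
    subsets = [derive_subset_response(challenge, frame) for frame in frames]
    return rn, subsets
```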

3.3.2. Key Recovery

During authentication, a live frame is captured, and the stored subset responses, together with the corresponding R N and H ( K ) , are utilized to reconstruct the ephemeral key K. The recovery process unfolds as follows:
  • The challenge is regenerated from the R N and P W D and applied to the live capture, producing a fresh set of responses using the methodology from Section 3.2.2 and illustrated in Figure 7.
  • To reconstruct K, a subset response is randomly selected. If the ephemeral key generated ( K ) closely matches the expected response profile (i.e., the number of 1’s is near the subset length with low uncertainty), the recovery continues with this set. If not, another subset is randomly chosen, and the process repeats.
  • This iterative matching substantially reduces latency in key recovery; however, to prevent extended search times, a threshold is enforced. If recovery exceeds the allotted time, access is denied and the user must re-initiate authentication.
  • Once a ternary key with the appropriate number of 1’s and minimal ambiguity is identified, an exhaustive search is conducted by toggling uncertain positions. All candidate keys are hashed and compared to the stored H ( K ) . A match grants access; otherwise, the process is restarted.
During key recovery, a live capture is processed to reconstruct facial geometry, derive responses, and regenerate the ephemeral key, which is verified against the stored subset together with R N and H ( K ) . No new data are created; all raw frames, landmarks, and intermediate 3D data are discarded immediately after the decision. The transient memory footprint is approximately 50 KB, which is lower than during enrollment.
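The iterative subset selection with a time budget might look like the following sketch. `regenerate` and `resolve` stand in for the response-regeneration and exhaustive-toggle-plus-hash-check steps, and the thresholds are illustrative.

```python
import random
import time

def recover_key(subsets, regenerate, resolve, time_budget_s=2.0, ones_min=3):
    """Randomly try stored subset responses until one yields a ternary key
    with enough confident 1s; resolve its uncertain bits, and deny
    authentication if the time budget is exhausted (sketch)."""
    deadline = time.monotonic() + time_budget_s
    pool = list(subsets)
    while pool and time.monotonic() < deadline:
        # Randomly select a subset response that has not been tried yet
        s = pool.pop(random.randrange(len(pool)))
        ternary = regenerate(s)                  # values in {0, 1, -1}
        if sum(1 for v in ternary if v == 1) >= ones_min:
            key = resolve(ternary)               # exhaustive toggle + hash check
            if key is not None:
                return key
    return None                                  # deny authentication
```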
This methodology enables effective utilization of 3D facial data from multiple angles during authentication, significantly improving recognition accuracy even under challenging conditions such as variable lighting, expressions, or occlusions. Furthermore, the protocol’s versatility allows its extension beyond facial biometrics to general object recognition, subject to landmark accessibility. Its adaptability and security make it a powerful solution for a wide array of scenarios, from rapid, secure identification in high-stakes environments like airports and border crossings to reliable biometric authentication in diverse digital applications, offering tangible benefits in security, operational efficiency, and user convenience.

3.4. Multi-Factor Authentication (MFA) with 2D/3D Biometry and SRAM PUF

MFA is a security process requiring users to present two or more distinct forms of identification before accessing a system, relying on three primary factors: inherence (e.g., fingerprint scanning, facial recognition), knowledge (e.g., passwords, PINs), and possession (e.g., smart cards, OTP tokens) [6,22,23,24,25]. By integrating multiple authentication factors, MFA provides a robust defense against attacks; even if an attacker compromises a user's password, they would still need the additional factors to succeed [26,27]. To further mitigate risks such as modeling or spoofing attacks, this paper proposes an MFA protocol design that leverages the CRP mechanism, using templateless key generation and recovery through subset responses with biometric data and a liveliness factor (Inherence Factor), as discussed in Section 3.2. The approach also incorporates a CRP mechanism using an SRAM PUF token (Possession Factor) [28,29,30], as shown in Figure 8, and a password (Knowledge Factor) for authentication.
The method of using the SRAM PUF for key generation and recovery is inspired by the work of Jain et al. [31], which proposes an optimized protocol for enhancing industrial IoT device security by integrating SRAM-based PUFs, error correction codes (ECC) or Response-Based Cryptography (RBC) [32], PQC algorithms, and ZKP systems to efficiently generate and recover one-time cryptographic keys with low latency and minimal error rates. Advanced techniques such as stable-cell filtering and addressing schemes maximize key entropy and randomness from PUF responses, while flexible key lengths and integration with CRYSTALS-KYBER and CRYSTALS-DILITHIUM provide quantum-resistant security.

MFA Protocol Design

The enrollment process for this protocol involves two main steps: the enrollment of the SRAM PUF, and the enrollment of the user’s biometric data. The key generation process, illustrated in Figure 9, proceeds as follows:
  • For SRAM PUF enrollment, the SRAM PUF is read multiple times to filter out unstable cells, and the stable cells are used to generate high-entropy responses.
  • The key generation process begins with the server generating an RN, while the user provides a PWD. These inputs are hashed using SHA3-512 and SHAKE256, producing a message digest.
  • Next, a challenge-response pair mechanism (CRPM) utilizing the SRAM PUF is employed to generate challenges. These challenges determine the addresses of the cells to be read from the SRAM [31,33].
  • The SRAM PUF uses the enrollment data to generate a response, denoted as S K .
  • Next with the biometric data, the hash is utilized to generate challenges, facilitating the calculation of the distance between each challenge and the landmarks of the user’s face. This process results in the generation of a full set of responses as detailed in Section 3.2.
  • Following this, the protocol generates an ephemeral key, denoted as B K , and produces a subset response utilized during the recovery process.
  • During the process of ephemeral key generation using the biometric data, a liveliness test is conducted to ensure the absence of any spoofing attacks. The outcome of this liveliness test is denoted as L and is utilized in the generation of the final key, denoted as K.
  • The final key K is generated by combining S K , B K , and L through an XOR operation followed by a modulo operation. This resulting key K is then employed in encryption algorithms such as AES, Double Encryption using AES, CRYSTALS-KYBER, or CRYSTALS-DILITHIUM [34,35] to secure digital files.
  • During this process, the R N , the hash of the SRAM PUF response H ( S K ) , the subset responses, and the hash of the ephemeral key H ( B K ) are stored as a handshake. This handshake is shared with the client during the authentication process.
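The final key combination can be sketched as below. Reducing modulo 2^256 is an assumption on my part, since the protocol specifies only "an XOR operation followed by a modulo operation" without naming the modulus.

```python
def derive_final_key(sk: bytes, bk: bytes, l: bytes, key_bits: int = 256) -> bytes:
    """Combine the PUF response SK, biometric ephemeral key BK, and
    liveliness factor L by XOR, then reduce modulo 2**key_bits
    (modulus assumed) to obtain the final key K."""
    x = (int.from_bytes(sk, "big")
         ^ int.from_bytes(bk, "big")
         ^ int.from_bytes(l, "big"))
    return (x % (1 << key_bits)).to_bytes(key_bits // 8, "big")
```

Because XOR is bitwise, an error in any single factor corrupts the combined key without revealing which factor failed, which is the zero-knowledge property the protocol relies on.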
In the key recovery process for authentication, successful authentication requires the user to possess both the SRAM token and their biometric information along with the personal password. The authentication process, depicted in Figure 10, involves a sequential flow detailed below:
  • Utilizing the handshake, the client extracts the RN, while the user inputs the PWD, both of which undergo one-way hashing to produce a message digest. This message digest is employed to generate challenges for extracting responses from the SRAM PUF and ephemeral keys from the biometric data.
  • The SRAM PUF generates a response S K ′ , while the biometric data produces two outputs: the potential ephemeral key B K ′ and the liveliness factor L.
  • S K ′ undergoes error-management methods such as Response-Based Cryptography [32] to identify and correct errors and extract S K . Similarly, the subset response method generates the potential ephemeral key B K ′ with uncertain positions, as detailed in Section 3.2.2, which is passed through the error correction method detailed in Section 3.2.3 to determine the ephemeral key B K .
  • Combining S K , B K , and L generates the final key K ′ . The hash of K ′ is then compared with H ( K ) . If a match is found, decryption of the digital file is initiated; otherwise, authentication is denied.
  • The complexity of generating the final key K from multiple factors in the MFA system makes it infeasible for an adversary to pinpoint which specific factor contributed to the errors resulting in an authentication failure.
The lack of knowledge regarding which factor caused authentication failure and enhanced entropy of the keys makes this multi-layered authentication system effective in enhancing the security of various applications.
The protocol is a quantum-resistant multi-factor authentication system that integrates advanced lattice-based post-quantum cryptographic algorithms, such as CRYSTALS-KYBER for key exchange and CRYSTALS-DILITHIUM for digital signatures, with zero-knowledge proofs to safeguard privacy, while employing device-specific noise from SRAM PUFs and liveliness for multi-factor authentication. Current PQC implementations utilize a pseudo-random number generator (PRNG), which, being deterministic and predictable, may not offer sufficient security against quantum adversaries. To enhance quantum resistance, replacing the PRNG with a true random source is beneficial [36,37]. The multi-factor responses can directly substitute for conventional PRNGs in algorithms like CRYSTALS-KYBER or CRYSTALS-DILITHIUM during key creation, ensuring that both public and private cryptographic keys are generated from inherently unpredictable, hardware-unique entropy sources. This method delivers robust protection against classical and quantum attacks by eliminating vulnerabilities associated with deterministic PRNGs and firmly anchoring security in the physical unclonability of the device, thereby preventing spoofing and key reuse and enhancing overall cryptographic resilience.

4. Results

4.1. Entropy Analysis

Entropy represents the measure of unpredictability and randomness critical to the security of biometric and token-based authentication systems.
Biometric Computation: For landmark-based face biometrics, entropy analysis considers both the immense combinatorial possibilities from grid selection and landmark combinations, and the impact of the key length on effective security. In the proposed system, the biometric entropy E b is a function of:
  • The selection of K unique coordinates from a 256 × 256 grid (M = 65,536).
  • The assignment of l facial landmarks from a total set L to each selected point. The entropy computation is represented as:
    • Key length generated: K = 256
    • Selection of 256 unique grid coordinates: C(M, K) = C(65,536, 256)
    • For each coordinate, l = 32 stable landmarks chosen out of the L = 68 or 468 landmarks: C(L, l)^K
    • The resulting entropy is:
      E_b = log2( C(M, K) × C(L, l)^K )
Using the above equation, with 68 available landmarks, selecting 32 stable landmarks within a 256 × 256 grid results in approximately 18,912 bits of entropy, representing the number of possible key combinations. Increasing the grid size or the number of chosen landmarks raises this value; for example, using 468 landmarks instead of 68 with the same selection size boosts the entropy to around 44,555 bits. This protocol is designed to be flexible; its parameters, such as the number of landmarks and selections, grid size, and key length, can be easily adjusted to suit different applications and security requirements, allowing for scalability as needed.
When the system extracts a binary key of length m (e.g., 128, 256, 512, or 1024 bits) by quantizing and encoding selected distances, the effective entropy is limited by both the number of reliably recoverable key bits and the combinatorial selection space. For a 256-bit key, the attacker must search 2^256 possible values, and if the raw entropy from landmark selection and grid choices exceeds 256 bits, the full keyspace is utilized. So, as the key length increases, the entropy from the landmark/grid combination must be sufficient to populate all the key bits with true randomness. Here, the use of hundreds of landmark/grid pairs accelerates the growth in possible combinations, easily supporting very large, quantum-safe keys. With 68 landmarks, even conservative subsets provide entropy far exceeding 2^256, i.e., the minimum for post-quantum secure authentication. With 468 landmarks, the number of possible landmark subsets per coordinate grows from C(68, 32) ≈ 2.5 × 10^19 to C(468, 32) ≈ 3.6 × 10^49, providing orders-of-magnitude greater entropy. For 256 × 32 landmark choices per response, entropy becomes effectively unbounded for practical purposes, supporting extremely long keys (512, 1024 bits) without risk of entropy exhaustion.
Combined Token and Biometric Entropy: For an SRAM-based token system, the token entropy E t is derived from the number of cells in the SRAM PUF. Let N = 1,048,576 be the total number of SRAM cells, and K = 256 the number of cells used to compute each response. The token entropy can be expressed as:
E_t = log2(N^K) = log2(1,048,576^256) = 5120 bits
The combined entropy is represented by:
E = max(E_t, E_b)
ensuring the upper bound is always at least as strong as the most secure subsystem and vastly exceeds modern cryptographic standards.
Practical Implication: This analysis confirms that by increasing the number of landmarks (especially to 468), and with flexible, high key lengths, the system achieves an astronomically large keyspace and an entropy margin that robustly withstands both classical and quantum adversaries. This scalability in entropy directly translates to higher security assurances for both current and future cryptographic requirements.
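The entropy figures above can be reproduced directly from the E_b and E_t formulas; the numbers below use Python's exact binomial coefficients.

```python
from math import comb, log2

M, K = 65_536, 256          # 256x256 grid, 256 selected coordinates

def biometric_entropy(L: int, l: int = 32) -> float:
    """E_b = log2( C(M, K) * C(L, l)^K ), in bits."""
    return log2(comb(M, K)) + K * log2(comb(L, l))

def token_entropy(N: int = 1_048_576, K: int = 256) -> float:
    """E_t = log2(N^K) = K * log2(N), in bits."""
    return K * log2(N)

def combined_entropy(L: int) -> float:
    """E = max(E_t, E_b): the stronger subsystem sets the bound."""
    return max(token_entropy(), biometric_entropy(L))
```

With L = 68 this evaluates to roughly the 18,912 bits quoted in the text, with L = 468 to roughly 44,555 bits, and the token side to exactly 5120 bits.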

4.2. Error Rate

Comprehensive analysis of error rates is fundamental for evaluating the security, resilience, and user experience of any biometric authentication protocol. The ability to quantify, interpret, and manage error probabilities directly influences system optimization. This prompts refinements in both algorithm design and hardware implementation. Excessive error rates can not only undermine security but also erode user trust, making robust error management essential for system acceptance and operational reliability.
At the core of our approach is a carefully defined error tolerance, which determines the maximum permissible bit error in ephemeral key reconstruction while maintaining successful authentication. This parameter is tunable: a more permissive threshold allows for greater adaptation to environmental noise and user variability (ideal for consumer electronics or public access points), whereas a stricter threshold enforces fine-grained matching-suitable for high-security environments such as military installations or banking vaults.
The testing results presented here were obtained on Dell laptops (Dell Technologies, Inc., Round Rock, TX, USA) with Core i7 and i9 processors, 16 GB and 32 GB RAM, and 1 TB SSDs, ensuring comprehensive validation of the protocol on high-performance platforms with robust computational and graphics capabilities. Initial assessments were also conducted on i3 and i5 processors, with outcomes comparable to those reported below. The facial frames used for testing comprised AI-generated images reflecting a wide spectrum of gender, age, and ethnic diversity, as illustrated in Figure 11. Additional test cases included images with variations in hair, facial orientation, and viewpoint, further detailed in Figure 12.
These results were generated through extensive testing: 100,000 authentication keys generated at a 10.5% error tolerance with 20 facial frames demonstrate that the proportion of uncertain bits per authentication event typically falls within the 0–10.5% range, with the highest likelihood concentrated between 0.5% and 3% uncertainty given a stable background and no other intervening factors, as depicted in Figure 13. These uncertainties primarily stem from challenging real-world conditions and increase with variations in lighting, facial pose, expression, or imaging quality.
To effectively manage these uncertainties, the protocol incorporates an exhaustive error correction process (Section 3.2.3) that reliably resolves errors, enabling robust ephemeral key recovery even under adverse circumstances. Notably, increasing the error tolerance threshold to 15.5% with the same number of frames allows the system to absorb error rates exceeding 7.5%. As the threshold is raised, the overall ability to tolerate bit errors increases, further enhancing both the reliability and user convenience of the system.
Figure 14 illustrates how average error rates vary with error tolerance for systems using 5, 10, and 20 frames. As the error tolerance increases from 7.5% to 37.5%, all systems show an increased capacity to handle errors, with those utilizing more frames (such as 20) consistently achieving better accuracy than those with fewer. This evaluation was performed without adding noise or applying gray code bit selection. For the 20-frame system, the error rate starts at roughly 0.8% with a 7.5% tolerance, rises to 3% at 10.5% tolerance, and approaches 8% at 15.5% tolerance. These error rates are measured up to the point where the system starts to exhibit increased false rejection or false acceptance at higher threshold values. These results indicate that lower tolerance settings demand greater precision, which increases authentication stringency.
The system demonstrates its most reliable expression-based authentication performance within the 7.5% to 12.5% tolerance range, making it well-suited for stable, controlled environments such as secure or military zones. In contrast, increasing error tolerance allows the system to remain effective in more variable and uncontrolled environments. Higher frame counts enhance enrollment and authentication robustness; 20 frames are ideal for high-security uses, while 10 frames strike a good balance for general applications. The data presented are based on AI-generated images incorporating diverse races and genders; however, some variation is observed when tested with real human subjects under natural conditions.
When the protocol is extended to MFA scenarios using SRAM PUF, testing over 300 enrollment cycles at a 12.5% error tolerance and 20 frames consistently yields an error rate between 0% and 2.9% per authentication event. In instances where minor uncertainties remain, the integrated error management routines resolve these efficiently, ensuring a seamless and reliable user experience.
Compared to conventional systems, which often struggle with high error rates under increased key lengths, landmark counts, or noisy deployment conditions, our protocol stands out by delivering high error resilience, dynamic adaptability, and consistent user satisfaction. This foundational reliability is a critical factor that both differentiates our approach and ensures its suitability for deployment in diverse real-world security scenarios.

4.3. FAR and FRR

False Acceptance Rate (FAR) measures the likelihood that an unauthorized user is incorrectly accepted as legitimate by the biometric system, while False Rejection Rate (FRR) measures the probability that a genuine user is incorrectly rejected. When evaluated using images generated and authenticated in controlled, consistent conditions, without any background, expression, lighting, or presentation variations, FAR and FRR offer a clear baseline of system accuracy under ideal circumstances.
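As a baseline, FAR and FRR are simple ratios over impostor and genuine attempts, which is how the rates in the following subsections are computed from accept/reject counts:

```python
def far_frr(false_accepts: int, impostor_attempts: int,
            false_rejects: int, genuine_attempts: int):
    """FAR = impostors wrongly accepted / total impostor attempts;
    FRR = genuine users wrongly rejected / total genuine attempts."""
    far = false_accepts / impostor_attempts
    frr = false_rejects / genuine_attempts
    return far, frr
```

Tightening the error threshold trades a lower FAR for a higher FRR, and vice versa; the equal-error operating point balances the two.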

4.3.1. 2D Biometric Analysis

Figure 15 shows the baseline FAR and FRR for different error threshold percentages under controlled, no-variation conditions. The FRR starts high at low error thresholds (e.g., 20.17% at 7.5%) and decreases sharply as the threshold increases, while FAR remains very low across all thresholds but rises slightly at higher tolerances. At strict thresholds, the system tends to be overly conservative, resulting in a higher rejection rate even for legitimate users. With increasing error tolerance, genuine users are more readily accepted since higher number of errors can be corrected using the error correction method, but the system also becomes more permissive, causing a gradual increase in false acceptances. Notably, at a 15% error threshold, both FAR and FRR converge near zero, indicating an ideal operating point where the system achieves high security and usability under invariant conditions. The intersection and trends of these rates reflect the classic trade-off in biometric security as tightening the threshold reduces the risk of false acceptance but increases inconvenience to legitimate users, whereas loosening it enhances user convenience but slightly elevates the risk of false acceptance.

4.3.2. Impact of Noise Injection

Figure 16 presents the relationship between FAR, FRR, and error tolerance for three different noise levels: 0.05, 0.15, and 0.25. Across all noise levels, as tolerance increases, FRR sharply decreases while FAR increases. At the highest noise level shown in the graph (0.25), the system still achieves near-zero error across a practical range of tolerances, with the minimum FAR and FRR observed between error tolerances of 0.25 and 0.395, i.e., 25% to 39.5%. The error rate (ER) varies with noise severity: for noise = 0.05, ER is approximately 29.3% at a tolerance of 0.280; for noise = 0.15, ER is about 50.2% at a tolerance of 0.300; and for noise = 0.25, ER drops to roughly 0.63% at a tolerance of 0.245. Notably, the performance envelope reveals that near-zero FAR can be reached at a tolerance of 0.395, and zero FRR at 0.340, with the best operating point near 0.355 tolerance, where the average error is about 0.05%. At higher noise (0.30 and 0.35), the trend diverges: at 0.30, the curves still admit a low-error operating band in the same tolerance region, whereas at 0.35 the FRR remains high across most tolerances, driving the ER up and leaving little practical room for operation. These findings confirm that by carefully adjusting the tolerance, the system can be tuned such that FAR approaches zero while FRR is simultaneously minimized, maintaining robust performance even under significant noise conditions.

4.3.3. Impact of Noise Injection and Selective Gray Code Bit Extraction

Figure 17 illustrates the impact of combining noise with the security feature of selectively removing bits in the gray coding method. The performance curves for all three noise levels (0.05, 0.15, 0.25) collapse near the origin, i.e., both FAR and FRR reach 0.00% at low tolerance values and remain at or near zero across a practical tolerance range. The ER for each noise condition is also essentially 0.00%, occurring at tolerances around 0.075 for noise 0.05, 0.175 for noise 0.15, and 0.250 for noise 0.25. Extending to stronger noise (0.30 and 0.35), the selective-bit strategy keeps FAR and FRR essentially at zero around low tolerances for 0.30, while at 0.35 the curves flatten with FAR near zero and FRR near saturation, so the ER is dominated by rejections. In essence, by integrating noise with selective gray code bit removal, the protocol achieves a broad region of negligible error, significantly enhancing both security and usability even in the presence of noise.
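A gray-code quantizer with low-order bit removal can be sketched as follows; the step size and bit counts here are illustrative, not the paper's parameters. Because adjacent quantization levels differ in exactly one gray bit, small noise on a landmark distance perturbs only low-order bits, which are the ones dropped.

```python
def to_gray(n: int) -> int:
    """Standard binary-reflected gray code of n."""
    return n ^ (n >> 1)

def quantize_gray_keep_msbs(distance: float, step: float = 4.0,
                            total_bits: int = 8, drop: int = 2) -> int:
    """Quantize a landmark distance, gray-code the level, and drop the
    `drop` least-significant gray bits, which absorb most of the noise
    (sketch with illustrative parameters)."""
    level = min(int(distance / step), (1 << total_bits) - 1)
    return to_gray(level) >> drop
```

For example, distances of 20.0 and 21.0 fall in the same quantization level and therefore yield identical retained bits, so this small perturbation does not flip any key bit.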

4.3.4. 3D Biometric Analysis

The verification performance of the proposed 3D biometric protocol was systematically evaluated through experiments on the FEI Face Database [38]. This dataset includes images of 200 participants, ages 19–40, captured in upright poses with yaw rotations up to 180° and ∼10% scale variation (Figure 18). The system's accuracy was assessed at two key error tolerance thresholds, 7.5% and 12.5%. At 7.5% tolerance, the observed FAR was 4.25% and the FRR was 6.74%. When the tolerance was increased to 12.5%, FAR dropped sharply to 0.46% while FRR decreased to 2.75%. This concurrent reduction occurs because subset gating compares only the most stable, informative bits and ignores uncertain regions, which keeps genuine users' patterns within a few bits of the enrolled reference while leaving impostor patterns much farther away. When the tolerance is relaxed slightly, many near-miss genuine attempts succeed, lowering FRR. Impostors do not gain the same advantage because the bits that could help them are masked and the comparison is anchored on stable features. This reduces accidental overlaps and lowers FAR at the same time. Accordingly, the 12.5% tolerance is selected as the operating point for the remainder of the study. It provides a balanced trade-off between security and usability under the multi-view protocol, reducing false decisions without conferring an advantage to impostors.

4.3.5. MFA with Biometry and SRAM PUF

When the 2D and 3D biometric protocols were applied with an SRAM token in an MFA setup, with data collected from 100 separate enrollment attempts, both the FAR and FRR dropped to nearly 0%, as shown in Figure 19. This performance can be attributed to the SRAM PUF, which prior research [31] has shown to inherently produce very low FAR and FRR values. Our experiments with the same type of SRAM chip and biometric data confirmed these findings, as any observed errors primarily originated from the biometric component, not the SRAM. As a result, the system configuration described here represents an ideal case for authentication, demonstrating that combining strong biometric methods with robust hardware tokens like SRAM can yield extremely secure and reliable user verification when tested under both controlled and uncontrolled conditions.

4.4. Latency

Latency is the time delay between starting and completing a cryptographic operation. Minimizing latency is essential for real-time decision making and secure, efficient data handling, allowing for swift responses to security threats while maintaining data integrity.
When utilizing facial biometric data for key generation, the latency is substantially influenced by liveliness detection. With liveliness enabled, the average enrollment time is approximately 0.444 s, ranging from 0.403 to 0.837 s, while the mean authentication time is about 0.551 s, varying between 0.154 and 5.627 s. In contrast, when liveliness checks are disabled, enrollment latency slightly decreases to a mean of 0.405 s (min 0.370, max 0.686), and authentication latency is significantly reduced to a mean of 0.040 s (min 0.033, max 0.234).
With MFA, combining facial characteristics with liveliness factor and SRAM PUF for the authentication process, key generation requires approximately 0.494 s, while key recovery takes around 0.951 s.
In the current 3D biometric pipeline, end-to-end enrollment using 60 frames is completed within 10 to 12 s. During key recovery, the latency depends on the number of ambiguous landmark positions: when uncertainty is low (≤3–4), the process requires about 1.5 s; with higher ambiguity (>4), recovery extends to 5–8 s. These timings reflect the initial implementation of 3D biometric key generation. In practical scenarios where only 10 frames are processed, total latency decreases considerably, enabling faster user experiences. The protocol continues to undergo active optimization, with efforts focused on reducing both enrollment and key recovery times for real-world applications.
These fast authentication times easily meet recommended biometric latency standards for practical systems (typically at or below 1.5 s for user-facing interactions). Such rapid biometric authentication enables seamless integration with technologies used in banking, healthcare, law enforcement, and travel, improving both security and operational efficiency by reducing wait times and streamlining identity verification [39].
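Latency figures like those reported above can be gathered with a small timing harness. This is a generic sketch, not the authors' instrumentation; the timed lambda stands in for a real pipeline stage such as landmark extraction.

```python
import statistics
import time

def measure_latency(operation, runs=10):
    """Time repeated runs of a pipeline stage and summarize in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()   # monotonic, high-resolution clock
        operation()
        samples.append(time.perf_counter() - start)
    return {"mean": statistics.mean(samples),
            "min": min(samples),
            "max": max(samples)}

# Hypothetical stand-in workload for one stage of the pipeline.
stats = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(f"mean={stats['mean']*1e3:.2f} ms  "
      f"min={stats['min']*1e3:.2f} ms  max={stats['max']*1e3:.2f} ms")
```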

5. Real-Time Use-Cases

Both 2D and 3D biometric recognition present unique strengths that enable deployment across a wide array of real-world applications. 2D facial recognition is secure enough for most use cases, offering a balance between accuracy, speed, and efficiency. It remains advantageous where low computational overhead and a seamless user experience are paramount; examples include mobile device authentication, workplace access control, and secure payment systems, particularly in settings with controllable lighting and posture. However, for applications that demand higher assurance levels and can accommodate slightly greater computational complexity, 3D recognition offers a stronger layer of protection. 3D biometric recognition exhibits superior robustness to variations in angle, lighting, and user expression. This makes it uniquely suited for high-security and mission-critical applications, such as automated border control gates, immigration and customs screening, law enforcement identity verification, and aerospace facility access, where resistance to sophisticated spoofing (including deepfakes) and high-accuracy matching are indispensable.
Emerging use cases further leverage the strengths of 3D modalities, such as expression-based authentication, where dynamic facial movements add a temporal dimension to security and enhance resilience against static spoofing attempts. The incorporation of SRAM PUF-based MFA introduces an additional possession factor, binding cryptographically unique hardware signatures to each authentication event, thereby strengthening defenses in distributed or high-risk infrastructures.
In healthcare, advanced 3D biometrics facilitate secure patient verification, seamless electronic health record access, and prevention of identity fraud. In finance, they underpin reliable onboarding, high-value transaction authorization, and ATM access, in each case supported by the added security of SRAM-based MFA for regulatory compliance.
The proposed protocol is primarily positioned at the authentication and access-control layer of smart and distributed infrastructures, rather than in real-time sensing or actuation loops. With current latency measurements in the millisecond range, it is well suited to device identity verification that occurs prior to, or in parallel with, industrial control tasks. The protocol's deterministic search mechanism and lightweight computation model support integration into IoT gateways, edge devices, and mobile endpoints where secure initialization, enrollment, and periodic re-authentication are required.
An example is to enable secure and privacy-preserving identity verification across both consumer and industrial IoT environments. In a smart workspace or automated home, a user interacts with a secured terminal or appliance by presenting their face for biometric capture. The device processes the facial data locally and performs authentication through an SRAM PUF token, ensuring only legitimate hardware completes the access sequence. Once verified, the system grants access or triggers authorized functions without storing biometric templates or cryptographic keys. In industrial IoT deployments, each control panel or networked sensor leverages its unique SRAM PUF signature to validate device and user legitimacy during authentication. This approach prevents unauthorized access, device cloning, and remote spoofing, supporting secure automation and robust identity management throughout distributed infrastructures.

6. Discussion

The results of this study substantiate the working hypothesis that a post-quantum, templateless biometric authentication protocol enriched with 2D/3D facial recognition, liveness detection, and SRAM PUF-based MFA can deliver high-assurance, privacy-preserving identity verification without the persistent storage of sensitive biometric or cryptographic data. Our approach offers several advances when placed in context with previous studies.
First, the delineation of stable facial landmarks and multi-frame enrollment, coupled with gray-code encoding and double masking, provides a marked reduction in both FAR and FRR relative to legacy template-based biometric systems [40,41,42]. The joint use of gray coding, bit slicing, and tailored noise injection further expands tolerance to environmental variation, enabling robust operation across the diverse lighting, pose, and expression conditions that previously challenged usability and accuracy.
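The contribution of gray coding to this tolerance is that adjacent quantized values differ in exactly one bit, so a small shift in a measured landmark distance corrupts at most one bit of its encoding. A minimal illustration of the binary-reflected Gray code:

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: consecutive integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray encoding by folding the prefix XOR back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# A one-step change in a quantized landmark distance flips exactly one bit.
flips = [bin(to_gray(k) ^ to_gray(k + 1)).count("1") for k in range(256)]
```

Under plain binary encoding, a one-step change can flip many bits at once (e.g., 7 to 8 flips four), which is precisely the noise amplification the gray coding avoids.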
Second, the use of a cryptographically secure, ephemeral key, created via challenge–response subset protocols and deleted immediately after verification, addresses the longstanding template-inversion, spoofing, reverse-engineering, and data-breach vulnerabilities that have hindered broad biometric deployment. This innovation aligns with recent calls in the field for storage-free, zero-knowledge-friendly biometric authentication suitable for high-security landscapes and emerging quantum threats.
Third, experimental evaluations confirm the protocol’s exceptional scalability. Both theoretical entropy calculations and empirical benchmarks demonstrate that by increasing the number of facial landmarks (e.g., from 68 to 468) and leveraging high-entropy grid/landmark pairs, the protocol supports quantum-resistant key lengths (256, 512, or 1024 bits and beyond) without exhaustion of the keyspace. Integration of SRAM PUF as a possession factor within the MFA framework provides a highly reliable, tamper-evident hardware anchor, with observed error rates dominated by the biometric rather than the PUF component.
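The keyspace scaling can be checked with a back-of-envelope count of unordered landmark pairs. The per-pair bit figure below is an illustrative upper bound (uniform, independent pair selection), not the paper's exact entropy model:

```python
from math import ceil, comb, log2

def pairs_needed(num_landmarks: int, key_bits: int) -> int:
    """Unordered landmark pairs needed to reach key_bits, assuming each
    pair contributes at most log2(C(n, 2)) bits of entropy."""
    per_pair = log2(comb(num_landmarks, 2))
    return ceil(key_bits / per_pair)

# 68 landmarks give 2278 candidate pairs; 468 landmarks give 109278.
print(comb(68, 2), comb(468, 2))
print(pairs_needed(68, 256), pairs_needed(468, 256))
```

Moving from 68 to 468 landmarks raises the per-pair bound from roughly 11 to nearly 17 bits, so noticeably fewer pairs suffice for a 256-bit key, and 512- or 1024-bit targets remain far from exhausting the pair space.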
Furthermore, analysis across different error tolerances establishes that the protocol can be flexibly tuned: lower tolerances are ideal for controlled or secure settings, while higher tolerances enhance usability in unconstrained environments. This flexibility was confirmed by both simulated and human-subject studies, with performance largely preserved as operational settings shift.
Compared to alternative MFA and biometric schemes documented in recent literature [6,15,16,17], our method exhibits superior error resilience, heightened resistance to spoofing (including deepfake and template-inversion attacks), and eliminates the risk of static template compromise. The zero-knowledge, ephemeral keying strategy, combined with privacy-preserving enrollment and authentication, marks a significant step forward toward truly secure and user-centric identity management. The comparative analysis presented in Table 1 further contextualizes the contributions of this work relative to recent state-of-the-art facial biometric authentication protocols.
More broadly, these findings suggest potential for direct application in domains such as secure device login, financial authentication, border control, and healthcare verification. The flexible parameterization further allows future adaptation to new biometric modalities and object-recognition challenges, thus broadening the protocol’s real-world applicability.

7. Conclusions and Future Work

This study introduced a templateless biometric authentication protocol that integrates securely with multi-factor authentication through SRAM PUF tokens. By forgoing the persistent storage of biometric templates and cryptographic keys, the proposed system effectively mitigates major vulnerabilities such as template inversion, impersonation, and large-scale database compromise. Unlike traditional methods that rely on helper data or stored redundancy, the protocol employs a deterministic search algorithm rather than conventional error correction codes. This approach eliminates the need for additional stored data that could compromise security while maintaining accurate and consistent authentication outcomes. Experimental results demonstrate that the combined use of landmark stability analysis, gray-code encoding, noise injection, and the deterministic search mechanism achieves exceptionally low false acceptance and rejection rates, with low latency and high operational efficiency. These properties make the system highly suitable for deployment across smart devices, IoT infrastructures, and distributed authentication environments.
Future work will focus on reducing system latency and data storage requirements to enhance real-world adaptability. In addition, rigorous testing across diverse human subjects and varying environmental conditions will be conducted to further validate robustness, scalability, and cross-domain performance. Furthermore, our research will extend the subset-response framework beyond human biometrics to object identification technologies. Such an advancement could enable the detection and classification of targets from aerial and satellite platforms, contributing to environmental monitoring, national security, and disaster response operations.

8. Patents

  • Cambou BF, Herlihy M, Tamassia R, Toussaint K, KRISHNA K, inventors; Northern Arizona University, Brown University, assignee. Protocols for protecting digital files. United States patent application US 19/064,331. 28 August 2025.
  • Cambou BF, Garrett ML, Partridge M, Ghanaimiandoab D, inventors; Northern Arizona University, assignee. Protocols with noisy response-based cryptographic subkeys. United States patent application US 18/885,226. 22 May 2025.

Author Contributions

Conceptualization, B.C. and S.J.; methodology, B.C. and S.J.; software, S.J., A.B., M.C. and D.G.M.; validation, S.J., A.B., M.C. and D.G.M.; formal analysis, S.J., A.B. and M.C.; investigation, S.J., A.B. and M.C.; resources, S.J., A.B., M.C. and D.G.M.; data curation, S.J., A.B. and M.C.; writing—original draft preparation, S.J.; writing—review and editing, S.J., A.B., M.C. and D.G.M.; visualization, S.J., A.B., M.C. and D.G.M.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by Castle Shield Holdings, LLC.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

This research was carried out in collaboration with Northern Arizona University and Castle Shield Holdings, LLC. We would like to thank Jeffrey Roney of Castle Shield Holdings, LLC for funding this work. We gratefully acknowledge Logan Garrett and Mahafujul Alam for their critical support in protocol understanding and for providing the AI-generated datasets employed in experimentation. We also extend special thanks to Ian Burke for assistance with the SRAM hardware token. The AI dataset used for biometric protocol testing was sourced from Generated Photos (https://generated.photos/; accessed on 5 January 2025). In addition, the FEI Face Database (https://fei.edu.br/~cet/facedatabase.html; accessed on 10 June 2025) was employed as a benchmark dataset for 3D biometric analysis.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CRP: Challenge-Response Pair
ECC: Error Correction Code
FAR: False Acceptance Rate
FRR: False Rejection Rate
GANs: Generative Adversarial Networks
MFA: Multi-Factor Authentication
OTP: One-Time Password
PINs: Personal Identification Numbers
PQC: Post-Quantum Cryptography
PUF: Physically Unclonable Function
RBC: Response-Based Cryptography
SRAM: Static Random-Access Memory
TI: Template Inversion
IoT: Internet of Things
RN: Random Number
PWD: Password
ZKP: Zero-Knowledge Proof
PRNG: Pseudo-Random Number Generator

References

  1. Chen, S.; Pande, A.; Mohapatra, P. Sensor-assisted facial recognition: An enhanced biometric authentication system for smartphones. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, Bretton Woods, NH, USA, 16–19 June 2014; pp. 109–122. [Google Scholar]
  2. Dabbah, M.; Woo, W.; Dlay, S. Secure authentication for face recognition. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Image and Signal Processing, Honolulu, HI, USA, 1–5 April 2007; pp. 121–126. [Google Scholar]
  3. Al-Assam, H.; Sellahewa, H.; Jassim, S. On security of multi-factor biometric authentication. In Proceedings of the 2010 International Conference for Internet Technology and Secured Transactions, London, UK, 8–11 November 2010; pp. 1–6. [Google Scholar]
  4. Banerjee, I.; Mookherjee, S.; Saha, S.; Ganguli, S.; Kundu, S.; Chakravarti, D. Advanced atm system using iris scanner. In Proceedings of the 2019 International Conference on Opto-Electronics and Applied Optics (Optronix), Kolkata, India, 18–20 March 2019; pp. 1–3. [Google Scholar]
  5. Nath, D.; Ray, S.; Ghosh, S.K. Fingerprint recognition system: Design & analysis. In Proceedings of the International Conference on Scientific Paradigm Shift in Information Technology & Management, SPSITM, Kolkata, India, 9–12 October 2011. [Google Scholar]
  6. Ometov, A.; Bezzateev, S.; Mäkitalo, N.; Andreev, S.; Mikkonen, T.; Koucheryavy, Y. Multi-factor authentication: A survey. Cryptography 2018, 2, 1. [Google Scholar] [CrossRef]
  7. Elmahmudi, A.; Ugail, H. A framework for facial age progression and regression using exemplar face templates. Vis. Comput. 2021, 37, 2023–2038. [Google Scholar] [CrossRef]
  8. Insan, I.M.; Sukarno, P.; Yasirandi, R. Multi-factor authentication using a smart card and fingerprint (case study: Parking gate). Indones. J. Comput. Indo-JC 2019, 4, 55–66. [Google Scholar]
  9. Karimian, N.; Guo, Z.; Tehranipoor, F.; Woodard, D.; Tehranipoor, M.; Forte, D. Secure and reliable biometric access control for resource-constrained systems and IoT. arXiv 2018, arXiv:1803.09710. [Google Scholar] [CrossRef]
  10. Patel, V.M.; Ratha, N.K.; Chellappa, R. Cancelable biometrics: A review. IEEE Signal Process. Mag. 2015, 32, 54–65. [Google Scholar] [CrossRef]
  11. Zhang, H.; Bian, W.; Jie, B.; Xu, D.; Zhao, J. A complete user authentication and key agreement scheme using cancelable biometrics and PUF in multi-server environment. IEEE Trans. Inf. Forensics Secur. 2021, 16, 5413–5428. [Google Scholar] [CrossRef]
  12. Mansour, A.; Sadik, M.; Sabir, E. Multi-factor authentication based on multimodal biometrics (MFA-MB) for Cloud Computing. In Proceedings of the 2015 IEEE/ACS 12th International Conference of Computer Systems and Applications (AICCSA), Marrakech, Morocco, 17–20 November 2015; pp. 1–4. [Google Scholar]
  13. Lipps, C.; Herbst, J.; Schotten, H.D. How to Dance Your Passwords: A Biometric MFA-Scheme for Identification and Authentication of Individuals in IIoT Environments. In Proceedings of the ICCWS 2021 16th International Conference on Cyber Warfare and Security, Cookeville, TN, USA, 25–26 February 2021; p. 168. [Google Scholar]
  14. Pahuja, S.; Goel, N. Multimodal biometric authentication: A review. AI Commun. 2024, 37, 525–547. [Google Scholar] [CrossRef]
  15. Pramana, M.D.; Lestyea, A.; Amiruddin, A. Development of a Secure Access Control System Based on Two-Factor Authentication Using Face Recognition and OTP SMS-Token. In Proceedings of the 2020 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS), Jakarta, Indonesia, 19–20 November 2020; pp. 52–57. [Google Scholar]
  16. Ibrokhimov, S.; Hui, K.L.; Al-Absi, A.A.; Sain, M. Multi-factor authentication in cyber physical system: A state of art survey. In Proceedings of the 2019 21st international conference on advanced communication technology (ICACT), PyeongChang, Republic of Korea, 17–20 February 2019; pp. 279–284. [Google Scholar]
  17. Shahreza, H.O.; Marcel, S. Template inversion attack using synthetic face images against real face recognition systems. IEEE Trans. Biom. Behav. Identity Sci. 2024, 6, 374–384. [Google Scholar] [CrossRef]
  18. Cambou, B.; Philabaum, C.; Hoffstein, J.; Herlihy, M. Methods to encrypt and authenticate digital files in distributed networks and zero-trust environments. Axioms 2023, 12, 531. [Google Scholar] [CrossRef]
  19. Dkhil, M.B.; Wali, A.; Alimi, A.M. Towards a new system for drowsiness detection based on eye blinking and head posture estimation. arXiv 2018, arXiv:1806.00360. [Google Scholar] [CrossRef]
  20. Ghanai Miandoab, D.; Garrett, M.L.; Alam, M.; Jain, S.; Assiri, S.; Cambou, B. Secure Cryptographic Key Encapsulation and Recovery Scheme in Noisy Network Conditions. Appl. Sci. 2025, 15, 2732. [Google Scholar] [CrossRef]
  21. Jabberi, M.; Wali, A.; Chaudhuri, B.B.; Alimi, A.M. 68 landmarks are efficient for 3D face alignment: What about more? 3D face alignment method applied to face recognition. Multimed. Tools Appl. 2023, 82, 41435–41469. [Google Scholar] [CrossRef] [PubMed]
  22. Abhishek, K.; Roshan, S.; Kumar, P.; Ranjan, R. A comprehensive study on multifactor authentication schemes. In Proceedings of the Advances in Computing and Information Technology: Proceedings of the Second International Conference on Advances in Computing and Information Technology (ACITY), Chennai, India, 13–15 July 2012; Springer: Berlin/Heidelberg, Germany, 2013; Volume 2, pp. 561–568. [Google Scholar]
  23. Suleski, T.; Ahmed, M.; Yang, W.; Wang, E. A review of multi-factor authentication in the Internet of Healthcare Things. Digit. Health 2023, 9, 20552076231177144. [Google Scholar] [CrossRef] [PubMed]
  24. Otta, S.P.; Panda, S.; Gupta, M.; Hota, C. A systematic survey of multi-factor authentication for cloud infrastructure. Future Internet 2023, 15, 146. [Google Scholar] [CrossRef]
  25. Carrillo-Torres, D.; Pérez-Díaz, J.A.; Cantoral-Ceballos, J.A.; Vargas-Rosales, C. A novel multi-factor authentication algorithm based on image recognition and user established relations. Appl. Sci. 2023, 13, 1374. [Google Scholar] [CrossRef]
  26. Ometov, A.; Petrov, V.; Bezzateev, S.; Andreev, S.; Koucheryavy, Y.; Gerla, M. Challenges of multi-factor authentication for securing advanced IoT applications. IEEE Netw. 2019, 33, 82–88. [Google Scholar] [CrossRef]
  27. Reno, J. Multifactor authentication: Its time has come. Technol. Innov. Manag. Rev. 2013, 3, 51–58. [Google Scholar] [CrossRef]
  28. Jain, S.; Korenda, A.R.; Cambou, B.; Lucero, C. Secure Content Protection Schemes for Industrial IoT with SRAM PUF-Based One-Time Use Cryptographic Keys. In Proceedings of the Science and Information Conference, London, UK, 2–4 December 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 478–498. [Google Scholar]
  29. Urien, P. Revisiting Multi-Factor Authentication Token Cybersecurity: A TLS Identity Module Use Case. In Proceedings of the 2024 International Conference on Computing, Networking and Communications (ICNC), Big Island, HI, USA, 19–22 February 2024; pp. 33–38. [Google Scholar]
  30. Gupta, C.; Varshney, G. Securing Web Access: PUF-Driven Two-Factor Authentication for Enhanced Protection. In Proceedings of the International Conference on Computer Safety, Reliability, and Security, Florence, Italy, 17–20 September 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 74–87. [Google Scholar]
  31. Jain, S.; Korenda, A.R.; Bagri, A.; Cambou, B.; Lucero, C.D. Strengthening industrial IoT security with integrated puf token. In Proceedings of the Future Technologies Conference, London, UK, 14–15 November 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 99–123. [Google Scholar]
  32. Jain, S.; Korenda, A.R.; Cambou, B. A Novel Approach to Optimize Response-Based Cryptography for Secure. In Proceedings of the Future Technologies Conference (FTC) 2024, Volume 4; Springer Nature: Berlin/Heidelberg, Germany, 2024; Volume 1157, p. 226. [Google Scholar]
  33. Jain, S. Secure and Reliable Zero-Knowledge Proof Cryptographic Systems for Real-World Applications. Ph.D. Thesis, Northern Arizona University, Flagstaff, AZ, USA, 2024. [Google Scholar]
  34. Avanzi, R.; Bos, J.; Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schanck, J.M.; Schwabe, P.; Seiler, G.; Stehlé, D.; et al. CRYSTALS-Kyber algorithm specifications and supporting documentation. NIST PQC Round 2019, 2, 1–43. [Google Scholar]
  35. Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schwabe, P.; Seiler, G.; Stehlé, D. Crystals-dilithium: A lattice-based digital signature scheme. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2018, 2018, 238–268. [Google Scholar] [CrossRef]
  36. Datcu, O.; Macovei, C.; Hobincu, R. Chaos based cryptographic pseudo-random number generator template with dynamic state change. Appl. Sci. 2020, 10, 451. [Google Scholar] [CrossRef]
  37. Zhang, K.; Yang, M.; Yuan, Z.; Zhang, Y.; Liu, W. Optimized Quantum-Resistant Cryptosystem: Integrating Kyber-KEM with Hardware TRNG on Zynq Platform. Electronics 2025, 14, 2591. [Google Scholar] [CrossRef]
  38. Thomaz, C.E. FEI face database. FEI Face Database Available 2012, 11, 46–57. [Google Scholar]
  39. Ganmati, A.; Afdel, K.; Koutti, L. Deep Learning-Based Multi-Factor Authentication: A Survey of Biometric and Smart Card Integration Approaches. arXiv 2025, arXiv:2510.05163. [Google Scholar]
  40. Ballard, L.; Kamara, S.; Reiter, M.K. The Practical Subtleties of Biometric Key Generation. In Proceedings of the USENIX Security Symposium, San Jose, CA, USA, 28 July–1 August 2008; pp. 61–74. [Google Scholar]
  41. Wati, V.; Kusrini, K.; Al Fatta, H.; Kapoor, N. Security of facial biometric authentication for attendance system. Multimed. Tools Appl. 2021, 80, 23625–23646. [Google Scholar] [CrossRef]
  42. Chen, B.; Chandran, V. Biometric based cryptographic key generation from faces. In Proceedings of the 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA 2007), Glenelg, SA, Australia, 3–5 December 2007; pp. 394–401. [Google Scholar]
  43. Rathgeb, C.; Merkle, J.; Scholz, J.; Tams, B.; Nesterowicz, V. Deep face fuzzy vault: Implementation and performance. Comput. Secur. 2022, 113, 102539. [Google Scholar] [CrossRef]
  44. Sardar, A.; Umer, S.; Rout, R.K.; Sahoo, K.S.; Gandomi, A.H. Enhanced biometric template protection schemes for securing face recognition in IoT environment. IEEE Internet Things J. 2024, 11, 23196–23206. [Google Scholar] [CrossRef]
  45. Boddeti, V.N. Secure face matching using fully homomorphic encryption. In Proceedings of the 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), Redondo Beach, CA, USA, 22–25 October 2018; pp. 1–10. [Google Scholar]
Figure 1. Left image is the 68-landmark variations of 30 frames and right side of the image shows the marking of 10 landmarks with most variation.
Figure 2. Initial enrollment process to generate subset responses for an ephemeral key which is used for encryption of sensitive data.
Figure 3. The key recovery process using subset of responses.
Figure 4. These figures illustrate the distinct scenarios encountered during the error correction process: (a) Case 1: a single valid match is identified; (b) Case 2: no valid match is found; and (c) Case 3: multiple valid matches are detected.
Figure 5. Enrollment of face from every angle.
Figure 6. Initial enrollment of face from every angle and generation of subset responses.
Figure 7. Authentication of face from an angle and reconstruction of ephemeral key.
Figure 8. Castle Shield SRAM Token.
Figure 9. Initiating enrollment and key generation using MFA with SRAM PUF and biometric data.
Figure 10. Authentication process using MFA with SRAM PUF and biometric data.
Figure 11. Some sample AI-generated images representing diverse genders, age groups, and ethnic backgrounds used for the testing. Source of the data: Generated Photos, accessed on 5 January 2025 (https://generated.photos/).
Figure 12. Images displaying 4 of the 14 AI generated images to highlight the variability in the image used for the testing. Source of the data: Generated Photos, accessed on 5 January 2025 (https://generated.photos/).
Figure 13. Probability of occurrences of uncertain positions in the key when authentication is tried for the first time after enrollment.
Figure 14. The graph shows that increasing the error-tolerance parameter expands the error rate that can be detected and corrected during key recovery.
Figure 15. False Acceptance and False Rejection Rates using the subset protocol with stable images and without any variation.
Figure 16. Per-noise FAR (solid) and FRR (dashed) vs. tolerance for noise levels 0.05, 0.15, and 0.25.
Figure 17. FAR (solid) and FRR (dashed) vs. tolerance for three noise levels with selective bits from gray coding.
Figure 18. Example face images from the FEI Face Database used in the 3D biometric verification experiments. Dataset URL (accessed on 10 June 2025): https://fei.edu.br/~cet/facedatabase.html.
Figure 19. False Acceptance and False Rejection Rates using the SRAM PUF with biometric subset protocol.
Table 1. Comparative analysis of recent facial biometric authentication protocols.

| Feature/Metric | This Work | Rathgeb et al. [43] | Sardar et al. [44] | Boddeti [45] |
| --- | --- | --- | --- | --- |
| Protocol/system | Multi-view 2D and 3D templateless; expression; SRAM PUF; liveness; noise injection | Face (deep features/ArcFace) | Face (feature fusion, FaceHashing, sliding-XOR) | Face (homomorphic enc., IoT-ready) |
| Expression-aware | Yes | No | No | No |
| ZKP | Yes | No | No | No |
| MFA | Yes | No | No | No |
| Data stored | No biometric or sensitive info stored | Protected template (fuzzy vault) | Cancelable templates/bio-crypto | Encrypted or compressed template |
| PQC ready | Yes | Not claimed | Not claimed | Yes |
| FAR | 2D: 0.05; 3D: 0.46; MFA: 0 | 0.01 | 0.14–0.27 | Not reported |
| FRR | 2D: 0.001; 3D: 2.75; MFA: 0 | <1 | 0.12–0.34 | Not reported |
| Smart device ready | Yes | Not emphasized | Yes | Yes |
| Flexible key length | Yes | Not emphasized | Not emphasized | Not emphasized |
| Adaptability | Yes (variables can be tuned to different use cases) | No | No | No |
| Data storage | 2D ≈ 14 KB; 3D (60 frames) ≈ 100 KB (reduced with fewer frames) | Not mentioned | Not mentioned | 2D ≈ 16 KB |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jain, S.; Bagri, A.; Cambou, M.; Ghanai Miandoab, D.; Cambou, B. Enhancing Multi-Factor Authentication with Templateless 2D/3D Biometrics and PUF Integration for Securing Smart Devices. Cryptography 2025, 9, 68. https://doi.org/10.3390/cryptography9040068
