Combining Multiple Biometric Traits Using Asymmetric Aggregation Operators for Improved Person Recognition

Biometrics is a scientific technology for recognizing a person using their physical, behavioral, or chemical attributes. Biometrics is now widely used in daily applications ranging from smart-device user authentication to border crossing. A system that uses a single source of biometric information (e.g., a single fingerprint) to recognize people is known as a unimodal or unibiometric system, whereas a system that consolidates data from multiple biometric sources (e.g., face and fingerprint) is called a multimodal or multibiometric system. Multibiometric systems can reduce the error rates and alleviate some inherent weaknesses of unibiometric systems. Therefore, in this study, we present a novel score-level fusion scheme for multibiometric user recognition based on Asymmetric Aggregation Operators (Asym-AOs). In particular, Asym-AOs are estimated via the generator functions of triangular norms (t-norms). An extensive set of experiments using seven publicly available benchmark databases, namely, the National Institute of Standards and Technology (NIST)-Face, NIST-Multimodal, IIT Delhi Palmprint V1, IIT Delhi Ear, Hong Kong PolyU Contactless Hand Dorsal Images, Mobile Biometry (MOBIO) face, and Visible light mobile Ocular Biometric (VISOB) iPhone Day Light Ocular Mobile databases, is reported to show the efficacy of the proposed scheme. The experimental results demonstrate that Asym-AO-based score fusion not only increases authentication rates compared to existing score-level fusion methods (e.g., min, max, t-norms, symmetric sum) but is also computationally fast.


Introduction
Traditional authentication methods based on passwords and identity cards still face several challenges: passwords can be forgotten and identity cards can be faked [1,2]. A widespread technology known as biometrics has been employed as an alternative to these conventional recognition

Transformation-Based Score Fusion
In this category, match-scores are first converted into a common domain using a normalization approach such as min-max, z-score, or tanh [10]; a simple rule is then applied to combine the normalized scores, e.g., the min, max, sum, or product rules, or t-norms. For instance, the authors in [5] proposed 2D and 3D palmprint biometric person recognition based on a bank of binarized statistical image features (B-BSIF) and the self-quotient image (SQI) scheme; the scores extracted from the 2D and 3D palmprints were normalized by min-max and combined using the min, max, and weighted-sum rules. The authors in [9] investigated finger-based multimodal user authentication, combining finger vein, fingerprint, finger shape, and finger knuckle print using triangular norms (t-norms), while the authors in [11] utilized t-norms to integrate the matching scores from multiple biometric traits (i.e., hand geometry, hand veins, and palmprint). Cheniti et al. [12] combined match-scores utilizing symmetric sums (S-sums) generated via t-norms; this approach rendered good performance on the National Institute of Standards and Technology (NIST)-Multimodal and NIST-Fingerprint databases.
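The normalize-then-combine pipeline described above can be sketched as follows; the raw score values and the two-matcher setup are illustrative assumptions, not data from the cited works.

```python
# Sketch of transformation-based score fusion: normalize match-scores into a
# common [0, 1] domain, then combine them with a simple fixed rule.

def min_max_normalize(scores):
    """Min-max normalization of a list of raw match-scores."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(s1, s2, rule="sum"):
    """Combine two normalized scores with a simple transformation-based rule."""
    if rule == "min":
        return min(s1, s2)
    if rule == "max":
        return max(s1, s2)
    if rule == "sum":
        return (s1 + s2) / 2.0           # normalized sum rule
    if rule == "product":
        return s1 * s2
    if rule == "hamacher":               # Hamacher t-norm T(a,b) = ab/(a+b-ab)
        return 0.0 if s1 == s2 == 0 else (s1 * s2) / (s1 + s2 - s1 * s2)
    raise ValueError(rule)

# Example: raw scores from two matchers (values are assumptions)
face_scores = min_max_normalize([310, 540, 455, 620])
finger_scores = min_max_normalize([0.12, 0.80, 0.55, 0.91])
fused = [fuse(a, b, "sum") for a, b in zip(face_scores, finger_scores)]
```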

Classifier-Based Score Fusion
In this category, the discrimination of genuine and imposter users is treated as a binary classification problem: the scores from multiple matchers are concatenated to form a feature vector, which is then classified into one of two classes, genuine or impostor. Numerous classifiers have been utilized to this aim. For example, Kang et al. [13] applied a support vector machine (SVM) to combine finger vein and finger geometry, the authors in [14] combined match-scores using a hidden Markov model (HMM), and Kang et al. [15] studied the fusion of fingerprint, finger vein, and finger shape using an SVM classifier.
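A minimal sketch of the classifier-based approach, using a tiny hand-rolled logistic regression in place of the SVM/HMM classifiers cited above; the score pairs below are synthetic assumptions.

```python
# Classifier-based score fusion sketch: treat the pair of matcher scores as a
# 2-D feature vector and learn a genuine-vs-impostor decision boundary.
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights w and bias b on 2-D score vectors X, labels y (1=genuine)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), t in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            g = p - t                    # gradient of the log-loss
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g
    return w, b

def classify(w, b, x):
    """Return 1 (genuine) if the decision function is positive, else 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy normalized score pairs: genuine attempts score high on both matchers.
X = [(0.9, 0.8), (0.85, 0.9), (0.8, 0.75), (0.2, 0.3), (0.1, 0.25), (0.3, 0.2)]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
```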

Density-Based Score Fusion
In this category, the discrimination of genuine and imposter users is based on explicit estimation of the genuine and impostor match-score densities, which are then used to compute a likelihood ratio that produces the final output. Nandakumar et al. [16] utilized the likelihood ratio test to combine match-scores from multiple biometric traits, where the densities were estimated using a mixture of Gaussians, while the authors in [17] investigated a Gaussian mixture model to model the genuine and impostor score densities during the fusion step, attaining better performance than SVM and sum-rule based score integration methods on face, hand geometry, finger texture, and palmprint datasets.
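A minimal sketch of density-based fusion via the likelihood ratio, assuming a single Gaussian per class per matcher rather than the Gaussian mixtures used in [16,17]; all score values are synthetic assumptions.

```python
# Density-based fusion sketch: fit per-class score densities, then fuse by
# multiplying the per-matcher likelihood ratios (independence assumption).
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(samples):
    """Maximum-likelihood mean and standard deviation of a score sample."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, math.sqrt(var)

def likelihood_ratio(score, genuine_params, impostor_params):
    """LR > 1 means the score is more likely under the genuine density."""
    return gaussian_pdf(score, *genuine_params) / gaussian_pdf(score, *impostor_params)

# Toy training scores for one matcher
gen = fit_gaussian([0.82, 0.9, 0.88, 0.79, 0.85])
imp = fit_gaussian([0.2, 0.3, 0.25, 0.15, 0.35])

# Fused statistic for two matchers = product of their likelihood ratios
lr_fused = likelihood_ratio(0.87, gen, imp) * likelihood_ratio(0.83, gen, imp)
decision = "genuine" if lr_fused > 1.0 else "impostor"
```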

Disadvantages of Previously Proposed Methods
To sum up, transformation-based rules such as the product, min, max, and weighted-sum rules have been observed in the literature to perform weakly, since they fail to take into account the differences between the match-score distributions of different biometric traits. Classifier-based fusion methods face the problem of unbalanced training sets, since far more imposter than genuine training scores are typically available. Likewise, although density-based approaches can lead to optimal performance, it is hard to estimate the score density function accurately, because its form is usually unknown and only limited data are available to estimate it.
In order to alleviate some of the disadvantages of prior fusion algorithms as well as to increase the overall accuracy performance of multibiometric user authentication, in this work, we present a novel score level fusion method using Asymmetric Aggregation Operators (Asym-AOs), which are built via the generator functions of t-norms.

Proposed Asym-AOs Score Level Fusion Technique
In this section, we describe a novel strategy for score level fusion which utilizes the generating function of t-norms. The preliminaries of Asymmetric Aggregation Operators are first discussed, then the Asym-AOs based multibiometric score fusion method is outlined.

Overview of Asymmetric Aggregation Operators
Asymmetric aggregation operators (Asym-AOs) were introduced by Mika et al. [18,19]. An Asym-AO is a binary function A : [0, 1]^2 → [0, 1] satisfying properties of asymmetry, boundary conditions, and monotonicity. The general form of an Asym-AO is

A(s1, s2) = f^[−1](φ(s1) · f(s2)),

where f(s1) is a generator function of a t-norm, i.e., a continuous monotone decreasing function with f(1) = 0; φ(s1) is also a continuous monotone decreasing function with φ(1) = 1; and f^[−1] is the pseudo-inverse of f, which coincides with the usual inverse f^−1 on the range of f. From the definition of t-norms, we can write

T(s1, s2) = f^[−1](f(s1) + f(s2)).

In this paper, we have evaluated two types of Asym-AOs: the first (Asym-AO1) is based on the continuous monotone decreasing function φ(s1) = 1/s1^m, while the second (Asym-AO2) is based on the continuous monotone decreasing function φ(s1) = (2 − s1)^m, where m > 0 is a tunable parameter.
The three generating functions of t-norms used to obtain the Asym-AOs, i.e., Hamacher, Algebraic product, and Aczel-Alsina, are given by: f(s1) = (1 − s1)/s1 (Hamacher), f(s1) = −ln(s1) (Algebraic product), and f(s1) = (−ln(s1))^p (Aczel-Alsina). Table 1 presents a few typical examples of Asym-AOs, calculated using the various generating functions of t-norms and the above-mentioned continuous monotone decreasing functions. For the continuous monotone decreasing function φ(s1) = 1/s1^2 and the generator function of the Hamacher t-norm f(s1) = (1 − s1)/s1, Asym-AO1 is defined as: Asym-AO1(s1, s2) = s1^2 s2/(s1^2 s2 + 1 − s2). Asym-AO2 is defined as: Asym-AO2(s1, s2) = s2/(s2 + (2 − s1)(1 − s2)), where φ(s1) = (2 − s1) and f(s1) is the generator function of the Hamacher t-norm.
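For illustration, the following sketch evaluates such an operator numerically, assuming the composition A(s1, s2) = f^[−1](φ(s1) · f(s2)) with the Hamacher generator and φ(s1) = 1/s1^m; the exact closed forms used in the paper are those of its Equations (4) and (5), so this is a sketch under stated assumptions, not a definitive implementation.

```python
# Sketch of an Asym-AO built from a t-norm generator (assumed composition).

def hamacher_gen(s):
    """Hamacher t-norm generator: continuous, decreasing, f(1) = 0."""
    return (1.0 - s) / s

def hamacher_gen_inv(t):
    """Inverse of the Hamacher generator on [0, +inf)."""
    return 1.0 / (1.0 + t)

def asym_ao(s1, s2, m=2.0):
    """Asymmetric aggregation of two normalized scores (assumed form)."""
    eps = 1e-12                      # guard against division by zero at s = 0
    phi = 1.0 / max(s1, eps) ** m    # phi(s1) = 1/s1^m, decreasing, phi(1) = 1
    return hamacher_gen_inv(phi * hamacher_gen(max(s2, eps)))

# A genuine pair (both scores high) stays high; an impostor pair collapses.
genuine = asym_ao(0.9, 0.85)
impostor = asym_ao(0.15, 0.2)
```

Note the boundary behavior: since φ(1) = 1, the operator reduces to A(1, s2) = s2, while a low first score sharply pulls the fused value down.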
In multibiometric-based individual recognition, the choice of the combination rule is very important to attain high accuracy. Generally, the best combination rule is the one that minimizes the imposter matching scores and maximizes the genuine matching scores. To this end, we propose Asym-AOs as a combination rule to integrate the scores, because they satisfy this requirement, as can be observed in Figures 1-4.

The Score Level Fusion Method Based on Asym-AOs
A biometric system is essentially a pattern recognition/matching system. Biometric recognition systems are composed of two main stages: enrollment and recognition/verification. During the enrollment stage, the biometric trait is captured using a biometric sensor (e.g., a camera in the case of face recognition). The captured biometric trait is then used to extract salient features (i.e., a template), which are stored in a database along with the user's identity. During the recognition/verification stage, whenever the user wants to be recognized, they present their biometric trait to the system. The features extracted from the newly captured biometric trait are compared to the features stored in the database during enrollment in order to compute a match score. If the match score is greater than a threshold, the user is classified as genuine; otherwise, as an impostor. Figure 5 shows an architecture illustrating the overall procedure of a multibiometric person recognition framework that integrates information from multiple biometric sources using Asym-AOs. In a multimodal biometric verification setting, each user provides their biometric traits to the respective sensors and claims their identity. Next, the framework separately extracts the individual features; in this study, the binarized statistical image features method (explained below in detail) has been employed for feature extraction. The framework then matches the extracted features of the captured traits against their corresponding features/templates in the database accumulated at the time of enrollment and produces a vector of matching scores Q = [Q1, Q2, . . . , QN], where Qi is the match-score produced by the ith modality corresponding to the ith sensor. The information can be fused at different levels, i.e., the sensor, feature, match-score, and decision levels. Match-score level fusion is generally preferred owing to the ease of combining scores, and was thus applied in this work as well.
Owing to the heterogeneity of match-scores, the different match-scores should first be transformed into the range [0, 1] before fusion. In this work, we applied two normalization methods, min-max and tanh-estimators, defined as:

Q̄ = (Q − min(Q))/(max(Q) − min(Q)),  (9)

where Q̄ indicates the normalized score and Q represents the match-score generated by a specific matcher, and

Q̄ = (1/2){tanh(0.01((Q − µ)/σ)) + 1},  (10)

where Q̄ indicates the normalized score, and Q, µ, and σ are the match-score, mean, and standard deviation of the match scores, respectively, as given by the Hampel estimators [1]. Once the match-scores are normalized utilizing min-max (Equation (9)) or tanh-estimators (Equation (10)), the Asym-AOs are applied to integrate the normalized scores. If the fused score is greater than a threshold, the user is authenticated as genuine; otherwise, as an impostor.
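The two normalization steps and the final threshold decision can be sketched as follows; the raw scores, distribution parameters, and the simple averaging placeholder for the fusion step are assumptions for illustration.

```python
# Sketch of min-max (Equation (9)) and tanh-estimators (Equation (10))
# normalization, followed by a threshold decision on the fused score.
import math

def min_max(q, q_min, q_max):
    """Map a raw match-score into [0, 1] (Equation (9))."""
    return (q - q_min) / (q_max - q_min)

def tanh_estimator(q, mu, sigma):
    """Robust tanh-estimators normalization (Equation (10)); mu and sigma are
    the mean and standard deviation of the match-score distribution."""
    return 0.5 * (math.tanh(0.01 * (q - mu) / sigma) + 1.0)

def verify(fused_score, threshold=0.5):
    """Final decision rule: genuine if the fused score exceeds the threshold."""
    return "genuine" if fused_score > threshold else "impostor"

q_face = min_max(540.0, 100.0, 900.0)          # raw score 540 in [100, 900]
q_finger = tanh_estimator(72.0, 50.0, 10.0)    # raw score 72, mu=50, sigma=10
fused = (q_face + q_finger) / 2.0              # placeholder for an Asym-AO
```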

Binarized Statistical Image Features (BSIF)
Local texture descriptors have been successfully used in various real-world image texture classification applications [20]. Therefore, we also employed a local texture descriptor in our multibiometric systems. Specifically, in this study, we present an experimental analysis of a very popular descriptor named binarized statistical image features (BSIF) for accurate person authentication in multibiometric systems. BSIF features efficiently encode local texture information and represent image regions in the form of a histogram. Instead of manual tuning, BSIF employs a learning technique to attain a statistically meaningful description of the data, which enables efficient information encoding using simple element-wise quantization [20,21]. Moreover, BSIF features have shown their effectiveness in different applications ranging from face, iris, ear, and palmprint biometrics to face and fingerprint spoof detection [22].
BSIF was proposed by Kannala and Rahtu [21] for texture classification and face recognition, inspired by other methodologies that produce binary codes, such as local phase quantization (LPQ) and the local binary pattern (LBP). The BSIF descriptor is based on a set of filters of fixed size, where the filters are learnt from natural images via independent component analysis (ICA). In order to effectively estimate the texture properties of a given image, BSIF computes a binary code string for the pixels of the image by binarizing the responses of linear filters with a threshold at zero. BSIF is characterized by two important factors: the filter size l and the length n of the binary code string (i.e., the number of filters).
For a biometric image X of size l × l and a linear filter Wi of the same size, the filter response is given by:

ri = Σu,v Wi(u, v) X(u, v).

The binarized feature bi is given by:

bi = 1 if ri > 0, and bi = 0 otherwise.

Finally, the BSIF features are obtained as a normalized histogram of the pixels' binary codes, which can efficiently describe the texture components in biometric images.
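A toy sketch of the BSIF encoding pipeline (filter response, binarization at zero, code histogram); real BSIF filters are ICA-learned from natural images, whereas the tiny 2 × 2 filters below are hand-made assumptions for illustration only.

```python
# BSIF encoding sketch: apply n filters to an l x l patch, threshold each
# response at zero, pack the bits into a code, and histogram the codes.

def filter_response(patch, w):
    """r_i = sum over pixels of W_i(u, v) * X(u, v)."""
    return sum(w[u][v] * patch[u][v] for u in range(len(w)) for v in range(len(w)))

def bsif_code(patch, filters):
    """Binarize each response at zero and pack the bits into an integer code."""
    code = 0
    for i, w in enumerate(filters):
        bit = 1 if filter_response(patch, w) > 0 else 0
        code |= bit << i
    return code

def bsif_histogram(patches, filters):
    """Histogram of codes over all patches; length 2**n for n filters."""
    hist = [0] * (1 << len(filters))
    for p in patches:
        hist[bsif_code(p, filters)] += 1
    return hist

# Two toy 2x2 "filters" (n = 2, so codes 0..3) and three toy patches
filters = [[[1, -1], [1, -1]], [[1, 1], [-1, -1]]]
patches = [[[9, 1], [8, 2]], [[1, 9], [2, 8]], [[5, 5], [5, 5]]]
hist = bsif_histogram(patches, filters)
```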

Experiments
Here, we provide experimental analysis of the proposed Asym-AOs based score fusion method. In particular, we have conducted experiments on multi-modal, multi-unit, multi-algorithm and multi-modal mobile biometric systems using seven publicly available datasets.

NIST-Multimodal Database
The NIST-Multimodal database is composed of four sets of similarity scores from 517 subjects [23]. The scores were generated by two different face matchers (labelled matcher C and matcher G) and from the left and right index fingers.

NIST-Face Database
The NIST-Face database consists of two sets of similarity score vectors of 6000 samples from 3000 users [23]. The scores were generated by two different face matchers (labelled matcher C and matcher G).

IIT Delhi Palmprint V1 Database
The IIT Delhi Palmprint V1 database is made up of images of 235 users acquired using a touchless imaging setup [24]. The images were collected from the left and right hands on the IIT Delhi campus between July 2006 and June 2007 in an indoor environment. The subjects were in the age group of 12 to 57 years. The resolution of the touchless palmprint images is 800 × 600 pixels. In addition, 150 × 150 pixel normalized images are also available in this database.

PolyU Contactless Hand Dorsal Images database
In the Hong Kong PolyU Contactless Hand Dorsal Images (CHDI) database, images were acquired from 712 volunteers [26]. The images were collected on the Hong Kong Polytechnic University campus, the IIT Delhi campus, and in some villages in India between 2006 and 2015. This database also provides segmented images of the minor, second, and major knuckles of the little, ring, middle, and index fingers, along with segmented dorsal images.

IIT Delhi-2 Ear Database
In the IIT Delhi Ear database, ear images acquired from 221 subjects are available [25]. These images were collected from a distance in an indoor environment on the IIT Delhi campus, India, between October 2006 and June 2007. The subjects were in the age group of 14 to 58 years. The ear images were collected using a simple imaging setup at 272 × 204 resolution. In addition, 50 × 180 pixel normalized ear images are also provided.

MOBIO Face Database
The MOBIO face database was collected over a period of 18 months at six sites across Europe, from August 2008 until July 2010. It consists of face images from 150 subjects [27], of whom 51 are female and 99 are male. Images were collected using a handheld mobile device (the Nokia N93i).

VISOB iPhone Day Light Ocular Mobile database
The iPhone Day Light ocular mobile database is a partition of the Visible Light Mobile Ocular Biometric dataset (VISOB Dataset ICIP2016 Challenge Version). It contains eye images acquired from 550 subjects using the front-facing (selfie) camera of a mobile device, an iPhone 5s (1.2 MP, fixed focus) [28]. This database provides only the eye regions, with a size of 240 × 160 pixels.

Experimental Protocol
Experiments were performed in authentication mode, and the performance of the proposed fusion scheme is reported using Receiver Operating Characteristic (ROC) curves. The ROC curve [12] is obtained by plotting the Genuine Acceptance Rate (GAR) vs. the False Acceptance Rate (FAR), where GAR = 1 − FRR. The FRR (False Rejection Rate) is the rate at which genuine individuals are rejected by the system as imposters, the FAR is the rate at which imposter individuals are accepted by the system as genuine users, and the GAR is the rate of genuine users accepted over the total number of enrolled individuals. Since no finger major knuckle or mobile (face and ocular traits) multi-modal datasets are publicly available, we created chimerical multi-modal datasets. Creating chimerical datasets is a common procedure in multi-modal biometrics when no real datasets are available [8]. For example, the mobile face and ocular chimerical multi-modal dataset in this study was created by combining the face and ocular images of pairs of clients from the individual MOBIO face and VISOB iPhone Day Light Ocular datasets.
Moreover, when the two independent datasets used to produce a chimerical multi-modal dataset have different numbers of users, the number of subjects is typically set to the smaller of the two. Thus, in this work, we set the number of users equal to the number of subjects in the smaller dataset. For instance, the IIT Delhi-2 Ear database has 221 subjects while the PolyU-CHDI database has 712 subjects; therefore, we created a chimerical multi-modal dataset with only 221 subjects to evaluate the proposed score fusion framework.
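The GAR/FAR bookkeeping behind the reported ROC curves can be sketched as follows; the genuine and impostor score lists are synthetic assumptions, and sweeping the threshold traces the full GAR-vs-FAR curve.

```python
# Sketch of computing GAR and FAR at a decision threshold from the genuine
# and impostor fused-score sets.

def rates_at_threshold(genuine_scores, impostor_scores, thr):
    """Return (GAR, FAR) in percent at a given threshold."""
    gar = 100.0 * sum(s >= thr for s in genuine_scores) / len(genuine_scores)
    far = 100.0 * sum(s >= thr for s in impostor_scores) / len(impostor_scores)
    return gar, far

genuine = [0.91, 0.84, 0.88, 0.35, 0.93, 0.87]
impostor = [0.10, 0.22, 0.05, 0.31, 0.48, 0.12, 0.09, 0.26]

gar, far = rates_at_threshold(genuine, impostor, thr=0.4)
frr = 100.0 - gar                    # GAR = 100% - FRR
```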
The abbreviations of the presented methods as well as the conventional ones are described in Table 2 in order to facilitate comparison.

Experimental Results
In this section, we provide the experimental results on publicly available databases for multi-modal, multi-unit, multi-algorithm and multi-modal mobile biometric systems.

Performance of Asym-AOs Based Fusion on Multi-Modal Systems
• Experiment 1: Figure 6 shows an architecture illustrating the overall procedure of the face and fingerprint based multi-modal person recognition framework. In this first set of experiments, the match-scores of face matcher C and the right fingerprint (NIST-Multimodal database) are first normalized by the tanh-estimators technique as in Equation (10) and then combined via the proposed Asym-AOs. Figure 7 shows the performance of the unimodal biometric authentication systems and of their integration using Asym-AOs. At FAR = 0.01%, the GARs of face matcher C and the right fingerprint are 74.30% and 85.30%, respectively, whereas Asym-AO2 using the Hamacher generating function achieves a GAR of 97.55% at the same operating point. Table 3 reports the performances of Asym-AO1 and Asym-AO2 using the Hamacher, Algebraic product, and Aczel-Alsina generating functions together with previously proposed score fusion methods based on the sum, min, and max rules [22], the Algebraic product [26], the Frank and Hamacher t-norms [11], and S-sums based on the max rule and the Hamacher t-norm [12]. It can be seen from Table 3 that the Asym-AOs outperform the existing score fusion methods in the literature. Table 3 (excerpt), GAR (%) at FAR = 0.01%: Max rule [22]: 85.73; Min rule [22]: 80.92; Sum rule [22]: 92.40; Algebraic product [26]: 92.00; Hamacher t-norm [11]: 92.37; Frank t-norm with p = 1.3 [11]: 92.30; S-sum using max rule [12]: 92.14; S-sum using Hamacher t-norm [12]: 92.40.
• Experiment 2: Figure 8 shows an architecture illustrating the overall procedure of the ear and major finger knuckle based multi-modal person recognition framework. In this second set of experiments, a multi-modal biometric framework utilizing the ear and the index finger's major knuckle was analyzed. Namely, the experiments were carried out by merging two databases, i.e., the IIT Delhi-2 Ear database and the PolyU CHDI database. Two images per user (one training and one test image) belonging to 221 persons were randomly selected from each database.
Therefore, we have 221 genuine scores and 48,620 (221 × 220) imposter scores for each biometric trait considered. First, the match-scores of the ear and the index finger's major knuckle are normalized via the min-max technique as in Equation (9); then they are combined utilizing the Asym-AOs. Figure 9 depicts the ROCs of the ear and index finger's major knuckle modalities as well as of the fused modalities using Asym-AO2 generated by the Aczel-Alsina function. From this figure, we can observe that the Asym-AOs fusion method improved the authentication rate over the uni-modal systems. For example, at FAR = 0.01%, the GARs of the ear, the index finger's major knuckle, and the multi-modal system were 40%, 75.5%, and 85.00%, respectively. Other Asym-AOs, such as Asym-AO1 and Asym-AO2 generated by the Hamacher and Algebraic product functions, were also tested for score-level fusion, and the performances attained were almost identical to those of Asym-AO2 generated by Aczel-Alsina. From Table 4, it is evident that Asym-AOs based score fusion was significantly better than fusion utilizing the Hamacher and Frank t-norms [11], S-sums generated by the Hamacher t-norm and the max rule [5], the Algebraic product [26], and the sum, min, and max rules [22].

Performance of Asym-AOs Based Fusion on Multi-Unit Systems
The match-scores of the left and right palmprints were normalized by Equation (9) and then fused using the proposed score fusion technique. In Figure 11, we present the performances of the unimodal and multibiometric authentication systems in terms of ROC curves. It is easy to notice that the GARs of the left and right palmprints are 91.00% and 91.50% at FAR = 0.01%, respectively, whereas GAR = 98.43% when Asym-AO1 with the Aczel-Alsina generating function is used at the same operating point. Table 5 also reports the different Asym-AOs generated by the Hamacher, Aczel-Alsina, and Algebraic product functions as well as previously proposed score fusion strategies, i.e., the sum, min, and max rules [22], the Hamacher and Frank t-norms [11], S-sums [12], and the Algebraic product [26]. As in the multi-modal biometric scenario, we can state based on Table 5 that the proposed framework outperforms the prior widely adopted score fusion rules in multi-unit scenarios. Table 5 (excerpt), GAR (%) at FAR = 0.01%: Max rule [22]: 98.00; Min rule [22]: 92.00; Sum rule [22]: 97.39; Algebraic product [26]: 97.58; Hamacher t-norm [11]: 97.26; Frank t-norm with p = 1.3 [11]: 97.63; S-sum using max rule [12]: 97.52; S-sum using Hamacher t-norm [12]: 97.96.

Performance of Asym-AOs Based Fusion on Multi-Algorithm Systems
In this subsection, we report the performance of the Asym-AOs based score fusion strategy on a multi-algorithm biometric system. Figure 12 shows an architecture illustrating the overall procedure of the face-based multi-algorithm person recognition framework; this system applies two face matching algorithms. Specifically, we conducted experiments on the NIST-Face database, which contains similarity scores of 3000 subjects. The number of genuine match-scores is 6000 (3000 × 2), whereas the number of imposter match-scores is 17,994,000 (6000 × 2999). The match-scores of face matcher C and face matcher G were normalized by the tanh-estimators technique as in Equation (10). Figure 13 shows the ROC curves of the individual biometric algorithms together with the proposed score-level fusion method. At the FAR = 0.01% operating point, the GARs of face matcher C, face matcher G, and the multi-algorithm system using Asym-AO2 generated by the Algebraic product function with m = 1.2 are 64.0%, 72.1%, and 76.27%, respectively. Table 6 summarizes the authentication rates obtained with Asym-AOs generated by the Hamacher, Algebraic product, and Aczel-Alsina functions along with previously proposed fusion techniques, such as the S-sum using the Hamacher t-norm [12], the S-sum using the max rule [12], the Hamacher t-norm [11], the Frank t-norm [11], and the sum, min, and max rules [22]. Comparing the results attained using the proposed Asym-AOs with the performances of the individual biometric systems, a notable improvement in recognition rate can be observed. Moreover, the proposed Asym-AOs outperform not only the uni-biometric systems but also the existing score fusion strategies.

Table 6. GAR (%) at FAR = 0.01%. Uni-biometrics: Face matcher C: 64.00; Face matcher G: 72.10. Multi-algorithm score combination schemes: Asym-AO1 using Hamacher with m = 3: 76.20; Asym-AO2 using Hamacher with m = 10: 76.10; Asym-AO1 using Algebraic product with m = 0.1: 76.00; Asym-AO2 using Algebraic product with m = 1.2: 76.27; Asym-AO1 using Aczel-Alsina with p = 1.2 and m = 2: 76.23; Asym-AO2 using Aczel-Alsina with p = 0.7 and m = 3: 76.20; Max rule [22]: 71.50; Min rule [22]: 73.15; Sum rule [22]: 75.75; Algebraic product [26]: 75.85; Hamacher t-norm [11]: 75.90; Frank t-norm with p = 1.3 [11]: 75.80; S-sum using max rule [12]: 76.00; S-sum using Hamacher t-norm [12]: 75.75.

Performance of Asym-AOs Based Fusion on Multi-Modal Mobile Systems
Figure 14 shows an architecture illustrating the overall procedure of the face and ocular based multi-modal mobile person recognition framework. Unlike the previous multibiometric authentication systems, which depend on samples collected using an ordinary camera, we also studied a multi-modal biometric framework based on datasets collected on mobile/smart phones, i.e., a multi-modal system using face and ocular biometrics (left eye) from the MOBIO face and VISOB iPhone Day Light Ocular biometric databases, respectively. In this experiment, 300 face images and 300 ocular images from 150 subjects were utilized. One face image and one ocular image per user were randomly selected as the training set, and likewise for the test set. The BSIF texture descriptor was extracted from the face and ocular biometrics. The chi-squared distance was utilized to generate the scores, followed by the min-max normalization method as in Equation (9). The ROC curves of the individual biometric modalities collected on mobile phones and of the Asym-AO1 fusion rule generated by the Hamacher function are shown in Figure 15. The Asym-AOs lead to a good recognition rate compared to the best uni-biometric system: at a FAR of 0.01%, the GARs of the face and ocular biometrics are 95.0% and 59.0%, respectively, while Asym-AO1 using the Hamacher function attains a GAR of 99.40% at the same operating point. The improvement in performance achieved by Asym-AOs using the Hamacher, Aczel-Alsina, and Algebraic product functions is reported in Table 7, where results using S-sums [12], the Hamacher and Frank t-norms [11], the Algebraic product [26], and the sum, min, and max rules [22] are also given for comparison purposes. From Table 7, it is easy to observe that the proposed multibiometric fusion strategy via Asym-AOs attains better performance than the corresponding previously proposed score fusion rules. For instance, fusion using the Hamacher t-norm achieved a GAR of 96.70%, while fusion via Asym-AO1 generated by the Hamacher function resulted in a GAR of 99.40%.

Table 7. Multi-modal mobile score combination schemes, GAR (%) at FAR = 0.01%: Asym-AO1 using Hamacher with m = 4: 99.40; Asym-AO2 using Hamacher with m = 7: 99.18; Asym-AO1 using Algebraic product with m = 0.7: 99.33; Asym-AO2 using Algebraic product with m = 0.6: 99.33; Asym-AO1 using Aczel-Alsina with p = 0.6 and m = 0.6: 99.40; Asym-AO2 using Aczel-Alsina with p = 0.6 and m = 0.4: 98.00; Max rule [22]: 80.53; Min rule [22]: 90.30; Sum rule [22]: 97.50; Algebraic product [26]: 97.15; Hamacher t-norm [11]: 96.90; Frank t-norm with p = 1.3 [11]: 97.50; S-sum using max rule [12]: 97.15; S-sum using Hamacher t-norm [12]: 96.88.

To sum up, the results obtained using the proposed biometric fusion scheme based on Asym-AOs with generating functions of t-norms show its effectiveness for person verification in different scenarios, such as multi-modal, multi-unit, multi-algorithm, and multi-modal mobile biometric systems, as well as for modalities such as the face, major finger knuckle, palmprint, ear, fingerprint, and ocular biometrics. Thus, it can be stated that the presented score fusion scheme lessens the inherent limitations of unibiometrics and reduces the error rates. In addition, the proposed Asym-AOs fusion framework is capable of overcoming the drawbacks of density-based and classifier-based score fusion, namely the difficulty of estimating score densities and the unbalanced genuine and imposter training score sets, respectively. Although the presented methodology is computationally inexpensive, estimating the value of the parameter m (in Equations (4) and (5)) can be tricky, especially when it must be estimated empirically by maximizing the system's performance (i.e., brute-force search).

Conclusions
In this work, we proposed a framework for the fusion of match-scores in multibiometric user authentication systems based on Asymmetric Aggregation Operators (Asym-AOs). These Asym-AOs are computed utilizing the generator functions of t-norms. Extensive experimental analysis on seven publicly available databases, collected using both ordinary cameras and mobile phones, showed a remarkable improvement in authentication rates over uni-modal biometric systems as well as other existing score-level fusion methods such as min, max, algebraic product, t-norms, and S-sums using t-norms. It is hoped that the proposed Asym-AOs biometric fusion scheme will be further exploited and explored for the development of information fusion systems in this field as well as in other domains. In the future, we aim to study the proposed framework under big data settings and for the fusion of mobile multibiometrics using in-built sensors. In addition, we plan to evaluate the robustness of the presented method against spoofing attacks and subsequently redesign the proposed framework to inherently enhance its robustness against such attacks.