Article

Integrating Iris and Signature Traits for Personal Authentication Using User-Specific Weighting

by Serestina Viriri 1,* and Jules R. Tapamo 2

1 School of Computer Science, University of KwaZulu-Natal, Westville Campus, Durban 4000, South Africa
2 School of Electrical, Electronic and Computer Engineering, Howard College, University of KwaZulu-Natal, Durban 4000, South Africa
* Author to whom correspondence should be addressed.
Sensors 2012, 12(4), 4324-4338; https://doi.org/10.3390/s120404324
Submission received: 7 March 2012 / Revised: 22 March 2012 / Accepted: 22 March 2012 / Published: 29 March 2012
(This article belongs to the Special Issue Biomimetic Sensors, Actuators and Integrated Systems)

Abstract

Biometric systems based on uni-modal traits are characterized by noisy sensor data, restricted degrees of freedom and non-universality, and are susceptible to spoof attacks. Multi-modal biometric systems seek to alleviate some of these drawbacks by providing multiple pieces of evidence of the same identity. In this paper, a user-score-based weighting technique for integrating the iris and signature traits is presented. This user-specific weighting technique has proved to be an efficient and effective fusion scheme which increases the authentication accuracy rate of multi-modal biometric systems. The weights indicate the importance of the matching scores output by each biometric trait. The experimental results show that our biometric system based on the integration of iris and signature traits achieves a false rejection rate (FRR) of 0.08% and a false acceptance rate (FAR) of 0.01%.

1. Introduction

Multi-modal biometric systems address the shortcomings of uni-modal systems. Consider, for instance, the problem of non-universality: a subset of users may not possess a particular biometric trait. For example, the feature extraction module of an iris authentication system may be unable to extract features from the iris images of specific individuals, due to either occlusion of the iris region of interest or poor image quality. Multi-modal systems also help ascertain that a live user is indeed being authenticated, since it is very difficult for an intruder to circumvent multiple biometric traits simultaneously [1]. Thus, a challenge-response type of authentication can be facilitated using multi-biometric systems.

Furthermore, multi-modal biometric systems are expected to be more reliable due to the presence of multiple pieces of evidence [2]. Multi-modal systems should be able to meet the stringent performance requirements imposed by various applications [3]. In fact, research has shown that combining biometric techniques for human identification is more effective, albeit more challenging [4]. Therefore, the problem of information fusion still needs attention in order to optimize the success rate of multi-modal biometric systems.

In this paper, a framework for modeling bi-modal biometric systems based on the iris (a physiological trait) and the signature (a behavioral trait) for personal authentication is proposed. These two biometric traits are not correlated. Moreover, the iris is proving to be one of the most reliable biometric traits, while signatures continue to be widely used for personal authentication.

2. Related Work

Multi-modal biometrics was pioneered by Anil K. Jain, and substantial research has since been carried out in this area. A variety of biometric fusion schemes, which use classifiers, have been described in the literature to combine multiple biometric trait scores. These include majority voting, sum and product rules, k-NN classifiers, SVMs, and decision trees [4–6]. For instance, Ross et al. [1,7] combine the matching scores of the face, fingerprint and hand geometry using three different techniques: the sum rule, decision trees, and linear discriminant analysis. Experiments indicate that the fusion scheme using the sum rule with normalized scores gives the best performance. These results are further improved by learning user-specific matching thresholds and weights for the individual biometric traits.

Other multi-modal biometric fusion approaches include the HyperBF network, used to combine the normalized scores of five different classifiers operating on the voice and face feature sets of an individual for identification [8]. Bigun et al. develop a statistical framework based on Bayesian statistics to integrate the speech (text-dependent) and face data of a user [9]; the estimated biases of each classifier are taken into account during the fusion process. Hong and Jain associate different confidence measures with the individual matchers when integrating the face and fingerprint traits of a user [3]. They also suggest an indexing mechanism wherein face information is used to retrieve a set of possible identities and the fingerprint information is then used to select a single identity. A commercial product called BioID [10] uses the voice, lip motion and face features of a user to verify identity. Brunelli and Falavigna also addressed an important aspect of fusion: the normalization of scores obtained from different domains [8]. Normalization maps scores from different ranges into a common range.

Although several score fusion techniques have been proposed in the literature, Ross et al. [11] grouped all of them into three main categories:

  • Density-based score fusion: this technique estimates the conditional densities p(s|genuine) and p(s|impostor), where s = [s1, s2, …, sn] is the vector of matching scores, computes the posterior probabilities P(genuine|s) and P(impostor|s), and can apply the Bayesian decision rule to make a decision.

  • Transformation-based score fusion: this approach transforms the match scores from different matchers into a common domain using normalization techniques.

  • Classifier-based score fusion: pattern classifiers are trained to learn the relationship between the vector of match scores, s = [s1, s2, …, sn], and the a posteriori probabilities P(genuine|s) and P(impostor|s).

In this paper, an enhanced user-specific weighting technique is proposed for integrating a physiological trait, the iris, with a behavioral trait, the signature; it is based on assigning different degrees of importance to the different traits of an individual. The user-specific weights for the individual biometric traits are calculated from the score of each biometric trait of an individual user. The proposed approach is an alternative to estimating user-specific weights by exhaustive search.

The rest of the paper is structured as follows: Section 3 describes the overall multi-modal biometrics system; Section 4 explores the levels of fusion for combining biometric traits; Section 5 describes the weighting techniques and normalization strategies; Section 6 presents the experimental results; and Section 7 draws conclusions and outlines future work.

3. Multi-Modal Biometrics System

Multi-modal biometric systems are based on the consolidation of information presented by multiple pieces of evidence that stem from multiple traits. Some of the limitations imposed by uni-modal biometric systems (that is, biometric systems that rely on the evidence of a single biometric trait) can be overcome by using multiple biometric modalities [4,8,9]. Such systems, known as multi-biometric systems, are expected to be more reliable due to the presence of multiple, fairly independent pieces of evidence.

A variety of factors should be considered when designing a multi-biometric system. These include the choice and number of biometric traits; the level in the biometric system at which information provided by multiple traits should be integrated; the methodology adopted to integrate the information; and the cost vs. matching performance trade-off.

A simple multi-modal biometrics system has five important components, as depicted in Figure 1, in which the different biometric traits are fused at the match score level:

  • Sensor module: acquires the biometric data of an individual. An example is the ePadInk tablet that captures signatures.

  • Feature extraction module: processes the acquired biometric data to extract distinctive feature values.

  • Matching module: compares the extracted feature values against those in the template, generating a matching score.

  • Fusion module: combines the matching scores of the individual biometric traits.

  • Decision module: accepts or rejects a claimed identity based on the fused matching score generated in the fusion module.

4. Fusion in Biometrics

There are various levels of fusion for combining biometric traits. The possible levels of fusion are [1,11]:

  • Fusion at the sensor level: The consolidation of evidence captured by multiple sources of the input data before feature extraction.

  • Fusion at the feature extraction level: The data obtained from each sensor is used to compute a feature vector. If the features extracted from one biometric trait are independent of those extracted from the other, it is better to concatenate the two vectors into a single new vector. The new feature vector now has a higher dimensionality and represents a person's identity in a different hyperspace. Feature reduction techniques may be employed to extract useful features from the larger set of features.

  • Fusion at the matching score level: Each subsystem provides a matching score indicating the proximity of the feature vector with the template vector. These scores can be combined to assert the veracity of the claimed identity. Fusion techniques such as logistic regression may be used to combine the scores reported by different sensors. These techniques attempt to minimize the FRR for a given FAR [12].

  • Fusion at the rank level: The consolidation of the ranks output by the individual biometric subsystems in order to derive a consensus rank for each identity [11].

  • Fusion at the decision level: Each sensor can capture multiple biometric data and the resulting feature vectors are individually classified into one of two classes: accept or reject. A majority vote scheme, such as that employed in [13], can be used to make the final decision.

5. Integrating Iris and Signature Traits

A brief description of the two biometric traits used in this research work is given below.

5.1. Iris Recognition

The iris is proving to be one of the most reliable biometric traits for personal identification, since iris patterns have stable, invariant and distinctive features. Several techniques have been proposed for iris segmentation, coding and matching. The most common approach used in iris recognition is to generate feature vectors corresponding to individual iris images and to perform iris matching based on some distance measure [14,15]. In this research work, an algorithm that detects the largest non-occluded rectangular part of the iris as the region of interest (ROI) is used [16]. A cumulative-sum-based grey change analysis technique is applied to the ROI to extract features for recognition [17]. Then, the Hamming Distance is computed as the iris matching score.

5.2. Signature Verification

The signature continues to be an important biometric trait, primarily because it remains widely used for authenticating the identity of human beings. This work uses an efficient text-based directional signature recognition algorithm that verifies signatures even when they are composed of symbols and special, unconstrained cursive characters that are superimposed and embellished [18]. This algorithm extends the character-based signature verification technique. The text-based directional algorithm integrates the direction information extracted from the contours of the whole signature text image with the transition information between background and foreground pixels in the signature text image. The extracted features represent the distinguishing cursive handwriting styles. Then, the Mahalanobis Distance is computed as the signature matching score.

5.3. Combining Iris and Signature Traits

The iris and signature traits are fused at the matching score level, where the matching scores output by each of the two traits are weighted and combined. Fusion at the matching score level is usually preferred, as it is relatively easy to access and combine the scores presented by the different modalities [4]. There are two distinct approaches to match score level fusion: the classification problem approach [6], where a feature vector is constructed using the matching scores output by the individual matchers, and the combination problem approach, where the individual matching scores are combined to generate a single scalar score, which is then used to make the final decision. The literature shows that the combination approach performs better than the classification approach [1]; hence, it is adopted in this paper. The combining process is summarized in Algorithm 1.

Algorithm 1 Fusion of Iris and Signature Traits.
1:  for each User do
2:    for each trait do
3:      if iris then
4:        S_iris ← HammingDistance {// Iris score generation}
5:      else
6:        S_sig ← MahalanobisDistance {// Signature score generation}
7:      end if
8:    end for
9:    for each score do
10:     if S_iris then
11:       S_iris ← Normalization(S_iris)
12:     else
13:       S_sig ← Normalization(S_sig)
14:     end if
15:   end for
16:   for each normalized score do
17:     if S_iris then
18:       W_iris ← Weighting(S_iris)
19:     else
20:       W_sig ← Weighting(S_sig)
21:     end if
22:   end for
23:   S_fus ← W_iris · S_iris + W_sig · S_sig
24: end for
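
As an illustration, the per-user loop of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch assuming `normalize` and `weight` helpers corresponding to the normalization and weighting functions described in the next subsections; the helper names are illustrative, not from the original:

```python
def fuse_scores(iris_scores, sig_scores, normalize, weight):
    """Per-user fusion of iris and signature matching scores (Algorithm 1).

    iris_scores, sig_scores: raw matching scores, one per user
        (Hamming and Mahalanobis distances, respectively).
    normalize: maps a list of raw scores to normalized scores.
    weight:    maps a normalized score to a user-specific weight.
    """
    s_iris = normalize(iris_scores)   # iris score normalization
    s_sig = normalize(sig_scores)     # signature score normalization
    fused = []
    for si, ss in zip(s_iris, s_sig):
        w_iris, w_sig = weight(si), weight(ss)   # user-specific weights
        fused.append(w_iris * si + w_sig * ss)   # S_fus = W_iris*S_iris + W_sig*S_sig
    return fused
```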

Score Generation

Iris matching scores are computed from the string iris feature codes extracted by the cumulative-sum-based grey change analysis technique. To verify the similarity of two iris codes, the Hamming Distance (HD) based matching algorithm [19] is used. The smaller the HD, the higher the similarity of the compared iris codes. The HD is taken as the iris raw matching score, $S_{iris}$, which is computed as

$$S_{iris} = \frac{1}{2N}\left[\left(\sum_{i=1}^{N} A_h(i) \oplus B_h(i)\right) + \left(\sum_{i=1}^{N} A_v(i) \oplus B_v(i)\right)\right] \quad (1)$$

computed only over cells where $A_h(i) \neq 0 \wedge B_h(i) \neq 0$ and $A_v(i) \neq 0 \wedge B_v(i) \neq 0$,

where $A_h(i)$ and $A_v(i)$ denote the enrolled iris code over the horizontal and vertical directions, respectively; $B_h(i)$ and $B_v(i)$ denote the new input iris code over the horizontal and vertical directions, respectively; $N$ is the total number of cells; and $\oplus$ is the XOR operator.
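
A minimal Python sketch of Equation (1), under the assumption that the iris codes are equal-length sequences in which a value of 0 marks an undefined (e.g., occluded) cell; this representation is an assumption, as the paper does not specify one:

```python
def iris_matching_score(A_h, A_v, B_h, B_v):
    """Hamming-distance iris score of Equation (1): count disagreeing code
    cells over the horizontal and vertical directions, comparing only cells
    where both codes are defined (nonzero), and divide by 2N."""
    N = len(A_h)
    total = 0
    for a, b in zip(A_h, B_h):          # horizontal direction
        if a != 0 and b != 0:
            total += (a != b)           # XOR of code symbols
    for a, b in zip(A_v, B_v):          # vertical direction
        if a != 0 and b != 0:
            total += (a != b)
    return total / (2 * N)
```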

Signature matching scores are generated from the signature feature vectors. To verify the similarity of two signatures, the Mahalanobis Distance (MD), which is based on correlations between the signature features, is used. It differs from the Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant. The smaller the MD, the higher the similarity of the compared signatures. The MD is taken as the signature raw matching score, $S_{sig}$, which is computed as in Equation (2):

$$S_{sig}(\vec{x},\vec{y}) = \sqrt{(\vec{x}-\vec{y})^{T}\, S^{-1}\, (\vec{x}-\vec{y})} \quad (2)$$

where $\vec{x}$ and $\vec{y}$ denote the enrolled feature vector and the new signature feature vector to be verified, respectively, and $S$ is the covariance matrix.
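
Equation (2) translates directly into a short NumPy routine; here the covariance matrix S is assumed to have been estimated beforehand from the signature feature set:

```python
import numpy as np

def signature_matching_score(x, y, cov):
    """Mahalanobis-distance signature score of Equation (2).
    x: enrolled feature vector, y: input feature vector,
    cov: covariance matrix S of the signature feature set."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```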

Score Normalization

Given a set of $n$ raw matching scores $\{S_k\}$, $k = 1, 2, \ldots, n$, the corresponding normalized scores $S'_k$ are given by:

  • Min-max normalization: retains the original distribution of scores and maps all the scores into the [0, 1] range.

    $$S'_k = \frac{S_k - \min(\{S_k\})}{\max(\{S_k\}) - \min(\{S_k\})} \quad (3)$$
    where min({Sk}) and max({Sk}) are the minimum and maximum, respectively, of the given set {Sk} of matching scores.

  • Z-score normalization: transforms the scores to a distribution with arithmetic mean of 0 and standard deviation of 1.

    $$S'_k = \frac{S_k - \mu}{\sigma} \quad (4)$$
    where μ and σ are the mean and standard deviation, respectively, of the set {Sk}.

  • Tanh normalization: is a robust statistical technique [20] which maps the raw scores into the [0, 1] range.

    $$S'_k = \frac{1}{2}\left\{\tanh\left(0.01\left(\frac{S_k - \mu}{\sigma}\right)\right) + 1\right\} \quad (5)$$
    where μ and σ are the mean and standard deviation, respectively, of {Sk}.
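
The three normalization formulas can be transcribed into Python as follows (a straightforward sketch of Equations (3)–(5)):

```python
import numpy as np

def min_max_norm(scores):
    """Min-max normalization, Equation (3): maps raw scores into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score_norm(scores):
    """Z-score normalization, Equation (4): zero mean, unit std. deviation."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    """Tanh normalization, Equation (5): robust mapping into [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)
```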

The ROC curves depicting the performance of the individual score normalization techniques on the iris biometric trait are shown in Figure 2. The CASIA iris database [21] is used to compare and contrast these normalization algorithms. A similar experiment conducted on the signature trait using the GPDS signature database [22] produced comparable results. The results show that the tanh normalization technique performs better than the min-max and Z-score techniques.

Score Weighting

Let $s_{iris}$ and $s_{sig}$ be the normalized scores of the iris and signature traits, respectively. The fusion score, $s_{fus}$, is computed as

$$s_{fus} = w_{iris}\, s_{iris} + w_{sig}\, s_{sig} \quad (6)$$

where $w_{iris}$ and $w_{sig}$ are the weights associated with the degrees of importance of the two traits per individual, and

$$w_{iris} + w_{sig} = 1 \quad (7)$$

Different iris scores and signature scores are given different degrees of importance for different users. For instance, by reducing the weight $w_{iris}$ of an occluded iris and increasing the weight $w_{sig}$ associated with the signature trait, the false rejection rate of that particular user can be reduced. The biometric system learns such user-specific parameters by observing system performance over a period of time [4]. Two techniques are used to compute the user-specific weights: an exhaustive search technique and a user-score-based technique.

The Exhaustive Search Technique

Let $w^i_{iris}$ and $w^i_{sig}$ be the weights associated with the $i$th user in the database. The algorithm operates on the training set as follows [7]:

  • For the $i$th user in the database, vary the weights $w^i_{iris}$ and $w^i_{sig}$ over the range [0, 1], subject to the constraint $w^i_{iris} + w^i_{sig} = 1$, and compute $s^i_{fus} = w^i_{iris} s^i_{iris} + w^i_{sig} s^i_{sig}$. This computation is performed over all scores associated with the $i$th user.

  • Choose the set of weights that minimizes the total error rate, i.e., the sum of the false acceptance and false rejection rates pertaining to this user. A minimal code sketch of this search follows.
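
A minimal Python sketch of this per-user search; the step size and the `total_error` helper, which is assumed to return FAR + FRR for a set of fused scores and their genuine/impostor labels, are illustrative assumptions:

```python
import numpy as np

def exhaustive_search_weights(iris_scores, sig_scores, labels, total_error,
                              step=0.01):
    """Exhaustive search for one user's weights [7]: sweep w_iris over
    [0, 1] with w_sig = 1 - w_iris and keep the pair that minimizes the
    user's total error rate (FAR + FRR)."""
    iris = np.asarray(iris_scores, dtype=float)
    sig = np.asarray(sig_scores, dtype=float)
    best_w, best_err = 0.5, float("inf")
    for w_iris in np.arange(0.0, 1.0 + step, step):
        fused = w_iris * iris + (1.0 - w_iris) * sig   # constrained fusion
        err = total_error(fused, labels)               # FAR + FRR (hypothetical helper)
        if err < best_err:
            best_w, best_err = w_iris, err
    return best_w, 1.0 - best_w
```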

The set of weights $\{w^i_{iris}, w^i_{sig}\}$ that minimizes the total error rate under the constraint $w^i_{iris} + w^i_{sig} = 1$ does not necessarily reflect the degrees of importance of the iris and signature traits of the $i$th individual in the fusion score $s^i_{fus} = w^i_{iris} s^i_{iris} + w^i_{sig} s^i_{sig}$. An alternative user-score-based weighting technique is therefore proposed, which computes the weights $\{w^i_{iris}, w^i_{sig}\}$ by associating them with the degrees of importance of the iris and signature traits, respectively. In this method, preliminary weights, which are not constrained to sum to 1, are computed from how close the scores $s^i_{iris}$ and $s^i_{sig}$ are to the thresholds of the iris and signature traits, respectively. The user-score-based weighting technique is described below.

The User-Score-Based Technique

Let $s^i_{iris}$ and $s^i_{sig}$ be the normalized scores associated with the $i$th user in the database, and let $\tau_1$ and $\tau_2$ be the thresholds of the iris and signature traits, respectively. The preliminary weights $\hat{w}^i_{iris}$ and $\hat{w}^i_{sig}$ per trait are computed as

$$\hat{w}^i_{iris} = \frac{s^i_{iris}}{\tau_1 + s^i_{iris}} \quad (8)$$

and

$$\hat{w}^i_{sig} = \frac{s^i_{sig}}{\tau_2 + s^i_{sig}} \quad (9)$$

where $\hat{w}^i_{iris}$ and $\hat{w}^i_{sig}$ are the initial weights associated with the iris and signature, respectively, without the constraint that they sum to 1. These weights are assigned to the scores $s^i_{iris}$ and $s^i_{sig}$ after analyzing how close to, or far away from, their respective thresholds $\tau_1$ and $\tau_2$ the scores are. Then, the fusion weights for the $i$th user are computed, for the iris and signature respectively, as

$$w^i_{iris} = \frac{\hat{w}^i_{iris}}{\hat{w}^i_{iris} + \hat{w}^i_{sig}} \quad (10)$$
$$w^i_{sig} = \frac{\hat{w}^i_{sig}}{\hat{w}^i_{iris} + \hat{w}^i_{sig}} \quad (11)$$

with the constraint $w^i_{iris} + w^i_{sig} = 1$, and the fusion score is computed as in Equation (12):

$$s^i_{fus} = w^i_{iris} s^i_{iris} + w^i_{sig} s^i_{sig} \quad (12)$$
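
A direct transcription of Equations (8)–(12) in Python (a minimal sketch; the variable names are illustrative):

```python
def user_score_weights(s_iris, s_sig, tau1, tau2):
    """User-score-based weights of Equations (8)-(11): preliminary weights
    reflect how close each normalized score is to its trait threshold,
    then are rescaled so that they sum to 1."""
    w_iris_hat = s_iris / (tau1 + s_iris)   # Equation (8)
    w_sig_hat = s_sig / (tau2 + s_sig)      # Equation (9)
    total = w_iris_hat + w_sig_hat
    return w_iris_hat / total, w_sig_hat / total   # Equations (10), (11)

# Fused score of Equation (12):
# w_iris, w_sig = user_score_weights(s_iris, s_sig, tau1, tau2)
# s_fus = w_iris * s_iris + w_sig * s_sig
```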

Score Fusion

The dual ν-Support Vector Machine (2ν-SVM) fusion algorithm [23] is used to integrate the matching scores of the iris, $s_{iris}$, and of the signature, $s_{sig}$, together with their corresponding weights, $w_{iris}$ and $w_{sig}$. The weighted iris matching score $m_{iris}$ is defined as

$$m_{iris} = s_{iris} \times w_{iris} \quad (13)$$

and the weighted signature score msig is defined as

$$m_{sig} = s_{sig} \times w_{sig} \quad (14)$$

The weighted matching scores and their labels are used to train the 2ν-SVM for bimodal fusion. Let the training data be

$$Z_{iris} = (m_{iris}, y) \quad (15)$$

and

$$Z_{sig} = (m_{sig}, y) \quad (16)$$

where $y \in \{+1, -1\}$, such that +1 represents the genuine class and −1 represents the impostor class. The 2ν-SVM error parameters are calculated using Equations (17) and (18):

$$\nu_{+} = \frac{n_{+}}{n_{+} + n_{-}} \quad (17)$$
$$\nu_{-} = \frac{n_{-}}{n_{+} + n_{-}} \quad (18)$$

where $n_{+}$ and $n_{-}$ are the numbers of genuine and impostor training scores, respectively. The training data is mapped into a higher-dimensional feature space such that $Z \rightarrow \varphi(Z)$, where $\varphi(\cdot)$ is the mapping function. The optimal hyperplane separates the data into two different classes in the higher-dimensional feature space.

In the classification phase, the bi-modal fusion matching score $s_{fus}$ is computed as in Equation (19),

$$s_{fus} = f_{iris}(m_{iris}) + f_{sig}(m_{sig}) \quad (19)$$

where

$$f_{iris}(m_{iris}) = a_{iris}\, \varphi(m_{iris}) + b_{iris} \quad (20)$$
$$f_{sig}(m_{sig}) = a_{sig}\, \varphi(m_{sig}) + b_{sig} \quad (21)$$

where $a_{iris}$, $a_{sig}$, $b_{iris}$ and $b_{sig}$ are parameters of the separating hyperplanes. The value of $s_{fus}$ in Equation (19) is the sum of the signed distances from the separating hyperplanes given by the two 2ν-SVMs for the two biometric modalities. The decision function defined in Equation (22) verifies the identity:

$$\mathrm{Decision}(s_{fus}) = \begin{cases} \text{Accept}, & \text{if } s_{fus} > 0 \\ \text{Reject}, & \text{otherwise} \end{cases} \quad (22)$$
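
For illustration only: scikit-learn does not ship a 2ν-SVM, so the sketch below approximates the per-modality classifiers of [23] with standard RBF-kernel SVMs (sklearn.svm.SVC), whose decision_function returns the signed distance used in Equations (19) and (22); the 'balanced' class weighting only loosely plays the role of the ν+/ν− error parameters of Equations (17) and (18):

```python
import numpy as np
from sklearn.svm import SVC

def train_fusion_svms(m_iris, m_sig, y):
    """Train one classifier per modality on the weighted scores of
    Equations (15)-(16); labels y are +1 (genuine) / -1 (impostor)."""
    clf_iris = SVC(kernel="rbf", class_weight="balanced").fit(
        np.asarray(m_iris, dtype=float).reshape(-1, 1), y)
    clf_sig = SVC(kernel="rbf", class_weight="balanced").fit(
        np.asarray(m_sig, dtype=float).reshape(-1, 1), y)
    return clf_iris, clf_sig

def verify(clf_iris, clf_sig, m_iris, m_sig):
    """Fusion score and decision of Equations (19) and (22): sum the signed
    distances from the two separating hyperplanes; accept if positive."""
    s_fus = (clf_iris.decision_function([[m_iris]])[0]
             + clf_sig.decision_function([[m_sig]])[0])
    return "Accept" if s_fus > 0 else "Reject"
```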

6. Experimental Results and Discussions

The performance of the investigated bi-modal biometrics system is evaluated by calculating its false acceptance rate (FAR) and false rejection rate (FRR) at various thresholds. These two factors are combined in a receiver operating characteristic (ROC) curve that plots the FRR, or equivalently the genuine acceptance rate (GAR), against the FAR at different thresholds. The FAR and FRR are computed by generating all possible genuine and impostor matching scores and then setting a threshold for deciding whether to accept or reject a match.
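
A minimal sketch of the FAR/FRR computation at a single threshold, assuming higher fused scores indicate a better match (if raw distance scores were used instead, the comparisons would be reversed):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR and FRR at a given decision threshold (accept when
    score >= threshold)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = np.mean(impostor >= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine < threshold)     # genuine users wrongly rejected
    return far, frr

# Sweeping the threshold over the observed score range yields the
# (FAR, FRR) pairs plotted in the ROC curves of Figure 4.
```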

The bi-modal database used in the experiments was constructed by merging the CASIA iris database [21] with the GPDS signature database [22]. An alternative bi-modal database was constructed from the CASIA iris database and a database of signatures captured using the ePadInk tablet. Seven iris images per user were obtained for a set of 50 users from the CASIA database. Fifteen signatures (ten genuine and five forgeries) per user were obtained for a different set of 50 users from the GPDS database, and another set of signatures was captured using the ePadInk tablet. The assumption of mutual independence of the iris and signature biometric traits allows us to randomly pair the users from the two different data sets. In this way, two bi-modal databases, each consisting of 50 users, were constructed: one from CASIA and GPDS, and the other from CASIA and the signatures captured using the ePadInk tablet.

Firstly, the matching scores of the iris and signature traits are computed as defined in Equations (1) and (2). These matching scores are normalized and weighted as defined in the subsections of Section 5.3. Various normalization techniques were investigated; the ROC curves depicting the performance of the individual score normalization techniques are shown in Figure 2. The tanh normalization technique performs better than the min-max and Z-score techniques.

Table 1 shows the scores for the iris and signature biometric traits, and their respective weights, for a sample of ten different individuals. The raw scores are normalized by the tanh technique, and the weights are computed using Equations (10) and (11). For instance, from Table 1 we observe that for user 5 the iris weight is $w_{iris} = 0.83$, a high weight attached to the iris trait, possibly due to the blurred iris. This demonstrates the importance of assigning user-specific weights to the individual biometric traits.

Figure 3 shows the average true positive rates achieved by the exhaustive search technique and the user-score-based approach, respectively, on the uni-modal biometric traits based on the iris and signature. The exhaustive search technique obtained true positive rates of 92.4% and 82.0% on the iris and signature traits, respectively. The user-score-based approach obtained true positive rates of 99.25% and 94.0% on the iris and signature traits, respectively. The overall average true positive rate achieved by the user-score-based approach is 99.6%. The results therefore show an improvement in accuracy when the user-score-based weighting technique is used.

6.1. Validation of the User-Score-Based Weighting Algorithm

The ROC curves in Figure 4 show the performance of the uni-modal biometric traits based on the iris and signature, respectively, and of the 2ν-SVM-fused bi-modal traits weighted by the exhaustive search technique and the user-score-based approach, respectively. The overall results show an improvement in performance when the scores are combined using the user-score-based weighting technique. For a given FAR of 0.01%, user-score-based weighting achieves a very low FRR of 0.08%, compared to an FRR of 0.75% for exhaustive search weighting, as shown in Table 2.

The user-score-based weighting algorithm computes the weights of the iris and signature traits by analyzing how close the two matching scores are to their respective thresholds, hence associating the weights with the different degrees of importance for the bi-modal biometric traits involved. Comparatively, the exhaustive search weighting technique calculates weights that simply minimize the total error rate. This minimum error rate (the sum of FAR and FRR) does not necessarily reflect the different degrees of importance for the bi-modal biometric traits fused.

6.2. Comparison with Existing Bi-Modal Biometric Systems

Table 3 shows the performance of the user-score-based weighted 2ν-SVM fusion algorithm compared to other bi-modal biometric fusion algorithms in the literature. The quality-based sum-rule [23] obtained an accuracy rate of 97.39% when used to fuse the face and iris modalities, whereas the fusion of the iris and signature modalities based on the user-score-based weighted 2ν-SVM technique achieves an accuracy rate of 99.6%.

7. Conclusions

In this paper, an enhanced user-specific weighting technique for integrating a physiological biometric trait, the iris, with a behavioral trait, the signature, is proposed. The proposed user-score-based approach calculates weights for each biometric trait per user in proportion to the scores of the biometric traits of that user. This enhanced user-specific weighting improves the accuracy rate of bi-modal biometric systems by reducing the false rejection rate (FRR) at a low false acceptance rate (FAR). Experimental results show that the proposed approach achieved a minimal FRR of 0.08% at a FAR of 0.01%. Further investigation of the effect of the proposed approach with other biometric modalities is envisaged.

Acknowledgments

Portions of the research in this paper use the CASIA iris image database collected by the Institute of Automation of the Chinese Academy of Sciences, and the Grupo de Procesado Digital de Señales (GPDS) signature database collected by the Universidad de Las Palmas de Gran Canaria, Spain.

References

  1. Ross, A.; Jain, A.K. Information fusion in biometrics. Pattern Recogn. Lett. 2003, 24, 2115–2125. [Google Scholar]
  2. Hong, L.; Jain, A.K. Can multibiometrics improve performance? Proc. AutoID 1999, 99, 59–64. [Google Scholar]
  3. Hong, L.; Jain, A.K. Integrating faces and fingerprints for personal identification. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1295–1307. [Google Scholar]
  4. Jain, A.K.; Ross, A. Multibiometric systems. Commun. ACM 2004, 47, 34–40. [Google Scholar]
  5. Kittler, J.; Hatef, M.; Duin, R.; Matas, J. On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 226–239. [Google Scholar]
  6. Verlinde, P.; Chollet, G. Comparing Decision Fusion Paradigms Using K-NN Based Classifiers, Decision Trees and Logistic Regression in a Multi-Modal Identity Verification Application. Proceedings of the International Conference on Audio and Video-Based Biometric Person Authentication, Washington DC, USA, 22–24 March 1999; pp. 188–193.
  7. Jain, A.K.; Ross, A. Learning User-Specific Parameters in a Multi-Biometric System. Proceedings of IEEE International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; pp. 57–60.
  8. Brunelli, R.; Falavigna, D. Person identification using multiple cues. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 12, 955–966. [Google Scholar]
  9. Bigun, E.S.; Bigun, J.; Duc, B.; Fischer, S. Expert Conciliation for Multimodal Person Authentication Systems Using Bayesian Statistics. Proceedings of the International Conference on Audio and Video-Based Biometric Person Authentication, Crans-Montana, Switzerland, 12–14 March 1997; pp. 291–300.
  10. Frischholz, R.W.; Dieckmann, U. Bioid: A multimodal biometric identification system. IEEE Comput. 2000, 33, 64–68. [Google Scholar]
  11. Ross, A.A.; Nandakumar, K.; Jain, A.K. Handbook of Multibiometrics; Springer: Berlin, Heidelberg, Germany, 2006. [Google Scholar]
  12. Jain, A.K.; Prabhakar, S.; Chen, S. Combining multiple matchers for a high security fingerprint verification system. Pattern Recogn. Lett. 1999, 20, 1371–1379. [Google Scholar]
  13. Zuev, Y.; Ivanov, S. The Voting as a Way to Increase the Decision Reliability. Proceedings of the Foundations of Information/Decision Fusion with Applications to Engineering Problems, Washington DC, USA, 7–9 August 1996; pp. 206–210.
  14. Miyazawa, K.; Ito, K.; Aoki, T.; Kobayashi, K.; Nakajima, H. A Phase-Based Iris Recognition Algorithm; Springer: Berlin, Heidelberg, Germany, 2005; Volume 3832, pp. 356–365. [Google Scholar]
  15. Bowyer, K.W.; Hollingsworth, K.; Flynn, P.J. Image understanding for iris biometrics: A survey. Comput. Vis. Image Underst. 2008, 110, 281–307. [Google Scholar]
  16. Viriri, S.; Tapamo, J.-R. Improving Iris-Based Personal Identification Using Maximum Rectangular Region Detection. Proceedings of the 2009 International Conference on Digital Image Processing, Bangkok, Thailand, 7–9 March 2009; pp. 421–425.
  17. Ko, J.-G.; Gil, Y.-H.; Yoo, J.-H.; Chung, K.-L. A Novel and efficient feature extraction method for iris recognition. ETRI J. 2007, 29, 399–401. [Google Scholar]
  18. Viriri, S.; Tapamo, J.-R. Signature verification based on handwritten text recognition. Commun. Comput. Inf. Sci. 2009, 61, 98–105. [Google Scholar]
  19. Daugman, J.G. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 1148–1161. [Google Scholar]
  20. Jain, A.K.; Nandakumar, K.; Ross, A. Score normalization in multimodal biometric systems. Pattern Recogn. 2005, 38, 2270–2285. [Google Scholar]
  21. Casia Iris Image Database (CASIA). Available online: http://www.sinobiometrics.com/ (accessed on 8 June 2008).
  22. GPDS Signature Database. Available online: http://www.gpds.ulpgc.es/download/index.htm/ (accessed on 20 February 2010).
  23. Vatsa, M.; Singh, R.; Noore, A. Integrating image quality in 2ν-SVM biometric match score fusion. Int. J. Neural Syst. 2007, 17, 343–351. [Google Scholar]
  24. Teoh, A.; Samad, S.A.; Hussain, A. Nearest neighbourhood classifiers in a bimodal biometric verification system fusion decision scheme. J. Res. Pract. Inf. Technol. 2004, 36, 47–62. [Google Scholar]
Figure 1. Multi-modal Biometrics System (Iris & Signature).

Figure 2. ROC curves showing the performance of each of the three normalization techniques on the Iris trait.

Figure 3. Average true positive rate of the iris and signature modalities.

Figure 4. Tanh normalized-based ROC curves showing the performance of using Iris, Signature, Iris + Signature (Exhaustive), and Iris + Signature (User-score-based).
Table 1. User-specific Scores and Weights of different traits for 10 users.

| User | Iris Score | Signature Score | Normalized Iris Score | Normalized Signature Score | Iris Weight | Signature Weight |
|------|------------|-----------------|-----------------------|----------------------------|-------------|------------------|
| 1    | 0.192      | 0.001           | 0.487                 | 0.488                      | 0.80        | 0.20             |
| 2    | 0.277      | 0.001           | 0.490                 | 0.488                      | 0.86        | 0.14             |
| 3    | 0.625      | 2.054           | 0.505                 | 0.505                      | 0.50        | 0.50             |
| 4    | 0.446      | 2.438           | 0.506                 | 0.496                      | 0.44        | 0.56             |
| 5    | 0.232      | 0.005           | 0.486                 | 0.492                      | 0.83        | 0.17             |
| 6    | 0.473      | 2.383           | 0.498                 | 0.507                      | 0.47        | 0.53             |
| 7    | 0.071      | 0.028           | 0.484                 | 0.493                      | 0.67        | 0.33             |
| 8    | 0.522      | 2.474           | 0.505                 | 0.507                      | 0.47        | 0.53             |
| 9    | 0.366      | 1.358           | 0.497                 | 0.502                      | 0.48        | 0.52             |
| 10   | 0.451      | 1.774           | 0.502                 | 0.506                      | 0.50        | 0.50             |
Table 2. Exhaustive search vs. User-score-based technique.

| Weighting Technique | FAR (%) | FRR (%) |
|---------------------|---------|---------|
| Exhaustive search   | 0.01    | 0.75    |
| User-score-based    | 0.01    | 0.08    |
Table 3. Comparative table of the weight-based fusion algorithms.

| Biometric Modalities | Weighted Fusion Algorithm          | Verification Accuracy (%) |
|----------------------|------------------------------------|---------------------------|
| Face + Iris          | Quality based Sum-rule [23]        | 97.39                     |
| Face + Speech        | k-NN based fusion [24]             | 99.72                     |
| Face + Iris          | Quality based [23]                 | 98.91                     |
| Iris + Signature     | User-Score-based Weighted 2ν-SVM   | 99.6                      |
