Face–Iris Multimodal Biometric Identification System

Abstract: Multimodal biometrics technology has recently gained interest due to its capacity to overcome certain inherent limitations of single biometric modalities and to improve the overall recognition rate. A common biometric recognition system consists of sensing, feature extraction, and matching modules. The robustness of the system depends largely on its ability to extract relevant information from the single biometric traits. This paper proposes a new feature extraction technique for a multimodal biometric system using face–iris traits. Iris feature extraction is carried out using an efficient multi-resolution 2D Log-Gabor filter to capture textural information at different scales and orientations. The facial features, on the other hand, are computed using the powerful method of singular spectrum analysis (SSA) in conjunction with the wavelet transform. SSA aims at expanding signals or images into interpretable and physically meaningful components. In this study, SSA is applied and combined with normal inverse Gaussian (NIG) statistical features derived from the wavelet transform. Relevant features from the two modalities are combined at a hybrid fusion level. The evaluation is performed on chimeric databases built from the Olivetti Research Laboratory (ORL) and Face Recognition Technology (FERET) databases for the face, and the Chinese Academy of Sciences Institute of Automation v3.0 Interval iris database (CASIA V3) for the iris. Experimental results demonstrate the robustness of the proposed system.


Introduction
The increasing demand for reliable and secure recognition systems now used in many fields is obvious evidence that more attention should be paid to biometrics. Biometric systems represent a means of accurate automatic personal recognition based on physiological characteristics (such as fingerprint, iris, face, and palm print) or behavioral characteristics (such as gait, signature, and typing) that are unique and cannot be lost or forgotten [1]. Biometric recognition systems are used in many areas such as passport verification, airports, buildings, mobile phones, and identity cards [2]. Unimodal biometric systems measure and analyze a single characteristic of the human body. These have many limitations, such as: (i) Noise in sensed data: the recognition rate of a biometric system is very sensitive to the quality of the biometric sample. (ii) Non-universality: if each individual in a population is able to provide a biometric modality for a given system, this modality is said to be universal; however, not all biometric modalities are truly universal. (iii) Lack of individuality: features extracted from the biometric modality of different individuals may be relatively identical [2]. (iv) Intra-class variation: the biometric information acquired during the training process of an individual for generating a template will not be identical to the information acquired from the same user during the test process; these variations may be due to poor interaction of the user with the sensor [3]. (v) Spoofing: although it seems difficult to steal a person's biometric modalities, it is always possible to circumvent a biometric system using spoofed biometric modalities. To overcome these disadvantages, one solution is the use of several biometric modalities within the same system, which is then referred to as a multi-biometric system [3,4].
Effectively, multi-biometric systems can be divided into four categories: multi-sensor, multi-sample, multi-algorithm, and multi-instance [5]. Combining information from multiple biometric sources is known as information fusion, and it can occur at several different levels [5,6]. At the sensor level, fusion occurs before the feature extraction module, and can be done only if the various acquisitions are instances of the same biometric modality obtained from several compatible sensors. Feature level fusion consists of combining the different feature vectors generated from different biometric modalities to create a single template or feature vector [3]; feature vectors can be concatenated into a single vector only if they are compatible or homogeneous [6]. Match score level fusion is performed after the matcher module, which generates match scores between the test sample and the templates stored in the database as a similarity or dissimilarity indicator for each modality; the fusion process combines the scores obtained by the different matchers to generate a single matching score [5]. Rank level fusion consists of generating a ranking of biometric identities for each biometric modality, and then fusing the rankings available for each individual across the different modalities; the lowest rank obtained corresponds to the correct identity. In decision level fusion, each modality goes through its own biometric system (feature extraction, matching, and recognition), with each system providing a binary decision; decision level fusion makes a final decision using rules such as AND, OR, etc. [5,6].
A biometric system has two phases: enrolment and recognition. In the enrolment phase, the biometric modality is captured and processed with specific algorithms to obtain a reference biometric template for each user, which is stored in the database. In the recognition phase, a biometric sample is captured and processed as in the previous phase, then compared with the biometric templates stored in the database [7]. Generally, biometric systems can operate in two modes: identification and verification. In identification mode, a biometric sample is captured and processed, then compared against all templates in the database (a one-to-many comparison), and the identity of the template to which the person belongs is determined. In verification mode, a biometric sample is captured and processed as in the enrolment phase, then compared to the corresponding template stored in the database (a one-to-one comparison). The obtained result is either accepted (if the user is authentic) or rejected (if the user is an impostor) [7].
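The two operating modes can be sketched with a toy nearest-template matcher; the user names, feature vectors, and the Euclidean-distance threshold below are illustrative assumptions, not part of the proposed system:

```python
import numpy as np

# Hypothetical enrolled templates: one feature vector per user identity.
templates = {
    "alice": np.array([0.1, 0.9, 0.3]),
    "bob":   np.array([0.8, 0.2, 0.5]),
}

def identify(sample):
    """Identification (one-to-many): compare against all templates,
    return the identity of the closest one."""
    return min(templates, key=lambda uid: np.linalg.norm(sample - templates[uid]))

def verify(sample, claimed_id, threshold=0.5):
    """Verification (one-to-one): compare only against the claimed
    user's template; accept if the distance is below the threshold."""
    return np.linalg.norm(sample - templates[claimed_id]) <= threshold
```

A real system would replace the raw vectors with the extracted iris/face features and tune the threshold on a development set.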
Several multimodal biometric systems using different modalities have been proposed in recent years, including the following. In 1995, Brunelli and Falavigna [8] proposed a multimodal biometric system combining face and voice based on supervised learning and Bayesian theory. In 1998, Hong and Jain [9] combined face and fingerprint at the matching score level. In 2002, Kittler and Messer [10] combined voice and face using two trainable classifier methods. In 2003, Ross and Jain [11] combined face, fingerprint, and hand geometry at the matching score level. In 2004, further combinations were proposed [12]. In this study, we choose face and iris patterns to construct a multimodal biometric system for the following reasons. The iris is among the most reliable biometric characteristics: it is a protected organ with a unique texture that remains unchanged throughout adult human life, and the iris region is segmented from the eye image for the identification process. The face is the most natural way to recognize a person from an image [6]. Face recognition is friendly and non-invasive (meaning that it does not violate individual privacy), and its deployment cost is relatively low; a simple camera connected to a computer may be sufficient. However, facial recognition is still relatively sensitive to the surrounding environment, which limits its recognition rate. The iris modality, on the other hand, is certainly more intrusive, but it is currently considered one of the most accurate biometrics. The choice of combining these two modalities is supported by the Zephyr analysis shown in [6]. In addition, a single capture device with very high resolution could simultaneously analyze the texture of the iris and the face [21].
There are basically four components in a conventional biometric system: preprocessing, feature extraction, matching, and decision. The feature extraction method affects the performance of the system significantly; many feature extraction techniques are described in [22]. This paper proposes a multimodal biometric system based on the face and iris, which uses a multi-resolution 2D Log-Gabor filter with spectral regression kernel discriminant analysis (SRKDA) to extract pertinent features from the iris. Furthermore, it proposes a new facial feature extraction technique based on singular spectrum analysis (SSA) modeled by the normal inverse Gaussian (NIG) distribution, together with statistical features (entropy, energy, and skewness) derived from the wavelet transform. The classification process is performed using the fuzzy k-nearest neighbor (FK-NN) classifier.
This paper is organized as follows: Section 2 reviews related work on multimodal biometric systems based on face and iris modalities. Section 3 describes the proposed multimodal biometric system. Section 4 presents the results of the experiments carried out to assess the performance of the proposed approach, and Section 5 concludes the paper.

Related Works
The recognition rate of multimodal systems depends on multiple factors, such as the fusion scheme, the fusion technique, the selected features and extraction techniques, the modalities used, and the compatibility of the feature vectors of the various modalities. The following section presents a brief overview of the state of the art in face-iris multimodal biometric systems. Recent and important works are summarized in Table 1.
As a representative example from Table 1, one such system extracts facial features with PCA and the discrete cosine transform (DCT), and iris features with the 1D Log-Gabor filter method and Zernike moments; a genetic algorithm (GA) is used for dimensionality reduction, fusion is performed at a hybrid level, and matching uses the Euclidean distance.

Proposed Multimodal Biometric System
This paper proposes a multimodal biometric system based on the face and iris modalities, as shown in Figure 1. The proposed system is detailed in this section.

Pre-processing
The image pre-processing step aims to process the face and iris images in order to enhance their quality and also to extract the regions of interest (ROIs).
The face is considered one of the most important parts of the human body. It is enhanced by applying histogram equalization, which usually increases the global contrast of the image. Then, the face image is cropped using the center positions of the left and right eyes, which are detected by the Viola-Jones algorithm [34]. Local regions of the face image (left and right eyes, nose, mouth) are detected with the same algorithm. Figure 2 illustrates the pre-processing steps of face recognition.
For the iris, the boundaries can be approximated by two circles obtained with the snake method: one for the iris-sclerotic boundary and another, within the first, for the iris-pupil boundary. Detecting these boundaries involves two steps:
• Finding the initial contour of the pupil and iris: the Hough transform is used to find the pupil circle coordinates, and the contour is initialized at these points.

• Searching for the true contour of the pupil and the iris using the active contour method.
Figure 3 shows an example of the iris segmentation process [35].
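The Hough-based initialization step can be illustrated with a minimal, self-contained circular Hough accumulator; this is a simplified stand-in for the segmentation actually used, and the grid size, radius candidates, and synthetic pupil edge below are arbitrary choices for the example:

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Minimal circular Hough transform: each edge point votes for every
    candidate centre lying a distance r away, for each candidate radius r."""
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], (cy, cx)   # best radius and centre

# Synthetic pupil edge: a circle of radius 20 centred at (50, 60).
angles = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edges = [(int(round(50 + 20 * np.sin(a))), int(round(60 + 20 * np.cos(a))))
         for a in angles]
r, c = hough_circle(edges, (100, 120), radii=[15, 20, 25])
```

In practice the edge points would come from an edge detector applied to the eye image, and the recovered circle would seed the active contour search.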

Iris Features Extraction
John Daugman developed the first algorithms for iris recognition, publishing the first related papers and giving the first live demonstrations, and the iris subsystem proposed in this paper builds on Daugman's approach. A 2D Log-Gabor filter is used to capture two-dimensional characteristic patterns. Because of its added dimension, the filter is tuned not only to a particular frequency but also to a particular orientation. The orientation component is a Gaussian distance function of the angle in polar coordinates. The filter is defined in the frequency domain by the following equation:

G(f, θ) = exp( −(log(f/f0))² / (2·(log(σf/f0))²) ) · exp( −(θ − θ0)² / (2·σθ²) )

where: f0: center frequency; σf: width parameter for the frequency; θ0: center orientation; σθ: width parameter of the orientation.
This filter is applied to the image by convolution between the image and the filter. The multi-resolution 2D Log-Gabor filter G(fs, θo) is a 2D Log-Gabor filter applied at different scales (s) and orientations (o) [36,37].
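As a rough sketch of such a filter bank, the following builds multi-resolution 2D Log-Gabor transfer functions directly in the frequency domain and applies them by pointwise multiplication (equivalent to convolution in the spatial domain); the scale and orientation counts, wavelength spacing, and width parameters are illustrative defaults, not the paper's tuned values:

```python
import numpy as np

def log_gabor_bank(rows, cols, n_scales=4, n_orients=5,
                   min_wavelength=3, mult=2.1, sigma_f=0.65, sigma_theta=0.55):
    """Frequency-domain multi-resolution 2D Log-Gabor bank (sketch)."""
    y, x = np.mgrid[-(rows // 2):rows - rows // 2, -(cols // 2):cols - cols // 2]
    radius = np.sqrt((x / cols) ** 2 + (y / rows) ** 2)
    radius[rows // 2, cols // 2] = 1.0            # avoid log(0) at DC
    theta = np.arctan2(-y, x)
    bank = []
    for s in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** s)   # centre frequency per scale
        radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
        radial[rows // 2, cols // 2] = 0.0        # zero response at DC
        for o in range(n_orients):
            theta0 = o * np.pi / n_orients
            # wrap the angular distance to [-pi, pi]
            dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
            angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
            bank.append(radial * angular)
    return bank

def filter_image(img, bank):
    """Apply each filter by multiplication in the frequency domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return [np.fft.ifft2(np.fft.ifftshift(F * g)) for g in bank]
```

The complex responses (magnitude or quantized phase) over all scale/orientation pairs would then form the raw iris feature vector.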
The high dimensionality of the extracted features causes problems of efficiency and effectiveness in the learning process. One solution to this problem is to reduce the original feature set to a small number of features while improving the accuracy and/or efficiency of the biometric system. In this work, spectral regression kernel discriminant analysis (SRKDA), proposed by Cai et al. [38], is used; it is a powerful dimensionality reduction technique applied here to the multi-resolution 2D Log-Gabor features. The SRKDA algorithm is described in [38].
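SRKDA avoids the dense eigen-decomposition of ordinary kernel discriminant analysis by casting it as regularized regression. A simplified sketch of the two-step idea (class-indicator responses orthogonalized against the constant vector, then kernel ridge regression) might look as follows; the RBF kernel, its bandwidth, and the regularization value are illustrative assumptions rather than the authors' exact settings:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # pairwise squared distances, then Gaussian kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def srkda_fit(X, labels, reg=0.01, gamma=1.0):
    """Sketch of spectral regression KDA:
    (1) build class-indicator responses with the constant component removed,
    (2) solve the regularized system (K + reg*I) alpha = y per response."""
    n = len(X)
    classes = np.unique(labels)
    Y = np.stack([(labels == c).astype(float) for c in classes[:-1]], axis=1)
    ones = np.ones((n, 1))
    Y = Y - ones @ (ones.T @ Y) / n            # remove constant component
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + reg * np.eye(n), Y)

def srkda_transform(X_train, alphas, X_new, gamma=1.0):
    """Project new samples into the discriminant subspace."""
    return rbf_kernel(X_new, X_train, gamma) @ alphas
```

On well-separated data the projected training samples cluster by class, which is the property the subsequent classifier exploits.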

Facial Features Extraction
This paper proposes a new feature extraction method for face recognition based on statistical features generated from the SSA-NIG and wavelet methods. This method extracts relevant information that is invariant to illumination and expression variation, and is described as follows.
SSA is a powerful non-parametric technique used in signal processing and time series analysis. It is a spectral estimation method related to the eigenvalues of a covariance matrix, which decomposes the signal into a sum of components, each with a specific interpretation. For example, for a short time series, SSA decomposes the signal into oscillatory components (PC1, PC2, PC3, ..., PCL). SSA is used to solve several problems such as smoothing, finding structure in short time series, and denoising [39-43].
The SSA technique has two main phases-decomposition and reconstruction of the time series signal-and each phase has its steps.The decomposition process has two steps: the embedding step and singular value decomposition (SVD) step.
Embedding step: transforms a one-dimensional signal YT = (y1, ..., yT) into multi-dimensional vectors X1, ..., XK, where Xi = (yi, ..., yi+L−1) ∈ R^L and K = T − L + 1. The single parameter here is the window length L, an integer such that 2 ≤ L ≤ T. The obtained matrix X = [X1, ..., XK] is called the trajectory matrix.
Singular value decomposition (SVD) step: computes the SVD of the trajectory matrix. The eigenvalues of XXᵀ are denoted by λ1, ..., λL and its eigenvectors by U1, ..., UL. If we denote Vi = XᵀUi/√λi, then the SVD of the trajectory matrix can be written as X = X1 + X2 + ... + XL, with elementary matrices Xi = √λi·Ui·Viᵀ [43].
A facial image is transformed into one signal vector, then the derived signal is decomposed into multi-dimensional components (principal components, PCs) by the decomposition process explained previously. The first component contains the main information, which is not affected by noise, illumination variation, or expression variation. Figure 4 shows an example of one-dimensional singular spectrum analysis (1D-SSA) of a signal with a window length of 4; the original signal is decomposed into four components, PC1 to PC4.
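The embedding and SVD steps above, followed by diagonal averaging (the reconstruction phase), can be condensed into a short routine; this is a generic 1D-SSA sketch rather than the authors' exact implementation:

```python
import numpy as np

def ssa_components(y, L):
    """1D singular spectrum analysis: embed the series into a trajectory
    matrix, take its SVD, and reconstruct one additive component per
    singular triple via diagonal averaging (Hankelization)."""
    T = len(y)
    K = T - L + 1
    # embedding: Hankel trajectory matrix with columns X_i = (y_i,...,y_{i+L-1})
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(L):
        Xi = s[i] * np.outer(U[:, i], Vt[i])       # rank-1 elementary matrix
        # average each anti-diagonal back into a length-T series
        comp = np.array([np.mean(Xi[::-1].diagonal(k))
                         for k in range(-(L - 1), K)])
        comps.append(comp)
    return comps   # comps[0] is PC1, etc.; the components sum back to y
```

Because the decomposition is exact, the sum of all components recovers the original signal; keeping only PC1 acts as the denoising step described above.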
The retained SSA signal is then modeled with the symmetric normal inverse Gaussian (NIG) probability density function (pdf):

p(x) = (αδ/π) · exp(δα) · K1(α·√(δ² + x²)) / √(δ² + x²)

where: K1(.) is the first-order modified Bessel function of the second kind; α denotes the shape (steepness) factor of the NIG pdf; δ is the scale factor.
α controls the steepness of the NIG pdf: as α increases, the pdf becomes more peaked. The scale factor δ, on the other hand, controls the dispersion of the NIG pdf [38].
The effect of these two parameters on the shape of the NIG pdf is demonstrated in Figure 5. The NIG parameters are estimated using the following moment-based formulas:

α = √(3·K2/K4),  δ = α·K2

where K2 and K4 are the second-order and fourth-order cumulants of the data, respectively. α and δ are computed from each of the SSA segment signals [44].
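The cumulant-based estimates above translate directly into code; this sketch computes the sample cumulants with plain moments (and guards against non-leptokurtic data, an edge case the paper does not discuss):

```python
import numpy as np

def nig_params(x):
    """Moment-based estimates of the symmetric NIG parameters:
    alpha = sqrt(3*K2/K4), delta = alpha*K2, with K2 and K4 the
    second- and fourth-order sample cumulants of the centred data."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    k2 = np.mean(x ** 2)                    # second cumulant (variance)
    k4 = np.mean(x ** 4) - 3 * k2 ** 2      # fourth cumulant
    k4 = max(k4, 1e-12)                     # guard: model needs heavy tails
    alpha = np.sqrt(3 * k2 / k4)
    delta = alpha * k2
    return alpha, delta
```

Applied to each SSA segment, the pair (α, δ) becomes part of the facial feature vector.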
The NIG pdf models the histogram of the facial nonlinear signals, as shown in Figure 6. In addition to the mean and standard deviation generated from SSA-NIG, statistical features (entropy, energy, and skewness) derived from the third level of the wavelet transform are used.
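As a sketch of these wavelet-domain statistics, the following uses a plain three-level Haar DWT as a self-contained stand-in (the paper does not name its mother wavelet) and computes energy, entropy, and skewness per subband:

```python
import numpy as np

def haar_dwt_levels(x, levels=3):
    """Plain orthonormal Haar DWT: returns the final approximation and
    the detail coefficients of each level."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        if len(approx) % 2:                      # pad to even length
            approx = np.append(approx, approx[-1])
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)
        approx = a
    return approx, details

def subband_features(coeffs, eps=1e-12):
    """Energy, Shannon entropy of the normalized energy distribution,
    and skewness of a coefficient vector."""
    c = np.asarray(coeffs, dtype=float)
    energy = np.sum(c ** 2)
    p = c ** 2 / (energy + eps)
    entropy = -np.sum(p * np.log2(p + eps))
    skewness = np.mean((c - c.mean()) ** 3) / (c.std() + eps) ** 3
    return energy, entropy, skewness
```

The orthonormal normalization makes the total energy across subbands equal that of the input, which is a convenient sanity check.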


Classification Process
The proposed system operates in identification mode, in which feature vectors are compared to the templates stored in the database for each biometric trait during the enrollment phase. Among the best-known statistical classification methods is the original k-nearest neighbor (K-NN); in this work, however, we have adopted the fuzzy k-nearest neighbor (FK-NN) classifier for the classification phase of our multimodal biometric system [45].
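A minimal Keller-style FK-NN can be sketched as follows; it assumes crisp training memberships (1 for the labeled class, 0 otherwise), which is one common simplification rather than necessarily the variant used in [45]:

```python
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0, eps=1e-9):
    """Fuzzy k-NN sketch: class memberships are inverse-distance-weighted
    votes of the k nearest neighbours; the fuzzifier m controls how
    strongly closer neighbours dominate. Returns (class, memberships)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                          # k nearest neighbours
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)    # fuzzy weights
    classes = np.unique(y_train)
    u = {c: w[y_train[nn] == c].sum() / w.sum() for c in classes}
    return max(u, key=u.get), u
```

Unlike crisp K-NN, the membership values u also serve as soft scores, which is convenient for the score-level fusion described next.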

Fusion Process
The main structure of the proposed multimodal biometric system is based on the effective combination of the face and iris modalities. In our proposal, the system uses score level fusion and decision level fusion at the same time in order to exploit the advantages of each fusion level and improve the performance of the biometric system. At the score level, the scores are normalized with the min-max and Z-score techniques, and fusion is performed with the min rule, max rule, sum rule, and weighted sum rule. At the decision level, the OR rule is used.

Experimental Results
The goal of this paper is to design an optimal and efficient face-iris multimodal biometric system. We start by evaluating the performance of the unimodal systems using only the iris modality and only the face modality, then propose a multimodal biometric system that combines the two, selecting the best feature vectors and using score level fusion and decision level fusion at the same time. The iris is a small internal organ, protected by the eyelids and eyelashes, when detected in the whole face image. For this reason, it does not affect the performance of the face recognition system; moreover, the iris is independent of the face. We use real databases to build a chimeric database for the implementation of the face-iris multimodal biometric system. In this work, the chimeric databases are constructed from the Chinese Academy of Sciences Institute of Automation (CASIA) v3.0 iris image database (CASIA V3), the Olivetti Research Laboratory (ORL) database, and the Face Recognition Technology (FERET) database, described as follows.

1) CASIA iris database: developed by the Institute of Automation of the Chinese Academy of Sciences (CASIA). Since it is the oldest iris database, it is the best known and is widely used by the majority of researchers. It presents few defects, and very similar and homogeneous characteristics. CASIA-IrisV3-Interval contains 2655 iris images corresponding to 249 individuals; these images were taken under the same conditions as CASIA V1.0, with a resolution of 320 × 280 pixels [46]. Figure 7a shows example images from the CASIA iris database.
2) ORL face database: the ORL (Olivetti Research Laboratory) database includes individuals with 10 images each, with pose and expression variations; the database contains 400 images. These poses were taken at different time intervals. The captured images have a small size (11 KB) and a resolution of 92 × 112 pixels, in grayscale portable graymap (PGM) format [47]. Figure 7b shows example images from the ORL face database.
3) FERET face database: a database of facial imagery collected between December 1993 and August 1996, comprising 11,338 images photographed from 994 subjects at different angles and conditions. They are divided into standard galleries: fa, fb, ra, rb sets, etc. In this work, the ba, bj, and bk partitions of the color FERET database are considered, where ba is a frontal image, bj is an alternative frontal image, and bk is a frontal image corresponding to ba but taken under different lighting. The images have a resolution of 256 × 384 pixels and are in joint photographic experts group (JPG) format [48]. Figure 7c shows example images from the FERET face database.


Iris System
In our experiments, every eye image from CASIA Interval V3 was segmented and normalized into 240 × 24 pixels, as shown in Figure 3. Then, the multi-resolution 2D Log-Gabor filter was used to extract pertinent features at different scales "s", orientations "o", and ratios σ/f0. Next, SRKDA was applied to reduce the dimensionality of the feature vector. A total of 40 subjects were considered, each with seven images; one, two, three, and four images were selected for training and the remaining images were kept as testing images. The recognition rate was calculated using the following parameter settings: (s = 4, o = 5, σ/f0 = 0.65), (s = 4, o = 5, σ/f0 = 0.85), (s = 5, o = 8, σ/f0 = 0.65), and (s = 5, o = 8, σ/f0 = 0.85). Table 2 shows the iris identification rates, while Figure 8 shows the cumulative match characteristic (CMC) curve of the iris recognition system.
Table 2 gives the recognition rates of the iris recognition system when different numbers of images are used for training; as the number of training images increases, the recognition rate increases as well, so using two images for training gives better results than using one, and so on. The best recognition rate, 97.33%, is obtained using the parameters (s = 5, o = 8, σ/f0 = 0.85) and four images for training. Figure 8 shows the CMC curve of the system, which demonstrates that the system achieves a recognition rate of 100% at rank 7.
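The CMC curve reported throughout can be computed from a probe-by-gallery similarity matrix as follows; the toy score matrix in the test is illustrative, not taken from the experiments:

```python
import numpy as np

def cmc_curve(scores, true_ids):
    """Cumulative match characteristic: cmc[r-1] is the fraction of probes
    whose true identity appears within the top-r ranked gallery entries.
    `scores` is a (probes x gallery) similarity matrix."""
    scores = np.asarray(scores, dtype=float)
    n_probes, n_gallery = scores.shape
    ranks = []
    for p in range(n_probes):
        order = np.argsort(-scores[p])      # best match first
        ranks.append(int(np.where(order == true_ids[p])[0][0]) + 1)
    ranks = np.array(ranks)
    return np.array([(ranks <= r).mean() for r in range(1, n_gallery + 1)])
```

The rank-1 value of this curve is the recognition rate quoted in the tables; the rank at which the curve reaches 1.0 matches statements such as "100% at rank 7".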

Face System
Experimental results were obtained from the two face databases, ORL and FERET, with the goal of selecting the best feature vectors and enhancing performance. The face image was enhanced, then the facial image and its local regions (nose, mouth, and eyes) were detected with the Viola-Jones algorithm, as shown in Figure 2. The SSA-NIG method was applied for feature extraction, selecting different components PC1, PC2, PC3, PC1+PC2, and PC1+PC2+PC3, and different window lengths M of size 5, 9, and 12.
In the ORL face database, 40 subjects were considered, each with seven images, as for the CASIA iris database. Evaluation tests were performed using one, two, and three images for training, with the remaining images used for testing. The obtained evaluation results are shown in Table 3 and Figure 9.
Table 3 demonstrates the effect of the window length and the principal components used in the feature extraction method. The best recognition rate is obtained when taking three images for training, with recognition rates of 97%, 94%, and 89% for M = 5, M = 9, and M = 12, respectively. We also note that SSA decomposes the signal into components, and the denoising process eliminates the effects of varying illumination. The best results were obtained with the first principal component PC1 and a window length of M = 5, with a recognition rate of 97.00%. From Figure 9, the CMC curve shows that the proposed system achieved 100% at rank 8.
Experiments were also performed on the FERET database, taking 200 subjects, each with three frontal facial images ba, bj, and bk. In the tests, one and two images were used for training and the remaining images were used for testing. Table 4 and Figure 10 show the obtained results.

From Table 4, the best results in all experiments are obtained using two images for training and one image for testing. The use of the first SSA component gives better results than PC2, PC3, PC1+PC2, and PC1+PC2+PC3. We also note that the window length M = 5 combined with the first component achieved a good recognition rate of 95.00%. Figure 10 shows that the system achieved a recognition rate of 100% at rank 9.

Evaluations Of Multimodal Biometric Identification Systems
Experimental results of the proposed face-iris multimodal biometric system are presented in this section. They were obtained on two chimeric multimodal databases: the "CASIA iris-ORL face" database and the "CASIA iris-FERET face" database. In the previous section, the unimodal biometric systems were evaluated in order to select the best parameters for the feature extraction step of the face and iris systems, and hence to construct a robust multimodal biometric system by combining the two unimodal systems with the proposed fusion scheme shown in Figure 11. The simplest way to create a multimodal database is to create "virtual" individuals by randomly associating the identities of different individuals from different databases; in this case, face and iris databases are associated.

Tests on CASIA-ORL Multimodal Database
In the evaluation process, 40 subjects were considered, each with seven images. We chose three images for training and the remaining images were used as testing images. The proposed fusion scheme was implemented, in which the min-max and Z-score normalization methods were used to normalize the scores generated by the face and iris systems. The min rule, max rule, sum rule, and weighted sum rule were used as fusion rules, and decision level fusion was performed with the OR rule. The fusion rules used are defined by the following equations.
Let n_i^m denote the normalized score produced by matcher m (m = 1, 2, . . ., M, where M is the number of matchers) for user i (i = 1, 2, . . ., I, where I is the number of individuals in the database). The fused score for user i is denoted f_i [2] and is given by:

• Sum rule: f_i = Σ_{m=1}^{M} n_i^m
• Maximum rule (Max rule): f_i = max(n_i^1, n_i^2, . . ., n_i^M)
• Minimum rule (Min rule): f_i = min(n_i^1, n_i^2, . . ., n_i^M)
• Weighted sum rule: f_i = Σ_{m=1}^{M} w_m n_i^m, where the weights w_m sum to one

Experimental results are shown in Table 5 and Figure 12. The best recognition rate of the proposed face-iris multimodal biometric system is obtained with min-max normalization and fusion with the max rule: a recognition rate of 99.16% was reached at rank 1. The CMC curve in Figure 12 shows that the proposed system achieved 100% at rank 5.
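The normalization methods and fusion rules above can be sketched as follows. This is an illustrative implementation of the standard formulas, not the authors' code; the function names and the equal default weight are assumptions.

```python
import numpy as np

def min_max_norm(scores):
    """Min-max normalization: map a matcher's scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score_norm(scores):
    """Z-score normalization: zero mean, unit variance."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def fuse(face_scores, iris_scores, rule="max", w_face=0.5):
    """Combine two normalized score vectors with a fixed fusion rule."""
    f, g = np.asarray(face_scores), np.asarray(iris_scores)
    if rule == "sum":
        return f + g
    if rule == "max":
        return np.maximum(f, g)
    if rule == "min":
        return np.minimum(f, g)
    if rule == "wsum":
        return w_face * f + (1.0 - w_face) * g
    raise ValueError(f"unknown fusion rule: {rule}")
```

For instance, `fuse(min_max_norm(face), min_max_norm(iris), rule="max")` corresponds to the best-performing combination reported for the CASIA-ORL database.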

Tests on CASIA-FERET Multimodal Database
In this experiment, 200 subjects were taken randomly from the CASIA and FERET databases to construct a chimeric multimodal database. Each subject had three images; two were used for training and one for testing. The proposed fusion scheme was implemented as for the first database. The obtained results are shown in Table 6 and Figure 13.
Table 6 gives the recognition rates of the proposed multimodal system using the min-max and Z-score normalization methods with the min, max, sum, and weighted sum fusion rules. The best recognition rate reached 99.33% with min-max normalization and max rule fusion. Moreover, the proposed system is robust, achieving a recognition rate of 100% at rank 3.
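The rank-k recognition rates reported here (and plotted as CMC curves in Figures 12 and 13) can be computed from a probe-by-gallery score matrix as sketched below. This is a generic illustration of how a CMC curve is evaluated, assuming higher scores indicate better matches; the function name is an assumption.

```python
import numpy as np

def cmc(score_matrix, true_ids, max_rank=5):
    """Cumulative match characteristic: fraction of probes whose true
    identity appears among the top-k ranked gallery entries, for k = 1..max_rank."""
    S = np.asarray(score_matrix, dtype=float)  # shape: probes x gallery
    order = np.argsort(-S, axis=1)             # gallery indices, best score first
    ranks = np.array([np.where(order[p] == true_ids[p])[0][0]
                      for p in range(S.shape[0])])
    return [(ranks < k).mean() for k in range(1, max_rank + 1)]
```

A rank-1 value of 0.9933 with the curve reaching 1.0 by rank 3 would correspond to the CASIA-FERET result reported above.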



Conclusion
This paper describes an effective and efficient face-iris multimodal biometric system that has appealingly low complexity and focuses on diverse and complementary features. The iris features are extracted with a multi-resolution 2D Log-Gabor filter combined with SRKDA, while the facial features are computed using the SSA-NIG method. The evaluation of the unimodal biometric traits allowed selecting the best parameters of the two feature extraction methods to construct a reliable multimodal system. The fusion of face-iris features is performed using score-level and decision-level fusion. Experiments were performed on the CASIA-ORL and CASIA-FERET databases. The obtained results show that the proposed face-iris multimodal system improves on the performance of unimodal biometrics based on face or iris alone. The best recognition rate is obtained with min-max normalization and max rule fusion, reaching 99.16% and 99.33% for the CASIA-ORL and CASIA-FERET databases, respectively. In future work, we plan to explore the potential of deep learning to extract high-level representations from data, which will be combined with traditional machine learning to compute useful features.
Feng et al. combined face and palm print at feature level fusion. In 2005, Jain et al. [13] combined face, fingerprint, and hand geometry at the score level. In 2006, Li et al. [14] combined palm print, hand shape, and knuckle print at feature level fusion. In 2011, Meraoumia et al. [15] integrated two different modalities, palm print and finger knuckle print, at score level fusion. In 2013, Eskandari and Toygar [16] combined face and iris at feature level fusion. In 2017, Elhoseny et al. [17] investigated the fusion of fingerprint and iris in the identification process. In the same year, Hezil and Boukrouche [18] combined ear and palm print at feature level. In 2018, Kabir et al. [3] proposed multi-biometric systems based on genuine-impostor score fusion. In 2019, Walia et al. [19] proposed a multimodal biometric system integrating three complementary biometric traits, namely iris, finger vein, and fingerprint, based on an optimal score level fusion model. Also in 2019, Mansour et al. [20] proposed multi-factor authentication based on multimodal biometrics (MFA-MB).

Figure 1. Block diagram of the proposed multimodal biometric system.
where K_2^x and K_4^x are the second-order and fourth-order cumulants of the NIG pdf, respectively.
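The cumulants mentioned above give a method-of-moments route to the NIG parameters. The sketch below assumes a symmetric, zero-location NIG (β = 0, μ = 0), for which the standard relations variance = δ/α and excess kurtosis = 3/(αδ) hold; the function name and the simulation check are illustrative, not the authors' code.

```python
import numpy as np

def nig_moment_estimate(x):
    """Estimate (alpha, delta) of a symmetric NIG (beta = 0, mu = 0)
    from the second and fourth sample cumulants."""
    x = np.asarray(x, dtype=float)
    k2 = x.var()                                       # second cumulant (variance)
    k4 = ((x - x.mean()) ** 4).mean() - 3 * k2 ** 2    # fourth cumulant
    excess = max(k4 / k2 ** 2, 1e-6)                   # guard: kurtosis must be > 0
    delta = np.sqrt(3.0 * k2 / excess)                 # from delta*alpha = 3/excess
    alpha = delta / k2                                 # from delta/alpha = variance
    return alpha, delta

# Check on synthetic data: a symmetric NIG variable is a normal variance-mean
# mixture, X = sqrt(Z) * N(0, 1) with Z ~ inverse-Gaussian(delta/alpha, delta^2).
rng = np.random.default_rng(0)
alpha_true, delta_true = 2.0, 1.0
z = rng.wald(delta_true / alpha_true, delta_true ** 2, size=200_000)
x = np.sqrt(z) * rng.standard_normal(200_000)
alpha_hat, delta_hat = nig_moment_estimate(x)
```

With enough samples the estimates land close to the true (α, δ) = (2, 1), illustrating how cumulant-based estimation recovers the NIG shape parameters used as features.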

Figure 4. One-dimensional singular spectrum analysis (1D-SSA) of a signal.

The NIG probability density function (pdf) can model non-linear signals, such as financial data, economic data, images, and video signals. In this work, NIG modeling is used to capture the statistical variations in the SSA image signal, and the parameters estimated from the NIG pdf are then used as features. The NIG pdf is a variance-mean mixture density, in which the mixing distribution is the inverse Gaussian density; it is given by:

f(x; α, β, μ, δ) = (αδ/π) · exp(δγ + β(x − μ)) · K_1(α√(δ² + (x − μ)²)) / √(δ² + (x − μ)²),

where γ = √(α² − β²) and K_1 is the modified Bessel function of the second kind of order one.
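The NIG density can be evaluated numerically with SciPy's `norminvgauss` distribution. Note the parametrization mapping, which is an assumption stated here explicitly: SciPy's standardized shape parameters relate to (α, β, μ, δ) as a = αδ, b = βδ, loc = μ, scale = δ.

```python
import numpy as np
from scipy.stats import norminvgauss

# Map (alpha, beta, mu, delta) to scipy's (a, b, loc, scale) parametrization.
alpha, beta, mu, delta = 2.0, 0.0, 0.0, 1.0
dist = norminvgauss(a=alpha * delta, b=beta * delta, loc=mu, scale=delta)

x = np.linspace(-10.0, 10.0, 2001)
pdf = dist.pdf(x)
area = float(np.sum(pdf) * (x[1] - x[0]))   # numerical check: mass ~ 1
```

Varying α and δ on such a grid reproduces the kind of shape comparison shown in Figure 5: larger α sharpens the peak, while δ acts as a scale parameter.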

Figure 5. Effect of α and δ on the shape of the normal inverse Gaussian probability density function (NIG pdf).

Figure 6. The corresponding normal inverse Gaussian probability density functions (NIG pdfs) constructed from the estimated α and δ (in red).


Figure 7. Examples of face and iris images from (a) Chinese academy of science institute of automation (CASIA), (b) Olivetti research laboratory (ORL), and (c) face recognition technology (FERET).


Figure 8. Cumulative match characteristic (CMC) curve for the iris unimodal biometric system performed on the Chinese academy of sciences institute of automation (CASIA) database.


Figure 9. Cumulative match characteristic (CMC) curve for the face unimodal biometric system performed on the Olivetti research laboratory (ORL) face database.


Figure 10. Cumulative match characteristic (CMC) curve of the face unimodal biometric system performed on the facial recognition technology (FERET) face database.


Figure 11. Scheme of the proposed face-iris multimodal biometric system.

Table 2. Recognition rate of the iris unimodal biometric system performed on the CASIA database.

Table 3. Recognition rate of the face unimodal biometric system performed on the ORL face database.


Table 4. Recognition rate of the face unimodal biometric system performed on the FERET face database.


Table 5. Recognition rates of the proposed face-iris multimodal system on the CASIA-ORL database.


Table 6. Recognition rates of the proposed face-iris multimodal system on the CASIA-FERET database.
