Article

A Score-Fusion Method Based on the Sine Cosine Algorithm for Enhanced Multimodal Biometric Authentication

1 Computer Science Department, Faculty of Computers & Information, Mansoura University, Mansoura 35516, Egypt
2 Computer Science Department, College of Computer and Information Sciences, Jouf University, Jouf 72388, Saudi Arabia
3 Information Systems Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
4 Information Systems Department, College of Computer and Information Sciences, Jouf University, Jouf 72388, Saudi Arabia
* Author to whom correspondence should be addressed.
Sensors 2026, 26(1), 208; https://doi.org/10.3390/s26010208
Submission received: 24 November 2025 / Revised: 24 December 2025 / Accepted: 25 December 2025 / Published: 28 December 2025
(This article belongs to the Section Biosensors)

Abstract

Score fusion is a technique that combines the matching scores from multiple biometric modalities in an authentication system. Biometric modalities are unique physical or behavioral characteristics that can be used to identify individuals, and biometric authentication systems use them to verify or identify a person. Score fusion can improve the performance of biometric authentication systems by exploiting the complementary strengths of different modalities and reducing the impact of noise and outliers from individual modalities. This paper proposes a new score fusion method based on the Sine Cosine Algorithm (SCA), a meta-heuristic optimization algorithm used in various optimization problems. The proposed method extracts features from multiple biometric sources and then computes intra-class and inter-class scores for each modality. It then normalizes the scores for a given user across the different biometric modalities, and the mean, maximum, minimum, median, summation, and hyperbolic tangent (Tanh) rules are used to aggregate the scores. The role of the SCA is to find the optimal parameters for fusing the normalized scores. We evaluated our method on the CASIA-V3-Interval iris dataset and the AT&T (ORL) face database. The proposed method outperforms existing optimization-based methods under identical experimental conditions and achieves an Equal Error Rate (EER) of 1.003% when fusing the left iris, right iris, and face. This represents an improvement of up to 85.89% over unimodal baselines. These findings validate the SCA’s effectiveness for adaptive score fusion in multimodal biometric systems.

1. Introduction

Multi-biometric systems are personal identification and verification systems that use multiple biometric traits to identify an individual. Biometric traits are unique physical or behavioral characteristics that can be used to identify a person, such as fingerprints, facial features, iris patterns, voice patterns, and gait. Multi-biometric systems are more accurate and secure than single-biometric systems because they use multiple traits to authenticate an individual. They can be used in various applications, including access control, law enforcement, border security, and commercial applications. Multi-biometric systems are a rapidly evolving field of research and development, with new biometric modalities and fusion algorithms being introduced steadily [1].
Multi-biometric fusion systems can be classified into three main types: Feature-level fusion, Score-level fusion, and Decision-level fusion. In feature-level fusion, the features extracted from the different biometric traits are combined into a single feature vector. This feature vector is then used to train a classifier, which is used to authenticate the individual. Feature-level fusion enhances performance by synergizing complementary information from diverse biometric modalities [2]. In score-level fusion, the matching scores from the different biometric traits are combined to decide whether to authenticate the individual. Several score-level fusion algorithms can be used, such as weighted sum, majority voting, and Dempster-Shafer fusion [3]. Decision-level fusion combines decisions from multiple sources to improve accuracy. It is a widely used technique in machine learning, artificial intelligence, and signal processing. Common decision fusion algorithms include majority voting, weighted average, and Bayesian fusion [4]. Score-level fusion has many advantages over other multi-biometric fusion methods, including:
  • Simplicity: Score-level fusion is relatively simple to implement in terms of the algorithms required and the amount of data needed. This makes it a good choice for applications where resources are limited.
  • Efficiency: Score-level fusion is also very efficient, meaning that it can be implemented to run quickly on low-powered devices. This is important for applications where real-time performance is critical.
  • Robustness: Score-level fusion is robust to noise and environmental variations, meaning it can still perform well even when the biometric data is of poor quality or the system operates in a challenging environment.
  • Flexibility: Score-level fusion can combine the matching scores from any biometric modality, making it a very flexible fusion method.
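As an illustration of the weighted-sum rule mentioned above, the following is a minimal score-level fusion sketch; the scores, weights, and function name are hypothetical, not from the paper:

```python
def weighted_sum_fusion(scores, weights):
    """Combine per-modality match scores with a weighted sum.

    `scores` and `weights` are equal-length sequences; the weights are
    assumed to be non-negative and to sum to 1.
    """
    if len(scores) != len(weights):
        raise ValueError("one weight per modality is required")
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical normalized match scores for face, left iris, and right iris
fused = weighted_sum_fusion([0.82, 0.91, 0.88], [0.2, 0.4, 0.4])
```

The fused value can then be compared against a decision threshold to accept or reject the claimed identity.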

Problem Statement

Deep learning has been used to develop new score-level fusion methods for Multi-biometric systems [5]. These methods improve the accuracy and security of score-level fusion; however, they are limited by their complexity, large data requirements, and lack of interpretability. On the other hand, the performance of score-level fusion algorithms based on the weighted sum is sensitive to the selection of fusion parameters; the weights assigned to the matching scores from different biometric modalities can significantly impact the system’s accuracy. It is therefore important to carefully tune the fusion parameters for a specific Multi-biometric system to achieve optimal performance.
This study presents a score-level fusion algorithm for Multi-biometric authentication systems. It employs the Sine Cosine Algorithm (SCA) to enhance the Multi-biometric system’s recognition accuracy. The contributions of this study are summarized as follows:
1- Proposes a novel SCA-based adaptive score fusion framework for multimodal biometrics;
2- Demonstrates superior performance over PSO and GWO using various performance metrics;
3- Validates the approach on realistic iris-face combinations using the CASIA and ORL datasets.
The SCA is a meta-heuristic optimization algorithm used in various optimization problems [6]. The proposed method for multimodal biometric fusion first extracts features from multiple biometric sources, such as fingerprints, iris scans, and facial images. It then computes intra- and inter-class scores for each modality, which measure the similarity between biometric samples from the same individual and different individuals, respectively. The scores are then normalized for each user, and various aggregation techniques combine the scores from different modalities. An SCA optimization algorithm finds the optimal parameters for the aggregation process. Our method learns to combine the matching scores from different biometric modalities based on a training dataset of labeled biometric samples. This allows the algorithm to adapt to the biometric modalities’ specific characteristics. Additionally, our method is relatively easy to implement and can be used with various biometric modalities. It has the potential to be used in a wide range of applications, such as access control systems, law enforcement and security systems, financial and banking systems, healthcare systems, and mobile devices.
The remaining sections of this study are organized as follows: Section 2 summarizes the relevant research works for Multi-biometric systems. Section 3 describes the proposed score fusion algorithm. Section 4 introduces the experiments and performance evaluation. Finally, the paper is concluded in Section 5.

2. Related Works

Biometric fusion combines multiple biometric modalities to improve the accuracy, robustness, and cost-effectiveness of biometric systems. It works by combining information from multiple modalities to compensate for the weaknesses of each individual modality. Recent methods for Multi-biometric fusion are discussed in the following subsections.

2.1. Feature-Level Fusion

Feature-level fusion is often used in applications with noisy or incomplete biometric data. Recent research on biometric fusion has focused on developing new and improved feature fusion algorithms. As presented in [7], the authors proposed a feature-level Multi-biometric fusion algorithm called Dis-Eigen. They evaluated this algorithm on a dataset of face and fingerprint images and achieved an identification rate of 93.70%. In ref. [8], authors proposed a biometric recognition system using two feature extraction algorithms: the nearest neighbor algorithm for fingerprints and the speedup robust feature (SURF) algorithm for irises. Their system achieved an accuracy of 98.326%. As presented in [2], the authors evaluated a proposed model using deep features extracted from three popular pre-trained CNN models: AlexNet, VGG16, and GoogleNet. The model was tested on two benchmark datasets and achieved an accuracy of 99.05%. As presented in [9], the authors propose using fingerprint and palm print identification, two popular biometric systems, to improve the accuracy of individual identification. The authors use a deep neural network (DNN) to extract features from the fingerprint and palm print images and achieve an accuracy of 97.6%. As presented in [10], the authors propose a Multi-biometric identification system that combines fingerprint, palm print, and hand vein features. The system extracts features from each modality and combines them into a single feature vector. This feature vector is then used to generate a fuzzy vault, a secure way to store biometric data. In the recognition stage, the test person’s combined feature vector is compared to the fuzzy vault database to identify the individual. The proposed system was evaluated and achieved an accuracy of 98.5%. As presented in [11], the authors developed a face recognition system that uses the Rectangle Histogram of Oriented Gradients (R-HOG) feature extraction algorithm. 
The system achieves a low equal error rate (EER) and a high recognition accuracy of 99%. As presented in [12], the authors used different feature extraction algorithms for biometric modalities: 2-dimensional Principal Component Analysis (2DPCA) for the iris, Scale Invariant Feature Transform (SIFT) for signature, and Mel-frequency cepstral coefficients for speech. They then used a Genetic Algorithm to optimize the extracted features. Finally, they used an Artificial Neural Network (ANN) to classify the features and identify individuals. The proposed algorithm achieved an overall classification accuracy of 96–98%. Table 1 summarizes the recent techniques for feature fusion that have been noticed for their significant contributions to the multi-biometric authentication system.

2.2. Score-Level Fusion

Matching score level fusion is often used in applications where high accuracy is required. In ref. [13], the authors proposed a model that first uses Principal Component Analysis (PCA) to extract features from 3D face images and then uses Iterative Closest Point (ICP) to extract features from 3D ear images. Finally, the model fuses the Face and ear features using score-level fusion. The proposed model achieves an accuracy of 99.25%. As presented in [3], the authors proposed a multimodal biometric framework that uses finger-knuckle-print (FKP) and iris features to authenticate individuals. It uses the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) algorithms to extract features from the FKP images and the Log Gabor wavelet to extract features from the iris images. The extracted features are then reduced in dimensionality using principal component analysis (PCA). The FKP and iris features are combined at the match score level using a neuro-fuzzy neural network classifier. The proposed framework was evaluated on the Poly-U and CASIA databases and achieved a promising recognition accuracy.
As presented in [14], the authors proposed a multimodal biometric person recognition system developed using a Convolutional Neural Network (CNN) for facial features and the Oriented FAST and Rotated BRIEF (ORB) algorithm for fingerprint features. The two features are fused at the match score level using a weighted sum rule, which achieves a recognition rate of 99.38%. As presented in [15], the authors proposed an improved local binary pattern (LBP) coding method to extract more robust face features. They also improve the conventional endpoint detection technology, voice activity detection (VAD), to more accurately detect voice mute and transition information, which boosts the effectiveness of voice matching. The proposed system achieves an accuracy of 98%.
As presented in [16], the authors proposed a way to combine the scores from multiple biometric systems, called weighted quasi-arithmetic mean (WQAM). WQAMs are estimated using different trigonometric functions. The proposed fusion scheme has the properties of both weighted mean and quasi-arithmetic mean and achieved a recognition rate of 97.22%. As presented in [17], the authors proposed an algorithm that preprocesses the palm print and face images and then extracts features from each image. The matching score for each trait is then calculated using the correlation coefficient. Finally, the matching scores are combined using t-norm-based score level fusion. The proposed algorithm achieves a Genuine Acceptance Rate (GAR) of 99.7%. As presented in [18], the authors used k-means clustering to divide the score range for each biometric modality into three zones of interest. They then apply two fusion approaches to the extracted regions: (1) decision tree combined with weighted sum (BCC) and (2) fuzzy logic (BFL). The BCC fusion approach achieves an accuracy of 95%, and the BFL fusion approach achieves an accuracy of 94.44%. Table 2 summarizes the recent techniques for score fusion that have been noticed for their significant contributions to Multi-biometric authentication systems.
Adaptive score fusion has recently made use of deep learning architectures. For example, Wang et al. [19] achieved state-of-the-art results on multimodal biometric benchmarks by proposing a gated attention network to dynamically learn modality-specific weights. Comparably, Zhang et al. [20] reported promising EER on a private dataset using a transformer-based fusion module that captures cross-modal dependencies at the score level. Although these techniques show remarkable accuracy, they frequently necessitate large labeled datasets, high processing power, and opaque weight assignment. The proposed SCA-based method, on the other hand, provides a lightweight, comprehensible, and data-efficient substitute that is especially well-suited for applications with limited resources or privacy concerns.

2.3. Decision-Level Fusion

Decision-level fusion is often used in applications with multiple biometric modalities, and/or the system must handle noisy or incomplete data. In ref. [4], authors proposed three decision-level fusion schemes for image recognition: Local Decision Fusion (LDF), Global Decision Fusion (GDF), and Local-Global Decision Fusion (LGDF). Their proposed LGDF method outperformed the feature-score hybrid fusion method by improving the average recognition rate by 6.75%. In ref. [21], the authors proposed a deep learning-based convolutional neural network architecture for classifying gender from fingerprints of each of the five finger types. They evaluate the performance of the proposed architecture and show that it improves classification accuracy by 18.72% overall, compared to the average classification accuracy of single classification models. In ref. [22], authors proposed a method for encrypting and compressing biometric data, which makes biometric data more secure and efficient for transmission over wireless networks. Better recognition rates are obtained when individual similarity scores are combined for the final decision. In ref. [23], authors presented a method to combine face and iris data for biometric systems, focusing on decision-level fusion to create a robust system. In ref. [24], authors proposed a fingerprint-based Multi-biometric cryptosystem (MBC) that uses decision-level fusion to improve security and accuracy. They also use hash functions to protect each biometric trait further. Experimental results and security analysis show that MBC outperforms single-biometric cryptosystems (SBCs) in security and accuracy. In ref. [25], authors proposed a fusion strategy that combines three classifiers based on feature and score-level fusion using a decision-level fusion rule. The strategy achieved a recognition accuracy of 98.75%. 
Table 3 summarizes the recent techniques for decision fusion that have been noticed for their significant contributions to Multi-biometric authentication systems.

3. Methodology

Score fusion is the process of combining the matching scores from multiple biometric modalities to produce a single score representing the overall confidence that an individual is who they claim to be. Score fusion can improve the accuracy and robustness of biometric authentication systems by reducing the impact of noise and outliers from individual modalities. The proposed methodology utilizes SCA to find the optimal parameters for score fusion. Figure 1 shows the architecture diagram for the proposed fusion method. The system described in the figure can be used for various biometric authentication applications.
The system first extracts features from multiple biometric sources. The feature extraction step extracts the most relevant features from each biometric modality. These features are used to represent the biometric template of an individual. For the Face, these features may include facial landmarks, skin texture, and eye color. For the iris, the features may include the iris pattern. For the Fingerprint, the features may include the fingerprint pattern and minutiae. Once the features have been extracted, the system computes intra-/inter-class scores for each modality. The intra-class scores for a given user under any biometric modality measure the similarity between the user’s own biometric templates. The inter-class scores for a given user under any biometric modality measure the similarity between the user’s biometric templates and the biometric templates of other users.
A normalization process is needed since scores are computed using different biometric modalities. Score normalization is a process that converts the comparator’s parameters and data types into a common format. The three most used score normalization techniques are Min-Max, Z-Score, and Hyperbolic Tangent [26]. Then, the mean, maximum, minimum, median, summation, and Tanh are used to aggregate the scores for a given user using different biometric modalities. These aggregated scores can then be used to decide whether to accept or reject the user’s identity claim. The system uses an SCA-based fusion method to find the optimal parameters to fuse the normalized scores. The proposed SCA-based fusion method is a weighted sum method that assigns different weights to the scores from each modality based on their reliability using a training dataset of labeled biometric samples. It combines the normalized scores from different modalities into a single score.
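The three normalization techniques named above can be sketched as follows; the Tanh variant uses the commonly cited simplified form with a fixed 0.01 scale factor, which is an assumption here since the paper does not give the exact estimator:

```python
import math

def min_max_norm(scores):
    """Map scores linearly into [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def z_score_norm(scores):
    """Center on the mean and scale by the standard deviation."""
    mu = sum(scores) / len(scores)
    sigma = math.sqrt(sum((s - mu) ** 2 for s in scores) / len(scores))
    return [(s - mu) / sigma for s in scores]

def tanh_norm(scores):
    """Simplified tanh normalization; output lies in (0, 1)."""
    mu = sum(scores) / len(scores)
    sigma = math.sqrt(sum((s - mu) ** 2 for s in scores) / len(scores))
    return [0.5 * (math.tanh(0.01 * (s - mu) / sigma) + 1) for s in scores]
```

Min-Max is sensitive to outliers at the extremes, while the tanh mapping bounds their influence, which is why the choice of normalizer can affect fusion accuracy.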
Let $K$ be the number of users in the Multi-biometric system, $m$ the number of biometric templates per user, and $n$ the size of each biometric template. Let $x_i^r$ denote the $i$th biometric template of the $k$th user under the $r$th biometric modality. The intra-class score for the $k$th user under the $r$th biometric modality is defined as follows (Equation (1)):
$X_r^k = \sum_{i=1}^{m} \sum_{j=i+1}^{m} d(x_i, x_j)$
where $d(x, y)$ is the Euclidean distance between two templates. The inter-class score for the $k$th user under the $r$th biometric modality is defined as follows (Equation (2)):
$Y_r^k = \sum_{i=1}^{m} \sum_{j=1,\, j \neq k}^{K} d(x_i, y_j)$
where $x_i$ is the $i$th biometric template of the $k$th user and $y_j$ is a biometric template of the $j$th user. These computations are repeated for every biometric modality available in the Multi-biometric system to obtain $X_r$ and $Y_r$.
Given a total of R biometric modalities, the mean, maximum, minimum, median, summation, and Tanh intra scores are defined, respectively, as follows (Equations (3)–(8)):
$X_{\mathrm{mean}} = \operatorname{mean}(X_1, \ldots, X_r, \ldots, X_R)$
$X_{\mathrm{max}} = \max(X_1, \ldots, X_r, \ldots, X_R)$
$X_{\mathrm{min}} = \min(X_1, \ldots, X_r, \ldots, X_R)$
$X_{\mathrm{median}} = \operatorname{median}(X_1, \ldots, X_r, \ldots, X_R)$
$X_{\mathrm{sum}} = X_1 + \cdots + X_r + \cdots + X_R$
$X_{\mathrm{tanh}} = \tanh(X_1, \ldots, X_r, \ldots, X_R)$
The same definitions yield the mean, maximum, minimum, median, summation, and Tanh inter-class scores $Y_{\mathrm{mean}}$, $Y_{\mathrm{max}}$, $Y_{\mathrm{min}}$, $Y_{\mathrm{median}}$, $Y_{\mathrm{sum}}$, and $Y_{\mathrm{tanh}}$.
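The six per-comparison aggregates can be sketched as follows; because the paper does not spell out how Tanh is applied across the $R$ values, this sketch takes the hyperbolic tangent of their sum, which is an assumption:

```python
import math
import statistics

def aggregate(per_modality):
    """Compute the six aggregation rules over one comparison's
    normalized scores from R modalities."""
    return {
        "mean": statistics.mean(per_modality),
        "max": max(per_modality),
        "min": min(per_modality),
        "median": statistics.median(per_modality),
        "sum": sum(per_modality),
        # Assumption: Tanh aggregate taken as tanh of the summed scores
        "tanh": math.tanh(sum(per_modality)),
    }

# Hypothetical normalized scores from three modalities
agg = aggregate([0.2, 0.4, 0.6])
```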
The SCA-based fusion algorithm is first initialized with a population of random solutions, each representing a set of fusion parameters for the matching scores from the different biometric modalities. The SCA then iteratively updates the population until it finds a solution that produces the lowest equal error rate. A candidate solution is represented by a vector $I = \{w_1, w_2, w_3, w_4, w_5, w_6, \theta\}$, where $\theta \in [lb, ub]$ and the weights $w_i$ lie in the range $[0, 1]$ subject to the constraint in Equation (9):
$\sum_{i=1}^{6} w_i = 1$
The weight values are used to compute the fused intra/inter scores for the training set, respectively, as follows (Equations (10) and (11)):
$X_{\mathrm{fused}} = w_1 X_{\mathrm{mean}} + w_2 X_{\mathrm{max}} + w_3 X_{\mathrm{min}} + w_4 X_{\mathrm{median}} + w_5 X_{\mathrm{sum}} + w_6 X_{\mathrm{tanh}}$
$Y_{\mathrm{fused}} = w_1 Y_{\mathrm{mean}} + w_2 Y_{\mathrm{max}} + w_3 Y_{\mathrm{min}} + w_4 Y_{\mathrm{median}} + w_5 Y_{\mathrm{sum}} + w_6 Y_{\mathrm{tanh}}$
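The weighted combination of Equations (10) and (11) might look like the following; the aggregate values passed in are hypothetical:

```python
def fuse(stats, w):
    """Weighted sum of the six aggregates.

    stats: dict with keys mean/max/min/median/sum/tanh.
    w: six weights, assumed non-negative and summing to 1.
    """
    keys = ("mean", "max", "min", "median", "sum", "tanh")
    return sum(wi * stats[k] for wi, k in zip(w, keys))

# Equal weights over six hypothetical aggregate values
fused = fuse(
    {"mean": 0.4, "max": 0.6, "min": 0.2, "median": 0.4, "sum": 1.2, "tanh": 0.83},
    [1 / 6] * 6,
)
```

The same function applies unchanged to the intra-class and inter-class aggregates, since both use the same weight vector.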
During the evolution of the SCA population to find the best fusion parameters, it is necessary to ensure that the candidate solutions are bound to the search space, as follows (Equation (12)):
$I = \begin{cases} I_{\mathrm{new}}, & \text{if } \left(\sum_{i=1}^{6} w_i \neq 1\right) \;\text{or}\; \left(\exists i: w_i > 1\right) \;\text{or}\; \left(\exists i: w_i < 0\right) \\ I, & \text{otherwise} \end{cases}$
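A minimal sketch of the feasibility check and repair behind Equation (12), under the assumption that an infeasible candidate is replaced by a freshly drawn random one:

```python
import random

def is_feasible(I, lb, ub, tol=1e-9):
    """I = [w1..w6, theta]; feasible iff the weights lie in [0, 1],
    sum to 1, and theta lies in [lb, ub]."""
    w, theta = I[:6], I[6]
    return (abs(sum(w) - 1.0) <= tol
            and all(0.0 <= wi <= 1.0 for wi in w)
            and lb <= theta <= ub)

def repair(I, lb, ub):
    """Replace an infeasible candidate with a new random feasible one."""
    if is_feasible(I, lb, ub):
        return I
    w = [random.random() for _ in range(6)]
    total = sum(w)
    return [wi / total for wi in w] + [random.uniform(lb, ub)]
```

Normalizing the freshly drawn weights by their sum guarantees the unit-sum constraint by construction.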
Each candidate solution is evaluated using its fitness value $f$, which represents the biometric system’s equal error rate (EER); the objective is to minimize this error rate. The fused scores are used to compute the EER for the biometric system at the given threshold. The fitness value $f$ of each candidate solution is computed as follows (Equations (13)–(15)):
$\mathrm{FAR} = \frac{\left|\{\, i : Y_{\mathrm{fused}}^{(i)} < \theta \,\}\right|}{\left|Y_{\mathrm{fused}}\right|} \times 100$
$\mathrm{FRR} = \frac{\left|\{\, i : X_{\mathrm{fused}}^{(i)} \geq \theta \,\}\right|}{\left|X_{\mathrm{fused}}\right|} \times 100$
$f = \frac{\mathrm{FAR} + \mathrm{FRR}}{2}$
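The fitness of Equations (13)–(15) can be sketched as follows, treating fused scores as distances so that imposter comparisons below the threshold count as false accepts and genuine comparisons at or above it as false rejects:

```python
def fitness(intra_fused, inter_fused, theta):
    """Average of FAR and FRR (in percent) at threshold theta.

    intra_fused: fused genuine (same-user) distance scores.
    inter_fused: fused imposter (cross-user) distance scores.
    """
    far = 100.0 * sum(1 for y in inter_fused if y < theta) / len(inter_fused)
    frr = 100.0 * sum(1 for x in intra_fused if x >= theta) / len(intra_fused)
    return (far + frr) / 2.0
```

A perfectly separating threshold yields a fitness of zero, which is exactly what the SCA search tries to approach.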
The pseudo code of the proposed SCA-based score fusion algorithm is given by Algorithm 1.
Algorithm 1. SCA-based score fusion algorithm
Input: population size $p$, number of iterations $t_{max}$, the mean intra/inter scores ($X_{mean}$ / $Y_{mean}$), the maximum intra/inter scores ($X_{max}$ / $Y_{max}$), the minimum intra/inter scores ($X_{min}$ / $Y_{min}$), the median intra/inter scores ($X_{median}$ / $Y_{median}$), the sum intra/inter scores ($X_{sum}$ / $Y_{sum}$), and the tanh intra/inter scores ($X_{tanh}$ / $Y_{tanh}$).
Output: the best score fusion parameters $I_{best}$.
1  Create an initial population $Pop_0$ of $p$ candidate solutions $I$, each of length 7.
2  Calculate the fitness value of each solution $I$ in $Pop_0$ using Equation (15).
3  Obtain the best (minimum) fitness value $f_{best}$ in $Pop_0$.
4  Record the corresponding best solution $I_{best}$ in $Pop_0$.
5  Set the initial values of $r_1$ and $r_2$.
6  For $t = 2$ to $t_{max}$
7    Set the tuning parameter $a$ to 2.
8    Set $r_1 = a - t \times (a / t_{max})$.
9    For $i = 1$ to $p$
10     For $j = 1$ to 7
11       Set $r_2 = 2\pi \times r$, where $r \in [0, 1]$ is a uniform random number; set $r_3 = 2r$.
12       Set $r_4 = r$.
13       If $r_4 < 0.5$ Then
14         $I(i, j) = I(i, j) + r_1 \times \sin(r_2) \times \left| r_3 \times I_{best}(j) - I(i, j) \right|$
15       Else
16         $I(i, j) = I(i, j) + r_1 \times \cos(r_2) \times \left| r_3 \times I_{best}(j) - I(i, j) \right|$
17       End
18     End
19   End
20   Ensure that every solution in the population lies within the search space (Equation (12)).
21   Calculate the fitness value of each solution $I$ in $Pop_t$ using Equation (15).
22   Update the best fitness value $f_{best}$ in $Pop_t$.
23   Update the corresponding best solution $I_{best}$ in $Pop_t$.
24 End
25 Return the best-found solution $I_{best}$ from the last population.
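Under the assumptions that out-of-range components are simply clamped to the bounds (in place of the resampling of Equation (12)) and that `fitness_fn` encapsulates Equation (15) on the training scores, Algorithm 1 can be sketched in Python as:

```python
import math
import random

def sca_optimize(fitness_fn, dim=7, pop_size=30, t_max=100, lb=0.0, ub=1.0):
    """Sine Cosine Algorithm minimizing fitness_fn over [lb, ub]^dim."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness_fn)
    f_best = fitness_fn(best)
    a = 2.0  # tuning parameter from Algorithm 1
    for t in range(1, t_max + 1):
        r1 = a - t * (a / t_max)  # exploration amplitude shrinks over time
        for sol in pop:
            for j in range(dim):
                r2 = 2 * math.pi * random.random()
                r3 = 2 * random.random()
                r4 = random.random()
                step = abs(r3 * best[j] - sol[j])
                if r4 < 0.5:
                    sol[j] += r1 * math.sin(r2) * step
                else:
                    sol[j] += r1 * math.cos(r2) * step
                # Assumption: clamp instead of resampling (Equation (12))
                sol[j] = min(max(sol[j], lb), ub)
        for sol in pop:
            f = fitness_fn(sol)
            if f < f_best:
                f_best, best = f, list(sol)
    return best, f_best

# Toy usage: minimize the sphere function over [-1, 1]^3
best, f = sca_optimize(lambda v: sum(x * x for x in v), dim=3, lb=-1, ub=1)
```

In the paper's setting, `fitness_fn` would decode the seven-element vector into the six weights and the threshold θ and return (FAR + FRR)/2 on the training set.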
The suggested fusion framework is modality-independent, meaning that any biometric modality (such as voice, palm print, or fingerprint) that generates a scalar match score is accepted. The score normalization step allows for this generality by mapping different score ranges into a common [0, 1] interval. Plug-and-play deployment is made possible by the fact that the same SCA-based optimization pipeline can be applied to any combination of modalities without requiring architectural modifications. The implementation details and the performance evaluation of the proposed algorithm are explained in detail in the next section.

4. Result Evaluation and Discussion

This section explains how our proposed method significantly improves recognition performance compared to unimodal biometric systems and highlights the most important contribution. We conducted experiments using the CASIA iris dataset [27] and the AT&T (ORL) face database [28] to evaluate the proposed method. The CASIA-V3-Interval iris dataset contains 146 subjects, each with images of their left and right eyes. We preprocessed the images by segmenting, normalizing, and converting them to binary iris codes using Libor Masek's code [29]. Masek's pipeline is a multi-step process that includes iris segmentation and localization using the Circular Hough Transform and Linear Hough Transform, followed by normalization using Daugman’s rubber sheet model, and finally, encoding the normalized iris region using 1D Log-Gabor filters and phase quantization to create binary iris templates. We then reshaped each iris template into a single binary iris code vector, which we used as input for the experiments. The AT&T (ORL) face database comprises 400 images of 40 individuals, each represented by ten facial photos exhibiting variations in facial expressions, lighting, and time. These images were captured against a plain black background and are 92 × 112 pixels in size, with 256 gray levels per pixel. Binary facial features were extracted using an optimized genetic algorithm transformation [30] applied to the features extracted using principal component analysis (PCA) [31]. In this experiment, 40 subjects were randomly selected from the CASIA-V3-Interval iris dataset and assigned to the ORL face dataset. During the experiments, left and right iris samples were paired with face samples for each of the 40 subjects. The samples were divided into two groups, with 60% allocated for training and 40% for testing. The generated scores are normalized by dividing by their length to account for the different lengths of the binary iris and face templates.
This ensures that all biometric scores are on the same scale. Multiple experiments were conducted using the Gray Wolf Optimizer and the Particle Swarm Optimizer to comprehensively evaluate the proposed method and compare it to other metaheuristic algorithms in the literature [32]. Experimental parameters are presented in Table 4, including fixed parameter values shared between the applied metaheuristic algorithms to facilitate equitable comparison of outcomes. Moreover, to account for the stochasticity of metaheuristic algorithms and generate robust results, we repeat each experiment ten times and report the mean results.
Four experiments are performed to compare the accuracy of the proposed fusion method with the original unimodal biometrics. The proposed fusion method was evaluated on four levels: left iris with right iris, left iris with Face, right iris with Face, and left and right iris with Face. The following metrics are used to evaluate the recognition accuracy: False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER). The False Acceptance Rate of an identification system is the percentage of times the system incorrectly grants access to an unauthorized person. The False Rejection Rate, on the other hand, is the percentage of times that the system incorrectly denies access to an authorized person. The Equal Error Rate is the point where the FAR and FRR are equal, meaning the system is equally likely to make either error [33]. Table 5 shows the results compared to the original unimodal system.
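A minimal sketch of how the EER can be located by scanning thresholds over distance-based score distributions; this is illustrative only, since in the proposed method the threshold is optimized jointly with the weights by the SCA:

```python
def eer(genuine, imposter):
    """Return the EER (percent) where FAR and FRR are closest.

    Distance scores are assumed: genuine comparisons should fall
    below the threshold, imposter comparisons above it.
    """
    best_gap, best_eer = float("inf"), None
    for theta in sorted(set(genuine) | set(imposter)):
        far = sum(1 for s in imposter if s < theta) / len(imposter)
        frr = sum(1 for s in genuine if s >= theta) / len(genuine)
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return 100.0 * best_eer
```

Perfectly separable distributions give an EER of 0%, and heavily overlapping ones approach 50%, which is why EER summarizes a system in a single threshold-invariant number.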
Based on the results in the table, the proposed fusion method performs well on all four levels, with an EER below 5% for all combinations of models. The lowest EER, 1.003%, is achieved by fusing the left and right iris with the Face. This suggests that the proposed fusion method effectively combines scores from different models to improve the overall accuracy of the identification system. The explanation for the proposed fusion method’s good performance is that it considers the complementary nature of different features. For example, the iris is a unique identifier that is difficult to forge or alter. The Face, on the other hand, is more susceptible to appearance changes due to aging, lighting conditions, and facial expressions. However, the Face can also provide additional information about the individual, such as gender, ethnicity, and age. By fusing information from both the iris and the Face, the proposed method can achieve higher accuracy than could be achieved with either feature alone.
Utilizing anatomical symmetry, the left + right iris fusion produces limited modality diversity but high intra-class consistency. A spoof-resistant trait (iris) and a user-friendly trait (face) are combined in face + iris fusion; however, face performance deteriorates in low light or when expressions change. For high-security settings where hardware overhead is acceptable, the tri-modal fusion (left + right iris + face) maximizes complementary information and achieves the lowest EER (1.003%) at the expense of requiring additional sensors.
Moreover, the proposed SCA optimizer finds the optimal weight values for each score and the optimal threshold for the final decision. This is important because different scores may have different levels of importance, and different thresholds may be appropriate for the recognition accuracy. By using the SCA optimizer to set the fusion parameters, the proposed fusion method can achieve a higher level of accuracy than could be achieved with a fixed set of parameters. Overall, the results in the table suggest that the proposed fusion method is a promising approach for improving the accuracy of identification systems. Furthermore, the genuine and imposter distributions are computed by considering all feasible database comparisons using the Hamming distance measure. The genuine and imposter distributions are visualized in Figure 2.
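The Hamming distance used for these genuine/imposter comparisons over binary templates can be sketched as follows; the normalization by template length mirrors the score-scaling step described earlier:

```python
def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length binary codes."""
    if len(code_a) != len(code_b):
        raise ValueError("templates must have equal length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

# Genuine pairs (same subject) should score low; imposter pairs high.
d = hamming_distance([0, 1, 1, 0], [0, 1, 0, 0])  # one of four bits differs
```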
The figure shows that the mean values of the genuine distribution are 57.22, 56.38, 7.91, 60.56, 14.41, 16.12, and 19.33 for the unimodal right iris, unimodal left iris, unimodal Face, proposed fusion of left and right iris, proposed fusion of left iris and Face, proposed fusion of right iris and Face, and proposed fusion of left and right iris with Face, respectively. The mean values of the imposter distribution are 68.95, 68.74, 10.02, 74.21, 17.78, 19.74, and 23.94 for the same modalities, respectively. This indicates that the proposed score-level fusion scheme significantly improves the separation between the genuine and imposter distributions compared to unimodal biometric systems. To further analyze the proposed method, the convergence curve for the proposed SCA optimizer is depicted in Figure 3. The curve represents the experiment for the proposed fusion of the training data, which contains the left and right iris with the Face.
The convergence curve for the SCA shows that the algorithm converges to a good solution within a reasonable number of iterations and achieves a low EER, progressing consistently towards the optimal solution. This performance is obtained by maintaining a sufficient number of candidate solutions that evolve over time, which allows the SCA to explore a large region of the solution space and locate the optimal solution. Moreover, the SCA's sine and cosine operators allow it to explore the solution space more efficiently than traditional search operators.
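For reference, the standard SCA position update from Mirjalili's original formulation [6], which the discussion above refers to, is

$$
X_i^{t+1} =
\begin{cases}
X_i^{t} + r_1 \sin(r_2) \left| r_3 P^{t} - X_i^{t} \right|, & r_4 < 0.5 \\
X_i^{t} + r_1 \cos(r_2) \left| r_3 P^{t} - X_i^{t} \right|, & r_4 \ge 0.5
\end{cases}
$$

where $P^t$ is the best solution found so far, $r_1$ decreases linearly from $a$ to 0 to shift the search from exploration to exploitation, $r_2 \in [0, 2\pi]$ sets the movement direction, $r_3$ randomly scales the attraction towards $P^t$, and $r_4 \in [0, 1]$ switches between the sine and cosine branches.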
Despite the lack of explicit spoofing experiments, the proposed fusion framework shows intrinsic resilience to presentation and noise attacks. First, by combining complementary modalities (such as the face and the texture-rich iris), the system lessens its dependence on any single vulnerable trait. Second, during fusion, compromised or noisy modalities are automatically down-weighted by the SCA-optimized weights, as reflected in the better separation of the genuine and imposter distributions (Figure 2). This behavior is consistent with the findings in [33], which showed that score-level fusion improves resilience to partial occlusion and sensor noise. Future research will involve formal evaluation under spoofing protocols, such as CASIA-Iris-AntiSpoofing or LivDet.
Two additional experiments are conducted using the Grey Wolf Optimizer (GWO) and the Particle Swarm Optimizer (PSO) to comprehensively evaluate the proposed fusion method against other metaheuristic algorithms. For all applied experiments, Table 6, Table 7 and Table 8 report the EER, the decidability metric, and the accuracy improvement ratio, respectively.
Table 6 shows that the SCA outperformed the GWO and PSO algorithms on all four models, indicating that it is more effective at finding the optimal parameters for the biometric systems. This superiority stems from the SCA's more effective exploration of the search space and its stronger ability to locate the global optimum compared with search algorithms such as GWO and PSO. We employ the Equal Error Rate (EER) as the main metric because of its threshold invariance and cross-study comparability, in accordance with accepted practice in biometric evaluation [33]. To give a complete picture of system performance, we also report the FAR and FRR (Table 5) and the decidability index d' (Table 7).
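As a concrete illustration of how the EER is obtained from match scores, the following sketch sweeps the decision threshold over the observed scores and reports the operating point where FAR and FRR are closest. This is a simplified illustration, not the paper's implementation; the function name `equal_error_rate` is ours.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: operating point where FAR (impostors accepted) equals
    FRR (genuines rejected). Scores are distances: smaller = better match."""
    eer, best_gap = 1.0, np.inf
    for thr in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor < thr)
        frr = np.mean(genuine >= thr)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return float(eer)
```

Fully separated genuine and impostor distributions yield an EER of 0; growing overlap pushes the EER towards 0.5.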
The results in Table 7 show that the SCA outperformed the GWO and PSO algorithms on all four biometric models with respect to the decidability metric d', which quantifies the separation between the genuine and impostor distributions and is defined by Equation (16):
$$ d' = \frac{\left| \mu_i - \mu_g \right|}{\sqrt{\dfrac{\sigma_i^2 + \sigma_g^2}{2}}} \tag{16} $$
where $\mu_i$ and $\mu_g$ are the means, and $\sigma_i^2$ and $\sigma_g^2$ are the variances, of the imposter and genuine distributions, respectively. Hence, a larger d' value indicates higher recognition performance; the SCA is therefore more effective at discriminating between genuine and impostor users.
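Equation (16) translates directly into a few lines of code; a minimal sketch, assuming population (rather than sample) variances as the equation implies:

```python
import numpy as np

def decidability(genuine, impostor):
    """Decidability index d' of Eq. (16): separation of the genuine and
    imposter score distributions in pooled-standard-deviation units."""
    mu_g, mu_i = np.mean(genuine), np.mean(impostor)
    var_g, var_i = np.var(genuine), np.var(impostor)
    return float(abs(mu_i - mu_g) / np.sqrt((var_i + var_g) / 2.0))
```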
The results in Table 8 show that the SCA significantly outperformed the GWO and PSO algorithms on all four biometric models in terms of the improvement ratio. The ratio is computed relative to the best original unimodal system and is defined by Equation (17):
$$ \mathrm{Improvement} = \frac{\min\left( EER_{m_1}, \ldots, EER_{m_n} \right) - EER_{m_1 + \cdots + m_n}}{\min\left( EER_{m_1}, \ldots, EER_{m_n} \right)} \times 100 \tag{17} $$
where $\min(EER_{m_1}, \ldots, EER_{m_n})$ denotes the minimum EER among the unimodal systems $m_1$ to $m_n$, and $EER_{m_1 + \cdots + m_n}$ denotes the EER of the fused model built from the biometric models $m_1$ to $m_n$. The table shows that the performance improvement ranged from 53.58% to 85.89%, suggesting that the SCA is considerably more effective for optimizing biometric systems than the GWO and PSO algorithms.
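Equation (17) can be checked against the reported numbers directly: fusing both irises with the face (EER 1.003%, Table 5) against the best unimodal EER (left iris, 7.11%) reproduces the 85.89% improvement in Table 8. The helper name below is ours.

```python
def improvement_ratio(unimodal_eers, fused_eer):
    """Eq. (17): relative EER reduction versus the best unimodal system (%)."""
    best = min(unimodal_eers)
    return (best - fused_eer) / best * 100.0

# Unimodal EERs from Table 5: left iris, right iris, face.
print(round(improvement_ratio([7.11, 11.77, 12.89], 1.003), 2))  # -> 85.89
```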
This experiment investigates the elapsed run time of the applied metaheuristic algorithms. For a fair comparison, the population size and number of iterations are fixed (as shown in Table 4), and all algorithms are implemented in MATLAB R2023a and run on the same machine (AMD Ryzen 7 processor, 3.20 GHz, 16 GB memory). The obtained results are shown in Table 9.
The results in Table 9 show that the SCA outperformed the GWO and PSO algorithms in terms of running time: the SCA completed the task in 9410 ms, while the GWO and PSO algorithms took 9800 ms and 10,940 ms, respectively. Finally, the proposed score fusion method is compared to other multimodal biometric methods in the literature. Table 10 shows the results of this comparison, including the type of biometric data, the fusion type, and the EER.
It is noteworthy that the SCA optimization is not carried out during live authentication; it runs only once during system enrollment or periodic recalibration. At runtime, the fused score is computed as a straightforward weighted sum of normalized scores, incurring very little latency (<1 ms per query). Thus, the 9410 ms total optimization time is a one-time offline expense that is acceptable even on edge devices for high-security systems. This design preserves adaptive weight learning while guaranteeing real-time responsiveness during authentication.
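The runtime path described above reduces to a single weighted sum and a threshold comparison; a minimal sketch (the function name, example scores, and parameter values are illustrative, not from the paper):

```python
import numpy as np

def authenticate(scores, weights, threshold):
    """Runtime decision with offline-learned parameters: fuse normalized
    distance scores by a weighted sum; smaller fused distance = accept."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = float(weights @ scores / weights.sum())
    return fused, fused < threshold

fused, accepted = authenticate(
    scores=[0.2, 0.1, 0.3],    # left iris, right iris, face (normalized)
    weights=[0.5, 0.3, 0.2],   # hypothetical SCA-learned weights
    threshold=0.4,             # hypothetical SCA-learned threshold
)
```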
The proposed method of fusing the left and right iris with the face achieved promising recognition performance compared to other methods. However, deep learning-based score fusion methods achieved better recognition accuracy on average, although they require more computational resources, which can be a limiting factor for resource-constrained applications.

5. Conclusions

This paper proposes a new score fusion method based on the Sine Cosine Algorithm (SCA). The proposed method extracts features from multiple biometric sources and then computes intra/inter scores for each modality. The proposed method then normalizes the scores for a given user using different biometric modalities and aggregates them using various aggregation rules. The role of the SCA is to find the optimal parameters to fuse the normalized scores. We evaluated the proposed method on the CASIA-V3-Internal iris dataset and the AT&T (ORL) face database. The results showed that the improvement in the recognition accuracy ranged from 53.58% to 85.89% compared to the unimodal biometric systems. Moreover, it outperforms several state-of-the-art score fusion methods regarding accuracy and robustness. Furthermore, the proposed method learns to fuse the scores from different modalities based on a training dataset of labeled biometric samples. This allows the algorithm to adapt to the biometric modalities’ specific characteristics. Moreover, the proposed method is relatively easy to implement and can be used with various biometric modalities. This work has direct applicability in high-assurance identity verification scenarios. For example, border control can integrate iris and face to balance security and user throughput; mobile banking apps can use face with behavioral biometrics for frictionless yet spoof-resistant login; and telehealth platforms can prevent patient misidentification in remote consultations. The method’s low runtime overhead and modality flexibility make it suitable for both cloud and edge deployments. 
Future research will concentrate on assessing robustness under standardized spoofing benchmarks, deploying the fusion pipeline on edge hardware to evaluate real-world latency and power consumption, and investigating hybrid fusion strategies that combine SCA with lightweight neural post-processors to capture nonlinear score interactions while maintaining interpretability.

Author Contributions

Conceptualization, E.H., M.T. and A.S.A.; methodology, A.M.M., E.H. and M.T.; data creation, A.M.M., E.H. and A.S.A.; formal analysis, A.S.A. and A.M.M.; investigation, M.T. and A.M.M.; resources, E.H. and M.T.; supervision, E.H., M.T., A.M.M. and A.S.A.; writing—original draft, A.M.M., M.T. and A.S.A.; writing—review and editing, E.H., M.T. and A.M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent Statement

This article does not contain any studies with human participants; hence, no informed consent is declared.

Data Availability Statement

The datasets used are publicly available at: http://cam-orl.co.uk/facedatabase.html, accessed on 24 December 2025 [AT&T (ORL) face dataset]; http://biometrics.idealtest.org, accessed on 24 December 2025 [CASIA-V3-Internal iris dataset]. The implementation code used for the current study is available upon reasonable request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dargan, S.; Kumar, M. A comprehensive survey on the biometric recognition systems based on physiological and behavioral modalities. Expert Syst. Appl. 2020, 143, 113114. [Google Scholar] [CrossRef]
  2. Sarangi, P.P.; Nayak, D.R.; Panda, M.; Majhi, B. A feature-level fusion-based improved multimodal biometric recognition system using ear and profile face. J. Ambient Intell. Humaniz. Comput. 2021, 13, 1867–1898. [Google Scholar] [CrossRef]
  3. Srivastava, R.; Bhardwaj, V.P.; Othman, M.T.; Pushkarna, M.; Mangla, A.; Bajaj, M.; Rehman, A.U.; Shafiq, M.; Hamam, H. Match-Level Fusion of Finger-Knuckle Print and Iris for Human Identity Validation Using Neuro-Fuzzy Classifier. Sensors 2022, 22, 3620. [Google Scholar] [CrossRef] [PubMed]
  4. Devi, D.V.; Rao, K.N. Decision-level fusion schemes for a Multimodal Biometric System using local and global wavelet features. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Virtual, 2–4 July 2020; pp. 1–6. [Google Scholar] [CrossRef]
  5. Al-Waisy, A.; Qahwaji, R.; Ipson, S.; Al-Fahdawi, S.; Nagem, T.A.M. A Multi-biometric iris recognition system based on a deep learning approach. Pattern Anal. Appl. 2018, 21, 783–802. [Google Scholar] [CrossRef]
  6. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  7. Omar, B.; Majeed, H.; Hashim, S.Z.; Al-Ani, M.S. New Feature-level Algorithm for a Face-fingerprint Integral Multi-biometrics Identification System. UHD J. Sci. Technol. 2022, 6, 12–20. [Google Scholar] [CrossRef]
  8. Kumar, T.; Bhushan, S.; Jangra, S. Ann trained and WOA optimized feature-level fusion of iris and Fingerprint. Mater. Today Proc. 2021, 51, 1–11. [Google Scholar] [CrossRef]
  9. Sengar, S.S.; Rajkumar, K. Multimodal Biometric Authentication System Using Deep Learning Method. In Proceedings of the 2020 International Conference on Emerging SmartComputing and Informatics (ESCI), Pune, India, 12–14 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 309–312. [Google Scholar] [CrossRef]
  10. Vinothkanna, R.; Wahi, A. A multimodal biometric approach for the recognition of Fingerprint, palm print and hand vein using fuzzy vault. Int. J. Biomed. Eng. Technol. 2020, 33, 54–76. [Google Scholar] [CrossRef]
  11. Mahmoud, R.O.; Selim, M.M.; Muhi, O.A. Fusion Time Reduction of a Feature Level Based Multimodal Biometric Authentication System. Int. J. Sociotechnol. Knowl. Dev. 2020, 12, 67–83. [Google Scholar] [CrossRef]
  12. Garg, M.; Arora, A.S.; Gupta, S. A novel feature biometric fusion approach for iris, speech, and signature. Comput. Methods Mater. Sci. 2020, 20, 63–71. [Google Scholar] [CrossRef]
  13. Tharewal, S.; Malche, T.; Tiwari, P.K.; Jabarulla, M.Y.; Alnuaim, A.A.; Mostafa, A.M.; Ullah, M.A. Score-Level Fusion of 3D Face and 3D Ear for Multimodal Biometric Human Recognition. Comput. Intell. Neurosci. 2022, 2022, 3019194. [Google Scholar] [CrossRef]
  14. Joseph, A.A.; Ho Lian, A.N.; Kipli, K.; Lee Chin, K.; Awang Mat, D.A.; Chin Voon, C.S.; Sing Ngie, D.C.; Sze Song, N. Person Verification Based on Multimodal Biometric Recognition. Pertanika J. Sci. Technol. 2021, 30, 161–183. [Google Scholar] [CrossRef]
  15. Zhang, X.; Cheng, D.; Jia, P.; Dai, Y.; Xu, X. An Efficient Android-Based Multimodal Biometric Authentication System with Face and Voice. IEEE Access 2020, 8, 102757–102772. [Google Scholar] [CrossRef]
  16. Herbadji, A.; Guermat, N.; Ziet, L.; Akhtar, Z.; Dasgupta, D. Weighted quasi-arithmetic mean-based score level fusion for Multi-biometric systems. IET Biom 2020, 9, 91–99. [Google Scholar] [CrossRef]
  17. Rane, M.E.; Bhadade, U.S. Multimodal score level fusion for recognition using Face and palmprint. Int. J. Electr. Eng. Educ. 2025, 62, 37–55. [Google Scholar] [CrossRef]
  18. Aizi, K.; Ouslim, M. Score level fusion in Multi-biometric identification based on zones of interest. J. King Saud Univ. Comput. Inf. Sci. 2019, 34, 1498–1509. [Google Scholar] [CrossRef]
  19. Wang, Y.; Liu, H.; Zhang, Q. Gated Multimodal Fusion for Biometric Authentication. IEEE Trans. Inf. Forensics Secur. 2022, 17, 2105–2116. [Google Scholar]
  20. Zhang, L.; Chen, X.; Li, W. Transformer-Based Score Fusion for Multimodal Biometrics. Pattern Recognit. Lett. 2023, 175, 45–52. [Google Scholar]
  21. Iloanusi, O.N.; Ejiogu, U.C. Gender classification from fused multi-fingerprint types. Inf. Secur. J. A Glob. Perspect. 2020, 29, 209–219. [Google Scholar] [CrossRef]
  22. Naik, A.K.; Holambe, R.S. Joint Encryption and Compression scheme for a multimodal tele biometric system. Neurocomputing 2016, 191, 69–81. [Google Scholar] [CrossRef]
  23. Sharifi, O.; Eskandari, M. Optimal Face-Iris Multimodal Fusion Scheme. Symmetry 2020, 8, 48. [Google Scholar] [CrossRef]
  24. Li, C.; Hu, J.; Pieprzyk, J.; Susilo, W. A New Bio Cryptosystem-Oriented Security Analysis Framework and Implementation of Multi-biometric Cryptosystems Based on Decision Level Fusion. IEEE Trans. Inf. Forensics Secur. 2015, 10, 1193–1206. [Google Scholar] [CrossRef]
  25. Azom, V.; Adewumi, A.O.; Tapamo, J. Face and Iris biometrics person identification using hybrid fusion at feature and score-level. In Proceedings of the 2015 Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), Grahamstown, South Africa, 26–27 November 2015; pp. 207–212. [Google Scholar] [CrossRef]
  26. Balraj, E.; Abirami, T. Performance Improvement of Multi-biometric Authentication System Using Score Level Fusion with Ant Colony Optimization. Wirel. Commun. Mob. Comput. 2022, 2022, 4145785. [Google Scholar] [CrossRef]
  27. CASIA. Chinese Academy of Sciences, 2020. CASIA Iris Image Database, v3. Available online: http://english.ia.cas.cn/db/201610/t20161026_169399.html (accessed on 24 December 2025).
  28. ORL, AT&T Laboratories Cambridge, 2001, ORL Face Image Database, v1. Available online: http://cam-orl.co.uk/facedatabase.html (accessed on 24 December 2025).
  29. Libor, M.; Peter, K. MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. Bachelor’s Thesis, The School of Computer Science and Software Engineering, The University of Western Australia, Crawley, WA, USA, 2003. [Google Scholar]
  30. Hamouda, E.; Ouda, O.; Yuan, X.; Hamza, T. Optimizing Discriminability of Globally Binarized Face Templates. Arab. J. Sci. Eng. 2016, 41, 2837–2846. [Google Scholar] [CrossRef]
  31. Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991, 3, 71–86. [Google Scholar] [CrossRef]
  32. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  33. Jain, A.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 4–20. [Google Scholar] [CrossRef]
Figure 1. System Architecture of the Proposed SCA-based Multimodal Fusion Method.
Figure 2. Genuine and imposter distributions for all experiments.
Figure 3. Convergence curve for the proposed fusion using SCA.
Table 1. State-of-the-Art Feature-Level Fusion Techniques. NM: actual dataset size used in the experiment is not mentioned in the study.

| Biometric Trait Used | Algorithm | Database | Size | Recognition Accuracy |
|---|---|---|---|---|
| Face & Fingerprint | Dis-Eigen [7] | AUMI | 160 | 93.7% |
| Iris & Fingerprint | ANN trained and WOA optimized [8] | CASIA V1.0 & FVC2004 DB1 | 100 | 98.3% |
| Ear & profile face | CNN (AlexNet, VGG16 and GoogleNet) [2] | UND-E & UND-J2 | 114; 273 | 99% |
| Fingerprint & Palmprint | DNN [9] | Chimeric | NM | 97.6% |
| Fingerprint, Palmprint & Hand vein | Fuzzy Vault [10] | CASIA | NM | 98.5% |
| Iris & Face | R-HoG [11] | SDUMLA-HMT | NM | 99% |
| Iris, Speech & Signature | 2D PCA, SIFT, ANN classifier [12] | Mixture of standard and real-time datasets | 500 | 96–98% |
Table 2. State-of-the-Art Score-Level Fusion Techniques. NM: actual dataset size used in the experiment is not mentioned in the study.

| Biometric Trait Used | Algorithm | Database | Size | Recognition Accuracy |
|---|---|---|---|---|
| 3D face & 3D ear | PCA for 3D face & ICP for 3D ear [13] | FRGC & UND-F, G | 557; 302; 235 | 99.25% |
| Iris & finger-knuckle print | SIFT, PCA, neuro-fuzzy neural network [3] | CASIA & PolyU | NM | 98% |
| Face & Fingerprint | CNN & ORB [14] | UCI database | 400; 120 | 99.38% |
| Face & Voice | LBP & VAD [15] | XJTU database | 102 | 98% |
| Fingerprint & Face | WQAM [16] | NIST-BSSR1 | 517 | 97.22% |
| Palmprint & Face | ROI, t-norm [17] | Face 94, Face 95, Face 96, FERET, FRGC & IITD | 600 | 99.7% |
| Iris & Fingerprint | BCC, BFL, K-means, decision tree and fuzzy logic [18] | CASIA Iris V4 & CASIA Fingerprint V5 | 1100 | 94.4–95% |
Table 3. State-of-the-Art Decision-Level Fusion Techniques.

| Biometric Trait Used | Algorithm | Database | Size | Recognition Accuracy |
|---|---|---|---|---|
| Palmprint & Face | Wavelet sub-bands, nearest neighbor classifier [4] | ORL, Yale & IIT-Delhi | 330 | 98.12% |
| Multiple fingerprints | CNN [21] | Novel dataset | 500 | 94.7% |
| Face & Fingerprint | Joint encryption and compression technique [22] | FEI & NIST | 400 | 97% |
| Face & Iris | OR rule [23] | CASIA-Iris-Distance | 1420 | 98.9% |
| Fingerprints of different fingers | Multi-finger features encrypted by a hash function [24] | Novel dataset | 1500 | 95.76% |
| Face & Iris | Majority voting [25] | ORL & CASIA | 400 | 98.75% |
Table 4. Experimental parameters.

| Parameter | Value | Algorithm |
|---|---|---|
| Population size | 150 | SCA, PSO, GWO |
| Iterations | 1000 | SCA, PSO, GWO |
| Solution dimension | 7 | SCA, PSO, GWO |
| a | 2.0 | SCA, GWO |
| Inertia weight (w) | 1.0 | PSO |
| Acceleration coefficient (c1) | 2.0 | PSO |
| Acceleration coefficient (c2) | 2.0 | PSO |
Table 5. Performance results.

| Model | FAR (%) | FRR (%) | EER (%) |
|---|---|---|---|
| Left iris | 1.64 | 12.58 | 7.11 |
| Right iris | 8.54 | 15.00 | 11.77 |
| Face | 4.61 | 21.17 | 12.89 |
| Proposed (Left iris + Right iris) | 1.48 | 5.13 | 3.30 |
| Proposed (Left iris + Face) | 1.33 | 4.50 | 2.91 |
| Proposed (Right iris + Face) | 3.79 | 5.29 | 4.54 |
| Proposed (Left iris + Right iris + Face) | 0.76 | 1.25 | 1.003 |
Table 6. EER (%) for the applied metaheuristic algorithms.

| Model | SCA | GWO | PSO |
|---|---|---|---|
| Left iris + Right iris | 3.30 | 5.74 | 3.31 |
| Left iris + Face | 2.91 | 4.38 | 3.42 |
| Right iris + Face | 4.54 | 6.55 | 4.53 |
| Left iris + Right iris + Face | 1.003 | 3.41 | 1.49 |
Table 7. d' for the applied metaheuristic algorithms.

| Model | SCA | GWO | PSO |
|---|---|---|---|
| Left iris + Right iris | 6.918 | 5.560 | 4.972 |
| Left iris + Face | 3.606 | 2.954 | 3.575 |
| Right iris + Face | 3.326 | 3.435 | 3.343 |
| Left iris + Right iris + Face | 5.683 | 4.429 | 5.582 |
Table 8. Improvement ratio (%) for the applied metaheuristic algorithms.

| Model | SCA | GWO | PSO |
|---|---|---|---|
| Left iris + Right iris | 53.58 | 19.27 | 53.45 |
| Left iris + Face | 59.07 | 38.40 | 51.90 |
| Right iris + Face | 61.43 | 44.35 | 61.51 |
| Left iris + Right iris + Face | 85.89 | 52.03 | 79.04 |
Table 9. Running time (in milliseconds) for the applied metaheuristic algorithms.

| Method | Time (ms) |
|---|---|
| SCA | 9410 |
| GWO | 9800 |
| PSO | 10,940 |
Table 10. Comparative study analysis.

| Reference | Year | Biometric Modality | Fusion Type | EER (%) |
|---|---|---|---|---|
| [12] | 2020 | Iris + Signature + Speech | Feature-level | 4.00 |
| [25] | 2015 | Face + Iris | Decision-level | 1.25 |
| [14] | 2021 | Face + Fingerprints | Score-level | 0.62 |
| [16] | 2020 | Face + Fingerprints | Score-level | 2.78 |
| Proposed | – | Left iris + Right iris | Score-level | 3.30 |
| Proposed | – | Left iris + Face | Score-level | 2.91 |
| Proposed | – | Right iris + Face | Score-level | 4.54 |
| Proposed | – | Left iris + Right iris + Face | Score-level | 1.00 |

Share and Cite

MDPI and ACS Style

Hamouda, E.; Alaerjan, A.S.; Mostafa, A.M.; Tarek, M. A Score-Fusion Method Based on the Sine Cosine Algorithm for Enhanced Multimodal Biometric Authentication. Sensors 2026, 26, 208. https://doi.org/10.3390/s26010208
