Article

Left Meets Right: A Siamese Network Approach to Cross-Palmprint Biometric Recognition

Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
Electronics 2025, 14(10), 2093; https://doi.org/10.3390/electronics14102093
Submission received: 24 March 2025 / Revised: 7 May 2025 / Accepted: 20 May 2025 / Published: 21 May 2025

Abstract

What if you could identify someone’s right palmprint just by looking at their left—and vice versa? That is exactly what I set out to do. I built a specially adapted Siamese network that only needs one palm to reliably recognize the other, making biometric systems far more flexible in everyday settings. My solution rests on two simple but powerful ideas. First, Anchor Embedding through Feature Aggregation (AnchorEFA) creates a “super-anchor” by averaging four palmprint samples from the same person. This pooled anchor smooths out noise and highlights the consistent patterns shared between left and right palms. Second, I use a Concatenated Similarity Measurement—combining Euclidean distance with Element-wise Absolute Difference (EAD)—so the model can pick up both big structural similarities and tiny textural differences. I tested this approach on three public datasets (POLYU_Left_Right, TongjiS1_Left_Right, and CASIA_Left_Right) and saw a clear jump in accuracy compared to traditional methods. In fact, my four-sample AnchorEFA plus hybrid similarity metric did not just beat the baseline—it set a new benchmark for cross-palmprint recognition. In short, recognizing a palmprint from its opposite pair is not just feasible—it is practical, accurate, and ready for real-world use. This work opens the door to more secure, user-friendly biometric systems that still work even when only one palmprint is available.

1. Introduction

Biometric authentication systems are now integral to applications ranging from mobile device unlocking and building access control to automated identity verification at critical checkpoints [1]. Among various biometric modalities, palmprint recognition is distinguished by its rich pattern of lines, ridges, and textures. Palmprints exhibit unique geometric and textural features that are at least as discriminative as fingerprints. Conventional palmprint systems perform same-hand matching (e.g., right-hand to right-hand), which restricts their applicability when only one hand orientation is available [2]. In scenarios where only one palmprint is captured, cross-palmprint recognition—matching left to right—is required.
Cross-palmprint recognition addresses the problem: can a model accurately determine if a left-hand and a right-hand palmprint belong to the same individual? This capability is not merely academic; it addresses practical challenges in real-world deployments. For example, a user with a temporary hand injury may be unable to present one palmprint for authentication. Likewise, uncontrolled capture at security checkpoints may yield only one palm orientation. In such cases, matching against the opposite palm becomes essential to maintain both accessibility and security [3].
Beyond consumer and access-control use, cross-palmprint matching has significant forensic applications. At crime scenes, investigators often recover partial or degraded palmprints that cannot be matched via same-hand databases. Matching such fragments to opposite-hand records could expedite case resolution and enhance forensic databases. In this work, I propose an adapted Siamese network incorporating Anchor Embedding through Feature Aggregation (AnchorEFA) and a hybrid similarity metric to address cross-palmprint recognition. I validate my approach on three public datasets under subject-level hold-out splits and demonstrate substantial improvements over conventional single-hand methods.
To explore these possibilities, my study centers on four questions:
  • Q1: Can a model accurately identify people by comparing left and right palmprints?
  • Q2: Which techniques help the model better distinguish those cross-palmprint features?
  • Q3: How much does my Anchor Embedding through Feature Aggregation (AnchorEFA) improve performance by blending multiple samples into a single “anchor”?
  • Q4: Does combining Euclidean distance with Element-wise Absolute Difference (EAD) outperform using either measure alone?
I answer these questions by building an adapted Siamese network that integrates AnchorEFA and a hybrid similarity metric. I rigorously test it on three public datasets (POLYU_Left_Right, TongjiS1_Left_Right, and CASIA_Left_Right), tuning each component for the best cross-palmprint accuracy. Along the way, I highlight practical takeaways, especially for forensic applications, where speed and accuracy matter most.
The rest of this paper is organized as follows: Section 2 reviews related work; Section 3 describes my methodology; Section 4 presents results and analysis; Section 5 discusses implications and future directions; and Section 6 concludes with final thoughts on advancing cross-palmprint recognition in real-world settings.

2. Related Work

Palmprint recognition has evolved as a significant field within biometric identification, leveraging unique patterns such as lines, ridges, and textures. This section reviews recent advancements that intersect with my work on cross-palmprint recognition using an adapted Siamese network model.
As presented in [4], the authors developed a novel approach for palmprint recognition that performs well with limited training data. The study introduces a Meta-Siamese network architecture to address the challenges of few-shot learning, leveraging meta-learning techniques to generalize effectively across various palmprint recognition tasks, even with minimal samples per class. This research aims to improve the accuracy and reliability of biometric recognition systems in scenarios where data collection is constrained or impractical. The authors of [5] focused on developing a method for palmprint recognition that addresses the challenges of limited data availability. The study proposes a similarity hashing approach for few-shot learning, enabling efficient and accurate palmprint recognition by encoding and comparing palmprint features in a low-dimensional hash space. This technique aims to enhance biometric systems’ performance, particularly in scenarios with minimal samples, by reducing computational complexity and improving generalization. As presented in [6], the authors explored the use of Siamese networks for palmprint recognition. The study’s objective is to leverage the Siamese network architecture, which compares pairs of palmprint images, to accurately identify similarities and differences. This approach is designed to enhance recognition performance by focusing on learning robust feature representations that can distinguish between palmprints effectively. The method is particularly suitable for biometric applications where pairwise comparison and generalization across different individuals are critical for performance. The authors of [7] explored the use of network-based methods for palmprint recognition. The study focuses on leveraging advanced network architectures to enhance the accuracy and efficiency of palmprint recognition systems, investigating techniques for feature extraction, matching, and classification within a networked framework, with an emphasis on scalability and reliability for biometric identification.
Furthermore, the authors of [8] enhanced the performance of palmprint recognition systems in open-set scenarios by introducing a novel loss function. The study focuses on designing a centralized large-margin cosine loss that improves feature representation by ensuring strong intra-class compactness and inter-class separability in the embedding space. This approach aims to address the unique challenges of open-set recognition, where systems must accurately identify and differentiate both seen and unseen classes, thus making palmprint recognition more robust and reliable for real-world applications. As presented in [9], the authors addressed the challenge of cross-domain palmprint recognition, where recognition systems trained on one domain need to perform effectively on a different domain with varying data distributions. The proposed solution introduces a regularized feature adaptation method to minimize the discrepancies between source and target domains. This approach ensures that the extracted features are robust and transferable, enabling accurate and reliable palmprint recognition in cross-domain scenarios. The authors of [10] developed a multimodal biometric system that combines information from the left palmprint, right palmprint, and irises to enhance the accuracy and robustness of biometric identification. By integrating these three modalities, the system aims to leverage the complementary strengths of each biometric feature to achieve higher recognition reliability, reduce error rates, and improve security compared to unimodal systems. This research addresses the limitations of individual biometric modalities by using a fusion approach for more effective identification in real-world scenarios. In addition, the authors of [11,12] designed a novel interactive palmprint authentication framework that tackles the challenges of high-dimensional data with redundant and unrelated features. By utilizing a two-stage model, the approach employs a pre-trained VGG16 network to extract relevant features and a Siamese network to assess palmprint similarity, providing a robust, efficient, and accurate solution for biometric authentication systems.
As described in [13], the purpose of this study is to design a palmprint recognition system that captures a single image of the palm and compares different images based on similarity measurements instead of classification algorithms. Hence, the method uses a Siamese network architecture derived from the VGG16 model, with the aim of shortening training time and registration cost while allowing people to register with only one palmprint sample. Authentication is based on a similarity score computed from the combination of Euclidean distance and a sigmoid layer. The study in [14] designed a fingerprint recognition system that invokes a network-based model with high accuracy and efficient computation. The main focus of the research is building network architectures for feature extraction and matching to support biometric fingerprint recognition. The authors of [15] designed and tested a palmprint recognition system suitable for adversarial and unconstrained environments, encompassing variation in illumination, hand orientation, and subject compliance. The study seeks to improve the reliability and accuracy of palmprint recognition systems for real-world identification applications.
The authors of [16] introduced PalmNet, a new system that addresses the problem of contactless palmprint recognition by employing Gabor filters, PCA, and CNNs. Its major objective is to solve the problems associated with touchless imaging, including varying hand poses and lighting conditions, while still yielding high accuracy and identification reliability. To overcome cross-sensor matching difficulties, the authors of [17] designed a fingerprint matching system based on Siamese networks. The proposed system aims at learning stable and discriminative features for matching fingerprints collected using different sensors, increasing the reliability of biometric systems across environments. The work described in [18] established an efficient data-driven classifier for palmprint identification, which enhances accuracy while decreasing complexity. The approach uses sophisticated techniques to learn discriminative features, constructing a stable biometric identification system that maintains strong performance across multiple settings.
The system’s capability to store the AnchorEFA of one palmprint in the database for subsequent use alongside the other palmprint allows for applications in both verification and identification scenarios. For verification (authentication), when performing a one-to-one comparison, my solution serves as a verification tool: by comparing the presented palmprint with the stored AnchorEFA of the user’s other palmprint, the system authenticates the user’s claimed identity, offering a robust method for confirming personal identity based on biometric data. For identification, a presented palmprint is searched against the stored AnchorEFAs of all enrolled users to establish identity without a prior claim. Table 1 summarizes the related works reviewed in this section.
To the best of my knowledge, previous studies have not specifically tackled the challenge of cross-palmprint biometric recognition, where left and right palmprints of the same individual are inherently asymmetric. My proposed method, detailed in subsequent sections, uniquely addresses this gap by combining an adapted Siamese network with AnchorEFA and hybrid similarity metrics.
My solution plays a crucial role in forensic applications, supporting identification from crime scene evidence. By comparing a crime scene palmprint against a database of stored AnchorEFAs, law enforcement agencies can efficiently identify suspects or exclude individuals from investigations, streamlining the forensic process and enhancing investigative accuracy.

3. Methodology

In this work, I specifically selected a Siamese network architecture over traditional few-shot or zero-shot methods because my cross-palmprint matching scenario inherently requires learning direct similarity metrics without relying on auxiliary semantic descriptors or episodic training mechanisms typically needed in zero-shot or few-shot frameworks. Siamese networks excel in biometric verification tasks, especially for directly comparing inherently asymmetric data such as left and right palmprints. To further enhance my Siamese network’s capability, I introduced Anchor Embedding through Feature Aggregation (AnchorEFA), where multiple anchor samples (ranging from N = 2 to N = 5) are aggregated to form a robust, stable reference embedding. Conceptually, AnchorEFA bridges my method towards few-shot learning paradigms by effectively utilizing limited per-class samples, improving generalization to unseen individuals. This approach capitalizes on my robust pretrained VGG16-Palmprint model, previously developed in my research [10], ensuring highly discriminative feature representations.
I chose the VGG16 architecture specifically because I previously developed a specialized version, termed VGG16-Palmprint, tailored explicitly for palmprint biometric recognition tasks [10]. Unlike the conventional VGG16 pre-trained on ImageNet, my specialized model was trained and fine-tuned extensively on dedicated palmprint datasets. This customization significantly improved the network’s ability to extract palmprint-specific discriminative features. It is important to highlight that the datasets used to develop my VGG16-Palmprint model were distinct and independent from the datasets used in the current research, explicitly to prevent any risk of data leakage.
My goal was simple: teach a neural network to recognize that your left and right palms belong to the same person, even when only one palmprint is available. To do that, I built an adapted Siamese network and added two key twists: a smarter way to pick a “reference” palmprint (AnchorEFA) and a hybrid similarity score that looks at both big-picture and fine-grain differences.
Overall Pipeline as shown in Figure 1:
  • Feature Extraction
    • I feed each palm image (left or right) into a pre-trained VGG16-Palmprint model.
    • The output is a compact feature vector capturing that palm’s unique lines, ridges, and textures.
  • Anchor Creation with AnchorEFA
    • Instead of using one sample as my reference, I randomly pick N palmprints from the same person and average their feature vectors.
    • This “mean” vector serves as a stable, noise-resistant anchor.
  • Pair Generation
    • I pair each anchor with an opposite-hand palmprint (positive pair) or a different person’s palmprint (negative pair).
  • Adapted Siamese Network
    • My model consists of two structurally identical branches (one for each input) built on the VGG16 backbone; crucially, they do not share weights, letting the left and right palms learn their own quirks.
  • Similarity Measurement
    • I compare the two branch outputs using one of three methods:
      Element-wise Absolute Difference (EAD)
      Euclidean Distance
      Concatenated (EAD + Euclidean)
  • Classification
    • The similarity score feeds into a final decision layer that predicts whether the pair is a “match” (same person) or “non-match” (different people).
Each component is described in more detail below—from how I preprocess images to the exact math behind my similarity metrics.
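To make the pipeline concrete, here is a minimal NumPy sketch of the decision flow at inference time. The feature extractor is a stub standing in for VGG16-Palmprint, and all names and helper functions are illustrative, not my production code; only the 4096-dimensional embedding and the N = 4 anchor size come from the paper:

import numpy as np

rng = np.random.default_rng(42)

def extract_features(image):
    """Stub standing in for the VGG16-Palmprint extractor: returns a 4096-d vector."""
    return rng.normal(size=4096)

def make_anchor(images):
    """AnchorEFA: average the feature vectors of N same-person samples."""
    return np.mean([extract_features(im) for im in images], axis=0)

def concat_similarity(anchor, candidate):
    """Hybrid similarity: element-wise |diff| (4096-d) plus Euclidean distance (1-d)."""
    ead = np.abs(anchor - candidate)
    euclid = np.linalg.norm(anchor - candidate)
    return np.concatenate([ead, [euclid]])  # 4097-d input to the classifier head

# Toy end-to-end call: four left-palm samples vs. one right-palm probe.
anchor_L = make_anchor([None] * 4)      # N = 4 worked best in my experiments
probe_R = extract_features(None)
print(concat_similarity(anchor_L, probe_R).shape)   # (4097,)

The resulting 4097-dimensional vector is what the final decision layer consumes (see Section 3.9 for the exact equations).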

3.1. Dataset and Preprocessing

To put my adapted Siamese network through its paces, I picked three well-known palmprint collections (POLYU, CASIA, and Tongji Session 1) and deliberately left out Tongji Session 2, which I had already used to pretrain my VGG16-Palmprint feature extractor. Keeping each person’s data confined to one split (training, validation, or test) helps avoid “peeking” at answers and ensures my results stay honest and generalizable.

3.1.1. Dataset Descriptions

  • POLYU [19]: a total of 1301 left-hand and 1299 right-hand samples from 230 people—great for capturing natural variation.
  • Tongji Session 1 [20]: a total of 3000 left and 3000 right samples from 300 people—ideal for large-scale feature analysis.
  • CASIA [21]: a total of 2746 left and 2756 right samples from 310 people—perfect for testing robustness across different conditions.

3.1.2. VGG16-Palmprint Feature Extraction

I tapped into my previously trained VGG16-Palmprint model [11,12] to turn each palm image into a 4096-dimensional feature vector—essentially a compact summary of its unique lines, ridges, and textures. These vectors become the “images” my Siamese network compares.
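For readers who want a starting point, the snippet below shows how a 4096-dimensional embedding can be read off the fc2 layer of a stock Keras VGG16. My fine-tuned VGG16-Palmprint weights are not public, so the load_weights path is hypothetical and the ImageNet weights merely stand in:

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

# Generic VGG16 as a stand-in; the weights file below is hypothetical.
base = VGG16(weights="imagenet", include_top=True)
# base.load_weights("vgg16_palmprint.h5")   # hypothetical fine-tuned weights
extractor = Model(inputs=base.input,
                  outputs=base.get_layer("fc2").output)  # 4096-d embedding layer

def palm_embedding(image_224x224x3):
    """Map one palm image to its 4096-d feature vector."""
    batch = preprocess_input(image_224x224x3[np.newaxis].astype("float32"))
    return extractor.predict(batch, verbose=0)[0]

print(palm_embedding(np.zeros((224, 224, 3))).shape)   # (4096,)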

3.1.3. Triplet Loss and Cross-Palmprint Pairs Generation Technique

Instead of pairing images at random, I use the Triplet Loss framework to form training triplets: an anchor (the reference), a positive (another print from the same person), and a negative (a print from someone else). This setup teaches the network to pull matching palmprints closer in feature space while pushing non-matches farther apart, sharpening its ability to spot the subtle similarities and differences between left and right palms.

3.2. Steps in Pairs Generation

  • Anchor Embedding through Feature Aggregation (AnchorEFA): This innovative approach calculates the anchor not by selecting a single sample but by averaging a random sample of N palmprints from the same individual. This mean vector creates a more representative and stable anchor for comparison, leveraging the concept of feature aggregation to enhance the robustness and generalizability of the anchor point in the feature space.
  • Pair Construction:
    • Positive Pairs: formed by combining the AnchorEFA from the left palmprints (anchorL) with all right palmprints (positiveR) of the same individual.
    • Negative Pairs: created by combining the anchorL with right palmprints (negativeR) from different individuals.
  • Repetition for Comprehensive Coverage: To ensure a thorough representation of palmprint variations, this pair generation process is repeated 10 times.
  • Bidirectional Analysis: This procedure is symmetrically applied for right palmprints, generating anchorR embeddings and pairing them with left palmprints, thus facilitating a dual-sided examination [22].
  • Optimization of N: The number of samples per anchor (N) is varied from 2 to 5 across datasets to identify the optimal setting for anchor computation, fine-tuning the balance between representativeness and computational efficiency. The full pair-generation procedure is sketched below.
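To make these steps concrete, the following NumPy sketch implements the left-anchor direction on synthetic features; function names and the exact sampling details are illustrative rather than the production code, and the right-anchor direction is obtained by swapping the roles of the two hands:

import numpy as np

rng = np.random.default_rng(7)

def generate_pairs(feats_left, feats_right, n_anchor=4, repeats=10):
    """AnchorEFA pair generation (left-anchor direction only).

    feats_left / feats_right: dict person_id -> array of shape (num_samples, 4096).
    Returns (anchor_vector, candidate_vector, label) tuples, label 1 = same person.
    """
    pairs = []
    ids = list(feats_left)
    for _ in range(repeats):                 # repeated 10x for comprehensive coverage
        for pid in ids:
            # AnchorEFA: average N randomly chosen left-palm feature vectors.
            chosen = rng.choice(len(feats_left[pid]), size=n_anchor, replace=False)
            anchor_l = feats_left[pid][chosen].mean(axis=0)
            # Positive pairs: anchorL with each right palmprint of the same person.
            for f_r in feats_right[pid]:
                pairs.append((anchor_l, f_r, 1))
            # Negative pairs: anchorL with right palmprints of another person.
            other = rng.choice([q for q in ids if q != pid])
            for f_r in feats_right[other]:
                pairs.append((anchor_l, f_r, 0))
    return pairs

# Toy usage: random 4096-d "features" for 3 people, 5 samples per palm.
feats_l = {p: rng.normal(size=(5, 4096)) for p in range(3)}
feats_r = {p: rng.normal(size=(5, 4096)) for p in range(3)}
print(len(generate_pairs(feats_l, feats_r)))   # 300: 3 people x 10 repeats x (5 pos + 5 neg)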

3.3. Triplet Loss Function Integration

The Triplet Loss function [23] is at the core of boosting my model’s ability to tell different palmprints apart. It encourages each anchor to be closer to its positive samples and farther from its negative samples, which leads to a more distinct and separable feature space. The core objective is defined by the following Equation (1):
Loss = \max\big(0,\; d(A, P) - d(A, N) + M\big)
where d(A, P) is the distance between the anchor and positive samples, and d(A, N) is the distance between the anchor and negative samples. This loss function ensures that the model learns to embed features in a space where similar features are clustered together and dissimilar features are separated by at least the margin M, enhancing the model’s discriminatory power.
By bringing AnchorEFA and the Triplet Loss function together, I introduce a new way of generating cross-palmprint pairs. This fusion strengthens the training process of my adapted Siamese network and sets a strong stage for advancing biometric recognition technologies. Through careful and well-thought-out experiments, I aim to boost both the accuracy and reliability of cross-palmprint recognition, helping drive progress in this ever-evolving field.
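For concreteness, Equation (1) translates directly into a few lines of NumPy; the margin value of 0.2 is an illustrative choice, not the paper’s tuned setting:

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Equation (1): max(0, d(A, P) - d(A, N) + M) with Euclidean d."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# A triplet already satisfying the margin contributes zero loss.
a = np.zeros(4096); p = a + 0.1; n = a + 5.0
print(triplet_loss(a, p, n))   # 0.0, since the negative is far beyond the margin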

3.4. Dataset Splitting Strategy

Strategic splitting of the generated pairs into training, validation, and testing sets maximizes sample diversity, crucial for effective learning and unbiased model evaluation:
  • POLYU_Left_Right Dataset:
    Training Set: A total of 35,840 pairs from 160 individuals.
    Validation Set: A total of 7560 pairs from 34 individuals.
    Testing Set: A total of 8160 pairs from 34 individuals.
  • TongjiS1_Left_Right Dataset:
    Training Set: A total of 83,600 pairs from 209 individuals.
    Validation Set: A total of 18,000 pairs from 45 individuals.
    Testing Set: A total of 18,000 pairs from 45 individuals.
  • CASIA_Left_Right Dataset:
    Training Set: A total of 70,280 pairs from 214 individuals.
    Validation Set: A total of 18,460 pairs from 46 individuals.
    Testing Set: A total of 19,440 pairs from 46 individuals.
A huge part of building a trustworthy biometric model comes down to how I split my data. I make sure every person’s palmprints live in only one of three buckets—training, validation, or testing—so there is zero overlap. That simple rule stops data leakage in its tracks and keeps my model from overfitting, which means it is far better at handling brand-new palmprint samples.
With the data neatly organized, I move on to the experimental setup. Here is where I lay out the adapted Siamese network’s architecture, explain exactly how I train it, and show how the Triplet Loss function drives learning. By walking through each step—from preparing the data to running the experiments—I create a clear path that helps readers understand my results and why they matter for next-generation biometric systems.

3.5. Anchor Embedding Through Feature Aggregation (AnchorEFA)

One tricky part of cross-palmprint recognition is picking a good “anchor” in a Siamese network. If you choose just one palmprint, you risk one odd scan skewing everything. Instead, my AnchorEFA method creates an anchor by averaging N feature vectors from the same person into a single mean vector. Think of it as blending multiple snapshots to form a clearer picture of that person’s palmprint.
This feature aggregation does two things for us: it smooths out weird outliers in individual scans and highlights the consistent patterns that truly define someone’s palm. The result? A far more stable reference point in feature space—and better accuracy and reliability when matching left to right palms.
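A quick synthetic check makes this intuition tangible: averaging N = 4 noisy embeddings of the same idealized palm template lands roughly twice as close to that template as any single scan (the noise scale here is arbitrary, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(0)
true_template = rng.normal(size=4096)            # idealized "clean" palm embedding

# Simulate noisy per-scan embeddings; compare single-sample vs. 4-sample anchors.
scans = true_template + rng.normal(scale=0.5, size=(4, 4096))
single_anchor_err = np.linalg.norm(scans[0] - true_template)
efa_anchor_err = np.linalg.norm(scans.mean(axis=0) - true_template)
print(single_anchor_err, efa_anchor_err)   # averaging 4 scans cuts the noise ~in half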

3.6. Enhanced Siamese Network Architecture

My Siamese network consists of two structurally identical “towers” that each process a palmprint independently: one for the left hand, and one for the right. Unlike typical Siamese designs, these towers do not share weights, since the left and right palms have naturally different traits. However, both towers lean on the same pretrained VGG16-Palmprint backbone, so they extract high-level features in a consistent way.
What really sets my model apart is how I compare those features. Rather than using just one distance metric, I combine Element-wise Absolute Difference (EAD) with Euclidean distance. This dual approach gives us both a broad sense of similarity and a fine-grained look at pixel-level differences. During training, these two measures guide weight updates differently for “positive” pairs (same person) versus “negative” pairs (different people), helping the network zero in on subtle patterns and reliably tell left and right palms apart.
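A compact Keras sketch of this design follows. Layer sizes and the projection depth are illustrative assumptions; the essential points, which do come from the paper, are the two separately built (hence non-weight-sharing) branches, the EAD and Euclidean similarity terms, and the concatenated similarity feeding a sigmoid classifier trained with binary cross-entropy (see Section 3.9):

import tensorflow as tf
from tensorflow.keras import layers, Model

EMB = 4096  # dimensionality of the VGG16-Palmprint feature vectors

def tower(name):
    """One branch; the branches are built separately, so weights are NOT shared."""
    inp = layers.Input(shape=(EMB,), name=f"{name}_features")
    x = layers.Dense(1024, activation="relu", name=f"{name}_proj")(inp)
    return inp, x

in_l, z_l = tower("left")    # anchor (AnchorEFA) branch
in_r, z_r = tower("right")   # candidate palmprint branch

# EAD keeps a full per-dimension discrepancy vector...
ead = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([z_l, z_r])
# ...while Euclidean distance collapses the gap into one global scalar.
euclid = layers.Lambda(
    lambda t: tf.sqrt(tf.reduce_sum(tf.square(t[0] - t[1]), axis=1, keepdims=True) + 1e-9)
)([z_l, z_r])

merged = layers.Concatenate()([ead, euclid])          # hybrid similarity vector
out = layers.Dense(1, activation="sigmoid")(merged)   # match / non-match probability

model = Model([in_l, in_r], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()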

3.7. Similarity Measurement Techniques

When it comes to deciding how “close” two palmprint feature vectors really are, one size definitely does not fit all. That is why I lean on three complementary approaches:
  • Element-wise Absolute Difference (EAD) [24]
    Imagine lining up two feature vectors side by side and subtracting them entry by entry—that is EAD. It does not squash all those differences into one number; instead, it preserves each tiny discrepancy, helping my model pick up on subtle texture changes between left and right palms.
  • Euclidean Distance [25]
    Think of Euclidean distance as the straight-line path between two points in high-dimensional space. By squaring, summing, and rooting the differences, I get a single score that reflects overall similarity—perfect for capturing the big picture.
  • Concatenated Approach (EAD + Euclidean Distance)
    Why choose? By sticking both EAD and Euclidean distance together, I get the best of both worlds: the fine-grain detail of EAD plus the broad-stroke view of Euclidean distance. In practice, this hybrid metric consistently gave my model the clearest signal for telling matching palms from non-matches.
Together with Triplet Loss and my adapted Siamese architecture, these metrics shape a feature space where same-person prints cluster tightly while different-person prints stay well apart.

3.8. Implementation of Operation Mode

My system is not just a research demo; it is built for everyday use. Here is how it would actually work in three common scenarios:
  • Verification (Authentication)
    • What happens: You place your hand on a scanner. The system compares that print to your stored AnchorEFA from the opposite palm.
    • Outcome: Match = access granted. No match = access denied.
    • Use cases: Unlocking phones, entering secure buildings, and logging into banking apps.
  • Identification
    • What happens: You present an unknown palmprint. The model searches through its entire AnchorEFA database.
    • Outcome: It tells you exactly who that print belongs to—no claimed identity needed.
    • Use cases: Large-scale security checkpoints, finding missing persons, and verifying patients in hospitals.
  • Forensic Analysis
    • What happens: Investigators input a partial or smudged crime-scene print. The system compares it against stored AnchorEFAs.
    • Outcome: Quickly narrows down suspects or confirms identities.
    • Use cases: Crime scene investigations, evidence matching, and cold case reviews.
Each mode uses the same underlying network—AnchorEFA for stable references, my dual-metric similarity for precise matching—but tailors the workflow to fit real-world needs, whether that is instant authentication or critical forensic leads.
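In code, the verification and identification modes differ only in how the AnchorEFA database is queried. The sketch below uses a stubbed scoring function in place of the trained network, and the threshold is an illustrative operating point that would be tuned on validation data:

import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.5  # illustrative operating point, not the tuned value

def match_score(anchor_efa, probe_features):
    """Stub for the trained network's similarity head: returns P(same person)."""
    return float(rng.uniform())

def verify(probe, claimed_id, anchor_db):
    """1:1 mode: compare the probe against the claimed user's stored AnchorEFA."""
    return match_score(anchor_db[claimed_id], probe) >= THRESHOLD

def identify(probe, anchor_db):
    """1:N mode: search every stored AnchorEFA, return the best-scoring identity."""
    scores = {pid: match_score(a, probe) for pid, a in anchor_db.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= THRESHOLD else None   # None: no confident match

db = {pid: rng.normal(size=4096) for pid in ("alice", "bob")}
print(verify(rng.normal(size=4096), "alice", db), identify(rng.normal(size=4096), db))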

3.9. Mathematical Model of the Adapted Siamese Network

In this section, I provide a mathematical model for the adapted Siamese network as follows:
Let X_L and X_R represent input vectors of the left and right palmprints, respectively, and let f denote the pretrained VGG16-Palmprint feature extraction function. The extracted feature vectors F_L and F_R from inputs X_L and X_R are computed using Equation (2):
F_L = f(X_L), \quad F_R = f(X_R)
Given a set of N palmprint samples, each represented as X_L^i or X_R^i, the aggregated anchor feature vectors A_L and A_R are defined as the mean vectors across the N feature vectors using Equations (3) and (4):
A_L = \frac{1}{N} \sum_{i=1}^{N} f(X_L^i)
A_R = \frac{1}{N} \sum_{i=1}^{N} f(X_R^i)
The similarity between the aggregated anchors and palmprint feature vectors is computed using two base metrics and their concatenation:
1. Element-wise Absolute Difference (EAD) between A_L and F_R, Equation (5):
\mathrm{EAD}(A_L, F_R) = |A_L - F_R|
2. Euclidean distance between A_L and F_R, Equation (6), where d = 4096 is the embedding dimension:
D_{\mathrm{Euclidean}}(A_L, F_R) = \sqrt{\sum_{i=1}^{d} (A_{L,i} - F_{R,i})^2}
3. Concatenated similarity, combining EAD and Euclidean distance into a single representation, Equation (7):
S_{\mathrm{concat}}(A_L, F_R) = \mathrm{Concat}\big(\mathrm{EAD}(A_L, F_R),\; D_{\mathrm{Euclidean}}(A_L, F_R)\big)
The similarity prediction (classification) from the concatenated similarity vector is computed by a dense neural layer (Dense) followed by a sigmoid activation function \sigma, as shown in Equation (8):
\hat{y} = \sigma\big(\mathrm{Dense}(S_{\mathrm{concat}}(A_L, F_R))\big)
I trained my model by minimizing the binary cross-entropy (BCE) loss function, Equation (9):
\mathrm{BCE}(y, \hat{y}) = -\big[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\big]
where y denotes the ground truth (1 for matching palms, 0 for non-matching palms), and \hat{y} is the predicted similarity score.
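A quick numeric check of Equation (9), with illustrative values: a confident correct prediction incurs a small loss, while a confident wrong one is penalized heavily:

import math

bce = lambda y, y_hat: -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))
print(round(bce(1, 0.9), 3), round(bce(1, 0.1), 3))   # 0.105 vs. 2.303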
In plain terms, binary cross-entropy penalizes wrong guesses about whether two palmprints match (with y = 1 meaning “yes, they match”, and y = 0 meaning “no”). Behind the scenes, an optimizer like Adam tweaks the weights in my network’s dense layers bit by bit to reduce these mistakes over time.
Putting it all together, this loss-and-optimization setup gives us a clear, end-to-end blueprint for how my adapted Siamese network works—from cleaning up and extracting features from raw palmprint images to measuring similarity and finally deciding if two prints belong to the same person. By spelling out each step and the math that drives it, I ensure my method is easy to implement, test, and refine.

4. Experimental Setup and Results

I designed my experiments in two main phases. First, I tried a simple anchor approach to get a baseline. Next, I introduced AnchorEFA (Anchor Embedding through Feature Aggregation) to see if I could improve cross-palmprint recognition by adjusting the number of samples N and exploring different similarity measurements.
Preliminary Experiments (Traditional Anchor Method): For my initial trials, I used a single-sample anchor (where N = 1) to establish how my model performed. I tested three similarity measurements—Euclidean distance, Element-wise Absolute Difference (EAD), and their combination—across the POLYU_Left_Right, TongjiS1_Left_Right, and CASIA_Left_Right datasets. Although this method worked consistently across all datasets, it did not match the accuracy I later achieved with AnchorEFA. That finding hinted that a more advanced anchoring technique could lead to better feature discrimination and higher accuracy.
Advanced Experiments (Moving to AnchorEFA): Building on the initial results, I switched to AnchorEFA, varying N from 2 to 5. At the same time, I continued evaluating Euclidean distance, EAD, and their concatenation to identify the most effective setup.
  • Early Stopping: I stopped training once the model stopped improving on a separate validation set—this helps prevent overfitting.
  • Best Model and Similarity Measurement: I selected whichever model and similarity measurement worked best at distinguishing matching and non-matching palmprints in validation.
  • Final Testing: I then tested this refined model on my held-out dataset, examining FAR, FRR, EER, accuracy, and ROC-AUC. This gave us a thorough look at how well the model could handle real-world cross-palmprint scenarios.
By starting with a standard anchor method and gradually moving to AnchorEFA, I was able to pinpoint the most effective configuration for cross-palmprint recognition. Adjusting key parameters and rigorously testing different options allowed me to improve my model’s accuracy and reliability, ultimately providing a strong case for how AnchorEFA can advance the field of biometric identification.
Validation Strategy and Early Stopping
I reserved approximately 15% of subjects—and all their corresponding left–right palmprint pairs—as an independent validation set, ensuring zero overlap with the training cohort. I did not perform k-fold cross-validation because a subject-level hold-out split fully prevents data leakage while providing a sufficiently large validation sample for reliable monitoring. During training, I monitored the validation loss at the end of each epoch and employed early stopping when the loss ceased to improve over consecutive epochs, restoring the model weights from the best-performing epoch. All reported performance metrics were subsequently evaluated on the separate 15% of subjects as a test set.
Computational Complexity and Run-Time Performance
Experiments were run on an NVIDIA A100 GPU (Google Colab) with up to 300 training epochs and early stopping on validation accuracy (patience = 50 epochs). Training the adapted Siamese network (VGG16-Palmprint backbone + AnchorEFA) for each dataset required 244.56 s. Early stopping typically halted training well before 300 epochs. During inference, feature extraction and concatenated similarity computation for one left–right palmprint pair took on average 1.5056 × 10−4 s per sample. The overall time complexity scales linearly with the number of evaluated pairs and the 4096-dimensional embedding size, demonstrating that my method is both practical for offline training and efficient enough for real-time biometric verification.
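Assuming the Keras API, the early-stopping setup described above amounts to a single callback; the fit call is shown commented as a usage hint, with data names hypothetical:

from tensorflow.keras.callbacks import EarlyStopping

# Mirrors the reported setup: up to 300 epochs, patience of 50 on validation
# accuracy, keeping the weights from the best-performing epoch.
early_stop = EarlyStopping(monitor="val_accuracy", patience=50,
                           restore_best_weights=True, mode="max")
# model.fit([train_anchors, train_candidates], train_labels, epochs=300,
#           validation_data=([val_anchors, val_candidates], val_labels),
#           callbacks=[early_stop])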
I carried out a set of experiments to assess my adapted Siamese network’s performance in cross-palmprint recognition. I focused on three main factors: (1) the effectiveness of different similarity measurement techniques, (2) the impact of using Anchor Embedding through Feature Aggregation (AnchorEFA), and (3) overall model performance on my combined datasets.

4.1. Experiment 1: Similarity Measurement Techniques

As summarized in Table 2, my first experiment evaluated three similarity measurement methods (Euclidean distance, Element-wise Absolute Difference (EAD), and their concatenated version) using a traditional anchor setup (where N = 1). With Euclidean distance alone, each dataset yielded a uniform accuracy of 50%, with a mean EER (Equal Error Rate) of 49.63% and a ROC-AUC score of 0.51.
These results provide a baseline indication of how well Euclidean distance captures cross-palmprint similarities. While it offers a straightforward approach to measuring differences, it struggles to identify the subtle features unique to palmprints, leading to less-than-ideal discriminative power in this initial setup.
As shown in Table 3, EAD gave the model a remarkably strong boost, especially on the TongjiS1_Left_Right and CASIA_Left_Right datasets. The average accuracy jumped to 76.88%, with the ROC-AUC climbing to 0.80. This improvement suggests that EAD does a better job of picking up on subtle palmprint variations compared to Euclidean distance on its own.
Switching to Table 4, I see that combining Euclidean distance with EAD resulted in an even higher accuracy—reaching an average of 84.70%—and a ROC-AUC of 0.92. This integrated technique outperformed using each metric by itself, emphasizing the benefits of capturing different facets of palmprint features. By merging Euclidean distance with EAD, the model gains a more well-rounded view, confirming the importance of choosing similarity measures that complement one another. Equipped with these findings, I can now investigate how AnchorEFA might further heighten the model’s cross-palmprint recognition abilities.

4.2. Experiment 2: The Impact of AnchorEFA

In this part of the study, I looked at how different numbers of samples per anchor (N = 2 to N = 5) affected the model’s cross-palmprint recognition. My goal was to pinpoint the ideal N that strikes the best balance between accuracy and consistency.
N = 2 (Table 5): using two samples per anchor notably boosted the model’s performance across every dataset, especially on TongjiS1_Left_Right. I saw a mean accuracy of 85.38% and a ROC-AUC of 0.92, which was a clear step up from the single-sample anchor approach.
N = 3 (Table 6): adding a third sample kept accuracy high and even nudged the ROC-AUC up to 0.93. This suggests that more samples can sharpen the model’s ability to separate matching and non-matching palmprints.
N = 4 (Table 7): at four samples per anchor, the model reached its peak performance, with a mean accuracy of 86.26% and a ROC-AUC of 0.93. This configuration seemed to strike the perfect balance, delivering the best overall results.
N = 5 (Table 8): taking it up to five samples per anchor had mixed effects. While TongjiS1_Left_Right did very well, POLYU_Left_Right showed a noticeable drop in both accuracy and ROC-AUC. It appears that gathering too many samples can introduce noise or dilute the anchor’s specificity, highlighting the importance of finding the sweet spot for N.

A Closer Look at the “Four-Sample” AnchorEFA Configuration

In this section, I focus on how my Anchor Embedding through Feature Aggregation (AnchorEFA) approach works when I use four samples per anchor. I tested this setup on three different datasets—POLYU, TongjiS1, and CASIA—to see how well it could spot when two palmprints come from the same person versus different people.
Performance Across Datasets: According to Table 9, when looking at “Same Person” classifications in the POLYU dataset, my model shows a True Positive Rate (TPR) of 0.88, meaning it correctly identifies a match most of the time. However, the True Negative Rate (TNR) for “Different Person” is 0.75, suggesting it could still confuse prints from different individuals. Things get more encouraging in the TongjiS1 dataset, where TPR climbs to 0.92 and TNR reaches 0.90. Essentially, the model does an even better job recognizing both matches and non-matches. Over on CASIA, I see similarly dependable results—0.89 for TPR and 0.83 for TNR—indicating the model can handle a range of data conditions. Across these datasets, I also see fewer false positives and false negatives, signaling the model’s adaptability and hinting that it could work well in real-world settings.
Detailed Classification Metrics
To drill down further, Table 10 shows precision, recall, and F1-scores for each category:
  • POLYU
    Different Person: Precision 0.86, Recall 0.75, and F1 0.80. While precision is respectable, the recall suggests some missed cases.
    Same Person: Precision 0.78, Recall 0.88, and F1 0.83. This points to a strong ability to verify if two prints belong to the same individual.
    Overall Score: 0.82.
  • TongjiS1
    Different Person: Precision 0.92, Recall 0.90, and F1 0.91.
    Same Person: Precision 0.90, Recall 0.92, and F1 0.91.
    Overall Score: 0.91, marking a nice jump from POLYU’s results and reflecting a great balance between catching correct matches and avoiding mistakes.
  • CASIA
    Different Person: Precision 0.88, Recall 0.83, and F1 0.86.
    Same Person: Precision 0.84, Recall 0.89, and F1 0.86.
    Overall Score: 0.86, showing consistently strong performance across varied conditions.
When I compare these datasets, it is clear that four-sample AnchorEFA often strikes the right balance between capturing enough information for accurate matching and avoiding the noise that can come from aggregating too many samples. The steady improvement in precision, recall, and F1-scores across POLYU, TongjiS1, and CASIA indicates that this configuration can adapt well to different data conditions. All told, these findings suggest that combining four-sample AnchorEFA with my adapted network is a solid path toward more reliable cross-palmprint recognition in real-world scenarios.

4.3. ROC Curves and Equal Error Rate (EER) Analysis

In Figure 2, I present Receiver Operating Characteristic (ROC) curves and Equal Error Rate (EER) plots for three datasets—POLYU_Left_Right, TongjiS1_Left_Right, and CASIA_Left_Right. The ROC curves illustrate how the True Positive Rate (TPR) trades off against the False Positive Rate (FPR) at different thresholds, while the EER marks the point where FPR equals the False Negative Rate (FNR). Below is a closer look at how my model fares on each dataset.
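For reference, EER values like those below can be reproduced from raw match scores with a few lines of scikit-learn; the helper name and toy data are mine:

import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the point on the ROC curve where FPR equals FNR (= 1 - TPR)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.argmin(np.abs(fpr - fnr))    # threshold index closest to the crossover
    return (fpr[idx] + fnr[idx]) / 2

# Toy check: well-separated score distributions give a low EER.
labels = np.array([0] * 50 + [1] * 50)
scores = np.concatenate([np.random.default_rng(0).uniform(0.0, 0.4, 50),
                         np.random.default_rng(1).uniform(0.6, 1.0, 50)])
print(equal_error_rate(labels, scores))   # 0.0 for perfectly separated scores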

4.3.1. POLYU_Left_Right Dataset

  • ROC Curve: the model achieves an AUC of 0.899, indicating that it can effectively distinguish between matching and non-matching palmprints. An AUC close to 1 suggests strong separability of classes.
  • EER: the EER is around 18.03%, meaning that, at the threshold where the FPR equals the FNR, approximately 18.03% of classifications are incorrect. This is the “crossover” point between false positives and false negatives.

4.3.2. TongjiS1_Left_Right Dataset

  • ROC Curve: with an AUC of 0.96, the model shows an even stronger performance here than on the POLYU dataset. A higher AUC indicates an excellent ability to separate same-person vs. different-person palmprints.
  • EER: the EER is about 9.34%, significantly lower than in the POLYU dataset. This lower value means fewer overall errors when deciding if two palmprints come from the same individual.

4.3.3. CASIA_Left_Right Dataset

  • ROC Curve: the AUC is 0.939, signaling that the model performs well in detecting true matches at various false-positive levels.
  • EER: the EER is roughly 14.75%, sitting between the results for the POLYU and TongjiS1 datasets in terms of error rate at the optimal threshold.
Across all three datasets, the model achieves high AUC values, underscoring its ability to reliably tell apart positive (same person) and negative (different person) samples. TongjiS1_Left_Right stands out with the lowest EER and highest AUC, suggesting a strong balance between sensitivity and specificity. Meanwhile, POLYU_Left_Right and CASIA_Left_Right still display strong discriminative power, even though their EER values are higher.
These results reinforce the conclusion that my Siamese network—adapted for cross-palmprint recognition—provides accurate and robust biometric authentication and identification across different datasets. By striking a favorable trade-off between TPR and FPR, the model demonstrates both strong separability (high AUC) and manageable error rates (EER), validating its applicability for real-world biometric systems.

4.4. Experiment 3: Performance on the Consolidated Datasets

After finalizing my model configuration—AnchorEFA with N = 4 samples, combined with the concatenated similarity measurement (Euclidean distance + EAD)—I put everything to the test on a consolidated dataset that merges data from my earlier experiments. The goal was to see how the model handles a more extensive and varied dataset under real-world conditions.
Table 11 shows the model delivering an accuracy of 86.53% and a ROC-AUC of 0.93, which both point to solid cross-palmprint recognition capabilities. In particular, the high ROC-AUC underscores the model’s skill at distinguishing between genuine (same person) and imposter (different person) pairs in a broad dataset.
Digging deeper, I looked at the confusion matrix in Table 12:
  • 83% of “Different Person” cases were classified correctly (true negatives).
  • 90% of “Same Person” cases were identified correctly (true positives).
  • 17% were false positives (different individuals incorrectly flagged as the same).
  • 10% were false negatives (same individuals mistakenly labeled as different).
Although a small portion of examples were misclassified, these figures still signal strong overall discrimination.
From the classification report (Figure 3):
  • Different Person: Precision 0.90, Recall 0.83, and F1 0.86.
  • Same Person: Precision 0.84, Recall 0.90, and F1 0.87.
  • Overall Accuracy: 0.87.
These numbers reveal the model’s steady balance between correctly identifying matches and avoiding false alarms—particularly with a slight edge in catching “Same Person” pairs.
The ROC curve (also in Figure 3) has an AUC of 0.93, confirming the model’s robust ability to separate positive from negative classes. An EER of around 14.03% indicates that, at the point where false positives equal false negatives, the model still maintains solid performance. Overall, the model shows a balanced, reliable approach to cross-palmprint recognition, with slightly higher recall for “Same Person” cases.
Moving on to the test split, Table 13 shows:
  • Accuracy: 81.64%
  • ROC-AUC: 0.89
Though these scores dip a bit compared to the validation set, they still confirm the model’s strong generalization to unseen data and robust performance under changing conditions.
Specific metrics included in Table 14 shows:
  • True Negative Rate: 78% (correctly tagging different individuals)
  • True Positive Rate: 86% (correctly identifying the same person)
  • False Positive Rate: 22% (different individuals mislabeled as the same)
  • False Negative Rate: 14% (the same individual mislabeled as different)
In Figure 4, I see:
  • EER Analysis: at around 18.943%, the model balances FPR and FNR reasonably well, though there is room to lower misclassifications further.
  • ROC Curve Analysis: an AUC of 0.89 is still impressive, indicating strong separation between “Same Person” and “Different Person”.
  • Precision, Recall, and F1-Score:
    Different Person: Precision 0.84, Recall 0.78, and F1 0.81
    Same Person: Precision 0.79, Recall 0.86, and F1 0.82
    Overall Accuracy: 0.82, with macro and weighted averages both sitting at 0.82, showing consistent performance across both classes.
Although the model makes some errors (noted by the non-trivial FPR and FNR), these outcomes highlight its reliability for practical biometric tasks, where achieving a high level of accuracy and consistency is crucial.
Experiment 3 ties together all my earlier optimizations—namely N = 4 for AnchorEFA plus a Euclidean + EAD similarity approach—and demonstrates how they elevate cross-palmprint recognition. The results show meaningful gains in accuracy and reliability, establishing a new benchmark for cross-palmprint identification. By examining the consolidated dataset and a final test split, I confirm that this adapted Siamese network can handle a wide variety of palmprints, paving the way for future developments in biometric security and verification.

5. Discussion

The primary goal of this study was to see if I can reliably identify individuals by comparing the features of their left and right palmprints. To investigate this, I developed an adapted Siamese network that combines two key ideas: Anchor Embedding through Feature Aggregation (AnchorEFA) and a concatenated similarity measurement blending Euclidean distance with Element-wise Absolute Difference (EAD). My experiments show that cross-palmprint recognition is not only feasible, but can be highly accurate across multiple datasets. Here, I revisit my main research questions and explore the broader implications of my findings.
Q1: Can I accurately identify individuals by comparing left and right palmprints?
My results give a clear “yes”. Left and right palmprints share enough common features that my adapted Siamese network can reliably identify individuals, even when only one palm is available. This outcome confirms the physiological overlap between both palms and suggests that cross-palmprint recognition can fill an important gap in real-world situations where complete biometric data is not always accessible.
Q2: How can I increase the model’s ability to discriminate between cross-palmprint features?
I found that combining AnchorEFA and a concatenated similarity measurement substantially boosts the network’s discriminative power. AnchorEFA provides more stable anchors by averaging several palmprints from the same person, minimizing noise and outliers. Meanwhile, the concatenated approach—fusing Euclidean distance and EAD—captures both broad structural differences and subtle texture variations, leading to higher accuracy and better ROC-AUC scores overall.
Q3: What is the impact of AnchorEFA on the model’s performance?
By aggregating multiple palmprint samples for each anchor, AnchorEFA gives the model a richer, more representative snapshot of an individual’s palm features. This aggregation reduces the risk of outliers skewing results, making the network more robust when identifying subtle variations. As a result, I observed notable improvements in precision, recall, and overall reliability.
Q4: Why use a concatenated similarity measurement (Euclidean + EAD)?
Relying on one metric alone can miss important details about a palmprint’s structure or texture. Euclidean distance captures global differences, while EAD focuses on finer, element-wise distinctions. By combining both, the model gains a more nuanced perspective, translating into better performance in distinguishing matching from non-matching palms.
As summarized in Table 1 (Related Works), existing palmprint methods—limited to same-hand matching—typically achieve extremely high performance (often > 95% accuracy and EER < 1%). In contrast, my method tackles the more challenging cross-palmprint scenario and achieves 86.26% accuracy, 14.56% EER, and 0.93 ROC-AUC. Although these numbers are lower than same-hand results, they represent the first reported benchmark for cross-hand matching. The fact that my approach remains competitive, despite the inherent asymmetry between left and right palms, underscores its effectiveness. This comparative analysis highlights the trade-off between task difficulty and recognition performance, demonstrating that my adapted Siamese network with AnchorEFA and hybrid similarity metrics can reliably bridge the gap between opposite-hand palmprints where prior methods cannot.

5.1. Broader Implications

  • Biometric Technology: Cross-palmprint recognition can extend current systems, offering a versatile fallback when only one palmprint is available. This can boost security in everything from facility access control to personal devices.
  • Forensic Science: Being able to match a palmprint from one hand to data about the other hand broadens the forensic toolkit—especially useful when partial or unclear prints are all that investigators can recover at a crime scene.

5.2. Future Directions

There are several ways to build on these findings:
  • Additional Features: Integrating more palmprint characteristics (like deeper ridge details or micro-textures) could push accuracy even further.
  • Scaling Up: Testing the model on larger, more diverse datasets would offer insights into its scalability and generalizability in real-world settings.
  • Ethics and Privacy: As with any biometric technology, I must consider responsible data use, consent, and broader ethical issues before widespread implementation.
My adapted Siamese network, featuring AnchorEFA and a hybrid similarity measure, has shown that cross-palmprint recognition is both possible and highly effective. The techniques I have introduced offer solid groundwork for developing more secure, accurate, and flexible biometric and forensic identification systems in the future.

6. Conclusions

By exploring the cross-palmprint recognition domain, I have made important contributions to biometric and forensic identification technology. I have set new benchmarks in the accuracy and reliability of biometric recognition systems by developing an adapted Siamese network model with Anchor Embedding through Feature Aggregation (AnchorEFA) and a concatenated similarity measurement approach. Not only does this study answer critical research questions with regard to the viability and optimization of cross-palmprint recognition, but it also serves as a foundation for future innovations in the area.
This research confirms that cross-palmprint recognition is not only feasible but highly effective. The results demonstrate that AnchorEFA has been critical in increasing the model’s capability to discriminate similar and dissimilar features, underscoring the importance of a representative anchor in improving recognition. Additionally, the model’s discriminative power is greatly improved by adopting a concatenated similarity measurement approach that combines Euclidean distance and Element-wise Absolute Difference (EAD).
Beyond the academic sphere, the implications of this study are important for the practical implementation of biometric recognition systems. Accurate cross-palmprint matching can improve personal authentication processes, strengthen security measures, and offer a reliable alternative for identification where traditional biometric data is limited. In addition, cross-palmprint recognition is a promising forensic application that could transform crime scene investigations and criminal identification, allowing law enforcement agencies to more effectively match partial or indistinct palmprint evidence to existing records.
In future work, the model will be expanded to incorporate additional biometric features, and its applicability to larger datasets and its performance in real-world scenarios will be explored. In addition, the deployment of advanced biometric technologies must be carefully navigated in terms of ethical considerations and privacy implications. Finally, this work is a significant step forward in the biometric recognition field, providing a complete solution to the problems of cross-palmprint recognition. My research improves the precision, accuracy, and versatility of identification systems and contributes to the development of more secure and efficient biometric and forensic technologies, a milestone in the continuing development of security and identification methodology.

Author Contributions

M.E. conceived and designed the study, developed the methodology, performed all experiments and data analysis, and wrote the original draft. This research builds on prior collaborative work documented in Refs. [10,25,26]. M.E. reviewed and edited the manuscript and is responsible for all correspondence.

Funding

This work was funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2024-02-01054).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The experimental data supporting the findings of this study are available from the PolyU-IITD Contactless Palmprint Images Database Version 3.0, the Tongji Contactless Palmprint Database, and the CASIA Palmprint Image Database. These datasets are accessible to researchers upon submission of the respective application forms and signed license agreements. Use of these databases is restricted to non-commercial purposes, and explicit approval is required for publishing any images from the datasets. Due to ongoing patent applications, the full code cannot be publicly hosted at this time. The complete codebase including data-processing pipelines, model definitions, and training/evaluation scripts will be made available to qualified researchers under a standard confidentiality agreement (NDA) upon request.

Acknowledgments

The author acknowledges the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2024-02-01054.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Babu, V.D.; Dornala, R.R.; Anusha, C.; Babu, P.R.; Mohan, K.K.; Sumanth, K.V. A Hybrid Multimodal Biometric Recognition System (HMBRS) based on Fusion of Iris, Face, and Finger Vein Traits. In Proceedings of the 2024 5th International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 18–20 September 2024; pp. 1287–1292.
  2. Sathwik, G.; Reddy, P.D.T.; Gupta, M.; Rahul, M.; Dholvan, M. Secured Voting System based on Multilayered Biometric Authentication. In Proceedings of the 2024 Second International Conference on Inventive Computing and Informatics (ICICI), Bangalore, India, 11–12 June 2024; pp. 502–509.
  3. Poon, C.; Wong, D.C.M.; Shen, H.C. Personal Identification and Verification: Fusion of Palmprint Representations. In Proceedings of the Biometric Authentication, Berlin/Heidelberg, Germany, 15 July 2004; pp. 782–788.
  4. Liu, C.; Zhong, D.; Shao, H. Few-shot palmprint recognition based on similarity metric hashing network. Neurocomputing 2021, 456, 540–549.
  5. Zhong, D.; Yang, Y.; Du, X. Palmprint Recognition Using Siamese Network. In Proceedings of the Biometric Recognition, Urumqi, China, 11–12 August 2018; pp. 48–55.
  6. Du, X.; Zhong, D.; Li, P. Low-Shot Palmprint Recognition Based on Meta-Siamese Network. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 79–84.
  7. Zhong, D.; Zhu, J. Centralized Large Margin Cosine Loss for Open-Set Deep Palmprint Recognition. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1559–1568.
  8. Du, X.; Zhong, D.; Shao, H. Cross-Domain Palmprint Recognition via Regularized Adversarial Domain Adaptive Hashing. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 2372–2385.
  9. Taouche, C.; Belhadef, H. Multimodal biometric system combining left and right palmprints. Inf. Discov. Deliv. 2020, 48, 2–13.
  10. Ezz, M.; Alanazi, W.; Mostafa, A.M.; Hamouda, E.; Elbashir, M.K.; Alruily, M. Improved Siamese Palmprint Authentication Using Pre-Trained VGG16-Palmprint and Element-Wise Absolute Difference. Comput. Syst. Sci. Eng. 2023, 46, 2299–2317.
  11. Michele, A.; Colin, V.; Santika, D.D. MobileNet Convolutional Neural Networks and Support Vector Machines for Palmprint Recognition. Procedia Comput. Sci. 2019, 157, 110–117.
  12. Zhu, L.; Xu, P.; Zhong, C. Siamese Network Based on CNN for Fingerprint Recognition. In Proceedings of the 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), Fuzhou, China, 24–26 September 2021; pp. 303–306.
  13. Matkowski, W.M.; Chai, T.; Kong, A.W.K. Palmprint Recognition in Uncontrolled and Uncooperative Environment. IEEE Trans. Inf. Forensics Secur. 2020, 15, 1601–1615.
  14. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3160–3174.
  15. Alrashidi, A.; Alotaibi, A.; Hussain, M.; AlShehri, H.; AboAlSamh, H.A.; Bebis, G. Cross-Sensor Fingerprint Matching Using Siamese Network and Adversarial Learning. Sensors 2021, 21, 3657.
  16. Rida, I.; Herault, R.; Marcialis, G.L.; Gasso, G. Palmprint recognition with an efficient data driven ensemble classifier. Pattern Recognit. Lett. 2019, 126, 21–30.
  17. Zhai, H.; Deng, M.; Wu, H. Elliptic Curve Cryptography-Based Identity Authentication Scheme Suitable for Metaverse Environment. Symmetry 2024, 16, 891.
  18. Jia, K.P.; Dickinson, A.J.; Mi, J.; Cui, G.; Xiao, T.T.; Kharbatia, N.M.; Guo, X.; Sugiono, E.; Aranda, M.; Blilou, I.; et al. Anchorene is a carotenoid-derived regulatory metabolite required for anchor root formation in Arabidopsis. Sci. Adv. 2019, 5, eaaw6787.
  19. Kumar, A. Toward more accurate matching of contactless palmprint images under less constrained environments? IEEE Trans. Inf. Forensics Secur. 2018, 14, 34–47.
  20. Zhang, L.; Li, L.; Yang, A.; Shen, Y.; Yang, M. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognit. 2017, 69, 199–212.
  21. Chadha, G.S.; Panambilly, A.; Schwung, A.; Ding, S.X. Bidirectional deep recurrent neural networks for process fault classification. ISA Trans. 2020, 106, 330–342.
  22. Zhang, M.; Cheng, Q.; Luo, F.; Ye, L. A Triplet Nonlocal Neural Network with Dual-Anchor Triplet Loss for High-Resolution Remote Sensing Image Retrieval. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2711–2723.
  23. Chen, H.Z.; Hu, B.; Chen, L.; Guan, Z.H. Deep Metric Learning with Impulsive Neural Network. In Proceedings of the 2024 IEEE 13th Data Driven Control and Learning Systems Conference (DDCLS), Kaifeng, China, 17–19 May 2024; pp. 1117–1123.
  24. Lee, J.H. Minimum Euclidean distance evaluation using deep neural networks. AEU-Int. J. Electron. Commun. 2019, 112, 152964.
  25. Hamouda, E.; Ezz, M.M.; Mostafa, A.M.; Elbashir, M.K.; Alruily, M.; Tarek, M. Innovative Hetero-Associative Memory Encoder (HAMTE) for Palmprint Template Protection. Comput. Syst. Sci. Eng. 2023, 46, 619–636.
  26. Fawzy, A.; Ezz, M.; Nouh, S.; Tharwat, G. Palmprint Recognition System Using Siamese Network and Transfer Learning. Int. J. Adv. Appl. Sci. 2022, 9, 90–99.
Figure 1. Overview of the proposed cross-palmprint recognition methodology.
Figure 2. Receiver operating characteristic (ROC) curves and equal error rate (EER) plots for three datasets: (a) POLYU_Left_Right, (b) TongjiS1_Left_Right, and (c) CASIA_Left_Right.
Figure 3. Performance on the consolidated datasets.
Figure 4. Performance on the consolidated test split.
Table 1. Summary of existing palmprint recognition methods.

| Ref. | Method/Model | Problem Type | Performance | Datasets | Subject Split | Comments |
|---|---|---|---|---|---|---|
| [4] | SMHNet (Similarity Metric Hashing) | Few-shot classification | Accuracy: 100.0% (PolyU); 99.2% (XJTU-UP) | PolyU, XJTU-UP, Tongji | Classes disjoint | Limited to same-hand palmprint classification; does not support cross-hand matching. |
| [5] | Siamese VGG16 | One-shot verification | EER: 0.28% (PolyU); 4.56% (XJTU) | Tongji, PolyU-IITD, CASIA | Subjects disjoint | Same-hand verification; does not support cross-hand matching. |
| [6] | Meta-Siamese Network | Few-shot classification | Accuracy: up to 100.0% | PolyU, Tongji, CASIA, IITD | Classes disjoint | Same-hand classification; does not support cross-hand matching. |
| [7] | MobileNet-V2 + SVM | Closed-set classification | Accuracy: 100.0% (PolyU) | PolyU | Images-only split | Same-hand classification; does not support cross-hand matching. |
| [8] | C-LMCL Loss (Large-Margin Cosine) | Open-set identification | Accuracy: >95.0% (closed-set); EER: ≈3.0–5.0% (open-set) | CASIA, Tongji | Subjects disjoint | Same-hand identification; does not support cross-hand matching. |
| [9] | R-ADAH (Adversarial Hashing) | Cross-domain classification | Rank-1 Accuracy: ≈90.0%; EER: ≈1.5% | PolyU, CASIA, Tongji | Domains disjoint | Same-hand classification; does not support cross-hand matching. |
| [10] | Multimodal Fusion (Palmprint + Iris) | Closed-set classification | Accuracy: 93.5% | CASIA, ORL/FERET | Subjects overlapped | Same-hand verification; does not support cross-hand matching. |
| [11] | Siamese VGG16-Palmprint + EAD | One-to-one verification | Accuracy: 91.8%; EER: 0.082% (CASIA) | CASIA | Likely overlapped | Same-hand verification; does not support cross-hand matching. |
| [13] | End-to-End Alignment + CNN (Wild) | Closed-set identification | Rank-1 Accuracy: ≈90.0% (wild); Accuracy: >99.0% (controlled) | NTU-PI-v1, CASIA, IITD, PolyU | Subjects disjoint | Same-hand identification; does not support cross-hand matching. |
| [14] | PalmNet (Gabor-PCA CNN + attention) | Closed-set identification | Accuracy: >99.0%; EER: <1.0% | Tongji, IITD, CASIA, REST | Subjects disjoint | Same-hand identification; does not support cross-hand matching. |
| [16] | Ensemble 2DPCA/2DLDA + RSM | Closed-set classification | Accuracy: ≈99.0% (PolyU noisy) | PolyU, CASIA, IITD | Images-only split | Same-hand classification; does not support cross-hand matching. |
| Proposed | Siamese + AnchorEFA + Hybrid Metric | Cross-palmprint verification | Accuracy: 86.26%; EER: 14.56%; AUC: 0.93 | POLYU, TongjiS1, CASIA | Subjects disjoint | Aggregates multi-sample anchors with hybrid Euclidean + EAD similarity; first cross-palm approach. |
Problem type definitions: Few-shot classification distinguishes among multiple classes given only a handful of labeled examples per class. Verification decides “same vs. different” between two samples. Closed-set scenarios assume all inputs belong to a fixed set of known classes (no unknowns), whereas open-set identification must also correctly reject samples from unseen (out-of-set) identities. Cross-domain classification addresses cases where training (source) and testing (target) data come from different distributions. Finally, closed-set verification performs match vs. non-match decisions with both genuine and imposter pairs drawn from the same known identity pool.
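Because Tables 2–8 report verification metrics, a short sketch may help clarify how FAR, FRR, EER, and ROC-AUC relate. The code below is an illustrative computation under assumed inputs (synthetic genuine/imposter scores, higher meaning more similar); it is not the paper's evaluation script.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def verification_metrics(y_true, y_score):
    """y_true: 1 = same person (genuine pair), 0 = different person (imposter pair).
    y_score: similarity score, higher = more likely a genuine pair.
    FAR corresponds to the false positive rate and FRR to the false
    negative rate; the EER is the operating point where the two are equal."""
    far, tpr, _ = roc_curve(y_true, y_score)
    frr = 1 - tpr
    idx = np.argmin(np.abs(far - frr))      # threshold where FAR ≈ FRR
    eer = (far[idx] + frr[idx]) / 2
    return eer, roc_auc_score(y_true, y_score)

# Synthetic demonstration data only.
rng = np.random.default_rng(1)
labels = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([rng.normal(1.0, 0.5, 500), rng.normal(0.0, 0.5, 500)])
eer, auc = verification_metrics(labels, scores)
print(f"EER = {eer:.2%}, ROC-AUC = {auc:.3f}")
```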
Table 2. Performance using Euclidean distance.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 50.00% | 48.84% | 48.81% | 48.82% | 0.51 |
| TongjiS1_Left_Right | 50.00% | 47.00% | 47.02% | 47.01% | 0.54 |
| CASIA_Left_Right | 50.00% | 53.06% | 53.03% | 53.04% | 0.47 |
| Mean | 50.00% | 49.63% | 49.62% | 49.63% | 0.51 |
Table 3. Performance using EAD.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 54.11% | 48.57% | 48.62% | 48.60% | 0.53 |
| TongjiS1_Left_Right | 90.64% | 9.99% | 10.00% | 9.99% | 0.96 |
| CASIA_Left_Right | 85.88% | 15.08% | 15.08% | 15.08% | 0.92 |
| Mean | 76.88% | 24.55% | 24.57% | 24.56% | 0.80 |
Table 4. Performance using Euclidean distance and EAD.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 78.98% | 19.07% | 19.05% | 19.06% | 0.89 |
| CASIA_Left_Right | 85.43% | 15.20% | 15.19% | 15.20% | 0.92 |
| TongjiS1_Left_Right | 89.68% | 11.48% | 11.48% | 11.48% | 0.95 |
| Mean | 84.70% | 15.25% | 15.24% | 15.24% | 0.92 |
Table 5. AnchorEFA with 2 samples.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 80.42% | 20.53% | 20.53% | 20.53% | 0.88 |
| TongjiS1_Left_Right | 92.11% | 8.69% | 8.69% | 8.69% | 0.97 |
| CASIA_Left_Right | 83.61% | 16.58% | 16.57% | 16.57% | 0.91 |
| Mean | 85.38% | 15.26% | 15.26% | 15.26% | 0.92 |
Table 6. AnchorEFA with 3 samples.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 80.40% | 18.78% | 18.76% | 18.77% | 0.90 |
| TongjiS1_Left_Right | 91.94% | 8.32% | 8.32% | 8.32% | 0.97 |
| CASIA_Left_Right | 83.80% | 16.48% | 16.48% | 16.48% | 0.91 |
| Mean | 85.38% | 14.53% | 14.52% | 14.52% | 0.93 |
Table 7. AnchorEFA with 4 samples.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 81.67% | 19.60% | 19.60% | 19.60% | 0.89 |
| TongjiS1_Left_Right | 91.06% | 9.34% | 9.34% | 9.34% | 0.96 |
| CASIA_Left_Right | 86.05% | 14.75% | 14.75% | 14.75% | 0.93 |
| Mean | 86.26% | 14.56% | 14.56% | 14.56% | 0.93 |
Table 8. AnchorEFA with 5 samples.

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| POLYU_Left_Right | 51.31% | 51.88% | 51.88% | 51.88% | 0.49 |
| TongjiS1_Left_Right | 91.98% | 8.53% | 8.54% | 8.54% | 0.97 |
| CASIA_Left_Right | 84.96% | 14.50% | 14.50% | 14.50% | 0.93 |
| Mean | 76.08% | 24.97% | 24.97% | 24.97% | 0.80 |
Table 9. Confusion matrix analysis (row-normalized; rows are true classes, columns are predicted classes).

| True Class | POLYU: Different | POLYU: Same | TongjiS1: Different | TongjiS1: Same | CASIA: Different | CASIA: Same |
|---|---|---|---|---|---|---|
| Different Person | 0.75 | 0.25 | 0.90 | 0.10 | 0.83 | 0.17 |
| Same Person | 0.12 | 0.88 | 0.08 | 0.92 | 0.11 | 0.89 |
Table 10. Classification report analysis (precision, recall, and F1-score per class for each dataset).

| Class | Precision (POLYU) | Recall (POLYU) | F1 (POLYU) | Precision (TongjiS1) | Recall (TongjiS1) | F1 (TongjiS1) | Precision (CASIA) | Recall (CASIA) | F1 (CASIA) |
|---|---|---|---|---|---|---|---|---|---|
| Different Person | 0.86 | 0.75 | 0.80 | 0.92 | 0.90 | 0.91 | 0.88 | 0.83 | 0.86 |
| Same Person | 0.78 | 0.88 | 0.83 | 0.90 | 0.92 | 0.91 | 0.84 | 0.89 | 0.86 |

Overall accuracy: 0.82 (POLYU), 0.91 (TongjiS1), 0.86 (CASIA).
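The per-class figures in Table 10 follow arithmetically from the row-normalized confusion matrices in Table 9 when genuine and imposter pairs are balanced. The sketch below reproduces the POLYU column as a worked check; the balanced-classes assumption is mine, inferred from the matching numbers.

```python
import numpy as np

# Row-normalized POLYU confusion matrix from Table 9.
# Rows = true class, columns = predicted class, order: [Different, Same].
cm = np.array([[0.75, 0.25],
               [0.12, 0.88]])

recall = np.diag(cm)                      # [0.75, 0.88]
precision = np.diag(cm) / cm.sum(axis=0)  # [0.75/0.87, 0.88/1.13] ≈ [0.86, 0.78]
f1 = 2 * precision * recall / (precision + recall)

print(np.round(precision, 2))  # [0.86 0.78]
print(np.round(recall, 2))     # [0.75 0.88]
print(np.round(f1, 2))         # [0.8  0.83]
```

These values match the POLYU precision, recall, and F1 entries in Table 10, and the overall accuracy of 0.82 is the mean of the two recalls, (0.75 + 0.88) / 2.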
Table 11. Performance of the consolidated dataset with AnchorEFA (N = 4).

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| Consolidated datasets | 86.53% | 14.04% | 14.04% | 14.04% | 0.93 |
Table 12. Per-class accuracy for different and same person pairs on the consolidated datasets (row-normalized confusion matrix; rows are true classes, columns are predicted classes).

| True Class | Predicted: Different Person | Predicted: Same Person |
|---|---|---|
| Different Person | 0.83 | 0.17 |
| Same Person | 0.10 | 0.90 |
Table 13. Performance of the consolidated test split with AnchorEFA (N = 4).

| Dataset | Accuracy | FAR | FRR | EER | ROC-AUC |
|---|---|---|---|---|---|
| Consolidated test split | 81.64% | 20.69% | 15.63% | 18.16% | 0.89 |
Table 14. Per-class accuracy for different and same person pairs on the consolidated test split (row-normalized confusion matrix; rows are true classes, columns are predicted classes).

| True Class | Predicted: Different Person | Predicted: Same Person |
|---|---|---|
| Different Person | 0.78 | 0.22 |
| Same Person | 0.14 | 0.86 |