Search Results (261)

Search Parameters:
Keywords = biometric recognition system

25 pages, 1072 KiB  
Review
EEG-Based Biometric Identification and Emotion Recognition: An Overview
by Miguel A. Becerra, Carolina Duque-Mejia, Andres Castro-Ospina, Leonardo Serna-Guarín, Cristian Mejía and Eduardo Duque-Grisales
Computers 2025, 14(8), 299; https://doi.org/10.3390/computers14080299 - 23 Jul 2025
Abstract
This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as support vector machines (SVMs), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))
19 pages, 675 KiB  
Article
A Multicomponent Face Verification and Identification System
by Athanasios Douklias, Ioannis Zorzos, Evangelos Maltezos, Vasilis Nousis, Spyridon Nektarios Bolierakis, Lazaros Karagiannidis, Eleftherios Ouzounoglou and Angelos Amditis
Appl. Sci. 2025, 15(15), 8161; https://doi.org/10.3390/app15158161 - 22 Jul 2025
Abstract
Face recognition is a biometric technology based on the identification or verification of facial features. Automatic face recognition is an active research field in computer vision and artificial intelligence (AI) that is fundamental for a variety of real-time applications. In this research, the design and implementation of a face verification and identification system with a flexible, modular, secure, and scalable architecture is proposed. The proposed system incorporates several types of system components: (i) portable capabilities (mobile application and mixed reality [MR] glasses), (ii) enhanced monitoring and visualization via a user-friendly Web-based user interface (UI), and (iii) information sharing via middleware to other external systems. The experiments showed that these interconnected and complementary system components deliver robust, real-time face identification and verification. Furthermore, to identify a model of high accuracy, robustness, and speed for face identification and verification tasks, a comprehensive evaluation of multiple pre-trained face recognition models (FaceNet, ArcFace, Dlib, and MobileNetV2) on a curated version of the ID vs. Spot dataset was performed. Among the models evaluated, FaceNet emerged as the preferable choice for real-time tasks due to its balance between accuracy and inference speed for both face identification and verification, achieving an AUC of 0.99, Rank-1 of 91.8%, Rank-5 of 95.8%, FNR of 2%, FAR of 0.1%, accuracy of 98.6%, and an inference speed of 52 ms.
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
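The Rank-1/Rank-5 and FAR/FNR figures above come from the two standard matching modes on face embeddings: 1:1 verification (threshold a distance) and 1:N identification (rank the gallery by distance). A minimal sketch of both modes — the embeddings, identities, and the 0.5 threshold below are invented for illustration, not values from the paper:

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, enrolled, threshold=0.5):
    """1:1 verification: accept when the embedding distance is below threshold."""
    return l2_distance(probe, enrolled) < threshold

def identify(probe, gallery):
    """1:N identification: return gallery IDs ranked by ascending distance."""
    return sorted(gallery, key=lambda gid: l2_distance(probe, gallery[gid]))

gallery = {"alice": [0.1, 0.9], "bob": [0.8, 0.2], "carol": [0.5, 0.5]}
probe = [0.12, 0.88]                    # unlabeled query embedding
ranking = identify(probe, gallery)      # a Rank-1 hit iff ranking[0] is the true ID
```

Rank-k accuracy then counts how often the true identity appears in the first k entries of `ranking`, while FAR/FNR count threshold errors over impostor and genuine verification trials.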
39 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advancements in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, leading to an increased security threat from counterfeit images. In particular, palm-vein images acquired through near-infrared illumination exhibit low resolution and blurred characteristics, making it even more challenging to detect fake images. Furthermore, spoof detection specifically targeting palm-vein images has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect distortion patterns caused by generative models. These components enable the efficient extraction of critical information required to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes in this study. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that the proposed FGFNet effectively distinguishes between real and fake images, validating its capability to capture subtle structural differences induced by generative manipulation. To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets—VERA Spoofing PalmVein (VERA dataset) and PLUSVein-contactless (PLUS dataset)—as well as fake palm-vein images generated from these datasets using a cycle-consistent generative adversarial network. The results showed that, based on the average classification error rate, FGFNet achieved 0.3% on both the VERA and PLUS datasets, demonstrating superior performance compared to existing state-of-the-art spoof detection methods.
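Box counting is the usual way to estimate the fractal dimension used here to compare the structural complexity of real and generated vein images: count the boxes of side s that touch foreground, and fit the slope of log N(s) against log(1/s). A minimal sketch on a toy binary image — the grid sizes and image are illustrative, not the paper's setup:

```python
import math

def box_count(img, s):
    """Count s-by-s boxes that contain at least one foreground pixel."""
    n_rows, n_cols = len(img), len(img[0])
    count = 0
    for r in range(0, n_rows, s):
        for c in range(0, n_cols, s):
            if any(img[r + dr][c + dc]
                   for dr in range(min(s, n_rows - r))
                   for dc in range(min(s, n_cols - c))):
                count += 1
    return count

def fractal_dimension(img, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A filled 16x16 square is a genuinely 2-D region, so its estimate is 2.
solid = [[1] * 16 for _ in range(16)]
```

A convincing spoof image should yield a dimension estimate close to that of real palm-vein images; a large gap flags structural differences.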
24 pages, 824 KiB  
Article
MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks
by Kamrul Hasan, Khandokar Alisha Tuhin, Md Rasul Islam Bapary, Md Shafi Ud Doula, Md Ashraful Alam, Md Atiqur Rahman Ahad and Md. Zasim Uddin
Symmetry 2025, 17(7), 1155; https://doi.org/10.3390/sym17071155 - 19 Jul 2025
Abstract
Gait recognition is a reliable biometric approach that uniquely identifies individuals based on their natural walking patterns. It is widely used because gait is challenging to camouflage and does not require a person’s cooperation. General face-based person recognition systems often fail to determine an offender’s identity when the face is concealed with helmets or masks to evade identification. In such cases, gait-based recognition is ideal for identifying offenders, and most existing work leverages a deep learning (DL) model. However, a single model often fails to capture a comprehensive selection of refined patterns in input data when external factors are present, such as variation in viewing angle, clothing, and carrying conditions. In response, this paper introduces a fusion-based multi-model gait recognition framework that leverages the potential of convolutional neural networks (CNNs) and a vision transformer (ViT) in an ensemble manner to enhance gait recognition performance. Here, CNNs capture spatiotemporal features, while the ViT's multiple attention layers focus on particular regions of the gait image. The first step in this framework is to obtain the Gait Energy Image (GEI) by averaging a height-normalized gait silhouette sequence over a gait cycle, which captures the left–right symmetry of gait. The GEI is then fed through multiple pre-trained models, each fine-tuned to extract deep spatiotemporal features. Later, three separate fusion strategies are applied. The first is decision-level fusion (DLF), which takes each model’s decision and employs majority voting for the final decision. The second is feature-level fusion (FLF), which combines the features from individual models through pointwise addition before performing gait recognition. Finally, a hybrid fusion combines DLF and FLF for gait recognition. The performance of the multi-model fusion-based framework was evaluated on three publicly available gait databases: CASIA-B, OU-ISIR D, and the OU-ISIR Large Population dataset. The experimental results demonstrate that the fusion-enhanced framework achieves superior performance.
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
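The three ingredients the abstract names — GEI averaging over a gait cycle, majority-vote decision-level fusion, and pointwise-add feature-level fusion — can each be sketched in a few lines. The toy silhouettes, votes, and feature vectors below are invented for illustration; the paper applies these steps to fine-tuned CNN/ViT outputs:

```python
from collections import Counter

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of height-normalized silhouettes over one gait cycle."""
    n = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(f[r][c] for f in silhouettes) / n for c in range(cols)]
            for r in range(rows)]

def decision_level_fusion(predictions):
    """DLF: each model votes for an identity; the majority wins."""
    return Counter(predictions).most_common(1)[0][0]

def feature_level_fusion(feature_sets):
    """FLF: combine per-model feature vectors by pointwise addition."""
    return [sum(vals) for vals in zip(*feature_sets)]

silhouettes = [[[1, 0], [0, 0]], [[0, 0], [0, 0]]]   # two tiny binary frames
gei = gait_energy_image(silhouettes)                 # [[0.5, 0.0], [0.0, 0.0]]

votes = ["id_7", "id_7", "id_3"]                     # three models disagree
fused_label = decision_level_fusion(votes)

features = [[0.2, 0.5], [0.1, 0.4], [0.3, 0.1]]
fused_feat = feature_level_fusion(features)
```

The hybrid strategy in the paper then combines the DLF and FLF outcomes.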
18 pages, 10821 KiB  
Article
Explainable Face Recognition via Improved Localization
by Rashik Shadman, Daqing Hou, Faraz Hussain and M. G. Sarwar Murshed
Electronics 2025, 14(14), 2745; https://doi.org/10.3390/electronics14142745 - 8 Jul 2025
Abstract
Biometric authentication has become one of the most widely used tools in the current technological era to authenticate users and to distinguish between genuine users and impostors. The face is the most common biometric modality and has proven effective. Deep learning-based face recognition systems are now commonly used across different domains. However, these systems usually operate like black-box models that do not provide necessary explanations or justifications for their decisions. This is a major disadvantage because users cannot trust such artificial intelligence-based biometric systems and may not feel comfortable using them when clear explanations or justifications are not provided. This paper addresses this problem by applying an efficient method for explainable face recognition systems. We use a Class Activation Mapping (CAM)-based discriminative localization (very narrow/specific localization) technique called Scaled Directed Divergence (SDD) to visually explain the results of deep learning-based face recognition systems. We perform fine localization of the face features relevant to the deep learning model for its prediction/decision. Our experiments show that the SDD CAM highlights the relevant face features very specifically and accurately compared to the traditional CAM. The provided visual explanations with narrow localization of relevant features can ensure much-needed transparency and trust for deep learning-based face recognition systems. We also demonstrate the adaptability of the SDD method by applying it to two different techniques: CAM and Score-CAM.
(This article belongs to the Special Issue Explainability in AI and Machine Learning)
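SDD builds on the standard CAM, which forms a heatmap by weighting the final convolutional feature maps with the classifier weights of the target class. A toy sketch of that underlying CAM step only — the channel maps and weights are invented, and SDD's scaled-divergence weighting across classes is not reproduced here:

```python
def class_activation_map(feature_maps, class_weights):
    """Plain CAM: per-channel weighted sum of final-layer feature maps."""
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(w * fm[r][c] for w, fm in zip(class_weights, feature_maps))
             for c in range(cols)] for r in range(rows)]

# Two 2x2 channel activations and the target class's classifier weights.
maps = [[[1.0, 0.0], [0.0, 0.0]],
        [[0.0, 0.0], [0.0, 1.0]]]
weights = [0.9, 0.1]
cam = class_activation_map(maps, weights)   # strongest response at (0, 0)
```

SDD then sharpens such maps so the highlighted region is specific to the predicted identity rather than shared across classes.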
24 pages, 589 KiB  
Article
FaceCloseup: Enhancing Mobile Facial Authentication with Perspective Distortion-Based Liveness Detection
by Yingjiu Li, Yan Li and Zilong Wang
Computers 2025, 14(7), 254; https://doi.org/10.3390/computers14070254 - 27 Jun 2025
Abstract
Facial authentication has gained widespread adoption as a biometric authentication method, offering a convenient alternative to traditional password-based systems, particularly on mobile devices equipped with front-facing cameras. While this technology enhances usability and security by eliminating password management, it remains highly susceptible to spoofing attacks. Adversaries can exploit facial recognition systems using pre-recorded photos, videos, or even sophisticated 3D models of victims’ faces to bypass authentication mechanisms. The increasing availability of personal images on social media further amplifies this risk, making robust anti-spoofing mechanisms essential for secure facial authentication. To address these challenges, we introduce FaceCloseup, a novel liveness detection technique that strengthens facial authentication by leveraging perspective distortion inherent in close-up shots of real, 3D faces. Instead of relying on additional sensors or user-interactive gestures, FaceCloseup passively analyzes facial distortions in video frames captured by a mobile device’s camera, improving security without compromising user experience. FaceCloseup effectively distinguishes live faces from spoofed attacks by identifying perspective-based distortions across different facial regions. The system achieves a 99.48% accuracy in detecting common spoofing methods—including photo, video, and 3D model-based attacks—and demonstrates 98.44% accuracy in differentiating between individual users. By operating entirely on-device, FaceCloseup eliminates the need for cloud-based processing, reducing privacy concerns and potential latency in authentication. Its reliance on natural device movement ensures a seamless authentication experience while maintaining robust security.
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
27 pages, 3401 KiB  
Article
Human–Seat–Vehicle Multibody Nonlinear Model of Biomechanical Response in Vehicle Vibration Environment
by Margarita Prokopovič, Kristina Čižiūnienė, Jonas Matijošius, Marijonas Bogdevičius and Edgar Sokolovskij
Machines 2025, 13(7), 547; https://doi.org/10.3390/machines13070547 - 24 Jun 2025
Abstract
Nonlinear dynamic effects in vehicle systems, especially in real-world circumstances with uneven road surfaces and impulsive shocks, can greatly skew the biometric data used to track passenger and driver physiological states. By creating a thorough multibody human–seat–chassis model, this work tackles the effect of vehicle-induced vibrations on the accuracy and dependability of biometric measures. The model includes external excitation from road-induced inputs, nonlinear damping between structural linkages, and vertical and angular degrees of freedom in the head–neck system. Motion equations are derived using a second-order Lagrangian method; simulations are run using representative values for a typical car and human body segments. Results show that higher vehicle speed generates more vibrational energy input, which increases vertical and angular accelerations, especially in the head and torso. Modal studies show that while resonant frequencies stay constant, speed causes a considerable rise in amplitude and frequency dispersion. At speeds ≥ 50 km/h, RMS and VDV values exceed ISO 2631 comfort standards in the body and head. The results highlight the need for vibration-optimized suspension systems and ergonomic design approaches to safeguard sensitive body areas and preserve biometric data integrity. This study helps to increase comfort and safety in both traditional and autonomous vehicle applications.
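The RMS and VDV comfort metrics compared against ISO 2631 reduce to simple functionals of the acceleration record: RMS is the root of the mean squared acceleration, and VDV is the fourth root of the time integral of the fourth power. A sketch on a toy signal — note that ISO 2631 prescribes frequency-weighting the acceleration first, which this illustration omits:

```python
import math

def rms(accel):
    """Root-mean-square of an acceleration record (m/s^2)."""
    return math.sqrt(sum(a * a for a in accel) / len(accel))

def vdv(accel, dt):
    """Vibration dose value: fourth root of the time integral of a^4 (m/s^1.75)."""
    return (sum(a ** 4 for a in accel) * dt) ** 0.25

# Toy vertical-acceleration samples at a fixed time step.
signal = [0.0, 1.0, -1.0, 1.0, -1.0, 0.0]
rms_value = rms(signal)
vdv_value = vdv(signal, dt=0.01)
```

Because of the fourth power, VDV weights occasional shocks (the impulsive inputs studied here) much more heavily than RMS does.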
19 pages, 9631 KiB  
Article
Res2Former: Integrating Res2Net and Transformer for a Highly Efficient Speaker Verification System
by Defu Chen, Yunlong Zhou, Xianbao Wang, Sheng Xiang, Xiaohu Liu and Yijian Sang
Electronics 2025, 14(12), 2489; https://doi.org/10.3390/electronics14122489 - 19 Jun 2025
Abstract
Speaker verification (SV) is an exceptionally effective method of biometric authentication. However, its performance is heavily influenced by the effectiveness of the extracted speaker features and their suitability for use in resource-limited environments. Transformer models and convolutional neural networks (CNNs), leveraging self-attention mechanisms, have demonstrated state-of-the-art performance in most Natural Language Processing (NLP) and image recognition tasks. However, previous studies indicate that standalone Transformer and CNN architectures present distinct challenges in speaker verification. Specifically, while Transformer models deliver good results, they fail to meet the requirements of low-resource scenarios and computational efficiency. On the other hand, CNNs perform well in resource-constrained environments but suffer from significantly reduced recognition accuracy. Several existing approaches, such as Conformer, combine Transformers and CNNs but still face challenges related to high resource consumption and low computational efficiency. To address these issues, we propose a novel solution that enhances the Transformer model by introducing multi-scale convolutional attention and a Global Response Normalization (GRN)-based feed-forward network, resulting in a lightweight backbone architecture called the lightweight simple transformer (LST). We further improve the LST by incorporating the Res2Net structure from CNNs, yielding the Res2Former model, a low-parameter, high-precision SV model. In Res2Former, we design and implement a time-frequency adaptive feature fusion (TAFF) mechanism that enables fine-grained feature propagation by fusing features at different depths at the frame level. Additionally, holistic fusion is employed for global feature propagation across the model. To enhance performance, multiple convergence methods are introduced, improving the overall efficacy of the SV system. Experimental results on the VoxCeleb1-O, VoxCeleb1-E, VoxCeleb1-H, and Cn-Celeb(E) datasets demonstrate that Res2Former achieves excellent performance, with the Large configuration attaining Equal Error Rate (EER)/Minimum Detection Cost Function (minDCF) scores of 0.81%/0.08, 0.98%/0.11, 1.81%/0.17, and 8.39%/0.46, respectively. Notably, the Base configuration of Res2Former, with only 1.73M parameters, also delivers competitive results.
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
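EER, the headline metric above, is the operating point where the false accept rate (impostor trials scored above threshold) equals the false reject rate (genuine trials scored below it) as the decision threshold sweeps over the trial scores. A toy computation with invented scores:

```python
def equal_error_rate(genuine, impostor):
    """Sweep a threshold and return the rate where FAR and FRR cross."""
    best_gap, eer = float("inf"), None
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuines rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

genuine = [0.9, 0.8, 0.5, 0.6]     # same-speaker trial scores (invented)
impostor = [0.4, 0.7, 0.3, 0.2]    # different-speaker trial scores (invented)
eer = equal_error_rate(genuine, impostor)
```

With one overlapping score on each side, the crossing point here sits at 25%; real systems like the ones listed above operate near 1%.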
34 pages, 9431 KiB  
Article
Gait Recognition via Enhanced Visual–Audio Ensemble Learning with Decision Support Methods
by Ruixiang Kan, Mei Wang, Tian Luo and Hongbing Qiu
Sensors 2025, 25(12), 3794; https://doi.org/10.3390/s25123794 - 18 Jun 2025
Abstract
Gait is considered a valuable biometric feature, and it is essential for uncovering the latent information embedded within gait patterns. Gait recognition methods are expected to serve as significant components in numerous applications. However, existing gait recognition methods exhibit limitations in complex scenarios. To address these limitations, we construct a dual-Kinect V2 system that focuses on gait skeleton joint data and related acoustic signals. This setup lays a solid foundation for subsequent methods and updating strategies. The core framework consists of enhanced ensemble learning methods and Dempster–Shafer Evidence Theory (D-SET). Our recognition methods serve as the foundation, and the decision support mechanism is used to evaluate the compatibility of various modules within our system. On this basis, our main contributions are as follows: (1) an improved gait skeleton joint AdaBoost recognition method based on Circle Chaotic Mapping and Gramian Angular Field (GAF) representations; (2) a data-adaptive gait-related acoustic signal AdaBoost recognition method based on GAF and a Parallel Convolutional Neural Network (PCNN); and (3) an amalgamation of the Triangulation Topology Aggregation Optimizer (TTAO) and D-SET, providing a robust and innovative decision support mechanism. These collaborations improve the overall recognition accuracy and demonstrate considerable application value.
(This article belongs to the Section Intelligent Sensors)
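D-SET fuses the beliefs of the skeleton-based and acoustic classifiers. Restricted to singleton hypotheses, Dempster's rule of combination multiplies agreeing masses and renormalizes by the non-conflicting mass. A sketch with invented masses — the full theory also assigns belief to compound hypothesis sets, which this simplification omits:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over singleton hypotheses."""
    hypotheses = set(m1) | set(m2)
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    k = 1.0 - conflict                       # normalization constant
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / k for h in hypotheses}

# Skeleton-based and acoustic classifiers each assign belief to two identities.
m_skeleton = {"person_A": 0.8, "person_B": 0.2}
m_acoustic = {"person_A": 0.6, "person_B": 0.4}
fused = dempster_combine(m_skeleton, m_acoustic)
```

Two modalities that independently lean toward person_A reinforce each other: the fused belief (0.48/0.56 ≈ 0.857) exceeds either input mass.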
27 pages, 3417 KiB  
Article
GaitCSF: Multi-Modal Gait Recognition Network Based on Channel Shuffle Regulation and Spatial-Frequency Joint Learning
by Siwei Wei, Xiangyuan Xu, Dewen Liu, Chunzhi Wang, Lingyu Yan and Wangyu Wu
Sensors 2025, 25(12), 3759; https://doi.org/10.3390/s25123759 - 16 Jun 2025
Abstract
Gait recognition, as a non-contact biometric technology, offers unique advantages in scenarios requiring long-distance identification without active cooperation from subjects. However, existing gait recognition methods predominantly rely on single-modal data, which demonstrates insufficient feature expression capabilities when confronted with complex factors in real-world environments, including viewpoint variations, clothing differences, occlusion problems, and illumination changes. This paper addresses these challenges by introducing a multi-modal gait recognition network based on channel shuffle regulation and spatial-frequency joint learning, which integrates two complementary modalities (silhouette data and heatmap data) to construct a more comprehensive gait representation. The channel shuffle-based feature selective regulation module achieves cross-channel information interaction and feature enhancement through channel grouping and feature shuffling strategies. This module divides input features along the channel dimension into multiple subspaces, which undergo channel-aware and spatial-aware processing to capture dependency relationships across different dimensions. Subsequently, channel shuffling operations facilitate information exchange between different semantic groups, achieving adaptive enhancement and optimization of features with relatively low parameter overhead. The spatial-frequency joint learning module maps spatiotemporal features to the spectral domain through fast Fourier transform, effectively capturing inherent periodic patterns and long-range dependencies in gait sequences. The global receptive field advantage of frequency domain processing enables the model to transcend local spatiotemporal constraints and capture global motion patterns. Concurrently, the spatial domain processing branch balances the contributions of frequency and spatial domain information through an adaptive weighting mechanism, maintaining computational efficiency while enhancing features. Experimental results demonstrate that the proposed GaitCSF model achieves significant performance improvements on mainstream datasets including GREW, Gait3D, and SUSTech1k, breaking through the performance bottlenecks of traditional methods. The implications of this research are significant for improving the performance and robustness of gait recognition systems when implemented in practical application scenarios.
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
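Mapping a temporal feature sequence to the spectral domain, as the spatial-frequency module does, makes periodic gait structure explicit: a periodic signal concentrates its energy in a few frequency bins. A naive-DFT sketch with a toy weighted blend — the paper uses a learned adaptive weighting over 2-D FFT features, so the scalar blend here is only illustrative:

```python
import cmath

def dft_magnitudes(seq):
    """Magnitude spectrum of a temporal feature sequence (naive DFT)."""
    n = len(seq)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(seq))) for k in range(n)]

def weighted_fusion(spatial, spectral, alpha=0.5):
    """Blend spatial and frequency-domain features with a scalar weight."""
    return [alpha * s + (1 - alpha) * f for s, f in zip(spatial, spectral)]

# A periodic gait-like signal (period 4, length 8) puts its energy in bin 2.
period_signal = [0.0, 1.0, 0.0, -1.0] * 2
spectrum = dft_magnitudes(period_signal)
blended = weighted_fusion([1.0, 2.0], [3.0, 4.0], alpha=0.5)
```

The concentrated spectral peak is what lets a frequency-domain branch capture whole-sequence periodicity that local spatial convolutions miss.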
57 pages, 4508 KiB  
Review
Person Recognition via Gait: A Review of Covariate Impact and Challenges
by Abdul Basit Mughal, Rafi Ullah Khan, Amine Bermak and Atiq ur Rehman
Sensors 2025, 25(11), 3471; https://doi.org/10.3390/s25113471 - 30 May 2025
Abstract
Human gait identification is a biometric technique that permits recognizing an individual from a long distance based on numerous features such as movement, timing, and clothing. This approach is particularly useful in video surveillance scenarios, where biometric systems allow people to be easily recognized without intruding on their privacy. In the domain of computer vision, one of the essential and most difficult tasks is tracking a person across multiple camera views, specifically, recognizing the same person in diverse scenes. However, the accuracy of a gait identification system is significantly affected by covariate factors, such as different view angles, clothing, walking speeds, occlusion, and low-lighting conditions. Previous studies have often overlooked the influence of these factors, leaving a gap in the comprehensive understanding of gait recognition systems. This paper provides a comprehensive review of the most effective gait recognition methods, assessing their performance across various image source databases while highlighting the limitations of existing datasets. Additionally, it explores the influence of key covariate factors, such as viewing angle, clothing, and environmental conditions, on model performance. The paper also compares traditional gait recognition methods with advanced deep learning techniques, offering theoretical insights into the impact of covariates and addressing real-world application challenges. The contrasts and discussions presented provide valuable insights for developing a robust and improved gait-based identification framework for future advancements.
(This article belongs to the Special Issue Artificial Intelligence and Sensor-Based Gait Recognition)
13 pages, 13928 KiB  
Article
Voter Authentication Using Enhanced ResNet50 for Facial Recognition
by Aminou Halidou, Daniel Georges Olle Olle, Arnaud Nguembang Fadja, Daramy Vandi Von Kallon and Tchana Ngninkeu Gil Thibault
Signals 2025, 6(2), 25; https://doi.org/10.3390/signals6020025 - 23 May 2025
Abstract
Electoral fraud, particularly multiple voting, undermines the integrity of democratic processes. To address this challenge, this study introduces an innovative facial recognition system that integrates an enhanced 50-layer Residual Network (ResNet50) architecture with Additive Angular Margin Loss (ArcFace) and Multi-Task Cascaded Convolutional Neural Networks (MTCNN) for face detection. Using the Mahalanobis distance, the system verifies voter identities by comparing captured facial images with previously recorded biometric features. Extensive evaluations demonstrate the methodology’s effectiveness, achieving a facial recognition accuracy of 99.85%. This significant improvement over existing baseline methods has the potential to enhance electoral transparency and prevent multiple voting. The findings contribute to developing robust biometric-based electoral systems, thereby promoting democratic trust and accountability.
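Verification via the Mahalanobis distance compares a captured feature vector against a stored template while scaling each feature by its variability across enrollment samples, so naturally noisy features count for less. A sketch with a diagonal covariance — all vectors, variances, and the acceptance threshold below are invented for illustration:

```python
import math

def mahalanobis_diag(x, mean, variances):
    """Mahalanobis distance with a diagonal covariance (per-feature variances)."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, variances)))

enrolled_mean = [0.4, 0.7, 0.1]      # stored template of a voter's face features
enrolled_var = [0.01, 0.04, 0.01]    # per-feature variance over enrollment shots
claim = [0.42, 0.66, 0.12]           # features captured at the polling station
d = mahalanobis_diag(claim, enrolled_mean, enrolled_var)
accepted = d < 3.0                   # hypothetical decision threshold
```

With a full covariance matrix the same idea also discounts correlated feature pairs, which a plain Euclidean distance cannot do.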
23 pages, 1701 KiB  
Article
Left Meets Right: A Siamese Network Approach to Cross-Palmprint Biometric Recognition
by Mohamed Ezz
Electronics 2025, 14(10), 2093; https://doi.org/10.3390/electronics14102093 - 21 May 2025
Abstract
What if you could identify someone’s right palmprint just by looking at their left—and vice versa? That is exactly what I set out to do. I built a specially adapted Siamese network that only needs one palm to reliably recognize the other, making biometric systems far more flexible in everyday settings. My solution rests on two simple but powerful ideas. First, Anchor Embedding through Feature Aggregation (AnchorEFA) creates a “super-anchor” by averaging four palmprint samples from the same person. This pooled anchor smooths out noise and highlights the consistent patterns shared between left and right palms. Second, I use a Concatenated Similarity Measurement—combining Euclidean distance with Element-wise Absolute Difference (EAD)—so the model can pick up both big structural similarities and tiny textural differences. I tested this approach on three public datasets (POLYU_Left_Right, TongjiS1_Left_Right, and CASIA_Left_Right) and saw a clear jump in accuracy compared to traditional methods. In fact, my four-sample AnchorEFA plus hybrid similarity metric did not just beat the baseline—it set a new benchmark for cross-palmprint recognition. In short, recognizing a palmprint from its opposite pair is not just feasible—it is practical, accurate, and ready for real-world use. This work opens the door to more secure, user-friendly biometric systems that still work even when only one palmprint is available.
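The two ideas are easy to sketch on toy embeddings: AnchorEFA averages four same-person samples into one anchor, and the concatenated similarity joins a scalar Euclidean distance with the element-wise absolute differences. All values below are invented; in the paper the concatenated vector feeds a trained Siamese matching head rather than being used directly:

```python
import math

def anchor_efa(samples):
    """AnchorEFA: average several same-person embeddings into a 'super-anchor'."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def concat_similarity(a, b):
    """Concatenate Euclidean distance with element-wise absolute differences."""
    euclid = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ead = [abs(x - y) for x, y in zip(a, b)]
    return [euclid] + ead

left_samples = [[0.2, 0.8], [0.3, 0.7], [0.25, 0.75], [0.25, 0.75]]
anchor = anchor_efa(left_samples)        # pooled left-palm anchor
right = [0.3, 0.7]                       # same person's right-palm embedding
sim = concat_similarity(anchor, right)   # [scalar distance, per-dim differences]
```

Averaging damps per-sample noise, while keeping the per-dimension differences alongside the scalar distance preserves the fine textural cues a single distance would collapse.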
22 pages, 3864 KiB  
Article
Raspberry Pi-Based Face Recognition Door Lock System
by Seifeldin Sherif Fathy Ali Elnozahy, Senthill C. Pari and Lee Chu Liang
IoT 2025, 6(2), 31; https://doi.org/10.3390/iot6020031 - 20 May 2025
Viewed by 1584
Abstract
Access control systems protect homes and businesses in a continually evolving security industry. This paper designs and implements a Raspberry Pi-based facial recognition door lock system that uses artificial intelligence and computer vision for reliability, efficiency, and usability. With the Raspberry Pi as its processor, the system authenticates users through facial recognition; its essential components are a camera module for real-time image capture, a relay module for solenoid lock control, and OpenCV for image processing. The system uses the DeepFace library to detect user emotions and adaptive learning to improve recognition accuracy for approved users. The device also adapts to poor lighting and varying distances, and it sends real-time remote monitoring messages. Key achievements include adaptive facial recognition, which allows the system to improve as it is used, and the seamless integration of real-time notifications and emotion detection. Face recognition performed well across many settings, and the modular architecture facilitated hardware–software integration and scalability for various applications. In conclusion, this study created an intelligent facial recognition door lock system from Raspberry Pi hardware and open-source software libraries. The system addresses the limitations of traditional access control and is practical, scalable, and inexpensive, demonstrating the potential of biometric technology in modern security systems. Full article
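The unlock decision at the heart of such a system can be sketched independently of the hardware. This is a minimal illustration, not the paper's code: `set_relay` and `notify` stand in for the relay-module and remote-notification calls, the recognizer (e.g. OpenCV/DeepFace) is represented only by its `(name, confidence)` output, and the authorized-user set and confidence threshold are hypothetical.

```python
AUTHORIZED = {"alice", "bob"}      # hypothetical enrolled users
CONFIDENCE_THRESHOLD = 0.8         # hypothetical recognizer confidence cut-off

def should_unlock(name, confidence):
    """Unlock only for an authorized user recognized with high confidence."""
    return name in AUTHORIZED and confidence >= CONFIDENCE_THRESHOLD

def handle_frame(recognition_result, set_relay, notify):
    """One iteration of the door-lock loop: drive the relay and send an alert."""
    name, confidence = recognition_result
    if should_unlock(name, confidence):
        set_relay(True)   # energize the solenoid lock
        notify(f"Door unlocked for {name}")
    else:
        set_relay(False)  # keep the door locked
        notify(f"Access denied ({name}, confidence {confidence:.2f})")

# Stubbed hardware for illustration: record the calls instead of driving GPIO.
events = []
handle_frame(("alice", 0.93),
             lambda on: events.append(("relay", on)),
             lambda msg: events.append(("notify", msg)))
```

On the actual device, `set_relay` would toggle a GPIO pin wired to the relay module, and the recognition result would come from the camera pipeline; separating the decision logic this way keeps it testable without hardware.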
21 pages, 299 KiB  
Review
The Impact of Biometric Surveillance on Reducing Violent Crime: Strategies for Apprehending Criminals While Protecting the Innocent
by Patricia Haley
Sensors 2025, 25(10), 3160; https://doi.org/10.3390/s25103160 - 17 May 2025
Viewed by 1091
Abstract
In the rapidly evolving landscape of biometric technologies, integrating artificial intelligence (AI) and predictive analytics offers promising opportunities and significant challenges for law enforcement and violence prevention. This paper examines the current state of biometric surveillance systems, emphasizing the application of new sensor technologies and machine learning algorithms and their impact on crime prevention strategies. While advancements in facial recognition and predictive policing models have shown varying degrees of accuracy in predicting violence, questions about their efficacy, along with ethical concerns regarding privacy, bias, and civil liberties, remain critically important. By analyzing the effectiveness of these technologies within public safety contexts, this study aims to highlight the potential of biometric systems to improve identification processes, while addressing the urgent need for strong frameworks that ensure gains in violent crime prevention are accompanied by moral accountability and equitable implementation in diverse communities. Ultimately, this research contributes to ongoing discussions about the future of biometric sensing technologies and their role in creating safer communities. Full article
(This article belongs to the Special Issue New Trends in Biometric Sensing and Information Processing)