Search Results (180)

Search Parameters:
Keywords = multi-biometrics

31 pages, 1452 KB  
Article
A User-Centric Context-Aware Framework for Real-Time Optimisation of Multimedia Data Privacy Protection, and Information Retention Within Multimodal AI Systems
by Ndricim Topalli and Atta Badii
Sensors 2025, 25(19), 6105; https://doi.org/10.3390/s25196105 - 3 Oct 2025
Abstract
The increasing use of AI systems for face, object, action, scene, and emotion recognition raises significant privacy risks, particularly when processing Personally Identifiable Information (PII). Current privacy-preserving methods lack adaptability to users’ preferences and contextual requirements, and obfuscate user faces uniformly. This research proposes a user-centric, context-aware, and ontology-driven privacy protection framework that dynamically adjusts privacy decisions based on user-defined preferences, entity sensitivity, and contextual information. The framework integrates state-of-the-art recognition models for recognising faces, objects, scenes, actions, and emotions in real time on data acquired from vision sensors (e.g., cameras). Privacy decisions are directed by a contextual ontology grounded in Contextual Integrity theory, which classifies entities into private, semi-private, or public categories. Adaptive privacy levels are enforced through obfuscation techniques and a multi-level privacy model that supports user-defined red lines (e.g., “always hide logos”). The framework also proposes a Re-Identifiability Index (RII) using soft biometric features such as gait, hairstyle, clothing, skin tone, age, and gender to mitigate identity leakage and to support fallback protection when face recognition fails. The experimental evaluation relied on sensor-captured datasets, which replicate real-world image sensors such as surveillance cameras. User studies confirmed that the framework was effective: 85.2% of participants rated the obfuscation operations as highly effective, and the remaining 14.8% rated them adequately effective. Amongst these, 71.4% considered the balance between privacy protection and usability very satisfactory and 28% found it satisfactory. GPU acceleration was deployed to enable real-time performance of these models by reducing frame processing time from 1200 ms (CPU) to 198 ms. This ontology-driven framework employs user-defined red lines, contextual reasoning, and dual metrics (RII/IVI) to dynamically balance privacy protection with scene intelligibility. Unlike current anonymisation methods, the framework provides a real-time, user-centric, and GDPR-compliant method that operationalises privacy-by-design while preserving scene intelligibility. These features make the framework appropriate for a variety of real-world applications including healthcare, surveillance, and social media.
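A weighted-sum Re-Identifiability Index of the kind described above can be sketched in a few lines. The abstract does not give the exact formula, so the soft-biometric features, weights, and privacy-level thresholds below are hypothetical stand-ins:

```python
# Hypothetical sketch of a Re-Identifiability Index (RII): the feature set,
# weights, and thresholds below are illustrative, not the paper's formula.

SOFT_BIOMETRIC_WEIGHTS = {
    "gait": 0.25, "hairstyle": 0.15, "clothing": 0.20,
    "skin_tone": 0.10, "age": 0.15, "gender": 0.15,
}

def re_identifiability_index(match_scores):
    """Combine per-feature match scores (each in [0, 1]) into one index in
    [0, 1]; higher means greater re-identification risk."""
    total = sum(SOFT_BIOMETRIC_WEIGHTS.values())
    return sum(w * match_scores.get(feature, 0.0)
               for feature, w in SOFT_BIOMETRIC_WEIGHTS.items()) / total

def obfuscation_level(rii):
    """Map the index to an adaptive privacy level (thresholds illustrative)."""
    if rii >= 0.7:
        return "strong"        # e.g. full-body masking
    if rii >= 0.4:
        return "medium"        # e.g. face blur plus soft-biometric masking
    return "light"             # e.g. face blur only

rii = re_identifiability_index({"gait": 0.9, "clothing": 0.8, "age": 0.5})
level = obfuscation_level(rii)
```

A fallback path like this lets the system escalate obfuscation when face recognition fails but soft biometrics still leak identity.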
(This article belongs to the Section Intelligent Sensors)

15 pages, 930 KB  
Article
Analysis of Sensor Location and Time–Frequency Feature Contributions in IMU-Based Gait Identity Recognition
by Fangyu Liu, Hao Wang, Xiang Li and Fangmin Sun
Electronics 2025, 14(19), 3905; https://doi.org/10.3390/electronics14193905 - 30 Sep 2025
Abstract
Inertial measurement unit (IMU)-based gait biometrics have attracted increasing attention for unobtrusive identity recognition. While recent studies often fuse signals from multiple sensor positions and time–frequency features, the actual contribution of each sensor location and signal modality remains insufficiently explored. In this work, we present a comprehensive quantitative analysis of the role of different IMU placements and feature domains in gait-based identity recognition. IMU data were collected from three body positions (shank, waist, and wrist) and processed to extract both time-domain and frequency-domain features. An attention-gated fusion network was employed to weight each signal branch adaptively, enabling interpretable assessment of their discriminative power. Experimental results show that shank IMU dominates recognition accuracy, while waist and wrist sensors primarily provide auxiliary information. Similarly, the contribution of time-domain features to classification performance is the greatest, while frequency-domain features offer complementary robustness. These findings illustrate the importance of sensor and feature selection in designing efficient, scalable IMU-based identity recognition systems for wearable applications.
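The attention-gated weighting of sensor branches can be sketched as a softmax gate over per-branch embeddings. Everything below (embedding size, gate parameterisation, random inputs) is an illustrative assumption, not the authors' network:

```python
import numpy as np

# Illustrative attention-gated fusion over IMU branches (shank, waist,
# wrist): each branch embedding receives a scalar gate score, normalised
# with a softmax, and the fused embedding is the weighted sum. The learned
# gate weights are replaced by random stand-ins here.

def attention_gated_fusion(branches, gate_weights):
    """branches: name -> (D,) embedding; gate_weights: name -> (D,) gate
    vector. Returns (fused (D,) embedding, name -> attention weight)."""
    names = sorted(branches)
    logits = np.array([branches[n] @ gate_weights[n] for n in names])
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                        # softmax over the branches
    fused = sum(a * branches[n] for a, n in zip(attn, names))
    return fused, dict(zip(names, attn))

rng = np.random.default_rng(0)
dim = 8
branches = {p: rng.normal(size=dim) for p in ("shank", "waist", "wrist")}
gates = {p: rng.normal(size=dim) for p in branches}
fused, attn = attention_gated_fusion(branches, gates)
```

Inspecting the normalised attention weights per branch is what makes the contribution analysis interpretable: a consistently high shank weight matches the paper's finding.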
16 pages, 7627 KB  
Article
Behavioral Biometrics in VR: Changing Sensor Signal Modalities
by Aleksander Sawicki, Khalid Saeed and Wojciech Walendziuk
Sensors 2025, 25(18), 5899; https://doi.org/10.3390/s25185899 - 20 Sep 2025
Viewed by 212
Abstract
The rapid evolution of virtual reality systems and the broader metaverse landscape has prompted growing research interest in biometric authentication methods for user verification. These solutions offer an additional layer of access control that surpasses traditional password-based approaches by leveraging unique physiological or behavioral traits. Current literature emphasizes analyzing controller position and orientation data, which presents challenges when using convolutional neural networks (CNNs) with non-continuous Euler angles. The novelty of the presented approach is that it addresses this limitation. We propose a modality transformation approach that generates acceleration and angular velocity signals from trajectory and orientation data. Specifically, our work employs algebraic techniques—including quaternion algebra—to model these dynamic signals. Both the original and transformed data were then used to train various CNN architectures, including Vanilla CNNs, attention-enhanced CNNs, and Multi-Input CNNs. The proposed modification yielded significant performance improvements across all datasets. In particular, the F1-score increased from 0.80 to 0.82 for the Comos subset, from 0.77 to 0.82 for the Quest subset, and notably from 0.83 to 0.92 for the Vive subset.
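The core modality transformation, recovering angular velocity from an orientation stream, can be sketched with quaternion algebra and finite differences (omega ≈ 2·(dq/dt) ⊗ q*). The sampling rate and synthetic rotation below are illustrative, not taken from the paper's datasets:

```python
import numpy as np

# Derive angular velocity from successive orientation quaternions by finite
# differences: omega ≈ 2 * (dq/dt) ⊗ conj(q). Hamilton convention (w, x, y, z).

def q_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def angular_velocity(quats, dt):
    """quats: (N, 4) unit quaternions -> (N-1, 3) angular velocity in rad/s."""
    omegas = []
    for q0, q1 in zip(quats[:-1], quats[1:]):
        dq = (q1 - q0) / dt
        omegas.append(2.0 * q_mul(dq, q_conj(q0))[1:])  # keep vector part
    return np.array(omegas)

# Synthetic check: constant rotation about z at 1 rad/s, sampled at 100 Hz.
dt, rate = 0.01, 1.0
t = np.arange(0, 1, dt)
quats = np.stack([np.cos(rate*t/2), 0*t, 0*t, np.sin(rate*t/2)], axis=1)
omega = angular_velocity(quats, dt)
```

The recovered signal is continuous, unlike Euler angles, which is exactly the property that makes it friendlier to CNN inputs.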
(This article belongs to the Special Issue Sensor-Based Behavioral Biometrics)

25 pages, 4706 KB  
Article
Transfer Learning-Based Distance-Adaptive Global Soft Biometrics Prediction in Surveillance
by Sonjoy Ranjon Das, Henry Onilude, Bilal Hassan, Preeti Patel and Karim Ouazzane
Electronics 2025, 14(18), 3719; https://doi.org/10.3390/electronics14183719 - 19 Sep 2025
Viewed by 168
Abstract
Soft biometric prediction—including age, gender, and ethnicity—is critical in surveillance applications, yet often suffers from performance degradation as the subject-to-camera distance increases. This study hypothesizes that embedding distance-awareness into the training process can mitigate such degradation and enhance model generalization across varying visual conditions. We propose a distance-adaptive, multi-task deep learning framework built upon EfficientNetB3, augmented with task-specific heads and trained progressively across four distance intervals (4 m to 10 m). A weighted composite loss function is employed to balance classification and regression objectives. The model is evaluated on a hybrid dataset combining the Front-View Gait (FVG) and MMV annotated pedestrian datasets, totaling over 19,000 samples. Experimental results demonstrate that the framework achieves up to 95% gender classification accuracy at 4 m and retains 85% accuracy at 10 m. Ethnicity prediction maintains an accuracy above 65%, while age estimation achieves a mean absolute error (MAE) ranging from 1.1 to 1.5 years. These findings validate the model’s robustness across distances and its superiority over conventional static learning approaches. Despite challenges such as computational overhead and annotation demands, the proposed approach offers a scalable and real-time-capable solution for distance-resilient biometric systems.
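A weighted composite loss of the kind described can be sketched as follows. The choice of cross-entropy for the classification heads, L1 for age regression, and the task weights are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

# Sketch of a weighted composite multi-task loss: cross-entropy for the
# gender and ethnicity heads plus L1 (MAE) for the age head. Task weights
# w are illustrative hyperparameters.

def cross_entropy(logits, label):
    z = logits - logits.max()                 # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def composite_loss(gender_logits, gender_y, eth_logits, eth_y,
                   age_pred, age_y, w=(1.0, 1.0, 0.5)):
    """Weighted sum of two classification losses and one regression loss."""
    return (w[0] * cross_entropy(gender_logits, gender_y)
            + w[1] * cross_entropy(eth_logits, eth_y)
            + w[2] * abs(age_pred - age_y))

loss = composite_loss(np.array([2.0, -1.0]), 0,
                      np.array([0.5, 1.5, -0.5]), 1,
                      age_pred=23.4, age_y=25.0)
```

In a progressive distance curriculum, the same loss would simply be evaluated on batches drawn from successively larger distance intervals.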

21 pages, 5572 KB  
Article
Real-Time Detection and Segmentation of the Iris At A Distance Scenarios Embedded in Ultrascale MPSoC
by Camilo Ruiz-Beltrán, Óscar Pons, Martín González-García and Antonio Bandera
Electronics 2025, 14(18), 3698; https://doi.org/10.3390/electronics14183698 - 18 Sep 2025
Viewed by 255
Abstract
Iris recognition is currently considered the most promising biometric method and has been applied in many fields. Current commercial and research systems typically use software solutions running on a dedicated computer, whose power consumption, size and price are considerably high. This paper presents a hardware-based embedded solution for real-time iris segmentation. From an algorithmic point of view, the system consists of two steps. The first employs a YOLOX network trained to detect two classes: eyes and iris/pupil. The two classes overlap within the iris/pupil region, and this overlap is used to emphasise the detection of the iris/pupil class. The second stage uses a lightweight U-Net network to segment the iris, which is applied only on the locations provided by the first stage. Designed to work in an Iris At A Distance (IAAD) scenario, the system includes quality parameters to discard low-contrast or low-sharpness detections. The whole system has been integrated on one MultiProcessor System-on-Chip (MPSoC) using AMD’s Deep learning Processing Unit (DPU). This approach is capable of processing the more than 45 frames per second provided by a 16 Mpx CMOS digital image sensor. Experiments to determine the accuracy of the proposed system in terms of iris segmentation are performed on several publicly available databases with satisfactory results.
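The quality gating mentioned above (discarding low-contrast or low-sharpness detections) is commonly implemented with RMS contrast and variance-of-Laplacian measures. The metrics and thresholds below are conventional choices, not necessarily the authors':

```python
import numpy as np

# Illustrative quality gate for iris crops: reject images whose RMS
# contrast or Laplacian-response variance (a standard sharpness proxy)
# falls below a threshold. Thresholds are illustrative.

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def laplacian_variance(img):
    """Variance of a 3x3 Laplacian response over a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def passes_quality_gate(img, min_contrast=0.05, min_sharpness=1e-4):
    """img: grayscale array normalised to [0, 1]."""
    return img.std() >= min_contrast and laplacian_variance(img) >= min_sharpness

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))             # high-frequency content: passes
flat = np.full((32, 32), 0.5)            # uniform: no contrast, rejected
```

Gating before the U-Net stage keeps the expensive segmentation budget for frames that can actually yield a usable iris code.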

17 pages, 239 KB  
Article
Stakeholder Roles and Views in the Implementation of the Differentiated HIV Treatment Service Delivery Model Among Female Sex Workers in Gauteng Province, South Africa
by Lifutso Motsieloa, Edith Phalane and Refilwe N. Phaswana-Mafuya
Healthcare 2025, 13(18), 2329; https://doi.org/10.3390/healthcare13182329 - 17 Sep 2025
Viewed by 308
Abstract
Background: Key populations (KPs), particularly female sex workers (FSWs), continue to face significant barriers in accessing HIV-related healthcare services in South Africa. Structural challenges have historically hindered equitable HIV treatment access, worsened by the COVID-19 pandemic. Overburdened clinics, staff shortages, and travel constraints disrupted HIV services and ART adherence. In response, the Differentiated Service Delivery (DSD) model was rapidly scaled up to decentralise care and improve treatment continuity. Objective: To solicit the views of stakeholders regarding their interests, roles and experiences in the implementation of the HIV treatment DSD model among FSWs in South Africa, as well as associated successes and barriers thereof. Methods: We purposively selected and interviewed eight stakeholders, comprising government officials, implementers and sex workers’ advocacy organizations. Thematic analysis was used to explore the perceived impact of DSD models and associated successes and barriers in the current service delivery landscape. Results: The study found that decentralization of DSD models improved access to services for FSWs. However, the criminalization of sex work perpetuates fear and marginalization, while stigma and discrimination within healthcare settings remain significant deterrents to HIV treatment uptake. High mobility among FSWs also disrupts continuity of care, contributing to treatment interruptions and lack of data on loss to follow-up. Participants highlighted the need for legal reform, increased healthcare provider sensitization, and the integration of mental health and psychosocial support in HIV services. Peer-led interventions and digital health innovations, such as biometric systems and electronic medical records, emerged as promising strategies for enhancing patient tracking and retention. Nonetheless, the sustainability of DSD models is threatened by an overreliance on external donor funding and insufficient government ownership. Conclusions: To achieve equitable healthcare access and improved HIV outcomes for KPs, especially FSWs, a multi-pronged, rights-based approach is essential. This must include community engagement, structural and legal reforms, integrated support services, and sustainable financing mechanisms to ensure the long-term impact and scalability of DSD models.
31 pages, 2542 KB  
Article
ECR-MobileNet: An Imbalanced Largemouth Bass Parameter Prediction Model with Adaptive Contrastive Regression and Dependency-Graph Pruning
by Hao Peng, Cheng Ouyang, Lin Yang, Jingtao Deng, Mingyu Tan, Yahui Luo, Wenwu Hu, Pin Jiang and Yi Wang
Animals 2025, 15(16), 2443; https://doi.org/10.3390/ani15162443 - 20 Aug 2025
Viewed by 507
Abstract
The precise, non-destructive monitoring of fish length and weight is a core technology for advancing intelligent aquaculture. However, this field faces dual challenges: traditional contact-based measurements induce stress and yield loss. In addition, existing computer vision methods are hindered by prediction biases from imbalanced data and the deployment bottleneck of balancing high accuracy with model lightweighting. This study aims to overcome these challenges by developing an efficient and robust deep learning framework. We propose ECR-MobileNet, a lightweight framework built on MobileNetV3-Small. It features three key innovations: an efficient channel attention (ECA) module to enhance feature discriminability, an original adaptive multi-scale contrastive regression (AMCR) loss function that extends contrastive learning to multi-dimensional regression for length and weight simultaneously to mitigate data imbalance, and a dependency-graph-based (DepGraph) structured pruning technique that synergistically optimizes model size and performance. On our multi-scene largemouth bass dataset, the pruned ECR-MobileNet-P model comprehensively outperformed 14 mainstream benchmarks. It achieved an R2 of 0.9784 and a root mean square error (RMSE) of 0.4296 cm for length prediction, as well as an R2 of 0.9740 and an RMSE of 0.0202 kg for weight prediction. The model’s parameter count is only 0.52 M, with a computational load of 0.07 giga floating-point operations per second (GFLOPs) and a CPU latency of 10.19 ms, achieving Pareto optimality. This study provides an edge-deployable solution for stress-free biometric monitoring in aquaculture and establishes an innovative methodological paradigm for imbalanced regression and task-oriented model compression.
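The contrastive-regression idea behind AMCR, pulling together embeddings with similar length/weight labels and pushing apart dissimilar ones, can be sketched generically. The pairwise formulation, similarity threshold, and margin scaling below are illustrative, not the authors' exact AMCR loss:

```python
import numpy as np

# Generic contrastive-regression sketch: pairs with similar regression
# targets (here, length and weight) are pulled together in feature space;
# dissimilar pairs are pushed beyond a label-distance-dependent margin.

def contrastive_regression_loss(emb, labels, margin_scale=1.0, sim_thresh=0.1):
    """emb: (N, D) embeddings; labels: (N, K) regression targets.
    Returns the mean pairwise loss."""
    n = len(emb)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d_feat = np.linalg.norm(emb[i] - emb[j])
            d_label = np.linalg.norm(labels[i] - labels[j])
            if d_label < sim_thresh:              # similar targets: attract
                loss += d_feat ** 2
            else:                                 # dissimilar: repel to margin
                loss += max(0.0, margin_scale * d_label - d_feat) ** 2
            pairs += 1
    return loss / pairs

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))
targets = rng.uniform(size=(6, 2))                # (length, weight) pairs
loss = contrastive_regression_loss(emb, targets)
```

Making the margin grow with label distance is what adapts the contrastive objective to a continuous, imbalanced target space instead of discrete classes.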
(This article belongs to the Section Aquatic Animals)

17 pages, 6208 KB  
Article
Sweet—An Open Source Modular Platform for Contactless Hand Vascular Biometric Experiments
by David Geissbühler, Sushil Bhattacharjee, Ketan Kotwal, Guillaume Clivaz and Sébastien Marcel
Sensors 2025, 25(16), 4990; https://doi.org/10.3390/s25164990 - 12 Aug 2025
Viewed by 598
Abstract
Current finger-vein or palm-vein recognition systems usually require direct contact of the subject with the apparatus. This can be problematic in environments where hygiene is of primary importance. In this work we present a contactless vascular biometrics sensor platform named sweet which can be used for hand vascular biometrics studies (wrist, palm, and finger-vein) and surface features such as palmprint. It supports several acquisition modalities such as multi-spectral Near-Infrared (NIR), RGB-color, Stereo Vision (SV) and Photometric Stereo (PS). Using this platform we collected a dataset consisting of the fingers, palm and wrist vascular data of 120 subjects. We present biometric experimental results, focusing on Finger-Vein Recognition (FVR). Finally, we discuss fusion of multiple modalities. The acquisition software, parts of the hardware design, the new FV dataset, as well as source-code for our experiments are publicly available for research purposes.
(This article belongs to the Special Issue Novel Optical Sensors for Biomedical Applications—2nd Edition)

24 pages, 3087 KB  
Article
Photoplethysmogram (PPG)-Based Biometric Identification Using 2D Signal Transformation and Multi-Scale Feature Fusion
by Yuanyuan Xu, Zhi Wang and Xiaochang Liu
Sensors 2025, 25(15), 4849; https://doi.org/10.3390/s25154849 - 7 Aug 2025
Viewed by 581
Abstract
Using Photoplethysmogram (PPG) signals for identity recognition has been proven effective in biometric authentication. However, in real-world applications, PPG signals are prone to interference from noise, physical activity, diseases, and other factors, making it challenging to ensure accurate user recognition and verification in complex environments. To address these issues, this paper proposes an improved MSF-SE ResNet50 (Multi-Scale Feature Squeeze-and-Excitation ResNet50) model based on 2D PPG signals. Unlike most existing methods that directly process one-dimensional PPG signals, this paper adopts a novel approach based on two-dimensional PPG signal processing. By applying Continuous Wavelet Transform (CWT), the preprocessed one-dimensional PPG signal is transformed into a two-dimensional time-frequency map, which not only preserves the time-frequency characteristics of the signal but also provides richer spatial information. During the feature extraction process, the SENet module is first introduced to enhance the ability to extract distinctive features. Next, a novel Lightweight Multi-Scale Feature Fusion (LMSFF) module is proposed, which addresses the limitation of single-scale feature extraction in existing methods by employing parallel multi-scale convolutional operations. Finally, cross-stage feature fusion is implemented, overcoming the limitations of traditional feature fusion methods. These techniques work synergistically to improve the model’s performance. On the BIDMC dataset, the MSF-SE ResNet50 model achieved accuracy, precision, recall, and F1 scores of 98.41%, 98.19%, 98.27%, and 98.23%, respectively. Compared to existing state-of-the-art methods, the proposed model demonstrates significant improvements across all evaluation metrics, highlighting its significance in terms of network architecture and performance.
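The 1-D-to-2-D step can be sketched with a direct Morlet CWT, implemented by hand here rather than with a wavelet library. The sampling rate, scale range, and synthetic pulse are assumptions for illustration:

```python
import numpy as np

# Turn a 1-D PPG-like waveform into a 2-D time-frequency map (scalogram)
# with a hand-rolled Morlet continuous wavelet transform; a 2-D CNN such
# as the model above would consume this map as an image.

def morlet_cwt(signal, scales, w0=6.0):
    """Return a (len(scales), len(signal)) scalogram of |CWT| magnitudes."""
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    rows = []
    for s in scales:
        wavelet = (np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
                   / np.sqrt(s))
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.array(rows)

fs = 125                                  # Hz (assumed sampling rate)
t = np.arange(0, 4, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)         # synthetic ~72 bpm pulse wave
scalogram = morlet_cwt(ppg, scales=np.arange(4, 64, 4))
```

Each row of the scalogram corresponds to one scale (inverse frequency), so periodic pulse structure shows up as a bright horizontal band.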
(This article belongs to the Section Biomedical Sensors)

24 pages, 5022 KB  
Article
Aging-Invariant Sheep Face Recognition Through Feature Decoupling
by Suhui Liu, Chuanzhong Xuan, Zhaohui Tang, Guangpu Wang, Xinyu Gao and Zhipan Wang
Animals 2025, 15(15), 2299; https://doi.org/10.3390/ani15152299 - 6 Aug 2025
Viewed by 414
Abstract
Precise recognition of individual ovine specimens plays a pivotal role in implementing smart agricultural platforms and optimizing herd management systems. With the development of deep learning technology, sheep face recognition provides an efficient and contactless solution for individual sheep identification. However, with the growth of sheep, their facial features keep changing, which poses challenges for existing sheep face recognition models to maintain accuracy across the dynamic changes in facial features over time, making it difficult to meet practical needs. To address this limitation, we propose the lifelong biometric learning of the sheep face network (LBL-SheepNet), a feature decoupling network designed for continuous adaptation to ovine facial changes, and constructed a dataset of 31,200 images from 55 sheep tracked monthly from 1 to 12 months of age. The LBL-SheepNet model addresses dynamic variations in facial features during sheep growth through a multi-module architectural framework. Firstly, a Squeeze-and-Excitation (SE) module enhances discriminative feature representation through adaptive channel-wise recalibration. Then, a nonlinear feature decoupling module employs a hybrid channel-batch attention mechanism to separate age-related features from identity-specific characteristics. Finally, a correlation analysis module utilizes adversarial learning to suppress age-biased feature interference, ensuring focus on age-invariant identifiers. Experimental results demonstrate that LBL-SheepNet achieves 95.5% identification accuracy and 95.3% average precision on the sheep face dataset. This study introduces a lifelong biometric learning (LBL) mechanism to mitigate recognition accuracy degradation caused by dynamic facial feature variations in growing sheep. By designing a feature decoupling network integrated with adversarial age-invariant learning, the proposed method addresses the performance limitations of existing models in long-term individual identification.
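The Squeeze-and-Excitation recalibration used in the first stage follows the standard squeeze/excite pattern, sketched below in numpy. The reduction ratio is an assumption, and the random weights stand in for the learned bottleneck parameters:

```python
import numpy as np

# Minimal Squeeze-and-Excitation block: squeeze a (C, H, W) feature map to
# per-channel means, excite through a two-layer bottleneck (ReLU then
# sigmoid), and rescale each channel by its gate. W1/W2 are learned in a
# real model; here they are random stand-ins.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """feature_map: (C, H, W) -> (recalibrated map, per-channel gates)."""
    squeezed = feature_map.mean(axis=(1, 2))                 # (C,) squeeze
    excited = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (C,) gates
    return feature_map * excited[:, None, None], excited

rng = np.random.default_rng(0)
c, r = 16, 4                                  # channels, reduction ratio
fmap = rng.random((c, 8, 8))
w1 = rng.normal(scale=0.1, size=(c // r, c))
w2 = rng.normal(scale=0.1, size=(c, c // r))
recalibrated, gates = se_block(fmap, w1, w2)
```

The gates stay in (0, 1), so the block can only attenuate channels, which is what makes the recalibration a soft channel-attention mechanism.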
(This article belongs to the Section Animal System and Management)

36 pages, 1010 KB  
Article
SIBERIA: A Self-Sovereign Identity and Multi-Factor Authentication Framework for Industrial Access
by Daniel Paredes-García, José Álvaro Fernández-Carrasco, Jon Ander Medina López, Juan Camilo Vasquez-Correa, Imanol Jericó Yoldi, Santiago Andrés Moreno-Acevedo, Ander González-Docasal, Haritz Arzelus Irazusta, Aitor Álvarez Muniain and Yeray de Diego Loinaz
Appl. Sci. 2025, 15(15), 8589; https://doi.org/10.3390/app15158589 - 2 Aug 2025
Cited by 1 | Viewed by 633
Abstract
The growing need for secure and privacy-preserving identity management in industrial environments has exposed the limitations of traditional, centralized authentication systems. In this context, SIBERIA was developed as a modular solution that empowers users to control their own digital identities, while ensuring robust protection of critical services. The system is designed in alignment with European standards and regulations, including EBSI, eIDAS 2.0, and the GDPR. SIBERIA integrates a Self-Sovereign Identity (SSI) framework with a decentralized blockchain-based infrastructure for the issuance and verification of Verifiable Credentials (VCs). It incorporates multi-factor authentication by combining a voice biometric module, enhanced with spoofing-aware techniques to detect synthetic or replayed audio, and a behavioral biometrics module that provides continuous authentication by monitoring user interaction patterns. The system enables secure and user-centric identity management in industrial contexts, ensuring high resistance to impersonation and credential theft while maintaining regulatory compliance. SIBERIA demonstrates that it is possible to achieve both strong security and user autonomy in digital identity systems by leveraging decentralized technologies and advanced biometric verification methods.
(This article belongs to the Special Issue Blockchain and Distributed Systems)

17 pages, 2072 KB  
Article
Barefoot Footprint Detection Algorithm Based on YOLOv8-StarNet
by Yujie Shen, Xuemei Jiang, Yabin Zhao and Wenxin Xie
Sensors 2025, 25(15), 4578; https://doi.org/10.3390/s25154578 - 24 Jul 2025
Viewed by 592
Abstract
This study proposes an optimized footprint recognition model based on an enhanced StarNet architecture for biometric identification in the security, medical, and criminal investigation fields. Conventional image recognition algorithms exhibit limitations in processing barefoot footprint images characterized by concentrated feature distributions and rich texture patterns. To address this, our framework integrates an improved StarNet into the backbone of the YOLOv8 architecture. Leveraging the unique advantages of element-wise multiplication, the redesigned backbone efficiently maps inputs to a high-dimensional nonlinear feature space without increasing channel dimensions, achieving enhanced representational capacity with low computational latency. Subsequently, an Encoder layer facilitates feature interaction within the backbone through multi-scale feature fusion and attention mechanisms, effectively extracting rich semantic information while maintaining computational efficiency. In the feature fusion part, a feature modulation block processes multi-scale features by synergistically combining global and local information, thereby reducing redundant computations and decreasing both parameter count and computational complexity to achieve model lightweighting. Experimental evaluations on a proprietary barefoot footprint dataset demonstrate that the proposed model exhibits significant advantages in terms of parameter efficiency, recognition accuracy, and computational complexity. The number of parameters has been reduced by 0.73 million, further improving the model’s speed. GFLOPs have been reduced by 1.5, lowering the performance requirements for computational hardware during model deployment. Recognition accuracy has reached 99.5%, with further improvements in model precision. Future research will explore how to capture shoeprint images with complex backgrounds from shoes worn at crime scenes, aiming to further enhance the model’s recognition capabilities in more forensic scenarios.
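The element-wise multiplication at the heart of StarNet, multiplying two linear projections of the same input, can be sketched in a few lines. The weights here are random stand-ins for learned parameters:

```python
import numpy as np

# The StarNet "star" operation: (W1 x + b1) * (W2 x + b2), an element-wise
# product of two linear projections. It injects high-order feature
# interactions without widening the channel dimension.

def star_op(x, w1, b1, w2, b2):
    """x: (N, C) features -> (N, C) via element-wise product of two
    linear projections of x."""
    return (x @ w1.T + b1) * (x @ w2.T + b2)

rng = np.random.default_rng(0)
n, c = 4, 32
x = rng.normal(size=(n, c))
w1 = rng.normal(scale=0.1, size=(c, c))
w2 = rng.normal(scale=0.1, size=(c, c))
b1, b2 = np.zeros(c), np.zeros(c)
out = star_op(x, w1, b1, w2, b2)
```

Expanding the product shows why this works: each output coordinate contains pairwise products of input features, i.e. an implicit quadratic feature map at linear-layer cost.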
(This article belongs to the Special Issue Transformer Applications in Target Tracking)

24 pages, 824 KB  
Article
MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks
by Kamrul Hasan, Khandokar Alisha Tuhin, Md Rasul Islam Bapary, Md Shafi Ud Doula, Md Ashraful Alam, Md Atiqur Rahman Ahad and Md. Zasim Uddin
Symmetry 2025, 17(7), 1155; https://doi.org/10.3390/sym17071155 - 19 Jul 2025
Viewed by 748
Abstract
Gait recognition is a reliable biometric approach that uniquely identifies individuals based on their natural walking patterns. It is widely used because gait is difficult to camouflage and does not require a person’s cooperation. The general face-based person recognition system often fails to determine the offender’s identity when they conceal their face by wearing helmets and masks to evade identification. In such cases, gait-based recognition is ideal for identifying offenders, and most existing work leverages a deep learning (DL) model. However, a single model often fails to capture a comprehensive selection of refined patterns in input data when external factors are present, such as variation in viewing angle, clothing, and carrying conditions. In response, this paper introduces a fusion-based multi-model gait recognition framework that leverages the potential of convolutional neural networks (CNNs) and a vision transformer (ViT) in an ensemble manner to enhance gait recognition performance. Here, CNNs capture spatiotemporal features, and ViT features multiple attention layers that focus on a particular region of the gait image. The first step in this framework is to obtain the Gait Energy Image (GEI) by averaging a height-normalized gait silhouette sequence over a gait cycle, which can handle the left–right symmetry of the gait. After that, the GEI image is fed through multiple pre-trained models and fine-tuned precisely to extract the deep spatiotemporal feature. Later, three separate fusion strategies are conducted: the first is decision-level fusion (DLF), which takes each model’s decision and employs majority voting for the final decision. The second is feature-level fusion (FLF), which combines the features from individual models through pointwise addition before performing gait recognition. Finally, a hybrid fusion combines DLF and FLF for gait recognition. The performance of the multi-model fusion-based framework was evaluated on three publicly available gait databases: CASIA-B, OU-ISIR D, and the OU-ISIR Large Population dataset. The experimental results demonstrate that the fusion-enhanced framework achieves superior performance.
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
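The GEI construction and the two basic fusion strategies described in the abstract can be sketched in a few lines. This is a minimal toy illustration under our own function names, not the paper's implementation (which uses fine-tuned CNN and ViT backbones):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: average a height-normalised binary silhouette sequence over one gait cycle."""
    return np.mean(np.asarray(silhouettes, dtype=float), axis=0)

def decision_level_fusion(predictions):
    """DLF: majority vote over the per-model class predictions."""
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

def feature_level_fusion(feature_vectors):
    """FLF: pointwise addition of per-model feature vectors before classification."""
    return np.sum(np.asarray(feature_vectors), axis=0)
```

In this sketch the hybrid fusion would simply apply majority voting over the DLF decision and a classifier's decision on the FLF vector.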
20 pages, 459 KB  
Article
Post-Quantum Secure Multi-Factor Authentication Protocol for Multi-Server Architecture
by Yunhua Wen, Yandong Su and Wei Li
Entropy 2025, 27(7), 765; https://doi.org/10.3390/e27070765 - 18 Jul 2025
Viewed by 566
Abstract
The multi-factor authentication (MFA) protocol requires users to provide a combination of a password, a smart card, and biometric data as verification factors to gain access to the services they need. In a single-server MFA system, users accessing multiple distinct servers must register separately with each server, manage multiple smart cards, and remember numerous passwords. In contrast, an MFA system designed for a multi-server architecture allows users to register once at a registration center (RC) and then access all associated servers with a single smart card and one password. An MFA system with an offline RC further addresses the computational bottleneck and single-point-of-failure issues associated with the RC. In this paper, we propose a post-quantum secure MFA protocol for a multi-server architecture with an offline RC. Our protocol uses the post-quantum secure Kyber key encapsulation mechanism and an information-theoretically secure fuzzy extractor as its building blocks. We formally prove the post-quantum semantic security of our MFA protocol under the real-or-random (ROR) model in the random oracle paradigm. Compared to related protocols, our protocol achieves higher efficiency and maintains reasonable communication overhead. Full article
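The fuzzy extractor named in the abstract can be illustrated with a toy code-offset construction over a repetition code. This sketch is our own and is far weaker than the information-theoretically secure extractor (and the Kyber KEM) the paper uses, but it shows the Gen/Rep interface: Gen derives a key and a public helper string from a biometric reading, and Rep recovers the same key from any sufficiently close reading:

```python
import hashlib
import secrets

REP = 3  # repetition factor: each key bit is encoded as REP identical bits

def _encode(bits):
    """Repetition-code encoding of a bit list."""
    return [b for b in bits for _ in range(REP)]

def _decode(bits):
    """Majority-vote decoding, tolerating up to (REP-1)//2 flips per group."""
    return [int(sum(bits[i:i + REP]) > REP // 2) for i in range(0, len(bits), REP)]

def gen(w):
    """Gen: from a biometric bit string w, output a key R and a public helper P."""
    k = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    codeword = _encode(k)
    helper = [wi ^ ci for wi, ci in zip(w, codeword)]  # code-offset: P = w XOR c
    key = hashlib.sha256(bytes(k)).hexdigest()
    return key, helper

def rep(w_noisy, helper):
    """Rep: recover the same key from a reading close to the enrolled w."""
    shifted = [wi ^ pi for wi, pi in zip(w_noisy, helper)]  # ~ noisy codeword
    return hashlib.sha256(bytes(_decode(shifted))).hexdigest()
```

A production scheme would replace the repetition code with a strong error-correcting code and pair the extracted key with the password and smart-card secret during Kyber-based authenticated key exchange.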
11 pages, 3292 KB  
Article
Essential Multi-Secret Image Sharing for Sensor Images
by Shang-Kuan Chen
J. Imaging 2025, 11(7), 228; https://doi.org/10.3390/jimaging11070228 - 8 Jul 2025
Viewed by 394
Abstract
In this paper, we propose an innovative essential multi-secret image sharing (EMSIS) scheme that integrates sensor data to securely and efficiently share multiple secret images of varying importance. Secret images are categorized into hierarchical levels and encoded into essential shadows and fault-tolerant non-essential shares, with access to higher-level secrets requiring higher-level essential shadows. By incorporating sensor data, such as location, time, or biometric input, into the encoding and access process, the scheme enables the context-aware and adaptive reconstruction of secrets based on real-world conditions. Experimental results demonstrate that the proposed method not only strengthens hierarchical access control, but also enhances robustness, flexibility, and situational awareness in secure image distribution systems. Full article
(This article belongs to the Section Computational Imaging and Computational Photography)
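The abstract does not spell out the EMSIS construction itself, but Shamir-style threshold secret sharing is the standard building block for schemes of this kind. The toy sketch below (our own illustration over a small prime field, applied value by value to image data) shows the core property: any k of n shadows reconstruct a secret, while fewer reveal nothing:

```python
import secrets

P = 257  # small prime field, large enough for one byte per secret value (toy choice)

def share(secret, k, n):
    """Split a secret value into n shares; any k of them suffice to reconstruct."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

A hierarchical scheme like EMSIS would layer this idea, binding higher-level secrets to essential shadows (and, here, to sensor-derived context such as location or biometric input) so that lower-level share sets cannot reconstruct them.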