Search Results (120)

Search Parameters:
Keywords = biometric fusion

2 pages, 134 KB  
Correction
Correction: Rastogi et al. Sequential Multimodal Biometric Authentication Fusion System. Mathematics 2026, 14, 1178
by Swati Rastogi, Sanoj Kumar, Musrrat Ali and Abdul Rahaman Wahab Sait
Mathematics 2026, 14(9), 1428; https://doi.org/10.3390/math14091428 - 24 Apr 2026
Viewed by 116
Abstract
In the original publication [...] Full article
36 pages, 884 KB  
Review
Real-Time Cognitive State Monitoring via Physiological Signals in Commercial Aviation: A Systematic Literature Review with Reasoned Snowballing Expansion
by Giacomo Belloni and Petru Lucian Curșeu
Safety 2026, 12(2), 56; https://doi.org/10.3390/safety12020056 - 20 Apr 2026
Viewed by 435
Abstract
Aviation safety depends critically on pilots’ mental and cognitive states, particularly in high-stakes and complex operational environments where human errors cause most safety events today. This paper reviews current advances in real-time monitoring of commercial pilots’ cognitive states through physiological and neurophysiological signals and identifies methods applicable to enhance aviation safety and efficiency. In an increasingly complex and congested system, it is essential to investigate the relationships between pilots’ mental workload, stress, startle effect, and physiological parameters to highlight cognitive overload or deficiencies in real time. This systematic literature review was conducted according to PRISMA 2020 guidelines, using Google Scholar, Scopus, and PubMed, and identified 26 eligible studies. A targeted backward citation search screened 17 additional records, and two studies were added to the initial set. Twenty-eight records were therefore included and the review highlights a range of biometric indicators of pilots’ mental states with varying degrees of validity and operational applicability. Collectively, these studies offer a clear overview of state-of-the-art approaches, while also evidencing constraints related to intrusiveness and real-world feasibility. Physiological monitoring holds strong promise for enhancing pilot performance and safety by detecting early signs of overload and stress. However, its integration into operational aviation remains limited. Future research should prioritise longitudinal, in situ evaluations, multimodal data fusion, and pilot-centred design to ensure practical applicability, non-intrusiveness, and regulatory compliance, ultimately bridging the gap between academic research and cockpit reality. Full article

21 pages, 1320 KB  
Article
Adaptive Decision Fusion in Probability Space for Pedestrian Gender Recognition
by Lei Cai, Huijie Zheng, Fang Ruan, Feng Chen, Wenjie Xiang, Qi Lin and Yifan Shi
Appl. Sci. 2026, 16(8), 3640; https://doi.org/10.3390/app16083640 - 8 Apr 2026
Viewed by 250
Abstract
Pedestrian gender recognition plays an important role in pedestrian analysis and intelligent video applications, for example, in demographic statistics, soft biometric analysis, and context-aware person retrieval. However, it remains a challenging task owing to viewpoint variations, illumination changes, occlusions, and low image quality in real-world imagery. To address these issues, an effective adaptive decision fusion framework, termed the Decision Fusion Learning Network (DFLN), is proposed in this paper. The key novel aspect of DFLN is that it effectively explores both an appearance-centered view that emphasizes detailed texture and clothing information and a structure-centered view that captures rich contour and structural information for pedestrian gender recognition. To realize DFLN, a Parallel CNN Prediction Probability Learning Module (PCNNM) is first constructed to independently learn modality-specific probabilities from color image and edge maps. Subsequently, a learnable Decision Fusion Module (DFM) is designed to fuse the modality-specific probabilities and explore their complementary merits for realizing accurate pedestrian gender recognition. The DFM can be easily coupled with the PCNNM, forming an end-to-end decision fusion learning framework that simultaneously learns the feature representations and carries out adaptive decision fusion. Experiments on two pedestrian benchmark datasets, named PETA and PA-100K, show that DFLN achieves competitive or superior performance compared with several state-of-the-art pedestrian gender recognition methods. Extensive experimental analysis further confirms the effectiveness of the proposed decision fusion strategy and its favorable generalization ability under domain shift. Full article

24 pages, 1564 KB  
Article
Sequential Multimodal Biometric Authentication Fusion System
by Swati Rastogi, Sanoj Kumar, Musrrat Ali and Abdul Rahaman Wahab Sait
Mathematics 2026, 14(7), 1178; https://doi.org/10.3390/math14071178 - 1 Apr 2026
Cited by 1 | Viewed by 548 | Correction
Abstract
The current study proposes an improved DenseNet-based Sequential Multimodal Biometric Authentication System, combining face and ear modalities for better human identification. The architecture comprises three convolutional layers and two dense layers, optimized for obtaining discriminative spatial representations from 200 × 200 pixel facial and ear images. Evaluation is performed under strict 5-fold subject-disjoint cross-validation to ensure unbiased assessment. The proposed model attained a steady classification accuracy of 97.1 ± 0.79% and balanced Precision, Recall, and F1-score values under controlled validation conditions, while performance analysis of the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER) showed an EER of around 1.05% at the optimal operating point. Comparative experiments between parallel feature concatenation and sequential verification techniques show that the sequential framework yields a lower FAR than the parallel framework without a detrimental effect on overall accuracy, and statistical validation by analysis of variance shows that the incremental architectural improvements have a significant impact on performance. Score-distribution analysis indicates that the proposed method exceeds both single-trait and traditional multifactor systems, presenting a novel approach for next-generation authentication solutions. This study advances biometric security by demonstrating how multimodal fusion may address the increasing global demand for robust and privacy-aware authentication methods, thereby setting a standard for intelligent multimodal recognition systems. Full article

31 pages, 2150 KB  
Article
Context-Aware Decision Fusion for Multimodal Access Control Under Contradictory Biometric Evidence
by Yasser Hmimou, Azedine Khiat, Hassna Bensag, Zineb Hidila and Mohamed Tabaa
Computers 2026, 15(4), 208; https://doi.org/10.3390/computers15040208 - 27 Mar 2026
Viewed by 575
Abstract
Access control systems rely increasingly on multimodal biometric and behavioral signals to enhance security and robustness against sophisticated attacks. However, when heterogeneous modalities provide conflicting evidence, such as valid biometric credentials accompanied by abnormal behavioral or acoustic patterns, traditional fusion strategies based on static thresholds or majority voting often fail, leading to false alarms or insecure authorization decisions. This paper addresses this critical limitation by proposing a contextual decision-making fusion framework designed to resolve conflicting multimodal evidence at the decision-making level. The proposed approach models access control as a decision-making problem in a context of uncertainty, where independent agents generate modality-specific evidence from authentication channels based on face, voice, and fingerprints. A centralized fusion mechanism integrates heterogeneous results using adaptive reliability weighting and contextual reasoning to resolve conflicts before operational decisions are made. Rather than treating each modality independently, the framework explicitly considers inconsistencies, uncertainties, and situational context when aggregating evidence. The framework is evaluated using public benchmarks, including VGGFace2, VoxCeleb2, and FVC2004, combined with controlled multimodal scenarios that induce conflicting evidence. Experimental results obtained under controlled contradiction scenarios show that the proposed fusion strategy reduces false alarms and improves decision consistency by approximately 18%. These results are interpreted within the scope of controlled multimodal simulations. Full article

25 pages, 3673 KB  
Systematic Review
Recent Advances in Multi-Camera Computer Vision for Industry 4.0 and Smart Cities: A Systematic Review
by Carlos Julio Fierro-Silva, Carolina Del-Valle-Soto, Samih M. Mostafa and José Varela-Aldás
Algorithms 2026, 19(4), 249; https://doi.org/10.3390/a19040249 - 25 Mar 2026
Viewed by 894
Abstract
The rapid deployment of surveillance cameras in urban, industrial, and domestic environments has intensified the need for intelligent systems capable of analyzing video streams beyond the limitations of single-camera setups. Unlike traditional single-camera approaches, multi-camera systems expand spatial coverage, reduce blind spots, and enable consistent tracking of people and objects across non-overlapping views, thereby improving robustness against occlusions and viewpoint changes. This article presents a comprehensive review of multi-camera vision systems published between 2020 and 2025, covering application domains including public security and biometrics, intelligent transportation, smart cities and IoT, healthcare monitoring, precision agriculture, industry and robotics, pan–tilt–zoom (PTZ) camera networks, and emerging areas such as retail and forensic analysis. The review synthesizes predominant technical approaches, including deep-learning-based detection, multi-target multi-camera tracking (MTMCT), re-identification (Re-ID), spatiotemporal fusion, and edge computing architectures. Persistent challenges are identified, particularly in inter-camera data association, scalability, computational efficiency, privacy preservation, and dataset availability. Emerging trends such as distributed edge AI, cooperative camera networks, and active perception are discussed to outline future research directions toward scalable, privacy-aware, and intelligent multi-camera infrastructures. Full article

35 pages, 743 KB  
Systematic Review
Affective Intelligent Systems in Healthcare: A Systematic Review
by Analúcia Schiaffino Morales, Thiago de Luca Reis, Alison R. Panisson, Fabrício Ourique and Iwens G. Sene
Technologies 2026, 14(3), 188; https://doi.org/10.3390/technologies14030188 - 20 Mar 2026
Cited by 1 | Viewed by 628
Abstract
Objectives: To investigate the current state of affective computing in healthcare, focusing on its application contexts, algorithmic trends, and the technical–ethical duality involving data privacy and security. Methods and Results: A systematic review was conducted in two phases (2013–2025) following PRISMA guidelines. A total of 170 peer-reviewed articles were selected from PubMed, IEEE Xplore, Scopus, and Web of Science based on predefined inclusion and exclusion criteria, with the sample restricted to full-text studies in English addressing affective computing in healthcare. No formal risk-of-bias tool was applied due to the computational nature of the studies, and the findings were synthesized descriptively. Discussion: The findings reveal a clear shift from classical machine learning (e.g., SVM, k-NN) toward deep learning and hybrid architectures such as CNN–LSTM and attention-based models for processing complex physiological signals. Recent years have shown a growing interest in multimodal data fusion and privacy-preserving mechanisms such as homomorphic encryption. Evidence remains limited by methodological heterogeneity and inconsistent reporting across studies. A significant gap persists in regulatory compliance, as 57% of recent publications do not adequately address data security or ethical risks associated with sensitive biometric footprints. Conclusions: Although affective computing has reached a certain level of technical maturity, future research must prioritize lightweight, secure, and privacy-by-design architectures to enable ethically aligned and trustworthy deployment in real-world healthcare scenarios. Full article

21 pages, 2858 KB  
Article
Generation of Distances Between Feature Vectors Derived from a Siamese Neural Network for Continuous Authentication
by Sergey Davydenko, Pavel Laptev and Evgeny Kostyuchenko
J. Cybersecur. Priv. 2026, 6(2), 45; https://doi.org/10.3390/jcp6020045 - 3 Mar 2026
Viewed by 457
Abstract
Continuous authentication is a promising method for protecting computer systems in the event of compromise of primary authentication factors, such as passwords or tokens. Systems employing continuous authentication that rely on biometrics may not be restricted to a single biometric characteristic; rather, they can simultaneously utilize multiple characteristics and subsequently arrive at a conclusive decision based on their collective analysis outcomes. One of the significant challenges researchers encounter when investigating effective fusion in decision-making is the lack of data. At present, data generation primarily involves the creation of feature vectors or attack simulation. This paper introduces a method for directly generating distances derived from a Siamese neural network, utilizing the probability density function of an existing distribution. Through statistical analysis, we successfully generated 5000 samples that correspond to the initial distribution, which were then employed to discover the threshold values at which FAR and FRR were less than 1%. The methods developed can be further applied to identify the most efficient strategies for integrating the results of continuous authentication in systems that incorporate multiple biometric characteristics. Full article
(This article belongs to the Special Issue Cyber Security and Digital Forensics—3rd Edition)

26 pages, 2520 KB  
Article
Concealed Face Analysis and Facial Reconstruction via a Multi-Task Approach and Cross-Modal Distillation in Terahertz Imaging
by Noam Bergman, Ihsan Ozan Yildirim, Asaf Behzat Sahin, Hakan Altan and Yitzhak Yitzhaky
Sensors 2026, 26(4), 1341; https://doi.org/10.3390/s26041341 - 19 Feb 2026
Viewed by 539
Abstract
Terahertz (THz) sub-millimeter wave imaging offers unique capabilities for stand-off biometrics through concealment, yet it suffers from severe sparsity, low resolution, and high noise. To address these limitations, we introduce a novel unified Multi-Task Learning (MTL) network centered on a custom shared U-Net-like THz data encoder. This network is designed to simultaneously solve three distinct critical tasks on concealed THz facial data, given a limited dataset of approximately 1400 THz facial images of 20 different identities. The tasks include concealed face verification, facial posture classification, and a generative reconstruction of unconcealed faces from concealed ones. While providing highly successful MTL results as a standalone solution on the very challenging dataset, we further studied the expansion of this architecture via a cross-modal teacher-student approach. During training, a privileged visible-spectrum teacher fuses limited visible features with THz data to guide the THz-only student. This distillation process yields a student network that relies solely on THz inputs at inference. The cross-modal trained student achieves better latent space in terms of inter-class separability compared to the single-modality baseline, but with reduced intra-class compactness, while maintaining a similar success in the task performances. Both THz-only and distilled models preserve high unconcealed face generative fidelity. Full article

28 pages, 3713 KB  
Article
Multi-Class Online Signature Verification Based on Hybrid Statistical Moments and UMAP-Based Nonlinear Dimensionality Reduction
by Liyan Huang, Yuanxiang Ruan, Weijun Li, Naisheng Xu and Pan Zheng
Technologies 2026, 14(2), 89; https://doi.org/10.3390/technologies14020089 - 1 Feb 2026
Viewed by 485
Abstract
Online signature verification (OSV) is a challenging problem in behavioral biometrics, especially when skilled forgeries closely mimic genuine signatures in both appearance and dynamics. This study presents a multi-class OSV framework that combines hybrid statistical features and nonlinear dimensionality reduction using Uniform Manifold Approximation and Projection (UMAP). A 40-dimensional feature set is created from statistical moments of dynamic writing parameters in both time and frequency (DCT) domains. Experimental results show that UMAP-based dimensionality reduction preserves category-related structures in a compact two-dimensional space. The proposed approach achieves an average classification accuracy of 0.989 ± 0.005 and a Cohen’s kappa coefficient of 0.985 ± 0.006, demonstrating robust performance across multiple classifiers. The results validate the effectiveness of combining multi-domain statistical feature fusion with UMAP for multi-class online signature verification, providing both high performance and interpretable visual representations. Full article
(This article belongs to the Section Information and Communication Technologies)

29 pages, 1843 KB  
Systematic Review
Deep Learning for Tree Crown Detection and Delineation Using UAV and High-Resolution Imagery for Biometric Parameter Extraction: A Systematic Review
by Abdulrahman Sufyan Taha Mohammed Aldaeri, Chan Yee Kit, Lim Sin Ting and Mohamad Razmil Bin Abdul Rahman
Forests 2026, 17(2), 179; https://doi.org/10.3390/f17020179 - 29 Jan 2026
Viewed by 1017
Abstract
Mapping individual-tree crowns (ITCs) along with extracting tree morphological attributes provides the core parameters required for estimating thermal stress and carbon emission functions. However, calculating morphological attributes relies on the prior delineation of ITCs. Using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) framework, this review synthesizes how deep-learning (DL)-based methods enable the conversion of crown geometry into reliable biometric parameter extraction (BPE) from high-resolution imagery. This addresses a gap often overlooked in studies focused solely on detection by providing a direct link to forest inventory metrics. Our review showed that instance segmentation dominates (approximately 46% of studies), producing the most accurate pixel-level masks for BPE, while RGB imagery is most common (73%), often integrated with canopy-height models (CHM) to enhance accuracy. New architectural approaches, such as StarDist, outperform Mask R-CNN by 6% in dense canopies. However, performance differs with crown overlap, occlusion, species diversity, and the poor transferability of allometric equations. Future work could prioritize multisensor data fusion, develop end-to-end biomass modeling to minimize allometric dependence, develop open datasets to address model generalizability, and enhance and test models like StarDist for higher accuracy in dense forests. Full article

30 pages, 4189 KB  
Systematic Review
Automated Fingerprint Identification: The Role of Artificial Intelligence in Crime Scene Investigation
by Csongor Herke
Forensic Sci. 2026, 6(1), 6; https://doi.org/10.3390/forensicsci6010006 - 22 Jan 2026
Viewed by 3277
Abstract
Background/Objectives: This systematic review examines how artificial intelligence (AI) is transforming fingerprint and latent print identification in criminal investigations, tracing the evolution from traditional dactyloscopy to Automated Fingerprint Identification Systems (AFISs) and AI-enhanced biometric pipelines. Methods: Following PRISMA 2020 guidelines, we conducted a literature search in the Scopus, Web of Science, PubMed/MEDLINE, and legal databases for the period 2000–2025, using multi-step Boolean search strings targeting AI-based fingerprint identification; 68,195 records were identified, of which 61 peer-reviewed studies met predefined inclusion criteria and were included in the qualitative synthesis (no meta-analysis). Results: Across the included studies, AI-enhanced AFIS solutions frequently demonstrated improvements in speed and scalability and, in several controlled benchmarks, improved matching performance on low-quality or partial fingerprints, although the results varied depending on datasets, evaluation protocols, and operational contexts. They also showed a potential to reduce certain forms of examiner-related contextual bias, while remaining susceptible to dataset- and model-induced biases. Conclusions: The evidence indicates that hybrid human–AI workflows—where expert examiners retain decision making authority but use AI for candidate filtering, image enhancement, and data structuring—currently offer the most reliable model, and emerging developments such as multimodal biometric fusion, edge computing, and quantum machine learning may contribute to making AI-based fingerprint identification an increasingly important component of law enforcement practice, provided that robust regulation, continuous validation, and transparent governance are ensured. Full article

23 pages, 2725 KB  
Article
Text- and Face-Conditioned Multi-Anchor Conditional Embedding for Robust Periocular Recognition
by Po-Ling Fong, Tiong-Sik Ng and Andrew Beng Jin Teoh
Appl. Sci. 2026, 16(2), 942; https://doi.org/10.3390/app16020942 - 16 Jan 2026
Viewed by 374
Abstract
Periocular recognition is essential when full-face images cannot be used because of occlusion, privacy constraints, or sensor limitations, yet in many deployments, only periocular images are available at run time, while richer evidence, such as archival face photos and textual metadata, exists offline. This mismatch makes it hard to deploy conventional multimodal fusion. This motivates the notion of conditional biometrics, where auxiliary modalities are used only during training to learn stronger periocular representations while keeping deployment strictly periocular-only. In this paper, we propose Multi-Anchor Conditional Periocular Embedding (MACPE), which maps periocular, facial, and textual features into a shared anchor-conditioned space via a learnable anchor bank that preserves periocular micro-textures while aligning higher-level semantics. Training combines identity classification losses on periocular and face branches with a symmetric InfoNCE loss over anchors and a pulling regularizer that jointly aligns periocular, facial, and textual embeddings without collapsing into face-dominated solutions; captions generated by a vision language model provide complementary semantic supervision. At deployment, only the periocular encoder is used. Experiments across five periocular datasets show that MACPE consistently improves Rank-1 identification and reduces EER at a fixed FAR compared with periocular-only baselines and alternative conditioning methods. Ablation studies verify the contributions of anchor-conditioned embeddings, textual supervision, and the proposed loss design. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

34 pages, 15414 KB  
Article
From Visual to Multimodal: Systematic Ablation of Encoders and Fusion Strategies in Animal Identification
by Vasiliy Kudryavtsev, Kirill Borodin, German Berezin, Kirill Bubenchikov, Grach Mkrtchian and Alexander Ryzhkov
J. Imaging 2026, 12(1), 30; https://doi.org/10.3390/jimaging12010030 - 7 Jan 2026
Viewed by 850
Abstract
Automated animal identification is a practical task for reuniting lost pets with their owners, yet current systems often struggle due to limited dataset scale and reliance on unimodal visual cues. This study introduces a multimodal verification framework that enhances visual features with semantic identity priors derived from synthetic textual descriptions. We constructed a massive training corpus of 1.9 million photographs covering 695,091 unique animals to support this investigation. Through systematic ablation studies, we identified SigLIP2-Giant and E5-Small-v2 as the optimal vision and text backbones. We further evaluated fusion strategies ranging from simple concatenation to adaptive gating to determine the best method for integrating these modalities. Our proposed approach utilizes a gated fusion mechanism and achieved a Top-1 accuracy of 84.28% and an Equal Error Rate of 0.0422 on a comprehensive test protocol. These results represent an 11% improvement over leading unimodal baselines and demonstrate that integrating synthesized semantic descriptions significantly refines decision boundaries in large-scale pet re-identification. Full article
(This article belongs to the Section Biometrics, Forensics, and Security)

15 pages, 1464 KB  
Review
Convergent Sensing: Integrating Biometric and Environmental Monitoring in Next-Generation Wearables
by Maria Guarnaccia, Antonio Gianmaria Spampinato, Enrico Alessi and Sebastiano Cavallaro
Biosensors 2026, 16(1), 43; https://doi.org/10.3390/bios16010043 - 4 Jan 2026
Viewed by 1356
Abstract
The convergence of biometric and environmental sensing represents a transformative advancement in wearable technology, moving beyond single-parameter tracking towards a holistic, context-aware paradigm for health monitoring. This review comprehensively examines the landscape of multi-modal wearable devices that simultaneously capture physiological data, such as electrodermal activity (EDA), electrocardiogram (ECG), heart rate variability (HRV), and body temperature, alongside environmental exposures, including air quality, ambient temperature, and atmospheric pressure. We analyze the fundamental sensing technologies, data fusion methodologies, and the critical importance of contextualizing physiological signals within an individual’s environment to disambiguate health states. A detailed survey of existing commercial and research-grade devices highlights a growing, yet still limited, integration of these domains. As a central case study, we present an integrated prototype, which exemplifies this approach by fusing data from inertial, environmental, and physiological sensors to generate intuitive, composite indices for stress, fitness, and comfort, visualized via a polar graph. Finally, we discuss the significant challenges and future directions for this field, including clinical validation, data security, and power management, underscoring the potential of convergent sensing to revolutionize personalized, predictive healthcare. Full article
(This article belongs to the Special Issue Wearable Sensors and Systems for Continuous Health Monitoring)
