Search Results (11)

Search Parameters:
Keywords = adaptive multi-modal biometric system

19 pages, 1193 KB  
Review
Tactical-Grade Wearables and Authentication Biometrics
by Fotios Agiomavritis and Irene Karanasiou
Sensors 2026, 26(3), 759; https://doi.org/10.3390/s26030759 - 23 Jan 2026
Viewed by 224
Abstract
Modern battlefield operations require wearable technologies to operate reliably under harsh physical, environmental, and security conditions. This review surveys current and emerging field-ready wearables with embedded biometric authentication systems. It details physiological, kinematic, and multimodal sensor platforms built to withstand rugged, high-stress environments, and reviews biometric modalities such as ECG, PPG, EEG, gait, and voice for continuous or on-demand identity confirmation. Accuracy, latency, energy efficiency, and tolerance to motion artifacts, environmental extremes, and physiological variability are critical performance drivers. Security threats such as spoofing and data interception, along with techniques for template protection, liveness assurance, and protected on-device processing, are also reviewed. Emerging trends in low-power edge AI, multimodal integration, adaptive learning from field experience, and privacy-preserving analytics are assessed from the standpoint of defense readiness, together with ongoing challenges such as gear interoperability, long-term template stability, and common stress-testing protocols. In conclusion, an R&D roadmap for developing rugged, trustworthy, and operationally validated wearable authentication systems for current and future militaries is proposed.
(This article belongs to the Special Issue Biomedical Electronics and Wearable Systems—2nd Edition)

23 pages, 6094 KB  
Systematic Review
Toward Smart VR Education in Media Production: Integrating AI into Human-Centered and Interactive Learning Systems
by Zhi Su, Tse Guan Tan, Ling Chen, Hang Su and Samer Alfayad
Biomimetics 2026, 11(1), 34; https://doi.org/10.3390/biomimetics11010034 - 4 Jan 2026
Viewed by 770
Abstract
Smart virtual reality (VR) systems are becoming central to media production education, where immersive practice, real-time feedback, and hands-on simulation are essential. This review synthesizes the integration of artificial intelligence (AI) into human-centered, interactive VR learning for television and media production. Searches in Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and SpringerLink (2013–2024) identified 790 records; following PRISMA screening, 94 studies met the inclusion criteria and were synthesized using a systematic scoping review approach. Across this corpus, common AI components include learner modeling, adaptive task sequencing (e.g., RL-based orchestration), affect sensing (vision, speech, and biosignals), multimodal interaction (gesture, gaze, voice, haptics), and growing use of LLM/NLP assistants. Reported benefits span personalized learning trajectories, high-fidelity simulation of studio workflows, and more responsive feedback loops that support creative, technical, and cognitive competencies. Evaluation typically covers usability and presence, workload and affect, collaboration, and scenario-based learning outcomes, leveraging interaction logs, eye tracking, and biofeedback. Persistent challenges include latency and synchronization under multimodal sensing, data governance and privacy for biometric/affective signals, limited transparency/interpretability of AI feedback, and heterogeneous evaluation protocols that impede cross-system comparison. We highlight essential human-centered design principles—teacher-in-the-loop orchestration, timely and explainable feedback, and ethical data governance—and outline a research agenda to support standardized evaluation and scalable adoption of smart VR education in the creative industries.
(This article belongs to the Special Issue Biomimetic Innovations for Human–Machine Interaction)

17 pages, 1203 KB  
Article
A Score-Fusion Method Based on the Sine Cosine Algorithm for Enhanced Multimodal Biometric Authentication
by Eslam Hamouda, Alaa S. Alaerjan, Ayman Mohamed Mostafa and Mayada Tarek
Sensors 2026, 26(1), 208; https://doi.org/10.3390/s26010208 - 28 Dec 2025
Viewed by 500
Abstract
Score fusion combines the matching scores from multiple biometric modalities in an authentication system. Biometric modalities are unique physical or behavioral characteristics that can be used to verify or identify individuals. Score fusion can improve the performance of biometric authentication systems by exploiting the complementary strengths of different modalities and reducing the impact of noise and outliers from individual modalities. This paper proposes a new score fusion method based on the Sine Cosine Algorithm (SCA), a meta-heuristic optimization algorithm applied to various optimization problems. The proposed method extracts features from multiple biometric sources, computes intra- and inter-class scores for each modality, and normalizes the scores for a given user across modalities. The mean, maximum, minimum, median, summation, and tanh functions are then used to aggregate the scores from the different modalities, with the SCA finding the optimal parameters for fusing the normalized scores. We evaluated our method on the CASIA-V3-Interval iris dataset and the AT&T (ORL) face database. The proposed method outperforms existing optimization-based methods under identical experimental conditions and achieves an Equal Error Rate (EER) of 1.003% when fusing left iris, right iris, and face, an improvement of up to 85.89% over unimodal baselines. These findings validate the SCA's effectiveness for adaptive score fusion in multimodal biometric systems.
(This article belongs to the Section Biosensors)
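The pipeline this abstract outlines (per-modality score normalization, weighted aggregation, and SCA-tuned fusion parameters) can be sketched with the standard library alone. This is a minimal illustration, not the paper's method: the score lists are invented, min-max normalization is applied jointly over genuine and impostor scores, and a simple class-separation objective stands in for the EER criterion.

```python
import math
import random

random.seed(0)

def normalize_pair(gen, imp):
    # min-max normalize one modality's scores over genuine + impostor jointly
    lo, hi = min(gen + imp), max(gen + imp)
    scale = (hi - lo) or 1.0
    return [(s - lo) / scale for s in gen], [(s - lo) / scale for s in imp]

def fuse(weights, per_modality):
    # weighted-sum score-level fusion across modalities
    return [sum(w * m[i] for w, m in zip(weights, per_modality))
            for i in range(len(per_modality[0]))]

def separation(weights, genuine, impostor):
    # toy objective: gap between mean genuine and mean impostor fused scores
    # (the paper optimizes the Equal Error Rate instead)
    g, i = fuse(weights, genuine), fuse(weights, impostor)
    return sum(g) / len(g) - sum(i) / len(i)

def sca(fitness, dim, agents=20, iters=100, a=2.0):
    # Sine Cosine Algorithm: agents move toward the best solution along
    # sine/cosine trajectories whose amplitude r1 shrinks over time
    pop = [[random.random() for _ in range(dim)] for _ in range(agents)]
    best = list(max(pop, key=fitness))
    best_fit = fitness(best)
    for t in range(iters):
        r1 = a - t * a / iters
        for x in pop:
            for d in range(dim):
                r2 = 2 * math.pi * random.random()
                r3 = 2 * random.random()
                trig = math.sin(r2) if random.random() < 0.5 else math.cos(r2)
                x[d] = min(1.0, max(0.0, x[d] + r1 * trig * abs(r3 * best[d] - x[d])))
        cand = max(pop, key=fitness)
        if fitness(cand) > best_fit:
            best, best_fit = list(cand), fitness(cand)
    return best

# invented match scores for three modalities (e.g. left iris, right iris, face)
raw_gen = [[0.90, 0.80, 0.85], [0.70, 0.90, 0.80], [0.60, 0.65, 0.70]]
raw_imp = [[0.20, 0.30, 0.10], [0.40, 0.20, 0.30], [0.50, 0.40, 0.45]]
pairs = [normalize_pair(g, i) for g, i in zip(raw_gen, raw_imp)]
genuine = [p[0] for p in pairs]
impostor = [p[1] for p in pairs]
w = sca(lambda v: separation(v, genuine, impostor), dim=3)
```

With seeded randomness the optimizer settles on weights that separate the two toy score populations; a real evaluation would sweep a decision threshold over the fused scores to compute the EER.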

45 pages, 1164 KB  
Review
Integrating Cutting-Edge Technologies in Food Sensory and Consumer Science: Applications and Future Directions
by Dongju Lee, Hyemin Jeon, Yoonseo Kim and Youngseung Lee
Foods 2025, 14(24), 4169; https://doi.org/10.3390/foods14244169 - 5 Dec 2025
Viewed by 1606
Abstract
With the introduction of emerging digital technologies, sensory and consumer science has evolved beyond traditional laboratory-based and self-response-centered sensory evaluations toward more objective assessments that reflect real-world consumption contexts. This review examines recent trends and potential applications in sensory evaluation research, focusing on key enabling technologies: artificial intelligence (AI) and machine learning (ML), extended reality (XR), biometrics, and digital sensors. Furthermore, it explores strategies for establishing personalized, multimodal, and intelligent–adaptive sensory evaluation systems through the integration of these technologies, as well as the applicability of sensory evaluation software. Recent studies report that AI/ML models used for sensory or preference prediction commonly achieve RMSE values of approximately 0.04–24.698, with prediction accuracy ranging from 79 to 100% (R2 = 0.643–0.999). In XR environments, presence measured by the IPQ (7-point scale) is generally considered adequate when scores exceed 3. Finally, the review discusses ethical considerations arising throughout data collection, interpretation, and utilization, and proposes future directions for the advancement of sensory and consumer science research. This systematic literature review aims to identify emerging technologies rather than provide a quantitative meta-analysis, and therefore does not cover domain-specific analytical areas such as chemometrics beyond ML approaches or detailed flavor and aroma chemistry.

25 pages, 1072 KB  
Review
EEG-Based Biometric Identification and Emotion Recognition: An Overview
by Miguel A. Becerra, Carolina Duque-Mejia, Andres Castro-Ospina, Leonardo Serna-Guarín, Cristian Mejía and Eduardo Duque-Grisales
Computers 2025, 14(8), 299; https://doi.org/10.3390/computers14080299 - 23 Jul 2025
Cited by 3 | Viewed by 4265
Abstract
This overview examines recent advancements in EEG-based biometric identification, focusing on integrating emotional recognition to enhance the robustness and accuracy of biometric systems. By leveraging the unique physiological properties of EEG signals, biometric systems can identify individuals based on neural responses. The overview discusses the influence of emotional states on EEG signals and the consequent impact on biometric reliability. It also evaluates recent emotion recognition techniques, including machine learning methods such as support vector machines (SVMs), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Additionally, the role of multimodal EEG datasets in enhancing emotion recognition accuracy is explored. Findings from key studies are synthesized to highlight the potential of EEG for secure, adaptive biometric systems that account for emotional variability. This overview emphasizes the need for future research on resilient biometric identification that integrates emotional context, aiming to establish EEG as a viable component of advanced biometric technologies.
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

27 pages, 3417 KB  
Article
GaitCSF: Multi-Modal Gait Recognition Network Based on Channel Shuffle Regulation and Spatial-Frequency Joint Learning
by Siwei Wei, Xiangyuan Xu, Dewen Liu, Chunzhi Wang, Lingyu Yan and Wangyu Wu
Sensors 2025, 25(12), 3759; https://doi.org/10.3390/s25123759 - 16 Jun 2025
Cited by 1 | Viewed by 1642
Abstract
Gait recognition, as a non-contact biometric technology, offers unique advantages in scenarios requiring long-distance identification without active cooperation from subjects. However, existing gait recognition methods predominantly rely on single-modal data, which demonstrates insufficient feature expression capabilities when confronted with complex factors in real-world environments, including viewpoint variations, clothing differences, occlusion problems, and illumination changes. This paper addresses these challenges by introducing a multi-modal gait recognition network based on channel shuffle regulation and spatial-frequency joint learning, which integrates two complementary modalities (silhouette data and heatmap data) to construct a more comprehensive gait representation. The channel shuffle-based feature selective regulation module achieves cross-channel information interaction and feature enhancement through channel grouping and feature shuffling strategies. This module divides input features along the channel dimension into multiple subspaces, which undergo channel-aware and spatial-aware processing to capture dependency relationships across different dimensions. Subsequently, channel shuffling operations facilitate information exchange between different semantic groups, achieving adaptive enhancement and optimization of features with relatively low parameter overhead. The spatial-frequency joint learning module maps spatiotemporal features to the spectral domain through fast Fourier transform, effectively capturing inherent periodic patterns and long-range dependencies in gait sequences. The global receptive field advantage of frequency domain processing enables the model to transcend local spatiotemporal constraints and capture global motion patterns. Concurrently, the spatial domain processing branch balances the contributions of frequency and spatial domain information through an adaptive weighting mechanism, maintaining computational efficiency while enhancing features. Experimental results demonstrate that the proposed GaitCSF model achieves significant performance improvements on mainstream datasets including GREW, Gait3D, and SUSTech1k, breaking through the performance bottlenecks of traditional methods. The implications of this research are significant for improving the performance and robustness of gait recognition systems when implemented in practical application scenarios.
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
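The spatial-frequency idea this abstract describes can be illustrated with a naive discrete Fourier transform: periodic gait patterns concentrate spectral energy at the stride frequency, which a frequency-domain branch can read off globally. The signal, the DFT implementation, and the fixed `alpha` weight below are illustrative stand-ins for the paper's learned FFT-based module.

```python
import cmath
import math

def dft_magnitudes(seq):
    # naive DFT magnitude spectrum; for real input, the first n//2 bins
    # carry the non-redundant frequency content
    n = len(seq)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(seq))) / n
            for k in range(n // 2)]

def joint_features(seq, alpha=0.5):
    # fixed-weight stand-in for the paper's adaptive spatial/frequency weighting:
    # blend raw (spatial) values with spectral magnitudes
    freq = dft_magnitudes(seq)
    return [alpha * s + (1 - alpha) * f for s, f in zip(seq, freq)]

# toy "gait" sequence: one stride every 8 frames, observed for 32 frames
seq = [math.sin(2 * math.pi * t / 8) for t in range(32)]
spec = dft_magnitudes(seq)
peak = max(range(len(spec)), key=lambda k: spec[k])  # dominant bin = 32/8 = 4
```

The dominant spectral bin recovers the stride period regardless of where in the cycle the sequence starts, which is the "global receptive field" advantage the abstract refers to.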

14 pages, 2438 KB  
Article
Contactless Fatigue Level Diagnosis System Through Multimodal Sensor Data
by Younggun Lee, Yongkyun Lee, Sungho Kim, Sitae Kim and Seunghoon Yoo
Bioengineering 2025, 12(2), 116; https://doi.org/10.3390/bioengineering12020116 - 26 Jan 2025
Cited by 1 | Viewed by 2023
Abstract
Fatigue management is critical for high-risk professions such as pilots, firefighters, and healthcare workers, where physical and mental exhaustion can lead to catastrophic accidents and loss of life. Traditional fatigue assessment methods, including surveys and physiological measurements, are limited in real-time monitoring and user convenience. To address these issues, this study introduces a novel contactless fatigue level diagnosis system leveraging multimodal sensor data, including video, thermal imaging, and audio. The system integrates non-contact biometric data collection with an AI-driven classification model capable of diagnosing fatigue levels on a 1 to 5 scale with an average accuracy of 89%. Key features include real-time feedback, adaptive retraining for personalized accuracy improvement, and compatibility with high-stress environments. Experimental results demonstrate that retraining with user feedback enhances classification accuracy by 11 percentage points. The system’s hardware is validated for robustness under diverse operational conditions, including temperature and electromagnetic compliance. This innovation provides a practical solution for improving operational safety and performance in critical sectors by enabling precise, non-invasive, and efficient fatigue monitoring.
(This article belongs to the Special Issue Computer-Aided Diagnosis for Biomedical Engineering)

29 pages, 2031 KB  
Article
Monitoring and Analyzing Driver Physiological States Based on Automotive Electronic Identification and Multimodal Biometric Recognition Methods
by Shengpei Zhou, Nanfeng Zhang, Qin Duan, Xiaosong Liu, Jinchao Xiao, Li Wang and Jingfeng Yang
Algorithms 2024, 17(12), 547; https://doi.org/10.3390/a17120547 - 2 Dec 2024
Cited by 4 | Viewed by 2209
Abstract
In an intelligent driving environment, monitoring the physiological state of drivers is crucial for ensuring driving safety. This paper proposes a method for monitoring and analyzing driver physiological characteristics by combining electronic vehicle identification (EVI) with multimodal biometric recognition. The method aims to efficiently monitor the driver’s heart rate, breathing frequency, emotional state, and fatigue level, providing real-time feedback to intelligent driving systems to enhance driving safety. First, considering the precision, adaptability, and real-time capabilities of current physiological signal monitoring devices, an intelligent cushion integrating MEMS (Micro-Electro-Mechanical Systems) and optical sensors is designed. This cushion collects heart rate and breathing frequency data in real time without disrupting the driver, while an electrodermal activity monitoring system captures electromyography data. The sensor layout is optimized to accommodate various driving postures, ensuring accurate data collection. The EVI system assigns a unique identifier to each vehicle, linking it to the physiological data of different drivers. By combining the driver physiological data with the vehicle’s operational environment data, a comprehensive multi-source data fusion system is established for a driving state evaluation. Second, a deep learning model is employed to analyze physiological signals, specifically combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The CNN extracts spatial features from the input signals, while the LSTM processes time-series data to capture the temporal characteristics. This combined model effectively identifies and analyzes the driver’s physiological state, enabling timely anomaly detection. The method was validated through real-vehicle tests involving multiple drivers, where extensive physiological and driving behavior data were collected. Experimental results show that the proposed method significantly enhances the accuracy and real-time performance of physiological state monitoring. These findings highlight the effectiveness of combining EVI with multimodal biometric recognition, offering a reliable means for assessing driver states in intelligent driving systems. Furthermore, the results emphasize the importance of personalizing adjustments based on individual driver differences for more effective monitoring.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
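The CNN-plus-LSTM split this abstract describes (convolution for local spatial features, recurrence for temporal context) can be sketched at scalar scale with the standard library. The smoothing kernel, the random gate weights, and the synthetic signal are all illustrative; the paper's trained networks operate on real multichannel sensor data.

```python
import math
import random

random.seed(1)

def conv1d(signal, kernel):
    # CNN stage: a 1-D convolution extracts local features from the raw trace
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    # LSTM stage: input/forget/output gates over the convolved sequence
    # accumulate temporal context in the cell state c
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])
    c = f * c + i * g
    return o * math.tanh(c), c

# untrained random gate parameters (input weight, recurrent weight, bias)
W = {k: [random.uniform(-1, 1) for _ in range(3)] for k in 'ifog'}

signal = [math.sin(t / 3) for t in range(50)]   # stand-in for a heart-rate trace
features = conv1d(signal, [0.25, 0.5, 0.25])    # smoothing kernel as toy CNN filter
h = c = 0.0
for x in features:
    h, c = lstm_step(x, h, c, W)
# h is the final temporal summary that a classifier head would consume
```

In a real system the convolution and gate weights are learned jointly, and the final hidden state feeds a softmax over driver-state classes.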

22 pages, 13474 KB  
Article
Multimodal Human–Robot Interaction Using Gestures and Speech: A Case Study for Printed Circuit Board Manufacturing
by Ángel-Gabriel Salinas-Martínez, Joaquín Cunillé-Rodríguez, Elías Aquino-López and Angel-Iván García-Moreno
J. Manuf. Mater. Process. 2024, 8(6), 274; https://doi.org/10.3390/jmmp8060274 - 30 Nov 2024
Cited by 5 | Viewed by 6384
Abstract
In recent years, technologies for human–robot interaction (HRI) have undergone substantial advancements, facilitating more intuitive, secure, and efficient collaborations between humans and machines. This paper presents a decentralized HRI platform, specifically designed for printed circuit board manufacturing. The proposal incorporates multiple input devices, including gesture recognition via Leap Motion and Tap Strap, and speech recognition. The gesture recognition system achieved an average accuracy of 95.42% and 97.58% for each device, respectively. The speech control system, called Cellya, exhibited a markedly reduced Word Error Rate of 22.22% and a Character Error Rate of 11.90%. Furthermore, a scalable user management framework, the decentralized multimodal control server, employs biometric security to facilitate the efficient handling of multiple users, regulating permissions and control privileges. The platform’s flexibility and real-time responsiveness are achieved through advanced sensor integration and signal processing techniques, which facilitate intelligent decision-making and enable accurate manipulation of manufacturing cells. The results demonstrate the system’s potential to improve operational efficiency and adaptability in smart manufacturing environments.
(This article belongs to the Special Issue Smart Manufacturing in the Era of Industry 4.0)

18 pages, 3964 KB  
Article
Towards a Secure Signature Scheme Based on Multimodal Biometric Technology: Application for IOT Blockchain Network
by Oday A. Hassen, Ansam A. Abdulhussein, Saad M. Darwish, Zulaiha Ali Othman, Sabrina Tiun and Yasmin A. Lotfy
Symmetry 2020, 12(10), 1699; https://doi.org/10.3390/sym12101699 - 15 Oct 2020
Cited by 30 | Viewed by 4721
Abstract
Blockchain technology has been widely used in recent years in numerous fields, such as documenting transactions and monitoring real assets (houses, cash) or intangible assets (copyright, intellectual property). The internet of things (IoT) technology, on the other hand, has become the main driver of the fourth industrial revolution and is currently utilized in diverse fields of industry. New approaches have improved authentication in the blockchain to address the scalability and protection constraints of distributed blockchain technology in IoT operating environments through control of a private key. However, these authentication mechanisms do not consider security when connecting IoT devices to the network: IoT communication involves numerous entities interacting continuously in various locations, which increases security risks and can result in extreme asset damage. This has made it difficult to balance security and scalability. To address this gap, the work suggested in this paper adapts multimodal biometrics to strengthen network security by extracting a private key with high entropy. Additionally, via a whitelist, the suggested scheme evaluates a security score for the IoT system with a blockchain smart contract to guarantee that highly secured applications authenticate easily and that compromised devices are restricted. Experimental results indicate that our system is existentially unforgeable under an efficient message attack and therefore decreases the spread of infected devices in the network by up to 49 percent relative to traditional schemes.

10 pages, 388 KB  
Article
Design and Implementation of a Multi-Modal Biometric System for Company Access Control
by Elisabetta Stefani and Carlo Ferrari
Algorithms 2017, 10(2), 61; https://doi.org/10.3390/a10020061 - 27 May 2017
Cited by 8 | Viewed by 6475
Abstract
This paper describes the design, implementation, and deployment of a multi-modal biometric system that grants access to a company structure and to internal zones within the company itself. Face and iris were chosen as the biometric traits: the face is feasible for non-intrusive checking with minimal cooperation from the subject, while the iris supports a very accurate recognition procedure at the cost of greater invasiveness. Recognition of the face trait is based on Local Binary Pattern histograms, and Daugman’s method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user’s face only or the serial acquisition of both the user’s face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated formally in the Service Level Agreement at a negotiation phase. The quality of the decision depends on setting appropriate thresholds in the decision modules for the two biometric traits. Whenever the quality of the decision is not good enough, the system activates rules that ask for new acquisitions (and decisions), possibly with different threshold values, so the system’s behaviour is not fixed and predefined but adapts to the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent “response behaviors” according to the previous analysis. There are therefore different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with results from testing based on the AT&T Face Database and the UBIRIS database.
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)
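The serial acquisition logic this abstract describes (face first, iris only when the face decision is not confident) reduces to threshold rules. The thresholds and the `acquire_iris` callback below are hypothetical placeholders for the SLA-derived security levels in the paper.

```python
def decide(face_score, acquire_iris, t_accept=0.85, t_reject=0.40, t_iris=0.80):
    """Serial face-then-iris decision; all thresholds are illustrative."""
    # confident face match: accept without further acquisition
    if face_score >= t_accept:
        return "accept"
    # confident non-match: reject outright
    if face_score < t_reject:
        return "reject"
    # uncertain band: escalate to the more invasive, more accurate iris check
    return "accept" if acquire_iris() >= t_iris else "reject"
```

A retry rule with relaxed thresholds, as the paper's deduction rules allow, would wrap this function in a loop over acquisition attempts.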
