Search Results (81)

Search Parameters:
Keywords = face spoofing

26 pages, 1033 KiB  
Article
Internet of Things Platform for Assessment and Research on Cybersecurity of Smart Rural Environments
by Daniel Sernández-Iglesias, Llanos Tobarra, Rafael Pastor-Vargas, Antonio Robles-Gómez, Pedro Vidal-Balboa and João Sarraipa
Future Internet 2025, 17(8), 351; https://doi.org/10.3390/fi17080351 - 1 Aug 2025
Abstract
Rural regions face significant barriers to adopting IoT technologies, due to limited connectivity, energy constraints, and poor technical infrastructure. While urban environments benefit from advanced digital systems and cloud services, rural areas often lack the necessary conditions to deploy and evaluate secure and autonomous IoT solutions. To help overcome this gap, this paper presents the Smart Rural IoT Lab, a modular and reproducible testbed designed to replicate the deployment conditions in rural areas using open-source tools and affordable hardware. The laboratory integrates long-range and short-range communication technologies in six experimental scenarios, implementing protocols such as MQTT, HTTP, UDP, and CoAP. These scenarios simulate realistic rural use cases, including environmental monitoring, livestock tracking, infrastructure access control, and heritage site protection. Local data processing is achieved through containerized services like Node-RED, InfluxDB, MongoDB, and Grafana, ensuring complete autonomy, without dependence on cloud services. A key contribution of the laboratory is the generation of structured datasets from real network traffic captured with Tcpdump and preprocessed using Zeek. Unlike simulated datasets, the collected data reflect communication patterns generated from real devices. Although the current dataset only includes benign traffic, the platform is prepared for future incorporation of adversarial scenarios (spoofing, DoS) to support AI-based cybersecurity research. While experiments were conducted in an indoor controlled environment, the testbed architecture is portable and suitable for future outdoor deployment. The Smart Rural IoT Lab addresses a critical gap in current research infrastructure, providing a realistic and flexible foundation for developing secure, cloud-independent IoT solutions, contributing to the digital transformation of rural regions. Full article
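
The testbed's scenarios exchange telemetry over MQTT, HTTP, UDP, and CoAP before local services such as Node-RED and InfluxDB store and visualize it. As a minimal illustration of the UDP case only (the collector address, port, and payload fields below are assumptions, not the lab's actual configuration), a sensor node could emit JSON readings like this:

```python
import json
import socket
import time

# Hypothetical collector address; the lab's real endpoints are not given in the abstract.
COLLECTOR_HOST = "192.168.1.10"
COLLECTOR_PORT = 5005

def send_reading(sensor_id: str, temperature_c: float, humidity_pct: float) -> None:
    """Send one JSON-encoded environmental reading over UDP (fire-and-forget)."""
    payload = json.dumps({
        "sensor": sensor_id,
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
        "ts": time.time(),
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (COLLECTOR_HOST, COLLECTOR_PORT))

if __name__ == "__main__":
    send_reading("field-node-01", 21.4, 63.0)
```

The MQTT and CoAP scenarios follow the same publish pattern, only with a broker or resource URI in place of the raw socket.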

33 pages, 11684 KiB  
Article
Face Spoofing Detection with Stacking Ensembles in Work Time Registration System
by Rafał Klinowski and Mirosław Kordos
Appl. Sci. 2025, 15(15), 8402; https://doi.org/10.3390/app15158402 - 29 Jul 2025
Viewed by 78
Abstract
This paper introduces a passive face-authenticity detection system, designed for integration into an employee work time registration platform. The system is implemented as a stacking ensemble of multiple models. Each model independently assesses whether a camera is capturing a live human face or a spoofed representation, such as a photo or video. The ensemble comprises a convolutional neural network (CNN), a smartphone bezel-detection algorithm to identify faces displayed on electronic devices, a face context analysis module, and additional CNNs for image processing. The outputs of these models are aggregated by a neural network that delivers the final classification decision. We examined various combinations of models within the ensemble and compared the performance of our approach against existing methods through experimental evaluation. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
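
The abstract describes the ensemble only at a high level; as a hedged sketch of the stacking idea, a small meta-network can fuse the per-model liveness scores (the number of base models and layer sizes below are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class StackingAggregator(nn.Module):
    """Meta-classifier that fuses the scores of several base liveness detectors."""

    def __init__(self, num_base_models: int = 4, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_base_models, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: live vs. spoofed
        )

    def forward(self, base_scores: torch.Tensor) -> torch.Tensor:
        # base_scores: (batch, num_base_models), each column one model's score in [0, 1]
        return torch.sigmoid(self.net(base_scores))

# Example: scores from a CNN, a bezel detector, a face-context module, and a second CNN.
scores = torch.tensor([[0.92, 0.10, 0.85, 0.77]])
print(StackingAggregator()(scores))
```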

10 pages, 449 KiB  
Systematic Review
Advancing Secure Face Recognition Payment Systems: A Systematic Literature Review
by M. Haswin Anugrah Pratama, Achmad Rizal and Indrarini Dyah Irawati
Information 2025, 16(7), 581; https://doi.org/10.3390/info16070581 - 7 Jul 2025
Viewed by 413
Abstract
In the digital era, face recognition technology has emerged as a promising solution for enhancing payment system security and convenience. This systematic literature review examines face recognition advancements in payment security following the PRISMA framework. From 219 initially identified articles, we selected 10 studies meeting our technical criteria. The findings reveal significant progress in deep learning approaches, multimodal feature integration, and transformer-based architectures. Current trends emphasize multimodal systems combining RGB with IR and depth data for sophisticated attack detection. Critical challenges remain in cross-dataset generalization, evaluation standardization, computational efficiency, and combating advanced threats including deepfakes. This review identifies technical limitations and provides direction for developing robust facial recognition technologies for widespread payment adoption. Full article
(This article belongs to the Special Issue Computer Vision for Security Applications, 2nd Edition)

21 pages, 1761 KiB  
Article
Protecting IoT Networks Through AI-Based Solutions and Fractional Tchebichef Moments
by Islam S. Fathi, Hanin Ardah, Gaber Hassan and Mohammed Aly
Fractal Fract. 2025, 9(7), 427; https://doi.org/10.3390/fractalfract9070427 - 29 Jun 2025
Viewed by 385
Abstract
Advancements in Internet of Things (IoT) technologies have had a profound impact on interconnected devices, leading to exponentially growing networks of billions of intelligent devices. However, this growth has exposed Internet of Things (IoT) systems to cybersecurity vulnerabilities. These vulnerabilities are primarily caused by the inherent limitations of these devices, such as finite battery resources and the requirement for ubiquitous connectivity. The rapid evolution of deep learning (DL) technologies has led to their widespread use in critical application domains, thereby highlighting the need to integrate DL methodologies to improve IoT security systems beyond the basic secure communication protocols. This is essential for creating intelligent security frameworks that can effectively address the increasingly complex cybersecurity threats faced by IoT networks. This study proposes a hybrid methodology that combines fractional discrete Tchebichef moment analysis with deep learning for the prevention of IoT attacks. The effectiveness of our proposed technique for detecting IoT threats was evaluated using the UNSW-NB15 and Bot-IoT datasets, featuring illustrative cases of common IoT attack scenarios, such as DDoS, identity spoofing, network reconnaissance, and unauthorized data access. The empirical results validate the superior classification capabilities of the proposed methodology in IoT cybersecurity threat assessments compared with existing solutions. This study leveraged the synergistic integration of discrete Tchebichef moments and deep convolutional networks to facilitate comprehensive attack detection and prevention in IoT ecosystems. Full article
(This article belongs to the Section Optimization, Big Data, and AI/ML)

24 pages, 589 KiB  
Article
FaceCloseup: Enhancing Mobile Facial Authentication with Perspective Distortion-Based Liveness Detection
by Yingjiu Li, Yan Li and Zilong Wang
Computers 2025, 14(7), 254; https://doi.org/10.3390/computers14070254 - 27 Jun 2025
Viewed by 612
Abstract
Facial authentication has gained widespread adoption as a biometric authentication method, offering a convenient alternative to traditional password-based systems, particularly on mobile devices equipped with front-facing cameras. While this technology enhances usability and security by eliminating password management, it remains highly susceptible to spoofing attacks. Adversaries can exploit facial recognition systems using pre-recorded photos, videos, or even sophisticated 3D models of victims’ faces to bypass authentication mechanisms. The increasing availability of personal images on social media further amplifies this risk, making robust anti-spoofing mechanisms essential for secure facial authentication. To address these challenges, we introduce FaceCloseup, a novel liveness detection technique that strengthens facial authentication by leveraging perspective distortion inherent in close-up shots of real, 3D faces. Instead of relying on additional sensors or user-interactive gestures, FaceCloseup passively analyzes facial distortions in video frames captured by a mobile device’s camera, improving security without compromising user experience. FaceCloseup effectively distinguishes live faces from spoofed attacks by identifying perspective-based distortions across different facial regions. The system achieves a 99.48% accuracy in detecting common spoofing methods—including photo, video, and 3D model-based attacks—and demonstrates 98.44% accuracy in differentiating between individual users. By operating entirely on-device, FaceCloseup eliminates the need for cloud-based processing, reducing privacy concerns and potential latency in authentication. Its reliance on natural device movement ensures a seamless authentication experience while maintaining robust security. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
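
The abstract does not specify FaceCloseup's exact features; purely as an illustration of the underlying idea, perspective distortion can be quantified by comparing inter-landmark distance ratios between a close-up frame and a reference frame. The landmark indices below follow the common 68-point convention and are hypothetical, not the paper's feature set:

```python
import numpy as np

def region_ratio(landmarks: np.ndarray, a: int, b: int, c: int, d: int) -> float:
    """Ratio of two inter-landmark distances (e.g., nose extent vs. jaw width)."""
    return (np.linalg.norm(landmarks[a] - landmarks[b]) /
            np.linalg.norm(landmarks[c] - landmarks[d]))

def distortion_score(closeup_lms: np.ndarray, reference_lms: np.ndarray,
                     pairs=((30, 31, 0, 16),)) -> float:
    """On a real 3D face, close-up frames exaggerate near regions (nose) relative to
    far ones (jaw line); a flat photo or screen keeps these ratios nearly constant."""
    diffs = [abs(region_ratio(closeup_lms, *p) - region_ratio(reference_lms, *p))
             for p in pairs]
    return float(np.mean(diffs))
```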

42 pages, 3140 KiB  
Review
Face Anti-Spoofing Based on Deep Learning: A Comprehensive Survey
by Huifen Xing, Siok Yee Tan, Faizan Qamar and Yuqing Jiao
Appl. Sci. 2025, 15(12), 6891; https://doi.org/10.3390/app15126891 - 18 Jun 2025
Viewed by 1868
Abstract
Face recognition has achieved tremendous success in both its theory and technology. However, with increasingly realistic attacks, such as print photos, replay videos, and 3D masks, as well as new attack methods like AI-generated faces or videos, face recognition systems are confronted with significant challenges and risks. Distinguishing between real and fake faces, i.e., face anti-spoofing (FAS), is crucial to the security of face recognition systems. With the advent of large-scale academic datasets in recent years, FAS based on deep learning has achieved a remarkable level of performance and now dominates the field. This paper systematically reviews the latest advancements in FAS based on deep learning. First, it provides an overview of the background, basic concepts, and types of FAS attacks. Then, it categorizes existing FAS methods from the perspectives of RGB (red, green and blue) modality and other modalities, discussing the main concepts, the types of attacks that can be detected, their advantages and disadvantages, and so on. Next, it introduces popular datasets used in FAS research and highlights their characteristics. Finally, it summarizes the current research challenges and future directions for FAS, such as its limited generalization for unknown attacks, the insufficient multi-modal research, the spatiotemporal efficiency of algorithms, and unified detection for presentation attacks and deepfakes. We aim to provide a comprehensive reference in this field and to inspire progress within the FAS community, guiding researchers toward promising directions for future work. Full article
(This article belongs to the Special Issue Deep Learning in Object Detection)

16 pages, 3459 KiB  
Article
Anti-Spoofing Method by RGB-D Deep Learning for Robust to Various Domain Shifts
by Hee-jin Kim and Soon-kak Kwon
Electronics 2025, 14(11), 2182; https://doi.org/10.3390/electronics14112182 - 28 May 2025
Viewed by 501
Abstract
We propose a deep learning-based face anti-spoofing method that utilizes both RGB and depth images for face recognition. The proposed method can detect spoofing attacks across various domain types using domain adversarial learning for preventing overfitting to a specific domain. A pre-trained face detection model and a face segmentation model are employed to detect a facial region from RGB images. The pixels outside the facial region in the corresponding depth image are replaced with the depth values of the nearest pixels in the facial region to minimize background influence. Subsequently, a network comprising convolutional layers and a self-attention layer extracts features from RGB and depth images separately, then fuses them to detect spoofing attacks. The proposed network is trained using domain adversarial learning to ensure robustness against various types of face spoofing attacks. The experiment results show that the proposed network reduces the average Attack Presentation Classification Error Rate (APCER) to 11.12% and 8.00% compared to ResNet and MobileNet, respectively. Full article
(This article belongs to the Special Issue Deep Learning-Based Object Detection/Classification)
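
The abstract says the network is trained with domain adversarial learning; a standard way to implement this (an assumption here, not necessarily the authors' exact setup) is a DANN-style gradient reversal layer feeding a domain classifier:

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb: float = 1.0):
    return GradReverse.apply(x, lamb)

# Domain-classifier branch: fused RGB-D features pass through the reversal layer,
# so minimizing the domain loss pushes the feature extractor toward domain invariance.
domain_head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 3))
features = torch.randn(8, 256, requires_grad=True)
domain_logits = domain_head(grad_reverse(features, lamb=0.5))
```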

16 pages, 7057 KiB  
Article
VRBiom: A New Periocular Dataset for Biometric Applications of Head-Mounted Display
by Ketan Kotwal, Ibrahim Ulucan, Gökhan Özbulak, Janani Selliah and Sébastien Marcel
Electronics 2025, 14(9), 1835; https://doi.org/10.3390/electronics14091835 - 30 Apr 2025
Viewed by 734
Abstract
With advancements in hardware, high-quality head-mounted display (HMD) devices are being developed by numerous companies, driving increased consumer interest in AR, VR, and MR applications. This proliferation of HMD devices opens up possibilities for a wide range of applications beyond entertainment. Most commercially available HMD devices are equipped with internal inward-facing cameras to record the periocular areas. Given the nature of these devices and captured data, many applications such as biometric authentication and gaze analysis become feasible. To effectively explore the potential of HMDs for these diverse use-cases and to enhance the corresponding techniques, it is essential to have an HMD dataset that captures realistic scenarios. In this work, we present a new dataset of periocular videos acquired using a virtual reality headset called VRBiom. The VRBiom, targeted at biometric applications, consists of 900 short videos acquired from 25 individuals recorded in the NIR spectrum. These 10 s long videos have been captured using the internal tracking cameras of Meta Quest Pro at 72 FPS. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. We have also ensured an equal split of recordings without and with glasses to facilitate the analysis of eye-wear. These videos, characterized by non-frontal views of the eye and relatively low spatial resolutions (400×400), can be instrumental in advancing state-of-the-art research across various biometric applications. The VRBiom dataset can be utilized to evaluate, train, or adapt models for biometric use-cases such as iris and/or periocular recognition and associated sub-tasks such as detection and semantic segmentation. In addition to data from real individuals, we have included around 1100 presentation attacks constructed from 92 PA instruments. These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins. These PA videos, combined with genuine (bona fide) data, can be utilized to address concerns related to spoofing, which is a significant threat if these devices are to be used for authentication. The VRBiom dataset is publicly available for research purposes related to biometric applications only. Full article

29 pages, 6364 KiB  
Article
Face Anti-Spoofing Based on Adaptive Channel Enhancement and Intra-Class Constraint
by Ye Li, Wenzhe Sun, Zuhe Li and Xiang Guo
J. Imaging 2025, 11(4), 116; https://doi.org/10.3390/jimaging11040116 - 10 Apr 2025
Viewed by 699
Abstract
Face anti-spoofing detection is crucial for identity verification and security monitoring. However, existing single-modal models struggle with feature extraction under complex lighting conditions and background variations. Moreover, the feature distributions of live and spoofed samples often overlap, resulting in suboptimal classification performance. To address these issues, we propose a jointly optimized framework integrating the Enhanced Channel Attention (ECA) mechanism and the Intra-Class Differentiator (ICD). The ECA module extracts features through deep convolution, while the Bottleneck Reconstruction Module (BRM) employs a channel compression–expansion mechanism to refine spatial feature selection. Furthermore, the channel attention mechanism enhances key channel representation. Meanwhile, the ICD mechanism enforces intra-class compactness and inter-class separability, optimizing feature distribution both within and across classes, thereby improving feature learning and generalization performance. Experimental results show that our framework achieves average classification error rates (ACERs) of 2.45%, 1.16%, 1.74%, and 2.17% on the CASIA-SURF, CASIA-SURF CeFA, CASIA-FASD, and OULU-NPU datasets, outperforming existing methods. Full article
(This article belongs to the Section Biometrics, Forensics, and Security)
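
The published definitions of the ECA and BRM modules are not given in the abstract; as a rough sketch of the general pattern described there (channel compression and expansion followed by per-channel gating), a squeeze-and-excitation-style block looks like this:

```python
import torch
from torch import nn

class ChannelAttention(nn.Module):
    """Global pooling -> bottleneck (compress/expand) -> per-channel gates."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.bottleneck = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # channel compression
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # channel expansion
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.bottleneck(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates  # re-weight channels, emphasizing the discriminative ones

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```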

19 pages, 3001 KiB  
Article
Modular Neural Network Model for Biometric Authentication of Personnel in Critical Infrastructure Facilities Based on Facial Images
by Oleksandr Korchenko, Ihor Tereikovskyi, Ruslana Ziubina, Liudmyla Tereikovska, Oleksandr Korystin, Oleh Tereikovskyi and Volodymyr Karpinskyi
Appl. Sci. 2025, 15(5), 2553; https://doi.org/10.3390/app15052553 - 27 Feb 2025
Cited by 1 | Viewed by 694
Abstract
The widespread implementation of neural network tools for biometric authentication based on facial and iris images at critical infrastructure facilities has significantly increased the level of security. However, modern requirements dictate the need to modernize these tools to increase resistance to spoofing attacks, as well as to provide a base for assessing the compliance of the psycho-emotional state of personnel with job responsibilities, which is difficult to ensure using traditional monolithic neural network models. Therefore, this article is devoted to the development of a modular neural network model that provides effective biometric authentication for critical infrastructure personnel based on facial images, taking into account the listed requirements. When developing the model, an approach was used in which the functionality of each module was defined in such a way as to correspond to a task traditionally solved by a separate neural network model. This made it possible to use in each individual module a tested and accessible toolkit that has proven its effectiveness in solving the corresponding problem, which, in turn, compared to traditional approaches, allows for a 30–40% increase in the efficiency of the development and adaptation of authentication tools for the conditions of their application. Innovative features of the developed modular model include the ability to recognize spoofing attacks based on environmental artifacts and the naturalness of emotions, as well as an increase in the accuracy of person recognition due to the use of a U-Net neural network to highlight natural facial contours in occlusions. The experimental results show that the proposed model allows for a 5–10% decrease in person recognition error, recognition of spoofing attacks based on the naturalness of emotions and images of background objects, and recognition of the emotional state of personnel, which increases the efficiency of biometric authentication tools. Full article
(This article belongs to the Special Issue Applications of Signal Analysis in Biometrics)

16 pages, 15374 KiB  
Article
U-Net-Based Fingerprint Enhancement for 3D Fingerprint Recognition
by Mohammad Mogharen Askarin, Min Wang, Xuefei Yin, Xiuping Jia and Jiankun Hu
Sensors 2025, 25(5), 1384; https://doi.org/10.3390/s25051384 - 24 Feb 2025
Viewed by 924
Abstract
Biometrics-based authentication mechanisms can address the built-in weakness of conventional password or token-based authentication in identifying genuine users. However, 2D-based fingerprint biometrics authentication faces the problem of sensor spoofing attacks. In addition, most 2D fingerprint sensors are contact-based, which can boost the spread of deadly diseases such as the COVID-19 virus. Three-dimensional fingerprint-based recognition is the emerging technology that can effectively address the above issues. A 3D fingerprint is captured contactlessly and can be represented by a 3D point cloud, which is strong against sensor spoofing attacks. To apply conventional 2D fingerprint recognition methods to 3D fingerprints, the 3D point cloud needs to be converted into a 2D gray-scale image. However, the contrast of the generated image is often not of good quality for direct matching. In this work, we propose an image segmentation approach using the deep learning U-Net to enhance the fingerprint contrast. The enhanced fingerprint images are then used for conventional fingerprint recognition. By applying the proposed method, the fingerprint recognition Equal Error Rate (EER) in experiments A and B improved from 41.32% and 41.97% to 13.96% and 12.49%, respectively, on the public dataset. Full article
(This article belongs to the Special Issue Advances and Challenges in Sensor Security Systems)
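
Independent of the paper's specific pipeline, the reported Equal Error Rate is the operating point where the false accept and false reject rates meet; a simple threshold-sweep implementation over genuine and impostor match scores:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Sweep thresholds over all observed scores and return the point where the false
    accept rate (impostors accepted) meets the false reject rate (genuine rejected)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2)

# Toy example with synthetic similarity scores (higher = more similar).
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(0.7, 0.1, 500), rng.normal(0.4, 0.1, 500)))
```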

12 pages, 340 KiB  
Article
Quantitative Study of Swin Transformer and Loss Function Combinations for Face Anti-Spoofing
by Liang Yu Gong and Xue Jun Li
Electronics 2025, 14(3), 448; https://doi.org/10.3390/electronics14030448 - 23 Jan 2025
Cited by 1 | Viewed by 1293
Abstract
Face anti-spoofing (FAS) has always been a hidden danger in network security, especially with the widespread application of facial recognition systems. However, some current FAS methods are not effective at detecting different forgery types and are prone to overfitting, which means they cannot effectively process unseen spoof types. Different loss functions significantly impact the classification effect based on the same feature extraction without considering the quality of the feature extraction. Therefore, it is necessary to find a loss function or a combination of different loss functions for spoofing detection tasks. This paper mainly aims to compare the effects of different loss functions or loss function combinations. We selected the Swin Transformer as the backbone of our training model to extract facial features to ensure the accuracy of the ablation experiment. For the application of loss functions, we adopted four classical loss functions: cross-entropy loss (CE loss), semi-hard triplet loss, L1 loss and focal loss. Finally, this work proposed combinations of Swin Transformers and different loss functions (pairs) to test through in-dataset experiments with some common FAS datasets (CelebA-Spoofing, CASIA-MFSD, Replay attack and OULU-NPU). We conclude that using a single loss function cannot produce the best results for the FAS task, and the best accuracy is obtained when applying triplet loss, cross-entropy loss and Smooth L1 loss as a loss combination. Full article
(This article belongs to the Special Issue AI Synergy: Vision, Language, and Modality)
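
The paper reports that triplet, cross-entropy, and Smooth L1 losses work best in combination; a hedged sketch of such a combined objective follows (the weights, the margin, and the choice to regress the spoof probability are illustrative assumptions, not the published configuration):

```python
import torch
from torch import nn

class CombinedFASLoss(nn.Module):
    """Weighted sum of cross-entropy, triplet, and Smooth L1 losses."""

    def __init__(self, w_ce=1.0, w_tri=1.0, w_l1=1.0, margin=0.3):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.smooth_l1 = nn.SmoothL1Loss()
        self.w = (w_ce, w_tri, w_l1)

    def forward(self, logits, labels, anchor, positive, negative):
        # Smooth L1 term regresses the predicted spoof probability toward the 0/1 label.
        probs = torch.softmax(logits, dim=1)[:, 1]
        return (self.w[0] * self.ce(logits, labels)
                + self.w[1] * self.triplet(anchor, positive, negative)
                + self.w[2] * self.smooth_l1(probs, labels.float()))
```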

30 pages, 7517 KiB  
Article
MixCFormer: A CNN–Transformer Hybrid with Mixup Augmentation for Enhanced Finger Vein Attack Detection
by Zhaodi Wang, Shuqiang Yang, Huafeng Qin, Yike Liu and Junqiang Wang
Electronics 2025, 14(2), 362; https://doi.org/10.3390/electronics14020362 - 17 Jan 2025
Cited by 2 | Viewed by 1236
Abstract
Finger vein recognition has gained significant attention for its importance in enhancing security, safeguarding privacy, and ensuring reliable liveness detection. As a foundation of vein recognition systems, vein detection faces challenges, including low feature extraction efficiency, limited robustness, and a heavy reliance on real-world data. Additionally, environmental variability and advancements in spoofing technologies further exacerbate data privacy and security concerns. To address these challenges, this paper proposes MixCFormer, a hybrid CNN–transformer architecture that incorporates Mixup data augmentation to improve the accuracy of finger vein liveness detection and reduce dependency on large-scale real datasets. First, the MixCFormer model applies baseline drift elimination, morphological filtering, and Butterworth filtering techniques to minimize the impact of background noise and illumination variations, thereby enhancing the clarity and recognizability of vein features. Next, finger vein video data are transformed into feature sequences, optimizing feature extraction and matching efficiency, effectively capturing dynamic time-series information and improving discrimination between live and forged samples. Furthermore, Mixup data augmentation is used to expand sample diversity and decrease dependency on extensive real datasets, thereby enhancing the model’s ability to recognize forged samples across diverse attack scenarios. Finally, the CNN and transformer architecture leverages both local and global feature extraction capabilities to capture vein feature correlations and dependencies. Residual connections improve feature propagation, enhancing the stability of feature representations in liveness detection. Rigorous experimental evaluations demonstrate that MixCFormer achieves a detection accuracy of 99.51% on finger vein datasets, significantly outperforming existing methods. Full article
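
The Mixup augmentation the paper relies on follows the standard formulation of Zhang et al.; where exactly MixCFormer applies it is not stated, so the batch-level sketch below shows only the generic mechanism (alpha is an assumed value):

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Convexly combine each sample with a randomly permuted partner from the batch."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    return mixed_x, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Loss is the same convex combination applied to the two label sets.
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```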

27 pages, 4439 KiB  
Article
Personal Identification Using Embedded Raspberry Pi-Based Face Recognition Systems
by Sebastian Pecolt, Andrzej Błażejewski, Tomasz Królikowski, Igor Maciejewski, Kacper Gierula and Sebastian Glowinski
Appl. Sci. 2025, 15(2), 887; https://doi.org/10.3390/app15020887 - 17 Jan 2025
Cited by 2 | Viewed by 2835
Abstract
Facial recognition technology has significantly advanced in recent years, with promising applications in fields ranging from security to consumer electronics. Its importance extends beyond convenience, offering enhanced security measures for sensitive areas and seamless user experiences in everyday devices. This study focuses on the development and validation of a facial recognition system utilizing a Haar cascade classifier and the AdaBoost machine learning algorithm. The system leverages characteristic facial features—distinct, measurable attributes used to identify and differentiate faces within images. A biometric facial recognition system was implemented on a Raspberry Pi microcomputer, capable of detecting and identifying faces using a self-contained reference image database. Verification involved selecting the similarity threshold, a critical factor influencing the balance between accuracy, security, and user experience in biometric systems. Testing under various environmental conditions, facial expressions, and user demographics confirmed the system’s accuracy and efficiency, achieving an average recognition time of 10.5 s under different lighting conditions, such as daylight, artificial light, and low-light scenarios. It is shown that the system’s accuracy and scalability can be enhanced through testing with larger databases, hardware upgrades like higher-resolution cameras, and advanced deep learning algorithms to address challenges such as extreme facial angles. Threshold optimization tests with six male participants revealed a value that effectively balances accuracy and efficiency. While the system performed effectively under controlled conditions, challenges such as biometric similarities and vulnerabilities to spoofing with printed photos underscore the need for additional security measures, such as thermal imaging. Potential applications include access control, surveillance, and statistical data collection, highlighting the system’s versatility and relevance. Full article
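
For reference, the Haar cascade detection step described here maps onto OpenCV's bundled classifier; the camera index and detection parameters below are typical defaults rather than the authors' settings:

```python
import cv2

# OpenCV ships the pretrained frontal-face Haar cascade with the package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # Raspberry Pi camera module or USB webcam
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(60, 60))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"Detected {len(faces)} face(s)")
```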

24 pages, 21931 KiB  
Article
Evaluating and Enhancing Face Anti-Spoofing Algorithms for Light Makeup: A General Detection Approach
by Zhimao Lai, Yang Guo, Yongjian Hu, Wenkang Su and Renhai Feng
Sensors 2024, 24(24), 8075; https://doi.org/10.3390/s24248075 - 18 Dec 2024
Cited by 1 | Viewed by 953
Abstract
Makeup modifies facial textures and colors, impacting the precision of face anti-spoofing systems. Many individuals opt for light makeup in their daily lives, which generally does not hinder face identity recognition. However, current research in face anti-spoofing often neglects the influence of light makeup on facial feature recognition, notably the absence of publicly accessible datasets featuring light makeup faces. If these instances are incorrectly flagged as fraudulent by face anti-spoofing systems, it could lead to user inconvenience. In response, we develop a face anti-spoofing database that includes light makeup faces and establishes a criterion for determining light makeup to select appropriate data. Building on this foundation, we assess multiple established face anti-spoofing algorithms using the newly created database. Our findings reveal that the majority of these algorithms experience a decrease in performance when faced with light makeup faces. Consequently, this paper introduces a general face anti-spoofing algorithm specifically designed for light makeup faces, which includes a makeup augmentation module, a batch channel normalization module, a backbone network updated via the Exponential Moving Average (EMA) method, an asymmetric virtual triplet loss module, and a nearest neighbor supervised contrastive module. The experimental outcomes confirm that the proposed algorithm exhibits superior detection capabilities when handling light makeup faces. Full article
(This article belongs to the Section Intelligent Sensors)
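
The backbone update via an Exponential Moving Average follows the usual EMA rule; a generic sketch is shown below (the decay value and the stand-in backbone are assumptions):

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model: torch.nn.Module, model: torch.nn.Module, decay: float = 0.999):
    """ema_param <- decay * ema_param + (1 - decay) * param, applied after each optimizer step."""
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
    for ema_b, b in zip(ema_model.buffers(), model.buffers()):
        ema_b.copy_(b)

backbone = torch.nn.Linear(128, 2)             # stand-in for the real backbone
ema_backbone = copy.deepcopy(backbone).eval()  # EMA copy typically used for inference
ema_update(ema_backbone, backbone)
```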
