Search Results (28)

Search Parameters:
Keywords = biometric forensics

17 pages, 2072 KiB  
Article
Barefoot Footprint Detection Algorithm Based on YOLOv8-StarNet
by Yujie Shen, Xuemei Jiang, Yabin Zhao and Wenxin Xie
Sensors 2025, 25(15), 4578; https://doi.org/10.3390/s25154578 - 24 Jul 2025
Viewed by 250
Abstract
This study proposes an optimized footprint recognition model based on an enhanced StarNet architecture for biometric identification in the security, medical, and criminal investigation fields. Conventional image recognition algorithms exhibit limitations in processing barefoot footprint images characterized by concentrated feature distributions and rich texture patterns. To address this, our framework integrates an improved StarNet into the backbone of the YOLOv8 architecture. Leveraging the unique advantages of element-wise multiplication, the redesigned backbone efficiently maps inputs to a high-dimensional nonlinear feature space without increasing channel dimensions, achieving enhanced representational capacity with low computational latency. Subsequently, an Encoder layer facilitates feature interaction within the backbone through multi-scale feature fusion and attention mechanisms, effectively extracting rich semantic information while maintaining computational efficiency. In the feature fusion part, a feature modulation block processes multi-scale features by synergistically combining global and local information, thereby reducing redundant computations and decreasing both parameter count and computational complexity to achieve model lightweighting. Experimental evaluations on a proprietary barefoot footprint dataset demonstrate that the proposed model exhibits significant advantages in terms of parameter efficiency, recognition accuracy, and computational complexity. The parameter count is reduced by 0.73 million, further improving the model's speed, and GFLOPs are reduced by 1.5, lowering the hardware requirements for model deployment. Recognition accuracy reaches 99.5%, with further improvements in model precision. Future research will explore how to capture shoeprint images with complex backgrounds from shoes worn at crime scenes, aiming to further enhance the model's recognition capabilities in more forensic scenarios.
(This article belongs to the Special Issue Transformer Applications in Target Tracking)
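For readers unfamiliar with the element-wise multiplication the abstract refers to, the sketch below illustrates the general idea of a StarNet-style block in PyTorch: two parallel projections of the same feature map are combined by element-wise multiplication inside a residual block. The block name, channel widths, and expansion factor are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    """Minimal StarNet-style block (illustrative): two parallel 1x1 projections of
    the same input are combined by element-wise multiplication, giving nonlinear
    high-dimensional interactions without widening the channel dimension."""
    def __init__(self, channels: int, expand: int = 3):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)  # depthwise mixing
        self.f1 = nn.Conv2d(channels, channels * expand, 1)   # branch 1 projection
        self.f2 = nn.Conv2d(channels, channels * expand, 1)   # branch 2 projection
        self.g = nn.Conv2d(channels * expand, channels, 1)    # project back to input width
        self.act = nn.ReLU6()

    def forward(self, x):
        y = self.dw(x)
        y = self.act(self.f1(y)) * self.f2(y)   # element-wise "star" interaction
        return x + self.g(y)                    # residual connection

x = torch.randn(1, 32, 64, 64)   # e.g., a footprint feature map
print(StarBlock(32)(x).shape)    # torch.Size([1, 32, 64, 64])
```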

13 pages, 1987 KiB  
Article
Novel Deep Learning-Based Facial Forgery Detection for Effective Biometric Recognition
by Hansoo Kim
Appl. Sci. 2025, 15(7), 3613; https://doi.org/10.3390/app15073613 - 26 Mar 2025
Cited by 1 | Viewed by 595
Abstract
Advancements in science, technology, and computer engineering have significantly influenced biometric identification systems, particularly facial recognition. However, these systems are increasingly vulnerable to sophisticated forgery techniques. This study presents a novel deep learning framework optimized for texture analysis to detect facial forgeries effectively. The proposed method leverages high-frequency texture features, such as roughness, color variation, and randomness, which are more challenging to replicate than specific facial features. The network employs a shallow architecture with wide feature maps to enhance efficiency and precision. Furthermore, a binary classification approach combined with supervised contrastive learning addresses data imbalance and strengthens generalization capabilities. Experimental results, conducted on three benchmark datasets (CASIA-FASD, CelebA-Spoof, and NIA-ILD), demonstrate the model’s robustness, achieving an Average Classification Error Rate (ACER) of approximately 0.06, significantly outperforming existing methods. This approach ensures practical applicability for real-time biometric systems, providing a reliable and efficient solution for forgery detection. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Face Recognition Research)
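The abstract combines binary classification with supervised contrastive learning to handle imbalanced data. Below is a minimal sketch of a supervised contrastive loss over genuine/forged labels; the temperature, embedding size, and batch layout are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings: samples sharing a
    label (e.g. genuine vs. forged) are pulled together, all others pushed apart."""
    z = F.normalize(features, dim=1)                        # (N, D) unit embeddings
    sim = z @ z.t() / temperature                           # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))         # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    per_anchor = -pos_log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return per_anchor[pos_mask.any(dim=1)].mean()           # anchors with >= 1 positive

features = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])             # 0 = genuine, 1 = forged
print(float(supervised_contrastive_loss(features, labels)))
```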

21 pages, 7041 KiB  
Article
Synergy of Internet of Things and Software Engineering Approach for Enhanced Copy–Move Image Forgery Detection Model
by Mohammed Assiri
Electronics 2025, 14(4), 692; https://doi.org/10.3390/electronics14040692 - 11 Feb 2025
Viewed by 804
Abstract
The fast development of digital images and the improvement required for security measures have recently increased the demand for innovative image analysis methods. Image analysis identifies, classifies, and monitors people, events, or objects in images or videos, and it significantly improves security by identifying and preventing attacks on security applications through digital images. It is crucial in diverse security fields, comprising video analysis, anomaly detection, biometrics, object recognition, surveillance, and forensic investigations. By integrating advanced software engineering models with IoT capabilities, this technique revolutionizes copy–move image forgery detection. IoT devices collect and transmit real-world data, improving software solutions to detect and analyze image tampering with exceptional accuracy and efficiency. This combination enhances detection abilities and provides scalable and adaptive solutions to counter cutting-edge forgery methods. Copy–move forgery detection (CMFD) has become one of the most active research domains in the blind image forensics area. Among existing approaches, most depend upon block-based and key-point methods or a combination of the two. A few deep convolutional neural network (DCNN) techniques have been applied to image hashing, image forensics, image retrieval, image classification, etc., and have performed better than conventional methods. To accomplish robust CMFD, this study develops a fusion of soft computing with a deep learning-based CMFD approach (FSCDL-CMFDA) to secure digital images. The FSCDL-CMFDA approach aims to integrate the benefits of metaheuristics with the DL model for an enhanced CMFD process. In the FSCDL-CMFDA method, histogram equalization is initially performed to improve the image quality. Furthermore, the Siamese convolutional neural network (SCNN) model is used to learn complex features from pre-processed images, and its hyperparameters are chosen by the golden jackal optimization (GJO) model. For the CMFD process, the FSCDL-CMFDA technique employs the regularized extreme learning machine (RELM) classifier. Finally, the detection performance of the RELM method is improved by the beluga whale optimization (BWO) technique. To demonstrate the enhanced performance of the FSCDL-CMFDA method, a comprehensive outcome analysis is conducted using the MNIST and CIFAR datasets. The experimental validation of the FSCDL-CMFDA method showed a superior accuracy value of 98.12% over existing models.
(This article belongs to the Special Issue Signal and Image Processing Applications in Artificial Intelligence)
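As a small illustration of the pre-processing step named in the abstract, the following is a minimal histogram-equalisation sketch in NumPy; the image size and value range are arbitrary, and the paper's own pipeline (SCNN, GJO, RELM, BWO) is not reproduced here.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalisation of an 8-bit grayscale image: remap intensities so
    that the cumulative distribution becomes approximately uniform."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    cdf_norm = np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min), 0.0, 1.0)
    lut = np.round(cdf_norm * 255).astype(np.uint8)
    return lut[gray]

img = (np.random.rand(64, 64) * 120).astype(np.uint8)      # dim, low-contrast input
print(int(img.max()), int(equalize_histogram(img).max()))  # contrast stretched toward 255
```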

19 pages, 2872 KiB  
Article
Channel and Spatial Attention in Chest X-Ray Radiographs: Advancing Person Identification and Verification with Self-Residual Attention Network
by Hazem Farah, Akram Bennour, Neesrin Ali Kurdi, Samir Hammami and Mohammed Al-Sarem
Diagnostics 2024, 14(23), 2655; https://doi.org/10.3390/diagnostics14232655 - 25 Nov 2024
Cited by 1 | Viewed by 973
Abstract
Background/Objectives: In contrast to traditional biometric modalities, such as facial recognition, fingerprints, and iris scans or even DNA, the research orientation towards chest X-ray recognition has been spurred by its remarkable recognition rates. Capturing the intricate anatomical nuances of an individual's skeletal structure, the ribcage of the chest, lungs, and heart, chest X-rays have emerged as a focal point for identification and verification, especially in the forensic field, even in scenarios where the human body is damaged or disfigured. Discriminative feature embedding is essential for large-scale image verification, especially when applying chest X-ray radiographs for identity identification and verification. This study introduced a self-residual attention-based convolutional neural network (SRAN) aimed at effective feature embedding, capturing long-range dependencies and emphasizing critical spatial features in chest X-rays. This method offers a novel approach to person identification and verification through chest X-ray categorization, relevant for biometric applications and patient care, particularly when traditional biometric modalities are ineffective. Method: The SRAN architecture integrated a self-channel and self-spatial attention module to minimize channel redundancy and enhance significant spatial elements. The attention modules worked by dynamically aggregating feature maps across channel and spatial dimensions to enhance feature differentiation. For the network backbone, a self-residual attention block (SRAB) was implemented within a ResNet50 framework, forming a Siamese network trained with triplet loss to improve feature embedding for identity identification and verification. Results: By leveraging the NIH ChestX-ray14 and CheXpert datasets, our method demonstrated notable improvements in accuracy for identity verification and identification based on chest X-ray images. This approach effectively captured the detailed anatomical characteristics of individuals, including skeletal structure, ribcage, lungs, and heart, highlighting chest X-rays as a viable biometric tool even in cases of body damage or disfigurement. Conclusions: The proposed SRAN with self-residual attention provided a promising solution for biometric identification through chest X-ray imaging, showcasing its potential for accurate and reliable identity verification where traditional biometric approaches may fall short, especially in postmortem cases or forensic investigations. This methodology could play a transformative role in both biometric security and healthcare applications, offering a robust alternative modality for identity verification.
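A minimal sketch of the Siamese network trained with triplet loss described above: a shared ResNet50 trunk produces embeddings, and a triplet margin loss pulls same-person chest X-rays together. The margin, embedding dimension, and the omission of the self-residual attention blocks are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

# Shared embedding network: a ResNet50 trunk whose classifier head is replaced
# by a linear embedding layer (the self-residual attention blocks are omitted here).
backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 256)

triplet = nn.TripletMarginLoss(margin=0.3)

def embed(x):
    return nn.functional.normalize(backbone(x), dim=1)

# Anchor and positive are chest X-rays of the same person; negative is another person.
anchor, positive, negative = (torch.randn(2, 3, 224, 224) for _ in range(3))
loss = triplet(embed(anchor), embed(positive), embed(negative))
loss.backward()   # gradients for one training step of the shared weights
print(float(loss))
```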

11 pages, 547 KiB  
Article
GaitAE: A Cognitive Model-Based Autoencoding Technique for Gait Recognition
by Rui Li, Huakang Li, Yidan Qiu, Jinchang Ren, Wing W. Y. Ng and Huimin Zhao
Mathematics 2024, 12(17), 2780; https://doi.org/10.3390/math12172780 - 8 Sep 2024
Cited by 2 | Viewed by 1585
Abstract
Gait recognition is a long-distance biometric technique with significant potential for applications in crime prevention, forensic identification, and criminal investigations. Existing gait recognition methods typically introduce specific feature refinement modules on designated models, leading to increased parameter volume and computational complexity while lacking flexibility. In response to this challenge, we propose a novel framework called GaitAE. GaitAE efficiently learns gait representations from large datasets and reconstructs gait sequences through an autoencoder mechanism, thereby enhancing recognition accuracy and robustness. In addition, we introduce a horizontal occlusion restriction (HOR) strategy, which introduces horizontal blocks to the original input sequences at random positions during training to minimize the impact of confounding factors on recognition performance. The experimental results demonstrate that our method achieves high accuracy and is effective when applied to existing gait recognition techniques. Full article
(This article belongs to the Special Issue Mathematical Methods for Pattern Recognition)
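A rough sketch of the horizontal occlusion restriction (HOR) idea described above: a horizontal band at a random vertical position is zeroed out in a silhouette sequence during training. The band height and the choice to occlude the same rows in every frame are assumptions, not the authors' exact scheme.

```python
import numpy as np

def horizontal_occlusion(seq, band_height=8, rng=None):
    """HOR-style augmentation sketch: zero out a horizontal band at a random
    vertical position in a gait silhouette sequence of shape (frames, H, W)."""
    rng = np.random.default_rng() if rng is None else rng
    out = seq.copy()
    top = rng.integers(0, seq.shape[1] - band_height)
    out[:, top:top + band_height, :] = 0
    return out

seq = np.ones((30, 64, 44), dtype=np.float32)        # 30 silhouette frames
occluded = horizontal_occlusion(seq, rng=np.random.default_rng(0))
print(occluded.sum() / seq.sum())                    # fraction of pixels kept
```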

25 pages, 3396 KiB  
Review
Technology in Forensic Sciences: Innovation and Precision
by Xavier Chango, Omar Flor-Unda, Pedro Gil-Jiménez and Hilario Gómez-Moreno
Technologies 2024, 12(8), 120; https://doi.org/10.3390/technologies12080120 - 26 Jul 2024
Cited by 11 | Viewed by 24451
Abstract
The advancement of technology and its developments have provided the forensic sciences with many cutting-edge tools, devices, and applications, allowing forensics a better and more accurate understanding of the crime scene, a better and optimal acquisition of data and information, and faster processing, allowing more reliable conclusions to be obtained and substantially improving the scientific investigation of crime. This article describes the technological advances, their impacts, and the challenges faced by forensic specialists in using and implementing these technologies as tools to strengthen their field and laboratory investigations. The systematic review of the scientific literature used the PRISMA® methodology, analyzing documents from databases such as SCOPUS, Web of Science, Taylor & Francis, PubMed, and ProQuest. Studies were selected using a Cohen Kappa coefficient of 0.463. In total, 63 reference articles were selected. The impact of technology on investigations by forensic science experts presents great benefits, such as a greater possibility of digitizing the crime scene, allowing remote analysis through extended reality technologies, improvements in the accuracy and identification of biometric characteristics, portable equipment for on-site analysis, and Internet of things devices that use artificial intelligence and machine learning techniques. These alternatives improve forensic investigations without diminishing the investigator’s prominence and responsibility in the resolution of cases. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)
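Since study selection in this review relied on a Cohen Kappa coefficient, the following is a small, self-contained sketch of how that inter-rater agreement statistic is computed from two screeners' include/exclude decisions; the example ratings are invented.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude decisions during screening."""
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

a = ["in", "in", "out", "in", "out", "out", "in", "out"]
b = ["in", "out", "out", "in", "out", "in", "in", "out"]
print(cohens_kappa(a, b))   # 0.5: moderate agreement, of the same order as the review's 0.463
```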

12 pages, 4080 KiB  
Article
The Effects of AI-Driven Face Restoration on Forensic Face Recognition
by Mengxuan Yang, Shengnan Li and Jinhua Zeng
Appl. Sci. 2024, 14(9), 3783; https://doi.org/10.3390/app14093783 - 29 Apr 2024
Cited by 2 | Viewed by 4457
Abstract
In biometric recognition, face recognition is a mature and widely used technique that provides a fast, accurate, and reliable method for human identification. This paper aims to study the effects of face image restoration for forensic face recognition and then further analyzes the advantages and limitations of the four state-of-the-art face image restoration methods in the field of face recognition for forensic human image identification. In total, 100 face image materials from an open-source face image dataset are used for experiments. The Gaussian blur processing is applied to simulate the effect of blurred face images in actual cases of forensic human image identification. Four state-of-the-art AI-driven face restoration methods are used to restore the blurred face images. We use three mainstream face recognition systems to evaluate the recognition performance changes of the blurred face images and the restored face images. We find that although face image restoration can effectively remove facial noise and blurring effects, the restored images do not significantly improve the recognition performance of the face recognition systems. Face image restoration may change the original features in face images and introduce new made-up image features, thereby affecting the accuracy of face recognition. In current conditions, the improvement in face image restoration on the recognition performance of face recognition systems is limited, but it still has a positive role in the application of forensic human image identification. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
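A minimal sketch of the degradation step mentioned in the abstract: Gaussian blur applied to a face crop, together with a cosine-similarity helper of the kind a face recognition system uses to compare embeddings. The blur strength, crop size, and similarity measure are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_blur(face, sigma=3.0):
    """Gaussian blur degradation used to mimic low-quality probe images."""
    return gaussian_filter(face, sigma=sigma)

def cosine_similarity(emb_a, emb_b):
    """Match score between two face embeddings (from any recognition system)."""
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

face = np.random.rand(112, 112)           # stand-in for an aligned face crop
blurred = simulate_blur(face)
# A real evaluation would embed `face`, `blurred`, and a restored version with a
# face recognition network and compare the resulting similarity scores.
print(blurred.shape)
```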

21 pages, 7580 KiB  
Article
Fingerprint Recognition in Forensic Scenarios
by Nuno Martins, José Silvestre Silva and Alexandre Bernardino
Sensors 2024, 24(2), 664; https://doi.org/10.3390/s24020664 - 20 Jan 2024
Cited by 8 | Viewed by 8769
Abstract
Fingerprints are unique patterns used as biometric keys because they allow an individual to be unambiguously identified, making their application in the forensic field a common practice. The design of a system that can match the details of different images is still an open problem, especially when applied to large databases or to real-time applications in forensic scenarios using mobile devices. Fingerprints collected at a crime scene are often manually processed to find those that are relevant to solving the crime. This work proposes an efficient methodology that can be applied in real time to reduce the manual work in crime scene investigations that consumes time and human resources. The proposed methodology includes four steps: (i) image pre-processing using oriented Gabor filters; (ii) the extraction of minutiae using a variant of the Crossing Numbers method, which includes a novel ROI definition through convex hull and erosion, followed by replacing two or more very close minutiae with an average minutia; (iii) the creation of a model that represents each minutia through the characteristics of a set of polygons including neighboring minutiae; (iv) the individual search for a match for each minutia in different images using metrics on the absolute and relative errors. While in the literature most methodologies seek to validate the entire fingerprint model, connecting the minutiae or using minutiae triplets, we validate each minutia individually using n-vertex polygons whose vertices are neighboring minutiae that surround the reference. Our method is also robust against false minutiae: since several polygons are used to represent the same minutia, the true polygon may still be present and identified even when false minutiae occur; in addition, our method is immune to rotations and translations. The results show that the proposed methodology can be applied in real time in a standard hardware implementation, with images of arbitrary orientations.
(This article belongs to the Section Biosensors)
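A small sketch of the classic Crossing Numbers computation referred to in step (ii): on a thinned binary ridge map, the number of 0/1 transitions around each ridge pixel's 8-neighbourhood identifies ridge endings (CN = 1) and bifurcations (CN = 3). The paper's ROI definition, minutiae merging, and polygon modelling are not reproduced here.

```python
import numpy as np

def crossing_number_minutiae(skeleton):
    """Classic Crossing Number minutiae detection on a thinned binary ridge map
    (1 = ridge pixel). CN = 1 marks ridge endings, CN = 3 marks bifurcations."""
    endings, bifurcations = [], []
    h, w = skeleton.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not skeleton[y, x]:
                continue
            # 8-neighbours visited in a closed circular order
            p = [skeleton[y-1, x], skeleton[y-1, x+1], skeleton[y, x+1],
                 skeleton[y+1, x+1], skeleton[y+1, x], skeleton[y+1, x-1],
                 skeleton[y, x-1], skeleton[y-1, x-1]]
            cn = sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                endings.append((x, y))
            elif cn == 3:
                bifurcations.append((x, y))
    return endings, bifurcations

skel = np.zeros((8, 8), dtype=np.uint8)
skel[4, 1:7] = 1                       # a single horizontal ridge segment
print(crossing_number_minutiae(skel))  # two ridge endings, no bifurcations
```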

13 pages, 1559 KiB  
Article
Offline Mongolian Handwriting Identification Based on Convolutional Neural Network
by Yuxin Sun, Daoerji Fan, Huijuan Wu, Zhixin Wang and Jia Tian
Electronics 2024, 13(1), 111; https://doi.org/10.3390/electronics13010111 - 27 Dec 2023
Cited by 1 | Viewed by 1823
Abstract
Handwriting is a biometric behavioral characteristic with evident individual distinctiveness. With the rise of the deep learning trend and demands for forensic identification, handwriting identification has become one of the focal points of research in the field of pattern recognition. Research in handwriting identification for major global languages has matured. However, in China, there is limited attention in the field of writer identification for minority languages such as Mongolian, making it challenging to resolve criminal cases involving handwriting issues. This paper initiates an initial exploration of Mongolian handwriting identification by constructing a structurally simple convolutional neural network. This convolutional neural network, consisting of 12 convolution operations and designed for Mongolian handwriting identification, is referred to as MWInet-12. In this paper, the model evaluation experiments were conducted using a dataset comprising 156,372 samples contributed by 125 writers from the MOLHW dataset. The dataset was divided into training, validation, and test sets in an 8:1:1 ratio. The final results of the experiments reveal impressive accuracy on the test set, achieving a top-1 accuracy of 89.60% and a top-5 accuracy of 97.53%. Furthermore, through comparative experiments involving Resnet50, Fragnet, GRRNN, VGG16, and VGG19 models, this paper establishes that the proposed model yields the most favorable results for Mongolian handwriting identification. The exploratory research on Mongolian handwriting identification in this paper contributes to increasing awareness of information processing for minority languages. It aids in advancing research on classifying writers of Mongolian historical texts and provides technical support for judicial authentication involving handwriting issues. Full article
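Since the results above are reported as top-1 and top-5 accuracy over 125 writers, here is a small sketch of how such top-k accuracy is typically computed from classifier scores; the batch size and random scores are placeholders, not data from the MOLHW experiments.

```python
import torch

def topk_accuracy(logits, targets, k=5):
    """Top-k accuracy as reported for writer identification (top-1 and top-5)."""
    topk = logits.topk(k, dim=1).indices              # (N, k) highest-scoring writers
    hits = (topk == targets.unsqueeze(1)).any(dim=1)  # true label among the top k?
    return hits.float().mean().item()

logits = torch.randn(16, 125)                          # scores over 125 writers
targets = torch.randint(0, 125, (16,))
print(topk_accuracy(logits, targets, k=1), topk_accuracy(logits, targets, k=5))
```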

23 pages, 3168 KiB  
Article
Invariant Feature Encoding for Contact Handprints Using Delaunay Triangulated Graph
by Akmal Jahan Mohamed Abdul Cader, Jasmine Banks and Vinod Chandran
Appl. Sci. 2023, 13(19), 10874; https://doi.org/10.3390/app131910874 - 30 Sep 2023
Cited by 2 | Viewed by 1154
Abstract
Contact-based biometric applications primarily use prints from a finger or a palm for a single instance in different applications. For access control, there is an enrollment process using one or more templates which are compared with verification images. In forensics applications, randomly located, partial, and often degraded prints acquired from a crime scene are compared with the images captured from suspects or existing fingerprint databases, like AFIS. In both scenarios, if we need to use handprints which include segments from the finger and palm, what would be the solution? The motivation behind this is the concept of one single algorithm for one hand. An algorithm that can incorporate both prints in a common processing framework is an alternative with advantages such as scaling to larger existing databases. This work proposes a method that uses minutiae or minutiae-like features, Delaunay triangulation and graph matching with invariant feature representation to overcome the effects of rotation and scaling. Since palm prints have a large surface area with degradation, they tend to have many false minutiae compared to fingerprints, and the existing palm print algorithms fail to tackle this. The proposed algorithm constructs Delaunay triangulated graphs (DTGs) from the minutiae, where the Delaunay triangles initiate a collection of base triangles that open the matching process. Several matches may be observed for a single triangle match when two images are compared; therefore, the set of initially matched triangles may not be a true set of matched triangles. Each matched triangle is then extended into a sub-graph, adding more nodes to it until a maximum graph size is reached. When a significant region of the template image is matched with the test image, the highest possible order of this graph will be obtained. To prove the robustness of the algorithm to geometrical variations and its ability to work under extremely degraded conditions (similar to latent prints), it is demonstrated on a subset of partial-quality and extremely low-quality images from the FVC (fingerprint) and THUPALMLAB (palm print) databases, with and without geometrical variations. The algorithm is useful when partial matches between template and test are expected, and alignment or geometrical normalization is not accurately possible in pre-processing. It will also work for cross-comparisons between images that are not known a priori.
(This article belongs to the Special Issue Cutting Edge Advances in Image Information Processing)
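A minimal sketch of the Delaunay triangulation step described above: minutiae coordinates are triangulated, and each triangle yields side lengths that are invariant to rotation and translation and can serve as candidate base triangles for matching. The specific invariants and graph-extension rules used in the paper are not reproduced here.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_triangles(minutiae_xy):
    """Delaunay triangulation over minutiae coordinates; each triangle is a
    candidate 'base triangle' from which graph matching can be extended."""
    return Delaunay(minutiae_xy).simplices            # (n_triangles, 3) index triples

def triangle_invariants(minutiae_xy, triangle):
    """Sorted side lengths of one triangle: unchanged by rotation and translation,
    so they can serve as a simple key when comparing triangles across two prints."""
    pts = minutiae_xy[triangle]
    sides = [np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)]
    return tuple(sorted(sides))

rng = np.random.default_rng(1)
minutiae = rng.uniform(0, 500, size=(25, 2))   # 25 minutiae from one handprint region
tris = delaunay_triangles(minutiae)
print(len(tris), triangle_invariants(minutiae, tris[0]))
```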

28 pages, 1432 KiB  
Article
Fingerprint Systems: Sensors, Image Acquisition, Interoperability and Challenges
by Akmal Jahan Mohamed Abdul Cader, Jasmine Banks and Vinod Chandran
Sensors 2023, 23(14), 6591; https://doi.org/10.3390/s23146591 - 21 Jul 2023
Cited by 14 | Viewed by 9377
Abstract
The fingerprint is a widely adopted biometric trait in forensic and civil applications. Fingerprint biometric systems have been investigated using contact prints and latent and contactless images which range from low to high resolution. While the imaging techniques are advancing with sensor variations, the input fingerprint images also vary. A general fingerprint recognition pipeline consists of a sensor module to acquire images, followed by feature representation, matching and decision modules. In the sensor module, the image quality of the biometric traits significantly affects the biometric system’s accuracy and performance. Imaging modality, such as contact and contactless, plays a key role in poor image quality, and therefore, paying attention to imaging modality is important to obtain better performance. Further, underlying physical principles and the working of the sensor can lead to their own forms of distortions during acquisition. There are certain challenges in each module of the fingerprint recognition pipeline, particularly sensors, image acquisition and feature representation. Present reviews in fingerprint systems only analyze the imaging techniques in fingerprint sensing that have existed for a decade. However, the latest emerging trends and recent advances in fingerprint sensing, image acquisition and their challenges have been left behind. Since the present reviews are either obsolete or restricted to a particular subset of the fingerprint systems, this work comprehensively analyzes the state of the art in the field of contact-based, contactless 2D and 3D fingerprint systems and their challenges in the aspects of sensors, image acquisition and interoperability. It outlines the open issues and challenges encountered in fingerprint systems, such as fingerprint performance, environmental factors, acceptability and interoperability, and alternate directions are proposed for a better fingerprint system. Full article
(This article belongs to the Special Issue Advances in Biometrics: Sensors, Algorithms, and Systems)

28 pages, 11151 KiB  
Article
Hybrid Deep Learning and Discrete Wavelet Transform-Based ECG Biometric Recognition for Arrhythmic Patients and Healthy Controls
by Muhammad Sheharyar Asif, Muhammad Shahzad Faisal, Muhammad Najam Dar, Monia Hamdi, Hela Elmannai, Atif Rizwan and Muhammad Abbas
Sensors 2023, 23(10), 4635; https://doi.org/10.3390/s23104635 - 10 May 2023
Cited by 8 | Viewed by 4386
Abstract
The intrinsic and liveness detection behavior of electrocardiogram (ECG) signals has made it an emerging biometric modality for the researcher with several applications including forensic, surveillance and security. The main challenge is the low recognition performance with datasets of large populations, including healthy and heart-disease patients, with a short interval of an ECG signal. This research proposes a novel method with the feature-level fusion of the discrete wavelet transform and a one-dimensional convolutional recurrent neural network (1D-CRNN). ECG signals were preprocessed by removing high-frequency powerline interference, followed by a low-pass filter with a cutoff frequency of 1.5 Hz for physiological noises and by baseline drift removal. The preprocessed signal is segmented with PQRST peaks, while the segmented signals are passed through Coiflets’ 5 Discrete Wavelet Transform for conventional feature extraction. The 1D-CRNN with two long short-term memory (LSTM) layers followed by three 1D convolutional layers was applied for deep learning-based feature extraction. These combinations of features result in biometric recognition accuracies of 80.64%, 98.81% and 99.62% for the ECG-ID, MIT-BIH and NSR-DB datasets, respectively. At the same time, 98.24% is achieved when combining all of these datasets. This research also compares conventional feature extraction, deep learning-based feature extraction and a combination of these for performance enhancement, compared to transfer learning approaches such as VGG-19, ResNet-152 and Inception-v3 with a small segment of ECG data. Full article
(This article belongs to the Special Issue Advances in Biometrics: Sensors, Algorithms, and Systems)
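A small sketch of two of the pre-processing and feature-extraction steps named above, assuming a 360 Hz recording and 50 Hz mains interference: powerline removal with a notch filter and conventional feature extraction with a Coiflet-5 discrete wavelet decomposition (PyWavelets). The low-pass filtering, baseline drift removal, PQRST segmentation, and the 1D-CRNN itself are omitted.

```python
import numpy as np
import pywt
from scipy.signal import iirnotch, filtfilt

fs = 360                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)  # toy beat + mains hum

# Remove powerline interference with a notch filter at the mains frequency.
b, a = iirnotch(w0=50, Q=30, fs=fs)
clean = filtfilt(b, a, ecg)

# Conventional features: Coiflet-5 discrete wavelet decomposition of the segment.
coeffs = pywt.wavedec(clean, 'coif5', level=4)
features = np.concatenate(coeffs)          # flattened DWT coefficients as a feature vector
print(len(coeffs), features.shape)
```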

22 pages, 17811 KiB  
Article
Probabilistic Fingermark Quality Assessment with Quality Region Localisation
by Tim Oblak, Rudolf Haraksim, Laurent Beslay and Peter Peer
Sensors 2023, 23(8), 4006; https://doi.org/10.3390/s23084006 - 15 Apr 2023
Cited by 4 | Viewed by 2826
Abstract
The assessment of fingermark (latent fingerprint) quality is an intrinsic part of a forensic investigation. The fingermark quality indicates the value and utility of the trace evidence recovered from the crime scene in the course of a forensic investigation; it determines how the evidence will be processed, and it correlates with the probability of finding a corresponding fingerprint in the reference dataset. The deposition of fingermarks on random surfaces occurs spontaneously in an uncontrolled fashion, which introduces imperfections to the resulting impression of the friction ridge pattern. In this work, we propose a new probabilistic framework for Automated Fingermark Quality Assessment (AFQA). We used modern deep learning techniques, which have the ability to extract patterns even from noisy data, and combined them with a methodology from the field of eXplainable AI (XAI) to make our models more transparent. Our solution first predicts a quality probability distribution, from which we then calculate the final quality value and, if needed, the uncertainty of the model. Additionally, we complemented the predicted quality value with a corresponding quality map. We used GradCAM to determine which regions of the fingermark had the largest effect on the overall quality prediction. We show that the resulting quality maps are highly correlated with the density of minutiae points in the input image. Our deep learning approach achieved high regression performance, while significantly improving the interpretability and transparency of the predictions. Full article
(This article belongs to the Special Issue Trustless Biometric Sensors and Systems)
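As a small illustration of turning a predicted quality probability distribution into a final quality value and an uncertainty, the sketch below takes the expectation and standard deviation over assumed quality bins; the bin scale and number of bins are illustrative, not the AFQA framework's actual output space.

```python
import numpy as np

def quality_from_distribution(probs, bins=None):
    """Collapse a predicted quality probability distribution into a scalar quality
    value (expectation) and an uncertainty estimate (standard deviation)."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    bins = np.linspace(0, 100, len(probs)) if bins is None else np.asarray(bins)
    quality = float(np.dot(probs, bins))           # expected quality value
    uncertainty = float(np.sqrt(np.dot(probs, (bins - quality) ** 2)))
    return quality, uncertainty

# A fingermark whose predicted quality mass sits around the middle of the scale.
print(quality_from_distribution([0.05, 0.15, 0.4, 0.3, 0.1]))
```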

22 pages, 1181 KiB  
Review
Dental Age Estimation Using Deep Learning: A Comparative Survey
by Essraa Gamal Mohamed, Rebeca P. Díaz Redondo, Abdelrahim Koura, Mohamed Sherif EL-Mofty and Mohammed Kayed
Computation 2023, 11(2), 18; https://doi.org/10.3390/computation11020018 - 29 Jan 2023
Cited by 20 | Viewed by 8934
Abstract
The significance of age estimation arises from its applications in various fields, such as forensics, criminal investigation, and illegal immigration. Due to the increased importance of age estimation, this area of study requires more investigation and development. Several methods exist for age estimation using biometric traits, such as the face, teeth, bones, and voice. Among them, teeth are quite convenient since they are resistant and durable and undergo several changes over a person's life that can be used to derive age. In this paper, we summarize the common biometric traits for age estimation and how this information has been used in previous research studies. We have paid special attention to traditional machine learning methods and deep learning approaches used for dental age estimation. Thus, we summarize the advances in convolutional neural network (CNN) models that estimate dental age from radiological images, such as 3D cone-beam computed tomography (CBCT), X-ray, and orthopantomography (OPG). Finally, we also point out the main innovations that would potentially increase the performance of age estimation systems.

16 pages, 2230 KiB  
Review
Deepfakes Generation and Detection: A Short Survey
by Zahid Akhtar
J. Imaging 2023, 9(1), 18; https://doi.org/10.3390/jimaging9010018 - 13 Jan 2023
Cited by 64 | Viewed by 35719
Abstract
Advancements in deep learning techniques and the availability of free, large databases have made it possible, even for non-technical people, to either manipulate or generate realistic facial samples for both benign and malicious purposes. DeepFakes refer to face multimedia content, which has been digitally altered or synthetically created using deep neural networks. The paper first outlines the readily available face editing apps and the vulnerability (or performance degradation) of face recognition systems under various face manipulations. Next, this survey presents an overview of the techniques and works that have been carried out in recent years for deepfake and face manipulations. Especially, four kinds of deepfake or face manipulations are reviewed, i.e., identity swap, face reenactment, attribute manipulation, and entire face synthesis. For each category, deepfake or face manipulation generation methods as well as those manipulation detection methods are detailed. Despite significant progress based on traditional and advanced computer vision, artificial intelligence, and physics, there is still a huge arms race surging up between attackers/offenders/adversaries (i.e., DeepFake generation methods) and defenders (i.e., DeepFake detection methods). Thus, open challenges and potential research directions are also discussed. This paper is expected to aid the readers in comprehending deepfake generation and detection mechanisms, together with open issues and future directions. Full article
