Article

Cross-Sensor Fingerprint Enhancement Using Adversarial Learning and Edge Loss

Ashwaq Alotaibi, Muhammad Hussain, Hatim AboAlSamh, Wadood Abdul and George Bebis
1 Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
2 Department of Computer Engineering, King Saud University, Riyadh 11451, Saudi Arabia
3 Department of Computer Science and Engineering, University of Nevada, Reno, NV 89557, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(18), 6973; https://doi.org/10.3390/s22186973
Submission received: 2 August 2022 / Revised: 4 September 2022 / Accepted: 6 September 2022 / Published: 15 September 2022

Abstract

A fingerprint sensor interoperability problem, or a cross-sensor matching problem, occurs when one type of sensor is used for enrolment and a different type for matching. Fingerprints captured for the same person using various sensor technologies contain different types of noise and artifacts. This problem motivated us to develop an algorithm that can enhance fingerprints captured using different types of sensors and touch technologies. Inspired by the success of deep learning in various computer vision tasks, we formulate this problem as an image-to-image transformation designed using a deep encoder–decoder model. It is trained using two learning frameworks, i.e., conventional learning and adversarial learning based on a conditional Generative Adversarial Network (cGAN) framework. Since different types of edges form the ridge patterns in fingerprints, we employed an edge loss to train the model for effective fingerprint enhancement. The designed method was evaluated on fingerprints from two benchmark cross-sensor fingerprint datasets, i.e., MOLF and FingerPass. To assess the quality of enhanced fingerprints, we employed two commonly used metrics: NBIS Fingerprint Image Quality (NFIQ) and the Structural Similarity Index Metric (SSIM). In addition, we propose a metric named Fingerprint Quality Enhancement Index (FQEI) for the comprehensive evaluation of fingerprint enhancement algorithms. Effective fingerprint quality enhancement was achieved regardless of the sensor type used, an issue that had not been investigated in the related literature before. The results indicate that the proposed method outperforms the state-of-the-art methods.

1. Introduction

The fingerprint is a biometric modality deployed mainly for human identification. Fingerprint recognition systems have several practical applications, including access control and criminal investigation [1].
Most available fingerprint systems compare data captured from the same sensor; their matching algorithms are designed to work on data obtained from a single sensor for enrollment and verification. Thus, the ability of these algorithms to work on data collected from multiple sensors is limited. This limitation is known as the fingerprint sensor interoperability problem, or the cross-sensor problem. In legacy databases, billions of fingerprints have been collected from different sensors based on diverse technologies. Every time the sensor of choice is changed, re-enrolling all persons is a costly and substantial task. Moreover, due to the improvement in fingerprint sensors and the need to apply fingerprint recognition in devices such as those linked to the Internet of Things (IoT), there is a high demand for an efficient fingerprint matching algorithm that can recognize fingerprints captured using different sensors. Therefore, algorithms for the sensor interoperability problem, which improve the biometric system’s ability to adapt to data obtained from several sensors, are highly needed and will significantly impact system usability [2].
The quality of fingerprints varies with the sensor used for capture, even if the same sensing technology is employed (e.g., optical or capacitive). Additionally, the corresponding sets of features have high variability, which cannot be analyzed easily by a matching algorithm for accurate decisions. An example is given in Figure 1, which shows the fingerprint of the same finger captured by nine different sensors [3].
Differences in sensor technology and interaction type can cause significant variations in the quality of fingerprints. Thus, a considerable drop in the performance of the existing fingerprint recognition systems has been reported when different sensors are used for identification [2].
Moreover, the performance of cross-sensor matching algorithms is affected because of the variations in ridge patterns caused by the various types of noises and artifacts due to the difference in sensor technologies, as shown in Figure 1. There is a real need to enhance fingerprint images. However, this is challenging because fingerprints captured using various sensors include several kinds of texture patterns and noises [4].
A sample including a set of impressions taken from the MOLF dataset [5] is presented in Figure 2. These impressions are categorized into three subsets: DB1 comprises the flat dap (10) fingerprints captured by the Lumidigm Venus sensor; DB2 contains the fingerprints of the same fingers captured by the Secugen HamsterIV sensor; and DB3 consists of the dap fingerprints captured by the CrossMatch L-Scan Patrol sensor. Their quality was measured using the NFIQ (NBIS Fingerprint Image Quality) tool [6], an open-source minutiae-based quality evaluation algorithm that provides a quality value in {1, 2, 3, 4, 5}, with 1 representing the best quality and 5 the worst. Each row within the set corresponds to fingerprints captured by the same sensor. Each column, in turn, corresponds to the same quality level, where the first column is excellent and the last is poor. It can be noticed that DB1 has no images of the poor class. In addition, most of the ridge pattern information is unclear in the impressions belonging to the poor and fair classes in DB2 and DB3.
In this paper, we present an efficient enhancement solution for the cross-sensor fingerprint problem. Specifically, motivated by the outstanding performance of deep learning-based techniques in various computer vision tasks such as image enhancement [7,8], we designed an image-to-image mapping function that receives a low-quality fingerprint and generates a high-quality one. We model this mapping using a Convolutional Neural Network (CNN) based on an encoder–decoder architecture. The learning of this kind of CNN is a challenging problem; thus, we trained our method using two learning approaches, i.e., the conventional end-to-end approach and adversarial learning (using a conditional GAN framework).
Adversarial learning generates fingerprints of higher quality than those produced by conventional learning, as demonstrated by comparing the outputs of the two approaches using two widely used metrics: NFIQ and SSIM.
Our method was evaluated on two benchmark public datasets, FingerPass and MOLF. The results indicate that fingerprints are enhanced to higher quality regardless of the sensor type used.
To the best of our knowledge, this is the first work dealing with the problem of cross-sensor fingerprint enhancement using deep learning. Our contributions in this paper can be summarized as follows:
  • We formulated the cross-sensor fingerprint enhancement problem as an image-to-image transformation problem and designed it using a CNN model with an encoder–decoder architecture that takes a low-quality fingerprint and produces an enhanced fingerprint. We trained the proposed CNN model using two different approaches: conventional learning and adversarial learning.
  • Motivated by the success of adversarial learning in modeling image-to-image transformation [9], we learned the proposed image-to-image transformation (the CNN model) using a conditional GAN framework, where the proposed CNN model plays the role of a generator.
  • To preserve the ridge patterns in the fingerprints, we incorporated the edge loss function [10] and L1 loss [9] into the adversarial loss [11]. This resulted in good quality enhanced fingerprints regardless of the type of sensor used to capture the fingerprints.
  • For comprehensive evaluation of a fingerprint enhancement algorithm, we proposed a new metric called Fingerprint Quality Enhancement Index (FQEI). This metric yields a value between 1 and −1, where 1 represents the best enhancement and −1 represents the worst degradation.
The rest of this paper is structured as follows. Section 2 reviews previous enhancement methods, while Section 3 describes in detail the proposed method. Section 4 presents the training and testing stages of our model, while Section 5 gives details of the experiments. Section 6 discusses our results. Finally, Section 7 concludes the conducted work and suggests some directions for future work.

2. Related Work

In the last decade, various studies have investigated reliable fingerprint enhancement for improving matching, assuming that the same sensor is used both for enrollment and verification.
A common technique is the HONG method proposed by Hong et al. [12], where fingerprints are enhanced using a bank of Gabor filters, which are adjusted to the orientation of the local ridges. Another state-of-the-art method is the CHIK method, which was proposed by Chikkerur et al. [13], where fingerprints are enhanced using the short-time Fourier transform (STFT). In this method, each fingerprint is initially divided into small overlapping windows, and the STFT is applied to each window. Next, the block energy, ridge orientation, and ridge frequency are estimated using the Fourier spectrum, and then contextual filtering is applied for fingerprint enhancement.
Other enhancement techniques focus on off-line images, such as latent fingerprints [14], where a CNN model was employed to predict the ridge direction from a set of pre-trained ridge patterns. In [7], a direct end-to-end enhancement approach was proposed using the FingerNet architecture; this method relied on a CNN within an encoder–decoder scheme. In [8], the authors employed a convolutional auto-encoder neural network to reconstruct the missing ridge pattern. A similar work was proposed in [15], where a method based on de-convolutional auto-encoders was developed to match sensor-scan and inked fingerprints.
All previous works have focused on using conventional learning only in the enhancement process, where CNNs learn to minimize a loss function; designing such a loss function, however, requires considerable manual effort. In contrast, the flexibility provided by Generative Adversarial Networks (GANs), which apply adversarial learning, allows the objective function of the problem to be optimized more effectively. One initially specifies a single high-level goal, such as producing fake images that are indistinguishable from real ones, and the network then learns a suitable loss function to achieve this goal automatically [9]. In the JOSHI method [16], a conditional GAN model based on image-to-image translation was proposed to reconstruct the ridge structure of latent fingerprints. As discussed above, most previous enhancement methods have focused on matching latent fingerprints left unintentionally at a crime scene. Unlike previous methods, which deal with latent fingerprints, the proposed method addresses the problem of enhancing cross-sensor fingerprints. The problem of cross-sensor enhancement has been addressed in only a few studies. In [4,17], an adaptive histogram equalization method was proposed to enhance the contrast of contactless fingerprint ridges and valleys. To date, these are the only published studies concerning cross-sensor enhancement. No previous studies have addressed the cross-sensor enhancement problem using deep learning techniques.

3. Proposed Method

A critical issue when designing an effective cross-sensor fingerprint enhancement is preserving valleys, ridges, and other fingerprint features, such as minutiae. In view of this, we introduce a new method for cross-sensor fingerprint enhancement.

3.1. Problem Formulation

Fingerprint enhancement can be expressed as an image-to-image transformation problem. It aims to learn a mapping, denoted by $\mathcal{F}$, which transforms an input fingerprint $x \in \mathbb{R}^{m \times n}$ into an enhanced fingerprint $\hat{y}$. This implies finding a mapping $\mathcal{F}: \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$ such that $\hat{y} = \mathcal{F}(x; \theta)$, where $\theta$ represents the transformation parameters. A critical question in this context is how to model the mapping function $\mathcal{F}$. From a practical standpoint, the application of both deep learning and CNNs has shown promising performance in pattern recognition problems, as indicated in various studies [4,14,15]. This, in turn, motivated us to model $\mathcal{F}$ using a CNN. The learning method typically employed for CNNs is conventional learning, which is based on an objective function that minimizes the loss between the ground truth and the predicted output. Although this learning process is automatic, several studies have sought to design more effective loss functions [9].
Another efficient learning approach is based on the Generative Adversarial Network (GAN) framework. The learning method applied in GANs is adversarial learning, which is based on a min-max game and includes a specific loss function that one agent tries to maximize while the other tries to minimize [11].

3.2. The Design of the Mapping Function $\mathcal{F}$

The design of the mapping function $\mathcal{F}$ is a challenging problem since fingerprints captured by different sensors have different texture patterns and noise [4]. The desired mapping must enhance fingerprints by preserving the underlying fingerprint features and removing possible corruption and noise. To address these issues and effectively learn $\mathcal{F}$, two learning frameworks were investigated: conventional learning and adversarial learning.

3.2.1. Conventional Learning Framework (One-Net)

In this case, $\mathcal{F}$ was designed using a CNN model following an encoder–decoder architecture [18]. It takes a low-quality fingerprint as input and produces a high-quality one as output, minimizing the loss between the target images and the predicted ones. The architecture was adopted from SegNet [19] with some modifications. SegNet comprises two networks, an encoder and a corresponding decoder, followed by a final pixel-wise classification layer.
SegNet has five encoders and five corresponding decoders. Each encoder includes two consecutive convolutional layers followed by a max pooling layer. Each convolutional layer consists of 64 filters of size 3 × 3 with padding of 1 and a stride of 1, followed by a batch normalization (BN) layer and then an element-wise rectified linear non-linearity (ReLU). After that, a 2 × 2 max pooling layer with a stride of 2 is applied, and the related max pooling indices (locations) are saved.
Each corresponding decoder up-samples its input with a 2 × 2 max unpooling layer of stride 2 using the recalled max-pooling indices. It then convolves the result using two consecutive convolutional layers. Each convolutional layer contains 64 filters of size 3 × 3 and a stride of 1, followed by a batch normalization layer and then a ReLU layer. The final output is fed into a multi-class soft-max classifier to compute class probabilities for each pixel independently.
This model was specifically designed for segmentation purposes. However, since our goal is the enhancement task, the SegNet model was modified to receive a low-quality 300 × 300 × 1 fingerprint and generate a same-size fingerprint of higher quality. Both the soft-max layer and the pixel-wise classification layer were removed. Since the target task is to produce a same-size fingerprint with higher quality, a convolution layer with one filter of size 3 × 3 was also added, as shown in Figure 3.
The preservation of small and thin details is essential for fingerprint matching since they play an important role in determining the identity of each subject. Some of these details are the minutiae points formed mainly by ridge bifurcations and ridge endings. The ridge bifurcations are those points where ridges are divided into two ridges, whereas the ridge endings are those points where ridges end. The extraction of minutiae points is a difficult task in low-quality fingerprint images [1], see Figure 4.
These small details should be considered when designing the target model. Convolutional networks gradually reduce the image resolution until it is represented by tiny feature maps in which the spatial structure is no longer visible. This loss of spatial acuity may limit fingerprint enhancement. The issue can be addressed by dilated convolutions, which increase the resolution of the output feature maps without decreasing the receptive field of the individual neurons. Thus, the second modification introduced to the SegNet model is the addition of dilated convolutions.
Generally, a dilated convolution is a convolution with a wider kernel generated by repeatedly inserting spaces between the kernel elements [20]. Therefore, each convolutional layer in the encoder was substituted by a dilated convolution layer using the dilation factors 1, 1, 2, 2, 4, 4, 8, 8, 16, and 16, respectively. Our results illustrate that dilated convolution is appropriate for fingerprint enhancement since it enlarges the receptive field with no loss of coverage or resolution.
In the decoder network, each decoder up-samples its input feature map(s) using the memorized max-pooling indices of its corresponding encoder’s feature map(s). It should be noted that no learning takes place within the up-sampling stage. SegNet uses the max pooling indices to up-sample the feature map(s) and convolves them with a trainable decoder filter bank. Next, batch normalization is applied to each map. Finally, the high-dimensional feature representation at the output of the last decoder is fed to a convolutional layer followed by a Tanh layer, as shown in Table 1.
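To make the architecture concrete, the following PyTorch sketch shows one way the modified, dilated SegNet-style encoder–decoder could be assembled; the module and variable names are our own illustrative choices (not the authors' released code), while the layer settings follow Table 1 and the dilation factors listed above.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, dilation=1):
    # 3x3 convolution with "same" padding, followed by batch normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1,
                  padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DilatedSegNetEnhancer(nn.Module):
    """Encoder-decoder generator: 300 x 300 x 1 low-quality input, same-size enhanced output."""
    def __init__(self, ch=64):
        super().__init__()
        dilations = [(1, 1), (2, 2), (4, 4), (8, 8), (16, 16)]   # dilation factors per encoder block
        self.enc_blocks = nn.ModuleList()
        in_ch = 1
        for d1, d2 in dilations:
            self.enc_blocks.append(nn.Sequential(
                conv_bn_relu(in_ch, ch, d1), conv_bn_relu(ch, ch, d2)))
            in_ch = ch
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec_blocks = nn.ModuleList([
            nn.Sequential(conv_bn_relu(ch, ch), conv_bn_relu(ch, ch))
            for _ in dilations])
        # Final 3x3 convolution with a single filter, followed by Tanh (Conv 6_1 in Table 1).
        self.head = nn.Sequential(nn.Conv2d(ch, 1, kernel_size=3, padding=1), nn.Tanh())

    def forward(self, x):
        indices, sizes = [], []
        for enc in self.enc_blocks:                  # encoder: conv blocks + pooling with saved indices
            x = enc(x)
            sizes.append(x.size())
            x, idx = self.pool(x)
            indices.append(idx)
        for dec in self.dec_blocks:                  # decoder: max unpooling with the saved indices
            x = self.unpool(x, indices.pop(), output_size=sizes.pop())
            x = dec(x)
        return self.head(x)
```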

3.2.2. The Adversarial Learning Framework (Two-Net)

This type of learning is based on the conditional generative adversarial network (cGAN) framework [9]. The cGAN framework consists of a generator and a discriminator. The role of the generator is to produce a transformed image from the input one, while the discriminator determines whether an input image is real or fake. In the training stage, the generator and discriminator play a min-max game. For this task, $\mathcal{F}$ plays the role of the generator, producing a high-quality fingerprint ($\hat{y}$) from a low-quality one ($x$). The enhanced high-quality fingerprint must have a clear ridge structure that preserves the valleys, ridges, and further fingerprint features, such as minutiae points. The discriminator differentiates real fingerprints from generated ones, which helps to learn $\mathcal{F}$.
To effectively learn $\mathcal{F}$ via the cGAN framework, it is treated as a generator that produces an enhanced image $\hat{y}$ from an input image $x$. To model $\mathcal{F}$, the dilated SegNet is deployed, since both the input and output are images of the same size 300 × 300 × 1, as explained in the first framework. The discriminator $D$ is modeled using a patch-based GAN discriminator adopted from [9]. The first convolutional layer contains 64 filters with a stride of 2 and an input depth of 2, followed by a Leaky ReLU layer. The second convolutional layer consists of 128 filters with a stride of 2, the third contains 256 filters with a stride of 2, and the fourth contains 512 filters with a stride of 2; each of these layers is followed by a batch normalization layer and a Leaky ReLU. The last layer is a convolutional layer consisting of one filter with a stride of 1. All these convolutional layers contain filters of size 4, as illustrated in Figure 5.
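A minimal sketch of such a patch-based discriminator is given below, assuming a 2-channel input formed by the channel-wise concatenation of the conditioning fingerprint and the real or generated one; the class and function names are illustrative assumptions, and the layer settings follow the description above.

```python
import torch.nn as nn

def disc_block(in_ch, out_ch, stride, norm=True):
    # 4x4 convolution, optional batch normalization, Leaky ReLU.
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1)]
    if norm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

class PatchDiscriminator(nn.Module):
    """Classifies each patch of a concatenated (input, target-or-generated) pair as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            disc_block(2, 64, stride=2, norm=False),   # input depth 2: concatenation of x and y
            disc_block(64, 128, stride=2),
            disc_block(128, 256, stride=2),
            disc_block(256, 512, stride=2),
            nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, pair):   # pair: tensor of shape (N, 2, 300, 300)
        return self.net(pair)
```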

3.3. Loss Functions and the Learning of $\mathcal{F}$

In the first framework, $\mathcal{F}$ is learned through conventional learning: it takes a low-quality fingerprint $x$ and produces a high-quality one. The model minimizes the difference between the generated fingerprint and the ground truth $y$. We used two loss functions: the L1 loss [9] and the edge loss [10].
The first loss used is the $L_1$ distance, which is expressed as follows:

$$\mathcal{L}_{L1}(\mathcal{F}) = \mathbb{E}_{x,y}\left[ \left\| y - \mathcal{F}(x) \right\|_{1} \right]$$
An ideal fingerprint image has valleys and ridges that flow in a locally regular direction. In this case, the detection of ridges is straightforward, and minutiae can be accurately located within the image. Nevertheless, skin conditions (e.g., dry/wet, bruises, and cuts), improper finger pressure, and sensor noise significantly impact fingerprint image quality.
Therefore, the edge loss function is added to improve the fingerprint ridge structures by penalizing differences in edge structure. For this, the ridge patterns (edge responses) of the generated fingerprint and of the corresponding ground truth fingerprint are first computed, and then the loss is used to update the parameters of $\mathcal{F}$. The edge loss is denoted as $\mathcal{L}_{edge}$ and can be expressed as follows:
$$\mathcal{L}_{edge}(\mathcal{F}) = \sqrt{\left\| \Delta(\mathcal{F}(x)) - \Delta(y) \right\|^{2} + \varepsilon^{2}}$$
where $\Delta$ represents the Laplacian of Gaussian operator, $y$ denotes the ground truth (high-quality) fingerprint, and $\mathcal{F}(x)$ denotes the enhanced image. The constant $\varepsilon$ is empirically set to $10^{-3}$, as in [10]. This loss preserves edge features that are useful for improving ridge patterns.
The total loss of the conventional learning framework is:

$$\mathcal{L}_{Conventional}(\mathcal{F}) = \mu \, \mathcal{L}_{L1}(\mathcal{F}) + \lambda \, \mathcal{L}_{edge}(\mathcal{F}).$$
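The snippet below sketches how the $L_1$ and edge losses could be computed in PyTorch; the 5 × 5 Laplacian-of-Gaussian kernel is our own assumption (the exact kernel used by the authors is not specified), while $\varepsilon$, $\mu$, and $\lambda$ follow the values stated in the text.

```python
import torch
import torch.nn.functional as F

def log_kernel(size=5, sigma=1.0, device="cpu"):
    # Discrete Laplacian-of-Gaussian kernel used to extract the edge (ridge) structure.
    ax = torch.arange(size, dtype=torch.float32, device=device) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * torch.exp(-r2 / (2 * sigma ** 2))
    k = k - k.mean()                       # zero mean so flat regions give no response
    return k.view(1, 1, size, size)

def edge_loss(pred, target, eps=1e-3):
    # Charbonnier-style loss on LoG responses, following Equation (2).
    # pred and target are tensors of shape (N, 1, H, W).
    k = log_kernel(device=pred.device)
    e_pred = F.conv2d(pred, k, padding=2)
    e_target = F.conv2d(target, k, padding=2)
    return torch.sqrt(torch.sum((e_pred - e_target) ** 2) + eps ** 2)

def conventional_loss(pred, target, mu=100.0, lam=0.001):
    # Total loss of the conventional (One-Net) framework, Equation (3).
    l1 = torch.mean(torch.abs(target - pred))
    return mu * l1 + lam * edge_loss(pred, target)
```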
In the second framework, learning is inspired by the method of [9]. Both $D$ and $\mathcal{F}$ are learned using adversarial learning. The training dataset includes pairs of poor- and high-quality fingerprints, expressed as $(x_i, y_i)$, where $x_i$ is the poor-quality fingerprint image and $y_i$ is the corresponding high-quality one (ground truth).
A fingerprint $x$ is fed into $\mathcal{F}$, which maps it to an enhanced version $\hat{y}$. The channel-wise concatenations of the pairs $(x, y)$ and $(x, \hat{y})$ are then fed into $D$ to be classified as real or generated fingerprints. The discriminator ensures that the generator effectively learns to preserve the ridge structures of the enhanced fingerprints. The adversarial loss is given below:
$$\mathcal{L}_{GAN}(\mathcal{F}, D) = \mathbb{E}_{(x,y)}\left[ \log D(x, y) \right] + \mathbb{E}_{x}\left[ \log\left( 1 - D\left(x, \mathcal{F}(x)\right) \right) \right].$$
A custom training loop is deployed to train the model on the training dataset, updating the network weights in each iteration. During training, $\mathcal{F}$ tries to produce fingerprints that are hard for $D$ to classify as synthetic. In turn, $D$ tries not to be misled by $\mathcal{F}$ and improves its discrimination between original and synthetic fingerprints by reducing the value of its loss function.
We combined the edge loss and the $L_1$ distance with adversarial learning. The final objective function is expressed below:

$$\mathcal{F}^{*} = \arg \min_{\mathcal{F}} \max_{D} \; \mathcal{L}_{GAN}(\mathcal{F}, D) + \mu \, \mathcal{L}_{L1}(\mathcal{F}) + \lambda \, \mathcal{L}_{edge}(\mathcal{F}).$$
Figure 6 illustrates the training framework, which learns $\mathcal{F}$ to produce an enhanced fingerprint from an input one.
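As a rough sketch of one adversarial training iteration implementing Equation (5), the following function alternates a discriminator update and a generator update; the update scheme, function names, and variable names are our assumptions rather than the authors' exact training code, and edge_loss refers to the sketch given earlier.

```python
import torch
import torch.nn.functional as F

bce = torch.nn.BCEWithLogitsLoss()

def train_step(gen, disc, opt_g, opt_d, x, y, mu=100.0, lam=0.001):
    """One min-max update; x = low-quality fingerprint batch, y = ground-truth targets."""
    # Discriminator update: real pairs should score 1, generated pairs 0.
    with torch.no_grad():
        y_fake = gen(x)
    d_real = disc(torch.cat([x, y], dim=1))
    d_fake = disc(torch.cat([x, y_fake], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator, plus weighted L1 and edge losses.
    y_fake = gen(x)
    d_fake = disc(torch.cat([x, y_fake], dim=1))
    loss_g = (bce(d_fake, torch.ones_like(d_fake))
              + mu * F.l1_loss(y_fake, y)
              + lam * edge_loss(y_fake, y))   # edge_loss as sketched in the previous snippet
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```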

3.4. Assessing the Quality of the Enhancements

Although both NFIQ [6] and SSIM [21] are popular and accurate metrics widely used to measure fingerprint quality, they do not offer a comprehensive description of what happens during enhancement; in particular, the number of enhanced or degraded images is not considered. A new metric was therefore designed to comprehensively describe the performance for each quality class by analyzing the NFIQ results.

Fingerprint Quality Enhancement Index (FQEI)

The details of the new metric for assessing the enhancement potential of an algorithm are given in the following paragraphs. A fingerprint can be assigned to one of five quality levels, i.e., Q1: excellent, Q2: very good, Q3: good, Q4: fair, or Q5: poor, based on the scores obtained from the NFIQ tool [6]. Using the quality levels of fingerprints before and after enhancement, we compute the Quality Confusion Matrix (QCM) as shown in Table 2, where $Q_{ij}$ is the number of images whose original quality was $Q_j$ and whose quality after enhancement is $Q_i$.
To quantify the enhancement quality, each $Q_{ij}$ in the QCM is scaled by the corresponding coefficient $w_{ij}$ in the weight quality matrix (WQM), shown in Table 3.
In the WQM, (i) $w_{ii} = 0$ because there is no change in the quality level of the fingerprints; (ii) $w_{ij}$ ($i < j$) is 1, 2, 3, or 4 depending on the number of levels of enhancement, e.g., for $Q_{13}$ the quality of the fingerprints after enhancement goes two levels up, from Q3 to Q1, so it is weighted with $w_{13} = 2$; (iii) $w_{ij}$ ($i > j$) is −1, −2, −3, or −4 depending on the number of levels of degradation.
The enhancement score ($E_s$), which quantifies the improvement of fingerprints that were in a low-quality class before enhancement and are assigned to a higher-quality class after enhancement, can be expressed using the QCM and WQM as follows:
$$E_s = \sum_{j=2}^{5} \sum_{i<j} Q_{ij} \times w_{ij}$$
The degradation score ($D_s$), which quantifies the degradation of fingerprints that were in a high-quality class before enhancement and are assigned to a lower-quality class after enhancement, can be expressed using the QCM and WQM as follows:
$$D_s = \sum_{i=2}^{5} \sum_{j<i} Q_{ij} \times w_{ij}$$
In the ideal-case ($I_S$) scenario, all images are enhanced from their original quality class to the excellent class. In other words, $I_S$ can be represented as a weighted sum of all images, except those already of quality Q1, using the following formula:
$$I_S = (Q_{12} \times 1) + (Q_{13} \times 2) + (Q_{14} \times 3) + (Q_{15} \times 4)$$
where $Q_{12}$ represents images from the very good class that are enhanced one level up to the excellent class, and so on.
In contrast, in the worst-case ($W_S$) scenario, all images move from their quality class to the poor class, excluding the poor class itself, whose images keep their class. This means that $W_S$ can be expressed as a weighted sum of all images, except those in the poor class, using the following formula:
$$W_S = (Q_{51} \times 4) + (Q_{52} \times 3) + (Q_{53} \times 2) + (Q_{54} \times 1)$$
where $Q_{51}$ represents images from the excellent class that are degraded four levels down to the poor class, and so on.
To measure the enhancement ratio (ER), $E_s$, computed using Equation (6), is divided by $I_S$, computed using Equation (8). Thus, the ER is expressed as follows:
$$ER = \frac{E_s}{I_S}$$
In contrast, the degradation ratio (DR) is obtained by dividing $D_s$ by $W_S$:
$$DR = \frac{D_s}{W_S}$$
The difference between the enhancement ratio and the degradation ratio is computed to determine the degree of enhancement for measuring the performance of an algorithm:
$$\text{FQEI} = ER - DR.$$
In the ideal case scenario FQEI = 1, and it is equal to −1 in the worst-case scenario.
The closer the FQEI is to one, the better the enhancement, and vice versa. An illustrative example is provided in Appendix A.
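To make the metric concrete, the following sketch computes the FQEI from paired NFIQ scores before and after enhancement; the handling of the signs of $D_s$ and $W_S$ is our interpretation of Equations (7) and (9), chosen so that the FQEI stays within [−1, 1].

```python
import numpy as np

def fqei(nfiq_before, nfiq_after):
    """Fingerprint Quality Enhancement Index from NFIQ levels (1 = excellent ... 5 = poor)."""
    before = np.asarray(nfiq_before)
    after = np.asarray(nfiq_after)
    # Quality Confusion Matrix: rows = quality after enhancement, columns = original quality.
    qcm = np.zeros((5, 5), dtype=int)
    for b, a in zip(before, after):
        qcm[a - 1, b - 1] += 1
    # Weight Quality Matrix: +k for k levels of enhancement, -k for k levels of degradation.
    wqm = np.array([[j - i for j in range(5)] for i in range(5)])
    wcm = qcm * wqm
    e_s = wcm[np.triu_indices(5, k=1)].sum()          # enhancement score, Equation (6)
    d_s = -wcm[np.tril_indices(5, k=-1)].sum()        # degradation magnitude, Equation (7)
    n_orig = qcm.sum(axis=0)                          # number of images per original class
    i_s = sum(n_orig[j] * j for j in range(1, 5))     # ideal case, Equation (8)
    w_s = sum(n_orig[j] * (4 - j) for j in range(4))  # worst case, Equation (9)
    er = e_s / i_s if i_s else 0.0                    # enhancement ratio, Equation (10)
    dr = d_s / w_s if w_s else 0.0                    # degradation ratio, Equation (11)
    return er - dr                                    # FQEI, Equation (12)
```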

4. Training and Testing

In this section, we discuss the training stage, which uses training data to learn the model, and the testing stage, which evaluates it on test data.

4.1. Training Details

The constructed model is a supervised generative model trained to generate high-quality fingerprint images from low-quality ones. In practice, a supervised model needs paired training data of low-quality fingerprints and their corresponding enhanced images. However, cross-sensor fingerprint datasets contain low-quality fingerprints whose high-quality counterparts are not available. Moreover, cross-sensor fingerprint databases do not contain enough high-quality images, which makes training deep neural network models difficult. Therefore, there is a need to generate fingerprints with noise characteristics similar to those of real fingerprints, as shown in Figure 1, together with their enhanced versions, to train the enhancement model. The following subsections detail the datasets prepared for training the model.

FingerPass Database

The training data were fingerprints from the AES2501 sensor of the FingerPass dataset, comprising 8460 images of different qualities: excellent, very good, good, fair, and poor. To help the model learn how to enhance fingerprints with different quality levels, all fingerprints were enhanced using the HONG method [12], and these enhanced versions were used as the target fingerprints.
The proposed method was trained using minibatch SGD with the Adam optimizer and the following parameters: momentum parameters β1 = 0.5 and β2 = 0.999, learning rate 0.002, μ = 100, and λ = 0.001.
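For reference, a minimal sketch of this optimizer configuration is shown below, assuming separate Adam optimizers for the generator and the discriminator (which the text does not spell out); DilatedSegNetEnhancer and PatchDiscriminator refer to the illustrative sketches in Section 3.2.

```python
import torch

gen = DilatedSegNetEnhancer()     # generator sketched in Section 3.2.1
disc = PatchDiscriminator()       # discriminator sketched in Section 3.2.2
opt_g = torch.optim.Adam(gen.parameters(), lr=0.002, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc.parameters(), lr=0.002, betas=(0.5, 0.999))
mu, lam = 100.0, 0.001            # weights of the L1 and edge losses
```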

4.2. Testing Details

The performance of the proposed method was tested using two benchmark public databases: FingerPass [3] and MOLF [5].

4.2.1. Multisensor Optical and Latent Fingerprint (MOLF) Dataset

This dataset includes images captured using three different sensors that share the same sensing technology (optical) and the same capturing method (press). Images in the database come from 100 subjects, where each of the 10 fingers was captured in two sessions (two independent instances were captured in each session). Each sensor was used to capture 4000 images covering 1000 fingerprint classes.
Live-scan images in the database are categorized into three subsets: DB1, DB2, and DB3. It can be noted from Figure 2 that those images are visually different due to the acquisition sensor used and the capturing process applied.

4.2.2. FingerPass Database

FingerPass consists of images of the same eight fingers (thumb, index finger, middle finger, and ring finger of both hands) captured using nine sensors from 90 subjects; a sample is shown in Figure 1.
It includes two sensing technologies (optical and capacitive sensors) and two capturing methods (press and sweep). Each subject was asked to provide 12 impressions of each finger. Therefore, the database includes images of 720 fingers, and the total number of impressions for one sensor is 90 × 8 × 12 = 8640 images.
Since our model is trained on fingerprints of size 300 × 300 × 1, the fingerprints from the MOLF and FingerPass datasets were preprocessed to match the required size.
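A simple preprocessing sketch is shown below (grayscale conversion and resizing to 300 × 300); the scaling of intensities to [−1, 1] to match the Tanh output is our assumption.

```python
import numpy as np
from PIL import Image

def preprocess(path, size=300):
    # Load a fingerprint, convert to grayscale, and resize to the network input size.
    img = Image.open(path).convert("L").resize((size, size), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr * 2.0 - 1.0        # scale intensities to [-1, 1]
```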

5. Experimental Results

In this section, we introduce the metric used to evaluate our results and present the outcome of the conducted experiments.

5.1. Fingerprint Image Quality Analysis

The NFIQ module of NBIS [6] was used to analyze the ability of the proposed algorithm to enhance the quality of cross-sensor fingerprints. It provides a value between 1 and 5, where 1 represents the best quality and 5 the worst. The score distributions before and after applying the enhancement method were assessed using fingerprints from the MOLF and FingerPass datasets to evaluate the performance. The results for MOLF enhancement using adversarial learning are shown in Table 4.
It can be noticed from Table 4 that the images were enhanced even though different sensors were used to capture them. In DB1, there is a significant image quality improvement: 3796 out of 4000 images reached the excellent class after enhancement, leaving only 204 images below that class.
Moreover, for DB2 the number of images in the excellent class rises from 1340 to 2255, with a noticeable reduction in the fair and poor classes, from 27 and 89 images before enhancement to 8 and 4 images after, respectively. DB3 shows an increase of 1285 images in the excellent class and a reduction in all other classes; the number of fingerprints in the poor class drops from 97 to 18 after enhancement.
Two learning methods were applied: conventional learning and adversarial learning. To test the impact of conventional learning, a single network was constructed with a loss function that minimizes the distance between the predicted output and the ground truth, as described in Section 3.2.1. The impact of adversarial learning was tested using two networks, a generator and a discriminator, as described in Section 3.2.2. The results on the MOLF datasets are shown in Table 5.
It can be noticed from Table 5 that the experiment based on adversarial learning offered better results than the conventional one, although the same network architecture was used to generate the fingerprints.

Comparison with the State-of-the-Art Methods

There are various studies in the field of fingerprint enhancement, for example, the methods proposed in [7,8,14,15]. Although the HONG and CHIK methods are relatively old, their performance is still better than that of more recent methods for cross-sensor fingerprint enhancement, and for this reason they have been used in recent cross-sensor matching methods [4,22,23,24,25,26]. We therefore compared our method with the HONG and CHIK methods as well as a more recent method, the JOSHI method [16].
Figure 7, Figure 8 and Figure 9 illustrate the comparison results on DB1, DB2, and DB3.
Figure 7, Figure 8 and Figure 9 reveal that our method outperforms the HONG and CHIK methods in enhancing fingerprints to the excellent class for DB1 and DB3. For DB2, the number of fingerprints enhanced to the excellent class by the HONG method is slightly higher than that of our method and CHIK.

5.2. Fingerprint Quality Enhancement Index (FQEI)

The FQEI metric was measured on the MOLF datasets DB1, DB2, and DB3 by comparing four methods: HONG, CHIK, JOSHI [16], and our method; the obtained results are provided in Table 6. It can be clearly noticed that our method outperformed the HONG, CHIK, and JOSHI methods on DB1, DB2, and DB3.
For DB1, the performance of the HONG method is 0.2581 since its $E_s$ is 348, which is smaller in magnitude than its $D_s$ (−808). This means that the number of images above the diagonal is less than the number of images below it. The same holds for CHIK, whose $E_s$ is 168 while its $D_s$ is −1943, since a large number of fingerprints was degraded from the excellent class to the very good class. In contrast, our method has a higher enhancement score than degradation score. Thus, our method outperformed both the HONG and CHIK methods on DB1, DB2, and DB3.
Table 7, Table 8, Table 9, Table 10 and Table 11 compare the enhancement results obtained with the HONG, CHIK, and JOSHI methods and our method on the FingerPass dataset using NFIQ and our FQEI metric.
From Table 7, which reports the FingerPass dataset before enhancement, it can be noticed that three sensors have the highest numbers of images in the poor class: the AES3400, ATRUA, and FPC sensors with 1398, 3107, and 2507 images, respectively.
Comparing the NFIQ results of the methods after enhancement, our method offered the largest improvement for these three sensors, reducing the poor class to zero images for the first sensor, one image for the second, and zero images for the third. Moreover, it raised the number of images in the excellent class to more than 8000 for the first two of these sensors (AES3400 and ATRUA) and for the URU4000B sensor. In contrast, the HONG method showed the highest enhancement for the AES2501 sensor. There are also two sensors, WS and V300, with the highest numbers of images in the excellent class.
The overall results show that our method was mostly the best at increasing the number of images in the excellent class. The CHIK method usually moves fingerprints into the excellent and very good classes, but with a noticeable reduction in the number of excellent-class images for most sensors. The JOSHI method increases the number of poor fingerprints for two sensors: AES3400 and V300.
In terms of the FQEI metric, our method shows the highest results for five out of nine sensors. The results on the AES3400, ATRUA, and URU4000B sensors are 0.9388, 0.9707, and 0.9149, respectively, which are very close to 1 and hence indicate a very high enhancement performance. However, a negative enhancement was obtained by the JOSHI method for two sensors, AES3400 and V300. On the other hand, the CHIK method gave an FQEI of −0.0012 for the AES3400 sensor, where the minus sign indicates distortion of the images; this can be clearly seen in the corresponding confusion matrix in Table 12, where most images remained in the good class without enhancement and only a slight enhancement from the poor class to the good class was observed.

5.3. Structural Similarity Index Metric (SSIM)

Fingerprint enhancement algorithms should improve fingerprints without changing the ridge structure. Because of the lack of databases that include low-quality images together with corresponding high-quality images, this property was assessed by computing the SSIM [21] between fingerprints generated using Anguli and their related ground truth. In other words, the higher the obtained SSIM value, the better the structural similarity between the generated and ground-truth fingerprints is preserved, which also indicates that the ridge structure is maintained.
A comparison was conducted for fingerprints enhanced using the HONG method, the CHIK method, and our method. The test dataset contains two thousand synthetic fingerprints generated using Anguli [27], an open-source implementation of the fingerprint generator SFinGe [28] that simulates synthetic live fingerprints with features similar to real live fingerprints. The 2000 synthetic fingerprint images produced by Anguli were used to test the model, with pattern types following the normal distribution, including the arch, right loop, left loop, whorl, and double loop. From the images generated using Anguli, lower-quality input images were generated by adding Gaussian noise, applying morphological operations, and blurring by filtering in the frequency domain.
Both the mean and standard deviation of SSIM were then computed as shown in Table 13.
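The statistics could be computed with scikit-image as in the sketch below; the degradation step shown is a simplified stand-in for the noise, morphological, and frequency-domain blurring operations described above.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from skimage.filters import gaussian
from skimage.util import random_noise

def degrade(fp):
    # Simplified degradation: additive Gaussian noise followed by Gaussian blurring.
    noisy = random_noise(fp, mode="gaussian", var=0.01)
    return gaussian(noisy, sigma=1.5)

def ssim_stats(ground_truths, enhanced):
    # ground_truths, enhanced: lists of 2-D float arrays in [0, 1]
    scores = [ssim(gt, en, data_range=1.0) for gt, en in zip(ground_truths, enhanced)]
    return float(np.mean(scores)), float(np.std(scores))
```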
The mean SSIM between the enhanced fingerprints generated by our model and the ground truth is 0.5127. It can be noticed that our method had the highest mean SSIM, which means that the preservation of ridge patterns is best in our method.

5.4. Computation Time

The average computation time needed to enhance the URU4000B sensor dataset was computed. All methods were run in the same environment (MATLAB R2021b) on a laptop with an Intel Core i7-9750H CPU at 2.60 GHz, 32.0 GB of RAM, and 64-bit Microsoft Windows 10. Our method is faster than the HONG, CHIK, and JOSHI [16] methods, as shown in Table 14.

6. Discussion

The fingerprint sensor interoperability problem concerns how a fingerprint-matching system can compensate for the differences among fingerprints captured from the same person by several sensors. The main causes of such variability are the differences in the sensors’ capturing technology, scanning area, resolution, and interaction type.
In practice, each sensor generates its own specific type of distortion; hence, fingerprints captured by various sensors need to be enhanced. To achieve this, a cross-sensor enhancement method was designed and trained using fingerprints from a single sensor, the AES2501. Nevertheless, this method generalized well, producing enhancement for the other sensors in the FingerPass and MOLF datasets. The learning approach adopted is adversarial learning, which offers better enhancement than conventional learning. Moreover, it was found that the global flow of the ridge patterns in the fingerprints captured by different sensors was not changed, which indicates that the discriminative ridge information is preserved. Hence, the edge loss, L1 loss, and adversarial loss were used as loss functions.
The use of dilated convolution offered better enhancement results than convolution alone. This means that the small fingerprint details that are important for determining identity, such as minutiae points and edges, were preserved. This is clearly illustrated in Table 15.
Comparing the results of our method with those of two state-of-the-art fingerprint enhancement methods, HONG and CHIK, and a more recent method, the JOSHI method [16], using two metrics, our method outperformed all of them. However, the NFIQ metric does not offer a precise description of the enhancement performance. Therefore, a new metric, called FQEI, was designed. This metric gives a single value between 1 and −1 instead of the five class counts provided by NFIQ.
Figure 10 shows zoomed-in views of fingerprints enhanced using the three methods. From the examples in Figure 10, it can be noticed that the ridges smoothed by the HONG method were enhanced more than those of the CHIK method. On the other hand, our method enhanced the fingerprints while preserving their original ridge patterns better than HONG and CHIK.
From Table 14, it is obvious that our method offers faster enhancement than the HONG, CHIK, and JOSHI methods: the average computation time needed to enhance one fingerprint by the HONG, CHIK, JOSHI, and our method was 0.63, 0.48, 0.38, and 0.087 s, respectively. Thus, our method requires only about 14% of the computation time of the HONG method. However, the results for two sensors, FX3000 and V300, were lower than expected, since the nature of their fingerprints differs from that of the original training data.

7. Conclusions

With the continuous developments in fingerprint sensor technologies and the Internet of Things (IoT), the use of biometric fingerprint identification has been increasing over the years. Differences in sensor technologies and resolution can lead to different types of distortion, which affect fingerprint image quality; therefore, fingerprints must be enhanced. However, the cross-sensor enhancement problem has not been sufficiently investigated in the related literature. This paper proposed an efficient deep learning-based solution for this problem, in which a cGAN framework is used to train an image-to-image transformation for fingerprint enhancement. It was demonstrated that the proposed method significantly enhances cross-sensor fingerprints regardless of the sensor type used. However, there is still room for further enhancement. A suggested direction for future work is to explore different loss functions to preserve and recover the ridge patterns.

Author Contributions

Conceptualization, A.A., M.H., H.A. and G.B.; methodology, A.A. and M.H.; software, A.A. and W.A.; validation, A.A., M.H. and W.A.; formal analysis, A.A. and M.H.; investigation, A.A. and M.H.; resources, M.H., W.A. and G.B.; data curation, A.A. and W.A.; writing—original draft preparation, A.A. and M.H.; writing—review and editing, A.A., W.A. and G.B.; visualization, A.A. and W.A.; supervision, M.H. and H.A.; project administration, M.H. and H.A.; funding acquisition, M.H. and H.A. All authors have read and agreed to the published version of the manuscript.

Funding

This Project was funded by the National Plan for Science, Technology and Innovation (MAARIFAH), King Abdulaziz City for Science and Technology, Kingdom of Saudi Arabia, under Project no. 13-INF946-02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The MOLF dataset is available at http://research.iiitd.edu.in/groups/iab/molf.html (accessed on 22 May 2021), and the FingerPass dataset is available at http://www.fingerpass.csdb.cn/ (accessed on 22 May 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

To clarify the FQEI metric, the following example is provided. For a small dataset of 40 fingerprint images of different qualities, Table A1 lists the NFIQ quality distributions before (original) and after enhancement. The QCM (Table A2) is then computed, where each column corresponds to an original quality class and each row to a quality class after enhancement. Both $I_S$ and $W_S$ are then computed as follows:
$$I_S = 10 \times 1 + 10 \times 2 + 5 \times 3 + 10 \times 4$$
$$W_S = 5 \times 4 + 10 \times 3 + 10 \times 2 + 5 \times 1$$
Table A1. NFIQ quality scores of the example before and after enhancement.

NFIQ Values     Q1   Q2   Q3   Q4   Q5
Original         5   10   10    5   10
The enhanced    20    4   14    1    1
Table A2. Computing the QCM.

      Q1   Q2   Q3   Q4   Q5
q1     1    8    5    2    4
q2     0    1    2    0    1
q3     3    0    3    3    5
q4     1    0    0    0    0
q5     0    1    0    0    0
Table A3. Computing the WCM by multiplying the QCM with the WQM.

      Q1   Q2   Q3   Q4   Q5
q1     0    8   10    6   16
q2     0    0    2    0    3
q3    −6    0    0    3   10
q4    −4    0    0    0    0
q5     0   −3    0    0    0
Table A4. Calculating the FQEI.

E_s   D_s   I_S   W_S   ER       DR        FQEI
58    −13   85    −75   0.6823   −0.1733   0.509

References

  1. Maltoni, D.; Maio, D.; Jain, A.K.; Prabhakar, S. Handbook of Fingerprint Recognition; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009.
  2. Ross, A.; Jain, A. Biometric Sensor Interoperability: A Case Study in Fingerprints. In Biometric Authentication; Maltoni, D., Jain, A.K., Eds.; Lecture Notes in Computer Science 3087; Springer: Berlin/Heidelberg, Germany, 2004; pp. 134–145.
  3. Jia, X.; Yang, X.; Zang, Y.; Zhang, N.; Tian, J. A Cross-Device Matching Fingerprint Database from Multi-Type Sensors. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 3001–3004.
  4. Lin, C.; Kumar, A. A CNN-based framework for comparison of contactless to contact-based fingerprints. IEEE Trans. Inf. Forensics Secur. 2018, 14, 662–676.
  5. Sankaran, A.; Vatsa, M.; Singh, R. Multisensor Optical and Latent Fingerprint Database. IEEE Access 2015, 3, 653–665.
  6. NIST. NIST Biometric Image Software (NBIS). 2021. Available online: https://www.nist.gov/services-resources/software/nist-biometric-image-software-nbis (accessed on 9 May 2022).
  7. Li, J.; Feng, J.; Kuo, C.-C.J. Deep convolutional neural network for latent fingerprint enhancement. Signal Process. Image Commun. 2018, 60, 52–63.
  8. Svoboda, J.; Monti, F.; Bronstein, M.M. Generative convolutional networks for latent fingerprint reconstruction. In Proceedings of the 2017 IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 429–436.
  9. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
  10. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831.
  11. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems, Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680.
  12. Hong, L.; Wan, Y.; Jain, A. Fingerprint image enhancement: Algorithm and performance evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 777–789.
  13. Chikkerur, S.; Wu, C.; Govindaraju, V. A systematic approach for feature extraction in fingerprint images. In Biometric Authentication; Zhang, D., Jain, A.K., Eds.; Lecture Notes in Computer Science 3072; Springer: Berlin/Heidelberg, Germany, 2004; pp. 344–350.
  14. Wong, W.J.; Lai, S.-H. Multi-Task CNN for Restoring Corrupted Fingerprint Images. Pattern Recognit. 2020, 101, 107203.
  15. Schuch, P.; Schulz, S.; Busch, C. De-convolutional auto-encoder for enhancement of fingerprint samples. In Proceedings of the 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), Oulu, Finland, 12–15 December 2016; IEEE: New York, NY, USA, 2016.
  16. Joshi, I.; Anand, A.; Vatsa, M.; Singh, R.; Roy, S.D.; Kalra, P. Latent fingerprint enhancement using generative adversarial networks. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019; pp. 895–903.
  17. Lin, C.; Kumar, A. Improving cross sensor interoperability for fingerprint identification. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; IEEE: New York, NY, USA, 2016.
  18. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507.
  19. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  20. Hassani, I.K.; Pellegrini, T.; Masquelier, T. Dilated convolution with learnable spacings. arXiv 2021, arXiv:2112.03740.
  21. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  22. Zang, Y.; Yang, X.; Jia, X.; Zhang, N.; Tian, J.; Zhu, X. A coarse-fine fingerprint scaling method. In Proceedings of the 2013 International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013; IEEE: New York, NY, USA, 2013.
  23. Alshehri, H.; Hussain, M.; Aboalsamh, H.A.; Al Zuair, M.A. Cross-sensor fingerprint matching method based on orientation, gradient, and Gabor-HoG descriptors with score level fusion. IEEE Access 2018, 6, 28951–28968.
  24. AlShehri, H.; Hussain, M.; AboAlSamh, H.; AlZuair, M. A large-scale study of fingerprint matching systems for sensor interoperability problem. Sensors 2018, 18, 1008.
  25. Alshehri, H.; Hussain, M.; Aboalsamh, H.A.; Emad-Ul-Haq, Q.; AlZuair, M.; Azmi, A.M. Alignment-Free Cross-Sensor Fingerprint Matching Based on the Co-Occurrence of Ridge Orientations and Gabor-HoG Descriptor. IEEE Access 2019, 7, 86436–86452.
  26. Alrashidi, A.; Alotaibi, A.; Hussain, M.; AlShehri, H.; AboAlSamh, H.; Bebis, G. Cross-Sensor Fingerprint Matching Using Siamese Network and Adversarial Learning. Sensors 2021, 21, 3657.
  27. Ansari, A.H. Generation and Storage of Large Synthetic Fingerprint Database. Master’s Thesis, Indian Institute of Science, Bangalore, India, 2011.
  28. Cappelli, R.; Maio, D.; Maltoni, D. SFinGe: An approach to synthetic fingerprint generation. In Proceedings of the International Workshop on Biometric Technologies (BT’04), Prague, Czech Republic, 15 May 2004; pp. 147–154.
Figure 1. Fingerprints from the FingerPass database of the same finger that were captured by different sensors.
Figure 2. Quality variations Q = {1, 2, 3, 4,5} per impression for the same subject across three sensors: (a) Lumidigm Venus, (b) Secugen Hamster-IV, and (c) CrossMatch L-Scan Patrol from MOLF dataset.
Figure 3. Conventional learning framework.
Figure 4. The two most common minutiae—ridge ending and bifurcation. Reprinted with permission from Ref. [1]. Copyright 2022, Springer Nature.
Figure 5. The adversarial learning framework. The blue color represents dilated Conv, BN and ReLU layers; the pink color represents Max-Pooling layer; the green color represents up sampling layer; light grey color represents Conv, BN and ReLU layers; the dark grey color represents Conv and Tanh layers.
Figure 6. The learning procedure of using adversarial learning. The thin arrows represent the input; the thick arrows represent the output; The dotted lines represent weights updating, the dashes represent the two fingerprints used to calculate the edge loss and L1 loss; and the circles represent the channel-wise concatenation.
Figure 7. Comparison between the enhancement results of HONG [12], CHIK [13], JOSHI [16], and our method on DB1.
Figure 8. Comparison between the enhancement results of HONG [12], CHIK [13], JOSHI [16], and our method on DB2.
Figure 9. Comparison between the enhancement results of HONG [12], CHIK [13], JOSHI [16], and our method on DB3.
Figure 10. A zoomed-in view for fingerprint enhancement result, where the first column shows the original fingerprint, while the second, third, and fourth columns show those of the HONG [12], CHIK [13] and our method, respectively.
Table 1. Specifications of the encoder and decoder models; FS represents the filter size, FN the number of filters, and S the stride.

Encoder                              Decoder
Layer               FS   FN   S      Layer               FS   FN   S
Conv1_1             3    64   1      Conv1_1             3    64   1
Conv1_2             3    64   1      Conv1_2             3    64   1
Max Pooling 1       2    -    2      Max Unpooling 1     2    -    2
Dilated Conv 2_1    3    64   1      Conv 2_1            3    64   1
Dilated Conv 2_2    3    64   1      Conv 2_2            3    64   1
Max Pooling 2       2    -    2      Max Unpooling 2     2    -    2
Dilated Conv 3_1    3    64   1      Conv 3_1            3    64   1
Dilated Conv 3_2    3    64   1      Conv 3_2            3    64   1
Max Pooling 3       2    -    2      Max Unpooling 3     2    -    2
Dilated Conv 4_1    3    64   1      Conv 4_1            3    64   1
Dilated Conv 4_2    3    64   1      Conv 4_2            3    64   1
Max Pooling 4       2    -    2      Max Unpooling 4     2    -    2
Dilated Conv 5_1    3    64   1      Conv 5_1            3    64   1
Dilated Conv 5_2    3    64   1      Conv 5_2            3    64   1
Max Pooling 5       2    -    2      Max Unpooling 5     2    -    2
                                     Conv 6_1            3    1    1
                                     Tanh                -    -    -
Table 2. The quality confusion matrix (QCM).

Q11  Q12  Q13  Q14  Q15
Q21  Q22  Q23  Q24  Q25
Q31  Q32  Q33  Q34  Q35
Q41  Q42  Q43  Q44  Q45
Q51  Q52  Q53  Q54  Q55
Table 3. The weight quality matrix (WQM).

 0    1    2    3    4
−1    0    1    2    3
−2   −1    0    1    2
−3   −2   −1    0    1
−4   −3   −2   −1    0
Table 4. NFIQ quality scores on the original MOLF dataset and the dataset enhanced by our model (After E.). The up arrow represents better enhancement.

Quality      Q    DB1 Original   DB1 After E.   DB2 Original   DB2 After E.   DB3 Original   DB3 After E.
Excellent    1    2965           3796 ↑         1340           2255 ↑         2018           3303 ↑
Very good    2    985            183            1940           1724           985            646
Good         3    37             2              603            8              744            16
Fair         4    12             19             27             8              155            19
Poor         5    0              0              89             5              97             18
Table 5. The effect of the learning approach on the quality of the MOLF database.

DB1
Quality Score    Q    Original   Conventional Learning (One Net)   Adversarial Learning (Two Net)
Excellent        1    2965       3827                              3796
Very good        2    985        123                               183
Good             3    37         12                                2
Fair             4    12         36                                19
Poor             5    0          2                                 0

DB2
Quality Score    Q    Original   Conventional Learning (One Net)   Adversarial Learning (Two Net)
Excellent        1    1340       1915                              2255
Very good        2    1940       2057                              1724
Good             3    603        18                                8
Fair             4    27         6                                 8
Poor             5    89         4                                 5

DB3
Quality Score    Q    Original   Conventional Learning (One Net)   Adversarial Learning (Two Net)
Excellent        1    2018       3206                              3303
Very good        2    985        634                               646
Good             3    744        39                                16
Fair             4    155        86                                19
Poor             5    97         35                                18
Table 6. FQEI values computed for the HONG, CHIK, JOSHI [16], and our methods on the MOLF dataset.

The Enhancement Method    DB1      DB2      DB3
HONG [12]                 0.2581   0.6342   0.7026
CHIK [13]                 0.0231   0.5562   0.6508
JOSHI [16]                0.2012   0.1723   0.3270
Our method                0.8863   0.6760   0.8740
Table 7. Analysis of the fingerprint quality scores measured by NFIQ of the FingerPass database before enhancement.
QualityQAES2501AES3400ATRUAFPCFX3000UPEKV300WSURU4000B
Excellent155190284041052016791773954697
Very Good22423653149508419554726378953263
Good3662717723565585330114276304647
Fair43200000104233
Poor541398310725071010040
Table 8. Analysis of the fingerprint quality scores measured by NFIQ and FQEI of the FingerPass enhanced using HONG method [12].
NFIQQAES2501AES3400ATRUAFPCFX3000UPEKV300WSURU4000B
Excellent1619401136134758945638167865245
Very Good224432026852602038827693225818533389
Good328161546259602100
Fair4101000010
Poor502771051000006
FQEI0.61460.11100.58680.47910.44970.13790.50650.58290.5912
Table 9. Analysis of the fingerprint quality scores measured by NFIQ and FQEI of the FingerPass enhanced using CHIK method [13].
NFIQQAES2501AES3400ATRUAFPCFX3000UPEKV300WSURU4000B
Excellent1483403184225383839536217545
Very Good238061247525574363877800468724238095
Good307323706284802000
Fair4000000000
Poor501193914500000
FQEI0.5335−0.00120.54100.46150.25620.13720.32020.53730.0683
Table 10. Analysis of the fingerprint quality scores measured by NFIQ and FQEI of the FingerPass enhanced using JOSHI method [16].
NFIQQAES2501AES3400ATRUAFPCFX3000UPEKV300WSURU4000B
Excellent12216197458378324182607287425062160
Very Good2649703975639460055672164861266474
Good3273459281453189359243866
Fair4018773822146420
Poor50310751261121600
FQEI0.2291−0.37920.78890.56600.10270.3114−0.0700.30960.1582
Table 11. Analysis of the fingerprint quality scores measured by NFIQ and FQEI of the FingerPass enhanced using our method.
NFIQQAES2501AES3400ATRUAFPCFX3000UPEKV300WSURU4000B
Excellent1582481928066395847002924474367798134
Very Good22797655624680260957162411855467
Good32826181303343223
Fair417301513990206416
Poor50010119010700
FQEI0.56450.93880.97070.78360.31490.34070.29310.58250.9149
Table 12. Quality Confusion Matrices for AES3400 sensor enhancements using: (a) Hong [12] (b) CHIK [13] (c) Our method.
(a)(b)(c)
Q1Q2Q3Q4Q5Q1Q2Q3Q4Q5Q1Q2Q3Q4Q5
q10000000000061682101310
q2017144041015990100148016
q3047684701267046645008270166015
q4000000000002242057
q50118609004628056100000
Table 13. Mean and standard deviation (std) of SSIM.

The Enhancement Method    Mean of SSIM    Std
HONG [12]                 0.4551          0.0482
CHIK [13]                 0.4650          0.0460
JOSHI [16]                0.4125          0.0354
Our method                0.5127          0.0693
Table 14. Comparison of the computation time for enhancement.

Method        Average Computation Time (in Seconds)
HONG [12]     0.63
CHIK [13]     0.48
JOSHI [16]    0.38
Our method    0.087
Table 15. The impact of using the dilation operation vs. the convolution operation on the MOLF datasets.

FQEI                   DB1      DB2      DB3
Convolution layer      0.8643   0.5208   0.7996
Dilated convolution    0.8863   0.6760   0.8740
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

