Article

Robust Fingerprint Minutiae Extraction and Matching Based on Improved SIFT Features

1 Faculty of Computers and Artificial Intelligence, Sohag University, Sohag 82524, Egypt
2 College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(12), 6122; https://doi.org/10.3390/app12126122
Submission received: 27 April 2022 / Revised: 6 June 2022 / Accepted: 8 June 2022 / Published: 16 June 2022
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

Abstract

Minutiae feature extraction and matching are two crucial tasks in fingerprint identification and core components of automated fingerprint recognition (AFR) systems: such systems first identify and describe the salient minutiae points that impart individuality to each fingerprint and differentiate one fingerprint from another, and then match their relative placement between a candidate fingerprint and previously stored fingerprint templates. In this paper, an automated minutiae extraction and matching framework is presented for identification and verification purposes, in which an adaptive scale-invariant feature transform (SIFT) detector is applied to high-contrast fingerprints preprocessed by means of denoising, binarization, thinning, dilation and enhancement to improve the quality of latent fingerprints. As a result, an optimized set of highly reliable salient points discriminating fingerprint minutiae is identified and described accurately and quickly. The SIFT descriptors of the local keypoints in a given fingerprint are then matched with those of the stored templates using a brute-force algorithm that assigns a score to each match based on the Euclidean distance between the SIFT descriptors of the two matched keypoints. Finally, a postprocessing dual-threshold filter is adaptively applied, which can potentially eliminate almost all false matches while discarding very few correct matches (less than 4%). Experimental evaluations on the publicly available low-quality FVC2004 fingerprint datasets demonstrate that the proposed framework delivers performance comparable or superior to several state-of-the-art methods, achieving an average equal error rate (EER) of 2.01%.

1. Introduction

Biometrics is often defined as the science of recognizing an individual through physical, behavioral and physiological characteristics. Traits commonly used by biometric systems include fingerprints, faces, voice and handwriting. Among all biometric techniques, fingerprint recognition is the most widely used for personal identification, due to its relative permanence and uniqueness [1]. Owing to the relatively high accuracy of fingerprints among biometric traits, recent years have witnessed a substantial upswing in the use of digital fingerprint reading devices in day-to-day life. These devices are increasingly used for a wide variety of purposes, e.g., recording staff attendance before and after work, serving as a login credential on computers, or replacing the key to a locker. Fingerprints are considered excellent individualizing evidence because they are permanent from birth to death and unique to each individual (the odds of two fingerprints being identical are estimated at 64 billion to 1). Furthermore, they are easy to verify, and a person leaves marks on every object they touch. This makes fingerprint-based biometric security devices highly popular in sensitive, high-security areas such as banks, correctional facilities, jewellers, prisons and military establishments [2].
Feature extraction is basically a dimensionality reduction process, whose ultimate goal is to capture the most relevant information from the original data in a high-dimensional space and represent it in a lower-dimensional space. When the input data to an algorithm are too large to be processed and are suspected to be redundant (much data, but not much information), they should be converted into a reduced set of representative features (also called a feature vector). In other words, through feature extraction, high-dimensional input data are converted into a small set of low-dimensional features that capture the useful information [3]. If the extracted features are carefully selected, the reduced feature set is expected to capture the pertinent information in the input data, so that a given task can be performed effectively using this reduced representation rather than the full-size input. A fingerprint recognition system is an automatic pattern recognition system that typically consists of three fundamental stages: image pre-processing, feature extraction and fingerprint matching [4]. A good feature set contains rich information that can effectively distinguish an object from other objects (i.e., it identifies the object). At this stage, it is important to avoid, as much as possible, assigning different feature codes to objects of the same class [5]. Latent fingerprints are merely partial impressions of the finger's ridge pattern inadvertently left after a fingertip contacts a surface; the pattern consists of ridges and valleys. In a fingerprint image, ridges appear as black lines, whereas valleys appear as white areas between the ridges [6]. Fingerprint features can be broadly divided into two categories:
(1)
Global features: These features form special patterns of ridges and valleys, called singularities or singular points (SPs), which can further be divided into three types: loop, delta, and whorl. The significant points are the core and the delta. The core is defined as the topmost point on the innermost recurving ridge, and the delta as the central point where three different ridge flows converge (see Figure 1a). These features arguably provide the most useful and crucial information for fingerprint classification, matching, and alignment [7,8].
(2)
Local features: At the local level, ridge characteristics, collectively called minutiae, are the most widely used features for matching fingerprints. There are several types of minutiae, but for practical purposes, just two types [9,10] are considered the most prominent ridge characteristics: the ridge ending (a point where a ridge ends abruptly) and the ridge bifurcation (a point where a ridge forks or diverges into branch ridges), as shown in Figure 1b,c.
The primary objective of this work is the development of a fully automated minutiae extraction and matching framework for fingerprint identification and verification purposes, in which an adaptive SIFT detector is applied to high-contrast fingerprints preprocessed by means of denoising, binarization, thinning, dilation and enhancement to improve the quality of latent fingerprints. As a result, an optimized set of highly-reliable salient points discriminating fingerprint minutiae is identified. The SIFT descriptors of the local keypoints in a given fingerprint are then matched with those of the stored templates using a brute force algorithm. Finally, a dual-threshold filter is adaptively applied, which can potentially eliminate almost all the false matches, while losing less than 4% of correct matches. The remainder of the paper is organized as follows: in Section 2, we briefly review closely related prior work on feature extraction and matching algorithms developed for fingerprint recognition. The details of the proposed methodology for fingerprint minutiae extraction and matching and its components and stages are elaborately described in Section 3. Section 4 provides the experimental results and discusses their implications. Finally, concluding remarks and avenues for future research are drawn in Section 5.

2. Prior Work

In the fingerprint recognition literature, a variety of feature extraction algorithms have been proposed for identifying salient features. For example, in [11], Bader and Sagheer applied two computer vision algorithms (FAST and Harris) to extract corner-point features from finger vein images, where the patterns are matched based on the differences between corner points using the Manhattan distance. The false match rate (FMR) and false non-match rate (FNMR) are then minimized to obtain the optimum threshold that triggers the final decision. The results of the study confirmed that using the two adaptive algorithms concurrently reduces the error rate and helps to build a reliable finger vein identification system. In [12], an improved Harris–SIFT image matching algorithm was proposed. First, the feature points of the image are extracted by the Harris corner detection operator. Then, a 28-dimensional descriptor based on increasing homocentric square windows is used to describe the neighborhood of each key feature point, and the Euclidean distance is employed as the similarity measure in the matching process. Simulation results demonstrated the validity of the improved algorithm, suggesting a new direction for image matching research.
The fingerprint feature extraction process aims at finding the various minutiae points in a fingerprint so that they can subsequently be used for matching. However, minutiae extraction can be difficult and inaccurate, because a fingerprint image may be contaminated by noise, which produces a large number of spurious minutiae candidates. Thus, an optimal preprocessing technique is needed to minimize the number of candidate points and retain only those key points that can reliably be used for matching [13]. In this direction, a recent study by Singh and Kaur [14] proposed a methodology for fingerprint minutiae extraction using morphological operations, in which the number of minutiae points is calculated by three different methods and the results are then analyzed. The original image is compared against two preprocessed images: one obtained by a dilation operation, and the other by hole filling followed by dilation.
In addition, Singh et al. [15] proposed a new fingerprint feature detection algorithm, observing that artifacts or noise in fingerprint images lead to many spurious minutiae. To tackle this problem, a feature extraction strategy was devised to extract the valid minutiae points in fingerprints while avoiding spurious ones. The presented method effectively performs template matching [16] to find ridge bifurcations and terminations. A smoothing algorithm based on eight different masks is developed to find ridges in the fingerprint images, producing a binary ridge image from the grayscale fingerprint. The experimental results verified the validity and accuracy of the algorithm in terms of genuine acceptance rate (GAR), false acceptance rate (FAR), and false rejection rate (FRR).
Furthermore, in [17], an algorithm based on the Harris corner detector was proposed for extracting fingerprint minutiae. First, the Harris corner detector is applied to detect minutiae points and extract high-curvature dots from the enhanced fingerprint image. In a later postprocessing stage, false or spurious minutiae are removed based on the spatial distribution of minutiae. The type of each minutia is judged using neighboring gray-level information, and its exact orientation is decided by the initial orientation and style. Compared to typical minutiae extraction algorithms, the presented approach does not need to binarize the fingerprint image and submit the resulting binary image to a thinning process; instead, the minutiae are extracted directly from the gray-level fingerprint image, which reduces processing time and improves efficiency. Experimental results on the FVC2002 latent fingerprint database demonstrated that the presented method is not only fast and fairly reliable, but also fit for practical use.
In [18], a fingerprint-based authentication approach was introduced, built on fingerprint enhancement, feature extraction and matching techniques. First, the contrast of small tiles in the fingerprint image is enhanced using an adaptive variant of histogram equalization called Contrast Limited Adaptive Histogram Equalization (CLAHE), along with a combination of Gabor filters [19] and the fast Fourier transform (FFT). The improved fingerprint is then authenticated by extracting a small amount of information from local interest points, the so-called 'minutiae features'. To obtain significantly improved feature detection results, a hybrid combination of the SURF and Harris corner detection algorithms is applied to the thinned binary fingerprint image. For fingerprint matching, the Euclidean distance between the SURF–Harris descriptors of two feature points is used as the similarity criterion. To automatically remove false matches and incorrect match points, the authors applied an iterative algorithm called RANdom SAmple Consensus (RANSAC). Extensive experiments on the two publicly accessible FVC2002 DB1 and FVC2000 DB1 fingerprint databases demonstrated the efficiency and effectiveness of the proposed method, which achieved average recognition rates of 95% and 92.5%, respectively.

3. Proposed Methodology

In this section, the proposed minutiae-based feature extraction and matching framework for robust fingerprint recognition is presented, detailing its major modules, including image preprocessing, minutiae feature extraction, and template matching. A block diagram depicting various modules and workflow within the presented framework is shown in Figure 2. Minutiae points are local descriptive features of a fingerprint image, which are heavily relied upon to match fingerprints accurately. These minutiae points are necessary and sufficient to determine the uniqueness of a fingerprint image. A good-quality fingerprint image typically has 25 to 80 minutiae, depending on the resolution of the fingerprint scanners and the finger position on the sensor [20].

3.1. Image Preprocessing

In fingerprint recognition, the performance of minutiae extraction depends heavily on the quality of the input fingerprint image. Typical preprocessing steps prior to minutiae extraction involve binarization, noise removal, and fingerprint segmentation [20,21]. From a good fingerprint image, minutiae points can be precisely located on the thinned ridges. In practice, however, it is not always possible to extract good minutiae points accurately, since a significant percentage of acquired fingerprint images are of poor quality due to several factors, including acquisition device conditions (e.g., dirtiness, humidity, pattern location and orientation) and individual artifacts (e.g., skin environment, age, skin tensility and pressure).
Moreover, the ridge structures in very low-quality fingerprint images are completely corrupted or not well-defined, and therefore cannot be correctly detected. This leads to a significant number of spurious minutiae while, at the same time, a large percentage of genuine minutiae may be missed. Fingerprint image preprocessing for contrast enhancement and illumination correction is thus a crucial step in any scheme for automated minutiae extraction and matching [22]. In this work, the fingerprint image preprocessing stage primarily aims at improving the minutiae features in the fingerprint image through noise removal and contrast enhancement, so as to reliably extract the minutiae features and minimize the chances of false or spurious minutiae being detected in highly corrupted regions [1]. The procedure for image preprocessing is elucidated in the following subsections.

3.1.1. Fingerprint Enhancement Using Contextual Filtering

In this work, an enhancement technique based on contextual iterative filters is designed for preprocessing the fingerprint images, which proceeds in four distinct steps: (i) ROI estimation, (ii) ridge-visibility enhancement, (iii) ridge-pattern enhancement, and (iv) image binarization. In the first step, an algorithm based on the local standard deviation [23,24] is applied to the input fingerprint image to evaluate the local variance of the image intensity, followed by a thresholding operation that removes unwanted finger shadows; the result is a binary mask that reliably identifies the fingerprint ROI. The second step is based on a homomorphic filtering technique, in which the background image $B_I$ is first estimated by applying a morphological opening with a structuring element $s$ to the input image $I$. The image representing the fingerprint ridges is then readily obtained as $I_R = I - B_I$. To effectively suppress the noise present in the image, a nonlinear equalization is performed by applying the logarithm to the ridge image, yielding an improved image $I_L(x, y) = \log(I_R(x, y))$. In the third step, the ridge pattern is enhanced, similarly to [23], by computing ridge frequency and orientation maps, followed by the application of a bank of Gabor filters optimally tuned to the computed local ridge maps, which results in an image $I_E$ containing a minimum amount of background texture detail [25]. The fourth and last step of the preprocessing stage is an adaptive image binarization, which effectively reduces the noise along the edges of the ridge pattern in $I_E$ and reliably highlights the potential minutiae points. To obtain a uniform contrast between ridges and valleys, the logarithm of $I_E$ is taken as $I_I(x, y) = \log(I_E(x, y))$. After establishing the histogram of the image $I_I$, a binary image $I_B$ of the ridge pattern is obtained as follows:
$$I_B(x, y) = \begin{cases} 0, & \text{if } I_I(x, y) \le \arg\max_i H(i) \\ 1, & \text{otherwise} \end{cases}$$
where H is the histogram of the image I I . A simplified outline of fingerprint image enhancement based on contextual filters is presented in Figure 3.
Additionally, some morphological operations (e.g., erosion and closing) are performed to fill in holes by smoothing the ridge surfaces in the fingerprint image and merging narrow gaps between the fingerprint ridges [1].
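To make the pipeline concrete, the following Python/OpenCV sketch outlines steps (i), (ii) and (iv); all window sizes and thresholds here are illustrative assumptions rather than the tuned values of our implementation, and the Gabor-based step (iii) is only stubbed, since its frequency/orientation estimation follows [23].

```python
import cv2
import numpy as np

def enhance_fingerprint(gray, win=16, open_kernel=15):
    """Sketch of the contextual enhancement pipeline (assumed parameters)."""
    # (i) ROI estimation: threshold the local standard deviation of intensity.
    g = gray.astype(np.float32)
    mean = cv2.blur(g, (win, win))
    sqmean = cv2.blur(g ** 2, (win, win))
    local_std = np.sqrt(np.maximum(sqmean - mean ** 2, 0))
    roi = (local_std > 0.1 * local_std.max()).astype(np.uint8)  # assumed threshold

    # (ii) Ridge-visibility enhancement: estimate the background B_I by a
    # morphological opening, subtract it (I_R = I - B_I), then equalize
    # nonlinearly with a logarithm.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (open_kernel, open_kernel))
    background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    ridges = cv2.subtract(gray, background)
    log_img = np.log1p(ridges.astype(np.float32))

    # (iii) Ridge-pattern enhancement would apply a bank of Gabor filters
    # tuned to local ridge frequency/orientation maps (omitted in this sketch).
    enhanced = cv2.normalize(log_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # (iv) Adaptive binarization: pixels at or below the histogram mode are
    # taken as ridge (0), the rest as background (1), as in the equation above.
    hist = cv2.calcHist([enhanced], [0], roi, [256], [0, 256]).ravel()
    mode = int(np.argmax(hist))
    binary = (enhanced > mode).astype(np.uint8)
    return binary, roi
```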

3.1.2. Thinning and Dilation

Erosion (or thinning) and dilation (or thickening) are two morphological operations applied to the binary fingerprint image. As the fingerprint ridges are relatively thick, it is desirable for subsequent minutiae extraction and analysis to thin the ridges so that each is a single pixel wide [1]. The thinning algorithm essentially removes contour points of connected components in a fingerprint image to produce their skeleton [26,27]. To extract the skeleton of the fingerprint, a skeletonization process is performed by applying the Zhang–Suen thinning algorithm [28] to the binary image. The Zhang–Suen algorithm is a simple and efficient technique and one of the most popular thinning algorithms in the literature; it operates on a 3 × 3 block and consists of two sub-iterations. It is an iterative algorithm that computes the skeleton of an image by removing all contour points except those belonging to the skeleton [29].
More formally, suppose black pixels are represented by 1s and white pixels by 0s. The algorithm examines every dark pixel $p_1$ together with its eight neighbors $p_2, \ldots, p_9$ (see Figure 4). To preserve the connectivity of the skeleton, each iteration is further divided into two sub-iterations. In the first sub-iteration, the contour point $p_1$ is removed from the digital pattern if the following conditions hold:
$$2 \le N(p_1) \le 6, \qquad M(p_1) = 1, \qquad p_2 \times p_4 \times p_6 = 0, \qquad p_4 \times p_6 \times p_8 = 0$$
where $N(p_1)$ denotes the number of non-zero neighbors of $p_1$, that is, $N(p_1) = p_2 + p_3 + \cdots + p_8 + p_9$, and $M(p_1)$ denotes the number of 0-to-1 transitions in the ordered sequence $p_2, p_3, \ldots, p_9, p_2$, traversed clockwise.
In the second sub-iteration, the contour point $p_1$ is removed from the digital pattern if the following conditions are met:
$$2 \le N(p_1) \le 6, \qquad M(p_1) = 1, \qquad p_2 \times p_4 \times p_8 = 0, \qquad p_2 \times p_6 \times p_8 = 0$$
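For illustration, a direct (and deliberately unoptimized) Python rendering of the two sub-iterations above is sketched below, assuming ridge pixels are stored as 1s on a 0 background:

```python
import numpy as np

def zhang_suen_thinning(img):
    """One-pixel-wide skeleton of a binary image (ridges = 1), directly
    implementing the two sub-iteration conditions given above."""
    skel = (img > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for first_pass in (True, False):
            to_delete = []
            rows, cols = skel.shape
            for i in range(1, rows - 1):
                for j in range(1, cols - 1):
                    if skel[i, j] == 0:
                        continue
                    # Neighbors p2..p9, clockwise starting from the pixel above p1.
                    p = [skel[i-1, j], skel[i-1, j+1], skel[i, j+1], skel[i+1, j+1],
                         skel[i+1, j], skel[i+1, j-1], skel[i, j-1], skel[i-1, j-1]]
                    N = sum(p)  # number of non-zero neighbors
                    # Number of 0->1 transitions in the sequence p2, p3, ..., p9, p2.
                    M = sum((p[k] == 0) and (p[(k + 1) % 8] == 1) for k in range(8))
                    if not (2 <= N <= 6 and M == 1):
                        continue
                    p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                    if first_pass and p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0:
                        to_delete.append((i, j))
                    elif not first_pass and p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0:
                        to_delete.append((i, j))
            for i, j in to_delete:
                skel[i, j] = 0
                changed = True
    return skel
```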
An example of fingerprint thinning is given in Figure 5a. Dilation is one of the basic morphological operations, typically applied to binary images to enhance image features by enlarging the boundaries of segmented objects and filling in holes within them. In this work, the dilation process is used to increase the width of the fingerprint ridges so that, if there is a gap of one or two pixels between two ridges, they are joined into a single ridge; such gaps may arise from acquisition errors of various kinds [1]. For fingerprint dilation, a function examines every pixel of the binary image together with all of its neighbors: if any neighbor of the pixel under consideration is black, the pixel itself is set to black, which removes small holes in the image and joins ridges separated by a gap of at most two pixels [24]. A binary fingerprint image after the dilation operation is shown in Figure 5b. After the thinning and dilation operations, the resulting image is almost ready for feature extraction, as most of the noise has been removed. However, very small patterns may still remain in the fingerprint image and need to be eliminated for more robust minutiae extraction [1]. This is performed simply by counting the number of pixels in each ridge and removing any ridge with fewer than a certain number of pixels (e.g., 20 pixels) from the fingerprint image (see Figure 5c and the sketch below).
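The dilation and small-pattern removal steps admit a compact sketch with standard OpenCV primitives; the 3 × 3 kernel and the 20-pixel threshold mirror the description above, while everything else is an assumption:

```python
import cv2
import numpy as np

def bridge_and_clean(skel, min_pixels=20):
    """Join ridges separated by gaps of at most ~2 pixels and drop any
    ridge fragment with fewer than min_pixels pixels."""
    # Dilation with a 3x3 kernel sets a pixel whenever any of its eight
    # neighbors is set, closing 1-2 pixel gaps between ridges.
    kernel = np.ones((3, 3), np.uint8)
    bridged = cv2.dilate(skel, kernel, iterations=1)

    # Remove residual small patterns: count pixels per connected ridge
    # and keep only components of at least min_pixels pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bridged, connectivity=8)
    cleaned = np.zeros_like(bridged)
    for lbl in range(1, n):  # label 0 is the background
        if stats[lbl, cv2.CC_STAT_AREA] >= min_pixels:
            cleaned[labels == lbl] = 1
    return cleaned
```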

3.2. Minutiae Feature Extraction

After finishing all the aforementioned preprocessing operations, an enhanced fingerprint image is obtained in which the minutiae feature points can easily be located [1]. In this work, the robust minutiae feature points of the enhanced fingerprint image are determined using an adaptive version of the SIFT detector [30], which yields a sparse set of frames (or local keypoints) from the fingerprint image. The keypoints are viewed as oriented disks attached to blob-like structures of the fingerprint image under consideration [31]. As the image scales, rotates, and/or translates, these keypoints can track image objects and thus the deformation. The effect of such deformation on feature appearance is removed by canonization, i.e., by mapping the keypoints to a canonical disk [32,33,34]. In order to search for fingerprint blobs at multiple scales, the SIFT-based detector constructs a scale space defined as a function $F(x, \sigma)$ of spatial and scale variables, where $x \in \mathbb{R}^2$ and $\sigma \in \mathbb{R}^+$ are the spatial and scale coordinates, respectively. The domain of the scale variable $\sigma$ is sampled at discretized steps with logarithmic spacing:
$$\sigma(s, o) = \sigma_0 \, 2^{\,o + s/S}, \qquad o \in \mathbb{Z}, \qquad s = 0, \ldots, S - 1$$
where $o \in \mathbb{Z}$ is the octave index, $s, S \in \mathbb{N}$ are the scale index and scale resolution, and $\sigma_0 \in \mathbb{R}^+$ is the base scale offset. Notice that octaves of negative index may also be used. A resolution defined as a function of the octave can then be used for sampling the spatial coordinate $x$ on a lattice, as follows:
$$x = 2^o x_o, \qquad o \in \mathbb{Z}, \qquad x_o \in [0, \ldots, N_o - 1] \times [0, \ldots, M_o - 1]$$
where $x_o$ and $(N_o, M_o)$ denote the spatial index and spatial resolution of octave $o$, respectively. The resolution of each octave can be obtained from the resolution $(N_0, M_0)$ of the base octave $o = 0$ as follows:
$$N_o = \left\lfloor \frac{N_0}{2^o} \right\rfloor, \qquad M_o = \left\lfloor \frac{M_0}{2^o} \right\rfloor$$
However, some scale levels can usefully be stored twice across different octaves by allowing the parameter $s$ to be negative or larger than $S$. More formally, let $[s_{\min}, s_{\max}]$ and $[o_{\min}, o_{\min} + O - 1]$ be the ranges of $s$ and $o$, respectively, where $O$ denotes the number of octaves.
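As a small illustration, the sampling grid defined by the formulas above can be enumerated as follows; the base scale $\sigma_0 = 1.6$ is Lowe's customary default and is assumed here, as are the range defaults:

```python
def scale_space_grid(N0, M0, O=4, S=3, sigma0=1.6, s_min=-1, s_max=None):
    """Enumerate the sampled scales sigma(s, o) and the per-octave
    resolutions (N_o, M_o) for octaves o = 0, ..., O-1."""
    if s_max is None:
        s_max = S + 1  # the Gaussian space stores scales s in [s_min, S+1]
    grid = {}
    for o in range(O):
        No, Mo = N0 // 2 ** o, M0 // 2 ** o            # N_o = floor(N_0 / 2^o)
        sigmas = [sigma0 * 2.0 ** (o + s / S)          # sigma(s, o)
                  for s in range(s_min, s_max + 1)]
        grid[o] = {"resolution": (No, Mo), "sigmas": sigmas}
    return grid
```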
The SIFT detector and descriptor make use of two scale spaces, namely a Gaussian space and a Difference of Gaussian (DoG) space. The Gaussian scale-space of an image I ( x ) can be obtained from the ‘zero-scale’ image by a Gaussian convolution:
$$G(x, \sigma) \triangleq (g_\sigma * I)(x)$$
where the scale $\sigma$ is sampled as described above to reduce redundancy. In practice, the fingerprint image is assumed to be nominally pre-smoothed at scale $\sigma_n$; accordingly, $G(x, \sigma) = (g_{\sqrt{\sigma^2 - \sigma_n^2}} * I)(x)$. As pointed out in [30], successive convolutions with small Gaussian kernels allow the pyramid to be computed incrementally. The other scale space that the detection algorithm makes use of is the difference of Gaussians (DoG), progressively calculated as the scale derivative of $G(x, \sigma)$ along the scale coordinate $\sigma$:
$$D(x, \sigma(s, o)) \triangleq G(x, \sigma(s + 1, o)) - G(x, \sigma(s, o))$$
For computing the octave $o = -1$, the fingerprint image is doubled in size through bilinear interpolation (the enlarged image being assumed pre-smoothed at $\sigma_n = 1$). For scale-space extrema detection at all scales, the DoG space covers $s \in [-1, S]$; since the DoG space is derived by differentiating the Gaussian scale space, the latter has a scale index in the range $s \in [-1, S + 1]$. With regard to the parameter $O$, it should be set as high as possible to cover all octaves. Keypoints are then extracted by picking local extremum points of $D(x, \sigma)$, where the extrema are efficiently selected by examining the $3 \times 3 \times 3$ neighborhood of each sample. Since the octave is represented by a 3D array, the linear index $k$ is mapped to the scale-space indexes $(x_1, x_2, s)$ as follows:
$$k - 1 = x_2 + x_1 M_o + (s - s_{\min}) M_o N_o$$
The index $k$ can alternatively be mapped to a subscript $(i, j, l)$ by using:
$$x_1 = j - 1, \qquad x_2 = i - 1, \qquad s = l - 1 + s_{\min}$$
The actual size of a spatial bin is $m\sigma$, where $\sigma$ is the scale of the keypoint and $m = 3.0$ is a nominal factor (see Figure 6).
Due to the way such extrema are detected, the following inequality constraints are strictly satisfied: $1 \le x_2 \le M_o - 2$, $1 \le x_1 \le N_o - 2$ and $s_{\min} + 1 \le s \le s_{\max} - 1$. As both local maxima and minima are of interest, the search is repeated on the negated response $-D(x, \sigma)$. For sub-pixel refinement, a threshold test on the intensity $|D(x, \sigma)|$ is applied together with a test on the peakedness of the extremum, to exclude weak points and edge points (see Figure 7).
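A minimal single-octave sketch of the DoG construction and the $3 \times 3 \times 3$ extrema scan described above is given below; the peak threshold and scale parameters are assumed values, and the brute-force loops are written for clarity rather than speed:

```python
import cv2
import numpy as np

def dog_extrema(gray, S=3, sigma0=1.6, peak_thresh=0.01):
    """Build one octave of the DoG space and pick 3x3x3 local extrema
    whose |D| exceeds peak_thresh (all parameter values are assumed)."""
    img = gray.astype(np.float32) / 255.0
    # Each Gaussian level is blurred directly from the base image here; an
    # incremental scheme with small kernels (as noted above) would be cheaper.
    gauss = [cv2.GaussianBlur(img, (0, 0), sigma0 * 2.0 ** (s / S))
             for s in range(S + 3)]
    # DoG levels are differences of adjacent Gaussian levels.
    dog = np.stack([gauss[s + 1] - gauss[s] for s in range(S + 2)])

    keypoints = []
    for s in range(1, dog.shape[0] - 1):          # interior scales only
        for y in range(1, dog.shape[1] - 1):
            for x in range(1, dog.shape[2] - 1):
                cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
                v = dog[s, y, x]
                # Keep the sample if it is the extremum of its 3x3x3 cube
                # and strong enough to pass the peak threshold.
                if abs(v) > peak_thresh and (v == cube.max() or v == cube.min()):
                    keypoints.append((x, y, s))
    return keypoints
```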
The orientation $\theta$ of a detected keypoint $(x, \sigma)$ is obtained as the predominant gradient orientation around the keypoint, computed as the quadratically interpolated maximum of a histogram of the gradient orientations of $G(\cdot, \sigma)$ in a neighborhood of the keypoint. The histogram is weighted by the gradient magnitude $|\nabla G(x_1, x_2, \sigma)|$ and by a Gaussian window of standard deviation $1.5\sigma$ centered on the keypoint. After the data are accumulated into bins, a moving-average filter smooths the histogram before the maximum is computed. Besides the global maximum, every local maximum whose value exceeds 80% of the global maximum is also retained; hence, multiple SIFT frames may be obtained for the same location and scale. Two examples of SIFT keypoint detection are shown in Figure 8.
The SIFT descriptor of a detected keypoint $(x, \sigma)$ is a local statistic of the gradient orientations of $G(\cdot, \sigma)$, computed as a weighted, interpolated 3D histogram of the gradient locations and orientations within a patch around the keypoint, where the histogram domain is the set of tuples $(x, \theta) \in \mathbb{R}^2 \times \mathbb{R}/2\pi\mathbb{Z}$. The bins form a 3D lattice with $N_p = 4$ spatial bins along each spatial direction and $N_o = 8$ bins for the gradient orientation, for a total of $N_p^2 N_o = 128$ components. The window $H(x)$ is a Gaussian with a standard deviation of $N_p/2$, i.e., half of the spatial bin range.
For invariance purposes, the histogram is then projected onto the image domain, based on the local reference frame attached to the keypoint. The spatial dimensions are multiplied by a factor $m\sigma$, where $m$ is a nominal factor (set to 3.0 by default) and $\sigma$ is the scale of the keypoint. The layout is also rotated so that the axis $x_1$ is aligned with the keypoint orientation $\theta$. The resulting histograms are further weighted by the gradient modulus, and the contributions of the gradient orientations are smoothly distributed over adjacent histogram bins using trilinear interpolation, to avoid boundary effects in which the descriptor changes abruptly as a sample orientation shifts smoothly from one bin to another.
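In practice, the detection and description stages described above can be prototyped with OpenCV's stock SIFT implementation, as in the minimal sketch below; the threshold values and the file name are illustrative assumptions, and our adaptive detector tunes its thresholds per image rather than using fixed ones:

```python
import cv2

# Detect keypoints and compute 128-D descriptors on a preprocessed
# fingerprint image (file name and parameter values are assumptions).
sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10, sigma=1.6)
fingerprint = cv2.imread("fingerprint_enhanced.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = sift.detectAndCompute(fingerprint, None)

# Each keypoint carries a location (x, y), a scale and an orientation theta;
# descriptors is an (n_keypoints, 128) array of N_p^2 * N_o = 4*4*8 bins.
vis = cv2.drawKeypoints(fingerprint, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("fingerprint_keypoints.png", vis)
```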

3.3. Minutiae Feature Matching

From the above description of the feature extraction module, it is evident that the SIFT algorithm can generally be seen as a local image operator that takes a given image and transforms it into a large collection of local feature vectors. Hence, the feature matching procedure between feature descriptors of two fingerprint images essentially involves computing the Euclidean distance between each descriptor of the first image and each descriptor of the second image in Euclidean space [35]. To use this local operator for fingerprint recognition purposes, it is applied on two fingerprint images, i.e., a test and template image.
To find corresponding features between the two images, which could lead to fingerprint recognition, several feature matching approaches can be applied. Based on the Nearest Neighborhood (NN) procedure, for each feature a i in the feature set of the query fingerprint image, the entire reference database is queried to find the most similar stored feature b i with the smallest Euclidean distance to the feature a i . A pair of corresponding features ( a i , b i ) is typically termed a match M ( a i , b i ) [36]. To check if this match is positive or negative, a certain predefined threshold is considered. For matching, when the ratio of the Euclidean distance of the nearest-neighbor to the Euclidean distance of the next nearest-neighbor exceeds the predefined threshold, the matched feature is rejected. On the other hand, if the Euclidean distance between two feature vectors a i and b i does not exceed the value of the threshold, the match M ( a i , b i ) is labelled as positive and stored as a valid match. As shown in [30], Euclidean distance refers to the distance of keypoints in the feature space.
The keypoints (i.e., features) in image space are transformed into a multi-dimensional space based on their characteristics such as gradients, orientations, magnitude, locations and brightness, where each feature is represented by a feature vector. Then, the Euclidean distance between the two feature vectors a and b is defined as:
$$D(a, b) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \cdots + (a_n - b_n)^2} = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}$$
where $n$ denotes the dimensionality of the feature vectors (128 for SIFT descriptors). The value of $D(a, b)$ is then compared with a predefined threshold; if it is greater than the threshold, the matched keypoint is discarded. However, computing the pairwise Euclidean distances between all feature points can be prohibitively expensive. As a result, it is very useful, especially for computational purposes, to work with inner products of vectors in the feature space [37], which greatly reduces the computational burden while retaining the robustness of the features. Since the distances between feature vectors may be similar, mismatches can occur, whereas the angle between the vectors typically differs [38]. Cosine similarity, which measures the similarity between two vectors in the feature space, is obtained as the cosine of the angle between them. Formally, having computed the inner product of a pair of feature vectors, it is straightforward to compute the angle between them as follows:
$$a \cdot b = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n, \qquad \theta = \arccos\!\left(\frac{a \cdot b}{\|a\| \, \|b\|}\right)$$
The nearest neighbor is then accepted only if its angle is below a predefined threshold, i.e., $\theta < \tau_\theta$. It is worth mentioning that, in the original SIFT matching algorithm, only nearest-neighbor distances are compared and the smallest distance is selected, whereas, in this improved SIFT matching algorithm, the angles between feature vectors are compared. Additionally, the ratio of descriptor distances is employed for outlier rejection, reducing the number of false positive matches (see Figure 9 and the sketch below).
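A minimal sketch of this improved matching scheme, combining the distance-ratio test with the angular check above, is given below; the ratio and the angle threshold $\tau_\theta$ (in radians) are assumed values, not the tuned ones:

```python
import cv2
import numpy as np

def match_minutiae(desc_query, desc_template, ratio=0.75, max_angle=0.35):
    """Brute-force SIFT matching with the distance-ratio test plus the
    angular check described above (ratio and max_angle are assumptions)."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = []
    for m, n in bf.knnMatch(desc_query, desc_template, k=2):
        # Outlier rejection: the nearest neighbor must be clearly closer
        # than the second-nearest neighbor.
        if m.distance >= ratio * n.distance:
            continue
        # Angular check: compute the angle between the two descriptors.
        a = desc_query[m.queryIdx].astype(np.float64)
        b = desc_template[m.trainIdx].astype(np.float64)
        cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
        if theta < max_angle:
            matches.append(m)
    return matches
```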
As can be seen in Figure 9, the improved SIFT algorithm gives markedly better results in terms of matching quality and robustness compared to the original SIFT matching algorithm, particularly when the fingerprint image contains many locally similar structures. Although a few correct matches are discarded, nearly all mismatches are removed.

4. Experimental Results and Analysis

In this section, the results of the experiments and simulations conducted to verify the validity and performance of the proposed framework for fingerprint identification are presented. The experiments were carried out on a publicly available fingerprint database, namely FVC2004 [39] from the Fingerprint Verification Competition (FVC), a series of competitions organized by the University of Bologna (Italy) aimed at establishing benchmarks for evaluating the performance of fingerprint recognition systems. FVC2004 is a multi-database, in which each component database was created from fingerprints captured with a different sensor technology. The FVC2004 database comprises four distinct datasets (DB1, DB2, DB3, and DB4), each containing impressions of 110 different fingers with eight impressions per finger (a total of 110 × 8 = 880 fingerprint images). Each component database is further divided into two disjoint subsets: subset A contains the first 100 fingers with eight impressions per finger (800 impressions in total) and is commonly used for evaluating fingerprint verification systems, whereas subset B, made available for parameter tuning before the submission of the algorithms, contains only the last 10 fingers (80 impressions).
The fingerprints in the FVC2004 database vary in image quality, size and the source sensor type used for acquisition. Fingerprint images are provided in TIFF format with 8-bit gray-level depth and a resolution of about 500 dpi; the image size varies across the component databases. During performance evaluation, only homogeneous fingerprint images belonging to the same component database are matched against each other. Fingerprint samples taken from the FVC2004 database are shown in Figure 10.
The fingerprint data included in the FVC2004 database were collected using different sensing technologies (optical, semiconductor, thermal and tactile), and variations in image size and resolution are particularly apparent. In addition, the fingerprints were not acquired in realistic settings according to a formal protocol, and are primarily characterized by different distortions (e.g., rotations, scalings, translations and poor resolution) between impressions of the same finger. Taking all these challenging aspects into consideration, this dataset is a great choice for testing and validating the proposed fingerprint matching framework under extreme conditions. Additionally, a publicly available fingerprint dataset presented in [40] was used for the feature matching subsystem, where the fingerprint images serving as biometric data undergo minutiae feature extraction.
The performance of the proposed framework on the FVC2004 benchmark database (described above) is evaluated by the Equal Error Rate (EER), which can simply be obtained from the ROC (Receiver Operating Characteristic) curve, a plot of FMR against FNMR. The EER is the point on the ROC curve where FMR and FNMR are equal; a lower EER value generally indicates better performance of the fingerprint verification system. A plot of the ROC curve that quantitatively shows the performance of the proposed framework is presented in Figure 11. In real-world applications, a fingerprint recognition system usually operates away from the EER point by reducing the FMR in order to ensure a high level of security. Note that the FMR and FNMR are inversely related and there is a strict trade-off between them; therefore, it is of particular interest to evaluate how the FNMR is affected. Moreover, the matching performance (in terms of EER) can be greatly affected by a large variation of true matches against false matches.
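For reference, the EER can be estimated from the genuine and impostor score distributions by sweeping the decision threshold until FMR and FNMR cross, as in the following sketch (a simple linear scan; interpolation between thresholds is omitted):

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep a decision threshold over match scores and return the point
    where FMR (impostors accepted) and FNMR (genuines rejected) cross."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = float("inf"), None
    for t in thresholds:
        fmr = np.mean(impostor_scores >= t)   # false matches at threshold t
        fnmr = np.mean(genuine_scores < t)    # false non-matches at threshold t
        if abs(fmr - fnmr) < best_gap:
            best_gap, eer = abs(fmr - fnmr), (fmr + fnmr) / 2.0
    return eer
```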
A comparison of the matching performance in terms of EER (%) between the proposed framework and some closely related works is presented in Table 1.
From the figures presented in Table 1, one can see that the proposed framework performs effectively in comparison with the other techniques, achieving the best EER value of 2.01%. In addition, the matching performance (in terms of EER) of our framework is close to those previously reported in [42,43,45,46]. The average execution time of the fingerprint recognition algorithm, including enhancement, feature extraction and matching, is about 674 ms, which is sufficiently fast for real-time operation, since the additional computational cost of fingerprint enrollment is negligible besides the real-time SIFT feature extraction and matching. The average recognition time of the proposed technique is compared with that of several closely related techniques in Table 2.
The presented fingerprint recognition system was implemented largely using Microsoft Visual Studio 2017 development tools and the OpenCV vision library, to realize real-time digital image processing and automatic object recognition. All tests and evaluations were performed on a PC with an Intel® Core™ i7 CPU at 2.60 GHz with Turbo Boost technology and 8 GB RAM, running Windows 10 Pro (64-bit) as the operating system.

5. Conclusions

This paper has introduced an automated minutiae extraction and matching framework for fingerprint identification and verification purposes. The proposed framework follows a stepwise procedure: first, multiple preprocessing operations, including denoising, binarization, thinning, dilation, and enhancement, are performed on the input fingerprint image to obtain high-accuracy minutiae data. An improved SIFT detector is then applied to the high-contrast fingerprints to detect an optimized set of highly reliable salient points discriminating fingerprint minutiae and to describe them accurately and quickly. The SIFT descriptors of the local keypoints in a given fingerprint are then matched with those of the stored templates using a brute-force approach. Finally, an adaptive dual-threshold filter is applied to remove false matches while preserving the correct ones. Experimental results on the public FVC2004 fingerprint datasets have demonstrated that the presented framework achieves highly competitive or even better performance than several state-of-the-art methods in terms of EER and robustness. Furthermore, the performance evaluation indicated that the framework is easy to deploy, inexpensive, relatively fast, and offers enough finger ridge detail to allow excellent fingerprint comparison for identification and verification purposes, which can be deemed the most important practical implication of this study. Nevertheless, a possible limitation of this study is that validation has been performed on a single database with a relatively small number of fingerprint images. Future work will proceed along two main axes: the first is to extend this framework by incorporating advanced deep learning paradigms for better recognition performance; the second is to conduct more experiments for testing and evaluating our approach on challenging latent fingerprints unintentionally left by subjects at crime scenes.

Author Contributions

Conceptualization, S.B.; methodology, S.B.; software, S.B.; validation, S.B. and S.A.; formal analysis, S.B. and S.A.; project administration, S.A., A.A. and A.B.; funding acquisition, S.A., A.B. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, P.; Kaur, L. Fingerprint Feature Extraction using Ridges and Valleys. Int. J. Eng. Res. Technol. 2015, 4, 1330–1334. [Google Scholar]
  2. Grosz, S.A.; Engelsma, J.J.; Liu, E.; Jain, A.K. C2CL: Contact to Contactless Fingerprint Matching. IEEE Trans. Inf. Forensics Secur. 2022, 17, 196–210. [Google Scholar] [CrossRef]
  3. Bakheet, S.; Al-Hamadi, A. Chord-length shape features for license plate character recognition. J. Russ. Laser Res. 2020, 41, 156–170. [Google Scholar] [CrossRef]
  4. Ali, S.F.; Khan, M.A.; Aslam, A.S. Fingerprint matching, spoof and liveness detection: Classification and literature review. Front. Comput. Sci. 2021, 15, 151310. [Google Scholar] [CrossRef]
  5. Kumar, G.; Bhatia, P.K. A Detailed Review of Feature Extraction in Image Processing Systems. In Proceedings of the 2014 Fourth International Conference on Advanced Computing & Communication Technologies, Rohtak, India, 8–9 February 2014; pp. 5–12. [Google Scholar]
  6. Duan, Y.; He, K.; Feng, J.; Lu, J.; Zhou, J. Estimating 3D Finger Pose via 2D-3D Fingerprint Matching. In Proceedings of the 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, 22–25 March 2022. [Google Scholar]
  7. Bakheet, S.; Al-Hamadi, A. Robust hand gesture recognition using multiple shape-oriented visual cues. EURASIP J. Image Video Process 2021, 2021, 26. [Google Scholar] [CrossRef]
  8. Mali, K.; Bhattacharya, S. Fingerprint recognition using global and local structures. Int. J. Comput. Sci. Eng. 2011, 3, 161–172. [Google Scholar]
  9. Alonso-Fernandez, F.; Bigun, J.; Fierrez, J.; Fronthaler, H.; Kollreider, K.; Ortega-Garcia, J. Fingerprint recognition. In Guide to Biometric Reference Systems and Performance Evaluation; Springer: London, UK, 2009; pp. 51–88. [Google Scholar]
  10. Bakheet, S.; Al-Hamadi, A. A framework for instantaneous driver drowsiness detection based on improved HOG features and naïve Bayesian classification. Brain Sci. 2021, 11, 240. [Google Scholar] [CrossRef]
  11. Bader, A.S.; Sagheer, A.M. Finger Vein Identification Based On Corner Detection. J. Theor. Appl. Inf. Technol. 2018, 96, 2696–2705. [Google Scholar]
  12. Cao, Y.; Pang, B.; Liu, X.; Shi, Y. An Improved Harris-SIFT Algorithm for Image Matching. In Proceedings of the International Conference on Advanced Hybrid Information Processing, Harbin, China, 17–18 July 2017; pp. 56–64. [Google Scholar]
  13. Bakheet, S.; Al-Hamadi, A. A Discriminative Framework for Action Recognition Using f-HOL Features. Information 2016, 7, 68. [Google Scholar] [CrossRef] [Green Version]
  14. Singh, P.; Kaur, L. Fingerprint feature extraction using morphological operations. In Proceedings of the International Conference on Advances in Computer Engineering and Applications, Cebu, Philippines, 15–17 December 2015; pp. 764–767. [Google Scholar]
  15. Singh, K.; Kaur, K.; Sardana, A. Fingerprint feature extraction. Int. J. Comput. Sci. Technol. 2011, 2, 237–241. [Google Scholar]
  16. Bakheet, S.; Al-Hamadi, A.; Mofaddel, M.A. Recognition of Human Actions Based on Temporal Motion Templates. Br. J. Appl. Sci. Technol. 2017, 20, 1–11. [Google Scholar] [CrossRef]
  17. Lian, Q.; Zhang, J.; Chen, S. Extracting fingerprint minutiae based on Harris corner detector. Opt. Tech. 2008, 34, 383–387. [Google Scholar]
  18. Bakheet, S.; Al-Hamadi, A.; Youssef, R. A Fingerprint-Based Verification Framework Using Harris and SURF Feature Detection Algorithms. Appl. Sci. 2022, 12, 2028. [Google Scholar] [CrossRef]
  19. Bakheet, S.; Al-Hamadi, A. Hand gesture recognition using optimized local Gabor features. J. Comput. Theor. Nanosci. 2017, 14, 1380–1389. [Google Scholar] [CrossRef]
  20. Thakkar, D. Minutiae Based Extraction in Fingerprint Recognition. Available online: https://www.bayometric.com/minutiae-based-extraction-fingerprint-recognition/ (accessed on 10 October 2017).
  21. Sadek, S.; Abdel-Khalek, S. Generalized α-Entropy Based Medical Image Segmentation. J. Softw. Eng. Appl. 2014, 7, 62–67. [Google Scholar] [CrossRef] [Green Version]
  22. Bakheet, S.; Al-Hamadi, A. A Hybrid Cascade Approach for Human Skin Segmentation. Br. J. Math. Comput. Sci. 2016, 17, 1–14. [Google Scholar] [CrossRef]
  23. Hong, L.; Wan, Y.; Jain, A. Fingerprint image enhancement: Algorithm and performance evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 777–789. [Google Scholar] [CrossRef] [Green Version]
  24. Sadek, S.; Al-Hamadi, A.; Michaelis, B.; Sayed, U. A New Method for Image Classification Based on Multi-level Neural Networks. In Proceedings of the International Conference on Signal and Image Processing (ICSIP’09), Amsterdam, The Netherlands, 23–25 September 2009; pp. 197–200. [Google Scholar]
  25. Bakheet, S.; Al-Hamadi, A. Computer-Aided Diagnosis of Malignant Melanoma Using Gabor-Based Entropic Features and Multilevel Neural Networks. Diagnostics 2020, 10, 822. [Google Scholar] [CrossRef]
  26. Patel, M.B.; Parikh, S.M.; Patel, A.R. An Improved Thinning Algorithm For Fingerprint Recognition. Int. J. Adv. Res. Comput. Sci. 2017, 8, 1238–1244. [Google Scholar] [CrossRef] [Green Version]
  27. Hall, R.W. Fast parallel thinning algorithms: Parallel speed and connectivity preservation. Commun. ACM 1989, 32, 124–131. [Google Scholar] [CrossRef]
  28. Zhang, T.Y.; Suen, C.Y. A Fast Parallel Algorithm for Thinning Digital Patterns. Commun. ACM 1984, 27, 236–239. [Google Scholar] [CrossRef]
  29. Kocharyan, D. A modified fingerprint image thinning algorithm. Am. J. Softw. Eng. Appl. 2013, 2, 1–6. [Google Scholar]
  30. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 2, 91–110. [Google Scholar] [CrossRef]
  31. Vedaldi, A. An Implementation of SIFT Detector and Descriptor. 2008. Available online: http://cs.tau.ac.il/~turkel/imagepapers/ (accessed on 8 March 2022).
  32. Lee, Y.; Lee, D.H.; Park, J.H. Revisiting NIZK-Based Technique for Chosen-Ciphertext Security: Security Analysis and Corrected Proofs. Appl. Sci. 2021, 11, 3367. [Google Scholar] [CrossRef]
  33. Dospinescu, O.; Brodner, P. Integrated Applications with Laser Technology. Inform. Econ. 2013, 17, 53–61. [Google Scholar] [CrossRef]
  34. Agarwal, D.; Garima; Bansal, A. A Utility of Ridge Contour Points in Minutiae-Based Fingerprint Matching. In Proceedings of the International Conference on Computational Intelligence and Data Engineering, Hyderabad, India, 8–9 August 2020. [Google Scholar]
  35. Jiayuan, R.; Yigang, W.; Yun, D. Study on eliminating wrong match pairs of SIFT. In Proceedings of the IEEE 10th International Conference on Signal Processing Proceedings, Ljubljana, Slovenia, 18–20 September 2010; pp. 992–995. [Google Scholar] [CrossRef]
  36. Omercevic, D.; Drbohlav, O.; Leonardis, A. High-Dimensional Feature Matching: Employing the Concept of Meaningful Nearest Neighbors. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8. [Google Scholar] [CrossRef]
  37. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151. [Google Scholar]
  38. Moravec, H. Towards Automatic Visual Obstacle Avoidance. In Proceedings of the 5th International Joint Conference on Artificial Intelligence (IJCAI’77), Cambridge, MA, USA, 22–25 August 1977; Volume 1, p. 584. [Google Scholar]
  39. Maio, D.; Maltoni, D.; Cappelli, R.; Wayman, J.L.; Jain, A.K. FVC 2004: Third Fingerprint Verification Competition. In Proceedings of the International Conference on Biometric Authentication, Hong Kong, China, 15–17 July 2004; Volume 3072, pp. 1–7. [Google Scholar]
  40. Lakshmanan, R.; Selvaperumal, S.; Chow, M. Integrated Finger Print Recognition Using Image Morphology and Neural Network. Int. J. Adv. Stud. Comput. Sci. Eng. 2014, 3, 40–48. [Google Scholar]
  41. Ali, S.; Prakash, S. 3-Dimensional Secured Fingerprint Shell. Pattern Recognit. Lett. 2019, 126, 68–77. [Google Scholar] [CrossRef]
  42. Arunalatha, J.S.; Tejaswi, V.; Shaila, K.; Anvekar, D.; Venugopal, K.R.; Iyengar, S.S.; Patnaik, L.M. FIVDL: Fingerprint Image Verification using Dictionary Learning. Procedia Comput. Sci. 2015, 54, 482–490. [Google Scholar] [CrossRef] [Green Version]
  43. Turroni, F.; Maltoni, D.; Cappelli, R.; Maio, D. Improving Fingerprint Orientation Extraction. IEEE Trans. Inf. Forensics Secur. 2011, 6, 1002–1013. [Google Scholar] [CrossRef]
  44. Alam, B.; Jin, Z.; Yap, W.S.; Goi, B.M. An alignment-free cancelable fingerprint template for bio-cryptosystems. J. Netw. Comput. Appl. 2018, 15, 20–32. [Google Scholar] [CrossRef]
  45. Yang, J.; Xiong, N.; Vasilakos, A.V. Two-Stage Enhancement Scheme for Low-Quality Fingerprint Images by Learning from the Images. IEEE Trans. Hum.-Mach. Syst. 2013, 43, 235–248. [Google Scholar] [CrossRef]
  46. Bartunek, J.S.; Nilsson, M.; Sallberg, B.; Claesson, I. Adaptive Fingerprint Image Enhancement With Emphasis on Preprocessing of Data. IEEE Trans. Image Process. 2013, 22, 644–656. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Gottschlich, C. Curved-Region-Based Ridge Frequency Estimation and Curved Gabor Filters for Fingerprint Image Enhancement. IEEE Trans. Image Process. 2012, 21, 2220–2227. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Typical minutiae structures: (a) core and delta singularities; (b) ridge ending; and (c) ridge bifurcation.
Figure 2. A functional block diagram of the automated fingerprint recognizer.
Figure 3. Simplified outline of fingerprint enhancement based on contextual filters.
Figure 4. A 3 × 3 sized block.
Figure 5. Main steps of fingerprint preprocessing: (a) thinning, (b) dilation and (c) unwanted pattern removal.
Figure 6. SIFT descriptor layout for 4 × 4 sub-regions.
Figure 7. SIFT minutiae detection for various threshold values: (a) source image and detected frames at threshold values of (b) 5.0, (c) 7.5, and (d) 10.0.
Figure 8. SIFT minutiae feature detection: (a) original image, (b) binary image, (c) thinned image, and (d) detected SIFT minutiae keypoints (pixels shown in green).
Figure 9. Fingerprint minutiae matching results using (a) the original SIFT algorithm and (b) the improved SIFT algorithm.
Figure 10. A sample of fingerprint images from the FVC2004 fingerprint database.
Figure 11. A plot of the ROC curve between FMR and FNMR for the FVC2004 fingerprint database.
Table 1. A summary of the performance comparison between the proposed framework and the most closely related techniques.

Work                     | Technique                      | EER (%)
Proposed method          | Improved SIFT features         | 2.01
Ali and Prakash [41]     | Fingerprint shell              | 2.02
Arunalatha et al. [42]   | Dictionary learning            | 2.04
Francesco et al. [43]    | Orientation extraction         | 2.06
Alam et al. [44]         | Fingerprint template           | 2.07
Jucheng et al. [45]      | Two-stage enhancement scheme   | 2.19
Bartunek et al. [46]     | Pre-processing                 | 2.40
Carsten [47]             | Curved Gabor filters           | 11.97
Table 2. Average recognition time comparison of various closely related techniques.

Technique                | Elapsed Time (ms)
Proposed technique       | 674
Ali and Prakash [41]     | 632
Arunalatha et al. [42]   | 745
Francesco et al. [43]    | 957
Alam et al. [44]         | 1075
Jucheng et al. [45]      | 1119
Carsten [47]             | 1149
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

