
Entropy 2019, 21(8), 786; https://doi.org/10.3390/e21080786

Article
Entropy-Based Clustering Algorithm for Fingerprint Singular Point Detection
1 Institute of Photonic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
2 Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
3 Department of Computer Science and Information Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
* Author to whom correspondence should be addressed.
Received: 19 July 2019 / Accepted: 9 August 2019 / Published: 12 August 2019

Abstract: Fingerprints have long been used in automated fingerprint identification or verification systems. Singular points (SPs), namely the core and delta point, are the basic features widely used for fingerprint registration, orientation field estimation, and fingerprint classification. In this study, we propose an adaptive method to detect SPs in a fingerprint image. The algorithm consists of three stages. First, an innovative enhancement method based on singular value decomposition is applied to remove the background of the fingerprint image. Second, a blurring detection and boundary segmentation algorithm based on the innovative image enhancement is proposed to detect the region of impression. Finally, an adaptive method based on wavelet extrema and the Henry system for core point detection is proposed. Experiments conducted using the FVC2002 DB1 and DB2 databases prove that our method can detect SPs reliably.
Keywords:
singular point detection; boundary segmentation; blurring detection; fingerprint image enhancement; fingerprint quality

1. Introduction

Fingerprint biometrics is increasingly being used in the commercial, civilian, physiological, and financial domains based on two important characteristics of fingerprints: (1) fingerprints do not change with time and (2) every individual’s fingerprints are unique [1,2,3,4,5]. Owing to these characteristics, fingerprints have long been used in automated fingerprint identification or verification systems. These systems rely on accurate recognition of fingerprint features. At the global level, fingerprints have ridge flows assembled in a specific formation, resulting in different ridge topology patterns such as core and delta (singular points (SPs)), as shown in Figure 1a. These SPs are the basic features required for fingerprint classification and indexing. Local fingerprint features are carried by local ridge details such as ridge endings and bifurcations (minutiae), as shown in Figure 1b. Fingerprint minutiae are often used to conduct matching tasks because they are generally stable and highly distinctive [6].
Most previous SP extraction algorithms were performed directly over fingerprint orientation images. The most popular method is based on the Poincaré index [7], which typically computes the accumulated rotation of the vector field along a closed curve surrounding a local point. Wang et al. [8] proposed a fingerprint orientation model based on 2D Fourier expansions to extract SPs independently. Nilsson and Bigun [9] as well as Liu [10] used the symmetry properties of SPs to extract them by first applying a complex filter to the orientation field in multiple resolution scales by detecting the parabolic and triangular symmetry associated with core and delta points. Zhou et al. [11] proposed a feature of differences of the orientation values along a circle (DORIC) in addition to the Poincaré index to effectively remove spurious detections, take the topological relations of SPs as a global constraint for fingerprints, and use the global orientation field for SP detection. Chen et al. [12] obtained candidate SPs by the multiscale analysis of orientation entropy and then applied some post-processing steps to filter the spurious core and delta points.
However, SP detection is sensitive to noise, and extracting SPs reliably is a very challenging problem. When input fingerprint images have poor quality, the performance of these methods degrades rapidly. Noise in fingerprint images makes SP extraction unreliable and may result in a missed or wrong detection. Therefore, fingerprint image enhancement is a key step before extracting SPs.
Fingerprint image enhancement remains an active area of research. Researchers have attempted to reduce noise and improve the contrast between ridges and valleys in fingerprint images. Most fingerprint image enhancement algorithms are based on the estimation of an orientation field [13,14,15]. Some methods use variations of Gabor filters to enhance fingerprint images [16,17]. These methods are based on the estimation of a single orientation and a single frequency; they can remove undesired noise and preserve and improve the clarity of ridge and valley structures in images. However, they are not suitable for enhancing ridges in regions with high curvature. Wang and Wang [18] first detected the SP area and then improved it by applying a bandpass filter in the Fourier domain. However, detecting the SP region when the fingerprint image has extremely poor quality is highly difficult. Yang et al. [19] first enhanced fingerprint images in the spatial domain with a spatial ridge-compensation filter by learning from the images and then used a frequency bandpass filter that is separable in the radial- and angular-frequency domains. Yun and Cho [20] analyzed fingerprint images, divided them into oily, neutral, and dry according to their properties, and then applied a specific enhancement strategy for each type. To enhance fingerprint images, Fronthaler et al. [21] used a Laplacian-like image pyramid to decompose the original fingerprint into subbands corresponding to different spatial scales and then performed contextual smoothing on these pyramid levels, where the corresponding filtering directions stem from the frequency-adapted structure tensor. Bennet and Perumal [22] transformed fingerprint images into the wavelet domain and then used singular value decomposition (SVD) to decompose the low subband coefficient matrix. 
Fingerprint images were enhanced by multiplying the singular value matrix of the low-low (LL) subband by the ratio of the largest singular value of a generated normalized matrix (with mean 0 and variance 1) to the largest singular value of the LL subband. However, the resulting images were sometimes uneven because SVD was applied only to the low subband and a generated normalized matrix was used. To overcome this problem, Wang et al. [23] introduced a novel lighting compensation scheme involving the use of adaptive SVD on wavelet coefficients. First, they decomposed the input fingerprint image into four subbands by 2D discrete wavelet transform (DWT). Subsequently, they compensated fingerprint images by adaptively obtaining the compensation coefficients for each subband based on the referred Gaussian template.
The aforementioned methods for enhancing fingerprint images can reduce noise and improve the contrast between ridges and valleys in the images. However, they are not very effective on fingerprint images of very poor quality, particularly blurred ones. To overcome this problem, we need to segment the fingerprint foreground, with its interleaved ridge and valley structure, from the complex background with non-fingerprint patterns for more accurate and efficient feature extraction and identification. Many studies have investigated segmentation of rolled and plain fingerprint images. Mehtre et al. [24] partitioned a fingerprint image into blocks and then performed block classification based on gradient and variance information to segment the fingerprint area. This method was further extended to a composite method [25] that takes advantage of both the directional and the variance approaches. Zhang et al. [26] proposed an adaptive total variation decomposition model by incorporating the orientation field and local orientation coherence for latent fingerprint segmentation. Based on a ridge quality measure defined as the structural similarity between a fingerprint patch and its dictionary-based reconstruction, Cao et al. [27] proposed a learning-based method for latent fingerprint image segmentation.
This study proposes an efficient approach by combining the novel adaptive image enhancement, compact boundary segmentation, and a novel clustering algorithm by integrating wavelet frame entropy with region growing to evaluate the fingerprint image quality so as to validate the SPs. Experiments were conducted on FVC2002 DB1 and FVC2002 DB2 databases [28]. The experimental results indicate the excellent performance of the proposed method.
The rest of this paper is organized as follows. Section 2 introduces the proposed image enhancement, precise boundary segmentation, and blurring detection based on wavelet entropy clustering algorithm. Section 3 describes the proposed algorithm for SP detection. Section 4 presents experimental results to verify the proposed approach. Finally, Section 5 presents the conclusions of this study.

2. Blurring Detection for Fingerprint Impression

2.1. Fingerprint Background Removal

SVD has been widely used in digital image processing [29,30,31]. Without loss of generality, we suppose that f is a fingerprint image with a resolution of M × N (M ≥ N). The SVD of a fingerprint image f can be written as follows:
$$ f = U \Sigma V^{T}, \tag{1} $$

where $U = [u_1, u_2, \ldots, u_N]$ and $V = [v_1, v_2, \ldots, v_N]$ are orthogonal matrices containing the singular vectors, and $\Sigma = [D, O]$ contains the sorted singular values on its main diagonal. Here, $D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$ holds the singular values $\lambda_i$, $i = 1, 2, \ldots, k$, in non-increasing order, $O$ is an $M \times (M - k)$ zero matrix, and $k$ is the rank of $f$. We can also expand the fingerprint image as follows:

$$ f = \lambda_1 u_1 v_1^{T} + \lambda_2 u_2 v_2^{T} + \cdots + \lambda_k u_k v_k^{T}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k. \tag{2} $$

The terms $\lambda_i u_i v_i^{T}$ containing the vector outer products in Equation (2) are the principal images. The Frobenius norm of the fingerprint image is preserved under the SVD transformation:

$$ \| f \|_F^{2} = \sum_{i=1}^{k} \lambda_i^{2}. \tag{3} $$
Equation (3) shows how the signal energy of f can be partitioned by the singular values in the sense of the Frobenius norm. It is common to discard the small singular values in SVD to obtain matrix approximations whose rank equals the number of remaining singular values. Good matrix approximations can always be obtained with a small fraction of the singular values. The highly concentrated property of SVD helps remove background noise from the foreground ridges.
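The energy-partition property above can be checked numerically. Below is a minimal Python sketch, using a random matrix as a stand-in for a real fingerprint image:

```python
import numpy as np

# A minimal sketch of Equations (1)-(3): decompose a matrix, verify that the
# squared Frobenius norm equals the sum of the squared singular values, and
# measure how much energy a rank-r approximation retains. The random matrix
# is only a stand-in for a real fingerprint image.
rng = np.random.default_rng(0)
f = rng.random((64, 48))  # M x N with M >= N

U, s, Vt = np.linalg.svd(f, full_matrices=False)  # f = U @ diag(s) @ Vt

# Equation (3): the signal energy is partitioned by the singular values.
energy_total = np.sum(s ** 2)

# Rank-r approximation built from the r largest principal images (Equation (2)).
r = 5
f_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
energy_kept = np.sum(s[:r] ** 2) / energy_total
```

Because the singular values decay quickly, `energy_kept` is close to 1 even for small `r`, which is the concentration property exploited for background removal.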
We performed some experiments to observe the effects of singular values on a fingerprint image. Figure 2a shows a fingerprint image in the FVC 2002 DB2 database. First, all singular values of the fingerprint image, as shown in Figure 2a, were set to 1 and the fingerprint image was then reconstructed. Figure 2b shows the reconstructed fingerprint image without the effect of singular values, implying that the singular vectors represent the background information of the given fingerprint image. Next, all singular values of the fingerprint image shown in Figure 2a were multiplied by 2 and the fingerprint image was then reconstructed. As shown in Figure 2c, the fingerprint image looks clearer and the background of the fingerprint image was removed. This suggests that the singular values represent the foreground ridges of the given fingerprint image. Thus, SVD can be used for enhancing the ridge structure and removing noise from the background of the fingerprint image. In addition, if the fingerprint image has low contrast, this problem can be corrected by replacing Σ with an equalized singular value matrix obtained from a normalized image, i.e., an image whose pixel values follow a Gaussian distribution with mean and variance calculated from the available dataset. This normalized image is called a Gaussian template image.
Based on observations of the effects of SVD on a fingerprint image, and to effectively remove the background, we examined the singular values of the fingerprint image, which contain most of the foreground information. We automatically adjusted the illumination of an image to obtain an equalized image that has a normal distribution. If the fingerprint image had low contrast, the singular values were multiplied by a scalar larger than 1. A normalized intensity image with no illumination problem can be considered an image with a Gaussian distribution, which can easily be obtained by generating random pixel values with a Gaussian distribution. Moreover, the first singular value contributes 99.72% of the energy of the original image, and the first two singular values together contribute 99.88% of the total energy [31]. The larger singular values represent the energy of the fingerprint pattern, and the smaller ones represent the energy of the background and noise. To effectively remove the background, we set a compensation weight, α, that enhances the image contrast: the background is easily removed when the compensation weight is larger than 1, whereas the image contrast is reduced when the compensation weight is smaller than 1. Therefore, we compared the maximum singular value of the Gaussian template with the maximum singular value of the original fingerprint image to compute the compensation weight as follows:
$$ \alpha = \begin{cases} \max\!\left( \dfrac{\max(\Sigma_G)}{\max(\Sigma)}, \dfrac{\max(\Sigma)}{\max(\Sigma_G)} \right), & \max(\Sigma) < \eta \\[4pt] 1, & \max(\Sigma) \ge \eta, \end{cases} \tag{4} $$

where the threshold value η is experimentally set to 90,000, and $\Sigma_G$ is the singular value matrix of the Gaussian template image with mean and variance calculated from the adopted database, as shown in Table 1. The equalized image $f_{eq}$, having the same size as the original fingerprint image, can be generated as follows:

$$ f_{eq} = U (\alpha \Sigma) V^{T}. \tag{5} $$
This equalization step effectively eliminates the undesired background noise. As shown in Figure 2d, the background of the fingerprint image has been removed, yielding an image with a nearly normal distribution. It also improves the clarity and continuity of the ridge structures in the fingerprint image.
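The compensation scheme of Equations (4) and (5) can be sketched as follows. The template mean/standard deviation, the threshold, and the toy input are illustrative stand-ins, not the database-derived values of Table 1:

```python
import numpy as np

def equalize_fingerprint(f, template_mean=128.0, template_std=48.0, eta=90000.0):
    """Sketch of Equations (4)-(5): scale the singular values of f by a
    compensation weight alpha derived from a Gaussian template image.
    template_mean/template_std/eta are illustrative values; the paper
    estimates the template statistics from the database (Table 1)."""
    rng = np.random.default_rng(0)
    g = rng.normal(template_mean, template_std, size=f.shape)  # Gaussian template
    s_f = np.linalg.svd(f, compute_uv=False)
    s_g = np.linalg.svd(g, compute_uv=False)
    if s_f[0] < eta:
        alpha = max(s_g[0] / s_f[0], s_f[0] / s_g[0])  # Equation (4)
    else:
        alpha = 1.0
    U, s, Vt = np.linalg.svd(f, full_matrices=False)
    return (U * (alpha * s)) @ Vt  # Equation (5): f_eq = U (alpha * Sigma) V^T

f = np.tile(np.arange(64.0), (64, 1))  # toy low-contrast "image"
f_eq = equalize_fingerprint(f)
```

Since α multiplies every singular value by the same scalar, the output is a globally rescaled version of the input; the contrast gain comes from choosing α against the template's dominant singular value.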

2.2. Impression Region Detection and Boundary Segmentation

The fingerprint texture should be distinguished from the background by a suitable binary threshold obtained from energy analysis, which serves as a very useful and distinctive preprocessing step for boundary segmentation. An analysis of the energy distribution of fingerprint images from public fingerprint databases indicates a prominent distinction between the fingerprint object and the undesired background owing to the alternating structure of ridges and valleys. In this section, we propose an impression region detection approach based on the energy difference between the impression contour and the background scene. The most obvious feature of the fingerprint ridge is its texture, which manifests as variations in the energy roughness of the impression region. Roughness corresponds to what our sense of touch perceives on an object, and it can be characterized in two-dimensional scans by depth (energy strength) and width (separation between ridges). Before ridge object extraction, a smoothing filter is used to smooth the image and enhance the desired local ridges. The local mean µ and energy ε over the 7 × 7 pixels defined by the mask are given by the following expressions:
$$ \mu(x, y) = \frac{1}{N} \sum_{i=-3}^{3} \sum_{j=-3}^{3} f_{eq}(x+i, y+j), \tag{6} $$

$$ \varepsilon(x, y) = \frac{1}{N} \sum_{i=-3}^{3} \sum_{j=-3}^{3} \bigl( f_{eq}(x+i, y+j) - \mu(x, y) \bigr)^{2}, \tag{7} $$
where $f_{eq}(x, y)$ is the equalized image discussed in Section 2.1 and $N = 49$ is a normalizing constant. To transform the grayscale intensity image in Figure 3a into a logical map, a binarized image $f_b(x, y)$ of the equalized image is obtained by extracting the object of interest from the background as follows:
$$ f_b(x, y) = \begin{cases} 255, & \text{if } \varepsilon(x, y) \ge 255 \\ 0, & \text{if } \varepsilon(x, y) < 255, \end{cases} \tag{8} $$
where f b ( x ,   y ) is a binarized image; pixel values labeled 255 are objects of interest, whereas pixel values labeled 0 are undesired ones.
Figure 3b shows the binarized image obtained by applying Equation (8) to the equalized image. Based on the binary image shown in Figure 3b, we can detect the region of impression (ROI), which is a very useful and distinctive preprocessing step for boundary segmentation. A pixel (x, y) with energy ε(x, y) ≥ 255 belongs to the ROI; therefore, we can detect the ROI, $f_{ROI}$, as follows:
$$ f_{ROI} = \{ (x, y) \mid \varepsilon(x, y) \ge 255 \}. \tag{9} $$
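Equations (6)–(9) amount to a sliding-window variance threshold. The sketch below uses a naive double loop for clarity and a synthetic stripe pattern as the "ridge" region; both are illustrative choices:

```python
import numpy as np

def local_energy(f_eq, half=3):
    """Equations (6)-(7): local mean and variance ("energy") over a
    (2*half+1)^2 window, computed with an explicit double loop for clarity."""
    M, N = f_eq.shape
    n = (2 * half + 1) ** 2
    eps = np.zeros((M, N))
    padded = np.pad(f_eq.astype(float), half, mode="edge")
    for x in range(M):
        for y in range(N):
            win = padded[x:x + 2 * half + 1, y:y + 2 * half + 1]
            mu = win.sum() / n                       # Equation (6)
            eps[x, y] = ((win - mu) ** 2).sum() / n  # Equation (7)
    return eps

def binarize(f_eq, thresh=255.0):
    """Equations (8)-(9): threshold the energy map; 255 marks ROI pixels."""
    return np.where(local_energy(f_eq) >= thresh, 255, 0)

# Toy example: striped (ridge-like) left half, flat background on the right.
img = np.zeros((20, 20))
img[:, 0:10:2] = 255.0
fb = binarize(img)
```

High-variance striped pixels map to 255 (ROI) and the flat background maps to 0, mirroring the behavior shown in Figure 3b.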
To define the fingerprint contour, we determine the boundary location of the fingerprint. Most human fingerprint contours have elliptical shapes. Thus, the left, right, and horizontal projections for an elliptical fingerprint contour are divided to search for landmarks, commencing from the two sides every 15 pixels from top to bottom. Based on the located landmarks, the contour of the fingerprint is obtained as a polygon. As illustrated in Figure 3c, the blue, green, and red lines show the contours obtained by using the left, right, and horizontal projections, respectively. This method is advantageous because it is simple and less influenced by finger pressure.

2.3. Blurring Detection

Our proposed method improves the fingerprint image quality, as discussed in Section 2.1, and the ROI is defined, as discussed in Section 2.2. However, the fingerprint image still contains a blur region within the ROI, leading to the false detection of SPs. In this section, we propose a method for detecting the blur region in a fingerprint image and then ignoring it during detection to reduce the time and improve the accuracy of SP detection.
To locate the blur region, we perform region segmentation by finding a meaningful boundary based on a point aggregation procedure. The center pixel of the region is a natural starting point. Points are grouped to form the region of interest using 4-connectivity, and the clustering stops when no more pixels qualify for inclusion in the region. After region growing, the region is measured to determine the size of the blur region. Entropy filtering for blur detection of the pixels in the 11 × 11 (N = 11) neighborhood defined by the mask is given by the following:
$$ e_{NSDWF} = -\frac{1}{N^{2}} \sum_{x, y = 0}^{N-1} |d_{HH}(x, y)| \log |d_{HH}(x, y)|, \tag{10} $$

where $d_{HH}$ is the coefficient of a non-subsampled version of the 2D non-separable discrete wavelet transform (NSDWT) [32,33] in the high-frequency subband decomposed at the first level ($d_{j+1}^{HH}$, $j = 0$), as shown in Figure 4. Figure 3b shows that the proposed algorithm can perform very well for discriminating the blur region.
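NSDWT implementations are not widely packaged, so the sketch below substitutes an undecimated Haar diagonal detail for the HH subband and normalizes |d_HH| into (0, 1) so the entropy term is well defined; both choices are assumptions of this illustration, not the authors' implementation:

```python
import numpy as np

def haar_hh(f):
    """Undecimated Haar diagonal (HH) detail, used here as a simple
    stand-in for the first-level NSDWT high-frequency subband in
    Equation (10); the paper uses a non-separable transform."""
    f = f.astype(float)
    d = np.zeros_like(f)
    d[:-1, :-1] = (f[:-1, :-1] - f[:-1, 1:] - f[1:, :-1] + f[1:, 1:]) / 2.0
    return d

def entropy_map(f, n=11):
    """Equation (10): entropy filtering of |d_HH| over an n x n window.
    Normalizing |d_HH| into (0, 1) is an assumption of this sketch so
    that -|d| log|d| is non-negative."""
    d = np.abs(haar_hh(f))
    d = d / (d.max() + 1e-12)
    term = np.where((d > 0) & (d < 1), -d * np.log(d), 0.0)
    half = n // 2
    padded = np.pad(term, half, mode="edge")
    out = np.zeros_like(term)
    for x in range(term.shape[0]):
        for y in range(term.shape[1]):
            out[x, y] = padded[x:x + n, y:y + n].sum() / (n * n)
    return out

rng = np.random.default_rng(1)
textured = rng.random((24, 24)) * 255.0   # high-frequency, ridge-like detail
blurred = np.full((24, 24), 128.0)        # featureless (blurred) patch
e_textured = entropy_map(textured).mean()
e_blurred = entropy_map(blurred).mean()
```

A blurred patch has near-zero high-frequency coefficients and hence near-zero wavelet entropy, which is the property the region-growing step clusters on.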

3. SP Detection

In general, SPs of a fingerprint are detected by a Poincaré index-based algorithm. However, the Poincaré index method usually results in considerable spurious detection, particularly for low-quality fingerprint images. This is because the conventional Poincaré index along the boundary of a given region equals the sum of the Poincaré indices of the core points within this region, and it contains no information about the characteristics and cannot describe the core point completely. To overcome the shortcoming of the Poincaré index method, we propose an adaptive method based on wavelet extrema for core point detection. Wavelet extrema contain information on both the transform modulus maxima and minima in the image, considered to be among the most meaningful features for signal characterization.
First, we align the ROI based on the Poincaré’s core points and the local orientation field. The Poincaré index at pixel (x,y), which is enclosed by 12 direction fields taken in a counterclockwise direction, is calculated as follows:
$$ \mathrm{Poincare}(x, y) = \frac{1}{2\pi} \sum_{k=0}^{M-1} \Delta(k), \tag{11} $$

where

$$ \Delta(k) = \begin{cases} \delta(k), & \text{if } |\delta(k)| < \pi/2 \\ \pi + \delta(k), & \text{if } \delta(k) \le -\pi/2 \\ \pi - \delta(k), & \text{otherwise} \end{cases} \tag{12} $$

and

$$ \delta(k) = \theta\bigl(x(k'), y(k')\bigr) - \theta\bigl(x(k), y(k)\bigr); \quad k' = (k+1) \bmod M; \quad M = 12, \tag{13} $$

where $(x(k), y(k))$ and $(x(k'), y(k'))$ are the paired neighboring coordinates of the direction fields. A core point has a Poincaré index of +1/2, whereas a delta point has a Poincaré index of −1/2. The core points detected in this step are called rough core points.
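The Poincaré computation of Equations (11)–(13) can be sketched as follows; the wrap-to-(−π/2, π/2] convention and the synthetic orientation ring are illustrative choices for this sketch:

```python
import numpy as np

def poincare_index(theta_ring, M=12):
    """Sketch of Equations (11)-(13): accumulated orientation change along
    a closed ring of M orientation samples (radians) around a candidate
    pixel. Returns ~ +0.5 for a core, ~ -0.5 for a delta, ~ 0 otherwise."""
    total = 0.0
    for k in range(M):
        d = theta_ring[(k + 1) % M] - theta_ring[k]   # Equation (13)
        if d <= -np.pi / 2:                           # Equation (12):
            d += np.pi                                # wrap into (-pi/2, pi/2]
        elif d >= np.pi / 2:
            d -= np.pi
        total += d
    return total / (2 * np.pi)                        # Equation (11)

# A core-like field: orientation increases by pi over one full turn
# (ridge orientations are defined modulo pi).
core_ring = [k * np.pi / 12 for k in range(12)]
print(round(poincare_index(core_ring), 2))  # → 0.5
```

Running the same routine on a ring whose orientation decreases by π over a full turn gives −0.5, the delta-point signature.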
Next, we align the fingerprint image under the right-angle coordinate system based on the number and location of preliminary core points. Because fingerprints may have different numbers of cores, the first step in alignment is to adopt the preliminary Poincaré indexed positions as a reference. If the number of preliminary cores is 2, the image is rotated along the orientation calculated from the midpoint between the two cores. If the number of cores is equal to 1, the image is rotated along the direction calculated from the neighboring orientation of the core. If the number of cores is zero, the fingerprint is kept intact. The rotation angle is calculated as follows:
$$ \phi = \frac{1}{2} \tan^{-1} \left( \frac{\sum_{(i,j) \in \zeta} \sin 2 O_{i,j}}{\sum_{(i,j) \in \zeta} \cos 2 O_{i,j}} \right), \tag{14} $$

where $O_{i,j}$ is the local orientation around a pixel and ζ is the core subregion of interest (COI) centered at the Poincaré index core point $(x_c, y_c)$ with a size of 60 × 60 pixels, which was chosen to avoid possible variability near the boundary while the finger is being scanned by the reader. Fingerprint alignment is performed to make the pattern rotation-invariant and to reduce the false rejection rate. The rotation is given by the following equation:

$$ \begin{cases} x' = x \cos\phi - y \sin\phi \\ y' = x \sin\phi + y \cos\phi, \end{cases} \tag{15} $$

and point (x, y) with orientation angle ϕ is mapped to point $(x', y')$. Figure 5 shows some fingerprint alignments produced by our method with different numbers of cores.
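Equations (14) and (15) reduce to a doubled-orientation average followed by a plane rotation. A small sketch, using `arctan2` for quadrant safety (an implementation choice not stated in the paper):

```python
import numpy as np

def rotation_angle(O):
    """Equation (14): average doubled orientation over the core subregion
    zeta; O is an array of local orientations in radians."""
    return 0.5 * np.arctan2(np.sum(np.sin(2 * O)), np.sum(np.cos(2 * O)))

def rotate_point(x, y, phi):
    """Equation (15): rotate coordinates by phi about the origin."""
    xp = x * np.cos(phi) - y * np.sin(phi)
    yp = x * np.sin(phi) + y * np.cos(phi)
    return xp, yp

# Uniform toy orientation field: the recovered angle equals the field angle.
O = np.full((60, 60), np.pi / 6)
phi = rotation_angle(O)
xp, yp = rotate_point(1.0, 0.0, phi)
```

Doubling the angles before averaging is what makes the estimate consistent for orientations defined modulo π.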
After alignment, the COI subregion with size of 60 × 60 pixels centered at the Poincaré’s detected point is further segmented from the aligned image. The COI then goes through a skeletonization process to peel off as many ridge pixels as possible without affecting the general shape of the ridge, as shown in Figure 6a, and is then transformed to a skeletonized ridge image, as shown in Figure 6b. The skeletonized ridge image is used to compute the wavelet extrema, as shown in Figure 6c.
Wavelet modulus maxima representations for two-dimensional signals were proposed by Mallat [33] as a tool for extracting information on singularities, which were considered to be among the most meaningful features for signal characterization. Most wavelet transform local extrema are actually modulus maxima (there are examples of signals for which the wavelet extrema and modulus representations are the same). The sets of indices and values of the local maxima, denoted $M(f)$, and local minima, denoted $m(f)$, of a skeletonized ridge image $f$ are defined as follows:
$$ M(f) = \{ (z, f(z)) : f(z-1) \le f(z) \text{ and } f(z+1) \le f(z) \}, \tag{16} $$

$$ m(f) = \{ (z, f(z)) : f(z-1) \ge f(z) \text{ and } f(z+1) \ge f(z) \}, \tag{17} $$

where $z \in \mathbb{Z}$. Similarly, the indices and values of the wavelet transform extrema for an image $f$ are defined as follows:

$$ E(f) = \left\{ M\bigl( w_j(f) \bigr) \cup m\bigl( w_j(f) \bigr); \; j = 1, 2, \ldots, J \right\}, \tag{18} $$
where w j ( f ) is the 2D non-separable wavelet transform of image f at scale j, j = 1, 2,…, J. The SP of a fingerprint image can be found by extracting curvature primitives and discovering the location of these primitives in the subregion, as shown in Figure 6c.
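For a one-dimensional profile, the extrema sets of Equations (16) and (17) can be listed directly; a minimal sketch with a toy ridge profile:

```python
def local_extrema(f):
    """Equations (16)-(17): indices and values of the local maxima M(f)
    and minima m(f) of a 1-D sequence f (interior points only)."""
    maxima = [(z, f[z]) for z in range(1, len(f) - 1)
              if f[z - 1] <= f[z] and f[z + 1] <= f[z]]
    minima = [(z, f[z]) for z in range(1, len(f) - 1)
              if f[z - 1] >= f[z] and f[z + 1] >= f[z]]
    return maxima, minima

ridge = [0, 2, 5, 3, 1, 4, 2]   # toy skeletonized-ridge profile
maxima, minima = local_extrema(ridge)
```

Here the peaks at indices 2 and 5 land in $M(f)$ and the valley at index 4 lands in $m(f)$; in the full method this is applied to the wavelet subbands $w_j(f)$ of the skeletonized COI.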
We find the exact location of the core point defined by the Henry system and trace the skeletonized ridge curves with 8-adjacency to explore wavelet extrema in 1-pixel increments by starting at 10 pixels apart from two sides. The highest extrema in the ridge curve correspond to core point candidates. We devise two 8-adjacency grids to locate the wavelet extrema (Figure 7a,b). Beginning from two opposite ends and moving toward the center of the subregion, the black-colored pixel of each grid is designated as the central point to trace. Based on this central point, the moving guideline is as follows: if the gray-level of the adjacent pixel is 0, then move toward that pixel, where the number shown in the grid indicates the moving sequence. This method enables one to follow the real track of the ridge curve. Whenever a singularity is detected, its location is noted. Figure 7c shows that it is common to find multiple core point candidates with small vertical displacements, and the area below the lowest ridge curve is circumscribed for locating the core point. In the Henry system, exact core point location can be performed as follows: (a) locate the topmost extrema in the innermost ridge curve if there is no rod; (b) otherwise, locate the top of the rods. The following equation summarizes this process:
$$ s = \begin{cases} \omega_{e,0}, & i = 0 \\ \omega_{e,\, \lfloor i/2 \rfloor + (i \bmod 2)}, & i \ge 1, \end{cases} \tag{19} $$

where $s$ is the determined core point, $i$ is the number of rods below the innermost ridge curve, $\omega_{e,0}$ is the topmost extremum in the innermost ridge curve, and $\omega_{e,\, \lfloor i/2 \rfloor + (i \bmod 2)}$ is the located rod extremum below the innermost ridge curve. Figure 7d presents an example marked with the blue cross.

4. Experimental Results and Discussion

In this section, to illustrate the effectiveness of our proposed method, we present some of the experiments performed using both the FVC2002 DB1 and DB2 fingerprint databases. FVC2002 includes four databases, namely, DB1, DB2, DB3, and DB4, collected using different sensors or technologies that are widely used in practice. Each database is 110 fingers wide (w) and 8 impressions per finger deep (d) (880 fingerprints in all). Fingerprints from 101 to 110 (set B) were made available to the participants to allow for parameter tuning before the submission of the algorithms; the benchmark is then constituted by the fingers numbered from 1 to 100 (set A). Volunteers were randomly partitioned into three groups (30 persons each); each group was associated with a database and therefore with a different fingerprint scanner. Each volunteer was invited to present themselves at the collection place in three distinct sessions, with at least two weeks between each session. The forefinger and middle finger of both hands (four fingers in total) of each volunteer were acquired by interleaving the acquisition of the different fingers to maximize differences in finger placement. No efforts were made to control image quality, and the sensor platens were not systematically cleaned. In each session, four impressions were acquired of each of the four fingers of each volunteer. During the second session, individuals were requested to exaggerate the displacement (impressions 1 and 2) and rotation (impressions 3 and 4) of the finger without exceeding 35°. During the third session, fingers were alternately dried (impressions 1 and 2) and moistened (impressions 3 and 4). The SPs of all fingerprints in the testing database were manually labeled beforehand to obtain the ground truth. For a ground-truth SP $(x_0, y_0)$, a detected SP $(x, y)$ is said to be truly detected if it satisfies $\sqrt{(x - x_0)^2 + (y - y_0)^2} < 10$; otherwise, it is called a miss.
The singular point detection rate (SDR) is defined as the ratio of truly detected SPs to all ground-truth SPs:
$$ \mathrm{SDR} = \frac{\mathrm{Num}(\text{truly detected SPs})}{\mathrm{Num}(\text{ground-truth SPs})} \times 100\%. \tag{20} $$
The singular point miss rate (SMR) is defined as the ratio of the number of missed SPs to the number of all ground-truth SPs. The sum of the detection rate and miss rate is 100%:
$$ \mathrm{SMR} = \frac{\mathrm{Num}(\text{missed SPs})}{\mathrm{Num}(\text{ground-truth SPs})} \times 100\% = 100\% - \mathrm{SDR}. \tag{21} $$
The singular point false alarm rate (SFR) is defined as the ratio of the number of falsely detected SPs to the total number of ground-truth SPs:
$$ \mathrm{SFR} = \frac{\mathrm{Num}(\text{falsely detected SPs})}{\mathrm{Num}(\text{ground-truth SPs})} \times 100\%. \tag{22} $$
The singular point correctly detected rate (SCR) is defined as the ratio of truly detected SPs to all detected SPs over all fingerprint images:

$$ \mathrm{SCR} = \frac{\mathrm{Num}(\text{truly detected SPs})}{\mathrm{Num}(\text{detected SPs})} \times 100\%. \tag{23} $$
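The four rates can be computed with a simple greedy matcher; the matching strategy below (pair each detection with the first unmatched ground-truth point within 10 pixels) is an assumption of this sketch, since the paper does not specify how multiple detections are paired:

```python
import math

def sp_metrics(ground_truth, detected, tol=10.0):
    """Sketch of Equations (20)-(23): SDR, SMR, SFR, and SCR from greedy
    matching of detected SPs to ground-truth SPs within tol pixels."""
    unmatched = list(ground_truth)
    truly = 0
    for point in detected:
        for gt in unmatched:
            if math.dist(point, gt) < tol:   # Euclidean distance criterion
                unmatched.remove(gt)
                truly += 1
                break
    sdr = 100.0 * truly / len(ground_truth)
    smr = 100.0 - sdr
    sfr = 100.0 * (len(detected) - truly) / len(ground_truth)
    scr = 100.0 * truly / len(detected) if detected else 0.0
    return sdr, smr, sfr, scr

gt = [(100, 100), (200, 150)]
det = [(103, 98), (300, 300)]   # one hit, one false alarm
sdr, smr, sfr, scr = sp_metrics(gt, det)
```

With one hit and one false alarm against two ground-truth SPs, all four rates evaluate to 50%.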
First, the compensation weight coefficients are calculated by using Equation (4) and the equalized image, f e q , having the same size as the original fingerprint image can be generated by Equation (5). Figure 8 and Figure 9 show example image results of the proposed method for FVC2002 DB1 and DB2, respectively. As shown in Figure 8c and Figure 9b, the background of the fingerprint image has been removed, thereby providing an image with nearly normal distribution. It also improves the clarity and continuity of ridge structures in the fingerprint image.
Then, we show the effectiveness by comparing the amount of information in our method and in the original fingerprint images by using the entropy of an image. The entropy of information H was introduced by Shannon [34] in 1948, and it can be calculated by the following equation:
$$ H = -\sum_{i=0}^{255} p_i \log_2 p_i, \tag{24} $$
where $p_i$ denotes the probability mass function of gray level $i$, calculated as follows:

$$ p_i = \frac{\text{Number of occurrences of intensity level } i}{\text{Total number of pixels}}. \tag{25} $$
In digital image processing, entropy is a measure of an image’s information content, which is interpreted as the average uncertainty of the information source. The entropy of an image can be used for measuring image visual aspects [35] or for gathering information to be used as parameters in some systems [36]. Entropy is widely used for measuring the amount of information within an image. Higher entropy implies that an image contains more information.
Entropy is measured to quantify the information produced from the enhanced image. For good enhancement, the entropy of the enhanced image should be close to that of the original image. A small difference between the entropies of the original and enhanced images indicates that the image details are preserved; it also shows that the histogram shape is maintained, so saturation is avoided. Table 2 compares the entropy of the equalized images with that of the original images for each image shown in Figure 8 and Figure 9. The results show that the equalized fingerprint images have slightly smaller entropy that remains close to that of the original images, meaning that our method removes noise while retaining the structure of the fingerprint image.
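Equations (24) and (25) can be evaluated directly from an image histogram; a minimal sketch with two synthetic patches:

```python
import numpy as np

def image_entropy(img):
    """Equations (24)-(25): Shannon entropy of an 8-bit grayscale image
    from its normalized 256-bin histogram."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()        # Equation (25): occurrences / total pixels
    p = p[p > 0]                 # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))  # Equation (24)

flat = np.full((32, 32), 7, dtype=np.uint8)          # single gray level: H = 0
two = np.tile([0, 255], (32, 16)).astype(np.uint8)   # two equal levels: H = 1
```

A single-level image has zero entropy and a two-equal-level image has exactly 1 bit, which makes the histogram-based definition easy to sanity-check before applying it to real fingerprints.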
Next, the equalized fingerprint image was used to determine the contour and detected the blur region of the fingerprint, as discussed in Section 2.2 and Section 2.3. Figure 10 shows the binarized image obtained by applying Equation (8) to the equalized image. Based on the binary images, as shown in Figure 10b, we can detect the region of impression (ROI), and the contour of the fingerprint is acquired in a polygon, as shown in Figure 10c. Figure 11b presents the blur detection result obtained by 2D non-separable wavelet entropy filtering for low-quality images, as discussed in Section 2.3. In what follows, an ROI with a 30% blur region is considered to have bad quality, and its SP detection is not good enough.
Our experiments were conducted on the FVC2002 DB1_A and FVC2002 DB2_A databases. We compared the results of our proposed SP detection with those obtained using other methods, including a rule-based algorithm [5], Zhou's algorithm [11], Tico's algorithm [37], Ramo's algorithm [38], and Chikkerur and Ratha's algorithm [39]. In these methods, singular point accuracy was measured by Euclidean distance. Although no standard criterion exists to define a correct detection, we focused in this research on detecting a singular point precisely and followed the convention of adopting a 10-pixel deviation between the expected and the detected singular points to validate the performance of the proposed method. In addition, singular point detection based on the Poincaré index method is sensitive to noise in low-quality fingerprints. In this paper, we show that by combining a novel adaptive image enhancement, compact boundary segmentation, and NSDWT-based localization, the detection of singular points becomes more robust. Moreover, a novel clustering algorithm that integrates wavelet frame entropy with region growing is introduced to evaluate the fingerprint image quality and validate the detected singular points. Table 3 and Table 4 show the correctly detected rate, detection rate, miss rate, and false alarm rate. The results in the tables indicate that our method not only has a higher correctly detected rate than the other methods but also a low false alarm rate. Figure 12 presents truly detected SPs on the FVC2002 database; the detected core and delta points lie close to the ground-truth SPs. Figure 13 presents some comparison results of SP detection on the FVC2002 database using our proposed method and the Poincaré index method. In this figure, blue and green crosses indicate the core and delta points, respectively, detected by our proposed method, and the red cross indicates the core point detected by the Poincaré index method.
The results show that the location of the SPs detected using our method is more accurate than those of the SPs detected using the Poincaré index method.

5. Conclusions

Because the conventional Poincaré index computed along the boundary of a region equals the sum of the Poincaré indices of all singular points inside that region, it conveys no information about where those points lie and cannot characterize a core point completely. To address this problem, we proposed an adaptive method to detect SPs in a fingerprint image. First, a novel fingerprint enhancement algorithm was proposed that largely eliminates the background, thereby improving the clarity and continuity of the ridge structures. Second, we demonstrated that the proposed algorithm can effectively detect low-quality regions with a high correct rate. Third, based on the threshold value, the proposed algorithm inspected each detected SP and made a True/False decision on whether to accept it. Experimental results demonstrate that the proposed algorithm detects SPs effectively and outperforms the rule-based [5], Zhou's [11], Tico's [37], Ramo's [38], and Chikkerur's [39] algorithms.
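For reference, the conventional Poincaré index discussed above can be computed from a fingerprint orientation field (angles modulo π) as in this minimal sketch; the 8-neighbour closed path and the (−π/2, π/2] wrapping convention are standard choices, not the authors' code. A core gives an index near +0.5, a delta near −0.5, and an ordinary point near 0.

```python
import numpy as np

def poincare_index(orient, i, j):
    """Poincaré index at pixel (i, j) of an orientation field `orient`
    (ridge angles mod pi), summed over the 8-neighbour closed path."""
    path = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [orient[i + di, j + dj] for di, dj in path]
    total = 0.0
    for k in range(len(angles)):
        d = angles[(k + 1) % len(angles)] - angles[k]
        # wrap the orientation difference into (-pi/2, pi/2]
        if d > np.pi / 2:
            d -= np.pi
        elif d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)
```

As the conclusion notes, summing this quantity around a large boundary only counts the singularities enclosed, which is why the proposed method localizes SPs by other means.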

Author Contributions

N.T.L. developed the fingerprint hardware and software, and wrote the original draft. J.-W.W. guided the research direction and edited the paper. D.H.L. designed the experiments. C.-C.W. contributed to editing the paper. All authors discussed the results and contributed to the final manuscript.

Funding

This research was funded in part by MOST 107-2218-E-992-310 and 108-2221-E-992-076 from the Ministry of Science and Technology, Taiwan.

Acknowledgments

The authors appreciate the support from National Kaohsiung University of Science and Technology in Taiwan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Henry, E.R. Classification and Uses of Finger Prints; George Rutledge & Sons: London, UK, 1900. [Google Scholar]
  2. Jain, A.; Hong, L.; Bolle, R. On-line fingerprint verification. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 302–314. [Google Scholar] [CrossRef]
  3. Ratha, N.; Bolle, R. Automatic Fingerprint Recognition Systems; Springer: New York, NY, USA, 2004. [Google Scholar]
  4. Wang, C.-N.; Wang, J.-W.; Lin, M.-H.; Chang, Y.-L.; Kuo, C.-M. Optical methods in fingerprint imaging for medical and personality applications. Sensors 2017, 17, 2418. [Google Scholar] [CrossRef] [PubMed]
  5. Maltoni, D.; Maio, D.; Jain, A.K.; Prabhakar, S. Handbook of Fingerprint Recognition; Springer Science & Business Media: New York, NY, USA, 2009. [Google Scholar]
  6. Pankanti, S.; Prabhakar, S.; Jain, A.K. On the individuality of fingerprints. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1010–1025. [Google Scholar] [CrossRef]
  7. Kawagoe, M.; Tojo, A. Fingerprint pattern classification. Pattern Recognit. 1984, 17, 295–303. [Google Scholar] [CrossRef]
  8. Wang, Y.; Hu, J.; Phillips, D. A Fingerprint Orientation Model Based on 2D Fourier Expansion (FOMFE) and Its Application to Singular-Point Detection and Fingerprint Indexing. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 573–585. [Google Scholar] [CrossRef] [PubMed]
  9. Nilsson, K.; Bigun, J. Localization of corresponding points in fingerprints by complex filtering. Pattern Recognit. Lett. 2003, 24, 2135–2144. [Google Scholar] [CrossRef]
  10. Liu, M. Fingerprint classification based on Adaboost learning from singularity features. Pattern Recognit. 2010, 43, 1062–1070. [Google Scholar] [CrossRef]
  11. Zhou, J.; Chen, F.; Gu, J. A novel algorithm for detecting singular points from fingerprint images. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 1239–1250. [Google Scholar] [CrossRef]
  12. Chen, H.; Liang, J.; Liu, E.; Tian, J. Fingerprint singular point detection based on multiple-scale orientation entropy. IEEE Signal Process. Lett. 2011, 18, 679–682. [Google Scholar] [CrossRef]
  13. Hung, D.D. Enhancement and feature purification of fingerprint images. Pattern Recognit. 1993, 26, 1661–1671. [Google Scholar] [CrossRef]
  14. He, Y.; Tian, J.; Luo, X.; Zhang, T. Image enhancement and minutiae matching in fingerprint verification. Pattern Recognit. Lett. 2003, 24, 1349–1360. [Google Scholar] [CrossRef]
  15. Jirachaweng, S.; Hou, Z.; Yau, W.Y.; Areekul, V. Residual orientation modeling for fingerprint enhancement and singular point detection. Pattern Recognit. 2011, 44, 431–442. [Google Scholar] [CrossRef]
  16. Wang, W.; Li, J.; Huang, F.; Feng, H. Design and implementation of Log-Gabor filter in fingerprint image enhancement. Pattern Recognit. Lett. 2008, 29, 301–308. [Google Scholar] [CrossRef]
  17. Gottschlich, C. Curved-region-based ridge frequency estimation and curved Gabor filters for fingerprint image enhancement. IEEE Trans. Image Process. 2011, 21, 2220–2228. [Google Scholar] [CrossRef]
  18. Wang, S.; Wang, Y. Fingerprint enhancement in the singular point area. IEEE Signal Process. Lett. 2004, 11, 16–19. [Google Scholar] [CrossRef]
  19. Yang, J.; Xiong, N.; Vasilakos, A.V. Two-stage enhancement scheme for low-quality fingerprint images by learning from the images. IEEE Trans. Human-Mach. Syst. 2013, 43, 235–259. [Google Scholar] [CrossRef]
  20. Yun, E.K.; Cho, S.B. Adaptive fingerprint image enhancement with fingerprint image quality analysis. Image Vis. Comput. 2006, 24, 101–110. [Google Scholar] [CrossRef]
  21. Fronthaler, H.; Kollreider, K.; Bigun, J. Local features for enhancement and minutiae extraction in fingerprint. IEEE Trans. Image Process. 2008, 17, 354–363. [Google Scholar] [CrossRef]
  22. Bennet, D.; Perumal, D.S.A. Fingerprint: DWT, SVD based enhancement and significant contrast for ridges and valleys using fuzzy measures. J. Comput. Sci. Eng. 2011, 6, 28–32. [Google Scholar]
  23. Wang, J.; Le, N.T.; Wang, C.C.; Lee, J.S. Enhanced ridge structure for improving fingerprint image quality based on a wavelet domain. IEEE Signal Process. Lett. 2015, 22, 390–395. [Google Scholar] [CrossRef]
  24. Mehtre, B.M.; Murthy, N.N.; Kapoor, S.; Chatterjee, B. Segmentation of fingerprint images using the directional image. Pattern Recognit. 1987, 20, 429–435. [Google Scholar] [CrossRef]
  25. Mehtre, B.M.; Chatterjee, B. Segmentation of fingerprint images—A composite method. Pattern Recognit. 1989, 22, 381–385. [Google Scholar] [CrossRef]
  26. Zhang, J.; Lai, R.; Kuo, C.-C.J. Adaptive directional total-variation model for latent fingerprint segmentation. IEEE Trans. Inf. Forensics Secur. 2013, 8, 1261–1273. [Google Scholar] [CrossRef]
  27. Cao, K.; Liu, E.; Jain, A.K. Segmentation and enhancement of latent fingerprint: A coarse to fine ridge structure dictionary. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1847–1859. [Google Scholar] [CrossRef]
  28. Maio, D.; Maltoni, D.; Cappelli, R.; Wayman, J.L.; Jain, A.K. FVC2002: Second fingerprint verification competition. In Proceedings of the 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; pp. 811–814. [Google Scholar]
  29. Andrews, H.; Patterson, C. Singular value decompositions and digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1976, 24, 26–53. [Google Scholar] [CrossRef]
  30. Wang, J.W.; Le, N.T.; Wang, C.C. Color face image enhancement using adaptive singular value decomposition in Fourier domain for face recognition. Pattern Recognit. 2016, 57, 31–49. [Google Scholar] [CrossRef]
  31. Wang, J.W.; Le, N.T.; Lee, J.S.; Wang, C.C. Illumination compensation for face recognition using adaptive singular value decomposition in the wavelet domain. Inf. Sci. 2018, 435, 69–93. [Google Scholar] [CrossRef]
  32. Unser, M. Texture classification and segmentation using wavelet frames. IEEE Trans. Image Process. 1995, 4, 1546–1560. [Google Scholar] [CrossRef]
  33. Mallat, S. A Wavelet Tour of Signal Processing: The Sparse Way, 3rd ed.; Academic Press: Cambridge, MA, USA, 2008. [Google Scholar]
  34. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423/623–656. [Google Scholar] [CrossRef]
  35. Tsai, D.Y.; Lee, Y.; Matsuyama, E. Information entropy measure for evaluation of image quality. J. Digital Imaging 2008, 21, 338–347. [Google Scholar] [CrossRef]
  36. Min, B.S.; Lim, D.K.; Kim, S.J.; Lee, J.H. A novel method of determining parameters of CLAHE based on image entropy. Int. J. Softw. Eng. Its Appl. 2013, 7, 113–120. [Google Scholar] [CrossRef]
  37. Tico, M.; Kuosmanen, P. A multiresolution method for singular points detection in fingerprint images. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Orlando, FL, USA, 1999; Volume 4, pp. 183–186. [Google Scholar]
  38. Ramo, P.; Tico, M.; Onnia, V.; Saarinen, J. Optimized singular point detection algorithm for fingerprint images. In Proceedings of the IEEE International Conference on Image Processing, Thessaloniki, Greece, 2001; Volume 3, pp. 242–245. [Google Scholar]
  39. Chikkerur, S.; Ratha, N.K. Impact of singular point detection on fingerprint matching performance. In Proceedings of the Fourth IEEE Workshop on Automatic Identification Advanced Technologies, Buffalo, NY, USA, 17–18 October 2005; pp. 207–212. [Google Scholar]
Figure 1. The global and local features in the fingerprint. (a) Singular points (SPs) (square: core; triangle: delta) and (b) minutiae (red circle: ridge endings; blue circle: bifurcation).
Figure 2. Effects of singular values on a fingerprint image. (a) Fingerprint image in FVC 2002 DB2 database; (b) reconstructed fingerprint image when all singular values of Figure 2a are set to 1; (c) reconstructed fingerprint image when all singular values of Figure 2a are multiplied by 2; (d) equalized fingerprint images of Figure 2a.
Figure 3. (a) Original fingerprint image in FVC 2002 DB2 database; (b) binary image by using energy transformation and blur detection obtained with 2D non-separable wavelet entropy filtering for Figure 3a; (c) segmented image of Figure 3a.
Figure 4. Filter bank implementation of 2D non-separable discrete wavelet transform (NSDWT), j: level.
Figure 5. Fingerprint alignment: (a) number of cores = 0; (b) number of cores = 1; (c,d) number of cores = 2.
Figure 6. (a) COI subregion; (b) skeletonized ridges; (c) 2D wavelet extrema.
Figure 7. Core point detection based on wavelet extrema and Henry system. (a) Two 8-adjacency grids moving toward each other along the ridge curve indicated in yellow; (b) traced path of the ridge curve (green line: from left to right); (c) SP located at the lowest ridge curve (red square) and the area beneath (blue line: searching extrema from right to left); (d) SP detection in accordance with the Henry system (blue cross).
Figure 8. Results of our proposed method for the FVC2002 DB1 database. (a) Original fingerprint images; (b) histogram of Figure 8a; (c) equalized fingerprint images of Figure 8a; (d) histogram of Figure 8c.
Figure 9. Results of our proposed method for the FVC2002 DB2 database. (a) Original fingerprint images and (b) equalized fingerprint images of Figure 9a.
Figure 10. Binary images by using energy transformation for the FVC 2002 DB1 and DB2 databases. (a) Equalized images of five fingerprint images in the FVC 2002 database; (b) binary images of Figure 10a; (c) segmented images of Figure 10a.
Figure 11. Blur detection result obtained by 2D non-separable wavelet entropy filtering for low-quality images: (a) original images and (b) blur detection results.
Figure 12. Truly detected SPs for the FVC2002 database (blue: core point; green: delta point) by our proposed method: (a) FVC2002 DB1 and (b) FVC2002 DB2 databases.
Figure 13. Some comparison results of SP detection for the FVC2002 database. The blue and green crosses indicate the core and delta points, respectively, detected by our proposed method, and the red cross indicates the core point detected by the Poincaré index method.
Table 1. Mean and standard deviation of Gaussian distribution function in each database.
Database        Mean    Standard Deviation
FVC2002 DB1     0.84    0.24
FVC2002 DB2     0.50    0.18
Table 2. Entropy of equalized images compared with original images for each database.
Entropy of the image

                   FVC2002 DB1                           FVC2002 DB2
Original image     5.1222   5.5262   5.4171   5.3983     7.2446   7.1543   7.7012   7.5129
Equalized image    4.8028   5.3939   5.0496   4.9844     6.8401   7.0603   6.3322   6.2088
Table 3. Comparison results of various detection algorithms for the FVC2002 DB1-A fingerprint database.
Algorithm           SCR      SDR (Core)   SDR (Delta)   SMR (Core)   SMR (Delta)   SFR (Core)   SFR (Delta)
Tico's [37]         58.50    90.27        55.49          9.83        44.51         10.78        80.20
Ramo's [38]         53.54    92.19        68.42          7.81        31.58          8.47        46.15
Zhou's [11]         88.88    95.78        96.98          4.22         3.02          2.27         9.97
Chikkerur's [39]    85.13    95.89        92.75          4.11         7.25          6.93         8.16
Rule-based [5]      50.00    86.26        55.24         13.74        44.76         15.92        81.04
Proposed            90.72    92.43        97.25          7.57         2.75          1.41         3.07
Table 4. Comparison results of different detection algorithms for the FVC2002 DB2-A fingerprint database.
Algorithm           SCR      SDR (Core)   SDR (Delta)   SMR (Core)   SMR (Delta)   SFR (Core)   SFR (Delta)
Tico's [37]         32.32    65.38        34.75         34.62        65.25         52.94        187.80
Ramo's [38]         49.49    80.72        37.50         19.28        62.50         23.88        166.67
Zhou's [11]         81.25    95.95        90.88          4.05         9.12          8.45         12.54
Chikkerur's [39]    73.25    93.23        94.20          6.77         5.80         13.87         28.62
Rule-based [5]      56.57    73.86        37.61         26.14        62.39         35.40        165.85
Proposed            89.92    95.54        95.21          4.46         4.79          1.51          2.76

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).