Sensors
  • Article
  • Open Access

20 January 2024

Fingerprint Recognition in Forensic Scenarios

1 Portuguese Military Academy, 1169-203 Lisbon, Portugal
2 Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisbon, Portugal
3 Military Academy Research Center (CINAMIL), 1169-203 Lisbon, Portugal
4 Laboratory for Instrumentation, Biomedical Engineering and Radiation Physics, Universidade de Coimbra (LIBPhys-UC), 3000-370 Coimbra, Portugal
This article belongs to the Section Biosensors

Abstract

Fingerprints are unique patterns used as biometric keys because they allow an individual to be unambiguously identified, making their application in the forensic field a common practice. The design of a system that can match the details of different images is still an open problem, especially when applied to large databases or to real-time applications in forensic scenarios using mobile devices. Fingerprints collected at a crime scene are often manually processed to find those that are relevant to solving the crime. This work proposes an efficient methodology that can be applied in real time to reduce the manual work in crime scene investigations that consumes time and human resources. The proposed methodology includes four steps: (i) image pre-processing using oriented Gabor filters; (ii) the extraction of minutiae using a variant of the Crossing Numbers method, which includes a novel ROI definition through convex hull and erosion, followed by the replacement of two or more very close minutiae with an averaged minutia; (iii) the creation of a model that represents each minutia through the characteristics of a set of polygons formed by neighboring minutiae; (iv) the individual search for a match for each minutia in different images using metrics on the absolute and relative errors. While most methodologies in the literature validate the entire fingerprint model, connecting the minutiae or using minutiae triplets, we validate each minutia individually using n-vertex polygons whose vertices are neighboring minutiae that surround the reference. Our method is also robust against false minutiae: since several polygons are used to represent the same minutia, even if false minutiae are present, the true polygon may still be found and identified; in addition, our method is immune to rotations and translations.
The results show that the proposed methodology can be applied in real time in standard hardware implementation, with images of arbitrary orientations.

1. Introduction

Fingerprints accompany all human beings from birth. They are a biometric key composed of unique patterns found on the distal phalanges of the fingers, distinct for each individual, and can be used for various purposes. Due to their unequivocal and invariant properties, fingerprints have gained importance in the field of forensic analysis, becoming a reliable alternative to other traditional authentication methods [,,]. Currently, there is a growing number of applications using fingerprint recognition systems, such as accessing mobile phones, monitoring employee attendance in a company and, in forensic investigations, achieving the unequivocal identification of an individual. Technological advances in fingerprint processing make capture, storage, and comparison methods more financially accessible today, allowing a significant portion of the population to use these technologies [,,].
Fingerprints can be used to determine whether two images are of the same finger, thereby identifying the individual to whom it belongs. Identification of fingerprints collected at a crime scene is typically done manually, consuming a great deal of time that may be critical to solving the crime. Considering that many fingerprints are collected, much time is spent on comparisons that a computer system can perform in seconds, and it must also be considered that the images collected in the field are sometimes of poor quality [].
The main problem for which a solution is sought corresponds to the identification of a fingerprint based on comparison with others, with the aim of identifying those that correspond and belong to the same individual. It is essential to find a method that can make reliable comparisons, generate precise outcomes, and require minimal computing power. This way, fingerprint comparison can be performed in the field in real time, with the help of a mobile computing device.
In this paper, the main contributions are:
  • A methodology is proposed that can accurately and efficiently compare two fingerprints and classify them as belonging to the same person or to different individuals. The proposed method can be used on portable devices during field work, providing real-time screening of collected fingerprints. The proposed methodology also validates each minutia individually, whereas most methodologies in the literature validate the entire fingerprint model.
  • A new process to validate extracted minutiae is proposed, using the convex hull of the set of minutiae to create a region of valid minutiae, and individual minutia validation is performed using n-sided polygons instead of triangulations, which are the most common approach in the literature.
This paper is organized into five sections. The first provides a brief introduction to the work, describing the problem and motivation for carrying out the work. Section 2 details a comparison of the databases used in the reviewed articles and a study of the state-of-the-art on fingerprint comparison methods which encompasses the gaps found. Section 3 defines the proposed methodology to achieve the objectives of the work. Section 4 contains the results of applying the proposed methodology in different databases, initially tuning the algorithm in one database, and then validating it in different ones. Section 5 is where the conclusions of the work are drawn, namely about the methodology applied and the produced results, the achieved objectives, and the proposals for future work.

2. Background

This section covers concepts relevant to the work to be developed, namely about fingerprint features and minutiae extraction.

2.1. Fingerprint Features

The features extracted from the fingerprints are organized hierarchically into global and local []. Global features are singular points in the fingerprint, namely the core and deltas. The core represents the point of convergence of the pattern. Deltas are points where the ridges diverge, and a point is formed that resembles the delta symbol (Figure 1).
Figure 1. Global features: core and delta.
The local features of a fingerprint (see Figure 2) are minutiae and refer to the points at which the ridges join or end (bifurcations and terminations, respectively); they are of high relevance, as they are used by most fingerprint matching algorithms (sometimes combined with global features). In a fingerprint image, depending on the quality and size, typically 10 to 200 minutiae can be found, and a good quality image should allow the identification of at least 50 to 100 minutiae. Each minutia is associated with a position and an orientation, and the distribution of minutiae over the fingerprint is not uniform.
Figure 2. Local features: termination and bifurcation.

2.2. Crossing Numbers Method

The Crossing Numbers method is widely used to extract minutiae from a fingerprint image []. Terminations and bifurcations are extracted by analyzing the neighborhood of each pixel in the skeletonized image of the fingerprint, using a 3 × 3 window centered on the reference pixel p:
  p1  p8  p7
  p2  p   p6
  p3  p4  p5
The crossing number for the pixel p is computed through the difference between adjacent pixel values:
CN = (1/2) Σ_{i=1}^{8} |P_i − P_{i+1}|,  with P_9 = P_1
Each pixel is then labeled accordingly with its CN value following Table 1, and a first set of local feature points is found.
Table 1. Crossing number and type of minutia.
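The crossing number computation above can be sketched directly on a binary skeleton. The function name and the 0/1 toy patches below are illustrative, not data or code from the paper; CN = 1 marks a termination and CN = 3 a bifurcation, as in Table 1:

```python
# Crossing Number for one pixel of a binary (0/1) skeleton image.
# `img` is a 2D list with ridge pixels = 1; neighbors are visited in
# a circular order around the reference pixel p, matching the window
# layout shown above (p1 top-left, going down the left side first).

def crossing_number(img, r, c):
    """Return CN at (r, c): 1 = termination, 3 = bifurcation."""
    # the 8 neighbors of p, in circular order p1..p8
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    p = [img[r + dr][c + dc] for dr, dc in offsets]
    p.append(p[0])  # P9 = P1 closes the cycle
    return sum(abs(p[i] - p[i + 1]) for i in range(8)) // 2

# Ridge ending: a single ridge arm leaves the center pixel -> CN = 1
ending = [[0, 0, 0],
          [0, 1, 1],
          [0, 0, 0]]
print(crossing_number(ending, 1, 1))  # 1

# Bifurcation: three ridge arms leave the center pixel -> CN = 3
bifurc = [[0, 1, 0],
          [0, 1, 1],
          [1, 0, 0]]
print(crossing_number(bifurc, 1, 1))  # 3
```

Isolated points (CN = 0) and ordinary ridge pixels (CN = 2) are simply not labeled as minutiae.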

4. Proposed Methodology

This section describes the methodology implemented to process and search for matches between fingerprint images. At a high level, the methodology is divided into four blocks (see Figure 7) that follow the common fingerprint-recognition pipeline, into which the newly proposed methods are inserted.
Figure 7. Proposed methodology.
The first block concerns image pre-processing in which the orientation and frequency of the ridges are estimated to apply oriented Gabor filters. Then, minutiae are extracted using the Crossing Numbers method and validated by defining a region of valid minutiae, removing nearby minutiae by type, and removing minutiae clouds. The third step concerns the creation of a set of polygons that represent each minutia, and finally, the last step is the minutiae matching and, subsequently, fingerprint match.
The main goals of our study are to propose a methodology that can be used in real time, on a portable device, in forensic settings. It is therefore important to keep the computational effort low, so that common hardware can run the algorithm, and to provide a method that can be validated and understood by an operator. For this reason, we do not use deep learning, since we do not want to train a model for every scenario and only a small number of images is available. Texture-based methods generally improve on the results achieved by other methods but increase the computational effort, so we do not use them here. Each minutiae-based method proposed in the literature uses the set of extracted (and sometimes filtered) minutiae to create a model of the fingerprint, which is then compared to find a match. We instead focus on validating each minutia individually, which mirrors the work carried out manually in real forensic scenarios.

4.1. Pre-Processing

Image pre-processing was implemented through the algorithm proposed by Raymond Thai [], which includes state-of-the-art pre-processing techniques: segmentation, normalization, ridge orientation and frequency estimation, the application of oriented Gabor filters, and skeletonization.
Figure 8 shows the effect of the applied pre-processing techniques in two images of different databases (FVC2000 DB1 and FVC2002 DB1).
Figure 8. Image pre-processing: (a,b) are Original, (c,d) are Pre-processed, (e,f) are Skeletonized.
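The oriented Gabor filtering step can be illustrated with a minimal kernel generator. This is a simplified, isotropic sketch (a single `sigma` for both axes; `theta`, `freq`, `sigma`, and `size` are illustrative parameter names), not Raymond Thai's exact implementation; in practice one kernel is built per local orientation and frequency estimate and convolved with the corresponding block of the normalized image:

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=4.0, size=11):
    """Even-symmetric Gabor kernel tuned to the local ridge orientation
    `theta` (radians) and ridge frequency `freq` (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate the coordinates so the cosine wave runs across the ridges
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

kernel = gabor_kernel(theta=0.0, freq=0.1)
print(kernel.shape)  # (11, 11)
```

Convolving each block with a kernel tuned to its estimated orientation and frequency reinforces the ridge structure while suppressing noise, producing the enhanced images shown in Figure 8c,d.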

4.2. Minutiae Extraction and Validation

Minutiae are extracted using the Crossing Numbers (CN) method on the skeletonized image. Due to noise in the image and damage to the fingerprints (e.g., scars), some spurious minutiae are captured by the CN method.
Three techniques are implemented to create a set of valid minutiae from the extracted ones. The first step is to set a minimum distance δ between minutiae of the same type. Figure 9 shows the effect of the δ parameter on the excluded minutiae, with bifurcations in blue and terminations in red.
Figure 9. Minimum distance for minutiae by type: (a) δ = 5 (b) δ = 10 (c) δ = 15 .
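The minimum-distance step can be sketched as a greedy filter. The function name, the `(x, y, type)` tuple layout, and the sample minutiae below are illustrative assumptions, not the paper's code:

```python
import math

def filter_by_distance(minutiae, delta):
    """Keep minutiae so that two of the same type are never closer
    than `delta` pixels; earlier entries take precedence (greedy)."""
    kept = []
    for m in minutiae:
        x, y, kind = m
        too_close = any(k == kind and math.hypot(x - kx, y - ky) < delta
                        for kx, ky, k in kept)
        if not too_close:
            kept.append(m)
    return kept

mins = [(10, 10, "term"), (12, 11, "term"), (12, 11, "bif"), (40, 40, "term")]
print(filter_by_distance(mins, 5))
# [(10, 10, 'term'), (12, 11, 'bif'), (40, 40, 'term')]
```

Note that the second termination is dropped (within δ of the first), while the bifurcation at the same spot survives because the distance rule applies per type.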
The second step is a novel definition of a region of interest (for valid minutiae) based on the convex hull. The convex hull was implemented using the incremental algorithm, and it defines the smallest convex polygon that contains all minutiae points in the set. Once the polygon is defined, an erosion of γ pixels is applied toward its interior (see Figure 10).
Figure 10. ROI definition: (a) Convex hull, (b) Convex border limits, (c) Erosion of γ = 10 pixels (red represents a termination while blue represents a bifurcation minutia).
This procedure excludes the border minutiae, which are not reliable for matching procedures, as shown in Figure 11.
Figure 11. Set of minutiae: (a) Original (b) After ROI definition (red represents a termination while blue represents a bifurcation minutia).
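For a convex region, eroding the hull by γ pixels is equivalent to keeping the interior points that lie at least γ away from the hull boundary, which allows a compact sketch of the ROI step. Andrew's monotone-chain hull is used here in place of the paper's incremental algorithm, and all names and data are illustrative:

```python
import math

def convex_hull(pts):
    """Andrew's monotone-chain convex hull; vertices returned in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def seg_dist(p, a, b):
    """Distance from point p to segment ab."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / (dx*dx + dy*dy)))
    return math.hypot(p[0] - (a[0] + t*dx), p[1] - (a[1] + t*dy))

def roi_filter(minutiae, gamma):
    """Keep minutiae strictly inside the hull and >= gamma pixels from
    its border -- for a convex hull this equals the gamma-pixel erosion."""
    hull = convex_hull([(x, y) for x, y, _ in minutiae])
    n = len(hull)
    def inside(p):
        # CCW hull: p must lie strictly left of every edge
        return all((hull[(i+1) % n][0]-hull[i][0])*(p[1]-hull[i][1])
                   - (hull[(i+1) % n][1]-hull[i][1])*(p[0]-hull[i][0]) > 0
                   for i in range(n))
    def far_enough(p):
        return all(seg_dist(p, hull[i], hull[(i+1) % n]) >= gamma
                   for i in range(n))
    return [m for m in minutiae
            if inside((m[0], m[1])) and far_enough((m[0], m[1]))]

mins = [(0, 0, "t"), (100, 0, "t"), (100, 100, "b"),
        (0, 100, "b"), (50, 50, "t"), (5, 50, "b")]
print(roi_filter(mins, 10))  # [(50, 50, 't')]
```

The hull vertices themselves (the border minutiae of Figure 11a) are always excluded, since their distance to the boundary is zero.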
The last step in minutiae validation is the removal of minutiae clouds, that is, cases where the algorithm finds a termination close to a bifurcation due to noise although only one of them is a real minutia (Figure 12). Minutiae clouds are replaced by their average point, creating a new averaged minutia.
Figure 12. Minutiae Cloud Removal: (a) Original (b) Replaced (red represents a termination minutia, blue represents a bifurcation minutia, pink represents the new averaged minutia, the arrow and black circle emphasize the zoom).
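The cloud-removal step can be sketched as a greedy pairwise merge; the threshold name `eps`, the tuple layout, and the `"avg"` label are illustrative assumptions, not the paper's code:

```python
import math

def remove_clouds(minutiae, eps):
    """Replace a termination and a bifurcation closer than `eps` pixels
    with a single averaged minutia (greedy, first-pair-wins sketch)."""
    used = set()
    out = []
    for i, (xi, yi, ki) in enumerate(minutiae):
        if i in used:
            continue
        merged = False
        for j in range(i + 1, len(minutiae)):
            xj, yj, kj = minutiae[j]
            # merge only minutiae of DIFFERENT types (a "cloud")
            if j not in used and ki != kj and math.hypot(xi - xj, yi - yj) < eps:
                out.append(((xi + xj) / 2, (yi + yj) / 2, "avg"))
                used.update({i, j})
                merged = True
                break
        if not merged:
            out.append((xi, yi, ki))
    return out

mins = [(10, 10, "term"), (11, 11, "bif"), (40, 40, "term")]
print(remove_clouds(mins, 4))
# [(10.5, 10.5, 'avg'), (40, 40, 'term')]
```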

4.3. Feature Extraction

Each valid minutia is individually represented through a set of polygons that are built using the neighboring minutiae as vertices. Then, each polygon is stored using its features, namely the edge size and the angles formed by adjacent vertices and the reference minutia. This means that each minutia will be associated with a set of fixed-length vectors that describe its associated polygons.
The first step in building the model that represents each minutia is to define n, the desired number of vertices for each polygon. Then, a circumference centered on the reference minutia is considered, and its radius is incremented until the maximum radius is reached or a set of valid neighbor minutiae is found. The goal is to find at least n + 2 minutiae and guarantee that their distribution includes at least one per quadrant. To increase the number of minutiae registered through polygons, sets of n points are also allowed, but only if they are found when the maximum radius of the circumference is reached.
Figure 13 illustrates the search for neighbor minutiae to form the set of polygons that represents a minutia. After these minutiae are found, polygons with the desired n vertices are formed from combinations of these points. A polygon is valid if:
Figure 13. Search for neighbor minutiae: (a) Invalid; (b) Valid (blue represents a detected minutiae, purple circle represents the search radius for neighbor minutiae, red represented the reference minutia).
  • There are no overlapping edges.
  • There are no holes within the boundaries of the polygon.
  • The starting point and the ending point coincide (closed polygon).
  • The polygon contains the reference minutia.
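The validity criteria above can be sketched with standard computational-geometry tests: a proper segment-intersection test for overlapping edges and a ray-casting test for containment of the reference minutia. Representing the polygon as a vertex ring makes it implicitly closed and hole-free, so those two criteria are covered by requiring a simple (non-self-intersecting) ring. Function names are illustrative, not the authors' implementation:

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly cross each other."""
    def d(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    d1, d2 = d(p3, p4, p1), d(p3, p4, p2)
    d3, d4 = d(p1, p2, p3), d(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def contains(poly, p):
    """Ray-casting point-in-polygon test for a closed vertex ring."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > p[1]) != (y2 > p[1]):
            xint = x1 + (p[1] - y1) * (x2 - x1) / (y2 - y1)
            if p[0] < xint:
                inside = not inside
    return inside

def polygon_is_valid(poly, ref):
    """Valid if the ring is simple (no crossing edges) and contains ref."""
    n = len(poly)
    for i in range(n):
        for j in range(i + 1, n):
            # adjacent edges always share a vertex; skip them
            if abs(i - j) in (1, n - 1):
                continue
            if segments_intersect(poly[i], poly[(i + 1) % n],
                                  poly[j], poly[(j + 1) % n]):
                return False
    return contains(poly, ref)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
bowtie = [(0, 0), (10, 10), (10, 0), (0, 10)]  # self-intersecting
print(polygon_is_valid(square, (5, 5)))   # True
print(polygon_is_valid(bowtie, (5, 5)))   # False
```

A ring that leaves the reference minutia outside (e.g., the square above tested against (20, 20)) is also rejected, matching the last criterion.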
Figure 14 shows examples of both invalid and valid polygons that represent a minutia (reference is green and polygon vertices are black).
Figure 14. The green dot is the reference minutia; invalid polygons: (a,b); valid polygons: (c,d). Blue represents termination minutiae, red represents bifurcation minutiae, green dots represent the reference minutia, black represents the polygon vertices, and green lines represent the polygon edges.
A fixed-size feature vector is then built to represent each polygon using the sizes of the edges and the angles formed by the adjacent vertices and the reference minutia (see Figure 15).
Figure 15. Angle used to build the feature vector (blue represents the termination minutia, red represents the bifurcation minutia, green dot represents the reference minutia, black represents the polygon vertices, green lines represents the polygon edges, A and B are names of the vertices).
Figure 16 is an example of all valid polygons that represent a valid reference minutia. Each polygon feature will be further considered for matching.
Figure 16. Set of valid polygons for a reference minutia (blue represents the termination minutia, red represents the bifurcation minutia, green dot represents the reference minutia, black represents the polygon vertices, green lines represents the polygon edges, purple circle represents the search radius).
In this step, each minutia that was previously validated is associated with a certain number of polygons that depend on its neighboring minutiae. Each polygon is represented by a fixed-size vector that is invariant to translations and rotations, since it is always sorted in the same way (the largest edge size is placed as the left-most element while the cyclic order of the remaining elements is preserved, and the angles are sorted accordingly).
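The sorting rule can be sketched as a cyclic rotation of the edge and angle lists; the function name and the sample values are illustrative, not the paper's data:

```python
def canonical_vector(edges, angles):
    """Rotate the (edge, angle) cycle so the largest edge comes first,
    preserving the cyclic order of the remaining elements. The result
    does not depend on which vertex the traversal started at, which is
    what makes the feature vector rotation-invariant."""
    k = edges.index(max(edges))
    return edges[k:] + edges[:k] + angles[k:] + angles[:k]

# Two traversals of the same polygon starting at different vertices
v1 = canonical_vector([3, 9, 5, 7], [40, 100, 80, 140])
v2 = canonical_vector([5, 7, 3, 9], [80, 140, 40, 100])
print(v1 == v2)  # True
```

Since edge lengths and vertex angles are themselves unchanged by rotating or translating the image, the canonical vector is identical for the same polygon seen in two differently aligned impressions.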

4.4. Matching

The matching has two steps: the minutiae matching and the fingerprint matching.
Minutiae matching is done through the polygons. Considering two minutiae, the sets of polygons that represent each one are compared to find the closest match using the feature vectors. The best match is defined as the pair of feature vectors with the smallest Euclidean distance. The pair with the smallest distance is not guaranteed to be a true match (for two completely different minutiae there is still always a closest pair, although its distance is high compared to truly matching minutiae). Thus, the pair of best-matched polygons must be validated. The validation uses both absolute and relative error criteria, applied element-wise to the feature vectors, with different thresholds for the edge sizes and for the angles.
Fingerprint matching is conducted by defining a threshold T M C (Total Minutiae Corresponding), which is the minimum number of matching minutiae required to consider the fingerprints a match.
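The two matching steps can be sketched as follows, assuming each minutia is stored as a list of fixed-size feature vectors (edge sizes first, then angles). The threshold names mirror the text (relative error in percent), but the code is an illustrative sketch, not the authors' implementation:

```python
import math

def polygons_match(v1, v2, n_edges, th_l, th_a, th_rel):
    """Element-wise validation of a best-matched polygon pair: every
    element must satisfy either the absolute threshold (th_l for edges,
    th_a for angles) or the relative threshold th_rel (in percent)."""
    for i, (a, b) in enumerate(zip(v1, v2)):
        abs_th = th_l if i < n_edges else th_a
        abs_err = abs(a - b)
        rel_err = 100 * abs_err / max(abs(a), abs(b), 1e-9)
        if abs_err > abs_th and rel_err > th_rel:
            return False
    return True

def fingerprints_match(minutiae_a, minutiae_b, tmc, **thr):
    """Count minutiae of A whose best polygon pair validates against some
    minutia of B; declare a fingerprint match when the count reaches TMC."""
    count = 0
    for polys_a in minutiae_a:          # each item: list of feature vectors
        for polys_b in minutiae_b:
            # best pair = smallest Euclidean distance between vectors
            best = min(((va, vb) for va in polys_a for vb in polys_b),
                       key=lambda p: math.dist(p[0], p[1]))
            if polygons_match(best[0], best[1], **thr):
                count += 1
                break                   # this minutia of A is validated
    return count >= tmc

# Toy example: two minutiae, one polygon each (2 edges + 2 angles)
a = [[[10, 10, 90, 90]], [[20, 20, 45, 45]]]
print(fingerprints_match(a, a, tmc=2, n_edges=2,
                         th_l=5, th_a=10, th_rel=11))  # True
```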

5. Results and Discussion

A random set of 125 images from the FVC2000 DB1 database [] was chosen, resulting in 125 genuine comparisons and 7154 impostor comparisons. An average of 23 minutiae (represented by polygons) were registered.
The first experiments were performed to understand the importance of the values chosen for the relative and absolute error criteria that are used to validate the best match polygons. To start with some configuration, the number of vertices was set to n = 5 , the absolute error threshold for the edges size was set to T H L = 5 , and the absolute error threshold for the angles was set to T H A = 10 . The threshold for fingerprint match was set as T M C = 12 and the relative error threshold T H R E L was varied. The results achieved are in Table 6.
Table 6. Results as a function of the relative error criteria.
The smaller the relative error threshold, the more polygons are validated using the absolute error threshold. This results in a more conservative system, which produces a higher number of false negatives, thus increasing the FNMR.
Then, the relative error threshold was fixed to T H R E L = 11 , and the other parameters were kept while varying T H L . The following results were achieved (Table 7):
Table 7. Results as a function of the absolute error criteria for the edge sizes.
A smaller value of T H L means that it is harder for two polygons to match, and the system is conservative. The same absolute distance between two edge sizes produces a higher relative error if the edges are smaller. Thus, the absolute error criteria for edge sizes will have more importance to filter small edges that fail the relative error criteria.
Setting T H L = 5 and varying the absolute error threshold for the angles, the following results were achieved.
The results shown in Table 8 are important to understand that most of the angle verifications are done using the relative error criteria.
Table 8. Results as a function of the absolute error criteria for the angles.
Using the best configuration found ( T H R E L = 11 , T H L = 5 and T H A = 10 ) the number of polygon vertices was changed.
Analyzing Table 9, we can state that, on the one hand, increasing the number of polygon vertices guarantees polygons with more distinct geometries; as such, when a minutia is validated, the degree of confidence is greater, resulting in a decrease in the FMR since minutiae that do not truly correspond are not matched. On the other hand, increasing the number of polygon vertices makes the algorithm more dependent on the number of minutiae extracted, and thus more exposed to the quality of the database.
Table 9. Results as a function of n.
Varying the threshold TMC allows us to understand how well it separates the numbers of correctly and wrongly matched minutiae, and to compute the EER.
An EER of 0.03% was achieved at T M C = 12 . In general, the system behaves as expected for a fingerprint matching system, that is, a smaller threshold TMC allows more false images to match while a higher TMC reduces the false matches (Figure 17).
Figure 17. FMR and FNMR accordingly with TMC (FVC2000 DB1).
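The TMC sweep behind Figures 17–19 can be sketched as follows; the counts of matching minutiae per comparison are toy values chosen for illustration, not the paper's data:

```python
def fmr_fnmr(genuine, impostor, tmc):
    """FNMR: fraction of genuine comparisons with fewer than `tmc`
    matching minutiae; FMR: fraction of impostor comparisons with at
    least `tmc`. Inputs are lists of matching-minutiae counts."""
    fnmr = sum(g < tmc for g in genuine) / len(genuine)
    fmr = sum(i >= tmc for i in impostor) / len(impostor)
    return fmr, fnmr

def eer_sweep(genuine, impostor, tmcs):
    """Sweep TMC and return (tmc, fmr, fnmr) where the two error rates
    are closest -- the EER operating point of the curves."""
    return min(((t,) + fmr_fnmr(genuine, impostor, t) for t in tmcs),
               key=lambda r: abs(r[1] - r[2]))

# toy matching-minutiae counts for genuine and impostor comparisons
genuine = [15, 18, 13, 20, 11]
impostor = [3, 5, 2, 8, 4, 6]
print(eer_sweep(genuine, impostor, range(1, 25)))
```

As in the text, a small TMC lets more impostor pairs through (high FMR), while a large TMC rejects genuine pairs (high FNMR); the EER is read at the crossing of the two curves.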
Keeping the algorithm parameters, more experiments were carried out using a random sample of 125 images from the FVC2002 DB1 database, which led to an average of 35 registered minutiae (an increase of 52%). Initially, an FNMR of 0% and an FMR of 17.6% were achieved. These results suggest that all matching fingerprints have more than 12 matching minutiae, but 17.6% of the non-matching fingerprints also reach the TMC limit, which is expected since the large increase in the number of registered minutiae leads to hundreds of additional possible polygons. Therefore, the error criteria were tightened to T H R E L = 10 , T H L = 4 and T H A = 10 . With the new configuration, an FNMR of 21.65% and an FMR of 3.36% were achieved, suggesting that the TMC threshold was not adequate for these images; therefore, a sweep of the TMC values was performed (see Figure 18).
Figure 18. FMR and FNMR accordingly with TMC (FVC2002 DB1).
An EER of 7.8% was achieved for a threshold T M C = 9.5 , but since the number of corresponding minutiae must be an integer, TMC = 10 was used, yielding an FNMR of 9.10% and an FMR of 6.74%.
A last set of experiments was carried out to challenge the algorithm and understand whether it can deal with transformations applied to the images (such as translation and rotation), since the goal was to propose an algorithm that can be used in real time without depending on image alignment. From 15 random images chosen from the FVC2000 DB1 database, a rotation of 45°, a rotation of 90° and a diagonal translation of 10% were applied, creating a new database composed of 60 images.
With the previous configuration n = 5 ,   T H R E L = 11 , T H L = 5 ,   T H A = 10 , and T M C = 12 , FNMR of 46.7% and FMR of 0.1% (Figure 19) were achieved meaning that false matchings are being well rejected but the number of matching minutiae is smaller even in matching images. Thus, those results suggest that the threshold TMC is too high.
Figure 19. FMR and FNMR accordingly with TMC (transformed database).
Through the adjustment of TMC to 6.4, an EER of 3.6% was achieved, leading to an FNMR of 4.4% and an FMR of 2.6% using the integer threshold of 7, showing that the algorithm can identify matching minutiae without prior image alignment. The results show that the algorithm depends on the extracted and validated minutiae both in quality (to have the same geometrical relationship between minutiae in different matching images) and in quantity (to ensure a higher number of registered minutiae), leading to the need to dynamically adjust its parameters to achieve good results.
Among traditional state-of-the-art methods, we can compare with the work of Mohamed-Abdul-Cader et al. [], since they also chose a sample of around 150 images from FVC2002 DB1 and obtained an EER of 6.68%. We achieved 7.8% on a sample of the same database, but we did not optimize the algorithm's parameters exhaustively: we overfitted the parameters on FVC2000 DB1, where we achieved a 0.06% error, and then performed only one iteration of parameter tuning before attempting FVC2002 DB1. This means that our results still have room for substantial improvement. Our results are thus in line with the state of the art, but our technique is novel, since we propose a method that uses neighbor minutiae to validate the match of each individual minutia, in contrast to other authors, who use the set of minutiae as a whole.
Several authors have applied methods based on deep learning, such as He et al. [], who achieved EER values between 3% and 6% (FVC2006 dataset) and between 7% and 9% on the AES3400 dataset. With the proposed method we achieved 7.8%, slightly higher than the values presented by He [], which is justified by the fact that our initial objective was the development of a methodology capable of running on an electronic device with low computational performance (for example, a cheap tablet) and of analyzing fingerprints in crime scenarios without internet access (that is, without access to the cloud).

6. Conclusions and Future Work

The design of a fingerprint matching system capable of acting on a large database is still an open problem, and due to the unique characteristics of this biometric key, research continues to propose new methods capable of handling fingerprints. This work proposes a method that can act on a local database of fingerprints collected at a crime scene, screening them in real time to find those relevant to solving the crime, without requiring processing power beyond common machines. It follows the work that forensic teams perform manually (that is, validating each minutia individually), so that the algorithm can be understood and validated by an operator.
In the state of the art, methods were divided into three categories: methods based on deep learning, on image texture, and on minutiae. The first require high computational effort and the training of models, which demands a high number of images and is not practical in real time. The second are based on image processing techniques that abstract away from the image content and perform mathematical operations on its pixel values; they are very sensitive to noise and serve more as a validation complement to other methods. Finally, minutiae-based methods have the main disadvantage of depending on the reliability of minutiae extraction, but they are similar to the work carried out manually, allowing, for example, an operator to visualize and understand the algorithm's decisions, and they can be applied in real time without requiring high computational effort (they may be executed on portable devices).
The proposed methodology depends on the minutiae extracted from the image and aims to mirror human reasoning, that is, to validate each minutia that is present in both images and check whether the number of validated minutiae reaches a level that allows us to guarantee that the fingerprints match. The algorithm is divided into four blocks: pre-processing, minutiae extraction and validation, feature extraction, and fingerprint matching. The first block was implemented with the most cited state-of-the-art methods, resulting in images with substantial noise removal and a clear distinction between ridges and valleys. In the second block, minutiae extraction was implemented using the Crossing Numbers method, which has proven results over the years, while minutiae validation was innovative, using the convex hull of the set of minutiae to define a region of valid minutiae. Feature extraction consists of the formation of polygons, built from neighboring minutiae, that represent each minutia. In the literature, most authors use minutiae triangulations to create models that represent the fingerprint; with the aim of validating each minutia individually, the implemented method instead associates each minutia with a set of unique polygons of different shapes, depending on the spatial arrangement of neighboring minutiae. Finally, fingerprint matching is intuitive: once a certain number of minutiae is matched between the two images, we consider the fingerprints to match.
The achieved results show that the implemented method is very dependent on the minutiae extracted, with different results and the need for adaptations when changing databases. Image quality is a relevant factor, as it determines the details that are extracted. The concept of associating a set of polygons with a chosen number of vertices to each minutia proves to be challenging: increasing the number of polygon vertices increases the certainty in the individual validation of each minutia, but more minutiae are required in the image, and in the same relative positions, to ensure an adequate number of represented minutiae. Using dynamic parameters in the algorithm, namely the number of minutiae considered as the decision threshold to declare a fingerprint match and the polygon validation criteria (maximum relative and absolute errors between the lengths of the edges and the angles formed between adjacent vertices and the reference minutia), an FNMR of 0% and an FMR of 0.06% were achieved on a random sample of 125 images from the FVC2000 DB1 database, and an FNMR of 9.1% and an FMR of 6.7% on a random sample of 125 images from FVC2002 DB1, results that are in line with the state of the art and suggest the possibility of transposing the method to real minutiae-matching applications. The application of the algorithm to images that underwent translation and rotation achieved an FNMR of 4.4% and an FMR of 2.6%, showing that the algorithm is capable of matching minutiae without requiring prior alignment of the images.
In future work, dynamic adjustment of the polygon matching validation criteria would allow the algorithm to adapt automatically to different databases. We also propose incorporating minutiae that are valid but, because they lie at the extremes of the valid region, cannot be represented by polygons that surround them. Finally, simple features of each polygon were chosen to achieve the outlined objectives without increasing the computational effort, but there is the possibility of exploring other features.

Author Contributions

J.S.S. and A.B. proposed the idea and concept; N.M. developed the software under the supervision of J.S.S. and A.B. All authors revised and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Military Academy Research Center (CINAMIL), and by UIDP/FIS/04559/2020 and UIDB/FIS/04559/2020 (https://doi.org/10.54499/UIDB/04559/2020 accessed on 18 January 2024), funded by national funds through FCT/MCTES and co-financed by the European Regional Development Fund (ERDF) through the Portuguese Operational Program for Competitiveness and Internationalization, COMPETE 2020.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data were obtained from FVC2000 database [] and are publicly available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Y.; Zhou, B.; Han, C.; Guo, T.; Qin, J. A novel method based on deep learning for aligned fingerprints matching. Appl. Intell. 2020, 50, 397–416. [Google Scholar] [CrossRef]
  2. Bae, J.; Choi, H.-S.; Kim, S.; Kang, M. Fingerprint image denoising and inpainting using convolutional neural network. J. Korean Soc. Ind. Appl. Math. 2020, 24, 363–374. [Google Scholar]
  3. Gorgel, P.; Eksi, A. Minutiae-Based Fingerprint Identification Using Gabor Wavelets and CNN Architecture. Electrica 2021, 21, 480. [Google Scholar] [CrossRef]
  4. Zhou, B.; Han, C.; Liu, Y.; Guo, T.; Qin, J. Fast minutiae extractor using neural network. Pattern Recognit. 2020, 103, 107273. [Google Scholar] [CrossRef]
  5. Nachar, R.; Inaty, E.; Bonnin, P.J.; Alayli, Y. Hybrid minutiae and edge corners feature points for increased fingerprint recognition performance. Pattern Anal. Appl. 2020, 23, 213–224. [Google Scholar] [CrossRef]
  6. Zhou, W.; Hu, J.; Wang, S. Enhanced locality-sensitive hashing for fingerprint forensics over large multi-sensor databases. IEEE Trans. Big Data 2017, 7, 759–769. [Google Scholar] [CrossRef]
  7. González, M.; Sánchez, Á.; Dominguez, D.; Rodríguez, F.B. Ensemble of diluted attractor networks with optimized topology for fingerprint retrieval. Neurocomputing 2021, 442, 269–280. [Google Scholar] [CrossRef]
  8. Krishna Prasad, K. A Text Book of Research Papers on Fingerprint Recognition & Hash Code Techniques; Srinivas Publication: Mangalore, India, 2018. [Google Scholar]
  9. Situmorang, B.H.; Andrea, D. Identification of Biometrics Using Fingerprint Minutiae Extraction Based on Crossing Number Method. Komputasi J. Ilm. Ilmu Komput. Dan Mat. 2023, 20, 71–80. [Google Scholar] [CrossRef]
  10. Maio, D.; Maltoni, D.; Cappelli, R.; Franco, A.; Ferrara, M. FVC-onGoing: Online Evaluation of Fingerprint Recognition Algorithms. Available online: https://biolab.csr.unibo.it/fvcongoing/UI/Form/Home.aspx (accessed on 25 November 2023).
  11. Maio, D.; Maltoni, D.; Cappelli, R.; Franco, A.; Ferrara, M. FVC2000—Fingerprint Verification Competition. Available online: http://bias.csr.unibo.it/fvc2000/ (accessed on 25 November 2023).
  12. Maio, D.; Maltoni, D.; Cappelli, R.; Franco, A.; Ferrara, M. FVC2002—Fingerprint Verification Competition. Available online: http://bias.csr.unibo.it/fvc2002/default.asp (accessed on 25 November 2023).
  13. Maio, D.; Maltoni, D.; Cappelli, R.; Franco, A.; Ferrara, M. FVC2004—Fingerprint Verification Competition. Available online: http://bias.csr.unibo.it/fvc2004/ (accessed on 25 November 2023).
  14. Maio, D.; Maltoni, D.; Cappelli, R.; Franco, A.; Ferrara, M. FVC2006—Fingerprint Verification Competition. Available online: http://bias.csr.unibo.it/fvc2006/ (accessed on 25 November 2023).
  15. Li, H. Feature extraction, recognition, and matching of damaged fingerprint: Application of deep learning network. Concurr. Comput. Pract. Exp. 2021, 33, e6057. [Google Scholar] [CrossRef]
  16. Trivedi, A.K.; Thounaojam, D.M.; Pal, S. A novel minutiae triangulation technique for non-invertible fingerprint template generation. Expert Syst. Appl. 2021, 186, 115832. [Google Scholar] [CrossRef]
  17. Mohamed-Abdul-Cader, A.-J.; Chaidee, W.; Banks, J.; Chandran, V. Minutiae Triangle Graphs: A New Fingerprint Representation with Invariance Properties. In Proceedings of the 2019 International Conference on Image and Vision Computing New Zealand (IVCNZ), Dunedin, New Zealand, 2–4 December 2019; pp. 1–6. [Google Scholar]
  18. Ghaddab, M.H.; Jouini, K.; Korbaa, O. Fast and accurate fingerprint matching using expanded delaunay triangulation. In Proceedings of the 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October–3 November 2017; pp. 751–758. [Google Scholar]
  19. Surajkanta, Y.; Pal, S. A digital geometry-based fingerprint matching technique. Arab. J. Sci. Eng. 2021, 46, 4073–4086. [Google Scholar] [CrossRef]
  20. Engelsma, J.J.; Cao, K.; Jain, A.K. Learning a fixed-length fingerprint representation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1981–1997. [Google Scholar] [CrossRef]
  21. Tang, Y.; Gao, F.; Feng, J.; Liu, Y. FingerNet: An Unified Deep Network for Fingerprint Minutiae Extraction. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017. [Google Scholar]
  22. Cao, K.; Jain, A.K. Automated Latent Fingerprint Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 788–800. [Google Scholar] [CrossRef] [PubMed]
  23. He, Z.; Zhang, J.; Pang, L.; Liu, E. PFVNet: A Partial Fingerprint Verification Network Learned from Large Fingerprint Matching. IEEE Trans. Inf. Forensics Secur. 2022, 17, 3706–3719. [Google Scholar] [CrossRef]
  24. Cui, Z.; Feng, J.; Zhou, J. Dense Registration and Mosaicking of Fingerprints by Training an End-to-End Network. IEEE Trans. Inf. Forensics Secur. 2020, 16, 627–642. [Google Scholar] [CrossRef]
  25. Kumar, M. A novel fingerprint minutiae matching using LBP. In Proceedings of the 3rd International Conference on Reliability, Infocom Technologies and Optimization, Noida, India, 8–10 October 2014; pp. 1–4. [Google Scholar]
  26. Bakheet, S.; Al-Hamadi, A.; Youssef, R. A fingerprint-based verification framework using Harris and SURF feature detection algorithms. Appl. Sci. 2022, 12, 2028. [Google Scholar] [CrossRef]
  27. Li, Y.; Shi, G. ORB-based Fingerprint Matching Algorithm for Mobile Devices. In Proceedings of the 2019 IEEE 2nd International Conference on Computer and Communication Engineering Technology (CCET), Beijing, China, 16–18 August 2019; pp. 11–15. [Google Scholar]
  28. Thai, R. Fingerprint Image Enhancement and Minutiae Extraction. Master’s Thesis, School of Computer Science and Software Engineering, University of Western Australia, Perth, Australia, 2003. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
