Review

Crime Scene Shoeprint Image Retrieval: A Review

1 Hebei International Research Center for Medical-Engineering, Chengde Medical University, Chengde 067000, China
2 Department of Biomedical Engineering, Chengde Medical University, Chengde 067000, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(16), 2487; https://doi.org/10.3390/electronics11162487
Submission received: 11 July 2022 / Revised: 5 August 2022 / Accepted: 8 August 2022 / Published: 10 August 2022

Abstract

Shoeprints perform a vital role in forensic investigations and have become an active research topic in forensic science. The main purpose of shoeprint image retrieval is to produce a ranking of the shoeprint images in a database according to their feature similarity to a query image. In this way, a shoeprint can not only be used as an exhibit for bringing criminal charges but can also provide a clue to a case. The goal of this work is to present an overview of the existing research on shoeprint image retrieval. We detail the different phases of the shoeprint retrieval task and summarize the state-of-the-art methods. We also analyze the difficulties and open problems in this field and discuss directions for future work. This review may help newcomers become involved in this research quickly.

1. Introduction

With the popularity of forensic investigation novels and TV series, a growing number of people have become aware of the trace evidence that investigators extract at crime scenes for case detection. Fingerprints, hair, blood and DNA can be collected at crime scenes and provide clues to cases [1,2,3,4,5]. With the popularization of this knowledge, the amount of fingerprint and DNA evidence collected at scenes has decreased noticeably, which seriously affects the investigation of criminal cases. However, after criminal suspects commit crimes, they inevitably leave traces at the crime scene [6]. This inference is based on Locard’s principle that there is an exchange of material between two objects in contact. When it is difficult for criminal investigators to extract fingerprints, DNA and other evidence at crime scenes, shoeprints can play a vital role [7,8,9,10].
A shoeprint is a mark left by a shoe outsole when it makes contact with a surface. It reflects the suspect’s height, weight, age [11], walking habits and other individual characteristics [12], so it can not only be used for bringing criminal charges but can also provide a clue to a case [13,14]. Given a shoeprint image obtained at a crime scene, shoeprint retrieval searches for the most similar gallery shoeprints in a database. In forensic practice, a crime scene shoeprint can be used in three kinds of tasks, according to where the gallery shoeprints come from. The first is scene to scene (S2S) shoeprint retrieval, in which the query image and the gallery images are both collected at crime scenes. The purpose of S2S retrieval is to compare the query shoeprint with shoeprints collected at other scenes to search for clues [15,16]. The second is scene to reference (S2R) shoeprint retrieval, in which the gallery shoeprints are collected from suspects. The suspect is required to step on a uniform background such as chemical paper, and the shoeprint image is then obtained by scanning the paper. This kind of shoeprint is referred to as a reference shoeprint. The purpose of S2R retrieval is to compare the query shoeprint with shoeprints captured from suspects to link the case with a suspect. The third is scene to pattern (S2P) shoeprint retrieval, in which the gallery shoeprints are created by taking impressions of shoe outsoles provided by shoe vendors. This kind of shoeprint is referred to as a standard pattern. S2P retrieval aims to determine the standard outsole of the query shoeprint. By conducting S2P retrieval, investigators can acquire a series of standardized data about the query, such as the standard outsole, the manufacturer and the manufacturing time. The S2S and S2R retrieval tasks are the ones usually used in forensic practice. Figure 1 shows three groups of shoeprint image samples and visually illustrates the differences between the S2S, S2R and S2P retrieval tasks.
Given a shoeprint image collected at a crime scene, it is difficult for computer vision systems to search for the most similar gallery images in a database. The main difficulty lies in measuring the similarity between the query image and degraded database samples. Although researchers have devoted considerable effort to proposing efficient shoeprint retrieval methods, the existing methods still have deficiencies. The main purpose of this study is to present a thorough review of the existing crime scene shoeprint retrieval methods. We also analyze the problems and challenges linked to these methods, which may help advance research on this topic.
In this survey, our research contributions can be summarized as follows:
(1)
Each kind of method is reviewed and compared in terms of feature extraction, performance, etc. This may help newcomers become involved in this research quickly.
(2)
Publicly available datasets are described in detail, including their attributes, size, etc.
(3)
A comprehensive discussion is presented about current research issues and challenges linked with these methods.
(4)
Potential future work directions are explored to advance research on this topic.
The rest of the paper is organized as follows. In Section 2, a comprehensive overview of the existing shoeprint image retrieval techniques is presented, and the comparisons are followed by a literature review. Section 3 presents the publicly available datasets and evaluation metrics. Section 4 sums up the reviewed articles in the context of challenges and future work directions. The last section concludes by presenting an overview of the paper.

2. Shoeprint Retrieval Method

In early practice, investigators collected shoeprints at a crime scene and manually compared them with shoeprints from other crime scenes to search for clues. However, manual comparison is laborious because of the huge number of shoeprints to compare. To perform the comparisons more efficiently, researchers devoted their efforts to designing automatic shoeprint retrieval methods.
In the inception phase, shoeprint retrieval methods worked in a semi-automatic manner, representing shoeprints by a codebook of shape primitives [16]. Although the shape primitives can be classified automatically [17], these semi-automatic methods involve a lot of manual work, and inconsistent user encoding can result in poor performance [18].
With the extensive application of computer techniques in the field of forensic investigation, more and more automatic shoeprint retrieval methods have been employed in forensic practice. In this study, we concentrate on automatic shoeprint retrieval techniques. An automatic shoeprint image retrieval system mainly consists of three phases: the shoeprint image preprocessing phase, the shoeprint image feature extraction phase and the corresponding feature similarity measurement and ranking score computation phase [19]. The preprocessing phase aims to separate shoeprints from complex backgrounds and to enhance image quality [20,21,22]. The feature extraction phase is used to extract discriminative features to represent the shoeprint images. The main task of the feature similarity measurement and ranking score computation phase is to match the query shoeprint with the database images and to rank the database shoeprint images according to the matching scores. The framework of the shoeprint image retrieval method is shown in Figure 2.
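The three phases can be summarized by the minimal Python sketch below, in which `preprocess`, `extract_features` and `similarity` are placeholders for any of the concrete techniques reviewed in the following subsections; it illustrates the framework only, not any specific published method, and in practice the gallery features would be precomputed once rather than recomputed per query.

```python
import numpy as np

def retrieve(query_img, gallery_imgs, preprocess, extract_features, similarity):
    """Rank gallery images by descending feature similarity to the query."""
    q = extract_features(preprocess(query_img))               # phases 1 and 2
    scores = [similarity(q, extract_features(preprocess(g)))  # phase 3: matching
              for g in gallery_imgs]
    return np.argsort(scores)[::-1]                           # best match first
```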

2.1. Shoeprint Image Acquisition

There are three kinds of shoeprints used in forensic practice. The first is the crime scene shoeprint, which is collected at different crime scenes. These shoeprints are usually digitized by taking photos of the impressions; the imaging process is shown in Figure 3. They can also be digitized by scanning gelatin lifters applied to the impressions. The second is the reference shoeprint, which is collected from the suspect. These shoeprints can be digitized with a camera, or by scanning the chemical paper stepped on by the suspect; the imaging process is shown in Figure 4. This kind of shoeprint can also be digitized by using a shoeprint scanner [23], whose imaging process is shown in Figure 5. The third is the standard shoeprint, which is acquired by taking photos of outsoles provided by footwear vendors.
In forensic practice, two kinds of shoeprint scanners are used in shoeprint image acquisition, i.e., the reference shoeprint scanner and the crime scene shoeprint scanner. The reference shoeprint scanner is used to collect reference shoeprints stepped on by the suspect; its imaging process is shown in Figure 5. The crime scene shoeprint scanner is used to collect shoeprints at crime scenes; its imaging process is shown in Figure 3. Shoeprint images can usually be scanned at 300 dpi with both kinds of scanner.

2.2. Feature Extraction Methods

In the shoeprint retrieval task, discriminative features usually play a vital role in enhancing retrieval performance [24]. The main difference among existing shoeprint retrieval methods lies in the type of features they use. According to the type of features, these methods can be organized into three main categories, and the framework of feature extraction is shown in Figure 6. Methods in the first category extract holistic features directly [25,26,27]; a shoeprint is processed as a whole, i.e., these methods represent shoeprint images from a global perspective. Methods in the second category concentrate on extracting distinctive features in semantic regions [28,29,30,31,32,33,34]; they first extract semantic regions and then extract regional features within them, thereby representing shoeprints from a regional perspective. Methods in the third category concentrate on extracting local features [35]; they first detect interest points and then extract local features around them, describing shoeprints from a local perspective.

2.2.1. Holistic-Feature-Based Methods

Bouridane et al. [36,37] employed fractal-based features to describe shoeprint images and used the extracted features to perform shoeprint image classification. The method was tested on a database containing 145 shoeprints, and the classification accuracy was about 88%. Experiments were also carried out to verify robustness to variations in rotation and translation; the results show that the method deals well with both.
Moment invariant features are also used to describe geometric shapes, e.g., Hu moments [38] and Zernike moments [39,40]. Algarni et al. [41] employed Hu moments to retrieve shoeprints; the accuracy of the method was about 99.4% on shoeprint images of high quality. In [42], Khotanzad extracted features to represent shoeprint images by using a Zernike-moments-based method. Experiments were conducted on a database containing more than two hundred shoeprints, and the accuracy at the top 50 was about 92%; the results show that the method can achieve good performance on shoeprints of high image quality. Wei et al. [43,44] also used Zernike moments to extract shoeprint image features and achieved good performance.
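As an illustration of a moment-based holistic descriptor, the sketch below computes Hu’s seven invariants with OpenCV; the Otsu binarization and signed-log normalization are common choices we assume here, not necessarily the exact settings of [41].

```python
import cv2
import numpy as np

def hu_descriptor(gray_shoeprint):
    # Otsu binarization separates the print from the background (an assumed step)
    _, binary = cv2.threshold(gray_shoeprint, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Seven Hu invariants: invariant to translation, rotation and scale
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Signed log scaling, a common normalization for the invariants' dynamic range
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```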
The Fourier transform is well suited to image representation owing to its strength in frequency-domain analysis. Huynh et al. [45] proposed a Fourier-transform-based shoeprint classification method that is robust to variations in rotation, scale and translation; tested on a database of 503 shoeprint images, its accuracy at the top 1 was about 54%. In [46,47], frequency spectra were extracted by using the Fourier transform to represent shoeprint images; the experimental results showed that the methods are robust to variations in rotation, scale and translation, although their accuracy may decrease on partial shoeprints. In [48,49,50,51,52], Cervelli et al. performed the Fourier transform on cropped shoeprint images; experiments were conducted on 35 shoeprints, and the accuracy at the top 6 was about 91%, but the method may fail on shoeprints with geometric transformations. Crookes et al. [53] employed phase correlation and advanced correlation filters for shoeprint retrieval; correlation methods are insensitive to image translation, and the method achieved an accuracy of about 99% on a database containing 100 shoeprints, although it is sensitive to variations in scale. Jing et al. [54] used four kinds of features to represent shoeprint images and measured the feature similarity between shoeprints by summing the absolute differences between these features.
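A minimal sketch of a Fourier-spectrum descriptor in the spirit of [45,46] is given below: the magnitude spectrum discards phase and is therefore invariant to translation, while the additional steps the cited methods use for rotation and scale invariance are omitted; the image size and normalization are our assumptions.

```python
import cv2
import numpy as np

def fourier_descriptor(gray_shoeprint, size=(128, 128)):
    img = cv2.resize(gray_shoeprint, size).astype(np.float32)
    # Magnitude spectrum: phase (and hence translation) information is discarded
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spectrum = np.log1p(spectrum)                         # compress dynamic range
    return (spectrum / np.linalg.norm(spectrum)).ravel()  # L2-normalized vector
```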
The Fourier transform works well for analyzing images in the frequency domain, but it cannot perform localized analysis. To overcome this drawback, the Gabor transform [55,56] has been used in some shoeprint image retrieval methods. Patil et al. [57,58] used the Gabor transform to extract image features, and the method achieved good performance: the accuracy at the top 2 was about 100%, and the results show that the method is robust to partial shoeprints. In [59], Li used Gabor features and an integral histogram to retrieve shoeprints; experiments were conducted on a database containing 2000 shoeprint images, and the accuracy was about 46.8%, although variations in rotation, scale and translation were not considered. Pei et al. [60] extracted texture and geometry features by using odd and even Gabor filters, with the geometry and texture features used to weight the query and the similarity, respectively. Kong et al. [61] extracted textural and statistical features to represent shoeprint images by fusing Gabor features and Zernike features. Experiments were conducted on a dataset containing 6000 shoeprint images, and the accuracy at the top 5 of the ranking list was about 61.7%; on another dataset composed of 1225 gallery images and 104 probe images, the recognition rate at the top 1 was about 34.59%.
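The following sketch illustrates a Gabor texture descriptor of the kind used in [57,59,60]: the print is filtered with a bank of oriented Gabor kernels, and simple response statistics per orientation are concatenated. The kernel parameters are illustrative assumptions, not the settings of the cited works.

```python
import cv2
import numpy as np

def gabor_descriptor(gray_shoeprint, n_orientations=8):
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations        # kernel orientation
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray_shoeprint.astype(np.float32),
                                cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])  # per-orientation statistics
    return np.array(feats)
```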
Deep learning has been extensively used in shoeprint feature extraction and image recognition owing to its powerful image representation capability [62]. In [25,26], Kong extracted deep shoeprint features by using a convolutional neural network [63] and then matched these features with a multi-channel normalized cross-correlation method; the recall at the top 20% was about 94.0%. Zhang et al. [64] used an extended shoeprint image database to fine-tune the parameters of the VGG-16 network [65] and then extracted features with the fine-tuned network; the recall at the top 10 of the ranking list was about 88.7%, although partial shoeprints were not considered.
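A hedged sketch of CNN feature extraction along the lines of [64] is shown below, using a pretrained VGG-16 as a fixed feature extractor with PyTorch/torchvision; the layer choice and global average pooling are our assumptions rather than the exact configuration of the cited work, and fine-tuning is omitted.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224)),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def cnn_descriptor(rgb_shoeprint):                # H x W x 3 uint8 array
    x = preprocess(rgb_shoeprint).unsqueeze(0)    # 1 x 3 x 224 x 224 tensor
    fmap = vgg(x)                                 # 1 x 512 x 7 x 7 feature map
    # Global average pooling yields a compact 512-dimensional descriptor
    return torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten()
```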
Zhang et al. [66] represented shoeprint images by an edge direction histogram, which describes the shape attributes of edges detected with the Canny edge detector. Experiments were conducted on a dataset containing 512 shoeprints, and the accuracy at the top 4% was about 97.7%.
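The edge direction histogram can be sketched as below: edges are detected with the Canny detector, gradient orientations at edge pixels are histogrammed, and the histogram is normalized. The bin count and thresholds are illustrative choices, not the settings of [66].

```python
import cv2
import numpy as np

def edge_direction_histogram(gray_shoeprint, bins=36):
    edges = cv2.Canny(gray_shoeprint, 50, 150)    # thresholds are illustrative
    gx = cv2.Sobel(gray_shoeprint, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray_shoeprint, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx)[edges > 0]        # orientations at edge pixels only
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)              # normalize to a distribution
```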
In [27], Richetelli et al. extracted features using scale-invariant feature transform (SIFT) descriptors, the phase-only correlation (POC) method and the Fourier–Mellin transform to represent shoeprint images, and compared the performance of these methods on the CS dataset. They report that the POC method achieves the best performance on dust and blood traces.

2.2.2. Regional-Feature-Based Methods

Tang et al. [67,68,69] extracted fundamental shapes to represent shoeprints with an attributed relational graph (ARG), in which the nodes are these fundamental shapes. The method was tested on a database containing 1400 shoeprints, and the accuracy at the top 20% of the ranking list was about 91%; the results show that the method deals well with partial shoeprints. Pavlou et al. [70,71] detected regions by using a maximally stable extremal regions detector and then extracted SIFT features within the detected regions; the accuracy at the top 1 was about 92%. In [28], Kortylewski proposed a periodic-pattern-based method that first detects periodic patterns and then extracts Fourier features from them; shoeprint images were ranked according to feature similarities, and on the CSFID dataset [28] the recall at the top 20% of the ranking list was about 85.7%. Kortylewski et al. [29,30] fitted a compositional active basis model to gallery shoeprint images and then evaluated it against crime scene shoeprint images; on the FID-300 dataset [29], the accuracy at the top 20% was about 71%, and the method achieves good performance on shoeprints with periodic patterns. Wang et al. [72,73,74,75] divided a shoeprint image into two regions and extracted features in each region by using the wavelet Fourier–Mellin transform; experiments were conducted on a database containing 10,096 shoeprints, and the accuracy at the top 2% was about 96.6%. Alizadeh et al. [31] implemented a shoeprint retrieval method based on sparse representation; their results show good performance, but the features they used may fail under variations in image translation and rotation. In [32], Ma divided a shoeprint into different regions and extracted features with convolutional neural networks; on the FID-300 dataset, the accuracy at the top 10% of the ranking list was about 89.8%. Tang et al. [76] extracted dot and line textures to represent shoeprint images; experiments were conducted on 3000 shoeprints, and the average recognition rate at the top 10 of the ranking list was about 91.1%. In [77], Ghouti used the dominant block energies of directional filter banks to represent shoeprints and matched these features with the Euclidean distance. Alizadeh et al. [78] used local binary patterns to represent shoeprints; this method performs well on high-quality shoeprint images, achieving an accuracy at the top 1 of about 97.6% on a dataset composed of 190 probe images and 760 gallery images.
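As an example of a regional descriptor, the sketch below computes block-wise local binary pattern histograms in the spirit of [78], using scikit-image; the grid layout and LBP parameters are illustrative assumptions rather than the exact configuration of the cited method.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray_shoeprint, grid=(4, 2), P=8, R=1):
    lbp = local_binary_pattern(gray_shoeprint, P, R, method="uniform")
    h, w = lbp.shape
    feats = []
    for i in range(grid[0]):            # split the print into blocks so the
        for j in range(grid[1]):        # descriptor preserves regional layout
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            # Uniform LBP with P neighbors yields P + 2 distinct codes
            hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```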

2.2.3. Interest-Point-Feature-Based Methods

Interest-point-feature-based methods are used not only in shoeprint image retrieval but also in other tasks, such as recognition [79,80], classification [81], registration [82], scene categorization [83] and object detection [84,85]. In this kind of shoeprint retrieval method, interest points are detected first, and local features are then extracted to describe the detected points for retrieving shoeprints. Commonly used local feature extraction methods include the binary robust independent elementary features (BRIEF) method [86], the ORB method [87], the FREAK method [88], the BRISK method [89], SIFT descriptors and improved SIFT methods [90,91]. These local features are usually robust to variations in rotation, translation and scale.
Li et al. [92] and Wang et al. [93] represented shoeprint images by SIFT descriptors; the methods were tested on shoeprint datasets, with accuracies of 90% and 97%, respectively. In [94], Nibouche described shoeprint images by SIFT descriptors and verified the matches by using the random sample consensus (RANSAC) method; tested on a database containing 300 shoeprints, the accuracy at the top 1 was about 91%. In [95], Su used SIFT descriptors to describe interest points detected by the Harris corner detector; tested on a database containing 374 shoeprints, the accuracy at the top 1 was about 87%. Almaadeed et al. [35] extracted interest points by combining Harris and Hessian detectors at multiple scales and then employed SIFT descriptors to represent the detected points. The method was tested on 400 classes of shoeprint images, each containing a query image and a target image, and the recognition accuracy at the top 10 was about 68.5%.
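The SIFT-plus-RANSAC matching scheme used in [92,94,95] can be sketched as follows with OpenCV: descriptors are matched with Lowe’s ratio test, a homography is fitted with RANSAC, and the number of geometrically consistent inliers serves as the similarity score. The ratio and reprojection thresholds are common defaults we assume here, not the exact settings of the cited works.

```python
import cv2
import numpy as np

def sift_ransac_score(query_gray, gallery_gray):
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(query_gray, None)
    kg, dg = sift.detectAndCompute(gallery_gray, None)
    if dq is None or dg is None:
        return 0
    pairs = cv2.BFMatcher().knnMatch(dq, dg, k=2)
    # Lowe's ratio test rejects ambiguous matches
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:                              # homography needs >= 4 points
        return 0
    src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kg[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(mask.sum()) if mask is not None else 0  # geometric inlier count
```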

2.3. Similarity Evaluation and Ranking Score Computation

The main task of the feature similarity measurement and ranking score computation phase is to match the query shoeprint with the database images and to rank the database shoeprint images according to the matching scores. The framework of this phase is shown in Figure 7. Owing to degradation, the distance between similar shoeprint images in the feature space can be large, which makes it difficult for a similarity measure to reflect the true similarity of shoeprint images; several methods attempt to address this problem. Distance functions are often used to measure the similarity between image features; common choices include the Euclidean, Manhattan, Mahalanobis and cosine distances. Several other measures have also been used. In [96], Bouridane utilized correlation coefficients to evaluate the similarity between shoeprint images. Gueham et al. [97] used phase-only correlation (POC) to measure shoeprint image similarity. Wang et al. [72,73] divided shoeprint images into a sole region and a heel region, extracted and matched features from the partitioned images, and integrated the feature similarities of the two partitions in the matching process. Kong et al. [25,26] proposed a multi-channel normalized cross-correlation method to calculate the similarity between multi-channel deep features. Due to complex backgrounds, clutter and partial occlusion, there are large intra-class variations, i.e., the appearance of similar shoeprint images collected from different sites can vary greatly. To overcome this problem and achieve better performance, some methods use computational models to calculate ranking scores based on feature similarities; this process is shown in Figure 7, labeled with dotted lines. Wang et al. [73] used the manifold ranking method to compute ranking scores and then ranked shoeprints accordingly; the experimental results show a significant performance improvement over using feature similarities alone.
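Phase-only correlation, as used for similarity measurement in [97], can be sketched as below: the cross-power spectrum of two equally sized images is normalized to unit magnitude so that only phase information remains, and the height of the resulting correlation peak serves as a translation-robust match score.

```python
import numpy as np

def poc_score(img_a, img_b):
    # Both images must have the same shape (e.g., after resizing)
    Fa = np.fft.fft2(img_a.astype(np.float32))
    Fb = np.fft.fft2(img_b.astype(np.float32))
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12    # keep phase information only
    poc_surface = np.real(np.fft.ifft2(cross_power))
    return poc_surface.max()                      # peak height = match score
```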
The details of the recent shoeprint image retrieval methods are summarized in Table 1.

3. Datasets and Evaluation Metrics

3.1. Publicly Available Datasets

Publicly available datasets are essential to advance research. The main problem in this field is that the lack of real crime scene shoeprint images makes it difficult to train a shoeprint image retrieval model. Most existing works test their methods on shoeprints collected under laboratory conditions, and most shoeprints used in the literature are generated by adding artificial distortions [46,71,98,99]. In addition, most shoeprint datasets used in the literature are not made available, so most of the literature does not conduct comparisons with existing methods. To date, we have found two shoeprint datasets that have been made publicly available for shoeprint retrieval evaluation, i.e., the FID-300 dataset [29] and the CS dataset [27]. The details of these datasets are summarized as follows.

3.1.1. FID-300 Dataset

The FID-300 dataset consists of one probe set and one gallery set. Shoeprint images in the probe set are used as the query images, and they are collected at different real crime scenes. Shoeprint images in the gallery set are reference shoeprint images, and they have high image quality. The reference images are digitized by scanning the chemical paper stepped on by the reference shoes. The FID-300 dataset contains 1175 gallery images and 300 query shoeprint images.
Figure 8 shows some samples of the FID-300 dataset. Figure 8a shows the query images, and their corresponding reference shoeprint images are shown in Figure 8b.

3.1.2. CS Dataset

To understand how different methods perform under different conditions, Richetelli et al. [27] offered the CS dataset. The CS dataset contains one probe set and one gallery set. Shoeprint images in the gallery set are reference shoeprints of very high image quality; the gallery set contains 100 images. The probe set consists of dust shoeprints, blood shoeprints, enhanced blood replicates and high-quality shoeprint images. There are 66 dust shoeprints, digitized by scanning gelatin lifters applied to impressions made by analysts, and 53 blood shoeprints, digitized by scanning the blood prints; the enhanced blood prints are enhanced by using leuco-crystal violet (LCV). There are 100 high-quality shoeprint images. Shoeprints in this dataset are not collected at real crime scenes; they are scene-like ones.
Figure 9 shows some sample shoeprints in the CS dataset. Figure 9a–d show the query shoeprints, and Figure 9e,f show their corresponding reference shoeprints.

3.2. Evaluation Metrics

In forensic practice, forensic investigators pay more attention to the top n shoeprints in the ranking list; therefore, shoeprint image retrieval methods are always evaluated by using cumulative match characteristic (CMC) curves and the cumulative match score (CMS).
The CMS is an efficient evaluation measure and is widely used to evaluate the performance of image retrieval methods [100]. The cumulative match score is formulated as follows:
$$\mathrm{CMS}(n) = \frac{r_n}{Q} \times 100\%$$
where $Q$ denotes the number of query images, and $r_n$ represents the number of query images whose correct match appears in the top $n$ ranked matches. The CMC curve shows how often the correct match appears in the top $n$ matches: the cumulative match scores are plotted on a graph, with the rank $n$ along the horizontal axis and the cumulative match score along the vertical axis.
Results in the literature are reported in the format $x\%@n$, which means that the cumulative match score at the top $n$ matches is $x$ percent. To display the results more intuitively, $n$ can also be expressed as a percentage of the gallery size.
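A small sketch of the CMS computation defined above: given the 1-based rank at which each query’s correct match appears, CMS(n) is the percentage of queries whose match falls within the top n.

```python
import numpy as np

def cumulative_match_scores(true_match_ranks, max_rank):
    """true_match_ranks: 1-based rank of the correct gallery image per query."""
    ranks = np.asarray(true_match_ranks)
    Q = len(ranks)                                # number of query images
    return np.array([(ranks <= n).sum() / Q * 100.0
                     for n in range(1, max_rank + 1)])

# Example: 3 queries whose correct matches appear at ranks 1, 4 and 2:
# cumulative_match_scores([1, 4, 2], 5) -> [33.3, 66.7, 66.7, 100.0, 100.0]
```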

4. Research Challenges and Discussions

4.1. Research Challenges

4.1.1. Limited Data

Publicly available datasets are essential for model training and performance evaluation. The main problem in this research area is the lack of real crime scene shoeprint image datasets. So far, few publicly available shoeprint image datasets have been published for research purposes, and some state-of-the-art approaches conduct experiments on their own datasets. This means that there are few shoeprint images available for training a retrieval model. Furthermore, some datasets contain only one shoeprint per class, which makes it difficult to find the most similar shoeprint according to the contents of the query shoeprint.

4.1.2. Degraded Images

Shoeprints collected at crime scenes are of low image quality, and most are misaligned, incomplete, cluttered and highly degraded. Some properties of crime scene shoeprints are summarized in Figure 10. Such degradation makes it more challenging to represent shoeprint images.

4.1.3. Large Intra-Class Variations

There are large differences between shoeprints in the probe set and the gallery set: even when two shoeprints come from the same outsole pattern, they can exhibit large intra-class variations, caused by variations in shape, appearance, noise, partial occlusion and clutter. These variations make it difficult to match shoeprint images.

4.2. Discussions

In the literature, a large number of handcrafted features have been used for shoeprint retrieval; these features achieve good performance on non-realistic and generated shoeprint images, but they cannot achieve the expected performance on real crime scene shoeprints, which are often incomplete, degraded and difficult to represent and match. Deep-learning-based methods can deal well with real crime scene shoeprints in some scenarios, but the features they extract are sensitive to variations in rotation, scale and translation, so they need a large amount of computation in the feature matching process. Furthermore, to achieve better performance, these methods need to train their models on a huge amount of data. We think that researchers can carry out future work from three aspects: (1) extending the set of real crime scene shoeprints, as public datasets are essential not only for model training but also for performance evaluation; (2) reducing the amount of computation in deep-learning-based methods; and (3) paying more attention to designing matching methods that deal with degraded shoeprints.

5. Conclusions

In this study, shoeprint retrieval methods are reviewed and classified according to the feature extraction techniques presented in the literature. The matching methods and performance comparisons are presented for a thorough understanding of the existing methods. Publicly available shoeprint datasets and their details are presented, which helps researchers choose the appropriate dataset for their work and conduct fair comparisons with existing shoeprint image retrieval approaches; the review also highlights the lack of publicly available real crime scene datasets. Furthermore, the challenges researchers face and directions for future work are analyzed.

Author Contributions

Conceptualization, Y.W.; methodology, Y.W. and X.D.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., X.D. and X.Z.; data curation, G.S.; visualization, C.C. and X.Z.; supervision, Y.W.; project administration, G.S.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

The starting fund for the scientific research of the high-level talents of Chengde Medical University—Nature (Grant No. 202111) and the “Technology Innovation Guidance Project-Science and Technology Work Conference” of the Hebei Provincial Department of Science and Technology.

Acknowledgments

We would like to cordially thank Xinnian Wang (Dalian Maritime University) and Shiqi Xu (Chengde Medical University) for their valuable discussions and fruitful commentary.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Hu, D.; Fan, J.; Wang, F.; Zhang, D. Multi-feature fusion for crime scene investigation image retrieval. In Proceedings of the IEEE International Conference on Digital Image Computing: Techniques and Applications, Sydney, NSW, Australia, 29 November–1 December 2017; pp. 1–7. [Google Scholar]
  2. Benecke, M. DNA typing in forensic medicine and in criminal investigations: A current survey. Naturwissenschaften 1997, 84, 181–188. [Google Scholar] [CrossRef] [PubMed]
  3. Robertson, J.R. Forensic Examination of Hair; CRC Press: Boca Raton, FL, USA, 2002. [Google Scholar]
  4. Buckleton, J.S.; Bright, J.-A.; Taylor, D. Forensic DNA Evidence Interpretation; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  5. Robertson, B.; Vignaux, G.A.; Berger, C.E. Interpreting Evidence: Evaluating Forensic Science in the Courtroom; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  6. Locard, E. The analysis of dust traces. Am. J. Police Sci. 1930, 1, 276–298. [Google Scholar] [CrossRef]
  7. Cervelli, F.; Dardi, F.; Carrato, S. Comparison of footwear retrieval systems for synthetic and real shoe mark. In Proceedings of the International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria, 16–18 September 2009; pp. 534–542. [Google Scholar]
  8. Bodziak, W.J. Footwear Impression Evidence Detection, Recovery and Examination, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2000. [Google Scholar]
  9. Rankin, B. Footwear marks-a step by step review. Forensic Sci. Soc. 1998, 32, 54–70. [Google Scholar]
  10. Thompson, T.; Black, S. Forensic Human Identification: An Introduction; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  11. Hassan, M.; Wang, Y.; Wang, D.; Li, D.; Liang, Y.; Zhou, Y.; Xu, D. Deep learning analysis and age prediction from shoeprints. Forensic Sci. Int. 2021, 327, 110987. [Google Scholar] [CrossRef]
  12. Francis, X.; Sharifzadeh, H.; Newton, A.; Baghaei, N.; Varastehpour, S. Learning wear patterns on footwear outsoles using convolutional neural networks. In Proceedings of the 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering, Rotorua, New Zealand, 5–8 August 2019; pp. 450–457. [Google Scholar]
  13. Speir, J.A.; Richetelli, N.; Fagert, M.; Hite, M.; Bodziak, W.J. Quantifying randomly acquired characteristics on outsoles in terms of shape and position. Forensic Sci. Int. 2016, 266, 399–411. [Google Scholar] [CrossRef]
  14. Ribaux, O.; Girod, A. Forensic intelligence and crime analysis. Law Probab. Risk 2003, 2, 47–63. [Google Scholar] [CrossRef]
  15. Ribaux, O.; Baylon, A.; Roux, C.; Delémont, O.; Lock, E.; Zingg, C.; Margot, P. Intelligence-led crime scene processing. Part I: Forensic intelligence. Forensic Sci. Int. 2010, 195, 10–16. [Google Scholar] [CrossRef]
  16. Geradts, Z.; Keijzer, J. The image-database REBEZO for shoeprint with developments on automatic classification of shoe outsole designs. Forensic Sci. Int. 1996, 79, 21–23. [Google Scholar] [CrossRef]
  17. Budka, M.; Ashraf, A.W.U.; Bennett, M.; Neville, S.; Mackrill, A. Deep multilabel CNN for forensic footwear impression descriptor identification. Appl. Soft Comput. 2021, 109, 107496. [Google Scholar] [CrossRef]
  18. Srihari, S.N. Analysis of Footwear Impression Evidence. 2011. Available online: https://www.ojp.gov/pdffiles1/nij/grants/233981.pdf (accessed on 10 July 2022).
  19. Rida, I.; Al-Maadeed, N.; Al-Maadeed, S.; Bakshi, S. A comprehensive overview of feature representation for biometric recognition. Multimed. Tools Appl. 2020, 79, 4867–4890. [Google Scholar] [CrossRef]
  20. Ramakrishnan, V.; Srihari, S. Extraction of shoe-print patterns from impression evidence using conditional random fields. In Proceedings of the 19th IEEE International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, 8–12 December 2008; pp. 1–4. [Google Scholar]
  21. Francis, X.; Sharifzadeh, H.; Newton, A.; Baghaei, N.; Varastehpour, S. Feature enhancement and denoising of a forensic shoeprint dataset for tracking wear-and-tear effects. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Ajman, United Arab Emirates, 10–12 December 2019; pp. 1–5. [Google Scholar]
  22. Guo, T.; Tang, Y.; Guo, W. Planar shoeprint segmentation based on the multiplicative intrinsic component optimization. In Proceedings of the 3rd International Conference on Image, Vision and Computing, Chongqing, China, 27–29 June 2018; pp. 283–287. [Google Scholar]
  23. Wang, X.N.; Wu, Y.J.; Zhang, T. Multi-Layer Feature Based Shoeprint Verification Algorithm for Camera Sensor Images. Sensors 2019, 19, 2491. [Google Scholar] [CrossRef] [Green Version]
  24. Rida, I.; Fei, L.; Proença, H.; Nait-Ali, A.; Hadid, A. Forensic shoe-print identification: A brief survey. arXiv 2019, arXiv:1901.01431. [Google Scholar]
  25. Kong, B.; Supancic, J.; Ramanan, D. Cross-Domain forensic shoeprint matching. In Proceedings of the British Machine Vision Conference, London, UK, 4–7 September 2017; pp. 128–135. [Google Scholar]
  26. Kong, B.; Supančič, J.; Ramanan, D.; Fowlkes, C.C. Cross-Domain Image Matching with Deep Feature Maps. Int. J. Comput. Vis. 2019, 127, 1738–1750. [Google Scholar] [CrossRef] [Green Version]
  27. Richetelli, N.; Lee, M.C.; Lasky, C.A.; Gump, M.E.; Speir, J.A. Classification of footwear outsole patterns using Fourier transform and local interest points. Forensic Sci. Int. 2017, 275, 102–109. [Google Scholar] [CrossRef]
  28. Kortylewski, A.; Albrecht, T.; Vetter, T. Unsupervised footwear impression analysis and retrieval from crime scene data. In Proceedings of the Asian Conference on Computer Vision, Singapore, 1–5 November 2014; pp. 644–658. [Google Scholar]
  29. Kortylewski, A.; Vetter, T. Probabilistic Compositional Active Basis Models for Robust Pattern Recognition. In Proceedings of the 27th British Machine Vision Conference (BMVC), York, UK, 19–22 September 2016. [Google Scholar]
  30. Kortylewski, A. Model-Based IMAGE Analysis for Forensic Shoe Print Recognition. Ph.D. Dissertation, Department Computer Graphic Bilder Kennung, University of Basel, Basel, Switzerland, 2017. [Google Scholar]
  31. Alizadeh, S.; Kose, C. Automatic retrieval of shoeprint images using blocked sparse representation. Forensic Sci. Int. 2017, 277, 103–114. [Google Scholar] [CrossRef]
  32. Ma, Z.; Ding, Y.; Wen, S.; Xie, J.; Jin, Y.; Si, Z.; Wang, H. Shoe-Print Image Retrieval with Multi-Part Weighted CNN. IEEE Access 2019, 7, 59728–59736. [Google Scholar] [CrossRef]
  33. Rathinavel, S.; Arumugam, S. Full shoe print recognition based on pass band dct and partial shoe print identification using overlapped block method for degraded images. Int. J. Comput. Appl. 2011, 26, 16–21. [Google Scholar] [CrossRef]
  34. Hasegawa, M.; Tabbone, S. A local adaptation of the histogram radon transform descriptor: An application to a shoe print dataset. In Proceedings of the 2012 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition, Hiroshima, Japan, 7–9 November 2012; pp. 675–683. [Google Scholar]
  35. Almaadeed, S.; Bouridane, A.; Crookes, D.; Nibouche, O. Partial shoeprint retrieval using multiple point-of-interest detectors and SIFT descriptors. Integr. Comput. Aided Eng. 2015, 22, 41–58. [Google Scholar] [CrossRef]
  36. Alexander, A.; Bouridane, A.; Crookes, D. Automatic classification and recognition of shoeprints. In Proceedings of the International Conference on Image Processing and its Applications, Manchester, UK, 24–28 October 1999; pp. 638–641. [Google Scholar]
  37. Bouridane, A.; Alexander, A.; Nibouche, M.; Crookes, D. Application of fractals to the detection and classification of shoeprints. In Proceedings of the International Conference on Image Processing, Vancouver, BC, Canada, 10–13 September 2000; pp. 474–477. [Google Scholar]
  38. Hu, M.K. Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  39. Teague, M.R. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930. [Google Scholar] [CrossRef]
  40. Teh, C.H.; Chin, R.T. On image analysis by the methods of moments. Pattern Anal. Mach. Intell. 1988, 10, 496–513. [Google Scholar] [CrossRef]
  41. Algarni, G.; Amiane, M. A novel technique for automatic shoeprint image retrieval. Forensic Sci. Int. 2008, 181, 10–14. [Google Scholar] [CrossRef]
  42. Khotanzad, A.; Hong, Y.H. Invariant image recognition by Zernike moments. Pattern Anal. Mach. Intell. 1990, 12, 489–497. [Google Scholar] [CrossRef] [Green Version]
  43. Wei, C.H.; Gwo, C.Y. Alignment of core point for shoeprint analysis and retrieval. In Proceedings of the International Conference on Information Science, Electronics and Electrical Engineering, Sapporo City, Hokkaido, Japan, 26–28 April 2014; pp. 1069–1072. [Google Scholar]
  44. Gwo, C.Y.; Wei, C.H. Shoeprint retrieval: Core point alignment for pattern comparison. Sci. Justice 2016, 56, 341–350. [Google Scholar] [CrossRef] [PubMed]
  45. Huynh, C.; de Chazal, P.; McErlean, D.; Reilly, R.; Hannigan, T.; Fleury, L. Automatic classification of shoeprints for use in forensic science based on the Fourier transform. In Proceedings of the International Conference on Image Processing, Barcelona, Spain, 14–18 September 2003; pp. 569–572. [Google Scholar]
  46. de Chazal, P.; Flynn, J.; Reilly, R.B. Automated processing of shoeprint images based on the Fourier transform for use in forensic science. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 341–350. [Google Scholar] [CrossRef] [PubMed]
  47. Gueham, M.; Bouridane, A.; Crookes, D.; Nibouche, O. Automatic recognition of shoeprints using Fourier-Mellin transform. In Proceedings of the NASA/ESA Conference on Adaptive Hardware and Systems, Noordwijk, The Netherlands, 22–25 June 2008; pp. 487–491. [Google Scholar]
  48. Dardi, F.; Cervelli, F.; Carrato, S. An automatic footwear retrieval system for shoe marks from real crime scenes. In Proceedings of the International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria, 16–18 September 2009; pp. 668–672. [Google Scholar]
  49. Dardi, F.; Cervelli, F.; Carrato, S. A texture based shoe retrieval system for shoe Marks of real crime scenes. In Proceedings of the International Conference on Image Analysis and Processing, Trieste, Italy, 7–10 November 2009; pp. 384–393. [Google Scholar]
  50. Cervelli, F.; Dardi, F.; Carrato, S. A translational and rotational invariant descriptor for automatic footwear retrieval of real cases shoe marks. In Proceedings of the European Signal Processing Conference, Aalborg, Denmark, 23–27 August 2010; pp. 1665–1669. [Google Scholar]
  51. Cervelli, F.; Dardi, F.; Carrato, S. A texture recognition system of real shoe marks taken from crime scenes. In Proceedings of the IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 2905–2908. [Google Scholar]
  52. Dardi, F.; Cervelli, F.; Carrato, S. A combined approach for footwear retrieval of crime scene shoe marks. In Proceedings of the 3rd International Conference on Crime Detection and Prevention (ICDP), London, UK, 3 December 2009; pp. 1–6. [Google Scholar]
  53. Crookes, D.; Bouridane, A.; Su, H.; Gueham, M. Following the Footsteps of Others: Techniques for Automatic Shoeprint Classification. In Proceedings of the Second NASA/ESA Conference on Adaptive Hardware and Systems, Edinburgh, UK, 5–8 August 2007; pp. 67–74. [Google Scholar]
  54. Jing, M.Q.; Ho, W.J.; Chen, L.H. A novel method for shoeprints recognition and classification. In Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, Baoding, China, 7–15 July 2009; pp. 2846–2851. [Google Scholar]
  55. Daugman, J.G. Two-dimensional spectral analysis of cortical receptive field profiles. Vis. Res. 1980, 20, 847–856. [Google Scholar] [CrossRef]
  56. Daugman, J.G. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two dimensional visual cortical filters. J. Opt. Soc. Am. 1985, 2, 1160–1169. [Google Scholar] [CrossRef] [PubMed]
  57. Patil, M.P.; Kulkarni, V.J. Rotation and intensity invariant shoeprint matching using Gabor transform with application to forensic science. Pattern Recognit. 2009, 42, 1308–1317. [Google Scholar] [CrossRef]
  58. Deshmukh, M.P.; Patil, M.P. Automatic shoeprint matching system for crime scene investigation. Int. J. Comput. Sci. Commun. Technol. 2009, 2, 281–287. [Google Scholar]
  59. Li, X.; Wu, M.; Shi, Z. The retrieval of shoeprint images based on the integral histogram of the Gabor transform domain. In Proceedings of the International Conference on Intelligent Information Processing, Hangzhou, China, 3–6 October 2014; pp. 249–258. [Google Scholar]
  60. Pei, W.; Zhu, Y.; Na, Y.; He, X. Multiscale Gabor wavelet for shoeprint image retrieval. In Proceedings of the 2nd IEEE International Congress on Image and Signal Processing (CISP), Tianjin, China, 17–19 October 2009; pp. 1–5. [Google Scholar]
  61. Kong, X.; Yang, C.; Zheng, F. A novel method for shoeprint recognition in crime scenes. In Proceedings of the 9th Chinese Conference on Biometric Recognition, Shenyang, China, 7–9 November 2014; pp. 498–505. [Google Scholar]
  62. Vagač, M.; Povinský, M.; Melicherčík, M. Detection of shoe sole features using dnn. In Proceedings of the 14th IEEE International Scientific Conference on Informatics, Poprad, Slovakia, 14–16 November 2017; pp. 416–419. [Google Scholar]
  63. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  64. Zhang, Y.; Fu, H.; Dellandréa, E.; Chen, L. Adapting convolutional neural networks on the shoeprint retrieval for forensic use. In Proceedings of the Chinese Conference on Biometric Recognition, Shenzhen, China, 28–29 October 2017; pp. 520–527. [Google Scholar]
  65. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  66. Zhang, L.; Allinson, N. Automatic shoeprint retrieval system for use in forensic investigations. In Proceedings of the UK Workshop On Computational Intelligence, London, UK, 5–7 September 2005; pp. 137–142. [Google Scholar]
  67. Tang, Y.; Srihari, S.N.; Kasiviswanathan, H.; Corso, J.J. Footwear print retrieval system for real crime scene marks. In Proceedings of the International Workshop on Computational Forensics, Tokyo, Japan, 11–12 November 2010; pp. 88–100. [Google Scholar]
  68. Tang, Y.; Srihari, S.N.; Kasiviswanathan, H. Similarity and Clustering of Footwear Prints. In Proceedings of the IEEE International Conference on Granular Computing, San Jose, CA, USA, 14–16 August 2010; pp. 459–464. [Google Scholar]
  69. Tang, Y.; Kasiviswanathan, H.; Srihari, S.N. An efficient clustering-based retrieval framework for real crime scene footwear marks. Int. J. Granul. Comput. Rough Sets Intell. Syst. 2012, 2, 327–360. [Google Scholar] [CrossRef]
  70. Pavlou, M.; Allinson, N.M. Automatic extraction and classification of footwear patterns. In Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, Burgos, Spain, 20–23 September 2006; pp. 2088–2095. [Google Scholar]
  71. Pavlou, M.; Allinson, N.M. Automated encoding of footwear patterns for fast indexing. Image Vis. Comput. 2009, 27, 402–409. [Google Scholar] [CrossRef]
  72. Wang, X.N.; Sun, H.H.; Yu, Q.; Zhang, C. Automatic shoeprint retrieval algorithm for real crime scenes. In Proceedings of the Asian Conference on Computer Vision, Singapore, 1–5 November 2014; pp. 399–413. [Google Scholar]
  73. Wang, X.; Zhang, C.; Wu, Y.; Shu, Y. A manifold ranking based method using hybrid features for crime scene shoeprint retrieval. Multimed. Tools Appl. 2017, 76, 21629–21649. [Google Scholar] [CrossRef]
  74. Wu, Y.; Wang, X.; Nankabirwa, N.L.; Zhang, T. LOSGSR: Learned Opinion Score Guided Shoeprint Retrieval. IEEE Access 2019, 7, 55073–55089. [Google Scholar] [CrossRef]
  75. Wu, Y.J.; Wang, X.N.; Zhang, T. Crime Scene Shoeprint Retrieval Using Hybrid Features and Neighboring Images. Information 2019, 10, 45. [Google Scholar] [CrossRef] [Green Version]
  76. Tang, C.; Dai, X. Automatic shoe sole pattern retrieval system based on image content of shoeprint. In Proceedings of the IEEE International Conference on Computer Design and Applications (ICCDA), Qinhuangdao, China, 25–27 June 2010; pp. 602–605. [Google Scholar]
  77. Ghouti, L.; Bouridane, A.; Crookes, D. Classification of shoeprint images using directional filter banks. In Proceedings of the International Conference on Visual Information Engineering (VIE), Bangalore, India, 26–28 September 2006; pp. 167–173. [Google Scholar]
  78. Alizadeh, S.; Jond, H.B.; Nabiyev, V.V. Automatic Retrieval of Shoeprints Using Modified Multi-Block Local Binary Pattern. Symmetry 2021, 13, 296. [Google Scholar] [CrossRef]
  79. Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep High-Resolution Representation Learning for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3349–3364. [Google Scholar] [CrossRef] [Green Version]
  80. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 172–186. [Google Scholar] [CrossRef] [Green Version]
  81. Ma, W.; Shen, J.; Zhu, H.; Zhang, J.; Zhao, J.; Hou, B.; Jiao, L. A Novel Adaptive Hybrid Fusion Network for Multiresolution Remote Sensing Images Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5400617. [Google Scholar] [CrossRef]
  82. Huang, W.; Yang, H.; Liu, X.; Li, C.; Zhang, I.; Wang, R.; Zheng, H.; Wang, S. A Coarse-to-Fine Deformable Transformation Framework for Unsupervised Multi-Contrast MR Image Registration with Dual Consistency Constraint. IEEE Trans. Med. Imaging 2021, 40, 2589–2599. [Google Scholar] [CrossRef]
  83. Zhang, L.; Liang, R.; Yin, J.; Zhang, D.; Shao, L. Scene Categorization by Deeply Learning Gaze Behavior in a Semisupervised Context. IEEE Trans. Cybern. 2021, 51, 4265–4276. [Google Scholar] [CrossRef]
  84. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  85. Girshick, R.; Donahue, J.; Darrel, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 142–158. [Google Scholar] [CrossRef] [PubMed]
  86. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. Brief: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; pp. 778–792. [Google Scholar]
  87. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  88. Alahi, A.; Ortiz, R.; Vandergheynst, P. Freak: Fast retina keypoint. In Proceedings of the IEEE Conference on Computer vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 510–517. [Google Scholar]
  89. Leutenegger, S.; Chli, M.; Siegwart, R. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555. [Google Scholar]
  90. Ke, Y.; Sukthankar, R. PCA-SIFT: A more distinctive representation for local image descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; pp. 506–513. [Google Scholar]
  91. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417. [Google Scholar]
  92. Li, Z.; Wei, C.; Li, Y.; Sun, T. Research of shoeprint image stream retrieval algorithm with scale-invariance feature transform. In Proceedings of the International Conference on Multimedia Technology, Hangzhou, China, 26–28 July 2011; pp. 5488–5491. [Google Scholar]
  93. Wang, H.; Fan, J.; Li, Y. Research of shoeprint image matching based on SIFT algorithm. J. Comput. Methods Sci. Eng. 2016, 16, 349–359. [Google Scholar] [CrossRef]
  94. Nibouche, O.; Bouridane, A.; Gueham, M.; Laadjel, M. Rotation invariant matching of partial shoeprints. In Proceedings of the 13th International Machine Vision and Image Processing Conference, Dublin, Ireland, 2–4 September 2009; pp. 94–98. [Google Scholar]
  95. Su, H.; Crookes, D.; Bouridane, A.; Gueham, M. Local Image Features for Shoeprint Image Retrieval. In Proceedings of the British Machine Vision Conference, Warwick, UK, 10–13 September 2007; pp. 1–10. [Google Scholar]
  96. Bouridane, A. Techniques for Automatic Shoeprint Classification; Springer: Boston, MA, USA, 2009. [Google Scholar]
  97. Gueham, M.; Bouridane, A.; Crookes, D. Automatic Recognition of Partial Shoeprints Based on Phase-Only Correlation. In Proceedings of the IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; pp. 441–444. [Google Scholar]
  98. Gueham, M.; Bouridane, A.; Crookes, D. Automatic classification of partial shoeprints using advanced correlation filters for use in forensic science. In Proceedings of the 19th IEEE International Conference on Pattern Recognition (ICPR), 8–11 December 2008; pp. 1–4. [Google Scholar]
  99. Chiu, H.-C.; Chen, C.-H.; Yang, W.-C.; Jiang, J. Automatic Full and Partial Shoeprint Retrieval System for Use in Forensic Investigations. In Proceedings of the 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Suzhou, China, 17–19 October 2019; pp. 1–6. [Google Scholar]
  100. Phillips, P.J.; Moon, H.; Rizvi, S.A.; Rauss, P.J. The FERET Evaluation Methodology for Face-Recognition Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1090–1104. [Google Scholar] [CrossRef]
Figure 1. Differences among the three kinds of shoeprint retrieval tasks: (a) S2S shoeprint retrieval. (b) S2R shoeprint retrieval. (c) S2P shoeprint retrieval.
Figure 2. The framework of the automatic shoeprint retrieval task.
Figure 3. The imaging process of the crime scene shoeprints.
Figure 4. The imaging process of the reference shoeprints.
Figure 5. The imaging process of the shoeprint scanner.
Figure 6. The framework of the feature extraction phase.
Figure 7. The framework of the similarity evaluation and ranking score computation phase.
Figure 8. Sample shoeprints in the FID-300 dataset: (a) The query shoeprints collected at crime scenes. (b) The corresponding reference shoeprints of the query shoeprints.
Figure 9. Sample shoeprints in the CS database: (a) Dust shoeprint. (b) Blood shoeprint. (c) Blood shoeprint enhanced by LCV. (d) High-quality shoeprint. (e) The reference shoeprint of (a). (f) The reference shoeprint of (bd).
Figure 10. Challenging properties of crime scene shoeprint images: (a) Shoeprints are hard to distinguish from the complicated background. (b) Shoeprints are randomly incomplete. (c,d) Deformations of the shoeprint can occur.
Table 1. Summary of the published shoeprint retrieval methods.

| Methods | Features | Matching Methods | Performance | Dataset |
|---|---|---|---|---|
| Kortylewski et al., 2014 [28] | Periodical Texture | Defined Similarity Measure | 27.1%@1% | #S:170, #R:1175 |
| Wang et al., 2014 [72] | Fourier–Mellin | Correlation Coefficient | 87.5%@2% | #S:72, #S:10096 |
| Almaadeed et al., 2015 [35] | Harris + Hessian + SIFT | RANSAC | 68.5%@10 | #R:400, #R:400 |
| Kortylewski et al., 2016 [29] | Original Pixels | Probabilistic Model | 71%@20% | #S:300, #R:1175 |
| Wang et al., 2017 [73] | Fourier–Mellin | Manifold Ranking | 93.5%@2% | #S:72, #S:10096 |
| Richetelli et al., 2017 [27] | SIFT | RANSAC | 97%@5 | #R:272, #R:100 |
| Alizadeh et al., 2017 [31] | Original Pixels | Sparse Representation for Classification | 99.5%@1 | #R:190, #R:190 |
| Kong et al., 2017 [25] | Deep Features | Normalized Cross-Correlation | 92.5%@20% | #S:300, #R:1175 |
| Kong et al., 2019 [26] | Deep Features | Normalized Cross-Correlation | 94%@20% | #S:300, #R:1175 |
| Ma et al., 2019 [32] | Deep Features | Deep Neural Networks | 89.8%@10% | #S:300, #R:1175 |
| Wu et al., 2019 [74] | Fourier–Mellin | Manifold Ranking | 96.6%@2% | #S:72, #S:10096 |
| Wu et al., 2019 [75] | Fourier–Mellin | Manifold Ranking | 92.5%@2% | #S:72, #S:10096 |
| Alizadeh et al., 2021 [78] | Local Binary Pattern | Chi-squared Test | 97.6%@1 | #R:190, #R:760 |
#S: the number of crime scene shoeprints. #R: the number of reference shoeprints. x%@y means that the cumulative match score at the top y of the ranking list is x percent.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
