Automatic Retrieval of Shoeprints Using Modified Multi-Block Local Binary Pattern

Abstract: A shoeprint is a valuable clue found at a crime scene and plays a significant role in forensic investigations. In this paper, in order to preserve the local features of a shoeprint image and place each pattern within a block, a novel automatic method is proposed, referred to as Modified Multi-Block Local Binary Pattern (MMB-LBP). In this method, shoeprint images are divided into blocks according to two different models. The histograms of all blocks of the first and second models are computed separately and stored in the first and second feature matrices, respectively. The performance of the proposed method was evaluated by comparison with state-of-the-art methods. The evaluation criteria are the successful retrieval rates obtained using the best match score at rank one and the cumulative match score for the first five matches. The comparison results indicate that the proposed method outperforms the other methods in retrieving both complete and incomplete shoeprints: it retrieved 97.63% of complete shoeprints, 96.5% of incomplete toe shoeprints, and 91.18% of incomplete heel shoeprints. Moreover, the experiments show that, compared with the other methods, the proposed method is significantly more resistant to rotation, salt-and-pepper noise, and Gaussian white noise distortions.


Introduction
A crime scene is a place where a criminal commits unlawful actions; such a place contains valuable evidence, signs, and indications that can be used to investigate the crime. Indeed, a crime scene is the source of facts and information related to the crime and the criminal. When the evidence and signs are properly and systematically investigated, they can help investigators identify the criminal(s). Edmond Locard's exchange principle proposes that perpetrators almost always leave trace evidence behind at the crime scene and, at the same time, carry something away from it [1]. Given the importance and the decisive role of the signs and evidence left at the crime scene, they can be used to establish that the crime occurred and to prove the innocence of people who have been unduly accused.
The most common types of evidence (or clues) left at a crime scene include fingerprints, blood, hair, and shoeprints. Fingerprints, blood, and hair have been studied extensively, and effective methods have been proposed for identifying them [2][3][4][5]. Moreover, in recent years, criminals have tried to avoid leaving trace evidence that could be used by forensic experts to identify them (e.g., covering their faces and wearing gloves to avoid leaving fingerprints). However, criminals are usually unaware that their shoeprints can serve as trace evidence. Studies indicate that shoe marks are as frequent as fingerprints at crime scenes [6]; moreover, 35% of crime scenes include shoeprints [7]. As a case in point, in 1993, a database was produced in the Netherlands that included 14,000 shoeprint samples belonging to three categories: suspect shoeprints, shoeprints left at crime scenes, and shoeprints available in shoe stores. Geradts et al. proposed an algorithm for classifying this database [53]. The algorithm automatically classifies the patterns of the external parts of shoe soles: it first divides shoe profiles into separate profiles and computes Fourier features for each profile, and then selects the best Fourier features for classification by neural networks. In [15,16], fractals were used to describe shoeprints and arithmetic square noise was used to determine the final matching. Using the Fourier transform, De Chazal et al. introduced a method for automatic classification of shoeprint images on a dataset of 476 complete images [17]. The images belonged to 140 groups, each containing two or more shoeprint samples.
Owing to the translation and rotation invariance of the Fourier transform, this method remained efficient under translation and rotation changes on larger datasets. The images used in this method were of high quality, and noisy images were not considered. Pavlou et al. proposed an automatic classification system for shoe patterns in [27], based on the local shape structure of the patterns. The features and descriptors of the selected patterns were affine-invariant and consequently resistant to rotation and relative translation. The abundance and locality of these features allow accurate detection and identification of incomplete shoeprint traces. Gueham et al. introduced a technique in [25] for the automatic identification of shoeprint images. They used the Mellin transform to produce features invariant to translation, rotation, and scale. First, the fast Fourier transform of the image is obtained, and the high and low frequencies are filtered from the result. Next, the result is mapped to log-polar coordinates, and another Fourier transform is computed by means of the fast Fourier transform. In this method, two-dimensional correlation is used as the similarity criterion. According to their report, the algorithm performed well under scale distortion and noise conditions; moreover, it had remarkable capability in identifying partial shoeprints.
Using Hu moment invariants, AlGarni et al. developed an automatic method in [24] for matching shoeprints; the features and descriptors of the selected pattern were affine-invariant and resistant to rotation and relative translation. Dardi et al. introduced a descriptor based on the Mahalanobis distance for retrieving shoeprint traces [37]. This descriptor operates on geometrical pixel structures: a block distance matrix is obtained for each shoeprint from the image intensity values and variance; this distance is referred to as the Mahalanobis distance. Then, the descriptor computes the power spectral density. In this method, the correlation coefficient was used for matching the queried image with the database images; as a result, the database images are ranked by their degree of similarity to the queried image and returned.
In [28], a method for automatic identification of shoeprints using the directional properties of shoe sole patterns was proposed. In this method, co-occurrence matrices, the Fourier transform, and a directional matrix were used to extract features that match the direction of shoeprint patterns. In [29], Nibouche et al. introduced a method for retrieving rotated incomplete shoeprints using a combination of local interest points and the Scale Invariant Feature Transform (SIFT). In this method, the interest points of the shoeprint image were detected using the Harris-Laplacian feature detector, and the produced features were then encoded with SIFT. In the matching stage, the random sample consensus method was used to estimate the transformation model and to produce inliers whose total point-to-point Euclidean distance falls under a threshold.
In [18], Zhang et al. proposed an automatic retrieval system based on the information of the edges of shoeprint patterns, in which the directions of the edges present in shoeprint shapes are described in a histogram. First, the Canny edge detector was used to extract the information of the shoeprint edges. Then, the extracted information was quantized and the histogram of the shoeprint image was produced. In [41], Tang et al. proposed a shoeprint image retrieval method based on clustering. In this method, to enhance retrieval speed, the reference database is clustered based on the sole patterns of the shoeprints. The geometrical shapes present in shoeprints, such as segments, circles, and ovals, are used as shoeprint features; these features are then structurally clustered into attributed relational graphs (ARGs). In [32], a local adaptation of the histogram of the Radon transform was used. That is, the shoeprint image is decomposed into connected components and local descriptors. Then, to find the best local matching between connected components, the average local similarity degree was used as the similarity between two images.
In [44], Kong et al. proposed a shoeprint identification method in which Gabor textural features and Zernike moments were extracted from the shoeprint image, and the degree of similarity between images was measured based on these features. In [33], Wei et al. used SIFT for detecting noisy and incomplete shoeprints. Different scale-spaces were used in this method for detecting local maxima; the local maximum features were then used for matching shoeprint images. In [34], Wei et al. used core-point alignment for retrieving shoeprints. In this method, the most reliable contours approximating the left and right margins of the shoeprint image are selected. Next, the concave points along the left and right margins of the image are determined as the core points of the image. The shoeprint image is then divided into circular sections, the moments of each section are computed, and the Euclidean distance is measured to determine the similarity between two shoeprint images.
Kortylewski et al. [45] introduced an unsupervised shoeprint retrieval algorithm for noisy environments. In this algorithm, local rotation in the image is measured by fitting a periodic pattern, and the rotated part of the image is normalized accordingly. Then, the local Fourier transform of that section of the image is computed. Next, the patterns are segmented, and the frequency-domain features at each position where the periodic pattern is fixed are used for matching shoeprints. Matching is carried out by comparing the Fourier transforms of the periodic patterns. The performance of this method indicates that it is resistant to noise, but it can retrieve shoeprints only when they contain periodic patterns.
Using the Gabor transform, Patil et al. proposed an automatic shoeprint matching method in [30] that is invariant to rotation and brightness. To extract features of shoeprint images, they applied Gabor filters at eight different angles. Among the eight Gabor filter responses, the four images with the highest energy are selected. Then, these four images are divided into 16 × 16-pixel blocks, and their average variance is selected as the feature vector. In [35], Almaadeed et al. developed a method for retrieving incomplete shoeprints using multiple point-of-interest detectors and SIFT descriptors. To make the method scale-invariant and sensitive to blob-like structures, the Harris and Hessian multi-scale detectors were used; to make it rotation-invariant, the SIFT descriptor was used for describing the detected points. Finally, by combining the advantages of the two detectors, the queried image is matched.
In [48], shoeprint retrieval was carried out based on the similarity between hybrid features composed of global and local features. The ranking procedure includes an opinion score assigned by a forensic expert, which is essentially a relevancy score of the shoeprint with respect to the query. In [50], using a blocking sparse representation technique, the queried image was divided into two blocks and two sparse representations were extracted with Wright's sparse representation. Fourier transforms, Gabor transforms, Hessian-Harris multi-scale detectors, and SIFT descriptors were applied to extract the local and global features of the shoeprint image, its rotation, and its corners, respectively.

Local Binary Pattern
Local Binary Pattern (LBP) is a well-known feature extraction and texture classification method [9]. Its good properties, such as discriminative power, invariance to uniform gray-scale changes, implementation simplicity, and computational speed, have led to its extensive use. The main rationale for using LBP to describe shoeprints is that shoeprint images are composed of a combination of several sub-patterns that this method can describe well.
The performance of LBP was first examined on the eight neighbors of a pixel in the form of a 3 × 3 square operator. This operator works as follows: the central pixel is taken as the threshold value and is compared with the values of the eight neighboring pixels to produce an 8-bit code. If the value of a neighboring pixel is greater than or equal to the threshold, it is replaced with 1 in the binary code; otherwise, it is replaced with 0. Then, each digit of the binary code is multiplied by its positional weight, and the sum is taken as the LBP value for that pixel. By applying this 3 × 3 operator to all image pixels, an image of the same size is produced. Let (x, y) be the coordinates of a pixel of the input image; then LBP_P,R is computed as in (1). Figure 1 shows an example of the LBP operator.

LBP_P,R(x, y) = ∑_{i=0}^{P−1} s(g_i − g_c) × 2^i,  with s(z) = 1 if z ≥ 0 and s(z) = 0 otherwise, (1)

where P refers to the number of neighboring pixels, equal to 8 here; R denotes the distance between the central pixel and the neighboring pixels, equal to 1 here; g_c stands for the value of pixel (x, y); and g_i denotes its ith neighboring pixel.
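As an illustration, the 3 × 3 operator described above can be sketched in Python. This is a minimal didactic sketch, not the authors' implementation; border pixels, which lack a full neighborhood, are simply left at zero here.

```python
import numpy as np

def lbp_3x3(image):
    """Basic LBP with P = 8 neighbors at radius R = 1 (3 x 3 square operator).

    Each neighbor >= center contributes a 1-bit; the 8-bit code is the
    weighted sum of those bits, as in Equation (1).
    """
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8 neighbors, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            code = 0
            for i, (dy, dx) in enumerate(offsets):
                if img[y + dy, x + dx] >= center:
                    code |= 1 << i  # bit i carries weight 2^i
            out[y, x] = code
    return out
```

On a constant patch every neighbor equals the center, so all eight bits are set and the interior code is 255; a center strictly brighter than all of its neighbors yields code 0.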


Multi-Block Local Binary Pattern
Multi-Block Local Binary Pattern (MB-LBP) is a developed variant of LBP that has been proposed in different forms [54][55][56]. In this model, to extract image features, the image is first divided into n areas, i.e., R_0, R_1, . . . , R_{n−1}. Then, the LBP operator is applied to each area separately. Next, the histograms of the n areas are computed and concatenated into a feature vector. Figure 2 shows the result of applying MB-LBP to a sample shoeprint image.
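The concatenation of per-region histograms can be sketched as follows. This is a minimal sketch assuming the LBP-coded image is already available and the regions form a regular `n_rows` × `n_cols` grid; the function name and grid parameters are ours.

```python
import numpy as np

def mb_lbp_features(lbp_image, n_rows, n_cols):
    """Divide an LBP-coded image into n_rows x n_cols regions R_0..R_{n-1}
    and concatenate the 256-bin histogram of each region into one vector."""
    img = np.asarray(lbp_image)
    h, w = img.shape
    bh, bw = h // n_rows, w // n_cols  # region height and width
    feats = []
    for r in range(n_rows):
        for c in range(n_cols):
            block = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats)
```

For an image divisible by the grid, the resulting vector has n_rows × n_cols × 256 entries and its sum equals the number of pixels covered.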

The Proposed Technique for Retrieving Shoeprints
The overview of the proposed method for retrieving shoeprints is given in Figure 3. The method includes three stages: preprocessing, feature extraction, and shoeprint image matching. In the first stage, preprocessing operations are conducted on the shoeprint images using certain techniques so that each image is prepared for the feature extraction stage; these operations include noise removal, rotation, and rescaling of the shoeprint images. In the feature extraction stage, features of the preprocessed image are extracted using the proposed MMB-LBP method. In the matching stage, the features extracted from the queried shoeprint image are compared with the features of the reference images via the Chi-square test; the results of the comparison are then ranked so that the shoeprint image with the highest similarity appears at the top of the list. Finally, the correct retrieval rate is computed using the best match score at rank one and the cumulative match score for the first five matches.
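The matching and ranking step can be sketched as follows. The Chi-square distance and ascending-distance ranking are standard; the function names and the flat feature-vector representation are our assumptions.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (smaller = more similar).

    eps guards against division by zero in empty bins.
    """
    h1 = np.asarray(h1, dtype=np.float64)
    h2 = np.asarray(h2, dtype=np.float64)
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def rank_matches(query_feat, reference_feats):
    """Rank reference images by ascending Chi-square distance to the query.

    Index 0 of the returned order is the best match (rank one).
    """
    dists = [chi_square_distance(query_feat, ref) for ref in reference_feats]
    return np.argsort(dists)
```

The best match score at rank one then asks whether `rank_matches(...)[0]` is the correct reference; the cumulative match score for the first five matches asks whether the correct reference appears among the first five indices.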

Preprocessing
The preprocessing stage is vital for shoeprint images before the feature extraction stage. Figure 4 demonstrates the steps and procedures conducted in the preprocessing stage.


Noise Elimination
The first task in the preprocessing stage is to remove the noise introduced when recording shoeprints and scanning the images. Hence, the color shoeprint image is converted into a grey image, and the noise in the shoeprint image is removed using a median filter with a 5-pixel neighborhood. However, larger noise spots may remain in the shoeprint images that are not eliminated by this filter. Thus, the Otsu method is used for thresholding the shoeprint image. Next, the inverse of the output image produced by Otsu thresholding is convolved with the grey image. Finally, the pixels forming the shoeprint are isolated from the image background. Figure 4a shows the result of noise elimination from a shoeprint image. Note that applying the median filter before Otsu thresholding reduces isolated pixels in the image and enhances the values of the pixels neighboring the shoeprint; consequently, the image segmentation by Otsu thresholding is improved.
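The noise-elimination steps above can be sketched as follows. This is an illustrative sketch, not the authors' code: a grayscale uint8 input and a 5 × 5 median window are assumed, Otsu's threshold is computed by hand, and a pixel-wise mask of the inverted Otsu output stands in for the combination with the grey image.

```python
import numpy as np
from scipy.ndimage import median_filter

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    probs = hist / gray.size
    cum_w = np.cumsum(probs)                      # class-0 weight up to t
    cum_mean = np.cumsum(probs * np.arange(256))  # cumulative mean
    global_mean = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_w[t], 1.0 - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (global_mean - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def remove_noise(gray):
    """Median filter first, then keep only the dark (shoeprint) pixels
    selected by the inverted Otsu segmentation."""
    smoothed = median_filter(gray, size=5)
    t = otsu_threshold(smoothed)
    mask = (smoothed <= t).astype(gray.dtype)  # inverted binary: print = 1
    return gray * mask
```

On a synthetic image with a dark print on a bright background, the threshold falls between the two modes, so background pixels are zeroed while print pixels survive.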

Rotating Image
Input images of traces and shoeprints cannot be expected to lie at a specific angle. Therefore, to make the proposed method invariant to rotation, all shoeprint images are rotated before feature extraction so that they are in an identical orientation for the investigation. Hence, after the noise elimination stage, the shoeprint image is rotated to a vertical orientation. Some parts of the shoeprint might be lost from the margins of the image during rotation; hence, empty space is added to the sides of the original image to prevent the loss of sections of the image. Figure 4b shows the result of adding space to the sides of the shoeprint image.
The Karhunen-Loeve method [57] is used for automatic rotation. This method rotates the image automatically using the concepts of eigenvectors and the center of gravity. That is, the center of gravity of the image is first computed from the binary image. Then, the covariance matrix between the row and column coordinates of all pixels belonging to the shoeprint is obtained according to (3); the result indicates how the two dimensions vary in relation to one another.

where x_c and y_c denote the coordinates of the center of gravity of the shoeprint image. Based on this matrix, the eigenvalues and eigenvectors are computed. Then, the sine and cosine of the angle of the eigenvector corresponding to the largest eigenvalue are obtained through (7) and (8); this is the angle required for rotating the image. The rotation matrix is formed from the obtained angle. Then, the positions of the pixels in the new image are obtained using the rotation matrix, with the origin at the center of gravity, through (9) and (10). Figure 4c shows the rotated shoeprint image; in the obtained image, the eigenvector corresponding to the largest eigenvalue is perpendicular to the x-axis.
where x_1 and y_1 denote the pixel coordinates in the input image, and x_2 and y_2 refer to the corresponding pixel coordinates in the rotated output image.
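The angle-estimation part of the Karhunen-Loeve step can be sketched as follows: the centroid and the covariance of the foreground pixel coordinates give the principal eigenvector, whose angle drives the rotation. The rotation itself (Equations (9) and (10)) is omitted; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def kl_rotation_angle(binary):
    """Angle (degrees) of the principal axis of the shoeprint pixels.

    Follows the Karhunen-Loeve idea: centroid plus covariance of pixel
    coordinates, then the eigenvector of the largest eigenvalue.
    """
    ys, xs = np.nonzero(binary)        # coordinates of shoeprint pixels
    xc, yc = xs.mean(), ys.mean()      # center of gravity
    coords = np.stack([xs - xc, ys - yc])
    cov = coords @ coords.T / xs.size  # 2x2 covariance of (x, y)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]   # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0]))
```

For a diagonal line of pixels the principal axis lies at 45 degrees (modulo 180, since an eigenvector's sign is arbitrary).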

Scale Change
In this step, the scale of all shoeprint images is changed to a fixed 256 × 128 format. The shoeprint in the image is enclosed in a frame and cut out from the whole image. To do so, the column indices of the shoeprint pixels closest to the left and right edges of the image are measured; the width of the frame is obtained from their difference. Next, to obtain the frame length, the row indices of the shoeprint pixels nearest to the upper and lower edges of the image are measured. With these coordinates, the shoeprint can be separated from the image: the extra margins are cut away and only the shoeprint framework is preserved. Figure 4d shows the result of removing the margins of the shoeprint image. Now the shoeprint image can be rescaled to 256 × 128. To do so, the image dimensions are measured; if the image length is greater than twice its width, the image is resized based on its length; otherwise, it is resized based on its width. That is, when resizing by length, the shoeprint image dimensions are scaled so that the image length becomes 256 pixels vertically, and its width is changed proportionally. On the other hand, when resizing by width, the shoeprint image dimensions are scaled so that the image width becomes 128 pixels horizontally, and the length is changed proportionally. Then, the obtained image is placed on the 256 × 128 frame.
It should be noted that the axis dividing the columns of the obtained image into two equal parts must coincide with the axis dividing the columns of the 256 × 128 frame. As a result, the shoeprint image stands vertically in the frame of 256 × 128 pixels. In this way, all of the images are transformed into an identical frame, which makes the proposed method independent of scale. Figure 4e illustrates the result of changing the scale of the shoeprint image.
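The rescaling rule above can be sketched as follows. This is a sketch under stated assumptions, not the authors' code: nearest-neighbor resampling keeps it dependency-free, and centering the print in the frame is our choice (the paper only fixes the column axis).

```python
import numpy as np

def fit_to_frame(cropped, frame_h=256, frame_w=128):
    """Rescale a cropped shoeprint into the fixed 256 x 128 frame.

    If the print's length exceeds twice its width, scale by length to
    256 rows; otherwise scale by width to 128 columns.
    """
    h, w = cropped.shape
    scale = frame_h / h if h > 2 * w else frame_w / w
    new_h = min(frame_h, max(1, round(h * scale)))
    new_w = min(frame_w, max(1, round(w * scale)))
    # Nearest-neighbor resampling via index lookup.
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = cropped[np.ix_(rows, cols)]
    frame = np.zeros((frame_h, frame_w), dtype=cropped.dtype)
    top = (frame_h - new_h) // 2
    left = (frame_w - new_w) // 2  # center so the column axes coincide
    frame[top:top + new_h, left:left + new_w] = resized
    return frame
```

For a 300 × 100 crop (length more than twice the width), the print is scaled to 256 rows and about 85 columns, then centered horizontally in the 256 × 128 frame.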
After the vertical rotation of the image, the shoeprint heel may point upwards, whereas our goal is for the shoeprint toe and paw to point upwards. Hence, at this stage, it is assumed that in shoeprint patterns the contact surface of the upper half of the shoe is larger than that of the lower half. In other words, in the binary image of the shoeprint, the density of black pixels in the upper half should exceed that of the lower half. Therefore, after the shoeprint image is made vertical, these densities are measured; if the density in the upper half of the image is less than that in the lower half, the image is rotated by 180 degrees so that the heel lies in the lower part of the image. However, examining the rotation of the shoeprint images shows that this assumption does not always hold; in about 15% of cases the opposite is observed, and under such circumstances the images are rotated manually so that the shoeprint heel points downwards and the toe upwards.
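The density heuristic above can be sketched directly; the function name and the binary foreground convention (nonzero = shoeprint pixel) are our assumptions.

```python
import numpy as np

def toe_up(binary):
    """Ensure the toe points upward: if the upper half of the binary print
    has fewer foreground pixels than the lower half, rotate 180 degrees.

    (The paper notes this heuristic fails in roughly 15% of cases,
    which are then corrected manually.)
    """
    h = binary.shape[0] // 2
    upper = np.count_nonzero(binary[:h])
    lower = np.count_nonzero(binary[h:])
    if upper < lower:
        return np.rot90(binary, 2)  # 180-degree rotation
    return binary
```

An image whose foreground sits entirely in the lower half is flipped; an already toe-up image is returned unchanged.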

Extracting Features
In machine vision, the features extracted from images should express as many of the images' characteristics as possible. Therefore, to enhance the identification accuracy of LBP so that the extracted features better represent the shoeprint image, the local features should be preserved when extracting image histograms. For this purpose, the Modified Multi-Block Local Binary Pattern method is proposed.

Modified Multi-Block Local Binary Pattern
In the Modified Multi-Block Local Binary Pattern (MMB-LBP) method, shoeprint images are divided into blocks, because without dividing them there would be only one histogram for the entire image. The major weakness of a single histogram is its lack of sensitivity, across its bins, to the location of shoeprint image features. On the other hand, dividing shoeprint images into blocks before applying the LBP operator produces false features arising from the dark lines at the block margins and destroys the patterns located at those margins. Hence, to eliminate this weakness and enhance sensitivity, the LBP operator is first applied to the entire image, producing an image of the same size. Then, by dividing the result of the LBP operator into blocks, the histograms of those blocks are extracted. Consequently, feature loss and false features at the block margins are avoided. Figure 5 shows the result of applying LBP to a shoeprint image.
Since blocking is applied automatically to a shoeprint image without any human intervention, shoeprint patterns are likely to be fragmented. That is, if a block boundary passes through one or several shoeprint patterns, part of a pattern will lie in one block and the remaining part in the neighboring block or blocks. In other words, the part of a pattern located in one block will be wrongly treated as a complete pattern in that block. Hence, when a pattern spans two or more neighboring blocks, it will not be accurately identified, which has a destructive impact when shoeprint images are matched.
Hence, to accurately extract the features of patterns located at block margins and to reduce the destructive impact of pattern fragments when matching shoeprint images, the shoeprint image in the proposed method is blocked according to two different models. That is, if a pattern lies at the margin of a block in one blocking model, it will be completely located in the middle of a block in the other blocking model.
Hence, as the features of such a pattern are completely contained in a block of the new blocking model, the destructive impact of the pattern fragments is avoided.
In shoeprint images, blocking begins from two different positions and continues without overlap until the end of the image. In the first blocking model, blocking begins at position 1 × 1 in the upper-left corner of the image and proceeds, in 32 × 32-pixel blocks, to position 256 × 128 in the lower-right corner; hence, the region from position 1 × 1 to position 32 × 32 is recognized as the first block of the image, and so on, so that eight blocks along the rows and four blocks along the columns are created. The second blocking model is shifted by half a block, beginning at position 17 × 17, and continues in the same way, so that seven blocks along the rows and three blocks along the columns are created. Finally, a total of 53 blocks is produced from the shoeprint image according to the two models. Figure 6 depicts the two blocking models of a shoeprint image; the blue lines in the figure are only for the sake of illustration.
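The two blocking models can be expressed as lists of block coordinates. This is an illustrative sketch; the half-block (16-pixel) shift of the second model is inferred from the offsets appearing in Equations (11) and (12).

```python
SB = 32  # block size in pixels

def blocking_models(height=256, width=128, sb=SB):
    """Return the (row, col) slices of both blocking models.

    Model 1: 8 x 4 = 32 non-overlapping blocks from the top-left corner.
    Model 2: 7 x 3 = 21 blocks shifted by half a block (16 pixels), so a
    pattern cut at a model-1 boundary falls inside a model-2 block.
    Together: 53 blocks.
    """
    model1 = [(slice(r * sb, (r + 1) * sb), slice(c * sb, (c + 1) * sb))
              for r in range(height // sb) for c in range(width // sb)]
    half = sb // 2
    model2 = [(slice(half + r * sb, half + (r + 1) * sb),
               slice(half + c * sb, half + (c + 1) * sb))
              for r in range(height // sb - 1) for c in range(width // sb - 1)]
    return model1, model2
```

For the 256 × 128 frame this yields 32 + 21 = 53 blocks, with the first model-2 block covering rows and columns 16 through 47 (0-based).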
Next, for extracting image features, the histograms of all blocks of the first model are separately measured and stored in the first feature matrix. Then, the histograms of all of the blocks of the second blocking model are stored in the second feature matrix. The image features thus comprise two feature matrices; the first matrix has 8 × 4 elements, where each element consists of a histogram with 256 values. The second matrix has 7 × 3 elements, where each element likewise consists of a histogram with 256 values. Figure 7a illustrates an overview of the corresponding histograms based on the first blocking model, and Figure 7b shows the overview of the corresponding histograms based on the second blocking model for the shoeprint image. Figure 8 demonstrates the distinction between the texture features of two corresponding blocks from two different shoeprint images. As shown in this figure, it is obvious that the two corresponding blocks from two different images have dissimilar histograms. Indeed, these histograms are used for matching the queried shoeprint image with the reference images. Hence, in the proposed method, the histograms of each area of the shoeprint image are produced according to (11) and (12).
FM_{1,i,j} = Hist(LBP(i × SB + 1 : (i + 1) × SB, j × SB + 1 : (j + 1) × SB)), i = 0, 1, . . . , 7, j = 0, 1, . . . , 3 (11)

FM_{2,i,j} = Hist(LBP(i × SB + 17 : (i + 1) × SB + 16, j × SB + 17 : (j + 1) × SB + 16)), i = 0, 1, . . . , 6, j = 0, 1, . . . , 2 (12)

where i and j refer to the row and column indices of the blocks, SB denotes the size of the blocks and is equal to 32, and FM_1 and FM_2, respectively, refer to the first feature matrix and the second feature matrix.
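A minimal sketch of this feature-extraction stage is given below, assuming a basic 3 × 3 LBP operator (the paper's exact LBP variant and neighbor ordering may differ); the function names are illustrative:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel gets an 8-bit code from
    thresholding its eight neighbors against the center value."""
    g = gray.astype(np.int32)
    out = np.zeros_like(g)
    # neighbor offsets (dr, dc) in clockwise order, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    for bit, (dr, dc) in enumerate(offsets):
        nb = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
        code |= (nb >= center).astype(np.int32) << bit
    out[1:-1, 1:-1] = code
    return out

def feature_matrices(gray, sb=32):
    """Build FM1 (8x4 blocks) and FM2 (7x3 blocks, 16-px offset):
    one 256-bin histogram of the LBP image per block."""
    lbp = lbp_image(gray)
    def hists(offset, n_r, n_c):
        fm = np.zeros((n_r, n_c, 256))
        for i in range(n_r):
            for j in range(n_c):
                blk = lbp[offset + i * sb:offset + (i + 1) * sb,
                          offset + j * sb:offset + (j + 1) * sb]
                fm[i, j] = np.bincount(blk.ravel(), minlength=256)
        return fm
    return hists(0, 8, 4), hists(16, 7, 3)

img = np.random.randint(0, 256, (256, 128))
fm1, fm2 = feature_matrices(img)
print(fm1.shape, fm2.shape)  # (8, 4, 256) (7, 3, 256)
```

Note that the LBP is computed once over the whole image and the blocking is applied afterwards, which is the key difference between the MMB-LBP and blocking before the LBP.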

A Shoeprint Image Matching
The feature matrices that were extracted in the feature extraction stage are applied in this stage for matching. As mentioned above, shoeprint images were divided into different blocks, and each of the blocks or areas is of a different degree of significance. Hence, for distinguishing high-significance areas of the shoeprint image from low-significance areas with regard to pattern density, the W_1 and W_2 weight matrices are introduced as given in Table 2. As a result, while comparing histograms, similarities and differences in high-significance areas of the images are better highlighted. In other words, similarity in high-significance areas of the images indicates the similarity of shoeprints; likewise, difference in high-significance areas of the images indicates the lack of similarity between shoeprints. Hence, the Chi-square test in (13) is used as the similarity criterion for obtaining the similarity between corresponding areas of the queried image and the reference image. Using this method, blocks of the queried image are compared with those of the reference image; then, the results of the block comparisons are summed to obtain the degree of similarity between the queried image and the reference image. Hence, in case this sum for an image is close to zero, it is interpreted as similarity between the queried image and the database image. If the feature matrices for the queried image and the reference image are called Q and P, respectively, the Chi-square test is defined as follows:

χ²(Q, P) = Σ_i Σ_j w_i (Q_{i,j} − P_{i,j})² / (Q_{i,j} + P_{i,j}) (13)

where i denotes the histogram index corresponding to the area, j denotes the index among the histogram bins, and w_i stands for the area significance coefficient.
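The weighted Chi-square comparison described above can be sketched as follows. This is an illustrative implementation: the feature-matrix shapes follow the text, while the `eps` term is an added safeguard against division by zero in empty bins, not part of the paper:

```python
import numpy as np

def chi_square_distance(Q, P, w, eps=1e-10):
    """Weighted Chi-square distance between two feature matrices.
    Q, P: (n_rows, n_cols, 256) per-block histograms; w: (n_rows, n_cols)
    area significance coefficients. Smaller values mean more similar."""
    num = (Q - P) ** 2
    den = Q + P + eps                     # eps avoids 0/0 for empty bins
    per_block = (num / den).sum(axis=2)   # chi-square per block
    return float((w * per_block).sum())   # weighted sum over all blocks

# Identical feature matrices give distance 0.
Q = np.random.rand(8, 4, 256)
w = np.ones((8, 4))
print(chi_square_distance(Q, Q, w))  # 0.0
```

In retrieval, this distance would be computed between the queried image and every reference image (for both feature matrices, with W_1 and W_2), and the references would be ranked by the total.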

Results and Discussion
In this study, the database of participants from two cities, namely Miandoab in Iran and Trabzon in Turkey, which is referred to as the Iranian-Turkish Shoeprints Database (ITSP DB, available at https://ceng2.ktu.edu.tr/~itspdb, accessed on: 14 December 2020), is used. The database includes five separate images for each shoe and, in total, 950 shoeprint images. Here, shoes of different sizes were used. The point taken into consideration while recording samples was that the recorded samples of a given shoe were not identical with one another, because a shoe of one type leaves a partially different effect and impression in each contact with the surface, especially at its margins. Figure 9 shows samples of the shoeprint images from ITSP DB. The complete description of this database can be found in [50].
The evaluations were carried out on the database with 190 categories, where each category included five cases. One sample from each shoeprint category was selected and registered in the database. Then, the remaining four samples from each category, i.e., 760 shoeprint images, were used as queried test images. In other words, in all of the evaluations, 20% of shoeprint images were allocated to training data and 80% of shoeprint images were dedicated to testing data. To include partial and incomplete shoeprints in the evaluations, the complete images were divided into two sections, i.e., heel and toe. Figure 10 illustrates the heel and toe incomplete shoeprints that were obtained from complete shoeprints.
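The heel/toe partition can be sketched as a simple horizontal split. This is an assumption for illustration; the paper does not state the exact cut point, and halving the 256-row image is used here:

```python
import numpy as np

def split_print(gray):
    """Split a full shoeprint image (top = toe, bottom = heel)
    into two halves along the vertical axis."""
    mid = gray.shape[0] // 2
    return gray[:mid], gray[mid:]   # toe half, heel half

img = np.zeros((256, 128))
toe, heel = split_print(img)
print(toe.shape, heel.shape)  # (128, 128) (128, 128)
```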
For a better evaluation and investigation of the proposed method, the queried images were examined in the presence of rotation distortions, salt and pepper noise, and Gaussian white noise. In the following sections, the performance of the proposed method is compared with those of the LBP, MB-LBP, Patil [30], and Almaadeed [35] methods in terms of shoeprint retrieval.
Figure 9. (a) Samples of the recorded shoeprint images from the participants; (b) samples with the unwanted margins removed.

Figure 10. Partial heel and toe images from the complete shoeprints.
The successful retrieval rate of the proposed method is measured by the best match score at rank one and the cumulative match score for the first five matches:

Cumulative match score = (the number of accurately retrieved images / total query images) × 100 (14)
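Equation (14) can be computed with a small helper; the function name and the sample rank list below are illustrative, not from the paper:

```python
def cumulative_match_score(rank_of_correct, rank=5):
    """Cumulative match score per Eq. (14): the percentage of queries
    whose correct reference image appears within the first `rank`
    matches. `rank_of_correct` lists, per query, the 1-based rank at
    which the correct reference was retrieved."""
    hits = sum(1 for r in rank_of_correct if r <= rank)
    return 100.0 * hits / len(rank_of_correct)

ranks = [1, 1, 2, 7, 1, 3]                    # hypothetical retrieval ranks
print(cumulative_match_score(ranks, rank=1))  # 50.0 (3 of 6 at rank one)
print(cumulative_match_score(ranks, rank=5))  # ≈ 83.33 (5 of 6 in top five)
```

With `rank=1` this reduces to the best match score at rank one used throughout the evaluations.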

Performance of LBP versus MB-LBP
The first test was carried out for evaluating and comparing the performance of LBP with MB-LBP with different sizes of blocks. For substantiating and demonstrating that the MB-LBP performs better than LBP, both methods were implemented, and the cumulative match scores of their performances were compared as depicted in Figure 11. As shown, the MB-LBP had an 81% retrieval score in all of the sizes, i.e., 8 × 8, 16 × 16, 32 × 32, 64 × 64, and 128 × 128 and its performance in retrieval shoeprint images was better than that of the LBP. The reason for the better performance of the MB-LBP is that it extracts local features of shoeprint images. From Figure 11, the comparison results indicate that whatever the dimensions of blocks, the MB-LBP has a higher cumulative match score than the LBP. Therefore, it can be argued that extracting several histograms for one shoeprint image is better than extracting one histogram for the entire shoeprint image.

Performance of the MB-LBP versus the MMB-LBP
The purpose of this test was to investigate the performance of the MB-LBP versus the proposed MMB-LBP. For indicating that blocking the shoeprint image for extracting histograms after the application of the LBP performs better than doing it before the application of the LBP, both methods were implemented with different sizes of blocks. The results are shown in Figure 12, which demonstrates that the shoeprint retrieval score in the image-blocking model applied after the LBP is higher than in the one applied before the LBP. The difference in performance is attributed to the absence of false features resulting from dark lines at the margins of blocks. Based on this result, it can be argued that the problem of the placement of one part of a shoeprint pattern in one block and another part in the next or neighboring blocks has been sorted out.

Figure 11. Cumulative match score of the LBP vs. the MB-LBP.

As given in Figure 11, the best shoeprint retrieval result of the MB-LBP was 96%, which was related to the 32 × 32 block size. On the other hand, the best shoeprint retrieval score of the MMB-LBP was 97.63%, also obtained at the 32 × 32 block size. This difference in performance indicates the higher performance of the proposed method; that is, appropriate shoeprint blocking leads to higher retrieval accuracy than that of the MB-LBP. In fact, when the other block sizes are investigated, it is found that the shoeprint retrieval score at the 8 × 8 size is 83% for the MB-LBP, whereas for the MMB-LBP the retrieval score is 90%. Moreover, a 1% increase at the 16 × 16 block size and a 2% increase at the 64 × 64 block size are observed.
The second aim of this test was to investigate the MMB-LBP with different sizes of blocks. Since the block size of the MMB-LBP can have a significant impact on image retrieval performance, this part of the test was meant to decide upon the block size of the proposed method. Hence, the MB-LBP and MMB-LBP were investigated with the block sizes of 8 × 8, 16 × 16, 32 × 32, 64 × 64, and 128 × 128. Figure 12 shows the evaluation results in the first rank. As shown, the best shoeprint retrieval score was 97.63%, which was related to the 32 × 32 size.

A Comparison with Patil and Almaadeed
The first experiment in this category was conducted to investigate and compare the proposed MMB-LBP with the Patil [30] and Almaadeed [35] methods. Table 3 gives the results of this investigation on complete and incomplete shoeprints at different ranks. As can be observed, the cumulative match score of the proposed method in the first rank was 97.63 for complete shoeprints and 96.05 for incomplete toe shoeprints; moreover, for incomplete heel shoeprints, the cumulative match score was 91 in the first rank. In comparison, 83.29% of complete shoeprint images in the Almaadeed method and 64% of complete shoeprints in the Patil method were retrieved in the first rank, and 74% of incomplete shoeprint images in these two methods were retrieved in the first rank. In the MMB-LBP, the cumulative match score for complete shoeprint images reached 100 in the fifth rank, and that for incomplete shoeprints reached higher than 97. In contrast, in the Patil and Almaadeed methods, the best match for incomplete shoeprints in the fifth rank reached 78. It should be noted that the reason for the low cumulative match score of the Patil method is that shoeprint images are significantly rotated; that is, the Radon transform could not properly rotate shoeprint images into a vertical orientation. Moreover, shoeprint images are blocked before feature extraction in the Patil method; hence, to have an equal evaluation of the performances of the proposed and Patil methods, the preprocessing of the MMB-LBP was used instead of the Radon transform for image rotation in the Patil method.
As shown in Tables 3 and 4, it can be observed that the proper rotation of the shoeprint image, given the way the Patil method functions, has a remarkable impact on shoeprint matching performance. That is, whereas a cumulative match score of 64 in the first rank was obtained for complete shoeprints via the Radon transform, this score was enhanced to 82 when the preprocessing of the proposed method was used. Indeed, the shoeprint cumulative match score was enhanced under all conditions, in such a way that in the fifth rank, the cumulative match scores of both complete and incomplete shoeprints surpassed 89. Therefore, it can be argued that the proposed method performed better than the other two methods in matching complete and incomplete shoeprints.

Here, we investigate the rotation independency of shoeprint images in the proposed method. In this test, queried images were randomly rotated clockwise by one of these angles, i.e., 15, 30, or 45 degrees. Figure 13 demonstrates the results of this test for the proposed and Patil methods. As shown, the MMB-LBP is resistant to rotation distortions. It is seen from Figure 13 that the cumulative match scores achieved in the first rank for complete, incomplete toe, and incomplete heel shoeprints are above 97, 95, and 91, respectively. Moreover, 100% matching for complete shoeprints and cumulative match scores over 98 for incomplete shoeprints were obtained in the fifth rank. On the other hand, as shown in Figure 13 and Table 4, about a 1% matching reduction for complete, incomplete toe, and incomplete heel shoeprints was observed in the Patil method in the first rank.
Moreover, a 5% matching reduction for complete shoeprints and a 9% reduction in incomplete toe and heel shoeprints were observed in the Patil method in the fifth rank. It is worth noting that the cumulative match score of the Almaadeed method here was less than 50 and, therefore, was not included in the report.

Evaluations under Distortions and Noises
Here, the resistance of the proposed method to salt and pepper noise and Gaussian white noise is investigated. While being digitized, the database shoeprint images were stained with salt and pepper noise and Gaussian white noise; hence, the proposed method was investigated under such noisy conditions. The queried shoeprint images were stained with salt and pepper noise and Gaussian white noise at different signal-to-noise ratios (SNR), i.e., 15.28, 18.80, 24.82, 26.76, 32.78, and 38.80. The noise variance for a sample shoeprint image with a given signal-to-noise ratio is defined as:

SNR(dB) = 20 log_10 (P_s / σ_n²) (15)

where P_s denotes the average power of the shoeprint image and σ_n² refers to the noise variance. Figure 14 illustrates the images that had been stained with salt and pepper noise and Gaussian white noise.

Figures 15 and 16 show the results for the proposed, Patil, and Almaadeed methods under the noisy conditions. As can be observed in the related tables and figures, the proposed method is resistant to noise. That is to say, under the salt and pepper noise and Gaussian white noise conditions at different SNRs, the cumulative match scores of the MMB-LBP for complete shoeprint, incomplete toe, and incomplete heel matching nearly reached 98, 95, and 91, respectively, in the first rank. Comparatively, in the Patil method, under the same noise conditions, the cumulative match scores for complete shoeprint, incomplete toe, and incomplete heel matching reached 81, 69, and 68, respectively, in the first rank. Furthermore, as shown in Figure 15, as the signal-to-noise ratio decreases further, a more drastic reduction in the cumulative match score of the Almaadeed method under the salt and pepper noise condition is observed.

From Figure 16, a reduction of about 10% is observed in the Almaadeed method under the Gaussian white noise condition. Thus, it can be argued that the MMB-LBP performed better than the Patil and Almaadeed methods under the salt and pepper noise and Gaussian white noise conditions.
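The noise-variance relation in (15) can be used to synthesize test noise at a target SNR. The sketch below follows the formula as reconstructed above (SNR(dB) = 20 log_10(P_s / σ_n²)); the function names and the constant test image are illustrative assumptions:

```python
import numpy as np

def noise_variance(img, snr_db):
    """Noise variance sigma_n^2 for a target SNR, inverting
    SNR(dB) = 20 * log10(Ps / sigma_n^2), where Ps is the
    average power of the image."""
    ps = np.mean(img.astype(np.float64) ** 2)
    return ps / 10.0 ** (snr_db / 20.0)

def add_gaussian_noise(img, snr_db, seed=0):
    """Stain an image with zero-mean Gaussian white noise at the
    variance dictated by the target SNR."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(noise_variance(img, snr_db))
    return img + rng.normal(0.0, sigma, img.shape)

img = np.full((256, 128), 128.0)          # constant test image, Ps = 128^2
noisy = add_gaussian_noise(img, snr_db=15.28)
print(noisy.shape)  # (256, 128)
```

Lower SNR values produce a larger σ_n², which is why the retrieval scores of all methods degrade as the SNR decreases.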

Conclusions
In this paper, a novel MMB-LBP method was proposed for matching shoeprint images automatically. Indeed, the MMB-LBP was used for extracting the texture features of shoeprint images. The results showed that the proposed method has a higher retrieval success rate in comparison with the LBP and MB-LBP. The evaluation results also demonstrated that the proposed method is robust under rotation, Gaussian white noise, and salt and pepper noise, and it has better cumulative match scores when compared with the Patil and Almaadeed methods. The cumulative match scores in the first rank for complete, incomplete toe, and incomplete heel shoeprints were 97, 96, and 91, respectively, in the presence of rotation distortions, salt and pepper, and Gaussian white noises. Finally, the cumulative match score for complete, incomplete toe, and incomplete heel shoeprints in the fifth rank was 100, over 98, and 97, respectively.
