Journal of Imaging
  • Article
  • Open Access

29 July 2021

A Robust Document Identification Framework through f-BP Fingerprint

1. Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy
2. School of Physics, Engineering and Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
3. Raggruppamento Carabinieri Investigazioni Scientifiche, RIS di Messina, 98122 Messina, Italy
* Author to whom correspondence should be addressed.
This article belongs to the Section Biometrics, Forensics, and Security

Abstract

The identification of printed materials is a critical and challenging issue for security purposes, especially when it comes to documents such as banknotes, tickets, or rare collectable cards: eligible targets for ad hoc forgery. State-of-the-art methods require expensive and specific industrial equipment, while a low-cost, fast, and reliable solution for document identification is increasingly needed in many contexts. This paper presents a method to generate a robust fingerprint by extracting translucent patterns from paper sheets and exploiting the peculiarities of binary pattern descriptors. The final descriptor is generated through a block-based solution followed by principal component analysis (PCA), to reduce the overall amount of data to be processed. To validate the robustness of the proposed method, a novel dataset was created and recognition tests were performed under both ideal and noisy conditions.

2. Fingerprint Extraction Process

Illuminating the surface to highlight the wood fibers is mandatory to properly extract the pseudo-random pattern that is unique to each sheet of paper. However, such patterns must be digitized and properly modeled mathematically to implement a robust document identification system, which is the goal of this paper. Given a physical paper document d_i, the aim of this work is to obtain a digital fingerprint F_i, namely a sequence of K ordered values {f_i(1), f_i(2), ..., f_i(K)}, determined solely by correctly processing the digital image s_i, i.e., the acquisition of the document d_i. The overall proposed pipeline is summarized in Figure 1.
Figure 1. Overall pipeline of the proposed framework. First row describes the process to acquire documents; second row shows the fingerprint extraction process.

2.1. Document Digitization and Image Registration Considerations

The physical set of N documents D = {d_1, d_2, ..., d_N} was acquired using devices able to capture the wood fiber pattern by exploiting the translucent properties of the paper. In this work, two different acquisition environments were employed to compare the performance of low-end and high-end equipment. Details about the devices and related settings are provided in Section 3. The acquisition of a physical document d_i was carried out in a semi-constrained environment; specifically, the documents must be roughly aligned with respect to the capturing device to guarantee an effective subsequent registration. For the sake of readability, the set of digitized versions of the documents D = {d_1, d_2, ..., d_N} is denoted as S = {s_1, s_2, ..., s_N}.
To successfully analyze the wood fiber pattern of a document d_i, the related digital image s_i must be registered. This step is critical because the paper fingerprint strongly depends on spatial information; hence, one must ensure that, if a given document is acquired multiple times under the same setup, the system processes exactly the same region of the paper surface. To this aim, reference points (e.g., the black bands in the acquired image) were exploited to rotate and properly crop s_i (see Section 2.2 for more details). After registration, a W × H sample, denoted as x_i, was obtained from each document image s_i, and the related set X = {x_1, x_2, ..., x_N} was employed to build the fingerprints.

2.2. Extracting a Unique Fingerprint

The extraction of a unique fingerprint from a sample x_i is the process that encodes the texture information in such a way as to satisfy the following properties: (i) low complexity; (ii) encoding capability; (iii) robustness with respect to missing parts. To this aim, the LBP descriptor and its variants [14] are employed, which have been demonstrated to satisfy all the aforementioned requirements. These descriptors guarantee high discriminative power while maintaining low computational complexity, and work almost perfectly even in the presence of slight variations in the textures. In particular, LBP is a local descriptor that compares a pixel, called the pivot, to its n neighbors along the circle defined by a certain radius r [13]. In recent years, the use of LBP for texture classification has grown, and a wide set of LBP variants has been proposed [14]. Each so-called f-BP variant aims to improve the accuracy and the robustness for a specific task. The well-known locality property makes the f-BP a flexible descriptor even in the presence of small perturbations, which is the fundamental requirement of the fingerprint we are looking for. Regardless of the chosen f-BP, after pattern extraction the final descriptor is obtained by counting how many times each pattern occurs, namely by computing a histogram.
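To make the descriptor concrete, the following minimal Python sketch (an illustration, not the authors' implementation; it assumes NumPy and scikit-image are available, and all names are hypothetical) computes the classical LBP codes of a grayscale sample and the corresponding T-bin histogram for n = 8 and r = 1.

```python
# Minimal sketch of classical LBP extraction and histogram computation.
# Assumes NumPy and scikit-image; function names are illustrative only.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_codes(gray_sample: np.ndarray, n: int = 8, r: int = 1) -> np.ndarray:
    """Each pixel (the pivot) is compared with its n neighbors on a circle of radius r."""
    return local_binary_pattern(gray_sample, P=n, R=r, method="default")

def lbp_histogram(gray_sample: np.ndarray, n: int = 8, r: int = 1) -> np.ndarray:
    """Count how many times each of the 2**n possible patterns occurs (T bins)."""
    codes = lbp_codes(gray_sample, n, r)
    hist, _ = np.histogram(codes, bins=np.arange(2 ** n + 1))
    return hist.astype(np.float64)
```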
Histograms are compact and effective descriptors for a large number of tasks; nevertheless, they heavily discard spatial information. To face this issue, x_i is first divided into M non-overlapping p × p patches and the histogram is computed separately for each patch P_j(x_i) with j ∈ {1, 2, ..., M}; hence, h_{j,i} denotes the histogram of the j-th patch of the sample x_i. The importance of spatiality is easily guessed: if the document presents some type of fault (e.g., missing parts, tears, holes, noise), it is important that it does not affect the whole fingerprint, but just a portion of it. For this reason, the choice of the patch size p and of the hyperparameters θ_f of the employed f-BP variant (e.g., the number of neighbors n and the radius r) has consequences on the performance. The size T of the histogram depends on the number of possible patterns the f-BP variant leads to. For example, if one employs classical LBP with n = 8 and r = 1, the number of possible patterns, and hence the histogram size T, is 256. As far as the patch size p is concerned, large patches decrease the spatial information, while small patches make the binary pattern excessively local and increase the complexity of the obtained fingerprint.
The final fingerprint F_i for document d_i is obtained by concatenating all the histograms h_{j,i} for j = 1, 2, ..., M:
F_i = h_{1,i} ∥ h_{2,i} ∥ ... ∥ h_{M,i}
where ∥ denotes histogram concatenation. The size K of F_i is K = M × T, since M histograms of size T are obtained from the M patches. The goal of this study is to test different f-BP variants and look for the parameters {W, H, p, θ_f} which lead to the most robust fingerprint.
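A block-based sketch of the whole extraction, under the same assumptions as the previous snippet (the `lbp_histogram` helper is the illustrative one defined above), could look as follows: the registered W × H sample is split into non-overlapping p × p patches and the per-patch histograms are concatenated into F_i of length K = M × T.

```python
import numpy as np

def extract_fingerprint(sample: np.ndarray, p: int = 100,
                        n: int = 8, r: int = 1) -> np.ndarray:
    """Split the registered sample into M non-overlapping p x p patches,
    compute one histogram of size T per patch and concatenate them
    into the fingerprint F_i of length K = M * T."""
    H, W = sample.shape
    histograms = []
    for y in range(0, H - H % p, p):        # drop incomplete border patches
        for x in range(0, W - W % p, p):
            patch = sample[y:y + p, x:x + p]
            histograms.append(lbp_histogram(patch, n, r))
    return np.concatenate(histograms)
```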

3. Datasets for Document Identification and Fingerprint Testing

To evaluate the proposed approach and provide a useful benchmark to this research field, a new dataset is introduced, composed of 200 A4 paper sheets arranged in groups of 40 and divided into 5 non-overlapping classes. Each class is defined by two attributes: the manufacturer of the paper b ∈ {b_1, b_2, b_3, b_4} and its weight, or grammage (measured in g/m²), g ∈ {80, 160, 200}. The resulting classes are: (b_1, 80), (b_2, 80), (b_3, 80), (b_4, 160), (b_4, 200).
All the 200 documents in D were then acquired multiple times using two different devices as described in Figure 2 and detailed in the next subsections. The dataset will be made available online after this paper is accepted and a download link will be placed in this section.
Figure 2. Devices employed for acquisitions.

3.1. Devices

To compare the performance obtainable with high-end and low-end equipment, each document was digitized using two different devices. For the high-end case, the Video Spectral Comparator 6000 (VSC) was employed, while for the low-end one we used the Backlight Imaging Tool (BIT): a setup we designed, consisting of a cheap overhead projector combined with a digital camera.
The VSC consists of a main unit (Figure 2b) connected to a standard workstation. It provides several functionalities and a set of different light sources to highlight paper details not normally visible under standard conditions. Table 1 shows the VSC acquisition settings.
Table 1. VSC Settings.
The BIT consists of an overhead projector, which serves as the light source, and a consumer RGB camera mounted on the projector arm. The employed camera is a Nikon D3300 equipped with a Nikon DX VR 15–55 mm 1:3.5–5.6 GII lens. Setting details are listed in Table 2.
Table 2. BIT Settings.

3.2. Dataset Acquisition

For the sake of clarity, the terms S_VSC and S_BIT will be used to refer to the digital acquisitions made with the VSC and the BIT, respectively. The overall dataset acquisition pipeline is depicted in Figure 1. As expected, S_VSC and S_BIT show different contrast and sharpness.
S_VSC consists of 200 documents acquired twice, for a total of 400 acquisitions (Table 3). The result of a single acquisition is a bitmap image of 1292 × 978 pixels at 300 dots per inch (dpi), as reported in Figure 3a. S_BIT consists of 200 documents acquired 8 times each. However, the insufficient light power of the BIT does not allow the extraction of the translucent pattern from paper with grammage 160 or 200. Thus, only the 120 documents with grammage 80 were considered, for a total of 960 acquisitions with a resolution of 6000 × 4000 pixels at 300 dpi (Table 3). Figure 4a shows a raw acquisition, where the black bands used for image registration are visible.
Table 3. Dataset Table.
Figure 3. Document acquisition with VSC before registration (a) and after registration (b).
Figure 4. Document acquisition with BIT before (a) and after registration (b).

3.3. Image Registration

The black bands outside the paper surface were deliberately included in the acquisition to selectively distinguish the pixels of the external area and easily obtain a registered set of images. All the raw images in S_VSC and S_BIT were converted into grayscale. First, a luminance threshold is used to find the top-left corner (y_0, y_1) of the sheet of paper. Secondly, the image is cropped anchored at position (y_0 + u, y_1 + u), where u is the minimum offset required to exclude the external area from the crop. The value of u is variable: the larger the acquired external area, the greater its value. Images acquired by means of the VSC are cropped to 400 × 400 pixels, while the ones acquired with the BIT are cropped to 5000 × 1000 pixels. Finally, we obtain X_VSC, the set of 400 registered samples from the VSC, and X_BIT, the set of 960 registered samples from the BIT. Examples are shown in Figure 3b and Figure 4b.
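A minimal registration sketch is shown below; it assumes, as described above, that the paper region is brighter than the surrounding black bands, so the first bright row and column locate the top-left corner (y_0, y_1). Rotation compensation is omitted, and the threshold, offset u and crop size are illustrative values, not the ones used by the authors.

```python
import numpy as np

def register(gray: np.ndarray, threshold: int = 60, u: int = 20,
             crop_h: int = 400, crop_w: int = 400) -> np.ndarray:
    """Locate the top-left corner of the paper via a luminance threshold and
    crop the image anchored at (y0 + u, y1 + u) to exclude the external area."""
    bright = gray > threshold                   # paper pixels vs. black bands
    y0 = int(np.argmax(bright.any(axis=1)))     # first row containing paper
    y1 = int(np.argmax(bright.any(axis=0)))     # first column containing paper
    return gray[y0 + u:y0 + u + crop_h, y1 + u:y1 + u + crop_w]
```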

4. Experiments and Discussion

To evaluate the proposed fingerprint extraction approach in depth, recognition tests were performed on the datasets described in Section 3. Since each document was acquired multiple times (i.e., twice with the VSC and 8 times with the BIT), a fingerprint reference dataset was built to face the recognition task; such a reference dataset consists of only one sample per document, while the remaining samples were used to query it. A certain document d with extracted fingerprint F_a has a correct match with the closest element F_b in the reference dataset if both F_a and F_b "belong" to the document d; in other words, a correct match occurs if s_a and s_b are different acquisitions of the same document. Recognition performance is measured using the well-known accuracy metric, defined as the rate of queries that obtain a correct match. The adopted similarity measure for fingerprints was the Bhattacharyya distance [28], which is typically and effectively employed for problems where probability distributions must be compared.

To better assess the effectiveness of the proposed fingerprint, four different recognition experiments were performed, as detailed in the following. First, the original LBP was employed to compare the recognition accuracy on both datasets (VSC and BIT), demonstrating device invariance. Given this result, a comparison was performed only on the BIT dataset between LBP fingerprints computed as in [15] and three other LBP variants, i.e., LTP [25], SBP [26] and CLBP [27]. Moreover, fingerprint robustness was also investigated. To this aim, a challenging scenario was created where the query samples were intentionally altered by removing some pixels from the digital image to simulate physical damage of the paper (e.g., tears, holes). Finally, an optimization in terms of fingerprint dimensions was carried out and tested as well by exploiting principal component analysis (PCA) [29].
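The core of the recognition test can be sketched as follows (illustrative Python; the exact normalization used by the authors is not specified, so it is assumed here): each query fingerprint is matched to the reference with the smallest Bhattacharyya distance, and accuracy is the rate of correct matches.

```python
import numpy as np

def bhattacharyya(f_a: np.ndarray, f_b: np.ndarray) -> float:
    """Bhattacharyya distance between two fingerprints treated as distributions."""
    p = f_a / f_a.sum()
    q = f_b / f_b.sum()
    bc = float(np.sum(np.sqrt(p * q)))          # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))              # 0 means identical

def recognition_accuracy(queries, query_ids, references, reference_ids) -> float:
    """Rate of queries whose nearest reference fingerprint belongs to the same document."""
    correct = 0
    for f_q, doc_id in zip(queries, query_ids):
        distances = [bhattacharyya(f_q, f_x) for f_x in references]
        correct += int(reference_ids[int(np.argmin(distances))] == doc_id)
    return correct / len(queries)
```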

4.1. Dataset Comparison

To demonstrate the effectiveness of the LBP-based fingerprint extraction method, we started from the work of Guarnera et al. [15], our previous work, which represents the state of the art. Table 4 shows the overall accuracy obtained in the recognition tests performed on both datasets: 96.5% and 99.2% for VSC and BIT, respectively. Although the samples from the two datasets have different sizes, the best patch size for both was 100 × 100, a reasonable trade-off to preserve local spatial information. The accuracy on the BIT dataset is slightly higher than the accuracy obtained on the VSC dataset. This demonstrates that the robustness of the fingerprint depends neither on the acquisition settings nor on the device.
Table 4. Best configuration parameters and accuracy of recognition test in VSC and BIT datasets.

4.2. Comparisons among LBP Variants

As introduced in Section 2, many LBP variants have been proposed for texture analysis. Among them, LTP [25], SBP [26] and CLBP [27] were selected for the experiments described in this section. In the previous section, the independence of the proposed fingerprint from the acquisition device was demonstrated. Starting from this evidence, in the next experiments only the BIT dataset is employed, given the higher number of available samples. The results in terms of accuracy are reported in Table 5, where CLBP and SBP show an improvement over LBP, achieving an accuracy of 99.7% and 99.4%, respectively. It is worth noting that LBP is the method employed in [15] to extract the fingerprint, so the aforementioned results represent an improvement over the state of the art. As described in the literature, LTP tends to work better than LBP when the texture presents uniform (i.e., low-variance) regions; the wood fiber patterns show a high variance, which explains the worse results of this descriptor. SBP, a generalization of the common binary pattern, obtains, as expected, an accuracy (99.4%) slightly better than LBP. Finally, the best performance was obtained by CLBP (99.7%), even though it delivers the largest fingerprint in terms of histogram dimension (number of bins).
Table 5. Accuracy of the recognition tests carried out on the BIT dataset employing LTP, SBP and CLBP.

4.3. Tests in a Noisy Environment: Synthetically Altered Documents

The proposed fingerprint extraction method was tested under controlled conditions to properly assess what is expected to happen in real cases, namely when a document experiences some alteration between the first fingerprint extraction and the successive ones; in such cases, the original fingerprint of the document may be very dissimilar from the latter one. To this aim, two types of paper damage were simulated: tears and stains. The "tear" simulates a loss of information that starts from one corner of a sample x_i, replacing the lost region with black pixels, while the "stain" introduces random black blocks on the sample to simulate holes or stains. For both, the so-called degree represents the size of the black area: the maximum degree corresponds to removing about 75% of the full sample (see Figure 5 and Figure 6). Given the aforementioned alterations, a new recognition test on the BIT dataset was carried out, in which the fingerprint database includes 120 unaltered samples and another 960 altered samples were used to query the database. The results, reported in Figure 7 and Figure 8, further prove the robustness of the proposed fingerprint, specifically the CLBP-based one, which achieves the best performance once more.
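The two alterations could be simulated along the following lines (an illustrative sketch, not the authors' exact procedure; the block size and the mapping from degree to blacked-out area are assumptions):

```python
import numpy as np

def simulate_tear(sample: np.ndarray, degree: float) -> np.ndarray:
    """Black out a triangular region growing from the top-left corner."""
    out = sample.copy()
    h, w = out.shape
    side = int(degree * min(h, w))              # extent of the torn corner
    for y in range(min(side, h)):
        out[y, :max(side - y, 0)] = 0
    return out

def simulate_stain(sample: np.ndarray, degree: float, block: int = 50,
                   rng=np.random.default_rng(0)) -> np.ndarray:
    """Place random black blocks until roughly `degree` of the area is covered."""
    out = sample.copy()
    h, w = out.shape
    for _ in range(int(degree * h * w / block ** 2)):
        y = int(rng.integers(0, h - block))
        x = int(rng.integers(0, w - block))
        out[y:y + block, x:x + block] = 0
    return out
```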
Figure 5. Examples of altered documents with simulated tear damage; (a) represents the first degree of damage and (b) the last (i.e., 11).
Figure 6. Examples of altered documents with simulated stain damage; (a) represents the first degree of damage and (b) the last (i.e., 11).
Figure 7. Accuracy of CLBP and LBP vs. degree of tear alteration.
Figure 8. Accuracy of CLBP and LBP vs. degree of stain alteration.

4.4. Fingerprint Dimensions Optimization

All the tests described in the previous sections were performed employing the pipeline described in Figure 1 with the following settings: images were cropped into patches of 100 × 100 pixels; the number of neighbors for CLBP was n = 12 and the radius was r = 6. These settings lead to 500 patches per sample from the BIT dataset; thus, a histogram of 8194 elements is computed for each patch. This results in a fingerprint of 500 × 8194 = 4,097,000 elements, whose storage occupancy is about 8.3 MB. Since the fingerprint produced by the proposed method could be even larger depending on the parameters, and since large fingerprints decrease efficiency, some optimization strategies to reduce it were explored.
The simplest strategy to reduce the fingerprint size is to increase the patch size p; however, this cannot guarantee the same accuracy. Table 6 shows the results obtained using larger values of p while monitoring the fingerprint size. The analysis of the results shows that the setting with p = 200 achieves a performance similar to p = 100 (i.e., only a 0.4% drop in accuracy) while reducing the size to 25% of the original, from 4,097,000 to 1,024,250 elements, which can be stored in 2.2 MB. However, as stated in the previous sections, employing bigger patches does not preserve spatial information and indeed leads to a dramatic accuracy drop (e.g., 67.6% for p = 500).
Table 6. Accuracy and size of CLBP fingerprints when varying the patch size.
To optimize the size of the fingerprint while preventing a large loss in terms of accuracy, we employed principal component analysis (PCA) [29]. PCA reduces the dimensionality by projecting each data point onto only some of the principal components, obtaining lower-dimensional data while preserving most of the data variance; in a nutshell, it reduces the dimensions while retaining most of the information that describes a certain phenomenon. PCA is applied to each histogram h_{j,i} previously obtained using CLBP; hence, such histograms are drastically reduced in terms of dimensions. First, for testing purposes, all the 120 samples included in the fingerprint database were used to fit the PCA model. By employing the well-known explained variance analysis, we found that 95% of the information can be preserved using the first 32 principal components (also known as features), instead of the original 8194. However, PCA moves the histograms into a geometric space where the Bhattacharyya distance becomes less effective; to face this problem, the recognition test was performed by means of the Euclidean distance. To verify the quality of the reduction, the same recognition tests described in the previous sections were carried out with the reduced fingerprints, delivering an accuracy of 97.97% with only 16,000 elements, thus maintaining the excellent performance of the non-reduced fingerprints.

It is worth noting that the PCA model was built using all the samples of each of the 120 documents in BIT, which could yield a PCA model overfitted on the data. Thus, a further test was performed using only 50% of the dataset (60 documents) to fit the PCA model, while the reference dataset was queried with the samples coming from the remaining 50%. In this case, it was found that 95% of the information can be preserved using the first 40 principal components for each patch. Recognition tests confirmed the results obtained with the PCA model built on all the 120 documents (97.97% accuracy).

It is important to note that, although the fingerprint comparison also considers the missing parts when an alteration occurs, this does not heavily affect the Bhattacharyya distance between two fingerprints; the Euclidean distance, on the contrary, is affected by it. In fact, the Euclidean distance between an unaltered fingerprint of a document and an altered fingerprint of the same document exhibits much higher values, which degrades the accuracy. To overcome this problem, a custom Euclidean distance was employed, where only a part of the fingerprint elements is considered in the distance computation. Specifically, the differences between corresponding elements of the two fingerprints are computed and sorted, and only a certain percentage of the smallest ones is considered. This percentage depends on the size of the missing part, but this information is known by the operator during the identification phase, because in a real document the altered parts are visible. Figure 9a,b report the accuracy (vertical axis) when varying the percentage of elements included in the distance computation (horizontal axis). The obtained results also suggest how to maintain a high accuracy according to the alteration degree: for example, in an average damage scenario (orange lines), 50% of the elements is enough to keep the accuracy above 99%.
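The dimensionality reduction and the custom distance can be sketched as follows (assuming scikit-learn is available; the component count and the keep percentage follow the text, but the code itself is only an illustration of the two ideas, not the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_patch_pca(patch_histograms: np.ndarray, variance: float = 0.95) -> PCA:
    """Fit PCA on the per-patch CLBP histograms (rows of size T = 8194) so that
    `variance` of the explained variance is preserved (about 32 components here)."""
    return PCA(n_components=variance, svd_solver="full").fit(patch_histograms)

def reduce_fingerprint(pca: PCA, patch_histograms: np.ndarray) -> np.ndarray:
    """Project each patch histogram and concatenate the reduced features."""
    return pca.transform(patch_histograms).ravel()

def trimmed_euclidean(f_a: np.ndarray, f_b: np.ndarray, keep: float = 0.5) -> float:
    """Custom Euclidean distance: sort the squared element-wise differences and
    keep only the smallest `keep` fraction, so altered/missing parts are ignored."""
    diff = np.sort((f_a - f_b) ** 2)
    kept = diff[: int(np.ceil(keep * diff.size))]
    return float(np.sqrt(kept.sum()))
```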
Figure 9. Accuracy when varying the percentage of elements included in the distance computation, for tear (a) and stain (b) damage.

5. Fingerprint Robustness Analysis

The recognition tests carried out so far started from the hypothesis that every query fingerprint F_q has a corresponding fingerprint F_x, previously extracted from the same document and stored in the fingerprint database. A real-case scenario could be different: the query fingerprint F_q may have no corresponding F_x, and then the nearest fingerprint has no meaning (it is the most similar, but it is a fingerprint extracted from another document). Hence, additional information is needed, namely the distance ē between two samples. To address this problem, ē was analyzed in all the previously presented experiments; in particular, starting from the fingerprints extracted from the images acquired with the BIT device (i.e., 960), the distances obtained in the tests without simulated damages employing CLBP and LBP were analyzed, considering three kinds of distances:
  • ē_0: distance obtained between F_q and F_x, both extracted from the same document, when F_x is the closest fingerprint in the recognition test.
  • ē_1: distance obtained between F_q and F_x, both extracted from the same document, when F_x is not the closest fingerprint in the recognition test.
  • ē_null: distance obtained between F_q and F_x when they are extracted from different documents.
Given the 840 different F_q, 120 distances were computed for each of them (one against each reference fingerprint). For every analyzed F_q, one of two outcomes occurs:
  • The closest fingerprint F_x is extracted from the same document as F_q; then the distance between them is classified as ē_0 and the other 119 distances are classified as ē_null.
  • The closest fingerprint F_x is not extracted from the same document as F_q; then the distance between them is classified as ē_1 and the other 119 distances are classified as ē_null.
It is easy to see that the population of ē_null is much bigger than those of ē_0 and ē_1, whose combined count is exactly 840.
Figure 10a,b show the distributions of the distances ē_0, ē_1 and ē_null for the two tests (LBP and CLBP). The plots have been cut because the populations are unbalanced and because the focus of the analysis is on the intersection of the two curves. In these plots it is possible to observe two almost fully separated Gaussian-like distributions. The intersection between them (the tail of the green curve, delimited by the orange and blue lines) represents an uncertainty zone. It is worth noting that in both cases (LBP and CLBP) ē_1 lies within this zone, which confirms the meaning of the distance: the lower the distance to the nearest fingerprint, the greater the probability that the two fingerprints were extracted from the same document. Naturally, the notion of low/high depends on the descriptor employed; in the forensics domain it is important to measure the degree of uncertainty whenever it is available. The relative width z of the uncertainty zone and the percentage r of ē_0 distances inside it give a further degree of confidence, and they vary for each descriptor. Given a descriptor, the pair (z, r) can be employed to describe its robustness. For CLBP, ē_0 ranges between 0.286 and 0.338 and the uncertainty zone lies between 0.331 and 0.338, so z = 13.46%, while r = 2.62% because 22 out of 837 ē_0 distances fall inside the uncertainty zone; LBP has z = 13.56% and r = 4.92%. Table 7 shows this analysis for every binary pattern tested.
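Under the definitions given above, the pair (z, r) could be computed as in the following illustrative sketch, which assumes that the uncertainty zone starts at the smallest ē_null distance and ends at the largest ē_0 distance; with the CLBP values reported above this yields z ≈ 13.46%, consistent with Table 7.

```python
import numpy as np

def uncertainty_measures(e0: np.ndarray, e_null: np.ndarray):
    """z: width of the uncertainty zone as a percentage of the e_0 range;
    r: percentage of e_0 distances that fall inside the zone."""
    lo, hi = float(e0.min()), float(e0.max())   # range of same-document distances
    zone_start = float(e_null.min())            # where different-document distances begin
    z = 100.0 * (hi - zone_start) / (hi - lo)
    r = 100.0 * float(np.mean(e0 >= zone_start))
    return z, r
```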
Figure 10. Distributions of the distances ē_null, ē_0 and ē_1 for the LBP (a) and CLBP (b) tests. For both plots, the x-axis represents the obtained distance values and the y-axis the number of occurrences; ē_null, ē_0 and ē_1 are represented in gray, green and red, respectively.
Table 7. Percentage width of the uncertainty zone (z) and percentage of ē_0 inside it (r) for each analyzed descriptor.
Moreover, a cross-analysis between the LBP and CLBP results was conducted to understand whether there is a correlation between the input and the descriptor efficiency. The textures whose distance falls within z were analyzed: CLBP has 22 such distances out of 837, while LBP has 41 out of 835; 13 of them are shared, and the others are close to z, meaning that a badly acquired texture yields bad distances (close to or within z) regardless of the descriptor.

6. Conclusions

In this paper, a novel approach for document identification was proposed. The method employs variants of binary pattern descriptors (e.g., LBP, LTP, SBP, CLBP) to obtain a fingerprint that uniquely identifies the input document while remaining easily manageable. For this reason, an additional analysis was conducted to optimize the fingerprint in terms of dimensions; this optimization, based on PCA, maintained almost the same degree of confidence while reducing the fingerprint size to less than 1/100 of the original. To demonstrate the robustness of the method, the dataset was expanded by including noisy (synthetically altered) samples, demonstrating the value of the proposed technique in real-case scenarios and better results with respect to the state of the art. Finally, a further analysis on the meaning of the distances was conducted to generalize the recognition test.

Author Contributions

Conceptualization, F.G. and O.G.; Data curation, F.G. and D.A.; Investigation, F.G., O.G. and D.A.; Methodology, F.G. and O.G.; Resources, O.G., V.M. and A.S.; Software, F.G.; Supervision, O.G. and S.B.; Validation, F.G., O.G. and S.B.; Writing original draft, F.G.; Writing review and editing, O.G., D.A., F.S., S.B., S.L., V.M. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The authors would like to thank Raggruppamento Carabinieri Investigazioni Scientifiche, RIS di Messina for providing the VSC®6000 instrumentation and support, and iCTLab s.r.l. (spinoff of the University of Catania) for help during the dataset creation. Both also provided fundamental and insightful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheddad, A.; Condell, J.; Curran, K.; Mc Kevitt, P. Combating digital document forgery using new secure information hiding algorithm. In Proceedings of the International Conference on Digital Information Management, London, UK, 13–16 November 2008; pp. 922–924. [Google Scholar] [CrossRef]
  2. Ahmed, A.G.H.; Shafait, F. Forgery Detection Based on Intrinsic Document Contents. In Proceedings of the International Workshop on Document Analysis Systems, Tours, France, 7–10 April 2014; pp. 252–256. [Google Scholar] [CrossRef][Green Version]
  3. Berenguel, A.C.; Terrades, O.R.; Lladós, J.C.; Cañero, C.M. Banknote Counterfeit Detection through Background Texture Printing Analysis. In Proceedings of the IAPR Workshop on Document Analysis Systems, Santorini, Greece, 11–14 April 2016; pp. 66–71. [Google Scholar] [CrossRef]
  4. Bruna, A.R.; Farinella, G.M.; Guarnera, G.C.; Battiato, S. Forgery Detection and Value Identification of Euro Banknotes. Sensors 2013, 13, 2515–2529. [Google Scholar] [CrossRef] [PubMed]
  5. Gill, N.K.; Garg, R.; Doegar, E.A. A review paper on digital image forgery detection techniques. In Proceedings of the International Conference on Computing, Communication and Networking Technologies, Delhi, India, 3–5 July 2017; pp. 1–7. [Google Scholar] [CrossRef]
  6. Kumar, M.; Gupta, S.; Mohan, N. A computational approach for printed document forensics using SURF and ORB features. Soft Comput. 2020, 24, 13197–13208. [Google Scholar] [CrossRef]
  7. Berenguel, A.C.; Terrades, O.R.; Lladós, J.C.; Cañero, C.M. Identity Document and banknote security forensics: A survey. arXiv 2019, arXiv:1910.08993. [Google Scholar]
  8. Battiato, S.; Giudice, O.; Paratore, A. Multimedia forensics: Discovering the history of multimedia contents. In Proceedings of the 17th International Conference on Computer Systems and Technologies 2016, Palermo, Italy, 23–24 June 2016; pp. 5–16. [Google Scholar]
  9. Samsul, W.; Uranus, H.P.; Birowosuto, M.D. Recognizing Document’s Originality by laser Surface Authentication. In Proceedings of the International Conference on Advances in Computing, Control and Telecommunication Technologies, Jakarta, Indonesia, 2–3 December 2010; pp. 37–40. [Google Scholar] [CrossRef]
  10. Sharma, A.; Subramanian, L.; Brewer, E.A. PaperSpeckle: Microscopic fingerprinting of paper. In Proceedings of the ACM Conference on Computer and Communications Security, Chicago, IL, USA, 17–21 October 2011; pp. 99–110. [Google Scholar] [CrossRef]
  11. Toreini, E.; Shahandashti, S.F.; Hao, F. Texture to the Rescue: Practical Paper Fingerprinting based on Texture Patterns. ACM Trans. Priv. Secur. 2017, 20, 1–29. [Google Scholar] [CrossRef]
  12. Petrovska-Delacrétaz, D.; Jain, A.K.; Chollet, G.; Dorizzi, B. Guide to Biometric Reference Systems and Performance Evaluation; Springer: London, UK, 2009. [Google Scholar]
  13. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  14. Brahnam, S.; Jain, L.C.; Nanni, L.; Lumini, A. (Eds.) Local Binary Patterns: New Variants and Applications; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar] [CrossRef]
  15. Guarnera, F.; Allegra, D.; Giudice, O.; Stanco, F.; Battiato, S. A New Study On Wood Fibers Textures: Documents Authentication Through LBP Fingerprint. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; pp. 4594–4598. [Google Scholar] [CrossRef]
  16. Buchanan, J.D.; Cowburn, R.P.; Jausovec, A.V.; Petit, D.; Seem, P.; Xiong, G.; Atkinson, D.; Fenton, K.; Allwood, D.A.; Bryan, M.T. Forgery: Fingerprinting documents and packaging. Nature 2005, 436, 475. [Google Scholar] [CrossRef] [PubMed]
  17. Van Beijnum, F.; Van Putten, E.G.; Van der Molen, K.L.; Mosk, A.P. Recognition of paper samples by correlation of their speckle patterns. arXiv 2006, arXiv:physics/0610089. [Google Scholar]
  18. Cowburn, R. Laser Surface Authentication-natural randomness as a fingerprint for document and product authentication. In Proceedings of the Optical Document Security Conference, San Francisco, CA, USA, 23–25 January 2008. [Google Scholar]
  19. Cowburn, R. Laser surface authentication—Reading Nature’s own security code. Contemp. Phys. 2008, 49, 331–342. [Google Scholar] [CrossRef]
  20. Clarkson, W.; Weyrich, T.; Finkelstein, A.; Heninger, N.; Halderman, J.A.; Felten, E.W. Fingerprinting blank paper using commodity scanners. In Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, USA, 17–20 May 2009; pp. 301–314. [Google Scholar] [CrossRef]
  21. Wong, C.W.; Wu, M. Counterfeit Detection Based on Unclonable Feature of Paper Using Mobile Camera. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1885–1899. [Google Scholar] [CrossRef]
  22. Liu, R.; Wong, C.W.; Wu, M. Enhanced Geometric Reflection Models for Paper Surface Based Authentication. In Proceedings of the IEEE International Workshop on Information Forensics and Security, Hong Kong, China, 11–13 December 2019. [Google Scholar] [CrossRef]
  23. Chen, D.; Hu, Q.; Zeng, S. An Anti-Counterfeiting Method of High Security and Reliability Based on Unique Internal Fiber Pattern of Paper. In Proceedings of the 2020 IEEE 14th International Conference on Anti-Counterfeiting, Security, and Identification (ASID), Xiamen, China, 30 October–1 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 174–178. [Google Scholar]
  24. Haist, T.; Tiziani, H.J. Optical detection of random features for high security applications. Opt. Commun. 1998, 147, 173–179. [Google Scholar] [CrossRef]
  25. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. In Proceedings of the Analysis and Modeling of Faces and Gestures, Rio de Janeiro, Brazil, 20 October 2007; pp. 168–182. [Google Scholar] [CrossRef]
  26. Nguyen, T.P.; Vu, N.S.; Manzanera, A. Statistical binary patterns for rotational invariant texture classification. Neurocomputing 2016, 173, 1565–1577. [Google Scholar] [CrossRef]
  27. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar] [CrossRef] [PubMed]
  28. Bhattacharyya, A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 1943, 35, 99–109. [Google Scholar]
  29. Pearson, K. On lines and planes of closest fit to systems of points in space. Philos. Mag. 1901, 2, 559–572. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
