Article

Efficient Banknote Recognition Based on Selection of Discriminative Regions with One-Dimensional Visible-Light Line Sensor

1 Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea
2 Kisan Electronics, Sungsoo 2-ga 3-dong, Sungdong-gu, Seoul 133-831, Korea
* Author to whom correspondence should be addressed.
Sensors 2016, 16(3), 328; https://doi.org/10.3390/s16030328
Submission received: 11 January 2016 / Revised: 22 February 2016 / Accepted: 22 February 2016 / Published: 4 March 2016
(This article belongs to the Section Physical Sensors)

Abstract

Banknote papers are automatically recognized and classified in various machines, such as vending machines, automatic teller machines (ATM), and banknote-counting machines. Previous studies on automatic classification of banknotes have been based on the optical characteristics of banknote papers. On each banknote image, there are regions more distinguishable than others in terms of banknote types, sides, and directions. However, there has been little previous research on banknote recognition that has addressed the selection of distinguishable areas. To overcome this problem, we propose a method for recognizing banknotes by selecting more discriminative regions based on similarity mapping, using images captured by a one-dimensional visible-light line sensor. Experimental results with various types of banknote databases show that our proposed method outperforms previous methods.

1. Introduction

The accurate and reliable recognition of banknotes plays an important role in the growing popularity of payment facilities such as ATMs and currency-counting machines. There have been many studies on this classification functionality that have been based on the optical characteristics of banknotes. Most studies on classification of banknotes by denomination (e.g., $1, $5, $10, etc.) have been based on images of banknotes captured by visible-light sensors. In general, a banknote can appear in four directions on two sides, i.e., the forward and reverse images of the front and back sides, and the captured images of these input directions are used in recognition of banknotes.
Previous studies using visible-light images of banknotes can be divided into those that used whole banknote images for recognition [1,2,3,4,5,6] and those that used certain regions of banknote images [7,8,9,10,11,12]. Wu et al. [1] proposed a banknote orientation recognition method that uses the average brightness of eight uniform rectangles on a banknote image as the input of a classifier using a three-layer back-propagation (BP) network. However, their experiments only focused on orientation recognition of one type of Chinese banknote, the Renminbi (RMB) 100 Yuan note. A Chinese banknote recognition method using a three-layer neural network (NN) was proposed by Zhang et al. [2]. This method uses linear transforms of gray images to reduce the effect of noise and uses the edge characteristics of the transformed image as the input vectors to the NN classifier. This method was applied to Sri Lankan banknote recognition in [3]. A BP network was used as the classifier in research by Gai et al. [4]. In this research, recognition features were extracted by applying generalized Gaussian density (GGD) to capture the statistical characteristics of quaternion wavelet transform (QWT) coefficients on banknote images. To recognize multiple banknotes, Hassanpour and Farahabadi considered the texture characteristics of paper currencies as a random process and used a hidden Markov model (HMM) for classification [6]. The Indian banknote recognition method proposed by Sharma et al. [5] uses a local binary pattern (LBP) operator to extract features from banknote images and classifies banknote types using Euclidean distances with template images.
In studies concerning regions of banknotes, not all the image data that a banknote provides have been used for recognition; only certain areas on banknote images have been selected and used. This helps to reduce the amount of input data required and puts the focus on regions of banknotes that have high degrees of discrimination. A Bangladeshi banknote recognition system was proposed using axis symmetric masks to select regions of banknote images before feeding information into a multilayer perceptron network to reduce the network size and adapt it to banknote flipping [7]. Axis-symmetrical masks were also applied to the neuro-recognition system proposed by Takeda and Nishikage [8] for analysis of Euro currency using two image sensors. Takeda et al. also proposed a mask optimization technique using a genetic algorithm (GA) [10] that could be used to select good masks using the sum of the pixels uncovered by masks, called the slab value [9], as the input to the recognition neural network. Component-based banknote recognition with speeded-up robust features (SURF) was proposed by Hasanuzzaman et al. [11]. In this method, components that provide specific information about banknotes, such as denomination numbers, portraits, and building textures, are cropped and considered to be reference regions. In the multi-currency classification method proposed by Youn et al. [12], multi-template correlation matching is used to determine the discriminant areas of each banknote that are highly correlated among banknotes of the same types and poorly correlated among those of different types.
Another approach to extracting classification features involves using statistical procedures such as principal component analysis (PCA) [13,14,15,16] or canonical analysis (CA) [17] to reduce the size of the feature vector. In research using learning vector quantization (LVQ) as the classifier, input feature vectors have been extracted by PCA from banknote data acquired by various types of sensors, such as sensors of various wavelengths [13], or point and line sensors [14]. The banknote recognition method proposed by Rong et al. [15] for a rail transit automatic fare collection system employs PCA to extract features and build matching templates from banknote data acquired by various sensors, such as magnetic, ultraviolet (UV) fluorescence, and infrared (IR) sensors. In the hierarchical recognition method proposed by Park et al. [16], United States dollar (USD) banknotes were classified by front or back side and by forward or backward direction using a support vector machine (SVM) and then recognized by denomination ($1, $2, $5, etc.) using a K-means algorithm based on the PCA features extracted from sub-sampled banknote images. In research by Choi et al. [17], CA was used for size reduction and to increase the discriminating power of features extracted from Korean won images using wavelet transform.
There have also been studies combining both of the above feature extraction approaches. The Indian currency recognition method proposed by Vishnu and Omman [18] selects five regions of interest (ROI): the textures of center numerals, shapes, Reserve Bank of India (RBI) seals, latent images, and micro letters on scanned banknote images. PCA is subsequently used for dimensionality reduction of the extracted features. Finally, the recognition results are validated using a classifier implemented with the WEKA software [18]. Texture-based analysis is also used in the Indian banknote recognition method proposed by Verma et al. [19]. In this method, linear discriminant analysis (LDA) is applied to ROIs containing textures on banknote images for feature reduction, and SVM is applied for classification. Here, ROI selection is conducted with the help of a set of external tools called Mazda. In the smartphone-based US banknote recognition system proposed by Grijalva et al. [20], regions of interest are located in the right part of banknote images. From these regions, weight vectors are extracted using PCA and are compared with those of a training set using the Mahalanobis distance to determine the denomination of an input banknote. Although ROIs were defined in [20], it is uncertain whether the selected areas on banknote images are indeed those with the highest discriminating power for recognition purposes.
To overcome these limitations, we propose a banknote recognition method that combines the two feature extraction approaches mentioned above. From the sub-sampled banknote images, we select areas that have high degrees of similarity among banknotes in the same class and high degrees of difference among those in different classes. The discriminant features are then extracted from the selected data using PCA, and the banknote type is determined by a classifier based on the K-means algorithm. Our method is novel in the following respects:
(1)
Using the sub-sampled banknote region from the captured image, the local areas that have high discriminating power are selected using a similarity map. This map is obtained based on the ratio of correlation map values, considering between-class and in-class dissimilarities among banknote images.
(2)
Optimally reduced features for recognition are obtained from the selected local areas based on PCA, which reduces both the noise components and the processing time.
(3)
The performance of our method has been measured using both banknotes in normal circulation and test notes, and its effectiveness has been confirmed in the harsh testing environment of banknote recognition.
(4)
Through experiments with various types of banknotes—US dollars (USD), South African rand (ZAR), Angolan kwanza (AOA) and Malawian kwacha (MWK)—we have confirmed that our method can be applied irrespective of the type of banknote.
Table 1 presents a comparison between our research and previous studies. The remainder of this paper is organized as follows: Section 2 presents the details of the proposed banknote recognition method. Experimental results are presented in Section 3, and conclusions drawn from the results are presented in Section 4.

2. Proposed Methods

2.1. Overview of the Proposed System

Figure 1 presents an overview of the proposed banknote recognition system. The pre-processing of acquired banknote images is as follows. The banknote region is segmented from the input image and sub-sampled to a size of 64 × 12 pixels to reduce the processing time. In the second step, the recognition region with high discriminating power is selected from the sub-sampled banknote image using a similarity map. Next, an optimally reduced feature vector is extracted from the data selected with the similarity map using PCA. Finally, the banknote type and the direction of the input image are determined using a K-means algorithm based on the PCA features.

2.2. Acquisition of Banknote Image, Region Segmentation and Normalization

In this study, we used a commercial banknote-counting machine [21]. Figure 2 shows the set-up of our research. As shown in Figure 2a, if we input the banknotes into the banknote-counting machine, the image data of each banknote can be automatically acquired as shown in Figure 2b. Because of the limitations of the size and cost of the counting machine, a conventional two-dimensional (area) image sensor is not used. One line image is captured at each trigger time as the input banknote is moving through the roller device inside the counting machine at a high speed and is being illuminated by a light-emitting diode (LED). The line sensor has a resolution of 1584 pixels and is triggered to capture 464 line images for each moving input banknote. A 1584 × 464 pixel banknote image of visible light is acquired by concatenating the line images.
When entering the recognition system, a banknote can be exposed in one of four directions: the front side in the forward direction (the “A direction”) or backward direction (the “B direction”), and the back side in the forward direction (the “C direction”) or backward direction (the “D direction”). In this study, we classified banknotes in terms of type (e.g., $1, $5, $10) and direction; therefore, there are four classes, corresponding to the four directions, for each type of banknote. To address the problems of displacement and rotation of the banknote area in the captured image [16], we use the commercial corner detection algorithm built into the counting machine to locate the banknote area and exclude the background from the captured banknote image, as shown in Figure 3. The segmented banknote images are then sub-sampled to the same size of 64 × 12 pixels to reduce the effects of noise and redundant data and to increase the processing speed. Examples of original banknote images, the corresponding segmented banknote areas, and sub-sampled images are shown in Figure 3.
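As a minimal illustration of this normalization step (our sketch, not the authors' code), the following Python snippet sub-samples a segmented banknote region to 64 × 12 pixels. The synthetic image, the crop coordinates, and the use of OpenCV's area interpolation are all assumptions; the paper does not specify the interpolation method.

```python
import numpy as np
import cv2  # OpenCV; assumed tooling, not named in the paper

# Stand-in for a real 1584 x 464 visible-light line-sensor capture.
captured = np.random.randint(0, 256, size=(464, 1584), dtype=np.uint8)

# In the real system a corner-detection step locates and deskews the banknote
# area first; here we take a hypothetical crop purely for illustration.
banknote_region = captured[40:420, 90:1500]

# Area interpolation is a reasonable choice for strong down-sampling
# (our assumption). Note cv2.resize takes (width, height).
sub_sampled = cv2.resize(banknote_region, (64, 12), interpolation=cv2.INTER_AREA)
print(sub_sampled.shape)  # (12, 64)
```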

2.3. Similarity Map

In a sub-sampled banknote image, there are areas that are largely similar regardless of the banknote type and areas that are more distinguishable among different types of banknotes. To select highly discriminative regions for recognition of the banknote type, we propose a method based on the ratio of the dissimilarity among sub-sampled banknote images in different classes to that among images in the same class. This method results in a 64 × 12-pixel binary mask derived from a similarity map, which can be obtained from a training data set using the following procedure.
First, we generate a reference banknote image for each class by averaging all the banknote images belonging to the same class. An example of a reference image of a recent US $100 banknote in the front-forward direction is shown in Figure 4a. Based on the reference images generated, we calculate the correlation maps for each input banknote with respect to the class to which it belongs in the training set using the following formulas:
M(i, j) = |I′(i, j) − R′(i, j)|    (1)

with

I′(i, j) = (I(i, j) − µI) / σI    (2)

R′(i, j) = (R(i, j) − µR) / σR    (3)
where I(i, j) and R(i, j) are the gray-scale values of the pixel at position (i, j) of the input image and reference image, respectively; µI and σI are the mean and standard deviation of the input image; and µR and σR are the mean and standard deviation of the reference image. If the input banknote image and reference image are in the same class, the correlation map is defined as an in-class correlation map, denoted by MIC(i, j); otherwise, it is defined as a between-class correlation map, denoted by MBC(i, j). By averaging all the in-class and between-class correlation maps of all the training banknote images in each class, we obtain the in-class and between-class correlation maps of each class, denoted by M̄IC(i, j) and M̄BC(i, j), respectively. Examples of visualized between-class and in-class correlation maps of recent US$100 banknotes in the front-forward direction are shown in Figure 4b,c.
In the next step, the similarity map of each class is calculated by determining the pixel-wise ratio of between-class and in-class correlation maps, using Equation (4). If a pixel has an in-class map value equal to zero, its similarity map value is assigned the maximum value among those of the other calculated pixels. An example of a visualized similarity map for a front-forward US$100 banknote image is shown in Figure 4d.
S(i, j) = M̄BC(i, j) / M̄IC(i, j)    (4)
Using Equation (4), we can determine the areas where the dissimilarity of banknotes from the different classes is higher than that of the same class. These areas are the regions that have high discriminating power for banknote images and are represented by the bright pixels in the similarity map, scaled to the gray values, as shown in Figure 4.
Finally, we average the similarity maps of all the banknote classes to obtain the final similarity map. To select the banknote areas corresponding to the bright pixels of the similarity map, we use a thresholding method in which the histogram of the similarity map is divided in half: map values in the upper half are assigned “1”, and those in the lower half are assigned “0”. The resulting binary similarity map image serves as a mask that selects the pixels at white mask positions (corresponding to bright similarity map values) in the sub-sampled banknote image; these pixels are used in recognition of the banknote type. The number of pixels selected for recognition is roughly half that of the original sub-sampled image (approximately (64 × 12)/2 = 384 pixels). This procedure and examples of each intermediate stage are illustrated in Figure 5.
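To make the procedure concrete, the sketch below (our illustration, not the authors' code) computes the correlation maps of Equations (1)-(3), the per-class similarity maps of Equation (4), and the final median-thresholded binary mask. The handling of zero in-class values and the use of the median as the half-histogram threshold follow the description above; the numerical tolerances are assumptions.

```python
import numpy as np

def z_normalize(img):
    # Equations (2)-(3): zero-mean, unit-standard-deviation normalization.
    return (img - img.mean()) / img.std()

def correlation_map(img, ref):
    # Equation (1): pixel-wise absolute difference of the normalized images.
    return np.abs(z_normalize(img) - z_normalize(ref))

def build_mask(images, labels):
    # images: (num_samples, 12, 64) float array of sub-sampled banknotes;
    # labels: class index (type + direction) for each sample.
    classes = np.unique(labels)
    refs = {c: images[labels == c].mean(axis=0) for c in classes}  # reference image per class
    s_maps = []
    for c in classes:
        in_cls = np.mean([correlation_map(im, refs[c]) for im in images[labels == c]], axis=0)
        betw = np.mean([correlation_map(im, refs[c]) for im in images[labels != c]], axis=0)
        s = betw / np.maximum(in_cls, 1e-12)          # Equation (4)
        if np.any(in_cls == 0):                       # zero in-class pixels get the maximum value
            s[in_cls == 0] = s[in_cls > 0].max()
        s_maps.append(s)
    final = np.mean(s_maps, axis=0)                   # average over all classes
    return (final >= np.median(final)).astype(np.uint8)  # half-histogram threshold -> binary mask

# Toy usage with random data standing in for real sub-sampled images:
rng = np.random.default_rng(0)
imgs = rng.normal(size=(40, 12, 64))
mask = build_mask(imgs, np.repeat(np.arange(4), 10))
print(mask.sum(), "of", mask.size, "pixels selected")  # roughly half
```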

2.4. Feature Extraction by PCA and Classification by K-Means Algorithm

2.4.1. PCA Method and PCA-Based K-Means Algorithm

To further reduce the number of dimensions of the input vector, we apply the PCA method to banknote data selected using the similarity map in the previous step. PCA is a statistical procedure for representing the data in a lower-dimensional space by projecting original data onto the eigenvectors corresponding to the largest eigenvalues of the covariance matrix. The procedure for conducting PCA in our research is similar to that used in the eigenface method [20,22]. First, we calculate the covariance matrix of the mean-subtracted training data using the following formula:
C = (1/N) X X^T    (5)

where X = [(x1 − µ) (x2 − µ) … (xN − µ)]^T is the vector of mean-subtracted input data xi (i = 1, …, N), µ is the mean of the input data, and N is the number of original data values.
We then calculate the eigenvalues and eigenvectors of C. The N eigenvalues [λ1, λ2, …, λN] are sorted in descending order, and their corresponding eigenvectors are arranged row by row to form the matrix V, as follows:
V = [v1 v2 … vN]^T    (6)

where vi is the eigenvector corresponding to eigenvalue λi (i = 1, …, N). Matrix V is of size N × N. If the input data size must be reduced to M (M < N), X is projected onto the directions of the first M eigenvectors, yielding Y, which consists of the coefficients of the principal components of X, as shown in the following equation:
Y = VM X = [v1 v2 … vM]^T [(x1 − µ) (x2 − µ) … (xN − µ)]^T    (7)
The sizes of VM, X, and Y are M × N, N × 1, and M × 1, respectively. As a result, the banknote data are represented by the PCA coefficients in lower dimensionality, and we use these coefficients as inputs to the classifier in the next step.
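A compact sketch of this PCA step under the paper's notation (Equations (5)-(7)); the implementation choices (numpy's eigh routine and the descending sort) are ours, not taken from the paper.

```python
import numpy as np

def pca_fit(train, m):
    # Fit PCA on training rows (num_samples, n), following Equations (5)-(6).
    mu = train.mean(axis=0)
    x = train - mu
    cov = x.T @ x / len(train)               # covariance matrix C, Equation (5)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # sort descending
    return mu, eigvecs[:, order[:m]].T       # V_M: first m eigenvectors as rows

def pca_project(vec, mu, v_m):
    # Equation (7): project the mean-subtracted input vector onto V_M.
    return v_m @ (vec - mu)

# Toy usage: 388 selected pixels reduced to 80 PCA coefficients.
rng = np.random.default_rng(1)
train = rng.normal(size=(500, 388))
mu, v_m = pca_fit(train, 80)
print(pca_project(train[0], mu, v_m).shape)  # (80,)
```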
The features extracted by PCA are used for classification of the banknote type and direction. The number of classes is predefined as the number of denominations to be recognized multiplied by four directions. When a banknote is input into the system, its recognition features are extracted, and the type and direction are determined based on the Euclidean distance to the class centers (vectors), which are obtained using a K-means clustering algorithm [23]. For example, in the case of USD, the number of classes is 68, which equals 17 types of banknotes × 4 directions (A, B, C, and D). Therefore, there are 68 class centers in the training result for USD. Using the extracted features of an input banknote, the distances between this PCA feature vector and the 68 center vectors are calculated, and the banknote is determined to belong to the class with the center nearest to the banknote's feature vector (the nearest centroid classifier [24]).
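The classification stage might look like the following sketch. Scikit-learn's KMeans and the majority-vote labeling of clusters are our assumptions; the paper specifies only that class centers come from K-means clustering and that matching uses the nearest centroid.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed tooling; the paper cites [23] only

# Cluster training features with K-means (one cluster per type-direction
# class, e.g., K = 68 for USD), then assign an input banknote to the class
# of the nearest center (nearest centroid classifier [24]).
rng = np.random.default_rng(2)
n_classes = 68
train_features = rng.normal(size=(2000, 80))
train_labels = rng.integers(0, n_classes, size=2000)  # stand-in ground truth

kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(train_features)

# Map each cluster to the majority ground-truth label among its members
# (our assumption: the paper does not state how clusters are named).
cluster_to_class = np.array([
    np.bincount(train_labels[kmeans.labels_ == k], minlength=n_classes).argmax()
    for k in range(n_classes)
])

def classify(feature):
    dists = np.linalg.norm(kmeans.cluster_centers_ - feature, axis=1)
    # Return the predicted class plus the 1st and 2nd distances used later
    # for the rejection rule of Section 2.4.2.
    return cluster_to_class[dists.argmin()], np.sort(dists)[:2]

label, (d1, d2) = classify(rng.normal(size=80))
print(label, d1, d2)
```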

2.4.2. Determination of the Number of PCA Dimensions Used for Feature Extraction

A typical nearest-centroid-based classifier uses the shortest distance between the input vector and the center vectors as the class assignment criterion. However, there are cases in which the class assignment is uncertain, e.g., when the input vector lies nearly midway between two class centers. In such a case, the difference between the shortest and second-shortest distances from the input vector to the class centers is small. In this study, we considered both the shortest distance, referred to as the 1st distance, and the second-shortest distance, referred to as the 2nd distance, to evaluate the certainty and effectiveness of the classification results.
First, we draw a scatter plot of the 1st distances and the differences between the 1st and 2nd distances for the genuine acceptance cases of USD, as shown in Figure 6. The system must be able to reject unrecognizable inputs. To simulate rejected cases, we use test notes whose patterns have been modified, as shown in Figure 7. Because the test notes are unrecognizable, their distances to all the banknote class center vectors are greater than those of genuine banknotes. Therefore, their 1st distances are greater, and the positions of the test note cases on the matching distance scatter plot are far from those of the genuinely accepted banknote cases. Figure 6 shows an example of a scatter plot of matching distances of real USD banknotes and test notes.
It can be seen from Figure 6 that the matching distance distributions of banknotes and test notes are separated in the plots and that the degree of separation varies with the dimensionality of the extracted PCA features. When applying a threshold for rejection based on matching distances, the test notes must be rejected; consequently, error cases and uncertain banknote cases (banknotes that are damaged, soiled, faded, etc.) will also be rejected. If the distance distributions of genuinely accepted banknotes and test notes are well separated, the uncertain and error cases are easier to reject, and the recognition results are more reliable. To evaluate the separation between these matching distance distributions, we calculate each distribution's scatter value about its center using Fisher's criterion in LDA [23]. For each test, we obtain two distances dX and dY, namely the 1st distance and the difference between the 1st and 2nd distances. The center of each distribution is at the position (µX, µY). The scatter of the matching distance distribution is equivalent to a variance and is calculated as follows:
S = Σ_{i=0}^{N−1} [(dXi − µX)² + (dYi − µY)²]    (8)
where N is the number of samples in the distribution. The scatter values of the accepted cases and rejected cases are denoted by SA and SR, respectively. Using the Fisher criterion in LDA, our goal is to find the optimal number of PCA dimensions for banknote feature extraction so that the following ratio (called the F-ratio) is maximized:
F = SB / SW = [(µAX − µRX)² + (µAY − µRY)²] / (SA + SR)    (9)
where SB and SW are the between-class scatter and within-class scatter, respectively, and (µAX, µAY) and (µRX, µRY) are the centers of the acceptance and rejection distributions, respectively.
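As a rough sketch (not the authors' code), Equations (8) and (9) can be computed as follows. The toy distributions, seeds, and all names are illustrative assumptions.

```python
import numpy as np

def scatter(points):
    # Equation (8): total scatter of a 2-D distribution of (d_X, d_Y) pairs.
    center = points.mean(axis=0)
    return np.sum((points - center) ** 2), center

def fisher_ratio(accepted, rejected):
    # Equation (9): squared distance between the distribution centers over
    # the sum of the within-distribution scatters.
    s_a, c_a = scatter(accepted)
    s_r, c_r = scatter(rejected)
    return np.sum((c_a - c_r) ** 2) / (s_a + s_r)

# Toy usage: rows are (1st distance, difference between 1st and 2nd distances).
rng = np.random.default_rng(3)
banknotes = rng.normal(loc=(1.0, 0.8), scale=0.1, size=(1000, 2))
test_notes = rng.normal(loc=(4.0, 0.1), scale=0.3, size=(300, 2))
print(fisher_ratio(banknotes, test_notes))
# In the paper this ratio is evaluated for each candidate PCA dimensionality,
# and the dimensionality that maximizes it is selected.
```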

3. Experimental Results

In this study, we used a database consisting of 99,236 images captured from 49,618 USD banknotes on both sides. The images in the database include the four directions of 17 types of banknotes: $1, $2, $5, recent $5, most recent $5, $10, recent $10, most recent $10, $20, recent $20, most recent $20, $50, recent $50, most recent $50, $100, recent $100 and most recent $100. The number of images in each banknote class is shown in Table 2. In our experimental database of USD banknote images, both the number of images and the number of classes are comparatively larger than those in previous studies, as shown in Table 3.
First, we calculated the similarity map and applied half-histogram thresholding to obtain the mask for selecting the discriminative areas in a 64 × 12-pixel sub-sampled banknote image. Using the resulting binary mask shown in Figure 5, we selected 388 gray values of the pixels corresponding to the white areas of the mask from the sub-sampled image for banknote classification.
From the selected banknote image data, we extracted the classification features using PCA. In this step, the reliability of the classification results is affected by the dimensionality of the extracted PCA feature vector, as explained in Section 2.4. Therefore, in subsequent experiments, we determined the optimal PCA dimensionality that yields the best classification accuracy and reliability in terms of maximizing the F-ratio given by Equation (9). A USD test note database consisting of 2794 images was collected for our rejection test experiments. We considered test notes and falsely accepted banknotes to belong to the same distribution, such that the remaining distribution consists only of genuinely accepted cases. The experimental results for the F-ratio and classification accuracy for various numbers of PCA dimensions are shown in Figure 8. The error rate was calculated based on unsupervised K-means clustering for 68 classes in the USD banknote database.
It can be seen from Figure 8 that although there were no classification errors when 20, 40, or 60 PCA dimensions were used for feature extraction, the separation between the distributions of genuinely accepted and rejected cases was poor, as indicated by the low ratios between the distributions' scatter measures. The scatter plot of the matching distances when 20 PCA dimensions were used is shown in Figure 6a. The F-ratio reached a maximum value of 0.001275 at a dimensionality of 160. As the number of extracted features increases, much more processing time is required. Therefore, we considered two cases, 80 and 160 PCA dimensions, in subsequent experiments conducted to evaluate the recognition accuracy achieved. Scatter plots of the distances for the cases of 80 and 160 extracted PCA dimensions in the banknote and test note matching tests are shown in Figure 9.
With the parameters for banknote feature extraction determined, we evaluated the accuracy of the proposed recognition method in comparison with the accuracies reported in previous studies, as shown in Table 4. When we used 80 or 160 PCA dimensions for feature extraction, there were no changes in the error and rejection rates, which were 0.002% and 0.004%, respectively. The rejected cases correspond to banknotes for which the 1st matching distance was higher than the 1st threshold or for which the difference between the 1st and 2nd distances was lower than the 2nd threshold. The 2nd threshold is used to ensure that the recognition result is reliable, as explained in Section 2.4.2. In Figure 9, the rejected cases fall in the gray areas of the scatter plots. Because the combined error and rejection rate was 0.006%, the correct recognition rate of our method was 99.994%.
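For illustration, a minimal sketch of the two-threshold rejection rule described above; the threshold values below are placeholders, since the paper does not report the exact thresholds used.

```python
import numpy as np

def accept(dists, t1, t2):
    # Accept only when the 1st (smallest) matching distance is at most t1
    # and the margin between the 1st and 2nd distances is at least t2;
    # otherwise the banknote is rejected as unreliable.
    d = np.sort(np.asarray(dists))
    return d[0] <= t1 and (d[1] - d[0]) >= t2

print(accept([0.9, 2.4, 3.1], t1=1.5, t2=0.5))  # True: confident match
print(accept([0.9, 1.1, 3.1], t1=1.5, t2=0.5))  # False: ambiguous, rejected
```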
It can be seen from Table 4 that although the numbers of banknote images and classes in our USD database were greater than in other studies, the recognition accuracy of our method was higher than that of previous studies, in terms of both lower false recognition rates and lower rejection rates. Consequently, we can confirm that our proposed method outperforms previously proposed methods for USD banknote recognition.
The false recognition case for our method is shown in Figure 10, in which the uppermost image is the original banknote image and the middle and lower images are the deskewed image and the 64 × 12-pixel sub-sampled image of the upper image, respectively. The banknote on the left was misclassified as belonging to the class of the banknote on the right. This error case occurred because the image was captured from a folded banknote, as seen in Figure 10.
Figure 11 presents illustrations of rejection cases in our experiments, with the images arranged in the same manner as in Figure 10. Although these input banknote images were correctly recognized, their matching distances were too large, so they fell in the rejection region of the scatter plots in Figure 9, where the 1st distances are greater than the 1st threshold. It can be seen in Figure 11 that the upper case corresponds to images of a folded banknote, similar to the false recognition case in Figure 10. The remaining rejected images were captured from a severely damaged banknote with a tear, a folded corner, and writing on it. As a result, the 1st distances of these banknotes' features to their genuine classes were higher than the 1st rejection threshold, and the images were consequently rejected by the system.
In subsequent experiments, we applied the proposed method to other countries’ banknote image databases to confirm the performance of our method for different types of paper currency. The banknotes used in these experiments were South African rand (ZAR), Angolan kwanza (AOA) and Malawian kwacha (MWK). The numbers of banknote images and classes in the experimental databases are shown in Table 5. Figure 12 shows some examples of banknote images for each type of currency.
Because test notes similar to those used in the USD experiments were not available for the AOA, MWK and ZAR currencies, we tested the recognition accuracy of these databases using the same parameters as those for USD recognition. In addition, because there has been no previous research on recognition of paper banknotes from Angola, Malawi, or South Africa, we were not able to compare the accuracy of our method with any methods applied to these currencies in previous studies. Our proposed method correctly recognized 100% of the banknote images in the AOA and ZAR databases and 99.675% of the banknotes in the MWK database. The experimental results for the similarity maps and the recognition error rates for the AOA, MWK and ZAR databases are given in Table 6.
From the images of the similarity maps and the resulting masks shown in Table 6, most of the selected areas for recognition in the MWK and ZAR banknote images were on the two sides. The recognition area in the AOA banknote images was located in the middle of the images. The reason for these results is that the patterns on these banknotes, such as numbers, seals, photos, and portraits, are asymmetrically distributed. In the cases of MWK and ZAR, the photo and portrait patterns are printed far to the two sides of the banknotes, while on AOA banknotes, the feature patterns are more different in the middle areas, depending on the banknotes’ denominations and directions. As a result, the high-discriminating-power regions for AOA, MWK, and ZAR banknote images were determined and are shown in Table 6.
Examples of error cases in the MWK database are shown in Figure 13. In each pair, the lower, smaller image is the sub-sampled version of the upper banknote image. In this figure, the MK100 banknote images in the front-forward and back-forward directions shown in the left column were misclassified as MK500 banknotes in the same directions, examples of which are shown in the right column. It can be seen from Figure 13 that, in the areas selected for recognition by the mask in Table 6, the texture shapes in the sub-sampled images of the MK100 and MK500 banknotes are somewhat similar to each other, which resulted in misclassification in these cases. However, the banknote images in the AOA and ZAR databases were recognized with error rates of 0%, so we can confirm that our proposed method can be applied to paper currencies from various countries and achieve good matching accuracy.
We also measured the processing time of our recognition method. This experiment was conducted using the USD banknote image database and a desktop personal computer (PC) with a 2.33 GHz CPU and 4 GB of memory. In these experiments, we compared the time required when using 80 and 160 PCA dimensions for feature extraction. When 160 PCA dimensions were used, the system was able to process approximately 442 images/s (1000/2.26 ms). When the number of features extracted by PCA was halved, the processing speed increased to 568 images/s (1000/1.76 ms). Therefore, we used 80 PCA dimensions in the final configuration of our banknote recognition method. The average processing times are shown in Table 7.
In a previous study [16] in which a higher-powered PC (3.5-GHz CPU, 8 GB of memory) was used, up to 5.83 ms was required to recognize an image. Our proposed method required less processing time for the following reasons. First, because we used deskewed banknote images of smaller sizes (up to 400 × 120 pixels, compared to 1212 × 246 in [16]), the sub-sampling time was reduced. Second, the number of extracted features used for recognition in our method was smaller (80 dimensions, compared to 192 in [16]). Therefore, the feature extraction and matching processes required less time than with the method used in [16].
In addition, we measured the processing time of a banknote-counting machine using a Texas Instruments (TI) digital media processor (chip). This machine required approximately 1 ms to process and recognize one input banknote using our proposed method with 80 PCA features extracted from a visible-light banknote image.
We also calculated the total memory required to deploy our proposed method on a counting machine with limited resources. The measurement details are given in Table 8. The total memory usage of our method was 931,924 bytes. Our proposed method thus outperformed the USD recognition method described in [16], which required 15.6 ms of processing time and about 1.6 MB of memory.
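The Table 8 budget can be reproduced with simple arithmetic. The sketch below assumes 1-byte BYTE entries and 4-byte integers; the integer width is our assumption, as the paper does not state it explicitly.

```python
# Quick check of the Table 8 memory budget (sizes in bytes).
items = {
    "Original image":       1584 * 464 * 1,  # 734,976
    "Deskewed image":        400 * 120 * 1,  # 48,000
    "Sub-sampled image":      64 * 12 * 1,   # 768
    "Similarity map":        388 * 4,        # 1,552
    "Selected region":       388 * 1,        # 388
    "PCA transform matrix": 80 * 388 * 4,    # 124,160
    "PCA features":           80 * 4,        # 320
    "K-means centers":      80 * 68 * 4,     # 21,760
}
print(sum(items.values()))  # 931,924
```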

4. Conclusions

In this paper, we propose an efficient banknote recognition method based on the selection of high-discriminating-power regions of banknote images captured by visible-light sensors. The recognition regions are determined by calculating the ratio between the between-class and in-class dissimilarities of banknote images. Our experimental results show that, using a PCA-based K-means algorithm as the classifier, our proposed method was able to recognize various paper currencies by denomination and input direction with high accuracy, as indicated by low error and rejection rates. With the help of a test note database representing rejection cases, we were also able to evaluate the reliability of our method by measuring the separation between the genuine acceptance and rejection score distributions.
In future work, we plan to combine our recognition method with the evaluation of fitness for recirculation of banknotes to properly reject poor-quality banknotes that are difficult to recognize. In addition, we intend to extend our method for recognizing paper currencies from multiple countries, rather than recognition of denominations and directions of banknotes only from an individual country.

Acknowledgments

This research was supported by a grant from the Advanced Technology Center R&D Program funded by the Ministry of Trade, Industry & Energy of Korea (10039011).

Author Contributions

Tuyen Danh Pham and Kang Ryoung Park designed the overall system and made the banknote feature extraction algorithm. In addition, they wrote and revised the paper. Young Ho Park and Seung Yong Kwon implemented the banknote recognition algorithm. Dae Sik Jeong and Sungsoo Yoon helped with the dataset collection and experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, Q.; Zhang, Y.; Ma, Z.; Wang, Z.; Jin, B. A Banknote Orientation Recognition Method with BP Network. In Proceedings of the WRI Global Congress on Intelligent Systems, Xiamen, China, 19–21 May 2009; pp. 3–7.
  2. Zhang, E.-H.; Jiang, B.; Duan, J.-H.; Bian, Z.-Z. Research on Paper Currency Recognition by Neural Networks. In Proceedings of the International Conference on Machine Learning and Cybernetics, Xi’an, China, 2–5 November 2003; pp. 2193–2197.
  3. Gunaratna, D.A.K.S.; Kodikara, N.D.; Premaratne, H.L. ANN based currency recognition system using compressed gray scale and application for Sri Lankan currency notes-SLCRec. Proc. World Acad. Sci. Eng. Technol. 2008, 35, 235–240.
  4. Gai, S.; Yang, G.; Wan, M. Employing quaternion wavelet transform for banknote classification. Neurocomputing 2013, 118, 171–178.
  5. Sharma, B.; Kaur, A. Recognition of Indian paper currency based on LBP. Int. J. Comput. Appl. 2012, 59, 24–27.
  6. Hassanpour, H.; Farahabadi, P.M. Using hidden Markov models for paper currency recognition. Expert Syst. Appl. 2009, 36, 10105–10111.
  7. Jahangir, N.; Chowdhury, A.R. Bangladeshi Banknote Recognition by Neural Network with Axis Symmetrical Masks. In Proceedings of the 10th International Conference on Computer and Information Technology, Dhaka, Bangladesh, 27–29 December 2007; pp. 1–5.
  8. Takeda, F.; Nishikage, T. Multiple Kinds of Paper Currency Recognition Using Neural Network and Application for Euro Currency. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como, Italy, 24–27 July 2000; pp. 143–147.
  9. Takeda, F.; Omatu, S.; Onami, S. Recognition System of US Dollars Using a Neural Network with Random Masks. In Proceedings of the International Joint Conference on Neural Networks, Nagoya, Japan, 25–29 October 1993; pp. 2033–2036.
  10. Takeda, F.; Nishikage, T.; Omatu, S. Banknote recognition by means of optimized masks, neural networks and genetic algorithms. Eng. Appl. Artif. Intell. 1999, 12, 175–184.
  11. Hasanuzzaman, F.M.; Yang, X.; Tian, Y. Robust and Effective Component-based Banknote Recognition by SURF Features. In Proceedings of the 20th Annual Wireless and Optical Communications Conference, Newark, NJ, USA, 15–16 April 2011; pp. 1–6.
  12. Youn, S.; Choi, E.; Baek, Y.; Lee, C. Efficient multi-currency classification of CIS banknotes. Neurocomputing 2015, 156, 22–32.
  13. Ahmadi, A.; Omatu, S.; Kosaka, T. A PCA Based Method for Improving the Reliability of Bank Note Classifier Machines. In Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, Rome, Italy, 18–20 September 2003; pp. 494–499.
  14. Omatu, S.; Yoshioka, M.; Kosaka, Y. Reliable Banknote Classification Using Neural Networks. In Proceedings of the 3rd International Conference on Advanced Engineering Computing and Applications in Sciences, Sliema, Malta, 11–16 October 2009; pp. 35–40.
  15. Rong, Z.; Xiangxian, C.; Zhifeng, T.; Jidong, B. Design of Bill Acceptor for Automatic Fare Collection of Rail Transit. In Proceedings of the Enterprise Systems Conference, Shanghai, China, 2–3 August 2014; pp. 179–183.
  16. Park, Y.H.; Kwon, S.Y.; Pham, T.D.; Park, K.R.; Jeong, D.S.; Yoon, S. A high performance banknote recognition system based on a one-dimensional visible light line sensor. Sensors 2015, 15, 14093–14115.
  17. Choi, E.; Lee, J.; Yoon, J. Feature Extraction for Bank Note Classification Using Wavelet Transform. In Proceedings of the 18th International Conference on Pattern Recognition, Hong Kong, China, 20–24 August 2006; pp. 934–937.
  18. Vishnu, R.; Omman, B. Principal Features for Indian Currency Recognition. In Proceedings of the Annual IEEE India Conference, Pune, India, 11–13 December 2014; pp. 1–8.
  19. Verma, K.; Singh, B.K.; Agarwal, A. Indian Currency Recognition based on Texture Analysis. In Proceedings of the Nirma University International Conference on Engineering, Ahmedabad, Gujarat, India, 8–10 December 2011; pp. 1–5.
  20. Grijalva, F.; Rodriguez, J.C.; Larco, J.; Orozco, L. Smartphone Recognition of the U.S. Banknotes’ Denomination, for Visually Impaired People. In Proceedings of the IEEE ANDESCON, Bogota, Colombia, 15–17 September 2010; pp. 1–6.
  21. Smart K3. Available online: http://kisane.com/en/our-service/smart-k3/ (accessed on 11 February 2016).
  22. Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cogn. Neurosci. 1991, 3, 71–86.
  23. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley: New York, NY, USA, 2001.
  24. Nearest Centroid Classifier. Available online: https://en.wikipedia.org/w/index.php?title=Nearest_centroid_classifier&oldid=676582023 (accessed on 13 December 2015).
  25. Kagehiro, T.; Nagayoshi, H.; Sako, H. A Hierarchical Classification Method for US Bank Notes. In Proceedings of the IAPR Conference on Machine Vision Applications, Tsukuba Science City, Japan, 16–18 May 2005; pp. 206–209.
Figure 1. Flowchart of proposed method.
Figure 2. Examples of the set-up of our research: (a) Input banknotes; (b) Acquisition of image data.
Figure 3. Examples of input images for four banknote directions: (a) A direction; (b) B direction; (c) C direction; (d) D direction; (e–h) Corresponding banknote areas segmented from the images in (a–d), respectively; (i–l) Corresponding 64 × 12-pixel sub-sampled images of the banknote area segmented images in (e–h), respectively.
Figure 4. Examples of correlation maps and similarity map of front-forward recent US$100 banknote image: (a) Reference image; (b) Between-class correlation map; (c) In-class correlation map; (d) Similarity map.
Figure 5. Example of average similarity map obtained from similarity maps of all USD classes and binary mask obtained from similarity map for feature selection.
Figure 6. Scatter plots of matching distances of real USD banknotes and test notes obtained using 388 similarity map pixels and (a) 20 PCA dimensions; (b) 388 PCA dimensions.
Figure 7. Examples of test notes: (a) A direction; (b) B direction; (c) C direction; (d) D direction.
Figure 8. F-ratio and classification error rate according to the number of PCA dimensions used for feature extraction.
Figure 9. Scatter plot of matching distances of real USD banknotes and test notes using 388 similarity map pixels and (a) 80 PCA dimensions; (b) 160 PCA dimensions.
Figure 10. False recognition case of USD banknote: (a) Input banknote; (b) Falsely recognized class.
Figure 11. Rejection cases in USD banknote image database: (a) Case 1; (b) Case 2.
Figure 12. Examples of banknote images in the experimental databases: (a) Angolan kwanza (AOA); (b) Malawian kwacha (MWK); (c) South African rand (ZAR).
Figure 13. False recognition cases in MWK banknote image database: (a) Input banknote; (b) Falsely recognized class.
Table 1. Comparison of proposed and previous methods.

Category: Using the whole banknote image
Methods:
  • Using average brightness values of eight uniform rectangles on banknote images as the input for a BP network [1].
  • Edge characteristics of the linearly transformed banknote image used as the input for a three-layer NN classifier [2,3].
  • Using an HMM to model banknote textures as a random process [6].
  • Using GGD to extract statistical features from QWT coefficients [4].
Strengths:
  • Simple feature extraction method [1].
  • Makes use of all available recognition features on the banknote image.
Weaknesses:
  • Only focused on orientation recognition of one banknote type, the Renminbi (RMB) 100 Yuan [1].
  • Possibility of redundancy in the input data to the classifiers.
  • The need for an additional feature extraction or representation method because of large image sizes could reduce classification speed (HMM [6], QWT [4]).

Category: Using local regions on the banknote image
Methods:
  • Using symmetric masks on banknote images to select input features for NN classifiers [7,8].
  • Optimizing feature-selection masks using a GA [10].
  • Using SURF based on class-specific components of textures on banknote images [11].
  • Determining discriminant areas on banknotes by multi-template correlation matching [12].
Strengths:
  • Helps reduce the size of the input data to the classifier and the processing time.
  • High-discriminating-power regions on banknote images could be located [10,11,12].
Weaknesses:
  • Fixed recognition regions on banknote images were not the optimal discriminative areas [7,8].
  • Difficult to apply to embedded systems with limited resources owing to the use of complex features (SURF [11]).

Category: Using statistical analysis to extract features from the banknote image
Methods:
  • Using PCA on data acquired by various sensors and LVQ for classification [13,14].
  • Applying PCA for feature extraction from banknote data acquired by various sensors: IR, UV, magnetic, and fluorescence [15].
  • Using PCA for feature extraction, SVM for pre-classification, and K-means for denomination recognition [16].
  • Using CA on features extracted by wavelet transform [17].
Strengths:
  • Helps reduce the size of the input data to the classifier.
  • Can be applied to feature extraction from data acquired by multiple sensors [13,14,15].
Weakness:
  • Additional processing time and resources are required for feature extraction by statistical analysis (memory for PCA eigenvector data).

Category: Combining two feature extraction methods: local region definition and statistical analysis
Methods:
  • ROIs selected from five security features on Indian banknote images, with PCA used for dimensionality reduction of the data extracted from the ROIs [18].
  • Using LDA for feature reduction on ROIs containing textures cropped from Indian banknote images [19].
  • Using PCA for feature extraction from the region on the right part of the detected banknote image [20].
  • Using PCA-based feature extraction on banknote areas selected by a similarity map (proposed method).
Strength:
  • The input features to the classifiers are reduced in dimensionality and optimized by statistical analysis.
Weaknesses:
  • Uses large scanned color banknote images that are difficult to process on embedded systems [18].
  • ROI selection had to be conducted with the help of an external tool (Mazda [19]).
  • The region selected for recognition on the right part of the banknote image is not necessarily optimal [20].
  • Calculation of a similarity map is required (proposed method).
Table 2. Numbers of banknote images in experimental USD database.

Type of Banknote | A Direction | B Direction | C Direction | D Direction
$1 | 2018 | 2016 | 2018 | 2016
$2 | 1626 | 1660 | 1626 | 1660
$5 | 849 | 834 | 849 | 834
Recent $5 | 1208 | 1218 | 1208 | 1218
Most Recent $5 | 1795 | 1797 | 1795 | 1797
$10 | 1498 | 1509 | 1498 | 1509
Recent $10 | 1258 | 1277 | 1258 | 1277
Most Recent $10 | 1564 | 1565 | 1564 | 1565
$20 | 1651 | 1647 | 1651 | 1647
Recent $20 | 1063 | 1069 | 1063 | 1069
Most Recent $20 | 1965 | 1959 | 1965 | 1959
$50 | 1270 | 1262 | 1270 | 1262
Recent $50 | 1397 | 1343 | 1397 | 1343
Most Recent $50 | 1479 | 1573 | 1479 | 1573
$100 | 1011 | 1126 | 1011 | 1126
Recent $100 | 1964 | 1761 | 1964 | 1761
Most Recent $100 | 1250 | 1136 | 1250 | 1136
Table 3. Numbers of images and classes in the experimental databases used in previous studies and in this study.

Study | Number of Images | Number of Classes
[4] | 15,000 | 24
[13] | 3600 | 24
[14] | 3570 | 24
[16] | 61,240 | 64
[25] | 65,700 | 48
This study | 99,236 | 68
Table 4. Comparison of recognition accuracy of the proposed method and previous studies.

Recognition Method | Experimental USD Banknote Image Database | Error Rate (%) | Rejection Rate (%)
[4] | 15,000 images/24 classes | 0.120 | 0.580
[16] | 61,240 images/64 classes | 0.114 | 0.000
Proposed method | 99,236 images/68 classes | 0.002 | 0.004
Table 5. Numbers of banknote images and classes in the experimental databases of Angolan kwanza (AOA), Malawian kwacha (MWK) and South African rand (ZAR).

Currency | Number of Images | Number of Classes
AOA | 1366 | 36
MWK | 2464 | 24
ZAR | 760 | 40
Table 6. Experimental results for the AOA, MWK, and ZAR banknote image databases.

Currency | Similarity Map | Binary Mask | Error Rate (%)
AOA | [image] | [image] | 0.000
MWK | [image] | [image] | 0.325
ZAR | [image] | [image] | 0.000
Table 7. Processing time of the proposed recognition method on desktop computer (unit: ms).

Number of PCA Dimensions | Sub-Sampling | Feature Extraction | K-Means Matching | Total Processing Time
160 | 1.23 | 0.78 | 0.25 | 2.26
80 (Proposed) | 1.23 | 0.40 | 0.13 | 1.76
Table 8. Calculation of memory usage of our proposed method.

Category | Data Size | Data Type | Memory Usage (Bytes)
Original image | 1584 × 464 | BYTE | 734,976
Deskewed image | 400 × 120 | BYTE | 48,000
Sub-sampled image | 64 × 12 | BYTE | 768
Similarity map | 388 | Integer | 1552
Selected region by similarity map | 388 | BYTE | 388
PCA transform matrix | 80 × 388 | Integer | 124,160
Extracted PCA features | 80 | Integer | 320
K-means centers | 80 × 68 | Integer | 21,760
Total | | | 931,924
