Article

MBDM: Multinational Banknote Detecting Model for Assisting Visually Impaired People

Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 04620, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1392; https://doi.org/10.3390/math11061392
Submission received: 1 February 2023 / Revised: 8 March 2023 / Accepted: 11 March 2023 / Published: 13 March 2023

Abstract
With the proliferation of smartphones and advances in deep learning, object recognition using built-in smartphone cameras has become possible. One application of this technology is to assist visually impaired individuals through the detection of banknotes of multiple nationalities. Previous studies have focused on single-national banknote detection; in contrast, this study addresses the practical need to detect banknotes of any nationality. To this end, we propose a multinational banknote detection model (MBDM) and a method for multinational banknote detection based on mosaic data augmentation. The effectiveness of the MBDM is demonstrated through evaluation on a Korean won (KRW) banknote and coin database built using a smartphone camera, a US dollar (USD) and Euro (EUR) banknote database, and an open Jordanian dinar (JOD) database. The results show that the MBDM achieves a precision of 0.8396, a recall of 0.9334, and an F1 score of 0.8840, outperforming state-of-the-art methods.

1. Introduction

Smartphones equipped with deep learning technology can recognize a variety of objects using built-in cameras, including banknotes of multiple nationalities [1,2]. Previous studies on banknote detection have mainly focused on either handcrafted feature-based or deep feature-based methods. Handcrafted feature-based methods such as speeded-up robust features (SURF) have achieved good results in banknote detection [1,2,3,4]. However, deep feature-based methods such as you only look once (YOLO)-v3 have been shown to outperform handcrafted feature-based methods in varied environments, with an F1 score of 0.9184 for YOLO-v3 compared with 0.8139 for SURF [5]. Existing studies have primarily focused on single-nationality banknote detection; in practice, however, a banknote captured by the camera must be detected regardless of its nationality. In this study, we addressed this problem and devised a multinational banknote detection model (MBDM) and a method for multinational banknote detection based on mosaic data augmentation. The main contributions of this study are as follows:
- This study is the first on smartphone-based multinational banknote detection.
- A novel MBDM was developed. The MBDM consists of 69 layers and uses a four-step process for feature extraction and final detection. Its feature map configuration is particularly effective at detecting small objects such as coins.
- The detection performance of the MBDM was improved by using mosaic data augmentation to increase the training data.
- The self-constructed Dongguk Multinational Banknote database version 1 (DMB v1) and the MBDM developed in this study were made publicly available on GitHub [6] for fair evaluation by other researchers.
The remainder of this paper is structured as follows. Section 2 reviews related work on single-national and multinational banknote detection. Section 3 describes the proposed method in detail: the overall procedure, mosaic data augmentation, the architecture of the MBDM, and the mathematical fundamentals of bounding box detection by the MBDM. Section 4 presents and analyzes the experimental results on the Dongguk Korean Banknote database version 1 (DKB v1) [5] of Korean won (KRW), on US dollar (USD) and Euro (EUR) databases, and on an open Jordanian dinar (JOD) database. Finally, Section 5 summarizes the study and its main findings.

2. Related Works

Existing banknote detection studies can be broadly classified into handcrafted feature-based methods and deep feature-based methods. In handcrafted feature-based methods, techniques such as SURF, the fast radial symmetry (FRS) transform, and principal component analysis (PCA) are applied to banknote images for detection and recognition. The SURF method is particularly fast and efficient at localizing and matching banknote features compared with other handcrafted feature-based methods [1,2,3,4]. SURF is robust to image rotation and scale changes; however, it may mistakenly recognize the background as a banknote. To address this issue, some studies have used FRS-based banknote recognition by extracting gradients from the numerical part of the banknote [7]. Other handcrafted feature-based methods, such as PCA, extract denomination types and regions of interest (RoIs) for each denomination to create eigen-images and then perform banknote recognition based on these eigen-images [8]. There have also been studies that used the K-nearest neighbors (KNN) classifier [9] or the decision tree classifier (DTC) [10] to perform banknote detection on a Malaysian banknote database [11]. However, handcrafted feature-based methods can struggle to maintain good detection or classification performance on banknote data acquired in different environments, which has motivated the development of deep feature-based methods.
Deep convolutional neural networks (DCNNs) have shown better detection and classification performance than handcrafted feature-based methods. Several types of DCNNs have been applied to banknote detection, including MobileNet, AlexNet, Faster R-CNN, YOLO-v3, and self-designed CNN models. MobileNet [12] is useful for applications with low hardware requirements because it has a small number of parameters and requires minimal computation compared with DCNNs with many layers, such as GoogleNet or the visual geometry group (VGG)-16 network. In contrast, Faster R-CNN requires high-performance hardware such as a graphics processing unit (GPU) owing to its large number of layers, making it difficult to use on lower-performance wearable devices. Accordingly, one study achieved good performance by applying MobileNet to banknote detection on a wearable device [13]. AlexNet [14], which has a structure similar to LeNet and consists of eight layers (five convolution layers and three fully connected layers), can be trained using two GPUs in parallel. One study used this CNN for deep feature extraction and then classified banknote objects in images using a support vector machine (SVM) and histogram of oriented gradients (HOG) [15]. Faster R-CNN [16], which is known for its excellent detection performance among DCNNs, has also been applied to banknote detection [5]. It obtains feature maps through feature extraction using a CNN such as VGG [17] or a residual network (ResNet) [18] and then uses these feature maps as input to a region proposal network (RPN) and classifier, which are trained to perform banknote detection. The resulting detection boxes are filtered for false positives (FPs) through three postprocessing steps: FPs are removed based on the width-to-height ratio of the detection boxes; FPs are eliminated if they do not fall within the appropriate size range for coins and bills; and finally, when there are multiple detection boxes for a single object in an image, only the detection box with the highest score is considered a true positive (TP) and the rest are eliminated, improving detection performance [5] (a simplified sketch of this filtering is given below). YOLO-v3 [19,20] has also been applied in some studies. To create a faster model than VGG-16-based feature extraction, Darknet-19 was developed and used in YOLO-v2 [21]; YOLO-v3 improves upon YOLO-v2 by using Darknet-53, which has more layers than Darknet-19. YOLO-v3 performs better in object detection, particularly for small objects, and achieves real-time detection owing to its improved processing speed. It has been used for banknote detection on Indian [19] and Iraqi [20] banknote datasets with good speed and performance. In addition, one study performed detection and classification using a shallow CNN with Euro and Mexican banknotes [22], where the Euro and Mexican banknotes were trained and tested separately. While these deep feature-based studies have focused on bills, there are almost no existing studies on small objects such as coins. Moreover, although deep feature-based methods have generally shown better detection performance than handcrafted feature-based methods, most existing studies have focused only on single-national banknote detection; there are no existing studies on multinational banknote detection regardless of the nationality of the banknote.
In practice, however, banknotes of any nationality should be detectable when using a smartphone camera. To address this problem, we propose a method for multinational banknote detection using the MBDM and mosaic data augmentation.
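For illustration, the following is a minimal sketch of the three-step false-positive filtering described above for [5]. The function, the dictionary-based detection format, and the per-class reduction in step 3 are our own simplifications, and the ratio and size ranges are placeholders rather than values from that study.

```python
def postprocess(dets, wh_ratio_range=(0.4, 2.5), size_range=(1e3, 1e6)):
    """Sketch of the three-step FP filtering described for [5] above.

    dets: list of dicts with keys x1, y1, x2, y2, score, label.
    The numeric ranges are placeholders, not values from [5].
    """
    kept = []
    for box in dets:
        w, h = box["x2"] - box["x1"], box["y2"] - box["y1"]
        if not (wh_ratio_range[0] <= w / h <= wh_ratio_range[1]):
            continue  # step 1: implausible width-to-height ratio for a banknote
        if not (size_range[0] <= w * h <= size_range[1]):
            continue  # step 2: area outside the expected coin/bill size range
        kept.append(box)
    # Step 3: keep only the highest-scoring box per detected class
    # (an approximation of "one box per object" as described in [5]).
    best = {}
    for box in kept:
        if box["label"] not in best or box["score"] > best[box["label"]]["score"]:
            best[box["label"]] = box
    return list(best.values())
```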
Although not related to banknote detection, other studies have proposed motion prediction for beating-heart surgery with a gated recurrent unit (GRU) [23], an improved stereo matching algorithm based on joint similarity measures and adaptive weights [24], dynamic soft-tissue reconstruction using a stereo endoscope based on a single-layer network [25], and endoscope image mosaicking based on pyramid oriented FAST and rotated BRIEF (ORB) features [26]. Table 1 compares the strengths and weaknesses of the previous studies and the proposed method, dividing them into single-nationality and multinationality banknote databases.

3. Materials and Methods

3.1. Overall Procedure of Proposed Method

Figure 1 illustrates the overall process of the proposed banknote detection method. The first step is to preprocess the training images to improve the training performance. This is performed using mosaic data augmentation (described in detail in Section 3.2). After preprocessing, the training data are applied to the MBDM for training. When a testing image is input to the trained MBDM, the MBDM outputs the location, type, and classification probability of multinational banknotes.

3.2. Mosaic Data Augmentation

As shown in Figure 1, the proposed method uses mosaic data augmentation to efficiently train the banknote dataset. Mosaic data augmentation is a Bag of Freebies (BoF) data augmentation technique introduced in YOLO-v4 [27]; BoF techniques improve detection accuracy by modifying the training method or increasing the training cost of models trained in an offline environment. The advantage of BoF is that it increases the variability of the input images and makes the detection model more robust. Because mosaic data augmentation combines banknote data of various nationalities, it is particularly useful for datasets with different classes, and it proved effective in this study. As shown in Figure 2a, mosaic data augmentation combines four images from four different classes (dollar, Euro, Korean won, and Jordanian dinar) into one image. The augmentation was performed based on the image size of the banknote dataset; as shown in the right image of Figure 2a, the four class images were randomly placed in four zones, with their annotations adjusted according to their positions. Thus, four individual annotations from four classes are stored in one mosaic-augmented image. Using four-image mosaic data augmentation corresponds to a mini-batch size of four, and a clear impact of this mini-batch size was observed. Whereas the mosaic data augmentation introduced in YOLO-v4 used only four images, in this study we also used two images (mini-batch size of two) and six images (mini-batch size of six) for comparison, as illustrated in Figure 2b,c. A simplified sketch of the four-image case is given below.
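To make the augmentation step concrete, the following is a minimal NumPy sketch of four-image mosaic augmentation with annotation adjustment. It is written for this description rather than taken from the released code; the function names and the nearest-neighbor resize are our own simplifications (a real pipeline would typically use bilinear interpolation).

```python
import random
import numpy as np

def mosaic_4(images, boxes_list, out_size=1024):
    """Combine four images (and their box annotations) into one mosaic.

    images: list of four HxWx3 uint8 arrays; boxes_list: list of four
    (N_i, 4) float arrays of [x1, y1, x2, y2] boxes in pixel coordinates.
    Names and details are illustrative, not from the paper's code.
    """
    half = out_size // 2
    mosaic = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    merged_boxes = []
    # Randomly assign each source image to one of the four quadrants.
    offsets = [(0, 0), (half, 0), (0, half), (half, half)]
    random.shuffle(offsets)
    for img, boxes, (ox, oy) in zip(images, boxes_list, offsets):
        h, w = img.shape[:2]
        # Nearest-neighbor index resize to fill one quadrant.
        ys = np.arange(half) * h // half
        xs = np.arange(half) * w // half
        mosaic[oy:oy + half, ox:ox + half] = img[ys[:, None], xs[None, :]]
        # Scale and shift the annotations to the quadrant position.
        scale = np.array([half / w, half / h, half / w, half / h])
        shift = np.array([ox, oy, ox, oy])
        merged_boxes.append(boxes * scale + shift)
    return mosaic, np.concatenate(merged_boxes, axis=0)
```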

3.3. Architecture of MBDM

The preprocessed data are fed to the MBDM, a CNN model, for training. The MBDM architecture is based on and improves upon the YOLO-v3 model. It begins with a 3 × 3 (stride = 1) convolution layer and a 3 × 3/2 (stride = 2) convolution layer. It then uses a series of 1 × 1 (stride = 1) convolution, 3 × 3 (stride = 1) convolution, and residual layers, which together form a set, followed by a 3 × 3/2 (stride = 2) convolution layer for downsampling. After this downsampling, the set is repeated eight times; this eight-fold stage occurs three times, with a 3 × 3/2 (stride = 2) downsampling convolution applied at the end of each stage. These three eight-fold stages of convolution layers are referred to as the front box set, the middle box set, and the last box set. Training is then completed by applying average pooling, a fully connected layer, and an independent logistic classifier to the output obtained after passing through the final set four times. The MBDM consists of a total of 69 convolution layers. Figure 3 provides a visual representation of the structure of the MBDM, and a sketch of one set is given below. The lower part of the figure shows the first prediction proceeding after feature extraction: the green boxes are 1 × 1 convolution layers, the yellow boxes are upsampling operations, and the purple boxes are concatenations. After the first prediction, 1 × 1 convolution and upsampling are performed, followed by concatenation with the features of the front box set. The second prediction then follows the same process as the first, with the second concatenation using the features of the middle box set. The third prediction is followed by the fourth prediction, which is made by concatenating with the features of the last box set.
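To make the "set" structure concrete, the following is a minimal PyTorch sketch of one such set. It assumes, in the YOLO-v3/Darknet style that the MBDM builds on, that each convolution is followed by batch normalization and leaky ReLU; the released MBDM code may differ in these details.

```python
import torch.nn as nn

class ResidualSet(nn.Module):
    """One 'set' as described above: a 1 x 1 convolution that halves the
    channels, a 3 x 3 convolution that restores them, and a residual
    connection. Normalization/activation choices are assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, stride=1),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1),
            nn.Conv2d(channels // 2, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)  # the residual layer in each set

# Between stages, a 3 x 3 stride-2 convolution downsamples, e.g.
# nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1), after which the
# set is repeated (e.g., eight times; see Table 2, layers 15-17).
```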
Four prediction feature maps are generated in this method, with dimensions of 13 × 13, 26 × 26, 52 × 52, and 104 × 104, as shown in Figure 4. These feature maps are input into a fully convolutional network (FCN) consisting of 1 × 1 and 3 × 3 convolution layers, and the number of output channels is increased to 512. The feature map is then upsampled by a factor of two and concatenated with the feature map at the next higher resolution. This process is repeated to obtain feature maps at four scales. The number of output channels of the feature map at each scale is determined by the number of anchor boxes (3), the number of bounding box offsets (4), the objectness score (1), and the number of classes (26). Hence, the number of output channels of the 1 × 1 convolution layer at each scale is box number × (bounding box offsets + objectness score + class number) = 3 × (4 + 1 + 26) = 93, and the feature maps at the four scales have dimensions of 104 × 104 (×93), 52 × 52 (×93), 26 × 26 (×93), and 13 × 13 (×93); this count is verified in the snippet below. Prediction is performed at larger scales as the feature vector size decreases and at smaller scales as the feature vector size increases. Table 2 shows the detailed structure of the MBDM.
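The per-scale channel count can be verified with a one-line helper (the names here are illustrative):

```python
def head_channels(num_anchors=3, num_box_offsets=4, num_classes=26):
    """Output channels of the 1 x 1 prediction convolution at each scale:
    anchors x (box offsets + objectness + classes) = 3 x (4 + 1 + 26)."""
    return num_anchors * (num_box_offsets + 1 + num_classes)

assert head_channels() == 93  # matches the (x93) maps at all four scales
```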

3.4. Mathematical Fundamentals of Bounding Box Detection by MBDM

In training, bounding boxes are defined using $v_x$, $v_y$, $v_w$, and $v_h$ ($v_x$ and $v_y$ represent the x- and y-coordinates of the bounding box in the grid, and $v_w$ and $v_h$ represent its width and height, respectively). The method for determining bounding boxes from YOLO is used, whereby the location coordinates of the bounding box are predicted relative to the grid cell. Further, $t_x$, $t_y$, $t_w$, and $t_h$ represent the raw predicted x- and y-coordinates and width and height, respectively. $c_x$ and $c_y$ represent the offset from the top left corner of the grid cell, and $p_w$ and $p_h$ represent the width and height of the anchor. $\sigma(\cdot)$ in Equations (1) and (2) is a logistic activation function such as the sigmoid function [21]. The bounding box is predicted from $t_x$, $t_y$, $t_w$, $t_h$ and $c_x$, $c_y$, $p_w$, $p_h$ as follows:

$$v_x = \sigma(t_x) + c_x \quad (1)$$

$$v_y = \sigma(t_y) + c_y \quad (2)$$

$$v_w = p_w e^{t_w} \quad (3)$$

$$v_h = p_h e^{t_h} \quad (4)$$
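As a worked illustration of Equations (1)–(4), the following NumPy sketch decodes one predicted box; the function and argument names are ours, not from the released code.

```python
import numpy as np

def decode_box(t, cell_xy, anchor_wh):
    """Decode one predicted bounding box per Equations (1)-(4).

    t: raw network outputs (t_x, t_y, t_w, t_h); cell_xy: (c_x, c_y), the
    grid cell's top-left offset; anchor_wh: (p_w, p_h), the anchor dimensions.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    tx, ty, tw, th = t
    cx, cy = cell_xy
    pw, ph = anchor_wh
    vx = sigmoid(tx) + cx   # Equation (1)
    vy = sigmoid(ty) + cy   # Equation (2)
    vw = pw * np.exp(tw)    # Equation (3)
    vh = ph * np.exp(th)    # Equation (4)
    return vx, vy, vw, vh
```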

4. Experiment Results and Analysis

4.1. Experimental Database and Setup

In this study, we used the Dongguk Korean Banknote database version 1 (DKB v1) [5] and an open Jordanian dinar (JOD) database. We also used an integrated multinational banknote database, which included images of USD, EUR, and KRW captured using the Samsung Galaxy Note10 camera [28]. The DKB v1 consists of eight classes of KRW currency, comprising 10, 50, 100, and 500 won coins and 1000, 5000, 10,000, and 50,000 won bills. It includes a total of 6400 images, with 800 images per class, at a resolution of 1920 × 1080 pixels. The JOD database, which includes both coins and bills, consists of nine classes (1, 5, and 10 piastres and 1/4, 1/2, 1, 5, 10, and 20 dinars) with an image resolution of 3264 × 2448 pixels. The acquired USD database includes six classes of 1, 5, 10, 20, 50, and 100 dollar bills, with a total of 120 images (20 images per class) and an image resolution of 1920 × 1080 pixels. The EUR database consists of five classes of 5, 10, 20, 50, and 100 Euro bills, with a total of 100 images (20 images per class) and an image resolution of 1920 × 1080 pixels. The input images of all four datasets for the MBDM in Figure 1 are resized to 1024 × 1024 pixels via bilinear interpolation (see the snippet below); therefore, a minimum resolution of 1024 × 1024 pixels is required to apply our method. Data augmentation was applied to the USD and EUR databases because of their small numbers of training images. To mimic real-world conditions, the banknote images in all databases were acquired at various angles, with folds, and with varying contrast. A detailed description of the experimental databases is provided in Table 3, and examples of database images are presented in Figure 5.
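The 1024 × 1024 bilinear resizing step can be expressed in one line; Pillow is used here only for illustration, and the file name is a placeholder.

```python
from PIL import Image

img = Image.open("banknote.jpg")                              # placeholder path
img_1024 = img.resize((1024, 1024), resample=Image.BILINEAR)  # bilinear interpolation
```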
We trained and tested our algorithm on a desktop computer equipped with an Intel® Core™ i7-950 CPU (3.06 GHz), 20 gigabytes (GB) of memory, and an NVIDIA GeForce GTX 1070 graphics card with 1920 compute unified device architecture (CUDA) cores [29]. The algorithm was implemented using PyTorch [30] in Python [31] and utilized CUDA (version 10.0) [32] with the CUDA deep neural network library (CUDNN) (version 7.1.4) [33].

4.2. Training of Proposed Method

For this study, as described in Table 3, the total of 21,020 images was divided into training and testing sets of 10,510 images each, and two-fold cross-validation was performed. That is, in the first fold, 10,510 images were used for training and the remaining 10,510 images for testing; in the second fold, the two sets were exchanged. The final performance was obtained by averaging the accuracy across the two folds. In addition, we selected 1000 images of each nationality from the training set as a validation dataset, totaling 4000 images. The training parameters were a base learning rate of 0.001, a batch size of 1, a gamma of 0.1, a weight decay of 0.0005, and 150 epochs, with an adaptive moment estimation (Adam) optimizer; a sketch of this configuration is given at the end of this subsection. Figure 6a,b show the training loss and accuracy, as well as the validation loss and accuracy, for the first and second folds of the MBDM, respectively. As shown in these figures, the training loss and accuracy converge as the number of epochs increases, indicating that the MBDM has been adequately trained on the training data. The convergence of the validation loss and accuracy likewise suggests that the MBDM is not overfitted to the training data.
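A minimal sketch of this training configuration is shown below; `model`, `train_loader`, and `criterion` are placeholders for the MBDM network, the augmented banknote data, and the detection loss, and the learning-rate decay step size is assumed because only the gamma of 0.1 is reported.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)
# gamma = 0.1 as reported; step_size is a placeholder since it is not stated.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(150):                   # 150 epochs, as reported
    for images, targets in train_loader:   # batch size of 1, as reported
        optimizer.zero_grad()
        loss = criterion(model(images), targets)  # detection loss
        loss.backward()
        optimizer.step()
    scheduler.step()
```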

4.3. Testing of Proposed Method

4.3.1. Evaluation Metric

To measure the testing performance, we calculated true positive (TP), false positive (FP), and false negative (FN) values based on the intersection over union (IoU) between each detection box produced by the proposed MBDM and the corresponding ground-truth box. We then used the TP, FP, and FN counts to calculate precision, recall, and F1 score using the equations below. In Equations (5)–(7), #TP, #FP, and #FN denote the numbers of TPs, FPs, and FNs, respectively [34].

$$\text{Precision} = \frac{\#\text{TP}}{\#\text{TP} + \#\text{FP}} \quad (5)$$

$$\text{Recall} = \frac{\#\text{TP}}{\#\text{TP} + \#\text{FN}} \quad (6)$$

$$\text{F1 score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (7)$$
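A minimal sketch of the metric computation is shown below; the TP/FP/FN counts in the final line are chosen only so that the output reproduces the headline numbers reported later (precision 0.8396, recall 0.9334, F1 0.8840) and are not the actual experimental counts.

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def prf1(num_tp, num_fp, num_fn):
    """Precision, recall, and F1 score per Equations (5)-(7)."""
    precision = num_tp / (num_tp + num_fp)
    recall = num_tp / (num_tp + num_fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A detection counts as a TP when iou(detection, ground_truth) exceeds the threshold.
print(prf1(8396, 1604, 599))  # -> approximately (0.8396, 0.9334, 0.8840)
```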

4.3.2. Ablation Studies

We conducted ablation studies to compare three design choices. The first involved applying Retinex filtering [35] to the input image before the MBDM; the second involved using grouped convolution [36] in the MBDM; and the third involved using mosaic data augmentation for MBDM training. As shown in Table 4, both precision and recall were lower when Retinex filtering was applied than when it was not. Grouped convolution was applied to improve the speed and performance of the operation; however, Table 5 shows that performance was higher without it. Table 6 shows that mosaic augmentation with four images resulted in higher accuracy than with two or six images, and also higher accuracy than no mosaic augmentation. Although the authors of the existing method [27] showed that four-image mosaic augmentation was effective, they did not perform ablation studies to select the optimal number of mosaic images. Furthermore, they used the ImageNet (ILSVRC 2012 val) and MS COCO (test-dev 2017) datasets, whose image characteristics differ from our experimental datasets of USD, EUR, KRW, and JOD banknotes. Therefore, we performed ablation studies to select the optimal number of mosaic images, as shown in Table 6, and found that four images yield the highest accuracy.
As shown in Figure 3 and Table 2, the proposed MBDM contains three box sets. We added one more box set with the same configuration to create the MBDM (deeper layers), a model with 85 layers, and compared it with the MBDM (normal layers) in Table 7. Without mosaic augmentation, the MBDM (deeper layers) exhibited lower precision but higher recall than the MBDM (normal layers), and a better final F1 score. However, when the same four-image mosaic augmentation was applied, the MBDM (normal layers) showed higher accuracy, and the MBDM (normal layers) with four-image mosaic augmentation showed the highest performance overall.

4.3.3. Comparisons with the State-of-the-Art Methods

In this subsection, we compare the performance of the proposed method with that of state-of-the-art methods, including the MobileNet-based banknote detection method [13], the Faster R-CNN-based method [5], the YOLO v3-based method [19,20], and YOLO v2 [21]. As shown in Table 8, the proposed method outperformed the state-of-the-art methods on the multinational banknote database. MobileNet [13] exhibited the worst performance, while Faster R-CNN performed better than YOLO v2. In terms of precision, Faster R-CNN achieved a slightly higher score than YOLO v3; however, YOLO v3 achieved better recall and, among these four compared methods, the highest F1 score. Table 9 and Table 10 compare the performance of the state-of-the-art methods and the proposed method for USD and EUR. As described in Section 4.1, the USD and EUR databases contain only bills; therefore, we compare only the detection accuracy for bills. As shown in Table 9 and Table 10, the proposed method showed higher performance than the state-of-the-art methods. Finally, Table 11 and Table 12 compare the performance of the state-of-the-art methods and the proposed method for KRW and JOD. As described in Section 4.1, the KRW and JOD databases contain both coins and bills, so we compare the detection accuracy for coins and bills separately. As shown in Table 11 and Table 12, the proposed method again showed higher performance than the state-of-the-art methods. Comparing Table 9 and Table 10 with Table 11 and Table 12, the accuracy of the proposed method is relatively higher on the USD and EUR databases than on the KRW and JOD databases. This is because the KRW and JOD databases contain coin data; coins are smaller and reflect more light from their metal surfaces, leading to lower detection accuracy than for bills, which are larger and reflect relatively less light. Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 compare the results of Table 8, Table 9, Table 10, Table 11 and Table 12 according to the IoU threshold; evidently, the proposed method showed higher detection accuracy than the state-of-the-art methods.

4.3.4. Analysis

We analyzed the experimental results to illustrate both successful detections and cases in which detection errors occurred. As shown in Figure 12, of the six detection results, the three examples in Figure 12a are USD 5, EUR 10, and EUR 100, representing successful object detection and classification. In particular, Figure 12a shows that these bills were properly detected despite complex backgrounds, various sizes, and in-plane rotation. The three examples in Figure 12b are KRW 50, USD 50, and JOD 10, in which detection errors occurred. The KRW 50 coin is smaller than other banknote objects, and especially smaller than other coins, so it was incorrectly detected as a different coin class. In the case of USD 50, a detection error occurred because of the complicated background, whereas for JOD 10, the error was caused by the folded state of the bill.
In the next experiment, we extracted gradient-weighted class activation map (GradCAM) [38] visualizations from each layer of the proposed MBDM; the results are shown in Figure 13. GradCAM images visually indicate the importance of the extracted features by displaying important features in colors close to red and unimportant features in colors close to blue [38]. We obtained GradCAM visualizations from conv_8, conv_12, conv_16, conv_20, and conv_24 of the feature extractor in Table 2, using a JOD 20 bill, a KRW 1000 bill, a JOD 5 coin, and a EUR 100 bill as inputs. As shown in Figure 13, highly activated important features are well localized in the banknote and coin regions in the feature map obtained from conv_24 of the MBDM. This confirms that the proposed MBDM extracts important features that can effectively detect banknotes.
In the next experiment, we performed a t-test [39] and measured Cohen's d-value [40] between the F1 scores of the proposed MBDM and the second-best method in Table 8 for statistical testing. These tests were conducted on the F1 scores of the coin, bill, and combined coin-and-bill databases, respectively. A Cohen's d-value of around 0.2 represents a small effect size, 0.5 a medium effect size, and 0.8 a large effect size. As shown in Figure 14a, the p-value for the coin database was 0.022 (<0.05), indicating a significant difference at the 95% confidence level, and Cohen's d-value was 4.858 (large effect size). As shown in Figure 14b, the p-value for the bill database was 0.017, again significant at the 95% confidence level, with a Cohen's d-value of 5.295 (large effect size). As shown in Figure 14c, the p-value for the coin-and-bill database was 0.019, also significant at the 95% confidence level, with a Cohen's d-value of 5.031 (large effect size). These results confirm that there is a significant difference between the F1 scores of the proposed method and the second-best method in Table 8; a sketch of these computations is given below.
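A sketch of the statistical tests is shown below; the per-trial F1 arrays are placeholders standing in for the paper's raw scores, which are not reproduced here.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation [40]."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

f1_mbdm = np.array([0.8842, 0.8836, 0.8841])         # placeholder values
f1_second_best = np.array([0.8689, 0.8681, 0.8688])  # placeholder values
t_stat, p_value = stats.ttest_ind(f1_mbdm, f1_second_best)
print(p_value < 0.05, cohens_d(f1_mbdm, f1_second_best))
```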

4.3.5. Comparisons of Inference Time and Model Complexity

This subsection compares the inference time, processing speed, number of model parameters, GPU memory requirements, and number of floating-point operations (#FLOPs) of the proposed MBDM with those of the state-of-the-art methods. The performance metrics were measured on both a desktop computer (described in Section 4.1) and a Jetson TX2 embedded system (shown in Figure 15). The Jetson TX2 uses an NVIDIA Pascal™-family GPU with 256 CUDA cores, 8 GB of memory shared between the central processing unit (CPU) and GPU, 59.7 GB/s of memory bandwidth, and a power consumption of less than 7.5 W [41]. The results in Table 13 indicate that the inference time per image of the MBDM was 11.32 ms on the desktop computer and 57.32 ms on the Jetson TX2, which translates to processing speeds of 88.34 frames per second (fps) (1000/11.32) and 17.45 fps (1000/57.32), respectively; a sketch of such a timing measurement is given below. These results demonstrate that the proposed method can operate on both desktop computers and embedded systems with limited computing resources. As shown in Table 13, the MBDM has a slower processing speed than the other models except Faster R-CNN. In terms of model parameters, GPU memory requirements, and #FLOPs, Table 14 shows that the MBDM has more parameters than the other models but requires less GPU memory than Faster R-CNN and YOLO v4 and fewer FLOPs than Faster R-CNN. Nevertheless, the proposed method exhibits higher detection accuracy than the other methods, as shown in Table 8, Table 9, Table 10, Table 11 and Table 12 and Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.
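The timing methodology can be sketched as follows; this is a generic measurement loop rather than the script used in the paper.

```python
import time
import torch

def measure_fps(model, dummy_input, runs=100):
    """Average per-image inference time (ms) and frames/s, as in Table 13
    (fps = 1000 / time_in_ms)."""
    model.eval()
    with torch.no_grad():
        for _ in range(10):              # warm-up iterations
            model(dummy_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()     # flush queued GPU work before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(dummy_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        ms = (time.perf_counter() - start) * 1000 / runs
    return ms, 1000.0 / ms
```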

5. Conclusions

In this study, a novel method was developed for detecting banknotes in smartphone images taken under various conditions, with various banknote types and complex backgrounds. The proposed MBDM was trained using mosaic data augmentation and tested on several databases, including DKB v1 (a KRW database) [5], an open JOD database, and a multinational banknote database comprising USD and EUR images captured with a Samsung Galaxy Note 10 camera. The results showed that the MBDM outperformed state-of-the-art methods in terms of detection accuracy. Visualization of GradCAM images confirmed that the MBDM adequately extracts the important features needed for accurate detection. Statistical analysis using a t-test and Cohen's d-value also revealed that the proposed method exhibits significantly higher accuracy than the second-best method.
We consider that the execution of our algorithm, database storage, and model training can be performed via cloud computing. However, we also consider a scenario where our algorithm works on an embedded system in a mobile phone. Therefore, we compared the inference time and processing speed of our proposed method and the state-of-the-art methods on a desktop computer and a Jetson embedded system as shown in Table 13, and compared the number of parameters, GPU memory requirement, and #FLOPs as shown in Table 14. These results show that our algorithm can work on an embedded system with limited computing power and memory. However, the detection performance was not always satisfactory, especially for small coins such as KRW 50 or in the case of complicated backgrounds or folded banknotes.
To address these issues, future research will focus on methods for maintaining spatial features to improve detection performance for small objects and for handling complicated backgrounds and folded banknotes. In addition, efforts will be made to reduce the processing time, number of parameters, memory usage, and #FLOPs of the MBDM while maintaining its accuracy.

Author Contributions

Methodology, C.P.; supervision, K.R.P.; writing—original draft, C.P.; writing—review and editing, K.R.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) through the Basic Science Research Program (NRF-2021R1F1A1045587), in part by the NRF funded by the MSIT through the Basic Science Research Program (NRF-2022R1F1A1064291), and in part by the MSIT, Korea, under the Information Technology Research Center (ITRC) support program (IITP-2023-2020-0-01789) supervised by the IITP (Institute for Information and Communications Technology Planning and Evaluation).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sanchez, G.A.R. A computer vision-based banknote recognition system for the blind with an accuracy of 98% on smartphone videos. J. Korea Soc. Comput. Inform. 2019, 24, 67–72. [Google Scholar]
  2. Sanchez, G.A.R.; Uh, Y.J.; Lim, K.; Byun, H. Fast banknote recognition for the blind on real-life mobile videos. In Proceedings of the Korean Society of Computer Information Conference, Jeju Island, Republic of Korea, 24–26 June 2015; pp. 835–837. [Google Scholar]
  3. Hasanuzzaman, F.M.; Yang, X.; Tian, Y. Robust and effective component-based banknote recognition by SURF features. In Proceedings of the 2011 20th Annual Wireless and Optical Communications Conference (WOCC), Newark, NJ, USA, 15–16 April 2011; pp. 1–6. [Google Scholar]
  4. Dunai, L.D.; Pérez, M.C.; Peris-Fajarnés, G.; Lengua, I.L. Euro Banknote Recognition System for Blind People. Sensors 2017, 17, 184. [Google Scholar] [CrossRef] [Green Version]
  5. Park, C.; Cho, S.W.; Baek, N.R.; Choi, J.; Park, K.R. Deep Feature-Based Three-Stage Detection of Banknotes and Coins for Assisting Visually Impaired People. IEEE Access 2020, 8, 184598–184613. [Google Scholar] [CrossRef]
  6. MBDM with Algorithms. Available online: https://github.com/channygrad/MBDM/tree/main/MBDM (accessed on 3 March 2023).
  7. Domínguez, A.R.; Alvarez, C.L.; Corrochano, E.B. Automated banknote identification method for the visually impaired. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Proceedings of the 19th Iberoamerican Congress, CIARP 2014, Puerto Vallarta, Mexico, 2–5 November 2014; Springer: Cham, Switzerland, 2014; pp. 572–579. [Google Scholar]
  8. Grijalva, F.; Rodriguez, J.C.; Larco, J.; Orozco, L. Smartphone recognition of the U.S. banknotes’ denomination, for visually impaired people. In Proceedings of the 2010 IEEE ANDESCON, Bogota, Colombia, 15–17 September 2010; pp. 1–6. [Google Scholar]
  9. Cunningham, P.; Delany, S.J. K-nearest neighbour classifiers. arXiv 2020, arXiv:2004.04523. [Google Scholar]
  10. Swain, P.H.; Hauska, H. The decision tree classifier: Design and potential. IEEE Trans. Geosci. Electron. 1977, 15, 142–147. [Google Scholar] [CrossRef] [Green Version]
  11. Sufri, N.A.J.; Rahmad, N.A.; As’ari, M.A.; Zakaria, N.A.; Jamaludin, M.N.; Ismail, L.H.; Mahmood, N.H. Image based ringgit banknote recognition for visually impaired. J. Telecomm. Electronic Comput. Eng. 2017, 9, 103–111. [Google Scholar]
  12. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  13. Mittal, S.; Mittal, S. Indian banknote recognition using convolutional neural network. In Proceedings of the 2018 3rd International Conference On Internet of Things: Smart Innovation and Usages (IoT-SIU), Bhimtal, India, 23–24 February 2018. [Google Scholar]
  14. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1106–1114. [Google Scholar]
  15. Imad, M.; Ullah, F.; Hassan, M.A.; Naimullah, M.A. Pakistani Currency Recognition to Assist Blind Person Based on Convolutional Neural Network. J. Comput. Sci. Technol. Stud. 2020, 2, 12–19. [Google Scholar]
  16. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  19. Joshi, R.C.; Yadav, S.; Dutta, M.K. YOLO-v3 Based Currency Detection and Recognition System for Visually Impaired Persons. In Proceedings of the 2020 International Conference on Contemporary Computing and Applications (IC3A), Lucknow, India, 5–7 February 2020; pp. 280–285. [Google Scholar]
  20. Mahmood, R.R.; Younus, M.D.; Khalaf, E.A. Currency Detection for Visually Impaired Iraqi Banknote as a Study Case. Turkish J. Comput. Math. Educ. 2021, 12, 2940–2948. [Google Scholar]
  21. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. arXiv 2016, arXiv:1612.08242. [Google Scholar]
  22. Pérez, D.G.; Corrochano, E.B. Recognition system for Euro and Mexican banknotes based on deep learning with real scene images. Comput. Sist. 2018, 22, 1065–1076. [Google Scholar]
  23. Yang, B.; Li, Y.; Zheng, W.; Yin, Z.; Liu, M.; Yin, L.; Liu, C. Motion prediction for beating heart surgery with GRU. Biomed. Signal Process. Control 2023, 83, 104641. [Google Scholar] [CrossRef]
  24. Lai, X.; Yang, B.; Ma, B.; Liu, M.; Yin, Z.; Yin, L.; Zheng, W. An improved stereo matching algorithm based on joint similarity measure and adaptive weights. Appl. Sci. 2022, 13, 514. [Google Scholar] [CrossRef]
  25. Yang, B.; Xu, S.; Chen, H.; Zheng, W.; Liu, C. Reconstruct dynamic soft-tissue with stereo endoscope based on a single-layer network. IEEE Trans. Image Process. 2022, 31, 5828–5840. [Google Scholar] [CrossRef] [PubMed]
  26. Zhang, Z.; Wang, L.; Zheng, W.; Yin, L.; Hu, R.; Yang, B. Endoscope image mosaic based on pyramid ORB. Biomed. Signal Process. Control 2022, 71 Pt B, 103261. [Google Scholar] [CrossRef]
  27. Bochkovskiy, A.; Wang, C.; Liao, H.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  28. Samsung Galaxy Note 10. Available online: https://en.wikipedia.org/wiki/Samsung_Galaxy_Note_10 (accessed on 19 December 2022).
  29. NVIDIA GeForce GTX 1070. Available online: https://www.nvidia.com/en-in/geforce/products/10series/geforce-gtx-1070/ (accessed on 26 March 2022).
  30. Pytorch. Available online: https://pytorch.org/docs/stable/index.html (accessed on 26 March 2022).
  31. Python. Available online: https://docs.python.org/3.7/whatsnew/changelog.html#python-3-7-0-alpha-1 (accessed on 26 March 2022).
  32. CUDA. Available online: https://en.wikipedia.org/wiki/CUDA (accessed on 26 March 2022).
  33. CUDNN. Available online: https://developer.nvidia.com/cudnn (accessed on 26 March 2022).
  34. Precision and Recall. Available online: https://en.wikipedia.org/wiki/Precision_and_recall (accessed on 29 January 2022).
  35. Parihar, A.S.; Singh, S. A study on Retinex based method for image enhancement. In Proceedings of the 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 19–20 January 2018; pp. 619–624. [Google Scholar]
  36. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  37. YOLO v5. Available online: https://pytorch.org/hub/ultralytics_yolov5/ (accessed on 28 February 2023).
  38. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  39. Student’s T-Test. Available online: https://en.wikipedia.org/wiki/Student%27s_t-test (accessed on 27 December 2020).
  40. Cohen, J. A power primer. Psychol. Bull. 1992, 112, 155. [Google Scholar] [CrossRef]
  41. Jetson TX2 Module. Available online: https://developer.nvidia.com/embedded/jetson-tx2 (accessed on 22 April 2022).
Figure 1. Procedure of proposed method for multinational banknote detection.
Figure 2. Mosaic data augmentation of banknote dataset: (a) is an example of applying four images of mosaic data augmentation, and (b,c) are examples of applying six images and two images of mosaic data augmentation, respectively.
Figure 3. Structure of the MBDM.
Figure 4. Prediction process. From the bottom to top, the 1st to 4th predictions are presented.
Figure 5. Examples of database images: (a) is KRW, (b) is USD, (c) is JOD, and (d) is EUR.
Figure 6. Training and validation loss and accuracy graphs of MBDM: (a) is the loss and accuracy for the training and validation data in the first fold, and (b) is the loss and accuracy for the training and validation data in the second fold.
Figure 7. Performance comparison graph for each method according to IoU threshold in multinational banknote database: (a) precision, (b) recall, and (c) F1 score.
Figure 8. Performance comparison graph for each method according to IoU threshold in USD database: (a) precision, (b) recall, and (c) F1 score.
Figure 9. Performance comparison graph for each method according to IoU threshold in EUR database: (a) precision, (b) recall, and (c) F1 score.
Figure 10. Performance comparison graph for each method according to IoU threshold in KRW database: (a) precision, (b) recall, and (c) F1 score.
Figure 11. Performance comparison graph for each method according to IoU threshold in JOD database: (a) precision, (b) recall, and (c) F1 score.
Figure 12. The detection results of the proposed method: (a) is the case where the detection was successful, and (b) is the case where the detection error occurred.
Figure 13. Examples of GradCAM images from MBDM: (a–d) are with the inputs of JOD 20 dinar, KRW 1000 bill, JOD 5 dinar coin, and EUR 100 bill, respectively. GradCAM images extracted from input, conv_8, conv_12, conv_16, conv_20, and conv_24 of Table 2 from the left of (a–d).
Figure 14. The t-test results between F1 scores of the proposed method and the second-best method in case of (a) coin database, (b) bill database, and (c) coin and bill database.
Figure 15. Jetson TX2 embedded system.
Table 1. Comparisons of proposed and previous studies on banknote detection.

| Category | Methods | Advantages | Disadvantages |
|---|---|---|---|
| Single-national banknote detection (handcrafted feature-based) | SURF-based [1,2,3,4], FRS transform [7], PCA [8], KNN + DTC [11] | High-performance devices are not required because of the small amount of computation; applicable to wearable or mobile devices | Detection performance is poor in images with complex backgrounds, and scale invariance is limited |
| Single-national banknote detection (deep feature-based) | MobileNet [13]; AlexNet, SVM, and HOG [15]; Faster R-CNN + three-step postprocessing [5]; YOLO-v3 + data augmentation [19,20]; shallow CNN [22] | High detection performance in datasets with complex backgrounds; applicable to wearable devices with the development of shallow deep CNNs | Did not perform multinational banknote detection of various nationalities |
| Multinational banknote detection (deep feature-based) | MBDM (proposed method) | In a multinational banknote environment acquired from various conditions and backgrounds, not only bills but also small-sized coins are detected with high accuracy | Higher computational cost than MobileNet and YOLO-based methods |
Table 2. Architecture of MBDM.

| Layer Number | Number of Iterations | Layer Type | Number of Filters | Filter Size | Output |
|---|---|---|---|---|---|
| 0 | | Input layer | | | |
| 1 | | Conv. | 32 | 3 × 3 | 512 × 512 |
| 2 | | Conv. | 64 | 3 × 3/2 | 256 × 256 |
| 3 | ×1 | Conv. | 32 | 1 × 1 | |
| 4 | | Conv. | 64 | 3 × 3 | |
| 5 | | Residual | | | 256 × 256 |
| 6 | | Conv. | 128 | 3 × 3/2 | 128 × 128 |
| 7 | ×2 | Conv. | 64 | 1 × 1 | |
| 8 | | Conv. | 128 | 3 × 3 | |
| 9 | | Residual | | | 128 × 128 |
| 10 | | Conv. | 256 | 3 × 3/2 | 64 × 64 |
| 11 | ×8 | Conv. | 128 | 1 × 1 | |
| 12 | | Conv. | 256 | 3 × 3 | |
| 13 | | Residual | | | 64 × 64 |
| 14 | | Conv. | 512 | 3 × 3/2 | 32 × 32 |
| 15 | ×8 | Conv. | 256 | 1 × 1 | |
| 16 | | Conv. | 512 | 3 × 3 | |
| 17 | | Residual | | | 32 × 32 |
| 18 | | Conv. | 1024 | 3 × 3/2 | 16 × 16 |
| 19 | ×8 | Conv. | 512 | 1 × 1 | |
| 20 | | Conv. | 1024 | 3 × 3 | |
| 21 | | Residual | | | 16 × 16 |
| 22 | | Conv. | 2048 | 3 × 3/2 | 8 × 8 |
| 23 | ×4 | Conv. | 1024 | 1 × 1 | |
| 24 | | Conv. | 2048 | 3 × 3 | |
| 25 | | Residual | | | 8 × 8 |
| 26 | | Average pooling | | Global | |
| 27 | | Connected | | | 1000 |
| 28 | | Independent logistic classifier | | | |
Table 3. Detailed description of database.

| Type of Banknote | Denomination | Number of Images | Training | Testing |
|---|---|---|---|---|
| USD | 1 Dollar | 660 | 330 | 330 |
| | 5 Dollar | 670 | 335 | 335 |
| | 10 Dollar | 660 | 330 | 330 |
| | 20 Dollar | 670 | 335 | 335 |
| | 50 Dollar | 660 | 330 | 330 |
| | 100 Dollar | 680 | 340 | 340 |
| EUR | 5 Euro | 800 | 400 | 400 |
| | 10 Euro | 800 | 400 | 400 |
| | 20 Euro | 800 | 400 | 400 |
| | 50 Euro | 800 | 400 | 400 |
| | 100 Euro | 800 | 400 | 400 |
| KRW | 10 Won | 800 | 400 | 400 |
| | 50 Won | 800 | 400 | 400 |
| | 100 Won | 800 | 400 | 400 |
| | 500 Won | 800 | 400 | 400 |
| | 1000 Won | 820 | 410 | 410 |
| | 5000 Won | 820 | 410 | 410 |
| | 10,000 Won | 820 | 410 | 410 |
| | 50,000 Won | 820 | 410 | 410 |
| JOD | 1 piastres | 700 | 350 | 350 |
| | 5 piastres | 900 | 450 | 450 |
| | 10 piastres | 240 | 120 | 120 |
| | 0.25 dinar | 580 | 290 | 290 |
| | 0.5 dinar | 480 | 240 | 240 |
| | 1 dinar | 1460 | 730 | 730 |
| | 5 dinar | 580 | 290 | 290 |
| | 10 dinar | 700 | 350 | 350 |
| | 20 dinar | 900 | 450 | 450 |
| Total | | 21,020 | 10,510 | 10,510 |
Table 4. Performance comparisons with or without Retinex filtering.

| Method | Precision | Recall | F1 Score |
|---|---|---|---|
| With Retinex | 0.8053 | 0.8737 | 0.8381 |
| Without Retinex (proposed method) | 0.8396 | 0.9334 | 0.8840 |
Table 5. Performance comparisons with or without grouped convolution.

| Method | Precision | Recall | F1 Score |
|---|---|---|---|
| With grouped convolution | 0.8294 | 0.9255 | 0.8748 |
| Without grouped convolution (proposed method) | 0.8396 | 0.9334 | 0.8840 |
Table 6. Performance comparisons with or without mosaic augmentation, and with mosaic augmentation according to the various number of images.

| Method | Precision | Recall | F1 Score |
|---|---|---|---|
| Without mosaic augmentation | 0.8242 | 0.9263 | 0.8723 |
| With mosaic augmentation using two images | 0.8173 | 0.9097 | 0.8610 |
| With mosaic augmentation using four images | 0.8396 | 0.9334 | 0.8840 |
| With mosaic augmentation using six images | 0.8166 | 0.8885 | 0.8510 |
Table 7. Performance comparisons according to MBDM (normal layers) and MBDM (deeper layers) with or without mosaic augmentation.

| Method | Model | Precision | Recall | F1 Score |
|---|---|---|---|---|
| Without mosaic augmentation | MBDM (deeper layers) | 0.8150 | 0.9424 | 0.8741 |
| | MBDM (normal layers) | 0.8242 | 0.9263 | 0.8723 |
| With mosaic augmentation (four images) | MBDM (deeper layers) | 0.8359 | 0.9255 | 0.8784 |
| | MBDM (normal layers) | 0.8396 | 0.9334 | 0.8840 |
Table 8. Performance comparisons of the proposed method and the state-of-the-art methods with multinational banknote database.

| Methods | Object | Precision | Recall | F1 Score |
|---|---|---|---|---|
| Faster R-CNN [5] | Coin | 0.7576 | 0.8210 | 0.7880 |
| | Bill | 0.8410 | 0.9114 | 0.8748 |
| | Coin and bill | 0.7993 | 0.8662 | 0.8314 |
| YOLO v3 [19,20] | Coin | 0.7351 | 0.8272 | 0.7783 |
| | Bill | 0.8629 | 0.9710 | 0.9137 |
| | Coin and bill | 0.7990 | 0.8991 | 0.8460 |
| MobileNet [13] | Coin | 0.7472 | 0.7911 | 0.7685 |
| | Bill | 0.8376 | 0.8867 | 0.8614 |
| | Coin and bill | 0.7924 | 0.8389 | 0.8150 |
| YOLO v2 [21] | Coin | 0.7371 | 0.7906 | 0.7630 |
| | Bill | 0.8653 | 0.9282 | 0.8956 |
| | Coin and bill | 0.8012 | 0.8594 | 0.8293 |
| YOLO v4 [27] | Coin | 0.7763 | 0.8657 | 0.8186 |
| | Bill | 0.8713 | 0.9715 | 0.9187 |
| | Coin and bill | 0.8238 | 0.9186 | 0.8686 |
| YOLO v5 [37] | Coin | 0.7771 | 0.8608 | 0.8168 |
| | Bill | 0.8731 | 0.9672 | 0.9178 |
| | Coin and bill | 0.8251 | 0.9140 | 0.8673 |
| MBDM (proposed method) | Coin | 0.7747 | 0.8707 | 0.8200 |
| | Bill | 0.8737 | 0.9819 | 0.9246 |
| | Coin and bill | 0.8396 | 0.9334 | 0.8840 |
Table 9. Performance comparisons of the proposed method and the state-of-the-art methods with USD.

| Methods | Precision | Recall | F1 Score |
|---|---|---|---|
| Faster R-CNN [5] | 0.8465 | 0.9173 | 0.8805 |
| YOLO v3 [19,20] | 0.8752 | 0.9848 | 0.9268 |
| MobileNet [13] | 0.8196 | 0.8676 | 0.8429 |
| YOLO v2 [21] | 0.8752 | 0.9388 | 0.9059 |
| YOLO v4 [27] | 0.8804 | 0.9745 | 0.9251 |
| YOLO v5 [37] | 0.8795 | 0.9815 | 0.9277 |
| MBDM (proposed method) | 0.8861 | 0.9957 | 0.9377 |
Table 10. Performance comparisons of the proposed method and the state-of-the-art methods with EUR.

| Methods | Precision | Recall | F1 Score |
|---|---|---|---|
| Faster R-CNN [5] | 0.8472 | 0.9181 | 0.8812 |
| YOLO v3 [19,20] | 0.8739 | 0.9834 | 0.9254 |
| MobileNet [13] | 0.8486 | 0.8983 | 0.8727 |
| YOLO v2 [21] | 0.8747 | 0.9383 | 0.9054 |
| YOLO v4 [27] | 0.8756 | 0.9839 | 0.9266 |
| YOLO v5 [37] | 0.8841 | 0.9785 | 0.9289 |
| MBDM (proposed method) | 0.8855 | 0.9952 | 0.9372 |
Table 11. Performance comparisons of the proposed method and the state-of-the-art methods with KRW.

| Methods | Object | Precision | Recall | F1 Score |
|---|---|---|---|---|
| Faster R-CNN [5] | Coin | 0.7600 | 0.8236 | 0.7905 |
| | Bill | 0.8356 | 0.9055 | 0.8691 |
| | Coin and bill | 0.7978 | 0.8646 | 0.8298 |
| YOLO v3 [19,20] | Coin | 0.7258 | 0.8167 | 0.7686 |
| | Bill | 0.8520 | 0.9587 | 0.9022 |
| | Coin and bill | 0.7889 | 0.8877 | 0.8354 |
| MobileNet [13] | Coin | 0.7392 | 0.7826 | 0.7603 |
| | Bill | 0.8286 | 0.8772 | 0.8522 |
| | Coin and bill | 0.7839 | 0.8299 | 0.8062 |
| YOLO v2 [21] | Coin | 0.7262 | 0.7789 | 0.7516 |
| | Bill | 0.8524 | 0.9143 | 0.8823 |
| | Coin and bill | 0.7893 | 0.8466 | 0.8169 |
| YOLO v4 [27] | Coin | 0.7573 | 0.8632 | 0.8068 |
| | Bill | 0.8489 | 0.9676 | 0.9043 |
| | Coin and bill | 0.8031 | 0.9154 | 0.8556 |
| YOLO v5 [37] | Coin | 0.7625 | 0.8496 | 0.8037 |
| | Bill | 0.8563 | 0.9542 | 0.9026 |
| | Coin and bill | 0.8094 | 0.9019 | 0.8532 |
| MBDM (proposed method) | Coin | 0.7681 | 0.8632 | 0.8129 |
| | Bill | 0.8661 | 0.9734 | 0.9166 |
| | Coin and bill | 0.8171 | 0.9183 | 0.8647 |
Table 12. Performance comparisons of the proposed method and the state-of-the-art methods with JOD.

| Methods | Object | Precision | Recall | F1 Score |
|---|---|---|---|---|
| Faster R-CNN [5] | Coin | 0.7552 | 0.8184 | 0.7855 |
| | Bill | 0.8349 | 0.9048 | 0.8684 |
| | Coin and bill | 0.7951 | 0.8616 | 0.8270 |
| YOLO v3 [19,20] | Coin | 0.7228 | 0.8134 | 0.7654 |
| | Bill | 0.8485 | 0.9548 | 0.8985 |
| | Coin and bill | 0.7857 | 0.8841 | 0.8320 |
| MobileNet [13] | Coin | 0.7334 | 0.7765 | 0.7543 |
| | Bill | 0.8221 | 0.8703 | 0.8455 |
| | Coin and bill | 0.7777 | 0.8234 | 0.7999 |
| YOLO v2 [21] | Coin | 0.7334 | 0.7765 | 0.7543 |
| | Bill | 0.8491 | 0.9108 | 0.8789 |
| | Coin and bill | 0.7862 | 0.8433 | 0.8138 |
| YOLO v4 [27] | Coin | 0.7496 | 0.8405 | 0.7925 |
| | Bill | 0.8410 | 0.9429 | 0.8890 |
| | Coin and bill | 0.7953 | 0.8917 | 0.8407 |
| YOLO v5 [37] | Coin | 0.7274 | 0.8425 | 0.7807 |
| | Bill | 0.8150 | 0.9439 | 0.8747 |
| | Coin and bill | 0.7712 | 0.8932 | 0.8277 |
| MBDM (proposed method) | Coin | 0.7651 | 0.8598 | 0.8097 |
| | Bill | 0.8627 | 0.9696 | 0.9131 |
| | Coin and bill | 0.8139 | 0.9147 | 0.8614 |
Table 13. Comparisons of inference time and processing speed by proposed method and the state-of-the-art methods.

| Environment | Method | Inference Time per Image (ms) | Processing Speed (frames/s) |
|---|---|---|---|
| Desktop | Faster R-CNN [5] | 12.52 | 79.87 |
| | YOLO v3 [19,20] | 9.81 | 101.94 |
| | MobileNet [13] | 7.59 | 131.75 |
| | YOLO v2 [21] | 10.38 | 96.34 |
| | YOLO v4 [27] | 10.11 | 98.91 |
| | YOLO v5 [37] | 9.96 | 100.40 |
| | MBDM (proposed method) | 11.32 | 88.34 |
| Jetson embedded system | Faster R-CNN [5] | 63.25 | 15.81 |
| | YOLO v3 [19,20] | 48.79 | 20.50 |
| | MobileNet [13] | 31.62 | 31.63 |
| | YOLO v2 [21] | 51.45 | 19.44 |
| | YOLO v4 [27] | 49.82 | 20.07 |
| | YOLO v5 [37] | 48.88 | 20.46 |
| | MBDM (proposed method) | 57.32 | 17.45 |
Table 14. Comparisons of number of parameters, GPU memory requirement, and #FLOPs.

| Method | # Parameters (×10⁶) | GPU Memory Requirement (GB) | #FLOPs (×10⁹) |
|---|---|---|---|
| Faster R-CNN [5] | 65.2563 | 4.138 | 2.1592 |
| YOLO v3 [19,20] | 63.3524 | 2.285 | 1.7420 |
| MobileNet [13] | 13.1526 | 1.159 | 0.7492 |
| YOLO v2 [21] | 53.0006 | 2.293 | 1.6549 |
| YOLO v4 [27] | 64.1748 | 2.315 | 1.7951 |
| YOLO v5 [37] | 56.1049 | 1.951 | 1.5832 |
| MBDM (proposed method) | 68.2594 | 2.312 | 1.8472 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
