Article

An Adaptive Approach for Multi-National Vehicle License Plate Recognition Using Multi-Level Deep Features and Foreground Polarity Detection Model

by
Muhammad Ali Raza
1,2,*,
Chun Qi
1,*,
Muhammad Rizwan Asif
2,3 and
Muhammad Armoghan Khan
4
1
School of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2
Department of Electrical and Computer Engineering, COMSATS University Islamabad (Lahore Campus), Lahore 54000, Pakistan
3
Department of Engineering, Aarhus University, 8200 Aarhus, Denmark
4
School of Electrical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(6), 2165; https://doi.org/10.3390/app10062165
Submission received: 14 February 2020 / Revised: 8 March 2020 / Accepted: 17 March 2020 / Published: 22 March 2020
(This article belongs to the Special Issue Intelligent Transportation Systems: Beyond Intelligent Vehicles)

Featured Application

We present an adaptive framework for the recognition of multinational vehicle license plates. To keep it generalized, the approach requires no prior knowledge of the license plate layout and, furthermore, does not use training data from all of the targeted countries. These properties make it well suited to obtaining the registration identity of multinational vehicles.

Abstract

A license plate recognition (LPR) system plays a vital role in intelligent transport systems for building smart environments. Numerous country-specific methods have been proposed for LPR, but a generalized solution independent of license plate layout is still needed. The proposed architecture comprises the two main LPR stages: (i) license plate character segmentation (LPCS) and (ii) license plate character recognition (LPCR). A foreground polarity detection model based on a Red-Green-Blue (RGB) channel color map is proposed to segment and recognize the LP characters effectively at the LPCS and LPCR stages, respectively. Further, a multi-channel CNN framework with a layer aggregation module is proposed to extract deep features, and a support vector machine is used to produce target labels. Multi-channel processing with merged features from different-level convolutional layers makes the output feature map more expressive. Experimental results show that the proposed method achieves a high recognition rate for multinational vehicle license plates under various illumination conditions.

1. Introduction

Automatic license plate recognition (ALPR) is a significant topic in the field of intelligent transport systems (ITS) and remains a challenging problem in image processing and computer vision. It is a framework widely used to extract the license plate registration number from digital images for vehicle identification. The system requires no additional hardware, such as a transmitter or transponder, to detect vehicles, as every vehicle is identified by its license plate. Numerous potential applications have urged researchers to pay more attention to developing efficient and sophisticated ALPR systems. Law enforcement agencies widely use such systems for traffic monitoring, congestion control, border or restricted-area security, and the detection of suspicious or stolen vehicles. LPR systems are also used in smart parking areas and smart toll stations for flexible entry and exit of vehicles, smooth traffic flow, fee or fine deduction, and enhanced security. Moreover, they can be installed in intelligent vehicles to recognize the identity of neighboring vehicles for communication purposes.
License plate detection (LPD) and license plate recognition (LPR) are the two major modules of an ALPR system. The LPR module can be further split into license plate character segmentation (LPCS) and license plate character recognition (LPCR). In recent years, focused work on multinational ALPR systems has appeared, but only for license plate detection and verification [1,2,3,4,5]. Every module has its own importance, but LPR is comparatively more important, as the most relevant information is extracted at this stage, and it is a difficult task, specifically in the case of multi-style LPs for multinational vehicles. The whole system is useless if the complete registration number cannot be recovered at the LPR stage. To complete the system, the proposed work therefore addresses the license plate recognition part, which is further split into the LPCS and LPCR stages.
Most existing work targets only specific types of LPs, and its effectiveness is limited because it cannot be applied to other regions. Moreover, a change in license plate regulations would cause such specific methods to fail. Various countries do not have standardized LPs and also allow customized license plates, which adds further complications. In addition, many countries have considerable diversity within their own license plates. Therefore, a generalized framework for the recognition of multinational vehicle license plates is needed to address the above-mentioned issues. The general challenges in effective character segmentation and recognition of license plates are illumination variance, rotation, shadow, and characters touching the border due to screws or noise, but further challenges arise when dealing with multinational vehicle license plates: different background and foreground colors, complex backgrounds containing various shapes and writings, and different shapes and sizes of LPs and characters, as shown in Figure 1.
A unique part-based approach is developed that gives very promising character segmentation results for multinational vehicle license plates despite the above-mentioned challenges. An RGB color-based foreground polarity detection model and a character height estimation filter are proposed to accomplish the license plate character segmentation task effectively. Handling multi-style backgrounds, thresholding under various illumination conditions, skew correction, and border-touching characters for multinational VLPs are the salient features of the proposed segmentation method. For the recognition of the segmented characters, a multi-channel CNN framework with a layer aggregation module is proposed to extract deep features, and a support vector machine is used as the classifier to produce target labels.
Our major contributions in this article are: (i) the proposed generalized method requires no prior knowledge of LP layout and no training data at the LPCS stage; furthermore, we do not use training data from all targeted countries to extract deep features at the LPCR stage; (ii) we establish a color-based foreground polarity detection framework that contributes at both the LPCS and LPCR stages and reliably separates the foreground from the background based on their colors, even under various illumination conditions; (iii) multi-channel processing with a concatenated output feature vector is introduced to obtain enhanced original-image information; (iv) an improved CNN is proposed that merges different-level convolutional features to obtain a more expressive output feature map.
The remainder of the paper is organized as follows. Section 2 presents the related work in detail. Section 3 explains the proposed method and Section 4 provides the experimental results and a comparison with the latest methods. In Section 5, we present the concluding remarks.

2. Related Work

Generally, connected components with binary thresholding, projection-based analysis, maximally stable extremal regions (MSER), sliding windows and many CNN-based methods have been extensively used for character extraction from license plates.
The research conducted in [6,7] claims to address multinational vehicle LPR but was tested only on Israeli and Bulgarian LPs. A two-stage character segmentation approach [8] was proposed for Chinese LPs using CCA and a bank of harrow-shaped filters (HSF). CCA and a Kohonen neural network [9] were used to segment characters from Indian LPs. CCA and contour detection were used for character segmentation on Bangladeshi and Iraqi license plates [10,11,12]. CIE-LAB color space, Otsu segmentation and CCA were used to extract characters from Pakistani vehicle license plates; this method [13] is suitable only for high-contrast images and is very sensitive to shadow. Projection-based methods [14,15] have also been used for Chinese and Iranian LPCS, respectively. Usually, this technique is paired with a binarization method (Otsu, adaptive, MET, etc.): the image is first converted to binary, and then horizontal and vertical projections are used to separate the characters. Maximally stable extremal regions (MSER) is another popular method for LPCS, used in [16,17] for Chinese license plates. The sliding window technique [18,19] has been applied to American and Iranian plates, but it requires prior knowledge of the window size and is computationally complex. In recent years, several CNN-based approaches have been introduced to accomplish segmentation-free LPR, as discussed in [20,21]. A recurrent neural network with connectionist temporal classification is used in [20], while [21] uses the YOLO detector; the target countries are China and Brazil, respectively. The authors of [22] proposed robust license plate recognition by introducing the concept of synthetic images to train a convolutional neural network. A multi-task convolutional neural network [23] was proposed for license plate recognition with better accuracy and lower computational cost.
Once segmentation is done successfully, the next step is to recognize the characters separately. Generally, four approaches are used to accomplish the recognition task. The simplest and most straightforward method is template matching [24,25], widely used in early systems, but it is not flexible enough to handle different fonts, noise, rotation and thickness changes. The artificial neural network [12,26,27,28] is another very popular approach that has been widely used to recognize license plate characters as well as for many other classification tasks. The SVM classifier [13,16,29,30] with various feature extraction methods has also been used for recognition, as the SVM is a strong and fast classifier for real-time applications. Extreme learning machines (ELMs) [31] and hidden Markov models (HMM) [32] have also been used as license plate recognition classifiers. The use of conventional low-level features is now considered old-fashioned due to the revolution in computer vision brought about by deep learning methods. Deep convolutional neural network (CNN) architectures automatically learn a hierarchy of discriminative features that richly describe image content. In recent years, deep learning frameworks [20,21,33,34,35] have also appeared in ALPR systems because of their powerful recognition capability.
None of the methods mentioned above (CCA, projection, MSER, sliding window) can handle multinational LPs directly. Some CNN-based methods can, but they require a lot of training data, which is not feasible in the multinational scenario. Our proposed recognition framework is capable of recognizing license plate characters of different shapes and styles, with different background/foreground colors, under various illumination conditions.

3. Proposed Methodology

The proposed framework is comprised of license plate character segmentation (LPCS) and license plate character recognition (LPCR) modules. The detailed proposed architectures of these modules are discussed in this section.

3.1. License Plate Character Segmentation

Figure 2 shows the complete process of license plate character segmentation. As discussed above, we use a part-based approach to accomplish this task. The first objective is to segment the region of interest (ROI) in order to reduce the image processing area and discard a lot of redundant information. In the second part, the proposed character height estimation filter, along with connected component analysis (CCA), is applied to extract the required objects. These blocks are discussed below in detail.
The LP image detected by [1] is fed to Part I for foreground and background classification. Distinguishing the background from the foreground is one of the most difficult and important steps in object segmentation. The proposed method strongly requires the identification of background/foreground color and polarity, as further segmentation processing is based on this step. The term polarity refers to bright or dark, based on the intensity of the foreground/background colors. We observed that most countries use a single color for the background as well as for the foreground. The largest group of color candidates most likely belongs to the background region, while the foreground has the second largest, which is the key observation that we use for background and foreground classification. We propose a model using the RGB color space, which is well suited to identifying any color even under various illumination conditions.
The RGB color space can be visualized as a cube whose corners are black, the three primaries (red, green, blue), the three secondaries (cyan, magenta, yellow), and white. The eight pure-color corners and the distribution cube are shown in Figure 3 [36]. Considering these colors and their distribution, a model is proposed that maps all shades onto 8 pure colors and can determine any background and foreground color, which is strongly required in the case of multinational VLPs. The total candidates of red, green, blue, yellow, cyan and magenta are computed by (1) as per the color definitions in Table 1, and the threshold value is determined using the color distribution cube and rigorous experiments on the 3718 images in the test dataset.
$$\overline{CC} = \sum_{m=0}^{height-1} \sum_{n=0}^{width-1} T\big(I(x_n, y_m)\big) \quad (1)$$

where $T(\zeta) = 1$ when $\zeta$ satisfies $\overline{T_C}$, with $\zeta = I(x_n, y_m)$.
By using (1) we obtain the color candidate counts ($\overline{CC}$) of red, green, blue, cyan, magenta and yellow, but we cannot set any hard limit to separate the groups of black and white pixels, as the illumination level is unknown. The values of the white and black pixels ($V_{BW}$) are therefore extracted together as one group, and the candidate vector ($\overline{C_{BW}}$) is obtained as
$$\overline{C_{BW}} = \begin{cases} V_{BW} & \text{if } V_{BW} \in \overline{T_C} \\ 0 & \text{if } V_{BW} \notin \overline{T_C} \end{cases} \quad (2)$$

where $\overline{T_C} = \{ d_1 < 0.2 \;\&\; d_2 < 0.2 \}$.
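For illustration, the sketch below shows one way to implement the candidate counting of (1) with the Table 1 thresholds and the black/white pooling of (2) in Python with NumPy. Only red, green, blue and the pooled gray group are shown; yellow, cyan and magenta follow the same pattern. Function and variable names are ours, not from the paper.

```python
import numpy as np

def count_color_candidates(img, t_color=0.24, t_gray=0.2):
    """Candidate counting in the spirit of Eqs. (1)-(2) and Table 1.

    img: H x W x 3 RGB array scaled to [0, 1]. Returns counts for a few
    chromatic colors plus the pooled black/white intensity values.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6                                  # guard against division by zero
    d1 = np.abs((r - g) / (r + g + eps))        # red-green distance
    d2 = np.abs((g - b) / (g + b + eps))        # green-blue distance
    d3 = np.abs((b - r) / (b + r + eps))        # blue-red distance
    counts = {
        "red":   int(np.sum((r > g) & (r > b) & (d1 >= t_color) & (d3 >= t_color))),
        "green": int(np.sum((g > r) & (g > b) & (d1 >= t_color) & (d2 >= t_color))),
        "blue":  int(np.sum((b > r) & (b > g) & (d2 >= t_color) & (d3 >= t_color))),
    }
    # Black/white pixels are pooled into one group (Eq. (2)) and split later
    # by Otsu's method, so we keep their gray intensities rather than a count.
    gray_mask = (d1 < t_gray) & (d2 < t_gray)
    values_bw = (255 * (r + g + b) / 3)[gray_mask].astype(np.uint8)
    return counts, values_bw
```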
Then Otsu's method, based on the two intra-class variances, is applied to separate them into two classes containing black and white pixels, respectively, as shown in Figure 4. The larger and smaller of these two classes are then determined as
$$\overline{C_B} = \sum_{i=0}^{SP} \overline{C_{BW}}(i) \quad (3)$$

$$\overline{C_W} = \sum_{i=SP}^{255} \overline{C_{BW}}(i) \quad (4)$$

where $SP$ is the separation point determined by Otsu's method.
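A compact way to realize this split, assuming the pooled gray values from the previous step and using scikit-image's Otsu implementation in place of a hand-rolled one:

```python
import numpy as np
from skimage.filters import threshold_otsu

def split_black_white(values_bw):
    """Split the pooled black/white values (Eqs. (3)-(4)) at Otsu's point SP."""
    sp = threshold_otsu(values_bw)           # separation point on the gray bar
    c_black = int(np.sum(values_bw <= sp))   # candidates in the dark class
    c_white = int(np.sum(values_bw > sp))    # candidates in the bright class
    return c_black, c_white, sp
```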
At this point, we know the background and foreground colors, because the largest candidate group belongs to the background while the second largest belongs to the foreground, as shown in Figure 5a. This information is further used to determine the foreground polarity for post-processing, as it is essential for adaptive thresholding and for most morphological operations.
The adaptive thresholding technique strongly requires prior knowledge of the foreground polarity to separate the background from the foreground precisely. This information is also needed for the morphological operations to obtain the required results. Algorithm 1 accomplishes this task, where max1 and max2 belong to $\overline{CC}$ and represent the background and foreground, respectively. The background and foreground polarities depend on the sequence of colors shown in Figure 5b; for example, Ind-1 has brighter polarity than Ind-2. CG2(Ind) is determined by (5)-(7) below.
Algorithm 1 Foreground polarity detection process
Input 1: max1   % max1 is the color pixel count belonging to the LP background
Input 2: max2   % max2 is the color pixel count belonging to the LP foreground
If (max1 & max2) ∈ CG1
  If max1(Ind) > max2(Ind)
    FP ← bright
  Else
    FP ← dark
  End
Else if (max1 ∈ CG1 & max2 ∈ CG2) || (max1 ∈ CG2 & max2 ∈ CG1)
  FP ← find{CG2(Ind)}
End
Output: FP
$$CG1_{MEAN} = \frac{1}{N}\sum_{i=1}^{N} V_{CG1}(i) \quad (5)$$

$$CG2_{MEAN} = \frac{1}{N}\sum_{i=1}^{N} \overline{V_{BW}}(i) \quad (6)$$

$$CG2(Ind) = \begin{cases} 1 & \text{if } CG1_{MEAN} < CG2_{MEAN} \\ 2 & \text{if } CG1_{MEAN} > CG2_{MEAN} \end{cases} \quad (7)$$
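A minimal sketch of Algorithm 1 follows. It assumes a color priority sequence in which a lower index means a brighter color (our reading of Figure 5b; the example ordering below is illustrative, not taken from the paper) and uses the CG2 indicator of (7):

```python
# Assumed brightness ordering for CG1 colors (lower index = brighter),
# an illustrative stand-in for the priority sequence of Figure 5b.
PRIORITY = {"yellow": 1, "cyan": 2, "green": 3, "magenta": 4, "red": 5, "blue": 6}

def foreground_polarity(bg, fg, cg2_ind=None):
    """bg/fg: dominant background/foreground colors; 'bw' marks the pooled
    black/white group (CG2). cg2_ind is the 1/2 indicator of Eq. (7),
    where 1 means the black/white group has the brighter mean."""
    if bg != "bw" and fg != "bw":                 # both colors in CG1
        return "bright" if PRIORITY[bg] > PRIORITY[fg] else "dark"
    # One group is CG2: polarity follows CG2(Ind), i.e. whether the
    # black/white group is the brighter one and whether it is the foreground.
    cg2_is_fg = (fg == "bw")
    cg2_brighter = (cg2_ind == 1)
    return "bright" if cg2_is_fg == cg2_brighter else "dark"
```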
The next step is to convert the RGB image into a binary image $BI(x,y)$. For this purpose, the image is first converted into a gray image ($GI$) using the standard conversion in (8), and then real-time adaptive thresholding based on the local mean intensity (first-order statistics), as in (9), with neighborhood size $N_s$ given by (10), is used for the binary conversion. Compared with Otsu's method, MET and many other techniques based on two intra-class variances, this thresholding is the most suitable for separating background and foreground, particularly in shadowed and illumination-variant environments. Adaptive thresholding requires prior knowledge of the foreground polarity, which we have already determined in the steps above.
$$GI = 0.2989\,R + 0.5870\,G + 0.1140\,B \quad (8)$$

$$BI(x,y) = \begin{cases} 1 & \text{if } GI(x,y) < \dfrac{1}{m \times n}\displaystyle\sum_{i=x-m/2}^{x+m/2}\ \sum_{j=y-n/2}^{y+n/2} GI(i,j) \\ 0 & \text{otherwise} \end{cases} \quad (9)$$

$$N_s = 2\,\mathrm{floor}\big(\mathrm{size}(I)/16\big) + 1 \quad (10)$$
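A sketch of (8)-(10), using scipy's uniform filter for the local mean; this mirrors MATLAB-style adaptive thresholding with first-order local statistics rather than reproducing the paper's implementation exactly:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_binarize(rgb):
    """Grayscale conversion (8) and local-mean adaptive thresholding (9)-(10)."""
    gray = 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]
    h, w = gray.shape
    ns = (2 * (h // 16) + 1, 2 * (w // 16) + 1)   # neighborhood size, Eq. (10)
    local_mean = uniform_filter(gray, size=ns)
    return (gray < local_mean).astype(np.uint8)   # 1 where darker than the mean
```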
Figure 6 shows the effect of prior knowledge of the foreground polarity: in Figure 6a, the LP with a dark foreground is well thresholded, while in Figure 6b, the LP with a bright foreground is well thresholded. For multinational vehicle LPs, we must therefore know the foreground polarities in advance in order to separate foreground from background, as multinational VLPs have different background and foreground polarities. After obtaining the binary image, we change its background polarity to bright, if needed, using (11).
$$BI(x,y) = \begin{cases} BI(x,y) & \text{if } FP = \text{Dark} \\ \overline{BI(x,y)} & \text{if } FP = \text{Bright} \end{cases} \quad (11)$$
The next step is to extract the ROI, which contains only the required characters, by discarding the redundant area of the license plate. Let $I$ be a given set of objects of two types; then the required region ($R_R$) can be extracted as
$$R_R = \{ R \mid R \text{ is the maximal connected set of type 1 in } I \} \quad (12)$$

where type 1 and type 2 represent the white and black objects, respectively.
Character height estimation is also one of the most important and difficult tasks in character segmentation for multinational LPs, as the LPs of different countries have different character sizes. It is also used later for skew correction and to eliminate the remaining redundant objects. Up to this step, we have already discarded much of the redundant LP area by extracting the required background region. To accomplish this task, we propose an adaptive way to estimate the character height:
$$H_{O_{h_i}} = \{ O_{h_i} \mid h_i = \pm 20\% \text{ of } h_{max};\ i = 1, 2, \ldots, n \} \quad (13)$$

The set $H_{O_{h_i}}$ keeps the objects of a specific height, where $h_{max}$ is the maximum height among all bounding boxes detected in the extracted region ($R_R$) of the license plate and $O_{h_i}$ is a target object selected based on the height criterion $h_i$. Each subsequent bounding box is compared with $h_{max}$; if its height is within $\pm 20\%$ of $h_{max}$, it is considered a required object. If it does not fulfil the criterion, the previous maximum-height bounding box is discarded and the recent one is taken as the maximum-height object for further processing. This process continues until all required objects are extracted. In this way, all larger and smaller objects are eliminated automatically and only the required objects, i.e., the license plate characters, remain.
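One way to realize the height filter of (13), under our reading of the update rule described above; boxes are (x, y, w, h) tuples from CCA:

```python
def height_filter(boxes, tol=0.20):
    """Adaptive character-height filter, Eq. (13): keep objects whose height
    lies within +/-20% of the running maximum; a taller object that breaks
    the criterion replaces the reference, per the text's description."""
    kept, h_ref = [], 0
    for box in boxes:                       # boxes assumed sorted left-to-right
        h = box[3]
        if h_ref == 0 or abs(h - h_ref) <= tol * h_ref:
            kept.append(box)
            h_ref = max(h_ref, h)
        elif h > h_ref:                     # previous reference is discarded
            h_ref = h
            kept = [b for b in kept if abs(b[3] - h_ref) <= tol * h_ref]
            kept.append(box)
        # else: a smaller outlier (screw, dot, noise) is simply dropped
    return kept
```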
After finding the heights of the required objects, we use this information for skew correction through (14). The skew detection and correction problem is thus addressed in an efficient and accurate way, which helps to enhance the system performance at both the segmentation and recognition stages.
$$CA_{LP} = \tan^{-1}\left(\frac{y_{RM}(O_{h_i}) - y_{LM}(O_{h_i})}{x_{RM}(O_{h_i}) - x_{LM}(O_{h_i})}\right) \quad (14)$$

where $O_{h_i} \in H_{O_{h_i}}$; $y_{RM}(O_{h_i})$ and $y_{LM}(O_{h_i})$ are the y-axis values of the right-most and left-most detected objects, while $x_{RM}(O_{h_i})$ and $x_{LM}(O_{h_i})$ are the corresponding x-axis values.
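As a sketch, (14) and the subsequent correction can be written as follows, assuming the kept boxes from the height filter above and scipy's image rotation; the sign convention of the corrective rotation is an assumption:

```python
import numpy as np
from scipy.ndimage import rotate

def deskew(binary_img, kept_boxes):
    """Eq. (14): estimate the LP skew from the left-most and right-most kept
    characters, then rotate the plate to correct it."""
    boxes = sorted(kept_boxes, key=lambda b: b[0])
    x_lm, y_lm = boxes[0][0], boxes[0][1]
    x_rm, y_rm = boxes[-1][0], boxes[-1][1]
    angle = np.degrees(np.arctan2(y_rm - y_lm, x_rm - x_lm))
    return rotate(binary_img, angle, reshape=False, order=0)
```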
Characters touching the border are separated using the upper and lower boundaries of the objects in the set $H_{O_{h_i}}$, as shown in Figure 8c. The final step is to remove the remaining redundant information and place bounding boxes on the required objects, i.e., the LP characters; this is done using (15):
$$FI = \begin{cases} O_{h_i} & \text{if } O_{h_i} \in H_{O_{h_i}} \\ 0 & \text{if } O_{h_i} \notin H_{O_{h_i}} \end{cases} \quad (15)$$

where $FI$ represents the final image, which contains only the required LP characters.
The output of the above-discussed LPCS processes can be seen in Figure 7 and Figure 8. To demonstrate the effectiveness of the proposed method, Figure 9 presents sample LP images with blur, noise and shadow that are also affected by various illumination conditions. The proposed method cannot handle LPs with multi-color backgrounds or foregrounds, as shown in Figure 10. As discussed above, most countries use a single color for the background and foreground; the largest color group most likely belongs to the background region, while the foreground has the second largest, which is the key observation used to distinguish background from foreground. In the proposed approach, the remaining processes are based on this information, which is why segmentation of such license plate characters fails.

3.2. License Plate Character Recognition

We propose a deep learning (DL) framework to accomplish the license plate character classification task, as DL has recently transformed approaches across artificial intelligence. Deep learning is powered by neural networks. Convolutional neural networks (ConvNets or CNNs) are a category of neural networks that have proven very effective in areas such as image recognition and classification. They are made up of neurons with learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function, from the raw image pixels at one end to class scores at the other, and still has a loss function on the last (fully connected) layer.
Figure 11 shows the proposed structure for character recognition of multinational vehicle LPs. First, the segmented image is decomposed into red, green and blue channels and passed through a polarity matching module so that the data are processed uniformly, eliminating the impact of the foreground polarity that varies across multinational VLPs. To hierarchically learn features, these separated channel images are fed to CNNs, and the resulting output vectors are concatenated to obtain enhanced image feature information. Before sending this output feature vector to the classifier, we introduce a data normalization module, which improves the output scores. Finally, the normalized feature vector is fed to the classifier to obtain the target labels.
A pre-trained network is used as the starting point to learn the new task. Fine-tuning a network with transfer learning is usually much faster than training a network from scratch with randomly initialized weights, and learned features can be transferred quickly to a new task using a smaller number of training images. Several well-known pre-trained networks (AlexNet [37], VGG-16 [38], GoogleNet [39], ResNet-18 [40], Inception v3 [41]) have been trained on over a million images, can classify images into 1000 object categories, and have learned rich feature representations for a wide range of images. Speed is one of the most important parameters, as we target real-time applications. With this constraint in mind, we choose the AlexNet CNN as our base network for transfer learning, since it is the most time-efficient of the five well-known CNNs, as shown in Figure 12. The test time is measured over the whole test database for our particular application; the recognition test database contains 21,717 license plate character images extracted at the segmentation stage from 3718 license plate images of eight different countries.
In deep neural networks, higher-level convolutional layers have rich feature representations but deficient spatial information; in contrast, shallow layers preserve spatial information at the cost of less expressive features. Generally, only the features from the last convolutional layer are passed to the FC layers used for classification. Max-pooling is a downsampling strategy in convolutional neural networks, so there is a chance to recover spatial information that would otherwise be lost during downsampling. Vanishing gradients are another problem that may occur in deep networks. For recognition and detection tasks, performance can be enhanced and information loss reduced by using the collective feature information of different convolutional layers, as suggested in [42,43,44,45]; it has also been noted that convergence time improves with deep layer aggregation. We therefore propose an improved CNN along the lines of AlexNet, shown in Figure 13, with parametric details in Table 2. In the improved CNN, the features of convolution layers 4 and 5 are merged, and one extra max-pooling layer is introduced to match the feature dimensions of the two convolution layers.
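As an illustration, here is a PyTorch sketch of the CNNC45 aggregation idea (the paper's implementation is in MATLAB): the conv4 and conv5 feature maps are both max-pooled to 6 × 6 and concatenated before the fully connected layers, matching the sizes in Table 2. ReLU placement and the omission of AlexNet's normalization layers are simplifications assumed here.

```python
import torch
import torch.nn as nn

class AggregatedAlexNet(nn.Module):
    """Sketch of the CNN_C45 structure (Figure 13 / Table 2): pooled conv4
    and conv5 features are concatenated (6x6x640) before the FC layers;
    37 output classes (0-9, A-Z, EC)."""
    def __init__(self, num_classes=37):
        super().__init__()
        self.conv1to3 = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(inplace=True),   # 55x55x96
            nn.MaxPool2d(3, 2),                                      # 27x27x96
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(inplace=True), # 27x27x256
            nn.MaxPool2d(3, 2),                                      # 13x13x256
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(inplace=True) # 13x13x384
        )
        self.conv4 = nn.Sequential(nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(inplace=True))
        self.conv5 = nn.Sequential(nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(3, 2)          # 13x13 -> 6x6 for both branches
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear((384 + 256) * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):                       # x: N x 3 x 227 x 227
        x = self.conv1to3(x)
        c4 = self.conv4(x)                      # 13x13x384
        c5 = self.conv5(c4)                     # 13x13x256
        merged = torch.cat([self.pool(c4), self.pool(c5)], dim=1)  # 6x6x640
        return self.classifier(merged)
```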
The next important step is to choose the classifier that accepts the feature vector from the CNN feature learning module and generates output labels. In [46,47,48], the authors showed that the SVM is a strong and fast classifier for real-time classification applications, and great attention has been paid to the fusion of neural networks and SVMs [49,50]; we therefore use an SVM in the proposed system. For the support vector machine, a linear kernel and the hinge loss function are used, as described in (16) and (17) [51].
$$G(x_j, x_k) = x_j' x_k \quad (16)$$

where $G(x_j, x_k)$ is element $(j,k)$ of the Gram matrix and $x_j$, $x_k$ are p-dimensional vectors representing observations $j$ and $k$ in $X$.
$$\ell[y, f(x)] = \max[0,\ 1 - y f(x)] \quad (17)$$

where $f(x) = x'\beta + b$, $\beta$ is a vector of $p$ coefficients, $x$ is an observation from $p$ predictor variables and $b$ is the scalar bias.
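In practice, this recognition head can be sketched with scikit-learn standing in for MATLAB's templateSVM: a linear SVM with the hinge loss of (17) trained on the normalized, concatenated CNN features. The data variables here are hypothetical names, not from the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# feats_train/feats_test: concatenated multi-channel CNN feature vectors;
# labels cover the 37 classes (0-9, A-Z, EC). Names are illustrative.
clf = make_pipeline(StandardScaler(),                 # data normalization module
                    LinearSVC(loss="hinge", C=1.0))   # linear kernel, Eqs. (16)-(17)
clf.fit(feats_train, labels_train)
accuracy = clf.score(feats_test, labels_test)
```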

4. Experimental Results and Discussion

We used MATLAB 2019b on an Intel® Core™ i7 CPU running at 4.20 GHz along with a GTX 1080Ti GPU for feature extraction and evaluation. Experiments were performed on 3718 high-quality, low-quality and blurry images. Most of the vehicle license plates are from different states of America and Europe, along with vehicles from Pakistan, UAE, Canada, Australia and Mexico.
License plates with different foreground and background colors and a variety of formats are included in the test dataset, as shown in Figure 14. Most of the images were gathered from the Media Lab [52] and Olav's [53] databases. Moreover, low-resolution images and images affected by diverse weather and illumination conditions are included in the test database, as are license plate images with shadow, partial shadow and blur. Some cases in which license plates were under direct sunlight are also included to verify the adaptability of the proposed method.

4.1. LP Characters Segmentation Results Analysis

Regardless of the variety of license plates, the proposed framework effectively segments all the characters of almost 97% of the license plates with 98% precision, as shown in Table 3, which is quite good for multinational VLPs. The main reason for the high accuracy is that the proposed approach first eliminates the redundant part of the license plate by extracting the region of interest (ROI), the largest region of a single color, which is a salient feature of most countries' LPs. Then, the character height estimation filter adaptively determines the character height in any type of license plate. Accuracy and precision are defined as
$$Accuracy = \frac{NMCLPs}{TLPs}, \qquad Precision = \frac{CSC}{CSC + FSC}$$

and

$$TLPs = NMCLPs + MCLPs$$

where NMCLPs (no-missing-character license plates) are license plates with all characters detected and MCLPs (missing-character license plates) are LPs with only partially detected characters. CSC represents the correctly segmented characters and FSC the falsely segmented characters of the no-missing-character license plates (NMCLPs); only these LP characters are used in the recognition stage, as LPs with missing characters (MCLPs) are useless for further processing.
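These definitions reduce to two ratios; as a quick check against the totals in Table 3 (a sketch with the counts hard-coded from that table):

```python
def segmentation_metrics(nmclps, mclps, csc, fsc):
    """Plate-level accuracy and character-level precision as defined above."""
    accuracy = nmclps / (nmclps + mclps)      # TLPs = NMCLPs + MCLPs
    precision = csc / (csc + fsc)
    return accuracy, precision

# Totals from Table 3: 3602 of 3718 plates fully segmented, 21717 correctly
# segmented characters, 367 false segmentations.
acc, prec = segmentation_metrics(3602, 116, 21717, 367)   # -> 96.88%, 98.33%
```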

4.2. LP Characters Recognition Results Analysis

In this part, each character extracted from the license plates is recognized separately. In total, 21,717 characters from 36 classes (0-9, A-Z), with a different number of samples per class, were extracted in the previous part. The test dataset consists of characters of different styles with different background and foreground polarities, extracted from various VLP formats, to evaluate the performance of the proposed feature model under different conditions. Since we use an SVM for classification, which is a supervised learning model, training images are required to obtain a decision boundary for performance assessment. Therefore, a database of 84,058 characters from 37 classes (0-9, A-Z, EC), with almost 2250 characters per class, is used for training. The extra class (EC) is introduced to eliminate falsely segmented characters (FSC). To support the adaptability constraint for the recognition of multinational VLPs and to prove the effectiveness of the proposed framework, the training characters are gathered from one country only and prepared so that the test data may have any background and foreground color and polarity: all training images are converted to gray scale to eliminate the effect of color, and all are given a dark foreground polarity to make them uniform. This makes the system more generic in nature and able to support the multinational framework, as it is not feasible to gather training data from all targeted countries.
The proposed model contributes at three different levels and collectively achieves outstanding performance on such diversified data. The individual and collective performance of these levels is discussed in the next sections.

4.2.1. Layers Aggregation Module

The CNN module is the backbone of the proposed method, as it extracts the features from the license plate images and thus greatly affects the recognition ability of the framework. For this reason, we propose and evaluate four modified CNN structures that use layer aggregation to fuse low- and high-level features. These four models, referred to as CNNC15, CNNC25, CNNC35 and CNNC45, concatenate the feature maps of convolution layers 1, 2, 3 and 4, respectively, with that of convolution layer 5. CNNB denotes the base structure, which outputs only the convolution layer 5 feature map, as in the traditional structure. The proposed layer aggregation models for the CNN structure are shown in Figure 15, where C represents a convolution layer and P a pooling layer.
Table 4 presents a comparative study of the four layer-aggregation models and the traditional model for eight different countries. The network with the CNNC45 model gives better character recognition than CNNB, CNNC15, CNNC25 and CNNC35. CNNC15, CNNC25 and CNNC35 likely still contain excessive background noise compared to CNNC45, which reduces system performance. Table 4 shows that CNNC45 achieves 90.30% recognition accuracy, the highest among all structures, beating even the baseline network (CNNB).

4.2.2. Multi-Channel CNN Architecture Module

Multiple image fields are an essential requirement for efficient object classification, as image features have a direct impact on the performance of an object recognition framework. Here we propose a multi-channel, multi-CNN architecture with a concatenated output feature vector, capable of producing comprehensive and more appropriate features that enhance the original image's feature information. First, the red, green and blue channel images are obtained by decomposing the original RGB image; each channel image is then fed to a separately trained neural network; and finally the fully connected layer is obtained by concatenating the one-dimensional eigenvectors of the three single-image domains, as shown in Figure 11.
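A sketch of the multi-channel wrapper follows, assuming make_branch() builds a network that returns a 1-D feature vector (e.g. the FC7 activations of the aggregation sketch above). Replicating each plane to three channels so a standard 3-channel backbone can be reused is our assumption, not a detail stated in the paper.

```python
import torch
import torch.nn as nn

class MultiChannelLPR(nn.Module):
    """Sketch of the multi-channel module of Figure 11: one separately
    trained branch per color plane, feature vectors concatenated."""
    def __init__(self, make_branch):
        super().__init__()
        self.branches = nn.ModuleList([make_branch() for _ in range(3)])

    def forward(self, rgb):                      # rgb: N x 3 x H x W
        feats = []
        for c, net in enumerate(self.branches):
            # Replicate the single plane to 3 channels to reuse a standard
            # 3-channel backbone (our assumption).
            plane = rgb[:, c:c + 1].repeat(1, 3, 1, 1)
            feats.append(net(plane))
        return torch.cat(feats, dim=1)           # concatenated feature vector
```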
Experimental results show that the proposed framework with the multi-channel CNN module (MMC-CNN) learns image features more effectively than the traditional convolutional neural network (CNN-LA), achieving 92.13% overall recognition accuracy, roughly 2% higher than the traditional CNN structure, as shown in Table 5.

4.2.3. Impact of FG-Polarity Matching Module

We explored the proposition that data uniformity in terms of foreground polarity enhances the network's recognition performance. Our work focuses on multinational VLPs, where the LPs of different countries have different foreground polarities, and even the LPs of a single country may differ. For this purpose, we propose and integrate a polarity matching module before feeding the image to the multi-channel convolutional neural network; it checks and, if needed, inverts the foreground polarity of an image, using the FP information already extracted by the foreground/background classification framework at the segmentation stage. The foreground polarity matching module works as follows:
$$I_{FP}(x,y) = \begin{cases} I_{FP}(x,y) & \text{if } FP = \text{Dark} \\ \overline{I_{FP}(x,y)} & \text{if } FP = \text{Bright} \end{cases}$$
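The module itself is a conditional inversion; a one-line sketch for 8-bit images:

```python
def match_foreground_polarity(img, fp):
    """Invert bright-foreground character images so every input reaching the
    multi-channel CNN has a dark foreground, as in the equation above."""
    return img if fp == "dark" else 255 - img
```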
The FP matching module is integrated between the multi-channel input and the multi-channel CNN module to rectify foreground polarity mismatches, and it gives more accurate recognition results than the previous stage. A data normalization module is also used to standardize the output feature vector. A total of 96% recognition accuracy is achieved after the integration of these modules into the proposed framework.
For better understanding, we graphically present the performance comparison of every module for each targeted country separately in Figure 16 and Figure 17. Figure 16 shows the cumulative accuracy after each integrated module, while Figure 17 shows only the percentage increase in accuracy that each module contributes to the proposed system. The cumulative recognition performance improves from 86.25% to 96% as the different modules are combined at different levels, as shown in Figure 18. Base represents the result achieved by the base network, while Level 1, Level 2 and Level 3 represent the outputs of the layer aggregation, multi-channel CNN and FG polarity matching modules, respectively.
A comparison of the proposed method with existing works is also provided. Five existing state-of-the-art methods were implemented in MATLAB and their performance recorded on the test dataset. These methods are compared with the proposed method in terms of recognition accuracy and computational time in Table 6. The proposed method outperforms all the existing methods, achieving 96.04% recognition accuracy. Furthermore, it is faster, in terms of computational time, than all tested methods except that of [21]; although the method in [21] has a slightly better computational time, its accuracy is almost 12.5% lower than that of the proposed method.

5. Conclusions

In this work, we propose an adaptive framework well suited to the recognition of multinational vehicle license plates. To keep it generalized, the approach requires no prior knowledge of the LP layout, and we do not use training data from all targeted countries; this is the prominent feature of the proposed methodology. A foreground polarity detection model is developed for the classification of background and foreground based on their colors, which is not only the backbone of the LPCS stage but also contributes at the LPCR stage. This model also works well in the presence of shadow and under various illumination conditions. Multi-channel processing with an improved CNN structure is proposed to extract deep features, and a real-time SVM classifier is used to obtain the output labels. Features from convolution layers 4 and 5 are merged in a layer aggregation module to obtain a more expressive output feature map. The recognition performance is enhanced at three levels, and 96.04% recognition accuracy is achieved, which is a significant improvement for multi-style license plates.

Author Contributions

Conceptualization, M.A.R. and M.R.A.; Data curation, M.A.K.; Methodology, M.A.R.; Software, M.A.R.; Supervision, C.Q.; Validation, M.A.K.; Visualization, M.R.A.; Writing—original draft, M.A.R.; Writing—review & editing, M.A.R. and M.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grants No. 61572395 and 61675161).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Asif, M.R.; Qi, C.; Wang, T.; Fareed, M.S.; Raza, S.A. License plate detection for multi-national vehicles: An illumination invariant approach in multi-lane environment. Comput. Electr. Eng. 2019, 78, 132–147. [Google Scholar] [CrossRef]
  2. Asif, M.R.; Chun, Q.; Hussain, S.; Fareed, M.S.; Khan, S. Multinational vehicle license plate detection in complex backgrounds. J. Vis. Commun. Image Represent. 2017, 46, 176–186. [Google Scholar] [CrossRef]
  3. Asif, M.R.; Qi, C.; Bibi, I.; Sadiq Fareed, M.; Zhang, Z.; Zhang, Z. Performance Evaluation of Local Image Features for Multinational Vehicle License Plate Verification. IEEE Intell. Veh. Symp. Proc. 2018, 2018, 2170–2175. [Google Scholar]
  4. Saini, M.K.; Saini, S. Multiwavelet transform based license plate detection. J. Vis. Commun. Image Represent. 2017, 44, 128–138. [Google Scholar] [CrossRef]
  5. Zhang, J.; Li, Y.; Li, T.; Xun, L.; Shan, C. License Plate Localization in Unconstrained Scenes Using a Two-Stage CNN-RNN. IEEE Sens. J. 2019, 19, 5256–5265. [Google Scholar] [CrossRef]
  6. Shapiro, V.; Gluhchev, G.; Dimov, D. Towards a multinational car license plate recognition system. Mach. Vis. Appl. 2006, 17, 173–183. [Google Scholar] [CrossRef]
  7. Shapiro, V.; Gluhchev, G. Multinational license plate recognition system: Segmentation and classification. Proc. Int. Conf. Pattern Recognit. 2004, 4, 352–355. [Google Scholar]
  8. Tian, J.; Wang, R.; Wang, G.; Liu, J.; Xia, Y. A two-stage character segmentation method for Chinese license plate. Comput. Electr. Eng. 2015, 46, 539–553. [Google Scholar] [CrossRef]
  9. Sasi, A.; Sharma, S.; Cheeran, A.N. Automatic Car Number Plate Recognition. In Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–6. [Google Scholar]
  10. Abedin, M.Z.; Nath, A.C.; Dhar, P.; Deb, K.; Hossain, M.S. License plate recognition system based on contour properties and deep learning model. In Proceedings of the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, Bangladesh, 21–23 December 2017; pp. 590–593. [Google Scholar]
  11. Azam, S.; Islam, M.M. Automatic license plate detection in hazardous condition. J. Vis. Commun. Image Represent. 2016, 36, 172–186. [Google Scholar] [CrossRef]
  12. Barnouti, N.H.; Naser, M.A.S.; Al-Dabbagh, S.S.M. Automatic Iraqi license plate recognition system using back propagation neural network (BPNN). In Proceedings of the 2017 Annual Conference on New Trends in Information & Communications Technology Applications (NTICT), Baghdad, Iraq, 7–9 March 2017; pp. 105–110. [Google Scholar]
  13. Khan, M.A.; Sharif, M.; Javed, M.Y.; Akram, T.; Yasmin, M.; Saba, T. License number plate recognition system using entropy-based features selection approach with SVM. IET Image Process. 2018, 12, 200–209. [Google Scholar] [CrossRef]
  14. Pan, M.S.; Yan, J.B.; Xiao, Z.H. Vehicle license plate character segmentation. Int. J. Autom. Comput. 2008, 5, 425–432. [Google Scholar] [CrossRef]
  15. Ashtari, A.H.; Nordin, M.J.; Fathy, M. An Iranian license plate recognition system based on color features. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1690–1705. [Google Scholar] [CrossRef]
  16. Hsu, G.S.; Chen, J.C.; Chung, Y.Z. Application-oriented license plate recognition. IEEE Trans. Veh. Technol. 2013, 62, 552–561. [Google Scholar] [CrossRef]
  17. Gou, C.; Wang, K.; Yao, Y.; Li, Z. Vehicle License Plate Recognition Based on Extremal Regions and Restricted Boltzmann Machines. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1096–1107. [Google Scholar] [CrossRef]
  18. Bulan, O.; Kozitsky, V.; Ramesh, P.; Shreve, M. Segmentation- and Annotation-Free License Plate Recognition with Deep Localization and Failure Identification. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2351–2363. [Google Scholar] [CrossRef]
  19. Panahi, R.; Gholampour, I. Accurate Detection and Recognition of Dirty Vehicle Plate Numbers for High-Speed Applications. IEEE Trans. Intell. Transp. Syst. 2017, 18, 767–779. [Google Scholar] [CrossRef]
  20. Wang, J.; Huang, H.; Qian, X.; Cao, J.; Dai, Y. Sequence recognition of Chinese license plates. Neurocomputing 2018, 317, 149–158. [Google Scholar] [CrossRef]
  21. Laroca, R.; Severo, E.; Zanlorensi, L.A.; Oliveira, L.S.; Goncalves, G.R.; Schwartz, W.R.; Menotti, D. A Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar]
  22. Björklund, T.; Fiandrotti, A.; Annarumma, M.; Francini, G.; Magli, E. Robust license plate recognition using neural networks trained on synthetic images. Pattern Recognit. 2019, 93, 134–146. [Google Scholar] [CrossRef]
  23. Wang, W.; Yang, J.; Chen, M.; Wang, P. A Light CNN for End-to-End Car License Plates Detection and Recognition. IEEE Access 2019, 7, 173875–173883. [Google Scholar] [CrossRef]
  24. Comelli, P.; Ferragina, P.; Granieri, M.N.; Stabile, F. Optical Recognition of Motor Vehicle License Plates. IEEE Trans. Veh. Technol. 1995, 44, 790–799. [Google Scholar] [CrossRef]
  25. Huang, Y.P.; Lai, S.Y.; Chuang, W.P. A template-based model for license plate recognition. In Proceedings of the IEEE International Conference on Networking, Sensing and Control, Taipei, Taiwan, 21–23 March 2004; Volume 2, pp. 737–742. [Google Scholar]
  26. Jagtap, J.; Holambe, S. Multi-Style License Plate Recognition using Artificial Neural Network for Indian Vehicles. In Proceedings of the 2018 International Conference on Information, Communication, Engineering and Technology (ICICET), Pune, India, 29–31 August 2018; pp. 1–4. [Google Scholar]
  27. Kocer, H.E.; Cevik, K.K. Artificial neural networks based vehicle license plate recognition. Procedia Comput. Sci. 2011, 3, 1033–1037. [Google Scholar] [CrossRef] [Green Version]
  28. Huang, Y.P.; Chen, C.H.; Chang, Y.T.; Sandnes, F.E. An intelligent strategy for checking the annual inspection status of motorcycles based on license plate recognition. Expert Syst. Appl. 2009, 36, 9260–9267. [Google Scholar] [CrossRef]
  29. Wen, Y.; Lu, Y.; Yan, J.; Zhou, Z.; Von Deneen, K.M.; Shi, P. An algorithm for license plate recognition applied to intelligent transportation system. IEEE Trans. Intell. Transp. Syst. 2011, 12, 830–845. [Google Scholar] [CrossRef]
  30. Raghunandan, K.S.; Shivakumara, P.; Jalab, H.A.; Ibrahim, R.W.; Kumar, G.H.; Pal, U.; Lu, T. Riesz Fractional Based Model for Enhancing License Plate Detection and Recognition. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 2276–2288. [Google Scholar] [CrossRef]
  31. Yang, Y.; Li, D.; Duan, Z. Chinese vehicle license plate recognition using kernel-based extreme learning machine with deep convolutional features. IET Intell. Transp. Syst. 2018, 12, 213–219. [Google Scholar] [CrossRef]
  32. Duan, T.; Du, T. Building an automatic vehicle license plate recognition system. In Proceedings of the International Conference Computer Science, Can Tho, Vietnam, 21–24 February 2005; pp. 59–63. [Google Scholar]
  33. Lin, C.H.; Lin, Y.S.; Liu, W.C. An efficient license plate recognition system using convolution neural networks. In Proceedings of the 2018 IEEE International Conference on Applied System Invention (ICASI), Chiba, Japan, 13–17 April 2018; pp. 224–227. [Google Scholar]
  34. Puarungroj, W.; Boonsirisumpun, N. Thai License Plate Recognition Based on Deep Learning. Procedia Comput. Sci. 2018, 135, 214–221. [Google Scholar] [CrossRef]
  35. Zang, D.; Chai, Z.; Zhang, J.; Zhang, D.; Cheng, J. Vehicle license plate recognition using visual attention model and deep learning. J. Electron. Imaging 2015, 24, 033001. [Google Scholar] [CrossRef] [Green Version]
  36. RGB Color Space. Available online: https://engineering.purdue.edu/~abe305/HTMLS/rgbspace.htm (accessed on 10 July 2019).
  37. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar]
  38. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  39. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  41. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  42. Yu, F.; Wang, D.; Shelhamer, E.; Darrell, T. Deep Layer Aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2403–2412. [Google Scholar]
  43. Sun, Y.; Wang, X.; Tang, X. Deep learning face representation from predicting 10,000 classes. In Proceedings of the IEEE Computer Soc. Conference Computer Vision Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1891–1898. [Google Scholar]
  44. Kong, T.; Yao, A.; Chen, Y.; Sun, F. HyperNet: Towards accurate region proposal generation and joint object detection. In Proceedings of the IEEE Computer Soc. Conference Computer Vision Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 845–853. [Google Scholar]
  45. Tang, L.; Gao, C.; Chen, X.; Zhao, Y. Pose detection in complex classroom environment based on improved faster R-CNN. IET Image Process. 2019, 13, 451–457. [Google Scholar] [CrossRef]
  46. Zhang, J.; Shao, K.; Luo, X. Small sample image recognition using improved Convolutional Neural Network. J. Vis. Commun. Image Represent. 2018, 55, 640–647. [Google Scholar] [CrossRef]
  47. Delforouzi, A.; Pooyan, M. Efficient farsi license plate recognition. In Proceedings of the 2009 7th International Conference on Information, Communications and Signal Processing (ICICS), Macau, China, 8–10 December 2009; pp. 1–5. [Google Scholar]
  48. Qi, X.; Wang, T.; Liu, J. Comparison of Support Vector Machine and Softmax Classifiers in Computer Vision. In Proceedings of the 2017 Second International Conference on Mechanical, Control and Computer Engineering (ICMCCE), Harbin, China, 8–10 December 2017; pp. 151–155. [Google Scholar]
  49. He, T.; Li, X. Image Quality Recognition Technology Based on Deep Learning. J. Vis. Commun. Image Represent. 2019. [Google Scholar] [CrossRef]
  50. Fan, X.; Tjahjadi, T. Fusing dynamic deep learned features and handcrafted features for facial expression recognition. J. Vis. Commun. Image Represent. 2019, 65, 102659. [Google Scholar] [CrossRef]
  51. TemplateSVM. Available online: https://www.mathworks.com/help/stats/templatesvm.html (accessed on 13 September 2019).
  52. Medialab LPR Database. Available online: http://www.medialab.ntua.gr/research/LPRdatabase.html (accessed on 25 September 2019).
  53. Olav’s License Plate Pictures. Available online: http://www.olavsplates.com/ (accessed on 25 September 2019).
Figure 1. Multinational vehicles license plates.
Figure 2. Proposed architecture for license plate character segmentation.
Figure 3. Visualization of RGB color space: (a) RGB-cube with 8 pure colors, (b) RGB-cube with colors distribution, and (c) gray shades distribution bar.
Figure 4. Black and white candidate's histogram.
Figure 5. Foreground polarity detection based on colors: (a) foreground (FG)/background (BG) detection based on colors candidates count, (b) color priority sequence.
Figure 6. Effectiveness of prior knowledge of FG polarity: (a) license plate (LP) image with dark foreground polarity, (b) LP image with bright foreground polarity.
Figure 7. FG polarity uniformity and region of interest (ROI) extraction process: (a) original LP images, (b) gray images, (c) thresholded images, (d) extracted ROI having the same foreground polarity.
Figure 8. Required objects detection part based on character height estimation filter: (a) angle detection, (b) angle correction, (c) border touched character separation, (d) bounding boxes on required objects after removing a small object.
Figure 9. LP images with noise, blur, shadow and various illumination conditions.
Figure 10. Failed segmentation samples of license plate images.
Figure 11. Proposed architecture for multinational vehicles license plate character recognition.
Figure 12. Time efficiency of five well known convolutional neural networks (CNNs).
Figure 13. The improved CNN network structure.
Figure 14. Sample images of LPs from the test dataset.
Figure 15. Different layers aggregation structures of proposed CNN model.
Figure 16. Graphical representation of integrated modules overall performance with base structure.
Figure 17. Graphical representation of individual modules contribution for every targeted country.
Figure 18. Graphical behavior of accumulative rise in recognition accuracy for proposed modules.
Table 1. Color definition based on three main groups.

| Color Count | Color Threshold ($\overline{T_C}$) | Color Group |
|---|---|---|
| $\overline{C_R}$ | d1 ≥ 0.24 & d3 ≥ 0.24 | B < R > G |
| $\overline{C_Y}$ | d1 < 0.24 & d3 ≥ 0.24 | B < R > G |
| $\overline{C_M}$ | d1 ≥ 0.24 & d3 < 0.24 | B < R > G |
| $\overline{C_{BL}}$ | d2 ≥ 0.24 & d3 ≥ 0.24 | R < B > G |
| $\overline{C_{CY}}$ | d3 ≥ 0.24 & d2 < 0.24 | R < B > G |
| $\overline{C_M}$ | d1 ≥ 0.24 & d3 < 0.24 | R < B > G |
| $\overline{C_G}$ | d1 ≥ 0.24 & d2 ≥ 0.24 | R < G > B |
| $\overline{C_Y}$ | d1 < 0.24 & d2 ≥ 0.24 | R < G > B |
| $\overline{C_{CY}}$ | d1 ≥ 0.24 & d2 < 0.24 | R < G > B |

$\overline{C_R}$: red count, $\overline{C_Y}$: yellow count, $\overline{C_M}$: magenta count, $\overline{C_{BL}}$: blue count, $\overline{C_{CY}}$: cyan count, $\overline{C_G}$: green count; $d_1 = |(R - G)/(R + G)|$, $d_2 = |(G - B)/(G + B)|$, $d_3 = |(B - R)/(B + R)|$. The distances between the primary color pairs red-green, green-blue and blue-red are represented by d1, d2 and d3, respectively.
Table 2. The parametric detail of the improved CNN. FM: feature maps, OS: output size, KS: kernel size, S: stride.

| Layer | FM | OS | KS | S |
|---|---|---|---|---|
| Input | 1 | 227 × 227 × 3 | - | - |
| Convolution 1 | 96 | 55 × 55 × 96 | 11 × 11 | 4 |
| Max Pooling 1 | 96 | 27 × 27 × 96 | 3 × 3 | 2 |
| Convolution 2 | 256 | 27 × 27 × 256 | 5 × 5 | 1 |
| Max Pooling 2 | 256 | 13 × 13 × 256 | 3 × 3 | 2 |
| Convolution 3 | 384 | 13 × 13 × 384 | 3 × 3 | 1 |
| Convolution 4 | 384 | 13 × 13 × 384 | 3 × 3 | 1 |
| Max Pooling 4 | 384 | 6 × 6 × 384 | 3 × 3 | 2 |
| Convolution 5 | 256 | 13 × 13 × 256 | 3 × 3 | 1 |
| Max Pooling 5 | 256 | 6 × 6 × 256 | 3 × 3 | 2 |
| Concatenation | - | 6 × 6 × 640 | Layer 4 ⊕ Layer 5 | - |
| FC6 | - | 1 × 1 × 4096 | - | - |
| FC7 | - | 1 × 1 × 4096 | - | - |
| FC8 | - | 1 × 1 × 37 | - | - |
Table 3. Accuracy and precision of segmented characters.

| Country | TLPs | NMCLPs | CSC | FSC | Accuracy, % | Precision, % |
|---|---|---|---|---|---|---|
| Australia | 247 | 242 | 1364 | 18 | 97.98 | 98.70 |
| Canada | 329 | 325 | 1813 | 22 | 98.78 | 98.80 |
| England | 150 | 144 | 856 | 12 | 96.00 | 98.62 |
| Mexico | 106 | 103 | 735 | 7 | 97.17 | 99.06 |
| Pakistan | 397 | 382 | 2387 | 37 | 96.22 | 98.47 |
| Europe | 747 | 732 | 4445 | 75 | 97.99 | 98.34 |
| UAE | 46 | 44 | 237 | 5 | 95.65 | 97.93 |
| USA | 1696 | 1630 | 9880 | 191 | 96.11 | 98.10 |
| Total | 3718 | 3602 | 21717 | 367 | 96.88 | 98.33 |
Table 4. Performance comparison of base CNN with modified CNN structures (recognition accuracy, %).

| Country | CNNB | CNNC15 | CNNC25 | CNNC35 | CNNC45 |
|---|---|---|---|---|---|
| Australia | 89.67 | 88.49 | 74.10 | 91.14 | 91.55 |
| Canada | 78.76 | 82.47 | 63.72 | 82.41 | 84.62 |
| England | 85.12 | 84.90 | 72.14 | 84.89 | 88.43 |
| Mexico | 93.47 | 93.47 | 85.71 | 95.78 | 96.19 |
| Pakistan | 83.48 | 84.12 | 67.77 | 85.75 | 90.89 |
| Europe | 90.78 | 91.44 | 85.47 | 91.98 | 93.76 |
| UAE | 88.11 | 88.11 | 87.67 | 89.87 | 91.25 |
| USA | 80.64 | 81.97 | 63.03 | 82.00 | 85.69 |
| Total | 86.25 | 86.87 | 74.95 | 87.98 | 90.30 |
Table 5. Performance comparison of different modules of the proposed recognition network (recognition accuracy, %).

| Country | CNNB | CNN-LA | MMC-CNN-LA | MMC-CNN-LA ⊕ MPM |
|---|---|---|---|---|
| Australia | 89.67 | 91.55 | 92.64 | 97.23 |
| Canada | 78.76 | 84.62 | 87.99 | 92.77 |
| England | 85.12 | 88.43 | 90.47 | 93.39 |
| Mexico | 93.47 | 96.19 | 96.53 | 98.86 |
| Pakistan | 83.48 | 90.89 | 93.92 | 98.71 |
| Europe | 90.78 | 93.76 | 94.16 | 97.86 |
| UAE | 88.11 | 91.25 | 92.53 | 93.04 |
| USA | 80.64 | 85.69 | 88.76 | 96.44 |
| Total | 86.25 | 90.30 | 92.13 | 96.04 |
Table 6. Comparison with existing methods.

| Method | Year | RA, % | Time, ms |
|---|---|---|---|
| Ref. [31] | 2018 | 82.2 | 5.6 |
| Ref. [13] | 2018 | 93.1 | 4.0 |
| Ref. [34] | 2018 | 92.5 | 7.0 |
| Ref. [21] | 2018 | 83.5 | 1.9 |
| Ref. [30] | 2018 | 88.3 | 6.8 |
| Proposed | - | 96.04 | 3.5 |
