License Plate Super-Resolution Using Diffusion Models

In surveillance, accurately recognizing license plates is hindered by their often low quality and small dimensions, compromising recognition precision. Despite advancements in AI-based image super-resolution, methods like Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) still fall short in enhancing license plate images. This study leverages the cutting-edge diffusion model, which has consistently outperformed other deep learning techniques in image restoration. By training this model on a curated dataset of Saudi license plates, in both low and high resolutions, we discovered the diffusion model's superior efficacy. The method achieves a 12.55% and 37.32% improvement in Peak Signal-to-Noise Ratio (PSNR) over SwinIR and ESRGAN, respectively. Moreover, our method surpasses these techniques in terms of Structural Similarity Index (SSIM), registering a 4.89% and 17.66% improvement over SwinIR and ESRGAN, respectively. Furthermore, 92% of human evaluators preferred our images over those from the other algorithms. In essence, this research presents a pioneering solution for license plate super-resolution, with tangible potential for surveillance systems.


I. INTRODUCTION
As the number of motor vehicles continues to rise, license plates become crucial to public security [1]. Computer vision techniques offer timely solutions for license plate identification. A prevalent technique is License Plate Recognition (LPR) [2], which transforms license plate images into recognizable character sets. This aids in tasks such as vehicle tracking for security, traffic violation monitoring [3], toll collection [4], and intelligent parking [5].
The effectiveness of LPR hinges on the quality of its input images. While surveillance cameras capture these images, many produce output of subpar quality and low resolution. Factors such as camera positioning, weather, and lighting conditions all play a role in this degradation. Given that license plates occupy only a small section of these images, they often appear at even lower resolutions. Enhancing and denoising these images is vital for practical applications and remains an active research area.
Super-Resolution (SR) is a reconstruction technology that produces high-resolution images from low-resolution ones, and it has gained researchers' attention recently. The first paper to address this problem was by Dong et al. [6]. They defined the mapping between low- and high-resolution images using a Convolutional Neural Network (CNN) and demonstrated good restoration quality that was fast to achieve. Since then, many studies have been published in the area with better results.
Some research has addressed enhancing LP images using SR techniques. Some authors used older methods such as image processing and interpolation techniques [7]. Such approaches depend on local pixel information, which may lead to poor quality. Another approach utilized multiple low-resolution images to produce a single high-resolution image [8]. Other approaches [9]-[11] utilized the SRGAN model to produce high-quality images. Figure 1 shows examples where traditional methods fail to produce high-quality images that faithfully match the original.
In this study, we pioneer the adaptation of Diffusion Models for License Plate (LP) image Super-Resolution. Our dataset comprises genuine Saudi Arabian license plate images. Through comprehensive experiments, we benchmarked our approach against leading super-resolution methods such as SwinIR [17] and ESRGAN [18], employing PSNR, SSIM, and MS-SSIM as evaluation metrics. Notably, our diffusion model improved PSNR values by 13% and 37% over SwinIR and ESRGAN respectively, with SSIM improving by 5% and 18%. A human-centric evaluation confirmed our method's efficacy: 92% of participants favored our generated images over those of SwinIR and ESRGAN. This underscores the potential of diffusion models in reconstructing high-fidelity LP images, indicating an exciting avenue for future research.
The primary contributions of this study are:
• Introduction of a novel method for license plate image super-resolution, demonstrating superior performance over existing state-of-the-art approaches.
• Pioneering the application of diffusion models to license plate image super-resolution, to the best of our knowledge.
• Development of a unique dataset focused on Saudi Arabian license plates.
Section II presents a summary of related work in license plate image SR. Our methodology is elucidated in Section III. Comprehensive experimental results, both quantitative and qualitative, are discussed in Section IV. The paper concludes with discussions in Section V and final remarks in Section VI.

II. RELATED WORKS
This section reviews prior research pertinent to license plate image super-resolution. Image Super-Resolution (ISR) aims to produce high-resolution images from their lower-resolution counterparts. Although ISR has wide applications, this paper focuses on license plates. Recent methodologies predominantly employ deep learning, especially Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs). Yang et al. [19] introduced a multi-scale super-resolution CNN (MSRCNN), drawing inspiration from GoogLeNet [20]. Their model has three convolutional layers designed for feature extraction, feature mapping, and high-resolution image reconstruction. Importantly, they found that training with license plate datasets results in superior super-resolution specifically for license plate images.
Lee et al. [10] built upon the SRGAN architecture and incorporated a character-based perceptual loss for high-quality image generation suitable for Optical Character Recognition (OCR). Their method, when compared with the traditional SRGAN, improved OCR accuracy by 6.8% and 4.0% at the plate and character levels, respectively.
Hamdi et al. [11] developed a Double GAN for image enhancement and super-resolution. While their model exhibited significant image quality improvement, it was computationally intensive.
Lin et al. [15] targeted Chinese license plate image enhancement using a GAN. They introduced a residual dense network and progressive upsampling, achieving better performance metrics and reduced reconstruction time compared to several benchmarks. Shah et al. [21] employed the VGG-19 network in conjunction with a GAN for super-resolution, effectively addressing the over-smoothing issue. Their approach demonstrated a marked improvement in PSNR and accuracy.
Nascimento et al. [22] extended the Multi-Path Residual Network (MPRNet) [23], merging PixelShuffle layers with attention modules. Their model, designed to enhance license plate images, outperformed the baseline MPRNet in various aspects.
Lee et al. [24] proposed a two-step framework, first training an ISR network, followed by a character recognition network. While their global image extraction technique showed promise, it faltered under challenging conditions such as low lighting and rotations.
In another contribution, [25] combined ISR with feature extraction for license plate recognition. Their approach optimized weight-freezing techniques to leverage high-frequency image details and character information, albeit with extended training durations.
Our research pioneers the adaptation of diffusion models [12] for LP super-resolution. To date, the diffusion model, while utilized for generic image denoising, has not been specifically applied to LP images. Notably, Ho et al. [26] and Saharia et al. [14] showcased its potential in denoising and super-resolution for general and remote sensing images, respectively.
A summary of the discussed research can be found in Table I.

III. PROPOSED METHOD
In this section, we introduce our method for the super-resolution of license plate images. Image super-resolution aims to generate high-resolution (HR) images from their low-resolution (LR) counterparts. The dataset consists of paired LR and HR images, represented as:

$$D = \{(x_i^{lr}, z_i^{hr})\}_{i=1}^{N}.$$

Here, $x^{lr}$ is the degraded LR image and $z^{hr}$ is its original HR version. The degradation typically stems from introducing noise to the HR image or reducing its resolution, as described by:

$$x^{lr} = f(z^{hr}; \theta),$$

where $\theta$ denotes the degradation parameters. The super-resolution task seeks to reconstruct $y^{hr}$, an approximation of the HR image $z^{hr}$, from $x^{lr}$ using the model:

$$y^{hr} = g(x^{lr}; \phi).$$

While many current techniques rely on pixel-level information, this often sacrifices important image characteristics. Recognizing the effectiveness of the diffusion model in image generation, given its proficiency in capturing global features, we employ it for license plate super-resolution.
Diffusion models, rooted in non-equilibrium thermodynamics, comprise two phases: a forward noise-introduction process and a backward noise-removal process. The former uses a Markov chain to gradually degrade the image by adding Gaussian noise, as visualized in Figure 2. From a given input $x_0^{lr}$, each Markov chain step introduces spherical Gaussian noise with variance $\beta_t$. At step $t$, $x_t^{lr}$ is formed by adding noise to $x_{t-1}^{lr}$ following:

$$q(x_t^{lr} \mid x_{t-1}^{lr}) = \mathcal{N}\big(x_t^{lr};\ \sqrt{1-\beta_t}\,x_{t-1}^{lr},\ \beta_t \mathbf{I}\big).$$

This chain produces a sequence from $x_0^{lr}$ to $x_T^{lr}$ defined by:

$$q(x_{1:T}^{lr} \mid x_0^{lr}) = \prod_{t=1}^{T} q(x_t^{lr} \mid x_{t-1}^{lr}).$$

Using reparametrization, we can succinctly describe $x_t^{lr}$ without multiple iterations:

$$q(x_t^{lr} \mid x_0^{lr}) = \mathcal{N}\big(x_t^{lr};\ \sqrt{\bar{\alpha}_t}\,x_0^{lr},\ (1-\bar{\alpha}_t)\mathbf{I}\big),$$

where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$, giving us the ability to determine $x_t^{lr}$ at any step based on the schedule $\beta_t$. Consequently, with $\epsilon \sim \mathcal{N}(0, \mathbf{I})$, $x_t^{lr}$ is produced by:

$$x_t^{lr} = \sqrt{\bar{\alpha}_t}\,x_0^{lr} + \sqrt{1-\bar{\alpha}_t}\,\epsilon.$$

Fig. 3: From the fully degraded image $x_T^{lr}$, the backward process reconstructs the HR image $y_0^{hr}$ using prior knowledge about $x_0^{lr}$.
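As a concrete illustration, the closed-form forward sampling can be sketched in a few lines of NumPy; the linear β schedule and T = 1000 below are common DDPM defaults, not settings reported in this paper:

```python
import numpy as np

def forward_diffuse(x0, t, betas, eps=None):
    """Sample x_t directly from x_0 via the closed-form reparametrization:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)        # cumulative product up to each step
    if eps is None:
        eps = np.random.randn(*x0.shape)  # spherical Gaussian noise
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Illustrative linear noise schedule (assumed, not taken from the paper)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
x0 = np.random.rand(48, 48, 3)            # a toy 48x48 RGB "LR image"
x_mid = forward_diffuse(x0, t=500, betas=betas)
```

The closed form is what makes training practical: a random step t can be sampled and noised in one shot, without iterating the Markov chain.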
Denoising Process. The forward process produces a noisy image $x_T^{lr}$ with an isotropic Gaussian distribution. Our target at this stage is to recover the high-resolution LP image from the noisy image. This is done by reversing the previous process and gradually removing the noise from the sample $x_T^{lr}$ to obtain a sample from the original data distribution $q(x_0^{lr})$. The backward process is shown in Figure 3. It can be approximated with a parameterized model $p_\theta$ that uses a CNN as follows:

$$p_\theta(x_{t-1}^{lr} \mid x_t^{lr}) = \mathcal{N}\big(x_{t-1}^{lr};\ \mu_\theta(x_t^{lr}, t),\ \Sigma_\theta(x_t^{lr}, t)\big).$$

This model predicts the parameters $\mu_\theta(x_t^{lr}, t)$ and $\Sigma_\theta(x_t^{lr}, t)$ of the Gaussian distribution at each time step $t$. The trajectory from $x_T^{lr}$ to $x_0^{lr}$ can then be found as follows:

$$p_\theta(x_{0:T}^{lr}) = p(x_T^{lr}) \prod_{t=1}^{T} p_\theta(x_{t-1}^{lr} \mid x_t^{lr}).$$

In order for the generative model to recover the high-resolution LP image, it needs some information about the original image $x_0^{lr}$. Therefore, we can condition the sampling of $x_{t-1}^{lr}$ at time step $t$ on $x_0^{lr}$. This can be represented as:

$$q(x_{t-1}^{lr} \mid x_t^{lr}, x_0^{lr}) = \mathcal{N}\big(x_{t-1}^{lr};\ \tilde{\mu}_t(x_t^{lr}, x_0^{lr}),\ \tilde{\beta}_t \mathbf{I}\big).$$

Therefore, given that $\epsilon \sim \mathcal{N}(0, \mathbf{I})$, we can express $x_0^{lr}$ as follows:

$$x_0^{lr} = \frac{1}{\sqrt{\bar{\alpha}_t}}\big(x_t^{lr} - \sqrt{1-\bar{\alpha}_t}\,\epsilon\big).$$

Besides, we can define a neural network $\epsilon_\theta(x_t^{lr}, t)$ to find the value of $\epsilon$ and the mean as follows:

$$\mu_\theta(x_t^{lr}, t) = \frac{1}{\sqrt{\alpha_t}}\Big(x_t^{lr} - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t^{lr}, t)\Big).$$

Diffusion models define the loss function as the Mean Squared Error (MSE) between the noise added to the original image and the noise predicted in the backward process. We follow the same approach as Ho et al.
[26] by using a simplified version of the loss function that ignores the weighting coefficients and outperforms the full original version. It can be expressed as follows:

$$L_{simple} = \mathbb{E}_{t,\, x_0^{lr},\, \epsilon}\Big[\big\|\epsilon - \epsilon_\theta(x_t^{lr}, t)\big\|^2\Big].$$

Our model utilizes a U-Net architecture [31], which consists of a contracting path and an expanding path. The contracting path captures context through successive layers of convolutions and down-sampling operators. The expanding path enables precise localization by combining the high-resolution features from the contracting path with the up-sampled output; a final convolution layer learns to generate a more precise image. The contracting path uses a large number of feature channels, which pass contextual information to the higher-resolution layers. The expanding path is therefore symmetric to the contracting path, which produces a U-shaped network architecture rather than a fully connected network.
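The simplified objective and one step of the backward trajectory can be sketched as follows (NumPy; the zero-predicting `dummy_eps` is an assumed stand-in for the trained U-Net, and the β schedule is an assumed DDPM default):

```python
import numpy as np

def simple_diffusion_loss(eps_true, eps_pred):
    """L_simple: MSE between the injected noise and the predicted noise,
    with the weighting coefficients dropped."""
    return np.mean((eps_true - eps_pred) ** 2)

def reverse_step(x_t, t, betas, eps_model, rng=np.random.default_rng(0)):
    """One backward step x_t -> x_{t-1} using the epsilon-parameterized mean;
    for t > 0 a sigma_t * z term is added (fixed variance sigma_t^2 = beta_t)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps_hat = eps_model(x_t, t)  # predicted noise at step t
    mu = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mu  # final step: return the mean, no added noise
    return mu + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

# Toy usage: a dummy "network" that always predicts zero noise (assumption)
betas = np.linspace(1e-4, 0.02, 1000)
dummy_eps = lambda x, t: np.zeros_like(x)
x_prev = reverse_step(np.random.rand(48, 48, 3), t=10, betas=betas, eps_model=dummy_eps)
```

In training, one samples a step t, forms x_t via the closed-form forward equation, and minimizes `simple_diffusion_loss` between the drawn noise and the network's prediction; sampling then applies `reverse_step` from t = T down to 0.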

IV. EXPERIMENTS A. Datasets
Our dataset consists of genuine images of Saudi Arabian License Plates (LP). These images, captured with cameras, were meticulously cropped to focus solely on the license plates, excluding extraneous background details. Saudi license plates, featuring a mix of Arabic and English characters and numerals, come in various designs, all of which are encompassed in our study. It is worth noting that while these mixed characters might challenge certain techniques, our method sidesteps character recognition, ensuring such variations do not influence its performance.
The dataset comprises 593 color images with dimensions 192 × 192 × 3, each with red (R), green (G), and blue (B) channels. For model development, we allocated 92% of the images (543 images) for training and reserved the remaining 8% (50 images) for testing. Original images underwent downsampling by a factor of 4, resulting in low-resolution images of 48 × 48 × 3. These downsampled images were used as inputs to the diffusion model. For validation, our model was tested on downscaled images with identical resolution (48 × 48 × 3), emphasizing the versatility and robustness of our approach without further fine-tuning.
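The ×4 degradation can be sketched as follows; the paper does not state which downsampling kernel was used, so a simple 4 × 4 average pooling stands in here:

```python
import numpy as np

def downsample_x4(hr):
    """Reduce an (H, W, C) image by a factor of 4 per axis via 4x4 mean pooling.
    A stand-in for the paper's (unspecified) downsampling operator."""
    h, w, c = hr.shape
    assert h % 4 == 0 and w % 4 == 0
    return hr.reshape(h // 4, 4, w // 4, 4, c).mean(axis=(1, 3))

hr = np.random.rand(192, 192, 3)  # a toy 192x192 RGB HR plate image
lr = downsample_x4(hr)            # LR input for the diffusion model
```

Any ×4 reduction (bicubic, area, etc.) fits the same pipeline; what matters for training is that each LR image stays paired with its HR original.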

B. Evaluation Metrics
In order to assess the quality of our model comprehensively, we used the following metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) [32], and Multi-Scale Structural Similarity (MS-SSIM) [33]. PSNR represents the ratio between the maximum possible value (power) of a signal and the power of the distorting noise that degrades the quality of the image. PSNR is well suited in this context due to its simple calculation and clear meaning. It is based on the Mean Squared Error (MSE), which compares the original pixel values to those of the degraded image. It can be defined as follows:

$$MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(i,j) - K(i,j)\big)^2, \qquad PSNR = 10\log_{10}\frac{R^2}{MSE},$$

where $R$ is defined based on the image format, $M$ and $N$ are the numbers of rows and columns of the image, and $I$, $K$ denote the original and degraded images. SSIM relies on extracting structural information from the image by assuming pixel interdependency, which is close to human perception. SSIM is defined from three characteristics captured from the image, i.e., luminance, contrast, and structure. They are defined as follows:

$$l(x, y) = \frac{2\mu_x\mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}, \qquad c(x, y) = \frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}, \qquad s(x, y) = \frac{\sigma_{xy} + c_3}{\sigma_x\sigma_y + c_3},$$

where $\mu_x$ and $\mu_y$ represent the means of $x$ and $y$, $\sigma_x$ and $\sigma_y$ represent their standard deviations, and $\sigma_{xy}$ is their covariance. For division stabilization, the constants $c_1$, $c_2$, and $c_3$ are used. With $c_3 = c_2 / 2$, a simplified representation of SSIM can be written as follows:

$$SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}.$$

MS-SSIM performs multiple downsampling steps, which provides more flexibility than SSIM and incorporates variations in image resolution and viewing conditions. MS-SSIM can be written as follows:

$$MS\text{-}SSIM(x, y) = [l_M(x, y)]^{\alpha_M} \prod_{j=1}^{M} [c_j(x, y)]^{\beta_j}\,[s_j(x, y)]^{\gamma_j},$$

where $l_M$ is the luminance computed at scale $M$ only, and $c_j$, $s_j$ are the contrast and structure computed at scale $j$ as defined above. The constants $\alpha_M$, $\beta_j$, $\gamma_j$ adjust the relative importance of each term.
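For illustration, PSNR and the simplified SSIM can be computed directly from image statistics. This sketch uses global means and variances rather than the local sliding windows of reference implementations, with the commonly used constants c1 = (0.01R)² and c2 = (0.03R)² (an assumption, not values stated in the paper):

```python
import numpy as np

def psnr(ref, test, R=255.0):
    """PSNR = 10 * log10(R^2 / MSE), R being the peak value for the format."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(R ** 2 / mse)

def ssim_global(x, y, R=255.0):
    """Simplified SSIM from global statistics (the full metric averages
    the same expression over local windows)."""
    c1, c2 = (0.01 * R) ** 2, (0.03 * R) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A pair of identical images gives an infinite PSNR and an SSIM of 1, the upper bounds of both metrics.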

C. Implementation Details
This section delineates the specifics of our diffusion model implementation and provides a comparative overview with the SwinIR and ESRGAN models.
Our diffusion model was implemented in PyTorch. In contrast, SwinIR utilized PyTorch Lightning, while ESRGAN was built upon TensorFlow. For the computational demands of our diffusion model and SwinIR, we employed an NVIDIA Quadro RTX 8000 (48601 MiB) GPU. ESRGAN, however, was trained on an NVIDIA Tesla T4 (15360 MiB) provided by Google Colab.
The construction of our diffusion model involved an initial step of downsizing our authentic HR images from a resolution of 192 × 192 × 3 to 48 × 48 × 3. To enhance the model's robustness, we augmented the dataset through random rotations at angles of 5°, 10°, and 15°. The training regimen consisted of 8688 iterations spanning 64 epochs, with each batch comprising 4 images. Drawing from Saharia et al. [14], we adopted a fixed learning rate of 0.0002, incorporating a linear warm-up schedule via the Adam optimizer.
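The rotation augmentation can be sketched with a minimal nearest-neighbor rotation; this is an illustrative stand-in, as an actual pipeline would typically use a library routine with proper interpolation:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate an (H, W, C) image about its center by angle_deg using
    nearest-neighbor inverse mapping; out-of-bounds pixels are filled with 0."""
    h, w = img.shape[:2]
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-rotate each output pixel back into the source image
    src_x = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    src_y = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.rint(src_x).astype(int)
    sy = np.rint(src_y).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

img = np.random.rand(192, 192, 3)                # a toy HR plate image
aug = [rotate_nn(img, a) for a in (5, 10, 15)]   # the paper's rotation angles
```

Each original image thus contributes up to three additional rotated variants to the training set.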
Table II indicates that the diffusion model incurs a 21% longer training duration compared to SwinIR and 2.6% relative to ESRGAN. Nonetheless, as the ensuing section will demonstrate, the resultant performance more than compensates for this marginal increase in time.

D. Results
Our model was evaluated on 50 images from the LP dataset. Figure 5 displays a representative high-resolution (HR) output produced using our diffusion model. These generated HR images closely match the original ground truth, preserving all image features.
To assess the fidelity of our generated images, we analyzed their histograms against the ground truth. Figure 6 presents histograms for the original HR image, the downscaled low-resolution (LR) image, and the HR image generated by our diffusion model. While the LR image histogram markedly differs from the original due to noise, the histogram of the produced HR image is strikingly consistent with that of the original. This indicates our diffusion model's capability to reproduce HR images that retain the intrinsic characteristics of the ground truth.

E. Comparison with State-of-the-Art Methods
This section compares our results with those generated by two selected state-of-the-art techniques. In the recent literature, there are two main approaches commonly used in image generation. Some techniques use encoders and decoders to synthesize images; others are designed based on GAN algorithms. For the first approach, we selected the state-of-the-art method SwinIR [17], while ESRGAN [18] was selected as the state-of-the-art GAN-based method. We performed quantitative and qualitative analyses, as discussed next.
1) Quantitative Results: Using diffusion models to generate high-resolution license plate images has proved more effective than the other traditional methods. We compared our approach with the state-of-the-art methods SwinIR [17] and ESRGAN [18], building both models on our LP image dataset for the comparison. Table III shows the quantitative comparison on the LP SR task. We used three metrics for the evaluation, i.e., PSNR, SSIM, and MS-SSIM, as discussed in Section IV-B. The results demonstrate that our approach outperformed the SOTA methods by a significant margin on all three metrics. First, PSNR showed a 12.55% improvement for the diffusion model over SwinIR and a 37.32% improvement over ESRGAN. Similarly, on SSIM, DDPM outperformed SwinIR by 4.89% and ESRGAN by 17.66%. MS-SSIM did not show as large an improvement, but our method was still better. This indicates that DDPM outperforms the other traditional methods in the LP SR setting.
2) Qualitative Results: Human Evaluation: While metrics such as PSNR and SSIM offer quantitative insights, they might not always align with human perception. To better understand the visual quality of images generated by SwinIR, ESRGAN, and our DDPM approach, we conducted a human evaluation study.
For this, we utilized the three-alternative-forced-choice (3-AFC) discrimination test, a reputable method for subjective image quality assessment, particularly when gauging an approach's effectiveness [34]. In this paradigm, participants are presented with three options and tasked with choosing the one that best meets a given criterion.
In our study, participants answered 11 questions, each featuring images produced by the aforementioned super-resolution methods. These images were randomly ordered for each participant to eliminate potential bias. The primary directive was to identify the clearest, highest-quality image from each set. A total of 50 individuals participated. Figure 7 reveals that over 40 participants consistently chose our generated images as the clearest across all questions. SwinIR was preferred by only a few, and ESRGAN was the top choice for merely two questions, by a single participant. Table IV enumerates the percentage preferences for each technique, underscoring the consistent visual superiority of our generated LP images.
3) Qualitative Results: Visual Details Evaluation: Upon evaluating images generated by the three algorithms, our method consistently outperformed the others. Empirical results, reinforced by human assessments, highlight the preeminence of our approach. As depicted in Figure 8, our diffusion model demonstrates superior detail-restoration capabilities compared to SwinIR. Analogously, Figure 9 showcases our model's enhanced prowess in rendering intricate details of super-resolved LP images. Particularly noteworthy is the pronounced sharpness of our images, especially around character boundaries.

VI. CONCLUSIONS
In this paper, we presented a generative model that restores a high-quality image from a highly distorted one. To the best of our knowledge, we are the first to utilize diffusion models in the context of license plate image super-resolution. Our results showed that our proposed approach has superior performance compared to other traditional methods. Using a diffusion model for LP super-resolution significantly outperforms SOTA methods such as SwinIR and ESRGAN by a notable margin. Our model had a 13% PSNR improvement over SwinIR and 37% over ESRGAN. Similarly, its SSIM score was 5% better than SwinIR and 18% better than ESRGAN. Besides, our model was able to capture detailed features in the LP images that the other SOTA methods failed to recover. According to our human evaluation experiment, participants selected our generated images as the clearest 92% of the time, against the SwinIR and ESRGAN images. This shows that the diffusion model is a good candidate for improving the quality of license plate images and, hence, the performance of LP recognition systems. On the other hand, the main disadvantage of the diffusion model approach is its heavy computational cost, which hinders its use especially in real-time applications. However, considering the superior performance of diffusion models, the approach is worth pursuing for super-resolution problems. In the future, we will work on minimizing its computational cost while harnessing its powerful performance.

Fig. 7: Number of participants who selected the respective generated super-resolved image as the clearest. Blue, red, and yellow denote the number of participants who selected our generated image, the SwinIR image, or the ESRGAN image, respectively. More than 40 of the 50 participants selected our generated image for every question.

Fig. 1 :
Fig. 1: Comparison between the ground truth and the super resolved images using SwinIR and ESRGAN.

Fig. 2 :
Fig. 2: Starting from an HR license plate image $z_0^{hr}$, it is either noise-injected or downscaled to produce $x_0^{lr}$. The forward process further adds noise until reaching isotropy at step $T$.

Figure 4
Figure 4 shows sample intermediate steps of the reconstruction process. It starts with the input image (a) at 48 × 48 resolution. With the repetitive denoising steps of the diffusion model (visualized in (b)), we were able to generate a high-definition image (c) at 192 × 192 resolution that resembles the ground truth image (d). Comparing the generated HR image (c) with the ground truth image (d), it is difficult to find a difference; they are almost identical.

Fig. 4 :
Fig. 4: This figure shows the intermediate steps of the HR image reconstruction. Starting from the LR image (a), then going through the denoising steps (b), to produce the output HR image (c) that resembles the ground truth image (d).

Fig. 5 :
Fig. 5: The top row shows the original HR image at (192 × 192) resolution. The middle row shows the downscaled LR image at (48 × 48) resolution. The bottom row shows the super-resolved images produced by the diffusion model at (192 × 192) resolution.

Fig. 6 :
Fig. 6: Top row: (a) ground truth HR image, (b) the downscaled LR image, and (c) the HR image generated by the diffusion model. The bottom row shows their respective histograms. The generated HR image histogram (f) is very similar to the ground truth image histogram (d), in contrast to the LR image histogram (e).

Fig. 8 :
Fig. 8: The first (left) column has the SwinIR super-resolved images, the middle column has our generated images, and the last (right) column shows the original images, with zoomed regions in each case. One can see that SwinIR failed to recover some details that our method could; our generated images are closer to the originals.

Fig. 9 :
Fig. 9: ESRGAN failed to recover many of the details in their generated images (first left column), while our generated images (middle column) have closer visual details compared to the original images (last right column).

TABLE I :
A summary of the related work discussed in this paper.

TABLE II :
This table shows the training time taken by each of the methods to train the LP super resolution model.

TABLE III :
The diffusion model proved superior for generating high-resolution images in the context of license plates. This table shows the quantitative results comparing our approach with the SOTA methods SwinIR and ESRGAN using three metrics: PSNR, SSIM, and MS-SSIM.

TABLE IV :
Most of the participants (92%) selected the LP images generated by our diffusion model over the other generated images as the clearest. For the ESRGAN images, only a single participant, in two questions, selected the ESRGAN image.