Article

Rain Rendering and Construction of Rain Vehicle Color-24 Dataset

Mingdi Hu, Chenrui Wang, Jingbing Yang, Yi Wu, Jiulun Fan and Bingyi Jing
1 School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Chang’an West St., Chang’an District, Xi’an 710121, China
2 Department of Statistics & Data Science, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen 518055, China
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(17), 3210; https://doi.org/10.3390/math10173210
Submission received: 28 July 2022 / Revised: 30 August 2022 / Accepted: 1 September 2022 / Published: 5 September 2022
(This article belongs to the Special Issue Advances in Pattern Recognition and Image Analysis)

Abstract

The fine identification of vehicle color can assist criminal investigation and intelligent traffic management law enforcement. Since almost all vehicle-color datasets used to train models are collected in good weather, existing vehicle-color recognition algorithms typically perform poorly on outdoor visual tasks. In this paper, we construct a new RainVehicleColor-24 dataset by rain-image rendering, using PS technology and the SyRaGAN algorithm on the VehicleColor-24 dataset. The dataset contains a total of 40,300 rain images with 125 different rain patterns, which can be used to train deep neural networks for specific vehicle-color recognition tasks. Experiments show that vehicle-color recognition algorithms trained on the new RainVehicleColor-24 dataset improve accuracy to around 72% and 90% on rainy and sunny days, respectively. The code is available at [email protected].

1. Introduction

With the development of computer vision technology and hardware, vision algorithms based on deep learning have achieved unprecedented performance and are increasingly applied in practical scenarios; for example, color recognition is applied in vehicle tracking [1,2,3,4,5,6,7,8,9,10,11]. In order to train vehicle-color recognition algorithms, scholars have constructed multiple vehicle-color datasets. For example, the vehicle-color dataset by Chen et al. [1] included eight color categories, with one vehicle per image. The dataset collected by Jeong et al. [3] contains seven colors, and Tilakaratna et al. [12] expanded their dataset to 13 categories. Hu et al. [4] constructed a new benchmark vehicle-color dataset, VehicleColor-24, with 24 colors, and proposed a novel vehicle-color recognition (VCR) method based on a Smooth Modulation Neural Network with Multi-Scale Feature Fusion (SMNN-MSFF). VehicleColor-24 includes 10,091 images with a total of 31,232 vehicles, and each image contains up to nine vehicles. These datasets and algorithms have been very conducive to vehicle recognition tasks. However, the images of the above datasets are mostly collected in good weather. On the other hand, criminal investigation and intelligent traffic management law enforcement often encounter bad conditions, especially rainy weather [5].
Falling raindrops are likely to appear as rain streaks in images due to their high density and fast speed. Rain streaks typically produce reflection or refraction, and often blur and deform the images captured by cameras, which poses challenges for subsequent visual tasks. Jointly studying low- and high-level tasks has therefore become an active research direction [5], and many scholars have paid attention to their joint processing. The generalization of object detection has been improved by embedding domain adaptation, image restoration, style transfer, or other modules into object detection methods or few-shot learning mechanisms [6,7,8,9,13,14,15,16]. However, these works all require additional modules, which undoubtedly increase the burden on outdoor equipment. A natural solution is to construct image datasets with rich diversity, covering various environments, for specific tasks; such datasets can be used to train models for subsequent high-level tasks without adding modules, while still improving generalization. However, collecting such datasets is very expensive in practice, so this paper constructs the RainVehicleColor-24 dataset by rain-image-rendering technology toward this end. It aims to address the specific vehicle-color recognition task.
There exists much literature on the construction of rain-image datasets. For example, Garg and Nayar (2006) used a particle simulator to synthesize rain patterns, and then superimposed the rain patterns onto clean backgrounds to synthesize rain images [17,18]. Hu et al. [19] and Tremblay et al. [20] rendered rain images based on the complex fusion of background and rain layers. Data-driven synthetic rain images based on a generative adversarial network (GAN) have recently received increasing attention [20,21,22]. Wang et al. [23] constructed a large-scale real-rain image dataset, i.e., SPA-Data (spatial attentive data). However, rain vehicle-color image datasets are rare.
Inspired by the above works, this paper constructs the RainVehicleColor-24 dataset by rendering rain images using Photoshop (PS) technology and the SyRaGAN algorithm [24]. The RainVehicleColor-24 dataset has a total of 40,300 rain images with 125 kinds of rain-streak patterns, which is beneficial for improving the generalization of deep neural network models for the fine identification of vehicle colors. Using the benchmark datasets Rain100L and RainVehicleColor-24, we trained the current state-of-the-art (SOTA) PReNet network [25] and the lightweight LDVS deraining network [26] to obtain the models PReNet1, LDVS1, PReNet2, and LDVS2. Tested on both synthetic and real data, these models showed obvious advantages for the deraining task. Vehicle-color recognition methods trained on the new dataset showed improved performance in classifying vehicle color in both sunny and rainy conditions.
The main contributions of this paper are as follows:
(1)
This paper constructs the RainVehicleColor-24 dataset by rain-image-rendering technology, in order to address the specific task of fine-grained vehicle-color recognition. Both model-based and data-driven rendering are used: the former synthesizes 300 images by PS to form one subset, with clean background images taken from VehicleColor-24; the latter, i.e., the SyRaGAN network, synthesizes 40,000 rain images from another 8000 clean vehicle images, also from VehicleColor-24, to form the other subset;
(2)
This dataset helps to increase the performance of vehicle-color recognition methods on rainy days, since RainVehicleColor-24 consists of paired vehicle-color rain images with various rain patterns;
(3)
We improve the performance of existing algorithms for vehicle-color identification in rainy conditions. Vehicle-color identification plays a key role in intelligent traffic management and criminal investigation, yet existing algorithms are typically trained on datasets collected in good weather, and therefore perform poorly in bad weather, such as rain. In this paper, we show that our newly constructed dataset is critically beneficial to the performance of existing vehicle-color identification algorithms in rainy conditions.
The rest of this paper is structured as follows: Section 2 reviews the related work; Section 3 introduces the construction of RainVehicleColor-24; Section 4 presents and compares detailed experimental results; Section 5 concludes the paper.

2. Related Work

2.1. Photoshop (PS) Technology

At present, PS is the main technology for synthesizing rain images. Garg and Nayar (2006) synthesized various types of rain patterns and then directly added them to the corresponding clean background images to obtain paired rain images. There has been some work on the simple stacking of background and rain layers. For example, Li et al. [17] proposed a paired rain-image test set (Rain12) composed of one type of rain pattern and 12 background images, yielding 12 synthesized rain images. Yang et al. [18] constructed a dataset (Rain100H) containing 1900 rain/clean image pairs, with 1800 image pairs for training and 100 image pairs for testing. These datasets are often used for comparison; however, their rain-streak types are relatively simple. There has also been research on the complex fusion of background and rain layers. For example, Li et al. [27] used PS to add noise to form rain patterns of different intensities and directions, and then synthesized rain images based on the screen blend model (SBM), producing the dataset DDC-Data. Wang et al. [28] constructed a dataset, QSMD-Data, of synthetic rain images, also based on the screen blend model (SBM). A minimal compositing sketch of the SBM is given below.
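As a reference point, the screen blending used in [27,28] reduces to a one-line compositing rule; the sketch below assumes images normalized to [0, 1], which is our convention here rather than one stated in the paper.

import numpy as np

def screen_blend(background, rain_layer):
    # Screen blend model (SBM): the result is brighter wherever either
    # layer is bright, which is how bright rain streaks are composited
    # over a scene. Both inputs are float arrays in [0, 1], same shape.
    return 1.0 - (1.0 - background) * (1.0 - rain_layer)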
The process of PS technology is as follows (as shown in Figure 1):
(1)
First, rain-streak patterns are constructed under two light sources with different illumination angles and different camera orientations.
(2)
Then, the rain images are synthesized according to the raindrop-modeling equations:
$$\omega_n = \left[\frac{n(n-1)(n+2)\,\sigma}{\rho\, r_0^3}\right]^{1/2}, \tag{1}$$
$$r(t,\theta,\phi) = r_0\left[1 + A_{2,0}\sin(\omega_2 t)\,P_{2,0}(\theta) + A_{3,1}\sin(\omega_3 t)\cos(\phi)\,P_{3,1}(\theta)\right], \tag{2}$$
where σ is the surface tension, ρ is the density of water, θ is the polar angle, ϕ is the azimuth, r_0 is the radius of the undisturbed raindrop, A_{2,0} and A_{3,1} are the amplitudes, ω_n is the oscillation frequency, and P_{n,m}(θ) is the Legendre function that describes the dependence of the shape on the angle θ for the mode (n, m). The parameters are usually set by empirical knowledge (Garg and Nayar, 2006 [29]), and the rain pattern is synthesized by formulas (1) and (2); a small numerical sketch of these formulas is given after this list.
(3)
Finally, the synthesized rain pattern is directly added to the clean background image to get the rain image.
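For concreteness, Eqs. (1) and (2) can be evaluated numerically; the sketch below uses SciPy's associated Legendre function, and the drop radius r0 and amplitudes A20, A31 are illustrative placeholder values, not the paper's settings.

import numpy as np
from scipy.special import lpmv  # associated Legendre function P_n^m(x)

def raindrop_radius(t, theta, phi, r0=1.5e-3,
                    sigma=0.0728, rho=1000.0, A20=0.05, A31=0.03):
    # Eq. (1): oscillation frequency of mode n for a drop of radius r0,
    # with surface tension sigma (N/m) and water density rho (kg/m^3).
    def omega(n):
        return np.sqrt(n * (n - 1) * (n + 2) * sigma / (rho * r0 ** 3))
    # Eq. (2): perturbed drop surface for the (2,0) and (3,1) modes.
    x = np.cos(theta)
    return r0 * (1.0
                 + A20 * np.sin(omega(2) * t) * lpmv(0, 2, x)
                 + A31 * np.sin(omega(3) * t) * np.cos(phi) * lpmv(1, 3, x))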

2.2. Data-Driven Rain-Image-Rendering Technology

There are also semi-automatic methods to collect rain images. For example, Qian et al. [30] collected a paired raindrop-image dataset. Jin et al. [31] constructed the rain-image dataset RaidaR under a wide range of circumstances, using roof-mounted cameras. However, due to the high cost of data collection, real paired rain images are not sufficient to train models in practice; therefore, most methods are trained with the help of rendered data. Wei et al. [21] synthesized a rain-image dataset by an unsupervised learning mechanism with a constrained CycleGAN network, resulting in the dataset Rain200L, which has more types of rain-streak patterns and more realistic visual effects. Wang et al. [22] used a Bayesian model to construct a rain-image generation network that generates more than 120 rain-streak patterns. Further exploration of more reasonable and accurate rain-image synthesis models, and the generation of more realistically rendered rain images, are also important research directions. In addition, the construction of a specific vehicle rain-image dataset for vehicle target detection is both meaningful and much needed.

2.3. Single-Image Rain-Removal Algorithm

Algorithms for single-image rain removal fall mainly into two categories: traditional model-driven methods and data-driven deep neural networks [5]. Model-driven algorithms rely on the statistical analysis of rain streaks and background scenes, and use priors on the rain-streak and background layers to build a rain-removal model that is solved iteratively with explicit solutions. Chen et al. [32] constructed a generalized low-rank model that separates the low-rank rain pattern from the background layer. Luo et al. [33] proposed a highly discriminative sparse coding method to separate the rain pattern from the background layer.
On the other hand, a data-driven deep neural network algorithm for rain removal was first proposed by Fu et al. [34]. The algorithm reconstructs a clean background layer after removing the rain pattern from the high-frequency layer. Li et al. [27] divided the rain-removal network into two stages, decomposition and combination, and then used the residuals of the synthetic and original rain images to train the network to improve rain-removal performance. Ren et al. [25] proposed a simple and effective rain-removal baseline, the progressive recurrent network (PReNet). Since then, a variety of algorithms have been proposed, e.g., the lightweight pyramid network (LPNet) algorithm [35], the local binary pattern conditional generative adversarial network (LBP-CGAN) [36], and the lightweight single-image deraining algorithm incorporating visual saliency (LDVS) [26].
Since PReNet is a simple and effective single-image rain-removal algorithm and LDVS is a lightweight one, both are trained on the Rain100L and RainVehicleColor-24 benchmark rain datasets. Moreover, we combine them with subsequent target recognition algorithms to examine whether the RainVehicleColor-24 dataset can improve performance on the low- and high-level joint task as well as on the low-level rain-removal task. The details are given in Section 4.2 and Section 4.3.

2.4. Vehicle-Color Recognition Algorithms

Due to its practical significance, vehicle-color recognition has attracted much attention in computer vision. The literature falls mainly into two categories: manual feature-based methods and emerging data-driven deep learning methods [1,2,3,4,5,6,7,8,9,10,11]. Among others, Hu et al. [4] proposed a novel VCR method based on a Smooth Modulation Neural Network with Multi-Scale Feature Fusion (SMNN-MSFF), which is trained and evaluated on the dataset VehicleColor-24 with 24 vehicle-color classes. VehicleColor-24 consists of 10,091 vehicle images from 100 h of urban road surveillance video.
In this paper, we perform rain-image rendering on the VehicleColor-24 dataset to construct a task-specific dataset for vehicle-color recognition. The objective is to improve the performance of low- and high-level joint tasks in fine-grained vehicle-color recognition and to improve its generalization in bad weather.

3. Construction of RainVehicleColor-24

3.1. VehicleColor-24

Firstly, 8000 vehicle images are selected from the existing VehicleColor-24 as clean background images with resolution 1747 × 982. The dataset consists of 10,091 vehicle images captured from urban road surveillance videos, with a total of 31,232 vehicles and 24 colors. The authors preprocessed the dataset, including lighting adjustment, dehazing, etc. Samples from VehicleColor-24 are shown in Figure 2.

3.2. Rendering by PS

We randomly selected 300 images from VehicleColor-24, and used PS software to generate rain patterns of different directions, sizes, and thicknesses by adjusting parameters such as motion blur and color level. We then superimposed 120 kinds of rain streaks onto the 300 clean background images from the VehicleColor-24 dataset to construct a subset of RainVehicleColor-24. As a result, we have more types of rain patterns to ensure the diversity of rain images in the new dataset. Sample rain images from this subset are shown in Figure 3.
Figure 3 shows sample instances of this subset, which includes three kinds of images. Every image is formed by a different rain-streak pattern imposed onto a clean vehicle image from a different scene. The scenes are classified as simple (a single vehicle in an image taken under a clear sky), moderately complex (many vehicles in an image taken under a clear sky), and complex (many vehicles in an image taken under gray skies).

3.3. Rendering by the SyRaGAN Algorithm

In order to further enrich the types of rain streaks, this paper uses the SyRaGAN algorithm [24] to render rain patterns on the VehicleColor-24 dataset to construct another subset. SyRaGAN is inspired by the mapping network used in the latest I2I (image-to-image) translation methods, which maps the random noise space to the rain-pattern representation space to generate diverse rain patterns. SyRaGAN consists of the feature-mapping network M, the encoder network E, two generators G, and two discriminators D. The network takes the clean background image x_c and the rain image x_r as input and produces the synthesized rain images x_{sr1} and x_{sr2} as output, while S_z and S_r are the rain patterns extracted by the network (see Figure 4).
The rain-image rendering process is as follows. First, the clean background image and the rain image are respectively input into the SyRaGAN network, and the rain noise is extracted by the mapping network M to produce various rain styles. Second, the rain streaks are added to the clean background image to generate the rain image. Finally, the generated images are discriminated by the discriminator D. In this way, the network is optimized to generate rain images with various styles. For each image, five kinds of rain images with different directions, sizes, and thicknesses can be generated; a minimal sketch of this per-image sampling loop is given below.
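The sampling loop can be sketched as follows; here G, M, and code_dim are hypothetical handles standing in for the generator and mapping network of Figure 4, not a public SyRaGAN API.

import torch

@torch.no_grad()
def render_rain_variants(G, M, x_clean, n_styles=5, code_dim=16):
    # For each style: sample noise z, map it to a rain-pattern code S_z
    # via the mapping network M, then let the generator G add the coded
    # rain streaks to the clean image x_clean (N x 3 x H x W in [0, 1]).
    variants = []
    for _ in range(n_styles):
        z = torch.randn(x_clean.size(0), code_dim, device=x_clean.device)
        s_z = M(z)
        variants.append(G(x_clean, s_z))
    return variants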
To construct a dataset with diversity as wide as possible, we first sampled a clean background image with a single vehicle (a "simple scene") and rendered it into five rain images with five rain-streak patterns (Figure 5). Second, we sampled a clean background image with many vehicles (a "medium scene") and rendered it into five different rain images with five rain-streak patterns (Figure 6). Third, we sampled an image with many vehicles under gray skies (a "complex scene") and rendered it into five different rain images with five rain-streak patterns (Figure 7). Following this process, 8000 image samples from the clean vehicle dataset VehicleColor-24 are fed into SyRaGAN to obtain 40,000 rain vehicle-image samples. Combined with the rain-data subset generated by PS technology, 40,300 rain images are finally obtained, with a resolution of 512 × 384.
To summarize, the previous dataset VehicleColor-24 was labeled according to 24 standard vehicle colors, but lacked corresponding rain images. This paper leverages SyRaGAN and PS technologies to construct RainVehicleColor-24. Some samples are illustrated in Figure 8.

4. Experimental Results

In this paper, we use the metrics PSNR and SSIM to evaluate the quality of the recovered images. The formulas [37] are
$$\mathrm{MSE} = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\big(X(i,j) - Y(i,j)\big)^2, \tag{3}$$
$$\mathrm{PSNR} = 10 \log_{10}\frac{(2^n - 1)^2}{\mathrm{MSE}}, \tag{4}$$
$$\mathrm{SSIM}(X,Y) = \left(\frac{2 u_X u_Y + C_1}{u_X^2 + u_Y^2 + C_1}\right)\left(\frac{2 \sigma_X \sigma_Y + C_2}{\sigma_X^2 + \sigma_Y^2 + C_2}\right)\left(\frac{\sigma_{XY} + C_3}{\sigma_X \sigma_Y + C_3}\right). \tag{5}$$
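A direct implementation of Eqs. (3) and (4), with Eq. (5) delegated to scikit-image's reference SSIM routine, might look as follows; 8-bit color images are assumed.

import numpy as np
from skimage.metrics import structural_similarity

def psnr(x, y, n_bits=8):
    # Eqs. (3)-(4): peak signal power over mean squared error, in dB.
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse)

def ssim(x, y):
    # Eq. (5), computed locally and averaged over the image; the last
    # axis is assumed to be the color channel of uint8 images.
    return structural_similarity(x, y, channel_axis=-1, data_range=255)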

4.1. Experimental Setup

In this paper, we first used Rain100L and RainVehicleColor-24 as benchmark datasets to train the two deraining networks PReNet [25] and LDVS [26], obtaining the deraining models PReNet1, LDVS1, PReNet2, and LDVS2. We then tested them on synthetic and real rain images. In addition, we used Faster R-CNN [38] to detect objects for vehicle-color classification after deraining by PReNet1, LDVS1, PReNet2, or LDVS2. The experimental results show that deraining performance improves after training on RainVehicleColor-24, and the performance of Faster R-CNN on subsequent fine-grained vehicle-color recognition also improves. All mean average precisions (mAPs) of vehicle-color classification improve when the corresponding vehicle recognition deep neural networks are trained on RainVehicleColor-24. A minimal sketch of this derain-then-detect pipeline is given below.
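In the sketch below, the torchvision Faster R-CNN with a 25-class head (24 vehicle colors plus background) is our assumption for illustration, not the paper's stated configuration.

import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=25)  # 24 colors + background (assumed)
detector.eval()

@torch.no_grad()
def derain_then_detect(derainer, rain_img):
    # rain_img: 3 x H x W float tensor in [0, 1]; derainer is a trained
    # model such as PReNet2 or LDVS2 mapping a rain image to a clean one.
    clean = derainer(rain_img.unsqueeze(0)).squeeze(0)
    return detector([clean])[0]  # dict with 'boxes', 'labels', 'scores'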

4.2. PReNet Model Trained on Rain100L and RainVehicleColor-24

4.2.1. PReNet Network

Ren et al. [25] proposed a better and simpler baseline deraining network with six recurrent modules. Specifically, by repeatedly unfolding a shallow ResNet, a progressive ResNet (PRN) was proposed to take advantage of recursive computation. A recurrent LSTM layer is further introduced to propagate the dependencies of deep features across stages, forming the final framework, referred to as the progressive recurrent network (PReNet). As for loss functions, a single MSE or negative-SSIM loss is sufficient for training PRN and PReNet. An illustration of PReNet is shown in Figure 9. PReNet is a representative SOTA method due to its simplicity, efficiency, and effectiveness; a minimal sketch of its progressive recurrence follows.
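The sketch below conveys the progressive recurrence only; channel widths, block counts, and the ConvLSTM details are illustrative simplifications, not the paper's exact configuration.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    # Convolutional LSTM cell that carries deep-feature state across stages.
    def __init__(self, ch):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 4 * ch, 3, padding=1)

    def forward(self, x, h, c):
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, 1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class TinyPReNet(nn.Module):
    # Shared shallow layers unfolded for T stages; each stage refines the
    # background estimate b from the rain image o and the previous estimate.
    def __init__(self, ch=32, stages=6):
        super().__init__()
        self.ch, self.stages = ch, stages
        self.f_in = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU())
        self.lstm = ConvLSTMCell(ch)
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.f_out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, o):
        b = o
        h = c = o.new_zeros(o.size(0), self.ch, o.size(2), o.size(3))
        for _ in range(self.stages):
            x = self.f_in(torch.cat([o, b], 1))
            h, c = self.lstm(x, h, c)
            b = o - self.f_out(self.body(h))  # subtract predicted rain residual
        return b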

4.2.2. Comparison of Synthetic Rain Images

We used the training subsets of the Rain100L and RainVehicleColor-24 datasets to train the PReNet network, obtaining the deraining models PReNet1 and PReNet2. We then tested the rain-removal performance of the two models on the Rain100L test set. The results are given in Figure 10 and Table 1.
We tested images sampled from the synthetic Rain100L test subset with PReNet1, trained on Rain100L, and PReNet2, trained on RainVehicleColor-24. The test results are shown in Figure 10. As can be seen from Figure 10, the PReNet1 model has a better visual effect than the PReNet2 model, with further supporting results given in Table 1: PReNet1 outperforms PReNet2 by margins of 0.23 in PSNR and 0.02 in SSIM. PReNet1 performs better because its test and training images are identically distributed, both coming from Rain100L, whereas the test images derained by PReNet2 are not identically distributed with its training images. Even so, PReNet2 removes most of the rain streaks and restores the background. In other words, the model tested without a domain gap performs best, while PReNet2 still maintains its performance despite the gap between test and training data, which indicates the good quality of RainVehicleColor-24.
Figure 11 shows the visual results of PReNet1 and PReNet2 on test images from RainVehicleColor-24. Here PReNet2 is fully competitive with PReNet1, since its training and test sets share the same distribution and RainVehicleColor-24 has more varied rain patterns. The quantitative results are given in Table 1.

4.2.3. Comparison of Real Rain Images

To compare the generalization of the rain-removal methods, the PReNet models trained on RainVehicleColor-24 and Rain100L are tested on real-rain images from a real-world rain-image dataset. Figure 12 and Figure 13 show that neither rain-removal model works well; however, PReNet2 is still better than PReNet1.

4.2.4. Comparison of Recognition Effects of Low- and High-Level Joint Tasks

In this section, we divided VehicleColor-24 into training, verification, and test sets at a ratio of 8:1:1, and fixed the confidence threshold at 0.5. All confidence scores were computed in Python; a minimal sketch of the 8:1:1 split is given below.
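The split can be produced as follows; the helper name and fixed seed are ours, for reproducibility of the sketch.

import random

def split_8_1_1(image_paths, seed=0):
    # Shuffle deterministically, then cut at 80% and 90% of the list,
    # giving the paper's 8:1:1 train/verification/test ratio.
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    a, b = int(0.8 * n), int(0.9 * n)
    return paths[:a], paths[a:b], paths[b:]  # train, verification, test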
To investigate the detection results of fine-grained vehicle-color recognition with Faster R-CNN, PReNet1 + Faster R-CNN, and PReNet2 + Faster R-CNN [38], we first used Faster R-CNN for target detection without deraining preprocessing; we then used Faster R-CNN for target detection after deraining by PReNet1 or PReNet2. Figure 14 shows that Faster R-CNN recognizes vehicle color on rain images with lower confidence, while it recognizes vehicle color on derained images with higher confidence. Furthermore, PReNet2 + Faster R-CNN performs better than PReNet1 + Faster R-CNN, since the former is trained on RainVehicleColor-24, which has more varied rain streaks and a much larger data size.
From Figure 14, Faster R-CNN, PReNet1 + Faster R-CNN, and PReNet2 + Faster R-CNN achieve around 70%, 70%, and 90% confidence, respectively, for white-vehicle target detection. Therefore, the RainVehicleColor-24 dataset proposed in this paper provides a better guarantee for visual tasks such as vehicle-color target detection.

4.3. LDVS Model Trained on Rain100L and RainVehicleColor-24

4.3.1. LDVS Network

The network framework of LDVS [26] is shown in Figure 15; it is mainly composed of dilated convolutions and lightweight attention modules. The main network contains an encoder with five feature-extraction modules plus a convolution operation, and a decoder; each feature-extraction module concatenates a dilated convolution with a lightweight attention module (CBAM). The rain image O is input into the network to extract feature maps, and the outputs are the rain pattern R and the clean background image B, where the clean image equals the input rain image O minus the feature map R. The loss function L is defined as:
$$L = -\,\mathrm{SSIM}(\hat{B}, B) + \alpha\,\|\hat{B} - B\|^2. \tag{6}$$
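Assuming the third-party pytorch-msssim package for the SSIM term, Eq. (6) can be sketched as below; the negative sign on SSIM (so that higher similarity lowers the loss) and the default weight alpha = 1.0 are our reading of the formula, not values stated in the paper.

import torch
from pytorch_msssim import ssim  # third-party SSIM implementation (assumed)

def ldvs_loss(b_hat, b, alpha=1.0):
    # Eq. (6): negative SSIM (to be maximized) plus a weighted
    # squared-error term; b_hat, b are N x C x H x W tensors in [0, 1].
    return -ssim(b_hat, b, data_range=1.0) \
           + alpha * torch.mean((b_hat - b) ** 2)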

4.3.2. Comparison on Synthetic Rain Images

Without loss of generality, we trained the LDVS1 and LDVS2 models with the LDVS method on Rain100L and RainVehicleColor-24, respectively. Figure 16 and Figure 17 show the tested synthetic rain images and compare the rain-removal results of the LDVS1 and LDVS2 models.
Figure 17 shows that LDVS performs better at rain removal when the test images are taken from the RainVehicleColor-24 dataset: there are almost no obvious residual rain streaks in images (c) or (f), and the background is relatively clear. Table 2 gives the test results on Rain100L, showing that the performance of LDVS2 is similar to that of LDVS1, while on the RainVehicleColor-24 dataset LDVS2 performs better than LDVS1.

4.3.3. Comparison of Real Rain Images

To further test the generalization of LDVS, the LDVS1 model pre-trained on the Rain100L dataset and the LDVS2 model pre-trained on the RainVehicleColor-24 dataset were used for rain removal on real images, with results shown in Figure 18. The qualitative results show that LDVS2, trained on RainVehicleColor-24, generalizes better than LDVS1, trained on Rain100L.
While both LDVS1 and LDVS2 remove rain well from synthetic rain images, Figure 18 shows that their effect on real-world images is less ideal, with a large number of residual rain streaks.

4.4. Object Detection Models Trained on VehicleColor-24 and RainVehicleColor-24

Four models, SSD1, Faster R-CNN1, SSD2, and Faster R-CNN2, are trained on VehicleColor-24 and RainVehicleColor-24, and all models are then tested on the VehicleColor-24 and RainVehicleColor-24 test sets. The test results are shown in Figure 19. SSD1 and Faster R-CNN1 can hardly recognize vehicle colors on rainy days, while SSD2 and Faster R-CNN2 can recognize vehicle colors in the rain images. For example, in Figure 19e the white vehicle is detected with 94% confidence by SSD2, and in Figure 19f it is detected with 100% confidence by Faster R-CNN2. Tested on the VehicleColor-24 subset, the results of the models are almost identical.
Table 3 shows the per-category average precision of the object-detection algorithms on the VehicleColor-24 and RainVehicleColor-24 test sets when trained on VehicleColor-24. Table 4 shows the corresponding results when the algorithms are trained on RainVehicleColor-24. Note that almost every color is classified more accurately by the models trained on RainVehicleColor-24.

5. Conclusions

In this paper, the RainVehicleColor-24 dataset is constructed by rendering rain images based on PS technology and the SyRaGAN algorithm. The dataset has a total of 40,300 rain images covering 125 rain patterns. The aim of constructing RainVehicleColor-24 is to train data-driven deep neural networks for specific vehicle-color recognition tasks. Specifically, RainVehicleColor-24 consists of two subsets: one contains 300 rain images rendered by Photoshop from the VehicleColor-24 database, and the other contains 40,000 rain vehicle images rendered by the SyRaGAN network from another 8000 vehicle images in VehicleColor-24. Extensive experiments show that when PReNet and LDVS are trained on the new RainVehicleColor-24 dataset, both the deraining task and the subsequent target recognition after deraining are improved. More specifically, when a model is designed for fine-grained vehicle-color recognition, its recognition accuracy improves in both good and rainy weather conditions after the model is fine-tuned on RainVehicleColor-24.
For future work, we will study low- and high-level joint tasks, building on the above work. We will focus on vehicle object detection and recognition in various adverse conditions, such as bad weather. We will also consider fusing fuzzy sets, rough sets, and overlap functions (see [39,40,41,42]) to extend the method of this paper. These studies will be critically beneficial in fields such as criminal investigation and traffic management law enforcement.

Author Contributions

Writing—original draft preparation, M.H.; experiments in Section 4.3, J.Y.; experiments in Section 4.2, C.W.; experiments in Section 4.4, Y.W.; review, J.F.; writing—review and editing, B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62071378), the Shaanxi Province International Science and Technology Cooperation Program (No. 2022KW-04), and the Xi’an Science and Technology Plan Project (No. 21XJZZ0072).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are openly available at [[email protected]].

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Chen, P.; Bai, X.; Liu, W. Vehicle color recognition on urban road by feature context. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2340–2346. [Google Scholar]
  2. Tariq, A.; Khan, M.Z.; Khan, M.U.G. Real Time Vehicle Detection and Colour Recognition using tuned Features of Faster-RCNN. In Proceedings of the 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA), Riyadh, Saudi Arabia, 6–7 April 2021; pp. 262–267. [Google Scholar]
  3. Jeong, Y.; Park, K.H.; Park, D. Homogeneity patch search method for voting-based efficient vehicle color classification using front-of-vehicle image. Multimed. Tools Appl. 2019, 78, 28633–28648. [Google Scholar]
  4. Hu, M.; Bai, L.; Li, Y.; Zhao, S.R.; Chen, E.H. Vehicle 24-Color Long Tail Recognition Based on Smooth Modulation Neural Network with Multi-layer Feature Representation. arXiv 2021, arXiv:2107.09944. [Google Scholar]
  5. Hu, M.; Wu, Y.; Song, Y.; Yang, J.; Zhang, R.; Wang, H.; Meng, D. The integrated evaluation and review of single image rain removal based. J. Image Graph. 2022, 10, 11834. [Google Scholar]
  6. Xu, C.D.; Zhao, X.R.; Jin, X.; Wei, X.S. Exploring categorical regularization for domain adaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11724–11733. [Google Scholar]
  7. Vs, V.; Gupta, V.; Oza, P.; Sindagi, V.A.; Patel, V.M. Mega-cda: Memory guided attention for category-aware unsupervised domain adaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4516–4526. [Google Scholar]
  8. Xu, M.; Wang, H.; Ni, B.; Tian, Q.; Zhang, W. Cross-domain detection via graph-induced prototype alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12355–12364. [Google Scholar]
  9. Zhang, Y.; Wang, Z.; Mao, Y. Rpn prototype alignment for domain adaptive object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12425–12434. [Google Scholar]
  10. Li, W.; Liu, X.; Yuan, Y. SIGMA: Semantic-complete Graph Matching for Domain Adaptive Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–23 June 2022; pp. 5291–5300. [Google Scholar]
  11. Shan, Y.; Lu, W.F.; Chew, C.M. Pixel and feature level based domain adaptation for object detection in autonomous driving. Neurocomputing 2019, 367, 31–38. [Google Scholar]
  12. Tilakaratna, D.S.; Watchareeruetai, U.; Siddhichai, S.; Natcharapinchai, N. Image analysis algorithms for vehicle color recognition. In Proceedings of the 2017 International Electrical Engineering Congress (iEECON), Pattaya, Thailand, 8–10 March 2017; pp. 1–4. [Google Scholar]
  13. Kim, T.; Jeong, M.; Kim, S.; Choi, S.; Kim, C. Diversify and match: A domain adaptive representation learning paradigm for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12456–12465. [Google Scholar]
  14. Dong, H.; Pan, J.; Xiang, L.; Hu, Z.; Zhang, X.; Wang, F.; Yang, M.H. Multi-scale boosted dehazing network with dense feature fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2157–2167. [Google Scholar]
  15. Sindagi, V.A.; Oza, P.; Yasarla, R.; Patel, V.M. Prior-based domain adaptive object detection for hazy and rainy conditions. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 763–780. [Google Scholar]
  16. Wang, T.; Zhang, X.; Yuan, L.; Feng, J. Few-shot adaptive faster r-cnn. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7173–7182. [Google Scholar]
  17. Li, Y.; Tan, R.T.; Guo, X.; Lu, J.; Brown, M.S. Rain streak removal using layer priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2736–2744. [Google Scholar]
  18. Yang, W.; Tan, R.T.; Feng, J.; Liu, J.; Guo, Z.; Yan, S. Deep joint rain detection and removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1357–1366. [Google Scholar]
  19. Hu, X.; Fu, C.W.; Zhu, L.; Heng, P.A. Depth-attentional features for single-image rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8022–8031. [Google Scholar]
  20. Tremblay, M.; Halder, S.S.; De Charette, R.; Lalonde, J.F. Rain rendering for evaluating and improving robustness to bad weather. Int. J. Comput. Vis. 2021, 129, 341–360. [Google Scholar]
  21. Wei, Y.; Zhang, Z.; Wang, Y.; Xu, M.; Yang, Y.; Yan, S.; Wang, M. Deraincyclegan: Rain attentive cyclegan for single image deraining and rainmaking. IEEE Trans. Image Process. 2021, 30, 4788–4801. [Google Scholar]
  22. Wang, H.; Yue, Z.; Xie, Q.; Zhao, Q.; Zheng, Y.; Meng, D. From rain generation to rain removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14791–14801. [Google Scholar]
  23. Wang, T.; Yang, X.; Xu, K.; Chen, S.; Zhang, Q.; Lau, R.W. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12270–12279. [Google Scholar]
  24. Choi, J.; Kim, D.H.; Lee, S.; Lee, S.H.; Song, B.C. Synthesized rain images for deraining algorithms. Neurocomputing 2022, 492, 421–439. [Google Scholar]
  25. Ren, D.; Zuo, W.; Hu, Q.; Zhu, P.; Meng, D. Progressive image deraining networks: A better and simpler baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3937–3946. [Google Scholar]
  26. Hu, M.; Yang, J.; Ling, N.; Liu, Y.; Fan, J. Lightweight single image deraining algorithm incorporating visual saliency. IET Image Process. 2022, 16, 3190–3200. [Google Scholar]
  27. Li, S.; Ren, W.; Zhang, J.; Yu, J.; Guo, X. Single image rain removal via a deep decomposition–composition network. Comput. Vis. Image Underst. 2019, 186, 48–57. [Google Scholar]
  28. Wang, Y.; Ma, C.; Zeng, B. Multi-decoding deraining network and quasi-sparsity based training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13375–13384. [Google Scholar]
  29. Garg, K.; Nayar, S.K. Photorealistic rendering of rain streaks. ACM Trans. Graph. 2006, 26, 996–1002. [Google Scholar]
  30. Qian, R.; Tan, R.T.; Yang, W.; Su, J.; Liu, J. Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2482–2491. [Google Scholar]
  31. Jin, J.; Fatemi, A.; Lira, W.M.P.; Yu, F.; Leng, B.; Ma, R.; Mahdavi-Amiri, A.; Zhang, H. Raidar: A rich annotated image dataset of rainy street scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2951–2961. [Google Scholar]
  32. Chen, D.Y.; Chen, C.C.; Kang, L.W. Visual depth guided color image rain streaks removal using sparse coding. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1430–1455. [Google Scholar]
  33. Luo, Y.; Xu, Y.; Ji, H. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3397–3405. [Google Scholar]
  34. Fu, X.; Huang, J.; Ding, X.; Liao, Y.; Paisley, J. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Trans. Image Process. 2017, 26, 2944–2956. [Google Scholar]
  35. Fu, X.; Liang, B.; Huang, Y.; Ding, X.; Paisley, J. Lightweight pyramid networks for image deraining. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1794–1807. [Google Scholar] [PubMed]
  36. Xue, P.; He, H. Research of Single Image Rain Removal Algorithm Based on LBP-CGAN Rain Generation Method. Math. Probl. Eng. 2021, 2021, 8865843. [Google Scholar]
  37. Wang, H.; Xie, Q.; Wu, Y.; Zhao, Q.; Meng, D. Single image rain streaks removal: A review and an exploration. Int. J. Mach. Learn. Cybern. 2020, 11, 853–872. [Google Scholar]
  38. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar]
  39. Zhang, X.; Wang, J.; Zhan, J.; Dai, J. Fuzzy measures and Choquet integrals based on fuzzy covering rough sets. IEEE Trans. Fuzzy Syst. 2021, 30, 2360–2374. [Google Scholar]
  40. Sheng, N.; Zhang, X. Regular partial residuated lattices and their filters. Mathematics 2022, 10, 2429. [Google Scholar] [CrossRef]
  41. Wang, J.; Zhang, X. A novel multi-criteria decision-making method based on rough sets and fuzzy measures. Axioms 2022, 11, 275. [Google Scholar] [CrossRef]
  42. Liang, R.; Zhang, X. Interval-valued pseudo overlap functions and application. Axioms 2022, 11, 216. [Google Scholar] [CrossRef]
Figure 1. Synthesis process of rain image using Photoshop software [5].
Figure 2. Samples from VehicleColor-24.
Figure 3. Examples of the subset rendered by PS. The three images are rendered by PS from simple, moderately complex, and complex scenes, respectively. The parameters (noise, angle, distance, Gaussian blur) were set to (40%, 20, 50, 0.5), (106%, 87, 48, 0.3), and (146%, 53, 33, 0.5), respectively, for rendering the three rain images.
Figure 4. SyRaGAN network structure overview [24]. M is the feature-mapping module, E is the encoder, and D are the discriminators. The inputs are the clean background image x_c and the rain image x_r, and the outputs are the synthesized rain images x_{sr1} and x_{sr2}. Further, S_z and S_r are the rain patterns extracted by the network.
Figure 5. Rendered rain images with a single vehicle. (a) Original clean image; (b–f) rendered rain images with 5 kinds of different rain-streak patterns.
Figure 6. Rendered rain images with many vehicles. (a) Original clean image; (b–f) rendered rain images with 5 kinds of different rain-streak patterns.
Figure 7. Rendered rain vehicle images under gray skies (the first image is the original clean background image, and the rest are rendered rain images with five different rain-streak patterns).
Figure 8. Illustrations of some samples from RainVehicleColor-24.
Figure 9. Illustration of PReNet [25]. The input O is the rain image; the output B_T is the clean background image produced at stage T.
Figure 10. Test results of PReNet1 and PReNet2 on two synthetic rain images from Rain100L. PReNet1 and PReNet2 denote the models trained on the Rain100L and RainVehicleColor-24 training subsets, respectively.
Figure 11. Test results on synthetic rain images from RainVehicleColor-24. PReNet1 and PReNet2 denote the models trained on the training subsets of Rain100L and RainVehicleColor-24, respectively.
Figure 12. Test result on a real-rain image from RealData. PReNet1 and PReNet2 denote the models trained on the Rain100L and RainVehicleColor-24 training subsets, respectively.
Figure 13. Test results on two real-rain images containing vehicles from RIS. PReNet1 and PReNet2 denote the models trained on Rain100L and RainVehicleColor-24, respectively.
Figure 14. Object-detection test results of Faster R-CNN and PReNet1/2 + Faster R-CNN on two synthetic rain images with vehicles from the RainVehicleColor-24 dataset. Each subcaption gives the corresponding object-detection method with its confidence value in parentheses.
Figure 15. Illustration of LDVS [26].
Figure 16. Test results of LDVS1 on synthetic rain images from Rain100L.
Figure 17. Test results of LDVS2 on rain images from the RainVehicleColor-24 test subset. The first column is the original rainy vehicle image, the second is the clean background image (GT), and the third is the result after LDVS2. The first row shows a simple scene, the second a medium-complexity scene, and the last a complex scene.
Figure 18. Test results of LDVS on real images from RealData. The first column is the original real rainy image, the second is the result after LDVS1, and the third is the result after LDVS2.
Figure 19. Object-detection test results on rain images from RainVehicleColor-24. (a,d) Rain images; (b,c) detection results of SSD1 and Faster R-CNN1, trained on VehicleColor-24; (e,f) detection results of SSD2 and Faster R-CNN2, trained on RainVehicleColor-24.
Table 1. Comparing PSNR and SSIM of PReNet1 and PReNet2 on Rain100L and RainVehicleColor-24.

Dataset                      PReNet1 (PSNR / SSIM)    PReNet2 (PSNR / SSIM)
Rain100L                     32.67 / 0.965            32.44 / 0.945
RainVehicleColor-24          31.62 / 0.955            33.51 / 0.973
Table 2. Comparing PSNR and SSIM of LDVS1 and LDVS2 on Rain100L and RainVehicleColor-24.

Dataset                      LDVS1 (PSNR / SSIM)      LDVS2 (PSNR / SSIM)
Rain100L                     33.56 / 0.959            33.12 / 0.960
RainVehicleColor-24          31.23 / 0.951            34.34 / 0.960
Table 3. Comparison of the per-category average precision of object-detection algorithms on the VehicleColor-24 (VC-24) and RainVehicleColor-24 (RVC-24) test sets. All object-detection algorithms are trained on VehicleColor-24.

Category        SMNN-MSFF1          Faster R-CNN1       SSD1
                VC-24    RVC-24     VC-24    RVC-24     VC-24    RVC-24
White           0.98     0.64       0.84     0.80       0.96     0.74
Black           0.97     0.52       0.82     0.31       0.95     0.38
Orange          0.98     0.85       0.81     0.71       0.96     0.81
Silver gray     0.96     0.30       0.77     0.44       0.91     0.86
Grass green     0.98     0.82       0.70     0.61       0.96     0.96
Dark gray       0.94     0.30       0.66     0.17       0.84     0.29
Dark red        0.98     0.63       0.78     0.24       0.93     0.44
Gray            0.89     0.06       0.18     0.13       0.54     0.13
Red             0.96     0.65       0.60     0.20       0.88     0.41
Cyan            0.97     0.82       0.75     0.33       0.92     0.46
Champagne       0.97     0.17       0.63     0.29       0.81     0.25
Dark blue       0.96     0.39       0.66     0.12       0.86     0.36
Blue            0.97     0.59       0.73     0.10       0.87     0.69
Dark brown      0.97     0.09       0.45     0.02       0.71     0.11
Brown           0.88     0.36       0.30     0.13       0.58     0.27
Yellow          0.97     0.66       0.51     0.13       0.79     0.18
Lemon yellow    0.99     0.88       0.87     0.84       0.93     0.70
Dark orange     0.96     0.67       0.65     0.18       0.78     0.13
Dark green      0.94     0.28       0.38     0.08       0.58     0.00
Red orange      0.99     0.33       0.24     0.00       0.61     0.00
Earthy yellow   0.97     0.50       0.62     0.50       0.74     0.10
Green           0.93     0.13       0.61     0.33       0.74     0.00
Pink            0.94     0.66       0.50     0.33       0.71     0.17
Purple          0.80     0.00       0.00     0.00       0.19     0.00
mAP             94.96%   47.22%     58.59%   29.19%     78.13%   30.23%
Table 4. Comparison of the per-category average precision of object-detection algorithms on the VehicleColor-24 (VC-24) and RainVehicleColor-24 (RVC-24) test sets. All object-detection algorithms are trained on RainVehicleColor-24.

Category        SMNN-MSFF2          Faster R-CNN2       SSD2
                VC-24    RVC-24     VC-24    RVC-24     VC-24    RVC-24
White           0.62     0.60       0.94     0.96       0.94     0.95
Black           0.61     0.69       0.82     0.69       0.92     0.93
Orange          0.69     0.77       0.92     0.90       0.95     0.95
Silver gray     0.48     0.31       0.44     0.81       0.85     0.86
Grass green     0.60     0.82       0.88     0.93       0.94     0.96
Dark gray       0.47     0.43       0.57     0.69       0.71     0.67
Dark red        0.36     0.48       0.73     0.79       0.83     0.88
Gray            0.18     0.31       0.12     0.41       0.35     0.31
Red             0.37     0.44       0.62     0.56       0.79     0.76
Cyan            0.42     0.62       0.71     0.82       0.87     0.87
Champagne       0.33     0.28       0.46     0.74       0.65     0.73
Dark blue       0.60     0.52       0.66     0.79       0.78     0.75
Blue            0.29     0.56       0.44     0.69       0.87     0.69
Dark brown      0.38     0.35       0.45     0.18       0.60     0.47
Brown           0.47     0.35       0.34     0.10       0.33     0.34
Yellow          0.51     0.35       0.94     0.83       0.72     0.92
Lemon yellow    0.32     0.57       0.95     0.99       1.00     0.75
Dark orange     0.41     0.32       0.52     0.28       0.11     0.47
Dark green      0.62     0.59       0.10     0.18       0.36     0.34
Red orange      0.52     0.38       0.66     0.07       0.29     0.52
Earthy yellow   1.00     0.68       0.23     0.45       0.78     0.28
Green           0.59     0.18       0.47     0.85       0.55     0.97
Pink            0.03     0.84       0.02     0.54       1.00     0.52
Purple          0.99     0.22       0.07     0.03       0.00     0.06
mAP             49.14%   48.58%     55.13%   60.65%     70.84%   66.33%
