Performance Evaluation of Region-Based Convolutional Neural Networks Toward Improved Vehicle Taillight Detection

Abstract: Increasingly serious traffic jams and traffic accidents pose threats to the social economy and human life. Lamp semantics are a major way for vehicles to transmit driving-behavior information to one another. Detecting and recognizing vehicle taillights makes it possible to acquire and understand taillight semantics, which is of great significance for realizing multi-vehicle behavior interaction and assisted driving. Detecting taillights and identifying taillight semantics on real traffic roads during the day is a challenge. The main content of this paper is to establish a neural network to detect vehicles and to recognize the taillights of the preceding vehicle based on image processing. First, the outlines of the preceding vehicles are detected and extracted using convolutional neural networks. Then, the taillight areas are extracted in the Hue-Saturation-Value (HSV) color space, and taillight pairs are detected by correlations of histograms, color and position. The taillight states are then identified based on the histogram feature parameters of the taillight image. The detected taillight state of the preceding vehicle is prompted to the driver to reduce traffic accidents caused by untimely judgement of the driving intention of the preceding vehicle. The experimental results show that this method can accurately identify taillight status during the daytime and can effectively reduce confused judgements caused by light interference.


Introduction
With the explosive growth of the number of vehicles, many social problems have become increasingly prominent, such as traffic accidents, traffic congestion and deterioration of the traffic environment. According to traffic accident data released by relevant departments, the proportion of rear-end collisions in traffic accidents reached 30-40% [1]. Frequent traffic accidents have caused huge losses to people's lives and property. Recently, many experts and scholars in the field of active vehicle safety have studied technologies for detecting and identifying abnormal driving behavior of preceding vehicles. An Intelligent Transportation System (ITS) integrates control systems, information systems, communication systems and computer network technology. It is real-time, accurate and wide in scale; it provides a way to ease urban traffic congestion and improve driver safety, and it can also reduce emission pollution. In general, algorithms for taillight detection can be divided into the following categories: feature-based methods, machine-learning-based methods, and multi-sensor fusion methods.
(1) Feature-based methods. Rezaei et al. used chain coding to maintain the original accuracy in taillight outline detection and analyzed the geometric rules of the taillight outline, using virtual symmetry detection to locate the taillight position [7]. Based on cognitive theory, Weis et al. decomposed a video input stream into color, shape and other features closely combined with an image model, and created the pixel values of interest in taillight areas according to the characteristics of taillights and the atmospheric effects caused by external lighting and weather conditions [8]. Jen et al. analyzed the color and brightness of the taillight area to find invariant characteristics of the light-scattering areas; they then trained a classifier on the dynamic frequency response and the size of the light-scattering region to detect the brake light signal. However, this method did not consider the influence of illumination variation on light scattering in the taillight area [9]. Some scholars have extracted the correct taillight status by calculating the correlation of regional brightness values of possible taillight areas of vehicles on the road. They used codebook theory to realize all-weather tracking and detection of taillights, and the detection and identification of brake lights, left-turn lights, and right-turn lights [10][11][12]. Algorithms based on a priori information use relatively specific features to detect the taillight. The advantage of these methods is fast detection that can be applied in a dynamic background. The disadvantage is that the relatively low-level features reduce the ability to distinguish the target, which increases false detections.
(2) Machine-learning-based methods. Ming et al. selected Gabor filters with 8 directions and 5 different scales to extract features of vehicle taillight images and trained a BP neural network to extract the taillight distribution characteristics [13]. This method needs to further improve the matching effect of the taillights and to solve the problem of poor detection of red vehicles. Wang et al. used a vehicle's rear appearance image to learn a "brake light mode" through a multi-layer perceptron neural network on a large database and trained a deep classifier to judge whether the taillight was in the normal or braking state [14]. Wang et al. proposed a method to optimize Faster RCNN for vehicle detection by improving multi-shape receptive field generation, anchor generation optimization and ROI assignment, improving detection speed and accuracy [15]. Zhang et al. proposed a single deep neural network for vehicle detection in urban traffic surveillance. Different feature extractors were used for target classification, which improves detection speed and accuracy; a feature pyramid was used to accurately generate the vehicle bounding box and classify the vehicle category [16]. Machine-learning-based algorithms generally use feature operators and classifiers, where the feature operators are manually designed, usually for a single scene. The detection results therefore change when the scene changes.
(3) Multi-sensor fusion methods. Jin et al. combined millimeter-wave radar and machine vision. This method can effectively identify the preceding vehicle at night: by extracting the vehicle characterization features and fusing information using D-S evidence theory, the vehicle taillights can be detected. However, this method is affected when the target is occluded or overlaps with another object [17]. Manuel et al. proposed a feature-based method for on-road vehicle detection in urban traffic and determined a rough upper bound on shadow intensity, which reduces the false-positive rate [18].
In order to solve the problem that taillight detection is greatly affected by complex traffic environment, this paper proposes a method of combining vehicle detection and taillight detection. The vehicle detection of the preceding vehicle is completed using the Faster R-CNN network. On this basis, the taillight area is extracted in the HSV color space, and the judgment of the taillight lighting state is completed.

Faster RCNN Model
The traditional method of detecting taillights is to detect them directly in the image using feature matching or image processing techniques. This leaves more interference in the image, which is not conducive to taillight detection. Therefore, this paper introduces vehicle detection first; performing taillight detection after vehicle detection is completed effectively reduces the interference in the image.
In the process of vehicle detection, factors such as image quality, viewing angle and illumination may degrade the image or deform the target scale. The vehicle detection method based on Faster RCNN can effectively eliminate such interference and has relatively strong generalization ability. Neural networks have long been one of the most popular research topics in the field of artificial intelligence. A neural network models the process of information processing by abstractly simulating the human brain and establishes probabilistic or classification models based on different ways of connecting neurons.
In recent years, research on the structure of the CNN has deepened. The CNN is widely used in various fields, such as behavior recognition, pedestrian detection and human posture recognition [19,20]. Recently, the CNN has also been applied to other artificial intelligence tasks such as speech recognition and natural language processing.
According to current research, a CNN can learn high-level image features from massive amounts of image data. AlexNet achieved excellent results in large-scale image classification; it was designed in 2012 by the ImageNet competition winners Hinton and Krizhevsky [21]. RCNN, based on region feature extraction, transforms the object detection problem through the region proposal method with the help of the good feature extraction and classification performance of the CNN. This was a milestone in applying the CNN method to the object detection problem [22]. A Fully Convolutional Network (FCN) achieves pixel-level classification of images, thus resolving the problem of semantic-level image segmentation [23].
CNN is a powerful feature extraction method. Szegedy et al. tried to treat object detection as a regression problem, but the detection effect on VOC2007 was average [24]. Sande et al. used the candidate region method to solve the detection problem and obtained a good effect on VOC2007 [25]. Fast RCNN was proposed in 2015; it maps candidate regions to the last feature map of the CNN, which removes many repeated computations in RCNN and improves the speed of object detection [26]. In the same year, Faster RCNN was proposed, whose basic structure includes the convolutional layers, the Region Proposal Network (RPN) layer, the ROI pooling layer and the classification layer [27], as shown in Figure 1. It further improves the speed and accuracy of object detection by sharing features between the RPN and Fast RCNN. Faster RCNN has been one of the mainstream frameworks for object detection since late 2015; although Mask RCNN and other improved frameworks have been developed since then, the basic structure has not changed much and Faster RCNN still offers the best accuracy in practical applications.
Appl. Sci. 2019, 9

The Construction of Faster RCNN
The CNN layer is mainly composed of the convolution layer, the pooling layer, and the ReLU layer. The convolutional layer uses specific kernels to extract features from the target image; the target image is then sorted, identified, and predicted using these extracted features. Assume that the size of the input image is M × N, the size of the filter kernel is K × K, the stride is S, and the number of padding pixels is P. Then the size of the feature map output by the convolutional layer is calculated as:

((M − K + 2P)/S + 1) × ((N − K + 2P)/S + 1).

In this paper, the parameters of the convolutional layer are defined as kernel_size = 3 and pad = 1. The padded image becomes (M + 2) × (N + 2), the convolution is applied with a 3 × 3 filter kernel, and the output size is M × N. This ensures that the convolutional layers in the CNN structure do not change the sizes of the input and output matrices. The pooling layer is usually connected after the convolutional layer to aggregate the features and reduce computational complexity by reducing dimensions. First, the proposal feature maps must go through the fully connected layer and the softmax layer to calculate the classification proposals and output the detected probability vector cls_prob. Next, the position offset bbox_pred of each proposal is obtained by bounding box regression, which is used to regress a more accurate object detection boundary. Assuming again an input of size M × N, a filter kernel of size K × K, stride S, and padding P, the size of the feature map output by the pooling layer is calculated with the same formula. In this paper, the parameters of the pooling layer are defined as kernel_size = 2 and stride = 2, so a matrix of size M × N becomes (M/2) × (N/2) after the pooling operation.

The RPN network is composed of two parts. One part classifies anchors using the softmax classifier. After the classification, the foreground (the object of detection) and the background can be obtained. The other part calculates the bounding box regression offsets of the anchors, which yields more accurate proposals. The proposal layer, integrating foreground anchors and bounding box regression offsets, is used to obtain the boundary candidate boxes for the vehicle target. Unsuitable proposals are excluded, and the RPN then achieves target localization through the proposal layer.

The anchors are a group of rectangles generated by the RPN. The four values (x1, y1, x2, y2) in each row of the matrix represent the coordinates of the top left corner and the bottom right corner of the rectangular frame.
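The size calculations above can be sketched with a small helper; this is a minimal illustration of the formula, and the 600 × 800 input size is only an example, not a value from the paper.

```python
def feature_map_size(m, n, k, s, p):
    """Output size of a convolution/pooling layer: ((M - K + 2P) / S + 1) per axis."""
    return ((m - k + 2 * p) // s + 1, (n - k + 2 * p) // s + 1)

# Convolutional layer used in the paper: 3x3 kernel, stride 1, padding 1
# keeps the spatial size unchanged.
print(feature_map_size(600, 800, k=3, s=1, p=1))  # (600, 800)

# Pooling layer used in the paper: 2x2 kernel, stride 2, no padding
# halves each dimension.
print(feature_map_size(600, 800, k=2, s=2, p=0))  # (300, 400)
```

The integer division reflects the usual floor behavior when the stride does not divide the image size evenly.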

There are nine rectangular frames in the matrix with different ratios of length to width; the anchors therefore introduce a multi-scale method for target detection. These nine anchors traverse the CNN feature maps obtained after calculation. Each point uses these nine anchors as initial detection frames, but the results obtained are inaccurate.
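The nine anchors per point can be sketched as below. The paper does not state its scales and aspect ratios, so the values here (scales 128/256/512, ratios 0.5/1/2) are the defaults from the original Faster R-CNN work and should be read as assumptions.

```python
def generate_anchors(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return nine (x1, y1, x2, y2) anchors centered at (cx, cy).

    Each anchor has area scale**2 and height/width ratio `ratio`,
    giving 3 scales x 3 ratios = 9 initial detection frames per point.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / r ** 0.5   # width shrinks as the ratio grows
            h = s * r ** 0.5   # height grows, keeping area == s * s
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

boxes = generate_anchors(320, 240)
print(len(boxes))  # 9
```

Sliding this generator over every position of the feature map produces the dense anchor matrix the RPN classifies and regresses.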
The bounding box regression image example is shown in Figure 2. The correct Ground Truth (GT) data is shown by the external thick boundary, and the extracted foreground anchors are shown by the internal thin boundary. In this process, the thin boundary detection range should be adjusted slightly so that the identified foreground anchors are closer to GT. Generally, the window is represented by a four-dimensional vector (x, y, w, h), whose components represent the coordinate position of the central point of the window and its width and height, respectively. Assume boundary A represents the original foreground anchors. This process is to find a mapping so that boundary A can be mapped to a regression window GT′ that is closer to the actual window GT.
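The mapping from A to GT′ can be sketched with the standard Faster R-CNN offset parameterization (a translation of the center normalized by the anchor size, plus a log-space scale change). The boxes below are hypothetical values for illustration, not data from the paper.

```python
import math

def bbox_regression_targets(anchor, gt):
    """Offsets (tx, ty, tw, th) mapping anchor (x, y, w, h) toward the GT box."""
    ax, ay, aw, ah = anchor
    gx, gy, gw, gh = gt
    tx = (gx - ax) / aw          # normalized shift of the center
    ty = (gy - ay) / ah
    tw = math.log(gw / aw)       # log-space change of width
    th = math.log(gh / ah)       # log-space change of height
    return tx, ty, tw, th

def apply_offsets(anchor, t):
    """Apply predicted offsets to an anchor, producing the regressed window."""
    ax, ay, aw, ah = anchor
    tx, ty, tw, th = t
    return ax + tx * aw, ay + ty * ah, aw * math.exp(tw), ah * math.exp(th)

a = (100.0, 100.0, 50.0, 40.0)   # hypothetical foreground anchor
g = (110.0, 95.0, 60.0, 44.0)    # hypothetical ground truth
print(apply_offsets(a, bbox_regression_targets(a, g)))  # recovers the GT box
```

During training the network regresses these four offsets; at test time applying them to each foreground anchor yields the refined proposals.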

Comparison of the Detection Performance of Different Models
This paper mainly tests vehicles on the road and produces an image data set of 2500 vehicles collected in a real road environment. The camera resolution of the acquisition equipment is 1280 × 720. Due to the limitation of data workload, the data set mainly includes sedans and sport utility vehicles. We then use an image labelling tool to mark the ROI in the image dataset, with each image containing 1 to 2 marked target vehicles, as shown in Figure 3. Figure 3 shows the marked vehicles and the target position of the vehicle mark. The small data set allows the RCNN training process to be established faster. We randomly select 1500 images as the training set and 1000 images as the test set. Because this vehicle detection data set is small, the data is supplemented and improved by combining the homemade dataset with a subset of a standard dataset. Among public driving data sets, the BDD100K dataset from Berkeley contains a variety of weather data, with approximately 100,000 images of 1280 × 720 containing vehicle targets. From it, 6000 images are selected for the training set and 4000 images for the test set. Therefore, there are 7500 images in the training set and 5000 images in the test set.
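The random 1500/1000 split of the homemade set can be sketched as follows; the file names are hypothetical and the fixed seed is only there to make the split reproducible.

```python
import random

def split_dataset(image_paths, n_train, seed=0):
    """Randomly split a list of images into a training set and a test set."""
    rng = random.Random(seed)     # fixed seed: the same split every run
    shuffled = image_paths[:]     # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

paths = [f"img_{i:04d}.jpg" for i in range(2500)]  # hypothetical file names
train, test = split_dataset(paths, n_train=1500)
print(len(train), len(test))  # 1500 1000
```

The same helper applied to the 10,000 BDD100K images with n_train=6000 reproduces the combined 7500/5000 split described above.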
To verify the detection capability of the Faster RCNN model, three representative vehicle detection methods based on CNN are selected in this paper: the CNN, the RCNN and the Fast RCNN. The different approaches are presented in Table 1.
In this paper, the training sets are used to train the various models, and the test sets are used to evaluate them, as shown in Table 2. The accuracy rate is calculated as: correct detections/(correct detections + false detections). The recall rate is calculated as: correct detections/(correct detections + missed detections). The experimental results show that the principle and mechanism of each method lead to different experimental results. The sliding window method of the CNN obtains features at different image positions, but this exhaustive search is inefficient. RCNN uses a selective search algorithm to extract candidate boundaries and combines a Support Vector Machine (SVM) to classify the extracted regions, giving a better extraction effect. Compared with RCNN, Fast RCNN also uses a selective search algorithm to extract candidate boundaries, but it employs the softmax classifier, which leads to a better classification result because it introduces an inter-class competition mechanism. Compared with Fast RCNN, Faster RCNN uses the RPN to extract candidate boundaries and omits the selective search algorithm while achieving end-to-end training. Faster RCNN shares convolutional features between the RPN and Fast RCNN, and its detection time averages about 94 ms.
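The two metrics defined above can be computed directly from detection counts. The counts below are hypothetical, for illustration only.

```python
def accuracy_rate(correct, false_det):
    """Accuracy rate as defined in the paper: correct / (correct + false)."""
    return correct / (correct + false_det)

def recall_rate(correct, missed):
    """Recall rate as defined in the paper: correct / (correct + missed)."""
    return correct / (correct + missed)

# Hypothetical counts for one test run.
print(round(accuracy_rate(940, 60), 3))  # 0.94
print(round(recall_rate(940, 110), 3))   # 0.895
```

Note the "accuracy rate" here is what is usually called precision: false detections lower it, while missed detections lower the recall rate.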

Detection Performance with Different Scenes
The standard sample set contains images under different weather conditions, including 53,535 sunny-day, 7125 rainy-day, 7888 snowy-day, and 181 foggy-day images. This article selects images from several weather conditions in the standard sample set; this partial standard data is combined with the self-made data to form the test set of this paper. In the test set, there are 1272 images for sunny days, 537 for foggy days, 279 for snowy days, and 412 for rainy days. The number of images under different weather conditions is shown in Table 3.
From the test set, 150 frames of images from sunny, cloudy, foggy and rainy days were selected for testing. The test data is shown in Table 4. In different road environments, the trained neural network can detect vehicles well. False and missed detections are mainly caused by factors such as excessive light intensity, complex road backgrounds, and excessive distance from the preceding vehicle. The vehicle detection results are shown in Figure 4. Correct detection results are shown in Figure 4a,b; Figure 4c shows a false detection, and Figure 4d shows a missed detection.

Taillight Detection and Light Signal Identification Based on Image Processing
This chapter completes the taillight detection and light-signal recognition for the vehicle. By recognizing the taillight signals, the driving intention of the preceding car can be effectively understood. Under normal driving conditions on urban roads, the most frequently used taillights are the brake lights and turn signals. This paper designs approaches for taillight detection based on the correlation among three parameters, and identifies the taillight state from the histogram feature parameters of the two taillight states. The procedure of this method is shown in Figure 5.


Color Space Conversion
Due to the complex and varying lighting conditions during the day, it is difficult to process rapidly changing color information by simply using R-channel thresholds in the RGB color model to extract the taillight regions. Detecting the taillights with the R-channel threshold results in inaccurate segmentation of the taillight area for red vehicles, as shown in Figure 6.

Image threshold segmentation is therefore performed on the HSV color space image. Segmentation uses a specific threshold: a pixel whose value is greater than or equal to the threshold is judged to belong to the target object, and a pixel whose value is smaller than the threshold is excluded from the target.
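The HSV thresholding step can be sketched as below with the standard library's colorsys conversion. The hue, saturation and value thresholds here are illustrative assumptions, not the paper's tuned values; red sits at both ends of the hue circle, so both ends must be tested.

```python
import colorsys

def taillight_mask(rgb_image, s_min=0.4, v_min=0.3):
    """Binary mask of red pixels in HSV space.

    Red taillights map to a hue near 0 (or near 1.0, since hue wraps
    around), with sufficient saturation and value. Thresholds are
    illustrative only.
    """
    mask = []
    for row in rgb_image:
        mask_row = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            is_red = (h <= 0.05 or h >= 0.95) and s >= s_min and v >= v_min
            mask_row.append(1 if is_red else 0)
        mask.append(mask_row)
    return mask

# 1x3 test image: bright red, dim gray, saturated red-orange.
img = [[(230, 20, 20), (90, 90, 90), (250, 40, 10)]]
print(taillight_mask(img))  # [[1, 0, 1]]
```

Working in HSV separates chromatic content (hue, saturation) from brightness (value), which is why this segmentation is more stable under daytime lighting changes than a raw R-channel threshold.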
The Sobel operator is then selected for edge detection in the taillight area. The image is denoised as much as possible and the internal holes are narrowed to reconnect adjacent areas. A morphological closing operation is applied to eliminate narrow discontinuities and small voids in the taillight region, and a morphological opening operation is applied to smooth the image boundary. The detection result is shown in Figure 8.
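The closing and opening steps can be sketched on a binary mask; a 3×3 structuring element is assumed here (the paper does not state its element size), and the blob below is a toy example.

```python
def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if any(
                mask[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w) else 0
    return out

def erode(mask):
    """Binary erosion with a 3x3 structuring element (border counts as 0)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 1 if all(
                mask[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
    return out

def close_then_open(mask):
    """Closing (dilate, erode) fills small voids; opening (erode, dilate) smooths."""
    closed = erode(dilate(mask))
    return dilate(erode(closed))

blob = [[1 if 1 <= y <= 3 and 1 <= x <= 3 else 0 for x in range(5)]
        for y in range(5)]
blob[2][2] = 0                        # simulated hole inside the region
print(close_then_open(blob)[2][2])    # 1: the hole has been filled
```

In practice a library routine (e.g. an OpenCV morphologyEx call) would replace these loops; the sketch only makes the dilate/erode ordering of the two operations explicit.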

Color Space Conversion
Due to the complex and varying lighting conditions during the day, it is difficult to process rapidly changing color information by simply using the R channel thresholds in the RGB color model to extract the taillight regions. Detecting the taillights with the R-channel threshold results in inaccurate segmentation of the taillight area for red vehicles, as shown in Figure 6.

Color Space Conversion
Due to the complex and varying lighting conditions during the day, it is difficult to process rapidly changing color information by simply using the R channel thresholds in the RGB color model to extract the taillight regions. Detecting the taillights with the R-channel threshold results in inaccurate segmentation of the taillight area for red vehicles, as shown in Figure 6.  The image threshold segmentation is performed on the HSV color space image. Image segmentation is performed using a specific threshold. A pixel in which the gradation value is greater than or equal to a certain threshold is judged to belong to the target object, and a pixel whose gradation value is smaller than the threshold is excluded from the target.
Then select Sobel for edge detection in the taillight area. The image is denoised as much as possible and the internal holes are narrowed to reconnect the adjacent areas. The image is subjected to a morphological closing operation to eliminate narrow discontinuities and small voids in the taillight region. A morphological opening operation is performed on the image to smooth the image boundary. The detection result is shown in Figure 8.

Color Space Conversion
Due to the complex and varying lighting conditions during the day, it is difficult to extract the taillight regions by simply thresholding the R channel of the RGB color model, because the color information changes rapidly. For red vehicles in particular, R-channel thresholding segments the taillight area inaccurately, as shown in Figure 6. Threshold segmentation is therefore performed on the HSV color space image: a pixel whose value is greater than or equal to the threshold is judged to belong to the target object, and a pixel whose value is smaller than the threshold is excluded from the target.
The Sobel operator is then selected for edge detection in the taillight area. The image is denoised as much as possible and the internal holes are narrowed to reconnect adjacent areas: a morphological closing operation eliminates narrow discontinuities and small voids in the taillight region, and a morphological opening operation smooths the region boundary. The detection result is shown in Figure 8.
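The HSV thresholding and morphological clean-up described above can be sketched as follows. This is a minimal numpy-only sketch: the hue/saturation/value thresholds and the 3×3 structuring element are illustrative assumptions, not the paper's tuned values, and `np.roll` wraps at the image border, which a production version would avoid.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB (floats in [0, 1]) to H, S, V planes; hue in [0, 1)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)
    c = v - img.min(axis=-1)                       # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0.0)
    safe_c = np.where(c > 0, c, 1.0)
    rm = (c > 0) & (v == r)
    gm = (c > 0) & (v == g) & ~rm
    bm = (c > 0) & ~rm & ~gm
    h = np.zeros_like(v)
    h = np.where(rm, ((g - b) / safe_c) % 6, h)
    h = np.where(gm, (b - r) / safe_c + 2, h)
    h = np.where(bm, (r - g) / safe_c + 4, h)
    return h / 6.0, s, v

def _dilate(m):
    out = m.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return out

def _erode(m):
    out = m.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return out

def taillight_mask(img, h_lo=0.95, h_hi=0.05, s_min=0.4, v_min=0.3):
    """Threshold red hues in HSV, then apply closing and opening."""
    h, s, v = rgb_to_hsv(img)
    mask = ((h >= h_lo) | (h <= h_hi)) & (s >= s_min) & (v >= v_min)
    mask = _erode(_dilate(mask))   # closing: fill small holes and discontinuities
    mask = _dilate(_erode(mask))   # opening: smooth the region boundary
    return mask
```

An OpenCV implementation would use `cv2.cvtColor`, `cv2.inRange`, and `cv2.morphologyEx` instead of the hand-rolled helpers above.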

Taillight Detection
To detect the correct taillight pixel-cluster pair, we first determine the centroid of each pixel cluster in the image. We then select the cluster on one side of the image as the primary point, treat the other centroids as dependent points, and calculate the vertical distance between the primary point and each dependent point. Correlations of the histograms of oriented gradients, of position, and of color are used to detect vehicle taillight pairs.

Correlation Detection Based on Histogram of Oriented Gradients
A histogram of oriented gradients (HOG) is a kind of feature histogram. For an image f(x, y), its gradient magnitude ∇f(x, y) and direction angle Φ(x, y) at the point (x, y) are defined as

∇f(x, y) = sqrt(Gx² + Gy²),  Φ(x, y) = arctan(Gy / Gx),

where Gx and Gy respectively represent the gradient in the x direction and the gradient in the y direction.

If we divide the interval [−π/2, π/2] of the gradient direction in Equation (6) into K uniform intervals (bins), and use bin_k to denote the k-th gradient direction, then the gradient magnitude weight projection function Q_k(x, y) of the pixel point (x, y) in the k-th gradient direction can be expressed as

Q_k(x, y) = ∇f(x, y) if Φ(x, y) ∈ bin_k, otherwise Q_k(x, y) = 0.

The gradient magnitude weight projection function Q_k(x, y) in Equation (4) is the gradient magnitude of the pixel, which reflects the edge information of the pixel to some extent. According to Equation (4), the gradient feature of each pixel point (x, y) is a K-dimensional vector. The gradient direction histogram of the image is the histogram statistics of the K-dimensional gradient features of all the pixels in the image. The candidate taillight areas obtained by the image processing method are shown in Figure 9. The correlation coefficients of the gradient histograms of the two frames in the X and Y directions are calculated, as shown in Table 5. Analyzing the correlation coefficients of the taillight candidate areas, the taillight pair candidates on the two sides have the highest gradient histogram correlation coefficient values in both the X direction and the Y direction and are therefore taken as the correct taillight pair matching result.
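The gradient-direction histogram and its correlation coefficient can be sketched as below. The function names and the bin count K = 9 are assumptions for illustration; a small epsilon stands in for the Gx = 0 case of arctan(Gy/Gx).

```python
import numpy as np

def oriented_gradient_histogram(patch, k=9):
    """K-bin histogram of gradient directions, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))   # axis 0 is y, axis 1 is x
    mag = np.hypot(gx, gy)                      # |grad f(x, y)|
    ang = np.arctan(gy / (gx + 1e-12))          # direction in (-pi/2, pi/2)
    bins = np.clip(((ang + np.pi / 2) / (np.pi / k)).astype(int), 0, k - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=k)

def histogram_correlation(h1, h2):
    """Pearson correlation coefficient between two histograms."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

Candidate regions whose histograms have the highest mutual correlation in both directions would then be matched as a taillight pair.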



Correlation Detection Based on Location Relationship
In the distance-pairing restrictions, the taillights are generally symmetrically distributed. On a flat road, although a camera angle offset may exist, the lights on the two sides lie roughly on the same horizontal line. The centroids in two randomly selected taillight images are shown in Figure 10.
Considering the restrictions on the centroids of the two regions in the vertical and horizontal directions, Figure 11 shows the height difference between the centroid coordinates of the two points. In the vertical direction Y, the absolute value of the distance difference between the two centroids must be less than the larger of the two clusters' heights. In the horizontal direction X, the distance between the two points must be less than the width of the car body.
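The positional constraints above can be expressed as a simple predicate. The function name is hypothetical; the vertical tolerance follows the rule of using the larger cluster height, and `body_width` would come from the detected vehicle bounding box.

```python
def is_taillight_pair(c_left, c_right, h_left, h_right, body_width):
    """Positional check: roughly level vertically, closer than one car width."""
    (x1, y1), (x2, y2) = c_left, c_right
    vertical_ok = abs(y1 - y2) < max(h_left, h_right)   # Y: within max cluster height
    horizontal_ok = 0 < abs(x1 - x2) < body_width       # X: within the car body width
    return vertical_ok and horizontal_ok
```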

Correlation Detection Based on Color
To further reduce possible false detections, a color correlation detection is carried out on the pixel cluster pairs that satisfy the above conditions. The pixel cluster pairs are extracted from the original image, and the linear correlation coefficient is used:

ρ(X, Y) = Cov(X, Y) / sqrt(Var[X] · Var[Y]),

where Var[X] is the variance of X, Var[Y] is the variance of Y, and Cov(X, Y) is the covariance of X and Y, calculated as

Cov(X, Y) = E(XY) − E(X)E(Y),

where E(X) and E(Y) represent the expectations of the components X and Y. The sum of the correlation coefficients of the mean values of the three monochrome channels (the red, green, and blue channels) in the candidate taillight pair regions is calculated to determine whether the taillight pair belongs to the same vehicle.



The original image in the RGB color space is used to calculate the sum of the correlations of the R, G, and B channels, and a dynamic threshold a_color is set. If the sum of the correlations is less than the threshold a_color, the two regions are recognized as the correct taillight matching items.
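One plausible reading of this channel-correlation test is sketched below: the linear correlation coefficient is applied per channel and summed over R, G, and B. Computing the correlation between normalized channel histograms of the two regions is an assumption, since the paper's exact operands (the "mean value" vectors) are not fully specified; the resulting sum would then be compared with the dynamic threshold a_color as the text describes.

```python
import numpy as np

def channel_histogram(region, channel, bins=32):
    """Normalized intensity histogram of one color channel of a region."""
    hist, _ = np.histogram(region[..., channel], bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def pearson(x, y):
    """Linear correlation coefficient rho(X, Y) = Cov(X, Y) / sqrt(Var[X] Var[Y])."""
    a, b = x - x.mean(), y - y.mean()
    d = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def color_correlation_sum(region_a, region_b):
    """Sum of R, G and B channel-histogram correlations of two candidate regions."""
    return sum(pearson(channel_histogram(region_a, c), channel_histogram(region_b, c))
               for c in range(3))
```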
Figure 12 compares taillight detection using the positional relationship constraint alone with the method used in this paper. The positional constraint alone cannot effectively eliminate interference points that are close in position, whereas our method eliminates such interference and detects the taillights accurately.

Histogram Characteristic Parameter Extraction
The grey-level histogram feature parameters of an image are measured at specific pixels or in their neighborhoods and describe the grey-level characteristics of the image well. The histogram is a global description of the grayscale image, but it is not used directly as a feature; instead, its mean, variance, energy, and entropy are used as the characteristics that distinguish the categories.
For an image f, assume the total number of pixels is N, the maximum grey level is L (255 for a grayscale image), and the number of pixels with grey level k is N_k. The grey-level histogram of f can then be represented as

P(k) = N_k / N,  k = 0, 1, …, L.

The mean is the mean of the grey-level probability distribution:

μ = Σ_{k=0}^{L} k · P(k).

The variance measures the spread of the image grey-value distribution:

σ² = Σ_{k=0}^{L} (k − μ)² · P(k).

The energy represents the uniformity of the grey-level distribution:

E = Σ_{k=0}^{L} P(k)².

The entropy measures the amount of information in the image:

H = −Σ_{k=0}^{L} P(k) · log₂ P(k).

The average brightness of the image can be expressed by the mean value, and the dispersion of the image grey-level distribution by the variance. Because image sampling influences the mean and the variance, the target image is usually normalized before classification. The energy is the second moment of the grey-level distribution about the origin: if the grey values of the image are equiprobably distributed, the energy is smallest; otherwise, it is larger. In information theory, the entropy reflects how much information an image contains.
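The four histogram feature parameters can be computed directly from the normalized grey-level histogram; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def histogram_features(gray):
    """Mean, variance, energy and entropy of an 8-bit grayscale image histogram."""
    g = np.asarray(gray, dtype=np.uint8)
    p = np.bincount(g.ravel(), minlength=256) / g.size   # P(k) = N_k / N
    k = np.arange(256)
    mean = float((k * p).sum())                          # mu
    variance = float(((k - mean) ** 2 * p).sum())        # sigma^2
    energy = float((p ** 2).sum())                       # sum of P(k)^2
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())           # -sum P(k) log2 P(k)
    return mean, variance, energy, entropy
```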

Brake Light Identification Method
Traditionally, color threshold segmentation or shape matching is used to determine the taillight state. Although these methods can roughly detect the outline and area of the taillight, they cannot judge the current lighting state. In this paper, by analyzing and comparing the on and off states of the brake light, we observe that when the brake light is on, the inner layer appears yellowish because of its stronger brightness, while the outer layer is red because of its weaker brightness; when the brake light is off, its color is a more uniform red. The taillight status can therefore be monitored from brightness and color information, so the histogram features of the original taillight area image are analyzed. Figure 13a shows the image when the taillight is off, and Figure 13b shows the image when the taillight is turned on. The histogram statistics of the taillight in the lit state are shown in Figure 14a, and those of the off state in Figure 14b; there are obvious differences between the distributions in the two states.
The on state of the brake light is a continuous lighting process. Five taillight frames at intervals of 1 s are randomly selected, as shown in Figure 15a-e, in which the on and off process of the brake lights can be seen clearly. Over a period of 5 s, we calculate the histogram features of the left and right taillights in each image and those of the background image. If the taillights on both sides simultaneously satisfy the continuous-lighting conditions on the histogram characteristic parameters within the same time interval, the brake lights are recognized as lit.


The left taillight image in each of the five frames in Figure 15 is segmented, as shown in Figure 16a-e, and the statistics of the histogram characteristic parameters are calculated, as shown in Table 6. Within 5 s after the background frame is determined, if the histogram characteristic parameters of the vehicle taillight area images are larger than those of the background frame in two or more consecutive frames, the taillights of the preceding vehicle are recognized as lit. From the five images extracted at equal intervals within 5 s, the histogram feature parameter list mentioned above is obtained, and obvious changes can be noted: the first two points are the parameters of the two frames when the light is off, and the last three points are the parameters of the three frames when the light is on. Analysis of the histogram feature parameters shows that the mean value, entropy, and variance of the taillights when they are not lit are less than the corresponding values when they are lit, and the energy when the taillights are lit is greater than when they are not. Whether the taillight is on can therefore be determined by setting a dynamic threshold.
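The continuous-lighting rule for brake lights (both taillights exceeding the background-frame parameters in at least two consecutive sampled frames) can be sketched as follows; the feature tuples and the background values in the test are hypothetical.

```python
def brake_light_on(left_feats, right_feats, bg_feats, min_consecutive=2):
    """True if both taillights exceed the background-frame histogram
    parameters in at least `min_consecutive` successive sampled frames."""
    def exceeds(feats):
        # every tracked parameter must be above its background value
        return all(v > b for v, b in zip(feats, bg_feats))
    run = 0
    for lf, rf in zip(left_feats, right_feats):
        run = run + 1 if exceeds(lf) and exceeds(rf) else 0
        if run >= min_consecutive:
            return True
    return False
```

Requiring two consecutive lit frames is what separates the steady brake light from a flashing turn signal, which toggles every second.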

Identification Method of the Direction Light
The direction light of the vehicle flashes intermittently when it is on. The taillight images are shown in Figure 17. The histogram distribution statistics of the direction lights in the on state are shown in Figure 18a, and those of the off state in Figure 18b. The histogram distribution characteristics differ obviously between the on and off states, so the statistical histogram characteristic parameters can be used to analyze the lighting color and characteristics of the direction lights. The flickering of the direction lights causes a periodic change in the histogram characteristic parameters.



Another important feature of the turn signal light is the alternating turn-on and turn-off process with a period of 1 s, as shown in Figure 19a-e. From the figure, the periodic on and off characteristic of the direction light can be seen clearly.
Figure 19. Brightness change chart of the direction light during the on and off process.
The five frames of the left taillight in Figure 19 are segmented into the images shown in Figure 20, and the characteristic parameters are listed in Table 7. Analysis of the histogram characteristic parameter list shows that the mean value, energy, and variance are the more sensitive to the direction lights, so these three parameters are selected as the conditions for determining the direction light's on or off state. Within 5 s after the background frame is determined, the histogram feature parameters are calculated once per second; if the parameters of the taillight areas on both sides of the vehicle are greater than those of the background frame, the direction light of the preceding vehicle is recognized as lit. From the five images extracted at equal intervals within 5 s, the histogram feature parameter list mentioned above is obtained: the first two points are the parameters of the two frames when the light is off, and the last three points are the parameters of the three frames when the light is on. Analysis of the histogram characteristic parameters shows that the mean value, energy, and variance when the light is off are less than their values when the light is on, while the entropy is not sensitive to whether the lamp is on.
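The periodic on/off signature of the direction light suggests counting state toggles over the 5 s window. The sketch below is one way to encode that rule, not the paper's exact procedure: `min_toggles` is an assumed parameter (the paper sets a dynamic threshold on the parameters rather than counting toggles), and entropy is excluded as the text says it is not sensitive.

```python
def direction_light_on(feats, bg_feats, min_toggles=2):
    """Detect the flashing turn signal: the per-second mean/energy/variance
    alternate between above and below the background-frame values."""
    # per-frame state: all tracked parameters above the background values
    states = [all(v > b for v, b in zip(f, bg_feats)) for f in feats]
    toggles = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return any(states) and toggles >= min_toggles
```

A steady-on sequence (no toggles) would instead indicate a brake light, so the same features distinguish the two taillight semantics.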
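The histogram feature parameters and the decision rule above can be sketched as follows. The feature definitions (mean, variance, energy, and entropy of the normalized gray-level histogram) are standard, and the "all three parameters exceed the background frame" rule follows the text; the function names and the synthetic patches in the usage example are illustrative, not the paper's implementation.

```python
import numpy as np

def histogram_features(patch):
    """Return (mean, variance, energy, entropy) of an 8-bit grayscale patch."""
    hist, _ = np.histogram(patch, bins=256, range=(0, 256))
    p = hist / hist.sum()                       # normalized histogram
    levels = np.arange(256)
    mean = float((levels * p).sum())
    variance = float(((levels - mean) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    nz = p[p > 0]                               # avoid log2(0)
    entropy = float(-(nz * np.log2(nz)).sum())
    return mean, variance, energy, entropy

def direction_light_on(patch, background_patch):
    """Lit if mean, variance, and energy all exceed the background frame's."""
    m, v, e, _ = histogram_features(patch)
    mb, vb, eb, _ = histogram_features(background_patch)
    return m > mb and v > vb and e > eb
```

Sampling such a patch once per second over the 5 s window yields the on/off sequence described above; the 1 s flashing period then appears as an alternation in the decision output.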

Experiment Results and Analysis
By using the methods described in this paper, the brake light and direction light detection results are shown in Figure 21. Vehicle detection and taillight detection together take about 200 ms.

If a taillight is on but is not detected, this is termed a missing alarm; if a taillight is off but is identified as on, this is termed a false alarm. In cloudy and rainy weather the taillights can still be detected, but in severe weather conditions, such as fog, heavy rain, and bright light, missed detections occur. Some test results are shown in Figure 22, and the experimental results are summarized in Table 8.
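The two error rates summarized in Table 8 reduce to simple frame counts. In this sketch a "missing alarm" counts a lit taillight that went undetected and a "false alarm" counts an off taillight identified as on; the counts in the usage example are illustrative placeholders, not the paper's measured results.

```python
def alarm_rates(n_frames, n_missing, n_false):
    """Return (missing-alarm rate, false-alarm rate, accuracy) over a test run."""
    missing_rate = n_missing / n_frames
    false_rate = n_false / n_frames
    return missing_rate, false_rate, 1.0 - missing_rate - false_rate
```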

The longer the distance at which the taillight can be identified, the earlier the system can warn the driver, improving driving safety. In the experiment, under normal urban driving conditions, the taillight of the preceding vehicle can be accurately recognized when the distance to the preceding vehicle is within approximately 30 m. At greater distances, the taillight area in the image becomes too small, consisting of only a few pixels, and can no longer be detected, as shown in Figure 23.
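The ~30 m limit implies a minimum-size gate on candidate regions: once a taillight shrinks to only a few pixels, its histogram statistics become meaningless and the candidate should be rejected. A minimal sketch, in which the 25-pixel floor is an assumed value, not one reported in the paper:

```python
import numpy as np

# Assumed minimum pixel count for a usable taillight candidate.
MIN_TAILLIGHT_PIXELS = 25

def taillight_candidate_usable(region_mask):
    """region_mask: boolean array marking the candidate taillight pixels."""
    return int(region_mask.sum()) >= MIN_TAILLIGHT_PIXELS
```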

Discussion
Some vehicle models have taillights that are complex polygons or irregular shapes. Because the video frames must undergo pre-processing, filtering, and morphological processing, the binarization and grayscale transformation may leave the taillight area incomplete, which can make the lamp area impossible to identify.
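One common mitigation for such incomplete regions is morphological closing (dilation followed by erosion), which fills small holes that binarization leaves inside a lamp region. This is a minimal NumPy sketch of the idea, not the paper's exact pipeline; note that the zero padding used here erodes the image border.

```python
import numpy as np

def binary_close(mask, k=3):
    """Morphological closing (dilate then erode) with a k x k square element."""
    pad = k // 2

    def dilate(m):
        out = np.zeros_like(m)
        p = np.pad(m, pad)                 # pad with False
        for dy in range(k):
            for dx in range(k):
                out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    def erode(m):
        out = np.ones_like(m)
        p = np.pad(m, pad)                 # zero padding shrinks the border
        for dy in range(k):
            for dx in range(k):
                out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    return erode(dilate(mask))
```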
In some rainy weather conditions the taillight area can still be detected. However, in severe weather such as heavy fog, heavy rain, and bright light, the target area in the image is partially or completely obscured, and strong light can distort the apparent surface color of objects, so the target area cannot be accurately captured and identified, resulting in missed detections. In addition, the image data collected under different weather conditions need to be further expanded to verify the robustness of the system.
In the future, a more robust taillight detection method will need to be designed for different weather conditions and different taillight shapes. Integrating a distance detection sensor into the system will also be considered.

Conclusions
Effectively identifying taillight signals helps us understand the driving intentions of preceding vehicles. In this paper, a CNN is used to detect preceding vehicles on the current road. The position, color, and gradient histogram features of the taillight area are combined to detect the taillight areas on both sides. The variation of the histogram characteristic parameters under different taillight lighting states is analyzed, and the taillight signal of the vehicle is identified.
This study collected images of vehicles under different road conditions, selected images from different scenes, and marked the areas of interest. A Faster RCNN was then built, trained, and tested on this image dataset to achieve high-precision vehicle detection.
In this study, the taillight area segmentation threshold is obtained by weighting the three channels of the HSV color model. Taillight pair matching is completed using three correlation checks, on the gradient histogram, color, and spatial position features, to improve the accuracy of taillight detection. The brake lights and turn lights are then recognized from the histogram characteristic parameters of the taillight region.
The results show that the algorithm detects vehicles and taillights well in daytime traffic environments, but some shortcomings remain. In future research work, more annotated road vehicle images will need to be collected and produced to train a more robust Faster RCNN network.
