Article

New Vehicle Detection Method with Aspect Ratio Estimation for Hypothesized Windows

1 The School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Korea
2 Department of Electrical Engineering, Gachon University, Seongnam 461-701, Korea
* Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 30927-30941; https://doi.org/10.3390/s151229838
Submission received: 26 August 2015 / Revised: 26 November 2015 / Accepted: 3 December 2015 / Published: 9 December 2015
(This article belongs to the Section Physical Sensors)

Abstract

Different kinds of vehicles have different ratios of width to height, called aspect ratios. Most previous works, however, use a fixed aspect ratio for vehicle detection (VD), which degrades the performance. Estimating the vehicle aspect ratio is therefore an important part of robust VD. Taking this idea into account, a new on-road vehicle detection system is proposed in this paper. The proposed method estimates the aspect ratio of the hypothesized windows to improve the VD performance, and uses Aggregate Channel Features (ACF) and a support vector machine (SVM) to verify the hypothesized windows with the estimated aspect ratio. The contribution of this paper is threefold. First, the estimation of the vehicle aspect ratio is inserted between hypothesis generation (HG) and hypothesis verification (HV). Second, a simple HG method named the signed horizontal edge map is proposed to speed up VD. Third, a new measure is proposed to represent the overlapping ratio between the ground truth and the detection results; this measure is used to show that the proposed method is more robust than previous works. Finally, the Pittsburgh dataset is used to verify the performance of the proposed method.

1. Introduction

Vehicle detection (VD) is one of the major research issues in intelligent transportation systems (ITS), and considerable research has been conducted on it. Most research works consist of two steps: hypothesis generation (HG) and hypothesis verification (HV).
Concerning the HG, symmetry [1,2], color [3,4], shadows [5,6] and edges [7,8] have been used to select vehicle candidates. Further, search-space reduction methods have been developed to save computational resources in the HG. For example, in [9], a linear model between the vehicle position and vehicle size is updated using a recursive least squares algorithm. This linear model helps to generate regions of interest (ROIs) that are likely to include vehicle regions, so this approach reduces false positives compared with exhaustive search or sliding-window approaches. Interestingly, in [10], image inpainting, a method originally developed for restoring damaged images, is used to verify the detection results; this approach also reduces false positives.
Concerning the HV, much research has focused on applying machine vision technologies to VD, as in [11,12]. Machine-vision-based HV mainly consists of features and classifiers. In the case of features, the histogram of oriented gradients (HOG) [13], Haar-like wavelets [14], Gabor features [15] and aggregate channel features (ACF) [16] are generally used. The Haar-like wavelet takes less computation time than the HOG or Gabor feature, but its detection performance is lower. In [17], the HOG and Haar-like wavelet are combined in cascade form to reduce the computation time and improve the detection performance: the Haar-like wavelet accelerates the hypothesis generation, while the HOG verifies the generated hypotheses. The Gabor feature is also effective for VD. The Gabor filter is a kind of band-pass filter that extracts specific information, called the Gabor feature, from the frequency domain. In [18], a gamma distribution is used to model the Gabor feature, giving better detection performance than the Gaussian distribution. The Gabor feature, however, takes a long computation time because it requires the computation of convolutions, and some works [19,20] have been reported to reduce the computation time of the Gabor filter. In the case of classifiers, the support vector machine (SVM), AdaBoost [21] and neural networks (NN) are used to train on the various features. Recently, the latent SVM has been studied for the deformable part-based model (DPM) [22], which can capture significant deformation in the object appearance.
On the other hand, different kinds of vehicles have different ratios of width to height, called aspect ratios. Most previous works, however, use a fixed aspect ratio [23,24,25,26] for VD, which degrades the performance. Thus, a new step named hypothesis improvement (HI) is developed in this paper to enhance the VD performance. In the HI, the aspect ratio of each hypothesized window is estimated and the result is applied to the classifier in the HV; the HI is therefore positioned between the HG and the HV. A part of this paper was presented in [27]. The contribution of this paper is threefold: (1) the HI, based on the estimation of the vehicle aspect ratio, is inserted between the HG and the HV; (2) a simple HG method named the signed horizontal edge map is proposed to speed up VD; and (3) a new measure is proposed to quantify how well the detection results match the ground truth. This measure is used to show that the proposed method is more robust than previous methods.
The remainder of this paper is organized as follows: In Section 2, the proposed vehicle detection system is briefly outlined. In Section 3, the vehicle aspect ratio is estimated and is used to generate an efficient ROI. In Section 4, some experiments are conducted to verify the validity of the proposed method. Some conclusions are drawn in Section 5.

2. Motivation and System Overview

Figure 1 shows the distribution of vehicle aspect ratios in the Pittsburgh dataset.
Figure 1. The distribution of vehicle aspect ratios in the Pittsburgh dataset.
A total of 10,907 vehicles are used in the Pittsburgh dataset, as shown in Figure 1. As can be seen in the figure, the vehicle aspect ratio is approximately 1, but it varies from 0.5 to 2, depending on the types of vehicles and the camera viewpoint. Examples of vehicle images are given in Figure 2. In general, sedans have low vehicle aspect ratios, while trucks or buses have high vehicle aspect ratios, as shown in Figure 2. Thus, the use of a fixed aspect ratio in VD can degrade the performance.
Figure 2. Vehicle images for various vehicle aspect ratios: (a) 0.7; (b) 1; and (c) 1.5.
The proposed system is outlined in Figure 3; as shown in the figure, the HI is positioned between the HG and the HV. In the HG, a simple method named the signed horizontal edge map is developed to provide good hypothesized windows. In the HI, the aspect ratios of the hypothesized windows are estimated by combining the symmetry and horizontal edges of the vehicles. In the HV, ACF is employed with an SVM to test the hypothesized windows with the estimated aspect ratios.
Figure 3. Flow chart of the proposed vehicle detection system.

3. Proposed Method

In this section, the three steps of the proposed method are explained, and their results are summarized in Figure 4. As in [28], the hypotheses for the vehicles are generated in the HG. Let us denote the $i$-th hypothesis by $\mathbf{w}_i = (x_i, y_i, w_i, h_i)$, where $(x_i, y_i)$ denotes the lower-left position of the $i$-th hypothesis and $w_i$ and $h_i$ are the associated width and height of the window, respectively. In the HI, the aspect ratio, or equivalently the height, of the hypothesized window $\mathbf{w}_i$ is estimated. Initially, the height of the hypothesis is set to $h_i = 2w_i$, which is long enough to include all kinds of vehicles, as shown in Figure 1. As in Figure 4a, the candidates of the vehicle width are generated. Figure 4b shows the result of estimating the vehicle height $h_i$ for the given vehicle width $w_i$. Let us denote the estimated vehicle height by $\hat{h}_i$; the $i$-th improved hypothesis is then $\hat{\mathbf{w}}_i = (x_i, y_i, w_i, \hat{h}_i)$. Figure 4c shows the hypothesized windows given as the result of the HI. Finally, ACF and SVM are used to test all of the hypothesized windows given from the HI. The vehicle detection results of the proposed method are shown in Figure 4d.
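For concreteness, a hypothesized window can be carried through the pipeline as a small record. The sketch below is only our illustration of the convention above; the names Window and initial_window are hypothetical, not from the paper.

```python
from typing import NamedTuple

class Window(NamedTuple):
    """Hypothesized vehicle window w_i = (x_i, y_i, w_i, h_i)."""
    x: int  # horizontal position of the lower-left corner
    y: int  # vertical position of the lower-left corner
    w: int  # window width
    h: int  # window height

def initial_window(x: int, y: int, w: int) -> Window:
    # The initial height is set to 2*w, tall enough to cover
    # aspect ratios down to 0.5 (see Figure 1).
    return Window(x, y, w, 2 * w)
```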
Figure 4. The framework of the proposed method: (a) shows the result of the HG by signed horizontal edges ($h_i = 2w_i$); (b) is the result of the estimation of the vehicle height; (c) shows the hypothesized windows given as the result of the HI; and (d) shows the vehicle detection results in the HV.

3.1. Hypothesis Generation (HG)—A Signed Horizontal Edge Map

An efficient hypothesis generation method for VD was reported by Alonso et al. in [28]. The method uses an absolute edge map defined by
$$E(x, y) = |E_h(x, y)| + |E_v(x, y)|$$
where $E_h(x, y)$ and $E_v(x, y)$ represent the horizontal and vertical gradient images, respectively. Figure 5a,c show an original vehicle image $I(x, y)$, the associated absolute edge map $E(x, y)$, and the generated hypotheses. The absolute edge map method in [28] is very efficient, but $E(x, y)$ sometimes misses weak horizontal edges such as shadow edges, which degrades the VD performance. To avoid missing such weak edges, the signed horizontal edge map

$$E_s(x, y) = I(x, y) * H, \qquad H = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$

is used instead, as shown in Figure 5b. The signed horizontal edge map $E_s(x, y)$ takes into account both the sign and the magnitude of the horizontal edges, and it outperforms the absolute edge map $E(x, y)$ in detecting the edges between shadows and roads, since shadows tend to be darker than roads. Figure 5b,d show the same image $I(x, y)$, the associated signed horizontal edge map $E_s(x, y)$, and the generated hypotheses.
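A minimal sketch of this step (our illustration, not the authors' code), assuming a grayscale image and using scipy.ndimage.convolve for the convolution with the Sobel-type kernel $H$:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel-type horizontal kernel H from the equation above.
H = np.array([[-1, -2, -1],
              [ 0,  0,  0],
              [ 1,  2,  1]], dtype=np.float64)

def signed_horizontal_edge_map(image: np.ndarray) -> np.ndarray:
    """Return E_s(x, y) = I(x, y) * H, keeping the sign of the response.

    Dark-to-bright and bright-to-dark horizontal transitions receive
    opposite signs, which is what distinguishes this map from the
    absolute edge map E(x, y) and helps keep shadow/road boundaries.
    """
    return convolve(image.astype(np.float64), H, mode="nearest")
```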
Figure 5. Hypothesis generation methods for VD: (a) Absolute edge image by [28]; (b) Signed horizontal edge image; (c) HG by absolute edge image [28]; and (d) HG by signed horizontal edge image.

3.2. Hypothesis Improvement (HI)—Aspect Ratio Estimation

In this subsection, the symmetry of the vehicle images, the horizontal edges and the prior knowledge about the aspect ratio of the vehicles are combined to estimate the aspect ratio of the hypothesized windows provided by the HG.

3.2.1. Symmetry

The basic idea of this subsection is to exploit the fact that vehicles are symmetric while the backgrounds are not, as shown in Figure 6. The symmetry for each value in the y axis is computed as follows:
(1) The given hypothesized window is flipped horizontally, as in Figure 6, making a mirror image. Figure 6a,b show examples of the original image and the corresponding flipped image, respectively. For $j$'s that belong to the vehicle, the two images are almost the same, while for $j$'s that belong to the background, the two images differ from each other.
(2) In order to quantify the symmetry of the given hypothesized window, the similarity between the hypothesized window and the mirror image is computed. Gradient values are used instead of intensity values because they are more robust under varying illumination; thus, the HOG feature vector is used. The HOG feature is a part of ACF and includes the gradient magnitude and orientation. The HOG feature vector for a hypothesized window is denoted by $\mathbf{H} = [\mathbf{F}_{1,1}, \ldots, \mathbf{F}_{I,1}, \ldots, \mathbf{F}_{1,J}, \ldots, \mathbf{F}_{I,J}]^T \in \mathbb{R}^{TIJ}$, where $\mathbf{F}_{i,j} = [B_{i,j}^1, \ldots, B_{i,j}^T]^T$ denotes the histogram of the $(i, j)$ block and $B_{i,j}^t$ denotes the sum of the gradient magnitudes in orientation bin $t$ of the $(i, j)$ block. Here, $I$ and $J$ are the numbers of column and row blocks of the window, respectively, as shown in Figure 6; $T$ denotes the number of orientation bins in the HOG; and $i$, $j$ and $t$ are the corresponding indices.
Figure 6. The procedure of using the symmetry: (a) is the HOG feature of the hypothesized window; and (b) is the HOG feature of the flipped hypothesized window ($T = 9$).
(3) For the HOG of the hypothesized window $\mathbf{H}$ and the HOG of the associated flipped image $\mathbf{H}^F = [\mathbf{F}_{1,1}^F, \ldots, \mathbf{F}_{I,1}^F, \ldots, \mathbf{F}_{1,J}^F, \ldots, \mathbf{F}_{I,J}^F]^T \in \mathbb{R}^{TIJ}$, the similarity between the two HOGs is defined by

$$\mathbf{S} = \mathbf{H} \odot \mathbf{H}^F = [\mathbf{s}_{1,1}^T, \ldots, \mathbf{s}_{I,1}^T, \ldots, \mathbf{s}_{1,J}^T, \ldots, \mathbf{s}_{I,J}^T]^T \in \mathbb{R}^{TIJ}$$

where $\odot$ denotes component-wise multiplication and $\mathbf{s}_{i,j} = [B_{i,j}^1 B_{i,j}^{1F}, \ldots, B_{i,j}^T B_{i,j}^{TF}]^T$. The symmetry of the $j$-th row block can be quantified by
$$\mathbf{m} = [m_1, \ldots, m_J], \qquad m_j = \left\| \sum_{i=1}^{I} \mathbf{s}_{i,j} \right\|_1$$
Finally, the symmetry values are accumulated over the row blocks, and the accumulated symmetry is defined as

$$\mathbf{M} = [M_1, \ldots, M_J], \qquad M_j = \sum_{t=0}^{J-j} \left( m_{J-t} - T_s \right)$$
where $T_s$ is the median value of the symmetry vector $\mathbf{m}$. That is, the accumulated symmetry $M_j$ of the $j$-th row block is the sum of the values of $\mathbf{m}$ (offset by $T_s$) from the bottom row block up to the $j$-th one. Figure 7 shows the accumulated symmetry computed for a vehicle image. In Figure 7b, the symmetry is depicted for different $j$'s. Since the vehicle region has high symmetry, the background region has low symmetry, and $T_s$ is the median of $\mathbf{m}$, the row block corresponding to the vehicle height attains the maximum accumulated symmetry, as shown in Figure 7c.
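The three steps above reduce to a few array operations. The sketch below is our illustration under an assumed (J, I, T) array layout for the block histograms; it is not the authors' implementation.

```python
import numpy as np

def accumulated_symmetry(B: np.ndarray, B_flip: np.ndarray) -> np.ndarray:
    """Accumulated symmetry M computed from HOG block histograms.

    B and B_flip hold the histograms B_{i,j}^t of the window and its
    horizontally flipped copy, laid out here as (J, I, T) arrays
    (J row blocks, I column blocks, T orientation bins; row 0 is the top).
    """
    # Component-wise similarity s_{i,j} of window and mirror image.
    s = B * B_flip                                  # (J, I, T)
    # m_j = || sum_i s_{i,j} ||_1 : symmetry of the j-th row block.
    m = np.abs(s.sum(axis=1)).sum(axis=1)           # (J,)
    # M_j = sum_{t=0}^{J-j} (m_{J-t} - T_s): accumulate from the bottom
    # row block up to row j, with T_s the median of m.
    Ts = np.median(m)
    return np.cumsum((m - Ts)[::-1])[::-1]          # (J,)
```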
Figure 7. The result of estimating symmetry: (a) is the hypothesized window; (b) is the symmetry in terms of $j$; and (c) is the accumulated symmetry from bottom to top ($I = 8$, $J = 64$).

3.2.2. Horizontal Edge

In addition to the symmetry of the vehicles, the horizontal edge is also an important cue that we can use to estimate the vehicle height. The horizontal edge is also computed using the HOG feature vector H . Figure 8 shows the result of the horizontal edge detection. The amount of the horizontal edge is defined by
$$\mathbf{E} = [E_1, \ldots, E_J], \qquad E_j = \sum_{i=1}^{I} B_{i,j}^{t_0}$$
where $t_0$ denotes the bin for the horizontal orientation. For simplicity, $\mathbf{E}$ is also called the horizontal edge. In Figure 8b, the $j$ corresponding to the vehicle height has the highest magnitude of the horizontal edge due to the intensity difference between the vehicle and the background.
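Under the same assumed (J, I, T) layout as the symmetry sketch, the horizontal edge measure is a single reduction:

```python
import numpy as np

def horizontal_edge(B: np.ndarray, t0: int) -> np.ndarray:
    """E_j = sum_i B_{i,j}^{t0}: horizontal edge strength per row block.

    B is the (J, I, T) histogram array used above; t0 is the index of
    the orientation bin closest to horizontal.
    """
    return B[:, :, t0].sum(axis=1)                  # (J,)
```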
Figure 8. The result of the horizontal edge: (a) is the hypothesized window; and (b) is the horizontal edge in terms of j .

3.2.3. Prior Knowledge about Aspect Ratio

Finally, our prior knowledge about the aspect ratio is used to fine-tune the heights of the vehicles. As shown in Figure 1, it is unusual for the vehicle aspect ratio to be less than 0.5 or larger than 1.5. Thus, we model the vehicle aspect ratios with a Gaussian distribution, as shown in Figure 9. The degree to which the estimated aspect ratio matches this prior knowledge is used to reduce false estimations of the vehicle aspect ratio. Here, the prior knowledge match degree is defined by
$$\mathbf{W} = [W_1, \ldots, W_J], \qquad W_j = \mathcal{N}\!\left(j \,\middle|\, \frac{J}{2}, \sigma\right) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(j - J/2)^2}{2\sigma^2} \right)$$

where $\mathcal{N}(\cdot)$ denotes a Gaussian distribution. The mean of the vehicle aspect ratio is set to one. Figure 9 shows the distribution of the prior knowledge match degree.
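A short sketch of the prior (our illustration; the 1-based row index and the default sigma = 10 taken from the caption of Figure 9 are the only assumptions):

```python
import numpy as np

def prior_match_degree(J: int, sigma: float = 10.0) -> np.ndarray:
    """W_j = N(j | J/2, sigma): Gaussian prior on the row block index.

    Because the initial window height is 2w, the row block j = J/2
    corresponds to an aspect ratio of exactly one, the mean of the
    prior; sigma = 10 follows Figure 9.
    """
    j = np.arange(1, J + 1, dtype=np.float64)
    return np.exp(-(j - J / 2) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
```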
Figure 9. The prior knowledge match degree: (a) is the hypothesized window; and (b) is the prior knowledge match degree in terms of $j$ ($\sigma = 10$).

3.2.4. Estimating Vehicle Height

Three measures, the accumulated symmetry M j , the horizontal edge E j , and the prior knowledge match degree W j , are combined to define a score for the vehicle height as
$$\mathbf{T} = [T_1, \ldots, T_J], \qquad T_j = M_j \, E_j \, W_j$$
Figure 10 shows the final height score $T_j$ of a window for different values of $j$. Using the height score, the vehicle height is estimated as

$$\hat{h} = h \, \frac{J - \hat{j}}{J}, \qquad \hat{j} = \arg\max_j T_j$$
Figure 10. The results of estimating vehicle height: (a) is the hypothesized window; (b) is the total score for vehicle height estimation; and (c) shows the estimated vehicle height represented by the red line.
In Figure 10c, the estimated height of a given vehicle is marked by a red line. As shown in the figure, the estimated height is very close to the ground truth.
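Putting the three cues together, the height estimation of this subsection can be sketched as follows (our illustration; the mapping from $\hat{j}$ back to pixels assumes row block 1 is the top of the window):

```python
import numpy as np

def estimate_height(M: np.ndarray, E: np.ndarray, W: np.ndarray, h: float) -> float:
    """Combine the three cues into T_j = M_j E_j W_j and return the height.

    M, E and W are the length-J arrays from the previous sketches
    (row 0 is the top); h is the height of the initial window (h = 2w).
    """
    J = len(M)
    T = M * E * W
    j_hat = int(np.argmax(T)) + 1       # back to the paper's 1-based index
    return h * (J - j_hat) / J          # estimated height = h (J - j_hat) / J
```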

3.3. Hypothesis Verification (HV)

In the HV, ACF and an SVM are applied to the fine-tuned windows obtained from the HI, and vehicle verification is conducted. ACF uses five channels: a normalized edge channel, a HOG channel and the three LUV color channels. Figure 11 shows the ACF channels for a vehicle image. The channels are divided into 4 × 4 blocks, and the pixels in each block are summed [16]. The features extracted from ACF are used to train a linear SVM [29]. Finally, the trained SVM is used to detect vehicles in the fine-tuned windows obtained from the HI.
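The authors follow the ACF of Dollár et al. [16]; the sketch below is only a rough illustration of the channel structure, not the actual pipeline, and the 64 × 64 window size and six orientation bins are our assumptions.

```python
import numpy as np
import cv2

def acf_features(window_bgr: np.ndarray, n_orient: int = 6) -> np.ndarray:
    """Simplified ACF: LUV color, gradient magnitude and orientation
    channels, each summed over non-overlapping 4x4 pixel blocks."""
    img = cv2.resize(window_bgr, (64, 64)).astype(np.float32) / 255.0
    luv = cv2.cvtColor(img, cv2.COLOR_BGR2LUV)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    # Hard-assign each pixel's magnitude to one of n_orient HOG-style bins.
    bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    hog = np.stack([mag * (bins == t) for t in range(n_orient)], axis=-1)
    channels = np.dstack([luv, mag[..., None], hog])   # (64, 64, 4 + n_orient)
    # Aggregate: sum each channel over 4x4 blocks, then flatten.
    return channels.reshape(16, 4, 16, 4, -1).sum(axis=(1, 3)).ravel()
```

In practice, the feature vectors of positive and negative training windows would be fed to a linear SVM (e.g., sklearn.svm.LinearSVC), and each fine-tuned window from the HI accepted when the decision function is positive.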
Figure 11. ACF channels for the image (a): (b) is the HOG channel; (c) is the L channel of LUV; (d) is the U channel of LUV; (e) is the V channel of LUV; and (f) is the normalized edge channel.

4. Experiment

In this section, experiments are conducted to compare the performance of the proposed method with that of two previous methods. In Table 1, the HG, HI and HV of the competing algorithms are summarized. The first two algorithms, denoted by “SW” and “Alonso”, are existing methods: “SW” is the sliding window approach in [30] and “Alonso” is the algorithm in [28]. In “SW”, the aspect ratio is set to 1. The last algorithm, denoted by “SHE + VH”, is the proposed method, where “SHE” means the signed horizontal edges and “VH” means the proposed aspect ratio estimation.
Table 1. The HG, HI and HV of the competing algorithms.

| Methods | HG | HI | HV |
| --- | --- | --- | --- |
| Previous method 1 (SW) | Sliding window | × | ACF + SVM |
| Previous method 2 (Alonso) | Absolute edge image | Peaks of edges | ACF + SVM |
| Proposed method (SHE + VH) | Signed horizontal edge image | Symmetry, horizontal edge | ACF + SVM |
The proposed method is evaluated on three aspects: (1) the aspect ratio estimation; (2) the vehicle detection; and (3) the computation time. First, the aspect ratio estimation is considered. A total of 11,021 vehicles from the Pittsburgh dataset are used to evaluate the performance of the aspect ratio estimation. In Table 2, the previous methods and the proposed method are compared in terms of the mean absolute error (MAE) between the true and estimated aspect ratios, defined by

$$\mathrm{MAE} = \frac{1}{N_G} \sum_{i=1}^{N_G} \left| R_E^i - R_G^i \right|$$

where $N_G$ is the number of vehicles, and $R_G^i$ and $R_E^i$ are the true and estimated aspect ratios of the $i$-th sample, respectively. In Table 2, the MAE is evaluated for different types of vehicles: sedans, sport utility vehicles (SUVs), trucks and buses. For sedans, trucks and buses, the proposed method has a lower MAE than “SW” and “Alonso”. For SUVs, however, the proposed method underperforms the previous methods because the aspect ratio of an SUV is close to one, so the fixed aspect ratio of one happens to beat the estimation. Overall, the proposed method has the lowest MAE among the competing methods, which means that it generates more accurate hypothesized windows than the previous methods do.
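For reference, the MAE above is a one-line computation (our illustrative helper, not from the paper):

```python
import numpy as np

def aspect_ratio_mae(R_est: np.ndarray, R_true: np.ndarray) -> float:
    """Mean absolute error between estimated and true aspect ratios."""
    return float(np.mean(np.abs(R_est - R_true)))
```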
Table 2. The mean absolute errors (MAE) of the proposed and previous methods.

| Vehicle | Number of Vehicles | Previous Method 1 (“SW”) | Previous Method 2 (“Alonso”) | Proposed Method (“SHE + VH”) |
| --- | --- | --- | --- | --- |
| Sedan | 5982 | 0.1425 | 0.1044 | 0.1014 |
| SUV | 4350 | 0.0694 | 0.0635 | 0.0902 |
| Truck | 390 | 0.3087 | 0.1844 | 0.0961 |
| Bus | 299 | 0.1415 | 0.3438 | 0.1309 |
| Total | 11,021 | 0.1656 | 0.1740 | 0.1047 |
Second, the proposed method is evaluated in terms of the VD performance. In Figure 12, the VD results of the previous and proposed methods are compared. From the figure, it can be seen that the bounding boxes of the proposed method fit the vehicles more accurately than those of the previous methods. Further, the previous methods produce some false positives and miss some vehicles, while the proposed method detects the vehicles successfully. To quantitatively evaluate the detection performance, two measures are introduced: the PASCAL measure [31] and the average overlapping ratio (AOR).
Figure 12. The vehicle detection results of (a) “SW”; (b) “Alonso”; and (c) “SHE + VH”.
The PASCAL measure considers a detection to be correct if the overlap ratio $r$ between the detection result $B_D$ and the ground truth $B_T$, defined by

$$r = \frac{\mathrm{area}(B_D \cap B_T)}{\mathrm{area}(B_D \cup B_T)}$$

exceeds a threshold $T_r$, where $B_D \cap B_T$ denotes the intersection of the detection result and the ground truth, and $B_D \cup B_T$ denotes their union. In this experiment, the threshold $T_r$ is set to 0.55. Using the PASCAL measure, the true positive rate (TPR), the false positives per image (FPPI) and, subsequently, the receiver operating characteristic (ROC) curve are evaluated. In addition, another measure, the AOR, is proposed in this paper. The AOR is defined by
$$\mathrm{AOR} = \frac{\sum_{i=1}^{N_D} r_i \, I[r_i > T_r]}{\sum_{i=1}^{N_D} I[r_i > T_r]}$$

It represents the accuracy of the true positive detections, where $N_D$ is the number of detected vehicles, $r_i$ is the PASCAL measure for the $i$-th detection, and $I[\cdot]$ is an indicator function that returns one if its argument is true and zero otherwise. Using the TPR and AOR, we can define the true positive score (TPS) by
$$\mathrm{TPS} = \mathrm{TPR} \cdot (\mathrm{AOR} - T_r) = \left( \frac{1}{N_G} \sum_{i=1}^{N_D} I[r_i > T_r] \right) \left( \frac{\sum_{i=1}^{N_D} r_i \, I[r_i > T_r]}{\sum_{i=1}^{N_D} I[r_i > T_r]} - T_r \right) = \frac{1}{N_G} \sum_{i=1}^{N_D} (r_i - T_r) \, I[r_i > T_r]$$

where $N_G$ is the number of ground-truth vehicles. The TPS reflects both the TPR and the AOR, so it represents the true detection rate and the accuracy simultaneously. In Figure 13, the detection performances of the proposed and previous methods are compared in terms of the TPR, FPPI, ROC and AOR. In the experiment, the size of the images is 320 × 240 and only the vehicles covering more than 30 pixels are considered as true targets. Figure 13a shows the ROC; the proposed method demonstrates better detection performance than the previous methods. In Figure 13b, the AOR (detection accuracy) is plotted against the TPR (detection rate); this figure clearly shows that the proposed method outperforms the previous two methods in detection accuracy at the same detection rate. In Figure 13c, the detection performance is compared in terms of the TPS and FPPI; the TPS combines the detection accuracy and rate, and the proposed method demonstrates a much higher TPS than the previous methods, meaning that it detects vehicles both more reliably and more accurately. In Figure 13d and Table 3, the three competing methods are compared in terms of the speed-up ratio (SUR) [32], TPS, TPR and AOR when the FPPI is set to 1. The SUR indicates how much faster an algorithm runs compared with the exhaustive search “SW”, and it is defined as
$$\mathrm{SUR} = \frac{\text{processing time of SW}}{\text{processing time of the algorithm}}$$
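A minimal sketch of these measures (our illustration; it assumes each detection has already been matched one-to-one with a ground-truth box, a step the equations above leave implicit):

```python
import numpy as np

def pascal_measure(box_d, box_t) -> float:
    """r = area(B_D ∩ B_T) / area(B_D ∪ B_T) for boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_d[0], box_t[0]), max(box_d[1], box_t[1])
    ix2, iy2 = min(box_d[2], box_t[2]), min(box_d[3], box_t[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_d) + area(box_t) - inter
    return inter / union if union > 0 else 0.0

def detection_scores(r: np.ndarray, n_gt: int, t_r: float = 0.55):
    """TPR, AOR and TPS from the per-detection PASCAL measures r.

    r: overlap of each of the N_D detections with its matched ground
    truth; n_gt: number of ground-truth vehicles N_G.
    """
    tp = r > t_r
    tpr = tp.sum() / n_gt
    aor = r[tp].mean() if tp.any() else 0.0
    tps = np.sum((r - t_r) * tp) / n_gt     # equals TPR * (AOR - T_r)
    return tpr, aor, tps
```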
Figure 13. The detection performance in terms of (a) FPPI (false positive per image) vs. TPR (true positive rate); (b) TPR vs. AOR (average overlapping ratio); (c) FPPI vs. TPS (true positive score); and (d) SUR (speed-up ratio) vs. TPS vs. TPR vs. AOR when FPPI is 1.
Table 3. The overall performance of the proposed and previous methods (when the FPPI is 1).

| Methods | TPS | TPR | AOR | SUR |
| --- | --- | --- | --- | --- |
| Previous method 1 (SW) | 0.0872 | 0.665 | 0.6812 | 1 |
| Previous method 2 (Alonso) | 0.0425 | 0.2574 | 0.715 | 2.4736 |
| Proposed method (SHE + VH) | 0.1249 | 0.6436 | 0.744 | 1.3243 |
From the figure and table, the proposed method runs 1.87 times slower than “Alonso”, but it achieves much better performance on the other three measures. Compared with “SW”, the proposed method runs 1.32 times faster and achieves much better TPS and AOR, while its TPR is almost the same. Thus, the proposed method is attractive in both detection rate and accuracy, even though it is computationally somewhat more expensive than “Alonso”.

5. Conclusions

In this paper, a new, precise on-road vehicle detection system has been proposed. Accurate vehicle detection is very important in applications that require the vehicle position and size. For accurate detection, the signed horizontal edge map was proposed for the HG, and the aspect ratio of the vehicle windows was estimated in the HI. The windows from the HI were provided to the HV, composed of ACF and SVM, and good VD performance was obtained.
Finally, a new measure was proposed to test the accuracy of the proposed vehicle detection method. In the experiments, the proposed method was compared with the previous methods in terms of the TPR, FPPI, ROC, AOR and SUR, and its validity was demonstrated.

Acknowledgments

This work was supported by the Technology Innovation Program (10052731, Development of a low-level video and radar fusion system for advanced pedestrian recognition) funded by the Ministry of Trade, Industry & Energy (MI, Korea).

Author Contributions

Jisu Kim, Jeonghyun Baek and Euntai Kim designed the algorithm, carried out the experiments, analyzed the results, and wrote the paper. Yongseo Park analyzed the data, carried out the experiments, and gave helpful suggestions on this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Broggi, A.; Cerri, P.; Antonello, P.C. Multi-Resolution Vehicle Detection Using Artificial Vision. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 310–314.
2. Bensrhair, A.; Bertozzi, M.; Broggi, A.; Miche, P.; Mousset, S.; Toulminet, G. A Cooperative Approach to Vision-Based Vehicle Detection. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Oakland, CA, USA, 25–29 August 2001; pp. 207–212.
3. Tsai, L.W.; Hsieh, J.W.; Fan, K.C. Vehicle Detection Using Normalized Color and Edge Map. IEEE Trans. Image Process. 2007, 16, 850–864.
4. Guo, D.; Fraichard, T.; Xie, M.; Laugier, C. Color Modeling by Spherical Influence Field in Sensing Driving Environment. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 5 October 2000; pp. 249–254.
5. Tzomakas, C.; von Seelen, W. Vehicle Detection in Traffic Scenes Using Shadows; Technical Report, Institut für Neuroinformatik, Ruhr-Universität Bochum: Bochum, Germany, August 1998.
6. Feng, Y.; Xing, C. A New Approach to Vehicle Positioning Based on Region of Interest. In Proceedings of the IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 23–25 May 2013; pp. 471–474.
7. Sun, Z.; Miller, R.; Bebis, G.; DiMeo, D. A Real-Time Precrash Vehicle Detection System. In Proceedings of the IEEE Workshop on Applications of Computer Vision, Orlando, FL, USA, 4 December 2002; pp. 171–176.
8. Southall, B.; Bansal, M.; Eledath, J. Real-Time Vehicle Detection for Highway Driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–26 June 2009; pp. 541–548.
9. Kim, J.; Baek, J.; Kim, E. On Road Precise Vehicle Detection System Using ROI Estimation. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 2251–2252.
10. Joung, J.H.; Ryoo, M.S.; Choi, S.; Yu, W.; Chae, H. Background-Aware Pedestrian/Vehicle Detection System for Driving Environments. In Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, 5–7 October 2011; pp. 1331–1336.
11. Sun, Z.; Bebis, G.; Miller, R. On-Road Vehicle Detection: A Review. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 694–711.
12. Sivaraman, S.; Trivedi, M.M. Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1773–1795.
13. Yuan, Q.; Ablavsky, V. Learning a Family of Detectors via Multiplicative Kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 514–530.
14. Chang, W.C.; Cho, C.W. Online Boosting for Vehicle Detection. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2010, 40, 892–902.
15. Sun, Z.; Bebis, G.; Miller, R. On-Road Vehicle Detection Using Evolutionary Gabor Filter Optimization. IEEE Trans. Intell. Transp. Syst. 2005, 6, 125–137.
16. Dollár, P.; Appel, R.; Belongie, S.; Perona, P. Fast Feature Pyramids for Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545.
17. Geismann, P.; Schneider, G. A Two-Staged Approach to Vision-Based Pedestrian Recognition Using Haar and HOG Features. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 554–559.
18. Guo, J.M.; Prasetyo, H.; Wong, K. Vehicle Verification Using Gabor Filter Magnitude with Gamma Distribution Modeling. IEEE Signal Process. Lett. 2014, 21, 600–604.
19. Arróspide, J.; Salgado, L. Log-Gabor Filters for Image-Based Vehicle Verification. IEEE Trans. Image Process. 2013, 22, 2286–2295.
20. Amayeh, G.; Tavakkoli, A.; Bebis, G. Accurate and Efficient Computation of Gabor Features in Real-Time Applications. In Proceedings of the 5th International Symposium on Visual Computing, Las Vegas, NV, USA, 2 December 2009; pp. 243–252.
21. Choi, W.P.; Tse, S.H.; Wong, K.W.; Lam, K.M. Simplified Gabor Wavelets for Human Face Recognition. Pattern Recognit. 2008, 41, 1186–1199.
22. Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object Detection with Discriminatively Trained Part-Based Models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1627–1645.
23. Lee, S.; Son, H.; Choi, J.C.; Min, K. HOG Feature Extractor Circuit for Real-Time Human and Vehicle Detection. In Proceedings of the 2012 IEEE Region 10 Conference (TENCON), Cebu, Philippines, 19–22 November 2012; pp. 1–5.
24. Chen, X.; Xiang, S.; Liu, C.L.; Pan, C.H. Vehicle Detection in Satellite Images by Hybrid Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2014, 10, 1797–1801.
25. Haselhoff, A.; Kummert, A. A Vehicle Detection System Based on Haar and Triangle Features. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi'an, China, 3–5 June 2009; pp. 261–266.
26. Mao, L.; Xie, M.; Huang, Y. Preceding Vehicle Detection Using Histograms of Oriented Gradients. In Proceedings of the 2010 International Conference on Communications, Circuits and Systems, Chengdu, China, 28–30 July 2010; pp. 354–358.
27. Kim, J.; Baek, J.; Kim, E. On-Road Vehicle Detection Based on Effective Hypothesis Generation. In Proceedings of the 22nd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2013), Gyeongju, Korea, 26–29 August 2013; pp. 252–257.
28. Alonso, D.; Salgado, L.; Nieto, M. Robust Vehicle Detection through Multidimensional Classification for on Board Video Based Systems. In Proceedings of the IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 October 2007; pp. 321–324.
29. Vapnik, V. The Nature of Statistical Learning Theory; Springer-Verlag: New York, NY, USA, 1995.
30. Satzoda, R.K.; Trivedi, M.M. Efficient Lane and Vehicle Detection with Integrated Synergies (ELVIS). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 708–713.
31. Everingham, M.; van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
32. Yu, J.; Miyamoto, R.; Onoye, T. A Speed-Up Scheme Based on Multiple-Instance Pruning for Pedestrian Detection Using a Support Vector Machine. IEEE Trans. Image Process. 2013, 22, 4752–4761.
