Sensors 2012, 12(9), 12386-12404; doi:10.3390/s120912386

Article
Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle
Long Chen 1,2, Qingquan Li 1,*, Ming Li 1,*, Liang Zhang 1 and Qingzhou Mao 1

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, No. 129, Luoyu Road, Wuhan 430079, China; E-Mails: lchen.whu@gmail.com (L.C.); zl200531610254@126.com (L.Z.); qzhmao@whu.edu.cn (Q.M.)
2 School of Electronic Information, Wuhan University, No. 129, Luoyu Road, Wuhan 430079, China
* Authors to whom correspondence should be addressed; E-Mails: qqli@whu.edu.cn (Q.L.); liming751218@gmail.com (M.L.); Tel./Fax: +86-755-2653-6101.
Received: 31 July 2012; in revised form: 20 August 2012 / Accepted: 23 August 2012 / Published: 12 September 2012

Abstract: This paper describes the travel environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. The system coordinates multiple laser scanners and cameras to realize several functions essential to autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect road curbs with a Z-variance method. Vision-based lane detection combines a two-scan method with an imaging model. A method based on Haar-like features is applied for traffic sign detection, and SURF matching is used for sign classification. Experimental results validate the effectiveness of the proposed algorithms and of the system as a whole.

Keywords: autonomous vehicle; travel environment perception system; multi-sensor cooperation; road and lane detection; traffic sign detection

1. Introduction

An Intelligent Vehicle System (IVS) is a comprehensive system that must provide several essential functions: travel environment perception, self-localization, path planning and vehicle control. Travel environment perception is the foundation of the other functions in an IVS. This paper introduces the multi-sensor fusion travel environment perception system designed for our autonomous vehicle SmartV-II. Its main functions are road curb detection, lane detection and traffic sign recognition. With the help of this travel environment perception system, SmartV-II became the only robot to complete the comprehensive test section of the 2010 Future Challenge within the time limit, see Figure 1.

IVS has been studied for a long time, especially since the DARPA Grand Challenge held in 2005. Many effective solutions have been proposed for road and lane detection and for traffic sign recognition.

1.1. Road and Lane Detection

Roads can be divided into structured and unstructured roads based on their structure information. The former means regular roads with visible lane markings, such as highways and most urban roads. For structured roads, lane detection and following have been studied as key technologies over the last two decades. Several effective lane detection systems have been proposed, such as AWSTM, AutoVue, RALPH [1–3], AURORA [4], SCAR [5], GOLD [6,7] and LOIS [8]. These lane detection algorithms fall into two main categories: edge-based methods and model-based methods. Edge-based methods are the most widely used [9,10]. They are fast but highly dependent on the method used to extract the edges corresponding to the lane boundaries; when the road condition is complex, they easily fail. Common road models include the triangle model, straight-line model, clothoid model, polynomial model and spline model. Wang et al. [11] computed the likelihood probability by fitting the detected features to the model, while Kang et al. [12] and Wang et al. [13] located the lane by finding the extremum of an energy function, with a Kalman filter predicting the model parameters. These algorithms are time-consuming because of their iterative operation. Unstructured road refers to irregular roads without normal markings, such as campus and park roads, rural roads and off-road terrain. In this situation, researchers mainly focus on natural road boundary and drivable range detection [14–17]. Lieb et al. [14] used one-dimensional template matching and the sum of squared differences, combined with optical flow, to determine the most similar regions in front of the vehicle; this method can hardly deal with an unexpected obstacle ahead. Dynamic sampling windows are used for training range detection in [15], but the selected range cannot represent the real road-class feature space well.
Our previous lane detection solution is reported in [18]. In this paper, we apply a more reliable laser-based method to locate the road range, because laser data provide more dependable depth information in which structural changes are easier to find. In [19], a trigonometry-based road detection method using a laser scanner is proposed, which exploits the relationship among three neighboring laser points. However, ranging error can destroy this relationship, so the method becomes less robust as the range increases. In this paper, a Z-variance based road curb detection method is proposed, which is range independent. Chen et al. [20] also reviewed recent developments of active vision in robotic systems.

1.2. Traffic Sign Detection and Recognition

Traffic sign detection and recognition in real time is a vital issue in IVS and Driver Assistance Systems (DAS). A decade ago, systems performing in real time had already been achieved [21–23]. Traffic sign recognition usually consists of two components: detection and classification. First, the locations of the traffic signs are found and target rectangles are extracted in the detection stage; determining to which category a candidate sign belongs is then the main issue addressed in the classification phase. For traffic sign detection, color segmentation is the most common method. The RGB color model is widely used [24], but RGB color space is highly sensitive to light intensity; therefore HSI and HSV, which are less affected by lighting changes, have been used [25,26]. Other authors have used the YIQ [27], YUV, L*a*b [28] and CIE color spaces, and some have developed databases of color pixels, look-up tables and hierarchical region growing techniques [26,29,30]. Shape-based methods are usually used for final detection after the color segmentation, and many circle, ellipse and triangle detection methods have been employed. Soetedjo and Yamada [31] discussed ellipse detection in complex scenes using neighborhood characteristics and the symmetric features of a simple coding. Piccioli et al. [32] analyzed the color information and the geometrical characteristics of edges to extract possible triangular or circular signs. For traffic sign classification, many methods have been employed, such as template matching, LDA, SVM, ANN and other machine learning methods. OCR systems are applied in [28,33,34], using pictogram-based classification by template matching and cross-correlation. In [35,36], the authors make use of LDA to distinguish between road signs. The Multi-Layer Perceptron [37] is widely used in current approaches, and neural networks in general are widely adopted [38,39].
Support vector machines (SVM) are largely adopted to classify the inner part of road signs [40]. Random forests, an ensemble learning technique, are used in [41] to classify signs, with a comparison against SVM and AdaBoost. In recent years, one of the most accepted and widely used approaches to object detection was proposed by Viola and Jones [42]. Their approach is based on a cascade of detectors, where each detector is an ensemble of boosted classifiers built on Haar-like features. Inspired by the detector presented in [42], we apply this method combined with color segmentation for traffic sign detection. Different from the above solutions, this paper presents a low-cost multi-sensor integrated system that realizes the necessary functions with several novel algorithms. The contributions of this paper are as follows:

  • By reasonably arranging several simple low-cost sensors, our system can realize complex functions without high-end sensors. Combination of cameras and lasers based road detection method can deal with not only structured road but also unstructured road.

  • Multiple sensors are skillfully installed for covering more view around the vehicle to satisfy the situation that the vehicle drives with high speed or passes a turn with high curvature.

  • Traffic signs are divided into six classes; for each class, we train a classifier based on Haar-like features for detection, and the scale-invariant SURF feature is used for sign classification.

The rest of the paper is organized as follows. Section 2 introduces the layout of the sensors. Section 3 describes Z-variance based road curb detection. Section 4 presents the two-scan method for multiple lane detection. Real-time traffic sign recognition is introduced in Section 5. Experiments and results are discussed in Section 6. Conclusions are given in Section 7.

2. Multi-Sensor Layout

The layout of the sensors for an IVS should enable a wide view, including not only the front of the vehicle but also its left and right sides. Compared with two successful vehicles in the DARPA Challenge, i.e., BOSS [43] from CMU and Stanley [44] from Stanford University, our system uses lower-cost sensors instead of high-end laser scanners such as the Velodyne, and fixes several sensors on the front part of the vehicle to cover the area close to it. The lasers and cameras are arranged so that the perception range covers not only the front view of the ego vehicle but also the left and right views. This arrangement can deal with the situation where the vehicle prepares to drive through a turn at high speed. Figure 2 shows the positions and coverage areas of the sensors. Three laser scanners are marked 1, 2 and 3 in the upper figure. Laser 1 is mounted on the roof, and Lasers 2 and 3 are mounted on the head of the vehicle, tilted downward to scan the road ahead. The pitch angles ρ1, ρ2 and ρ3 can be adjusted so that the lasers strike the ground at different distances ahead of the vehicle. Three cameras with different pitch and heading angles are used for lane finding. When the vehicle is traveling roughly along a straight line, the middle camera is used for lane detection; when it comes to a turn, the two side cameras are chosen in order to cover the closer area around the vehicle. Data from the different sensors are transformed into a common vehicle coordinate system. Calibration is performed using OpenCV functions [45] and the Camera Calibration Toolbox for MATLAB; the algorithm used is taken mainly from [46].
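Before fusion, each tilted laser's scan points must be rotated by its pitch angle into the common vehicle frame. A minimal sketch, with an assumed axis convention (x right, y forward, z up) and invented mounting offsets; neither is specified in the paper:

```python
import numpy as np

def laser_to_vehicle(points, pitch, mount):
    """Rotate scan points from a laser tilted by `pitch` (radians,
    negative = nose-down) into the vehicle frame, then translate by the
    mounting offset. Axis convention is an assumption of this sketch."""
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],   # rotation about the lateral (x) axis
                  [0.0, c,  -s],
                  [0.0, s,   c]])
    return points @ R.T + mount

# A point 5 m ahead in the laser's scan plane, laser tilted 10° down,
# mounted 2 m forward of the vehicle origin at 1.5 m height.
p = np.array([[0.0, 5.0, 0.0]])
out = laser_to_vehicle(p, -np.radians(10), np.array([0.0, 2.0, 1.5]))
```

The same rotation with each laser's own ρi and offset maps all three scanners into one frame for the curb fitting of Section 3.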

3. Road Curb Detection

3.1. Z-Variance Based Road Curb Detection

The laser scanner used for road curb detection is slanted downward. The proposed method assumes that the road surface is flat. Under this hypothesis, the elevation variance of points on the road surface is low, while the variance of the Z value is high at the road boundary or curb. All laser points are first translated into the vehicle coordinate system, and a median filter is applied to remove tiny objects on the road such as leaves and road cracks. The Z-variance of the ith point is calculated as

$$D_i = \frac{1}{9} \sum_{k=i-4}^{i+4} \left( Z_k - \bar{Z}_i \right)^2, \qquad \bar{Z}_i = \frac{1}{9} \sum_{k=i-4}^{i+4} Z_k$$

The algorithm steps are as follows:

  • Calculate the Z-variance of all points.

  • Select the points whose Z-variance is above a threshold t; every segment between two such points that is wider than the vehicle is selected as a candidate road section.

  • For each candidate section, compute the mean height H and the distance D between the head of the vehicle and the midpoint of the section, then calculate a weight for every candidate road section by the following equation:

    $$W_i = \alpha e^{-\left| \frac{H - H_{\min}}{H_{\min}} \right|} + (1 - \alpha) e^{-\left| \frac{D - D_{\min}}{D_{\min}} \right|}$$
    where Hmin is the minimum height, Dmin is the minimum distance, and α is a weighting factor. Wi ranges from 0 to 1.

  • The candidate road section with the highest weight is taken as the real road, expressed as a point-pair: a left point (XL, YL) and a right point (XR, YR).
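The variance and candidate-selection steps above can be sketched as follows; the window edge handling, the synthetic threshold and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def z_variance(z, half_window=4):
    """Sliding variance of laser-point heights over a 9-point window.
    Points too close to either end keep variance 0 (assumed edge handling)."""
    z = np.asarray(z, dtype=float)
    d = np.zeros_like(z)
    for i in range(half_window, len(z) - half_window):
        w = z[i - half_window:i + half_window + 1]
        d[i] = np.mean((w - w.mean()) ** 2)
    return d

def candidate_sections(x, d, threshold, min_width):
    """Pairs (l, r) of consecutive high-variance points whose lateral
    spacing exceeds the vehicle width: the candidate road sections."""
    curbs = np.flatnonzero(d > threshold)
    return [(l, r) for l, r in zip(curbs[:-1], curbs[1:])
            if abs(x[r] - x[l]) >= min_width]

# Synthetic scan line: flat road between two 15 cm curbs at x = ±3 m.
x = np.linspace(-5, 5, 101)
z = np.where(np.abs(x) > 3, 0.15, 0.0)
cands = candidate_sections(x, z_variance(z), 0.001, 3.0)
```

On this toy profile the variance spikes only around the two height steps, leaving a single candidate section wide enough for the vehicle.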

3.2. Multi-Laser Based Road Curb Fitting

A single scan laser is not enough to obtain the road boundary, so multiple lasers are combined to settle this problem. Three SICK laser scanners are used, with scanning distances of 2 m, 3.5 m and 6 m, respectively. The road curb detection described above is carried out with each laser independently. Consequently, we obtain three point-pairs, which can be divided into left points ((XL2, YL2), (XL3.5, YL3.5) and (XL6, YL6)) and right points ((XR2, YR2), (XR3.5, YR3.5) and (XR6, YR6)). Finally, a parabola is used to fit the points on each side, see Figure 3.

$$x = a + by + cy^2$$
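As a worked example, fitting the parabola above to three hypothetical left-boundary points (the coordinates are invented for this sketch) is an ordinary least-squares problem:

```python
import numpy as np

# Hypothetical left-boundary point-pairs from the three lasers, in the
# vehicle frame (metres); values invented for illustration.
pts = np.array([[1.9, 2.0],    # (x, y) from the 2 m laser
                [1.7, 3.5],    # from the 3.5 m laser
                [1.3, 6.0]])   # from the 6 m laser
xs, ys = pts[:, 0], pts[:, 1]

# Least-squares fit of x = a + b*y + c*y^2 (exact here: 3 points, 3 unknowns).
A = np.vander(ys, 3, increasing=True)            # columns [1, y, y^2]
a, b, c = np.linalg.lstsq(A, xs, rcond=None)[0]
```

With only the three point-pairs the fit is exact; with more lasers the same least-squares call would smooth out detection noise.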

4. Lane Detection

For structured roads, this paper proposes a two-scan method to detect multiple lanes. Figure 4 shows the flow chart of the proposed multiple lane detection method. The road image from the top-middle camera is first preprocessed by a top-hat transform and thresholding. In mathematical morphology, the top-hat transform is an operation that extracts small elements and details from a given image: it keeps the objects that have been eliminated by the opening, that is, it removes objects larger than the structuring element.
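The preprocessing step can be sketched with a naive flat min/max filter; the kernel size and toy images are illustrative, and a real implementation would use a morphology routine from an image-processing library:

```python
import numpy as np

def grey_open(img, k):
    """Grayscale opening: erosion (min filter) then dilation (max filter)
    with a flat k x k structuring element, edge-padded at the border."""
    def filt(a, op):
        p = k // 2
        padded = np.pad(a, p, mode="edge")
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = op(padded[i:i + k, j:j + k])
        return out
    return filt(filt(img, np.min), np.max)

def top_hat(img, k):
    """White top-hat: image minus its opening. Bright structures narrower
    than the k x k element (e.g., lane markings) survive; wider ones vanish."""
    return img - grey_open(img, k)
```

Thresholding the top-hat output then leaves a binary image dominated by thin bright marks such as painted lanes.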

4.1. Imaging Model

Using the imaging model, we can rebuild the lane plane in 3D world space from the 2D image based on inverse perspective mapping (IPM), and finally obtain the real width of the lane markings and the distance between adjacent lane markings. The proposed imaging model is shown in Figure 5. W = (X, Y, Z) ∈ E3 denotes the world coordinate system (WCS) and I = (u, v) ∈ E2 denotes the image coordinate system. The camera is located at C(d, 0, h) ∈ W, where h is the height of the camera above the ground. The optical axis is parallel to the ground; γ is the angle between the optical axis and the lane; α and β are the horizontal and vertical view angles of the camera. The mapping from W to I is given in Equation (4) and the mapping from I to W in Equation (5), where HI and WI denote the vertical and horizontal resolution of the camera, respectively, which can be acquired by calibration. The width of a lane marking decreases with increasing distance to the camera in the perspective view. Based on the imaging model, we can obtain the real distance ΔX in the WCS corresponding to a distance Δu along row v in the image; the relationship is given in Equation (7).

$$v = \frac{H_I}{2} \left( 1 - \frac{h}{Y \tan\frac{\beta}{2} \cos\gamma} \right), \qquad u = \frac{W_I}{2} \left( 1 - \frac{X}{Y \tan\frac{\alpha}{2}} \right)$$
$$X = Y \tan\frac{\alpha}{2} \cot\gamma \left( 1 - \frac{2u}{W_I} \right), \qquad Y = \frac{h \, H_I}{(H_I - 2v) \tan\frac{\beta}{2} \cos\gamma}$$
$$\Delta u = \frac{\Delta X \, W_I (2v - H_I) \cot\gamma \cos\gamma \tan\frac{\beta}{2}}{2 \, h \, H_I \tan\frac{\alpha}{2}}$$
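For example, the v-mapping and its inverse give the ground distance of any image row; the camera parameters below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def row_to_distance(v, h, H_I, beta, gamma):
    """Ground distance Y for image row v: the inverse of the v mapping
    (angles in radians)."""
    return h * H_I / ((H_I - 2.0 * v) * np.tan(beta / 2.0) * np.cos(gamma))

def distance_to_row(Y, h, H_I, beta, gamma):
    """Forward mapping: image row at which a ground point at distance Y
    projects."""
    return H_I / 2.0 * (1.0 - h / (Y * np.tan(beta / 2.0) * np.cos(gamma)))

# Illustrative parameters: camera 1.2 m high, 480-row image,
# 30° vertical view angle, 5° yaw relative to the lane.
h, H_I = 1.2, 480
beta, gamma = np.radians(30), np.radians(5)
Y = row_to_distance(100.0, h, H_I, beta, gamma)
```

This row-to-distance lookup is what lets a pixel distance Δu at row v be converted into a metric width ΔX.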

4.2. Two Scans Based Method for Multi-Lane Detection

After preprocessing, the gradient of each pixel is calculated as follows:

$$\nabla I(x, y) = \left( \frac{\partial I}{\partial x}, \frac{\partial I}{\partial y} \right)^T = (D_x, D_y)^T$$
where Dx and Dy denote the gradients in the x and y directions, respectively. First, we find the most obvious lane, called the surest lane, based on the edge distribution function (EDF). The EDF is the histogram of gradient magnitude with respect to orientation; the magnitude and orientation are estimated by Equation (8). To compute this histogram, the angle θ(x, y), with range [−90°, 90°], is quantized into 90 subintervals at a step of 2°. The surest lane is defined by the maximum of the histogram. Figure 6(g) shows the RANSAC line fitting of the surest lane after the first scan. Starting from the surest lane, we perform the second scan, and the other lanes are fitted with the same method. Figure 6(j) shows the results of multiple lane detection, and Figure 6(i,k) presents the global and local maxima.

$$|\nabla I(x, y)| = |D_x| + |D_y|, \qquad \theta(x, y) = \tan^{-1}\frac{D_y}{D_x}$$
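A sketch of the EDF computation; the exact bin mapping and the ±90° folding are our assumptions about details the text leaves open:

```python
import numpy as np

def surest_lane_orientation(image):
    """Edge distribution function (EDF): histogram of gradient magnitude
    over orientation, quantized into 90 bins of 2 degrees on [-90, 90).
    Returns the dominant edge angle in degrees (the 'surest' lane)."""
    dy, dx = np.gradient(image.astype(float))
    mag = np.abs(dx) + np.abs(dy)                   # |D_x| + |D_y|
    theta = np.degrees(np.arctan2(dy, dx))
    theta = (theta + 90.0) % 180.0 - 90.0           # fold into [-90, 90)
    bins = np.clip(((theta + 90.0) // 2).astype(int), 0, 89)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=90)
    return int(hist.argmax()) * 2 - 90 + 1          # bin centre, degrees
```

The peak bin gives the orientation along which the first scan searches; in the paper the pixels voting for that peak are then fitted with RANSAC.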

5. Traffic Sign Detection and Classification

The proposed sign detection and recognition method includes two parts. The detection part is based on color segmentation, Haar-like wavelet features and an AdaBoost classifier; the recognition part is based on feature matching with Speeded Up Robust Features (SURF). Figure 7 shows the flow chart of the traffic sign recognition system. Because Haar-like features are computed on gray images, the proposed detection method is mainly based on gray-level information. Since shape information mainly determines the Haar-like features, the main traffic signs this paper copes with are divided into six classes based on shape, as shown in Figure 8.

5.1. Color-Based Segmentation

The color-based segmentation includes two steps: (1) color quantization and (2) ROI locking. In the first step, we extract the target color pixels; in the second, we obtain the ROI from those pixels based on constraints on the bounding boxes of their connected components. The main sign colors are red, blue, yellow, white and black; our detection method focuses on three of them: red, blue and yellow. Because the RGB color model is highly sensitive to light intensity, the HSV color model is applied in this paper.

According to Table 1, we extract the red, blue and yellow pixels from the original image. After color segmentation, the detected pixels form connected regions, from which we obtain enclosing rectangles (ER). Constraints on the ERs remove many noise regions. First, ERs smaller than 20 × 20 pixels are considered noise and are not processed further. Second, the aspect ratio of an ER is limited to 2. Third, the mean saturation of an ER must be no less than 0.5. ERs that fail these constraints are discarded. Figure 9 shows the results of the three-color segmentation and ROI locking.
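Since Table 1 is not reproduced here, the following sketch uses illustrative HSV thresholds for red together with the three ER constraints stated above; the numeric thresholds are stand-ins, not the paper's values:

```python
import numpy as np

def red_mask_hsv(hsv):
    """Red-pixel mask in HSV with h in [0, 180) and s, v in [0, 1].
    The numeric thresholds are illustrative stand-ins for Table 1."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h < 10) | (h > 170)) & (s > 0.5) & (v > 0.2)

def keep_er(w, h_box, mean_saturation):
    """The three enclosing-rectangle (ER) constraints from the text:
    at least 20 x 20 pixels, aspect ratio at most 2, mean saturation
    at least 0.5."""
    if w < 20 or h_box < 20:
        return False
    if max(w, h_box) > 2 * min(w, h_box):
        return False
    return mean_saturation >= 0.5
```

Analogous masks for blue and yellow pick their own hue bands; only ERs passing `keep_er` proceed to the AdaBoost detectors.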

5.2. AdaBoost for Traffic Sign Detection

The AdaBoost algorithm is a classifier learning method that combines a set of weak classifiers into a strong classifier and then assembles several strong classifiers into a cascade classifier. Feature selection is crucial for the classifier. Motivated by the work of Viola and Jones [47], we use extended Haar-like features to train the AdaBoost classifier for traffic sign detection.

$$\mathrm{feature}_j = \sum_{i=1}^{n} \omega_i \times \mathrm{RectSum}(r_i)$$
where ωi denotes the weight of rectangle ri, RectSum(ri) is the integral of the image over rectangle ri, featurej is the jth feature, and n is an arbitrarily chosen number of rectangles composing featurej.
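The RectSum terms can be evaluated in constant time with an integral image; a sketch of one two-rectangle Haar-like feature (the specific feature geometry is an illustrative choice, not taken from the paper):

```python
import numpy as np

def integral_image(img):
    """Summed-area table padded with a zero first row/column so that any
    rectangle sum needs exactly four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """One two-rectangle Haar-like feature: weight +1 on the left half,
    -1 on the right half (an illustrative edge feature)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

AdaBoost then selects, from thousands of such weighted rectangle combinations, the few that best separate sign windows from background.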

5.3. SURF Matching for Classification

The proposed recognition method includes three steps: image scaling, SURF feature extraction and feature matching. The targets found in the detection stage are normalized to the same size (100 × 100) as the templates against which they are matched. Although SURF is a scale-invariant feature, this step ensures that a true sign contains enough features to be matched with the template sign: if the number of matched points is lower than a certain value, the candidate is discarded as noise, and image scaling ensures that this threshold is adequate for all candidates. We use bilinear interpolation for the image scaling. Once the image is normalized, the SURF descriptor is used to extract scale- and rotation-invariant features.

The SURF [48,49] detector is chosen instead of the often-used SIFT detector; SURF was developed to run substantially faster while offering comparable performance to SIFT. The resulting descriptor vector over all 4 × 4 sub-regions is of length 64. More details about SURF can be found in [48,49].

Because many template signs must be matched, all the template signs are divided into six groups based on color and on the trained AdaBoost classifiers in order to reduce the matching time. We use the Approximate Nearest Neighbor (ANN) [50] algorithm for matching. SURF features are first extracted from all the template signs, divided into the six groups, and stored in a database. A candidate image is then matched by individually comparing each of its features with the appropriate database, selected according to the classifier that detected it and its color information; the features are matched by ANN. The image in the template database that gives the maximum number of matches with the candidate image determines the target class. Figure 10 shows some matching results between candidate signs and template signs. See [51] for more details about the algorithm.
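The classification rule (pick the template with the most matched descriptors) can be sketched with a brute-force nearest-neighbour count standing in for ANN; the descriptors below are random stand-ins, not real SURF output:

```python
import numpy as np

def match_count(cand, templ, max_dist=0.3):
    """Number of candidate descriptors whose nearest template descriptor
    lies within max_dist. A brute-force stand-in for the ANN matching
    step; 64-d SURF-like descriptor arrays are assumed."""
    d = np.linalg.norm(cand[:, None, :] - templ[None, :, :], axis=2)
    return int((d.min(axis=1) < max_dist).sum())

def classify(cand, template_group):
    """Return the template name with the maximum number of matches,
    as the text describes (template_group: name -> descriptor array)."""
    return max(template_group,
               key=lambda n: match_count(cand, template_group[n]))
```

In the real system the `template_group` passed in is the one selected by the firing classifier and the segmented color, which keeps the search small.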

6. Results and Discussion

6.1. Road Curb Detection

In order to test the curb detection algorithm, we collected synchronized laser and image data over the whole route of the 2010 Future Challenge. The data set contains 9,230 frames combining the three laser scanners. If the road curb detected from the laser data is close to the scene in the image, we count it as a true positive. The final accuracy reaches 82%. Figure 11 shows some results of the proposed road curb detection: the red points denote the road segment points obtained by our curb detection method, and the red dashed line represents the boundary fitted to the curb points.

6.2. Lane Detection

The algorithm takes the mobile laboratory SmartV-II (Figure 1(b)) of Wuhan University as its platform. The test image data were acquired by an analog video camera mounted on top of the Chery SUV with a fixed strut. The size of the recorded images is 640 × 480; for processing, the video is transformed to 388 × 332. We tested the system under a variety of road conditions, including structured and unstructured roads. The test data contain 15 videos and 4,319 frames in total, of which the unstructured roads (without lanes) comprise 2,891 frames and the structured roads (with lanes) comprise 1,428 frames. All the videos were taken on urban roads in Wuhan and Xi'an, China. The average error rate under the different conditions is lower than 9%, and the average processing time is 20 ms per frame on a Pentium E5200 2.5 GHz computer. For comparison, we implemented the Canny/Hough Estimation of Vanishing Points (CHEVP) algorithm [13], which Wang et al. proposed to initialize their B-Spline snake tracking algorithm; here we compare only the detection stage, not the tracking. Over all 4,319 frames, the correct detection rate of CHEVP is lower than 30%, and over the 1,428 structured road frames it is no more than 50%. The main reason is that the Hough transform fails to capture many faint lines.

Figure 12 shows some results from the front camera under different road conditions. Figure 12(a) shows the roads with vehicle or shadow. Figure 12(b) shows the highway with orientation arrows markings. Figure 12(c) shows the highway with crosswalk warning markings. Figure 12(d) is the road with crosswalk markings. Figure 12(e) shows the road with pavement lettering markings.

6.3. Traffic Sign Detection and Recognition

The test image data were acquired by a CCD video camera mounted on top of the Chery SUV with a fixed strut. The size of the recorded images is 640 × 480. We tested the system under a variety of conditions. To evaluate the performance of the proposed method, 200 images containing 281 traffic signs were taken as test images.

In this paper, six classifiers were trained for the six classes of signs listed in Figure 8. For each classifier, the numbers of positive samples (PS) and negative samples (NS) are listed in Table 2. Our method can detect road signs in 50 ms. Of the 281 signs, 265 were correctly detected, 14 were missed and there were 2 false alarms. The detection rate is thus 94.3%, demonstrating that the proposed detection method is effective and efficient. The detection results in Figure 13 show that our method is insensitive to many complex conditions.

The 265 detected traffic signs were used to evaluate the performance of the proposed classification method. Among the 265 signs, 244 were correctly classified and 14 were falsely classified; the recognition accuracy is 92.7%.

7. Conclusions

In this paper, we propose a real-time travel environment perception system for autonomous vehicle navigation. Our system exploits the respective strengths of lasers and cameras: the combination of multiple lasers and multiple cameras covers the whole front view of the ego vehicle, and their information fusion can deal with difficult situations. The functions of our perception system include road curb detection, lane detection and traffic sign recognition. Extensive experimental results show that our system is reliable in complex urban environments. In future work we will introduce a Velodyne laser scanner to deal with more complex road conditions and make use of SLAM to further develop our IVS.

The work described in this paper was supported by the National Natural Science Foundation of China (Grant No. 91120002 and No. 2011AA110403) and Major State Basic Research Development Program (No. 2010CB732100).

References

  1. Batavia, P.; Pomerleau, D.; Thorpe, C. Predicting Lane Position for Roadway Departure Prevention. Proceedings of the IEEE Intelligent Vehicles Symposium, Berlin, Germany, October 1998.
  2. Batavia, P. Driver Adaptive Warning Systems. Technical Report CMU-RI-TR-98-07; Carnegie Mellon University: Pittsburgh, PA, USA, 1998.
  3. Bertozzi, M.; Broggi, A.; Cellario, M.; Fascioli, A.; Lombardi, P.; Porta, M. Artificial Vision in Road Vehicles. Proc. IEEE 2002, 90, 1258–1271.
  4. Chen, M.; Jochem, T.; Pomerleau, D. AURORA: A Vision-based Roadway Departure Warning System. Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems 95, Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, USA, August 1995. Volume 1; pp. 243–248.
  5. Pomerleau, D.; Thorpe, C.; Emery, L. Performance Specification Development for Roadway Departure Collision Avoidance Systems. Proceedings of the 4th World Congress on Intelligent Transport Systems, Berlin, Germany, October 1997.
  6. Bertozzi, M.; Broggi, A. Vision-Based Vehicle Guidance. Computer 1997, 30, 49–55.
  7. Bertozzi, M.; Broggi, A. GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection. IEEE Trans. Image Process. 1998, 7, 62–81.
  8. Kluge, K.; Lakshmanan, S. A Deformable-Template Approach to Lane Detection. Proceedings of the Intelligent Vehicles '95 Symposium, Detroit, MI, USA, 25–26 September 1995; pp. 54–59.
  9. Broggi, A. Robust Real-time Lane and Road Detection in Critical Shadow Conditions. Proceedings of International Symposium on Computer Vision, Coral Gables, FL, USA, November 1995; pp. 353–358.
  10. Paetzold, F.; Franke, U.; von Seelen, W. Lane Recognition in Urban Environment Using Optimal Control Theory. Proceedings of the Intelligent Vehicles Symposium, Dearborn, MI, USA, October 2000; pp. 221–226.
  11. Wang, Y.; Shen, D.; Teoh, E. Lane Detection Using Spline Model. Pattern Recognit. Lett. 2000, 21, 677–689.
  12. Kang, D.; Choi, J.; Kweon, I. Finding and Tracking Road Lanes Using Line-snakes. Proceedings of the Intelligent Vehicles Symposium, Tokyo, Japan, September 1996; pp. 189–194.
  13. Wang, Y.; Teoh, E.; Shen, D. Lane detection and tracking using B-Snake. Image Vis. Comput. 2004, 22, 269–280.
  14. Lieb, D.; Lookingbill, A.; Thrun, S. Adaptive Road Following Using Self-supervised Learning and Reverse Optical Flow. Proceedings of Robotics: Science and Systems, Cambridge, MA, USA, June 2005.
  15. Wang, J.; Ji, Z.; Su, Y. Unstructured road detection using hybrid features. Proceedings of the International Conference on Machine Learning and Cybernetics, Baoding, China, July 2009. Volume 1; pp. 482–486.
  16. Foedisch, M.; Takeuchi, A. Adaptive Road Detection Through Continuous Environment Learning. Proceedings of the 33rd Applied Imagery Pattern Recognition Workshop, Washington, DC, USA, October 2004; pp. 16–21.
  17. Zhou, S.; Iagnemma, K. Self-supervised Learning Method for Unstructured Road Detection Using Fuzzy Support Vector Machines. Proceedings of the IEEE /RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 2010.
  18. Chen, L.; Li, Q.; Mao, Q.; Zou, Q. Block-constraint Line Scanning Method for Lane Detection. Proceedings of the Intelligent Vehicles Symposium, San Diego, CA, USA, June 2010; pp. 89–94.
  19. Wijesoma, W.; Kodagoda, K.; Balasuriya, A. Road-boundary Detection and Tracking Using Ladar Sensing. IEEE Trans. Robot. Autom. 2004, 20, 456–464.
  20. Chen, S.; Li, Y.; Kwok, N. Active Vision in Robotic Systems: A Survey of Recent Developments. Int. J. Robot. Res. 2011, 30, 1343–1377.
  21. Gavrila, D.; Philomin, V. Real-Time Object Detection for Smart Vehicles. Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, September 1999. Volume 1; pp. 87–93.
  22. Gavrila, D.; Franke, U.; Wohler, C.; Gorzig, S. Real Time Vision for Intelligent Vehicles. IEEE Instrum. Meas. Mag. 2001, 4, 22–27.
  23. Barnes, N.; Zelinsky, A. Real-time Radial Symmetry for Speed Sign Detection. Proceedings of the Intelligent Vehicles Symposium, Parma, Italy, June 2004; pp. 566–571.
  24. Andrey, V.; Jo, K. Automatic Detection and Recognition of Traffic Signs Using Geometric Structure Analysis. Proceedings of the International Joint Conference on SICE-ICASE, Busan, Korea, October 2006; pp. 1451–1456.
  25. Liu, H.; Ran, B. Vision-based Stop Sign Detection and Recognition System for Intelligent Vehicles. Transp. Res. Rec. 2001, 1748, 161–166.
  26. de la Escalera, A.; Armingol, J.; Mata, M. Traffic Sign Recognition and Analysis for Intelligent Vehicles. Image Vis. Comput. 2003, 21, 247–258.
  27. Kehtarnavaz, N.; Ahmad, A. Traffic Sign Recognition in Noisy Outdoor Scenes. Proceedings of the Intelligent Vehicles 95 Symposium, Detroit, MI, USA, September 1995; pp. 460–465.
  28. Siogkas, G.; Dermatas, E. Detection, Tracking and Classification of Road Signs in Adverse Conditions. Proceedings of the 2006 IEEE Mediterranean Electrotechnical Conference, Malaga, Spain, May 2006; pp. 537–540.
  29. de la Escalera, A.; Armingol, J.; Pastor, J.; Rodriguez, F. Visual Sign Information Extraction and Identification by Deformable Models for Intelligent Vehicles. Intell. Transp. Sys. 2004, 5, 57–68.
  30. Fleyeh, H. Color Detection and Segmentation for Road and Traffic Signs. Proceedings of the Conference on Cybernetics and Intelligent Systems, Singapore, December 2004. Volume 2; pp. 809–814.
  31. Soetedjo, A.; Yamada, K. Fast and Robust Traffic Sign Detection. Proceedings of 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, October 2005. Volume 2; pp. 1341–1346.
  32. Piccioli, G.; De Micheli, E.; Parodi, P.; Campani, M. Robust Method for Road Sign Detection and Recognition. Imag Vis. Comput. 1996, 14, 209–223.
  33. Wu, W.; Chen, X.; Yang, J. Detection of Text on Road Signs From Video. IEEE Trans. Intell. Transp. 2005, 6, 378–390.
  34. Li, L.; Ma, G.; Ding, S. Identification of Degraded Traffic Sign Symbols Using Multi-Class Support Vector Machines. Proceedings of the International Conference on Mechatronics and Automation, Harbin, China, August 2007; pp. 2467–2471.
  35. Bahlmann, C.; Zhu, Y.; Ramesh, V.; Pellkofer, M.; Koehler, T. A System for Traffic Sign Detection, Tracking, and Recognition Using Color, Shape, and Motion Information. Proceedings of the Intelligent Vehicles Symposium, Las Vegas, NV, USA, June 2005; pp. 255–260.
  36. Keller, C.; Sprunk, C.; Bahlmann, C.; Giebel, J.; Baratoff, G. Real-Time Recognition of US Speed Signs. Proceedings of the Intelligent Vehicles Symposium, Eindhoven, the Netherlands, June 2008; pp. 518–523.
  37. Ishak, K.; Sani, M.; Tahir, N. A Speed Limit Sign Recognition System Using Artificial Neural Network. Proceedings of the Conference on Research and Development, Selangor, Malaysia, June 2006; pp. 127–131.
  38. Nguwi, Y.; Kouzani, A. Automatic Road Sign Recognition Using Neural Networks. Proceedings of the 2006 International Joint Conference on Neural Networks, Vancouver, BC, Canada, 2006; pp. 3955–3962.
  39. Ach, R.; Luth, N.; Schinner, T.; Techmer, A.; Walther, S. Classification of Traffic Signs in Real-Time on a Multi-Core Processor. Proceedings of the Intelligent Vehicles Symposium, Eindhoven, The Netherlands, June 2008; pp. 313–318.
  40. Maldonado-Bascon, S.; Lafuente-Arroyo, S.; Siegmann, P.; Gomez-Moreno, H.; Acevedo-Rodriguez, F. Traffic Sign Recognition System for Inventory Purposes. Proceedings of the Intelligent Vehicles Symposium, Eindhoven, The Netherlands, June 2008; pp. 590–595.
  41. Kouzani, A. Road-Sign Identification Using Ensemble Learning. Proceedings of the Intelligent Vehicles Symposium, Xi'an, China, June 2007; pp. 438–443.
  42. Viola, P.; Jones, M. Robust Real-time Object Detection. Int. J. Comput. Vis. 2001, 57, 137–154.
  43. Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous Driving in Urban Environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466.
  44. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The Robot that Won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
  45. Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O'Reilly Media, 2008.
  46. Zhang, Z. Flexible Camera Calibration by Viewing a Plane from Unknown Orientations. Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, September 1999. Volume 1; pp. 666–673.
  47. Viola, P.; Jones, M. Robust Real-Time Face Detection. Int. J. Comput. Vis. 2004, 57, 137–154.
  48. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, May 2006; pp. 404–417.
  49. Fasel, B.; Van Gool, L. Interactive Museum Guide: Accurate Retrieval of Object Descriptions. Proceedings of the 4th International Conference on Adaptive Multimedia Retrieval: User, Context, and Feedback, Geneva, Switzerland, July 2006; pp. 179–191.
  50. Arya, S.; Mount, D.; Netanyahu, N.; Silverman, R.; Wu, A. An Optimal Algorithm for Approximate Nearest Neighbor Searching Fixed Dimensions. J. ACM 1998, 45, 891–923.
  51. Chen, L.; Li, Q.; Li, M.; Mao, Q. Traffic Sign Detection and Recognition for Intelligent Vehicle. Proceedings of the Intelligent Vehicles Symposium, Baden-Baden, Germany, June 2011; pp. 908–913.
Figure 1. (a) At approximately 3:04 pm on Oct 18, 2010, SmartV-II was the first robot to complete the Future Challenge; (b) Autonomous Vehicle SmartV-II, developed by Wuhan University.
Figure 2. Lasers and cameras layout.
Figure 3. Road curb fitting.
Figure 4. Flowchart of the two-scans based lane detection method.
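The lane detection pipeline of Figures 4 and 6 fits the candidate lane points produced by each scan with RANSAC. As a minimal pure-Python sketch of RANSAC line fitting (the iteration count and inlier tolerance here are illustrative choices, not values from the paper):

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=2.0, seed=0):
    """Fit a 2-D line to noisy points with RANSAC.

    Returns ((a, b, c), inliers) for the line a*x + b*y + c = 0,
    normalized so a*a + b*b == 1, from the largest consensus set found.
    """
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2              # normal of the line through the pair
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue                          # degenerate sample: identical points
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # Inliers lie within inlier_tol of the candidate line.
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

Because a single random pair determines each hypothesis, outliers (e.g., markings from a neighboring lane) cannot drag the fit the way a least-squares estimate would.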
Figure 5. Imaging Model. (a) The W space. (b) The xy plane in the W space. (c) The yz plane in the W space.
Figure 6. Step-by-step results of the proposed lane detection. (a) Original image. (b) Image after open operation. (c) Image after dilate operation. (d) Image after top-hat transform. (e) Threshold. (f) First scan. (g) RANSAC. (h) Second scan. (i) Gradient contribution function after first scan. (j) RANSAC. (k) Gradient contribution function after second scan.
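Steps (b)-(e) in Figure 6 are standard grey-scale morphology: opening, dilation, a white top-hat transform, and thresholding. A pure-Python sketch of these operators on a small image (the paper does not give the structuring element, so a square 3 x 3 window is assumed here):

```python
def _windowed(img, fn, k=3):
    """Apply fn (min or max) over a k x k window at every pixel,
    replicating border pixels."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = fn(vals)
    return out

def erode(img, k=3):
    return _windowed(img, min, k)

def dilate(img, k=3):
    return _windowed(img, max, k)

def opening(img, k=3):
    # Morphological opening: erosion followed by dilation; removes bright
    # structures narrower than the structuring element.
    return dilate(erode(img, k), k)

def top_hat(img, k=3):
    # White top-hat: image minus its opening; keeps only the narrow bright
    # structures (such as lane markings on darker asphalt) that the
    # opening removed.
    op = opening(img, k)
    return [[p - q for p, q in zip(row, orow)] for row, orow in zip(img, op)]

def threshold(img, t):
    return [[255 if p > t else 0 for p in row] for row in img]
```

In practice these operations would be done with an image library rather than nested Python loops; the sketch only illustrates why a top-hat response isolates thin bright markings before the scan stages.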
Figure 7. Flowchart of the traffic sign recognition system.
Figure 8. Traffic sign classes.
Figure 9. Color quantization and ROI locking. (a) Original image; (b) ROI locking; (c) red segmentation; (d) blue segmentation; (e) yellow segmentation.
Figure 10. SURF feature matching. The numbers of matched points are 16, 11, 24, 7, 12 and 7, in order of priority.
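The matching stage shown in Figure 10 pairs SURF descriptors from the detected sign with those of each template. A pure-Python sketch of brute-force nearest-neighbour matching with the usual nearest/second-nearest distance-ratio test (descriptors are assumed to be precomputed float vectors; the 0.7 ratio is a common default, not a value reported in the paper):

```python
def match_descriptors(query, train, ratio=0.7):
    """Match each query descriptor to its nearest train descriptor,
    keeping only matches that pass the distance-ratio test.
    Returns a list of (query_index, train_index) pairs."""
    def dist2(a, b):
        # Squared Euclidean distance between two descriptor vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for qi, q in enumerate(query):
        d = sorted((dist2(q, t), ti) for ti, t in enumerate(train))
        if len(d) >= 2 and d[0][0] < (ratio ** 2) * d[1][0]:
            # Nearest neighbour is clearly better than the runner-up.
            matches.append((qi, d[0][1]))
        elif len(d) == 1:
            matches.append((qi, d[0][1]))
    return matches
```

The ratio test discards ambiguous descriptors whose two closest candidates are nearly equidistant; the count of surviving matches per template can then be used to rank templates, as the match counts in Figure 10 suggest.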
Figure 11. Some results of the road curb detection.
Figure 12. Some examples of lane detection results.
Figure 13. Detection of traffic signs under various conditions.
Table 1. Color quantization.

               Red                          Blue            Yellow
   Saturation  S > 0.2                      S > 0.2         S > 0.2
   Hue         0 < H < 10 or 320 < H < 360  200 < H < 270   20 < H < 100
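The quantization of Table 1 is applied per pixel after converting RGB to HSV. A sketch using only the Python standard library (`colorsys` returns hue in [0, 1), so it is scaled to degrees; the thresholds follow Table 1):

```python
import colorsys

def quantize(r, g, b):
    """Classify an 8-bit RGB pixel as 'red', 'blue', 'yellow' or None
    using the hue/saturation thresholds of Table 1."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h *= 360.0                     # hue in degrees
    if s <= 0.2:
        return None                # too unsaturated (grey) to classify
    if 0 < h < 10 or 320 < h < 360:
        return 'red'               # red hue wraps around 0/360 degrees
    if 200 < h < 270:
        return 'blue'
    if 20 < h < 100:
        return 'yellow'
    return None
```

Running this test over every pixel of the ROI yields the binary red, blue and yellow masks shown in Figure 9(c)-(e).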
Table 2. The number of PS and NS for the six trained classifiers.

         C1      C2      C3      C4      C5      C6
   PS    3,125   1,276   794     648     963     346
   NS    5,200   2,300   1,600   1,600   1,600   1,000
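Table 2 gives the training samples for the six Haar-feature cascade classifiers used in detection. Each Haar-like feature is a signed difference of rectangle sums, which an integral image makes computable in constant time per rectangle. A pure-Python sketch (the specific two-rectangle feature geometry is illustrative, not the trained feature set):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y),
    computed from four look-ups in the integral image."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle (left/right) Haar-like feature: left sum minus
    right sum, so it responds to horizontal intensity edges."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

The cascade evaluates thousands of such features, but each costs only a handful of additions once the integral image is built, which is what makes real-time sign detection feasible.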
Sensors, EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.