Article

Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area

Jun-Hyuck Im, Sung-Hyuck Im and Gyu-In Jee

1. Department of Electronic Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 05029, Korea
2. Satellite Navigation Team, Korea Aerospace Research Institute, 169-84 Gwahak-ro, Yuseong-gu, Daejeon 305-806, Korea
* Author to whom correspondence should be addressed.
Sensors 2016, 16(8), 1268; https://doi.org/10.3390/s16081268
Submission received: 31 March 2016 / Revised: 2 August 2016 / Accepted: 4 August 2016 / Published: 10 August 2016
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Abstract
Tall buildings are concentrated in urban areas. Their outer walls are erected vertically from the ground and are almost flat; therefore, vertical corners, where two vertical planes meet, are present everywhere in urban areas. These corners act as convenient landmarks that can be extracted using a light detection and ranging (LIDAR) sensor. In this paper, a vertical corner feature based precise vehicle localization method is proposed and implemented using a 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increments output by the iterative closest point (ICP) algorithm, which is based on the geometric relations between successive scans of the 3D LIDAR. Vertical corners are extracted using the proposed corner extraction method, and the vehicle position is then corrected by matching a prebuilt corner map with the extracted corners. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m.

1. Introduction

Precise vehicle localization has become important for the safe driving of autonomous vehicles. Generally, a Global Positioning System (GPS) is most often used for position recognition. However, the position accuracy of GPS is not reliable in urban areas. To solve this problem, techniques for recognizing position by integrating several sensors (e.g., GPS, Inertial Measurement Unit (IMU), vision, light detection and ranging (LIDAR), etc.) have been continuously researched. Kummerle et al. [1] presented an autonomous driving system for a multi-level parking structure; this approach utilizes multi-level surface maps to localize a self-driving car based on 3D LIDAR. Brenner [2] presented a vehicle localization approach using typical roadside poles obtained by 2D LIDAR. Baldwin and Newman [3] and Chong et al. [4] presented vehicle localization methods with 2D push-broom LIDAR and 3D priors; in particular, [4] utilized LIDAR accumulation with a 3D rolling window. Hu et al. [5] and Choi and Maurer [6,7] presented hybrid map-based localization methods: the topological-metric hybrid map of [5] includes lane markings and a 2D occupancy grid map, and the grid-feature hybrid map of [6,7] includes typical roadside poles and a 2D occupancy grid map. Thus, many methods have been researched for vehicle localization. However, these methods do not provide a complete solution for an urban area.
Recently, localization methods using a Velodyne LIDAR based road intensity map have been researched. Levinson et al. [8,9] and Hata and Wolf [10] presented intensity map based localization methods using the Velodyne LIDAR. Wolcott and Eustice [11] presented a visual localization method within a LIDAR intensity map. The LIDAR based road intensity map matching method has the advantage of very high localization accuracy, and the camera based road intensity map matching method has the advantage of using an already mounted low-cost camera. However, neither method can be used when the vehicle is in congested traffic in an urban area, because the road surface is occluded. Also, the data file size of the road intensity map is very large. Most recently, Wolcott and Eustice [12] presented a new scan matching algorithm that leverages Gaussian mixture maps. This solution overcomes the limitations of a road intensity map (harsh weather and poor road surface texture) by exploiting the 3D structure.
Because tall buildings in urban areas obstruct GPS satellite signal reception, the position accuracy of GPS is very poor there. However, those same buildings can be used to generate a landmark map for precise vehicle localization. The map information can be represented in various forms (e.g., 2D/3D occupancy grid map, 3D point cloud, 2D corner feature, 2D contour (line), 3D vertical surface, etc.). We focused on the outer walls of buildings, which are erected vertically from the ground and are almost flat; the vertical corners where two such walls meet provide very good landmarks and can be extracted using LIDAR. In contrast to road markings, corners can still be extracted during periods of traffic congestion.
A significant amount of research has been carried out on corner detection methods using LIDAR [13,14,15,16,17]. In addition, a large number of papers have been presented on the position estimation of mobile robots using detected corner points. However, most of these papers focused on the indoor localization of mobile robots [18,19,20,21], and vehicle localization using LIDAR based vertical corner extraction in an urban area has not been presented. In this paper, we use corner feature information for precise vehicle localization in an urban area.
In indoor environments, corridor walls can be clearly scanned by 2D LIDAR [18,19,20,21], because there is almost no object covering the corridor walls; therefore, corners can be easily extracted with 2D LIDAR. In urban environments, however, corner extraction using only 2D LIDAR is difficult. Because of the many roadside trees in urban areas, the LIDAR can scan only part of a building. Also, if the outer wall of a building is made of glass, the laser pulse is not reliably reflected. In this case, by using 3D LIDAR, the false positive rate can be reduced by combining the corner extraction results from multiple layers. The corner map contains information describing the geometrical properties of the vertical corners of the corresponding buildings. The vehicle position error can be corrected through corner map matching, and the corner map has the advantage of a small data file size.
The Extended Kalman Filter (EKF) is used to estimate the vehicle position. The vehicle motion is updated using the output from the ICP algorithm based on the geometric relations between the scan data of 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea.
The remaining part of this paper is organized as follows. In Section 2, the vertical corner feature is defined and the corner extraction method is proposed. The procedure for the generation of the vertical corner map is described in Section 3. In Section 4, the Kalman filter configuration and the observability analysis are described. The vehicle driving test results are shown in Section 5. Finally, the conclusion is given in Section 6.

2. Corner Extraction

In general, a corner is a position formed by the intersection of two converging lines or surfaces. The outer wall of a building is almost flat, forming a vertical plane perpendicular to the ground. Therefore, the vertical corner can be a distinctive and useful point landmark for map based vehicle localization in an urban area. In general, each building has at least four vertical corners, and these can be extracted using LIDAR.

2.1. Corner Definition

In this paper, the vertical corner is projected into the ground plane and treated as a point feature on the 2D horizontal plane. Figure 1 shows the corner definition.
As shown in Figure 1, a corner consists of a corner point and two corner lines. Figure 2 shows the attributes of the corner.
The corner point is represented by a 2D horizontal position in the east-north-up (ENU) frame. Each corner line carries the directional property of the corresponding building wall; therefore, as shown in Figure 2, the corner lines are represented by directional angles ($\theta_1$, $\theta_2$) in the ENU frame. A minimal data structure for this corner feature is sketched below.
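As an illustration only (not code from the paper), the corner feature of Figure 2 could be held in a small structure like the following; all field names are assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Corner:
    """A vertical corner projected onto the 2D east-north plane (Section 2.1)."""
    east: float        # corner point east coordinate in the ENU frame [m]
    north: float       # corner point north coordinate in the ENU frame [m]
    theta1: float      # directional angle of the first corner line [deg]
    theta2: float      # directional angle of the second corner line [deg]
    cov: np.ndarray    # 2 x 2 position error covariance of the corner [m^2]
```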

2.2. Corner Extraction

In this paper, vehicle localization utilizes the vertical corner features of buildings. Therefore, we use only the data of the eight upper layers of the 3D LIDAR [22]. The scan data are processed layer by layer.

2.2.1. Line Extraction

In order to extract corners, lines must first be accurately extracted. Several line extraction methods have previously been presented [23,24,25,26]. Among these, the Iterative-End-Point-Fit (IEPF) algorithm shows the best performance in terms of accuracy and computational time [27]. Figure 3 shows the line extraction result using the IEPF algorithm.
As shown in Figure 3, the laser scanning points are divided into many line segments.
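For concreteness, a minimal sketch of the IEPF split step follows. It assumes the input is an ordered (N, 2) array of points from one layer, omits the range-discontinuity pre-segmentation a full implementation would use, and the split threshold is illustrative, not the paper's value:

```python
import numpy as np

def point_line_distances(points, p0, p1):
    """Perpendicular distance from each point to the line through p0 and p1."""
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the chord
    return np.abs((points - p0) @ n)

def iepf(points, threshold=0.1):
    """Iterative-End-Point-Fit: recursively split an ordered scan into segments.

    points: (N, 2) array of consecutive scan points from one LIDAR layer.
    Returns (start, end) index pairs, one per extracted line segment.
    """
    if len(points) < 2:
        return []
    dists = point_line_distances(points, points[0], points[-1])
    k = int(np.argmax(dists))
    if dists[k] > threshold and 0 < k < len(points) - 1:
        # the farthest point breaks the fit: split the scan there and recurse
        left = iepf(points[: k + 1], threshold)
        right = iepf(points[k:], threshold)
        return left + [(s + k, e + k) for (s, e) in right]
    return [(0, len(points) - 1)]
```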

2.2.2. Outlier Removal

In Figure 3, most of these line segments are sets of scan points from the leaves of roadside trees. Therefore, the laser data reflected by the leaves of roadside trees must be removed. Figure 4 shows the laser scanning points reflected by the leaves and by the outer wall of a building.
As shown in Figure 4, the features of the laser scanning points reflected by the two types of objects are clearly distinguished. For roadside trees, the variance of the distance errors between the extracted line and each point is very large; for the outer wall of a building, the variance of the distance errors is very small. Figure 5 shows the pseudo code for outlier removal, and a sketch of the same idea follows.
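The paper's actual pseudo code is in Figure 5; the following is only a plausible reconstruction of the variance criterion described above, with an illustrative threshold:

```python
import numpy as np

def residual_variance(seg):
    """Variance of perpendicular distances from segment points to the chord
    joining the segment endpoints."""
    d = seg[-1] - seg[0]
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    return np.var((seg - seg[0]) @ n)

def remove_outlier_segments(points, segments, var_threshold=1e-3):
    """Keep low-variance segments (flat building walls) and drop
    high-variance ones (tree foliage)."""
    kept = []
    for s, e in segments:
        seg = points[s : e + 1]
        if len(seg) >= 3 and residual_variance(seg) < var_threshold:
            kept.append((s, e))
    return kept
```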
Through the process of the pseudo code, the outliers such as the roadside trees are removed. Figure 6 shows the line extraction result after the outlier removal.
In Figure 6, the green points represent the line segments that are finally extracted.

2.2.3. Corner Candidate Extraction

Corner candidates are extracted based on the corner definition in Section 2.1. Figure 7 shows an extracted corner candidate.
As shown in Figure 7, the corner point (black star) and the corner lines (black lines) are extracted as the corner candidate.

2.2.4. Corner Determination

By using 3D LIDAR, the false positive rate can be reduced by checking for redundant corner candidates across the layers. Figure 8 shows the 3D positions of the corner candidates extracted from the eight layers and the corner determination result.
In Figure 8a, multiple corner candidates from each layer can be seen. Generally, the outer wall of a building is erected vertically from the ground; thus, as shown in Figure 8b, the 2D positions of the corner candidates extracted for one vertical corner are approximately the same. Corner candidates that lie within a certain area in the 2D east-north frame are classified as the same corner. Then, if the number of corner candidates within that area is greater than a predetermined number (redundancy check), the corner is confirmed. In Figure 8b, the position and angles of the determined corner (red) are the average values of its corner candidates (black). A sketch of this clustering and redundancy check follows.
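A minimal sketch of the determination step, assuming candidates are stacked as rows of [east, north, θ1, θ2]; the radius and count values are illustrative, since the paper's thresholds are not stated:

```python
import numpy as np

def determine_corners(candidates, radius=0.3, min_count=3):
    """Cluster per-layer corner candidates and apply the redundancy check.

    candidates: (M, 4) array of rows [east, north, theta1, theta2],
    pooled over the eight layers.
    """
    remaining = list(range(len(candidates)))
    corners = []
    while remaining:
        i = remaining.pop(0)
        group = [i]
        # gather all candidates within `radius` of candidate i in the EN plane
        for j in remaining[:]:
            if np.linalg.norm(candidates[j, :2] - candidates[i, :2]) < radius:
                group.append(j)
                remaining.remove(j)
        if len(group) >= min_count:            # redundancy check across layers
            # naive column-wise mean; a circular mean would be safer for the
            # angle columns near the +/-180 degree wrap-around
            corners.append(candidates[group].mean(axis=0))
    return corners
```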

2.2.5. Corner Extraction Result

Figure 9 shows the final corner extraction result.
In Figure 9, the finally extracted line segments are shown in green, and outlier objects such as roadside trees are shown in yellow. The stars represent the corner candidates, and the red stars represent the finally extracted corners. As shown in Figure 8 and Figure 9, each black star was extracted from only one layer, so these candidates were rejected by the redundancy check.

3. Corner Map Generation

A corner map is essential for precise vehicle localization. By using the corner map, the vehicle position can be very accurately estimated.

3.1. Corner Map Definition

Basically, a corner map must have the 2D positions of the corners. However, if the corner map only has the position information of the corners, problems can arise in the data association process. Figure 10 shows an extraction result of the two nearest corners.
In Figure 10, the red circles represent the corner positions stored in the corner map, and the red stars represent the extracted corners; the distance between the two corners is approximately 2.5 m. In this situation, when the vehicle position error exceeds about 1 m, the possibility of data association failure increases greatly. Therefore, the directional angle information of the two corner lines is added to the corner map (see Figure 2). The two corners shown in Figure 10 can then be clearly distinguished by using the directional angles of their corner lines.
Finally, to determine the quality of the extracted corner information, the position error covariance of the corner is added to the corner map. Table 1 shows an example of a corner map.
As shown in Table 1, the corner map information is saved as a text file. For a driving path of about 2 km, the number of extracted corners is 271 and the text file size is 28 KB. The corner map therefore has the advantage of covering a large area with a very small data file; a sketch of a loader for this format follows.
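Assuming a whitespace-separated layout matching the columns of Table 1 (the exact file format is not given in the paper), a loader could look like this:

```python
import numpy as np

def load_corner_map(path):
    """Parse a corner map text file laid out like Table 1.

    Assumed columns per line: index, east [m], north [m],
    direction angle 1 [deg], direction angle 2 [deg], and the four
    row-major entries of the 2 x 2 covariance matrix.
    """
    corners = []
    with open(path) as f:
        for line in f:
            try:
                fields = [float(v) for v in line.split()]
            except ValueError:
                continue                      # skip header or comment lines
            if len(fields) != 9:
                continue
            _, e, n, a1, a2, c11, c12, c21, c22 = fields
            corners.append({
                "pos": np.array([e, n]),
                "angles_deg": (a1, a2),
                "cov": np.array([[c11, c12], [c21, c22]]),
            })
    return corners
```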

3.2. Corner Map Generation

The experiment was carried out in the Gangnam area of Seoul, South Korea. Figure 11 shows the vehicle trajectory and the street views at the four intersections.
As shown in Figure 11, the experimental environment is a dense urban area surrounded by high buildings, and two laps were driven from the starting point to the finish point. The traveling distance was about 4.5 km and the maximum traveling speed was about 40 km/h. The position and heading of the vehicle were acquired using an integrated Real-Time Kinematic (RTK) GPS and Inertial Navigation System (INS) (NovAtel RTK/SPAN system). In an environment with many high buildings, the position error of the RTK/INS is about 1–2 m; therefore, the vehicle trajectory must be corrected before generating the corner map. The vehicle trajectory was optimized using the GraphSLAM method [28,29], and this optimized trajectory is considered the ground truth in this paper. Figure 12 shows the graph optimization result of the vehicle trajectory.
As shown at the top left of Figure 12, the intensity maps for the two laps do not match. On the right of Figure 12, the red points represent the corrected vehicle trajectory after the graph optimization, and as shown at the bottom left of Figure 12, the intensity maps now match exactly. Here, the incremental pose information output by the ICP algorithm was used as the edge measurements of the graph. The theory and principles of the GraphSLAM method are well described in [28,29]; thus, the optimized vehicle trajectory can be obtained using GraphSLAM.
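The paper does not list its GraphSLAM implementation. As one hedged illustration, the pose graph (vehicle poses as nodes, ICP increments as edges) can be serialized in the widely used g2o SE(2) text format and handed to an off-the-shelf pose graph optimizer:

```python
def write_pose_graph(path, poses, edges):
    """Export a 2D pose graph in the g2o SE(2) text format.

    poses: list of (x, y, theta) initial pose estimates.
    edges: list of (i, j, dx, dy, dtheta, info6), where (dx, dy, dtheta)
    is the ICP pose increment from node i to node j and info6 holds the
    six upper-triangular entries of the 3 x 3 information matrix
    (the information values are left to the caller).
    """
    with open(path, "w") as f:
        for i, (x, y, th) in enumerate(poses):
            f.write(f"VERTEX_SE2 {i} {x} {y} {th}\n")
        for i, j, dx, dy, dth, info6 in edges:
            info = " ".join(str(v) for v in info6)
            f.write(f"EDGE_SE2 {i} {j} {dx} {dy} {dth} {info}\n")
```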
The corner map is generated based on this ground truth. By applying the corner extraction method described in Section 2, the corners extracted at each vehicle position are collected. Next, clustering is performed to group the extracted data belonging to the same corner, and the mean position, mean directional angles, and position error covariance of each cluster are calculated. Finally, to increase the reliability of the corner map, corners with a large position error covariance or a small extraction count are excluded from the corner map. Figure 13 shows the final generated corner map displayed on a synthetic map combining the 2D occupancy grid and road intensity maps.
As shown in Figure 13, many corners were extracted, and false positives did not occur.

3.3. Data Association with the Corner Map

First, the extracted corners need to be associated with the corner map data. The data association considers the position error covariance of the vehicle. Figure 14 shows the data association process between the extracted corners and the corner map.
In Figure 14, the red circle indicates a corner position from the corner map and the red star indicates the position of an extracted corner. The red lines represent the directional angles of the corner lines, and the magenta ellipse represents the position uncertainty of the corner caused by the position error covariance of the vehicle. As shown in Figure 14, when the red circle falls inside the elliptic region and the difference between the directional angles of the map corner and the extracted corner is less than a predetermined angle, the two corners are determined to be the same corner. A sketch of this gating test follows.
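A minimal sketch of the two-part gate, reusing the corner dictionaries from the loader sketch above; the chi-square gate and the angle threshold are assumptions, not the paper's values:

```python
import numpy as np

def wrap_deg(a):
    """Wrap an angle difference into [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

def associate(extracted, map_corners, P_pos, gate=9.21, max_angle_diff=10.0):
    """Match one extracted corner against the corner map.

    extracted: dict with 'pos' and 'angles_deg'. P_pos: 2 x 2 position
    covariance of the extracted corner induced by the vehicle position
    uncertainty (the magenta ellipse of Figure 14). gate is the
    chi-square value for 2 DOF at 99%.
    """
    P_inv = np.linalg.inv(P_pos)
    for m in map_corners:
        d = m["pos"] - extracted["pos"]
        if d @ P_inv @ d > gate:               # outside the uncertainty ellipse
            continue
        da1 = wrap_deg(m["angles_deg"][0] - extracted["angles_deg"][0])
        da2 = wrap_deg(m["angles_deg"][1] - extracted["angles_deg"][1])
        if abs(da1) < max_angle_diff and abs(da2) < max_angle_diff:
            return m                           # declared the same corner
    return None
```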

4. Kalman Filter Configuration and Observability Analysis

In this paper, the general 2D point landmark-based positioning technique is used to estimate the vehicle position [30]. The landmarks are the vertical corners of the buildings along the street.

4.1. Motion Update

The motion update is driven by the translation vector ($T$) and the rotation ($R$) output by the ICP algorithm applied to the range data of the 3D LIDAR. The state variables are defined as follows:
$$\mathbf{q} = (x,\ y,\ \theta)^T, \qquad \mathbf{u} = (T,\ R)^T \quad (1)$$
where $x$ and $y$ are the horizontal (2D) position of the vehicle in the ENU frame and $\theta$ is the heading angle of the vehicle. $T$ and $R$ are the increments of the 2D position and the heading angle, respectively. The state equation is as follows:
$$\mathbf{q}_{k+1} = F_k \mathbf{q}_k + G_k \mathbf{u}_k, \qquad F_k = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad G_k = \begin{bmatrix} \cos\theta_k & -\sin\theta_k & 0 \\ \sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$
In Equation (2), the vehicle motion is updated by accumulating the results of the ICP.
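Equation (2) only defines the deterministic state propagation. A hedged sketch of the EKF prediction step built on it follows; the process noise $Q$ of the ICP output and the linearized Jacobian are standard EKF additions assumed here, not stated in the paper:

```python
import numpy as np

def motion_update(q, P, T, R_inc, Q):
    """EKF prediction with the ICP increment, following Equation (2).

    q: state (x, y, theta); P: 3 x 3 state covariance.
    T: 2D translation increment in the vehicle frame; R_inc: heading
    increment; Q: process noise covariance of the ICP output (assumed).
    """
    x, y, th = q
    G = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0,         0.0,        1.0]])
    u = np.array([T[0], T[1], R_inc])
    q_new = q + G @ u                          # F_k is the identity
    # Jacobian of the motion model with respect to the state
    Fj = np.eye(3)
    Fj[0, 2] = -np.sin(th) * T[0] - np.cos(th) * T[1]
    Fj[1, 2] =  np.cos(th) * T[0] - np.sin(th) * T[1]
    P_new = Fj @ P @ Fj.T + G @ Q @ G.T
    return q_new, P_new
```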

4.2. Measurement Update

The range and bearing measurements between the vehicle and the corners are used for the measurement update. The measurement equations are as follows.
$$r_i = \sqrt{(x_i - x)^2 + (y_i - y)^2} \quad (3)$$
$$\alpha_i = \tan^{-1}\!\left(\frac{y_i - y}{x_i - x}\right) - \theta \quad (4)$$
where $r_i$ and $\alpha_i$ are the range and bearing measurements between the vehicle and the $i$th corner, respectively, and $(x_i, y_i)$ is the 2D position of the $i$th corner in the corner map. The measurement matrix $H$ is as follows:
$$H = \begin{bmatrix} -\dfrac{x_1 - x}{r_1} & -\dfrac{y_1 - y}{r_1} & 0 \\ \dfrac{y_1 - y}{r_1^2} & -\dfrac{x_1 - x}{r_1^2} & -1 \\ \vdots & \vdots & \vdots \\ -\dfrac{x_n - x}{r_n} & -\dfrac{y_n - y}{r_n} & 0 \\ \dfrac{y_n - y}{r_n^2} & -\dfrac{x_n - x}{r_n^2} & -1 \end{bmatrix} \quad (5)$$

where $r_i$ is the range of Equation (3).
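Putting Equations (3)–(5) together, a hedged sketch of the EKF measurement update follows; the Kalman gain step itself is standard and not spelled out in the paper, and the measurement noise covariance is assumed:

```python
import numpy as np

def measurement_update(q, P, corners, z, R_meas):
    """EKF update with range/bearing measurements to n map corners.

    corners: (n, 2) map positions; z: (2n,) stacked [r_i, alpha_i]
    measurements; R_meas: 2n x 2n measurement noise covariance (assumed).
    Builds h(q) and H as in Equations (3)-(5).
    """
    x, y, th = q
    n = len(corners)
    h = np.zeros(2 * n)
    H = np.zeros((2 * n, 3))
    for i, (xi, yi) in enumerate(corners):
        dx, dy = xi - x, yi - y
        r2 = dx * dx + dy * dy
        r = np.sqrt(r2)
        h[2 * i] = r
        h[2 * i + 1] = np.arctan2(dy, dx) - th
        H[2 * i] = [-dx / r, -dy / r, 0.0]
        H[2 * i + 1] = [dy / r2, -dx / r2, -1.0]
    v = z - h
    v[1::2] = (v[1::2] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residuals
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    return q + K @ v, (np.eye(3) - K @ H) @ P
```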

4.3. Observability Analysis

The observability of the vehicle state estimation using the corner point measurements must be verified. The observability of the vehicle state is analyzed in an algebraic framework [31]. First, the state evolution model is as follows:
$$\dot{x} = \|T\|\cos\theta, \qquad \dot{y} = \|T\|\sin\theta, \qquad \dot{\theta} = R \quad (6)$$
where $\|\cdot\|$ is the Euclidean norm and $R$ is a scalar (1 × 1 matrix). Rearranging Equation (4) with respect to $\theta$ gives:
$$\theta = \tan^{-1}\!\left(\frac{y_i - y}{x_i - x}\right) - \alpha_i \quad (7)$$
By taking the time derivative of Equation (7), we obtain Equation (8):
$$\dot{\theta} = \frac{1}{1 + \left(\frac{y_i - y}{x_i - x}\right)^2}\left\{\dot{x}\,\frac{y_i - y}{(x_i - x)^2} - \dot{y}\,\frac{1}{x_i - x}\right\} \quad (8)$$
By substituting Equation (6) into Equation (8), we obtain the following equation:
$$R = \frac{1}{1 + \left(\frac{y_i - y}{x_i - x}\right)^2}\left\{\|T\|\cos\theta\,\frac{y_i - y}{(x_i - x)^2} - \|T\|\sin\theta\,\frac{1}{x_i - x}\right\} \quad (9)$$
By substituting Equation (7) into Equation (9), we obtain Equation (10):
$$R = \frac{\|T\|\left[\dfrac{y_i - y}{x_i - x}\cos\!\left\{\tan^{-1}\!\left(\dfrac{y_i - y}{x_i - x}\right) - \alpha_i\right\} - \sin\!\left\{\tan^{-1}\!\left(\dfrac{y_i - y}{x_i - x}\right) - \alpha_i\right\}\right]}{(x_i - x)\left\{1 + \left(\dfrac{y_i - y}{x_i - x}\right)^2\right\}} \quad (10)$$
In Equations (3) and (10), $r_i$ is the range measurement, $\|T\|$ and $R$ are the input values from the ICP, $x_i$ and $y_i$ are the position information from the corner map, and $x$ and $y$ are unknowns. Since there are two equations in two unknowns, $x$ and $y$ can be calculated; therefore, $x$ and $y$ are observable. In Equation (7), since $\alpha_i$ is a measurement, $\theta$ is also observable.
This observability analysis shows that if at least one corner feature can be extracted while the vehicle is moving, the vehicle position can be estimated through corner map matching. A numeric illustration follows.
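As a purely numerical illustration of this argument (not part of the paper's filter), the vehicle position can be recovered in closed form from a single corner observation once the heading is known, by inverting Equations (3) and (4):

```python
import numpy as np

def position_from_one_corner(corner, r, alpha, theta):
    """Recover the vehicle position from one corner observation,
    given the heading theta."""
    xi, yi = corner
    bearing_world = alpha + theta              # bearing in the ENU frame
    return np.array([xi - r * np.cos(bearing_world),
                     yi - r * np.sin(bearing_world)])

# quick self-check with a synthetic corner and pose
true_pos = np.array([3.0, -2.0])
corner = np.array([10.0, 10.0])
theta = 0.3
d = corner - true_pos
r = np.linalg.norm(d)
alpha = np.arctan2(d[1], d[0]) - theta
assert np.allclose(position_from_one_corner(corner, r, alpha, theta), true_pos)
```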

5. Experimental Results

In this section, we analyze the experimental results by comparing the corner map matching, RTK/INS, and ICP based Dead Reckoning (DR). We also analyze the causes of the position error when the corner map matching is used.
Figure 15 shows the vehicle trajectory for each case. The trajectory for the corner map matching is almost the same as the ground truth. In the case of RTK/INS, the trajectory has a small offset error with respect to the ground truth. The trajectory for the ICP based DR significantly differs from the ground truth. Figure 16 shows the accumulated position error that occurs when using only the ICP based DR without the corner map matching. As is well known, the position error of the ICP based DR is divergent. This accumulated position error can be removed by using the corner map matching. Figure 17 shows the result of the corner map matching.
In Figure 17, the blue line represents the position error of the RTK/INS, and the red line represents the position error of the corner map matching. The maximum position error of the RTK/INS is about 1.2 m. Here, the position error of the RTK/INS at the start point is very small because the vehicle trajectory is optimized based on the start point. On the other hand, the maximum position error of the corner map matching is about 0.46 m. The 2D RMS horizontal errors of the RTK/INS and the corner map matching are about 0.85 m and 0.138 m, respectively. In Figure 18, it can be seen that the heading error is very small. The RMS heading error of the corner map matching is about 0.168°. Figure 19 shows the Cumulative Distribution Function (CDF) of the horizontal vehicle position error.
The corner map matching has localization accuracies of about 0.25 m and 0.33 m at 95% and 99% confidence levels, respectively. Figure 20 shows the CDF of the 2D RMS horizontal errors for each corner (271 corners) and the horizontal position errors of all the extracted corners (13,260 corner extraction data).
In Figure 20, the horizontal position accuracies of all the extracted corners are about 0.27 m and 0.45 m at 95% and 99% confidence levels, respectively. Each corner has 2D RMS horizontal errors of about 0.26 m and 0.32 m at 95% and 99% confidence levels, respectively. As shown in Figure 19 and Figure 20, the corner map matching error is very small, since the corner measurement is accurate. Also, the CDF of the 2D RMS error for each corner shows that the position error for each corner is affected by the characteristic of the corresponding corner.
As shown in Figure 17 and Figure 19, the corner map matching error exceeds 0.33 m in several regions. These errors are related to the number of extracted corners, the distance from the vehicle to the corner, and the corner shape. First, let us consider the number of extracted corners, as shown in Figure 21.
Figure 21 shows that fewer than five corners are extracted in most areas. Since there are many roadside trees in urban areas, it is impossible to extract a large number of corners; furthermore, if a building has glass walls, its corners cannot be extracted. Figure 22 shows the relationship between the localization error and the number of extracted corners. On the left of Figure 22, a large localization error occurs in the epochs from 1950 to 2020; on the right of Figure 22, it can be seen that corners were barely extracted during the same epochs. When no corner is extracted, the position error of the ICP based DR accumulates. However, as shown in Figure 22, the accumulated position error is reduced as soon as a corner is extracted. Figure 23 shows the vehicle trajectory with the position error covariance over the same epochs.
At the bottom right of Figure 23, when a corner was extracted, the accumulated position error and the position error covariance were reduced. Here, the covariance ellipse is not drawn to actual size; it was enlarged for clarity. Thus, even when only one corner is extracted, the vehicle state can be estimated, as demonstrated in Section 4.3. Of course, if more corners are extracted, the localization performance improves.
Secondly, the localization accuracy is related to the distance from the vehicle to the corner. Figure 24 shows the relationship between the corner position error covariance and the distance from the vehicle to the corner.
As shown in Figure 24, the position accuracy of the extracted corner is most closely related to the distance from the vehicle to the extracted corner: the farther the corner is from the vehicle, the greater the error covariance of the corner position. This is because the ranging accuracy of the LIDAR decreases as the distance to the object increases. Also, the horizontal resolution of the laser beam is 0.16°; if an object is 50 m from the vehicle, the position uncertainty of the reflected point is about 0.14 m. Figure 25 shows that the 2D RMS error of the extracted corners increases in proportion to the distance. Thus, the measurement of a corner that is far from the vehicle can be the cause of a relatively large error.
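As a quick check of the 0.14 m figure, the arc length subtended by one 0.16° beam step at a range of 50 m is

$$s = r\,\Delta\phi = 50\ \mathrm{m} \times \left(0.16^\circ \times \frac{\pi}{180^\circ}\right) \approx 50\ \mathrm{m} \times 2.79 \times 10^{-3}\ \mathrm{rad} \approx 0.14\ \mathrm{m}.$$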
Finally, let us consider the corner shape. Figure 26 shows the examples of good and poor corners for the corner shape.
In the good corner example, the two corner lines are nearly perpendicular and the error covariance is very small. The position accuracy of the extracted corner is related to the Dilution of Precision (DOP) between the two corner lines [32]: when the two corner lines are perpendicular, the DOP of the corner position is minimized. However, as shown at the bottom right of Figure 26, the position of the poor corner has a large uncertainty. Figure 27 shows the estimated vehicle trajectory in a situation affected by a poor corner measurement with a large covariance.
As shown on the right of Figure 27, the corner map matching error caused by the poor corner measurement is relatively large. These errors are about 0.3 m.
So far, the vertical corner feature based precise vehicle localization method has been described. The results show that the corner map matching performance is determined by the accuracy of the extracted corners. The corner is a very good landmark that can be extracted accurately, and therefore the vehicle position can be estimated accurately. The results also show that the corner map matching method can guarantee very good localization accuracy in an urban area; the performance can be further improved by accurately extracting more corners.

6. Conclusions

In this paper, we proposed a vertical corner feature based precise vehicle localization method using 3D LIDAR in an urban area. First, we defined the corner feature and generated a corner map using the proposed corner extraction method, as described in Sections 2 and 3. After generating the corner map, we estimated the vehicle position using corner map matching. In the experimental results, the maximum and 2D RMS horizontal errors obtained using the corner map matching were about 0.46 m and 0.138 m, respectively.
The vertical corner feature based localization method has several advantages. First, the corner map matching performance is very good: the 2D RMS horizontal error is about 0.138 m. Second, in contrast to road intensity map based localization methods, the corner map matching method can be used under traffic congestion. Third, the data file size of the corner map is very small compared to that of a road intensity map, and the computation time of the corner map matching is also very short compared to other map matching methods.
As previously mentioned, the accuracy of the corner measurements and the number of extracted corners are very important for the precise vehicle localization. Accordingly, a method for extracting many corners with higher accuracy should be investigated in the future.

Author Contributions

J.H. Im proposed the vertical corner feature based precise vehicle localization method using 3D LIDAR and the corner map definition; S.H. Im provided advice for improving the quality of this research; G.I. Jee supervised the research work; J.H. Im wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kummerle, R.; Hahnel, D.; Dolgov, D.; Thrun, S.; Burgard, W. Autonomous Driving in a Multi-level Parking Structure. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009.
  2. Brenner, C. Vehicle Localization Using Landmarks Obtained by a LIDAR Mobile Mapping System. Proc. Photogramm. Comput. Vis. Image Anal. 2010, 38, 139–144. [Google Scholar]
  3. Baldwin, I.; Newman, P. Road vehicle localization with 2D push-broom LIDAR and 3D priors. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
  4. Chong, Z.J.; Qin, B.; Bandyopadhyay, T.; Ang, M.H., Jr.; Frazzoli, E.; Rus, D. Synthetic 2D LIDAR for Precise Vehicle Localization in 3D Urban Environment. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013.
  5. Hu, Y.; Gong, J.; Jiang, Y.; Liu, L.; Xiong, G.; Chen, H. Hybrid Map-Based Navigation Method for Unmanned Ground Vehicle in Urban Scenario. Remote Sens. 2013, 5, 3662–3680. [Google Scholar] [CrossRef]
  6. Choi, J.; Maurer, M. Hybrid Map-based SLAM with Rao-Blackwellized Particle Filters. In Proceedings of the 2014 17th International Conference on Information Fusion (FUSION), Salamanca, Spain, 7–10 July 2014.
  7. Choi, J. Hybrid Map-based SLAM using a Velodyne Laser Scanner. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014.
  8. Levinson, J.; Thrun, S. Robust Vehicle Localization in Urban Environments Using Probabilistic Maps. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010.
  9. Levinson, J.; Montemerlo, M.; Thrun, S. Map-Based Precision Vehicle Localization in Urban Environments. In Robotics Science and Systems; MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  10. Hata, A.; Wolf, D. Road Marking Detection Using LIDAR Reflective Intensity Data and its Application to Vehicle Localization. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014.
  11. Wolcott, R.W.; Eustice, R.M. Visual Localization within LIDAR Maps for Automated Urban Driving. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014.
  12. Wolcott, R.W.; Eustice, R.M. Fast LIDAR Localization using Multiresolution Gaussian Mixture Maps. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015.
  13. Li, Y.; Olson, E.B. Extracting General Purpose Features from LIDAR Data. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010.
  14. Chee, Y.W.; Yang, T.L. Vision and LIDAR Feature Extraction; Technical Report; Cornell University: Ithaca, NY, USA, 2013. [Google Scholar]
  15. Park, Y.S.; Yun, S.M.; Won, C.S.; Cho, K.E.; Um, K.H.; Sim, S.D. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed]
  16. Hadji, S.E.; Hing, T.H.; Khattak, M.A.; Ali, M.S.M.; Kazi, S. 2D Feature Extraction in Sensor Coordinates for Laser Range Finder. In Proceedings of the 15th International Conference on Robotics, Control and Manufacturing Technology (ROCOM ’15), Kuala Lumpur, Malaysia, 23–25 April 2015.
  17. Yan, R.; Wu, J.; Wang, W.; Lim, S.; Lee, J.; Han, C. Natural Corners Extraction Algorithm in 2D Unknown Indoor Environment with Laser Sensor. In Proceedings of the 2012 12th International Conference on Control, Automation and Systems, Jeju Island, Korea, 17–21 October 2012.
  18. Siadat, A.; Djath, K.; Dufaut, M.; Husson, R. A Laser-Based Mobile Robot Navigation in Structured Environment. In Proceedings of the 1999 European Control Conference (ECC), Karlsruhe, Germany, 31 August–3 September 1999.
  19. Vazquez-Martin, R.; Nunez, P.; Bandera, A.; Sandoval, F. Curvature-Based Environment Description for Robot Navigation Using Laser Range Sensors. Sensors 2009, 9, 5894–5918. [Google Scholar] [CrossRef] [PubMed]
  20. Lee, S.Y.; Song, J.B. Mobile Robot Localization Using Range Sensors: Consecutive Scanning and Cooperative Scanning. Int. J. Control Autom. Syst. 2005, 3, 1–14. [Google Scholar]
  21. Zhang, X.; Rad, A.B.; Wong, Y.K. Sensor Fusion of Monocular Cameras and Laser Rangefinders for Line-Based Simultaneous Localization and Mapping (SLAM) Tasks in Autonomous Mobile Robots. Sensors 2012, 12, 429–452. [Google Scholar] [CrossRef] [PubMed]
  22. Velodyne LiDAR. Velodyne HDL-32E, User’s Manual and Programming Guide; LIDAR Manufacturing Company: Morgan Hill, CA, USA, 2012. [Google Scholar]
  23. Arras, K.O.; Siegwart, R. Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building. In Proceedings of the Symposium on Intelligent Systems and Advanced Manufacturing, Pittsburgh, PA, USA, 14–17 October 1997; Volume 3210, pp. 42–53.
  24. Borges, G.A.; Aldon, M.J. Line Extraction in 2D Range Images for Mobile Robotics. J. Intell. Robot. Syst. 2004, 40, 267–297. [Google Scholar] [CrossRef]
  25. Siadat, A.; Kaske, A.; Klausmann, S.; Dufaut, M.; Husson, R. An Optimized Segmentation Method for a 2D Laser-Scanner Applied to Mobile Robot Navigation. In Proceedings of the 3rd IFAC Symposium on Intelligent Components and Instruments for Control Applications, Annecy, France, 9–11 June 1997.
  26. Harati, A.; Siegwart, R. A New Approach to Segmentation of 2D Range Scans into Linear Regions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007.
  27. Nguyen, V.; Martinelli, A.; Tomatis, N.; Siegwart, R. A Comparison of Line Extraction Algorithms Using 2D Laser Rangefinder for Indoor Mobile Robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005.
  28. Grisetti, G.; Kummerle, R.; Stachniss, C.; Burgard, W. A Tutorial on Graph-Based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [Google Scholar] [CrossRef]
  29. Sunderhauf, N.; Protzel, P. Towards a Robust Back-End for Pose Graph SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012.
  30. Hu, H.; Gu, D. Landmark-based Navigation of Industrial Mobile Robots. Int. J. Ind. Robot 2000, 27, 458–467. [Google Scholar] [CrossRef]
  31. Tao, Z.; Bonnifait, P. Road Invariant Extended Kalman Filter for an Enhanced Estimation of GPS Errors using Lane Markings. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015.
  32. Kelly, A. Precision dilution in triangulation based mobile robot position estimation. Intell. Auton. Syst. 2003, 8, 1046–1053. [Google Scholar]
Figure 1. Corner definition.
Figure 2. Attribute of the corner feature.
Figure 3. Line extraction result using the IEPF algorithm.
Figure 4. Laser scanning points reflected by the leaves of roadside trees and the outer wall of the building.
Figure 5. Pseudo code for outlier removal.
Figure 6. Line extraction result (after outlier removal).
Figure 7. Extracted corner candidate (black star and lines).
Figure 8. (a) 3D position of corner candidates; (b) corner determination result.
Figure 9. Corner extraction result.
Figure 10. Extraction result of the two nearest corners.
Figure 11. Vehicle trajectory and street view (four intersections).
Figure 12. Graph optimization result of the vehicle trajectory using GraphSLAM.
Figure 13. Generated corner map information (on the synthetic map).
Figure 14. Data association with the corner map.
Figure 15. Vehicle trajectory for each case.
Figure 16. Position error (using only the ICP based DR).
Figure 17. Position error (corner map matching and RTK/INS).
Figure 18. Heading error (corner map matching).
Figure 19. CDF of the horizontal vehicle position error.
Figure 20. CDF of the 2D RMS horizontal errors for each corner and the horizontal position errors of all extracted corners.
Figure 21. Number of extracted corners.
Figure 22. Relationship between the localization error and the number of extracted corners.
Figure 23. Vehicle trajectory with the position error covariance (the blue trajectory is the ground truth; the red trajectory is the estimated trajectory; the red ellipses represent the vehicle position error covariance; the green lines connect the extracted corners to the vehicle position).
Figure 24. Relationship between the corner position error covariance and the distance from the vehicle to the corner (the red ellipses represent the position error covariance of the extracted corners).
Figure 25. 2D RMS error of the extracted corners with respect to distance.
Figure 26. Examples of good and poor corners for the corner shape (the red ellipses represent the position error covariance of the extracted corners).
Figure 27. Estimated vehicle trajectory in the situation affected by the poor corner measurement (the blue trajectory is the ground truth).
Table 1. An example of a corner map.

| Index | East (m) | North (m) | Direction Angle 1 (°) | Direction Angle 2 (°) | Covariance Matrix (2 × 2) |
|-------|----------|-----------|-----------------------|-----------------------|---------------------------|
| 1 | 10 | 10 | 132 | 45 | [0.0021 0.0009; 0.0009 0.0029] |
| 2 | 20 | 30 | −13 | −13 | [0.0042 −0.001; −0.001 0.0015] |
