Article

GPS/DR Error Estimation for Autonomous Vehicle Localization

Byung-Hyun Lee, Jong-Hwa Song, Jun-Hyuck Im, Sung-Hyuck Im, Moon-Beom Heo and Gyu-In Jee

1 Electronics Engineering, Konkuk University, Seoul 143-701, Korea
2 Satellite Navigation Team, Korea Aerospace Research Institute (KARI), Daejeon 305-806, Korea
* Author to whom correspondence should be addressed.
Sensors 2015, 15(8), 20779-20798; https://doi.org/10.3390/s150820779
Submission received: 16 June 2015 / Revised: 30 July 2015 / Accepted: 11 August 2015 / Published: 21 August 2015
(This article belongs to the Special Issue Sensors in New Road Vehicles)

Abstract

Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on lane detection with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following. The advantage of the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections that follow a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that sub-meter positioning accuracy was achieved.

1. Introduction

Autonomous land vehicles require a level of accuracy that enables lane-level decisions. In prior research, Lee [1] and Serrano [2] obtained position accuracy that enabled lane decisions by using a precise positioning algorithm based on GPS-RTK under an open sky. It is particularly important to develop guidance systems for autonomous vehicles, since autonomous vehicles have to follow particular paths. Although autonomous driving is possible in limited regions by recognizing the surroundings, intelligent transportation services will only be commercially viable if the position of a vehicle is known with sufficient accuracy.
Therefore, the position information of vehicles should be sufficiently accurate to enable path-following functionality. If the information is sufficiently accurate and globally valid, then a safe path can be followed by implementing a simple guidance method. However, a GPS-RTK system requires data from a reference station and active communication links to receive all of the necessary information, so additional infrastructure is required. Also, standalone GPS with a Satellite Based Augmentation System (SBAS) cannot offer the accuracy necessary for use in autonomous vehicles.
Waypoints and position information are necessary to guide vehicles, and inaccurate position information and waypoints can lead autonomous vehicles to wrong locations or cause them to run along a biased route. Furthermore, lane keeping can be carried out using vision algorithms rather than position information. However, the disadvantage of this method is that lanes must exist, so it cannot be used in areas where the continuity of the lanes is not guaranteed, such as intersections and crosswalks. Moreover, problems can occur if lane detection is carried out in curved sections with straight lane models. Thus, the proposed method uses vision sensors and a curved line model to detect curved lanes and to calculate the lateral distance to the detected lanes. In addition, a stable autonomous driving method is achieved by estimating the GPS/DR errors through the use of longitudinal distance information obtained by detecting stop lines and by matching the curved lane with the waypoints.

2. Related Work

Many studies have been carried out on vision-based and map-assisted localization methods [3,4,5,6], and computer vision and other techniques from robotic systems have been used to successfully follow paths without absolute position information. Although such technologies have previously been applied in small areas, the DARPA Grand Challenge (DGC) in 2005 and the DARPA Urban Challenge (DUC) in 2007 nevertheless required absolute position information because the vehicle needed to navigate a large area.
Simultaneous Localization and Mapping (SLAM) is a popular mapping method (in this paper, a waypoint map is used). SLAM performs efficient localization with map building by using reflectivity information obtained with vision sensors or LiDAR and an estimator such as an Extended Kalman Filter (EKF) or a Rao-Blackwellized Particle Filter (RBPF) [7,8,9]. This is similar to the proposed method in that SLAM performs localization through landmarks and a map. The difference is that SLAM uses all features for localization, while the method proposed in this study performs localization via GPS/DR error estimation by detecting the lanes and stop lines.
Other methods use the intensity of 3D-LiDAR to generate surface maps of the road, and the vehicle is then localized through map matching [10,11,12]. However, this cannot be applied to every vehicle because 3D-LiDAR is expensive and the surface map is a new data format that does not yet have a standard.
One localization method that uses a vision sensor is the visual odometry technique. Bak [13], Cuong [14], and Scaramuzza [15] used point features to extract the odometry information of the ego-vehicle. A lane, which is a line feature, is more salient than point features and is used as a landmark in this paper.
Gruyer [16], Ieng [17] and Li [18] presented localization methods that use lane detection and are similar to the one proposed in this study in terms of using the lateral distance with map assistance. In particular, Gruyer and Ieng measured the lateral distance using lateral camera systems placed on the outside of the vehicle. However, the proposed method performs lane tracking not in the image frame but in the vehicle frame by using a curved-model lane tracker, which reduces the lateral distance error that occurs when using the front camera, as described in Section 4.2. The measurement equation derived in the lane frame also differs from those of the other papers. In this manner, safe autonomous driving is supported in areas with sharp curves by reducing the lateral distance error.

3. Localization Problems in Autonomous Driving

Autonomous vehicles should be able to perform continuous and reliable localization. The vehicle can be assumed to have a precise surveyed map that consists of waypoints to be followed in addition to stop line information. The commands to follow the waypoints include the steering angle and speed, and the position and heading angle of the vehicle are necessary to generate the proper steering angle and speed. A driving environment can basically be classified into three parts: a straight lane, a curved lane, and a short region without a lane, such as an intersection. In a straight lane, it is possible to obtain the lateral distance by detecting the lane, and the navigation system accurately estimates the lateral position when this distance is measured. The longitudinal position can also be accurately estimated by using the distance measured when a stop line has been detected. Problems occur at an intersection and in the area marking a transition from a straight to a curved section. Computer vision cannot provide any information at the intersection, so no correction information is available for localization. In addition, the area where a straight lane turns into a curve provides no longitudinal information, and an error in the longitudinal position results in steering commands that are issued early or late, which causes a departure from the lane. Therefore, autonomous driving with only a precise lateral position is possible in a straight section, but a precise longitudinal position is essential in the area where a curved lane begins and where the vehicle enters an intersection. In addition, the navigation system should provide a precise position at the intersection.

4. GPS/DR Error Estimator

The proposed navigation system consists of a GPS, an Inertial Measurement Unit (IMU), an odometer, a map and a vision sensor (Figure 1a). The GPS/DR system consists of a GPS receiver, an odometer and a gyroscope (Figure 1b), and it provides the position and heading angle. The odometer filter estimates the velocity of the vehicle, the heading filter estimates the heading angle, and the position filter estimates the position and the heading of the GPS/DR system. The GPS/DR system model is summarized as follows.
Figure 1. (a) Overall system structure; (b) GPS/DR system structure.
Odometer filter:
$$\delta \dot{SF}_{odo} = w_{odo}, \qquad z_{odo} = n\,\delta SF_{odo} + v_{odo} = V_{GPS} - V_{odo}$$
In this equation, $\delta SF_{odo}$ is the scale factor error of the odometer, the measurement $z_{odo} = V_{GPS} - V_{odo}$ is the difference between the GPS and odometer speeds, and $n$ is the number of odometer pulses.
Heading filter:
$$x_{head,k} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \delta\theta_{k-1} \\ \delta B_{k-1} \end{bmatrix} + w_{k-1} = F_{head,k-1}\, x_{head,k-1} + w_{k-1}$$
$$z_{head,k}^{stop} = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \delta\theta_k \\ \delta B_k \end{bmatrix} + v_k, \qquad z_{head,k}^{move} = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} \delta\theta_k \\ \delta B_k \end{bmatrix} + v_k$$
In this equation, $\delta\theta$ and $\delta B$ represent the heading angle error of the vehicle and the bias error of the gyroscope, $z_{head}^{move}$ is the measurement when the vehicle is moving, and $z_{head}^{stop}$ is the measurement when the vehicle is stopped.
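For illustration, the following Python sketch shows one possible implementation of these two sub-filters; the function names, array shapes, and noise parameters are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

def odometer_update(sf_err, p, v_gps, v_odo, n_pulses, r):
    """Scalar Kalman update of the odometer scale factor error,
    z = V_GPS - V_odo = n * dSF + v, following Equation (1)."""
    h = n_pulses
    k = p * h / (h * p * h + r)            # Kalman gain
    z = v_gps - v_odo
    sf_err = sf_err + k * (z - h * sf_err)
    p = (1.0 - k * h) * p
    return sf_err, p

def heading_predict(x, P, dt, Q):
    """Time update of the heading filter state [heading error, gyro bias error]
    with the transition matrix of Equation (2)."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    return F @ x, F @ P @ F.T + Q

def heading_update(x, P, z, moving, R):
    """Measurement update; the observation matrix switches between the
    stopped and moving cases as written in Equation (3)."""
    H = np.array([[0.0, 1.0]]) if moving else np.array([[1.0, 0.0]])
    S = H @ P @ H.T + np.atleast_2d(R)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ np.atleast_1d(z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```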
Figure 2 shows the performance of the estimated position/velocity of the GPS/DR system.
Image processing is used with a vision sensor to detect and track lanes and stop lines, and this information is used to measure the lateral and longitudinal distances.
An EKF-based GPS/DR error estimation filter estimates the GPS/DR error using the lateral/longitudinal distance measurements obtained from the image processing system. In this study, we assume that the waypoint (map) is very accurate, which makes GPS/DR error estimation possible. If the waypoint has an error, then that error is treated as a GPS/DR position error. Thus, the positioning result of the proposed system may not be accurate in a global frame (erroneous latitude and longitude), but it is accurate in the waypoint frame, which makes safe autonomous driving possible.
Figure 2. (a) GPS/DR position error; (b) GPS/DR heading error.

4.1. GPS/DR Error Modeling

In an open sky, the error components can be modeled by using the Dilution of Precision (DOP) error component, which results from the placement of the satellites, excluding the thermal noise, clock error, and atmospheric error components [19,20]. The range measurement obtained from a satellite can be modeled as follows:
$$\rho_i = p + d_p + c\,(dt - dT) + d_{atmosphere} + \varepsilon_\rho$$
In this equation, $\rho_i$ is the range measurement of satellite $i$, and $p$, $d_p$, $dt$, $dT$, $d_{atmosphere}$, and $\varepsilon_\rho$ represent the geometrical distance between the satellite and the receiver, the orbit error of the satellite, the clock error of the satellite, the clock error of the receiver, the atmospheric error (the delay due to the ionosphere and troposphere), and the thermal noise of the receiver, respectively. At least four range measurements are required to calculate the position, and non-lane areas are very short during autonomous driving. At an intersection, the length of the section is shorter than 100 m and can be driven within 10 s at a speed of 36 km/h. We can assume that there is no change in the DOP during such short intervals, and therefore, the GPS error can be modeled as a random constant. As shown in Figure 3, the random constant model is appropriate because the rate of change of the GPS/DR position error is small. We correct the GPS/DR position with the estimated GPS error by using the waypoint and vision in lane sections, or predict the GPS/DR error in non-lane sections [21]:
$$x_k = \begin{bmatrix} e_E \\ e_N \end{bmatrix}, \qquad x_{k+1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} e_E \\ e_N \end{bmatrix}$$
$e_{(\cdot)}$ denotes the east and north GPS/DR errors, which are the state variables.
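As a minimal illustration of how this random-constant error estimate is applied, the following sketch corrects the GPS/DR position with the estimated error when lane or stop line measurements are available and otherwise falls back to the predicted (held) error; all names are illustrative and not from the paper.

```python
import numpy as np

def corrected_position(pos_gpsdr, err_estimated, err_predicted, vision_available):
    """Correct the GPS/DR position [east, north] with the estimated error
    when lane/stop line measurements exist, otherwise use the predicted
    error kept by the random-constant model of Section 4.1."""
    err = err_estimated if vision_available else err_predicted
    return np.asarray(pos_gpsdr, dtype=float) - np.asarray(err, dtype=float)
```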
Figure 3. GPS/DR position error rate.

4.2. Curve Model Lane Detection, Curved Parameter Estimation and Lateral Measurement

In general, lane detection methods trace the vanishing point by using a straight line model [22]. When a lane detection method uses a straight line model, an error can occur when lateral measurements are made in sections with a curved lane (Figure 4).
Figure 4 shows that the origin of the vehicle is on the right side of the lane (+), but lane detection with a straight line model estimates it on the left side of the lane (−). Therefore, lane detection is carried out using curved models to remedy these errors. Figure 5 shows a flowchart of the lane detection with curved models. The curve parameters of the lane are estimated by finding pixels that are considered to indicate lanes with a simple edge detector, selecting n points among them, and fitting a second-order polynomial line with the least squares (LSQ) method. Subsequently, outliers are removed by using the estimated curve, as in Figure 6.
Figure 4. Lateral error during lane detection with a straight line model.
Figure 5. Flowchart for the lane detection in a curved model.
Figure 6. Removal of the outliers through curved model estimation: (a) image frame; (b) vehicle frame; (c) before the outlier removal; (d) after the outlier removal.
The constraints of the lane validation method used in this paper are as follows:
(1) The width of both detected lanes is 2.5–4.5 m. The width of a general lane is about 3.5 m.
(2) The difference in the slopes (the first-order term) of both lanes is less than 0.3 because the lanes are parallel.
(3) The difference in the angle variation (the second-order term) of both lanes is below 0.015 because the angle variation of both lanes is the same in curved sections.
The length of the detection area is confined to 14 m in order to minimize the distortion that occurs during the transformation from the image frame to the vehicle frame, and $M = 50$ and $n = 30$ are also set. A minimal sketch of this fitting and validation step is given below.
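For illustration, the following Python sketch shows one way to implement the LSQ fit with outlier removal and the three validation constraints; the residual threshold and the function names are assumptions made for the example, not values from the paper.

```python
import numpy as np

def fit_lane_curve(lon, lat, outlier_thresh=0.2):
    """Fit the second-order lane model of Equation (6) to candidate lane
    points in the vehicle frame (lon: look-ahead distance, lat: lateral
    offset, in metres), then drop outliers and refit. The residual
    threshold is an assumed value."""
    lon, lat = np.asarray(lon, float), np.asarray(lat, float)
    c, b, a = np.polyfit(lon, lat, 2)                    # lat = a + b*lon + c*lon^2
    keep = np.abs(np.polyval([c, b, a], lon) - lat) < outlier_thresh
    c, b, a = np.polyfit(lon[keep], lat[keep], 2)        # refit on inliers only
    return a, b, c

def validate_lane_pair(left, right):
    """Apply the three validation constraints of Section 4.2 to the fitted
    left/right lane curves, each given as (a, b, c)."""
    (a_l, b_l, c_l), (a_r, b_r, c_r) = left, right
    width_ok     = 2.5 <= abs(a_l - a_r) <= 4.5   # (1) lane width 2.5-4.5 m
    slope_ok     = abs(b_l - b_r) < 0.3           # (2) lanes are parallel
    curvature_ok = abs(c_l - c_r) < 0.015         # (3) same angle variation
    return width_ok and slope_ok and curvature_ok
```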
To obtain stable lateral measurements, the curve parameter state ($x_{l,k}$) is estimated by using a Kalman filter with the points of the outlier-removed lanes, and the curve model is the second-order polynomial of Equation (6):
$$y(x) = a + b\,x + c\,x^{2}$$
This simple model is used for a vehicle moving at a speed $v$, as measured by the odometer of the vehicle, and the filter equations are as follows, with $\Delta s = v\,\Delta t$ [23,24].
State equation:
$$F_{l,k-1} = \begin{bmatrix} 1 & \Delta s & \dfrac{\Delta s^{2}}{2} \\ 0 & 1 & \Delta s \\ 0 & 0 & 1 \end{bmatrix}, \qquad \hat{x}_{l,k}(-) = F_{l,k-1}\,\hat{x}_{l,k-1}(+) + W_{l,k-1}$$
$$P_{l,k}(-) = F_{l,k-1}\,P_{l,k-1}(+)\,F_{l,k-1}^{T} + Q_{l,k-1}$$
Measurement equation:
$$z_{l,k} = \frac{P_{vehicle,left,X} + P_{vehicle,right,X}}{2}$$
$$H_{l,k} = \begin{bmatrix} 1 & P_{vehicle,Y} & P_{vehicle,Y}^{2} \end{bmatrix}$$
$$K_{l,k} = P_{l,k}(-)\,H_{l,k}^{T}\left[ H_{l,k}\,P_{l,k}(-)\,H_{l,k}^{T} + R_{l,k} \right]^{-1}$$
$$\hat{x}_{l,k}(+) = \hat{x}_{l,k}(-) + K_{l,k}\left( z_{l,k} - H_{l,k}\,\hat{x}_{l,k}(-) \right)$$
$$P_{l,k}(+) = \left( I - K_{l,k}\,H_{l,k} \right) P_{l,k}(-)$$
Here, the frame is the vehicle frame (Figure 7), and $P_{vehicle}$ denotes the detected lane points in the vehicle frame. The detected points of both lanes are evaluated at the same position on the Y axis by the LSQ fit, so $P_{vehicle,left,Y} = P_{vehicle,right,Y} = P_{vehicle,Y}$.
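A minimal sketch of this curve-parameter Kalman filter is given below, assuming a single averaged lane-center measurement per update as in the measurement equation above; the noise matrices and array shapes are illustrative.

```python
import numpy as np

def predict_curve(x, P, v, dt, Q):
    """Time update of the curve parameter state [a, b, c] with ds = v*dt
    (state equation of Section 4.2)."""
    ds = v * dt
    F = np.array([[1.0, ds, ds**2 / 2.0],
                  [0.0, 1.0, ds],
                  [0.0, 0.0, 1.0]])
    return F @ x, F @ P @ F.T + Q

def update_curve(x, P, z, look_ahead, R):
    """Measurement update with the mean lateral offset z of the left/right
    lane points evaluated at the look-ahead distance P_vehicle,Y."""
    H = np.array([[1.0, look_ahead, look_ahead**2]])
    S = H @ P @ H.T + np.atleast_2d(R)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ np.atleast_1d(z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P
```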
Figure 7. Global frame and vehicle frame.
Figure 8 shows the lateral measurements for all sections, and the indicated sections contain sharp curved lanes that appear sequentially (Figure 9).
Figure 8. Lateral measurements for lane detection: (a) in a straight line model; (b) in the curved model.
Figure 9. Curved lane sections.
The results were obtained using driving data collected by manually driving along the center of the lane. Therefore, the lateral measurements should have values near zero. In Figure 9, area ① is a right turn, and areas ②–④ are left turns. As shown in Figure 8a, the errors indicate that the vehicle is located on the left side of the lane in the left-turn areas and on the right side of the lane in the right-turn area. As the curved lane sections become sharper, the errors increase. If this measurement is used, the GPS/DR errors can be incorrectly estimated. However, Figure 8b shows lateral measurements close to zero with the second-order polynomial model. In other words, the GPS/DR error estimation can be improved by using curved models.

4.3. Longitudinal Measurements from the Stop Line Detection

An accurate lateral position is used to keep the vehicle in the center of the lane, whereas an accurate longitudinal position is important for executing changes in direction at an intersection. Stop lines can be detected using vision sensors, and the detected stop lines are used to calculate the longitudinal measurements. Figure 10 shows the flowchart of the stop line detection, and the result is shown in Figure 11. The constraints for validating the distance of the detected stop lines are as follows (a small validation sketch follows the list):
(1) The stop line is located laterally within 3.5 m.
(2) The stop line measurement is less than 14 m (which is the reliable range of vision calibration in this test).
(3) The difference between the detected stop line location and the map is less than 3 m (feasible GPS/DR error).
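The three constraints can be checked with a small validation routine such as the following sketch; the argument names are illustrative.

```python
def stop_line_valid(lateral_offset_m, measured_range_m, map_range_m):
    """Check a detected stop line against the three constraints of
    Section 4.3."""
    within_lane  = abs(lateral_offset_m) <= 3.5               # (1) laterally within 3.5 m
    within_range = measured_range_m < 14.0                    # (2) reliable vision range
    near_map     = abs(measured_range_m - map_range_m) < 3.0  # (3) feasible GPS/DR error
    return within_lane and within_range and near_map
```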
Figure 10. Flowchart for the stop line detection.
Figure 11. Result of the stop line detection: (a) Canny edge detection; (b) detected stop line; (c) raw image.

4.4. GPS/DR Error Estimation Filter

The error estimation filter uses measurements from both GPS/DR and the vision sensor. GPS/DR provides information on the absolute position, whereas the vision sensor provides information in the vehicle frame, which differs from that of GPS/DR. Therefore, the error estimation filter has a structure in which the information in the navigation frame is corrected by using information measured in the vehicle frame [25]. The lateral distance between the GPS/DR position and the waypoint link can be calculated by using the waypoint as map information. For the vision sensor, the lateral distance from the center of the lane can be measured by detecting the lanes. This study makes two assumptions for the error estimation filter:
Assumption 1
The waypoint is located at the center of the lane. In general, the location information of the lanes is surveyed when accurate maps are produced, and therefore, the center of the lane can be easily extracted from such a map.
Assumption 2
The waypoints in curved sections are spaced very finely, so the vehicle does not go outside of the lane.
The measurement used for the error estimation filter is the difference between the lateral distance from the lane center measured with the vision sensor ($d_{v,lat}$) and the lateral distance calculated from GPS/DR and the waypoint ($d_{g,lat}$). The longitudinal distance similarly uses $d_{v,lon}$ and $d_{g,lon}$, which are obtained from the vision sensor and from GPS/DR, respectively. Therefore, the measurement equation is given by Equation (14):
$$z_{e,k} = \Delta d_k = \begin{bmatrix} d_{v,lon,k} - d_{g,lon,k} \\ d_{v,lat,k} - d_{g,lat,k} \end{bmatrix}$$
It is necessary to introduce the lane frame in order to configure the filter (Figure 12). As shown in Figure 12, the frame is rotated according to the heading angle ($\psi$), which is calculated from the waypoint, so that it can be transformed into the lane frame:
$$R(\psi_k) = \begin{bmatrix} \cos\psi_k & \sin\psi_k \\ -\sin\psi_k & \cos\psi_k \end{bmatrix}$$
Figure 12. Lane frame.
After the rotation, the transformed $e_E$ axis corresponds to the lane direction, and the transformed $e_N$ axis gives the lateral distance information of the lane. Therefore, the equations in the lane frame can be presented as follows:
$$e_N' = \Delta d_{v,lat,k} = -e_E \sin\psi_k + e_N \cos\psi_k$$
$$e_E' = \Delta d_{v,lon,k} = e_E \cos\psi_k + e_N \sin\psi_k$$
$$h_{e,k}(x) = \begin{bmatrix} \Delta d_{v,lon,k} \\ \Delta d_{v,lat,k} \end{bmatrix} = \begin{bmatrix} e_E \cos\psi_k + e_N \sin\psi_k \\ -e_E \sin\psi_k + e_N \cos\psi_k \end{bmatrix}$$
Equation (19) can be obtained by taking the partial derivative of each term in Equation (18):
$$H_{e,k}(x) = \begin{bmatrix} \dfrac{\partial h_{e,k}(x)}{\partial e_E} & \dfrac{\partial h_{e,k}(x)}{\partial e_N} \end{bmatrix} = \begin{bmatrix} \cos\psi_k & \sin\psi_k \\ -\sin\psi_k & \cos\psi_k \end{bmatrix}$$
The GPS/DR error estimation filter can be summarized as follows:
State equation:
$$x_e = \begin{bmatrix} e_E \\ e_N \end{bmatrix}, \qquad F_{e,k-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad x_{e,k} = F_{e,k-1}\,x_{e,k-1} + W_{e,k-1}$$
$$P_{e,k}(-) = F_{e,k-1}\,P_{e,k-1}(+)\,F_{e,k-1}^{T} + Q_{e,k-1}$$
Measurement equation:
$$z_{e,k} = \Delta d_k = \begin{bmatrix} d_{v,lon,k} - d_{g,lon,k} \\ d_{v,lat,k} - d_{g,lat,k} \end{bmatrix}$$
$$H_{e,k}(x) = \begin{bmatrix} \cos\psi_k & \sin\psi_k \\ -\sin\psi_k & \cos\psi_k \end{bmatrix}$$
$$K_{e,k} = P_{e,k}(-)\,H_{e,k}^{T}\left[ H_{e,k}\,P_{e,k}(-)\,H_{e,k}^{T} + R_{e,k} \right]^{-1}$$
$$\hat{x}_{e,k}(+) = \hat{x}_{e,k}(-) + K_{e,k}\left( z_{e,k} - H_{e,k}\,\hat{x}_{e,k}(-) \right)$$
$$P_{e,k}(+) = \left( I - K_{e,k}\,H_{e,k} \right) P_{e,k}(-)$$
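The following Python sketch shows one EKF measurement update with this model, using the lane-frame rotation as the measurement matrix; the ordering of the measurement vector as [longitudinal, lateral] and the noise matrix are assumptions made for the example.

```python
import numpy as np

def gpsdr_error_update(x, P, d_vision, d_gpsdr, psi, R_meas):
    """One measurement update of the east/north GPS/DR error state [e_E, e_N]
    (Section 4.4). d_vision and d_gpsdr are [longitudinal, lateral] distances
    from the vision sensor and from GPS/DR with the waypoint; psi is the
    heading angle calculated from the waypoint. F = I, so the time update
    only adds the process noise covariance."""
    z = np.asarray(d_vision, float) - np.asarray(d_gpsdr, float)   # Equation (14)
    H = np.array([[ np.cos(psi), np.sin(psi)],                     # rotation into the lane frame
                  [-np.sin(psi), np.cos(psi)]])
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```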

4.5. Longitudinal Measurement from the Curve Matching

The longitudinal GPS/DR position error may be large in the transition area from a straight to a curved lane when the vehicle starts driving or has driven in one direction for a long time, because no longitudinal correction information is available. If the vehicle enters a curved section, the headings of the vehicle and the waypoint change, and the east and north error estimation then becomes available. However, before entering the curve, waypoint navigation may fail because the steering command is generated with incorrect timing as a result of the longitudinal error.
The curve parameters of the lane ahead can be calculated as in Equation (6) by using the waypoint. After interpolation with a resolution of $\Delta y$, the waypoint curve obtained from the calculated parameters is transformed into the vehicle frame. This transformation uses the laterally corrected position from the error estimator. The interpolated waypoint curve ($l_{wp}^{v}$) then has no lateral offset with respect to the curve detected from the image ($l_{i}^{v}$) and differs only by a longitudinal distance, because the lateral GPS/DR error has already been estimated. The error function used to measure the longitudinal range is given in Equations (27) and (28):
$$e = l_{wp}^{v}(x) - l_{i}^{v}(x)$$
$$d_{v,lon} = \arg\min \lVert e \rVert^{2}$$
As shown in Figure 13, the true longitudinal distance is −2.78 m, and the distance ($d_{v,lon}$) measured from Equation (28) is −2.93 m. Thus, accurate longitudinal measurements can be obtained through curve matching.
The measurements of the longitudinal range from curve matching are effective only in the transition from a straight to a curved lane. After the longitudinal error has been estimated once, the lateral information in the curved area provides the longitudinal information. Therefore, in this study, we used $\Delta y = 0.1$ m and restricted the sections where curve matching is performed according to the waypoint curvature. The measured longitudinal range is used as an input to the error estimation filter described in Section 4.4. A sketch of the matching search is given below.
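A sketch of the curve-matching search is shown below: the interpolated waypoint curve is slid along the detected image curve and the shift that minimizes the squared lateral difference is taken as $d_{v,lon}$. The grid-search formulation and the search range are assumptions for the example; the paper specifies only the arg-min criterion and $\Delta y = 0.1$ m.

```python
import numpy as np

def match_curves(wp_lat, img_lat, dy=0.1, max_shift_m=5.0):
    """Estimate the longitudinal offset d_v,lon by sliding the interpolated
    waypoint curve along the detected curve (both given as lateral offsets
    sampled every dy metres over the same look-ahead window) and minimizing
    the mean squared difference. max_shift_m is an assumed search range."""
    wp_lat = np.asarray(wp_lat, float)
    img_lat = np.asarray(img_lat, float)
    max_steps = int(max_shift_m / dy)
    best_shift, best_cost = 0, np.inf
    for k in range(-max_steps, max_steps + 1):
        if k >= 0:
            a, b = wp_lat[k:], img_lat[:len(img_lat) - k]
        else:
            a, b = wp_lat[:k], img_lat[-k:]
        n = min(len(a), len(b))
        if n == 0:
            continue
        cost = np.mean((a[:n] - b[:n]) ** 2)
        if cost < best_cost:
            best_cost, best_shift = cost, k
    return best_shift * dy   # longitudinal distance in metres
```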
Figure 13. Measuring the longitudinal distance with curve matching (o: waypoint curve; : detected curve lane; *: matched curve).

5. Autonomous Experimental Results

The experiment was carried out by applying the lane detection method with a curved model and the GPS/DR error estimation filter proposed in this study in an autonomous vehicle. Figure 14 shows a map that is based on the reflectivity of the experimental site. The map was produced by using a commercial RTK/INS system and a 3D-LiDAR (HDL-32E, Velodyne, Morgan Hill, CA, USA) (Table 1).
Figure 14. Reflectivity map.
Table 1. Experimental environment.

GPS/DR: U-Blox EVK 6T + ADIS 16405
Vision Sensor: BumbleBee 2
Reference Trajectory: Commercial RTK/INS (Novatel Propak V3 + SPAN HG1700)
The true position cannot be known while the vehicle is moving in the environment, so an expensive commercial RTK/INS system was used to determine the reference position (true value) for a quantitative analysis. Figure 15 shows the total trajectory.
Figure 15. Driving trajectory (arrows represent the direction of vehicle). (a) curved section; (b) intersection.
Figure 16 shows the results obtained with the proposed method, and Figure 17 compares the results of lane detection with a straight line model and with a curved model in sections with sharp curves. In these sections, the result of using a straight model is that the vehicle leans to the left of the lane during a right turn and to the right during a left turn, whereas the curved model reduces the lateral error.
Figure 16. Estimated GPS/DR lateral error (red) and position error (blue).
Figure 17. Experimental results for the curved lane sections: (a) straight model; (b) curved model (□: waypoint, -: Standalone GPS/DR, *: proposed).
A stop line does not exist in every section, and therefore, the longitudinal accuracy is analyzed in the sections with stop lines (Figure 18).
Figure 18. Estimated GPS/DR longitudinal error (red) and position error (blue): (a) without stop line detection; (b) with stop line detection.
Figure 19 and Figure 20 show the results at an intersection, and the white asterisks indicate the results where no vision measurements were available. Even in this section, where vision information cannot be obtained, the position was determined with a sub-meter level of accuracy.
Figure 19. Result at the intersection (□: waypoint, o: GPS/DR, *: proposed, -: RTK/INS).
Figure 20. Position error at the intersection.
In Table 2, the overall longitudinal error is larger than that at the intersection. The reason is that the longitudinal error can be corrected before entering an intersection, but it cannot be estimated along most of the trajectory because most of the total trajectory is straight.
Table 2. Result summary.

RMSE (m)     | GPS/DR | Overall | w/o Stop Line | With Stop Line | Intersection
Lateral      | 1.77   | 0.217   | -             | -              | 0.337
Longitudinal | 2.11   | 0.618   | 1.57          | 0.191          | 0.393

6. Conclusions

This study proposes a navigation system that supports autonomous driving through the use of GPS/DR, waypoints, and a vision sensor. The disadvantage of a lane-following system based on lane detection is that there are sections where the lanes are discontinuous. Therefore, this study proposed a GPS/DR error estimation filter that allows stable navigation in such sections. In addition, we suggested a method that reduces the lateral distance error by detecting lanes with a curved model, in order to solve the problem of lateral distance errors in curved lane sections. The curve matching method between the waypoints and the detected curved lanes reduces the longitudinal error before the vehicle enters a curved section after a straight section. Only lane and stop line detection are performed using image processing, so much less computational power is required than with visual odometry and a particle filter, and real-time use in embedded systems is thus straightforward.
This system can therefore be applied not only in autonomous vehicles but also in vehicles already equipped with a GPS, an IMU, and a black box (monocular vision sensor), allowing the vehicle to run safely in straight lanes, curved lanes, and through intersections.
However, greater availability is required for autonomous vehicles. The stop line was the only road marking considered in this paper for longitudinal information, but stop lines do not exist everywhere. Thus, other road markings (arrows, speed limits, etc.) will be added to provide more longitudinal information. Also, vision sensors have limited functionality at night or under backlighting, so LiDAR, Dedicated Short Range Communications (DSRC), etc. could also be used in autonomous vehicles [26,27,28,29]. Multi-sensor systems based on a federated Kalman filter can improve the reliability and availability of the localization system, and in addition, the integrity of an autonomous vehicle can be provided by using the covariances of the local filters and the master filter. Therefore, a localization method that ensures availability and integrity will be part of future work.

Acknowledgments

This research was supported by a grant from the "Development of GNSS based Transportation Infrastructure Technology (06-A03)" project funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

Author Contributions

B.-H.L. designed the localization system and wrote the paper; J.-H.S., J.-H.I., S.-H.I. and M.-B.H. contributed to the discussion of the results and provided significant suggestions; G.-I.J. supervised the research work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, B.H.; Jee, G.I. Performance analysis of GPS-RTK floating solution with Doppler measurement. In Proceedings of the IS-GPS/GNSS, Taipei, Taiwan, 26–28 October 2010; pp. 273–276.
  2. Serrano, L.; Kim, D.; Langley, R.B. A single GPS receiver as a real-time, accurate velocity and acceleration sensor. In Proceedings of the ION GNSS 17th ITM, Long Beach, CA, USA, 21–24 September 2004; pp. 2021–2034.
  3. Badino, H.; Huber, D.; Kanade, T. Visual topometric localization. In Proceedings of the IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, 5–9 June 2011; pp. 794–799.
  4. Du, J.; Barth, M.J. Next-Generation Automated Vehicle Location Systems: Positioning at the Lane Level. IEEE Trans. Intell. Transp. Syst. 2008, 9, 48–57. [Google Scholar]
  5. Laneurit, J.; Chapuis, R.; Chausse, F. Accurate vehicle positioning on a numerical map. Int. J. Control Autom. Syst. 2005, 3, 15–31. [Google Scholar]
  6. Miller, I.; Campbel, M.; Huttenlocher, D. Map-aided localization in sparse global positioning system environments using vision and particle filtering. J. Field Robot. 2011, 28, 619–643. [Google Scholar] [CrossRef]
  7. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef]
  8. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, AB, Canada, 28 July–1 August 2002; pp. 593–598.
  9. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  10. Levinson, J.; Montemerlo, M.; Thrun, S. Map-based precision vehicle localization in urban environments. In Proceedings of the Robotics: Science and Systems, Atlanta, GA, USA, 27–30 June 2007; pp. 121–128.
  11. Levinson, J.; Thrun, S. Robust vehicle localization in urban environment using Probabilistic Maps. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4372–4378.
  12. Schreiber, M.; Knoppel, C.; Franke, U. LaneLoc: Lane marking based localization using highly accurate maps. In Proceedings of the IEEE Intelligent Vehicles Symposium, Gold Coast, Australia, 23–26 June 2013; pp. 449–454.
  13. Bak, A.; Gruyer, D.; Bouchafa, S.; Aubert, D. Multi-sensor localization—Visual odometry as a low cost proprioceptive sensor. In Proceedings of the 15th International IEEE Conference on Intelligent Transportation Systems, Anchorage, AK, USA, 16–19 September 2012; pp. 1365–1370.
  14. Cuong, N.V.; Heo, M.B.; Jee, G.I. 1-Point Ransac based robust visual odometry. J. Korean GNSS Soc. 2013, 2, 81–89. [Google Scholar]
  15. Scaramuzza, D.; Siegwart, R. Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles. IEEE Trans. Robot. 2008, 24, 1015–1026. [Google Scholar] [CrossRef]
  16. Gruyer, D.; Belaroussi, R.; Revilloud, M. Map-Aided localization with lateral perception. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 674–680.
  17. Ieng, S.S.; Gruyer, D. Merging lateral cameras information with proprioceptive sensors in vehicle location gives centimetric precision. In Proceedings of the 18th International Technical Conference on the Enhanced Safety of Vehicles (ESV), Nagoya, Japan, 19–22 May 2003.
  18. Li, H.; Nashashibi, F.; Toulminet, G. Localization for intelligent vehicle by fusing mono-camera, low-cost GPS and map data. In Proceedings of the International IEEE Annual Conference on Intelligent Transportation Systems, Madeira, Portugal, 19–22 September 2010; pp. 1657–1662.
  19. Hofmann-Wellenhof, B.; Lichtenegger, H.; Wasle, E. GNSS—Global Navigation Satellite Systems: GPS, GLONASS, Galileo & More; Springer-Verlag Wien: New York, NY, USA, 2008. [Google Scholar]
  20. Kaplan, E.D.; Hegarty, C.J. Understanding GPS: Principles and Applications; Artech House: Boston, MA, USA, 2005. [Google Scholar]
  21. Seo, S.H.; Lee, B.H.; Jee, G.I. Position error correction using waypoint and vision sensor. In Proceedings of the International Symposium on GNSS, Jeju, Korea, 18–20 October 2014; pp. 31–34.
  22. Kuk, J.G.; An, J.H.; Ki, H.Y.; Cho, N.I. Fast lane detection & tracking based on hough transform with reduced memory requirement. In Proceedings of the International IEEE Annual Conference on Intelligent Transportation Systems, Madeira, Portugal, 19–22 September 2010; pp. 1344–1349.
  23. Li, T.; Zhidong, D. A new 3D LIDAR-based lane markings recognition approach. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Shenzhen, China, 12–14 December 2013; pp. 2197–2202.
  24. Bevly, D.M. GNSS for Vehicle Control; Artech House Publishers: Boston, MA, USA, 2010. [Google Scholar]
  25. Lee, B.H.; Im, S.H.; Heo, M.B.; Jee, G.I. Error correction method with Precise Map Data for GPS/DR based on Vision/Vehicle Speed Sensor. In Proceedings of the ION GNSS+, Nashville, TN, USA, 16–20 September 2013; pp. 1260–1266.
  26. Alam, N.; Balaei, A.T.; Dempster, A.G. An instantaneous Lane-Level positioning using DRSC carrier frequency offset. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1566–1575. [Google Scholar] [CrossRef]
  27. Chen, L.; Li, Q.; Li, M.; Zhang, L.; Mao, Q. Design of a multi-sensor cooperation travel environment perception system for autonomous vehicle. Sensors 2012, 12, 12386–12404. [Google Scholar] [CrossRef]
  28. Chu, T.; Guo, N.; Backen, S.; Akos, D. Monocular Camera/IMU/GNSS integration for ground vehicle navigation in challenging gnss environments. Sensors 2012, 12, 3162–3185. [Google Scholar] [CrossRef] [PubMed]
  29. Cong, L.; Li, E.; Qin, H.; Ling, K.V.; Xue, R. A performance improvement method for low-cost land vehicle GPS/MEMS-INS attitude determination. Sensors 2015, 15, 5722–5746. [Google Scholar] [CrossRef] [PubMed]
