Article

Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †

Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
* Author to whom correspondence should be addressed.
† This paper is an extended version of the paper entitled “GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon”, presented at the IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015.
Sensors 2015, 15(12), 30199-30220; https://doi.org/10.3390/s151229795
Submission received: 12 October 2015 / Revised: 25 November 2015 / Accepted: 26 November 2015 / Published: 3 December 2015
(This article belongs to the Special Issue Sensors in New Road Vehicles)

Abstract

This research proposes an accurate vehicular positioning system that can achieve lane-level performance in urban canyons. Multiple passive sensors, including Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration, which reduces multipath and NLOS effects by using a 3D building map. In addition, the inertial sensor can describe the vehicle motion, but suffers from drift as time increases. This paper develops vision-based lane detection, which is first used to control the drift of the inertial sensor. Moreover, lane keeping and lane changing behaviors are extracted from the lane detection function, and further reduce the lateral positioning error in the proposed localization system. We evaluate the integrated localization system in a challenging urban city scenario. The experiments demonstrate that the proposed method achieves sub-meter accuracy in terms of mean positioning error.


1. Introduction

Vehicle self-localization in urban environments is a challenging but significant topic for autonomous driving and driving assistance. Both motion planning and vehicle cooperation need accurate localization. Localization methods can be categorized into two types based on the sensors used: passive sensor-based and active sensor-based. Passive sensors such as Global Navigation Satellite System (GNSS) receivers and vision sensors collect signals, radiation, or light from the environment, while active sensors have transmitters that send out light waves, electrons or signals and collect the reflection bounced off the target. The Velodyne, a type of Light Detection and Ranging (LIDAR) sensor, is a representative active sensor for vehicle self-localization [1,2,3,4,5,6,7]. Active sensor-based localization has a practical weakness: the price of these sensors is currently very high. Although we expect that the cost of these sensors will be reduced in the near future, they suffer from another critical problem, which is high energy consumption. Therefore, it is necessary to consider low-cost solutions such as GNSS receivers, cameras and inertial sensors, i.e., passive sensors, as an alternative or a complement.
The Global Positioning System (GPS) is the main information source in vehicle navigation, as indicated by the famous Defense Advanced Research Projects Agency (DARPA) Grand Challenge [8]. Under ideal conditions, namely eight GPS satellites or more in an open sky field, the error standard deviation of differential-mode GPS positioning is approximately 0.3 m [9]. However, vehicles often run in urban areas, in which GNSS positioning performance is severely degraded by Non-Line-Of-Sight (NLOS) and multipath effects. These effects may lead to a 100 m positioning error in urban environments [10]. Various GNSS technologies have been proposed to mitigate the NLOS and multipath effects [11]. Recently, building information has been considered to analyze the effects of NLOS and multipath. Marais et al. proposed to exclude the satellites located in non-sky regions from the localization using an omnidirectional fisheye camera [12]. In addition, 3D building models, shadow mapping, and omnidirectional infrared (IR) cameras have been employed to exclude unhealthy signals and mitigate the effects of NLOS and multipath [13,14,15]. However, the exclusion of satellites distorts the Horizontal Dilution of Precision (HDOP). Another well-known urban positioning method is shadow matching. This method evaluates satellite visibility using city building models, and compares the predicted satellite visibility with the visibility measurement to reduce the positioning error [16,17,18]. In order to mitigate the NLOS and multipath effects while reducing the HDOP distortion, we proposed a candidate-distribution-based positioning method using a 3D building map (3D-GNSS) [19,20,21]. 3D-GNSS takes advantage of the 3D building map to rectify the pseudorange errors caused by NLOS and multipath effects. The developed 3D-GNSS has been evaluated for pedestrian applications, and the results demonstrated that 3D-GNSS can obtain high positioning accuracy in urban canyons.
Following the idea of integrating GPS and inertial navigation systems [22,23,24,25,26], we also integrated inertial sensors and 3D-GNSS for vehicle applications [27,28]. The evaluation results indicated that the inertial sensor can smooth the positioning trajectory; however, the combination of 3D-GNSS and inertial sensors still cannot satisfy the sub-meter accuracy requirement of autonomous driving [29,30]. Moreover, the inertial sensor can output accurate heading direction information over a short time period, but the drift becomes obvious as time increases.
In addition to inertial sensors and GNSS positioning, sensing techniques have also been widely used for positioning. The most famous one is Simultaneous Localization and Mapping (SLAM). SLAM is the problem of constructing or updating a map while at the same time tracking the vehicle position in the map. The optical sensors used may be a Velodyne laser scanner [2,4,31], a camera [32,33,34] or the fusion of both [35]. However, SLAM may suffer from error accumulation [4], because the localization and mapping are conducted simultaneously. Therefore, the two-step method, which first constructs an accurate map off-line and then matches the observation against the fixed map for on-line localization, is often preferred over SLAM. Lane markings are distinctive objects on road surfaces. Since the mid-1980s, lane detection from camera sensors has received considerable attention. Those studies mainly focused on recognizing the lane markings on the road surface, not on localization [36,37,38,39,40]. Tao et al. presented a localization method that builds a map of the lane markings in a first stage and then conducts a map-matching process to improve the stand-alone GPS positioning error [41,42]. Schreiber et al. developed a vision-based localization system using a stereo camera and a highly accurate map containing curbs and road markings [43]. Mattern and Wanielik focused on the intersection scenario, and converted the map data to an image in the camera frame [44]. Nedevschi et al. also developed an intersection localization system based on the alignment of visual landmarks perceived by an onboard stereo vision system with the information from an extended digital map [29]. Rose et al. proposed to use a lane detection system to improve the localization performance when GPS suffers an outage [45]. Seo and Rajkumar developed an algorithm aimed at counting the sequence number of the occupied lane based on multiple-lane detection, which was used to improve lateral localization [46]. In addition, stop line detection has been employed to reduce the longitudinal error in localization as well [47,48]. All of these studies proposed to use sensing technology to improve the positioning accuracy.
Unlike those related works, this paper focuses on the problem of vehicle self-localization in the most challenging environment, a dense urban environment, and improves the positioning performance from two aspects: the GNSS positioning technology and the sensor integration. The main contribution of this paper is a multiple passive sensor-based localization system for lane-level localization, which integrates the innovative 3D-GNSS positioning technique with an inertial sensor and an onboard monocular camera. The flowchart of the proposed system is shown in Figure 1. In the integrated system, 3D-GNSS provides the global positioning result. The motion of the vehicle is described by speedometer data from the Controller Area Network (CAN) bus and the heading direction from the Inertial Measurement Unit (IMU). The vision-based lane detection is first used to control the drift of the IMU. Moreover, lane keeping and lane changing behaviors are obtained from the lane detection function. These behaviors describe the relative movement of the vehicle, which further reduces the lateral positioning error. Finally, we use a particle filter to integrate this information with the 2D lane marking map.
Figure 1. Flowchart of the proposed localization system.
Our previous works [19,20,21] focused on the development and evaluation of 3D-GNSS for pedestrian applications. In addition, in our previous works [27,28] we only combined 3D-GNSS and inertial sensors to improve the localization performance. This paper further improves the localization performance by additionally integrating vision-based lane detection. This paper is an extension of the work presented in our conference paper [49]. Compared to the conference work, the idea of heading direction rectification is additionally described and evaluated, and more experimental results and discussion are presented as well.
The remainder of the paper is organized as follows: the 3D-GNSS positioning method and the vision-based lane detection are presented in Section 2 and Section 3, respectively. The drift rectification method for the inertial sensor is described in Section 4. Section 5 explains the integration algorithm. The experimental results are demonstrated in Section 6. Finally, Section 7 concludes the paper and proposes future work.

2. 3D-GNSS

The integrated localization system considers the GNSS positioning result as the main information source. Generally speaking, GNSS positioning provides an initial estimate for the localization, and the role of the other sensors is to optimize the position around the GNSS result. A more accurate GNSS result makes it easier for the integrated system to achieve lane-level performance. Therefore, the performance of the GNSS positioning technique should be improved first. Satellite positioning techniques suffer from NLOS and multipath effects in urban canyons, as shown in Figure 2. This paper proposes to utilize 3D-GNSS to reduce both the multipath and the NLOS effects. We briefly describe the idea of 3D-GNSS in this section; the details of 3D-GNSS have been published in our previous works [19,20,21].
Figure 3a shows the flowchart of the 3D-GNSS method. In order to analyze and correct the pseudorange errors caused by multipath and NLOS, a 3D building map is needed. 3D-GNSS recognizes the reflected path of a satellite signal using ray tracing, and further estimates the reflection delay. Theoretically, the pseudorange simulated at the position of the receiver should be the same as the pseudorange measurement. Thus, 3D-GNSS employs a candidate distribution algorithm to rectify the positioning error. First, a set of position hypotheses, called candidates, is distributed around the previous positioning result and the position given by the conventional GNSS receiver. Next, 3D-GNSS generates the pseudorange simulation for each candidate position using ray tracing, as shown in Figure 3b.
Figure 2. (a) Multipath effect [19]; (b) NLOS effect [19].
Figure 3. (a) Flowchart of 3D-GNSS; (b) Example of ray-tracing [19]; (c) Likelihood of candidates.
There are three types of satellite conditions. The LOS signal is not affected by buildings, and does not include any reflection delay. In the NLOS case, shown in Figure 2b, the reflection delay is equal to the difference between the direct path and the reflection path. The third condition, the multipath effect shown in Figure 2a, is more ambiguous. Based on experience, this research assumes that the signal in the multipath condition is about 6 dB weaker than the LOS signal, and that the commercial receiver adopts a strobe correlator [50] with 0.2 chip spacing. The research uses these principles to estimate the pseudorange delay in the multipath condition. By using ray tracing, both the satellite condition and the reflection delay can be estimated.
The 3D-GNSS technique additionally checks the consistency of the satellite conditions by considering the signal strength. If the satellite condition estimated using the signal strength conflicts with the result from ray tracing, then 3D-GNSS excludes the satellite from the following process. The probability of each position hypothesis is evaluated based on the similarity between the pseudorange measurement and the pseudorange simulation. Figure 3c indicates the different likelihood values of the candidates using different colors. It is easy to see that the candidates around the ground truth position have higher weighting. The black dots are the invalid candidates, whose pseudorange difference exceeds the 10 m threshold defined in this research.
Finally, 3D-GNSS provides the rectified result as the weighted average of the positions of all the valid candidates. In this paper, we adopt multiple satellite systems in 3D-GNSS, including not only GPS but also the GLObal NAvigation Satellite System (GLONASS) and the Quasi-Zenith Satellite System (QZSS).
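To make the candidate-scoring idea concrete, the following Python sketch scores a set of candidate positions by the similarity between measured and simulated pseudoranges and then averages the valid candidates. It is an illustration only, not the authors' implementation: the helper `simulate_pseudoranges` (ray tracing against the 3D building map) and the Gaussian scoring with `sigma` are assumptions.

```python
import numpy as np

def weight_candidates(candidates, measured_pr, simulate_pseudoranges,
                      threshold_m=10.0, sigma=3.0):
    """Score candidate positions by pseudorange similarity and average them.

    candidates            : (N, 2) array of hypothesized east/north positions
    measured_pr           : (M,) pseudorange measurements from the receiver
    simulate_pseudoranges : callable(candidate) -> (M,) simulated pseudoranges
                            obtained by ray tracing in the 3D building map
                            (hypothetical helper standing in for 3D-GNSS internals)
    """
    weights = np.zeros(len(candidates))
    for i, cand in enumerate(candidates):
        sim_pr = simulate_pseudoranges(cand)
        # RMS difference between simulated and measured pseudoranges
        diff = np.sqrt(np.mean((sim_pr - measured_pr) ** 2))
        if diff > threshold_m:          # invalid candidate (black dots in Figure 3c)
            continue
        weights[i] = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
    if weights.sum() == 0:
        return None                     # no valid candidate at this epoch
    weights /= weights.sum()
    # final 3D-GNSS result: weighted average of the valid candidate positions
    return weights @ np.asarray(candidates)
```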

3. Lane Detection from Monocular Camera

The lane detection method used in this research is highly inspired by Aly’s work [38]. However, different from the lane detection research described in [38], this research does not focus on the lane detection itself, but rather employs the vision-based lane detection to improve the accuracy of localization.

3.1. Inverse Perspective Mapping (IPM)

Before the lane detection, a top view of the road surface is generated by Inverse Perspective Mapping (IPM). This has two advantages: first, the perspective effect is eliminated from the images, which makes the lane boundaries parallel [38]; second, the road surface is represented in meters in the IPM, so prior information about the lane markings can be used easily.
In order to generate the IPM, a flat road is assumed. In addition, the camera intrinsic and extrinsic parameters are used for the transformation. As shown in Figure 4a, an image coordinate system {Fi} = {u, v}, a camera coordinate system {Fc} = {Xc, Yc, Zc} and a world coordinate system {Fw} = {Xw, Yw, Zw} are defined. The world coordinate system is centered at the optical center of the camera, and Zw is perpendicular to the road surface. The camera Xc axis is assumed to stay in the world Xw-Yw plane, which means that the optical axis has a pitch angle α and a yaw angle β, but the roll angle is zero. In addition, the focal length of the camera is (fu, fv), the optical center is (cu, cv), and h is the height of the camera optical center above the road plane. Thus, any point on the road plane can be mapped to the image coordinates by the homogeneous transformation [38]:
$$
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_u & 0 & c_u \\ 0 & f_v & c_v \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_c / z_c \\ y_c / z_c \\ 1 \end{bmatrix}
\quad (1)
$$

$$
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\alpha + \tfrac{\pi}{2}) & -\sin(\alpha + \tfrac{\pi}{2}) \\ 0 & \sin(\alpha + \tfrac{\pi}{2}) & \cos(\alpha + \tfrac{\pi}{2}) \end{bmatrix}
\begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ -h \end{bmatrix}
\quad (2)
$$
where Equation (1) represents the transformation from the camera coordinate system to the image coordinate system, and Equation (2) is the transformation from the world coordinate system to the camera coordinate system. Therefore, any point on the ground can be projected back to the image coordinate system. Figure 4b shows an original image from the onboard camera, with the region of interest (ROI) marked by the red rectangle. Figure 4c visualizes the IPM image of the ROI, where the lane boundaries become parallel.
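As a concrete illustration of Equations (1) and (2), the sketch below maps a ground-plane point into pixel coordinates under the flat-road assumption. The rotation sign conventions and all parameter values are illustrative assumptions, not the calibration used in this paper.

```python
import numpy as np

def ground_to_image(xw, yw, fu, fv, cu, cv, alpha, beta, h):
    """Project a road-plane point (xw, yw, -h) into pixel coordinates (u, v).

    alpha, beta : camera pitch and yaw angles (roll assumed zero), in radians
    h           : camera height above the road plane, in meters
    """
    # Rotation about the camera X axis by (alpha + pi/2), as in Equation (2)
    a = alpha + np.pi / 2.0
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    # Rotation about the vertical axis by the yaw angle beta
    Rz = np.array([[np.cos(beta), -np.sin(beta), 0],
                   [np.sin(beta),  np.cos(beta), 0],
                   [0, 0, 1]])
    pc = Rx @ Rz @ np.array([xw, yw, -h])     # camera coordinates (xc, yc, zc)
    # Pinhole projection, Equation (1)
    K = np.array([[fu, 0, cu],
                  [0, fv, cv],
                  [0, 0, 1]])
    uv1 = K @ (pc / pc[2])
    return uv1[0], uv1[1]

# To build the IPM image, sample a regular metric grid on the road plane and
# look up the pixel value returned by ground_to_image for each grid cell.
```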
Figure 4. (a) Definition of the coordinate systems; (b) Original image from onboard camera; (c) IPM of the rectangle area in (b).

3.2. Lane Detection

To enhance the lane markings and reduce other disturbances, the IPM image is first smoothed by a Gaussian filter along the vertical direction, and then filtered by a second derivative of a Gaussian along the horizontal direction. The algorithm thresholds the filtered IPM by keeping the highest 2.5% of pixel values as road marking candidates [38], as shown in Figure 5a. In addition, the lane detection is limited to a rectangular area centered at the vehicle center: the width along the lateral direction is 7 m (the width of two lanes), and the farthest distance along the heading direction is 30 m. Figure 5b shows the lane detection ROI.
Figure 5. Lane detection [49], (a) Filtered IPM; (b) Lane detection ROI; (c) Hough transformation result of (b); (d) Rough area of the first line; (e) Result from RANSAC for the first line detection in (d); (f) Hypothesized area of the second line; (g) RANSAC result for the second line detection in (f).
One lane consists of two line boundaries. Therefore, the lane detection problem is converted into a line detection problem in the ROI. The line detection algorithm consists of a Hough transform to locate the rough areas of the lines, followed by a Random Sample Consensus (RANSAC) fitting to refine the line detection results [38]. The two lines appear as two local maxima in the Hough transform result, which is visualized in Figure 5c. Thus, the rough area of the first line is located by finding the global maximum in the Hough transform space, as indicated by the red rectangle in Figure 5d. After that, RANSAC line fitting is conducted in the red rectangle to detect the first line. Figure 5e shows the result of the RANSAC line fitting. In this paper, the lane width is assumed to be 3.5 m, and the hypothesized area of the second line is determined based on this assumption. RANSAC line fitting is then used for the line detection again. Figure 5f,g demonstrate the hypothesized area of the second line and the result of the RANSAC line fitting, respectively. Thus, the position of the vehicle relative to the two line boundaries can be estimated.
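For readers who wish to reproduce the pipeline, the fragment below sketches the filtering, thresholding and Hough steps with OpenCV and NumPy; a RANSAC line fit restricted to the neighborhood of each Hough maximum would then refine the boundaries. Kernel sizes and thresholds are illustrative assumptions rather than the exact parameters of this paper.

```python
import cv2
import numpy as np

def detect_line_candidates(ipm, keep_ratio=0.025):
    """Rough line detection on a grayscale IPM image: filter, threshold, Hough."""
    # Smooth vertically with a Gaussian, then take the second derivative along
    # the horizontal direction to enhance bright, lane-width vertical ridges.
    smoothed = cv2.GaussianBlur(ipm, (1, 15), sigmaX=0, sigmaY=5)
    sobelxx = cv2.Sobel(smoothed, cv2.CV_32F, dx=2, dy=0, ksize=5)
    response = -sobelxx                 # bright ridges give negative 2nd derivative
    # Keep only the strongest 2.5% of responses as lane-marking candidates.
    thresh = np.quantile(response, 1.0 - keep_ratio)
    binary = (response >= thresh).astype(np.uint8) * 255
    # Each lane boundary appears as a local maximum in the Hough accumulator;
    # a RANSAC line fit around each maximum refines the boundary (not shown).
    lines = cv2.HoughLines(binary, rho=1, theta=np.pi / 180, threshold=50)
    return binary, lines
```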

3.3. Lane Keeping and Lane Changing Detection

One lane is represented by two line boundaries on the road surface. After the line boundaries are detected, we employ a particle filter to track each line. The purpose of the line tracking is to detect lane keeping and lane changing behaviors. Figure 6 shows the line detection and tracking results during a lane change from frame t to t + 2k. In Figure 6, the blue rectangle is the ROI of the lane detection and the red lines are the detected lines. The attached labels, e.g., “Line-1”, “Line-2” and “Line-3”, indicate the tracking results. The small green rectangle at the bottom of the images denotes the vehicle position. From frame t to t + k, Line-2 approaches the center of the vehicle, as shown in Figure 6a,b. As time increases, Line-2 gradually appears at the left side of the vehicle, as shown in Figure 6c. In this process, the position of the tracked Line-2 changes from the right side to the left side of the vehicle, which indicates that the vehicle performs a lane change to the right.
Figure 6. Detection of the lane changing scenario; (a), (b) and (c) are line detection and tracking results at different frames. The blue rectangle is the ROI of lane detection, the red lines are the detected lines. The attached labels, e.g., “Line-1”, “Line-2” and “Line-3”, indicate the tracking results. The green rectangle shows the vehicle position.
Figure 7 shows the lane keeping scenario. The green rectangle denotes the vehicle position, which is located between Line-1 and Line-2 during this driving period. In Figure 7a–c, Line-1 always appears at the left side and Line-2 at the right side of the vehicle. Unlike the lane changing scenario, no line crosses from one side of the vehicle to the other in this period. Based on the relative position of the vehicle and the tracked lines, lane keeping can be detected. This information is used in the following integration.
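The keeping/changing decision reduces to watching the signed lateral offset of each tracked line relative to the vehicle center. A minimal sketch, assuming each tracked line reports its lateral offset in meters (positive to the right of the vehicle):

```python
def classify_lane_behavior(prev_offsets, curr_offsets):
    """Detect lane keeping or changing from tracked line offsets.

    prev_offsets, curr_offsets : dict {line_id: lateral offset in meters},
                                 positive to the right of the vehicle center.
    Returns "keep", "change_right", or "change_left".
    """
    for line_id, prev in prev_offsets.items():
        if line_id not in curr_offsets:
            continue
        curr = curr_offsets[line_id]
        # A tracked line that crosses from the right side to the left side of
        # the vehicle indicates a lane change to the right, and vice versa.
        if prev > 0 and curr < 0:
            return "change_right"
        if prev < 0 and curr > 0:
            return "change_left"
    return "keep"
```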
Figure 7. Detection of the lane keeping scenario; (a), (b) and (c) are the line detection and tracking results at different frames. The blue rectangle is the ROI of lane detection, the red lines are the detected lines. The attached labels, e.g., “Line-1” and “Line-2”, indicate the tracking results. The green rectangle shows the vehicle position.

4. Heading Direction Rectification for an Inertial Sensor

In this research, an Inertial Measurement Unit (IMU) is employed to detect the heading direction of the vehicle. The IMU outputs the acceleration, the angular rate and the angle along each axis. According to the IMU specification, the pitch and roll angles are measured between the body of the IMU and the horizontal plane perpendicular to the gravity direction. In addition, the yaw angle is the accumulated angle change since the sensor was reset or connected. Assuming that the heading direction has been calibrated relative to true north, the vehicle velocity can be calculated by the following equations:
$$
\begin{bmatrix} V_{northeast} \\ 0 \\ V_{altitude} \end{bmatrix}
=
\begin{bmatrix} \cos\omega & 0 & \sin\omega \\ 0 & 1 & 0 \\ -\sin\omega & 0 & \cos\omega \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\lambda & -\sin\lambda \\ 0 & \sin\lambda & \cos\lambda \end{bmatrix}
\begin{bmatrix} V_{CAN} \\ 0 \\ 0 \end{bmatrix}
\quad (3)
$$

$$
\begin{bmatrix} V_{north} \\ V_{east} \\ V_{altitude} \end{bmatrix}
=
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} V_{northeast} \\ 0 \\ V_{altitude} \end{bmatrix}
\quad (4)
$$
where $\lambda$ and $\omega$ are the roll and pitch angles of the IMU, respectively, and $\theta$ is the heading direction measured from true north. The vehicle velocity $V_{CAN}$ is decomposed into $V_{northeast}$ and $V_{altitude}$ in Equation (3), and the speed $V_{northeast}$ is further decomposed by Equation (4). Because we conducted the experiments in a flat area, a simplified North-East plane is adopted for the positioning. The GNSS positioning result at the first epoch is defined as the origin of the North-East plane. The movement of the vehicle is then calculated by Equation (5):
$$
X_k = \begin{bmatrix} x_{north,k} \\ x_{east,k} \end{bmatrix}
= \begin{bmatrix} x_{north,k-1} \\ x_{east,k-1} \end{bmatrix}
+ \Delta t \begin{bmatrix} V_{north,k-1} \\ V_{east,k-1} \end{bmatrix}
= \begin{bmatrix} x_{north,k-1} \\ x_{east,k-1} \end{bmatrix}
+ \Delta t \begin{bmatrix} V_{northeast,k-1}\cos\theta_{k-1} \\ V_{northeast,k-1}\sin\theta_{k-1} \end{bmatrix}
\quad (5)
$$
where $(x_{north,k}, x_{east,k})$ denotes the vehicle position at time k. Figure 8 shows the trajectory generated from Equation (5). The trajectory is indicated by blue points and the ground truth is shown by the cyan line. The blue points are parallel to the ground truth at the beginning, but the trajectory drifts as time increases.
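Equations (3)–(5) amount to projecting the CAN speed onto the North-East plane with the IMU attitude and integrating over time. A minimal dead-reckoning sketch under those assumptions (variable names are illustrative):

```python
import numpy as np

def propagate(x_north, x_east, v_can, pitch, heading, dt):
    """One dead-reckoning step following Equations (3)-(5).

    v_can   : speed from the CAN-bus speedometer (m/s)
    pitch   : IMU pitch angle (rad); heading : heading from true north (rad)
    """
    # Equation (3): only the pitch rotation changes the horizontal speed,
    # because the CAN velocity lies along the vehicle's forward (x) axis.
    v_northeast = v_can * np.cos(pitch)
    # Equations (4)-(5): decompose along north/east and integrate over dt.
    x_north += dt * v_northeast * np.cos(heading)
    x_east  += dt * v_northeast * np.sin(heading)
    return x_north, x_east
```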
Figure 8. Drift effect of IMU sensor.
This paper proposes to use the vision-based lane detection to control the drift effect. When lane keeping behavior is detected, the vehicle heading direction is rectified to the direction of the occupied lane, which can be estimated from the 2D lane marking map. Otherwise, the heading direction is accumulated from the yaw angle rate provided by the IMU. This idea is formulated as follows:
$$
\theta_k =
\begin{cases}
\theta_k^{lane}, & \text{lane keeping} \\
\theta_{k-1} + \Delta\theta_k^{IMU}, & \text{otherwise}
\end{cases}
\quad (6)
$$
where $\theta_k$ is the heading direction at time k, $\theta_k^{lane}$ is the lane direction obtained from the 2D lane marking map, and $\Delta\theta_k^{IMU}$ denotes the angle change obtained from the IMU.
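Equation (6) is a simple switch between the map lane direction and IMU integration. A sketch, assuming the direction of the occupied lane is supplied by a 2D-map query (`lane_direction`, a hypothetical input):

```python
def rectify_heading(theta_prev, dtheta_imu, lane_keeping, lane_direction):
    """Heading update following Equation (6).

    theta_prev     : heading at the previous epoch (rad)
    dtheta_imu     : yaw angle change reported by the IMU over the epoch (rad)
    lane_keeping   : True when the camera reports lane keeping behavior
    lane_direction : direction of the occupied lane from the 2D lane map (rad),
                     a hypothetical map query supplied by the caller
    """
    if lane_keeping:
        return lane_direction          # reset to the lane direction: no drift
    return theta_prev + dtheta_imu     # otherwise integrate the IMU yaw rate
```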

5. Integrated Localization of 3D-GNSS, Inertial Sensor and Lane Detection

In this research, multiple passive sensors are adopted simultaneously for positioning. 3D-GNSS, the inertial sensor and the lane detection are integrated in a particle filter. The particle filter is a nonparametric version of the Bayes filter, and has been widely applied. The particle filter represents the posterior using a set of particles $\{x_k^i = (x_{k,north}^i, x_{k,east}^i)\}_{i=1}^n$, where $x_k^i$ is the two-dimensional position, $i$ is the particle index, $k$ is the time and $n$ is the number of particles. Each particle has an importance weight $w_k^i$. The vehicle position is recursively estimated by the following steps [51]:
  • Prediction: create the particle set $\{x_k^i, w_k^i\}_{i=1}^n$ for time k from the previous set $\{x_{k-1}^i, w_{k-1}^i\}_{i=1}^n$ and the control value $u_k$, based on the propagation model in Equation (5).
  • Correction: the weight of each particle in $\{x_k^i, w_k^i\}_{i=1}^n$ is evaluated with the new observation $z_k$, according to the observation model $w_k^i = p(z_k | x_k^i)$. In this paper, $z_k$ includes two observations: $G_k$, the 3D-GNSS positioning result, and $V_k$, the lane detection result.
  • Resampling: the particle set $\{x_k^i, w_k^i\}_{i=1}^n$ is resampled based on the importance weights.
Figure 9 demonstrates the process of the weight evaluation for one particle. The yellow point is particle $i$. The distance between the GNSS positioning result and particle $i$ is decomposed into the longitudinal distance $D_{GNSS,longitudinal}^{i,k}$ and the lateral distance $D_{GNSS,lateral}^{i,k}$ with respect to the direction of the occupied lane. Therefore, the probability computed from the 3D-GNSS measurement, $p(G_k | x_k^i)$, is represented as follows:
$$
p(G_k | x_k^i) = \exp\left(-\frac{(D_{GNSS,lateral}^{i,k})^2 + (D_{GNSS,longitudinal}^{i,k})^2}{\sigma_{GNSS}^2}\right)
\quad (7)
$$

$$
p(G_k | x_k^i) = p(G_{k,lateral} | x_k^i)\, p(G_{k,longitudinal} | x_k^i)
\quad (8)
$$

$$
p(G_{k,lateral} | x_k^i) = \exp\left(-\frac{(D_{GNSS,lateral}^{i,k})^2}{\sigma_{GNSS}^2}\right)
\quad (9)
$$

$$
p(G_{k,longitudinal} | x_k^i) = \exp\left(-\frac{(D_{GNSS,longitudinal}^{i,k})^2}{\sigma_{GNSS}^2}\right)
\quad (10)
$$
Figure 9. Distance estimation for a particle.
This paper sets the value of $\sigma_{GNSS}^2$ to 9 m², which is tuned empirically. Figure 10a demonstrates the probability of the particles estimated from the GNSS measurement. The particles around the GNSS position have high weighting.
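Equations (9) and (10) translate directly into per-particle weights, which then feed the resampling step of the particle filter. The sketch below assumes that each particle's offset from the 3D-GNSS fix has already been decomposed into lateral and longitudinal components with respect to the lane direction; it is an illustration, not the authors' code.

```python
import numpy as np

SIGMA_GNSS_SQ = 9.0   # m^2, tuned empirically in the paper

def gnss_weights(d_lateral, d_longitudinal, sigma_sq=SIGMA_GNSS_SQ):
    """Equations (9)-(10) evaluated for arrays of particle offsets.

    Returns the lateral and longitudinal likelihoods separately, so that the
    lateral term can later be fused with the lane detection term (Equation (13)).
    """
    p_lat = np.exp(-(np.asarray(d_lateral) ** 2) / sigma_sq)
    p_lon = np.exp(-(np.asarray(d_longitudinal) ** 2) / sigma_sq)
    return p_lat, p_lon

def resample(particles, weights, rng=None):
    """Importance resampling: draw particles with probability proportional to weight."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return np.asarray(particles)[idx]
```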
This research refines the probability of the particles using the vision-based lane detection. The lane detection provides the distances from the vehicle center to the right and left lines, $(D_{right}^k, D_{left}^k)$. In addition, the distances from each particle to the right and left lines, $(D_{lane,right}^{i,k}, D_{lane,left}^{i,k})$, can be estimated from the prepared 2D lane marking map, as shown in Figure 9. The probability computed from the lane detection measurement, $p(V_k | x_k^i)$, is calculated as follows:
$$
p(V_k | x_k^i) = \frac{1}{2}\left(p(V_{k,left} | x_k^i) + p(V_{k,right} | x_k^i)\right)
\quad (11)
$$

$$
p(V_{k,j} | x_k^i) = \exp\left(-\frac{(D_{lane,j}^{i,k} - D_j^k)^2}{\sigma_{lane}^2}\right), \quad j \in \{right, left\}
\quad (12)
$$
where the probabilities $p(V_{k,right} | x_k^i)$ and $p(V_{k,left} | x_k^i)$ correspond to the right line and the left line, respectively. This paper empirically sets the variance $\sigma_{lane}^2$ to 0.25 m². Figure 10b visualizes the probability $p(V_k | x_k^i)$ estimated from the lane detection measurement. When lane keeping is detected, the particles outside the occupied lane of the previous result are excluded from the calculation, and are visualized as black dots in Figure 10b. The particles around the lane center have higher probability, because the vehicle runs along the lane center in the experiments. It is important to note that the lane detection can only sense the lateral position; it cannot perceive position differences along the longitudinal direction. Therefore, we propose to combine $p(V_k | x_k^i)$ with $p(G_{k,lateral} | x_k^i)$. Thus, the integrated probability is represented as:
$$
p(G_k, V_k | x_k^i) = \left((1 - \gamma)\, p(G_{k,lateral} | x_k^i) + \gamma\, p(V_k | x_k^i)\right) p(G_{k,longitudinal} | x_k^i)
\quad (13)
$$
where $\gamma$ is the importance weight of the lane detection measurement, which is set to 0.5 empirically. Figure 10c visualizes the integrated probability of all particles using different colors. Compared to Figure 10a, the high-weighting particles are not around the GNSS positioning result, but appear in the correct lane. The term $p(V_k | x_k^i)$ in Equation (13) leads to this improvement. In addition, when the system detects lane changing, the particle exclusion follows the lane changing direction. Figure 10d demonstrates the valid particles corresponding to the lane detection measurement in the lane changing case.
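Combining the two measurements then follows Equations (11)–(13). A self-contained sketch under the same assumptions, with the map-derived and observed line distances passed in by the caller:

```python
import numpy as np

SIGMA_GNSS_SQ = 9.0    # m^2, empirically tuned in the paper
SIGMA_LANE_SQ = 0.25   # m^2, empirically set in the paper
GAMMA = 0.5            # importance weight of the lane detection measurement

def integrated_weight(d_lat, d_lon, d_map_left, d_map_right,
                      d_obs_left, d_obs_right):
    """Particle weight combining GNSS and lane detection, Equations (8)-(13)."""
    # Equations (9)-(10): lateral/longitudinal GNSS likelihoods
    p_lat = np.exp(-(d_lat ** 2) / SIGMA_GNSS_SQ)
    p_lon = np.exp(-(d_lon ** 2) / SIGMA_GNSS_SQ)
    # Equation (12): agreement between map-derived and observed line distances
    p_left  = np.exp(-((d_map_left  - d_obs_left)  ** 2) / SIGMA_LANE_SQ)
    p_right = np.exp(-((d_map_right - d_obs_right) ** 2) / SIGMA_LANE_SQ)
    p_vision = 0.5 * (p_left + p_right)          # Equation (11)
    # Equation (13): the vision term refines only the lateral component
    return ((1.0 - GAMMA) * p_lat + GAMMA * p_vision) * p_lon
```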
Figure 10. Particle probability evaluation.

6. Experiments

We chose the Hitotsubashi area in Tokyo for the experiments because of its high density of tall buildings. Figure 11 shows the 3D map developed for 3D-GNSS and the 2D map used for the integration. We used two kinds of data to construct the 3D map. The first is the two-dimensional building footprint, provided by the Geospatial Information Authority of Japan. The other is the Digital Surface Model (DSM) data obtained from Aero Asahi Corporation, which includes the height information of the buildings [20]. The 2D map is generated from high-resolution aerial images provided by NTT-geospace.
Figure 11. 2D and 3D Maps used in this research.
In the experiments, a u-blox EVK-M8 GNSS module, a commercial-grade receiver produced by u-blox (Thalwil, Switzerland), was adopted. It can receive signals from multiple GNSS constellations (GPS, GLONASS, and QZSS). We placed the u-blox receiver on the top of our vehicle to collect pseudorange measurements. In addition, an IMU (AMU-3002A Lite, Silicon Sensing, Amagasaki-shi, Hyogo Prefecture, Japan) and a speedometer recorder were used to measure the attitude angles and the vehicle velocity, respectively. Moreover, an onboard camera was installed in the vehicle to capture front-view images while driving. These images are the input of the lane detection algorithm. In addition, we manually identified the ground truth trajectory of our vehicle from these images, and manually decided the occupied lane for the result evaluation. The driving distance is approximately 1500 m in each test.
In vehicle self-localization, identifying which lane the vehicle occupies is more important than the absolute positioning accuracy. Therefore, both the lateral error and the correct lane rate are employed to evaluate the performance of the localization system. Figure 12 shows the definition of the lateral error and the heading direction error. The lateral error $Error_k^P$ is the minimum distance from $P_k$ to the ground truth. The heading direction error $Error_k^{\theta}$ is the direction difference between the estimated trajectory and the ground truth trajectory. The heading direction error is used for the evaluation of the heading direction rectification.
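The lateral error metric is the minimum distance from an estimated position to the ground-truth trajectory. A minimal sketch that approximates the ground truth as a polyline and computes the point-to-segment distance (an assumption about how the metric is evaluated):

```python
import numpy as np

def lateral_error(p, gt_points):
    """Minimum distance from estimated position p to a ground-truth polyline.

    p         : (2,) estimated position (north, east), in meters
    gt_points : (N, 2) ordered ground-truth trajectory points
    """
    p = np.asarray(p, dtype=float)
    gt = np.asarray(gt_points, dtype=float)
    best = np.inf
    for a, b in zip(gt[:-1], gt[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + t * ab)))   # point-to-segment distance
    return best
```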
Figure 12. Definition of positioning error and heading direction error.

6.1. Evaluation for Lane Detection

This section evaluates the lane detection by comparing the estimated lanes with the hand-labeled ground truth. The cases of correct detection, partial non-detection, complete non-detection and false detection are shown in Figure 13, Figure 14, Figure 15 and Figure 16, respectively.
Figure 13. Examples of correct detections, (a) column shows images captured from camera; (b) column is IPM image and indicates the ground truth using yellow lines; (c) column shows lane detection result, where the blue rectangle is the ROI for lane detection and the red line is the detection result.
Figure 14. Examples of non-detection for one of lane boundaries, (a) column shows images captured from camera; (b) column is IPM image and indicates the ground truth using yellow line; (c) column shows lane detection result, where the blue rectangle is the ROI for lane detection and the red line is the detection result.
Figure 15. Examples of non-detection of both boundaries, (a) column shows images captured from camera; (b) column is IPM image and indicates the ground truth using yellow line; (c) column shows lane detection result, where the blue rectangle is the ROI for lane detection.
Figure 16. An example of false detection, (a) shows images captured from camera; (b) is IPM image and indicates the ground truth using yellow line; (c) shows lane detection result, where the blue rectangle is the region of interest (ROI) for lane detection and the red line is the detection result.
As shown in Figure 13, the developed lane detection can locate the lane boundaries in different scenarios. The first row of Figure 13 shows a lane changing case, in which the crossed line is detected accurately. The second and third rows demonstrate that there are many other vehicles around our vehicle in the experiments. Because the occupied lane is visible, the lane detection still works for finding the occupied lane. However, this kind of urban environment is challenging for detecting other lanes, because of the occlusions.
Figure 14 shows two examples of partial non-detection. In both examples, the right lane boundary is not detected while the left boundary is correctly detected. The reasons are the occlusion by other vehicles in the first example, and the degradation of the lane markings in the second example. This case is not serious for the localization system: the detected lane boundary can still be used for the integration. In the quantitative evaluation, the cases of Figure 13 and Figure 14 are considered correct detections.
Complete non-detection means that neither the left nor the right boundary can be detected, as shown in Figure 15. In the first example, the reason for the non-detection is the unclear lane markings on the road surface. In the second example, the left boundary is lost because the detection algorithm excludes lines that do not cross the bottom of the ROI, and the right boundary is lost because too few pixels of the line fall inside the ROI. In the case of complete non-detection, the localization system uses the GNSS positioning result for the integration because of the lack of lane detection results. In addition, Figure 16 shows an example of false detection. A part of the right lane boundary is occluded by the vehicle in front, whose body presents a line feature and causes the false detection.
The quantitative evaluation of the lane detection is given in Table 1. The total number of frames including lanes is 5920. The developed lane detection algorithm detects lanes in 95.3% of these frames. About 4.7% of the frames are complete non-detections, which happen around curved road areas and on roads with unclear lane markings, such as in Figure 15. Among the detections, 99.6% of the frames are correct, such as the cases in Figure 13 and Figure 14, and only 0.4% are false detections, such as the case in Figure 16. Table 1 indicates that the lane detection is highly reliable.
Table 1. Quantitative evaluation of lane detection.

Category                                             Number of Frames   Rate
Frames including lane (F)                            5920               -
Detection (D) (Figure 13, Figure 14 and Figure 16)   5642               95.3% (D/F)
Non-detection (ND) (Figure 15)                       278                4.7% (ND/F)
Correct detection (CD) (Figure 13 and Figure 14)     5619               99.6% (CD/D)
False detection (FD) (Figure 16)                     23                 0.4% (FD/D)

6.2. Evaluation for Heading Direction Rectification

As presented in Section 4, the IMU suffers from the drift problem, which generates errors in the propagation of the particles. This paper proposes to rectify the heading direction when lane keeping is detected. Two types of heading direction errors in one experiment are plotted in Figure 17. The green line indicates the error of the IMU, and the yellow line shows the error after the proposed rectification. A positive value means a difference in the counterclockwise direction. It is clear that the heading direction of the IMU has drifted by about 5° by the end of the experiment, while the error of the rectified heading direction varies around zero. This result shows that the vision-based lane detection can solve the drift problem. The experiment shown in Figure 17 corresponds to the one discussed in Figure 8. To illustrate the effectiveness of the heading rectification more intuitively, we plot the inertial trajectory using the rectified heading direction in Figure 18. After the rectification, the inertial trajectory is parallel to the ground truth trajectory. In Figure 17, the areas denoted 1 to 6 are the turning areas, which are also indicated in Figure 18. The large errors there are caused by the less reliable ground truth trajectory, because it is difficult to manually describe the exact curve of the turning trajectory.
Figure 17. Effectiveness of the heading direction rectification.
Figure 18. Inertial trajectory after heading rectification in the experiment of Figure 8.

6.3. Evaluation for Localization

In this section, we focus on the evaluation of the positioning performance. In order to understand the benefit of the proposed 3D-GNSS in urban canyon environments, this paper compares the Weighted Least Squares (WLS) GNSS-based integration and the proposed 3D-GNSS-based integration. Figure 19a shows the positioning results of WLS-GNSS and the WLS-GNSS-based integration using purple dots and green dots, respectively. The 3D-GNSS and 3D-GNSS-based integration results in the same test are shown in Figure 19b, using red dots and yellow dots. The ground truth is represented by the cyan line. In Figure 19a, the purple dots corresponding to the WLS-GNSS result are randomly spread over a wide area. On the contrary, 3D-GNSS is much more accurate than WLS-GNSS. Moreover, the 3D-GNSS-based integration also shows better performance than the WLS-GNSS-based integration, which can be seen by comparing the green dots and yellow dots in Figure 19c. Apart from the GNSS positioning method, the two integration systems use the same algorithm. This proves that the more accurate the GNSS method is, the higher the performance of the integrated system.
Figure 19. Positioning results of WLS-GNSS, 3D-GNSS and integration systems.
To explain the reason for the improvement of the 3D-GNSS method, the satellite conditions in this experiment are illustrated in Figure 20. In the experiment, the GNSS receiver receives signals from nine satellites on average. About five of the satellites are in LOS condition, and the others are NLOS. In the conventional WLS, the NLOS signals are used directly, so the pseudorange errors of the NLOS signals lead to positioning errors. In the proposed 3D-GNSS positioning method, the 3D map information is adopted to distinguish NLOS and multipath effects and to correct the pseudorange errors. Therefore, 3D-GNSS achieves better performance. It also provides an accurate basis for the integrated localization system.
Figure 20. Condition of satellites in the experiment of Figure 19. Purple points indicate the number of LOS satellites from the ground truth position. Orange points are the number of satellites using 3D-GNSS (LOS + NLOS).
We repeated the test three times along the driving route. Table 2 shows the quantitative comparison based on these multiple tests. The comparison includes “WLS-GNSS”, “WLS-GNSS&IMU&Speedometer integration”, “WLS-GNSS&IMU&Speedometer&Lane detection integration”, “3D-GNSS”, “3D-GNSS&IMU&Speedometer integration”, and “3D-GNSS&IMU&Speedometer&Lane detection integration”. The WLS-GNSS&IMU&Speedometer and 3D-GNSS&IMU&Speedometer integrations exclude the lane detection function. As demonstrated in Table 2, the 3D-GNSS method shows much better performance than WLS: about 63.2% of its results are in the correct lane. Although 3D-GNSS cannot be directly used for vehicle self-localization, it has potential as the main source in the integration. After integrating with the IMU and speedometer, the correct lane rate is increased and the mean error is improved to 1.2 m. Table 2 indicates that the integration of 3D-GNSS, IMU, speedometer and lane detection achieves a 93% correct lane rate and sub-meter accuracy.
Table 2. Comparison of different positioning methods.

Positioning Method                                     Positioning Error Mean (m)   Positioning Error Standard Deviation (m)   Correct Lane Recognition Rate
WLS-GNSS                                               7.53                         10.06                                      12.9%
WLS-GNSS&IMU&Speedometer integration                   5.72                         5.34                                       16.1%
WLS-GNSS&IMU&Speedometer&Lane detection integration    3.16                         2.56                                       37.4%
3D-GNSS                                                1.48                         1.12                                       63.2%
3D-GNSS&IMU&Speedometer integration                    1.17                         0.84                                       79.1%
3D-GNSS&IMU&Speedometer&Lane detection integration     0.75                         0.76                                       93.0%
To demonstrate the performance of the 3D-GNSS-based integration, Figure 21 shows the positioning errors of the three types of localization methods in the experiment discussed in Figure 19. Obviously, the 3D-GNSS&IMU&Speedometer&Lane detection integration has the best performance. Most of the time, the positioning error stays under 1.5 m, but there are some areas where the positioning error is larger than 1.5 m. Those areas are indicated correspondingly in Figure 21 and Figure 19b. Areas 1, 2, 3, 5, 6 and 7 in Figure 21 denote intersections. In the intersections, the lane detection function does not work because of the lack of lane markings, so the positioning error cannot be reduced compared to the 3D-GNSS&IMU&Speedometer integration. This result indicates that it is necessary to develop other road marking recognition functions for intersection areas, which is expected to reduce the positioning error there.
Area 4 is on a road link. This area is enlarged and shown in Figure 22. The vehicle enters this road link at epoch 196554. From epoch 196554 to epoch 196557, the 3D-GNSS results are in the incorrect lane (the lane to the left of the ground truth). At the same time, the camera detects lane keeping behavior. Therefore, the integrated localization system gives an incorrect positioning result, as indicated by the yellow dots. From epoch 196558, 3D-GNSS appears in the correct lane, and the result of the integration gradually becomes correct under the effect of 3D-GNSS. The epochs from 196554 to 196557 correspond to area 4 in Figure 21 and Figure 19b. This case study explains the reason for the 7% failure in the occupied lane recognition. It is important to note that this case also demonstrates that the positioning result can be corrected from an incorrect lane assignment by 3D-GNSS after several epochs.
Figure 21. Positioning error of 3D-GNSS and 3D-GNSS-based integrations.
Figure 22. Demonstration of the effect of the 3D-GNSS in the proposed integration.
Figure 23 shows another case, which demonstrates the benefit of the lane detection. At epochs 196619 and 196623, the 3D-GNSS&IMU&Speedometer integration (green dots) appears in the incorrect lane because of the 3D-GNSS error. The lane detection increases the probability of the particles in the correct lane, as explained in Figure 10. By correctly adjusting the probability of the particles, the lane detection reduces the lateral positioning error, which is the reason for the improvements shown in Table 2 and Figure 21.
Figure 23. Demonstration of the effect of the lane detection in the proposed integration.

7. Conclusions

This research extended our previous 3D-GNSS work to vehicle positioning in city environments. As indicated in the experimental results, the innovative 3D-GNSS positioning technique achieved a 1.5 m mean positioning error and a 63% correct lane recognition rate by reducing the multipath and NLOS effects. However, it is still difficult to satisfy the sub-meter requirement using 3D-GNSS alone. This paper proposed the integration of multiple onboard passive sensors, 3D-GNSS, an inertial sensor and an onboard monocular camera, to improve the accuracy of vehicle positioning. In the integrated system, a lane detection algorithm is developed in order to recognize lane keeping and lane changing behaviors. Finally, the particle filter integrates the three sources, 3D-GNSS, vehicle motion and lane detection, for the localization. In the integration, the drift problem of the inertial sensor is effectively controlled by using the lane detection. Moreover, the lane detection function provides an additional measurement to optimize the positioning result along the lateral direction. The experiments conducted in an urban traffic environment demonstrate that the proposed system achieves a 93% correct lane recognition rate and sub-meter accuracy. In the near future, sensing technology for the intersection scenario will be considered to further improve the localization accuracy. In addition, more difficult scenarios will be discussed, such as under bridges or in tunnels, where the GNSS signal suffers outages.

Acknowledgments

The authors acknowledge the Grant-in-Aid of the Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellowship for Overseas Researchers.

Author Contributions

Yanlei Gu implemented the lane detection algorithm, proposed and implemented the integrated localization algorithm, and conducted the experiments and the performance analysis. Li-Ta Hsu implemented the 3D-GNSS method and participated in the discussion of the integration algorithm development. Shunsuke Kamijo is the research leader of this project on GNSS urban localization; he proposed the idea of the 3D-GNSS method and its application to autonomous driving, and participated in all the discussions associated with this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V.; et al. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the IEEE Intelligent Vehicles Symposium 2011, Baden-Baden, Germany, 5–9 June 2011; pp. 163–168.
  2. Moosmann, F.; Stiller, C. Velodyne SLAM. In Proceedings of the IEEE Intelligent Vehicles Symposium 2011, Baden-Baden, Germany, 5–9 June 2011; pp. 393–398.
  3. Glaser, C.; Michalke, T.P.; Burkle, L.; Niewels, F. Environment perception for inner-city driver assistance and highly-automated driving. In Proceedings of the IEEE Intelligent Vehicles Symposium 2014, Dearborn, MI, USA, 8–11 June 2014; pp. 1270–1275.
  4. Choi, J. Hybrid map-based SLAM using a Velodyne laser scanner. In Proceedings of the 17th International Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 3082–3087.
  5. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors 2015, 7, 16710–16728. [Google Scholar] [CrossRef] [PubMed]
  6. Hata, A.Y.; Osorio, F.S.; Wolf, D.F. Robust curb detection and vehicle localization in urban environments. In Proceedings of the IEEE Intelligent Vehicles Symposium 2014, Dearborn, MI, USA, 8–11 June 2014; pp. 1257–1262.
  7. Hata, A.; Wolf, D. Road marking detection using LIDAR reflective intensity data and its application to vehicle localization. In Proceedings of the 17th International Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 584–589.
  8. Urmson, C.; Anhalt, J.; Bartz, D.; Clark, M.; Galatali, T.; Gutierrez, A.; Harbaugh, S.; Johnston, J.; Kato, H.; Koon, P.; et al. A robust approach to high-speed navigation for unrehearsed desert terrain. Springer Tracts Adv. Robot. 2007, 36, 45–102. [Google Scholar]
  9. Rezaei, S.; Sengupta, R. Kalman filter-based integration of DGPS and vehicle sensors for localization. IEEE Trans. Control Syst. Technol. 2007, 15, 1080–1088. [Google Scholar] [CrossRef]
  10. MacGougan, G.; Lachapelle, G.; Klukas, R.; Siu, K.; Garin, L.; Shewfelt, J.; Cox, G. Performance analysis of a stand-alone high-sensitivity receiver. GPS Solut. 2002, 6, 179–195. [Google Scholar]
  11. Groves, P.D. GNSS solutions: Multipath vs. NLOS signals. How does non-line-of-sight reception differ from multipath interference? Inside GNSS Mag. 2013, 8, 40–42. [Google Scholar]
  12. Marais, J.; Meurie, C.; Attia, D.; Ruichek, Y.; Flancquart, A. Toward accurate localization in guided transport: Combining GNSS data and imaging information. Transp. Res. Part C: Emerg. Technol. 2014, 43, 188–197. [Google Scholar] [CrossRef]
  13. Meguro, J.I.; Murata, T.; Takiguchi, J.I.; Amano, Y.; Hashizume, T. GPS multipath mitigation for urban area using omnidirectional infrared camera. IEEE Trans. Intell. Transp. Syst. 2009, 10, 22–30. [Google Scholar] [CrossRef]
  14. Bauer, S.; Obst, M.; Streiter, R.; Wanielik, G. Evaluation of shadow maps for non-line-of-sight detection in urban GNSS vehicle localization with vanets—The GAIN approach. In Proceedings of the IEEE 77th Vehicular Technology Conference 2013: VTC2013-Spring, Dresden, Germany, 2–5 June 2013; pp. 1–5.
  15. Obst, M.; Bauer, S.; Reisdorf, P.; Wanielik, G. Multipath detection with 3D digital maps for robust multi-constellation GNSS/INS vehicle localization in urban areas. In Proceedings of the IEEE Intelligent Vehicles Symposium 2012, Alcala de Henares, Spain, 3–7 June 2012; pp. 184–190.
  16. Groves, P.D. Shadow Matching: A new GNSS positioning technique for urban canyons. J. Navig. 2011, 64, 417–430. [Google Scholar] [CrossRef]
  17. Wang, L.; Groves, P.D.; Ziebart, M.K. GNSS shadow matching: Improving urban positioning accuracy using a 3D city model with optimized visibility scoring scheme. Navig. J. Inst. Navig. 2013, 60, 195–207. [Google Scholar] [CrossRef]
  18. Wang, L.; Groves, P.D.; Ziebart, M.K. Smartphone shadow matching for better cross-street gnss positioning in urban environments. J. Navig. 2014, 68, 411–433. [Google Scholar] [CrossRef]
  19. Miura, S.; Hsu, L.T.; Chen, F.; Kamijo, S. GPS error correction with pseudorange evaluation using three-dimensional maps. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3104–3115. [Google Scholar] [CrossRef]
  20. Hsu, L.T.; Gu, Y.; Kamijo, S. 3D building model-based pedestrian positioning method using GPS/GLONASS/QZSS and its reliability calculation. GPS Solut. 2015, 1–16. [Google Scholar] [CrossRef]
  21. Hsu, L.T.; Gu, Y.; Kamijo, S. NLOS correction/exclusion for GNSS measurement using RAIM and city building models. Sensors 2015, 7, 17329–17349. [Google Scholar] [CrossRef] [PubMed]
  22. Milanes, V.; Naranjo, J.E.; Gonzalez, C.; Alonso, J.; de Pedro, T. Autonomous vehicle based in cooperative GPS and inertial systems. Robotica 2008, 26, 627–633. [Google Scholar] [CrossRef]
  23. Noureldin, A.; Karamat, T.B.; Eberts, M.D.; El-Shafie, A. Performance enhancement of MEMS-based INS/GPS integration for low-cost navigation applications. IEEE Trans. Veh. Technol. 2009, 58, 1077–1096. [Google Scholar] [CrossRef]
  24. Kubo, N.; Dihan, C. Availability improvement of RTK-GPS with IMU and vehicle sensors in urban environment. In Proceedings of the 25th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2012), Nashville, TN, USA, 17–21 September 2012; pp. 1545–1555.
  25. Angrisano, A.; Petovello, M.; Pugliano, G. Benefits of combined GPS/GLONASS with low-cost MEMS IMUs for vehicular urban navigation. Sensors 2012, 12, 5134–5158. [Google Scholar] [CrossRef] [PubMed]
  26. Sun, D.B.; Petovello, M.G.; Cannon, M.E. Ultratight GPS/reduced-IMU integration for land vehicle navigation. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1781–1791. [Google Scholar] [CrossRef]
  27. Gu, Y.; Hsu, L.-T.; Wada, Y.; Kamijo, S. Integration of 3D map based GPS positioning and on-board sensors for vehicle self-localization in urban canyon. In Proceedings of the ION 2015 Pacific PNT Meeting, Honolulu, HI, USA, 20–23 April 2015.
  28. Gu, Y.; Wada, Y.; Hsu, L.-T.; Kamijo, S. vehicle self-localization in urban canyon using 3D map based GPS positioning and vehicle sensors. In Proceedings of the 3rd International Conference on Connected Vehicles & Expo (ICCVE 2014), Vienna, Austria, 3–7 November 2014.
  29. Nedevschi, S.; Popescu, V.; Danescu, R.; Marita, T.; Oniga, F. Accurate ego-vehicle global localization at intersections through alignment of visual data with digital map. IEEE Trans. Intell. Transp. Syst. 2013, 14, 673–687. [Google Scholar] [CrossRef]
  30. Gruyer, D.; Belaroussi, R.; Revilloud, M. Map-aided localization with lateral perception. In Proceedings of the IEEE Intelligent Vehicles Symposium 2014, Dearborn, MI, USA, 8–11 June 2014; pp. 674–680.
  31. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. In Proceedings of the Robotics: Science and Systems Conference (RSS), Berkeley, CA, USA, 14–16 July 2014.
  32. Engel, J.; Stückler, J.; Cremers, D. Large-scale direct slam with stereo cameras. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS) 2015, Hamburg, Germany, 28 September–2 October 2015.
  33. Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality 2007, Nara, Japan, 13–16 November 2007; pp. 225–234.
  34. Jiang, Y.H.; Xiong, G.M.; Chen, H.Y.; Lee, D.J. Incorporating a wheeled vehicle model in a new monocular visual odometry algorithm for dynamic outdoor environments. Sensors 2014, 14, 16159–16180. [Google Scholar] [CrossRef] [PubMed]
  35. Zhang, J.; Singh, S. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181.
  36. Cheng, W.C. PSO algorithm particle filters for improving the performance of lane detection and tracking systems in difficult roads. Sensors 2012, 12, 17168–17185. [Google Scholar] [CrossRef] [PubMed]
  37. Tapia-Espinoza, R.; Torres-Torriti, M. Robust lane sensing and departure warning under shadows and occlusions. Sensors 2013, 3, 3270–3298. [Google Scholar] [CrossRef] [PubMed]
  38. Aly, M. Real time detection of lane markers in urban streets. In Proceedings of the IEEE Intelligent Vehicles Symposium 2008, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12.
  39. Bertozzi, M.; Broggi, A. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Trans. Image Process. 1998, 7, 62–81. [Google Scholar] [CrossRef] [PubMed]
  40. Yenikaya, S.; Yenikaya, G.; Düven, E. Keeping the vehicle on the road: A survey on on-road lane detection systems. ACM Comput. Surv. 2013, 46, 2:1–2:43. [Google Scholar] [CrossRef]
  41. Tao, Z.; Bonnifait, P.; Fremont, V.; Ibanez-Guzman, J. Lane marking aided vehicle localization. In Proceedings of the 16th IEEE International Conference on Intelligent Transportation Systems 2013, The Hague, The Netherlands, 6–9 October 2013; pp. 1509–1515.
  42. Tao, Z.; Bonnifait, P.; Fremont, V.; Ibanez-Guzman, J. Mapping and localization using GPS, lane markings and proprioceptive sensors. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013, Tokyo, Japan, 3–7 November 2013; pp. 406–412.
  43. Schreiber, M.; Knoppel, C.; Franke, U. Laneloc: Lane marking based localization using highly accurate maps. In Proceedings of the IEEE Intelligent Vehicles Symposium 2013, Gold Coast, Australia, 23–26 June 2013; pp. 449–454.
  44. Mattern, N.; Wanielik, G. Camera-based vehicle localization at intersections using detailed digital maps. In Proceedings of the 2010 IEEE/ION Position Location and Navigation Symposium (PLANS), Indian Wells, CA, USA, 4–6 May 2010; pp. 1100–1107.
  45. Rose, C.; Britt, J.; Allen, J.; Bevly, D. An integrated vehicle navigation system utilizing lane-detection and lateral position estimation systems in difficult environments for GPS. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2615–2629. [Google Scholar] [CrossRef]
  46. Seo, Y.W.; Rajkumar, R. Tracking and estimation of ego-vehicle's state for lateral localization. In Proceedings of the 17th International Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 1251–1257.
  47. Marita, T.; Negru, M.; Danescu, R.; Nedevschi, S. Stop-line detection and localization method for intersection scenarios. In Proceedings of the IEEE 7th International Conference on Intelligent Computer Communication and Processing (ICCP 2011), Cluj-Napoca, Romania, 25–27 August 2011; pp. 293–298.
  48. Lee, B.H.; Song, J.H.; Im, J.H.; Im, S.H.; Heo, M.B.; Jee, G.I. GPS/DR Error estimation for autonomous vehicle localization. Sensors 2015, 8, 20779–20798. [Google Scholar] [CrossRef] [PubMed]
  49. Kamijo, S.; Gu, Y.; Hsu, L.-T. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon. In Proceedings of the IEEE 18th International Conference on Intelligent Transportation Systems (ITSC 2015), Gran Canaria, Spain, 15–18 September 2015; pp. 2533–2538.
  50. Garin, L.; van Diggelen, F.; Rousseau, J.M. Strobe & edge correlator multipath mitigation for code. In Proceedings of the 9th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS 1996), Kansas City, MO, USA, 17–20 September 1996; pp. 657–664.
  51. Qin, B.; Chong, Z.J.; Bandyopadhyay, T.; Ang Jr, M.H.; Frazzoli, E.; Rus, D. Curb-intersection feature based monte carlo localization on urban roads. In Proceedings of the IEEE International Conference on Robotics and Automation 2012 (ICRA 2012), Saint Paul, MN, USA, 14–18 May 2012; pp. 2640–2646.
