Article

Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner

1 School of Electrical and Electronic Engineering, Yonsei University, 50 Seodaemun-gu Sinchon-dong, Seoul 120-743, Korea
2 School of Electrical and Electronics Engineering, Chung-Ang University, 84 Heukseok-Ro Dongjak-Gu, Seoul 156-756, Korea
* Author to whom correspondence should be addressed.
Sensors 2016, 16(7), 1123; https://doi.org/10.3390/s16071123
Submission received: 21 April 2016 / Revised: 14 July 2016 / Accepted: 15 July 2016 / Published: 20 July 2016

Abstract
Urban roads contain several types of intersections, such as merge-roads, diverge-roads, plus-shaped intersections, and two types of T-shaped junctions. When an autonomous vehicle encounters a new intersection, it is crucial to recognize its type for safe navigation. In this paper, a novel intersection type recognition method using a multi-layer laser scanner is proposed for autonomous vehicles. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate frame using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation.


1. Introduction

When an autonomous vehicle drives on urban roads, it encounters a number of traffic intersections and is expected to pass through them smoothly together with other vehicles without causing any trouble. For smooth and safe passage through urban intersections, it is of crucial importance for the autonomous vehicle to recognize what type of intersection it is currently facing. Furthermore, intersection type recognition is also a valuable cue for global localization. When the autonomous vehicle has no idea about its location due to a GPS outage or a cold start, one possible solution for global localization is the intersection type recognition proposed herein. For intersection type recognition, three kinds of sensors are widely used: cameras, radars, and laser scanners.
The use of a camera is popular in intersection type recognition, and several research works have been reported [1,2,3,4]. Mostly, they use gray-value gradients to extract lane markings or roadsides. However, camera-based intersection type recognition methods have the drawback that they are computationally very expensive and sensitive to changes in illumination and weather conditions.
In [5], road boundaries were detected using radar by building maps of the static environment such as an occupancy grid map. Using a similar method, radar sensors could be used for intersection type recognition. Furthermore, radar sensors inform the system not only of the position of an obstacle but also of its velocity, because they exploit the Doppler effect. Thus, moving objects could easily be detected and removed from the grid maps if radars were used for intersection type recognition. Unfortunately, current commercial radars have too low a resolution to be used as a primary sensor, and they should be used together with a secondary sensor such as a laser scanner.
The use of a 3D laser scanner is also popular in intersection type recognition [6,7,8,9]. In [6,7], Zhu et al. developed a beam model for a 64-layer scanning lidar and applied it to intersection type recognition. In [8], Hata et al. extracted the road curb and navigable surface as features and developed a road geometry recognition system that includes intersection types, using an artificial neural network (ANN) as the classifier. In [9], Chen et al. proposed a toe-finding algorithm to detect the admissible space and intersections using a 64-layer scanning lidar. Unfortunately, the 3D multi-layer laser scanner has some problems: it is too expensive to commercialize, and the associated algorithms are computationally too expensive to run in real time. In addition, this sensor compromises the design of the vehicle. Thus, a 2D multi-layer laser scanner with three or four layers can be a solution to these drawbacks.
A 2D scanner with three or four layers is cheaper than a full 3D scanner, so it is widely used for detection tasks such as vehicle or pedestrian detection [10,11,12]. Furthermore, the 2D multi-layer laser scanner is much more design-friendly than the 3D one, since it can be installed and concealed in the front bumper. Some research works on intersection type recognition using a 2D multi-layer laser scanner have been reported [13,14,15]. Weiss et al. proposed a road boundary detection algorithm for intersections using laser scanners, based on the concept of two imaginary center lines that search the occupied cells in a grid map to find the boundaries. Unfortunately, the previous intersection type recognition methods using 2D multi-layer laser scanners have some drawbacks: first, the associated algorithms are computationally expensive because they require building a wide global occupancy grid map; second, the previous works were applied only to simple intersection scenarios such as T-shaped junctions.
In this paper, a new method for intersection type recognition using a multi-layer laser scanner is proposed. When the autonomous vehicle faces an intersection, the proposed method classifies it into one of six classes: a highway with more than two lanes, merge-roads, diverge-roads, plus-shaped intersections, and two types of T-shaped junctions.
Compared with the previous works [13,14,15], the proposed method builds the occupancy grid map (OGM) not relative to the global coordinate frame but relative to the local coordinate frame, more specifically, relative to the ego-vehicle coordinates. The local OGM thus represents the nearby environment from the ego vehicle's point of view. Local OGM building, however, is not as simple as global OGM building. In global OGM building, the standard binary Bayes filter reported in [16] is widely used. Unfortunately, the standard binary Bayes filter cannot be applied directly to local OGM building, since all static things in the environment appear to move due to the ego motion. Thus, to resolve this difficulty, the dynamic binary Bayes filter developed in [17] is employed in this paper to build an OGM relative to the local vehicle coordinate frame. Furthermore, dynamic moving objects are detected using a segment-level measure and removed from the local OGM. In this paper, the map built relative to the local frame with dynamic objects removed is referred to as the static local coordinate occupancy grid map (SLOGM), and it is used as a feature for intersection type recognition. Finally, a nearest neighbor (NN) classifier with the proposed similarity measure is applied to the intersection type recognition.
This paper is organized as follows: in Section 2, the detailed algorithm for building the SLOGM is described. In Section 3, a new intersection type recognition method using the SLOGM is developed. The experimental results of the proposed method are presented in Section 4. Finally, some conclusions are drawn in Section 5.

2. Static Local Coordinate Occupancy Grid Map

2.1. Motivation

The OGM provides a very reliable framework for representing an environment, and it is widely used in robotics for map representation [12,13,14,15]. Unfortunately, the standard OGM built relative to the global coordinate frame has a drawback: the memory requirements for its implementation become extremely high when the OGM covers a spacious region. For instance, the 80 m × 50 m local map used later in this paper contains only 64,000 cells at a 0.25 m resolution, whereas a global map covering even a few kilometers of urban road at the same resolution requires tens of millions of cells. Thus, the OGM can be applied only to small or medium-sized indoor environments such as a room or a building, and its use on spacious outdoor roads is restricted. This drawback prevents the OGM from being used as a feature for intersection type recognition.
To cope with the above problem, a local OGM is developed in this paper. The local OGM developed herein is defined not relative to the global coordinate frame but relative to the local vehicle coordinate frame. Local OGM building, however, is not as simple as global OGM building. The standard binary Bayes filter reported in [16] is widely used in global OGM building, but it cannot be used in a straightforward manner for the local OGM. In the standard binary Bayes filter, the environment is assumed not to change at all. This assumption makes sense in global OGM building, but it does not hold at all in local OGM building because of the ego motion. More specifically, the environment $m$ of interest is decomposed into a set of evenly spaced cells

$$m_t = \{ m_t^1, m_t^2, \ldots, m_t^N \}$$

in the OGM, and each cell is allowed to take on either occupied (O) or free (F),

$$m_t^i \in \{O, F\}$$

where the subscript $t$ denotes the time and $N$ denotes the number of cells in the environment $m$. In OGM building, the map is defined as a posterior probability $p(m_t^i \mid u_{1:t}, z_{1:t})$ for each cell $m_t^i$, where $z_{1:t} = \{z_1, z_2, z_3, \ldots, z_t\}$ is the set of laser scanner measurements corresponding to $m^i$ collected from time 1 to $t$, and $u_{1:t} = \{u_1, u_2, u_3, \ldots, u_t\}$ is the set of control inputs corresponding to the vehicle's odometry collected from time 1 to $t$. In global OGM building, $m_t^i$ is assumed to be unknown but fixed. That is,
$$p(m_t^i = O \mid m_{t-1}^i = F) = p(m_t^i = F \mid m_{t-1}^i = O) = 0$$

and

$$p(m_t^i = O \mid m_{t-1}^i = O) = p(m_t^i = F \mid m_{t-1}^i = F) = 1$$
Thus, if $m_t^i$ changes, it takes a long time for $p(m_t^i \mid u_{1:t}, z_{1:t})$ to change from zero to one (or from one to zero). Consequently, the direct application of the standard Bayes filter to local OGM building causes the problem illustrated in Figure 1. Suppose that the autonomous vehicle is driving along a highway with two preceding vehicles at time $t-1$, as in Figure 1a. The two preceding vehicles drive at the same speed as the autonomous vehicle.
The first figure in Figure 1a is the view from the autonomous vehicle at time $t-1$, and the second figure in Figure 1a is the corresponding local OGM $p(m_{t-1}^i = O \mid u_{1:t-1}, z_{1:t-1}^i)$. At the next time $t$, the autonomous vehicle moves forward together with the two preceding vehicles at the same speed, as shown in Figure 1b, and the view from the autonomous vehicle changes accordingly. In order to build the local OGM relative to the autonomous vehicle, the previous local OGM $p(m_{t-1}^i = O \mid u_{1:t-1}, z_{1:t-1}^i)$ is shifted downwards according to the odometry of the autonomous vehicle, which turns it into $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$, as shown in Figure 1c. This step corresponds to the prediction step in Bayes filtering. At time $t$, the view from the ego vehicle looks like the first figure in Figure 1d, and the sensor measurements $p(m_t^i = O \mid z_t^i)$ shown in the second figure in Figure 1d are presented. In the standard Bayes filter, the two OGMs are combined by the Bayes rule as shown in Figure 1e, where a big "+" denotes the Bayes inference. In the inference, $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$ in Figure 1c plays the role of a prior, while $p(m_t^i = O \mid z_t^i)$ in Figure 1d plays the role of a likelihood (more precisely, an inverse likelihood). In Figure 1e, the regions A and A' correspond to the region A''. In the prior $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$, the region A is free since it was free in $p(m_{t-1} \mid u_{1:t-1}, z_{1:t-1})$.
When the measurement $p(m_t^i = O \mid z_t^i)$ is presented, the region A' is no longer observed because it lies behind the preceding vehicles. By combining A and A' through Bayesian inference, the region A'' remains almost free, indicated in almost white, as shown in Figure 1e; obviously, this is not true. The region A'' is unknown and should be marked in gray. The region A'' will eventually turn gray, but it takes some time.
Thus, to resolve the difficulty in the local OGM building, the dynamic binary Bayes filter developed in [17] is employed to build an OGM relative to the local vehicle coordinate in this paper. In the dynamic binary Bayes filter, the value of the cell in the OGM is assumed to change.
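Before describing the filter itself, the prediction step from the example above (shifting the previous local OGM downwards according to the ego vehicle's odometry before fusing the new measurement) can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes a purely longitudinal motion, a fixed cell size, and a map whose rows run from far ahead of the vehicle (row 0) toward its rear, with newly exposed cells initialized to the unknown probability 0.5.

```python
import numpy as np

def predict_local_ogm(prev_ogm, forward_motion_m, cell_size_m=0.25, p_unknown=0.5):
    """Shift the previous local OGM according to the ego vehicle's forward motion.

    Simplified sketch: only a pure forward translation is handled, and cells
    that newly enter the map ahead of the vehicle are filled with the unknown
    probability 0.5. Rotation and sub-cell motion are ignored here.
    """
    shift_cells = int(round(forward_motion_m / cell_size_m))
    if shift_cells <= 0:
        return prev_ogm.copy()
    predicted = np.full_like(prev_ogm, p_unknown)
    # Row 0 lies farthest ahead of the vehicle; moving forward shifts every
    # previously observed cell toward the bottom (rear) of the local map.
    predicted[shift_cells:, :] = prev_ogm[:-shift_cells, :]
    return predicted
```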

2.2. Occupancy Grid Mapping Relative to Autonomous Vehicles

In this paper, the dynamic binary Bayes filter developed in [17] is used to update the posterior $p(m_{t-1}^i \mid u_{1:t-1}, z_{1:t-1}^i)$ when a new measurement $z_t^i$ and a new movement $u_t$ are presented. Obviously, each cell satisfies

$$p(m_t^i = F \mid u_{1:t}, z_{1:t}^i) + p(m_t^i = O \mid u_{1:t}, z_{1:t}^i) = 1$$
In the above equation, the posterior $p(m_t^i \mid u_{1:t}, z_{1:t}^i)$ can be rewritten as

$$p(m_t^i \mid u_{1:t}, z_{1:t}^i) = p(m_t^i \mid u_{1:t}, z_t^i, z_{1:t-1}^i) = \frac{p(z_t^i \mid m_t^i, u_{1:t}, z_{1:t-1}^i)\, p(m_t^i \mid u_{1:t}, z_{1:t-1}^i)}{p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)}$$
Since the new measurement $z_t^i$ is independent of the previous measurements $z_{1:t-1}^i$ and movements $u_{1:t}$ given $m_t^i$, we have

$$p(z_t^i \mid m_t^i, u_{1:t}, z_{1:t-1}^i) = p(z_t^i \mid m_t^i)$$
Thus,

$$p(m_t^i \mid u_{1:t}, z_{1:t}^i) = \frac{p(z_t^i \mid m_t^i)\, p(m_t^i \mid u_{1:t}, z_{1:t-1}^i)}{p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)}$$
Applying the Bayes rule to the likelihood term $p(z_t^i \mid m_t^i)$ in the above equation yields

$$p(m_t^i \mid u_{1:t}, z_{1:t}^i) = \frac{p(m_t^i \mid z_t^i)\, p(z_t^i)}{p(m_t^i)} \cdot \frac{p(m_t^i \mid u_{1:t}, z_{1:t-1}^i)}{p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)}$$
Thus, the posterior can be computed by

$$p(m_t^i = O \mid u_{1:t}, z_{1:t}^i) = \frac{p(m_t^i = O \mid z_t^i)\, p(z_t^i)\, p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = O)\, p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)}$$

and

$$p(m_t^i = F \mid u_{1:t}, z_{1:t}^i) = \frac{p(m_t^i = F \mid z_t^i)\, p(z_t^i)\, p(m_t^i = F \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = F)\, p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)}$$
Dividing the above two equations yields

$$\frac{p(m_t^i = O \mid u_{1:t}, z_{1:t}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t}^i)} = \frac{p(m_t^i = O \mid z_t^i)}{p(m_t^i = F \mid z_t^i)} \cdot \frac{p(m_t^i = F)}{p(m_t^i = O)} \cdot \frac{p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t-1}^i)}$$
Applying the total probability theorem and the Markov assumption to the last factor gives

$$\frac{p(m_t^i = O \mid u_{1:t}, z_{1:t}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t}^i)} = \frac{p(m_t^i = O \mid z_t^i)}{p(m_t^i = F \mid z_t^i)} \cdot \frac{p(m_t^i = F)}{p(m_t^i = O)} \times \frac{p(m_t^i = O \mid m_{t-1}^i = O)\, p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i) + p(m_t^i = O \mid m_{t-1}^i = F)\, p(m_{t-1}^i = F \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = F \mid m_{t-1}^i = O)\, p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i) + p(m_t^i = F \mid m_{t-1}^i = F)\, p(m_{t-1}^i = F \mid u_{1:t}, z_{1:t-1}^i)}$$
For simplicity, we assume that the state transition probabilities are constant, so the state transition can be represented by the following four parameters:

$$\pi_{11} = p(m_t^i = O \mid m_{t-1}^i = O), \quad \pi_{12} = p(m_t^i = O \mid m_{t-1}^i = F), \quad \pi_{21} = p(m_t^i = F \mid m_{t-1}^i = O), \quad \pi_{22} = p(m_t^i = F \mid m_{t-1}^i = F)$$

where $\pi_{11} + \pi_{21} = 1$ and $\pi_{12} + \pi_{22} = 1$. The above ratio can be rearranged using these constraints together with the simple identity

$$p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i) + p(m_{t-1}^i = F \mid u_{1:t}, z_{1:t-1}^i) = 1$$

and likewise for $p(m_t^i \mid z_t^i)$ and the prior $p(m_t^i)$.
Then,

$$\frac{p(m_t^i = O \mid u_{1:t}, z_{1:t}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t}^i)} = \frac{p(m_t^i = O \mid z_t^i)}{1 - p(m_t^i = O \mid z_t^i)} \cdot \frac{1 - p(m_t^i = O)}{p(m_t^i = O)} \cdot \frac{\pi_{11}\, p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i) + \pi_{12}\bigl(1 - p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i)\bigr)}{\pi_{21}\, p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i) + \pi_{22}\bigl(1 - p(m_{t-1}^i = O \mid u_{1:t}, z_{1:t-1}^i)\bigr)} \equiv \rho_t^i$$
Finally, the modified binary Bayes filter for the local OGM is

$$p(m_t^i = O \mid u_{1:t}, z_{1:t}^i) = \frac{\rho_t^i}{1 + \rho_t^i}$$
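A compact sketch of this update for a whole grid is given below. The inverse sensor model $p(m_t^i = O \mid z_t^i)$, the prior $p(m_t^i = O)$, and the transition parameters are assumed to be supplied by the user; the numerical transition values shown are illustrative and not taken from the paper.

```python
import numpy as np

def dynamic_bayes_update(p_prev, p_meas, p_prior=0.5,
                         pi11=0.98, pi12=0.02, pi21=0.02, pi22=0.98):
    """One step of the dynamic binary Bayes filter (sketch).

    p_prev : predicted occupancy p(m_{t-1}^i = O | u_{1:t}, z_{1:t-1}^i), per cell
    p_meas : inverse sensor model p(m_t^i = O | z_t^i), per cell
    p_prior: prior occupancy p(m_t^i = O)
    pi11..pi22 : state transition probabilities (pi11 + pi21 = 1, pi12 + pi22 = 1);
                 the default values are illustrative placeholders.
    """
    eps = 1e-6
    # Total probability over the previous cell state (transition model).
    p_pred_occ = pi11 * p_prev + pi12 * (1.0 - p_prev)
    p_pred_free = pi21 * p_prev + pi22 * (1.0 - p_prev)
    # Odds ratio rho_t^i combining measurement, prior, and predicted occupancy.
    rho = (p_meas / np.clip(1.0 - p_meas, eps, None)) \
        * ((1.0 - p_prior) / p_prior) \
        * (p_pred_occ / np.clip(p_pred_free, eps, None))
    return rho / (1.0 + rho)  # p(m_t^i = O | u_{1:t}, z_{1:t}^i)
```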
In Figure 2, the static and dynamic binary Bayes filters are applied to real sensor data, and the local OGM is built relative to the vehicle coordinate frame. Figure 2a,b show the results of the static and dynamic binary filters, respectively, and Figure 2c is the corresponding real-world image taken on a highway. In the figures, the darker a cell is, the more likely it is occupied. When the static method developed in [16] is used, the problem explained in Figure 1 arises, as shown in Figure 2a: the regions occluded by the guardrail have bright cells, which would mean that these cells are unoccupied, but this is not true. In contrast, when the dynamic filter is applied, the occluded region behind the guardrail keeps intermediate occupancy probabilities corresponding to unknown regions, as shown in Figure 2b.

2.3. Dynamic Object Removal

There may be a number of moving objects at intersections, and they are likely to interfere with intersection type recognition. Thus, dynamic objects are removed before the intersection type recognition in this paper. A typical example is given in Figure 3. Figure 3a,b depict the raw measurements from the scanner, while Figure 3c depicts the segmentation results obtained by adaptive breakpoint detection (ABD) [18]. In Figure 3a,b, the colors indicate layer information: layers 0, 1, 2, and 3 are indicated in blue, red, green, and black, respectively. In Figure 3c, different colors indicate different segments. Three large segments are formed, indicated in magenta, cyan, and black; they correspond to the left road boundary, the right road boundary, and the preceding car, respectively. The black segment should be removed before the intersection type recognition. Let us denote the $j$th segment at time $t$ by

$$\hat{z}_t^j = \{\hat{z}_t^{j,1}, \hat{z}_t^{j,2}, \hat{z}_t^{j,3}, \ldots, \hat{z}_t^{j,M_j}\}, \quad j = 1, \ldots, N_t$$
where $j$ is the index of the segment and $N_t$ denotes the number of segments formed at time $t$; $M_j$ denotes the length of the $j$th segment $\hat{z}_t^j$. The difference between $z_t^i$ and $\hat{z}_t^j$ is explained in Figure 4. Then, let us denote the region in the local OGM $m_t$ that is hit by $\hat{z}_t^j$ as

$$\hat{m}_t^j = \{ m_t^i \in m_t \mid \mathrm{pos}(m_t^i) = \mathrm{pos}(\hat{z}_t^{j,p}),\ i = 1, \ldots, N,\ p = 1, \ldots, M_j \}$$
where $\mathrm{pos}(\cdot) \in \mathbb{R}^2$ denotes the longitudinal and lateral coordinates relative to the autonomous vehicle. To identify the dynamic segments, which move over time, a new score

$$S(\hat{z}_t^j) = 1 - \frac{\mathrm{card}(\hat{m}_t^j \cap \hat{m}_{t-1}^j)}{\mathrm{card}(\hat{m}_t^j)}$$

is defined to measure the degree to which the segment $\hat{z}_t^j$ can be classified as a dynamic object, where $\mathrm{card}(\cdot)$ denotes the cardinality of the argument set. The physical meaning of this score is that the smaller the intersection between $\hat{m}_t^j$ and $\hat{m}_{t-1}^j$, the larger the score, and the more likely the segment comes from a dynamic object.
In conclusion, if

$$S(\hat{z}_t^j) > \delta$$

then $\hat{m}_t^j$ is quite different from $\hat{m}_{t-1}^j$, and the corresponding segment $\hat{z}_t^j$ is classified as a dynamic object, where $\delta$ is a threshold for the dynamic object. If

$$S(\hat{z}_t^j) \le \delta$$

then the corresponding segment $\hat{z}_t^j$ does not move much, and it is classified as a static object. The result of the dynamic object removal is shown in Figure 5. The local OGMs before and after removing the dynamic object are depicted in Figure 5a,c, respectively, and the segment corresponding to the preceding vehicle is magnified in Figure 5b. As shown in the figures, a preceding vehicle with a gray trail is present in Figure 5a, but it is removed in Figure 5c. From now on, the OGM in the autonomous vehicle's coordinate frame with dynamic objects removed is referred to as the static local coordinate occupancy grid map (SLOGM).
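As a minimal illustration of this test, the score can be computed as follows, assuming each $\hat{m}_t^j$ is represented as a Python set of (row, column) grid indices and that segments have already been associated between consecutive frames; the threshold value shown is an assumed placeholder, not the value used in the paper.

```python
def dynamic_score(cells_t, cells_t_prev):
    """S(z_t^j) = 1 - |m_t^j ∩ m_{t-1}^j| / |m_t^j| for one segment.

    cells_t, cells_t_prev: sets of (row, col) grid indices hit by the segment
    at times t and t-1, respectively.
    """
    if not cells_t:
        return 0.0
    return 1.0 - len(cells_t & cells_t_prev) / len(cells_t)

def is_dynamic(cells_t, cells_t_prev, delta=0.5):
    """Classify the segment as dynamic if its score exceeds the threshold delta
    (0.5 is an assumed placeholder value)."""
    return dynamic_score(cells_t, cells_t_prev) > delta
```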

3. Intersection Type Recognition Using the SLOGM

In this section, a new intersection type recognition method is presented. It is actually a multiple-class classification problem and the SLOGM explained in Section 2 is used as a feature for the classification problem. For the sake of simplicity, a new similarity measure is developed and a nearest neighbor (NN) classifier is used based on the new similarity measure.

3.1. Intersection Types

In this paper, six types of intersections are considered: a highway, a merge-road, a diverge-road, a plus-shaped intersection, and two kinds of T-shaped junctions, as shown in Figure 6a–f, respectively. Thus, the problem considered herein is a six-class classification problem, and the SLOGM is used as a feature for the classification.
The highway is a simple straight road and looks like an 'I', as shown in Figure 6a. The merge-road is a junction at which an additional road joins the main road, and thus it looks like an upside-down lower-case 'y', as shown in Figure 6b. The diverge-road is a junction at which the main road splits into two roads, and it looks like a 'y', as shown in Figure 6c. At a plus-shaped intersection, two roads meet and cross each other, as shown in Figure 6d; the current longitudinal road is clearly observed by the laser scanner, but the two latitudinal roads are only partially observed due to the limited field of view (FOV) of the laser scanner. At the first type of T-junction, another latitudinal road merges with the current longitudinal road at a right angle, as shown in Figure 6e. At the second type of T-junction, the current longitudinal road ends and the vehicle can go either left or right in the perpendicular direction, as shown in Figure 6f. Thus, a total of six classes are considered in this paper.

3.2. New Similarity Measure and Nearest Neighbor Classifier

In this subsection, a new similarity measure using the SLOGM is developed for intersection type recognition, and it is applied to implement the NN classifier. Let us suppose that we are given a training set $D$ with $N_{train}$ training samples

$$D = \{(M_1, t_1), (M_2, t_2), (M_3, t_3), \ldots, (M_{N_{train}}, t_{N_{train}})\}$$

where $M_n = [M_n^1\ M_n^2\ \cdots\ M_n^N]^T \in [0,1]^N$ is an SLOGM of size $N$ and $t_n \in \{H, M, D, P, T_1, T_2\}$ is the associated intersection type; $n = 1, 2, \ldots, N_{train}$ is an index over training samples. Here, $H$, $M$, $D$, $P$, $T_1$, and $T_2$ denote 'highway', 'merge-road', 'diverge-road', 'plus-shaped intersection', 'first type T-junction', and 'second type T-junction', respectively. To apply the NN classifier to intersection type recognition, a new similarity measure is developed. When two SLOGMs $L$ and $M$ with $L, M \in [0,1]^N$ are given, the overlapped area (OA) between them is defined based on their free space by
$$OA(L, M) = \mathrm{card}\{\mathrm{Free}(L) \cap \mathrm{Free}(M)\}$$

where $\mathrm{Free}(M) = \{i \mid M^i < \varepsilon_{free},\ i = 1, 2, \ldots, N\}$ is the set of cells in the SLOGM that have low occupancy probability and correspond to the drivable roads; $\varepsilon_{free}$ is a small threshold that determines whether a cell is free or occupied, and $\mathrm{card}(\cdot)$ denotes the cardinality of a set. That is, the OA expresses the degree to which two SLOGMs share drivable roads. Then, the similarity between the two SLOGMs $L$ and $M$ is defined by

$$SIM(L, M) = \frac{OA^2(L, M)}{\mathrm{card}(\mathrm{Free}(L)) \times \mathrm{card}(\mathrm{Free}(M))} \in [0, 1]$$

The similarity measure $SIM(L, M)$ is the squared geometric mean of the normalized overlapped areas between the two SLOGMs, and when two SLOGMs have similar drivable roads, $SIM(L, M)$ becomes close to 1. When the training set $D$ defined above is given and a test SLOGM $L \in [0,1]^N$ is presented, the intersection type of $L$ can be predicted by the NN classifier as

$$\mathrm{Type}(L) = t_{n_{NN}} \in \{H, M, D, P, T_1, T_2\}$$

where $n_{NN} = \arg\max_n SIM(L, M_n)$.
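The following sketch illustrates the similarity measure and the resulting NN classifier, assuming each SLOGM is stored as a flattened NumPy array of occupancy probabilities in [0, 1]; the free-space threshold used here is an assumed placeholder rather than the value used in the paper.

```python
import numpy as np

def free_cells(slogm, eps_free=0.2):
    """Indices of cells treated as free (drivable); eps_free is an assumed value."""
    return set(np.flatnonzero(slogm < eps_free))

def similarity(L, M, eps_free=0.2):
    """SIM(L, M) = OA^2 / (|Free(L)| * |Free(M)|), a value in [0, 1]."""
    free_L, free_M = free_cells(L, eps_free), free_cells(M, eps_free)
    if not free_L or not free_M:
        return 0.0
    oa = len(free_L & free_M)  # overlapped free area OA(L, M)
    return oa * oa / (len(free_L) * len(free_M))

def nn_intersection_type(test_slogm, training_set, eps_free=0.2):
    """training_set: list of (SLOGM, intersection_type) pairs.
    Returns the type of the most similar training sample."""
    sims = [similarity(test_slogm, M_n, eps_free) for M_n, _ in training_set]
    return training_set[int(np.argmax(sims))][1]
```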

4. Experiment Setup

The validity of the proposed method is demonstrated through experimentation. The LUX 2010 from IBEO (Hamburg, Germany) is used as the multi-layer laser scanner, and it is installed on a KIA K5 (Seoul, Korea), as shown in Figure 7. The horizontal FOV of the LUX 2010 is 110° with a 0.125° resolution, and the vertical FOV is 3.2° with a 0.8° resolution. A single camera is also installed at the top of the windshield to gather the ground truth (GT) of the intersection types. A total of 1213 SLOGM samples are collected, and each class has a similar number of samples. Each SLOGM sample covers an area of 80 m × 50 m, and each cell in the SLOGM is 0.25 m × 0.25 m.
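As a quick sanity check (not code from the paper), these figures imply the following grid dimensions per SLOGM:

```python
# Grid dimensions implied by an 80 m x 50 m map with 0.25 m cells.
length_m, width_m, cell_m = 80.0, 50.0, 0.25
rows, cols = int(length_m / cell_m), int(width_m / cell_m)
print(rows, cols, rows * cols)  # 320 200 64000 cells per SLOGM
```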
To show the validity of the proposed method, its performance is compared with that of [6,8]. In this experiment, the methods of [6,8] were implemented by the authors. The two previous works used 3D scanners with 32 or 64 layers, respectively, but their ideas were implemented here using the 2D laser scanner IBEO LUX 2010 with four layers for a fair comparison with the proposed method.
For the validation of the proposed system, five-fold cross validation is conducted. The whole set of samples is randomly partitioned into five subsets of equal size. The first four subsets are used as the training set and the last subset as the test set, as in Figure 8. Then, three of the first four subsets together with the last subset are used as the training set, and the remaining subset is used as the test set, and so on. This process is repeated until all five subsets have been used as the test set exactly once. The five-fold cross validation is run 100 times, and the results are summarized in the next section.
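A minimal sketch of this evaluation protocol is given below, assuming the SLOGM samples, their labels, and a classify() callback (for example, the NN classifier sketched in Section 3.2) are supplied by the caller; the function name and structure are illustrative only.

```python
import numpy as np

def five_fold_accuracy(samples, labels, classify, n_runs=100, seed=0):
    """Repeated five-fold cross validation (sketch).

    samples : list of SLOGMs, labels : list of class labels.
    classify(train_samples, train_labels, test_sample) -> predicted label.
    Returns the overall fraction of correctly classified test samples.
    """
    rng = np.random.default_rng(seed)
    n = len(samples)
    correct, total = 0, 0
    for _ in range(n_runs):
        folds = np.array_split(rng.permutation(n), 5)  # five subsets of (nearly) equal size
        for k in range(5):
            test_idx = folds[k]
            train_idx = np.concatenate([folds[m] for m in range(5) if m != k])
            train_s = [samples[i] for i in train_idx]
            train_l = [labels[i] for i in train_idx]
            for i in test_idx:
                correct += int(classify(train_s, train_l, samples[i]) == labels[i])
                total += 1
    return correct / total
```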

5. Experiment Results

The NN classifier based on the proposed similarity measure is applied to the intersection type recognition. The six classes considered herein are summarized in Table 1. As stated, five-fold cross validation is conducted one hundred times; the quantitative results of intersection type recognition are summarized in Table 2, and some illustrative examples are given in Figure 9. Table 2 is the confusion matrix for the six classes. In Figure 9, the first and third columns show the actual environments superimposed with the laser scanner data, and the second and fourth columns show the corresponding SLOGMs.
As shown in Table 2, the class $T_1$ (first type T-junction) has the highest true positive rate (TPR) among the six classes, 91.575%. The class $T_2$ (second type T-junction) has the second highest TPR, 91.532%. The reason for the excellence of the two T-junctions is that their SLOGMs have shapes that are relatively distinct from the other SLOGMs. In class $T_1$, a single latitudinal road joins the main longitudinal road at the junction in the perpendicular direction, as shown in Figure 9e. In class $T_2$, the current main road ends and branches off to the left or to the right in the perpendicular direction, as shown in Figure 9f. The distinctive shapes of the two junctions give them high similarity scores within their own classes and distinguish their samples from the others. On the other hand, the class $M$ (merge-road) is the most difficult to classify. Since the FOV of the laser scanner is limited, the whole intersection is rarely observed. When the autonomous vehicle enters a merge-road, the road may look like just a highway or a common straight road, because the boundaries of the intersection block the laser scanner from scanning the latitudinal roads, as shown in Figure 9b. For a similar reason, the class $H$ is also hard to identify, because it shares the common characteristic of straightness with the other intersections.
Finally, the proposed method is compared with the previous works [6,8] in terms of true positive rate (TPR). The TPRs of [6,8] reported herein are lower than the values originally reported in [6,8], because the current TPRs are obtained using the four-layer IBEO LUX 2010, while the original TPRs were obtained using 3D Velodyne scanners (Morgan Hill, CA, USA) with 32 or 64 layers. A box plot visualizing the cross validation results is given in Figure 10. From the figure, the proposed method outperforms the previous two methods: the average TPRs of [6,8] are 54.61% and 46.91%, respectively, while that of the proposed method is 89.15%. The reason for the superiority of the proposed method might be that the SLOGM proposed herein is a good match for the 2D multi-layer laser scanner and has stronger discriminative power than the features of [6,8] for intersection type recognition using a laser scanner.

6. Conclusions

In this paper, a new intersection type recognition method has been proposed. Unlike the previous works, the occupancy grid map was built relative to the local coordinate frame, and the intersection type was recognized based on this local OGM. The dynamic binary Bayes filter was employed to solve the cell change problem that arises in local OGM building. A new measure $S(\hat{z}_t^j)$ was proposed to remove moving dynamic objects from the local OGM. Furthermore, a new similarity measure $SIM(L, M)$ between two SLOGMs was developed and combined with an NN classifier to implement the intersection type recognition system. Finally, the proposed method was applied to a real-world problem, and its validity was verified by experimentation.

Acknowledgments

This work was supported by the Technology Innovation Program, 10052731, Development of low level video and radar fusion system for advanced pedestrian recognition funded by the Ministry of Trade, Industry & Energy (MOTIE), Korea.

Author Contributions

Jhonghyun An, Baehoon Choi and Euntai Kim designed the algorithm, and carried out the experiment, analyzed the results, and wrote the paper. Kwee-Bo Sim summarized the experimental results and gave helpful suggestions about this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Broggi, A. Parallel and local feature extraction: A real-time approach to road boundary detection. IEEE Trans. Image Process. 1995, 4, 217–223. [Google Scholar]
  2. Kong, H.; Audibert, J.-Y.; Ponce, J. General road detection from a single image. IEEE Trans. Image Process. 2010, 19, 2211–2220. [Google Scholar] [PubMed]
  3. Kluge, K. Extracting road curvature and orientation from image edge points without perceptual grouping into features. In Proceedings of the IEEE Intelligent Vehicles Symposium, Paris, France, 24–26 October 1994; pp. 109–114.
  4. Danescu, R.; Oniga, F.; Nedevschi, S. Modeling and tracking the driving environment with a particle-based occupancy grid. IEEE Trans. Intell. Transp. Syst. 2011, 12, 1331–1342. [Google Scholar] [CrossRef]
  5. Homm, F.; Kaempchen, N.; Ota, J.; Burschka, D. Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection. In Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 1006–1013.
  6. Zhu, Q.; Chen, L.; Li, Q.Q.; Li, M.; Nüchter, A.; Wang, J. 3D lidar point cloud based intersections recognition for autonomous driving. In Proceedings of the IEEE Intelligent Vehicles Symposium, Alcala de Henares, Spain, 3–7 June 2012; pp. 456–461.
  7. Zhu, Q.; Mao, Q.; Chen, L.; Li, M.; Li, Q. Veloregistration based intersections detection for autonomous driving in challenging urban scenarios. In Proceedings of the 15th International IEEE Conference on Intelligent Transportation Systems, Anchorage, AK, USA, 16–19 September 2012; pp. 1191–1196.
  8. Hata, A.Y.; Habermann, D.; Osorio, F.S.; Wolf, D.F. Road geometry classification using ANN. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 1319–1324.
  9. Chen, T.; Dai, B.; Liu, D.; Liu, Z. Lidar-based long range road intersections detection. In Proceedings of the 2011 Sixth International Conference on Image and Graphics (ICIG), Hefei, China, 12–15 August 2011; pp. 754–759.
  10. Ryu, K.-J.; Park, S.-K.; Hwang, J.-P.; Kim, E.-T.; Park, M. On-road Tracking using Laser Scanner with Multiple Hypothesis Assumption. Int. J. Fuzzy Logic. Intell. Syst. 2009, 9, 232–237. [Google Scholar] [CrossRef]
  11. Kim, J.; Cho, H.; Kim, S. Positioning and Driving Control of Fork-Type Automatic Guided Vehicle with Laser Navigation. Int. J. Fuzzy Logic. Intell. Syst. 2013, 13, 307–314. [Google Scholar] [CrossRef]
  12. Kim, B.; Choi, B.; Park, S.; Kim, H.; Kim, E. Pedestrian/Vehicle Detection Using a 2.5-dimensional Multi-layer Laser Scanner. IEEE Sens. J. 2016, 16, 400–408. [Google Scholar] [CrossRef]
  13. Weiss, T.; Schiele, B.; Dietmayer, K. Robust Driving Path Detection in Urban and Highway Scenarios Using a Laser Scanner and Online Occupancy Grids. In Proceedings of the IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 184–189.
  14. Konrad, M.; Szczot, M.; Dietmayer, K. Road course estimation in occupancy grids. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 412–417.
  15. Konrad, M.; Szczot, M.; Schüle, F.; Dietmayer, K. Generic grid mapping for road course estimation. In Proceedings of the IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, 5–9 June 2011; pp. 851–856.
  16. Thrun, S.; Fox, D.; Burgard, W. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  17. Lee, H.; Hong, S.; Kim, E. Probabilistic background subtraction in a video-based recognition system. KSII Trans. Internet Inf. Syst. 2011, 5, 782–804. [Google Scholar] [CrossRef]
  18. Kim, B.; Choi, B.; Yoo, M.; Kim, H.; Kim, E. Robust Object Segmentation Using a Multi-layer Laser Scanner. Sensors 2014, 14, 20400–20418. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of the incomplete OGM update when the standard binary Bayes filter is used in the local coordinate system. (a) The local OGM $p(m_{t-1}^i = O \mid u_{1:t-1}, z_{1:t-1}^i)$ built at time $t-1$; (b) the ego vehicle moves forward with two preceding vehicles at the same speed; (c) the local OGM is shifted downwards using the vehicle's odometry, yielding $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$; (d) the new measurement (inverse) likelihood OGM $p(m_t^i = O \mid z_t^i)$; and (e) the updated OGM $p(m_t \mid u_{1:t}, z_{1:t})$. The red-circled region A'' indicates the drawback of the static binary filter.
Figure 2. (a) Result of the static binary Bayes filter; (b) result of the dynamic binary Bayes filter; and (c) the real-world image (highway).
Figure 3. (a) The camera image superimposed with raw laser scanner data; (b) raw laser scanner data: layer 0 (blue), layer 1 (red), layer 2 (green), and layer 3 (black); and (c) the segmentation on the grid map, where different colors indicate different segments.
Figure 4. Illustration of the difference between $z_t^i$ and $\hat{z}_t^j$. Here $z_t^i$ denotes the set of laser beams, and $\hat{z}_t^j$ denotes the set of cells in the occupancy grid map hit by the laser beams. The gray cells form the unknown region, and the set of dark cells is one segment, which is segmented as an independent object.
Figure 5. (a) The grid map with a dynamic object; (b) the trail of the dynamic segment; and (c) the static grid map with the dynamic object removed. The blue box marks the dynamic object information.
Figure 6. Simplified graphical shapes of the grid maps: (a) highway with more than two lanes; (b) merge-road; (c) diverge-road; (d) plus-shaped intersection; (e) first type T-shaped junction; (f) second type T-shaped junction.
Figure 7. Vehicle equipped with a multi-layer laser scanner and a camera [18].
Figure 8. Outline of the experiment.
Figure 9. Results of the intersection type recognition: (a) highway; (b) merge-road; (c) diverge-road; (d) plus-shaped intersection; (e) first type T-shaped junction; (f) second type T-shaped junction. The left-side images are camera images superimposed with calibrated raw data; the right-side images are the corresponding static local coordinate occupancy grid maps (SLOGMs).
Figure 10. The box plot for each class.
Table 1. Types of intersections.

Class   Type of Intersection
H       Highways
M       Merge-roads
D       Diverge-roads
P       Plus-shaped intersections
T1      First type T-shaped junctions
T2      Second type T-shaped junctions
Table 2. Confusion matrix (rows: actual class; columns: predicted class).

Actual   H        M        D        P        T1       T2
H        82.76%   4.01%    8.11%    2.75%    1.05%    1.33%
M        3.96%    81.06%   8.23%    1.63%    -        5.12%
D        2.29%    1.89%    87.63%   2.64%    1.15%    4.41%
P        -        -        0.24%    87.43%   0.54%    11.79%
T1       -        -        -        -        91.58%   8.43%
T2       -        -        -        5.53%    2.94%    91.53%
