Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner

In urban roads, there are several types of intersections, such as merge-roads, diverge-roads, plus-shaped intersections, and two types of T-shaped junctions. When an autonomous vehicle encounters a new intersection, it is crucial to recognize the type of the intersection for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation.


Introduction
When an autonomous vehicle drives on urban roads, it encounters a number of traffic intersections and is expected to pass through them smoothly, together with other vehicles, without causing any trouble. For smooth and safe passage through urban intersections, it is of crucial importance for the autonomous vehicle to recognize what type of intersection the current one is. Furthermore, intersection type recognition is also a valuable cue for global localization. When the autonomous vehicle has no idea about its location due to a GPS blackout or a cold start, one possible solution for global localization is the intersection type recognition proposed herein. For intersection type recognition, three kinds of sensors are widely used: cameras, radars, and laser scanners.
The use of a camera is popular in intersection type recognition, and several research works have been reported [1][2][3][4]. Mostly, they use gray-value gradients to extract the lane markings or roadsides. However, camera-based intersection type recognition methods have the drawback that they are computationally very expensive and sensitive to changes in luminance and weather conditions.
In [5], road boundaries were detected using radar by building maps of the static environment such as an occupancy grid map. Using a similar method, radar sensors could also be used for intersection type recognition. Furthermore, radar sensors inform the system not only of the position of an obstacle but also of its velocity, because they use the Doppler effect. Thus, moving objects could easily be detected and removed from the grid maps if radars were used for intersection type recognition.

Motivation
The OGM provides a very reliable framework to represent an environment, and it is widely used in robotics for map representation [12][13][14][15]. Unfortunately, however, the standard OGM built relative to the global coordinates has a drawback: memory requirements for its implementation become extremely high when the OGM covers a spacious region. Thus, the OGM can be applied only to small- or medium-sized indoor environments such as a room or a building, and it is not practical on spacious outdoor roads. This drawback prevents the OGM from being used as a feature for intersection type recognition.
To cope with the above problems, a local OGM is developed in this paper, defined not relative to the global coordinate but relative to the local vehicle coordinate. Local OGM building, however, is not as simple as global OGM building. The standard binary Bayes filter reported in [16] is widely used in global OGM building, but it cannot be used in a straightforward manner in local OGM building, because it assumes that the environment does not change at all. The assumption makes sense in global OGM building, but it does not hold in local OGM building because of the ego motion. More specifically, the environment $m$ of interest is decomposed into a set of evenly spaced cells in the OGM, and each cell $m_t^i$ is allowed to take on either occupied ($O$) or free ($F$), where the subscript $t$ denotes the time, the superscript $i$ indexes the cells, and $N$ denotes the number of cells in the environment $m$. In OGM building, the map is defined as a posterior probability $p(m_t^i \mid u_{1:t}, z_{1:t})$ for each cell $m_t^i$, where $z_{1:t} = \{z_1, z_2, z_3, \cdots, z_t\}$ is the set of laser scanner measurements corresponding to $m^i$ collected from time 1 to $t$, and $u_{1:t} = \{u_1, u_2, u_3, \cdots, u_t\}$ is the set of control updates corresponding to the vehicle's odometry collected from time 1 to $t$. In global OGM building, $m_t^i$ is assumed to be unknown but fixed. That is,

$$p(m_t^i = O \mid m_{t-1}^i = O) = 1$$

and

$$p(m_t^i = F \mid m_{t-1}^i = F) = 1$$

Thus, if $m_t^i$ changes, it takes a long time for $p(m_t^i \mid u_{1:t}, z_{1:t})$ to change from zero to one (or one to zero). Thus, the direct application of the standard Bayes filter to local OGM building causes the problem shown in Figure 1. Let us suppose that the autonomous vehicle is driving along the highway with two preceding vehicles at time $t-1$, as in Figure 1a. The two preceding vehicles drive at the same speed as the autonomous vehicle.
The first figure in Figure 1a is the view from the autonomous vehicle at time $t-1$, and the second figure in Figure 1a is the corresponding local OGM $p(m_{t-1}^i = O \mid u_{1:t-1}, z_{1:t-1}^i)$. At the next time $t$, the autonomous vehicle moves forward with the two preceding vehicles at the same speed, as shown in Figure 1b, and the view from the autonomous vehicle changes accordingly. In order to build the local OGM relative to the autonomous vehicle, the previous local OGM $p(m_{t-1}^i = O \mid u_{1:t-1}, z_{1:t-1}^i)$ is shifted downwards according to the odometry of the autonomous vehicle and becomes $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$, as shown in Figure 1c. This step corresponds to the prediction step in Bayes filtering. At time $t$, the view from the ego vehicle looks like the first figure in Figure 1d, and the sensor measurements $p(m_t^i = O \mid z_t^i)$ shown in the second figure in Figure 1d are presented. In the standard Bayes filter, the two OGMs are combined by Bayes rule, as shown in Figure 1e. In the figure, a big "+" denotes the Bayes inference. In the inference, $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$ in Figure 1c plays the role of a prior, while $p(m_t^i = O \mid z_t^i)$ in Figure 1d plays the role of a likelihood (more precisely, an inverse likelihood). In Figure 1e, the regions A and A' correspond to the region A''. In the prior $p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)$, the region A is free since it was free in $p(m_{t-1} \mid u_{1:t-1}, z_{1:t-1})$. When the measurement $p(m_t^i = O \mid z_t^i)$ is presented, the region A' is not observed anymore because it is behind the preceding vehicles. By combining A and A' by Bayesian inference, the region A'' remains almost free, which is indicated in almost white color in Figure 1e, but obviously this is not true. The region A'' is unknown and should be marked in gray. The region A'' will eventually turn gray, but it will take some time.
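The prediction step described above (shifting the previous local OGM according to the ego odometry) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a pure forward translation by a whole number of cells, with row 0 being the cell farthest ahead of the vehicle, and it marks cells that slide in from beyond the map border as unknown (0.5).

```python
import numpy as np

def shift_local_ogm(grid, cells_forward, unknown=0.5):
    """Prediction step of the local OGM: shift the previous occupancy grid
    so that the map stays expressed in the vehicle frame after the vehicle
    moves forward by `cells_forward` whole cells.  Row 0 is the cell farthest
    ahead, so forward ego motion shifts the old content toward higher row
    indices ("downwards"); rows entering from beyond the border are unknown."""
    if cells_forward == 0:
        return grid.copy()
    shifted = np.full_like(grid, unknown)
    shifted[cells_forward:, :] = grid[:-cells_forward, :]
    return shifted
```

Note that this sketch handles only longitudinal motion; a full implementation would also rotate and translate the grid laterally according to the odometry $u_t$.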
Thus, to resolve the difficulty in the local OGM building, the dynamic binary Bayes filter developed in [17] is employed to build an OGM relative to the local vehicle coordinate in this paper. In the dynamic binary Bayes filter, the value of the cell in the OGM is assumed to change.

Occupancy Grid Mapping Relative to Autonomous Vehicles
In this paper, the dynamic binary Bayes filter developed in [17] is used to update the posterior $p(m_{t-1}^i \mid u_{1:t-1}, z_{1:t-1}^i)$ when a new measurement $z_t^i$ and a new movement $u_t$ are presented. Obviously, each cell satisfies

$$p(m_t^i = F \mid u_{1:t}, z_{1:t}^i) + p(m_t^i = O \mid u_{1:t}, z_{1:t}^i) = 1 \qquad (5)$$

By Bayes rule, the posterior $p(m_t^i \mid u_{1:t}, z_{1:t}^i)$ can be rewritten as

$$p(m_t^i \mid u_{1:t}, z_{1:t}^i) = \frac{p(z_t^i \mid m_t^i, u_{1:t}, z_{1:t-1}^i)\, p(m_t^i \mid u_{1:t}, z_{1:t-1}^i)}{p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)} \qquad (6)$$

Since the new measurement $z_t^i$ is conditionally independent of the previous measurements $z_{1:t-1}^i$ and movements $u_{1:t}$ given $m_t^i$, we have

$$p(z_t^i \mid m_t^i, u_{1:t}, z_{1:t-1}^i) = p(z_t^i \mid m_t^i) \qquad (7)$$

Thus,

$$p(m_t^i \mid u_{1:t}, z_{1:t}^i) = \frac{p(z_t^i \mid m_t^i)\, p(m_t^i \mid u_{1:t}, z_{1:t-1}^i)}{p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)} \qquad (8)$$

Applying the Bayes rule to the likelihood term $p(z_t^i \mid m_t^i)$ in Equation (8) yields

$$p(z_t^i \mid m_t^i) = \frac{p(m_t^i \mid z_t^i)\, p(z_t^i)}{p(m_t^i)} \qquad (9)$$

Thus, the posterior can be computed by

$$p(m_t^i = O \mid u_{1:t}, z_{1:t}^i) = \frac{p(m_t^i = O \mid z_t^i)\, p(z_t^i)\, p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = O)\, p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)} \qquad (10)$$

and

$$p(m_t^i = F \mid u_{1:t}, z_{1:t}^i) = \frac{p(m_t^i = F \mid z_t^i)\, p(z_t^i)\, p(m_t^i = F \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = F)\, p(z_t^i \mid u_{1:t}, z_{1:t-1}^i)} \qquad (11)$$

Dividing the above two equations yields

$$\frac{p(m_t^i = O \mid u_{1:t}, z_{1:t}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t}^i)} = \frac{p(m_t^i = O \mid z_t^i)}{p(m_t^i = F \mid z_t^i)} \cdot \frac{p(m_t^i = F)}{p(m_t^i = O)} \cdot \frac{p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t-1}^i)} \qquad (12)$$

Applying the total probability theorem and the Markov assumption to the prediction term gives

$$p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i) = \sum_{m_{t-1}^i} p(m_t^i = O \mid m_{t-1}^i)\, p(m_{t-1}^i \mid u_{1:t-1}, z_{1:t-1}^i) \qquad (13)$$

For simplicity, we assume that the state transition probability is constant; the state transition can then be represented by the following four parameters:

$$\pi_{11} = p(m_t^i = O \mid m_{t-1}^i = O), \quad \pi_{12} = p(m_t^i = F \mid m_{t-1}^i = O),$$
$$\pi_{21} = p(m_t^i = O \mid m_{t-1}^i = F), \quad \pi_{22} = p(m_t^i = F \mid m_{t-1}^i = F) \qquad (14)$$

where $\pi_{11} + \pi_{12} = 1$ and $\pi_{21} + \pi_{22} = 1$. Rearranging Equation (13) using these parameters gives the two prediction equations

$$p(m_t^i = O \mid u_{1:t}, z_{1:t-1}^i) = \pi_{11}\, p_{t-1}^i + \pi_{21}\, (1 - p_{t-1}^i) \qquad (15)$$

$$p(m_t^i = F \mid u_{1:t}, z_{1:t-1}^i) = \pi_{12}\, p_{t-1}^i + \pi_{22}\, (1 - p_{t-1}^i) \qquad (16)$$

where $p_{t-1}^i = p(m_{t-1}^i = O \mid u_{1:t-1}, z_{1:t-1}^i)$. Finally, substituting Equations (15) and (16) into Equation (12), the modified binary Bayes filter for the local OGM is

$$\frac{p(m_t^i = O \mid u_{1:t}, z_{1:t}^i)}{p(m_t^i = F \mid u_{1:t}, z_{1:t}^i)} = \frac{p(m_t^i = O \mid z_t^i)}{1 - p(m_t^i = O \mid z_t^i)} \cdot \frac{1 - p(m_t^i = O)}{p(m_t^i = O)} \cdot \frac{\pi_{11}\, p_{t-1}^i + \pi_{21}\, (1 - p_{t-1}^i)}{\pi_{12}\, p_{t-1}^i + \pi_{22}\, (1 - p_{t-1}^i)} \qquad (17)$$

In Figure 2, the static and dynamic binary Bayes filters are applied to real sensor data and the local OGM is built relative to the vehicle coordinate. Figure 2a,b are the results of the static and dynamic binary filters, respectively, and Figure 2c is the corresponding world image taken on a highway. In the figures, the darker a cell is, the more likely the corresponding cell is occupied. When the static method developed in [16] is used, the problem explained in Figure 1 arises, as shown in Figure 2a. The regions occluded by the guardrail have bright cells, which would mean that these cells are unoccupied, but this is not true. Unlike the static method [16], when the dynamic filter is applied, the occluded region behind the guardrail has intermediate occupancy probabilities, which correspond to unknown regions, as shown in Figure 2b.
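A per-cell sketch of the dynamic update may help to see its behavior. The following is a minimal illustration, not the authors' implementation; the transition values $\pi_{11} = \pi_{22} = 0.99$ and the uniform prior are assumed for illustration only. The prediction step relaxes unobserved cells back toward the unknown value 0.5, while the odds-ratio step folds in the new measurement.

```python
def dynamic_ogm_update(p_prev, p_meas, pi11=0.99, pi22=0.99, prior=0.5):
    """One cell update of a dynamic binary Bayes filter (sketch).
    p_prev : previous posterior p(m_{t-1} = O | u_{1:t-1}, z_{1:t-1})
    p_meas : inverse sensor model p(m_t = O | z_t); use 0.5 when unobserved
    pi11   : p(m_t = O | m_{t-1} = O); pi12 = 1 - pi11
    pi22   : p(m_t = F | m_{t-1} = F); pi21 = 1 - pi22
    p_meas and prior are assumed to lie strictly between 0 and 1."""
    pi12, pi21 = 1.0 - pi11, 1.0 - pi22
    # prediction step: probabilities of occupied/free before the measurement
    p_pred_o = pi11 * p_prev + pi21 * (1.0 - p_prev)
    p_pred_f = pi12 * p_prev + pi22 * (1.0 - p_prev)
    # measurement update in odds form, then convert back to a probability
    odds = (p_meas / (1.0 - p_meas)) * ((1.0 - prior) / prior) * (p_pred_o / p_pred_f)
    return odds / (1.0 + odds)
```

With these assumed transition values, a cell that was confidently free but is no longer observed (such as the occluded region A'' in Figure 1e) drifts toward 0.5 over successive updates instead of staying frozen at its old value.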

Dynamic Object Removal
There might be a number of moving objects at intersections, and they are likely to interfere with the intersection type recognition. Thus, the dynamic objects are removed before the intersection type recognition in this paper. A typical example is given in Figure 3. Figure 3a,b depict the raw measurements from the scanner, while Figure 3c depicts the segmentation results obtained by ABD (adaptive breakpoint detection) [18]. In Figure 3a,b, the colors indicate layer information: layers 0, 1, 2 and 3 are indicated in blue, red, green and black, respectively. In Figure 3c, different colors denote different segments. Three large segments are formed, and they are indicated in magenta, cyan and black. The three segments correspond to the left and right road boundaries and the preceding car, respectively. The black segment should be removed before the intersection type recognition. Let us denote the segment at time $t$ by

$$\hat{z}_t^j = \left\{ z_{1,t}^j, z_{2,t}^j, z_{3,t}^j, \cdots, z_{M_j,t}^j \right\}, \quad j = 1, \cdots, N_t \qquad (18)$$

where $j$ is the index for the segments and $N_t$ denotes the number of segments formed at time $t$.
The index $M_j$ denotes the length of the $j$th segment $\hat{z}_t^j$. The difference between $z_t^j$ and $\hat{z}_t^j$ is explained in Figure 4. Then, let us denote the region in the local OGM $m_t$ which is hit by $\hat{z}_t^j$ as

$$\hat{m}_t^j = \left\{ pos(z_{k,t}^j) \mid k = 1, \cdots, M_j \right\} \qquad (19)$$

where $pos(\cdot) \in \mathbb{R}^2$ denotes the longitudinal and latitudinal coordinates relative to the autonomous vehicle. To choose the dynamic segments which move over time, a new score

$$S\left(\hat{z}_t^j\right) = 1 - \frac{card\left(\hat{m}_t^j \cap \hat{m}_{t-1}^j\right)}{card\left(\hat{m}_t^j\right)} \qquad (20)$$
is defined to measure the degree to which the segment $\hat{z}_t^j$ can be classified as a dynamic object, where $card(\cdot)$ denotes the cardinality of the argument set. The physical meaning of this score is that the smaller the intersection between $\hat{m}_t^j$ and $\hat{m}_{t-1}^j$ is, the larger the score is, and the more likely the segment comes from a dynamic object. In conclusion, if

$$S\left(\hat{z}_t^j\right) > \eta$$

then $\hat{m}_t^j$ is quite different from $\hat{m}_{t-1}^j$ and the corresponding segment $\hat{z}_t^j$ is classified as a dynamic object, where $\eta$ is a threshold for dynamic objects. If

$$S\left(\hat{z}_t^j\right) \leq \eta$$

then the corresponding segment $\hat{z}_t^j$ does not move much, and it is classified as a static object. The result of the dynamic object removal is shown in Figure 5.
The local OGMs before and after removing the dynamic object are depicted in Figure 5a,c, respectively, and the segment corresponding to the preceding vehicle is magnified in Figure 5b. As shown in the figures, a preceding vehicle with a gray trail is present, but the vehicle is finally removed in Figure 5c. From now on, we call the OGM without dynamic objects in the autonomous vehicle's coordinate frame the static local coordinate occupancy grid map (SLOGM).
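The dynamic-object score and threshold test above can be sketched directly over sets of hit-cell indices. This is a minimal illustration; the threshold value 0.5 used here is hypothetical, since the tuned value is not restated in this section.

```python
def dynamic_score(cells_t, cells_t_prev):
    """Score of a segment (sketch): fraction of the cells hit by segment j
    at time t that were NOT also hit by the matched segment at time t-1.
    Cells are (row, col) indices in the vehicle-frame local OGM."""
    hits_t, hits_prev = set(cells_t), set(cells_t_prev)
    return 1.0 - len(hits_t & hits_prev) / len(hits_t)

def is_dynamic(cells_t, cells_t_prev, eta=0.5):
    """Classify a segment as dynamic when its hit region moved too much
    between consecutive frames; eta is a hypothetical threshold."""
    return dynamic_score(cells_t, cells_t_prev) > eta
```

A static road boundary such as a guardrail hits nearly the same cells in consecutive frames (score near 0), while a preceding car moving relative to the vehicle frame hits disjoint cells (score near 1).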

Intersection Type Recognition Using the SLOGM
In this section, a new intersection type recognition method is presented. It is actually a multiple-class classification problem and the SLOGM explained in Section 2 is used as a feature for the classification problem. For the sake of simplicity, a new similarity measure is developed and a nearest neighbor (NN) classifier is used based on the new similarity measure.


Intersection Types
In this paper, six types of intersections are considered: a highway, a merge-road, a diverge-road, a plus-shaped intersection and two kinds of T-shaped junctions, as shown in Figure 6a through Figure 6f, respectively. Thus, the problem considered herein is a six-class classification problem, and the SLOGM is used as a feature for the classification. The highway is a simple straight road and looks like "I", as shown in Figure 6a. The merge-road is a junction at which an additional road joins the main road, and thus it looks like an upside-down lower case "y", as shown in Figure 6b. The diverge-road is a junction at which the main road splits into two roads, and it looks like "y", as shown in Figure 6c. At a plus-shaped intersection, two roads meet and cross each other, as shown in Figure 6d. The current longitudinal road is clearly observed by the laser scanner, but the two latitudinal roads are only partially observed due to the limited field of view (FOV) of the laser scanner. At the first type of T-junction, the current longitudinal road is merged with another latitudinal road at a right angle, as shown in Figure 6e. At the second type of T-junction, the current longitudinal road ends and vehicles can go either to the left or to the right in the perpendicular direction, as shown in Figure 6f. Thus, a total of six classes are considered in this paper.

New Similarity Measure and Nearest Neighbor Classifier
In this subsection, a new similarity measure using the SLOGM is developed for intersection type recognition, and it is applied to implement the NN classifier. Let us suppose that we are given a training set $D$ with $N_{train}$ training samples

$$D = \left\{ (M_1, t_1), (M_2, t_2), (M_3, t_3), \cdots, (M_{N_{train}}, t_{N_{train}}) \right\} \qquad (23)$$

where $M_n = \left[ M_n^1 \; M_n^2 \; \cdots \; M_n^N \right]^T \in [0,1]^N$ is the SLOGM with size $N$ and $t_n \in \{H, M, D, P, T_1, T_2\}$ is the associated intersection type; $n = 1, 2, \cdots, N_{train}$ is an index for the training samples. Here, $H$, $M$, $D$, $P$, $T_1$ and $T_2$ mean 'highway', 'merge road', 'diverge road', 'plus-shaped intersection', 'first type T-junction', and 'second type T-junction', respectively. To apply the NN to the intersection type recognition, a new similarity measure is developed. When two SLOGMs $L$ and $M$ are given and $L, M \in [0,1]^N$, the overlapped area (OA) between them is defined based on their free space by

$$OA(L, M) = card\left( Free(L) \cap Free(M) \right) \qquad (24)$$

where

$$Free(M) = \left\{ i \mid M^i < \varepsilon_{free} \right\} \qquad (25)$$

is the set of cells in the SLOGM which have low occupancy probability and correspond to the drivable roads; $\varepsilon_{free}$ is a small threshold to determine whether a cell is free or occupied, and $card(\cdot)$ denotes the cardinality of a set. That is, the OA implies the degree to which two SLOGMs share the drivable roads. Then, the similarity between the two SLOGMs $L$ and $M$ is defined by

$$SIM(L, M) = \frac{OA(L, M)^2}{card(Free(L)) \cdot card(Free(M))} \qquad (26)$$

The similarity measure $SIM(L, M)$ is actually the squared geometric mean of the normalized overlapped areas between two SLOGMs, and when two SLOGMs have similar drivable roads, $SIM(L, M)$ becomes close to 1. When a training set $D$ in Equation (23) is given and a test SLOGM $L \in [0,1]^N$ is presented, the intersection type of $L$ can be predicted by the NN classifier by

$$\hat{t} = t_{n^*}, \quad n^* = \underset{n}{\arg\max}\; SIM(L, M_n) \qquad (27)$$
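The similarity measure and NN classifier above can be sketched as follows. This is a minimal illustration: the SLOGMs are flattened to 1-D vectors, and the free-space threshold value 0.2 is an assumed choice, not the paper's tuned parameter.

```python
import numpy as np

def free_cells(M, eps_free=0.2):
    """Free(M): indices of cells with low occupancy probability (drivable road)."""
    return set(np.flatnonzero(np.asarray(M) < eps_free))

def similarity(L, M, eps_free=0.2):
    """SIM(L, M): squared geometric mean of the normalized overlapped areas,
    i.e. OA(L, M)^2 / (card(Free(L)) * card(Free(M)))."""
    fl, fm = free_cells(L, eps_free), free_cells(M, eps_free)
    if not fl or not fm:
        return 0.0
    oa = len(fl & fm)
    return oa * oa / (len(fl) * len(fm))

def nn_classify(L, train):
    """Nearest-neighbor classifier: return the intersection type of the
    training SLOGM with the highest similarity to the test SLOGM L.
    train is a list of (M_n, t_n) pairs, t_n in {'H','M','D','P','T1','T2'}."""
    return max(train, key=lambda pair: similarity(L, pair[0]))[1]
```

Two identical SLOGMs attain SIM = 1, and SLOGMs with disjoint drivable areas attain SIM = 0, so the classifier simply returns the label of the most overlapping training map.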

Experiment Setup
The validity of the proposed method is demonstrated through experimentation. The LUX2010 of IBEO (Hamburg, Germany) is used as the multi-layer laser scanner, and it is installed on a KIA K5 (Seoul, Korea), as shown in Figure 7. The horizontal FOV of the LUX2010 is 110 degrees with 0.125 degree resolution, and the vertical FOV is 3.2 degrees with 0.8 degree resolution. A single camera is also installed on the top of the windshield to gather the ground truth (GT) of the intersection types. A total of 1213 SLOGM samples are collected, and each class has a similar number of samples. Each SLOGM sample covers an area of 80 m × 50 m, and each cell in the SLOGM is 0.25 m × 0.25 m. For the validation of the proposed system, five-fold cross validation is conducted. The whole set of samples is partitioned into five subsets of the same size in a random manner. The first four sets are used as a training set, and the last set is used as a test set, as in Figure 8. Then, three of the first four sets and the last set are used as a training set, and the remaining set is used as a test set, in turn. A similar process is repeated three more times so that each of the five subsets is used as a test set exactly once. The five-fold cross validation is run 100 times, and the results are summarized in the next subsection. To show the validity of the proposed method, its performance is compared with that of [6,8].
In this experiment, [6,8] were implemented by the authors. The two previous works used 3D scanners with 32 or 64 layers, respectively, but we implemented their ideas using the 2D laser scanner IBEO LUX 2010 with four layers for a fair comparison with the proposed method.
Figure 7. Vehicle equipped with a multi-layer laser scanner and a camera [18].
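The cross-validation protocol above can be sketched as follows. This is a minimal illustration with a placeholder classifier interface: `classify(train, x)` stands in for the SIM-based NN classifier, and the sample count is assumed divisible by five.

```python
import random

def five_fold_indices(n_samples, rng):
    """Randomly partition sample indices into five equal-sized folds."""
    idx = list(range(n_samples))
    rng.shuffle(idx)
    fold = n_samples // 5
    return [idx[k * fold:(k + 1) * fold] for k in range(5)]

def cross_validate(samples, classify, n_runs=100, seed=0):
    """Repeat five-fold cross validation n_runs times and return the
    overall accuracy; samples is a list of (feature, type) pairs."""
    rng = random.Random(seed)
    correct, total = 0, 0
    for _ in range(n_runs):
        folds = five_fold_indices(len(samples), rng)
        for k in range(5):                      # each fold is the test set once
            train = [samples[i] for j in range(5) if j != k for i in folds[j]]
            for i in folds[k]:
                x, t = samples[i]
                correct += int(classify(train, x) == t)
                total += 1
    return correct / total
```

Each of the 100 runs re-randomizes the partition, and within a run every fold serves as the test set exactly once, matching the protocol described above.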

Experiment Results
The NN classifier based on the proposed similarity is applied to the intersection type recognition. The six classes considered herein are summarized in Table 1. As stated, five-fold cross validation is conducted one hundred times; the quantitative results of intersection type recognition are summarized in Table 2, and some illustrative examples are given in Figure 9. Table 2 is the confusion matrix for the six classes. In Figure 9, the first and third columns show the actual environments superimposed with the laser scanner measurements, and the second and fourth columns are the corresponding SLOGMs.

As shown in Table 2, class $T_1$ (the first type T-junction) demonstrates the highest true positive rate (TPR) among the six classes, at 91.575%. Following class $T_1$, class $T_2$ (the second type T-junction) has the second highest TPR, at 91.532%. The reason for the excellence of the two T-junctions is that their SLOGMs have shapes that are relatively distinct from the other SLOGMs.
In class $T_1$ (first type T-junction), a single latitudinal road joins the main longitudinal road at the junction in the perpendicular direction, as shown in Figure 9e. In class $T_2$ (second type T-junction), the current main road ends and branches off to the left or to the right in the perpendicular direction, as shown in Figure 9f. The unique shapes of the two junctions give them high similarity scores and distinguish the two sets of samples from the others. On the other hand, class $M$ (merge-road) is the most difficult to classify. Since the FOV of a laser scanner is limited, the whole intersection is rarely observed. When the autonomous vehicle enters a merge-road, the road might look like just a highway or a common straight road, because the boundaries of the intersection block the laser scanner from scanning the latitudinal roads, as shown in Figure 9b. For a similar reason, class $H$ is also hard to identify, because it shares the common characteristic of straightness with other intersection types.
Finally, the proposed method is compared with the previous works [6,8] in terms of true positive rate (TPR). The TPRs of [6,8] reported herein are lower than the original values reported in [6,8] because the current TPRs are obtained using the four-layer IBEO LUX 2010, while the original TPRs were obtained using 3D Velodyne scanners (Morgan Hill, CA, USA) with 32 or 64 layers. A box plot is depicted to visualize the cross validation results in Figure 10. From the figure, the proposed method outperforms the previous two methods. The average TPRs of [6,8] are 54.61% and 46.91%, respectively, while that of the proposed method is 89.15%. The reason for the excellence of the proposed method might be that the SLOGM proposed herein is a good match for the 2D multi-layer laser scanner and has stronger discriminative power than the previous features of [6,8] for intersection type recognition using a laser scanner.

Conclusions
In this paper, a new intersection type recognition method has been proposed. Unlike previous works, the occupancy grid map was built relative to the local coordinate, and the intersection type was recognized based on the local OGM. The dynamic binary Bayes filter was employed to solve the cell change problem which arises in local OGM building. A new measure $S\left(\hat{z}_t^j\right)$ was proposed to remove the moving dynamic objects from the local OGM. Furthermore, a new similarity measure $SIM(L, M)$ between two SLOGMs was developed, and it was combined with an NN classifier to implement the intersection type recognition system. Finally, the proposed method was applied to a real-world problem and its validity was verified by experimentation.
