# An Urban Autodriving Algorithm Based on a Sensor-Weighted Integration Field with Deep Learning


## Abstract


## 1. Introduction

## 2. Sensor Setup

## 3. Proposed Methodology

#### 3.1. Vision Deep Learning: Sparse Spatial CNN

#### 3.1.1. Dataset

#### 3.1.2. Proposed Network Model

#### 3.2. Proposed Sensor Integration Algorithm: Sensor-Weighted Integration Field (SWIF)

#### 3.2.1. Lane Data

#### 3.2.2. LiDAR Data

#### 3.2.3. GPS Data

The waypoints are initially expressed in the absolute coordinate frame X_A OY_A. For this reason, Equations (3) and (4) are utilized. With the values from the GPS data, such as the vehicle position O′ (which is the absolute location of the vehicle) and the heading angle θ, Equation (3) transforms the coordinates from X_A OY_A into the relative coordinate frame X_R O′Y_R. Finally, in Equation (4), to synchronize the lane and object fields, the waypoints are transformed into the coordinate frame X′_R O″Y′_R.
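The two transformations above can be sketched as follows. This is a minimal illustration assuming a standard 2D rotation by the heading angle followed by a shift into the field grid; the function names and the `origin_shift`/`scale` parameters are illustrative assumptions, not the paper's Equations (3) and (4) verbatim.

```python
import numpy as np

def absolute_to_relative(waypoints, ego_xy, heading):
    """Sketch of Eq. (3): translate by the GPS position O' and rotate by the
    heading angle theta so waypoints are expressed in the vehicle frame."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, s],
                  [-s, c]])  # rotation from the absolute frame into the vehicle frame
    return (np.asarray(waypoints, dtype=float) - np.asarray(ego_xy, dtype=float)) @ R.T

def relative_to_field(rel_points, origin_shift, scale=1.0):
    """Sketch of Eq. (4): shift/scale the vehicle-frame coordinates into the
    grid shared by the lane and object fields (origin O'')."""
    return np.asarray(rel_points, dtype=float) * scale + np.asarray(origin_shift, dtype=float)

# A waypoint 10 m north of the vehicle, with the vehicle heading north
# (theta = pi/2), lands straight ahead on the relative x-axis.
rel = absolute_to_relative([[0.0, 10.0]], ego_xy=(0.0, 0.0), heading=np.pi / 2)
```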

#### 3.2.4. SWIF Algorithm

#### 3.3. Proposed Motion Planning and Maneuvering Control

#### 3.3.1. Vehicle Speed and Steering Angle Decisions

#### 3.3.2. Maneuvering Control Algorithm

## 4. Experimental Results

#### 4.1. Lane Recognition with Sparse Spatial CNN

#### 4.2. Test Scenario 1: Pedestrian in the Lane

#### 4.3. Test Scenario 2: Construction Site on the Road

#### 4.4. Performance of Proposed Algorithm in International College Creative Car Competition

## 5. Discussion and Remarks

## 6. Conclusions

## Author Contributions

## Acknowledgments

## Conflicts of Interest

## References

- Seo, C.; Yi, K. Car-following motion planning for autonomous vehicles in multi-lane environments. *J. Korean Auto-Veh. Saf. Assoc.* **2019**, 11, 30–36.
- Lee, S.; Park, S.; Choi, I.; Jeong, J. Vehicle recognition of ADAS vehicle in collision situation with multiple vehicles in single lane. *J. Korean Auto-Veh. Saf. Assoc.* **2019**, 11, 44–52.
- Le Vine, S.; Zolfaghari, A.; Polak, J. Autonomous cars: The tension between occupant experience and intersection capacity. *Transp. Res. Part C Emerg. Technol.* **2015**, 52, 1–14.
- Lee, S.; Kim, J.; Yoon, J.S.; Shin, S.; Bailo, O.; Kim, N.; Lee, T.-H.; Hong, H.S.; Han, S.-H.; Kweon, I.S. VPGNet: Vanishing point guided network for lane and road marking detection and recognition. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1947–1955.
- Huval, B.; Wang, T.; Tandon, S.; Kiske, J.; Song, W.; Pazhayampallil, J.; Andriluka, M.; Rajpurkar, P.; Migimatsu, T.; Cheng-Yue, R.; et al. An empirical evaluation of deep learning on highway driving. arXiv **2015**, arXiv:1504.01716.
- Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
- Bounini, F.; Gingras, D.; Lapointe, V.; Pollart, H. Autonomous vehicle and real time road lanes detection and tracking. In Proceedings of the 2015 IEEE Vehicle Power and Propulsion Conference (VPPC), Montréal, QC, Canada, 19–22 October 2015; pp. 1–6.
- Miao, X.; Li, S.; Shen, H. On-board lane detection system for intelligent vehicle based on monocular vision. *Int. J. Smart Sens. Intell. Syst.* **2012**, 5, 517.
- Visin, F.; Kastner, K.; Cho, K.; Matteucci, M.; Courville, A.; Bengio, Y. ReNet: A recurrent neural network based alternative to convolutional networks. arXiv **2015**, arXiv:1505.00393.
- Bell, S.; Zitnick, C.L.; Bala, K.; Girshick, R. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2874–2883.
- Zhang, J.; Xu, Y.; Ni, B.; Duan, Z. Geometric constrained joint lane segmentation and lane boundary detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 486–502.
- Likhachev, M.; Ferguson, D. Planning long dynamically feasible maneuvers for autonomous vehicles. *Int. J. Robot. Res.* **2009**, 28, 933–945.
- Kuwata, Y.; Teo, J.; Fiore, G.; Karaman, S.; Frazzoli, E.; How, J.P. Real-time motion planning with applications to autonomous urban driving. *IEEE Trans. Control Syst. Technol.* **2009**, 17, 1105–1118.
- Hardy, J.; Campbell, M. Contingency planning over probabilistic obstacle predictions for autonomous road vehicles. *IEEE Trans. Robot.* **2013**, 29, 913–929.
- Dolgov, D.; Thrun, S.; Montemerlo, M.; Diebel, J. Practical search techniques in path planning for autonomous driving. *Ann. Arbor* **2008**, 1001, 18–80.
- Ziegler, J.; Werling, M.; Schroder, J. Navigating car-like robots in unstructured environments using an obstacle sensitive cost function. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 787–791.
- Ajanovic, Z.; Lacevic, B.; Shyrokau, B.; Stolz, M.; Horn, M. Search-based optimal motion planning for automated driving. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 4523–4530.
- Noor-A-Rahim, M.; Ali, G.M.N.; Guan, Y.L.; Ayalew, B.; Chong, P.H.J.; Pesch, D. Broadcast performance analysis and improvements of the LTE-V2V autonomous mode at road intersection. *IEEE Trans. Veh. Technol.* **2019**, 68, 9359–9369.
- He, J.; Tang, Z.; Fan, Z.; Zhang, J. Enhanced collision avoidance for distributed LTE vehicle to vehicle broadcast communications. *IEEE Commun. Lett.* **2018**, 22, 630–633.
- Martin-Vega, F.J.; Soret, B.; Aguayo-Torres, M.C.; Kovacs, I.Z.; Gomez, G. Geolocation-based access for vehicular communications: Analysis and optimization via stochastic geometry. *IEEE Trans. Veh. Technol.* **2017**, 67, 3069–3084.
- Liu, Z.; Lee, H.; Ali, G.G.; Pesch, D.; Xiao, P. A survey on resource allocation in vehicular networks. arXiv **2019**, arXiv:1909.13587.
- TuSimple. Available online: http://benchmark.tusimple.ai/#/t/1/dataset (accessed on 27 December 2018).

**Figure 3.** Proposed algorithm flow. Note: ROI = region of interest; SWIF = sensor-weighted integration field; SCNN = spatial convolutional neural network.

**Figure 6.** Network models: (**a**) Markov random field–conditional random field (MRF–CRF); (**b**) spatial convolutional neural network (SCNN); (**c**–**e**) proposed sparse spatial convolutional neural network (SSCNN).

**Figure 8.** Schematic weighted lane field: (**a**) weights applied to the adjacent left line; (**b**) weights applied to the adjacent right line; (**c**) applied weights in a random row of the weighted lane field; (**d**) weighted lane field from Figure 7.

**Figure 9.** Area distinction of the obstacle field: (**a**) LiDAR field diagram; (**b**) sample of the object field.

**Figure 10.** Method to form the object field: (**a**) binary object field; (**b**) window for offset; (**c**) weighted object field.

**Figure 11.** Method to process GPS data: (**a**) coordinate transformation of route data; (**b**) weighted route field.

**Figure 12.** Driving situations and their SWIF values: (**a**) road image including obstacles; (**b**) road image including a speed bump; (**c**) result of the SWIF algorithm in the situation of (**a**); (**d**) result of the SWIF algorithm in the situation of (**b**).

**Figure 13.** Motion planning method: (**a**) sample of the cost field; (**b**) sample of SWIF; (**c**) result represented in SWIF.

**Figure 14.** Vehicle speed and steering angle control diagram: (**a**) overall control block diagram; (**b**) specific parameters of the integral anti-windup scheme.

**Figure 16.** Comparison between maneuvered path and waypoints in scenario 1: (**a**) tracked path; (**b**) magnified path while avoiding the pedestrian.

**Figure 19.** Comparison between maneuvered path and waypoints in scenario 2: (**a**) tracked path; (**b**) magnified path while avoiding the construction site.

**Figure 21.** Tests in K-City: (**a**) driving in a school zone; (**b**) static obstacle; (**c**) lane change; (**d**) intersection driving.

| Sensor | Specification |
|---|---|
| Vision (Logitech C930e) | Viewing angle of 90°; resolution of 1080p at 30 fps |
| 2D LiDAR (LMS 151) | Recognition distance up to 50 m; angular resolution of 0.25°; scanning rate of 50 Hz |
| Real-Time Kinematic GPS (MRP-2000) | Position accuracy: 0.01 m horizontal, 0.01 m vertical; time to first fix: 28 s (in Digital Multimedia Broadcasting mode) |

**Input:** Lane field

**Process:**

1. If a "left" ("right") line is detected, it is considered the left (right) lane, and "left-left" ("right-right") is ignored; otherwise, "left-left" ("right-right") is considered the left (right) lane.
2. Only the coordinate values of the left and right lanes remain in the lane field.
3. For each row of the lane field, weights are assigned based on the column component of the left lane (as shown in Figure 8a) and of the right lane (as shown in Figure 8b).

**Output:** Weighted lane field
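The lane-selection and weighting steps above can be sketched as follows. This is a minimal illustration: the `detected` mapping, the triangular weight profile, and the `peak`/`decay` parameters are assumptions for demonstration, since the exact weight shape of Figure 8 is not reproduced here.

```python
import numpy as np

def select_lanes(detected):
    """Prefer the adjacent 'left'/'right' lines as the ego-lane boundaries;
    fall back to 'left-left'/'right-right' only when they are missing.
    `detected` maps line labels to per-row column indices (or None)."""
    left = detected.get("left") or detected.get("left-left")
    right = detected.get("right") or detected.get("right-right")
    return left, right

def weighted_lane_field(left_cols, right_cols, width, peak=1.0, decay=0.1):
    """For each row, assign weights that peak at the lane-line columns and
    fall off with distance (a triangular profile is assumed here)."""
    field = np.zeros((len(left_cols), width))
    cols = np.arange(width)
    for r in range(len(left_cols)):
        for c in (left_cols[r], right_cols[r]):
            if c is not None:
                profile = np.clip(peak - decay * np.abs(cols - c), 0.0, None)
                field[r] = np.maximum(field[r], profile)  # keep the stronger weight
    return field
```

For example, a single row with lane lines at columns 2 and 7 yields weights that peak at those columns and decay toward the cells between them.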

| Tusimple/KURD | SCNN [6] | SSCNN |
|---|---|---|
| Accuracy | 94.62% | 94.56% |
| Time (s) | 0.124 | 0.0459 |
| Speed (fps) | 8.063 | 21.796 |

| CULane | SCNN [6] | SSCNN |
|---|---|---|
| Normal | 90.6 | 83.0 |
| Crowded | 69.7 | 64.1 |
| Night | 66.1 | 62.1 |
| No line | 43.4 | 39.2 |
| Shadow | 66.9 | 57.9 |
| Arrow | 84.1 | 78.5 |
| Dazzle light | 58.5 | 56.5 |
| Curve | 64.4 | 54.9 |
| Crossroad | 1990 | 2759 |
| Total | 71.6 | 66.2 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Oh, M.; Cha, B.; Bae, I.; Choi, G.; Lim, Y.
An Urban Autodriving Algorithm Based on a Sensor-Weighted Integration Field with Deep Learning. *Electronics* **2020**, *9*, 158.
https://doi.org/10.3390/electronics9010158
