# Multi-Stage Hough Space Calculation for Lane Markings Detection via IMU and Vision Fusion


## Abstract


## 1. Introduction

**Constructing the primary probabilistic Hough space:** a primary probabilistic Hough space is extracted from a single frame, measuring each line segment with a probability value. An efficient Hough Transform with edge-gradient constraints [1] is employed for line-segment extraction, and a CNN-based classifier is proposed for line-segment classification. The probabilistic Hough space is constructed from the outputs of this classification network, and each point in the space describes the confidence probability of the corresponding line segment. A threshold $\xi$ (set to 0.7) is used to choose the valid line segments from the probabilistic Hough space. Note that, because the Hough space is convenient for storing results across frames, we construct a primary probabilistic Hough space to record the classification results of each frame.
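As a minimal illustration of this thresholding step (the dictionary layout and the confidence values below are assumptions for the sketch, not the paper's code), selecting valid segments from the probabilistic Hough space can be written as:

```python
# Sketch: each Hough-space cell is keyed by its (rho, theta) bin and stores
# the classifier's confidence probability for that line segment.
XI = 0.7  # validity threshold used in the paper

def valid_segments(hough_space, threshold=XI):
    """Return the (rho, theta) bins whose confidence exceeds the threshold."""
    return [cell for cell, p in hough_space.items() if p > threshold]

# Toy example: three candidate segments with classifier confidences.
space = {(120, 0.52): 0.91, (80, 1.10): 0.35, (200, 0.75): 0.88}
segments = sorted(valid_segments(space))
```

Only the two high-confidence cells survive; the low-confidence one is discarded before tracking.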

**Filtering the probabilistic Hough space across frames by IMU and vision data fusion:** due to occlusion, vehicle movement, and classification error, the primary probabilistic Hough space extracted from a single frame is not reliable. For example, a significant change of vehicle pose can affect the classification results of the corresponding line segments, so the same lane markings might have different values in the probabilistic Hough space. To solve this, sequential information is included, and a Kalman filter is employed to smooth the probabilistic Hough space across frames. While the vehicle is moving, the line segments extracted from images occupy different positions in Hough space at different times, even though they lie on the same lane markings. Movement information provided by the IMU makes it possible to align previous and current line segments in the current Hough space, which is essential for the filtering process. The final filtered probabilistic Hough space is used to extract the final line segments: those with low probability values are eliminated, and those with high values are kept and tracked.
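The per-cell smoothing can be sketched with a one-dimensional Kalman filter; the process and measurement noise values `q` and `r` below are illustrative assumptions, not the paper's tuned parameters:

```python
# Sketch: smooth one Hough-space cell's confidence across frames with a 1-D
# Kalman filter, assuming the underlying probability is roughly constant.
def kalman_smooth(measurements, q=1e-3, r=0.05):
    """Filter a sequence of per-frame confidence measurements."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: constant-probability model
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new frame's measurement
        p = (1 - k) * p
        out.append(x)
    return out

# Confidences that dip when the marking is briefly occluded or misclassified.
noisy = [0.9, 0.4, 0.88, 0.92, 0.35, 0.9]
smoothed = kalman_smooth(noisy)
```

After a few frames the gain shrinks, so a single misclassified frame no longer drags the cell below the validity threshold.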

## 2. Related Works

#### 2.1. Conventional Algorithms without CNNs

#### 2.2. Lane Detection with CNNs

## 3. Single Frame: Primary Probabilistic Hough Space via Lane Markings Extraction

#### 3.1. Line Segments Extraction by Hough Transform and RANSAC

**Algorithm 1** Revising line segments by RANSAC. R is the ROI; (P1, P2) are two edge points randomly drawn from R; l is the original line segment, with slope k and bias b; n is the number of iterations (n = 40); lf is the final output.

```
Input:  R, l:(k, b)
Output: lf
function REVISE(R, l)
    while n > 0 do
        (P1, P2) ← draw two edge points randomly from R
        l̂:(k̂, b̂) ← fit a straight line through (P1, P2)
        if l̂ has fewer outliers than l then
            l = l̂:(k̂, b̂)
        end if
        n = n − 1
    end while
    lf = l
end function
```
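A runnable sketch of Algorithm 1, assuming a point-to-line distance tolerance `tol` for counting outliers (the tolerance is not specified in this excerpt):

```python
import random

random.seed(0)  # deterministic for illustration

def count_outliers(points, k, b, tol=2.0):
    """Count edge points farther than tol from the line y = k*x + b."""
    return sum(1 for (x, y) in points if abs(k * x + b - y) > tol)

def revise(edge_points, k, b, n=40, tol=2.0):
    """Refine (k, b) by RANSAC: keep the sampled line with fewest outliers."""
    best = (k, b)
    best_out = count_outliers(edge_points, k, b, tol)
    for _ in range(n):
        (x1, y1), (x2, y2) = random.sample(edge_points, 2)
        if x1 == x2:                      # skip degenerate vertical pairs
            continue
        kh = (y2 - y1) / (x2 - x1)        # fit a line through the two points
        bh = y1 - kh * x1
        out = count_outliers(edge_points, kh, bh, tol)
        if out < best_out:
            best, best_out = (kh, bh), out
    return best

# Toy ROI: edge points on y = 2x + 1 plus one noisy outlier.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40)]
k, b = revise(pts, k=0.0, b=0.0)
```

With 40 iterations over 11 points, some sampled pair almost surely contains only inliers, so the refined line recovers the true slope and bias despite the outlier.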

#### 3.2. Constructing Primary Probabilistic Hough Space by Classification Networks

**H** represents the perspective transformation matrix.
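Applying **H** to a pixel reduces to a homogeneous matrix–vector product followed by normalisation; a minimal sketch with an illustrative matrix (not the paper's calibration):

```python
def warp_point(H, x, y):
    """Apply a 3x3 perspective transformation matrix H to pixel (x, y)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w   # homogeneous normalisation

# Simplest case: a pure translation by (10, 5); a real calibration would
# also encode the perspective terms in the bottom row.
H = [[1, 0, 10], [0, 1, 5], [0, 0, 1]]
p = warp_point(H, 2, 3)
```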

## 4. Sequential Frames: Filtered Probabilistic Hough Space via IMU and Vision Data

#### 4.1. Filtering Primary Hough Space with Kalman Filter

#### 4.2. Aligning Previous Line Segments in the Current Hough Space

#### 4.3. Final Lane Fitting Using the Result of Sequential Frames

## 5. Results and Discussion

#### 5.1. Performance of the Classification Networks

#### 5.2. Performance of the Filtered Probabilistic Hough Space

## 6. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Zhao, Y.; Pan, H.; Du, C.; Zheng, Y. Principal direction-based Hough transform for line detection. *Opt. Rev.* **2015**, 22, 224–231.
- Yoo, H.; Yang, U.; Sohn, K. Gradient-Enhancing Conversion for Illumination-Robust Lane Detection. *IEEE Trans. Intell. Transp. Syst.* **2013**, 14, 1083–1094.
- Gaikwad, V.; Lokhande, S. Lane Departure Identification for Advanced Driver Assistance. *IEEE Trans. Intell. Transp. Syst.* **2014**, 16, 1–9.
- Niu, J.; Lu, J.; Xu, M.; Lv, P.; Zhao, X. Robust Lane Detection Using Two-stage Feature Extraction with Curve Fitting. *Pattern Recognit.* **2016**, 59, 225–233.
- Pollard, E.; Gruyer, D.; Tarel, J.; Leng, S.-S.; Cord, A. Lane marking extraction with combination strategy and comparative evaluation on synthetic and camera images. In Proceedings of the 14th IEEE Intelligent Transportation Systems Conference (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1741–1746.
- Zhao, K.; Meuter, M.; Nunn, C.; Müller, D.; Müller-Schneiders, S.; Pauli, J. A novel multi-lane detection and tracking system. In Proceedings of the IEEE Intelligent Vehicles Symposium, Alcala de Henares, Spain, 3–7 June 2012; pp. 1084–1089.
- Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and Gabor filter. In Proceedings of the IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 59–64.
- Ozgunalp, U.; Fan, R.; Ai, X.; Dahnoun, N. Multiple Lane Detection Algorithm Based on Novel Dense Vanishing Point Estimation. *IEEE Trans. Intell. Transp. Syst.* **2016**, 18, 621–632.
- Hur, J.; Kang, S.N.; Seo, S.W. Multi-lane detection in urban driving environments using conditional random fields. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Gold Coast City, Australia, 23–26 June 2013; pp. 1297–1302.
- Lee, S.; Kim, J.; Yoon, J.S.; Shin, S.; Bailo, O.; Kim, N.; Lee, T.-H.; Hong, H.; Han, S.-H.; Kweon, I.S. VPGNet: Vanishing point guided network for lane and road marking detection and recognition. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1947–1955.
- Wang, Z.; Ren, W.; Qiang, Q. LaneNet: Real-Time Lane Detection Networks for Autonomous Driving. *arXiv* **2018**, arXiv:1807.01726.
- Zhang, W.; Mahale, T. End to End Video Segmentation for Driving: Lane Detection for Autonomous Car. *arXiv* **2018**, arXiv:1812.05914.
- Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Towards end-to-end lane detection: An instance segmentation approach. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 286–291.
- Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
- Ghafoorian, M.; Nugteren, C.; Baka, N.; Booij, O.; Hofmann, M. EL-GAN: Embedding loss driven generative adversarial networks for lane detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Aly, M. Caltech Lanes. 2014. Available online: http://www.vision.caltech.edu/malaa/datasets/caltech-lanes/ (accessed on 12 June 2018).

**Figure 1.** Workflow of the proposed approach: a Hough Transform and classification networks are used to extract the primary probabilistic Hough space. A Kalman filter, fed with sequential information, smooths the probabilistic Hough space across frames. Movement information provided by the IMU aligns the previous line segments in the same Hough space. The final filtered probabilistic Hough space is used to extract the line segments with high probability values. By connecting valid line segments detected at different times in the vehicle coordinate frame, lane fitting can draw on more sequential information, making the final result more robust.

**Figure 2.** (**a**) A line segment disturbed by edge noise. (**b**) The original ROI proposed by the line segment. (**c**) The result of RANSAC (green ROI: provided by the line segment before revision; red ROI: provided by the line segment after revision).

**Figure 4.** Process of line-segment classification using the proposed network: the inputs are proposed by line segments, and the classification network measures each line segment by probability. The probabilistic Hough space records the confidence probability of each line segment.

**Figure 5.** The yellow rectangle is proposed by the two endpoints (P1, P2) of a line segment. The blue rectangle is proposed by the two new diagonal points calculated by Equation (3).

**Figure 7.** (**a**) Due to vehicle movement and classification error of the networks, the same line segment has different classification results at times t and t + 1. (**b**) The probability values before and after Kalman filtering.

**Figure 8.** The line segment l has different positions in the vehicle coordinate frame and in Hough space at different times. Velocity V and acceleration A are measured in north-east coordinates.
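Under the convention that the vehicle translates by (dx, dy) and yaws by dψ between frames (an assumed parameterisation; the paper's exact formulation is not shown in this excerpt), a line in normal form ρ = x·cos θ + y·sin θ can be re-expressed in the current vehicle frame as:

```python
import math

def align_segment(rho, theta, dx, dy, dpsi):
    """Map a (rho, theta) line from the previous vehicle frame into the
    current one after the vehicle translates (dx, dy) and yaws dpsi.
    The yaw rotates the line's normal; the translation shifts rho by the
    motion's component along that normal."""
    theta_new = theta - dpsi
    rho_new = rho - (dx * math.cos(theta) + dy * math.sin(theta))
    return rho_new, theta_new

# The line x = 10 (rho=10, theta=0) seen after driving 2 m along x:
aligned = align_segment(10.0, 0.0, dx=2.0, dy=0.0, dpsi=0.0)
```

This is what lets previous detections land on (nearly) the same Hough-space cell as the current ones, so the Kalman filter can associate them.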

**Figure 9.** The result of alignment between neighboring frames. The current detections are labeled in red and the previous results (after alignment) are labeled in yellow. The bottom part shows the result in Hough space; the blue circles represent the range of alignment error.

**Figure 10.** The local lane map is constructed by connecting the recorded results from t − n to t in the same vehicle coordinate frame. It stabilizes the final output by providing the fitting stage with information over a larger spatial and temporal scale than a single frame.

**Figure 13.** (**a**) Ground truth is labeled in the form of line segments. (**b**) Four clips are chosen to test the algorithm: clip1 (sunlight), clip2 (sunlight, heavy), clip3 (rainy, heavy), clip4 (rainy).

**Figure 14.** The first and third rows show the probabilistic Hough space, where points with high brightness represent possible valid line segments. The second and fourth rows show the corresponding line-segment extraction results, where green line segments are the result of detection and red ones are the result of tracking.

**Figure 16.** The final results displayed in image coordinates and vehicle coordinates. In the second and fourth rows, the yellow rectangle represents the center of the vehicle. Line segments detected in the past are labeled in purple, and those extracted from the current frame are labeled in red. The results of lane fitting are labeled in green. The cyan points represent the trace of the vehicle, calculated from the IMU data.

| Layer Index | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Layer Name | Data | Conv+Relu | Pooling | Conv+Relu | Interp | Pooling |
| Output Size | $(64,64,3)$ | $(62,62,40)$ | $(31,31,40)$ | $(29,29,20)$ | $(28,28,20)$ | $(14,14,20)$ |

| Layer Index | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|
| Layer Name | Conv | Pooling | Conv | Inner-Product | Inner-Product | Softmax |
| Output Size | $(10,10,20)$ | $(5,5,20)$ | $(1,1,50)$ | $(1,1,500)$ | $(1,1,2)$ | $(1,1,2)$ |
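The spatial sizes in the table follow from standard valid-convolution and stride-2 pooling arithmetic. The kernel sizes below are inferred from the sizes themselves (they are not stated in this excerpt), so this is a consistency check rather than the paper's definition:

```python
import math

def conv_valid(size, kernel):
    """Output width of an unpadded, stride-1 convolution."""
    return size - kernel + 1

def pool2(size):
    """Output width of 2x2 pooling with stride 2 (ceil mode)."""
    return math.ceil(size / 2)

s = 64                  # layer 1: input width/height
s = conv_valid(s, 3)    # layer 2: 62 (assumed 3x3 kernel)
s = pool2(s)            # layer 3: 31
s = conv_valid(s, 3)    # layer 4: 29 (assumed 3x3 kernel)
s = s - 1               # layer 5: interpolation resizes 29 -> 28 per the table
s = pool2(s)            # layer 6: 14
s = conv_valid(s, 5)    # layer 7: 10 (assumed 5x5 kernel)
s = pool2(s)            # layer 8: 5
s = conv_valid(s, 5)    # layer 9: 1 (assumed 5x5 kernel)
```

The chain ends at a 1×1 spatial map, after which the inner-product layers and softmax produce the two-class confidence.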

| Clip | Total | Niu's Method [4]: AR(%) | Niu's Method [4]: FN(%) | Our Method: AR(%) | Our Method: FN(%) |
|---|---|---|---|---|---|
| cordova1 | 466 | 92.2 | 5.4 | 97.25 | 2.7 |
| cordova2 | 472 | 97.7 | 1.8 | 97.05 | 1.2 |
| washington1 | 639 | 96.9 | 2.5 | 95.84 | 3.7 |
| washington2 | 452 | 98.5 | 1.7 | 95.63 | 3.1 |

| Datasets | clip1 | clip2 | clip3 | clip4 |
|---|---|---|---|---|
| Filtered probabilistic Hough space (sequential frames) | 0.95 | 0.93 | 0.91 | 0.94 |
| CNNs-based classification (single frame) | 0.91 | 0.89 | 0.88 | 0.92 |

| Clip | Total | Neven's Method [13]: TP(%) | Neven's Method [13]: FP(%) | Our Method: TP(%) | Our Method: FP(%) |
|---|---|---|---|---|---|
| part1 | 927 | 61.8 | 6.7 | 72.2 | 0.6 |
| part2 | 174 | 78.2 | 38.5 | 72.9 | 1.5 |
| part3 | 647 | 83.6 | 6.1 | 87.3 | 1.7 |
| part4 | 713 | 82.5 | 5.9 | 76.5 | 0.1 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Sun, Y.; Li, J.; Sun, Z.
Multi-Stage Hough Space Calculation for Lane Markings Detection via IMU and Vision Fusion. *Sensors* **2019**, *19*, 2305.
https://doi.org/10.3390/s19102305
