Enhancing Lane-Tracking Performance in Challenging Driving Environments through Parameter Optimization and a Restriction System
Abstract
1. Introduction
2. Related Works
2.1. Tone Correction
2.2. Lane Detection
2.2.1. Tool-Based Methods
2.2.2. Deep Learning-Based Methods
- (1) Encoder–decoder segmentation: Encoder–decoder convolutional neural networks (CNNs) are commonly used for pixelwise segmentation. SegNet is a deep fully convolutional architecture designed specifically for semantic pixelwise segmentation [26,27]. LaneNet targets end-to-end lane detection and consists of two decoders: a segmentation branch that detects lanes in a binary mask and an embedding branch for road segmentation [28]. Another variant replaces the skip connections in U-Net with a long short-term memory (LSTM) layer, as shown in Figure 3 [29]. LSTM is a type of recurrent neural network (RNN) that addresses the challenge of modeling long-range dependencies in sequential data [30,31]. By incorporating an LSTM layer, this method preserves high-dimensional information from the encoder and transmits it to the decoder. Yet another method replaces the standard convolution layers of the traditional U-Net with depthwise and pointwise convolutions, reducing computational complexity while maintaining the detection rate; this structure is referred to as DSUNet (depthwise and pointwise U-Net) [32].
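The computational saving from the depthwise-and-pointwise substitution used in DSUNet can be illustrated by counting weights. The sketch below compares a standard k × k convolution against a depthwise k × k convolution followed by a 1 × 1 pointwise convolution; the channel sizes and kernel size are illustrative assumptions, not values taken from the paper.

```python
# Parameter-count comparison: standard convolution vs. the
# depthwise + pointwise (depthwise-separable) substitution made in DSUNet.

def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv (bias omitted)."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3          # assumed example layer sizes
std = conv_params(c_in, c_out, k)     # 64 * 128 * 9  = 73728
sep = separable_params(c_in, c_out, k)  # 64 * 9 + 64 * 128 = 8768
print(std, sep, round(std / sep, 1))
```

For this example layer, the separable form needs roughly an eighth of the weights of the standard convolution, which is why the detection rate can be maintained at much lower computational cost.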
- (2) GAN model: A GAN is composed of a generator and a discriminator [33], and lane detection can be performed with GAN-based models [34]. One such method, the embedding-loss GAN (EL-GAN), applies this idea to semantic segmentation: the generator predicts lanes from input images, while the discriminator judges the quality of the predicted lanes using shared weights. This approach produces thinner lane predictions than regular CNNs, allowing more precise lane localization, and it also performs well when lanes are partially occluded by obstacles such as vehicles.
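The adversarial training behind such models can be sketched with the standard GAN objective (EL-GAN additionally uses an embedding loss, which is not shown here). In this minimal sketch, the discriminator is pushed to score real lane masks near 1 and generated masks near 0, while the generator is pushed to make its outputs score near 1; the scores are stand-in numbers, not outputs of a real network.

```python
import math

def bce(score, target):
    """Binary cross-entropy for a single sigmoid score in (0, 1)."""
    eps = 1e-12  # guards against log(0)
    return -(target * math.log(score + eps)
             + (1 - target) * math.log(1 - score + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real masks -> 1 and generated masks -> 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # The generator wants the discriminator to accept its outputs.
    return bce(d_fake, 1.0)

# A confident discriminator facing an obvious fake: low D loss, high G loss,
# which is exactly the gradient signal that drives the generator to improve.
print(discriminator_loss(d_real=0.9, d_fake=0.1))
print(generator_loss(d_fake=0.1))
```

The two losses pull in opposite directions on the same discriminator score, which is what makes the generator's lane predictions progressively harder to distinguish from real annotations.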
3. Proposed Method
3.1. Noise-Reduction Processing
3.1.1. Discrimination between Day and Night
3.1.2. Suppression of Noise and Contrast Enhancement
3.2. Lane-Candidate Grouping and Optimal Lane Detection and Tracking
3.2.1. Determination of the Edge and Straight Line
3.2.2. Restriction System and Lane Tracking
4. Simulations
4.1. Experimental Results
4.2. Objective Assessment
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lee, D.-K.; Lee, I.-S. Performance Improvement of Lane Detector Using Grouping Method. J. Korean Inst. Inf. Technol. 2018, 16, 51–56. [Google Scholar] [CrossRef]
- Yoo, H.; Yang, U.; Sohn, K. Gradient-enhancing conversion for illumination-robust lane detection. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1083–1094. [Google Scholar] [CrossRef]
- Stoel, B.C.; Vossepoel, A.M.; Ottes, F.P.; Hofland, P.L.; Kroon, H.M.; Schultze Kool, L.J. Interactive Histogram Equalization. Pattern Recognit. Lett. 1990, 11, 247–254. [Google Scholar] [CrossRef]
- Hines, G.; Rahman, Z.; Woodell, G. Single-Scale Retinex Using Digital Signal Processors. In Proceedings of the Global Signal Processing Conference, San Jose, CA, USA, 25–29 October 2004; pp. 1–6. [Google Scholar]
- Petro, A.B.; Sbert, C.; Morel, J.-M. Multiscale Retinex. Image Process. Line 2014, 4, 71–88. [Google Scholar] [CrossRef]
- Sultana, S.; Ahmed, B. Robust Nighttime Road Lane Line Detection Using Bilateral Filter and SAGC under Challenging Conditions. In Proceedings of the 2021 IEEE 13th International Conference on Computer Research and Development (ICCRD), Beijing, China, 5–7 January 2021; pp. 137–143. [Google Scholar] [CrossRef]
- Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
- Aminuddin, N.S.; Ibrahim, M.M.; Ali, N.M.; Radzi, S.A.; Saad, W.H.M.; Darsono, A.M. A New Approach to Highway Lane Detection by Using Hough Transform Technique. J. Inf. Commun. Technol. 2017, 16, 244–260. [Google Scholar] [CrossRef]
- Lee, C.-Y.; Kim, Y.-H.; Lee, Y.-H. Optimized Hardware Design Using Sobel and Median Filters for Lane Detection. J. Adv. Inf. Technol. Converg. 2019, 9, 115–125. [Google Scholar] [CrossRef]
- Illingworth, J.; Kittler, J. A Survey of the Hough Transform. Comput. Vis. Graph. Image Process. 1988, 44, 87–116. [Google Scholar] [CrossRef]
- Borkar, A.; Hayes, M.; Smith, M.T. Robust Lane Detection and Tracking with Ransac and Kalman Filter. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3261–3264. [Google Scholar] [CrossRef]
- Guo, J.; Wei, Z.; Miao, D. Lane Detection Method Based on Improved RANSAC Algorithm. In Proceedings of the 2015 IEEE Twelfth International Symposium on Autonomous Decentralized Systems, Taichung, Taiwan, 25–27 March 2015; pp. 285–288. [Google Scholar] [CrossRef]
- Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications. Data Min. Knowl. Discov. 1998, 2, 169–194. [Google Scholar] [CrossRef]
- Niu, J.; Lu, J.; Xu, M.; Lv, P.; Zhao, X. Robust Lane Detection Using Two-Stage Feature Extraction with Curve Fitting. Pattern Recognit. 2016, 59, 225–233. [Google Scholar] [CrossRef]
- Lee, S.; Hyeon, D.; Park, G.; Baek, I.J.; Kim, S.W.; Seo, S.W. Directional-DBSCAN: Parking-Slot Detection Using a Clustering Method in around-View Monitoring System. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016; pp. 349–354. [Google Scholar] [CrossRef]
- Ding, Y.; Xu, Z.; Zhang, Y.; Sun, K. Fast Lane Detection Based on Bird’s Eye View and Improved Random Sample Consensus Algorithm. Multimed. Tools Appl. 2017, 76, 22979–22998. [Google Scholar] [CrossRef]
- Luo, L.B.; Koh, I.S.; Park, S.Y.; Ahn, R.S.; Chong, J.W. A Software-Hardware Cooperative Implementation of Bird’s-Eye View System for Camera-on-Vehicle. In Proceedings of the 2009 IEEE International Conference on Network Infrastructure and Digital Content, Beijing, China, 6–8 November 2009; pp. 963–967. [Google Scholar] [CrossRef]
- Meng, Z.; Xia, X.; Xu, R.; Liu, W.; Ma, J. HYDRO-3D: Hybrid Object Detection and Tracking for Cooperative Perception Using 3D LiDAR. IEEE Trans. Intell. Veh. 2023, 20, 1–13. [Google Scholar] [CrossRef]
- Xia, X.; Meng, Z.; Han, X.; Li, H.; Tsukiji, T.; Xu, R.; Zheng, Z.; Ma, J. An Automated Driving Systems Data Acquisition and Analytics Platform. Transp. Res. Part C Emerg. Technol. 2023, 151, 104120. [Google Scholar] [CrossRef]
- McCall, J.C.; Trivedi, M.M. An Integrated, Robust Approach to Lane Marking Detection and Lane Tracking. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 533–537. [Google Scholar] [CrossRef]
- Apostoloff, N.; Zelinsky, A. Robust Vision Based Lane Tracking Using Multiple Cues and Particle Filtering. In Proceedings of the IEEE IV2003 Intelligent Vehicles Symposium, Columbus, OH, USA, 9–11 June 2003; pp. 558–563. [Google Scholar] [CrossRef]
- Loose, H.; Franke, U.; Stiller, C. Kalman Particle Filter for Lane Recognition on Rural Roads. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 60–65. [Google Scholar] [CrossRef]
- Xiong, L.; Xia, X.; Lu, Y.; Liu, W.; Gao, L.; Song, S.; Yu, Z. IMU-Based Automated Vehicle Body Sideslip Angle and Attitude Estimation Aided by GNSS Using Parallel Adaptive Kalman Filters. IEEE Trans. Veh. Technol. 2020, 69, 10668–10680. [Google Scholar] [CrossRef]
- Liu, W.; Xia, X.; Xiong, L.; Lu, Y.; Gao, L.; Yu, Z. Automated Vehicle Sideslip Angle Estimation Considering Signal Measurement Characteristic. IEEE Sens. J. 2021, 21, 21675–21687. [Google Scholar] [CrossRef]
- Xia, X.; Hashemi, E.; Xiong, L.; Khajepour, A. Autonomous Vehicle Kinematics and Dynamics Synthesis for Sideslip Angle Estimation Based on Consensus Kalman Filter. IEEE Trans. Control Syst. Technol. 2023, 31, 179–192. [Google Scholar] [CrossRef]
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Towards End-to-End Lane Detection: An Instance Segmentation Approach. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 286–291. [Google Scholar] [CrossRef]
- Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust Lane Detection from Continuous Driving Scenes Using Deep Neural Networks. IEEE Trans. Veh. Technol. 2020, 69, 41–54. [Google Scholar] [CrossRef]
- Medsker, L.R.; Jain, L.C. Recurrent Neural Networks: Design and Applications; CRC Press: Boca Raton, FL, USA, 2001. [Google Scholar]
- Graves, A. Long Short-Term Memory. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Cham, Switzerland, 2012; pp. 37–45. [Google Scholar]
- Lee, D.H.; Liu, J.L. End-to-End Deep Learning of Lane Detection and Path Prediction for Real-Time Autonomous Driving. Signal Image Video Process. 2023, 17, 199–205. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural Inf. Process. Syst. 2014, 3, 2672–2680. [Google Scholar] [CrossRef]
- Ghafoorian, M.; Nugteren, C.; Baka, N.; Booij, O.; Hofmann, M. EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection. Lect. Notes Comput. Sci. 2019, 11129, 256–272. [Google Scholar] [CrossRef]
- Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A K-Means Clustering Algorithm. J. R. Stat. Soc. Ser. C Appl. Stat. 1979, 28, 100–108. [Google Scholar] [CrossRef]
| | Left Lane (Conv/DSUNet/LSTM/Prop) | Right Lane (Conv/DSUNet/LSTM/Prop) |
|---|---|---|
| Wiper | 0.49/0.72/0.57/0.65 | 0.40/0.63/0.54/0.69 |
| Rainy surface | 0.33/0.51/0.50/0.54 | 0.26/0.70/0.74/0.73 |
| Overexposed | 0.64/0.82/0.55/0.94 | 0.61/0.88/0.50/0.93 |
| Guideline | 0.35/0.86/0.59/0.84 | 0.03/0.75/0.65/0.70 |
Share and Cite
Lee, S.-H.; Kwon, H.-J.; Lee, S.-H. Enhancing Lane-Tracking Performance in Challenging Driving Environments through Parameter Optimization and a Restriction System. Appl. Sci. 2023, 13, 9313. https://doi.org/10.3390/app13169313