Effects of Hybridizing the U-Net Neural Network in Traffic Lane Detection Process
Abstract
1. Introduction
1.1. Advanced Driver Assistance Systems
- Reducing human error (through driver alerts and proactive intervention).
- Increasing road safety, one of the main reasons for the wide acceptance and integration of ADAS in the automotive market, by reducing the number of accidents and, consequently, the number of injuries and deaths, and by improving the safety of other road users (pedestrians, cyclists).
- Improving comfort and convenience at the wheel: ADAS functions can automatically adjust vehicle speed to traffic conditions (ACC), assist in keeping and centering the vehicle in its lane (LKA and LCA), facilitate or automate parking maneuvers, and manage slow, stop-and-go traffic in congestion. These systems reduce driver stress and fatigue and may indirectly lower pollutant emissions through smoother, more efficient driving [5].
- Providing the foundation for the development of fully autonomous vehicles: as ADAS functions become more sophisticated and more tightly integrated, they pave the way to higher levels of vehicle automation.
1.2. Traffic Lane Detection
- LDW—Lane departure warnings that alert the driver if the vehicle unintentionally leaves its lane, using visual, audible or haptic alerts.
- LKA—Lane keeping assist, which actively intervenes in steering to help keep the vehicle in its lane.
- LCA—Lane centering assist, acting on the steering system to keep the vehicle centered in the lane.
- ALKS—Automatic lane keeping systems that are designed to follow lane markings with minimal driver intervention.
1.3. Current Challenges and Research
2. Materials and Methods
2.1. Using Game Engines to Generate Traffic Images
2.2. U-Net Architecture
2.3. Hybrid U-Net Architecture
- Vertical kernels (7 × 1, 6 × 2, 5 × 3) capture lane boundaries and vertical structures;
- Horizontal kernels (1 × 7, 2 × 6, 3 × 5) focus on curved lines (an illustrative sketch of such a multi-directional convolution block follows this list).
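As an illustration only (not the authors' implementation), a minimal PyTorch sketch of a multi-directional convolution block built from the kernel shapes listed above might look as follows; the module name `DirectionalConvBlock`, the channel sizes, and the batch-norm/ReLU placement are assumptions.

```python
# Minimal sketch (not the authors' code): parallel convolution branches with the
# vertical (7x1, 6x2, 5x3) and horizontal (1x7, 2x6, 3x5) kernels listed above,
# concatenated and fused with a 1x1 convolution. Channel sizes are illustrative.
import torch
import torch.nn as nn

class DirectionalConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        kernel_shapes = [(7, 1), (6, 2), (5, 3), (1, 7), (2, 6), (3, 5)]
        branch_ch = out_ch // len(kernel_shapes)
        self.branches = nn.ModuleList()
        for kh, kw in kernel_shapes:
            # "Same" output size for even kernel dimensions requires asymmetric
            # padding, handled here with explicit ZeroPad2d (left, right, top, bottom).
            pad_h, pad_w = kh - 1, kw - 1
            self.branches.append(nn.Sequential(
                nn.ZeroPad2d((pad_w // 2, pad_w - pad_w // 2,
                              pad_h // 2, pad_h - pad_h // 2)),
                nn.Conv2d(in_ch, branch_ch, kernel_size=(kh, kw)),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            ))
        self.fuse = nn.Conv2d(branch_ch * len(kernel_shapes), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Run all directional branches on the same input and fuse their outputs.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: a 256x512 RGB input passed through the block keeps its spatial size.
if __name__ == "__main__":
    block = DirectionalConvBlock(in_ch=3, out_ch=24)
    print(block(torch.randn(1, 3, 256, 512)).shape)  # torch.Size([1, 24, 256, 512])
```

The even kernel dimensions (6 × 2 and 2 × 6) cannot be padded symmetrically, which is why the sketch pads each branch explicitly before convolving; the 1 × 1 fusion keeps the output channel count independent of the number of directional branches.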
2.4. Simulation Conditions
- Basic U-Net;
- U-Net modified with GSC;
- U-Net modified with TE;
- U-Net modified with MSLDM;
- Hybrid U-Net (an illustrative summary of how these variants relate is sketched after this list).
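Reading the list above, the hybrid variant appears to combine the modifications evaluated individually. As an organizational sketch only, and under that assumption (the flag names and the `VariantConfig` helper are illustrative, not the authors' code), the five configurations could be expressed as follows.

```python
# Sketch of how the five compared variants differ, expressed as feature flags.
# The flag names and this helper are illustrative assumptions, not the authors'
# implementation; whether the hybrid combines exactly these modules is defined
# in the paper's Section 2.3.
from dataclasses import dataclass

@dataclass(frozen=True)
class VariantConfig:
    geometrical_skip_connections: bool = False  # GSC
    transformer_encoder: bool = False           # TE
    multi_scale_lane_detection: bool = False    # MSLDM

VARIANTS = {
    "U-Net": VariantConfig(),
    "U-Net + GSC": VariantConfig(geometrical_skip_connections=True),
    "U-Net + TE": VariantConfig(transformer_encoder=True),
    "U-Net + MSLDM": VariantConfig(multi_scale_lane_detection=True),
    "Hybrid U-Net": VariantConfig(True, True, True),  # all modifications combined
}

if __name__ == "__main__":
    for name, cfg in VARIANTS.items():
        print(f"{name:15s} -> {cfg}")
```

A flag-based description of this kind makes explicit that the per-module rows in the results tables isolate the contribution of each modification, while the hybrid row measures their combined effect.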
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| ACC | adaptive cruise control |
| ADAS | advanced driver assistance systems |
| ALKS | automatic lane keeping systems |
| Carla | car learning to act |
| LDW | lane departure warning |
| LKA | lane keeping assist |
| LCA | lane centering assist |
| GSC | geometrical skip connection |
| MSLDM | multi-scale lane detection module |
| SAM | spatial attention module |
| TE | transformer encoder |
| TLD | traffic lane detection |
| U-Net | U-shaped convolutional neural network (encoder-decoder architecture) |
References
- Masello, L.; Castignani, G.; Sheehan, B.; Murphy, F.; McDonnell, K. On the road safety benefits of advanced driver assistance systems in different driving contexts. Transp. Res. Interdiscip. Perspect. 2022, 15, 100670.
- Sankar, S.H.; Jyothis, V.K.; Suresh, A.; Anand, P.S.; Nalinakshan, S. Comparative Analysis of Advanced Driver Assistance Systems (ADAS) in Automobiles. In Emerging Electronics and Automation. E2A 2023; Stroe, D.I., Nasimuddin, D., Laskar, S.H., Pandey, S.K., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2025; Volume 1237.
- Chengula, T.J.; Mwakalonge, J.; Comert, G.; Sulle, M.; Siuhi, S.; Osei, E. Enhancing advanced driver assistance systems through explainable artificial intelligence for driver anomaly detection. Mach. Learn. Appl. 2024, 17, 100580.
- McDonnell, K.; Murphy, F.; Sheehan, B.; Masello, L.; Castignani, G.; Ryan, C. Regulatory and Technical Constraints: An Overview of the Technical Possibilities and Regulatory Limitations of Vehicle Telematic Data. Sensors 2021, 21, 3517.
- Gkartzonikas, C.; Gkritza, K. What have we learned? A review of stated preference and choice studies on autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2019, 98, 323–337.
- European Commission. Regulation (EU) 2019/2144 on Type-Approval Requirements for Motor Vehicles with Regard to General Safety and the Protection of Vehicle Occupants and Vulnerable Road Users. Available online: https://eur-lex.europa.eu/eli/reg/2019/2144/oj/eng (accessed on 10 May 2025).
- Cicchino, J.B. Effects of lane departure warning on police-reported crash rates. Traffic Inj. Prev. 2018, 19, 615–622.
- Mirzarazi, F.; Danishvar, S.; Mousavi, A. The Safety Risks of AI-Driven Solutions in Autonomous Road Vehicles. World Electr. Veh. J. 2024, 15, 438.
- Kathiresh, S.; Poonkodi, M. Artificial Intelligence-Powered Advanced Driver Assistance Systems in Vehicles. In Computing Technologies for Sustainable Development. IRCCTSD 2024; Sivakumar, P.D., Ramachandran, R., Pasupathi, C., Balakrishnan, P., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2024; Volume 2361.
- Sarrab, M.; Pulparambil, S.; Awadalla, M. Development of an IoT based real-time traffic monitoring system for city governance. Glob. Transit. 2020, 2, 230–245.
- Almukhalfi, H.; Noor, A.; Noor, T.H. Traffic management approaches using machine learning and deep learning techniques: A survey. Eng. Appl. Artif. Intell. 2024, 133, 108147.
- Thabit, A.S.M.; Kerrache, C.A.; Calafate, C.T. A survey on monitoring and management techniques for road traffic congestion in vehicular networks. ICT Express 2024, 10, 1186–1198.
- Hu, J.; Wang, Z.; Yang, L. Robust Lane Detection under Adverse Environmental Conditions Using Data Augmentation and Deep Learning. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1417–1427.
- Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Towards End-to-End Lane Detection: An Instance Segmentation Approach. arXiv 2018, arXiv:1802.05591. Available online: https://arxiv.org/abs/1802.05591 (accessed on 8 February 2025).
- Sang, I.C.; Norris, W.R. Improved generalizability of CNN based lane detection in challenging weather using adaptive preprocessing parameter tuning. Expert Syst. Appl. 2025, 275, 127055.
- Nie, X.; Xu, Z.; Zhang, W.; Dong, X.; Liu, N.; Chen, Y. Foggy Lane Dataset Synthesized from Monocular Images for Lane Detection Algorithms. Sensors 2022, 22, 5210.
- Sultana, S.; Ahmed, B.; Paul, M.; Islam, M.R.; Ahmad, S. Vision-Based Robust Lane Detection and Tracking in Challenging Conditions. IEEE Access 2023, 11, 67938–67955.
- Zhang, R.; Peng, J.; Gou, W.; Ma, Y.; Chen, J.; Hu, H.; Li, W.; Yin, G.; Li, Z. A robust and real-time lane detection method in low-light scenarios to advanced driver assistance systems. Expert Syst. Appl. 2024, 256, 124923.
- Yaamini, H.S.G.; Swathi, K.J.; Manohar, N.; Ajay Kumar, G. Lane and Traffic Sign Detection for Autonomous Vehicles: Addressing Challenges on Indian Road Conditions. MethodsX 2025, 14, 103178.
- Pandian, J.A.; Thirunavukarasu, R.; Mariappan, L.T. Enhancing lane detection in autonomous vehicles with multi-armed bandit ensemble learning. Sci. Rep. 2025, 15, 3198.
- Sang, I.C.; Norris, W.R. A Robust Lane Detection Algorithm Adaptable to Challenging Weather Conditions. IEEE Access 2024, 12, 11185–11195.
- Dai, W.; Li, Z.; Xu, X.; Chen, X.; Zeng, H.; Hu, R. Enhanced Cross Layer Refinement Network for robust lane detection across diverse lighting and road conditions. Eng. Appl. Artif. Intell. 2025, 139, 109473.
- Philion, J. FastDraw: Addressing the Long Tail of Lane Detection by Adapting a Sequential Prediction Network. arXiv 2019, arXiv:1905.04354. Available online: https://arxiv.org/abs/1905.04354 (accessed on 10 May 2025).
- Dawam, E.S.; Feng, X. Smart City Lane Detection for Autonomous Vehicle. In Proceedings of the 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Calgary, AB, Canada, 17–22 August 2020; pp. 334–338.
- Pihlak, R.; Riid, A. Simultaneous Road Edge and Road Surface Markings Detection Using Convolutional Neural Networks. In Databases and Information Systems. DB&IS 2020; Robal, T., Haav, H.M., Penjam, J., Matulevičius, R., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2020; Volume 1243.
- Wang, Z.; Wu, G.; Boriboonsomsin, K.; Barth, M.J.; Han, K.; Kim, B.; Tiwari, P. Cooperative Ramp Merging System: Agent-Based Modeling and Simulation Using Game Engine. SAE Int. J. Connect. Autom. Veh. 2019, 2, 115–128.
- Yamaura, M.; Arechiga, N.; Shiraishi, S.; Eisele, S.; Hite, J.; Neema, S.; Scott, J.; Bapty, T. ADAS Virtual Prototyping Using Modelica and Unity Co-simulation Via OpenMETA. 2016. Available online: https://ep.liu.se/ecp/124/006/ecp16124006.pdf (accessed on 10 May 2025).
- Kim, B.; Kashiba, Y.; Dai, S.; Shiraishi, S. Testing Autonomous Vehicle Software in the Virtual Prototyping Environment. IEEE Embed. Syst. Lett. 2016, 9, 1.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. Available online: https://arxiv.org/abs/1505.04597 (accessed on 16 February 2025).
- Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv 2016, arXiv:1606.04797. Available online: https://arxiv.org/abs/1606.04797 (accessed on 10 May 2025).
- Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. arXiv 2018, arXiv:1807.10165. Available online: https://arxiv.org/abs/1807.10165 (accessed on 10 May 2025).
- Oktay, O.; Schlemper, J.; Le Folgoc, L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. Available online: https://arxiv.org/abs/1804.03999 (accessed on 10 May 2025).
- Liu, L.; Chen, X.; Zhu, S.; Tan, P. CondLaneNet: A Top-to-down Lane Detection Framework Based on Conditional Convolution. arXiv 2021, arXiv:2105.05003.
- Baek, S.W.; Kim, M.J.; Suddamalla, U.; Wong, A.; Lee, B.H.; Kim, J.H. Real-Time Lane Detection Based on Deep Learning. J. Electr. Eng. Technol. 2021, 17, 655–664.
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations. arXiv 2014, arXiv:1412.6980. Available online: https://arxiv.org/abs/1412.6980 (accessed on 14 May 2025).
- Feng, J.; Zhao, P.; Zheng, H.; Konpang, J.; Sirikham, A.; Kalnaowakul, P. Enhancing Autonomous Driving Perception: A Practical Approach to Event-Based Object Detection in CARLA and ROS. Vehicles 2025, 7, 53.
- Nie, S.; Zhang, G.; Yun, L.; Liu, S. A Faster and Lightweight Lane Detection Method in Complex Scenarios. Electronics 2024, 13, 2486.
- Tabelini, L.; Berriel, R.; Paixão, T.M.; Badue, C.; De Souza, A.F.; Oliveira-Santos, T. Keep your eyes on the lane: Attention-guided lane detection. arXiv 2020, arXiv:2010.12035.
- Zubic, N.; Gehrig, D.; Gehrig, M.; Scaramuzza, D. From Chaos Comes Order: Ordering Event Representations for Object Recognition and Detection. arXiv 2023, arXiv:2304.13455.
- Tabelini, L.; Berriel, R.; Paixão, T.M.; Badue, C.; De Souza, A.F.; Oliveira-Santos, T. PolyLaneNet: Lane Estimation via Deep Polynomial Regression. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 6150–6156.
- Gao, G.; Guo, Y.; Zhou, L.; Li, L.; Shi, G. Res2Net-based multi-scale and multi-attention model for traffic scene image classification. PLoS ONE 2024, 19, e0300017.
- Li, R.; Dong, Y. Robust Lane Detection through Self Pre-training with Masked Sequential Autoencoders and Fine-tuning with Customized PolyLoss. arXiv 2023, arXiv:2305.17271.
| Performance Indicators (U-Net) | TuSimple | Carla | TuSimple+Carla |
|---|---|---|---|
| mIoU | 0.520 | 0.360 | 0.560 |
| F1 score | 0.670 | 0.520 | 0.690 |
| Inference time [s] | 0.016 | 0.017 | 0.016 |
| FPS | 60 | 59 | 60 |
| Performance Indicators (U-Net + GSC) | TuSimple | Carla | TuSimple+Carla |
|---|---|---|---|
| mIoU | 0.530 | 0.450 | 0.580 |
| F1 score | 0.690 | 0.570 | 0.730 |
| Inference time [s] | 0.025 | 0.025 | 0.026 |
| FPS | 35.35 | 34.98 | 35.17 |
| Performance Indicators (U-Net + TE) | TuSimple | Carla | TuSimple+Carla |
|---|---|---|---|
| mIoU | 0.51 | 0.35 | 0.55 |
| F1 score | 0.63 | 0.50 | 0.66 |
| Inference time [s] | 0.023 | 0.023 | 0.023 |
| FPS | 43.20 | 42.68 | 43.10 |
| Performance Indicators (U-Net + MSLDM) | TuSimple | Carla | TuSimple+Carla |
|---|---|---|---|
| mIoU | 0.520 | 0.170 | 0.580 |
| F1 score | 0.680 | 0.280 | 0.720 |
| Inference time [s] | 0.031 | 0.032 | 0.031 |
| FPS | 31.92 | 30.90 | 31.97 |
| Performance Indicators (Hybrid U-Net) | TuSimple | Carla | TuSimple+Carla |
|---|---|---|---|
| mIoU | 0.53 | 0.39 | 0.60 |
| F1 score | 0.69 | 0.55 | 0.74 |
| Inference time [s] | 0.040 | 0.035 | 0.040 |
| FPS | 23.71 | 28.32 | 23.08 |
| Architecture | mIoU | F1 Score | Inference Time [ms/frame] | FPS |
|---|---|---|---|---|
| U-Net | 0.80 | 0.83–0.87 | 50 | 20 |
| SCNN | 0.77 | 0.72–0.80 | 60 | 7.5–16.7 |
| PolyLaneNet | 0.74 | 0.53–0.78 | 45 | 22.2 |
| YOLO | 0.79 | 0.82 | 30–69.9 | 33.3 |
| ResNet34 | - | 0.71–0.96 | - | 280 |
| U-Net (our dataset) | 0.56 | 0.69 | 16 | 60 |
| Hybrid U-Net (our dataset) | 0.60 | 0.69–0.74 | 40 | 28.82 |
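The indicators reported in the tables above are standard segmentation and runtime metrics. A minimal, generic sketch of how they can be computed for binary lane masks (one common formulation, assuming thresholded 0/1 prediction masks; not the authors' evaluation code) is given below.

```python
# Generic sketch of the reported indicators for binary lane segmentation masks:
# IoU / mIoU, F1 (Dice) score, and FPS as the reciprocal of per-frame inference
# time. Not the authors' evaluation code; assumes 0/1 masks of identical shape.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

def f1_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """F1 (Dice) score: 2*TP / (2*TP + FP + FN)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return float((2 * tp + eps) / (2 * tp + fp + fn + eps))

def mean_iou(preds: list, targets: list) -> float:
    """One common mIoU formulation: per-frame IoU averaged over the evaluation set."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))

def fps_from_inference_time(seconds_per_frame: float) -> float:
    """Frames per second as the reciprocal of per-frame inference time."""
    return 1.0 / seconds_per_frame

print(round(fps_from_inference_time(0.016), 1))  # 62.5 FPS for a 16 ms frame time
```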
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Csato, A.; Mariasiu, F. Effects of Hybridizing the U-Net Neural Network in Traffic Lane Detection Process. Appl. Sci. 2025, 15, 7970. https://doi.org/10.3390/app15147970