Robust Lane Detection Based on Informative Feature Pyramid Network in Complex Scenarios
Abstract
1. Introduction
- Channel Information Loss. Fixed lateral connections in conventional FPNs often discard critical channel-wise details, degrading feature representation [17].
- An effective and lightweight lane detection framework (Info-FPNet) is proposed to address the challenges of lane detection in complex driving environments. The architecture improves upon conventional FPN-based methods by enhancing multi-scale feature representation and alignment.
- An informative feature pyramid (IFP) module is designed, which combines pixel shuffle upsampling, feature alignment, and semantic encoding to selectively aggregate spatial and semantic information. This reduces aliasing effects and preserves detailed lane structures across scales.
- A cross-layer refinement (CLR) module is introduced that uses region-wise attention and anchor-based regression to refine coarse lane proposals, improving the localization accuracy of curved and occluded lanes while maintaining computational efficiency.
- Comprehensive experiments conducted on CULane and TuSimple benchmarks demonstrate that our method achieves state-of-the-art performance in both accuracy and robustness. Notably, Info-FPNet outperforms existing methods under challenging conditions such as night-time, strong reflections, and occlusions, while maintaining real-time inference speed.
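The pixel-shuffle upsampling used in the IFP module can be illustrated with a minimal NumPy sketch of the sub-pixel operation of Shi et al.: channel groups are rearranged into spatial detail instead of interpolating, which is what lets the module avoid the aliasing of naive upsampling. The function name and the toy tensor below are illustrative, not taken from the paper.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) array into (C, H*r, W*r).

    Sub-pixel (pixel-shuffle) upsampling: each group of r^2 channels
    is scattered into an r x r spatial neighborhood of one output channel.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    out = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return out.reshape(c, h * r, w * r)

# Toy check: four 2x2 channel maps collapse into one 4x4 map.
feat = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
print(up.shape)  # (1, 4, 4)
```

Each output pixel at position (h·r + i, w·r + j) is read from channel i·r + j at position (h, w), matching the standard sub-pixel convolution layout.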
2. Related Work
2.1. Traditional Lane Detection
2.2. Deep Learning-Based Lane Detection
2.3. Information Fusion-Based Lane Detection
3. Methodology
3.1. Motivation
3.2. Network Architecture
3.2.1. Informative Feature Pyramid (IFP) Module
3.2.2. Cross-Layer Refinement (CLR) Module
3.2.3. Lane IoU Loss (LaneIoU)
4. Experiment and Analysis
4.1. Datasets and Evaluation Metrics
4.1.1. CULane Dataset
4.1.2. TuSimple Dataset
4.1.3. Evaluation Metrics
4.2. Implementation Details
4.3. Ablation Study
4.4. Experimental Comparison with State-of-the-Art Methods
4.4.1. Performance on CULane Dataset
4.4.2. Performance on TuSimple Dataset
4.5. Efficiency Analysis
4.6. Qualitative Results Analysis
5. Conclusions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The kitti dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
- Sheng, S.; Formosa, N.; Hossain, M.; Quddus, M. Advancements in lane marking detection: An extensive evaluation of current methods and future research direction. IEEE Trans. Intell. Veh. 2024, 9, 6462–6473. [Google Scholar] [CrossRef]
- Lee, D.H.; Liu, J.L. End-to-end deep learning of lane detection and path prediction for real-time autonomous driving. Signal Image Video Process. 2023, 17, 199–205. [Google Scholar] [CrossRef]
- Kaur, G.; Kumar, D. Lane detection techniques: A review. Int. J. Comput. Appl. 2015, 112, 4–8. [Google Scholar]
- Bar Hillel, A.; Lerner, R.; Levi, D.; Raz, G. Recent progress in road and lane detection: A survey. Mach. Vis. Appl. 2014, 25, 727–745. [Google Scholar] [CrossRef]
- Tang, J.; Li, S.; Liu, P. A review of lane detection methods based on deep learning. Pattern Recognit. 2021, 111, 107623. [Google Scholar] [CrossRef]
- Lee, H.S.; Kim, K. Simultaneous traffic sign detection and boundary estimation using convolutional neural network. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1652–1663. [Google Scholar] [CrossRef]
- Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust lane detection from continuous driving scenes using deep neural networks. IEEE Trans. Veh. Technol. 2019, 69, 41–54. [Google Scholar] [CrossRef]
- Mukhopadhyay, A.; Murthy, L.; Mukherjee, I.; Biswas, P. A hybrid lane detection model for wild road conditions. IEEE Trans. Artif. Intell. 2022, 4, 1592–1601. [Google Scholar] [CrossRef]
- Maddiralla, V.; Subramanian, S. Effective lane detection on complex roads with convolutional attention mechanism in autonomous vehicles. Sci. Rep. 2024, 14, 19193. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Wang, J.; Li, Y.; Li, C.; Zhang, W. Lane-GAN: A robust lane detection network for driver assistance system in high speed and complex road conditions. Micromachines 2022, 13, 716. [Google Scholar] [CrossRef]
- Bi, J.; Song, Y.; Jiang, Y.; Sun, L.; Wang, X.; Liu, Z.; Xu, J.; Quan, S.; Dai, Z.; Yan, W. Lane Detection for Autonomous Driving: Comprehensive Reviews, Current Challenges, and Future Predictions. IEEE Trans. Intell. Transp. Syst. 2025, 26, 5710–5746. [Google Scholar] [CrossRef]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Honda, H.; Uchida, Y. CLRerNet: Improving confidence of lane detection with LaneIoU. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 1176–1185. [Google Scholar]
- Hui, J.; Lian, G.; Wu, J.; Ge, S.; Yang, J. Proportional feature pyramid network based on weight fusion for lane detection. PeerJ Comput. Sci. 2024, 10, e1824. [Google Scholar] [CrossRef] [PubMed]
- Zheng, T.; Huang, Y.; Liu, Y.; Tang, W.; Yang, Z.; Cai, D.; He, X. CLRNet: Cross Layer Refinement Network for Lane Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Chen, S.; Zhao, J.; Zhou, Y.; Wang, H.; Yao, R.; Zhang, L.; Xue, Y. Info-FPN: An informative feature pyramid network for object detection in remote sensing images. Expert Syst. Appl. 2023, 214, 119132. [Google Scholar] [CrossRef]
- Ke, J.; He, L.; Han, B.; Li, J.; Gao, X. ProFPN: Progressive feature pyramid network with soft proposal assignment for object detection. Knowl.-Based Syst. 2024, 299, 112078. [Google Scholar] [CrossRef]
- Zhao, C.; Fu, X.; Dong, J.; Qin, R.; Chang, J.; Lang, P. SAR ship detection based on end-to-end morphological feature pyramid network. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2022, 15, 4599–4611. [Google Scholar] [CrossRef]
- Aminuddin, N.S.; Ibrahim, M.M.; Ali, N.M.; Radzi, S.A.; Saad, W.H.M.; Darsono, A.M. A new approach to highway lane detection by using Hough transform technique. J. Inf. Commun. Technol. 2017, 16, 244–260. [Google Scholar]
- Aly, M. Real time detection of lane markers in urban streets. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12. [Google Scholar]
- Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial cnn for traffic scene understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
- Hou, Y.; Ma, Z.; Liu, C.; Loy, C.C. Learning lightweight lane detection cnns by self attention distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1013–1021. [Google Scholar]
- Qin, Z.; Zhang, P.; Li, X. Ultra fast deep lane detection with hybrid anchor driven ordinal classification. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 2555–2568. [Google Scholar] [CrossRef]
- Tabelini, L.; Berriel, R.; Paixao, T.M.; Badue, C.; De Souza, A.F.; Oliveira-Santos, T. Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 294–302. [Google Scholar]
- Liu, L.; Chen, X.; Zhu, S.; Tan, P. Condlanenet: A top-to-down lane detection framework based on conditional convolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 3773–3782. [Google Scholar]
- Qin, Z.; Wang, H.; Li, X. Ultra fast structure-aware deep lane detection. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 276–291. [Google Scholar]
- Han, J.; Deng, X.; Cai, X.; Yang, Z.; Xu, H.; Xu, C.; Liang, X. Laneformer: Object-aware Row-Column Transformers for Lane Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 799–807. [Google Scholar]
- Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Towards end-to-end lane detection: An instance segmentation approach. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; IEEE: New York, NY, USA, 2018; pp. 286–291. [Google Scholar]
- Prakash, A.; Chitta, K.; Geiger, A. Multi-modal fusion transformer for end-to-end autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 7077–7087. [Google Scholar]
- Huang, K.; Shi, B.; Li, X.; Li, X.; Huang, S.; Li, Y. Multi-modal sensor fusion for auto driving perception: A survey. arXiv 2022, arXiv:2202.02703. [Google Scholar]
- Kotseruba, I.; Tsotsos, J.K. Attention for vision-based assistive and automated driving: A review of algorithms and datasets. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19907–19928. [Google Scholar] [CrossRef]
- Kao, Y.; Che, S.; Zhou, S.; Guo, S.; Zhang, X.; Wang, W. LHFFNet: A hybrid feature fusion method for lane detection. Sci. Rep. 2024, 14, 16353. [Google Scholar] [CrossRef]
- Lv, Z.; Han, D.; Wang, W.; Chen, C. IFPNet: Integrated feature pyramid network with fusion factor for lane detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 4–6 October 2023; pp. 1888–1897. [Google Scholar]
- Tang, J. Detect Lane Line Based on Bi-directional Feature Pyramid Network. In Proceedings of the 2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE), Guangzhou, China, 5–7 August 2022; pp. 122–126. [Google Scholar]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
- Huang, Z.; Wei, Y.; Wang, X.; Liu, W.; Huang, T.S.; Shi, H. Alignseg: Feature-aligned segmentation networks. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 550–557. [Google Scholar] [CrossRef] [PubMed]
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. Adv. Neural Inf. Process. Syst. 2015, 28, 1–9. [Google Scholar]
- Davies, A.; Fennessy, P. Digital Imaging for Photographers; Routledge: New York, NY, USA, 2012. [Google Scholar]
- Kong, T.; Sun, F.; Liu, H.; Jiang, Y.; Li, L.; Shi, J. Foveabox: Beyound anchor-based object detection. IEEE Trans. Image Process. 2020, 29, 7389–7398. [Google Scholar] [CrossRef]
- Zheng, T.; Fang, H.; Zhang, Y.; Tang, W.; Yang, Z.; Liu, H.; Cai, D. Resa: Recurrent feature-shift aggregator for lane detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; Volume 35, pp. 3547–3554. [Google Scholar]
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. arXiv 2019, arXiv:1711.05101. [Google Scholar] [CrossRef]
- Loshchilov, I.; Hutter, F. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv 2017, arXiv:1608.03983. [Google Scholar] [CrossRef]
- Abualsaud, H.; Liu, S.; Lu, D.B.; Situ, K.; Rangesh, A.; Trivedi, M.M. Laneaf: Robust multi-lane detection with affinity fields. IEEE Robot. Autom. Lett. 2021, 6, 7477–7484. [Google Scholar] [CrossRef]
- Qu, Z.; Jin, H.; Zhou, Y.; Yang, Z.; Zhang, W. Focus on local: Detecting lane marker from bottom up via key point. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14122–14130. [Google Scholar]
- Wang, J.; Ma, Y.; Huang, S.; Hui, T.; Wang, F.; Qian, C.; Zhang, T. A keypoint-based global association network for lane detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1392–1401. [Google Scholar]
- Su, J.; Chen, Z.; He, C.; Guan, D.; Cai, C.; Zhou, T.; Wei, J.; Tian, W.; Xie, Z. Gsenet: Global semantic enhancement network for lane detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 15108–15116. [Google Scholar]
- Wang, P.; Luo, Z.; Zha, Y.; Zhang, Y.; Tang, Y. End-to-End Lane Detection: A Two-Branch Instance Segmentation Approach. Electronics 2025, 14, 1283. [Google Scholar] [CrossRef]
- Yan, D.; Zhang, T. MHFS-FORMER: Multiple-Scale Hybrid Features Transformer for Lane Detection. Sensors 2025, 25, 2876. [Google Scholar] [CrossRef]
- Liu, R.; Yuan, Z.; Liu, T.; Xiong, Z. End-to-end lane shape prediction with transformers. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 3694–3702. [Google Scholar]
Category | Proportion (%) |
---|---|
Normal | 27.7 |
Crowded | 23.4 |
Night | 20.3 |
No line | 11.7 |
Crossroad | 9.0 |
Shadow | 2.7 |
Arrow | 2.6 |
Dazzle light | 1.4 |
Curve | 1.2 |
FP | IFP | CLR | LaneIoU | F1 (%) |
---|---|---|---|---|
✓ | - | - | - | 78.27 |
✓ | - | ✓ | - | 79.73 |
✓ | - | ✓ | ✓ | 79.88 |
- | ✓ | - | - | 79.56 |
- | ✓ | ✓ | - | 79.84 |
- | ✓ | ✓ | ✓ | 80.31 |
Method | Backbone | Normal (%) | Crowded (%) | Dazzle (%) | Shadow (%) | No Line (%) | Arrow (%) | Curve (%) | Cross (FP) | Night (%) |
---|---|---|---|---|---|---|---|---|---|---|
SCNN [22] | VGG16 | 90.60 | 69.70 | 58.50 | 66.90 | 43.40 | 84.10 | 64.40 | 1990 | 66.10 |
LaneAF [45] | ERFNet | 91.10 | 73.32 | 69.71 | 75.81 | 50.62 | 86.86 | 65.02 | 1844 | 70.90 |
LaneAF [45] | DLA-34 | 91.80 | 75.61 | 71.78 | 79.12 | 51.38 | 86.88 | 72.70 | 1360 | 73.03 |
FOLOLane [46] | ERFNet | 92.70 | 77.80 | 75.20 | 79.30 | 52.10 | 89.00 | 69.40 | 1569 | 74.50 |
RESA [42] | Resnet34 | 91.90 | 72.40 | 66.50 | 72.00 | 46.30 | 88.10 | 68.60 | 1896 | 69.80 |
GANet-m [47] | Resnet34 | 93.73 | 77.92 | 71.64 | 79.49 | 52.63 | 90.37 | 76.32 | 1368 | 73.67 |
LaneFormer [28] | Resnet34 | 90.74 | 72.31 | 69.12 | 71.57 | 47.37 | 85.07 | 65.90 | 26 | 67.77 |
LaneATT [25] | Resnet34 | 92.14 | 75.03 | 66.47 | 78.15 | 49.39 | 88.38 | 67.72 | 1330 | 70.72 |
CondLane [26] | Resnet34 | 93.38 | 77.14 | 71.17 | 79.93 | 51.85 | 89.89 | 73.88 | 1387 | 73.92 |
CLRNet [16] | Resnet34 | 93.49 | 78.06 | 74.57 | 79.92 | 54.01 | 90.59 | 72.77 | 1216 | 75.02 |
P-FPN [15] | Resnet34 | 93.70 | 78.24 | 74.81 | 81.21 | 54.21 | 90.74 | 73.92 | 1160 | 74.85 |
GSENet [48] | Resnet34 | 93.80 | 79.42 | 75.34 | 82.27 | 54.83 | 90.67 | - | 1072 | 76.07 |
TBISA [49] | Resnet50 | 92.8 | 74.4 | - | 75.7 | 47.9 | 88.5 | 70.6 | 1653 | 71.0 |
MHFS-Former [50] | Resnet34 | 92.89 | 79.25 | 68.86 | 78.80 | 53.78 | 86.70 | 67.70 | 1219 | 69.88 |
Our method | Resnet34 | 94.04 | 78.91 | 75.14 | 82.56 | 54.69 | 90.94 | 75.83 | 1179 | 75.43 |
Method | Backbone | F1 (%) | FPS |
---|---|---|---|
SCNN [22] | VGG16 | 71.60 | 7.5 |
LaneAF [45] | ERFNet | 75.63 | 24 |
LaneAF [45] | DLA-34 | 77.41 | 20 |
FOLOLane [46] | ERFNet | 78.80 | 40 |
RESA [42] | Resnet34 | 74.50 | 45.5 |
GANet-m [47] | Resnet34 | 79.39 | 127 |
LaneFormer [28] | Resnet34 | 74.70 | - |
LaneATT [25] | Resnet34 | 76.68 | 171 |
CondLane [26] | Resnet34 | 78.74 | 128 |
CLRNet [16] | Resnet34 | 79.73 | 103 |
P-FPN [15] | Resnet34 | 79.94 | 126 |
GSENet [48] | Resnet34 | 80.58 | - |
TBISA [49] | Resnet50 | 76.0 | 51.8 |
MHFS-Former [50] | Resnet34 | 77.38 | - |
Our method | Resnet34 | 80.31 | 98 |
Method | Backbone | F1 (%) | Accuracy (%) |
---|---|---|---|
SCNN [22] | VGG16 | 95.57 | 96.53 |
RESA [42] | Resnet18 | 96.93 | 96.84 |
LaneATT [25] | Resnet18 | 96.71 | 95.57 |
LaneATT [25] | Resnet34 | 96.77 | 95.63 |
LaneATT [25] | Resnet122 | 96.06 | 96.10 |
LSTR [51] | Resnet18 | - | 96.18 |
CondLane [26] | Resnet18 | 97.01 | 95.48 |
CondLane [26] | Resnet34 | 96.98 | 95.37 |
CondLane [26] | Resnet101 | 97.24 | 96.54 |
CLRNet [16] | Resnet18 | 97.89 | 96.84 |
CLRNet [16] | Resnet34 | 97.82 | 96.87 |
CLRNet [16] | Resnet101 | 97.62 | 96.83 |
P-FPN [15] | Resnet18 | 98.01 | 96.91 |
P-FPN [15] | Resnet34 | 97.89 | 96.93 |
P-FPN [15] | Resnet101 | 97.68 | 96.89 |
GSENet [48] | Resnet18 | 97.98 | 96.82 |
GSENet [48] | Resnet34 | 97.94 | 96.88 |
GSENet [48] | Resnet101 | 97.90 | 96.81 |
TBISA [49] | Resnet50 | 96.9 | 96.8 |
MHFS-Former [50] | Resnet18 | 96.42 | - |
MHFS-Former [50] | Resnet34 | 96.88 | - |
Info-FPNet (ours) | Resnet18 | 98.07 | 96.96 |
Info-FPNet (ours) | Resnet34 | 97.94 | 96.94 |
Info-FPNet (ours) | Resnet101 | 97.76 | 96.91 |
Method | Backbone | GFLOPs |
---|---|---|
SCNN [22] | VGG16 | 328.4 |
LaneAF [45] | ERFNet | 22.2 |
LaneAF [45] | DLA-34 | 23.6 |
RESA [42] | Resnet34 | 41.0 |
LaneATT [25] | Resnet34 | 18.0 |
CondLane [26] | Resnet34 | 19.6 |
P-FPN [15] | Resnet34 | 21.5 |
Our method | Resnet34 | 21.9 |
Share and Cite
Lian, G. Robust Lane Detection Based on Informative Feature Pyramid Network in Complex Scenarios. Electronics 2025, 14, 3179. https://doi.org/10.3390/electronics14163179