Real-Time Deterministic Lane Detection on CPU-Only Embedded Systems via Binary Line Segment Filtering
Abstract
1. Introduction
1.1. Background and Motivation
1.2. Deep Learning Approaches and Limitations
1.3. Classical Geometry-Based Methods
1.4. Objective and Contribution
- We propose a Binary Line Segment Filter (BLSF), a lightweight and fully deterministic geometric filtering mechanism that enforces global orientation consistency over line segments, specifically designed for CPU-only embedded platforms.
- We demonstrate that combining median local thresholding with BLSF significantly improves robustness against strong backlighting, low-contrast night scenes, heavy rain, and high-curvature lanes, without relying on learning-based feature extraction.
- We present a complete sensor-layer lane detection pipeline with analyzable worst-case execution time, achieving real-time performance (32.7 ms per frame) on a 2 GHz ARM platform without discrete GPU acceleration.
- We provide extensive quantitative and qualitative comparisons against both classical geometric baselines and representative lightweight deep learning methods on challenging subsets of CULane and LLAMAS, highlighting the trade-offs between robustness, efficiency, and deployability.
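The first contribution bullet describes BLSF as a deterministic filter that enforces global orientation consistency over line segments. A minimal sketch of that idea follows; the bin boundary, the length threshold, and the length-weighted voting rule are our illustrative assumptions, not the paper's exact parameters:

```python
import math

def blsf_filter(segments, theta_straight=15.0, min_len=10.0):
    """Binary Line Segment Filter sketch: length-weighted orientation
    voting over three bins A0/A1/A2 (left-curving, straight,
    right-curving), then keep only segments consistent with the
    winning bin. theta_straight and min_len are illustrative values."""
    def bin_of(theta):
        if theta < -theta_straight:
            return 0  # A0: left-curving
        if theta > theta_straight:
            return 2  # A2: right-curving
        return 1      # A1: straight
    bins = [0.0, 0.0, 0.0]
    voters = []
    for (x1, y1, x2, y2) in segments:
        li = math.hypot(x2 - x1, y2 - y1)
        if li < min_len:
            continue  # reject segments below the length threshold
        # Orientation as deviation from the vertical (driving) axis in BEV.
        theta = math.degrees(math.atan2(x2 - x1, y2 - y1))
        b = bin_of(theta)
        bins[b] += li  # accumulate the length-weighted vote
        voters.append(((x1, y1, x2, y2), b))
    winner = bins.index(max(bins))
    return [seg for (seg, b) in voters if b == winner]
```

Because globally inconsistent clutter (e.g., near-horizontal glare streaks) falls into a losing bin and is discarded as a group, the filter runs in a single deterministic pass over the segment list, which is what makes its worst-case execution time analyzable.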
2. Proposed Lane Detection Pipeline
2.1. Image Preprocessing
2.2. Lane Feature Extraction
2.3. Geometric Priors of Lane Markings in BEV
2.4. Lane Model Fitting
2.5. Handling Violations of the Global Geometric Prior
Geometric Dynamics of the Sliding Window Under Variable Curvature
3. Experimental Verification
3.1. Failure Modes and Dataset Description
3.1.1. Stress-Test Corpus Versus Benchmark-Scale Validation
3.1.2. “Ordinary but Complex” Illumination: Dappled Sunlight and Dusk
3.2. Quantitative Validation and Statistical Confidence
Statistical Confirmation on Large-Scale Benchmarks
3.3. Computational Complexity and Runtime Analysis
Static Memory Footprint and Cache-Resident Execution
3.4. Experimental Setup and Image Preprocessing
3.5. Lane Feature Extraction and Parameter Optimization
3.6. Sensitivity Analysis and Geometric Filtering
4. Performance Evaluation
4.1. Discussion of the Effect of Removing BLSF
4.2. Quantification of the Failure Boundary of the Geometric Prior
5. Application in Autonomous Logistics
5.1. Enabling Sustainable Last-Mile Delivery
5.2. Overcoming Infrastructure and Regulatory Barriers
5.3. Integration with 5PL and Digital-Twin Logistics Ecosystems
6. Limitations and Future Work
6.1. Environmental Scope and Visual Complexity
6.2. Benchmark Coverage and Parameter Adaptation
6.3. Hybrid Architectures and Safety-Oriented Integration
6.4. Future Directions and System Integration Outlook
7. Conclusions
- Algorithmic Efficiency: The pipeline achieves an average processing time of 32.7 ms per frame on a 2 GHz ARM processor. This real-time performance is achieved without GPU acceleration, significantly reducing the power envelope and hardware cost for mass-market ADAS deployment.
- Robustness in Extreme Conditions: Experimental results demonstrate that the BLSF effectively suppresses sensor noise and environmental interference. Compared to previous geometric approaches [16], the proposed method shows superior stability under challenging scenarios, including strong backlighting, heavy rain, and low-contrast night scenes, maintaining an average correct detection rate of 95%.
- Determinism and Explainability: Unlike “black-box” deep learning models, our purely geometric approach offers mathematical transparency and predictable execution time, which are essential for achieving higher Automotive Safety Integrity Levels (ASIL) and formal safety certification.
- Balanced Design: The study confirms that for resource-constrained platforms, the integration of well-motivated geometric priors can recover significant robustness typically associated with heavy convolutional backbones, without the associated hardware overhead.
- Exploring hybrid architectures that combine lightweight temporal tracking with BLSF to handle intermittent markings.
- Evaluating the pipeline’s performance across a broader range of ARM-based architectures and sensor mounting configurations to further validate its cross-platform generalizability.
- Developing adaptive thresholding mechanisms to better handle the transition between urban canyons and open highways.
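As one illustration of the deterministic, integer-friendly operations the conclusions emphasize, the median local thresholding stage combined with BLSF (Section 2.1) can be sketched in one dimension. The window size and intensity margin below are illustrative assumptions, not the paper's tuned parameters:

```python
import statistics

def median_local_threshold(row, w=9, t=20):
    """1D median local thresholding sketch: a pixel is marked as a
    lane feature when it exceeds the median of a sliding window
    (sized to the lane-width support region) by an intensity margin.
    Window size w and margin t are illustrative assumptions."""
    half = w // 2
    out = []
    for i, v in enumerate(row):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        med = statistics.median(row[lo:hi])
        # Lane paint is locally brighter than the surrounding asphalt,
        # regardless of the absolute illumination level.
        out.append(255 if v - med > t else 0)
    return out
```

Because the test is relative to the local median rather than a global threshold, a uniform brightness shift (backlighting, dusk) cancels out, which is consistent with the robustness claims above.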
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- World Health Organization. Global Status Report on Road Safety 2023; World Health Organization: Geneva, Switzerland, 2023.
- Husain, A.A.; Maity, T.; Yadav, R.K. Vehicle detection in intelligent transport system under a hazy environment: A survey. IET Image Process. 2020, 14, 1–10.
- Xing, Y.; Chen, L.; Wang, H.; Cao, D.; Velenis, E.; Wang, F.-Y. Advances in vision-based lane detection: Algorithms, integration, assessment, and perspective on ACP-based parallel vision. IEEE/CAA J. Autom. Sin. 2018, 5, 645–661.
- ISO 26262-1:2018; Road Vehicles—Functional Safety—Part 1: Vocabulary. ISO: Geneva, Switzerland, 2018.
- Qin, Z.; Wang, H.; Li, X. Ultra fast structure-aware deep lane detection. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020.
- Qu, Z.; Jin, H.; Zhou, Y.; Yang, Z.; Zhang, W. Focus on local: Detecting lane marker from bottom up via key point. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14122–14130.
- Han, J.; Deng, X.; Cai, X.; Yang, Z.; Xu, H.; Xu, C.; Liang, X. Laneformer: Object-aware row-column transformers for lane detection. Proc. AAAI Conf. Artif. Intell. 2022, 36, 799–807.
- Sang, I.-C.; Norris, W.R. A robust lane detection algorithm adaptable to challenging weather conditions. IEEE Access 2024, 12, 11185–11195.
- Che, Q.H.; Nguyen, D.P.; Pham, M.Q.; Lam, D.K. TwinLiteNet: An efficient and lightweight model for driveable area and lane segmentation in self-driving cars. arXiv 2023, arXiv:2307.10705.
- Khan, H.U.; Ali, A.R.; Hassan, A.; Ali, A.; Kazmi, W.; Zaheer, A. Lane detection using lane boundary marker network with road geometry constraints. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA, 1–5 March 2020; pp. 1834–1843.
- Ozgunalp, U. Robust lane-detection algorithm based on improved symmetrical local threshold for feature extraction and inverse perspective mapping. IET Image Process. 2019, 13, 975–982.
- Duong, T.T.; Pham, C.C.; Tran, T.H.P.; Nguyen, T.P.; Jeon, J.W. Near real-time ego-lane detection in highway and urban streets. In Proceedings of the IEEE International Conference on Consumer Electronics–Asia (ICCE-Asia), Seoul, Republic of Korea, 26–28 October 2016; pp. 1–4.
- Piao, J.; Shin, H. Robust hypothesis generation method using binary blob analysis for multi-lane detection. IET Image Process. 2017, 11, 1210–1218.
- Xu, S.; Ye, P.; Han, S.; Sun, H.; Jia, Q. Road lane modeling based on RANSAC algorithm and hyperbolic model. In Proceedings of the 3rd International Conference on Systems and Informatics (ICSAI), Shanghai, China, 19–21 November 2016; pp. 97–101.
- Hou, Y.; Ma, Z.; Liu, C.; Loy, C.C. Learning lightweight lane detection CNNs by self attention distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1013–1021.
- Kuo, C.Y.; Lu, Y.R.; Yang, S.M. On the image sensor processing for lane detection and control in vehicle lane keeping systems. Sensors 2019, 19, 1665.
- Narote, S.P.; Bhujbal, P.N.; Narote, A.S.; Dhane, D.M. A review of recent advances in lane detection and departure warning systems. Pattern Recognit. 2018, 73, 216–234.
- Liu, Y.-H.; Hsu, H.P.; Yang, S.M. Development of an efficient and resilient algorithm for lane feature extraction in image sensor-based lane detection. J. Adv. Technol. Eng. Res. 2019, 5, 85–92.
- Zhang, X.; Chen, M.; Zhan, X. A combined approach to single-camera-based lane detection in driverless navigation. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 1042–1046.
- Zhai, S.; Zhao, X.; Zu, G.; Lu, L.; Cheng, C. An algorithm for lane detection based on RIME optimization and optimal threshold. Sci. Rep. 2024, 14, 27244.
- Suder, J.; Podbucki, K.; Marciniak, T. Power requirements evaluation of embedded devices for real-time video line detection. Energies 2023, 16, 6677.
- Lee, W.-C.; Tai, P.-L. Defect detection in striped images using a one-dimensional median filter. Appl. Sci. 2020, 10, 1012.
- Lu, Z.; Xu, Y.; Shan, X.; Liu, L.; Wang, X.; Shen, J. A lane detection method based on a ridge detector and regional G-RANSAC. Sensors 2019, 19, 4028.
- Storsæter, A.D. Camera-based lane detection—Can yellow road markings facilitate automated driving in snow? Vehicles 2021, 3, 664–690.
- Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. A fast line segment detector with false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732.
- Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as deep: Spatial CNN for traffic scene understanding. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New Orleans, LA, USA, 2–7 February 2018.
- Behrendt, K.; Soussan, R. Unsupervised labeled lane markers using maps. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Seoul, Republic of Korea, 27 October–2 November 2019.
- Borkar, A.; Hayes, M.; Smith, M.T. A novel lane detection system with efficient ground-truth generation. IEEE Trans. Intell. Transp. Syst. 2012, 13, 365–374.
- Engesser, V.; Rombaut, E.; Vanhaverbeke, L.; Lebeau, P. Autonomous delivery solutions for last-mile logistics operations: A literature review and research agenda. Sustainability 2023, 15, 2774.
- Alverhed, E.; Hellgren, S.; Isaksson, H.; Olsson, L.; Palmqvist, H.; Flodén, J. Autonomous last-mile delivery robots: A literature review. Eur. Transp. Res. Rev. 2024, 16, 4.








| Approach | Paradigm/Core Idea | Accelerator Need | Determinism & Explainability | Adverse Lighting/Weather Robustness | Positioning vs. BLSF (This Work) |
|---|---|---|---|---|---|
| Classical geometric (e.g., Hough variants) | Edge/line extraction + heuristic fitting | CPU (low requirement) | Deterministic; explainable | Often sensitive to glare/shadows | Serves as an efficiency- and explainability-oriented baseline; however, its performance degrades significantly under extreme illumination conditions, leading to frequent failure cases. |
| Robust Lane Detection [8] | Hybrid geometry/rules with adaptive tuning | CPU-friendly | Explainable relative to DL; behavior depends on tuning policy | Specifically targets challenging weather; supports “geometry is still active” | The results indicate that enforcing bounded and deterministic processing stages may help maintain robustness and predictability in scenarios where data-driven methods exhibit unstable behavior. |
| UFLD [5] | Lightweight DL; row-wise formulation (ResNet-18/34) | GPU preferred (CPU latency can be high) | Stochastic/black-box vs. geometric | Good when training coverage matches domain | Even when adopting lightweight deep learning, the absence of hardware accelerators leads to substantial performance limitations on pure CPU platforms. |
| FOLOLane [6] | DL bottom-up keypoints; exploits locality | GPU preferred | Stochastic/black-box vs. geometric | Strong on benchmarks | Shares a similar emphasis on local geometric consistency with BLSF; however, FOLOLane relies on CNN-based keypoint association, whereas BLSF replaces this with direct histogram voting, making it more suitable for extremely resource-constrained environments. |
| Laneformer [7] | Transformer row–column attention | GPU/accelerator typically required | Stochastic & high-dimensional | Strong on complex scenes | The high computational complexity of attention mechanisms (e.g., quadratic complexity in token interactions) makes such models difficult to deploy on embedded CPU-only systems in real time. |
| TwinLiteNet [9] | Lightweight multi-task segmentation | Jetson/GPU-class preferred | Stochastic/black-box vs. geometric | Good when trained for target domain | Despite being labeled as “lightweight,” the CNN architecture still incurs substantial GFLOPs. When ported to pure CPU platforms (e.g., ARM Cortex-A), inference latency increases markedly, highlighting the non-substitutability of BLSF in CPU-only scenarios. |
| BLSF (this work) | Geometric consensus voting + binary line-segment filtering | CPU (ARM Cortex-A) | Deterministic/white-box; bounded execution time | High under adverse illumination via structural consistency prior | N/A (reference method of this comparison) |
| Symbol | Description | Symbol | Description |
|---|---|---|---|
|  | World coordinate system |  | Length threshold for valid line segments |
|  | Camera coordinate system |  | Lower and upper bounds of the adaptive orientation threshold |
|  | Mounting height of the camera above ground | A0, A1, A2 | Orientation bins for left-curving, straight, and right-curving lanes |
|  | Camera pitch angle |  | Accumulated voting score of an orientation bin |
|  | Camera yaw angle | li | Length of line segment i |
|  | Camera offset relative to vehicle center |  | Estimated lane width in the BEV domain |
|  | A point on the ground plane in world coordinates | H | Height of the BEV image |
|  | Corresponding point in the image plane | Wd | Width of the sliding window |
|  | Grayscale intensity at a pixel |  | Height of the sliding window |
| (R, G, B) | Red, green, and blue channels of the BEV image | n | Number of sliding windows |
|  | Processed grayscale image after median local thresholding |  | Horizontal positions of the left and right lane markings |
|  | 1D median filter window (lane-width support region) | (a, b, c) | Quadratic lane model parameters (x = ay² + by + c) |
|  | Intensity threshold for median local thresholding | N | Number of inlier points in RANSAC fitting |
|  | Length of the i-th line segment |  | Horizontal distance from point i to the fitted lane curve |
|  | Orientation angle of the i-th line segment | S | Aggregate fitting score in RANSAC |
| Δy | Longitudinal step between sliding windows in BEV | Wlane | Width of lane marking |
| Δx | Lateral drift of lane centerline per step | Rmin | Minimum road curvature radius |
| R | Road curvature radius | emax | Maximum superelevation rate |
| M | Lateral capture margin of sliding window | fmax | Maximum side-friction factor |
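The symbol table defines the quadratic lane model x = ay² + by + c together with the RANSAC quantities N (inlier count) and the per-point lateral residual to the fitted curve. A minimal RANSAC sketch under those definitions follows; the iteration count, inlier tolerance, and the exact 3-point solve via Cramer's rule are our assumptions, not necessarily the paper's implementation:

```python
import random

def ransac_quadratic(points, iters=200, d_tol=2.0, seed=0):
    """RANSAC sketch for the lane model x = a*y^2 + b*y + c.
    Samples 3 points (x, y), solves the model exactly, and scores it
    by the inlier count N, where a point is an inlier when its
    lateral residual |x - (a*y^2 + b*y + c)| is below d_tol.
    iters, d_tol, and seed are illustrative assumptions."""
    rng = random.Random(seed)
    best, best_n = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2), (x3, y3) = rng.sample(points, 3)
        # Cramer's rule on the 3x3 Vandermonde system [y^2, y, 1].
        det = (y1**2) * (y2 - y3) - y1 * (y2**2 - y3**2) + (y2**2 * y3 - y2 * y3**2)
        if abs(det) < 1e-9:
            continue  # degenerate sample (repeated y values)
        a = (x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)) / det
        b = ((y1**2) * (x2 - x3) - x1 * (y2**2 - y3**2) + (y2**2 * x3 - y3**2 * x2)) / det
        c = ((y1**2) * (y2 * x3 - y3 * x2) - y1 * (y2**2 * x3 - y3**2 * x2)
             + x1 * (y2**2 * y3 - y2 * y3**2)) / det
        n = sum(1 for (x, y) in points if abs(x - (a * y * y + b * y + c)) < d_tol)
        if n > best_n:
            best, best_n = (a, b, c), n
    return best, best_n
```

Fixing the seed makes the sampling sequence reproducible, which keeps even this stochastic stage deterministic from run to run, in line with the paper's determinism requirement.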
| Condition | Frames (n) | Successes | Accuracy | 95% Wilson Interval | Baseline Accuracy [16] |
|---|---|---|---|---|---|
| High Curvature | 85 | 84 | 98.8% | [93.6%, 99.8%] | 19% |
| Heavy Rain | 60 | 57 | 95.0% | [86.3%, 98.3%] | 17% |
| Strong Backlight | 72 | 67 | 93.1% | [84.7%, 97.0%] | 3% |
| Low Contrast Night | 80 | 74 | 92.5% | [84.6%, 96.5%] | 36% |
| Total Stress Test | 297 | 282 | 95.0% | [91.9%, 97.0%] | <30% |
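The Wilson intervals in the table can be recomputed directly from the frame counts. A minimal sketch (z = 1.96 for 95% confidence) reproduces the per-condition rows to the reported precision; the Total row appears to have been computed from the rounded aggregate accuracy, so it differs in the last digit:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for k successes in n frames.
    Returns (lower, upper) as proportions."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

For example, 84 successes in 85 high-curvature frames gives roughly [93.6%, 99.8%], matching the first row.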
| Processing Stage | Avg. Time (ms) | Percentage (%) |
|---|---|---|
| IPM + Grayscale | 4.1 | 12.5% |
| Median Local Threshold (MLT) | 6.3 | 19.3% |
| LSD | 9.8 | 30.0% |
| BLSF (Voting + Filtering) | 2.1 | 6.4% |
| Hough + Sliding Window | 4.6 | 14.1% |
| RANSAC Fitting | 5.8 | 17.7% |
| Total | 32.7 | 100% |
| Metric | Proposed Geometric Pipeline (BLSF) | SOTA Deep Learning (e.g., UFLD/ResNet18) |
|---|---|---|
| Platform | ARM Cortex-A (CPU Only) | NVIDIA Jetson/Discrete GPU |
| Compute Paradigm | Integer Logic & Histograms | Float-32 Matrix Multiplication (GEMM) |
| Power Envelope | <2 Watts | 10–20 Watts |
| Memory Usage | <5 MB (LUTs + Line Buffers) | >50 MB (Weights + Feature Maps) |
| Inference Latency | 32.7 ms (Deterministic) | 8.4 ms (GPU)/>150 ms (CPU) |
| Safety Certifiability | High (White-Box/Auditable) | Low (Black-Box/Stochastic) |
| Condition | With BLSF | Without BLSF | Baseline [16] |
|---|---|---|---|
| High curvature | 99% | 97% | 19% |
| Strong backlighting | 93% | 89% | 3% |
| Low contrast night | 92% | 92% | 36% |
| Heavy rain | 95% | 78% | 17% |
| Method | Backbone Type | Dazzling Light (F1/IoU) | Night (F1/IoU) | FPS (CPU) | Power Efficiency |
|---|---|---|---|---|---|
| BLSF (Ours) | Geometric (Edge/Segment + Hough) | 0.861/0.792 | 0.847/0.774 | 30.6 | High |
| Baseline (No BLSF) | Geometric (Edge/Segment) | 0.801/0.776 | 0.776/0.731 | 29.2 | High |
| Kuo et al. [16] | Geometric (Grid-based) | 0.654/0.582 | 0.698/0.610 | 28.5 | High |
| UFLD [5] | DL (ResNet-18) | 0.872/0.805 | 0.855/0.781 | 8.4 | Low |
| ENet-SAD [15] | DL (ENet) | 0.841/0.763 | 0.832/0.755 | 14.1 | Medium |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Tsai, S.-E.; Yang, S.-M.; Hsieh, C.-H. Real-Time Deterministic Lane Detection on CPU-Only Embedded Systems via Binary Line Segment Filtering. Electronics 2026, 15, 351. https://doi.org/10.3390/electronics15020351

