Search Results (110)

Search Parameters:
Keywords = lane recognition

19 pages, 4337 KB  
Article
Automatic Real-Time Queue Length Detection Method of Multiple Lanes at Intersections Based on Roadside LiDAR
by Qian Chen, Jianying Zheng, Ennian Du, Xiang Wang, Wenjuan E, Xingxing Jiang, Yang Xiao, Yuxin Zhang and Tieshan Li
Electronics 2026, 15(3), 585; https://doi.org/10.3390/electronics15030585 - 29 Jan 2026
Abstract
Signalized intersections are key nodes in urban road traffic networks, and real-time queue length information serves as a core performance indicator for formulating effective signal management schemes in modern adaptive traffic signal control systems, thereby enhancing traffic efficiency. In this study, a roadside Light Detection and Ranging (LiDAR) sensor is employed to acquire 3D point cloud data of vehicles in the road space, which provides the basis for queue length detection. However, during queue-length detection, vehicles in different lanes are prone to occlusion because of the straight-line propagation of laser beams. This paper proposes a queue-length detection method based on variations in vehicle point cloud features to address the occlusion of queue-end vehicles during detection. The method first preprocesses the LiDAR point cloud data (including region-of-interest extraction, ground-point filtering, point cloud clustering, object association, and lane recognition) to detect real-time queue lengths across multiple lanes. Subsequently, the occlusion problem is categorized into complete occlusion and partial occlusion, and corresponding processing is performed to correct the detection results. The performance of the proposed method was validated through experiments using real-world data collected from three urban road intersections in Suzhou, where its average accuracy reached 99.3%. The effectiveness of the proposed occlusion handling method was also validated experimentally.
(This article belongs to the Section Computer Science & Engineering)
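
As an illustration of the kind of preprocessing pipeline the abstract outlines, the sketch below clusters above-ground LiDAR returns per lane and reads off a queue length. It is a minimal stand-in, not the paper's method; the thresholds, DBSCAN parameters, lane boundaries, and road-aligned coordinate convention are all assumptions.

```python
# Minimal sketch of a roadside-LiDAR queue-length pipeline in the spirit of the
# paper's preprocessing steps; all numeric values are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

def queue_length_per_lane(points, lane_edges, ground_z=0.2, stop_line_x=0.0):
    """points: (N, 3) LiDAR returns in road coordinates, where
    x = distance upstream of the stop line, y = lateral offset, z = height."""
    pts = points[points[:, 2] > ground_z]          # crude ground-point removal
    lengths = {}
    for lane, (y_min, y_max) in lane_edges.items():
        in_lane = pts[(pts[:, 1] >= y_min) & (pts[:, 1] < y_max)]
        if len(in_lane) == 0:
            lengths[lane] = 0.0
            continue
        # Cluster vehicle returns; eps/min_samples are illustrative values.
        labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(in_lane[:, :2])
        vehicle_pts = in_lane[labels >= 0]
        # Queue length = farthest clustered return upstream of the stop line.
        lengths[lane] = float(vehicle_pts[:, 0].max() - stop_line_x) if len(vehicle_pts) else 0.0
    return lengths

lanes = {"lane_1": (-5.25, -1.75), "lane_2": (-1.75, 1.75)}  # hypothetical lane bounds (m)
print(queue_length_per_lane(np.random.rand(1000, 3) * [60, 7, 2] - [0, 5.25, 0], lanes))
```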

29 pages, 4853 KB  
Article
ROS 2-Based Architecture for Autonomous Driving Systems: Design and Implementation
by Andrea Bonci, Federico Brunella, Matteo Colletta, Alessandro Di Biase, Aldo Franco Dragoni and Angjelo Libofsha
Sensors 2026, 26(2), 463; https://doi.org/10.3390/s26020463 - 10 Jan 2026
Viewed by 600
Abstract
Interest in the adoption of autonomous vehicles (AVs) continues to grow. It is essential to design new software architectures that meet stringent real-time, safety, and scalability requirements while integrating heterogeneous hardware and software solutions from different vendors and developers. This paper presents a lightweight, modular, and scalable architecture grounded in Service-Oriented Architecture (SOA) principles and implemented in ROS 2 (Robot Operating System 2). The proposed design leverages ROS 2’s Data Distribution Service (DDS)-based Quality-of-Service model to provide reliable communication, structured lifecycle management, and fault containment across distributed compute nodes. The architecture is organized into Perception, Planning, and Control layers with decoupled sensor access paths to satisfy heterogeneous frequency and hardware constraints. The decision-making core follows an event-driven policy that prioritizes fresh updates without enforcing global synchronization, applying zero-order hold where inputs are not refreshed. The architecture was validated on a 1:10-scale autonomous vehicle operating on a city-like track. The test environment covered canonical urban scenarios (lane-keeping, obstacle avoidance, traffic-sign recognition, intersections, overtaking, parking, and pedestrian interaction), with absolute positioning provided by an indoor GPS (Global Positioning System) localization setup. This work shows that the end-to-end Perception–Planning pipeline consistently met worst-case deadlines, yielding deterministic behaviour even under stress. The proposed architecture can be deemed compliant with real-time application standards for our use case on the 1:10 test vehicle, providing a robust foundation for deployment and further refinement.
(This article belongs to the Special Issue Sensors and Sensor Fusion for Decision Making for Autonomous Driving)
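
To make the DDS-backed Quality-of-Service idea concrete, here is a minimal rclpy sketch of a publisher configured with an explicit QoS profile. The node name, topic, rate, and QoS settings are hypothetical choices, not the paper's configuration.

```python
# Minimal rclpy sketch of an explicit DDS Quality-of-Service configuration;
# node/topic names and policy choices are illustrative assumptions.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy, DurabilityPolicy
from std_msgs.msg import Float32

class PerceptionNode(Node):
    def __init__(self):
        super().__init__("perception_node")
        # Reliable, bounded-depth QoS; transient-local durability lets a
        # late-joining subscriber still receive the last published sample.
        qos = QoSProfile(
            reliability=ReliabilityPolicy.RELIABLE,
            history=HistoryPolicy.KEEP_LAST,
            depth=10,
            durability=DurabilityPolicy.TRANSIENT_LOCAL,
        )
        self.pub = self.create_publisher(Float32, "/lane_offset", qos)
        self.timer = self.create_timer(0.05, self.tick)  # 20 Hz publishing

    def tick(self):
        msg = Float32()
        msg.data = 0.0  # placeholder lane-offset estimate
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(PerceptionNode())

if __name__ == "__main__":
    main()
```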

27 pages, 5656 KB  
Article
Dynamic Visibility Recognition and Driving Risk Assessment Under Rain–Fog Conditions Using Monocular Surveillance Imagery
by Zilong Xie, Chi Zhang, Dibin Wei, Xiaomin Yan and Yijing Zhao
Sustainability 2026, 18(2), 625; https://doi.org/10.3390/su18020625 - 7 Jan 2026
Viewed by 227
Abstract
This study addresses the limitations of conventional highway visibility monitoring under rain–fog conditions, where fixed stations and visibility sensors provide limited spatial coverage and unstable accuracy. Considering that drivers’ visual fields are jointly affected by global fog and local spray-induced mist, a dynamic visibility recognition and risk assessment framework is proposed using roadside monocular CCTV (Closed-Circuit Television) imagery. The method integrates the Koschmieder scattering model with the dark channel prior to estimate atmospheric transmittance and derives visibility through lane-line calibration. A Monte Carlo-based coupling model simulates local visibility degradation caused by tire spray, while a safety potential field defines the low-visibility risk field force (LVRFF), combining dynamic visibility, relative speed, and collision distance. Results show that the approach achieves over 86% accuracy under heavy rain and effectively captures real-time visibility variations, and that the LVRFF exhibits strong sensitivity to visibility degradation, outperforming traditional safety indicators in identifying high-risk zones. By enabling scalable, infrastructure-based visibility monitoring without additional sensing devices, the proposed framework reduces deployment cost and energy consumption while enhancing the long-term operational resilience of highway systems under adverse weather. From a sustainability perspective, the method supports safer, more reliable, and resource-efficient traffic management, contributing to the development of intelligent and sustainable transportation infrastructure.
(This article belongs to the Special Issue Traffic Safety, Traffic Management, and Sustainable Mobility)
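
The Koschmieder relation and the dark channel prior are both standard enough to illustrate with a short numpy sketch: estimate transmittance from the dark channel, back out the extinction coefficient, and convert it to meteorological visibility. The patch size, airlight, omega, and the 5% contrast threshold (giving the 3.912 constant) are common textbook choices, not values taken from the paper.

```python
# Illustrative dark-channel-prior transmittance estimate plus the Koschmieder
# visibility relation; parameters are conventional defaults, not the paper's.
import numpy as np

def dark_channel(img, patch=15):
    """img: HxWx3 float image in [0, 1]; returns the per-pixel dark channel."""
    dc = img.min(axis=2)                        # min over color channels
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    for i in range(dc.shape[0]):                # min over a local patch
        for j in range(dc.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def visibility_from_patch(img, depth_m, airlight=1.0, omega=0.95):
    t = 1.0 - omega * dark_channel(img) / airlight   # transmittance estimate
    beta = -np.log(np.clip(t, 1e-3, 1.0)) / depth_m  # from t = exp(-beta * d)
    # Koschmieder: visibility at a 5% contrast threshold, V = 3.912 / beta.
    return 3.912 / max(float(np.median(beta)), 1e-6)
```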

34 pages, 4042 KB  
Article
Perceptual Elements and Sensitivity Analysis of Urban Tunnel Portals for Autonomous Driving
by Mengdie Xu, Bo Liang, Haonan Long, Chun Chen, Hongyi Zhou and Shuangkai Zhu
Appl. Sci. 2026, 16(1), 453; https://doi.org/10.3390/app16010453 - 31 Dec 2025
Viewed by 250
Abstract
Urban tunnel portals constitute critical safety zones for autonomous vehicles, where abrupt luminance transitions, shortened sight distances, and densely distributed structural and traffic elements pose considerable challenges to perception reliability. Existing driving scenario datasets are rarely tailored to tunnel environments and have not quantitatively evaluated how specific infrastructure components influence perception latency in autonomous systems. This study develops a requirement-driven framework for the identification and sensitivity ranking of information perception elements within urban tunnel portals. Based on expert evaluations and a combined function–safety scoring system, nine key elements—including road surfaces, tunnel portals, lane markings, and vehicles—were identified as perception-critical. A “mandatory–optional” combination rule was then applied to generate 48 logical scene types, and 376 images were retained after screening for brightness (30–220 px), blur (Laplacian variance ≥ 100), and occlusion (≤0.5% pixel error). A ResNet50–PSPNet convolutional neural network was trained to perform pixel-level segmentation, with inference rate adopted as a quantitative proxy for perceptual sensitivity. Field experiments across ten urban tunnels in China indicate that the model consistently recognized road surfaces, lane markings, cars, and motorcycles with the shortest inference times (<6.5 ms), whereas portal structures and vegetation required longer recognition times (>7.5 ms). This sensitivity ranking is statistically stable under clear, daytime conditions (p < 0.01). The findings provide engineering insights for optimizing tunnel lighting design, signage placement, and V2X configuration, and offer a pilot dataset to support perception-oriented design and evaluation of urban tunnel portals in semi-enclosed environments. Unlike generic segmentation datasets, this study quantifies element-specific CNN latency at tunnel portals for the first time.
(This article belongs to the Section Civil Engineering)
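
The three screening criteria quoted in the abstract map directly onto a few lines of OpenCV. The sketch below applies them to one image; it assumes brightness is measured as mean gray level, and that the per-image occlusion error is computed elsewhere.

```python
# Sketch of the brightness/blur/occlusion screening described in the abstract;
# thresholds mirror the quoted values, and occlusion_error is assumed to be
# precomputed per image by some upstream annotation check.
import cv2

def passes_screening(path, occlusion_error):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mean_brightness = gray.mean()                       # quoted 30-220 window
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()  # Laplacian variance
    return (30 <= mean_brightness <= 220
            and blur_score >= 100
            and occlusion_error <= 0.005)               # <= 0.5% pixel error
```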

33 pages, 2750 KB  
Article
Real-Time Detection of Rear Car Signals for Advanced Driver Assistance Systems Using Meta-Learning and Geometric Post-Processing
by Vasu Tammisetti, Georg Stettinger, Manuel Pegalajar Cuellar and Miguel Molina-Solana
Appl. Sci. 2025, 15(22), 11964; https://doi.org/10.3390/app152211964 - 11 Nov 2025
Viewed by 768
Abstract
Accurate identification of rear light signals in preceding vehicles is pivotal for Advanced Driver Assistance Systems (ADAS), enabling early detection of driver intentions and thereby improving road safety. In this work, we present a novel approach that leverages a meta-learning-enhanced YOLOv8 model to detect left and right turn indicators, as well as brake signals. Traditional radar and LiDAR provide robust geometry, range, and motion cues that can indirectly suggest driver intent (e.g., deceleration or lane drift). However, they do not directly interpret color-coded rear signals, which limits early intent recognition from the taillights. We therefore focus on a camera-based approach that complements ranging sensors by decoding color and spatial patterns in rear lights. Detecting vehicle signals this way poses additional challenges due to factors such as high reflectivity and the subtle visual differences between directional indicators. We address these challenges by training a YOLOv8 model with a meta-learning strategy, thus enhancing its capability to learn from minimal data and rapidly adapt to new scenarios. Furthermore, we developed a post-processing layer that classifies signals by the geometric properties of detected objects, employing mathematical principles such as distance, area calculation, and Intersection over Union (IoU) metrics. Our approach increases adaptability and performance compared to traditional deep learning techniques, supporting the conclusion that integrating meta-learning into real-time object detection frameworks provides a scalable and robust solution for intelligent vehicle perception; reliable prediction of vehicular behavior in turn significantly enhances situational awareness and road safety.
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
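
To illustrate the flavor of the geometric post-processing, the sketch below associates a detected lamp with a vehicle box via IoU and labels it as a left or right indicator from its relative horizontal position. The box format and the 0.5 midpoint rule are illustrative assumptions, not the paper's exact rules.

```python
# Hedged sketch of IoU-based lamp-to-vehicle association and a simple
# left/right rule from box geometry; thresholds are illustrative.
def iou(a, b):
    """Boxes as (x1, y1, x2, y2) in image coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def classify_signal(lamp_box, vehicle_box):
    if iou(lamp_box, vehicle_box) == 0:
        return "unassociated"                  # lamp not on this vehicle
    lamp_cx = (lamp_box[0] + lamp_box[2]) / 2  # lamp center x
    rel = (lamp_cx - vehicle_box[0]) / (vehicle_box[2] - vehicle_box[0])
    return "left_indicator" if rel < 0.5 else "right_indicator"
```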

41 pages, 3403 KB  
Review
Towards Next-Generation FPGA-Accelerated Vision-Based Autonomous Driving: A Comprehensive Review
by Md. Reasad Zaman Chowdhury, Ashek Seum, Mahfuzur Rahman Talukder, Rashed Al Amin, Fakir Sharif Hossain and Roman Obermaisser
Signals 2025, 6(4), 53; https://doi.org/10.3390/signals6040053 - 1 Oct 2025
Viewed by 3979
Abstract
Autonomous driving has emerged as a rapidly advancing field in both industry and academia over the past decade. Among the enabling technologies, computer vision (CV) has demonstrated high accuracy across various domains, making it a critical component of autonomous vehicle systems. However, CV tasks are computationally intensive and often require hardware accelerators to achieve real-time performance. Field Programmable Gate Arrays (FPGAs) have gained popularity in this context due to their reconfigurability and high energy efficiency. Numerous researchers have explored FPGA-accelerated CV solutions for autonomous driving, addressing key tasks such as lane detection, pedestrian recognition, traffic sign and signal classification, vehicle detection, object detection, environmental variability sensing, and fault analysis. Despite this growing body of work, the field remains fragmented, with significant variability in implementation approaches, evaluation metrics, and hardware platforms. Crucial performance factors, including latency, throughput, power consumption, energy efficiency, detection accuracy, datasets, and FPGA architectures, are often assessed inconsistently. To address this gap, this paper presents a comprehensive literature review of FPGA-accelerated, vision-based autonomous driving systems. It systematically examines existing solutions across sub-domains, categorizes key performance factors, and synthesizes the current state of research. This study aims to provide a consolidated reference for researchers, supporting the development of more efficient and reliable next-generation autonomous driving systems by highlighting trends, challenges, and opportunities in the field.

82 pages, 17076 KB  
Review
Advancements in Embedded Vision Systems for Automotive: A Comprehensive Study on Detection and Recognition Techniques
by Anass Barodi, Mohammed Benbrahim and Abdelkarim Zemmouri
Vehicles 2025, 7(3), 99; https://doi.org/10.3390/vehicles7030099 - 12 Sep 2025
Cited by 1 | Viewed by 2579
Abstract
Embedded vision systems play a crucial role in the advancement of intelligent transportation by supporting real-time perception tasks such as traffic sign recognition and lane detection. Despite significant progress, their performance remains sensitive to environmental variability, computational constraints, and scene complexity. This review examines the current state of the art in embedded vision approaches used for the detection and classification of traffic signs and lane markings. The literature is structured around three main stages (localization, detection, and recognition), highlighting how visual features like color, geometry, and road edges are processed through both traditional and learning-based methods. A major contribution of this work is the introduction of a practical taxonomy that organizes recognition techniques according to their computational load and real-time applicability in embedded contexts. In addition, the paper presents a critical synthesis of existing limitations, with attention to sensor fusion challenges, dataset diversity, and deployment in real-world conditions. By adopting the SALSA methodology, the review follows a transparent and systematic selection process, ensuring reproducibility and clarity. The study concludes by identifying specific research directions aimed at improving the robustness, scalability, and interpretability of embedded vision systems. These contributions position the review as a structured reference for researchers working on intelligent driving technologies and next-generation driver assistance systems. The findings are expected to inform future implementations of embedded vision systems in real-world driving environments.

27 pages, 1057 KB  
Review
Distributed Acoustic Sensing for Road Traffic Monitoring: Principles, Signal Processing, and Emerging Applications
by Jingxiang Deng, Long Jin, Hongzhi Wang, Zihao Zhang, Yanjiang Liu, Fei Meng, Jikai Wang, Zhenghao Li and Jianqing Wu
Infrastructures 2025, 10(9), 228; https://doi.org/10.3390/infrastructures10090228 - 29 Aug 2025
Viewed by 3652
Abstract
With accelerating urbanization and the exponential growth in vehicle populations, high-precision traffic monitoring has become indispensable for intelligent transportation systems (ITSs). Conventional sensing technologies—including inductive loops, cameras, and radar—suffer from inherent limitations: restrictive spatial coverage, prohibitive installation costs, and vulnerability to adverse weather. Distributed Acoustic Sensing (DAS), leveraging Rayleigh backscattering to convert standard optical fibers into kilometer-scale, real-time vibration sensor networks, presents a transformative alternative. This paper provides a comprehensive review of the principles and system architecture of DAS for roadway traffic monitoring, with a focus on signal processing techniques, feature extraction methods, and recent advances in vehicle detection, classification, and speed/flow estimation. Special attention is given to the integration of deep learning approaches, which enhance noise suppression and feature recognition under complex, multi-lane traffic conditions. Real-world deployment cases on highways, urban roads, and bridges are analyzed to demonstrate DAS’s practical value. Finally, this paper delineates emerging research trends and technical hurdles, providing actionable insights for the scalable implementation of DAS-enhanced ITS infrastructures.
(This article belongs to the Special Issue Sustainable Road Design and Traffic Management)
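
As a toy example of the signal-processing stage such reviews survey, the sketch below band-pass filters one DAS channel and picks vehicle-passage events as envelope peaks. The sampling rate, band edges, and peak thresholds are assumptions chosen for illustration only.

```python
# Illustrative single-channel DAS event picker: band-pass filter, Hilbert
# envelope, then peak detection; all parameters are invented for the sketch.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def detect_vehicle_events(trace, fs=1000.0, band=(5.0, 80.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trace)           # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))       # vibration energy envelope
    # A peak well above the noise floor is treated as one vehicle passage;
    # the 0.5 s refractory window avoids double-counting one vehicle.
    peaks, _ = find_peaks(envelope, height=5 * np.median(envelope),
                          distance=int(0.5 * fs))
    return peaks / fs                          # event times in seconds
```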

17 pages, 1852 KB  
Article
A Hybrid Classical-Quantum Neural Network Model for DDoS Attack Detection in Software-Defined Vehicular Networks
by Varun P. Sarvade, Shrirang Ambaji Kulkarni and C. Vidya Raj
Information 2025, 16(9), 722; https://doi.org/10.3390/info16090722 - 25 Aug 2025
Cited by 3 | Viewed by 1605
Abstract
A typical Software-Defined Vehicular Network (SDVN) is open to various cyberattacks because of its centralized controller-based framework. A cyberattack such as a Distributed Denial of Service (DDoS) attack can easily overload the central SDVN controller. Thus, we require a functional DDoS attack recognition system that can differentiate malicious traffic from normal data traffic. The proposed architecture comprises hybrid Classical-Quantum Machine Learning (QML) methods for detecting DDoS threats. In this work, we consider three QML methods: Classical-Quantum Neural Networks (C-QNN), Classical-Quantum Boltzmann Machines (C-QBM), and Classical-Quantum K-Means Clustering (C-QKM). Emulations were conducted using a custom-built vehicular network with random movements and speeds varying between 0 and 100 km/h, and the performance of the QML methods was analyzed on two different datasets. The results show that the hybrid Classical-Quantum Neural Network (C-QNN) performed best among the three models, achieving accuracies of 99% and 90% on the UNB-CIC-DDoS dataset and the Kaggle DDoS dataset, respectively. The hybrid C-QNN model combines PennyLane’s quantum circuits with traditional methods, whereas the Classical-Quantum Boltzmann Machine (C-QBM) leverages quantum probability distributions for identifying anomalies.
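
Since the abstract names PennyLane, a minimal hybrid classical-quantum classifier along C-QNN lines can be sketched as a classical head feeding a variational circuit. The qubit count, circuit depth, feature dimension, and layer sizes below are illustrative choices, not the paper's architecture.

```python
# Minimal PennyLane + PyTorch sketch of a hybrid classical-quantum classifier;
# all dimensions and layer choices are illustrative assumptions.
import pennylane as qml
import torch
import torch.nn as nn

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))        # encode features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits)) # trainable layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}   # 3 entangling layers
model = nn.Sequential(
    nn.Linear(20, n_qubits),                 # classical feature compression
    nn.Tanh(),
    qml.qnn.TorchLayer(circuit, weight_shapes),
    nn.Linear(n_qubits, 2),                  # benign vs. DDoS logits
)
logits = model(torch.randn(8, 20))           # batch of 8 flow-feature vectors
```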

20 pages, 11718 KB  
Article
Automatic Electric Tricycles Trajectory Tracking and Multi-Violation Detection
by Leishan Guo, Bo Yu, Benhao Xie, Geng Zhao, Yuan Tian and Jianqing Wu
Sensors 2025, 25(16), 5135; https://doi.org/10.3390/s25165135 - 19 Aug 2025
Viewed by 895
Abstract
The escalating traffic violations associated with electric tricycles pose a critical challenge to urban traffic safety, so it is important to automatically track electric tricycle trajectories and detect the multiple violations associated with them. This paper proposes an Electric Tricycle Object Detection (ETOD) model based on a custom-built dataset of electric tricycles. ETOD achieves real-time, accurate recognition and high-precision detection of electric tricycles. By integrating a multi-object tracking algorithm, an Electric Tricycle Violation Detection System (ETVDS) was developed. The ETVDS detects and identifies violations including speeding, passenger overloading, and illegal lane changes by plotting electric tricycle trajectories, and it can identify conflicts involving electric tricycles in complex traffic scenarios. This work offers an effective technological solution for mitigating electric tricycle traffic violations in challenging urban environments.
(This article belongs to the Section Vehicular Sensing)
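
One of the three violation checks, speeding, reduces to differencing a tracked trajectory. The toy sketch below flags over-limit frames; the frame rate and speed limit are hypothetical, and the paper's overloading and lane-change checks need richer cues than positions alone.

```python
# Toy speeding check over one tracked trajectory; fps and limit are assumed.
import numpy as np

def speeding_frames(track_xy_m, fps=25.0, limit_kmh=20.0):
    """track_xy_m: (T, 2) ground-plane positions of one tricycle in metres."""
    step = np.linalg.norm(np.diff(track_xy_m, axis=0), axis=1)  # m per frame
    speed_kmh = step * fps * 3.6                                # convert m/s -> km/h
    return np.flatnonzero(speed_kmh > limit_kmh) + 1            # offending frames
```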

27 pages, 7810 KB  
Article
Mutation Interval-Based Segment-Level SRDet: Side Road Detection Based on Crowdsourced Trajectory Data
by Ying Luo, Fengwei Jiao, Longgang Xiang, Xin Chen and Meng Wang
ISPRS Int. J. Geo-Inf. 2025, 14(8), 299; https://doi.org/10.3390/ijgi14080299 - 31 Jul 2025
Viewed by 1056
Abstract
Accurate side road detection is essential for traffic management, urban planning, and vehicle navigation. However, existing research mainly focuses on road network construction, lane extraction, and intersection identification, while fine-grained side road detection remains underexplored. Therefore, this study proposes a road segment-level side road detection method based on crowdsourced trajectory data. First, considering the geometric and dynamic characteristics of trajectories, SRDet introduces a trajectory lane-change pattern recognition method based on mutation intervals to distinguish the heterogeneity of lane-change behaviors between main and side roads. Second, combining geometric features with spatial statistical theory, SRDet constructs multimodal features for trajectories and road segments, and proposes a potential side road segment classification model based on random forests to achieve precise detection of side road segments. Finally, based on mutation intervals and potential side road segments, SRDet uses density peak clustering to identify main and side road access points, completing the fitting of side roads. Experiments were conducted using 2021 Beijing trajectory data. The results show that SRDet achieves precision and recall rates of 84.6% and 86.8%, respectively, demonstrating its superior performance in side road detection across different areas and providing support for the precise updating of urban road navigation information.
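
The second stage, a random-forest classifier over per-segment features, is straightforward to sketch. The feature columns below are invented stand-ins for the multimodal geometric/statistical features the paper constructs, and the training data here is synthetic.

```python
# Hedged sketch of a random-forest segment classifier in the spirit of SRDet's
# second stage; features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical columns: segment length, mean lateral trajectory spread,
# lane-change mutation-interval density, mean heading deviation.
X_train = np.random.rand(500, 4)
y_train = np.random.randint(0, 2, 500)        # 1 = potential side road segment

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
side_road_prob = clf.predict_proba(np.random.rand(10, 4))[:, 1]
```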

30 pages, 3451 KB  
Article
Integrating Google Maps and Smooth Street View Videos for Route Planning
by Federica Massimi, Antonio Tedeschi, Kalapraveen Bagadi and Francesco Benedetto
J. Imaging 2025, 11(8), 251; https://doi.org/10.3390/jimaging11080251 - 25 Jul 2025
Viewed by 3683
Abstract
This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route both conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client–server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.
(This article belongs to the Section Computer Vision and Pattern Recognition)
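
The final module, persisting computed videos, amounts to stitching ordered street-level frames into a video file. Here is a minimal OpenCV sketch of that step; the codec, frame rate, and file names are illustrative, and the frame-acquisition and analysis modules are assumed to run upstream.

```python
# Minimal OpenCV sketch of stitching ordered route frames into a video file;
# codec/fps/paths are illustrative assumptions.
import cv2

def frames_to_video(frame_paths, out_path="route.mp4", fps=10):
    first = cv2.imread(frame_paths[0])
    h, w = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for p in frame_paths:
        frame = cv2.imread(p)
        writer.write(cv2.resize(frame, (w, h)))  # enforce a uniform frame size
    writer.release()
```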

16 pages, 2152 KB  
Article
Vehicle Motion State Recognition Method Based on Hidden Markov Model and Support Vector Machine
by Xiaojun Zou, Weibo Xiang, Jihong Lian, En Song, Chengkai Tang and Yangyang Liu
Symmetry 2025, 17(7), 1011; https://doi.org/10.3390/sym17071011 - 27 Jun 2025
Viewed by 3180
Abstract
With the development of intelligent transportation, vehicle motion state recognition has become a crucial method for enhancing the reliability of vehicle navigation and ensuring driving safety. Currently, machine learning is the main approach for recognizing vehicle motion states, and the symmetry characteristics of sensor data have also been studied to better recognize motion states. However, existing approaches face challenges during motion state changes due to indeterminate state boundaries, resulting in reduced recognition accuracy. To address this problem, this paper proposes a vehicle motion state recognition method based on the Hidden Markov Model (HMM) and Support Vector Machine (SVM). First, Kalman filtering is applied to denoise the inertial sensor data. Then, an HMM is employed to capture subtle state transitions, enabling the recognition of complex dynamic state changes. Finally, an SVM is used to classify motion states. Sensor data were collected in various vehicle motion states (stationary, straight-line driving, lane changing, and turning), and the proposed method was compared with SVM, KNN (K-Nearest Neighbor), DT (Decision Tree), RF (Random Forest), and NB (Naive Bayes). The experimental results show that the proposed method improves the recognition accuracy of motion state transitions in the case of boundary ambiguity and is superior to the existing methods.
(This article belongs to the Special Issue Symmetry and Its Application in Wireless Communication)
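
A plausible reading of the two-stage design is that HMM state posteriors supply soft transition evidence that an SVM then classifies alongside the raw features. The sketch below implements that reading with hmmlearn and scikit-learn on synthetic data; the feature dimension, state count, and the exact coupling between the two models are assumptions.

```python
# Illustrative HMM + SVM pipeline on synthetic data; the coupling shown here
# (HMM posteriors appended to features for the SVM) is one plausible reading
# of the abstract, not the paper's confirmed design.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

X = np.random.randn(2000, 6)                 # stand-in Kalman-filtered IMU features
y = np.random.randint(0, 4, 2000)            # stationary/straight/lane-change/turn

hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50).fit(X)
posteriors = hmm.predict_proba(X)            # soft state-transition evidence

svm = SVC(kernel="rbf").fit(np.hstack([X, posteriors]), y)
pred = svm.predict(np.hstack([X, posteriors]))
```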

20 pages, 2853 KB  
Article
MHFS-FORMER: Multiple-Scale Hybrid Features Transformer for Lane Detection
by Dongqi Yan and Tao Zhang
Sensors 2025, 25(9), 2876; https://doi.org/10.3390/s25092876 - 2 May 2025
Cited by 2 | Viewed by 1369
Abstract
Although deep learning has exhibited remarkable performance in lane detection, the task remains challenging in complex scenarios, including those with damaged lane markings, obstructions, and insufficient lighting. Furthermore, a significant drawback of most existing lane-detection algorithms lies in their reliance on complex post-processing and strong prior knowledge. Inspired by the DETR architecture, we propose an end-to-end Transformer-based model, MHFS-FORMER, to resolve these issues. To tackle the interference affecting lane detection in complex scenarios, we designed MHFNet, which fuses multi-scale features with the Transformer Encoder to obtain enhanced multi-scale features that are then fed into the Transformer Decoder. A novel multi-reference deformable attention module is introduced to disperse attention around the objects, enhancing the model’s representation ability during training and better capturing the elongated structure of lanes and the global environment. We also designed ShuffleLaneNet, which meticulously explores the channel and spatial information of multi-scale lane features, significantly improving the accuracy of target recognition. Our method achieves an accuracy of 96.88% and a real-time processing speed of 87 fps on the TuSimple dataset, and an F1 score of 77.38% on the CULane dataset, demonstrating excellent performance compared with both CNN-based and Transformer-based methods.
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)

15 pages, 1106 KB  
Article
End-to-End Lane Detection: A Two-Branch Instance Segmentation Approach
by Ping Wang, Zhe Luo, Yunfei Zha, Yi Zhang and Youming Tang
Electronics 2025, 14(7), 1283; https://doi.org/10.3390/electronics14071283 - 25 Mar 2025
Cited by 5 | Viewed by 2145
Abstract
To address the challenges of lane line recognition failure and insufficient segmentation accuracy in complex autonomous driving scenarios, this paper proposes a dual-branch instance segmentation method that integrates multi-scale modeling and dynamic feature enhancement. By constructing an encoder-decoder architecture and a cross-scale feature fusion network, the method effectively enhances the feature representation capability of multi-scale information through the integration of high-level feature maps (rich in semantic information) and low-level feature maps (retaining spatial localization details), thereby improving the prediction accuracy of lane line morphology and its variations. Additionally, hierarchical dilated convolutions (with dilation rates 1/2/4/8) are employed to achieve exponential expansion of the receptive field, enabling better fusion of multi-scale features. Experimental results demonstrate that the proposed method achieves F1-scores of 76.0% and 96.9% on the CULane and TuSimple datasets, respectively, significantly enhancing the accuracy and reliability of lane detection. This work provides a high-precision, real-time solution for autonomous driving perception in complex environments.
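
The hierarchical dilated convolutions with rates 1/2/4/8 are concrete enough to sketch directly in PyTorch: stacking 3x3 convolutions whose dilation doubles at each level grows the receptive field exponentially while keeping spatial resolution. The channel count and residual wiring below are illustrative choices, not the paper's exact block.

```python
# PyTorch sketch of a hierarchical dilated-convolution block with the quoted
# dilation rates 1/2/4/8; channel count and residual wiring are assumptions.
import torch
import torch.nn as nn

class HierarchicalDilatedBlock(nn.Module):
    def __init__(self, channels=64, rates=(1, 2, 4, 8)):
        super().__init__()
        # Same-padding 3x3 convs whose dilation doubles at each level, so the
        # receptive field grows exponentially with depth.
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )

    def forward(self, x):
        for stage in self.stages:
            x = x + stage(x)  # residual connections keep gradients stable
        return x

feats = HierarchicalDilatedBlock()(torch.randn(1, 64, 72, 200))
print(feats.shape)  # torch.Size([1, 64, 72, 200]) — resolution preserved
```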