Search Results (37)

Search Parameters:
Keywords = lane marking extraction

20 pages, 3248 KiB  
Article
MRNet: A Deep Learning Framework for Drivable Area Detection in Multi-Scenario Unstructured Roads
by Jun Yang, Jiayue Chen, Yan Wang, Shulong Sun, Haizhen Xie, Jianguo Wu and Wei Wang
Electronics 2025, 14(11), 2242; https://doi.org/10.3390/electronics14112242 - 30 May 2025
Viewed by 421
Abstract
In the field of autonomous driving, the accurate identification of drivable areas on roads is the key to ensuring the safe driving of vehicles. However, unstructured roads lack clear lane lines and regular road structures, and they have fuzzy edges and rutting marks, which greatly increase the difficulty of identifying drivable areas. To address the above challenges, this paper proposes a drivable area detection method for unstructured roads based on the MRNet model. To address the problem that unstructured roads lack clear lane lines and regular structures, the model dynamically captures local and global context information based on the self-attention mechanism of a Transformer, and it combines the input of image and LiDAR data to enhance the overall understanding of complex road scenes; to address the problem that detailed features such as fuzzy edges and rutting are difficult to identify, a multi-scale dilated convolution module (MSDM) is proposed to capture detailed information at different scales through multi-scale feature extraction; to address the gradient vanishing problem in feature fusion, a residual upsampling module (ResUp Block) is designed to optimize the spatial resolution recovery process of the feature map, correct errors, and further improve the robustness of the model. Experiments on the ORFD dataset containing unstructured road data show that MRNet outperforms other common methods in the drivable area detection task and achieves good performance in segmentation accuracy and model robustness. In summary, MRNet provides an effective solution for drivable area detection in unstructured road environments, supporting the environmental perception module of autonomous driving systems. Full article
(This article belongs to the Special Issue New Trends in AI-Assisted Computer Vision)
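As context for the multi-scale dilated convolution module (MSDM) mentioned above: dilation widens a kernel's receptive field without adding parameters. The sketch below is a generic 1D illustration of that idea, not MRNet's implementation; the function name and kernel values are illustrative.

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1D convolution with a dilation factor.

    A dilated kernel of size k covers a receptive field of
    (k - 1) * dilation + 1 input samples, which is how multi-scale
    dilated modules capture context at several scales with the same
    parameter count.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0.0
        for j in range(k):
            acc += kernel[j] * signal[start + j * dilation]
        out.append(acc)
    return out

x = [float(i) for i in range(10)]
# The same 3-tap averaging kernel at dilations 1, 2 and 4 sees
# neighbourhoods of width 3, 5 and 9 respectively.
multi_scale = [dilated_conv1d(x, [1 / 3, 1 / 3, 1 / 3], d) for d in (1, 2, 4)]
```

Concatenating such branches is one common way to fuse detail at several scales before decoding.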

17 pages, 3398 KiB  
Article
A Double-Layer LSTM Model Based on Driving Style and Adaptive Grid for Intention-Trajectory Prediction
by Yikun Fan, Wei Zhang, Wenting Zhang, Dejin Zhang and Li He
Sensors 2025, 25(7), 2059; https://doi.org/10.3390/s25072059 - 26 Mar 2025
Viewed by 589
Abstract
In the evolution of autonomous vehicles (AVs), ensuring safety is of the utmost significance. Precise trajectory prediction is indispensable for augmenting vehicle safety and system performance in intricate environments. This study introduces a novel double-layer long short-term memory (LSTM) model to surmount the limitations of conventional prediction methods, which frequently overlook predicted vehicle behavior and interactions. By incorporating driving-style category values and an improved adaptive grid generation method, this model achieves more accurate predictions of vehicle intentions and trajectories. The proposed approach fuses multi-sensor data collected by perception modules to extract vehicle trajectories. By leveraging historical trajectory coordinates and driving style, and by dynamically adjusting grid sizes according to vehicle dimensions and lane markings, this method significantly enhances the representation of vehicle motion features and interactions. The double-layer LSTM module, in conjunction with convolutional layers and a max-pooling layer, effectively extracts temporal and spatial features. Experiments conducted using the Next Generation Simulation (NGSIM) US-101 and I-80 datasets reveal that the proposed model outperforms existing benchmarks, with higher intention accuracy and lower root mean square error (RMSE) over 5 s. The impact of varying sliding window lengths and grid sizes is examined, thereby verifying the model’s stability and effectiveness. Full article
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)
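The RMSE-over-5-s metric reported above is computed roughly as follows. This is a minimal sketch, assuming trajectories are lists of (x, y) positions per time step; the function name and sampling convention are assumptions, not the paper's code.

```python
import math

def rmse_at_horizon(preds, truths, step):
    """Root mean square Euclidean position error at a given future step,
    averaged over a batch of trajectories. NGSIM-style evaluations
    typically report this for horizons up to 5 s (e.g. steps 10..50
    at 10 Hz).

    preds, truths: lists of trajectories; each trajectory is a list of (x, y).
    """
    sq = 0.0
    for p, t in zip(preds, truths):
        dx = p[step][0] - t[step][0]
        dy = p[step][1] - t[step][1]
        sq += dx * dx + dy * dy
    return math.sqrt(sq / len(preds))
```

Reporting one value per horizon shows how the error grows with prediction length.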

30 pages, 16455 KiB  
Article
Automated Detection of Pedestrian and Bicycle Lanes from High-Resolution Aerial Images by Integrating Image Processing and Artificial Intelligence (AI) Techniques
by Richard Boadu Antwi, Prince Lartey Lawson, Michael Kimollo, Eren Erman Ozguven, Ren Moses, Maxim A. Dulebenets and Thobias Sando
ISPRS Int. J. Geo-Inf. 2025, 14(4), 135; https://doi.org/10.3390/ijgi14040135 - 23 Mar 2025
Viewed by 1049
Abstract
The rapid advancement of computer vision technology is transforming how transportation agencies collect roadway characteristics inventory (RCI) data, yielding substantial savings in resources and time. Traditionally, capturing roadway data through image processing was seen as both difficult and error-prone. However, considering the recent improvements in computational power and image recognition techniques, there are now reliable methods to identify and map various roadway elements from multiple imagery sources. Notably, comprehensive geospatial data for pedestrian and bicycle lanes are still lacking across many state and local roadways, including those in the State of Florida, despite the essential role this information plays in optimizing traffic efficiency and reducing crashes. Developing fast, efficient methods to gather these data is essential for transportation agencies, as they also support objectives like identifying outdated or obscured markings, analyzing pedestrian and bicycle lane placements relative to crosswalks, turning lanes, and school zones, and assessing crash patterns in the associated areas. This study introduces an innovative approach using deep neural network models in image processing and computer vision to detect and extract pedestrian and bicycle lane features from very high-resolution aerial imagery, with a focus on public roadways in Florida. Using YOLOv5 and MTRE-based deep learning models, this study extracts and segments bicycle and pedestrian features from high-resolution aerial images, creating a geospatial inventory of these roadway features. Detected features were post-processed and compared with ground truth data to evaluate performance. When tested against ground truth data from Leon County, Florida, the models demonstrated accuracy rates of 73% for pedestrian lanes and 89% for bicycle lanes. This initiative is vital for transportation agencies, enhancing infrastructure management by enabling timely identification of aging or obscured lane markings, which are crucial for maintaining safe transportation networks. Full article
(This article belongs to the Special Issue Spatial Information for Improved Living Spaces)
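Accuracy figures like the 73% / 89% above are typically derived by matching detections to ground truth with an intersection-over-union (IoU) rule; the paper's exact protocol is not shown here, but a standard IoU test for axis-aligned boxes looks like this:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection is commonly counted as correct when its IoU with a
    ground-truth box exceeds a threshold such as 0.5; the specific
    threshold used in the study above is its own choice.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```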

20 pages, 6100 KiB  
Article
Rearview Camera-Based Blind-Spot Detection and Lane Change Assistance System for Autonomous Vehicles
by Yunhee Lee and Manbok Park
Appl. Sci. 2025, 15(1), 419; https://doi.org/10.3390/app15010419 - 4 Jan 2025
Cited by 2 | Viewed by 2290
Abstract
This paper focuses on a method of rearview camera-based blind-spot detection and a lane change assistance system for autonomous vehicles, utilizing a convolutional neural network and lane detection. In this study, we propose a method for providing real-time warnings to autonomous vehicles and drivers regarding collision risks during lane-changing maneuvers. We propose a method for lane detection to delineate the area for blind-spot detection and for measuring time to collision—both utilized to ascertain the vehicle’s location and compensate for vertical vibrations caused by vehicle movement. The lane detection method uses edge detection on an input image to extract lane markings by employing edge pairs consisting of positive and negative edges. Lanes were extracted through third-polynomial fitting of the extracted lane markings, with each lane marking being tracked using the results from the previous frame detections. Using the vanishing point where the two lanes converge, the camera calibration information is updated to compensate for the vertical vibrations caused by vehicle movement. Additionally, the proposed method utilized YOLOv9 for object detection, leveraging lane information to define the region of interest (ROI) and detect small-sized objects. The object detection achieved a precision of 90.2% and a recall of 82.8%. The detected object information was subsequently used to calculate the collision risk. A collision risk assessment was performed for various objects using a three-level collision warning system that adapts to the relative speed of obstacles. The proposed method demonstrated a performance of 11.64 fps with an execution time of 85.87 ms. It provides real-time warnings to both drivers and autonomous vehicles regarding potential collisions with detected objects. Full article
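A minimal sketch of the time-to-collision and three-level warning logic described above. The threshold values and function names are assumptions; the paper adapts warnings to the relative speed of obstacles, which is only crudely mirrored here.

```python
def time_to_collision(rel_distance_m, rel_speed_mps):
    """Time to collision: distance to the object divided by closing speed.
    Returns None when the object is not closing (closing speed <= 0)."""
    if rel_speed_mps <= 0.0:
        return None
    return rel_distance_m / rel_speed_mps

def warning_level(ttc, thresholds=(1.5, 3.0, 5.0)):
    """Map a TTC to a three-level warning (3 = most urgent, 0 = none),
    mirroring the three-level scheme in the abstract; the threshold
    values in seconds are illustrative assumptions."""
    if ttc is None:
        return 0
    for level, limit in zip((3, 2, 1), thresholds):
        if ttc <= limit:
            return level
    return 0
```

Usage: an object 30 m behind and closing at 10 m/s gives a TTC of 3 s, which falls in the middle warning band under these assumed thresholds.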

30 pages, 30480 KiB  
Article
Numerical Investigation of a Novel Type of Rotor Working in a Palisade Configuration
by Łukasz Malicki, Ziemowit Malecha, Błażej Baran and Rafał Juszko
Energies 2024, 17(13), 3093; https://doi.org/10.3390/en17133093 - 23 Jun 2024
Cited by 1 | Viewed by 1360
Abstract
This paper explores an interesting approach to wind energy technology, focusing on a novel type of drag-driven vertical-axis wind turbines (VAWTs). Studied geometries employ rotor-shaped cross-sections, presenting a distinctive approach to harnessing wind energy efficiently. The rotor-shaped cross-section geometries are examined for their aerodynamic efficiency, showcasing the meticulous engineering behind this innovation. The drag-driven turbine shapes are analyzed for their ability to maximize energy extraction in a variety of wind conditions. A significant aspect of these turbines is their adaptability for diverse applications. This article discusses the feasibility and advantages of utilizing these VAWTs in fence configurations, offering an innovative integration of renewable energy generation with physical infrastructure. The scalability of the turbines is highlighted, enabling their deployment as a fence around residential properties or as separators between highway lanes and as energy-generating structures atop buildings. The scientific findings presented in this article contribute valuable insights into the technological advancements of rotor-shaped VAWTs and their potential impact on decentralized wind energy generation. The scalable and versatile nature of these turbines opens up new possibilities for sustainable energy solutions in both urban and residential settings, marking a significant step forward in the field of renewable energy research and technology. In particular, it was shown that among the proposed rotor geometries, the five-blade rotor was characterized by the highest efficiency and, working in a palisade configuration with a spacing of 10 mm to 20 mm, produced higher average values of the torque coefficient than the corresponding Savonius turbine. Full article

27 pages, 12958 KiB  
Article
Turning Features Detection from Aerial Images: Model Development and Application on Florida’s Public Roadways
by Richard Boadu Antwi, Michael Kimollo, Samuel Yaw Takyi, Eren Erman Ozguven, Thobias Sando, Ren Moses and Maxim A. Dulebenets
Smart Cities 2024, 7(3), 1414-1440; https://doi.org/10.3390/smartcities7030059 - 13 Jun 2024
Cited by 4 | Viewed by 2222
Abstract
Advancements in computer vision are rapidly revolutionizing the way traffic agencies gather roadway geometry data, leading to significant savings in both time and money. Utilizing aerial and satellite imagery for data collection proves to be more cost-effective, more accurate, and safer compared to traditional field observations, considering factors such as equipment cost, crew safety, and data collection efficiency. Consequently, there is a pressing need to develop more efficient methodologies for promptly, safely, and economically acquiring roadway geometry data. While image processing has previously been regarded as a time-consuming and error-prone approach for capturing these data, recent developments in computing power and image recognition techniques have opened up new avenues for accurately detecting and mapping various roadway features from a wide range of imagery data sources. This research introduces a novel approach combining image processing with a YOLO-based methodology to detect turning lane pavement markings from high-resolution aerial images, specifically focusing on Florida’s public roadways. Upon comparison with ground truth data from Leon County, Florida, the developed model achieved an average accuracy of 87% at a 25% confidence threshold for detected features. Implementation of the model in Leon County identified approximately 3026 left turn, 1210 right turn, and 200 center lane features automatically. This methodology holds paramount significance for transportation agencies in facilitating tasks such as identifying deteriorated markings, comparing turning lane positions with other roadway features like crosswalks, and analyzing intersection-related accidents. The extracted roadway geometry data can also be seamlessly integrated with crash and traffic data, providing crucial insights for policymakers and road users. Full article

40 pages, 22727 KiB  
Article
Image-Aided LiDAR Extraction, Classification, and Characterization of Lane Markings from Mobile Mapping Data
by Yi-Ting Cheng, Young-Ha Shin, Sang-Yeop Shin, Yerassyl Koshan, Mona Hodaei, Darcy Bullock and Ayman Habib
Remote Sens. 2024, 16(10), 1668; https://doi.org/10.3390/rs16101668 - 8 May 2024
Cited by 4 | Viewed by 2140
Abstract
The documentation of roadway factors (such as roadway geometry, lane marking retroreflectivity/classification, and lane width) through the inventory of lane markings can reduce accidents and facilitate road safety analyses. Typically, lane marking inventory is established using either imagery or Light Detection and Ranging (LiDAR) data collected by mobile mapping systems (MMS). However, it is important to consider the strengths and weaknesses of both camera and LiDAR units when establishing lane marking inventory. Images may be susceptible to weather and lighting conditions, and lane marking might be obstructed by neighboring traffic. They also lack 3D and intensity information, although color information is available. On the other hand, LiDAR data are not affected by adverse weather and lighting conditions, and they have minimal occlusions. Moreover, LiDAR data provide 3D and intensity information. Considering the complementary characteristics of camera and LiDAR units, an image-aided LiDAR framework would be highly advantageous for lane marking inventory. In this context, an image-aided LiDAR framework means that the lane markings generated from one modality (i.e., either an image or LiDAR) are enhanced by those derived from the other one (i.e., either imagery or LiDAR). In addition, a reporting mechanism that can handle multi-modal datasets from different MMS sensors is necessary for the visualization of inventory results. This study proposes an image-aided LiDAR lane marking inventory framework that can handle up to five lanes per driving direction, as well as multiple imaging and LiDAR sensors onboard an MMS. The framework utilizes lane markings extracted from images to improve LiDAR-based extraction. Thereafter, intensity profiles and lane width estimates can be derived using the image-aided LiDAR lane markings. Finally, imagery/LiDAR data, intensity profiles, and lane width estimates can be visualized through a web portal that has been developed in this study. For the performance evaluation of the proposed framework, lane markings obtained through LiDAR-based, image-based, and image-aided LiDAR approaches are compared against manually established ones. The evaluation demonstrates that the proposed framework effectively compensates for the omission errors in the LiDAR-based extraction, as evidenced by an increase in the recall from 87.6% to 91.6%. Full article
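The recall figures above (87.6% rising to 91.6%) come from comparing extracted lane markings against manually established ones. A simplified point-wise version of such an evaluation, with an assumed distance tolerance (the actual matching criterion is the paper's own), might look like:

```python
def precision_recall(extracted, reference, tol=0.15):
    """Point-wise precision/recall of extracted lane-marking points against
    manually digitised reference points, with a distance tolerance in
    metres. A simplified stand-in for the paper's evaluation; the tolerance
    value is an assumption.

    precision: fraction of extracted points near some reference point.
    recall:    fraction of reference points near some extracted point.
    """
    def matched(p, pool):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol * tol
                   for q in pool)

    tp_p = sum(1 for p in extracted if matched(p, reference))
    tp_r = sum(1 for q in reference if matched(q, extracted))
    precision = tp_p / len(extracted) if extracted else 0.0
    recall = tp_r / len(reference) if reference else 0.0
    return precision, recall
```

Omission errors (missed markings) show up as lowered recall, which is the quantity the image-aided step improves.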

16 pages, 4596 KiB  
Article
A Fast and Accurate Lane Detection Method Based on Row Anchor and Transformer Structure
by Yuxuan Chai, Shixian Wang and Zhijia Zhang
Sensors 2024, 24(7), 2116; https://doi.org/10.3390/s24072116 - 26 Mar 2024
Cited by 11 | Viewed by 3187
Abstract
Lane detection plays a pivotal role in the successful implementation of Advanced Driver Assistance Systems (ADASs), which are essential for detecting the road’s lane markings and determining the vehicle’s position, thereby influencing subsequent decision making. However, current deep learning-based lane detection methods encounter challenges. Firstly, on-board hardware limitations demand an exceptionally fast prediction speed from the lane detection method. Secondly, improvements are required for effective lane detection in complex scenarios. This paper addresses these issues by enhancing the row-anchor-based lane detection method. A Transformer encoder–decoder structure is leveraged for row classification, which enhances the model’s capability to extract global features and detect lane lines in intricate environments. The Feature-aligned Pyramid Network (FaPN) structure serves as an auxiliary branch, complemented by a novel structural loss with expectation loss, further refining the method’s accuracy. The experimental results demonstrate our method’s commendable accuracy and real-time performance, achieving a rapid prediction speed of 129 FPS (a single prediction takes 15.72 ms on an RTX 3080) and 96.16% accuracy on the TuSimple dataset, a 3.32% improvement over the baseline method. Full article
(This article belongs to the Section Vehicular Sensing)
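For readers unfamiliar with the row-anchor formulation used above: lane detection is cast as, for each predefined image row, classifying which of C column cells contains the lane, with an extra "no lane" cell. The toy decoder below illustrates that formulation under those assumptions; it is not the paper's prediction head.

```python
def decode_row_anchors(scores, no_lane_index=None):
    """Row-anchor decoding: for each predefined image row, pick the column
    cell with the highest classification score, or None when the 'no lane'
    cell wins. scores is a list of per-row score lists; by convention here
    the last cell is the background/'no lane' class."""
    if no_lane_index is None:
        no_lane_index = len(scores[0]) - 1
    out = []
    for row in scores:
        best = max(range(len(row)), key=row.__getitem__)
        out.append(None if best == no_lane_index else best)
    return out
```

Because each row is a single classification instead of per-pixel segmentation, this formulation is what makes row-anchor methods fast enough for on-board hardware.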

25 pages, 19658 KiB  
Article
Developing a Method to Automatically Extract Road Boundary and Linear Road Markings from a Mobile Mapping System Point Cloud Using Oriented Bounding Box Collision-Detection Techniques
by Seokchan Kang, Jeongwon Lee and Jiyeong Lee
Remote Sens. 2023, 15(19), 4656; https://doi.org/10.3390/rs15194656 - 22 Sep 2023
Cited by 3 | Viewed by 2794
Abstract
Advancements in data-acquisition technology have led to increasing demand for high-precision road data for autonomous driving. Specifically, road boundaries and linear road markings, such as edge and lane markings, provide fundamental guidance for various applications. Unfortunately, their extraction usually requires labor-intensive manual work, and automatic extraction that can be applied universally to diverse curved road types remains a challenge. Given this context, this study proposes a method to automatically extract road boundaries and linear road markings by applying an oriented bounding box (OBB) collision-detection algorithm. The OBBs are generated from a reference line using the point cloud data’s position and intensity values. By applying the OBB collision-detection algorithm and adjusting the search length and width used to detect collisions, road boundaries and linear road markings can be extracted efficiently and accurately on both straight and curved roads. To verify the method, this study assesses horizontal position accuracy using automatically extracted and manually digitized data. The resulting RMSE is 4.8 cm for extracted road boundaries and 5.3 cm for linear road markings, indicating that high-accuracy extraction of road boundaries and road markings is possible. These results demonstrate that automatic extraction, tuned through the OBB detection parameters and built on the OBB collision-detection algorithm, enables efficient and precise extraction of road boundaries and linear road markings on roads with various curve types, which enhances the method’s practicality and simplifies implementation of the extraction process. Full article
(This article belongs to the Special Issue Advanced Remote Sensing Technology in Geodesy, Surveying and Mapping)
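The OBB collision detection named above is commonly implemented with the 2D separating-axis test: two oriented boxes overlap iff their projections overlap on every edge normal. A self-contained sketch of that standard test (the helper names are ours, and the paper's point-cloud specifics are omitted):

```python
import math

def obb_corners(cx, cy, w, h, angle):
    """Corner points of an oriented bounding box centred at (cx, cy),
    with width w, height h, rotated by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    pts = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        pts.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return pts

def obb_collide(a, b):
    """Separating-axis test for two 2D OBBs given as 4-corner lists:
    if any edge normal of either box separates the projections, the
    boxes do not collide; otherwise they overlap."""
    for poly in (a, b):
        for i in range(4):
            ex = poly[(i + 1) % 4][0] - poly[i][0]
            ey = poly[(i + 1) % 4][1] - poly[i][1]
            ax, ay = -ey, ex                  # edge normal = candidate axis
            pa = [p[0] * ax + p[1] * ay for p in a]
            pb = [p[0] * ax + p[1] * ay for p in b]
            if max(pa) < min(pb) or max(pb) < min(pa):
                return False                  # separating axis found
    return True
```

In the extraction setting, search OBBs grown along a reference line are tested against point-cloud cells this way, with the search length and width as the tunable parameters.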

13 pages, 2282 KiB  
Article
Drivable Agricultural Road Region Detection Based on Pixel-Level Segmentation with Contextual Representation Augmentation
by Yefeng Sun, Liang Gong, Wei Zhang, Bishu Gao, Yanming Li and Chengliang Liu
Agriculture 2023, 13(9), 1736; https://doi.org/10.3390/agriculture13091736 - 1 Sep 2023
Cited by 4 | Viewed by 1886
Abstract
Drivable area detection is crucial for the autonomous navigation of agricultural robots. However, semi-structured agricultural roads are generally not marked with lanes and their boundaries are ambiguous, which impedes the accurate segmentation of drivable areas and consequently paralyzes the robots. This paper proposes a deep learning network model for realizing high-resolution segmentation of agricultural roads by leveraging contextual representations to augment road objectness. The backbone adopts HRNet to extract high-resolution road features in parallel at multiple scales. To strengthen the relationship between pixels and their corresponding object regions, we use object-contextual representations (OCR) to augment the feature representations of pixels. Finally, a differentiable binarization (DB) decision head performs threshold-adaptive segmentation of road boundaries. To quantify the performance of our method, we conducted experiments on an agricultural semi-structured road dataset. The experimental results show that the mIoU reaches 97.85% and the Boundary IoU achieves 90.88%. Both the segmentation accuracy and the boundary quality outperform existing methods, which shows that segmentation networks tailored with contextual representations are beneficial for improving the detection accuracy of semi-structured drivable areas in agricultural scenes. Full article
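The differentiable binarization (DB) head mentioned above replaces the hard cut b = [p > t] with a steep sigmoid so that the threshold can be learned. A one-line sketch: k = 50 follows the original DB paper, and the scalar form here is a simplification of the per-pixel probability and threshold maps.

```python
import math

def differentiable_binarization(prob, thresh, k=50.0):
    """Differentiable binarization: b = 1 / (1 + exp(-k * (p - t))).
    Unlike a hard threshold, this is differentiable in both p and t,
    so a per-pixel threshold map t can be trained end to end. k is the
    amplification factor (50 in the original DB paper)."""
    return 1.0 / (1.0 + math.exp(-k * (prob - thresh)))
```

With k large, the output is effectively binary away from the threshold while still providing gradients near it, which is what makes the boundary segmentation threshold-adaptive.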

17 pages, 4453 KiB  
Article
Interactive Attention Learning on Detection of Lane and Lane Marking on the Road by Monocular Camera Image
by Wei Tian, Xianwang Yu and Haohao Hu
Sensors 2023, 23(14), 6545; https://doi.org/10.3390/s23146545 - 20 Jul 2023
Cited by 4 | Viewed by 3366
Abstract
Vision-based identification of lane area and lane marking on the road is an indispensable function for intelligent driving vehicles, especially for localization, mapping and planning tasks. However, due to the increasing complexity of traffic scenes, such as occlusion and discontinuity, detecting lanes and lane markings from an image captured by a monocular camera becomes persistently challenging. The lanes and lane markings have a strong position correlation and are constrained by a spatial geometry prior to the driving scene. Most existing studies only explore a single task, i.e., either lane marking or lane detection, and do not consider the inherent connection or exploit the modeling of this kind of relationship between both elements to improve the detection performance of both tasks. In this paper, we establish a novel multi-task encoder–decoder framework for the simultaneous detection of lanes and lane markings. This approach deploys a dual-branch architecture to extract image information from different scales. By revealing the spatial constraints between lanes and lane markings, we propose an interactive attention learning for their feature information, which involves a Deformable Feature Fusion module for feature encoding, a Cross-Context module as information decoder, a Cross-IoU loss and a Focal-style loss weighting for robust training. Without bells and whistles, our method achieves state-of-the-art results on tasks of lane marking detection (with 32.53% on IoU, 81.61% on accuracy) and lane segmentation (with 91.72% on mIoU) of the BDD100K dataset, which showcases an improvement of 6.33% on IoU, 11.11% on accuracy in lane marking detection and 0.22% on mIoU in lane detection compared to the previous methods. Full article
(This article belongs to the Section Vehicular Sensing)

14 pages, 3464 KiB  
Article
Image Data Extraction and Driving Behavior Analysis Based on Geographic Information and Driving Data
by Huei-Yung Lin, Jun-Zhi Zhang and Chin-Chen Chang
Electronics 2023, 12(13), 2989; https://doi.org/10.3390/electronics12132989 - 7 Jul 2023
Cited by 3 | Viewed by 2066
Abstract
Driving behavior analysis has become crucial for traffic safety. In addition, more abundant driving data are needed to analyze driving behavior more comprehensively and thus improve traffic safety. This paper proposes an approach to image data extraction and driving behavior analysis that uses geographic information and driving data. Information derived from geographic and global positioning systems was used for image data extraction. In addition, we used an onboard diagnostic II and a controller area network bus logger to record driving data for driving behavior analysis. Driving behavior was analyzed using sparse automatic encoders and data exploration to detect abnormal and aggressive behavior. A regression analysis was performed to derive the relationship between aggressive driving behavior and road facilities. The results indicated that lane ratios, no lane markings, and straight lane markings are important features that affect aggressive driving behaviors. Several traffic improvements were proposed for specific intersections and roads to make drivers and pedestrians safer. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 3rd Edition)
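The abstract above detects abnormal behavior with sparse autoencoders. A common criterion, used here purely as an illustration and not as the paper's exact rule, is to flag samples whose reconstruction error is a statistical outlier:

```python
def flag_aggressive(errors, z=3.0):
    """Flag samples whose autoencoder reconstruction error lies more than
    z standard deviations above the mean error. A generic reconstruction-
    error anomaly rule; the z value and the rule itself are assumptions
    standing in for the paper's analysis."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    std = var ** 0.5
    return [e > mean + z * std for e in errors]
```

The intuition: an autoencoder trained mostly on normal driving reconstructs normal samples well, so unusually large reconstruction errors indicate abnormal or aggressive maneuvers.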

19 pages, 5966 KiB  
Article
Lane Line Detection and Object Scene Segmentation Using Otsu Thresholding and the Fast Hough Transform for Intelligent Vehicles in Complex Road Conditions
by Muhammad Awais Javeed, Muhammad Arslan Ghaffar, Muhammad Awais Ashraf, Nimra Zubair, Ahmed Sayed M. Metwally, Elsayed M. Tag-Eldin, Patrizia Bocchetta, Muhammad Sufyan Javed and Xingfang Jiang
Electronics 2023, 12(5), 1079; https://doi.org/10.3390/electronics12051079 - 21 Feb 2023
Cited by 25 | Viewed by 6059
Abstract
An Otsu-threshold- and Canny-edge-detection-based fast Hough transform (FHT) approach to lane detection was proposed to improve the accuracy of lane detection for autonomous vehicle driving. During the last two decades, autonomous vehicles have become very popular, and it is constructive to avoid traffic accidents due to human mistakes. The new generation needs automatic vehicle intelligence. One of the essential functions of a cutting-edge automobile system is lane detection. This study recommended the idea of lane detection through improved (extended) Canny edge detection using a fast Hough transform. The Gaussian blur filter was used to smooth out the image and reduce noise, which could help to improve the edge detection accuracy. An edge detection operator known as the Sobel operator calculated the gradient of the image intensity to identify edges in an image using a convolutional kernel. These techniques were applied in the initial lane detection module to enhance the characteristics of the road lanes, making it easier to detect them in the image. The Hough transform was then used to identify the routes based on the mathematical relationship between the lanes and the vehicle. It did this by converting the image into a polar coordinate system and looking for lines within a specific range of contrasting points. This allowed the algorithm to distinguish between the lanes and other features in the image. After this, the Hough transform was used for lane detection, making it possible to distinguish between left and right lane marking detection extraction; the region of interest (ROI) must be extracted for traditional approaches to work effectively and easily. The proposed methodology was tested on several image sequences. The least-squares fitting in this region was then used to track the lane. 
In experiments, the proposed system achieved a high lane detection rate, showing that the method performed well in both inference speed and identification accuracy; it balanced accuracy with real-time processing and could satisfy the requirements of lane recognition for lightweight automated driving systems. Full article
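The pipeline described in the abstract (smoothing, gradient-based edges, then a Hough vote in polar coordinates, and finally a least-squares lane fit) hinges on the Hough voting step. Below is a minimal NumPy sketch of that voting step on a synthetic edge map; it is an illustration of the standard Hough transform, not the paper's FHT implementation, and the function name and bin resolutions are our own choices.

```python
import numpy as np

def hough_lines(edges, theta_bins=180):
    """Vote every edge pixel into a (rho, theta) accumulator.
    Peaks correspond to lines rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(0, 180, 180 / theta_bins))
    rhos = np.arange(-diag, diag + 1)              # 1-pixel rho resolution
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)
        acc[np.round(r + diag).astype(int), np.arange(len(thetas))] += 1
    return acc, rhos, thetas

# Synthetic edge map with a single vertical "lane boundary" at x = 20.
edges = np.zeros((50, 50), dtype=np.uint8)
edges[:, 20] = 1
acc, rhos, thetas = hough_lines(edges)
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
# Strongest line: theta = 0 (vertical), rho = 20, supported by all 50 pixels.
```

In a full pipeline, the edge map would come from the Gaussian-blur/Sobel/Canny stages restricted to a road ROI, and the peak (rho, theta) pairs would seed the least-squares fit (e.g., `np.polyfit` over each line's supporting pixels) used to track the lane.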

16 pages, 5528 KiB  
Article
Frequent and Automatic Update of Lane-Level HD Maps with a Large Amount of Crowdsourced Data Acquired from Buses and Taxis in Seoul
by Minwoo Cho, Kitae Kim, Soohyun Cho, Seung-Mo Cho and Woojin Chung
Sensors 2023, 23(1), 438; https://doi.org/10.3390/s23010438 - 31 Dec 2022
Cited by 4 | Viewed by 4070
Abstract
Recently, HD maps have become important parts of autonomous driving, from localization to perception and path planning. For HD maps to be practical, environmental changes must be updated in them regularly. Conventional approaches require expensive mobile mapping systems and [...] Read more.
Recently, HD maps have become important parts of autonomous driving, from localization to perception and path planning. For HD maps to be practical, environmental changes must be updated in them regularly. Conventional approaches require expensive mobile mapping systems and considerable manual work by experts, making frequent map updates difficult to achieve. In this paper, we show how frequent and automatic updates of lane markings in HD maps are made possible with enormous amounts of crowdsourced data. The crowdsourced data are acquired from low-cost onboard sensing devices installed on many city buses and taxis in Seoul, South Korea, and a large amount of data is accumulated on the server daily. Because of the limited performance of the low-cost devices, the quality of the sensor measurements is not very high; the technical challenge is therefore to overcome the uncertainty of the crowdsourced data, and appropriately filtering out the large amount of low-quality data is a significant problem. The proposed HD map update strategy comprises several processing steps, including pose correction, observation assignment, observation clustering, and landmark classification. The strategy is experimentally verified using crowdsourced data: when changed environments are successfully extracted, precisely updated HD maps are generated. Full article
(This article belongs to the Section Vehicular Sensing)
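The filtering-by-agreement idea behind the observation clustering step can be illustrated with a toy sketch: repeated noisy observations of the same lane-marking point are merged, and clusters with too few supporting observations are discarded as low-quality data. The greedy radius clustering below is our own simplification for illustration, not the authors' algorithm, and all names and thresholds are assumptions.

```python
import numpy as np

def cluster_observations(points, radius=0.5, min_votes=5):
    """Greedy radius clustering: merge observations that fall within
    `radius` of a seed point, and keep a landmark only when enough
    independent observations agree (filters out low-quality data)."""
    points = np.asarray(points, dtype=float)
    unassigned = np.ones(len(points), dtype=bool)
    landmarks = []
    for i in range(len(points)):
        if not unassigned[i]:
            continue
        dist = np.linalg.norm(points - points[i], axis=1)
        members = unassigned & (dist <= radius)
        unassigned[members] = False
        if members.sum() >= min_votes:     # enough votes -> trusted landmark
            landmarks.append(points[members].mean(axis=0))
    return np.array(landmarks)

# 20 noisy observations of one lane-marking point, plus two spurious detections.
rng = np.random.default_rng(0)
obs = np.array([5.0, 2.0]) + 0.05 * rng.standard_normal((20, 2))
spurious = np.array([[50.0, 50.0], [-30.0, 10.0]])
landmarks = cluster_observations(np.vstack([obs, spurious]))
# One surviving landmark near (5, 2); the spurious detections lack votes.
```

Averaging over many independent low-quality observations is what lets cheap sensors on buses and taxis approach the precision that a single expensive mobile mapping system would provide.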

28 pages, 14914 KiB  
Article
Generalized LiDAR Intensity Normalization and Its Positive Impact on Geometric and Learning-Based Lane Marking Detection
by Yi-Ting Cheng, Yi-Chun Lin and Ayman Habib
Remote Sens. 2022, 14(17), 4393; https://doi.org/10.3390/rs14174393 - 3 Sep 2022
Cited by 16 | Viewed by 4817
Abstract
Light Detection and Ranging (LiDAR) data collected by mobile mapping systems (MMS) have been utilized to detect lane markings through intensity-based approaches. As LiDAR data continue to be used for lane marking extraction, greater emphasis is being placed on enhancing the utility of [...] Read more.
Light Detection and Ranging (LiDAR) data collected by mobile mapping systems (MMS) have been utilized to detect lane markings through intensity-based approaches. As LiDAR data continue to be used for lane marking extraction, greater emphasis is being placed on enhancing the utility of the intensity values. Typically, intensity correction/normalization approaches are conducted prior to lane marking extraction. The goal of intensity correction is to adjust the intensity values of a LiDAR unit using geometric scanning parameters (i.e., range or incidence angle). Intensity normalization aims at adjusting the intensity readings of a LiDAR unit based on the assumption that intensity values across laser beams/LiDAR units/MMS should be similar for the same object. As MMS technology develops, correcting/normalizing intensity values across different LiDAR units on the same system and/or different MMS is necessary for lane marking extraction. This study proposes a generalized correction/normalization approach for handling single-beam/multi-beam LiDAR scanners onboard single or multiple MMS. The generalized approach is developed while considering the intensity values of asphalt and concrete pavement. For a performance evaluation of the proposed approach, geometric/morphological and deep/transfer-learning-based lane marking extraction with and without intensity correction/normalization is conducted. The evaluation shows that the proposed approach improves the performance of lane marking extraction (e.g., the F1-score of a U-net model can improve from 0.1% to 86.2%). Full article
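The paper develops a generalized correction/normalization across beams, units, and systems. As a simplified, single-sensor illustration of why the geometric scanning parameters (range and incidence angle) matter, the sketch below applies a common inverse-square/Lambertian correction; the sensor model and function are our own illustration, not the authors' method.

```python
import numpy as np

def normalize_intensity(raw, range_m, incidence_rad, ref_range=10.0):
    """Undo the 1/R^2 range falloff and the Lambertian cos(angle)
    attenuation, referenced to a nominal range so corrected values
    stay on a familiar scale."""
    return raw * (range_m / ref_range) ** 2 / np.cos(incidence_rad)

# Simulate one pavement patch (reflectance 0.6) scanned at three different
# ranges and incidence angles: raw readings differ, corrected ones agree.
rho = 0.6
ranges = np.array([8.0, 12.0, 20.0])
angles = np.deg2rad([10.0, 30.0, 55.0])
raw = rho * np.cos(angles) * (10.0 / ranges) ** 2   # assumed sensor model
corrected = normalize_intensity(raw, ranges, angles)
```

After such a correction, the same lane paint returns similar intensity regardless of scanning geometry, which is what makes a single intensity threshold (or a learned model) transfer across beams and drives.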
