
Intelligent Perception for Autonomous Driving in Specific Areas

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 August 2023) | Viewed by 5171

Special Issue Editors

Guest Editor
School of Transportation Science and Engineering, Beihang University, Beijing 100191, China
Interests: vehicle intelligent perception and control

Guest Editor
Department of Civil and Environmental Engineering, University of California, Los Angeles, CA 90095, USA
Interests: connected and automated vehicles; intelligent transportation systems

Guest Editor
Research Institute for Frontier Science, Beihang University, Beijing 100191, China
Interests: intelligent vehicle and computer vision

Special Issue Information

Dear Colleagues,

Specific areas, such as mines, ports, and railways, are expected to be among the first large-scale application scenarios for autonomous driving. Among the key technologies of autonomous driving, intelligent perception is fundamental: it allows autonomous vehicles to perceive and understand the driving environment. With the development of neural networks and intelligent sensors, a series of research achievements and applications have been made in the intelligent perception of urban road environments. By comparison, the scenarios in specific areas are more challenging due to their irregular geometric features, complex traffic interactions, and harsh weather conditions, all of which pose great challenges to the perception systems of autonomous vehicles. This Special Issue will publish papers that reflect research and innovation in intelligent perception for autonomous driving in specific areas, including but not limited to the following main topics:

  • lane line/drivable area detection for autonomous vehicles;
  • object/obstacle detection and tracking in specific areas;
  • scene segmentation/understanding for autonomous vehicles;
  • object/obstacle classification, detection, and tracking in fog/rain/snow;
  • roadside intelligent perception;
  • vehicle–road collaborative perception;
  • vehicle–vehicle collaborative perception;
  • scene reconstruction and understanding;
  • high-precision positioning;
  • depth completion of perception data;
  • data enhancement.

Dr. Guizhen Yu 
Prof. Dr. Lisheng Jin
Dr. Jiaqi Ma
Dr. Zhangyu Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent perception
  • specific areas
  • collaborative perception
  • perception in harsh environment
  • mapping
  • positioning

Published Papers (2 papers)


Research

20 pages, 9752 KiB  
Article
VV-YOLO: A Vehicle View Object Detection Model Based on Improved YOLOv4
by Yinan Wang, Yingzhou Guan, Hanxu Liu, Lisheng Jin, Xinwei Li, Baicang Guo and Zhe Zhang
Sensors 2023, 23(7), 3385; https://doi.org/10.3390/s23073385 - 23 Mar 2023
Cited by 2 | Viewed by 2447
Abstract
Vehicle view object detection technology is key to the environment perception modules of autonomous vehicles and is crucial for driving safety. In view of the characteristics of complex scenes, such as dim light, occlusion, and long distance, an improved YOLOv4-based vehicle view object detection model, VV-YOLO, is proposed in this paper. The VV-YOLO model adopts an anchor-box-based detection scheme. For anchor box clustering, an improved K-means++ algorithm is used to reduce the instability of clustering results caused by the random selection of initial cluster centers, so that the model obtains reasonable initial anchor boxes. Firstly, a CA-PAN network was designed by adding a coordinate attention mechanism to the neck network of the VV-YOLO model, realizing multidimensional modeling of the relationships between image feature channels and improving the extraction of complex image features. Secondly, to ensure sufficient model training, the loss function of the VV-YOLO model was reconstructed based on the focal loss, which alleviates the training imbalance caused by the uneven distribution of the training data. Finally, the KITTI dataset was selected as the test set for a quantitative evaluation. The results showed that the precision and average precision of the VV-YOLO model were 90.68% and 80.01%, respectively, which were 6.88% and 3.44% higher than those of the YOLOv4 model, and the model's computation time on the same hardware platform did not increase significantly. In addition to the KITTI dataset, we also selected the BDD100K dataset and typical complex traffic scene data collected in the field for a visual comparison of the results, which verified the validity and robustness of the VV-YOLO model.
(This article belongs to the Special Issue Intelligent Perception for Autonomous Driving in Specific Areas)
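
To make the anchor clustering step in the abstract concrete, here is a minimal, hypothetical sketch of K-means++-seeded anchor clustering using the 1 − IoU distance that is common practice for YOLO-family detectors. It is not the authors' implementation: the seeding rule, the IoU-based distance, and all function names here are assumptions layered on standard practice, not details taken from the paper.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) boxes and (w, h) anchors, both aligned at the origin."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) \
          * np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union                                    # shape (N, K)

def kmeanspp_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster ground-truth (w, h) pairs into k anchor boxes.

    K-means++ seeding picks each new center with probability proportional
    to the squared (1 - IoU) distance to the nearest existing center,
    which avoids the instability of purely random initialization.
    """
    rng = np.random.default_rng(seed)
    centers = [boxes[rng.integers(len(boxes))]]
    while len(centers) < k:
        d = 1.0 - iou_wh(boxes, np.asarray(centers)).max(axis=1)
        centers.append(boxes[rng.choice(len(boxes), p=d**2 / (d**2).sum())])
    centers = np.asarray(centers)
    for _ in range(iters):                                  # Lloyd iterations with IoU distance
        assign = iou_wh(boxes, centers).argmax(axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]        # sorted by box area

# Usage (hypothetical): anchors = kmeanspp_anchors(np.array(label_wh_pairs), k=9)
```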

20 pages, 5466 KiB  
Article
Real-Time Trajectory Prediction Method for Intelligent Connected Vehicles in Urban Intersection Scenarios
by Pangwei Wang, Hongsheng Yu, Cheng Liu, Yunfeng Wang and Rongsheng Ye
Sensors 2023, 23(6), 2950; https://doi.org/10.3390/s23062950 - 08 Mar 2023
Cited by 4 | Viewed by 2211
Abstract
Intelligent connected vehicles (ICVs) play an important role in improving the intelligence of transportation systems, and improving the trajectory prediction capability of ICVs benefits both traffic efficiency and safety. In this paper, a real-time trajectory prediction method based on vehicle-to-everything (V2X) communication is proposed to improve the trajectory prediction accuracy of ICVs. Firstly, a Gaussian mixture probability hypothesis density (GM-PHD) model is applied to construct a multidimensional dataset of ICV states. Secondly, the higher-dimensional microscopic vehicle data output by the GM-PHD model are adopted as the input of a long short-term memory (LSTM) network to ensure the consistency of the prediction results. Then, a signal light factor and the Q-learning algorithm were applied to improve the LSTM model, adding features in the spatial dimension to complement the temporal features used by the LSTM; compared with previous models, more consideration is thus given to the dynamic spatial environment. Finally, an intersection at Fushi Road in Shijingshan District, Beijing, was selected as the field test scenario. The experimental results show that the GM-PHD model achieved an average error of 0.1181 m, a 44.05% reduction compared to the LiDAR-based model, while the error of the proposed model can reach 0.501 m. Compared to the Social LSTM model, the prediction error was reduced by 29.43% under the average displacement error (ADE) metric. The proposed method can provide data support and an effective theoretical basis for decision systems to improve traffic safety.
(This article belongs to the Special Issue Intelligent Perception for Autonomous Driving in Specific Areas)
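
For reference, the average displacement error (ADE) metric cited above is conventionally defined as the mean Euclidean distance between predicted and ground-truth positions over the prediction horizon. The sketch below implements that common definition; it is a generic illustration, not the authors' evaluation code.

```python
import numpy as np

def average_displacement_error(pred, truth):
    """ADE: mean Euclidean distance between predicted and ground-truth
    (x, y) positions, averaged over timesteps (and trajectories, if batched).

    pred, truth: arrays of shape (..., T, 2).
    """
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.linalg.norm(pred - truth, axis=-1).mean())

# Toy check: a prediction with a constant 0.5 m lateral offset has ADE = 0.5.
truth = np.stack([np.arange(5.0), np.zeros(5)], axis=-1)   # straight-line track, shape (5, 2)
pred = truth + np.array([0.0, 0.5])
assert abs(average_displacement_error(pred, truth) - 0.5) < 1e-9
```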
