Advancements in Sensing and Perception for Autonomous Vehicles in Adverse Environmental Conditions

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electrical and Autonomous Vehicles".

Deadline for manuscript submissions: closed (15 March 2025)

Special Issue Editor

Department of Psychology, Stanford University, Stanford, CA 94305, USA
Interests: autonomous vehicles; imaging systems; autonomous driving; computer graphics

Special Issue Information

Dear Colleagues,

The evolution of autonomous vehicles is inextricably linked to the sophistication of their perception systems. These systems, which represent a fusion of advanced sensor technologies and computational algorithms, serve as the cornerstone for autonomous path planning and control. They utilize a wide array of sensors, such as cameras, LiDAR, RADAR, and ultrasonic sensors, integrated with state-of-the-art signal processing algorithms and machine learning frameworks. The primary challenge lies in accurately and efficiently processing and interpreting sensor data, which is essential for real-time path planning and control in varying operational environments.

Specifically, a significant and persistent challenge in this field lies in the performance of perception systems under adverse weather conditions, such as heavy rain, fog, snow, and varying light intensities. These conditions can severely degrade sensor performance, reducing visibility and data fidelity. Moreover, the challenge includes ensuring data integrity and reliability in such environments, which necessitates robust sensor fusion techniques and adaptive algorithms capable of compensating for environmental variability.
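
As a concrete illustration of this kind of adaptive fusion, the minimal Python sketch below combines a camera range estimate with a RADAR range estimate by inverse-variance weighting, inflating the camera's uncertainty as visibility drops. The sensor noise figures and the fog-attenuation model are invented for the example; this is a sketch of the general idea, not a reference implementation.

```python
import numpy as np

def fuse_range_estimates(camera_range_m, radar_range_m,
                         camera_sigma_m=0.5, radar_sigma_m=1.0,
                         visibility_m=10_000.0):
    """Fuse two range estimates with inverse-variance weighting.

    Illustrative only: the camera's standard deviation is inflated as
    meteorological visibility drops, mimicking reduced fidelity in fog or rain.
    """
    # Assumed degradation model: camera noise grows once visibility falls below ~1 km.
    degradation = max(1.0, 1000.0 / max(visibility_m, 1.0))
    cam_var = (camera_sigma_m * degradation) ** 2
    rad_var = radar_sigma_m ** 2  # RADAR treated as largely weather-insensitive here

    w_cam = (1.0 / cam_var) / (1.0 / cam_var + 1.0 / rad_var)
    fused = w_cam * camera_range_m + (1.0 - w_cam) * radar_range_m
    fused_sigma = np.sqrt(1.0 / (1.0 / cam_var + 1.0 / rad_var))
    return fused, fused_sigma

# Clear weather: the camera dominates; 80 m visibility fog: the RADAR dominates.
print(fuse_range_estimates(42.0, 43.5, visibility_m=10_000.0))
print(fuse_range_estimates(42.0, 43.5, visibility_m=80.0))
```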

Topics of Interest:

  1. Enhanced Sensor Technologies: Research that investigates novel sensor designs, particularly those that enhance operational resilience and accuracy in adverse weather conditions, is crucial. This includes advancements in sensor data restoration, spectral sensitivity, and noise reduction techniques, via both traditional signal processing and machine learning-based algorithms (a minimal restoration sketch is given after this list).
  2. AI and Deep Learning in Data Interpretation: Research that focuses on the application of advanced deep learning models for improved object detection, scene understanding, and decision-making processes in complex and dynamic environments.
  3. Sensor Fusion and Data Integration: Research that focuses on the methodologies and frameworks for effective sensor fusion, aiming to create a comprehensive and resilient perception mechanism that mitigates the limitations of individual sensors, particularly in challenging weather conditions.
  4. Real-Time Data Processing Architectures: Research that explores innovative data processing architectures and algorithms that enhance the real-time capabilities of autonomous vehicles for fast response, focusing on computational efficiency, latency reduction, and energy optimization.
  5. Simulation, Validation, and Robustness Testing: Contributions that present novel approaches to simulating and validating perception systems under varied and adverse conditions, ensuring system robustness and reliability.
  6. End-to-End System Optimization: Research on optimizing the entire perception system of autonomous vehicles for efficiency, accuracy, and reliability. Contributions may include topics like sensor parameter optimization, system-level optimization, and the balance between hardware capabilities and software demands.
  7. Case Studies and Applications: We also welcome the submission of case studies and practical applications demonstrating the real-world implementation of these technologies in autonomous vehicles.
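
As a worked illustration of the restoration techniques in topic 1, the sketch below applies a bare-bones dark channel prior dehazing step. The patch size, omega, and the synthetic input frame are illustrative assumptions, and a practical pipeline would refine the transmission map (e.g., with a guided filter); this is a sketch of one classical technique, not a prescribed method for submissions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Single-image dehazing via the dark channel prior.

    `img` is an RGB float array in [0, 1]. Bare-bones version without
    transmission refinement, meant only to illustrate the idea.
    """
    # Dark channel: per-pixel minimum over color channels, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)

    # Estimate atmospheric light from the brightest 0.1% of dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    atmosphere = img[idx].max(axis=0)

    # Transmission estimate and scene radiance recovery.
    norm_dark = minimum_filter((img / atmosphere).min(axis=2), size=patch)
    transmission = np.clip(1.0 - omega * norm_dark, t0, 1.0)
    return np.clip((img - atmosphere) / transmission[..., None] + atmosphere, 0.0, 1.0)

# Example call on a synthetic hazy frame (random data, just to show the signature).
hazy = np.clip(np.random.rand(120, 160, 3) * 0.5 + 0.4, 0, 1)
restored = dehaze_dark_channel(hazy)
```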

We invite you to contribute articles, perspectives, and reviews addressing the topics above, offering both theoretical insights and practical solutions to the challenges of optimizing perception systems for autonomous vehicles. This includes advancements in both hardware and algorithms, with a special focus on performance in adverse weather conditions. Your research and expertise are invaluable and will significantly contribute to advancing this critical field within automotive technology.

Dr. Zhenyi Liu
Guest Editor

Technical Program Committee Member

Name: Dr. Chuxi Yang
Email: yangcx@dlut.edu.cn
Affiliation: School of Control Science and Engineering, Dalian University of Technology, Dalian 116023, China
Research Interests: computational imaging simulation; computer vision; digital image processing

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous driving
  • sensing technology
  • simulation and validation
  • real-time data processing
  • sensor fusion
  • system integration
  • end-to-end optimization
  • sensor data restoration

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

21 pages, 5674 KiB  
Article
Augmented Reality Head-Up Display Navigation Design in Extreme Weather Conditions: Enhancing Driving Experience in Rain and Fog
by Qi Zhu and Ziqi Liu
Electronics 2025, 14(9), 1745; https://doi.org/10.3390/electronics14091745 - 25 Apr 2025
Abstract
This study investigates the impact of extreme weather conditions (specifically heavy rain and fog) on drivers’ situational awareness by analyzing variations in illumination levels. The primary objective is to identify optimal color wavelengths for low-light environments, thereby providing a theoretical foundation for the design of augmented reality head-up displays (AR-HUDs) in adverse weather conditions. A within-subjects experimental design was employed with 26 participants in a simulated driving environment. Participants were exposed to different illumination levels and AR-HUD colors. Eye-tracking metrics, including fixation duration, visit duration, and fixation count, were recorded alongside situational awareness ratings to assess cognitive load and information processing efficiency. The results revealed that the yellow AR-HUD significantly enhanced situational awareness and reduced cognitive load in foggy conditions. While subjective assessments indicated no substantial effect of lighting conditions, objective measurements demonstrated the superior effectiveness of the yellow AR-HUD under foggy weather. These findings suggest that yellow AR-HUD navigation icons are more suitable for extreme weather environments, offering potential improvements in driving performance and overall road safety.

20 pages, 6577 KiB  
Article
Deep Learning-Based Train Obstacle Detection Technology: Application and Testing in Metros
by Fei Yan, Yiran Gu and Yunlai Sun
Electronics 2025, 14(7), 1318; https://doi.org/10.3390/electronics14071318 - 26 Mar 2025
Abstract
With the rapid development of urban rail transit, unmanned train driving technology is also advancing rapidly. Automatic obstacle detection is particularly crucial and plays a vital role in ensuring train operation safety. This paper focuses on train obstacle detection technology and testing methods. First, we review existing obstacle detection systems and their testing methods, analyzing their technical principles, application status, advantages, and limitations. In the experimental section, the Intelligent Train Eye (ITE) system is used as a case study. Black-box testing is conducted in the level high-precision (LH) mode, with corresponding test cases designed based on various scenarios that may arise during train operations. White-box testing is performed in the level exploration (LE) mode, where the test results are meticulously recorded and analyzed. The test cases in different modes comprehensively cover the testing requirements for train operations. The results indicate that the ITE system successfully passes most of the test cases and meets the primary functional requirements.
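
For readers unfamiliar with scenario-driven black-box testing, the sketch below shows one way such test cases can be organized. The `detect_obstacles` stub and the scenarios are hypothetical stand-ins, since the ITE system's actual interfaces are not reproduced here.

```python
import pytest

# Hypothetical stand-in for the system under test: the ITE system's real
# interface is not reproduced here, so a dict-backed stub plays its role.
# In practice, the stub would be replaced by a call into the deployed system.
_FAKE_SYSTEM_OUTPUT = {
    "straight_track_clear": False,        # no obstacle, no alarm expected
    "straight_track_person_50m": True,    # person within braking distance
    "curved_track_debris_120m": True,     # debris ahead on a curve
    "tunnel_low_light_clear": False,      # low light but clear track
}

def detect_obstacles(scene: str) -> bool:
    """Placeholder for a call into the obstacle detection system."""
    return _FAKE_SYSTEM_OUTPUT[scene]

# Scenario-driven black-box cases: each operating situation is paired with the
# expected alarm decision, independent of how the detector works internally.
@pytest.mark.parametrize("scene, expect_alarm", sorted(_FAKE_SYSTEM_OUTPUT.items()))
def test_obstacle_alarm(scene, expect_alarm):
    assert detect_obstacles(scene) == expect_alarm
```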

18 pages, 970 KiB  
Article
Enhancing Federated Learning in Heterogeneous Internet of Vehicles: A Collaborative Training Approach
by Chao Wu, Hailong Fan, Kan Wang and Puning Zhang
Electronics 2024, 13(20), 3999; https://doi.org/10.3390/electronics13203999 - 11 Oct 2024
Cited by 1
Abstract
The current Internet of Vehicles (IoV) faces significant challenges related to resource heterogeneity, which adversely impacts the convergence speed and accuracy of federated learning models. Existing studies have not adequately addressed the problem of resource-constrained vehicles that slow down the federated learning process, particularly under conditions of high mobility. To tackle this issue, we propose a model partition collaborative training mechanism that decomposes training tasks for resource-constrained vehicles while retaining the original data locally. By offloading complex computational tasks to nearby service vehicles, this approach effectively accelerates the slow training speed of resource-limited vehicles. Additionally, we introduce an optimal matching method for collaborative service vehicles. By analyzing common paths and time delays, we match service vehicles with similar routes and superior performance within mobile service vehicle clusters to provide effective collaborative training services. This method maximizes training efficiency and mitigates the negative effects of vehicle mobility on collaborative training. Simulation experiments demonstrate that, compared to benchmark methods, our approach reduces the impact of mobility on collaboration and achieves substantial improvements in the training speed and convergence time of federated learning.
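
As a generic illustration of partition-based collaborative training (a toy split-learning-style sketch, not the mechanism proposed in the paper), the code below keeps the raw data and the light front layers with the resource-constrained vehicle and offloads the heavier layers; both halves run in one process here only to keep the example self-contained.

```python
import torch
import torch.nn as nn

# Toy partition of a model: the resource-constrained vehicle keeps its raw data
# and runs only the light front layers; the heavier layers would be offloaded
# to a nearby service vehicle. Layer sizes and data are invented for the example.
front = nn.Sequential(nn.Linear(20, 32), nn.ReLU())                    # on the vehicle
back = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))   # offloaded part

opt = torch.optim.SGD(list(front.parameters()) + list(back.parameters()), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 20)                 # raw sensor features never leave the vehicle
y = torch.randint(0, 2, (64,))

for step in range(5):
    opt.zero_grad()
    smashed = front(x)                  # only this intermediate tensor would be sent
    logits = back(smashed)              # computed by the service vehicle
    loss = loss_fn(logits, y)
    loss.backward()                     # gradients flow back across the cut layer
    opt.step()
    print(f"step {step}: loss={loss.item():.3f}")
```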

27 pages, 18959 KiB  
Article
Context Awareness Assisted Integration System for Land Vehicles
by Xiaoyu Li, Xiye Guo, Kai Liu, Zhijun Meng, Guokai Chen, Yuqiu Tang and Jun Yang
Electronics 2024, 13(11), 2038; https://doi.org/10.3390/electronics13112038 - 23 May 2024
Abstract
Accurate context awareness of land vehicles can assist integrated navigation systems. Motion behavior recognition is one form of vehicle context awareness, and the constraint information derived from it helps reduce the impact of short-term blockage of navigation signals on radio-frequency-based positioning systems. To improve the reliability of behavior recognition, we propose a machine learning-based vehicle motion behavior recognition and constraint method (MLMRC). The machine learning-based recognition process is driven directly by raw data from a low-cost MEMS-IMU, whereas the traditional threshold-based method relies on prior experience. Four categories of constraint information (sensor error calibration, velocity constraint, angle constraint, and position constraint) are constructed from the recognition results. Both simulated and real vehicle experiments demonstrate the performance of the MLMRC method. During short-term blockages, the MLMRC method reduces the positioning error by 17.2% to 38.3% compared with the traditional method, which effectively improves positioning accuracy and provides support for autonomous vehicles in complex urban environments.
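
As a generic illustration of data-driven motion behavior recognition from raw IMU windows (not the MLMRC pipeline itself), the sketch below trains a random forest on simple statistical features of synthetic MEMS-IMU windows; the window length, features, and the two behavior classes are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_windows(label, n, noise):
    """Synthetic 2-second IMU windows (100 Hz, 6 axes) for one motion class.
    Purely illustrative stand-in for real MEMS-IMU recordings."""
    base = rng.normal(0.0, noise, size=(n, 200, 6))
    if label == 1:  # 'moving': add an oscillatory component to every axis
        base += 0.5 * np.sin(np.linspace(0, 20, 200))[None, :, None]
    return base

# Two behaviors: 0 = stationary, 1 = moving.
windows = np.concatenate([make_windows(0, 300, 0.05), make_windows(1, 300, 0.05)])
labels = np.array([0] * 300 + [1] * 300)

# Simple per-axis statistics as features: mean, standard deviation, peak-to-peak.
feats = np.concatenate([windows.mean(1), windows.std(1),
                        windows.max(1) - windows.min(1)], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# A downstream filter could then apply, e.g., zero-velocity constraints whenever
# the classifier reports 'stationary' during a GNSS outage.
```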
