Search Results (191)

Search Parameters:
Keywords = driving rain

25 pages, 3334 KB  
Article
A Reproducible Evaluation Method for Intelligent-Driving Longitudinal Control Under Complex Weather Through Operational Design Domain Parameter Perturbation
by Yang Xu, Zhixiong Li, Chuan Sun, Shucai Xu, Haiming Sun, Yicheng Cao and Junru Yang
Machines 2026, 14(4), 454; https://doi.org/10.3390/machines14040454 - 20 Apr 2026
Viewed by 220
Abstract
Complex weather degrades both perception reliability and tire–road adhesion, thereby reducing the safety margin and responsiveness of intelligent driving longitudinal control. This study proposes a reproducible evaluation method for adverse weather operational design domains based on parameter perturbation testing and comprehensive assessment. Snow, fog, and rain are graded using standard quantitative thresholds and are coupled with road slipperiness to construct a weather–road state set. A mechanism-oriented indicator system, a combined subjective–objective weighting strategy, and a multi-level fuzzy comprehensive evaluation model are then used to generate quantitative capability scores. The method is validated on a co-simulation framework integrating vehicle–sensor simulation, a driving simulator, and a digital-twin testing environment using representative autonomous emergency braking scenarios. Results show that increasing weather severity, decreasing road adhesion, and higher initial speed reduce the post-braking safety margin and prolong collision-response time. The proposed method differentiates performance across weather–road states and provides quantitative support for test-coverage planning and capability boundary calibration in adverse weather operational design domains. Full article
(This article belongs to the Special Issue Control and Path Planning for Autonomous Vehicles)
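The multi-level fuzzy comprehensive evaluation described in this abstract can be illustrated with a minimal single-level sketch: a weight vector over indicators is fused with a membership matrix, and the capability grade is read off by maximum membership. All weights and membership values below are hypothetical, not the paper's.

```python
import numpy as np

# One level of a fuzzy comprehensive evaluation: weight vector W over
# indicators, membership matrix R (indicators x rating grades), fused
# as B = W @ R. All numbers here are hypothetical illustrations.
W = np.array([0.5, 0.3, 0.2])             # indicator weights (sum to 1)
R = np.array([[0.7, 0.2, 0.1],            # indicator 1: memberships per grade
              [0.2, 0.5, 0.3],            # indicator 2
              [0.1, 0.4, 0.5]])           # indicator 3
B = W @ R                                  # fused membership over grades
grade = int(np.argmax(B))                  # max-membership principle
```

With row-normalized memberships and normalized weights, the fused vector B also sums to one, which makes the grades directly comparable across weather-road states.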

20 pages, 2528 KB  
Article
Utilizing Multi-Source Remote Sensing Data and the CGAN to Identify Key Drought Factors Influencing Maize Across Distinct Phenological Stages
by Hui Zhao, Jifu Guo, Jing Jiang, Funian Zhao and Xiaoyang Yang
Remote Sens. 2026, 18(7), 1085; https://doi.org/10.3390/rs18071085 - 3 Apr 2026
Viewed by 355
Abstract
Drought is one of the major disasters constraining crop production. The accurate identification of the dominant environmental factors that drive drought stress at different growth stages of maize is essential for developing stage-specific and precise water management strategies, enhancing drought resistance, and ensuring food security. However, a key challenge is quantifying the nonlinear interactions among multiple environmental factors. This study focuses on the rain-fed agricultural region of Northwest China. To address the limited availability of drought event samples in this region and the inadequacy of traditional statistical methods in capturing complex inter-factor relationships, we integrate a small-sample modeling framework based on an improved Conditional Generative Adversarial Network (CGAN) with an attribution framework that employs SHapley Additive exPlanations (SHAP) for interpretability analysis. We incorporate ten environmental factors derived from multi-source remote sensing: temperature (Tmax, Tmin, Tmean), precipitation (P), evapotranspiration (ET), soil moisture at 0–10 cm (SM0–10) and at 10–40 cm (SM10–40), and solar-induced chlorophyll fluorescence (SIFmax, SIFmin, SIFmean). Sample sets were established for different maize phenological stages. The CGAN model was employed to achieve high-precision estimation of maize drought severity levels, while the SHAP method was used to quantitatively analyze the dominant factors and their contributions at each phenological stage. The results show that the CGAN model achieved coefficients of determination (R2) of 0.963, 0.972, and 0.979 for the seedling, jointing–tasseling, and maturity stages, respectively, demonstrating excellent nonlinear modeling capability under small samples. SHAP analysis reveals a clear dynamic evolution of dominant factors across phenological stages. 
Evapotranspiration (ET) dominated in the seedling stage, reflecting the primary role of the surface water–heat balance. The jointing–tasseling stage transitioned to a co-dominance of ET, topsoil moisture (SM0–10), and minimum SIF, indicating intensified crop transpiration and physiological stress under meteorological drought. The maturity stage shifted to an absolute dominance of mean temperature (Tmean), highlighting the critical impact of heat stress. This study provides a data-driven quantitative perspective for understanding maize drought mechanisms and offers a scientific basis for formulating differentiated drought management strategies for different growth stages. Furthermore, it demonstrates the potential of integrating CGAN with SHAP for agricultural remote sensing and drought attribution research in data-scarce regions. Full article
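The R² values quoted above follow the standard coefficient-of-determination definition; a minimal sketch with toy values (not the paper's predictions):

```python
import numpy as np

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
# Toy values only, not the paper's drought-severity estimates.
def r_squared(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```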

19 pages, 6674 KB  
Article
Characterization of Vehicle Tire Hydroplaning Using Numerical Simulation and Field Full-Scale Accelerated Loading Methods
by Wentao Wang, Xiangrui Han, Hua Rong, Yinghao Miao and Linbing Wang
Appl. Sci. 2026, 16(7), 3433; https://doi.org/10.3390/app16073433 - 1 Apr 2026
Viewed by 342
Abstract
Increasingly frequent extreme rainfall commonly leads to water accumulation on road surfaces, making vehicle tire hydroplaning a major threat to driving safety. Existing research has mainly focused on tire model optimization or on predicting critical hydroplaning speed from empirical formulas and numerical simulations. However, there is a lack of systematic validation of the tire–water–pavement coupling interaction under realistic pavement conditions, with particularly insufficient attention paid to pavement dynamic responses. In this study, numerical simulation and field full-scale accelerated loading methods were applied to investigate the dynamic response characteristics of the tire–water–pavement coupling system. Parametric analyses were first performed to investigate the influences of vehicle speed, vehicle load, water-film thickness, and tire lateral position on the mechanical behavior of the fluid–structure interaction for a moving vehicle tire. Subsequently, field-measured dynamic response features were used to validate the numerical model, which was then applied to predict critical conditions of vehicle tire hydroplaning. Finally, the mechanisms of hydroplaning and corresponding mitigation measures were discussed. The study revealed that increasing vehicle speed and water-film thickness, as well as decreasing vehicle load, reduce the pavement supporting force. Tire–pavement contact stress and strain decreased from the tire's center toward its shoulders. The predicted critical hydroplaning condition suggested that increasing vehicle load mitigates hydroplaning by reducing the proportion of the water-induced hydrodynamic lifting force relative to the total vehicle load. When the water depth is relatively shallow, the hydroplaning risk increases rapidly with water depth, while the water's adverse impact on tire–pavement contact force gradually diminishes as the depth continues to increase. 
This implies that a vehicle with a relatively low axle load driving over a thin film of retained water in light rain still faces a hydroplaning risk, as the pavement's supporting force may be substantially reduced under these conditions. The findings provide theoretical foundations and experimentally supported insights for driving safety assessment and anti-skid design of water-covered pavements. Full article
(This article belongs to the Special Issue Road Safety in Sustainable Urban Transport)
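The abstract mentions empirical formulas for critical hydroplaning speed; one widely cited example is Horne's NASA relation, sketched here for orientation only. The constant 10.35 and the units (mph, psi) are from the commonly quoted form of that formula, not from this paper, which develops its own coupled fluid–structure model.

```python
import math

# Horne's classic empirical estimate of the critical hydroplaning speed
# for a rotating tire: V [mph] ~ 10.35 * sqrt(tire pressure [psi]).
# Illustrative only; the article above derives its own predictions.
def horne_hydroplaning_speed_mph(tire_pressure_psi: float) -> float:
    return 10.35 * math.sqrt(tire_pressure_psi)

v = horne_hydroplaning_speed_mph(35.0)   # typical passenger-car pressure
```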

48 pages, 5585 KB  
Review
Sensors in Self-Driving Vehicles: A Detailed Literature Review and New Trends
by Patrik Viktor and Gabor Kiss
Sensors 2026, 26(7), 2153; https://doi.org/10.3390/s26072153 - 31 Mar 2026
Viewed by 1029
Abstract
Autonomous vehicles rely on complex sensing systems to perceive their environment and ensure safe operation. This review analyses the main sensor technologies used in self-driving vehicles, including cameras, LiDAR, radar, ultrasonic sensors and GNSS/IMU-based localisation systems. A core set of 40 primary research articles was systematically analysed to compare the capabilities, limitations and integration challenges of sensing technologies used in autonomous vehicles. In addition to these primary studies, further references were included to provide background information and describe emerging developments in autonomous sensing systems. The review shows that no single sensor technology can provide reliable perception under all environmental conditions. Camera systems offer rich visual information but are sensitive to lighting and weather conditions, while LiDAR provides highly accurate three-dimensional geometry but suffers from signal attenuation in rain and fog. Radar sensors demonstrate superior robustness in adverse weather and enable direct velocity measurement, although their spatial resolution remains limited compared to optical sensors. As a result, modern autonomous vehicles rely on multi-sensor fusion architectures that combine complementary sensing modalities to improve reliability and safety. The analysis also identifies several key research gaps in the current literature. In particular, there is a lack of systematic evaluation of trade-offs between sensor performance, computational requirements and vehicle energy consumption. Furthermore, the safety certification of artificial intelligence-based perception systems and the integration of emerging technologies such as FMCW LiDAR and terahertz radar remain open research challenges. 
Overall, the results suggest that the future of autonomous vehicle perception will depend not only on improvements in individual sensors but also on robust sensor fusion architectures, safety-certified AI models and energy-efficient sensor processing platforms. These findings provide guidance for researchers and engineers developing next-generation sensing systems for autonomous driving. Full article

11 pages, 1933 KB  
Article
Study on the Mechanism of Urban Road Car-Following Safety Under Adverse Weather Conditions
by Zhipeng Gu, Xing Wang and Yufei Han
Vehicles 2026, 8(3), 56; https://doi.org/10.3390/vehicles8030056 - 13 Mar 2026
Viewed by 329
Abstract
Car following is a common and important behavior in vehicle traffic flow, and fluctuations in car-following behavior caused by changes in the weather environment have become one of the main causes of traffic accidents. To investigate this problem, an urban road driving scene was built on a driving simulation platform, and a driving simulator was used to carry out car-following tests. Operating behavior parameters of the test drivers, such as steering wheel angle, headway, throttle opening, standard deviation of vehicle speed, acceleration, and number of collisions, were collected and analyzed. The results showed significant differences (p < 0.05) in indicators such as steering wheel angle, headway, acceleration, and standard deviation of speed under adverse weather conditions. Adverse weather obstructed the drivers' line of sight, which they compensated for with more frequent corrective steering inputs, degrading the vehicles' lateral stability. Safety analysis showed that the minimum following distance occurred in foggy weather, while the maximum occurred in snowy weather. In addition, drivers reduced the standard deviation of vehicle speed and acceleration fluctuations to maintain driving safety in adverse weather. Driving experience had a significant impact on the number of collisions, with novice drivers showing a higher probability of collision. Full article

15 pages, 3088 KB  
Article
Lightweight Semantic Segmentation Algorithm Based on Gated Visual State Space Models
by Kui Di, Jinming Cheng, Lili Zhang and Yubin Bao
Electronics 2026, 15(6), 1175; https://doi.org/10.3390/electronics15061175 - 12 Mar 2026
Viewed by 439
Abstract
LiDAR serves as the primary sensor for acquiring environmental information in intelligent driving systems. However, under adverse weather conditions, point cloud signals obtained by LiDAR suffer from intensity attenuation and noise interference, leading to a decline in segmentation accuracy. To address these issues, this paper designs a lightweight semantic segmentation system based on the Gated Visual State Space Model (VMamba), named RainMamba. Specifically, the system utilizes spherical projection to transform point clouds into 2D sequences and constructs a physical perception feature embedding module guided by the Beer–Lambert law to explicitly model and suppress spatial noise at the source. Subsequently, an uncertainty-weighted cross-modal correction module is employed to incorporate RGB images for dynamically calibrating the degraded point cloud data. Finally, a VMamba backbone is adopted to establish global dependencies with linear complexity. Experimental results on the SemanticKITTI dataset demonstrate that the system achieves an inference speed of 83 FPS, with a relative mIoU improvement of approximately 7.2% compared to the real-time baseline PolarNet. Furthermore, zero-shot evaluations on the real-world SemanticSTF dataset validate the system’s robust Sim-to-Real generalization capability. Notably, RainMamba delivers highly competitive accuracy comparable to the state-of-the-art heavy-weight model PTv3 while requiring a significantly lower parameter footprint, thereby demonstrating its immense potential for practical edge-computing deployment. Full article
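The Beer–Lambert law that guides RainMamba's physical-perception module describes exponential attenuation along the optical path; a minimal sketch with an illustrative (made-up) extinction coefficient, not a value from the paper:

```python
import math

# Beer-Lambert sketch: the received LiDAR return decays exponentially
# along the two-way path (out and back) through a scattering medium.
# The extinction coefficient below is a made-up illustration.
def attenuated_intensity(i0: float, extinction_per_m: float, range_m: float) -> float:
    return i0 * math.exp(-2.0 * extinction_per_m * range_m)

i = attenuated_intensity(1.0, 0.01, 50.0)   # two-way optical depth of 1
```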

24 pages, 18188 KB  
Article
WeatherMono: A CNN-Transformer Architecture for Self-Supervised Monocular Depth Estimation in Rainy and Foggy Conditions
by Yongsheng Qiu
Sensors 2026, 26(5), 1705; https://doi.org/10.3390/s26051705 - 8 Mar 2026
Viewed by 331
Abstract
In rainy and foggy conditions, the scattering of light and the occlusion effects of atmospheric particles distort the reflected light from object surfaces, leading to inconsistent depth information. As a result, depth estimation models trained under clear weather conditions fail to generalize effectively to adverse weather conditions. To address this challenge, we propose a novel CNN-Transformer architecture, WeatherMono, for self-supervised monocular depth estimation under rainy and foggy weather. Rainy and foggy images often contain large regions of low contrast and blurry features. By combining Convolutional Neural Networks (CNNs) with Transformers, WeatherMono effectively captures both local and global contextual information, thus improving depth estimation accuracy. Specifically, we introduce a Multi-Scale Deformable Convolution (MDC) module and a Global-Local Feature Interaction (GLFI) module. The MDC module extracts detailed local features in rainy and foggy environments, while the GLFI module incorporates an efficient multi-head attention mechanism into the Transformer encoder, enabling more effective capture of both local and global information. This enhances the model’s ability to comprehend image features, strengthens its capability to handle low-contrast and blurry images, and ultimately improves the accuracy of depth estimation in adverse weather conditions. Experiments on WeatherKITTI show WeatherMono achieves AbsRel of 0.097, outperforming WeatherDepth (0.104) and RoboDepth (0.107). On DrivingStereo, it achieves AbsRel of 0.149 (rain) and 0.101 (fog). Extensive qualitative and quantitative experiments demonstrate that WeatherMono significantly outperforms existing methods in terms of both accuracy and robustness under rainy and foggy conditions. Full article
(This article belongs to the Section Sensor Networks)
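The AbsRel figures quoted above use the standard mean absolute relative depth error; a minimal sketch with toy depth values:

```python
import numpy as np

# AbsRel = mean(|d_pred - d_gt| / d_gt), the standard monocular-depth
# metric. Toy values only, not the paper's evaluations.
def abs_rel(pred, gt):
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.abs(pred - gt) / gt))

err = abs_rel([9.0, 11.0], [10.0, 10.0])   # 10% relative error at each pixel
```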

18 pages, 2343 KB  
Article
VMESR: Variable Mamba-Enhanced Super-Resolution for Real-Time Road Scene Understanding with Automotive Vision Sensors
by Hongjun Zhu, Wanjun Wang, Chunyan Ma and Rongtao Hou
Sensors 2026, 26(5), 1683; https://doi.org/10.3390/s26051683 - 6 Mar 2026
Viewed by 457
Abstract
Automotive vision systems depend critically on front-view cameras, whose image quality frequently degrades under adverse conditions such as rain, fog, low illumination, and rapid motion. To address this challenge, we propose VMESR, a variable mamba-enhanced super-resolution network that integrates a selective state-space model into a lightweight super-resolution architecture. By serializing 2D feature maps and applying variable-depth mamba blocks, VMESR captures long-range dependencies with linear complexity. A multi-scale feature extractor, enhanced residual modules equipped with a convolutional block attention module, and dense fusion connections work together to improve the recovery of high-frequency details. Extensive experiments demonstrate that VMESR achieves competitive performance in both objective metrics and perceptual quality compared to state-of-the-art methods, while significantly reducing parameter counts and computational cost. VMESR provides a practical balance between efficiency and reconstructive accuracy, offering a deployable super-resolution solution for embedded automotive sensors and enhancing the robustness of autonomous driving perception pipelines. Full article
(This article belongs to the Special Issue AI for Emerging Image-Based Sensor Applications)

33 pages, 2143 KB  
Article
Adverse Weather Modulates Risk Effects and Injury Dependencies Between Alcohol-Impaired and Sober Drivers
by Zhengqi Huo, Xiaobao Yang, Xiaobing Liu and Xuedong Yan
Safety 2026, 12(2), 38; https://doi.org/10.3390/safety12020038 - 6 Mar 2026
Viewed by 489
Abstract
Existing research on driving under the influence (DUI) crashes predominantly employs independent modeling frameworks that overlook the interdependency between injury outcomes of impaired and sober drivers, potentially leading to biased parameter estimates and an incomplete understanding of crash mechanisms. This study develops a copula-based bivariate ordered response modeling framework to investigate how injury severities of DUI and non-DUI drivers are interdependent and how this dependency varies systematically across weather conditions. Using crash data from the U.S. Crash Report Sampling System (2016–2022), we analyze 3773 two-vehicle crashes involving one alcohol-impaired and one sober driver under clear, rain/snow, and fog conditions. Three key findings emerge from our analysis. First, injury severities between DUI and non-DUI drivers exhibit significant dependency, with both the strength and structure of this association varying systematically across weather conditions. Dependency intensity increases progressively from clear weather (Kendall’s τ = 0.2717) to rain/snow (0.2966) and peaks under fog (0.3239). Moreover, the optimal dependency structure differs by weather conditions. Second, DUI and non-DUI drivers demonstrate markedly differentiated response patterns to risk factors, with the same factor often producing opposite-direction or substantially different magnitude effects on the two parties. Third, weather conditions play a critical moderating role, with most risk factors exhibiting significant amplification effects on crash injury severity under adverse weather. For example, on curved roadways under fog compared to clear weather, severe/fatal injury risk increases from 4.45% to 5.81% for DUI drivers and from 7.99% to 11.36% for non-DUI drivers. 
These findings highlight the importance of joint dependency modeling in alcohol-related crash research and provide evidence-based insights for weather-sensitive DUI enforcement and targeted safety interventions. Full article
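The Kendall's τ values quoted above measure rank dependency between the two drivers' injury severities. A simplified tau-a sketch (ties count toward neither term, unlike the tau variants typically applied to tied ordinal crash data):

```python
from itertools import combinations

# Kendall tau-a on paired outcomes: (concordant - discordant) / total pairs.
# Simplified illustration; tied pairs are ignored rather than corrected for.
def kendall_tau(x, y):
    conc = disc = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            conc += 1          # same ordering in both sequences
        elif s < 0:
            disc += 1          # opposite ordering
    n_pairs = len(x) * (len(x) - 1) // 2
    return (conc - disc) / n_pairs
```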

14 pages, 5168 KB  
Article
The Concept of a Digital Twin in the Arctic Environment
by Ari Pikkarainen, Timo Sukuvaara, Kari Mäenpää, Hannu Honkanen and Pyry Myllymäki
Electronics 2026, 15(5), 1001; https://doi.org/10.3390/electronics15051001 - 28 Feb 2026
Viewed by 323
Abstract
A Digital Twin is a virtual environment that simulates, predicts, and optimizes the performance of its physical counterpart. Digital Twin models hold great potential in wireless networking testing and development. This paper aims to envision our concept of simulating the operation of different sensors in vehicle test-track conditions. Vehicle parameters are embedded into the edge computing entity, which uses them to generate a test configuration for the Digital Twin. This configuration is then applied in simulated sensor-output prediction, ultimately producing event data for the vehicle entity. The sensor suite—comprising radar, cameras, GPS and LiDAR—is modeled to provide the multi-modal input required for generating simulated perception data in the Digital Twin. To ensure realistic perception behavior, the physical vehicle is represented within a digital environment that reproduces the actual test track. This allows LiDAR occlusions to be attributed to genuine environmental structures (e.g., trees, buildings, other vehicles) rather than simulation artifacts. Within the Digital Twin, the objective is to evaluate how sensor signals—such as radar waves and LiDAR light pulses—propagate through the environment and how real-world obstacles may weaken or distort them. Historical datasets are used to calibrate and validate the Digital Twin, ensuring that the simulated sensor behavior aligns with real-world observations; the data collected during previous test runs can be used for visualization and analysis. Weather conditions are modeled to evaluate how rain, fog and snow impact sensor performance within the Digital Twin environment, to learn about the effects and predict sensor operation in different weather conditions. In this article, we examine the Digital Twin of our test track as a development environment for designing, deploying and testing ITS-enhanced road-weather services and warnings. 
These services integrate real-world road-weather observations, forecast data, roadside sensors and on-board vehicle measurements to support safe driving and optimize vehicle trajectories for both passenger and autonomous vehicles. This research is expected to benefit stakeholders involved in automotive testing, simulation and road-weather service development. Full article

18 pages, 3282 KB  
Article
PF-ConvNeXt: An Adverse Weather Recognition Network for Autonomous Driving Scenes
by Quanxiang Wang, Zhaofa Zhou and Zhili Zhang
Electronics 2026, 15(5), 920; https://doi.org/10.3390/electronics15050920 - 24 Feb 2026
Viewed by 232
Abstract
Rain, snow, fog, and dust can degrade road-scene images, blur fine details, and consequently reduce the reliability of perception systems for autonomous driving. To address this problem, this paper proposes PF-ConvNeXt, an adverse weather recognition model built upon the ConvNeXt architecture. First, a lightweight pyramid split attention (PSA) module is introduced to enable multi-scale feature fusion, so that both global degradation patterns and local texture details can be captured simultaneously. Second, a feature enhancement channel and spatial attention module (FECS) is designed. It adaptively recalibrates features along the channel and spatial paths, thereby suppressing interference from complex backgrounds and noise. Third, during training, Focal Loss is adopted to strengthen learning for hard samples and minority weather categories, alleviating recognition bias caused by class imbalance. Experiments are conducted on a dataset of 5000 images constructed by integrating RTTS, DAWN, and a self-collected rainy-weather dataset. The results show that PF-ConvNeXt achieves 90.16% accuracy, 95.24% mean average precision, and a 92.18% F1-score. It outperforms the ConvNeXt baseline by 4.74%, 5.46%, and 5.95%, respectively, and surpasses multiple mainstream classification models. This study provides an effective recognition framework for robust environmental perception under challenging weather conditions and demonstrates promising potential for practical deployment. Full article
(This article belongs to the Section Computer Science & Engineering)

23 pages, 2335 KB  
Article
A New Device for Continuous, Real-Time Acoustic Measurement of Rain Inclination
by David Dunkerley
Water 2026, 18(4), 495; https://doi.org/10.3390/w18040495 - 15 Feb 2026
Viewed by 625
Abstract
Driving rain or ‘wind-driven rain’ (WDR) arrives at the ground on an oblique trajectory, and drops may strike at a speed greater than their still-air terminal velocity. Oblique rain can affect a range of geomorphic processes including the splash dislodgment and transport of soil particles, and hydrological processes including overland flow, canopy interception, and the generation of stemflow. The mean rain inclination angle at which WDR strikes the ground has been estimated from the catch of paired gauges, one with a conventional horizontal orifice, and one with a vertical orifice, or by related forms of vectopluviometers. Such data allow the resolution of rain vectors to find the rain inclination. However, rain-collecting devices of this kind do not permit the real-time recording of the rain inclination from moment to moment. Here, a new acoustic method for measuring the rain inclination is introduced that provides an inexpensive tool for the continuous, real-time monitoring of WDR. Furthermore, the method also permits the simultaneous recording of rainfall duration and intermittency at a high temporal resolution, with no additional apparatus. Data on rain inclinations collected during showers on a tropical coast exposed to strong trade-winds are presented to illustrate the operation of the acoustic measurement system. However, the focus of this paper is the presentation of the new method itself, and not on the climatology of WDR. Full article
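The paired-gauge vector resolution that the abstract contrasts with its acoustic method can be sketched as follows, under the simplifying assumption that each gauge's catch is proportional to the corresponding rain-flux component (orifice-area normalization and wind-direction corrections omitted):

```python
import math

# Paired-gauge sketch: the horizontal-orifice gauge samples the vertical
# rain flux and the vertical-orifice gauge the horizontal flux, so the
# mean inclination from vertical is roughly atan(vertical_orifice_catch /
# horizontal_orifice_catch). Simplified illustration, not this paper's
# acoustic method.
def rain_inclination_deg(horizontal_orifice_catch: float,
                         vertical_orifice_catch: float) -> float:
    return math.degrees(math.atan2(vertical_orifice_catch,
                                   horizontal_orifice_catch))
```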

21 pages, 4333 KB  
Article
A Multivariable Model for Predicting Automotive LiDAR Visibility Under Driving-In-Rain Conditions
by Wing Yi Pao, Long Li, Martin Agelin-Chaab and Haoxiang Lang
Appl. Sci. 2026, 16(4), 1835; https://doi.org/10.3390/app16041835 - 12 Feb 2026
Viewed by 540
Abstract
LiDAR sensors are becoming more common and are expected to be widely adopted in vehicles as the production cost of time-of-flight units falls. Manufacturers remain uncertain about the placement, cover material, and assembly shape needed to achieve optimal LiDAR performance, especially in rainy conditions. Although methodologies exist for evaluating the visibility and signal intensity of point clouds, no indexing approaches are available, since they would require a broad, comprehensive dataset and realistic, repeatable conditions for parametric studies. A matrix of rain conditions with quantified raindrop-distribution characteristics is simulated in a wind tunnel via the wind-driven rain concept to reproduce the realistic impact of raindrops on the sensor assembly surface at various wind speeds. This paper presents a performance-prediction model for LiDAR sensors and showcases the model's capability to provide quantitative insights when comparing design variations. The model is three-dimensional, covering the rain conditions perceived by a moving vehicle at different speeds, the surface-wettability properties of the cover material, and LiDAR visibility in rain relative to dry conditions. The observed LiDAR signal degradation follows an exponential form, for which this study provides experimentally derived coefficients, enabling quantitative prediction across materials, topologies, rain conditions, and driving speeds. Full article
(This article belongs to the Section Transportation and Future Mobility)
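The exponential degradation law described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's fitted model: the function name and the coefficient `k` are hypothetical placeholders for the experimentally derived, material- and speed-dependent coefficients the study reports.

```python
import math

def lidar_relative_visibility(rain_rate_mm_h: float, k: float = 0.05) -> float:
    """Relative LiDAR visibility (dry = 1.0) at rain rate R [mm/h],
    using a simple exponential degradation law V = exp(-k * R).
    The coefficient k is illustrative; the paper fits such coefficients
    per cover material, assembly topology, and driving speed."""
    return math.exp(-k * rain_rate_mm_h)

# Dry conditions leave visibility unchanged; heavier rain decays it.
print(lidar_relative_visibility(0.0))   # 1.0
print(lidar_relative_visibility(25.0))  # ~0.287
```

With a fitted `k` per configuration, such a curve lets different cover materials and placements be compared quantitatively at any driving speed.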
14 pages, 9818 KB  
Article
REHEARSE-3D: A Multi-Modal Emulated Rain Dataset for 3D Point Cloud De-Raining
by Abu Mohammed Raisuddin, Jesper Holmblad, Hamed Haghighi, Yuri Poledna, Maikol Funk Drechsler, Valentina Donzella and Eren Erdal Aksoy
Sensors 2026, 26(2), 728; https://doi.org/10.3390/s26020728 - 21 Jan 2026
Cited by 1 | Viewed by 481
Abstract
Sensor degradation poses a significant challenge in autonomous driving. During heavy rainfall, interference from raindrops can adversely affect the quality of LiDAR point clouds, resulting in, for instance, inaccurate point measurements. This, in turn, can potentially lead to safety concerns if autonomous driving systems are not weather-aware, i.e., if they are unable to discern such changes. In this study, we release a new, large-scale, multi-modal emulated rain dataset, REHEARSE-3D, to promote research advancements in 3D point cloud de-raining. Distinct from the most relevant competitors, our dataset is unique in several respects. First, it is the largest point-wise annotated dataset (9.2 billion annotated points), and second, it is the only one with high-resolution LiDAR data (LiDAR-256) enriched with 4D RADAR point clouds logged in both daytime and nighttime conditions in a controlled weather environment. Furthermore, REHEARSE-3D involves rain-characteristic information, which is of significant value not only for sensor noise modeling but also for analyzing the impact of weather at the point level. Leveraging REHEARSE-3D, we benchmark raindrop detection and removal in fused LiDAR and 4D RADAR point clouds. Our comprehensive study further evaluates the performance of various statistical and deep learning models, where SalsaNext and 3D-OutDet achieve above 94% IoU for raindrop detection. Full article
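The IoU figure quoted for SalsaNext and 3D-OutDet is the standard intersection-over-union on point-wise binary labels. A minimal sketch of that metric (function name and toy labels are illustrative, not from the dataset):

```python
def point_iou(pred, target):
    """Intersection-over-Union for binary point-wise labels
    (1 = raindrop, 0 = valid return): |pred AND target| / |pred OR target|.
    This is the metric on which de-raining models are scored."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
print(point_iou(pred, target))  # 2/4 = 0.5
```

An IoU above 0.94 therefore means the predicted raindrop points overlap almost completely with the annotated ones, with few false positives or misses.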
22 pages, 6124 KB  
Article
High-Resolution Monitoring of Badland Erosion Dynamics: Spatiotemporal Changes and Topographic Controls via UAV Structure-from-Motion
by Yi-Chin Chen
Water 2026, 18(2), 234; https://doi.org/10.3390/w18020234 - 15 Jan 2026
Viewed by 776
Abstract
Mudstone badlands are critical hotspots of erosion and sediment yield, and their rapid morphological changes make them ideal sites for studying erosion processes. This study used high-resolution Unmanned Aerial Vehicle (UAV) photogrammetry to monitor erosion patterns on a mudstone badland platform in southwestern Taiwan over a 22-month period. Five UAV surveys conducted between 2017 and 2018 were processed using Structure-from-Motion photogrammetry to generate time-series digital surface models (DSMs). Topographic changes were quantified using DSMs of Difference (DoD). The results reveal intense surface lowering, with a mean erosion depth of 34.2 cm, equivalent to an average erosion rate of 18.7 cm yr−1. Erosion is governed by a synergistic regime in which diffuse rain splash acts as the dominant background process, accounting for approximately 53% of total erosion, while concentrated flow drives localized gully incision. Morphometric analysis shows that erosion depth increases nonlinearly with slope, consistent with threshold hillslope behavior, but exhibits little dependence on the contributing area. Plan and profile curvature further influence the spatial distribution of erosion, with enhanced erosion on both strongly concave and convex surfaces relative to near-linear slopes. The gully network also exhibits rapid channel adjustment, including downstream meander migration and associated lateral bank erosion. These findings highlight the complex interactions among hillslope processes, gully dynamics, and base-level controls that govern badland landscape evolution and have important implications for erosion modeling and watershed management in high-intensity rainfall environments. Full article
(This article belongs to the Section Water Erosion and Sediment Transport)
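The DoD analysis amounts to differencing two co-registered DSM rasters and summarizing the cells that lowered. A minimal sketch, assuming the DSMs are aligned grids in centimeters; the function name and the toy 2×2 grids are hypothetical, chosen only to reproduce the paper's headline numbers:

```python
import numpy as np

def dod_erosion_stats(dsm_before, dsm_after, months):
    """DSM of Difference (DoD): per-cell elevation change between two
    surveys (after - before). Negative cells indicate surface lowering
    (erosion). Returns the mean erosion depth (DSM units) and the
    annualized erosion rate."""
    dod = np.asarray(dsm_after) - np.asarray(dsm_before)
    erosion = -dod[dod < 0]                 # magnitudes of lowering
    mean_depth = float(erosion.mean()) if erosion.size else 0.0
    rate_per_year = mean_depth / (months / 12.0)
    return mean_depth, rate_per_year

# Toy 2x2 DSMs in cm, contrived so the mean depth matches the paper's
# reported 34.2 cm over 22 months (~18.7 cm/yr).
before = [[100.0, 100.0], [100.0, 100.0]]
after  = [[69.8, 65.0], [100.0, 62.6]]
depth, rate = dod_erosion_stats(before, after, months=22)
print(depth, rate)  # 34.2, ~18.65
```

In practice the same differencing is applied cell-by-cell to the full survey rasters, usually after thresholding the DoD by its propagated elevation uncertainty.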