Article

A Multivariable Model for Predicting Automotive LiDAR Visibility Under Driving-In-Rain Conditions

1 Faculty of Engineering and Applied Science, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
2 ACE Climatic Aerodynamic Wind Tunnel, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1835; https://doi.org/10.3390/app16041835
Submission received: 2 January 2026 / Revised: 2 February 2026 / Accepted: 5 February 2026 / Published: 12 February 2026
(This article belongs to the Section Transportation and Future Mobility)

Featured Application

The resulting measurements and the prediction model support the design of more robust autonomous vehicle sensing systems by considering material selection and engineering strategies that mitigate LiDAR performance degradation in adverse weather.

Abstract

LiDAR sensors are becoming increasingly common in vehicles and are expected to see widespread adoption as the production cost of time-of-flight units declines. Manufacturers remain uncertain about the placement, cover material, and shape of the sensor assembly required to achieve optimal LiDAR performance, especially in rainy conditions. Although methodologies exist for evaluating the visibility and signal intensity of point clouds, no indexing approaches are available, since these would require a broad and comprehensive dataset together with realistic, repeatable conditions for parametric studies. A matrix of rain conditions with quantified raindrop distribution characteristics is simulated in a wind tunnel via the wind-driven rain concept to reproduce the realistic impact of raindrops on the sensor assembly surface at various wind speeds. This paper presents a performance prediction model for LiDAR sensors and demonstrates the capability of such a model to provide quantitative insights when comparing design variations. The model is three-dimensional, encompassing the rain conditions perceived by a moving vehicle at different speeds, the surface wettability of the cover material, and LiDAR visibility in rain relative to dry conditions. The observed LiDAR signal degradation follows an exponential trend, for which this study provides experimentally derived coefficients, enabling quantitative prediction across materials, topologies, rain conditions, and driving speeds.

1. Introduction

Recent technological and commercial developments have driven an increasing adoption of automotive-grade Light Detection and Ranging (LiDAR) sensors in road vehicles. For example, the XPeng P5 was the first mass-produced electric vehicle to integrate LiDAR in its advanced driver assistance system (ADAS) suite [1]. BMW has also introduced a LiDAR-enabled Level 3 automated driving package on select 7 Series and i7 models, although the availability is currently limited to specific regions, reflecting a controlled, market-targeted deployment strategy [2]. Meanwhile, globally available models such as the forthcoming Volvo EX90 are expected to offer LiDAR capabilities for mainstream consumers [3].
LiDAR deployment is not limited to passenger vehicles but is also expanding into the heavy-duty sector. Daimler Truck & Torc recently selected a long-range 4D LiDAR (Aeva) for their series-production autonomous commercial-vehicle program, complemented by a short-range 3D LiDAR (Innoviz) for Level 4 production trucks [4,5]. These real-world deployments illustrate the growing industry momentum toward integrating LiDAR as a key sensing modality in next-generation transportation systems, even as further validation remains necessary, especially under adverse weather conditions, most notably rain.
Rain introduces signal attenuation pathways that alter how emitted laser beams propagate and return, often leading to performance degradation in the form of reduced detection range, increased noise, and a lower number and signal strength of detected points [6], collectively referred to as LiDAR visibility.
A wide range of testing approaches, from controlled laboratory rain rigs to outdoor field trials, has been explored to evaluate LiDAR performance under rain, with representative examples reported in [7,8,9,10]. Across these studies, LiDAR performance degradation is commonly observed to follow monotonic, often exponential or power-law trends with increasing rain severity, consistent with established optical attenuation behavior. However, despite this qualitative agreement, existing approaches have not yet produced sufficiently quantitative or generalizable predictive frameworks, except for several scenarios being measured or computed [11,12].
Laboratory-based rain rigs offer high controllability but typically lack environmental realism for automotive applications, as they cannot fully reproduce droplet impact dynamics, airflow interactions, or surface water behavior on a moving vehicle. Conversely, outdoor testing provides realistic scenarios but suffers from poor repeatability due to uncontrolled environmental variability, making it difficult to isolate causal relationships or to draw statistically robust conclusions within practical testing timeframes. As a result, many prior studies focus on isolated aspects of the problem, such as atmospheric attenuation alone, stationary sensor exposure, or single-material lens covers. They do not capture the coupled, multi-variable interactions that govern LiDAR visibility in realistic driving-in-rain conditions.
In particular, existing rain–LiDAR models are typically characterized by one or more of the following simplifying assumptions: rain perception and optical attenuation are treated as decoupled processes, with rain intensity prescribed independently of vehicle motion or local aerodynamics; sensor surface effects are either neglected or implicitly fixed, most often corresponding to optical-grade glass or a single hydrophilic or hydrophobic surface condition; and performance metrics are evaluated under a stationary or atmospheric rain-curtain perspective. All of these limit their applicability to dynamic driving scenarios. Consequently, while exponential degradation trends are well documented, existing parameterizations are typically context-specific and difficult to transfer across materials, configurations, or operating conditions; the present framework significantly improves transferability by explicitly incorporating surface wettability and perceived rain intensity within a unified formulation.
To address these limitations, the present study adopts a systematic, multi-stage framework that integrates realistic rain perception, controlled environmental reproduction, and comprehensive material evaluation. This work builds upon four prior milestones of investigation reported in [13,14,15,16,17], which collectively established the foundational understanding of rain-LiDAR sensor interactions. Outdoor rain data collected on a moving vehicle are first used to characterize perceived rain intensity and droplet size distributions under representative driving conditions. These conditions are then reproduced in a controlled wind-driven rain environment using a wind tunnel capable of independently regulating airflow, rain intensity, and droplet microphysics, while maintaining repeatability across test cases. Within this framework, critical variables including aerodynamic configuration, surface orientation, and cover material wettability are varied systematically while all other parameters are held constant, enabling direct attribution of observed performance changes to specific physical mechanisms.
Relative to prior work, the major contributions of this study are threefold. First, while retaining the commonly observed exponential form of LiDAR performance degradation, this work explicitly parameterizes the degradation behavior as a function of surface wettability and perceived rain intensity, rather than treating material effects as implicit constants. Second, the rain quality encompassing intensity, droplet size distribution, and aerodynamic interaction is explicitly documented and analyzed, enabling quantitative comparison across conditions. Third, the study presents, to the authors’ best knowledge, the most extensive experimental evaluation of LiDAR cover materials reported to date, comprising over 35 materials spanning a wide range of wettability and surface treatments, and thereby substantially broadening the empirical basis available in the open literature for assessing material-dependent LiDAR performance under rain.
In light of these contributions, this paper pursues the following objectives: (i) to employ a validated wind-driven rain methodology to reproduce realistic, repeatable driving-in-rain conditions; (ii) to systematically vary surface material properties, aerodynamic configurations, and driving-speed-equivalent wind conditions; and (iii) to quantitatively model LiDAR visibility degradation under these coupled influences. A time-of-flight (ToF) LiDAR unit is evaluated across a 3 × 3 rain-condition matrix encompassing nine combinations of rainfall intensity and driving speed equivalent airflow. The resulting dataset forms the basis for a material-aware, experimentally grounded LiDAR performance model. The following section reviews prior work in the open literature on LiDAR performance under adverse weather, highlighting existing rain-testing approaches and the limitations that motivate the present investigation.

2. Literature Review

This review synthesizes three foundational aspects of rain-induced LiDAR performance degradation: existing models of both rain and LiDAR signal quality, the critical environmental and surface parameters that govern these effects, and testing and evaluation strategies.
In the literature, perceived rain models and LiDAR attenuation models have generally been developed separately. Simplified dynamic rain models, such as Stern’s droplet strike model [18], Bocci’s flux model [19], and Carvalho’s kinematics model [20], estimate surface wetting and droplet impact based on the orientation of the surface and driving speed. More recently, Pao’s perceived intensity model [15] integrates these concepts by incorporating observable and model-assumable parameters including droplet size distribution, wind velocity, and crosswind effects into a unified representation of the perceived rain field as experienced by a moving sensor.
On the other hand, attenuation models typically assume a theoretical droplet size distribution and relate extinction coefficients directly to rainfall rate [21,22]. Building upon this approach, Goodin [23] derived a power-law relationship between LiDAR return power and rain rate. However, these models predominantly account for atmospheric scattering and extinction, without considering the optical deflection and distortion introduced by droplets adhering to the sensor’s surface. As a result, the decoupled development of perceived rain and attenuation models leaves significant gaps in estimating LiDAR performance during real driving-in-rain conditions, where interactions at the sensor–environment interface evolve dynamically.
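To illustrate the structure of such attenuation models, the following sketch combines a generic power-law extinction coefficient with two-way Beer–Lambert attenuation. The coefficient values `a` and `b` are illustrative placeholders only, not constants taken from [21,22,23].

```python
import math

def rain_extinction(rain_rate_mm_h, a=0.01, b=0.6):
    """Generic power-law extinction coefficient alpha = a * R^b [1/m].
    The values of a and b are illustrative placeholders, not fitted
    constants from the literature."""
    return a * rain_rate_mm_h ** b

def return_power_fraction(rain_rate_mm_h, range_m, a=0.01, b=0.6):
    """Fraction of LiDAR return power surviving two-way atmospheric
    attenuation over range_m, via Beer-Lambert: exp(-2 * alpha * R)."""
    alpha = rain_extinction(rain_rate_mm_h, a, b)
    return math.exp(-2.0 * alpha * range_m)
```

Note that a formulation of this kind captures only free-path atmospheric attenuation; it omits the deflection and distortion caused by droplets adhering to the sensor surface, which is precisely the gap identified above.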
Prior studies have shown that adhering droplets exert a disproportionately large influence on LiDAR visibility compared with free-path atmospheric attenuation. This underscores that surface material properties, particularly wettability, are among the most critical yet unmodeled factors influencing sensor performance in rain. Beyond this, several other interdependent parameters also critically affect LiDAR performance when driving in rain. Aerodynamic factors, including sensor assembly topology, surface curvature, orientation, and mounting location, govern the local airflow field around the sensor [24].
Although not focused specifically on sensors, vehicle soiling research has extensively studied droplet deposition and accumulation patterns via both computational and physical experimental strategies, as well as the relationship between aerodynamic drag and soiling mitigation efficiency [25]. The strength of local vortices has been shown to influence droplet adhesion, as demonstrated in studies of side-view mirror contamination [26]. Similarly, small geometric modifications can produce substantial changes in rear flow recirculation behaviors, resulting in distinct regions with varying concentrations of particle or droplet deposition on the vehicle tailgate [27]. The key point relevant to LiDAR studies is that the surrounding flow field can substantially modify droplet dynamics, both in the local near-sensor region and directly on the sensor surface. These aerodynamic effects govern droplet breakup, splashing, coalescence, and the formation of films and rivulets, ultimately determining the surface water management and roll-off behaviors.
Studies have also suggested that optical factors further compound the complexity [28]. The curvature of the sensor lens cover, its transmittance and reflectance characteristics, and the scattering and refraction induced by individual droplets across the surface directly affect the LiDAR’s received signal [14]. Therefore, all the discussed parameters illustrate the intricate, multi-interface nature of LiDAR performance when driving in rain, as shown in Figure 1.
The following section focuses on sensor studies, reviewing testing methodologies and performance degradation observed. Understanding the mechanisms is essential for establishing quantitative metrics and enabling systematic comparisons across varying environmental and operational parameters.
Industry-led AV testing typically comes with limited public disclosure; Waymo represents one of the longest documented efforts addressing rain-related challenges, with public acknowledgements dating back to 2019. However, within the Waymo Open Dataset, only a very small fraction of the released data is labeled as rain, restricting comprehensive academic analysis of sensor behavior under precipitation [29]. Computational studies rely on real-world data for accurate simulation, for example through physically inspired rain integration in CARLA tuned to match statistics observed in the Waymo Open Dataset [30], or through synthetic dataset generation, such as ray attenuation applied to the open KITTI driving datasets [31].
Zoox conducted real-world rain testing in Seattle in 2021 [32], followed by controlled wind-tunnel experiments in 2022 to evaluate rain-mitigation strategies [33]. Collectively, these efforts provide valuable system-level insights into AV operation in rain; however, the scarcity of open, quantitative evaluations reinforces the need for fundamental academic investigations that decouple environmental, material, and sensor-level effects to establish robust and generalizable LiDAR performance models.
Outdoor testing is inherently time-consuming and, even when statistical matching is achieved, it remains highly scenario dependent. In contrast, a systematic and controlled testing approach enables repeatable reproduction of sensor degradation phenomena, thereby accelerating insight generation and supporting innovation through parametric experimentation.
Typically, there are two common methods of controlled dynamic testing: driving through a rain zone on a test track or using wind-driven rain in a wind tunnel. Carballo et al. created the LIBRE dataset, a multi-LiDAR study covering slow driving speeds under artificial rain, fog, and strong lighting conditions [34]. Kim et al. performed driving tests at higher speeds with an artificial rain system on a test track [35], observing the effects of parameters such as driving speed, rain intensity, distance to the object, and the material and color of the target. Tang et al. performed LiDAR and camera testing at lower speeds in a parking lot environment [36] and found that higher driving speeds degrade LiDAR performance more quickly; moving faster increases the perceived rain intensity on the LiDAR surface, depending on the location and cover material of the sensor.
Beyond differences in testing methodology, prior experimental and modeling studies consistently report an approximately exponential or power-law degradation of LiDAR signal strength, effective range, or point density with increasing rain intensity or exposure duration [37,38]. While such trends provide useful first-order characterization, existing models generally assume a single, fixed sensor surface condition and therefore fail to capture performance variations arising from changes in surface material properties. In most reported studies, LiDAR units are evaluated either without protective covers or with optical-grade glass housings, implicitly corresponding to hydrophilic surface behavior under wetting conditions, such as a multi-LiDAR study in [39]. As a result, material-dependent effects are effectively embedded as unmodeled constants within empirical attenuation formulations, limiting their validity when surface treatments or alternative cover materials are introduced.
It is often assumed based on human driving experience that hydrophobic surfaces inherently improve visibility, as demonstrated by commercial coatings designed to enhance optical clarity [40]. However, human vision and LiDAR sensing are governed by fundamentally different mechanisms. LiDAR perception relies on the transmission, refraction, and reception of laser beams, with droplet-induced deflection and distortion governed primarily by Snell’s law and the evolving liquid–solid–air interface, rather than by subjective visual transparency alone [14]. Consequently, surface treatments that appear beneficial for human vision do not necessarily yield equivalent improvements in LiDAR performance, and in some cases may introduce additional optical artifacts. These limitations highlight the need for a revised modeling framework that explicitly incorporates surface material properties alongside environmental and aerodynamic effects.
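As a minimal illustration of the droplet-optics mechanism described above, the following sketch applies Snell's law at a single planar air–water interface. This is a didactic simplification: actual droplet interaction involves curved interfaces and the full evolving liquid–solid–air geometry discussed in [14].

```python
import math

def snell_refraction_deg(theta_i_deg, n1=1.0, n2=1.33):
    """Refraction angle in degrees at a planar interface via Snell's law:
    n1 * sin(theta_i) = n2 * sin(theta_t).
    Returns None when total internal reflection occurs (possible only
    when passing from the denser to the rarer medium, n1 > n2)."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.degrees(math.asin(s))
```

For example, a beam entering a droplet (air to water, n2 ≈ 1.33) bends toward the normal, while a beam exiting the droplet at a steep angle can undergo total internal reflection, consistent with the visibility losses attributed above to droplet-induced deflection.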
While wind tunnel studies of vehicle-rain interactions are extensive, to the authors' best knowledge, the studies by Pao et al., summarized in [13,24], are the only works to date that directly evaluate LiDAR sensing performance using a wind-driven rain methodology. Their investigations are systematic and hierarchical, encompassing the definition of meteorological requirements, characterization of dynamic perceived rain conditions under vehicle motion, assessment of material influences, and the development of a wind-tunnel testing method specifically tailored for LiDAR and other optical sensor studies.
In parallel, recent studies have explored the impact of adverse weather on perception performance using camera-based and object-detection-oriented metrics, including detection confidence and recognition accuracy under rain and soiling conditions [41,42]. These efforts provide useful insight into end-to-end perception behavior under degraded sensing conditions. The performance metrics employed in such studies inherently reflect a combination of sensor-level effects and downstream processing, and therefore emphasize system-level perception outcomes rather than raw sensor response. Within this broader landscape, sensor-centric analyses that focus on point-level visibility offer complementary information by characterizing the underlying sensing behavior that precedes higher-level perception.
While these efforts provide an important starting point, the absence of a unified, quantitatively validated metric under realistic rain conditions highlights the need for experimentally grounded models capable of reliably informing mitigation design and computational simulation. Accordingly, this study leverages a validated wind-driven rain approach to generate realistic yet repeatable driving-in-rain conditions, forming the basis for a robust quantitative LiDAR performance model.

3. Materials and Methods

3.1. Realistic Rain Simulation

Traditionally, wind tunnel rain systems consist of spray nozzle arrays, which cannot accurately represent the realistic conditions experienced by a moving vehicle, including droplet size distribution and dynamic impact angle. These limitations affect the resulting surface droplet dynamics and, subsequently, sensor perception. This study employs a patented (US 12,496,607 B2) rain simulation system integrated into a wind tunnel. The wind tunnel used is 1/14th model scale, which allows a sufficient cross-sectional area for the sensor assembly, with side and top walls open to reduce blockage effects and a collector at the top rear to recollect airflow at the downstream end of the test section.
While wind-tunnel testing does not target maximum detection range or long-range attenuation, it provides the controlled, repeatable rain exposure necessary to isolate near- and mid-range LiDAR perception degradation mechanisms relevant to automotive sensing.
The rain system—Vectorized Rain Simulation Apparatus (VeRSA)—uses a vectorized physical concept combining vertical free-fall and horizontal wind deflection to direct raindrops toward the test object. To account for different wind speeds and rainfall intensities, the falling height and flow rate of individual in-line nozzles are adjustable. The wind tunnel setup is shown in Figure 2.
Based on outdoor investigations using a vehicle equipped with meteorological sensors, particularly an optical disdrometer, the Laser Precipitation Monitor (LPM) by Thies Clima (Göttingen, Germany), it was found that a forward-viewing surface oriented perpendicular to the ground receives more rain, which is influenced by driving speed and other external factors [17]. Therefore, as wind speed representing driving speed increases, system flow rate is also increased to achieve higher perceived rain intensities. Table 1 presents the rain matrix reporting the rain categories, driving speeds, perceived rain intensities, and volume-mean droplet sizes. The natural rain intensities for light, moderate, and heavy categories correspond to approximately 5, 10, and 25 mm/h, respectively. The target conditions are decided based on the perceived precipitation prediction model in [17]. The rain intensity is calculated from the number and size of droplets recorded by the LPM over a 2 min duration using the particle event mode. The droplets are assumed to be spherical, with a measurement area of 4560 mm2. The intensity I [mm/h] and volume-mean droplet size D3,0 [mm] can be calculated using:
I = \frac{V_{2\,\mathrm{min}}}{2\ \mathrm{min}} \cdot \frac{60\ \mathrm{min}}{1\ \mathrm{h}} \cdot \frac{1}{4560\ \mathrm{mm}^2}
D_{3,0} = \left( \frac{\sum_i n_i d_i^{3}}{N} \right)^{1/3}
Here, V is the total volume of all droplets collected over the 2 min window, n_i and d_i are the number count and droplet diameter at bin i, respectively, and N = Σ_i n_i. It should be noted that this mode has a data-saturation limitation that may underestimate the rain intensity; however, it is consistent with the outdoor experiments, which required timestamping of the recorded droplets.
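The intensity and volume-mean droplet size computations above can be sketched as follows for binned disdrometer output; the bin diameters and counts used in any example call are hypothetical inputs, and the spherical-droplet assumption from the text is applied.

```python
import numpy as np

def rain_intensity_mm_h(bin_diameters_mm, bin_counts,
                        duration_min=2.0, area_mm2=4560.0):
    """Perceived rain intensity I [mm/h]: total spherical droplet volume
    collected over the measurement window, divided by the LPM measurement
    area and scaled to an hourly rate."""
    d = np.asarray(bin_diameters_mm, dtype=float)
    n = np.asarray(bin_counts, dtype=float)
    total_volume_mm3 = np.sum(n * (np.pi / 6.0) * d ** 3)
    return float(total_volume_mm3 / area_mm2 * (60.0 / duration_min))

def volume_mean_diameter_mm(bin_diameters_mm, bin_counts):
    """Volume-mean droplet size D_{3,0} [mm] = (sum(n_i d_i^3) / N)^(1/3)."""
    d = np.asarray(bin_diameters_mm, dtype=float)
    n = np.asarray(bin_counts, dtype=float)
    return float((np.sum(n * d ** 3) / np.sum(n)) ** (1.0 / 3.0))
```

Because D_{3,0} is a ratio of moments, it depends only on the shape of the size distribution: doubling every bin count doubles the computed intensity but leaves the volume-mean diameter unchanged.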
The droplet size distributions were validated against outdoor vehicle experiments and showed significant improvements over traditional spray nozzle systems, as illustrated in Figure 3. This improved realism is an essential step in inducing accurate LiDAR performance degradation, since the area influence of each droplet on sensor perception is highly dependent on the characteristics of the incoming droplets. LiDAR performance under a large number of small droplets, such as those from tire spray, differs from that under natural precipitation consisting of fewer but larger droplets, even when the overall water intensity is the same.

3.2. Experimental Setups

A high-resolution, pulsed-ToF, MEMS-based 3D scanning LiDAR is used in this study. The sensor operates at 905 nm and 10 frames per second. It has a linear angular resolution with 45 forward-facing scan lines, a field of view (FOV) of 70° (H) × 20° (V), and a detection range of 0.2–600 m with 0.03 m accuracy.
The LiDAR is placed inside a waterproofed container and behind an optically transparent cover, so the cover, rather than the sensor unit itself, is exposed to direct soiling. Cover materials and shapes are varied, and external aerodynamic devices are added to manipulate the local flow field around the sensor FOV; several sample configurations are shown in Figure 4. Each configuration is subjected to the same set of cover materials and driving-in-rain conditions and follows the same performance evaluation metric to compare the impact on LiDAR visibility.
The sensor covers use polycarbonate and glass panels as substrates, treated with different coatings and thin films to achieve varying scratch resistance, anti-reflection properties, and wettability, with water contact angles (WCA) ranging from 5° to 150°. The WCAs are measured with a goniometer setup, and the optical characteristics are measured with an optical test bench to determine the transmittance and reflectance at different angles of incidence. Samples of these material characterization results can be found in [14].
The optical measurements serve as a pass/fail examination to determine if the sample is an acceptable LiDAR grade material; a minimum of 90% transmittance is typically desired. Having a cover in front of the sensor will exert a potential influence on dry vision, usually causing lower performance than having no cover. This is the case for the typically flat configuration. It is seen that certain configurations, such as tilted or curved covers, can achieve optimal vision at a specific angle orientation, which is reflected in the optical test bench measurements. On the other hand, the WCA measurements provide information on one of the critical parameters for the model.

3.3. LiDAR Performance Evaluation

Each test case records for 2 min after the wind and rain have stabilized, with the LiDAR assembly exposed to pre-calibrated driving-in-rain conditions. For lighter rain conditions, the soaking process is slightly longer to reach saturated droplet accumulation behavior. The wind tunnel honeycomb wall (a flow-conditioning mesh screen) is a large, square object conveniently located in the environment that can be detected by the forward-facing LiDAR. At 3.3 m from the LiDAR, it lies above the minimum detection range and is therefore used as the fixed detection target in this study.
When analyzing the LiDAR performance, the effect of having a cover on visibility is evaluated first by comparison with bare sensing (no cover) in dry conditions; then, the visibility in rain with a cover is compared to the control frame (dry visibility with a cover). A visibility metric is defined for the quality of detection based on the number of points detected. The LiDAR sensor has an internal processing algorithm that accepts a detection as valid if its reflectivity is above 10%. While reflectivity values are range- and unit-dependent, the short-range single target in this study eliminates this dependence. For LiDARs that do not filter low-confidence points, the visibility scores in this study are also proportionally applicable to signal intensity studies representing return power.
The LiDAR vision is viewed in real time, and the point cloud data are post-processed to calculate the averaged visibility percentage over 2 min in a frame-by-frame manner. The on-cover droplet dynamics are also observed using an external camera.
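The frame-by-frame visibility computation described above can be sketched as follows; the point counts in the example are hypothetical, with the dry control frame (with cover) serving as the reference, as in the text.

```python
import numpy as np

def average_visibility_pct(points_per_frame, dry_reference_points):
    """Mean visibility over a recording: valid points detected in each
    rain frame relative to the dry control frame (with cover), in
    percent, averaged frame by frame."""
    counts = np.asarray(points_per_frame, dtype=float)
    return float(np.mean(counts / dry_reference_points) * 100.0)
```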

4. Results and Discussions

The results section focuses on several key aspects influencing LiDAR performance degradation. First, the effect of surface material properties is examined, with wettability quantified by the water contact angle (WCA). Second, the influences of rain intensity and driving speed equivalent wind conditions are analyzed, along with their combined impact as a measure of condition severity. Third, the sensitivity of LiDAR performance to configuration changes in local geometry and the surrounding aerodynamic flow field is evaluated. Finally, a semi-empirical model predicting LiDAR performance is introduced, and a corresponding indexing approach is proposed to enable quantitative performance assessment and comparison.

4.1. Effects of Material Properties on LiDAR Vision in Rain

Here, the observed trends are identified across the four tested configurations. The results reveal three major classes of behavior corresponding to superhydrophobic, hydrophilic, and hydrophobic surface materials, all of which exhibit an overall exponential degradation trend, as illustrated in Figure 5. Figure 5a presents the LiDAR visibility results for the flat configuration under increasing condition severity, defined by the combined effects of perceived rain intensity and driving speed. The trendlines indicate three distinct performance regimes: (1) a near-linear decay with relatively high visibility retention for superhydrophobic surfaces; (2) an exponential decay with moderate visibility for hydrophilic surfaces; and (3) a near-linear plateau characterized by persistently low visibility for hydrophobic surfaces.
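The exponential degradation trend can be extracted from visibility-versus-severity data with a simple log-linear least-squares fit, as sketched below. The severity and visibility values in the demonstration are synthetic, generated purely to illustrate the fitting step; they are not the experimentally derived coefficients of this study.

```python
import numpy as np

def fit_exponential_decay(severity, visibility_pct):
    """Fit V(s) = v0 * exp(-k * s) by linear least squares on log(V).
    Returns (v0, k)."""
    s = np.asarray(severity, dtype=float)
    v = np.asarray(visibility_pct, dtype=float)
    slope, intercept = np.polyfit(s, np.log(v), 1)
    return float(np.exp(intercept)), float(-slope)

# Synthetic illustration only: not measured data from this study.
s_demo = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v_demo = 95.0 * np.exp(-0.3 * s_demo)
v0, k = fit_exponential_decay(s_demo, v_demo)
```

A log-linear fit of this kind works for the exponential-decay regime; the near-linear regimes reported for superhydrophobic and hydrophobic surfaces would instead call for a linear or plateau model.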
Sample LiDAR visions under the same condition for the three different material classes (hydrophilic, hydrophobic, and superhydrophobic) and the four different configurations (flat, slant, curve, and flap) are demonstrated in Figure 6. It is seen that the presence of raindrops causes signal blockages, resulting in missing points compared to dry vision, whereas the remaining detected points are likely to show reduced signal strength in the form of reflectivity, as shown by the red-orange colors, while higher-reflectivity points are green. It is worth noting that the flat and curve configurations tend to cause a blind spot in the center of the FOV due to reflection, and the effect is normalized when computing visibility.
The distinct LiDAR performance trends observed across surfaces with varying wettability can be attributed to differences in the size and morphology of adhering droplets from both static and dynamic perspectives. For hydrophilic materials, partial laser transmittance through adherent water films occurs due to mild refraction, resulting in an overall exponential degradation behavior as rain conditions become more severe. On these surfaces, droplets spread, coalesce, and drain in multiple directions while remaining relatively flat, forming thin water films with low curvature that introduce limited refraction of the laser paths.
In contrast, hydrophobic covers promote the formation of discrete droplets with spherical-cap geometries that undergo limited deformation even under gravitational forces. These droplets act as convex lenses on the cover surface, inducing refraction, scattering, and, in some cases, total internal reflection, leading to substantial degradation in LiDAR visibility. Although droplets slide readily on the low-friction surface, smaller hemispherical droplets remain adhered, continuing to impair LiDAR perception. Consequently, soiling effects are often not naturally mitigated on mildly hydrophobic covers. This phenomenon has been previously reported in droplet shape and ray-tracing studies [14].
On superhydrophobic covers, the majority of the surface remains dry under rainy conditions due to low adhesion forces that cause droplets to rebound or be shed rapidly. The few droplets that remain adhered are extremely small, as larger droplets are efficiently removed through the non-sticking surface property. These small droplets introduce little to no LiDAR performance degradation, as their near-spherical shape and minimal contact area limit laser refraction. Moreover, such spherical droplets may act as ball lenses, helping to preserve near-straight laser propagation paths.
The remaining three configurations—slanted, curved, and flap—exhibit trends broadly consistent with those observed for the flat configuration, with only modest variations in the absolute LiDAR visibility levels. The slanted configuration provides a slight improvement in visibility for hydrophobic and superhydrophobic covers by reducing the stagnation region upstream of the cover, thereby promoting more effective droplet removal. In contrast, the slanted geometry has little influence on hydrophilic surfaces, where thin water films continue to dominate optical behavior.
When a flap is added to the slanted configuration, further improvements are observed for hydrophobic and moderately hydrophilic covers. This enhancement is attributed to increased local flow velocities along the cover surface, which promote either flatter film formation or more rapid droplet removal. In comparison, the superhydrophobic cover shows minimal additional benefit, as droplet removal is already highly efficient due to the non-sticking surface properties.
The curved configuration exhibits more nuanced behavior, particularly for borderline hydrophobic materials, which transition from hydrophilic-like to hydrophobic-like visibility trends. While the superhydrophobic cover performs similarly to the flat case, a slight degradation in visibility is observed. This reduction may be attributed to additional optical refraction arising from the combined curvature of the cover and adhering droplets, as well as the presence of both upward- and downward-directed flow regimes along the curved surface. Droplets moving upward along the surface may experience reduced mobility due to gravitational effects, further influencing droplet residence time and optical interference.
From a perception standpoint, the observed reduction in LiDAR visibility directly translates to decreased point density and increased spatial sparsity within the point cloud, which are critical factors governing object detection accuracy and distance measurement reliability. As rain severity increases, missing returns and attenuated signal strength reduce the number of valid points available to represent object surfaces, increasing uncertainty in object shape reconstruction, centroid estimation, and range consistency. In practice, internal sensor filtering and confidence-based rejection of weak returns may further exacerbate point loss, particularly for distant or low-reflectivity objects. In addition, localized optical effects such as specular reflection or droplet-induced lensing can selectively amplify returns from highly reflective surfaces, introducing spatial distortions or isolated false-positive points. Collectively, these effects degrade the robustness of downstream perception tasks, underscoring the importance of sensor-level visibility modeling as a prerequisite for reliable object detection and ranging under rainy driving conditions.
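The visibility metric used throughout this discussion (the ratio of valid returns under rain relative to the dry reference, with configuration-induced blind spots normalized out) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array layout and the zero-range convention for missing returns are assumptions.

```python
import numpy as np

def visibility(ranges_rain, ranges_dry, blind_mask=None):
    """Fraction of dry-condition returns still detected in rain.

    A return is counted as valid when its range is positive (missing returns
    are assumed to be encoded as zero range). Points inside a configuration-
    induced blind spot (e.g., the center reflections of the flat and curve
    covers) are excluded from both counts so the metric is comparable
    across cover geometries.
    """
    valid_rain = ranges_rain > 0
    valid_dry = ranges_dry > 0
    if blind_mask is not None:
        valid_rain &= ~blind_mask
        valid_dry &= ~blind_mask
    return valid_rain.sum() / max(valid_dry.sum(), 1)
```

Excluding the blind-spot points from both the numerator and the denominator is what makes the metric a property of the rain–surface interaction rather than of the cover geometry.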

4.2. LiDAR Performance Prediction Model

The material property influence on LiDAR performance in rain was demonstrated in the previous section. Given that LiDAR visibility is primarily controlled by surface wettability and perceived rain intensity, a comprehensive 3-dimensional representation of the data is presented in Figure 7. While this visualization captures the overall degradation trends across material classes, it does not enable direct performance prediction for a given cover material. Consequently, a correlation model is developed following the structured procedure outlined below, explicitly incorporating the key governing parameters.
Although the general observation that increasing rain intensity or severity leads to degraded LiDAR performance is well established, and attenuation trends have been reported and summarized in prior literature, the quantitative extent of signal deterioration has not been rigorously validated under realistic driving-in-rain conditions. In particular, existing studies are limited by two fundamental gaps: (1) the lack of controlled yet dynamically realistic testing environments capable of reproducing steady-state rain exposure during vehicle motion, and (2) the absence of systematic investigations quantifying LiDAR performance across different sensor cover materials.
As a result, previously reported attenuation behaviors are typically derived under fixed surface assumptions and are therefore not directly transferable when surface wettability, optical properties, or water management behavior are altered. This limitation is especially critical given mounting evidence that adhering droplets and surface water dynamics dominate LiDAR signal degradation under rain, often exceeding the influence of free-path atmospheric attenuation alone.
Addressing these limitations, the present study employs a validated wind-driven rain methodology to generate realistic, repeatable driving-in-rain conditions while explicitly isolating material-dependent effects. Under these conditions, we observe that for each evaluated cover material, LiDAR performance degradation follows a consistent exponential form. Accordingly, the degradation behavior can be expressed as:
$$ f(x, \lambda, d) = a(\lambda)\, e^{\,b(\lambda)\,x} + c(d) \tag{3} $$
where a, b, and c are coefficients, x is the perceived rain intensity (mm/h), λ is the water contact angle (WCA, deg), and d is the droplet size distribution descriptor. The exponential functional form adopted in Equation (3) follows established attenuation modeling practices in optical, meteorological, and LiDAR-related literature. The coefficients a and b as functions of λ are shown in Figure 8. The individual functions for a and b are:
$$ a(\lambda) = P_1 \lambda^3 + P_2 \lambda^2 + P_3 \lambda + P_4 $$
$$ b(\lambda) = \begin{cases} P_5^{(1)} \lambda + P_6^{(1)}, & \lambda < 80^{\circ} \\ P_5^{(2)} \lambda + P_6^{(2)}, & \lambda \ge 80^{\circ} \end{cases} $$
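The full prediction model can be sketched in code as follows. All P-coefficients below are hypothetical placeholders chosen only for illustration; the experimentally fitted, configuration-specific values are those listed in Tables 2 and 3, and c is held at zero as in the calibrated model.

```python
import numpy as np

# Placeholder coefficients (NOT the fitted values of Tables 2 and 3).
P1, P2, P3, P4 = -1.0e-6, 2.0e-4, -1.0e-2, 0.9   # cubic a(λ), placeholder
P5_1, P6_1 = -1.0e-4, 0.0                        # b(λ) for λ < 80° (placeholder)
P5_2, P6_2 = -5.0e-4, 3.2e-2                     # b(λ) for λ ≥ 80° (placeholder)

def a(wca):
    """Baseline visibility coefficient: cubic in water contact angle (deg)."""
    return P1 * wca**3 + P2 * wca**2 + P3 * wca + P4

def b(wca):
    """Rain-intensity sensitivity: piecewise linear in WCA, breakpoint at 80°."""
    return P5_1 * wca + P6_1 if wca < 80.0 else P5_2 * wca + P6_2

def predict_visibility(x, wca, c=0.0):
    """Equation (3): f(x, λ, d) = a(λ) · exp(b(λ) · x) + c(d), x in mm/h."""
    return a(wca) * np.exp(b(wca) * x) + c
```

With these placeholders, a hydrophobic cover (λ = 100°) yields a negative b, so predicted visibility decays exponentially with perceived rain intensity, mirroring the qualitative trend of Figure 7.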
Exponential formulations have been widely adopted to describe signal degradation in optical and radiative transfer systems, including atmospheric attenuation, scattering-dominated visibility loss, and LiDAR signal extinction under adverse weather conditions. In prior LiDAR-related studies, exponential or power-law relationships have been used to characterize the decay of received signal strength or effective detection range as a function of rain rate, fog density, or particulate concentration, reflecting the cumulative effect of scattering and absorption processes along the optical path. In the present work, the exponential structure is retained not as a theoretical derivation from first principles, but as an empirically supported functional form that is consistent with these established attenuation models while enabling explicit incorporation of material- and orientation-dependent effects that are not addressed in existing formulations.
Coefficient a represents the baseline influence of surface material on LiDAR visibility across driving-in-rain conditions and is modeled as a polynomial function. While a U-shaped parabolic behavior is generally assumed, under low-severity conditions the limited airflow and reduced droplet impingement may not effectively remove accumulated water, resulting in negligible performance improvement. Consequently, a third-degree polynomial provides the best fit. The cubic polynomial form allows the model to capture multiple wettability regimes, including hydrophilic film formation, transitional mixed behavior, and hydrophobic or superhydrophobic droplet-dominated states, which cannot be represented by linear or quadratic terms alone.
Dry visibility is assumed to be 100% for normalization and is not included in the regression. The constant term P₄ represents a configuration-dependent offset in wet-condition LiDAR visibility. Negative values of P₄ for the slant and flap configurations indicate small downward shifts in the fitted degradation curve, resulting in reduced sensitivity to increasing severity and ultimately superior overall performance.
The linear coefficient P₃ describes the first-order dependence of baseline LiDAR visibility on WCA, reflecting how incremental changes in wettability influence initial optical degradation under rain. Larger positive values of P₃ indicate an overall tendency toward improved baseline visibility with increasing WCA, consistent with enhanced droplet removal at higher wettability extremes.
However, the observed dependence is non-monotonic. This behavior is captured by the higher-order coefficients P₂ and P₁, which introduce curvature and asymmetry into the WCA–visibility relationship. Specifically, the quadratic term P₂ governs the emergence of a performance trough at intermediate WCA, corresponding to the regime where discrete hemispherical droplets persist on the surface and induce strong optical distortion.
The cubic coefficient P₁ captures the higher-order sensitivity at the extremes of wettability, amplifying the differences between material classes and reflecting how low and high WCA surfaces respond distinctly under different configurations. Collectively, the relative magnitudes and signs of these coefficients determine the location and severity of performance extrema across material classes.
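The cubic coefficients P₁–P₄ of a(λ) can be recovered by least-squares regression on (WCA, baseline-visibility) pairs, as sketched below. The contact angles and "ground truth" coefficients here are synthetic stand-ins; the paper's fitted values appear in Table 2.

```python
import numpy as np

# Synthetic (WCA, baseline-visibility) samples; NOT the experimental data.
wca = np.array([20.0, 55.0, 80.0, 95.0, 110.0, 150.0])  # tested contact angles (deg)
truth = np.array([-2.0e-7, 5.0e-5, -4.0e-3, 0.95])      # assumed [P1, P2, P3, P4]
a_obs = np.polyval(truth, wca)                          # noiseless "measurements"

# Least-squares cubic fit; returns coefficients highest-degree first,
# i.e., [P1, P2, P3, P4] in the paper's notation.
P_fit = np.polyfit(wca, a_obs, deg=3)
```

With real, noisy measurements the fit minimizes the squared residuals rather than interpolating, but the workflow is identical.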
Coefficient b represents the sensitivity of LiDAR visibility degradation to increasing perceived rain intensity. It exhibits an approximately linear dependence on surface wettability, with a threshold behavior observed at the transition between hydrophilic and hydrophobic materials. When b ≈ 0, the response is effectively flat, indicating limited sensitivity to rain intensity, and the degradation behavior is dominated by coefficient a. As b increases, sensitivity to rain intensity increases, leading to accelerated performance degradation under more severe conditions. Conversely, decreasing b reflects reduced sensitivity, which can result in a modest performance improvement peak for hydrophilic materials, as illustrated in Figure 7b,d.
Beyond its empirical definition, coefficient b encapsulates the combined influence of surface water dynamics, optical deflection, and droplet–surface interactions. While polynomial parameterizations are commonly employed in empirical modeling to capture smooth but nonlinear dependencies on geometric or material parameters, the present formulation adopts a piecewise definition to reflect a distinct transition in dominant droplet behavior. Specifically, experimental observations indicate a turning point near a water contact angle of approximately 80°, beyond which droplet dynamic mechanisms change character due to the coupled effects of gravity, airflow-induced shear, and surface tension.
Notably, this transition occurs at a contact angle lower than the conventional static threshold for hydrophobicity (i.e., 90°), underscoring that dynamic droplet behavior under wind-driven rain conditions cannot be inferred from static wettability metrics alone. Accordingly, the piecewise polynomial representation of b is introduced as a physically informed empirical model, enabling accurate representation of two distinct droplet-regime behaviors within a unified analytical framework.
Coefficient c represents a bias term that shifts the overall degradation state of LiDAR visibility and is introduced to account for systematic effects associated with droplet size distribution. In physical terms, variations in droplet size distribution alter the baseline optical interference experienced by the LiDAR sensor by modifying refraction, scattering, and surface water morphology, thereby influencing the effective severity of rain conditions.
In the present study, the droplet size distribution is held fixed and corresponds to a representative, experimentally validated distribution derived from outdoor measurements under natural rainfall conditions. As a result, coefficient c is calibrated as a constant offset, c(d) = 0, capturing the contribution of this specific rain microphysical condition within the tested parameter matrix. Under this formulation, c does not represent a generalizable predictive function of droplet size, but rather an empirically fitted reference state against which the effects of rain intensity and surface wettability are evaluated.
The inclusion of c in this form serves to preserve model extensibility. While most natural rain events exhibit broadly similar droplet size distributions with intensity as the primary varying parameter, other scenarios, such as secondary soiling from tire spray, can produce substantially different droplet size characteristics. In such cases, c(d) may be explicitly parameterized in future work to capture these effects. Accordingly, within the scope of the present study, c is treated as a calibrated empirical offset rather than as a fully mechanistic descriptor of droplet-scale physics. The values corresponding to Figure 8 are presented in Table 2 and Table 3.

4.3. Model Validation

The semi-empirical model introduced in this section is derived directly from the experimental observations obtained using a high-resolution, pulsed time-of-flight (ToF) LiDAR operating at a wavelength of 905 nm. The model formulation and fitted coefficients are therefore specific to this sensor architecture and operating regime. While the degradation trends are discussed in general terms for clarity, wavelength-dependent effects, alternative scanning mechanisms, and sensor-specific signal processing strategies are not explicitly parameterized in the present study.
The dominant mechanisms captured by the model (namely droplet-induced refraction, scattering, and surface water morphology on the sensor cover) are governed primarily by geometric optics and interface physics common to near-infrared pulsed ToF LiDAR systems. The observed trends with respect to rain intensity and surface wettability are transferable across similar architectures, although absolute attenuation levels and fitted coefficients may vary. Accordingly, the present model should be interpreted as a material- and rain-aware performance framework for pulsed ToF LiDAR systems, rather than as a universal predictor across all LiDAR technologies.
It is noted that uncertainty in the present model is dominated by the inherent variability of rain–surface interactions rather than by measurement noise or numerical fitting error alone. Transient droplet dynamics, stochastic surface wetting, and coupled aerodynamic effects introduce irreducible variability that cannot be meaningfully represented by conventional confidence bounds without oversimplifying the underlying physics. Accordingly, the model is intended to provide reliable average performance trends and comparative predictions across materials and configurations. Within this context, agreement between predicted and measured visibility levels is evaluated in terms of trend consistency and relative ranking, which are most relevant for sensor design and comparative assessment under realistic driving-in-rain conditions.
The accuracy of the model prediction was evaluated using an independent dataset not used for model development. Six different cover materials in the flat configuration were tested under selected conditions, and the measured LiDAR visibility was compared with the corresponding model predictions, as summarized in Table 4.
The model was demonstrated to be valid and robust in predicting LiDAR visibility. Given the parameters of perceived rain intensity and WCA, the model accurately predicts performance for the majority of cases. The largest observed discrepancy was approximately 18%, which can be attributed primarily to experimental uncertainties and is consistent with the predicted uncertainty of 17.9%. Notably, the prediction errors do not scale proportionally with perceived rain intensity, reflecting the inherent complexity of wind-driven rain, where multiple variables must be simultaneously controlled to reproduce each target condition. A detailed error analysis is provided in [24], and further reductions in uncertainty are achievable through optimization of system parameters such as nozzle opening size, nozzle positioning, flow characteristics, and dispensing frequency.
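The validation comparison reduces to a per-case relative error between measured and predicted visibility on the held-out materials. The sketch below uses illustrative stand-in values, not the data of Table 4.

```python
import numpy as np

# Illustrative measured vs. predicted visibility pairs (NOT Table 4 values).
measured  = np.array([0.92, 0.78, 0.55, 0.61, 0.88, 0.70])
predicted = np.array([0.90, 0.81, 0.47, 0.66, 0.86, 0.73])

# Percentage discrepancy per validation case, and the worst case across the set.
rel_err = np.abs(predicted - measured) / measured * 100.0
worst_case = rel_err.max()
```

A worst-case discrepancy comparable to the experimental uncertainty, as reported in the paper, is what qualifies the model as robust for comparative prediction.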
This modeling approach relies on experimental data to optimize prediction accuracy; consequently, larger datasets and broader rain condition variations would further refine the model, particularly to enable formulation of coefficient c. At present, the model considers two primary variables, with each configuration represented by its own set of parameters. However, geometric factors such as surface morphology (e.g., patterned roughness) are also expected to influence LiDAR performance and would be desirable to incorporate explicitly as predictive parameters. In its current form, the model is sufficient to approximate LiDAR performance and serves as a reliable baseline reference for comparative evaluation.

4.4. LiDAR Performance Indexing

Beyond pointwise prediction, the semi-empirical model developed from this comprehensive experimental campaign also enables global performance indexing, facilitating compact comparison across materials and configurations. The model backbone is grounded in experimentally observed exponential degradation behavior, while the fitted coefficients are intentionally chosen to represent the dominant governing parameters identified through systematic variation of rain microphysics and surface wettability. Within this framework, a dual-index approach is introduced, comprising a Baseline Performance Index (BPI), derived from the area under coefficient a , and a Sensitivity Index (SI), derived from coefficient b . The BPI characterizes baseline LiDAR visibility primarily governed by surface wettability, while the SI quantifies sensitivity to rain severity, capturing performance instability associated with material selection.
Configuration-specific aerodynamic effects, such as local flow acceleration, droplet impingement angle, and surface water transport, are implicitly captured through the fitted coefficients in the present formulation. This approach prioritizes comparative evaluation of material performance across configurations, with the flat cover used as a reference baseline. Although the results demonstrate that aerodynamic geometry plays a measurable role in LiDAR performance, explicit aerodynamic parameterization would require a focused experimental study dedicated to isolating flow-field effects, which lies outside the scope of the present work.
Accordingly, the proposed indices are not intended to decompose individual physical mechanisms, but rather to provide a structured, comparative metric for evaluating material-driven performance trends under controlled, realistic rain exposure. The results further highlight that aerodynamic configuration should be considered when applying the model to design optimization, and that explicit aerodynamic parameterization represents a natural and important extension of the present framework in future work. These indices are defined as:
$$ \mathrm{BPI} = \int_{\lambda_{\min}}^{\lambda_{\max}} a(\lambda)\, d\lambda $$
$$ \mathrm{SI} = \int_{\lambda_{\min}}^{\lambda_{\max}} \left| \frac{d b(\lambda)}{d\lambda} \right| d\lambda $$
where a high BPI indicates strong baseline LiDAR visibility across material selections, and a low SI reflects stable sensitivity to rain severity that is less dependent on material choice.
Coefficient b exhibits regime-dependent behavior with a clear transition near λ ≈ 80°, corresponding to the hydrophilic–hydrophobic boundary. Accordingly, piecewise linear representations are adopted for each configuration to capture the distinct wettability regimes. This treatment does not alter the sensitivity index formulation, as the proposed SI quantifies local variations in b(λ) and naturally accounts for both flat response regions and threshold-driven changes in sensitivity. Mathematically, the SI is equivalent to the integral of the magnitude of the local gradient, which measures the overall stability of LiDAR sensitivity with respect to wettability. By construction, this formulation evaluates the degree of variation in b irrespective of whether the slope is increasing or decreasing, such that both upward and downward trends are interpreted as indicators of instability rather than directional performance change.
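Both indices are straightforward to evaluate numerically for a fitted configuration, as sketched below. The a(λ) and b(λ) coefficients are placeholders (the fitted values are those of Tables 2 and 3), and the WCA integration bounds of 20° to 160° are assumptions for illustration.

```python
import numpy as np

# WCA grid over assumed bounds [20°, 160°].
lam = np.linspace(20.0, 160.0, 1401)

# Placeholder cubic a(λ) and piecewise-linear b(λ) with the 80° breakpoint.
a_vals = -1e-6 * lam**3 + 2e-4 * lam**2 - 1e-2 * lam + 0.9
b_vals = np.where(lam < 80.0, -1e-4 * lam, -5e-4 * lam + 3.2e-2)

def trapz(y, x):
    """Trapezoidal rule, written out to avoid NumPy-version-specific helpers."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

BPI = trapz(a_vals, lam)                            # area under a(λ)
SI = trapz(np.abs(np.gradient(b_vals, lam)), lam)   # ∫ |db/dλ| dλ
```

For a piecewise-linear b(λ), the SI reduces to the total variation of b over the WCA range, which is why flat response regions contribute nothing while steep regimes dominate the index.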
While a single scalar performance metric is desirable for compact comparison across configurations and materials, it is acknowledged that global indices such as the Baseline Performance Index (BPI) and Sensitivity Index (SI) necessarily aggregate heterogeneous physical effects, including baseline optical interference, sensitivity to rain severity, and configuration-specific aerodynamic influences. As such, these indices are not intended to replace detailed physical interpretation of individual model parameters, but rather to provide a concise, comparative summary of overall performance trends derived from the fitted degradation model.
Expanding the dataset and further parameterizing coefficient c are identified as next priorities to enable a more physically grounded weighting strategy for constructing a unified performance index. In the present formulation, the implicit weighting of contributing effects arises directly from the experimentally calibrated model structure, allowing BPI and SI to reflect observed system-level behavior without imposing application-specific preferences. This approach is consistent with common practice in data-driven performance assessment, where weighting strategies are intentionally left flexible to accommodate differing operational priorities.
Upon calculating the indices for the four tested configurations, the BPI ranking from best to worst is flap, slant, flat, and curve, while the SI ranking from least sensitive to most sensitive is flat, flap, slant, and curve. These results indicate that passive aerodynamic features such as the flap can significantly improve baseline performance, while modest orientation changes that better align the sensor with the local airflow can also leverage aerodynamic effects to promote droplet removal and droplet flattening, thereby reducing refraction-induced degradation. A natural extension of the proposed framework is to evaluate the variance of BPI or SI across configurations for a given material, which would explicitly quantify the robustness of a cover material to geometric and aerodynamic changes and further support design-relevant decision-making.
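The proposed robustness extension amounts to computing the spread of an index across configurations for a given material, as in the minimal sketch below. The index values are hypothetical, not the computed rankings reported above.

```python
import numpy as np

# Hypothetical BPI values for one material across the four configurations.
bpi_by_config = {"flap": 118.0, "slant": 112.0, "flat": 108.0, "curve": 101.0}

# Population variance across configurations: lower variance indicates a cover
# material whose performance is more robust to geometric/aerodynamic changes.
robustness = float(np.var(list(bpi_by_config.values())))
```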

5. Conclusions

As sensor deployment in autonomous vehicles continues to accelerate, uncertainty in sensor performance under adverse weather remains one of the principal challenges to robust automation and reliable operation. Mitigating the associated safety risks requires careful sensor and system design, which in turn depends on comprehensive performance prediction models capable of capturing the full spectrum of realistic driving scenarios. For LiDAR systems operating in rain, the governing parameters are logically identified as aerodynamics, surface material properties, and soiling particle characteristics, all of which fundamentally influence optical perception and sensing reliability.
In this work, a LiDAR performance prediction model is developed based on perceived rain intensity, which combines the effect of driving speed and rain rate into a single severity descriptor, and material properties described by surface wettability. LiDAR visibility is quantified using a normalized metric defined by the ratio of detected points under rainy conditions relative to dry operation. Through a comprehensive experimental campaign with a large sampling set, consistent exponential degradation behavior is observed across all tested materials, with visibility decreasing as condition severity increases. Importantly, the extent and sensitivity of degradation are shown to vary significantly with material selection, rather than following a single universal attenuation trend as often assumed in existing literature. Based on these findings, a 3-dimensional semi-empirical model is established, and complementary indexing approaches are proposed to enable systematic prediction and comparison of LiDAR performance across material choices, sensor geometries, aerodynamic configurations, and driving-in-rain conditions.
Beyond experimental interpretation, the proposed model provides a practical mechanism for conditioning LiDAR degradation behavior within computational simulation environments. By parameterizing rain-induced performance loss as a function of physically interpretable variables, the model can be directly integrated into ray-based simulators, perception pipelines, and digital twin frameworks to reproduce realistic sensor responses under adverse weather. This capability enables accelerated evaluation of mitigation strategies, virtual design iteration, and robustness assessment that would otherwise require extensive outdoor or full-scale testing.
Future work will focus on expanding the experimental dataset to encompass a broader range of rain conditions, droplet size distributions, and surface materials, enabling further refinement of model coefficients and explicit formulation of coefficient c. Incorporating additional geometric and aerodynamic descriptors, such as surface orientation, curvature, and local flow topology, will allow the framework to evolve from configuration-specific fitting toward a generalized predictive model. Ultimately, integration of the proposed semi-empirical formulation into computational simulation and digital twin platforms will support scalable, physics-informed evaluation of LiDAR robustness under realistic, multi-variable driving-in-rain scenarios. Extending the framework to additional LiDAR architectures, wavelengths, and perception metrics (e.g., range noise and false detections) represents a natural next step toward broader cross-sensor generalization.

Author Contributions

Conceptualization, W.Y.P. and M.A.-C.; methodology, W.Y.P. and L.L.; software, L.L.; formal analysis, W.Y.P.; investigation, W.Y.P. and L.L.; resources, M.A.-C., H.L.; writing—original draft preparation, W.Y.P.; writing—review and editing, L.L., M.A.-C. and H.L.; supervision, M.A.-C. and H.L.; project administration, M.A.-C.; funding acquisition, M.A.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to acknowledge the financial support from the Natural Science and Engineering Research Council of Canada (NSERC), Mitacs, and Ontario Centre of Innovation (OCI); as well as instrument support from the ACE core research facility at Ontario Tech University. Lastly, the authors would like to acknowledge the fruitful discussions with Magna on LiDAR performance in rain using different sensor cover materials during our collaborative project meetings, which have contributed to the insights for this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ouyang, I. Tesla’s Chinese Rival Xpeng Ups the Self-Driving Game with World’s First Mass-Produced LiDAR in P5 Sedan, Defying Elon Musk. 2021 South China Morning Post. Available online: https://www.scmp.com/business/companies/article/3129530/teslas-chinese-rival-xpeng-ups-self-driving-game-worlds-first (accessed on 2 January 2026).
  2. Padeanu, A. BMW Personal Pilot L3 Announced, Level 3 Automated Driving System Debuts in 7 Series. 2023 BMW BLOG. Available online: https://www.bmwblog.com/2023/11/10/bmw-personal-pilot-announced-level-3-automated-driving (accessed on 2 January 2026).
  3. Luminar Day: A New Era—Luminar Achieves Global Start of Production for Volvo Cars. 2024 Luminar. Available online: https://investors.luminartech.com/news-events/press-releases/detail/87/luminar-day-a-new-era-luminar-achieves-global-start-of (accessed on 2 January 2026).
  4. Daimler Truck and TORC Robotics Select AEVA to Supply Advanced 4D LIDAR Technology for Series-Production Autonomous Trucks 2024 AEVA. Available online: https://www.aeva.com/press/daimler-truck-and-torc-robotics-select-aeva-to-supply-advanced-4d-lidar-technology-for-series-production-autonomous-trucks/ (accessed on 2 January 2026).
  5. Daimler Truck and TORC Robotics Select Innoviz Technologies as LIDAR Partner for Series Production of Level 4 Autonomous Trucks. 2025 Daimler Truck. Available online: https://www.daimlertruck.com/en/newsroom/pressrelease/daimler-truck-and-torc-robotics-select-innoviz-technologies-as-lidar-partner-for-series-production-of-level-4-autonomous-trucks-53284511 (accessed on 2 January 2026).
  6. Zhang, C.; Huang, Z.; Ang, M.H., Jr.; Rus, D. LiDAR Missing Measurement Detection for Autonomous Driving in Rain. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023. [Google Scholar] [CrossRef]
  7. Choe, J.; Cho, H.; Chung, Y. Performance Verification of Autonomous Driving LiDAR Sensors Under Rainfall Conditions in Darkroom. Sensors 2024, 24, 14. [Google Scholar] [CrossRef]
  8. Filgueira, A.; Gonzalez-Jorge, H.; Laguela, S.; Diaz-Vilarino, L.; Arias, P. Quantifying the Influence of Rain in LiDAR Performance. Measurement 2017, 95, 143–148. [Google Scholar] [CrossRef]
  9. Lambert, J.; Carballo, A.; Cano, A.M.; Narksri, P.; Wong, D.; Takeuchi, E.; Takeda, K. Performance Analysis of 10 Models of 3D LiDARs for Automated Driving. IEEE Access 2020, 8, 131699–131722. [Google Scholar] [CrossRef]
  10. Sheeny, M.; Pellegrin, E.D.; Mukherjee, S.; Ahrabian, A.; Wang, S.; Wallace, A. RADIATE: A Radar Dataset for Automotive Perception in Bad Weather. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA 2021), Xi’an, China, 30 May–5 June 2021. [Google Scholar] [CrossRef]
  11. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of Weather Phenomena on Automotive Laser Radar Systems. Adv. Radio Sci. 2011, 9, 49–60. [Google Scholar] [CrossRef]
  12. Byeon, M.; Yoon, S.W. Analysis of Automotive Lidar Sensor Model Considering Scattering Effects in Regional Rain Environments. IEEE Access 2020, 8, 102669–102679. [Google Scholar] [CrossRef]
  13. Pao, W.Y.; Li, L.; Howorth, J.; Agelin-Chaab, M.; Roy, L.; Knutzen, J.; Baltazar-y-Jimenez, A.; Muenker, K. Wind Tunnel Testing Methodology for Autonomous Vehicle Optical Sensors in Adverse Weather Conditions. In Proceedings of the 23. Internationales Stuttgarter Symposium, Proceedings of the ISSYM, Stuttgart, Germany, 4–5 July 2023; Kulzer, A.C., Reuss, H.C., Wagner, A., Eds.; Springer Vieweg: Wiesbaden, Germany, 2023; pp. 13–39. [Google Scholar] [CrossRef]
  14. Pao, W.Y.; Howorth, J.; Li, L.; Agelin-Chaab, M.; Roy, L.; Knutzen, J.; Baltazar-y-Jimenez, A.; Muenker, K. Investigation of Automotive LiDAR Vision in Rain from Material and Optical Perspectives. Sensors 2024, 24, 2997. [Google Scholar] [CrossRef]
  15. Pao, W.Y.; Carvalho, M.; Hosseinnouri, F.; Li, L.; Rouaix, C.; Agelin-Chaab, M.; Hangan, H.; Gultepe, I.; Komar, J. Evaluating Weather Impact on Vehicles: A Systematic Review of Perceived Precipitation Dynamics and Testing Methodologies. Eng. Res. Express 2024, 6, 013001. [Google Scholar] [CrossRef]
  16. Pao, W.Y.; Li, L.; Agelin-Chaab, M.; Roy, L.; Knutzen, J.; Jimenez, A.B.Y.; Muenker, K.; Chakraborty, A.; Komar, J. Driving in the Rain: Evaluating How Surface Material Properties Affect LiDAR Perception in Autonomous Driving; SAE Technical Paper 2025-01-8016; SAE WCX: Detroit, MI, USA, 2025. [Google Scholar] [CrossRef]
  17. Pao, W.Y.; Li, L.; Villeneuve, E.; Whalls, E.; Agelin-Chaab, M.; Gultepe, I.; Komar, J. Perceived Precipitation Intensity Prediction Model Based on Simultaneous Dynamic and Static Observations for Evaluating Weather Impacts on Vehicle Applications. J. Traffic Transp. Eng. 2025, 12, 639–651. [Google Scholar] [CrossRef]
  18. Stern, W.A. An Optimal Speed for Traversing a Constant Rain. Amer. J. Phys. 1983, 51, 815. [Google Scholar] [CrossRef]
  19. Bocci, F. Whether or Not to Run in the Rain. Eur. J. Phys. 2012, 33, 1321–1332. [Google Scholar] [CrossRef]
  20. Carvalho, M.; Hangan, H. Modelling Weather Precipitation Intensity on Surfaces in Motion with Application to Autonomous Vehicles. Sensors 2023, 23, 8034. [Google Scholar] [CrossRef]
  21. Olsen, R.L.; Rogers, D.V.; Hodge, D.B. The aRb Relation in the Calculation of Rain Attenuation. IEEE Trans. Antennas Propag. 1978, 26, 318–329. [Google Scholar] [CrossRef]
  22. Lewandowski, P.A.; Eichinger, W.E.; Kruger, A.; Krajewski, W.F. Lidar-Based Estimation of Small-Scale Rainfall: Empirical Evidence. J. Atmos. Ocean. Technol. 2009, 26, 656–664. [Google Scholar] [CrossRef]
  23. Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the Influence of Rain on LIDAR in ADAS. Electronics 2019, 8, 89. [Google Scholar] [CrossRef]
  24. Pao, W.Y. Methodology Development and Evaluation of Automotive LiDAR Performance in Rain. Ph.D. Thesis, Ontario Tech University, Oshawa, ON, Canada, December 2024. [Google Scholar]
  25. Gaylard, A.P.; Kirwan, K.; Lockerby, D.A. Surface Contamination of Cars: A Review. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2017, 231, 1160–1176. [Google Scholar] [CrossRef]
  26. Tsepov, D.S.; Solomatin, N.S.; Zotov, A.V.; Rastorguev, D.A. Formation Analysis of Soiling for Side Windows and the Rear View Mirrors of Vehicles. IOP Conf. Ser. Mater. Sci. Eng. 2019, 632, 012068. [Google Scholar] [CrossRef]
  27. Jilesen, J.; Gaylard, A.; Escobar, J. Numerical Investigation of Features Affecting Rear and Side Body Soiling. SAE Int. J. Passeng. Cars Mech. Syst. 2017, 10, 299–308. [Google Scholar] [CrossRef]
  28. Fersch, T.; Buhmann, A.; Koelpin, A.; Weigel, R. The Influence of Rain on Small Aperture LiDAR Sensors. In Proceedings of the 2016 German Microwave Conference (GeMiC), Bochum, Germany, 14–16 March 2016. [Google Scholar] [CrossRef]
  29. Lee, D.H.; Lucietto, A.M.; Peters, D.L. Enhanced Scene Recognition and Object Detection for Autonomous Driving Environments Using Machine Learning “Work in Progress” (WIP). In Proceedings of the 2025 ASEE Annual Conference & Exposition, Montreal, QC, Canada, 22–25 June 2025. [Google Scholar] [CrossRef]
  30. Yang, D.; Cai, X.; Liu, Z.; Jiang, W.; Zhang, B.; Yan, G.; Gao, X.; Liu, S.; Shi, B. Realistic Rainy Weather Simulation for LiDARs in CARLA Simulator. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024. [Google Scholar] [CrossRef]
  31. Teufel, S.; Volk, G.; Bernuth, A.V.; Bringmann, O. Simulating Realistic Rain, Snow, and Fog Variations for Comprehensive Performance Characterization of LiDAR Perception. In Proceedings of the 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring), Helsinki, Finland, 19–22 June 2022. [Google Scholar] [CrossRef]
  32. Hawkins, A.J. Amazon’s Zoox Will Test Its Autonomous Vehicles on Seattle’s Rainy Streets. 2021 The Verge. Available online: https://www.theverge.com/2021/10/18/22732813/amazon-zoox-autonomous-vehicles-seattle-rain (accessed on 2 January 2026).
  33. Putting Zoox to the Test: Rain Mitigation. 2022 Zoox. Available online: https://zoox.com/journal/rain-mitigation/ (accessed on 2 January 2026).
  34. Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The Multiple 3D Lidar Dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 20–23 October 2020. [Google Scholar] [CrossRef]
  35. Kim, J.; Park, B.; Roh, C.; Kim, Y. Performance of Mobile LiDAR in Real Road Driving Conditions. Sensors 2021, 21, 7461. [Google Scholar] [CrossRef]
  36. Tang, L.; Shi, Y.; He, Q.; Sadek, A.W.; Qiao, C. Performance Test of Autonomous Vehicle Lidar Sensors Under Different Weather Conditions. Transp. Res. Rec. J. Transp. Res. Board 2020, 2674, 319–329. [Google Scholar] [CrossRef]
  37. Zhang, B.; Ma, X.; Ma, X.; Jia, X.; Zhang, X. The Impacts of Rainfall on MEMS Lidar SNR and Detecting Ability. Infrared Phys. Technol. 2025, 150, 106009. [Google Scholar] [CrossRef]
  38. Rivero, J.V.; Schubert, O.; Kroll, H.; Buschardt, B.; Straub, D. A Stochastic Physical Simulation Framework to Quantify the Effect of Rainfall on Automotive Lidar. SAE Int. J. Adv. Curr. Pr. Mobil. 2019, 1, 531–538. [Google Scholar] [CrossRef]
  39. Walz, S.; Bijelic, M.; Kraus, F.; Ritter, W.; Simon, M.; Doric, I. A Benchmark for Spray from Nearby Cutting Vehicles. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021. [Google Scholar] [CrossRef]
  40. Kille, L.; Strohbücker, V.; Niesner, R.; Sommer, O.; Wozniak, G. Side Mirror Soiling Investigation through the Characterization of Water Droplet Formation and Size behind a Generic Plate; SAE Technical Paper 2024-01-5030; SAE International: Warrendale, PA, USA, 2024. [Google Scholar] [CrossRef]
  41. Borges, P.V.K.; Peynot, T.; Liang, S.; Arain, B.; Wildie, M.; Minareci, M.G.; Lichman, S.; Samvedi, G.; Sa, I.; Hudson, N.; et al. A Survey on Terrain Traversability Analysis for Autonomous Ground Vehicles: Methods, Sensors, and Challenges. Field Robot. 2022, 2, 1567–1627. [Google Scholar] [CrossRef]
  42. Nowakowski, M. Operational Environment Impact on Sensor Capabilities in Special Purpose Unmanned Ground Vehicles. In Proceedings of the 2024 21st International Conference on Mechatronics—Mechatronika (ME), Brno, Czech Republic, 4–6 December 2024. [Google Scholar]
Figure 1. Summary of critical parameters affecting LiDAR performance [18] (reused with permission).
Figure 2. Model-scale wind tunnel setup showing the (a) rain simulation system and sensor position, (b) laser precipitation monitor, and (c) droplets viewed from inside the sensor assembly.
Figure 3. Droplet size distribution comparisons between sample measurements taken from outdoor vehicle experiments, traditional spray nozzle systems, and VeRSA, averaged from both full- and model-scale wind tunnels.
Figure 4. LiDAR sensor assembly with various configurations: (a) flat cover flush with the sensor container, (b) slant cover tilted forward, (c) curved cover with symmetric curvature height-wise, (d) deflector in front of the cover to guide airflow along the cover. Light grey: enclosure for the LiDAR sensor. Blue: the cover. Dark grey: adapter. Yellow: deflector.
Figure 5. LiDAR visibility correlations to perceived rain intensity for different materials in configurations of (a) flat, (b) slant, (c) curve, and (d) flap.
Figure 6. Sample LiDAR vision for different materials and configurations under the same driving-in-rain condition. Points are colored by reflectivity, from green (higher) to red (lower).
Figure 7. Three-dimensional surface plots illustrating LiDAR visibility as a function of perceived rain intensity and surface wettability for (a) flat, (b) slant, (c) curve, and (d) flap configurations.
Figure 8. (a) Coefficient a and (b) Coefficient b of the LiDAR performance prediction model for the flat, slant, curve, and flap configurations.
Table 1. Simulated driving-in-rain conditions, within ±5% of targets.
Each cell lists the simulated rain intensity (mm/h) followed by the D3,0 droplet size (mm).

| Rain Category | 50 km/h | 75 km/h | 100 km/h |
|---|---|---|---|
| Light | 7.8, 0.70 | 16.2, 0.71 | 33.0, 0.80 |
| Moderate | 14.8, 0.70 | 34.2, 0.81 | 100.4, 1.14 |
| Heavy | 35.3, 0.82 | 98.0, 1.14 | 241.3, 1.55 |
Table 2. Empirical values for parameters in Equation (4) to determine coefficient a.
| Configuration | P1 | P2 | P3 | P4 | R² |
|---|---|---|---|---|---|
| Flat | 0.00026 | −0.0470 | 1.399 | 84.72 | 0.84 |
| Slant | 0.00100 | −0.2399 | 16.279 | −255.93 | 0.92 |
| Curve | 0.00030 | −0.0602 | 1.984 | 67.43 | 0.97 |
| Flap | 0.00080 | −0.1671 | 9.875 | −81.06 | 0.95 |
Table 3. Empirical values for parameters in Equation (5) to determine coefficient b.
Table 3. Empirical values for parameters in Equation (5) to determine coefficient b.
λ < 80°λ ≥ 80°
ConfigurationP5(1)P6(1)P5(2)P6(2)
Flat5 × 10−50.00661 × 10−60.0020
Slant1 × 10−40.0021−4 × 10−50.0085
Curve−2 × 10−40.0149−8 × 10−50.0136
Flap−7 × 10−50.01186 × 10−70.0029
Table 4. Model validation with independent dataset at selected driving-in-rain conditions, comparing model prediction and measured LiDAR performance.
| WCA (deg) | Perceived Rain Intensity (mm/h) | Model Prediction (%) | Measured Visibility (%) | Difference (%) |
|---|---|---|---|---|
| 16 | 5 | 92.6 | 81.3 | 12.2 |
| 144 | 5 | 87.1 | 89.7 | 3.0 |
| 97 | 15 | 14.8 | 13.6 | 8.1 |
| 81 | 35 | 25.9 | 23.4 | 9.7 |
| 5 | 100 | 45.7 | 44.1 | 3.5 |
| 38 | 250 | 10.4 | 12.3 | −18.3 |
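The tabulated parameters can be combined into a working predictor if one assumes, consistent with the parameter counts in Tables 2 and 3 and the exponential degradation described in the abstract, that Equation (4) is a cubic polynomial in the water contact angle (WCA, λ), Equation (5) is linear in λ with a break at 80°, and visibility follows V = a·exp(−b·R) for perceived rain intensity R. A minimal sketch under these assumptions for the flat configuration (function and variable names are illustrative, not from the paper):

```python
import math

# Empirical parameters for the flat cover configuration (Tables 2 and 3).
# ASSUMPTIONS: Equation (4) is treated as a cubic polynomial in WCA and
# Equation (5) as a piecewise-linear one; neither equation is shown here.
FLAT_A = (0.00026, -0.0470, 1.399, 84.72)   # P1..P4 of Equation (4)
FLAT_B_LO = (5e-5, 0.0066)                  # P5, P6 for WCA < 80 deg
FLAT_B_HI = (1e-6, 0.0020)                  # P5, P6 for WCA >= 80 deg

def predict_visibility(wca_deg: float, rain_mm_h: float) -> float:
    """LiDAR visibility (%) relative to dry conditions, flat configuration."""
    p1, p2, p3, p4 = FLAT_A
    a = p1 * wca_deg**3 + p2 * wca_deg**2 + p3 * wca_deg + p4
    p5, p6 = FLAT_B_LO if wca_deg < 80 else FLAT_B_HI
    b = p5 * wca_deg + p6
    # Exponential degradation with perceived rain intensity R: V = a * exp(-b * R)
    return a * math.exp(-b * rain_mm_h)
```

With these assumptions, `predict_visibility(16, 5)` evaluates to about 92.6 and `predict_visibility(5, 100)` to about 45.7, matching the model-prediction column of Table 4 to within the rounding of the tabulated parameters.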

Share and Cite

MDPI and ACS Style

Pao, W.Y.; Li, L.; Agelin-Chaab, M.; Lang, H. A Multivariable Model for Predicting Automotive LiDAR Visibility Under Driving-In-Rain Conditions. Appl. Sci. 2026, 16, 1835. https://doi.org/10.3390/app16041835
