Light detection and ranging (LiDAR) sensors are among the most promising options for automated driving. They provide range and intensity information from their surroundings. From the obtained data, objects can be not only detected but also recognized (e.g., pedestrians, other vehicles). LiDAR development has been intense in recent years and, for example, their resolution has improved significantly. However, they still have weaknesses, especially in adverse weather conditions. In order to tackle the automated driving challenges of (1) objects on the road, (2) traffic jams ahead, (3) pedestrians crossing the road, (4) enhanced environmental awareness, and (5) drivable area awareness in all conditions, the performance of LiDARs and their limits need to be investigated and, where possible, solutions found for these restrictions.
Primarily, two kinds of methods have been used to investigate and validate the performance of automotive LiDARs: Mathematical models and simulations, and indoor tests in fog chambers. In both methods, it is easy to reproduce and control the test environment and conditions. Additionally, there is no need to use real vehicles. This saves fuel and working hours, and is safer, especially when considering some risky scenarios in traffic.
In [1], Rasshofer et al. investigated the physical principles behind the influence of weather on LiDARs. Based on these, they developed a novel electro-optical laser radar target simulator system. Its idea is to reproduce the optical return signals measured by a reference laser radar under adverse weather conditions, replicating their pulse shape, wavelength, and power levels. Real-world measurements in fog, rain, and snow were performed to verify their models, though this proved somewhat challenging as the conditions in real traffic are neither predictable nor stable.
Hasirlioglu et al. [2] presented a model to describe the impact of rain. The distance between object and sensor is divided into layers, and the effects of each layer (absorption, transmission, and reflection of light) are combined. To validate their theoretical model, they built a rain simulator, which was in turn validated by comparing it to statistical data on real rain.
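The layered idea can be illustrated with a minimal transmission sketch (an illustration of the general principle, not the authors' actual model): if each layer between sensor and object passes a fraction of the light, the power returning to the receiver scales with the product of the per-layer transmittances, squared for the two-way path. The transmittance values below are made up for illustration.

```python
# Minimal sketch of a layered attenuation model (illustrative values only).
# Each layer between sensor and target transmits a fraction t of the light;
# absorption and reflection losses per layer are lumped into (1 - t) here.
# The received power scales with the product over layers, squared because
# the pulse traverses every layer twice (out to the target and back).

def received_fraction(layer_transmittances):
    one_way = 1.0
    for t in layer_transmittances:
        one_way *= t
    return one_way ** 2  # two-way path

# Example: five identical rain layers, each transmitting 95% of the light.
print(received_fraction([0.95] * 5))
```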
Fersch et al. investigated the influence of rain on LiDARs with a small aperture [3]. They developed two models to examine the effect of two potentially performance-degrading scenarios: Discrete raindrops in the proximity of the laser aperture and a protective window getting wet. In conclusion, neither of the scenarios has a significant effect on LiDAR performance.
In [4], Reway et al. presented a hardware-in-the-loop testing method with real automotive cameras connected to a simulated environment. In this environment, they were able to create different traffic scenarios and vary the environmental conditions, including dense fog. The method was eventually used to validate an ADAS platform available on the market. The obtained results matched the specification of the ADAS platform, showing the applicability of the developed testing method.
Goodin et al. developed a mathematical model for the performance degradation of LiDAR as a function of rain rate [5]. The model was incorporated into a 3D autonomous vehicle simulator with a detailed physics-based LiDAR simulation, known as the Mississippi State University autonomous vehicle simulator (MAVS). According to their tests, run in simulations of an obstacle-detection system with a roof-mounted LiDAR, heavy rain does not seem to decrease the system's performance significantly.
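As a hedged illustration of how such rain-rate-dependent models are commonly structured (not necessarily the exact MAVS formulation), the extinction coefficient is often taken as an empirical power law of the rain rate, and the returned power then decays exponentially over the round-trip path. The constants below are illustrative, not fitted values.

```python
import math

# Hedged sketch of a rain-rate-dependent attenuation model. The extinction
# coefficient alpha (1/m) is modeled as an empirical power law of the rain
# rate R (mm/h), and the return decays as exp(-2 * alpha * range) for the
# two-way path. A and B are illustrative constants, not fitted values.

A, B = 0.01, 0.6

def extinction(rain_rate_mm_h):
    return A * rain_rate_mm_h ** B

def relative_return(rain_rate_mm_h, range_m):
    return math.exp(-2.0 * extinction(rain_rate_mm_h) * range_m)

# Heavier rain (or a longer range) yields a weaker return.
print(relative_return(5.0, 20.0))
print(relative_return(25.0, 20.0))
```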
Indoor fog chambers are used both for experimentally investigating the performance of sensors [6] and for validating developed models against real conditions [1]. Many groups have built their own chambers to mimic fog or rain, and some have used the state-of-the-art fog chamber of CEREMA. Depending on the chosen chamber, the methods for modifying the weather parameters inside it (e.g., visibility) vary. CEREMA's fog chamber offers a wide variety of adjustable parameters (fog particle size, meteorological visibility, and rain particle size and intensity). In self-built chambers, the fog density is primarily controlled by the number of fog layers.
Tests performed in fog chambers are always static: Neither the sensors nor the targets are moving. The targets used are usually “natural” targets, i.e., vehicles and mannequin puppets, but calibrated targets have occasionally been used [6].
Hasirlioglu et al. [7] built their own rain simulator, consisting of individually controlled rain layers. The rain intensity is adjusted by varying the number of activated rain layers. By increasing the number of active rain layers, they find the point at which the sensor is no longer able to differentiate the target from the rain. This is the basis of their benchmark methodology: The number of rain layers at a specific rain intensity defines the performance of the tested sensor. The rain simulator and methodology can also be used to validate theoretical models. Basic radar, LiDAR, and camera sensors were tested to show the influence of rain on their performance. As a target, they used the Euro NCAP vehicle target, a standardized test object.
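The layer-count benchmark can be sketched as a simple loop (using a stand-in detection criterion, not the authors' actual one): layers are activated one by one, and the count at which the simulated return drops below a detection threshold is the sensor's score.

```python
# Hedged sketch of the layer-count benchmark idea. Each active layer
# attenuates the two-way signal by per_layer_transmittance squared; the
# detection criterion here is a simple threshold on the relative signal,
# standing in for the real "target no longer differentiable" condition.

def benchmark_layers(per_layer_transmittance, detection_threshold,
                     max_layers=50):
    signal = 1.0
    for n in range(1, max_layers + 1):
        signal *= per_layer_transmittance ** 2  # two-way loss per layer
        if signal < detection_threshold:
            return n  # target lost at this layer count
    return max_layers

print(benchmark_layers(0.9, 0.1))
```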
Hasirlioglu et al. [8] continued their work by creating a fog simulator. Its principle is similar to that of their rain simulator, as it consists of individually controlled fog layers. Sensor disturbance is, again, increased by activating more and more layers. They did not use any glycol- or glycerin-based fluid to create the fog, thus ensuring the natural absorption and scattering properties of real fog. The principle for determining the performance of sensors is also similar to that of their rain simulator: When the sensor is no longer able to differentiate its target from the fog, its limit is reached. The number of active fog layers serves as the assessment criterion.
In their study, Kim et al. [9] concentrated on how fog affects the visibility of a safety sensor for a robot. Hence, the working environment is not traffic, but the sensors used are the same as in the traffic environment. In their artificial fog chamber, the density of the fog and the brightness of the chamber are controlled to meet desired target values, and a laser device measures the spectral transmission. The visibility of a Velodyne VLP-16 was investigated visually, and their result was that its performance decreases the denser the fog is.
Daniel et al. [10] presented a test setup with a sensor set consisting of a low-THz imaging radar, a LiDAR, and a stereo optical camera. Their aim was to highlight the need for low-THz radar as part of the automotive sensor setup, especially in weather conditions where optical systems fail. They built their own fog chamber inside a large tent in order to maintain a dense fog, which was created with a commercial smoke machine. As targets, they had two dummies as pedestrians, a metallic trolley, and a reference target. Visibility was measured with a Secchi disc. They recorded images at three fog density levels. Visual examination of the results showed that fog does not affect the performance of radar much, whereas the visibility of LiDARs and cameras decreases as fog density increases.
Bijelic et al. [6] evaluated four LiDARs from Velodyne and Ibeo in CEREMA's climate chamber, where it is possible to produce two different fog droplet distributions. The density of the fog is continuously controlled, keeping the fog stable. The test scene consisted of various targets, e.g., reflector posts, pedestrian mannequins, a traffic sign, and a car. The LiDAR sensors were located at pre-selected positions on a test vehicle, which was placed at the beginning of the chamber: The Velodyne sensors were mounted on top of the car and the Ibeo sensors on its bumper. The static scene was recorded with all sensors at different fog densities and droplet distributions.
These measurements were visually investigated, leading to the general conclusion that fog reduces the maximum viewing distance. However, when the two manufacturers' sensors were compared, the Ibeo LiDARs were able to detect targets at a lower visibility than the Velodyne sensors. Only a small difference was found between the influence of advection and radiation fog, so the evaluation was continued using only advection fog (large droplets).
Bijelic et al. further evaluated the performance of their setup using calibrated Zenith Polymer diffuse-reflective targets with reflectivities of 5%, 50%, and 90%. These were installed on a pole that a person held away from themselves. With this setup, they obtained the maximum viewing distance for each sensor. These measurements further verify that fog decreases the maximum viewing distance drastically. Using multiple echoes and adaptive noise levels, the performance improves, but still not to a level sufficient for autonomous driving.
In their study, Kutila et al. [11] evaluated and compared LiDARs with two different wavelengths, 905 nm and 1550 nm, in the CEREMA fog chamber. The 1550 nm operating wavelength is justified by its potential benefits: Less scattering in fog, and more optical energy can be used because of the more relaxed eye safety regulations. However, in their study, the option for more optical power was ignored and they concentrated on measuring the scattering in fog. To analyze the visibility, they selected targets that represented typical road traffic scenarios: Mannequins as pedestrians, a traffic sign, and a car. To determine the effect of wavelength, a reference visibility was chosen, the reference energy was measured at that visibility, and the reflected energy values at other visibilities were compared to this reference. Despite taking into account all the possible differences between the 905 nm and 1550 nm LiDARs, they were not able to find significant differences in the reflected energies.
Filgueira et al. studied the effects of real rain on LiDAR measurements outdoors [12]. They set up a Velodyne VLP-16 outside to measure six areas with different materials and surfaces. Rain attribute values were obtained from a nearby meteorological station and were assumed constant over the area under examination. Performance was evaluated by comparing the range and intensity measurements to a reference situation with no rain. Their results show variations in range measurements and decreases in returned intensity values as the rain density increases.
Even though the effects of adverse weather have been intensively investigated, the studies have concentrated on rain and fog, leaving out other potentially critical weather conditions. For example, the effects of snowfall and powder snow are still unknown, although estimates based on existing knowledge can be made. Still, no artificial snow simulators exist, nor have outdoor tests been performed in snowy conditions.
To address this lack of knowledge, we performed tests with various LiDARs inside a fog chamber and outdoors in snowy conditions. These tests and their results are presented in the next two sections. The aim is to keep the testing process practical in order to address the major obstacles on the roadmap towards automated driving.
2. Materials and Methods
Two types of tests were performed: Indoors in the CEREMA fog chamber and outdoors in northern Finland and Sweden. The sets of sensors were nearly the same in both testing environments.
The set of LiDARs used in the tests consisted of the Velodyne PUCK VLP-16, Ouster OS-1-64, Cepton HR80T and HR80W, and Ibeo Lux. In the outdoor tests, the Robosense RS-LiDAR-32 was also included, but it did not arrive in time for the fog chamber tests. The selected set provided a wide range of LiDARs with different designs from various manufacturers. Their specifications are presented in more detail in Table 1. The Velodyne, Ouster, and Robosense are of similar design, with a cylinder-shaped casing and a constantly rotating structure. Cepton's design differs from this, as it uses two mirrors and covers the field of view with a sine-wave-like pattern. The other sensors use layers on top of one another to create the vertical field of view. Ouster operates at a different wavelength (850 nm) than the rest of the sensors, which use wavelengths around 905 nm. All sensors are specified to work at sub-zero temperatures, but the Velodyne and Robosense only down to −10 °C, which may not be enough for the snowy tests performed in a colder environment.
Fog and rain tests were performed in the Clermont-Ferrand laboratory, where a fog chamber is located. It is a state-of-the-art fog chamber in which it is possible to control and reproduce the fog's particle size, the meteorological visibility, and the rain's particle size and intensity [13]. Here, the meteorological visibility means the visibility distance, a practical parameter describing fog characteristics in relation to light transmission (see the more detailed definition in [13]).
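For reference, meteorological visibility is conventionally tied to the atmospheric extinction coefficient through the Koschmieder relation with a 5% contrast threshold; the chamber's exact definition is given in [13], but the standard convention can be sketched as follows.

```python
import math

# Koschmieder relation (standard convention, possibly differing in detail
# from the chamber's definition in [13]): visibility V = -ln(C) / sigma,
# where C = 0.05 is the contrast threshold and sigma the extinction
# coefficient in 1/m, so V ≈ 3 / sigma.

CONTRAST_THRESHOLD = 0.05

def visibility_m(extinction_per_m):
    return -math.log(CONTRAST_THRESHOLD) / extinction_per_m

def extinction_for_visibility(v_m):
    return -math.log(CONTRAST_THRESHOLD) / v_m

# A 20 m visibility fog corresponds to an extinction of about 0.15 1/m.
print(extinction_for_visibility(20.0))
```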
LiDAR performance tests in foggy conditions were executed in late November 2018. The LiDARs were mounted side-by-side facing the targets in the tunnel, as shown in Figure 1. To reduce interference, only one LiDAR at a time was turned on and measuring. The sensors were used with their default settings and configuration; no additional adjustments were made.
The main targets were two pairs of optically calibrated plates with reflectivities of 90% and 5% (white and black, respectively). These are similar to those used by LiDAR manufacturers when measuring their sensors' performance. The targets were placed in pairs, white and black side by side. The larger ones (0.5 × 0.5 m) were 1 m behind the smaller targets, which were also mounted lower so that they would not block the view of the larger targets. As a result, the larger targets were 0.5 m further away than the indicated distance (e.g., a target at 10 m means that the smaller targets were at 9.5 m and the larger ones at 10.5 m). These four targets were placed alternately at 10 m, 15 m, and 20 m. At first, a measurement with no fog was made to provide a reference point for each LiDAR and distance. The fog densities used in the tests had visibilities of 10 m, 15 m, 20 m, 25 m, 30 m, 40 m, and 50 m with the smaller droplet size.
The data collected from the fog chamber were then processed so that a region of interest (ROI) was selected for each combination of LiDAR, target distance, and target. Horizontally and vertically, this ROI covered only the target, but it included a region extending 1.5 m ahead of and behind the target. In this way, we were able to capture possible outliers and determine how much the adverse weather affected the LiDAR's range-measuring performance. From the selected echo points, the standard deviation of the distance was calculated. We primarily used the larger pair of targets to have more echo points for the evaluations. The only exception was Ibeo's measurements at 20 m, where the smaller targets blocked the view of the larger ones, decreasing the number of echoes excessively.
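A minimal sketch of this evaluation step, with illustrative ROI bounds (the exact per-target limits used in the tests are not reproduced here):

```python
import numpy as np

def roi_range_std(points_xyz, target_distance, y_limits, z_limits,
                  range_slack=1.5):
    """Standard deviation of range for echoes inside the target ROI.

    The y/z limits stand in for the target's horizontal and vertical
    extent; range_slack allows 1.5 m ahead of and behind the target so
    that weather-induced outliers are captured.
    """
    y, z = points_xyz[:, 1], points_xyz[:, 2]
    r = np.linalg.norm(points_xyz, axis=1)              # range per echo
    mask = (
        (y >= y_limits[0]) & (y <= y_limits[1])         # horizontal extent
        & (z >= z_limits[0]) & (z <= z_limits[1])       # vertical extent
        & (np.abs(r - target_distance) <= range_slack)  # range slack
    )
    return r[mask].std()

# Example with synthetic echoes: three on-target returns and one far outlier
# (the 30 m point falls outside the range slack and is excluded).
pts = np.array([[10.0, 0.0, 0.0], [10.2, 0.0, 0.0],
                [9.8, 0.0, 0.0], [30.0, 0.0, 0.0]])
print(roi_range_std(pts, 10.0, (-1.0, 1.0), (-1.0, 1.0)))
```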
The turbulent snow tests took place outdoors in northern Finland during winter. The location and timing were chosen because of their high probability of snowy conditions. The LiDARs were placed on the roof of VTT's test vehicle Martti (Figure 2 and Figure 3), except for three Ibeo LiDARs, which were installed in the car's bumper. Tests were performed on the airport runway in Sodankylä. There was about 20 cm of sub-zero snow on the road surface and the temperature was around −4 °C. At first, a reference drive was executed in calm conditions, i.e., with no turbulent snow from a leading vehicle present. Then, the same route was driven with another vehicle, which Martti followed as closely as possible at distances of 15–25 m in order to be surrounded by the powder snow cloud rising from the road surface. Both test drives were done for each LiDAR separately so that the sensors would not disturb one another.
The results were saved to the hard drives of the vehicle with GPS timestamps and coordinates, which are used for synchronizing the data between the different sensor devices.
The LiDAR performance in turbulent snow and freezing conditions was visually evaluated for each sensor separately. The point cloud data are difficult to analyze with any single metric, since the aim is to support pattern recognition algorithms. Pattern recognition requires good coverage of points reflected from the surface even if there is material like snow in front of the sensor. The most practical method of analysis is therefore to visually estimate whether the number of points is sufficient for estimating the type and size of an object (e.g., passenger car, pedestrian, or building).
In conclusion, all tested LiDARs performed worse in fog and turbulent snow than in clear weather conditions, as expected. In the fog chamber tests, every sensor's performance decreased the denser the fog and the further the target. In a direct comparison, Cepton's different approach proved the most efficient in the stationary fog chamber tests, but its point clouds are quite noisy. No significant differences were found between the other sensors. The darker target was more difficult for the sensors to detect than the lighter one.
In the turbulent snow tests, all tested sensors were blocked by the powder snow and their viewing distance was shortened. However, from these tests, it is not possible to say whether any sensor performed absolutely better than the others. The powder snow itself was not visible in the data of two of the sensors, but its presence was observable from the missing echo points in front of and behind the test vehicle. Only Robosense produced points that directly showed the presence of snow. It is notable that not only the leading vehicle but also the test vehicle itself causes the snow to whirl. This makes detection in that direction difficult as well and shortens the viewing distance. Temperatures a few degrees centigrade below zero did not affect the performance of the LiDARs. However, based on these tests, we cannot say how well they would perform in an even colder environment.
There are no ideal fog, rain, or snow conditions that create a single threshold beyond which objects are no longer visible. Moreover, this is highly dependent on the LiDAR type. For example, with a multilayer LiDAR (e.g., 16 or 32 layers), the influence of one single laser spot is less significant. However, in the future, the aim is to investigate whether the parameters and reliability levels of object recognition can be adapted more precisely according to the density of the atmospheric changes. This would be essential, especially when considering sensor data fusion and understanding which sensor data the automated vehicle should rely on. At this level of LiDAR performance, fully autonomous vehicles that rely on accurate and reliable sensor data cannot be guaranteed to work in all conditions and environments. Powder snow on the roads is very common in colder environments for much of the year and is, thus, not ignorable. Cold temperatures may also bring other, as yet unknown, difficulties to the sensors used. This encourages continued investigation of LiDAR performance in these challenging conditions.