Article

Layered SOTIF Analysis and 3σ-Criterion-Based Adaptive EKF for Lidar-Based Multi-Sensor Fusion Localization System on Foggy Days

1 College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400030, China
2 School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(12), 3047; https://doi.org/10.3390/rs15123047
Submission received: 6 April 2023 / Revised: 23 May 2023 / Accepted: 8 June 2023 / Published: 10 June 2023

Abstract

The detection range and accuracy of light detection and ranging (LiDAR) systems are sensitive to variations in fog concentration, leading to safety of the intended functionality (SOTIF)-related problems in the LiDAR-based multi-sensor fusion localization system (LMSFLS). However, because fog conditions cannot be controlled, it is almost impossible to quantitatively analyze the effects of fog on the LMSFLS in a realistic environment. Therefore, in this study, we conduct a layered quantitative SOTIF analysis of the LMSFLS on foggy days using fog simulation. Based on the analysis results, we identify the component-level, system-level, and vehicle-level functional insufficiencies of the LMSFLS, the corresponding quantitative triggering conditions, and the potential SOTIF-related risks. To address the SOTIF-related risks, we propose a functional modification strategy that incorporates visibility recognition and a 3σ-criterion-based variance mismatch degree grading adaptive extended Kalman filter. The visibility of a scenario is recognized to judge whether the measurement information of the LiDAR odometry is disturbed by fog, and the proposed filter is adopted to fuse the abnormal measurement information of the LiDAR odometry with that of the IMU and GNSS. Simulation results demonstrate that the proposed strategy can inhibit the divergence of the LMSFLS, improve the SOTIF of self-driving cars on foggy days, and accurately recognize the visibility of the scenarios.

1. Introduction

Accurate localization information is crucial for autonomous vehicles and affects subsequent decision planning and control. Compared with a single sensor, multi-sensor fusion technology has many advantages, such as more channels to obtain information, less information uncertainty, and more convenience for fault detection [1,2]. Therefore, it is considered the first choice for autonomous vehicle localization technology. Light detection and ranging (LiDAR), based on the time-of-flight principle, is widely used in autonomous vehicle localization and environmental sensing. It has the advantages of a wide detection range, high accuracy, not being easily affected by ambient lighting conditions, and a 3D perception of the surrounding environment [3,4]. Therefore, the industry has widely adopted LiDAR-based multi-sensor fusion localization schemes, such as Google Waymo and Baidu Apollo.
However, adverse weather conditions, such as rain, snow, and fog, can degrade the detection performance of LiDAR and limit the all-weather operation of autonomous vehicles [5,6]. Previous studies [7,8] have shown that heavy rain interferes with LiDAR in a similar way to fog, while fog interferes with LiDAR more severely than rain or snow. Therefore, this study investigates the interference of fog with LiDAR.
The laser beam emitted by LiDAR is absorbed, scattered, and refracted by fog, which attenuates the range and increases the uncertainty of LiDAR detection. Fog interference on LiDAR causes the LiDAR-based multi-sensor fusion localization system (LMSFLS) to provide incorrect localization information to an autonomous vehicle, which ultimately affects the safety of the vehicle. Such safety problems are caused by the performance insufficiencies of LiDAR on foggy days and are not caused by faults in LiDAR. According to ISO 21448 [9], the safety of intended functionality (SOTIF) is defined as the “absence of unreasonable risk due to hazards resulting from functional insufficiencies of the intended functionality or its implementation.” Safety problems caused by functional insufficiencies of LiDAR on foggy days fall within the SOTIF domain.
Foggy conditions are a common occurrence in the real world, and LiDAR is susceptible to functional deficiencies caused by fog, which can lead to SOTIF-related risks of LMSFLS. However, due to weather constraints, it is challenging to quantitatively analyze the impact of fog interference on LMSFLS in natural environments. Therefore, the primary objectives of this study are to (1) utilize a fog simulation approach to quantitatively simulate and assess the impact of fog at varying concentrations on LMSFLS, (2) quantitatively analyze the simulation results to identify component-level, system-level, and vehicle-level functional insufficiencies of the LMSFLS, associated triggering conditions, and potential SOTIF-related harms on foggy days, and (3) propose a functional modification strategy to improve the SOTIF of LMSFLS under fog disturbance. The primary contributions of this research are:
  • A layered quantitative SOTIF analysis method was proposed for the LMSFLS in foggy environments based on ISO 21448. The method includes static detection analysis for LiDAR and localization performance analysis for LMSFLS. Based on this, we identified the potential SOTIF-related harms and quantitative triggering conditions of LMSFLS on foggy days by quantitatively analyzing the component-level, system-level, and vehicle-level functional insufficiencies caused by fog in different concentrations.
  • A functional modification strategy was proposed to address the SOTIF-related harms of LMSFLS. In this strategy, visibility recognition was first introduced to identify whether LiDAR odometry is interfered with by fog. Subsequently, the 3σ-criterion-based variance mismatch degree grading adaptive extended Kalman filter (3σ-VMDG-AEKF) was employed to accurately isolate abnormal measurement information in LiDAR odometry through sequential filtering and variance mismatch degree grading.

2. Related Work

Our research involves the effects of adverse weather on LiDAR, LiDAR-based multi-sensor fusion localization methods, and the SOTIF of automated driving functions. This section reviews the current status of these research areas.

2.1. Effects of Adverse Weather on LiDAR

In recent years, industries have paid increasing attention to the effects of adverse weather conditions on vehicle sensors. Some research results have been introduced in detail in [7,10,11]. Modeling and testing are the two main methods used to study the effects of fog on LiDAR. Modeling methods include physics-based and data-driven modeling, and testing methods include indoor and outdoor testing.
Several studies have been conducted on modeling based on physical mechanisms. The research in [8,12,13] modeled the interference of adverse weather with LiDAR according to its physical mechanism. Hahner et al. [14] modeled the effects of fog on LiDAR detection according to the physical mechanism of LiDAR pulse transmission and added these effects to clear-weather LiDAR point-cloud datasets to generate fog-affected datasets; the latter were then used to train a neural network to study LiDAR target detection performance under foggy conditions. Zhao et al. [15] proposed a data-oriented LiDAR model composed of geometric and physical models that captured signal attenuation and unwanted raw data in rain, snow, and fog. The studies above mainly examined the effects of fog on LiDAR using theoretical models, with experimental tests used to verify the accuracy of the proposed models. Other scholars, however, have used experimental data directly. Kutila et al. [16] conducted rain, snow, and fog attenuation tests on LiDAR under outdoor conditions at Sodankylä airport and pointed out that LiDAR detection performance is sensitive to variations in fog concentration. Refs. [17,18] used the Cerema chamber to control fog concentration and test the effects of fog at different concentrations on different LiDARs, and they pointed out that increasing the transmit power of LiDAR can improve its detection range. In [19,20,21], the Cerema chamber test data were used to train machine-learning models that predict fog-induced attenuation of LiDAR detection performance. These studies focused on the effects of fog on LiDAR detection performance or target recognition and provided essential references for our subsequent research.

2.2. LiDAR-Based Multi-Sensor Fusion Localization

LiDAR 3D point cloud feature extraction is a critical step in LiDAR localization, affecting localization accuracy and real-time performance [22]. Features such as road signs, lane markings, guard-rail reflectors, and road markings are used for high-precision localization on urban roads in [23,24,25,26]. Steinke et al. [27] proposed a feature-extraction method based on geometric fingerprinting of a point cloud and applied it to an IMU-LiDAR fusion localization system to achieve centimeter-level localization accuracy. Liu et al. [28] proposed a feature extraction network, YOLOv5-Tassel, based on YOLOv5 for extracting complex and variable features. Yin et al. [29] proposed a localization framework based on 3D LiDAR, in which LocNet was used for 3D point cloud feature recognition, improving localization efficiency and accuracy. Some studies on LiDAR-based localization used map matching. Lu et al. [30] presented the L3-Net-based LiDAR localization system, which achieved centimeter-level localization accuracy. Chen et al. [31] exploited range images generated from 3D LiDAR scans to localize mobile robots or autonomous cars in a map of a large-scale outdoor environment represented by a triangular mesh. Regarding fusion localization methods, Xiong et al. [32] proposed a robust estimation method for automated vehicle sideslip angle and attitude based on parallel adaptive Kalman filters combining GNSS and INS. Xia et al. [33,34] estimated the yaw alignment error and velocity error of the vehicle using a Kalman filter and then employed a consensus Kalman filter to synthesize the vehicle kinematics and dynamics and estimate the vehicle's sideslip angle. These studies demonstrate that Kalman-filter-based fusion methods can accurately estimate the vehicle's state parameters; therefore, they have been widely adopted in LiDAR-based fusion localization systems. Zubača et al. [35] proposed an extended H∞ filter with an adaptive innovation sequence to fuse the measurement information of LiDAR, IMU, and other vehicle dynamic sensors to improve the robustness and accuracy of vehicle pose estimation. Maaref et al. [36] proposed a lane-level localization method that combines LiDAR odometry and cellular pseudoranges. However, these studies mainly focused on improving the localization performance of the LMSFLS in clear or rainy weather without considering foggy days. Therefore, in this study, we developed a foggy point-cloud simulation method to introduce fog interference into the LMSFLS.

2.3. SOTIF of the Automated Driving Function

With the demands of SOTIF for autonomous vehicles, SOTIF-related studies on automated driving functions have emerged. SOTIF analysis methods combined with system analysis theories, such as Systems-Theoretic Process Analysis (STPA) and Model-Based Systems Engineering (MBSE), were presented in [37,38,39,40] and applied to autonomous emergency braking (AEB) systems. Both [41] and [42] studied the SOTIF-related problems of lane-keeping assist (LKA) systems and proposed solutions. Guo et al. [43] studied the SOTIF-related problems of human misuse caused by driver distrust of autopilot systems and proposed a path-planning method based on model predictive control that accounts for the degree of driver confidence. Huang et al. [44] proposed a systematic identification method for triggering events arising from system performance limitations and human misuse; on this basis, they offered SOTIF safety analysis and verification procedures and applied them to an L3 autonomous vehicle. Unlike the above studies, Wang et al. [45] proposed a robust non-fragile fault-tolerant control strategy as a risk reduction method to ensure the SOTIF of cooperative adaptive cruise control (CACC) under system uncertainty, multi-source perturbations, and controller perturbations.
In the abovementioned studies, scholars proposed SOTIF analysis processes based on ISO 21448 for AEB, LKA, CACC, and other driving assistance functions by directly using system analysis theories such as STPA or MBSE. These processes focus on the identification of qualitative triggering conditions and human misuse. However, quantitative SOTIF analysis methods for the SOTIF-related problems caused by fog in the LMSFLS are lacking. Therefore, in this study, the component-level, system-level, and vehicle-level functional insufficiencies of the LMSFLS caused by fog are quantitatively analyzed to identify the SOTIF-related harms and quantitative triggering conditions, and functional modification strategies for these SOTIF-related problems are then proposed to improve the SOTIF of the LMSFLS on foggy days.

3. Fog Interference Simulation Method of LMSFLS

In this section, we explain in detail our approach for obtaining localization information under fog interference. Section 3.1 introduces the LMSFLS architecture into which fog interference is injected, and Section 3.2 shows how the foggy point clouds are generated.

3.1. The Architecture of LMSFLS with Fog Interference Introduced

The vehicle-mounted LMSFLS used in this study comprised LiDAR, INS, and GNSS sensors, and the fusion algorithm adopted was the extended Kalman filter (EKF). The information source for each sensor was as follows. The 360° circumnavigation LiDAR was installed on the top of the vehicle, and point clouds were generated by scanning a scenario. INS and GNSS obtained the required information directly from the vehicle.
We propose an LMSFLS architecture that introduces fog interference, as illustrated in Figure 1. First, clear point clouds of the scenario are generated using the LiDAR numerical model. Second, the clear point clouds and the visibility are input into the foggy point-cloud generation model (FPCGM) to generate the foggy point clouds, which account for the attenuation and noise effects of fog in the scenario. Third, the foggy point clouds are input into the point-cloud registration module, and the registered foggy point clouds are then input into the LiDAR odometry to solve for the pose of the ego vehicle. Finally, we fuse the measurement information from the LiDAR odometry, INS pre-integration, and GNSS to obtain the state estimate of the ego vehicle. Through these four steps, fog interference is introduced into the LMSFLS, which can therefore output both the LiDAR detection information and the LMSFLS localization information degraded by fog.
A discrete-time state-space nonlinear model of the LMSFLS can be expressed as follows:
$X_k = f(X_{k-1}) + W_{k-1}$ (1)
$Z_k = h(X_k) + M_k$ (2)
where $f$ is the state transition function, $h$ is the observation function, $X = [p_{sv}\ v_{sv}\ \varphi_{sv}]^T$ is the state vector, and $Z = [p_{ov}\ v_{ov}\ \varphi_{ov}]^T$ is the observation vector. In both $X$ and $Z$, $p = [p_x\ p_y\ p_z]^T$ and $v = [v_x\ v_y\ v_z]^T$ are the position and velocity in the navigation coordinate system, and $\varphi = [\varphi_x\ \varphi_y\ \varphi_z]^T$ is the Euler angle of attitude in the vehicle coordinate system; $W$ is the process noise vector, $M$ is the observation noise vector, and $k$ is the time step. The EKF algorithm is used as the data fusion algorithm, where INS pre-integration is treated as the prediction process, and the measurements from the LiDAR odometry and GNSS are regarded as the observation process. Please refer to Appendix A for details on LiDAR odometry, INS pre-integration, and data fusion.
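For concreteness, the minimal sketch below (Python/NumPy) lays out the nine-dimensional state and observation structure of Equations (1) and (2); the covariance values are illustrative placeholders, not the sensor parameters of Table 2.

```python
import numpy as np

# State X = [p_sv, v_sv, phi_sv]^T and observation Z = [p_ov, v_ov, phi_ov]^T,
# each 9-dimensional: position (m) and velocity (m/s) in the navigation frame,
# Euler attitude (rad) in the vehicle frame.
p_sv = np.zeros(3)                  # [p_x, p_y, p_z]
v_sv = np.array([25.0, 0.0, 0.0])   # 90 km/h = 25 m/s, the speed used in Section 4.2
phi_sv = np.zeros(3)                # [phi_x, phi_y, phi_z]
X = np.concatenate([p_sv, v_sv, phi_sv])   # 9-dimensional state vector

# Covariances of the noise vectors W ~ N(0, Q) and M ~ N(0, R);
# the diagonal entries are placeholders, not the values of Table 2.
Q = np.diag([0.01] * 3 + [0.10] * 3 + [1e-4] * 3)
R = np.diag([0.25] * 3 + [0.04] * 3 + [1e-3] * 3)
```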

3.2. Foggy Point Clouds Generation

Based on existing LiDAR attenuation [46], physical [20], and noise [47] models, we propose the FPCGM to generate foggy point clouds. The specific process for converting clear point clouds into foggy point clouds is shown in Algorithm 1. Please refer to Appendix B for a detailed explanation of Algorithm 1.
Algorithm 1: Generation of foggy point clouds
1: Initialization: input visibility and one frame of clear point cloud that includes $n$ points
2: for $i < n$ do
3:   Calculate $\gamma$ using (A9)
4:   Calculate $P_r^i$ using (A10)
5:   Calculate $SNR^i$ using (A17)
6:   if $SNR^i > SNR_0$ then
7:     Calculate $\sigma_R^i$ and add it to $x^i$
8:   else
9:     the $i$th point is invalid
10:  end if
11: end for
12: return one frame of foggy point cloud
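As an illustration, the following Python sketch realizes Algorithm 1 with the Appendix B formulas. The lumped constants (`C_L`, `PHOTON_GAIN`, `N_B`, `N_D`, `F`, and the laser wavelength) are assumed placeholders rather than the calibrated values of Table 1; `SNR_0` = 68.3 and the constant `K_SIGMA` are chosen so that the threshold pair ($SNR_0$ = 68.3, $\sigma_R$ = 0.12 m) of Section 4.1 is reproduced.

```python
import numpy as np

LAMBDA_UM = 0.905     # laser wavelength in micrometers (assumed)
C_L = 1.0e-4          # lumped LiDAR constant of Eq. (A11) (assumed)
RHO_TAR = 0.8         # target reflectance (Section 4.1)
PHOTON_GAIN = 5.0e9   # lumped tau*eta/e factor of Eqs. (A12)-(A14) (assumed)
N_B, N_D = 5.0, 1.0   # background-light and dark-count photons (assumed)
F = 2.0               # detector noise factor (assumed)
SNR_0 = 68.3          # validity threshold derived in Section 4.1
K_SIGMA = 8.2         # sigma_R = K_SIGMA / SNR gives 0.12 m at SNR = 68.3

def extinction(visibility_km: float, lam_um: float = LAMBDA_UM) -> float:
    """Eq. (A9): Al Naboulsi fog model, extinction coefficient in km^-1."""
    return (0.18126 * lam_um**2 + 0.13709 * lam_um + 3.7502) / visibility_km

def foggy_point_cloud(points_m: np.ndarray, visibility_km: float,
                      rng: np.random.Generator = np.random.default_rng(0)):
    """Convert one clear frame (n x 3, meters) into a foggy frame (Algorithm 1)."""
    gamma = extinction(visibility_km)
    r_km = np.linalg.norm(points_m, axis=1) / 1000.0
    p_r = C_L * RHO_TAR * np.exp(-2.0 * gamma * r_km) / r_km**2  # Eq. (A10)
    n_r = p_r * PHOTON_GAIN                                      # valid-signal photons
    snr = n_r / np.sqrt(F * (n_r + N_B + N_D))                   # Eq. (A17)
    keep = snr > SNR_0                    # points below SNR_0 are invalid (line 9)
    sigma_r = K_SIGMA / snr[keep]         # Eq. (A18): range noise std in meters
    kept = points_m[keep]
    dirs = kept / np.linalg.norm(kept, axis=1, keepdims=True)
    # perturb each surviving point along its beam direction by N(0, sigma_R)
    return kept + dirs * rng.normal(0.0, sigma_r)[:, None]
```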
To verify the feasibility of Algorithm 1, we input a frame of real foggy point clouds from the RADIATE [48] dataset into the FPCGM. Since this frame contains over 4000 laser points, and the generation of each laser point represents one simulation by the FPCGM, this case exercised the FPCGM over 4000 times. After reasonably setting the reflectance of the targets, the point-cloud intensity distribution was calculated; Table 1 lists the parameter settings. The calculated intensity distribution was then compared with the real point-cloud intensity distribution. The results are shown in Figure 2 and Figure 3, and the data in the figures are normalized.
Figure 2 shows the intensity distributions of the simulated and real foggy point clouds, which are similar. Figure 3 shows the corresponding normalized value distributions, which are also similar. The proposed FPCGM is therefore feasible.

4. Fog Simulation-Based Layered SOTIF Analysis

In this section, we conducted a layered quantitative SOTIF analysis, as shown in Figure 4, to identify the SOTIF-related issues of LMSFLS in foggy environments. First, we performed a quantitative analysis of LiDAR static detection to assess the impact of fog on a single laser beam and a one-frame point cloud. Subsequently, we analyzed the effect of different concentrations of fog on the localization performance of the LMSFLS. Based on the analysis results, we identified the functional insufficiencies of LMSFLS at the component, system, and vehicle levels, along with the corresponding triggering conditions and potential SOTIF-related harms.

4.1. The Analysis for LiDAR Static Detection in Foggy Environments

The maximum detection range of vehicle-borne LiDAR in clear weather is generally 100–200 m, and the reflectance of the corresponding target object is 0.8–1. We assumed that the farthest detection range of the LiDAR was 120 m in clear weather and that the corresponding reflectance of the target was 0.8. When these two parameters were input into the FPCGM, the $SNR$ was 68.3 and $\sigma_R$ was 0.12 m under this condition. This $SNR$ can be used as $SNR_0$ in Algorithm 1; that is, a point-cloud echo is considered invalid if its $SNR$ is less than 68.3 and is retained otherwise.
According to [49], when the horizontal visibility is 1–10 km, it is light fog; when the horizontal visibility is 0.5–1 km, it is fog; and when the horizontal visibility is 0.2–0.5 km, it is heavy fog. First, we set the visibility to 10 km, 1 km, 0.8 km, 0.6 km, 0.5 km, 0.4 km, 0.3 km, and 0.2 km to simulate the interference of fog on the static detection of a single laser beam of LiDAR by using FPCGM. Subsequently, we set the visibility to 10 km, 1 km, 0.6 km, and 0.2 km and input one frame of clear point cloud, as shown in Figure 5, into FPCGM to simulate fog interference on clear point clouds. The LiDAR parameters are listed in Table 1, and the simulation results are shown in Figure 6 and Figure 7.
In Figure 6a, the intersection points of the curves and the horizontal line are the $SNR$s corresponding to the maximum detection ranges of the LiDAR echo signal under each visibility; parts of the curves below the horizontal line are invalid. In Figure 6b, the intersection points of the curves and the horizontal line are the values of $\sigma_R$ corresponding to the maximum detection ranges of the LiDAR echo signal under each visibility, and parts of the curves above the horizontal line are considered invalid. In Figure 7a–d, when the visibility is 10 km, 1 km, 0.6 km, and 0.2 km, the LiDAR receives 6, 4, 2, and 1 cluster(s) of obstacle point clouds, respectively. The following conclusions can be drawn from Figure 6 and Figure 7.
  • Under constant visibility, the $SNR$ of the echo signals decreases and $\sigma_R$ increases with an increase in the LiDAR detection range.
  • For a constant LiDAR detection range, the $SNR$ decreases and $\sigma_R$ increases with a decrease in visibility. The lower the visibility, the faster the $SNR$ decreases and the quicker $\sigma_R$ increases.
  • With a reduction in visibility, the maximum detection range of LiDAR decreases. When the visibility was 10 km, 1 km, 0.8 km, 0.6 km, 0.5 km, 0.4 km, 0.3 km, and 0.2 km, the corresponding maximum detection ranges were 120 m, 88 m, 83 m, 76 m, 71 m, 65.5 m, 58 m, and 48 m, respectively.
  • The number of point-cloud clusters of obstacles in foggy point clouds decreases with reduced visibility.
These results indicate that the $SNR$ and $\sigma_R$ of the LiDAR echo signals are sensitive to variations in visibility, reducing the maximum detection range and accuracy of LiDAR on foggy days. This inevitably affects the localization performance of the LMSFLS. Therefore, it is necessary to analyze the effects of fog on the localization performance of the LMSFLS further.

4.2. The SOTIF Analysis for LMSFLS in Foggy Environments

1. Simulation scenarios and parameter settings
An uneven arrangement of obstacles in a scenario affects the localization performance of the LiDAR odometry. To study the interference of fog with the LMSFLS more clearly, it is necessary to eliminate the effects of an uneven arrangement of obstacles. Therefore, this study used the Driving Scenario Designer app of MATLAB to build straight and curved urban expressway scenarios with obstacles uniformly arranged on both sides of the roads, as shown in Figure 8 and Figure 9. The distance between obstacles in the scenarios is 75 m, and the distance between the obstacles and the road is 40 m. When the visibility is less than or equal to 0.3 km, the maximum detection range of LiDAR is reduced by more than 50%, and the vehicle is stopped immediately for safety reasons; therefore, the effects of fog on the localization performance of the LMSFLS at visibilities of 0.2 km and 0.3 km are not discussed in the subsequent study. In the simulation, the vehicle ran in the middle lane of the right road at a constant speed of 90 km/h, and the total running distance was approximately 1 km. The visibility of the scenarios was set to 10 km, 1 km, 0.8 km, 0.6 km, and 0.4 km. The MATLAB simulation duration depends on the configured road length, and the simulation ends when the vehicle reaches the end of the road. The parameters of the various sensors are listed in Table 2.
2. Quantitative triggering condition analysis
Under the interference of fog at different concentrations in the straight and curved urban expressway scenarios, the variation in LMSFLS localization performance is shown in Figure 10 and Figure 11, in which the localization error versus the running time of the vehicle is plotted. The localization errors were calculated in the vehicle coordinate system, as in reference [50]. From Figure 10a,b and Figure 11a,b, we can see that in the straight urban expressway scenario, with a decrease in visibility, the lateral localization error curves of the LMSFLS constantly fluctuate around 0 m, whereas the longitudinal localization error curves diverge when the visibility is 0.6 km and 0.4 km. In the curved urban expressway scenario, with a decrease in visibility, the lateral and longitudinal localization error curves of the LMSFLS diverge when the visibility is 0.8 km, 0.6 km, and 0.4 km.
Therefore, it can be concluded that the quantitative triggering condition for SOTIF-related issues of LMSFLS in the straight urban expressway scenario is when the visibility is less than or equal to 0.6 km, and in the curved urban expressway scenario, it is when the visibility is less than or equal to 0.8 km.
3. Functional insufficiencies analysis
To clarify the reasons for the divergence of the localization errors, we analyzed the relationship between the increment of the localization errors and the LiDAR point clouds. We found that when the increment of the localization errors increased, the obstacle point cloud detected by LiDAR was a feature-degraded point cloud similar to that in Figure 12. Inputting two consecutive frames of this type of point cloud into the LiDAR odometry for scan-to-scan matching can create an illusion of repeating point-cloud patterns in the LiDAR odometry. When solving for the vehicle pose transformation, this type of point cloud can only constrain the vehicle's lateral degree of freedom and cannot constrain the longitudinal and vertical rotational degrees of freedom for pose estimation algorithms. Accordingly, in the straight urban expressway scenario, where the vehicle's attitude does not rotate, the lateral positioning accuracy of the fusion localization system does not diverge, while the longitudinal positioning accuracy diverges, indicating that the longitudinal solution of the LiDAR odometry has degraded. In the curved urban expressway scenario, where the vehicle experiences rotation, the fusion localization system diverges in both lateral and longitudinal positioning accuracy, indicating that the overall solution of the LiDAR odometry has degraded. We statistically analyzed the relationship between visibility and the probability of feature-degraded point clouds detected by LiDAR in the straight and curved urban expressway scenarios, as shown in Table 3: the lower the visibility, the higher the probability that LiDAR detects feature-degraded point clouds.
Based on the above analysis, in foggy straight and curved urban expressway scenarios, the component-level functional insufficiencies of LMSFLS are, respectively, the longitudinal and overall solution degradation of the LiDAR odometry. Then according to Figure 10 and Figure 11, the system-level functional insufficiencies of LMSFLS are, respectively, longitudinal localization divergence and longitudinal-lateral localization divergence. Finally, it can be inferred that the vehicle-level functional insufficiencies in the straight urban expressway scenario were unexpected acceleration and deceleration, and in the curved urban expressway scenario were unexpected acceleration, deceleration, and steering.
4. SOTIF-related harms analysis
According to ISO 21448, a vehicle-level functional insufficiency corresponds to a hazardous behavior. When a hazardous behavior occurs and a condition that could cause harm is present in the scenario, a hazardous event is formed; if the hazardous event is uncontrollable, harm will occur. In the straight urban expressway scenario of this study, if the hazardous behavior of unexpected acceleration or deceleration of the ego vehicle is triggered by fog with visibility less than 0.6 km and there are other vehicles in front or behind, a hazardous event of a collision between the ego vehicle and the front or rear vehicle arises. Moreover, if there is no safety strategy to control the vehicle, a collision between the ego vehicle and the front or rear vehicle will result. Similarly, in the curved urban expressway scenario, if the hazardous behavior of unexpected acceleration, deceleration, and steering of the ego vehicle is triggered by fog with visibility less than 0.8 km and there are other vehicles around the ego vehicle, a collision between the ego vehicle and surrounding vehicles or road infrastructure will result. In more detail: an unexpected acceleration with left steering of the ego vehicle may cause a collision with a neighboring vehicle or road infrastructure at the front left; an unexpected acceleration with right steering may cause a collision at the front right; an unexpected deceleration with left steering may cause a collision at the rear left; and an unexpected deceleration with right steering may cause a collision at the rear right. The results of the SOTIF analysis are presented in Table 4.

5. Functional Modification Strategy

According to the SOTIF analysis results, whether in a straight or curved urban expressway scenario with a decrease in visibility, the LMSFLS will experience the functional insufficiency of localization divergence, leading to the potential SOTIF-related harm of collision with vehicles. Therefore, it is necessary to propose a functional modification strategy to ensure the stability of the LMSFLS and then improve the SOTIF.

5.1. Strategy Process

To address the SOTIF-related risks of the LMSFLS caused by fog, we propose a functional modification strategy comprising two steps. In the first step, the visibility of the current scenario is recognized, and whether the LiDAR odometry is disturbed by fog is judged against a set visibility threshold $V_{th}$. In the second step, if the LiDAR odometry is disturbed by fog, the 3σ-VMDG-AEKF is adopted to fuse the LiDAR odometry measurement information with GNSS and INS. The process of the functional modification strategy is shown in Figure 13.

5.2. Recognition of Visibility

It can be seen from previous analysis that the detection range and accuracy of LiDAR are susceptible to variations in visibility. Therefore, visibility is used to judge whether the measurement information of LiDAR odometry is disturbed by fog. Based on the echo point clouds from LiDAR, taking the natural logarithm of both sides of Equation (A10), we obtain the following result:
$\gamma^j = -\frac{1}{2x^j}\ln\frac{(x^j)^2 P_r^j}{C_L \rho_{tar}}$ (3)
where the superscript $j$ represents the $j$th laser point. According to Equation (A9), we obtain
$V^j = \frac{0.18126\lambda^2 + 0.13709\lambda + 3.7502}{\gamma^j}$ (4)
Using (3) and (4), we can calculate the visibility value recognized by each laser point in one frame of foggy point cloud, and the mean of these values is taken as the visibility of the current scenario. It is worth noting that the visibility values recognized by laser points with shorter ranges fluctuate wildly; therefore, only the values recognized by laser points with longer ranges are used. In this study, laser points with an $x$ greater than 30 m are used to recognize visibility. That is,
$V = \frac{1}{N}\sum_{j=1}^{N} V^j$ (5)
where $N$ is the number of laser points whose $x$ values are greater than 30 m.
After recognizing the visibility of the scenario, we can judge whether the LiDAR odometry is disturbed by fog by setting a visibility threshold $V_{th}$. The analysis results in Table 3 show that when the visibility is less than or equal to 0.8 km, the LiDAR odometry has a high probability of being disturbed. Therefore, the visibility threshold $V_{th}$ was set to 0.8 km in this study.
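A minimal sketch of the visibility recognizer of Equations (3)–(5) is given below; `c_l`, `rho_tar`, and `lam_um` are assumed placeholder parameters (not the calibrated values of Table 1), and the echo energy of each point is taken from its recorded intensity.

```python
import numpy as np

def recognize_visibility(points_m: np.ndarray, p_r: np.ndarray,
                         c_l: float = 1.0e-4, rho_tar: float = 0.8,
                         lam_um: float = 0.905, x_min_m: float = 30.0) -> float:
    """Estimate scene visibility (km) from one foggy frame, Eqs. (3)-(5).
    points_m: n x 3 laser points in meters; p_r: per-point echo energies."""
    x = points_m[:, 0]                    # longitudinal range of each laser point
    keep = x > x_min_m                    # short-range estimates fluctuate wildly
    x_km = x[keep] / 1000.0
    # Eq. (3): per-point extinction coefficient, inverted from Eq. (A10)
    gamma = -np.log(x_km**2 * p_r[keep] / (c_l * rho_tar)) / (2.0 * x_km)
    # Eq. (4): invert the Al Naboulsi model point by point
    v_j = (0.18126 * lam_um**2 + 0.13709 * lam_um + 3.7502) / gamma
    return float(np.mean(v_j))            # Eq. (5)

V_TH = 0.8  # visibility threshold in km, from the Table 3 analysis
# fog_disturbed = recognize_visibility(frame_xyz, frame_intensity) <= V_TH
```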

5.3. 3σ-Criterion-Based Variance Mismatch Degree Grading Adaptive Extended Kalman Filter

After a sensor fault is identified, the most direct and effective way to handle it is to isolate the corresponding sensor so as not to affect the stability of the localization system. In this study, LiDAR odometry is disturbed by fog, and the measurement information deviates from the actual value in part or whole. If LiDAR odometry is directly isolated, it inevitably leads to inefficient use of its measurement information. Therefore, a method that can accurately identify and isolate the abnormal part in the measurement information is required to fuse the LiDAR odometry measurement information with GNSS and INS; to meet this requirement, the 3σ-VMDG-AEKF algorithm was proposed in this study.
$\varepsilon$ is defined as the one-step prediction residual, which represents the degree of variance mismatch between the measured state of the LiDAR and the predicted state of the INS. For the $k$th measurement, the $\varepsilon$ of the $i$th state is
$\varepsilon_k^i = Z_k^i - H_k^i \check{X}_k$ (6)
where the "$\check{\ }$" symbol represents the prediction, $Z_k^i$ is the $i$th state in $Z_k$, $H_k = J(h(\check{X}_k))$, $J$ is the Jacobian matrix, and $H_k^i$ is the $i$th row of $H_k$.
$\sigma$ is defined as the standard deviation of $\varepsilon$. For the $k$th measurement, the $\sigma$ of the $i$th state is
$\sigma_k^i = \sqrt{H_k^i \check{P}_k^i (H_k^i)^T + R_k^i}$ (7)
where $P$ is the mean square error matrix of $X$, $\check{P}_k^i$ is the $i$th diagonal element of $\check{P}_k$, $R$ is the observation noise matrix, and $R_k^i$ is the $i$th diagonal element of $R_k$.
$\hat{C}$ is defined as the estimation of the noise matrix observed by the LiDAR. According to the Sage–Husa [51] filter, for the $k$th measurement, the $i$th diagonal element of $\hat{C}$ is
$\hat{C}_k^i = (1 - \beta_k)\hat{C}_{k-1}^i + \beta_k \varepsilon_k^i (\varepsilon_k^i)^T$ (8)
where the "$\hat{\ }$" symbol represents the estimation, $\beta_k = \frac{\beta_{k-1}}{\beta_{k-1} + b}$, $\beta_0 = 1$, and $0 < b < 1$. Here, $b$ is called the fading factor, and its value is usually 0.9–0.999.
When the $i$th state in the observation vector of the LiDAR exhibits no variance mismatch, we have
$\hat{C}_k^i \approx H_k^i \check{P}_k^i (H_k^i)^T + R_k^i$ (9)
When the $i$th state in the observation vector of the LiDAR exhibits a mild variance mismatch, $R_k^i$ is considered to be biased; for (9) to hold, we have
$\hat{C}_k^i - H_k^i \check{P}_k^i (H_k^i)^T = \alpha_k^i R_k^i$ (10)
where $\alpha$ is the adaptive noise coefficient. For the $k$th measurement, the $\alpha$ of the $i$th state is
$\alpha_k^i = \frac{\hat{C}_k^i - H_k^i \check{P}_k^i (H_k^i)^T}{R_k^i}$ (11)
Finally, we can adaptively compute $\hat{X}_k^i$ and $\hat{P}_k^i$ using the following three equations:
$K_k^i = \check{P}_k^i (H_k^i)^T \left(H_k^i \check{P}_k^i (H_k^i)^T + \alpha_k^i R_k^i\right)^{-1}$ (12)
$\hat{X}_k^i = \check{X}_k^i + K_k^i \varepsilon_k^i$ (13)
$\hat{P}_k^i = (I - K_k^i H_k^i)\check{P}_k^i$ (14)
After recognizing that the LiDAR odometry is disturbed by fog, the $\varepsilon$ and $\sigma$ of each element in the measurement state vector of the LiDAR odometry are calculated sequentially. According to the $3\sigma$ criterion of the Gaussian distribution, if $|\varepsilon| < \sigma$, the element exhibits no variance mismatch and can be directly used for fusion. If $\sigma \le |\varepsilon| < 3\sigma$, the element exhibits a mild variance mismatch and can be adaptively fused. If $|\varepsilon| \ge 3\sigma$, the element exhibits a severe variance mismatch and should be isolated directly. However, in practical applications, the value of $\sigma$ may not be sufficiently accurate; we can therefore modify it by setting $\sigma \leftarrow a \cdot \sigma$, where $0 < a < 1$. The specific process is shown in Algorithm 2.
Algorithm 2: 3σ-VMDG-AEKF
1: Initialization: $\check{X}_k$, $Z_k$, $\check{P}_k$, $R_k$
2: for $i \le m$ do
3:   Calculate $\varepsilon_k^i$ using (6)
4:   Calculate $\sigma_k^i$ using (7)
5:   if $|\varepsilon_k^i| < \sigma_k^i$ then
6:     Calculate $\hat{X}_k^i$ and $\hat{P}_k^i$ using (13) and (14)
7:   else if $\sigma_k^i \le |\varepsilon_k^i| < 3\sigma_k^i$ then
8:     Calculate $\alpha_k^i$ using (11)
9:     Calculate $\hat{X}_k^i$ and $\hat{P}_k^i$ using (12)–(14)
10:  else
11:    $\hat{X}_k^i = \check{X}_k^i$, $\hat{P}_k^i = \check{P}_k^i$
12:  end if
13: end for
14: return $\hat{X}_k$ and $\hat{P}_k$
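The sketch below gives one possible Python realization of Algorithm 2. The interface is illustrative; the deflation factor `a` of Section 5.3 defaults to 1, and the fading factor `b` follows Equation (8).

```python
import numpy as np

def vmdg_aekf_update(x_pred, P_pred, z, H, R, C_hat_prev, beta_prev,
                     b=0.95, a=1.0):
    """One 3-sigma-VMDG-AEKF measurement update (Algorithm 2), processing
    the m observation elements sequentially with variance-mismatch grading."""
    x, P = x_pred.copy(), P_pred.copy()
    beta = beta_prev / (beta_prev + b)             # Eq. (8)
    C_hat = C_hat_prev.copy()
    for i in range(z.size):
        Hi = H[i:i + 1, :]                         # i-th observation row
        eps = z[i] - (Hi @ x).item()               # residual, Eq. (6)
        hph = (Hi @ P @ Hi.T).item()
        sigma = a * np.sqrt(hph + R[i, i])         # Eq. (7), deflated by a
        C_hat[i] = (1 - beta) * C_hat[i] + beta * eps**2   # Eq. (8)
        if abs(eps) < sigma:                       # no mismatch: standard fusion
            alpha = 1.0
        elif abs(eps) < 3.0 * sigma:               # mild mismatch: adapt R
            alpha = (C_hat[i] - hph) / R[i, i]     # Eq. (11)
        else:                                      # severe mismatch: isolate element
            continue
        K = P @ Hi.T / (hph + alpha * R[i, i])     # Eq. (12)
        x = x + (K * eps).ravel()                  # Eq. (13)
        P = (np.eye(P.shape[0]) - K @ Hi) @ P      # Eq. (14)
    return x, P, C_hat, beta
```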

6. Validation of Functional Modification Strategy

6.1. Design of Validation Schemes

We verified the effectiveness and the visibility recognition performance of the proposed strategy in two quasi-real scenarios modeled after the Beijing ring expressways. The straight urban expressway scenario was built after a part of the West North Fourth Ring Road, and the curved urban expressway scenario after a part of the North East Third Ring Road. They are shown in Figure 14 and Figure 15.
In the two quasi-real scenarios, we set two types of visibility: constant and variable. The constant visibility is 0.4 km. The variable visibility ranges over 0.4–1 km: it first decreases from 1 km to 0.4 km, then stabilizes for a while, and finally increases from 0.4 km back to 1 km. This condition simulates a vehicle moving from a fog area into a heavy-fog area and then back from the heavy-fog area into the fog area. The operating conditions of the vehicle are the same as before. The comparison methods are the EKF, the AEKF, and the EKF with fault diagnosis and isolation (EKF with FDI). The EKF is the filter we used for the SOTIF analysis, which lacks adaptive and FDI capabilities. The AEKF uses the Sage–Husa method for noise parameter adaptation. The EKF with FDI has FDI capabilities based on the state chi-square test. The parameters of the sensors are listed in Table 1 and Table 2. The specific verification scheme is shown in Table 5.

6.2. Analysis of Simulation Results

Case 1: Visibility at 0.4 km
Figure 16 and Figure 17 show the localization performance of the proposed strategy and the comparison methods in the quasi-real straight and curved urban expressway scenarios at 0.4 km visibility.
As observed from Figure 16a–c, in the quasi-real straight urban expressway scenario, the longitudinal error curve of the EKF diverges rapidly. The longitudinal error curves of the AEKF and the EKF with FDI increase gradually, and the performance of the EKF with FDI is superior to that of the AEKF. However, the longitudinal error of the proposed functional modification strategy always remains at approximately 0 m. Since there is no attitude change of the vehicle in the quasi-real straight urban expressway scenario, the feature-degraded point clouds detected by LiDAR can still constrain lateral localization, so the lateral error curves of the four methods are maintained at approximately 0 m without divergence. The advantages of the proposed functional modification strategy in this scenario are therefore mainly reflected in the longitudinal localization. Moreover, the proposed functional modification strategy can recognize the visibility of the current scenario.
As observed in Figure 17a–c, the longitudinal and lateral error curves of the EKF diverge rapidly in the quasi-real curved urban expressway scenario. The longitudinal and lateral error curves of the AEKF and the EKF with FDI rise gradually as the simulation proceeds, and the performance of the EKF with FDI is better than that of the AEKF. However, the longitudinal and lateral errors of the proposed functional modification strategy always remain at approximately 0 m. Furthermore, the proposed functional modification strategy can recognize the visibility of the current scenario.
The above simulation results show that the proposed strategy can effectively improve the SOTIF of the LMSFLS when the fog visibility is low and constant, and its performance is better than that of the comparison methods.
Case 2: Visibility at 0.4–1 km
Figure 18 and Figure 19 show the localization performance of the proposed strategy and the comparison methods in the quasi-real straight and curved urban expressway scenarios under variable visibility. The variation of the error curves of the four methods is similar to that in Case 1. This means that the proposed strategy can also effectively improve the SOTIF of the LMSFLS when the fog visibility is low and changing, and its performance again exceeds that of the comparison methods.
According to the above two cases, the proposed functional modification strategy can recognize the visibility of the scenarios and effectively suppress the divergence of the localization errors in both quasi-real straight and curved urban expressway scenarios under constant and variable visibility conditions. It improves the SOTIF of the LMSFLS on foggy days and eliminates the fog-induced SOTIF-related risks. Compared with the traditional methods, the proposed method offers higher localization accuracy and better stability.
According to the localization accuracy requirements for highway autonomous driving proposed in [52], we calculated the 95% accuracy and the maximum localization error of the proposed method and the three comparison methods under the aforementioned four sub-scenarios. The results are shown in Table 6. It is worth noting that the 95% accuracy and maximum localization error of each method are aggregated over the four sub-scenarios. The proposed strategy and the EKF with FDI both meet the localization accuracy requirements in a foggy environment, but the proposed strategy improves the longitudinal and lateral accuracy by approximately 61% and 23%, respectively, compared with the EKF with FDI. The longitudinal 95% accuracy of the AEKF fails to meet the requirements, and the EKF fails to achieve the required localization accuracy entirely. This indicates that the proposed strategy meets the requirements with better localization accuracy.

7. Conclusions

The objective of this study is to improve the SOTIF of autonomous vehicles in foggy environments. First, a layered SOTIF analysis method is proposed to quantitatively analyze the SOTIF-related issues of the LMSFLS. Subsequently, a fog visibility recognition method and the 3σ-VMDG-AEKF are proposed to ensure the localization performance of the LMSFLS. Finally, a virtual simulation platform is constructed to validate the effectiveness of the proposed functional modification strategy.
Some conclusions can be drawn. (1) The proposed layered SOTIF analysis method can identify the quantitative triggering conditions, functional insufficiencies, and SOTIF-related harms of the LMSFLS on foggy days, addressing the difficulty of performing quantitative SOTIF analysis of the LMSFLS under real-weather limitations. (2) The proposed visibility recognition method can identify the visibility of a scenario to determine whether the LiDAR odometry is disturbed by fog, and the proposed 3σ-VMDG-AEKF can effectively identify and isolate abnormal measurement information in the LiDAR odometry to ensure the stability of the LMSFLS.
In the future, real vehicle experiments will be considered to verify the effectiveness of the proposed functional modification strategy. We plan to integrate HD maps into this localization system and further study the HD map-based multi-sensor fusion localization systems on foggy days.

Author Contributions

Conceptualization, L.C. and Y.L.; formal analysis, L.C., Y.H. and Y.L.; methodology, L.C., Y.H. and Y.L.; software, L.C., Y.L. and J.C.; supervision, Y.L.; validation, Y.H.; writing—original draft, L.C., Y.H. and Y.L.; writing—review and editing, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Beijing Municipal Science and Technology Project under Grant No. Z211100004221005.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. LiDAR Odometry

Before performing point cloud registration, it is necessary to preprocess the point cloud affected by fog. Commonly used preprocessing steps include downsampling and feature extraction. Feature extraction can lose some point-cloud information, and different feature extraction methods have different robustness to fog interference. To achieve better registration results, we only use voxel filtering to downsample the point cloud, with the voxel grid size set to 0.5 m.
In this study, since there is no plan to map the scenario, we use the scan-to-scan approach for point cloud registration. The registration algorithm used is point-to-plane Iterative Closest Point (ICP) [53], and to achieve better registration performance, we set the maximum number of iterations for solving the pose transformation to 50.

Appendix A.2. INS Pre-Integration

The pre-integration form of the INS is as follows:
$p_k = p_{k-1} + t\,v_{k-1} + \frac{t^2}{2}\left(C_n^b\,sf_{k-1} + g\right)$ (A1)
$v_k = v_{k-1} + t\left(C_n^b\,sf_{k-1} + g\right)$ (A2)
$q_k = q_{k-1} \otimes q(\omega_{k-1}\cdot t)$ (A3)
where $p$, $v$, $q$, and $g$ are the position, velocity, attitude quaternion, and gravitational acceleration in the navigation coordinate system; $sf$ and $\omega$ are the specific force and angular velocity in the vehicle coordinate system; $C_n^b$ is the rotation matrix from the vehicle coordinate system to the navigation coordinate system; $t$ is the sampling time interval; and the operator "$\otimes$" represents quaternion multiplication.
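A minimal sketch of one pre-integration step is shown below, using SciPy rotations in place of explicit quaternion algebra; the z-up navigation frame and gravity vector are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation

G_NAV = np.array([0.0, 0.0, -9.81])  # gravity in a z-up navigation frame (assumed)

def ins_preintegrate(p, v, q: Rotation, sf_body, omega_body, t: float):
    """One step of Eqs. (A1)-(A3); q plays the role of C_n^b, mapping the
    vehicle frame to the navigation frame."""
    a_nav = q.apply(sf_body) + G_NAV                     # C_n^b * sf + g
    p_next = p + t * v + 0.5 * t**2 * a_nav              # Eq. (A1)
    v_next = v + t * a_nav                               # Eq. (A2)
    # Eq. (A3): right-multiply by the body-frame rotation increment q(omega * t)
    q_next = q * Rotation.from_rotvec(np.asarray(omega_body) * t)
    return p_next, v_next, q_next
```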

Appendix A.3. Data Fusion

The data fusion algorithm used is the EKF, which can be expressed as follows:
$\check{X}_k = f(\hat{X}_{k-1})$ (A4)
$\check{P}_k = \check{\Phi}_k \hat{P}_{k-1} \check{\Phi}_k^T + Q_{k-1}$ (A5)
$K_k = \check{P}_k H_k^T \left(H_k \check{P}_k H_k^T + R_k\right)^{-1}$ (A6)
$\hat{X}_k = \check{X}_k + K_k\left(Z_k - h(\check{X}_k)\right)$ (A7)
$\hat{P}_k = (I - K_k H_k)\check{P}_k$ (A8)
where $\check{\Phi}_k = J(f(\hat{X}_{k-1}))$, $H_k = J(h(\check{X}_k))$, and $Q$ is the process noise matrix.
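The recursion of Equations (A4)–(A8) can be sketched as follows; `f`, `h`, and their Jacobians are user-supplied callables corresponding to the INS prediction and the LiDAR odometry/GNSS observation models.

```python
import numpy as np

def ekf_step(x_hat, P_hat, z, f, h, jac_f, jac_h, Q, R):
    """One predict/update cycle of the EKF used for LMSFLS data fusion."""
    # Prediction: INS pre-integration acts as the state transition f
    x_pred = f(x_hat)                                        # Eq. (A4)
    Phi = jac_f(x_hat)
    P_pred = Phi @ P_hat @ Phi.T + Q                         # Eq. (A5)
    # Update with the LiDAR odometry / GNSS observation z
    H = jac_h(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Eq. (A6)
    x_new = x_pred + K @ (z - h(x_pred))                     # Eq. (A7)
    P_new = (np.eye(x_hat.size) - K @ H) @ P_pred            # Eq. (A8)
    return x_new, P_new
```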

Appendix B

The detailed explanation of Algorithm 1 is as follows. Firstly, the visibility V of the external environment is input into the attenuation model to get the extinction coefficient γ . We use the Naboulsi [46] model, which distinguishes the types of fog, considers the size distribution of fog drops, and clearly describes the functional relationship between γ , laser wavelength λ, and V , as the attenuation model of the FPCGM. Its form is as follows:
$\gamma = \frac{0.18126\lambda^2 + 0.13709\lambda + 3.7502}{V}$ (A9)
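As a worked example, assuming a laser wavelength of $\lambda = 0.905\ \mu m$ (a value typical of automotive LiDAR, not taken from Table 1) and a visibility of $V = 0.4$ km:

$\gamma = \frac{0.18126 \times 0.905^2 + 0.13709 \times 0.905 + 3.7502}{0.4} \approx 10.06\ \mathrm{km}^{-1}$

so the two-way transmittance over a 50 m path in Equation (A10) is $e^{-2\gamma x} = e^{-2 \times 10.06 \times 0.05} \approx 0.37$; that is, roughly two-thirds of the echo energy is lost to fog alone at that range.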
Secondly, one frame of clear point cloud that includes $n$ points is input into the physical model, and the echo energy $P_r^i$ of each point $i$ is computed. The LiDAR physical model [20] can be expressed as follows:
$P_r^i = C_L \frac{1}{(x^i)^2} \rho_{tar} e^{-2\gamma x^i}$ (A10)
$C_L = \frac{1}{2\pi} P_0 A_R \eta_{sr} \eta_{st}$ (A11)
where $C_L$ represents the inherent properties of the LiDAR, $P_0$ is the laser-emitted energy, $A_R$ is the effective area of the laser receiver, $\eta_{sr}$ is the laser-receiver efficiency, $\eta_{st}$ is the laser-emitter efficiency, $x^i$ is the detection range, and $\rho_{tar}$ is the reflectance of the target.
Finally, the $P_r^i$ and $x^i$ of each point $i$ are input into the noise model, which considers sunlight noise and dark counting, to generate the foggy point clouds. This involves the following steps:
1. Calculate the valid signal for each point $i$:
$N_r^i = \frac{P_r^i \tau \eta}{e}$ (A12)
$\tau = \frac{2\Delta S}{c}$ (A13)
$e = \frac{hc}{\lambda}$ (A14)
where $N_r^i$ is the number of valid-signal photons, $\tau$ is the pulse time, $\Delta S$ is the spatial resolution, $c$ is the speed of light, $e$ is the energy of a single photon, $h$ is Planck's constant, and $\eta$ is the quantum efficiency of the receiver.
2. Calculate the sunlight noise signal:
$N_B = T_R P_B \pi \left(\frac{\theta}{2}\right)^2 \Delta\lambda\, A_R \frac{2\Delta S}{c} \frac{\eta\lambda}{hc}$ (A15)
where $N_B$ is the number of background-light photons received by the receiver, $P_B$ is the brightness of the sky background radiation, $\theta$ is the viewing angle of the receiving telescope, $\Delta\lambda$ is the bandwidth of the optical filter, and $T_R$ is the total transmittance of the optical receiving system.
3. Calculate the dark-counting signal:
$N_D = C_D \frac{2\Delta S}{c}$ (A16)
where $N_D$ is the number of dark counts and $C_D$ is the dark-counting rate of the laser receiver.
4. Combining steps 1–3, the $SNR^i$ of each echo signal is calculated using the following formula [47]:
$SNR^i = \frac{N_r^i}{\sqrt{F\left(N_r^i + N_B + N_D\right)}}$ (A17)
where $F$ is the detector noise factor.
5. If the $SNR^i$ of point $i$ is greater than $SNR_0$, its standard deviation $\sigma_R^i$ has the following relationship with $SNR^i$ [47]:
$\sigma_R^i \sim \frac{1}{SNR^i}$ (A18)
where $SNR_0$ is the smallest $SNR$ of the echo signals that can be received by the LiDAR.

References

  1. Wang, Z.; Wu, Y.; Niu, Q. Multi-Sensor Fusion in Automated Driving: A Survey. IEEE Access 2020, 8, 2847–2868. [Google Scholar] [CrossRef]
  2. Ghorai, P.; Eskandarian, A.; Kim, Y.-K.; Mehr, G. State Estimation and Motion Prediction of Vehicles and Vulnerable Road Users for Cooperative Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16983–17002. [Google Scholar] [CrossRef]
  3. Xu, X.; Zhang, L.; Yang, J.; Cao, C.; Wang, W.; Ran, Y.; Tan, Z.; Luo, M. A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR. Remote Sens. 2022, 14, 2835. [Google Scholar] [CrossRef]
  4. Chen, W.; Zhou, C.; Shang, G.; Wang, X.; Li, Z.; Xu, C.; Hu, K. SLAM Overview: From Single Sensor to Heterogeneous Fusion. Remote Sens. 2022, 14, 6033. [Google Scholar] [CrossRef]
  5. Wang, W.; You, X.; Chen, L.; Tian, J.; Tang, F.; Zhang, L. A Scalable and Accurate De-Snowing Algorithm for LiDAR Point Clouds in Winter. Remote Sens. 2022, 14, 1468. [Google Scholar] [CrossRef]
  6. Aldibaja, M.; Yanase, R.; Kuramoto, A.; Kim, T.H.; Yoneda, K.; Suganuma, N. Improving Lateral Autonomous Driving in Snow-Wet Environments Based on Road-Mark Reconstruction Using Principal Component Analysis. IEEE Intell. Transp. Syst. Mag. 2021, 13, 116–130. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Autonomous Driving in Adverse Weather Conditions: A Survey. arXiv 2021, arXiv:2112.08936. [Google Scholar]
  8. Hespel, L.; Riviere, N.; Huet, T.; Tanguy, B.; Ceolato, R. Performance Evaluation of Laser Scanners through the Atmosphere with Adverse Condition. In Proceedings of the SPIE, Electro-Optical Remote Sensing, Photonic Technologies, and Applications V, Prague, Czech Republic, 19–22 September 2011; pp. 64–78. [Google Scholar]
  9. ISO 21448; Road Vehicles—Safety of the Intended Functionality. International Organization for Standardization: Geneva, Switzerland, 2022.
  10. Yoneda, K.; Suganuma, N.; Yanase, R.; Aldibaja, M. Automated Driving Recognition Technologies for Adverse Weather Conditions. IATSS Res. 2019, 43, 253–262. [Google Scholar] [CrossRef]
  11. Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The Impact of Adverse Weather Conditions on Autonomous Vehicles: How Rain, Snow, Fog, and Hail Affect the Performance of a Self-Driving Car. IEEE Veh. Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
  12. Rasshofer, R.H.; Spies, M.; Spies, H. Influences of Weather Phenomena on Automotive Laser Radar Systems. Adv. Radio Sci. 2011, 9, 49–60. [Google Scholar] [CrossRef] [Green Version]
  13. Dannheim, C.; Icking, C.; Mäder, M.; Sallis, P. Weather Detection in Vehicles by Means of Camera and LIDAR Systems. In Proceedings of the 6th International Conference on Computational Intelligence, Communication Systems and Networks, Tetova, Macedonia, 27–29 May 2014; pp. 186–191. [Google Scholar]
  14. Hahner, M.; Sakaridis, C.; Dai, D.; Van Gool, L. Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual, 11–17 October 2021; pp. 15283–15292. [Google Scholar]
  15. Zhao, J.; Li, Y.; Zhu, B.; Deng, W.; Sun, B. Method and Applications of Lidar Modeling for Virtual Testing of Intelligent Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 2990–3000. [Google Scholar] [CrossRef]
  16. Kutila, M.; Pyykönen, P.; Ritter, W.; Sawade, O.; Schäufele, B. Automotive LIDAR Sensor Development Scenarios for Harsh Weather Conditions. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 265–270. [Google Scholar]
  17. Bijelic, M.; Gruber, T.; Ritter, W. A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down? In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 760–767. [Google Scholar]
  18. Kutila, M.; Pyykönen, P.; Holzhüter, H.; Colomb, M.; Duthon, P. Automotive LiDAR Performance Verification in Fog and Rain. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1695–1701. [Google Scholar]
  19. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather Influence and Classification with Automotive Lidar Sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9 June 2019; pp. 1527–1534. [Google Scholar]
  20. Li, Y.; Duthon, P.; Colomb, M.; Ibanez-Guzman, J. What Happens for a ToF LiDAR in Fog? IEEE Trans. Intell. Transp. Syst. 2020, 22, 6670–6681. [Google Scholar] [CrossRef]
  21. Yang, T.; Li, Y.; Ruichek, Y.; Yan, Z. Performance Modeling a Near-Infrared ToF LiDAR Under Fog: A Data-Driven Approach. IEEE Trans. Intell. Transp. Syst. 2021, 23, 11227–11236. [Google Scholar] [CrossRef]
  22. Xia, X.; Meng, Z.; Han, X.; Li, H.; Tsukiji, T.; Xu, R.; Zhang, Z.; Ma, J. Automated Driving Systems Data Acquisition and Processing Platfor. arXiv 2022, arXiv:2211.13425. [Google Scholar]
Figure 1. The architecture of LMSFLS with fog interference introduced.
Figure 2. Cloud map of the intensity distribution of foggy point clouds. (a) Simulation result; (b) real result.
Figure 3. Numerical map of the intensity distribution of foggy point clouds. (a) Simulation result; (b) real result.
Figure 4. The layered SOTIF analysis process.
Figure 5. Clear point cloud.
Figure 6. Effects of fog at different concentrations on the static detection performance of LiDAR. (a) Effects on SNR; (b) effects on σ_R.
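To make the fog dependence in Figure 6 concrete, the following minimal sketch maps visibility to an extinction coefficient and attenuates a lidar echo accordingly. It uses the standard Koschmieder relation (α ≈ 3.912/V) and two-way Beer–Lambert attenuation rather than the paper's full photon-counting channel model, so treat it as illustrative only.

```python
import numpy as np

def extinction_coefficient(visibility_m: float) -> float:
    """Koschmieder relation: extinction coefficient (1/m) from visibility,
    assuming the conventional 5% contrast threshold."""
    return 3.912 / visibility_m

def return_power_ratio(range_m: float, visibility_m: float) -> float:
    """Two-way Beer-Lambert attenuation of a lidar echo from range_m away."""
    alpha = extinction_coefficient(visibility_m)
    return float(np.exp(-2.0 * alpha * range_m))

# A target at 100 m: denser fog sharply reduces echo power, and with it SNR.
for vis in (10_000.0, 1_000.0, 600.0, 200.0):
    print(f"visibility {vis / 1000:4.1f} km -> "
          f"power ratio {return_power_ratio(100.0, vis):.2e}")
```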
Figure 7. Foggy point clouds at different visibility levels. (a) Visibility of 10 km; (b) visibility of 1 km; (c) visibility of 0.6 km; (d) visibility of 0.2 km.
Figure 8. Straight urban expressway scenario. (a) Vertical view; (b) front view.
Figure 9. Curved urban expressway scenario. (a) Vertical view; (b) front view.
Figure 10. Localization errors at different visibility levels in the straight urban expressway scenario. (a) Lateral error; (b) longitudinal error.
Figure 11. Localization errors at different visibility levels in the curved urban expressway scenario. (a) Lateral error; (b) longitudinal error.
Figure 12. Feature-degraded point cloud.
Figure 13. Flowchart of the functional modification strategy.
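As a rough companion to Figure 13, here is a simplified sketch of the 3σ-criterion idea: the normalized innovation of each measurement channel is checked against its 3σ envelope, and the measurement noise covariance is inflated when the envelope is violated. The grading thresholds and inflation factors below are placeholders, not the paper's calibrated variance mismatch degree grading.

```python
import numpy as np

def adaptive_update(x, P, z, H, R):
    """One EKF measurement update with a 3-sigma innovation consistency check
    (illustrative grading; thresholds and inflation factors are placeholders).
    x: state (n,), P: state covariance (n, n),
    z: measurement (m,), H: measurement matrix (m, n), R: noise cov (m, m).
    """
    nu = z - H @ x                               # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    mismatch = np.abs(nu) / np.sqrt(np.diag(S))  # normalized innovation
    worst = mismatch.max()
    if worst <= 3.0:                             # within the 3-sigma bound
        scale = 1.0
    elif worst <= 6.0:                           # mild mismatch: inflate R
        scale = (worst / 3.0) ** 2
    else:                                        # severe mismatch: near-reject
        scale = 1e3
    S = H @ P @ H.T + scale * R                  # re-form S with inflated R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With scale = 1 this reduces to the standard EKF update; a fog-corrupted lidar odometry fix is thus down-weighted toward the IMU/GNSS prediction instead of dragging the fused estimate into divergence.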
Figure 14. Quasi-real straight urban expressway scenario. (a) Real scenario; (b) simulation scenario.
Figure 15. Quasi-real curved urban expressway scenario. (a) Real scenario; (b) simulation scenario.
Figure 16. Localization performance of different methods at 0.4 km visibility in the quasi-real straight urban expressway scenario. (a) Longitudinal errors; (b) lateral errors; (c) visibility recognition performance of the proposed strategy.
Figure 17. Localization performance of different methods at 0.4 km visibility in the quasi-real curved urban expressway scenario. (a) Longitudinal errors; (b) lateral errors; (c) visibility recognition performance of the proposed strategy.
Figure 18. Localization performance of different methods at variable visibility in the quasi-real straight urban expressway scenario. (a) Longitudinal errors; (b) lateral errors; (c) visibility recognition performance of the proposed strategy.
Figure 19. Localization performance of different methods at variable visibility in the quasi-real curved urban expressway scenario. (a) Longitudinal errors; (b) lateral errors; (c) visibility recognition performance of the proposed strategy.
Table 1. LiDAR parameters.

Parameter | Description | Value
P_0 | Laser emitted energy | 1.6 μJ
ΔS | Spatial resolution | 15 m
η_st | Laser emitter efficiency | 0.8
η_sr | Laser receiver efficiency | 0.3
η | Receiver quantum efficiency | 0.1
θ | Receiving telescope viewing angle | 0.0003 rad
A_R | Receiver effective area | 10 cm²
λ | Optical filter bandwidth | 60 nm
P_B | Sky background radiation brightness | 0.6 W/(m²·nm·sr)
C_D | Dark counting rate of the receiver | 300
Table 2. Sensor parameters.

Sensor | Parameters
INS gyroscope | Noise: 0.573 deg/s; frequency: 100 Hz
INS accelerometer | Noise: 0.1 m/s²; frequency: 100 Hz
GNSS position | Noise: 2 m; frequency: 10 Hz
GNSS velocity | Noise: 0.1 m/s; frequency: 10 Hz
LiDAR | Max range: 120 m; range accuracy: 0.002 m; azimuth resolution: 0.4 deg; elevation resolution: 1.875 deg; azimuthal limits: [−180°, 180°]; elevation limits: [−15°, 15°]; frequency: 20 Hz
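The values in Table 2 translate directly into a simulation configuration. The sketch below is one possible encoding; the names and the SensorConfig container are illustrative, not taken from the paper's toolchain, and the noise figures are assumed to be 1σ values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorConfig:
    noise: float      # noise magnitude (assumed 1-sigma), native unit
    frequency: float  # output rate (Hz)

# Simulation settings lifted from Table 2 (keys are illustrative).
SENSORS = {
    "ins_gyroscope":     SensorConfig(noise=0.573, frequency=100.0),  # deg/s
    "ins_accelerometer": SensorConfig(noise=0.1,   frequency=100.0),  # m/s^2
    "gnss_position":     SensorConfig(noise=2.0,   frequency=10.0),   # m
    "gnss_velocity":     SensorConfig(noise=0.1,   frequency=10.0),   # m/s
    "lidar_range":       SensorConfig(noise=0.002, frequency=20.0),   # m
}
```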
Table 3. The probability of detecting feature-degenerated point clouds.

Scenario | Visibility (km) | Probability of Detecting Feature-Degenerated Point Clouds (%)
Straight urban expressway | 10.0 | 0
Straight urban expressway | 1.0 | 0
Straight urban expressway | 0.8 | 0
Straight urban expressway | 0.6 | 6.7
Straight urban expressway | 0.4 | 40.0
Curved urban expressway | 10.0 | 0
Curved urban expressway | 1.0 | 0.7
Curved urban expressway | 0.8 | 4.6
Curved urban expressway | 0.6 | 15.1
Curved urban expressway | 0.4 | 40.8
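Table 3 reports, per visibility level, the share of scans flagged as feature-degenerate over a run. One common way to implement such a flag (not necessarily the authors' exact criterion) is to threshold the smallest eigenvalue of the point-to-plane ICP normal matrix, since a weakly constrained direction shows up as a near-zero eigenvalue; the threshold below is a tunable placeholder.

```python
import numpy as np

def is_feature_degenerate(normals: np.ndarray, eig_min_threshold: float) -> bool:
    """Flag a scan whose matched plane normals poorly constrain translation.

    normals: (N, 3) array of unit normals of the matched planar features.
    sum(n n^T) is the translation block of the Gauss-Newton normal equations
    in point-to-plane ICP; a small minimum eigenvalue means some direction
    (e.g., longitudinal on a featureless straight road in fog) is nearly
    unobservable.
    """
    A = normals.T @ normals                     # 3x3 information matrix
    return bool(np.linalg.eigvalsh(A).min() < eig_min_threshold)

def degeneracy_rate_percent(flags: list[bool]) -> float:
    """Table 3 statistic: percentage of scans in a run flagged degenerate."""
    return 100.0 * sum(flags) / len(flags)
```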
Table 4. Results of the SOTIF analysis.

Scenario | Triggering Condition | Functional Insufficiency (Component-Level) | Functional Insufficiency (System-Level) | Functional Insufficiency (Vehicle-Level)
Straight urban expressway | Visibility is 0.6 km or less | Solution degradation of longitudinal localization | Divergence of longitudinal localization | Unintended acceleration or deceleration
Curved urban expressway | Visibility is 0.8 km or less | Solution degradation of longitudinal and lateral localization | Divergence of longitudinal and lateral localization | Unintended acceleration, deceleration, or steering

Scenario | SOTIF-Related Harms
Straight urban expressway | Collision with the front neighboring vehicle; collision with the rear neighboring vehicle
Curved urban expressway | Collision with a neighboring vehicle or road infrastructure in the front left, front right, rear left, or rear right
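Encoded at runtime, the quantitative triggering conditions of Table 4 reduce to a visibility threshold per scenario class. The sketch below uses the table's values directly; the scenario label and the recognized visibility are assumed to be supplied by upstream modules.

```python
# Visibility (km) at or below which the identified functional
# insufficiencies are triggered (values from Table 4).
TRIGGER_VISIBILITY_KM = {
    "straight_urban_expressway": 0.6,
    "curved_urban_expressway": 0.8,
}

def sotif_triggered(scenario: str, visibility_km: float) -> bool:
    """True if the fog is dense enough to trigger the SOTIF-related risk."""
    return visibility_km <= TRIGGER_VISIBILITY_KM[scenario]

assert sotif_triggered("curved_urban_expressway", 0.7)        # triggered
assert not sotif_triggered("straight_urban_expressway", 0.7)  # not yet
```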
Table 5. The validation scheme of the functional modification strategy.

Validation Scenario | Road Length | Visibility | Comparison Methods
Quasi-real straight urban expressway | 750 m | Case 1: 0.4 km; Case 2: 0.4–1 km | EKF; AEKF; EKF with FDI
Quasi-real curved urban expressway | 700 m | Case 1: 0.4 km; Case 2: 0.4–1 km | EKF; AEKF; EKF with FDI
Table 6. Analysis of localization accuracy of different methods.

Error Type | Metric | Requirement | EKF | AEKF | EKF with FDI | Proposed Strategy
Longitudinal error (m) | Max | 1.40 | 11.85 | 0.62 | 0.31 | 0.12
Longitudinal error (m) | 95% accuracy | 0.48 | 8.88 | 0.51 | 0.27 | 0.10
Lateral error (m) | Max | 0.57 | 4.59 | 0.20 | 0.13 | 0.10
Lateral error (m) | 95% accuracy | 0.24 | 4.17 | 0.17 | 0.11 | 0.10
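The statistics in Table 6 can be recomputed from a logged error trace as below, assuming "95% accuracy" denotes the 95th-percentile absolute error (a common convention); the file name in the usage line is hypothetical.

```python
import numpy as np

def error_stats(errors) -> tuple[float, float]:
    """Return (max, 95th-percentile) absolute error over a localization run."""
    abs_err = np.abs(np.asarray(errors, dtype=float))
    return float(abs_err.max()), float(np.percentile(abs_err, 95))

# e.g.: lat_max, lat_p95 = error_stats(np.loadtxt("lateral_error.txt"))
```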
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
