1. Introduction
Environmental perception is a critical technology for ALVs (Autonomous Land Vehicles) and UGVs (Unmanned Ground Vehicles) to achieve autonomous navigation in outdoor environments. Negative obstacles such as ditches, trenches, potholes, puddles, and steep drops in unstructured environments seriously affect the safe driving of ALVs/UGVs. Accurate negative obstacle detection is therefore of particular importance in the field of unmanned driving. Estimating a negative obstacle's geometry at a given distance remains challenging because the obstacle lies below the ground surface, which is difficult for vehicle-mounted sensors to observe.
Tingbo Hu et al. [1] proposed an image sequence-based negative obstacle detection algorithm. Their algorithm exploits the phenomenon that a negative obstacle appears 'darker' than the surrounding badlands, and that this darkness becomes more pronounced with distance. Different cues are then combined in a Bayesian framework to detect obstacles in the image sequence. L. Matthies et al. [2] proposed a negative obstacle detection method based on infrared features. The method exploits the phenomenon that negative obstacles tend to dissipate heat more slowly and are warmer than the surrounding terrain at night; it performs local intensity analysis of the infrared images to mark areas of significant intensity as potential negative obstacles [3]. Final negative obstacle validation is then performed by multi-frame verification and fusion. Arturo L. Rankin et al. [4] further coupled nighttime negative obstacle detection with thermal cues and geometric cues derived from stereo range data, using edge detection to generate closed-contour candidate negative obstacle regions and geometric filtering to determine whether they lie in the ground plane. Negative obstacle cues were fused from thermal signatures, geometry-based range image analysis, and geometry-based terrain map analysis.
These three typical negative obstacle detection methods have limitations with respect to temperature or lighting conditions and are not sufficiently robust [5]. Image sequence-based methods tend to misreport shadows of ground vegetation as negative obstacles, while the most significant limitation of the thermal infrared method is that it works only at night and is strongly affected by climate and environment. Compared with infrared and visual sensors, LiDAR has the advantage of directly and accurately capturing range information without being affected by lighting or weather conditions, so it is widely used in ALV/UGV environmental perception [6].
LiDAR occupies a prominent position in the field of negative obstacle detection, offering high lateral resolution, high ranging accuracy, and strong resistance to active interference. More accurate detection can be achieved with HDL-64 or VLS-128 LiDAR on large vehicles, but the high price of multi-beam LiDAR hinders its widespread adoption; single-beam LiDAR with a rotating mechanical structure is therefore frequently used on small and micro vehicles. Shang E et al. [7] proposed a negative obstacle detection method based on dual HDL-32 LiDAR with a unique dual-LiDAR mounting scheme, an AMFA (adaptive matching filter-based algorithm), and an FFA (feature fusion-based algorithm). The FFA fuses all features generated by different LiDARs or captured in different frames, with the weight of each feature estimated by the Bayes rule. The algorithm shows good robustness and stability, a 20% increase in detection range, and a reduction in computation time by two orders of magnitude compared to the state of the art. Liu Jiayin et al. [8] proposed an environment sensing method based on dual HDL-32 LiDAR whose unique mounting scheme significantly increases the point density ahead of the vehicle compared to a single HDL-64 LiDAR. Wang Pei et al. [9] proposed a negative obstacle detection algorithm fusing single-line LiDAR and vision; however, the method cannot accurately estimate negative obstacle geometric features, its detection range is small, and its accuracy is insufficient. The method in [10] uses a Kinect sensor to detect negative obstacles and converts the result into laser scan data. The method in [11] proposes a set of algorithms for general obstacle feature extraction using radar and images, together with a multilayer contour extraction method specifically applicable to negative obstacle detection. The method in [12] uses stereo information combined with saliency to initialize an energy function and color information to optimize the result; the optimization result is then hysteresis-thresholded to obtain the final negative obstacle region. All of the above methods achieve good environmental sensing and save hardware costs compared to a single multi-beam LiDAR, but their accuracy varies, and combining multiple LiDARs still does not fully solve the cost problem.
To address the above problems, this paper proposes a method based on a single multi-beam LiDAR that improves the accuracy of negative obstacle geometric feature estimation at lower hardware cost. The specific contributions of the method are as follows:
- (1) Since it is difficult to measure feature lengths in 3D point cloud data, this paper proposes converting 3D point cloud data into 2D elevation raster maps to estimate geometric features, which reduces the difficulty of estimation;
- (2) Among many 3D and 2D denoising methods, the most suitable filters for denoising the negative obstacle point cloud and the elevation image are selected, and a denoising pipeline applicable to the 3D-to-2D mapping is proposed;
- (3) A method for estimating geometric features based on two-dimensional elevation images of negative obstacles is proposed;
- (4) A method for measuring the horizontal distance from negative obstacles to the LiDAR is proposed.
The method is simple, computationally efficient, and low-cost. It avoids previous costly configurations such as dual multi-beam LiDAR, jointly calibrated LiDAR and camera, or IMU [13] combined with LiDAR, and instead uses a single VLP-16 LiDAR to accurately estimate the geometry of negative obstacles on horizontal ground through virtual images generated from the data and a few geometric operations. This saves hardware cost and frees up space and budget for implementing driverless technology.
2. Data Preprocessing
After calibrating the external parameters of the LiDAR [14,15,16], the original point cloud was collected, and a PassThrough filter [17] was applied to isolate the negative obstacle. The denoising effects of the StatisticalOutlierRemoval filter [18], RadiusOutlier removal, and Conditional removal [19] were then compared; the StatisticalOutlierRemoval filter achieved an excellent denoising effect on the negative obstacle point cloud. The 3D negative obstacle point cloud was projected onto the XY plane to obtain a 2.5D negative obstacle elevation image, and 0.02 m and 0.05 m rasters (rasterization) were applied to the elevation image to obtain the elevation raster image. The denoising effects of the Gauss filter [20], Median filter [21], and Mean filter [22] were compared, and the Median filter, which best met the denoising requirements of this experiment, was used. At this point, it was determined whether the elevation raster image came from the first frame of data that hit the front wall of the negative obstacle precisely; if so, we continued to estimate the geometric features, and if not, the negative obstacle point cloud had to be selected again. Based on the elevation raster map, a distance measurement method was built to estimate the length and width of the negative obstacle on the image, and the geometric features of the negative obstacle were then calculated as the product of the raster cell size and the number of pixels. The flowchart of the single-frame 3D laser point cloud-based structured negative obstacle geometry estimation is shown in Figure 1.
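To make the raster-based measurement step concrete, the sketch below (hypothetical illustration code, not the paper's implementation; the function names `elevation_raster` and `feature_length`, the minimum-z cell statistic, and the depth threshold are all assumptions) builds a 2.5D elevation grid from an N × 3 point cloud and estimates one extent of a negative obstacle as the number of below-ground cells multiplied by the raster cell size:

```python
import numpy as np

def elevation_raster(points, cell=0.05):
    """Project a 3D point cloud (N x 3 array) onto the XY plane and build a
    2.5D elevation raster; each cell keeps the minimum z it receives
    (an assumed per-cell statistic chosen to preserve pit depth)."""
    xy_min = points[:, :2].min(axis=0)
    idx = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    raster = np.full(shape, np.inf)
    for (i, j), z in zip(idx, points[:, 2]):
        if z < raster[i, j]:
            raster[i, j] = z
    return raster

def feature_length(raster, cell=0.05, depth_thresh=-0.1):
    """Estimate one extent of the negative obstacle as
    (number of below-threshold cells along the x axis) x (cell size)."""
    mask = raster < depth_thresh            # cells lying below the ground plane
    rows = np.flatnonzero(mask.any(axis=1))
    return 0.0 if rows.size == 0 else (rows[-1] - rows[0] + 1) * cell
```

For a 0.4 m wide pit whose points lie roughly 0.3 m below a flat ground plane, `feature_length` recovers the extent to within one raster cell, which mirrors the pixel-count-times-cell-size rule described above.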
2.6. Elevation Raster Image Denoising and Smoothing
When generating an elevation raster image, some new noise inevitably appears on the image. Unlike the noise in Section 2.2, which is point cloud noise and is handled by the point cloud denoising filters described above, the noise of the elevation map is essentially different in nature, so other methods are used for further denoising.
Noise burrs generally have definite frequency characteristics, and appropriate filtering techniques can effectively suppress noise and improve the signal-to-noise ratio of the measurement system. The more commonly used smoothing filters are the Median filter, Gauss filter, Aver (average) filter, Adaptive filter, and Fit filter. A filter establishes a mathematical model through which the image data undergo an energy transformation, and the noise component is excluded as part of the low-energy content. For the eight-connected region of the image, the pixel value at the center point is set equal to the mean of the pixel values in the eight-connected region; an ideal filter would produce ringing in the image, whereas a Gauss filter has a smooth system function and avoids the ringing phenomenon.
In this paper, the three filters most commonly used for elevation raster images are selected for comparison: the Median filter, the Gauss filter, and the Aver filter.
- a. Median filter.
The basic principle of the Median filter is to replace the value of a point in a digital image or sequence with the median of the values in a neighborhood of that point, so that the surrounding pixels approach the true value, thus eliminating isolated noise points. The method slides a structured 2D template over the image and sorts the pixels within the template by value into a monotonically ascending (or descending) sequence. The output of the 2D Median filter is

g(x, y) = \underset{(s, t) \in A}{\mathrm{med}} \{ f(x - s, y - t) \}

where f(x, y) and g(x, y) are the original image and the processed image, respectively, and A is a 2D template, usually a 2 × 2 or 3 × 3 area, but it can also take other shapes, such as a line, circle, cross, ring, or rhombus.
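The principle above can be sketched as a naive 2D Median filter in Python (illustration only, not the OpenCV routine used later; the 3 × 3 template and replicate edge padding are assumed choices):

```python
import numpy as np

def median_filter2d(img, k=3):
    """Minimal 2D Median filter: each pixel becomes the median of its
    k x k neighborhood; image borders are handled by replicate padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # sort the k*k window values and take the middle one
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

An isolated noise pixel (a single outlier in an otherwise flat neighborhood) never occupies the median rank of its window, so it is removed exactly as the text describes.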
- b. Gauss filter.
A Gauss filter is essentially a signal filter used to smooth the signal and obtain better image edges. The image is first Gauss-smoothed to remove noise, and the second-order derivative is then used to determine the edges; the frequency-domain product is equivalent to a spatial-domain convolution. Let p_i = (x_i, y_i, z_i) be a given point and N(p_i) its neighborhood of points q_j = (x_j, y_j, z_j). After Gauss filter smoothing, the z-axis coordinate of the point p_i becomes

z_i' = \frac{1}{k} \sum_{q_j \in N(p_i)} G(\| p_i - q_j \|) \, z_j

where G is the Gaussian function and k is the normalization coefficient, which are expressed as

G(d) = \exp\!\left( -\frac{d^2}{2\sigma^2} \right), \qquad k = \sum_{q_j \in N(p_i)} G(\| p_i - q_j \|).
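The Gaussian-weighted, normalized average above can be sketched for one row of an elevation image (the sigma and window radius are illustrative parameters, not values from the paper):

```python
import numpy as np

def gauss_smooth_z(z, sigma=1.0, radius=2):
    """Smooth a 1D array of z values: each value is replaced by a
    Gaussian-weighted average of its neighborhood, renormalized by the
    sum of the weights actually inside the array (coefficient k)."""
    offsets = np.arange(-radius, radius + 1)
    w = np.exp(-offsets**2 / (2.0 * sigma**2))   # Gaussian function G
    n = len(z)
    out = np.empty(n, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        ww = w[lo - i + radius: hi - i + radius]  # weights inside bounds
        out[i] = np.dot(ww, z[lo:hi]) / ww.sum()  # divide by k
    return out
```

Because the weights are renormalized, a constant surface is left unchanged, while an isolated spike is spread out and attenuated.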
- c. Aver filter.
The idea of the Aver filter is to replace the value of a given point with a weighted average of the values within its neighborhood. Let p_i = (x_i, y_i, z_i) be a given point and N(p_i) its neighborhood. After Aver filter smoothing, the z-axis coordinate of the point p_i becomes

z_i' = \frac{1}{k} \sum_{q_j \in N(p_i)} w_j z_j

where the w_j are the weights and k = \sum_{q_j \in N(p_i)} w_j is the normalized weighting factor.
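A minimal sketch of this neighborhood averaging, assuming uniform weights so that the normalization factor k reduces to the window length:

```python
import numpy as np

def aver_smooth(z, radius=1):
    """Aver (mean) filter on a 1D array: each value becomes the uniform
    average of its neighborhood; near the borders the window shrinks,
    so the normalization factor is always the actual window length."""
    n = len(z)
    out = np.empty(n, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = z[lo:hi].mean()
    return out
```

Unlike the Median filter, this spreads an outlier into its neighbors instead of discarding it, which matches the weak denoising effect observed for the Aver filter in Figure 8.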
Comparing the principles of the three denoising filters, we found that the Median filter is very effective at eliminating particle noise. It plays a unique role in the phase analysis and processing of optical measurement fringe images, though it has little effect on fringe center analysis. The Median filter is a classical method for smoothing noise and is commonly used in image processing because it preserves edge information. To further determine whether the chosen filter is appropriate for this experiment, we denoised the elevation raster images with each of the three filters separately.
Figure 8 compares the negative obstacle image before denoising with the three filtering results. From left to right, they are the image before denoising, the result of the Aver filter, the result of the Gauss filter, and the result of the Median filter. The denoising effect of the Aver filter is not significant, and the Gauss filter removes pixels belonging to the negative obstacle itself, which would affect the subsequent estimation of the negative obstacle's length; only the Median filter meets the requirements of this method. Therefore, the OpenCV-based Median filter algorithm was finally chosen to denoise our elevation raster images.
Building on the Median filter, this paper adopts the design idea, operation steps, algorithm flow, and analysis of a fast Median filter encoding algorithm. The algorithm exploits the positional relationship of the elements in the data window and the correlation between the data elements of two adjacent Median filter windows: it retains the encoded sorting information of the previous window as a reference for sorting the following window. This reduces the number of comparisons by effectively combining two adjacent Median filter operations of the traditional algorithm into one.
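The window-overlap idea can be illustrated with the classic running-histogram median (Huang's algorithm) for 8-bit data; this is a well-known fast Median filter related in spirit to, but not the same as, the encoding-sorting algorithm adopted above. When the window slides by one element, exactly one value leaves and one enters, so the histogram is updated instead of re-sorting the whole window:

```python
import numpy as np

def running_median_u8(row, k=3):
    """1D running-histogram median for 8-bit values (Huang's algorithm).
    The 256-bin histogram is maintained incrementally as the window of
    length k slides along the (edge-padded) row."""
    pad = k // 2
    padded = np.pad(row, pad, mode="edge").astype(np.int64)
    hist = np.zeros(256, dtype=np.int64)
    for v in padded[:k]:                 # build histogram of first window
        hist[v] += 1
    target = k // 2 + 1                  # rank of the median among k values

    def hist_median():
        c = 0
        for v in range(256):
            c += hist[v]
            if c >= target:
                return v

    out = np.empty(len(row), dtype=np.int64)
    out[0] = hist_median()
    for i in range(1, len(row)):
        hist[padded[i - 1]] -= 1         # value leaving the window
        hist[padded[i + k - 1]] += 1     # value entering the window
        out[i] = hist_median()
    return out
```

Each step costs O(1) histogram updates plus a bounded scan for the median rank, instead of a full O(k log k) sort per window, which is the same kind of saving the encoding-sorting scheme aims for.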
Author Contributions
All five authors contributed to this work. X.L. designed the research. X.L., Z.G. and X.C. processed the corresponding data. X.L. and Z.G. wrote the first draft of the manuscript. S.S. and J.L. revised and edited the final version. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Natural Science Foundation of Heilongjiang Province of China, grant number LH2020C042, and the Fundamental Research Funds for the Central Universities, grant number 2572019CP20.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data used to support this study’s findings are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Hu, T.; Nie, Y.; Wu, T.; He, H. Negative obstacle detection from image sequences. Proc. SPIE 2011, 8009, 1–7.
- Matthies, L.; Rankin, A. Negative Obstacle Detection by Thermal Signature. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27 October–1 November 2003; Volume 1, pp. 906–913.
- Karunasekera, H.; Zhang, H.; Xi, T. Stereo vision based negative obstacle detection. In Proceedings of the IEEE International Conference on Control and Automation, Ohrid, Macedonia, 3–6 July 2017; pp. 834–838.
- Rankin, A.L.; Huertas, A.; Matthies, L.H. Night-time negative obstacle detection for off-road autonomous navigation. Proc. SPIE 2007, 6561, 1–12.
- Wu, Y.; Li, Y.; Li, W. Robust LiDAR-based localization scheme for unmanned ground vehicle via multisensor fusion. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–11.
- Baum, T.E.; Chobot, J.P.; Wolkowicz, K.I.; Brennan, S.N. Negative obstacle detection using LiDAR sensors for a robotic wheelchair. IEEE Access 2018, 3, 1.
- Shang, E.; An, X.; Wu, T. LiDAR based negative obstacle detection for field autonomous land vehicles. J. Field Robot. 2015, 33, 591–617.
- Liu, J.; Tang, Z.; Wang, A. Negative Obstacle Detection in Unstructured Environment Based on Multiple LiDARs and Compositional Features. Robot 2017, 39, 638–651.
- Wang, P.; Guo, J.; Li, L. Negative Obstacle Detection Algorithm Based on Single Line Laser Radar and Vision Fusion. Comput. Eng. 2017, 43, 303–308.
- Ghani, M.F.A.; Sahari, K.S.M. Detecting negative obstacle using Kinect sensor. Int. J. Adv. Robot. Syst. 2017, 14.
- Zhang, B.; Chen, H.; Xi, J. Obstacle Detection on Unstructured Terrain for Unmanned Ground Vehicles. Automot. Eng. 2009, 31, 526–530.
- Karunasekera, H.; Wang, H.; Zhang, H.D. Energy Minimization Approach for Negative Obstacle Region Detection. IEEE Trans. Veh. Technol. 2019, 68, 11668–11678.
- Zhang, Y.G.; Li, Q.; Wang, T.S. A detection method of negative obstacles based on IMU-LiDAR. Electron. Opt. Control 2019, 26, 106–110.
- Cheng, J.; Feng, Y.; Cao, Y. Extrinsic Calibration Method for Multiple LiDARs Mounted on Mobile Vehicle. Opto-Electron. Eng. 2013, 40, 89–94.
- Chen, G.; Gao, Z.; He, L. Step-By-Step Automatic Calibration Algorithm for Exterior Parameters of 3D LiDAR Mounted on Vehicle. Chin. J. Lasers 2017, 44, 1–7.
- Dib, J.; Sirlantzis, K.; Howells, G. A Review on Negative Road Anomaly Detection Methods. IEEE Access 2020, 8, 57298–57316.
- Wang, J.; Zhao, H.; Wang, D. GPS trajectory-based segmentation and multi-filter-based extraction of expressway curbs and markings from mobile laser scanning data. Eur. J. Remote Sens. 2018, 51, 1022–1035.
- Xianming, M.; Yongshu, L.; Jiali, X. Experiment and analysis of point cloud denoising using bilateral filtering method. Bull. Surv. Mapp. 2017, 115, 87–89.
- Morales, G.; Human, S.G.; Tells, J. Shadow removal in high-resolution satellite images using conditional generative adversarial networks. In Proceedings of the Annual International Symposium on Information Management and Big Data, Lima, Peru, 21–23 August 2019; Volume 898, pp. 328–340.
- Han, L.; Yan, Q.; Cao, Z. Study of the search method of slip surface of rock slope based on Gauss filter technology. J. China Univ. Min. Technol. 2020, 49, 471–478.
- Wang, X.; Hou, R.; Gao, X. Research on yarn diameter and unevenness based on an adaptive median filter denoising algorithm. Fibres Text. East. Eur. 2020, 28, 36–41.
- Anindita, K.; Sumanta, B.; Chittabarni, S. An Axis Based Mean Filter for Removing High-Intensity Salt and Pepper Noise. In Proceedings of the IEEE Calcutta Conference, Kolkata, India, 28–29 February 2020; pp. 363–367.
- Yang, M.; Gan, S.; Yuan, X. Point cloud denoising processing technology for complex terrain debris flow gully. In Proceedings of the IEEE 4th International Conference on Cloud Computing and Big Data Analytics, Singapore, 17–18 April 2019; pp. 402–406.
- Zhang, F.; Zhang, C.; Yang, H. Point cloud denoising with principal component analysis and a novel bilateral filter. Traitement Du Signal 2019, 36, 393–398.
- Li, B.; Zhang, T.; Xia, T. Vehicle Detection from 3D LiDAR Using Fully Convolutional Network. In Proceedings of the Robotics: Science and Systems, Ann Arbor, MI, USA, 18–22 June 2016; Volume 12.
- Zhen, X.; Seng, J.C.Y.; Somani, N. Adaptive Automatic Robot Tool Path Generation Based on Point Cloud Projection Algorithm. In Proceedings of the IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Zaragoza, Spain, 10–13 September 2019.
- Attia, M.; Slama, Y. Efficient Initial Guess Determination Based on 3D Point Cloud Projection for ICP Algorithms. In Proceedings of the International Conference on High Performance Computing & Simulation, Genoa, Italy, 17–21 July 2017.
- Guan, Y.; Sang, X.; Xing, S. Parallel multi-view polygon rasterization for 3D light field display. Opt. Express 2020, 28, 34406.
- Marzougui, M.; Alasiry, A.; Kortli, Y. A Lane Tracking Method Based on Progressive Probabilistic Hough Transform. IEEE Access 2020, 8, 84893–84905.
- Chen, B.; Ding, B.; Wang, J. Application of an Improved Hough Transform and Image Correction Algorithm in ACC. J. Phys. Conf. Ser. 2020, 1621, 012044.
- Cai, Y.; Shi, T.; Tang, Z. Negative obstacle perception in unstructured environment with double multi-beam LiDAR. Acta Autom. Sin. 2018, 44, 569–576.
Figure 1. Flow chart for estimating geometric features of structured negative obstacles based on a single-frame 3D laser point cloud.
Figure 2. Raw point cloud image. (a) 40 cm × 40 cm x-y view; (b) 40 cm × 40 cm y-z view; (c) 40 cm × 40 cm x-z view; (d) 100 cm × 40 cm x-y view; (e) 100 cm × 40 cm y-z view; (f) 100 cm × 40 cm x-z view.
Figure 3. Comparison before and after the PassThrough filter. (a) Point cloud data before the PassThrough filter; (b) rendering after the PassThrough filter.
Figure 4. RadiusOutlier removal schematic.
Figure 5. Rendering of the negative obstacle point cloud after denoising.
Figure 7. Elevation raster image. (a) Groove-less; (b) grooved.
Figure 8. Comparison chart of the three filtering and denoising effects.
Figure 9. Straight-line data detected by the Progressive Probabilistic Hough Transform.
Figure 10. Estimating the length of negative obstacles.
Figure 11. The mathematical model for width estimation.
Figure 12. ROS autonomous navigation vehicle.
Figure 13. Typical negative obstacle scenarios. (a) 40 cm × 40 cm; (b) 100 cm × 40 cm; (c) 120 cm × 60 cm.
Figure 14. Unstructured experimental scenarios. (a) 1 m; (b) 1.5 m; (c) ∞.
Figure 15. Schematic diagram of negative obstacle detection range estimation.
Figure 16. Comparison of detection results at different distances. (a) 111.96 cm; (b) 129.94 cm; (c) 149.55 cm; (d) 189.41 cm; (e) 244.33 cm; (f) 324.90 cm; (g) 497.32 cm; (h) 1718.7 cm.
Figure 17. Relative error analysis. (a) Length relative error analysis; (b) width relative error analysis.
Figure 18. Validation data error bars. (a) Length data error bars; (b) width data error bars.
Figure 19. Comparison diagram of negative obstacle detection performance.
Figure 20. Performance comparison of single-line LiDAR and 16-line LiDAR. (a) Single-line LiDAR detecting a negative obstacle; (b) 16-line LiDAR detecting a negative obstacle.
Table 1. Specific parameters of the experiment.

| Parameter | Technical Indicators |
|---|---|
| LiDAR Height (m) | 0.3 |
| Effective Scanning Angle (°) | −15~+15 |
| Vehicle Speed (m/s) | 1 |
| Sampling Frequency (Hz) | 20 |
Table 2. LiDAR main parameters and technical specifications.

| Parameter | Technical Indicators |
|---|---|
| Number of Laser Lines | 16 |
| Measuring Range (m) | 100 |
| Weight (g) | 830 |
| Measurement Accuracy (cm) | ±3 |
| Horizontal Measurement Angle Range (°) | 360 |
| Horizontal Angle Resolution (°) | 0.1~0.4 |
| Vertical Angle Resolution (°) | 2 |
| Vertical Measurement Angle Range (°) | 30 (−15~+15) |
| Detection Frequency (Hz) | 5~20 |
Table 3. Experimental results of multidimensional, multi-group negative obstacle measurements.

| Experiment No. | Distance (cm) | Hand Mark Length (m) | Hand Mark Width (m) | Measured Length (m) | Measured Width (m) |
|---|---|---|---|---|---|
| 1 | <111 | 0.4 | 0.4 | - | - |
| | 111.96 | | | 0.4 | 0.4 |
| | 129.94 | | | 0.400143 | 0.400567 |
| | 149.55 | | | 0.4032 | 0.40416 |
| | 189.41 | | | 0.4087 | 0.40772 |
| | 244.33 | | | 0.4123 | 0.41497 |
| | 342.90 | | | 0.4225 | 0.4357 |
| | 497.32 | | | 0.36491 | 0.370384 |
| | 1718.70 | | | 0.32 | 0.3004 |
| | >1719 | | | - | - |
| 2 | <111 | 1 | 0.4 | - | - |
| | 111.96 | | | 1 | 0.4 |
| | 129.94 | | | 1 | 0.4 |
| | 149.55 | | | 1.000212 | 0.40034 |
| | 189.41 | | | 1.0008 | 0.40087 |
| | 244.33 | | | 1.00443 | 0.4047 |
| | 342.90 | | | 1.00753 | 0.4192 |
| | 497.32 | | | 1.01847 | 0.4345 |
| | 1718.70 | | | 1.0456 | 0.47432 |
| | >1719 | | | - | - |
| 3 | <111 | 1.2 | 0.6 | - | - |
| | 111.96 | | | 1.2 | 0.6 |
| | 129.94 | | | 1.2 | 0.6 |
| | 149.55 | | | 1.200333 | 0.601333 |
| | 189.41 | | | 1.2004 | 0.60384 |
| | 244.33 | | | 1.201332 | 0.59357 |
| | 342.90 | | | 1.2078 | 0.62 |
| | 497.32 | | | 1.210322 | 0.620384 |
| | 1718.70 | | | 1.24 | 0.540357 |
| | >1719 | | | - | - |
Table 4. Length and width error analysis table (ALE/AWE: absolute length/width error, cm; RLE/RWE: relative length/width error).

| Obstacle Size | Error | 111.96 | 129.94 | 149.55 | 189.41 | 244.33 | 342.9 | 497.32 |
|---|---|---|---|---|---|---|---|---|
| 40 cm | ALE | 0 | 0.0143 | 0.32 | 0.87 | 1.23 | 2.25 | −3.509 |
| | RLE | 0% | 0.03575% | 0.8% | 2.175% | 3.075% | 5.625% | −8.7725% |
| 100 cm | ALE | 0 | 0 | 0.0212 | 0.08 | 0.443 | 0.753 | 1.847 |
| | RLE | 0% | 0% | 0.0212% | 0.08% | 0.443% | 0.753% | 1.847% |
| 120 cm | ALE | 0 | 0 | 0.0333 | 0.04 | 0.1332 | 0.78 | 1.0322 |
| | RLE | 0% | 0% | 0.02775% | 0.033% | 0.111% | 0.65% | 0.86017% |
| 40 cm | AWE | 0 | 0.0567 | 0.416 | 0.772 | 1.497 | 3.57 | −2.9616 |
| | RWE | 0% | 0.14175% | 1.04% | 1.93% | 3.7425% | 8.925% | −7.404% |
| 60 cm | AWE | 0 | 0 | 0.1333 | 0.384 | −0.0634 | 2 | 2.0384 |
| | RWE | 0% | 0% | 0.22217% | 0.64% | 0.10567% | 3.33% | 3.3973% |

Column headers give the detection distance (cm).
Table 5. Experimental results of negative obstacle measurement in non-structural environments.

| Experiment No. | Distance (m) | Maximum Measurable Diameter (m) | Measurement Result (m) | Error (%) |
|---|---|---|---|---|
| 1 | 1 | 1.19 | - | - |
| | 2 | | 0.8479 | 28.75 |
| | 3 | | 0.9432 | 20.74 |
| | 4 | | 0.5014 | 57.87 |
| | 5 | | 0.6470 | 45.63 |
| | 6 | | - | - |
| 2 | 1 | 1.62 | - | - |
| | 2 | | 1.9502 | 20.38 |
| | 3 | | 2.2335 | 37.87 |
| | 4 | | 0.8274 | 48.97 |
| | 5 | | 1.1193 | 30.91 |
| | 6 | | - | - |
| 3 | 1 | 0.5 | - | - |
| | 2 | | 0.4735 | 5.3 |
| | 3 | | 0.5596 | 11.92 |
| | 4 | | 0.4324 | 13.52 |
| | 5 | | 0.3350 | 33 |
| | 6 | | - | - |
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).