Qualification of the PAVIN Fog and Rain Platform and Its Digital Twin for the Evaluation of a Pedestrian Detector in Fog
Abstract
1. Introduction
- First, how can we guarantee that we have tested a wide enough range of conditions? AI-based algorithms are black boxes, and it is therefore very difficult to find their boundary conditions. Indeed, the typology, position, and orientation of the pedestrian can influence the results of the algorithm. Similarly, the environment, disturbing objects, and occlusions can influence the detection. Beyond these geometric issues, weather conditions also have strong impacts, e.g., illumination, camera glare [4,5], fog [6,7,8,9,10], rain [7,10,11], and snow [8]. Interest in this issue is recent in the field of autonomous vehicles and is the subject of numerous studies [12,13,14], but at present, the works listed in the literature only present particular cases, not a global solution. Even if all the conditions required for successful validation have been identified, it is impossible to reproduce them all in real-world conditions. One solution is to use numerical simulation. Many numerical simulators dedicated to autonomous vehicles exist [15,16,17,18]. Most offer variants regarding pedestrians, environments, or weather, but to our knowledge, only a few are calibrated against real-world conditions [15].
- The second question is: How can we validate the realism and representativeness of a digital simulator? Will the behavior of artificial intelligence be the same in front of the simulator as in reality? To make things more complex, the data can be partially or totally simulated, so X-in-the-loop simulators with augmented-reality mechanisms have appeared. It is a model of this type that we propose to test in this article. Beyond numerical simulation, physical simulation methods are used to reproduce adverse weather conditions. This is the case with the PAVIN fog and rain platform, which can produce adverse weather conditions on demand [19]. This platform is calibrated from a meteorological point of view (calibration of intensities, drop size, and velocity). However, unlike a purely simulated test, a physical test must be qualified in terms of repeatability. This is essential in the context of certification tests, where test laboratories are often qualified and audited, making repeatability tests and uncertainty measurements mandatory. Can this type of platform guarantee the repeatability of tests, as well as a standard deviation on the results obtained with AI?
2. Method
2.1. Physically Simulated Fog: PAVIN Fog and Rain Platform
2.2. Numerically Simulated Fog: K-HiL Model
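The K-HiL model builds on Koschmieder's law [28]: the apparent luminance of an object at distance d in fog is L(d) = L0·exp(−βd) + L∞·(1 − exp(−βd)), where L∞ is the background luminance and the extinction coefficient β follows from the MOR through the standard 5% contrast threshold, β = −ln(0.05)/MOR ≈ 3/MOR. As an illustration only, the minimal Python sketch below applies this law to a clear-weather image and its depth map (e.g., from the ZED 2i), estimating L∞ as the mean luminance of the 10% brightest pixels, as described in the conclusions; the function name and interface are ours, not the paper's implementation.

```python
import numpy as np

def koschmieder_fog(image, depth, mor, background_luminance=None):
    """Apply Koschmieder's law to a clear-weather image (illustrative sketch).

    image: float array in [0, 1], shape (H, W) or (H, W, 3).
    depth: per-pixel distance to the camera in metres (e.g., from the ZED 2i).
    mor:   meteorological optical range in metres (23 m for MF, 10 m for DF).
    """
    # Extinction coefficient from the MOR via the 5% contrast threshold:
    # MOR = -ln(0.05) / beta, hence beta ~= 3 / MOR.
    beta = -np.log(0.05) / mor

    # Background luminance estimated as the mean of the 10% brightest pixels,
    # as done in the paper.
    if background_luminance is None:
        luma = image.mean(axis=-1) if image.ndim == 3 else image
        background_luminance = luma[luma >= np.quantile(luma, 0.9)].mean()

    # Per-pixel transmission, then direct attenuation plus the airlight veil.
    transmission = np.exp(-beta * depth)
    if image.ndim == 3:
        transmission = transmission[..., None]
    return image * transmission + background_luminance * (1.0 - transmission)
```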
2.3. A Metric Based on a Pedestrian Detection Algorithm
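The evaluation metric is the area under the precision–recall curve (AUC) of a pedestrian detector (YOLOv3 [29]), a detection being counted as correct when its intersection over union (IOU) with a ground-truth box reaches a threshold (0.5 or 0.7 in the results reported below). The following sketch shows one standard way to compute such an AUC with confidence-ordered greedy matching; the paper's exact matching rules may differ, and the data structures here are illustrative.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def pr_auc(detections, ground_truths, iou_threshold=0.5):
    """Area under the precision-recall curve for one IOU threshold.

    detections:    list of (image_id, confidence, box), pooled over all images.
    ground_truths: dict mapping image_id to a list of ground-truth boxes.
    """
    n_gt = sum(len(boxes) for boxes in ground_truths.values())
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    tp, fp = [], []
    # Sweep detections from most to least confident (standard PR construction).
    for img, _, box in sorted(detections, key=lambda d: -d[1]):
        candidates = ground_truths.get(img, [])
        best, best_iou = -1, iou_threshold
        for i, gt in enumerate(candidates):
            score = iou(box, gt)
            if score >= best_iou and not matched[img][i]:
                best, best_iou = i, score
        if best >= 0:
            matched[img][best] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Trapezoidal integration over recall gives the AUC.
    return np.trapz(precision, recall)
```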
3. The Cerema Foggy Twin Pedestrians Database
3.1. Real Data
- A stereoscopic camera (ZED 2i from Stereolabs [35]) to obtain visible images and depth information. The focal length of the lens is 4 mm, the frame rate is 19 fps, and the image size is 1280 × 720 pixels.
- A thermal camera (Serval 640 GigE from Xenics [36]), with a frame rate of 25 fps and an image size of 640 × 480 pixels.
- Clothing: 50% of the clothing is representative of summer weather and 50% of winter weather.
- Accessories: a selection of pedestrians carry accessories of different sizes.
- Gender: 60% of the pedestrians are male and 40% are female.
- Small: For small accessories, such as a small backpack, a helmet, a plant, etc.
- Large: For large accessories, such as a large cardboard box, a snowboard, an open umbrella, etc.
- No accessories: When the pedestrian is not wearing any accessory or the accessory does not alter the pedestrian’s overall silhouette (e.g., a headlamp, a yellow fluorescent vest, a cell phone).
- All: All pedestrians, regardless of the accessory sizes.
- Clear weather (CW): provides a reference scene without the disturbances caused by fog.
- Medium fog (MF): with a MOR of 23 m, it alters the general appearance of the objects in the scene while leaving all elements of the visible scene detectable.
- Dense fog (DF): with a MOR of 10 m, it makes background elements disappear for the stereo camera but not for the thermal camera.
- Daytime conditions, with the greenhouse opened on the sides to capture as much natural light as possible (see Figure 6).
- Night conditions, with the greenhouse totally closed (not presented in this paper).
- 119,772 annotated images for clear weather;
- 113,630 annotated images for medium fog;
- 51,102 annotated images for dense fog.
3.2. Numerical Simulation Parameters
- Manual. Stable visibility fixed at 10 m for DF and 23 m for MF. These values correspond to the setpoints used for fog production on the PAVIN fog and rain platform; the fog is thus assumed to be perfectly stable.
- Automatic. Visibility faithful to the fog actually produced during the tests, using the real visibility values recorded simultaneously. Since the fog varies slightly during the tests, the exact visibility recorded at the time of acquisition is used to simulate the fog numerically on the corresponding image (i.e., the same pedestrian position); see the sketch after this list.
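To make the two modes concrete, the hypothetical helper below (names and interface are ours) returns the MOR fed to the numerical fog simulation for each frame; it would be combined with a Koschmieder-law transform such as the koschmieder_fog sketch in Section 2.2.

```python
# Setpoints used for fog production on the PAVIN fog and rain platform.
FOG_SETPOINTS_M = {"DF": 10.0, "MF": 23.0}

def mor_for_frame(mode, condition, recorded_visibility_m=None):
    """MOR (in metres) used to simulate fog on one clear-weather frame.

    mode: "manual"    -> fixed platform setpoint (fog assumed perfectly stable);
          "automatic" -> visibility recorded at the exact acquisition time.
    """
    if mode == "manual":
        return FOG_SETPOINTS_M[condition]
    if recorded_visibility_m is None:
        raise ValueError("automatic mode requires the visibility recorded "
                         "simultaneously with the frame")
    return recorded_visibility_m

# e.g., foggy = koschmieder_fog(image, depth, mor_for_frame("automatic", "MF", 22.0))
```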
4. Results and Discussions
4.1. Global Results: Effects of Weather Conditions and Accessories
4.2. Effect of MOR Uncertainty on the PAVIN Fog and Rain Platform
4.3. Effect of Pedestrian Distance from the Camera
4.4. Volume of Data Required to Create a Pedestrian Database
4.5. Intercomparison of Fog: Real Fog vs. Numerical Simulation
5. Conclusions and Future Work
- The size of the accessories carried by pedestrians influences detection accuracy. YOLO achieves lower accuracy when the accessories are large, which could represent a risk factor for pedestrians when detection feeds autonomous vehicle decision-making. In this field of application, detection algorithms must encompass the detection of pedestrians together with the accessories they wear.
- The MOR chosen for dense fog (10 m) therefore corresponds to fog that is too dense for this type of analysis with visible-domain sensors. Choosing the right MOR range is essential when testing algorithms in adverse weather conditions, but it can be a tricky task, especially as it depends on the depth of the elements in the scene. Road standards specify a maximum range of 200 m. Fog with a visibility of 10 m is far too dense for existing technologies, even though this type of fog occurs in nature. This shows that a passive image sensor in the visible range alone is unlikely to guarantee safety in such conditions.
- Although visibility fluctuated slightly during the experiments inside the PAVIN fog and rain platform, it had minimal influence on pedestrian detection scores, with relative differences in the AUC values of approximately 8% for 75% of our dataset in medium fog conditions. This shows the robustness of the platform for evaluating the performance of detection tools in foggy weather conditions. The impact is not zero, however, and needs to be taken into account when estimating uncertainty. The ability to guarantee a degree of uncertainty on the score obtained can make this type of equipment compatible with certification testing, a new and particularly notable result.
- The various tests carried out on the amount of data required for a robust pedestrian database enable us to estimate the minimum number of different pedestrians needed at 15, whether or not they carry accessories, for less than 10% relative difference at an image acquisition frequency of around one image per second (a sketch of this resampling analysis follows this list). This can obviate the need to manually label large quantities of data when a low level of uncertainty does not require it. On the other hand, uncertainty can rise to around 30% when only two pedestrians are used, whereas the main current standards (Euro NCAP, the AEB standard, etc.) rely on a single certified pedestrian. This calls into question the test protocols established in current standards.
- Numerical simulation of fog on clear-weather images using the K-HiL model produced visually realistic data, but with a higher fog intensity than in the images from the PAVIN platform tests. This visually observable difference is also reflected in the scores, with relative differences of around −10% between real and numerically simulated fog, depending on the accessory configuration. One way of improving the simulation would be to take better account of the background luminance, which, in our case, was estimated as the mean luminance of the 10% brightest pixels of the images. Many simulators on the market currently use this type of model, based on Koschmieder's law. It is, therefore, urgent and imperative to verify and validate these simulators against real data, in order to guarantee their veracity when used as evidence for certification.
- To complete this study and the pedestrian database, the night-condition dataset will be analyzed, and numerical simulation of fog will be performed on the nighttime clear-weather images.
- In addition, numerical simulation of fog could enable us to determine the visibility value below which YOLO can no longer achieve satisfactory detection scores. This can help identify the right visibility range to be tested in standards, so that they remain compatible with technological capabilities while being demanding enough to guarantee safety.
- Pedestrian distance has been studied here, but the database could also be used to analyze pedestrian orientation. This analysis would be highly relevant for refining the scenarios used in the standards, in order to challenge the detection algorithms to better guarantee safety.
- A hardware-in-the-loop fog model was used here, and as the digital twin is available, it would be interesting to check what the result would be for a fully virtual digital simulator (full 3D simulation).
- We chose YOLOv3 as the control algorithm in this study. However, this arbitrary choice is open to discussion, and it would be interesting to see whether the results obtained are consistent when using different pedestrian detection algorithms.
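As referenced in the conclusion on data volume, the sketch below illustrates the kind of resampling behind the tables of mean AUC and standard deviation: draw random subsets of Np pedestrians, recompute the AUC on each subset, and track the spread as Np grows. The helper auc_for_subset is hypothetical (it could, for instance, pool the detections of the selected pedestrians and call a PR-AUC routine such as the one sketched in Section 2.3); the paper's exact resampling protocol is not reproduced here.

```python
import numpy as np

def subset_auc_spread(pedestrian_ids, auc_for_subset, n_p, n_trials=200, seed=0):
    """Mean AUC, standard deviation, and relative deviation (%) over random
    subsets of n_p pedestrians (illustrative resampling sketch)."""
    rng = np.random.default_rng(seed)
    aucs = [auc_for_subset(list(rng.choice(pedestrian_ids, size=n_p, replace=False)))
            for _ in range(n_trials)]
    mean, std = float(np.mean(aucs)), float(np.std(aucs))
    return mean, std, 100.0 * std / mean

# Sweeping n_p and stopping where the relative deviation falls below ~10%
# mirrors the analysis that led to the 15-pedestrian recommendation.
```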
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
ODD | operational design domain |
AI | artificial intelligence |
AV | autonomous vehicle |
VRUs | vulnerable road users |
ADAS | advanced driver assistance systems |
AEB | automatic emergency braking |
CFTP | Cerema Foggy-Twin Pedestrian |
HiL | hardware-in-the-loop |
AUC | area under the precision–recall curve |
CW | clear weather |
MF | medium fog |
DF | dense fog |
MOR | meteorological optical range |
YOLO | you only look once |
IOU | intersection over union |
ITS | intelligent transportation system |
References
- J3016; Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems. SAE International: Washington, DC, USA, 2014.
- UN Regulation No 157—Uniform Provisions Concerning the Approval of Vehicles with Regards to Automated Lane Keeping Systems [2021/389]. 2021. Available online: http://data.europa.eu/eli/reg/2021/389/oj (accessed on 26 September 2023).
- ISO/DIS 22733-2(en); Road Vehicles—Test Method to Evaluate the Performance of Autonomous Emergency Braking Systems—Part 2: Car to Pedestrian. ISO: Geneva, Switzerland, 2023.
- Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
- Dong, Y.; Kang, C.; Zhang, J.; Zhu, Z.; Wang, Y.; Yang, X.; Su, H.; Wei, X.; Zhu, J. Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
- Li, Y.; Duthon, P.; Colomb, M.; Ibanez-Guzman, J. What happens for a ToF LiDAR in fog? IEEE Trans. Intell. Transp. Syst. 2020, 22, 6670–6681. [Google Scholar] [CrossRef]
- Dahmane, K.; Duthon, P.; Bernardin, F.; Colomb, M.; Amara, N.E.B.; Chausse, F. The Cerema pedestrian database: A specific database in adverse weather conditions to evaluate computer vision pedestrian detectors. In Proceedings of the SETIT, Hammamet, Tunisia, 18–20 December 2016; pp. 480–485. [Google Scholar]
- Bijelic, M.; Mannan, F.; Gruber, T.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing Through Fog Without Seeing Fog: Deep Sensor Fusion in the Absence of Labeled Training Data. arXiv 2019, arXiv:1902.08913. [Google Scholar] [CrossRef]
- Minh Mai, N.A.; Duthon, P.; Salmane, P.H.; Khoudour, L.; Crouzil, A.; Velastin, S.A. Camera and LiDAR analysis for 3D object detection in foggy weather conditions. In Proceedings of the 2022 12th International Conference on Pattern Recognition Systems (ICPRS), Saint-Etienne, France, 7–10 June 2022; pp. 1–7. [Google Scholar] [CrossRef]
- Duthon, P.; Edelstein, N.; Zelentzer, E.; Bernardin, F. Quadsight® Vision System in Adverse Weather Maximizing the benefits of visible and thermal cameras. In Proceedings of the 2022 12th International Conference on Pattern Recognition Systems (ICPRS), Saint-Etienne, France, 7–10 June 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Pfeuffer, A.; Dietmayer, K. Robust semantic segmentation in adverse weather conditions by means of sensor data fusion. In Proceedings of the 2019 22nd International Conference on Information Fusion (FUSION), Ottawa, ON, Canada, 2–5 July 2019; IEEE: New York, NY, USA, 2019; pp. 1–8. [Google Scholar]
- Feng, X.; Jiang, W. Improved foggy pedestrian detection algorithm based on YOLOv5s. In Proceedings of the Third International Seminar on Artificial Intelligence, Networking, and Information Technology (AINIT 2022), Shanghai, China, 23–25 September 2022; Hu, N., Zhang, G., Eds.; International Society for Optics and Photonics, SPIE: San Diego, CA, USA, 2023; Volume 12587, p. 125871M. [Google Scholar] [CrossRef]
- Liu, X.; Lin, Y. YOLO-GW: Quickly and Accurately Detecting Pedestrians in a Foggy Traffic Environment. Sensors 2023, 23, 5539. [Google Scholar] [CrossRef] [PubMed]
- Broughton, G.; Majer, F.; Rouček, T.; Ruichek, Y.; Yan, Z.; Krajník, T. Learning to see through the haze: Multi-sensor learning-fusion System for Vulnerable Traffic Participant Detection in Fog. Robot. Auton. Syst. 2021, 136, 103687. [Google Scholar] [CrossRef]
- Ben-Daoued, A.; Duthon, P.; Bernardin, F. SWEET: A Realistic Multiwavelength 3D Simulator for Automotive Perceptive Sensors in Foggy Conditions. J. Imaging 2023, 9, 54. [Google Scholar] [CrossRef] [PubMed]
- ANSYS. 2022. Available online: https://www.ansys.com/content/dam/content-creators/creative/source-files/ansys-avxcelerate-sensors-datasheet.pdf (accessed on 26 September 2023).
- Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An open urban driving simulator. In Proceedings of the Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; PMLR: Cambridge, MA, USA, 2017; pp. 1–16. [Google Scholar]
- Hiblot, N.; Gruyer, D.; Barreiro, J.S.; Monnier, B. Pro-SiVIC and ROADS: A software suite for sensor simulation and virtual prototyping of ADAS. In Proceedings of the DSC, Thessaloniki, Greece, 13–14 September 2010; pp. 277–288. [Google Scholar]
- Liandrat, S.; Duthon, P.; Bernardin, F.; Ben Daoued, A.; Bicard, J.L. A review of Cerema PAVIN fog & rain platform: From past and back to the future. In Proceedings of the ITS World Congress, Los Angeles, CA, USA, 18–22 September 2022. [Google Scholar]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE: New York, NY, USA, 2012; pp. 3354–3361. [Google Scholar]
- Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Sakaridis, C.; Dai, D.; Van Gool, L. Semantic foggy scene understanding with synthetic data. Int. J. Comput. Vis. 2018, 126, 973–992. [Google Scholar] [CrossRef]
- Sakaridis, C.; Dai, D.; Hecker, S.; Van Gool, L. Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar] [CrossRef]
- Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 year, 1000 km: The Oxford RobotCar dataset. Int. J. Rob. Res. 2017, 36, 3–15. [Google Scholar] [CrossRef]
- Mao, J.; Niu, M.; Jiang, C.; Liang, H.; Chen, J.; Liang, X.; Li, Y.; Ye, C.; Zhang, W.; Li, Z.; et al. One Million Scenes for Autonomous Driving: ONCE Dataset. In Proceedings of the 35th Conference on Neural Information Processing Systems, Online, 6–14 December 2021. [Google Scholar]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020. [Google Scholar] [CrossRef]
- Koschmieder, H. Theorie der horizontalen Sichtweite. Zur Phys. Freien Atmosphare 1924, 12, 33–55. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- World Meteorological Organization. The Guide to Hydrological Practices; (WMO No. 168); WMO: Geneva, Switzerland, 2009. [Google Scholar]
- NF P99-320; Recueil des Données Météorologiques et Routières. AFNOR: Paris, France, 1989.
- Gordon, J.I. Daytime Visibility: A Conceptual Review; SIO Ref. 80-1; University of California: San Diego, CA, USA, 1979. [Google Scholar]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963. [Google Scholar] [CrossRef]
- Lee, Z.; Shang, S. Visibility: How Applicable is the Century-Old Koschmieder Model? J. Atmos. Sci. 2016, 73, 4573–4581. [Google Scholar] [CrossRef]
- Stereolabs. ZED 2i Datasheet Feb2022. 2022. Available online: https://www.stereolabs.com/assets/datasheets/zed-2i-datasheet-feb2022.pdf (accessed on 5 June 2023).
- Stemmer Imaging. Serval-640-GigE. 2015. Available online: https://www.pei-france.com/uploads/tx_etim/STEMMER_28330.pdf (accessed on 7 June 2023).
Accessory Size | Number of Pedestrians |
---|---|
Small | 25 |
Large | 33 |
No Accessories | 42 |
All | 100 |
Accessories Sub-List | Weather Condition | AUC (IOU = 0.5) | AUC (IOU = 0.7) |
---|---|---|---|
All | CW | 0.90 | 0.65 |
All | MF | 0.85 | 0.56 |
All | DF | 0.27 | 0.08 |
No Accessories | CW | 0.93 | 0.74 |
No Accessories | MF | 0.89 | 0.64 |
No Accessories | DF | 0.29 | 0.08 |
Small | CW | 0.93 | 0.69 |
Small | MF | 0.89 | 0.61 |
Small | DF | 0.29 | 0.08 |
Large | CW | 0.83 | 0.48 |
Large | MF | 0.75 | 0.39 |
Large | DF | 0.24 | 0.07 |
Visibility Range (m) | Number of Frames | Area Under Curve | Relative Deviation (%) |
---|---|---|---|
19–21 | 15,527 | 0.50 | −14.6 |
22 | 21,125 | 0.54 | −8.0 |
23 | 16,526 | 0.59 | - (reference) |
24–26 | 7762 | 0.64 | +8.6 |
Np | Weath. Cond. | Mean AUC (Small) | Std/Rel. Dev. (%) (Small) | Mean AUC (Large) | Std/Rel. Dev. (%) (Large) | Mean AUC (No Acc.) | Std/Rel. Dev. (%) (No Acc.) | Mean AUC (All) | Std/Rel. Dev. (%) (All) |
---|---|---|---|---|---|---|---|---|---|
2 | CW | 0.66 | 0.14/22.1 | 0.48 | 0.24/49.2 | 0.74 | 0.09/11.6 | 0.62 | 0.17/27.9 |
2 | MF | 0.61 | 0.10/17.1 | 0.39 | 0.17/43.0 | 0.63 | 0.11/16.9 | 0.54 | 0.16/30.4 |
5 | CW | 0.68 | 0.09/13.0 | 0.48 | 0.17/35.1 | 0.74 | 0.05/6.7 | 0.64 | 0.10/16.5 |
5 | MF | 0.61 | 0.07/11.0 | 0.38 | 0.11/28.2 | 0.63 | 0.08/12.3 | 0.56 | 0.10/17.7 |
10 | CW | 0.68 | 0.05/7.4 | 0.48 | 0.10/20.2 | 0.75 | 0.03/4.6 | 0.65 | 0.07/11.3 |
10 | MF | 0.61 | 0.04/6.6 | 0.40 | 0.07/17.6 | 0.63 | 0.04/6.5 | 0.53 | 0.07/13.2 |
15 | CW | 0.68 | 0.03/4.8 | 0.48 | 0.07/14.1 | 0.75 | 0.03/3.5 | 0.65 | 0.05/7.8 |
15 | MF | 0.60 | 0.02/4.2 | 0.40 | 0.05/12.6 | 0.64 | 0.03/5.4 | 0.55 | 0.05/9.5 |
20 | CW | 0.69 | 0.02/2.8 | 0.48 | 0.05/10.9 | 0.75 | 0.02/3.0 | 0.64 | 0.05/7.6 |
20 | MF | 0.60 | 0.02/3.1 | 0.39 | 0.03/9.1 | 0.64 | 0.03/4.3 | 0.56 | 0.04/8.0 |
25 | CW | 0.69 | 0 | 0.49 | 0.03/6.2 | 0.75 | 0.01/2.1 | 0.65 | 0.04/6.3 |
25 | MF | 0.61 | 0 | 0.40 | 0.03/6.7 | 0.64 | 0.02/3.6 | 0.56 | 0.04/6.7 |
33 | CW | - | - | 0.48 | 0 | 0.75 | 0.01/1.3 | 0.64 | 0.04/6.0 |
33 | MF | - | - | 0.40 | 0 | 0.64 | 0.01/2.0 | 0.56 | 0.03/6.0 |
42 | CW | - | - | - | - | 0.75 | 0 | 0.65 | 0.02/3.8 |
42 | MF | - | - | - | - | 0.64 | 0 | 0.56 | 0.03/5.1 |
50 | CW | - | - | - | - | - | - | 0.65 | 0.02/3.9 |
50 | MF | - | - | - | - | - | - | 0.55 | 0.02/4.0 |
100 | CW | - | - | - | - | - | - | 0.65 | 0 |
100 | MF | - | - | - | - | - | - | 0.56 | 0 |
Nf | Weath. Cond. | Mean AUC (Small) | Std/Rel. Dev. (%) (Small) | Mean AUC (Large) | Std/Rel. Dev. (%) (Large) | Mean AUC (No Acc.) | Std/Rel. Dev. (%) (No Acc.) | Mean AUC (All) | Std/Rel. Dev. (%) (All) |
---|---|---|---|---|---|---|---|---|---|
1 | CW | 0.69 | 0.0/0.03 | 0.48 | 0.0/0.03 | 0.75 | 0.0/0.02 | 0.65 | 0.0/0.01 |
1 | MF | 0.61 | 0.0/0.10 | 0.39 | 0.0/0.14 | 0.64 | 0.0/0.08 | 0.56 | 0.0/0.02 |
2 | CW | 0.69 | 0.0/0.28 | 0.48 | 0.0/0.44 | 0.75 | 0.0/0.45 | 0.65 | 0.0/0.23 |
2 | MF | 0.61 | 0.0/0.42 | 0.39 | 0.0/0.18 | 0.64 | 0.0/0.20 | 0.56 | 0.0/0.03 |
5 | CW | 0.69 | 0.01/0.83 | 0.48 | 0.0/0.56 | 0.75 | 0.0/0.57 | 0.65 | 0.0/0.37 |
5 | MF | 0.61 | 0.0/0.55 | 0.39 | 0.0/0.80 | 0.64 | 0.0/0.74 | 0.56 | 0.0/0.58 |
10 | CW | 0.69 | 0.01/1.18 | 0.48 | 0.01/1.56 | 0.75 | 0.01/1.27 | 0.65 | 0.0/0.82 |
10 | MF | 0.61 | 0.01/1.51 | 0.39 | 0.01/2.35 | 0.64 | 0.01/0.95 | 0.56 | 0.0/0.76 |
20 | CW | 0.69 | 0.02/2.38 | 0.48 | 0.01/2.44 | 0.75 | 0.01/1.64 | 0.65 | 0.01/1.11 |
20 | MF | 0.60 | 0.01/2.40 | 0.39 | 0.01/3.88 | 0.64 | 0.01/1.74 | 0.56 | 0.0/0.97 |
50 | CW | 0.67 | 0.03/4.58 | 0.48 | 0.04/7.77 | 0.75 | 0.02/2.27 | 0.65 | 0.01/2.22 |
50 | MF | 0.60 | 0.02/3.57 | 0.39 | 0.03/7.31 | 0.64 | 0.01/2.31 | 0.56 | 0.01/2.1 |
100 | CW | 0.66 | 0.05/6.96 | 0.47 | 0.04/8.59 | 0.74 | 0.03/4.45 | 0.65 | 0.02/3.15 |
100 | MF | 0.59 | 0.04/6.01 | 0.39 | 0.04/11.30 | 0.64 | 0.03/4.31 | 0.56 | 0.02/4.09 |
200 | CW | 0.67 | 0.06/9.03 | 0.45 | 0.05/12.16 | 0.74 | 0.05/6.23 | 0.65 | 0.02/3.94 |
200 | MF | 0.61 | 0.04/6.58 | 0.34 | 0.05/13.96 | 0.63 | 0.03/6.56 | 0.56 | 0.03/5.14 |
400 | CW | 0.65 | 0.07/10.48 | 0.47 | 0.07/16.06 | 0.72 | 0.06/8.09 | 0.63 | 0.04/5.84 |
400 | MF | 0.59 | 0.07/11.33 | 0.39 | 0.07/18.17 | 0.61 | 0.04/6.56 | 0.57 | 0.04/6.98 |
Accessory Size | AUC: Artificial Fog (Reference) | AUC: Numerically Simulated Fog (Automatic) | AUC: Numerically Simulated Fog (Manual) | Relative Deviation (Automatic) | Relative Deviation (Manual) |
---|---|---|---|---|---|
Small | 0.61 | 0.53 | 0.54 | −13.1% | −11.5% |
Large | 0.39 | 0.34 | 0.35 | −12.8% | −10.3% |
No Acc. | 0.64 | 0.59 | 0.60 | −7.8% | −6.2% |
All | 0.56 | 0.50 | 0.51 | −10.7% | −8.9% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).