# Phenomenological Modelling of Camera Performance for Road Marking Detection


## Abstract


## 1. Introduction

## 2. Problem Definition

- Ideal Sensor Model: This model returns exact detections for everything inside the geometric sensor coverage and is frequently employed in multibody simulation software. However, because it cannot measure or estimate perception errors, the reliability of the simulation is reduced.
- Physical Sensor Model: This model is numerically more complex and typically more accurate. Since its parameters correspond to the physical imaging process of the sensor, the output correctly replicates physical effects and principles. However, developing a physical sensor model requires knowledge of the physical characteristics and the internal imaging algorithm. In our study, a MOBILEYE series 630 camera [36] is used, whose perception algorithms are complex, confidential, and therefore difficult to replicate in software.
- Phenomenological Sensor Model: This model reproduces sensor performance without considering the internal processes or algorithms of the camera. Instead, it emphasizes reproducing the real output effects, i.e., the differences between camera outputs and reference data, and uses these physical effects to establish the relationship between the input and output of the camera model. With this model, the realistic behaviour of lane detection can be mapped quickly and efficiently, and the modelling framework avoids complex internal algorithms.
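The phenomenological idea can be illustrated with a minimal sketch: an ideal geometric ${C}_{0}$ is perturbed by a learned error term. All function names and the toy error model below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def ideal_c0(lateral_offset_m, lane_width_m=3.5):
    """Ideal sensor model: purely geometric lane position, no perception error."""
    left = lane_width_m / 2.0 - lateral_offset_m
    right = -lane_width_m / 2.0 - lateral_offset_m
    return left, right

def phenomenological_c0(lateral_offset_m, error_model, features):
    """Phenomenological model: ideal geometry plus a learned error term.
    `error_model` is any callable mapping features -> (left, right) C0 error."""
    left, right = ideal_c0(lateral_offset_m)
    e_left, e_right = error_model(features)
    return left + e_left, right + e_right

# Hypothetical stand-in for the trained network: error grows with lateral acceleration.
toy_error_model = lambda f: (0.01 * f["a_y"], -0.01 * f["a_y"])

left, right = phenomenological_c0(0.2, toy_error_model, {"a_y": 2.0})
```

The point of the decomposition is that the simulation keeps its fast geometric model, and only the error term has to be identified from measurement data.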

## 3. Experimental Setup

#### 3.1. Data Collection

#### 3.2. Ground Truth Definition

## 4. Methodology

#### 4.1. Target Determination

- Replicating the trajectory from GPS data on the M86 road map;
- For each timestamp of trajectory data, the test car is positioned on the road, and ${C}_{0}$ is calculated for each side of the road, resulting in ${C}_{0}$ Left and ${C}_{0}$ Right;
- The difference for each side of the road is calculated independently, resulting in ${C}_{0}$-LPE Left and ${C}_{0}$-LPE Right;
- Combining results into a two-dimensional vector provides us with ${C}_{0}$-LPE as the target of MLP.
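The steps above can be sketched as follows (array shapes and function names are assumptions for illustration; the per-side difference between camera output and map-based reference forms the two-dimensional target):

```python
import numpy as np

def c0_lpe_targets(c0_cam_left, c0_cam_right, c0_ref_left, c0_ref_right):
    """Build the two-dimensional C0-LPE target for the MLP:
    per-side difference between camera-reported and map-derived C0."""
    lpe_left = np.asarray(c0_cam_left) - np.asarray(c0_ref_left)
    lpe_right = np.asarray(c0_cam_right) - np.asarray(c0_ref_right)
    return np.stack([lpe_left, lpe_right], axis=1)  # shape (N, 2)

# Toy values in metres: camera slightly underestimates both lane distances.
targets = c0_lpe_targets([1.70, 1.68], [-1.80, -1.82], [1.75, 1.75], [-1.75, -1.75])
```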

#### 4.2. Feature Selection

- The test car’s ADMA-RTK-based trajectory data are selected as a base timeline. Each timestamp from it will be used as a reference point.
- Each feature sample is checked for whether its timestamp aligns with a reference point within an offset interval of −0.02 s to 0.02 s; matching values are saved in a database aligned with the reference timestamp.
- The process is repeated until all reference points have been processed.
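This alignment loop can be sketched as a nearest-neighbour match within the ±0.02 s window (the data layout, a sorted list of (timestamp, value) pairs, is an assumption):

```python
def align_features(ref_timestamps, feature_samples, tol=0.02):
    """For each reference timestamp, keep the feature sample whose own
    timestamp falls within +/- tol seconds; unmatched points get None.
    `feature_samples` is a list of (timestamp, value) pairs sorted by time."""
    aligned = {}
    for t_ref in ref_timestamps:
        best = None
        for t, value in feature_samples:
            # Keep the closest sample inside the tolerance window.
            if abs(t - t_ref) <= tol and (best is None or abs(t - t_ref) < abs(best[0] - t_ref)):
                best = (t, value)
        aligned[t_ref] = None if best is None else best[1]
    return aligned

db = align_features([0.00, 0.10], [(0.01, "a"), (0.09, "b"), (0.30, "c")])
```

A production version would exploit the sorted order (e.g., a two-pointer sweep) instead of the inner scan, but the matching rule is the same.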

#### 4.3. Neural Network Modelling

The MLP is trained with two targets: ${C}_{0}$-LPE and ${C}_{1}$-HAE.
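As a sketch of the network configuration reported in Section 5 (four hidden layers of 50, 30, 10, and 10 units with hyperbolic tangent activation), the forward pass can be written in a few lines of NumPy. The weights here are random placeholders; training with scaled conjugate gradient is outside this sketch:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of the MLP: four tanh hidden layers, linear output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)               # hyperbolic tangent sigmoid
    return h @ weights[-1] + biases[-1]      # linear output layer

# Architecture: 7 input features -> [50, 30, 10, 10] hidden -> 2 outputs (C0-LPE left/right).
sizes = [7, 50, 30, 10, 10, 2]
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

out = mlp_forward(rng.normal(size=(5, 7)), weights, biases)
```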

## 5. Results and Discussion

#### 5.1. Training Results

#### 5.2. Comparing with Other Approaches

- Support Vector Machine (SVM): A widely used soft computing method across many fields. Its fundamental idea is to map data non-linearly into a feature space and apply linear methods there; applied to regression, it demonstrates strong generalization performance [55].
- Linear Regression (LR): It models the relationship between an explanatory variable and a dependent variable by fitting a linear equation to the observed data. This fundamental regression method is introduced in [56].
- Gaussian Process Regression (GPR): It combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes [57]. The model accounts for input-dependent signal and noise correlations between response variables, performs well on small datasets, and additionally provides a measure of prediction uncertainty.
- Ensemble Boosting (EB): Introduced in [58], boosting fits a wide range of regression problems by generating a sequence of hypotheses, each trying to improve on the previous one. Bias errors are progressively reduced throughout the sequence, yielding strong predictive models.
- Stepwise Regression (SR): An iterative procedure that builds a regression model by selecting the independent variables to include in the final model, introduced and applied in [59]. It gradually adds or removes candidate explanatory variables, evaluating statistical significance after each cycle.
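All six algorithms are ranked with the same metrics (MSE, RMSE, and R²). A minimal sketch of how these are computed from predictions:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and coefficient of determination R^2, as used in the comparison tables."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = float(np.mean((y_true - y_pred) ** 2))
    rmse = float(np.sqrt(mse))
    ss_res = float(np.sum((y_true - y_pred) ** 2))        # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2)) # total sum of squares
    return {"MSE": mse, "RMSE": rmse, "R2": 1.0 - ss_res / ss_tot}

m = regression_metrics([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 4.0])
```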

#### 5.3. Virtual Validation in CarMaker

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Administration, N.H.T.S. National motor vehicle crash causation survey: Report to congress. Natl. Highw. Traffic Saf. **2008**, 811, 059.
2. Cicchino, J.B. Effects of lane departure warning on police-reported crash rates. J. Saf. Res. **2018**, 66, 61–70.
3. Eichberger, A.; Rohm, R.; Hirschberg, W.; Tomasch, E.; Steffan, H. RCS-TUG Study: Benefit potential investigation of traffic safety systems with respect to different vehicle categories. In Proceedings of the 24th International Technical Conference on the Enhanced Safety of Vehicles (ESV), Washington, DC, USA, 8–11 June 2011.
4. Jianwei, N.; Jie, L.; Mingliang, X.; Pei, L.; Zhao, X. Robust Lane Detection Using Two-stage Feature Extraction with Curve Fitting. Pattern Recognit. **2016**, 59, 225–233.
5. Aly, M. Real time detection of lane markers in urban streets. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium; IEEE: Piscataway, NJ, USA, 2008; pp. 7–12.
6. Zhang, Y.; Lu, Z.; Zhang, X.; Xue, J.H.; Liao, Q. Deep Learning in Lane Marking Detection: A Survey. IEEE Trans. Intell. Transp. Syst. **2021**.
7. Wang, Z.; Ren, W.; Qiu, Q. LaneNet: Real-time lane detection networks for autonomous driving. arXiv **2018**, arXiv:1807.01726.
8. Cireşan, D.C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Mitosis detection in breast cancer histology images with deep neural networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 411–418.
9. Chng, Z.M.; Lew, J.M.H.; Lee, J.A. RONELD: Robust Neural Network Output Enhancement for Active Lane Detection. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 6842–6849.
10. Kim, J.; Lee, M. Robust lane detection based on convolutional neural network and random sample consensus. In Proceedings of the International Conference on Neural Information Processing, Montreal, QC, Canada, 8–13 December 2014; pp. 454–461.
11. Kalra, N.; Paddock, S.M. Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transp. Res. Part A Policy Pract. **2016**, 94, 182–193.
12. Shladover, S.E. Connected and automated vehicle systems: Introduction and overview. J. Intell. Transp. Syst. **2018**, 22, 190–200.
13. Hussain, R.; Zeadally, S. Autonomous cars: Research results, issues, and future challenges. IEEE Commun. Surv. Tutor. **2018**, 21, 1275–1313.
14. Bardt, H. Autonomous Driving—A Challenge for the Automotive Industry. Intereconomics **2017**, 52, 171–177.
15. Bellem, H.; Klüver, M.; Schrauf, M.; Schöner, H.P.; Hecht, H.; Krems, J.F. Can we study autonomous driving comfort in moving-base driving simulators? A validation study. Hum. Factors **2017**, 59, 442–456.
16. Uricár, M.; Hurych, D.; Krizek, P.; Yogamani, S. Challenges in designing datasets and validation for autonomous driving. arXiv **2019**, arXiv:1901.09270.
17. Li, W.; Pan, C.; Zhang, R.; Ren, J.; Ma, Y.; Fang, J.; Yan, F.; Geng, Q.; Huang, X.; Gong, H.; et al. AADS: Augmented autonomous driving simulation using data-driven algorithms. Sci. Robot. **2019**, 4.
18. Schlager, B.; Muckenhuber, S.; Schmidt, S.; Holzer, H.; Rott, R.; Maier, F.M.; Saad, K.; Kirchengast, M.; Stettinger, G.; Watzenig, D.; et al. State-of-the-Art Sensor Models for Virtual Testing of Advanced Driver Assistance Systems/Autonomous Driving Functions. SAE Int. J. Connect. Autom. Veh. **2020**, 3, 233–261.
19. Stolz, M.; Nestlinger, G. Fast generic sensor models for testing highly automated vehicles in simulation. EI **2018**, 135, 365–369.
20. Muckenhuber, S.; Holzer, H.; Rübsam, J.; Stettinger, G. Object-based sensor model for virtual testing of ADAS/AD functions. In Proceedings of the 2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE), Graz, Austria, 4–8 November 2019; pp. 1–6.
21. Hanke, T.; Hirsenkorn, N.; Dehlink, B.; Rauch, A.; Rasshofer, R.; Biebl, E. Generic architecture for simulation of ADAS sensors. In Proceedings of the 2015 16th International Radar Symposium (IRS), Dresden, Germany, 24–26 June 2015; pp. 125–130.
22. Genser, S.; Muckenhuber, S.; Solmaz, S.; Reckenzaun, J. Development and Experimental Validation of an Intelligent Camera Model for Automated Driving. Sensors **2021**, 21, 7583.
23. Yang, T.; Li, Y.; Ruichek, Y.; Yan, Z. Performance Modeling a Near-Infrared ToF LiDAR Under Fog: A Data-Driven Approach. IEEE Trans. Intell. Transp. Syst. **2021**.
24. Fang, W.; Zhang, S.; Huang, H.; Dang, S.; Huang, Z.; Li, W.; Wang, Z.; Sun, T.; Li, H. Learn to Make Decision with Small Data for Autonomous Driving: Deep Gaussian Process and Feedback Control. J. Adv. Transp. **2020**, 2020, 8495264.
25. Hanke, T.; Hirsenkorn, N.; Dehlink, B.; Rauch, A.; Rasshofer, R.; Biebl, E. Classification of sensor errors for the statistical simulation of environmental perception in automated driving systems. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 643–648.
26. Hirsenkorn, N.; Hanke, T.; Rauch, A.; Dehlink, B.; Rasshofer, R.; Biebl, E. A non-parametric approach for modeling sensor behavior. In Proceedings of the 2015 16th International Radar Symposium (IRS), Dresden, Germany, 24–26 June 2015; pp. 131–136.
27. Eder, T.; Hachicha, R.; Sellami, H.; van Driesten, C.; Biebl, E. Data Driven Radar Detection Models: A Comparison of Artificial Neural Networks and Non Parametric Density Estimators on Synthetically Generated Radar Data. In Proceedings of the 2019 Kleinheubach Conference, Miltenberg, Germany, 23–25 September 2019; pp. 1–4.
28. Höber, M.; Nalic, D.; Eichberger, A.; Samiee, S.; Magosi, Z.; Payerl, C. Phenomenological Modelling of Lane Detection Sensors for Validating Performance of Lane Keeping Assist Systems. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 23 June 2020; pp. 899–905.
29. Schneider, S.A.; Saad, K. Camera Behavior Models for ADAS and AD functions with Open Simulation Interface and Functional Mockup Interface. Cent. Model Based Cyber Phys. Prod. Dev. **2018**, 20, 19.
30. Schneider, S.A.; Saad, K. Camera behavioral model and testbed setups for image-based ADAS functions. EI **2018**, 135, 328–334.
31. Wittpahl, C.; Zakour, H.B.; Lehmann, M.; Braun, A. Realistic image degradation with measured PSF. Electron. Imaging **2018**, 2018, 149-1.
32. Carlson, A.; Skinner, K.A.; Vasudevan, R.; Johnson-Roberson, M. Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018.
33. Eichberger, A.; Markovic, G.; Magosi, Z.; Rogic, B.; Lex, C.; Samiee, S. A Car2X sensor model for virtual development of automated driving. Int. J. Adv. Robot. Syst. **2017**, 14, 1729881417725625.
34. Bernsteiner, S.; Magosi, Z.; Lindvai-Soos, D.; Eichberger, A. Radarsensormodell für den virtuellen Entwicklungsprozess. ATZelektronik **2015**, 10, 72–79.
35. Ponn, T.; Müller, F.; Diermeyer, F. Systematic analysis of the sensor coverage of automated vehicles using phenomenological sensor models. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1000–1006.
36. Mobileye. LKA Common CAN Protocol; Mobileye: Jerusalem, Israel, 2019.
37. Borkar, A.; Hayes, M.; Smith, M.T.; Pankanti, S. A layered approach to robust lane detection at night. In Proceedings of the 2009 IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, Nashville, TN, USA, 30 March–2 April 2009; pp. 51–57.
38. Li, Y.; Zhang, W.; Ji, X.; Ren, C.; Wu, J. Research on lane a compensation method based on multi-sensor fusion. Sensors **2019**, 19, 1584.
39. BIPM; IEC; ISO; IUPAC; IUPAP; OIML. Guide to the Expression of Uncertainty in Measurement; BIPM: Tokyo, Japan, 1995.
40. Schneider, D.; Schick, B.; Huber, B.; Lategahn, H. Measuring Method for Function and Quality of Automated Lateral Control Based on High-precision Digital Ground Truth Maps. In VDI/VW-Gemeinschaftstagung Fahrerassistenzsysteme und Automatisiertes Fahren 2018; VDI: Düsseldorf, Germany, 2018; pp. 3–15.
41. Tihanyi, V.; Tettamanti, T.; Csonthó, M.; Eichberger, A.; Ficzere, D.; Gangel, K.; Hörmann, L.B.; Klaffenböck, M.A.; Knauder, C.; Luley, P. Motorway measurement campaign to support R&D activities in the field of automated driving technologies. Sensors **2021**, 21, 2169.
42. Zhang, J.; Chen, M.; Zhao, S.; Hu, S.; Shi, Z.; Cao, Y. ReliefF-based EEG sensor selection methods for emotion recognition. Sensors **2016**, 16, 1558.
43. Palma-Mendoza, R.J.; Rodriguez, D.; De-Marcos, L. Distributed ReliefF-based feature selection in Spark. Knowl. Inf. Syst. **2018**, 57, 1–20.
44. GeneSys Electronik GmbH. Technical Documentation ADMA Version 1.0; GeneSys Electronik GmbH: Offenburg, Germany, 2013.
45. Khatib, T.; Mohamed, A.; Sopian, K.; Mahmoud, M. Estimating ambient temperature for Malaysia using generalized regression neural network. Int. J. Green Energy **2012**, 9, 195–201.
46. Lee, D.; Yeo, H. A study on the rear-end collision warning system by considering different perception-reaction time using multi-layer perceptron neural network. In Proceedings of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea, 28 June–1 July 2015; pp. 24–30.
47. Liu, B.; Zhao, Q.; Jin, Y.; Shen, J.; Li, C. Application of combined model of stepwise regression analysis and artificial neural network in data calibration of miniature air quality detector. Sci. Rep. **2021**, 11, 1–12.
48. Bishop, C.M.; Roach, C. Fast curve fitting using neural networks. Rev. Sci. Instrum. **1992**, 63, 4450–4456.
49. Li, Y.; Tang, G.; Du, J.; Zhou, N.; Zhao, Y.; Wu, T. Multilayer perceptron method to estimate real-world fuel consumption rate of light duty vehicles. IEEE Access **2019**, 7, 63395–63402.
50. Ceven, S.; Bayir, R. Implementation of Hardware-in-the-Loop Based Platform for Real-time Battery State of Charge Estimation on Li-Ion Batteries of Electric Vehicles using Multilayer Perceptron. Int. J. Intell. Syst. Appl. Eng. **2020**, 8, 195–205.
51. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. **1993**, 6, 525–533.
52. IPG CarMaker. Reference Manual (V 8.1.1); IPG Automotive GmbH: Karlsruhe, Germany, 2019.
53. Chandran, V.; Patil, C.K.; Karthick, A.; Ganeshaperumal, D.; Rahim, R.; Ghosh, A. State of charge estimation of lithium-ion battery for electric vehicles using machine learning algorithms. World Electr. Veh. J. **2021**, 12, 38.
54. Liao, X.; Li, Q.; Yang, X.; Zhang, W.; Li, W. Multiobjective optimization for crash safety design of vehicles using stepwise regression model. Struct. Multidiscip. Optim. **2008**, 35, 561–569.
55. Cherkassky, V.; Ma, Y. Practical selection of SVM parameters and noise estimation for SVM regression. Neural Netw. **2004**, 17, 113–126.
56. Montgomery, D.C.; Peck, E.A.; Vining, G.G. Introduction to Linear Regression Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2021.
57. Quinonero-Candela, J.; Rasmussen, C.E. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res. **2005**, 6, 1939–1959.
58. Avnimelech, R.; Intrator, N. Boosting regression estimators. Neural Comput. **1999**, 11, 499–520.
59. Zhou, N.; Pierre, J.W.; Trudnowski, D. A stepwise regression method for estimating dominant electromechanical modes. IEEE Trans. Power Syst. **2011**, 27, 1051–1059.
60. Hoang, T.M.; Hong, H.G.; Vokhidov, H.; Park, K.R. Road lane detection by discriminating dashed and solid road lanes using a visible light camera sensor. Sensors **2016**, 16, 1313.

**Figure 3.** M86 freeway located near Csorna (Hungary) on route E65 (GNSS coordinates: 47.625778, 17.270162).

**Figure 5.** Illustration of the selected input feature variables on side and top views of the vehicle.

**Figure 8.** MLP training regression graph. (**a**) Training regression result for ${C}_{0}$-LPE; (**b**) training regression result for ${C}_{1}$-HAE.

**Figure 10.** Simulation result for ${C}_{0}$ and ${C}_{1}$ estimation. (**a**) ${C}_{1}$ estimation comparison between camera detection data and MLP-based output; (**b**) ${C}_{0}$ of the left lane estimation comparison between camera detection data and MLP-based output; (**c**) ${C}_{0}$ of the right lane estimation comparison between camera detection data and MLP-based output.

Parameters | Definition |
---|---|
${C}_{0}$: Lane position | Lateral distance from the centerline of the host vehicle to the left/right lane marking |
${C}_{1}$: Heading angle | The vehicle heading relative to the lane heading |
${C}_{2}$: Curvature | The curvature of the lane ahead |
${C}_{3}$: Curvature derivative | Curvature rate |
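A common way these four parameters enter a lane model is a third-order polynomial for the lateral offset at look-ahead distance $x$; the exact convention used here (e.g., the 1/2 and 1/6 factors of the clothoid approximation) is an assumption, since the formula itself is not reproduced in the table:

```python
def lane_marking_lateral_offset(x, c0, c1, c2, c3):
    """Lateral offset y of the lane marking at look-ahead distance x,
    assuming the common clothoid approximation
    y(x) = C0 + C1*x + (C2/2)*x^2 + (C3/6)*x^3."""
    return c0 + c1 * x + 0.5 * c2 * x**2 + (c3 / 6.0) * x**3

# Example: 1.75 m offset, slight heading error, gentle curvature, at 10 m ahead.
y = lane_marking_lateral_offset(10.0, 1.75, 0.01, 0.001, 0.0)
```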

Features | ${\mathit{C}}_{0}$-LPE | ${\mathit{C}}_{1}$-HAE | Description |
---|---|---|---|
${d}_{L}$ | ✓ | ✓ | Distance between the real trajectory of the vehicle and the centerline of the road |
${a}_{Y}$ | ✓ | ✓ | Lateral acceleration of the vehicle |
${a}_{Z}$ | - | ✓ | Vertical acceleration of the vehicle |
$\theta $ | ✓ | ✓ | Pitch angle |
$\varphi $ | - | ✓ | Roll angle |
${\dot{\theta}}_{Y}$ | ✓ | - | Pitch rate |
${\dot{\psi}}_{Z}$ | ✓ | - | Yaw rate |

Hyperparameter | MLP Configuration |
---|---|
Learning rate | Adaptive |
Hidden layers | 4 |
Hidden units per layer | [50 30 10 10] |
Training function | Scaled conjugate gradient (SCG) |
Activation function | Hyperbolic tangent sigmoid |

Metrics | ${\mathit{C}}_{0}$-LPE | ${\mathit{C}}_{1}$-HAE |
---|---|---|
MSE | 0.085 m${}^{2}$ | 0.008 rad${}^{2}$ |
RMSE | 0.092 m | 0.089 rad |
R${}^{2}$ | 95.5% | 94.0% |

Output | Metric | MLP | SVM | LR | GPR | EB | SR |
---|---|---|---|---|---|---|---|
${C}_{0}$-LPE estimation | MSE | 0.085 | 0.077 | 0.075 | 0.022 | 0.035 | 0.075 |
 | RMSE | 0.092 | 0.278 | 0.274 | 0.15 | 0.187 | 0.274 |
 | R${}^{2}$ | 95.50% | 40% | 42% | 83% | 73% | 42.40% |
${C}_{1}$-HAE estimation | MSE | 0.008 | 0.023 | 0.022 | 0.012 | 0.012 | 0.021 |
 | RMSE | 0.089 | 0.151 | 0.148 | 0.11 | 0.11 | 0.148 |
 | R${}^{2}$ | 94.00% | 73% | 74% | 86% | 86% | 74.30% |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Li, H.; Tarik, K.; Arefnezhad, S.; Magosi, Z.F.; Wellershaus, C.; Babic, D.; Babic, D.; Tihanyi, V.; Eichberger, A.; Baunach, M.C.
Phenomenological Modelling of Camera Performance for Road Marking Detection. *Energies* **2022**, *15*, 194.
https://doi.org/10.3390/en15010194
