Highly Curved Lane Detection Algorithms Based on Kalman Filter

Abstract: The purpose of the self-driving car is to minimize the number of casualties from traffic accidents. One of the causes of traffic accidents is an improper speed of a car, especially at a road turn. If we can anticipate the road turn, it is possible to avoid traffic accidents. This paper presents a cutting-edge curve lane detection algorithm based on the Kalman filter for the self-driving car. It uses parabola-equation and circle-equation models inside the Kalman filter to estimate the parameters of a curve lane. The proposed algorithm was tested with a self-driving vehicle. Experiment results show that the curve lane detection algorithm has a high success rate. The paper also presents simulation results of the autonomous vehicle with the feature to control steering and speed using the results of the full curve lane detection algorithm.


Introduction
The development of the self-driving car is needed for the safety of the driver and passengers of the vehicle [1]. Traffic accidents occur for various reasons. The majority of traffic accidents are caused by an improper speed at a road turning or by unexpected lane changes when avoiding an obstacle [2]. Some modern cars are already equipped with an emergency braking system, a collision warning system, a lane-keeping assist system, and adaptive cruise control. These systems could be used to help avert traffic accidents when the driver is distracted or loses control.
The two most important parts of advanced driver assistance systems are the collision avoidance system and the lane-keeping assist system, which could help reduce the number of traffic accidents. A fundamental technique for effective collision avoidance and lane-keeping is a robust lane detection method [3]. In particular, such a method should detect a straight or a curve lane in the far-field of view. A car moving at a given speed needs a certain time to stop or to reduce speed while keeping stability. This means it is necessary to detect the road lane in the near field as well as in the far-field of view.
Tamal Datta et al. showed a way to detect lanes in their lane detection technique [4]. The technique consists of image pre-processing steps (grayscale conversion, Canny edge detection, bitwise logical operation) on the image input; it also masked the image according to the region of interest (ROI) in the image. The final stage uses the Hough transformation [5,6] method and detects the lines. Using this method, the parameters of a straight line are obtained. However, their technique did not propose lane detection for curve lanes and cannot obtain the parameters of curve lines (parabola and circle).
A video-based lane detection at night was introduced by Xuan He et al. [7]. The method's steps include the Gabor filter operator [8] for image pre-processing, an adaptive splay ROI, and the Hough transform. This research considers a curve lane detection algorithm which can estimate the parameters of the road turning and define geometric shapes based on a mathematical model and the Kalman filter [17].

Research Method
Our new algorithm consists of two main parts. (1) Image pre-processing: it contains Otsu's threshold method [19] and the top-view image transform [20] to create a top-view image of the road; the Hough transform predicts the straight lane in the near-field of view. (2) Curve lane detection: we use the Kalman filter to detect a curve lane in the far-field of view. This Kalman filter algorithm includes two different methods; the first method is based on the parabola model [23], and the second method is based on the circle model [24]. This method, shown in Figure 2, can estimate the parameters of the road turning and find geometric shapes based on the mathematical model and the Kalman filter.

Otsu Threshold
In 1979, Nobuyuki Otsu introduced a new threshold technique. The Otsu threshold technique uses statistical analysis to determine the optimal threshold for an image. Otsu first posed the problem with one threshold for two classes and later extended it to a problem with multiple thresholds. For two classes, the technique assumes the image contains two classes of pixels following a bi-modal histogram: foreground pixels and background pixels. The Otsu threshold method minimizes the sum of the weighted class variances. He named this sum the within-class variance and defined it as Equation (1):

σ_w²(t) = ω_0(t) σ_0²(t) + ω_1(t) σ_1²(t), (1)

where ω_0, ω_1 are the class probabilities and σ_0², σ_1² are the class variances at threshold t. The criterion tries to separate the pixels such that the classes are homogeneous in themselves. Since a measure of group homogeneity is the variance, the Otsu criterion follows consequently. Therefore, the optimal threshold is the one for which the within-class variance is minimal. In order to find the optimal threshold, instead of minimizing the within-class variance, the equivalent between-class variance is maximized, defined as Equation (2):

σ_B²(t) = ω_0(t) ω_1(t) [µ_0(t) − µ_1(t)]², (2)
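The criterion above can be sketched in pure Python. The function name is illustrative; in practice one would typically call OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag instead.

```python
def otsu_threshold(pixels, levels=256):
    """Pick the threshold that maximizes the between-class variance,
    which is equivalent to minimizing the within-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    w0 = 0            # weight (pixel count) of class 0
    sum0 = 0.0        # cumulative gray-level sum of class 0
    best_t, best_between = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (up to a constant)
        if between > best_between:
            best_between, best_t = between, t
    return best_t
```

For a clean bi-modal input (e.g., gray levels 10 and 200), the returned threshold lies between the two modes.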


where µ_T is the total mean calculated over all gray levels. So the task of finding the optimal set of thresholds t*_1, t*_2, ..., t*_{M−1} in Equation (3) is either to find the thresholds which minimize the within-class variance or to find the ones which maximize the between-class variance; the result is the same.

Top View Image Transformation
The second step in our algorithm is to create a top-view image of the road. The output image is the top view or bird's-eye view of the road, where lanes will be parallel or close to parallel after this transformation. This transformation also converts pixels in the image plane to the world coordinate metric. If necessary, we can measure distance using the transformed image. Figure 3 illustrates the geometry of the top-view image transformation. The transformation needs several parameters: θ_h is the horizontal view angle of the camera, θ_v is the vertical view angle of the camera, H is the mounting height of the camera, and α is the tilt angle of the camera.
The mounting height of the camera in the vehicle is measured in the metric system. We can create two types of top-view image: one is measured in metric units using the H parameter, the other is measured in pixels using the H_pixel parameter. V is the width of the front view image P_i(U_i, V_i) and is proportional to W_min of the top-view image field illustrated in Figure 3. Equations (4) and (5) show the relationship between H measured in metric units and H_pixel measured in pixels.

Coefficient K is used to transform the metric into the pixel data.
According to the geometrical description shown in Figure 3, for each point P_i(U_i, V_i) on the front view image, the corresponding sampling point P_t(x_i, y_i) on the top-view image can be calculated using Equations (4)–(6).
Then RGB color data are copied from the (U i , V i ) position of the front view camera image to the (x i , y i ) position of the top view image.
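The paper's exact Equations (4)–(6) are not reproduced in this text, so the following is only a generic flat-ground sketch of the row-to-distance part of such a transformation, assuming a pinhole camera with tilt α below the horizontal and vertical view angle θ_v; the function name is hypothetical.

```python
import math

def pixel_row_to_ground_distance(v, img_h, cam_h, tilt, theta_v):
    """Map image row v (0 = top of image) to a flat-ground distance in meters.
    cam_h: camera mounting height H (m); tilt: tilt angle alpha (rad);
    theta_v: vertical view angle of the camera (rad). Generic sketch, not
    the paper's Equations (4)-(6)."""
    f_v = (img_h / 2.0) / math.tan(theta_v / 2.0)   # focal length in pixels
    phi = math.atan((v - img_h / 2.0) / f_v)        # ray angle below the optical axis
    return cam_h / math.tan(tilt + phi)             # ground distance along the road
```

Rows lower in the image correspond to ground points closer to the vehicle, which is the monotonic behavior the top-view resampling relies on.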
After top-view image transformation, line detection becomes a simple process, which only detects parallel lines that are generally separated by a given, fixed distance. The next step is to detect a straight lane using the Hough transform.

Straight Lane Detection with Hough Transform
In the near-view image, a straight line detection algorithm is formulated using the standard Hough transformation. The Hough transform also detects many incorrect lines; we need to eliminate these incorrect lines to reduce computational time and complexity [34].
To remove the incorrect lines, detected lines that do not belong to the lane the vehicle is in must be discarded. For example, after the Hough transformation on the binary image, the two longest lines are chosen to avoid this issue. The curve lane detection then starts at the finishing points of those two longest lines, which were chosen based on length.
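The selection step can be sketched as follows, assuming the Hough output is available as endpoint pairs; the segment format and helper name are hypothetical, not from the paper.

```python
import math

def pick_lane_lines(segments):
    """Keep the two longest Hough segments and return the image points
    where the far-field curve search starts (the upper endpoint of each).
    `segments` is a list of ((x1, y1), (x2, y2)) endpoint pairs."""
    def length(seg):
        (x1, y1), (x2, y2) = seg
        return math.hypot(x2 - x1, y2 - y1)
    longest = sorted(segments, key=length, reverse=True)[:2]
    # image y grows downward, so the smaller y is the far (upper) endpoint
    starts = [min(seg, key=lambda p: p[1]) for seg in longest]
    return longest, starts
```

Short spurious segments (e.g., noise blobs) are dropped, and the returned start points seed the curve lane tracking described in the next section.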

Curve Lane Detection Based on Kalman Filter and Parabola Equation
The most important part of this paper is the curve lane detection. This method should detect a straight or a curve line in the far-field of view. Image data (white points in the far-field of view) include uncertainties and noise generated during the capturing and processing steps. Therefore, as a robust estimator against these irregularities, a Kalman filter was adopted to form an observer [22]. First of all, we need to define the equation of the curve line, which is a non-linear equation. For the curve line, the best-fit equations are the parabola equation and the circle equation.
In this part, we consider a curve lane detection algorithm based on the Kalman filter and the parabola equation. From the parabola equation y = ax² + bx + c we need to define three parameters using at least three measurement data points. Equations (7) and (8) show the system equations of the parabola. Figure 4 illustrates the basic parabolic model of road turning.
where x i−1 , x i , x i+1 and y i−1 , y i , y i+1 are the measurement data of the curve line detection process. In our case, the measurement data are the coordinates of the white points in the far section.
From Equation (9), we can estimate the a, b, c parameters easily.
Using this matrix form, we can implement our Kalman filter design for curve lane detection. Two important matrices of the Kalman filter are the measurement transition matrix (H) and the state transition matrix (A), which can be expressed as Equation (10). In our case, the measurement transition matrix [H_i] contains the x-axis coordinate values of three white points, rearranging the calculation as Equation (11). The state transition matrix is the identity matrix, because our Kalman filter design is used for the image process.
These two matrices are often referred to as the process and the measurement models, as they serve as the basis for a Kalman filter. The Kalman filter has two steps: the prediction step and the correction step. The prediction step can be expressed as Equations (12) and (13), where P_prior is the covariance of the predicted state. The correction steps of the Kalman filter can be expressed through Equations (14)–(17).
where K_{i+1} is the Kalman gain, X_post is the a posteriori state estimate at the i-th white point, and P_post is the a posteriori estimate error covariance matrix at the i-th white point in Equation (17). Equation (16) shows the matrix form of the a posteriori state estimate.
To evaluate the viability of the proposed algorithms, we tested the parabola equation using both real data and noisy measurement data in Matlab. The Kalman filter can estimate the parameters of the parabola equation from noisy data. The results section shows a comparison between the measurement value, the estimation value, and the real value. Figure 5 shows the expected results of left and right turning on the road using the parabolic model based on the Kalman filter [35].
From all these experiment results in the results section, we can see a relationship between the road turning and the "a" parameter of our approach. If the road turns toward the left side, the "a" parameter is lower than zero; if the road turns toward the right side, the "a" parameter is higher than zero; and if the road is straight, the "a" parameter is almost equal to zero. This process is shown in Figure 6.
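As an illustration of the parabola-model filter described above, the following Python sketch estimates the state (a, b, c) from (x, y) white-point measurements, with measurement row H_i = [x_i², x_i, 1] and the identity as state transition matrix, as in Equations (10) and (11). The function name and the noise parameters q, r are illustrative assumptions, not values from the paper.

```python
def kalman_parabola(points, q=1e-6, r=1.0):
    """Estimate parabola parameters [a, b, c] from (x, y) measurements
    with a Kalman filter whose state transition matrix is the identity."""
    x = [0.0, 0.0, 0.0]                       # state estimate [a, b, c]
    P = [[1e3 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for xm, ym in points:
        H = [xm * xm, xm, 1.0]                # measurement row H_i
        for i in range(3):                    # prediction: P_prior = P_post + Q
            P[i][i] += q
        PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
        S = sum(H[i] * PHt[i] for i in range(3)) + r      # innovation covariance
        K = [PHt[i] / S for i in range(3)]                # Kalman gain
        resid = ym - sum(H[i] * x[i] for i in range(3))   # innovation
        x = [x[i] + K[i] * resid for i in range(3)]       # a posteriori state
        HP = [sum(H[i] * P[i][j] for i in range(3)) for j in range(3)]
        P = [[P[i][j] - K[i] * HP[j] for j in range(3)] for i in range(3)]
    return x
```

For points sampled from y = 2x² + 3x + 5, the estimate converges close to (2, 3, 5); the sign of the recovered "a" then indicates the turning direction as described above.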

Curve Lane Detection Based on Kalman Filter and Circle Equation
For the curve line, the second-best fit equation is the circle equation, shown in Figure 7. In this part, we consider curve line detection algorithms [24] based on the Kalman filter and the circle equation. From the circle equation r² = (x − h)² + (y − k)², we need to define the circle radius r and the center of the circle (h, k).
Using every three points of a circle, we can draw a pair of chords. Based on these two chords, we can calculate the center of the circle [36]. If we have n points, it is possible to calculate n − 2 centers.
Pairs of chords in each chain are used to calculate the center. Here, the points (x_1, y_1), (x_2, y_2), (x_3, y_3) divide a circle into three arcs, and L_1, L_2 are the perpendicular bisectors of the corresponding chords. The points P_1, P_2, shown in Figure 8 [36], are located on the lines L_1, L_2, and Equations (18) and (21) show the coordinates of the P_1, P_2 points. Perpendicular-line rules and the point coordinates are used to calculate the equation of the L_1 line in Equations (19) and (20). For the L_2 line, the same calculation runs to estimate its equation, expressed in the form of Equations (22) and (23).

Based on the L_1, L_2 lines, we can calculate the center of the circle, expressed in Equations (24) and (25). The intersection of these two lines indicates the center of the circle:

y_center = m_22 x_center + c_22 = m_11 x_center + c_11, (24)

x_center = (c_22 − c_11) / (m_11 − m_22), y_center = m_11 x_center + c_11. (25)

However, this method cannot determine the center correctly on its own; it is easily disturbed by noise. Therefore, we need a second step to estimate the correct center using a Kalman filter. The Kalman filter estimate uses the raw center data (x_center, y_center) stored in the previous step.
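The chord-based construction of Equations (18)–(25) reduces to the classical circumcenter of three points. A compact Python sketch (the helper name is hypothetical):

```python
def circle_center(p1, p2, p3):
    """Center of the circle through three points, obtained by intersecting
    the perpendicular bisectors of two chords (cf. Equations (18)-(25))."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique circle")
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return cx, cy
```

Sliding this over consecutive triples of lane points yields the n − 2 raw center estimates that the Kalman filter then smooths.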
For the x coordinate and y coordinate of the center, we need individual estimations based on the Kalman filter. Equations (26)–(32) show the Kalman filter for the center of the circle. Equations (26)–(28) show the initial value of the Kalman gain, and P_prior is the covariance of the predicted state:

P_prior = A P_post A^T + Q_r = P_post + Q_r, (28)

where z_i = x_center(i) is the x-coordinate value of the center, which is stored by the previous step.
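A scalar sketch of this center-coordinate filter, with A = I as in Equation (28); the q and r values and the function name are illustrative assumptions.

```python
def smooth_center_coord(zs, q=1e-4, r=1.0):
    """Scalar Kalman filter over the stored raw center coordinates zs
    (e.g., the x_center(i) values). A = 1, so the state prediction is
    trivial and only the covariance grows by q each step."""
    x, p = zs[0], 1.0          # initialize from the first raw measurement
    estimates = [x]
    for z in zs[1:]:
        p += q                 # P_prior = P_post + Q_r (Eq. 28)
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # a posteriori estimate
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Run once over the x coordinates and once over the y coordinates; the filtered center then gives a stable radius estimate.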
After that, we can run the same process for the y-coordinate value of the center. In the end, we can easily estimate the radius of the circle using these corrected x, y coordinates of the center. Figure 9 shows the expected curve lane detection result of the Kalman filter on the prepared road image [35]; this image has a circle-shaped road turning. The results show good performance for both left turning and right turning. Using this result we can predict the road turning, and based on the radius value we can control the speed of the self-driving car. For example, if the radius value is low, the self-driving car needs to reduce speed; if the radius value is high, the self-driving car can keep the same speed (no need to reduce speed). Also, using the radius value we can estimate a suitable velocity based on the centrifugal force, Equation (33), as shown in Figure 10.
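The speed rule above (small radius → lower speed) follows directly from bounding the lateral (centrifugal) acceleration v²/r. A minimal sketch, where the comfort limit of 2.0 m/s² is an assumed value, not taken from the paper:

```python
import math

def safe_speed(radius_m, a_lat_max=2.0):
    """Upper speed bound from the centrifugal relation a = v^2 / r,
    i.e., v = sqrt(a_lat_max * r) (cf. Equation (33)).
    a_lat_max is an assumed lateral-acceleration comfort limit."""
    return math.sqrt(a_lat_max * radius_m)
```

For example, a 50 m radius with a 2.0 m/s² limit gives a bound of 10 m/s, while a larger radius permits a higher speed.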

Setup for Simulation in 3D Environments and Results
The 3D lane detection environment for simulation is designed using the GAZEBO simulator. MATLAB is used for image pre-processing, the lane detection algorithm, and closed-loop lane-keeping control, as shown in Figure 11a, which shows the software communication in brief.
Figure 10. Centrifugal force when a car goes around a curve.

Figure 11b shows the comprehensive connection between nodes and topics in the Gazebo simulator. From the Gazebo GUI, the camera sensor node provides the image_raw topic. The Matlab node receives the raw RGB image of the lane and processes the frames. Then, the Matlab node generates the heading angle and sends it to Gazebo using the cmd_vel topic. In order to get the trajectory of the robot, the odom topic is used to plot the position of the vehicle.
For lane detection and closed-loop lane-keeping control, we used two simulated track environments with a Pioneer robot-vehicle, as shown in Figure 12. To assess the viability of the introduced algorithms, random noises were added to the real values, and we performed the experiment with noisy measurement data of the parabola equation in Matlab. Figure 12b illustrates the 3D view and map of the athletic field, and Figure 12a illustrates another track environment. The plotted trajectory path, generated from the odometry data of the robot-vehicle, shows that the curve lane follows the road curve scenario, as shown in Figure 13a,b with respect to the lane tracking of the environments from Figure 12a. The angular velocity control uses a proportional-integral-derivative (PID) controller, which is a control loop feedback mechanism. In PID control, the current output is based on the feedback of the previous output, which is computed to keep the error small. The error is calculated as the difference between the desired and the measured value, which should be as small as possible.
Two objectives are executed: keeping the robot driving along the centerline, dy = 0, and keeping the robot heading angle θ = 0, as shown in Figure 14. The equations can be expressed as y_center_line = (y_right_line + y_left_line)/2 and dy = y_center_line − y_center_pixel, from which the error term can be written as error = −(dy + l sin θ). The steering angle of the car can be estimated using the straight line detection result while also detecting the curve lanes. Figures 15 and 16 show the PID error and the difference in pixels, respectively. The steering angle is derived from the arctangent of the centerline of the vehicle. Here, the coefficient of the p-term = 0.3 and the coefficient of the d-term = 0.1. Figure 17 shows the steering angle in the simulation experiment in scenario one. Figure 15 shows that most of the time the error is positive. As a result, Figure 17 shows positive steering angles most of the time, which means steering left. All the figures below are results of the simulation from the map of Figure 13a. The vehicle maneuver was performed counter-clockwise; to stay in the center lane, the vehicle needs to steer left, which is the reason the error value was greater than zero during the simulation.
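The error term and control law above can be sketched as follows. The helper names, the look-ahead length l, and the restriction to the P and D terms (only Kp = 0.3 and Kd = 0.1 are given in the text) are illustrative assumptions.

```python
import math

def steering_error(y_left, y_right, y_center_pixel, heading, l=1.0):
    """error = -(dy + l*sin(theta)), with dy measured from the lane centerline:
    y_center_line = (y_right_line + y_left_line) / 2."""
    y_centerline = (y_left + y_right) / 2.0
    dy = y_centerline - y_center_pixel
    return -(dy + l * math.sin(heading))

class PDSteering:
    """PD steering control; kp = 0.3 and kd = 0.1 follow the values in the text."""
    def __init__(self, kp=0.3, kd=0.1):
        self.kp, self.kd, self.prev_error = kp, kd, 0.0
    def step(self, error):
        out = self.kp * error + self.kd * (error - self.prev_error)
        self.prev_error = error
        return out
```

A positive error commands a left turn, matching the counter-clockwise maneuver described in the simulation.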
The portion where the error value is less than zero indicates steering right, and also indicates that there is a curve going right at that particular time. The sudden spike at the 480th cycle indicates that the dy value became high at that moment. In order to reduce the dy value and bring the vehicle back close to the center line, the error value increased. Therefore, the steering angle to turn left was higher than its average value.
Figure 18 represents an image output from MATLAB in the pixel coordinate system used in the algorithm. The image frame generated from the center camera is converted to the pixel coordinate system for convenience.

Experimental Results for the Curve Lane Detection
Results of the auto threshold by Otsu are shown in Figure 19b. The auto threshold results in Figure 19b are clearer than the manual threshold results in Figure 19a; the auto threshold can also be robust in outdoor experiments. Top-view transformation converts pixels in the image plane to world-coordinate metrics, as shown in Figure 20.

The Hough transform result generates lines that should be almost parallel, as shown in Figure 21. Also, the section to track the curve lane starts at the finishing points of the two longest straight lines. To evaluate the effectiveness of the proposed algorithms, we tested with noisy measurement data of the parabola equation in Matlab. The noisy data are created by adding random values, generated using the random function in Matlab, to the true values. The measurement legend, marked in blue in Figure 22, is the combination of the true value and the noise value. Our Kalman filter can estimate the parameters of the parabola equation from the noisy data. Figure 22 presents a comparison between the measurement value, the estimation value, and the real value.
In Figure 23, the graphs illustrate the estimation results for the parameters. At the end of the process, the estimates become almost equal to the true values. In this simulation, parameter a's true value is 8 and its estimate is 7.4079; parameter b's true value is 16 and its estimate is 22.4366; parameter c's true value is 50 and its estimate is 37.115. Relative to the noise level, the difference between the estimated and true values is small. Now it is possible to apply the proposed algorithms to the processed image to perform the detection process for the curve lane. For the second line, a = −3.166 · 10, b = 0.2832, c = 1292. The simulation result of circle detection is shown in Figure 25, and Figure 26 shows a comparison between the measurement value, the estimation value, and the true value of circle detection. Of course, the proposed algorithm also has some shortcomings. For example, the right-hand side of Figure 24 shows the effect of shadow reflection from the left side of the lane. The reflection produces brightness in the lane image, causing a slight change in the detection. Further research is needed on this issue.

After that, we tested on top-view transformed images of different circular real roads. Figure 27a,b illustrates the result of curve lane detection using the circle model on real road images. Here, the yellow line is the result of our algorithm, with radius r = 1708.2 and circle center x_center = 957.81, y_center = −1260.2. Using this result we can predict road turning, and based on the radius we can control the speed of the self-driving car: if the radius is small, the self-driving car needs to reduce speed; if the radius is large, the car can keep its current speed.
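The circle model can be handled by the same recursive update: expanding (x − x_c)² + (y − y_c)² = r² gives x² + y² = 2x_c·x + 2y_c·y + k with k = r² − x_c² − y_c², which is linear in the state [x_c, y_c, k]. The Python sketch below estimates these parameters from a noisy arc and then maps the recovered radius to a speed cap; the arc sampling, noise level, lateral-acceleration limit, and maximum speed are illustrative assumptions, not values from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
xc_true, yc_true, r_true = 957.81, -1260.2, 1708.2  # values reported in the paper

# noisy points sampled on an arc of the true circle (assumed test data)
theta = rng.uniform(0.0, 1.5, 500)
px = xc_true + r_true * np.cos(theta) + rng.normal(0.0, 0.5, 500)
py = yc_true + r_true * np.sin(theta) + rng.normal(0.0, 0.5, 500)

# linear measurement: x^2 + y^2 = 2*xc*x + 2*yc*y + k, with k = r^2 - xc^2 - yc^2
state = np.zeros(3)          # initial guess for [xc, yc, k]
P = np.eye(3) * 1e12         # large initial covariance
R = 1e4                      # assumed measurement noise variance

for x, y in zip(px, py):
    H = np.array([2.0 * x, 2.0 * y, 1.0])
    z = x * x + y * y
    S = H @ P @ H + R
    K = P @ H / S
    state = state + K * (z - H @ state)
    P = P - np.outer(K, H @ P)

xc, yc, k = state
r = math.sqrt(k + xc * xc + yc * yc)   # recover the radius from the state

def safe_speed(radius, a_lat_max=2.0, v_max=20.0):
    """Cap speed so lateral acceleration v^2 / r stays below a_lat_max."""
    return min(v_max, math.sqrt(a_lat_max * max(radius, 0.0)))
```

A tight curve (small radius) thus yields a low commanded speed, while a gentle curve saturates at the cruising speed, matching the control rule described above.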

Conclusions
In this paper, a curve lane detection algorithm using the Kalman filter is presented. The algorithm is split into two sections: (1) Image pre-processing, which consists of Otsu's threshold method, a top-view transformation to create a top-view image of the road, and a Hough transform to track a straight lane in the near-field of view of the camera sensor. (2) Curve lane detection, in which the Kalman filter provides the detection result for curve lanes in the far-field of view. This section consists of two different methods: the first is based on the parabola model, and the second on the circle model.
The experimental results show that curve lanes can be effectively detected even in a very noisy environment with both the parabola and circle models. We have also deployed the algorithm in the Gazebo simulation environment to verify its performance. One advantage of the proposed algorithm is its robustness against noise, since it is based on the Kalman filter. The proposed curve lane detection strategy can be applied to self-driving car systems as well as to advanced driver assistance systems. Based on our curve lane detection results, we can predict road turning and estimate a suitable velocity and angular velocity for the self-driving car. Our proposed algorithm also provides closed-loop lane-keeping control to stay in the lane. The experiments show that the proposed algorithm achieves an average of 10 fps. Even though the algorithm has an auto threshold method to adjust to different light conditions such as low light, further study is needed to detect lanes under conditions such as light reflection, shadows, and worn lane markings. Moreover, unlike CNN-based algorithms, the proposed algorithm does not require a high-performance GPU; its fps shows that performance is satisfactory on a CPU-based system. However, a CNN-based pre-processing step could provide more efficient edge detection.