Article

Highly Curved Lane Detection Algorithms Based on Kalman Filter

Byambaa Dorj, Sabir Hossain and Deok-Jin Lee
1 School of Information Communication Technology, Mongolian University of Science and Technology, Sukhbaatar 14191, Mongolia
2 School of Mechanical & Convergence System Engineering, Kunsan National University, 558 Daehak-ro, Gunsan 54150, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2372; https://doi.org/10.3390/app10072372
Submission received: 12 February 2020 / Revised: 19 March 2020 / Accepted: 25 March 2020 / Published: 30 March 2020
(This article belongs to the Special Issue Intelligent Transportation Systems: Beyond Intelligent Vehicles)

Abstract: The purpose of the self-driving car is to minimize the number of casualties from traffic accidents. A frequent cause of traffic accidents is improper speed, especially at road turns. If the road turn can be anticipated, such accidents become avoidable. This paper presents a curve lane detection algorithm based on the Kalman filter for the self-driving car. It uses parabola and circle equation models inside the Kalman filter to estimate the parameters of a curved lane. The proposed algorithm was tested with a self-driving vehicle. Experimental results show that the curve lane detection algorithm has a high success rate. The paper also presents simulation results of the autonomous vehicle controlling its steering and speed using the results of the full curve lane detection algorithm.

1. Introduction

The development of the self-driving car is needed for the safety of the driver and passengers of the vehicle [1]. Traffic accidents occur for various reasons; the majority are caused by improper speed at road turns or by unexpected lane changes when avoiding an obstacle [2]. Some modern cars are already equipped with emergency braking, collision warning, lane-keeping assist, and adaptive cruise control systems. These systems can help avert traffic accidents when the driver is distracted or has lost control.
The two most important parts of advanced driver assistance systems are the collision avoidance system and the lane-keeping assist system, which help reduce the number of traffic accidents. A fundamental technique for effective collision avoidance and lane keeping is a robust lane detection method [3]. In particular, the method should detect a straight or curved lane in the far field of view. A car moving at a given speed needs a certain time to stop or slow down while remaining stable, so road lanes must be detected in the near field as well as in the far field of view.
Tamal Datta et al. presented a lane detection technique [4]. It applies image pre-processing steps (grayscale conversion, Canny edge detection, bitwise logical operations) to the input image and masks the image according to a region of interest (ROI). The final stage uses the Hough transformation [5,6] to detect lines, yielding the parameters of straight lines. However, their technique does not address curved lanes and cannot obtain the parameters of curved lines (parabola and circle).
Video-based lane detection at night was introduced by Xuan He et al. [7]. The method includes the Gabor filter operator [8] for image pre-processing, an adaptive splay ROI, and a Hough transform to detect the markers. A lane tracking method using a Kalman filter [9] is added after lane detection to increase the reliability and real-time detection of lane markers. However, their pre-processing lacks an adaptive auto-threshold method for detecting lanes under all conditions.
In order to eliminate the uncertainty of lane conditions, Shun Yang et al. proposed replacing the image pre-processing [10]. Their method uses deep-learning-based lane detection in place of feature-based lane detection. However, the UNet-based [11] encoder-decoder requires a powerful GPU, such as an Nvidia GeForce GTX 1060, for training and testing. The paper also reports that the CNN branch is much slower than the feature-based branch in terms of detection rate. A fast detection rate is essential for an autonomous vehicle, since the vehicle moves at very high speed. Furthermore, the reported results do not cover the case of curved lanes.
Yeongho Son et al. introduced an algorithm [12] to overcome the limitations of detecting lanes under illumination changes, shadows, or worn-out markings by using a local adaptive threshold to extract important features from the lane images. Their paper also proposes a feedback RANSAC [13] algorithm that avoids false lane detection by computing two scoring functions based on the lane parameters. They used a quadratic lane model for lane fitting and a Kalman filter for smooth lane tracking. However, the algorithm does not provide any closed-loop lane-keeping control to stay in the lane.
Furthermore, a combination of the Hough transform and the R-least-squares method was introduced by Libiao Jiang et al. [14], who used the Kalman filter to track the lane. Their combined method provides results for straight lane markings. Similarly, a haze-based Kalman filter was used to enhance road information by Chen Chen et al. [15]. Huifeng Wang et al. introduced a straight and polynomial curve model to detect the continuity of the lane [16], using curve fitting to detect the lanes.
In this paper, we introduce a curve lane detection algorithm based on the Kalman filter [17]. The algorithm includes Otsu's threshold method [18,19] to convert the RGB image to a black-white image, image pre-processing using a top-view image transform [20,21] to create a top view of the road, a Hough transform to detect the straight lane in the near field of the sensor [22], and parameter estimation of the curved lane using a Kalman filter. We use two different models for the curved lane in the Kalman filter: one is the parabola model [23], the other is the circle model [24]. Kalman-based linear-parabolic lane detection on consecutive video frames was already tested by K.H. Lim et al. [25]; this paper extends the method to the circular model. Our proposed method is robust against noise. Effective parameter estimation of curved lanes can be used to control the speed and heading angle of the self-driving car [26].
Multiple methods have been introduced to detect lanes for self-driving cars and advanced driver assistance systems. Vision-based lane detection methods usually rely on a few popular techniques: an edge detector to create a binary image, the classical Hough transform [27,28] to detect straight lines, color segmentation to extract lane markers, etc.
Most of these methods focus only on straight lane detection at near distance using a Hough transform or other simple methods. For curved lanes, only a few methods detect curved roads, using parabola [23] or hyperbola fitting, B-splines [29,30], or Bezier splines. To enhance the result of lane detection, the area at the bottom of the image is treated as the region of interest (ROI) [31]. Segmenting the ROI increases the efficiency of the lane detection method and eliminates the influence of the upper portion of the road image [32]. The majority of methods detect lanes directly from images captured by a front-view camera, as shown in Figure 1. However, even if lane detection on the raw image can be made robust, estimating the parameters of the road lane from it may be difficult [33].
This research focuses on a curve lane detection algorithm that can estimate the parameters of the road turn and define geometric shapes based on a mathematical model and the Kalman filter [17].

2. Research Method

Our new algorithm consists of two main parts. (1) Image pre-processing: Otsu's threshold method [19] and a top-view image transform [20] create a top-view image of the road, and a Hough transform predicts the straight lane in the near field of view. (2) Curve lane detection: the Kalman filter detects a curved lane in the far field of view. This Kalman filter algorithm includes two different methods, the first based on the parabola model [23] and the second based on the circle model [24]. The method, shown in Figure 2, can estimate the parameters of the road turn and find geometric shapes based on the mathematical model and the Kalman filter.

2.1. Otsu Threshold

In 1979, Nobuyuki Otsu introduced a new threshold technique that uses statistical analysis to determine the optimal threshold for an image. Otsu first posed the problem with one threshold for two classes and later extended it to multiple thresholds. For two classes, the technique assumes the image contains two classes of pixels following a bi-modal histogram: foreground pixels and background pixels. The Otsu threshold method minimizes the sum of the weighted class variances. He named this sum the within-class variance and defined it as Equation (1):
$$\sigma_w^2 = \omega_1 \sigma_1^2 + \omega_2 \sigma_2^2 \quad (1)$$
The criterion tries to separate the pixels such that the classes are homogeneous in themselves. Since the variance is a measure of group homogeneity, the Otsu criterion follows consequently: the optimal threshold is the one for which the within-class variance is minimal. Instead of minimizing the within-class variance, one can equivalently maximize the between-class variance, defined as Equation (2):
$$\sigma_B^2 = \omega_1 (\mu_1 - \mu_T)^2 + \omega_2 (\mu_2 - \mu_T)^2, \qquad \mu_T = \sum_{i=1}^{L} p(i) \cdot i \quad (2)$$
where $\mu_T$ is the total mean calculated over all gray levels. The task of finding the optimal set of thresholds $[t_1, t_2, \ldots, t_{M-1}]$ in Equation (3) is either to find the thresholds that minimize the within-class variance or the ones that maximize the between-class variance; the result is the same.
$$[t_1, t_2, \ldots, t_{M-1}] = \arg\min \{\sigma_w^2\} = \arg\max \{\sigma_B^2\} \quad (3)$$
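As an illustration of the criterion (our sketch, not the authors' code), the following minimal Python function exhaustively searches the gray levels for the single threshold that maximizes the between-class variance of Equation (2); binarizing the image with `gray > t` then yields the black-white image used in the next step.

```python
import numpy as np

def otsu_threshold(gray):
    """Search all gray levels for the threshold maximizing the
    between-class variance, Equation (2)/(3)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                  # gray-level probabilities p(i)
    levels = np.arange(256)
    mu_T = (p * levels).sum()              # total mean over all gray levels
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w1, w2 = p[:t].sum(), p[t:].sum()  # class weights omega_1, omega_2
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (p[:t] * levels[:t]).sum() / w1
        mu2 = (p[t:] * levels[t:]).sum() / w2
        sigma_B = w1 * (mu1 - mu_T) ** 2 + w2 * (mu2 - mu_T) ** 2
        if sigma_B > best_var:
            best_var, best_t = sigma_B, t
    return best_t
```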

2.2. Top View Image Transformation

The second step in our algorithm is to create a top-view image of the road. The output is the top view, or bird's-eye view, of the road, in which the lanes are parallel or close to parallel after the transformation. The transformation also converts pixels in the image plane to world-coordinate metric units, so distances can be measured in the transformed image if necessary.
Figure 3 illustrates the geometry of the top-view image transformation. The transformation needs several parameters: $\theta_h$ is the horizontal view angle of the camera, $\theta_v$ is the vertical view angle of the camera, $H$ is the height at which the camera is mounted, and $\alpha$ is the tilt angle of the camera.
The camera mounting height on the vehicle is measured in metric units. We can create two types of top-view image: one measured in metric units using the $H$ parameter, the other measured in pixels using the $H_{pixel}$ parameter. $V$ is the width of the front-view image $P_i(U_i, V_i)$ and is proportional to $W_{min}$ of the top-view image field illustrated in Figure 3. Equations (4) and (5) show the relationship between $H$ measured in metric units and $H_{pixel}$ measured in pixels.
$$L_{min} = H \tan(\alpha), \qquad W_{min} = 2 L_{min} \tan(\theta_h / 2), \qquad K = V / W_{min} \quad (4)$$
The coefficient $K$ transforms metric data into pixel data.
$$H_{pixel} = H \cdot K, \qquad \gamma = \theta_v \cdot \frac{U - U_i}{U}, \qquad x_i = L_i - L_0 = H_{pixel} \tan(\alpha + \gamma) - H_{pixel} \tan(\alpha) \quad (5)$$
According to the geometrical description shown in Figure 3, for each point $P_i(U_i, V_i)$ in the front-view image, the corresponding sampling point $P_t(x_i, y_i)$ in the top-view image can be calculated using Equations (4)–(6).
$$\beta = \theta_h \cdot \frac{V - V_i}{V}, \qquad y_i = L_i \cdot \tan(\theta_h - \beta) \quad (6)$$
Then the RGB color data are copied from the $(U_i, V_i)$ position of the front-view camera image to the $(x_i, y_i)$ position of the top-view image.
After the top-view image transformation, line detection becomes a simple process that only has to detect parallel lines generally separated by a given, fixed distance. The next step is to detect a straight lane using the Hough transform.
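The sampling loop below is a schematic Python rendering of Equations (4)–(6) (our illustration, taking the formulas in their literal form above); parameter names mirror the symbols in Figure 3, and a practical implementation would precompute the coordinate map instead of looping per pixel.

```python
import numpy as np

def top_view_map(front, H, alpha, theta_v, theta_h):
    """Sample a top-view image from a front-view RGB array `front`
    of shape (U, V, 3) using Equations (4)-(6)."""
    U, V = front.shape[0], front.shape[1]
    L_min = H * np.tan(alpha)                     # Equation (4)
    W_min = 2 * L_min * np.tan(theta_h / 2)
    K = V / W_min                                 # metric -> pixel coefficient
    H_pixel = H * K                               # Equation (5)
    L_0 = H_pixel * np.tan(alpha)
    top = np.zeros_like(front)
    for Ui in range(U):
        gamma = theta_v * (U - Ui) / U
        L_i = H_pixel * np.tan(alpha + gamma)
        x_i = int(L_i - L_0)                      # row in the top-view image
        for Vi in range(V):
            beta = theta_h * (V - Vi) / V         # Equation (6)
            y_i = int(L_i * np.tan(theta_h - beta))
            if 0 <= x_i < U and 0 <= y_i < V:
                top[x_i, y_i] = front[Ui, Vi]     # copy the RGB color data
    return top
```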

2.3. Straight Lane Detection with Hough Transform

In the near-view image, straight lines are detected with a standard Hough transformation. The Hough transform also detects many incorrect lines, which must be eliminated to reduce computational time and complexity [34].
To remove incorrect lines, the detected lines belonging to lanes the vehicle is not in must be discarded. After the Hough transformation on the binary image, the two longest lines are therefore chosen. Curved-lane detection then starts at the end points of these two longest lines; a sketch of this selection is given below.
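A minimal OpenCV sketch of this selection step (our illustration; function and threshold values are assumptions, not the authors' code):

```python
import cv2
import numpy as np

def two_longest_lines(binary_top_view):
    """Detect straight-lane candidates with the probabilistic Hough
    transform and keep only the two longest ones (Section 2.3)."""
    lines = cv2.HoughLinesP(binary_top_view, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=10)
    if lines is None:
        return []
    def length(line):
        x1, y1, x2, y2 = line[0]
        return np.hypot(x2 - x1, y2 - y1)
    # Sort by segment length and keep the two longest candidates.
    return sorted(lines, key=length, reverse=True)[:2]
```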

2.4. Curve Lane Detection Based on Kalman Filter and Parabola Equation

The most important part of this paper is the curved-line detection. The method should detect a straight or curved line in the far field of view. The image data (white points in the far field of view) include uncertainties and noise generated during the capturing and processing steps. Therefore, a Kalman filter, a robust estimator against these irregularities, was adopted to form an observer [22]. First of all, we need to define the equation of the curved line, which is non-linear. The best-fit equations for a curved line are the parabola equation and the circle equation.
In this part, we consider a curve lane detection algorithm based on the Kalman filter and the parabola equation. From the parabola equation $y = ax^2 + bx + c$ we need to determine three parameters using at least three measurement points. Equations (7) and (8) show the system equations of the parabola. Figure 4 illustrates the basic parabolic model of a road turn.
$$\begin{cases} y_{i-1} = a x_{i-1}^2 + b x_{i-1}^1 + c x_{i-1}^0 \\ y_i = a x_i^2 + b x_i^1 + c x_i^0 \\ y_{i+1} = a x_{i+1}^2 + b x_{i+1}^1 + c x_{i+1}^0 \end{cases} \quad (7)$$
$$\begin{bmatrix} x_{i-1}^2 & x_{i-1}^1 & x_{i-1}^0 \\ x_i^2 & x_i^1 & x_i^0 \\ x_{i+1}^2 & x_{i+1}^1 & x_{i+1}^0 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} y_{i-1} \\ y_i \\ y_{i+1} \end{bmatrix} \quad (8)$$
where $x_{i-1}, x_i, x_{i+1}$ and $y_{i-1}, y_i, y_{i+1}$ are the measurement data of the curved-line detection process. In our case, the measurement data are the coordinates of the white points in the far section.
From Equation (9) we can estimate the $a, b, c$ parameters easily.
$$\begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} x_{i-1}^2 & x_{i-1}^1 & x_{i-1}^0 \\ x_i^2 & x_i^1 & x_i^0 \\ x_{i+1}^2 & x_{i+1}^1 & x_{i+1}^0 \end{bmatrix}^{-1} \begin{bmatrix} y_{i-1} \\ y_i \\ y_{i+1} \end{bmatrix} \quad (9)$$
Using this matrix form we can implement our Kalman filter design for curve lane detection. Two important matrices of the Kalman filter are the measurement transition matrix ($H$) and the state transition matrix ($A$). The measurement model can be expressed as Equation (10):
$$[Y_i] = [H_i][X_i] \quad (10)$$
In our case, the measurement transformation matrix $[H_i]$ contains the x-axis coordinates of three white points, rearranged as in Equation (11). The state transition matrix is the identity matrix, because our Kalman filter design is used for the image process.
$$[Y_i] = \begin{bmatrix} y_{i-1} \\ y_i \\ y_{i+1} \end{bmatrix}, \qquad [H_i] = \begin{bmatrix} x_{i-1}^2 & x_{i-1}^1 & x_{i-1}^0 \\ x_i^2 & x_i^1 & x_i^0 \\ x_{i+1}^2 & x_{i+1}^1 & x_{i+1}^0 \end{bmatrix}, \qquad [X_{prior}] = \begin{bmatrix} a \\ b \\ c \end{bmatrix} \quad (11)$$
These two matrices are often referred to as the process and measurement models, as they serve as the basis for a Kalman filter. The Kalman filter has two steps, prediction and correction. The prediction step can be expressed as Equations (12) and (13):
$$X_{post} = A_i X_{prior} + B u_k; \qquad [X_{post}] = [X_{prior}] \quad (12)$$
$$P_{post} = A_i P_{prior} A_i^T + Q_r; \qquad P_{post} = P_{prior} + Q_r \quad (13)$$
where $P_{post}$ is the covariance of the predicted state. The correction step of the Kalman filter can be expressed through Equations (14), (15), and (17):
$$K_{i+1} = (P_{post} H^T)(H P_{post} H^T + R)^{-1} + Q_r \quad (14)$$
$$X_{post} = X_{prior} + K(z_i - H_i X_{prior}); \qquad z_i = \begin{bmatrix} y_{meas(i-1)} \\ y_{meas(i)} \\ y_{meas(i+1)} \end{bmatrix} \quad (15)$$
where $K_{i+1}$ is the Kalman gain, $X_{post}$ is the a posteriori state estimate at the $i$-th white point, and $P_{post}$ is the a posteriori estimate error covariance matrix at the $i$-th white point in Equation (17).
Equation (16) shows the matrix form of the a posteriori state estimate.
$$\begin{bmatrix} a_{i+1} \\ b_{i+1} \\ c_{i+1} \end{bmatrix} = \begin{bmatrix} a_i \\ b_i \\ c_i \end{bmatrix} + K \left( \begin{bmatrix} y_{meas(i-1)} \\ y_{meas(i)} \\ y_{meas(i+1)} \end{bmatrix} - \begin{bmatrix} x_{i-1}^2 & x_{i-1}^1 & x_{i-1}^0 \\ x_i^2 & x_i^1 & x_i^0 \\ x_{i+1}^2 & x_{i+1}^1 & x_{i+1}^0 \end{bmatrix} \begin{bmatrix} a_i \\ b_i \\ c_i \end{bmatrix} \right) \quad (16)$$
$$P_{post} = (I - K H_i) P_{prior} \quad (17)$$
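The correction step, Equations (11)–(17), can be sketched in a few lines of Python with numpy. This is our illustration, not the authors' MATLAB code; it uses the textbook Kalman gain (i.e., without the extra $Q_r$ term that appears in Equation (14)) and assumes scalar noise parameters $Q$ and $R$.

```python
import numpy as np

def parabola_kf_update(x_prior, P_prior, xs, ys, Q=1e-4, R=1.0):
    """One Kalman correction for the parabola state [a, b, c] from a
    triple of white-point measurements. A = I, so the prediction step
    only inflates the covariance (Equation (13))."""
    H = np.array([[x**2, x, 1.0] for x in xs])     # Equation (11)
    z = np.asarray(ys, dtype=float)
    P = P_prior + Q * np.eye(3)                    # Equation (13)
    S = H @ P @ H.T + R * np.eye(3)
    K = P @ H.T @ np.linalg.inv(S)                 # Equation (14), textbook form
    x_post = x_prior + K @ (z - H @ x_prior)       # Equations (15)-(16)
    P_post = (np.eye(3) - K @ H) @ P               # Equation (17)
    return x_post, P_post

# Sliding the three-point window along the white points `pts` (hypothetical
# list of (x, y) tuples) refines (a, b, c) along the curve:
# x, P = np.zeros(3), np.eye(3) * 1e3
# for j in range(1, len(pts) - 1):
#     xs = [pts[j-1][0], pts[j][0], pts[j+1][0]]
#     ys = [pts[j-1][1], pts[j][1], pts[j+1][1]]
#     x, P = parabola_kf_update(x, P, xs, ys)
```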
To evaluate the viability of the proposed algorithm, we tested the parabola model with both real and noisy measurement data in Matlab. The Kalman filter can estimate the parameters of the parabola equation from noisy data. The results section compares the measured, estimated, and real values. Figure 5 shows the expected results for left and right turns on the road using the parabolic model based on the Kalman filter [35].
These experiments reveal a relationship between the road turn and the "a" parameter of our approach: if the road turns left, "a" is less than zero; if the road turns right, "a" is greater than zero; and if the road is straight, "a" is approximately zero. This relationship is shown in Figure 6.

2.5. Curve Lane Detection Based on Kalman Filter and Circle Equation

For the curved line, the second best-fit equation is the circle equation, shown in Figure 7. In this part, we consider curved-line detection algorithms [24] based on the Kalman filter and the circle equation. From the circle equation $r^2 = (x - h)^2 + (y - k)^2$ we need to determine the circle radius $r$ and the center of the circle $(h, k)$.
Using every three points of the circle we can draw a pair of chords, and from these two chords we can calculate the center of the circle [36]. If we have $n$ points, it is possible to calculate $n - 2$ centers.
Pairs of chords in each chain are used to calculate the center. Here, the points $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ divide the circle into three arcs, and $L_1, L_2$ are the perpendicular bisectors of the corresponding chords. The points $P_1, P_2$, shown in Figure 8 [36], lie on the lines $L_1, L_2$, and Equations (18) and (21) give the coordinates of $P_1, P_2$.
$$P_1(x) = (x_1 + x_2)/2, \qquad P_1(y) = (y_1 + y_2)/2 \quad (18)$$
The rules for perpendicular lines and the coordinates of $P_1$ are used to calculate the equation of line $L_1$ in Equations (19) and (20):
$$y = m_1 x + c_1, \qquad m_1 = \frac{y_1 - y_2}{x_1 - x_2}, \qquad m_{11} = -\frac{1}{m_1} \quad (19)$$
$$c_{11} = P_1(y) - P_1(x) \cdot m_{11}, \qquad y = m_{11} x + c_{11} \quad (20)$$
For line $L_2$, the same calculation is run to estimate its equation, expressed in Equations (21)–(23).
$$P_2(x) = (x_2 + x_3)/2, \qquad P_2(y) = (y_2 + y_3)/2 \quad (21)$$
$$y = m_2 x + c_2, \qquad m_2 = \frac{y_2 - y_3}{x_2 - x_3}, \qquad m_{22} = -\frac{1}{m_2} \quad (22)$$
$$c_{22} = P_2(y) - P_2(x) \cdot m_{22}, \qquad y = m_{22} x + c_{22} \quad (23)$$
Based on lines $L_1, L_2$ we can calculate the center of the circle, expressed in Equations (24) and (25); the intersection of these two lines is the center of the circle.
$$y_{center} = m_{22} x_{center} + c_{22}, \qquad y_{center} = m_{11} x_{center} + c_{11} \quad (24)$$
$$x_{center} = \frac{c_{22} - c_{11}}{m_{11} - m_{22}}, \qquad y_{center} = m_{11} x_{center} + c_{11} \quad (25)$$
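In code, the chord-bisector construction of Equations (18)–(25) reduces to a few lines; the sketch below (our illustration) assumes the three points are distinct and no chord is horizontal, which would make a bisector slope undefined.

```python
import numpy as np

def circle_center(p1, p2, p3):
    """Center and radius of the circle through three points, via the
    intersection of the perpendicular bisectors of two chords."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    m11 = -(x1 - x2) / (y1 - y2)                 # slope of bisector L1 (Eq. 19)
    m22 = -(x2 - x3) / (y2 - y3)                 # slope of bisector L2 (Eq. 22)
    c11 = (y1 + y2) / 2 - m11 * (x1 + x2) / 2    # Equation (20)
    c22 = (y2 + y3) / 2 - m22 * (x2 + x3) / 2    # Equation (23)
    xc = (c22 - c11) / (m11 - m22)               # Equation (25)
    yc = m11 * xc + c11
    r = np.hypot(x1 - xc, y1 - yc)
    return xc, yc, r
```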
However, this method alone cannot determine the center reliably, since it is easily disturbed by noise. Therefore, a second step estimates the correct center using a Kalman filter, run on the raw center data $(x_{center}, y_{center})$ stored in the previous step.
The x and y coordinates of the center are each estimated individually with the Kalman filter. Equations (26)–(32) show the Kalman filter for the center of the circle; Equations (26)–(28) give the initial value of the Kalman gain, and $P_{prior}$ is the covariance of the predicted state.
$$K_{x_{center}} = 1, \qquad I = 1, \qquad H = 1 \quad (26)$$
$$X_{prior} = X_{post} \quad (27)$$
$$P_{prior} = A P_{post} A^T + Q_r = P_{post} + Q_r \quad (28)$$
$$K_{x_{center}} = (P_{prior} H^T)(H P_{prior} H^T + R)^{-1} + Q_r \quad (29)$$
$$z_i = x_{center}(i) \quad (30)$$
$$X_{post} = X_{prior} + K_{x_{center}} (z_i - H X_{prior}) \quad (31)$$
$$P_{post} = (I - K_{x_{center}} H) P_{prior} + Q_r \quad (32)$$
where $z_i = x_{center}(i)$ is the x-coordinate of the center stored in the previous step. The same process is then run for the y-coordinate of the center. Finally, the radius of the circle is easily estimated from the corrected x and y coordinates of the center.
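Since $A = H = 1$, each coordinate reduces to a scalar Kalman filter. The sketch below is our illustration, using the textbook forms of Equations (29) and (32) without the extra $Q_r$ terms; it would be run once for the x-coordinates and once for the y-coordinates.

```python
def smooth_center(values, Q=1e-3, R=1.0):
    """Scalar Kalman filter over raw center coordinates
    (Equations (26)-(32) with A = H = 1)."""
    x_post, P_post = values[0], 1.0
    for z in values[1:]:
        x_prior = x_post                        # Equation (27)
        P_prior = P_post + Q                    # Equation (28)
        K = P_prior / (P_prior + R)             # Equation (29), textbook form
        x_post = x_prior + K * (z - x_prior)    # Equation (31)
        P_post = (1 - K) * P_prior              # Equation (32), textbook form
    return x_post
```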
Figure 9 shows the expected curve lane detection results of the Kalman filter on prepared road images [35] with circle-shaped road turns. The results show good performance for both left and right turns.
Using this result we can predict the road turn, and based on the radius we can control the speed of the self-driving car: if the radius is small, the self-driving car needs to reduce speed; if the radius is large, the car can keep its current speed. The radius also lets us estimate a suitable velocity from the centrifugal force, Equation (33), as shown in Figure 10.
$$F_c = \frac{m u^2}{r}; \qquad u = \sqrt{\frac{F_c \cdot r}{m}} \quad (33)$$
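For illustration, a small helper (ours, with hypothetical numbers) inverts Equation (33) to obtain the admissible speed for a detected radius:

```python
def safe_speed(radius_m, mass_kg, max_lateral_force_n):
    """Maximum speed for a turn of a given radius, u = sqrt(F_c * r / m),
    from Equation (33)."""
    return (max_lateral_force_n * radius_m / mass_kg) ** 0.5

# e.g., a 1500 kg car limited to 6000 N of lateral force can take a
# 50 m radius curve at about 14.1 m/s (~51 km/h):
print(safe_speed(50, 1500, 6000))
```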

3. Setup for Simulation in 3D Environments and Results

The 3D lane detection environment for simulation is designed in the GAZEBO simulator. MATLAB is used for image pre-processing, the lane detection algorithm, and closed-loop lane-keeping control; Figure 11a shows the software communication in brief.
Figure 11b shows the complete connection between nodes and topics in the Gazebo simulator. From the Gazebo GUI, the camera sensor node publishes the image_raw topic. The Matlab node receives the raw RGB image of the lane and processes the frames. The Matlab node then generates the heading angle and sends it to Gazebo on the cmd_vel topic. To obtain the trajectory of the robot, the odom topic is used to plot the position of the vehicle.
For lane detection and closed-loop lane-keeping control, we used two simulated track environments with a Pioneer robot vehicle, as shown in Figure 12. Using the image input from the Gazebo simulator, the algorithm detects the lane and estimates the vehicle's angular and linear velocities from the lane detection results in Matlab. Initial parameters are set according to the simulation environment. For the top-view transformation, the parameters are $H = 59$, $f = 0.01$, $\alpha = 0.62$, $\theta_v = 0.9$, $\theta_h = 1.1$, and to get the binary image of the lane, an auto threshold for each channel is used, with $Thresh_{average} = 0.5804$.
To assess the viability of the introduced algorithms, random noise was added to the real values, and we ran the experiment with noisy measurement data of the parabola equation in Matlab. Figure 12b illustrates the 3D view and map of the athletic field, and Figure 12a illustrates the other track environment. The plotted trajectory paths, generated from the odometry data of the robot vehicle, show that the detected curve lane follows the road curve scenario, as shown in Figure 13a,b for the lane tracking of the environments in Figure 12a,b.
The angular velocity control uses a proportional-integral-derivative (PID) controller, a control-loop feedback mechanism. In PID control, the current output is computed from feedback of the previous output so as to keep the error small. The error is the difference between the desired and measured values, which should be as small as possible. Two objectives are pursued: keeping the robot driving along the centerline ($d_y = 0$) and keeping the robot heading angle $\theta = 0$, as shown in Figure 14. The centerline is $y_{center\_line} = (y_{right\_line} + y_{left\_line})/2$ and $d_y = y_{center\_line} - y_{center\_pixel}$, from which the error term can be written as $error = (d_y + l \sin\theta)$. The steering angle of the car can be estimated from the straight-line detection result while the curved lanes are also being detected; a sketch of this control law follows.
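A compact Python sketch of this steering law (our illustration; the sign convention depends on the simulator frame), using the coefficients kp = 0.3 and kd = 0.1 reported below:

```python
import numpy as np

def steering_command(dy, theta, l, prev_error, kp=0.3, kd=0.1):
    """P-D steering from the lateral offset dy (pixels) and heading
    angle theta, with error = dy + l*sin(theta) as in Section 3."""
    error = dy + l * np.sin(theta)
    steering = kp * error + kd * (error - prev_error)  # discrete derivative
    return steering, error
```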
Figure 15 and Figure 16 show the PID error and the $d_y$ difference in pixels, respectively. The steering angle is derived from the arctangent of the centerline of the vehicle, with a p-term coefficient of 0.3 and a d-term coefficient of 0.1. Figure 17 shows the steering angle in the simulation experiment for scenario one. Figure 15 shows that the error is positive most of the time; consequently, the steering angles in Figure 17 are mostly positive, which means steering left. All the figures below result from the simulation on the map of Figure 13a. The vehicle maneuvered counter-clockwise, so it had to steer left to stay in the center of the lane, which is why the error value was greater than zero during the simulation. The portions where the error is less than zero indicate steering right, i.e., a curve going right at that moment. The spike at the 480th cycle occurred because the $d_y$ value became large at that moment; to reduce $d_y$ and bring the vehicle back toward the centerline, the error increased, and the left steering angle was therefore higher than its average value.

4. Experimental Results for the Curve Lane Detection

Figure 18 shows an image output from MATLAB in the pixel coordinate system used by the algorithm. The image frame generated from the centered camera is converted to a pixel coordinate system for convenience.
Results of Otsu's auto threshold are shown in Figure 19b, from the output of Figure 20a. For the manual threshold, $Thresh_{red} = 0.7491$, $Thresh_{green} = 0.7202$, $Thresh_{blue} = 0.5834$ are used; the auto threshold yields $Thresh_{red} = 0.75$, $Thresh_{green} = 0.68$, $Thresh_{blue} = 0.6$. For the top-view transformation, the parameters are $H = 105$, $f = 0.01$, $\alpha = 1.0472$, $\theta_v = 0.9$, $\theta_h = 1.1$.
The auto-threshold results in Figure 19b are clearer than the manual-threshold results in Figure 19a, and the auto threshold is more robust in outdoor experiments. The top-view transformation converts pixels in the image plane to world-coordinate metric units, as shown in Figure 20.
The Hough transform generates lines that should be almost parallel, as shown in Figure 21. The section that tracks the curved lane starts at the end points of the two longest straight lines.
To evaluate the effectiveness of the proposed algorithms, we tested them with noisy measurement data of the parabola equation in Matlab. The noisy data are created by adding random values, generated with Matlab's random function, to the true values. The measurement legend, marked in blue in Figure 22, is the combination of the true and noisy values. Our Kalman filter can estimate the parameters of the parabola equation from these noisy data; Figure 22 compares the measured, estimated, and real values.
The graphs in Figure 23 illustrate the parameter estimation results. By the end of the process, the estimates are close to the true values: the "a" parameter's true value is 8 and its estimate is 7.4079, the "b" parameter's true value is 16 and its estimate is 22.4366, and the "c" parameter's true value is 50 and its estimate is 37.115. Relative to the noise level, the difference between the estimated and true values is small. It is now possible to apply the proposed algorithms to the processed image to detect the curved lane.
Figure 24 presents the real experimental results of curved-line detection based on the Kalman filter, where the yellow line is the result of our algorithm. In the first road image, the estimates for the first line are $a = 1.2186 \times 10^{-4}$, $b = 0.8486$, $c = 382.4092$, and for the second line $a = 3.0639 \times 10^{-5}$, $b = 0.238$, $c = 885$. In the second road image, the estimates for the first line are $a = 2.6319 \times 10^{-5}$, $b = 0.2096$, $c = 938.96$, and for the second line $a = 3.166 \times 10^{-5}$, $b = 0.2832$, $c = 1292$. The simulation result of circle detection is shown in Figure 25, and Figure 26 compares the measured, estimated, and true values of circle detection. The proposed algorithm also has some shortcomings: for example, the right-hand side of Figure 24 shows the aftermath of a shadow reflection from the left side of the lane. The reflection brightens the lane image, causing a slight change in the detection; further research is needed here.
We then tested the algorithm on top-view transformed images of different circular real roads. Figure 27a,b illustrates the result of curved-line detection using the circle model on a real road image.
Here, the yellow line is the result of our algorithm, with radius $r = 1708.2$ and circle center $x_{center} = 957.81$, $y_{center} = 1260.2$. Using this result, we can predict the road turn and, as described in Section 2.5, regulate the speed of the self-driving car: the smaller the radius, the more the car must slow down.

5. Conclusions

In this paper, a curved-line detection algorithm using the Kalman filter is presented. The algorithm is split into two sections:
(1)
Image pre-processing. It contains Otsu's threshold method, a top-view image transform to create a top-view image of the road, and a Hough transform to track a straight lane in the near field of view of the camera sensor.
(2)
Curve lane detection. The Kalman filter provides the detection result for curved lanes in the far field of view. This section consists of two different methods, the first based on the parabola model and the second based on the circle model.
The experimental results show that curved lanes can be detected effectively, even in a very noisy environment, with both the parabola and circle models. We also deployed the algorithm in the Gazebo simulation environment to verify its performance. One advantage of the proposed algorithm is its robustness against noise, since it is based on the Kalman filter. The proposed curve lane detection strategy can be applied to self-driving car systems as well as to advanced driver assistance systems. From the curve lane detection results, we can predict the road turn and estimate a suitable velocity and angular velocity for the self-driving car. Our proposed algorithm also provides closed-loop lane-keeping control to stay in the lane. The experimental results show that the proposed algorithm achieves an average of 10 fps. Although the algorithm has an auto-threshold method to adjust to different light conditions such as low light, further study is needed to detect lanes under light reflection, shadows, worn lane markings, etc. Moreover, unlike CNN-based algorithms, the proposed algorithm does not require a powerful GPU; its performance in a CPU-based system is satisfactory given the frame rate. However, a CNN-based pre-processing step could provide more efficient edge detection.

Author Contributions

Conceptualization, B.D. and D.-J.L.; methodology, B.D. and S.H.; software, S.H. and B.D.; validation, S.H. and B.D.; formal analysis, B.D.; investigation, B.D. and S.H.; resources, D.-J.L.; data curation, D.-J.L.; writing—original draft preparation, B.D. and S.H.; writing—review and editing, S.H.; visualization, S.H. and B.D.; supervision, D.-J.L.; project administration, D.-J.L.; funding acquisition, D.-J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded and conducted under the Competency Development Program for Industry Specialists of the Korean Ministry of Trade, Industry and Energy (MOTIE), operated by the Korea Institute for Advancement of Technology (KIAT) (No. N0002428, HRD program for Future Car). This research was also supported by the Development Program through the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2019R1F1A1049711).

Acknowledgments

I (Sabir Hossain) would like to express my thanks and appreciation to my supervisor Professor Deok-jin Lee for his direction and support throughout this paper. I would like to thank Byambaa Dorj for his cooperation in this paper. Additionally, I would also like to pay my profound feeling of appreciation to all CAIAS (Center for Artificial Intelligence and Autonomous System) lab members for their support and CAIAS lab for providing me all the facilities that were required from the lab.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fleetwood, J. Public health, ethics, and autonomous vehicles. Am. J. Public Health 2017, 107, 532–537.
  2. Green, M. "How Long Does It Take to Stop?" Methodological Analysis of Driver Perception-Brake Times. Transp. Hum. Factors 2000, 2, 195–216.
  3. Vacek, S.; Schimmel, C.; Dillmann, R. Road-marking analysis for autonomous vehicle guidance. In Proceedings of the European Conference on Mobile Robots, Freiburg, Germany, 19–21 September 2007; pp. 1–6.
  4. Datta, T.; Mishra, S.K.; Swain, S.K. Real-Time Tracking and Lane Line Detection Technique for an Autonomous Ground Vehicle System. In Proceedings of the International Conference on Intelligent Computing and Smart Communication 2019; Springer: Singapore, 2020; pp. 1609–1625.
  5. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122.
  6. Illingworth, J.; Kittler, J. A survey of the Hough transform. Comput. Vision Graph. Image Process. 1988, 44, 87–116.
  7. He, X.; Duan, Z.; Chen, C.; You, F. Video-based lane detection and tracking during night. In CICTP 2019: Transportation in China—Connecting the World—Proceedings of the 19th COTA International Conference of Transportation Professionals; ASCE: Reston, VA, USA, 2019; pp. 5794–5807. ISBN 9780784482292.
  8. Mehrotra, R.; Namuduri, K.R.; Ranganathan, N. Gabor filter-based edge detection. Pattern Recognit. 1992, 25, 1479–1494.
  9. Welch, G.; Bishop, G. An Introduction to the Kalman Filter. In Pract. 2006, 7, 1–16.
  10. Yang, S.; Wu, J.; Shan, Y.; Yu, Y.; Zhang, S. A Novel Vision-Based Framework for Real-Time Lane Detection and Tracking; SAE Technical Paper; SAE: Warrendale, PA, USA, 2019.
  11. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241.
  12. Son, Y.; Lee, E.S.; Kum, D. Robust multi-lane detection and tracking using adaptive threshold and lane classification. Mach. Vis. Appl. 2019, 30, 111–124.
  13. Borkar, A.; Hayes, M.; Smith, M.T. Robust lane detection and tracking with RANSAC and Kalman filter. In Proceedings of the International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3261–3264.
  14. Jiang, L.; Li, J.; Ai, W. Lane Line Detection Optimization Algorithm based on Improved Hough Transform and R-least Squares with Dual Removal. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; Volume 1, pp. 186–190.
  15. Chen, C.; Tang, L.; Wang, Y.; Qian, Q. Study of the Lane Recognition in Haze Based on Kalman Filter. In Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM), Dublin, Ireland, 16–18 October 2019; pp. 479–483.
  16. Wang, H.; Wang, Y.; Zhao, X.; Wang, G.; Huang, H.; Zhang, J. Lane Detection of Curving Road for Structural Highway with Straight-Curve Model on Vision. IEEE Trans. Veh. Technol. 2019, 68, 5321–5330.
  17. Salarpour, A.; Salarpour, A.; Fathi, M.; Dezfoulian, M. Vehicle Tracking Using Kalman Filter and Features. Signal Image Process. Int. J. 2011, 2, 1–8.
  18. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  19. Yuan, X.; Martínez, J.F.; Eckert, M.; López-Santidrián, L. An improved Otsu threshold segmentation method for underwater simultaneous localization and mapping-based navigation. Sensors 2016, 16, 1148.
  20. Dorj, B.; Lee, D.J. A Precise Lane Detection Algorithm Based on Top View Image Transformation and Least-Square Approaches. J. Sens. 2016, 2016, 4058093.
  21. Aly, M. Real time detection of lane markers in urban streets. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12.
  22. Tseng, C.-C.; Cheng, H.-Y.; Jeng, B.-S. A Lane Detection Algorithm Using Geometry Information and Modified Hough Transform. In Proceedings of the 18th IPPR Conference on Computer Vision, Graphics and Image Processing, Taipei, Taiwan, 21–23 August 2005; Volume 1, pp. 796–802.
  23. Jung, C.R.; Kelber, C.R. A lane departure warning system based on a linear-parabolic lane model. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 891–895.
  24. Luo, L.; Xu, D.; Zhang, Z.; Zhang, J.; Qu, W. A fast and robust circle detection method using perpendicular bisector of chords. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC 2013), Guiyang, China, 25–27 May 2013; pp. 2856–2860.
  25. Lim, K.H.; Seng, K.P.; Ang, L.-M.; Chin, S.W. Lane detection and Kalman-based linear-parabolic lane tracking. In Proceedings of the 2009 International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC'09), Hangzhou, China, 26–27 August 2009; Volume 2, pp. 351–354.
  26. Dorj, B.; Tuvshinjargal, D.; Chong, K.; Hong, D.P.; Lee, D.J. Multi-sensor fusion based effective obstacle avoidance and path-following technology. Adv. Sci. Lett. 2014, 20, 1751–1756.
  27. Assidiq, A.A.M.; Khalifa, O.O.; Islam, M.R.; Khan, S. Real time lane detection for autonomous vehicles. In Proceedings of the International Conference on Computer and Communication Engineering 2008 (ICCCE08): Global Links for Human Development, Kuala Lumpur, Malaysia, 13–15 May 2008; pp. 82–88.
  28. Sehestedt, S.; Kodagoda, S. Efficient Lane Detection and Tracking in Urban Environments. In Proceedings of the European Conference on Mobile Robots (ECMR), Freiburg, Germany, 19–21 September 2007; pp. 1–6.
  29. Lim, K.H.; Seng, K.P.; Ang, L.-M. River flow lane detection and Kalman filtering-based B-spline lane tracking. Int. J. Veh. Technol. 2012, 2012, 465819.
  30. Yonghong, X.; Qiang, J. A new efficient ellipse detection method. In Proceedings of the Object Recognition Supported by User Interaction for Service Robots, Quebec City, QC, Canada, 11–15 August 2002; Volume 16, pp. 957–960.
  31. Hoang, T.M.; Hong, H.G.; Vokhidov, H.; Park, K.R. Road lane detection by discriminating dashed and solid road lanes using a visible light camera sensor. Sensors 2016, 16, 1313.
  32. Mu, C.; Ma, X. Lane Detection Based on Object Segmentation and Piecewise Fitting. Telkomnika Indones. J. Electr. Eng. 2014, 12, 3491–3500.
  33. Seo, Y.; Rajkumar, R.R. Use of a Monocular Camera to Analyze a Ground Vehicle's Lateral Movements for Reliable Autonomous City Driving. In Proceedings of the IEEE IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles, Tokyo, Japan, 3–7 November 2013; pp. 197–203.
  34. Olson, C.F. Constrained Hough Transforms for Curve Detection. Comput. Vis. Image Underst. 1999, 73, 329–345.
  35. Dorj, B. Top-view Image Transformation Based Precise Lane Detection Techniques and Embedded Control for an Autonomous Self-Driving Vehicle. Ph.D. Thesis, Kunsan National University, Kunsan, Korea, 2017.
  36. Wang, H.; Liu, N. Design and recognition of a ring code for AGV localization. In Proceedings of the 2003 Joint Conference of the 4th International Conference on Information, Communications and Signal Processing and 4th Pacific-Rim Conference on Multimedia (ICICS-PCM 2003), Singapore, 15–18 December 2003; Volume 1, pp. 532–536.
Figure 1. Field of view on the road turning.
Figure 2. The flow diagram of the lane detection algorithms using the Kalman filter.
Figure 3. Top view image transformation.
Figure 4. Parabola model of the road turning.
Figure 5. Left and right turning on the road.
Figure 6. Relationship between "a" parameter and road turning.
Figure 7. Circle model of the road turning.
Figure 8. Calculation of the center of the circle.
Figure 9. The road turning with the circle shape: (a) right turning; (b) left turning.
Figure 10. Centrifugal force when a car goes around a curve.
Figure 11. Gazebo-MATLAB software communication: (a) brief visualization of the software architecture; (b) RQT graph.
Figure 12. Gazebo real-time physics engine simulation of the environments: (a) normal track environment; (b) athletic field track.
Figure 13. Plotted graph of the trajectory path of the ground vehicle: (a) normal track environment; (b) athletic field track.
Figure 14. Steering angle calculation.
Figure 15. Error term for proportional-integral-derivative (PID) control.
Figure 16. Distance between robot position and center-line (lateral error).
Figure 17. Steering angle output from the vehicle in the simulator.
Figure 18. Front-view camera image in the pixel coordinate system.
Figure 19. Manual and Otsu threshold from the athletic track lane image.
Figure 20. Top view transformed image: (a) real image for top view transformation; (b) top view; (c) Otsu threshold.
Figure 21. Hough transform result (the longest two lines).
Figure 22. Comparison between measured value (blue), KF estimation (orange), and real value (red) of parabola detection.
Figure 23. The simulation result of predicting 'a', 'b' and 'c' parameters from the parabolic detection.
Figure 24. The real experiment result of the curve lane detection based on the Kalman filter (yellow): (a) output result of parabolic curve detection; (b) result aftermath of shadow reflection.
Figure 25. The simulation result of predicting 'xc', 'yc' and 'r' parameters from the circle detection.
Figure 26. Comparison between measured value (blue), KF estimation (orange), and true value (red) of circle detection.
Figure 27. The result of curve lane detection based on the circle model and Kalman filter (yellow): (a) input image for circle detection; (b) output result.
