Research on Lane Detection Based on Global Search of Dynamic Region of Interest (DROI)

A novel lane detection approach, based on dynamic region of interest (DROI) selection within the horizontal and vertical safety vision, is proposed in this paper to improve the accuracy of lane detection. The curvature of each point on the edge of the road and the maximum safe distance, which are solved from the lane line equation and vehicle speed data of the previous frame, are used to accurately select the DROI at the current moment. Next, a global search of the DROI is applied to identify the lane line feature points. Subsequently, discontinuous points are processed by interpolation. To achieve fast and accurate matching between lane feature points and mathematical equations, the lane line is fitted with a polar coordinate equation. The proposed approach was verified on the Caltech database while ensuring real-time performance. The accuracy rate was 99.21%, which is superior to other mainstream methods described in the literature. Furthermore, to test the robustness of the proposed method, it was tested on 5683 frames of complicated real road pictures, and the positive detection rate was 99.07%.


Introduction
In recent years, advanced driver assistance systems (ADAS) and autonomous driving have become more and more important for reducing traffic accidents. As a key technology for intelligent vehicles, lane detection has attracted widespread attention from many institutes and automobile technology companies [1]. Among this research, vision-based lane detection has always been a hot topic. For example, the selection of a region of interest (ROI) has been widely used to limit the range of lane detection, because it efficiently reduces redundant data and thereby improves both real-time capability and accuracy. In [2], one quarter of the entire image, near the bottom, was selected as the ROI; however, the hood and front windshield of the car can cause false edges, which affect subsequent lane line detection. To improve accuracy, the vanishing point of the lane lines, i.e., the point at which the two lane lines cross in the distance, was treated as the upper boundary of the ROI [3][4][5]. Hai [6] then proposed setting two thirds of the region below the vanishing point as the ROI. Although the calculation load of this method is less than that of the method with the vanishing point as the upper boundary, it still cannot effectively extract an ROI that follows a safe distance in the front field of view. Similarly, to improve detection performance, the ROI was separated into two parts and a piecewise linear stretch function was used to enhance the pixels in the region; however, the boundary was difficult to select [7]. Furthermore, in [8], to improve the accuracy of lane detection, the static ROI was divided into four subareas, i.e., the left and right lane lines in the near and far fields of view, but

Image Preprocessing
Image preprocessing needs to be properly performed to successfully extract the lane line pixels in images and complete the lane line detection. First of all, image graying needs to be done to reduce the calculation load, because the original images contain too much miscellaneous or repetitive information. After the grayscale image is obtained, the distinction between lane line information and other interference is weakened. Therefore, in order to highlight the lane information in the image and reduce or remove the interference information, image enhancement processing needs to be carried out on the gray image. After obtaining the enhanced image with a clear difference between lane line and interference information, image segmentation on local OTSU (maximum between-cluster variance algorithm) needs to be used to extract the lane line information. Lastly, the inverse perspective transformation matrix transforms the image coordinates into real road coordinates.

Image Grayscale Processing
The data samples used in this paper come from the Udacity, Caltech and integrated databases, collected by a 1-megapixel camera (DS90UB913A) with an onsemi 1.0 MP AR0144 RCCB sensor. Because the image data were in RGB format, the images were converted into grayscale images to reduce computing time. The grayscale value is calculated by Equation (1) [17]:

Gray = 0.3R + 0.59G + 0.11B (1)
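The weighted conversion of Equation (1) can be sketched as follows (a minimal NumPy version; the coefficients are those given above):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to grayscale using Equation (1):
    Gray = 0.3*R + 0.59*G + 0.11*B."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Weighted sum over the last (channel) axis.
    return rgb @ np.array([0.3, 0.59, 0.11])
```

Since the weights sum to 1, a uniformly white RGB image maps to a uniformly white grayscale image.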

Image Enhancement Processing
To focus on the useful information and weaken or exclude interference, the gray images need to be processed with image enhancement technology. Histogram equalization is applied to the gray image, as it can increase the dynamic range of the pixels with a small computation load. After histogram equalization, the dynamic range of the pixels becomes wider, and the difference between the lane line information and other interference becomes more obvious. The effect of the histogram equalization is shown in the corresponding figure.
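Histogram equalization as described above can be sketched as follows (a standard cumulative-histogram mapping for 8-bit images; the paper does not specify which variant it uses):

```python
import numpy as np

def hist_equalize(gray):
    """Histogram equalization of an 8-bit grayscale image: map each
    gray level through the normalized cumulative histogram to widen
    the dynamic range of the pixels."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first occupied gray level
    # Stretch the cumulative distribution over the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```

An image occupying only a narrow band of gray levels is spread across the full dynamic range, which is what makes the lane pixels easier to separate from the background.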

Image Segmentation on Local OTSU
In order to improve detection accuracy, the majority of the noise in the raw image should be assigned to the background layer using image segmentation technology, while the lane line acts as the objective layer. The method of the largest variance between classes (OTSU) is used to remove the interference information and achieve better segmentation of the road image. The between-class variance of the pixels [18] of the segmented image I(x, y) is computed by Equation (2):

σ² = ω0 ω1 (µ0 − µ1)² (2)

where µ0 and µ1 are the average gray levels of the segmented foreground and background, respectively, ω1 is the proportion of background pixels in the raw image, and ω0 is the proportion of foreground pixels in the segmented image.
The traversal approach is used to search for the threshold value, which determines the largest variance between classes. In addition, local adaptive segmentation is applied to the road image. The image processing result is compared and shown in Figure 2.
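The traversal search for the threshold with the largest between-class variance can be sketched as follows (a minimal global-OTSU version; the paper additionally applies it locally, i.e., per image block):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustively search the threshold t that maximizes the
    between-class variance sigma^2 = w0*w1*(mu0 - mu1)^2 of Equation (2)."""
    gray = np.asarray(gray).ravel()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        fg = gray[gray >= t]          # foreground (lane-line candidates)
        bg = gray[gray < t]           # background
        if fg.size == 0 or bg.size == 0:
            continue
        w0 = fg.size / gray.size      # foreground proportion
        w1 = bg.size / gray.size      # background proportion
        var = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For local adaptive segmentation, the same search would simply be run on each sub-block of the image instead of the whole frame.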

Inverse Perspective Transformation
Due to the perspective effect of the camera, the road image appears larger in the near field of view and smaller in the far field. A distorted road image is shown in Figure 1. To obtain the real road information, the road image needs to be processed by inverse perspective transformation [19]. After the transformation, the perspective effect of the camera is eliminated and the lane lines appear parallel, as in [20]. The perspective relationship of the coordinates is shown in Figure 3.

Because affine transformation is based on the corresponding relationship of image coordinate transformation, the transformation from inverse perspective to aerial view can be performed [21]. The principle is as follows: the raw image coordinate is set as U(u, v, w), the transferred image coordinate is I(x, y, 1), and the coordinate produced by the transformation matrix is U′(u′, v′, w′). The alpha and beta angles represent the pitch and deviation angles of the on-board camera, respectively. The raw image coordinate system is the reference coordinate system, so w is set as 1. The expressions for x and y are as follows:

x = u′/w′, y = v′/w′

The matrix of transformation takes the following form:

T = [t11 t12 t13; t21 t22 t23; t31 t32 t33]

The aerial view transformation is expressed as:

(u′, v′, w′) = (u, v, w) T (6)

where the block [t11 t12; t21 t22] denotes the linear transformation, T2 = [t13, t23]ᵀ is the inverse (perspective) transformation, and T3 = [t31, t32] represents the image translation. Therefore, with the value of t33 being 1, x and y can be calculated using Equations (7) and (8), respectively:

x = (t11 u + t21 v + t31)/(t13 u + t23 v + 1) (7)

y = (t12 u + t22 v + t32)/(t13 u + t23 v + 1) (8)

According to the equations, as long as the coordinates of four non-collinear points in the image are found, the transformation matrix T can be obtained. Then, each point coordinate in the aerial view, corresponding to the original image, can be solved using Equation (6). Figure 4 shows the aerial view obtained from the affine transformation; the points marked in Figure 4 are the selected coordinate points.
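Solving the transformation matrix T from four non-collinear point pairs, as described above, can be sketched as follows (row-vector convention of Equation (6), with t33 = 1):

```python
import numpy as np

def solve_homography(src, dst):
    """Solve the 3x3 transformation matrix T (t33 = 1) from four
    non-collinear point correspondences src -> dst."""
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        # Equation (7): x = (t11*u + t21*v + t31) / (t13*u + t23*v + 1)
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v]); b.append(x)
        # Equation (8): y = (t12*u + t22*v + t32) / (t13*u + t23*v + 1)
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v]); b.append(y)
    t11, t21, t31, t12, t22, t32, t13, t23 = np.linalg.solve(
        np.array(A, dtype=float), np.array(b, dtype=float))
    return np.array([[t11, t12, t13],
                     [t21, t22, t23],
                     [t31, t32, 1.0]])

def apply_T(T, u, v):
    """Map an image point (u, v) to the aerial view via Equation (6)."""
    up, vp, wp = np.array([u, v, 1.0]) @ T
    return up / wp, vp / wp
```

Four point pairs give the eight equations needed for the eight unknowns; every other pixel is then mapped through `apply_T`.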

Longitudinal Boundary Design of DROI Based on Safe Car Distance
In order to obtain a better ROI, the concept of safety vision, based on road traffic safety rules, is introduced in this paper. The DROI is then selected according to this adaptive safety field of view.

Generally, the field of vision lies in front of the vehicle. Imitating human driving habits, this section restricts the longitudinal range of the DROI to the safe distance.

The safety area of the front view refers to the distance needed when the car brakes in an emergency. If the speed of the vehicle is too high, the lane line vanishing point is taken as the longitudinal boundary of the dynamic region of interest. Meanwhile, in order to ensure safety on roads with excessive curvature, the speed must be less than the critical rollover speed.
Since the lane line detection is aimed at matching the lane line equations in the pixel coordinate system, the image coordinate system must be converted into the geodetic coordinate system to find the longitudinal safe distance of vehicles. Firstly, the displacement ratio of the image coordinate system to the geodetic coordinate system is calibrated. In the image coordinate system, the horizontal and longitudinal displacements of the lane are expressed in pixel coordinates, whose minimum units are a row and a column; in the actual process, displacement is described in meters, so the difference between the two coordinate systems must be calibrated after the aerial view transformation [14]. The ratios of horizontal and vertical distances between the image coordinate system and the geodetic coordinate system are a_x and a_y, respectively; they are given by a_x = m/l and a_y = n/s, where m denotes the number of rows in pixel coordinates, l is the corresponding actual distance in geodetic coordinates, n represents the number of columns, and s indicates the corresponding real length in geodetic coordinates. The aggregation equations of the feature points in the left and right lane line images are given below.
Appl. Sci. 2020, 10, 2543

The aggregation equations of the left and right lane line points represented in the geodetic coordinates are shown in Equation (13). The curvature equation in the rectangular coordinate system is

κ = |y″| / (1 + y′²)^(3/2)

and the radius of curvature of the lane line follows from the radius of curvature equation:

R = 1/κ = (1 + y′²)^(3/2) / |y″|

The slope at a point of the lane line is tan β = y′, so v_c can be calculated from the corresponding inverse trigonometric function, where β is the inclination angle of the tangent line at each point of the lane-line track, v_c is the longitudinal speed component, and v_r is the vehicle speed at the current frame. The braking distance is then obtained from the empirical formula relating speed and brake distance [22]. Now, according to the relationship between the actual displacement and the image pixels, the boundary of the left and right lane lines in the image can be determined.
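The curvature radius, tangent inclination, and braking distance calculations of this section can be sketched as follows. The speed decomposition v_c = v_r·cos β and the kinematic braking-distance formula d = v²/(2a) are assumptions of this sketch, since the paper's empirical formula [22] is not reproduced in the text:

```python
import math

def curvature_radius(dy, d2y):
    """Radius of curvature R = (1 + y'^2)^(3/2) / |y''| for a lane-line
    track y = f(x) in the rectangular (geodetic) coordinate system."""
    return (1.0 + dy * dy) ** 1.5 / abs(d2y)

def longitudinal_speed(v_r, dy):
    """Longitudinal speed component at a lane-line point, where
    beta = arctan(y') is the tangent inclination (decomposition
    v_c = v_r * cos(beta) assumed)."""
    beta = math.atan(dy)
    return v_r * math.cos(beta)

def braking_distance(v_kmh, decel=7.0):
    """Stand-in for the empirical speed/brake-distance formula of [22]:
    simple kinematics d = v^2 / (2a), with v converted from km/h to m/s
    and an assumed deceleration in m/s^2."""
    v = v_kmh / 3.6
    return v * v / (2.0 * decel)
```

The braking distance, scaled by the pixel-to-meter ratios a_x and a_y, would then set the longitudinal DROI boundary in the image.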

Lateral Boundary Design of DROI Based on Road Curvature
In order to reduce computation in the lateral vision and improve detection performance, a lateral ROI constraint based on road curvature is proposed. If the curvature of the road is too large, the left and right lateral boundaries are set according to the road extension direction. For example, on a left-hand bend, the ROI boundary on the left is enlarged while that on the right remains within the basic range. Similarly, if the car drives in a straight line, the left and right ROI boundaries remain within the basic safety range. The flow chart of the lateral ROI constraint is shown in Figure 5.

To obtain the road curvature, firstly, the track equation of the lane line is obtained according to the road extension direction. Next, the curvature radius at each point of the road can be solved through the track equation. Because the time difference between two adjacent frames is very small, even if the car is driving at a high speed of 120 km/h, the displacement difference between two adjacent frames is less than 1.3 m. Therefore, the boundary of the ROI, obtained from the lane line track of the previous frame, is used as the ROI of the next frame.

From the curve equation of the lane line trajectory, the lateral velocity of the car at each point of the lane line curve can be obtained. Because the original lane line image is transformed into the aerial view, the generated aerial view is approximately equally spaced in the horizontal and vertical directions. Therefore, the horizontal pixel size of the image has a fixed proportional relationship with the actual horizontal distance, and the longitudinal distance has a corresponding proportional relationship as well.
In the geodetic coordinate system, the distance between the left and right lane lines is λ = 3.75 m. In the image coordinate system, the points g_l(u_i, v_j) and g_r(u_{i+a}, v_j) are selected at the same height on the left and right lane lines, respectively, and the lateral displacement between these two points, Δu = u_{i+a} − u_i, is the distance between the left and right lane lines in the image. The ratio of the actual lateral distance to this pixel displacement is represented by µ = λ/Δu.

According to the rules of safe driving, when the lane line is straight, the basic lateral safety constraint is 0.875/µ. Eventually, the lateral boundary can be obtained from the relationship between braking distance and speed, as given by Equation (18).
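The pixel-to-meter ratio µ and the basic lateral constraint can be sketched as follows (interpreting 0.875/µ as a 0.875 m margin expressed in pixel columns is an assumption of this sketch):

```python
def lateral_ratio(u_left, u_right, lane_width_m=3.75):
    """Ratio mu of actual lateral distance (m) to pixel columns, from
    the abscissas of the left and right lane lines at one image row,
    with the geodetic lane width lambda = 3.75 m."""
    return lane_width_m / abs(u_right - u_left)

def basic_lateral_margin(mu):
    """Basic lateral safety constraint for a straight lane, 0.875/mu
    (a 0.875 m margin converted into pixel columns, as assumed above)."""
    return 0.875 / mu
```

For example, lane lines 750 columns apart give µ = 0.005 m per column, so the basic margin corresponds to 175 columns on each side.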
In the actual lane line detection process, in order to avoid excessive calculation errors in the lateral boundary caused by pixel differences, lane lines are detected in the aerial view in this paper as shown in Figure 6. At the same time, the ROI, which is determined by the lane line in the current frame, is used in the aerial view in the next frame.

Detection Principle of the DROI Global Research and Starting Point Design
Considering the complex lane environment, the noise on the actual road, as well as the absence and discontinuity of lane lines, a global search method based on DROI is proposed to identify lane line feature points. In this section, the seed function of pixel recognition is employed to conduct a global search for DROI and realize the recognition of lane line feature points. After a series of preprocessing, such as graying, image enhancement and image segmentation, the binary lane line image matrix is obtained.
In the image matrix, the search is carried out from the bottom of the image according to the image translation rules. The specific search starting line is shown in Figure 9, and the search process is shown in Table 1.

1. Set the search seed function H(x, y).
2. Start the search from the bottom of the image, and stop the search at the ROI edge.
3. Begin to search the next line once the ROI edge is reached, searching each line only once.
4. If the boundary is reached without finding a feature point, start searching the next line.

According to the DROI global search principle, the seed function is set as H(x, y) [21], and Z(x, y) refers to the part of the image where the filter H(x, y) is applied.
Therefore, the judging function is expressed as follows:

σ(x, y) = Z(x, y) · H(x, y)

In order to judge whether a point belongs to the lane line, if the judging function satisfies σ(x_i, y_i) ≥ λ, the point is identified as a point on the lane line [23]. If the threshold requirement is not met, the search continues until the ROI boundary is reached. Since the image function is a matrix, the seed can be set as a template of structural elements to carry out a dot product operation with the image matrix.
The template of the seed function is as follows:

H = [s_ij], s_ij = 1 (i = 1, 2, 3, …; j = 1, 2, 3, …)

where s_ij is an element in the seed template, which can be selected according to the effect of the binary image. In addition, the threshold λ is set as 9 in this paper. The seed function is evaluated over the road image to obtain σ and judge the lane line position; the data of the lane line feature points are thereby determined.
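The global search with the seed template can be sketched as follows (a 3 × 3 all-ones template is assumed here, consistent with s_ij = 1 and λ = 9; the search returns at most one feature point per row, matching the "each line only once" rule of Table 1):

```python
import numpy as np

def find_lane_points(binary, seed=None, lam=9):
    """Global DROI search: slide the seed template H over the binary
    image Z and mark (x, y) as a lane-line point when the dot product
    sigma(x, y) = sum(Z_patch * H) >= lambda (lambda = 9 in the paper)."""
    if seed is None:
        seed = np.ones((3, 3), dtype=int)   # s_ij = 1 (assumed 3x3)
    h, w = binary.shape
    points = []
    for y in range(h - 2, 0, -1):           # search from the bottom row up
        for x in range(1, w - 1):
            patch = binary[y - 1:y + 2, x - 1:x + 2]
            if int((patch * seed).sum()) >= lam:
                points.append((x, y))
                break                       # each line is searched only once
    return points
```

A solid three-pixel-wide lane stripe yields one feature point per row, centered on the stripe.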

ROI Search Based on the Previous Image
The difference between two adjacent frames is small, and it is related to the speed of the vehicle. To improve the real-time capability of lane line detection, the search scope of the next frame can be set based on the last search. Therefore, if the lateral velocity of the vehicle is low, the scope of the DROI search is smaller than the previous area. Additionally, to avoid false and missed detections of lane lines caused by narrowing the search range, the on and off conditions of the local search should also be set.

Lane Line Estimation Based on Another Lane Line
Some information from the raw image is lost during gray processing and the aerial view transformation. During image preprocessing, incomplete and worn lane lines can also be eliminated together with the noise, so that those lane lines are not detected. In order to avoid this, the intact lane line can be utilized to estimate the position of the other lane line. Concrete images are shown in Figure 10.

Interpolation of Lane Line Discontinuities
Due to the existence of dotted lines and worn lane lines, lane line detection cannot always be carried out effectively. Therefore, in this paper, the missing points between line segments are filled by interpolation [24].

For generally straight lanes or lanes with small curvature, first-order (linear) interpolation can supplement the missing data points; for roads with large curvature, quadratic interpolation supplements the missing points better.
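The first-order and quadratic interpolation referred to above can be sketched with the standard linear and Lagrange forms (the paper's exact equations are not reproduced in the extracted text, so these standard forms are assumed):

```python
def linear_interp(p0, p1, x):
    """First-order (linear) interpolation between known lane points
    (x0, y0) and (x1, y1) -- suited to straight or gently curved lanes."""
    (x0, y0), (x1, y1) = p0, p1
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def quadratic_interp(p0, p1, p2, x):
    """Quadratic (Lagrange) interpolation through three known points,
    better suited to lane segments with large curvature."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2
```

The quadratic form reproduces a parabolic lane segment exactly, which is why it fills gaps on curved roads more faithfully.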


Feature Point Tracking of the Lane Line
In the process of lane line detection, due to stains on the road and occlusion from other vehicles, lane line absence will occur frequently, i.e., the lane line cannot be seen visually in a single frame. In such cases, lane line prediction and tracking are essential. Owing to the small computation load, the grey prediction method has good performance and does not need a large amount of original data for sample support, making it very suitable for lane line prediction [25].
To efficiently achieve lane line prediction and tracking, firstly, the lane line data are transformed into one-dimensional data of the form X⁽⁰⁾ = {X⁽⁰⁾(i), i = 1, 2, 3, …, n}. Because the one-dimensional data represent the horizontal ordinate values, and the lane line in the image is an image pixel matrix, the one-dimensional data are non-negative. Before the grey forecast model is set, X⁽⁰⁾ is accumulated once:

X⁽¹⁾(k) = Σᵢ₌₁ᵏ X⁽⁰⁾(i), k = 1, 2, …, n

Then, the following differential equation with X⁽¹⁾ is established:

dX⁽¹⁾/dt + aX⁽¹⁾ = u (30)

The solution to the differential Equation (30) is as follows:

X̂⁽¹⁾(k+1) = (X⁽⁰⁾(1) − u/a) e^(−ak) + u/a (31)

The parameters a and u involved in Equation (31) can be obtained by least squares:

[a, u]ᵀ = (BᵀB)⁻¹ Bᵀ Yₙ

where B is the data matrix and Yₙ indicates the data column. The result of the above equation is the accumulated value of the prediction, so a subsequent subtraction operation needs to be carried out. Thus, the tracking solution is given by:

X̂⁽⁰⁾(k+1) = X̂⁽¹⁾(k+1) − X̂⁽¹⁾(k)
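The grey prediction steps above can be sketched as a standard GM(1,1) model (the usual background averaging z(k) = (X⁽¹⁾(k) + X⁽¹⁾(k−1))/2 is assumed when building the data matrix B):

```python
import numpy as np

def gm11_predict(x0, k_ahead=1):
    """GM(1,1) grey prediction for a short non-negative series x0
    (lane-line abscissa values). Fits dX1/dt + a*X1 = u on the
    accumulated series X1 and returns the next k_ahead x0 values."""
    x0 = np.asarray(x0, dtype=np.float64)
    n = x0.size
    x1 = np.cumsum(x0)                       # 1-AGO accumulation
    z = 0.5 * (x1[1:] + x1[:-1])             # background values (assumed)
    B = np.column_stack([-z, np.ones(n - 1)])
    Y = x0[1:]
    a, u = np.linalg.lstsq(B, Y, rcond=None)[0]   # [a,u] = (B^T B)^-1 B^T Y
    def x1_hat(k):                           # solution of the grey ODE
        return (x0[0] - u / a) * np.exp(-a * k) + u / a
    # Inverse accumulation recovers the predicted x0 values.
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + k_ahead)]
```

On a short exponentially growing series the model recovers the trend closely, which is the property exploited for predicting occluded lane-line positions.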

Lane Line Fitting Based on a Polar Coordinate System
After completing the above steps, the data feature points of the lane lines are available. In this section, the data sets of the left and right lane lines are established, and then the lane line coordinates are estimated by establishing the mathematical model.
A great deal of research has been undertaken on lane line models in the literature. Existing lane line models mainly include the parabolic, linear, hyperbolic, B-spline and other functional models. However, these models are built on the rectangular coordinate system, one characteristic of which is that each value of the independent variable corresponds to a single value of the dependent variable. In addition, a lane line with infinite slope cannot be fitted by such an equation. In order to solve these problems, the feature points of the lane line are fitted in the polar coordinate system, yielding superior results to those reported using traditional methods.
The shape of the lane line in the image coordinate system falls into two kinds: straight lines and curves. In a Cartesian coordinate system, a given x-coordinate will have multiple corresponding y-coordinate values when the lane line is perpendicular to the X-axis. This violates the definition of a function, and such a lane line cannot be represented by one. To solve this problem, a novel method based on the polar coordinate system is proposed, in which the data of the lane line can be fitted by the function ρ = f(α).
The conditions for the reciprocal transformation between the polar coordinate system and the rectangular coordinate system are as follows:
1. The pole of the polar coordinate system coincides with the origin of the rectangular coordinate system.
2. The polar axis overlaps with the x axis of the rectangular coordinate system.
3. Both coordinate systems have the same unit length.
According to Figure 11, the angle of the lane line in the polar coordinate system varies from 0 to π. If the remaining conditions are met, the transformation from the rectangular coordinate system to polar coordinate system can be fulfilled. During lane line fitting, the lane should be divided into straight line fitting and curve line fitting. Additionally, before the ROI is obtained, the differential coefficient should be solved. Based on Equation (26), the feature point (x, y) in the rectangular coordinate system can be transferred to point (α, ρ) in the polar coordinate system. Finally, the lane line feature points are fitted by the least square method in the polar coordinate system.
The coordinate transformation relationship leads, based on Equation (36), to the fitting equation of the lane line. Additionally, according to Equations (14) and (23), the curvature formula can be obtained in the polar coordinate system, and the curvature radius is derived from it, where β is the inclination of the tangent line at each point below the track. According to the relationship between ρ and θ, the slope k_t of the tangent line at each point of the lane line is solved.
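For reference, in standard notation these relations read as follows (these are the textbook polar-coordinate formulas to which the relations above are presumed to correspond):

```latex
% polar <-> rectangular transformation
x = \rho\cos\theta, \qquad y = \rho\sin\theta, \qquad
\rho = \sqrt{x^{2}+y^{2}}, \qquad \tan\theta = y/x
% curvature of \rho = \rho(\theta) and the curvature radius
K = \frac{\lvert \rho^{2} + 2\rho'^{2} - \rho\,\rho'' \rvert}
         {\left(\rho^{2} + \rho'^{2}\right)^{3/2}}, \qquad
R = \frac{1}{K}
```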
For simplification, ρ and θ are represented by x and y in the following text. Because the least-squares method has prominent advantages in lane line fitting and high fitting accuracy [26,27], it can realize the fitting of both straight lines and curves. In this section, the specific steps of fitting are explained as follows. Firstly, an approximate curve y_i = φ(x_i) is obtained from the given lane line data P_i(x_i, y_i). At the same time, to ensure the accuracy of the curve, the bias between the fitted equation and the original data points must be minimized; this bias is evaluated at each point P_i. A fitting polynomial is set, and the sum of the distances from the lane line feature points to the fitting equation is formed. Taking the partial derivative with respect to each coefficient a_i and transforming the result into matrix form (Equation (44)) yields, after simplification, the normal equation XA = Y. Generally, if both sides of the equation XA = Y could be multiplied by the inverse matrix of X, the value of A could be determined directly; however, this requires X to be a square, non-singular matrix, a condition that is not met in most cases. Therefore, both sides are first multiplied by the transpose of X, and A is solved from the resulting square system. The least-squares fitting coefficients A obtained from Equation (46) are then substituted into the fitting polynomial to obtain the lane line equation.
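To make the procedure concrete, a minimal sketch (assuming the pole is placed at the image origin; the names are illustrative) that transforms feature points to polar coordinates and solves the normal equations A = (XᵀX)⁻¹XᵀY:

```python
import numpy as np

def fit_lane_polar(points, degree=2):
    """Fit rho = f(alpha) to lane feature points given in rectangular
    coordinates, via the least-squares normal equations."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    rho = np.hypot(x, y)                  # radius of each feature point
    alpha = np.arctan2(y, x)              # angle, in [0, pi] for y >= 0
    # design matrix with columns 1, alpha, alpha^2, ..., alpha^degree
    X = np.vander(alpha, degree + 1, increasing=True)
    # X^T X is square and (for distinct angles) non-singular, so A can be solved
    return np.linalg.solve(X.T @ X, X.T @ rho)  # lowest-order coefficient first
```

Because the fit is performed on ρ over α, a lane line that is vertical in the image, which no function y = f(x) can represent, is handled like any other set of feature points.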

Experimental Conditions and Algorithm Flow
The novel lane line detection method proposed in this paper is verified using MATLAB 2014a. The operating hardware environment for lane detection is a computer with an Intel(R) Core(TM) i7-6700K CPU. The algorithm flow is shown in Figure 12.

Evaluation Index of the Lane Line Detection
In order to give an unbiased evaluation of the detection performance of the lane line detection algorithm, this paper utilizes several indicators [28,29]: the recall rate (the ratio of the number of samples detected correctly among the positive detections to the total number of positive samples), the precision rate (the ratio of the number of samples detected correctly among the positive detections to the number of positive detections), and the missing rate (the ratio of the number of samples detected incorrectly among the negative detections to the total number of positive samples). To obtain these rates, the true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) are used. In the lane line detection process, if the detection marks cover more than 80% of the lane lines in the DROI, the error is no more than 5 pixels, and there is no other false detection, then the lane line identification is considered accurate. TP refers to correct detections among the positive detections, and FP to false detections among them; FN is the number of samples detected incorrectly in the negative detection, and TN the number of samples detected correctly in the negative detection. Using TP, FP, TN, and FN, the recall rate, precision rate, and missing rate are given as follows. In addition, Figure 13 shows the corresponding effect diagram.
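Based on the verbal definitions above, the three indices reduce to the standard confusion-matrix formulas (the function name is illustrative):

```python
def detection_metrics(tp, fp, fn):
    """Recall, precision, and missing rate from the detection counts."""
    recall = tp / (tp + fn)     # correct positives over all positive samples
    precision = tp / (tp + fp)  # correct positives over all positive detections
    missing = fn / (tp + fn)    # missed positives over all positive samples
    return recall, precision, missing
```

Note that recall and missing rate are complementary: they always sum to one.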


Comparative Analysis of Test Results
If self-gathered data is used to verify lane line detection algorithms, it is hard to evaluate them against a common standard. The Caltech datasets are widely utilized because they contain not only structured roads, but also unstructured roads including shadows and road marks of varying intensities. Therefore, these data are used to evaluate the algorithm in this paper. Some classic roads are shown in Figure 14.

The proposed method was compared with the above methods using the Caltech data sets; as shown in Table 2, the detection performance of the proposed method is better than those of the other methods. The main reasons are as follows: in [30], the lane line vanishing point is not easy to choose when the lane line is absent, so this method is unsuitable for worn lane lines; the RANSAC algorithm [29] is unsuitable for lane lines with large curvatures; and the method proposed by Seo has difficulty improving lane line detection according to the driving direction due to the inverse of the lane line. The detection results of the method presented in this paper are shown in Figure 15.
In order to further evaluate the algorithm proposed in this paper, other typical road environments, including highways, wet roads, and rural roads, were tested. Figure 16 shows some typical road conditions used for evaluation. Table 3 shows the test results of the proposed detection method under different driving scenarios. It can be seen that, for all tested driving scenarios, the positive detection rate of the proposed method is close to 99% and the missing detection rate is less than 3%, which meets the requirements of lane line detection. In addition, Figure 17 shows the test results under different road conditions. The detection accuracy is improved by adding the constraints described in this paper. In order to avoid disturbing the varying curvature of the lane lines, the rows containing large obstacles, including font markings and zebra crossings, can be ignored. Furthermore, if one lane line is visible on the road, it can be used to estimate the other lane line. Lastly, the method proposed in this paper provides good detection results on both the classical data sets and the complex road condition data sets, which meets the practical application requirements of road recognition.

Discussion of Results
In order to validate the advantages of the proposed algorithm, it was tested using the Caltech data sets. The obtained results demonstrate that the positive detection rate reaches 99.21% and the missing detection rate is about 2.7%, which is superior to other mainstream methods in terms of accuracy.
Furthermore, 5683 frames of road images, including highways, rural roads, wet roads, and mountain roads during the daytime and at night, were further tested using the proposed method. The results show that the positive detection rate reaches 99.07%, while the missing detection rate is lower than 3%, indicating that the proposed algorithm provides good adaptability in different complex environments.

Conclusions
In order to improve the accuracy of lane detection, a novel approach based on DROI selection in the safety vision and lane line fitting in a polar coordinate system is proposed in this paper. The characteristics of lane lines on structured roads are analyzed, and the DROI global search method is used to detect the lane lines. Meanwhile, the interpolation and tracking of lane line feature points are carried out. Given the advantages of curve fitting with the polar coordinate equation, the least-squares method is used to fit the lane line in the polar coordinate system. Lastly, the proposed approach is evaluated using the Caltech database, and the accuracy rate is 99.21%. Additionally, the positive detection rate is shown to be 99.07% in complicated real road environments, which verifies the robustness of the proposed method. The accuracy of this method is clearly superior to those of other mainstream methods.
The approach proposed in this paper has good application prospects in advanced driver assistance systems. In addition, the method of extracting the dynamic region of interest can also be applied in the field of robotics.