Article

Efficient Lane Boundary Detection with Spatial-Temporal Knowledge Filtering

Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(8), 1276; https://doi.org/10.3390/s16081276
Submission received: 29 March 2016 / Revised: 25 July 2016 / Accepted: 8 August 2016 / Published: 12 August 2016
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Abstract

Lane boundary detection technology has progressed rapidly over the past few decades. However, many challenges that often cause lane detection to fail remain to be solved. In this paper, we propose a spatial-temporal knowledge filtering model to detect lane boundaries in videos. To address the challenges of structure variation, heavy noise and complex illumination, this model incorporates prior spatial-temporal knowledge with lane appearance features to jointly identify lane boundaries. The model first extracts line segments in video frames. Two novel filters—the Crossing Point Filter (CPF) and the Structure Triangle Filter (STF)—are proposed to filter out the noisy line segments. The two filters introduce spatial structure constraints and temporal location constraints into lane detection, which represent the spatial-temporal knowledge about lanes. A straight line or curve model determined by a state machine is used to fit the remaining line segments and output the lane boundaries. We collected a challenging realistic traffic scene dataset. The experimental results on this dataset and another standard dataset demonstrate the strength of our method. The proposed method has also been successfully applied to our autonomous experimental vehicle.


1. Introduction

Lane boundary detection has been extensively studied over the past few decades for its significance in autonomous guided vehicles and advanced driver assistance systems. Despite the remarkable progress achieved, many challenges remain to be addressed. The first challenge is the drastic change of lane structures; for example, a lane may suddenly become wide or narrow. In addition, lane boundary detection is often disturbed by overly abundant visual cues in traffic scenes, such as shadows, vehicles and traffic signs painted on the road, or, conversely, by excessively weak cues, such as worn lane markings. These challenges often make conventional methods inapplicable or even lead to misleading results.
One of the major reasons is that conventional methods emphasize lane appearance features but overlook prior spatial-temporal knowledge. Lanes are knowledge-dominated visual entities. Their appearance is relatively simple, usually a pair of parallel lines, without sophisticated structure or texture. When the appearance is corrupted by noise, humans often resort to prior knowledge to identify lanes. For example, in urban scenes where frontal and passing vehicles occlude the lanes, humans can filter out pseudo lines and estimate the real lane boundaries according to their knowledge of lane width constraints and of the lane boundaries observed at the previous time.
In this paper, we propose a spatial-temporal knowledge filtering method to detect lane boundaries in videos. The general framework is shown in Figure 1. This model unifies the feature-based detection and knowledge-guided filtering into one framework. With a video frame, the model first extracts line segment features with the Line Segment Detector (LSD) [1]. This approach differs from traditional edge detection and can obtain robust and accurate line segments in various traffic scenes. A Crossing Point Filter (CPF) and a Structure Triangle Filter (STF) are proposed to filter out noisy line segments. These two filters characterize the spatial structure constraints and temporal location constraints, which represent the prior spatial-temporal knowledge about the lanes. A straight line or curve model that is determined by a state machine is used to fit the line segments and finally to produce the lane boundaries.
The proposed method was tested on a large-scale dataset. This dataset was collected in natural traffic environments under various weather conditions, illumination settings and scenes. The experimental results demonstrate that the method can detect lane boundaries in various challenging scenes with high performance. Moreover, the proposed algorithm has been successfully applied to our autonomous experimental vehicle.
Compared to previous work, this paper makes four major contributions.
  • It develops a framework that incorporates prior spatial-temporal knowledge and appearance features to detect lane boundaries in videos.
  • It proposes two knowledge-based filters to filter out noisy line segments.
  • It builds a large-scale dataset of traffic scene videos. The proposed method was tested on this dataset and achieved impressive performance.
  • The algorithm has been successfully applied to an autonomous experimental vehicle.

2. Related Work

In this section, we briefly review related literature from the following major streams: feature extraction, feature refinement, lane fitting and lane tracking.

2.1. Feature Extraction

Edges are among the most widely-used features in lane representation and detection [2,3,4,5,6,7,8,9,10,11,12,13,14]. Canny edges [15] are composed of pixels with strong gradient magnitudes. The steerable Gaussian filter [5,16,17,18,19,20,21] extracts edge features by utilizing gradient orientation information. However, the edge thresholds in these methods are manually set to constant values, which makes them inapplicable to dynamically changing traffic scenes. Color is another widely-used feature in lane detection [22,23]; however, it is sensitive to illumination changes.
Machine learning methods were recently introduced to feature extraction to overcome those drawbacks [24,25,26,27,28]. The method in [24] trained an artificial neural network classifier to obtain the potential lane-boundary pixels. Such features extracted by an off-line-trained classifier are closely related to the varieties and scales of the training samples. Multiple types of features were fused to overcome the drawbacks of unary features in a previous study [29]. In our work, the LSD algorithm [1] is used to extract lane segment features. This approach can accurately extract line segments in various traffic scenes without manually setting the thresholds.

2.2. Feature Refinement

To refine the extracted features, classical image processing algorithms, such as threshold segmentation [11,12,13,30,31,32] and Gaussian filtering [4,24,30,33,34,35], are usually employed. These methods require manually set thresholds and do not take advantage of road geometry information.
Filtering methods based on geometry constraints have also been explored to refine line features [5,9,10,23,36,37]. For example, the IPM-based methods [3,9,18,20,25,27,30,31,32,33,35,36,38,39,40,41,42] eliminate noise by searching for horizontal intensity bumps in bird's-eye-view images, based on the assumptions of parallel lane boundaries and flat roads. However, if a road is not flat, these methods map the lane boundaries to nonparallel lines in the bird's-eye view, leading to false detections. Additionally, horizontal bumps are difficult to detect in traffic scenes with weak visual cues, such as worn-out lane boundaries and complex illumination. Moreover, the IPM-based methods require calibration parameters, which introduces systematic error and requires recalibration whenever the camera is moved.
Aside from the flat roads and parallel lane boundaries, other geometrical structure constraints are used for feature refinement. Global lane shape information is utilized to iteratively refine feature maps in [5]. The method in [9] utilizes driving direction to remove useless lines. Vanishing points are utilized in both [5,9] to improve the filtering performance. Shape and size information is used to determine whether a region belongs to a lane boundary in [23]. However, the spatial-temporal constraints have not been extensively analyzed.
Searching strategies are also explored to refine feature maps. The model in [43] searches for useful features by employing a scanning strategy from the middle pixel of each row to both sides. An edge-guided searching strategy is proposed in [8]. Kang and Jung [44] detected real lane boundaries using a dynamic programming search method. Expensive sensors, such as GPS, IMU and LIDAR, are utilized to provide assistant information [25,45]. However, these strategies often lack general applicability.
In this paper, we propose two generally applicable filters, namely CPF and STF. They characterize geometrical structure and the temporal location constraint, respectively.

2.3. Lane Fitting

Many straight and curve fitting methods have been developed. Hough transform is frequently used for straight line fitting [6,7,8,10,11,25,30,31,35,37,38,43,46,47]. The parabola and hyperbola are classical curve models that are adopted by [5,7,36,38,48,49,50,51,52]. The major limitation of the quadratic curve is the lack of flexibility to model the arbitrary shape of lane boundaries. Therefore, other curves, such as Catmull–Rom [2,34,46,53], B-spline [6,35], Bezier [33] and the cubic curve [4,18,30], are also widely used. Generally, when fitting a lane, many candidates are generated by RANSAC [8,9,18,20,24,25,27,31,32,33,35,38,39,54], and the candidate with the maximum likelihood is chosen.
In this paper, we design a state machine to estimate if a lane is straight or curved. Then, the straight line or curve fitting model is used to fit lanes.

2.4. Lane Tracking

Tracking technology is used to improve the computation efficiency and detection performance by utilizing the information of temporal coherence. Among tracking methods, the Kalman filter [43,46,54] and the particle filter [24,28,55] are the most widely used. The model in [55] defines the particle as a vector to represent the control points of lane boundaries. However, such methods often assume that the changes of lane boundary positions between two consecutive frames are small, which may be inapplicable when a vehicle turns at a crossroad or changes lanes. The road paint, heavy traffic and worn lanes also bring challenges to these methods.

3. Feature Extraction

We extract line segments in video frames as lane boundary features with the Line Segment Detector (LSD) proposed in [1]. LSD is an efficient and accurate line segment extractor, which does not require manually-set parameters.
The principal steps of LSD extraction are as follows [1]. An RGB image is first converted to a gray image, which is then partitioned into many line support regions. Each region is composed of a group of connected pixels that share the same gradient angle. The line segment that best approximates each line support region is identified. Finally, all of the detected line segments are validated. Figure 2 illustrates the feature extraction procedure.
To convert an RGB image into a gray image, the gray intensity I(x) at a pixel x is represented as a weighted average of the RGB values R(x), G(x) and B(x) [11]:
I(x) = ω_1 R(x) + ω_2 G(x) + ω_3 B(x)    (1)
The study in [48] demonstrated that the red and green channels exhibit good contrast for white and yellow lane markings. Since most lane markings on real roads are white or yellow, we set ω_1 = 0.5, ω_2 = 0.5 and ω_3 = 0 in Equation (1) to enhance the contrast of lane markings against their surroundings.
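For illustration, Equation (1) with these weights can be implemented in a few lines (a minimal sketch assuming a NumPy image array in R, G, B channel order; the function name is ours):
```python
import numpy as np

def to_gray(rgb, weights=(0.5, 0.5, 0.0)):
    """Weighted grayscale conversion of Equation (1).
    The default weights emphasize the red and green channels, which give
    good contrast for white and yellow lane markings."""
    rgb = rgb.astype(np.float32)
    gray = weights[0] * rgb[..., 0] + weights[1] * rgb[..., 1] + weights[2] * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)
```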
The line segments extracted above the lanes' vanishing line are noise for lane boundary detection. To remove these noisy segments, we first localize the vanishing line. We develop a vanishing line detection method that does not require camera calibration; in this method, we assume that the rotation angle of the camera with respect to the horizontal axis is zero.
To localize the vanishing line, the crossing points of all of the line segments are first computed. Then, the image plane is uniformly divided into horizontal bands, as illustrated in Figure 2c. Each horizontal band is assigned a score. The score of the i-th band is,
P_i = n_i / Σ_{j=1}^{N} n_j    (2)
where n_i is the number of crossing points in the i-th band and N is the number of bands. In this work, the height of each band is set to 10 pixels.
The horizontal symmetry axis of the band with the highest score is considered as the vanishing line. The line segments above the vanishing line are eliminated, and the remaining segments serve as lane boundary features for subsequent processing, as shown in Figure 2d. Some vanishing line detection examples are shown in Figure 3.
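A minimal sketch of this band-voting step follows, assuming the LSD line segments are available as (x1, y1, x2, y2) tuples in image coordinates; the helper names are ours and the brute-force pairwise intersection is illustrative, not the authors' implementation:
```python
import numpy as np
from itertools import combinations

def line_intersection(s1, s2):
    """Intersection of the infinite lines through two segments, or None if (nearly) parallel."""
    x1, y1, x2, y2 = s1
    x3, y3, x4, y4 = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def vanishing_line_row(segments, img_h, band_h=10):
    """Vote crossing points into horizontal bands and return the row of the
    band with the highest score P_i (Equation (2))."""
    n_bands = img_h // band_h
    votes = np.zeros(n_bands)
    for s1, s2 in combinations(segments, 2):
        p = line_intersection(s1, s2)
        if p is None:
            continue
        px, py = p
        if 0 <= py < img_h:
            votes[min(int(py) // band_h, n_bands - 1)] += 1
    scores = votes / max(votes.sum(), 1.0)   # P_i in Equation (2)
    best = int(np.argmax(scores))
    return best * band_h + band_h / 2.0      # horizontal symmetry axis of the best band
```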

4. Filtering

Due to noise and complex traffic scenes, not all line segment features extracted by LSD come from lane boundaries. The line segments that possibly lie on lane boundaries should be kept, while those in other areas should be eliminated; this is realized by filtering. In this section, we present two knowledge-based filters that are used to filter out noisy line segments. The filtering reflects and characterizes two types of knowledge, namely spatial geometry constraints and temporal location consistency.

4.1. Crossing Point Filter

4.1.1. Definition

Under camera projection, lane boundaries that are parallel in the 3D world intersect at a common vanishing point in the 2D image [23]. The general idea of CPF is to filter out line segments that do not pass through the vanishing point.
However, a single point is prone to be interfered with by noise and difficult to estimate accurately. Inspired by previous studies that use vanishing points to detect lanes [5,9], we use a bounding box near the vanishing point to refine the line segments. We call this bounding box the vanishing box, as the red box shown in Figure 4b. A line segment is filtered out if all of the crossing points of this segment with other segments are outside the vanishing box. Figure 4c shows that many noisy line segments are filtered out.
The vanishing box in the n-th frame is defined as:
b_n = {x_n, y_n, w_n, h_n, s_n}    (3)
where (x_n, y_n), w_n, h_n and s_n are the top-left point, the width, the height and the score of b_n, respectively.

4.1.2. Vanishing Box Search

Since the vanishing box is close to the vanishing line, we search for the vanishing box in a restrictive region R centered on the vanishing line, as the green box shown in Figure 4b. The restrictive region R is defined as:
R = {r_x, r_y, r_w, r_h}    (4)
where r_w is the width of R and is set to the width of the image; r_h is the height of R and is set to r_h = 60 in our work; and (r_x, r_y) = (0, v_0 − 0.5 r_h) is the top-left point of R, where v_0 is the vertical position of the vanishing line in the image.
Within R, we search for the vanishing box b_n over a grid of positions with a spacing step d = 5 in the horizontal and vertical directions. The candidate box at the local coordinate (i, j) relative to the top-left point of R is b_n^{ij} = {x_n^{ij}, y_n^{ij}, w_n^{ij}, h_n^{ij}, s_n^{ij}}. In the experiment, we set the width and height as w_n^{ij} = W/4 and h_n^{ij} = 30, where W is the width of the image.
The position (x_n^{ij}, y_n^{ij}) in the image is:
x_n^{ij} = r_x + i·d,  y_n^{ij} = r_y + j·d    (5)
The score s_n^{ij} is defined as:
s_n^{ij} = n^{ij} / n    (6)
where n^{ij} is the number of crossing points inside b_n^{ij} and n is the total number of crossing points inside R.
Among all of the candidate boxes, the one with the highest score is identified as the vanishing box b_n.
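The search and the subsequent filtering rule can be sketched as follows (illustrative code under the definitions above; we assume the crossing points are given as an N x 2 array and that each segment's crossing points with the other segments are available, which is not the authors' actual implementation):
```python
import numpy as np

def search_vanishing_box(points, v0, img_w, r_h=60, box_h=30, step=5):
    """Scan candidate boxes inside the restrictive region R (Equation (4)) and
    return the box with the highest score (Equation (6))."""
    box_w = img_w / 4.0
    r_x, r_y = 0.0, v0 - 0.5 * r_h
    in_R = points[(points[:, 1] >= r_y) & (points[:, 1] <= r_y + r_h)]
    n_total = max(len(in_R), 1)
    best_box, best_score = None, -1.0
    for y in np.arange(r_y, r_y + r_h - box_h + 1e-6, step):
        for x in np.arange(r_x, r_x + img_w - box_w + 1e-6, step):
            inside = ((in_R[:, 0] >= x) & (in_R[:, 0] <= x + box_w) &
                      (in_R[:, 1] >= y) & (in_R[:, 1] <= y + box_h))
            score = inside.sum() / n_total
            if score > best_score:
                best_box, best_score = (x, y, box_w, box_h), score
    return best_box, best_score

def crossing_point_filter(segments, crossings_per_segment, box):
    """CPF rule: keep a segment only if at least one of its crossing points
    with the other segments lies inside the vanishing box."""
    x, y, w, h = box
    return [seg for seg, pts in zip(segments, crossings_per_segment)
            if any(x <= px <= x + w and y <= py <= y + h for px, py in pts)]
```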

4.2. Structure Triangle Filter

CPF cannot filter out the noisy line segments that are parallel to the lanes. For example, in Figure 5a, the noisy line segments on the arrow traffic signs still remain after applying CPF. We present a structure triangle filter (STF) to further remove those noisy line segments that are parallel with the lane boundaries.

4.2.1. Definition

We assume that in the real 3D world, the left and right neighboring lanes have the same width as the ego lane. Based on this assumption, the ego lane and the neighboring lanes form a triangular structure with the bottom line of the image, as shown in Figure 6a. The intersection points of the lanes with the image bottom line, denoted B, C, D and E in Figure 6a, satisfy BD = CE = BC.
The line segments in a small neighborhood around a lane are likely to contribute to estimating the lane. We call this neighborhood the tolerance region, as the yellow regions in Figure 6b. The tolerance region can be approximately defined with a range in the image’s bottom line. If a line segment is in a lane’s tolerance region, the segment’s intersection point with the image bottom will be in a small neighborhood of the lane’s intersection point with the image bottom; otherwise, the intersection point will be outside this neighborhood. The similar equal-width lane assumption and constraint were also utilized in previous work [39].
As shown in Figure 6b, the green segments are in the tolerance region, while the red segments are outside; B_1B_2 and C_1C_2 are the small neighborhoods. In the experiment, we empirically set BB_1 = 2 BB_2 and BB_1 = BC/8.

4.2.2. Estimation

To filter out noisy line segments with STF, we should estimate the points B, C, D and E in each video frame. To estimate B, we identify the line segments from CPF with negative slopes (in the image coordinate system). Among all of the intersection points of these segments with the image's bottom line, the point with the maximum horizontal coordinate is approximately taken as B. C is estimated in a similar way, but using the line segments with positive slopes and selecting the point with the minimum horizontal coordinate. With B and C, D and E are computed with the constraint BD = CE = BC.
It should be noted that the estimated B, C, D and E are not the intersection points of real lane boundaries with the image’s bottom line. After filtering out noisy line segments with STF, the remaining line segments are used to estimate real lane boundaries, which will be detailed in Section 5.
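A rough sketch of this estimation and of the tolerance-region test is given below. It is our illustrative code: segments are (x1, y1, x2, y2) tuples that have already passed the CPF, y grows downward, img_h is the row of the image bottom line, and a symmetric tolerance of BC/8 is used for simplicity, whereas the paper uses the asymmetric neighborhood BB_1 = 2 BB_2:
```python
def bottom_intersection(seg, img_h):
    """Column where the line through seg meets the image bottom line, or None."""
    x1, y1, x2, y2 = seg
    if y1 == y2:
        return None                       # horizontal line never reaches the bottom
    t = (img_h - y1) / (y2 - y1)
    return x1 + t * (x2 - x1)

def estimate_BC(segments, img_h):
    """Estimate points B and C as described in Section 4.2.2."""
    neg, pos = [], []
    for x1, y1, x2, y2 in segments:
        if x1 == x2:
            continue
        slope = (y2 - y1) / (x2 - x1)
        xb = bottom_intersection((x1, y1, x2, y2), img_h)
        if xb is None:
            continue
        (neg if slope < 0 else pos).append(xb)
    B = max(neg) if neg else None         # rightmost intersection of negative-slope segments
    C = min(pos) if pos else None         # leftmost intersection of positive-slope segments
    return B, C

def structure_triangle_filter(segments, B, C, img_h, ratio=1.0 / 8):
    """Keep segments whose bottom intersection falls near B, C, D = B - BC or
    E = C + BC (the equal-width lane assumption)."""
    BC = C - B
    tol = ratio * BC                      # neighborhood half-width, BC/8 as above
    anchors = [B - BC, B, C, C + BC]      # D, B, C, E
    kept = []
    for seg in segments:
        xb = bottom_intersection(seg, img_h)
        if xb is not None and any(abs(xb - a) <= tol for a in anchors):
            kept.append(seg)
    return kept
```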

4.2.3. Temporal Knowledge Transition

The STF filtering is based on reasonable estimation of B and C. Incorrect estimation may lead to misleading results for the subsequent processing. Therefore, the incorrect estimation should be identified. If this occurs, the STF from the previous frames will be applied, which reflects the transition of temporal knowledge about lanes.
In the current frame, let L_BC be the length of the estimated BC and L_a be a prior constant. If 0.7 L_a ≤ L_BC ≤ 1.6 L_a, the estimated B and C are taken to be applicable; otherwise, they are inapplicable. L_a is the average of all L_BC values in the previous frames where B and C were identified as applicable. It is computed with Algorithm 1, where L_a is empirically initialized. The two empirical factors 0.7 and 1.6 relax the range of L_BC and make the method more robust to lane drift.
Algorithm 1: Computing L_a
 1: Initialize Q = 0, L_sum = 0, L_a
 2: while (capture frame) do
 3:   if L_BC applicable then
 4:     L_sum += L_BC
 5:   else
 6:     L_sum += L_a
 7:   end if
 8:   Q = Q + 1
 9:   L_a = L_sum / Q
10: end while
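In code form, one iteration of Algorithm 1 together with the applicability check reads as follows (an illustrative sketch; the initialization of L_a and the per-frame loop are placeholders):
```python
def update_prior_length(L_a, L_sum, Q, L_BC):
    """One iteration of Algorithm 1: update the running average L_a.
    L_BC is the current frame's estimated |BC|; it is used only if it lies in
    [0.7 * L_a, 1.6 * L_a], otherwise the previous prior L_a stands in."""
    applicable = 0.7 * L_a <= L_BC <= 1.6 * L_a
    L_sum += L_BC if applicable else L_a
    Q += 1
    return L_sum / Q, L_sum, Q
```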

5. Fitting

The line segments output from CPF and STF are further used for fitting lane boundaries. Since straight and curved lanes both occur in traffic scenes, we should adopt different fitting models. We present a road state machine to determine if a lane is straight or curved. Then, the corresponding line or curve model is selected to fit lane boundaries.

5.1. Road State Machine

The state machine includes three states: turn-left road, turn-right road and straight road, as shown in Figure 7. We assume that the state cannot transfer directly between 'turn-left' and 'turn-right' in two consecutive frames. The road state is jointly decided by two measures. Only if the two measures indicate the same state is the current road assigned that state; when they indicate different states, the current road keeps the state of the last frame.
As discussed in Section 4.2, line segments in tolerance regions contribute to estimating lane boundaries. The first measure is the difference between the inclination angles of the line segments respectively in the left boundary’s tolerance region and the right boundary’s tolerance region. It is defined as:
θ = arctan( (1/n_1) Σ_{k=1}^{n_1} |(j_1^k − j_2^k)/(i_1^k − i_2^k)| ) − arctan( (1/n_2) Σ_{k=1}^{n_2} |(j_3^k − j_4^k)/(i_3^k − i_4^k)| )    (7)
where (i_1^k, j_1^k) and (i_2^k, j_2^k) are the endpoints of the k-th line segment in the left boundary's tolerance region, (i_3^k, j_3^k) and (i_4^k, j_4^k) are the endpoints of the k-th line segment in the right boundary's tolerance region, and n_1 and n_2 are the numbers of segments in the two regions, respectively.
For θ, we introduce a positive threshold Δ. If θ > Δ, which means that the inclination angle of the left boundary is larger than that of the right boundary, the road state will be 'turn-left'. If −Δ ≤ θ ≤ Δ, which means that the inclination angles of the two boundaries are close, the road state will be 'straight'. If θ < −Δ, which means that the inclination angle of the left boundary is smaller than that of the right boundary, the road state will be 'turn-right'. In our experiment, Δ is empirically set to π/9.
The second measure is the horizontal coordinate u of the lane's vanishing point. Let W be the image width and Δ_1 be a positive threshold. If u < 0.5W − Δ_1, which means that the vanishing point is on the left side of the image, the road state will be 'turn-left'. Similarly, 0.5W − Δ_1 ≤ u ≤ 0.5W + Δ_1 and u > 0.5W + Δ_1 indicate the 'straight' and 'turn-right' states, respectively. In our experiment, Δ_1 is empirically set to W/16.
With the indications of θ and u, we can decide if a road is straight or curved. Figure 7 shows the road state decision table.
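The decision rule can be sketched compactly (illustrative code only; theta comes from Equation (7), u is the vanishing-point column, and the thresholds are the empirical values given above):
```python
import math

def threshold_state(value, low, high):
    """Map a scalar measure to 'left', 'straight' or 'right' by thresholding."""
    if value < low:
        return 'left'
    if value > high:
        return 'right'
    return 'straight'

def road_state(theta, u, img_w, prev_state, delta=math.pi / 9):
    """Joint decision from the angle difference theta and the vanishing-point
    column u; keep the previous state if the two measures disagree."""
    delta1 = img_w / 16.0
    s1 = threshold_state(-theta, -delta, delta)   # theta > delta  ->  'left'
    s2 = threshold_state(u, 0.5 * img_w - delta1, 0.5 * img_w + delta1)
    s = s1 if s1 == s2 else prev_state
    # forbid a direct left <-> right transition between consecutive frames
    if {s, prev_state} == {'left', 'right'}:
        return prev_state
    return s
```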

5.2. Lane Fitting

5.2.1. Straight Lane

For a straight boundary, we use a line to fit the line segments in a tolerance region (a yellow region in Figure 6b). The line slope a and a point (x_0, y_0) on the line are defined as:
a = (1/N) Σ_{i=1}^{N} (y_1^i − y_2^i)/(x_1^i − x_2^i),  x_0 = (1/N) Σ_{i=1}^{N} (x_1^i + x_2^i)/2,  y_0 = (1/N) Σ_{i=1}^{N} (y_1^i + y_2^i)/2    (8)
where (x_1^i, y_1^i) and (x_2^i, y_2^i) are the endpoints of the i-th line segment and N is the number of line segments.
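For illustration, Equation (8) is a direct average over the tolerance-region segments (a sketch assuming the segments are given as endpoint pairs and are not vertical):
```python
def fit_straight_boundary(segments):
    """Average slope and midpoint of the tolerance-region segments (Equation (8)).
    Returns (a, x0, y0) so that the boundary is y - y0 = a * (x - x0)."""
    N = len(segments)
    a  = sum((y1 - y2) / (x1 - x2) for (x1, y1), (x2, y2) in segments) / N
    x0 = sum((x1 + x2) / 2.0 for (x1, y1), (x2, y2) in segments) / N
    y0 = sum((y1 + y2) / 2.0 for (x1, y1), (x2, y2) in segments) / N
    return a, x0, y0
```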

5.2.2. Curved Lane

We use the Catmull–Rom model for curved lanes [2,53]. Since curve fitting is sensitive to noise, we generate, from the line segments in each frame, all candidate Catmull–Rom curves for the left and right boundaries of a lane. The pair of left and right boundary curves that is most similar to the curve pair of the last frame is identified as the final result.
The similarity of the lane boundary pairs in two consecutive frames is defined as:
f = (1/m) Σ_{i=1}^{m} (1 − w_i · |L_{n+1}^i − L_n^i| / L_{n+1}^i)    (9)
We use Figure 8 to illustrate Equation (9). The green curves in Figure 8a are the lane boundary results in the n-th frame, and the curves in Figure 8b are the results in the (n + 1)-th frame. The red line is the vanishing line, and the yellow dotted lines represent the lane width in different rows. m is the number of rows between the vanishing line and the bottom line. L_n^i is the width of the lane in the i-th row of the n-th frame, and L_{n+1}^i is the width in the same row of the (n + 1)-th frame. w_i is a penalty factor defined as w_i = L_n^i / L_n^m, where L_n^m is the width of the lane on the bottom line of the image in the n-th frame.
We define a Catmull–Rom curve with five control points p_i = (x_i, y_i), i = 1, 2, 3, 4, 5. The vertical coordinate of p_1 is set to that of the vanishing line, and its horizontal coordinate is estimated by the curve vanishing point estimation method in [34]. p_2, p_3 and p_4 are located in three distinct horizontal regions of unequal heights, as shown in Figure 9b. Within their regions, p_2, p_3 and p_4 are set to the endpoints of the line segments; by assigning all endpoint combinations to p_2, p_3 and p_4, all of the candidate curves are generated. p_5 is set as the crossing point of the image bottom line and the line segment containing p_4. For curve fitting, two assistant points, p_0 = (x_0, y_0) and p_6 = (x_6, y_6), are empirically defined as x_0 = 2x_1 − x_2, y_0 = y_2, x_6 = x_5, y_6 = 2y_5 − y_4.
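The temporal similarity check of Equation (9) can be sketched as follows (our illustrative code; the width profiles list the lane width in every row from the vanishing line down to the bottom line, and widths_of is a placeholder for extracting such a profile from a candidate curve pair):
```python
def boundary_pair_similarity(widths_prev, widths_curr):
    """Similarity f between the lane-width profiles of two consecutive frames
    (Equation (9)); a larger value means a more similar boundary pair."""
    m = len(widths_curr)
    bottom_width_prev = widths_prev[-1]        # L_n^m, lane width on the bottom line
    f = 0.0
    for L_prev, L_curr in zip(widths_prev, widths_curr):
        w = L_prev / bottom_width_prev         # penalty factor w_i
        f += 1.0 - w * abs(L_curr - L_prev) / L_curr
    return f / m

# The candidate pair maximizing f against the previous frame would then be kept, e.g.:
# best = max(candidates, key=lambda c: boundary_pair_similarity(prev_widths, widths_of(c)))
```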

6. Experiments

6.1. Dataset and Setting

We collected a large-scale realistic traffic scene dataset: the XJTU-IAIR traffic scene dataset. It includes 103,176 video frames. The dataset covers: (1) various kinds of lanes, such as dashed lanes, curved lanes, worn-out lanes and occluded lanes; (2) diverse road structures, such as wide roads, narrow roads, merging roads, tunnel roads, on-ramp roads, off-ramp roads, irregularly shaped roads and roads without lane boundaries; (3) various disturbances, such as shadows, road paint, vehicles and road water; (4) complex illumination conditions, such as day, night, dazzling, dark and changing illumination; and (5) different ego-vehicle behaviors, such as changing lanes and turning at crossroads. In addition to lane detection, this dataset can also be used for traffic scene understanding, vehicle tracking, object detection, etc. For lane detection, we tested our method on part of the videos in this dataset.
The dataset is organized as follows. We first categorize the videos into highway and urban videos. In each part, the videos are classified into general ones and particular ones. The general videos are longer and include different kinds of traffic scenes, while the particular videos are shorter and focus on certain traffic conditions, such as illumination, curve, night, changing lanes, etc. Table 1 summarizes the statistics of our dataset.
We also tested our algorithm on Aly's dataset [33], a well-known and well-organized dataset for evaluating lane detection. It includes four video sequences captured in real traffic scenes with various shadows and street surroundings.
Experiments 1, 2 and 3 were performed on a PC equipped with a quad-core 2.8 GHz Intel i7 CPU, at a frame resolution of 640 × 480. Experiment 4 was conducted on different platforms with lower computing power.

6.2. Evaluation Criterion

We use the criteria of precision and recall to evaluate the performance of the methods, which are defined as:
Precision = TP / (TP + FP),  Recall = TP / (TP + FN)    (10)
where TP is the total number of true positives, FP is the total number of false positives and FN is the total number of false negatives.
A detected boundary is taken as a true positive if the horizontal distances between the detected boundary and the ground truth at several different positions are all less than predefined thresholds. For straight lanes, this rule is checked at two positions: the bottom line and the line that divides the distance between the bottom line and the vanishing line in a ratio of 5:1, as shown in Figure 10a. For curved lanes, an additional check is made at the vanishing line, as shown in Figure 10b. In the experiments, we manually counted the numbers of true positives, false positives and false negatives with the same standard. For a fair comparison with other methods, we compute the performance using the results on the ego lane. In the following experiments, we use GT to denote the total number of ground-truth boundaries and FPS to denote frames per second.
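For completeness, Equation (10) reduces to the following trivial computation on the manually counted outcomes:
```python
def precision_recall(tp, fp, fn):
    """Precision and recall from the counted detection outcomes (Equation (10))."""
    return tp / (tp + fp), tp / (tp + fn)
```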

6.3. Experiment 1: XJTU-IAIR Traffic Scene Dataset

We evaluate our method on part of the videos in our XJTU-IAIR traffic scene dataset. These videos are unmodified and contain various scene conditions, such as complex illumination, night, curved lanes, lane changes, etc.
Table 2 presents the quantitative results of our method, Figure 11 shows some examples of lane boundary detection in various scenes, and Figure 12 shows some examples of lane boundary detection when the road structure changes. Despite the various disturbances on highways, our method successfully detects almost all lane boundaries, with a precision of 99.9%; only 16 out of 14,182 detections are false alarms, which occurred when a van was overtaking at a near distance. Even in challenging scenes with excessively abundant or weak visual cues, as illustrated in Figure 11, our method still exhibits excellent performance, which benefits from STF. Our algorithm can also successfully detect lane boundaries whose structures change, as shown in Figure 12.

6.4. Experiment 2: Comparison with Other Methods

We compare our method with three other lane detection methods on data from the XJTU-IAIR traffic scene dataset, namely He's method [14], Bertozzi's method [41] and Seo's method [9]. He's method [14] uses the Canny detector to extract edge features and the Hough transform to select lane boundaries. Bertozzi's method [41] searches for intensity bumps to detect lanes. Seo's method [9] extracts lane features using a spatial filter and refines the features by utilizing the driving direction.
Table 3 presents the performance of each method in different traffic conditions, and Figure 13 shows some examples. Table 3 and Figure 13 show that our method is more robust than the other three methods in these challenging traffic conditions, which proves the strength of our knowledge-based filtering framework. For example, in the heavy urban video, which exhibits complex lane structures and visual cues, the performance of our method is much better than the other three methods. This is because our method combines the prior spatial-temporal knowledge to detect the lane boundaries rather than only utilizing the appearance features.

6.5. Experiment 3: Aly’s Dataset

We also test our method on Aly’s dataset [33]. The comparisons between our method and other methods are summarized in Table 4. For a fair comparison, we adopt the same evaluation criteria as in Aly’s method [33].
Aly's dataset includes four videos. Our algorithm demonstrates good performance on Videos 1, 3 and 4. On Video 2, a large number of false positives occur because of crossroads with many cracks. The line segments at the cracks share the same direction and location as the real lane boundaries and, therefore, can hardly be filtered out by CPF and STF.

6.6. Experiment 4: Different Platforms

To transplant our algorithm to other computing platforms, we also test our method on a Raspberry Pi 3 and an ARK-10. The Raspberry Pi 3 is a single-board computer with a 1.2 GHz CPU, and the ARK-10 is an embedded industrial control computer with a 2.0 GHz CPU.
In the test, we adopt two strategies to accelerate the algorithm. First, we resize the original image to 320 × 240. Second, on each resized frame, we restrict the search to the region between the vanishing line and the bottom line, which prevents the algorithm from searching areas that contain no lane boundary. Table 5 shows the performance and speed of our algorithm on these two platforms. Our algorithm achieves about 18 FPS on the Raspberry Pi 3 and about 29 FPS on the ARK-10, which can basically meet the requirements of some real-time applications.
On the other hand, the low resolution of the video frames has potential limitations. Some detailed information is lost in low-resolution frames, which may lead to more false negatives in situations with excessively weak visual cues.

7. Discussion and Conclusions

In this paper, we propose a spatial-temporal knowledge filtering method to detect lane boundaries in videos. The model unifies feature-based detection and knowledge-guided filtering into one framework. Two filters are proposed to remove the noisy line segments from the large set of original line segment features. These two filters characterize the spatial structure constraint and the temporal location constraint, which represent the prior spatial-temporal knowledge about lanes. The proposed method was tested on a large-scale traffic dataset, and the experimental results demonstrate the strength of the method. The proposed algorithm has been successfully applied to and tested on our autonomous experimental vehicle.
Our method may produce false results in some special traffic conditions, such as crossroads, zebra crossings and wet roads. Figure 14 shows some examples of false results. Our future work will focus on these issues and other intelligent vehicle-related topics, such as pedestrian action prediction, complex traffic scene understanding and 3D traffic scene reconstruction.

Acknowledgments

This work was supported by the grants: 973 Program (No. 2015CB351703), National Natural Science Foundation of China (NSFC) (No. 61503297) and the Program of State Key Laboratory of Mathematical Engineering and Advanced Computing (No. 2015A05).

Author Contributions

Zhixiong Nan and Ping Wei designed the method and wrote the paper. Zhixiong Nan performed the experiments and analyzed the data. Zhixiong Nan and Linhai Xu collected the dataset. Linhai Xu and Nanning Zheng designed the experimental system and provided valuable insights on the experiments and the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  2. Wang, Y.; Shen, D.; Teoh, E.K. Lane detection using Catmull-Rom spline. In Proceedings of the IEEE Intelligent Vehicles Symposium, Stuttgart, Germany, 15 May 1998; pp. 51–57.
  3. Yim, Y.U.; Oh, S.Y. Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2003, 4, 219–225. [Google Scholar]
  4. Nuthong, C.; Charoenpong, T. Lane detection using smoothing spline. In Proceedings of the International Congress on Image and Signal Processing, Beijing, China, 24–28 October 2010; pp. 989–993.
  5. Wang, Y.; Dahnoun, N.; Achim, A. A novel system for robust lane detection and tracking. Signal Process. 2012, 92, 319–334. [Google Scholar] [CrossRef]
  6. Wang, Y.; Teoh, E.K.; Shen, D. Lane detection and tracking using B-snake. Image Vis. Comput. 2004, 22, 269–280. [Google Scholar] [CrossRef]
  7. Assidiq, A.A.; Khalifa, O.O.; Islam, R.; Khan, S. Real time lane detection for autonomous vehicles. In Proceedings of the IEEE International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 13–15 May 2008; pp. 82–88.
  8. Li, Y.; Iqbal, A.; Gans, N.R. Multiple lane boundary detection using a combination of low-level image features. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 1682–1687.
  9. Seo, Y.W.; Rajkumar, R. Utilizing instantaneous driving direction for enhancing lane-marking detection. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 170–175.
  10. Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A novel lane detection based on geometrical model and gabor filter. In Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 59–64.
  11. Yoo, H.; Yang, U.; Sohn, K. Gradient-enhancing conversion for illumination-robust lane detection. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1083–1094. [Google Scholar] [CrossRef]
  12. Kluge, K.; Johnson, G. Statistical characterization of the visual characteristics of painted lane markings. In Proceedings of the IEEE Intelligent Vehicles Symposium, Detroit, MI, USA, 25–26 September 1995; pp. 488–493.
  13. He, Y.; Wang, H.; Zhang, B. Color-based road detection in urban traffic scenes. IEEE Trans. Intell. Transp. Syst. 2004, 5, 309–318. [Google Scholar] [CrossRef]
  14. He, J.; Rong, H.; Gong, J.; Huang, W. A lane detection method for lane departure warning system. In Proceedings of the International Conference on Optoelectronics and Image Processing, Hainan, China, 11–12 November 2010; pp. 28–31.
  15. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef] [PubMed]
  16. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 2006, 7, 20–37. [Google Scholar] [CrossRef]
  17. McCall, J.C.; Trivedi, M.M. An integrated, robust approach to lane marking detection and lane tracking. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 533–537.
  18. Haloi, M.; Jayagopi, D.B. A robust lane detection and departure warning system. In Proceedings of the IEEE Intelligent Vehicles Symposium, Seoul, Korea, 28 June–1 July 2015.
  19. Wang, J.M.; Chung, Y.C.; Chang, S.L.; Chen, S.W. Lane marks detection using steerable filters. In Proceedings of the IPPR Conference on Computer Vision, Graphics and Image Processing, Jinmen, China, 17–19 August 2003; pp. 858–865.
  20. Sivaraman, S.; Trivedi, M.M. Integrated lane and vehicle detection, localization, and tracking: A synergistic approach. IEEE Trans. Intell. Transp. Syst. 2013, 14, 906–917. [Google Scholar] [CrossRef]
  21. McCall, J.C.; Wipf, D.; Trivedi, M.M.; Rao, B. Lane change intent analysis using robust operators and sparse bayesian learning. IEEE Trans. Intell. Transp. Syst. 2007, 8, 431–440. [Google Scholar] [CrossRef]
  22. Sun, T.Y.; Tsai, S.J.; Chan, V. HSI color model based lane-marking detection. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Toronto, ON, Canada, 17–20 September 2006; pp. 1168–1172.
  23. Cheng, H.Y.; Jeng, B.S.; Tseng, P.T.; Fan, K.C. Lane detection with moving vehicles in the traffic scenes. IEEE Trans. Intell. Transp. Syst. 2006, 7, 571–582. [Google Scholar] [CrossRef]
  24. Kim, Z. Robust lane detection and tracking in challenging scenarios. IEEE Trans. Intell. Transp. Syst. 2008, 9, 16–26. [Google Scholar] [CrossRef]
  25. Han, T.; Kim, Y.; Kim, K. Lane detection & localization for UGV in urban environment. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 590–596.
  26. Chen, T.; Lu, S. Context-aware lane marking detection on urban roads. In Proceedings of the IEEE International Conference on Image, Quebec City, QC, Canada, 27–30 September 2015; pp. 2557–2561.
  27. Smart, M.; Waslander, S.L. Stereo augmented detection of lane marking boundaries. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Canary Islands, Spain, 15–18 September 2015; pp. 2491–2496.
  28. Gopalan, R.; Hong, T.; Shneier, M.; Chellappa, R. A learning approach towards detection and tracking of lane markings. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1088–1098. [Google Scholar] [CrossRef]
  29. Beck, J.; Stiller, C. Non-parametric lane estimation in urban environments. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 43–48.
  30. Jung, S.; Youn, J.; Sull, S. Efficient lane detection based on spatiotemporal images. IEEE Trans. Intell. Transp. Syst. 2015, 17, 1–7. [Google Scholar] [CrossRef]
  31. Borkar, A.; Hayes, M.; Smith, M.T. Robust lane detection and tracking with ransac and Kalman filter. In Proceedings of the IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 3261–3264.
  32. Borkar, A.; Hayes, M.; Smith, M.T. A novel lane detection system with efficient ground truth generation. IEEE Trans. Intell. Transp. Syst. 2012, 13, 365–374. [Google Scholar] [CrossRef]
  33. Aly, M. Real time detection of lane markers in urban streets. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12.
  34. Wang, Y.; Shen, D.; Teoh, E.K. Lane detection using spline model. Pattern Recognit. Lett. 2000, 21, 677–689. [Google Scholar] [CrossRef]
  35. Deng, J.; Han, Y. A real-time system of lane detection and tracking based on optimized RANSAC B-spline fitting. In Proceedings of the ACM International Conference on Research in Adaptive and Convergent Systems, Montreal, QC, Canada, 1–4 October 2013; pp. 157–164.
  36. de Paula, M.B.; Jung, C.R. Automatic detection and classification of road Lane markings using onboard vehicular cameras. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1–10. [Google Scholar] [CrossRef]
  37. Satzoda, R.K.; Suchitra, S.; Srikanthan, T. Robust extraction of lane markings using gradient angle histograms and directional signed edges. In Proceedings of the IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 754–759.
  38. Tan, H.; Zhou, Y.; Zhu, Y.; Yao, D.; Li, K. A novel curve lane detection based on improved river flow and RANSA. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 133–138.
  39. Kang, S.N.; Lee, S.; Hur, J.; Seo, S.W. Multi-lane detection based on accurate geometric lane estimation in highway scenarios. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 221–226.
  40. Kuhnl, T.; Fritsch, J. Visio-spatial road boundary detection for unmarked urban and rural roads. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 1251–1256.
  41. Bertozzi, M.; Broggi, A. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Trans. Image Process. 1998, 7, 62–81. [Google Scholar] [CrossRef] [PubMed]
  42. Bertozzi, M.; Broggi, A. Real-time lane and obstacle detection on the GOLD system. In Proceedings of the IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 19–20 September 1996; pp. 213–218.
  43. Mammeri, A.; Boukerche, A.; Lu, G. Lane detection and tracking system based on the MSER algorithm, hough transform and kalman filter. In Proceedings of the ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Montreal, QC, Canada, 21–26 September 2014; pp. 259–266.
  44. Kang, D.J.; Jung, M.H. Road lane segmentation using dynamic programming for active safety vehicles. Pattern Recognit. Lett. 2003, 24, 3177–3185. [Google Scholar] [CrossRef]
  45. Kammel, S.; Pitzer, B. Lidar-based lane marker detection and mapping. In Proceedings of the IEEE Intelligent Vehicles Symposium Proceeding, Eindhoven, The Netherlands, 4–6 June 2008; pp. 1137–1142.
  46. Zhao, K.; Meuter, M.; Nunn, C.; Müller, D.; Müller-Schneiders, S.; Pauli, J. A novel multi-lane detection and tracking system. In Proceedings of the IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 1084–1089.
  47. Wang, J.; Wu, Y.; Liang, Z.; Xi, Y. Lane detection based on random hough transform on region of interesting. In Proceedings of the IEEE International Conference on Information and Automation, Colombo, Sri Lanka, 17–19 December 2010; pp. 1735–1740.
  48. Li, Q.; Zheng, N.; Cheng, H. Springrobot: A prototype autonomous vehicle and its algorithms for lane detection. IEEE Trans. Intell. Transp. Syst. 2004, 5, 300–308. [Google Scholar] [CrossRef]
  49. Park, J.W.; Lee, J.W.; Jhang, K.Y. A lane-curve detection based on an LCF. Pattern Recognit. Lett. 2003, 24, 2301–2313. [Google Scholar] [CrossRef]
  50. Guiducci, A. Parametric model of the perspective projection of a road with applications to lane keeping and 3D road reconstruction. Comput. Vis. Image Underst. 1999, 73, 414–427. [Google Scholar] [CrossRef]
  51. Kluge, K.; Lakshmanan, S. A deformable-template approach to lane detection. In Proceedings of the IEEE Intelligent Vehicles Symposium, Detroit, MI, USA, 25–26 September 1995; pp. 54–59.
  52. Chen, Q.; Wang, H. A real-time lane detection algorithm based on a hyperbola-pair model. In Proceedings of the IEEE Intelligent Vehicles Symposium, Meguro-ku, Japan, 13–15 June 2006; pp. 510–515.
  53. Catmull, E.; Rom, R. A class of local interpolating splines. Comput. Aided Geom. Des. 1974, 74, 317–326. [Google Scholar]
  54. Choi, H.C.; Park, J.M.; Choi, W.S.; Oh, S.Y. Vision-based fusion of robust lane tracking and forward vehicle detection in a real driving environment. Int. J. Automot. Technol. 2012, 13, 653–669. [Google Scholar] [CrossRef]
  55. Berriel, R.F.; Aguiar, E.D.; Oliveirasantos, T. A particle filter-based lane marker tracking approach using a cubic spline model. In Proceedings of the SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015; pp. 149–156.
Figure 1. Overview of our method.
Figure 2. Feature extraction. (a) Original image; (b) Line Segment Detector (LSD) result; (c) Horizontal bands in the image; the yellow points represent the crossing points of line segments; (d) Line segments above the vanishing line are eliminated.
Figure 3. Examples of vanishing line detection.
Figure 4. Illustration of the Crossing Point Filter (CPF). (a) Original feature map; (b) The vanishing box (red box) and the restrictive region (green box); the yellow points are the crossing points of all line segments; (c) The CPF result.
Figure 5. Filtering results. (a) Result of CPF; (b) Result of Structure Triangle Filter (STF).
Figure 6. Illustration of the structure triangular filter. (a) The structure triangle; (b) Line segments filtering with the structure triangle; the yellow regions indicate the lane tolerance regions.
Figure 7. Road state machine. L, R, S and K denote ’left’, ’right’, ’straight’ and ’keep’, respectively.
Figure 8. Shape similarity of lane boundaries in two consecutive frames. (a) Lane boundary detection in the n-th frame; (b) Lane boundary detection in the (n + 1)-th frame.
Figure 9. Curve lane boundary fitting. (a) Lane boundaries and control points in the n-th frame; (b) Computing the control points in the (n + 1)-th frame; (c) Lane boundaries and control points in the (n + 1)-th frame.
Figure 10. Evaluation criterion. (a) Straight lanes; (b) Curved lanes.
Figure 11. Examples of lane detection in various scenes. (a) Roads with shadows and paint; (b) Vehicle disturbance; (c) Worn lane boundaries; (d) Complex illumination; (e) Night.
Figure 12. Examples of lane detection in road structure changes. (a) In the process of changing lanes; (b) Lanes suddenly become narrow; (c) Curved roads.
Figure 13. Examples of our method and other methods. (a) Original images; (b) He’s method [14]; (c) Bertozzi’s method [41]; (d) Seo’s method [9]; (e) Our method.
Figure 14. Examples of false positives. (a–c) Zebra crossings; (d,e) Crossroads; (f) Wet roads.
Table 1. XJTU-IAIR traffic scene dataset.
Traffic Scenes | Highway | Urban | Illumination | Curve | Night | Changing Lanes
Frame Number | 7091 | 96,085 | 5245 | 418 | 7640 | 548
Table 2. The performance of our method in different scenes of the XJTU-IAIR traffic scene dataset.
Traffic Scenes | Frames | GT | TP | FP | FN | Precision (%) | Recall (%) | FPS
Highway | 7091 | 14,182 | 14,166 | 16 | 0 | 99.9 | 100.0 | 28
Moderate Urban | 5579 | 10,157 | 9786 | 230 | 163 | 97.7 | 98.4 | 21
Heavy Urban | 5880 | 10,616 | 9892 | 478 | 158 | 95.4 | 98.4 | 16
Illumination | 1040 | 2080 | 2080 | 0 | 0 | 100.0 | 100.0 | 23
Night | 804 | 1566 | 1566 | 10 | 0 | 99.4 | 100.0 | 35
Curve | 418 | 836 | 822 | 4 | 10 | 99.5 | 98.8 | 25
Changing Lanes | 548 | 1096 | 1096 | 0 | 0 | 100.0 | 100.0 | 28
Table 3. Comparison with other methods on different scenes of the XJTU-IAIR traffic scene dataset.
Traffic Scenes | Methods | Frames | GT | TP | FP | FN | Precision (%) | Recall (%) | FPS
Highway | Ours | 999 | 1998 | 1998 | 0 | 0 | 100.0 | 100.0 | 30
Highway | He's [14] | 999 | 1998 | 1805 | 183 | 172 | 90.8 | 91.3 | 14
Highway | Bertozzi's [41] | 999 | 1998 | 1886 | 33 | 79 | 98.3 | 96.0 | 35
Highway | Seo's [9] | 999 | 1998 | 1711 | 211 | 78 | 89.0 | 95.6 | 21
Moderate Urban | Ours | 999 | 1998 | 1966 | 30 | 0 | 98.5 | 100.0 | 18
Moderate Urban | He's [14] | 999 | 1998 | 1946 | 419 | 31 | 82.3 | 98.4 | 11
Moderate Urban | Bertozzi's [41] | 999 | 1998 | 1649 | 236 | 152 | 87.5 | 91.6 | 34
Moderate Urban | Seo's [9] | 999 | 1998 | 1363 | 124 | 511 | 91.7 | 72.7 | 14
Heavy Urban | Ours | 899 | 1692 | 1621 | 89 | 26 | 94.8 | 98.4 | 19
Heavy Urban | He's [14] | 899 | 1692 | 1609 | 608 | 69 | 72.6 | 95.9 | 11
Heavy Urban | Bertozzi's [41] | 899 | 1692 | 1289 | 386 | 212 | 77.0 | 85.9 | 31
Heavy Urban | Seo's [9] | 899 | 1692 | 970 | 295 | 441 | 76.7 | 68.7 | 15
Illumination | Ours | 1040 | 2080 | 2080 | 0 | 0 | 100.0 | 100.0 | 23
Illumination | He's [14] | 1040 | 2080 | 1725 | 697 | 305 | 71.2 | 85.0 | 23
Illumination | Bertozzi's [41] | 1040 | 2080 | 1882 | 185 | 32 | 91.0 | 98.3 | 34
Illumination | Seo's [9] | 1040 | 2080 | 1506 | 448 | 128 | 77.1 | 92.2 | 17
Night | Ours | 804 | 1566 | 1566 | 10 | 0 | 99.4 | 100.0 | 35
Night | He's [14] | 804 | 1566 | 1491 | 326 | 75 | 82.1 | 95.2 | 12
Night | Bertozzi's [41] | 804 | 1566 | 942 | 379 | 603 | 71.3 | 61.0 | 36
Night | Seo's [9] | 804 | 1566 | 1383 | 104 | 87 | 93.0 | 94.1 | 24
Changing Lanes | Ours | 548 | 1096 | 1096 | 0 | 0 | 100.0 | 100.0 | 28
Changing Lanes | He's [14] | 548 | 1096 | 876 | 368 | 81 | 70.4 | 91.5 | 13
Changing Lanes | Bertozzi's [41] | 548 | 1096 | 1015 | 85 | 5 | 92.3 | 99.5 | 39
Changing Lanes | Seo's [9] | 548 | 1096 | 975 | 99 | 24 | 90.8 | 97.6 | 21
Table 4. Comparison with other methods on Aly's dataset.
Video | Methods | Frames | CR (%) | FPR (%)
1 | Ours | 250 | 99.8 | 1.9
1 | Aly's [33] | 250 | 97.2 | 3.0
1 | He's [14] | 250 | 87.6 | 25.9
1 | Seo's [9] | 250 | 87.6 | 10.8
2 | Ours | 406 | 92.6 | 6.8
2 | Aly's [33] | 406 | 96.2 | 38.4
2 | He's [14] | 406 | 82.4 | 54.8
2 | Seo's [9] | 406 | 89.1 | 9.2
3 | Ours | 336 | 98.2 | 1.3
3 | Aly's [33] | 336 | 96.7 | 4.7
3 | He's [14] | 336 | 74.0 | 27.7
3 | Seo's [9] | 336 | 81.8 | 5.3
4 | Ours | 232 | 97.8 | 2.2
4 | Aly's [33] | 232 | 95.1 | 2.2
4 | He's [14] | 232 | 85.1 | 25.2
4 | Seo's [9] | 232 | 88.8 | 5.2
Table 5. The performance of our method on different computing platforms.
Platforms | Frames | Resolution | Precision (%) | Recall (%) | FPS
Raspberry Pi 3 | 2587 | 320 × 240 | 99.0 | 100.0 | 18
ARK-10 | 2587 | 320 × 240 | 99.0 | 100.0 | 29
