Article

Improving Lane Detection Performance for Autonomous Vehicle Integrating Camera with Dual Light Sensors

1 Litbig, Seongnam-si 13487, Korea
2 Mando, Seongnam-si 13487, Korea
3 Department of Electronic Engineering, College of Convergence Technology, Korea National University of Transportation, Chungju-si 27469, Korea
* Author to whom correspondence should be addressed.
Electronics 2022, 11(9), 1474; https://doi.org/10.3390/electronics11091474
Submission received: 7 April 2022 / Revised: 29 April 2022 / Accepted: 2 May 2022 / Published: 4 May 2022
(This article belongs to the Topic Intelligent Transportation Systems)

Abstract

Automotive companies have studied the development of lane support systems in order to secure a high Euro New Car Assessment Program (NCAP) score. A front camera module is used for safety assistance systems in intelligent vehicles. However, the front camera module has limitations under backlight conditions, when entering or exiting tunnels, and during night driving because of reduced image quality. In this paper, we propose a camera integrated with a dual light sensor to improve lane detection performance under these worst-case conditions. We present a new algorithm that enhances image data quality and improves edge detection and lane tracking using illumination information. We evaluated the system under various conditions on real roads; the tests covered 728 km of driving across various external situations and lane types and measured correct detections and false alarms. The experimental results show that the proposed system is promising in terms of reliability and lane detection performance.

1. Introduction

In 2010, at the eSafety Challenge, the Euro New Car Assessment Program (NCAP) announced Euro NCAP Advanced, which includes a new reward system for new safety technologies. This scheme complements NCAP’s existing star rating and rewards manufacturers that introduce new technologies with safety benefits. Automotive companies have studied the development of advanced driver assistance systems (ADAS) in order to secure a high NCAP score. As a result, safety assistance systems, such as lane departure warning, blind spot monitoring, attention assistance, and autonomous emergency braking, are offered by automotive companies as options on their newest models. In particular, lane support systems have become increasingly widespread as safety assistance systems. Accordingly, since 2014, Euro NCAP has included lane support systems as a standard requirement [1].
Major automobile manufacturers have actively researched lane support systems that employ a front camera module. The front camera module, which incorporates a lane recognition algorithm, executes lane support functions, such as lane departure warning systems (LDWS) and lane keeping assistance systems (LKAS), to help improve driver safety [2,3]. However, the front camera module has limitations under backlight conditions, when entering or exiting tunnels, and during night driving because of reduced image quality. Many studies have been conducted to overcome these limitations: some remove noise from the image by filtering [4,5], while others adapt to changes in lighting [6,7,8].
In this paper, in order to improve the lane detection algorithm for lane support systems in automotive applications, we propose an integrated camera with a dual light sensor that can detect a lane under the worst conditions. In addition, integrating a dual solar sensor in the front camera module can reduce the size of the system and is cost-effective [9].
The rest of this paper is organized as follows. Related studies on vision-based lane detection and on light intensity detection with a dual light sensor are summarized in Section 2. Section 3 presents the proposed method, Section 4 presents the experimental results, and Section 5 concludes the paper.

2. Related Work

2.1. General Methods for Vision-Based Lane Detection

Lane detection can be divided into four parts: (1) feature extraction, (2) line detection, (3) line fitting, and (4) lane tracking [3]. Lane features such as edges are extracted from the input image. Mu et al. and Tu et al. extracted lane features using a Sobel filter [10,11]. Wu et al. and Wang et al. extracted lane features using a Canny edge detector [12,13]. Fang et al., Yim et al., and Jung et al. extracted lane features using line filters [14,15,16], and Bertozzi et al. extracted features using region filters [17]. From the extracted features, line detection selects lane features and removes non-lane features. Aung et al. selected lane features using the Hough transform [18]. Lee selected lane features using an edge distribution function [19]. Jung et al. selected lane features using the Hough transform and an edge distribution function [20]. Taubel et al. and Borkar et al. selected lane features using inverse perspective mapping [21,22]. Line fitting builds a mathematical model from the selected lane features. Zhao et al. built a spline model [23]. Obradovic et al. built a model using fuzzy lines and fuzzy points [24]. Lin et al. built a straight-line model [25]. Wu et al. and Mu et al. built linear parabolic models [10,12]. Lane tracking predicts the lane position in the next frame and sets the region of interest (ROI). Borkar et al., Wu et al., and Obradovic et al. tracked the lane using a Kalman filter [13,24,26]. Kim et al. and Li et al. tracked the lane using a particle filter [27,28].
Various studies have also been conducted on lane detection using deep learning. Kim et al. detected lanes using convolutional neural networks [29]. Zou et al. detected lanes using convolutional neural networks combined with long short-term memory, a type of recurrent neural network [30]. Neven et al. detected lanes using LaneNet [31]. Ghafoorian et al. detected lanes using embedding-loss-driven generative adversarial networks [32].
Lane detection using deep learning achieves good performance but requires a large amount of computation. Therefore, the proposed method does not use deep learning-based lane detection but instead uses a conventional lane detection approach that can be implemented on an embedded platform.
The proposed method extracts lane features using a combination of line filters and region filters, detects lines using inverse perspective mapping, fits a linear parabolic model, and tracks lanes with a Kalman filter.
Edge detection identifies abrupt changes in the pixel values of the input image and, from these, finds the boundaries of the road and the lane markings. Changes in the pixel values can be obtained by calculating the difference between neighboring pixels. Edge detection can be expressed as
$$E(x, y) = \sum_{y=1}^{H} \sum_{x=1}^{W} \left( I(x+1, y) - I(x-1, y) \right) \quad (1)$$
where E denotes the result of edge detection, I denotes the input image, and W and H denote the width and height of the input image, respectively. The edge detection result is shown in Figure 1.
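For illustration, the following minimal sketch (our own, not code from the paper) computes this horizontal-difference edge response; taking the absolute value and zeroing the border columns are our assumptions.

```python
import numpy as np

def horizontal_edge_map(gray: np.ndarray) -> np.ndarray:
    """Horizontal central-difference edge response in the spirit of Equation (1).

    gray: 2-D array (H x W) of grayscale intensities.
    Returns |I(x+1, y) - I(x-1, y)| for every interior pixel; the left and
    right border columns, where the difference is undefined, are set to zero.
    """
    img = gray.astype(np.float32)
    edges = np.zeros_like(img)
    edges[:, 1:-1] = np.abs(img[:, 2:] - img[:, :-2])
    return edges
```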
Region detection is a method for detecting the lane-marking region using the lane marking color. The region is characterized by the intensity pattern of the image: a dark area, followed by a bright area, followed by a dark area again. Region detection therefore uses a dark-bright-dark function, and the filter for this function is shown in Figure 2.
The results of region detection using the dark-bright-dark function are shown in Figure 3. As can be seen, the lane markings appear wider in the near region and narrower in the distant region of the input image. The actual lane area is still calculated correctly, even though the apparent lane width varies across the image, by scaling the sizes of the black (0) and white (1) areas of the dark-bright-dark filter accordingly.
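The sketch below is our own illustration of a row-wise dark-bright-dark response with a distance-dependent filter scale; subtracting the mean of the dark side regions from the mean of the bright centre is our assumption about how the 0/1 mask of Figure 2 is applied, not the paper's exact implementation.

```python
import numpy as np

def dark_bright_dark_response(gray: np.ndarray, lane_width_px: np.ndarray) -> np.ndarray:
    """Row-wise dark-bright-dark response with a per-row filter scale.

    gray: 2-D grayscale image (H x W).
    lane_width_px: expected lane-marking width in pixels for each row
                   (larger near the bottom of the image, smaller near the top).
    """
    h, w = gray.shape
    img = gray.astype(np.float32)
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        bw = max(int(lane_width_px[y]), 1)          # bright (white, 1) width for this row
        for x in range(bw, w - 2 * bw):
            centre = img[y, x:x + bw].mean()                      # bright centre
            sides = 0.5 * (img[y, x - bw:x].mean()                # dark left side
                           + img[y, x + bw:x + 2 * bw].mean())    # dark right side
            out[y, x] = max(centre - sides, 0.0)    # high only for dark-bright-dark patterns
    return out
```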
Both the edge and region detection results are used to further improve the accuracy of lane detection. To detect lanes, the edge detection algorithm calculates the median value between the positive edge and the negative edge of a lane marking, and a median value is also calculated within the region of interest (ROI) of the lane. A lane is detected when the median values obtained from the edge and region detection results overlap; responses that do not overlap are treated as noise.
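A minimal sketch of this fusion rule follows; the pairing of edge candidates per row and the pixel tolerance are our own assumptions for illustration.

```python
def fuse_edge_and_region(edge_pairs, region_centres, tol_px=3.0):
    """Accept a lane candidate only when the midpoint between its positive and
    negative edges coincides (within tol_px pixels) with a centre found by the
    dark-bright-dark region filter; candidates without such agreement are
    treated as noise.

    edge_pairs:     iterable of (x_positive_edge, x_negative_edge) for one row.
    region_centres: iterable of x positions of region-filter maxima in that row.
    """
    lane_points = []
    for x_pos, x_neg in edge_pairs:
        midpoint = 0.5 * (x_pos + x_neg)
        if any(abs(midpoint - xc) <= tol_px for xc in region_centres):
            lane_points.append(midpoint)
    return lane_points
```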
Figure 4 shows the ROI setting result using Kalman filter tracking and the line detection result using inverse perspective mapping. In Figure 4, the red lines are the line detection results, and the green lines are the ROI setting results.
Figure 5 shows the result of line fitting using a linear parabolic. In Figure 5, the green lines are the lane fitting results.
Finally, the block diagram for the general lane detection algorithm with edge and region detection is shown in Figure 6. Subsequent to obtaining the input image from the front camera module, the lane’s edge is detected and the lane region is extracted. Next, the lane component is extracted from the edge and region. Finally, the actual lane is selected and recognized by fitting, and the tracking algorithm is applied to enhance the accuracy of lane detection.

2.2. Light Intensity Detection for Dual Light Sensor

Typically, solar sensors are used for air-conditioning systems to maintain the air temperature demanded by the driver. Dual light sensors integrate dual solar and twilight sensors into one chip. The circuit for the desired solar sensor angular response and for measuring the amount of light is shown in Figure 7. This circuit provides output currents from photodiodes that are proportional to the amount of light received and desired angular response [33].
As a result, dual light sensors can cover a wide range of illumination, from twilight at only a few lux to full sunlight at 100,000 lux. Illumination data for the dual light sensor are shown in Figure 8a,b, and the relative solar outputs are shown in Figure 8c [34]. The measured amounts of light on a plane surface are listed in Table 1. According to Figure 8 and Table 1, external conditions, such as backlight or night driving, can be recognized from the light density.
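For illustration, a coarse classifier of the external lighting condition from a lux reading might look as follows; the decision thresholds are our own assumption, placed between the levels listed in Table 1, and are not values from the paper.

```python
def classify_lighting(lux: float) -> str:
    """Map a dual-light-sensor illumination reading (lux) to a coarse
    external condition, using cut-offs placed between the Table 1 levels."""
    if lux >= 50_000:
        return "direct sunlight"   # Table 1: sunlight ~107,527 lux
    if lux >= 5_000:
        return "full daylight"     # ~10,752 lux
    if lux >= 500:
        return "overcast day"      # ~1075 lux
    if lux >= 50:
        return "very dark day"     # ~107 lux
    if lux >= 5:
        return "twilight"          # ~10.8 lux; a similar level occurs when entering a tunnel
    if lux >= 0.5:
        return "deep twilight"     # ~1.08 lux
    return "night"                 # full moon and darker
```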

3. Proposed Lane Detection Method with Illumination Information

The proposed lane detection method uses illumination intensity information in order to improve those situations where the lane edges in an image have relatively weak contrast or where there are strong distracting edges.

3.1. Block Diagram for Proposed Lane Detection Method

The dual light sensor provides information about the amount and direction of light. As shown in Table 1, weather conditions are extracted with illumination information. In addition, backlight and side-light are detected using the direction of the light. Tunnel information is extracted with rapidly changing lighting.
The proposed method improves lane detection by using information from the dual light sensor. In the proposed method, three parts are improved in lane detection: (1) image quality, (2) edge detection, and (3) lane tracking.
Figure 9 shows the block diagram of the proposed method. As shown in Figure 9, it is configured to receive input image and dual light sensor data simultaneously. The red line indicates the improved part using the dual light sensor compared to the lane recognition method described in Section 2.
Image quality enhancement uses the night, day, and sun-angle information from the dual light sensor. In the proposed method, the enhanced image is defined as the second image data, which provide the best image quality for lane detection. Edge detection improvement changes the threshold using the tunnel and twilight information from the dual light sensor; with the adjusted threshold, lane edges can be extracted even from images in which they are weakly visible. Lane tracking improvement narrows the ROI using the tunnel and side-light information from the dual light sensor; with the reduced ROI, false lane features that appear inside tunnels or are caused by shadows are removed.

3.2. Image Data Quality Enhancements

General lane detection algorithms control the exposure time or vary the analog gain based on image sensor information alone. However, the image sensor cannot detect external conditions, such as backlight or night driving situations. As a result, the gain and exposure control response is delayed, which leads to incorrect detections and a decrease in the lane detection rate. Therefore, enhanced image processing that uses an additional dual light sensor is applied to recognize backlight conditions or night driving situations and to improve the gain and exposure control performance (faster response time).
The exposure time can be calculated with Equations (2) and (3). The value of $E_I$ is obtained by adjusting the current exposure time $E_O$ through $C_E$, which is derived from the light sensor illumination; $C_E$ is a weighted sum of $L$ and $A$, directly proportional to the logarithm of $L$ and inversely proportional to the absolute value of $A$. Similarly, the analog gain can be calculated with Equations (4) and (5), where $G_I$ is obtained by adjusting $G_O$ through $C_G$.

$$E_I = (1 - C_E) \times E_O \quad (2)$$
$$C_E = \omega_{LE} \log_2 L + \omega_{AE} \left( \frac{\pi}{2} - |A| \right) \quad (3)$$
$$G_I = (1 - C_G) \times G_O \quad (4)$$
$$C_G = \omega_{LG} \log_2 L + \omega_{AG} \left( \frac{\pi}{2} - |A| \right) \quad (5)$$

where $E_I$ denotes the next exposure time, $C_E$ denotes the exposure-control constant determined by the illumination value, and $E_O$ denotes the current exposure time of the camera sensor; $G_I$ denotes the next gain, $C_G$ denotes the gain-control constant determined by the illumination value, and $G_O$ denotes the current gain of the camera sensor; $L$ denotes the illumination value obtained from the dual light sensor, and $A$ denotes the sun angle obtained from the dual light sensor.
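A minimal sketch of this control step, assuming the reconstructed forms of Equations (2)–(5) above; the weight values are placeholders, since the paper does not report them.

```python
import math

def update_exposure_and_gain(E_o: float, G_o: float, L: float, A: float,
                             w_LE: float = 0.05, w_AE: float = 0.02,
                             w_LG: float = 0.05, w_AG: float = 0.02):
    """One exposure/gain control step driven by the dual light sensor.

    E_o, G_o: current exposure time and analog gain of the image sensor.
    L:        illumination in lux from the dual light sensor.
    A:        sun angle in radians from the dual light sensor.
    """
    angle_term = math.pi / 2 - abs(A)
    C_E = w_LE * math.log2(max(L, 1.0)) + w_AE * angle_term   # Equation (3)
    C_G = w_LG * math.log2(max(L, 1.0)) + w_AG * angle_term   # Equation (5)
    E_i = (1.0 - C_E) * E_o   # Equation (2): brighter scenes shorten the exposure
    G_i = (1.0 - C_G) * G_o   # Equation (4): brighter scenes reduce the gain
    return E_i, G_i
```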

3.3. Edge Detection Improvement

Lane detection performance depends on edge and region detection. If the threshold for edge and region detection is too high, the lane detection rate decreases. Conversely, if the threshold is too low, the lane detection quality degrades and the false recognition rate increases because of poor image quality. In this paper, the light intensity information from the dual light sensor is used to select the threshold for edge and region detection, which enhances the quality of lane detection under critical conditions, such as tunnel and twilight driving.
Figure 10 shows tunnel and twilight images taken with the front camera. As shown in Figure 10, the edges and regions of the lane markings do not appear clearly in tunnels or at twilight.
The control for the threshold of edge and region detection can be calculated with Equation (6).
$$Th_E = \begin{cases} Th_L, & \text{if } Tunnel = 1 \text{ or } Twilight = 1 \\ Th_H, & \text{else} \end{cases} \quad (6)$$

where $Th_E$ denotes the threshold value used for edge and region detection, and $Th_L$ and $Th_H$ denote the low and high threshold values, respectively.
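A minimal sketch of this switch; the numeric threshold values are placeholders, not values from the paper.

```python
def edge_region_threshold(tunnel: bool, twilight: bool,
                          th_low: int = 20, th_high: int = 60) -> int:
    """Equation (6): use the low threshold in tunnels or at twilight so that
    weak lane edges are still extracted; otherwise use the high threshold."""
    return th_low if (tunnel or twilight) else th_high
```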
Figure 11 shows the edge detection result from the tunnel image. Figure 11a shows the edge detection result when the threshold is high, and Figure 11b shows the edge detection result when the threshold is low. As shown in Figure 11, when the threshold is high in the tunnel image, the edges of the lane markings are not visible. In contrast, when the threshold is low, the edges of the lane markings are visible.

3.4. Lane Tracking Improvement

External illumination information is efficient not only for edge and region detection, but also for tracking performance in specific cases, such as when entering tunnels or detecting guardrail shadows caused by lateral light sources.
Figure 12 shows that a line similar to the lane marking is created by the shadow from the inside of the tunnel and side light. Figure 12a is an image of the inside of a tunnel, and Figure 12b shows a shadow image of a guardrail with side lighting.
The proposed method uses a third-order polynomial model based on vehicle coordinates. Figure 13 shows the lane marking model; as shown in the figure, the longitudinal direction of the vehicle is taken as the x-axis [35].
The lane marking models $f_L(x)$ and $f_R(x)$ are defined by Equation (7) [35].

$$f_L(x) = a_L x^3 + b_L x^2 + c_L x + d_L$$
$$f_R(x) = a_R x^3 + b_R x^2 + c_R x + d_R \quad (7)$$

where $a_L$ and $a_R$ represent the curvature rate, $b_L$ and $b_R$ the curvature, $c_L$ and $c_R$ the heading angle, and $d_L$ and $d_R$ the offset of the left and right lane markings, respectively.
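As a short illustration of Equation (7), the sketch below evaluates the lateral offset of a lane marking at a given longitudinal distance; the example coefficient values are purely illustrative, not measurements from the paper.

```python
def lane_offset(x: float, a: float, b: float, c: float, d: float) -> float:
    """Equation (7): third-order polynomial lane-marking model in vehicle
    coordinates, where x is the longitudinal distance ahead of the vehicle,
    a the curvature rate, b the curvature, c the heading angle, and d the
    lateral offset of the marking."""
    return a * x**3 + b * x**2 + c * x + d

# Left marking roughly 1.75 m to the left, nearly straight road, 20 m ahead.
y_left = lane_offset(20.0, a=1e-6, b=1e-4, c=0.01, d=-1.75)
```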
The control for tracking the area size can be calculated with Equation (8). The Kalman filter is used for tracking, and the position of the next lane is estimated by Kalman prediction [36,37,38]. The lane position for the next frame is estimated through the relationship between the estimated and current lane positions.
$$W_a = \alpha W_k, \quad \alpha = \begin{cases} 0.5, & \text{if } Tunnel = 1 \text{ or } Sidelight = 1 \\ 1, & \text{else} \end{cases} \quad (8)$$

where $W_k$ is the range of the area calculated from the Kalman filter prediction, and $W_a$ is the estimated range: $\alpha$ is 0.5 under lateral-light or tunnel-entering conditions and 1 in all other cases. $W_k$ is calculated by Equation (9).

$$W_k = (d_C - d_{Kp}) + C_d \quad (9)$$

where $d_C$ is the offset of the lane marking model in the current frame, $d_{Kp}$ is the offset of the Kalman prediction result, and $C_d$ is a constant detection range.
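A minimal sketch combining Equations (8) and (9) as reconstructed above:

```python
def tracking_range(d_current: float, d_kalman_pred: float, C_d: float,
                   tunnel: bool, side_light: bool) -> float:
    """Search range around the Kalman-predicted lane position.

    The base range W_k (Equation (9)) is the offset difference between the
    current lane model and the Kalman prediction plus a constant margin C_d.
    Inside tunnels or under lateral light the range is halved (alpha = 0.5,
    Equation (8)) to reject false lane features caused by shadows."""
    W_k = (d_current - d_kalman_pred) + C_d
    alpha = 0.5 if (tunnel or side_light) else 1.0
    return alpha * W_k
```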
Figure 14 shows the calculation tracking range result. The green lines are lane detection results in the current frame, and the red lines are the calculated tracking range. Figure 14a shows the result with a narrow tracking range, and Figure 14b shows the result with a normal tracking range.

4. Experimental Results

4.1. Design Results for Proposed Integrated Camera with Dual Light Sensor

The configuration of the integrated front camera module and dual light sensor is shown in Figure 15. The front camera module receives an image from the external vehicle environment with an image sensor. The input image is optimized for quality using image signal processing. The vision processor executes the lane recognition algorithm. The dual light sensor detects the intensity of the sun on the right and left regions with dual solar and twilight sensors and estimates the position of the sun and light density. A photograph of the realized integrated front camera module with dual light sensor is shown in Figure 16. The designed circuit board and appearance are shown in Figure 16a,b, respectively. Because the developed camera has a highly integrated size, it has advantages such as reduced camera and sensor size and cost-effectiveness.
The specifications for the complementary metal oxide semiconductor (CMOS) and dual light sensors are listed in Table 2. The camera can detect the lane up to 90 m for the lane support system. The dual light sensor can measure light intensity from 0 lux to 100,000 lux.

4.2. Experiment Results

The environment for the lane support system tests is shown in Figure 17. The test vehicle is equipped with the integrated front camera module mounted on the inner windshield. The tests are performed in dry conditions at an ambient temperature of approximately 25 °C.
The test results for the enhancement of image data quality are shown in Figure 18. Figure 18a shows the original image data for the backlight condition. The image data are saturated with white because of sunlight, and thus it is difficult to distinguish the lane on the road surface. In contrast, the results after enhancing the image quality for the backlight condition are shown in Figure 18b. Based on Figure 8b and Table 1, we applied the sunlight condition to Equations (2) and (3). As a result, the lane image data shows a clearly distinguished lane within the surrounding road surface.
Figure 18c shows the original image data for the night driving situation. In some areas, the image data are saturated by other light sources (such as other vehicles’ headlights and rear lights, streetlights, and neon signs), and thus it is difficult to distinguish the lane on the road surface. In contrast, the results after enhancing the image quality for the night driving situation are shown in Figure 18d. Based on Figure 8a and Table 1, we applied the twilight condition to Equations (2) and (3). Figure 18e shows the original image data for the entering tunnel situation. In some areas, the image data are saturated with black because of shadow, and thus it is difficult to distinguish the lane on the road surface. In contrast, the results after enhancing the image quality for the entering tunnel situation are shown in Figure 18f. Based on Figure 8a and Table 1, we applied the twilight condition to Equations (2) and (3). As a result, the lane image data show a clearly distinguished lane within the surrounding road surface. Finally, we obtain optimized image data under the worst environmental conditions using the light density from the dual light sensor for the control of exposure time and gain.
The test results for the improvement of edge detection and the tracking process are shown in Figure 19a–d. Figure 19a shows the original image data when entering a tunnel. Because of the sudden change in the external light conditions, the lane detection algorithm detects the left lane incorrectly. The results of the tracking process improvement when entering the tunnel are shown in Figure 19b. Based on Figure 8a and Table 1, we applied the tunnel condition to Equation (5). Table 1 lists twilight as 10.8 lux; this illumination value is applied because the condition is similar to entering a tunnel. As a result, we can confirm correct detection of the left lane, as shown in Figure 19b.
Figure 19c shows the original image data for the lateral light condition. Because of the guardrail shadow, the front camera module cannot detect the left lane. The results of improving edge detection under lateral light conditions are shown in Figure 19d. Based on Figure 9, we can obtain the normalized output, which is less than 0.5 for the lateral light condition where the sunlight appears at less than 30° above the horizon. We applied the lateral light condition to Equation (4). As a result, we can correctly detect the left lane.
Finally, we can obtain the optimized lane detection results without incorrect detection or recognition in the worst environment conditions using the light density from the dual light sensor for the control of the threshold of the edge (region) area and tracking performance.
In order to obtain a more precise assessment, tests were conducted under various conditions on a real road. Typically, assessments of lane support systems evaluate the rate of correctly recognizing the lane and the false alarm rate. For our test, we equipped a vehicle with a camera in order to detect the accuracy of our method and the number of false alarms. We drove the test vehicle for a total of 728 km [39].
First, real-road tests were conducted under various external situations: day and night driving, backlight conditions, and the presence of lateral light, rain, and snowfall. The analysis indicates good results in the cases of day, night, backlight, and lateral light, as shown in Figure 20a,b.
Second, real-road tests were performed under different lane types: those with a solid line, dashed line, curved line, road marking line, and crosswalk. The analysis indicates good results in the cases of solid line, dashed line, curved line, and road marking line, as shown in Figure 21a–e.
For each performance index, the number of correct detections, false positives, and false negatives was evaluated. A total of 59 cases were correctly detected in the first set of tests. Only one false negative is present (caused by the heavy rain condition, as shown in Figure 20a). CMOS camera technology cannot compete against the human eye. Under heavy rain conditions, lane detection is not possible.
A total of 36 cases were correctly detected in the second set of tests. Two false negatives were present (caused by the crosswalk, as shown in Figure 21e). There are limitations in the proposed method. The results of the real situation tests are listed in Table 3.

5. Conclusions

In the near future, vehicles will have a variety of ADAS for autonomous driving. Lane support systems are part of the safety assistance systems and are the foundation for achieving this goal.
In this paper, in order to improve those cases where the lane edges in an image have relatively weak contrast, or where there are strong distracting edges, we proposed a novel lane detection method that uses illumination intensity information. We presented a new algorithm for enhancing the quality of the image data and improving edge detection and lane tracking using the illumination information. For our evaluation, we designed an integrated front camera module with dual light sensor and mounted it on the windshield of a vehicle. Lane detection performance was measured under the worst conditions. For comparing the results before and after applying the proposed method, a total of 105 cases were correctly detected in the tests (external situations and different lane types). Only one false positive (caused by rainy conditions) and two false negatives (caused by crosswalk and road marking conditions) were found. Finally, when considering the results of various evaluation tests, we could confirm the performance improvement of the lane support system.
Future work will be extended to investigate enhancements for lane detection, including recognizing a variety of lane types (crosswalks, road markings, and so on), in order to reduce the number of false negatives. We also plan to improve deep learning-based lane detection using dual light sensors.

Author Contributions

Y.L. developed the algorithm and performed experiments. M.-k.P. developed the H/W system, developed the algorithm, and performed experiments. M.P. contributed to the development of the algorithm, validation of experiments, and research supervision. Y.L. and M.-k.P. contributed to writing the manuscript. M.P. contributed to reviewing and editing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B070481431530382068210105). This work was supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (grant no. 22AMDP-C160501-02). This work was also supported by the Ministry of Trade, Industry and Energy (MOTIE, Korea) (No. 20018055, Development of fail operation technology in Lv.4 autonomous driving systems).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Euro NCAP. Assessment Protocol—SA v9.1. Available online: https://cdn.euroncap.com/media/67254/euro-ncap-assessment-protocol-sa-v91.pdf (accessed on 31 March 2022).
  2. Martinez, F.J.; Toh, C.-K.; Cano, J.-C.; Calafate, C.T.; Manzoni, P. Emergency services in future intelligent transportation systems based on vehicular communication networks. IEEE Intell. Transp. Syst. Mag. 2010, 2, 6–20. [Google Scholar] [CrossRef]
  3. Narote, S.P.; Bhujbal, P.N.; Narote, A.S.; Dhane, D.M. A review of recent advances in lane detection and departure warning system. Pattern Recognit. 2018, 13, 216–234. [Google Scholar] [CrossRef]
  4. Hsiao, P.-Y.; Yeh, C.-W.; Huang, S.-S.; Fu, L.-C. A portable vision-based real-time lane departure warning system: Day and night. IEEE Trans. Veh. Technol. 2008, 58, 2089–2094. [Google Scholar] [CrossRef]
  5. Wang, J.-G.; Lin, C.-J.; Chen, S.-M. Applying fuzzy method to vision-based lane detection and departure warning system. Expert Syst. Appl. 2010, 37, 113–126. [Google Scholar] [CrossRef]
  6. Duan, J.; Zhang, Y.; Zheng, B. Lane line recognition algorithm based on threshold segmentation and continuity of lane line. In Proceedings of the 2016 2nd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 14–17 October 2016; pp. 680–684. [Google Scholar]
  7. Chai, Y.; Wei, S.J.; Li, X.C. The multi-scale Hough transform lane detection method based on the algorithm of Otsu and Canny. Adv. Mater. Res. 2014, 1042, 126–130. [Google Scholar] [CrossRef]
  8. Gaikwad, V.; Lokhande, S. Lane departure identification for advanced driver assistance. IEEE Trans. Intell. Transp. Syst. 2014, 16, 910–918. [Google Scholar] [CrossRef]
  9. Bläsing, F. Integrated Design and Functional Solution for a Camera Front-End in the Windshield Sensor Cluster. In Proceedings of the SAE World Congress & Exhibition, Detroit, MI, USA, 16–19 April 2007. No. 2007-01-0393. [Google Scholar]
  10. Mu, C.; Ma, X. Lane detection based on object segmentation and piecewise fitting. TELKOMNIKA Indones. J. Electr. Eng. 2014, 12, 3491–3500. [Google Scholar] [CrossRef]
  11. Tu, C.; Wyk, B.V.; Hamam, Y.; Djouani, K.; Du, S. Vehicle position monitoring using Hough transform. IERI Procedia 2013, 4, 316–322. [Google Scholar] [CrossRef] [Green Version]
  12. Wu, P.-C.; Chang, C.-Y.; Lin, C.H. Lane-mark extraction for automobiles under complex conditions. Pattern Recognit. 2014, 47, 2756–2767. [Google Scholar] [CrossRef]
  13. Wang, Y.; Shen, D.; Teoh, E.K. Lane detection using spline model. Pattern Recognit. Lett. 2000, 21, 677–689. [Google Scholar] [CrossRef]
  14. Fang, C.-Y.; Liang, J.-H.; Lo, C.-S.; Chen, S.-W. A real-time visual-based front-mounted vehicle collision warning system. In Proceedings of the 2013 IEEE Symposium on Computational Intelligence in Vehicles and Transportation Systems (CIVTS), Singapore, 16–19 April 2013; pp. 1–8. [Google Scholar]
  15. Yim, Y.U.; Oh, S.-Y. Three-feature based automatic lane detection algorithm (TFALDA) for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2003, 4, 219–225. [Google Scholar]
  16. Jung, H.G.; Lee, Y.H.; Kang, H.J.; Kim, J. Sensor fusion-based lane detection for LKS+ ACC system. Int. J. Automot. Technol. 2009, 10, 219–228. [Google Scholar] [CrossRef]
  17. Bertozzi, M.; Broggi, A.; Conte, G.; Fascioli, A. Obstacle and lane detection on ARGO. In Proceedings of the Conference on Intelligent Transportation Systems, Boston, MA, USA, 12 November 1997; pp. 1010–1015. [Google Scholar]
  18. Aung, T.; Zaw, M.H. Video based lane departure warning system using Hough transform. In Proceedings of the International Conference on Advances in Engineering and Technology (ICAET), Singapore, 29–30 March 2014; pp. 29–30. [Google Scholar]
  19. Lee, J.W. A machine vision system for lane-departure detection. Comput. Vis. Image Underst. 2002, 86, 52–78. [Google Scholar] [CrossRef] [Green Version]
  20. Jung, C.R.; Kelber, C.R. Lane following and lane departure using a linear-parabolic model. Image Vis. Comput. 2005, 23, 1192–1202. [Google Scholar] [CrossRef]
  21. Taubel, G.; Sharma, R.; Yang, J.-S. An experimental study of a lane departure warning system based on the optical flow and Hough transform methods. WSEAS Trans. Syst. 2014, 13, 105–115. [Google Scholar]
  22. Borkar, A.; Hayes, M.; Smith, M.T.; Pankanti, S. A layered approach to robust lane detection at night. In Proceedings of the 2009 IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems, Nashville, TN, USA, 30 March–2 April 2009; pp. 51–57. [Google Scholar]
  23. Zhao, K.; Meuter, M.; Nunn, C.; Müller, D.; Schneiders, S.M.; Pauli, J. A novel multi-lane detection and tracking system. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 1084–1089. [Google Scholar]
  24. Obradović, Ð.; Konjović, Z.; Pap, E.; Rudas, I. Linear fuzzy space based road lane model and detection. WSEAS Trans. Syst. 2014, 38, 37–47. [Google Scholar] [CrossRef]
  25. Lin, Q.; Han, Y.; Hahn, H. Real-time lane departure detection based on extended edge-linking algorithm. In Proceedings of the 2010 Second International Conference on Computer Research and Development, Kuala Lumpur, Malaysia, 7–10 May 2010; pp. 725–730. [Google Scholar]
  26. Borkar, A.; Hayes, M.; Smith, M.T. Robust lane detection and tracking with RANSAC and Kalman filter. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3261–3264. [Google Scholar]
  27. Kim, Z. Robust lane detection and tracking in challenging scenarios. IEEE Trans. Intell. Transp. Syst. 2008, 9, 16–26. [Google Scholar] [CrossRef] [Green Version]
  28. Li, H.; Nashashibi, F. Robust real-time lane detection based on lane mark segment features and general a priori knowledge. In Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, Thailand, 7–11 December 2009; pp. 812–817. [Google Scholar]
  29. Kim, J.; Lee, M. Robust lane detection based on convolutional neural network and random sample consensus. In Proceedings of the International Conference on Neural Information Processing, Kuching, Malaysia, 3–6 November 2014; pp. 454–461. [Google Scholar]
  30. Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust lane detection from continuous driving scenes using deep neural networks. IEEE Trans. Veh. Technol. 2019, 69, 41–54. [Google Scholar] [CrossRef] [Green Version]
  31. Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Towards end-to-end lane detection: An instance segmentation approach. In Proceedings of the 2018 IEEE intelligent vehicles symposium (IV), Changshu, China, 26–30 June 2018; pp. 286–291. [Google Scholar]
  32. Ghafoorian, M.; Nugteren, C.; Baka, N.; Booij, O.; Hofmann, M. EL-GAN: Embedding loss driven generative adversarial networks for lane detection. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 1–17. [Google Scholar]
  33. Marek, J.; Trah, H.P.; Suzuki, Y.; Yokomori, I. Sensors for Automotive Applications, 4th ed.; John Wiley & Sons: New York, NY, USA, 2006; pp. 462–473. [Google Scholar]
  34. Mouser Electronics. Dual Solar Sensor. Available online: http://www.mouser.com/ds/2/18/amphenol_datasheet_Dual%20Solar_SUF005A001-746349.pdf (accessed on 31 March 2022).
  35. Son, Y.S.; Kim, W.; Lee, S.-H.; Chung, C.C. Robust multirate control scheme with predictive virtual lanes for lane-keeping system of autonomous highway driving. IEEE Trans. Veh. Technol. 2014, 64, 3378–3391. [Google Scholar] [CrossRef]
  36. McCall, J.C.; Trivedi, M.M. Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 2006, 7, 20–37. [Google Scholar] [CrossRef] [Green Version]
  37. Redmill, K.A.; Upadhya, S.; Krishnamurthy, A.; Ozguner, U. A lane tracking system for intelligent vehicle applications. In Proceedings of the 2001 IEEE Intelligent Transportation Systems, Oakland, CA, USA, 25–29 August 2001; pp. 273–279. [Google Scholar]
  38. Choi, H.-C.; Park, J.-M.; Choi, W.-S.; Oh, S.-Y. Vision-based fusion of robust lane tracking and forward vehicle detection in a real driving environment. Int. J. Automot. Technol. 2012, 13, 653–669. [Google Scholar] [CrossRef]
  39. Euro NCAP. Test Protocol–LSS v4.0. Available online: https://cdn.euroncap.com/media/67895/euro-ncap-lss-test-protocol-v40.pdf (accessed on 31 March 2022).
Figure 1. Edge detection result: (a) input image, and (b) edge detection result.
Figure 2. Filter for dark-bright-dark function.
Figure 3. Dark-bright-dark filter result: (a) input image, and (b) dark-bright-dark filter result.
Figure 4. Line detection and lane tracking results: (a) daytime result, and (b) night result.
Figure 5. Line fitting results: (a) daytime result, and (b) night result.
Figure 6. Block diagram for general lane detection algorithm.
Figure 7. Dual light sensors: (a) dual light sensors, and (b) circuit for dual light sensors.
Figure 8. Illumination for dual light sensor: (a) twilight sensor voltage output vs. light level, (b) solar sensor current output vs. light level, and (c) relative solar output for dual light sensors.
Figure 9. Block diagram of the proposed method.
Figure 10. Examples of tunnel and twilight image: (a) tunnel image, and (b) twilight image.
Figure 11. Edge detection results: (a) result ($Th_E = Th_H$), and (b) result ($Th_E = Th_L$).
Figure 12. Examples of tunnel and guardrail with side lighting image: (a) tunnel image, and (b) guardrail with side lighting image.
Figure 13. Example of lane marking model.
Figure 14. Examples of calculated tracking range result: (a) narrow tracking range result, and (b) normal tracking range result.
Figure 15. Actual design for circuit board of proposed integrated front camera module with dual light sensor.
Figure 16. Photograph of realized integrated front camera module with dual light sensor: (a) designed circuit board and (b) appearance.
Figure 17. Integrated front camera module installed on inner windshield.
Figure 18. Test results for image quality enhancements: (a) backlight condition (without illumination data), (b) backlight condition (with illumination data), (c) night driving situation (without illumination data), (d) night driving situation (with illumination data), (e) entering tunnel situation (without illumination data), and (f) entering tunnel situation (with illumination data).
Figure 19. Test results for edge detection and tracking process improvements: (a) entering tunnel condition (without illumination data), (b) entering tunnel condition (with illumination data), (c) side-light driving situation (without illumination data), and (d) side-light driving situation (with illumination data).
Figure 20. Real-road tests were conducted under various external situations: (a) heavy rain, and (b) snowfall.
Figure 21. Real-road tests were performed under different lane types: (a) dashed line + dashed line, (b) solid line + dashed line, (c) curved line, (d) road marking line, and (e) crosswalk.
Table 1. Measured results for light amount on a plane surface.

Condition        Illumination (lux)
Sunlight         107,527
Full Daylight    10,752
Overcast Day     1075
Very Dark Day    107
Twilight         10.8
Deep Twilight    1.08
Full Moon        0.108
Quarter Moon     0.0108
Starlight        0.0011
Overcast Night   0.0001
Table 2. CMOS sensor and dual light sensor.

Item                Parameter                            Specification
CMOS Sensor         FOV (field of view)                  52° (H) × 38°
                    Resolution                           1280 × 800 (HD)
                    Frame rate                           30 fps
                    Dynamic range                        115 dB
                    Detection range                      90 m
Dual Light Sensor   Light intensity                      0 to 100,000 lux
                    Sensor output current                145 mA ± 15%
                    Angular response (elevation angle)   −90°/90°
                    Angular response (azimuth)           40°
Table 3. Test results from real road.

Performance Index      Correct Detections   False Negatives   False Positives
External situations    59                   1                 0
Different lane types   36                   0                 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
