# Online Road Detection under a Shadowy Traffic Image Using a Learning-Based Illumination-Independent Image


## Abstract


## 1. Introduction

The main contributions of this paper are as follows:

- (1) It proposes a learning-based illuminant invariance method, which largely removes the influence of shadows, the major source of interference in road images, and gains more accurate and robust road detection results.
- (2) It proposes an online road detection method, which avoids the partial detection of offline approaches, achieves fine-grained detection on every shaded frame of the road sequence, and gains more effective and robust results.
- (3) Compared with the state-of-the-art II-RD and MII-RD methods on common open image sets, the proposed method gains more precise and robust detection results; the same effectiveness is obtained on our own self-built image datasets.

## 2. Illumination-Independent Image

## 3. Learning-Based Online $\theta $ Calibration

#### 3.1. Road Block Feature Extraction with Multi-Feature Fusion

**Color distance:**
$$D\left({C}_{i},{C}_{j}\right)=\sqrt{{\left({R}_{i}-{R}_{j}\right)}^{2}+{\left({G}_{i}-{G}_{j}\right)}^{2}+{\left({B}_{i}-{B}_{j}\right)}^{2}},$$
where $C_{i}$ and $C_{j}$ represent pixel $i$ and pixel $j$, and R, G, and B correspond to the three color channels.
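As an illustrative sketch (assuming, as the definitions above suggest, that the color distance is the Euclidean distance between two RGB triples):

```python
import numpy as np

def color_distance(ci, cj):
    """Euclidean distance between two RGB pixels ci and cj,
    each given as an (R, G, B) triple."""
    ci = np.asarray(ci, dtype=float)
    cj = np.asarray(cj, dtype=float)
    return float(np.sqrt(np.sum((ci - cj) ** 2)))

# Example: distance between two nearly identical dark pixels.
d = color_distance((10, 10, 10), (13, 14, 10))
```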

**First moment:**
$$\mu_{k}=\frac{1}{N}{\displaystyle \sum _{i=1}^{N}{p}_{ik}},$$

**Second moment:**
$$\sigma_{k}={\left(\frac{1}{N}{\displaystyle \sum _{i=1}^{N}{\left({p}_{ik}-{\mu}_{k}\right)}^{2}}\right)}^{1/2},$$

**Third moment:**
$$s_{k}={\left(\frac{1}{N}{\displaystyle \sum _{i=1}^{N}{\left({p}_{ik}-{\mu}_{k}\right)}^{3}}\right)}^{1/3},$$

where $p_{ik}$ is the value of the $i$-th pixel in color channel $k$ and $N$ is the number of pixels in the block.
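Assuming the standard color-moment definitions (per-channel mean, standard deviation, and cube root of the third central moment), the three moments of a road block can be sketched as:

```python
import numpy as np

def color_moments(block):
    """Per-channel color moments of an H x W x 3 block:
    first moment (mean), second moment (standard deviation),
    third moment (cube root of the mean cubed deviation)."""
    px = block.reshape(-1, 3).astype(float)
    mu = px.mean(axis=0)                             # first moment
    sigma = np.sqrt(((px - mu) ** 2).mean(axis=0))   # second moment
    skew = np.cbrt(((px - mu) ** 3).mean(axis=0))    # third moment
    return mu, sigma, skew
```

Concatenating the three moments over the three channels gives a 9-dimensional color descriptor per block.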

- (1) **Contrast**: Measures the amount of local variation in the image, reflecting its sharpness and the depth of its texture grooves. The deeper the grooves, the clearer the result; conversely, a smaller contrast value means shallower grooves and a more blurred result:$$Con={\displaystyle \sum _{i}{\displaystyle \sum _{j}{\left(i-j\right)}^{2}P\left(i,j\right)}}.$$
- (2) **Energy**: Measures the stability of the grayscale changes in the image texture, reflecting the uniformity of the gray level distribution and the texture coarseness. A high energy value indicates that the texture changes are stable:$$Asm={\displaystyle \sum _{i}{\displaystyle \sum _{j}P{\left(i,j\right)}^{2}}}.$$
- (3) **Entropy**: Measures the randomness of the information contained in the image, showing the complexity of its gray level distribution. The larger the entropy value, the more complex the image:$$Ent=-{\displaystyle \sum _{i}{\displaystyle \sum _{j}P\left(i,j\right)\mathrm{log}P\left(i,j\right)}}.$$
- (4) **Homogeneity**: Measures the similarity of the gray levels along the row or column direction, reflecting the local gray level correlation. The larger the value, the stronger the correlation:$$Corr=\left[{\displaystyle \sum _{i}{\displaystyle \sum _{j}\left(ijP\left(i,j\right)\right)}}-{\mu}_{x}{\mu}_{y}\right]/{\sigma}_{x}{\sigma}_{y}.$$
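The four texture measures can be computed directly from a normalized co-occurrence matrix $P$; the following NumPy sketch mirrors the formulas above:

```python
import numpy as np

def glcm_features(P):
    """Texture features of a normalized gray level co-occurrence
    matrix P (P.sum() == 1), following the four measures above."""
    i, j = np.indices(P.shape)
    con = np.sum((i - j) ** 2 * P)        # contrast
    asm = np.sum(P ** 2)                  # energy (angular second moment)
    nz = P[P > 0]
    ent = -np.sum(nz * np.log(nz))        # entropy (skip zero entries)
    mu_x, mu_y = np.sum(i * P), np.sum(j * P)
    sig_x = np.sqrt(np.sum((i - mu_x) ** 2 * P))
    sig_y = np.sqrt(np.sum((j - mu_y) ** 2 * P))
    corr = (np.sum(i * j * P) - mu_x * mu_y) / (sig_x * sig_y)  # correlation-style homogeneity
    return con, asm, ent, corr
```

For a diagonal co-occurrence matrix (perfectly correlated gray levels) the contrast is 0 and the correlation term is 1.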

#### 3.2. SVM Road Block Classifier
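The paper trains its road block classifier with LIBSVM on the fused color/texture feature vectors. As a self-contained stand-in (not the paper's implementation), a minimal linear SVM trained by hinge-loss subgradient descent can be sketched as:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM trained with hinge-loss subgradient descent.
    A pure-NumPy stand-in for the LIBSVM classifier used in the paper;
    X holds fused color/texture feature vectors, y is in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1:        # margin violated: hinge gradient step
                w = (1 - lr * lam) * w + lr * yi * xi
                b += lr * yi
            else:                            # only the regularizer acts
                w = (1 - lr * lam) * w
    return w, b

def predict(w, b, X):
    """Classify rows of X: +1 for road block, -1 otherwise."""
    return np.where(X @ w + b >= 0, 1, -1)
```

In the paper's setting the input would presumably be a 13-dimensional vector (9 color moments plus 4 co-occurrence features) per 30 × 30 block; any feature dimension works here.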

#### 3.3. Minimum Entropy Solution
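The minimum entropy solution follows Finlayson's entropy-minimization idea: project each pixel's 2-D log-chromaticity onto a candidate angle $\theta $ and keep the angle whose 1-D projection has minimum Shannon entropy. A sketch, assuming the chromaticity pair $(\log(R/G), \log(B/G))$ (the exact channel ratios in the paper may differ):

```python
import numpy as np

def min_entropy_theta(rgb, bins=64):
    """Search theta in [0, 180) degrees minimizing the entropy of the
    1-D log-chromaticity projection; rgb is an N x 3 array of
    strictly positive pixel values."""
    rgb = rgb.astype(float)
    chi1 = np.log(rgb[:, 0] / rgb[:, 1])   # log(R/G)
    chi2 = np.log(rgb[:, 2] / rgb[:, 1])   # log(B/G)
    best_theta, best_ent = 0, np.inf
    for deg in range(180):
        t = np.deg2rad(deg)
        proj = chi1 * np.cos(t) + chi2 * np.sin(t)
        hist, _ = np.histogram(proj, bins=bins)
        p = hist[hist > 0] / hist.sum()
        ent = -np.sum(p * np.log(p))       # Shannon entropy of the projection
        if ent < best_ent:
            best_theta, best_ent = deg, ent
    return best_theta, best_ent
```

If the pixels share a fixed R/G ratio, projecting at 0° collapses them to a single value, so the search returns 0° with zero entropy.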

## 4. Road Detection Algorithm

#### 4.1. Region of Interest Extraction

**(1) Sky removal**

**(2) Hood removal**

- (1) Determine the different road image datasets Seq1, Seq2, …, Seqn;
- (2) Randomly extract 20 frames from dataset Seq1, calculate the hood position Dis(i), i = 1, 2, …, 20, and take the mean of Dis(i) to determine the final split position, Dis_head;
- (3) Mark the video sequence according to the calculated Dis_head to remove the hood area of the vehicle; and
- (4) Repeat steps (2) and (3) for datasets Seq2, …, Seqn.
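The calibration steps above can be sketched as follows; `hood_position` is a hypothetical placeholder detector (largest row-to-row intensity jump in the lower half of the frame), not the detector used in the paper:

```python
import numpy as np

def calibrate_dis_head(frames, n_samples=20, seed=0):
    """Estimate the hood split row Dis_head for one sequence by
    averaging the per-frame hood position over randomly chosen frames."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(frames), size=min(n_samples, len(frames)),
                     replace=False)
    dis = [hood_position(frames[k]) for k in idx]
    return int(round(np.mean(dis)))

def hood_position(frame):
    """Placeholder hood detector: the row (in the lower half of a
    grayscale frame) with the largest jump in mean row intensity."""
    rows = frame.mean(axis=1)
    lower = rows[len(rows) // 2:]
    jump = np.abs(np.diff(lower))
    return len(rows) // 2 + int(np.argmax(jump)) + 1
```

Once Dis_head is fixed, every frame of the sequence is simply cropped at that row (`frame[:dis_head]`).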

#### 4.2. Illumination-Independent Image Acquisition

**Algorithm 1.** Illumination-independent image acquisition

Require: Number of road blocks n = 3, color information matrix feature, gray level co-occurrence matrix feature, SVM road block classifier

Ensure: Illumination-independent image corresponding to a certain frame

1: for k = first frame to last frame do

2: for i = 1 to N (number of candidate blocks, N ≥ 3) do

3: Randomly extract a 30 × 30 pixel block, and set n = 1;

4: Extract the block's color information matrix and gray level co-occurrence matrix features and input them into the SVM road block classifier;

5: if road block then

6: Store the road block in a 150 × 30 matrix, and set n = n + 1;

7: else

8: i = i + 1, and repeat steps 3–6;

9: end if

10: Compose a new image of 90 × 30 pixels;

11: Calculate the new image's online $\theta $ angle by the minimum entropy algorithm;

12: Obtain the illumination-independent image using the illumination-independent image theory;

13: end for

14: end for
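Given the calibrated angle $\theta $, the final step of Algorithm 1 can be sketched as projecting each pixel's log-chromaticity onto $\theta $, a common formulation of illumination-independent image theory (the exact channel ratios used in the paper may differ):

```python
import numpy as np

def illumination_independent(image, theta_deg):
    """Form the 1-D illumination-independent grayscale by projecting
    each pixel's log-chromaticity onto the calibrated angle theta;
    image is H x W x 3 with strictly positive values."""
    img = image.astype(float)
    chi1 = np.log(img[..., 0] / img[..., 1])  # log(R/G)
    chi2 = np.log(img[..., 2] / img[..., 1])  # log(B/G)
    t = np.deg2rad(theta_deg)
    inv = chi1 * np.cos(t) + chi2 * np.sin(t)
    # Rescale to [0, 255] for display.
    inv = inv - inv.min()
    rng = inv.max()
    if rng > 0:
        inv = inv / rng * 255.0
    return inv
```

Because a shadow scales all channels of a pixel by roughly the same factor, the channel ratios (and hence the projection) are unchanged, which is why shadowed and sunlit road surfaces map to similar gray values.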

#### 4.3. Road Sample Set

#### 4.4. Road Classifier

## 5. Experimental Results

#### 5.1. The Experiment of the CVC Public Dataset

#### 5.1.1. Sampling Method Experiment

In the MII-RD method, 500 sampling points are selected from the current frame $F_{i}$'s safe area, $S_{i}$, and 400 sampling points are randomly selected from the previous frame $F_{i-1}$'s road detection result, $S_{i-1}$. Finally, a total of 900 sampling points are extracted. However, in this paper, we directly select 900 sampling points from $S_{i}$.
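The OLII-RD sampling step can be sketched as a uniform draw from the safe-area mask (the mask layout below is hypothetical):

```python
import numpy as np

def sample_points(mask, n=900, seed=0):
    """Draw n sampling points uniformly from the safe area S_i,
    given as a boolean H x W mask; OLII-RD samples directly from S_i
    instead of combining S_i with the previous frame's result."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=n, replace=len(ys) < n)
    return np.stack([ys[idx], xs[idx]], axis=1)  # (n, 2) array of (row, col)
```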

#### 5.1.2. Qualitative Road Detection Result

#### 5.1.3. Result Evaluation Index
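Assuming the standard pixel-wise definitions, the precision, recall, and F-measure reported in the result tables can be computed as:

```python
import numpy as np

def prf(pred, gt):
    """Pixel-wise precision, recall, and F-measure of a boolean
    detection mask `pred` against a boolean ground truth mask `gt`."""
    tp = np.sum(pred & gt)      # road pixels correctly detected
    fp = np.sum(pred & ~gt)     # non-road pixels marked as road
    fn = np.sum(~pred & gt)     # road pixels missed
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = 2 * p * r / (p + r)     # harmonic mean of precision and recall
    return p, r, f
```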

#### 5.2. The Experiment of the Self-Built Video Sequence

#### 5.2.1. Qualitative Road Detection Result

#### 5.2.2. Result Evaluation Index

#### 5.3. The Experiment of the Strong Sunlight and Low Contrast Condition

#### 5.4. Detection Results Analysis and Discussion

#### 5.5. Time Validity Analysis

## 6. Conclusions

## Data Availability

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Burns, L.D. Sustainable mobility: A vision of our transport future. *Nature* **2013**, *497*, 181.
- Islam, K.T.; Raj, R.G.; Mujtaba, G. Recognition of Traffic Sign Based on Bag-of-Words and Artificial Neural Network. *Symmetry* **2017**, *9*, 138.
- Sivaraman, S.; Trivedi, M.M. Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis. *IEEE Trans. Intell. Transp. Syst.* **2013**, *14*, 1773–1795.
- Xing, Y.; Lv, C.; Chen, L.; Wang, H.; Wang, H.; Cao, D.; Velenis, E.; Wang, F.Y. Advances in Vision-Based Lane Detection: Algorithms, Integration, Assessment, and Perspectives on ACP-Based Parallel Vision. *IEEE/CAA J. Autom. Sin.* **2018**, *5*, 645–661.
- Hillel, A.B.; Lerner, R.; Dan, L.; Raz, G. Recent progress in road and lane detection: A survey. *Mach. Vis. Appl.* **2014**, *25*, 727–745.
- Mendes, C.C.T.; Frémont, V.; Wolf, D.F. Vision-Based Road Detection using Contextual Blocks. *arXiv* **2015**, arXiv:1509.01122.
- Zhang, Y.; Su, Y.; Yang, J.; Ponce, J.; Kong, H. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection. *IEEE Trans. Image Process.* **2018**, *27*, 2176–2188.
- Kong, H.; Audibert, J.Y.; Ponce, J. General road detection from a single image. *IEEE Trans. Image Process.* **2010**, *19*, 2211–2220.
- Munajat, M.D.E.; Widyantoro, D.H.; Munir, R. Road detection system based on RGB histogram filterization and boundary classifier. In Proceedings of the International Conference on Advanced Computer Science and Information Systems, Depok, Indonesia, 10–11 October 2015; pp. 195–200.
- Somawirata, I.K.; Utaminingrum, F. Road detection based on the color space and cluster connecting. In Proceedings of the IEEE International Conference on Signal and Image Processing, Beijing, China, 13–15 August 2016; pp. 118–122.
- Lu, K.; Li, J.; An, X.; He, H. Vision Sensor-Based Road Detection for Field Robot Navigation. *Sensors* **2015**, *15*, 29594–29617.
- Wang, W.; Ding, W.; Li, Y.; Yang, S. An Efficient Road Detection Algorithm Based on Parallel Edges. *Acta Opt. Sin.* **2015**, *35*, 0715001.
- Yang, T.; Wang, M.; Qin, Y. Road detection algorithm using the edge and region features in images. *J. Southeast Univ.* **2013**, *43*, 81–84.
- Zhu, D.; Dai, L.; Luo, Y.; Zhang, G.; Shao, X.; Itti, L.; Lu, J. Multi-Scale Adversarial Feature Learning for Saliency Detection. *Symmetry* **2018**, *10*, 457.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Scene Segmentation. *IEEE Trans. Pattern Anal. Mach. Intell.* **2017**, *39*, 2481–2495.
- Cheng, G.; Wang, Y.; Xu, S.; Wang, H.; Xiang, S.; Pan, C. Automatic Road Detection and Centerline Extraction via Cascaded End-to-End Convolutional Neural Network. *IEEE Trans. Geosci. Remote Sens.* **2017**, *55*, 3322–3337.
- Narayan, A.; Tuci, E.; Labrosse, F.; Alkilabi, M.H.M. Road detection using convolutional neural networks. In Proceedings of the European Conference on Artificial Life (ECAL), Lyon, France, 4–8 September 2017; pp. 314–321.
- Geng, L.; Sun, J.; Xiao, Z.; Zhang, F.; Wu, J. Combining CNN and MRF for Road Detection. *Comput. Electr. Eng.* **2018**, *70*, 895–903.
- Han, X.; Lu, J.; Zhao, C.; You, S.; Li, H. Semisupervised and Weakly Supervised Road Detection Based on Generative Adversarial Networks. *IEEE Signal Process. Lett.* **2018**, *25*, 551–555.
- Costea, A.D.; Nedevschi, S. Traffic scene segmentation based on boosting over multimodal low, intermediate and high order multi-range channel features. In Proceedings of the Intelligent Vehicles Symposium, Los Angeles, CA, USA, 11–14 June 2017; pp. 74–81.
- Alvarez, J.M.Á.; Lopez, A.M. Road Detection Based on Illuminant Invariance. *IEEE Trans. Intell. Transp. Syst.* **2011**, *12*, 184–193.
- Wang, B.; Frémont, V. Fast road detection from color images. In Proceedings of the Intelligent Vehicles Symposium, Gold Coast, QLD, Australia, 23–26 June 2013; pp. 1209–1214.
- Kai, D.U.; Song, Y.C.; Yong-Feng, J.U.; Yao, J.R.; Fang, J.W.; Bao, X. Improved Road Detection Algorithm Based on Illuminant Invariant. *J. Transp. Syst. Eng. Inf. Technol.* **2017**, *17*, 45–52.
- Finlayson, G.D.; Drew, M.S.; Lu, C. Intrinsic Images by Entropy Minimization. In Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; pp. 582–595.
- Xu, X. Nonlinear trigonometric approximation and the Dirac delta function. *J. Comput. Appl. Math.* **2007**, *209*, 234–245.
- Siqueira, F.R.D.; Schwartz, W.R.; Pedrini, H. Multi-scale gray level co-occurrence matrices for texture description. *Neurocomputing* **2013**, *120*, 336–345.
- Vapnik, V.N. An overview of statistical learning theory. *IEEE Trans. Neural Netw.* **1999**, *10*, 988–999.
- Chang, F.; Cui, H.; Liu, C.; Zhao, Y.; Ma, C. Traffic sign detection based on Gaussian color model and SVM. *Chin. J. Sci. Instrum.* **2014**, *35*, 43–49.
- Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. *ACM Trans. Intell. Syst. Technol.* **2011**, *2*, 1–27.
- Dubská, M. Real Projective Plane Mapping for Detection of Orthogonal Vanishing Points. In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2013.
- Fritsch, J.; Kuhnl, T.; Geiger, A. A new performance measure and evaluation benchmark for road detection algorithms. In Proceedings of the International IEEE Conference on Intelligent Transportation Systems, The Hague, The Netherlands, 6–9 October 2013; pp. 1693–1700.

**Figure 2.** Results of sky and hood removal. (**a**) Effect comparison of the sky removal method on the uphill section, (**b**) effect comparison of the sky removal method and hood removal, (**c**) sky and hood removal.

**Figure 3.** Results of the illumination-independent image. (**a**) Shaded road image, (**b**) illumination-independent image, (**c**) shadow area road detection result.

**Figure 4.** Comparison and description of the sampling methods. (**a**) II-RD sampling method. A1~A9 represent nine fixed position sampling areas, and x, y represent the length and width of each area, respectively; x = y = 10 pixels, n = 900 pixels. (**b**) MII-RD sampling method. It combines the sample sets $S_{i}$ and $S_{i-1}$, where $S_{i}$ indicates the safety distance area in front of the vehicle and $S_{i-1}$ indicates the area of the previous frame's road detection result; n = 900 pixels. (**c**) Our sampling method (OLII-RD), where W and H represent the length and width of the safe area, respectively; n = 900 pixels.

**Figure 5.**After getting sampling points through II-RD, MII-RD, and OLII-RD, normal distribution fitting situations are compared for sampling points. From left to right: Normal distribution fitting for DS and RS video sequences, respectively.

**Figure 6.**Comparison of road detection results for CVC datasets. From left to right: Original image, groundtruth map, result of II-RD, result of MII-RD, result of our method, respectively.

**Figure 7.**Comparison graph of the F value for detection algorithms for CVC datasets. From left to right: F curve for DS and RS video sequences, respectively.

**Figure 8.**Comparison of road detection results for VRR datasets. From left to right: Original image, groundtruth map, result of II-RD, result of MII-RD, result of our method, respectively.

**Figure 9.**Comparison graph of F value for detection algorithms for VRR datasets. From left to right: F curve for S1 and S2 video sequences, respectively.

**Figure 10.**Comparison of road detection results for strong sunlight and low contrast images. From left to right: Original image, groundtruth map, result of II-RD, result of MII-RD, result of our method, respectively.

| Sequence | Approach | Mean | SD |
|---|---|---|---|
| DS | II-RD | 90.5175 | 3.0271 |
| DS | MII-RD | 90.1478 | 1.2052 |
| DS | OLII-RD | 90.1139 | 0.9924 |
| RS | II-RD | 90.8848 | 2.3918 |
| RS | MII-RD | 90.5733 | 1.1070 |
| RS | OLII-RD | 90.3438 | 1.0593 |

| Sequence | Approach | $\mathbf{P}\pm \mathbf{\sigma}$ | $\mathbf{R}\pm \mathbf{\sigma}$ | $\mathbf{F}\pm \mathbf{\sigma}$ |
|---|---|---|---|---|
| DS | II-RD | 0.6259 | 0.9933 | 0.7454 |
| DS | MII-RD | 0.8565 ± 0.0091 | 0.9838 ± 0.0003 | 0.9123 ± 0.0033 |
| DS | OLII-RD | 0.9014 ± 0.0036 | 0.9902 ± 0.0011 | 0.9437 ± 0.0017 |
| RS | II-RD | 0.7321 | 0.9813 | 0.8275 |
| RS | MII-RD | 0.9125 ± 0.0034 | 0.9597 ± 0.0011 | 0.9337 ± 0.0008 |
| RS | OLII-RD | 0.9216 ± 0.0028 | 0.9842 ± 0.0009 | 0.9519 ± 0.0014 |

| Sequence | Approach | $\mathbf{P}\pm \mathbf{\sigma}$ | $\mathbf{R}\pm \mathbf{\sigma}$ | $\mathbf{F}\pm \mathbf{\sigma}$ |
|---|---|---|---|---|
| S1 | II-RD | 0.6846 | 0.7250 | 0.7042 |
| S1 | MII-RD | 0.8438 ± 0.0029 | 0.8068 ± 0.0012 | 0.8222 ± 0.0017 |
| S1 | OLII-RD | 0.9162 ± 0.0017 | 0.8873 ± 0.0012 | 0.9016 ± 0.0014 |
| S2 | II-RD | 0.9122 | 0.8515 | 0.8808 |
| S2 | MII-RD | 0.9908 ± 0.0001 | 0.8833 ± 0.0003 | 0.9340 ± 0.0002 |
| S2 | OLII-RD | 0.9755 ± 0.0001 | 0.9219 ± 0.0002 | 0.9479 ± 0.0001 |

| Sequence | Length | nFrames | Location | Capture Time | Precision | Recall | F-Measure |
|---|---|---|---|---|---|---|---|
| 1 | 1 m 53 s | 680 | suburb | morning | 91.52 | 93.18 | 92.34 |
| 2 | 3 m 8 s | 1128 | suburb | afternoon | 92.67 | 94.35 | 93.50 |
| 3 | 2 m 37 s | 942 | school | midday | 93.29 | 95.87 | 94.56 |
| 4 | 2 m 17 s | 828 | urban | nightfall | 90.83 | 89.23 | 90.02 |
| 5 | 1 m 20 s | 480 | urban | afternoon | 88.76 | 92.61 | 90.64 |
| Total | 11 m 15 s | 4058 | - | - | 91.41 | 93.05 | 92.22 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Song, Y.; Ju, Y.; Du, K.; Liu, W.; Song, J.
Online Road Detection under a Shadowy Traffic Image Using a Learning-Based Illumination-Independent Image. *Symmetry* **2018**, *10*, 707.
https://doi.org/10.3390/sym10120707
