Remote Sensing 2017, 9(6), 590; https://doi.org/10.3390/rs9060590

Article
Road Detection by Using a Generalized Hough Transform
1 College of Information and Control Engineering, China University of Petroleum (East China), Qingdao 266580, China
2 The 16th Institute, China Aerospace Science and Technology Corporation, Xi’an 710100, China
3 School of Information Science and Engineering, Yunnan University, Kunming 650091, China
* Authors to whom correspondence should be addressed.
Academic Editors: Qi Wang, Nicolas H. Younan, Carlos López-Martínez and Prasad S. Thenkabail
Received: 28 April 2017 / Accepted: 8 June 2017 / Published: 10 June 2017

Abstract

Road detection plays a key role in remote sensing image analytics. The Hough transform (HT) is a classical method for road detection, especially straight line road detection. Although many variants of the Hough transform have been reported, it is still a great challenge to develop a Hough transform algorithm with low computational complexity and low running time. In this paper, we propose a generalized Hough transform (i.e., Radon transform) implementation for road detection in remote sensing images. Specifically, we present a dictionary learning method to approximate the Radon transform. The proposed approximation method treats the Radon transform as a linear transform, which then facilitates parallel implementation of the Radon transform for multiple images. To evaluate the proposed algorithm, we conduct extensive experiments on the popular RSSCN7 database for straight road detection. The experimental results demonstrate that our method is superior to the traditional algorithms in terms of accuracy and computational complexity.
Keywords:
Hough transform; dictionary learning; road detection; Radon transform

1. Introduction

The determination of the location and orientation of a straight line road is a fundamental task for many computer vision applications such as road network extraction [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23], image registration [4], visual tracking [5], robot autonomous navigation [6], hyperspectral image classification [7,8], Global Navigation Satellite Systems (GNSS) [9,10], unmanned aerial vehicle images [11], and sports video broadcasting [12,13]. The Hough transform (HT) [14,15,16] is one of the classical methods and has been widely applied in computer vision and digital image processing. It transforms the problem of global detection in a binary image into peak detection in a Hough parameter space. Dozens of HT extensions have been developed for solving the straight line road detection problem. In particular, these methods can be divided into the following four groups: generalized HT (GHT) [17,18,19,20,21], randomized HT (RHT) [22,23,24,25], probabilistic HT (PHT) [26,27,28,29], and fuzzy HT (FHT) [30,31,32].
The generalized HT (GHT) [17,18,19,20,21] detects arbitrary object curves (i.e., shapes having no or only a complex analytical form) by transforming the curves in image space into a four-dimensional parameter space. For example, Lo et al. [18] developed a perspective-transformation-invariant GHT (PTIGHT) that uses a new perspective reference table (PR-table) to detect perspective planar shapes. Ji et al. [19] proposed a fuzzy GHT that uses fuzzy set theory to focus the vote peaks onto one point. Xu et al. [20] developed a robust invariant GHT (RIGHT) based on a robust shape model built by an iterative training method. Yang et al. [21] proposed a polygon-invariant GHT (PI-GHT) that exploits scale- and rotation-invariant polygon triangle features to accomplish high-speed vision-based positioning.
The randomized Hough transform (RHT) [22,23,24,25] reduces computation and storage by using random sampling in image space, converging mapping, and dynamic storage. Lu et al. [23] proposed an iterative randomized HT (IRHT) that iteratively shrinks the target area from the entire image to the region of interest. Jiang [24] determined sample points and candidate circles by probability sampling and optimized methods to avoid false detections. Lu et al. [25] developed a direct inverse RHT (DIRHT) that combines the inverse HT with the RHT and is able to enhance the target ellipse in strongly noisy images.
The probabilistic Hough transform (PHT) [26,27,28,29] defines a Hough transform in a mathematically "correct" form with a likelihood function over the output parameters. Matas et al. [27] proposed the progressive PHT (PPHT), which utilizes the difference in the fraction of votes to greatly reduce the computation of line detection. Galambos et al. [28] controlled the voting process with gradient information to improve the performance of the PPHT. Qiu and Wang [29] proposed an improved PPHT that exploits segment-weighted voting and density-based segment filtering to improve the accuracy rate.
The fuzzy Hough transform (FHT) [30,31,32] finds target shapes in noisy images by fitting data points approximately. Basak and Pal [31] utilized gray-level images in the FHT (gray FHT) to handle shape distortion. Pugin and Zhiznyakov [32] proposed a new method of filtering or fusing straight lines after performing the FHT, thus avoiding the detection of unnecessary linear features.
Although the Hough transform and its many variants have achieved good results, it is still a great challenge to develop an HT algorithm with low computational complexity and low running time. In this paper, we propose a new method based on a generalized HT (i.e., the Radon transform) and apply it to straight road detection in remote sensing images. We adopt a dictionary learning method [33] to approximate the Radon transform. The proposed approximation method makes two significant contributions: (1) our method treats the Radon transform as a linear transform, which greatly reduces the computational complexity; and (2) the linear transformation makes it possible to realize a parallel implementation of the Radon transform for multiple images, which saves time. To evaluate the proposed algorithm, we conduct extensive experiments on the popular RSSCN7 database for straight road detection. The experimental results demonstrate that our method is superior to the traditional HT algorithm in terms of accuracy and computational complexity.
The rest of this paper is arranged as follows. Section 2 briefly reviews the related works including the Hough transform and Radon transform. Section 3 presents the dictionary learning method to approximate the Radon transform. Section 4 describes the extensive experiments and discusses the experimental results. Finally, Section 5 gives some conclusions.

2. Related Work

In this section, we review some related works including Hough transform and Radon transform.
A Hough transform [14,15,16] detects shapes in binary images by using an array called the parameter space. Each point in the binary image votes in the parameter space, and the highest vote counts in the parameter space represent shapes sharing the same parameters in the original image. Generally, a straight line on a two-dimensional plane ( s 1 , s 2 ) is parameterized by its slope ( k ) and intercept ( b ). All points of a straight line map to curves that intersect at one point in the ( k , b ) parameter space (Figure 1).
However, when a parameter value becomes infinite (i.e., k = ∞ for a vertical line), the slope-intercept parametrization of a straight line has a singularity. Duda and Hart [34] therefore proposed parameterizing straight lines by ( ρ , θ ) (Figure 2). The mapping between an image point ( s 1 , s 2 ) and the ( ρ , θ ) parameter space satisfies the following:
ρ = s1 cos(θ) + s2 sin(θ)
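As a concrete illustration, the ( ρ , θ ) voting scheme can be sketched in a few lines of numpy. The discretization choices (one-degree angular bins, rounded ρ) are our own, not from the paper:

```python
import numpy as np

def hough_line_accumulator(binary_img, n_theta=180):
    """Vote each foreground pixel into a (rho, theta) accumulator.

    Minimal sketch of the normal-form Hough transform; the peak cell
    of the returned accumulator gives the dominant line's (rho, theta).
    """
    h, w = binary_img.shape
    thetas = np.deg2rad(np.arange(n_theta))      # angles 0..179 degrees
    diag = int(np.ceil(np.hypot(h, w)))          # bound on |rho|
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(binary_img)              # foreground pixels
    for x, y in zip(xs, ys):
        # each pixel votes along its sinusoid rho(theta)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accumulator[rhos + diag, np.arange(n_theta)] += 1
    return accumulator, thetas, diag
```

For a horizontal line, every pixel's sinusoid passes through the same ( ρ , θ = 90° ) cell, so the votes concentrate into a single sharp peak.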
Considering that a Hough transform can only be used on binary images, the Radon transform extends this concept to straight line detection in grayscale images [35]. If we denote by y ( s 1 , s 2 ) an image on a two-dimensional Euclidean plane, the Radon transform x ( ρ , θ ) of the image y can be expressed as follows [36]:
x(ρ, θ) = ℛy = ∬_{ℝ²} y(s1, s2) δ(ρ − s1 cos(θ) − s2 sin(θ)) ds1 ds2
where δ(·) is the Dirac delta function, ℛ is the Radon operator, y ( s 1 , s 2 ) is the grayscale value at point ( s 1 , s 2 ) , ρ is the perpendicular distance from the origin to the straight line, and θ is the angle between the normal of the line and the s 1 axis. Each point y ( s 1 , s 2 ) is mapped to a sinusoidal curve in the parameter space, and a single point ( ρ , θ ) in the parameter space represents a line in image space.
The inverse Radon transform is defined as
y(s1, s2) = ℛ⁻¹x(ρ, θ) = ∫_0^π z[s1 cos(θ) + s2 sin(θ), θ] dθ
z(ρ, θ) = ∫_{−∞}^{+∞} |ω| X(ω, θ) e^{j2πωρ} dω
where ℛ⁻¹ = C is the inverse Radon operator and X ( ω , θ ) is the Fourier transform of x ( ρ , θ ) at angle θ . Formulas (3) and (4) constitute the filtered back-projection algorithm, which is used to compute the inverse Radon transform.
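To make the continuous definition concrete, a crude discrete Radon transform can be obtained by binning each pixel's gray value into its ( ρ , θ ) sinusoid about the image centre. This numpy-only sketch uses nearest-neighbour binning; production implementations interpolate along projection rays instead:

```python
import numpy as np

def radon_discrete(img, n_theta=180):
    """Nearest-neighbour approximation of the Radon transform.

    Each pixel's intensity is accumulated into the (rho, theta) bin it
    falls on, with rho measured from the image centre. Illustrative
    rather than production code.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0        # image centre
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w) / 2.0))    # bound on |rho|
    sino = np.zeros((2 * diag + 1, n_theta))
    ys, xs = np.mgrid[0:h, 0:w]
    s1, s2 = (xs - cx).ravel(), (ys - cy).ravel()
    vals = img.ravel().astype(float)
    for j, t in enumerate(thetas):
        rhos = np.round(s1 * np.cos(t) + s2 * np.sin(t)).astype(int)
        np.add.at(sino[:, j], rhos + diag, vals)  # accumulate intensity
    return sino
```

A bright straight line in the image then appears as a single high-valued point in the ( ρ , θ ) output, exactly the peak the detection step looks for.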

3. Dictionary Learning Based Radon Transform

In this section, we introduce a dictionary learning method to approximate the Radon transform. Specifically, we use a linear transform to approximate the discretized form of Formula (3) in practice. The relationship between the discretized parameter-space image x and the discretized image data y can be defined as [37]:
y = C x
where C is the discrete inverse Radon operator, y ∈ ℝ^{mn} denotes the vectorized form of the m × n image, and x ∈ ℝ^{pq} denotes the vectorized form of the p × q parameter-space image.
In this paper, we employ a dictionary learning method to obtain the matrix C . Suppose the N training samples are Y = (y1, y2, …, yN) ∈ ℝ^{mn×N}, where yi ∈ ℝ^{mn} denotes the i-th vectorized image, and X = (x1, x2, …, xN) ∈ ℝ^{pq×N}, where xi ∈ ℝ^{pq} denotes the i-th vectorized parameter-space image. Our purpose is to learn a dictionary C ∈ ℝ^{mn×pq} based on Equation (5):
(y1, y2, …, yN) = C (x1, x2, …, xN)
Since X is not a square matrix, the matrix C can be calculated by the least-squares method through minimizing the following objective function:
J = ‖Y − CX‖²
where ‖·‖ denotes the 2-norm. By minimizing objective function (7), we have
C = Y (XᵀX)⁻¹ Xᵀ,  when pq > N;
or
C = Y Xᵀ (XXᵀ)⁻¹,  when pq < N.
Since the matrix X Xᵀ or Xᵀ X may be singular or close to singular, we add a damping factor α (ranging from 0.1 to 1) to ensure numerical stability:
C = Y (XᵀX + αI)⁻¹ Xᵀ,  when pq > N;
or
C = Y Xᵀ (XXᵀ + αI)⁻¹,  when pq < N;
where Xᵀ is the transpose of X and I is the identity matrix.
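Assuming numpy is available, the damped least-squares solutions above can be written down directly (the function and variable names are ours):

```python
import numpy as np

def learn_dictionary(Y, X, alpha=0.5):
    """Learn C with Y ≈ C X by damped least squares.

    Y : (mn, N) matrix of vectorized training images.
    X : (pq, N) matrix of vectorized parameter-space images.
    alpha : damping factor for numerical stability.
    Chooses whichever Gram matrix (X^T X or X X^T) is the smaller one,
    as in the two cases of the text.
    """
    pq, N = X.shape
    if pq > N:
        # C = Y (X^T X + alpha I)^{-1} X^T
        return Y @ np.linalg.solve(X.T @ X + alpha * np.eye(N), X.T)
    # C = Y X^T (X X^T + alpha I)^{-1}
    return Y @ X.T @ np.linalg.inv(X @ X.T + alpha * np.eye(pq))
```

With a small α and well-conditioned training data, the learned C reproduces the generating operator; larger α trades fit for stability.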
Hence, the Radon transform of an image can be treated as a matrix multiplication (i.e., a linear transform):
X = C⁻¹Y
Since C is not a square matrix, we can obtain the value of X by minimizing the following objective function:
J = ‖Y − CX‖²
we have
X = (CᵀC)⁻¹ CᵀY,  when mn > pq;
or
X = Cᵀ (CCᵀ)⁻¹ Y,  when mn < pq.
Similarly, to ensure numerical stability, we add a damping factor α :
X = (CᵀC + αI)⁻¹ CᵀY,  when mn > pq;
or
X = Cᵀ (CCᵀ + αI)⁻¹ Y,  when mn < pq;
where Cᵀ is the transpose of C .
Our method treats the Radon transform as a linear transform, which enables parallel computation of the Radon transform for multiple images:
(x1, x2, …, xN) = (CᵀC + αI)⁻¹ Cᵀ (y1, y2, …, yN),  when mn > pq;
or
(x1, x2, …, xN) = Cᵀ (CCᵀ + αI)⁻¹ (y1, y2, …, yN),  when mn < pq.
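The batch computation above can be implemented so that one matrix factorization is shared by all N images, which is where the time saving comes from (a sketch; mn and pq are the vectorized image and parameter-space sizes):

```python
import numpy as np

def batch_transform(C, Y, alpha=0.5):
    """Map many vectorized images to parameter space at once.

    C : (mn, pq) learned dictionary; Y : (mn, N) matrix whose columns
    are vectorized test images. Solves the damped normal equations
    once and reuses the solve for every column of Y.
    """
    mn, pq = C.shape
    if mn > pq:
        # X = (C^T C + alpha I)^{-1} C^T Y
        return np.linalg.solve(C.T @ C + alpha * np.eye(pq), C.T @ Y)
    # X = C^T (C C^T + alpha I)^{-1} Y
    return C.T @ np.linalg.solve(C @ C.T + alpha * np.eye(mn), Y)
```

Stacking images as columns of Y replaces N separate transforms with one matrix product, matching the parallel formulation above.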
The advantages of our solution are two-fold. Firstly, the linear form (5) of the Radon operator makes it convenient and reasonable to improve performance by adding special regularizations. For example, we can incorporate our objective function (5) into the regularization framework:
x̂ = arg min_x ‖y − Cx‖² + α² φ(x)
where φ(x) is a regularization term such as a norm regularizer or a log regularizer. Norm regularization terms take the form φ_l1(x) = ‖x‖₁ = Σᵢ |xᵢ| for l1-regularization, φ_l2(x) = ‖x‖₂² = Σᵢ xᵢ² for l2-regularization, and φ_lp(x) = (1/p) Σᵢ |xᵢ|^p, p < 1, for lp-regularization. The log regularization term has the form φ_log(x) = Σᵢ log|xᵢ|. We will verify the effect of adding regularization terms in future work.
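For instance, the l1-regularized variant could be solved with the iterative shrinkage-thresholding algorithm (ISTA). This is only an illustration of the regularization framework; the paper leaves regularized variants to future work:

```python
import numpy as np

def ista_l1(C, y, lam=0.1, n_iter=200):
    """Minimize ||y - C x||^2 + lam * ||x||_1 by ISTA.

    Each iteration takes a gradient step on the quadratic term and
    then soft-thresholds; the step size 1/L uses L = 2 * sigma_max(C)^2,
    the Lipschitz constant of the gradient.
    """
    L = 2.0 * np.linalg.norm(C, 2) ** 2              # spectral norm squared
    x = np.zeros(C.shape[1])
    for _ in range(n_iter):
        g = x - 2.0 * (C.T @ (C @ x - y)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x
```

The soft-thresholding step zeros out small coefficients, which is exactly the sparsity-promoting effect the l1 term is meant to contribute in the parameter space.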
Secondly, the linear transformation makes it possible to detect straight line roads in multiple images at one time, which significantly reduces time consumption.

4. Experiments and Discussion

In order to evaluate the performance of our method, we conducted extensive experiments on RSSCN7 [38]. The RSSCN7 database is a remote sensing database released in 2015; each image is 400 × 400 pixels. It contains 2800 remote sensing scene images drawn from seven typical scene categories: grassland, forest, farmland, parking lot, residential region, industrial region, and river/lake. In this paper, we selected 170 remote sensing images containing a straight line road to verify the proposed algorithm, and these 170 color images were converted to grayscale in the preprocessing stage. In particular, 150 images are used as the training set and the remaining 20 as the test set. Some selected remote sensing images are shown in Figure 3.
In order to obtain sufficient training images, we rotated those 150 images from 0 to 180 degrees with a fixed step of 10 degrees. We thus obtained 2700 grayscale images of the same size in total by cropping the rotated images. Finally, the 2700 grayscale images were resized to 128 × 128, and all test images were also resized to 128 × 128.
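The rotation-based augmentation can be sketched with scipy; the interpolation and resizing choices below are our own assumptions, since the paper does not specify them:

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment_by_rotation(img, step_deg=10, out_size=128):
    """Rotate a grayscale image from 0 to 180 degrees in fixed steps,
    then resize every rotation to out_size x out_size, mirroring the
    paper's 150 images x 18 rotations = 2700 training images.
    """
    out = []
    for angle in range(0, 180, step_deg):
        r = rotate(img, angle, reshape=False, order=1)   # keep shape
        out.append(zoom(r, (out_size / r.shape[0],
                            out_size / r.shape[1]), order=1))
    return out
```

Each source image yields 18 rotations (0°, 10°, ..., 170°), so 150 training images produce the 2700 samples used above.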
In this section, we demonstrate some experimental results of test samples and illustrate how our method is superior to the traditional algorithms in terms of accuracy and computing complexity.
Figure 4, Figure 5, Figure 6 and Figure 7 illustrate the experimental results for four test samples. We now compare the experimental results of our method with those of the traditional Radon transform.
Figure 4a, Figure 5a, Figure 6a and Figure 7a show the Radon transform of each test sample in a two-dimensional parameter space, and Figure 4i, Figure 5i, Figure 6i and Figure 7i show the transform image from our method in a two-dimensional parameter space. The single distinctly bright spot in each of these figures corresponds to the detected line (i.e., the red line) overlaid on the test image. In Figure 4a, Figure 5a, Figure 6a and Figure 7a, it is not easy to isolate the bright spot that matches the straight road in the test images, because of the many interference highlights in the transform domain; the bright-spot area is visually cluttered. In contrast, our algorithm reduces the effect of cluttered interference bright spots, and Figure 4i, Figure 5i, Figure 6i and Figure 7i show a single bright spot corresponding to the detected line. Comparing the two sets of figures, the proposed method is superior to the conventional Radon transform in terms of the visual effect.
Figure 4b, Figure 5b, Figure 6b and Figure 7b show the three-dimensional form of the Radon transform, and Figure 4j, Figure 5j, Figure 6j and Figure 7j show the three-dimensional form of our method. The peak (i.e., the bright spot in the two-dimensional images) corresponds to the detected line overlaid on the test image. As seen in Figure 4b, Figure 5b, Figure 6b and Figure 7b, it is not easy to isolate the actual peak corresponding to the road owing to the clutter in the transform domain; such clutter can lead to false or missed road detections. From Figure 4j, Figure 5j, Figure 6j and Figure 7j, we can see that our method greatly accentuates the peak amplitude relative to the background, so the peak point corresponding to the actual location of the straight road is easy to distinguish visually; the clutter interference is reduced to a large extent, and the true peak corresponding to the road is easily isolated.
Figure 4c, Figure 5c, Figure 6c and Figure 7c show the detected line from the Radon transform overlaid on the test samples, and Figure 4k, Figure 5k, Figure 6k and Figure 7k show the detected line from our method overlaid on the test samples. The locations of the true and estimated straight line roads are shown in Table 1. The ground truth parameters ( ρ , θ ) of the straight road were obtained by manually marking the sample images.
It can be seen from Figure 4b,c that the peak point does not match the straight road in test sample a215, whereas Figure 4j,k show a conspicuous peak point that corresponds to the straight road. The experimental results for test sample a215 illustrate that our method gives a better detection result than the traditional Radon transform when the detection target is not obvious.
The enlarged part in Figure 5c shows the detected line from the Radon transform, and the enlarged part in Figure 5k shows the detected line from our method. We can see that our detected result is closer to the true straight road.
By observing Figure 6b, we can see that the peaks in the three-dimensional parameter space do not focus on one point; scattered peaks result in a wrong detection. Figure 6j illustrates that the peak obtained by our method is more concentrated and easier to distinguish, and from Figure 6k we can see that the detected lines from our method correspond to the actual locations of the roads.
Test sample g146 contains noise that resembles the straight road. From Figure 7c, we see that the detected line from the traditional Radon transform does not match the straight road in the noisy image very well, whereas our method is robust to noisy images, as shown in Figure 7k.
The above experimental results indicate that our method can accurately detect the position of the straight road when the noise level is high or the road characteristics are not obvious, which illustrates that our method is more robust and that its detected results are closer to the actual road location.
We also compared our method with the traditional Hough transform. A Hough transform can only be used on binary images; although binarization weakens the background noise, it also causes the loss of some road information.
Figure 4d, Figure 5d, Figure 6d and Figure 7d show the binary images of the test samples. Comparing the two-dimensional parameter-space images in Figure 4, Figure 5, Figure 6 and Figure 7, we see that Figure 4e, Figure 5e, Figure 6e and Figure 7e still contain many interference bright spots even though binarization weakens the background noise, whereas Figure 4i, Figure 5i, Figure 6i and Figure 7i contain only the true bright spot. From the three-dimensional transform images in Figure 4, Figure 5, Figure 6 and Figure 7, we see that our method greatly accentuates the area of high intensity in the transform domain relative to the background.
Because a binary image loses some road information, it can be seen from Figure 7g,k that the detected line from the Hough transform does not correspond to the true position of the straight road.
To clearly compare our method with the Radon transform and the Hough transform, we also report receiver operating characteristic (ROC) curves in Figure 4h, Figure 5h, Figure 6h and Figure 7h. The ROC curves were produced by varying a threshold parameter: if a peak point surpasses the threshold, the corresponding pixel is classified as a road pixel; otherwise it is classified as a noise pixel. The ground truth data were obtained by manually marking the remote sensing images. The x-axis is the false positive rate (FPR), which is calculated by:
FPR = Negatives incorrectly classified / Total negatives
and the y-axis is the true positive rate (TPR), which is calculated by:
TPR = Positives correctly classified / Total positives
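The threshold sweep described above can be sketched as follows (variable names are ours; TPR is the fraction of road pixels correctly classified, and FPR is the fraction of background pixels incorrectly classified as road):

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """Return (FPR, TPR) pairs for each threshold.

    labels: 1 for ground-truth road pixels, 0 for background;
    a pixel is predicted 'road' when its score meets the threshold.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    P = (labels == 1).sum()          # total positives
    N = (labels == 0).sum()          # total negatives
    pts = []
    for t in thresholds:
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / P   # correctly classified positives
        fpr = (pred & (labels == 0)).sum() / N   # incorrectly classified negatives
        pts.append((fpr, tpr))
    return pts
```

Sweeping the threshold from high to low traces the curve from (0, 0) toward (1, 1); the area under it is the accuracy measure used below.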
The accuracy of the detection methods is measured by the area under the ROC curve. As shown in Figure 4h, Figure 5h, Figure 6h and Figure 7h, our method outperforms the traditional Radon transform and Hough transform in accuracy.
To further demonstrate the performance of our method, we show the experimental results of another two test samples in Figure 8 and Figure 9. The description of the experimental results in Figure 8 and Figure 9 is the same as above.
Specifically, test sample a038 is a grayscale image with a shorter straight road. Figure 8b shows an indistinguishable peak point owing to the clutter in the transform domain, and the detected line overlaid on a038 does not match the straight road. From Figure 8j, we see that our method greatly accentuates the area of high intensity in the transform domain relative to the background. The experimental results in Figure 8 illustrate that our method can detect a shorter straight road well. The same conclusion can be drawn from the experimental result for test sample b230. This indicates that our algorithm is more sensitive to shorter line roads.
In particular, our method is able to complete line road detection for multiple images at one time. When dealing with a large number of images, our method facilitates parallel implementation of the Radon transform for multiple images (i.e., replacing the vector yi with a matrix). Table 2 shows the running-time comparison between our method and the traditional Radon transform, averaged over the 20 test samples. The Radon transform takes 0.106 s per test image, but our method takes only 0.027 s. The results in Table 2 show that our method is nearly 4 times faster than the Radon transform.
In summary, our method is superior to the traditional Radon transform in terms of accuracy and computational complexity.
Table 3 reports the mean error and the variance of the error. We can see that the mean error of our method is much lower than that of the traditional Radon transform; hence, the parameters ( ρ , θ ) detected by our method are closer to the ground truth. The variance values show that our method is also more stable in detecting straight lines.

5. Conclusions

Road detection plays a key role in remote sensing image analytics and has attracted intensive attention. The Hough transform (HT) is a classical method for road detection, especially straight line road detection, and many HT variants have been proposed. However, developing an HT algorithm with low computational complexity and low running time is still a great challenge. To address this problem, we presented an approximation method that treats the Radon transform as a linear transform, which facilitates parallel implementation of the Radon transform for multiple images. Extensive experiments conducted on the RSSCN7 database show that our method is superior to the traditional Radon transform in terms of both accuracy and computational complexity. In future work, we will further study regularization functions based on our algorithm to optimize our method.

Acknowledgments

This paper was supported by the National Natural Science Foundation of China under Grants 61671480 and 61572486, and by the Fundamental Research Funds for the Central Universities, China University of Petroleum (East China), under Grant 14CX02203A.

Author Contributions

Weifeng Liu, Zhenqing Zhang and Shuying Li conceived and designed the experiments; Zhenqing Zhang performed the experiments; Shuying Li and Dapeng Tao analyzed the data, Weifeng Liu and Zhenqing Zhang contributed to the writing of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, L.; Lehtomäki, M.; Hyyppä, J.; Puttonen, E.; Krooks, A.; Hyyppä, H. Automated 3D scene reconstruction from open geospatial data sources: Airborne laser scanning and a 2D topographic database. Remote Sens. 2015, 7, 6710–6740. [Google Scholar] [CrossRef]
  2. Cheng, L.; Wu, Y.; Tong, L.; Chen, Y.; Li, M. Hierarchical Registration Method for Airborne and Vehicle LiDAR Point Cloud. Remote Sens. 2015, 7, 13921–13944. [Google Scholar] [CrossRef]
  3. Maboudi, M.; Amini, J.; Hahn, M.; Saati, M. Road Network Extraction from VHR Satellite Images Using Context Aware Object Feature Integration and Tensor Voting. Remote Sens. 2016, 8, 637. [Google Scholar] [CrossRef]
  4. Han, J.; Pauwels, E.J.; De Zeeuw, P. Visible and infrared image registration in man-made environments employing hybrid visual features. Pattern Recognit. Lett. 2013, 34, 42–51. [Google Scholar] [CrossRef]
  5. Wang, Q.; Fang, J.; Yuan, Y. Multi-cue based tracking. Neurocomputing 2014, 131, 227–236. [Google Scholar] [CrossRef]
  6. Wang, Q.; Fang, J.; Yuan, Y. Adaptive road detection via context-aware label transfer. Neurocomputing 2015, 158, 174–183. [Google Scholar] [CrossRef]
  7. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced Spectral Classifiers for Hyperspectral Images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [Google Scholar] [CrossRef]
  8. Wang, Q.; Lin, J.; Yuan, Y. Salient band selection for hyperspectral image classification via manifold ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  9. Gao, Z.; Shen, W.; Zhang, H.; Ge, M.; Niu, X. Application of Helmert Variance Component Based Adaptive Kalman Filter in Multi-GNSS PPP/INS Tightly Coupled Integration. Remote Sens. 2016, 8, 553. [Google Scholar] [CrossRef]
  10. Qian, C.; Liu, H.; Tang, J.; Chen, Y.; Kaartinen, H.; Kukko, A.; Zhu, L.; Liang, X.; Chen, L.; Hyyppä, J. An Integrated GNSS/INS/LiDAR-SLAM Positioning Method for Highly Accurate Forest Stem Mapping. Remote Sens. 2016, 9, 3. [Google Scholar] [CrossRef]
  11. Duan, F.; Wan, Y.; Deng, L. A Novel Approach for Coarse-to-Fine Windthrown Tree Extraction Based on Unmanned Aerial Vehicle Images. Remote Sens. 2017, 9, 306. [Google Scholar] [CrossRef]
  12. Han, J.; Farin, D.; de With, P.H. Broadcast court-net sports video analysis using fast 3-D camera modeling. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1628–1638. [Google Scholar]
  13. Han, J.; Farin, D.; de With, P. A mixed-reality system for broadcasting sports video to mobile devices. IEEE MultiMed. 2011, 18, 72–84. [Google Scholar] [CrossRef]
  14. Ye, H.; Shang, G.; Wang, L.; Zheng, M. A new method based on hough transform for quick line and circle detection. In Proceedings of the International Conference on Biomedical Engineering and Informatics, Shenyang, China, 14–16 October 2015; pp. 52–56. [Google Scholar]
  15. Hough, P.V.C. Method and Means for Recognizing Complex Patterns. U.S. Patent 3,069,654, 18 December 1962. [Google Scholar]
  16. Mukhopadhyay, P.; Chaudhuri, B.B. A survey of Hough Transform. Pattern Recognit. 2015, 48, 993–1010. [Google Scholar] [CrossRef]
  17. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122. [Google Scholar] [CrossRef]
  18. Lo, R.C.; Tsai, W.H. Perspective-transformation-invariant generalized Hough transform for perspective planar shape detection and matching. Pattern Recognit. 1997, 30, 383–396. [Google Scholar] [CrossRef]
  19. Ji, Y.; Mao, L.; Huang, Q.; Gao, Y. Research on object shape detection from image with high-level noise based on fuzzy generalized Hough Transform. In Proceedings of the Multimedia and Signal Processing (CMSP), Guilin, Guangxi, China, 14–15 May 2011; pp. 209–212. [Google Scholar]
  20. Xu, J.; Sun, X.; Zhang, D.; Fu, K. Automatic detection of inshore ships in high-resolution remote sensing images using robust invariant generalized Hough transform. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2070–2074. [Google Scholar]
  21. Yang, H.; Zheng, S.; Lu, J.; Yin, Z. Polygon-Invariant Generalized Hough Transform for High-Speed Vision-Based Positioning. IEEE Trans. Autom. Sci. Eng. 2016, 13, 1367–1384. [Google Scholar] [CrossRef]
  22. Xu, L.; Oja, E.; Kultanen, P. A new curve detection method: Randomized Hough transform (RHT). Pattern Recognit. Lett. 1990, 11, 331–338. [Google Scholar] [CrossRef]
  23. Lu, W.; Tan, J. Detection of incomplete ellipse in images with strong noise by iterative randomized Hough transform (IRHT). Pattern Recognit. 2008, 41, 1268–1279. [Google Scholar] [CrossRef]
  24. Jiang, L. Efficient randomized Hough transform for circle detection using novel probability sampling and feature points. Opt. Int. J. Light Electron. Opt. 2012, 123, 1834–1840. [Google Scholar] [CrossRef]
  25. Lu, W.; Yu, J.; Tan, J. Direct inverse randomized Hough transform for incomplete ellipse detection in noisy images. J. Pattern Recognit. Res. 2014, 1, 13–24. [Google Scholar] [CrossRef]
  26. Stephens, R.S. Probabilistic approach to the Hough transform. Image Vis. Comput. 1991, 9, 66–71. [Google Scholar] [CrossRef]
  27. Matas, J.; Galambos, C.; Kittler, J. Robust detection of lines using the progressive probabilistic hough transform. Comput. Vis. Image Underst. 2000, 78, 119–137. [Google Scholar] [CrossRef]
  28. Galambos, C.; Kittler, J.; Matas, J. Gradient based progressive probabilistic Hough transform. IEE Proc. Vis. Image Signal. Process. 2001, 148, 158–165. [Google Scholar] [CrossRef]
  29. Qiu, S.; Wang, X. The improved progressive probabilistic hough transform for paper wrinkle detection. In Proceedings of the International Conference on Signal Processing (ICSP), Beijing, China, 21–25 October 2012; pp. 783–786. [Google Scholar]
  30. Han, J.H.; Kóczy, L.; Poston, T. Fuzzy hough transform. Pattern Recognit. Lett. 1994, 15, 649–658. [Google Scholar] [CrossRef]
  31. Basak, J.; Pal, S.K. Theoretical quantification of shape distortion in fuzzy Hough transform. Fuzzy Sets Syst. 2005, 154, 227–250. [Google Scholar] [CrossRef]
  32. Pugin, E.V.; Zhiznyakov, A.L. Filtering of meaningful features of fuzzy Hough transform. In Proceedings of the Dynamics of Systems, Mechanisms and Machines (Dynamics), Omsk, Russia, 15–17 November 2016; pp. 1–5. [Google Scholar]
  33. Liu, T.; Tao, D. On the performance of manhattan nonnegative matrix factorization. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1851–1863. [Google Scholar] [CrossRef] [PubMed]
  34. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
  35. Deans, S.R. Hough transform from the Radon transform. IEEE Trans. Pattern Anal. Mach. Intell. 1981, 2, 185–188. [Google Scholar] [CrossRef]
  36. Deans, S.R. The Radon Transform and Some of Its Application; John Wiley & Sons Inc.: New York, NY, USA, 1983; Volume 223, pp. 3–4. [Google Scholar]
  37. Aggarwal, N.; Karl, W.C. Line detection in images through regularized Hough transform. IEEE Trans. Image Process. 2006, 15, 582–591. [Google Scholar] [CrossRef] [PubMed]
  38. Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep learning based feature selection for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2321–2325. [Google Scholar] [CrossRef]
Figure 1. Mapping of P1 and P2 from Cartesian space to the slope-intercept parameter space.
Figure 2. Mapping of P1 and P2 from Cartesian space to the (ρ, θ) parameter space.
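The mapping in Figure 2 can be sketched numerically: each Cartesian point (x, y) traces the sinusoid ρ = x cos θ + y sin θ in the (ρ, θ) parameter space, and the curves of collinear points intersect at the parameters of the line through them. A minimal illustration (not the paper's code; the two sample points are hypothetical):

```python
import numpy as np

# Illustrative sketch (not the paper's code): each Cartesian point (x, y)
# maps to the sinusoid rho = x*cos(theta) + y*sin(theta) in the (rho, theta)
# parameter space; the curves of collinear points intersect at the parameters
# of the line through them.
def point_to_sinusoid(x, y, thetas):
    """rho values of the parameter-space curve traced by the point (x, y)."""
    return x * np.cos(thetas) + y * np.sin(thetas)

thetas = np.deg2rad(np.arange(0.0, 180.0))  # theta sampled at 1-degree steps

# Two hypothetical points on the horizontal line y = 2 (theta = 90°, rho = 2)
rho1 = point_to_sinusoid(1.0, 2.0, thetas)
rho2 = point_to_sinusoid(4.0, 2.0, thetas)

# The sinusoids meet where the common line's parameters lie
i = int(np.argmin(np.abs(rho1 - rho2)))
print(np.rad2deg(thetas[i]), rho1[i])  # approximately 90.0 and 2.0
```

This intersection property is what makes the (ρ, θ) parameterization preferable to slope-intercept (Figure 1): it stays bounded for vertical lines.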
Figure 3. Some remote sensing images with straight road examples from the RSSCN7 dataset.
Figure 4. Results for test image a215. (a) Radon transform of the test image in the two-dimensional parameter space. (b) Three-dimensional form of (a). (c) Detected line from (b) overlaid on the test image. (d) Binary image of the test image. (e) Hough transform of (d). (f) Three-dimensional form of (e). (g) Detected line from (f). (h) Receiver operating characteristic (ROC) curves of the evaluated detection methods. (i) Transform image obtained by our method. (j) Three-dimensional form of (i). (k) Detected line from (j).
Figure 5. Results for test image a266. (a) Radon transform of the test image in the two-dimensional parameter space. (b) Three-dimensional form of (a). (c) Detected line from (b) overlaid on the test image. (d) Binary image of the test image. (e) Hough transform of (d). (f) Three-dimensional form of (e). (g) Detected line from (f). (h) Receiver operating characteristic (ROC) curves of the evaluated detection methods. (i) Transform image obtained by our method. (j) Three-dimensional form of (i). (k) Detected line from (j).
Figure 6. Results for test image b088. (a) Radon transform of the test image in the two-dimensional parameter space. (b) Three-dimensional form of (a). (c) Detected line from (b) overlaid on the test image. (d) Binary image of the test image. (e) Hough transform of (d). (f) Three-dimensional form of (e). (g) Detected line from (f). (h) Receiver operating characteristic (ROC) curves of the evaluated detection methods. (i) Transform image obtained by our method. (j) Three-dimensional form of (i). (k) Detected line from (j).
Figure 7. Results for test image g146. (a) Radon transform of the test image in the two-dimensional parameter space. (b) Three-dimensional form of (a). (c) Detected line from (b) overlaid on the test image. (d) Binary image of the test image. (e) Hough transform of (d). (f) Three-dimensional form of (e). (g) Detected line from (f). (h) Receiver operating characteristic (ROC) curves of the evaluated detection methods. (i) Transform image obtained by our method. (j) Three-dimensional form of (i). (k) Detected line from (j).
Figure 8. Results for test image a038. (a) Radon transform of the test image in the two-dimensional parameter space. (b) Three-dimensional form of (a). (c) Detected line from (b) overlaid on the test image. (d) Binary image of the test image. (e) Hough transform of (d). (f) Three-dimensional form of (e). (g) Detected line from (f). (h) Receiver operating characteristic (ROC) curves of the evaluated detection methods. (i) Transform image obtained by our method. (j) Three-dimensional form of (i). (k) Detected line from (j).
Figure 9. Results for test image b230. (a) Radon transform of the test image in the two-dimensional parameter space. (b) Three-dimensional form of (a). (c) Detected line from (b) overlaid on the test image. (d) Binary image of the test image. (e) Hough transform of (d). (f) Three-dimensional form of (e). (g) Detected line from (f). (h) Receiver operating characteristic (ROC) curves of the evaluated detection methods. (i) Transform image obtained by our method. (j) Three-dimensional form of (i). (k) Detected line from (j).
Table 1. Parameters (ρ, θ) of the true and detected straight lines.
| Test Sample | a215 | a266 | b088 | g146 |
| True (ρ, θ) | (56.1, 90.7°) | (27.6, 178.4°) | (8.3, 16.95°), (33.2, 17.6°) | (0.8, 91.4°) |
| Our Method (ρ, θ) | (56, 91°) | (28, 179°) | (9, 18°), (33, 18°) | (1, 92°) |
| Radon Transform (ρ, θ) | (0, 46°) | (27, 180°) | (20, 28°), (26, 23°) | (1, 60°) |
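The (ρ, θ) estimates in Table 1 correspond to the strongest peak of the transform accumulator. A hedged sketch of that peak-reading step, using a synthetic accumulator (the axes and the planted peak are assumptions for illustration; in the paper the accumulator comes from the approximated Radon transform of the image):

```python
import numpy as np

# Hedged sketch of how a (rho, theta) estimate such as those in Table 1 is
# read off a transform accumulator. The accumulator here is synthetic; in the
# paper it would come from the (approximated) Radon transform of the image.
rhos = np.arange(-100, 101)   # assumed rho axis, 1-pixel resolution
thetas = np.arange(0, 180)    # assumed theta axis, 1-degree resolution

acc = np.zeros((rhos.size, thetas.size))
acc[np.where(rhos == 56)[0][0], 91] = 1.0  # plant a peak at (rho=56, theta=91°)

# The strongest accumulator cell gives the detected line's parameters
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
rho_hat, theta_hat = int(rhos[r_idx]), int(thetas[t_idx])
print(rho_hat, theta_hat)  # 56 91, matching the a215 row of Table 1
```

For images with two roads (e.g., b088), the two strongest well-separated peaks would be taken instead of a single maximum.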
Table 2. Running-time comparison of the two methods.
| Method | Our Method | Radon Transform |
| Test Samples | 20 | 20 |
| Average Running Time | 0.027 s | 0.106 s |
Table 3. Mean error and error variance of the two methods.
| Method | Our Method | | Radon Transform | |
| Line Parameter | ρ | θ | ρ | θ |
| Mean Error | 0.32 | 0.911° | 11.46 | 15.65° |
| Variance of Error | 0.089 | 0.286° | 528.788 | 1105.48° |
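The Table 3 aggregates are the mean and variance of the per-sample parameter errors over the 20 test images. A short sketch of that computation with hypothetical error values (the paper reports only the aggregates, not the individual per-sample errors):

```python
import numpy as np

# Sketch of the Table 3 statistics. The per-sample errors below are
# hypothetical; the paper lists only the aggregates over 20 test images.
rho_errors = np.array([0.2, 0.4, 0.3, 0.5])    # |rho_true - rho_detected|
theta_errors = np.array([0.9, 1.1, 0.8, 1.0])  # |theta_true - theta_detected|, degrees

mean_rho, var_rho = rho_errors.mean(), rho_errors.var()
mean_theta, var_theta = theta_errors.mean(), theta_errors.var()
print(mean_rho, var_rho, mean_theta, var_theta)
```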