Article

A New Approach to Urban Road Extraction Using High-Resolution Aerial Image

1 Institute of Remote Sensing and Geographic Information System, Peking University, Beijing 100871, China
2 China Transport Telecommunications & Information Center, Beijing 100011, China
3 Department of Big Data Technology and Application Development, Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2016, 5(7), 114; https://doi.org/10.3390/ijgi5070114
Submission received: 24 March 2016 / Revised: 2 July 2016 / Accepted: 8 July 2016 / Published: 13 July 2016
(This article belongs to the Special Issue Big Data for Urban Informatics and Earth Observation)

Abstract

Road information is fundamental not only in the military field but also in daily life. Automatic road extraction from remote sensing images can provide references for city planning as well as for transportation database construction and map updating. However, owing to the spectral similarity between roads and other impervious structures, current methods that use spectral characteristics alone are often ineffective. By contrast, the detailed information discernible in high-resolution aerial images enables road extraction with spatial texture features. In this study, a knowledge-based method that incorporates the spatial texture feature into urban road extraction is proposed. The spatial texture feature is first extracted by the local Moran’s I, and the derived texture is added to the spectral bands of the image for image segmentation. Subsequently, features such as brightness, standard deviation, rectangularity, aspect ratio, and area are selected to form a hypothesis and verification model based on road knowledge. Finally, roads are extracted by applying the hypothesis and verification model and are post-processed using mathematical morphology. The newly proposed method is evaluated in two experiments. Results show that the completeness, correctness, and quality of the results reach approximately 94%, 90%, and 86%, respectively, indicating that the proposed method is effective for urban road extraction.

1. Introduction

As a key component of transportation systems, roads are a fundamental part of modern infrastructure and are significant in both the military field and daily life. With the development of remote sensing technology, automatic road extraction from remote sensing images has become an important subject in digital photogrammetry [1]. Road extraction can provide references for city planning, transportation database construction, map updating, and land resource management, as well as guidance during emergencies and disaster rescue operations [2].
With the introduction of computers, road extraction has gradually advanced from manual operation to automation. Various road extraction algorithms have been proposed over the past decades, including edge and line detection [3], image classification and segmentation [4,5,6], and multiple quadratic snakes [7]. The integration of spectral and shape information has also been investigated as a means to extract road features [8]. Kaur et al. [9] reviewed various road extraction methods and divided the road extraction process into three stages: image pre-processing, road detection, and post-processing. Li et al. [10] reviewed applications of semi-automatic road extraction methods. Trinder and Wang [1] proposed a knowledge-based method for road extraction from aerial images, in which the radiometric and geometric properties of roads and the relationships among roads in images of different resolutions are used to extract roads. However, this method requires images of different resolutions, which are often difficult to obtain because of mismatches among data sources. Shi et al. [11] presented an integrated method to extract urban main-road centerlines from optical images that combines spectral–spatial classification, local Geary’s C, road shape features, locally weighted regression, and tensor voting. Because the method uses the spectral bands only, it fails to handle complex road junctions. Miao et al. [12] extracted road centerlines from high-resolution imagery by using shape features and multivariate adaptive regression splines (MARS), which separate the road features from the background and thereby increase the smoothness of the extracted centerlines. Nevertheless, their method relies on homogeneous surface properties and does not consider texture information; thus, it is unsuitable for extracting spectrally heterogeneous roads. Singh and Garg [13] used adaptive global thresholds and morphological operations to extract road networks from high-resolution satellite images; the drawback is that some non-road objects are also extracted as roads. Song and Civco [14] integrated shape features with the results of pixel-wise support vector machine classification to extract roads. Because the detailed information of the roads is not considered, their method is unsuitable for road extraction from high-resolution images. In summary, various algorithms have been proposed for road extraction, but they share a common problem: detailed spatial information is inadequately considered, and spectral similarities exist between roads and other artificial structures with impervious surfaces; as a result, the extraction accuracy is relatively low when the spectral bands are used alone.
To overcome the aforementioned problem and improve the accuracy of road extraction, a new method is proposed in this study to extract urban roads from high-resolution remote sensing images. The proposed road extraction method is based on object-based methods [15], in which the spatial texture feature is extracted by using the local spatial statistics, and the derived texture is added to the spectral bands for road extraction.

2. Methodology

This study proposes a method for urban road extraction from high-resolution aerial images. The method comprises three key processes: texture information extraction, road extraction, and post-processing. First, the texture information is extracted from the remote sensing image and added to its spectral bands. Second, the urban roads are extracted using the knowledge-based model. Finally, the extracted results are post-processed with mathematical morphology to eliminate noise. Figure 1 illustrates the processes in detail.

2.1. Texture Information Extraction

Image texture generally describes the spatial variability of the radiometric data (digital numbers) of a remote sensing image. In extracting features from images, texture information plays a critical role in distinguishing among roads, buildings, and other artificial objects [16]. Before the texture information extraction, the road edges are first enhanced using a bilateral filter, which is defined as [17,18]
$$ BF[L]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert L_p - L_q \rvert)\, L_q \quad (1) $$
where $L$ is the image, $p$ is the target pixel to be filtered, and $S$ is the spatial neighborhood of $p$. The normalization factor $W_p$ ensures that the pixel weights sum to 1. $G_{\sigma_s}$ is a spatial Gaussian that decreases the influence of pixels $q$ that are far from $p$, and $G_{\sigma_r}$ is a range Gaussian that decreases the influence of pixel $q$ when its intensity $L_q$ differs from $L_p$ [17]. The filtered image is then processed by principal component analysis (PCA) [19], and the local Moran’s I index is computed on the PCA result to extract texture. The derived texture is then added to the spectral bands of the image by layer stacking. The local Moran’s I [20,21] is defined as
$$ I = \frac{n \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} \sum_{i=1}^{n} (x_i - \bar{x})^2} \quad (2) $$
where $n$ is the number of georeferenced observations, $x_i$ and $x_j$ are the observations at the $i$th and $j$th locations, respectively, and $\bar{x}$ is the mean of the observations. The weight $w_{ij}$ in Equation (2) is determined from the adjacent neighbors and is defined as
$$ w_{ij} = \begin{cases} 1 & \text{if } i \text{ and } j \text{ are adjacent neighbors} \\ 0 & \text{otherwise} \end{cases} \quad (3) $$
A specific class of neighborhood rules must be selected to compute the local Moran’s I. This rule defines which adjacent pixels should be compared with the central pixel. The choices are listed as follows [22]:
Rook’s Case: selects the pixels on the top, bottom, left, and right.
Bishop’s Case: selects the four diagonal neighboring pixels.
Queen’s Case: selects all eight neighboring pixels.
Horizontal: selects the two neighboring pixels in the same row.
Vertical: selects the two neighboring pixels in the same column.
Positive Slope: selects the two neighboring pixels in opposite corners along the positive diagonal.
Negative Slope: selects the two neighboring pixels in opposite corners along the negative diagonal.
For ease of computation, $w_{ij}$ is set to one and the rook’s case adjacency is selected in this study.
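To make the texture-extraction stage concrete, the sketch below chains the three steps described above: bilateral filtering, PCA, and the local Moran’s I with rook’s case weights. It is a minimal illustration rather than the authors’ implementation; the use of OpenCV, scikit-learn, and SciPy, the function name extract_texture, and the filter parameters are our assumptions, and the per-pixel statistic follows Anselin’s local formulation of Moran’s I.

```python
# Minimal sketch of the texture-extraction stage (Section 2.1).
# Assumes `img` is a float32 NumPy array of shape (H, W, 3) holding the RGB bands.
import numpy as np
import cv2
from scipy.ndimage import convolve
from sklearn.decomposition import PCA

def extract_texture(img, sigma_color=25.0, sigma_space=5.0):
    # 1. Edge-preserving smoothing with a bilateral filter (Equation (1));
    #    a non-positive diameter lets OpenCV derive it from sigma_space.
    smoothed = cv2.bilateralFilter(img.astype(np.float32), -1, sigma_color, sigma_space)

    # 2. PCA: keep only the first principal component of the three bands.
    h, w, k = smoothed.shape
    pc1 = PCA(n_components=1).fit_transform(smoothed.reshape(-1, k)).reshape(h, w)

    # 3. Local Moran's I with rook's case weights (w_ij = 1 for the pixels on the
    #    top, bottom, left, and right; see Equations (2) and (3)).
    z = pc1 - pc1.mean()
    m2 = (z ** 2).mean()                        # variance-like scaling term
    rook = np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]], dtype=np.float32)
    lag = convolve(z, rook, mode='nearest')     # sum of neighbouring deviations
    local_moran = (z / m2) * lag                # per-pixel local Moran's I

    # 4. Layer stacking: append the texture band to the spectral bands.
    return np.dstack([img, local_moran])
```

The stacked four-band image produced here is the input to the segmentation step described in Section 3.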

2.2. Road Extraction

In this paper, the selection and representation of road knowledge are critical. In general, road knowledge derived from high-resolution remote sensing images is described as follows: (1) the road objects generated by image segmentation present relatively homogeneous gray values and show a certain contrast with the surrounding background [18]; (2) roads are ribbon-shaped with a steady width; (3) their rectangularity and aspect ratio are high; and (4) road edges are distinct, and the two edges on either side of a road are parallel. A knowledge-based method is used to extract the roads in this study. The method consists of two main steps: hypothesis generation and hypothesis verification. A hypothesis model is established to generate road hypotheses, and these hypotheses are later checked with the verification model.
When an entire road is divided into several road segments by multi-scale segmentation, the regular boundary of each road segment becomes an irregular polygon. For polygonal objects, the external rectangle is an effective way to describe the shape approximately [23]. In general, external rectangles include the minimum bounding rectangle (MBR) and the minimum area bounding rectangle (MABR). The MABR better describes the direction, length, width, and other shape characteristics of a polygonal object [23]; thus, this study uses the MABR of a road object to express road knowledge. The MABR calculation method was proposed by Castleman [24]. Figure 2a shows the schematic diagram of the MABR. A search rectangle is rotated around the polygon’s centroid at a regular interval of 5°. The area of the search rectangle is calculated after each rotation, and the rectangle with the smallest area is taken as the MABR of the road segment. The MABR is illustrated in Figure 2b.
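The following is a minimal sketch of this MABR search, assuming the road object is available as an (N, 2) array of boundary or pixel coordinates; the function name and the 90° search range (the rectangle orientation repeats every 90°) are our choices for illustration.

```python
# Hedged sketch of the MABR search: rotate the object's coordinates about the
# centroid in 5-degree steps and keep the orientation whose axis-aligned
# bounding box has the smallest area.
import numpy as np

def minimum_area_bounding_rectangle(points, step_deg=5):
    centroid = points.mean(axis=0)
    best = None
    for deg in range(0, 90, step_deg):               # orientations repeat every 90 deg
        theta = np.radians(deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        pts = (points - centroid) @ rot.T            # rotate points about the centroid
        xmin, ymin = pts.min(axis=0)
        xmax, ymax = pts.max(axis=0)
        length, width = sorted([xmax - xmin, ymax - ymin], reverse=True)
        area = length * width
        if best is None or area < best[0]:
            best = (area, length, width, deg)
    return best                                      # (area, length, width, angle)
```

The returned length and width correspond to the MABR length and width used for the rectangularity and aspect ratio defined below.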
Quantitative parameters for expressing road knowledge include brightness, mean, standard deviation, rectangularity, aspect ratio, and area. The “brightness” represents the gray value of an image object across all bands, whereas the “mean” represents the gray value of a single band. The relationship between the brightness and the band means is defined as
$$ B = \frac{1}{K} \sum_{i=1}^{K} \bar{m}_i \quad (4) $$
where $B$ is the brightness, $K$ is the number of bands in the image, and $\bar{m}_i$ is the mean gray value of the $i$th band. The standard deviation reflects the homogeneity of road objects and is defined as
$$ \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (u_i - \bar{u})^2} \quad (5) $$
where $\sigma$ is the standard deviation of the road object, $N$ is the number of pixels in the road object, $\bar{u}$ is the mean gray value of the road object, and $u_i$ is the gray value of the $i$th pixel in the road object. According to road knowledge item (1) in the first paragraph of Section 2.2, the standard deviation of road objects is relatively low, and their gray values show a certain contrast with the surrounding background. Thus, the brightness and standard deviation are selected to build the hypothesis model. The other three items of road knowledge (Paragraph 1, Section 2.2) show that the rectangularity and aspect ratio of road objects are rather high. However, the rectangularity and aspect ratio of buildings are also relatively high. To overcome this problem, we use the area to eliminate the negative influence of buildings: buildings are independent objects in the image and do not border each other, so the area of a building is significantly smaller than that of a road. In conclusion, the rectangularity, aspect ratio, and area are selected to build the verification model. Rectangularity is defined as
$$ RE = \frac{A_{road}}{A_{MABR}} = \frac{A_{road}}{L_{MABR} \times W_{MABR}} \quad (6) $$
where $RE$ is the rectangularity of the road, $A_{road}$ is the area of the road, $A_{MABR}$ is the area of the MABR, and $L_{MABR}$ and $W_{MABR}$ are the length and width of the MABR, respectively. The expression for the hypothesis model is as follows:
$$ H_{road} = B_{road} \cap S_{road}, \quad B_{road} = \{\, b \mid b_1 < b < b_2 \,\}, \quad S_{road} = \{\, s \mid s_1 < s < s_2 \,\} \quad (7) $$
where $H_{road}$ represents the hypothetical roads; $b$ and $s$ are the brightness and standard deviation of a road object, respectively, with $b$ calculated as the average gray value of the four bands (R, G, B, and the spatial texture); $b_1$ and $b_2$ are the predefined thresholds for the brightness of the roads; and $s_1$ and $s_2$ are the predefined thresholds for the standard deviation of the roads. For the objects generated by image segmentation, if $b \in (b_1, b_2)$ or $s \in (s_1, s_2)$, then the corresponding objects belong to set $B_{road}$ or set $S_{road}$, respectively. The hypothesis verification model is used to remove false objects such as trees, vehicles, and other artificial structures. The expression for the hypothesis verification model is as follows:
$$ V_{road} = H_{road} \cap \bar{Y}, \quad Y = R \cup W \cup A, \quad R = \{\, r \mid r < r_1 \,\}, \; W = \{\, w \mid w < w_1 \,\}, \; A = \{\, a \mid a < a_1 \,\} \quad (8) $$
where $V_{road}$ represents the roads after verification; $r$, $w$, and $a$ are the rectangularity, aspect ratio, and area of a road object, respectively; and $r_1$, $w_1$, and $a_1$ are the predefined thresholds for the rectangularity, aspect ratio, and area of the road segments, respectively. For the objects generated by image segmentation, if $r \in (0, r_1)$, $w \in (0, w_1)$, or $a \in (0, a_1)$, then the corresponding objects belong to set $R$, set $W$, or set $A$, respectively. In Equation (8), $\bar{Y}$ is the complement of set $Y$. Equation (8) means that when objects in the hypothetical roads are included in set $Y$, they are removed from $H_{road}$. The verified results are the extracted roads.
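As an illustration of Equations (7) and (8), the sketch below filters a list of segmented objects with the hypothesis and verification models; the dictionary keys, the function name extract_roads, and the calling convention are assumptions made for this sketch rather than part of the authors’ implementation.

```python
# Hedged sketch of the hypothesis-and-verification step (Equations (7) and (8)).
# Each object is assumed to be a dict holding the five features defined above.
def extract_roads(objects, b1, b2, s1, s2, r1, w1, a1):
    # Hypothesis model (7): H_road = B_road ∩ S_road.
    hypotheses = [obj for obj in objects
                  if b1 < obj['brightness'] < b2 and s1 < obj['std_dev'] < s2]

    # Verification model (8): remove objects belonging to Y = R ∪ W ∪ A,
    # i.e. low rectangularity, low aspect ratio, or small area.
    return [obj for obj in hypotheses
            if not (obj['rectangularity'] < r1 or
                    obj['aspect_ratio'] < w1 or
                    obj['area'] < a1)]

# Example call with the Experiment 1 thresholds reported in Section 3.1:
# roads = extract_roads(objects, 75, 105, 12, 15, 0.6, 2, 50000)
```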
The complete road extraction procedure is shown in Figure 3.

2.3. Post-Processing

Owing to variations caused by vehicles, lane markings, buildings, and other ground objects, a certain amount of noise and holes appear in the road extraction results. To eliminate them and improve the accuracy of the results, we process the extracted results with the closing operation of mathematical morphology.
Mathematical morphology was introduced by Matheron and Serra in 1964 and is one of the most important frameworks for non-linear image processing [25]. The basic operations of mathematical morphology are dilation, erosion, opening, and closing. The closing operation is defined as
$$ A \bullet B = (A \oplus B) \ominus B \quad (9) $$
where • is the closing operation, A is the binary image, B is the structuring element, ⊕ and ⊖ are the dilation and erosion operations, respectively. Figure 4 shows the experiment for the closing operation. Figure 4a presents the original image, and Figure 4b shows the results of the road extraction. The results after being processed by the closing operation are shown in Figure 4c.
In this study, the selected structuring element was a disc with a radius r of 2 pixels. When the structuring element is at this scale, the roads are not adversely affected. Consequently, the roads are extracted more completely.
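As a brief illustration, the closing operation with this structuring element could be applied as follows; scikit-image is our choice of library, and road_mask stands for an assumed boolean array holding the binary extraction result.

```python
# Minimal sketch of the post-processing step: morphological closing with a
# disc-shaped structuring element of radius 2 pixels (Equation (9)).
from skimage.morphology import binary_closing, disk

cleaned = binary_closing(road_mask, disk(2))   # A • B = (A ⊕ B) ⊖ B
```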

3. Results and Discussion

3.1. Experiment 1

The study area is located in Yangjiang, Guangdong Province, China. A 0.1 m three-band (red, green, and blue) aerial image with a size of 2808 pixels × 2719 pixels is used in this experiment, as shown in Figure 5a. It was provided by the National Disaster Reduction Center of China. The width of the road is approximately 10 m, and the road is a high-grade urban road.
PCA is a standard tool in modern data analysis that can compress and enhance data by applying linear algebra [19], and it can also reduce the dimension of the data. In this experiment, PCA was performed for data compression and image enhancement by using the ENVI software package (Transform > Principal Components > Forward PC Rotation > Compute New Statistics and Rotate). The principal components were calculated from a covariance matrix. The dimension of the input data is three, and that of the PCA result is one. The PCA result, shown in Figure 5b, was selected for texture information extraction with the local Moran’s I. For the adjacent neighbors, the rook’s case adjacency was selected to limit comparisons to pixels that share an edge. The result obtained by the local Moran’s I is shown in Figure 5c, and this result was added to the original bands of the image by layer stacking.
After the texture information extraction, the multi-scale segmentation method was used for image segmentation. Multi-scale segmentation is a bottom-up region-merging technique that starts with one-pixel objects [26]. In the subsequent steps, smaller image objects are merged into larger ones. Through this pair-wise clustering process, the underlying optimization procedure minimizes the heterogeneity of the resulting image objects. In each step, the pair of adjacent image objects whose merger yields the smallest growth of the defined heterogeneity is merged; when the smallest growth exceeds the threshold defined by the scale parameter, the process stops. The scale parameter [26] can be determined by the number of pixels of the ground target of interest or the range of spatial structure in the image. The smaller the scale parameter, the fewer merging steps are performed and the smaller the generated image objects; the size of the generated objects increases as the scale parameter increases. In this way, homogeneous image objects are generated. Figure 5d presents the result of image segmentation. The generated objects were processed by the hypothesis model to obtain the hypothetical roads, as shown in Figure 5e. Subsequently, the hypothesis verification model was used to validate the hypothetical roads and remove the false roads. In this process, the values of b1, b2, s1, s2, r1, w1, and a1 were set to 75, 105, 12, 15, 0.6, 2, and 50,000 pixels, respectively. Figure 5f presents the roads after verification, which were then post-processed by the closing operation of mathematical morphology; the post-processed result is illustrated in Figure 5g. Figure 5h shows the result of Hu’s method [27], in which spectral and shape features, but not texture information, are used for road extraction.
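To show how the pieces of Section 2 fit together in this experiment, the sketch below computes the per-object features and applies the thresholds listed above. It reuses minimum_area_bounding_rectangle and extract_roads from the earlier sketches, and it substitutes scikit-image’s Felzenszwalb segmentation for the multi-scale region-merging segmentation actually used, so the scale value is only loosely analogous; stacked denotes the assumed four-band (R, G, B plus texture) layer stack.

```python
# Hedged sketch of the Experiment 1 object pipeline. Assumes `stacked` is the
# four-band image; the texture band may need rescaling to the 0-255 range so
# that the brightness thresholds below remain meaningful.
import numpy as np
from skimage.measure import regionprops
from skimage.segmentation import felzenszwalb   # stand-in for multi-scale segmentation

labels = felzenszwalb(stacked, scale=200, sigma=0.5, min_size=500) + 1  # labels > 0

objects = []
for region in regionprops(labels):
    pixels = stacked[labels == region.label]          # (n_pixels, 4) band values
    gray = pixels.mean(axis=1)                        # per-pixel average of the 4 bands
    mabr_area, length, width, _ = minimum_area_bounding_rectangle(
        region.coords.astype(float))
    objects.append({
        'brightness':     gray.mean(),                                        # Eq. (4)
        'std_dev':        gray.std(),                                         # Eq. (5)
        'area':           region.area,
        'rectangularity': region.area / mabr_area if mabr_area > 0 else 0.0,  # Eq. (6)
        'aspect_ratio':   length / width if width > 0 else 0.0,
    })

# Thresholds b1, b2, s1, s2, r1, w1, a1 as reported for Experiment 1.
roads = extract_roads(objects, 75, 105, 12, 15, 0.6, 2, 50000)
```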

3.2. Experiment 2

In the second case study, the newly proposed method is tested on a high-resolution image acquired by an unmanned aerial vehicle. The size of the image is 2080 pixels × 2395 pixels, and its spatial resolution is 0.1 m per pixel. The image is shown in Figure 6a. The roads in this image have many branches, and such roads are very common in practice.
As in the previous experiment, the local Moran’s I was applied to extract texture information after the PCA. Figure 6b shows the PCA result, which was processed by the local Moran’s I with the rook’s case adjacency. The texture information in Figure 6c was added to the original bands, and the multi-scale segmentation method was again applied for image segmentation. Figure 6d shows the result of image segmentation. The hypothesis model was applied to extract roads, and the result is shown in Figure 6e. In Figure 6e, some buildings and other impervious surfaces, which should have been removed, are mistaken for roads. The hypothesis verification model was used to remove the false roads, and the result is shown in Figure 6f. In this process, the values of b1, b2, s1, s2, r1, w1, and a1 were set to 110, 130, 4, 8, 0.5, 1, and 4000 pixels, respectively. The morphological closing operation was then applied to fill the holes; the result is shown in Figure 6g. Figure 6h shows the result obtained using Hu’s method [27], described in Section 3.1.

3.3. Accuracy Evaluation

To evaluate the accuracy of the road extraction results in the two experiments, we compare them with a manually created ground-truth representation of the roads. The manually delineated roads are shown in Figure 7, together with the extracted roads for comparison.
The following three widely accepted evaluation measures are used to evaluate how well our road extraction results match the ground-truth data set [4,5,28,29,30].
$$ Completeness = \frac{TP}{TP + FN} \quad (10) $$
$$ Correctness = \frac{TP}{TP + FP} \quad (11) $$
$$ Quality = \frac{TP}{TP + FN + FP} \quad (12) $$
where TP denotes the extracted road pixels that coincide with the reference data, FN refers to the road pixels that are in the reference data but not in the extracted result, and FP represents the extracted road pixels that are not in the reference data. Table 1 presents the accuracy of the road extraction results.
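For reference, a minimal sketch of these three measures, assuming extracted and reference are boolean road masks of equal shape:

```python
# Completeness, correctness, and quality (Equations (10)-(12)) from binary masks.
import numpy as np

def evaluate(extracted, reference):
    tp = np.sum(extracted & reference)     # extracted road pixels that match the reference
    fp = np.sum(extracted & ~reference)    # extracted pixels absent from the reference
    fn = np.sum(~extracted & reference)    # reference road pixels that were missed
    completeness = tp / (tp + fn)
    correctness  = tp / (tp + fp)
    quality      = tp / (tp + fn + fp)
    return completeness, correctness, quality
```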
It can be seen from Table 1 that the extracted results are fairly accurate, which demonstrates the effectiveness of the proposed method. In Experiment 1, the completeness and correctness of the results reach 95.12% and 90.31%, respectively, and the quality of the results is 86.31%. The corresponding accuracy values for Experiment 2 are 93.56%, 91.53%, and 86.11%, respectively. Compared with Hu’s method, the proposed method achieves higher accuracy. However, some errors still exist, for two possible reasons. The first is the influence of mixed pixels, which blur the road boundary and consequently affect the accuracy of road extraction. The other is the negative influence of vehicles, trees, and dust, which present spectra different from those of roads and thus cause the affected road pixels to be omitted (Figure 8). In future research, road vector data can be used to connect broken road segments.

3.4. Parameter Selection

In the newly proposed method, a few parameters, such as the segmentation scale, are set according to the input image resolution. In general, the segmentation scale and road width are positively correlated. In this paper, the road width $W_{road}$ is approximately 100 pixels, and the segmentation scale was varied over 100, 110, ..., 300 according to the analysis of the image characteristics. After some trial and error, the proposed extraction method performs best with a segmentation scale of 200, at which the homogeneity of the road objects is highest. The detailed definition of homogeneity can be found in [31]. The optimal segmentation scale would be larger than 200 when the road width is greater than 100 pixels, and vice versa. Moreover, on the basis of numerous experiments, we reached the empirical conclusion that the optimal segmentation scale is usually $2 \times W_{road}$. However, when the method is applied to complex road networks, certain deviations may occur, and the optimal segmentation scale usually lies within $(2 \pm 0.5)\,W_{road}$, which can be used to determine its general range.
However, some parameter selections require manual interaction in the proposed method. We conduct a sensitivity analysis on the proposed method by varying each parameter within a reasonable range while holding the other parameters fixed [32]. The quality measure Q, introduced in Section 3.3, is used to evaluate each free parameter quantitatively. When Q reaches its global maximum, the corresponding value of the free parameter is selected as the threshold. For the parameter selection in Experiment 1, the quantitative results for the free parameters are shown in Figure 9, which compares several reasonable values of b1, b2, s1, s2, r1, and w1 (Section 2.2). According to the experiments, the values of b1, b2, s1, s2, r1, w1, and a1 are set to 75, 105, 12, 15, 0.6, 2, and 50,000 pixels, respectively. The corresponding free parameters in Experiment 2 are set using the same procedure.
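A rough sketch of this one-at-a-time sweep is given below. It reuses extract_roads and evaluate from the earlier sketches; the helper rasterize (which would turn the selected objects back into a binary mask), the candidate range, and the function name tune_parameter are hypothetical.

```python
# Hedged sketch of the sensitivity analysis: vary one free parameter, hold the
# others fixed, and keep the value that maximises the quality measure Q.
def tune_parameter(name, candidates, params, objects, reference, rasterize):
    best_value, best_q = None, -1.0
    for value in candidates:
        trial = dict(params, **{name: value})        # override the parameter under test
        roads = extract_roads(objects, trial['b1'], trial['b2'], trial['s1'],
                              trial['s2'], trial['r1'], trial['w1'], trial['a1'])
        _, _, q = evaluate(rasterize(roads), reference)
        if q > best_q:
            best_value, best_q = value, q
    return best_value

# e.g. b1 = tune_parameter('b1', range(60, 91, 5), params, objects, reference, rasterize)
```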

4. Conclusions

In this study, the representation and extraction of road knowledge are investigated, and a knowledge-based method to extract urban roads from high-resolution aerial images is proposed. More specifically, the proposed method incorporates texture, spectral and shape features. Compared with existing methods, which use spectral and shape features without texture information, the proposed method exhibits improved accuracy, as shown in Table 1. Furthermore, with the use of mathematical morphology in the post-processing stage, the proposed method shows good smoothness of road edges and reduces the negative influence of vehicles, lanes, and other ground objects. The overall quality of the results is higher than 85%.
The newly developed approach is evaluated using high-resolution RGB aerial images. The experimental results show that the proposed method can extract roads successfully. In view of the influence of mixed pixels, research on target-enhancing algorithms is worth exploring to increase the contrast between roads and other ground features. Furthermore, a small number of broken road segments appear in the experiments; thus, the use of road vector data to connect these broken segments warrants further research.

Acknowledgments

This work was supported by the National Science and Technology Major Project of China (Grant No. 11-y20A05-9001-15/16). All the images were provided by the National Disaster Reduction Center of China. The authors would like to thank the anonymous reviewers for their constructive comments, which greatly improved the quality of our manuscript.

Author Contributions

The concept of this study was conceived by Jianhua Wang and Qiming Qin. The experiments were carried out by Jianhua Wang, who also prepared the figures. This manuscript was written by Jianhua Wang, Zhongling Gao, Jianghua Zhao and Xin Ye.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Trinder, J.; Wang, Y. Knowledge-based road interpretation in aerial images. Int. Arch. Photogramm. Remote Sens. 1999, 32, 635–640.
2. Luo, Z. Analysis and Research of Road Extraction from High Resolution Remote Sensing Images; Shanghai Jiao Tong University: Shanghai, China, 2008.
3. Das, S.; Mirnalinee, T.T.; Varghese, K. Use of salient features for the design of a multistage framework to extract roads from high-resolution multispectral satellite images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3906–3931.
4. Miao, Z.; Wang, B.; Shi, W. A semi-automatic method for road centerline extraction from VHR images. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1856–1860.
5. Shi, W.; Miao, Z.; Debayle, J. An integrated method for urban main-road centerline extraction from optical remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3359–3372.
6. Yuan, J.D.; Wang, W.B.; Yan, L.; Li, R. Legion-based automatic road extraction from satellite imagery. IEEE Trans. Geosci. Remote Sens. 2011, 40, 4528–4538.
7. Nakaguro, Y.; Makhanov, S.S.; Dailey, M.N. Numerical experiments with cooperating multiple quadratic snakes for road extraction. Int. J. Geogr. Inf. Sci. 2011, 25, 765–783.
8. Grote, A.; Heipke, C.; Rottensteiner, F. Road network extraction in suburban areas. Photogramm. Rec. 2012, 27, 8–28.
9. Kaur, A.; Singh, R. Various methods of road extraction from satellite images: A review. Int. J. Res. 2015, 2, 1025–1032.
10. Li, Y.; Xu, L.; Piao, H. Semi-automatic road extraction from high-resolution remote sensing image: Review and prospects. In Proceedings of the 2009 IEEE Ninth International Conference on Hybrid Intelligent Systems, Shenyang, China, 12–14 August 2009; pp. 204–209.
11. Shi, W.; Miao, Z.; Wang, Q.; Zhang, H. Spectral–spatial classification and shape features for urban road centerline extraction. IEEE Geosci. Remote Sens. Lett. 2013, 11, 788–792.
12. Miao, Z.; Shi, W.; Zhang, H.; Wang, X. Road centerline extraction from high-resolution imagery based on shape features and multivariate adaptive regression splines. IEEE Geosci. Remote Sens. Lett. 2013, 10, 583–587.
13. Singh, P.P.; Garg, R.D. Automatic road extraction from high resolution satellite image using adaptive global thresholding and morphological operations. J. Indian Soc. Remote Sens. 2013, 41, 631–640.
14. Song, M.J.; Civco, D. Road extraction using SVM and image segmentation. Photogramm. Eng. Remote Sens. 2004, 70, 1365–1371.
15. Al-Khudhairy, D.; Caravaggi, I.; Giada, S. Structural damage assessments from IKONOS data using change detection, object-oriented segmentation and classification techniques. Photogramm. Eng. Remote Sens. 2005, 71, 825–837.
16. Song, J.; Wang, X.; Li, P. Urban building damage detection from VHR imagery by including temporal and spatial texture features. J. Remote Sens. 2012, 16, 1233–1245.
17. Paris, S.; Kornprobst, P.; Tumblin, J.; Durand, F. Bilateral filtering: Theory and applications. Found. Trends Comput. Graph. Vis. 2008, 4, 1–73.
18. Wang, J.; Qin, Q.; Zhao, J.; Ye, X.; Feng, X.; Qin, X.; Yang, X. Knowledge-based detection and assessment of damaged roads using post-disaster high-resolution remote sensing image. Remote Sens. 2015, 7, 4948–4967.
19. Shlens, J. A Tutorial on Principal Components Analysis. 2014. Available online: http://arxiv.org/pdf/1404.1100v1.pdf (accessed on 7 April 2014).
20. Emerson, C.W.; Lam, N.S.; Quattrochi, D.A. A comparison of local variance, fractal dimension, and Moran’s I as aids to multi-spectral image classification. Int. J. Remote Sens. 2005, 26, 1575–1588.
21. Anselin, L. Local indicators of spatial association—LISA. Geogr. Anal. 1995, 27, 93–115.
22. Zhu, Z.; Su, W. The analysis of the classification of SPOT 5 image based on local spatial statistics. J. Remote Sens. 2011, 15, 957–972.
23. Cheng, P.; Yan, H.; Han, Z. An algorithm for computing the minimum area bounding rectangle of an arbitrary polygon. J. Eng. Graph. 2008, 1, 122–126.
24. Castleman, K.R. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 1995.
25. Dias, F.; Cousty, J.; Najman, L. Dimensional operators for mathematical morphology on simplicial complexes. Pattern Recognit. Lett. 2014, 47, 111–119.
26. Malik, R.; Kheddam, R.; Belhadj-Aissa, A. Multi-scale segmentation for remote sensing imagery based on minimum heterogeneity rule. In Proceedings of the Conference on Image Processing Theory, Tools & Applications, Paris, France, 14–17 October 2014.
27. Hu, J.; Zhang, X.; Shen, X.; Zhang, C. A method of road extraction in high-resolution remote sensing imagery based on object-oriented image analysis. Remote Sens. Technol. Appl. 2006, 21, 184–188.
28. Wiedemann, C.; Heipke, C.; Mayer, H. Empirical evaluation of automatically extracted road axes. In Proceedings of the 1998 Conference on Computer Vision and Pattern Recognition (CVPR), Santa Barbara, CA, USA, 23–25 June 1998; pp. 172–187.
29. Truong-Hong, L.; Laefer, D.F. Quantitative evaluation strategies for urban 3D model generation from remote sensing data. Comput. Graph. 2015, 49, 82–91.
30. Boyko, A.; Funkhouser, T. Extracting roads from dense point clouds in large scale urban environment. ISPRS J. Photogramm. Remote Sens. 2011, 66, S2–S12.
31. Chen, C. Recognition and Damage Assessment for Bridge over Water from High-Resolution Optical Remote Sensing Images. Ph.D. Thesis, Peking University, Beijing, China, 9 June 2013.
32. Gao, C.; Sun, Y. Automatic road centerline extraction from imagery using road GPS data. Remote Sens. 2014, 6, 9014–9033.
Figure 1. Flow chart of the newly proposed method.
Figure 2. Minimum area bounding rectangle (MABR). (a) Schematic diagram of the MABR; (b) The MABR of a road object.
Figure 3. Detailed flow diagram of the proposed method.
Figure 4. Experiment of the closing operation. (a) Original image; (b) Results of road extraction; (c) Results of the closing operation.
Figure 5. Results on the first image. (a) Study area image; (b) Band 1 of the principal component analysis (PCA) result; (c) Result of the local Moran’s I; (d) Result of image segmentation; (e) Hypothesized roads; (f) Roads after verification; (g) Extracted roads superimposed on the original image; (h) Result using Hu’s method.
Figure 6. Results on the second image. (a) Study area image; (b) Band 1 of the PCA result; (c) Result of the local Moran’s I; (d) Result of image segmentation; (e) Hypothesized roads; (f) Roads after verification; (g) Extracted roads superimposed on the original image; (h) Result using Hu’s method.
Figure 7. The reference and extracted roads. (a) Reference road data of Experiment 1; (b) Extracted roads of Experiment 1; (c) Reference road data of Experiment 2; (d) Extracted roads of Experiment 2. (True positives are shown in red, false positives in green, and false negatives in blue.)
Figure 8. The negative influence of vehicles and dust. (a) Vehicles; (b) The negative influence of vehicles; (c) Dust, which presents spectra different from those of the road; (d) The negative influence of dust.
Figure 9. Sensitivity test of free parameters. (a) The brightness threshold b1; (b) the brightness threshold b2; (c) the standard deviation threshold s1; (d) the standard deviation threshold s2; (e) the rectangularity threshold r1; (f) the aspect ratio threshold w1.
Table 1. Accuracy evaluation of road extraction.

Experiment   Method            Completeness (%)   Correctness (%)   Quality (%)
1            Proposed method   95.12              90.31             86.31
             Hu's method       89.91              86.24             78.63
2            Proposed method   93.56              91.53             86.11
             Hu's method       92.55              87.33             81.59
