Special Issue "Remote Sensing for 3D Urban Morphology"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (30 September 2017)

Special Issue Editors

Guest Editor
Prof. Dr. Bailang Yu

School of Geographic Sciences, Key Lab. of Geographic Information Science (Ministry of Education), East China Normal University, 500 Dongchuan Rd, Shanghai 200241, China
Phone: +86-21-54341172
Fax: +86-21-54341172
Interests: nighttime light remote sensing; urban remote sensing; object-oriented analysis for remotely sensed images; LiDAR (Light Detection and Ranging)
Guest Editor
Dr. Lei Wang

Department of Geography & Anthropology, Louisiana State University, Baton Rouge, LA 70803, USA
Phone: +1-225-578-8876
Interests: remote sensing; geocomputation; watershed hydrology; urban hazards
Guest Editor
Dr. Qiusheng Wu

Department of Geography, Binghamton University, State University of New York, Binghamton, NY 13902-6000, USA
Phone: +1-607-777-3145
Interests: remote sensing; GIS; wetland hydrology; climate change; soil moisture; LiDAR

Special Issue Information

Dear Colleagues,

Worldwide urbanization has transformed vast farmlands, grasslands, wetlands, and forests into urban landscapes at unprecedented rates, resulting in profound changes on the Earth’s surface. In recent decades, growing urban populations, higher demand for housing and infrastructure, and rising land prices have pressed many cities to adopt vertical development strategies. Ideally, vertical cities would alleviate problems such as rapid population growth, air pollution, and the loss of arable land and green space. However, for lack of sufficient information and knowledge about how urban features such as buildings, monuments, streets, parking lots, and open spaces should be put together, the zoning plans of most large cities in the world are not resilient to a changing environment. Urban morphology, in other words the 3D structure of the urban built environment, is one of the keys to smart urban planning. For example, existing studies have reported that 3D urban morphology affects wind conditions at the pedestrian level, access to sunlight and solar radiation, interior building temperatures, surface thermal conditions, the dispersion of atmospheric pollutants, and land subsidence. Scientific knowledge of 3D urban morphology and its interactions with other urban environmental and ecological components is fundamentally important for smart urban planning.

Remote sensing technology provides a synoptic and cost-effective way to measure 3D urban morphology and analyze its impacts on the natural environment. For example, aerial stereo photogrammetry, interferometric synthetic aperture radar (InSAR), and airborne light detection and ranging (LiDAR) have been employed to digitize urban areas into 3D maps in a geographic information system (GIS). In addition, there is growing research interest in using mobile laser scanning (MLS), oblique photogrammetry, and unmanned aerial vehicles (UAVs) to measure 3D urban morphology.

This Special Issue invites prospective authors to submit original manuscripts presenting their latest innovative research results in remote sensing for 3D urban morphology. Comprehensive reviews of the field are also welcome. The range of topics includes, but is not limited to:

  • State-of-the-art remote sensing technologies for measuring 3D urban morphology
  • New definitions of 3D landscape indices and their applications
  • New methods for 3D modeling of urban areas using remotely sensed data
  • Urban structure analysis based on 2D and/or 3D morphology
  • Impacts of 3D morphology on urban environment and ecology

Authors are required to check and follow the Instructions for Authors: http://www.mdpi.com/journal/remotesensing/instructions.

Prof. Dr. Bailang Yu
Dr. Lei Wang
Dr. Qiusheng Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

Open Access Article: Time-Continuous Hemispherical Urban Surface Temperatures
Remote Sens. 2018, 10(1), 3; https://doi.org/10.3390/rs10010003
Received: 6 October 2017 / Revised: 5 December 2017 / Accepted: 17 December 2017 / Published: 21 December 2017
Abstract
Traditional methods for remote sensing of urban surface temperatures (T_surf) are subject to a suite of temporal and geometric biases. The effect of these biases on our ability to characterize the true geometric and temporal nature of urban T_surf is currently unknown, but is certainly nontrivial. To quantify and overcome these biases, we present a method to retrieve time-continuous hemispherical radiometric urban surface temperature (T_hem,r) from broadband upwelling longwave radiation measured via pyrgeometer. By sampling the surface hemispherically, this measure is postulated to be more representative of the complex, three-dimensional structure of the urban surface than those from traditional remote sensors, which usually have a narrow nadir or oblique viewing angle. The method uses a sensor view model in conjunction with a radiative transfer code to correct for atmospheric effects in three dimensions, using in situ profiles of air temperature and humidity along with information about surface structure. A practical parameterization is also included. Using the method, an eight-month climatology of T_hem,r is retrieved for Basel, Switzerland. Results show the importance of a robust, geometrically representative atmospheric correction routine to remove confounding atmospheric effects and to foster inter-site, inter-method, and inter-instrument comparison. In addition, over a month-long summertime intensive observation period, T_hem,r was compared to T_surf retrieved from nadir (T_plan) and complete (T_comp) perspectives of the surface. Large differences were observed between T_comp, T_hem,r, and T_plan, with differences between T_plan and T_comp of up to 8 K under clear-sky viewing conditions, which are the cases when satellite-based observations are available. In general, T_hem,r provides a better approximation to T_comp than T_plan, particularly under clear-sky conditions. The magnitude of differences in remotely sensed T_surf based on sensor-surface-sun geometry varies significantly with time of day and synoptic conditions, and prompts further investigation of methodological and instrument bias in remotely sensed urban surface temperature records. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)
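The basic radiometric inversion underlying such a retrieval is the gray-body form of the Stefan-Boltzmann law. The sketch below omits the atmospheric correction and sensor view modeling that the paper's full method performs; the function name and default emissivity are illustrative assumptions, not taken from the paper:

```python
# Minimal gray-body inversion of measured broadband upwelling longwave
# radiation to a radiometric surface temperature (a sketch; emissivity
# value and function name are illustrative).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiometric_temperature(l_up, l_down, emissivity=0.95):
    """Radiometric surface temperature (K) from upwelling longwave
    radiation l_up (W m^-2), after removing the portion of the
    downwelling longwave l_down that the surface merely reflects."""
    emitted = l_up - (1.0 - emissivity) * l_down
    return (emitted / (emissivity * SIGMA)) ** 0.25
```

For a gray body the inversion recovers the temperature exactly; in practice the assumed emissivity and the reflected term dominate the uncertainty, which is why a geometrically representative correction matters.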

Open Access Article: Towards High-Definition 3D Urban Mapping: Road Feature-Based Registration of Mobile Mapping Systems and Aerial Imagery
Remote Sens. 2017, 9(10), 975; https://doi.org/10.3390/rs9100975
Received: 24 July 2017 / Revised: 6 September 2017 / Accepted: 18 September 2017 / Published: 21 September 2017
Cited by 4
Abstract
Various applications have utilized a mobile mapping system (MMS) as the main 3D urban remote sensing platform. However, the accuracy and precision of the three-dimensional data acquired by an MMS are highly dependent on the performance of the vehicle’s self-localization, which is generally performed by high-end global navigation satellite system (GNSS)/inertial measurement unit (IMU) integration. GNSS/IMU positioning quality, however, degrades significantly in dense urban areas with high-rise buildings, which block and reflect the satellite signals. Traditional landmark updating methods, which improve MMS accuracy by measuring ground control points (GCPs) and manually identifying those points in the data, are both labor-intensive and time-consuming. In this paper, we propose a novel and comprehensive framework for automatically georeferencing MMS data by capitalizing on road features extracted from high-resolution aerial surveillance data. The proposed framework has three key steps: (1) extracting road features from the MMS and aerial data; (2) obtaining Gaussian mixture models from the extracted aerial road features; and (3) registering the MMS data to the aerial map using a dynamic sliding window and the normal distribution transform (NDT). The accuracy of the proposed framework is verified using field data, demonstrating that it is a reliable solution for high-precision urban mapping. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)

Open Access Article: A Convolutional Neural Network-Based 3D Semantic Labeling Method for ALS Point Clouds
Remote Sens. 2017, 9(9), 936; https://doi.org/10.3390/rs9090936
Received: 1 August 2017 / Revised: 23 August 2017 / Accepted: 8 September 2017 / Published: 10 September 2017
Cited by 1
Abstract
3D semantic labeling is a fundamental task in airborne laser scanning (ALS) point cloud processing. The complexity of observed scenes and the irregularity of point distributions make this task quite challenging. Existing methods rely on a large number of features for the LiDAR points and the interaction of neighboring points, but cannot fully exploit their potential. In this paper, a convolutional neural network (CNN) based method that extracts high-level feature representations is used. A point-based feature image-generation method is proposed that transforms the 3D neighborhood features of a point into a 2D image. First, for each point in the ALS data, the local geometric features, global geometric features, and full-waveform features of its neighboring points within a window are extracted and transformed into an image. Then, the feature images are treated as the input of a CNN model for the 3D semantic labeling task. Finally, to allow performance comparisons with existing approaches, we evaluate our framework on the publicly available datasets provided by the International Society for Photogrammetry and Remote Sensing Working Group II/4 (ISPRS WG II/4) benchmark tests on 3D labeling. The experimental results achieve 82.3% overall accuracy, the best among all considered methods. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)
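The central idea of the feature-image generation step, mapping a point's 3D neighborhood onto a 2D grid that a CNN can consume, can be illustrated with a deliberately simplified sketch. Here a single channel storing the maximum relative height per cell stands in for the paper's local, global, and full-waveform feature channels; all names, the window size, and the image size are illustrative assumptions:

```python
import numpy as np

def point_feature_image(points, center, img_size=16, window=5.0):
    """Rasterize the neighborhood of one point into a 2D 'feature image'.
    Each neighbor within a square horizontal window around `center` falls
    into a grid cell according to its x-y offset; the cell stores the
    maximum relative height (a simplified stand-in for the multi-channel
    feature images described in the abstract)."""
    offsets = points - center
    near = np.all(np.abs(offsets[:, :2]) < window / 2, axis=1)
    local = offsets[near]
    img = np.zeros((img_size, img_size))
    # Map x,y offsets in [-window/2, window/2) to pixel indices.
    ij = ((local[:, :2] + window / 2) / window * img_size).astype(int)
    ij = np.clip(ij, 0, img_size - 1)
    for (i, j), dz in zip(ij, local[:, 2]):
        img[j, i] = max(img[j, i], dz)
    return img
```

A stack of such images (one per feature type) would then form the multi-channel input to an ordinary 2D CNN, which is what lets standard image classification machinery operate on irregular point data.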

Open Access Article: A Novel Building Type Classification Scheme Based on Integrated LiDAR and High-Resolution Images
Remote Sens. 2017, 9(7), 679; https://doi.org/10.3390/rs9070679
Received: 26 April 2017 / Revised: 29 June 2017 / Accepted: 29 June 2017 / Published: 1 July 2017
Cited by 1
Abstract
Building type information is crucial to many urban studies, including fine-resolution population estimation, urban planning, and management. Although scientists have developed many methods to extract buildings from remote sensing data, only a limited number focus on further classification of the extracted results. This paper presents a novel building type classification scheme based on the integration of building height information from LiDAR; textural, spectral, and geometric information from high-resolution remote sensing images; and super-object information from the integrated dataset. Building height information is first extracted from LiDAR point clouds using a progressive morphological filter and then combined with high-resolution images for object-oriented segmentation. Multi-resolution segmentation of the combined image is performed to collect super-object information, which provides additional information for classification in the next step. Finally, the segmentation results, together with their super-object information, are fed into a random forest classifier to obtain building type classification results. The proposed classification scheme is tested in two urban village areas in Guangzhou, China, a slum-like land use characterized by dense buildings of different types, heights, and sizes. Segment-level classification of the study area and validation area reached accuracies of 80.02% and 76.85%, respectively, while the building-level results reached accuracies of 98.15% and 87.50%, respectively. The results indicate that the proposed building type classification scheme has great potential for application in areas with multiple building types and complex backgrounds. This study also shows that both building height information and super-object information play important roles in building type classification, and that more accurate results can be obtained by incorporating them and using the random forest classifier. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)

Open Access Article: A New Stereo Pair Disparity Index (SPDI) for Detecting Built-Up Areas from High-Resolution Stereo Imagery
Remote Sens. 2017, 9(6), 633; https://doi.org/10.3390/rs9060633
Received: 30 March 2017 / Revised: 10 June 2017 / Accepted: 15 June 2017 / Published: 20 June 2017
Cited by 1
Abstract
Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slopes and building heights extracted from auxiliary data, such as Digital Surface Models (DSMs), however, can improve the results. Unlike single remotely sensed images, stereo imagery incorporates height information. In this study, a new Stereo Pair Disparity Index (SPDI) for indicating built-up areas is calculated from stereo-extracted disparity information. Further, a new method of detecting built-up areas from stereo pairs is proposed based on the SPDI, using disparity information to establish the relationship between the two images of a stereo pair. As shown in the experimental results for two stereo pairs covering different scenes with diverse urban settings, the SPDI effectively differentiates between built-up and non-built-up areas. Our proposed method detects built-up areas from stereo images more accurately than the traditional single-image method and two other widely applied DSM-based methods for stereo images. Our approach is suitable for spaceborne and airborne stereo pairs and triplets, and introduces an effective height feature (the SPDI) for detecting built-up areas from stereo imagery with no need for DSMs. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)

Open Access Article: Automatic Sky View Factor Estimation from Street View Photographs—A Big Data Approach
Remote Sens. 2017, 9(5), 411; https://doi.org/10.3390/rs9050411
Received: 8 April 2017 / Revised: 8 April 2017 / Accepted: 22 April 2017 / Published: 30 April 2017
Cited by 4
Abstract
Hemispherical (fisheye) photography is a well-established approach for estimating the sky view factor (SVF). High-resolution urban models from LiDAR and oblique airborne photogrammetry can provide continuous SVF estimates over a large urban area, but such data are not always available and are difficult to acquire. Street view panoramas have become widely available in urban areas worldwide: Google Street View (GSV) maintains a global network of panoramas excluding China and several other countries; Baidu Street View (BSV) and Tencent Street View (TSV) focus their panorama acquisition efforts within China, and have covered hundreds of cities therein. In this paper, we approach this issue from a big data perspective by presenting and validating a method for automatic estimation of SVF from massive amounts of street view photographs. Comparisons were made with SVF estimates derived from two independent sources: a LiDAR-based Digital Surface Model (DSM) and an oblique airborne photogrammetry-based 3D city model (OAP3D), resulting in correlation coefficients of 0.863 and 0.987, respectively. The comparisons demonstrated the capacity of the proposed method to provide reliable SVF estimates. Additionally, we present an application of the proposed method with about 12,000 GSV panoramas to characterize the spatial distribution of SVF over Manhattan Island in New York City. Although this is a proof-of-concept study, it has shown the potential of the proposed approach to assist urban climate and urban planning research. However, further development is needed before this approach can be finally delivered to the urban climate and urban planning communities for practical applications. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)
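Once a sky mask has been extracted from a hemispherical image, the classic annulus method computes the SVF by weighting each ring of zenith angle by its solid-angle contribution. The sketch below assumes an equiangular fisheye projection and a boolean sky mask already in hand (the sky classification step, which the paper automates for street view panoramas, is out of scope here; the function name and annulus count are illustrative):

```python
import numpy as np

def sky_view_factor(sky_mask, n_annuli=36):
    """Estimate the SVF from a boolean sky mask of an equiangular
    fisheye image (True = sky, False = obstruction). Each annulus of
    zenith angle [theta_{i-1}, theta_i] contributes
    sin^2(theta_i) - sin^2(theta_{i-1}), scaled by its sky fraction;
    the weights sum to 1 for a fully open hemisphere."""
    h, w = sky_mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = min(h, w) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    inside = r < radius
    # Equiangular projection: zenith angle grows linearly with radius.
    theta = (r / radius) * (np.pi / 2)
    ring = np.minimum((theta / (np.pi / 2) * n_annuli).astype(int),
                      n_annuli - 1)
    edges = np.linspace(0, np.pi / 2, n_annuli + 1)
    weights = np.sin(edges[1:]) ** 2 - np.sin(edges[:-1]) ** 2
    svf = 0.0
    for i in range(n_annuli):
        sel = inside & (ring == i)
        if sel.sum() == 0:
            continue
        svf += sky_mask[sel].mean() * weights[i]
    return svf
```

A completely open hemisphere yields an SVF of 1 and a half-blocked one roughly 0.5, which makes the routine easy to sanity-check before applying it to real panoramas.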

Open Access Article: HRTT: A Hierarchical Roof Topology Structure for Robust Building Roof Reconstruction from Point Clouds
Remote Sens. 2017, 9(4), 354; https://doi.org/10.3390/rs9040354
Received: 23 January 2017 / Revised: 31 March 2017 / Accepted: 7 April 2017 / Published: 8 April 2017
Abstract
The identification and representation of building roof topology are basic, but important, issues for 3D building model reconstruction from point clouds. Typically, the roof topology is represented by a roof topology graph (RTG), which stores plane–plane adjacencies as graph edges. Because graph edges are usually decided from local statistics between adjacent planes, topology errors are easily produced by noise, missing data, and errors propagated from pre-processing steps. In this work, a hierarchical roof topology tree (HRTT) is proposed, instead of the traditional RTG, to represent the topology relationships among different roof elements. Building primitives or child structures are taken as inside tree nodes, so plane–model and model–model relations can be well described and further exploited. Integral constraints and extra verifying procedures can also be easily introduced to improve topology quality. As for the basic plane-to-plane adjacencies, we no longer decide all connections at the same time, but rather decide the robust ones preferentially. These robust connections separate the whole model into simpler components step by step and produce the basic semantic information needed to identify ambiguous ones. In this way, the effects of minor structures or spurious ridges are confined to the building locale, while common features are detected integrally. Experiments on various datasets show that the proposed method clearly improves topology quality and produces more precise results. Compared with the RTG-based method, two topology quality indices increase from 80.9% and 79.8% to 89.4% and 88.2% in the test area. The integral model quality indices at the pixel level and the plane level also reach 90.3% and 84.7%, respectively. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)

Open Access Article: Classification of ALS Point Cloud with Improved Point Cloud Segmentation and Random Forests
Remote Sens. 2017, 9(3), 288; https://doi.org/10.3390/rs9030288
Received: 21 November 2016 / Revised: 26 January 2017 / Accepted: 14 March 2017 / Published: 18 March 2017
Cited by 10
Abstract
This paper presents an automated and effective framework for classifying airborne laser scanning (ALS) point clouds. The framework is composed of four stages: (i) step-wise point cloud segmentation, (ii) feature extraction, (iii) Random Forests (RF) based feature selection and classification, and (iv) post-processing. First, a step-wise point cloud segmentation method is proposed to extract three kinds of segments: planar, smooth, and rough surfaces. Second, a segment, rather than an individual point, is taken as the basic processing unit for feature extraction. Third, RF is employed to select features and classify the segments. Finally, semantic rules are employed to optimize the classification result. Three datasets provided by OpenTopography are utilized to test the proposed method. Experiments show that our method achieves a superior classification result, with an overall classification accuracy greater than 91.17% and a kappa coefficient greater than 83.79%. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)
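The geometric features that drive such segment-based classification are commonly derived from the eigenvalues of a neighborhood's covariance matrix, which separate linear, planar, and scattered (rough) point distributions. The sketch below shows that standard feature-extraction step; it is a conventional formulation rather than the paper's exact feature set, and the names are illustrative:

```python
import numpy as np

def eigen_features(neighborhood):
    """Covariance-eigenvalue descriptors for an (n x 3) point
    neighborhood: linearity, planarity, and scattering, a common
    feature trio for separating planar, smooth, and rough ALS
    segments before feeding a classifier such as Random Forests."""
    cov = np.cov(neighborhood.T)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
    l1, l2, l3 = np.maximum(evals, 1e-12)  # guard against zeros
    return {
        "linearity": (l1 - l2) / l1,    # ~1 for edges and wires
        "planarity": (l2 - l3) / l1,    # ~1 for roofs and ground
        "scattering": l3 / l1,          # ~1 for vegetation
    }
```

Computed per segment rather than per point, such descriptors form exactly the kind of feature vector that an RF-based selection and classification stage can consume.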

Open Access Feature Paper Article: A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds
Remote Sens. 2017, 9(1), 92; https://doi.org/10.3390/rs9010092
Received: 13 November 2016 / Revised: 15 December 2016 / Accepted: 12 January 2017 / Published: 20 January 2017
Cited by 11
Abstract
3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information about building objects. In this study, we employed a graph-based method built on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through a bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds. Full article
(This article belongs to the Special Issue Remote Sensing for 3D Urban Morphology)
