
Optical Remote Sensing Applications in Urban Areas II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (15 December 2022) | Viewed by 46797

Special Issue Editors


Guest Editor
Canada Centre for Mapping and Earth Observation, Natural Resources Canada, Ottawa, ON K1S 5K2, Canada
Interests: optical remote sensing technology development and EO-based spatial analysis for applications related to urban and mining development

Guest Editor
Department of Remote Sensing, K.N.Toosi University of Technology, Tehran 19967-15433, Iran
Interests: LiDAR technology and its applications; application of remote sensing in disaster management; bio-geomatics; artificial intelligence; image processing; pattern recognition; remote sensing calibration; optical, thermal, multispectral, UAV, and satellite data processing

Special Issue Information

Dear Colleagues,

Urban areas are the center of human settlement and civilization. They also play essential roles in various aspects of human life, including economic, political, cultural, and educational activities. At the same time, these areas are physically and geographically complex systems, owing to the presence and integration of diverse elements such as residential and industrial zones, infrastructure, road networks, green spaces, and water bodies. As such, they are studied by experts and researchers in many fields, from the social to the physical sciences and engineering. In particular, the physical characteristics of urbanized land are essential for various applications in geography, sustainable development, urban planning, and civil engineering. Geospatial information, with different levels of detail at local and regional scales, is therefore a valuable resource for reaching the ultimate objectives of urban studies.

Remote sensing technology and techniques are among the most effective observation and analysis tools for providing geospatial information about urban land complexes. From the beginning of the remote sensing era, aerial photography has provided unprecedented views of urban areas. In addition, Earth observation (EO) systems, such as optical EO satellites, have acquired unique and valuable spatial, spectral, and temporal information about the surface of the planet, including urban areas. This collection of EO tools has been progressively improved by new operational spaceborne, airborne, and drone platforms, as well as optical, lidar, thermal, and radar data sources. Moreover, technological revolutions related to open data and informatics resources, big data, and cloud computing platforms bring both opportunities and challenges for the user and academic communities in urban studies.

The previous Special Issue, ‘Optical Remote Sensing Applications in Urban Areas’, was a great success. The main objective of this Remote Sensing Special Issue (SI) is to promote state-of-the-art research and development in optical Earth observation analytics and urban applications. For this second volume, we invite researchers with different areas of expertise and interest to consider this opportunity and submit their papers on both the applications and methodologies of optical remote sensing for urban areas.

Dr. Saeid Homayouni
Dr. Ying Zhang
Dr. Ali Mohammadzadeh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Spatiotemporal analysis of urban lands and change detection
  • Urban natural/manmade hazards and disaster management
  • Earthquake damage detection
  • Optical Earth Observations for flood management
  • Land-cover mapping
  • Land-cover land-use (LCLU) modeling
  • Green space monitoring
  • Urban feature detection and extraction
  • Urbanization impacts and sustainable development
  • 3D mapping and modeling from remote sensing data
  • Image processing and analysis, and data mining
  • Artificial intelligence, machine learning, and deep learning approaches
  • Object-based image analysis
  • Big Earth observation analytics by cloud computing technology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)

Research

22 pages, 8168 KiB  
Article
MS4D-Net: Multitask-Based Semi-Supervised Semantic Segmentation Framework with Perturbed Dual Mean Teachers for Building Damage Assessment from High-Resolution Remote Sensing Imagery
by Yongjun He, Jinfei Wang, Chunhua Liao, Xin Zhou and Bo Shan
Remote Sens. 2023, 15(2), 478; https://doi.org/10.3390/rs15020478 - 13 Jan 2023
Cited by 3 | Viewed by 3032
Abstract
In the aftermath of a natural hazard, rapid and accurate building damage assessment from remote sensing imagery is crucial for disaster response and rescue operations. Although recent deep learning-based studies have made considerable improvements in assessing building damage, most state-of-the-art works focus on pixel-based, multi-stage approaches, which are more complicated and suffer from partial damage recognition issues at the building-instance level. In the meantime, it is usually time-consuming to acquire sufficient labeled samples for deep learning applications, making a conventional supervised learning pipeline with vast annotation data unsuitable in time-critical disaster cases. In this study, we present an end-to-end building damage assessment framework integrating multitask semantic segmentation with semi-supervised learning to tackle these issues. Specifically, a multitask-based Siamese network followed by object-based post-processing is first constructed to solve the semantic inconsistency problem by refining damage classification results with building extraction results. Moreover, to alleviate labeled data scarcity, a consistency regularization-based semi-supervised semantic segmentation scheme with iteratively perturbed dual mean teachers is specially designed, which can significantly reinforce the network perturbations to improve model performance while maintaining high training efficiency. Furthermore, a confidence weighting strategy is embedded into the semi-supervised pipeline to focus on convincing samples and reduce the influence of noisy pseudo-labels. The comprehensive experiments on three benchmark datasets suggest that the proposed method is competitive and effective in building damage assessment under the circumstance of insufficient labels, which offers a potential artificial intelligence-based solution to respond to the urgent need for timeliness and accuracy in disaster events. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
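For readers unfamiliar with the mean-teacher idea that underpins this kind of consistency regularization, the sketch below (Python/PyTorch) illustrates a single teacher updated by an exponential moving average of the student, a perturbed student input, and a confidence-weighted consistency loss on pseudo-labels. It is a simplified illustration, not the authors' MS4D-Net code: the network definitions, Gaussian perturbation, and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track an exponential moving average of the student
    # (call this after each optimizer step on the student).
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.data.mul_(alpha).add_(s_p.data, alpha=1.0 - alpha)

def semi_supervised_step(student, teacher, x_lab, y_lab, x_unlab, noise_std=0.1):
    # Supervised loss on labelled patches (logits: B x C x H x W, labels: B x H x W).
    sup_loss = F.cross_entropy(student(x_lab), y_lab)

    # Teacher predicts pseudo-labels on unlabelled patches; the student sees a
    # perturbed copy and is pushed towards the teacher (consistency loss).
    with torch.no_grad():
        probs = torch.softmax(teacher(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)            # per-pixel confidence
    noisy = x_unlab + noise_std * torch.randn_like(x_unlab)
    cons = F.cross_entropy(student(noisy), pseudo, reduction="none")
    cons_loss = (conf * cons).mean()               # confidence weighting of pseudo-labels

    return sup_loss + cons_loss
```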

19 pages, 4735 KiB  
Article
A Rapid Self-Supervised Deep-Learning-Based Method for Post-Earthquake Damage Detection Using UAV Data (Case Study: Sarpol-e Zahab, Iran)
by Narges Takhtkeshha, Ali Mohammadzadeh and Bahram Salehi
Remote Sens. 2023, 15(1), 123; https://doi.org/10.3390/rs15010123 - 26 Dec 2022
Cited by 10 | Viewed by 3723
Abstract
Immediately after an earthquake, rapid disaster management is the main challenge for relevant organizations. While satellite images have been used in the past two decades for building-damage mapping, they have rarely been utilized for the timely damage monitoring required for rescue operations. Unmanned aerial vehicles (UAVs) have recently become very popular due to their agile deployment to sites, super-high spatial resolution, and relatively low operating cost. This paper proposes a novel deep-learning-based method for rapid post-earthquake building damage detection. The method detects damages in four levels and consists of three steps. First, three different feature types—non-deep, deep, and their fusion—are investigated to determine the optimal feature extraction method. A “one-epoch convolutional autoencoder (OECAE)” is used to extract deep features from non-deep features. Then, a rule-based procedure is designed for the automatic selection of the proper training samples required by the classification algorithms in the next step. Finally, seven famous machine learning (ML) algorithms—including support vector machine (SVM), random forest (RF), gradient boosting (GB), extreme gradient boosting (XGB), decision trees (DT), k-nearest neighbors (KNN), and adaBoost (AB)—and a basic deep learning algorithm (i.e., multi-layer perceptron (MLP)) are implemented to obtain building damage maps. The results indicated that auto-training samples are feasible and superior to manual ones, with improved overall accuracy (OA) and kappa coefficient (KC) over 22% and 33%, respectively; SVM (OA = 82% and KC = 74.01%) was the most accurate AI model with a slight advantage over MLP (OA = 82% and KC = 73.98%). Additionally, it was found that the fusion of deep and non-deep features using OECAE could significantly enhance damage-mapping efficiency compared to those using either non-deep features (by an average improvement of 6.75% and 9.78% in OA and KC, respectively) or deep features (improving OA by 7.19% and KC by 10.18% on average) alone. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
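As an illustration of the final classification stage described above, the following scikit-learn sketch fuses non-deep and deep feature tables, trains an SVM, and reports overall accuracy and the kappa coefficient. The file names, train/test split, and SVM hyperparameters are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical arrays: rows are building objects, columns are features.
X_nondeep = np.load("nondeep_features.npy")   # e.g. spectral/texture statistics
X_deep    = np.load("deep_features.npy")      # e.g. autoencoder bottleneck codes
y         = np.load("damage_labels.npy")      # four damage levels (0-3)

X = np.hstack([X_nondeep, X_deep])            # fusion of both feature types

# Training samples are selected automatically (rule-based) in the paper;
# here a precomputed boolean mask stands in for that step.
train = np.load("auto_train_mask.npy").astype(bool)

clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X[train], y[train])
pred = clf.predict(X[~train])

print("OA    :", accuracy_score(y[~train], pred))
print("Kappa :", cohen_kappa_score(y[~train], pred))
```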

22 pages, 11336 KiB  
Article
Analyzing Satellite-Derived 3D Building Inventories and Quantifying Urban Growth towards Active Faults: A Case Study of Bishkek, Kyrgyzstan
by C. Scott Watson, John R. Elliott, Ruth M. J. Amey and Kanatbek E. Abdrakhmatov
Remote Sens. 2022, 14(22), 5790; https://doi.org/10.3390/rs14225790 - 16 Nov 2022
Cited by 5 | Viewed by 2211
Abstract
Earth observation (EO) data can provide large scale, high-resolution, and transferable methodologies to quantify the sprawl and vertical development of cities and are required to inform disaster risk reduction strategies for current and future populations. We synthesize the evolution of Bishkek, Kyrgyzstan, which experiences high seismic hazard, and derive new datasets relevant for seismic risk modeling. First, the urban sprawl of Bishkek (1979–2021) was quantified using built-up area land cover classifications. Second, a change detection methodology was applied to declassified KeyHole Hexagon (KH-9) and Sentinel-2 satellite images to detect areas of redevelopment within Bishkek. Finally, vertical development was quantified using multi-temporal high-resolution stereo and tri-stereo satellite imagery, which were used in a deep learning workflow to extract building footprints and assign building heights. Our results revealed urban growth of 139 km2 (92%) and redevelopment of ~26% (59 km2) of the city (1979–2021). The trends of urban growth were not reflected in all the open access global settlement footprint products that were evaluated. Building polygons that were extracted using a deep learning workflow applied to high-resolution tri-stereo (Pleiades) satellite imagery were most accurate (F1 score = 0.70) compared to stereo (WorldView-2) imagery (F1 score = 0.61). Similarly, building heights extracted using a Pleiades-derived digital elevation model were most comparable to independent measurements obtained using ICESat-2 altimetry data and field measurements (normalized absolute median deviation < 1 m). Across different areas of the city, our analysis suggested rates of building growth in the region of 2000–10,700 buildings per year, which, when combined with a trend of urban growth towards active faults, highlights the importance of up-to-date building stock exposure data in areas of seismic hazard. Deep learning methodologies applied to high-resolution imagery are a valuable monitoring tool for building stock, especially where country-level or open-source datasets are lacking or incomplete. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
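The height-validation statistic quoted in the abstract, the normalized median absolute deviation (NMAD), is straightforward to reproduce. The snippet below assumes the conventional 1.4826 scaling factor and uses invented height differences purely for illustration.

```python
import numpy as np

def nmad(errors):
    """Normalized median absolute deviation of height errors (metres).

    NMAD = 1.4826 * median(|dh - median(dh)|); the 1.4826 factor makes it
    comparable to a standard deviation under a normal error distribution.
    """
    errors = np.asarray(errors, dtype=float)
    return 1.4826 * np.median(np.abs(errors - np.median(errors)))

# Hypothetical differences between satellite-derived and ICESat-2 heights (m).
dh = np.array([0.4, -0.3, 0.8, -0.6, 0.1, 1.2, -0.2])
print(f"NMAD = {nmad(dh):.2f} m")
```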

26 pages, 8551 KiB  
Article
Spatial Validation of Spectral Unmixing Results: A Case Study of Venice City
by Rosa Maria Cavalli
Remote Sens. 2022, 14(20), 5165; https://doi.org/10.3390/rs14205165 - 15 Oct 2022
Cited by 6 | Viewed by 1794
Abstract
Since remote sensing images offer unique access to the distribution of land cover on earth, many countries are investing in this technique to monitor urban sprawl. For this purpose, the most widely used methodology is spectral unmixing which, after identifying the spectra of the mixed-pixel constituents, determines their fractional abundances in the pixel. However, the literature highlights shortcomings in spatial validation due to the lack of detailed ground truth knowledge and proposes five key requirements for accurate reference fractional abundance maps: they should cover most of the area, their spatial resolution should be higher than that of the results, they should be validated using other ground truth data, the full range of abundances should be validated, and errors in co-localization and spatial resampling should be minimized. However, most proposed reference maps met two or three requirements and none met all five. In situ and remote data acquired in Venice were exploited to meet all five requirements. Moreover, to obtain more information about the validation procedure, not only reference spectra, synthetic image, and fractional abundance models (FAMs) that met all the requirements, but also other data, that no previous work exploited, were employed: reference fractional abundance maps that met four out of five requirements, and fractional abundance maps retrieved from the synthetic image. Briefly summarizing the main results obtained from MIVIS data, the average of spectral accuracies in root mean square error was equal to 0.025; using FAMs, the average of spatial accuracies in mean absolute error (MAEk-Totals) was equal to 1.32 and more than 78% of these values were related to sensor characteristics; using reference fractional abundance maps, the average MAEk-Totals value increased to 1.97 because errors in co-localization and spatial-resampling affected about 29% of these values. In conclusion, meeting all requirements and the exploitation of different reference data increase the spatial accuracy, upgrade the validation procedure, and improve the knowledge of accuracy. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
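For context, a minimal fully constrained linear unmixing of a single pixel can be sketched with SciPy's non-negative least squares, with the sum-to-one constraint imposed softly via a heavily weighted augmentation row. The endmember spectra below are invented, and this generic formulation is not the specific pipeline used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, delta=10.0):
    """Fully constrained linear unmixing of one pixel (sketch).

    endmembers: (n_bands, n_endmembers) reference spectra
    pixel:      (n_bands,) observed spectrum
    Non-negativity comes from NNLS; sum-to-one is enforced approximately by
    appending a weighted row of ones (a common FCLS trick).
    """
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    p = np.append(pixel, delta)
    abundances, _ = nnls(E, p)
    return abundances

# Hypothetical 4-band reflectances for water, vegetation and built-up endmembers.
E = np.array([[0.02, 0.05, 0.20],
              [0.03, 0.08, 0.22],
              [0.02, 0.35, 0.25],
              [0.01, 0.40, 0.30]])
mixed = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]
print(unmix_pixel(mixed, E))   # ~ [0.6, 0.3, 0.1]
```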

29 pages, 12264 KiB  
Article
Automatic Detection and Assessment of Pavement Marking Defects with Street View Imagery at the City Scale
by Wanyue Kong, Teng Zhong, Xin Mai, Shuliang Zhang, Min Chen and Guonian Lv
Remote Sens. 2022, 14(16), 4037; https://doi.org/10.3390/rs14164037 - 18 Aug 2022
Cited by 3 | Viewed by 3381
Abstract
Pavement markings could wear out before their expected service life expires, causing traffic safety hazards. However, assessing pavement-marking conditions at the city scale was a great challenge in previous studies. In this article, we advance the method of detecting and evaluating pavement-marking defects at the city scale with Baidu Street View (BSV) images, using a case study in Nanjing. Specifically, we employ inverse perspective mapping (IPM) and a deep learning-based approach to pavement-marking extraction to make efficient use of street-view imagery. In addition, we propose an evaluation system to assess three types of pavement-marking defects, with quantitative and qualitative results provided for each image. Factors causing pavement-marking defects are discussed by mapping the spatial distribution of pavement-marking defects at the city scale. Our proposed methods are conducive to pavement-marking repair operations. Beyond this, this article can contribute to smart urbanism development by creating a new road maintenance solution and ensuring the large-scale realization of intelligent decision-making in urban infrastructure management. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
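The inverse perspective mapping (IPM) step mentioned in the abstract can be reproduced in a few lines of OpenCV. The source quadrilateral, output size, and file names below are illustrative assumptions rather than the calibration actually used for the Baidu Street View frames.

```python
import cv2
import numpy as np

# Hypothetical pixel coordinates of a road-surface quadrilateral in the
# street-view image (clockwise from top-left) and its bird's-eye target.
src = np.float32([[420, 480], [860, 480], [1230, 900], [60, 900]])
dst = np.float32([[0, 0], [400, 0], [400, 800], [0, 800]])

img = cv2.imread("street_view.jpg")                 # assumed BSV frame on disk
H = cv2.getPerspectiveTransform(src, dst)           # 3x3 homography
birdseye = cv2.warpPerspective(img, H, (400, 800))  # IPM / top-down view
cv2.imwrite("birdseye.jpg", birdseye)
```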

18 pages, 32550 KiB  
Article
Semantic Segmentation of Multispectral Images via Linear Compression of Bands: An Experiment Using RIT-18
by Yuanzhi Cai, Lei Fan and Cheng Zhang
Remote Sens. 2022, 14(11), 2673; https://doi.org/10.3390/rs14112673 - 2 Jun 2022
Cited by 4 | Viewed by 3067
Abstract
Semantic segmentation of remotely sensed imagery is a basic task for many applications, such as forest monitoring, cloud detection, and land-use planning. Many state-of-the-art networks used for this task are based on RGB image datasets and, as such, prefer three-band images as their input data. However, many remotely sensed images contain more than three spectral bands. Although it is technically possible to feed multispectral images directly to those networks, poor segmentation accuracy was often obtained. To overcome this issue, the current image dimension reduction methods are either to use feature extraction or to select an optimal combination of three bands through different trial processes. However, it is well understood that the former is often comparatively less effective, because it is not optimized towards segmentation accuracy, while the latter is less efficient due to repeated trial selections of three bands for the optimal combination. Therefore, it is meaningful to explore alternative methods that can utilize multiple spectral bands efficiently in the state-of-the-art networks for semantic segmentation of similar accuracy as the trial selection approach. In this study, a hot-swappable stem structure (LC-Net) is proposed to linearly compress the input bands to fit the input preference of typical networks. For the three commonly used network structures tested on the RIT-18 dataset (having six spectral bands), the approach proposed was found to be an equivalently effective but much more efficient alternative to the trial selection approach. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
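The core of the band-compression idea is a learnable linear (1 × 1 convolution) stem placed in front of an RGB-oriented backbone. Below is a minimal PyTorch sketch, assuming six input bands as in RIT-18; the actual LC-Net stem may differ in detail.

```python
import torch
import torch.nn as nn

class LinearCompressionStem(nn.Module):
    """Learnable 1x1 convolution that maps N input bands to 3 channels, so an
    RGB-oriented segmentation backbone can consume multispectral imagery.
    A sketch of the idea, not the paper's exact stem."""

    def __init__(self, in_bands=6, out_bands=3):
        super().__init__()
        self.compress = nn.Conv2d(in_bands, out_bands, kernel_size=1, bias=False)

    def forward(self, x):
        return self.compress(x)

x = torch.randn(2, 6, 256, 256)          # batch of 6-band RIT-18-like patches
stem = LinearCompressionStem(in_bands=6)
print(stem(x).shape)                      # torch.Size([2, 3, 256, 256])
```

Because the stem is trained jointly with the downstream network, the compression is optimized towards segmentation accuracy rather than chosen by trial band selection.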

22 pages, 4231 KiB  
Article
Optimum Feature and Classifier Selection for Accurate Urban Land Use/Cover Mapping from Very High Resolution Satellite Imagery
by Mojtaba Saboori, Saeid Homayouni, Reza Shah-Hosseini and Ying Zhang
Remote Sens. 2022, 14(9), 2097; https://doi.org/10.3390/rs14092097 - 27 Apr 2022
Cited by 13 | Viewed by 3201
Abstract
Feature selection to reduce redundancies for efficient classification is necessary but usually time consuming and challenging. This paper proposed a comprehensive analysis for optimum feature selection and the most efficient classifier for accurate urban area mapping. To this end, 136 multiscale textural features alongside a panchromatic band were initially extracted from WorldView-2, GeoEye-3, and QuickBird satellite images. The wrapper-based and filter-based feature selection were implemented to optimally select the best ten percent of the primary features from the initial feature set. Then, machine learning algorithms such as artificial neural network (ANN), support vector machine (SVM), and random forest (RF) classifiers were utilized to evaluate the efficiency of these selected features and select the most efficient classifier. The achieved optimum feature set was validated using two other images of WorldView-3 and Pleiades. The experiments revealed that RF, particle swarm optimization (PSO), and neighborhood component analysis (NCA) resulted in the most efficient classifier and wrapper-based and filter-based methods, respectively. While ANN and SVM’s process time depended on the number of input features, RF was significantly resistant to the criterion. Dissimilarity, contrast, and correlation features played the greatest contributing role in the classification performance among the textural features used in this study. These trials showed that the feature number could be reduced optimally to 14 from 137; these optimally selected features, alongside the RF classifier, can produce an F1-measure of about 0.90 for different images from five very high resolution satellite sensors for various urban geographical landscapes. These results successfully achieve our goal of assisting users by eliminating the task of optimal feature selection and classifier, thereby increasing the efficiency of urban land use/cover classification from very high resolution images. This optimal feature selection can also significantly reduce the high computational load of the feature-engineering phase in the machine and deep learning approaches. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
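As a hedged illustration of the texture measures the study found most informative, the snippet below derives dissimilarity, contrast, and correlation from grey-level co-occurrence matrices with scikit-image and feeds them to a random forest. The patch size, distances, angles, and random data are placeholders, not the study's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch, distances=(1, 2), angles=(0, np.pi / 2)):
    # Dissimilarity, contrast and correlation from the grey-level
    # co-occurrence matrix of an 8-bit panchromatic patch.
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel()
                           for p in ("dissimilarity", "contrast", "correlation")])

# Hypothetical 8-bit panchromatic patches and land-cover labels.
patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(100)]
labels = np.random.randint(0, 4, 100)

X = np.array([glcm_features(p) for p in patches])
rf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```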

16 pages, 6540 KiB  
Article
RegGAN: An End-to-End Network for Building Footprint Generation with Boundary Regularization
by Qingyu Li, Stefano Zorzi, Yilei Shi, Friedrich Fraundorfer and Xiao Xiang Zhu
Remote Sens. 2022, 14(8), 1835; https://doi.org/10.3390/rs14081835 - 11 Apr 2022
Cited by 10 | Viewed by 3518
Abstract
Accurate and reliable building footprint maps are of great interest in many applications, e.g., urban monitoring, 3D building modeling, and geographical database updating. When compared to traditional methods, the deep-learning-based semantic segmentation networks have largely boosted the performance of building footprint generation. However, they still are not capable of delineating structured building footprints. Most existing studies dealing with this issue are based on two steps, which regularize building boundaries after the semantic segmentation networks are implemented, making the whole pipeline inefficient. To address this, we propose an end-to-end network for the building footprint generation with boundary regularization, which is termed RegGAN. Our method is based on a generative adversarial network (GAN). Specifically, a multiscale discriminator is proposed to distinguish the input between false and true, and a generator is utilized to learn from the discriminator’s response to generate more realistic building footprints. We propose to incorporate regularized loss in the objective function of RegGAN, in order to further enhance sharp building boundaries. The proposed method is evaluated on two datasets with varying spatial resolutions: the INRIA dataset (30 cm/pixel) and the ISPRS dataset (5 cm/pixel). Experimental results show that RegGAN is able to well preserve regular shapes and sharp building boundaries, which outperforms other competitors. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)

18 pages, 4346 KiB  
Article
Gross Floor Area Estimation from Monocular Optical Image Using the NoS R-CNN
by Chao Ji and Hong Tang
Remote Sens. 2022, 14(7), 1567; https://doi.org/10.3390/rs14071567 - 24 Mar 2022
Cited by 2 | Viewed by 2259
Abstract
Gross floor area is defined as the product of number of building stories and its base area. Gross floor area acquisition is the core problem to estimate floor area ratio, which is an important indicator for many geographical analyses. High data acquisition cost or inherent defect of methods for existing gross floor area acquisition methods limit their applications in a wide range. In this paper we proposed three instance-wise gross floor area estimation methods in various degrees of end-to-end learning from monocular optical images based on the NoS R-CNN, which is a deep convolutional neural network to estimate the number of building stories. To the best of our knowledge, this is the first attempt to estimate instance-wise gross floor area from monocular optical satellite images. For comparing the performance of the proposed three methods, experiments on our dataset from nine cities in China were carried out, and the results were analyzed in detail in order to explore the reasons for the performance gap between the different methods. The results show that there is an inverse relationship between the model performance and the degree of end-to-end learning for base area estimation task and gross floor area estimation task. The quantitative and qualitative evaluations of the proposed methods indicate that the performances of proposed methods for accurate GFA estimation are promising for potential applications using large-scale remote sensing images. The proposed methods provide a new perspective for gross floor area/floor area ratio estimation and downstream tasks such as population estimation, living conditions assessment, etc. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
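The definitions the abstract relies on are simple to state numerically: gross floor area (GFA) is the number of stories times the base area, and floor area ratio (FAR) is the total GFA divided by the plot area. A short worked example with invented numbers:

```python
# GFA = number of stories x building base area
# FAR = total GFA within a plot / plot area
buildings = [
    {"stories": 6, "base_area_m2": 450.0},
    {"stories": 12, "base_area_m2": 300.0},
]
plot_area_m2 = 5000.0

gfa = sum(b["stories"] * b["base_area_m2"] for b in buildings)   # 6300.0 m^2
far = gfa / plot_area_m2                                          # 1.26
print(gfa, far)
```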

18 pages, 27782 KiB  
Article
An Object-Oriented Approach to the Classification of Roofing Materials Using Very High-Resolution Satellite Stereo-Pairs
by Francesca Trevisiol, Alessandro Lambertini, Francesca Franci and Emanuele Mandanici
Remote Sens. 2022, 14(4), 849; https://doi.org/10.3390/rs14040849 - 11 Feb 2022
Cited by 14 | Viewed by 4756
Abstract
The availability of multispectral images, with both high spatial and spectral resolution, makes it possible to obtain valuable information about complex urban environments, reducing the need for more expensive surveying techniques. Here, a methodology is tested for the semi-automatic extraction of buildings and the mapping of the main roofing materials over an urban area of approximately 100 km2, including the entire city of Bologna (Italy). The methodology follows an object-oriented approach and exploits a limited number of training samples. After a validation based on field inspections and close-range photos acquired by a drone, the final map achieved an overall accuracy of 94% (producer accuracy 79%) regarding the building extraction and of 91% for the classification of the roofing materials. The proposed approach proved to be flexible enough to catch the strong variability of the urban texture in different districts and can be easily reproduced in other contexts, as only satellite imagery is required for the mapping. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)

23 pages, 9265 KiB  
Article
Extracting Urban Road Footprints from Airborne LiDAR Point Clouds with PointNet++ and Two-Step Post-Processing
by Haichi Ma, Hongchao Ma, Liang Zhang, Ke Liu and Wenjun Luo
Remote Sens. 2022, 14(3), 789; https://doi.org/10.3390/rs14030789 - 8 Feb 2022
Cited by 14 | Viewed by 3343
Abstract
In this paper, a novel framework for the automatic extraction of road footprints from airborne LiDAR point clouds in urban areas is proposed. The extraction process consisted of three phases: The first phase is to extract road points by using the deep learning model PointNet++, where the features of the input data include not only those selected from raw LiDAR points, such as 3D coordinate values, intensity, etc., but also the digital number (DN) of co-registered images and generated geometric features to describe a strip-like road. Then, the road points from PointNet++ were post-processed based on graph-cut and constrained triangulation irregular networks, where both the commission and omission errors were greatly reduced. Finally, collinearity and width similarity were proposed to estimate the connection probability of road segments, thereby improving the connectivity and completeness of the road network represented by centerlines. Experiments conducted on the Vaihingen data show that the proposed framework outperformed others in terms of completeness and correctness; in addition, some narrower residential streets with 2 m width, which have normally been neglected by previous studies, were extracted. The completeness and the correctness of the extracted road points were 84.7% and 79.7%, respectively, while the completeness and the correctness of the extracted centerlines were 97.0% and 86.3%, respectively. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
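One plausible way to read the "collinearity and width similarity" cue is sketched below: the cosine of the angle between segment directions multiplied by the ratio of road widths gives a crude connection score between two centerline segments. This is an interpretation for illustration only and does not reproduce the paper's exact formulation.

```python
import numpy as np

def connection_score(seg_a, seg_b):
    """Heuristic connection score between two road centerline segments.

    seg = (start (x, y), end (x, y), width_m). Collinearity is the absolute
    cosine of the angle between segment directions; width similarity is the
    ratio of the narrower to the wider road. Both lie in [0, 1].
    """
    def direction(p, q):
        v = np.asarray(q, float) - np.asarray(p, float)
        return v / np.linalg.norm(v)

    collinearity = abs(np.dot(direction(*seg_a[:2]), direction(*seg_b[:2])))
    width_sim = min(seg_a[2], seg_b[2]) / max(seg_a[2], seg_b[2])
    return collinearity * width_sim

a = ((0, 0), (50, 2), 6.0)     # segment endpoints in metres, width 6 m
b = ((55, 3), (110, 5), 5.5)
print(connection_score(a, b))
```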

19 pages, 10380 KiB  
Article
Object-Based High-Rise Building Detection Using Morphological Building Index and Digital Map
by Sejung Jung, Kirim Lee and Won Hee Lee
Remote Sens. 2022, 14(2), 330; https://doi.org/10.3390/rs14020330 - 11 Jan 2022
Cited by 8 | Viewed by 3112
Abstract
High-rise buildings (HRBs) as modern and visually unique land use continue to increase due to urbanization. Therefore, large-scale monitoring of HRB is very important for urban planning and environmental protection. This paper performed object-based HRB detection using high-resolution satellite image and digital map. Three study areas were acquired from KOMPSAT-3A, KOMPSAT-3, and WorldView-3, and object-based HRB detection was performed using the direction according to relief displacement by satellite image. Object-based multiresolution segmentation images were generated, focusing on HRB in each satellite image, and then combined with pixel-based building detection results obtained from MBI through majority voting to derive object-based building detection results. After that, to remove objects misdetected by HRB, the direction between HRB in the polygon layer of the digital map HRB and the HRB in the object-based building detection result was calculated. It was confirmed that the direction between the two calculated using the centroid coordinates of each building object converged with the azimuth angle of the satellite image, and results outside the error range were removed from the object-based HRB results. The HRBs in satellite images were defined as reference data, and the performance of the results obtained through the proposed method was analyzed. In addition, to evaluate the efficiency of the proposed technique, it was confirmed that the proposed method provides relatively good performance compared to the results of object-based HRB detection using shadows. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
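The relief-displacement check described in the abstract amounts to comparing the azimuth from the digital-map footprint centroid to the detected object centroid against the azimuth angle of the satellite image, and discarding detections outside a tolerance. The sketch below assumes projected (easting, northing) coordinates and an arbitrary tolerance; both conventions are assumptions, not the paper's values.

```python
import numpy as np

def azimuth_deg(c_from, c_to):
    # Azimuth (degrees clockwise from north) from one centroid to another,
    # with coordinates given as (easting, northing).
    dx = c_to[0] - c_from[0]
    dy = c_to[1] - c_from[1]
    return np.degrees(np.arctan2(dx, dy)) % 360.0

def keep_high_rise(map_centroid, detected_centroid, satellite_azimuth_deg, tol_deg=15.0):
    # Keep a detected object only if its displacement direction relative to the
    # digital-map footprint agrees with the image azimuth (relief displacement).
    diff = abs(azimuth_deg(map_centroid, detected_centroid) - satellite_azimuth_deg)
    diff = min(diff, 360.0 - diff)
    return diff <= tol_deg

print(keep_high_rise((500.0, 1000.0), (508.0, 1013.0), satellite_azimuth_deg=32.0))
```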

30 pages, 21043 KiB  
Article
Is It All the Same? Mapping and Characterizing Deprived Urban Areas Using WorldView-3 Superspectral Imagery. A Case Study in Nairobi, Kenya
by Stefanos Georganos, Angela Abascal, Monika Kuffer, Jiong Wang, Maxwell Owusu, Eléonore Wolff and Sabine Vanhuysse
Remote Sens. 2021, 13(24), 4986; https://doi.org/10.3390/rs13244986 - 8 Dec 2021
Cited by 14 | Viewed by 4335
Abstract
In the past two decades, Earth observation (EO) data have been utilized for studying the spatial patterns of urban deprivation. Given the scope of many existing studies, it is still unclear how very-high-resolution EO data can help to improve our understanding of the multidimensionality of deprivation within settlements on a city-wide scale. In this work, we assumed that multiple facets of deprivation are reflected by varying morphological structures within deprived urban areas and can be captured by EO information. We set out by staying on the scale of an entire city, while zooming into each of the deprived areas to investigate deprivation through land cover (LC) variations. To test the generalizability of our workflow, we assembled multiple WorldView-3 datasets (multispectral and shortwave infrared) with varying numbers of bands and image features, allowing us to explore computational efficiency, complexity, and scalability while keeping the model architecture consistent. Our workflow was implemented in the city of Nairobi, Kenya, where more than sixty percent of the city population lives in deprived areas. Our results indicate that detailed LC information that characterizes deprivation can be mapped with an accuracy of over seventy percent by only using RGB-based image features. Including the near-infrared (NIR) band appears to bring significant improvements in the accuracy of all classes. Equally important, we were able to categorize deprived areas into varying profiles manifested through LC variability using a gridded mapping approach. The types of deprivation profiles varied significantly both within and between deprived areas. The results could be informative for practical interventions such as land-use planning policies for urban upgrading programs. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)

18 pages, 45798 KiB  
Article
Efficient Occluded Road Extraction from High-Resolution Remote Sensing Imagery
by Dejun Feng, Xingyu Shen, Yakun Xie, Yangge Liu and Jian Wang
Remote Sens. 2021, 13(24), 4974; https://doi.org/10.3390/rs13244974 - 7 Dec 2021
Cited by 11 | Viewed by 3202
Abstract
Road extraction is important for road network renewal, intelligent transportation systems and smart cities. This paper proposes an effective method to improve road extraction accuracy and reconstruct the broken road lines caused by ground occlusion. Firstly, an attention mechanism-based convolution neural network is established to enhance feature extraction capability. By highlighting key areas and restraining interference features, the road extraction accuracy is improved. Secondly, for the common broken road problem in the extraction results, a heuristic method based on connected domain analysis is proposed to reconstruct the road. An experiment is carried out on a benchmark dataset to prove the effectiveness of this method, and the result is compared with that of several famous deep learning models including FCN8s, SegNet, U-Net and D-Linknet. The comparison shows that this model increases the IOU value and the F1 score by 3.35–12.8% and 2.41–9.8%, respectively. Additionally, the result proves the proposed method is effective at extracting roads from occluded areas. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
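Since the comparison above is reported in IoU and F1, the small helper below shows how both are computed from a binary road mask; the toy arrays are only for illustration.

```python
import numpy as np

def iou_and_f1(pred, truth):
    # Pixel-wise IoU and F1 for a binary road mask (1 = road, 0 = background).
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_and_f1(pred, truth))   # (0.5, 0.666...)
```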