Search Results (26)

Search Parameters:
Keywords = very high resolution (VHR) SAR

20 pages, 5323 KiB  
Article
An Object-Based Deep Learning Approach for Building Height Estimation from Single SAR Images
by Babak Memar, Luigi Russo, Silvia Liberata Ullo and Paolo Gamba
Remote Sens. 2025, 17(17), 2922; https://doi.org/10.3390/rs17172922 - 22 Aug 2025
Abstract
The accurate estimation of building heights using very-high-resolution (VHR) synthetic aperture radar (SAR) imagery is crucial for various urban applications. This paper introduces a deep learning (DL)-based methodology for automated building height estimation from single VHR COSMO-SkyMed images: an object-based regression approach based on bounding box detection followed by height estimation. This model was trained and evaluated on a unique multi-continental dataset comprising eight geographically diverse cities across Europe, North and South America, and Asia, employing a cross-validation strategy to explicitly assess out-of-distribution (OOD) generalization. The results demonstrate highly promising performance, particularly on European cities where the model achieves a Mean Absolute Error (MAE) of approximately one building story (2.20 m in Munich), significantly outperforming recent state-of-the-art methods in similar OOD scenarios. Despite the increased variability observed when generalizing to cities in other continents, particularly in Asia with its distinct urban typologies and the prevalence of high-rise structures, this study underscores the significant potential of DL for robust cross-city and cross-continental transfer learning in building height estimation from single VHR SAR data. Full article
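As a quick illustration of the headline metric (not code from the paper), the sketch below computes the per-building Mean Absolute Error that the abstract reports, e.g., the roughly 2.20 m figure for Munich; the height values used here are invented.

```python
import numpy as np

def building_height_mae(pred_heights, ref_heights):
    """Mean Absolute Error (metres) between predicted and reference building heights."""
    pred = np.asarray(pred_heights, dtype=float)
    ref = np.asarray(ref_heights, dtype=float)
    return float(np.mean(np.abs(pred - ref)))

# Hypothetical per-building heights for one test city (illustrative values only).
predicted = [12.4, 8.9, 31.0, 15.2]
reference = [14.0, 9.5, 28.5, 16.0]
print(f"MAE = {building_height_mae(predicted, reference):.2f} m")
```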

18 pages, 2748 KiB  
Article
Analysis of Scattering Mechanisms in SAR Image Simulations of Japanese Wooden Buildings Damaged by Earthquake
by Yang Yu and Wataru Takeuchi
Buildings 2024, 14(11), 3585; https://doi.org/10.3390/buildings14113585 - 12 Nov 2024
Viewed by 1463
Abstract
The difficulty in identifying collapsed houses and damaged structures in synthetic aperture radar (SAR) images after natural disasters represents a significant challenge in the monitoring of urban structural deformation using SAR. SAR image simulation was conducted on a three-dimensional model of a typical wooden building in Japan to analyze the scattering mechanism of the structure in collapsed and uncollapsed states. Based on the physical properties of the buildings, a correlation was established between the simulated SAR image feature signals and the geometric structures of the buildings. The findings indicate that SAR scattering is more uniform for uncollapsed structures and is predominantly influenced by their geometry. At low incidence angles, single reflections were the predominant phenomenon, whereas at high incidence angles, multiple reflections became more prevalent. The uncollapsed building’s facade formed a dihedral angle, exhibiting bright lines in the SAR image. Multiple reflections occurred at the edges of the building and floor junctions. These findings follow the theoretical predictions. In the case of the collapsed buildings, multiple reflections occurred with greater frequency, and irregular scattering was observed. Despite the increased number of scattering paths, some walls still exhibited single reflections. The collapsed structures demonstrated a reduced sensitivity to alterations in the angle of incidence. Full article
(This article belongs to the Section Building Structures)

7 pages, 4892 KiB  
Proceeding Paper
Performance Evaluation of Urban Canopy Parameters Derived from VHR Optical Stereo Data
by Kshama Gupta, Shweta Khatriker and Ashutosh Bhardwaj
Environ. Sci. Proc. 2024, 29(1), 62; https://doi.org/10.3390/ECRS2023-16646 - 6 Nov 2023
Cited by 1 | Viewed by 826
Abstract
Urban canopy parameters (UCPs) are parameters which are utilized to define the thermal, radiative, and roughness properties of urban areas, which have a significant impact on the urban microclimate. The rapidly growing urbanization, especially in developing regions, leads to the modification of urban geometry, which calls for the characterization of UCPs in the countries of such regions to account for high population pressure, heterogeneous urban environments, and the subsequent impacts on global climate change. A research study conducted in Delhi, India, found that very-high-resolution (VHR) optical satellite stereo datasets provide reasonable accuracy with respect to the extraction of building heights and footprints, which are further employed for the computation of UCPs. However, the study evaluates only the key input parameters due to the non-availability of the 3D geodatabase. Hence, in this study, an attempt has been made to evaluate all UCPs derived from VHR optical stereo data, along with the key input parameters, against reference data collected from the field in the city of Bhubaneshwar, India. Performance evaluation with reference-data-derived UCPs shows that all the UCPs retrieved from VHR optical stereo data have a high prediction accuracy. Overall bias, overall mean absolute error (MAE), and root mean square error (RMSE) from satellite-derived UCPs were found to be better than 1 m for most of the UCPs, except for building-surface-area-to-plan-area ratio, height-to-width ratio, and complete aspect ratio, which were found to be less than 2.7 m. The correlation coefficient values were also observed to be more than 0.7 for most of the UCPs, except plan area density, roughness length, and frontal area density. This study concludes that UCPs derived from VHR optical stereo data have high accuracy, even in the low-to-medium-rise urban environments of the study area. The study has a high potential to be replicated in countries in developing regions which have similar development characteristics and face resource and policy constraints with respect to the availability of airborne LiDAR and SAR data. Full article
(This article belongs to the Proceedings of ECRS 2023)
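As a hedged sketch of how such a performance evaluation is typically scored (not the authors' code), the snippet below computes the overall bias, MAE, RMSE, and correlation coefficient between satellite-derived and field-reference UCP values; the sample numbers are invented.

```python
import numpy as np

def evaluate_ucp(satellite_vals, reference_vals):
    """Bias, MAE, RMSE and Pearson correlation of satellite-derived vs reference UCP values."""
    s = np.asarray(satellite_vals, dtype=float)
    r = np.asarray(reference_vals, dtype=float)
    diff = s - r
    return {
        "bias": float(diff.mean()),
        "mae": float(np.abs(diff).mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
        "r": float(np.corrcoef(s, r)[0, 1]),
    }

# Hypothetical mean building heights (m) for a few sample plots.
print(evaluate_ucp([9.8, 12.1, 6.4, 15.0], [10.0, 11.5, 7.0, 14.2]))
```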

19 pages, 59883 KiB  
Article
Query-Based Cascade Instance Segmentation Network for Remote Sensing Image Processing
by Enping Chen, Maojun Li, Qian Zhang and Man Chen
Appl. Sci. 2023, 13(17), 9704; https://doi.org/10.3390/app13179704 - 28 Aug 2023
Cited by 2 | Viewed by 1724
Abstract
Instance segmentation (IS) of remote sensing (RS) images can not only determine object location at the box-level but also provide instance masks at the pixel-level. It plays an important role in many fields, such as ocean monitoring, urban management, and resource planning. Compared with natural images, RS images usually pose many challenges, such as background clutter, significant changes in object size, and complex instance shapes. To this end, we propose a query-based RS image cascade IS network (QCIS-Net). The network mainly includes key components, such as the efficient feature extraction (EFE) module, multistage cascade task (MSCT) head, and joint loss function, which can characterize the location and visual information of instances in RS images through efficient queries. Among them, the EFE module combines global information from the Transformer architecture to solve the problem of long-term dependencies in visual space. The MSCT head uses a dynamic convolution kernel based on the query representation to focus on the region of interest, which facilitates the association between detection and segmentation tasks through a multistage structural design that benefits both tasks. The elaborately designed joint loss function and the use of the transfer-learning technique based on a well-known dataset (MS COCO) can guide the QCIS-Net in training and generating the final instance mask. Experimental results show that the well-designed components of the proposed method have a positive impact on the RS image instance segmentation task. It achieves mask average precision (AP) values of 75.2% and 73.3% on the SAR ship detection dataset (SSDD) and Northwestern Polytechnical University Very-High-Resolution dataset (NWPU-VHR-10 dataset), outperforming the other competitive models. The method proposed in this paper can enhance the practical application efficiency of RS images. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
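For readers wondering how the reported mask AP values are usually computed, the hedged sketch below uses the standard pycocotools evaluator; it assumes the SSDD/NWPU VHR-10 annotations and the model's predictions have been exported to COCO-format JSON files (the file names are placeholders, and this is not the authors' evaluation script).

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder file names; assumes ground truth and predictions are in COCO JSON format.
coco_gt = COCO("nwpu_vhr10_val_annotations.json")
coco_dt = coco_gt.loadRes("qcis_net_predictions.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")  # mask-level AP
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, etc.
```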

23 pages, 11802 KiB  
Article
Satellite-Based Identification and Characterization of Extreme Ice Features: Hummocks and Ice Islands
by Igor Zakharov, Pradeep Bobby, Desmond Power, Sherry Warren and Mark Howell
Remote Sens. 2023, 15(16), 4065; https://doi.org/10.3390/rs15164065 - 17 Aug 2023
Cited by 3 | Viewed by 1772
Abstract
Satellite-based techniques for the monitoring of extreme ice features (EIFs) in the Canadian Arctic were investigated and demonstrated using synthetic aperture radar (SAR) and electro-optical data sources. The main EIF types include large ice islands and ice-island fragments, multiyear hummock fields (MYHF) and other EIFs, such as fragments of MYHF and large, newly formed hummock fields. The main objectives of the paper included demonstrating various satellite capabilities over specific regions in the Canadian Arctic to assess their utility for detecting and characterizing EIFs. Stereo pairs of very-high-resolution (VHR) imagery provided detailed measurements of sea ice topography and were used as validation information for evaluation of the applied techniques. Single-pass interferometric SAR (InSAR) data were used to extract ice topography including hummocks and ice islands. Shape from shading and height from shadow techniques enabled ice topography to be extracted from a single image. A new method for the identification of EIFs in sea ice based on the thermal infrared band of Landsat 8 was introduced. The performance of the methods for ice feature height estimation was evaluated by comparison with stereo or InSAR digital elevation models (DEMs). Full polarimetric RADARSAT-2 data were demonstrated to be useful for the identification of ice islands. Full article
(This article belongs to the Special Issue Recent Advances in Sea Ice Research Using Satellite Data)
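The "height from shadow" idea mentioned in the abstract reduces, under a flat-surface assumption, to simple trigonometry; the sketch below is a generic first-order version (not the paper's exact procedure), with the incidence angle measured from the local vertical and invented example numbers.

```python
import numpy as np

def height_from_radar_shadow(shadow_length_ground_m, incidence_angle_deg):
    """
    First-order height estimate from the ground-range length of a radar shadow
    over flat surroundings: h ≈ L_shadow / tan(theta_inc).
    """
    theta = np.deg2rad(incidence_angle_deg)
    return shadow_length_ground_m / np.tan(theta)

# Hypothetical example: a 6 m shadow observed at 30 degrees incidence.
print(f"estimated feature height ≈ {height_from_radar_shadow(6.0, 30.0):.1f} m")
```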

31 pages, 5152 KiB  
Review
Transformers in Remote Sensing: A Survey
by Abdulaziz Amer Aleissaee, Amandeep Kumar, Rao Muhammad Anwer, Salman Khan, Hisham Cholakkal, Gui-Song Xia and Fahad Shahbaz Khan
Remote Sens. 2023, 15(7), 1860; https://doi.org/10.3390/rs15071860 - 30 Mar 2023
Cited by 198 | Viewed by 18939
Abstract
Deep learning-based algorithms have gained massive popularity in different areas of remote sensing image analysis over the past decade. Recently, transformer-based architectures, originally introduced in natural language processing, have pervaded the computer vision field, where the self-attention mechanism has been utilized as a replacement for the popular convolution operator for capturing long-range dependencies. Inspired by recent advances in computer vision, the remote sensing community has also witnessed an increased exploration of vision transformers for a diverse set of tasks. Although a number of surveys have focused on transformers in computer vision in general, to the best of our knowledge we are the first to present a systematic review of recent advances based on transformers in remote sensing. Our survey covers more than 60 recent transformer-based methods for different remote sensing problems in sub-areas of remote sensing: very high-resolution (VHR), hyperspectral (HSI) and synthetic aperture radar (SAR) imagery. We conclude the survey by discussing different challenges and open issues of transformers in remote sensing. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Classification II)
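Because the survey centres on self-attention as a replacement for convolution, a minimal single-head scaled dot-product attention in NumPy is sketched below; it is a generic textbook formulation, not code from any surveyed method.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """
    Single-head scaled dot-product self-attention over a sequence of tokens
    (e.g., image patches). X: (n_tokens, d_model); Wq/Wk/Wv: (d_model, d_head).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over tokens
    return weights @ V                                 # attention-weighted values

# Toy example: 16 image patches embedded in 32 dims, projected to an 8-dim head.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 32))
out = self_attention(X, *(rng.normal(size=(32, 8)) for _ in range(3)))
print(out.shape)  # (16, 8)
```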

21 pages, 127743 KiB  
Article
Unsupervised Change Detection for VHR Remote Sensing Images Based on Temporal-Spatial-Structural Graphs
by Junzheng Wu, Weiping Ni, Hui Bian, Kenan Cheng, Qiang Liu, Xue Kong and Biao Li
Remote Sens. 2023, 15(7), 1770; https://doi.org/10.3390/rs15071770 - 25 Mar 2023
Cited by 1 | Viewed by 2621
Abstract
With the aim of automatically extracting fine change information from ground objects, change detection (CD) for very high resolution (VHR) remote sensing images is essential in various applications. However, the increase in spatial resolution, more complicated interactive relationships of ground objects, more evident diversity of spectra, and more severe speckle noise make accurately identifying relevant changes more challenging. To address these issues, an unsupervised temporal-spatial-structural graph is proposed for CD tasks. Treating each superpixel as a node of the graph, the structural information of ground objects presented by the parent–offspring relationships between coarse and fine segmentation scales is introduced to define the temporal-structural neighborhood, which is then incorporated with the spatial neighborhood to form the temporal-spatial-structural neighborhood. The graphs defined on such neighborhoods extend the interactive range among nodes from two dimensions to three dimensions, which more fully exploits the structural and contextual information of bi-temporal images. Subsequently, a metric function is designed according to the spectral and structural similarity between graphs to measure the level of changes, which is more reasonable due to the comprehensive utilization of temporal-spatial-structural information. The experimental results on both VHR optical and SAR images demonstrate the superiority and effectiveness of the proposed method. Full article
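As a much-simplified, hedged stand-in for the graph construction described above (superpixels as nodes, spectral comparison between dates), the sketch below segments the first-date image with SLIC and scores each object by the distance between its per-date mean spectra; the temporal-spatial-structural neighborhoods and the structural similarity term of the paper are omitted.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_change_level(img_t1, img_t2, n_segments=500):
    """
    Crude object-level change score for two co-registered (H, W, bands) images:
    SLIC superpixels of the first-date image act as nodes, and each node's
    change level is the distance between its mean spectra at the two dates.
    (Illustration only; the paper additionally builds graph neighborhoods.)
    """
    labels = slic(img_t1, n_segments=n_segments, compactness=10, start_label=0)
    change = np.zeros(labels.max() + 1)
    for lab in range(labels.max() + 1):
        mask = labels == lab
        change[lab] = np.linalg.norm(img_t1[mask].mean(axis=0) - img_t2[mask].mean(axis=0))
    return labels, change
```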

24 pages, 10438 KiB  
Article
A Dual Neighborhood Hypergraph Neural Network for Change Detection in VHR Remote Sensing Images
by Junzheng Wu, Ruigang Fu, Qiang Liu, Weiping Ni, Kenan Cheng, Biao Li and Yuli Sun
Remote Sens. 2023, 15(3), 694; https://doi.org/10.3390/rs15030694 - 24 Jan 2023
Cited by 9 | Viewed by 3168
Abstract
Very high spatial resolution (VHR) remote sensing images have been an extremely valuable source for monitoring changes occurring on the Earth’s surface. However, precisely detecting relevant changes in VHR images remains a challenge, due to the complexity of the relationships among ground objects. To address this limitation, a dual neighborhood hypergraph neural network is proposed in this article, which combines multiscale superpixel segmentation and hypergraph convolution to model and exploit the complex relationships. First, the bi-temporal image pairs are segmented under two scales and fed to a pre-trained U-net to obtain node features by treating each object under the fine scale as a node. The dual neighborhood is then defined using the father-child and adjacent relationships of the segmented objects to construct the hypergraph, which permits models to represent higher-order structured information far more complex than the conventional pairwise relationships. The hypergraph convolutions are conducted on the constructed hypergraph to propagate the label information from a small number of labeled nodes to the other unlabeled ones by the node-edge-node transformation. Moreover, to alleviate the problem of imbalanced sampling, the focal loss function is adopted to train the hypergraph neural network. The experimental results on optical, SAR and heterogeneous optical/SAR data sets demonstrate that the proposed method offers better effectiveness and robustness compared to many state-of-the-art methods. Full article
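The focal loss mentioned above for handling the changed/unchanged imbalance is a standard component; a minimal binary version (the usual Lin et al. formulation, not the authors' training code) is sketched below in PyTorch.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """
    Binary focal loss: down-weights easy, well-classified examples so that the
    rare positive (changed) nodes dominate the gradient less unevenly.
    logits: raw scores per node; targets: 0/1 labels.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets.float(), reduction="none")
    p_t = torch.exp(-bce)                                 # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Toy usage on random node scores with an imbalanced label vector.
logits = torch.randn(8)
labels = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])
print(focal_loss(logits, labels))
```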

18 pages, 13207 KiB  
Article
On the Discovery of a Roman Fortified Site in Gafsa, Southern Tunisia, Based on High-Resolution X-Band Satellite Radar Data
by Nabil Bachagha, Wenbin Xu, Xingjun Luo, Nicola Masini, Mondher Brahmi, Xinyuan Wang, Fatma Souei and Rosa Lasaponara
Remote Sens. 2022, 14(9), 2128; https://doi.org/10.3390/rs14092128 - 28 Apr 2022
Cited by 2 | Viewed by 3831
Abstract
The increasing availability of multiplatform, multiband, very-high-resolution (VHR) satellite synthetic aperture radar (SAR) data has attracted the attention of a growing number of scientists and archeologists. In particular, over the last two decades, archeological research has benefited from SAR development mainly due to its unique ability to acquire scenes both at night and during the day under all weather conditions, its penetration capability, and the provided polarimetric and interferometric information. This paper explored the potential of a novel method (nonlocal (NL)-SAR) using TerraSAR-X (TSX) and Constellation of Small Satellites for Mediterranean Basin Observation (COSMO)-SkyMed (CSK) data to detect buried archeological remains in steep, rugged terrain. In this investigation, two test sites were selected in southern Tunisia, home to some of the most valuable and well-preserved limes from the Roman Empire. To enhance the subtle signals linked to archeological features, the speckle noise introduced into SAR data by the environment and SAR system must be mitigated. Accordingly, the NL-SAR method was applied to SAR data pertaining to these two significant test sites. Overall, the investigation (i) revealed a fortified settlement from the Roman Empire and (ii) identified an unknown urban area abandoned during this period via a field survey, thus successfully confirming the capability of SAR data to reveal unknown, concealed archeological sites, even in areas with a complex topography. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Exploring Ancient History)
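NL-SAR itself is a dedicated nonlocal estimator for SAR, InSAR and PolSAR data; purely as a loose, hedged illustration of the general nonlocal despeckling idea, the sketch below applies scikit-image's generic nonlocal-means filter to a log-transformed amplitude image. It is not the algorithm used in the paper.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def despeckle_amplitude(amplitude):
    """
    Generic nonlocal-means filtering of a SAR amplitude image as a simple
    stand-in for a dedicated speckle filter. Working on the log-amplitude
    turns multiplicative speckle into roughly additive noise.
    """
    log_amp = np.log1p(amplitude.astype(float))
    sigma = float(np.mean(estimate_sigma(log_amp)))
    filtered = denoise_nl_means(log_amp, h=1.15 * sigma, sigma=sigma,
                                patch_size=7, patch_distance=11, fast_mode=True)
    return np.expm1(filtered)
```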

21 pages, 10909 KiB  
Technical Note
Data Augmentation for Building Footprint Segmentation in SAR Images: An Empirical Study
by Sandhi Wangiyana, Piotr Samczyński and Artur Gromek
Remote Sens. 2022, 14(9), 2012; https://doi.org/10.3390/rs14092012 - 22 Apr 2022
Cited by 20 | Viewed by 4281
Abstract
Building footprints provide essential information for mapping, disaster management, and other large-scale studies. Synthetic Aperture Radar (SAR) provides consistent data availability over optical images owing to its unique properties, which consequently makes it more challenging to interpret. Previous studies have demonstrated the success of automated methods using Convolutional Neural Networks to detect buildings in Very High Resolution (VHR) SAR images. However, the scarcity of such datasets that are available to the public can limit research progress in this field. We explored the impact of several data augmentation (DA) methods on the performance of building detection on a limited dataset of SAR images. Our results show that geometric transformations are more effective than pixel transformations. The former improves the detection of objects with different scale and rotation variations. The latter creates textural changes that help differentiate edges better, but amplifies non-object patterns, leading to increased false positive predictions. We experimented with applying DA at different stages and concluded that applying similar DA methods in training and inference showed the best performance compared with DA applied only during training. Some DA can alter key features of a building’s representation in radar images. Among them are vertical flips and quarter circle rotations, which yielded the worst performance. DA methods should be used in moderation to prevent unwanted transformations outside the possible object variations. Error analysis, either through statistical methods or manual inspection, is recommended to understand the bias presented in the dataset, which is useful in selecting suitable DAs. The findings from this study can provide potential guidelines for future research in selecting DA methods for segmentation tasks in radar imagery. Full article
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
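To make the geometric-versus-pixel distinction concrete, here is a hedged example of a geometric augmentation applied jointly to a SAR chip and its building mask (NumPy/SciPy, not the study's actual pipeline). The specific transform choices are illustrative; in line with the finding above, it avoids vertical flips and quarter-circle rotations.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_pair(sar_chip, building_mask, rng):
    """
    Apply the same geometric transform to a SAR image chip and its mask.
    Horizontal flips and small-angle rotations are used here for illustration;
    the study found vertical flips and quarter-circle rotations harmful.
    """
    img, mask = sar_chip, building_mask
    if rng.random() < 0.5:                              # horizontal flip
        img, mask = np.fliplr(img), np.fliplr(mask)
    angle = rng.uniform(-10.0, 10.0)                    # small rotation only
    img = rotate(img, angle, reshape=False, order=1, mode="reflect")
    mask = rotate(mask, angle, reshape=False, order=0, mode="reflect")
    return img, mask

rng = np.random.default_rng(42)
chip = np.random.rand(256, 256).astype(np.float32)
mask = (np.random.rand(256, 256) > 0.7).astype(np.uint8)
aug_chip, aug_mask = augment_pair(chip, mask, rng)
```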

15 pages, 5704 KiB  
Article
Joint Sparsity for TomoSAR Imaging in Urban Areas Using Building POI and TerraSAR-X Staring Spotlight Data
by Lei Pang, Yanfeng Gai and Tian Zhang
Sensors 2021, 21(20), 6888; https://doi.org/10.3390/s21206888 - 17 Oct 2021
Cited by 6 | Viewed by 2561
Abstract
Synthetic aperture radar (SAR) tomography (TomoSAR) can obtain 3D imaging models of observed urban areas and can also discriminate different scatterers in an azimuth–range pixel unit. Recently, compressive sensing (CS) has been applied to TomoSAR imaging with the use of very-high-resolution (VHR) SAR images delivered by modern SAR systems, such as TerraSAR-X and TanDEM-X. Compared with the traditional Fourier transform and spectrum estimation methods, using sparse information for TomoSAR imaging can obtain super-resolution power and robustness and is only slightly affected by sidelobe effects. However, due to the tight control of SAR satellite orbits, the number of acquisitions is usually too low to form a synthetic aperture in the elevation direction, and the baseline distribution of acquisitions is also uneven. In addition, artificial outliers may easily be generated in later TomoSAR processing, leading to a poor mapping product. Focusing on these problems, by synthesizing the opinions of various experts and scholarly works, this paper briefly reviews the research status of sparse TomoSAR imaging. Then, a joint sparse imaging algorithm, based on building points of interest (POIs) and maximum likelihood estimation, is proposed to reduce the number of acquisitions required and reject scatterer outliers. Moreover, we applied the proposed workflow to TerraSAR-X datasets acquired in staring spotlight (ST) mode. The experiments on simulation data and TerraSAR-X data stacks not only indicated the effectiveness of the proposed approach, but also proved the great potential of producing a high-precision dense point cloud from staring spotlight (ST) data. Full article
(This article belongs to the Section Radar Sensors)
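For orientation, the single-pixel TomoSAR model this line of work builds on is g = Aγ, with a steering matrix determined by the perpendicular baselines; the hedged sketch below recovers a sparse elevation profile with a plain ISTA (L1) solver. It is a generic compressive-sensing baseline, not the joint POI-based maximum-likelihood algorithm proposed in the paper, and all parameter values are placeholders.

```python
import numpy as np

def tomosar_ista(g, baselines, wavelength, slant_range, s_grid, lam=0.1, n_iter=500):
    """
    Minimal single-pixel TomoSAR inversion: recover a sparse elevation profile
    gamma from the stack of complex measurements g (one per acquisition) under
    the model g = A @ gamma, using ISTA with complex soft-thresholding.
    """
    # Steering matrix: phase ≈ 4*pi*b_n*s / (lambda * r) for elevation positions s.
    A = np.exp(1j * 4 * np.pi * np.outer(baselines, s_grid) / (wavelength * slant_range))
    mu = 1.0 / np.linalg.norm(A, 2) ** 2              # step size from the spectral norm
    gamma = np.zeros(len(s_grid), dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ gamma - g)
        z = gamma - mu * grad
        mag = np.abs(z)
        gamma = np.where(mag > lam * mu, (1 - lam * mu / np.maximum(mag, 1e-12)) * z, 0)
    return gamma  # peaks of |gamma| indicate scatterer elevations
```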

21 pages, 4513 KiB  
Article
A Novel Motion Compensation Scheme for Airborne Very High Resolution SAR
by Zhen Chen, Zhimin Zhang, Yashi Zhou, Pei Wang and Jinsong Qiu
Remote Sens. 2021, 13(14), 2729; https://doi.org/10.3390/rs13142729 - 12 Jul 2021
Cited by 17 | Viewed by 2975
Abstract
Due to atmospheric turbulence, the motion trajectory of airborne very high resolution (VHR) synthetic aperture radars (SARs) is inevitably affected, which introduces range-variant range cell migration (RCM) and aperture-dependent azimuth phase error (APE). Both types of errors consequently result in defocused images, as residual range- and aperture-dependent motion errors are significant in VHR-SAR images. Nevertheless, little work has been devoted to range-variant RCM auto-correction and aperture-dependent APE auto-correction. In this paper, a precise motion compensation (MoCo) scheme for airborne VHR-SAR is studied. In the proposed scheme, the motion error is obtained from inertial measurement unit (IMU) and SAR data, and compensated for with respect to both range and aperture. The proposed MoCo scheme compensates for the motion error without the space-invariant approximation. Simulations and experimental data from an airborne 3.6 GHz bandwidth SAR are employed to demonstrate the validity and effectiveness of the proposed MoCo scheme. Full article
(This article belongs to the Section Remote Sensing Image Processing)
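The elementary operation in any APE correction step is removing a phase error from the azimuth signal by complex conjugate multiplication; the hedged sketch below shows only that step with a made-up quadratic error, not the range- and aperture-dependent estimation that the proposed MoCo scheme performs.

```python
import numpy as np

def compensate_ape(azimuth_signal, phase_error):
    """
    Remove a known azimuth phase error from a range-compressed azimuth line.
    In the paper's scheme, the error is estimated from IMU and SAR data per
    range and aperture position; here it is simply given.
    """
    return azimuth_signal * np.exp(-1j * phase_error)

# Placeholder: a quadratic APE across 1024 azimuth samples.
n = 1024
t = np.linspace(-0.5, 0.5, n)
signal = np.exp(1j * 2 * np.pi * 50 * t)          # toy azimuth tone
ape = 2 * np.pi * 3.0 * t**2                      # slowly varying quadratic error
corrupted = signal * np.exp(1j * ape)
restored = compensate_ape(corrupted, ape)
print(np.allclose(restored, signal))
```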

25 pages, 8301 KiB  
Article
Situational Awareness of Large Infrastructures Using Remote Sensing: The Rome–Fiumicino Airport during the COVID-19 Lockdown
by Andrea Pulella and Francescopaolo Sica
Remote Sens. 2021, 13(2), 299; https://doi.org/10.3390/rs13020299 - 16 Jan 2021
Cited by 8 | Viewed by 4018
Abstract
Situational awareness refers to the process of aggregating spatio-temporal variables and measurements from different sources, aiming to improve the semantic outcome. Remote Sensing satellites for Earth Observation acquire key variables that, when properly aggregated, can provide precious insights about the observed area. This article introduces a novel automatic system to monitor the activity levels and the operability of large infrastructures from satellite data. We integrate multiple data sources acquired by different spaceborne sensors, such as Sentinel-1 Synthetic Aperture Radar (SAR) time series, Sentinel-2 multispectral data, and Pleiades Very-High-Resolution (VHR) optical data. The proposed methodology exploits the synergy between these sensors for extracting, at the same time, quantitative and qualitative results. We focus on generating semantic results and providing situational awareness and decision-ready insights. We developed this methodology for the COVID-19 Custom Script Contest, a remote hackathon funded by the European Space Agency (ESA) and the European Commission (EC), whose aim was to promote remote sensing techniques to monitor environmental factors following the spread of the coronavirus disease. This work focuses on the Rome–Fiumicino International Airport case study, an environment significantly affected by the COVID-19 crisis. The resulting product is a unique description of the airport’s area utilization before and after the air traffic restrictions imposed between March and May 2020, during Italy’s first lockdown. Experimental results confirm that the proposed algorithm provides remarkable insights for supporting an effective decision-making process. We provide results about the airport’s operability by retrieving temporal changes at high spatial and temporal resolutions, together with the airplane count and localization for the same period in 2019 and 2020. On the one hand, we detected an evident change of the activity levels on those airport areas typically designated for passenger transportation, e.g., the one close to the gates. On the other hand, we observed an intensification of the activity levels over areas usually assigned to landside operations, e.g., the one close to the hangar. Finally, by combining the results from different sensors, we could affirm that different airport surface areas have changed their functionality and provide a non-expert interpretation of the areas’ usage. Full article
(This article belongs to the Special Issue Remote Sensing Data Interpretation and Validation)
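One generic building block behind SAR-based activity monitoring of this kind is amplitude change detection between two dates; the sketch below shows a simple log-ratio change map with a threshold, as an illustration under simplifying assumptions rather than the multi-sensor fusion developed in the paper.

```python
import numpy as np

def log_ratio_change_map(amp_before, amp_after, threshold=1.0):
    """
    Basic SAR amplitude change detection between two co-registered acquisitions:
    the log-ratio is near zero where backscatter is stable and large where it
    changed (e.g., parked aircraft appearing or disappearing on an apron).
    """
    eps = 1e-6
    log_ratio = np.log((amp_after + eps) / (amp_before + eps))
    return log_ratio, np.abs(log_ratio) > threshold
```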

21 pages, 6055 KiB  
Article
Farmland Parcel Mapping in Mountain Areas Using Time-Series SAR Data and VHR Optical Images
by Wei Liu, Jian Wang, Jiancheng Luo, Zhifeng Wu, Jingdong Chen, Yanan Zhou, Yingwei Sun, Zhanfeng Shen, Nan Xu and Yingpin Yang
Remote Sens. 2020, 12(22), 3733; https://doi.org/10.3390/rs12223733 - 13 Nov 2020
Cited by 41 | Viewed by 4181
Abstract
Accurate, timely, and reliable farmland mapping is a prerequisite for agricultural management and environmental assessment in mountainous areas. However, in these areas, high spatial heterogeneity and diversified planting structures together generate various small farmland parcels with irregular shapes that are difficult to accurately delineate. In addition, the absence of optical data caused by the cloudy and rainy climate impedes the use of time-series optical data to distinguish farmland from other land use types. Automatic delineation of farmland parcels in mountain areas is still a very difficult task. This paper proposes an innovative precise farmland parcel extraction approach supported by very high resolution (VHR) optical images and time-series synthetic aperture radar (SAR) data. Firstly, Google satellite imagery with a spatial resolution of 0.55 m was used for delineating the boundaries of ground parcel objects in mountainous areas by a hierarchical extraction scheme. This scheme divides farmland into four types based on the morphological features presented in optical imagery, and designs different extraction models to produce each farmland type, respectively. The potential farmland parcel distribution map is then obtained by the layered recombination of these four farmland types. Subsequently, the time profile of each parcel in this map was constructed by five radar variables from the Sentinel-1A dataset, and the time-series classification method was used to distinguish farmland parcels from other types. An experiment was carried out in the north of Guiyang City, Guizhou Province, Southwest China. The result shows that the producer’s accuracy of farmland parcels obtained by the hierarchical scheme is increased by 7.39% to 96.38% compared with that without this scheme, and the time-series classification method produces an accuracy of 80.83% to further obtain the final overall accuracy of 96.05% for the farmland parcel maps, showing a good performance. In addition, through visual inspection, this method has a better suppression effect on background noise in mountainous areas, and the extracted farmland parcels are closer to the actual distribution of the ground farmland. Full article
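The final classification step described above (deciding from each parcel's Sentinel-1 time profile whether it is farmland) can be sketched as follows; the feature layout and the random forest classifier are assumptions for illustration, since the listing does not name the exact time-series classifier used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_parcels(parcel_series, labels_train, idx_train):
    """
    Classify candidate parcels as farmland / non-farmland from their Sentinel-1
    time profiles. Each row of parcel_series stacks the time series of the
    radar variables for one parcel (the paper uses five variables); a random
    forest is used here as a stand-in classifier.
    """
    X = np.asarray(parcel_series, dtype=float)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X[idx_train], labels_train)
    return clf.predict(X)

# Hypothetical shapes: 1000 parcels, 5 variables x 30 dates flattened to 150 features.
parcels = np.random.rand(1000, 150)
train_idx = np.arange(200)
train_labels = np.random.randint(0, 2, size=200)
predicted = classify_parcels(parcels, train_labels, train_idx)
```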

24 pages, 7151 KiB  
Article
HQ-ISNet: High-Quality Instance Segmentation for Remote Sensing Imagery
by Hao Su, Shunjun Wei, Shan Liu, Jiadian Liang, Chen Wang, Jun Shi and Xiaoling Zhang
Remote Sens. 2020, 12(6), 989; https://doi.org/10.3390/rs12060989 - 19 Mar 2020
Cited by 111 | Viewed by 8170
Abstract
Instance segmentation in high-resolution (HR) remote sensing imagery is one of the most challenging tasks and is more difficult than object detection and semantic segmentation tasks. It aims to predict class labels and pixel-wise instance masks to locate instances in an image. However, few methods are currently suitable for instance segmentation in HR remote sensing images. Meanwhile, it is more difficult to implement instance segmentation due to the complex background of remote sensing images. In this article, a novel instance segmentation approach for HR remote sensing imagery based on Cascade Mask R-CNN is proposed, which is called a high-quality instance segmentation network (HQ-ISNet). In this scheme, the HQ-ISNet exploits an HR feature pyramid network (HRFPN) to fully utilize multi-level feature maps and maintain HR feature maps for remote sensing image instance segmentation. Next, to refine the mask information flow between mask branches, the instance segmentation network version 2 (ISNetV2) is proposed to promote further improvements in mask prediction accuracy. Then, we construct a new, more challenging dataset based on the synthetic aperture radar (SAR) ship detection dataset (SSDD) and the Northwestern Polytechnical University very-high-resolution 10-class geospatial object detection dataset (NWPU VHR-10) for remote sensing image instance segmentation, which can be used as a benchmark for evaluating instance segmentation algorithms in high-resolution remote sensing images. Finally, extensive experimental analyses and comparisons on the SSDD and the NWPU VHR-10 dataset show that (1) the HRFPN makes the predicted instance masks more accurate, which can effectively enhance the instance segmentation performance of high-resolution remote sensing imagery; (2) the ISNetV2 is effective and promotes further improvements in mask prediction accuracy; (3) our proposed framework, HQ-ISNet, is effective and more accurate for instance segmentation in remote sensing imagery than the existing algorithms. Full article
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Applications)
