Special Issue "Advanced Topics in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (20 February 2019).

Special Issue Editors

Dr. Xiaofeng Li
Guest Editor
NCWCP - E/RA3, 5830 University Research Court, College Park, MD 20740, USA
Interests: AI oceanography; big data; ocean remote sensing; physical oceanography; boundary layer meteorology; synthetic aperture radar imaging mechanism; multiple-polarization radar applications; satellite image classification and segmentation
Dr. Yanfei Zhong
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University; 129 Luoyu Road, Wuhan 430079, China
Interests: multi- and hyperspectral remote sensing data processing; high-resolution image processing and scene analysis; computational intelligence

Special Issue Information

Dear Colleagues,

The 2018 International Conference on Advanced Remote Sensing will be held in Wuhan, China, 16–18 October 2018. Remote Sensing (IF: 3.406) will launch a Special Issue, entitled “Advanced Topics in Remote Sensing”, following this conference. The Special Issue will consist of papers presented at this conference in the following areas:

1. Remote sensing sensors and data acquisition

satellite data acquisition, airborne data acquisition, ground-based data acquisition

2. Remote sensing data processing

high-resolution image processing, hyperspectral image processing, synthetic aperture radar data processing, visible and infrared image processing, laser scanning data processing

3. Remote sensing applications

land, atmosphere, ocean, and cryosphere applications

All submitted manuscripts will be peer reviewed according to Remote Sensing guidelines, and they should not have been published or be under consideration for publication elsewhere. The publisher, MDPI, offers conference participants a 20% discount on the publishing fees. Papers will be published online within a week of acceptance.

For questions and inquiries, please contact the Assistant Editor:

Ms. Yanhua Li at [email protected]

Dr. Xiaofeng Li
Dr. Yanfei Zhong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research

Open Access Article
Derivation of Vegetation Optical Depth and Water Content in the Source Region of the Yellow River using the FY-3B Microwave Data
Remote Sens. 2019, 11(13), 1536; https://doi.org/10.3390/rs11131536 - 28 Jun 2019
Cited by 1
Abstract
This study uses the brightness temperature at 18.7 GHz from the Microwave Radiation Imager (MWRI) on board the Fengyun-3B (FY-3B) satellite to improve the τ-ω model by considering the radiative contribution from waterbodies within pixels over the wetland of the Yellow River source region, China. To retrieve vegetation optical depth (VOD), a dual-polarization slope parameter is defined to express the surface emissivity in the τ-ω model as the sum of soil emissivity and waterbody emissivity. In regions with no waterbody, the original τ-ω model, which neglects the waterbody contribution, is used to derive VOD. Using vegetation water content (VWC) observed in the field in the source region of the Yellow River during the summer of 2012, a regression relationship between VOD and VWC is established and the vegetation parameter b is estimated. This relationship is then employed to derive the spatial VWC over the entire vegetation growing period. VOD retrieval with the previous τ-ω model fails in some parts of the study area, whereas the improved τ-ω model yields VOD in the range of 0.20 to 1.20 and VWC in the range of 0.20 kg/m² to 1.40 kg/m² over the entire source region of the Yellow River in 2012. Both VOD and VWC exhibit a pattern of low values in the west and high values in the east, with the largest regional variations appearing along the Yellow River. Comparison between the remote-sensing-estimated VWC and the ground-measured VWC gives a root mean square error of 0.12 kg/m². These assessments reveal that, when the fractional seasonal wetlands in the source region of the Yellow River are taken into account, microwave measurements from the FY-3B MWRI can be successfully used to retrieve the VWC there.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
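The zeroth-order τ-ω emission model and the area-weighted mixing of soil and water emissivity that this kind of retrieval builds on can be sketched as follows. This is a minimal illustration, not the paper's implementation: the single physical temperature `ts` shared by soil and canopy, and all parameter values, are simplifying assumptions.

```python
import numpy as np

def tau_omega_tb(ts, emissivity, vod, omega, theta_deg):
    """Zeroth-order tau-omega model: brightness temperature of a
    vegetated surface (simplified; soil and canopy share one
    physical temperature ts, no atmospheric terms)."""
    gamma = np.exp(-vod / np.cos(np.radians(theta_deg)))  # canopy transmissivity
    tb_soil = ts * emissivity * gamma
    tb_veg = ts * (1 - omega) * (1 - gamma) * (1 + (1 - emissivity) * gamma)
    return tb_soil + tb_veg

def mixed_emissivity(e_soil, e_water, water_fraction):
    """Mixed-pixel surface emissivity as an area-weighted sum of
    soil and waterbody emissivity, as the improved model requires."""
    return (1 - water_fraction) * e_soil + water_fraction * e_water
```

With `vod = 0` the canopy terms vanish and the brightness temperature reduces to `ts * emissivity`, which is a quick sanity check on the model.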

Open Access Article
A New Single-Pass SAR Interferometry Technique with a Single-Antenna for Terrain Height Measurements
Remote Sens. 2019, 11(9), 1070; https://doi.org/10.3390/rs11091070 - 06 May 2019
Cited by 2
Abstract
One of the prospective research topics in radar remote sensing is the design of an optimal radar system for high-precision two- and three-dimensional imaging of the Earth's surface with minimal hardware requirements. In this study, we propose a single-pass interferometric synthetic aperture radar (SAR) imaging technique that uses only a single antenna to estimate terrain height. This technique yields terrain height information in one flight of a carrier on which only one receiving antenna is mounted. The single-antenna single-pass interferometry requires a squinted look geometry and additional image synthesis processing. The limiting accuracy of the terrain height measurement is approximately 1.5 times lower than that of the conventional two-pass mode, and a longer baseline than two-pass interferometry is required for equivalent accuracy. However, the method overcomes the temporal decorrelation problem of two-pass interferometry, owing to the short time gap between the radar echo acquisitions of the two sub-aperture intervals. We compared the accuracy of the terrain height measurements of our method with conventional two-pass interferometry at various spectral bandwidths, degrees of surface roughness, and baseline lengths. We validated the idea with numerical simulations on a digital elevation map, and we present terrain heights extracted from airborne SAR data over the Astrakhan and Volga regions of the Russian Federation using the single-antenna single-pass interferometry technique.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)

Open Access Article
Local Azimuth Ambiguity-to-Signal Ratio Estimation Method Based on the Doppler Power Spectrum in SAR Images
Remote Sens. 2019, 11(7), 857; https://doi.org/10.3390/rs11070857 - 09 Apr 2019
Abstract
In synthetic aperture radar (SAR) images, azimuth ambiguity is one of the important factors affecting image quality. The azimuth ambiguity-to-signal ratio (AASR) is the standard measure of azimuth ambiguity in SAR images. For low signal-to-noise ratio (SNR) ocean areas, it is difficult to accurately estimate the local AASR with traditional estimation algorithms. To solve this problem, a local AASR estimation method based on the Doppler power spectrum of SAR images is proposed in this paper, derived from an analysis of the composition of the local Doppler spectrum. The method not only achieves higher estimation accuracy under low SNR, but also overcomes the limitations that traditional algorithms impose on SAR images when estimating the AASR. The feasibility and accuracy of the proposed method are verified with simulation experiments and spaceborne SAR data.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
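The building block of such spectrum-based estimators, the local azimuth Doppler power spectrum of a single-look-complex (SLC) image block, can be sketched as follows. This is a generic illustration rather than the paper's estimator; the block layout (azimuth along axis 0) and the averaging over range bins are assumptions.

```python
import numpy as np

def azimuth_doppler_spectrum(slc_block):
    """Average azimuth Doppler power spectrum of an SLC block:
    FFT along azimuth (axis 0), squared magnitude, then averaged
    over the range bins (axis 1). fftshift puts zero Doppler at
    the center of the returned array."""
    spec = np.fft.fftshift(np.fft.fft(slc_block, axis=0), axes=0)
    return np.mean(np.abs(spec) ** 2, axis=1)
```

A constant (zero-Doppler) block concentrates all its power in the center bin, which makes for an easy correctness check.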

Open Access Article
Retrieval of High Spatial Resolution Aerosol Optical Depth from HJ-1 A/B CCD Data
Remote Sens. 2019, 11(7), 832; https://doi.org/10.3390/rs11070832 - 07 Apr 2019
Abstract
A high-spatial-resolution aerosol optical depth (AOD) dataset is critically important for regional meteorology and climate studies. Chinese Huanjing-1 (HJ-1) A/B charge-coupled device (CCD) data are a suitable source for retrieving AOD; however, AOD cannot be retrieved with the dark target method because the sensors lack a shortwave infrared band. In this study, an AOD estimation method based on the relationships between the visible bands of the HJ-1 A/B CCDs is proposed. The Polarization and Directionality of the Earth's Reflectances (POLDER) Bidirectional Reflectance Distribution Function (BRDF) dataset was used to construct a lookup table of interband regression coefficients that vary with solar/view angle and land cover type. High-spatial-resolution AODs could then be retrieved with the aerosol lookup table and constraints. The results showed that the AODs retrieved from the HJ-1 A/B CCD data had the same distribution range and trends as a visual interpretation of the images and Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol products. Validation against four Aerosol Robotic Network (AERONET) sites in Beijing gave a correlation coefficient R of 0.866, a root mean square error (RMSE) of 0.167, a mean absolute error (MAE) of 0.131, and an expected error (EE) of 53.9%. If the measurements of an AERONET site were used as prior knowledge, the AOD retrievals became much more accurate (R = 0.989, RMSE = 0.052, MAE = 0.042, EE = 96.7%).
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
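The validation statistics quoted above (R, RMSE, MAE, and the fraction of retrievals falling within an expected-error envelope) can be computed as in this sketch. The ±(0.05 + 0.15·AOD) envelope is the MODIS-style convention and is an assumption here; the paper may define EE differently.

```python
import numpy as np

def aod_validation_stats(retrieved, reference):
    """Common AOD validation metrics against ground truth. The
    expected-error (EE) envelope +/-(0.05 + 0.15*AOD) follows the
    MODIS-style convention (an assumption in this sketch)."""
    retrieved = np.asarray(retrieved, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = retrieved - reference
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    mae = float(np.mean(np.abs(diff)))
    r = float(np.corrcoef(retrieved, reference)[0, 1])
    envelope = 0.05 + 0.15 * reference
    ee = float(np.mean(np.abs(diff) <= envelope))  # fraction inside envelope
    return {"R": r, "RMSE": rmse, "MAE": mae, "EE": ee}
```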

Open Access Article
Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network
Remote Sens. 2019, 11(7), 830; https://doi.org/10.3390/rs11070830 - 07 Apr 2019
Cited by 14
Abstract
The rapid development of deep learning and computer vision has introduced new opportunities and paradigms for building extraction from remote sensing images. In this paper, we propose a novel fully convolutional network (FCN) in which a spatial residual inception (SRI) module captures and aggregates multi-scale contexts for semantic understanding by successively fusing multi-level features. The proposed SRI-Net can accurately detect large buildings that might otherwise be omitted, while retaining global morphological characteristics and local details. To improve computational efficiency, depthwise separable convolutions and convolution factorization are introduced to significantly decrease the number of model parameters. The proposed model is evaluated on the Inria Aerial Image Labeling Dataset and the Wuhan University (WHU) Aerial Building Dataset. The experimental results show significant improvements over several state-of-the-art FCNs, including SegNet, U-Net, RefineNet, and DeepLab v3+. The proposed model shows promising potential for large-scale building detection from remote sensing images.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
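The parameter saving from replacing a standard convolution with a depthwise separable one (ignoring biases) can be illustrated as follows; the 256-channel, 3×3 example is hypothetical, not a layer taken from SRI-Net.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, followed by a
    1 x 1 pointwise convolution to mix channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(256, 256, 3)                 # 589,824 weights
sep = depthwise_separable_params(256, 256, 3)  # 67,840 weights
```

For this hypothetical layer the separable variant uses roughly 8.7 times fewer weights, which is where most of the reported parameter reduction in such architectures comes from.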

Open Access Article
Spatial–Spectral Fusion Based on Conditional Random Fields for the Fine Classification of Crops in UAV-Borne Hyperspectral Remote Sensing Imagery
Remote Sens. 2019, 11(7), 780; https://doi.org/10.3390/rs11070780 - 01 Apr 2019
Cited by 8
Abstract
The fine classification of crops is critical for food security and agricultural management. There are many different crop species, some of which have similar spectral curves, making precise classification a difficult task. Although classification methods that incorporate spatial information can reduce noise and improve accuracy to a certain extent, the problem is far from solved. In this paper, a method of spatial–spectral fusion based on conditional random fields (SSF-CRF) for the fine classification of crops in UAV-borne hyperspectral remote sensing imagery is presented. The proposed method designs suitable potential functions in a pairwise conditional random field model, fusing spectral and spatial features to reduce the spectral variation within homogeneous regions and accurately identify crops. Experiments on hyperspectral datasets from the cities of Hanchuan and Honghu in China showed that, compared with traditional methods, the proposed method can effectively improve classification accuracy, protect the edges and shapes of features, and relieve excessive smoothing while retaining detailed information. This method has important implications for the fine classification of crops in hyperspectral remote sensing imagery.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
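A generic contrast-sensitive Potts-style pairwise term is the usual starting point for such spatial–spectral CRF potentials, though not necessarily the paper's exact function; `beta` and `sigma` here are hypothetical hyperparameters for illustration.

```python
import numpy as np

def pairwise_potential(label_i, label_j, feat_i, feat_j, beta=1.0, sigma=1.0):
    """Contrast-sensitive Potts-style pairwise energy: neighboring
    pixels with similar spectral features pay a higher cost for
    taking different labels, while dissimilar neighbors may differ
    cheaply (preserving edges)."""
    if label_i == label_j:
        return 0.0  # agreeing neighbors incur no penalty
    d2 = float(np.sum((np.asarray(feat_i, float) - np.asarray(feat_j, float)) ** 2))
    return beta * np.exp(-d2 / (2.0 * sigma ** 2))
```

This is why such models smooth homogeneous regions without blurring field boundaries: the disagreement penalty decays with spectral contrast.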

Open Access Article
Airborne SAR Imaging Algorithm for Ocean Waves Based on Optimum Focus Setting
Remote Sens. 2019, 11(5), 564; https://doi.org/10.3390/rs11050564 - 07 Mar 2019
Cited by 1
Abstract
Ocean waves are the richest texture on the sea surface, and valuable information can be retrieved from them. In general, synthetic aperture radar (SAR) images of surface waves are inevitably distorted by the intricate motion of the waves, yet commonly used imaging algorithms do not take this motion into consideration, so surface waves in the resulting SAR images are rather blurred. To solve this problem, an airborne SAR imaging algorithm for ocean waves based on an optimum focus setting is proposed in this paper. First, to obtain the real azimuth phase speed of the dominant wave, the geometric and scanning distortion in the blurred SAR image is calibrated. Then, according to the SAR integration time and the wavelength of the dominant wave, a proper focus-setting variation section is selected. All the focus settings in this section are used to refocus the image, and the results are compared to determine the optimum focus setting for the dominant wave. Finally, by redesigning the azimuth matched filter with this optimum focus setting, a well-focused SAR image of the dominant wave is obtained. The proposed algorithm is applied to both simulated and field data, and the resulting SAR images of surface waves are compared with those obtained with a zero focus setting. The comparison shows that the focus of the surface waves is significantly improved, which verifies the effectiveness of the proposed algorithm. Finally, the choice of an appropriate focus-setting variation section under different parameters and the applicability of the algorithm are analyzed.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)

Open Access Article
A Multi-Scale Filtering Building Index for Building Extraction in Very High-Resolution Satellite Imagery
Remote Sens. 2019, 11(5), 482; https://doi.org/10.3390/rs11050482 - 26 Feb 2019
Cited by 3
Abstract
Building extraction plays a significant role in many high-resolution remote sensing image applications. Many current building extraction methods require training samples, and different samples often lead to different generalization ability. The morphological building index (MBI), which represents the morphological features of building regions in index form, can effectively extract building regions, especially in Chinese urban areas, without any training samples, and has drawn much attention. However, problems such as the heavy computational cost of multi-scale and multi-direction morphological operations remain. In this paper, a multi-scale filtering building index (MFBI) is proposed to overcome these drawbacks and to deal with the increasing noise in very high-resolution remote sensing images. The profile of multi-scale average filtering is averaged and normalized to generate this index. Moreover, to fully utilize the relatively limited spectral information in very high-resolution imagery, two scenarios for generating a multi-channel multi-scale filtering building index (MMFBI) are proposed. Because current very high-resolution building extraction datasets are generally not publicly available and usually contain samples from North American or European regions, we offer a very high-resolution building extraction dataset whose samples cover multiple building styles from multiple Chinese regions. The proposed MFBI and MMFBI outperform the MBI and a commonly used object-based segmentation method on this dataset, with high recall and F-score. Meanwhile, the computation times of the MFBI and MBI are compared on three large-scale very high-resolution satellite images, and a sensitivity analysis demonstrates the robustness of the proposed method.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
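The core idea of averaging a normalized profile of multi-scale mean-filter responses can be sketched as below. The differential-profile definition and the scale set are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def box_filter(img, size):
    """Naive mean (box) filter with edge padding; adequate for a sketch."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def mfbi(brightness, scales=(3, 5, 7, 9)):
    """Multi-scale filtering building index (sketch): bright,
    locally contrasted structures respond strongly across the
    profile of |image - mean filter| at several scales; the
    responses are averaged and normalized to [0, 1]."""
    profile = [np.abs(brightness - box_filter(brightness, s)) for s in scales]
    index = np.mean(profile, axis=0)
    rng = index.max() - index.min()
    return (index - index.min()) / rng if rng > 0 else index
```

Because only separable mean filters are involved, this avoids the multi-direction morphological operations that make the MBI expensive, which is the computational point the abstract makes.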

Open Access Article
Spatiotemporal Patterns and Morphological Characteristics of Ulva prolifera Distribution in the Yellow Sea, China in 2016–2018
Remote Sens. 2019, 11(4), 445; https://doi.org/10.3390/rs11040445 - 21 Feb 2019
Cited by 6
Abstract
The world’s largest macroalgal blooms, of Ulva prolifera, have appeared in the Yellow Sea every summer on different scales since 2007, causing great harm to the regional marine economy. In this study, the Normalized Difference Vegetation Index (NDVI) was used to extract the Ulva prolifera green tide from MODIS images of the Yellow Sea in 2016–2018, to investigate its spatiotemporal patterns and to calculate its occurrence probability. Using the standard deviational ellipse (SDE), the morphological characteristics of the green tide, including directionality and regularity, were analyzed. The results showed that the largest distribution and coverage areas occurred in 2016, at 57,384 km² and 2906 km² respectively, and that the total affected region over the three years was 163,162 km². The green tide drifted northward and died out near Qingdao, Shandong Province, which was found to be a high-risk region. The coast of Jiangsu Province is believed to be the source of Ulva prolifera, but it is probably not the only one. The regularity of the boundary shape of the distribution changed in the direction opposite to the variation in scale, and several sharp increases were found in the SDE parameters in all three years. In conclusion, the overall Ulva prolifera situation remained severe in recent years, and the sea area near Qingdao became the worst-hit area of the green tide event. The sea surface wind was also shown to play an important part in its migration and morphological changes.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
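The NDVI-based extraction step reduces to a band ratio and a threshold, as in this sketch; the band values and the 0.0 default threshold are illustrative, and the paper's actual threshold for MODIS green-tide pixels may differ.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, guarding against
    division by zero where both bands are zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

def green_tide_mask(nir, red, threshold=0.0):
    """Floating vegetation raises NIR reflectance over water, so
    pixels with NDVI above a scene-dependent threshold are flagged
    (the 0.0 default is illustrative, not the paper's value)."""
    return ndvi(nir, red) > threshold
```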

Open Access Article
Accuracy Assessment on MODIS (V006), GLASS and MuSyQ Land-Surface Albedo Products: A Case Study in the Heihe River Basin, China
Remote Sens. 2018, 10(12), 2045; https://doi.org/10.3390/rs10122045 - 16 Dec 2018
Cited by 4
Abstract
This study assessed the accuracies of the MCD43A3, Global Land-Surface Satellite (GLASS), and forthcoming Multi-source Data Synergized Quantitative Remote Sensing Production system (MuSyQ) albedo products using ground observations and Huan Jing (HJ) data over the Heihe River Basin. MCD43A3 and MuSyQ albedos show similarly high accuracies, with identical root mean square errors (RMSE). Nevertheless, MuSyQ albedo is better correlated with ground measurements when sufficient valid observations are available or conditions are snow-free; the opposite holds when fewer than seven valid observations are available. GLASS albedo presents a larger RMSE than MCD43A3 and MuSyQ albedos in comparison with ground measurements. Over surfaces with smaller seasonal variations, MCD43A3 and MuSyQ albedos show smaller RMSEs than GLASS albedo in comparison with HJ albedo; however, for surfaces with larger temporal variations, both the RMSEs and R² of GLASS albedo are comparable with those of MCD43A3 and MuSyQ. Overall, MCD43A3 and MuSyQ albedos featured the same RMSE of 0.034 and similar R² (0.920 and 0.903, respectively), better than GLASS albedo (RMSE = 0.043, R² = 0.787). In comparison with aggregated HJ albedo, however, MuSyQ and GLASS albedos have lower RMSEs of 0.027 and 0.032 and higher R² of 0.900 and 0.898, respectively, than MCD43A3 (RMSE = 0.038, R² = 0.836). Despite the limited geographic extent of the study area, these results provide important insight into the accuracies of the three albedo products.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)

Open Access Article
An Improved Boosting Learning Saliency Method for Built-Up Areas Extraction in Sentinel-2 Images
Remote Sens. 2018, 10(12), 1863; https://doi.org/10.3390/rs10121863 - 22 Nov 2018
Cited by 2
Abstract
The extraction of built-up areas from satellite images is an important aspect of urban planning and land use; however, it remains a challenging task with optical satellite images, and existing methods may be limited by complex backgrounds. In this paper, an improved boosting learning saliency method for built-up area extraction from Sentinel-2 images is proposed. First, the optimal band combination for extracting such areas from Sentinel-2 data is determined. A coarse saliency map is then generated, based on multiple cues and the geodesic weighted Bayesian (GWB) model, which provides training samples for a strong model, and a refined saliency map is subsequently obtained using the strong model. Furthermore, cuboid cellular automata (CCA) are used to integrate multiscale saliency maps to improve the refined saliency map, and the coarse and refined saliency maps are synthesized into a final saliency map. Finally, the fractional-order Darwinian particle swarm optimization algorithm (FODPSO) is employed to extract the built-up areas from the final saliency result. Cities in five different types of ecosystems in China (desert, coastal, riverside, valley, and plain) are used to evaluate the proposed method. Analyses of the results and comparisons with other methods suggest that the proposed method is robust, with good accuracy.
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
