
Table of Contents

Remote Sens., Volume 11, Issue 22 (November-2 2019) – 124 articles

Cover Story: In this study, we developed land surface directional reflectance and albedo products from GOES-R [...]
Open Access Article
Urban Land Use and Land Cover Classification Using Multisource Remote Sensing Images and Social Media Data
Remote Sens. 2019, 11(22), 2719; https://doi.org/10.3390/rs11222719 - 19 Nov 2019
Cited by 3 | Viewed by 1251
Abstract
Land use and land cover (LULC) are diverse and complex in urban areas. Remotely sensed images are commonly used for land cover classification but can hardly identify urban land use and functional areas because of the semantic gap (i.e., different definitions of similar or identical buildings). Social media data, “marks” left by people using mobile phones, have great potential to overcome this semantic gap. Multisource remote sensing data are also expected to be useful in distinguishing different LULC types. This study examined the capability of combined multisource remote sensing images and social media data in urban LULC classification. Multisource remote sensing images included a Chinese ZiYuan-3 (ZY-3) high-resolution image, a Landsat 8 Operational Land Imager (OLI) multispectral image, and a Sentinel-1A synthetic aperture radar (SAR) image. Social media data consisted of the hourly spatial distribution of users of WeChat, a ubiquitous messaging and payment platform in China. LULC was classified into 10 types, namely, vegetation, bare land, road, water, urban village, greenhouses, residential, commercial, industrial, and educational buildings. A method that integrates object-based image analysis, decision trees, and random forests was used for LULC classification. The overall accuracy and kappa value attained by the combination of multisource remote sensing images and WeChat data were 87.55% and 0.84, respectively. They further improved to 91.55% and 0.89, respectively, by integrating the textural and spatial features extracted from the ZY-3 image. The ZY-3 high-resolution image was essential for urban LULC classification because it enabled the accurate delineation of land parcels. The addition of Landsat 8 OLI, Sentinel-1A SAR, or WeChat data also made an irreplaceable contribution to the classification of different LULC types.
The Landsat 8 OLI image helped distinguish between the urban village, residential buildings, commercial buildings, and roads, while the Sentinel-1A SAR data reduced the confusion between commercial buildings, greenhouses, and water. By rendering the spatial and temporal dynamics of population density, the WeChat data improved the classification accuracies of urban villages, greenhouses, and commercial buildings. Full article
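The classification pipeline summarized above ends in a random forest applied to stacked multisource features. A minimal scikit-learn sketch of that final step on purely synthetic data (the feature layout, class count, and train/test split are illustrative assumptions, not the authors' code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-parcel feature table: spectral bands (ZY-3, OLI),
# SAR backscatter (Sentinel-1A), and hourly WeChat user counts.
n_parcels = 600
spectral = rng.normal(size=(n_parcels, 8))
sar      = rng.normal(size=(n_parcels, 2))
wechat   = rng.normal(size=(n_parcels, 24))
X = np.hstack([spectral, sar, wechat])

# Synthetic labels for 3 of the 10 LULC classes, made separable by
# shifting each class's feature means so the forest has signal to learn.
y = rng.integers(0, 3, size=n_parcels)
X += y[:, None] * 1.5

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:500], y[:500])
acc = clf.score(X[500:], y[500:])
```

In the study itself the features come from object-based image analysis of the ZY-3, OLI, SAR, and WeChat layers; random numbers merely stand in here so the fitting call can run.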

Open Access Article
Fully Dense Multiscale Fusion Network for Hyperspectral Image Classification
Remote Sens. 2019, 11(22), 2718; https://doi.org/10.3390/rs11222718 - 19 Nov 2019
Cited by 4 | Viewed by 949
Abstract
The convolutional neural network (CNN) can automatically extract hierarchical feature representations from raw data and has recently achieved great success in the classification of hyperspectral images (HSIs). However, most CNN-based methods used in HSI classification fail to adequately utilize the strong complementary yet correlated information from each convolutional layer and employ only the features of the last convolutional layer for classification. In this paper, we propose a novel fully dense multiscale fusion network (FDMFN) that takes full advantage of the hierarchical features from all the convolutional layers for HSI classification. In the proposed network, shortcut connections are introduced between any two layers in a feed-forward manner, enabling features learned by each layer to be accessed by all subsequent layers. This fully dense connectivity pattern achieves comprehensive feature reuse and enforces discriminative feature learning. In addition, various spectral-spatial features with multiple scales from all convolutional layers are fused to extract more discriminative features for HSI classification. Experimental results on three widely used hyperspectral scenes demonstrate that the proposed FDMFN can achieve better classification performance in comparison with several state-of-the-art approaches. Full article

Open Access Article
Inter-Calibration of the OSIRIS-REx NavCams with Earth-Viewing Imagers
Remote Sens. 2019, 11(22), 2717; https://doi.org/10.3390/rs11222717 - 19 Nov 2019
Cited by 1 | Viewed by 620
Abstract
The Earth-viewed images acquired by the space probe OSIRIS-REx during its Earth gravity assist flyby maneuver on 22 September 2017 provided an opportunity to radiometrically calibrate the onboard NavCam imagers. Spatially-, temporally-, and angularly-matched radiances from the Earth-viewing GOES-15 and DSCOVR-EPIC imagers were used as references for deriving the calibration gain of the NavCam sensors. An optimized all-sky tropical ocean ray-matching (ATO-RM) calibration approach that accounts for the spectral band differences, navigation errors, and angular geometry differences between NavCam and the reference imagers is formulated in this paper. Prior to ray-matching, the GOES-15 and EPIC pixel-level radiances were mapped into the NavCam field of view. The NavCam 1 ATO-RM gain is found to be 9.874 × 10^−2 W m^−2 sr^−1 µm^−1 DN^−1 with an uncertainty of 3.7%. The ATO-RM approach predicted an offset of 164, which is close to the true space DN of 170. The pre-launch NavCam 1 and 2 gains were compared with the ATO-RM gain and were found to be within 2.1% and 2.8%, respectively, suggesting that sensor performance is stable in space. The ATO-RM calibration was found to be consistent within 3.9% over a factor of ±2 NavCam 2 exposure times. This approach can easily be adapted to inter-calibrate other space probe cameras given the current constellation of geostationary imagers. Full article
(This article belongs to the Special Issue Remote Sensing: 10th Anniversary)
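The core of the ray-matching calibration above is a linear regression of reference radiances against NavCam counts: the slope is the gain, and the zero-radiance intercept implies the space DN. A numpy sketch on synthetic matched pairs (sample size, DN range, and noise level are assumptions; the gain and space count are set to the values quoted in the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ray-matched pairs: NavCam raw counts (DN) vs. reference
# radiance from GOES-15/EPIC mapped into the NavCam field of view.
true_gain, space_dn = 9.874e-2, 170.0   # values quoted in the abstract
dn = rng.uniform(200, 4000, size=500)
radiance = true_gain * (dn - space_dn) + rng.normal(0, 0.5, size=500)

# Linear least squares: radiance = gain * DN + intercept, so the
# implied space (zero-radiance) count is -intercept / gain.
gain, intercept = np.polyfit(dn, radiance, 1)
offset = -intercept / gain
```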

Open Access Article
Multimodal and Multi-Model Deep Fusion for Fine Classification of Regional Complex Landscape Areas Using ZiYuan-3 Imagery
Remote Sens. 2019, 11(22), 2716; https://doi.org/10.3390/rs11222716 - 19 Nov 2019
Cited by 5 | Viewed by 964
Abstract
Land cover classification (LCC) of complex landscapes is attractive to the remote sensing community but poses great challenges. In complex open pit mining and agricultural development landscapes (CMALs), the landscape-specific characteristics limit the accuracy of LCC. The combination of traditional feature engineering and machine learning algorithms (MLAs) is not sufficient for LCC in CMALs. Deep belief network (DBN) methods have achieved success in some remote sensing applications because of their excellent unsupervised learning ability in feature extraction. However, the usability of DBNs for LCC of complex landscapes and for integrating multimodal inputs has not been investigated. A novel multimodal and multi-model deep fusion strategy based on DBN was developed and tested for fine LCC (FLCC) of CMALs in a 109.4 km² area of Wuhan City, China. First, low-level and multimodal spectral–spatial and topographic features derived from ZiYuan-3 imagery were extracted and fused. The features were then input into a DBN for deep feature learning. The learned features were fed to random forest and support vector machine (SVM) algorithms for classification. Experiments were conducted that compared the deep features with the softmax function and low-level features with MLAs. Five groups of training, validation, and test sets with some spatial auto-correlation were evaluated. A spatially independent test set and generalized McNemar tests were also employed to assess the accuracy. The fused DBN-SVM model achieved overall accuracies (OAs) of 94.74% ± 0.35% and 81.14% in FLCC and LCC, respectively, which significantly outperformed almost all other models. With this model, only three of the twenty land covers had OAs below 90%. In general, the developed model can contribute to FLCC and LCC in CMALs, and more deep learning algorithm-based models should be investigated in the future for the application of FLCC and LCC in complex landscapes. Full article

Open Access Article
Unsupervised Clustering of Multi-Perspective 3D Point Cloud Data in Marshes: A Case Study
Remote Sens. 2019, 11(22), 2715; https://doi.org/10.3390/rs11222715 - 19 Nov 2019
Cited by 1 | Viewed by 716
Abstract
Dense three-dimensional (3D) point cloud data sets generated by Terrestrial Laser Scanning (TLS) and Unmanned Aircraft System based Structure-from-Motion (UAS-SfM) photogrammetry have different characteristics and provide different representations of the underlying land cover. While there are differences, a common challenge associated with these technologies is how to best take advantage of these large data sets, often several hundred million points, to efficiently extract relevant information. Given their size and complexity, the data sets cannot be efficiently and consistently separated into homogeneous features without the use of automated segmentation algorithms. This research aims to evaluate the performance and generalizability of an unsupervised clustering method, originally developed for segmentation of TLS point cloud data in marshes, by extending it to UAS-SfM point clouds. Two sets of features are extracted from both datasets: “core” features that can be extracted from any 3D point cloud and “sensor specific” features unique to the imaging modality. Comparisons of segmented results based on producer's and user's accuracies allow for identifying the advantages and limitations of each dataset and determining the generalization of the clustering method. The producer's accuracies suggest that UAS-SfM (94.7%) better represents tidal flats, while TLS (99.5%) is slightly more suitable for vegetated areas. The user's accuracies suggest that UAS-SfM outperforms TLS in vegetated areas, with 98.6% of the points identified as vegetation actually falling in vegetated areas, whereas TLS outperforms UAS-SfM in tidal flat areas with a 99.2% user's accuracy. Results demonstrate that the clustering method initially developed for TLS point cloud data transfers well to UAS-SfM point cloud data to enable consistent and accurate segmentation of marsh land cover via an unsupervised method. Full article
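The "core" features mentioned above are typically eigenvalue-based shape descriptors computed from each neighbourhood's covariance matrix. A hedged numpy sketch (the exact feature set used by the authors is not specified here; linearity, planarity, and sphericity are common choices, and the synthetic patch is an assumption):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical neighbourhood from a marsh scan: a near-planar
# tidal-flat patch, i.e. x and y spread widely while z barely varies.
flat = np.column_stack([rng.uniform(0, 1, 500),
                        rng.uniform(0, 1, 500),
                        rng.normal(0, 0.005, 500)])

def core_features(pts):
    """Eigenvalue-based 'core' shape features of one 3D neighbourhood."""
    evals = np.linalg.eigvalsh(np.cov(pts.T))[::-1]   # l1 >= l2 >= l3
    l1, l2, l3 = evals / evals.sum()
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}

f = core_features(flat)
```

A planar patch scores high planarity and near-zero sphericity; a vegetation clump would spread variance over all three eigenvalues instead.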

Open Access Article
Continuous Monitoring of Differential Reflectivity Bias for C-Band Polarimetric Radar Using Online Solar Echoes in Volume Scans
Remote Sens. 2019, 11(22), 2714; https://doi.org/10.3390/rs11222714 - 19 Nov 2019
Viewed by 587
Abstract
The measurement error of differential reflectivity (ZDR), especially systematic ZDR bias, is a fundamental issue for the application of polarimetric radar data. Several calibration methods have been proposed and applied to correct ZDR bias. However, recent studies have shown that ZDR bias is time-dependent and can be significantly different on two adjacent days. This means that frequent monitoring of ZDR bias is necessary, which is difficult to achieve with existing methods. As radar sensitivity has gradually improved, large numbers of online solar echoes have begun to be observed in volume-scan data. Online solar echoes occur frequently and have a known theoretical ZDR value (0 dB), and could thus allow continuous monitoring of ZDR bias. However, online solar echoes are also affected by a low signal-to-noise ratio and precipitation attenuation for short-wavelength radar. In order to understand the variation of ZDR bias in a C-band polarimetric radar at the Nanjing University of Information Science and Technology (NUIST-CDP), we analyzed the characteristics of online solar echoes from this radar, including the daily frequency of occurrence, the distribution along the radial direction, precipitation attenuation, and fluctuation caused by noise. Then, an automatic method based on online solar echoes was proposed to monitor the daily ZDR bias of the NUIST-CDP. In the proposed method, a one-way differential attenuation correction for solar echoes and a maximum likelihood estimation using a Gaussian model were designed to estimate the optimal daily ZDR bias. The analysis of three months of data from the NUIST-CDP showed the following: (1) Online solar echoes occurred very frequently regardless of precipitation. Under the volume-scan mode, the average number of occurrences was 15 per day and the minimum number was seven. This high frequency could meet the requirements of continuous monitoring of the daily ZDR bias under precipitation and no-rain conditions.
(2) The result from the proposed online solar method was significantly linearly correlated with that from the vertical pointing method (observation at an elevation angle of 90°), with a correlation coefficient of 0.61, suggesting that the proposed method is feasible. (3) The day-to-day variation in the ZDR bias was relatively large, and 32% of such variations exceeded 0.2 dB, meaning that a one-time calibration was not representative in time. Accordingly, continuous calibration will be necessary. (4) The ZDR bias was found to be largely influenced by the ambient temperature, with a large negative correlation between the ZDR bias and the temperature. Full article
(This article belongs to the Special Issue Radar Polarimetry—Applications in Remote Sensing of the Atmosphere)
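For a simple Gaussian model, the maximum-likelihood step in the proposed method reduces to taking the sample mean of the attenuation-corrected solar-echo ZDR values; since the sun's intrinsic ZDR is 0 dB, that mean is the daily bias estimate. A sketch on synthetic samples (the bias, spread, and sample count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-day set of solar-echo ZDR values (dB) after the
# one-way differential-attenuation correction; the sun's intrinsic
# ZDR is 0 dB, so any nonzero centre is instrument bias.
true_bias = 0.25
zdr_samples = rng.normal(loc=true_bias, scale=0.4, size=300)

# Gaussian maximum likelihood: the MLE of the mean is the sample mean
# (the daily ZDR bias), and of the spread the sample standard deviation.
bias_hat = zdr_samples.mean()
sigma_hat = zdr_samples.std()
```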

Open Access Article
Deep Learning-Generated Nighttime Reflectance and Daytime Radiance of the Midwave Infrared Band of a Geostationary Satellite
Remote Sens. 2019, 11(22), 2713; https://doi.org/10.3390/rs11222713 - 19 Nov 2019
Cited by 3 | Viewed by 741
Abstract
The midwave infrared (MWIR) band at 3.75 μm is important for many satellite remote sensing applications. This band observes daytime reflectance and nighttime radiance according to the Sun's and the Earth's effects, respectively. This study presents an algorithm to generate the otherwise unavailable nighttime reflectance and daytime radiance at the MWIR band of satellite observation by adopting the conditional generative adversarial nets (CGAN) model. We used the daytime reflectance and nighttime radiance data in the MWIR band of the Meteorological Imager (MI) onboard the Communication, Ocean and Meteorological Satellite (COMS), as well as in the longwave infrared (LWIR; 10.8 μm) band of the COMS/MI sensor, from 1 January to 31 December 2017. The model was trained on images of 1024 × 1024 pixels in digital numbers (DN) from 0 to 255 converted from reflectance and radiance, with a training dataset of 256 images, and validated with a dataset of 107 images. Our results show a high statistical accuracy (bias = 3.539, root-mean-square error (RMSE) = 8.924, and correlation coefficient (CC) = 0.922 for daytime reflectance; bias = 0.006, RMSE = 5.842, and CC = 0.995 for nighttime radiance) between the COMS MWIR observations and the artificial intelligence (AI)-generated MWIR outputs. Consequently, our AI-generated outputs, together with the real MWIR observations, could be used for identification of fog/low cloud, fire/hot-spots, volcanic eruptions/ash, snow and ice, low-level atmospheric vector winds, urban heat islands, and clouds. Full article
(This article belongs to the Section Remote Sensing Image Processing)

Open Access Feature Paper Article
Lunar Calibration for ASTER VNIR and TIR with Observations of the Moon in 2003 and 2017
Remote Sens. 2019, 11(22), 2712; https://doi.org/10.3390/rs11222712 - 19 Nov 2019
Cited by 3 | Viewed by 862
Abstract
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a multiband pushbroom sensor suite onboard Terra, has successfully provided valuable multiband images for approximately 20 years since Terra's launch in 1999. Since the launch, sensitivity degradations in ASTER's visible and near infrared (VNIR) and thermal infrared (TIR) bands have been monitored and corrected with various calibration methods. However, a non-negligible discrepancy between different calibration methods has been confirmed for the VNIR bands, which should be assessed with another reliable calibration method. In April 2003 and August 2017, ASTER observed the Moon (and deep space) to conduct a radiometric calibration (called lunar calibration), which can measure the temporal variation in the sensor sensitivity of the VNIR bands sufficiently accurately (to better than 1%). The lunar calibration confirmed 3–6% sensitivity degradations in the VNIR bands from 2003 to 2017. Since the degradations measured by the other methods showed different trends from the lunar calibration, the lunar calibration suggests that a further improvement is needed for the VNIR calibration. Sensitivity degradations in the TIR bands were also confirmed by monitoring the variation in the number of saturated pixels, which was qualitatively consistent with the onboard and vicarious calibrations. Full article
(This article belongs to the Special Issue ASTER 20th Anniversary) Printed Edition available

Open Access Article
Identifying Linear Traces of the Han Dynasty Great Wall in Dunhuang Using Gaofen-1 Satellite Remote Sensing Imagery and the Hough Transform
Remote Sens. 2019, 11(22), 2711; https://doi.org/10.3390/rs11222711 - 19 Nov 2019
Cited by 3 | Viewed by 719
Abstract
The Han Dynasty Great Wall (GH), one of the largest and most significant ancient defense projects in the whole of northern China, has been studied increasingly not only because it provides important information about the diplomatic and military strategies of the Han Empire (206 B.C.–220 A.D.), but also because it is considered to be a cultural and national symbol of modern China as well as a valuable archaeological monument. Thus, it is crucial to obtain the spatial pattern and preservation status of the GH for further archaeological analysis and conservation management. To date, remote sensing specialists and archaeologists have given priority to manual visual interpretation, and a (semi-)automatic extraction approach is lacking. Based on very high-resolution (VHR) satellite remote sensing imagery, this paper aims to automatically identify the archaeological features of the GH located in ancient Dunhuang, northwest China. Gaofen-1 (GF-1) data were first processed and enhanced with image correction and mathematical morphology, and the M-statistic was then used to analyze the spectral characteristics of GF-1 multispectral (MS) data. In addition, based on GF-1 panchromatic (PAN) data, an auto-identification method that integrates an improved Otsu segmentation algorithm with a Linear Hough Transform (LHT) is proposed. Finally, by comparison with visual extraction results, the proposed method was assessed qualitatively and semi-quantitatively to have an accuracy of 80% for the homogenous background in Dunhuang. These automatic identification results could be used to map and evaluate the preservation state of the GH in Dunhuang. Also, the proposed automatic approach was applied to identify similar linear traces of other generations of the Great Wall of China (Western Xia Dynasty (581 A.D.–618 A.D.) and Ming Dynasty (1368 A.D.–1644 A.D.)) in various geographic regions.
Moreover, the results indicate that the computer-based automatic identification has great potential in archaeological research, and the proposed method can be generalized and applied to monitor and evaluate the state of preservation of the Great Wall of China in the future. Full article
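The Linear Hough Transform at the heart of the auto-identification method votes every foreground pixel into (theta, rho) space and reads linear traces off accumulator peaks. A self-contained numpy sketch on a toy edge map (the Otsu segmentation step is omitted, and the image is synthetic, not GF-1 data):

```python
import numpy as np

# Toy binary edge map containing one linear trace, standing in for a
# thresholded GF-1 PAN patch with a wall-like feature.
img = np.zeros((64, 64), dtype=bool)
rr = np.arange(64)
img[rr, rr] = True                      # 45-degree line y = x

# Linear Hough transform: each foreground pixel votes for all lines
# rho = x*cos(theta) + y*sin(theta) passing through it.
thetas = np.deg2rad(np.arange(-90, 90))
ys, xs = np.nonzero(img)
rho_max = int(np.ceil(np.hypot(*img.shape)))
acc = np.zeros((2 * rho_max, thetas.size), dtype=int)
for x, y in zip(xs, ys):
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    acc[rhos + rho_max, np.arange(thetas.size)] += 1

# The accumulator peak identifies the dominant linear trace.
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
best_theta_deg = np.degrees(thetas[theta_idx])
best_rho = rho_idx - rho_max
```

The diagonal y = x has its normal at theta = −45°, rho = 0, so all 64 pixels pile into a single accumulator bin there.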

Open Access Article
Denoising Algorithm for the FY-4A GIIRS Based on Principal Component Analysis
Remote Sens. 2019, 11(22), 2710; https://doi.org/10.3390/rs11222710 - 19 Nov 2019
Cited by 2 | Viewed by 606
Abstract
The Geostationary Interferometric Infrared Sounder (GIIRS) is the first high-spectral-resolution advanced infrared (IR) sounder onboard the new-generation Chinese geostationary meteorological satellite FengYun-4A (FY-4A). The GIIRS has 1650 channels, and its spectrum ranges from 700 to 2250 cm^−1 with an unapodized spectral resolution of 0.625 cm^−1. It represents a significant breakthrough for measurements with high temporal, spatial and spectral resolutions worldwide. Many GIIRS channels have quite similar spectral signal characteristics that are highly correlated with each other in content and have a high degree of information redundancy. Therefore, this paper applies a principal component analysis (PCA)-based denoising algorithm (PDA) to simulated data with different noise levels and to observation data in order to reduce noise. The results show that channel reconstruction using inter-channel spatial dependency and spectral similarity can reduce the noise in the observation brightness temperature (BT). A comparison of the BT observed by the GIIRS (O) with the BT simulated by the radiative transfer model (B) shows that a deviation occurs in the observation channel depending on the observation array. The results show that the array features of the reconstructed observation BT (rrO) that depend on the observation array are weakened, and that the effect of the array position on the observations in the sub-center of the field of regard (FOR) is partially eliminated after the PDA procedure is applied. The high observation-minus-simulation differences (O-B) in the sub-center of the FOR array are notably reduced after the PDA procedure is implemented. The improvement of the high O-B is more distinct, and the low O-B becomes smoother. In each scan line, the standard deviation of the reconstructed background departures (rrO-B) is lower than that of the background departures (O-B).
The observation error calculated by posterior estimation based on variational assimilation also verifies the efficiency of the PDA. The typhoon experiment also shows that among the 29 selected assimilation channels, the observation error of 65% of the channels was reduced as calculated by the triangle method. Full article
(This article belongs to the Special Issue Feature Papers for Section Atmosphere Remote Sensing)
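The PDA exploits the redundancy across GIIRS channels: projecting the spectra onto the few leading principal components and reconstructing suppresses the uncorrelated noise. A numpy sketch on synthetic correlated spectra (dimensions, rank, and noise level are assumptions, not GIIRS values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical GIIRS-like data: n_fov spectra over n_chan strongly
# correlated channels (low-rank signal) plus independent Gaussian noise.
n_fov, n_chan, k = 400, 120, 5
base = rng.normal(size=(n_fov, k)) @ rng.normal(size=(k, n_chan))
noisy = base + rng.normal(0, 0.3, size=base.shape)

# PCA denoising: keep only the k leading principal components, which
# carry the correlated signal, and reconstruct the spectra from them.
mean = noisy.mean(axis=0)
u, s, vt = np.linalg.svd(noisy - mean, full_matrices=False)
recon = mean + u[:, :k] * s[:k] @ vt[:k]

noise_before = np.abs(noisy - base).mean()
noise_after = np.abs(recon - base).mean()
```

Because the discarded components contain almost only noise, the reconstruction error against the clean signal drops sharply.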

Open Access Article
Evaluation of Satellite-Based Rainfall Estimates in the Lower Mekong River Basin (Southeast Asia)
Remote Sens. 2019, 11(22), 2709; https://doi.org/10.3390/rs11222709 - 19 Nov 2019
Cited by 3 | Viewed by 851
Abstract
Satellite-based precipitation is an essential tool for regional water resource applications that require frequent observations of meteorological forcing, particularly in areas that have sparse rain gauge networks. To fully realize the utility of remotely sensed precipitation products in watershed modeling and decision-making, a thorough evaluation of the accuracy of satellite-based rainfall and regional gauge network estimates is needed. In this study, Tropical Rainfall Measuring Mission (TRMM) Multi-Satellite Precipitation Analysis (TMPA) 3B42 v.7 and Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) daily rainfall estimates were compared with daily rain gauge observations from 2000 to 2014 in the Lower Mekong River Basin (LMRB) in Southeast Asia. Monthly, seasonal, and annual comparisons were performed, including the calculation of the correlation coefficient, coefficient of determination, bias, root mean square error (RMSE), and mean absolute error (MAE). Our validation test showed TMPA to correctly detect precipitation or no-precipitation on 64.9% of all days and CHIRPS on 66.8% of all days, compared to daily in-situ rainfall measurements. The accuracy of the satellite-based products varied greatly between the wet and dry seasons. Both TMPA and CHIRPS showed higher correlation with in-situ data during the wet season (June–September) than during the dry season (November–January). Additionally, both performed better on a monthly than an annual time-scale when compared to in-situ data. The satellite-based products showed wet biases during months that received higher cumulative precipitation. Based on a spatial correlation analysis, the average r-value of CHIRPS was much higher than that of TMPA across the basin. CHIRPS correlated better than TMPA at lower elevations and for monthly rainfall accumulation of less than 500 mm.
While both satellite-based products performed well, as compared to rain gauge measurements, the present research shows that CHIRPS might be better at representing precipitation over the LMRB than TMPA. Full article
(This article belongs to the Special Issue Remote Sensing and Modeling of the Terrestrial Water Cycle)
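The comparison statistics named above are straightforward to compute for any pair of daily series. A numpy sketch on synthetic gauge/satellite data (the wet bias and noise level are assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical daily series: gauge rainfall (mm) and a noisy,
# slightly wet-biased satellite estimate of the same days.
gauge = rng.gamma(0.5, 8.0, size=365)           # skewed, many dry days
sat = gauge * 1.1 + rng.normal(0, 2.0, size=365)

# Standard validation statistics used in the comparison above.
bias = np.mean(sat - gauge)
rmse = np.sqrt(np.mean((sat - gauge) ** 2))
mae  = np.mean(np.abs(sat - gauge))
r    = np.corrcoef(gauge, sat)[0, 1]
```

RMSE is never smaller than MAE, and the multiplicative factor shows up as a positive (wet) bias.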

Open Access Article
Annual Green Water Resources and Vegetation Resilience Indicators: Definitions, Mutual Relationships, and Future Climate Projections
Remote Sens. 2019, 11(22), 2708; https://doi.org/10.3390/rs11222708 - 19 Nov 2019
Cited by 5 | Viewed by 1910
Abstract
Satellites offer a privileged view on terrestrial ecosystems and a unique possibility to evaluate their status, their resilience and the reliability of the services they provide. In this study, we introduce two indicators for estimating the resilience of terrestrial ecosystems from the local to the global levels. We use the Normalized Difference Vegetation Index (NDVI) time series to estimate annual vegetation primary production resilience. We use annual precipitation time series to estimate annual green water resource resilience. Resilience estimation is achieved through the annual production resilience indicator, originally developed in agricultural science, which is formally derived from the original ecological definition of resilience, i.e., the largest stress that the system can absorb without losing its function. Interestingly, we find coherent relationships between annual green water resource resilience and vegetation primary production resilience over a wide range of world biomes, suggesting that green water resource resilience contributes to determining vegetation primary production resilience. Finally, we estimate the changes of green water resource resilience due to climate change using results from the sixth phase of the Coupled Model Inter-comparison Project (CMIP6) and discuss the potential consequences of global warming for ecosystem service reliability. Full article
(This article belongs to the Special Issue Ecosystem Services with Remote Sensing)

Open Access Article
Analysis of Parameters for the Accurate and Fast Estimation of Tree Diameter at Breast Height Based on Simulated Point Cloud
Remote Sens. 2019, 11(22), 2707; https://doi.org/10.3390/rs11222707 - 19 Nov 2019
Cited by 2 | Viewed by 581
Abstract
Terrestrial laser scanning (TLS) is a high-potential technology in forest surveys. Estimating diameters at breast height (DBH) accurately and quickly has been considered a key step in estimating forest structural parameters by using TLS technology. However, the accuracy and speed of DBH estimation are affected by many factors, which are classified into three groups in this study. We adopt an additive error model and propose a simple and general simulation method to evaluate the impacts of three groups of parameters, which include the range error, angular errors in the vertical and horizontal directions, angular step width, trunk distance, slice thickness, and real DBH. The parameters were evaluated statistically by using many simulated point cloud datasets that were under strict control. Two typical circle fitting methods were used to estimate DBH, and their accuracy and speed were compared. The results showed that the range error and the angular error in the horizontal direction played major roles in the accuracy of DBH estimation; the angular step width had only a slight effect in the case of high range accuracy; the trunk distance showed no relationship with the accuracy of the DBH estimation; increasing the scanning angular width was relatively beneficial to the DBH estimation; and the algebraic circle fitting method was relatively fast in performing DBH estimation, as is the geometrical method, in the case of high range accuracy. Possible methods that could help to obtain accurate and fast DBH estimation results were proposed and discussed to optimize the design of forest inventory experiments. Full article
(This article belongs to the Special Issue Virtual Forest)
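The algebraic circle fitting family the abstract compares can be illustrated with a minimal Kåsa-style fit on simulated single-scan trunk points; this is a generic sketch, not the paper's implementation, and the scan geometry below is made up:

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (Kasa) circle fit: write x^2 + y^2 + D*x + E*y + F = 0
    and solve for (D, E, F) by linear least squares -- fast and non-iterative."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Simulated breast-height slice: a single scan sees only the arc facing the scanner.
theta = np.linspace(-0.5, 0.5, 200)      # radians
x = 5.0 + 0.15 * np.cos(theta)           # trunk centre 5 m away, DBH = 0.30 m
y = 0.15 * np.sin(theta)
cx, cy, r = kasa_circle_fit(x, y)
print(round(2 * r, 3))                   # estimated DBH in metres -> 0.3
```

A geometrical fit would instead minimize the sum of squared orthogonal distances iteratively, which is consistent with the abstract's observation that the algebraic method is faster.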
Open Access Article
Assessment of Portable Chlorophyll Meters for Measuring Crop Leaf Chlorophyll Concentration
Remote Sens. 2019, 11(22), 2706; https://doi.org/10.3390/rs11222706 - 19 Nov 2019
Cited by 7 | Viewed by 971
Abstract
Accurate measurement of leaf chlorophyll concentration (LChl) in the field using a portable chlorophyll meter (PCM) is crucial to support methodology development for mapping the spatiotemporal variability of crop nitrogen status using remote sensing. Several PCMs have been developed to measure LChl instantaneously and non-destructively in the field; however, their readings are relative quantities that must be converted into actual LChl values using conversion functions. The aim of this study was to investigate the relationship between actual LChl and the readings of three PCMs: SPAD-502, CCM-200, and Dualex-4. Field experiments were conducted in 2016 on four crops: corn (Zea mays L.), soybean (Glycine max L. Merr.), spring wheat (Triticum aestivum L.), and canola (Brassica napus L.), at the Central Experimental Farm of Agriculture and Agri-Food Canada in Ottawa, Ontario, Canada. To evaluate the impact of other factors (leaf internal structure, leaf pigments other than chlorophyll, and the heterogeneity of LChl distribution) on the conversion function, a global sensitivity analysis was conducted using the PROSPECT-D model to simulate PCM readings under different conditions. Results showed that, with a general conversion function for all four crops tested, Dualex-4 measured actual LChl better than SPAD-502 and CCM-200. For SPAD-502 and CCM-200, the reading error increases with increasing LChl. The sensitivity analysis reveals that deviations from the calibration functions are induced more by non-uniform LChl distribution than by leaf architecture. Dualex-4 readings suppress these influences better than those of the other two PCMs. Full article
(This article belongs to the Special Issue Remote Sensing for Precision Nitrogen Management)
Open Access Letter
Analysis of the Optimal Wavelength for Oceanographic Lidar at the Global Scale Based on the Inherent Optical Properties of Water
Remote Sens. 2019, 11(22), 2705; https://doi.org/10.3390/rs11222705 - 19 Nov 2019
Cited by 3 | Viewed by 552
Abstract
Understanding the optimal wavelength for detecting the water column profile with a light detection and ranging (lidar) system is important in the design of oceanographic lidar systems. In this research, the optimal wavelength for detecting the water column profile using a lidar system at the global scale was analyzed based on the inherent optical properties of water. In addition, assuming that the lidar system had a premium detection characteristic in its hardware design, the maximum detectable depth at the established optimal wavelength was analyzed and compared with the mixed layer depth measured from Argo data at the global scale. The conclusions drawn are as follows. First, the optimal wavelengths for the lidar system are between the blue and green bands: for the open ocean, the optimal wavelengths are between 420 and 510 nm, and for coastal waters, between 520 and 580 nm. To obtain the best detection ability, the best configuration is a lidar system with multiple bands; when a single-wavelength oceanographic lidar system is used at the global scale, a 490 nm wavelength is recommended. Second, at the recommended 490 nm band, a lidar system with a four-attenuation-length detection capability can penetrate the mixed layer over 80% of global waters. Full article
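The detectable-depth reasoning reduces to dividing the system's attenuation-length budget by the water's diffuse attenuation coefficient; a worked sketch with illustrative Kd values (the coefficients below are not from the paper):

```python
def max_detectable_depth(attenuation_lengths, kd_per_m):
    """Depth (m) at which the lidar signal has traversed the given number of
    optical attenuation lengths, for water with diffuse attenuation Kd (1/m)."""
    return attenuation_lengths / kd_per_m

# A 4-attenuation-length system at 490 nm (Kd values illustrative):
print(max_detectable_depth(4, 0.03))   # clear open ocean
print(max_detectable_depth(4, 0.20))   # turbid coastal water -> 20.0 m
```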
Open Access Article
Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure
Remote Sens. 2019, 11(22), 2704; https://doi.org/10.3390/rs11222704 - 19 Nov 2019
Cited by 1 | Viewed by 556
Abstract
Urban search and rescue missions require rapid intervention to locate victims and survivors in the affected environments. To facilitate this activity, Unmanned Aerial Vehicles (UAVs) have recently been used to explore the environment and locate possible victims. In this paper, a UAV equipped with multiple complementary sensors is used to detect the presence of a human in an unknown environment. A novel human localization approach for unknown environments is proposed that merges information gathered from deep-learning-based human detection, wireless signal mapping, and thermal signature mapping to build an accurate global human location map. A next-best-view (NBV) approach with a proposed multi-objective utility function is used to iteratively evaluate the map and rapidly locate humans. Results demonstrate that the proposed strategy outperforms other methods in several performance measures, such as the number of iterations, entropy reduction, and traveled distance. Full article
(This article belongs to the Section Remote Sensing Image Processing)
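The paper's multi-objective utility function is not reproduced in the abstract; the sketch below assumes a generic next-best-view scoring that trades expected information gain against travel cost (the weights, functional form, and candidate values are all illustrative):

```python
def nbv_utility(info_gain, travel_cost, w_info=1.0, w_cost=0.2):
    """Score a candidate view: reward expected entropy reduction over the
    human-location map, penalize the distance to reach the viewpoint.
    Weights and functional form are assumptions for illustration."""
    return w_info * info_gain - w_cost * travel_cost

# Candidate viewpoints: (expected information gain, travel cost)
candidates = {"A": (5.0, 2.0), "B": (4.0, 0.5), "C": (6.0, 8.0)}
best = max(candidates, key=lambda k: nbv_utility(*candidates[k]))
print(best)   # -> A
```

Candidate C has the highest raw gain but is penalized for its long travel distance, which is the kind of trade-off a multi-objective utility is meant to capture.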
Open Access Article
Multi-Scale Association between Vegetation Growth and Climate in India: A Wavelet Analysis Approach
Remote Sens. 2019, 11(22), 2703; https://doi.org/10.3390/rs11222703 - 18 Nov 2019
Cited by 1 | Viewed by 1433
Abstract
The monsoon climate over India has a high degree of spatio-temporal heterogeneity, characterized by multiple climatic zones along with strong intra-seasonal, seasonal, and inter-annual variability. Vegetation growth in Indian forests relates to this climate variability, though the dependence structure over space and time is yet to be explored. Here, we present a comprehensive analysis of this association using a quality-controlled satellite-based remote sensing dataset of vegetation greenness and radiation along with station-based gridded precipitation datasets. A spatio-temporal time-frequency analysis using wavelets is performed to understand the relative association of vegetation growth with precipitation and radiation at different time scales. The inter-annual variation of forest greenness over tropical India is observed to be correlated with the seasonal monsoon precipitation. However, at inter- and intra-seasonal scales, vegetation has a strong association with radiation in regions of high precipitation such as the Western Ghats, Eastern Himalayas, and Northeast hills. Forests in the Western Himalayas were found to depend more on winter precipitation from western disturbances than on the southwest monsoon precipitation. Our results provide new and useful region-specific information for dynamic vegetation modelling in the Indian monsoon region that may further be used in understanding global vegetation-land-atmosphere interactions. Full article
(This article belongs to the Special Issue Remote Sensing of Tropical Phenology)
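Scale-dependent association of this kind rests on wavelet power at chosen scales; a minimal Morlet-wavelet sketch on a synthetic monthly greenness-like series (no cone-of-influence handling or significance testing, unlike a full analysis):

```python
import numpy as np

def morlet_power(signal, scale, w0=6.0):
    """Wavelet power of `signal` at one scale using a Morlet wavelet and
    direct convolution (minimal sketch; period is approximately the scale for w0=6)."""
    t = np.arange(-4.0 * scale, 4.0 * scale + 1.0)
    psi = np.pi**-0.25 * np.exp(1j * w0 * t / scale - 0.5 * (t / scale)**2) / np.sqrt(scale)
    coef = np.convolve(signal, np.conj(psi[::-1]), mode="same")
    return np.abs(coef)**2

# A 20-year monthly series dominated by the annual cycle
n = 240
t = np.arange(n)
series = 0.5 + 0.2 * np.sin(2 * np.pi * t / 12)
annual = morlet_power(series - series.mean(), scale=12.0)   # ~12-month period
fast = morlet_power(series - series.mean(), scale=3.0)      # ~3-month period
print(bool(annual[n // 2] > fast[n // 2]))                  # power concentrates at the annual scale
```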
Open Access Letter
The Radiative Transfer Characteristics of the O2 Infrared Atmospheric Band in Limb-Viewing Geometry
Remote Sens. 2019, 11(22), 2702; https://doi.org/10.3390/rs11222702 - 18 Nov 2019
Cited by 1 | Viewed by 591
Abstract
The O2(a1Δg) emission near 1.27 μm provides an important means to remotely sense the thermal characteristics, dynamical features, and compositional structure of the upper atmosphere because of its photochemistry and spectroscopic properties. In this work, an emission–absorption transfer model for limb measurements was developed to calculate the radiation and scattering spectral brightness by means of a line-by-line approach. A non-local thermodynamic equilibrium (non-LTE) model was used for accurate calculation of the O2(a1Δg) emission, incorporating the latest rate constants and spectral parameters. The spherical adding and doubling methods were used in the multiple scattering model. Representative emission and absorption line shapes of the O2(a1Δg, υ = 0) → O2(X3Σg−, υ = 0) band and their spectral behavior with altitude were examined. The effects of solar zenith angle, surface albedo, and aerosol loading on the line shapes were also studied. This paper emphasizes the advantage of using the infrared atmospheric band for remote sensing of the atmosphere from 20 km up to 120 km, a significant region where the strongest coupling between the lower and upper atmosphere occurs. Full article
Open Access Article
Spatiotemporal Fusion of Satellite Images via Very Deep Convolutional Networks
Remote Sens. 2019, 11(22), 2701; https://doi.org/10.3390/rs11222701 - 18 Nov 2019
Viewed by 757
Abstract
Spatiotemporal fusion provides an effective way to fuse two types of remote sensing data featured by complementary spatial and temporal properties (typical representatives are Landsat and MODIS images) to generate fused data with both high spatial and temporal resolutions. This paper presents a very deep convolutional neural network (VDCN) based spatiotemporal fusion approach to effectively handle massive remote sensing data in practical applications. Compared with existing shallow learning methods, especially for the sparse representation based ones, the proposed VDCN-based model has the following merits: (1) explicitly correlating the MODIS and Landsat images by learning a non-linear mapping relationship; (2) automatically extracting effective image features; and (3) unifying the feature extraction, non-linear mapping, and image reconstruction into one optimization framework. In the training stage, we train a non-linear mapping between downsampled Landsat and MODIS data using VDCN, and then we train a multi-scale super-resolution (MSSR) VDCN between the original Landsat and downsampled Landsat data. The prediction procedure contains three layers, where each layer consists of a VDCN-based prediction and a fusion model. These layers achieve non-linear mapping from MODIS to downsampled Landsat data, the two-times SR of downsampled Landsat data, and the five-times SR of downsampled Landsat data, successively. Extensive evaluations are executed on two groups of commonly used Landsat–MODIS benchmark datasets. For the fusion results, the quantitative evaluations on all prediction dates and the visual effect on one key date demonstrate that the proposed approach achieves more accurate fusion results than sparse representation based methods. Full article
(This article belongs to the Section Remote Sensing Image Processing)
Open Access Article
Aircraft Target Classification for Conventional Narrow-Band Radar with Multi-Wave Gates Sparse Echo Data
Remote Sens. 2019, 11(22), 2700; https://doi.org/10.3390/rs11222700 - 18 Nov 2019
Cited by 2 | Viewed by 578
Abstract
For a conventional narrow-band radar system, the detectable information of the target is limited, and it is difficult for the radar to accurately identify the target type. In particular, the classification probability decreases further when part of the echo data is missing. By extracting target features in the time and frequency domains from multi-wave gates sparse echo data, this paper presents a classification algorithm for conventional narrow-band radar to identify three types of aircraft targets, i.e., helicopter, propeller, and jet. Firstly, a classical sparse reconstruction algorithm is utilized to reconstruct the target frequency spectrum from single-wave gate sparse echo data. Secondly, the micro-Doppler effect caused by the rotating parts of different targets is analyzed, and micro-Doppler-based features, such as the amplitude deviation coefficient, time domain waveform entropy, and frequency domain waveform entropy, are extracted from the reconstructed echo data to identify targets. Thirdly, the target features extracted from multi-wave gates reconstructed echo data are weighted and fused to improve classification accuracy. Finally, the fused feature vectors are fed into a support vector machine (SVM) model for classification. Compared with conventional aircraft target classification algorithms, the proposed algorithm can effectively process sparse echo data and achieve a higher classification probability via weighted feature fusion of multi-wave gates echo data. Experiments on synthetic data were carried out to validate the effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing)
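Two of the named features are easy to sketch. The exact definitions in the paper may differ, so the formulas below (Shannon entropy of the normalized magnitude waveform, and std/mean as the amplitude deviation coefficient) are assumptions:

```python
import numpy as np

def waveform_entropy(x):
    """Shannon entropy of the normalized magnitude waveform: low for echoes
    with strong modulation peaks, high for near-uniform envelopes (assumed form)."""
    p = np.abs(x) / np.sum(np.abs(x))
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def amplitude_deviation_coefficient(x):
    """Std / mean of the echo magnitude (one common definition; assumed here)."""
    m = np.abs(x)
    return np.std(m) / np.mean(m)

# A steady-body echo vs. an echo with strong rotor-induced periodic modulation
t = np.linspace(0, 1, 512, endpoint=False)
steady = np.ones_like(t)
modulated = 1 + 0.8 * np.cos(2 * np.pi * 40 * t)
print(bool(waveform_entropy(steady) > waveform_entropy(modulated)))  # uniform envelope -> higher entropy
```

Features like these separate rotor-modulated echoes (helicopter, propeller) from smoother jet-body returns, which is the discrimination the abstract relies on.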
Open Access Article
Assessment of Night-Time Lighting for Global Terrestrial Protected and Wilderness Areas
Remote Sens. 2019, 11(22), 2699; https://doi.org/10.3390/rs11222699 - 18 Nov 2019
Cited by 4 | Viewed by 830
Abstract
Protected areas (PAs) play an important role in biodiversity conservation and ecosystem integrity. However, human development has threatened and affected the function and effectiveness of PAs. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) night-time stable light (NTL) data have proven to be an effective indicator of the intensity and change of human-induced urban development over a long time span and at a large spatial scale. We used the NTL data from 1992 to 2013 to characterize human-induced urban development and studied the spatial and temporal variation of the NTL of global terrestrial PAs. We selected seven types of PAs defined by the International Union for Conservation of Nature (IUCN), including strict nature reserve (Ia), wilderness area (Ib), national park (II), natural monument or feature (III), habitat/species management area (IV), protected landscape/seascape (V), and protected area with sustainable use of natural resources (VI). We evaluated the NTL digital number (DN) in PAs and their surrounding buffer zones, i.e., 0–1 km, 1–5 km, 5–10 km, 10–25 km, 25–50 km, and 50–100 km. The results revealed the level, growth rate, trend, and distribution pattern of NTL in PAs. Within PAs, areas of types V and Ib had the highest and lowest NTL levels, respectively. In the surrounding 1–100 km buffer zones, type V PAs also had the highest NTL level, but type VI PAs had the lowest. The NTL level in the areas surrounding PAs was higher than that within them. Types Ia and III PAs showed the highest and lowest NTL growth rates from 1992 to 2013, respectively, both inside and outside of PAs. The NTL distributions surrounding the Ib and VI PAs differed from the other types: the areas close to their boundaries, i.e., the 0–25 km buffer zones, showed lower NTL levels, while the highest NTL levels were observed within the 25–100 km buffer zones. The other types of PAs showed the opposite NTL pattern.
For those types, the NTL level was lower in the distant buffer zones, and the lowest night light was within the 1–25 km buffer zones. Globally, 6.9% of PAs are affected by NTL. Wilderness areas, e.g., high-latitude regions, the Tibetan Plateau, the Amazon, and the Caribbean, are the least affected by NTL. PAs in Europe, Asia, and North America are more affected by NTL than those in South America, Africa, and Oceania. Full article
Open Access Article
Image Formation of Azimuth Periodically Gapped SAR Raw Data with Complex Deconvolution
Remote Sens. 2019, 11(22), 2698; https://doi.org/10.3390/rs11222698 - 18 Nov 2019
Cited by 4 | Viewed by 603
Abstract
The phenomenon of periodic gapping in Synthetic Aperture Radar (SAR), which is induced in various ways, creates challenges in focusing raw SAR data. To handle this problem, a novel method is proposed in this paper that restores the azimuth spectrum of the complete data from gapped raw data via complex deconvolution, providing a robust implementation of deconvolution for azimuth-gapped raw data. The method consists mainly of phase compensation and recovery of the azimuth spectrum with complex deconvolution. After phase compensation, the gapped data become sparser in the range-Doppler domain, which makes it feasible to recover the azimuth spectrum of the complete data from the gapped raw data via complex deconvolution in the Doppler domain. A traditional SAR imaging algorithm is then capable of focusing the reconstructed raw data. The effectiveness of the proposed method was validated via point target and surface target simulations; moreover, real SAR data were utilized to further demonstrate its validity. Full article
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing)
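The core idea, recovering a spectrum by dividing out a known kernel in the frequency domain, can be sketched generically; this is a regularized Wiener-style division on toy data, not the paper's algorithm, and the kernel and signal are made up:

```python
import numpy as np

def deconvolve_fft(observed, kernel, eps=1e-3):
    """Recover a signal from data convolved with a known kernel by
    regularized complex division in the frequency domain."""
    O = np.fft.fft(observed)
    K = np.fft.fft(kernel, n=observed.size)
    X = O * np.conj(K) / (np.abs(K)**2 + eps)   # Wiener-style regularization
    return np.fft.ifft(X)

# Toy example: a sparse "spectrum" smeared by a short known kernel
x = np.zeros(64); x[10] = 1.0; x[30] = 0.5
k = np.array([0.2, 0.6, 0.2])
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, 64)))  # circular convolution
rec = np.real(deconvolve_fft(y, k))
print(int(np.argmax(rec)))   # peak restored at index 10
```

The `eps` term keeps the division stable where the kernel's spectrum is small, which is what makes naive inverse filtering fragile in practice.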
Open Access Article
Integrating LiDAR, Multispectral and SAR Data to Estimate and Map Canopy Height in Tropical Forests
Remote Sens. 2019, 11(22), 2697; https://doi.org/10.3390/rs11222697 - 18 Nov 2019
Cited by 4 | Viewed by 1136
Abstract
Developing accurate methods to map vegetation structure in tropical forests is essential to protect their biodiversity and improve their carbon stock estimation. We integrated LiDAR (Light Detection and Ranging), multispectral, and SAR (Synthetic Aperture Radar) data to improve the prediction and mapping of canopy height (CH) at high spatial resolution (30 m) in tropical forests in South America. We modeled and mapped CH estimated from aircraft LiDAR surveys as a ground reference, using annual metrics derived from multispectral and SAR satellite imagery in a dry forest, a moist forest, and a rainforest of tropical South America. We examined the effect of the three forest types, five regression algorithms, and three predictor groups on the modelling and mapping of CH. Our CH models reached errors of 1.2–3.4 m in the dry forest and 5.1–7.4 m in the rainforest and explained variances of 94–60% in the dry forest and 58–12% in the rainforest. Our best models show higher accuracies than previous works in tropical forests. The average accuracy of the five regression algorithms decreased from the dry forest (2.6 ± 0.7 m) to the moist forest (5.7 ± 0.4 m) and rainforest (6.6 ± 0.7 m). Random Forest regressions produced the most accurate models in the three forest types (1.2 ± 0.05 m in the dry forest, 4.9 ± 0.14 m in the moist forest, and 5.5 ± 0.3 m in the rainforest). Model performance varied considerably across the three predictor groups. Our results are useful for CH spatial prediction when GEDI (Global Ecosystem Dynamics Investigation lidar) data become available. Full article
(This article belongs to the Section Forest Remote Sensing)
Open Access Article
The Influence of Heterogeneity on Lunar Irradiance Based on Multiscale Analysis
Remote Sens. 2019, 11(22), 2696; https://doi.org/10.3390/rs11222696 - 18 Nov 2019
Viewed by 517
Abstract
The Moon is a stable light source for the radiometric calibration of satellite sensors. It acts as a diffuse panel that reflects sunlight in all directions; however, the lunar surface is heterogeneous due to its topography and the different mineral content and chemical composition at different locations, resulting in different optical properties. In order to perform radiometric calibration using the Moon, a lunar irradiance model covering different observation geometries is required. Currently, two lunar irradiance models exist, namely, the Robotic Lunar Observatory (ROLO) and the Miller and Turner 2009 (MT2009) models. The ROLO lunar irradiance model is widely used as the radiometric standard for on-orbit sensors. The MT2009 lunar irradiance model is popular for remote sensing at night; however, its original version gives little consideration to the heterogeneous lunar surface and lunar topography. Since the heterogeneity embedded in the lunar surface is the key to improving the lunar irradiance model, this study analyzes the influence of the heterogeneous surface on the irradiance of moonlight based on model data at different scales. A heterogeneous correction factor is defined to describe the impact of the heterogeneous lunar surface on lunar irradiance. On the basis of the analysis, the following conclusions can be made. First, the influence of heterogeneity in the waning hemisphere is greater than that in the waxing hemisphere at all 32 wavelengths of the ROLO filters. Second, the heterogeneity embedded in the lunar surface exerts less impact on lunar irradiance at lower resolution. Third, the heterogeneous correction factor is scale independent. Finally, the lunar irradiance uncertainty introduced by topography is very small and decreases as the resolution of the model data decreases due to the loss of topographic information. Full article
Open Access Article
Multispectral Image Super-Resolution Burned-Area Mapping Based on Space-Temperature Information
Remote Sens. 2019, 11(22), 2695; https://doi.org/10.3390/rs11222695 - 18 Nov 2019
Viewed by 717
Abstract
Multispectral imaging (MI) provides important information for burned-area mapping. Due to the severe conditions of burned areas and the limitations of sensors, the resolution of the collected multispectral images is sometimes very coarse, hindering the accurate determination of burned areas. Super-resolution mapping (SRM) has been proposed for mapping burned areas in such coarse images to solve this problem, allowing super-resolution burned-area mapping (SRBAM). However, existing SRBAM methods use neither sufficiently accurate space information nor detailed temperature information. To improve the mapping accuracy of burned areas, an improved SRBAM method utilizing space–temperature information (STI) is proposed here. STI contains two elements, a space element and a temperature element. We utilized the random-walker algorithm (RWA) to characterize the space element, which encompasses accurate object space information, while the temperature element, rich in temperature information, was derived by calculating the normalized burn ratio (NBR). The two elements were then merged to produce an objective function with space–temperature information. The particle swarm optimization algorithm (PSOA) was employed to handle the objective function and derive the burned-area mapping results. The dataset of the Landsat-8 Operational Land Imager (OLI) from Denali National Park, Alaska, was used for testing and showed that the STI method is superior to the traditional SRBAM method. Full article
(This article belongs to the Special Issue New Advances on Sub-pixel Processing: Unmixing and Mapping Methods)
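The temperature element rests on the normalized burn ratio, which has a standard form on NIR and SWIR reflectance; the band assignment and reflectance values below are illustrative, not taken from the paper:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio, (NIR - SWIR) / (NIR + SWIR); burning lowers
    NIR reflectance and raises SWIR reflectance, driving NBR down."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

print(round(float(nbr(0.40, 0.10)), 2))   # healthy vegetation -> 0.6
print(round(float(nbr(0.15, 0.25)), 2))   # burned surface -> -0.25
```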
Open Access Article
Enhanced Feature Extraction for Ship Detection from Multi-Resolution and Multi-Scene Synthetic Aperture Radar (SAR) Images
Remote Sens. 2019, 11(22), 2694; https://doi.org/10.3390/rs11222694 - 18 Nov 2019
Cited by 6 | Viewed by 867
Abstract
Independent of daylight and weather conditions, synthetic aperture radar (SAR) images have been widely used for ship monitoring. Traditional methods for SAR ship detection are highly dependent on statistical models of sea clutter or on predefined thresholds, and generally require a multi-step operation, which results in time-consuming and less robust ship detection. Recently, deep learning algorithms have found wide application in ship detection from SAR images. However, due to the multi-resolution imaging mode and complex background, it is hard for the network to extract representative SAR target features, which limits ship detection performance. In order to enhance the feature extraction ability of the network, three improvement techniques have been developed. Firstly, multi-level sparse optimization of the SAR image is carried out to handle clutter and sidelobes so as to enhance the discrimination of SAR image features. Secondly, we propose a novel split convolution block (SCB) to enhance the feature representation of small targets, which divides the SAR images into smaller sub-images as the input of the network. Finally, a spatial attention block (SAB) is embedded in the feature pyramid network (FPN) to reduce the loss of spatial information during the dimensionality reduction process. Experiments on multi-resolution SAR images from GaoFen-3 and Sentinel-1 under complex backgrounds verify the effectiveness of the SCB and SAB, and comparison results show that the proposed method is superior to several state-of-the-art object detection algorithms. Full article
(This article belongs to the Section Ocean Remote Sensing)
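The SCB feeds smaller sub-images to the network; the tiling step itself can be sketched as a reshape (the SCB's convolutional layers are not shown, and the block size here is an assumption):

```python
import numpy as np

def split_blocks(img, bs):
    """Split an H x W image into (H/bs)*(W/bs) non-overlapping bs x bs tiles,
    preserving row-major block order. H and W must be divisible by bs."""
    H, W = img.shape
    return (img.reshape(H // bs, bs, W // bs, bs)
               .swapaxes(1, 2)
               .reshape(-1, bs, bs))

tiles = split_blocks(np.arange(64).reshape(8, 8), 4)
print(tiles.shape)   # -> (4, 4, 4): four 4x4 tiles from an 8x8 image
```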
Open Access Article
Modeling and Assessment of GPS/Galileo/BDS Precise Point Positioning with Ambiguity Resolution
Remote Sens. 2019, 11(22), 2693; https://doi.org/10.3390/rs11222693 - 18 Nov 2019
Cited by 1 | Viewed by 616
Abstract
Multi-frequency and multi-GNSS integration is becoming an important trend in the development of satellite navigation and positioning technology. In this paper, GPS/Galileo/BeiDou (BDS) precise point positioning (PPP) with ambiguity resolution (AR) is discussed in detail. The mathematical model of triple-system PPP AR and the principle of fractional cycle bias (FCB) estimation are first described. With data from 160 stations of the Multi-GNSS Experiment (MGEX) from day of year (DOY) 321 to 350, 2018, the FCBs of the three systems are estimated. The experimental results show that most GPS wide-lane (WL) FCBs vary within 0.1 cycles over one month, while Galileo WL FCBs vary within 0.05 cycles. For BDS, a classified estimation method is used, dividing the FCBs into GEO and non-GEO (IGSO and MEO) groups. The variation of the BDS GEO WL FCB can reach 0.5 cycles within a month, while the BDS non-GEO WL FCB does not exceed 0.1 cycles. The accuracies of the GPS, Galileo, and BDS non-GEO narrow-lane (NL) FCBs are basically the same. In addition, the number of visible satellites and the Position Dilution of Precision (PDOP) values of different combined systems are analyzed and evaluated. The triple-system combination significantly increases the number of observable satellites, optimizes the spatial distribution structure of the satellites, and is clearly superior to the dual- and single-system solutions. Finally, the positioning characteristics of the single, dual, and triple systems are analyzed. The results of the single-station positioning experiment show that the accuracy and convergence speed of the fixed solutions for each system are better than those of the corresponding float solutions.
The average root mean squares (RMSs) of the float and fixed solutions in the east and north directions are smallest for the GPS/Galileo/BDS combined system, at 0.92 cm and 0.52 cm (float) and 0.50 cm and 0.46 cm (fixed), respectively, while the accuracy of GPS in the up direction is the highest, at 1.44 cm (float) and 1.27 cm (fixed). Therefore, the combined system can accelerate convergence and greatly enhance the stability of the positioning results. Full article
(This article belongs to the Special Issue Global Navigation Satellite Systems for Earth Observing System)
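The wide-lane FCB estimation principle can be illustrated in miniature: an FCB is essentially the network-average fractional part of the float wide-lane ambiguities. The values below are synthetic, and a real estimator must also handle the datum, wrapping near ±0.5 cycles, and outliers:

```python
def frac(x):
    """Fractional part relative to the nearest integer, in [-0.5, 0.5]."""
    return x - round(x)

# Synthetic float wide-lane ambiguities (cycles) sharing a common bias
float_wl = [23.18, -7.83, 104.17, 11.16, -56.84]
fcb = sum(frac(a) for a in float_wl) / len(float_wl)
print(round(fcb, 3))   # -> 0.168
```

Removing this common fractional bias is what restores the integer nature of the ambiguities so they can be fixed.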
Open Access Article
Adaptive Least-Squares Collocation Algorithm Considering Distance Scale Factor for GPS Crustal Velocity Field Fitting and Estimation
Remote Sens. 2019, 11(22), 2692; https://doi.org/10.3390/rs11222692 - 18 Nov 2019
Cited by 1 | Viewed by 572
Abstract
High-precision, high-reliability, and high-density GPS crustal velocity fields are essential for geodynamic analysis. The least-squares collocation (LSC) algorithm has unique advantages over crustal movement models in overcoming observation errors in GPS data as well as the sparseness and poor geometric distribution of GPS observations. However, traditional LSC algorithms often encounter negative covariance statistics, so that a statistical Gaussian covariance function calculated over the selected distance interval estimates the correlation between the random signals inaccurately. An unreliable Gaussian statistical covariance function also leads to inconsistency between observation noise and signal variance. In this study, we present an improved LSC algorithm that combines a distance scale factor with adaptive adjustment to overcome these problems. The rationality and practicability of the new algorithm were verified using GPS observations. Results show that the new algorithm, by introducing the distance scale factor, effectively weakens the influence of systematic errors through an improved function model. The new algorithm better reflects the characteristics of GPS crustal movement and can provide valuable basic data for analyses of regional tectonic dynamics based on GPS observations. Full article
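The Gaussian covariance function at the heart of LSC, together with the basic collocation prediction step, can be sketched as follows. The function names, the scale parameter `k`, the noise level, and the toy station layout are illustrative assumptions; the paper's distance scale factor and adaptive adjustment are not reproduced here.

```python
import numpy as np

def gaussian_cov(d, c0=1.0, k=0.02):
    """Gaussian covariance between two points separated by distance d (km),
    with signal variance c0 and distance scale parameter k (assumed values)."""
    return c0 * np.exp(-(k * d) ** 2)

def lsc_predict(xy_obs, v_obs, xy_new, c0=1.0, k=0.02, noise=0.05):
    """Basic LSC prediction: s = C_new,obs (C_obs + D)^-1 v, where D is the
    observation noise variance. Trend removal is assumed done beforehand."""
    d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None], axis=-1)
    d_no = np.linalg.norm(xy_new[:, None] - xy_obs[None], axis=-1)
    C_oo = gaussian_cov(d_oo, c0, k) + noise * np.eye(len(xy_obs))
    C_no = gaussian_cov(d_no, c0, k)
    return C_no @ np.linalg.solve(C_oo, v_obs)

# Toy example: four corner stations and one central station (coordinates in km),
# with velocities (trend already removed); predict at the central point.
xy_obs = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
v_obs = np.array([1.0, 2.0, 2.0, 3.0, 2.0])
pred = lsc_predict(xy_obs, v_obs, np.array([[5.0, 5.0]]))
print(pred)  # close to 2.0, the smoothed value at the centre point
```

The noise term on the diagonal is what keeps the system well conditioned when stations are close together relative to the correlation length 1/k.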
Open AccessArticle
Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder
Remote Sens. 2019, 11(22), 2691; https://doi.org/10.3390/rs11222691 - 18 Nov 2019
Cited by 2 | Viewed by 576
Abstract
Hyperspectral (HS) imaging describes the subtle spectral differences between materials better than traditional imaging systems because of its rich spectral information. However, it remains challenging to obtain HS images with high resolution (HR) in both the spectral and spatial domains. Unlike previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images, combining them with the panchromatic (PAN) image to represent the spatial information of HR HS images more comprehensively and representatively. In particular, the SCAAE network is built on the adversarial autoencoder (AAE) network with an added spectral constraint in the loss function, so that spectral consistency and a higher quality of spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. Experiments on the test data sets show that, in terms of CC, SAM, and RMSE, the proposed algorithm outperforms the well-performing HySure method by about 1.42%, 13.12%, and 29.26% on average, respectively. Compared to the MRA-based method, the improvements in the above three indexes are 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66% better, respectively, than those of the PCA-based method, fully illustrating the superiority of the proposed method in spatial information preservation. 
All the experimental results demonstrate that the proposed method is superior to the state-of-the-art fusion methods in terms of subjective and objective evaluations. Full article
(This article belongs to the Special Issue Remote Sensing Image Restoration and Reconstruction)
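One of the quality indexes reported above, the spectral angle mapper (SAM), can be computed as in the following sketch. The function `sam` and the random test cube are illustrative assumptions, not the authors' evaluation code; SAM is scale-invariant per pixel, so it isolates spectral distortion from brightness changes.

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle (radians) between a reference and a fused
    hyperspectral cube, both of shape (rows, cols, bands)."""
    dot = (ref * fused).sum(-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1) + eps
    ang = np.arccos(np.clip(dot / denom, -1.0, 1.0))  # clip guards rounding
    return ang.mean()

ref = np.random.default_rng(0).random((8, 8, 30))
print(sam(ref, ref))        # approximately 0 for identical images
print(sam(ref, 2.0 * ref))  # also approximately 0: SAM ignores per-pixel scaling
```

Lower SAM is better; a fused image whose spectra are only rescaled copies of the reference spectra scores near zero even if its radiometry differs.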

Open AccessArticle
Fine-Grained Classification of Hyperspectral Imagery Based on Deep Learning
Remote Sens. 2019, 11(22), 2690; https://doi.org/10.3390/rs11222690 - 18 Nov 2019
Cited by 2 | Viewed by 907
Abstract
Hyperspectral remote sensing obtains abundant spectral and spatial information about the observed object simultaneously, providing an opportunity to classify hyperspectral imagery (HSI) in a fine-grained manner. In this study, the fine-grained classification of HSI, which contains a large number of classes, is investigated. On the one hand, traditional classification methods cannot handle fine-grained classification of HSI well; on the other hand, deep learning methods have shown their power in fine-grained classification. In this paper, deep learning is therefore explored for supervised and semi-supervised fine-grained classification of HSI. For supervised fine-grained classification, a densely connected convolutional neural network (DenseNet) is explored for accurate classification. Moreover, DenseNet is combined with a pre-processing technique (i.e., principal component analysis or an auto-encoder) or a post-processing technique (i.e., a conditional random field) to further improve classification performance. For semi-supervised fine-grained classification, a generative adversarial network (GAN), which includes a discriminative CNN and a generative CNN, is carefully designed. The GAN fully uses both labeled and unlabeled samples to improve classification accuracy. The proposed methods were tested on the Indian Pines data set, which contains 333,951 samples across 52 classes. The experimental results show that the deep learning-based methods provide great improvements over traditional methods, demonstrating that deep models have huge potential for fine-grained classification of HSI. Full article
(This article belongs to the Special Issue Deep Learning and Feature Mining Using Hyperspectral Imagery)
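The dense connectivity idea behind DenseNet, in which each layer receives the concatenation of all earlier feature maps, can be sketched in NumPy as follows. The layer count, growth rate, and random linear projections here are illustrative assumptions standing in for trained convolutions, not the architecture used in the paper.

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    """Toy dense block: every layer consumes the concatenation of the input
    and all previous layers' outputs, and emits `growth` new features."""
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)          # all previous outputs
        w = rng.normal(0, 0.1, (inp.shape[-1], growth))  # stand-in for a conv layer
        features.append(np.maximum(inp @ w, 0.0))        # ReLU activation
    return np.concatenate(features, axis=-1)

x = np.random.default_rng(1).normal(size=(5, 8))  # 5 pixels, 8 spectral features
out = dense_block(x)
print(out.shape)  # (5, 20): the input's 8 features plus 3 layers x 4 new features
```

Because every layer sees all earlier features, gradients reach early layers directly, which is part of why DenseNet trains well on the limited labeled samples typical of HSI classification.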