
Table of Contents

Remote Sens., Volume 11, Issue 22 (November-2 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Multisource Point Clouds, Point Simplification and Surface Reconstruction
Remote Sens. 2019, 11(22), 2659; https://doi.org/10.3390/rs11222659 (registering DOI) - 13 Nov 2019
Abstract
As data acquisition technology continues to advance, the algorithms for surface reconstruction need to be improved and upgraded accordingly. In this paper, we utilized multiple terrestrial Light Detection and Ranging (Lidar) systems to acquire point clouds with different levels of complexity, namely dynamic and rigid targets, for surface reconstruction. We propose a robust and effective method to obtain simplified and uniformly resampled points for surface reconstruction. The method was evaluated: a point reduction of up to 99.371% with a standard deviation of 0.2 cm was achieved. In addition, well-known surface reconstruction methods, i.e., Alpha shapes, Screened Poisson reconstruction (SPR), the Crust, and Algebraic point set surfaces (APSS Marching Cubes), were utilized for object reconstruction. We evaluated the benefits of exploiting simplified and uniform points, as well as points of different densities, for surface reconstruction. These reconstruction methods and their capacities for handling data imperfections were analyzed and discussed. The findings are that (i) the capacity of surface reconstruction to deal with diverse objects needs to be improved; (ii) when the number of points reaches the level of millions (e.g., approximately five million points in our data), point simplification is necessary, as otherwise the reconstruction methods might fail; (iii) for some reconstruction methods the number of output meshes is proportional to the number of input points, while a few methods show the opposite behavior; (iv) all reconstruction methods benefit from the reduction of running time; and (v) a balance between geometric detail and the level of smoothing is needed. Some methods produce detailed and accurate geometry but cope poorly with data imperfections, while other methods exhibit the opposite characteristics. Full article
(This article belongs to the Section Engineering Remote Sensing)
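Below is a minimal sketch of the voxel-grid style point simplification idea summarized in this abstract, written in plain numpy; the voxel size, the synthetic test cloud, and the centroid-per-voxel rule are illustrative assumptions rather than the authors' actual method.

```python
import numpy as np

def voxel_simplify(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point (the voxel centroid) per occupied voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)        # integer voxel index per point
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()                                    # guard against numpy version differences
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)                             # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 10.0, size=(1_000_000, 3))          # synthetic stand-in for a lidar scan
    simplified = voxel_simplify(cloud, voxel_size=0.25)
    reduction = 100.0 * (1.0 - len(simplified) / len(cloud))
    print(f"kept {len(simplified)} points, reduction {reduction:.3f}%")
```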

Open Access Article
Detection of Fusarium Head Blight in Wheat Using a Deep Neural Network and Color Imaging
Remote Sens. 2019, 11(22), 2658; https://doi.org/10.3390/rs11222658 (registering DOI) - 13 Nov 2019
Abstract
Fusarium head blight (FHB) is a devastating disease of wheat worldwide. In addition to reducing the yield of the crop, the causal pathogens also produce mycotoxins that can contaminate the grain. The development of resistant wheat varieties is one of the best ways to reduce the impact of FHB. To develop such varieties, breeders must expose germplasm lines to the pathogen in the field and assess the disease reaction. Phenotyping breeding materials for resistance to FHB is time-consuming, labor-intensive, and expensive when using conventional protocols. To develop a reliable and cost-effective high throughput phenotyping system for assessing FHB in the field, we focused on developing a method for processing color images of wheat spikes to accurately detect diseased areas using deep learning and image processing techniques. Color images of wheat spikes at the milk stage were collected in a shadow condition and processed to construct datasets, which were used to retrain a deep convolutional neural network model using transfer learning. Testing results showed that the model detected spikes very accurately in the images since the coefficient of determination for the number of spikes tallied by manual count and the model was 0.80. The model was assessed, and the mean average precision for the testing dataset was 0.9201. On the basis of the results for spike detection, a new color feature was applied to obtain the gray image of each spike and a modified region-growing algorithm was implemented to segment and detect the diseased areas of each spike. Results showed that the region growing algorithm performed better than the K-means and Otsu’s method in segmenting diseased areas. We demonstrated that deep learning techniques enable accurate detection of FHB in wheat based on color image analysis, and the proposed method can effectively detect spikes and diseased areas, which improves the efficiency of the FHB assessment in the field. Full article
(This article belongs to the Special Issue Advanced Imaging for Plant Phenotyping)
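The following is a generic sketch of threshold-based region growing on a gray image of a spike, in the spirit of the segmentation step described above; the seed, tolerance, 4-connectivity, and toy image are assumptions and do not reproduce the authors' modified algorithm.

```python
import numpy as np
from collections import deque

def region_grow(gray: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    stays within `tol` of the running region mean."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(gray[rr, cc] - region_sum / region_n) <= tol:
                    mask[rr, cc] = True
                    region_sum += float(gray[rr, cc])
                    region_n += 1
                    queue.append((rr, cc))
    return mask

# toy example: a brighter "diseased" patch on a darker spike background
img = np.full((20, 20), 0.2)
img[5:10, 5:10] = 0.8
diseased = region_grow(img, seed=(7, 7), tol=0.1)
print(diseased.sum(), "pixels flagged")   # 25
```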

Open Access Letter
Correlation between Ionospheric TEC and the DCB Stability of GNSS Receivers from 2014 to 2016
Remote Sens. 2019, 11(22), 2657; https://doi.org/10.3390/rs11222657 (registering DOI) - 13 Nov 2019
Abstract
The Global Navigation Satellite System (GNSS) differential code biases (DCBs) are a major obstacle in estimating the ionospheric total electron content (TEC). The DCBs of the GNSS receiver (rDCBs) are affected by various factors such as data quality, estimation method, receiver type, hardware temperature, and antenna characteristics. This study investigates the relationship between TEC and rDCB, and between TEC and rDCB stability, during a three-year period from 2014 to 2016. Linear correlations between pairs of variables, measured with Pearson's coefficient (R), are considered. It is shown that the correlation between TEC and rDCB is the smallest in low-latitude regions, while the mid-latitude regions exhibit the maximum value of R. In contrast, the correlation between TEC and the rDCB root mean square (RMS, stability) was greater in low-latitude regions. A strong positive correlation (R ≥ 0.90 on average) between TEC and rDCB RMS was also revealed at two additional GNSS stations in low-latitude regions, where the correlation shows a clear latitudinal dependency. We found that the correlation between TEC and rDCB stability remains very strong even after replacing a GNSS receiver. Full article
(This article belongs to the Section Atmosphere Remote Sensing)
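A small sketch of the correlation measure used in this letter: Pearson's R computed between two station time series. The synthetic TEC and rDCB RMS series below are placeholders, not data from the paper.

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient R between two equally long series."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# synthetic daily series standing in for TEC and rDCB RMS at one low-latitude station
rng = np.random.default_rng(1)
tec = 20 + 10 * np.sin(np.linspace(0, 20, 1096)) + rng.normal(0, 2, 1096)   # ~2014-2016
rdcb_rms = 0.05 * tec + rng.normal(0, 0.2, 1096)
print(f"R = {pearson_r(tec, rdcb_rms):.2f}")
```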
Open Access Article
Response of Beech (Fagus sylvatica L.) Trees to Competition—New Insights from Using Fractal Analysis
Remote Sens. 2019, 11(22), 2656; https://doi.org/10.3390/rs11222656 (registering DOI) - 13 Nov 2019
Abstract
Individual tree architecture and the composition of tree species play a vital role for many ecosystem functions and services provided by a forest, such as timber value, habitat diversity, and ecosystem resilience. However, knowledge is limited when it comes to understanding how tree architecture changes in response to competition. Using 3D laser scanning data from the German Biodiversity Exploratories, we investigated the detailed three-dimensional architecture of 24 beech (Fagus sylvatica L.) trees that grew under different levels of competition pressure. We created detailed quantitative structure models (QSMs) for all study trees to describe their branching architecture. Furthermore, structural complexity and architectural self-similarity were measured using the box-dimension approach from fractal analysis. Relating these measures to the strength of competition the trees were exposed to revealed strong responses for a wide range of tree architectural measures, indicating that competition strongly changes the branching architecture of trees. The strongest response to competition (rho = −0.78) was observed for a new measure introduced here, the intercept of the regression used to determine the box-dimension. This measure was identified as an integrating descriptor of the size of the complexity-bearing part of the tree, namely the crown, and proved to be even more sensitive to competition than the box-dimension itself. Future studies may use fractal analysis to investigate and quantify the response of individual trees to competition. Full article
(This article belongs to the Special Issue 3D Forest Structure Observation)
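A rough box-counting sketch showing how a box-dimension (the regression slope) and the intercept highlighted above can be derived from a 3D point cloud; the random stand-in cloud and the scale range are assumptions, not the QSM-based workflow of the paper.

```python
import numpy as np

def box_dimension(points: np.ndarray, n_scales: int = 6):
    """Fit log(box count) against log(1/edge length); the slope approximates the
    box-dimension, the intercept is the complementary measure named above."""
    pts = points - points.min(axis=0)
    extent = pts.max()                                   # edge of the initial bounding cube
    log_inv_edge, log_count = [], []
    for k in range(1, n_scales + 1):
        edge = extent / 2 ** k                           # halve the box edge at every step
        occupied = np.unique(np.floor(pts / edge).astype(int), axis=0)
        log_inv_edge.append(np.log(1.0 / edge))
        log_count.append(np.log(len(occupied)))
    slope, intercept = np.polyfit(log_inv_edge, log_count, 1)
    return slope, intercept

rng = np.random.default_rng(0)
crown = rng.normal(0.0, 1.5, size=(50_000, 3))           # stand-in for a TLS crown cloud
d_b, b0 = box_dimension(crown)
print(f"box-dimension ~ {d_b:.2f}, intercept ~ {b0:.2f}")
```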

Open Access Article
Developing Land Surface Directional Reflectance and Albedo Products from Geostationary GOES-R and Himawari Data: Theoretical Basis, Operational Implementation, and Validation
Remote Sens. 2019, 11(22), 2655; https://doi.org/10.3390/rs11222655 (registering DOI) - 13 Nov 2019
Abstract
The new generation of geostationary satellite sensors is producing an unprecedented amount of Earth observations with high temporal, spatial and spectral resolutions, which enable us to detect and assess abrupt surface changes. In this study, we developed land surface directional reflectance and albedo products from Geostationary Operational Environmental Satellite-R (GOES-R) Advanced Baseline Imager (ABI) data using a method that was prototyped with Moderate Resolution Imaging Spectroradiometer (MODIS) data in a previous study, and also tested it with data from the Advanced Himawari Imager (AHI) onboard Himawari-8. Surface reflectance is usually retrieved through atmospheric correction that requires the input of aerosol optical depth (AOD). We first estimated AOD and the surface bidirectional reflectance factor (BRF) model parameters simultaneously based on an atmospheric radiative transfer formulation with surface anisotropy, and then calculated the "blue-sky" surface broadband albedo and directional reflectance. This algorithm was implemented operationally by the National Oceanic and Atmospheric Administration (NOAA) to generate the GOES-R land surface albedo product suite with a daily updated clear-sky satellite observation database. The "operational" land surface albedo estimation from ABI and AHI data was validated against ground measurements at the SURFRAD and OzFlux sites and compared with existing satellite products, including the MODIS, Visible Infrared Imaging Radiometer Suite (VIIRS), and Global Land Surface Satellite (GLASS) albedo products; good agreement was found, with bias values of −0.001 (ABI) and 0.020 (AHI) and root-mean-square errors (RMSEs) below 0.065 for the hourly albedo estimation. Directional surface reflectance estimation, evaluated at more than 74 sites from the Aerosol Robotic Network (AERONET), was proven to be reliable as well, with an overall bias very close to zero and RMSEs within 0.042 (ABI) and 0.039 (AHI). Results show that the albedo and reflectance estimation can satisfy the NOAA accuracy requirements for operational climate and meteorological applications. Full article
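For context, the "blue-sky" albedo mentioned above is commonly approximated as a blend of black-sky and white-sky albedo weighted by the diffuse-skylight fraction; the snippet below shows only that textbook blend under assumed inputs, not the operational GOES-R/AHI algorithm.

```python
def blue_sky_albedo(black_sky: float, white_sky: float, diffuse_fraction: float) -> float:
    """Weight the directional (black-sky) and bi-hemispherical (white-sky) albedos
    by the assumed fraction of diffuse skylight."""
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

# e.g., a fairly clear sky over vegetation (all numbers are illustrative)
print(blue_sky_albedo(black_sky=0.14, white_sky=0.16, diffuse_fraction=0.2))  # 0.144
```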
Open Access Article
Next-Generation Gravity Missions: Sino-European Numerical Simulation Comparison Exercise
Remote Sens. 2019, 11(22), 2654; https://doi.org/10.3390/rs11222654 (registering DOI) - 13 Nov 2019
Abstract
Temporal gravity retrieval simulation results for a future Bender-type double-pair mission concept, produced by five processing centers of a Sino-European study team, have been inter-compared and assessed. They were computed in a synthetic closed-loop simulation environment by five independent software systems applying different gravity retrieval methods, but were based on jointly defined mission scenarios. The inter-comparison showed that the results achieve quite similar performance. For example, the root mean square (RMS) deviations of global equivalent water height fields from their true reference, resolved up to degree and order 30 for a 9-day solution, vary on the order of 10% of the target signal. Likewise, the independent daily gravity fields up to degree and order 15 that were co-estimated by all processing centers do not show large differences from one another. This positive result is an important prerequisite and basis for future joint activities towards the realization of next-generation gravity missions. Full article
(This article belongs to the Special Issue Remote Sensing by Satellite Gravimetry)

Open Access Article
Pixel-Wise PolSAR Image Classification via a Novel Complex-Valued Deep Fully Convolutional Network
Remote Sens. 2019, 11(22), 2653; https://doi.org/10.3390/rs11222653 (registering DOI) - 13 Nov 2019
Abstract
Although complex-valued (CV) neural networks have shown better classification results than their real-valued (RV) counterparts for polarimetric synthetic aperture radar (PolSAR) classification, the extension of pixel-level RV networks to the complex domain has not yet been thoroughly examined. This paper presents a novel complex-valued deep fully convolutional neural network (CV-FCN) designed for PolSAR image classification. Specifically, CV-FCN uses PolSAR CV data that include the phase information and adopts a deep FCN architecture that performs pixel-level labeling. The CV-FCN architecture is trained in an end-to-end scheme to extract discriminative polarimetric features, and the entire PolSAR image is then classified by the trained CV-FCN. To account for the particular statistics of PolSAR data, a dedicated complex-valued weight initialization scheme is proposed to initialize the CV-FCN. It considers the distribution of polarization data so that CV-FCN training can be conducted from scratch in an efficient and fast manner. CV-FCN employs a complex downsampling-then-upsampling scheme to extract dense features. To enrich the discriminative information, multi-level CV features that retain more polarization information are extracted via the complex downsampling scheme. A complex upsampling scheme is then proposed to predict dense CV labeling. It employs complex max-unpooling layers to capture more spatial information for better robustness to speckle noise. The complex max-unpooling layers upsample the real and imaginary parts of complex feature maps based on the max-location maps retained from the complex downsampling scheme. In addition, to achieve faster convergence and obtain more precise classification results, a novel average cross-entropy loss function is derived for CV-FCN optimization. Experiments on real PolSAR datasets demonstrate that CV-FCN achieves better classification performance than other state-of-the-art methods. Full article
(This article belongs to the Special Issue Feature-Based Methods for Remote Sensing Image Classification)
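A minimal illustration of the basic building block behind complex-valued networks such as the CV-FCN described above: a complex 2D convolution assembled from four real convolutions. The toy patch, kernel, and use of scipy are assumptions for illustration only and do not reproduce the authors' architecture.

```python
import numpy as np
from scipy.signal import convolve2d

def complex_conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """(x_r + i x_i) * (w_r + i w_i), realised with four real 2D convolutions."""
    real = convolve2d(x.real, w.real, mode="same") - convolve2d(x.imag, w.imag, mode="same")
    imag = convolve2d(x.real, w.imag, mode="same") + convolve2d(x.imag, w.real, mode="same")
    return real + 1j * imag

# toy complex-valued patch (stand-in for a PolSAR channel) and a 3x3 complex kernel
rng = np.random.default_rng(0)
patch = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
print(complex_conv2d(patch, kernel).shape)   # (8, 8)
```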

Open Access Article
Assessment of Integrated Water Vapor Estimates from the iGMAS and the Brazilian Network GNSS Ground-Based Receivers in Rio de Janeiro
Remote Sens. 2019, 11(22), 2652; https://doi.org/10.3390/rs11222652 (registering DOI) - 13 Nov 2019
Abstract
There is a pressing demand for improved knowledge of the integrated water vapor (IWV) distribution in regions affected by heat islands that are associated with extreme rainfall events, such as the metropolitan area of Rio de Janeiro (MARJ). This work assessed the suitability and spatiotemporal distribution of Global Navigation Satellite Systems (GNSS) IWV from the cooperation of the International GNSS Monitoring and Assessment System (iGMAS) and the National Observatory (Observatório Nacional, ON) of Brazil and from the Brazilian Network for Continuous Monitoring (RBMC), together with IWV products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and radiosondes and with surface meteorological data, in two sectors of the state of Rio de Janeiro from February 2015 to August 2018. High variability of the near-surface air temperature (T) and relative humidity (RH) was observed among eight meteorological sites. The mean T differences between sites, up to 4.4 °C, led to mean differences as high as 3.1 K in the weighted mean temperature (Tm) and hence 0.83 mm in the IWV differences. Local grid points of MODIS IWV estimates had relatively good agreement with the GNSS-derived IWV, with mean differences from −2.4 mm to 1.1 mm for the daytime passages of the TERRA and AQUA satellites and underestimation from −9 mm to −3 mm during nighttime overpasses. A contrasting behavior was found in the radiosonde IWV estimates compared with the estimates from GNSS. There were dry biases of 1.4 mm (3.7% lower than expected) in the radiosonde IWV during the daytime, considering that all other estimates were unbiased and the differences between IWVGNSS and IWVRADS were consistent. Based on the IWV comparisons between radiosonde and GNSS at nighttime, the atmosphere over the radiosonde site is about 1.2 mm and 2.3 mm wetter than that over the RBMC RIOD and iGMAS RDJN stations, respectively. The atmosphere over the RIOD site was 1.2 mm wetter than that over RDJN for all three-hour means. These results show that there were important variabilities in the meteorological conditions and in the distribution of water vapor in the MARJ. The data from the iGMAS RDJN station, together with those from the RBMC, MODIS, and radiosondes, proved suitable for investigating IWV in this region with occurrences of heat islands and peculiar physiographic and meteorological characteristics. This work recommends the densification of the GNSS network in the state of Rio de Janeiro, with complete meteorological stations collocated near every GNSS receiver, aiming to improve local IWV estimates and to provide additional support for operational numerical assimilation, weather forecasting, and nowcasting of extreme rainfall and flooding events. Full article
(This article belongs to the Special Issue Global Navigation Satellite Systems for Earth Observing System)
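For reference, GNSS IWV values like those compared above are conventionally obtained by scaling the zenith wet delay with a factor that depends on the weighted mean temperature Tm; the sketch below uses widely cited literature constants as assumptions and is not taken from this paper.

```python
def iwv_from_zwd(zwd_m: float, tm_k: float) -> float:
    """Convert a zenith wet delay (m) into IWV (kg m^-2) using the dimensionless
    conversion factor Pi(Tm). Constants are commonly used literature values."""
    rho_w = 1000.0      # density of liquid water, kg m^-3
    r_v = 461.5         # specific gas constant of water vapour, J kg^-1 K^-1
    k2_prime = 0.221    # K Pa^-1  (22.1 K hPa^-1)
    k3 = 3739.0         # K^2 Pa^-1 (3.739e5 K^2 hPa^-1)
    pi_factor = 1.0e6 / (rho_w * r_v * (k3 / tm_k + k2_prime))
    return pi_factor * zwd_m * 1000.0        # ZWD in mm times Pi -> IWV in kg m^-2

print(round(iwv_from_zwd(zwd_m=0.20, tm_k=275.0), 1))   # roughly 31 kg m^-2
```

With this scaling, a Tm error of about 3 K changes the IWV by roughly 1%, which is consistent in magnitude with the 0.83 mm difference quoted in the abstract for humid conditions.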

Open Access Article
Infrastructure Safety Oriented Traffic Load Monitoring Using Multi-Sensor and Single Camera for Short and Medium Span Bridges
Remote Sens. 2019, 11(22), 2651; https://doi.org/10.3390/rs11222651 (registering DOI) - 13 Nov 2019
Abstract
Reliable and accurate monitoring of traffic load is significant for the operational management and safety assessment of bridges. Traditional weigh-in-motion techniques are capable of identifying moving vehicles with satisfactory accuracy and stability, whereas cost and construction-induced issues are inevitable. A recently proposed traffic sensing methodology, combining computer vision techniques and traditional strain-based instrumentation, achieves a clear overall improvement for simple traffic scenarios with fewer passing vehicles, but faces obstacles in complicated traffic scenarios. Therefore, a traffic monitoring methodology with an extra focus on complicated traffic scenarios is proposed in this paper. Rather than a single sensor, a network of strain sensors from a pre-installed bridge structural health monitoring system is used to collect redundant information and hence improve the accuracy of the identification results. Field tests were performed on a concrete box-girder bridge to investigate the reliability and accuracy of the method in practice. Key parameters such as vehicle weight, velocity, quantity, type, and trajectory are effectively identified according to the test results, even in the presence of one-by-one and side-by-side vehicles. The proposed methodology is infrastructure-safety oriented and preferable for traffic load monitoring of short- and medium-span bridges with respect to accuracy and cost-effectiveness. Full article
(This article belongs to the Special Issue Vision-Based Sensing in Engineering Structures)

Open Access Article
Validation of the Hurricane Imaging Radiometer Forward Radiative Transfer Model for a Convective Rain Event
Remote Sens. 2019, 11(22), 2650; https://doi.org/10.3390/rs11222650 (registering DOI) - 13 Nov 2019
Abstract
The airborne Hurricane Imaging Radiometer (HIRAD) was developed to remotely sense hurricane surface wind speed (WS) and rain rate (RR) from a high-altitude aircraft. The approach was to obtain simultaneous brightness temperature measurements over a wide frequency range to independently retrieve the WS and RR. In the absence of rain, the WS retrieval has been robust; however, for moderate to high rain rates, the joint WS/RR retrieval has not been successful. The objective of this paper was to resolve this issue by developing an improved forward radiative transfer model (RTM) for the HIRAD cross-track viewing geometry, with separated upwelling and specularly reflected downwelling atmospheric paths. Furthermore, this paper presents empirical results from an unplanned opportunity that occurred when HIRAD measured brightness temperatures over an intense tropical squall line, which was simultaneously observed by a ground based NEXRAD (Next Generation Weather Radar) radar. The independently derived NEXRAD RR created the simultaneous 3D rain field “surface truth”, which was used as an input to the RTM to generate HIRAD modeled brightness temperatures. This paper presents favorable results of comparisons of theoretical and the simultaneous, collocated HIRAD brightness temperature measurements that validate the accuracy of this new HIRAD RTM. Full article
(This article belongs to the Section Ocean Remote Sensing)

Open Access Article
A Multi-Temporal Object-Based Image Analysis to Detect Long-Lived Shrub Cover Changes in Drylands
Remote Sens. 2019, 11(22), 2649; https://doi.org/10.3390/rs11222649 (registering DOI) - 13 Nov 2019
Abstract
Climate change and human actions condition the spatial distribution and structure of vegetation, especially in drylands. In this context, object-based image analysis (OBIA) has been used to monitor changes in vegetation, but only a few studies have related them to anthropic pressure. In this study, we assessed changes in cover, number, and shape of Ziziphus lotus shrub individuals in a coastal groundwater-dependent ecosystem in SE Spain over a period of 60 years and related them to human actions in the area. In particular, we evaluated how sand mining, groundwater extraction, and the protection of the area affect shrubs. To do this, we developed an object-based methodology that allowed us to create accurate maps (overall accuracy up to 98%) of the vegetation patches and compare the cover changes in the individuals identified in them. These changes in shrub size and shape were related to soil loss, seawater intrusion, and legal protection of the area measured by average minimum distance (AMD) and average random distance (ARD) analysis. It was found that both sand mining and seawater intrusion had a negative effect on individuals; on the contrary, the protection of the area had a positive effect on the size of the individuals’ coverage. Our findings support the use of OBIA as a successful methodology for monitoring scattered vegetation patches in drylands, key to any monitoring program aimed at vegetation preservation. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Monitoring of Protected Areas)

Open Access Article
Lifting Scheme-Based Deep Neural Network for Remote Sensing Scene Classification
Remote Sens. 2019, 11(22), 2648; https://doi.org/10.3390/rs11222648 (registering DOI) - 13 Nov 2019
Abstract
Recently, convolutional neural networks (CNNs) have achieved impressive results on remote sensing scene classification, which is a fundamental problem for scene semantic understanding. However, convolution, the most essential operation in CNNs, restricts the development of CNN-based methods for scene classification: it is not efficient enough for high-resolution remote sensing images and is limited in extracting discriminative features due to its linearity. Thus, there has been growing interest in improving the convolutional layer. The hardware implementation of the JPEG2000 standard relies on the lifting scheme to perform the wavelet transform (WT). Compared with the convolution-based two-channel filter bank implementation of the WT, the lifting scheme is faster, takes up less storage, and has the ability to perform nonlinear transformations. Therefore, the lifting scheme can be regarded as a better alternative to convolution in vanilla CNNs. This paper introduces the lifting scheme into deep learning and addresses two problems: that the lifting scheme can only replace fixed and finite wavelet bases, and that its parameters cannot be updated through backpropagation. This paper proves that any convolutional layer in a vanilla CNN can be substituted by an equivalent lifting scheme. A lifting scheme-based deep neural network (LSNet) is presented to promote network applications on computation-limited platforms and to exploit the nonlinearity of the lifting scheme to enhance performance. LSNet is validated on the CIFAR-100 dataset, where the overall accuracies increase by 2.48% and 1.38% in the 1D and 2D experiments, respectively. Experimental results on AID, one of the newest remote sensing scene datasets, demonstrate that 1D LSNet and 2D LSNet achieve 2.05% and 0.45% accuracy improvements over vanilla CNNs, respectively. Full article
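A tiny sketch of the lifting idea (split, predict, update) for the fixed Haar case; LSNet's trainable, nonlinear predict and update operators go beyond this, so treat the code only as background on the scheme itself.

```python
import numpy as np

def haar_lifting_forward(x: np.ndarray):
    """One level of the Haar wavelet transform written as lifting steps:
    split into even/odd samples, predict the odd from the even, update the even."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even              # predict step: prediction residual
    approx = even + 0.5 * detail     # update step: preserves the running mean
    return approx, detail

def haar_lifting_inverse(approx: np.ndarray, detail: np.ndarray) -> np.ndarray:
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([4, 6, 10, 12, 8, 6, 5, 7])
a, d = haar_lifting_forward(signal)
assert np.allclose(haar_lifting_inverse(a, d), signal)   # lifting is trivially invertible
print(a, d)
```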

Open Access Article
Winter Wheat Mapping Based on Sentinel-2 Data in Heterogeneous Planting Conditions
Remote Sens. 2019, 11(22), 2647; https://doi.org/10.3390/rs11222647 (registering DOI) - 13 Nov 2019
Abstract
Accurately monitoring and mapping the spatial distribution of winter wheat is important for crop management, damage assessment, and yield prediction. In this study, northern and central Anhui province were selected as study areas; Sentinel-2 imagery was employed to map the winter wheat distribution, and the results were verified with Planet imagery for the 2017–2018 growing season. After analyzing images from different growth stages with the Jeffries–Matusita distance method, the Sentinel-2 imagery at the heading stage was identified as optimal for winter wheat area extraction. Therefore, ten spectral bands, seven vegetation indices (VIs), a water index, and a building index generated from the heading-stage image were used to classify winter wheat areas with a random forest (RF) algorithm. The results showed an accuracy of 93% to 97%, with a Kappa above 0.82 and a percentage error lower than 5% in northern Anhui, and an accuracy of about 80%, with Kappa ranging from 0.70 to 0.78 and a percentage error of about 20%, in central Anhui. Northern Anhui has large-scale winter wheat planting and flat terrain, while central Anhui has relatively small winter wheat fields and a high degree of surface fragmentation, which makes the extraction results in central Anhui inferior to those in northern Anhui. Further, an optimal data subset was obtained from the VIs, water index, building index, and spectral bands using the RF algorithm. Classification with this optimal subset remained highly accurate while offering a clear advantage in data volume and processing time. This study provides a perspective on winter wheat mapping under various climatic and complicated land surface conditions and is of great significance for crop monitoring and agricultural decision-making. Full article
(This article belongs to the collection Sentinel-2: Science and Applications)
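A sketch of the Jeffries-Matusita separability measure used above to choose the optimal image date, assuming each class can be modelled as a multivariate Gaussian; the two toy feature samples are invented for illustration.

```python
import numpy as np

def jeffries_matusita(x1: np.ndarray, x2: np.ndarray) -> float:
    """JM distance (0..2) between two classes of feature vectors, each class
    modelled as a multivariate Gaussian."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = 0.5 * (c1 + c2)
    diff = m1 - m2
    # Bhattacharyya distance, then JM = 2 * (1 - exp(-B))
    b = (diff @ np.linalg.solve(c, diff)) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return float(2.0 * (1.0 - np.exp(-b)))

rng = np.random.default_rng(0)
wheat = rng.normal([0.6, 0.4], 0.05, size=(200, 2))   # e.g., two VI features at heading
other = rng.normal([0.3, 0.2], 0.05, size=(200, 2))
print(f"JM = {jeffries_matusita(wheat, other):.2f}")   # near 2 means well separable
```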

Open Access Article
Decomposing the Long-term Variation in Population Exposure to Outdoor PM2.5 in the Greater Bay Area of China Using Satellite Observations
Remote Sens. 2019, 11(22), 2646; https://doi.org/10.3390/rs11222646 (registering DOI) - 13 Nov 2019
Abstract
The Greater Bay Area (GBA) of China is experiencing a high level of exposure to outdoor PM2.5 pollution. The variations in the exposure level are determined by spatiotemporal variations in the PM2.5 concentration and population. To better guide public policies that aim to reduce the population exposure level, it is essential to explicitly decompose and assess the impacts of different factors. This study took advantage of high-resolution satellite observations to characterize the long-term variations in population exposure to outdoor PM2.5 for cities in the GBA region during the three most-recent Five-Year Plan (FYP) periods (2001–2015). A new decomposition method was then used to assess the impact of PM2.5 variations and demographic changes on the exposure variation. Within the decomposition framework, an index of pollution-population-coincidence-induced PM2.5 exposure (PPCE) was introduced to characterize the interaction of PM2.5 and the population distribution. The results showed that the 15-year average PPCE levels in all cities were positive (e.g., 6 µg/m3 in Guangzhou), suggesting that unfavorable city planning had led to people dwelling in polluted areas. An analysis of the spatial differences in PM2.5 changes showed that urban areas experienced a greater decrease in PM2.5 concentration than did rural areas in most cities during the 11th (2006–2010) and 12th (2011–2015) FYP periods. These spatial differences in PM2.5 changes reduced the PPCE levels of these cities and thus reduced the exposure levels (by as much as -0.58 µg/m3/year). The population migration resulting from rapid urbanization, however, increased the PPCE and exposure levels (by as much as 0.18 µg/m3/year) in most cities during the three FYP periods considered. Dongguan was a special case in that the demographic change reduced the exposure level because of its rapid development of residential areas in cleaner regions adjacent to Shenzhen. The exposure levels in all cities remained high because of the high mean PM2.5 concentrations and their positive PPCE. To better protect public health, control efforts should target densely populated areas and city planning should locate more people in cleaner areas. Full article
(This article belongs to the Special Issue Urban Air Quality Monitoring using Remote Sensing)

Open Access Letter
Watson on the Farm: Using Cloud-Based Artificial Intelligence to Identify Early Indicators of Water Stress
Remote Sens. 2019, 11(22), 2645; https://doi.org/10.3390/rs11222645 (registering DOI) - 13 Nov 2019
Abstract
As demand for freshwater increases while supply remains stagnant, the critical need for sustainable water use in agriculture has led the EPA Strategic Plan to call for new technologies that can optimize water allocation in real-time. This work assesses the use of cloud-based artificial intelligence to detect early indicators of water stress across six container-grown ornamental shrub species. Near-infrared images were previously collected with modified Canon and MAPIR Survey II cameras deployed via a small unmanned aircraft system (sUAS) at an altitude of 30 meters. Cropped images of plants in no, low-, and high-water stress conditions were split into four-fold cross-validation sets and used to train models through IBM Watson’s Visual Recognition service. Despite constraints such as small sample size (36 plants, 150 images) and low image resolution (150 pixels by 150 pixels per plant), Watson generated models were able to detect indicators of stress after 48 hours of water deprivation with a significant to marginally significant degree of separation in four out of five species tested (p < 0.10). Two models were also able to detect indicators of water stress after only 24 hours, with models trained on images of as few as eight water-stressed Buddleia plants achieving an average area under the curve (AUC) of 0.9884 across four folds. Ease of pre-processing, minimal amount of training data required, and outsourced computation make cloud-based artificial intelligence services such as IBM Watson Visual Recognition an attractive tool for agriculture analytics. Cloud-based artificial intelligence can be combined with technologies such as sUAS and spectral imaging to help crop producers identify deficient irrigation strategies and intervene before crop value is diminished. When brought to scale, frameworks such as these can drive responsive irrigation systems that monitor crop status in real-time and maximize sustainable water use. Full article
(This article belongs to the Special Issue UAVs for Vegetation Monitoring)

Open Access Article
RealPoint3D: Generating 3D Point Clouds from a Single Image of Complex Scenarios
Remote Sens. 2019, 11(22), 2644; https://doi.org/10.3390/rs11222644 (registering DOI) - 13 Nov 2019
Abstract
Generating 3D point clouds from a single image has attracted full attention from researchers in the field of multimedia, remote sensing and computer vision. With the recent proliferation of deep learning, various deep models have been proposed for the 3D point cloud generation. However, they require objects to be captured with absolutely clean backgrounds and fixed viewpoints, which highly limits their application in the real environment. To guide 3D point cloud generation, we propose a novel network, RealPoint3D, to integrate prior 3D shape knowledge into the network. Taking additional 3D information, RealPoint3D can handle 3D object generation from a single real image captured from any viewpoint and complex background. Specifically, provided a query image, we retrieve the nearest shape model from a pre-prepared 3D model database. Then, the image, together with the retrieved shape model, is fed into RealPoint3D to generate a fine-grained 3D point cloud. We evaluated the proposed RealPoint3D on the ShapeNet dataset and ObjectNet3D dataset for the 3D point cloud generation. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework achieves superior performance. Furthermore, our proposed framework works well for real images in complex backgrounds (the image has the remaining objects in addition to the reconstructed object, and the reconstructed object may be occluded or truncated) with various viewing angles. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
GPS-PWV based Improved Long-Term Rainfall Prediction Algorithm for Tropical Regions
Remote Sens. 2019, 11(22), 2643; https://doi.org/10.3390/rs11222643 (registering DOI) - 12 Nov 2019
Abstract
Global positioning system (GPS) satellite delay is extensively used to derive precipitable water vapor (PWV) with high spatio-temporal resolution. One of the recent applications of GPS-derived PWV values is to predict rainfall events. In the literature, there are rainfall prediction algorithms based on GPS-PWV values. Most of these algorithms are developed using data from temperate and sub-tropical regions and use the maximum PWV rate, maximum PWV variation, and monthly PWV values as criteria to predict rain events. This paper examines these algorithms using data from tropical stations and proposes the use of the maximum PWV value for better prediction. When the maximum PWV value and maximum rate of increment criteria are applied to the data from the tropical stations, the false alarm (FA) rate is reduced by almost 17% compared to the results from the literature. There is a significant reduction in FA rates while maintaining true detection (TD) rates as high as those reported in the literature. A study of varying historical data lengths and lead times shows that almost 80% of the rainfall events can be predicted with a false alarm rate of 26.4% for a historical data length of 2 hours and a lead time of 45 min to 1 hour. Full article
(This article belongs to the Special Issue Weather Forecasting and Modeling Using Satellite Data)
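A simplified sketch of threshold-based rain flagging from a PWV series, loosely following the criteria discussed above (maximum PWV value plus maximum rate of increment) and scoring TD and FA rates; the thresholds, window, and synthetic series are assumptions, not the paper's tuned algorithm.

```python
import numpy as np

def predict_rain(pwv: np.ndarray, pwv_max_thr: float, rate_thr: float, window: int = 4) -> np.ndarray:
    """Flag an epoch when PWV exceeds a maximum-value threshold AND the largest
    single-step increase within the trailing window exceeds a rate threshold."""
    flags = np.zeros(pwv.size, dtype=bool)
    for t in range(window, pwv.size):
        max_rise = np.max(np.diff(pwv[t - window:t + 1]))
        flags[t] = (pwv[t] >= pwv_max_thr) and (max_rise >= rate_thr)
    return flags

def score(flags: np.ndarray, rain: np.ndarray):
    td = np.sum(flags & rain) / max(rain.sum(), 1)      # true detection rate
    fa = np.sum(flags & ~rain) / max(flags.sum(), 1)    # false alarm rate
    return td, fa

# synthetic 30-min PWV series (mm) with one moist episode and observed rain flags
pwv = np.concatenate([np.full(20, 45.0), np.linspace(45, 62, 10),
                      np.full(10, 62.0), np.linspace(62, 48, 10)])
rain = np.zeros(pwv.size, dtype=bool)
rain[28:38] = True
flags = predict_rain(pwv, pwv_max_thr=58.0, rate_thr=1.0)
print(score(flags, rain))
```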
Open Access Article
Comparison of Bi-Hemispherical and Hemispherical-Conical Configurations for In Situ Measurements of Solar-Induced Chlorophyll Fluorescence
Remote Sens. 2019, 11(22), 2642; https://doi.org/10.3390/rs11222642 (registering DOI) - 12 Nov 2019
Abstract
During recent decades, solar-induced chlorophyll fluorescence (SIF) has been shown to be a good proxy for gross primary production (GPP), promoting the development of ground-based SIF observation systems and supporting a greater understanding of the relationship between SIF and GPP. However, it is unclear whether such SIF-oriented observation systems, built from different materials and with different configurations, are able to acquire consistent SIF signals from the same target. In this study, we used four different observation systems to measure the same targets together in order to investigate whether SIF from different systems is comparable. Integration time (IT), reflectance, and SIF retrieved from different systems with hemispherical-conical (hemi-con) and bi-hemispherical (bi-hemi) configurations were also evaluated. A newly built prism system (SIFprism, which uses a prism to collect both solar and target radiation) has the shortest IT and the highest signal-to-noise ratio (SNR). Reflectance collected from the different systems showed small differences, and the diurnal patterns of both red and far-red SIF derived from the different systems showed only marginal differences when measuring a homogeneous vegetation canopy (grassland). However, when the target is heterogeneous, e.g., the Epipremnum aureum canopy, the values and diurnal pattern of far-red SIF derived from the systems with a bi-hemi configuration were clearly different from those derived from the system with the hemi-con configuration. These results demonstrate that different SIF systems are able to acquire consistent SIF for landscapes with a homogeneous canopy. However, SIF retrieved from bi-hemi and hemi-con configurations may differ when the target is a heterogeneous (or discontinuous) canopy, due to the different fields of view and viewing geometries. Our findings suggest that the bi-hemi configuration has an advantage for measuring heterogeneous canopies, because the large field of view of the upwelling sensor is representative of the footprint of eddy covariance flux measurements. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)

Open Access Article
Integrating the Continuous Wavelet Transform and a Convolutional Neural Network to Identify Vineyard Using Time Series Satellite Images
Remote Sens. 2019, 11(22), 2641; https://doi.org/10.3390/rs11222641 (registering DOI) - 12 Nov 2019
Abstract
Grape is an economic crop of great importance and is widely cultivated in China. With the development of remote sensing, abundant data sources make it possible for researchers to identify crop types and map their spatial distributions. However, to date, only a few studies have been conducted to identify vineyards using satellite image data. In this study, vineyards are identified using satellite images, and a new approach is proposed that integrates the continuous wavelet transform (CWT) and a convolutional neural network (CNN). Specifically, the original time series of the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and green chlorophyll vegetation index (GCVI) are reconstructed by applying an iterated Savitzky-Golay (S-G) method to form a daily time series for a full year; then, the CWT is applied to the three reconstructed time series to generate corresponding scalograms; and finally, CNN technology is used to identify vineyards based on the stacked scalograms. In addition to our approach, a traditional and common approach that uses a random forest (RF) to identify crop types based on multi-temporal images is selected as the control group. The experimental results demonstrated the following: (i) the proposed approach was comprehensively superior to the RF approach, improving the overall accuracy by 9.87% (up to 89.66%); (ii) the CWT had a stable and effective influence on the reconstructed time series, and the scalograms fully represented the unique time-related frequency pattern of each of the planting conditions; and (iii) the convolution and max-pooling processing of the CNN captured the unique and subtle distribution patterns of the scalograms to distinguish vineyards from other crops. Additionally, the proposed approach is expected to be applicable to other practical scenarios, such as using time series data to identify crop types or to map land cover/land use, and is recommended for testing in future practical applications. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
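A compact numpy sketch that turns a reconstructed daily NDVI series into a scalogram with a continuous wavelet transform, the kind of input stacked for the CNN above; the Ricker (Mexican-hat) wavelet and scale range are assumptions and may differ from the wavelet used by the authors.

```python
import numpy as np

def ricker(points: int, a: float) -> np.ndarray:
    """Ricker (Mexican-hat) wavelet sampled on `points` samples with width `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return norm * (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def cwt_scalogram(signal: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Continuous wavelet transform: one row of wavelet coefficients per scale."""
    out = np.empty((scales.size, signal.size))
    for i, a in enumerate(scales):
        wavelet = ricker(min(10 * int(a), signal.size), float(a))
        out[i] = np.convolve(signal, wavelet[::-1], mode="same")
    return out

# daily NDVI for one year (S-G reconstructed in the paper; here a smooth synthetic curve)
doy = np.arange(365)
ndvi = 0.25 + 0.45 * np.exp(-((doy - 200) / 45.0) ** 2)
scalogram = cwt_scalogram(ndvi, scales=np.arange(1, 65))
print(scalogram.shape)   # (64, 365); stacked with EVI/GCVI scalograms as CNN input
```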

Open Access Article
Assessment of Leaf Area Index of Rice for a Growing Cycle Using Multi-Temporal C-Band PolSAR Datasets
Remote Sens. 2019, 11(22), 2640; https://doi.org/10.3390/rs11222640 (registering DOI) - 12 Nov 2019
Abstract
C-band polarimetric synthetic aperture radar (PolSAR) data has been previously explored for estimating the leaf area index (LAI) of rice. Although the rice-growing cycle was partially covered in most of the studies, details for each phenological phase need to be further characterized. Additionally, the selection and exploration of polarimetric parameters are not comprehensive. This study evaluates the potential of a set of polarimetric parameters derived from multi-temporal RADARSAT-2 datasets for rice LAI estimation. The relationships of rice LAI with backscattering coefficients and polarimetric decomposition parameters have been examined in a complete phenological cycle. Most polarimetric parameters had weak relationships (R2 < 0.30) with LAI at the transplanting, reproductive, and maturity phase. Stronger relationships (R2 > 0.50) were observed at the vegetative phase. HV/VV and RVI FD had significant relationships (R2 > 0.80) with rice LAI for the whole growth period. They were utilized to develop empirical models. The best LAI inversion performance (RMSE = 0.81) was obtained when RVI FD was used. The acceptable error demonstrated the possibility to use the decomposition parameters for rice LAI estimation. The HV/VV-based model had a slightly lower estimation accuracy (RMSE = 1.29) but can be a practical alternative considering the wide availability of dual-polarized datasets. Full article

Open Access Article
Evapotranspiration Data Product from NESDIS GET-D System Upgraded for GOES-16 ABI Observations
Remote Sens. 2019, 11(22), 2639; https://doi.org/10.3390/rs11222639 (registering DOI) - 12 Nov 2019
Abstract
Evapotranspiration (ET) is a major component of the global and regional water cycle. An operational Geostationary Operational Environmental Satellite (GOES) ET and Drought (GET-D) product system has been developed by the National Environmental Satellite, Data and Information Service (NESDIS) in the National Oceanic and Atmospheric Administration (NOAA) for numerical weather prediction model validation, data assimilation, and drought monitoring. GET-D system was generating ET and Evaporative Stress Index (ESI) maps at 8 km spatial resolution using thermal observations of the Imagers on GOES-13 and GOES-15 before the primary operational GOES satellites transitioned to GOES-16 and GOES-17 with the Advanced Baseline Imagers (ABI). In this study, the GET-D product system is upgraded to ingest the thermal observations of ABI with the best spatial resolution of 2 km. The core of the GET-D system is the Atmosphere-Land Exchange Inversion (ALEXI) model, which exploits the mid-morning rise in the land surface temperature to deduce the land surface fluxes including ET. Satellite-based land surface temperature and solar insolation retrievals from ABI and meteorological forcing from NOAA NCEP Climate Forecast System (CFS) are the major inputs to the GET-D system. Ancillary data required in GET-D include land cover map, leaf area index, albedo and cloud mask. This paper presents preliminary results of ET from the upgraded GET-D system after a brief introduction of the ALEXI model and the architecture of GET-D system. Comparisons with in situ ET measurements showed that the accuracy of the GOES-16 ABI based ET is similar to the results from the legacy GET-D ET based on GOES-13/15 Imager data. The agreement with the in situ measurements is satisfactory with a correlation of 0.914 averaged from three Mead sites. Further evaluation of the ABI-based ET product, upgrade efforts of the GET-D system for ESI products, and conclusions for the ABI-based GET-D products are discussed. Full article
(This article belongs to the Special Issue Earth Monitoring from A New Generation of Geostationary Satellites)

Open Access Review
A Review of the Applications of Remote Sensing in Fire Ecology
Remote Sens. 2019, 11(22), 2638; https://doi.org/10.3390/rs11222638 (registering DOI) - 12 Nov 2019
Abstract
Wildfire plays an important role in ecosystem dynamics, land management, and global processes. Understanding the dynamics associated with wildfire, such as risks, spatial distribution, and effects is important for developing a clear understanding of its ecological influences. Remote sensing technologies provide a means to study fire ecology at multiple scales using an efficient and quantitative method. This paper provides a broad review of the applications of remote sensing techniques in fire ecology. Remote sensing applications related to fire risk mapping, fuel mapping, active fire detection, burned area estimates, burn severity assessment, and post-fire vegetation recovery monitoring are discussed. Emphasis is given to the roles of multispectral sensors, lidar, and emerging UAS technologies in mapping, analyzing, and monitoring various environmental properties related to fire activity. Examples of current and past research are provided, and future research trends are discussed. In general, remote sensing technologies provide a low-cost, multi-temporal means for conducting local, regional, and global-scale fire ecology research, and current research is rapidly evolving with the introduction of new technologies and techniques which are increasing accuracy and efficiency. Future research is anticipated to continue to build upon emerging technologies, improve current methods, and integrate novel approaches to analysis and classification. Full article
(This article belongs to the Special Issue Remote Sensing Approaches to Biogeographical Applications)

Open Access Article
Hybrid Scene Structuring for Accelerating 3D Radiative Transfer Simulations
Remote Sens. 2019, 11(22), 2637; https://doi.org/10.3390/rs11222637 (registering DOI) - 12 Nov 2019
Abstract
Three-dimensional (3D) radiative transfer models are the most accurate remote sensing models. However, presently the application of 3D models to heterogeneous Earth scenes is a computationally intensive task. A common approach to reduce computation time is abstracting the landscape elements into simpler geometries (e.g., ellipsoid), which, however, may introduce biases. Here, a hybrid scene structuring approach is proposed to accelerate the radiative transfer simulations while keeping the scene as realistic as possible. In a first step, a 3D description of the Earth landscape with equal-sized voxels is optimized to keep only non-empty voxels (i.e., voxels that contain triangles) and managed using a bounding volume hierarchy (BVH). For any voxel that contains triangles, within-voxel BVHs are created to accelerate the ray–triangle intersection tests. The hybrid scheme is implemented in the Discrete Anisotropic Radiative Transfer (DART) model by integrating the Embree ray-tracing kernels developed at Intel. In this paper, the performance of the hybrid algorithm is compared with the original uniform grid approach implemented in DART for a 3D city scene and a forest scene. Results show that the removal of empty voxels can accelerate urban simulation by 1.4×~3.7×, and that the within-voxel BVH can accelerate forest simulations by up to 258.5×. Full article
(This article belongs to the Section Forest Remote Sensing)
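A sketch of the classic ray-box (slab) intersection test that underlies both uniform-grid and BVH traversal of the kind described above; this is the generic textbook test, not the DART or Embree implementation, and the ray and voxel values are invented.

```python
import numpy as np

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: returns (hit, t_near) for a ray against an axis-aligned box.
    Assumes the direction has no exactly-zero components."""
    inv_d = 1.0 / np.asarray(direction, dtype=float)
    t1 = (np.asarray(box_min) - origin) * inv_d
    t2 = (np.asarray(box_max) - origin) * inv_d
    t_near = np.max(np.minimum(t1, t2))   # latest entry over the three slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit over the three slabs
    return (t_far >= max(t_near, 0.0)), t_near

# a non-empty voxel of a canopy and a near-nadir ray from above
hit, t = ray_aabb_hit(origin=np.array([0.5, 0.5, 10.0]),
                      direction=np.array([0.001, 0.001, -1.0]),
                      box_min=np.array([0.0, 0.0, 2.0]),
                      box_max=np.array([1.0, 1.0, 3.0]))
print(hit, round(t, 2))   # True 7.0
```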

Open Access Article
Synergy of Satellite Remote Sensing and Numerical Ocean Modelling for Coastal Geomorphology Diagnosis
Remote Sens. 2019, 11(22), 2636; https://doi.org/10.3390/rs11222636 (registering DOI) - 12 Nov 2019
Abstract
Sediment dynamics is the primary driver of the evolution of the coastal geomorphology and of the underwater shelf clinoforms. In this paper, we focus on mesoscale and sub-mesoscale processes, such as coastal currents and river plumes, and how they shape the sediment dynamics at regional or basin spatial scales. A new methodology is developed that combines observational data with numerical modelling: the aim is to pair satellite measurements of suspended sediment with velocity fields from numerical oceanographic models, to obtain an estimation of the sediment flux. A numerical divergence of this flux is then computed. The divergence field thus obtained shows how the aforementioned mesoscale processes distribute the sediments. The approach was applied and discussed on the Adriatic Sea, for the winter of 2012, using data provided by the ESA Coastcolour project and the output of a run of the MIT General Circulation Model. Full article
(This article belongs to the Special Issue Coastal Waters Monitoring Using Remote Sensing Technology)
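
The flux-divergence computation described above reduces to a few lines: multiply the satellite-derived suspended sediment field by the modelled velocity components and take the numerical divergence. The sketch below is a simplified stand-in (regular grid, no land masking, illustrative variable names), not the processing applied to the Coastcolour and MITgcm data.

    import numpy as np

    def sediment_flux_divergence(ssc, u, v, dx, dy):
        # ssc: suspended sediment concentration (g m^-3) on a 2D grid
        # u, v: surface current components (m s^-1) on the same grid
        # dx, dy: grid spacing (m); returns divergence of the flux (g m^-3 s^-1)
        fx = ssc * u                        # zonal sediment flux
        fy = ssc * v                        # meridional sediment flux
        dfx_dx = np.gradient(fx, dx, axis=1)
        dfy_dy = np.gradient(fy, dy, axis=0)
        return dfx_dx + dfy_dy              # positive values: net local sediment export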

Open AccessArticle
Fast Super-Resolution of 20 m Sentinel-2 Bands Using Convolutional Neural Networks
Remote Sens. 2019, 11(22), 2635; https://doi.org/10.3390/rs11222635 - 11 Nov 2019
Abstract
Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their associated open access policy. Due to a sensor design trade-off, images are acquired (and delivered) at different spatial resolutions (10, 20 and 60 m) according to specific sets of wavelengths, with only the four visible and near-infrared bands provided at the highest resolution (10 m). Although this is not a limiting factor in general, many applications are emerging in which the resolution enhancement of the 20 m bands may be beneficial, motivating the development of specific super-resolution methods. In this work, we propose to leverage Convolutional Neural Networks (CNNs) to provide a fast, scalable method for the single-sensor fusion of Sentinel-2 (S2) data, whose aim is to provide a 10 m super-resolution of the original 20 m bands. Experimental results demonstrate that the proposed solution achieves better performance than most state-of-the-art methods, including other deep learning-based ones, with a considerable saving in computational burden. Full article
(This article belongs to the Special Issue Image Super-Resolution in Remote Sensing)
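
As a hedged illustration of single-sensor fusion for Sentinel-2, the sketch below upsamples the 20 m bands, concatenates them with the 10 m bands, and lets a small residual CNN predict the missing high-frequency detail. This is a generic architecture written in Python/PyTorch for clarity; it is not the network proposed in the paper, and the layer sizes are arbitrary.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class S2FusionCNN(nn.Module):
        def __init__(self, n10=4, n20=6, features=64):
            super().__init__()
            # small convolutional body; real architectures are deeper and tuned
            self.body = nn.Sequential(
                nn.Conv2d(n10 + n20, features, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(features, n20, 3, padding=1),
            )

        def forward(self, bands10, bands20):
            # bicubic upsampling of the 20 m bands to the 10 m grid
            up20 = F.interpolate(bands20, scale_factor=2, mode="bicubic",
                                 align_corners=False)
            x = torch.cat([bands10, up20], dim=1)
            return up20 + self.body(x)      # residual learning of the 10 m detail

    # e.g. S2FusionCNN()(torch.rand(1, 4, 256, 256), torch.rand(1, 6, 128, 128))
    # returns a (1, 6, 256, 256) tensor: the six 20 m bands at 10 m resolution.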

Open AccessArticle
Radiometric Variations of On-Orbit FORMOSAT-5 RSI from Vicarious and Cross-Calibration Measurements
Remote Sens. 2019, 11(22), 2634; https://doi.org/10.3390/rs11222634 - 11 Nov 2019
Abstract
A new Taiwanese satellite, FORMOSAT-5 (FS-5), carrying a Remote Sensing Instrument (RSI) payload, was launched in August 2017 to continue the mission of its predecessor, FORMOSAT-2 (FS-2). Similar to FS-2, the RSI, the primary payload on FS-5, provides 2-m resolution panchromatic and 4-m resolution multi-spectral images. However, the radiometric properties of an optical sensor may vary with the environment and with time after launch into space. Maintaining the radiometric quality of FS-5 RSI imagery is therefore essential for scientific research and further applications. Accordingly, this study aimed at the on-orbit absolute radiometric assessment and calibration of FS-5 RSI observations. Two well-established approaches, vicarious calibration and cross-calibration, were conducted at two calibration sites with a stable atmosphere and high surface reflectance, namely Alkali Lake and Railroad Valley Playa in North America. For the cross-calibrations, the Landsat-8 Operational Land Imager (LS-8 OLI) was selected as the reference. The Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer model was used to compute the surface reflectance, atmospheric effects, and path radiance for the radiometric intensity at the top of the atmosphere. Results of the vicarious calibrations from 11 field experiments were highly consistent with those of seven cross-calibration cases in terms of spectral physical gain, indicating that the proposed approaches are practical. Moreover, the multi-temporal results showed that a decay in the optical sensitivity of the RSI was evident after launch. The variation in the calibration coefficient of each band showed no obvious consistency (6%–24%) in 2017, but it stabilized at a variation on the order of 3%–5% in most spectral bands during 2018. These results strongly suggest that periodic calibration is required for further scientific applications. Full article
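
A minimal sketch of the vicarious-calibration bookkeeping mentioned above: a radiative transfer model (6S in the study) predicts the at-sensor radiance over the calibration site, and the band gain relates that prediction to the site-averaged digital numbers. The simple ratio form, the optional offset, and all names below are assumptions made for illustration, not the study's actual formulation.

    import numpy as np

    def vicarious_gain(simulated_toa_radiance, dn_values, offset=0.0):
        # gain in (W m^-2 sr^-1 um^-1) per digital number for one spectral band
        mean_dn = np.mean(dn_values)                 # site-averaged image counts
        return simulated_toa_radiance / (mean_dn - offset)

    def gain_variation_percent(gains_over_time):
        # spread of a band's gain across calibration dates, expressed in percent
        g = np.asarray(gains_over_time, dtype=float)
        return 100.0 * (g.max() - g.min()) / g.mean()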

Open AccessArticle
Mapping and Monitoring Fractional Woody Vegetation Cover in the Arid Savannas of Namibia Using LiDAR Training Data, Machine Learning, and ALOS PALSAR Data
Remote Sens. 2019, 11(22), 2633; https://doi.org/10.3390/rs11222633 - 11 Nov 2019
Abstract
Namibia is a very arid country that has experienced significant bush encroachment and an associated decrease in livestock productivity. It is therefore essential to monitor bush encroachment and widespread debushing activities, including selective bush thinning and complete bush clearing. The aim of the study was to develop a system to map and monitor fractional woody cover (FWC) at national scales (50 m and 75 m resolution) using Synthetic Aperture Radar (SAR) satellite data (Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) global mosaics, 2009, 2010, 2015, 2016) and ancillary variables (mean annual precipitation—MAP, elevation), with machine learning models trained on diverse airborne Light Detection and Ranging (LiDAR) data sets (244,032 ha, 2008–2014). When only the SAR variables were used, an average R2 of 0.65 (RMSE = 0.16) was attained. Adding either elevation or MAP, or both ancillary variables, increased the mean R2 to 0.75 (RMSE = 0.13) and 0.79 (RMSE = 0.12). The inclusion of MAP addressed the overestimation of FWC in very arid areas, but resulted in anomalies in the form of sharp gradients in FWC along a MAP contour, which were most likely caused by the geographic distribution of the LiDAR training data. Additional targeted LiDAR acquisitions could address this issue. This was the first attempt to produce SAR-derived FWC maps for Namibia, and the maps contain substantially more detailed spatial information on woody vegetation structure than existing national maps. During the seven-year study period, the Shrubland–Woodland Mosaic was the only vegetation structural class that exhibited a regional net gain in FWC of more than 0.2, across 9% (11,906 km2) of its area, which may potentially be attributed to bush encroachment. FWC change maps provided regional insights and detailed local patterns related to debushing and regrowth that can inform national rangeland policies and debushing programs. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Rangelands Research)
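
To illustrate the regression workflow described above, the sketch below trains a random forest on SAR backscatter plus the ancillary variables to predict LiDAR-derived FWC and reports R2 and RMSE. The random forest is a stand-in for the machine-learning models evaluated in the study, and the feature names are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score, mean_squared_error
    from sklearn.model_selection import train_test_split

    def train_fwc_model(hh, hv, map_mm, elev_m, fwc_lidar):
        # predictors per pixel: SAR backscatter (HH, HV), MAP (mm), elevation (m)
        X = np.column_stack([hh, hv, map_mm, elev_m])
        y = fwc_lidar                                    # 0-1 woody cover from LiDAR
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        pred = model.predict(X_te)
        rmse = mean_squared_error(y_te, pred) ** 0.5
        return model, r2_score(y_te, pred), rmse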

Open AccessArticle
Application of DInSAR-PSI Technology for Deformation Monitoring of the Mosul Dam, Iraq
Remote Sens. 2019, 11(22), 2632; https://doi.org/10.3390/rs11222632 - 11 Nov 2019
Abstract
On-going monitoring of dam deformation is critical to assure safe and efficient operation. Traditional monitoring methods, based on in-situ sensor measurements on the dam, have limitations in spatial coverage, observation frequency, and cost. This paper describes the potential use of Synthetic Aperture Radar (SAR) scenes from Sentinel-1A for characterizing deformation at the Mosul Dam (MD) in NW Iraq. Seventy-eight Single Look Complex (SLC) Sentinel-1A scenes in ascending geometry, acquired from 3 October 2014 to 27 June 2019, and 96 points within the MD structure were selected to determine the deformation rate using persistent scatterer interferometry (PSI). The maximum deformation velocity was found to be about 7.4 mm·yr−1 in a longitudinal subsidence area extending over a length of 222 m along the dam axis. This area lies in the center of the MD and has a mean subsidence velocity of about 6.27 mm·yr−1. The subsidence rate shows an inverse relationship with the reservoir water level and a strong correlation with grouting episodes. Variations in the deformation rate within the same year are most probably due to increased hydrostatic stress caused by water storage in the dam, which increased the solubility of the gypsum beds, creating voids and localized collapses underneath the dam. PSI information derived from Sentinel-1A proved to be a good tool for monitoring dam deformation with good accuracy, yielding results that can be used in engineering applications and risk management. Full article
(This article belongs to the Special Issue InSAR for Earth Observation)
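
The per-point velocity that PSI ultimately reports can be illustrated as a linear fit to a line-of-sight displacement time series, as in the sketch below. The full PSI chain (interferogram formation, phase unwrapping, atmospheric filtering, reference-point selection) is omitted, and the example values in the comment are made up for illustration only.

    import numpy as np
    from datetime import date

    def los_velocity(acq_dates, los_disp_mm):
        # acq_dates: list of datetime.date; los_disp_mm: line-of-sight displacements (mm)
        t_years = np.array([(d - acq_dates[0]).days for d in acq_dates]) / 365.25
        slope, intercept = np.polyfit(t_years, los_disp_mm, 1)
        return slope    # mm/yr; negative values indicate subsidence with this sign convention

    # Hypothetical series (not the paper's data):
    # los_velocity([date(2014, 10, 3), date(2016, 10, 3), date(2019, 6, 27)],
    #              [0.0, -13.0, -35.0])  gives roughly -7.4 mm/yr.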

Open AccessArticle
Category-Sensitive Domain Adaptation for Land Cover Mapping in Aerial Scenes
Remote Sens. 2019, 11(22), 2631; https://doi.org/10.3390/rs11222631 - 11 Nov 2019
Abstract
Since manually labeling aerial images for pixel-level classification is expensive and time-consuming, developing strategies for land cover mapping without reference labels is essential and meaningful. As an efficient solution to this issue, domain adaptation has been widely utilized in numerous semantic-labeling applications. However, current approaches generally pursue marginal distribution alignment between the source and target features and ignore category-level alignment. Directly applying them to land cover mapping therefore leads to unsatisfactory performance in the target domain. To address this problem, we embed a geometry-consistent generative adversarial network (GcGAN) into a co-training adversarial learning network (CtALN) and develop a category-sensitive domain adaptation (CsDA) method for land cover mapping using very-high-resolution (VHR) optical aerial images. The GcGAN aims to eliminate the domain discrepancies between labeled and unlabeled images while retaining their intrinsic land cover information by translating the features of the labeled images from the source domain to the target domain. Meanwhile, the CtALN aims to learn a semantic labeling model in the target domain from the translated features and their corresponding reference labels. By training this hybrid framework, our method learns to distill knowledge from the source domain and transfer it to the target domain, while preserving not only global domain consistency but also category-level consistency between labeled and unlabeled images in the feature space. Experimental results on two airborne benchmark datasets and comparisons with other state-of-the-art methods verify the robustness and superiority of the proposed CsDA. Full article
(This article belongs to the Special Issue Deep Transfer Learning for Remote Sensing)
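
As a generic illustration of category-level (rather than purely marginal) alignment, the sketch below pulls per-class feature centroids of the source and target domains towards each other. This is not the GcGAN/CtALN framework of the paper; it only shows, in a few lines, what a class-aware alignment term can look like, with all names and the centroid formulation assumed for illustration.

    import torch

    def class_centroid_alignment_loss(src_feat, src_lab, tgt_feat, tgt_pseudo, n_classes):
        # src_feat, tgt_feat: (N, D) pixel features; src_lab, tgt_pseudo: (N,) class ids
        loss = src_feat.new_zeros(())
        used = 0
        for c in range(n_classes):
            s_mask, t_mask = src_lab == c, tgt_pseudo == c
            if s_mask.any() and t_mask.any():
                # distance between the class-c centroids of the two domains
                loss = loss + torch.norm(src_feat[s_mask].mean(0) - tgt_feat[t_mask].mean(0))
                used += 1
        return loss / max(used, 1)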

Open AccessArticle
Satellite Retrieval of Downwelling Shortwave Surface Flux and Diffuse Fraction under All Sky Conditions in the Framework of the LSA SAF Program (Part 2: Evaluation)
Remote Sens. 2019, 11(22), 2630; https://doi.org/10.3390/rs11222630 - 11 Nov 2019
Abstract
High-frequency knowledge of the spatio-temporal distribution of the downwelling surface shortwave flux (DSSF) and its diffuse fraction (fd) at the surface is nowadays essential for understanding climate processes at the surface–atmosphere interface, plant photosynthesis and the carbon cycle, and for the solar energy sector. The European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility for Land Surface Analysis has operationally delivered the MDSSFTD (MSG Downwelling Surface Short-wave radiation Fluxes—Total and Diffuse fraction) product since 2019. The retrieval method was presented in a companion paper. Part 2 now focuses on the evaluation of the MDSSFTD algorithm and presents a comparison of the corresponding outputs, i.e., the total DSSF and diffuse fraction (fd) components, against in situ measurements acquired at four Baseline Surface Radiation Network (BSRN) stations over a seven-month period. The validation is performed on an instantaneous basis. We show that the satellite estimates of DSSF and fd meet the target requirements defined by the user community for all-sky (clear and cloudy) conditions. For DSSF, the requirements are 20 Wm−2 for DSSF < 200 Wm−2 and 10% for DSSF ≥ 200 Wm−2; the mean bias error (MBE) and relative mean bias error (rMBE) with respect to the ground measurements are 3.618 Wm−2 and 0.252%, respectively. For fd, the requirements are 0.1 for fd < 0.5 and 20% for fd ≥ 0.5; the MBE and rMBE with respect to the ground measurements are −0.044 and −17.699%, respectively. The study also provides a separate analysis of the product performance under clear-sky and cloudy-sky conditions. The importance of representing the cloud–aerosol radiative coupling in the MDSSFTD method is discussed. Finally, it is concluded that the aerosol optical depth (AOD) forecasts currently available are accurate enough to obtain reliable diffuse solar flux estimates, whereas AOD forecast quality was still a limitation a few years ago. Full article
(This article belongs to the Special Issue Satellite Images for Assessing Solar Radiation at Surface)
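
The headline validation statistics above (MBE, rMBE, and the DSSF target requirement of 20 Wm−2 below 200 Wm−2 and 10% above) reduce to a few lines of arithmetic, sketched below with illustrative function and variable names.

    import numpy as np

    def mbe_rmbe(satellite, ground):
        # mean bias error and relative mean bias error (in percent of the ground mean)
        bias = np.asarray(satellite, float) - np.asarray(ground, float)
        mbe = bias.mean()
        rmbe = 100.0 * mbe / np.mean(ground)
        return mbe, rmbe

    def fraction_within_dssf_requirement(satellite, ground):
        # per-sample tolerance: 20 W m^-2 when the reference flux is below 200 W m^-2,
        # 10% of the reference flux otherwise; returns the fraction of samples passing
        sat, ref = np.asarray(satellite, float), np.asarray(ground, float)
        tol = np.where(ref < 200.0, 20.0, 0.10 * ref)
        return np.mean(np.abs(sat - ref) <= tol)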
