Table of Contents

Remote Sens., Volume 10, Issue 10 (October 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Displaying articles 1-157
Open Access Article: Mapping Crop Residue and Tillage Intensity Using WorldView-3 Satellite Shortwave Infrared Residue Indices
Remote Sens. 2018, 10(10), 1657; https://doi.org/10.3390/rs10101657
Received: 23 August 2018 / Revised: 20 September 2018 / Accepted: 9 October 2018 / Published: 18 October 2018
PDF Full-text (3920 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Crop residues serve many important functions in agricultural conservation including preserving soil moisture, building soil organic carbon, and preventing erosion. Percent crop residue cover on a field surface reflects the outcome of tillage intensity and crop management practices. Previous studies using proximal hyperspectral remote sensing have demonstrated accurate measurement of percent residue cover using residue indices that characterize cellulose and lignin absorption features found between 2100 nm and 2300 nm in the shortwave infrared (SWIR) region of the electromagnetic spectrum. The 2014 launch of the WorldView-3 (WV3) satellite has now provided a space-borne platform for the collection of narrow band SWIR reflectance imagery capable of measuring these cellulose and lignin absorption features. In this study, WorldView-3 SWIR imagery (14 May 2015) was acquired over farmland on the Eastern Shore of Chesapeake Bay (Maryland, USA), was converted to surface reflectance, and eight different SWIR reflectance indices were calculated. On-farm photographic sampling was used to measure percent residue cover at a total of 174 locations in 10 agricultural fields, ranging from plow-till to continuous no-till management, and these in situ measurements were used to develop percent residue cover prediction models from the SWIR indices using both polynomial and linear least squares regressions. Analysis was limited to agricultural fields with minimal green vegetation (Normalized Difference Vegetation Index < 0.3) due to expected interference of vegetation with the SWIR indices. In the resulting residue prediction models, spectrally narrow residue indices including the Shortwave Infrared Normalized Difference Residue Index (SINDRI) and the Lignin Cellulose Absorption Index (LCA) were determined to be more accurate than spectrally broad Landsat-compatible indices such as the Normalized Difference Tillage Index (NDTI), as determined by respective R2 values of 0.94, 0.92, and 0.84 and respective residual mean squared errors (RMSE) of 7.15, 8.40, and 12.00. Additionally, SINDRI and LCA were more resistant to interference from low levels of green vegetation. The model with the highest correlation (2nd order polynomial SINDRI, R2 = 0.94) was used to convert the SWIR imagery into a map of crop residue cover for non-vegetated agricultural fields throughout the imagery extent, describing the distribution of tillage intensity within the farm landscape. WorldView-3 satellite imagery provides spectrally narrow SWIR reflectance measurements that show utility for a robust mapping of crop residue cover. Full article
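The workflow summarized above (a normalized-difference SWIR residue index, an NDVI mask to exclude green vegetation, and a least-squares polynomial fit against field-measured residue cover) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic arrays and band names stand in for real WorldView-3 reflectance and the photographic field samples, and only the NDVI < 0.3 cutoff is taken from the abstract.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index, e.g. an NDTI-style (a - b) / (a + b)."""
    return (band_a - band_b) / (band_a + band_b)

# Illustrative inputs: surface reflectance for two SWIR bands plus red/NIR for NDVI.
swir_a, swir_b = np.random.rand(174), np.random.rand(174)   # stand-ins for WV3 SWIR bands
red, nir = np.random.rand(174), np.random.rand(174)
residue_cover = np.random.rand(174) * 100                    # stand-in for sampled % residue cover

ndvi = normalized_difference(nir, red)
index = normalized_difference(swir_a, swir_b)

# Restrict the model to locations with minimal green vegetation, as in the study.
mask = ndvi < 0.3
coeffs = np.polyfit(index[mask], residue_cover[mask], deg=2)   # 2nd-order polynomial model
predicted = np.polyval(coeffs, index[mask])
rmse = np.sqrt(np.mean((predicted - residue_cover[mask]) ** 2))
```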
(This article belongs to the Special Issue Quantitative Remote Sensing of Land Surface Variables)

Open Access Feature Paper Article: Mapping and Forecasting Onsets of Harmful Algal Blooms Using MODIS Data over Coastal Waters Surrounding Charlotte County, Florida
Remote Sens. 2018, 10(10), 1656; https://doi.org/10.3390/rs10101656
Received: 3 September 2018 / Revised: 13 October 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
PDF Full-text (2117 KB) | HTML Full-text | XML Full-text
Abstract
Over the past two decades, persistent occurrences of harmful algal blooms (HAB; Karenia brevis) have been reported in Charlotte County, southwestern Florida. We developed data-driven models that rely on spatiotemporal remote sensing and field data to identify factors controlling HAB propagation, provide a same-day distribution (nowcasting), and forecast their occurrences up to three days in advance. We constructed multivariate regression models using historical HAB occurrences (213 events reported from January 2010 to October 2017) compiled by the Florida Fish and Wildlife Conservation Commission and validated the models against a subset (20%) of the historical events. The models were designed to capture the onset of the HABs instead of those that developed days earlier and continued thereafter. A prototype of an early warning system was developed through a threefold exercise. The first step involved the automatic downloading and processing of daily Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua products using SeaDAS ocean color processing software to extract temporal and spatial variations of remote sensing-based variables over the study area. The second step involved the development of a multivariate regression model for same-day mapping of HABs and similar subsequent models for forecasting HAB occurrences one, two, and three days in advance. Eleven remote sensing variables and two non-remote sensing variables were used as inputs for the generated models. In the third and final step, model outputs (same-day and forecasted distribution of HABs) were posted automatically on a web map. Our findings include: (1) the variables most indicative of the timing of bloom propagation are bathymetry, euphotic depth, wind direction, sea surface temperature (SST), ocean chlorophyll three-band algorithm for MODIS [chlorophyll-a OC3M] and distance from the river mouth, and (2) the model predictions were 90% successful for same-day mapping and 65%, 72% and 71% for the one-, two- and three-day advance predictions, respectively. The adopted methodologies are reliable at a local scale, dependent on readily available remote sensing data, and cost-effective and thus could potentially be used to map and forecast algal bloom occurrences in data-scarce regions. Full article
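As a rough sketch of the modeling step described above (a multivariate regression trained on 80% of the historical HAB events and validated on the held-out 20%), the following uses scikit-learn with entirely synthetic stand-in data; the feature matrix and response variable are placeholders, not the authors' actual inputs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Illustrative feature matrix: columns stand in for the predictors named in the abstract
# (bathymetry, euphotic depth, wind direction, SST, OC3M chlorophyll-a, distance from the
# river mouth, ...). Values are random placeholders.
rng = np.random.default_rng(0)
X = rng.random((213, 13))     # 213 historical events, 11 remote sensing + 2 other variables
y = rng.random(213)           # stand-in for the HAB response used in model fitting

# Hold out 20% of the historical events for validation, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("validation R^2:", model.score(X_test, y_test))
```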
(This article belongs to the Special Issue Quantitative Remote Sensing of Land Surface Variables)

Open Access Article: Glint Removal Assessment to Estimate the Remote Sensing Reflectance in Inland Waters with Widely Differing Optical Properties
Remote Sens. 2018, 10(10), 1655; https://doi.org/10.3390/rs10101655
Received: 3 September 2018 / Revised: 5 October 2018 / Accepted: 9 October 2018 / Published: 18 October 2018
PDF Full-text (4633 KB) | HTML Full-text | XML Full-text
Abstract
The quality control of remote sensing reflectance (Rrs) is a challenging task in remote sensing applications, particularly when retrieving accurate in situ measurements in optically complex aquatic systems. One of the main challenges is the glint effect in the in situ measurements. Our study evaluates four different methods for reducing the glint effect in Rrs spectra collected in cascade reservoirs with widely differing optical properties. (i) The first method adopts a constant skylight-correction coefficient (ρ) for any viewing geometry and wind speeds lower than 5 m·s⁻¹; (ii) the second uses a look-up table with ρ values that vary according to the viewing geometry and wind speed; (iii) the third is based on hyperspectral optimization to produce a spectral glint correction; and (iv) the fourth computes ρ as a function of wind speed. The glint-corrected Rrs spectra were assessed using HydroLight simulations. The results showed that the glint correction with spectral ρ achieved the lowest errors; however, in a Colored Dissolved Organic Matter (CDOM)-dominated environment with no remarkable chlorophyll-a concentrations, the second method performed best. Overall, the spectral glint correction reduced errors by almost 30%. Full article
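For readers unfamiliar with the skylight (glint) correction being compared, a common above-water formulation is Rrs = (Lt − ρ·Lsky)/Es, where the four methods differ only in how the reflectance factor ρ is chosen. The sketch below is illustrative, not the study's processing chain; the constant ρ ≈ 0.028 for low wind is a commonly cited value, and the spectra are synthetic.

```python
import numpy as np

def rrs_above_water(lt, lsky, es, rho):
    """Common above-water formulation: Rrs = (Lt - rho * Lsky) / Es,
    where rho is the skylight reflectance factor that the compared methods estimate differently."""
    return (lt - rho * lsky) / es

wavelengths = np.arange(400, 901, 1)          # nm, illustrative hyperspectral grid
lt = np.random.rand(wavelengths.size)         # total above-water radiance (water + surface)
lsky = np.random.rand(wavelengths.size)       # sky radiance
es = np.random.rand(wavelengths.size) + 1.0   # downwelling irradiance

# Method (i): one constant rho for all geometries at low wind (a commonly used value is ~0.028).
rrs_const = rrs_above_water(lt, lsky, es, rho=0.028)

# Method (iii): a spectrally resolved rho (here just a placeholder array) gives a per-band correction.
rho_spectral = np.full(wavelengths.size, 0.028)
rrs_spectral = rrs_above_water(lt, lsky, es, rho=rho_spectral)
```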
(This article belongs to the Special Issue Remote Sensing of Inland Waters and Their Catchments)

Open Access Article: Near Real-Time Extracting Wildfire Spread Rate from Himawari-8 Satellite Data
Remote Sens. 2018, 10(10), 1654; https://doi.org/10.3390/rs10101654
Received: 5 September 2018 / Revised: 6 October 2018 / Accepted: 13 October 2018 / Published: 17 October 2018
PDF Full-text (3437 KB) | HTML Full-text | XML Full-text
Abstract
Fire Spread Rate (FSR) indicates how fast a fire is spreading, which is especially helpful for wildfire rescue and management. Historically, imagery from sensors on polar-orbiting satellites, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), has been used to detect active fire and burned area at large spatial scales. However, daily revisit cycles make such data inherently unable to support FSR extraction in near real-time (hourly or less). We argue that Himawari-8, a next-generation geostationary satellite with 10-min temporal resolution and 0.5–2 km spatial resolution, has the potential for near real-time FSR extraction. To that end, we propose a novel method (named H8-FSR) for near real-time FSR extraction based on Himawari-8 data. The method first defines the centroid of the burned area as the fire center; the near real-time FSR is then extracted by computing the movement rate of the fire center between acquisitions. As a case study, the method was applied to the Esperance bushfire that broke out on 17 November 2015 in Western Australia. Compared with the FSR estimated using the Commonwealth Scientific and Industrial Research Organization (CSIRO) Grassland Fire Spread (GFS) model, H8-FSR achieved favorable performance, with a coefficient of determination (R²) of 0.54, a mean bias error of –0.75 m/s, a mean absolute percent error of 33.20%, and a root mean square error of 1.17 m/s. These results demonstrate that Himawari-8 data are valuable for near real-time FSR extraction and suggest that the proposed method could be applicable to data from other next-generation geostationary satellites. Full article
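The core of the H8-FSR idea, as summarized in the abstract, is to track the burned-area centroid between consecutive acquisitions and convert its displacement into a rate. A minimal sketch with synthetic masks and an assumed 2-km grid could look like this; it is an illustration, not the authors' implementation.

```python
import numpy as np

def fire_centroid(burned_mask, x_coords, y_coords):
    """Centroid of the burned area, used here as the 'fire center'."""
    ys, xs = np.nonzero(burned_mask)
    return x_coords[xs].mean(), y_coords[ys].mean()

def spread_rate(c0, c1, dt_seconds):
    """Fire spread rate (m/s) as centroid displacement per time step (10 min for Himawari-8)."""
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    return np.hypot(dx, dy) / dt_seconds

# Illustrative 2-km grid and two consecutive burned-area masks 10 minutes apart.
x = np.arange(0, 100_000, 2000.0)
y = np.arange(0, 100_000, 2000.0)
mask_t0 = np.zeros((50, 50), bool); mask_t0[20:25, 20:25] = True
mask_t1 = np.zeros((50, 50), bool); mask_t1[20:27, 20:28] = True

fsr = spread_rate(fire_centroid(mask_t0, x, y), fire_centroid(mask_t1, x, y), dt_seconds=600)
```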
(This article belongs to the Special Issue Remote Sensing of Wildfire)

Open Access Article: Wavelet-Based Correlation Identification of Scales and Locations between Landscape Patterns and Topography in Urban-Rural Profiles: Case of the Jilin City, China
Remote Sens. 2018, 10(10), 1653; https://doi.org/10.3390/rs10101653
Received: 25 July 2018 / Revised: 7 October 2018 / Accepted: 7 October 2018 / Published: 17 October 2018
PDF Full-text (5916 KB) | HTML Full-text | XML Full-text
Abstract
Landscapes display overlapping sets of correlations in different regions at different spatial scales, and these correlations can be delineated by pattern analysis. This study identified the correlations between landscape pattern and topography at various scales and locations in urban-rural profiles from Jilin City, China, using Pearson correlation analysis and wavelet method. Two profiles, 30 km (A) and 35 km (B) in length with 0.1-km sampling intervals, were selected. The results indicated that profile A was more sensitive to the characterization of the land use pattern as influenced by topography due to its more varied terrain, and three scales (small, medium, and large) could be defined based on the variation in the standard deviation of the wavelet coherency in profile A. Correlations between landscape metrics and elevation were similar at large scales (over 8 km), while complex correlations were discovered at other scale intervals. The medium scale of cohesion and Shannon’s diversity index was 1–8 km, while those of perimeter-area fractal dimension and edge density index were 1.5–8 km and 2–8 km, respectively. At small scales, the correlations were weak as a whole and scattered due to the micro-topography and landform elements, such as valleys and hillsides. At medium scales, the correlations were most affected by local topography, and the land use pattern was significantly correlated with topography at several locations. At large spatial scales, significant correlation existed throughout the study area due to alternating mountains and plains. In general, the strength of correlation between landscape metrics and topography increased gradually with increasing spatial scale, although this tendency had some fluctuations in several locations. Despite a complex calculating process and ecological interpretation, the wavelet method is still an effective tool to identify multi-scale characteristics in landscape ecology. Full article

Open Access Article: Guidelines for Underwater Image Enhancement Based on Benchmarking of Different Methods
Remote Sens. 2018, 10(10), 1652; https://doi.org/10.3390/rs10101652
Received: 11 September 2018 / Revised: 7 October 2018 / Accepted: 12 October 2018 / Published: 17 October 2018
PDF Full-text (5935 KB) | HTML Full-text | XML Full-text
Abstract
Images obtained in an underwater environment are often affected by colour casting and suffer from poor visibility and lack of contrast. In the literature, there are many enhancement algorithms that improve different aspects of the underwater imagery. Each paper, when presenting a new algorithm or method, usually compares the proposed technique with some alternatives present in the current state of the art. There are no studies on the reliability of benchmarking methods, as the comparisons are based on various subjective and objective metrics. This paper would pave the way towards the definition of an effective methodology for the performance evaluation of the underwater image enhancement techniques. Moreover, this work could orientate the underwater community towards choosing which method can lead to the best results for a given task in different underwater conditions. In particular, we selected five well-known methods from the state of the art and used them to enhance a dataset of images produced in various underwater sites with different conditions of depth, turbidity, and lighting. These enhanced images were evaluated by means of three different approaches: objective metrics often adopted in the related literature, a panel of experts in the underwater field, and an evaluation based on the results of 3D reconstructions. Full article
(This article belongs to the Section Remote Sensing Image Processing)

Open Access Article: Validation of Hourly Global Horizontal Irradiance for Two Satellite-Derived Datasets in Northeast Iraq
Remote Sens. 2018, 10(10), 1651; https://doi.org/10.3390/rs10101651
Received: 27 August 2018 / Revised: 26 September 2018 / Accepted: 15 October 2018 / Published: 17 October 2018
PDF Full-text (7605 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Several sectors need global horizontal irradiance (GHI) data for various purposes. However, the availability of a long-term time series of high quality in situ GHI measurements is limited. Therefore, several studies have tried to estimate GHI by re-analysing climate data or satellite images. Validation is essential for the later use of GHI data in the regions with a scarcity of ground-recorded data. This study contributes to previous studies that have been carried out in the past to validate HelioClim-3 version 5 (HC3v5) and the Copernicus Atmosphere Monitoring Service, using radiation service version 3 (CRSv3) data of hourly GHI from satellite-derived datasets (SDD) with nine ground stations in northeast Iraq, which have not been used previously. The validation is carried out with station data at the pixel locations and two other data points in the vicinity of each station, which is something that is rarely seen in the literature. The temporal and spatial trends of the ground data are well captured by the two SDDs. Correlation ranges from 0.94 to 0.97 in all-sky and clear-sky conditions in most cases, while for cloudy-sky conditions, it is between 0.51–0.72 and 0.82–0.89 for the clearness index. The bias is negative for most of the cases, except for three positive cases. It ranges from −7% to 4%, and −8% to 3% for the all-sky and clear-sky conditions, respectively. For cloudy-sky conditions, the bias is positive, and differs from one station to another, from 16% to 85%. The root mean square error (RMSE) ranges between 12–20% and 8–12% for all-sky and clear-sky conditions, respectively. In contrast, the RMSE range is significantly higher in cloudy-sky conditions: above 56%. The bias and RMSE for the clearness index are nearly the same as those for the GHI for all-sky conditions. The spatial variability of hourly GHI SDD differs only by 2%, depending on the station location compared to the data points around each station. The variability of two SDDs is quite similar to the ground data, based on the mean and standard deviation of hourly GHI in a month. Having station data at different timescales and the small number of stations with GHI records in the region are the main limitations of this analysis. Full article
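For reference, the relative bias and relative RMSE statistics reported in this kind of validation can be computed as below; the hourly series here are synthetic stand-ins for the station and satellite-derived GHI data.

```python
import numpy as np

def relative_bias(sat, ground):
    """Bias of satellite-derived GHI relative to the ground mean, in percent."""
    return 100.0 * np.mean(sat - ground) / np.mean(ground)

def relative_rmse(sat, ground):
    """RMSE relative to the ground mean, in percent."""
    return 100.0 * np.sqrt(np.mean((sat - ground) ** 2)) / np.mean(ground)

# Illustrative hourly series for one station and one satellite-derived dataset (W/m^2).
ground = np.random.uniform(100, 900, size=24 * 365)
sat = ground + np.random.normal(0, 60, size=ground.size)

print("bias %:", relative_bias(sat, ground))
print("RMSE %:", relative_rmse(sat, ground))
print("correlation:", np.corrcoef(sat, ground)[0, 1])
```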
(This article belongs to the Special Issue Solar Radiation, Modelling and Remote Sensing)

Open Access Article: The Random Forest-Based Method of Fine-Resolution Population Spatialization by Using the International Space Station Nighttime Photography and Social Sensing Data
Remote Sens. 2018, 10(10), 1650; https://doi.org/10.3390/rs10101650
Received: 22 August 2018 / Revised: 9 October 2018 / Accepted: 13 October 2018 / Published: 17 October 2018
PDF Full-text (13688 KB) | HTML Full-text | XML Full-text
Abstract
Despite the importance of high-resolution population distribution in urban planning, disaster prevention and response, regional economic development, and the improvement of urban living environments, traditional urban investigations have mainly focused on large-scale population spatialization using coarse-resolution nighttime light (NTL), while few efforts have addressed fine-resolution population mapping. To address the problems of generating small-scale population distributions, this paper proposes a Random Forest regression-based method to spatialize population at 25 m resolution from International Space Station (ISS) nighttime photography and urban function zones generated from social sensing data (points of interest, POI). There are three main steps: HSL (hue, saturation, lightness) transformation and saturation calibration of the ISS imagery, generation of functional-zone maps based on POI, and population spatialization based on the Random Forest model. Accuracy assessments against WorldPop validated the proposed method as a qualified approach for generating fine-resolution population maps. In the discussion, this paper suggests that, without the help of auxiliary data, NTL cannot be directly employed as a population indicator at small scales. The Variable Importance Measure of the RF model confirmed the correlation between the features and population, and further demonstrated that urban functions performed better than LULC (Land Use and Land Cover) in small-scale population mapping. Urban height was also shown to improve the performance of population disaggregation because it compensates for building volume. In summary, the proposed method shows great potential for disaggregating fine-resolution population and other urban socio-economic attributes. Full article
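A minimal sketch of the spatialization step, assuming a generic feature set (nighttime lightness, POI-derived function-zone descriptors, building height) and synthetic training targets, could look like the following; it illustrates Random Forest regression with variable importances rather than the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative per-cell features; the real predictor set follows the paper's workflow.
rng = np.random.default_rng(1)
n_cells = 5000
X = np.column_stack([
    rng.random(n_cells),              # calibrated nighttime lightness
    rng.integers(0, 50, n_cells),     # residential POI density (placeholder)
    rng.integers(0, 50, n_cells),     # commercial POI density (placeholder)
    rng.random(n_cells) * 100,        # mean building height in metres (placeholder)
])
y = rng.random(n_cells) * 500         # stand-in for population per 25 m cell

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("variable importances:", rf.feature_importances_)   # cf. the paper's Variable Importance Measure
```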

Open Access Feature Paper Article: Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks
Remote Sens. 2018, 10(10), 1649; https://doi.org/10.3390/rs10101649
Received: 5 September 2018 / Revised: 8 October 2018 / Accepted: 15 October 2018 / Published: 16 October 2018
PDF Full-text (22826 KB) | HTML Full-text | XML Full-text
Abstract
Recently, convolutional neural networks (CNN) have been intensively investigated for the classification of remote sensing data by extracting invariant and abstract features suitable for classification. In this paper, a novel framework is proposed for the fusion of hyperspectral images and LiDAR-derived elevation data based on CNN and composite kernels. First, extinction profiles are applied to both data sources in order to extract spatial and elevation features from hyperspectral and LiDAR-derived data, respectively. Second, a three-stream CNN is designed to extract informative spectral, spatial, and elevation features individually from both available sources. The combination of extinction profiles and CNN features enables us to jointly benefit from low-level and high-level features to improve classification performance. To fuse the heterogeneous spectral, spatial, and elevation features extracted by CNN, instead of a simple stacking strategy, a multi-sensor composite kernels (MCK) scheme is designed. This scheme helps us to achieve higher spectral, spatial, and elevation separability of the extracted features and effectively perform multi-sensor data fusion in kernel space. In this context, a support vector machine and an extreme learning machine with their composite kernel versions are employed to produce the final classification result. The proposed framework is carried out on two widely used data sets with different characteristics: an urban data set captured over Houston, USA, and a rural data set captured over Trento, Italy. The proposed framework yields the highest OA of 92.57% and 97.91% for the Houston and Trento data sets, respectively. Experimental results confirm that the proposed fusion framework can produce competitive results in both urban and rural areas in terms of classification accuracy, and significantly mitigate the salt-and-pepper noise in classification maps. Full article
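The multi-sensor composite kernel idea, a weighted sum of kernels computed separately on the spectral, spatial, and elevation features, can be illustrated with a precomputed-kernel SVM; the weights, feature dimensions, and data below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Stand-ins for CNN features extracted from the spectral, spatial, and elevation streams.
rng = np.random.default_rng(0)
f_spec, f_spat, f_elev = (rng.random((300, 64)) for _ in range(3))
labels = rng.integers(0, 6, 300)

# Multi-sensor composite kernel: a weighted sum of per-source kernels (weights are illustrative).
mu = (0.4, 0.3, 0.3)
K = mu[0] * rbf_kernel(f_spec) + mu[1] * rbf_kernel(f_spat) + mu[2] * rbf_kernel(f_elev)

clf = SVC(kernel="precomputed").fit(K, labels)
print("training accuracy:", clf.score(K, labels))
```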

Open Access Article: Global Fractional Vegetation Cover Estimation Algorithm for VIIRS Reflectance Data Based on Machine Learning Methods
Remote Sens. 2018, 10(10), 1648; https://doi.org/10.3390/rs10101648
Received: 27 August 2018 / Revised: 28 September 2018 / Accepted: 9 October 2018 / Published: 16 October 2018
PDF Full-text (5480 KB) | HTML Full-text | XML Full-text
Abstract
Fractional vegetation cover (FVC) is an essential input parameter for many environmental and ecological models. Recently, several global FVC products have been generated using remote sensing data. The Global LAnd Surface Satellite (GLASS) FVC product, which is generated from Moderate Resolution Imaging Spectroradiometer (MODIS) data, has attained acceptable performance. However, the original MODIS operation design lifespan has been exceeded. The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-Orbiting Partnership (S-NPP) satellite was designed to be the MODIS successor. Therefore, developing an FVC estimation algorithm for VIIRS data is important for maintaining continuous FVC estimates in case of MODIS failure. In this study, a global FVC estimation algorithm for VIIRS surface reflectance data was proposed based on machine learning methods, which investigated the performances of back propagating neural networks (BPNNs), general regression networks (GRNNs), multivariate adaptive regression splines (MARS), and Gaussian process regression (GPR). The training samples were extracted from the GLASS FVC product and corresponding reconstructed VIIRS surface reflectance in 2013 over the global sampling locations. The VIIRS reflectances of red and near infrared (NIR) bands were the input variables for these machine learning methods. The theoretical performances and independent validation results indicated that the four machine learning methods could achieve similar and reliable FVC estimates. Regarding the FVC estimation accuracy, the GPR method achieved the best performance (R2 = 0.9019, RMSE = 0.0887). The MARS method had the obvious advantage of computational efficiency. Furthermore, the FVC estimates achieved good spatial and temporal continuities. Therefore, the proposed FVC estimation algorithm for VIIRS data can potentially generate reliable global FVC data for related applications. Full article
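As an illustration of the Gaussian process regression variant, the sketch below trains a GPR on red/NIR reflectance pairs against an FVC-like target; the kernel choice and synthetic samples are assumptions, not the trained GLASS/VIIRS model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training samples: red and NIR surface reflectance as inputs and an FVC target
# (all values here are synthetic stand-ins for GLASS FVC / VIIRS reflectance pairs).
rng = np.random.default_rng(0)
X = rng.random((500, 2))                                                  # columns: red, NIR
fvc = np.clip((X[:, 1] - X[:, 0]) / (X[:, 1] + X[:, 0] + 1e-6), 0, 1)     # loosely NDVI-like target

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, fvc)
pred, std = gpr.predict(rng.random((10, 2)), return_std=True)             # mean and uncertainty
```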
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Open Access Article: Vegetation Characterization through the Use of Precipitation-Affected SAR Signals
Remote Sens. 2018, 10(10), 1647; https://doi.org/10.3390/rs10101647
Received: 31 August 2018 / Revised: 1 October 2018 / Accepted: 13 October 2018 / Published: 16 October 2018
PDF Full-text (2172 KB) | HTML Full-text | XML Full-text
Abstract
Current space-based SAR offers unique opportunities to classify vegetation types and to monitor vegetation growth due to its frequent acquisitions and its sensitivity to vegetation geometry. However, SAR signals also experience frequent temporal fluctuations caused by precipitation events, complicating the mapping and monitoring of vegetation. In this paper, we show that the influence of a priori known precipitation events on the signals can be used advantageously for the classification of vegetation conditions. For this, we exploit the change in Sentinel-1 backscatter response between consecutive acquisitions under varying wetness conditions, which we show is dependent on the state of vegetation. The performance further improves when a priori information on the soil type is taken into account. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Open Access Article: Sparsity-Based Spatiotemporal Fusion via Adaptive Multi-Band Constraints
Remote Sens. 2018, 10(10), 1646; https://doi.org/10.3390/rs10101646
Received: 31 August 2018 / Revised: 3 October 2018 / Accepted: 6 October 2018 / Published: 16 October 2018
PDF Full-text (6452 KB) | HTML Full-text | XML Full-text
Abstract
Remote sensing is an important means of monitoring the dynamics of the Earth's surface. Because of technological limitations, it is still challenging for single-sensor systems to provide images with both high spatial resolution and high revisit frequency. Spatiotemporal fusion is an effective approach to obtaining remote sensing images that are high in both spatial and temporal resolution. Though dictionary-learning fusion methods appear promising for spatiotemporal fusion, they do not consider the structural similarity between spectral bands in the fusion task. To capitalize on this feature, a novel fusion model, named the adaptive multi-band constraints fusion model (AMCFM), is formulated in this paper to produce better fusion images. This model considers the structural similarity between spectral bands and uses edge information to improve the fusion results by adopting adaptive multi-band constraints. Moreover, to address the shortcomings of the ℓ1 norm, which only considers the sparsity structure of dictionaries, our model uses the nuclear norm, which balances sparsity and correlation by producing an appropriate coefficient in the reconstruction step. We perform experiments on real-life images to substantiate our conceptual arguments. In the empirical study, the near-infrared (NIR), red, and green bands of Landsat Enhanced Thematic Mapper Plus (ETM+) and Moderate Resolution Imaging Spectroradiometer (MODIS) imagery are fused, and the prediction accuracy is assessed by both metrics and visual inspection. The experiments show that our proposed method performs better than state-of-the-art methods, and they also shed light on future research. Full article
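For context, the contrast drawn above between sparsity-only and nuclear-norm regularization can be written in generic notation (dictionary D, coefficients X, observations Y, regularization weight λ) as follows; this is a schematic form of a sparse-coding objective, not the exact AMCFM formulation.

```latex
% Sparse-coding reconstruction with an l1 penalty (sparsity only):
\min_{X} \; \tfrac{1}{2}\,\lVert Y - DX \rVert_F^2 + \lambda \lVert X \rVert_1
% Nuclear-norm variant, balancing sparsity and inter-band correlation:
\min_{X} \; \tfrac{1}{2}\,\lVert Y - DX \rVert_F^2 + \lambda \lVert X \rVert_*
```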
(This article belongs to the Section Remote Sensing Image Processing)

Open Access Article: Potential of Sentinel-2A Data to Model Surface and Canopy Fuel Characteristics in Relation to Crown Fire Hazard
Remote Sens. 2018, 10(10), 1645; https://doi.org/10.3390/rs10101645
Received: 12 September 2018 / Revised: 12 October 2018 / Accepted: 14 October 2018 / Published: 16 October 2018
PDF Full-text (1417 KB) | HTML Full-text | XML Full-text
Abstract
Background: Crown fires are often intense and fast spreading and hence can have serious impacts on soil, vegetation, and wildlife habitats. Fire managers try to prevent the initiation and spread of crown fires in forested landscapes through fuel management. The minimum fuel conditions necessary to initiate and propagate crown fires are known to be strongly influenced by four stand structural variables: surface fuel load (SFL), fuel strata gap (FSG), canopy base height (CBH), and canopy bulk density (CBD). However, there is often a lack of quantitative data about these variables, especially at the landscape scale. Methods: In this study, data from 123 sample plots established in pure, even-aged, Pinus radiata and Pinus pinaster stands in northwest Spain were analyzed. In each plot, an intensive field inventory was used to characterize surface and canopy fuels load and structure, and to estimate SFL, FSG, CBH, and CBD. Equations relating these variables to Sentinel-2A (S-2A) bands and vegetation indices were obtained using two non-parametric techniques: Random Forest (RF) and Multivariate Adaptive Regression Splines (MARS). Results: According to the goodness-of-fit statistics, RF models provided the most accurate estimates, explaining more than 12%, 37%, 47%, and 31% of the observed variability in SFL, FSG, CBH, and CBD, respectively. To evaluate the performance of the four equations considered, the observed and estimated values of the four fuel variables were used separately to predict the potential type of wildfire (surface fire, passive crown fire, or active crown fire) for each plot, considering three different burning conditions (low, moderate, and extreme). The results of the confusion matrix indicated that 79.8% of the surface fires and 93.1% of the active crown fires were correctly classified; meanwhile, the highest rate of misclassification was observed for passive crown fire, with 75.6% of the samples correctly classified. Conclusions: The results highlight that the combination of medium resolution imagery and machine learning techniques may add valuable information about surface and canopy fuel variables at large scales, whereby crown fire potential and the potential type of wildfire can be classified. Full article
(This article belongs to the Special Issue Remote Sensing of Wildfire)

Open Access Article: Robust Correlation Tracking for UAV Videos via Feature Fusion and Saliency Proposals
Remote Sens. 2018, 10(10), 1644; https://doi.org/10.3390/rs10101644
Received: 10 September 2018 / Revised: 6 October 2018 / Accepted: 12 October 2018 / Published: 16 October 2018
PDF Full-text (8516 KB) | HTML Full-text | XML Full-text
Abstract
Following the growing availability of low-cost, commercially available unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on object tracking using videos recorded from UAVs. However, tracking from UAV videos poses many challenges due to platform motion, including background clutter, occlusion, and illumination variation. This paper tackles these challenges by proposing a correlation filter-based tracker with feature fusion and saliency proposals. First, we integrate multiple feature types such as dimensionality-reduced color name (CN) and histograms of oriented gradient (HOG) features to improve the performance of correlation filters for UAV videos. Yet, a fused feature acting as a multivector descriptor cannot be directly used in prior correlation filters. Therefore, a fused feature correlation filter is proposed that can directly convolve with a multivector descriptor, in order to obtain a single-channel response that indicates the location of an object. Furthermore, we introduce saliency proposals as re-detector to reduce background interference caused by occlusion or any distracter. Finally, an adaptive template-update strategy according to saliency information is utilized to alleviate possible model drifts. Systematic comparative evaluations performed on two popular UAV datasets show the effectiveness of the proposed approach. Full article
(This article belongs to the Section Remote Sensing Image Processing)

Open Access Article: Multi-Spectral Water Index (MuWI): A Native 10-m Multi-Spectral Water Index for Accurate Water Mapping on Sentinel-2
Remote Sens. 2018, 10(10), 1643; https://doi.org/10.3390/rs10101643
Received: 21 August 2018 / Revised: 25 September 2018 / Accepted: 12 October 2018 / Published: 16 October 2018
PDF Full-text (12603 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Accurate water mapping depends largely on the water index. However, most of the widely adopted water index methods were developed from 30-m resolution Landsat imagery, with low-albedo commission error (e.g., shadow misclassified as water) and threshold instability identified as the primary issues. Besides, since the shortwave-infrared (SWIR) spectral band (band 11) on Sentinel-2 has 20 m spatial resolution, current SWIR-based water index methods usually produce water maps at 20 m resolution instead of the highest 10 m resolution of the Sentinel-2 bands, which limits the ability of Sentinel-2 to detect surface water at finer scales. This study aims to develop a water index for Sentinel-2 that improves the native resolution and the accuracy of water mapping at the same time. A Support Vector Machine (SVM) is used to exploit the 10-m spectral bands among the Sentinel-2 bands of three resolutions (10 m, 20 m, and 60 m). The new Multi-Spectral Water Index (MuWI), consisting of a complete version and a revised version (MuWI-C and MuWI-R), is designed as a combination of normalized differences for threshold stability. The proposed method is assessed on coincident Sentinel-2 and sub-meter images covering a variety of water types. When compared to previous water indexes, the results show that both versions of MuWI produce native 10-m resolution water maps with higher classification accuracies (p-value < 0.01). Commission and omission errors are also significantly reduced, particularly with respect to shadow and sunglint. MuWI obtains consistent accuracy over complex water mapping scenarios owing to its high threshold stability. Overall, the proposed MuWI method is applicable to accurate water mapping with improved spatial resolution and accuracy, which can facilitate water mapping and related studies and applications on the growing archive of Sentinel-2 imagery. Full article
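The "combination of normalized differences" design can be sketched generically as below; the band pairings, weights, and threshold are placeholders and are not the published MuWI-C/MuWI-R coefficients.

```python
import numpy as np

def nd(a, b):
    """Normalized difference (a - b) / (a + b)."""
    return (a - b) / (a + b)

def multi_nd_index(b2, b3, b4, b8, weights):
    """Weighted combination of normalized differences of the 10-m Sentinel-2 bands.
    The band pairings and weights here are placeholders, not the published MuWI coefficients."""
    terms = [nd(b3, b8), nd(b3, b4), nd(b2, b8)]
    return sum(w * t for w, t in zip(weights, terms))

# Synthetic 10-m reflectance tiles for blue, green, red, and NIR.
b2, b3, b4, b8 = (np.random.rand(256, 256) for _ in range(4))
index = multi_nd_index(b2, b3, b4, b8, weights=(1.0, 1.0, 1.0))
water_mask = index > 0.0          # a fixed threshold; threshold stability is the stated design goal
```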
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Open Access Article: Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium
Remote Sens. 2018, 10(10), 1642; https://doi.org/10.3390/rs10101642
Received: 28 September 2018 / Revised: 12 October 2018 / Accepted: 12 October 2018 / Published: 16 October 2018
PDF Full-text (8017 KB) | HTML Full-text | XML Full-text
Abstract
A timely inventory of agricultural areas and crop types is an essential requirement for ensuring global food security and allowing early crop monitoring practices. Satellite remote sensing has proven to be an increasingly more reliable tool to identify crop types. With the Copernicus program and its Sentinel satellites, a growing source of satellite remote sensing data is publicly available at no charge. Here, we used joint Sentinel-1 radar and Sentinel-2 optical imagery to create a crop map for Belgium. To ensure homogenous radar and optical inputs across the country, Sentinel-1 12-day backscatter mosaics were created after incidence angle normalization, and Sentinel-2 normalized difference vegetation index (NDVI) images were smoothed to yield 10-daily cloud-free mosaics. An optimized random forest classifier predicted the eight crop types with a maximum accuracy of 82% and a kappa coefficient of 0.77. We found that a combination of radar and optical imagery always outperformed a classification based on single-sensor inputs, and that classification performance increased throughout the season until July, when differences between crop types were largest. Furthermore, we showed that the concept of classification confidence derived from the random forest classifier provided insight into the reliability of the predicted class for each pixel, clearly showing that parcel borders have a lower classification confidence. We concluded that the synergistic use of radar and optical data for crop classification led to richer information increasing classification accuracies compared to optical-only classification. Further work should focus on object-level classification and crop monitoring to exploit the rich potential of combined radar and optical observations. Full article
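A minimal sketch of the classification and confidence step, with synthetic stand-ins for the Sentinel-1 backscatter mosaics and Sentinel-2 NDVI composites, might look like this; the feature dimensions and hyperparameters are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-pixel feature vector: Sentinel-1 backscatter mosaics (two polarizations) concatenated with
# smoothed Sentinel-2 NDVI composites. All values are synthetic stand-ins.
rng = np.random.default_rng(0)
s1_features = rng.random((2000, 2 * 15))     # e.g., 15 periods x 2 polarizations
s2_ndvi = rng.random((2000, 20))             # e.g., 20 cloud-free NDVI mosaics
X = np.hstack([s1_features, s2_ndvi])
y = rng.integers(0, 8, 2000)                 # eight crop types

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# "Classification confidence" as the fraction of trees voting for the winning class.
proba = rf.predict_proba(X)
confidence = proba.max(axis=1)
```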
(This article belongs to the Special Issue High Resolution Image Time Series for Novel Agricultural Applications)

Open Access Article: Evaluating RADARSAT-2 for the Monitoring of Lake Ice Phenology Events in Mid-Latitudes
Remote Sens. 2018, 10(10), 1641; https://doi.org/10.3390/rs10101641
Received: 30 August 2018 / Revised: 25 September 2018 / Accepted: 12 October 2018 / Published: 16 October 2018
PDF Full-text (15593 KB) | HTML Full-text | XML Full-text
Abstract
Lake ice is an important component in understanding the local climate as changes in temperature have an impact on the timing of key ice phenology events. In recent years, there has been a decline in the in-situ monitoring of lake ice events in Canada and microwave remote sensing imagery from synthetic aperture radar (SAR) is more widely used due to the high spatial resolution and response of backscatter to the freezing and melting of the ice surface. RADARSAT-2 imagery was used to develop a threshold-based method for determining lake ice events for mid-latitude lakes in Central Ontario from 2008 to 2017. Estimated lake ice phenology events are validated with ground-based observations and are compared against the Moderate Resolution Imaging Spectroradiometer (MODIS band 2). The threshold-based method was found to accurately identify 12 out of 17 freeze events and 13 out of 17 melt events from 2015–2017 when compared to ground-based observations. Mean absolute errors for freeze events ranged from 2.5 to 10.0 days when compared to MODIS imagery while the mean absolute error for water clear of ice (WCI) ranged from 1.5 to 7.1 days. The method is important for the study of mid-latitude lake ice due to its unique success in detecting multiple freeze and melting events throughout the ice season. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Open Access Article: The AMSU-Based Hydrological Bundle Climate Data Record—Description and Comparison with Other Data Sets
Remote Sens. 2018, 10(10), 1640; https://doi.org/10.3390/rs10101640
Received: 30 July 2018 / Revised: 12 September 2018 / Accepted: 11 October 2018 / Published: 16 October 2018
PDF Full-text (4678 KB) | HTML Full-text | XML Full-text
Abstract
Passive microwave measurements have been available on satellites back to the 1970s, first flown on research satellites developed by the National Aeronautics and Space Administration (NASA). Since then, several other sensors have been flown to retrieve hydrological products for both operational weather applications (e.g., the Special Sensor Microwave/Imager—SSM/I; the Advanced Microwave Sounding Unit—AMSU) and climate applications (e.g., the Advanced Microwave Scanning Radiometer—AMSR; the Tropical Rainfall Measurement Mission Microwave Imager—TMI; the Global Precipitation Mission Microwave Imager—GMI). Here, the focus is on measurements from the AMSU-A, AMSU-B, and Microwave Humidity Sounder (MHS). These sensors have been in operation since 1998, with the launch of NOAA-15, and are also on board NOAA-16, -17, -18, -19, and the MetOp-A and -B satellites. A data set called the “Hydrological Bundle” is a climate data record (CDR) that utilizes brightness temperatures from fundamental CDRs (FCDRs) to generate thematic CDRs (TCDRs). The TCDRs include total precipitable water (TPW), cloud liquid water (CLW), sea-ice concentration (SIC), land surface temperature (LST), land surface emissivity (LSE) for 23, 31, 50 GHz, rain rate (RR), snow cover (SC), ice water path (IWP), and snow water equivalent (SWE). The TCDRs are shown to be in general good agreement with similar products from other sources, such as the Global Precipitation Climatology Project (GPCP) and the Modern-Era Retrospective Analysis for Research and Applications (MERRA-2). Due to the careful intercalibration of the FCDRs, little bias is found among the different TCDRs produced from individual NOAA and MetOp satellites, except for normal diurnal cycle differences. Full article
(This article belongs to the Special Issue Remote Sensing of Essential Climate Variables and Their Applications)

Open Access Article: Hyperspectral Classification via Superpixel Kernel Learning-Based Low Rank Representation
Remote Sens. 2018, 10(10), 1639; https://doi.org/10.3390/rs10101639
Received: 9 September 2018 / Revised: 2 October 2018 / Accepted: 2 October 2018 / Published: 16 October 2018
PDF Full-text (7101 KB) | HTML Full-text | XML Full-text
Abstract
High dimensional image classification is a fundamental technique for information retrieval from hyperspectral remote sensing data. However, data quality is readily affected by the atmosphere and noise in the imaging process, which makes it difficult to achieve good classification performance. In this paper, multiple kernel learning-based low rank representation at superpixel level (Sp_MKL_LRR) is proposed to improve the classification accuracy for hyperspectral images. Superpixels are generated first from the hyperspectral image to reduce noise effect and form homogeneous regions. An optimal superpixel kernel parameter is then selected by the kernel matrix using a multiple kernel learning framework. Finally, a kernel low rank representation is applied to classify the hyperspectral image. The proposed method offers two advantages. (1) The global correlation constraint is exploited by the low rank representation, while the local neighborhood information is extracted as the superpixel kernel adaptively learns the high-dimensional manifold features of the samples in each class; (2) It can meet the challenges of multiscale feature learning and adaptive parameter determination in the conventional kernel methods. Experimental results on several hyperspectral image datasets demonstrate that the proposed method outperforms several state-of-the-art classifiers tested in terms of overall accuracy, average accuracy, and kappa statistic. Full article
(This article belongs to the Special Issue Superpixel based Analysis and Classification of Remote Sensing Images)

Open Access Article: Ground Reflectance Retrieval on Horizontal and Inclined Terrains Using the Software Package REFLECT
Remote Sens. 2018, 10(10), 1638; https://doi.org/10.3390/rs10101638
Received: 31 August 2018 / Revised: 7 October 2018 / Accepted: 12 October 2018 / Published: 15 October 2018
PDF Full-text (8356 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents the software package REFLECT for the retrieval of ground reflectance from high and very-high resolution multispectral satellite images. The computation of atmospheric parameters is based on the 6S (Second Simulation of the Satellite Signal in the Solar Spectrum) routines. Aerosol optical properties are calculated using the OPAC (Optical Properties of Aerosols and Clouds) model, while aerosol optical depth is estimated using the dark target method. A new approach is proposed for adjacency effect correction. Topographic effects were also taken into account, and a new model was developed for forest canopies. Validation has shown that ground reflectance estimation with REFLECT is performed with an accuracy of approximately ±0.01 in reflectance units (for the visible, near-infrared, and mid-infrared spectral bands), even for surfaces with varying topography. The validation of the software was performed through many tests. These tests involve the correction of the effects that are associated with sensor calibration, irradiance, and viewing conditions, atmospheric conditions (aerosol optical depth AOD and water vapour), adjacency, and topographic conditions. Full article
(This article belongs to the Special Issue Data Restoration and Denoising of Remote Sensing Data)

Open Access Article: Vegetation Optical Depth and Soil Moisture Retrieved from L-Band Radiometry over the Growth Cycle of a Winter Wheat
Remote Sens. 2018, 10(10), 1637; https://doi.org/10.3390/rs10101637
Received: 31 August 2018 / Revised: 1 October 2018 / Accepted: 13 October 2018 / Published: 15 October 2018
PDF Full-text (5690 KB) | HTML Full-text | XML Full-text
Abstract
L-band radiometer measurements were performed at the Selhausen remote sensing field laboratory (Germany) over the entire growing season of a winter wheat stand. L-band microwave observations were collected over two different footprints within a homogenous winter wheat stand in order to disentangle the emissions originating from the soil and from the vegetation. Based on brightness temperature (TB) measurements performed over an area consisting of a soil surface covered by a reflector (i.e., to block the radiation from the soil surface), vegetation optical depth (τ) information was retrieved using the tau-omega (τ-ω) radiative transfer model. The retrieved τ appeared to be clearly polarization dependent, with lower values for horizontal (H) and higher values for vertical (V) polarization. Additionally, a strong dependency of τ on incidence angle for the V polarization was observed. Furthermore, τ indicated a bell-shaped temporal evolution, with lowest values during the tillering and senescence stages, and highest values during flowering of the wheat plants. The latter corresponded to the highest amounts of vegetation water content (VWC) and largest leaf area index (LAI). To show that the time, polarization, and angle dependence is also highly dependent on the observed vegetation species, white mustard was grown during a short experiment, and radiometer measurements were performed using the same experimental setup. These results showed that the mustard canopy is more isotropic compared to the wheat vegetation (i.e., the τ parameter is less dependent on incidence angle and polarization). In a next step, the relationship between τ and in situ measured vegetation properties (VWC, LAI, total of aboveground vegetation biomass, and vegetation height) was investigated, showing a strong correlation between τ over the entire growing season and the VWC as well as between τ and LAI. Finally, the soil moisture was retrieved from TB observations over a second plot without a reflector on the ground. The retrievals were significantly improved compared to in situ measurements by using the time, polarization, and angle dependent τ as a priori information. This improvement can be explained by the better representation of the vegetation layer effect on the measured TB. Full article
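For reference, the zeroth-order τ-ω radiative transfer model mentioned above is commonly written, for polarization p and incidence angle θ, as:

```latex
% Zeroth-order tau-omega model for the brightness temperature at polarization p:
T_{B,p} \;=\; \underbrace{e_{p}\,T_{s}\,\gamma_{p}}_{\text{soil emission, attenuated}}
\;+\; \underbrace{(1-\omega)\,(1-\gamma_{p})\,T_{c}}_{\text{canopy emission}}
\;+\; \underbrace{(1-\omega)\,(1-\gamma_{p})\,\gamma_{p}\,(1-e_{p})\,T_{c}}_{\text{canopy emission reflected by the soil}},
\qquad \gamma_{p} \;=\; \exp\!\left(-\tau_{p}/\cos\theta\right).
```

With a reflector suppressing the soil emission (soil emissivity e_p close to zero), the soil term drops out and the measured brightness temperature is governed by the canopy terms alone, which is, roughly speaking, what makes τ_p retrievable from observations over the reflector footprint; the exact retrieval setup follows the paper.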
(This article belongs to the Special Issue Soil Moisture Remote Sensing Across Scales)

Open Access Article: Multi-Resolution Feature Fusion for Image Classification of Building Damages with Convolutional Neural Networks
Remote Sens. 2018, 10(10), 1636; https://doi.org/10.3390/rs10101636
Received: 27 July 2018 / Revised: 28 September 2018 / Accepted: 9 October 2018 / Published: 14 October 2018
PDF Full-text (4879 KB) | HTML Full-text | XML Full-text
Abstract
Remote sensing images have long been preferred for building damage assessments. Recently proposed methods for extracting damaged regions from remote sensing imagery rely on convolutional neural networks (CNN). The common approach is to train a CNN independently for each of the different resolution levels (satellite, aerial, and terrestrial) in a binary classification setting. An ever-growing amount of multi-resolution imagery is being collected, yet current approaches use a single resolution as their input. The use of up/down-sampled images for training has been reported as beneficial for image classification accuracy in both the computer vision and remote sensing domains. However, it is still unclear whether such multi-resolution information can also be captured from images with different spatial resolutions, such as imagery at satellite and airborne (from both manned and unmanned platforms) resolutions. In this paper, three multi-resolution CNN feature fusion approaches are proposed and tested against two baseline (mono-resolution) methods for the image classification of building damages. Overall, the results show better accuracy and localization capabilities when fusing multi-resolution feature maps, specifically when these feature maps are merged and consider feature information from the intermediate layers of each of the resolution-level networks. Nonetheless, these multi-resolution feature fusion approaches behaved differently at each resolution level. In the satellite and aerial (unmanned) cases, the improvements in accuracy reached 2%, while the accuracy improvement for the airborne (manned) case was marginal. The results were further confirmed by testing the approach for geographical transferability, in which the improvements between the baseline and multi-resolution experiments were overall maintained. Full article
Open AccessArticle Long-Term Surface Water Dynamics Analysis Based on Landsat Imagery and the Google Earth Engine Platform: A Case Study in the Middle Yangtze River Basin
Remote Sens. 2018, 10(10), 1635; https://doi.org/10.3390/rs10101635
Received: 31 July 2018 / Revised: 28 September 2018 / Accepted: 13 October 2018 / Published: 14 October 2018
PDF Full-text (9598 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The dynamics of surface water are of great significance for understanding the impacts of global changes and human activities on water resources. Remote sensing provides many advantages in monitoring surface water; however, at large scales the efficiency of traditional remote sensing methods is extremely low because these methods consume large amounts of manpower, storage, and computing resources. In this paper, we propose a new method for quickly determining the annual maximal and minimal surface water extents. The maximal and minimal water extents in 1990, 2000, 2010, and 2017 in the Middle Yangtze River Basin in China were calculated on the Google Earth Engine platform. This approach takes full advantage of the data and computing resources of the Google Earth Engine cloud platform, processing 2343 scenes of Landsat images. Firstly, based on the estimated cloud cover of each pixel, highly cloud-covered pixels were removed to eliminate cloud interference and improve computational efficiency. Secondly, the annual greenest and wettest images were mosaicked based on a vegetation index and a surface water index, and the minimal and maximal surface water extents were then obtained by Random Forest classification. Results showed that (1) the yearly minimal surface water extents were 14,751.23 km2, 14,403.48 km2, 13,601.48 km2, and 15,697.42 km2 in 1990, 2000, 2010, and 2017, respectively; (2) the yearly maximal surface water extents were 18,174.76 km2, 20,671.83 km2, 19,097.73 km2, and 18,235.95 km2 in 1990, 2000, 2010, and 2017, respectively; and (3) the accuracies of the surface water classification ranged from 86% to 93%. Additionally, the causes of these changes were analyzed. The accuracy evaluation and comparison with other research results show that this method is reliable, novel, and fast in calculating the maximal and minimal surface water extents. In addition, the proposed method can easily be implemented in other regions worldwide. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
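The greenest/wettest compositing step can be illustrated with a small NumPy sketch that, for each pixel, keeps the observation with the highest quality score (e.g., NDVI); this is a generic stand-in, not the authors' Google Earth Engine implementation, and the band ordering in the toy data is assumed.

```python
import numpy as np

def quality_mosaic(stack, score):
    """Per-pixel compositing: for each pixel, keep the observation whose
    quality score (e.g., NDVI for the greenest or a water index for the
    wettest mosaic) is highest across the time stack.

    stack : (time, band, row, col) reflectance array
    score : (time, row, col) per-observation quality score
    """
    best = np.nanargmax(score, axis=0)              # (row, col) index of the best date
    rows, cols = np.indices(best.shape)
    bands = [stack[best, b, rows, cols] for b in range(stack.shape[1])]
    return np.stack(bands, axis=0)                  # (band, row, col)

# Toy example: 5 dates, 4 bands (blue, green, red, NIR assumed), 10 x 10 pixels.
rng = np.random.default_rng(0)
stack = rng.random((5, 4, 10, 10))
ndvi = (stack[:, 3] - stack[:, 2]) / (stack[:, 3] + stack[:, 2])
greenest = quality_mosaic(stack, ndvi)
print(greenest.shape)  # (4, 10, 10)
```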
Open AccessArticle Scattering and Radiative Properties of Morphologically Complex Carbonaceous Aerosols: A Systematic Modeling Study
Remote Sens. 2018, 10(10), 1634; https://doi.org/10.3390/rs10101634
Received: 8 September 2018 / Revised: 4 October 2018 / Accepted: 6 October 2018 / Published: 14 October 2018
PDF Full-text (8331 KB) | HTML Full-text | XML Full-text
Abstract
This paper provides a thorough modeling-based overview of the scattering and radiative properties of a wide variety of morphologically complex carbonaceous aerosols. Using the numerically-exact superposition T-matrix method, we examine the absorption enhancement, absorption Ångström exponent (AAE), backscattering linear depolarization ratio (LDR), and scattering matrix elements of black-carbon aerosols with 11 different model morphologies ranging from bare soot to completely embedded soot–sulfate and soot–brown carbon mixtures. Our size-averaged results show that fluffy soot particles absorb more light than compact bare-soot clusters. For the same amount of absorbing material, the absorption cross section of internally mixed soot can be more than twice that of bare soot. Absorption increases as soot accumulates more coating material and can become saturated. The absorption enhancement is affected by particle size, morphology, wavelength, and the amount of coating. We refute the conventional belief that all carbonaceous aerosols have AAEs close to 1.0. Although LDRs caused by bare soot and certain carbonaceous particles are rather weak, LDRs generated by other soot-containing aerosols can reproduce the strong depolarization measured by Burton et al. for aged smoke. We demonstrate that multi-wavelength LDR measurements can be used to identify the presence of morphologically complex carbonaceous particles, although additional observations may be needed for full characterization. Our results show that the optical constants of the host/coating material can significantly influence the scattering and absorption properties of soot-containing aerosols, to the extent of changing the sign of linear polarization. We conclude that for an accurate estimate of black-carbon radiative forcing, one must take into account the complex morphologies of carbonaceous aerosols in remote sensing studies as well as in atmospheric radiation computations. Full article
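For readers who want to check the AAE argument numerically, the absorption Ångström exponent between two wavelengths follows from the usual power-law assumption C_abs ∝ λ^(−AAE); the sketch below uses illustrative numbers only.

```python
import numpy as np

def absorption_angstrom_exponent(c_abs_1, c_abs_2, wl_1, wl_2):
    """AAE from absorption cross sections (or coefficients) at two wavelengths,
    assuming the usual power law C_abs proportional to wavelength**(-AAE)."""
    return -np.log(c_abs_1 / c_abs_2) / np.log(wl_1 / wl_2)

# Illustrative numbers only: absorption at 440 nm and 870 nm.
print(absorption_angstrom_exponent(2.0e-12, 1.0e-12, 440e-9, 870e-9))  # ~1.02
```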
Open AccessArticle Measurement Characteristics of Near-Surface Currents from Ultra-Thin Drifters, Drogued Drifters, and HF Radar
Remote Sens. 2018, 10(10), 1633; https://doi.org/10.3390/rs10101633
Received: 8 September 2018 / Revised: 2 October 2018 / Accepted: 9 October 2018 / Published: 14 October 2018
PDF Full-text (3017 KB) | HTML Full-text | XML Full-text
Abstract
Concurrent measurements by satellite-tracked drifters of different hull and drogue configurations and by coastal high-frequency radar reveal substantial differences in estimates of the near-surface velocity. These measurements are important for understanding and predicting material transport on the ocean surface as well as the vertical structure of the near-surface currents. These near-surface current observations were obtained during a field experiment in the northern Gulf of Mexico intended to test a new ultra-thin drifter design. During the experiment, thirty small cylindrical drifters with a 5 cm hull height, twenty-eight similar drifters with a 10 cm hull height, and fourteen drifters with 91 cm tall drogues centered at 100 cm depth were deployed within the footprint of coastal High-Frequency (HF) radar. Comparison of collocated velocity measurements reveals systematic differences in surface velocity estimates obtained from the different measurement techniques and provides information on the drifter behavior and the near-surface shear. Results show that the HF radar velocity estimates had magnitudes significantly lower than the 5 cm and 10 cm drifter velocities, by approximately 45% and 35%, respectively. The HF radar velocity magnitudes were similar to the drogued drifter velocities. Analysis of wave directional spectra measurements reveals that surface Stokes drift accounts for much of the velocity difference between the drogued drifters and the thin surface drifters, except during times of wave breaking. Full article
(This article belongs to the Special Issue Ocean Surface Currents: Progress in Remote Sensing and Validation)
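The surface Stokes drift referred to above can be estimated from a 1-D wave frequency spectrum using the standard deep-water relation u_s = (16π³/g) ∫ f³ E(f) df; the following sketch uses a toy spectrum and is not derived from the experiment's wave measurements.

```python
import numpy as np

def surface_stokes_drift(freq, efth, g=9.81):
    """Deep-water surface Stokes drift from a 1-D frequency spectrum E(f)
    in m^2/Hz: u_s = (16*pi^3 / g) * integral of f^3 * E(f) df."""
    df = np.gradient(freq)                      # handles non-uniform frequency spacing
    return (16.0 * np.pi**3 / g) * np.sum(freq**3 * efth * df)

# Toy single-peaked spectrum (illustrative only, not the experiment's data).
f = np.linspace(0.05, 0.5, 200)                  # Hz
e = 0.5 * np.exp(-0.5 * ((f - 0.12) / 0.03)**2)  # m^2/Hz
print(f"surface Stokes drift ~ {surface_stokes_drift(f, e):.4f} m/s")
```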
Open AccessArticle Influence of Leaf Specular Reflection on Canopy Radiative Regime Using an Improved Version of the Stochastic Radiative Transfer Model
Remote Sens. 2018, 10(10), 1632; https://doi.org/10.3390/rs10101632
Received: 9 July 2018 / Revised: 27 September 2018 / Accepted: 11 October 2018 / Published: 14 October 2018
PDF Full-text (2050 KB) | HTML Full-text | XML Full-text
Abstract
Interpreting remotely-sensed data requires realistic, but simple, models of the radiative transfer that occurs within a vegetation canopy. In this paper, an improved version of the stochastic radiative transfer model (SRTM) is proposed by assuming that all photons that have not been specularly reflected enter the leaf interior. The contribution of leaf specular reflection is considered by modifying the leaf scattering phase function using Fresnel reflectance. The canopy bidirectional reflectance factor (BRF) estimated from this model is evaluated through comparisons with field-measured maize BRF. The result shows that accounting for leaf specular reflection provides better performance than neglecting it over a wide range of view zenith angles. The improved version of the SRTM is further adopted to investigate the influence of leaf specular reflection on the canopy radiative regime, with emphasis on vertical profiles of mean radiation flux density, canopy absorptance, BRF, and the normalized difference vegetation index (NDVI). It is demonstrated that accounting for leaf specular reflection increases leaf albedo, which consequently increases the canopy mean upward/downward radiation flux densities and the canopy nadir BRF and decreases canopy absorptance and canopy nadir NDVI when leaf angles are spherically distributed. The influence is greater for the downward/upward radiation flux densities and the canopy nadir BRF than for canopy absorptance and NDVI. The results provide knowledge of leaf specular reflection and the canopy radiative regime, and are helpful for forward reflectance simulations and backward inversions. Moreover, polarization measurements are suggested for studies of leaf specular reflection, as leaf specular reflection is closely related to canopy polarization. Full article
(This article belongs to the Special Issue Radiative Transfer Modelling and Applications in Remote Sensing)
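The Fresnel reflectance used to modify the leaf scattering phase function can be sketched as follows for unpolarized light at a smooth dielectric interface; the leaf refractive index value is an assumption for illustration, not a parameter reported in the paper.

```python
import numpy as np

def fresnel_unpolarized(theta_i_deg, n=1.45):
    """Unpolarized Fresnel reflectance at a smooth dielectric interface,
    often used to approximate the specular component of leaf reflection
    (refractive index n ~ 1.4-1.5 for the leaf cuticle is an assumption)."""
    ti = np.radians(theta_i_deg)
    tt = np.arcsin(np.clip(np.sin(ti) / n, -1.0, 1.0))   # Snell's law
    rs = (np.sin(ti - tt) / np.sin(ti + tt)) ** 2        # perpendicular polarization
    rp = (np.tan(ti - tt) / np.tan(ti + tt)) ** 2        # parallel polarization
    return 0.5 * (rs + rp)

for angle in (10, 30, 60):   # incidence angles in degrees (nonzero to avoid 0/0)
    print(angle, round(float(fresnel_unpolarized(angle)), 4))
```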
Open AccessArticle Hyperspectral Image Restoration under Complex Multi-Band Noises
Remote Sens. 2018, 10(10), 1631; https://doi.org/10.3390/rs10101631
Received: 31 August 2018 / Revised: 29 September 2018 / Accepted: 6 October 2018 / Published: 14 October 2018
PDF Full-text (7610 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Hyperspectral images (HSIs) are always corrupted by complicated forms of noise during the acquisition process, such as Gaussian noise, impulse noise, stripes, and deadlines. Specifically, different bands of practical HSIs generally contain noise of evidently distinct type and extent. Current HSI restoration methods give little consideration to such band-wise noise distinctness; this study therefore constructs a new HSI restoration technique aimed at taking such noise characteristics into account more faithfully and comprehensively. In particular, the HSI noise structure is modeled through a two-level hierarchical Dirichlet process (HDP): the noise of each band is described by a Dirichlet process Gaussian mixture model (DP-GMM), whose complexity can be adapted flexibly and automatically. In addition, the DP-GMM of each band is drawn from a higher-level DP-GMM that relates the noise of the different bands. A variational Bayes algorithm is designed to solve this model, and closed-form updating equations for all involved parameters are deduced. The experiments indicate that, in terms of the mean peak signal-to-noise ratio (MPSNR), the proposed method is on average 1 dB higher than existing state-of-the-art methods, and it also performs better in terms of the mean structural similarity index (MSSIM) and the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS). Full article
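A single-band flavor of the noise model can be approximated with scikit-learn's Dirichlet-process Gaussian mixture, as in the sketch below; note that this is a one-level simplification and does not implement the paper's two-level HDP that shares information across bands.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_band_noise(noise_residuals, max_components=10):
    """Fit a Dirichlet-process Gaussian mixture to the noise residuals of one
    band. This is a single-level simplification of the paper's two-level HDP:
    each band is modeled independently, with no sharing across bands."""
    gmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500,
    )
    gmm.fit(noise_residuals.reshape(-1, 1))
    return gmm

# Toy band: mostly small Gaussian noise plus sparse impulse-like outliers.
rng = np.random.default_rng(1)
noise = np.concatenate([rng.normal(0, 0.02, 5000), rng.normal(0, 0.5, 100)])
gmm = fit_band_noise(noise)
print(np.round(gmm.weights_, 3))   # the effective number of components adapts to the data
```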
Open AccessTechnical Note Estimation of LAI in Winter Wheat from Multi-Angular Hyperspectral VNIR Data: Effects of View Angles and Plant Architecture
Remote Sens. 2018, 10(10), 1630; https://doi.org/10.3390/rs10101630
Received: 28 August 2018 / Revised: 9 October 2018 / Accepted: 10 October 2018 / Published: 13 October 2018
PDF Full-text (9245 KB) | HTML Full-text | XML Full-text
Abstract
View angle effects present in crop canopy spectra are critical for the retrieval of the crop canopy leaf area index (LAI). In the past, the angular effects on spectral vegetation indices (VIs) used for estimating LAI, especially in crops with different plant architectures, have not been carefully assessed. In this study, we assessed the effects of the view zenith angle (VZA) on the relationships between spectral VIs and LAI. We measured the multi-angular hyperspectral reflectance and LAI of two cultivars of winter wheat, erectophile (W411) and planophile (W9507), across different growing seasons. The reflectance at each angle was used to calculate a variety of VIs that have already been published in the literature, as well as all possible band combinations of Normalized Difference Spectral Indices (NDSIs). These indices, along with the raw reflectance of representative bands, were evaluated against the measured LAI across view zenith angles for each cultivar of winter wheat. The data analysis was also supported by the PROSAIL (PROSPECT + SAIL) model, which was used to simulate a range of bidirectional reflectance. The study confirmed that the strength of the linear relationships between different spectral VIs and LAI exhibited different angular responses depending on plant type. LAI–VI correlations were generally stronger in the erectophile than in the planophile wheat type, especially at view zenith angles where the background is expected to be more evident for the erectophile wheat type. The band combinations and formulas of the indices also played a role in shaping the angular signatures of the LAI–VI correlations. Overall, off-nadir angles performed better than the nadir angle, and narrow-band indices, especially NDSIs combining a red-edge (700–720 nm) band and a green band, were more useful for LAI estimation than broad-band indices for both types of winter wheat. However, the optimal angles differed considerably between the two plant types and among the various VIs. High significance (R2 > 0.9) could be obtained by selecting appropriate VIs and view angles in both the backward and forward scattering directions. These results from the in situ measurements were also corroborated by the simulation analysis using the PROSAIL model. For the measured datasets, the highest coefficients were obtained by NDSI(536,720) at −35° in the backward scattering direction (R2 = 0.971) and NDSI(571,707) at 55° in the forward scattering direction (R2 = 0.984) for the planophile and erectophile varieties, respectively. This work highlights the influence of view geometry and plant architecture. The identification of the crop plant type is highly recommended before using remote sensing VIs for the large-scale mapping of vegetation biophysical variables. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
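The exhaustive NDSI band-pair search can be sketched in a few lines of NumPy: compute NDSI = (R_i − R_j)/(R_i + R_j) for every band pair and keep the pair whose linear fit to LAI gives the highest R². The data in the example below are synthetic and the search is brute force; it is only an illustration of the index construction, not the authors' processing chain.

```python
import numpy as np

def best_ndsi(refl, wavelengths, lai):
    """Search all band pairs for the NDSI whose linear relationship with LAI
    has the highest R^2 (brute-force sketch).

    refl : (n_samples, n_bands) canopy reflectance
    lai  : (n_samples,) measured leaf area index
    """
    best = (None, -np.inf)
    n_bands = refl.shape[1]
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            ndsi = (refl[:, i] - refl[:, j]) / (refl[:, i] + refl[:, j])
            r = np.corrcoef(ndsi, lai)[0, 1]
            if r**2 > best[1]:
                best = ((wavelengths[i], wavelengths[j]), r**2)
    return best

# Toy data only: 30 samples, 20 bands between 450 and 900 nm.
rng = np.random.default_rng(2)
wl = np.linspace(450, 900, 20)
lai = rng.uniform(0.5, 6.0, 30)
refl = 0.05 + 0.01 * rng.random((30, 20)) + 0.03 * np.outer(lai, wl > 700)
print(best_ndsi(refl, wl, lai))
```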
Open AccessArticle Automated Attitude Determination for Pushbroom Sensors Based on Robust Image Matching
Remote Sens. 2018, 10(10), 1629; https://doi.org/10.3390/rs10101629
Received: 21 August 2018 / Revised: 10 October 2018 / Accepted: 11 October 2018 / Published: 13 October 2018
PDF Full-text (7849 KB) | HTML Full-text | XML Full-text
Abstract
Accurate attitude information for a satellite image sensor is essential for accurate map projection and for reducing the computational cost of post-processing image registration, which enhances image usability, for example for change detection. We propose a robust attitude-determination method for pushbroom sensors onboard spacecraft that matches land features in well-registered base-map images and in observed images, extending the current method that derives satellite attitude from an image taken with a 2-D image sensor. Unlike a 2-D image sensor, a pushbroom sensor observes the ground by changing its position and attitude along the trajectory of the satellite. To address pushbroom-sensor observation, the proposed method traces the temporal variation in the sensor attitude by combining the robust matching technique for a 2-D image sensor with a non-linear least squares approach that expresses the gradual time evolution of the sensor attitude. Experimental results using images taken by a visible and near infrared pushbroom sensor of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) onboard Terra as test images and Landsat-8/OLI images as a base map show that the proposed method can determine satellite attitude with an accuracy of 0.003° (corresponding to the 2-pixel scale of ASTER) in the roll and pitch angles, even for a scene containing many cloud patches, whereas the determination accuracy remains 0.05° for the yaw angle, which affects the accuracy of image registration less than the other two axes. In addition to achieving roll and pitch accuracy better than that of star trackers (0.01°), the proposed method does not require any attitude information from onboard sensors. Therefore, the proposed method may contribute to validating and calibrating attitude sensors in space, and at the same time the improved accuracy will help reduce the computational cost of post-processing for image registration. Full article
(This article belongs to the Section Remote Sensing Image Processing)
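Under a strongly simplified geometry, the non-linear least squares idea can be sketched by fitting low-order polynomials in time to roll- and pitch-like angular offsets derived from matched features; the model below is a hypothetical illustration and omits the yaw angle and the full pushbroom imaging geometry used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_attitude_polynomials(t, droll_obs, dpitch_obs, degree=2):
    """Fit smooth roll(t) and pitch(t) polynomials to angular offsets derived
    from features matched between a pushbroom image and a base map.

    Simplifying assumption (not the paper's full geometry): the across- and
    along-track angular misregistration of a feature observed at line time t
    is attributed directly to roll(t) and pitch(t), respectively.
    """
    def residuals(coeffs):
        roll_c, pitch_c = np.split(coeffs, 2)
        return np.concatenate([
            np.polyval(roll_c, t) - droll_obs,
            np.polyval(pitch_c, t) - dpitch_obs,
        ])

    x0 = np.zeros(2 * (degree + 1))
    sol = least_squares(residuals, x0)     # a robust loss could be added for outliers
    return np.split(sol.x, 2)

# Toy example: slowly varying attitude plus matching noise (degrees).
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
droll = 0.002 + 0.001 * t + rng.normal(0, 3e-4, t.size)
dpitch = -0.001 + 0.002 * t**2 + rng.normal(0, 3e-4, t.size)
roll_c, pitch_c = fit_attitude_polynomials(t, droll, dpitch)
print(np.round(roll_c, 5), np.round(pitch_c, 5))
```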
Open AccessArticle Quantifying the Reflectance Anisotropy Effect on Albedo Retrieval from Remotely Sensed Observations Using Archetypal BRDFs
Remote Sens. 2018, 10(10), 1628; https://doi.org/10.3390/rs10101628
Received: 23 August 2018 / Revised: 26 September 2018 / Accepted: 11 October 2018 / Published: 13 October 2018
PDF Full-text (11288 KB) | HTML Full-text | XML Full-text
Abstract
The reflectance anisotropy effect on albedo retrieval was evaluated using the Moderate Resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) product and archetypal BRDFs. Shortwave-band archetypal BRDFs were established and validated based on the Anisotropy Flat indeX (AFX) and time-series MODIS BRDFs over tile h11v03. To generate surface albedo, the archetypal BRDFs were used to fit simulated reflectance based on the least squares method. Albedo was also retrieved based on the least root-mean-square-error (RMSE) method or on normalized difference vegetation index (NDVI)-based prior BRDF knowledge. The difference between those albedos and the MODIS albedo was used to quantify the reflectance anisotropy effect. The albedo over tile h11v03 for day 185 of 2009 was retrieved from single directional reflectance and the third archetypal BRDF. The results show that six archetypal BRDFs are sufficient to represent the reflectance anisotropy for albedo estimation. For the data used in this study, the relative uncertainty caused by reflectance anisotropy can reach up to 7.4%, 16.2%, and 20.2% for sufficient multi-angular, insufficient multi-angular, and single directional observations, respectively. The intermediate archetypal BRDFs may be used to improve the albedo retrieval accuracy from insufficient or single observations, with a relative uncertainty range of 8–15%. Full article
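A minimal sketch of fitting an archetypal BRDF to directional observations by least squares is given below, assuming the archetype is adjusted by a single per-band scale factor and that albedo scales linearly with that factor; the numbers are illustrative and this is not the AFX-based procedure itself.

```python
import numpy as np

def fit_archetype_scale(obs_refl, archetype_brdf):
    """Least-squares scale of an archetypal BRDF shape to directional
    reflectance observations acquired at the same sun/view geometries.

    obs_refl       : (n_obs,) observed directional reflectances
    archetype_brdf : (n_obs,) archetype BRDF values at those geometries
    Returns the scale a minimizing ||obs - a * archetype||^2.
    """
    return float(np.dot(archetype_brdf, obs_refl) / np.dot(archetype_brdf, archetype_brdf))

def albedo_from_archetype(scale, archetype_albedo):
    """Scaled archetype albedo; valid because albedo is a linear integral of the BRDF."""
    return scale * archetype_albedo

# Toy numbers only: three directional observations and a normalized archetype shape.
obs = np.array([0.21, 0.26, 0.31])
arch = np.array([0.70, 0.85, 1.00])
a = fit_archetype_scale(obs, arch)
print(round(a, 4), round(albedo_from_archetype(a, 0.28), 4))
```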